Written by Axel Rosenberg
Many say that Artificial Intelligence reaching singularity is just around the corner. "Around the corner" means different things to different people — some say 5 years, some say 100 — but quite possibly within our lifetimes. Artificial Intelligence reaching singularity, or becoming an Artificial Super Intelligence (ASI), is expected to bring extreme changes to society and its structure, and some speculate it would be the next step in evolution, making humans extinct.
I think Lieutenant Commander Data, an android from Star Trek, is an interesting take on Artificial Intelligence, and one that I find quite soothing. One could argue that Data and his intelligence have achieved singularity, but in the series, he struggles with human (and other races') emotions. His greatest ambition is to be human himself, even though he is stronger and faster both cognitively and physically, smarter, has seemingly infinite memory capacity with the ability to recall anything, and is immortal.
Data acknowledges that he cannot feel emotion, but he still tries to imitate what he sees other officers doing, often failing miserably. While many could argue that he is a superb being who has surpassed humanity, his struggle to achieve humanity is interesting to watch.
He does, however, have psychological knowledge and understands what emotions are. He thinks of everything objectively, but still often fails to make sense of relations between emotional beings. His objectivity does, however, provide great advantages, and not having emotions gives him clarity in difficult situations.
In the setting of a military command, an objective android with AI is exactly what the command needs. In some cases, however, an emotionless, objective AI can be disastrous. For our project for this course, we designed a chatbot meant for individuals suffering from mental health issues. Severe cases can involve extreme situations that escalate very quickly to something disastrous, so the AI behind the chatbot must understand emotions and psychological states and react correctly depending on the situation. It also must know when its skills are not enough and a trained professional is needed (at least for now, since no chatbot has ASI capabilities).
In any case, the designer has to have basic knowledge of psychology. Preferably there is a trained psychologist on the team that is designing an advanced AI, but context is key: some AI solutions require psychological skills while some don't. I am, however, sure that humanity will some day create an AI that evolves itself in such a way that human development of the AI is no longer needed.