
Friday, 21 September 2018

Rational thinking and robotics


Written by Louis Sosa & Fernanda Pinto
 
How do we justify our thinking and decision making? We humans are born with consciousness and build our rational thinking throughout our entire lives, as long as we maintain our mental sanity. Our experiences and perceptions of the external world are the container and catalyst of our knowledge, which is linked to ideas as internal units in our brain. We think and process information constantly by memorizing, solving problems, making decisions, creating new ideas and reflecting on our thoughts. Activities like writing or articulated hand movements might help to improve the quality of thinking.
According to Daniel Kahneman, thinking is divided into two intellectual systems:
Intuitive thinking (fast thinking) – Defined as System 1, this is where quick thinking operates with very low effort and no sense of voluntary control. Automatic perceptual activities such as estimating distances to objects, recognizing colors and performing easy tasks are a few examples of fast thinking.
Connecting this aspect to user-centered design, user interfaces should be oriented towards this system, so that they are more intuitive and easier to perceive and use.
Rational thinking (slow thinking) – Defined as System 2, this is where more complex computations are carried out. Here, a higher effort is required for the examination and validation of complex logical arguments. Good examples are trying to walk at a certain pace and maintain it, or doing arithmetic.
When users meet a poorly designed user interface, they are pushed into this overburdening mental process and suffer cognitive overload. Multitasking is not possible in this mode, although it is in intuitive thinking.
Decision making – Believe it or not, our everyday life is based on the decisions we make. It is part of the constitution of our existence and way of living. We make decisions to take ourselves forward in life, for survival, reproduction and knowledge. As horrible as it might sound, human beings also make decisions that can be life-threatening. Decision making is a selection between different options, and in many cases the outcome of our decisions cannot be predicted. Elements that affect our decisions include our emotions and our rational and non-rational biases.
Aspects and factors in decision making
We prefer to reuse familiar decisions connected to our experience, existing knowledge, mental models and schemas. These are recalled in situations that correlate with earlier events, and the correlations are interrupted when a new problem occurs. An example of this interruption is designing a user interface for construction workers: the same user interface design is not applicable to office workers in a bank. The old design cannot simply be applied to the new target group.
The level of knowledge can also make it difficult to reach a decision; the realization that one does not possess the required knowledge can lead to decision paralysis. It is important to remember that earlier stimuli will always affect the interpretation of later stimuli, as well as decisions and behaviors. The first impression matters and immediately primes the impression one will form. This is especially true in social interaction when meeting new people. A good example is a date: first impression is the best impression, right? If your first impression of the other person is bad, would you give it another try? No? Well, I thought so. :)
Moving forward in our subject of rational thinking and robotics, we have come to cognitive dissonance. Humans tend to form consistent and logical entities out of their thoughts, beliefs, opinions, knowledge and actions in order to find balance. In our last lecture we learned that cognitive dissonance is a theory based on three fundamental assumptions. Humans are sensitive to inconsistency between actions and beliefs, and when an inconsistency that causes dissonance is recognized, it motivates individuals to resolve the contradiction through three basic methods: changing one's beliefs, changing one's actions, or changing the perception of one's actions. To put this into perspective: when we make a wrong choice, we may change our belief and our way of thinking regarding that action. The last method is changing the perception of your own action, where you reconsider it from different perspectives, contexts and manners.
Cognitive biases – It is said that humans are extremely good at fooling themselves, and this is where cognitive biases come into the picture. We have inclinations towards particular ways of thinking that can lead to divergences from legitimate and rational judgment.
We remember the choices we make mostly in a positive light, which is connected to our positive state of mind and therefore makes us feel better and see ourselves in a better way. This inclination is called choice-supportive bias. Another cognitive bias is confirmation bias, where we concentrate on information that matches and supports our existing knowledge and beliefs: it confirms our way of thinking.
One example we also want to bring up is social comparison bias. We constantly make attributions about others' behavior and our own, and these comparisons do not always reflect the genuine nature of the person or object perceived. As a result, we tend to be more susceptible to influence and manipulation by attractive and charismatic people. Propaganda and manipulation are strong factors connected to this cognitive bias.
Now let us reflect on these subjects and connect our rational thinking to robotics and artificial intelligence. Our way of thinking is more complex than what we have just explained, but for reasons of space we have brought up what we learned in lecture four.
Robots, for the average Joe, tend to reflect the vision of humanoids in science fiction, which sets the bar incredibly high. There are industrial robots that perform one or more tasks to automate jobs in factories. However, they look like the stereotype of a machine or device and are sometimes kept behind several layers of security, as they can be dangerous to humans and infrastructure. People may also imagine a robot doing the dishes. Nonetheless, we have dishwashers with a specific design and mechanism that perform the task more effectively than a humanoid could; yet we do not see dishwashers as robots, but as commodity machines. There are also automated vacuum cleaners, but we can hardly call them robots by general expectations. Why is there such a big gap between science fiction and real-life robots? One part of it may be that robots do not “think” in the real world as they do in most movies, TV shows and books. For example, most industrial robots work through repetition and predetermined commands, without any real understanding.
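This kind of repetition over predetermined commands can be caricatured in a few lines of code. The sketch below is purely illustrative; the command names and coordinates are invented, not taken from any real robot controller:

```python
# A minimal sketch of how a typical industrial robot "thinks":
# it replays a fixed, predetermined command sequence with no
# understanding of what the commands mean.

PROGRAM = [
    ("move_to", (120, 40)),   # go to pick-up position
    ("grip", ()),             # close the gripper
    ("move_to", (300, 85)),   # go to drop-off position
    ("release", ()),          # open the gripper
]

def run_cycle(program):
    """Execute one cycle of the predetermined program."""
    log = []
    for command, args in program:
        # The robot does not reason about the command; it just executes it.
        log.append(f"{command}{args}")
    return log

# The same cycle repeats identically, shift after shift.
for _ in range(3):
    trace = run_cycle(PROGRAM)
```

No matter what happens around the robot, every cycle produces exactly the same trace, which is the point of the gap the text describes.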
The Turing test offers a way of determining how close the responses of a subject (robot) are to human thinking. Chatbots tend to rank quite high, being very efficient at having social interactions with humans via online messaging. So what stops us from putting a chatbot inside a robot? Well, firstly, it would still lack environmental stimuli and other qualities, such as human-like gaze, blinking, gestures, vocal tone and other emotional expressions. A first step towards this could be to add sensors, cameras and microphones to robots, in order to enable intelligent, environment-based behaviour. A concrete example is Sophia: a social robot that uses AI to see people, understand conversation and form relationships. With these technologies it can joke, simulate facial expressions and give the impression of understanding. Nevertheless, what Sophia actually does is determine when it is the right time to say a prestored response.
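Choosing when to deliver a prestored line can be caricatured as simple keyword matching. The sketch below is a hypothetical toy, not Sophia's actual implementation; the keywords and replies are invented:

```python
# A toy, rule-based chatbot in the spirit of what the post describes:
# it only decides *when* to say a prestored response, with no real
# understanding of the conversation.

PRESTORED = {
    "hello": "Hello! It is nice to meet you.",
    "joke": "Why did the robot go on holiday? It needed to recharge.",
    "feel": "I am a robot, but I am programmed to say I feel great.",
}

def reply(utterance: str) -> str:
    """Return the first prestored response whose keyword appears in the input."""
    text = utterance.lower()
    for keyword, canned in PRESTORED.items():
        if keyword in text:
            return canned
    # Fallback when no rule matches -- a common giveaway of rule-based bots.
    return "That is interesting. Tell me more."
```

A few well-chosen rules can look surprisingly convincing in short exchanges, which is exactly why chatbots score well on Turing-style tests while understanding nothing.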


In the future, to build robots that meet human expectations, we would need to improve hardware capabilities and release more complex algorithms with learning capabilities. Not every behaviour would need to be programmed; behaviours could be learned from the robot's experience, rather than, as now, being driven by rulebooks of all sizes, containing all quantities of data stored in a database or on the Internet.
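The contrast between following a rulebook and learning from experience can be illustrated with a deliberately simplified reward-driven update rule. Real robot learning uses far richer algorithms; the action names and reward function below are invented for illustration:

```python
import random

# A minimal "learning from experience" sketch: the robot starts with no
# rulebook, just an estimated value for each action, and nudges the
# estimate of whichever action it tried towards the reward it received.

actions = {"grip_softly": 0.0, "grip_firmly": 0.0}
LEARNING_RATE = 0.1

def true_reward(action: str) -> float:
    # Hidden environment: firm grips crush the object, soft grips succeed.
    return 1.0 if action == "grip_softly" else 0.0

random.seed(0)  # fixed seed so the run is reproducible
for _ in range(200):
    action = random.choice(list(actions))          # try an action
    reward = true_reward(action)                   # observe the outcome
    # Move the estimate a small step towards the observed reward.
    actions[action] += LEARNING_RATE * (reward - actions[action])

best = max(actions, key=actions.get)               # the learned preference
```

Nothing here was written as a rule such as "always grip softly"; the preference emerges from repeated experience, which is the shift the paragraph above points towards.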

Sources
Määttänen, P. 2015. Mind in action: Experience and embodied cognition in pragmatism. Cham: Springer.
Aino Ahtinen - Lecture 4 (17.9.2018) from the course: Psychology of Pervasive Computing

1 comment:

  1. Tuomas Kaartoluoma – 22 November 2018 at 16:19

    Human thinking is hard to mimic and insert into a robot. I'm glad that not one robot has passed the Turing test yet; it makes me uneasy to think we couldn't separate robots from humans. If you are interested in the subject, I would recommend watching Ex Machina, a sci-fi movie where an artificial intelligence tries to pass the Turing test. I think in the future it is likely that we will develop robots that think like humans, or design an AI that can develop those skills, but I hope it is still far in the future.

    -Tuomas
