Friday, 21 September 2018

Would you trust your life to robots?


Written by Jouko Makkonen and Otto Österman

Sometimes technology fails. There is no news in that. Still, people rely more and more on technology in their everyday lives, sometimes even a bit too much. A good example of this is the use of GPS navigators while driving. People rely on the instructions of navigation systems, even though those sometimes lead to detours, sub-optimal or outright flawed routes, or, in the worst case, to broken traffic rules and even dangerous accidents.1

In cases like the one above, technology is not the only thing that fails. The (bad) decision to blindly follow the navigation system’s instructions instead of looking ahead and reconsidering is made by the driver. Luckily, most drivers make better decisions. A human is also a good backup system for catching errors made by technology.

Let’s assume a similar navigation error happened in a driverless car. Most probably, sensors other than GPS would keep the car from leaving the road or hitting obstacles. But if the sensors or the software fail, autonomous mobile devices are likely to cause accidents. A recent example of this is Uber’s self-driving car, which detected a pedestrian but did not react fast enough.2

So, what is the difference between human and robot decision making? Robot decision making is simple: a robot makes decisions exactly the way it is programmed to. It may also have other technical limitations, such as the sensors it gets its data from. On the positive side, as long as it has power, it will not get tired or let feelings affect its decision making. On the downside, in case of a malfunction, bad decisions can cause problems and danger.
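To make this concrete, here is a minimal sketch of such a programmed decision rule (all names and thresholds are invented for illustration; the post describes no actual implementation). The rule is followed perfectly, yet a faulty sensor still produces a dangerous decision:

```python
# Illustrative toy example: a braking rule for an autonomous vehicle.
# The threshold and function names are invented for this sketch.

SAFE_DISTANCE_M = 10.0  # brake if an obstacle is closer than this

def should_brake(distance_reading_m: float) -> bool:
    """Decide exactly as programmed: brake when the obstacle is close."""
    return distance_reading_m < SAFE_DISTANCE_M

# Normal case: the sensor works, so the rule works.
print(should_brake(4.2))    # True  -> the car brakes in time

# Failure case: a faulty sensor reports 250 m for a pedestrian
# who is actually 3 m away. The rule is still followed perfectly,
# and the decision is still wrong -- the program cannot know better.
print(should_brake(250.0))  # False -> no braking
```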

Human decision making can be weakened by, for example, tiredness or mental state. The biggest difference compared to robots is the effect of feelings. Feelings may sometimes weaken decision making, but they also bring completely different perspectives to it. This is something robots do not have.

Another thing robots lack in decision making compared to humans is ethical thinking. This is also difficult to program, because in ethical issues sometimes even the questions are hard to formulate, not to mention the answers. It is more fitting to call ethical thinking ”thinking” than just ”decision making”, since it also draws on our feelings, worldview, philosophy, previous experiences and so on. Ethical thinking is needed in decision making in many fields. Getting back to the earlier example of self-driving cars, a big open problem is how they would make decisions that require ethical thinking.3
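One way to see why this is so hard: reducing ethics to code forces someone to pick explicit numbers for things we normally never quantify. A deliberately crude sketch (every weight here is invented, and that is exactly the point):

```python
# Deliberately crude sketch: "ethics" reduced to a weighted score.
# The point is not that this works, but that someone has to choose
# these numbers -- and every choice is itself an ethical decision.

HARM_WEIGHTS = {
    "passenger": 1.0,   # who decided a passenger counts as 1.0?
    "pedestrian": 1.0,  # ...and a pedestrian exactly the same?
    "child": 2.0,       # is a child "worth" twice an adult? says who?
}

def expected_harm(outcome):
    """Score an outcome as the weighted count of people harmed."""
    return sum(HARM_WEIGHTS[kind] * count for kind, count in outcome.items())

# Swerve or stay? The "answer" follows mechanically from the weights above.
stay = {"pedestrian": 2}
swerve = {"passenger": 1}
print("swerve" if expected_harm(swerve) < expected_harm(stay) else "stay")
```
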
When it comes to rationality, machines beat humans
According to Aldous Birchall, head of AI and machine learning at PwC, humans still believe they know best when it comes to making decisions. Machines are expected to be 100% accurate all the time, whereas human decision-making accuracy is well below that. In reality, a machine only needs to be more accurate than a human. That is when machines become useful.4

In his TED talk “The key to growth? Race with the machines”, Erik Brynjolfsson, a professor at the MIT Sloan School of Management in Cambridge, Massachusetts, argues that “HiPPOs” (highest paid person’s opinions) usually rely on human traits like intuition and bias. In his opinion, this leads to poor decisions. A machine, on the other hand, is stripped of these traits and makes decisions based solely on cold, hard facts. This can be beneficial in some situations, but it is not that black and white. Who gets to decide what the machine decides?4, 5

There are many examples of machines making better decisions than humans. One milestone was reached in 2017, when Google’s AI ‘AlphaGo’ beat the world’s best player of Go, a strategy board game often compared to chess. Go has been called the world’s most complicated strategy board game, so this is no small step for AI. The machine is able to look so far ahead into future moves that humans simply cannot keep up, even though in an earlier series in 2016, grandmaster Lee Sedol did manage to win one of the five games he played against AlphaGo.6
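AlphaGo’s actual method, Monte Carlo tree search guided by deep neural networks, is far beyond a blog snippet, but the core idea of “looking ahead” can be shown with a few lines of game-tree search on a toy game (a simple Nim variant; everything below is illustrative only, not AlphaGo’s algorithm):

```python
# Minimal game-tree lookahead on a toy Nim variant: players alternately
# take 1-3 sticks, and whoever takes the last stick wins. This only
# illustrates the basic idea of exhaustively "predicting moves into
# the future"; it is not how AlphaGo actually works.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(sticks: int) -> bool:
    """True if the player to move can force a win with perfect lookahead."""
    # Try every legal move; we win if some move leaves the opponent losing.
    return any(not can_win(sticks - take)
               for take in (1, 2, 3) if take <= sticks)

print(can_win(20))  # False: every move from 20 leaves a winning position
print(can_win(21))  # True: take 1 stick, handing the opponent 20
```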

There is no question about it: machines can make better decisions than humans. Not all kinds of decisions, obviously; only when the data fed to the machine is sound and the decision at hand does not involve heavy ethical factors.7 Technology is advancing at such a fast pace that we won’t have much time to learn from our mistakes. So let’s just hope nobody creates Skynet.


[1] Driver follows GPS, drives car into a lake:
[2] Uber’s self-driving car detected a pedestrian, but did not react fast enough:
[3] The ethical dilemmas of self-driving cars: 
[4] Why humans must accept robots make better decisions:
[5] The key to growth? Race with the machines (Erik Brynjolfsson TED talk):
[6] Google’s AlphaGo AI wins three-match series against the world’s best Go player:
[7] When should machines make decisions?:

7 comments:

  1. Interesting post! Trusting AI in important situations like driving cars is not easy. The technology has to improve so that dangerous situations can be avoided. On the other hand, people need to know that in many situations AI can make more rational and better decisions than humans. The ethics are problematic. Maybe we need rules for how the machines are programmed and what kinds of ethical decisions they are allowed to make.

  2. I agree that letting technology make decisions for us can seem scary. Autonomous cars are definitely one example. However, ethical questions like whose life should be prioritized in a dangerous situation need to be answered before the car can even begin to make those decisions.

    There is the recurring example of a car that needs to 'decide' between its occupant and a group of pedestrians. Whichever decision is the more rational one, very few potential buyers would choose a car that might not prioritize their life in a dangerous situation.

    Ethical questions like that need to be considered before autonomous cars can become commonplace.

  3. Very interesting post! I agree that one of the most difficult areas in autonomous vehicles is the ethics and decision making in hard situations. I think the main reason it is so important to people that vehicles do not make mistakes is that a vehicle is supposed to calculate everything, whereas humans can only react on reflex in fast situations. The moral aspect of driving (who is prioritized, etc.) is obviously important. Another problem that comes to my mind is liability. Who is responsible if an autonomous car makes a mistake? The driver who didn't watch the road closely enough, or the manufacturer that released a flawed vehicle?

    Tuomas

  4. Henrik Sillanpää, 1 October 2018 at 14:57

    Really interesting post! Personally, I would not trust my life to a robot, but when you start to think about it, I already do (for example when driving a car and listening to the GPS), even if I don't acknowledge it. I also read somewhere that an AI called Libratus played poker against the best players in the world and still managed to beat them. So I guess that, like in Go, it's all about percentages in that kind of game, while human players base their actions mostly on feelings and state of mind.

  5. It is a difficult question to answer and has been up for debate for a long time now. The premise of I, Robot comes to mind, where the police detective was involved in an accident: as he and a small girl were sinking towards the bottom of a river, a robot jumped in, calculated that the policeman had a better chance of survival than the little girl, and decided to save him and leave the girl. That seems cold, and that is where the policeman's mistrust of robots takes seed. However, it is important to note that this would have been a dilemma for any of us. If we were in that position, what would we do?
    But we are far away from a world of fully automated and articulated robots walking down the streets with us. I am a firm believer in robots taking over the repetitive tasks assigned to humans today, and driving is no exception. While it is hard to produce statistics showing which option is safer, it is also hard to argue against the safety offered by a system where one type of driver drives all the cars. That alone reduces the chance of something unexpected happening and makes it easier to predict and plan a course of action.
    It would have been more helpful if you had provided some examples of the ethical dilemmas autonomous cars could face. However, your blog did get me thinking about my position on the matter, and I realized that a fully autonomous vehicle would be the opposite extreme to a fully manual car, and extremes are seldom beneficial. So, some kind of collaborative system is probably the best way forward.

  6. Hi!

    Interesting post. I think trusting a machine over a human depends on the context and situation. Take navigation, for example. If I am in the car with someone who lives in the area where I am driving, of course I trust the human more than the navigator; but if I am driving alone in an unfamiliar area, trusting the navigator seems a better option than trusting myself. Of course, every driver should also think for themselves and follow the traffic rules, but more in the sense of where they are supposed to go.

    - Joni Eronen

  7. Tuomas Kaartoluoma, 22 November 2018 at 16:08

    First off, to answer your title: no, but then again I don't want to place my life in human hands either. It is true that mental state doesn't affect robots, but my concern is that robots are made by humans, and humans are prone to error. There have been fatalities caused by industrial robots, so who is to blame in those cases? Were there not enough precautions by the designers of the robot, or was it just carelessness from the workers who got too close to it?

    - Tuomas
