Wednesday 19 December 2018

How AI Hub is bringing us closer to robot friends

Written by Miikael Lehtimäki and Ivan Dubrovin

AI Hub Tampere is a new research center hosted by Tampere University. It brings together people from all aspects of AI research and brings that research closer to robotics and other applications. The center's teaching goals are to train people to develop AI, engineers to implement AI and managers to outsource AI. Its primary disciplines are visual imaging, machine learning, and audio and signal processing, and with experts from these areas working together, all three can advance in step.

With new advancements in visual imaging, AI becomes more capable of recognizing what it sees: people, specific pieces of clothing and other equipment, and even emotions read from faces. This helps social robots read situations and the feelings of the people around them. The ability to recognize items, people and faces together can help detect suicide attempts or other self-harm, and it also lets a robot play with toys or devices together with people.
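To make this concrete, here is a minimal sketch of how a social robot might fuse detections from a vision pipeline into a coarse judgement it can act on. The detector outputs, labels, and the Detection and assess_scene names are all invented for illustration, not AI Hub Tampere code.

```python
# Hypothetical fusion of vision-pipeline detections into a situational judgement.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person", "toy_ball", "scissors" (invented labels)
    emotion: str = ""  # filled in only for detected faces, e.g. "distressed"

def assess_scene(detections: list[Detection]) -> str:
    """Turn raw detections into a coarse action the robot can take."""
    people = [d for d in detections if d.label == "person"]
    risky_items = [d for d in detections if d.label in {"scissors", "knife"}]
    toys = [d for d in detections if d.label.startswith("toy_")]

    if people and risky_items and any(p.emotion == "distressed" for p in people):
        return "alert_caregiver"   # possible self-harm situation
    if people and toys:
        return "offer_to_play"
    return "observe"

print(assess_scene([Detection("person", "distressed"), Detection("scissors")]))
```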

With individualized solutions built on new algorithms, including ones that enable learning from example and two-handed manipulation of objects, robots and devices can learn unique ways to accomplish tasks in unique situations, even taking the passage of time into account and knowing what counts as recent. Consider a robot throwing balls with two different children: it can start from a general solution based on each child's height and reach, specialize its throws so that each child can catch them, and go easy on a child who last played a long time ago. It can also learn what improves its owner's mood. Combined with data sharing between platforms, this lets robots try new or different things to solve new problems; a robot that sees you are in a bad mood might try cartwheels, bring you a ball, or sing or play a song or sound.
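The ball-throwing example can be sketched in a few lines. Everything here is illustrative: the ThrowPolicy class, the scaling constants and the feedback rule are assumptions chosen only to show the idea of starting from a general solution and specializing it per child over time.

```python
# A toy sketch of individualized adaptation: start from a throw scaled to the
# child's size, ease off after a long break, and adjust from catch feedback.
import time

class ThrowPolicy:
    def __init__(self, height_cm: float, reach_cm: float):
        # General starting solution derived from the child's size (arbitrary scaling).
        self.speed = 0.5 + height_cm / 400.0
        self.arc = reach_cm / 100.0
        self.last_played = None

    def next_throw(self) -> tuple[float, float]:
        # If the child has not played for roughly a month, start by going easy.
        if self.last_played and time.time() - self.last_played > 30 * 24 * 3600:
            self.speed *= 0.8
        self.last_played = time.time()
        return self.speed, self.arc

    def feedback(self, caught: bool):
        # Specialize: slightly harder throws after catches, easier after misses.
        self.speed *= 1.05 if caught else 0.9

policy = ThrowPolicy(height_cm=120, reach_cm=55)
print(policy.next_throw())
policy.feedback(caught=True)
print(policy.next_throw())
```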

Advancements in audio and signal processing allow for the detection and categorization of individual sounds, from speech to birds and cars, and for isolating those sounds from one another. With these improvements, robots can use simple audio receptors to become far more aware of their surroundings, recognize people more accurately and listen to multiple people at the same time. Social robots used in groups will get less confused by crowds, recognize authorized commands from a specific individual even in loud situations, and better read the context of a situation, such as their owner playing sad or happy music or their tone of voice.
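As a rough illustration of sound categorization, the sketch below labels an audio clip by comparing its normalised magnitude spectrum against stored reference spectra. Real systems use learned models over far richer features; the reference clips, labels and nearest-match rule here are toy assumptions.

```python
# A toy sound-event classifier: nearest reference spectrum wins.
import numpy as np

def spectrum(clip: np.ndarray) -> np.ndarray:
    # Normalised magnitude spectrum of a mono clip.
    mag = np.abs(np.fft.rfft(clip))
    return mag / (np.linalg.norm(mag) + 1e-9)

references = {  # hypothetical reference sounds standing in for real training data
    "speech": spectrum(np.random.randn(16000)),
    "birdsong": spectrum(np.sin(np.linspace(0, 4000 * np.pi, 16000))),
}

def classify(clip: np.ndarray) -> str:
    s = spectrum(clip)
    return max(references, key=lambda label: float(s @ references[label]))

print(classify(np.sin(np.linspace(0, 4000 * np.pi, 16000))))  # -> "birdsong"
```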
Alone, each of these directions can already bring great strides to social robotics, but together they bring us ever closer to the robot friend: an AI companion that learns to know you, knows what cheers you up, suggests things to do, complains about things that annoy it and can tell you when it needs maintenance. Less a creepy android servant, more a companion device that journeys through life with you in this ever-expanding digital world.