Rise of the mimic-bots that act like we do: Human-machine teamwork

Robot Intelligence Technology Lab. at KAIST

New Scientist, a UK-based science magazine, published an article on January 8, 2011, about a robot research project led by Professor Jong-Hwan Kim of the Electrical Engineering Department. The article follows below:

Rise of the mimic-bots that act like we do: Human-machine teamwork

[January 08, 2011]

A robot inspired by human mirror neurons can interpret human gestures to learn how it should act. A human and a robot face each other across the room. The human picks up a ball, tosses it towards the robot, and then pushes a toy car in the same direction.

Confused by two objects coming towards it at the same time, the robot flashes a question mark on a screen. Without speaking, the human makes a throwing gesture. The robot turns its attention to the ball and decides to throw it back.

In this case the robot’s actions were represented by software commands, but it will be only a small step to adapt the system to enable a real robot to infer a human’s wishes from their gestures.

Developed by Ji-Hyeong Han and Jong-Hwan Kim at the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, the system is designed to respond to the actions of the person confronting it in the same way that our own brains do. The human brain contains specialised cells, called mirror neurons, that appear to fire in the same way when we watch an action being performed by others as they do when we perform the action ourselves. It is thought that this helps us to recognise or predict their intentions.

To perform the same feat, the robot observes what the person is doing, breaks the action down into a simple verbal description, and stores it in its memory. It compares the action it observes with a database of its own actions, and generates a simulation based on the closest match.
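The matching step described above could be sketched roughly as follows. This is a minimal illustration, not the researchers' actual method: the function names and the word-overlap similarity measure are assumptions made for clarity.

```python
# Hypothetical sketch: the robot stores its own actions as short verbal
# descriptions and picks the one closest to the observed description,
# here scored by simple word overlap.

def closest_action(observed, action_db):
    """Return the stored action description that shares the most
    words with the observed description."""
    observed_words = set(observed.split())

    def overlap(entry):
        return len(observed_words & set(entry.split()))

    return max(action_db, key=overlap)

# The robot's database of its own action descriptions.
actions = ["throw ball forward", "push car forward", "pick up ball"]

print(closest_action("human throw ball to robot", actions))
# -> "throw ball forward"
```

The best match then seeds the robot's internal simulation of the action.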

The robot also builds up a set of intentions or goals associated with an action. For example, a throwing gesture indicates that the human wants the robot to throw something back. The robot then connects the action “throw” with the object “ball” and adds this to its store of knowledge.
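Linking an action to its object, as described above, amounts to storing a gesture-to-intention mapping. A minimal sketch, with all names assumed for illustration:

```python
# Hypothetical knowledge store: each observed gesture maps to an
# inferred intention, represented as a (verb, object) pair.
knowledge = {}

def learn_intention(gesture, verb, obj):
    """Record that this gesture means: perform `verb` on `obj`."""
    knowledge[gesture] = (verb, obj)

learn_intention("throwing gesture", "throw", "ball")
print(knowledge["throwing gesture"])  # -> ('throw', 'ball')
```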

When the memory bank contains two possible intentions that fit the available information, the robot considers them both and determines which results in the most positive feedback from the human, such as a smile or a nod. If the robot is confused by conflicting information, it can request another gesture from the human. It also remembers details of each interaction, allowing it to respond more quickly when it finds itself in a situation it has encountered before.

The system should allow robots to interact more effectively with humans, using the same visual cues we use. “Of course, robots can recognise human intentions by understanding speech, but humans would have to make constant, explicit commands to the robot,” says Han. “That would be pretty uncomfortable.”

Socially intelligent robots that can communicate with us through gesture and expression will need to develop a mental model of the person they are dealing with in order to understand their needs, says Chris Melhuish, director of the Bristol Robotics Laboratory in the UK. Using mirror neurons and humans’ unique mimicking ability as an inspiration for building such robots could be quite interesting, he says.

Han now plans to test the system on a robot equipped with visual and other sensors to detect people’s gestures. He presented his work at the Robio conference in Tianjin, China, in December.

As the population of many countries ages, elderly people may share more of their workload with robotic helpers or colleagues. In an effort to make such interactions as easy as possible, Chris Melhuish and colleagues at the Bristol Robotics Laboratory in the UK are leading a Europe-wide collaboration called Cooperative Human Robotic Interaction Systems that is equipping robots with software that recognises an object they are picking up before they hand it to a person. They also have eye-tracking technology that they use to monitor what humans are paying attention to. The goal is to develop robots that can learn to safely perform shared tasks with people, such as stirring a cake mixture as a human adds milk.

(c) 2011 Reed Business Information – UK. All Rights Reserved.