First social skills framework for robots developed at MIT
In yet another example of how prescient Isaac Asimov's Three Laws of Robotics were, MIT researchers have created what could become the first social interaction framework for robots. The goal is to make machines consider not only the task at hand but also how their actions affect others. The framework's mathematical model sorts robots into three basic categories, or "levels."
A level 0 machine performs its task without recognizing any common goal. Level 1 robots can cooperate on common goals but assume they are the only ones capable of doing so. The most sophisticated, level 2, robots recognize that other machines around them have social skills too and can partner with them to, for instance, complete a task faster. According to Ravi Tejwani, a research assistant at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL):
We have opened a new mathematical framework for how you model social interaction between two agents. If you are a robot, and you want to go to location X, and I am another robot and I see that you are trying to go to location X, I can cooperate by helping you get to location X faster. That might mean moving X closer to you, finding another better X, or taking whatever action you had to take at X. Our formulation allows the plan to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically.
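To make the level distinction concrete, the sketch below is a simplified, hypothetical illustration of the idea that a level 0 agent optimizes only its own goal while a level 1 agent also weighs an estimate of another agent's goal. All names, reward shapes, and weights here are invented for the example and do not come from the MIT paper.

```python
# Hypothetical sketch of the "social levels" idea: a level 0 agent optimizes
# only its own goal, while a level 1 agent adds an estimate of another
# agent's goal to its own objective. Everything here is illustrative only.

from dataclasses import dataclass

@dataclass
class Agent:
    position: float
    goal: float           # location X the agent is trying to reach
    social_weight: float  # 0.0 -> level 0, > 0.0 -> level 1 (cares about others)

def own_reward(agent: Agent, new_position: float) -> float:
    """Reward for moving closer to the agent's own goal."""
    return -abs(agent.goal - new_position)

def social_reward(helper_position: float, other: Agent) -> float:
    """Estimated benefit of the helper's position for the other agent's goal."""
    return -abs(other.goal - helper_position)

def choose_step(agent: Agent, other: Agent, steps=(-1.0, 0.0, 1.0)) -> float:
    """Pick the step that maximizes own reward plus weighted social reward."""
    def total(step: float) -> float:
        new_pos = agent.position + step
        return own_reward(agent, new_pos) + agent.social_weight * social_reward(new_pos, other)
    return max(steps, key=total)

if __name__ == "__main__":
    level0 = Agent(position=0.0, goal=0.0, social_weight=0.0)  # ignores others
    level1 = Agent(position=0.0, goal=0.0, social_weight=2.0)  # values helping
    traveller = Agent(position=5.0, goal=10.0, social_weight=0.0)

    # The level 0 agent stays put (it is already at its goal), while the
    # level 1 agent steps toward the traveller's goal to help it get there.
    print("level 0 step:", choose_step(level0, traveller))  # -> 0.0
    print("level 1 step:", choose_step(level1, traveller))  # -> 1.0
```

In this toy setup the level 0 agent never moves, since it has already reached its own goal, whereas the level 1 agent accepts a small personal cost to move toward the other agent's destination, mirroring the "helping you get to location X faster" behaviour described in the quote.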
To test whether their mathematical framework for giving robots social skills matches human understanding, the researchers showed volunteers videos of level 0 to level 2 robots interacting. In most cases, the volunteers' perception of the interactions taking place matched the model's predictions. Boris Katz, a member of MIT's Center for Brains, Minds, and Machines, commented on the framework's viability:
Robots will live in our world soon enough, and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt for understanding what it means for humans and machines to interact socially.
For now, the robots' social interactions play out in a simulated 2D environment, but the researchers plan to expand the framework into 3D settings, such as the manipulation of household objects, and to test how the model handles failure.