A robot breaks the finger of a 7-year-old: a lesson in the need for stronger regulation of artificial intelligence

Artificial intelligence is developing quickly, and the law needs to catch up.
Disturbing footage emerged this week of a chess-playing robot breaking the finger of a seven-year-old child during a tournament in Russia.
Public commentary on the event reflects growing unease about the increasing use of robots in our society. Some people joked on social media that the robot was a “sore loser” with a “bad temper”.
Of course, robots cannot actually express real human characteristics such as anger (at least, not yet). But such comments do point to community concern about the “humanisation” of robots. Others suggested this was the beginning of a robot revolution, evoking the images many people have of robots from popular films such as RoboCop and The Terminator.
While these comments may have been made in jest and some images of robots in popular culture are exaggerated, they do highlight uncertainty about what our future with robots will look like. We should ask: are we ready to deal with the moral and legal complexities raised by human-robot interaction?
Human and robot interaction
Many of us have basic forms of artificial intelligence in our home. For instance, robotic vacuums are very popular items in houses across Australia, helping us with chores we would rather not do ourselves.
But as we increase our interaction with robots, we must consider the dangers and unknown elements in the development of this technology.
Examining the Russian chess incident, we might ask why the robot acted the way it did. The answer is that robots are generally designed to operate in situations of certainty; they do not deal well with unexpected events.
In the case of the child with the broken finger, Russian chess officials stated the incident occurred because the child “violated” safety rules by taking his turn too quickly. One explanation is that when the child moved quickly, the robot mistakenly interpreted his finger as a chess piece.
Whatever the technical reason for the robot’s action, the incident demonstrates there are particular dangers in allowing robots to interact directly with humans. Human communication is complex and requires attention to voice and body language; robots are not yet sophisticated enough to process those cues and act appropriately.
What does the law say about robots?
Despite the dangers of human-robot interaction demonstrated by the chess incident, these complexities have not yet been adequately considered in Australian law and policies.
One fundamental legal ...