From robotic security guards to self-driving cars to cyborg nannies, the era of machines living among humans has taken a giant leap forward. As with any new technology, whether it’s driven by artificial intelligence or machine learning algorithms, there’s a steep learning curve, and manufacturers usually have to “work out the kinks” before the technology is ready for widespread adoption. Robots are no exception.


Conference on the Evolution of Autonomous Robots

On the heels of Knightscope’s K5 security robot plunging into a fountain in Washington, D.C., and the world’s first fatal self-driving car crash, scientists from around the world have organized a conference at the University of Surrey to discuss the evolution of autonomous robots and their impact on society.

“Accidents, we hope, will be rare, but they are inevitable,” Alan Winfield, professor of robot ethics at the University of the West of England in Bristol, told The Guardian. “Anywhere robots and humans mix is going to be a potential situation for accidents.”

Recent Safety Issues

Last year a K5 security robot patrolling the Stanford Shopping Center knocked over a one-year-old, running over his foot in the process. Knightscope, the manufacturer, has since updated its K5 security robot. But what prevents this from happening again? How can Tesla engineers ensure that the company’s autonomous driving sensor system reliably recognizes a truck crossing its path? Scientists suggest the same way the aviation industry does – via black box technology.

Winfield and Marina Jirotka, professor of human-centered computing at Oxford University, argue that robotics firms should follow the safety practices established by the aviation industry. After a plane crash, accident investigators such as the U.S. National Transportation Safety Board (NTSB) have access to the aircraft’s black boxes – the flight data recorder and the cockpit voice recorder. From these, they can piece together the exact cause of the crash, whether mechanical failure or human error, and vital safety lessons are learned.

“The reason commercial aircraft are so safe is not just good design, it is also the tough safety certification process and, when things go wrong, robust and publicly visible processes of air accident investigation,” the researchers write in a paper to be presented at the Surrey meeting.

Black Boxes for Robots

According to the scientists, a robot should have a built-in ‘ethical black box’ so that investigators can analyze every step of the machine’s decision-making process. In the event of an accident, investigators could uncover the basis for each decision and the reason behind the robot’s movements, while also pulling scene data from the robot’s cameras, microphones, and rangefinders.
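
To make the proposal concrete, here is a minimal sketch, in Python, of what such a recorder might look like. The EthicalBlackBox class, its record fields, and the sensor names are illustrative assumptions for this article, not a design taken from Winfield and Jirotka’s paper.

import json
import time
from dataclasses import dataclass, asdict

# Sketch of an 'ethical black box': an append-only log that pairs each
# decision the robot makes with the sensor evidence it acted on.
# All names here are illustrative assumptions, not an actual specification.

@dataclass
class DecisionRecord:
    timestamp: float   # when the decision was made (Unix time)
    sensors: dict      # raw readings, e.g. {"rangefinder_m": 0.4}
    decision: str      # the action the control system chose
    rationale: str     # the rule or model output that justified it

class EthicalBlackBox:
    def __init__(self, path="ebb_log.jsonl"):
        # Append-only JSON Lines file, so earlier entries are never rewritten.
        self._log = open(path, "a", buffering=1)

    def record(self, sensors, decision, rationale):
        entry = DecisionRecord(time.time(), sensors, decision, rationale)
        self._log.write(json.dumps(asdict(entry)) + "\n")

# Example: a patrol robot logging why it kept moving despite a nearby person.
ebb = EthicalBlackBox()
ebb.record(
    sensors={"rangefinder_m": 0.4, "camera_person_detected": True},
    decision="continue_forward",
    rationale="obstacle classified as static; clearance threshold 0.3 m",
)

An investigator replaying such a log would see, entry by entry, what the robot sensed, what it decided, and why – exactly the kind of ‘internal dialogue’ Winfield describes below.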

“Serious accidents will need investigating, but what do you do if an accident investigator turns up and discovers there is no internal dialogue, no record of what the robot was doing at the time of the accident? It’ll be more or less impossible to tell what happened,” Winfield explained to The Guardian.

The U.S. National Highway Traffic Safety Administration (NHTSA) attributed the fatal self-driving car crash to human error, but Elon Musk and Tesla drew extensive criticism for releasing autonomous driving technology without adequate safety precautions. We would certainly have greater insight into the details of the crash if the Model S had come with a built-in black box.

Source: The Guardian