Robotics technologies are showing up all around us. There are vacuum cleaners that drive themselves around your home, and toys that move, talk and learn. Robots are being used on battlefields, as cuddly alternatives to pet therapy in care centres, and as programmable, customizable sex dolls in bedrooms.
So should we be worried about a robot uprising? Perhaps not yet. But there are many reasons why we should be more cautious about the technology. Not only are these technologies increasingly prevalent, but they are also about to generate novel dilemmas for us humans.
We are now asking deeper questions about our creations: What kind of relationship should we have with robots? How should a robot behave? What kind of decisions should robots be allowed to make? And what should a robot do? Existing engineering standards do not govern ethical use, deployment, or behaviour of robots.
For example, if you were to see a child banging a squirming and screaming toy dinosaur robot against a table, would you take the robot away, or discipline the child so she learns it is bad to hurt the robot? The robot is just a toy and doesn’t actually get hurt in the same sense animals do, so what is the difference between this robot and a ball that gets kicked around?
These kinds of conundrums are driving researchers to consider the next big thing in robotics: the ethics of robots and robotics.
About ten years ago, a field of study called Roboethics was established to explore these social, legal, and ethical questions pertaining to robotics. Roboethics is becoming more important to roboticists who strive to make robots more human-friendly.
This is not a trivial field of inquiry. Although people seem to know what is socially, legally, and ethically appropriate in daily life, what someone should do can change quite a lot depending on the context, the situation, and the person. If robots don’t know what is appropriate and what is not, then we would not want them roaming around our homes, interacting with our elderly parents or children.
At the UBC CARIS (Collaborative Advanced Robotics and Intelligent Systems Laboratory) lab, we are trying several ideas. One approach is to make robots communicate better, so that what a robot is doing or is about to do is easily understood by the people interacting with it. When conflicts occur, a robot could communicate with those people to figure out what it should do next.
Another idea is to seek help from lots of people. We think that listening to feedback from all stakeholders of the technology can help us better understand how to implement human ethics into design. My colleagues and I at the CARIS lab, CNR-IEIIT (Italy) and School of Robotics (Italy) will launch an initiative we call the Open Roboethics initiative. In the near future, we will have an online space for you to share your opinions about what you think is acceptable robot use and robot behaviour. All of your feedback will be heard by designers and can also inform policy makers. We also hope to share some geeky content, such as benchmarking and simulation platforms where designers around the world can test their robot behaviours against what people have said is acceptable.
Maybe this way, everyone will get to have a say in what a robot should do, and in what ‘ethical’ and ‘friendly’ really mean in a robot universe.