
Who should pay when your robot breaks the law?

Robots are unquestionably getting more sophisticated by the year and, as a result, are becoming an indelible part of our daily lives. But as we start to increase our interactions with, and dependence on, robots, an important question needs to be asked: What would happen if a robot actually committed a crime, or even hurt someone, whether deliberately or by mistake?

While our first inclination might be to blame the robot, the matter of apportioning blame is considerably more complicated and nuanced than that. Like any incident involving an alleged criminal act, we need to consider an entire host of factors. Let's take a deeper look and find out who should pay when your robot breaks the law.

To help us tackle this issue, we spoke to robot ethics expert Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. It was through our conversation with him that we learned just how pertinent this issue is becoming. As Lin told io9, "Any number of parties could be held responsible for robot misbehaviour today."

Robot and machine ethics

Before we get too far along in the discussion, a distinction needs to be made between two different fields of study: robot ethics and machine ethics.

We are currently in the age of robot ethics, where the concern lies with how and why robots are designed, constructed, and used. This includes such things as domestic robots like the Roomba, self-driving cars, and the potential for autonomous killing machines on the battlefield. These robots, while capable of "acting" without human oversight, are essentially mindless automatons. Robot ethics, therefore, is primarily concerned with the appropriateness of their use.

Machine ethics, on the other hand, is a bit more speculative in that it considers the future potential for robots (or more accurately, their embodied artificially intelligent programming) to have self-awareness and the capacity for moral thought. Consequently, machine ethics is concerned with the actual behavior and actions of advanced robots.

So, before any blame can be assigned to a robot for any nefarious action, we would need to decide which of these two categories applies. For now and the immediate future, robot ethics most certainly qualifies, in which case accountability should be attributed to the manufacturer, the owner, or in some cases even the victim.

But looking further into the future, to a time when robots match our own level of moral sophistication, the day is coming when they will very likely have to answer for their crimes.

Manufacturer liability

For now and the foreseeable future, culpability for a robot that has gone wrong will usually fall on the manufacturer. "When it comes to more basic autonomous machines and systems," said Lin, "a manufacturer needs to ensure that any software or hardware defect should have been foreseen."

He cited the hypothetical example of a Roomba that experiences a perfect storm of confusion — a set of variables that the manufacturer could not have anticipated. "One could imagine the Roomba falling off an edge and landing right on top of a cat," he said, "in which case it could be said that the manufacturer is responsible."

Indeed, because the robot is just operating according to the limits of its programming, it cannot be held accountable for its actions. There was absolutely no malice involved. And assuming that the robot was being used according to instructions and not modified in any way, the consumer shouldn't be held liable either.

Outside intended use

Which, as Lin pointed out, raises another issue.

"It's also possible that owners will misuse their robots and hack directly into them," he said. Lin pointed to the example of home defense robots that are being increasingly used in Asia — including robots that go on home patrol and can shoot pepper spray and paint-ball guns. "It's conceivable that someone might want to weaponize the Roomba," he told io9, "in which case the owner would be on the hook and not the manufacturer." In such a scenario, the robot would act in a way completely outside of its intended use, thus absolving the manufacturer from liability.

But as Lin clarified for us, it's still not as cut-and-dried as that. "Just because the owner modified the robot to do things that the manufacturer never intended or could never foresee doesn't mean they're completely off the hook," he said. "Some might argue that the manufacturer should have foreseen the possibility of hacking, or other such modifications, and in turn build in safeguards to prevent this kind of manipulation."

Blame the victim

And there are still yet other scenarios in which even the victim could be held responsible. "Consider self-driving cars," said Lin, "and the possibility that a jaywalker could suddenly run across the street and get hit." In such a case, it's the victim who's really to blame.

And indeed, one can imagine an entire host of scenarios in which people, through their inattention or recklessness, fall prey to the growing number of powerful and autonomous machines around them.

Machines that are supposed to kill

Complicating all this even further is the potential for autonomous killing machines.

Currently, combat drones are guided remotely by human operators, who are in turn responsible for any violent action committed by the device. If an operator kills a civilian or fellow soldier by mistake, they will have to answer for their mistake and likely face a military tribunal depending on the circumstances.

But that said, there are already sentry bots on duty in Israel and South Korea. What would happen if one of these robots were to kill somebody by mistake? Actually, as Lin informed us, it's already happened. Back in October 2007, a semi-autonomous robotic cannon deployed by the South African army malfunctioned, killing nine "friendly" soldiers and wounding 14 others.

It would be all too convenient, and even instinctive, to blame the robot for an incident like this. But because these systems lack any kind of moral awareness, they cannot be held responsible.

Who, therefore, should account for such an egregious mistake? The person who deployed the machine? The procurement officer? The developer of the technology? Or as Lin asked, "Just how far up the chain of command should we go — and would we ever go so far as to implicate the President, who technically speaking is the Commander-in-Chief?"

Ultimately, suggested Lin, these incidents will have to be treated on a case-by-case basis. "It will all depend on the actual scenario," he said.

Quasi-persons

Looking ahead to the future, there's the potential for a kind of behavioral grey area to emerge between a fairly advanced AI and a fully robust moral machine. It's conceivable that a precursor moral AI will be developed that has a very limited sense of self-awareness and personal responsibility — but a sense of subjectivity and awareness nonetheless. There's also the potential for robots to have ethics programmed right into them.

Unlike simpler automatons, these machines would be capable of actual decision-making — albeit at a very rudimentary level. In a sense, they'd be very much like children — who, depending on their age, aren't entirely held accountable for their actions.

"There's a kind of strange disconnect when it comes to robot ethics," noted Lin, "in that we're expecting near perfect behavior from robots when we don't really expect it from ourselves." He agrees that children are a kind of special case, and that they're essentially quasi-persons. Robots, he argues, may have to regarded in a similar way.

Consequently, owners of robots would have to serve as parents or guardians, ensuring that their robots learn and behave appropriately — and in some cases even take full responsibility for their actions. "It's the same with children," said Lin. "There will have to be a sliding scale of responsibility for robots depending on how sophisticated they are."

The rise of moral machines

And finally, there's the potential for bona fide moral machines — those robots capable of knowing right from wrong. But again, this is still going to prove a tricky area. An artificially intelligent robot will be endowed with a very different kind of mind than that possessed by a human. By its very nature, it will think very differently than we do, and as a consequence it will be very difficult to know its exact inner cogitations.

But as Lin noted, this is an area that, as humans, we're still struggling to deal with ourselves. He pointed out that the latest neuroscience suggests we may not have as much free will as we think. Indeed, courts are beginning to have difficulty in assigning blame to those who may suffer from biological impairments.

All this said, could we ever prove, for example, that a robot can act out of free will? Or that it truly understands the consequences of its actions? Or that it really feels empathy?

If the answers are yes, then a robot could truly be made to pay for its crimes.

But more conceptually, these questions are important because, as a society, we tend to confer rights and freedoms on those persons capable of such thoughts. Thus, if we could ever prove that a robot is capable of moral action and introspection, we would not only have to hold it accountable for its actions, we would also have to endow it with fundamental rights and protections.

It would appear, therefore, that we're not too far from the day when robots will start to demand their one phone call.

Top image via Sargeras/DeviantArt. Inset images via Honda, tf2.digitaljedi.com, IEEE Spectrum, Friducation.

