Artificial Intelligence and Robotics: Who’s At Fault When Robots Kill?

by Luke Fitzpatrick, August 3rd, 2020

Too Long; Didn't Read

Embodied artificial intelligence systems pose the most significant challenges for lawmakers. We’ve yet to truly test how our laws will cope with the arrival of more sophisticated automation technology. It’s one thing to make a computer program that quietly gets better and faster at analyzing vast amounts of data, but who's to know what an embodied AI is going to do next — physically? By definition, the designer won’t — the robot learns for itself, so how can we legislate for safety and blame?

Up to now, any robots that have brushed with the law were running strictly according to their code. Fatal accidents and serious injuries usually only happened through human misadventure or improper use of safety systems and barriers. We’ve yet to truly test how our laws will cope with the arrival of more sophisticated automation technology — but that day isn’t very far away.

AI already infiltrates our lives on so many levels in a multitude of practical, unseen ways. While the machine revolution is fascinating — and will cause harm to humans here and there — embodied artificial intelligence systems perhaps pose the most significant challenges for lawmakers.

Robots that run according to unchanging code are one thing and have caused many deaths and accidents over the years — not just in the factory but the operating theatre too. Machines that learn as they go are a different prospect entirely — and coming up with laws for dealing with that is likely to be a gradual affair.

Emergent robot behavior and the blame game

Emergent behavior is going to make robots infinitely more effective and useful than they’ve ever been before. The potential danger with emergent behavior is that it’s unpredictable. In the past, robots got programmed for set tasks – and that was that. Staying behind the safety barrier and following established protocols kept operators safe.

Embodied AI is going to change all that. It’s one thing to make a computer program that quietly gets better and faster at analyzing vast amounts of data, but who’s to know what an embodied AI is going to do next — physically? By definition, the designer won’t — the whole point is that the robot learns for itself, so how can we legislate for safety and blame?

Let’s consider artificial intelligence and robots as mere products. In some ways, you can compare these products to the new technologies of the past — like trains and cars. A car manufacturer can get sued if they produce an unsafe vehicle. Yet, you can’t sue the maker just because you drive at 100mph and kill a pedestrian. Both the manufacturer and the user of such a product need to be responsible in different ways.

Marc J. Shuman of Marc J. Shuman & Associates has over 33 years of experience representing people injured by others, but says robots are such a new concept that they are certain to make claims more complicated: “Manufacturers currently get sued under product liability laws when there’s damage to property or persons caused by defects or false claims about their product. The trouble with product liability law is it isn’t black and white, and robots equipped with AI are going to further muddy the waters. The way things stand, something has to give if manufacturers are going to continue to innovate reasonably freely.”

Blame the user, blame the maker, or blame the robot?

Let’s look at a hypothetical example. We’ll consider a robot in some future setting tasked with washing cars. The manufacturer creates it to learn ever more efficient and effective ways to clean vehicles. The machine can even rewrite its algorithms to that end. Within a year, the robot is working faster, better, and exceeding all expectations when an accident occurs. A motorist gets seriously injured as a result of this mishap, but who’s at fault?

In the US, such an incident might lead to a civil case based on negligence or defective design. It might be argued that any user of the product had a right to be safe and unharmed by the experience. The manufacturer would probably appear pretty careless to the plaintiff, who only wanted to get their car washed and trusted the tech to do the job safely. All a user really needs to do to avoid any share of responsibility is use the product for its intended purpose.

Product liability laws are all about attributing blame, but who’s really at fault in such a scenario? After all, this is a machine that learns as it goes along, and is unpredictable as a result. The manufacturer, far from trying to hide that, probably featured the fact prominently in any marketing. Is it fair to blame the maker for something that’s a result of the robot’s environment and learning ability — and could not have been foreseen? It’s probably also true to say that by the time of the accident, the robot was operating according to algorithms it wrote itself — so do we blame the robot instead?

It’s a gray area, to say the very least. Ultimately, the manufacturer chose to sell a self-learning machine and is profiting from the technology. If the nature of the product caused harm, maybe they need to bear some of the brunt. However, there’s a very real danger that progress could be slowed significantly or even halted if companies can’t balance the risk of legal actions against profits. The fact is that while artificial intelligence is relatively new, the problems that it poses for lawmakers are not.

Embodied AI won’t be the first time product liability laws met entirely new types of products. 

New technologies and lawmaking through history

Automobiles caused chaos during the late nineteenth and early twentieth centuries, but they got regulated gradually. Nobody knew how to legislate for cars when they were rolled out — because nobody knew what to expect. Issues got dealt with as they arose. For instance, jaywalking laws only came along after thousands of people had been killed.

The end of the 20th century saw society grapple with regulating the internet. That resulted in the introduction of the Communications Decency Act, with its now infamous and much-debated section 230. Platforms like Twitter and Facebook likely couldn’t operate without the protection section 230 gives them. 

Without those laws, US highways would be killing fields where pedestrians were permitted to roam freely. Only time will tell how twenty-first-century lawmakers deal with AI and emergent robot behavior. The only thing that seems clear is that it won’t be the last time society is required to accommodate new tech.