13 September 2018 - Events
Artificial Intelligence (AI) is constantly evolving and continues to roll into all facets of industry – from neural networks that help recycling systems sort waste more accurately, to AI that makes self-driving cars safer or teaches machines to do more with less. As AI becomes more commercialised, it reveals the complexities around allocating risk and liability for the autonomous actions of AI.
Existing EU laws governing liability are largely stable where human input is prevalent in the machine decision-making process. Consider a human instructing Siri to download illegal content. Even though Siri may exercise some autonomy in processing this command, the human user, as the decision maker, should be responsible for the illegal download.
However, as we move closer to fully autonomous machine decision making, the link back to human intervention falls away. Consider a human instructing a self-driving car to drive from A to B, where the car decides to swerve into oncoming traffic to avoid hitting the car in front. If the user were to be held liable in the latter scenario, we would be endorsing the allocation of liability to a user who is not at fault. Yet without such allocation, victims could be left with no compensation.
With purer forms of AI on the horizon, our laws no longer provide all the answers on how to address liability. We are at a fork in the road on this issue, with each major player (developers, policy makers and insurers) keen to have input on how it should be addressed. The route we take now is likely to shape the entire AI market.
Legislation on the table
In Europe, Members of the European Parliament (MEPs) have called for EU-wide liability rules and ethical standards to deal with AI. MEPs have emphasised the need for the EU to take the lead in setting these standards, to avoid being subjected to rules set by third countries. On the self-driving car front, for example, current requests from MEPs include mandatory insurance schemes and compensation funds for victims of driverless cars. In the UK, the Vehicle Technology and Aviation Bill, which requires insurers to extend cover to automated vehicles, is already working its way through Parliament. However, some of the more extreme suggestions, such as a specific legal status for robots, would represent a far more revolutionary approach to the issue.
Setting the bar
The EU Commission faces a difficult task in setting standards for a market that is in its infancy and something of a moving target. Care needs to be taken not to set the bar too high, particularly given that this market is heavily populated by start-ups without the resources to grapple with yet more regulation. There are clear benefits to the EU leading the way here; however, the EU Commission needs to ensure that it doesn't stifle innovation.
Shifting liability too far towards manufacturers and developers may create insurmountable hurdles for new players, whilst shifting it too far towards users may result in poor uptake of new technologies. Burdening insurers will not solve the problem either, as costs will ultimately be pushed down to consumers.
Start-ups at the coal face of development should rise to the challenge of shaping future regulation, whether alone or through lobbyist groups or industry associations, to ensure their interests are taken into consideration. As we await the output of the EU Commission, companies operating in the AI market should be alert to the complexities around liability identified above; ensure that, at a contractual level, liability is clearly allocated to the fullest extent possible; and tailor their approach to obtaining insurance products accordingly.
The information in this blog post is provided for general informational purposes only, and may not reflect the current law in your jurisdiction. No information contained in this post should be construed as legal advice from Withers or the individual author, nor is it intended to be a substitute for legal counsel on any subject matter.