
Self-Driving Car Makers May Face Jail if Artificial Intelligence Causes Harm

According to the British government, artificial intelligence (AI) technologies that harm workers could result in the prosecution of their creators.

Specifically, the Department for Work and Pensions said that the manufacturers of autonomous cars and other AI systems could face not only jail terms but also multi-million-pound fines if their creations harm workers.

Responding to a written parliamentary question, Baroness Buscombe, a government spokesperson, confirmed that existing health and safety law “applies to artificial intelligence and machine learning software”.

The statement clarifies one element of the law surrounding artificial intelligence (AI), a topic of significant debate in governmental, legal and academic circles.

Under the Health and Safety at Work Act 1974, directors found guilty of neglect or “consent or connivance” can face a jail term of up to two years.

Michael Appleby, a health and safety lawyer at Fisher Scoggins Waters, said this provision of the Act is “hard to prosecute”.

He added that this is “because directors have to have their hands on the system”. Nevertheless, if an artificial intelligence system is created by a startup, it may be easier to establish a clear connection between the software product and a director.

Under the same Act, companies can also be prosecuted, with fines scaled to the firm’s turnover. For a company with revenue above £50 million, the fines can be unlimited.

Since the Act has yet to be applied to machine learning and artificial intelligence software, these provisions would need to be tested in court. In any case, the real significance of the announcement may be the powers it confirms for the Health and Safety Executive (HSE).

“There is nothing magical about AI or machine learning, and someone building or deploying it needs to comply with the relevant regulatory framework,” asserted Neil Brown, the director of legal technology firm decoded: Legal.

Nonetheless, others questioned the Health and Safety Executive’s capacity to understand such advanced technology, which under the present regime companies are left to test themselves.

“I’m skeptical both that industry’s own tests will be deep and comprehensive enough to catch important issues, and that the regulator is expert enough to meaningfully scrutinize them for rigor,” explained Michael Veale, a researcher in responsible public sector machine learning at University College London.

Source: Sky

 


KC Cheung
KC Cheung has over 18 years’ experience in the technology industry, including media, payments, and software, and has a keen interest in artificial intelligence, machine learning, deep learning, neural networks and their applications in business. Over the years he has worked with some of the leading technology companies, building and growing dynamic teams in a fast-moving international environment.
