For Justice Dr Jamal Al Sumaiti, director-general of the Dubai Judicial Institute (DJI), uncertainty and unresolved questions still surround the work of artificial intelligence (AI), even on the global scale.

"Even if an ideal and perfect AI could be invented, would it be an entity with legal liability? Would we be able to treat it like if it were a persona with legal responsibility and bring it to legal accountability when its work goes wrong?" Dr Al Sumaiti raised these questions in an exclusive interview with Khaleej Times on the sidelines of the 'Shaping the future of Judicial Knowledge' workshop. The first edition of the workshop was organised by the DJI in cooperation with the United Nations Crime and Justice Research Institute under the theme 'Artificial Intelligence Today and Beyond'.

"It is a very broad subject and it is still a matter of study worldwide," Justice Al Sumaiti said, raising some questions: "Can the robot be seen as a legal entity like we look at corporations and business enterprises - those entities that have names, addresses, rights, and obligations? Can we reach a day when it can replace the judicial employee in his/her work? The legalists will be the ones to answer to that.

Justice Al Sumaiti noted that it might take quite a long time to reach the day when a robot could replace a judge, a prosecutor or even an investigator, but said "we have already begun exploring the possibilities and risks involved".

The two-day workshop was inaugurated by Dr Al Sumaiti in the presence of Taresh Eid Al Mansouri, director of the Dubai Courts and vice-chairman of the DJI.

It saw the participation of elite speakers, legal professionals and experts in the field of AI.

'AI has strengths but also weaknesses'

Artificial intelligence (AI) has its strengths but also its weaknesses. Hence the importance of the law in setting up a framework with the right mechanism to attribute responsibility for actions controlled by AI, an expert said.

At the first edition of a workshop organised by the Dubai Judicial Institute (DJI), Minesh Tanna of Simmons & Simmons, UK, explained why it is important to attribute legal liability when work by artificial intelligence goes wrong.

"Since the AI acts autonomously, it makes it difficult to attribute responsibility to individuals or entities. Hence, there is a need for foreseeability of the responsibility if the AI work goes wrong. An example of that is a self-driving vehicle that hits a woman crossing the street while the vehicle was driving below the speed limit and in the presence of a safety driver in the car, who was not looking at the road at the time of accident. The question is whom to hold responsible - the AI or the human? Ultimately, whether we should hold humans liable for the AI actions is a legal and social choice," Tanna explained.

He stated that any solution in law should be consistent with the aims of the rule of law.

Copyright © 2019 Khaleej Times. All Rights Reserved.
