Two years ago, the European Commission proposed legislation governing artificial intelligence (AI), and by EU standards, talks and deliberation moved along at a brisk pace, culminating in a 108-page bill expected to be approved last month at a meeting in Strasbourg.

But as the release of AI software ChatGPT blazed across computer screens and headlines, setting the record for the fastest-growing user base of any consumer application in history, EU lawmakers are taking another deep dive into the possible implications and dangers of such revolutionary technology.

Now look for almost unbelievably complex talks as they work through the more than 3,000 proposed amendments covering a potential new AI watchdog office in the EU and the sweeping implications inherent in everything from facial recognition to deep fakes and possibly plagiarised computer-driven creations.

Will the EU need AI to help sort through all the possible scenarios? As a leader in digital governance, the bloc will require all the wisdom it can muster to come up with the new rules that are now expected by the end of the year.

The pace of new AI systems is making regulation a real challenge, but some aspects remain consistent: the need for transparency, quality controls and protection of various legal and personal rights.

The current approach is to classify AI tools according to their perceived risk level — ranging from minimal to limited, high and unacceptable. High-risk tools won’t be banned, but will require companies to be highly transparent in their operations.

As 20-member committees from the EU parliament discuss AI with the aid of the best experts they can find, one can wonder if the genie is already out of the bottle and has begun to operate on its own. New reports come daily of surprising uses and misuses discovered by users and the media.

The cautionary tale of AI danger was written long ago and made into an iconic movie. A story by pioneering science fiction author Arthur C Clarke, published in 1951, was adapted into the 1968 film ‘2001: A Space Odyssey’, directed by Stanley Kubrick. Today the movie is considered a masterpiece and a milestone in film, science fiction and our perceptions of space travel.

In it, the supercomputer HAL 9000, used to run a spacecraft, is given a soothing voice and a human-like personality, but astronauts are shocked to learn it is actually trying to kill them despite its core programming to be their most trusted aide. Unknown to the actual humans is a secret subtext to their mission that headquarters gave HAL. Because lying violates the computer’s prime directive, yet the mission’s real purpose must remain hidden, HAL decides to solve the conundrum by eliminating the humans and the accompanying need to lie.

So far the implications of today’s AI aren’t nearly so dramatic, but experts and commentators are positing questions about a technology that can be used to actually create new, or potentially fake, works of art — both text and graphic — along with contracts, computer programmes, investment schemes and a bewildering range of other real-world manifestations.

In a sense, the programmes are even able to replicate — writing programming code in seconds, often of a very high standard — as AI improves at a meteoric rate. The sheer pace of AI applications has left many trying to catch their breath as they attempt to keep up.

Yet though impressive in its authentic-sounding answers, big questions remain about complete accuracy and rationality in today’s AI.

Prabhakar Raghavan, a senior vice president at Google, told the German press that current AI “can sometimes lead to something we call hallucination”, offering a convincing but completely fictitious answer. He says the huge language models behind the technology make it impossible for humans to monitor every conceivable behaviour by the system.

For their part, ChatGPT developers say their dialogue format makes it possible for the AI software to answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests.

Yet a chatty dialogue, no matter how authentic sounding, is not what is needed in the medical or emergency situations that will no doubt be addressed with the help of artificial intelligence. What is needed is complete accuracy and reliability.

With such a formidable subject, the EU has its work cut out for it in crafting laws and regulations. Some applications of AI are clear, such as voice and image recognition, while there remains a bewildering range of other potential uses that may not yet have even been explored. That field — so-called General Purpose AI Systems (GPAIS) — is itself a regulatory new world at present.

Some lawmakers are seeking to classify AI systems that are GPAIS as high-risk technology. For their part, tech companies are pushing back, insisting their own in-house guidelines are robust enough to ensure the technology is deployed safely.

The user fascination with ChatGPT continues but regulators and developers alike recognise the technology’s potential for spam, misinformation, malware creation, targeted harassment and other misuse.

Like the computer HAL in Kubrick’s movie, AI is now on an odyssey into uncharted territory. The EU is working to ensure it is guided well — and knows who is in ultimate control.

Jon Van Housen and Mariella Radaelli are veteran international journalists based in Italy.

Copyright © 2022 Khaleej Times. All Rights Reserved. Provided by SyndiGate Media Inc. (Syndigate.info).