G7 technology ministers meeting in Italy pledged Friday to "achieve a common vision and goal of safe, secure, and trustworthy" artificial intelligence, but said the framework could vary between countries.

The pledge came two days after the European Parliament gave final approval to the world's most far-reaching rules to govern artificial intelligence, including powerful systems like OpenAI's ChatGPT.

ChatGPT wowed the world in late 2022 with its human-like capabilities, from digesting complex text to producing poems within seconds or passing medical exams. Other AI tools followed, like DALL-E, which produces images based on a simple input in everyday language.

But the technology carries a series of risks, not least that AI-generated audio and video "deepfakes" could turbocharge disinformation campaigns.

"We are committed to achieving an appropriate balance between fostering innovation and the need for appropriate guardrails," the Group of Seven (G7) nations said in a statement.

The group, which includes the United States, Japan, Germany, France, Britain, Italy and Canada, said there were "ongoing efforts... to advance and reinforce interoperability between AI governance frameworks".

But the ministers, who issued the statement after two days of talks in Verona and Trento, said they recognised that "like-minded approaches and policy instruments to achieve the common vision and goal of safe, secure, and trustworthy AI may vary across G7 members".

Some G7 member countries, such as the United States and Britain, favour more lenient rules, relying instead on self-regulation or voluntary compliance by tech giants with oversight schemes.

"Our approach to the regulation of artificial intelligence is different from that of the EU," Britain's Technology Minister Michelle Donelan told Italy's La Repubblica daily.

"We do not only want to focus on risks, but also promote innovation and avoid hindering it."