The U.S. Senate's top Democrat is bringing U.S. technology leaders including Tesla CEO Elon Musk, Meta Platforms CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai to Capitol Hill on Wednesday for a closed-door forum on how Congress should set artificial intelligence safeguards.

"For Congress to legislate on artificial intelligence is for us to engage in one of the most complex and important subjects Congress has ever faced," Senate Majority Leader Chuck Schumer said on Tuesday.

Lawmakers are grappling with how to mitigate the dangers of the emerging technology, which has experienced a boom in investment and consumer popularity after the release of OpenAI's ChatGPT.

Lawmakers want safeguards against potentially dangerous deepfakes, election interference and attacks on critical infrastructure.

Other expected attendees include OpenAI CEO Sam Altman, Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, former Microsoft CEO Bill Gates, AFL-CIO President Liz Shuler and Senators Mike Rounds, Martin Heinrich and Todd Young.

Schumer, who discussed AI with Musk in April, wants attendees to talk "about why Congress must act, what questions to ask, and how to build a consensus for safe innovation." Sessions begin at 10 a.m. ET and are scheduled to run until 5 p.m. ET.

In March, Musk and a group of AI experts and executives called for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society.

This week, Congress is holding three separate hearings on AI. Microsoft President Brad Smith told a Senate Judiciary subcommittee on Tuesday that Congress should "require safety brakes for AI that controls or manages critical infrastructure."

Smith compared AI safeguards to requiring circuit breakers in buildings, emergency brakes in school buses and collision avoidance systems in airplanes.

Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can create text and images whose artificial origins are virtually undetectable.

Adobe, IBM, Nvidia and five other companies on Tuesday said they signed President Joe Biden's voluntary AI commitments, which require steps such as watermarking AI-generated content.

The commitments, announced in July, were aimed at ensuring AI's power is not used for destructive purposes. Google, OpenAI and Microsoft signed on at the time. The White House has also been working on an AI executive order. (Reporting by David Shepardson; Editing by Lincoln Feast.)