Here's what happened during OpenAI CEO Sam Altman's first hearing on artificial intelligence


OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology & the Law Subcommittee hearing titled ‘Oversight of A.I.: Rules for Artificial Intelligence’ on Capitol Hill in Washington, May 16, 2023.

Elizabeth Frantz | Reuters

Artificial intelligence regulation should not repeat the same mistakes Congress made at the dawn of the social media era, lawmakers on the Senate Judiciary subcommittee on privacy and technology made clear on Tuesday.

During the hearing, where OpenAI CEO Sam Altman testified for the first time, senators from both sides of the aisle stressed the need to figure out guardrails for the powerful technology before the greatest of its harms emerge. They repeatedly compared the risks of AI to those of social media, while acknowledging AI is capable of greater speed, scale and very different kinds of harms. The lawmakers did not arrive at specific proposals, though they floated ideas such as a new agency to regulate AI or a licensing regime for the technology.

The hearing came after Altman met with a receptive group of House lawmakers at a private dinner Monday, where the CEO walked through risks and opportunities in the technology. Tuesday’s hearing had a somewhat skeptical but not quite combative tone toward the industry members on the panel, which included both Altman and IBM Chief Privacy and Trust Officer Christina Montgomery, alongside New York University Professor Emeritus Gary Marcus.

Chair Richard Blumenthal, D-Conn., opened the hearing with a recording of his remarks, which he later revealed was created by AI, both in substance and the voice itself. He read a flattering description of why ChatGPT wrote the opening remarks the way it did, pointing to Blumenthal’s record on data privacy and consumer protection issues. But, he said, the party trick would not be so amusing were it used to say something harmful or untrue, like falsely endorsing Ukraine’s hypothetical surrender to Russia.

Blumenthal compared this moment to an earlier one that Congress had let pass.

“Congress failed to meet the moment on social media,” Blumenthal said in his written remarks. “Now we have the obligation to do it on AI before the threats and the risks become real.”

Ranking Member Josh Hawley, R-Mo., noted that Tuesday’s hearing couldn’t have even happened a year ago because AI had not yet entered the public consciousness in such a big way. He envisioned two paths the technology could take, likening its future to either the printing press, which empowered people throughout the world by spreading information, or the atom bomb, which he called a “huge technological breakthrough, but the consequences: severe, terrible, continue to haunt us to this day.”

Several lawmakers brought up Section 230 of the Communications Decency Act, the law that has served as the tech industry’s legal liability shield for more than two decades. The law, which helps expedite the dismissal of lawsuits against tech platforms when they are based on other users’ speech or the companies’ content moderation decisions, has recently drawn criticism from both sides of the aisle, though with different motivations.

“We should not repeat our past mistakes,” Blumenthal said in his opening remarks. “For example, Section 230. Forcing companies to think ahead and be responsible for the ramifications of their business decisions can be the most powerful tool of all.”

Sen. Dick Durbin, D-Ill., who chairs the full committee, said passing 230 in the early internet days was essentially Congress deciding to “absolve the industry from liability for a period of time as it came into being.”

Altman agreed that a new system to deal with AI was needed.

“For a very new technology we need a new framework,” Altman said. “Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world but tool users do as well.”

Altman continued to receive praise from lawmakers Tuesday for his openness with the committee.

Durbin said it was refreshing to hear industry executives calling for regulation, saying he couldn’t remember other companies so strongly asking for their industry to be regulated. Big Tech companies like Meta and Google have repeatedly called for national privacy regulation among other tech laws, though often such efforts come in the wake of regulatory pushes in the states or elsewhere.

After the hearing, Blumenthal told reporters that comparing Altman’s testimony to those of other CEOs was like “night and day.”

“And not just in the words and rhetoric, but in actual actions and his willingness to participate and commit to specific action,” Blumenthal said. “Some of the Big Tech companies are under consent decrees, which they have violated. That’s a far cry from the kind of cooperation that Sam Altman has promised. And given his track record, I think it seems to be pretty sincere.”

