Weeks after OpenAI released its ChatGPT chatbot last year, Sam Altman, the chief executive of the artificial intelligence start-up, launched a lobbying blitz in Washington.
He demonstrated ChatGPT at a breakfast with more than 20 lawmakers in the Capitol. He called for A.I. to be regulated in private meetings with Republican and Democratic congressional leaders. In all, Mr. Altman has discussed the rapidly evolving technology with at least 100 members of Congress, as well as with Vice President Kamala Harris and cabinet members at the White House, according to lawmakers and the Biden administration.
“It’s so refreshing,” said Senator Richard Blumenthal, Democrat of Connecticut and the chair of a panel that held an A.I. hearing last month featuring Mr. Altman. “He was willing, able and eager.”
Technology chief executives have typically avoided the spotlight of government regulators and lawmakers. It took threats of subpoenas and public humiliation to persuade Mark Zuckerberg of Meta, Jeff Bezos of Amazon and Sundar Pichai of Google to testify before Congress in recent years.
But Mr. Altman, 38, has run toward the spotlight, seeking the attention of lawmakers in a way that has thawed icy attitudes toward Silicon Valley companies. He has initiated meetings and jumped at the opportunity to testify at last month’s Senate hearing. And instead of protesting regulations, he has invited lawmakers to impose sweeping rules to hold the technology to account.
Mr. Altman has also taken his show on the road, delivering a similar message about A.I. on a 17-city tour of South America, Europe, Africa and Asia. In recent weeks, he has met with President Emmanuel Macron of France, Prime Minister Rishi Sunak of Britain and Ursula von der Leyen, president of the European Commission.
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Mr. Altman said in last month’s Senate hearing.
His charm offensive has put him in an important seat of influence. By engaging with lawmakers early, Mr. Altman is shaping the debate on governing A.I. and educating Washington on the complexities of the technology, especially as fears of it grow. Taking a page from recent history, he is also working to sidestep the pitfalls that befell social media companies, which are a constant target of lawmakers, and to pave the way for A.I.
His actions may help cement OpenAI’s position at the forefront of a debate on A.I. regulation. Microsoft, Google, IBM and A.I. start-ups have drawn battle lines on proposed rules and differ on how much government interference they want in their industry. The fissures have led other tech chiefs to plead their cases with the Biden administration, members of Congress and global regulators.
So far, Mr. Altman’s strategy appears to be working. U.S. lawmakers have turned to him as an educator and adviser. Last month, he gave a briefing on ChatGPT to dozens of members of the Senate Select Committee on Intelligence and the House A.I. caucus. He has proposed the creation of an independent regulatory agency for A.I., licensing of the technology and safety standards.
“I have a lot of respect for Sam,” said Senator Mark Warner, Democrat of Virginia, who hosted Mr. Altman for dinner with more than a dozen other senators last month.
But how long such good will can last is uncertain. Some lawmakers cautioned against becoming overly reliant on Mr. Altman and other tech leaders to educate them on the explosion of new A.I. technologies.
“He does seem different, and it was nice for him to testify,” said Senator Josh Hawley, the ranking Republican at the Senate hearing. “But I don’t think we ought to be too laudatory of his company just yet.”
OpenAI said that with the benefit of learning from the tech industry’s past mistakes, it wanted to bridge the knowledge gap between Silicon Valley and Washington on A.I. and help shape regulations.
“We don’t want this to be like previous technological revolutions,” said Anna Makanju, OpenAI’s head of public policy, who leads a small team of five policy experts. Mr. Altman, she said, “knows that this is an important period, so he tries to say yes to as many of these kinds of meetings as possible.”
Mr. Altman has been sounding the alarm over A.I.’s potential risks for years while also talking up the technology. In 2015, while leading the start-up incubator Y Combinator, he co-founded OpenAI with Elon Musk, the chief executive of Tesla, and others. He wrote in a blog post at the time that governments should regulate the most powerful tools of A.I.
“In an ideal world, regulation would slow down the bad guys and speed up the good guys,” he wrote.
Mr. Altman has long held the view that it is better to engage early with regulators, Ms. Makanju said.
In 2018, when OpenAI published a statement on its mission, it promised to put a priority on safety, which implied the involvement of regulators, Ms. Makanju said. In 2021, when the company released DALL-E, an A.I. tool that creates images from text commands, it sent its chief scientist, Ilya Sutskever, to showcase the technology to lawmakers.
In January, Mr. Altman traveled to Washington to speak at an off-the-record breakfast with members of Congress organized by the Aspen Institute. He answered questions and previewed GPT-4, OpenAI’s new A.I. engine, which he said was built with better security features.
Mr. Altman has surprised some lawmakers with his candor about A.I.’s risks. In a meeting with Representative Ted Lieu, Democrat of California, at OpenAI’s San Francisco offices in March, Mr. Altman said A.I. could have a devastating effect on labor, reducing the workweek from five days to one.
“He’s very direct,” said Mr. Lieu, who holds a degree in computer science.
Mr. Altman visited Washington again in early May for a White House meeting with Ms. Harris and the chief executives of Microsoft, Google and the A.I. start-up Anthropic. During the trip, he also discussed regulatory ideas and concerns about China’s development of A.I. with Senator Chuck Schumer of New York, the majority leader.
In mid-May, Mr. Altman returned for a two-day marathon of public and private appearances with lawmakers, starting with a dinner hosted by Mr. Lieu and Representative Mike Johnson, Republican of Louisiana, with 60 House members at the Capitol. Over a buffet of roast chicken, potatoes and salad, he wowed the crowd for two and a half hours by showing ChatGPT and answering questions.
“Write a bill about naming a post office after Representative Ted Lieu,” he typed into the ChatGPT prompt that appeared on a big screen, according to Mr. Lieu. “Write a speech for Representative Mike Johnson introducing the bill,” he wrote as a second prompt.
The answers were convincing, Mr. Lieu said, and elicited chuckles and raised eyebrows from the audience.
The next morning, Mr. Altman testified at the Senate hearing about A.I.’s risks. He presented a list of regulatory ideas and supported proposals by lawmakers, including Mr. Blumenthal’s idea of consumer risk labels on A.I. tools that would be akin to nutrition labels for food.
“I’m so used to witnesses coming in and trying to persuade us with talking points,” Mr. Blumenthal said. “The difference with Sam Altman is that he is having a conversation.”
After the hearing, which lasted three hours, Mr. Altman briefed the Senate Intelligence Committee on A.I.’s security risks. That evening, he spoke at Mr. Warner’s dinner at the Harvest Tide Steakhouse on Capitol Hill. (Mr. Altman is vegetarian.)
He has also benefited from a partnership between OpenAI and Microsoft, which has invested $13 billion in the start-up. Brad Smith, Microsoft’s president, said he and Mr. Altman provided each other feedback on drafts of memos and blog posts. The companies also coordinated messaging ahead of the White House meeting, Mr. Smith said.
“Any day that we can actually support each other is a good day because we’re trying to do something together,” he said.
Some researchers and competitors said OpenAI had too much influence over debates on A.I. regulations. Mr. Altman’s proposals on licensing and testing could benefit more established A.I. companies like his, said Marietje Schaake, a fellow at the Institute for Human-Centered Artificial Intelligence at Stanford and a former member of the European Parliament.
“He’s not only an expert, he’s a stakeholder,” Ms. Schaake said.