OpenAI is touting new artificial intelligence models that the company claims are capable of “reasoning” at the level of doctorate students, even as questions remain about the powerful tools’ safety.
The company unveiled its “OpenAI o1” model this week, and its chief executive joined other AI leaders at the White House on Thursday as concerns about the technology’s security persist in Washington.
OpenAI is promoting its tools as scoring on tests at levels comparable to well-educated humans.
“We trained these models to spend more time thinking through problems before they respond, much like a person would,” OpenAI said in a preview of the models on its website. “Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.”
The company said its next model update “performs similarly to Ph.D. students on challenging benchmark tasks in physics, chemistry and biology.”
Among those that OpenAI expects to benefit most from the models are health care researchers, physicists, and developers.
As news of the o1 model spread, others in the AI industry appeared rankled by marketing hype ascribing human characteristics to technology.
Hugging Face CEO Clement Delangue on Thursday criticized descriptions of AI systems as “thinking,” saying such systems process and predict in ways similar to classical computers and search engines.
“Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it’s more clever than it is,” Mr. Delangue said on X.
Nevertheless, security concerns persist about powerful AI tools and the models undergirding them.
The White House huddled with OpenAI CEO Sam Altman on Thursday alongside executives from major AI companies, including Nvidia, Microsoft, Meta and Google’s parent company Alphabet.
White House press secretary Karine Jean-Pierre told reporters that participants at the meeting discussed public-private collaboration, the workforce and the permitting needs of the industry.
OpenAI will likely face tough questions about its products and ambitions when an ex-employee testifies before the Senate Judiciary Committee on Sept. 17. William Saunders, the former employee, signed an open letter earlier this year asserting a right to warn about advanced AI.
OpenAI has looked to smooth over national security concerns about its business. The company’s board added retired Army Gen. Paul M. Nakasone earlier this year, after he left the helm of the National Security Agency and U.S. Cyber Command.
• This story is based in part on wire service reports.