OpenAI and Other Tech Giants Will Have to Warn the US Government When They Start New AI Projects
When OpenAI’s ChatGPT took the world by storm last year, it caught many power brokers in both Silicon Valley and Washington, DC, by surprise. The US government should now get advance warning of future AI breakthroughs involving large language models, the technology behind ChatGPT.
The Biden administration is preparing to use the Defense Production Act to compel tech companies to inform the government when they train an AI model using a large amount of computing power. The rule could take effect as soon as next week.
The new requirement will give the US government access to key information about some of the most sensitive projects inside OpenAI, Google, Amazon, and other tech companies competing in AI. Companies will also have to provide information on safety testing being done on their new AI creations.
OpenAI has been coy about how much work has been done on a successor to its current top offering, GPT-4. The US government may be the first to know when work or safety testing actually begins on GPT-5. OpenAI did not immediately respond to a request for comment.
“We’re using the Defense Production Act, which is authority that we have because of the president, to do a survey requiring companies to share with us every time they train a new large language model, and share with us the results—the safety data—so we can review it,” Gina Raimondo, US secretary of commerce, said Friday at an event held at Stanford University’s Hoover Institution. She did not say when the requirement will take effect or what action the government might take on the information it received about AI projects. More details are expected to be announced next week.
The new rules are being implemented as part of a sweeping White House executive order issued last October. The executive order gave the Commerce Department a deadline of January 28 to come up with a scheme whereby companies would be required to inform US officials of details about powerful new AI models in development. The order said those details should include the amount of computing power being used, information on the ownership of data being fed to the model, and details of safety testing.
The October order calls for work to begin on defining when AI models should require reporting to the Commerce Department but sets an initial bar of 100 septillion (10²⁶) floating-point operations, or flops, and a level 1,000 times lower for large language models working on DNA sequencing data. Neither OpenAI nor Google has disclosed how much computing power it used to train its most powerful models, GPT-4 and Gemini, respectively, but a congressional research service report on the executive order suggests that 10²⁶ flops is slightly beyond what was used to train GPT-4.
Raimondo also confirmed that the Commerce Department will soon implement another requirement of the October executive order, one requiring cloud computing providers such as Amazon, Microsoft, and Google to inform the government when a foreign company uses their resources to train a large language model. Foreign projects must be reported when they cross the same initial threshold of 100 septillion flops.