Top AI labs have minimal defense against espionage, researchers say – Washington Times

Published June 10, 2024

Some of the nation's top artificial intelligence labs have pitiful security measures in place to protect them from espionage, leaving potentially dangerous AI models exposed to thieves and spies, according to U.S. government-backed researchers.

The firm Gladstone AI, which advises federal agencies on AI issues, recently met with insiders from OpenAI, Google DeepMind, Anthropic and other leading AI outfits as part of a sweeping probe into security measures across the sector.

While declining to attribute specific problems to specific labs in order to protect the investigators' sources, the Gladstone AI team told The Washington Times that it found lab assessments of security issues that were totally untethered from the reality of national security threats.

"In general, what you will find is security practices that I think it's fair to say security professionals would be really concerned about if they saw them," said Jeremie Harris, Gladstone AI's CEO. "For example, just to give one, folks taking their laptops out down the street to the nearest Starbucks and just hacking away on them."

Mr. Harris and his brother, Edouard Harris, Gladstone AI's technology chief, conducted the probe into AI safety in coordination with the State Department. Investigators found minimal security measures and cavalier attitudes toward safety among AI professionals.

While remote work at Starbucks is not outlandish in the American tech sector's startup culture, it is a much bigger problem for AI researchers playing with powerful models outside of a secure environment.

Gladstone AI learned of individuals running experiments with AI models on laptops without proper supervision, which could fuel fears that experimenters may eventually lose the ability to restrain the AI from doing damage.

Security officials at the labs also do not seem to fully understand the threat posed by foreign espionage, the researchers said.

Edouard Harris said one insider relayed a conversation between an AI worker and a security official at a lab that demonstrated a lack of awareness about tech theft by China.

The security official told the AI worker that the lab had no concern about Chinese theft of the lab's work because it had not seen any models emerge in China that resembled that work, according to Edouard Harris.

The Gladstone AI team was perplexed that leading labs expected suspected China-sponsored thieves to publicly flaunt what they stole.

"The idea is, 'We haven't seen any public leaderboard results come out of China that are [our] level quality, so therefore they must not have stolen,'" Jeremie Harris said.

The State Department told The Washington Times that its efforts to understand AI research and development are ongoing, that it is working to mitigate harm, and that Gladstone AI's assessment does not explicitly represent the views of the U.S. government.

The department said in a statement that Gladstone AI's work is "just one of many sources we and the interagency may reference as we work to assess the risks associated with artificial intelligence."

"Recognizing there are differing views in the AI community on the nature and urgency of risk posed by AI, the department will continue to work within the interagency to address concerns associated with AI safety and security," the State Department said.

Some labs are not blind to safety and security issues in their work.

For example, Google DeepMind told House lawmakers last year that the company was rethinking how to publish and share its work because of concerns about how China would use the information.

The Big Tech company's AI research team appeared more focused on research security than its colleagues in academia, The Times learned from a source close to the lawmakers' meeting with Google DeepMind at its U.K. headquarters last year.

Asked about Gladstone AIs findings, Google DeepMind said it takes security seriously.

"Our mission is to develop AI responsibly to benefit humanity, and safety has always been a core element of our work," the company said in a statement late last week. "We will continue to follow our AI principles and share our research best practices with others in the industry as we advance our frontier AI models."

OpenAI and Anthropic did not respond to requests for comment.

The concern extends beyond major AI labs. Edouard Harris said security is far weaker at smaller AI labs than at hyperscalers such as Google and Microsoft.

"You will often hear folks say things like, 'Oh, we don't want to slow down AI progress or to take the time to do safety and security because then the Chinese will catch up.' That's an illusion," Edouard Harris said. "The reality is we have no margin, we have no leadership. The stuff that we are developing here in the United States is just being stolen on a routine basis."
