Anthropic’s CEO Dario Amodei is worried about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the typical ones raised about DeepSeek sending user data back to China.
In an interview on Jordan Schneider’s ChinaTalk podcast, Amodei said DeepSeek generated rare information about bioweapons in a safety test run by Anthropic.
DeepSeek’s performance was “the worst of basically any model we’d ever tested,” Amodei claimed. “It had absolutely no blocks whatsoever against generating this information.”
Amodei stated that this was part of evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team looks at whether models can generate bioweapons-related information that isn’t easily found on Google or in textbooks. Anthropic positions itself as the foundation model provider that takes safety seriously.
Amodei said he didn’t think DeepSeek’s models today are “literally dangerous” in providing rare and dangerous information but that they might be in the near future. Although he praised DeepSeek’s team as “talented engineers,” he advised the company to “take seriously these AI safety considerations.”
Amodei has also supported strong export controls on chips to China, citing concerns that they could give China’s military an edge.
Amodei didn’t clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give more technical details about these tests. Anthropic didn’t immediately reply to a request for comment from TechCrunch. Neither did DeepSeek.
DeepSeek’s rise has sparked concerns about its safety elsewhere, too. For example, Cisco security researchers said last week that DeepSeek R1 failed to block any harmful prompts in its safety tests, achieving a 100% jailbreak success rate.
Cisco didn’t mention bioweapons but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It’s worth mentioning, though, that Meta’s Llama-3.1-405B and OpenAI’s GPT-4o also had high failure rates of 96% and 86%, respectively.
It remains to be seen whether safety concerns like these will make a serious dent in DeepSeek’s rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms — ironically enough, given that Amazon is Anthropic’s biggest investor.
On the other hand, there’s a growing list of countries, companies, and especially government organizations like the U.S. Navy and the Pentagon that have started banning DeepSeek.
Time will tell if these efforts catch on or if DeepSeek’s global rise will continue. Either way, Amodei says he does consider DeepSeek a new competitor that’s on the level of the U.S.’s top AI companies.
“The new fact here is that there’s a new competitor,” he said on ChinaTalk. “In the big companies that can train AI — Anthropic, OpenAI, Google, perhaps Meta and xAI — now DeepSeek is maybe being added to that category.”
Source: techcrunch.com