OpenAI, Anthropic, and Google Urge Action as U.S. AI Lead Narrows

In submissions to the U.S. government concerning its technological leadership in AI, American artificial intelligence companies OpenAI, Anthropic, and Google warn that America’s lead is “not broad and is shrinking” as Chinese models such as DeepSeek R1 demonstrate increasingly sophisticated capabilities.
Filed in March 2025, these submissions highlight national security risks, economic competitiveness, and the strategic regulatory frameworks needed to maintain U.S. leadership amid intensifying global competition and China’s rapid advancement in the field. Anthropic and Google submitted their responses on March 6, 2025, while OpenAI’s submission followed on March 13, 2025.
The China Challenge and DeepSeek R1
The emergence of China’s DeepSeek R1 model has drawn significant attention from major U.S. AI developers, who see it not as a model surpassing American technology but as compelling evidence that the technology gap is closing rapidly.
OpenAI warned explicitly that “DeepSeek shows that our lead is not broad and is shrinking,” describing the model as “simultaneously state-subsidized, state-controlled, and freely available,” a combination the company believes is particularly threatening to U.S. interests and to global AI development.
According to OpenAI’s analysis, the risks posed by DeepSeek parallel those of Chinese telecom giant Huawei. “As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases, as the CCP may compel DeepSeek to manipulate its models to cause harm,” OpenAI stated.
The company also raised data privacy and security concerns, noting that Chinese regulations could require DeepSeek to share user data with the government. Such data sharing could enable the Chinese Communist Party to develop more advanced AI systems aligned with state interests while compromising individual privacy.
Anthropic’s assessment focused mainly on biosecurity. Its evaluation found that DeepSeek R1 “complied with answering most biological weaponization questions, even when formulated with a clearly malicious intent.” This willingness to provide potentially dangerous information stands in contrast to the safety measures implemented by leading U.S. models.
Anthropic’s own submission reinforced the urgency of the warning: “While the United States remains the leader in AI today, DeepSeek shows that our lead is not broad and is shrinking.”
Both companies frame the competition in ideological terms, with OpenAI describing a contest between U.S.-led “democratic AI” and China’s “autocratic, authoritarian AI.” They argue that DeepSeek’s reported willingness to generate “instructions for illicit and harmful activities such as identity fraud and intellectual property theft” reflects a fundamentally different ethical approach to AI development between the two countries.
The emergence of DeepSeek R1, achieved despite U.S. export controls on advanced semiconductors, marks an important milestone in the global AI race and underscores the urgency of coordinated government action to maintain American leadership in the field.
National Security Implications
All three companies’ submissions highlight significant national security concerns arising from advanced AI models, though each approaches these risks from a different angle.
OpenAI’s warning centers on CCP influence over Chinese AI models like DeepSeek. The company stressed that Chinese regulations could force DeepSeek to “compromise critical infrastructure and sensitive applications” and require it to share user data with the government. Such data sharing could help China develop more sophisticated AI systems aligned with its national interests, posing both immediate privacy concerns and long-term security threats.
Anthropic focuses on the biosecurity risks posed by advanced AI capabilities regardless of country of origin. In a particularly striking disclosure, the company revealed: “Our most recent system, Claude 3.7 Sonnet, demonstrates concerning improvements in its capacity to support aspects of biological weapons development.” This candid admission underscores the dual-use nature of advanced AI systems and the need for robust safeguards.
Anthropic also identified what it called a “regulatory gap in U.S. chip restrictions” involving NVIDIA’s H20 chips. Although these chips meet the reduced performance thresholds for export to China, they “excel at text generation (‘sampling’), a fundamental component of the advanced reinforcement learning approaches critical to current frontier model capabilities.” Anthropic urged “immediate regulatory action” to close this potential loophole in the existing export control framework.
While acknowledging AI security risks, Google advocates a more balanced approach to export controls. The company warns that current AI export rules “could undermine economic competitiveness goals by imposing a disproportionate burden on U.S. cloud service providers…” Instead, Google recommends “balanced export controls that protect national security while enabling U.S. exports and global business operations.”
All three companies stress the need for stronger government evaluation capabilities. Anthropic calls for building “the federal government’s capacity to test and evaluate powerful AI models for national security capabilities” in order to better understand potential misuse by adversaries. This would involve preserving and strengthening the AI Safety Institute, directing NIST to develop security evaluations, and assembling teams of interdisciplinary experts.
Comparison Table: OpenAI, Anthropic, Google
Key area | OpenAI | Anthropic | Google
Primary concern | Political and economic threats from state-controlled AI | Biosecurity risks of advanced models | Balancing security with innovation
View of DeepSeek R1 | “State-subsidized, state-controlled, and freely available,” posing Huawei-like risks | Complied with “biological weaponization questions” posed with malicious intent | Less focus on DeepSeek specifically; more on the broader competition
National security priorities | CCP influence and data security risks | Biosecurity threats and chip export loopholes | Balanced export controls that do not burden U.S. providers
Regulatory approach | Voluntary partnership with the federal government; a single point of contact | Enhanced government testing capacity; tightened export controls | Unified federal framework; sector-specific governance
Infrastructure focus | Government adoption of frontier AI tools | Energy expansion for AI development (50 GW by 2027) | Coordinated action on energy and permitting reform
Distinctive proposals | Tiered export control framework promoting “democratic AI” | Immediate regulatory action on NVIDIA H20 chip exports to China | Industry access to publicly available data for model training
Economic Competitiveness Strategies
Infrastructure requirements, especially energy, are a key factor in maintaining U.S. AI leadership. “By 2027, training a single frontier AI model will require networked computing clusters drawing approximately five gigawatts of power,” Anthropic warns. The company proposes an ambitious national target of building 50 gigawatts of power dedicated to the AI industry by 2027, along with measures to streamline permitting and expedite approval of transmission lines.
OpenAI again frames the competition as an ideological contest between “democratic AI” and CCP-built “authoritarian AI.” Its vision of “democratic AI” emphasizes “a free market promoting free and fair competition” and “freedom for developers and users to use and direct our tools within appropriate safety guardrails.”
All three companies offer detailed recommendations for maintaining U.S. leadership. Anthropic emphasizes the importance of “strengthening American economic competitiveness” and ensuring that “AI-driven economic benefits are widely shared across society.” It advocates “securing and expanding U.S. energy supply” as a key prerequisite for keeping AI development within U.S. borders, warning that energy constraints could force developers overseas.
Google calls for decisive action to “enhance U.S. AI development,” focusing on three key areas: investment in AI, accelerated government AI adoption, and promotion of pro-innovation approaches internationally. The company stresses the need to “coordinate federal, state, local, and industry policy actions on transmission and permitting reform to meet energy needs,” alongside balanced export controls and “continued funding for foundational AI research and development.”
Google’s submission also highlighted the need for a unified federal AI framework that would preempt the patchwork of state regulations while ensuring industry access to publicly available data for model training. Its approach emphasizes sector-specific, risk-based AI governance and standards rather than broad, one-size-fits-all regulation.
Regulatory Recommendations
A unified federal approach to AI regulation is a consistent theme across all the submissions. OpenAI warned against “regulatory arbitrage created by U.S. states” and proposed a “holistic approach enabling voluntary partnership between the federal government and the private sector.” Its framework envisions oversight by the Department of Commerce, potentially giving AI companies a single point of contact with the government on security risks through a reimagined AI Safety Institute.
On export controls, OpenAI advocates a tiered framework designed to promote AI adoption among countries aligned with democratic values while restricting access for China and its allies. Anthropic similarly calls for “strengthening export controls to expand U.S. AI leadership” and “significantly improving the security of U.S. frontier laboratories,” the latter through enhanced collaboration with the intelligence community.
Copyright and intellectual property considerations figure prominently in both OpenAI’s and Google’s recommendations. OpenAI emphasizes the importance of preserving the fair use doctrine so that AI models can learn from copyrighted material without undermining the commercial value of existing works, warning that overly restrictive copyright rules could disadvantage U.S. AI companies relative to Chinese competitors. Google echoes this view, advocating “balanced copyright rules, such as fair use and text-and-data-mining exceptions,” which it describes as “critical to enabling AI systems to learn from prior knowledge and publicly available data.”
All three companies stress the need to accelerate government adoption of AI. OpenAI calls for an “ambitious government adoption strategy” to modernize federal processes and safely deploy frontier AI tools, specifically recommending the removal of adoption barriers such as outdated certification processes like FedRAMP, restrictive testing authorities, and inflexible procurement pathways. Anthropic likewise advocates “promoting rapid AI procurement across the federal government” to transform operations and strengthen national security.
Google recommends “simplifying outdated certification, authorization, and procurement practices” within government to accelerate AI adoption, highlighting the importance of effective public procurement rules and improved interoperability of government cloud solutions in promoting innovation.
Together, the submissions from these leading AI companies deliver a clear message: maintaining U.S. leadership in artificial intelligence will require coordinated federal action on multiple fronts, from infrastructure development and regulatory frameworks to national security protections and government modernization, especially as competition from China intensifies.