
AI Export Controls: Balancing National Security and AI Innovation

Executive Summary

  • Recent reports have revealed that, despite export controls, foreign companies affiliated with U.S. adversaries such as China can access restricted artificial intelligence (AI) chips via cloud-based services, as well as advanced AI models produced domestically, highlighting critical regulatory gaps.
  • To address these loopholes, two bipartisan groups of lawmakers this year introduced the Remote Access Security Act, which would restrict remote access to advanced AI chips, and the Enhancing National Frameworks for Overseas Critical Exports Act, which would give the Department of Commerce’s Bureau of Industry and Security authority to control exports of covered AI systems.
  • While these bills would likely restrict the ability of foreign adversaries to develop military equipment and AI technologies, they would also likely limit innovation and development of AI capabilities in the United States, and lawmakers should carefully weigh these trade-offs as they consider new export-control legislation.

Introduction

Recent reports have revealed that, despite export controls, foreign companies affiliated with U.S. adversaries such as China can access restricted artificial intelligence (AI) chips via cloud-based services, as well as advanced AI models produced domestically. The United States has prioritized national security in its export-control policies to restrict foreign companies, particularly those from China, from advancing their AI and military capabilities. Specifically, over the last few years, the Department of Commerce’s Bureau of Industry and Security (BIS) has set up export-control policies to limit Chinese firms’ ability to access AI chips produced in the United States. These chips power high-level AI systems critical for defense, intelligence, and cybersecurity. Yet gaps remain in U.S. export controls, notably those that permit access to restricted AI chips through cloud services and to advanced AI models.

To address these gaps in the regulations, two bipartisan groups of House lawmakers this year introduced the Remote Access Security Act, which would limit cloud access to advanced AI chips by broadening the scope of the U.S. export-control system, and the Enhancing National Frameworks for Overseas Critical Exports Act (ENFORCE Act), which would give the BIS authority to control exports of covered AI systems.

While these bills would likely limit the ability of foreign adversaries to access AI-related technology produced in the United States, overly strict and broad AI export-control policies would also likely limit innovation and development of AI capabilities in the United States. This would, in turn, significantly disrupt AI development and AI usage in sectors not tied to national security, such as research at U.S. universities. To better balance these interests, lawmakers could design export controls with well-defined criteria that restrict access to the core models – models that serve as the fundamental building blocks for various AI applications – and potentially to specific use cases that pose genuine national security threats, while permitting access to commercial applications and embedded uses of AI that carry minimal risk. This would allow regulators to focus on specific risk cases rather than general applications that may only theoretically be misused.

Current AI Export Controls, Gaps, and Proposed Legislation

While no laws currently control AI exports specifically, authorities such as the Export Control Reform Act (ECRA) of 2018 restrict the export of dual-use items – commodities, software, or technology that have both commercial and military applications. ECRA is administered by the BIS, which holds the primary authority to approve or deny export licenses for AI technology and sets guidelines to control its export. Licensing is a complex, multi-agency process – often involving the Departments of State, Defense, and Energy – with rounds of requests and referrals that may delay approval. Over the past two years, the BIS has released a series of export-control rules to restrict China’s ability to obtain advanced computing chips used for AI development. While BIS can limit the transfer of advanced semiconductors to China and other adversaries, it lacks a clear legal framework to govern the export of AI systems and cloud-based access to AI chips.

Recent reports have revealed Chinese entities’ use of cloud computing services to access restricted U.S. AI chips. Using the cloud, foreign companies can avoid the direct purchase of AI chips by accessing computing power online and paying on a usage basis. Additionally, other reports have highlighted that China and other countries, including Russia, Iran, and North Korea, are using commercial AI models to advance their military capabilities and refine their cyberattacks. For example, as Reuters recently reported, Chinese research institutions affiliated with the People’s Liberation Army have leveraged Meta’s publicly available Llama model to create an AI tool capable of gathering and processing intelligence for critical military decision-making.

To address these gaps, two bipartisan groups of lawmakers this year introduced two bills. First, the Remote Access Security Act would expand the U.S. export-control system to regulate remote access to AI technology – limiting the ability of foreign individuals to access U.S.-controlled “items” via networks such as the Internet or cloud computing. By extending the ECRA to include remote access, the bill would allow the BIS to issue licenses and enforce penalties for unauthorized access to controlled items through networks. These changes are intended to better secure U.S. technologies used for both civilian and military purposes. Second, the ENFORCE Act aims to strengthen and modernize the ECRA by giving the BIS the authority to require licenses for exporting AI systems that may pose national security risks and by allowing the BIS to implement “U.S. persons controls,” which could, for example, require U.S. AI labs to implement security checks before collaborating with AI labs linked to the Chinese military.

Concerns with AI Export Controls

While these bills would likely limit the ability of foreign adversaries to develop military equipment and AI technologies, the AI export-control policies may be overly broad in that they do not differentiate between advanced AI systems with potential national security risks and commercially embedded AI applications that would likely not pose such risks. Thus, the policies would almost certainly have unintended, stifling effects on U.S. AI innovation.

First, while some export restrictions are justifiable for national security, they may also limit AI’s use in various sectors and applications. The wide-ranging applications of AI and its potential for both beneficial and harmful uses complicate the task of identifying risky technologies without hindering innovation in safe AI applications. Consequently, broad export controls could mistakenly sweep in technologies or embedded AI applications that lack relevance to national security. For example, under the ENFORCE Act, “covered” artificial intelligence systems include those “exhibit[ing], or could foreseeably be modified to exhibit, capabilities in the form of high levels of performance at tasks that pose a serious risk to the national security and foreign policy of the United States or any combination of those matters.” This broad language could inadvertently encompass AI systems embedded in commercial or civilian sectors, such as facial recognition technologies, which can be employed for security in airports and retail stores and for customer identification in banking and financial services, but which also have military applications for identifying and tracking potential threats and conducting surveillance in conflict zones.

Lawmakers should consider narrowing the language in export-control legislation to better target applications, or use cases, that pose a specific, identifiable security risk, such as AI-driven systems capable of autonomous operation (e.g., drones) and targeting systems, or simply block access to the underlying models while allowing companies to offer applications overseas.

Second, because the ENFORCE Act would require government approvals before exporting, it would introduce extra compliance steps that could slow innovation in the early-stage AI research of U.S. tech companies. For example, by covering AI systems based on their “technical similarity” to models with security risks or capabilities that might emerge unexpectedly, the ENFORCE Act could reach AI models still in developmental stages. If licensing approvals require the input of multiple agencies, causing long wait times, U.S. tech companies would face higher compliance costs that interfere with the development of their technologies – potentially from research and development to model creation – slowing the pace of AI advancements.

Third, adopting overly broad new export-control restrictions aimed at blocking cloud-based access to AI computation and AI technologies also risks impairing AI research at U.S. universities. Typically, universities and high-technology research and development institutions are required to obtain what are called deemed-export licenses, which cover the sharing or release of controlled technology or source code to a foreign person within the United States. According to a report from the National Science Foundation, in 2021, 57 percent of master’s and 83 percent of doctoral degrees earned by international students were in science and engineering fields, and international students earned 59 percent of the degrees awarded in computer sciences, 60 percent in engineering, and 54 percent in mathematics and statistics. The report also states that the higher education sector is the largest performer of U.S. basic research (46 percent), covering the theoretical foundations of AI, such as developing new algorithms and exploring how machines can learn and reason. Because U.S. universities host many international students working in key research areas, and much research is conducted in academic settings, these bills would significantly increase compliance costs, as universities would have to obtain licenses for AI chips, models, and cloud services for their foreign students. Thus, overbroad export controls may shift AI research primarily toward industry – rather than fostering a balanced approach in which academia can contribute to research in the public interest – which could weaken the overall innovation and diversity of the U.S. technology sector.

Conclusion

In crafting legislation to address AI-related national security concerns, policymakers should strive to strike the best possible balance between mitigating foreign security risks and allowing U.S. AI innovation to thrive. Doing so will require AI export controls that more precisely differentiate between AI embedded in general-use applications and models that may pose security risks. While achieving this balance will take time, it would go a long way toward ensuring that U.S. AI development remains competitive and robust, which is itself a national security priority.