Insight

Future of AI Innovation: Considerations for Congress

Executive Summary:

  • In April 2024, a bipartisan group of senators introduced the Future of Artificial Intelligence Innovation Act (FAIIA), designed to spur U.S. innovation in artificial intelligence (AI) and emerging technologies; the bill has received broad support from both sides of the aisle.
  • The FAIIA would take several positive steps toward responsible AI development in the United States: establishing voluntary standards, creating AI testbeds, making federal datasets publicly available to researchers, creating grand challenges in AI to incentivize private research, and fostering international collaboration.
  • As many of the bill’s proposals come with potential challenges and drawbacks, legislators should work carefully to ensure the bill mitigates harms without impeding AI innovation and development.

Introduction

In April 2024, a bipartisan group of senators introduced the Future of Artificial Intelligence Innovation Act (FAIIA) of 2024 in an attempt to solidify U.S. leadership in AI development and use. Building upon Senators Maria Cantwell and Todd Young’s prior Future of AI Act, which established the National AI Advisory Committee, the FAIIA would implement several initiatives to jumpstart responsible AI development in the United States. These include establishing voluntary standards through an AI Safety Institute, creating cost-effective AI testbeds (facilities staffed with experts to evaluate AI models), and making public datasets available to assist researchers. The FAIIA would also seek to foster international collaboration on AI ethics and, through grand challenges in AI, encourage private companies to address national needs with AI solutions; these challenges would offer prizes or investments to firms that develop or commercialize AI advancing federal AI priorities.

While the FAIIA has received broad bipartisan support, legislators should tackle several key challenges within the bill to ensure it mitigates harm without impeding AI innovation and development. First, the AI Safety Institute should be inclusive, so that a diverse range of voices contributes to the establishment of safety standards. Second, lawmakers may wish to establish clear metrics for evaluating AI in testbeds, the safe and controlled environments where researchers can experiment with and evaluate AI models before deploying them in real-world applications. Third, the grand challenges must be structured to encourage competition, so that both large and small companies can participate. Fourth, AI tools, such as data cleaning and pre-processing tools, could be introduced to make better use of publicly available datasets, promoting efficiency and accuracy. Finally, lawmakers should establish well-defined rules for international AI collaboration to drive efficient and responsible AI development globally.

Provisions of the FAIIA of 2024

The FAIIA would implement several initiatives to boost U.S. AI development and leadership. First, the bill would formally create an AI Safety Institute to set standards for safe and ethical AI development. Additionally, the FAIIA would create new AI testbeds with the national laboratories, staffed with experts who would simulate real-world scenarios and assess AI models’ capabilities. To encourage private-sector participation, the bill would establish grand challenges in AI focused on tackling national problems with AI solutions. These challenges would provide prizes or investments to firms that successfully develop or commercialize AI solutions in furtherance of federal goals. Furthermore, the FAIIA would make federal science agency datasets public to accelerate research and innovation, giving AI researchers more data with which to train and improve models. Finally, the bill would create international alliances on AI to establish global standards and foster research collaboration. Together, these initiatives aim to strengthen U.S. AI advancement and ensure responsible and beneficial AI development.

Considerations for Congress

The FAIIA would create the National Institute of Standards and Technology (NIST) AI Safety Institute to foster economic growth through responsible AI development. The institute would collaborate with industry and government to create voluntary AI standards. Research suggests that such voluntary standards, along with opportunities for testing and feedback, can boost innovation by creating a clear and trusted environment for investment. The success of the standard-setting process depends on inclusivity, however: a diverse range of stakeholders, from ethicists to consumer groups, must be involved in developing the standards. To that end, the institute should hold public forums, ensure its leadership reflects this diversity, and communicate clearly to the public about its activities. Regular reviews and updates, using mechanisms such as sunset clauses, would allow the standards to adapt to the ever-changing nature of AI and society’s concerns. By following this comprehensive approach, the NIST AI Safety Institute could create robust, inclusive, and forward-looking standards.

Second, legislators should consider establishing clear metrics and benchmarks for evaluating models within the AI testbeds to ensure fairness and mitigate bias. Standardized evaluation helps identify and address biases that may creep into models during development. The testbeds established by the bill would allow researchers to evaluate AI models in controlled environments that mimic situations such as traffic patterns or complex deliveries. Research points to another advantage: testing AI models in dedicated testbeds lowers costs. The testbeds may fail to achieve the desired results, however, and would be strengthened by clear metrics and benchmarks for evaluating AI models. These would allow researchers to compare different approaches objectively and make informed decisions about which models are best suited for specific applications.

Third, the bill would create grand challenges in artificial intelligence, in which contestants compete to find solutions to pressing economic, political, and social issues. To incentivize solving these problems, prize money gathered from corporate sponsors would be distributed among the participants with the most innovative solutions. These challenges could spur private AI innovation, but the program should be structured so that firms of all sizes can participate. Competitions such as those envisioned by the bill would leverage private-sector expertise and resources without overreliance on government-funded research. Additionally, competitions can strategically target critical national priorities, ensuring AI advancements address pressing issues; for instance, a competition might focus on AI for border security or for optimizing shipping routes. This targeted investment could yield benefits faster than scattershot research funding. Yet the initiative would need to allow small and medium-sized businesses to participate in the challenges. Research shows that larger firms typically benefit more from incentive programs, creating an innovation gap. While the program should not favor small or medium-sized firms outright, legislators could consider support structures such as grants, mentorships, and tax breaks designed specifically for smaller firms.

Fourth, the FAIIA’s provisions making public datasets available to AI researchers could significantly boost innovation, but Congress should consider making tools to analyze these datasets available as well. This would benefit researchers by removing data collection bottlenecks and fostering collaboration through shared datasets. For example, making a dataset such as IncluSet publicly available can promote innovation in areas such as accessibility research, and studies show public datasets can improve AI model accuracy and transparency in many sectors, particularly health care. To maximize this potential, the initiative should be complemented by the development of data cleaning and pre-processing tools. Data cleaning tools would make data more usable and reliable by identifying and correcting errors, inconsistencies, and missing values. Pre-processing tools would address the structure and format of the data, ensuring it is compatible with the testbeds and ready for analysis.
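To make the distinction between cleaning and pre-processing concrete, the sketch below shows the kinds of routine fixes such tools typically automate. It uses Python with the pandas library and an entirely hypothetical dataset; the bill does not specify any particular tooling, so this is an illustration of the category of tool, not a prescribed implementation.

```python
import pandas as pd

# Hypothetical raw dataset exhibiting common quality problems:
# inconsistent casing, duplicate rows, and a missing value.
raw = pd.DataFrame({
    "state": ["WA", "wa", "TX", "TX", None],
    "population_millions": [7.7, 7.7, 29.5, 29.5, 3.1],
})

# Cleaning: normalize categorical values, drop exact duplicates,
# and remove rows missing a required field.
clean = (
    raw.assign(state=raw["state"].str.upper())
       .drop_duplicates()
       .dropna(subset=["state"])
       .reset_index(drop=True)
)

# Pre-processing: enforce a consistent schema (types and column
# order) so downstream analysis or model training receives
# uniform input.
clean["population_millions"] = clean["population_millions"].astype(float)

print(clean)
```

In this toy example, the five raw rows reduce to two usable ones; real data cleaning pipelines apply the same ideas (normalization, deduplication, missing-value handling) at scale.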

Finally, by establishing alliances and fostering research collaboration, the FAIIA aims to create unified standards that accelerate progress. To strengthen the international collaboration the bill proposes, legislators could establish a multi-tiered collaboration structure that addresses the delays inherent in global coordination, ensuring efficient decision-making alongside broad international input.

Conclusion

The FAIIA would take several positive steps toward responsible AI development in the United States: establishing voluntary standards, creating AI testbeds, making federal datasets publicly available to researchers, creating grand challenges in AI to incentivize private research, and fostering international collaboration. While these are valuable initiatives, Congress must ensure the legislation addresses inclusivity in safety standards, sets clear metrics for testing, provides for fair participation among companies of all sizes, makes efficient use of public data, and sets clear rules for international collaboration, all of which are crucial to maximizing the bill’s effectiveness.

 
