The Daily Dish

Open-source AI Central to Innovation

Last week, the CEOs of Meta and Spotify published a joint statement arguing that the European Union’s (EU) artificial intelligence (AI) regulations are holding back innovation in the industry. In particular, they argued that these regulations are based on unclear and complex rules, with inconsistent guidelines on how to comply with them, raising uncertainty for developers. As a result of these regulations, Meta – a company committed to making many of its AI models open-source – has been forced to delay the open release of its advanced Llama multimodal model in Europe, harming companies such as Spotify that would want to leverage Meta’s AI technology to enhance their products.

Open-source AI – AI systems with their components available for further study, use, modification, and sharing – has the potential to reduce the gap between those who have the means to develop AI systems and those who do not. AI startups, public institutions, researchers, and tech companies rely on these open systems to build their own models at lower cost and without having to commit significant investment in research and development, offering great potential to drive technological progress.

But to do so, regulations must foster industry stability. And while the CEOs’ joint statement highlights the problems in Europe, regulatory uncertainty and complexity are not the exclusive domain of the EU.

California’s recent legislation to regulate AI, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, may in fact replicate the same problems as the EU’s regulatory environment and hamper the country’s dynamic and innovative landscape for open-source AI models. The intention of such legislation may be noble: setting AI safety measures and holding big AI developers accountable for modifications of their systems to mitigate the potential catastrophic harms of AI. But with technology companies, researchers, and even lawmakers expressing concerns about the bill, attempts to amend its more concerning provisions have largely failed, raising alarm and uncertainty among developers about the future of open AI development. If California follows this path, U.S. tech companies may find themselves facing a largely uncertain, EU-like regulatory environment. In response, they may think twice about investing billions in cutting-edge open AI research, knowing that any misstep could lead to lawsuits or worse.

As states begin to delve deeper into the regulation of AI models, it may be time for Congress to set a clear national standard. While it’s crucial to address safety concerns – few would disagree on the importance of protecting the public from the risks that AI might pose – a well-crafted federal framework should provide the consistency and clarity that tech companies need to develop models without fear of significant liability. By learning from the EU’s and California’s regulatory challenges, Congress can establish a unified approach that, if done right, could strike the balance between mitigating potential harms and allowing AI to flourish.

Fact of the Day

Presidential candidate Kamala Harris’s Medicare for All proposal is estimated to increase federal spending by $44 trillion between 2026 and 2035.
