The Daily Dish

Goldilocks AI Regulation

There are two reasons to write about copyright protection and artificial intelligence (AI). The first is that I greatly appreciate a well-tortured acronym, and I haven’t seen anything like the Content Origin Protection and Integrity from Edited and Deepfake Media Act (COPIED Act) in a long time. It’s a beauty and must be celebrated.

The second is that there is – at least to this inexpert eye – a pattern developing in the nascent business of building a regulatory framework for equally nascent AI applications. It’s a lot like the Goldilocks story: A law or regulation gets put forward out of a fear – rational and identifiable or not – that poor Goldilocks will be harmed if AI regulation is too weak or nonexistent. But we also fear that a draconian regulation will stifle valuable AI innovation. And we are leaving these decisions in the hands of elected officials and agency staffs that may not be particularly versed in the issues. In effect, we keep asking Goldilocks: Too little? Too much? Or just right?

Angela Luna’s analysis of the COPIED Act carries a flavor of this. We don’t want AI to abuse the creative efforts of individuals:

The COPIED Act would mainly direct the National Institute of Standards and Technology (NIST) to develop standards that facilitate the detection of synthetic content and prohibit the use of protected material to train AI models or generate AI content, giving creators more control over their material. Because these provisions would give creators and other users greater control over their images, likenesses, and copyrighted works, the bill has received broad support from large stakeholders in the creative industries, who have praised in particular the legislation’s provisions to allow owners of copyrighted works and state attorneys general to take legal action against those who misuse copyrighted content.

Yes, by all means, protect creative work, especially my creative work! But, of course, you can go too far, putting all access to copyrighted material off-limits and, in the process, crossing some existing copyright lines and forgoing some new AI-generated material:

First, by essentially prohibiting the training of AI models on data with content provenance marks, the bill would undermine copyright law and could raise First Amendment issues, as it eliminates the fair use right that allows for certain uses of copyrighted material. Second, by making it illegal to knowingly alter or disable content provenance information, the bill could inadvertently discourage legitimate and valuable forms of expression, such as parody and satire, as well as other forms of speech designed to expand on existing works. Finally, by increasing potential liability for AI developers, the bill could harm innovation and development of new models, as firms no longer have the necessary training data to improve their systems.

At any point in time, this trade-off will exist and will bedevil policymakers. But, especially so early in the process, policymakers would be well served to let the knowledge base and court cases accumulate before undertaking strong AI regulation. Luna has a more complete exploration of the issues, but for the moment, can Goldilocks refrain from charging into the cottage and testing all the porridge?

Fact of the Day

Presidential candidate Kamala Harris’ Medicare for All proposal would cover all undocumented immigrants residing in the United States, costing an estimated $1.8 trillion.
