In addition to prohibiting the removal of AI watermarks, the new bill would impose a slew of other restrictions on the technology.
Meaningful AI restrictions are finally on the horizon: a newly proposed measure from the US Senate would make it unlawful to remove watermarks from AI-generated content.
The potential for generative AI to cause major problems, particularly during an election year, cannot be denied. That is why watermarks on AI-generated content are so vital: they let consumers recognize when something is not genuine.
Fortunately, legislators have begun to take this type of threat seriously, proposing legislation that would make removing AI watermarks unlawful.
A New Bipartisan Bill Could Soon Ban AI Watermark Removal From Content
A new bipartisan bill from the US Senate, the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), proposes a slew of measures to control generative AI technologies.
One of the bill’s primary aims is to create a clear distinction between real content and AI-generated content through a watermarking procedure that would not only clearly label such content but also make tampering with those labels illegal.
“Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those in the creative community, to imitate their likeness without their consent and profit off of counterfeit content. The COPIED Act takes an important step to better defend common targets like artists and performers against deepfakes and other inauthentic content.” – Marsha Blackburn, US Senator from Tennessee
Given the growing number of scams that use celebrity likenesses to sell products, steal data, and generally make the online world a worse place to be, this type of rule is a much-needed departure from the largely unregulated state of the technology in recent years.
So What Exactly Is the COPIED Act?
The COPIED Act is a bipartisan measure championed by Maria Cantwell (D-Washington), Marsha Blackburn (R-Tennessee), and Martin Heinrich (D-New Mexico) to fight the growing problem of deepfakes in the era of generative AI technology.
In addition to prohibiting the removal of watermarks from AI-generated content, the new measure would impose a slew of other restrictions on the technology. These are some of the measures the bill would establish:
- Establish transparency guidelines.
- Allow individuals to sue violators.
- Put creators in control of content.
The bill has already received significant support from organizations around the country, including SAG-AFTRA, the National Music Publishers’ Association, the News/Media Alliance, the National Association of Broadcasters, and Public Citizen.
The Potential Dangers of AI Deepfakes
Some may not believe that this type of law is required. After all, AI image generators are mainly for making funny dog images and professional headshots, right? Okay, not quite.
In recent years, scammers have used the growing sophistication of deepfake technology to cause significant harm to people around the world. MrBeast and Taylor Swift are two celebrities whose likenesses have been used to promote phony products, steal user data, and swindle unsuspecting users online.
While watermarks will ideally become the norm for this type of content, the legislation is still in its infancy. Until then, it's worth learning how to identify AI-generated material yourself before falling victim to these unpleasant scams.