Lawmakers continued until next session two bills that would create penalties for the deceptive use of artificial intelligence software, amid a growing number of cases in which fake images and audio of people have surfaced.
Del. Michelle Lopes Maldonado, D-Manassas, a former tech lawyer with 20 years of experience, introduced House Bill 697. Sen. Adam P. Ebbin, D-Alexandria, introduced a similar measure. Both were continued to next year in the Senate Courts of Justice Committee.
The lawmakers wanted to expand the penalties for defamation, slander and libel to cover synthetic media, meaning AI-generated or digitally altered media. Someone found guilty would face a Class 1 misdemeanor charge and could also face a civil suit.
“The concern is that we are seeing people being tricked and scammed into first having images be presented to them that make them believe that it is accurate, true and something they should listen to, and then on the flip side of that the people that are having this synthetic media be used in ways to attack them, to undermine them,” Maldonado said.
Lawmakers are concerned about how AI-generated media might be used to misrepresent political candidates, especially with the presidential election looming. New Hampshire authorities recently opened a voter suppression investigation after robocalls used an AI-generated voice of President Joe Biden to discourage voters from participating in the state’s primary. “I think that we have seen over, through the last presidential election cycle forward to present day, the proliferation of deepfakes,” Maldonado said.
California, Michigan, Minnesota, Texas and Washington have already enacted legislation to regulate synthetic media in the absence of federal regulation, and most other states have introduced some type of proposal, according to the advocacy group Public Citizen.
Gov. Glenn Youngkin issued an executive order on AI last September that acknowledged the risks and opportunities of the emerging technology and noted the need to ensure public protections.
Many states see the need for public protection in two areas: elections and revenge porn, Maldonado said. But someone who unlawfully uses synthetic media has often also violated state code provisions on libel, slander and fraud, she said.
Virginia lawmakers tried a different approach. Rather than standalone bills, they proposed penalties “threaded” throughout the state code for the use of synthetic media in the commission of one of these crimes. It is a way to create a “sufficient penalty” to help discourage people from generating fake media, Maldonado said. “I think we are trying to meet the moment with these emerging technologies that are spreading different information and confusion and also undermining people across communities.”
Del. Nicholas Freitas, R-Culpeper, introduced a bill that would create a criminal penalty for the use of AI-generated images that portray a nonconsenting person in sexually explicit ways. “This has increasingly become a problem as artificial intelligence gets a lot more intelligent and the capabilities here are pretty significant,” Freitas said in a House committee meeting.
Freitas described a scenario in which someone could manipulate and exploit a picture of a child taken from a family photo. There are currently no consequences for that under state law, he said. “I think it is very important that we get something in the code very quickly that will allow us to send a very strong message that this is not going to be accepted in the commonwealth of Virginia,” Freitas said.
A subcommittee killed Freitas’ bill on a tied vote, with members saying more research was needed.
Freitas did not respond to multiple email requests for comment.
Women are most often the targets of sexually explicit AI-generated images.
More than 20 juvenile girls in Almendralejo, Spain, were targeted last September when their social media pictures were converted into nude images and posted online. Sexually explicit, AI-generated images of pop star Taylor Swift garnered millions of views on social media last month before they were removed.
A congressional bill was introduced after fake nude photos of at least 30 female students at Westfield High School in New Jersey circulated online, according to WABC-TV. The bill would update the Violence Against Women Act Reauthorization Act of 2022 to add definitions and penalties for the use of deepfake intimate images. The bill has not advanced.
Spencer Overton, a George Washington University law professor, testified last November before the U.S. House oversight subcommittee on Cybersecurity, Information Technology and Government Innovation. The hearing was held to discuss the risks and challenges of deepfake technology.
More than 415,000 pornographic deepfake images were uploaded in 2023 to the top 10 websites that host such content, according to a transcript of the hearing. Deepfake pornography videos increased 464% over the previous year.
Women, people of color and religious minorities are targeted the most by deepfake technology, according to Overton’s testimony, with women in the public eye the most frequent targets.
Men account for only 1% of deepfake pornography targets, Overton said.
By Olivia Dileo / Capital News Service