
Virginia Tech experts explain ways to safeguard against AI-enhanced scams


Date: May 1, 2025

Scams enhanced by artificial intelligence (AI) have the potential to reach a new level of deception with the introduction of features such as ChatGPT 4o, which allow users to create convincing, photorealistic images, including fake documents, as well as realistic deepfake voices.

A panel of Virginia Tech experts, including computer ethics educator Dan Dunlap, digital literacy educator Julia Feerrar, cybersecurity researcher Murat Kantarcioglu, and criminologist Katalin Parti, discussed the implications of this ever-advancing technology.

They cautioned against relying solely on the safety measures built into AI tools to avoid scams and explained ways to stay vigilant and protect data, including the potential use of blockchain.

Dan Dunlap on educating the public about fraud detection

“Scams using AI are certainly newer and more widespread, and the increasing scale and scope are immense and scary, but there is nothing fundamentally new or different about exploiting available technologies and vulnerabilities for committing fraud. These tools are more accessible, easier to use, higher quality, and faster, but not really fundamentally different from previous tools used for forgery and fraud,” Dunlap said.

“There is a constant need to educate the public and update detection and policy as criminals use the available tools,” he said. “Computer science professionals have a moral obligation to help in both educating the public and developing tools that help identify and protect all sectors.”

“Unfortunately, disseminating knowledge can also help to exploit the weaknesses of the technology,” Dunlap added. “Powerful, available, and accessible tools are destined to be co-opted for both positive and negative ends.”

Julia Feerrar on watching for telltale signs of scams

“We have some new things to look out for when it comes to AI-fueled scams and misinformation. ChatGPT 4o’s image generator is really effective at creating not just convincing illustrations and photo-realistic images, but documents with text as well,” Feerrar said. “We can’t simply rely on the visual red flags of the earliest image generators.”

“I encourage people to slow down for a few extra seconds, especially when we’re unsure of the original source,” she said. “Then look for more context using your search engine of choice.”

“Generative AI tools raise complex questions about copyright and intellectual property, as well as data privacy,” Feerrar warned. “If you upload your images to be transformed with an AI tool, be aware that the tool’s company may claim rights to them, including the right to use them to further train the AI model.”

“For receipts or documents, check the math, the address — basic errors can be telling. Large language models struggle with basic math. However, know that a committed scammer can likely fix these kinds of issues pretty easily. You should also be asking how this image got to you. Is it from a trusted, reputable source?” she said.
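Feerrar's tip to "check the math" can be automated for structured documents such as receipts. A minimal sketch of that idea (the function name and the rounding convention are illustrative assumptions, not from the article):

```python
from decimal import Decimal

def receipt_checks_out(item_prices, subtotal, tax_rate, total):
    """Return True only if the receipt's arithmetic is internally consistent.

    Decimal avoids float rounding surprises when comparing currency values.
    """
    expected_subtotal = sum(Decimal(p) for p in item_prices)
    # Assume sales tax is applied to the subtotal and rounded to the cent.
    expected_total = (expected_subtotal * (1 + Decimal(tax_rate))).quantize(Decimal("0.01"))
    return expected_subtotal == Decimal(subtotal) and expected_total == Decimal(total)

# A receipt whose line items don't support the printed total is suspect.
print(receipt_checks_out(["12.99", "6.99"], "19.98", "0.06", "21.18"))  # True
print(receipt_checks_out(["12.99", "6.99"], "19.98", "0.06", "23.18"))  # False
```

As Feerrar notes, a committed scammer can fix such arithmetic errors, so a passing check is not proof of authenticity; a failing one is simply a fast red flag.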

“Basic digital security and anti-phishing advice applies whether a scammer uses generative AI or not. Now is also a great time to set up 2-factor authentication,” she added. “This kind of decision-making is a key part of what digital literacy and AI literacy mean today.”

Murat Kantarcioglu on using blockchain to prove files are unaltered

“It’s very hard for end users to distinguish between what’s real versus what’s fake,” Kantarcioglu said.

“We shouldn’t really trust AI to do the right thing. There are enough publicly available models that people can download and modify to bypass guardrails.”

“Blockchain can be used as a tamper-evident digital ledger to track data and enable secure data sharing. In an era of increasingly convincing AI-generated content, maintaining a blockchain-based record of digital information provenance could be essential for ensuring verifiability and transparency on a global scale,” Kantarcioglu said.
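The tamper-evident ledger Kantarcioglu describes rests on hash chaining: each record commits to a fingerprint of the previous one, so altering any earlier entry breaks every link after it. A minimal single-party sketch of that mechanism (class and field names are hypothetical; a real blockchain adds distributed consensus on top):

```python
import hashlib
import json

def _block_hash(block: dict) -> str:
    """Stable SHA-256 fingerprint of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    """Append-only hash chain recording fingerprints of digital files."""

    def __init__(self):
        self.chain = [{"index": 0, "fingerprint": "genesis", "prev": "0" * 64}]

    def record(self, file_bytes: bytes) -> dict:
        # Each new block commits to the previous block's hash.
        block = {
            "index": len(self.chain),
            "fingerprint": hashlib.sha256(file_bytes).hexdigest(),
            "prev": _block_hash(self.chain[-1]),
        }
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        # Any edit to an earlier block invalidates every later `prev` link.
        return all(
            self.chain[i]["prev"] == _block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = ProvenanceLedger()
ledger.record(b"original photo bytes")
ledger.record(b"a signed document")
print(ledger.verify())  # True
ledger.chain[1]["fingerprint"] = "0" * 64  # retroactive tampering
print(ledger.verify())  # False
```

A viewer could then recompute a suspect file's hash and check it against the ledger: a match supports provenance, while an absent or mismatched entry flags possible AI-generated or altered content.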

He also offered a simple but powerful low-tech solution: “A family could establish a secret password as a means of authentication. For instance, in my case, if someone were to claim that I had been kidnapped, my family would use this password to verify my identity and confirm the situation.”

Katalin Parti on the profiles of scammers and victims

“The accessibility of AI tools lowers the barrier for entry into fraudulent activities,” Parti said. “Not only organized scammers, but loner, amateur scammers, will be able to misuse these tools. In addition, countries may use these tools for disinformation campaigns, creating fake documents to influence public opinion or disrupt other countries’ internal affairs.”

“The primary targets of these AI-enhanced scams include but are not limited to job seekers, investors, consumers, and businesses. Since our proof system is primarily visual, and it is increasingly hard to tell AI-generated images from real ones, these scams are becoming even harder to detect,” she said.

Parti suggested an unexpected strategy beyond standard procedures already in place: “The imperfect nature of human-created visuals might be successfully used as a control in order to judge what is real as opposed to AI-made images.”
