Market News

Fraud Watch: How Fraudsters are Leveraging GenAI to Modernize Traditional Scams and Deceive Victims

AI in banking, customer service, cybersecurity, fraud detection, generative AI, scams, voice cloning

Paul Benda discusses the increasing role of AI in banking, particularly in fraud detection, credit scoring, and customer service. With the rise of generative AI tools like ChatGPT, scammers are now using these technologies to enhance their schemes, creating convincing fake voices and images to manipulate victims. For instance, voice cloning can mimic a person’s voice convincingly, making scams like impersonation requests more effective. Benda emphasizes the importance of understanding how these tools work to combat fraud, suggesting families establish a password to verify requests. As AI technologies evolve, staying informed is crucial to protect against these sophisticated scams.



By Paul Benda

AI has been part of our lives for many years, especially in banking. Banks have utilized artificial intelligence for various purposes like fraud detection, credit scoring, and customer service chatbots. Recently, generative AI has taken things to a new level. Companies like Microsoft and Google are enhancing their services with AI. Microsoft is focusing on its Copilot feature in Office 365 to help users work more efficiently, while Google is using AI to summarize search results. The popular AI tool ChatGPT has also changed how we interact with technology since its launch.

Generative AI isn’t limited to text; it can also produce images, video, and audio. OpenAI’s DALL-E, available through ChatGPT, generates images from user prompts, and the company’s Sora model, offered to paid subscribers, can generate video as well. High-quality video content is now within reach of anyone, not just big studios.

However, these technological advancements also have a downside. Criminals have started using AI to enhance their scams. Because many AI models are released as open source, anyone can adapt them into tools for fraudulent activity. One such tool, called FraudGPT, packages AI capabilities for criminals and has fueled a surge in impersonation scams that are increasingly difficult to combat.

A common example is the business email compromise scam, which typically involves a fraudulent request for a wire transfer. Traditionally, targets could confirm such a request by calling the apparent sender. But with cheap AI voice cloning now widely available, scammers can impersonate that person’s voice convincingly, making the fraud far harder to detect.

I recently tested a voice cloning service using my own voice from previous ABA podcasts. The result was surprisingly realistic in just a few minutes. While there are still minor imperfections, the technology is rapidly improving, giving scammers ample opportunity to refine their techniques.

AI is also being used to create realistic avatars and voice effects. This technology can supercharge scams such as the “grandparent scam,” in which fraudsters impersonate a distressed grandchild who urgently needs money. With convincing generated audio and video, scammers can pressure their targets into acting before they have time to think.

As generative AI tools evolve, scammers gain access to a more sophisticated toolkit for deception. Understanding how these technologies work is crucial in the fight against fraud.

For families, one protective measure is to establish a family password: a unique, hard-to-guess word that members use to verify the authenticity of any unusual request, such as an urgent call asking for money. It provides a simple additional layer of security against these growing threats.

Paul Benda is a senior vice president for fraud and operational risk policy at ABA and host of the ABA Fraudcast.

Stay informed about the latest fraud trends and prevention tips by tuning into the ABA Fraudcast. For more information, visit aba.com/fraudcast.

What is Fraud Watch?
Fraud Watch is a program aimed at helping people stay informed about scams and fraud. It keeps an eye on new ways that fraudsters might try to trick you, especially using new technology.

How do fraudsters use generative AI?
Fraudsters use generative AI to improve old scams. They create convincing fake messages, emails, or even audio that look and sound real. This makes it harder to spot the lies.

What types of scams should I be aware of?
Be on the lookout for phishing emails, fake tech support calls, and investment scams. Fraudsters often pretend to be trusted sources to trick you into giving them money or personal information.

How can I protect myself from these scams?
To protect yourself, always double-check the sender of emails and messages. Don’t share personal details unless you’re sure who you’re talking to. Be cautious when clicking on links or downloading attachments.

What should I do if I think I’ve been scammed?
If you think you might have been scammed, act quickly. Change your passwords, report the incident to your bank, and alert local authorities. The sooner you respond, the better you can protect yourself.

