Generative AI and Fraud: What You Need to Know, Spring 2024
Generative AI fraud is becoming more prevalent in several areas, including advanced email phishing attacks; deepfake video, audio, and photos; and synthetic identities and document forgeries.
April 24, 2024
Generative AI tools such as large language models are increasingly being used by malicious actors in schemes designed to defraud both institutions and individuals. Here are some actionable tips to help Blinn College District employees protect themselves against AI-generated fraud.
Phishing Attacks
Phishing attacks have historically relied on a sense of urgency combined with deceptive tactics that rarely withstand careful scrutiny. Scammers manufacture urgency to persuade the target to skip additional confirmation steps before sending money; for example, they may demand payment of an "overdue" bill, with the funds directed to the scammer rather than the legitimate recipient. Previously, a key due-diligence step was checking for obvious grammatical errors, since many of these scams originated overseas with writers who had limited English proficiency. With the advent of generative text AI, however, that is no longer a reliable indicator. Anyone who receives an email or message demanding payment should maintain a skeptical mindset and take extra precautions to ensure that any online transaction is directed to the intended recipient. One effective method is to navigate to the organization's website directly instead of clicking links in the email. Fake websites and payment forms have become increasingly sophisticated, closely resembling legitimate retail sites with only subtle differences in their web addresses.
Deepfakes
Deepfake videos, recordings, photos, and spoofed voices have made headlines through banking scams in which key stakeholders were duped by fake audio instructions into wiring large sums of money. While transactions of that size may not be commonplace at the college, and executing a sophisticated video or audio spoofing attack requires substantial resources, double-checking with another individual remains a prudent fallback. According to one industry estimate, up to 37% of organizations worldwide have experienced these types of attacks. Employing common sense when responding to requests over the phone or online can significantly reduce the risk of fraud, and hanging up and calling back at a later time remains a reliable method of conducting due diligence over the phone.
Synthetic Identities
Synthetic identities and forged documents are now easier than ever to fabricate, thanks to generative image programs and the proliferation of bespoke identity forgers online. There is growing concern that important documents can be forged so convincingly that they are indistinguishable from authentic ones. It is also important to recognize that forged identities can be leveraged in social engineering scams, in which employees are deceived into disclosing sensitive information they should not. Once again, due diligence is crucial in safeguarding against this type of fraud; this may include cross-referencing online documents with other sources and verifying requests over the phone with a live individual.
For more information on generative AI and its role in scams, as well as strategies for prevention, please refer to the following resources:
- Deloitte: Generative AI and Fraud: What Are the Risks That Firms Face?
- Experian: Experian's Fraud Forecast Predicts Generative AI Fraud and Deceptive Scams in 2024
- SafeWise: AI Scams