Artificial Intelligence (AI) offers transformative benefits by equipping institutions and the broader public with advanced tools to combat fraud and other criminal activities. Since the early 2000s, banks and financial institutions have leveraged dynamic AI models to detect and prevent fraud, money laundering, account takeovers, identity theft, and various forms of financial crime. However, in the ongoing arms race between fraudsters and security professionals, malicious actors have continuously adapted, employing sophisticated tools and methodologies to evade or undermine these powerful detection systems. Fraudsters are increasingly leveraging AI-powered tactics and other advanced technologies to outmaneuver fraud detection mechanisms, creating an urgent need for financial institutions and organizations to adopt enhanced countermeasures. These adversarial tactics may involve one or a combination of innovative methods designed to exploit vulnerabilities in AI-driven fraud detection systems.
 |
1. Adversarial Attacks on Models – “Tricking the AI system through trial and error”
Fraudsters probe fraud detection systems through trial and error. By submitting slightly varied synthetic identities, or by altering the timing and amounts of financial transactions, they learn which inputs trigger alerts and adjust their approach to avoid detection.
Recent example: As identified earlier this year by AU10TIX and reported in TechRadar, fraudsters are deploying a technique known as “repeaters”: they test and exploit digital platform defenses using slightly varied synthetic identities, altering features such as facial characteristics or background details, to bypass KYC and biometric checks. Once weak points are identified in the KYC and fraud detection systems, fraudsters redeploy those synthetic identities, with slight variations, across multiple platforms in large-scale, coordinated fraud campaigns. Security experts noted that “traditional identity checks fail because they treat each verification as an isolated event.” Repeaters are being used against banks, crypto platforms, and other institutions. [1]
For additional information on synthetic identities, refer to Cornerstone Issue #66, August 2025.
For information on AI-driven fraud, refer to Cornerstone Issue #55, August 2024.
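To make the “repeater” pattern concrete, the sketch below shows one hypothetical way a platform might flag near-duplicate identity submissions by fuzzy-matching attributes across recent applications. The field names, similarity metric, and threshold are illustrative assumptions, not a production design.

```python
from difflib import SequenceMatcher

# Hypothetical threshold: scores above this (but below an exact match)
# suggest a "repeater" probing the system with slight variations.
REPEATER_THRESHOLD = 0.85

def identity_fingerprint(app: dict) -> str:
    """Concatenate identity fields into a single comparable string."""
    return "|".join(str(app.get(k, "")).lower()
                    for k in ("name", "dob", "address", "doc_no"))

def flag_repeaters(new_app: dict, recent_apps: list[dict]) -> list[dict]:
    """Return prior applications suspiciously similar to the new one."""
    new_fp = identity_fingerprint(new_app)
    hits = []
    for prior in recent_apps:
        score = SequenceMatcher(None, new_fp, identity_fingerprint(prior)).ratio()
        if REPEATER_THRESHOLD <= score < 1.0:  # near-duplicate, not identical
            hits.append(prior)
    return hits
```

The key design point, per the quoted experts, is that each check must not be treated as an isolated event: the comparison runs across a history of submissions, not a single application.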
2. Behavior Mimicry and Simulation – “Blending in to avoid detection”
Using tools such as device emulators and online proxies, fraudsters closely imitate the behavior of legitimate users, replicating login times, spending habits, device types, and geolocation patterns. Because their activity matches established baselines, it appears routine to both traditional and AI-augmented anomaly detection systems.
According to a global 2024 Regula survey, nearly one quarter of fintech organizations reported losses of over $1 million from deepfake fraud. Average losses from deepfake fraud for organizations in the financial sector were approximately $600,000, with video and audio deepfakes being the most prominent.[2] Regula also noted: “The study also reveals a concerning gap between organizations’ confidence and competence. While 56% of businesses claim they are very confident in their ability to detect deepfakes, only 6% report having avoided financial losses from these attacks.” [3] These statistics suggest that fraudsters are successfully bypassing deepfake detection mechanisms within the financial sector and fintech industry.
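Why mimicry works can be seen in a minimal baseline-deviation check of the kind many anomaly detectors build on: a fraudster who replicates a user’s typical values scores as normal. This z-score sketch is illustrative only, real systems combine many such signals.

```python
from statistics import mean, stdev

def anomaly_score(history: list[float], value: float) -> float:
    """Z-score of a new observation against a user's historical baseline.
    Mimicked behavior that replicates the baseline yields a low score."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if value == mu else float("inf")
    return abs(value - mu) / sigma

# A fraudster who studies a victim's typical transaction amounts
# (e.g. around $100) and stays inside that range scores as routine,
# while a naive large transfer would stand out immediately.
```

The defensive implication is that single-feature baselines are easy to mimic; correlating many independent signals (device, geolocation, timing, spend) raises the cost of convincing imitation.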
3. Data Manipulation and Obfuscation – “Sliding under the radar with convincing fakes”
Adversaries manipulate data sources or mask malicious behavior to make fraudulent activity harder for detection systems to recognize. By creating genuine-looking but fake or altered data, such as synthetic identities supported by highly realistic fake documents and receipts, fraudsters can confuse detection algorithms and make it more difficult for systems to spot inconsistencies. The result is financial institutions approving fraudulent transactions or accounts based on falsified data.
According to Sumsub, in the first three months of 2025, synthetic identity document fraud increased more than 300% in North America compared to the same period the previous year, with fraudsters using generative AI to create fake passports, IDs, and biometric data. Sumsub also noted that deepfake fraud incidents jumped by more than 1,000% during the same period, suggesting that generative AI is being used to bypass facial recognition and biometric checks.[4] While high fraud activity was recorded in e-commerce, healthtech, and edtech in the U.S. in particular, a sharp increase in fintech fraud attempts was also identified. As part of the survey, Sumsub analyzed millions of verification checks conducted on its platform across industries. One of the most pressing concerns it noted was “the rise of synthetic identity document fraud, where criminals use AI tools to generate fake identity documents such as driver's licenses or passports. These synthetic identity documents are often realistic enough to bypass basic KYC checks.”[5]
4. Bypassing Input Signals and Sensors – “Attacking the data intake pipeline itself”
Fraudsters exploit weaknesses in the hardware or data sources that feed fraud detection systems, such as by intercepting or manipulating inputs from biometric scanners, device fingerprints, or transaction signals. By injecting false or altered data at the input stage, they can prevent the systems from accurately recognizing fraudulent activity.
As identified by iProov and reported by Biometricupdate.com, a Latin American-based dark web group has amassed “a substantial collection of identity documents and corresponding facial images specifically designed to defeat Know Your Customer (KYC) verification process.” These synthetic identity documents are being used to bypass biometric authentication systems, which may pose significant challenges to a financial institution’s fraud detection capabilities. According to the article, the images for this dataset were provided by individuals selling their own identity documents and biometric data.[6]
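One common defense against input-stage injection is to authenticate the capture pipeline itself, for example by having a trusted capture device attach a message authentication code to each payload so the server can detect substituted data. The sketch below, using a hypothetical shared key, illustrates the idea; real deployments typically use per-device keys or asymmetric attestation.

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned to a trusted capture device.
DEVICE_KEY = b"demo-key-not-for-production"

def sign_payload(payload: bytes) -> str:
    """Capture device attaches an HMAC tag to each sensor payload."""
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_payload(payload: bytes, tag: str) -> bool:
    """Server side: reject biometric/sensor data whose tag doesn't match,
    i.e., data injected or altered after capture."""
    return hmac.compare_digest(sign_payload(payload), tag)
```

With this check in place, swapping in a stolen or synthetic image after capture invalidates the tag, forcing the attacker to compromise the device itself rather than just the data stream.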
5. Insider Threat, Social Engineering & Model Exploitation – “Targeting humans as the weakest link”
By manipulating people or systems, fraudsters can bypass AI defenses and fraud detection systems altogether. As with common fraud techniques, fraudsters may employ social engineering to manipulate individuals into providing sensitive information or authorizing transactions. Additionally, a disgruntled employee or other insider may share knowledge of how fraud detection systems and AI models work, such as which attributes trigger alerts, to inform future fraud attempts.
The cybercriminal collective “Scattered Spider” employs advanced social engineering tactics to bypass multi-factor authentication (MFA) systems. The group engages in SIM-swapping and helpdesk deception to gain initial access to enterprise networks. Once inside, they use legitimate administrative tools to escalate privileges and exfiltrate data from various organizations.[7] As noted by TechRadar, “Scattered Spider” has been successful in particular by perfecting social engineering tactics, including voice phishing (vishing), SMS phishing (smishing), and chat-based manipulation, to convincingly impersonate legitimate employees. These tactics are crafted through highly personalized approaches based on organizational structure, employee behaviors, and other facets.[8]
6. Automation at Scale – “Using Bots or Scripts to Commit Fraud Faster and at Scale”
Fraudsters leverage automated tools, often powered by AI, to conduct fraud campaigns on a massive scale, mimicking legitimate behavior to avoid detection. Automation enables rapid, repetitive attempts that adapt based on AI responses, making it difficult for static detection systems to keep up. For example, bots can simulate human-like interactions, generating thousands of synthetic accounts or transactions with varying behaviors to evade rule-based and AI-driven fraud filters. This volume and adaptability may strain financial institutions’ detection capabilities and increase the chance of successful fraud.
Many modern fraud organizations have shifted from creating several fake identities to crafting a single near-perfect fake ID that will fool case-level detection, then using it in automated mass attacks against hundreds or thousands of businesses simultaneously.[9]
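Defenders often counter automation at scale with velocity checks: counting attempts per device fingerprint (or IP address, or identity) inside a sliding time window and flagging bursts no human would produce. A minimal sketch, with illustrative window and threshold values:

```python
from collections import defaultdict, deque

class VelocityCheck:
    """Flag bursts of attempts from one device fingerprint within a
    sliding window. Limits here are illustrative, not recommendations."""

    def __init__(self, window_seconds: int = 60, max_attempts: int = 5):
        self.window = window_seconds
        self.max_attempts = max_attempts
        self.events = defaultdict(deque)  # fingerprint -> timestamps

    def record(self, fingerprint: str, ts: float) -> bool:
        """Record an attempt; return True if the velocity limit is exceeded."""
        q = self.events[fingerprint]
        q.append(ts)
        # Drop timestamps that have slid out of the window.
        while q and q[0] <= ts - self.window:
            q.popleft()
        return len(q) > self.max_attempts
```

Sophisticated bot operations defeat naive versions of this by rotating fingerprints and pacing requests, which is why velocity signals are typically combined with the behavioral and identity checks described above.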
Fraud-as-a-Service
Each of these services can be scaled using custom-built AI tools that are often sold on the dark web to other criminal actors. This creates a low barrier to entry, allowing fraud to be committed at scale, often with limited technical knowledge. Popular darknet fraud tools include phishing kits, identity spoofing kits, and deepfake databases filled with synthetic identities.[10]
Strategies to Combat Fraud Tactics
To counter these tactics and the measures fraudsters take against existing AI-enabled fraud prevention, consider the following strategies:
1. A Multi-layered Defense Strategy
Combine AI-driven detection tools with traditional controls, human review, biometric verification, and robust access management. This reduces the chance of a single vulnerability being exploited across categories.
2. Continuously Monitor and Update AI Models
Retrain models as needed with fresh, diverse, and adversarial data to detect evolving fraud patterns, including new synthetic behaviors or manipulated inputs.
3. Integrate Threat Intelligence and Data Sharing Across Industry
Collaborate within the industry to share fraud intelligence and indicators of compromise. This collective defense and information sharing helps institutions anticipate and identify new tactics used to counteract existing detection and prevention measures.
4. Ensure Strong Data Governance
Ensure data integrity and multi-source verification to prevent data manipulation or obfuscation of identity and other critical data.
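One way to approach the retraining recommendation above is to augment training data with small perturbations of known fraud samples, so the refreshed model covers the neighborhood an adversary probes through trial and error rather than single points. A simplified sketch; the jitter scale and the numeric feature representation are assumptions:

```python
import random

def augment_with_perturbations(fraud_samples, n_variants=3, jitter=0.05, seed=0):
    """Generate slightly perturbed copies of known fraud feature vectors.
    Retraining on these variants teaches the model the region around each
    observed attack, not just the exact instance that was caught."""
    rng = random.Random(seed)
    augmented = []
    for features in fraud_samples:
        augmented.append(features)  # keep the original sample
        for _ in range(n_variants):
            augmented.append([x * (1 + rng.uniform(-jitter, jitter))
                              for x in features])
    return augmented
```

This mirrors the attacker’s own “repeater” strategy from section 1: if fraudsters test slight variations, the defense should train on slight variations too.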
Taken together, these recommendations help build a more resilient fraud prevention framework and provide financial institutions with more robust protection against an increasingly sophisticated threat landscape.