In the world of finance, processes like KYC (Know Your Customer) and AML (Anti-Money Laundering) are vital safeguards, verifying customer identities and preventing fraud. A robust KYC framework is essential for combating financial crime, ensuring compliance, and protecting business reputation. With the rise of AI-generated deepfakes and document replicas, organisations are re-evaluating their KYC procedures to strengthen security.
What challenges do deepfakes pose to KYC processes, and how can businesses spot and address these threats?
Pierre-Antoine Boulat, FDM North America Delivery Lead for Risk, Regulation & Compliance, and Patrick Wake, FDM Group Director of Information Security, discuss.
According to Pierre-Antoine Boulat, ‘An arms race between money launderers and financial compliance and law enforcement is all but certain. The tools that enable document counterfeiting, physical impersonation and convincing messages “en masse” are also deployed to counter such nefarious strategies and tactics: GenAI.’
According to Patrick Wake, ‘Deepfakes, created through advanced AI techniques like generative adversarial networks (GANs), pose a significant challenge for identity verification processes. These AI-generated fakes can mimic various documents and personal details so convincingly that they threaten the reliability of traditional verification methods, raising concerns about security and fraud prevention.’
How deepfakes are made
Patrick explains that ‘deepfakes can be created with openly available tools like Deepfakes Web and Faceswapper.ai, as well as with community-created models downloaded from platforms like GitHub. This raises concerns about their misuse in identity checks, especially in KYC procedures.
These models use neural networks to create realistic facial expressions, lip movements and voices, making the fake content look remarkably authentic. Techniques such as facial landmark detection and audio synthesis further enhance the believability of deepfakes.’
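To make one of these building blocks concrete, the short Python sketch below runs facial landmark detection on a single image using the open-source MediaPipe library. It is a minimal illustration, not a deepfake pipeline: the input file name is a placeholder, and the landmark coordinates it prints are the kind of data face-swapping models use for alignment, and that detectors scrutinise for inconsistencies.

```python
# Minimal facial landmark detection sketch using MediaPipe's FaceMesh.
# "face.jpg" is a placeholder input image.
import cv2
import mediapipe as mp

image = cv2.imread("face.jpg")
mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)

# MediaPipe expects RGB input; OpenCV loads images as BGR.
results = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    print(f"Detected {len(landmarks)} landmarks")  # FaceMesh returns 468 points
    # Each landmark carries normalised x, y (and relative z) coordinates:
    # the raw material for aligning, warping or scrutinising a face.
    tip = landmarks[1]  # index 1 is approximately the nose tip
    print(f"Nose tip at ({tip.x:.3f}, {tip.y:.3f})")
```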
Mimicking documents and personal information
Deepfake technology has already shown it can replicate a range of documents and personal attributes with worrying accuracy. Examples include fake robocalls impersonating public figures such as US President Joe Biden, and complex financial scams in which deepfakes impersonate company executives, causing significant financial losses. Deepfake creators use AI algorithms to manipulate text, signatures and other document features, producing fake documents that are hard to spot. Advances in natural language processing help generate realistic text, adding to the authenticity of fabricated documents.
Improving deepfake detection
Patrick says, ‘Efforts are underway to enhance detection technologies in response to the growing threat of deepfakes. One approach involves using advanced machine learning algorithms trained on large datasets of real and fake content. These algorithms analyse subtle differences in facial expressions, language patterns and context to identify signs of manipulation.
Researchers are also exploring biometric authentication methods like facial recognition and voice analysis to improve detection accuracy. Real-time detection systems are being developed to quickly analyse incoming media streams and flag suspicious content.’
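As a hedged sketch of the classifier approach Patrick describes, the snippet below defines a tiny PyTorch network that scores a face crop as real or fake. Everything here is illustrative: the architecture is a toy, the input is random, and a practical detector would be trained on large labelled datasets of real and manipulated faces.

```python
# Toy real-vs-fake classifier sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn

class DeepfakeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # A small convolutional feature extractor over an RGB face crop.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: higher means "fake"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DeepfakeClassifier()
frame = torch.rand(1, 3, 224, 224)       # placeholder face crop
prob_fake = torch.sigmoid(model(frame))  # untrained, so roughly 0.5
print(f"P(fake) = {prob_fake.item():.2f}")
```

In practice, models of this shape are trained with a binary cross-entropy loss on datasets of genuine and manipulated faces, so the probability becomes meaningful; the subtle cues Patrick mentions (facial expressions, lip sync, context) are what the learned features pick up on.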
Pierre-Antoine says, ‘Falsified patterns of behaviour or financial transaction history, including digital doppelgängers, can be recognised by trained professionals. Human beings with the right risk management mindset, supported by AI and penetration-testing techniques, will continue to stand against fraudulent methods.
KYC tools and processes will develop and adapt to mitigate the risk of fraud. Just as cybersecurity took attackers’ techniques and turned them to the defenders’ benefit (ethical hacking, big data), GenAI can and will be the artisan of its own control.
Furthermore, other techniques can be leveraged, and will find expanded areas of application, to complement Generative AI in the fight against counterfeit or fake identities. Video-proofing practices, as well as common-sense countermeasures that are difficult for trained electrons to master (think “Captcha” tests), will flourish. Blockchain certification and government-sponsored e-identification will at last find their broad and welcome use.’
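To illustrate the video-proofing idea, here is a minimal, self-contained sketch of one common liveness cue: counting blinks from the eye aspect ratio (EAR), a measure that drops sharply when the eye closes. The EAR values below are illustrative; in a real system they would be computed per frame from eye landmarks. A pre-rendered deepfake replayed on demand will often fail a ‘blink twice now’ challenge.

```python
# Liveness sketch: count blinks from a per-frame eye aspect ratio (EAR)
# series. EAR is roughly vertical eye opening / horizontal eye width,
# so it drops sharply when the eye closes. Values below are illustrative.

def count_blinks(ear_series, closed_threshold=0.2):
    blinks, eye_closed = 0, False
    for ear in ear_series:
        if ear < closed_threshold and not eye_closed:
            eye_closed = True        # eye just closed
        elif ear >= closed_threshold and eye_closed:
            eye_closed = False
            blinks += 1              # eye reopened: one complete blink
    return blinks

# Illustrative EAR trace: open, blink, open, blink, open.
frames = [0.30, 0.29, 0.12, 0.28, 0.31, 0.10, 0.27]
print("Blinks detected:", count_blinks(frames))  # -> 2
```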
How to mitigate fraud risks
Pierre-Antoine maintains that ‘pandemic-era ways of working and FinTech innovation introduced remote identification well before GenAI became headline material. Online fraud is a constant irritant, and devastating for its victims, but society, through regulation and education, will manage to fight back.
Despite privacy concerns, the platforming of shared identity details (already in use for credit scoring, criminal history, driving records, immigration status and health data, for instance, across sectors and jurisdictions) will expand to physical features, since these (fingerprints, face, voice) are used for identification as well as for criminal purposes. There again, GenAI will be both the perpetrator and the guardian, and history to date has shown that the guardians win in the long term. Of course, strong safeguards for this laudable use of personal data need to be set up and permanently enforced in order to ensure take-up and allay legitimate security concerns.’
Collaboration among industry stakeholders and regulatory bodies
Patrick believes collaboration between industry, academia, and government is crucial for advancing detection solutions and staying ahead of new threats. Despite progress, challenges remain, including the risk of adversaries evading detection through clever manipulations. Continued research and innovation are needed to strengthen defences against deepfake threats.
According to Pierre-Antoine, the areas of collaboration in this field between the public and private sectors, for profit or not, and government agencies, including law enforcement, are numerous. For organisations, it is an opportunity to converge physical and digital security countermeasures and adopt a holistic risk management posture.
While deepfake technology poses significant challenges for identity verification, ongoing efforts to improve detection offer hope in the fight against fraud. By combining advanced algorithms, biometric authentication, real-time analysis, and collaborative efforts, businesses can better protect themselves and their customers from the dangers of AI-generated fake identities.
How can FDM’s AML/KYC solutions help bolster your organisation’s defences against financial crime threats?
At FDM, our consultants are equipped with both an ethical and a technical grounding through experiential training and constant access to practitioners. They combine familiarity with advanced AI tools, observation skills applied to identity and transaction patterns, and appropriate risk identification and escalation practices.
Our dedicated Risk, Regulation & Compliance team delivers services across the first and second lines of defence, anti-financial crime, the client and securities lifecycle and middle-office operations. From upskilling frameworks and execution plans to independent reviews and AI adoption, they apply best practices to meet your regulatory compliance and risk requirements.
In North America? Contact us to book a discovery call today.
In the UK? Join our in-person event on 5 June 2024, at FDM’s London centre. Register your interest here.
Meet vendors and multidisciplinary thought leaders across sectors to explore the best approach to address the constant evolution of your requirements and satisfy both the business and regulators.