Deepfakes, and generative artificial intelligence (gen AI) in general, are becoming an increasingly worrying problem for financial services.
The emerging technology is evolving right before our eyes, and there remain more questions than answers about how to combat the rising threat, particularly when gen AI falls into the wrong hands.
You might have seen deepfakes and gen AI put to more harmless uses, such as recreating songs and speech from famous artists. But with the technology so widely accessible and so capable, people are now using it to commit fraud, putting financial and payment companies on high alert.
Delivering a keynote at this week’s Money 20/20 Europe event, Jordi Torres, Managing Director EMEA of Veridas, likened gen AI to the next wave in the technology space.
He said: “Gen AI is just another wave. It can have positive things, helping companies to automate services for clients to become more personalised and being more efficient.
“But at the same time, gen AI is representing a new kind of fraud, it has monetised the identity of fraud. It can increase identity fraud with really easy, simple tools to use.”
Businesses are alert but underprepared
Torres believes we now face a choice: either we ride this gen AI wave, or we let it drown us. And whilst payment companies are already alert to the rising wave, are they prepared for the explosion that is about to come?
A recent Signicat report, published last week, found that most fraud decision-makers surveyed agree that AI is a major driver of identity fraud (73%) and that AI will enable almost all identity fraud cases in the future (74%).
However, the report also delivered a damning verdict: businesses are unprepared for the looming threat. AI-related fraud prevention tools are still not up to scratch, and finding specialised talent in the field is also proving to be a problem.
Furthermore, there is an education gap in recognising what an AI-related fraud attack looks like and when one is happening. Torres was on hand to outline the solutions Veridas is already working on to mitigate this threat.
Facial & Voice Detection solutions
The two most common forms of deepfake attack that Torres outlined were fraud attempts against facial and voice recognition. Both target biometric verification.
Facial deepfakes, enabled by gen AI, can replicate a person’s face convincingly enough to pass facial recognition checks and gain access to sensitive information, such as using facial ID to log into someone’s online banking account.
It feels like only yesterday that people were blown away by the seamlessness of unlocking their iPhone or accessing their bank account with their face. But now AI is catching up.
Torres revealed that Veridas has been working diligently to prevent more facial deepfake attacks from occurring. The company has in place a virtual camera detection system that flags when a deepfake image is injected into a verification flow through a virtual camera, presenting synthetic footage to the system as if it had been captured on a real camera or device.
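Veridas did not detail how its detector works, but one simple signal such systems can draw on is whether the video source identifies itself as a known virtual-camera driver. The following is a minimal sketch, not the Veridas implementation, assuming a Linux host exposing V4L2 devices; the marker list is purely illustrative:

```python
import glob
import os

# Illustrative denylist of names commonly reported by virtual-camera drivers.
# A real product would combine this signal with liveness detection and
# stream-injection analysis rather than rely on device names alone.
VIRTUAL_CAMERA_MARKERS = {"obs virtual camera", "v4l2loopback", "manycam", "xsplit"}

def is_virtual_camera(device_name: str) -> bool:
    """Flag a camera whose reported name matches a known virtual-camera driver."""
    lowered = device_name.lower()
    return any(marker in lowered for marker in VIRTUAL_CAMERA_MARKERS)

def list_v4l2_cameras():
    """Return (device_path, reported_name) pairs for V4L2 devices on Linux."""
    cameras = []
    for sysfs in sorted(glob.glob("/sys/class/video4linux/video*")):
        with open(os.path.join(sysfs, "name")) as f:
            name = f.read().strip()
        cameras.append(("/dev/" + os.path.basename(sysfs), name))
    return cameras
```

A name check like this is trivially evaded by a determined attacker, which is why commercial systems layer it with deeper checks on the capture pipeline itself.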
The Veridas Managing Director expounded on the process further: “The only way to detect a fraud attack (facial recognition) is to use face biometric technology. Our solution is able to assess facial features compatibility with verification to detect an impersonator.
“It is a really challenging verification, where you have to detect more than 1,500 level one attacks, and in level two, more than 800 similar attacks, and you cannot fail a single one. You have to detect every single one of those attacks for the certification to pass the test.”
Voice deepfakes pose similar issues within the industry today, although they are less prevalent than face deepfakes.
Synthetic voices enabled by gen AI – similar to what is happening within the music industry at the moment – are being used to commit fraud, and can be deployed in phone calls to banks to obtain sensitive information and data.
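Voice biometric systems typically reduce a recording to a fixed-length speaker embedding and compare embeddings by cosine similarity; deepfake-specific detectors add further checks for synthesis artefacts. As a minimal sketch of the comparison step only, assuming embeddings come from some pretrained speaker model (the threshold here is a placeholder, not a real-world value):

```python
import numpy as np

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two speaker embeddings."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(claimed, enrolled, threshold=0.7) -> bool:
    """Accept a caller only if their embedding matches the enrolled voiceprint.

    The 0.7 threshold is illustrative; real systems tune it on evaluation
    data to balance false accepts against false rejects.
    """
    return cosine_similarity(claimed, enrolled) >= threshold
```

Note that a convincing voice clone is designed to produce an embedding close to the genuine speaker’s, which is why matching alone is not enough and anti-spoofing checks are needed on top.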
Torres played a synthetic voice as part of his presentation and revealed that Veridas is working on solutions to prevent this form of attack.
The detection game – could the audience detect deepfake fraud?
But that was not the only synthetic gen AI voice that Torres presented. To round out the session, he played his own voice speaking the same sentence in three different languages: Spanish, English and German.
Before playing each clip, he asked the audience which voice they believed to be his real one, as opposed to the two synthetic ones. The answer told a revealing truth.
The audience, just like many businesses across the world, could not correctly identify the real voice, because none of the three was real: all were synthetic.
This was a harrowing realisation for many in the audience, as it appears we are yet to fully understand the nuances of detecting not just voice deepfakes, but deepfakes overall.
For now, the responsibility falls on specialists within the industry, like Veridas, who are developing solutions and have the talent needed to counter the growing gen AI fraud threat.
But soon the responsibility will also have to fall on the finance and payments industry, because as deepfakes and gen AI become more sophisticated, companies must put the right parameters in place to protect their customers from this new wave of fraud.