A senior government official has commented on the impact of Artificial Intelligence (AI) on fraudulent activity following the release of research data by Starling Bank, a leading UK digital challenger bank.
Starling’s data showcases the extent to which AI is being utilised by fraudsters to commit their crimes. Deepfakes have become a common talking point in the fields of fraud and payments and, to an extent, a household term.
According to Starling’s research, 28% of UK adults say they have been targeted by an AI voice cloning scam, where fraudsters use the tech to replicate the voice of another person to trick the victim into sending money.
However, whilst deepfakes have been gaining significant media attention and have been mentioned in political discourse, 46% of the UK public state that they have never heard of these scams and do not know how to protect themselves.
The UK government has been focused on the responsible development of AI, with the previous Conservative administration setting up the AI Safety Institute. The new Labour government appears to have similar ambitions, pledging in its 2024 election manifesto to create national data centres to support the technology’s development and signing an international treaty on safe AI.
Private sector actors are also playing a role, however. Starling, for example, has launched a ‘Safe Phrase’ initiative, where a phrase is agreed with friends and family members in order to verify identity. This has been endorsed by the government’s anti-fraud campaign, ‘Stop! Think Fraud’.
Commenting on Starling’s data, Lord David Hanson, Minister of State at the Home Office with Responsibility for Fraud, said: “AI presents incredible opportunities for industry, society and governments but we must stay alert to the dangers, including AI-enabled fraud.
“As part of our commitment to working with industry and other partners, we are delighted to support initiatives such as this through the Stop! Think Fraud campaign and provide the public with practical advice about how to stay protected from this appalling crime.”
Everyone is focused on fraud
UK consumers are becoming increasingly concerned about AI voice cloning scams, Starling states. The bank’s research shows that 79% of UK adults are concerned about being targeted by these scams, followed by social media impersonation scams at 76%, HMRC and High Court impersonation scams at 75%, safe account scams at 73% and investment scams at 70%.
The data comes at a crucial time for fraud prevention in the UK. The industry is preparing for new regulations coming into force on 7 October, enforced by the Payment Systems Regulator (PSR), which will require firms to reimburse fraud victims.
The extent of reimbursement is the subject of an ongoing consultation, with the PSR assessing whether to reduce the reimbursement cap from the initially proposed £415,000 to £85,000, though some are calling for it to be cut further, to £30,000.
Several banks have repeatedly raised concerns that the rules do not take social media scams into account, which Starling’s data shows are clearly an area of concern for consumers. The extent of AI-backed fraud, though, is clearly an even larger concern.
The Labour government is, according to documents seen by the FT, also keen to see Big Tech firms share responsibility for fraud reimbursement. As Big Tech encompasses social media firms, AI developers and others, this responsibility could be shared far more widely.