13 May 2024
The Bank of England and Financial Conduct Authority conducted a survey into the state of machine learning in UK financial services. Machine learning (ML) is a branch of artificial intelligence (AI) in which algorithms and models allow computers to learn and improve from data without being explicitly programmed. The survey found that 72% of firms reported using or developing ML applications, up from 67% the year before. In the UK, the government is aiming to enhance the country's competitiveness by adopting a 'pro-innovation' stance towards AI, and a paper released in February urged regulators, including the Financial Conduct Authority, to set out their approach to AI by the end of April.
The use of generative AI in the financial services sector is set to increase. The advantages it can offer banks and building societies are significant: improving customer experience through chatbots, generating personalised financial advice, creating simulated scenarios for risk assessments and pricing, generating regulatory reports, streamlining internal processes and enhancing fraud detection. But with generative AI also becoming an ever-increasing security threat, banks and building societies will need to stay ahead of the game.
AI fraud defensive
AI already plays a significant role in fraud detection within banks and building societies, and the increased adoption of generative AI is improving it further. For example, device analytics are deployed to analyse big data to detect fraudulent transactions and as part of the onboarding process. These approaches are enhanced further by generating synthetic data to train fraud detection systems, particularly on new fraud methods, enabling security systems to stay ahead of the curve. Another approach is using machine learning-driven models to detect deepfakes and synthetic fraud. This strategy helps banks and building societies reduce potential financial harm and nurtures trust and assurance among customers, who can be confident in the security of their financial details.
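As a minimal sketch of the synthetic-data idea (in Python, using the scikit-learn and imbalanced-learn libraries; the simulated transaction features and 1% fraud rate are illustrative assumptions, not drawn from any bank's systems), rare fraud cases can be oversampled with synthetic examples before a classifier is trained:

    # Illustrative sketch: generate synthetic fraud examples so a detection
    # model sees enough positive cases. All data here is simulated.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report
    from imblearn.over_sampling import SMOTE  # creates synthetic minority samples

    # Simulated transaction features; fraud is roughly 1% of cases.
    X, y = make_classification(n_samples=20_000, n_features=12,
                               weights=[0.99, 0.01], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0)

    # Balance the training set with synthetic fraud examples.
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)

    model = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
    print(classification_report(y_test, model.predict(X_test), digits=3))

The same pattern extends, in principle, to simulating novel fraud typologies before they appear in live data.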
Cyber security teams are increasingly relying on defensive AI, where machine learning identifies what is ‘normal’ for a business 24/7. Where abnormalities are detected, this is a red flag for potential malicious activity. The technology is used to identify abnormalities rapidly and autonomously at the earliest stage of an attack, which is the most easily salvageable position. Bad actors will often compromise a network and wait for an opportunity to attack. It is at this point of compromise that AI defences come into their own, protecting the security of data and assets. Human defences alone are insufficient.
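A hedged sketch of this 'learn normal, flag abnormal' pattern is below, in Python with scikit-learn; the activity features (megabytes transferred, hour of day, logins per hour) are hypothetical and stand in for whatever telemetry a real security team would baseline:

    # Illustrative sketch: fit an unsupervised model on routine activity,
    # then flag outliers as potential malicious behaviour for review.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Hypothetical baseline of normal activity: [MB transferred, hour, logins/hr]
    normal = np.column_stack([rng.normal(50, 10, 5_000),
                              rng.integers(8, 18, 5_000),
                              rng.poisson(3, 5_000)])
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # Two new events: one routine, one resembling a bulk transfer at 3am.
    events = np.array([[52, 11, 3], [900, 3, 40]])
    for event, label in zip(events, detector.predict(events)):
        print(event, "-> flag for review" if label == -1 else "-> normal")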
AI fraud offensive
The fraud landscape is changing at a rapid pace. Criminals are now leveraging the power of offensive AI, adding a further layer of complexity, with tools to create emails indistinguishable from true communications and deepfakes becoming more widely available. Without increasingly rigorous protection and prevention, there is a far higher chance of banks and building societies falling victim to attacks.
Emails
Banks and building societies are training their staff, their front line of defence, to be on high alert for suspicious emails. These days, most employees know to be wary of communications addressed to ‘dear sirs’, with obvious spelling and grammatical errors and hyperlinks to questionable sites. With many fraudulent approaches and much malware being delivered by email, and email also being the easiest route in for social engineering, it makes sense to educate employees in this way. However, since the pandemic, suspicious emails directed at individuals, known as spear phishing, have become far less obviously suspect and far more targeted, tailored and frequent, enhanced by the use of generative AI.
Ransomware
AI is scaling up one of the biggest threats to banks and building societies: ransomware attacks. With generative AI developing at pace, large parts of an attack can now be automated, in contrast to a human-driven, targeted and tailored attack, which cannot be carried out at scale. AI allows systems to be monitored, code to be changed and new domains to be registered automatically, all without time-consuming human intervention.
The availability of malicious technology to perform attacks is not restricted to cybercrime ‘specialists’. The likes of ransomware-as-a-service and vishing-as-a-service are business models that provide paid-for access to targeted ransomware or vishing tools.
The root causes of ransomware attacks in financial services are largely exploited vulnerabilities, followed by compromised credentials and malicious emails. This means that businesses in the financial services industry, in anticipation of the rise in attacks enhanced by generative AI, need to bolster their security practices, stepping up employee education and increasing security monitoring and testing.
Deepfakes
There has been a tenfold increase in the number of deepfakes detected from 2022 to 2023, with the UK seeing one of the highest rates of attacks as a percentage of fraud cases, second only to Spain.
Deepfakes can take the form of voice, image or video, and are currently a significant concern for biometric security. Deepfake technology can alter or clone voices in real time, producing an artificial simulation of a person’s voice. This is a real risk for banks and building societies that utilise customer voice authentication as a method of verification. In a survey conducted by McAfee, a computer security company, one in four adults had encountered a voice scam or knew someone who had, and 77% of the victims of these voice scams reportedly lost money. The study also highlighted that half of all adults shared their voice online or on social media platforms at least once a week. These findings underscore the growing prevalence and risks associated with voice-related scams.
As biometric security becomes ever more compromised, this is particularly concerning for banks and building societies, which will need to consider what multifactor authentication can be used to maintain security. At present, unlike in the EU and some states in the United States, there is no UK legislation banning deepfakes.
Fake documentation
Fraudsters can also exploit AI's capabilities to create counterfeit identities and forged documents to open accounts, apply for credit and support changes to account details. For example, generative AI can be used to create a fabricated bank statement to support a request for a change in payment details.
Fake websites
With the convenience offered by AI, websites can now be created rapidly and effortlessly, eliminating the need for coding skills. However, this accessibility is advantageous not only to legitimate users but also to fraudsters. In a matter of minutes, fraudulent websites can be designed to mimic the appearance of legitimate ones, or to be entirely fake yet professional-looking. These websites are then used to legitimise a request for information or to support a payment request.
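One simple defensive countermeasure, sketched below in Python using only the standard library, is to screen domains for close resemblance to an institution's genuine ones; the domain names here are hypothetical placeholders, and a production system would combine this with certificate checks, registration-age data and threat intelligence feeds:

    # Illustrative sketch: flag lookalike domains by string similarity to a
    # known-good list. Domain names are invented for the example.
    from difflib import SequenceMatcher

    KNOWN_GOOD = ["examplebank.co.uk", "example-society.co.uk"]

    def lookalike_score(domain: str) -> float:
        """Highest similarity ratio between the domain and any genuine one."""
        return max(SequenceMatcher(None, domain.lower(), good).ratio()
                   for good in KNOWN_GOOD)

    for candidate in ["examplebank.co.uk", "exarnplebank.co.uk", "unrelated.org"]:
        score = lookalike_score(candidate)
        if score == 1.0:
            verdict = "exact match"
        elif score >= 0.8:
            verdict = "suspicious lookalike"
        else:
            verdict = "no close match"
        print(f"{candidate}: {score:.2f} -> {verdict}")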
AI on side
The rise of generative AI technology is leading to increasingly sophisticated forms of fraud that pose significant detection challenges for banks and building societies. Fraudsters are capitalising on the capabilities offered by AI to carry out criminal activities, reshaping the fraud landscape. To address this emerging threat, it is crucial to adopt a proactive approach: embracing defensive AI technology, considering the ethical and regulatory requirements surrounding AI usage, and emphasising awareness and education as generative AI continues to advance.