
Generative AI – fraud friend or foe?

19 March 2024

According to the Office for National Statistics, there were 1.15 million fraud and computer misuse offences recorded in England and Wales in 2022/23, the highest total recorded in a single reporting year. As generative artificial intelligence (AI) becomes more accessible and more capable in its output, fraudsters are exploiting its power to deceive and harm. This trend could have devastating consequences for businesses.


AI fraud defensive

AI already plays an important part in fraud detection within financial institutions and will become even more prominent with the growing use of generative AI, for example by analysing big data to detect fraudulent transactions or using device analytics during the onboarding process. These approaches can be further enhanced by generating synthetic data to train fraud detection systems, particularly focusing on new fraud methods to stay ahead of the curve. This forward-thinking strategy helps banks and building societies reduce potential financial harm and nurtures a sense of trust and assurance among customers, who can be confident in the security of their financial details.
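To make the synthetic-data idea concrete, the sketch below is a minimal, purely illustrative example rather than any institution's actual system: it fabricates legitimate transactions plus examples of a hypothetical new fraud pattern, then trains a simple classifier on the combined set. The three-feature schema, the distributions and the model choice are all assumptions made for demonstration.

```python
# Illustrative sketch only: augment transaction data with synthetic examples
# of a hypothetical new fraud pattern, then train a simple classifier.
# All field names, distributions and thresholds are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)

# Features: [amount, hour_of_day, new_payee_flag] -- hypothetical schema.
legit = np.column_stack([
    rng.normal(60, 30, 5000).clip(1, None),   # typical payment amounts
    rng.integers(8, 22, 5000),                # mostly daytime activity
    rng.binomial(1, 0.05, 5000),              # rarely a new payee
])
# Synthetic "new method" fraud: high-value, late-night, new-payee transfers.
fraud = np.column_stack([
    rng.normal(900, 200, 500).clip(1, None),
    rng.integers(0, 5, 500),
    rng.binomial(1, 0.9, 500),
])

X = np.vstack([legit, fraud])
y = np.array([0] * len(legit) + [1] * len(fraud))

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Hold-out accuracy: {model.score(X_test, y_test):.3f}")
```

In practice the synthetic examples would come from far richer generative models and feature sets, but the principle is the same: expose the detector to plausible new fraud patterns before they appear in live data.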

Cyber security teams are also increasingly relying on defensive AI, where machine learning establishes what is ‘normal’ for a business around the clock. When abnormalities are detected, they serve as a red flag for potential malicious activity. The technology identifies anomalies rapidly and autonomously at the earliest stage of an attack, when the situation is most easily recoverable. Bad actors will often compromise a network and wait for the best opportunity to launch their attack. It is at this point of compromise that AI defences come into their own, protecting the security of data and assets; human defences alone are insufficient.
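As a rough illustration of that anomaly-detection idea, the sketch below learns a baseline of ‘normal’ activity from historical telemetry and flags events that deviate from it. The three signals used (data transferred, hour of day, failed logins) and the off-the-shelf isolation forest are simplifying assumptions; real deployments model far richer behaviour.

```python
# Minimal "defensive AI" sketch: learn what normal activity looks like,
# then flag deviations for review. Feature set is an assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=1)

# Baseline telemetry: business-hours activity with modest data transfers.
normal_activity = np.column_stack([
    rng.normal(200, 50, 2000),     # MB transferred per hour
    rng.integers(8, 19, 2000),     # hour of day
    rng.poisson(0.2, 2000),        # failed logins per hour
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# New events: one looks like routine use, one like 3am bulk exfiltration.
events = np.array([
    [210.0, 14, 0],
    [5000.0, 3, 12],
])
print(detector.predict(events))  # 1 = looks normal, -1 = flag for review
```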

AI fraud offensive

The fraud landscape is changing at a rapid pace. Criminals leveraging the power of offensive AI add a further layer of complexity, with tools to create emails indistinguishable from true communications and deepfakes becoming more widely available. Without increasingly rigorous protection and prevention, there is a far higher chance for banks and building societies to fall victim to this threat.

Emails

Many organisations are training their staff, their front line of defence, to be on high alert for suspicious emails, which remain a preferred channel for fraudsters. Most employees know to be wary of communications addressed to ‘dear sirs’, with obvious spelling and grammatical errors and hyperlinks to questionable sites. Since most malware is delivered by email, and it is the easiest route in for social engineering, it makes sense to educate employees in this way. However, since the pandemic, suspicious emails directed at specific individuals, known as spear phishing, have become more sophisticated: far less obviously suspect and far more targeted, tailored and frequent.
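Technical controls can reinforce that training. The sketch below is a deliberately simple, hypothetical heuristic screener, not a vetted rule set: the trusted-domain list, urgency phrases and scoring weights are all assumptions chosen for illustration, and a real mail filter would rely on far more sophisticated signals.

```python
# Hypothetical heuristic email screener to support, not replace, staff training.
# Domains, phrases and weights below are illustrative assumptions only.
import re
from email.utils import parseaddr

TRUSTED_DOMAINS = {"example-bank.co.uk"}          # assumed internal allow-list
URGENCY_PHRASES = ("urgent", "verify your account", "act now", "payment overdue")

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Return a rough risk score; higher means more suspicious."""
    score = 0
    _, address = parseaddr(sender)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if domain not in TRUSTED_DOMAINS:
        score += 2                                 # unfamiliar sender domain
    text = f"{subject} {body}".lower()
    score += sum(1 for phrase in URGENCY_PHRASES if phrase in text)
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 3                                 # raw IP address in a link
    return score

print(phishing_score("ceo@examp1e-bank.xyz", "Urgent payment",
                     "Act now: http://192.168.12.7/pay"))
```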


Ransomware

AI is scaling up one of the biggest threats to businesses: ransomware. The introduction of AI allows large parts of the process to be automated, in contrast to a human-driven, targeted and tailored attack, which cannot be carried out at scale. Specifically, it allows systems to be monitored, code to be changed and new domains to be registered, all without time-consuming human intervention.

The availability of malicious technology to perform attacks is not restricted to cyber crime ‘specialists’. The likes of ransomware-as-a-service and vishing-as-a-service (voice phishing) represent a business model that provides paid-for access to targeted ransomware or vishing tools.

With the rise in generative AI accessibility, there has also been an increase in chatbot tools sold on the dark web, such as FraudGPT, WormGPT and ChaosGPT. Criminals are using these to create phishing emails, cracking tools and malware, and to identify vulnerabilities to exploit. As these threats grow, businesses must keep enhancing their security practices, stepping up employee education and increasing security monitoring and testing.

Fake websites

With the convenience AI offers, users can create websites rapidly and effortlessly, eliminating the need for coding skills. However, this accessibility benefits not only legitimate users but also fraudsters. In a matter of minutes, they can create fraudulent websites that appear entirely professional. Unsuspecting individuals are then directed to these deceptive sites, where fraudsters employ various tactics: lending legitimacy to their requests for information, prompting victims to provide personal details, or even enticing them to make payments. One way to combat fake websites is to avoid following links in emails that claim to be from legitimate businesses and instead navigate directly to the organisation's usual secure portal.
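As a small illustration of the ‘use the usual portal’ advice, the sketch below checks a link's host against an allow-list of known portal domains before anything is clicked. The domain names are hypothetical placeholders, and an exact-match check like this is only one layer of defence.

```python
# Illustrative check: does a link actually point at a known portal domain?
# The allow-list below is a hypothetical example.
from urllib.parse import urlparse

KNOWN_PORTALS = {"secure.example-bank.co.uk", "www.example-bank.co.uk"}

def is_known_portal(url: str) -> bool:
    """True only if the link's host exactly matches an approved portal."""
    host = (urlparse(url).hostname or "").lower()
    return host in KNOWN_PORTALS

print(is_known_portal("https://secure.example-bank.co.uk/login"))          # True
print(is_known_portal("https://secure.example-bank.co.uk.evil.io/login"))  # False
```

Note that the second example, a lookalike host with the real name embedded as a subdomain, is exactly the kind of address a hastily generated fraudulent site tends to use.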

Fake documentation

AI also enables fraudsters to easily create counterfeit identities and forged documents. One particularly concerning example is image generation AI, which can produce realistic images of individuals in various locations from a simple prompt. These fabricated identities can then be used to carry out fraudulent activities, deceiving unsuspecting victims. This technology not only amplifies the challenge of detecting and preventing fraud but also underscores the need for robust security measures to safeguard against such malicious practices. There is currently no regulation requiring AI-generated images to be flagged as fake or artificial, although Google has introduced a digital marker so that fake images can be identified through a reverse image search.

Deepfakes

Deepfake technology can not only generate images, as noted above, but can also produce videos and alter or clone voices in real time, artificially simulating a person's speech. So, for example, the ‘chief finance officer’ of a company can request an urgent payment from one of their team members over the phone, or gather valuable intelligence, all without arousing suspicion. A study conducted by computer security company McAfee found that one in four adults had encountered voice fraud or knew someone who had. Notably, 77% of the victims of AI voice scams reported losing money. The study also highlighted that half of all adults shared their voice online or on social media platforms at least once a week. As biometric security becomes ever more easily compromised, businesses need to consider what multifactor authentication can be used to maintain security.
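One widely used ‘something you have’ factor that does not depend on a voice or face is a time-based one-time password (TOTP, RFC 6238). The sketch below is a minimal standard-library implementation for illustration only: the shared secret is a placeholder, and a production deployment would rely on an established authentication service rather than hand-rolled code.

```python
# Minimal TOTP (RFC 6238) sketch: a rotating code from a shared secret can
# back up biometric or voice checks that deepfakes may undermine.
# The secret below is a placeholder for illustration.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

SHARED_SECRET = "JBSWY3DPEHPK3PXP"   # placeholder secret, never hard-code in practice
print("Current one-time code:", totp(SHARED_SECRET))
```

Verifying a payment request through an independent channel such as this, rather than trusting a familiar-sounding voice, is the point of multifactor authentication.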

AI on side

AI technology is enabling sophisticated forms of fraud that pose significant challenges in detection. Fraudsters are capitalising on AI's capabilities to carry out criminal activities, reshaping the landscape of fraudulent practices. Examples from real-world cases highlight the alarming reality and pace of AI-driven fraud. To address this emerging threat, it is crucial to adopt a proactive approach that emphasises awareness and education about AI's ongoing advancements, while investing in next-generation AI technology to fight the increased risk of fraud.