Scammers are quick to adopt new and emerging technologies to advance their schemes, making it difficult for the average person to keep pace with their evolving tactics.
Recently, we have seen a surge in artificial intelligence-driven fraud attempts and deepfake scams. As AI-powered tools rapidly become more advanced and adaptable, scammers are able to target not just unaware individuals, but also those who are otherwise well-versed in their techniques.
The rise of Fraud-as-a-Service
As AI programmes become more sophisticated and accessible, often requiring nothing more than a download to run, Fraud-as-a-Service (FaaS) has emerged as a new threat.
FaaS means a scammer can sell AI-based products and services capable of carrying out fraud to other criminals, who can then use these tools to attack victims. This has not only made committing fraud as simple as downloading an app but has also created a multiplier effect.
Generative AI-enabled fraud
Generative AI tools empower scammers to scale their operations with precision. AI tools can be used to create personalised phishing campaigns, greatly increasing the likelihood of deceiving recipients. Emails written by generative AI can come across as authentic messages from reputable brands, accurately mimicking legitimate communications and bypassing traditional security awareness training.
AI is also increasingly used to create fake chatbots that impersonate legitimate customer service representatives and manipulate victims into revealing sensitive information, as well as to generate fake documents, such as invoices or contracts, to deceive individuals or organisations.
As these campaigns are automated, they can be executed on a scale that was previously not feasible. Thousands of victims are being attacked simultaneously with minimal effort.
Deepfakes and synthetic identity fraud
Deepfakes are proving to be a significant risk to the banking industry. The tactic involves using AI to alter a person’s appearance or voice in real time, making them appear to be someone they are not, such as a chief executive or a senior manager.
Last year, a deepfake scam managed to trick a financial executive at a multinational company in Hong Kong into paying out $25 million. The employee was on a video call with a scammer impersonating the company’s chief financial officer.
Deepfake synthetic voices can even be used to trick voice authentication systems into authorising a fraudulent transaction. Deepfakes are a sophisticated way to enable social engineering and manipulation, fooling masses of people in a way that is difficult to counter.

Synthetic identity fraud is the use of personally identifiable information to fabricate a person or entity in order to commit a dishonest act for personal or financial gain. In the US, financial institutions lost $994 million to this type of fraud through credit cards alone in the first half of 2023, an 8.4 per cent increase on the previous year, according to global insights company TransUnion.
Turning the tables
As educating people to identify deepfakes is challenging, institutions should explore other ways to combat such fraud. AI and machine learning can themselves counter scams effectively: used for real-time monitoring of customer transactions and scenario-based risk assessments, such technology can detect unusual patterns as they occur, reducing false positives and identifying emerging tactics.
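As a rough illustration of this kind of monitoring, the Python sketch below trains an unsupervised anomaly detector on a customer’s past transactions and flags outliers among incoming ones. It assumes scikit-learn and invented feature names (amount, hour of day, merchant risk score); it is a minimal sketch of the technique, not a description of any institution’s actual system.

```python
# Minimal sketch: flagging unusual transactions with an Isolation Forest.
# Feature names, data and thresholds are illustrative, not a real system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated history of one customer's transactions:
# [amount, hour_of_day, merchant_risk_score]
history = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1000),  # typical amounts
    rng.integers(8, 22, size=1000),                 # daytime activity
    rng.uniform(0.0, 0.3, size=1000),               # low-risk merchants
])

# Train an unsupervised model on the customer's normal behaviour
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(history)

# Score incoming transactions; predict() returns -1 for outliers
incoming = np.array([
    [60.0, 14, 0.1],   # ordinary afternoon purchase
    [9500.0, 3, 0.9],  # large 3am payment to a high-risk merchant
])
for txn, label in zip(incoming, model.predict(incoming)):
    status = "FLAG for review" if label == -1 else "pass"
    print(f"amount={txn[0]:>8.2f} hour={int(txn[1]):>2} -> {status}")
```

In practice such a model would run alongside rule-based checks, with flagged transactions routed to human analysts rather than blocked outright.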
For instance, a UK telecom provider has employed an AI tool that mimics a gullible “granny” to waste scammers’ time, keeping them occupied for up to 40 minutes and reducing harm to real victims.
The human element can also complement technological defences: banks and other institutions must design and implement regular training programmes to ensure staff can recognise and respond to sophisticated scams. A collaborative approach further enhances protection by integrating fraud prevention efforts with anti-money laundering and cyber security teams, and by fostering the sharing of intelligence between institutions to strengthen defences across the entire industry.
Balancing security and user experience
Robust security measures can sometimes be seen as detrimental to user experience, creating friction points in the customer journey. Compliance with data privacy laws can also necessitate additional steps for customers to complete.
It is important to streamline the customer experience while being careful not to compromise security.
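One common way to manage this trade-off is risk-based, or step-up, authentication: routine activity passes with minimal friction, while higher-risk actions trigger additional checks. The sketch below illustrates the idea with hypothetical risk factors and thresholds; a real system would weigh far more signals.

```python
# Illustrative sketch of risk-based (step-up) authentication: low-risk
# actions pass with minimal friction, higher-risk ones require more
# verification. Risk factors and thresholds are hypothetical examples.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    new_device: bool
    unusual_location: bool
    large_transfer_pending: bool

def risk_score(attempt: LoginAttempt) -> int:
    # Each factor adds to the score; weights are illustrative
    score = 2 if attempt.new_device else 0
    score += 2 if attempt.unusual_location else 0
    score += 3 if attempt.large_transfer_pending else 0
    return score

def required_checks(attempt: LoginAttempt) -> list:
    score = risk_score(attempt)
    if score >= 5:
        return ["password", "one-time code", "biometric check"]
    if score >= 2:
        return ["password", "one-time code"]
    return ["password"]  # frictionless path for routine activity

print(required_checks(LoginAttempt(False, False, False)))  # ['password']
print(required_checks(LoginAttempt(True, True, True)))     # full step-up
```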
Evolving threat landscape
As cyber criminals continually innovate, fraud tactics are becoming ever more dynamic. Institutions must remain proactive, investing in the latest technologies and collaborating with industry peers and regulators to share intelligence. Staff must be trained regularly to recognise and respond to sophisticated scams, and steps must be taken to raise customer awareness of how to identify potential fraud risks.
By leveraging AI’s potential to detect and prevent fraud, organisations can protect their customers and maintain trust in an increasingly digital world.
Gurcharan Chhabra is head of the fraud prevention and intelligence division at Mashreq