
AI in AML: Opportunities, Risks, and Regulatory Considerations
Artificial Intelligence (AI) is technology that enables computers and machines to simulate human intelligence and problem-solving. AI can process substantial amounts of data, recognise patterns in ways that would be impossible for humans, and make predictions, recommendations, or decisions that influence real or virtual environments. Whilst there are several categories of AI, machine learning, which involves developing algorithms designed to imitate intelligent human behaviour, is the most established and widely deployed form, with an estimated two-thirds of financial institutions already using the technology.
How AI Supports AML/CFT Objectives
AI-powered systems for AML purposes can comb through large volumes of data, processing, monitoring, and analysing transactions and flagging patterns that warrant closer scrutiny by the financial service provider (“FSP”).
The most significant benefits of using AI for AML compliance include the following:
Reduction of False Positives:
Traditional parameter-based sanctions screening systems typically generate a high volume of false positives, placing an onerous burden on compliance staff. AI has the potential to reduce the number of false positives, lowering compliance costs without undermining the FSP’s ability to meet its regulatory obligations.
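By way of illustration, the sketch below shows one way an FSP might triage screening alerts with a simple classifier trained on historical analyst dispositions, so that likely false positives are deprioritised rather than auto-closed. The file names, feature names and model choice are illustrative assumptions only, not a prescribed approach.

```python
# Minimal sketch: triaging sanctions-screening alerts with a classifier
# trained on historical analyst dispositions. The file names, feature
# names and model choice are illustrative assumptions, not a standard.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical alerts: each row is a past screening hit with the
# analyst's final disposition (1 = true match, 0 = false positive).
alerts = pd.read_csv("historical_screening_alerts.csv")
features = ["name_similarity", "dob_match", "country_risk_score", "list_type_weight"]
X, y = alerts[features], alerts["true_match"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))

# Score today's alerts: low-probability hits are routed to a lighter review
# queue rather than auto-closed, so regulatory obligations are still met.
new_alerts = pd.read_csv("todays_screening_alerts.csv")
new_alerts["match_probability"] = model.predict_proba(new_alerts[features])[:, 1]
print(new_alerts.sort_values("match_probability", ascending=False).head())
```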
AI-Driven Risk Assessments:
AML compliance tools that employ AI use sophisticated algorithms to provide deeper insight into money laundering risks, offering a practical means of detecting and analysing money laundering activity digitally and giving the FSP greater awareness of its money laundering risk.
Ongoing Monitoring:
AI technology can perform ongoing monitoring of transactions to identify complex behavioural patterns in vast volumes of data in real time, which would be impossible for humans to do.
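The sketch below illustrates one common unsupervised approach to such monitoring, an Isolation Forest over a small set of transaction features. The feature set, file name and contamination rate are assumptions made for illustration; a production system would be tuned, validated and governed by the FSP.

```python
# Minimal sketch: unsupervised transaction monitoring with an Isolation
# Forest. The feature set, file name and contamination rate are assumptions
# for illustration; a production system would be tuned and validated.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction feed with pre-computed behavioural features.
txns = pd.read_csv("transactions.csv")
features = ["amount", "hour_of_day", "counterparty_risk", "daily_velocity"]

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(txns[features])

# predict() returns -1 for outliers; these are flagged for analyst review,
# not automatically reported.
txns["is_anomalous"] = model.predict(txns[features]) == -1
flagged = txns[txns["is_anomalous"]]
print(f"{len(flagged)} of {len(txns)} transactions flagged for review")
```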
Detection of Fraudulent Activity:
AI technology can automatically stop the creation of fake accounts, whether attempted by bots or by highly coordinated human fraud rings. It detects bot-like behaviour (irregular typing speed, mouse movements, scrolling patterns), hesitation, and excessive tab switching in the user’s browser, and flags newly created email addresses and log-in details that were copied and pasted rather than typed.
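As a simplified illustration, the sketch below scores a sign-up session against the behavioural signals described above. The signal names, weights and threshold are hypothetical; real systems typically learn such weights from labelled sessions rather than hand-setting them.

```python
# Minimal sketch: a heuristic score over the behavioural signals mentioned
# above. The signal names, weights and threshold are hypothetical; real
# systems typically learn such weights from labelled sessions.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    typing_interval_stddev_ms: float  # near-zero variance suggests a bot
    credentials_pasted: bool          # log-in details pasted rather than typed
    tab_switches: int                 # excessive tab switching during sign-up
    email_age_days: int               # newly created email address

def bot_risk_score(s: SessionSignals) -> float:
    """Return a 0-1 score; higher means more bot- or fraud-ring-like."""
    score = 0.0
    if s.typing_interval_stddev_ms < 5:  # unnaturally uniform keystrokes
        score += 0.35
    if s.credentials_pasted:
        score += 0.25
    if s.tab_switches > 20:
        score += 0.15
    if s.email_age_days < 2:
        score += 0.25
    return min(score, 1.0)

session = SessionSignals(typing_interval_stddev_ms=2.1, credentials_pasted=True,
                         tab_switches=25, email_age_days=1)
if bot_risk_score(session) >= 0.7:
    print("Block account creation and escalate for manual review")
```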
Decision Making:
Because AI technology continuously analyses data from multiple sources, its accuracy can improve over time, supporting better decisions, including in new, previously unidentified scenarios.
Challenges
While promising, AI adoption must be approached cautiously as the use of AI technology may generate new risks or intensify existing ones.
- Machine learning models are increasingly large and complex and therefore demand appropriate risk management and control processes.
- A lack of machine learning model explainability means that the way in which the machine works cannot always be easily validated, controlled and governed.
- Machine performance may suffer in scenarios the model has not previously encountered, or where human experience, knowledge and judgement are required.
- Staff may not be sufficiently trained to use the system and understand and address risks.
- Issues with data quality and algorithms, including biased data, can produce unintended results, inaccurate predictions, and lead to poor decisions.
Regulation
Notwithstanding the challenges, the Cayman Islands Monetary Authority, which takes a technology-neutral stance, acknowledges that FSPs may incorporate AI technology into their AML compliance frameworks; however, it encourages a responsible approach to adoption. FSPs relying on AI-driven technology must have documented policies and procedures in place and must perform and document formal risk assessments of these technologies, which must be regularly reviewed and updated.