
The rise of artificial intelligence (AI) has brought about a range of reactions in the financial services industry, from confusion and worry to excitement. As I interact with other data scientists, fraud analysts and risk managers, the benefits and risks of AI and its subdivision, Generative AI (GenAI), have become a common topic of discussion. One common observation in these conversations is the conflation of terms like Machine Learning (ML), AI and GenAI.
AI is here and it isn’t going anywhere. It has been in use in financial services for decades, and it’s important to appreciate both the capabilities and the limitations of these approaches. It is equally important to be a steward of the data, the solution and your customers. As we use AI to build data moats around consumers, these techniques can decrease friction for good actors while increasing friction for bad actors – the ultimate goal of effectively applying AI to fraud detection.
[1] There were exceptions, most notably, HNC’s application of neural nets in the transaction fraud space. HNC has since been acquired by FICO.
[2] Generative AI is more than capable of using live inputs as a learning dataset. When unconstrained, these sorts of models have historically been “chatbot” type elements that struggle to differentiate fact from fiction in the inputs without human oversight.
It’s important to demonstrate that analytically developed products are empirically derived and statistically sound. This means they are fair, accurate, transparent, well documented and free of bias. Decentralized application of statistical techniques like GenAI will bring risks if it is not incorporated into an institution’s evolving regulatory processes. While GenAI has made it easier to generate these models across an organization, it does not excuse the institution from these regulatory requirements. Indeed, no regulation has carve-outs for GenAI relative to other approaches.
There are numerous opportunities and risks in using AI in general, and GenAI in particular. Data science combined with AI, for example, can function as a force multiplier, a concept from military applications indicating that a combination of factors can accomplish more than any single factor alone. Just as physical mining adopted earth-moving equipment to replace picks and shovels, data mining can employ tools that expand its capabilities and insights beyond simple frequencies and univariate analyses.
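To make the mining analogy concrete, here is a minimal sketch in plain Python (all field names and transaction records are invented for illustration): a univariate frequency analysis blurs a fraud pattern that only the combination of two fields reveals.

```python
from collections import Counter

# Hypothetical transactions: (amount_band, hour_band, is_fraud).
# Neither field fully separates fraud on its own; their combination does.
transactions = [
    ("high", "day", 0), ("high", "night", 1), ("low", "night", 0),
    ("low", "day", 0), ("high", "night", 1), ("high", "day", 0),
    ("low", "night", 0), ("high", "night", 1),
]

def fraud_rate_by(index):
    """Univariate view: fraud rate broken out by a single field."""
    totals, frauds = Counter(), Counter()
    for row in transactions:
        totals[row[index]] += 1
        frauds[row[index]] += row[2]
    return {k: frauds[k] / totals[k] for k in totals}

def fraud_rate_by_pair():
    """Multivariate view: fraud rate by the combination of both fields."""
    totals, frauds = Counter(), Counter()
    for amount, hour, fraud in transactions:
        totals[(amount, hour)] += 1
        frauds[(amount, hour)] += fraud
    return {k: frauds[k] / totals[k] for k in totals}

print(fraud_rate_by(0))      # amount alone: "high" is only 60% fraud
print(fraud_rate_by(1))      # hour alone: "night" is only 60% fraud
print(fraud_rate_by_pair())  # the ("high", "night") combination is 100% fraud
```

Either field alone leaves fraud mixed with legitimate activity; only the combination isolates the pattern, which is the kind of interaction that multivariate models surface automatically and frequency tables miss.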
Force-multiplying statistical models can automate complex processes at scale, creating customized deliverables that serve specific needs or insights. These models are powerful, but despite these benefits, it is vital to closely monitor the strengths and weaknesses of AI models with respect to accuracy, bias and other elements.
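One basic form of such monitoring is tracking errors by customer segment. The sketch below, using hypothetical records and segment labels, compares false-positive rates across segments – a simple check for whether a fraud model is adding friction unevenly.

```python
# Hypothetical monitoring data: (segment, model_flagged, actually_fraud).
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]

def false_positive_rate(segment):
    """Share of a segment's legitimate activity that the model flags."""
    legit = [r for r in records if r[0] == segment and r[2] == 0]
    flagged = [r for r in legit if r[1] == 1]
    return len(flagged) / len(legit)

for seg in ("A", "B"):
    print(seg, round(false_positive_rate(seg), 3))
```

A persistent gap between segments (here, segment B's legitimate customers are flagged twice as often as segment A's) is the kind of accuracy and bias signal that warrants investigation before it erodes trust.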
It’s also imperative that models are grounded in the context in which they are applied. For instance, in a small community experiencing a widespread yet locally isolated health issue, does it make more sense to use treatment algorithms built around the resources of a community hospital or those of a leading global research hospital? GenAI and AI models are far more likely to be trained on data from the latter institution type, creating concerns if they are implemented at smaller community hospitals.
Responsible AI is vital to the successful implementation of AI. LexisNexis® Risk Solutions has developed a set of practical guidelines and checkpoints for developing and supporting products and services under a Responsible AI framework, and has established strategic guardrails for the permissible use of data science. Our organization is part of RELX – collectively, we adhere to the RELX Responsible AI Principles. While these principles are not a replacement for sound regulatory compliance, they define what good looks like and provide differentiation, especially in regulated markets.

