Commentary
Cleber Martins
6/20/2019 02:00 PM

'Democratizing' Machine Learning for Fraud Prevention & Payments Intelligence

How fraud experts can fight cybercrime by 'downloading' their knowledge and experience into computer models.

Throughout the financial industry, executives are acknowledging that machine learning can quickly and successfully process the vast volume and variety of data within a bank's operations — a task that's nearly impossible for humans to do at the same speed and scale. However, only about half of enterprises are using machine learning. Why? A bank-wide machine learning project is a huge undertaking, requiring major investment in technology and human resources.

Moreover, past projects that have failed to deliver a return on investment (ROI) have led to internal disappointment and, in some cases, mistrust in the technology. However, with smaller, more tactical machine learning projects that can be rapidly deployed, banks can reap the benefits from Day One. One such project is fraud prevention.

The current barrier to delivering fraud protection through machine learning often lies in the solutions themselves, which require data scientists to create the initial models. A fraud expert knows that a specific correlation between transaction types in a sequence is a strong fraud indicator, but a data scientist would need many more iterations over the same data to draw the same conclusion.
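To make the contrast concrete, here is a minimal sketch (in Python, not drawn from any particular product) of how one expert-known pattern could be encoded directly as a model feature. The field names (ts, channel, amount) and the specific pattern are illustrative assumptions.

```python
from datetime import timedelta

def expert_sequence_flag(transactions):
    """Return 1 if a small card-not-present 'test' charge is followed within
    an hour by a much larger purchase on the same card, 0 otherwise.
    This is the kind of sequence correlation a fraud expert can name up front."""
    txs = sorted(transactions, key=lambda t: t["ts"])
    for earlier, later in zip(txs, txs[1:]):
        if (earlier["channel"] == "card_not_present"
                and earlier["amount"] <= 2.00
                and later["amount"] >= 50 * earlier["amount"]
                and later["ts"] - earlier["ts"] <= timedelta(hours=1)):
            return 1
    return 0
```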

Fraud Experts or Data Scientists?
A machine learning model is only as good as the instructions it is given. This is particularly challenging when setting up fraud prevention algorithms because fraud represents only a small fraction of overall transactions, which means the model has far fewer examples to learn from. As a result, solutions that let fraud experts, rather than data scientists, input the initial correlations will deliver results faster: the experts are more familiar with the circumstances in which fraud is likely to occur, so they can seed the model with the correlations that matter and help it identify new ones across different data sets. This shortens the organization's time to ROI on its machine learning projects.
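As a rough illustration of the class-imbalance point, the sketch below assumes scikit-learn and uses synthetic data in which roughly 1% of samples are fraudulent; class weighting is one common way to keep the rare fraud class from being ignored.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for transaction features: about 1% of samples are fraud.
X, y = make_classification(n_samples=20000, n_features=12, weights=[0.99],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=0)

# class_weight="balanced" reweights the rare fraud class so the model does not
# simply learn to label every transaction as legitimate.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```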

This "democratization of machine learning" empowers fraud experts to "download" their knowledge and experience into computer models. It's particularly effective in areas where fraud has not yet reached critical mass to support fraud experts as they use their experience and instincts to investigate certain transactions or customer behaviors, even if they are not yet fraudulent or highly indicative of fraud. Feedback based on these kinds of instincts will aid the machine learning model to fine-tune itself, and improve accuracy and consistency in identifying more complex fraud indicators.

Teaching the Machine
Continuous involvement of fraud experts is key to developing the machine learning model over time. Bringing fraud experts closer to machine learning gives them a transparent view into the models, so they can apply strategies and controls that make the best use of the intelligence the models produce. As they feed input into the model, they can investigate its output as it generates intelligence and use their human expertise to confirm fraud instances. They can also combine their all-encompassing view of the customer with the insights the model generates correctly.
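One simple way to support that investigation is a review queue ordered by model score, so analysts see the transactions the model is most suspicious about first. The sketch below is illustrative only; the alert fields are assumed.

```python
def build_review_queue(alerts, top_n=50):
    """alerts: list of dicts such as {"txn_id": str, "score": float}.
    Returns the highest-scoring alerts for analysts to confirm or reject."""
    return sorted(alerts, key=lambda a: a["score"], reverse=True)[:top_n]

queue = build_review_queue([
    {"txn_id": "t-1042", "score": 0.93},
    {"txn_id": "t-1043", "score": 0.08},
])
for alert in queue:
    print(alert["txn_id"], alert["score"])  # analyst confirms or rejects each case
```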

If the "human intelligence" confirms the insights, these can be fed back into the model. If the model consistently flags a correlation between data sets as potential fraud and the analysts consistently confirm this as fraud, then a strategy and control based on this information can be added to underpin the model. These can, in turn, be used to automate the decisions that fraud analysts themselves have consistently made, and, ultimately, reduce the need for automatic actions that impact the customer experience, such as freezing a credit card when a suspicious purchase is made — even though the purchase is legitimate.

Cleber Martins is head of payments intelligence at ACI Worldwide. He has nearly two decades' experience in fraud prevention and anti-money laundering strategies.