According to a new report from Carnegie Mellon University's Software Engineering Institute, malicious insiders within the financial industry typically get away with their fraud for nearly 32 months before being detected. For the report, the institute's CERT Insider Threat Center examined 67 insider fraud cases, as well as 13 external fraud cases, that occurred between 2005 and the present.
On average, the monetary impact of the fraud was $382,000 or more, with frauds that lasted longer than 32 months costing an average of $479,000. The average internal fraudster had been an employee for five years. By and large, their methods were not sophisticated, the report found. In more than half of the cases, the insider used some form of authorized access, including credentials that should have been revoked due to a job change or for some other reason.
"We also found that nearly 93 percent of fraud incidents were carried out by someone who did not hold a technical position within the organization or have privileged access to organizational systems," said Randy Trzeciak, technical lead of the Insider Threat Research Team, in a statement. "Many people think that insider crimes can be addressed solely by technical controls, but the most effective way to prevent and detect insider crimes is to make it an enterprisewide effort to master both the technical and behavioral aspects of the problem."
Sole reliance on technology is prone to failure, says Agiliance vice president of worldwide marketing Torsten George, because technology can only help by analyzing large volumes of data to identify suspicious patterns of activity.
"However, as the study pointed out, often insider attacks lack technical sophistication," he notes. "For instance, an organization might rely on data loss prevention systems or SIEM systems, but if an insider simply prints confidential paper documents and takes them home, these technologies will not help. Since these technologies most likely do not monitor the usage of printers, insider fraud would only be detected by behavioral observations from co-workers. Thus, it is important to implement training programs for managers and employees that educate around signs of insider threats and the employees responsibilities."
However, just as co-workers can look for anomalies by analyzing current behavior as it compares to past behavior, security systems should be tasked to do the same, argues Andreas Baumhof, chief technology officer at ThreatMetrix.
"The authors seem to focus on behavioral measures, but in the end you just have to make sure that you have a system that identifies anomalies in how the system is used," Baumhof tells Dark Reading. "This can also be done through access control, for example. One use case could be: Don't just verify whether someone is authorized to log in to a particular system [e.g., an internal CRM or database]. Make sure you also log who is looking at what. This way you can fairly easily identify people who are looking at things that they aren't supposed to look at."