By ensuring that each layer of protection scours an application for unintended uses, you can find the flaws before the bad guys do.

Jerry Gamblin, Director of Security Research at Kenna Security (now part of Cisco)

October 9, 2018


The fact that a relatively simple flaw allowed an anonymous hacker to compromise 50 million Facebook accounts serves as a powerful reminder: When hackers, professional or amateur, find business logic errors, as defined by CWE-840, the exploitation can be incredibly damaging.

The worst part is that logic errors can't be found with automated tools alone. The best advice on how to avoid them comes from Aristotle: "Knowing yourself is the beginning of all wisdom."

In practice, that means three layers of review: At the DevOps level, use automated tools to catch the low-hanging fruit, and staff your teams with developers who code with security in mind. Use quality assurance (QA) teams that have deep knowledge of how your application should work. Lastly, beef up your bug bounty program to ensure an external review of your application by talented and experienced security researchers.

While logic errors are some of the hardest vulnerabilities to spot, setting up your teams and incentives the right way can help you catch them before a hacker does.

What Happened at Facebook
Logic errors, sometimes called design flaws, are flaws in code that allow users to take actions the original developers never foresaw. In the early days of e-commerce, some sports fans discovered they could change the date of an event in a URL to gain access to tickets before the rest of the public. The flaw: nobody told the system not to sell tickets before a certain date.
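That ticketing flaw can be sketched in a few lines. This is a minimal, hypothetical reconstruction, not any real ticketing system's code: the event names, fields, and function names below are invented for illustration.

```python
from datetime import datetime, timezone

# Hypothetical event catalog; names and fields are illustrative only.
EVENTS = {
    "finals-2018": {"on_sale": datetime(2018, 10, 1, tzinfo=timezone.utc)},
    "arena-2100": {"on_sale": datetime(2100, 1, 1, tzinfo=timezone.utc)},
}

def buy_ticket_vulnerable(event_id, sale_date_from_url):
    # Logic error: the on-sale check trusts a date the client supplied
    # in the URL, so changing the URL bypasses the restriction.
    event = EVENTS[event_id]
    if sale_date_from_url >= event["on_sale"]:
        return "ticket sold"
    return "not on sale yet"

def buy_ticket_fixed(event_id):
    # Fix: compare against the server's own clock, which the client
    # cannot forge, instead of anything taken from the request.
    event = EVENTS[event_id]
    if datetime.now(timezone.utc) >= event["on_sale"]:
        return "ticket sold"
    return "not on sale yet"
```

Nothing in the vulnerable version is "broken" in the conventional sense; every line does exactly what it was written to do, which is why scanners miss this class of bug.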

In Facebook's case, the flaw involved the ability to preview birthday videos as someone else using Facebook's "View As" function. Before posting a birthday video, users could see what the post would look like to a co-worker or friend by using View As. But once there, if a user right-clicked to view the page's source code, the source contained the access token of the person they were viewing as, rather than the token for their own profile.

Discovering the vulnerability required no special tools, just the ability to right-click. Exploiting it required only a minimal understanding of how access tokens work. And with knowledge of the flaw, most first-year computer science students could write code to scrape 50 million tokens. In other words, the breach was not an incredibly sophisticated attack.
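To make the mechanics concrete, here is a minimal sketch of the kind of extraction involved. The markup and token format below are invented for illustration; Facebook's actual page source and token handling are not public in this detail.

```python
import re

# Illustrative only: a mock page source that embeds an access token the
# way the "View As" page reportedly exposed one. The JSON key and token
# value here are made up, not Facebook's actual code.
page_source = '<script>var cfg = {"accessToken": "EAAB_fake_victim_token"};</script>'

def extract_token(html):
    # Once the flaw was known, a "View Source" plus a pattern match like
    # this is roughly all an attacker would have needed per account.
    match = re.search(r'"accessToken":\s*"([^"]+)"', html)
    return match.group(1) if match else None
```

The point is not the regex; it is that the barrier between "curious user" and "attacker holding someone else's credential" was a single page view.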

And while the exploit itself was not technically sophisticated, it took place in a fairly obscure portion of Facebook's user experience. The exploited code had been released 14 months previously, so the person behind this could have been someone who was simply in the right place at the right time to discover it, or it could have been an extremely methodical bug hunter.

Finding Logic Errors Is a Personnel Issue
Some apps have millions of lines of code for developers and production teams to review. It's not surprising that something like this flaw slipped through at Facebook. Frankly, it can happen to any company.

Logic errors are the result of a reasoning failure by another human. They cannot be reliably discovered with automated scanners, because no development tool yet created can replicate the creativity of the human mind interacting with an app.

Finding logic errors is, at heart, a people management issue, in an era when good cybersecurity talent is hard to find. At a basic level, developing secure code is the responsibility of the team that wrote it. But at a higher level, it's up to executives to institute processes and allocate resources that give development teams the time to uncover these issues.

The Three Layers of Checks Needed to Spot Logic Errors
The first step of minimizing logic errors is to create a development team that codes with security in mind. Developers should be maniacal about looking for logical errors in their code. To do that successfully, they need to be in tune with the business, its architecture, and the entire application, not just their subset.

Second, a strong QA team can also reduce the risk of design flaws, but it must have an in-depth understanding of how the application is designed. The QA team should also have a few people on board with an inclination to experiment outside the application's intended uses.

When a company unveils a photo sharing tool, you want someone on QA who thinks "I wonder what happens when I upload a terabyte at once" or "How can I delete photos on someone else's account?" QA isn't about making sure the app fits your use case, it's about making sure the app fits the real world.
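An abuse-case test of that second question might look like the sketch below, written against a hypothetical in-memory photo service. All class, method, and user names are invented for illustration; the pattern, attempting an action on behalf of the wrong user and asserting it fails, is what matters.

```python
# A minimal in-memory photo service, invented to show the kind of
# abuse-case test a curious QA engineer might write.
class PhotoService:
    def __init__(self):
        self.photos = {}  # photo_id -> owning user

    def upload(self, user, photo_id):
        self.photos[photo_id] = user

    def delete(self, user, photo_id):
        # This ownership check is exactly the line a logic error omits:
        # the happy-path tests all pass without it.
        if self.photos.get(photo_id) != user:
            raise PermissionError("cannot delete another user's photo")
        del self.photos[photo_id]

def test_cannot_delete_someone_elses_photo():
    svc = PhotoService()
    svc.upload("alice", "p1")
    try:
        svc.delete("mallory", "p1")
        raise AssertionError("logic error: cross-account delete succeeded")
    except PermissionError:
        pass
    # The photo must survive the attempted cross-account delete.
    assert "p1" in svc.photos
```

Happy-path tests confirm the app fits the spec; tests like this one confirm the spec fits the real world.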

Lastly, the importance of a bug bounty program can't be overstated, but it has to be managed for success, for two reasons. First, lots of firms operate on the "move fast and break stuff" model. They code fast and don't always follow best practices, which means they rely more heavily on security researchers to do their QA.

Second, if your security program is mature, you have automated security tools in place that free security researchers to spend more of their time looking for potential logic errors. You should compensate them accordingly.

Bug bounties should be proportionate to the size and importance of the app. They should also take into account the market for zero-day exploits, which is often willing to pay more than you.

Facebook is one among many companies that have been caught with a logic error. It certainly won't be the last. But by ensuring that each layer of protection, from the developers to QA to bug bounty hunters, scours the application for unintended uses, you can find the flaws before the bad guys do.


About the Author(s)

Jerry Gamblin

Director of Security Research at Kenna Security (now part of Cisco)

Jerry Gamblin's interest in security ignited in 1989 when he hacked Oregon Trail on his 3rd grade class Apple IIe. As a security evangelist, researcher, and analyst, he has been featured on numerous blogs, podcasts and has spoken at security conferences around the world. When he's not helping companies be more secure, he's usually taking his son to swim lessons or hacking embedded systems in cars and IoT devices. He is principal security engineer at Kenna Security, now part of Cisco.

