Dark Reading is part of the Informa Tech Division of Informa PLC


10/30/2017
12:15 PM
Larry Loeb

CAPTCHA Is Vulnerable

A group of researchers has demonstrated a vulnerability in the widely used CAPTCHA scheme, one that may mean the end of CAPTCHA as we know it.

CAPTCHA is an image-based system used by many websites to foil interactions with programs pretending to be humans. It presents an image containing objects such as letters or numbers, deliberately run together and distorted inside the image. Pulling those characters back apart requires a preexisting understanding of what they look like, which defeats automated classifiers. CAPTCHAs have proven useful for years.

One problem, though: Twelve researchers got together and found a way to beat it. They were able to decipher about two thirds of the CAPTCHAs they were given, using two orders of magnitude less training data than any previous method that had attempted this.

Their work, just published in Science, outlines how they reproduced the way the eye functions and the computation that goes on behind the scenes with the information it sees.

The AI algorithm has components that first recognize the edges of a viewed shape and then categorize it. Another component accounts for the angle at which the shape is being viewed. Only then does a final component attempt to match the shape against a standard form of a letter or number (stored inside the AI as a Georgia-font character).
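To make the final stage of that pipeline concrete, here is a toy sketch of template matching: comparing an observed glyph against stored standard forms and picking the best fit. The 5x5 grids and the two-letter alphabet are illustrative assumptions, not the RCN's actual model or the Georgia-font templates it stores.

```python
# Toy illustration of the "match against a stored standard form" stage.
# Glyphs are 5x5 binary grids; TEMPLATES stands in for the stored
# letter forms (hypothetical data, not the researchers' model).

TEMPLATES = {
    "T": [
        "#####",
        "..#..",
        "..#..",
        "..#..",
        "..#..",
    ],
    "L": [
        "#....",
        "#....",
        "#....",
        "#....",
        "#####",
    ],
}

def pixel_agreement(a, b):
    """Fraction of cells where two glyph grids agree."""
    total = sum(len(row) for row in a)
    same = sum(
        1
        for row_a, row_b in zip(a, b)
        for ca, cb in zip(row_a, row_b)
        if ca == cb
    )
    return same / total

def classify(glyph):
    """Return the letter whose stored template best matches the glyph."""
    return max(TEMPLATES, key=lambda letter: pixel_agreement(glyph, TEMPLATES[letter]))

# A "T" with one corrupted pixel still matches the T template.
noisy_t = [
    "#####",
    "..#..",
    "..#..",
    ".##..",  # stray extra pixel
    "..#..",
]
print(classify(noisy_t))  # → T
```

A real solver would of course operate on rendered pixels and handle distortion and pose, but the closing step, scoring candidates against canonical letter forms, has this same shape.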

The researchers call this kind of AI a Recursive Cortical Network (RCN). It differs from other AI CAPTCHA breakers, which are built on the Convolutional Neural Network (CNN) model. Slight changes to a CAPTCHA's segmentation will throw off a CNN-based solver, but they do not stop an RCN.

In one of the scariest sentences in their write-up, the authors say, "RCN breaks the segmentation defense in a fundamental way and with very little training data, which suggests that websites should move to more robust mechanisms for blocking bots." That's academic-speak for: "You guys are hosed."

Websites will need to remind themselves specifically why they don't want automated processes to pass, and do so fairly fast. It may be, for example, that they don't want a bot automatically registering for services. Capping the number of registrations allowed in a given time period would serve the same purpose without being vulnerable the way CAPTCHA is.
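The rate-limit alternative mentioned above can be sketched in a few lines: cap how many registrations a single client can make within a sliding time window. The window length, limit, and client identifier here are illustrative assumptions, not values from the article.

```python
# Minimal sketch of per-client registration rate limiting.
# WINDOW_SECONDS and MAX_REGISTRATIONS are arbitrary example values.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # look back one hour
MAX_REGISTRATIONS = 3   # allowed per client, per window

_history = defaultdict(deque)  # client_id -> timestamps of recent registrations

def allow_registration(client_id, now=None):
    """Return True if this client is still under the per-window limit."""
    now = time.time() if now is None else now
    recent = _history[client_id]
    # Drop timestamps that have aged out of the window.
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    if len(recent) >= MAX_REGISTRATIONS:
        return False
    recent.append(now)
    return True

# A bot hammering the endpoint is cut off after the third attempt.
results = [allow_registration("bot-1", now=t) for t in (0, 1, 2, 3)]
print(results)  # → [True, True, True, False]
```

In production this bookkeeping would live in shared storage (e.g., a cache keyed by account or IP) rather than process memory, but the decision logic is the same.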

Whatever the purpose, the CAPTCHA field is no longer a valid guardian. It can be defeated without any signal that it has been defeated, and it is just a small matter of programming for threat actors to incorporate RCN attacks into their operations.

While the front door lock seems to have a new master key out there, security people have to consider the framework around that door and how it can be strengthened. Reviewing why a CAPTCHA field was used in the first place may help in figuring out what to do next.


— Larry Loeb has written for many of the last century's major "dead tree" computer magazines, having been, among other things, a consulting editor for BYTE magazine and senior editor for the launch of WebWeek.
