Dark Reading is part of the Informa Tech Division of Informa PLC


Vulnerabilities / Threats

9/19/2016
08:00 AM
Mike Baker
Commentary

What’s The Risk? 3 Things To Know About Chatbots & Cybersecurity

Interactive message bots are useful and becoming more popular, but they raise serious security issues.

Fueled by the exponential growth in mobile messaging, chatbots — interactive messaging bots that harness recent advances in artificial intelligence and machine learning — are the hottest new technology going right now. Facebook opened up its Messenger platform to bot developers earlier this year; messaging app Telegram is offering developers up to $1 million in prizes to develop bots that are fast, useful, and work in inline mode; and over 20 million people chat with the Xiaoice bot on the Chinese micro-blogging service Weibo. Even the White House has gotten into the act with its Obama Facebook chatbot.

Chatbot technology is still in its infancy, but it’s quickly being embraced by businesses because of its vast potential for sales, marketing, and customer service. Chatbots stand to help organizations build deeper relationships with their customers and improve service quality, while saving money by automating certain administrative tasks.

However, as organizations build and deploy enterprise chatbots, it’s important to step back for a moment and consider the security implications of this brave new technology.

Be Aware of the Chatbot’s Channel Encryption
For maximum security, chatbot communication should be encrypted, and chatbots should be deployed only on encrypted channels. While these sound like obvious safeguards, unfortunately, it’s not that simple. An in-house bot that runs on an organization’s system can be set up on a private, encrypted channel, but if an organization wishes to deploy a chatbot on a public channel such as Facebook Messenger, it’s at the mercy of that platform’s security capabilities.

While Facebook is testing end-to-end encryption for its Messenger platform, the feature is still in beta and isn’t widely available. Until public channels begin offering encryption services, organizations should be wary of the types of chatbots they deploy on those platforms. Chatbots used on unencrypted channels shouldn’t accept or transmit sensitive information, and for the protection of the organization, these bots shouldn’t have access to the organization’s systems.
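One way to enforce this rule is to gate the bot’s capabilities on the encryption status of the channel it’s running on. The sketch below is purely illustrative; the `Channel` class, the `end_to_end_encrypted` flag, and the response strings are all hypothetical names, not part of any real messaging API.

```python
from dataclasses import dataclass


@dataclass
class Channel:
    """Hypothetical descriptor for a chatbot deployment channel."""
    name: str
    end_to_end_encrypted: bool


def can_handle_sensitive_data(channel: Channel) -> bool:
    """Only allow sensitive requests on end-to-end encrypted channels."""
    return channel.end_to_end_encrypted


def respond(channel: Channel, message: str) -> str:
    # "account" is a stand-in for whatever keywords flag a sensitive request.
    if "account" in message.lower() and not can_handle_sensitive_data(channel):
        # Decline and redirect rather than accept sensitive data in the clear.
        return "For security, please use our encrypted support portal for account questions."
    return "How can I help you today?"


print(respond(Channel("public-messenger", False), "I need my account balance"))
```

The same check can also gate back-end access: a bot on an unencrypted public channel would simply never be handed credentials to internal systems.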

Establish Rules Regarding Chatbot Data Handling and Storage
By their nature, chatbots collect information from users; that’s how they respond to questions, and it’s how they train themselves to get better at their “jobs” over time. Where this information is stored, how long it’s stored, how it’s used, and who has access to it must be addressed, especially in highly regulated industries that handle very sensitive information, such as healthcare and finance. Before implementing a chatbot, organizations must establish rules regarding the data the bot will gather and make these rules clear to the customers who will be using the bot.

Additionally, companies must consider where this data will reside, especially if the bot collects personal or sensitive information. This is another issue that limits the functionality of bots on public platforms until the platforms can ensure secure storage and provide additional tools regarding what gets stored and for how long.
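Those rules can be made concrete in code. The sketch below assumes a simple policy (redact obvious PII before a transcript is stored, and tag each record with a deletion deadline); the patterns and the 30-day retention window are assumptions for illustration, not recommendations.

```python
import re
from datetime import datetime, timedelta, timezone

# Illustrative patterns for obvious PII; real deployments need broader coverage.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

RETENTION = timedelta(days=30)  # assumed policy window


def store_transcript(text: str) -> dict:
    """Redact PII, then tag the record with a purge deadline."""
    redacted = SSN_RE.sub("[REDACTED-SSN]", text)
    redacted = CARD_RE.sub("[REDACTED-CARD]", redacted)
    return {
        "text": redacted,
        "delete_after": datetime.now(timezone.utc) + RETENTION,
    }


record = store_transcript("My SSN is 123-45-6789, card 4111 1111 1111 1111")
print(record["text"])
```

Writing the retention window down as a constant in the storage path, rather than as a line in a policy document, makes the rule auditable and enforceable.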

Be on the Lookout for Criminal Chatbots
Finally, organizations must be aware of the bigger picture of chatbot security. As chatbots become better at imitating humans, the technology will be used by hackers in phishing schemes and other social engineering hacks. For example, a chatbot designed to imitate a customer or a vendor could strike up a conversation with an employee through a messaging app. After rapport has been established, the chatbot could entice the employee to click on a malicious link or hand over sensitive information.

This has already happened on the consumer level; recently, a number of men using the Tinder dating app were swindled by a bot that pretended to be a female user. After a few back-and-forth messages, the chatbot convinced the men to click on a link to become “Tinder verified.” The link required that they input their credit card information, at which time they were unwittingly signed up for a recurring online porn subscription.

Until technology can be developed to identify and intercept malicious chatbots, the best defense is to train employees to never click on links sent by customers or vendors, and to prohibit them from transmitting sensitive information through email or messaging services. Organizations should be doing these things already to defend against “traditional” phishing schemes.
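That training rule can also be backed by a simple automated filter. The sketch below is a hypothetical heuristic, not a product feature: it quarantines inbound messages that both come from outside the organization and contain a link, mirroring the "never click links from customers or vendors" rule. The domain names are placeholders.

```python
import re

# Rough link detector; real filters would also resolve shorteners, etc.
URL_RE = re.compile(r"https?://\S+|\bwww\.\S+", re.IGNORECASE)


def should_quarantine(sender_domain: str, body: str,
                      org_domain: str = "example.com") -> bool:
    """Flag external messages containing links for human review."""
    is_external = sender_domain.lower() != org_domain
    has_link = bool(URL_RE.search(body))
    return is_external and has_link


print(should_quarantine("vendor.net", "Get verified here: http://bit.ly/xyz"))  # True
```

A filter this crude will produce false positives, but routing flagged messages to a human reviewer is a far cheaper failure mode than an employee clicking a malicious link.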

Because chatbot technology is so new, specific security protocols are still being developed, particularly regarding chatbots deployed on public platforms such as Facebook, and the rapid pace of chatbot development means that new features — and threats — are continuously emerging. 

Chatbots have the potential to transform how business is conducted online. They can also be quite destructive and end up causing cybersecurity nightmares for organizations that don’t employ them properly, especially at this early stage. It is critical for organizations to not get caught up in the frenzy surrounding this new technology and to take a conservative, deliberate approach to chatbot development and deployment, particularly on public platforms.


Mike Baker is founder and Principal at Mosaic451, a bespoke cybersecurity service provider and consultancy with specific expertise in building, operating, and defending some of the most highly secure networks in North America. Mosaic451 offers a unique blend of deep technology ...
Comments
AshishJ981
User Rank: Apprentice
1/19/2018 | 6:26:29 AM
Re: As tech advances, so do threats...

The question also arises: can platforms that integrate with chatbot platforms provide end-to-end encryption? And if not, how is that communication channel secure? We at Engati (www.engati.com) have started the journey. Do read our collection of blogs at Engati.com, test our platform, and provide us feedback.

edyang
User Rank: Apprentice
9/29/2016 | 12:01:59 PM
Re: Criminal
Good point. I bet if you ask the average person what a chatbot is, they'll stare at you blankly...
MikeBaker
User Rank: Author
9/28/2016 | 1:50:03 PM
Re: Criminal
That's a good question.

My impression has been that the average consumer isn't even aware of what chatbots are, or that they're in use. Until awareness increases, criminal chatbots could be the perfect way for bad actors to programmatically split-test different approaches, questions, and scenarios that until now have only been tried through old-fashioned, manual social engineering.
Whoopty
User Rank: Ninja
9/21/2016 | 7:51:38 AM
Criminal
Criminal chatbots are a really interesting idea. I wonder if people will become less conscious of their own security when talking to bots, as they'll assume the bots are from a legitimate source?

They may assume that bots are stupid, but I doubt anyone would expect a bot to screw them over. 
EdnaBaron
User Rank: Apprentice
9/21/2016 | 6:25:29 AM
Re: As tech advances, so do threats...
Chatbot technology is a highly discussed topic, and this article covers the important points in this particular area. Cybersecurity matters more than ever, and it plays a key role in keeping confidential data safe.
edyang
User Rank: Apprentice
9/19/2016 | 12:44:55 PM
As tech advances, so do threats...
It's an exciting time. Artificial intelligence, machine learning, chatbots, self-driving cars, augmented reality games. But as technology advances, it's a given that there will be cybersecurity threats. The difference will be the magnitude of the impact. What if malware-infected chatbots siphoned financial data, or worse, Social Security numbers, from unsuspecting users? What if IoT healthcare devices were hacked? It's an exciting but also a dangerous time.