Dark Reading is part of the Informa Tech Division of Informa PLC


Threat Intelligence

10/10/2017
09:15 AM

Artificial Intelligence: Experts Talk Ethical, Security Concerns

Global leaders weigh the benefits and dangers of a future in which AI plays a greater role in business and security strategy.

CYBERSEC EUROPEAN CYBERSECURITY FORUM - Kraków, Poland - The future of artificial intelligence was a hot topic at the third annual CYBERSEC Cybersecurity Forum, where security professionals representing Poland, the Netherlands, Germany, and the United Kingdom discussed the pitfalls and potential of AI, and its role in the enterprise.

Is it too soon to have this discussion? Absolutely not, said Axel Petri, SVP for group security governance at Deutsche Telekom AG. "Now is the time to ask the questions we'll have answers for in ten, twenty years," he added. Cybersecurity supported by AI and machine learning can leverage data to generate more insight and fight fraud.

"You are able to use the workforce you have in a smarter and better way by using AI," he said. "How nice would it be if we could have a junior SOC analyst act as well as the smartest guy in the SOC, of which you currently have very few?"
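Petri's SOC scenario can be pictured with a small, purely illustrative sketch (not something presented at the panel): a machine-scored triage aid that flags unusual activity so a junior analyst sees the same outliers a veteran would spot. The feature (hourly login counts) and the z-score threshold are assumptions for illustration only.

```python
# Hypothetical triage aid: flag hours whose login counts deviate strongly
# from the baseline, surfacing them for analyst review. The z-score
# threshold of 3.0 is an assumed value, not a recommendation.
from statistics import mean, stdev

def flag_anomalies(logins_per_hour, z_threshold=3.0):
    """Return indices of hours whose login counts deviate from the
    baseline by more than z_threshold standard deviations."""
    mu = mean(logins_per_hour)
    sigma = stdev(logins_per_hour)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(logins_per_hour)
            if abs(v - mu) / sigma > z_threshold]

# Typical hourly counts with one burst that merits analyst attention
counts = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13, 240, 12]
print(flag_anomalies(counts))  # prints [10]
```

Real SOC tooling uses far richer models, but the principle is the same: the machine ranks what to look at, and the analyst supplies the judgment.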

Andrzej Zybertowicz, research fellow at Nicolaus Copernicus University and social adviser to the President of Poland, explained that while locally used artificial intelligence can increase cybersecurity, the effects are "potentially disastrous" on a global scale. It's time to discuss the broad risks of AI and regulations to avoid them, he said, and others agreed.

"The problem is, there are so many risks," said Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, who believes there are both opportunities and threats in the field. "There's not just one thing."

Sharkey presented an example in the medical field, where AI could help doctors research diseases. This is a good thing, he said, but what happens when the machine is right long enough and the doctor stops questioning it? Should a doctor automatically agree with a machine? What are the implications if they do, and the machine someday gets it wrong?

"What's core is making sure there's clear accountability, and being concerned with the types of controls we seek in AI," Sharkey continued. Deep learning and deep reinforcement learning will be essential as AI applications spread into child care, elder care, transport, and agriculture, he said, and "future-proofing" AI should take its implications for human rights into account.

"Artificial intelligence transforms everything around us; every industry, our health, our education," explained Aleksandra Przegalinska-Skierkowska, assistant professor at Kozminski University and research fellow for Collective Intelligence at MIT. "Especially if we want autonomous vehicles or virtual agents, we need a code of conduct for them."

We are at a point when people have begun to reflect on issues related to machine ethics and morality, she added. Building a structure for ethical AI systems should be a collaborative effort, especially as more businesses generate connected products.

"From the perspective of a company selling digital services, we should put one very important thing at the center of our attention -- this is a customer using the AI," said Petri. "What we need is the trust of users in every technical system. If we don't have their trust, we don't have users."

The discussion of regulatory measures soon turned to the threat actors who will break them. Almost every technology is dual-use and can be weaponized, Zybertowicz pointed out. "We are here talking about rules, but we are dealing with a group of bad actors who don't play by the rules," said Petri. "The bad guys are innovating faster than the good guys."


Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial ...
