Network Security

8/13/2019
07:00 AM
Oliver Schonschek

European Approach to Artificial Intelligence: Ethics Is Key

The socio-economic, legal and ethical impacts of AI must be carefully addressed, says the European Commission.

Artificial intelligence (AI) has become an area of strategic importance in the European Union (EU). However, its socio-economic, legal and ethical impacts must be carefully addressed, says the European Commission. Ethics guidelines are proving key to the European approach, and more structure is on the way: AI certifications covering privacy, security and social impact.

There is strong global competition in AI among the US, China, and Europe, says the European Commission's Science and Knowledge Service: "The US leads for now but China is catching up fast and aims to lead by 2030. For the EU, it is not so much a question of winning or losing a race but of finding the way of embracing the opportunities offered by AI in a way that is human-centred, ethical, secure, and true to our core values."

The European Union wants to embrace the opportunities afforded by AI, but not without critical consideration. The black-box characteristics of most leading AI techniques make them opaque even to specialists, states the EU Science Hub. The EU should challenge the shortcomings of AI and work towards strong evaluation strategies, transparent and reliable systems, and good human-AI interaction. Ethical and secure-by-design algorithms are crucial to building trust in this disruptive technology, according to the authors of the EU study "Artificial Intelligence: A European Perspective."

The European approach to Artificial Intelligence is based on three pillars:

  • Being ahead of technological developments and encouraging uptake by the public and private sectors
  • Preparing for socio-economic changes brought about by AI
  • Ensuring an appropriate ethical and legal framework

Two of these pillars are soft factors, which are seen as the main differences from AI development in other countries and regions.

AI applications may raise new ethical and legal questions related to liability or the fairness of decision-making, so the European Commission has appointed 52 experts to a High-Level Expert Group on Artificial Intelligence (AI HLEG), comprising representatives from academia, civil society, and industry. The AI HLEG delivered the "Ethics Guidelines for Trustworthy AI."

These guidelines list seven key requirements that AI systems should meet in order to be trustworthy:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy (a minimal sketch of this idea follows the list).
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all lifecycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to foster positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
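To make the "human agency and oversight" and "accountability" requirements slightly more concrete, here is a minimal, hypothetical sketch of how an application might route low-confidence automated decisions to a human reviewer. The threshold, the field names and the decide() helper are illustrative assumptions, not taken from the EU guidelines.

    # Hypothetical sketch: low-confidence AI decisions are escalated to a person,
    # supporting human oversight and leaving a clear accountability trail.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Decision:
        subject_id: str
        label: str
        confidence: float
        decided_by: str                 # "model" or "human"
        reviewer: Optional[str] = None  # set once a person takes the case

    def decide(subject_id: str, label: str, confidence: float,
               review_threshold: float = 0.8) -> Decision:
        """Accept the model's output only when confidence clears the threshold;
        otherwise defer the final call to a human reviewer."""
        if confidence >= review_threshold:
            return Decision(subject_id, label, confidence, decided_by="model")
        # Below the threshold the case is escalated for mandatory human review.
        return Decision(subject_id, label, confidence,
                        decided_by="human", reviewer="pending")

    if __name__ == "__main__":
        print(decide("applicant-42", "approve", 0.93))   # automated decision
        print(decide("applicant-43", "reject", 0.61))    # escalated to a person

In such a design, the confidence threshold itself becomes an auditable policy choice rather than an implicit property of the model.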

Commission Vice-President for the Digital Single Market Andrus Ansip said: “Step by step, we are setting up the right environment for Europe to make the most of what artificial intelligence can offer. Data, supercomputers and bold investment are essential for developing artificial intelligence, along with a broad public discussion combined with the respect of ethical principles for its take-up. As always with the use of technologies, trust is a must.”

Carlos Moedas, Commissioner in charge of Research, Science and Innovation, added: “Artificial intelligence has developed rapidly from a digital technology for insiders to a very dynamic key enabling technology with market creating potential. And yet, how do we back these technological changes with a firm ethical position? It bears down to the question, what society we want to live in. Today's statement lays the groundwork for our reply.”

Commissioner for Digital Economy and Society Mariya Gabriel said: "To reap all the benefits of artificial intelligence the technology must always be used in the citizens' interest and respect the highest ethical standards, promote European values and uphold fundamental rights. That is why we are constantly in dialogue with key stakeholders, including researchers, providers, implementers and users of this technology."

"Compared to other regions like the US or China, Europe still needs to walk an extra mile to catch up with the development of Artificial Intelligence, especially in its real-world implementation. However, European digital SMEs produce AI solutions that are trusted by the consumers: they offer more security, privacy and higher quality," commented Dr. Oliver Grün, President of DIGITAL SME, the largest network of small and medium-sized ICT enterprises in Europe. The European data protection authorities have already published guidance on how AI should be assessed under data protection law (Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, wp251rev.01: https://ec.europa.eu/newsroom/article29/document.cfm?action=display&doc_id=49826).

On the other hand, the research institute Center for Data Innovation (https://www.datainnovation.org) says that the EU needs to reform the GDPR to remain competitive in the algorithmic economy: "The General Data Protection Regulation (GDPR), while establishing a needed EU-wide privacy framework, will unfortunately inhibit the development and use of AI in Europe, putting firms in the EU at a competitive disadvantage to their North American and Asian competitors. The GDPR's requirement for organizations to obtain user consent to process data, while perhaps being viable, yet expensive, for the Internet economy, and a growing drag on the data-driven economy, will prove exceptionally detrimental to the emerging algorithmic economy." User surveys in Germany, however, show that 53% of respondents consider technologies developed in Europe more trustworthy in terms of privacy and security than those originating from the US or China; only 29% disagree. Trust and mistrust clearly play a role in the adoption of AI solutions.
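The consent requirement at the heart of this debate can be illustrated with a small, hypothetical sketch: a record is used for a given purpose, such as model training, only if the data subject has consented to that purpose. The in-memory consent store, the purpose names and the function names are assumptions made for illustration, not features of the GDPR or of any specific product.

    # Hypothetical sketch of purpose-bound, consent-gated data processing.
    from datetime import datetime, timezone

    # Toy consent store: subject_id -> purposes the subject has agreed to.
    CONSENT_STORE = {
        "user-1001": {"service_delivery"},
        "user-1002": {"service_delivery", "model_training"},
    }

    def has_consent(subject_id: str, purpose: str) -> bool:
        """Check whether the data subject agreed to this processing purpose."""
        return purpose in CONSENT_STORE.get(subject_id, set())

    def process_for_training(subject_id: str, record: dict) -> bool:
        """Use a record for model training only if matching consent exists."""
        if not has_consent(subject_id, "model_training"):
            print(f"{subject_id}: no consent for model_training, record skipped")
            return False
        timestamp = datetime.now(timezone.utc).isoformat()
        print(f"{subject_id}: record accepted for training at {timestamp}")
        return True

    if __name__ == "__main__":
        process_for_training("user-1001", {"feature": 1.0})  # skipped
        process_for_training("user-1002", {"feature": 2.0})  # accepted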

In a project led by the German Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, and with the participation of Germany's Federal Office for Information Security (BSI), an interdisciplinary team of scientists from the Universities of Bonn and Cologne is drawing up an inspection catalog for the certification of AI applications. The team has now published a white paper presenting the philosophical, ethical, legal and technological issues involved.

According to the white paper, it must be determined during the initial design process whether the application is ethically and legally permissible and, if so, which checks and controls must be formulated to govern its use. One necessary criterion is to ensure that the application does not compromise its users' ability to make moral decisions, as if they still had the option to decline the use of AI, and that their rights and freedoms are not curtailed in any way.

Transparency is another important criterion: the experts emphasize that information on the correct use of the application should be readily available, and that results determined through the use of AI must be fully interpretable, traceable and reproducible by the user. Conflicting interests, such as transparency and the nondisclosure of trade secrets, must be balanced against one another. The plan is to publish an initial version of the inspection catalog by the beginning of 2020 and then begin certifying AI applications. This development could be of great interest to AI solution providers trying to enter the promising European market.
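What "traceable and reproducible" could mean in practice might look something like the following hypothetical sketch, in which every prediction is stored together with the model version, the random seed and a hash of the exact input, so the result can be re-run and verified later. The field names, the stand-in predict() function and the seed value are illustrative assumptions, not taken from the white paper.

    # Hypothetical sketch of an audit record that makes a single AI result
    # reproducible: fixed seed, model version, and a hash of the exact input.
    import hashlib
    import json
    import random

    def predict(inputs: dict, seed: int = 42) -> float:
        """Stand-in for a real model; the fixed seed keeps the result reproducible."""
        rng = random.Random(seed)
        return round(sum(inputs.values()) + rng.random(), 4)

    def audit_record(model_version: str, seed: int, inputs: dict, output) -> dict:
        """Bundle everything needed to re-run and verify one prediction."""
        payload = json.dumps(inputs, sort_keys=True).encode("utf-8")
        return {
            "model_version": model_version,
            "seed": seed,
            "input_sha256": hashlib.sha256(payload).hexdigest(),
            "output": output,
        }

    if __name__ == "__main__":
        features = {"income": 3200.0, "tenure_years": 4.0}
        result = predict(features)
        print(audit_record("credit-scoring-v1.3", 42, features, result))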

In a recent study by the auditing and consulting firm EY, on behalf of Microsoft Germany, 86% of German companies surveyed said that artificial intelligence will have a very strong or strong influence on their industry in the next five years.

However, from a business perspective, the technology also carries risks, according to most German companies. Topping the list, 63% of German firms cite regulatory requirements as a major issue; for many, the country's guidelines for the use of AI are still too unclear. In a clear sign of apprehension, 54% even fear losing control of the AI and having it become independent.

— Oliver Schonschek, News Analyst, Security Now
