AI scientists belong to a highly enthusiastic and positive community, supportive of social and humanistic values. Most AI publications highlight the good motivations and potential benefits of their contributions, but few investigate their inherent risks. Every AI development involves particular risks that must be studied and addressed specifically. A few general categories of risks are nonetheless common to many applications, notably: (i) the safety of critical AI applications, (ii) security and privacy for individual users, and (iii) social risks. The issues in these three categories are not independent, and many of them are not exclusive to AI. They entail distinct scientific, technical, political and legal challenges, with different time horizons.
Safety-critical AI applications
AI techniques are frequently integrated within artifacts and systems endowed with sensory-motor capabilities and increasing levels of autonomy: robots, drones, cyber-physical components, automated plants, networks and infrastructures. These techniques are increasingly deployed in safety-critical applications and in areas where failures can have very high economic or environmental costs, for example:
- health: stimulators, prostheses, monitors, surgical devices, drug processes;
- transportation: autonomous vehicles, traffic control;
- network management: energy, logistics, hydraulics, various infrastructures; and
- surveillance and defense systems.
Relatively few industrial sectors have to comply with very strict certification procedures, as in aeronautics or invasive medical devices. Procedures that require only informal technical descriptions and declarations of conformity to standards may not be sufficient, for the following reasons.
The risks to human lives and the social and environmental costs are not sufficiently studied and assessed. Comparisons to human-controlled systems (without AI) often raise hopes that are still difficult to quantify, e.g., a reduction in road accidents or in medical errors. These comparisons are not always convincing given public expectations and acceptance: a casualty caused by an autonomous system is far less accepted than one due to human error.
The technical challenges here concern the extension of Verification and Validation (V&V) methods to AI and their industrial deployment. It is essential to be able to accurately analyze and qualify the safety properties of components and systems that use AI. Formal methods (deterministic or stochastic) and/or simulation and testing methods should in particular make it possible:
- to state formally the assumptions about a system's environment that are required for its correct functioning;
- to specify its expected functionalities and limitations; and
- to determine its essential characteristics: correctness, reliability, probability of errors, false positives, sensitivity to uncertainty in data and parameters.
V&V is a very active field in Computer Science. It is well advanced for closed, well-modeled operating environments. AI brings to the V&V field a rich set of challenges for software, robots, and cyber-physical systems that interact with open, partially known and imperfectly modeled environments. Among these challenges, the following issues are outstanding:
- how to formally quantify the uncertainty of a system while taking into account the nature of the data and models used, e.g., in medical diagnosis [3]?
- how can a system monitor online its environment and its own state with respect to the assumptions needed for its correct functioning, and adapt when these assumptions are not met (see the sketch after this list)?
- how to assess the V&V properties of a complex system integrating AI techniques from the V&V properties of its components (compositional properties)? What about black-box components?
- what are the possible V&V approaches for a system that learns and evolves continually in interaction with its environment?
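To make the second question above more concrete, here is a minimal sketch of online assumption monitoring: design-time assumptions are checked against the observed state at each step, and the system falls back to a degraded but safe behavior when any of them is violated. The assumption names, thresholds and fallback policy are purely illustrative and not taken from any particular system or library.

```python
# Minimal sketch of online assumption monitoring (illustrative only; the
# assumption names, thresholds, and fallback policy are hypothetical).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Assumption:
    name: str
    holds: Callable[[dict], bool]   # predicate over the observed state

# Operating assumptions stated at design time, e.g., for a perception module.
ASSUMPTIONS: List[Assumption] = [
    Assumption("sensor_confidence", lambda s: s["confidence"] >= 0.9),
    Assumption("known_environment", lambda s: s["novelty_score"] <= 0.2),
    Assumption("data_rate", lambda s: s["sensor_hz"] >= 20),
]

def monitor_step(state: dict, nominal: Callable[[dict], str],
                 fallback: Callable[[dict], str]) -> str:
    """Check the stated assumptions against the current state; switch to a
    degraded but safe behavior when any assumption is violated."""
    violated = [a.name for a in ASSUMPTIONS if not a.holds(state)]
    if violated:
        # The violation itself should be logged and reported for V&V purposes.
        print(f"assumption(s) violated: {violated}; switching to fallback")
        return fallback(state)
    return nominal(state)

if __name__ == "__main__":
    nominal = lambda s: "follow_planned_trajectory"
    fallback = lambda s: "slow_down_and_request_human_input"
    print(monitor_step({"confidence": 0.95, "novelty_score": 0.1, "sensor_hz": 30},
                       nominal, fallback))
    print(monitor_step({"confidence": 0.60, "novelty_score": 0.5, "sensor_hz": 30},
                       nominal, fallback))
```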
These issues, and others, are major research challenges of concern to a large community (see for example [1, 13, 24]). However, many deployments will certainly take place before all these challenges are solved. Furthermore, theoretical limitations in computational complexity and decidability have been known for decades or have recently been uncovered (e.g., the undecidability of learnability [4]). Nonetheless, it remains essential to raise the awareness of designers and users of critical applications about the open issues and limitations of current techniques, about mitigation methods, and about the vigilance required in rapid deployments.
Security and privacy for individual users
AI techniques have become the mediator between users and the digital world. Access to the online data produced by billions of people and connected systems, and, beyond data, to knowledge relevant to each user, is increasingly based on semantic content. A voice assistant must correctly perceive spoken requests in natural language. An associated query engine must interpret each request in its context and in relation to the user’s profile, which is constantly learned, refined and evolving. Images, videos and data from various physical, chemical or physiological sensors have to be interpreted and indexed with respect to their semantic content. Increasingly, a person’s interactions with her environment, with machines and systems (at home, in stores and public facilities), or even her interactions with other persons, are performed digitally and mediated via AI. Each person generates a growing and potentially indelible “digital trace” of her behavior. Even without direct use of digital interfaces, it is difficult to avoid leaving such a trace (e.g., when walking in areas with video surveillance and facial recognition, or when making purchases).
The mediation role of AI with the digital world has become so important that, for many, AI is indistinguishable from digital technologies. Studies about opinions and attitudes regarding AI can be highly instructive (e.g., [29]). They can provide insight about where research and education efforts should concentrate. The general public often has ambivalent perceptions of the field, sometimes mixing:
- uncritical expectations: algorithms and computations are accurate and correct, decisions recommended by a machine are “rational”;
- legitimate concerns about the security and confidentiality of a user’s interactions, the exploitation of personal and aggregated data, and opinion manipulation capabilities; and
- unfounded fears about the “singularity”, or the currently improbable prospect of machines with intentions, emotions and consciousness that may take control of humanity.
AI-mediated interactions raise social risks (covered in the Social risks section below), as well as individual risks. The latter correspond to real and perceived vulnerabilities, frustrations, and the possible rejection of digital technologies by a part of the population that feels marginalized.
The needs at this level are technical, but also educational, institutional and legal. The technical problems concern in particular the following points:
- Security of digital interactions: the state of the art is well advanced, but the deployment of known techniques is clearly insufficient, especially in mobile applications and connected objects. Security vulnerabilities frequently make the news headlines, e.g., in vacuum cleaner robots or voice assistants [6]. There are also hard open problems that need to be addressed, e.g., the susceptibility of neural network techniques to attacks and adversarial examples [9] (a minimal sketch follows this list).
- Confidentiality, privacy and use of personal data: here also the state of the art is insufficiently deployed.
- Intelligibility and transparency: these issues raise challenging scientific and technical problems. A decision support system should be able to explain its assumptions, limitations, and criteria. The issue of the decision criterion is often overlooked: a rational decision is almost always relative to some criterion, which may not match a user’s inclinations and priorities. A decision support system must be able to explain and justify its response to a request, in terms that are understandable to the user.
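As an illustration of the adversarial-example problem mentioned in the first item above, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest attacks discussed in the literature. The toy, untrained classifier and the random input stand in for a real model and image; with such a toy setup the prediction may or may not actually flip, so the sketch only shows the mechanics of the attack.

```python
# Minimal FGSM sketch: perturb an input in the direction that increases the loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy, untrained classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x: torch.Tensor, y: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Return x perturbed by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # A single signed-gradient step is often enough to change the prediction
    # of a trained network; here it simply demonstrates the mechanics.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

x = torch.rand(1, 1, 28, 28)   # stand-in for a real image, values in [0, 1]
y = torch.tensor([3])          # its assumed true label
x_adv = fgsm(x, y)
print("clean prediction:      ", model(x).argmax().item())
print("adversarial prediction:", model(x_adv).argmax().item())
```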
The insufficient deployment of known security and confidentiality techniques is generally due to weak economic incentives and regulatory constraints. The recent EU GDPR measures reinforce confidentiality and respect for privacy. However, these and other similar measures are criticized as addressing the problems in partial and insufficient ways. The contractual relationship between a user and a platform is unbalanced. This imbalance highlights the user’s vulnerability to platforms deployed by a small number of corporations with vast economic and legal resources. It is natural for these corporations to pursue their own interests, including by harvesting profitable behavior data, as long as this is legal. They offer services regarded as essential to everyone for a modern social life, but at a largely hidden cost. Furthermore, a user may decide (in theory) about the use of her personal data, but she has little say about the aggregated data and the resulting models to which she contributes. These models represent an important source of revenue, as well as of risks. In some cases, a user may not agree to the construction of a behavior model, or she may view it as a public resource to be used solely for open research. Additional legal and technical studies are needed, e.g., for the development of accountable data trusts, which can play an intermediary role between users and platforms to better balance contractual relationships [8].
Guidelines (e.g., the UN Guiding Principles or the EU AI Ethics Guidelines) and the ethical commitments of companies are certainly useful and needed, but not sufficient. The urgent requirements here are more in regulations and public policies than in ethics [27]. Legal studies and possibly social experiments are needed to raise awareness, support deliberations, and foster international cooperation on AI and digital regulations.
Social risks
The acceptability of a technology is often interpreted in terms of customers, i.e., the existence of a sufficiently broad public that adopts and uses the technology. But social acceptability is much more demanding than individual acceptance. Among other things, social acceptability needs:
- to take into account the long term, including possible impacts on future generations;
- to worry about social cohesion, in particular in terms of employment, resource sharing, inclusion and social recognition;
- to integrate the imperatives of human rights, as well as the historical, social, cultural and ethical values of a community; and
- to consider global constraints affecting the environment or international relations.
Biases Decision support tools can be biased. In some cases, systems are intentionally designed to be unbalanced, e.g., a recommender system integrating propaganda or commercial goals. Users should be explicitly warned about the underlying objectives of systems that may distort their outcomes. More problematic are the hidden, unintentional biases of systems that are required to be neutral and fair. Numerous cases of gender, ethnic or seniority biases have been reported in decision support systems for health, banking, insurance, recruitment and career assessment, and even in public services such as legal assessment and city surveillance applications [14, 20, 25]. This is generally because these systems lack transparency and intelligibility, and rely on training data that is biased in hidden ways, difficult to uncover and mitigate. There is a need for further research on techniques for auditing the fairness of a system (a minimal example of such an audit is sketched below), and on regulations requiring their use in certification mechanisms.
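As a minimal illustration of what a fairness audit can look like, the following sketch computes two common group-fairness indicators, the demographic parity gap and the disparate impact ratio, over the decisions of a hypothetical system. The data, group labels and acceptance rates are synthetic; a real audit would also examine error rates per group, calibration, and the provenance of the training data.

```python
# Minimal sketch of a group-fairness audit on binary decisions (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)   # protected attribute: group 0 or 1
# Hypothetical decision system whose acceptance rate silently depends on the group.
decision = rng.random(10_000) < np.where(group == 0, 0.30, 0.22)

def selection_rates(decision: np.ndarray, group: np.ndarray) -> dict:
    """Acceptance rate per group."""
    return {int(g): decision[group == g].mean() for g in np.unique(group)}

rates = selection_rates(decision, group)
parity_gap = abs(rates[0] - rates[1])                      # demographic parity gap
impact_ratio = min(rates.values()) / max(rates.values())   # disparate impact ratio

print("selection rate per group:", rates)
print(f"demographic parity gap:  {parity_gap:.3f}")
print(f"disparate impact ratio:  {impact_ratio:.3f}  (values below 0.8 are a common red flag)")
```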
Behavior manipulation. It has been known for ages that individuals can be manipulated. AI technologies amplify this vulnerability, in particular with the worldwide deployment of ergonomic and playful devices that implement powerful communication, sensing, processing and decision-making functions. Manipulation capabilities are illustrated by the increasingly effective techniques for social monitoring, text and audio-visual “optimization”, debate steering, behavior modeling and shaping, and market driving [30]. The incentives for using available techniques toward profitable purposes are very high. Dubious practices with high social, political and economic risks will remain in use as long as they are unregulated. In addition to regulations, and in support of them, further research in AI may contribute methods for detecting manipulation attempts.
Democracy The political risks, illustrated by the Cambridge Analytica scandal, are analyzed by several authors as a threat to democracy [19, 31]. Studies show that AI presents opportunities as well as risks across the full range of human rights, with impacts already observed [21].
Economy Economic risks arise from several AI deployments, for example in High Frequency Trading (HFT) or in algorithmic pricing. The possible destabilizing effects of HFT are far from well understood [26]. Algorithmic pricing that uses learning, profit optimization and indirect interactions between computational agents can lead, even without any explicit agreement, to artificially higher prices, as in illegal price cartels [11] (a minimal sketch of this setting is given below). The main assumption of the liberal economy postulates a supposedly neutral free market, considered as a virtuous “unknowable and uncontrollable” information processor, which should remain unregulated. The real-time observation, learning, modeling and feedback-control capabilities permitted by AI tools are in clear contradiction with this assumption. Regulations to mitigate the corresponding risks are urgently needed.
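The following sketch illustrates, in a deliberately simplified form, the kind of setting studied in the algorithmic pricing literature: two independent learning agents repeatedly set prices in a toy duopoly and are rewarded with their profits, without any communication or explicit agreement. The demand model, price grid and learning parameters are invented for illustration; whether supra-competitive prices actually emerge depends heavily on such choices, which is precisely why this behavior is hard to anticipate and regulate.

```python
# Minimal sketch of repeated algorithmic pricing by two independent Q-learners.
import numpy as np

rng = np.random.default_rng(1)
prices = np.linspace(1.0, 2.0, 5)   # discrete grid of admissible prices
n = len(prices)

def profits(i: int, j: int):
    """Toy demand model: the cheaper firm captures a larger market share."""
    p_i, p_j = prices[i], prices[j]
    share_i = min(max(0.5 + 0.5 * (p_j - p_i), 0.0), 1.0)
    return p_i * share_i, p_j * (1.0 - share_i)

# One Q-table per agent; the state is the pair of prices set in the previous round.
Q = [np.zeros((n, n, n)) for _ in range(2)]
state = (0, 0)
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(100_000):
    acts = [int(rng.integers(n)) if rng.random() < eps else int(np.argmax(Q[k][state]))
            for k in range(2)]
    rewards = profits(acts[0], acts[1])
    nxt = (acts[0], acts[1])
    for k in range(2):
        target = rewards[k] + gamma * Q[k][nxt].max()
        Q[k][state][acts[k]] += alpha * (target - Q[k][state][acts[k]])
    state = nxt

# Prices the two agents settle on after learning (outcome is parameter-dependent).
print("long-run prices:", prices[state[0]], prices[state[1]])
```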
Employment AI contributes to the increasing automation of services, industry and agriculture, which brings progress, as well as important social risks for employment. There is no general consensus on this risk (nor is there one on global warming). However, the available studies, which remain insufficient, converge toward a substantial reduction of jobs in the short to medium term. According to an OECD study covering 21 of its member countries [2], 9% of jobs have a high risk of automation, and a further 20 to 25% have a medium risk (other studies reach more alarming conclusions, e.g., [15]). Furthermore, technology developments are strongly suspected of contributing to the observed increase in social inequalities [5], which reduce social involvement.
It is clear to most observers that the existing social measures for handling temporary fluctuations (e.g., unemployment benefits) are inadequate for a long-standing, continuing change. Several laudable studies and initiatives have been undertaken to mitigate the unemployment risks, in terms of training and job creation (e.g., Innovation for Jobs), resource sharing, social recognition and integration. The challenge here is to further develop these initiatives in order to respond in time to the undesirable consequences of numerous technology deployments.
Military systems. AI in weapons and military systems is another area of concern, raising ethical issues as well as risks of international instability and increased conflict. AI technologies greatly enhance the military capabilities of perception, surveillance, intelligence, fighting, and intervention. AI is naturally a dual-use technology, easily transposed from the commercial to the military domain. This makes control procedures such as those used for nuclear weapons containment impracticable [7]. It also makes weapons and devices with integrated AI relatively more “affordable” than other heavy military technologies, and hence more easily accessible to rogue groups. In addition, international arms trade agreements, including the recent Arms Trade Treaty, do not cover digital weapons such as drones, robots, ROVs and AUVs. The widely supported Open Letter for a ban on autonomous weapons is an excellent initiative, which needs to be pursued through studies and regulations.
Can we technically mitigate the above social risks by extending the problem-solving and reasoning competences of our tools with moral appraisal capabilities? We certainly need machines that are, by design, provably safer, more secure, intelligible, unbiased, respectful of privacy, and that meet in their functioning the constraints and rules demanded by society. These and similar properties can be reasonably well understood, formalized, and implemented in machines [10, 23]. Technical standards for meeting them in AI systems should be developed and deployed, as for other artifacts. However, it is unclear what the specification might be of an automated weapon, or an automated trader, capable of resolving ethical choices on the basis of moral principles. Several approaches to the notion of an “artificial moral agent” in a general sense (i.e., levels 3 and 4 of [18]) are criticized as philosophically illegitimate (e.g., [12, 28]), and can be quite misleading. We should strive to clarify and disseminate widely the knowledge about the capabilities and limitations of our tools, and to integrate the social involvement and assessment of their potential uses as an essential component of our research and design methodology.
The needs for responsible AI development with respect to social risks correspond in particular to political and legal measures and to international agreements. However, the required measures are part of the regulatory mechanisms of society, which have quite a long response time: decades are needed to better understand, educate, spread awareness and build up the social forces required to impose regulations. The momentum of technology has become much faster. The discrepancy between the two dynamics demands proactive approaches. However, no predictive models of the possible social and economic effects of a given technical deployment are readily available. A proactive approach must therefore rely on social experiments and on integrative research about social risks and mitigation measures. Here too, a change of paradigm is required to fund and develop joint investigations between AI and social scientists, giving social scientists a better understanding of AI, and AI researchers a better understanding of social and economic mechanisms. More involvement of AI within relatively recent areas such as “Science, Technology and Society” (e.g., at Stanford or MIT) should provide opportunities to complement the usual empirical observation methodology of social sciences with significant experimentation, modeling and even simulation. It should be noted that simulation, based on elementary models, is emerging in a few areas of social sciences; AI can actively contribute to its development and effectiveness. Finally, let us remark that social experimentation before a technical deployment reduces the discrepancy between the technology momentum and the social regulation dynamics.