
Responsible AI: requirements and challenges

Abstract

This position paper discusses the requirements and challenges for responsible AI with respect to two interdependent objectives: (i) how to foster research and development efforts toward socially beneficial applications, and (ii) how to take into account and mitigate the human and social risks of AI systems.

Introduction

AI significantly contributes to and benefits from the accelerated momentum of technology development, which is opening a wealth of opportunities and has already brought numerous social and human benefits, as assessed for example by the evolution of the Human Development Index throughout the world. AI technologies help medical professionals improve prevention, diagnosis and care procedures. They are of benefit in environment preservation and monitoring programs, in agricultural projects, and in the modeling and management of cities, infrastructures and industries. They contribute to safer and more efficient mobility and transportation systems. They offer effective tools for multi-modal and multi-lingual interaction and information querying. However, these rapid technology developments also give rise to legitimate concerns about risks, disruptive effects and social strains, which need to be properly understood and addressed.

The concerns about AI are expressed in various forums and programs seeking to leverage AI developments toward social good, to mitigate the risks and to investigate ethical issues. This is notably illustrated by the initiatives of international organizations, such as the United Nations and its specialized agencies (Footnote 1), the European Union (Footnote 2), or the Organisation for Economic Cooperation and Development (Footnote 3). The G7 political leadership has recently announced the future setup of an International Panel on AI, akin to the IPCC for climate change. Other initiatives have been taken by technical societies (Footnote 4), NGOs, foundations, corporations, and academic organizations (Footnote 5).

The requirements and challenges regarding responsible AI developments can be analyzed with respect to two interdependent purposes: (i) how to foster research and development efforts toward socially beneficial applications, and (ii) how to take into account and mitigate the risks of AI systems. These objectives correspond to technical as well as legal and social challenges, which are briefly summarized in this position paper.

AI for the social good

AI technologies, like most digital technologies, have become ubiquitous. Learning, reasoning, heuristic search and problem-solving algorithms are found in a very wide range of applications, integrated directly into artifacts or indirectly via cloud connections. Most industrial and economic sectors are deploying these techniques in their engineering methods and products. Even culture and the arts are experimenting with AI in their creative tools.

The needs for socially beneficial AI applications are tremendous and raise numerous challenges. Several initiatives are attempting to address some of these needs. For example, the AI for Global Good Summit of the ITU encourages R&D in AI that actively contributes to the 17 Sustainable Development Goals (SDGs) of the UN. The latest edition of the Summit considered development areas such as:

  • the interpretation and processing of satellite images in food and agronomic applications (SDG 2), and in environment preservation programs (SDG 6, 13 and 17);

  • the collection, treatment and open dissemination of medical data and knowledge related to epidemics and various health conditions (SDG 3); and

  • the simulation of urban environments for the management and decision-making support in smart cities (SDG 11).

AI techniques can contribute to other UN sustainable development objectives, such as in education (SDG 4), water resource management (SDG 6) and industrial production (SDG 9 and 12) (Footnote 6).

The challenges for fostering AI toward social good fit in two main categories: incentives and integrative research.

Incentives. The usual market incentives tend to focus on high and rapid return on investment. They may not provide the research funding and investment needed for socially beneficial developments, especially in their initial and risky phases. A few non-profit foundations are to be commended for funding exemplary projects (Footnote 7). However, more support is needed from international cooperation and public funding, which could concentrate significant resources on key objectives. Although all OECD countries (and many developing countries) have an AI development plan, their funding remains modest compared with the R&D investments of the field's few main industrial players. Public incentives need to be scaled up on socially beneficial programs.

Integrative research. The usual organization and granularity of academic research in many fields, including AI, tend to favor focused analytical methods and disciplinary targets. They promote investigations within the useful but often narrow assumptions of each community in order to produce further in-depth and well-formalized knowledge. This is certainly needed and essential for the progress of science. But it is not sufficient for driving and amplifying AI contributions toward social good. The developments required for contributing to the SDGs mentioned above and to similar projects are not just “a matter of application”. They raise rich integrative research problems, within AI as well as with other fields.

Integrative research within AI is needed to address the heterogeneous tasks inherent to socially beneficial applications. Such tasks require multiple cognitive functions, e.g., sensing and data association, as well as extraction of and reasoning on the underlying ontology of a domain, in order to actively perceive, organize, explain and rationalize an observed field. The challenges require integrating data-based modeling and model-based reasoning. They demand combining bottom-up learning and correlation with top-down causal rationalization. They also require fusing a diversity of input sources, and consistently integrating multiple, mathematically heterogeneous knowledge representations and processing approaches.

Integrative research problems between AI and other fields are clearly at the core of most socially beneficial developments of AI. They correspond to targeted interdisciplinary projects, as well as to long-term transdisciplinary programs. They also require the involvement of non-academic contributors, social actors and stakeholders in the investigations and developments. These integrations are usually more complicated because of the diversity of cultural and methodological backgrounds. But they are needed in order to ground the work in real issues and to develop relevant contributions, which have to be assessed more by their effective success in the field than by their formal computational properties.

Integrative research is intrinsically difficult. It requires a long time span, due in particular to the overhead of collaborations and field tests. Given the usual criteria and bibliometric indicators used for the funding and assessment of academic work and careers, integrative research appears risky. Furthermore, the view that science is “neutral” with respect to its possible uses is still appealing. Many researchers perceive their role mainly as contributing to knowledge, leaving it up to society to make use of it. But the intricacy and high pace of technosciences, particularly in AI, no longer support such a view. Today, a significant part of the AI community is concerned with promoting a research agenda that anticipates and takes into account the social utility of its investigations (see, for example, the widely supported Open Letter and the subsequent agenda of [22]). However, a shift in the academic cultural and organizational paradigm may be needed to amplify integrative research in AI. In this regard, studies in epistemology (e.g., [17]) and examples from other domains such as the earth and climate science community [16] can be very informative.

Mitigating AI risks

AI scientists belong to a highly enthusiastic and positive community, supportive of social and humanistic values. Most AI publications highlight good motivations and the excellent possible effects of their contributions. But few investigate their inherent risks. Every AI development involves particular risks that need to be studied and addressed specifically. There are, however, a few general categories of risks that are common to many applications, notably: (i) the safety of critical AI applications, (ii) security and privacy for individual users, and (iii) social risks. The issues in these three categories are not independent, and many of them may not be exclusive to AI. They entail distinct scientific, technical, political and legal challenges, with different time horizons.

Safety-critical AI applications

AI techniques are frequently integrated within artifacts and systems endowed with sensory-motor capabilities and increasing levels of autonomy: robots, drones, cyber-physical components, automated plants, networks and infrastructures. These techniques are increasingly deployed in safety-critical applications and in areas where failures can have very high economic or environmental costs, for example in:

  • health: stimulators, prostheses, monitors, surgical devices, drug processes;

  • transportation: autonomous vehicles, traffic control;

  • network management: energy, logistics, hydraulics, various infrastructures; and

  • surveillance and defense systems.

Relatively few industrial sectors have to comply with very strict certification procedures, as in aeronautics or invasive medical devices. Procedures requiring only informal technical descriptions and declarations of conformity to standards may not be sufficient given:

  • the complexity and opacity of many AI models and techniques; and

  • the intricate traceability of the hardware and software components within systems that are becoming larger and more complex.

The risks to human lives and the social and environmental costs are not sufficiently studied and assessed. Comparisons with human-controlled systems (without AI) often raise hopes that are still difficult to quantify, e.g., reductions in road accidents or in medical errors. These comparisons are not always convincing given public expectations and acceptance: a casualty caused by an autonomous system is naturally much less accepted than one due to human error.

The technical challenges here concern the extension of Verification and Validation (V&V) methods to AI and their industrial deployment. It is essential to be able to accurately analyze and qualify the safety properties of components and systems using AI. Formal methods (deterministic or stochastic), and/or simulation and testing methods, should in particular make it possible:

  • to state formally the assumptions about the environment of a system, which are required for its correct functioning;

  • to specify its expected functionalities and limitations; and

  • to determine its essential characteristics: correctness, reliability, probability of errors, false positives, sensitivity to uncertainty in data and parameters.

V&V is a very active field in computer science. It is well advanced for closed, well-modeled operating environments. AI brings to the V&V field a rich set of challenges in handling software, robots, and cyber-physical systems that interact with open, partially known and imperfectly modeled environments. Among these challenges, the following issues are outstanding:

  • how to formally quantify the uncertainty of a system while taking into account the nature of the data and models used, e.g., in medical diagnosis [3]?

  • how can a system monitor its environment and its own state online with respect to the assumptions needed for its correct functioning, and adapt when these assumptions are not met?

  • how to assess the V&V properties of a complex system integrating AI techniques from the V&V properties of its components (compositional properties)? What about black-box components?

  • what are the possible V&V approaches for a system that learns and evolves continually in interaction with its environment?

These issues, and others, are major research challenges of concern to a large community (see for example [1, 13, 24]). However, many deployments will certainly take place before all these challenges are solved. Furthermore, theoretical limitations in computational complexity and decidability have been known for decades or have recently been uncovered (e.g., the undecidability of learnability [4]). Nonetheless, it remains essential to raise the awareness of designers and users of critical applications about the open issues and limitations of current techniques, about mitigation methods, and about the vigilance required in rapid deployments.
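
As an illustration of the second challenge above, the following minimal sketch shows one common mitigation pattern: monitoring at run time whether the operating assumptions used during verification still hold, and degrading to a conservative behavior when they do not. The envelope check, the learned policy and the fallback are hypothetical placeholders rather than a prescribed method; real systems monitor much richer properties (timing, sensor health, distribution shift).

```python
import numpy as np

class AssumptionMonitor:
    """Checks at run time whether inputs stay within the envelope observed
    during validation (a hypothetical, deliberately simplified assumption)."""

    def __init__(self, validation_inputs: np.ndarray, margin: float = 0.1):
        span = validation_inputs.max(axis=0) - validation_inputs.min(axis=0)
        self.low = validation_inputs.min(axis=0) - margin * span
        self.high = validation_inputs.max(axis=0) + margin * span

    def holds(self, x: np.ndarray) -> bool:
        return bool(np.all(x >= self.low) and np.all(x <= self.high))


def control_step(x, learned_policy, monitor, safe_fallback):
    # Use the learned component only while its operating assumptions hold;
    # otherwise fall back to a conservative action verified separately.
    return learned_policy(x) if monitor.holds(x) else safe_fallback(x)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    validation_inputs = rng.normal(size=(1000, 4))   # inputs seen during V&V
    monitor = AssumptionMonitor(validation_inputs)

    learned_policy = lambda x: float(x.sum())        # stand-in for a learned controller
    safe_fallback = lambda x: 0.0                    # stand-in for a verified safe action

    print(control_step(rng.normal(size=4), learned_policy, monitor, safe_fallback))
    print(control_step(np.full(4, 10.0), learned_policy, monitor, safe_fallback))  # out of envelope
```

The point of this pattern is that the verified guarantees attach to the simple monitor and fallback rather than to the opaque learned component, which is one way to bound the consequences of the open issues listed above.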

Security and privacy for individual users

AI techniques have become the mediator between users and the digital world. Access to the online data produced by billions of people and connected systems, and, beyond data, to knowledge relevant to each user, is increasingly based on semantic content. A voice assistant must correctly perceive spoken requests in natural language. An associated querying engine must interpret each request in its context and in relation to the user’s profile, which is constantly learned, refined and evolving. Images, videos and data from various physical, chemical or physiological sensors have to be interpreted and indexed with respect to their semantic content. Increasingly, a person’s interactions with her environment, with machines and systems (at home, in stores and public facilities), or even with other persons, are performed digitally and mediated via AI. Each person generates a growing and potentially indelible “digital trace” of her behavior. Even without direct use of digital interfaces, it is difficult to avoid leaving such a trace (e.g., walking in areas with video surveillance and facial recognition, or making purchases).

The mediating role of AI with respect to the digital world has become so important that, for many, AI is indistinguishable from digital technologies. Studies of opinions and attitudes regarding AI can be highly instructive (e.g., [29]) (Footnote 8). They can provide insight into where research and education efforts should concentrate. The general public often has ambivalent perceptions of the field, sometimes mixing:

  • uncritical expectations: algorithms and computations are accurate and correct, decisions recommended by a machine are “rational”;

  • legitimate concerns about the security and confidentiality of a user’s interactions, the exploitation of personal and aggregated data, and opinion manipulation capabilities; and

  • unfounded fears about the “singularity”, or the currently improbable prospect of machines with intentions, emotions and consciousness that might take control of humans.

AI-mediated interactions raise social risks (covered in the Social risks section), as well as individual risks. The latter correspond to real and subjective vulnerabilities, frustrations, and the possible rejection of digital technologies by a part of the population that can feel marginalized.

The needs at this level are technical, but also educational, institutional and legal. The technical problems concern in particular the following points:

  • Security of digital interactions: the state of the art is well advanced, but the deployment of known techniques is clearly insufficient, especially in mobile applications and connected objects. Security vulnerabilities frequently make the news headlines, e.g., in vacuum cleaner robots or voice assistants [6]. There are also hard open problems that need to be addressed, e.g., the susceptibility of neural network techniques to attacks and adversarial examples [9] (a minimal sketch of such an attack follows this list).

  • Confidentiality, privacy and use of personal data: here too, the state of the art is insufficiently deployed.

  • Intelligibility and transparency: these issues raise challenging scientific and technical problems. A decision support system should be able to explain its assumptions, limitations, and criteria. The important issue of the decision criterion is often overlooked: a rational decision is almost always relative to some criterion, which may not necessarily match a user’s inclinations and priorities. A decision support system must be able to explain and justify its response to a request. All this must be done in terms that are understandable to the user.
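
The adversarial-example vulnerability mentioned in the first item of this list can be made concrete with a minimal, self-contained sketch. The classifier below is a hypothetical stand-in (a logistic model with arbitrary weights), not an example taken from [9]; it only shows the mechanics of the fast gradient sign method, in which tiny per-feature perturbations aligned with the loss gradient accumulate and flip a confident decision.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200

# Hypothetical stand-in for a trained model: a logistic classifier with fixed weights.
w, b = rng.normal(size=d), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
predict = lambda x: sigmoid(w @ x + b)          # probability of class 1

x = rng.normal(size=d)                          # an arbitrary input
p = predict(x)
y = float(p > 0.5)                              # the model's own prediction

# Fast gradient sign method: perturb each feature by +/- eps in the direction
# that increases the cross-entropy loss of the current prediction.
grad_x = (p - y) * w                            # gradient of the loss w.r.t. the input
eps = 0.15                                      # small per-feature budget
x_adv = x + eps * np.sign(grad_x)

print(f"class-1 probability before: {p:.3f}  (predicted class {int(y)})")
print(f"class-1 probability after:  {predict(x_adv):.3f}")
# With many features, these tiny coordinate-wise changes accumulate and
# typically push the score across the decision threshold.
```

The same principle applies to pixels or audio samples in deployed perception and speech systems, which is one reason such attacks are hard to rule out by testing alone.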

The insufficient deployment of known security and confidentiality techniques is generally due to weak economic incentives and regulatory constraints. The recent EU GDPR measures reinforce confidentiality and respect for privacy. However, these and other similar measures are criticized as addressing the problems in partial and insufficient ways. The contractual relationship between a user and a platform is unbalanced. The imbalance highlights the user’s vulnerability to platforms deployed by a small number of corporations with enormous economic and legal resources. It is natural for these corporations to pursue their own interests, including by harvesting profitable behavior data, as long as this is legal. They offer services regarded as essential to everyone for modern social life, but at a largely hidden cost. Furthermore, a user may decide (in theory) about the use of her personal data, but she has little say about the aggregated data and the resulting models to which she contributes. These models represent an important source of revenue, as well as of risks. In some cases, a user may not agree to the elaboration of a behavior model, or she may view it as a public resource to be used solely for open research. Additional legal and technical studies are needed, e.g., for the development of accountable data trusts, which can play an intermediary role between users and platforms to better balance contractual relationships [8].

Guidelines (e.g., the UN Guiding Principles or the EU AI Ethics Guidelines) and the ethical commitments of companies are certainly useful and needed, but not sufficient. The urgent requirements here lie more in regulations and public policies than in ethics [27]. Legal studies and possibly social experiments are needed to raise awareness, support deliberations, and foster international cooperation regarding AI and digital regulations.

Social risks

The acceptability of a technology is often interpreted in terms of customers, i.e., the existence of a sufficiently broad public that adopts and uses the technology. But social acceptability is much more demanding than individual acceptance. Among other things, social acceptability requires:

  • to take into account the long term, including possible impacts on future generations;

  • to worry about social cohesion, in particular in terms of employment, resource sharing, inclusion and social recognition;

  • to integrate the imperatives of human rights, as well as the historical, social, cultural and ethical values of a community; and

  • to consider global constraints affecting the environment or international relations.

Biases. Decision support tools can be biased. In some cases, systems are intentionally designed to be unbalanced, e.g., a recommender system that integrates propaganda or commercial goals. Users should be explicitly warned about the underlying objectives of systems that may distort their outcomes. More problematic are the hidden and unintentional biases of systems that are required to be neutral and fair. Numerous cases of gender, ethnic or seniority biases have been reported in decision support systems for health, banking, insurance, recruitment and career assessment, and even in public services such as legal assessment and city surveillance applications [14, 20, 25]. This is generally because these systems lack transparency and intelligibility, and rely on training data that is biased in hidden ways that are difficult to uncover and mitigate. There is a need for further research into techniques for auditing the fairness of a system, and into regulations requiring their use in certification mechanisms.
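
As a minimal illustration of what such an audit can compute, the sketch below evaluates two common group-level indicators, the disparate impact ratio and the equal opportunity gap, on a held-out sample of decisions. The data, the protected attribute and the decision threshold are hypothetical; real audits involve many more metrics, significance tests and procedural safeguards.

```python
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-decision rates between the two groups (the '80% rule' heuristic)."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return min(rate0, rate1) / max(rate0, rate1)

def equal_opportunity_gap(y_pred: np.ndarray, y_true: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Hypothetical audit sample: decisions of a deployed scoring system on held-out cases.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=2000)            # protected attribute
y_true = rng.integers(0, 2, size=2000)           # ground-truth outcomes
score = rng.random(2000) + 0.05 * group          # mildly group-dependent scores
y_pred = (score > 0.5).astype(int)               # the system's decisions

print("disparate impact ratio:", round(disparate_impact(y_pred, group), 3))
print("equal opportunity gap: ", round(equal_opportunity_gap(y_pred, y_true, group), 3))
```

Such indicators do not by themselves establish fairness or unfairness, but they provide the kind of measurable evidence that regulations and certification mechanisms could require.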

Behavior manipulation. It has been known for ages that individuals can be manipulated. AI technologies amplify this vulnerability, in particular with the worldwide deployment of ergonomic and playful devices that implement powerful communication, sensing, processing and decision-making functions. Manipulation capabilities are illustrated by the increasingly effective techniques for social monitoring, text and audio-visual “optimization”, debate steering, behavior modeling and shaping, and market driving [30]. The incentives for using the available techniques for profitable purposes are very high. Dubious practices with high social, political and economic risks will remain in use as long as they are unregulated. In addition to regulations, and in support of them, further research in AI may contribute methods for detecting manipulation attempts.

Democracy. The political risks, illustrated by the Cambridge Analytica scandal, are analyzed by several authors as a threat to democracy [19, 31]. Studies show that AI presents opportunities as well as risks across the full range of human rights, with impacts already observed [21].

Economy. Economic risks arise from several AI deployments, for example in High Frequency Trading (HFT) or in algorithmic pricing. The possible destabilization effects of HFT are far from well understood [26]. Algorithmic pricing using learning, profit optimization and indirect interactions between computational agents can lead, even without any explicit agreement, to artificially higher prices, as with illegal price cartel mechanisms [11]. The main assumption of the liberal economy postulates a supposedly neutral free market, considered as a virtuous “unknowable and uncontrollable” information processor, which should remain unregulated. The real-time observation, learning, modeling and feedback-control capabilities permitted by AI tools clearly contradict this assumption. Regulations to mitigate the corresponding risks are urgently needed.

Employment. AI contributes to the increasing automation of services, industry and agriculture, which brings progress as well as important social risks for employment. There is no general consensus on this risk (nor is there one on global warming). However, the available studies, which remain insufficient, converge toward a substantial reduction of jobs in the short to medium term. According to an OECD study of 21 of its countries [2], 9% of jobs have a high risk of automation, and a further 20 to 25% of jobs have a medium risk (other studies reach more alarming conclusions, e.g., [15]). Furthermore, technology developments are strongly suspected to be a contributing factor in the observed increase in social inequalities [5], which reduce social involvement.

It is clear to most observers that the existing social measures for handling temporary fluctuations (e.g., unemployment benefits) are inadequate for a long-standing, continuing change. Several laudable studies and initiatives are under way to mitigate the unemployment risks, in terms of training and job creation (e.g., Innovation for Jobs), resource sharing, social recognition and integration. The challenge here is to further develop these initiatives in order to respond in time to the undesirable consequences of numerous technology deployments.

Military systems. AI in weapons and military systems is another area of worry, raising ethical concerns as well as risks of international instability and increased conflict. AI technologies greatly enhance military capabilities for perception, surveillance, intelligence, fighting and intervention. AI is naturally a dual-use technology, easily transposed from the commercial to the military domain. This makes control procedures such as those used for nuclear weapons containment impracticable [7]. It also makes weapons and devices with integrated AI relatively more “affordable” than other heavy military technologies (Footnote 9). These weapons may be more easily accessible to rogue groups. In addition, international arms trade agreements, including the recent Arms Trade Treaty, do not cover digital weapons such as drones, robots, ROVs and AUVs. The widely supported Open Letter for a ban on autonomous weapons is an excellent initiative, which needs to be followed up with studies and regulations.

Can we technically mitigate the above social risks by extending the problem-solving and reasoning competences of our tools with moral appraisal capabilities? We certainly need machines that are, by design, provably safer, more secure, intelligible, unbiased, respectful of privacy, and that meet in their functioning the constraints and rules demanded by society. These and similar properties can be reasonably well understood, formalized, and implemented in machines [10, 23]. Technical standards for meeting them in AI systems should be developed and deployed, as for other artifacts. However, it is unclear what the specification might be of an automated weapon, or an automated trader, capable of resolving ethical choices on the basis of moral principles. Several approaches to the notion of an “artificial moral agent” in a general sense (i.e., levels 3 and 4 of [18]) are criticized as being philosophically illegitimate (e.g., [12, 28]). They can be quite misleading. We should strive to clarify and widely disseminate knowledge about the capabilities and limitations of our tools, and to integrate social involvement and the assessment of their potential uses as an essential component of our research and design methodology.

The needs for responsible AI development with respect to social risks call in particular for political and legal measures and for international agreements. However, the required measures are part of the regulatory mechanisms of society. These mechanisms have a quite long response time: decades are needed to better understand, educate, spread awareness and build up the social forces required to impose regulations. But the momentum of technology has become much faster. The discrepancy between the two dynamics demands proactive approaches. However, no predictive models of the possible social and economic effects of a given technical deployment are readily available. A proactive approach must rely on social experiments and on integrative research about social risks and mitigation measures. Here too, a change of paradigm is required to fund and develop joint investigations between AI and social scientists, giving the former a better understanding of social and economic mechanisms, and the latter a better understanding of AI. More involvement of AI within relatively recent areas such as “Science, Technology and Society” (e.g., at Stanford or MIT) should provide opportunities to complement the usual empirical observation methodology of the social sciences with significant experimentation, modeling and even simulation. It should be noted that simulation, based on elementary models, is emerging in a few areas of the social sciences. AI can actively contribute to its development and effectiveness. Finally, let us remark that social experimentation before a technical deployment reduces the discrepancy between the technology momentum and the social regulation dynamics.

Conclusion

AI, like any other technology, can have virtuous effects, as well as much less desirable consequences. AI as a research field cannot be blamed for the latter. The specific historical, social and economic context of a deployment can make an AI machine “a Dr Jekyll or a Mr Hyde”. The discrepancy between the slow social and legal mechanisms and the fast momentum of technology makes steering the deployment and use of AI more challenging.

AI scientists and professionals obviously do not have full steering control. But they are neither powerless nor without responsibility. They are accountable for, and capable of, raising social awareness about the current limitations and risks of their field. To some extent, they can choose, or at least influence, their research agenda. They can engage in integrative research and work toward the needed paradigm shift in order to foster socially beneficial developments and address the human and social risks of AI. The initiatives and projects referred to here illustrate many such engagements, which are ongoing and gaining strength. The growing effectiveness of AI is simply commensurate with its social responsibility. The technical and organizational challenges are tremendous, but the AI scientific community has to face them.

Notes

  1. E.g., ITU studies or UNESCO initiatives.

  2. E.g., EU High level Expert Group on AI.

  3. E.g., AIGO and the forthcoming OECD AI Policy Observatory.

  4. E.g., IEEE Ethically Aligned Design.

  5. E.g., HUMANe AI, International Observatory on the Societal Impacts of AI, AI4People, Human-Centered AI, AI Now, Center for the governance of AI, AI for Good Foundation.

  6. The IAP 2019 Conference and General Assembly of the Inter-Academy Partnership is devoted to these issues.

  7. E.g., Thorn Spotlight project to fight human trafficking; Allen Philanthropies with the Planet project for the conservation of coral reefs; the Rainforest Connection NGO for forests and environment conservation.

  8. The MIT Tech Review presentation of this study is entitled “Americans want to regulate AI but don’t trust anyone to do it”.

  9. E.g., the SGR-A1 autonomous sentry gun, capable of covering a radius of several kilometers, is said to cost about $200K.

Abbreviations

AI: Artificial intelligence
EU: European Union
G7: Group of Seven
IAP: Inter-Academy Partnership
IPCC: Intergovernmental Panel on Climate Change
ITU: International Telecommunication Union
NGO: Non-governmental organization
OECD: Organisation for Economic Cooperation and Development
UN: United Nations
UNESCO: United Nations Educational, Scientific and Cultural Organization

References

  1. Amodei D, Olah C, Steinhardt J, Christiano PF, Schulman J, Mané D. Concrete Problems in AI Safety. Comput Res Repository. 2016. https://arxiv.org/abs/1606.06565v2.

  2. Arntz M, Gregory T, Zierahn U. The Risk of Automation for Jobs in OECD Countries. OECD Soc, Employ Migr Working Pap. 2016;(189). https://www.oecd-ilibrary.org/social-issues-migration-health/the-risk-of-automation-for-jobs-in-oecd-countries_5jlz9h56dvq7-en.

  3. Begoli E, Bhattacharya T, Kusnezov D. The need for uncertainty quantification in machine-assisted medical decision making. Nat Mach Intell. 2019; 1(1):20–3. https://www.nature.com/articles/s42256-018-0004-1.

  4. Ben-David S, Hrubeš P, Moran S, Shpilka A, Yehudayoff A. Learnability can be undecidable. Nat Mach Intell. 2019; 1(1):44–8. https://www.nature.com/articles/s42256-018-0002-3.

  5. Brynjolfsson E, McAfee A. The Second Machine Age. Norton; 2016. https://books.wwnorton.com/books/detail.aspx?id=4294987234. Accessed Apr 2019.

  6. Carlini N, Wagner DA. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. Comput Res Repository. 2018. https://arxiv.org/abs/1801.01944.

  7. Chien AA. Open Collaboration in an Age of Distrust. Commun ACM. 2019;62(1). https://cacm.acm.org/magazines/2019/1/233509-open-collaboration-in-an-age-of-distrust/fulltext.

  8. Delacroix S, Lawrence ND. Disturbing the ‘one size fits all’, feudal approach to data governance: bottom-up data Trusts. SSRN. 2018:1–30. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3265315.

  9. Edwards C. Hidden Messages Fool AI. Commun ACM. 2019;62(1). https://cacm.acm.org/magazines/2019/1/233515-hidden-messages-fool-ai.

  10. Etzione A, Etzioni O. Designing AI Systems that Obey Our Laws and Values. Commun ACM. 2016; 9(59):29–31. https://cacm.acm.org/magazines/2016/9/206255-designing-ai-systems-that-obey-our-laws-and-values/fulltext.

  11. Gal MS. Illegal Pricing Algorithms. Commun ACM. 2019;62(1). https://cacm.acm.org/magazines/2019/1/233516-illegal-pricing-algorithms/fulltext.

  12. Hunyadi M. Artificial moral agents. really? In: Laumond J, Danblon E, Pieters C, editors. Wording Robotics, Tracts in Advanced Robotics. Springer: 2019.

  13. Ingrand F. Recent Trends in Formal Validation and Verification of Autonomous Robots Software. In: IEEE International Conference on Robotic Computing: 2019. https://arxiv.org/abs/1606.06565v2.

  14. Larson J, Mattu S, Kirchner L, Angwin J. How We Analyzed the COMPAS Recidivism Algorithm. Propublica. 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.

  15. Manyika J, Lund S, Chui M, Bughin J, Woetzel J, Batra P, Ko R, Sanghvi S. Jobs lost, jobs gained: Workforce transition in a time of automation. Tech Rep. 2017. https://tinyurl.com/yykwovnu.

  16. Mauser W, Klepper G, Rice M, Schmalzbauer BS, Hackmann H, Leemans R, Moore H. Transdisciplinary global change research: the co-creation of knowledge for sustainability. Curr Opin Environ Sustain. 2013; 5(3-4):420–31. https://www.sciencedirect.com/science/article/pii/S1877343513000808.

  17. Mittelstrass J. On transdisciplinarity. Trames J Humanit Soc Sci. 2011;15(4). https://www.ceeol.com/search/article-detail?id=187806.

  18. Moor J. Four kinds of ethical robots. Philos Now. 2009; 72:12–4. http://mrhoyestokwebsite.com/AOKs/Ethics/Related%20Articles/Ethical%20%Robots.pdf.

  19. Nemitz P. Constitutional democracy and technology in the age of artificial intelligence. Phil Trans R Soc A: Math Phys Eng Sci. 2018;376(2133). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3234336.

  20. O’Neil C. Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Random House; 2016. https://weaponsofmathdestructionbook.com/.

  21. Raso F, Hilligoss H, Krishnamurthy V, Bavitz C, Kim LY. Artificial Intelligence & Human Rights: Opportunities & Risks. SSRN. 2018. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259344.

  22. Russell SJ, Dewey D, Tegmark M. Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine; 2015. https://www.aaai.org/ojs/index.php/aimagazine/article/view/2577.

  23. Sandewall E. Ethics, Human Rights, the Intelligent Robot, and its Subsystem for Moral Beliefs. Int J Soc Robot. 2019:1–11. https://doi.org/10.1007/s12369-019-00540-z. Accessed Apr 2019.

  24. Seshia SA, Sadigh D, Sastry S. Towards Verified Artificial Intelligence. Comput Res Repository. 2016. https://arxiv.org/abs/1606.06565v2.

  25. Skeem JL, Lowenkamp C. Risk, Race, & Recidivism: Predictive Bias and Disparate Impact. SSRN. 2016. https://doi.org/10.2139/ssrn.2687339. Accessed Apr 2019.

  26. Sornette D, von der Becke S. Crashes and High Frequency Trading. SSRN. 2011. https://doi.org/10.2139/ssrn.1976249. Accessed Apr 2019.

  27. Vardi MY. Are We Having An Ethical Crisis in Computing? Commun ACM. 2019;62(1). https://cacm.acm.org/magazines/2019/1/233511-are-we-having-an-ethical-crisis-in-computing/fulltext.

  28. Yampolsky RY. Artificial Intelligence Safety Engineering: Why Machine Ethics is a Wrong Approach In: Muller V, editor. Philosophy and Theory of Artificial Intelligence. Springer Verlag: 2013. p. 389–96. https://link.springer.com/content/pdf/10.1007%2F978-3-642-31674-6.pdf. Accessed Apr 2019.

  29. Zhang B, Dafoe A. Artificial Intelligence: American Attitudes and Trends. Tech Rep. 2019. https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/executive-summary.html. Accessed Apr 2019.

  30. Zuboff S. Big other: surveillance capitalism and the prospects of an information civilization. J Inf Technol. 2015; 30(1):75–89. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2594754.

  31. Zuboff S. The Age of Surveillance Capitalism.PublicAffairs; 2019. https://www.publicaffairsbooks.com/titles/shoshana-zuboff/the-age-of-surveillance-capitalism/9781610395694/. Accessed Apr 2019.


Acknowledgements

Many thanks to the reviewers whose comments and suggestions have helped improve this paper.

Funding

Not applicable.

Availability of data and materials

Not applicable.

Author information


Contributions

The author wrote the paper. The author read and approved the final manuscript.

Corresponding author

Correspondence to Malik Ghallab.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional information

Authors’ information

Not applicable.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Ghallab, M. Responsible AI: requirements and challenges. AI Perspect 1, 3 (2019). https://doi.org/10.1186/s42467-019-0003-z
