Thematic day at PRIMA2017 – 30 October 2017

Organisers:

  • Virginia Dignum, Delft University of Technology, m.v.dignum@tudelft.nl
  • Louise Dennis, University of Liverpool

Invited speaker: Raja Chatila

In the near future, the public will encounter new AI applications in domains such as transportation and healthcare. These applications must be introduced in ways that build trust and understanding, and respect human and civil rights. The ethics of AI is indeed a hot topic: you can hardly click on a news site or open a newspaper these days without finding an article about the role of ethics in Artificial Intelligence. Given the need to avoid unintended negative consequences for society, this attention is warranted. However, many of the issues raise open questions, from both a technical and a societal point of view.

Ethics by Design concerns the methods, algorithms and tools needed to endow autonomous agents with the capability to reason about the ethical aspects of their decisions, as well as the methods, tools and formalisms needed to guarantee that an agent’s behavior remains within given moral bounds. How, and to what extent, can agents understand the social reality in which they operate, and the other intelligences (artificial, animal and human) with which they co-exist? What are the ethical concerns in the emerging new forms of society, and how do we ensure that the human dimension is upheld in the interactions and decisions of autonomous agents?

During this thematic meeting, the central question is:

Can we, and should we, build ethically-aware agents?

We will take a highly interactive approach, rather than a traditional paper-presentation format. After a short introduction to the topic and the aims of the day, the theme will be elaborated further by the invited speaker, Raja Chatila, and through short pitches by participants. Participants will then discuss specific hot topics in groups, following a semi-structured deliberation method aimed at generating a few concrete points for action. The day will conclude with a plenary presentation of these points.

Topics for discussion include, but are not limited to:

  • From BDI to ethical-awareness: what is needed?
  • Can ethical-awareness be built on top of normative (meta)reasoning models? Or, put differently, what are the main differences between norms and values?
  • Human-in-the-loop: function, need, or requirement?
  • Ethics in multi-cultural (heterogeneous agent) environments: what is ‘good’ and what is ‘bad’?
  • How to elicit, represent and maintain the link between human values, social norms and system functionalities, in dynamic situations?
  • Accountability, Responsibility and Transparency: how to formalize these principles, and how to ensure that agents comply with them?
  • Ethical verification (design-time) versus ethical monitoring (run-time)
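
As a purely hypothetical illustration of the last topic above, the sketch below shows run-time ethical monitoring in its simplest form: candidate actions proposed by an agent are checked against explicitly stated moral bounds before they are executed, so that behavior stays within those bounds. The Action fields, the bounds and the example values are assumptions made for illustration only, not part of any particular framework discussed at the workshop.

    # Minimal sketch of run-time ethical monitoring (illustrative only).
    # The fields, bounds and thresholds below are hypothetical assumptions.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Action:
        name: str
        expected_harm: float      # hypothetical harm estimate in [0, 1]
        consent_obtained: bool    # whether the affected person consented

    # Moral bounds expressed as predicates that every action must satisfy.
    MoralBound = Callable[[Action], bool]

    BOUNDS: List[MoralBound] = [
        lambda a: a.expected_harm <= 0.1,   # keep expected harm low
        lambda a: a.consent_obtained,       # never act without consent
    ]

    def monitor(candidates: List[Action]) -> List[Action]:
        """Return only the candidate actions that satisfy every moral bound."""
        return [a for a in candidates if all(bound(a) for bound in BOUNDS)]

    if __name__ == "__main__":
        proposed = [
            Action("share_medical_data", expected_harm=0.3, consent_obtained=False),
            Action("suggest_treatment", expected_harm=0.05, consent_obtained=True),
        ]
        for action in monitor(proposed):
            print("permitted:", action.name)   # prints only 'suggest_treatment'

Design-time ethical verification, by contrast, would aim to establish before deployment that no reachable decision of the agent can violate such bounds.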

Tentative program:

  • 9:15 – Welcome
  • 9:30 – Presentation by Raja Chatila
  • 10:30 – Coffee break
  • 11:00 – Pitches:
    • 9 participants (Einar, Maite, Marija, Juan, Marlies, Matthijs, Gonzalo, Tristan, Maurizio)
    • 5 minutes each, plus short Q&A
  • 12:30 – Lunch
  • 14:00 – Discussion (3 parallel groups):
    • How to assign/understand responsibility when AI systems make decisions?
    • How to choose the ‘right’ norms and implement them ‘right’?
    • What does it mean for a machine to take ethical decisions?
  • 16:00 – Coffee break
  • 16:30 – Plenary presentation and discussion
  • 17:30 – Closing

Invited talk

Title: Ethical Considerations in Artificial Intelligence, Robotics and Autonomous Systems
Presenter: Prof. Raja Chatila, ISIR, Université Pierre et Marie Curie, Paris, France

Abstract:

Ethical, legal and societal issues (ELS) raised by the development of Artificial Intelligence, Robotics and Autonomous Systems have recently attracted strong interest, both among the general public and within the scientific communities involved, driven by the development of applications often based on deep learning programs that are prone to bias, by the wide exploitation of personal data, and by new applications and use cases such as personal robotics, autonomous cars or autonomous weapons. These ELS questions cover a wide range of issues, such as the future of employment, privacy and data protection, surveillance, interaction with vulnerable people, human dignity, autonomous decision-making, moral responsibility and legal liability of robots, imitation of living beings and humans, human augmentation, and the status of robots in society.

These questions sometimes raise classical issues in ethical philosophy and law by transposing them to intelligent machines, but they also pose new problems, and reflection on them must mobilize interdisciplinary communities in order to grasp their scientific, technical and societal aspects as a whole. The question in developing these technologies, which may have an unprecedented impact on our society, is ultimately how to align them with the values on which human rights and well-being are based.

From the perspective of the designers of such systems, two main issues are central. Firstly, the research methodologies and design processes themselves: how can we define and adopt an ethical and responsible methodology for developing these technological systems, so that they are transparent and explainable and comply with human values? This involves several aspects that transform product lifecycle management approaches. Secondly, when decisions are delegated to so-called autonomous systems, is it possible to embed ethical reasoning in their decision-making processes? The talk will give an overview of these two issues, inspired by the ongoing reflection and work within the IEEE Global Initiative on Ethical Considerations in AI and Autonomous Systems.