Ethics: A responsible AI framework
Not everything that is allowed is a good idea. (Nor is everything that is a good idea necessarily allowed.) Ethical reflection can help us see more clearly when such conflicts arise. The PHS wants PrevBOT to live up to the principles of ethical and responsible artificial intelligence, and in the sandbox project we have tried to concretise how that can be achieved.
As with the other topics, we have focused on the research and development phase of the project when examining the ethical issues. Often, however, what is ‘ethically right’ in the development phase depends on the consequences and benefits we envisage in the use phase. In this chapter, we have therefore, to a greater extent, envisioned alternative ways in which PrevBOT could operate, without this necessarily reflecting what the PHS has actually planned.
The goal
How can we measure whether the PHS and PrevBOT maintain the desired ethical level? What characterises a development process, and a product, built on responsible artificial intelligence?
‘Responsible artificial intelligence’ is not a protected term that you can use to label your AI tool if it checks off all the items on a specific list of requirements. It is a designation for artificial intelligence that maintains a certain level of accountability when it comes to the consequences of the system – in terms of both development and use – for users and society.
Ethical, responsible or trustworthy AI?
- ‘Ethical AI’ primarily refers to adjusting artificial intelligence systems in accordance with ethical principles and values. This could be ensuring that the system does not perpetuate prejudice or injustice, and that it makes a positive contribution to human rights and welfare.
- ‘Responsible AI’ is about operationalising ethics into practical measures, and ensuring that conscious efforts are made when developing and using AI systems to avoid harm and misuse. A general definition of responsible AI is that AI technology is developed and used in a responsible, transparent and sustainable manner.
- ‘Trustworthy AI’ is a common term in EU contexts and refers to AI systems being lawful, ethical and robust. It is not enough that the technology is in line with laws and regulations – it must also be developed and implemented in a way that earns the trust of users and society by being reliable, secure and transparent.
Although there is considerable overlap between these concepts, the differences often lie in the emphasis: ethical AI focuses on the moral aspects, responsible AI on accountability and the operationalisation of these ethics, and trustworthy AI on earning and maintaining public trust through compliance with legal, ethical and technical standards.
Several different agencies have drawn up principles and criteria for artificial intelligence. First and foremost, there are the 2019 Ethics Guidelines for Trustworthy AI, prepared by an expert group commissioned by the European Commission. The OECD has developed the OECD AI Principles, which encourage innovation and responsible growth of AI that respects human rights and democratic values. In 2022, UNESCO published its Recommendation on the Ethics of Artificial Intelligence. The consulting company PwC developed nine ethical AI principles on behalf of the World Economic Forum. Over time, academic institutions, think tanks and technology players such as Google and Microsoft have come up with different approaches to ethical, responsible and trustworthy AI. Several of these principles and guidelines are general and mostly aimed at political governance. Others are more concrete, and therefore useful for developers. One example is the thorough checklist contained in the guidelines published by the European Commission’s expert group.
There are also guidelines for responsible AI that apply to specific domains, such as the health service and the financial sector. INTERPOL and the United Nations Interregional Crime and Justice Research Institute (UNICRI) have published the document Principles for Responsible AI Innovation, specifically aimed at development in law enforcement agencies, which is relevant to the PrevBOT project.
The Institute of Electrical and Electronics Engineers (IEEE) has also developed standards for the responsible and ethical development of artificial intelligence. These include standards for specific challenges, such as IEEE P7001, which focuses on transparency in autonomous systems, IEEE P7002, which addresses data protection and privacy, and IEEE P7003, which addresses algorithmic bias. The IEEE has also prepared the more comprehensive guidelines Ethically Aligned Design (EAD), which highlight key principles to ensure that the development of artificial intelligence and autonomous systems is in line with ethical norms and values.
Ethics in the national AI strategy
In the sandbox project, we have chosen to look at the National Strategy for Artificial Intelligence, which defines seven ethical principles for artificial intelligence based on the guidelines drawn up by the European Commission’s expert group. As such, the PrevBOT project should strive to comply with the following:
- AI-based solutions must respect human autonomy and control
The development and use of artificial intelligence must foster a democratic and fair society by strengthening and promoting the fundamental freedoms and rights of the individual. Individuals must have the right not to be subject to automated processing when the decision made by the system significantly affects them. Individuals must be included in decision-making processes to assure quality and give feedback at all stages in the process ('human-in-the-loop').
- AI-based systems must be safe and technically robust
AI must be built on technically robust systems that prevent harm and ensure that the systems behave as intended. The risk of unintentional and unexpected harm must be minimised. Technical robustness is also important for a system's accuracy, reliability and reproducibility.
- AI must take privacy and data protection into account
Artificial intelligence built on personal data or on data that affects humans must respect the data protection regulations and the data protection principles in the General Data Protection Regulation.
- AI-based systems must be transparent
Decisions made by systems built on artificial intelligence must be traceable, explainable and transparent. This means that individuals or legal persons must have an opportunity to gain insight into how a decision that affects them was made. Traceability facilitates auditability as well as explainability. Transparency is achieved by, among other things, informing the data subject of the processing. Transparency is also about computer systems not pretending to be human beings; human beings must have the right to know if they are interacting with an AI system.
- AI systems must facilitate inclusion, diversity and equal treatment
When developing and using AI, it is especially important to ensure that AI contributes to inclusion and equality, and that discrimination is avoided. Datasets used to train AI systems can contain historical bias, be incomplete or incorrect. Identifiable and discriminatory bias should, if possible, be removed in the collection phase. Bias can also be counteracted by putting in place oversight processes that analyse and correct the system’s decisions in light of the purpose (a minimal sketch of such a check follows after this list).
- AI must benefit society and the environment
Artificial intelligence must be developed with consideration for society and the environment, and must have no adverse effects on institutions, democracy or society at large.
- Accountability
The requirement of accountability complements the other requirements, and entails the introduction of mechanisms to ensure accountability for solutions built on AI and for their outcomes, both before and after the solutions are implemented. All AI systems must be auditable.
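Of the seven principles, the one on inclusion, diversity and equal treatment lends itself most directly to technical operationalisation. As a purely hypothetical illustration, and not something the PHS has planned, an oversight process could periodically compare the tool’s error rates across groups once flagged cases have been manually confirmed or dismissed. The Python sketch below shows a minimal version of such a check; the data, field names and function are invented for illustration.

```python
# Hypothetical sketch (not part of the PHS's plans): one simple way an oversight
# process could check a classifier's decisions for group-level bias, here by
# comparing false-positive rates across a hypothetical demographic attribute.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of dicts with keys 'group', 'flagged' (bool), 'confirmed' (bool)."""
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for r in records:
        if not r["confirmed"]:           # ground truth: not an actual grooming case
            counts[r["group"]]["negatives"] += 1
            if r["flagged"]:             # but the system flagged it anyway
                counts[r["group"]]["fp"] += 1
    return {g: c["fp"] / c["negatives"] for g, c in counts.items() if c["negatives"]}

# Example with made-up review data: a large gap between groups would trigger
# manual review and possible correction of the model or its threshold.
sample = [
    {"group": "A", "flagged": True,  "confirmed": False},
    {"group": "A", "flagged": False, "confirmed": False},
    {"group": "B", "flagged": True,  "confirmed": False},
    {"group": "B", "flagged": True,  "confirmed": False},
]
print(false_positive_rates(sample))  # e.g. {'A': 0.5, 'B': 1.0}
```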
Artificial intelligence and research ethics
The national strategy also points out that artificial intelligence research must be conducted in accordance with recognised standards for research ethics. In addition, it refers to the National Committee for Research Ethics in Science and Technology’s Statement on research ethics in artificial intelligence, which sets out nine principles for AI research in three areas:
- Responsibility for the development and use of autonomous systems: AI research must safeguard human dignity, assign responsibility, be possible to inspect (inspectability) and contribute to informed debate (dissemination of research).
- Societal consequences and the social responsibility of research: AI research must recognise uncertainty and ensure broad involvement.
- Big data: AI research must ensure data protection and consideration of individuals, ensure verifiability and quality, and enable fair access to data.
Towards the end of this chapter, we demonstrate how ethical issues can be assessed against the relevant principles from the national strategy. But first, we will try to identify, as best we can, the ethical issues inherent in the PrevBOT project and look at which tools and clarifications can lay the foundation for good ethical assessments.