
The Norwegian Police University College, exit report: PrevBOT

Goals for the sandbox project

PrevBOT raises a number of questions and some clear ethical dilemmas. A fundamental legal question is whether it is even possible for the PHS/the police to develop and use such a tool without breaching data protection regulations.

Will the answer depend on how PrevBOT is intended to operate and which features are integrated or not? Or will the answer depend on which data, including personal data, is used to train such a robot? And if it is the latter, how should that data be processed when developing the tool, and where should it be stored when it is in use? Can PrevBOT – developed with the use of personal data – be taken into use in police online crime prevention activities?

The number of questions such a tool raises does not diminish as we move from a purely legal to an ethical perspective. Is it ethical to monitor ‘everyone’ in order to catch a few perpetrators (even if they are becoming increasingly numerous)? Where on the spectrum is it most ethical to calibrate PrevBOT: flagging a conversation as early as possible, at the risk of mislabelling an innocent individual, or delaying until the grooming is more obvious, at the risk of letting the perpetrator slip away and lead the victim into a private conversation? And if PrevBOT becomes a purely preventive tool that simply frightens off abusers and alerts potential victims, would it be ethical for the police not to attempt to apprehend a potentially dangerous individual if they had received information that could identify the perpetrator?

Topics and delimitations

With the project at such an early phase and with so many directional choices to be made, it was simply not possible to make an overall assessment of the PrevBOT project.

For assessing the legality of PrevBOT, the sandbox project was delimited to the development phase. The most central discussions revolved around the confidential text data from Norwegian criminal cases that involved online abuse, which the PrevBOT project wishes to use as training and test data in the development phase. The project already has a small amount of such data by virtue of a permit from the Director of Public Prosecutions, cf. the Police Databases Act Section 33, which allows the duty of confidentiality to be lifted for research purposes.

The sandbox project also aimed to identify and partially discuss some of the ethical questions surrounding the start-up and early phase of PrevBOT’s research project, so that the project team has some guidance when setting the course for the tool’s development.

The objectives were specified as follows:

  1. Clarify the legal requirements for processing textual data used as evidence in completed criminal cases during the development phase of PrevBOT.
  2. Specify what ‘responsible AI’ means when the police use the technology to analyse communication on the internet, with particular focus on explainability.