Ethics: First step analysis
Based on the seven elements in the previous section, the sandbox project will assess whether it is ethically right for the PrevBOT project to take the first step into the research.
Legitimate authority
Is it legitimate for the PHS to develop technologies such as PrevBOT? Is it even legitimate for law enforcement authorities to be drivers in the development of new technology?
The police have been criticised for not keeping up with the digital transformation. In autumn 2023, the Office of the Auditor General levelled considerable criticism in a report stating that the police have obsolete IT systems, that there is internal dissatisfaction with digital services and tools, and that the Ministry of Justice and Public Security and the National Police Directorate had inadequate knowledge of technology and how it can be used to develop the police and prosecution services of the future.
Long-term omissions do not justify unrestricted development in the field, however. A lack of experience and knowledge may indicate that the police should now be particularly mindful when attempting to develop (or conduct research on) new and advanced technology. At the same time, the Office of the Auditor General states that the police’s failure to prioritise digitalisation and technology has reduced security and weakened crime prevention efforts. Doing nothing could therefore be an equally problematic option, ethically speaking.
Perhaps there is a point of ‘balanced advancement’ (see figure below) to take advantage of the opportunities new technology brings about?
For the PrevBOT project specifically, we are talking about serious crime, and it is reasonable that the police attempt to combat it. Based on the number of reported crimes and the assumed number of unreported cases, the problem is of such a magnitude and nature that the police do not consider it possible to ever get to the bottom of it. Methods to prevent or in some way avert the problem are therefore necessary. Crime prevention is in any case also the police’s main strategy.
That is not to say that such a system should only be used by the police. It is also conceivable that all or parts of a fully developed PrevBOT technology could be usefully employed by other actors, so that automated alerts could be sent and/or conversations intercepted without the police being involved. In other words, internet actors, both commercial and public, could use PrevBOT technology to moderate what is taking place on their platforms.
It would nevertheless be legitimate for the PHS to be responsible for the development of such a tool. Assuming transparency of the results, there is reason to argue that it is precisely an institution linked to the police authorities that should be responsible for this research.
Just cause
Is there just cause to develop such a system? The need for protection is clear. As pointed out in the first chapter, each abuser could have tens or hundreds of victims, and the consequences of sexual abuse are a public health problem. So there is clearly just cause to take action on the issue. But are there convincing reasons for doing so in line with how PrevBOT is envisioned, by intercepting private conversations (albeit on open forums)?
Does it violate the children’s autonomy if the police keep track of and intervene in conversations on suspicion of attempted grooming? Yes. It reduces young people’s opportunity and ability to assess the situation and decide for themselves how to handle it. Yet there may be just cause to do so nonetheless. After all, the situation concerns minors, who are also entitled to protection.
Online minors, the group PrevBOT is intended to protect, are by no means a homogeneous group. There is considerable variation in how well parents look out for and guide their children on netiquette. The age of those in need of protection also varies greatly. Many of the minors using platforms where grooming occurs are almost of age, while some are as young as 10 years old. There is considerable variation in sexual development, curiosity and experience. There is also some variation in knowledge and experience of dealing with attempts at manipulation. In sum, how vulnerable they are varies considerably. The most vulnerable may be characterised by poor follow-up at home, low digital literacy and a high degree of risk-seeking behaviour.
The UN Convention on the Rights of the Child states that children have the right to protection. Article 34 deals explicitly with the right of children to be protected from sexual exploitation, while Article 16 deals with the right to, and protection of, a private life. Article 12 deals with respect for the views of children and recognises the right of children to be active participants in decision-making processes that affect them. Children are therefore entitled to some kind of autonomy, but this freedom seems – both in words and practice – to be subordinate to the requirement for protection.
See also ‘Barnet – et menneske uten krav på fulle menneskerettigheter?’ (‘The Child – A Human Being with No Claim to Full Human Rights?’) by Paul M. Opdal (in Norwegian only)
UN Convention on the Rights of the Child
‘No child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation.’
(Article 16.1)
To avoid PrevBOT being perceived as an arbitrary interference with privacy, it is important that the bot can provide real protection. It is not enough to point out the prevalence of the sexual exploitation to be combated. Here, we must analyse the actual situation in which the bot (or the operator of the bot) is to intervene in the child’s online activity, and weigh the degree of threat on the one side against the degree of vulnerability on the other. As mentioned, the degree of vulnerability will vary, but many young people will be highly vulnerable, with little experience of recognising attempts at manipulation, often in combination with sexual curiosity and/or insecurity. The threat, in terms of the risk of attempted grooming and the consequences of any abuse, is also great. Neither party in these conversations is particularly capable of long-term thinking (about consequences for others and consequences for themselves, respectively). The fact that an abuser can meet a victim in a chat room, which is more or less unregulated, is obviously a problem. A tool that intercepts such meetings would provide real protection.
Sexual abuse in general, and grooming in particular, is a problem of such magnitude and complexity that one measure alone is unlikely to overcome it. However, PrevBOT can undoubtedly serve as a useful tool, and the reasons for its development appear just.
Right intention
A third aspect of the first step analysis concerns the intention behind the development of a PrevBOT. In practice, this comes down to an assessment of that intention. Can we assume that the idea is based on, and that development will take place with, respect for the integrity and human dignity of the parties the technology targets? Are we sure that the intention of PrevBOT is to eliminate the crime, not the people and groups as such? The police and the PHS must consider and assess this themselves.
We may both suspect and understand that it is tempting for the police to also let the bot collect evidence for starting an investigation based on flagged conversations. This is also featured in early sketches of the PrevBOT. Such a version may still be compatible with good intentions and an honest purpose of fighting crime. The potential intention problem is nevertheless easier to address with a purely preventive PrevBOT, one that is content to uncover and intercept.
Proportionality
The principle of proportionality means that the police should not ‘use any stronger means until milder means have been attempted in vain’ (the Police Instructions Section 3-1). The benefits of preventing abuse must also be weighed against the disadvantages of the development and use of a PrevBOT.
The sandbox project has not investigated whether there are other, milder means that the police should attempt before PrevBOT. Whether the bot in this context will be a particularly strong means depends on its design. An evidence-collecting PrevBOT is likely a more powerful tool than a purely preventive bot. This means that an evidence-collecting bot can only (possibly) be justified if a purely preventive bot has been attempted in vain.
We must also consider whether the use of a PrevBOT tool would be proportional to the problem to be combated. Could it be a case of using a sledgehammer to crack a nut? Sexual exploitation and abuse of children is not a ‘nut’. It is a serious crime and a public health problem. However, we must expect the tool to be accurate and that its use does not affect a vast number of people who are not at risk of becoming victims or abusers. Is there a need for such ‘mass surveillance’ to avert the crime you want to eliminate? Or in other words: can PrevBOT be designed to minimise interference with the privacy of ‘the masses’?
Can we ensure that flagged conversations, which according to the plan will be saved to continuously improve and linguistically update the model, are not stored for longer than strictly necessary? In situations where the police have intervened with a warning, they may be required to document the electronic traces that gave grounds for the intervention. However, storing unnecessarily large amounts of personal data for an unnecessarily long time is not good for privacy and data protection. Updates could, for example, take place relatively frequently, both to avoid an extensive inventory of logs and to ensure that PrevBOT performs optimally. The project could also assess whether it should only save the logs where a bot operator has intervened, rather than all of the flagged logs. This would provide human quality assurance, which both reduces noise in the continued learning material and strengthens privacy.
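As a purely hypothetical illustration of such a retention policy, the sketch below keeps only the flagged logs where an operator actually intervened, and only for a limited period. The data structure, field names and the 30-day window are assumptions made for this example; the report does not specify how PrevBOT would implement storage.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; the actual period would have to be justified
# under data protection rules and is not specified in the report.
RETENTION = timedelta(days=30)

@dataclass
class FlaggedLog:
    conversation_id: str
    flagged_at: datetime
    operator_intervened: bool  # did a human online patrol act on the flag?

def logs_to_keep(logs: list[FlaggedLog], now: datetime) -> list[FlaggedLog]:
    """Keep only logs where an operator intervened, and only while they are
    still needed for documentation and model updates."""
    return [
        log for log in logs
        if log.operator_intervened and now - log.flagged_at <= RETENTION
    ]

# Example: one intervened log within the window is kept, the rest are dropped.
now = datetime.now(timezone.utc)
logs = [
    FlaggedLog("a", now - timedelta(days=2), operator_intervened=True),
    FlaggedLog("b", now - timedelta(days=2), operator_intervened=False),
    FlaggedLog("c", now - timedelta(days=90), operator_intervened=True),
]
print([log.conversation_id for log in logs_to_keep(logs, now)])  # ['a']
```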
It is important that proportionality is actively considered throughout the course of the research and development processes. As part of a first step analysis, we consider the project to be in line with the principle.
Probability of success
Taking a first step can be most clearly justified if there is a reasonable probability of success. Technologically speaking, it is so well proven that machines can identify specific conversational features and conduct sentiment analyses that we can safely assume there is a reasonable chance of succeeding in creating a bot that can detect and flag grooming attempts. However, to ensure that it will not be a first step toward straying from the ethical path, we need to decide whether a technically functioning PrevBOT will have a reasonable probability of succeeding in preventing CSEA in practice.
Technically, it is important, for example, that the system is fast enough to intercept before the conversation is moved to a closed forum. This concerns both the bot’s ability to detect and flag suspicious conversations in time, and whether the police’s online patrols have the capacity to follow up all conversations flagged by the PrevBOT and intervene quickly enough where needed. If that capacity falls short, the PrevBOT, which is intended to provide decision-making support for human online patrols, may quickly turn into a fully automated tool. This would in such case mean stricter legal and ethical requirements, where parts of the assessment will concern whether processing performed by the tool has legal implications for, or correspondingly significantly affects, the individual.
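To make the timing point concrete, here is a minimal, purely illustrative sketch of a decision-support loop: each incoming message is scored, and the conversation is handed to a human online patrol as soon as an accumulated risk score crosses a threshold, before the chat can move to a closed forum. The scoring function, phrases and threshold are invented placeholders, not PrevBOT’s actual classifiers.

```python
# Minimal, illustrative sketch of a decision-support loop; not PrevBOT's actual design.
RISK_THRESHOLD = 0.8  # hypothetical value governing how early a conversation is flagged

def score_message(text: str) -> float:
    """Placeholder risk score in [0, 1]. A real system would combine linguistic
    features, sentiment analysis and profiling signals rather than keywords."""
    suspicious_phrases = ("how old are you", "keep this secret", "send a photo")
    return min(1.0, sum(0.4 for phrase in suspicious_phrases if phrase in text.lower()))

def monitor(conversation_id: str, messages):
    """Accumulate risk message by message and hand the conversation over to a
    human online patrol as soon as the threshold is crossed."""
    risk = 0.0
    for index, text in enumerate(messages):
        risk = min(1.0, risk + score_message(text))
        if risk >= RISK_THRESHOLD:
            yield {"conversation": conversation_id, "message_index": index, "risk": risk}
            return  # no further automated action; the operator decides what happens next

for flag in monitor("chat-42", ["hi!", "how old are you?", "keep this secret, ok?"]):
    print(flag)
```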
It is possible to discuss whether an automatic pop-up warning is really that intrusive. Maybe not in itself. However, many will perceive a warning sent by the police ‘labelling’ you as a potential abuser or potential victim to be intrusive, even though it has no legal consequences. So the wording of the warnings, and whether it is the potential victim, the potential abuser or both who receive the warning, will require consideration.
PrevBOT’s chance of success is not just a technical or organisational issue, however. Other, equally decisive factors will determine whether it works as intended: Will potential abusers be stopped by a pop-up warning on their screen? If the police are open about how the tool works, which is presumably a prerequisite if PrevBOT is to be called responsible AI (cf. the principle of transparency), the well-informed will know that ignoring the warning will not affect the risk of being caught. Is it conceivable that the most dangerous abusers will be cold-blooded enough to defy the warnings and continue their chat?
How, then, will the potential victim experience a warning of a possible grooming attempt? As mentioned, the potential victims are by no means a homogenous group. The effect of a warning will probably depend on the situation. Attempts at grooming can occur on chat or gaming platforms intended for general socialising. These are places young people may perceive as being safe home turf, where they are less vigilant and may be caught off guard by flattery and grooming attempts. A warning in such case may be an effective wake-up call.
At the other extreme are minors who have already defied warnings and have ‘snuck’ into pornographic websites with an 18-year age limit. If someone attempts to groom you in such a context, when you are lying about your age and are looking to push (sexual) boundaries: Would you be bothered by a warning about attempted grooming?
Professor Elisabeth Staksrud from the Department of Media and Communication at the University of Oslo has been monitoring children’s internet use since the 1990s. Her research shows that those who are subject to sexual abuse after meeting people online usually have a hunch that they are meeting an adult who is looking for something of a sexual nature. So a warning about that particular danger will not bring anything new to the table. This does not necessarily mean that it will not have an effect though. A warning sent by the police could ensure that the gravity of the matter sinks in. However, we do not know whether such warnings will have an equally good effect on everyone. And perhaps least effect on the most vulnerable?
In addition, the potential abusers are unlikely to be a homogeneous group when it comes to age and ‘aggression’. Some are serial abusers with a conscious plan and method for luring their victims. Others may slip more or less subconsciously past their normal moral scruples, and have ‘suddenly’ done things online that they would be unlikely to do in their ‘analogue’ lives. For these potential abusers, a police warning may be effective.
Further reading: ‘Dette er de norske nettovergriperne’ (‘These are the Norwegian net abusers’, aftenposten.no – in Norwegian only)
PrevBOT is unlikely to be 100% effective in averting abuse on the platforms where it operates, even in cases where the bot has detected grooming and attempts have been made to intercept it. But it is reasonable to believe that it will be able to stop a fair amount. The uncertainty associated with the chat participants’ reactions to warnings and police interception indicates that more research on how the tool works in practice is essential as soon as it is taken into use.
In the sandbox project, we have also discussed the use of the words ‘victim’ and ‘abuser’. The people involved may not see themselves as potential victims and potential abusers, and these kinds of expressions may seem alienating. The wording used by the police in their interception attempts could therefore be decisive to whether PrevBOT has a reasonable chance of success.
One aspect is whether the chat participants respond to the warnings. Another is whether they believe the warnings are genuine. How can young people, who learn at school to be critical internet users, trust that it is actually the police intervening? What if it is the warning itself that they become critical towards? Hopefully, the police’s online patrols are experienced in handling this. It is in any case a possible outcome that is important to take into account when developing the project.
The above problem would be reduced if young people were well informed about PrevBOT’s existence. General awareness of the fact that the bot and the police are keeping track of online activities could also have an effect in itself.
It could of course lead to crime relocation, i.e. that the grooming moves to arenas that PrevBOT is unable to access. If the problem then moves to the darkest corners of the internet, it will in any case mean that both victims and abusers must to a much greater extent seek out the situations consciously. At present, new victims are to some degree being picked up ‘out on the open street’. If the problem relocates, the ‘streets’ will at least be safe for the majority, and the problem would be reduced, if not completely eliminated.
On the other hand, the knowledge that PrevBOT is keeping track could provide a false sense of security. If people blindly trust that ‘big brother’ will intervene in everything that seems suspicious, could they become more vulnerable to attempts that PrevBOT is unable to detect? It is relevant in this respect that the vast majority of abuse is committed by people of the same age. According to the plan, however, PrevBOT will detect large age differences between the chatters, without specifying what should qualify as a ‘large age difference’. Research shows that convicted groomers are mostly men between the ages of 25 and 45. Will a conversation between a man in his mid-20s and a girl of 15-16 be defined as a large age difference? And will it be as easy for PrevBOT to detect as if the man was 40? The greater the required age gap for PrevBOT to intervene, the fewer cases it will detect. And the more attempts by people of a similar age that go under the PrevBOT radar, the greater the false sense of security for people who believe that PrevBOT makes the platform safe.
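The unspecified ‘large age difference’ is, in practice, a threshold parameter, and the choice of threshold directly determines which of the pairs mentioned above would be flagged at all. The function and the 15-year threshold below are purely hypothetical.

```python
AGE_GAP_THRESHOLD = 15  # hypothetical; the report does not define a "large age difference"

def large_age_gap(older_age: int, younger_age: int) -> bool:
    """Illustrative rule: flag only when the gap meets the chosen threshold."""
    return older_age - younger_age >= AGE_GAP_THRESHOLD

# The examples discussed above: a 40-year-old, and a man in his mid-20s, each chatting with a 15-year-old.
print(large_age_gap(40, 15))  # True  - clearly above the threshold
print(large_age_gap(25, 15))  # False - slips under this particular threshold
```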
To summarise the PrevBOT project’s chance of creating an effective tool: a number of factors affect its possibility of success. Admittedly, many of the possible pitfalls can be addressed in the design of the tool and in how it is used. PrevBOT is unlikely to detect or be able to avert all grooming attempts, but it will hopefully stop a fair amount. So the chance of success is reasonable enough to defend the first step into the PrevBOT research.
‘Last resort’
If there is only one way to avert this type of crime, a last resort, the requirement for proportionality could be adjusted. It is unlikely to be relevant in this context, however. PrevBOT is neither the only nor the last resort in the fight against this type of crime.
As mentioned under the section on proportionality, the police are more or less obliged to attempt a purely preventive variant of PrevBOT before developing a tool that also collects evidence and facilitates investigation.
Nor is it certain that an evidence-collecting bot would be the very last resort. Should it become relevant to proceed on that track, an assessment and comparison with other methods would be required.
Consideration for third parties
The last point in this first step analysis is about consideration for ‘innocent’ users and others who do not or should not necessarily come into close contact with a PrevBOT in action.
On the positive side, many people would probably welcome such a tool. Parents will appreciate that something is being done. Politicians will be grateful for measures that can make society safer. Had a physical space been as prone to crime as the internet, we would expect the police to send uniformed patrols there or address the problem in one way or another.
But this ‘one way or another’ does not necessarily include a PrevBOT. On the negative side, such a tool can lead to a chilling effect. The mere knowledge that the police have a tool that tracks, stores and potentially intervenes in our activities, when we go about our lives in open, digital spaces, could make us feel less free and not want to use these arenas. Such a chilling effect could be reinforced if, in practice, PrevBOT intervenes in harmless affection between couples or consenting internet users.
Where the PrevBOT is used and how it is set up will therefore be crucial. How effective should it be? How sure should it be that what is going on is grooming with a (high) risk of ending up in sexual exploitation? Should it be content to warn against and scare away the most obvious serial abusers? Or should it have a lower threshold for intervening in chatters’ online attempts to challenge one another in a sexual manner, with the risk of many ‘false positive’ flags?
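These questions amount to choosing a classification threshold, and the trade-off can be illustrated with invented numbers: a lower intervention threshold misses fewer grooming attempts but flags more harmless conversations. The scores and labels below are fabricated purely to show the mechanics.

```python
# Illustrative only: how the intervention threshold shifts the balance between
# missed grooming attempts (false negatives) and harmless chats flagged
# (false positives). All scores and labels are invented.

conversations = [  # (risk score from a hypothetical model, actually grooming?)
    (0.95, True), (0.85, True), (0.70, True), (0.60, False),
    (0.55, False), (0.40, True), (0.30, False), (0.10, False),
]

for threshold in (0.9, 0.6, 0.3):
    flagged = [(score, grooming) for score, grooming in conversations if score >= threshold]
    false_positives = sum(1 for _, grooming in flagged if not grooming)
    missed = sum(1 for score, grooming in conversations if grooming and score < threshold)
    print(f"threshold={threshold}: flagged={len(flagged)}, "
          f"false positives={false_positives}, missed grooming={missed}")
```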
A chilling effect may also occur in the absence of a PrevBOT. If nothing is done and the internet continues to be perceived as an increasingly lawless and dangerous space, there is reason to believe that many will increasingly steer clear of it. Parents may want to set stricter limits on their children’s internet use. There may not be anything wrong with that in itself, but there is a risk that those remaining online will be the most vulnerable.
In other words, not taking up the fight against online crime also appears to be negative with respect to people’s sense of freedom and possibility of having a private life online.
All in all, the sandbox project concludes that the criteria in the first step analysis have been met, and that it is ethically right to initiate research on PrevBOT.