The way forward
The sandbox project has assessed and outlined how the PHS can legally conduct research into such an AI tool. However, a green light for PrevBOT research may be of little value if the tool being researched and developed will not be lawful to use.
In practice, such a tool will inevitably need to process (sensitive) personal data. Depending on how it is implemented, its use could be perceived as somewhat intrusive to the privacy of victims and abusers, as well as to random individuals whose conversations are analysed by PrevBOT while they are online.
It would probably be wise to establish a plan early on for assessing the legality of using such a tool in practice, and that could well be the topic of a new sandbox project.
The PrevBOT project is still at an early stage, and the way forward depends on many decisions yet to be made. From a data protection perspective, it will be particularly significant whether the project maintains its ambition of a preventive tool used to intercept attempts at grooming. The PrevBOT project is now clear that this is the goal. During the transition from idea to ready-to-use AI tool, however, pressure may emerge to give the tool the capability to collect evidence against and pursue abusers. The Data Protection Authority recommends that the project identifies at an early stage the uses of PrevBOT it considers unethical and undesirable, and strives during the development phase to prevent such uses from being pursued.
The desire for freedom and the desire for security are often presented as conflicting goals. The PrevBOT project is an excellent example of freedom, security and privacy being interdependent – and of the challenge being to find the right balance. Minors have a right to autonomy and a private life, but without a certain level of internet security, they would not be able to exercise their autonomy and freedoms. As the tool is gradually designed in more detail, an important part of the project will be to find this equilibrium.
Trust is essential for a project that seeks to be in line with both regulations and guidelines for responsible artificial intelligence. Emphasising transparency and the involvement of relevant stakeholders through the research project provides a good basis for this.
During the course of the sandbox process, LLMs (Large Language Models) have made their breakthrough, and SLMs (Small Language Models) are set to be launched imminently. The same applies to LAMs (Large Action Models). New opportunities are emerging, and the sandbox project has identified several ways in which PrevBOT can help make the internet and everyday life safer for vulnerable groups.
The technology from a successful research project could, for example, be used in apps that run locally on phones and laptops. Such apps would analyse what is visible on the screen rather than operate on the websites’ domains. It would then be possible to configure who should be notified of attempted grooming, in addition to the person looking at the screen.
PrevBOT may end up being not just one tool, but the basis for a number of different measures, which together provide effective protection against online grooming.