Artificial intelligence (AI) can identify meaning in text and say something about the state of mind and traits of the person writing it. The Norwegian Police University College, which is leading the PrevBOT project, wants to explore the possibility of creating a tool that can automatically patrol the open internet with the aim of detecting and preventing the sexual exploitation of minors.
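As a loose illustration of the kind of text analysis involved, the sketch below trains a toy classifier to flag chat messages, assuming a simple TF-IDF plus logistic regression setup. PrevBOT's actual models and training data are not described here; every text, label and threshold in the snippet is invented for illustration.

```python
# Hypothetical sketch: flagging risky chat messages with a simple text
# classifier. This is NOT PrevBOT's actual method; it only illustrates the
# general idea of inferring signals about an author from text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (invented for illustration): 1 = flagged, 0 = benign.
texts = [
    "how old are you and where do you live",
    "do not tell your parents we talk",
    "what homework did you get today",
    "did you watch the match yesterday",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Probability score for a new message; a real system would patrol public
# forums and chat services rather than single strings.
print(model.predict_proba(["please keep our chat a secret"])[0][1])
```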
The enterprise Doorkeeper aims to strengthen data protection in modern video monitoring systems. They want to achieve this by using intelligent video analytics to censor identifying information, such as faces and human shapes, in the video feed. They also want to ensure that fewer recordings are saved than in more traditional monitoring systems.
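A minimal sketch of what the censoring step could look like, assuming an OpenCV-style pipeline with a stock face detector; Doorkeeper's actual analytics, detector choice and parameters are not public in this summary.

```python
# Hypothetical sketch of the anonymisation step: detect faces in a frame and
# blur them before the frame is stored or streamed onward.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def censor_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face with a heavy Gaussian blur.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0
        )
    return frame

cap = cv2.VideoCapture(0)  # e.g. a webcam; a real system reads the CCTV feed
ok, frame = cap.read()
if ok:
    cv2.imwrite("censored.jpg", censor_frame(frame))
cap.release()
```

Blurring in place before anything is saved is what keeps identifying information out of the stored footage, rather than redacting it after the fact.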
Artificial intelligence (AI) can make it possible to identify which patients are at risk of rapid readmission to hospital. Such a tool could enable the health service to provide better cross-functional follow-up, in the hope of sparing patients (and society) unnecessary hospital admissions. In the regulatory sandbox, the Norwegian Data Protection Authority and Helse Bergen have explored what such a tool should look like in order to comply with the data protection regulations.
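For a concrete picture, here is a minimal sketch of a readmission-risk classifier trained on invented data; the feature names, model class and data are assumptions for illustration only, not Helse Bergen's actual tool.

```python
# Minimal sketch of a readmission-risk model on invented toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical features: [age, days_in_hospital, previous_admissions]
X = rng.normal(size=(200, 3))
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(size=200) > 1).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Risk score for a new patient; clinicians would treat this as decision
# support, not as an automated decision.
print(model.predict_proba([[0.2, 1.5, 2.0]])[0][1])
```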
The use of artificial intelligence (AI) raises a wide range of issues that fall under the broad umbrella of transparency. As soon as AI is used in connection with personal data, the data protection regulations require transparency. Yet under this umbrella we also find ethical questions and technological issues relating to communication and design. This experience-based report describes how to communicate when using AI.
Ruter has participated in the Norwegian Data Protection Authority's sandbox for responsible artificial intelligence in connection with its plans to use AI in its app. In the sandbox project, the NDPA and Ruter have discussed how Ruter can be open about the processing of personal data that will take place in this solution. A particularly interesting issue is how clearly the purposes of the processing must be delineated in advance; after all, one of artificial intelligence's strengths is discovering new connections and possibilities.
Can an algorithm designed to predict heart failure behave in a discriminatory way? Is it a symptom of injustice if such an AI tool is better at diagnosing some types of patients than others? In this sandbox project, the Norwegian Data Protection Authority, Ahus and the Equality and Anti-Discrimination Ombud looked at algorithmic bias and discrimination in an AI-based decision-support tool under development for clinical use at Ahus.
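A basic fairness check of the kind this question points to can be sketched as follows: compare the model's sensitivity (recall) across patient groups. The predictions and the group attribute below are invented; the actual Ahus evaluation is described in the report itself.

```python
# Hedged sketch of a simple fairness check: per-group recall on toy data.
import numpy as np
from sklearn.metrics import recall_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # invented ground truth
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])   # invented model predictions
group = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])  # hypothetical attribute

for g in np.unique(group):
    mask = group == g
    print(g, "recall:", recall_score(y_true[mask], y_pred[mask]))
# A large gap between groups would be a symptom of algorithmic bias worth
# investigating before clinical deployment.
```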
In the sandbox, Simplifai and the Norwegian Data Protection Authority have looked at whether the privacy rules allow public administration to use a machine learning solution to record and archive e-mails. Together with NVE, they have also explored how public administration can make informed choices when purchasing intelligent solutions such as DAM.
How can you learn from data you do not have? Can federated learning be the solution when data sharing is difficult? The Norwegian Data Protection Authority's sandbox has explored the challenges and benefits of federated learning, a machine learning methodology presumed to be privacy-friendly, which the start-up Finterai wishes to use in the fight against money laundering and the financing of terrorism.
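Federated learning, in its simplest "federated averaging" form, can be sketched as below: each participant trains on its own data, and only model weights, never the raw data, are shared and averaged. This toy numpy version makes assumptions (logistic-regression models, synthetic data, a fixed number of rounds) and is far simpler than Finterai's real protocol.

```python
# Simplified sketch of federated averaging (FedAvg): each "bank" trains
# locally and only model weights leave the institution.
import numpy as np

rng = np.random.default_rng(1)

def local_update(w, X, y, lr=0.1, steps=50):
    """A few steps of logistic-regression gradient descent on local data."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Three participants, each with private data that never leaves the institution.
banks = [(rng.normal(size=(100, 4)), rng.integers(0, 2, 100)) for _ in range(3)]

w_global = np.zeros(4)
for _ in range(10):
    # Each participant starts from the shared global model...
    local_weights = [local_update(w_global.copy(), X, y) for X, y in banks]
    # ...and the server only ever sees the averaged weights.
    w_global = np.mean(local_weights, axis=0)

print("global model weights:", w_global)
```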
How can the Norwegian school system give students individual assessments and adapted education using learning analytics, while at the same time safeguarding students' privacy? This has been the central question in the regulatory AI sandbox project with the Norwegian Association of Local and Regional Authorities (KS), the Centre for the Science of Learning & Technology (SLATE) at the University of Bergen (UiB) and the City of Oslo's Education Agency.
Secure Practice wants to develop a service that profiles employees with regard to the cyber security risk they pose to their organization. The purpose is to enable follow-up with security training adapted to the profile category each employee falls into. The project entered the sandbox in the spring of 2021. Here is the exit report.
This sandbox project addresses NAV's development of an AI tool to predict how individual cases of sick leave will develop. NAV joined the sandbox in the spring of 2021, and the project was completed during the fall. Here is the exit report.