On 28 January, the first FRIA (fundamental rights impact assessment) model was presented by the Catalan DPA.
On the occasion of International Data Protection Day, the Catalan Data Protection Authority (APDCAT) presented to the Parliament of Catalonia a model, pioneering in Europe, for the development of AI solutions that respect fundamental rights. It is the result of a challenging project led by Prof. Mantelero at the APDCAT, with the participation of various private and public entities.
The impact on fundamental rights is a key component of both the conformity assessment and the assessment required under Article 27 of the AI Act, yet concrete methodologies and models for carrying it out are lacking. Some of the proposed methodologies have shortcomings; others have not been tested on concrete cases, or there is no detailed evidence of their performance.
What has been done in Catalonia is to create an assessment methodology and a model template that streamline the process. Instead of templates that cover a variety of issues with only a limited focus on fundamental rights, or methodologies whose approaches do not fit the fundamental rights context, this model template combines the risk assessment methodology with the fundamental rights legal framework.
The project led by Prof. Mantelero has adopted a case-based empirical approach, which was crucial for testing the effectiveness of the proposed model in achieving the policy and design objectives of the FRIA as set out by the EU legislator in the AI Act. The use cases conducted have shown that the FRIA procedure can be streamlined by avoiding long checklists and focusing instead on the core elements of the impact on fundamental rights.
The use cases have also demonstrated that, at least for the first round of assessment, mitigation and reassessment, people with the appropriate background can complete the FRIA with limited effort. A properly designed FRIA therefore does not impose an excessive additional burden on the private and public entities in the EU that must comply with the AI Act.
In terms of the areas covered, the use cases relate to four of the key areas listed in Annex III of the AI Act: education, workers’ management, access to healthcare, and welfare services. The nature of the use cases discussed also makes them useful to many other public and private entities, including those in other countries, interested in designing AI systems and models that comply with fundamental rights in these core areas.
The report, which includes the methodology, model template and use cases, is available here.