Professor Mantelero in Brasilia to discuss the Brazilian and EU approach to AI regulation, 9-10 November 2023

Prof. Mantelero was invited to the conference Democracia e Direitos Fundamentais na Era Digital, organised by the Instituto Brasileiro de Ensino, Desenvolvimento e Pesquisa (IDP) in Brasilia, together with distinguished scholars and senior Brazilian authorities involved in the legislative process on AI and in data protection. His presentation compared the EU and Brazilian approaches to AI regulation; a brief summary of the main points follows.

A superficial reading of the Brazilian proposal under discussion might suggest a case of the Brussels effect, but this reading is mistaken. There are indeed many similarities with the AI Act, but the reason for them is deeper and not related to the economic influence of the EU (a neo-colonialist misinterpretation).

First, Brazilian legal culture is quite close to the European one (see, e.g., the past influence of Italian scholars on Brazilian doctrine). Second, with regard to AI, Brazil and the EU are in a similar position vis-à-vis the AI industry, both being largely ‘areas of conquest’ for foreign companies. The scenario is different in the US (or China).

Given the relationship between AI development and the values underpinning AI models, as well as the US government's ability to use the so-called bully pulpit to guide industry (see, e.g., the recent Executive Order on AI), the convergence between the EU and Brazil is not surprising, nor is their divergence from the soft law-oriented approach adopted in the US.

Another important element of the Brazilian proposal is its lean approach. Here, the Brazilian solution is elegant in several respects: (i) a general impact assessment requirement for both AI providers and deployers; (ii) a two-stage assessment (as in the GDPR), with a first preliminary assessment to determine the risk level and a second, full assessment for high-risk cases; (iii) high risk based on presumptions, as in Art. 35.3 GDPR, rather than confined to closed lists of high-risk cases; (iv) clear key parameters for the impact assessment, in contrast to the confusing approach of the AI Act (three types of assessment without common parameters).

There is also a greater emphasis on the role of participation, both in the public sector (prior public consultation) and in continuous evaluation, as well as on the role of expert advisors.

The Brazilian bill nevertheless leaves room for improvement. For instance, limiting the impact assessment to high-risk cases identified by AI providers fails to capture the contextual nature of risk, since providers cannot foresee all the contexts of use and the risks they entail. Hence, deployers should also perform impact assessments beyond the high-risk cases defined by providers. The proposal positively includes a section on AI liability (unfortunately decoupled from the AI Act in Europe), but the text could also address a general extension of the so-called state-of-the-art defence (with some mitigating options).