On the 31st of March, the Italian Data Protection Authority (Garante, “the Italian DPA”) issued a decision targeting OpenAI’s ChatGPT service, imposing a temporary ban on the processing of personal data of Italian residents. On the 12th of April, OpenAI was given until the 30th of April to comply with several data protection and privacy requirements. The Italian ban on ChatGPT raises critical regulatory questions concerning Large Language Models (LLMs), Artificial Intelligence (AI), the potential privacy and data protection risks of LLMs, and how different authorities will enforce “old” laws, such as the GDPR, and potentially regulate similar concerns further amidst the ongoing AI wave. On the 13th of April, the European Data Protection Board (EDPB) created a ChatGPT task force to investigate some of these concerns.
Interest in AI has reached new heights after the recent breakthrough of large-scale LLM-based AI services such as ChatGPT. These services have been trained on vast amounts of internet data and can, in some scenarios, both beat and trick humans at general tasks. ChatGPT, created by the US-based company OpenAI, can for instance comprehend context, generate coherent responses and write functioning computer code, and it is not only able to “pass the bar” to become a lawyer, but also performs better than the average law student on several different bar examinations.
However, ChatGPT has not entered the market without controversy. On the 22nd of March, one of the early founders of OpenAI, together with other prominent AI figures, called in an open letter to “… immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. The letter further warns that AI systems with human-competitive intelligence can pose profound risks to society and humanity.
Other concerns have also been raised, such as how personal data is processed in these systems and the potential risk of exposing children to such services. So, on the 31st of March the Italian DPA imposed a temporary ban on the processing of personal data of Italian residents in ChatGPT, thus making the service unavailable to Italian residents. This decision came only a few days after the letter signed by Elon Musk and others calling for a pause in AI development, which also stated that AI labs should be subject to regulatory oversight and be audited by independent experts. Furthermore, the letter calls on AI developers to work with policymakers to dramatically accelerate the development of robust AI governance systems. The Italian DPA did not reference this open letter, but it acknowledged “numerous media interventions regarding the functioning of the ChatGPT service” as a reason for the ban.
In its immediate temporary ban on processing, the Italian DPA highlights that no information is provided to users or data subjects whose data are collected by OpenAI. Furthermore, the Italian DPA claims that there appears to be no legal basis underpinning the massive collection and processing of personal data used to “train” the algorithms on which the platform relies.
The Italian DPA emphasizes that users were uninformed about the nature of information collected and processed by the service. Additionally, the authority expressed apprehension over the potential misuse of personal data to train the AI chatbot and questioned the corresponding legal foundation. The Italian DPA also found the absence of age verification for users under 13 and the possibility of exposing children to unsuitable content disconcerting.
However, the information provided by the Italian DPA has so far been scarce.
It also appears from the decision that the OpenAI team was not contacted prior to the ban. The enforcement process under the GDPR usually comprises a series of steps, including prior contact and information requests. The decision does not mention any such prior contact. Did the Italian DPA neglect to contact OpenAI before imposing the temporary ban? If so, this would raise important questions about how GDPR enforcement should be conducted in matters like this, as many enterprises could risk a temporary ban on their business without prior dialogue with the relevant data protection authority.
The Italian DPA has since held a videoconference meeting on the 5th of April with executives at OpenAI, including the CEO, Sam Altman. OpenAI has agreed to cooperate with the Italian DPA and has asked for the temporary ban on ChatGPT in Italy to be lifted so that it may continue processing the data of Italian data subjects.
OpenAI will have to comply by the 30th of April with the measures set out by the Italian DPA concerning transparency, the rights of data subjects, including users and non-users, and the legal basis of the processing for algorithmic training relying on users’ data. Only then will the Italian DPA lift its order that placed a temporary limitation on the processing of Italian users’ data in ChatGPT.
We are now witnessing discussions about crucial regulatory concerns surrounding the use of AI. The recent rapid technological developments have emphasized the need for additional governmental regulation. For instance, the CEO of OpenAI, Sam Altman, worries that developers working on similar generative AI tools will not implement sufficient security and safety limits in the current development race between AI providers, and has called for further governmental regulation to be put in place.
The EU already has several new pieces of legislation planned that will affect the regulatory landscape surrounding the use and development of AI tools and services. One of them is the EU proposal for an Artificial Intelligence Act, which was introduced back in April 2021. Once it enters into force, the AI Act will establish a risk-based framework that classifies AI technology into three risk categories: (1) prohibited AI, (2) high-risk AI and (3) all other types of AI.
For the main obligations in the AI Act to apply, the AI technology must be deemed “high-risk”. However, defining high-risk AI has been challenging, since unintended consequences may occur when AI is used in different contexts and environments. Note that the EU’s AI Act has not yet been adopted, and changes may still occur.
The release of ChatGPT indicates that it is still difficult to predict the capabilities of AI tools, and thus also their social and ethical risk potential. EU lawmakers have recently decided to go back to the drawing board and redefine the scope of the AI Act so that it classifies LLMs without human oversight as “high-risk AI”. This means that providers of AI services such as ChatGPT might need to comply not “only” with the GDPR, but also with further cybersecurity, risk management and transparency obligations once the AI Act enters into force. The result of the negotiations between the European Commission, Council and Parliament on the scope of high-risk AI is expected soon.
In the meantime, it will be interesting to observe whether the GDPR will be considered the main regulatory bottleneck for AI innovation or a necessary safeguard against AI harms. Depending on the viewpoint, the GDPR is likely to be considered both.