OpenAI used personal data to train ChatGPT without a valid legal basis for doing so. In addition, the organization concealed a data breach, was not transparent with users, lacked age verification, and provided inadequate mandatory information, according to the Italian privacy regulator GPDP. The regulator imposed a fine of fifteen million euros on OpenAI and ordered the organization to run a six-month information campaign. OpenAI is appealing the fine.
Earlier this year, the GPDP ruled that ChatGPT likely violates the GDPR. Last year, the regulator banned ChatGPT for a month, citing the unlawful collection of personal data to train the chatbot's algorithms and the failure to verify the age of minors. The Garante per la protezione dei dati personali (GPDP) ordered OpenAI, the chatbot's developer, to immediately stop processing the data of Italian users.
Based on the results of its factual investigation earlier this year, the GPDP stated that it had found evidence indicating that ChatGPT violates one or more provisions of the GDPR. OpenAI was then given time to respond. The GPDP has now completed its investigation and concludes that OpenAI has not complied with privacy legislation. In setting the fine, the regulator says it took the ChatGPT developer's 'cooperative attitude' into account. OpenAI expects revenue of $2.7 billion this year.