OpenAI’s efforts to reduce the number of false outputs produced by ChatGPT have not fully met European Union data rules.
A task force from the EU’s privacy watchdog said: “While the measures taken to comply with the transparency principle help prevent misinterpretation of ChatGPT’s output, they are insufficient to meet the data accuracy principle.”
The task force was set up last year by Europe’s national privacy watchdogs after Italy’s data protection authority raised concerns about the widely used AI service.
OpenAI did not immediately respond to a request for comment from Reuters.
Investigations by national privacy watchdogs in several EU member states are ongoing.
The report noted it is not yet possible to provide a complete description of the findings.
The report’s conclusions represent a “common denominator” among national authorities.
Data accuracy is a fundamental principle of the EU’s data protection rules.
The report highlighted that, owing to the probabilistic nature of ChatGPT’s system, its current training approach can produce biased or fabricated outputs.
The report added: “Furthermore, end users are likely to consider ChatGPT’s outputs as factually accurate, including information about individuals, regardless of their actual accuracy.”