This guidance has been prepared by the Confederation of European Data Protection Organisations’ AI and Data Working Group.
Artificial intelligence is not a new concept for DPOs and data protection professionals. Generative AI, however, is. When OpenAI’s ChatGPT launched in November 2022, the majority of data protection professionals had never heard of generative AI, and were certainly not concerned with such technologies in their day-to-day work.
Now, with ChatGPT in the hands of over 100m users globally, and many other providers such as Google Bard and Anthropic’s Claude entering the market, it has become an operational reality, and a necessity, for data protection professionals to deal with the consequences of generative AI tools being rapidly adopted within organisations. Whether these tools are used off the shelf or fine-tuned by organisations on their own data sets, they raise novel and as-yet unexamined data protection implications, all of which data protection professionals must rapidly come to terms with.
The aim of this paper is to guide data protection professionals through the maze of issues that are unfolding as these technologies gain rapid adoption in organisations. Amongst other key issues, this paper looks at data-sharing risks, accuracy of personal data, conducting DPIAs on generative AI tools, implementing data protection by design, selecting a lawful basis for training generative AI systems, optimising organisational structures, applying privacy-enhancing techniques, and handling data subject rights in the context of these technologies.
There will be no future without generative AI, and with data playing such a pivotal role in the training and operation of these systems, DPOs will play a central role in ensuring that both data protection and data governance standards are at the heart of these technologies.
You can download the paper here: