AI Working Group: New guidance “Generative AI: The Data Protection Implications” released
The Confederation of European Data Protection Organisations (CEDPO) has released a new guidance document on the data protection implications of generative AI technologies. Prepared by CEDPO’s AI and Data Working Group, the document aims to help data protection professionals navigate the complex landscape of generative AI, particularly in the context of emerging tools such as OpenAI’s ChatGPT, Google’s Bard, and Anthropic’s Claude. The guidance covers a range of pertinent issues, including data-sharing risks, the accuracy of personal data, data protection impact assessments (DPIAs), and data subject rights. With generative AI technologies now in the hands of over 100 million users globally, the guidance sheds light on novel data protection challenges that professionals must urgently address. You can download the full paper here.
EDPS: Opinion on the EU’s proposal for an Artificial Intelligence Act
The European Data Protection Supervisor (EDPS) has published its own-initiative Opinion 44/2023 on the Proposal for the Artificial Intelligence Act as the legislation approaches the final stages of discussions within the EU. The EDPS, led by Wojciech Wiewiórowski, aims to clarify its forthcoming responsibilities as the designated authority for overseeing AI systems in EU Institutions, Bodies, Offices, and Agencies (EUIs). The Opinion calls for explicit prohibitions on AI systems that pose unacceptable risks to individuals and their fundamental rights. It also advocates for the EDPS to have explicit authority to receive complaints related to infringements of the AI Act. You can read the press release here and download the full Opinion here.
EDPS: Seminar highlights concerns on CSAM proposal
The European Data Protection Supervisor (EDPS) recently hosted a seminar discussing the Child Sexual Abuse Material (CSAM) proposal. The seminar addressed concerns from various stakeholders, including EU data protection authorities, legal experts, and child protection organisations. While the CSAM proposal aims to fight child sexual abuse online and offline, it has been criticised for its potential ineffectiveness and risks to data protection. Experts note that the detection measures could be easily circumvented and may produce false positives. Moreover, the technical requirements to implement these measures could undermine end-to-end encryption and thus user privacy. The seminar concluded that, in its current form, the CSAM proposal risks leading to a “surveillance society”, which not only affects the exercise of fundamental rights but also deeply interferes with the rights of children and young people. You can read the seminar’s briefing note here.
CJEU: Patient has the right to obtain a first copy of his or her medical records free of charge
In Case C-307/22, the Court of Justice of the European Union (CJEU) addressed the issue of charging patients for copies of their medical records. The case originated in Germany, where a patient sought a copy of his dental records, possibly with a view to holding the dentist liable for alleged errors. The dentist requested payment for the copy, as permitted by German law. The German Federal Court of Justice referred the matter to the CJEU for a preliminary ruling, specifically seeking an interpretation of the General Data Protection Regulation (GDPR). The CJEU concluded that, under the GDPR, a patient has the right to obtain a first copy of their medical records free of charge. The dentist, regarded as the controller of the patient’s personal data under the GDPR, is obliged to provide this first copy without the patient having to justify the request. National laws cannot override this by making the patient bear the cost. You can read the press release here and the full Judgment here.
European Commission: First transparency reports under the DSA published
As part of their obligations under the Digital Services Act (DSA), the first seven Very Large Online Platforms and Search Engines have published their inaugural transparency reports. These platforms include Amazon, LinkedIn, TikTok, Pinterest, Snapchat, Zalando, and Bing. The remaining designated platforms have until 6 November to comply. These reports aim to shed light on content moderation practices for the benefit of citizens, researchers, and regulators, thereby enhancing public scrutiny and accountability. The reports must include detailed information about content moderation, such as the number of user notices and official orders received and the accuracy of automated systems. The qualifications and linguistic expertise of content moderation teams must also be disclosed. Platforms with fewer than 45 million users, as well as intermediary services, will need to publish annual transparency reports starting in February 2024. The DSA further allows the Commission to establish templates and other specifics for these reports. You can read the press release here.
Sweden: IMY issues opinion against biometric passport register
The Swedish data protection authority (IMY) has issued an opinion against a proposal in a government-commissioned report focused on biometrics for crime-fighting. One key proposal suggests converting the existing Swedish passport register into a biometric database that would be accessible to law enforcement agencies. IMY argues that this transformation would represent a substantial change and pose risks to individual privacy. It claims that implementing such a register would contravene both the Swedish constitution and EU law. IMY also highlights the sensitive and permanent nature of biometric data, cautioning against its undifferentiated use for people neither suspected nor convicted of crimes. Concerns extend to the potential for abuse and discrimination, particularly when considered alongside expanded capabilities for facial recognition and public surveillance. You can read the press release here and the full opinion here (both in Swedish).
Denmark: Datatilsynet’s new initiative for organisation-specific data breach guidance
The Danish data protection authority (Datatilsynet) has launched a new initiative to provide organisation-specific guidance immediately after receiving notifications of personal data breaches. The move aims to offer quick and practical assistance to organisations dealing with such breaches, which can range from isolated human errors to extensive hacker attacks affecting thousands of individuals. The guidance will cover specific issues like ransomware, password security, logging, and phishing. The initiative is designed to provide immediate assistance, so organisations do not have to search for relevant guidance but receive it directly in their inbox. Telephone consultations are also available to clarify factual aspects of each case. The Datatilsynet plans to evaluate and expand this initiative over time, extending the range of topics for which guidance can be offered. You can read the press release here (in Danish).
Norway: Meta takes legal action against Datatilsynet over behaviour-based marketing ban
Meta Platforms, Inc., the parent company of Facebook and Instagram, has taken legal action against the Norwegian data protection authority (Datatilsynet). The case revolves around Datatilsynet’s ban on behaviour-based marketing practices on Facebook and Instagram. The Datatilsynet imposed this time-limited ban in July, on the grounds that monitoring user behaviour for marketing purposes is illegal. Since Meta did not comply with the Datatilsynet’s decision, a daily compulsory fine of NOK 1 million (equivalent to €84,700) was levied starting 14 August. Meta contends that both the ban and the fine are invalid and has asked the Oslo District Court to annul them. The trial is expected to take place in 2024. This follows a previous court hearing this autumn, at which Meta’s request for a temporary injunction against the Datatilsynet’s decision was denied. You can read the press release here (in Norwegian).
UK: ICO releases insights on balancing retail security and data protection
In a new blog post, Melissa Mathieson, the ICO’s Director of Regulatory Policy Projects, discusses the rise of shoplifting in the UK and how retailers can use data protection laws to counter it. The post highlights that the use of technology to share criminal offence data, such as images, for crime prevention is permissible, as long as the sharing is necessary and proportionate. Retailers are advised to limit access to such data to a specific set of individuals who need it to prevent and detect crimes such as shoplifting. The ICO encourages businesses to adopt a data protection by design approach, considering privacy issues both at the design stage and throughout the system’s lifecycle. You can read the full article here.
BEUC: Concerns raised over EU’s approach to regulating generative AI
The European Consumer Organisation (BEUC) is expressing concerns about the European Union’s impending rules on generative artificial intelligence (AI) systems, such as ChatGPT and Bard. The organisation warns that the regulatory approach under consideration could be weak and unclear, resulting in difficulties for regulators, consumers, and businesses. There is a particular worry that only AI systems developed by large companies will be subject to stringent regulations, potentially leaving a wide range of systems with only weak transparency requirements. Ursula Pachl, Deputy Director General of BEUC, highlights the importance of robust regulation to protect consumers from various risks, including manipulation and disinformation. The call for stronger oversight comes as the EU institutions are set to have pivotal discussions on the world’s first comprehensive rules for AI systems. You can read the full article here.
Italy: Garante fines Axpo Italia Spa €10 million for activating unsolicited contracts with inaccurate data
The Italian data protection authority (Garante) has imposed a €10 million fine against Axpo Italia Spa for activating unsolicited electricity and gas contracts using inaccurate and outdated customer information. The decision followed numerous user complaints about unknown activations, often discovered through letters announcing the closure of existing contracts or reminders for unpaid invoices. Axpo relied on a network of approximately 280 door-to-door sellers to acquire these contracts, without implementing sufficient verification tools or procedures to ensure data accuracy. This failure resulted in over 5,000 users having their personal data unlawfully processed. Furthermore, the company’s database contained 2,462 contract proposals where the same email address was used more than five times. Axpo Italia Spa has been ordered to adopt a range of corrective measures to comply with Italian and European data protection laws. You can read the full article here (in Italian).
Italy: Garante fines online university for sending promotional text messages without consent
An online university has been fined €75,000 by the Italian data protection authority (Garante) for sending promotional text messages without recipient consent. The Garante also mandated that the university confirm its technical and organisational measures to ensure valid consent for data processing, particularly when outsourcing advertising to third parties. The university has been prohibited from using illicitly processed data. The action follows citizen complaints about unsolicited messages and, in one case, promotional calls persisting for six years despite objections. The university violated EU data protection rules by ignoring the rights of recipients and failing to address their complaints. The Garante also noted the university’s uncooperative behaviour, including neglecting to respond to official requests and wasting public resources, which influenced the fine’s amount. As a corrective measure, the university must establish appropriate data retention periods and train staff to respond to data requests promptly. You can read the full article here (in Italian).
Austria: DSB concludes investigation into processing of personal data related to Google Fonts
The Austrian data protection authority (DSB) has recently concluded an investigation under Article 58(1)(b) of the GDPR into Google Fonts, following an increase in inquiries about legal “warnings.” A lawyer had sent letters to various companies, asking them to acknowledge damage claims and issue cease-and-desist declarations for using Google Fonts. The investigation assessed the legal and technical aspects of how personal data is processed within this product. The DSB clarified that data transmission to Google servers occurs only when Google Fonts are not locally integrated on a company’s server. It also stated that IP addresses and HTTP headers are processed, but not for advertising purposes. However, the DSB pointed out that Google has not fully complied with the GDPR’s information obligations, especially since IP addresses can potentially be personal data. You can read the press release here (in German).
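The distinction the DSB draws is that data flows to Google only when a page loads fonts from Google's servers rather than serving locally hosted copies. As a purely illustrative sketch (not part of the DSB's decision; the function name and sample markup below are assumptions), a website operator could check a page's HTML for references to Google's font-delivery domains:

```python
# Illustrative check for remotely loaded Google Fonts.
# If a page references Google's font-delivery domains, the visitor's
# browser requests the fonts from Google's servers, transmitting the
# visitor's IP address and HTTP headers in the process.

GOOGLE_FONT_HOSTS = ("fonts.googleapis.com", "fonts.gstatic.com")

def uses_remote_google_fonts(html: str) -> bool:
    """Return True if the page loads fonts from Google's servers
    rather than from locally hosted (self-served) files."""
    return any(host in html for host in GOOGLE_FONT_HOSTS)

# Hypothetical examples: remote loading vs. local integration.
remote_page = '<link href="https://fonts.googleapis.com/css2?family=Roboto" rel="stylesheet">'
local_page = '<link href="/static/fonts/roboto.css" rel="stylesheet">'
```

Self-hosting the font files, as in the second example, keeps font requests on the operator's own server and avoids the data transmission the DSB describes.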