Data Protection Weekly 41/2023

Oct 16, 2023

European Union

CJEU: Adequacy decision for the EU-U.S. Data Privacy Framework upheld

In case T-533/23, the Court of Justice of the European Union (CJEU) rejected an application for interim measures to suspend the adequacy decision for the EU-U.S. Data Privacy Framework. The decision responds to a filing by Philippe Latombe, a Member of the French Parliament, who had contested the data transfer agreement and its subsequent adequacy decision. The court held that Latombe failed to demonstrate that serious and irreversible harm could arise from the agreement. However, Latombe's main legal challenge to the adequacy of the EU-U.S. Data Privacy Framework is still pending before the CJEU. You can read the full order of the court here (in French).

EDPS: Opinion on EU liability proposals for defective products and AI systems

The European Data Protection Supervisor (EDPS) has released an Opinion on the European Commission’s recent proposals for directives on liability for defective products and on adapting civil liability rules to artificial intelligence (AI). The EDPS fully supports the proposals’ objective of ensuring that victims of damage caused by AI enjoy the same level of protection as those harmed by other means. The Opinion includes specific recommendations, such as extending equal protection to damage caused by AI systems deployed by EU institutions as well as by private or national entities. Furthermore, the EDPS calls for the procedural safeguards to apply universally, irrespective of an AI system’s risk classification. The Opinion also emphasises the need for the proposals to be consistent with existing Union data protection law. Finally, the EDPS suggests shortening the review periods in the proposals and considering additional measures to ease the burden of proof for victims. You can read the Opinion here.

European Commission: New version of AI procurement clauses available for public authorities

The European Commission has finalised its model contractual AI clauses, designed to guide public authorities in the procurement of AI technologies. Supported by various EU bodies, including the Directorate-General for Communications Networks, Content and Technology (DG CNECT) and the Directorate-General for Internal Market, Industry, Entrepreneurship and SMEs (DG GROW), the clauses aim to promote responsible, transparent, and accountable use of AI. Developed and peer-reviewed with input from over 40 experts, the clauses contain specific provisions aligned with the proposed AI Act; they do not cover obligations or requirements arising under other applicable legislation, such as the General Data Protection Regulation. They are designed to be adapted to specific contracts and are not a complete contractual framework. The final version also includes revisions reflecting the latest developments in AI regulation and introduces new data-sharing regimes. You can read the full article here and download the clauses here.

European Commission: Request for information sent to X under DSA

The European Commission has formally issued a request for information to X under the Digital Services Act (DSA). This action follows allegations that the platform is spreading illegal content and disinformation, including terrorist content and hate speech. As a Very Large Online Platform, X is obliged to comply with DSA provisions, including the mitigation of risks such as illegal content, disinformation, and threats to fundamental rights. The Commission is specifically examining X’s policies on illegal content notifications, complaint handling, and risk assessments. X must respond by 18 October on issues related to its crisis response protocol and by 31 October on other matters. Based on its assessment of X’s replies, the Commission could initiate formal proceedings. Failure to comply could result in periodic penalty payments. You can read the press release here.

National Authorities

Sweden: IMY submits findings on DPO roles to EDPB

The Swedish data protection authority (IMY) recently submitted its findings to the European Data Protection Board (EDPB) as part of a coordinated action initiated earlier this year. The action aims to evaluate the role and resources of Data Protection Officers (DPOs) under Articles 37-39 of the GDPR. IMY inspected 50 organisations, asking numerous questions about DPOs’ qualifications, tasks, and status. Four key problem areas were identified: potential conflicts of interest in DPO roles, inconsistency in the time DPOs dedicate to further training, variations in the resources allocated to DPOs, and differing interpretations of DPO responsibilities. IMY proposes possible solutions and will continue deeper scrutiny of selected organisations. An upcoming EDPB report will compile observations from various European data protection authorities, including IMY. You can read the press release here.

France: CNIL releases guidance for AI and data protection

On 11 October 2023, France’s data protection authority (CNIL) released its first set of practical guidelines on the development of artificial intelligence (AI) systems. The guidance aims to show that the General Data Protection Regulation (GDPR) supports an innovative yet responsible approach to AI. The CNIL has launched a public consultation and met with major stakeholders to understand their need for legal clarity. The guidance addresses key GDPR principles such as purpose limitation, data minimisation, and limited data retention, clarifying how they can be adapted to AI development. The CNIL emphasises that AI can be developed responsibly without compromising individual liberties, provided certain conditions are met. The guidance is part of the CNIL’s broader plan to support innovation in AI while safeguarding privacy. You can read the press release here, the full guidance here, and answer the public consultation here (all in French).

Spain: AEPD updates breach advisory and breach communication tools

The Spanish data protection authority (AEPD) has updated its breach advisory and breach communication tools to better assist organisations with data security incidents. The breach advisory tool helps data controllers decide whether a personal data breach must be notified to the supervisory authority, while the breach communication tool helps fulfil the obligation to inform affected individuals. Both tools now allow users to download a full report, incorporating responses and recommendations, which is useful for internal documentation. These updates were implemented following consultations with public sector data protection officers. The enhancements come at a crucial time: the AEPD received over 2,100 data breach notifications last year. The tools aim to facilitate compliance with Articles 33 and 34 of the General Data Protection Regulation (GDPR), thereby improving the management of data breaches. You can read the press release here (in Spanish).

UK: Court of Appeal backs ICO on subject access request complaints litigation

The UK Court of Appeal recently confirmed that the Information Commissioner’s Office (ICO) acted lawfully in handling a subject access request complaint filed by Mr Ben Delo against Wise Payments Limited. Upholding a prior High Court ruling, the decision emphasises that the ICO has broad discretion to investigate and comment on complaints without necessarily determining whether an infringement has occurred. The case notably addresses the extent to which the ICO is obliged to investigate every complaint it receives. John Edwards, the Information Commissioner, highlighted that the ICO received over 33,500 data protection complaints and issued almost 40,000 outcome decisions in the 2022/23 fiscal year. The ICO welcomed the ruling, stating that it allows the office to prioritise investigations appropriately based on the merits of each case and the likely outcome of further inquiry. You can read the press release here.

UK: ICO opens applications for 2024 Regulatory Sandbox

The Information Commissioner’s Office (ICO) is inviting organisations to express their interest in participating in its Regulatory Sandbox for the year 2024. Organisations have until 31 December 2023 to apply for a spot in the Sandbox, which offers bespoke regulatory advice and support to entities tackling complex data protection challenges. The ICO is particularly interested in initiatives focussed on biometric processing, emerging technologies, or exceptional innovations. More than 20 organisations have already benefited from the programme by receiving access to ICO expertise, gaining insights into data protection frameworks, and elevating consumer trust. Stephen Almond, ICO Executive Director of Regulatory Risk, expressed excitement over the new technological developments that the next cohort will bring. The Sandbox aims to not only enhance compliance but also to inform future ICO guidance and contribute to the UK’s ambition to be an innovative economy. You can read the press release here.

Sweden: Proposal to amend credit reporting law following IMY’s legal opinion

In a recent legislative proposal, the Swedish government has put forward an amendment that would permit credit reporting companies to include information about guardianship in their reports. Guardianship refers to cases where someone is legally responsible for another person, often because that person cannot make decisions for themselves. The Swedish data protection authority (IMY) had previously concluded in a legal opinion that information about guardianship, as described in Chapter 11 of the Parental Code, constitutes sensitive personal data. This had implications for companies engaged in credit reporting, as handling sensitive personal data in such operations is generally prohibited. The proposed change is expected to come into force on 1 January 2024. You can read the press release here (in Swedish).

Italy: Garante issues guidelines on the use of AI in healthcare

The Italian data protection authority (Garante) has released a set of ten guidelines for implementing artificial intelligence (AI) in healthcare services nationwide. The guidelines emphasise transparency in decision-making processes, human-supervised automated decisions, and non-discriminatory algorithms. According to the Garante, patients should be made aware of any automated decision-making processes, especially in clinical or healthcare policy settings, and should receive clear information on the logic behind these decisions. Furthermore, human supervision should be integrated to allow healthcare personnel to validate or override AI-based outcomes. The Garante also urges service providers to use reliable AI systems, periodically review their effectiveness, and implement appropriate technical and organisational measures. This aims to minimise the risk of inaccurate or incomplete data affecting health outcomes or leading to discriminatory effects. The Garante also highlights that AI processing of health data should be governed by specific legal frameworks that protect individual rights and freedoms. You can read the press release here and download the full guidelines here (both in Italian).

Hacker advertises sale of ‘23andMe’ data

In a concerning development, a hacker has claimed to possess millions of “pieces of data” taken from the popular family genetics website 23andMe. The data is being advertised for sale on an online forum frequented by digital thieves. In a statement, 23andMe clarified that its internal systems have not been breached. However, an unspecified amount of “customer profile information” appears to have been gathered through access to individual 23andMe accounts. The company suggests that the hacker may have employed a technique known as ‘credential stuffing’, using passwords stolen from other sites to access 23andMe accounts. To enhance account security, the firm recommends enabling two-factor authentication. It is unclear at this time how many records have been compromised, and the hacker has provided inconsistent information about the extent of the breach. You can read the full article here and 23andMe’s statement here.

Bundeskartellamt gives Google users better data control

As a result of proceedings conducted by Germany’s Federal Cartel Office (Bundeskartellamt) under the recently introduced Section 19a of the German Competition Act, Google has committed to giving users greater control over their data. Andreas Mundt, President of the Bundeskartellamt, stated that users will have better choices about how their data is used across various Google services, contributing to a more competitive and fair market. The commitments address scenarios in which Google seeks to combine or cross-use personal data across multiple services. Users will now be offered clear and unambiguous choice options that must be designed without manipulative elements, commonly referred to as “dark patterns”. The Bundeskartellamt’s decision complements existing efforts under the Digital Markets Act (DMA), which came into effect in May 2023 and identified Alphabet Inc., Google’s parent company, as a “gatekeeper”. Both European and German authorities will continue to coordinate their regulatory efforts. You can read the full document here.

Italy: Garante imposes €90,000 fine over illicit SIM card activation

The Italian data protection authority (Garante) has fined a retailer €90,000 for unlawful processing of personal data. A resident of Bergamo province received notifications about two prepaid SIM cards activated in his name without his consent. After reporting the matter to the judicial authorities, he alerted the Garante. The investigation found that a barely legible photocopy of the individual’s ID had been used for the activation in Naples, and that a non-existent but formally correct IBAN was linked to the accounts. The company had failed to follow proper identification protocols, did not cooperate in the investigation, and provided no explanation of how it obtained the ID copy. Notably, the telecom company associated with the retailer was not found to have violated any regulations. You can read the press release here (in Italian).

Italy: Employees’ right to access geolocation data upheld by Garante

The Italian data protection authority (Garante) has levied a €20,000 fine against a company responsible for reading gas, water, and electricity meters for failing to adequately respond to three employees’ data access requests. The employees had sought details about the data used to calculate their mileage reimbursements and hourly wages, including information from a company-provided smartphone with a geolocation system. Despite the specificity of their requests, the company only outlined the general purposes and methods of data usage, neglecting to disclose the actual GPS data collected. The Garante found the company’s response unsatisfactory and ordered it to provide the requested geolocation data. The Garante also noted that, even if the company felt it could not fully comply, it should have given the employees specific reasons for this, reinforcing their right to lodge a complaint. You can read the press release here (in Italian).