The European Data Protection Board (EDPB) recently issued an opinion on AI models, clarifying the consequences that unlawful processing of personal data during the development phase of an AI model may have for the subsequent processing or operation of that model.
Possible remedies: Up to and including model deletion
Supervisory authorities (SAs) may impose:
- A fine.
- Temporary limitation on the processing.
- Erasure of part of the dataset that was processed unlawfully.
- Deletion of the data of certain data subjects, either ex officio or at the request of the individuals concerned.
- Erasure of the whole dataset used to develop the AI model and/or of the AI model itself, depending on the facts and having regard to the proportionality of the measure (e.g. the possibility of retraining).
- In choosing a measure, SAs will consider, among other elements, the risks raised for data subjects, the gravity of the infringement, the technical and financial feasibility of the measure, and the volume of personal data involved.
The developer's unlawful processing may have consequences for the deployer (depending on the potential risks to individuals).
Each controller should ensure the lawfulness of the processing it conducts and be able to demonstrate it:
- If the development and deployment phases involve separate purposes, the lack of a legal basis for the initial processing does not always taint the lawfulness of the subsequent processing; this is assessed case by case.
- Where the legal basis for the subsequent processing is legitimate interest, however, the lack of a legal basis for the original processing may negatively affect that legitimate-interest assessment.
- As part of its accountability obligations, the controller deploying the model must conduct an appropriate assessment to ascertain that the AI model was not developed by unlawfully processing personal data (e.g. checking the source of the data, especially where a court decision on it exists).
- The level of assessment expected of the controller, and the level of detail expected by SAs, may vary depending on factors such as the type and degree of risks that the processing in the AI model during its deployment raises for the data subjects whose data was used to develop the model.
- The deployer cannot blindly rely on an EU declaration of conformity filed under the AI Act by the developer of the model, although SAs will take such a declaration into account.
Where a controller unlawfully processes personal data to develop the AI model but the model is anonymised before deployment, the further processing may not be tainted:
- SAs can still enforce against the developer for the initial unlawful processing.
- If the model is truly anonymous, its subsequent operation does not entail the processing of personal data, so the GDPR does not apply to the deployment.
- BUT only if it is genuinely anonymous: a mere assertion that the model is anonymous is not enough to exempt it from the application of the GDPR.