The European Data Protection Supervisor (EDPS) has submitted comments to FTC Rulemaking on commercial surveillance.
Here are some key takeaways.
IoT Devices:
- It is important that data from the Internet of Things are only used for purposes that are compatible with the original purpose of the processing, and that these purposes are all known to the persons concerned.
- The design of easy-to-use devices should not come at the expense of proper and clear information about the data processing taking place.
- Aside from providing information in the product packaging, manufacturers should make sure that users can access such information if the packaging is no longer available (for instance, by printing a QR code on the body of the device linking to the product’s privacy policy).
- Tracking users’ behavior and activities without their knowledge and consent is not permitted.
Prohibited Surveillance Practices:
- Commercial surveillance practices that undermine the validity of consent (so-called “dark patterns”), or that otherwise, due to the systemic and pervasive nature of online surveillance (e.g., online behavioral advertising), render the data subject’s consent neither effective nor meaningful.
- The EDPS considers that online behavioral advertising should be regulated more strictly in favor of less intrusive forms of advertising that do not require tracking users’ interaction with content, notably contextual advertising. The DSA is a first step towards this goal, since it lays down a ban on advertising based on profiling that uses special categories of data or minors’ data.
- The Proposal for the review of the Consumer Credit Directive offers indications of the types of personal data that should not be used to assess creditworthiness: “[P]ersonal data, such as personal data found on social media platforms or health data, including cancer data, should not be used when conducting a creditworthiness assessment.” (Per the EDPS, this should extend to all Article 9 data.)
- Personal data collected and processed in the context of creditworthiness assessments should not be used for marketing purposes.
Dark Patterns:
The use of dark patterns violates the principle that personal data must be processed fairly and in a transparent manner in relation to the data subject. In addition to breaching the GDPR, these practices are also expressly prohibited by the DSA.
Automated Decision-Making:
The following AI systems should be prohibited:
- Social scoring.
- Any use of AI for automated recognition of human features in publicly accessible spaces, including gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals.
- AI systems categorizing individuals from biometrics (for instance, from face or voice recognition) into clusters according to ethnicity, gender, political or sexual orientation, or other grounds for discrimination.
- AI systems inferring the “emotions” of natural persons (so-called emotion categorization systems), except for well-specified use cases, namely health or research purposes with appropriate safeguards in place. These must be subject to all data protection conditions and limits, including purpose limitation.
There should also be a mandatory third-party audit of high-risk AI systems.
Recommender Systems:
- Legislators should consider a ban on online targeted advertising based on pervasive tracking, and should restrict the categories of data that can be processed for such advertising methods.
- There should be a full ban on micro-targeting for political purposes, i.e., targeting an individual or a small group of individuals with political messages according to perceived preferences or interests that their online behavior may reveal.
- The use of targeted advertising based on pervasive tracking for political purposes should be prohibited.