I recently had the pleasure of speaking with the Atlantic County Bar Association. Here are some of the key takeaways from my presentation:

Employees are “consumers” under the California Consumer Privacy Act. It requires:

  • A privacy notice for employees and applicants, which is required (and a good idea) in all jurisdictions.
  • Privacy rights such as access and deletion (which go beyond the rights available under labor laws).
  • Data processing agreements with third parties.
  • Breach reporting.
  • Data minimization/data retention limitation.

Profiling that may impact employment is addressed in all 19 US state privacy laws.

  • Profiling means assessing or drawing inferences about behavior through automated processing that produces “legal or similarly significant effects” or is used to make a “consequential decision.”
  • Employment decisions (such as those affecting employment opportunities or promotion) are significant.

If your processing falls under these provisions, you need to:

  • Conduct a DPIA (data protection impact assessment) on the potential risks to the employee from the processing. (The assessment may conclude that the processing cannot be carried out.)
  • Provide an opt-in or opt-out.

Additional requirements under the new CA draft regs on risk assessment and automated decision-making tools:

  • Assess whether the AI is fit for the purpose.
  • Provide a pre-use notice (with more explanation regarding logic and how the AI works).
  • Provide rights (e.g. opt out/access), but there are exceptions.
  • No retaliation.

In the EU:

GDPR:

  • All employees are protected.

Rights under the GDPR, with derogations (stricter requirements) under Member State laws:

  • Requirements to consult with works councils (also referenced in the new DOL AI memo).
  • Profiling that could result in candidates not seeing job ads may not be permissible on a legitimate-interest basis.

EU AI Act:

  • Goes into effect in 2026/27.
  • Divides obligations by role (developer and deployer). If you substantially modify the AI, you could be deemed the developer.
  • Main issues involve high risk uses.
  • Use in the workplace is high risk.
  • Some uses are prohibited, e.g.: emotion recognition in the workplace and educational institutions; social scoring based on social behavior or personal characteristics; and the use of AI to exploit the vulnerabilities of people (due to their age, disability, or social or economic situation).
  • You need to provide transparency, risk assessments, monitoring, training, and reporting.