The U.S. Department of Labor and the White House recently released a framework designed to protect U.S. workers from adverse consequences when artificial intelligence systems are deployed in the workplace.
The framework sets forth eight principles for the development and deployment of AI systems in the workplace, some of which overlap with or reinforce requirements in existing privacy law and Federal Trade Commission guidance.
- Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input into the design, development, testing, training, use and oversight of AI systems used in the workplace. This echoes the data protection impact assessment (DPIA) requirements under the GDPR and several U.S. state privacy laws.
- Ethically Developing AI: AI systems should be designed, developed and trained in a way that protects workers.
- Establishing AI Governance and Human Oversight: The human-oversight element parallels the opt-out rights for automated decision-making under privacy laws.
- Ensuring Transparency in AI Use (for both applicants and employees): This parallels the notice requirements of many privacy laws.
- Protecting Labor and Employment Rights: Including the right to organize, as well as wage, health, safety and anti-discrimination protections.
- Using AI to Enable Workers
- Supporting Workers Impacted by AI
- Ensuring Responsible Use of Worker Data: Workers’ data collected, used or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly. This parallels the data minimization principles governing collection and use, as well as the “compatible purpose” limitation, set forth in privacy laws.