The U.S. Department of Labor recently released new principles on the ethical use of artificial intelligence.

Here are some of the practices we are working on with employers and other clients, including technology developers.

Employers:

  • Involve employees early and regularly in the adoption and use of AI.
  • Bargain in good faith with employee unions on use of AI and electronic monitoring.
  • Avoid collection, retention, and other handling of worker data that is not necessary for a legitimate and defined business purpose.
  • Do not use AI systems that undermine, interfere with, or have a chilling effect on labor organizing or protected activities.
  • Do not use AI to reduce wages, break time, or other benefits that workers are legally due.
  • Establish governance structures to ensure consistency.
  • Provide workers and their representatives advance notice and appropriate (conspicuous) disclosure of the purpose of the AI system, including when it will be used, what it covers, how workers will engage with it, and how it will be used to monitor or inform significant employment decisions.
  • If AI or algorithmic recommendations are a principal basis for significant employment decisions, inventory them and provide a meaningful and plain language explanation of the AI system’s role in the decision and the data relied on to make the decision.
  • Provide appropriate training about AI systems in use to a broad range of employees.
  • Ensure meaningful human oversight of any significant employment decisions supported by AI systems.
  • Ensure, through procedures, that workers and their representatives can request, view and submit corrections to data and file disputes without fear of retaliation.
  • Secure and protect any data about workers from internal and external threats.
  • Don’t share workers’ data outside your business and your agents (including with consultants, AI auditors, and M&A counterparties) without workers’ freely given, informed, and specific consent (unless required by law).
  • Routinely monitor and analyze whether the use of the AI system is causing a disparate impact or disadvantaging individuals with protected characteristics and, if so, take steps to reduce the impact or use a different tool.
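As an illustration of the monitoring point above, one common screening heuristic is the four-fifths rule from EEOC guidance: flag any group whose selection rate falls below 80% of the highest group's rate. The sketch below is a minimal, hypothetical example of that check; the group names and counts are assumptions for illustration, not part of the DOL principles.

```python
# Minimal sketch of a disparate-impact check using the four-fifths rule.
# Group labels and counts are illustrative assumptions only.

def selection_rate(selected, total):
    """Fraction of applicants in a group who received a favorable outcome."""
    return selected / total if total else 0.0

def impact_ratios(outcomes):
    """Compare each group's selection rate to the highest group's rate.

    outcomes: dict mapping group -> (selected_count, total_count).
    Returns dict mapping group -> ratio of its rate to the top rate.
    A ratio below 0.8 is a conventional flag for possible disparate impact.
    """
    rates = {g: selection_rate(s, t) for g, (s, t) in outcomes.items()}
    top = max(rates.values())
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

# Hypothetical screening results from an AI resume filter.
results = {"group_a": (50, 100), "group_b": (30, 100)}
ratios = impact_ratios(results)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b's rate (0.30) is 60% of group_a's (0.50)
```

A real audit would go further (statistical significance testing, intersectional groups, outcome definitions), but a routine check of this shape can surface when a tool's recommendations need closer review or replacement.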

Developers:

  • Establish standards so that any AI products brought to market protect civil rights, mitigate risks to safety (like error rates and bias), and meet performance requirements.
  • Conduct risk assessments and independent audits that include the intended purpose for the AI; expected benefits; error rates; risks of discrimination or bias; and impacts on rights and accessibility.
  • Design worker-impacting AI systems to allow for employers’ ongoing monitoring and human oversight.
  • Design and build AI systems with safeguards for securing and protecting data, by default, about workers.
  • Establish capabilities for employees to make consent, access and control decisions in a complex data ecosystem.