The New Jersey Attorney General recently issued guidance to clarify how the New Jersey Law Against Discrimination (LAD) applies to algorithmic discrimination.

Key points:

  • If your conduct is discriminatory without AI, it’s still discriminatory with AI. Intent to discriminate isn’t required.
  • You can be liable as a user of AI, not just as a developer of AI.
  • The LAD prohibits algorithmic discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability and other protected characteristics.
  • A violation can occur through disparate treatment, disparate impact, or failure to provide reasonable accommodations.

Disparate Treatment

Disparate treatment is conduct that treats a person differently because of their membership in an LAD-protected class, whether the treatment is intentional or results from a policy or practice that is discriminatory on its face, even absent discriminatory intent. With respect to automated decision-making tools, a covered entity engages in disparate treatment discrimination when it:

  • Designs or uses automated decision-making tools to treat members of a protected class differently.
  • Selectively uses automated decision-making tools to assess members of a protected class, e.g., using a tenant-screening algorithm only to evaluate Black prospective tenants.
  • Uses a tool that does not directly consider a protected characteristic but that makes recommendations based on a close proxy for one.
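The "close proxy" concern above can be made concrete with a simple statistical check. The sketch below uses hypothetical data and field names (ZIP code standing in for a protected characteristic is a commonly cited example; the guidance itself does not prescribe any particular test) to measure how well a model input predicts group membership:

```python
from collections import Counter, defaultdict

def proxy_strength(records, feature, protected_attr):
    """Fraction of records whose feature value correctly 'predicts' the
    protected attribute via the majority group for that value.
    1.0 means the feature perfectly determines the protected attribute,
    i.e. it is a near-perfect proxy."""
    by_value = defaultdict(list)
    for r in records:
        by_value[r[feature]].append(r[protected_attr])
    # For each feature value, count the records in its majority group.
    predicted = sum(Counter(groups).most_common(1)[0][1]
                    for groups in by_value.values())
    return predicted / len(records)

# Hypothetical applicant records: ZIP code nearly determines group here,
# so feeding ZIP to a screening model risks using a close proxy.
applicants = [
    {"zip": "07101", "group": "A"}, {"zip": "07101", "group": "A"},
    {"zip": "07101", "group": "A"}, {"zip": "07960", "group": "B"},
    {"zip": "07960", "group": "B"}, {"zip": "07960", "group": "A"},
]
score = proxy_strength(applicants, "zip", "group")  # 5 of 6 predicted
```

A score near 1.0 does not itself establish a violation, but it flags a feature worth scrutinizing before the tool is deployed.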

Disparate Impact

  • The use of these tools has a disproportionately negative effect on members of an LAD-protected class.
  • This is true even if policies and practices are not discriminatory on their face (facially neutral & not motivated by discriminatory intent).
  • For example, an automated decision-making tool that makes recommendations or contributes to decisions disproportionately harming members of an LAD-protected class violates the LAD unless use of the tool serves a substantial, legitimate, nondiscriminatory interest and there is no less discriminatory alternative.
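A common screening heuristic for the disproportionate effect described above is to compare selection rates across groups, as in the EEOC's "four-fifths" rule (an assumption for illustration; the NJ guidance does not prescribe any numeric threshold). A minimal sketch with hypothetical numbers:

```python
def adverse_impact_ratios(outcomes):
    """Selection rate of each group divided by the highest group's rate.
    outcomes: dict mapping group -> (selected, total).
    Under the EEOC four-fifths heuristic, a ratio below 0.8 is a
    common red flag warranting closer review."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening results per group: (selected, total applicants).
results = {"A": (50, 100), "B": (30, 100)}
ratios = adverse_impact_ratios(results)  # B's rate is 60% of A's
```

Here group B's ratio of 0.6 falls below the 0.8 heuristic, which would prompt asking whether the tool serves a substantial, legitimate, nondiscriminatory interest and whether a less discriminatory alternative exists.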

Reasonable Accommodations

  • A covered entity violates the LAD when its use of AI precludes or impedes the provision of reasonable accommodations.
  • For example, relying on recommendations from an AI tool whose training data did not include individuals who use an accommodation; such a tool may not recognize that an accommodation is possible, or may penalize individuals who have or need one.
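The training-data gap described above can be surfaced with a basic coverage check. The sketch below uses a hypothetical record format and field name; it simply measures how much of the training data involves individuals who used an accommodation:

```python
def accommodation_coverage(records, flag="used_accommodation"):
    """Fraction of training records involving an individual who used an
    accommodation ('used_accommodation' is a hypothetical field name).
    Near-zero coverage warns that a tool trained on this data may not
    recognize accommodations, or may penalize those who need them."""
    return sum(1 for r in records if r.get(flag)) / len(records)

# Hypothetical training set: only 2 of 100 records involve an accommodation.
training = ([{"used_accommodation": False}] * 98
            + [{"used_accommodation": True}] * 2)
coverage = accommodation_coverage(training)  # 0.02 -> under-representation
```

Low coverage is only a warning sign, not a legal conclusion, but it identifies tools whose recommendations should not be relied on without a process for considering accommodations.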