In the wake of the UK A-Level algorithm fallout, the U.S. National Institute of Standards and Technology (NIST) has published a draft report for public comment, Four Principles of Explainable Artificial Intelligence (NISTIR 8312).
“AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why,” said NIST electronic engineer Jonathon Phillips, one of the report’s authors.
The four principles for explainable AI are:
- Explanation: AI systems should deliver accompanying evidence or reasons for all their outputs.
- Meaningful: systems should provide explanations that are understandable to individual users.
- Explanation accuracy: the explanation should correctly reflect the system's process for generating the output.
- Knowledge limits: the system should operate only under conditions for which it was designed, or when it reaches sufficient confidence in its output.
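The fourth principle, declining to act outside design conditions or below a confidence threshold, can be sketched in code. This is a minimal illustration, not from the NIST report: the function name, labels, and 0.8 threshold are all assumptions chosen for the example.

```python
# Illustrative sketch of a confidence-gated prediction (knowledge limits).
# All names and the 0.8 threshold are hypothetical, not from NISTIR 8312.

def predict_with_knowledge_limits(probabilities, labels, threshold=0.8):
    """Return (label, explanation), abstaining when confidence is too low."""
    confidence = max(probabilities)
    label = labels[probabilities.index(confidence)]
    if confidence < threshold:
        # Fourth principle: operate only with sufficient confidence.
        return None, f"Abstaining: confidence {confidence:.2f} is below {threshold}"
    # First principle: accompany the output with supporting evidence.
    return label, f"Predicted '{label}' with confidence {confidence:.2f}"

print(predict_with_knowledge_limits([0.55, 0.45], ["approve", "deny"]))
print(predict_with_knowledge_limits([0.92, 0.08], ["approve", "deny"]))
```

In a high-stakes setting, the abstention branch would typically route the case to a human reviewer rather than simply refusing an answer.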
Under the General Data Protection Regulation (GDPR), for processing that could constitute profiling, data controllers are required to provide meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing.
A similar requirement appears in the California Privacy Rights Act (CPRA), the ballot initiative set to amend the California Consumer Privacy Act (CCPA).