FTC Commissioner Alvaro Bedoya recently released a statement on Rite Aid’s use of smart CCTV.
Here are some key takeaways from the statement, and how the recent settlement applies to the use of surveillance and AI technologies.
You should consider this a blueprint for future AI enforcement by the FTC.
- We often talk about how surveillance “violates rights” and “invades privacy.” We should; it does. What cannot get lost in those conversations is the blunt fact that surveillance can hurt people.
- The settlement offers a strong baseline for what the FTC expects an algorithmic fairness program to look like.
- The FTC is not afraid to ban the use of a particular AI technology for a period of years, or to order the deletion of biometric information collected through it.
- The FTC will not necessarily accept the use of biometric surveillance in commercial settings. There is a powerful policy argument that there are some decisions that should not be automated at all. Many technologies should never be deployed in the first place.
- This decision extends beyond smart surveillance to the use of any technology that automates important decisions about people’s lives, including decisions that could cause them substantial injury. Some contexts include automated resume screening, screening for housing, and screening using pricing models.
When using AI for facial recognition, you must:
- Carefully consider how and when people can be enrolled in an automated decision-making system, particularly when that system can substantially injure them
- Notify people about the use of the technology (unless this is impossible due to specific safety concerns)
- Allow an opt out (unless this is impossible due to specific safety concerns)
- Notify people when you take action against them based on the system’s output, as well as how they can contest that action
- Deploy robust testing, including testing for statistically significant bias on the basis of race, ethnicity, gender, sex, age, or disability, acting alone or in combination
- Conduct a detailed assessment of how inaccuracies may arise from training data, hardware issues, software issues, probe photos and differences between training and deployment environments
- Conduct ongoing annual testing “under conditions that materially replicate” conditions in which the system is deployed
- Shut down the system if you cannot address the risks identified through this assessment and testing
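To make the bias-testing requirement above concrete, here is a minimal sketch of how one might flag a statistically significant disparity in false-match rates between two demographic groups. The group labels, counts, and rates are entirely illustrative (not from the settlement), and a real algorithmic fairness program would cover many more groups, intersections, and error types; this just shows the core statistical check (a two-proportion z-test).

```python
# Hypothetical sketch: checking a face-matching system's false-match rate
# for statistically significant differences across demographic groups.
# All group names and numbers below are illustrative assumptions.
from math import sqrt

def two_proportion_z(err_a: int, n_a: int, err_b: int, n_b: int) -> float:
    """Z-statistic comparing the error rates of two groups."""
    p_a, p_b = err_a / n_a, err_b / n_b
    p_pool = (err_a + err_b) / (n_a + n_b)  # pooled error rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Illustrative test results: (false matches, total probe images) per group
results = {
    "group_a": (12, 1000),  # 1.2% false-match rate
    "group_b": (35, 1000),  # 3.5% false-match rate
}

z = two_proportion_z(*results["group_a"], *results["group_b"])

# |z| > 1.96 means the difference is significant at the 5% level,
# flagging the disparity for the mandated assessment (and, if it
# cannot be addressed, potential shutdown of the system).
flagged = abs(z) > 1.96
print(f"z = {z:.2f}, flagged = {flagged}")
```

In practice this check would be one small piece of the "robust testing" the settlement requires, repeated annually under conditions that materially replicate the deployment environment.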