The Federal Trade Commission is closely watching the artificial intelligence marketplace, as well as company conduct, as more AI products emerge, according to a recent FTC blog post.
“We are ultimately invested in understanding and preventing harms as this new technology reaches consumers and applying the law. In doing so, we aim to prevent harms consumers and markets may face as AI becomes more ubiquitous,” the post said.
Per the FTC, consumers are voicing concerns about harms related to AI. Those concerns span the technology’s lifecycle, from how it’s built to how it’s applied in the real world.
How AI is Built:
- Copyright and IP: The key concern here is copyright infringement from the scraping of data from across the web. Creators also worry that content they post to the web may be used to train models that could later supplant their ability to make a living by creating content.
- Biometric and personal data: The concern here is the possibility of biometric data, particularly voice recordings, being used to train models or generate “voice prints.”
How AI Works and Interacts with Users:
- Bias and inaccuracies: These concerns often relate to the biases of facial recognition software, including customers being unable to verify their identity because of a lack of demographic representation. Such inaccuracies can also open the door to scams.
- Limited pathways for appeal and bad customer service: Some complaints include not being able to reach a human. Users have also reported being mistakenly suspended or banned by an AI system with no way to appeal to a human.
How AI is Applied in the Real World:
- Scams, fraud, and malicious use: Phishing emails are becoming harder to spot as scammers start to write them with generative AI products. The previously tell-tale spelling and grammar mistakes are starting to disappear. There also are real concerns that generative AI can be used to conduct sophisticated voice cloning scams, in which family members’ or loved ones’ voices are used for financial extortion. Romance scams and financial fraud could also be turbo-charged by generative AI.