The Federal Trade Commission Act’s prohibition on deceptive or unfair conduct can apply if you make, sell or use a tool that is effectively designed to deceive – even if deception is not its intended or sole purpose. According to a new Federal Trade Commission blog post, that means the prohibition reaches deepfakes and voice clones.

What to do?

  • Just because you can, doesn’t mean you should: At the design stage and thereafter, consider the reasonably foreseeable – and often obvious – ways the product could be misused for fraud or could cause other harm. Then ask yourself whether those risks are high enough that you shouldn’t offer the product at all. (Hello GDPR “fair and lawful,” and hi there EU AI Act.)
  • Are you effectively mitigating the risks? Take all reasonable precautions before the product hits the market. Merely warning your customers about misuse, or telling them to make disclosures, is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features – not bug corrections or optional features that third parties can undermine via modification or removal. (Hello GDPR data protection by design and by default, and hi there EU AI Act AI audits.)
  • Are you over-relying on post-release detection? The burden shouldn’t be on consumers to figure out whether a generative AI tool is being used to scam them.
  • Are you misleading people about what they’re seeing, hearing or reading? Misleading consumers via doppelgängers, such as fake dating profiles, phony followers, deepfakes or chatbots, could result – and in fact has resulted – in FTC enforcement actions.

While the focus of this post is on fraud and deception, these new AI tools carry with them a host of other serious concerns, such as potential harms to children, teens and other populations at risk when interacting with or subject to these tools. Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns.