I recently sat down with the Restaurant Technology Network to discuss some of the many concerns people have regarding chatbots, and the legislation that governs them.
Here are some of the key points we talked about:
- Know your bot: If you are using a chatbot or AI-powered bot, you need to tell people that they are not speaking to a human and that AI makes mistakes. You also need to tell them to verify the information they are being given and, preferably, show them how to do that (a minimal sketch of such a disclosure appears after this list).
- This chat is being recorded: If you are recording or transcribing the chat, say so prominently and in a manner sufficient to constitute consent for purposes of wiretapping laws. It is better to be specific about why you are recording.
- Be transparent: If you are collecting personal information as part of the chat, you are subject to FTC scrutiny as well as U.S. state privacy laws. Those require a notice at collection and a more detailed privacy notice. It is also important that you don’t surreptitiously change your privacy policy or relevant terms of service.
- Know your data collection: You have to know what data your chatbot collects, both actively from the user and passively through trackers. You also need to know what third parties (through those trackers, or the service provider that set up the bot) are doing with it. Some of this will require an opt-in or opt-out.
- Tell it like it is: The Federal Trade Commission has said “Don’t misrepresent what these services are or can do. Your therapy bots aren’t licensed psychologists, your AI girlfriends are neither girls nor friends, your griefbots have no soul, and your AI copilots are not gods.”
- At your own risk: The FTC has also said “Don’t offer these services without adequately mitigating risks of harmful output.”
- Why am I seeing this ad? Don’t insert ads into a chat interface without clarifying that it’s paid content. Any generative AI output should distinguish clearly between what is organic and what is paid.
- Please, please don’t leave me: Don’t use consumer relationships with avatars and bots for commercial manipulation. Consistent with the FTC’s rulemaking proposal to make it easier for people to “click to cancel” subscriptions, a bot shouldn’t plead not to be turned off.
- You need to do an AI assessment: It is important to assess where the training data came from (does collecting or using it violate any laws?) and whether there is any risk of bias. Also: are you using the user’s data to train the AI, and are you giving them a choice about it?
- Automated decision-making: If your bot touches sensitive data or makes “consequential decisions” involving access to employment, financing, education, health, mental health, etc., there will be a lot more to do. Call your data lawyer!
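To make the first two points concrete for implementers, here is a minimal TypeScript sketch of a pre-chat gate that shows the required disclosures and captures consent before any recording begins. Everything in it is illustrative and assumed, not a real API or approved legal language: the names (ConsentRecord, startSession), the disclosure wording, and the consent flow are placeholders that your counsel should review and adapt.

```typescript
// Hypothetical sketch of a pre-chat disclosure-and-consent gate.
// All identifiers and wording are illustrative assumptions, not a
// real widget API or vetted legal text.

interface ConsentRecord {
  aiDisclosureShown: boolean; // user was told they are talking to AI
  recordingConsent: boolean;  // user agreed the chat may be recorded
  timestamp: string;          // when consent was captured
}

const DISCLOSURE =
  "You are chatting with an AI assistant, not a human. " +
  "AI can make mistakes - please verify important details " +
  "(for example, allergens, prices, hours) with the restaurant directly. " +
  "This chat may be recorded and transcribed for quality and training purposes.";

function showDisclosureAndGetConsent(): ConsentRecord {
  // A real widget would render a banner with an explicit "I agree"
  // control; this sketch simulates an affirmative click.
  console.log(DISCLOSURE);
  const userClickedAgree = true; // stand-in for the user's actual click
  return {
    aiDisclosureShown: true,
    recordingConsent: userClickedAgree,
    timestamp: new Date().toISOString(),
  };
}

function startSession(): void {
  const consent = showDisclosureAndGetConsent();
  if (!consent.recordingConsent) {
    // No consent: either run the chat without recording or end it,
    // depending on what your wiretapping-law analysis requires.
    console.log("Recording disabled for this session.");
  }
  // ...start the chat, persisting the ConsentRecord alongside the
  // transcript so you can later demonstrate when and how consent
  // was obtained.
}

startSession();
```

The design point is that the disclosure and the consent record are captured before the first message is exchanged, and stored with the transcript; a disclosure buried in a linked policy is much harder to defend as consent.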