**As adoption of chatbots and conversational interfaces continues to grow, how will businesses keep their brand safe and their customers’ data safer?**
From [deliberate infiltration of systems][1] to [bugs that cause accidental data leakage][2], these days, the exposure or loss of personal data is a large part of what occupies almost every self-respecting CIO’s mind. Especially since [the EU has just slapped its first defendant with a GDPR fine][3].
Over the last 10-15 years, through the rise of the “interactive” web and social media, many companies have learned the hard way about the importance of techniques like [hashing passwords][4] stored in databases and [sanitising user input before it is used for querying databases][5]. However, as the use of chatbots continues to grow, conversational systems are almost certain to become an attractive method of attack for discerning hackers.
In this article I’m going to talk about some different types of chatbot attacks that we might start to see and what could be done to prevent them.
## Man-in-the-Middle Attacks
In a man-in-the-middle attack, the adversary intercepts traffic between the many components that make up a chatbot. Baddies might be able to [inject something into a library][6] used by your beautiful UX that logs everything your user says, or they might not need to change the code at all [if you are not using HTTPS][7].
![](https://i2.wp.com/brainsteam.co.uk/wp-content/uploads/2018/12/Secure-Chat-1.png)

*The chat interface on your device communicates (hopefully securely over HTTPS) with a server that the developer operates, which may in turn communicate with an external NLU provider. If someone were able to mount a man-in-the-middle attack between any of these components it could be a big problem.*
These sorts of attacks are clearly a serious problem for any chatbot that talks to users about personal information. Even if your chatbot is designed to answer frequently asked questions without any specific link to personal accounts, vulnerability to this attack could give away personal information that the user has inadvertently shared (from “Do you have kids’ meals?” and “Do you deliver to Example Street?” we can infer that the user has children and lives on Example Street).
### Mitigation
Developers of chatbots should make sure that bots are using the [latest security standards][8] – at a minimum [all communication should be encrypted at the transport layer (e.g. HTTPS)][9], but you might also consider [encrypting the actual messages][10] before they are transmitted as well. If you rely on external open source libraries then [regularly run security checks on your codebase][11] to verify that those libraries can still be trusted. If you are deploying a bot in a commercial context then you should definitely have independent security/penetration testing of chatbots as a key part of your quality assurance process.
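If you do decide to add message-level encryption on top of TLS, a minimal sketch (assuming a Python back end and the widely used `cryptography` package; the key handling here is purely illustrative) might look like this:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# In practice the key would come from a secrets manager, never be
# hard-coded or generated per request. Generated here for illustration only.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_utterance(utterance: str) -> bytes:
    """Encrypt a chat message before it is transmitted."""
    return fernet.encrypt(utterance.encode("utf-8"))

def decrypt_utterance(token: bytes) -> str:
    """Decrypt a message received from the other end of the channel."""
    return fernet.decrypt(token).decode("utf-8")

if __name__ == "__main__":
    ciphertext = encrypt_utterance("Do you deliver to Example Street?")
    print(ciphertext)                     # opaque token, safe to transmit
    print(decrypt_utterance(ciphertext))  # original utterance
```

Note that a symmetric scheme like this only helps if the key itself is distributed and stored securely – and it is a complement to transport-layer encryption, not a replacement for it.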
## Exploitation of Third Party Services
The chatbot has often been seen as the “silver bullet” for quickly acquiring usage. No longer do you need to build an app that users have to install on their devices; simply integrate with the platforms that people already use, e.g. Facebook, Google Home, Alexa and others. However, it’s important to remember the security consequences of this approach, especially in use cases with sensitive personal information and high stakes if there were ever a data leak.
![](https://i2.wp.com/brainsteam.co.uk/wp-content/uploads/2018/12/Secure-Chat-Webhook.png)

*Facebook, Alexa, WhatsApp, Telegram, Google Home and other bots use this pattern: your device communicates with the chat service you are engaging with, which in turn sends messages back to your service via a “WebHook”.*
In this scenario your bot’s security is heavily reliant on the security of the messaging platform you deploy onto. For the most part, these platforms have [sensible security procedures][12]. However, it’s important to consider that large companies and platforms are desirable targets for hackers because of the huge potential personal data payoff from a successful breach.
Of course, it’s not just the “Messenger Platform” part of this system that is of interest to attackers. The “External NLU provider” in our diagram above could also be targeted and user utterances stolen. Remember that any external service, whilst useful in many use cases, should be regarded with healthy scepticism where security is concerned.
### Mitigation
If you are building chatbots tied to third party platforms then you can try to mitigate risks by coding defensively and sharing information sparingly. For example, never have your chatbot ask the user for things like passwords or credit card numbers through one of these portals. Instead use your companion app or website to gather this information securely and tie the user’s Messenger ID to their user account within your infrastructure.
When it comes to using external NLU a good practice is to run some [anonymisation, removing things like names, addresses, phone numbers etc,][13] on input utterances before passing them on to the service. You might also consider using on-premise NLU solutions so that chat utterances never have to leave your secure environment once they’ve been received.
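As a rough illustration of what that pre-processing step could look like, here is a regex-based redaction sketch in Python – the patterns are deliberately naive and a production system would use proper named entity recognition or a dedicated PII-detection library:

```python
import re

# Illustrative patterns only: real PII detection needs NER or a dedicated library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def anonymise(utterance: str) -> str:
    """Replace obvious PII with placeholder tokens before calling an external NLU service."""
    for label, pattern in PII_PATTERNS.items():
        utterance = pattern.sub(f"<{label}>", utterance)
    return utterance

print(anonymise("My number is 07700 900123 and my email is jo@example.com"))
# -> "My number is <PHONE> and my email is <EMAIL>"
```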
## Webhook Exploits
When your bot relies on an external messaging platform as in the above scenario, the WebHook can be another point of weakness. If hackers can find the URL of your webhook then [they can probe it and they can send it messages][14] that look like they’re from the messaging platform.
### Mitigation
Make sure that your webhook requires authentication, and follow the guidelines of whichever messaging platform you are using to verify all incoming messages. Never process messages that fail these checks.
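For example, many platforms sign each webhook request with an HMAC of the raw request body using your app secret (Facebook Messenger sends this in an `X-Hub-Signature-256` header). A minimal Flask-style sketch of that check is below – the header name and digest algorithm vary by platform, so treat the details as assumptions to verify against your platform’s documentation:

```python
import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
APP_SECRET = os.environ["APP_SECRET"]  # shared secret issued by the messaging platform

def signature_is_valid(raw_body: bytes, header_value: str) -> bool:
    """Recompute the HMAC over the raw request body and compare in constant time."""
    expected = hmac.new(APP_SECRET.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", header_value or "")

@app.route("/webhook", methods=["POST"])
def webhook():
    # Header name is platform-specific; Messenger uses X-Hub-Signature-256, for example.
    if not signature_is_valid(request.get_data(), request.headers.get("X-Hub-Signature-256", "")):
        abort(403)  # never process messages that fail the check
    # ... handle the verified event here ...
    return "ok", 200
```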
## Unprotected Device Attacks
Have you ever left your computer unlocked and gone to the water cooler? How about handing your mobile phone to a friend in order to make a call or look at a funny meme? Most people have done this at least once and if you haven’t, well done!
You should [be prepared for opportunistic attackers posing as other users when using your chatbot][15]. They might ask probing questions to get at the user’s information: “What delivery address do you have for me again?” or “What credit card am I using?”
### Mitigation
Remember to code and design defensively. Responding with something like “I’m sorry, I don’t know that, but you can find out by logging in to the secure preferences page [URL Here]” would be a relatively good response.
Of course, there’s not much you can do if the user leaves their passwords written down on a sticky note next to the terminal or leaves their password manager app unlocked, but by requiring users to log in before accessing sensitive personal information we’ve taken some sensible precautions.
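As a sketch of that defensive pattern, assuming a simple intent-dispatch bot (the intent names and the preferences URL below are made up for illustration):

```python
SECURE_PREFERENCES_URL = "https://example.com/account/preferences"  # hypothetical placeholder

# Intents that should never be answered directly in the chat channel.
SENSITIVE_INTENTS = {"get_delivery_address", "get_payment_method", "get_password"}

def respond(intent: str, user_id: str) -> str:
    """Never reveal stored personal details in-channel; point the user at an authenticated page instead."""
    if intent in SENSITIVE_INTENTS:
        return (
            "I'm sorry, I can't share that here, but you can check it by logging in "
            f"to your secure preferences page: {SECURE_PREFERENCES_URL}"
        )
    return handle_normal_intent(intent, user_id)

def handle_normal_intent(intent: str, user_id: str) -> str:
    # Placeholder for the bot's ordinary dialogue logic.
    return "Happy to help with that!"
```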
## Brand Poisoning Attacks
![](https://i1.wp.com/brainsteam.co.uk/wp-content/uploads/2018/12/Tay_bot_logo.jpg)

*Microsoft Tay is one of the most famous examples of a brand poisoning attack*
User data and proprietary information are clearly a high priority, but there are other risks to your chatbot that you should also be mindful of. An adversary could poison the way that your chatbot responds in order to screen-capture it saying something controversial and start a defamation campaign, damaging your brand and putting you in a sticky situation.
In March 2016, Microsoft brought online an experimental chatbot called “Tay”, which was designed to learn to respond in new ways by interacting with its users over time. From a technical perspective, Tay was an incredible piece of kit, combining state-of-the-art natural language processing with online machine learning. However, the developers didn’t bank on swathes of Twitter trolls poisoning Tay’s memory bank and [turning her into a Holocaust-denying racist][16].
This attack was possible because of Tay’s state-of-the-art architecture, which allowed her to learn and change her vocabulary and responses over time. In 2018, most bots still use a combination of intent detection and static rules to work out how to reply to users, which means most bots probably aren’t susceptible to this kind of attack.
However, there are still ways that this kind of attack can trip you up. It all hinges on how your bot reacts to abusive messages and whether it’s allowed to reiterate stuff that the user has said.
Take the example conversation to the left here. It’s not exactly undeniable proof of wrongdoing by Joe’s Shoe Emporium, but a well-timed social media post or BuzzFeed article with “#NotADenial #BoycottJoes #ChildLabour” could be enough to really do a number on Joe’s brand.
### Mitigation
So how can we avoid this kind of thing? Well, a good start would be to check the user input for profanity as part of validation and then refuse to continue the conversation if things turn hairy. Think of this a bit like a real contact centre handler who has been trained to hang up the phone if the customer gets angry or aggressive. IBM advocate for [all chatbots being able to detect and react to profanity][17] and there’s a great post [here][18] about some approaches to doing that. Ultimately, the way your bot reacts to rude input – whether passive, humorous or a simple shut-down – will depend on how you want your brand to come across.
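A minimal sketch of that kind of input check is below, using a hand-rolled word list purely for illustration – a real deployment would lean on a maintained profanity or abuse-detection classifier instead:

```python
import re

# Illustrative placeholder list; real systems should use a maintained
# profanity / abuse-detection model or service instead.
BLOCKLIST = {"badword1", "badword2"}

def contains_profanity(utterance: str) -> bool:
    """Very naive token-level check against a blocklist."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    return any(token in BLOCKLIST for token in tokens)

def handle_message(utterance: str) -> str:
    if contains_profanity(utterance):
        # Politely end the conversation rather than echoing or engaging with abuse.
        return "I'm going to end this chat now. Please get in touch again when you'd like to continue."
    return run_dialogue(utterance)

def run_dialogue(utterance: str) -> str:
    # Placeholder for the bot's ordinary dialogue logic.
    return "Thanks! Let me look into that for you."
```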
I’d advocate for “dealing with aggressive/subversive user interactions” being high on the chatbot QA team’s to-do list.