---
author: James
date: "2018-12-09 10:36:34+00:00"
medium_post: 'O:11:"Medium_Post":11:{s:16:"author_image_url";s:69:"https://cdn-images-1.medium.com/fit/c/200/200/0*naYvMn9xdbL5qlkJ.jpeg";s:10:"author_url";s:30:"https://medium.com/@jamesravey";s:11:"byline_name";N;s:12:"byline_email";N;s:10:"cross_link";s:3:"yes";s:2:"id";s:12:"8be78d43ff66";s:21:"follower_notification";s:3:"yes";s:7:"license";s:19:"all-rights-reserved";s:14:"publication_id";s:12:"6fc55de34f53";s:6:"status";s:6:"public";s:3:"url";s:121:"https://medium.com/@jamesravey/can-bots-keep-secrets-the-future-of-chatbot-security-and-conversational-hacks-8be78d43ff66";}'
post_meta: date
preview: /social/75649786f1fe80e8f5e15fbc595cdc207a443978d3fb7d47992993c4bc4997bd.png
tags:
  - machine-learning
  - chatbots
  - nlp
  - security
  - Work
title: "🤐🤐Can Bots Keep Secrets? The Future of Chatbot Security and Conversational “Hacks”"
type: posts
url: /2018/12/09/🤐🤐can-bots-keep-secrets-the-future-of-chatbot-security-and-conversational-hacks/
---

As adoption of chatbots and conversational interfaces continues to grow, how will businesses keep their brand safe and their customers' data safer?

From deliberate infiltration of systems to bugs that cause accidental data leakage, the exposure or loss of personal data occupies a large part of almost every self-respecting CIO's mind these days, especially since the EU has just slapped its first defendant with a GDPR fine.

Over the last 10-15 years, through the rise of the “interactive” web and social media, many companies have learned the hard way about the importance of techniques like hashing passwords stored in databases and sanitising user input before it is used to query databases. However, as the use of chatbots continues to grow, conversational systems are almost certain to become an attractive method of attack for discerning hackers.

In this article I'm going to talk about some different types of chatbot attacks that we might start to see and what could be done to prevent them.

Man-in-the-Middle Attacks

In a man-in-the-middle attack, the adversary intercepts traffic between the many components that make up a chatbot. Attackers might inject code into a library that your beautiful UX uses so that it logs everything your user says, or, if you are not using HTTPS, they might not need to change any code at all.

The chat interface on your device communicates (hopefully securely over HTTPS) with a server that the developer operates, which may in turn communicate with an external NLU provider. If someone were able to mount a man-in-the-middle attack between any of these components, it could be a big problem.

These sorts of attacks are clearly a serious problem for any chatbot that will be talking to users about personal information. Even if your chatbot is designed to answer frequently asked questions without any specific link to personal accounts, vulnerability to this attack could give away personal information that the user has inadvertently shared (from “Do you have kids' meals?” and “Do you deliver to Example Street?” we can infer that the user has children and lives on Example Street).

Mitigation

Developers of chatbots should make sure that bots are using the latest security standards: at a minimum, all communication should be encrypted at the transport layer (e.g. HTTPS), but you might also consider encrypting the actual messages before they are transmitted. If you're reliant on external open source libraries, then regularly run security checks on your codebase to make sure that those libraries can be trusted. If you are deploying a bot in a commercial context, then independent security/penetration testing of chatbots should be a key part of your quality assurance process.
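To make the message-level encryption idea concrete, here's a minimal sketch in Python using the `cryptography` package's Fernet recipe. The key handling is a placeholder for illustration; in a real deployment the key would come from a proper key exchange or secrets manager, never be generated and shared like this.

```python
# A minimal sketch (assumed setup, not a complete protocol) of encrypting
# chat messages at the application layer before they are handed to HTTPS.
from cryptography.fernet import Fernet

# Placeholder: in practice the key would be provisioned via a key exchange
# or a secrets manager, never generated ad hoc and shipped with the client.
shared_key = Fernet.generate_key()
cipher = Fernet(shared_key)

def encrypt_utterance(text: str) -> bytes:
    """Encrypt a chat message before it is passed to the transport layer."""
    return cipher.encrypt(text.encode("utf-8"))

def decrypt_utterance(token: bytes) -> str:
    """Decrypt a chat message after it arrives from the transport layer."""
    return cipher.decrypt(token).decode("utf-8")

token = encrypt_utterance("Do you deliver to Example Street?")
print(decrypt_utterance(token))  # -> "Do you deliver to Example Street?"
```

The point of the extra layer is that even if an attacker manages to strip or intercept TLS somewhere along the chain, the message bodies themselves remain opaque without the key.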

Exploitation of Third Party Services

The chatbot has often been seen as the “silver bullet” for quickly acquiring usage. No longer do you need to build an app that users have to install on their devices; simply integrate with the platforms that people already use, e.g. Facebook, Google Home, Alexa and others. However, it's important to remember the security consequences of this approach, especially in use cases involving sensitive personal information where the stakes of a data leak are high.

Facebook, Alexa, WhatsApp, Telegram, Google Home and other bots use this pattern: your device communicates with the chat service you are engaging with, which in turn sends messages back to your service via a “webhook”.

In this scenario your bot's security is heavily reliant on the security of the messaging platform that you deploy your system onto. For the most part, these platforms have sensible security procedures. However, it's important to consider that large companies and platforms are desirable targets for hackers because of the huge potential personal data payoff from a successful breach.

Of course it's not just the “Messenger Platform” part of this system that's of interest to attackers. The external NLU provider could also be the target of an attack and user utterances stolen. Remember that any external service, whilst useful in many use cases, should be regarded with healthy scepticism where security is concerned.

Mitigation

If you are building chatbots tied to third-party platforms then you can try to mitigate the risks by coding defensively and sharing information sparingly. For example, never have your chatbot ask the user for things like passwords or credit card numbers through one of these portals. Instead, use your companion app or website to gather this information securely and tie the user's Messenger ID to their user account within your own infrastructure.
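As a rough illustration of that account-linking idea, the sketch below resolves an incoming platform sender ID to an internal account. The in-memory dictionary, the names and the linking URL are all hypothetical placeholders; a real system would back this with a database and an authenticated linking flow on your website or app.

```python
from typing import Optional

# Hypothetical sketch: map a messaging platform's sender ID to an internal
# user account so that credentials are never exchanged in the chat itself.
# The in-memory dict stands in for a real datastore; the URL is a placeholder.
linked_accounts = {}  # messenger sender ID -> internal user ID

def resolve_user(sender_id: str) -> Optional[str]:
    """Return the internal user ID linked to this sender, if any."""
    return linked_accounts.get(sender_id)

def handle_incoming(sender_id: str, text: str) -> str:
    user_id = resolve_user(sender_id)
    if user_id is None:
        # Account linking happens on the companion website/app, never in chat.
        return "To get started, please link your account at https://example.com/link"
    return "Thanks! Let me look into that for you."
```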

When it comes to using external NLU, a good practice is to run some anonymisation on input utterances, removing things like names, addresses and phone numbers, before passing them on to the service. You might also consider using on-premise NLU solutions so that chat utterances never have to leave your secure environment once they've been received.
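Here's a rough sketch of what that anonymisation step could look like in Python, using spaCy's pretrained named entity recogniser plus a simple regex for phone numbers. The entity labels, patterns and placeholder tokens are illustrative assumptions rather than an exhaustive PII filter.

```python
# A rough sketch of anonymising utterances before they leave your environment
# for an external NLU provider. Uses spaCy's pretrained NER model (requires
# `python -m spacy download en_core_web_sm`) plus a simple phone-number regex.
# The chosen entity labels and patterns are illustrative, not exhaustive.
import re
import spacy

nlp = spacy.load("en_core_web_sm")

PHONE_PATTERN = re.compile(r"\+?\d[\d\s\-]{7,}\d")
SENSITIVE_LABELS = {"PERSON", "GPE", "LOC", "FAC", "ORG"}

def anonymise(utterance: str) -> str:
    """Replace named entities and phone numbers with placeholder tokens."""
    doc = nlp(utterance)
    redacted = utterance
    # Work backwards through the entities so character offsets stay valid.
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.label_ in SENSITIVE_LABELS:
            redacted = redacted[:ent.start_char] + f"<{ent.label_}>" + redacted[ent.end_char:]
    return PHONE_PATTERN.sub("<PHONE>", redacted)

print(anonymise("Hi, I'm Jane Smith. Do you deliver to Example Street? Call me on 07700 900123."))
```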

Webhook Exploits

When your bot relies on an external messaging platform, as in the above scenario, the webhook can be another point of weakness. If hackers can find the URL of your webhook then they can probe it and send it messages that look like they're from the messaging platform.

Mitigation

Make sure that your webhook requires authentication, and follow the guidelines of whichever messaging platform you are using in order to authenticate all incoming messages. Never process messages that fail these checks.
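As an illustration, the Flask sketch below verifies an HMAC signature over the raw request body before any message is processed, in the style used by several messaging platforms (Facebook Messenger, for example, signs webhook calls with an `X-Hub-Signature` header). The header name and secret handling here are assumptions; always follow your own platform's documentation for the exact scheme.

```python
# A minimal Flask sketch of authenticating incoming webhook calls with an
# HMAC signature computed over the raw request body. Treat the header name
# and sha1 scheme as assumptions and check your platform's docs.
import hashlib
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
APP_SECRET = os.environ["APP_SECRET"].encode("utf-8")  # provisioned, never hard-coded

def signature_is_valid(payload: bytes, signature_header: str) -> bool:
    """Recompute the HMAC over the raw body and compare in constant time."""
    expected = "sha1=" + hmac.new(APP_SECRET, payload, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, signature_header or "")

@app.route("/webhook", methods=["POST"])
def webhook():
    if not signature_is_valid(request.get_data(), request.headers.get("X-Hub-Signature", "")):
        abort(403)  # never process messages that fail the authenticity check
    # ...hand the verified payload to the bot's message handler here...
    return "ok"
```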

Unprotected Device Attacks

Have you ever left your computer unlocked and gone to the water cooler? How about handing your mobile phone to a friend so they can make a call or look at a funny meme? Most people have done this at least once, and if you haven't, well done!

You should be prepared for opportunistic attackers posing as other users when using your chatbot. They might ask probing questions in order to get at the user's information: “What delivery address do you have for me again?” or “What credit card am I using?”

Mitigation

Remember to code and design defensively. Responding with something like “I'm sorry, I don't know that, but you can find out by logging in to the secure preferences page [URL Here]” would be a relatively good response.

Of course there's not much you can do if the user leaves their passwords written down on a sticky note next to the terminal or leaves their password manager app unlocked, but by requiring users to log in to get access to sensitive personal information we've taken some sensible precautions.
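A minimal sketch of that defensive pattern might look like the following; the intent names and preferences URL are hypothetical placeholders.

```python
# A sketch of a defensive intent handler: rather than echoing stored personal
# data into the chat, the bot points the user at an authenticated page.
# The intent names and URL are hypothetical placeholders.
SENSITIVE_INTENTS = {"get_delivery_address", "get_payment_method", "get_account_details"}
PREFERENCES_URL = "https://example.com/account/preferences"  # placeholder

def respond(intent: str) -> str:
    if intent in SENSITIVE_INTENTS:
        # Never read personal details back out over the chat channel.
        return ("I'm sorry, I can't share that here, but you can check it by "
                f"logging in to your secure preferences page: {PREFERENCES_URL}")
    return "How else can I help?"

print(respond("get_delivery_address"))
```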

Brand Poisoning Attacks

Microsoft Tay is one of the most famous examples of a brand poisoning attack

User data and proprietary information are clearly a high priority, but there are other risks to your chatbot that you should also be mindful of. An adversary could poison the way that your chatbot responds in order to screen-capture it saying something controversial and start a defamation campaign, tarnishing your brand and putting you in a sticky situation.

In March 2016, Microsoft brought online an experimental chatbot called “Tay” which was designed to learn to respond in new ways by interacting with its users over time. From a technical perspective, Tay was an incredible piece of kit, combining state-of-the-art Natural Language Processing with Online Machine Learning. However, the developers didn't bank on swathes of Twitter trolls poisoning Tay's memory bank and turning her into a Holocaust-denying racist.

This attack was possible because of Tay's state-of-the-art architecture, which allowed her to learn over time and change her vocabulary and responses. In 2018 most bots still use a combination of intent detection and static rules in order to work out how to reply to users. This means that most bots probably aren't susceptible to this kind of attack.

However, there are still ways that this kind of attack can trip you up. It all hinges on how your bot reacts to abusive messages and whether it's allowed to repeat things that the user has said.

Imagine a conversation in which a user accuses Joe's Shoe Emporium of using child labour and the bot blandly repeats the claim back rather than denying it. It's not exactly undeniable proof of wrongdoing, but a well-timed social media post or BuzzFeed article tagged “#NotADenial #BoycottJoes #ChildLabour” could be enough to really do a number on Joe's brand.

Mitigation

So how can we avoid this kind of thing? Well, a good start would be to check the user input for profanity as part of validation and then refuse to continue the conversation if things turn hairy. Think of this a bit like a real contact centre handler who has been trained to hang up the phone if the customer gets angry or aggressive. IBM advocates for all chatbots being able to detect and react to profanity, and there's a great post here about some approaches to doing that. Ultimately the way that your bot reacts to rude input, whether passive, humorous or a simple shut-down, will depend on how you want your brand to come across.
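As a starting point, a profanity check can be as simple as screening tokens against a word list before the message ever reaches the intent handler. The word list and canned response below are placeholders; a production bot would use a maintained list or a dedicated moderation service.

```python
# A very small sketch of screening user input before the bot carries on with
# the conversation. The word list and canned response are placeholders; a
# production bot would use a maintained list or a dedicated moderation service.
PROFANITY = {"damn", "blast"}  # illustrative only

def is_abusive(utterance: str) -> bool:
    tokens = {token.strip(".,!?").lower() for token in utterance.split()}
    return bool(tokens & PROFANITY)

def handle_message(utterance: str) -> str:
    if is_abusive(utterance):
        # Politely shut things down, like a trained contact centre handler.
        return "I'm going to end our chat here. Please get in touch again when you're ready."
    return "Thanks! Let me look into that for you."
```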

I'd advocate for “dealing with aggressive/subversive user interactions” being high on the chatbot QA team's to-do list.