Chatbots and Cybersecurity: New Security Vulnerabilities and Solutions
With the popularity of chatbots exploding, it was only a matter of time before hackers figured out ways to make them malicious. Learn how to keep your data safe.
Chatbots are becoming increasingly popular. The White House even produced an Obama Facebook chatbot. While they’re convenient for many reasons, they also introduce a variety of new security vulnerabilities. Any company taking advantage of them must provide adequate security, or it risks leaving a major hole in its platform.
Here are some of the main security vulnerabilities and the solutions to them.
Channel Encryption Is a Must
Chatbot communication must be encrypted. Encryption is the primary protection for most online traffic; without it, it becomes far easier for attackers to intercept or tamper with messages in transit. Your chatbot should communicate over an encrypted channel at all times.
In other words, chatbots should only ever be used over an encrypted channel. This is simple for chatbots running on private platforms. However, if you’re running a chatbot on a public platform, such as Facebook, managing encryption becomes much harder.
It’s true that Facebook is working on end-to-end encryption for Facebook Messenger. The problem is that this is still in beta mode.
So, what should you do in the meantime?
Avoid transmitting any sensitive information through public channels. Operate bots on these channels under the assumption that someone is listening. These bots also shouldn’t have access to internal company systems.
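To make this concrete, here is a minimal sketch of serving a bot’s webhook over TLS only, using Python’s standard library. The certificate files, port, and handler logic are placeholders for illustration, not any particular platform’s API:

```python
# Minimal sketch: serve the chatbot webhook over TLS only.
# The certificate paths and port are placeholders for illustration.
import ssl
from http.server import HTTPServer, BaseHTTPRequestHandler

class BotWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Handle the chat platform's callback here.
        length = int(self.headers.get("Content-Length", 0))
        payload = self.rfile.read(length)
        # ... parse payload and route it to the bot ...
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("bot.crt", "bot.key")  # placeholder cert/key files

server = HTTPServer(("0.0.0.0", 8443), BotWebhook)
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()  # plain-HTTP clients are rejected at the TLS handshake
```

A reverse proxy such as nginx can enforce the same TLS-only policy in front of an existing bot.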
Create Rules That Manage Data Storage and Data Handling
Chatbots are always going to collect information from their users. It’s the only way they can function and provide useful answers. It’s also how they learn to respond better over time.
Consider where this information is stored and for how long. There’s a reason, for example, why financial information is kept in company databases for only a minimal amount of time.
Make sure the same rules are applied to the bot: it should know exactly what to keep, what to redact, and when to delete. Remember that if the bot exposes sensitive information, it’s the company that will suffer for it. You can’t take any risks with this.
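What might those rules look like in code? Here is an illustrative sketch; the redaction patterns and the 30-day retention window are assumptions you would replace with your own policy:

```python
# Illustrative sketch: redact obvious sensitive data before a chat
# message is logged, and attach an expiry so old transcripts get purged.
import re
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed policy; set per your compliance rules

SENSITIVE_PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # card-number-like digit runs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN format
]

def redact(text: str) -> str:
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def store_message(db: list, user_id: str, text: str) -> None:
    db.append({
        "user": user_id,
        "text": redact(text),
        "expires_at": datetime.now(timezone.utc) + RETENTION,
    })

def purge_expired(db: list) -> list:
    now = datetime.now(timezone.utc)
    return [row for row in db if row["expires_at"] > now]
```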
But what about data storage?
Data storage for the bot must be as secure as it is for any other data the company collects. On private platforms, again, this is simple because you have full control over security. Public chatbots are a different story.
For example, a company like Diamond Color wouldn’t allow a chatbot to store personal data from customers if it were operating via Facebook Messenger.
Right now, public platforms don’t offer secure storage, and the storage they do have is far from reliable. As mentioned before, social media networks won’t take responsibility if there’s a breach through your chatbot. The responsibility always falls on you.
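If you must retain chat data, keep it in storage you control and encrypt it at rest. Below is a minimal sketch using the third-party cryptography package; the key handling is deliberately simplified, and in production the key would come from a secrets manager rather than being generated in-process:

```python
# Sketch: encrypt chat transcripts before they touch disk.
# Assumes the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production, load this from a secrets manager
cipher = Fernet(key)

def save_transcript(path: str, transcript: str) -> None:
    with open(path, "wb") as f:
        f.write(cipher.encrypt(transcript.encode("utf-8")))

def load_transcript(path: str) -> str:
    with open(path, "rb") as f:
        return cipher.decrypt(f.read()).decode("utf-8")
```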
The Phenomenon of Criminal Chatbots
Unless you’re an expert on the subject, you might never even consider the potential for criminal chatbots. But they are very much a thing.
Chatbots are more than capable of impersonating humans; that ability is what makes them so valuable in the first place, but it has also led to scandals like the one at Microsoft.
The problem is that this same technology can be used to impersonate customers and staff to commit various scams. A chatbot impersonating your company could potentially gain control of personal information, or even gain access to company servers by tricking employees.
A similar incident occurred with Tinder. A bot impersonated a woman and encouraged men to click a link and enter their credit card information to become ‘Verified’ on the platform. In reality, victims were unknowingly signed up for an online porn subscription and lost significant amounts of money.
So how do you combat these criminal chatbots?
This comes down to training. Tell your employees to refrain from clicking links sent by chatbots or customers, and send the same message to your customers.
For example, banks always tell their customers that employees will never ask for their financial information. This undercuts impersonation attempts because customers know the bank would never request that data.
Unfortunately, until technology can reliably identify these criminal chatbots, there is no complete solution to this problem. It’s about being aware at all times that a chatbot may not be what it claims to be.
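That awareness can still be backed by simple guardrails. One low-tech option is to flag any message containing links to domains outside an allowlist; the domains in this sketch are placeholders:

```python
# Sketch: flag chat messages containing links to unrecognized domains.
# The allowlist entries are placeholders for illustration.
import re
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"example.com", "support.example.com"}  # your real domains

URL_RE = re.compile(r"https?://[^\s]+")

def suspicious_links(message: str) -> list[str]:
    flagged = []
    for url in URL_RE.findall(message):
        host = urlparse(url).hostname or ""
        # Accept exact matches and subdomains of allowed domains.
        if not any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS):
            flagged.append(url)
    return flagged

print(suspicious_links("Get verified here: https://tinder-verify.example.net/x"))
# -> ['https://tinder-verify.example.net/x']
```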
Last Word – New Threats and Vulnerabilities
The issue with chatbots is that they’re very much like humans. And like humans, they can be manipulated into performing actions they shouldn’t.
Have you secured your company’s chatbot?