Defending Your Database From Ransomware
Some attacks are hard to defend against. Zero-day exploits, where there is no patch and sometimes no solution, leave organizations with very little they can do to prevent an attack. In other cases, keeping up to date on patches will prevent such issues (as was the case with WannaCry, for example), but that can be hard and requires constant vigilance and attention.
In contrast, the recent wave of ransomware attacks on databases has been primarily the result of bad defaults and administrators not paying attention to what is running on their systems. In the past few months, we have seen ransomware attacks on tens of thousands of MongoDB instances, thousands of Elasticsearch databases, and hundreds of CouchDB, Hadoop, and MySQL servers.
In a few of these cases, the ransomware attack has indeed been an unpatched vulnerability, but in the vast majority of cases, the underlying reason was much simpler. The gates were opened, and anyone and everyone was welcome to come in and do as they please. These databases, often containing sensitive and private information, have been left open to the public internet without even the most rudimentary of security mechanisms.
All of the above-mentioned databases are quite capable of controlling who has access and to what, but if you never close the door, the sophistication of the lock is not relevant to the thief, who can simply step in.
Here is how it typically goes. A developer needs a database for an application, so they download and install a database and run it on their own machine. Typically, in such a case, the database is only available on the local machine and is used by a single user, so any security in place will be a hindrance, rather than a help, during the development process.
Because of this, most databases will happily run in a mode that basically accepts any connection to the database as a valid user with full privileges. It makes the development process much easier and increases the adoption rate since you don’t have to struggle with security decisions from the start.
This is a good thing. Developers usually don’t have the required skill set to make proper security decisions for the organization, nor should they. That role should be in the hands of the operations staff, who control and maintain the production environments for the organization. At least, that is the way it should be.
In practice, in many cases, the person who developed the application is also responsible for setting it up in production. That can be a problem from a security point of view. The easiest thing to do is to simply repeat the same development setup procedure (after all, we know it works) in production and configure the database to accept remote connections as well.
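To make the risk concrete, here is a minimal sketch in Python using the pymongo driver. The hostname is a placeholder, and the example assumes an instance that was opened to remote connections with authentication never enabled; against such an instance, an anonymous client is effectively an administrator.

# Minimal sketch: connecting to a hypothetical, unsecured MongoDB instance
# that was opened to the network. No credentials are supplied, yet the
# administrative operations below succeed because authentication was never enabled.
from pymongo import MongoClient

# "db.example.internal" is a placeholder for a server configured to accept
# remote connections with authentication left at its disabled default.
client = MongoClient("mongodb://db.example.internal:27017/",
                     serverSelectionTimeoutMS=3000)

print(client.list_database_names())   # enumerate every database on the server
client.drop_database("customers")     # or destroy one outright, no questions asked

Automated ransomware scripts do essentially this: enumerate the databases, drop or exfiltrate them, and leave a ransom note behind.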
Just a few words on the consequences of these issues. A database that was left open to the world in production can be the end of a business. Leaving aside the fact that you might have to pay a ransom to get your data back (and be effectively offline for the duration), you’ll also need to disclose the breach to customers and possibly face regulatory action. As you can imagine, that can be...unpleasant.
A database that anyone can log into as an administrator is an excellent attack vector into an organization, so even if you aren’t holding anything particularly interesting in that database, it can be used as a jumping off point from inside your firewall, bypassing a lot of the safeguards you rely on.
If you have any sort of experience with security, the last paragraph probably made you squirm. I’m sorry about that, and I dearly wish things were different. It is easy to blame the developer for negligence when you consider the implications (I’m picking a bit on developers here because they are the usual suspects in such cases, although professional operations teams make these sorts of mistakes quite often, too). One recent case involving an IoT stuffed toy exposed two million voice recordings of kids and their families, along with close to a million accounts. That particular database was left exposed to the public and has been held for ransom at least three times. I would call that gross negligence, to be sure.
But the number of successful attacks also indicates another issue. Taking MongoDB as the leading example in these breaches, there is a security checklist for production deployments, which is great. The list covers 15 topics and goes on for 44 pages. For any organization that does not have dedicated security staff, expecting users to go through all of that before each deployment is not realistic.
The major problem here is that security is hard, requires expertise, and often runs in direct contradiction to ease of use. It is also true that whenever a user needs to jump a hurdle because of security, their first inclination is to just disable security completely.
This means that any security design needs to balance the ease of use of the system with its level of security and make sure that a reasonable tradeoff is made. It is also the case that a large part of the security design of a system is not really about security at all but is focused entirely on the user experience. This is strange because you’ll typically find UX decisions at the user interface level, certainly not as one of the top items in the security stack.
But the truth is that if you want to have a secured solution, you need to make it easy to run your system in a secured manner, and you need to make that approach the default and most reasonable thing to do. From my point of view, this is a user experience problem, not a security problem.
In my day job, I’m working on building the RavenDB document database, and these kinds of threats can keep you up at night. Notice that at this point, we haven’t even discussed any particular security mechanism because, by default, none is usually engaged. And the problem is that the security policies that are mandatory in production are a major hindrance in development. This leads to an interesting dichotomy. Databases that install with unsecured defaults get more users (since they are easier to start with), but they also have a tendency to leak data and expose the organization to major risks because it is easy to miss changing the default configuration.
With RavenDB, we have implemented a two-step approach. As long as RavenDB is running on a developer machine and is configured to listen only to local connections (which is the default in the typical developer installation), there is no need to configure security. This is safe since there is no way for an outside party to connect to the database. Only connections from within the same machine are allowed.
When you go to production, you’ll need to reconfigure RavenDB to listen to connections from the network, and it is at this stage that the second step of the security design applies. RavenDB will refuse to run in such a configuration. Listening to the network while not having any authentication set up is a red flag for a highly vulnerable position, and RavenDB will reject such a configuration.
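The following is a conceptual sketch of that policy in Python, not RavenDB’s actual implementation: unsecured operation is tolerated only while the server binds exclusively to the loopback interface, and binding to anything else without authentication configured aborts startup.

from ipaddress import ip_address

def validate_startup(bind_address: str, authentication_enabled: bool) -> None:
    # Unsecured mode is acceptable only when nothing outside this machine
    # can reach the server; otherwise, refuse to start at all.
    if authentication_enabled:
        return
    if ip_address(bind_address).is_loopback:
        return  # developer mode: local-only, no security required
    raise SystemExit(
        f"Refusing to start: listening on {bind_address} with no authentication "
        "configured. Either bind to 127.0.0.1 or set up authentication."
    )

validate_startup("127.0.0.1", authentication_enabled=False)  # fine for development
validate_startup("0.0.0.0", authentication_enabled=False)    # aborts with an error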
At this point, the administrator is no longer just operating by rote but is aware that they need to make an explicit security decision and set up the appropriate security and authentication. We have been using this approach for the past five years or so with great success. Making a proper distinction between a developer machine (easy to set up, no need for security) and production (a more involved setup, where configuration and security are paramount) gives users the appropriate balance.
A quick look through Shodan shows over 50,000 MongoDB instances that are listening to the public internet and over 15,000 exposed instances of Elasticsearch and Redis each. It took me about two minutes to find a publicly addressable MongoDB instance that had no authentication at all (by all appearances, it had already been mauled by ransomware by the time I looked at it).
Combine that with the fact that tools such as ZMap can scan the entire IPv4 address range in a few minutes, and you can assume that any publicly visible instance is going to be probed within a few minutes of coming online.
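As a rough illustration (the address below is a documentation-only placeholder), this is all a scanner needs to do per host to flag a candidate: a plain TCP connect to the database’s default port.

import socket

def port_is_open(host: str, port: int = 27017, timeout: float = 2.0) -> bool:
    # A single successful TCP connect is enough to mark a host for further probing.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 203.0.113.10 is a TEST-NET-3 address used here purely as a placeholder.
if port_is_open("203.0.113.10"):
    print("MongoDB port reachable; expect it to be probed within minutes.")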
Regardless of your choice of database, ensuring that you have taken the basic step of locking down production systems is just the first item on your list. I’m aware that a security article that basically states “don’t leave your door wide open” is a bit redundant, but the tens of thousands of instances publicly available on the net show that this is still a topic worth discussing.