Elasticsearch Security: Authentication, Encryption, and Backup
In this post, we take a quick look at how you can increase the security of your Elasticsearch instances. Read on for details!
The recent ransom attacks on public Elasticsearch instances showed that Elasticsearch security is still a hot topic. Elasticsearch was not the only target – over the past week hackers also compromised more than 27,000 poorly configured MongoDB databases, stealing and then deleting data from unpatched or misconfigured systems. The scenario is always the same: insecure instances are “hacked” and their data is replaced with a note telling the owner to send payment to a Bitcoin address and then email the attacker to retrieve the data. Over the last few days more than 4,000 Elasticsearch instances have been compromised, and that number is still growing. The attacks are rather simple: the attacker scans for services on port 9200, and once such a service is found, fetches its data, deletes it, and stores the payment instructions as a document in the looted Elasticsearch index. Because many Elasticsearch instances are not protected at all, they are very easy targets.
In this post, we are going to show you a few simple and free prevention methods to secure Elasticsearch instances.
Let’s start by mapping general attacks to counter-measures:
- Port scanning ⇒ minimize exposure:
- Don’t use the default port 9200
- Don’t expose Elasticsearch to the public Internet (put Elasticsearch behind a firewall)
- Data theft ⇒ secure access:
- Lock down the HTTP API with authentication
- Encrypt communication with SSL/TLS
- Data deletion ⇒ set up backup:
- Back up your data
- Log file manipulation ⇒ log auditing and alerting
- Hackers might manipulate or delete system log files to cover their tracks. Sending logs to a remote destination increases the chances of discovering an intrusion early.
Let’s drill into each of the above items with step-by-step actions to secure Elasticsearch:
Lock Down Open Ports
Firewall: Close the Public Ports
The first action should be to close the relevant ports to the Internet:
# Drop packets arriving on the public interface that target the server's
# own public IP on the Elasticsearch HTTP (9200) and transport (9300) ports
iptables -A INPUT -i eth0 -p tcp --destination-port 9200 -d {PUBLIC-IP-ADDRESS-HERE} -j DROP
iptables -A INPUT -i eth0 -p tcp --destination-port 9300 -d {PUBLIC-IP-ADDRESS-HERE} -j DROP
If you run Kibana, note that the Kibana server acts as a proxy to Elasticsearch and thus needs its port closed as well:
iptables -A INPUT -i eth0 -p tcp --destination-port 5601 -d {PUBLIC-IP-ADDRESS-HERE} -j DROP
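To double-check that the rules are active, list the INPUT chain (the exact output varies with your distribution and existing rules):
# show current INPUT rules with packet counters
iptables -L INPUT -n -v
Keep in mind that plain iptables rules do not survive a reboot; persist them with your distribution’s usual mechanism (e.g. iptables-save plus a persistence package).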
After this you can relax a bit – Elasticsearch won’t be reachable from the Internet anymore.
Bind Elasticsearch Ports Only to Private IP Addresses
Change the configuration in elasticsearch.yml to bind only to private IP addresses or, for single-node instances, to the loopback interface:
network.bind_host: 127.0.0.1
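If other machines on a private network need to reach the node directly, bind to the private interface address instead of the loopback. A minimal sketch; 10.0.0.5 is a placeholder for your server’s private IP:
# elasticsearch.yml – node reachable only on the private network
network.bind_host: 10.0.0.5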
Add Private Networking Between Elasticsearch and Client Services
If you need access to Elasticsearch from another machine, connect the two via VPN or another private network. A quick way to establish a secure tunnel between two machines is an SSH tunnel:
ssh -Nf -L 9200:localhost:9200 user@remote-elasticsearch-server
You can then access Elasticsearch through the SSH tunnel from client machines, e.g.:
curl http://localhost:9200/_search
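Here -L forwards local port 9200 to port 9200 on the remote server, and -Nf backgrounds ssh without running a remote command. The same pattern works for Kibana if you need its UI as well; the port below assumes Kibana’s default 5601:
ssh -Nf -L 5601:localhost:5601 user@remote-elasticsearch-server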
Authentication and SSL/TLS With Nginx
There are several open-source and free solutions that provide Elasticsearch access authentication, but if you want something quick and simple, here is how to do it yourself with just Nginx:
Generate a password file:
printf "esuser:$(openssl passwd -crypt MySecret)\n" > /etc/nginx/passwords
Generate self-signed SSL certificates, if you don’t have official certificates:
sudo mkdir /etc/nginx/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt
Add the proxy configuration with SSL and activate basic authentication in /etc/nginx/nginx.conf (note that we expect the SSL certificate and key file in /etc/nginx/ssl/). Example:
# define the proxy upstream to Elasticsearch via the loopback interface
http {
  upstream elasticsearch {
    server 127.0.0.1:9200;
  }

  server {
    # enable TLS
    listen 0.0.0.0:443 ssl;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_timeout 5m;
    ssl_ciphers "HIGH:!aNULL:!MD5:!3DES";

    # proxy for Elasticsearch
    location / {
      auth_basic "Login";
      auth_basic_user_file passwords;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_set_header X-NginX-Proxy true;
      # use the upstream named "elasticsearch" defined above
      proxy_pass http://elasticsearch/;
      proxy_redirect off;

      # answer CORS preflight requests directly
      if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin "*";
        add_header Access-Control-Allow-Methods "GET, POST, PUT, OPTIONS";
        add_header Access-Control-Allow-Headers "Content-Type,Accept,Authorization,x-requested-with";
        add_header Access-Control-Allow-Credentials "true";
        add_header Content-Length 0;
        add_header Content-Type application/json;
        return 200;
      }
    }
  }
}
Restart Nginx and try to access Elasticsearch via https://localhost/_search.
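For example, with the esuser account created above (the -k flag makes curl accept the self-signed certificate – drop it once you use a real certificate):
curl -k -u esuser:MySecret https://localhost/_search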
Free Security Plugins for Elasticsearch
Alternatively, you could install and configure one of the several free security plugins for Elasticsearch to enable authentication:
- The HTTP Authentication plugin for Elasticsearch is available on GitHub. It provides basic HTTP authentication as well as IP ACLs.
- SearchGuard is a free security plugin for Elasticsearch that includes role-based access control, document-level security, and SSL/TLS-encrypted node-to-node communication. Additional enterprise features like LDAP authentication or JSON Web Token authentication are available and licensed per Elasticsearch cluster. Note that SearchGuard support is also included in some Sematext Elasticsearch Support Subscriptions.
Auditing and Alerting
As with any system holding sensitive data, you have to monitor it very closely. This means not only monitoring its various metrics (whose sudden changes could be an early sign of trouble), but also watching its logs. Concretely, in the recent Elasticsearch attacks, anyone with alert rules that trigger when the number of documents in an index suddenly drops would have been notified immediately that something was going on. A number of monitoring vendors have Elasticsearch support, including Sematext (see Elasticsearch monitoring).

Logs should be collected and shipped to a log management service in real time, where alerting watches for anomalous or suspicious activity, among other things. The log management service can be on premises or a third-party SaaS like Logsene. Shipping logs off site has the advantage of preventing attackers from covering their tracks by changing the logs; once logs are off site, attackers can’t get to them. Alerting on both metrics and logs means you become aware of a security compromise early and can take appropriate action to, hopefully, prevent further damage.
Backup and Restore Data
A very handy tool to back up, restore, or re-index data based on Elasticsearch queries is Elasticdump.
To back up complete indices, the Elasticsearch snapshot API is the right tool. The snapshot API provides operations to create and restore snapshots of whole indices, stored in files or in Amazon S3 buckets.
Let’s have a look at a few examples for Elasticdump and snapshot backups and recovery.
- Install elasticdump with the node package manager
npm i elasticdump -g
- Back up by query to a gzipped file:
elasticdump --input='http://username:password@localhost:9200/myindex' --searchBody '{"query" : {"range" :{"timestamp" : {"lte": 1483228800000}}}}' --output=$ --limit=1000 | gzip > /backups/myindex.gz
- Restore from a gzipped file:
zcat /backups/myindex.gz | elasticdump --input=$ --output=http://username:password@localhost:9200/index_name
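Elasticdump can also copy an index directly between two clusters without an intermediate file, which doubles as a simple re-index. A sketch, assuming both clusters are reachable from the machine running elasticdump (host names are placeholders):
elasticdump --input=http://localhost:9200/myindex --output=http://backup-server:9200/myindex --type=data --limit=1000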
Examples for Backup and Restore With Snapshots to Amazon S3 or Files
First, configure the snapshot destination.
1) S3 example (note: S3 repositories require the AWS repository plugin – cloud-aws or repository-s3, depending on your Elasticsearch version – on every node):
curl 'localhost:9200/_snapshot/my_repository?pretty' -XPUT -d '{
  "type" : "s3",
  "settings" : {
    "bucket" : "test-bucket",
    "base_path" : "backup-2017-01",
    "max_restore_bytes_per_sec" : "1gb",
    "max_snapshot_bytes_per_sec" : "1gb",
    "compress" : "true",
    "access_key" : "<ACCESS_KEY_HERE>",
    "secret_key" : "<SECRET_KEY_HERE>"
  }
}'
2) Local disk or mounted NFS example (on recent Elasticsearch versions the target path must also be whitelisted via the path.repo setting in elasticsearch.yml):
curl 'localhost:9200/_snapshot/my_repository?pretty' -XPUT -d '{
  "type" : "fs",
  "settings" : {
    "location" : "<PATH … for example /mnt/storage/backup>"
  }
}'
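Before relying on a repository, you can ask Elasticsearch to verify that all nodes are able to write to it:
curl -XPOST 'localhost:9200/_snapshot/my_repository/_verify?pretty'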
3) Trigger a snapshot:
curl -XPUT 'localhost:9200/_snapshot/my_repository/<snapshot_name>'
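Snapshot creation runs in the background. In backup scripts it is usually better to block until the snapshot has finished, which the wait_for_completion parameter does (snapshot_1 is a placeholder name):
curl -XPUT 'localhost:9200/_snapshot/my_repository/snapshot_1?wait_for_completion=true'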
4) Show all backups:
curl 'localhost:9200/_snapshot/my_repository/_all'
5) Restore – the most important part of a backup is verifying that the restore actually works!
curl -XPOST 'localhost:9200/_snapshot/my_repository/<snapshot_name>/_restore'
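Keep in mind that you cannot restore over an open index. To avoid touching the live index, you can restore selectively under a new name via the restore request body; the index names below are placeholders:
curl -XPOST 'localhost:9200/_snapshot/my_repository/snapshot_1/_restore' -d '{
  "indices" : "myindex",
  "rename_pattern" : "myindex",
  "rename_replacement" : "myindex_restored"
}'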
What About Hosted Elasticsearch?
There are several hosted Elasticsearch services, with Logsene being a great alternative for time series data like logs. Each hosted Elasticsearch service is a little different. The list below shows a few relevant aspects of Logsene:
- The Logsene API is compatible with Elasticsearch, except for a few security-related exceptions
- Logsene does not expose management APIs like index listing or global search via /_search
- Logsene blocks scripting and index deletion operations
- Logsene users can define and revoke access tokens for read and write access
- Logsene provides role based access control and SSL/TLS
- Logsene creates daily snapshots for all customers and stores them securely
- Logsene supports raw data archiving to Amazon S3