How to Use PEM Certificates With Apache Kafka
Apache Kafka 2.7 finally enables the usage of PEM certificates for SSL encryption and authentication. Read on to learn how to use this new feature.
It’s been a long wait, but it’s finally here: starting with Apache Kafka 2.7, it's now possible to use TLS certificates in PEM format with brokers and Java clients. So, why does it matter?
PEM is a scheme for encoding X.509 certificates and private keys as Base64 ASCII strings. This makes it easier to handle your certificates. You can simply provide keys and certificates to the app as string parameters (e.g. through environment variables). This is especially useful if your applications are running in containers, where mounting files to containers makes the deployment pipeline a bit more complex. In this post, I'll show you two ways to use PEM certificates in Kafka.
Providing Certificates as Strings
Brokers and CLI Tools
Add certificates directly to the configuration file of your clients or brokers. If you’re providing them as single-line strings, you must transform the original multiline format to a single line by adding line feed characters (\n) at the end of each line. Here’s how the SSL section of the properties file should look:
security.protocol=SSL
ssl.keystore.type=PEM
ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE-----\nMIIDZ...\n-----END CERTIFICATE-----
ssl.keystore.key=-----BEGIN ENCRYPTED PRIVATE KEY-----\n...\n-----END ENCRYPTED PRIVATE KEY-----
ssl.key.password=<private_key_password>
ssl.truststore.type=PEM
ssl.truststore.certificates=-----BEGIN CERTIFICATE-----\nMICC...\n-----END CERTIFICATE-----
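You don't have to do the single-line conversion by hand. As a sketch, the following one-liner joins the lines of a PEM file with literal \n separators (the filename cert.pem is a placeholder):

```shell
# Join the lines of a PEM file with literal "\n" separators so it fits
# on one configuration line (cert.pem is a hypothetical input file)
awk 'NF {printf "%s%s", sep, $0; sep = "\\n"} END {print ""}' cert.pem
```

You can paste the resulting string directly into the properties shown above.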
Note that ssl.keystore.certificate.chain needs to contain your signed certificate as well as all the intermediary CA certificates. For more details on this, see the "Common Gotchas" section below.
Your private key goes into the ssl.keystore.key field, while the password for the private key (if you use one) goes into the ssl.key.password field.
Java clients use exactly the same properties, but constants help with readability:
Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "<bootstrap_servers>");
// ...other producer configs omitted...

// SSL configs
properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
properties.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_KEYSTORE_CERTIFICATE_CHAIN_CONFIG, "<certificate_chain_string>");
properties.put(SslConfigs.SSL_KEYSTORE_KEY_CONFIG, "<private_key_string>");
// key password is needed if the private key is encrypted
properties.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "<private_key_password>");
properties.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_TRUSTSTORE_CERTIFICATES_CONFIG, "<trusted_certificate>");
producer = new KafkaProducer<>(properties);
Providing Certificates as Files
If you already use mTLS authentication towards Kafka, then the easiest way to adopt PEM certificates is to use them as files, replacing the Java keystore and truststore you use today. This makes the transition from PKCS12 files to PEM files straightforward.
Brokers and CLI Tools
Here’s how the SSL section of the properties file should look:
security.protocol=SSL
ssl.keystore.type=PEM
ssl.keystore.location=/path/to/file/containing/certificate/chain
ssl.key.password=<private_key_password>
ssl.truststore.type=PEM
ssl.truststore.location=/path/to/truststore/certificate
The ssl.keystore.type and ssl.truststore.type properties tell Kafka the format in which we are providing the keystore and the truststore.
Next, ssl.keystore.location points to a file that should contain the following:
- Your private key
- Your signed certificate
- Any intermediary CA certificates
For more details about the certificate chain, see the "Common Gotchas" section below.
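Assembling such a keystore file is a simple concatenation, in the order listed above. Here's a sketch; all filenames are hypothetical:

```shell
# Build the keystore PEM file: private key first, then the signed
# certificate, then any intermediary CA certificates
# (all filenames are hypothetical)
cat private-key.pem signed-cert.pem intermediate-ca.pem > kafka-keystore.pem
```

Point ssl.keystore.location at the resulting file.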
You'll need to set ssl.key.password if your private key is encrypted (which I hope it is!). Make sure not to set ssl.keystore.password, though; otherwise, you’ll get an error.
Again, Java clients use the same properties, but here we’re using the constants provided by the Kafka client library:
Properties properties = new Properties();
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "<bootstrap_servers>");
// ...other producer configs omitted...

// SSL configs
properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
properties.put(SslConfigs.SSL_KEYSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/path/to/file/containing/certificate/chain");
// key password is needed if the private key is encrypted
properties.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "<private_key_password>");
properties.put(SslConfigs.SSL_TRUSTSTORE_TYPE_CONFIG, "PEM");
properties.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore/certificate");
producer = new KafkaProducer<>(properties);
Common Gotchas When Setting Up a Certificate Chain
- If your private key is encrypted (which it should be), you'll need to convert it from PKCS#1 to PKCS#8 format for Kafka to be able to read it properly.
- If you want to provide the PEM certificate as a one-line string, make sure to add the line feed characters (\n) at the end of each line. Otherwise, the certificate will be considered invalid.
- The certificate chain has to include your certificate together with all the intermediary CA certificates that signed it, in that order. For example, if your certificate was signed by certificate A, which was signed by certificate B, which was signed by the root certificate, your certificate chain has to include: your certificate, certificate A, and certificate B, in that order. Do note that the root certificate should not be in the chain.
- Certificate order in your certificate chain is important (see the previous point).
Example of Kafka SSL Setup With PEM Certificates
Testing the SSL setup of your clients is not simple, because configuring a Kafka cluster with SSL authentication is not a straightforward process. This is why I created a docker-compose project with a single ZooKeeper node and a single broker, both with SSL authentication enabled. The project borrows many ideas from the excellent cp-demo project by Confluent.
To use the project, clone the docker-compose repository, and navigate to the kafka-ssl folder.
git clone https://github.com/codingharbour/kafka-docker-compose.git
cd kafka-docker-compose/kafka-ssl
Running the start-cluster.sh script will generate a self-signed root certificate, which the script uses to sign all other certificates: one each for the broker and ZooKeeper, plus one for a producer and one for a consumer. After this, the script will start the cluster using docker-compose.
Don’t have docker-compose? Check out how to install docker-compose.
In addition, the startup script will generate producer.properties and consumer.properties files you can use with kafka-console-* tools.
The consumer.properties file is an example of how to use PEM certificates as strings. The producer.properties, on the other hand, uses certificates stored in PEM files. This way you can see and test both approaches described in this blog post.
Published at DZone with permission of Dejan Maric. See the original article here.