Kafka Authentication Tutorial (with 5 Examples)


Kafka provides multiple authentication options. In this tutorial, we will describe each of the available options and then configure and run a demo of Kafka authentication.

There are two primary goals of this tutorial:

  1. teach the options we have for Kafka authentication
  2. prepare us for building a multi-tenant Kafka cluster.

There are a few key subjects which must be considered when building a multi-tenant cluster, but it all starts with Kafka authentication.

After Kafka authentication, the next tutorials will be Kafka authorization and Kafka quotas. Check the “Further Resources” section below for links to these tutorials.

Kafka SSL/TLS in transit?

Before we begin exploring authentication options, let’s distinguish between configured authentication vs. SSL/TLS in transit.

SSL/TLS in transit from producers/consumers to a Kafka broker may be considered authentication in a sense. Totally cool.

But, in our case now, we are going to focus on the Kafka authentication options outside of the communication channel itself.

Quick recap for clarity: Kafka producers and consumers can communicate with brokers over a plaintext port (9092 by default) or a TLS port (9094 by default). Establishing a TLS channel does require certificate verification and might therefore be considered a form of authentication, but again, we are not going to consider it further here.
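
For reference, this split is typically visible in the broker's listener configuration. A minimal sketch (the host name and ports are assumptions matching the defaults above):

listeners=PLAINTEXT://:9092,SSL://:9094
advertised.listeners=PLAINTEXT://broker1.example.com:9092,SSL://broker1.example.com:9094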

Kafka Authentication Overview

Let’s cover the following two options when considering authentication in Kafka.

  1. SASL with Kafka
  2. mTLS with Kafka

What is SASL with Kafka?

Kafka uses the Java Authentication and Authorization Service (JAAS) for SASL configuration. Authentication of connections from clients (producers and consumers) to Kafka brokers supports the following SASL mechanisms:

  • SASL/PLAIN
  • SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512
  • SASL/OAUTHBEARER (starting at version 2.0)
  • SASL/GSSAPI (Kerberos)

SASL/PLAIN Authentication

SASL/PLAIN is a simple username/password authentication mechanism. Kafka supports a default implementation for SASL/PLAIN but can also be extended with callback handlers.

Because credentials are sent in the clear over the wire, SASL/PLAIN should be used only with TLS as the transport layer.

Add a suitably modified JAAS file similar to the one below to each Kafka broker’s config directory; let’s call it kafka_jaas.conf for this example:

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};

This configuration defines two users (admin and alice) and is the same example shown in the Apache Kafka documentation. The username and password properties are used for broker-to-broker connections; i.e. admin is the user for inter-broker communication. The set of user_<userName> properties defines the passwords for all users connecting to the broker from clients.

To use this file on each broker, pass in the file location as a JVM parameter by setting java.security.auth.login.config, such as:

-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf
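
The broker’s server.properties also needs a SASL-enabled listener with the PLAIN mechanism turned on. Here is a minimal sketch, assuming SASL/PLAIN is used for inter-broker traffic as well (host and port are placeholders), mirroring the GSSAPI example later in this tutorial:

listeners=SASL_PLAINTEXT://host.name:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN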

From Kafka version 2.0 onwards, you can avoid storing cleartext passwords on disk by configuring your own callback handlers that obtain the username and password from an external source, using the configuration options sasl.server.callback.handler.class and sasl.client.callback.handler.class.

Under the default implementation of principal.builder.class, the username is used as the authenticated Principal for configuration of ACLs etc. We will go through a demo of this later.

On your Kafka producer and consumer clients, you pass in a properties file to authenticate. For example:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="alice" \
    password="alice-secret";

SASL/SCRAM Authentication in Kafka

SCRAM, or Salted Challenge Response Authentication Mechanism, is a family of SASL mechanisms defined in RFC 5802. Apache Kafka supports SCRAM-SHA-256 and SCRAM-SHA-512.

The default SCRAM implementation stores credentials in Zookeeper, but there are SASL/SCRAM implementations such as those found in Amazon MSK which store credentials outside of Zookeeper.

Again, exactly like the previous SASL/PLAIN example, the username is used as the authenticated Principal for configuration of Kafka ACLs and quotas.

To configure, we pass in a JAAS conf file similar to the previous example on broker startup. For example:

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};

Again, the username and password properties are used by the broker to initiate connections to other brokers.

As shown in the Kafka documentation, creating SCRAM credentials for user alice with password alice-secret can be accomplished with kafka-configs.sh:

> bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice
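
To confirm the credentials were stored, the same script can describe the user; for example, assuming the same ZooKeeper address:

> bin/kafka-configs.sh --zookeeper localhost:2182 --zk-tls-config-file zk_tls_config.properties --describe --entity-type users --entity-name alice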

For Kafka clients, the properties file when using SCRAM may look like the following example:

sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="alice" \
    password="alice-secret";

SASL/OAUTHBEARER in Kafka

Ok, so by now, I’m sure you are seeing the pattern. We need a configuration for the Kafka brokers and we need configuration in Kafka clients for the authentication mechanism.

The SASL OAUTHBEARER mechanism in Apache Kafka enables the use of the OAuth 2 framework in a SASL context. It is defined in RFC 7628. The OAUTHBEARER implementation in Kafka creates and validates Unsecured JSON Web Tokens by default and, according to the Kafka docs, is “only suitable for use in non-production Kafka installations”.
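
If you want to experiment with the unsecured default anyway, a minimal client sketch looks like the following. The unsecuredLoginStringClaim_sub option sets the token’s sub claim, which becomes the authenticated principal (alice here is an assumption):

security.protocol=SASL_PLAINTEXT
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
    unsecuredLoginStringClaim_sub="alice";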

To make it suitable for production, we are advised to:

Write an implementation of org.apache.kafka.common.security.auth.AuthenticateCallbackHandler which handles an instance of org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback. Next, declare this custom handler via the sasl.login.callback.handler.class configuration option for a non-broker client, or via the listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class configuration option for brokers (when SASL/OAUTHBEARER is the inter-broker protocol).

In addition, we are told production workloads also require writing an implementation of org.apache.kafka.common.security.auth.AuthenticateCallbackHandler that can handle an instance of org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback, declared via the listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class broker configuration option.

It’s not exactly clear to me yet whether we can use the same AuthenticateCallbackHandler for both, but I know an example would help.

There are many options for configuring OAUTHBEARER, and I won’t cover them all here. See the Apache Kafka docs on OAUTHBEARER for specifics.

Kerberos in Kafka (SASL/GSSAPI)

It is possible to integrate Kafka with existing Kerberos infrastructure.

Again, for broker configuration, we need a JAAS configuration file in the Kafka broker’s configuration directory, such as the following example:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};

As seen in the above example, Kerberos requires a unique principal and keytab for each broker.

Set the java.security.krb5.conf and java.security.auth.login.config options on broker startup; i.e.

-Djava.security.krb5.conf=/etc/kafka/krb5.conf
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
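
Rather than editing the launch scripts, these flags are typically supplied through the KAFKA_OPTS environment variable, which the standard Kafka startup scripts pick up; a sketch:

export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/kafka/krb5.conf -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
bin/kafka-server-start.sh config/server.properties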

Last, in the server.properties file, update the SASL port and mechanisms as shown in the following example:

listeners=SASL_PLAINTEXT://host.name:port
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI

sasl.kerberos.service.name=kafka

In addition, notice the last line, which configures the service name; it should match the primary (first part) of the broker principal, i.e. the kafka in kafka/kafka1.hostname.com@EXAMPLE.COM specified in the previously shown JAAS config file.

On the Kafka client side, Kerberos-based authentication is set in the connection properties file, such as:

sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true  \
    keyTab="/etc/security/keytabs/kafka_client.keytab" \
    principal="kafka-client-1@EXAMPLE.COM";

mTLS with Kafka

mTLS is an authentication mechanism by which BOTH clients and brokers exchange and verify each other’s certificates in order to establish trust as well as a client identity. This differs from one-way SSL/TLS encryption of the traffic channel, where only the client verifies the Kafka broker’s certificate.

In mTLS, the connection is still encrypted, but now both the broker and the client present signed certificates and verify each other’s. The client certificate provides an identity, so the client is no longer anonymous.
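
To make this concrete, here is a minimal sketch of the settings involved; the keystore/truststore paths and passwords are placeholders. The key broker setting is ssl.client.auth=required, which turns one-way TLS into mTLS. On the broker, in server.properties:

listeners=SSL://host.name:9094
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=keystore-secret
ssl.key.password=key-secret
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=truststore-secret
ssl.client.auth=required

And on the client:

security.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.client.keystore.jks
ssl.keystore.password=keystore-secret
ssl.key.password=key-secret
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=truststore-secret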

A certificate authority (CA) is responsible for signing certificates. Organizations may choose to act as their own CA or use 3rd party CAs such as Comodo, DigiCert, Verisign, etc.

To learn more about acting as your own CA, start with learning about openssl.

Similar to Kerberos, there’s much to cover when considering options for keystores, truststores, CAs, etc. and we will leave that to existing documentation rather than attempt to recreate here.

Kafka Authentication Example Demo

Ok, now that we’ve covered the available options in Kafka authentication, let’s set up and run a simple Kafka authentication demo. We’ll run a pre-configured Docker container to keep things simple.

In future tutorials, we’ll expand on this example to implement and demonstrate authorization with Kafka ACLs and Kafka quotas.

We are going to use a Docker container and docker-compose, so it is assumed you have both of these ready to go if you want to run the demo.

Kafka Authentication Demo Steps

  1. Download or clone the docker-compose yml file named kafka-authn-example.yml and the sample broker JAAS file from https://github.com/supergloo/kafka-examples/tree/master/authentication
  2. Run docker-compose -f kafka-authn-example.yml up -d

At this point, we can try commands with and without authentication. Let’s show a few examples with kcat and the kafka-topics.sh CLI script.

For example, if we have kcat installed, compare this successful, authenticated request:

$ kcat -b localhost:9092 -X security.protocol=SASL_PLAINTEXT -X sasl.mechanisms=PLAIN -X sasl.username=alice -X sasl.password=alice-secret -L
Metadata for all topics (from broker 1001: sasl_plaintext://localhost:9092/1001):
 1 brokers:
  broker 1001 at localhost:9092 (controller)
 1 topics:
  topic "test-topic" with 1 partitions:
    partition 0, leader 1001, replicas: 1001, isrs: 1001

to the following attempt, which fails without credentials:

$ kcat -b localhost:9092 -L
%4|1670791597.386|FAIL|rdkafka#producer-1| [thrd:localhost:9092/bootstrap]: localhost:9092/bootstrap: Disconnected: verify that security.protocol is correctly configured, broker might require SASL authentication (after 303ms in state UP)
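
Beyond listing metadata, the same authentication flags work for producing and consuming; for example, against the test-topic shown above:

$ echo "hello" | kcat -b localhost:9092 -X security.protocol=SASL_PLAINTEXT -X sasl.mechanisms=PLAIN -X sasl.username=alice -X sasl.password=alice-secret -t test-topic -P
$ kcat -b localhost:9092 -X security.protocol=SASL_PLAINTEXT -X sasl.mechanisms=PLAIN -X sasl.username=alice -X sasl.password=alice-secret -t test-topic -C -e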

Or if you have a Kafka distribution downloaded and access to the bin/ directory, compare

./kafka_2.11-2.4.1/bin/kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092

This will eventually time out and fail. But if we pass in credentials to authenticate, then the command will succeed, such as:

~/dev $ ./kafka_2.11-2.4.1/bin/kafka-topics.sh --create --topic test-topic-auth  --bootstrap-server localhost:9092 --command-config kafka-examples/authentication/client.properties
~/dev $ ./kafka_2.11-2.4.1/bin/kafka-topics.sh --list --topic test-topic-auth --bootstrap-server localhost:9092 --command-config kafka-examples/authentication/client.properties
test-topic-auth

As shown above, we can successfully create and list topics when we pass in the client.properties file, which is also included in the GitHub repo above.
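
If you are curious what that file contains without cloning the repo, it most likely resembles the SASL/PLAIN client config shown earlier in this tutorial (check the repo for the exact contents):

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="alice" \
    password="alice-secret";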

You probably already know this, but just in case: the paths and directories above are just examples based on my laptop. You’ll need to update accordingly; you probably don’t have a dev directory with Kafka 2.4.1 downloaded to it, for example.

Kafka Authentication Further Resources

Before you go…

If you enjoyed this tutorial, be sure to check out all of the Kafka Tutorials on supergloo.com
