To briefly explain what we are trying to do here: we want permission to read and write Kafka topics. Our Kafka is protected by Kerberos, which means that before we start accessing Kafka, we need to obtain a ticket from Kerberos. To get the ticket, we have to provide a keytab, the authentication file for each user. All these steps have to happen automatically, because when we use commands to access Kafka there won't be an opportunity to present the keytab manually. To get things done, we need to specify the right parameters and configurations in the right places.

Here is my environment (your tools and versions may vary, but the approach should still work):

- Cloudera Hadoop cluster v5+
- Kafka v2+ (a topic with Kerberos auth already exists)
- Spark v2+
- Kerberos v5
- Jupyter Notebook with PySpark

To begin, let's access the protected Kafka topic from the terminal. Access to the topic should only be granted if we obtain a ticket from Kerberos for the right user. For this operation we need to prepare (it will be smoother if all the files are on the same path):

- The user's keytab file (for Kerberos)
- File `jaas.conf`:

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="${PATH_TO_YOUR_KEYTAB}"
  principal="${USER_NAME}@${REALM}";
};
```

- File `kafka_security.properties`:

```
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
sasl.mechanism=GSSAPI
```

- File `krb5.conf` (probably located in `/etc/krb5.conf` or `/etc/kafka/krb5.conf`; see JDK's Kerberos Requirements for more details)

Then we need to export the `KAFKA_OPTS` variable with `jaas.conf` and `krb5.conf`:

```
export KAFKA_OPTS="-Djava.security.auth.login.config=jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf"
```

Then we can write and read the Kafka topic from the terminal.

For writing:

```
/bin/kafka-console-producer --broker-list ${KAFKA_BROKERS_WITH_PORTS} --topic ${TOPIC_NAME} --producer.config kafka_security.properties
```

For reading:

```
/bin/kafka-console-consumer --bootstrap-server ${KAFKA_BROKERS_WITH_PORTS} --topic ${TOPIC_NAME} --from-beginning --consumer.config kafka_security.properties
```

Hope everything worked! Let's do the same thing using Spark.

The challenge here is that we want Spark to access Kafka not only from the application driver but also from every executor. That means each executor needs to obtain a ticket from Kerberos with our keytab. To make Spark do this, we need to specify the right configurations.

Firstly, we need the same `jaas.conf`:

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="${YOUR_KEYTAB_FILE}"
  principal="${USER_NAME}@${REALM}";
};
```

Before launching Spark, we also need to export the variable:

```
export SPARK_KAFKA_VERSION=0.10
```

In Spark code we will access Kafka with these options (the first five are mandatory):

```
kafka.bootstrap.servers=${KAFKA_BROKERS_WITH_PORTS}
kafka.security.protocol=SASL_PLAINTEXT
kafka.sasl.kerberos.service.name=kafka
kafka.sasl.mechanism=GSSAPI
subscribe=${TOPIC_NAME}
startingOffsets=latest
maxOffsetsPerTrigger=1000
```

You can pass this options map to:

```
spark.readStream.format("kafka").options(myOptionsMap).load()
```
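Since `myOptionsMap` above is just a name, here is a minimal PySpark sketch of building and passing such a map. All the values are placeholders for your environment, and it assumes the spark-sql-kafka package is already on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('KafkaOptionsSketch').getOrCreate()

# The first five options are mandatory; the last two tune the stream.
# Every value here is a placeholder you substitute for your environment.
my_options = {
    'kafka.bootstrap.servers': 'broker1:9092,broker2:9092',
    'kafka.security.protocol': 'SASL_PLAINTEXT',
    'kafka.sasl.kerberos.service.name': 'kafka',
    'kafka.sasl.mechanism': 'GSSAPI',
    'subscribe': 'my_topic',
    'startingOffsets': 'latest',
    'maxOffsetsPerTrigger': '1000',
}

# In PySpark a dict can be splatted into .options(); in Scala you would
# pass a Map[String, String] instead.
kafka_raw = spark.readStream.format('kafka').options(**my_options).load()
```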
Before starting Spark, we can define a shell variable with the Java options:

```
JAVA_OPTIONS="-Djava.security.auth.login.config=jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf"
```

Also, we will need two copies of the user's keytab with different names. If we already have one, we can create the second one with the command:

```
cp $USER_NAME.keytab ${USER_NAME}_2.keytab
```

And to launch the Spark application we should run this command:

```
spark2-submit \
  --master yarn \
  --conf "spark.yarn.keytab=${USER_NAME}_2.keytab" \
  --conf "spark.yarn.principal=$USER_NAME@$REALM" \
  --conf "spark.driver.extraJavaOptions=$JAVA_OPTIONS" \
  --conf "spark.executor.extraJavaOptions=$JAVA_OPTIONS" \
  --class "org.example.MyClass" \
  --jars spark-sql-kafka-0-10_2.11-2.4.0.jar \
  --files "jaas.conf,${USER_NAME}.keytab" \
  my_spark.jar
```

Or you can use the same configurations with `spark-shell` or `pyspark`.

Note: to allow Spark to access HDFS, we specify `spark.yarn.keytab` and `spark.yarn.principal`. To allow Spark to access Kafka, we specify `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions` and provide the files `jaas.conf` and `${USER_NAME}.keytab` mentioned in the Java options, so every executor can receive a copy of these files for authentication. And for the spark-sql-kafka dependency, we provide a jar suitable for our Spark version. We can also use the `--packages` option instead of `--jars`.

Hope everything worked! Let's do the same trick in PySpark using Jupyter Notebook. To access the shell environment from Python, we will use `os.environ`:

```python
import os
import sys

os.environ['SPARK_KAFKA_VERSION'] = '0.10'
```

Then we should configure the Spark session:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder. \
    config('spark.yarn.keytab', '${USER_NAME}_2.keytab'). \
    config('spark.yarn.principal', '$USER_NAME@$REALM'). \
    config('spark.jars', 'spark-sql-kafka-0-10_2.11-2.4.0.jar'). \
    config('spark.driver.extraJavaOptions', '-Djava.security.auth.login.config=jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf'). \
    config('spark.executor.extraJavaOptions', '-Djava.security.auth.login.config=jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf'). \
    config('spark.files', 'jaas.conf,${KEYTAB}'). \
    appName('KafkaSpark').getOrCreate()
```

Now we can connect to Kafka like this:

```python
kafka_raw = spark.readStream. \
    format('kafka'). \
    option('kafka.bootstrap.servers', '${KAFKA_BROKERS_WITH_PORTS}'). \
    option('kafka.security.protocol', 'SASL_PLAINTEXT'). \
    option('kafka.sasl.kerberos.service.name', 'kafka'). \
    option('kafka.sasl.mechanism', 'GSSAPI'). \
    option('startingOffsets', 'earliest'). \
    option('maxOffsetsPerTrigger', 10). \
    option('subscribe', '${TOPIC_NAME}'). \
    load()
```

To access the data we can use:

```python
query = kafka_raw. \
    writeStream. \
    format('console'). \
    start()
```

That's it. I hope you could find all the configurations you need to access Kafka using Kerberos any way you like.

Also published here.
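P.S. The terminal section covered both writing and reading, but the Spark examples above only read. As a closing sketch, here is one way to write the stream back to another Kerberized topic; the output topic name and checkpoint path are illustrative assumptions, and the Kafka sink expects a string or binary `value` column plus a checkpoint location:

```python
# A minimal sketch, assuming the 'kafka_raw' stream and Kerberos setup from
# above. 'output_topic' and the checkpoint path are hypothetical placeholders.
out = kafka_raw.selectExpr("CAST(value AS STRING) AS value")

query = out.writeStream. \
    format('kafka'). \
    option('kafka.bootstrap.servers', '${KAFKA_BROKERS_WITH_PORTS}'). \
    option('kafka.security.protocol', 'SASL_PLAINTEXT'). \
    option('kafka.sasl.kerberos.service.name', 'kafka'). \
    option('kafka.sasl.mechanism', 'GSSAPI'). \
    option('topic', 'output_topic'). \
    option('checkpointLocation', '/tmp/kafka_sink_checkpoint'). \
    start()
```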