
Comparing Apache Kafka with Oracle Transactional Event Queues (TEQ) as Microservices Event Mesh

by Paul ParkinsonSeptember 13th, 2021

Too Long; Didn't Read

As opposed to Kafka, Oracle Transactional Event Queues (TEQ) provide exactly-once delivery and atomicity between database and messaging activities without requiring the developer to write any duplicate-consumer code or use distributed transactions.


*(The matrix above is referred to in the blog below.)


This blog focuses on transactional and message delivery behavior, particularly as it relates to microservice architectures. There are, of course, numerous other areas in which MongoDB, PostgreSQL, and Kafka can be compared with the converged Oracle database and Oracle Transactional Event Queues/AQ, but those are beyond the scope of this blog.


Event Queues


The Oracle database itself was first released in 1979 (PostgreSQL was released in 1997 and MongoDB in 2009).  Oracle Advanced Queuing (AQ) is a messaging system that is part of every Oracle database edition and was first released in 2002 (Kafka was open-sourced by LinkedIn in 2011 and Confluent was founded in 2014).  AQ sharded queues, which introduced partitioning in release 12c, are now called Transactional Event Queues (TEQ). TEQ supports partitioning, a Kafka Java client, and a number of other Kafka interoperability features, as well as upcoming capabilities such as automated Saga support that will be covered in future blogs.
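As a rough illustration of that Kafka interoperability, the sketch below produces to a TEQ topic using Kafka-style APIs via Oracle's okafka library. This is only a sketch: the package, serializer, and property names follow Oracle's okafka samples and can differ by version, and the connection details and topic name are placeholders.

```java
// Sketch: producing to a TEQ topic with Kafka-style APIs (okafka).
// Package/property names may vary by okafka version; details are placeholders.
import java.util.Properties;
import org.oracle.okafka.clients.producer.KafkaProducer;
import org.oracle.okafka.clients.producer.ProducerRecord;

public class TeqKafkaStyleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "dbhost:1521");        // database listener, not a Kafka broker
        props.put("oracle.service.name", "orclpdb1");         // pluggable database service name
        props.put("oracle.net.tns_admin", "/path/to/wallet"); // directory with tnsnames.ora/wallet
        props.put("security.protocol", "PLAINTEXT");
        props.put("key.serializer", "org.oracle.okafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.oracle.okafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // "ORDER_EVENTS" is a hypothetical TEQ topic created beforehand in the database
        producer.send(new ProducerRecord<>("ORDER_EVENTS", "order-66", "orderPlaced"));
        producer.close();
    }
}
```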

The Oracle database and AQ/TEQ have years of production hardening behind them and at the same time have continuously evolved to support not only relational data and SQL (as PostgreSQL does) but also JSON document-based access (as MongoDB does) and other data types, such as XML, spatial, graph, time-series, and blockchain.


Both the database and TEQ support clients written in a wide variety of languages (in addition to native SQL and PL/SQL, of course).  Examples in various languages can be seen in the blog series Developing Microservices in Java, JavaScript, Python, .NET, and Go with the Oracle Converged Database, and the soon-to-be-released next part of this blog series will give specific source examples, in all of those languages, of the concepts covered here. The Oracle database also provides low-code and no-code access via REST endpoints.


Oracle thus provides an ideal converged database solution for microservices, one that supports both polyglot data models and polyglot programming languages, as well as native communication via REST endpoints/APIs and, as we will discuss in more detail now, robust transactional messaging.  This brings a number of advantages in simplified administration, security, availability, cost, etc., while still providing the bounded-context model prescribed by microservices, where data can be isolated at various levels (schema, Pluggable DB, Container DB, etc.). It is the best of both worlds.


While REST, gRPC, and other API-style communication is certainly widespread and has its purpose, more and more microservice systems use an event-driven architecture for communication, for a number of reasons including decoupling/adaptability, scalability, and fault tolerance (removing the need for retries when consumers are offline, etc.).  Relatedly, more data-driven systems are employing an event sourcing pattern of one form or another: changes to data (e.g., a SQL command to insert an order) are sent as events that describe the data change (e.g., an "orderPlaced" event) and are received by interested services. The data is thus sourced from the events, and event sourcing in general moves the source of truth for data to the event broker.  This fits nicely with the decoupling paradigm of microservices, as the minimal sketch below illustrates.
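The following framework-free sketch shows the shape of the pattern: the state change is expressed as an event object that interested services consume, rather than as a direct call into those services. All of the names here (OrderPlacedEvent, EventBroker, etc.) are hypothetical stand-ins, not a real API.

```java
// Minimal event sourcing sketch: the data change is described as an event
// and published to a broker; consumers derive their own state from events.
// All names are hypothetical stand-ins (the broker could be TEQ, Kafka, etc.).
record OrderPlacedEvent(String orderId, String itemId, int quantity) {}

interface EventBroker {
    void publish(OrderPlacedEvent event); // stand-in for a TEQ/Kafka producer
}

class OrderService {
    private final EventBroker broker;
    OrderService(EventBroker broker) { this.broker = broker; }

    void placeOrder(String orderId, String itemId, int quantity) {
        // Instead of only executing "INSERT INTO orders ...", the change is
        // described as an event and published for interested services
        // (e.g., an inventory service) to consume.
        broker.publish(new OrderPlacedEvent(orderId, itemId, quantity));
    }
}

public class EventSourcingSketch {
    public static void main(String[] args) {
        EventBroker logBroker = event -> System.out.println("published: " + event);
        new OrderService(logBroker).placeOrder("order-66", "sushi", 1);
    }
}
```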


(Note that while the terms "message" and "event" can technically have somewhat different semantics, we are using them interchangeably in this context.)


Transactional Event Queues


It is very important to notice that there are actually two operations involved in event sourcing: the data change being made and the communication/event of that data change. There is, therefore, a transactional consideration, and any inconsistency or failure causing a lack of atomicity between these two operations must be accounted for.  This is an area where TEQ has an extremely significant and unique advantage: the messaging/eventing system is part of the database system itself and can therefore conduct both operations in the same local transaction, providing this atomicity guarantee (as sketched below). This is not something that can be done between Kafka and any database (such as PostgreSQL or MongoDB); those combinations must instead use outbox patterns that directly read database logs and stream them as events via a connector, or use change data capture, polling, or other techniques.  A further advantage of TEQ is that it allows arbitrary context to be included with messages, rather than the static context options of messages sent via Kafka.
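Here is a minimal sketch of that single-local-transaction pattern using the AQ/TEQ JMS client, where the JMS session and the JDBC connection are the same database session. The connection details, schema, topic, and table names are placeholders, and error handling is omitted for brevity.

```java
// Sketch: an order insert and an "orderPlaced" publish committed together
// in ONE local database transaction via AQ/TEQ JMS.
// Connection details and schema/topic/table names are placeholders.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Properties;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicPublisher;
import javax.jms.TopicSession;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;

public class AtomicOrderPublisher {
    public static void main(String[] args) throws Exception {
        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(
                "jdbc:oracle:thin:@//dbhost:1521/orclpdb1", new Properties());
        TopicConnection conn = tcf.createTopicConnection("orderuser", "password");
        TopicSession session = conn.createTopicSession(true, Session.AUTO_ACKNOWLEDGE); // transacted

        // The JMS session exposes its own JDBC connection: both operations
        // below run in the SAME local transaction.
        Connection db = ((AQjmsSession) session).getDBConnection();
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO orders (id, status) VALUES (?, 'pending')")) {
            ps.setString(1, "order-66");
            ps.executeUpdate();
        }

        Topic topic = ((AQjmsSession) session).getTopic("ORDERUSER", "ORDER_EVENTS");
        TopicPublisher publisher = session.createPublisher(topic);
        // Toy payload; a real event would carry order id, item, quantity (e.g., JSON)
        publisher.publish(session.createTextMessage("orderPlaced:sushi"));

        session.commit(); // insert + publish succeed or fail together: no outbox, no 2PC
        conn.close();
    }
}
```

If the process dies before session.commit(), neither the row nor the event exists; there is no window in which one is visible without the other.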


This is in addition to the fact that TEQ provides exactly-once message delivery without requiring developers to make their microservices idempotent or explicitly implement a duplicate-consumer pattern. This is a significant advantage and, again, a perfect match for microservice architectures, where service implementations should be limited to domain-specific logic (thus opening such development to domain experts) and cross-cutting concerns (such as duplicate message handling) should be moved out of the core service; that is one of the main advantages of service meshes and, indeed, of the TEQ event mesh.  It is also particularly useful when migrating monoliths to microservices, since monolithic apps may well not have been programmed to be idempotent (i.e., they were designed on the presumption that calls were made/sent only once). The consumer-side sketch below shows why no duplicate-handling code is needed.
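A consumer-side sketch, again with placeholder names and no error handling: because the dequeue and the inventory update share one local transaction, a crash before the commit rolls both back and the message is simply redelivered, so the update can never be applied twice.

```java
// Sketch: the dequeue and the SQL update commit (or roll back) together,
// so the consumer needs no idempotency/duplicate-detection logic.
// Names and connection details are placeholders.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Properties;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicSession;
import javax.jms.TopicSubscriber;
import oracle.jms.AQjmsFactory;
import oracle.jms.AQjmsSession;

public class InventoryConsumer {
    public static void main(String[] args) throws Exception {
        TopicConnectionFactory tcf = AQjmsFactory.getTopicConnectionFactory(
                "jdbc:oracle:thin:@//dbhost:1521/orclpdb1", new Properties());
        TopicConnection conn = tcf.createTopicConnection("inventoryuser", "password");
        conn.start();
        TopicSession session = conn.createTopicSession(true, Session.AUTO_ACKNOWLEDGE); // transacted

        Topic topic = ((AQjmsSession) session).getTopic("INVENTORYUSER", "ORDER_EVENTS");
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "inventory_service");

        TextMessage msg = (TextMessage) subscriber.receive(); // dequeue, not yet committed
        String itemId = msg.getText().split(":")[1];          // payload like "orderPlaced:sushi"

        Connection db = ((AQjmsSession) session).getDBConnection(); // same database session
        try (PreparedStatement ps = db.prepareStatement(
                "UPDATE inventory SET count = count - 1 WHERE item_id = ?")) {
            ps.setString(1, itemId);
            ps.executeUpdate();
        }

        session.commit(); // dequeue + update are atomic: effectively exactly-once processing
        conn.close();
    }
}
```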


Looking closer at the possible effects of not having exactly-once message delivery and/or the atomicity of database and messaging operations just described, we can take an order/inventory flow as an example, where the following occurs:


  1. The Order service places an order (inserted into the database as a JSON document with the status "pending") and sends an "orderPlaced" message to the Inventory service (two operations in total).

  2. The Inventory service receives this message, checks and modifies the inventory level (via a SQL query), and sends an "inventoryStatusUpdated" message back to the Order service (three operations in total).

  3. The Order service receives this message and updates the status of the order (two operations in total).

A matrix of failures (occurring at three selected points in the flow) and recovery scenarios, comparing MongoDB, PostgreSQL, and Kafka against the Oracle converged database with Transactional Event Queues/AQ, is shown in the table at the beginning of this blog; a sketch of where those failure windows come from in the Kafka case follows below.
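For contrast with the TEQ sketches above, here is the non-atomic shape of step 1 when a separate database and Kafka are used (placeholder names, PostgreSQL chosen arbitrarily): the insert and the send are two independent commits, and a crash between them leaves an order row with no corresponding event, which is exactly the kind of failure point the matrix enumerates.

```java
// Contrast sketch: with a separate database and Kafka, the insert and the
// send commit independently. A crash between the two marked points leaves
// a "pending" order that no service will ever hear about.
// Names and connection details are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NonAtomicOrderPublisher {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://pghost:5432/orders", "orderuser", "password");
             KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {

            try (PreparedStatement ps = db.prepareStatement(
                    "INSERT INTO orders (id, status) VALUES (?, 'pending')")) {
                ps.setString(1, "order-66");
                ps.executeUpdate();
            }
            // <-- commit #1 (autocommit): the row is durable. A crash HERE loses the event.

            producer.send(new ProducerRecord<>("order.events", "order-66", "orderPlaced")).get();
            // <-- commit #2: the event is durable. The two commits cannot be made atomic
            //     without an outbox/CDC pattern or application-level reconciliation.
        }
    }
}
```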


Every concept and technology discussed in this blog is shown in the Building Data-Driven Microservices with Oracle Converged Database workshop, which can easily be set up and run in ~30 minutes!  The source for the workshop can be found directly at https://github.com/oracle/microservices-datadriven (specific to this blog are setup, order-mongodb-kafka, inventory-postgres-kafka-inventory, order-oracleteq, and inventory-oracleteq).


Finally, the Developer Resource Center for Microservices landing page is a great place to find various microservice architecture materials.


Please feel free to provide any feedback here, on the workshop, on the GitHub repos, or directly. We are happy to hear from you.


This article was first published here.