The growing number of devices connected to the Internet has led to a new term: the Internet of Things (IoT). It grew out of machine-to-machine communication and refers to a set of devices that are able to interact with each other. The need for better system integration drove the development of message brokers, which are especially important for data analytics and business intelligence. In this article, we will look at 2 big data tools: Apache Kafka and RabbitMQ.
Can you imagine the current amount of data in the world? Nowadays, about 12 billion “smart” machines are connected to the Internet. With roughly 7 billion people on the planet, that is almost one and a half devices per person. By 2020, this number was expected to grow to 200 billion or even more. With technological development, “smart” homes and other automated systems, our everyday life becomes more and more digitized.
As a result of this digitization, software developers face the problem of reliable data exchange. Imagine you have your own application, for example, an online store. You work within your own technology stack, and one day you need to make the application interact with another one. In the past, you would wire up simple point-to-point endpoints for machine-to-machine communication. Nowadays we have dedicated message brokers. They make the process of data exchange simple and reliable. These tools use different protocols that determine the message format and define how a message should be transmitted, processed, and consumed.
Wikipedia asserts that a message broker “translates a message from the formal messaging protocol of the sender to the formal messaging protocol of the receiver”.
Programs like this are essential parts of computer networks: they ensure that information gets from point A to point B.
So, we can say that message brokers can do 4 important things:
There are self-deployed and cloud-based messaging tools. In this article, I will share my experience of working with the first type.
Pricing: free
Official website: https://kafka.apache.org/
Useful resources: documentation, books
Pros:
Cons:
What do Netflix, eBay, Uber, The New York Times, PayPal and Pinterest have in common? All these great enterprises have used or are using the world’s most popular message broker, Apache Kafka.
With numerous advantages for real-time processing and big data projects, this asynchronous messaging technology has conquered the world. How did it start?
In 2010, LinkedIn engineers faced the problem of integrating huge amounts of data from their infrastructure into a lambda architecture that also included Hadoop and real-time event processing systems.
Traditional message brokers didn’t satisfy LinkedIn’s needs: these solutions were too heavy and slow. So the engineering team developed a scalable and fault-tolerant messaging system without lots of bells and whistles. The new queue manager quickly grew into a full-fledged event streaming platform.
The technology has become popular largely due to its compatibility: we can use Apache Kafka with a wide range of systems, including:
With the help of Apache Kafka, you can successfully create data-driven applications and manage complicated back-end systems. The picture below shows 3 main capabilities of this queue manager.
As you can see, Apache Kafka is able to:
First of all, you should know about the abstraction of a distributed commit log. This confusing term is crucial for this message broker. Many web developers are used to thinking about “logs” in the context of a login feature, but Apache Kafka is based on the log data structure: a time-ordered, append-only sequence of data inserts. The other key concepts are:
Interaction between clients and servers is implemented over a simple, efficient, language-agnostic TCP protocol, so a client can be written in any language you want.
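To get a feel for this, here is a minimal producer and consumer sketch using the third-party kafka-python client; the broker address and the topic name are my own illustrative choices, not something from the article:

```python
# Minimal sketch with the third-party kafka-python library.
# The broker address and the "page-views" topic are illustrative assumptions.
from kafka import KafkaProducer, KafkaConsumer

# Append a record to the topic's log (producers always write to the end).
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("page-views", key=b"user-42", value=b'{"page": "/checkout"}')
producer.flush()

# Read the log from the beginning; each record keeps its time-ordered offset.
consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating if no new records arrive
)
for record in consumer:
    print(record.offset, record.key, record.value)
```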
There are 2 main patterns of messaging: queuing and publish-subscribe.
Both of them have pros and cons. The advantage of queuing is that processing can easily be scaled across multiple consumers; on the other hand, queues aren’t multi-subscriber. The publish-subscribe model makes it possible to broadcast data to multiple consumer groups, but scaling is more difficult in this case.
Apache Kafka combines these 2 ways of data processing and gets the benefits of both: consumers within a group divide a topic’s partitions between them like a queue, while each separate group receives the full stream like a subscriber. It should also be mentioned that this queue manager provides better ordering guarantees than a traditional message broker.
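A small sketch of this combination, again with kafka-python (the topic and group names are made up): consumers sharing a group_id split the work of a topic like a queue, while each separate group gets the whole stream like a subscriber.

```python
from kafka import KafkaConsumer

# Two consumers in the SAME group: the partitions of "orders" are split
# between them, so each message is handled by only one (queue semantics).
worker_a = KafkaConsumer("orders", group_id="billing",
                         bootstrap_servers="localhost:9092")
worker_b = KafkaConsumer("orders", group_id="billing",
                         bootstrap_servers="localhost:9092")

# A consumer in a DIFFERENT group gets its own copy of every message
# (publish-subscribe semantics).
auditor = KafkaConsumer("orders", group_id="audit",
                        bootstrap_servers="localhost:9092")
```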
Combining the functions of messaging, storage, and processing, Kafka isn’t a common message broker. It’s a powerful event streaming platform capable of handling trillions of messages a day. Kafka is useful both for storing and processing historical data and for real-time work. You can use it to build streaming applications, as well as streaming data pipelines.
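As a rough sketch of a streaming data pipeline: Kafka’s own stream-processing API, Kafka Streams, is a Java library, so here I only show a plain consume-transform-produce loop with kafka-python, and the topic names are made up.

```python
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer("raw-events", group_id="pipeline",
                         bootstrap_servers="localhost:9092")
producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Consume, transform, and re-publish: the backbone of a streaming pipeline.
for record in consumer:
    event = json.loads(record.value)
    event["processed"] = True  # trivial enrichment step
    producer.send("clean-events", json.dumps(event).encode("utf-8"))
```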
If you want to follow in the footsteps of Kafka’s users, you should be mindful of some nuances:
Being a strong open-source solution for real-time statistics and big data projects, this message broker still has some weaknesses. The main one is that it requires a lot of work from you: you will feel the lack of plugins and other ready-made pieces that could simply be reused in your code.
I recommend using this combined publish/subscribe and queueing tool when you need to process really big amounts of data (100,000 messages per second and more). In that case, Apache Kafka will satisfy your needs.
Pricing: free
Official website: https://www.rabbitmq.com
Useful resources: tools, best practices
Pros:
Cons:
The next very popular solution is written in Erlang. Since it is a simple, general-purpose functional programming language with many ready-to-use components, this software doesn’t require lots of manual work. RabbitMQ is known as a “traditional” message broker, suitable for a wide range of projects. It is successfully used both in new startups and in notable enterprises.
The software is built on the Open Telecom Platform (OTP) framework for clustering and failover. You can find client libraries for the queue manager written in all major programming languages.
One of the oldest open-source message brokers, it can be used with various protocols. Many web developers like this software because of its useful features, libraries, development tools, and instructions.
In 2007, Rabbit Technologies Ltd. developed the system as the original implementation of AMQP, an open wire protocol for messaging with complex routing features. AMQP brought cross-language flexibility to message brokering outside the Java ecosystem. In fact, RabbitMQ works perfectly with Java, Spring, .NET, PHP, Python, Ruby, JavaScript, Go, Elixir, Objective-C, Swift and many other technologies. The numerous plugins and libraries are the main advantage of the software.
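For comparison with the Kafka snippets above, here is what basic publishing and consuming look like with the pika Python client; the localhost connection and the queue name are illustrative assumptions:

```python
import pika

# Open an AMQP connection and channel to a local RabbitMQ node.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare a queue and publish through the default (nameless) direct exchange.
channel.queue_declare(queue="greetings")
channel.basic_publish(exchange="", routing_key="greetings", body=b"Hello, RabbitMQ!")

# Consume: the broker pushes messages to the registered callback.
def handle(ch, method, properties, body):
    print("received:", body)

channel.basic_consume(queue="greetings", on_message_callback=handle, auto_ack=True)
channel.start_consuming()  # blocks; press Ctrl+C to stop
```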
Created as a general-purpose message broker, RabbitMQ is based on the pub-sub communication pattern, though the messaging process can be either synchronous or asynchronous, as you prefer. So, the main features of the message broker are:
Being a broker-centric program, RabbitMQ provides delivery guarantees between producers and consumers. If you choose this software, transient messages fit it better than durable ones.
The broker itself tracks the state of each message and verifies whether delivery was completed successfully, and it presumes that consumers are usually online.
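A sketch of how this acknowledgement flow can look with pika (the queue name and processing logic are placeholders): the consumer confirms each delivery explicitly, and a message that was never acknowledged is re-delivered by the broker.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="tasks")

def process(body):
    print("working on", body)  # placeholder for real processing logic

def handle(ch, method, properties, body):
    process(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # tell the broker delivery succeeded

# auto_ack=False: if the consumer dies before acking, the broker re-delivers.
channel.basic_consume(queue="tasks", on_message_callback=handle, auto_ack=False)
channel.start_consuming()
```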
As for message ordering, consumers receive messages in the order they were published, and that publishing order is maintained consistently.
The main advantage of this message broker is its rich set of plugins, combined with good scalability. Many web developers enjoy the clear documentation and well-defined rules, as well as the possibility of working with various message exchange models. In fact, RabbitMQ supports 3 of them:
Here you can see a gap between Kafka and RabbitMQ: if no consumer is bound to a fanout exchange in RabbitMQ when a message is published, the message is lost. Kafka avoids this, because any consumer can read any message retained in the log.
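A minimal pika sketch of the fanout model described above (the exchange and queue names are illustrative): the publisher sends to a fanout exchange, each consumer binds its own temporary queue, and a message published while no queue is bound is simply dropped by the broker.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Publisher side: a fanout exchange copies every message to all bound queues.
channel.exchange_declare(exchange="logs", exchange_type="fanout")
channel.basic_publish(exchange="logs", routing_key="", body=b"broadcast message")
# If no consumer has bound a queue to "logs" yet, this message is dropped.

# Consumer side: bind a fresh, exclusive queue to start receiving broadcasts.
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="logs", queue=result.method.queue)
channel.basic_consume(queue=result.method.queue,
                      on_message_callback=lambda ch, m, p, body: print(body),
                      auto_ack=True)
channel.start_consuming()
```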
As for me, I like RabbitMQ for its many plugins: they save time and speed up work. You can easily adjust filters, priorities, message ordering, etc. Just like Kafka, RabbitMQ requires you to deploy and manage the software yourself, but it has a convenient built-in UI and allows using SSL for better security. As for the ability to cope with big data loads, RabbitMQ is inferior to Kafka here.
To sum up, both Apache Kafka and RabbitMQ are truly worth the attention of skillful software developers. I hope my article will help you find suitable big data technologies for your project. If you still have any questions, you are welcome to contact Freshcode specialists. In the next review, we will compare other powerful messaging tools: ActiveMQ and Redis Pub/Sub.
The original article Introduction to message brokers. Part 1: Apache Kafka vs RabbitMQ was published at freshcodeit.com.