Efficient Microservices: Harnessing the Power of Data Compression in Direct Service-to-Service.

by Oleh Sypiahin, January 2nd, 2024

Too Long; Didn't Read

This comprehensive article explores the role of data compression in enhancing microservices architecture, especially in proxyless environments. Key points include:

  1. Proxyless Microservices: Shifting from traditional proxy servers to direct service-to-service communication to address resource constraints, high-performance needs, security, and architecture simplicity.
  2. Data Compression: Essential in microservices for efficient data transfer. Both lossless and lossy compression algorithms are considered, with a focus on lossless algorithms like GZIP for microservices communication.
  3. Implementation: A step-by-step guide to setting up two Spring Boot Kotlin microservices, service-db and service-mapper, with detailed instructions on project setup, configuration, and coding.
  4. Challenges in Data Transfer: Large data volumes, network latency, data loss, cost, and security are significant concerns in microservices networks.
  5. Benefits and Drawbacks of Data Compression: While data compression saves network traffic and improves system performance, it can also lead to data loss and compatibility issues.
  6. Practical Application: A demonstration of practical implementation using the GZIP and Snappy algorithms, emphasizing real-world testing scenarios and performance evaluation with tools like Apache JMeter.
  7. Test Cases and Results Analysis: Various test scenarios are conducted to evaluate data compression performance under different conditions, including latencies and user loads. The tests highlight the effectiveness of GZIP and Snappy in different scenarios.
  8. Strategic Considerations: The article emphasizes the importance of a strategic approach to selecting and implementing data compression, including monitoring, traffic analysis, and performance testing, to optimize microservices efficiency.

The author, Oleh Sypiahin, thoroughly explores the intricacies of microservices and data compression with practical examples and detailed technical guidance. The original code can be accessed through GitHub.



This article delves into the intricacies of Microservices Architecture, with a special emphasis on its vital component - data transmission, in the context of a proxyless approach. We explore the nuances, benefits, and challenges of adopting this method, where direct service-to-service communication plays a pivotal role. The discussion extends to how DevOps practices and the right strategic choices can significantly bolster this architecture. Furthermore, we'll navigate through the realm of Cloud-Native DevOps, scrutinizing specific practices and tools that are essential in cloud-native environments, across platforms like AWS, Azure, or Google Cloud. This exploration aims to shed light on how a proxyless architecture can be effectively implemented and optimized in these diverse cloud platforms.


In today's cloud-based microservice architectures, efficient data exchange is crucial, particularly as we often operate in environments where traditional proxy servers like Nginx are not the optimal solution. This necessity arises from various factors, leading to a shift towards a proxyless approach in specific scenarios:


  • Resource Constraints: In environments with limited computing resources, the overhead of proxy servers can be prohibitive.
  • High-Performance Requirements: Direct service-to-service communication can reduce latency, crucial in high-performance applications.
  • Security and Compliance: Certain regulatory or security requirements may dictate direct connections between services.
  • Simplified Architecture: For some applications, a streamlined setup without additional proxy layers is more manageable and efficient.
  • Specific Data Handling Needs: When unique data processing or compression methods are required that proxies cannot adequately provide.


Understanding the use cases for a proxyless approach, particularly in the context of data compression, becomes essential for optimizing microservices' efficiency and performance.


"To compress, or not to compress, that is the question: Whether 'tis nobler in the bytes to suffer. The pings and lags of outrageous file sizes, Or to take arms against a sea of data transfer, And by opposing, compress them."


Challenges of Data Transfer in a Network.

Data transfer has become an integral part of everyday life, akin to the air we breathe - its value is only truly appreciated when lacking. As IT professionals, we are responsible for this 'digital air,' ensuring its purity and uninterrupted flow. We must focus on maintaining the quality and speed of data transmission, much like environmentalists fight for clean air in cities. Our task extends beyond merely sustaining a steady stream of data; we must also ensure its efficiency and security within the complex network ecosystems of microservices. As a bridge connecting the dots in a Microservices Architecture, the network is also a challenging terrain where data faces numerous obstacles that can impact performance, efficiency, and security.


  1. Large volumes of data: Exchanging large volumes between microservices can result in performance and data transfer speed issues.
  2. Network latency: Data transmission delays can affect system performance depending on the distance between servers and the network's quality.
  3. Data loss: There is always a risk of information loss due to network failures or errors during data transmission.
  4. Data costs: Top-rated services like AWS, Microsoft Azure, GCP, IBM Cloud, etc., offer a pay-as-you-go pricing model where high data usage directly impacts price.
  5. Data security: Data compression does not improve data security by itself. However, it indirectly affects several security-related aspects: reducing data transfer time also shrinks the window for possible attacks, and the extra processing required for decompression complicates the task for a potential attacker attempting to analyze data traffic.


Note: Understanding these challenges is pivotal in appreciating the solutions and strategies that can be employed to overcome them, including the crucial role that data compression plays in this equation.

Understanding Data Compression.

Data Compression between Cloud Microservices


What is data compression?

Data compression is a technique to reduce data size to save storage space or transmission time. For microservices, this means that when one service sends data to another, instead of sending the raw data, it first compresses the data, sends the compressed data across the network, and then the receiving service decompresses it back to the original format. This will be the working scenario that we will focus on.
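
Before we wire this into real services, here is a minimal, self-contained Kotlin sketch of that round trip using the JDK's built-in GZIP streams; the sample payload and sizes are illustrative only:

import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPInputStream
import java.util.zip.GZIPOutputStream

fun main() {
    // "Sending" side: serialize the payload and compress it before it leaves the service.
    val payload = """[{"id":1,"name":"Alice"},{"id":2,"name":"Bob"}]""".repeat(100)
    val compressed = ByteArrayOutputStream().use { out ->
        GZIPOutputStream(out).use { it.write(payload.toByteArray(Charsets.UTF_8)) }
        out.toByteArray()
    }

    // "Receiving" side: decompress back to the original text.
    val restored = GZIPInputStream(ByteArrayInputStream(compressed)).use { it.readBytes() }
        .toString(Charsets.UTF_8)

    println("original=${payload.toByteArray(Charsets.UTF_8).size} B, compressed=${compressed.size} B, identical=${payload == restored}")
}

Running it shows the kind of size reduction repetitive JSON allows, and that the restored text is byte-for-byte identical, which is exactly the lossless property we rely on below.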

Discussing the significance and limitations of data compression in microservices architectures

Data compression reduces data size, leading to faster transfers and better performance. However, it can cause data loss, format inconsistencies, and performance problems.

Advantages of using data compression:

  • Transmit less data to save network traffic.
  • Shorter data transmission times speed up the overall system operation.
  • Large volumes of data can be transferred at once, facilitating efficient information exchange; the more homogeneous the data, the better the compression ratio.
  • Improved system performance during peak times of simultaneous requests.
  • Better usage of network resources, resulting in improved efficiency.

Disadvantages of using data compression:

  • The need to process compressed data on both the client and server sides may lead to a slight decrease in performance.
  • Data loss during compression and decompression may occur if the process is incorrectly set up.
  • Potential compatibility issues if different compression algorithms are used.
  • Additional configuration and setup are required to work with compressed data.
  • Additional expertise is required for debugging compressed data.


Note: This article will address some of these problems to ensure reliable and efficient data exchange between microservices.


Types of data compression algorithms.

Lossless vs Lossy


When considering data exchange between microservices, choosing the appropriate data compression algorithm is crucial to ensure efficiency and performance. Let's dive into the most common types of data compression algorithms.

Lossless Compression Algorithms:

  • These algorithms compress data in a way that allows the original data to be perfectly reconstructed from the compressed data.
  • Examples include Huffman coding, Lempel-Ziv (LZ77 and LZ78), and Deflate, which is also used in GZIP.
  • Another notable example is Snappy, which prioritizes speed and efficiency, making it a suitable choice for scenarios where quick data compression and decompression are more critical than achieving the highest compression level.

Lossy Compression Algorithms:

  • These algorithms compress data by removing unnecessary or less important information, meaning the original data cannot be perfectly reconstructed from the compressed data.
  • Examples commonly used in audio and video compression include JPEG for images and MP3 and AAC for audio.


Benefits of using Lossless Compression Algorithms in Microservices Communication.

The final choice depends on the specific needs of the project and the characteristics of the data being exchanged between microservices.


GZIP is the optimal choice as a data compression and decompression software application for our purposes. It uses the "Deflate" algorithm, combining the LZ77 algorithm and Huffman coding. Since the "Deflate" algorithm is a "Lossless data compression algorithm," the original data can be perfectly reconstructed from the compressed data. Everyone in the business wants to receive accurate data and avoid any mistakes.


Verdict: Gzip is versatile and practical, so it is widely used for compressing text files and is recognized as both a lossless compression algorithm and a text compression tool.


Into Action: Deploying the Microservices Architecture.

You can skip the microservice creation section and jump straight to the test cases or the summary.

Now that we've brushed up on the basic concepts, it's time to start creating.


We will zoom in on two ordinary links in the intricate tapestry of microservice architectures, where hundreds of interconnected services weave a complex network. These two microservices, isolated from the vast ecosystem, will serve as our focal points, allowing us to dissect and deeply understand a specific case study.

Diagram of a microservice architecture, which includes the database-less microservices.


As mentioned, we create two independent Spring Boot Kotlin microservices:

  1. service-db is a microservice that has access to a database*.
  2. service-mapper is a data-baseless microservice whose primary function is to map, enrich, and forward data to front-end applications.


Hint (*): H2 was chosen as the in-memory Java SQL database for our case study example. The unique feature of H2 is that it only exists during the application's runtime and does not require separate installation. It is a lightweight, fast, and convenient option for development and testing purposes. Note that all database data will be erased upon application restart as it is temporary.

Fast Project Structure Generation

For quick project generation, use the start.spring.io utility. Let's start.

  • Project: Gradle - Kotlin (or Maven if you prefer)
  • Language: Kotlin
  • Java: 21
  • Spring Boot: The latest version
  • Project Metadata:
  1. Group: com.compression
  2. Artifact: service-db / service-mapper
  3. Name: service-db / service-mapper
  • Dependencies:
  1. Spring Web (both)
  2. Spring Data JPA (for service-db)
  3. H2 Database (for service-db)
  4. Spring Boot DevTools (optional for automatic reloading).

Hint: It’s important to keep a close eye on your dependencies and only install what is necessary for your needs.


I have already made two configurations for you: service-db / service-mapper

Download the projects and unzip them.


In-project Configuration.

Once you have unzipped the projects and your IDE has finished indexing them, add some settings to your application.properties files.


First, we need to configure the data source settings in service-db.


# DataSource Configuration
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=

# JPA/Hibernate Configuration
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect

# H2 Database Console Configuration
spring.h2.console.enabled=true

# Server Configuration
server.port=8081
server.servlet.context-path=/db


application.properties in service-mapper:

# Server Configuration
server.port=8080
server.servlet.context-path=/mapper

# Microservice Configuration
db.microservice.url=http://localhost:8081/db/
user.end.point=user

Hint: These variables are used to connect to the service-db microservice:

  • db.microservice.url is the URL of the microservice
  • user.end.point is the endpoint for user operations.

Development Process and Solution Design: Building the Data Flow.

Our project will follow a specific standard structure.

src
└── main
    └── kotlin
        └── com
            └── example
                └── myproject
                    ├── config
                    │   └── AppConfig.kt
                    ├── controller
                    │   └── MyController.kt
                    ├── repository
                    │   └── MyRepository.kt
                    ├── service
                    │   └── MyService.kt
                    ├── dto
                    │   └── MyUserDto.kt
                    └── entity
                        └── MyEntity.kt


In our course, we will be implementing the following points:

service-db implementation:

  • User Entity.
  • Service for Data Compression.
  • POST/GET Controllers to save/get the Entity to/from the database.

service-mapper implementation:

  • User DTO.
  • Decompression Service - Logic for Data Decompression.
  • GET Controller - Get Raw / Compressed Data from service-db.

Let's Code

Like sketching a blueprint before building a rocket, we start coding by shaping up the DTO and Entity first.


data class UserEntity in service-db and UserDto in service-mapper:

// service-db
import jakarta.persistence.Entity
import jakarta.persistence.GeneratedValue
import jakarta.persistence.GenerationType
import jakarta.persistence.Id
import java.time.LocalDateTime

@Entity
data class UserEntity(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Long = 0, // default value lets the database assign the id (IDENTITY strategy)
    val name: String,
    val creativeDateTime: LocalDateTime? = LocalDateTime.now(),
)

// service-mapper
import java.time.LocalDateTime

data class UserDto(
    val id: Long,
    val name: String,
    val creativeDateTime: LocalDateTime?,
)

NOTE: For service-db in our project, we will use the following annotations in the Entity to handle the unique identification of each entity. With these annotations, the database automatically manages the assignment of unique IDs when new entities are created, simplifying the entity management process.

  • @Id (from jakarta.persistence) - specifies the entity's primary key.
  • @GeneratedValue(strategy = GenerationType.IDENTITY) - indicates that the database automatically generates a unique identifier for each entity as the primary key.

Service-db: Repository and Controller.

Let's shift our attention toward service-db and create a Classic Duet comprising a Repository and a Controller.


package com.compress.servicedb.repository

import org.springframework.data.repository.CrudRepository
import org.springframework.stereotype.Repository

@Repository
interface UserRepository : CrudRepository<UserEntity, Long>


@RestController
@RequestMapping("/user")
class EntityController(
    private val repository: UserRepository
) {

    @GetMapping
    fun getAll(): MutableIterable<UserEntity> = repository.findAll()

    @PostMapping
    fun create(@RequestBody user: UserEntity) = repository.save(user)

    @PostMapping("/bulk")
    fun createBulk(@RequestBody users: List<UserEntity>): MutableIterable<UserEntity> = repository.saveAll(users)

}


Please verify with Postman that everything works, and then proceed with the following steps.


POST: localhost:8081/db/user/bulk
POST: localhost:8081/db/user/
GET:  localhost:8081/db/user

NOTE: We need to ensure our services are configured and communicating effectively. Our current method of retrieving data from service-db is rudimentary, but it serves as our first test. Once we have enough data, we can measure retrieval volume and speed to establish a baseline for future comparisons.
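
If you prefer a scriptable alternative to Postman for seeding that baseline data, here is a minimal Kotlin sketch that posts a generated batch to the bulk endpoint. It assumes the services run locally on the ports configured above, and it omits id from the request body because the entity's default value lets the database assign it:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Adjust to 1 / 100 / 1_000 / 10_000 to prepare the datasets used in the test cases later.
    val count = 100
    // Build a JSON array of users; ids are omitted because the database assigns them (IDENTITY strategy).
    val body = (1..count).joinToString(prefix = "[", postfix = "]") { i -> """{"name":"User $i"}""" }

    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8081/db/user/bulk"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
    println("HTTP ${response.statusCode()} - tried to save $count users")
}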

Service-mapper: Service and Controller.

Let's fly to service-mapper and create a Controller and a Service.

Service is:

package com.compress.servicemapper.service

import org.springframework.beans.factory.annotation.Value
import org.springframework.core.ParameterizedTypeReference
import org.springframework.http.HttpMethod
import org.springframework.stereotype.Service
import org.springframework.web.client.RestTemplate

@Service
class UserService(
    val restTemplate: RestTemplate,
    @Value("\${db.microservice.url}") val dbMicroServiceUrl: String,
    @Value("\${user.end.point}") val endPoint: String
) {
    fun fetchData(): List<UserDto> {
        val responseType = object : ParameterizedTypeReference<List<UserDto>>() {}
        // db.microservice.url already ends with a slash, so the endpoint is appended directly.
        return restTemplate.exchange("$dbMicroServiceUrl$endPoint", HttpMethod.GET, null, responseType).body ?: emptyList()
    }
}

Controller is:

package com.compress.servicemapper.controller

@RestController
@RequestMapping("/user")
class DataMapperController(
    val userTransformService: UserService
) {
    @GetMapping
    fun getAll(): ResponseEntity<List<UserDto>> {
        val originalData = userTransformService.fetchData()
        return ResponseEntity.ok(originalData)
    }
}


RestTemplate Configuration:

In the service mapper, we configure a RestTemplate to facilitate communication between different parts of our application. This is done by creating a configuration class annotated with @Configuration containing the bean definition for our RestTemplate.


Here's the code:

@Configuration
class RestTemplateConfig {
    @Bean
    fun restTemplate(): RestTemplate {
        return RestTemplateBuilder().build()
    }
}


@Configuration is a Spring annotation that indicates that the class has @Bean definitions. Beans are objects the Spring IoC (Inversion of Control) container manages. In this case, our RestTemplate bean is defined in the RestTemplateConfig class.


The @Bean annotation in Spring identifies a method that produces a bean for the Spring container. In our example, the restTemplate method creates a new RestTemplate instance used for HTTP operations in Spring. This configuration lets you define RestTemplate once and inject it into necessary components, making your code cleaner and easier to maintain.


Without this configuration, you can still use RestTemplate, but you will need to create an instance of it at each place where you use it. This can lead to code duplication.


Postman is ready to go. Make sure you have created some users in service-db. As a result, you will get back the same users that service-db holds.


localhost:8080/mapper/user



Compression Implementation.

It's time to implement Compression in service-db. The heart of our Compression will be in CompressDataService. Take a look at the code:

import com.fasterxml.jackson.databind.ObjectMapper
import org.springframework.stereotype.Service
import java.io.ByteArrayOutputStream
import java.nio.charset.StandardCharsets
import java.util.zip.GZIPOutputStream

@Service
class CompressDataService(
    private val objectMapper: ObjectMapper
) {
    fun compress(data: Iterable<UserEntity>): ByteArray {
        val jsonString = objectMapper.writeValueAsString(data)
        val byteArrayOutputStream = ByteArrayOutputStream()
        GZIPOutputStream(byteArrayOutputStream).use { gzipOutputStream ->
            val bytes = jsonString.toByteArray(StandardCharsets.UTF_8)
            gzipOutputStream.write(bytes)
        }
        return byteArrayOutputStream.toByteArray()
    }
}


Step-by-step explanation:

The compress function takes a collection of UserEntity objects as input and returns a Byte Array. The primary goal of this function is to compress data to save space during transmission or storage. Here's what happens inside the function:


  1. Input data are converted into a JSON string format using ObjectMapper. This is done to represent the data as a string that can then be compressed.
  2. A ByteArrayOutputStream object is created, which will be used to store compressed data.
  3. A GZIPOutputStream object is created, which wraps around ByteArrayOutputStream and provides functionality for compressing data.
  4. The JSON string is converted into a byte array using UTF-8 encoding.
  5. The byte array is written to GZIPOutputStream. During this process, the data is compressed and stored in ByteArrayOutputStream.
  6. The resulting compressed byte array is returned from the function.


Thus, the compress function transforms a collection of objects into a compressed Byte Array, saving space during data transmission or storage.


The Controller is so banal that it does not need comments.


@RestController
@RequestMapping("/user")
class CompressedDataController(
    private val repository: UserRepository,
    private val compressDataService: CompressDataService,
) {
    @GetMapping("/compressed")
    fun fetchCompressedData(): ByteArray {
        val data: Iterable<UserEntity> = repository.findAll()
        return compressDataService.compress(data)
    }
}


Test it yourself, please. You'll receive something special. If you're a young, budding engineer, this may be the first time you've seen a response like this. That's a ByteArray, babe.


GET: localhost:8081/db/user/compressed

Sound check: the data transfer volume is significantly lower at this stage. As the volume of similar-type data grows, the compression efficiency scales up, leading to increasingly better compression ratios.
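
To see that scaling effect without spinning up the services, here is a standalone Kotlin sketch that gzips fake, repetitive user records (not the real service-db response) and prints the savings for each dataset size:

import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

// Gzip a JSON string, mirroring what CompressDataService does.
fun gzip(json: String): ByteArray = ByteArrayOutputStream().use { out ->
    GZIPOutputStream(out).use { it.write(json.toByteArray(Charsets.UTF_8)) }
    out.toByteArray()
}

fun main() {
    for (count in listOf(1, 100, 1_000, 10_000)) {
        // Fake, homogeneous user records stand in for the real service-db response.
        val json = (1..count).joinToString(prefix = "[", postfix = "]") {
            """{"id":$it,"name":"User $it","creativeDateTime":"2024-01-02T10:00:00"}"""
        }
        val raw = json.toByteArray(Charsets.UTF_8)
        val compressed = gzip(json)
        val saved = "%.1f".format((1 - compressed.size.toDouble() / raw.size) * 100)
        println("$count users: raw=${raw.size} B, gzip=${compressed.size} B, saved=$saved%")
    }
}

For a single record the gzip output comes out larger than the raw JSON, which matches the negative compression rate we will see in the test results later.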

Decompression Implementation.

It's time to decompress our data in service-mapper. To achieve this, we will create a new controller and the heart of this microservice: the decompression service. Let's open our hearts.

import com.fasterxml.jackson.core.type.TypeReference
import com.fasterxml.jackson.databind.ObjectMapper
import org.springframework.beans.factory.annotation.Value
import org.springframework.http.ResponseEntity
import org.springframework.stereotype.Service
import org.springframework.web.client.RestTemplate
import java.io.ByteArrayInputStream
import java.util.zip.GZIPInputStream

@Service
class UserDecompressionService(
    @Value("\${db.microservice.url}") val dbMicroServiceUrl: String,
    @Value("\${user.end.point}") val endPoint: String,
    val restTemplate: RestTemplate,
    private val objectMapper: ObjectMapper
) {

    fun fetchDataCompressed(): ResponseEntity<ByteArray> {
        return restTemplate.getForEntity("$dbMicroServiceUrl$endPoint/compressed", ByteArray::class.java)
    }

    fun decompress(compressedData: ByteArray): ByteArray {
        ByteArrayInputStream(compressedData).use { bis ->
            GZIPInputStream(bis).use { gzip ->
                return gzip.readBytes()
            }
        }
    }

    fun convertToUserDto(decompressedData: ByteArray): List<UserDto> {
        return objectMapper.readValue(decompressedData, object : TypeReference<List<UserDto>>() {})
    }
}

Step-by-step explanation.

  1. fun fetchDataCompressed(): This method uses RestTemplate to make a GET request to a composed URL, which is a combination of dbMicroServiceUrl and endPoint with /compressed appended to it. It expects a response containing a compressed ByteArray. The method returns a ResponseEntity<ByteArray>, including the compressed data and HTTP response details.
  2. fun decompress(compressedData: ByteArray): This method takes a compressed ByteArray and performs decompression. It creates a ByteArrayInputStream from the compressed data, which is then wrapped in a GZIPInputStream which is used to read and decompress the compressed content. The result is a ByteArray of the original, decompressed data.
  3. fun convertToUserDto(decompressedData: ByteArray): After decompression, this method transforms the bytes into a list of UserDto objects. The ObjectMapper reads the decompressed data and maps it to a list of UserDto objects using the parameterized type provided by TypeReference.


Together, these methods create a workflow within the service for retrieving compressed user data from a microservice, decompressing it, and converting it into a list of UserDto objects that can be used elsewhere in the application. This allows other applications to leverage the service to get user data in a convenient format while abstracting away the details of data retrieval and transformation.


Controller: add this function. Here, we see what was described above in the service.


@GetMapping("/decompress")
 fun transformData(): ResponseEntity<List<UserDto>> {
    val compressedData = userDecompressionService.fetchDataCompressed()

    return try {
        val decompressedData = userDecompressionService.decompress(compressedData.body!!)
        val users = userDecompressionService.convertToUserDto(decompressedData)
        ResponseEntity.ok(users)
        } catch (e: Exception) {
            ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build()
        }


Please test with Postman that everything returns 200 OK. You should receive decompressed data that matches service-db's data one-to-one.

localhost:8080/mapper/user/decompress
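
If you want to double-check that one-to-one claim without eyeballing the responses, a tiny Kotlin client can fetch both endpoints and compare them; this assumes both endpoints serialize the same user list in the same order:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()
    fun get(url: String): String = client.send(
        HttpRequest.newBuilder().uri(URI.create(url)).GET().build(),
        HttpResponse.BodyHandlers.ofString()
    ).body()

    // The plain endpoint and the gzip round-trip endpoint should return identical JSON.
    val raw = get("http://localhost:8080/mapper/user")
    val decompressed = get("http://localhost:8080/mapper/user/decompress")
    println("payloads identical: ${raw == decompressed}")
}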


We have made significant progress in developing our two microservices, and we are now at an important milestone, similar to crossing the Rubicon. Our initial efforts have led us to build the blueprint of a globally renowned application. This pattern is the foundation of transformative platforms like Netflix's streaming service and Amazon's e-commerce ecosystem. These platforms leverage robust cloud systems to provide seamless, scalable, resilient services worldwide.

Snappy, jump on the bandwagon.

We have a wide variety of compression algorithms available for our test, and it would be beneficial to add another actor. As previously mentioned, Snappy claims to be more efficient in some scenarios, and we plan to test this theory. How?


We add the following Gradle dependency to both microservices:

implementation ("org.xerial.snappy:snappy-java:1.1.10.5")


Service-db:

  1. Add CompressDataSnappyService:
@Service
class CompressDataSnappyService(
    private val objectMapper: ObjectMapper
) {
    fun compressSnappy(data: Iterable<UserEntity>): ByteArray {
        val jsonString = objectMapper.writeValueAsString(data)
        return Snappy.compress(jsonString.toByteArray(StandardCharsets.UTF_8))
    }
}

2. Controller:

@RestController
@RequestMapping("/user")
class UserCompressedSnappyController(
    private val repository: UserRepository,
    private val compressDataService: CompressDataSnappyService,
) {
    @GetMapping("/compressed-snappy")
    fun fetchCompressedData(): ByteArray {
        val data: Iterable<UserEntity> = repository.findAll()
        return compressDataService.compressSnappy(data)
    }
}

Service-mapper:

  1. Add functions to UserDecompressionService:
fun decompressSnappy(compressedData: ByteArray): ByteArray {
    return Snappy.uncompress(compressedData)
}

fun fetchDataCompressedSnappy(): ResponseEntity<ByteArray> {
    return restTemplate.getForEntity("$dbMicroServiceUrl$endPoint/compressed-snappy", ByteArray::class.java)
}
  2. Add to the Controller:
@GetMapping("/decompressed-snappy")
fun transformDataSnappy(): ResponseEntity<List<UserDto>> {
    val compressedData = userDecompressionService.fetchDataCompressedSnappy()

    return try {
        val decompressedData = userDecompressionService.decompressSnappy(compressedData.body!!)
        val users = userDecompressionService.convertToUserDto(decompressedData)
        ResponseEntity.ok(users)
    } catch (e: Exception) {
        ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build()
    }
}


The original service-db / service-mapper projects can be downloaded from GitHub.

Test Cases: Byteprint Before & After

Test Cases


Let me introduce my test case scenarios and my vision of the feasibility of using this approach globally.


Data compression is not merely programming magic; it's an art and science that allows us to achieve more with less. But how can we measure the true greatness of this art form? How can we ensure that our methods are functioning and functioning well?


Enter the concept of "Baseline Comparison" or "Reference Comparison." In simple terms, we create a starting point - a baseline - our uncompressed data in all its original, unedited glory. This raw data state will become our reference point for comparison and evaluation.

Steps:

  1. Data Preparation:
  • Create data for four scenarios: 1/100/1000/10000 users.
  • Use the bulk API for fast entity creation in service-db.


  2. Data Transfer:
  • Get uncompressed data from service-db for reference comparison.
  • Get compressed data from service-db.
  • Verify that data has been correctly received.
  • Ensure that the data has been successfully compressed.


  3. Results Validation:
  • Verify that after decompression, the data content matches the original dataset.
  • Measure the time taken for the compression and decompression processes.
  • Measure the volume of traffic transferred between the microservices.


  4. Real-Life Testing:
  • Add artificial lag time to emulate real-world network conditions (one way to inject this lag is sketched right after this list).
  • Add 75/150/300 ms of lag time.
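
The article does not show how the lag is injected; one minimal way to emulate it inside a Spring Boot service is a servlet filter that delays every response by a configurable amount. The artificial.latency.ms property name below is made up for illustration:

import jakarta.servlet.FilterChain
import jakarta.servlet.http.HttpServletRequest
import jakarta.servlet.http.HttpServletResponse
import org.springframework.beans.factory.annotation.Value
import org.springframework.stereotype.Component
import org.springframework.web.filter.OncePerRequestFilter

// Delays every response by a configurable number of milliseconds (0 disables the delay).
@Component
class ArtificialLatencyFilter(
    @Value("\${artificial.latency.ms:0}") private val latencyMs: Long
) : OncePerRequestFilter() {
    override fun doFilterInternal(
        request: HttpServletRequest,
        response: HttpServletResponse,
        filterChain: FilterChain
    ) {
        if (latencyMs > 0) Thread.sleep(latencyMs)
        filterChain.doFilter(request, response)
    }
}

Setting the property to 75, 150, or 300 in service-db's application.properties would reproduce the three latency scenarios below.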


Expected Results:

  • Data is successfully compressed and decompressed without any loss of information.
  • Compressed data occupies less space compared to the original data.
  • The time taken for Compression and decompression is as expected for the given volume of data.
  • Traffic between services is reduced due to data compression.


Postconditions:

  • Use Sequential Evaluation.
  • Record the test results using a reliable environment.


Success Criteria:

  • The test is successful if the compressed data matches the original and is smaller.

Test Environment.

In our demo, we'll leverage Apache JMeter, a robust testing tool, to simulate microservices interactions and measure the benefits of data compression. JMeter is ideal for such purposes as it can simulate loads and test service performance. It has documentation, tons of plugins, and excellent support. We aim to showcase how data compression optimizes traffic and communication efficiency between services.


In all cases, we will use a load of 1.000 users, each making one request. That equals 1.000 requests to get 1/100/1.000/10.000 users.


This will give us a large field for statistical data and test our application for performance.

Let's start the Race

Starting Line


As we approach the testing track, our contenders are lining up, engines revving, each representing a unique approach to data transfer in the grand prix of microservice performance. Let's introduce our formula one teams:

  1. Team Compressed ByteStream by GZIP (localhost:8080/mapper/user/compressed): Equipped with a streamlined design to cut through the network like a razor, this API is expected to lead the pack. It zooms past the competition by delivering compressed data directly from the service-db without burning CPU cycles on decompression. It's our pace car, setting the standard for speed and efficiency.
  2. Team Snappy Streamline (localhost:8080/mapper/user/compressed-snappy): If raw speed through Snappy compression is what you're after, this racer is not just participating - it's here to claim the trophy. Or rather, to be fair, it serves as the Snappy team's own pace car, a reference marker if you like.
  3. Team Classic JSON (localhost:8080/mapper/user): The venerable veteran, steadfast and reliable, bringing the timeless simplicity of regular JSON data transfer to the race. Without the bells and whistles of Compression or decompression, it's our Anchor Object - holding down the fort with the Gold Standard of API data transfer.
  4. Team Gzip Glide (localhost:8080/mapper/user/decompressed): This contender takes a tactful approach, compressing data for the journey and then unfolding it upon arrival with gzip precision. It's a blend of tradition and innovation, a balancing act of transfer and transformation.
  5. Team Snappy Stallion (localhost:8080/mapper/user/decompressed-snappy): A dark horse indeed, with the agility and speed afforded by Google's Snappy algorithm. This team promises a fresh perspective on Compression, potentially disrupting the leaderboard with unconventional tactics.


Each API, a unique formula of technical prowess, stands ready to tackle the circuit, proving its mettle. As the flag waves, watch closely, for this race is not just about raw speed - it's about strategy, resource management, and endurance in the cloud arena, where every millisecond counts.


What do we measure:

  1. Label: The name of the test case or request that was tested.
  2. #Samples: The number of requests executed in the test. I exclude this column from the results tables since it is constant (1,000 samples).
  3. Average: The average response time of all requests in milliseconds. It indicates the overall performance of the system.
  4. Median: The median response time is the middle value in the set of response times. It is an excellent overall performance indicator and less sensitive to outlier values. Measured in milliseconds.
  5. 90% Line: This means that 90% of the requests are completed faster than this time. It's a metric often used to measure the maximum response time that clients might experience. Measured in milliseconds.
  6. 95% Line and 99% Line: Similar to the 90%, but for 95% and 99% of requests, respectively. These metrics help us understand how well the system handles the slowest requests. Measured in milliseconds.
  7. Min and Max: The minimum and maximum response time during the test. Measured in milliseconds.
  8. Error %: The percentage of errors during the test.
  9. Throughput: The throughput measured in requests per second. It shows how many requests are processed on average per unit of time.
  10. Received KB/sec and Sent KB/sec: The data rate received and sent in kilobytes per second.
  11. Compression %: To calculate the compression percentage, you can use the following formula:

Compression % = (1 - compressed size / original size) × 100

The result gives you the percentage reduction in size due to compression.

Main Target.

Regarding the most indicative percentiles (% Line), attention is usually focused on the 99% line, as it reflects the response time for the slowest requests. These values help determine whether the system's performance characteristics meet the requirements and how resilient the system is under high load.


Constants during the test.

One user costs:

  • Gzip compression - 0,247 KB.
  • Snappy - 0,231 KB.
  • Pure JSON - 0,230 KB.


100 users cost:

  • Gzip compression - 1,06 KB.
  • Snappy - 1,440 KB.
  • Pure JSON - 7,290 KB.


1.000 users cost:

  • Gzip compression - 5,030 KB.
  • Snappy - 9,800 KB.
  • Pure JSON - 72,560 KB.


10.000 users cost:

  • Gzip compression - 37,440 KB.
  • Snappy - 92,170 KB.
  • Pure JSON - 735,730 KB.

3…2…1…GO!

Run Test Cases

Race #1 - In a perfect world.

The first race will be run under laboratory conditions: both microservices run on one machine, on different ports. It's a simulation of Zero Latency Conditions. I believe we will one day achieve seamless data flow, but as long as humanity keeps working on this, we will always have optimization work to do.


So, four laps with getting 1/100/1000/10000 users.

Zero Latency Conditions


The purely compressed endpoints are not real contenders; we only observe them as reference points to see where we stand.

Note: All Sheets in CSV can be downloaded by this public link.


Subtotal: In our analysis, we observed that our system cracks 1, 100, and 1,000 users like nuts, but it starts to experience delays when we increase the load to 10,000 users. That's why we must make use of pageable requests. JSON is the fastest response format, but the other teams are catching up quickly. The synthetic victory goes to JSON/Snappy. However, the most crucial factor is not necessarily speed but how much traffic we can save, and the winner in that regard may be different from what we initially thought.


One more important conclusion: for small requests, such as getting just one user, we see a negative compression rate. For tiny payloads there is no advantage to compression; the compressed output can even be larger than the original, and extra computing power is spent compressing and decompressing it. Therefore, it's essential to always keep this case in mind and conduct thorough testing to ensure optimal performance.

Race #2 - Local Ping (within the same country or region).

Local ping is usually the lowest because data is transmitted over short distances and is least affected by delays at routers and switches.

  • Very low latency: <20 ms.
  • Low latency: 20–50 ms.
  • Moderate latency: 50–100 ms.
  • Test Scenario: 75 ms.


75 ms Latency Results.


Race #3 Ping between countries within the same continent.

  • Low latency: 50–100 ms.
  • Moderate latency: 100–200 ms.
  • Test Scenario: 150 ms.


Latency increases when traffic crosses longer distances and goes through more routers and network nodes.


150 ms Latency Results.

Race #4 Intercontinental Ping (international requests):

  • Moderate latency: 150–250 ms
  • High latency: 250–350 ms or more
  • Test Scenario: 300 ms


Intercontinental requests usually have the highest latency due to the data passing through submarine cables and intercontinental connections, significantly increasing the distance traveled and the number of intermediate nodes.


300 ms Latency Results.

Latency races recap:

After setting the network latency, we can observe the outcome of our data compression taking shape. It's essential to remember that during transmission, several factors can affect the data: jitters, throttling, drops, packet loss, bandwidth limitation, congestion, latency spikes, routing and DNS issues, and so on.


In cases of longer lag time, the benefits of using highly compressed data become more apparent. GZIP emerges as the clear winner in this scenario. On the other hand, Snappy, which offers a perfectly balanced solution, performs well in low lag times but falls short compared to GZIP in mid to high lag times. Unfortunately, JSON fails to handle lag cases effectively and distorts reality.

Please keep in mind that these diagrams can be accessed through the link provided to facilitate better analysis.


Get One Diagram


Get 100 Diagram


Get 1.000 Diagram


Get 10.000 Diagram


Reminder: keep in mind that, in addition to the timing results, you also get a compression rate of 80-95% in some cases.

Main Conclusions.

Critical role of data compression in cloud systems and its interplay with the geographic proximity of microservices.


  • Adding artificial latency to the tests underscores the importance of the geographic proximity of microservices. Data compression can be less impactful when microservices are placed within a single region, except where you use a pay-as-you-go payment model. However, regional placement has several potential drawbacks, including vulnerability to regional outages, limited geographical coverage, dependence on a single service provider, challenges in scaling, high costs, and even legal and regulatory constraints.
  • The real-life delays simulating real network conditions demonstrate that with increased latency, the benefits of data compression become even more apparent, making traditional JSON API requests less preferable due to their higher sensitivity to delays. This highlights the necessity of employing data compression methods in conjunction with the geographical distribution of servers to optimize data exchange and minimize the impact of network latency.
  • Achieving an ideal system requires a thorough approach to selecting compression algorithms. By learning how to switch compression algorithms dynamically based on request metadata and an understanding of the response data, you can improve business efficiency, decrease costs, and enhance the user experience (a sketch of such dynamic switching follows below).
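
As an illustration of that dynamic switching, here is a minimal sketch that picks an algorithm based on the serialized payload size and on what the caller says it can handle. The thresholds and the acceptedAlgorithms idea are assumptions for this example, not part of the services built above:

import com.fasterxml.jackson.databind.ObjectMapper
import org.springframework.stereotype.Service
import org.xerial.snappy.Snappy
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

@Service
class AdaptiveCompressionService(private val objectMapper: ObjectMapper) {

    fun compress(data: Any, acceptedAlgorithms: Set<String>): Pair<String, ByteArray> {
        val raw = objectMapper.writeValueAsBytes(data)
        return when {
            // Tiny payloads (e.g. getOneEntityById): compression only adds overhead.
            raw.size < 1_024 -> "identity" to raw
            // Mid-sized payloads where latency matters more than ratio: prefer Snappy if the caller supports it.
            raw.size < 64 * 1_024 && "snappy" in acceptedAlgorithms -> "snappy" to Snappy.compress(raw)
            // Large, homogeneous payloads: GZIP gives the best ratio.
            else -> "gzip" to gzip(raw)
        }
    }

    private fun gzip(bytes: ByteArray): ByteArray = ByteArrayOutputStream().use { out ->
        GZIPOutputStream(out).use { it.write(bytes) }
        out.toByteArray()
    }
}

A caller could advertise the algorithms it understands via a request header; the controller would pass the parsed set into compress() and fall back to plain JSON for clients that support neither algorithm.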


Choosing a Strategy.

Choosing a strategy


To effectively identify and optimize traffic between microservices in proxyless applications, a strategy combining monitoring, traffic analysis, and data compression tools is essential. Here are the steps for developing such a strategy:

Monitoring and Traffic Analysis

Identify the most frequently used APIs and those generating the most traffic. Tools for monitoring and analyzing traffic are necessary for this.


  • Prometheus and Grafana: Utilize Prometheus for metrics collection and Grafana for visualization. Track metrics like API call frequency, data volume, and response times (a small instrumentation sketch follows this list).
  • Elasticsearch, Logstash, and Kibana (ELK Stack): This stack can collect, analyze, and visualize logs to pinpoint the most active and resource-intensive APIs.
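
For the metrics side, the services themselves have to publish the numbers those dashboards chart. A minimal sketch with Micrometer, which Spring Boot Actuator auto-configures, could look like this; the metric and tag names are illustrative, not taken from the article:

import io.micrometer.core.instrument.MeterRegistry
import org.springframework.stereotype.Service
import java.time.Duration

// Assumes Spring Boot Actuator plus a Prometheus registry are on the classpath.
@Service
class TrafficMetrics(private val registry: MeterRegistry) {

    fun recordResponse(endpoint: String, encoding: String, bytes: Int, durationMs: Long) {
        // How much data each API actually sends, split by compression algorithm.
        registry.counter("api.response.bytes", "endpoint", endpoint, "encoding", encoding)
            .increment(bytes.toDouble())
        // How long each call takes, so Grafana can chart p90/p99 per endpoint.
        registry.timer("api.response.time", "endpoint", endpoint, "encoding", encoding)
            .record(Duration.ofMillis(durationMs))
    }
}

Assuming the Prometheus registry dependency is present, Actuator exposes a scrape endpoint, and Grafana can then chart bytes saved per encoding, which is exactly the signal needed to decide where compression pays off.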

Choosing a Compression Strategy

After identifying the heavily loaded APIs, you can select an appropriate compression strategy.

  • Data Format Determination: If most traffic consists of JSON, XML, or other text formats, standard algorithms like Gzip, Brotli, and Snappy can be effective.
  • DO NOT COMPRESS EVERYTHING: Tests show that APIs like getOneEntityById should not be compressed at all.
  • Specialized compression algorithms may be required for different kinds of data. For instance, Gzip compresses large data sets better but works slower than Snappy.

Implementing Compression

In a proxyless environment, the focus shifts to application-level compression.

  • Implement data compression directly within the services, especially for unique data formats, ensuring the most efficient processing and minimal alterations to the existing infrastructure.
  • Use utility classes or class wrappers to avoid boilerplate (a small sketch follows below).
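
As a sketch of that last point, a pair of Kotlin extension functions (names are illustrative, not from the projects above) keeps the GZIP stream plumbing out of the individual services:

import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPInputStream
import java.util.zip.GZIPOutputStream

// Reusable extensions instead of repeating the stream plumbing in every service.
fun ByteArray.gzipped(): ByteArray = ByteArrayOutputStream().use { out ->
    GZIPOutputStream(out).use { it.write(this@gzipped) }
    out.toByteArray()
}

fun ByteArray.gunzipped(): ByteArray =
    GZIPInputStream(ByteArrayInputStream(this)).use { it.readBytes() }

The compression and decompression services shown earlier could then shrink to one-liners that call these extensions.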

Testing and Optimization

  • Performance Testing: Ensure that compression indeed improves performance without causing excessive CPU overhead.
  • Monitoring and Tuning: Continue monitoring to optimize compression rules, ensuring they are applied only where necessary and effective.

Continuous Improvement

Technologies and requirements change, so regularly reviewing and updating the compression strategy is crucial.


This approach will enable you to efficiently determine and optimize traffic between microservices in proxyless environments, enhancing performance and reducing data transmission costs.