This article delves into the intricacies of Microservices Architecture, with a special emphasis on one of its vital components - data transmission - in the context of a proxyless approach. We explore the nuances, benefits, and challenges of adopting this method, where direct service-to-service communication plays a pivotal role. The discussion extends to how DevOps practices and the right strategic choices can significantly bolster this architecture. Furthermore, we'll navigate through the realm of Cloud-Native DevOps, scrutinizing specific practices and tools that are essential in cloud-native environments, across platforms like AWS, Azure, or Google Cloud. This exploration aims to shed light on how a proxyless architecture can be effectively implemented and optimized on these diverse cloud platforms.

In today's cloud-based microservice architectures, efficient data exchange is crucial, particularly as we often operate in environments where traditional proxy servers like Nginx are not the optimal solution. This necessity arises from various factors, leading to a shift towards a proxyless approach in specific scenarios:

Resource Constraints: In environments with limited computing resources, the overhead of proxy servers can be prohibitive.
High-Performance Requirements: Direct service-to-service communication can reduce latency, which is crucial in high-performance applications.
Security and Compliance: Certain regulatory or security requirements may dictate direct connections between services.
Simplified Architecture: For some applications, a streamlined setup without additional proxy layers is more manageable and efficient.
Specific Data Handling Needs: When unique data processing or compression methods are required that proxies cannot adequately provide.

Understanding the use cases for a proxyless approach, particularly in the context of data compression, is essential for optimizing microservices' efficiency and performance.

"To compress, or not to compress, that is the question: Whether 'tis nobler in the bytes to suffer the pings and lags of outrageous file sizes, or to take arms against a sea of data transfer, and by opposing, compress them."

Challenges of Data Transfer in a Network.

Data transfer has become an integral part of everyday life, akin to the air we breathe - its value is only truly appreciated when lacking. As IT professionals, we are responsible for this 'digital air,' ensuring its purity and uninterrupted flow. We must focus on maintaining the quality and speed of data transmission, much like environmentalists fight for clean air in cities. Our task extends beyond merely sustaining a steady stream of data; we must also ensure its efficiency and security within the complex network ecosystems of microservices. As the bridge connecting the dots in a Microservices Architecture, the network is also a challenging terrain where data faces numerous obstacles that can impact performance, efficiency, and security.

Large volumes of data: Exchanging large volumes of data between microservices can result in performance and data transfer speed issues.
Network latency: Data transmission delays can affect system performance depending on the distance between servers and the network's quality.
Data loss: There is always a risk of information loss due to network failures or errors during data transmission.
Data costs: Top-rated services like AWS, Microsoft Azure, GCP, IBM Cloud, etc., offer a pay-as-you-go pricing model where high data usage directly impacts price.
Data security: Data compression does not improve data security by itself. However, it indirectly affects several security-related aspects: reducing data transfer time also shrinks the window for possible attacks, and decompression requires extra processing, complicating the task for a potential attacker attempting to analyze data traffic.

Note: Understanding these challenges is pivotal in appreciating the solutions and strategies that can be employed to overcome them, including the crucial role that data compression plays in this equation.

Understanding Data Compression.

What is data compression? Data compression is a technique to reduce data size in order to save storage space or transmission time. For microservices, this means that when one service sends data to another, instead of sending the raw data, it first compresses the data, sends the compressed data across the network, and then the receiving service decompresses it back to the original format. This is the working scenario we will focus on.

Discussing the significance and limitations of data compression in microservices architectures: data compression reduces data size, leading to faster transfers and better performance. However, it can also cause data loss, format inconsistencies, and performance problems.

Advantages of using data compression:

Transmit less data and save network traffic.
Shorter data transmission times speed up the overall system operation.
The ability to transfer large volumes of data at once facilitates the efficient exchange of information.
The more homogeneous the data, the better the compression ratio.
Improved system performance during peak times with many simultaneous requests.
Better usage of network resources, resulting in improved efficiency.

Disadvantages of using data compression:

The need to process compressed data on both the client and server sides may lead to a slight decrease in performance.
Data loss during compression and decompression may occur if the process is incorrectly set up.
Potential compatibility issues if different compression algorithms are used.
Additional configuration and setup are required to work with compressed data.
Debugging compressed data requires additional expertise.

Note: This article will address some of these problems to ensure reliable and efficient data exchange between microservices.

Types of data compression algorithms.

When considering data exchange between microservices, choosing the appropriate data compression algorithm is crucial to ensure efficiency and performance. Let's dive into the most common types of data compression algorithms.

Lossless Compression Algorithms: These algorithms compress data in a way that allows the original data to be perfectly reconstructed from the compressed data. Examples include Huffman coding, Lempel-Ziv (LZ77 and LZ78), and Deflate, which is also used in GZIP. Another notable example is Snappy, which prioritizes speed and efficiency, making it a suitable choice for scenarios where quick compression and decompression are more critical than achieving the highest compression level.

Lossy Compression Algorithms: These algorithms compress data by removing unnecessary or less important information, meaning the original data cannot be perfectly reconstructed from the compressed data. Examples are common in media compression and include JPEG for images and MP3 and AAC for audio.

Benefits of Using Lossless Compression Algorithms in Microservices Communication.

The final choice depends on the specific needs of the project and the characteristics of the data being exchanged between microservices. Everyone in the business wants to receive accurate data and avoid any mistakes, so GZIP is the optimal choice as a data compression and decompression tool for our purposes. It uses the Deflate algorithm, which combines the LZ77 algorithm and Huffman coding. Since Deflate is a lossless data compression algorithm, the original data can be perfectly reconstructed from the compressed data.

Verdict: GZIP is versatile and practical, so it is widely used for compressing text files and is recognized as both a lossless compression algorithm and a text compression tool.
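To make the "perfectly reconstructed" guarantee tangible, here is a minimal, self-contained Kotlin sketch - my own illustration, not part of the project we are about to build - that round-trips a repetitive JSON-like string through GZIP and verifies the result:

import java.io.ByteArrayOutputStream
import java.util.zip.GZIPInputStream
import java.util.zip.GZIPOutputStream

fun main() {
    val original = """{"id":1,"name":"Alice"}""".repeat(100)
    val originalBytes = original.toByteArray(Charsets.UTF_8)

    // Compress with GZIP (Deflate under the hood)
    val compressed = ByteArrayOutputStream().also { buffer ->
        GZIPOutputStream(buffer).use { it.write(originalBytes) }
    }.toByteArray()

    // Decompress and verify the lossless round trip
    val restored = GZIPInputStream(compressed.inputStream()).use { it.readBytes() }
    check(restored.contentEquals(originalBytes)) { "Round trip failed" }

    println("original=${originalBytes.size} B, compressed=${compressed.size} B")
}

With highly repetitive JSON like this, the compressed output is a small fraction of the original, which previews the homogeneous-data advantage listed above.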
Into Action: Deploying the Microservices Architecture.

You can skip the microservice creation section and jump straight to the test cases or the summary.

Now that we've brushed up on the basic concepts, it's time to start creating. We will zoom in on two regular links in the intricate tapestry of microservice architectures, where hundreds of interconnected services weave a complex network. These two microservices, isolated from the vast ecosystem, will serve as our focal points, allowing us to dissect and deeply understand a specific case study. As mentioned, we create two independent Spring Boot Kotlin microservices:

service-db is a microservice that has access to a database.
service-mapper is a database-less microservice whose primary function is to map, enrich, and forward data to front-end applications.

Hint: H2 was chosen as an in-memory Java SQL database for our case study. The unique feature of H2 is that it only exists during the application's runtime and does not require separate installation. It is a lightweight, fast, and convenient option for development and testing purposes. Note that all database data will be erased upon application restart, as it is temporary.

Fast Project Structure Generation

For quick project generation, use the start.spring.io utility. Let's start.

Project: Gradle - Kotlin (or Maven if you prefer)
Language: Kotlin
Java: 21
Spring Boot: The latest version
Project Metadata:
Group: com.compression
Artifact: service-db / service-mapper
Name: service-db / service-mapper
Dependencies:
Spring Web (both)
Spring Data JPA (for service-db)
H2 Database (for service-db)
Spring Boot DevTools (optional, for automatic reloading)

Hint: It's important to keep a close eye on your dependencies and only install what is necessary for your needs. I have already made two configurations for you: service-db / service-mapper. Download the projects and unzip them.

In-project Configuration.

When you have finished unzipping and indexing, you should add some code to your application.properties files. By default, we need to configure the data source settings in service-db:
# DataSource Configuration
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=

# JPA/Hibernate Configuration
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect

# H2 Database Console Configuration
spring.h2.console.enabled=true

# Server Configuration
server.port=8081
server.servlet.context-path=/db

application.properties in service-mapper:

# Server Configuration
server.port=8080
server.servlet.context-path=/mapper

# Microservice Configuration
db.microservice.url=http://localhost:8081/db/
user.end.point=user

Hint: These variables are used to connect to the service-db microservice: db.microservice.url is the URL of the microservice, and user.end.point is the endpoint for user operations.

Development Process and Solution Design: Building the Data Flow.

Our project will follow a standard structure:

src
└── main
    └── kotlin
        └── com
            └── example
                └── myproject
                    ├── config
                    │   └── AppConfig.kt
                    ├── controller
                    │   └── MyController.kt
                    ├── repository
                    │   └── MyRepository.kt
                    ├── service
                    │   └── MyService.kt
                    ├── dto
                    │   └── MyUserDto.kt
                    └── entity
                        └── MyEntity.kt

In our walkthrough, we will implement the following:

service-db implementation:
- User Entity.
- Service for Data Compression.
- Repository to save/get the Entity to/from the database.
- POST/GET Controllers.

service-mapper implementation:
- User DTO.
- Decompression Service - logic for data decompression.
- GET Controllers - get raw/compressed data from service-db.

Let's Code

Like sketching a blueprint before building a rocket, we start coding by shaping up the DTO/Entity as a first step: a UserEntity data class in service-db and a UserDto in service-mapper.

# service-db
@Entity
data class UserEntity(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Long = 0,
    val name: String,
    val creativeDateTime: LocalDateTime? = LocalDateTime.now(),
)

# service-mapper
data class UserDto(
    val id: Long,
    val name: String,
    val creativeDateTime: LocalDateTime,
)

NOTE: For the Entity in service-db, we use the following annotations to handle the unique identification of each entity. With these annotations, the database automatically manages the assignment of unique IDs when new entities are created, simplifying the entity management process.

@Id (jakarta.persistence.Id) - specifies the entity's primary key.
@GeneratedValue(strategy = GenerationType.IDENTITY) - indicates that the database automatically generates a unique identifier for each entity as the primary key.

Service-db: Repository and Controller.

Let's shift our attention toward service-db and create a classic duet comprising a Repository and a Controller.

package com.compress.servicedb.repository

@Repository
interface UserRepository : CrudRepository<UserEntity, Long>

@RestController
@RequestMapping("/user")
class EntityController(
    private val repository: UserRepository
) {
    @GetMapping
    fun getAll(): MutableIterable<UserEntity> = repository.findAll()

    @PostMapping
    fun create(@RequestBody user: UserEntity) = repository.save(user)

    @PostMapping("/bulk")
    fun createBulk(@RequestBody users: List<UserEntity>): MutableIterable<UserEntity> =
        repository.saveAll(users)
}

Please verify with Postman that everything works and proceed with the following steps accordingly:

POST: localhost:8081/db/user/bulk
POST: localhost:8081/db/user/
GET: localhost:8081/db/user
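For the bulk endpoint, the request body is a JSON array of users. A minimal example payload might look like the following (my assumption: since id is generated and creativeDateTime has a default, only name is required, provided the Jackson Kotlin module is on the classpath, which start.spring.io includes for Kotlin projects):

[
  { "name": "Alice" },
  { "name": "Bob" },
  { "name": "Carol" }
]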
NOTE: We need to ensure our services are configured and communicating effectively. Our current method of retrieving data from service-db is rudimentary, but it serves as our first test. Once we have enough data, we can measure retrieval volume and speed to establish a baseline for future comparisons.

Service-mapper: Service and Controller.

Let's fly over to service-mapper and create a Service and a Controller.

The Service is:

package com.compress.servicemapper.service

@Service
class UserService(
    val restTemplate: RestTemplate,
    @Value("\${db.microservice.url}") val dbMicroServiceUrl: String,
    @Value("\${user.end.point}") val endPoint: String
) {
    fun fetchData(): List<UserDto> {
        val responseType = object : ParameterizedTypeReference<List<UserDto>>() {}
        return restTemplate.exchange("$dbMicroServiceUrl/$endPoint", HttpMethod.GET, null, responseType).body
            ?: emptyList()
    }
}

The Controller is:

package com.compress.servicemapper.controller

@RestController
@RequestMapping("/user")
class DataMapperController(
    val userTransformService: UserService
) {
    @GetMapping
    fun getAll(): ResponseEntity<List<UserDto>> {
        val originalData = userTransformService.fetchData()
        return ResponseEntity.ok(originalData)
    }
}

RestTemplate Configuration: In service-mapper, we configure a RestTemplate to facilitate communication between different parts of our application. This is done by creating a configuration class annotated with @Configuration containing the bean definition for our RestTemplate. Here's the code:

@Configuration
class RestTemplateConfig {
    @Bean
    fun restTemplate(): RestTemplate {
        return RestTemplateBuilder().build()
    }
}

@Configuration is a Spring annotation indicating that the class contains @Bean definitions. Beans are objects managed by the Spring IoC (Inversion of Control) container. In this case, our RestTemplate bean is defined in the RestTemplateConfig class.

The @Bean annotation in Spring identifies a method that produces a bean for the Spring container. In our example, the restTemplate method creates a new RestTemplate instance used for HTTP operations. This configuration lets you define RestTemplate once and inject it into the necessary components, making your code cleaner and easier to maintain. Without this configuration, you could still use RestTemplate, but you would need to create an instance at each place where you use it, which can lead to code duplication.

Postman is ready to start. Make sure that you created some users in service-db. As a result, you will get the same users as in service-db:

GET: localhost:8080/mapper/user

Compression Implementation.

It's time to implement compression in service-db. The heart of our compression will be CompressDataService. Take a look at the code:

@Service
class CompressDataService(
    private val objectMapper: ObjectMapper
) {
    fun compress(data: Iterable<UserEntity>): ByteArray {
        val jsonString = objectMapper.writeValueAsString(data)
        val byteArrayOutputStream = ByteArrayOutputStream()
        GZIPOutputStream(byteArrayOutputStream).use { gzipOutputStream ->
            val bytes = jsonString.toByteArray(StandardCharsets.UTF_8)
            gzipOutputStream.write(bytes)
        }
        return byteArrayOutputStream.toByteArray()
    }
}

Step-by-step explanation: The compress function takes a collection of UserEntity objects as input and returns a byte array. The primary goal of this function is to compress data to save space during transmission or storage. Here's what happens inside the function:

1. The input data is converted into a JSON string using ObjectMapper. This represents the data as a string that can then be compressed.
2. A ByteArrayOutputStream object is created, which will be used to store the compressed data.
3. A GZIPOutputStream object is created, which wraps the ByteArrayOutputStream and provides the functionality for compressing data.
4. The JSON string is converted into a byte array using UTF-8 encoding.
5. The byte array is written to the GZIPOutputStream. During this process, the data is compressed and stored in the ByteArrayOutputStream.
6. The resulting compressed byte array is returned from the function.

Thus, the compress function transforms a collection of objects into a compressed byte array, saving space during data transmission or storage.
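A quick design note before wiring this into a controller: the endpoint below returns the compressed payload as a bare ByteArray. A hypothetical alternative - my own sketch, not part of the article's project - is to expose it with standard HTTP headers so that generic HTTP clients can recognize and transparently decompress it. The method assumes the same repository and compressDataService fields as the controller shown next:

// Hypothetical endpoint; relies on the controller's existing fields.
@GetMapping("/compressed-http")
fun fetchCompressedHttp(): ResponseEntity<ByteArray> {
    val body = compressDataService.compress(repository.findAll())
    return ResponseEntity.ok()
        .header(HttpHeaders.CONTENT_ENCODING, "gzip") // org.springframework.http.HttpHeaders
        .contentType(MediaType.APPLICATION_JSON)
        .body(body)
}

In this article we deliberately keep the raw-ByteArray form, since we want to control (and measure) decompression on the consumer side ourselves.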
The Controller is so banal that it does not need comments:

@RestController
@RequestMapping("/user")
class CompressedDataController(
    private val repository: UserRepository,
    private val compressDataService: CompressDataService,
) {
    @GetMapping("/compressed")
    fun fetchCompressedData(): ByteArray {
        val data: Iterable<UserEntity> = repository.findAll()
        return compressDataService.compress(data)
    }
}

Test yourself, please. You'll receive something special. If you're a young budding engineer, it may be the first time you see a response like this. That's a ByteArray, babe.

GET: localhost:8081/db/user/compressed

The data transfer volume is significantly lower at this stage. As the volume of similar-type data grows, the compression efficiency scales up, leading to increasingly better compression ratios.
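To see this scaling for yourself, here is a small standalone Kotlin sketch (my own illustration, not project code) that gzips a growing list of similarly shaped records and prints the savings:

import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

fun gzipSize(payload: ByteArray): Int =
    ByteArrayOutputStream().also { buffer ->
        GZIPOutputStream(buffer).use { it.write(payload) }
    }.size()

fun main() {
    for (count in listOf(1, 100, 1_000, 10_000)) {
        // Homogeneous JSON-like records, similar to our UserEntity payloads
        val json = (1..count).joinToString(",", "[", "]") {
            """{"id":$it,"name":"user$it"}"""
        }.toByteArray()
        val compressed = gzipSize(json)
        println("$count records: ${json.size} B -> $compressed B " +
                "(${"%.1f".format(100.0 * (1 - compressed.toDouble() / json.size))}% saved)")
    }
}

Note that for a single record, the gzipped output is actually larger than the input - a negative compression rate that the test races below will confirm.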
Decompression Implementation.

It's time to decompress our data in service-mapper. To achieve this, we will create a new controller and the heart of this microservice: the decompression service. Let's open our hearts.

import java.io.ByteArrayInputStream
import java.util.zip.GZIPInputStream

@Service
class UserDecompressionService(
    @Value("\${db.microservice.url}") val dbMicroServiceUrl: String,
    @Value("\${user.end.point}") val endPoint: String,
    val restTemplate: RestTemplate,
    private var objectMapper: ObjectMapper
) {
    fun fetchDataCompressed(): ResponseEntity<ByteArray> {
        return restTemplate.getForEntity("$dbMicroServiceUrl$endPoint/compressed", ByteArray::class.java)
    }

    fun decompress(compressedData: ByteArray): ByteArray {
        ByteArrayInputStream(compressedData).use { bis ->
            GZIPInputStream(bis).use { gzip ->
                return gzip.readBytes()
            }
        }
    }

    fun convertToUserDto(decompressedData: ByteArray): List<UserDto> {
        return objectMapper.readValue(decompressedData, object : TypeReference<List<UserDto>>() {})
    }
}

Step-by-step explanation:

fun fetchDataCompressed(): This method uses RestTemplate to make a GET request to a composed URL - a combination of dbMicroServiceUrl and endPoint with /compressed appended to it. It expects a response containing a compressed ByteArray. The method returns a ResponseEntity<ByteArray>, including the compressed data and HTTP response details.

fun decompress(compressedData: ByteArray): This method takes a compressed ByteArray and performs decompression. It creates a ByteArrayInputStream from the compressed data, which is then wrapped in a GZIPInputStream used to read and decompress the compressed content. The result is a ByteArray of the original, decompressed data.

fun convertToUserDto(decompressedData: ByteArray): After decompression, this method transforms the bytes into a list of UserDto objects. The ObjectMapper reads the decompressed data and maps it to a list of UserDto objects using the parameterized type provided by TypeReference.

Together, these methods create a workflow within the service for retrieving compressed user data from a microservice, decompressing it, and converting it into a list of UserDto objects that can be used elsewhere in the application. This allows other components to leverage the service to get user data in a convenient format while abstracting away the details of data retrieval and transformation.

Controller: add this function. Here, we see what is described above in the service.

@GetMapping("/decompress")
fun transformData(): ResponseEntity<List<UserDto>> {
    val compressedData = userDecompressionService.fetchDataCompressed()
    return try {
        val decompressedData = userDecompressionService.decompress(compressedData.body!!)
        val users = userDecompressionService.convertToUserDto(decompressedData)
        ResponseEntity.ok(users)
    } catch (e: Exception) {
        ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build()
    }
}

Please test with Postman that everything is 200 OK. You should receive decompressed data one-to-one, like from service-db:

GET: localhost:8080/mapper/user/decompress

We have made significant progress in developing our two microservices, and we are now at an important milestone, similar to crossing the Rubicon. Our initial efforts have led us to build the blueprint of globally renowned applications. This pattern is the foundation of transformative platforms like Netflix's streaming service and Amazon's e-commerce ecosystem. These platforms leverage robust cloud systems to provide seamless, scalable, resilient services worldwide.

Snappy, Jump on the Bandwagon.

We have a wide variety of compression algorithms available for our test, and it would be beneficial to add another actor. As previously mentioned, Snappy claims it can be more efficient in some scenarios, and we plan to test this theory. How?

We add the Snappy dependency to the Gradle dependencies of both microservices:

implementation("org.xerial.snappy:snappy-java:1.1.10.5")

Service-db:

1. Add CompressDataSnappyService:

@Service
class CompressDataSnappyService(
    private val objectMapper: ObjectMapper
) {
    fun compressSnappy(data: Iterable<UserEntity>): ByteArray {
        val jsonString = objectMapper.writeValueAsString(data)
        return Snappy.compress(jsonString.toByteArray(StandardCharsets.UTF_8))
    }
}

2. Controller:

@RestController
@RequestMapping("/user")
class UserCompressedSnappyController(
    private val repository: UserRepository,
    private val compressDataService: CompressDataSnappyService,
) {
    @GetMapping("/compressed-snappy")
    fun fetchCompressedData(): ByteArray {
        val data: Iterable<UserEntity> = repository.findAll()
        return compressDataService.compressSnappy(data)
    }
}

Service-mapper:

1. Add functions to UserDecompressionService:

fun decompressSnappy(compressedData: ByteArray): ByteArray {
    return Snappy.uncompress(compressedData)
}

fun fetchDataCompressedSnappy(): ResponseEntity<ByteArray> {
    return restTemplate.getForEntity("$dbMicroServiceUrl$endPoint/compressed-snappy", ByteArray::class.java)
}

2. Add to the Controller:

@GetMapping("/decompressed-snappy")
fun transformDataSnappy(): ResponseEntity<List<UserDto>> {
    val compressedData = userDecompressionService.fetchDataCompressedSnappy()
    return try {
        val decompressedData = userDecompressionService.decompressSnappy(compressedData.body!!)
        val users = userDecompressionService.convertToUserDto(decompressedData)
        ResponseEntity.ok(users)
    } catch (e: Exception) {
        ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).build()
    }
}
You can download the original service-db / service-mapper projects from GitHub.

Test Cases: Byteprint Before & After

Let me introduce my test case scenarios and my vision of the feasibility of using this approach globally. Data compression is not merely programming magic; it's an art and science that allows us to achieve more with less. But how can we measure the true greatness of this art form? How can we ensure that our methods are functioning, and functioning well? Enter the concept of "Baseline Comparison" or "Reference Comparison." In simple terms, we create a starting point - a baseline - our uncompressed data in all its original, unedited glory. This raw data state becomes our reference point for comparison and evaluation.

Steps:

1. Data Preparation:
- Create data for four scenarios: 1/100/1000/10000 users.
- Use the bulk API for fast entity creation in service-db.

2. Data Transfer:
- Get uncompressed data from service-db for the reference comparison.
- Get compressed data from service-db.
- Verify that data has been correctly received.
- Ensure that the data has been successfully compressed.

3. Results Validation:
- Verify that after decompression, the data content matches the original dataset.
- Measure the time taken for the compression and decompression processes.
- Measure the volume of traffic transferred between the microservices.

4. Real-Life Testing:
- Add artificial lag time of 75/150/300 ms, which emulates real-world network scenarios (a sketch of one way to emulate this lag follows the test-environment description below).

Expected Results:
- Data is successfully compressed and decompressed without any loss of information.
- Compressed data occupies less space compared to the original data.
- The time taken for compression and decompression is as expected for the given volume of data.
- Traffic between services is reduced due to data compression.

Postconditions:
- Use sequential evaluation.
- Record the test results in a reliable environment.

Success Criteria: The test is successful if the compressed data matches the original and is smaller.

Test Environment.

In our demo, we'll leverage Apache JMeter, a robust testing tool, to simulate microservice interactions and measure the benefits of data compression. JMeter is ideal for such purposes, as it can simulate loads and test service performance. It has solid documentation, tons of plugins, and excellent support. We aim to showcase how data compression optimizes traffic and communication efficiency between services.

In all cases, we will use a performance load of 1000 users, each creating one request - that equals 1,000 requests to fetch 1/100/1,000/10,000 users. This gives us a large field of statistical data and tests our application's performance.
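Here is the promised sketch of one way to emulate that lag. The article's repositories do not show a specific mechanism, so treat this as an assumption: a servlet filter in service-db that sleeps for a configurable number of milliseconds before handling each request. JMeter timers or OS-level traffic shaping could achieve the same effect.

import jakarta.servlet.Filter
import jakarta.servlet.FilterChain
import jakarta.servlet.ServletRequest
import jakarta.servlet.ServletResponse
import org.springframework.beans.factory.annotation.Value
import org.springframework.stereotype.Component

// Hypothetical helper: emulate 75/150/300 ms of network latency.
// Set artificial.lag.ms in application.properties; 0 disables it.
@Component
class ArtificialLagFilter(
    @Value("\${artificial.lag.ms:0}") private val lagMs: Long
) : Filter {
    override fun doFilter(request: ServletRequest, response: ServletResponse, chain: FilterChain) {
        if (lagMs > 0) Thread.sleep(lagMs)
        chain.doFilter(request, response)
    }
}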
Let's Start the Race

As we approach the testing track, our contenders are lining up, engines revving, each representing a unique approach to data transfer in the grand prix of microservice performance. Let's introduce our Formula One teams:

Team Compressed ByteStream by GZIP (localhost:8080/mapper/user/compressed): Equipped with a streamlined design to cut through the network like a razor, this API is expected to lead the pack. It zooms past the competition by delivering compressed data directly from service-db without burning CPU cycles on decompression. It's our pace car, setting the standard for speed and efficiency.

Team Snappy Streamline (localhost:8080/mapper/user/compressed-snappy): If raw speed through Snappy compression is what you're after, this racer is not just participating - it's here to claim the trophy. But no! It would only be fair to let the Snappy team have its own pace car, or marker if you want.

Team Classic JSON (localhost:8080/mapper/user): The venerable veteran, steadfast and reliable, bringing the timeless simplicity of regular JSON data transfer to the race. Without the bells and whistles of compression or decompression, it's our anchor object - holding down the fort with the gold standard of API data transfer.

Team Gzip Glide (localhost:8080/mapper/user/decompressed): This contender takes a tactful approach, compressing data for the journey and then unfolding it upon arrival with gzip precision. It's a blend of tradition and innovation, a balancing act of transfer and transformation.

Team Snappy Stallion (localhost:8080/mapper/user/decompressed-snappy): A dark horse indeed, with the agility and speed afforded by Google's Snappy algorithm. This team promises a fresh perspective on compression, potentially disrupting the leaderboard with unconventional tactics.

Each API, a unique formula of technical prowess, stands ready to tackle the circuit and prove its mettle. As the flag waves, watch closely, for this race is not just about raw speed - it's about strategy, resource management, and endurance in the cloud arena, where every millisecond counts.

What we measure:

Label: The name of the test case or request that was tested.
# Samples: The number of requests executed in the test. I exclude this column from the results tables, as it is constant at 1000 samples.
Average: The average response time of all requests, in milliseconds. It indicates the overall performance of the system.
Median: The median response time - the middle value in the set of response times. It is a good overall performance indicator and less sensitive to outliers. Measured in milliseconds.
90% Line: 90% of the requests completed faster than this time. It's a metric often used to gauge the maximum response time that most clients might experience. Measured in milliseconds.
95% Line and 99% Line: Similar to the 90% Line, but for 95% and 99% of requests, respectively. These metrics help us understand how well the system handles the slowest requests. Measured in milliseconds.
Min and Max: The minimum and maximum response times during the test. Measured in milliseconds.
Error %: The percentage of errors during the test.
Throughput: The throughput measured in requests per second. It shows how many requests are processed on average per unit of time.
Received KB/sec and Sent KB/sec: The data rate received and sent, in kilobytes per second.
Compression %: The percentage reduction in size due to compression, calculated as shown below.
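For clarity, the compression-percentage formula can be expressed as a small Kotlin helper; the worked example uses the 10,000-user constants listed in the next section:

// Compression % = (1 - compressed size / original size) * 100
fun compressionPercent(originalKb: Double, compressedKb: Double): Double =
    (1 - compressedKb / originalKb) * 100

fun main() {
    // 10,000 users: Pure JSON 735.73 KB vs. Gzip 37.44 KB -> ~94.9% saved
    println(compressionPercent(originalKb = 735.73, compressedKb = 37.44))
}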
Main Target. Among the percentiles, attention is usually focused on the 99% Line, as it reflects the response time for the slowest requests. These values help determine whether the system's performance characteristics meet the performance requirements and how resilient the system is under high loads.

Constants during the test:

One user costs: Gzip compression - 0,247 KB. Snappy - 0,231 KB. Pure JSON - 0,230 KB.
100 users cost: Gzip compression - 1,06 KB. Snappy - 1,440 KB. Pure JSON - 7,290 KB.
1.000 users cost: Gzip compression - 5,030 KB. Snappy - 9,800 KB. Pure JSON - 72,560 KB.
10.000 users cost: Gzip compression - 37,440 KB. Snappy - 92,170 KB. Pure JSON - 735,730 KB.

3…2…1…GO!

Race #1 - In a Perfect World.

The first race takes place under laboratory conditions - both microservices run on one CPU on different ports. It's a simulation of Zero Latency Conditions. I believe we will one day achieve Seamless Data Flow, but as long as humanity keeps working on it, we will always have optimization work to do.

So, four laps: getting 1/100/1000/10000 users.

Note: Pure compressed data (without decompression) is not ranked; we observe it only as a marker of where we stand. All sheets can be downloaded as CSV via this public link.

Subtotal: In our analysis, we observed that our system can crack 1, 100, and 1000 users like nuts, but it starts to experience delays when we increase the load to 10,000 users. That's why we must make use of pageable requests. JSON is the fastest response format, but the other teams are catching up quickly. Synthetic victory goes to JSON/Snappy. However, the most crucial factor is not necessarily speed but how much traffic we can save. The winner in this regard may be different from what we initially thought.

One more important conclusion: for small requests, such as getting just one user, we experience a negative compression rate - the compressed payload is larger than the original. Compression brings no advantage for tiny payloads; the benefits appear only as the data grows larger, at the cost of more computing power to compress and decompress. Therefore, it's essential to always remember this case and conduct thorough testing to ensure optimal performance.

Race #2 - Local Ping (within the same country or region).

Local ping is usually the lowest because data is transmitted over short distances and is least affected by delays at routers and switches.

Very low latency: <20 ms.
Low latency: 20–50 ms.
Moderate latency: 50–100 ms.
Test Scenario: 75 ms.

Race #3 - Ping Between Countries Within the Same Continent.

Latency increases when traffic crosses longer distances and goes through more routers and network nodes.

Low latency: 50–100 ms.
Moderate latency: 100–200 ms.
Test Scenario: 150 ms.

Race #4 - Intercontinental Ping (international requests).

Intercontinental requests usually have the highest latency because the data passes through submarine cables and intercontinental connections, significantly increasing the distance traveled and the number of intermediate nodes.

Moderate latency: 150–250 ms.
High latency: 250–350 ms or more.
Test Scenario: 300 ms.

Latency Races Recap:

After introducing network latency, we can observe the outcome of our data compression taking shape. It's essential to remember that during transmission, several factors can affect the data: jitter, throttling, drops, packet loss, bandwidth limitations, congestion, latency spikes, routing and DNS issues, and so on.

With longer lag times, the benefits of highly compressed data become more apparent. GZIP emerges as the clear winner in this scenario. On the other hand, Snappy, which offers a well-balanced solution, performs well at low lag times but falls short compared to GZIP at mid to high lag times. Unfortunately, JSON fails to handle lag cases effectively and distorts reality.

Please keep in mind that these diagrams can be accessed through the provided link to facilitate better analysis.

Reminder: In addition to the time results, you also get a compression rate of 80-95% in some cases.
Main Conclusions.

Adding artificial latency to tests underscores the importance of the geographic proximity of microservices. Data compression methods could be less beneficial in regional placement models in cloud systems, except in cases where you use a pay-as-you-go payment model. However, the regional model has several potential drawbacks, including Vulnerability to Regional Outages, Limited Geographical Coverage, Dependence on a Single Service Provider, Challenges in Scaling, High Costs, and even Legal and Regulatory Constraints.

The real-life delays simulating real-network conditions demonstrate that with increased latency, the benefits of data compression become even more apparent, making traditional JSON API requests less preferable due to their higher sensitivity to delays. This highlights the necessity of employing data compression methods in conjunction with the geographical distribution of servers to optimize data exchange and minimize the impact of network latency.

Achieving the ideal system requires a thorough approach to selecting compression algorithms. By learning how to switch compression algorithms dynamically based on the request metadata and an understanding of the response data, your business efficiency will improve, costs will decrease, and user experience will be enhanced.

Choosing a Strategy.

To effectively identify and optimize traffic between microservices in proxyless applications, a strategy combining monitoring, traffic analysis, and data compression tools is essential. Here are the steps for developing such a strategy:

Monitoring and Traffic Analysis

Identify the most frequently used APIs and those generating the most traffic. Tools for monitoring and analyzing traffic are necessary for this:

Prometheus and Grafana: Utilize Prometheus for metrics collection and Grafana for visualization. Track metrics like API call frequency, data volume, and response times.
Elasticsearch, Logstash, and Kibana (ELK Stack): This stack can collect, analyze, and visualize logs to pinpoint the most active and resource-intensive APIs.

Choosing a Compression Strategy

After identifying the heavily loaded APIs, you can select an appropriate compression strategy:

Data Format Determination: If most traffic consists of JSON, XML, or other text formats, standard algorithms like Gzip, Brotli, and Snappy can be effective.
DO NOT COMPRESS EVERYTHING: Tests show that APIs like getOneEntityById should not be compressed at all.
Specialized compression algorithms may be required for different data profiles. For instance, Gzip compresses large data sets better but works slower than Snappy.

Implementing Compression

In a proxyless environment, the focus shifts to application-level compression - for example, implementing compression directly within the services, especially for unique data formats, ensuring the most efficient processing and minimal alterations to the existing infrastructure. Use utility classes or wrappers to avoid boilerplate (see the sketch after this section).

Testing and Optimization

Performance Testing: Ensure that compression indeed improves performance without causing excessive CPU overhead.
Monitoring and Tuning: Continue monitoring to optimize compression rules, ensuring they are applied only where necessary and effective.

Continuous Improvement

Technologies and requirements change, so regularly reviewing and updating the compression strategy is crucial.

This approach will enable you to efficiently identify and optimize traffic between microservices in proxyless environments, enhancing performance and reducing data transmission costs.
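As promised, here is a minimal sketch of what such a utility wrapper could look like. It is my own illustration, not code from the article's repositories: it picks an algorithm dynamically from a size threshold and a requested encoding, following the "do not compress everything" rule above. The threshold value is an arbitrary assumption to be tuned through testing.

import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream
import org.xerial.snappy.Snappy

// Hypothetical utility: choose a codec per response based on payload size
// and the encoding the caller asked for (e.g., via a request header).
object CompressionSelector {
    // Below this size, compression tends to be counterproductive
    // (recall the negative compression rate for single-user payloads).
    private const val MIN_SIZE_BYTES = 1024

    fun compress(payload: ByteArray, preferredEncoding: String?): Pair<String, ByteArray> {
        if (payload.size < MIN_SIZE_BYTES) return "identity" to payload
        return when (preferredEncoding?.lowercase()) {
            "snappy" -> "snappy" to Snappy.compress(payload) // favors speed
            "gzip", null -> "gzip" to gzip(payload)          // favors ratio
            else -> "identity" to payload
        }
    }

    private fun gzip(payload: ByteArray): ByteArray =
        ByteArrayOutputStream().also { buffer ->
            GZIPOutputStream(buffer).use { it.write(payload) }
        }.toByteArray()
}

A controller could then place the returned encoding name in a response header, so the consumer knows which decompressor to apply.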
Oleh Sypiahin on LinkedIn

You can download the original service-db / service-mapper projects from GitHub.

Online Excel Report with Diagrams