
5 Caching Mechanisms to Speed Up Your Application

by Pragati Verma, September 12th, 2022

Too Long; Didn't Read

Caching is a buffering technique where frequently accessed data is stored in temporary memory or space so that it is readily available and the workload on the application, server, or database is reduced. It can be implemented at different levels of a web application depending on the use case, such as edge caching with a CDN (Content Delivery Network), database caching, browser caching, and server-side caching. Caching can be used with all types of data stores, including NoSQL as well as relational databases.



If you are a backend developer, caching is a go-to solution whenever you need to speed up your web application. However, a lot gets overlooked when choosing the right caching strategy for a web application. Hence, in this article, we will discuss the various caching strategies available and how to pick the right one for your use case.


What is Caching?

In simple terms, caching is a buffering technique where we store frequently accessed data in temporary memory or space to make it readily available and to reduce the workload on our application, server, or database. It can be implemented at different levels in a web application depending on the use case.


Caching at Different Levels in a Web Application

Caching happens at different levels in a web application such as the following:


Edge Caching or CDN (Content Delivery Network)

A CDN is used to cache static assets (such as images, videos, or webpages) on geographically distributed servers so that the content can be delivered to the end user faster from the nearest cache.


Source: https://imagekit.io/


Consider a CDN to be a grocery store chain: instead of traveling hundreds of kilometers to the fields where food is grown, buyers go to their local grocery store, which still requires some travel but is considerably closer. Grocery shopping takes minutes rather than days because grocery stores stock food from distant farms. Similarly, CDN caches 'stock' the content that appears on the Internet, allowing webpages to load significantly faster.


Source: https://awesome-tech.readthedocs.io/


When a user requests content from a website through a CDN, the CDN retrieves that content from an origin server and keeps a copy for future requests. The cached content remains in the CDN cache as long as users continue to access it.


Database Caching

Database caching refers to the native, intelligent caching algorithms a database uses to optimize reads and writes. The cache in this case can live in several places, including the database itself, the application, or a standalone layer.


Database caching can be used with all types of data stores, including NoSQL databases as well as relational databases such as SQL Server, MySQL, or MariaDB. It also works well with cloud platforms such as AWS and Microsoft Azure.


Browser Caching or Client-Side Caching

Browsers or clients store static assets based on cache expiry headers. The HTTP cache headers specify how long the browser may serve subsequent requests for the content from its cache. Browsers can also cache responses to GET requests to avoid unnecessary network calls.
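
For instance, here is a minimal sketch of a backend setting such headers, assuming Flask as the web framework (the route and file names are illustrative; any framework exposes the same HTTP headers):

```python
# A minimal sketch, assuming Flask; route and file names are illustrative.
from flask import Flask, send_file

app = Flask(__name__)

@app.route("/logo.png")
def logo():
    response = send_file("static/logo.png")
    # Allow browsers and shared caches to reuse this asset for one hour.
    response.headers["Cache-Control"] = "public, max-age=3600"
    return response
```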


Server-Side Caching

Server-side caching is the most commonly known and used caching mechanism, where data is cached within a server application. How it is set up depends heavily on business needs; it is most effective for applications with fewer concurrent users.


Server-side web caching often involves using a web proxy that holds responses from the web servers it sits in front of, significantly lowering their load and latency. These caches are implemented by site administrators and act as an intermediary between the browser and the origin servers.


Another form of server-side caching utilizes key-value stores such as Memcached and Redis. In contrast to reverse proxies, which merely cache the HTTP response to a specific HTTP request, a key-value object store can cache any content the application developer needs. The content is typically fetched by application code or an application framework that can exploit the in-memory data store.


Another advantage of employing key-value stores for web caching is that they are also frequently used to store web sessions and other cached material. This gives a unified solution for a variety of application scenarios.
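
As a quick illustration, here is a minimal sketch of storing web sessions in a key-value store, assuming a Redis instance on localhost and the redis-py client (all names are illustrative):

```python
# A minimal session-store sketch; assumes Redis on localhost:6379 and redis-py.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def save_session(session_id, data, ttl_seconds=1800):
    # Sessions expire automatically after 30 minutes.
    r.set(f"session:{session_id}", json.dumps(data), ex=ttl_seconds)

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```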


Why do we need Caching?

There are several benefits of caching as mentioned below:


Improved Application Performance

As we discussed earlier, a cache is a high-speed data storage layer that stores a frequently accessed, typically transient, subset of data so that future requests are served faster than by accessing the original storage location. Caching thus allows efficient reuse of previously accessed or computed data. As a result, read/write operations against the primary store are reduced by a large margin, which improves the overall performance of the application.


Reduce the Database Cost

A single cache instance can provide hundreds of thousands of input/output operations per second, driving the total cost down by reducing the number of database instances required. The cost savings can therefore be significant if the database charges per throughput.


Reduce the Load on the Backend

Caching can effectively reduce the load on the backend and protect it from degraded performance by redirecting part of the read load from the backend database to the in-memory caching layer. It can also save the system from crashing during a traffic overload or spike.


Predictable Performance

A common challenge with modern applications is dealing with spikes in usage: for example, a surge on social media applications during a worldwide tech event, a cricket match, or an election day, or a festive sale on an eCommerce website. The increased load can result in higher data-retrieval latency, making both the performance and the user experience unpredictable. By using a high-throughput in-memory cache, we can mitigate this problem to a large extent.


Eliminate Database Hotspots

In many applications, a small subset of data, such as a celebrity profile or a popular product, is likely to be retrieved far more frequently than the rest. This can cause hotspots in your database and necessitate overprovisioning database resources to meet the throughput requirements of the most frequently used data. Storing these common keys in memory reduces the need to overprovision while providing fast, predictable performance for the most frequently accessed data.


Increase Read Throughput (IOPS)

In addition to reducing latency, in-memory systems provide significantly greater request rates than a comparable disk-based database. A single distributed side-cache machine can serve hundreds of thousands of requests per second.


Having discussed the benefits of caching, let’s dive into the caching strategies with some real-world use cases. In this article, we will mainly focus on server-side caching strategies.


5 Different Types of Server Caching Strategies

Cache Aside

In this caching strategy, the cache sits logically aside, and the application communicates directly with both the database and the cache to determine whether the requested information is present. The application first checks the cache: if the information is found, it is read and returned; this is known as a cache hit. If the information is not found, known as a cache miss, the application queries the database, returns the requested data to the client, and also stores it in the cache for future use.


Source: https://www.prisma.io


It is especially beneficial for read-heavy use cases. If the cache server goes down, the system can still function by communicating directly with the database, though this isn't a dependable long-term fallback for sudden spikes or peak load.
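
A minimal sketch of the cache-aside read path, assuming Redis via redis-py; fetch_user_from_db() is an illustrative stand-in for a real database query:

```python
# Cache-aside sketch: the application talks to both the cache and the database.
import json
import redis

r = redis.Redis(decode_responses=True)

def fetch_user_from_db(user_id):
    # Illustrative placeholder for a real database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                # cache hit
        return json.loads(cached)
    user = fetch_user_from_db(user_id)    # cache miss: query the database
    r.set(key, json.dumps(user), ex=300)  # store for future reads (5-minute TTL)
    return user
```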


The most common write strategy is to write directly to the database; however, this can leave the cache serving stale data when writes are frequent. To deal with this, developers often use a cache with a TTL (time to live) and continue serving from the cache until it expires.


Here’s a quick overview of caches with and without a TTL and when to use each:


Cache with TTL

A cache with a TTL is the most common choice when the data is updated frequently. In such cases, you want cached entries to expire at regular intervals, so you set a time limit, and each entry is automatically deleted once that interval has passed.


For example, server sessions or sports scorecards.


Cache without TTL

A cache without a TTL is used for data that doesn’t need frequent updates, for example, the content of a website that provides courses. Such content is updated or published infrequently, so it’s much easier to cache it without any time limit.
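
To illustrate the contrast (assuming the same Redis setup as earlier, with illustrative keys):

```python
import redis

r = redis.Redis(decode_responses=True)

# Frequently changing data: expire automatically after 60 seconds.
r.set("scorecard:match42", '{"runs": 187, "wickets": 4}', ex=60)

# Rarely changing data: cache without a TTL and evict manually on publish.
r.set("course:intro-to-caching", "<html>...</html>")
```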


Write-Through Cache

As the name suggests, in this strategy any new information is written to the cache before it is written to the main memory/database. Here, the cache sits logically between the application and the database. When a client requests information, the application does not need to check whether the cache holds it, because the cache already does, so the data is retrieved directly from the cache and served to the client.


Source: https://www.prisma.io


However, it increases the latency of write operations. Paired with another caching strategy called the read-through cache, it can ensure data consistency.
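
A minimal write-through sketch under the same Redis assumption; save_user_to_db() is an illustrative placeholder for the real database write:

```python
# Write-through sketch: cache and database are updated in one synchronous step.
import json
import redis

r = redis.Redis(decode_responses=True)

def save_user_to_db(user):
    ...  # illustrative placeholder for the real database write

def save_user(user):
    key = f"user:{user['id']}"
    r.set(key, json.dumps(user))  # write to the cache first...
    save_user_to_db(user)         # ...then persist synchronously (higher write latency)
```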


Read-Through Cache

In this caching strategy, the cache sits in line with the database: any time there’s a cache miss (the data is not found in the cache), the missing data is loaded from the database into the cache and returned to the client.


Source: https://www.prisma.io


As you might have guessed, it works best for read-heavy applications where the same set of information is requested again and again. For instance, a news website serves the same stories over and over throughout the day.


The downside of this strategy is that the first request for any piece of data is always a cache miss, and is therefore slower than a normal request.
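
In practice, the loading logic lives in the cache provider or a library rather than in the application, but a rough sketch of the behaviour (again assuming Redis, with an illustrative loader function) looks like this:

```python
# Read-through sketch: the application asks only the cache layer,
# and the cache itself loads missing entries from the database.
import json
import redis

r = redis.Redis(decode_responses=True)

class ReadThroughCache:
    def __init__(self, loader, ttl_seconds=300):
        self.loader = loader      # function that reads from the database
        self.ttl = ttl_seconds

    def get(self, key):
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        value = self.loader(key)  # cache miss: the cache loads it itself
        r.set(key, json.dumps(value), ex=self.ttl)
        return value

# Illustrative usage: the first call misses and invokes the loader;
# later calls for the same key hit the cache.
stories = ReadThroughCache(loader=lambda key: {"id": key, "body": "..."})
story = stories.get("story:42")
```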


Write-Back

In this caching strategy, whenever there is a write operation, the application writes the information to the cache, which immediately acknowledges the change; after some delay, the data is written back to the database. It is also known as the write-behind caching strategy.


Source: https://www.prisma.io


It is a good caching strategy for write-heavy applications, as it improves their write performance. It can also help the application tolerate moderate database downtime and occasional failures.


It also works well when paired with a read-through cache, and it can further reduce the write workload on the database if batching is supported. The downside, however, is that if the cache fails before a flush, the pending data may be lost forever. Internally, most relational databases enable a write-back caching mechanism by default.
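
A rough sketch of the idea, assuming Redis, with an illustrative dirty-key queue flushed by a background job:

```python
# Write-back sketch: acknowledge writes from the cache immediately,
# and persist them to the database later in batches.
import redis

r = redis.Redis(decode_responses=True)

def write_to_db(key, value):
    ...  # illustrative placeholder for the real database write

def save_score(player_id, score):
    key = f"score:{player_id}"
    r.set(key, score)           # acknowledged from the cache at once
    r.rpush("dirty-keys", key)  # remember what still needs persisting

def flush_to_db(batch_size=100):
    # Runs periodically, e.g. from a background worker. A cache crash
    # before this runs loses the queued writes, which is the strategy's main risk.
    for _ in range(batch_size):
        key = r.lpop("dirty-keys")
        if key is None:
            break
        write_to_db(key, r.get(key))
```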


Write-Around

In this case, data is written directly to the database, and only the data that is actually read is stored in the cache.


Source: https://www.prisma.io


It can be combined with a read-through cache, and it is a good choice in situations where data is written once and read only a few times, for example, real-time logs or chat messages.
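
A minimal write-around sketch, with illustrative database helpers standing in for real queries:

```python
# Write-around sketch: writes skip the cache; only data that is read gets cached.
import redis

r = redis.Redis(decode_responses=True)

def append_log_to_db(log_id, line):
    ...  # illustrative placeholder for the real database write

def read_log_from_db(log_id):
    return "..."  # illustrative placeholder for the real database read

def write_log(log_id, line):
    append_log_to_db(log_id, line)        # write straight to the database

def read_log(log_id):
    cached = r.get(f"log:{log_id}")
    if cached is not None:
        return cached
    line = read_log_from_db(log_id)       # the first read misses...
    r.set(f"log:{log_id}", line, ex=300)  # ...and populates the cache
    return line
```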


Conclusion

In this article, we discussed what caching is, the different levels of caching in an application, and why we need it. We then covered various strategies for server-side caching. No single one of these strategies will necessarily fulfill your practical use case; it is usually best to combine several of them for the best results.


For a developer new to this, it might take some trial and error, hit or miss, one could say, to gain a more thorough practical understanding of the concepts and to arrive at the best solution for a particular use case.


That was all for this article; I hope you found it helpful. Do let me know what you think. You can connect with me here:


LinkedIn | Twitter | GitHub


Keep reading!