
Caches in Python

by Bhavani Ravi, August 16th, 2022

Too Long; Didn't Read

Caching is a concept the software world inherited from the hardware world. A cache is a temporary storage area that keeps frequently used items close at hand for easy access. Because caches are located closer to the consumer (the CPU), latency is lower. Browser caches, server caches, proxy caches, and hardware caches all work under the principle of read and write caches. In a write-through cache, writes to the DB happen through the cache: every time new data is written to the cache, it is also updated in the DB.


Caching is a concept that was gifted to the software world by the hardware world. A cache is a temporary storage area that keeps frequently used items around for easy access. In layman’s terms, it is the chair we all have.


This post covers the basics of:


  1. What are Caches?
  2. Caching Operations
  3. Cache Eviction Policies
  4. Implementation of Cache Eviction Policies
  5. Distributed Caching
  6. Caching In Python

Conventional Caches

In computer science, caches are hardware components that store the results of computation for easy and fast access. The primary factors contributing to this speed are memory size and location. A cache’s memory is far smaller than RAM, which reduces the number of scans needed to retrieve data. Caches are also located closer to the consumer (the CPU), hence the lower latency.

Caching Operations

There are two broad types of cache operation: reads and writes. Caches such as browser caches, server caches, proxy caches, and hardware caches all work under the principle of read and write caches.


When dealing with caches, there is always a large chunk of storage (a DB, a hard disk, etc.) that is time-consuming to read from and write to; the cache is a piece of software or hardware sitting on top of it, making the job faster.

Read Cache

A read cache is a storage layer that stores accessed items. Every time the client requests data from storage, the request first hits the cache associated with that storage.


  1. If the requested data is available in the cache, it is a cache hit.
  2. If not, it is a cache miss.
  3. If, while the data sits in the cache, some other process changes it in the underlying storage, the cache must be reloaded with the newly revised data. This is cache invalidation.
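The read path above can be sketched in a few lines of Python. This is only an illustration: the backing store is a plain dict standing in for the slow storage, and all names are invented for this example.

```python
class ReadCache:
    """A minimal read cache: check the cache first, fall back to storage."""

    def __init__(self, backing_store):
        self._store = backing_store  # the slow storage (DB, disk, ...)
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._cache:       # cache hit
            self.hits += 1
            return self._cache[key]
        self.misses += 1             # cache miss: load from storage
        value = self._store[key]
        self._cache[key] = value
        return value

    def invalidate(self, key):
        """Drop a stale entry so the next read reloads fresh data."""
        self._cache.pop(key, None)


store = {"user:1": "Ada"}
cache = ReadCache(store)
cache.get("user:1")         # miss: loads from the store
cache.get("user:1")         # hit: served from the cache
store["user:1"] = "Grace"   # data changed behind the cache...
cache.invalidate("user:1")  # ...so the entry is invalidated
```

The counters make the hit/miss behavior observable, which is also how real caches report their effectiveness.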


Write Cache

Write caches, as the name suggests, enable fast writes. Imagine a write-heavy system; we all know that writing to a DB is costly. A cache comes in handy by absorbing the DB write load, which is later flushed to the DB in batches. It is important to note that the data in the DB and the cache must always stay synchronized. There are three ways to implement a write cache.

  1. Write Through
  2. Write Back
  3. Write Around



Write Through

Writes to the DB happen through the cache: every time new data is written to the cache, it is also updated in the DB.


  • Advantage - There is never a mismatch between the cache and the storage
  • Disadvantage - Both the cache and the storage need to be updated on every write, creating overhead instead of improving performance
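A minimal write-through sketch, with a plain dict standing in for the DB (all names are invented for illustration):

```python
class WriteThroughCache:
    """Every write updates the cache and the storage synchronously."""

    def __init__(self, backing_store):
        self._store = backing_store  # the slow storage (DB, disk, ...)
        self._cache = {}

    def write(self, key, value):
        self._cache[key] = value  # update the cache...
        self._store[key] = value  # ...and the storage, in the same call

    def read(self, key):
        # The cache and storage always agree, so either source is valid
        return self._cache.get(key, self._store.get(key))
```

Because both layers are updated in the same call, a read after a crash never sees stale data; the price is that every write pays the storage latency anyway.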

Write Back

Write back is when the cache asynchronously updates the values to the DB at set intervals.


This method swaps the advantages and disadvantages of write-through: writes to the cache are fast, but data loss and inconsistency become possible if the cache fails before a flush.
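The batched write-back described above can be sketched like this; a plain dict stands in for the DB, and the flush threshold is an invented parameter (real systems usually flush on a timer or on memory pressure):

```python
class WriteBackCache:
    """Writes land in the cache; dirty entries are flushed to storage in batches."""

    def __init__(self, backing_store, flush_threshold=3):
        self._store = backing_store  # the slow storage (DB, disk, ...)
        self._cache = {}
        self._dirty = set()          # keys written but not yet persisted
        self._flush_threshold = flush_threshold

    def write(self, key, value):
        self._cache[key] = value     # fast: only the cache is touched
        self._dirty.add(key)
        if len(self._dirty) >= self._flush_threshold:
            self.flush()

    def flush(self):
        # Batch-update the DB with everything written since the last flush
        for key in self._dirty:
            self._store[key] = self._cache[key]
        self._dirty.clear()

    def read(self, key):
        return self._cache.get(key, self._store.get(key))
```

The `_dirty` set makes the trade-off concrete: anything in it is lost if the process dies before `flush()` runs, which is exactly the data-loss risk mentioned above.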

Write Around

Write the data directly to the storage, and load the cache only when the data is read.

  • Advantages
    • A cache is not overloaded with data that is not read immediately after a write.
    • Reduces the latency of the write-through method.
  • Disadvantages
    • Reading recently written data will cause a cache miss and is unsuitable for such use-cases.
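A minimal write-around sketch, again with a plain dict standing in for the storage (names invented for illustration):

```python
class WriteAroundCache:
    """Writes go straight to storage; the cache fills only on reads."""

    def __init__(self, backing_store):
        self._store = backing_store  # the slow storage (DB, disk, ...)
        self._cache = {}

    def write(self, key, value):
        self._store[key] = value   # bypass the cache entirely
        self._cache.pop(key, None) # drop any stale cached copy

    def read(self, key):
        if key not in self._cache:           # recently written data misses here
            self._cache[key] = self._store[key]
        return self._cache[key]
```

Note how a read immediately after a write always misses, the disadvantage listed above; in exchange, write-once data never wastes cache space.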

Cache Eviction Policies

Caches make reads and writes fast, so it might seem to make sense to read and write all data to and from caches instead of using DBs. But remember, the speed comes only because caches are small: the larger the cache, the longer it takes to search through it.


So, we must optimize for space. Once a cache is full, we can make space for new data only by removing what is already in the cache. And it cannot be a guessing game; we need to maximize utilization to optimize the output.


The algorithm used to decide which data needs to be discarded from a cache is a cache eviction policy.

  1. LRU - Least Recently Used
  2. LFU - Least Frequently Used
  3. MRU - Most Recently Used
  4. FIFO - First In, First Out

LRU - Least Recently Used

As the name suggests, remove the least recently used element when the cache runs out of space. It is simple and easy to implement and sounds fair, but for caching, frequency of usage often carries more weight than when an item was last accessed, which brings us to the next algorithm.

LFU - Least Frequently Used

LFU takes both the age and frequency of data into account. The problem here is that data that was once frequently used can stagnate in the cache for a long time after it stops being needed.

MRU - Most Recently Used

Why on earth would we use an MRU algorithm after talking about frequency of usage? Won’t we always re-read the data we just read? Not necessarily. Imagine an image gallery app: the images of an album are cached and loaded as you swipe right. What about going back to the previous photo? The probability of that happening is low, which makes the most recently used item the best candidate for eviction.

FIFO - First In, First Out

When caches start working like queues, you have a FIFO cache. This fits well for pipelines where data is read and processed sequentially.
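A FIFO cache falls out naturally from Python’s `collections.OrderedDict`, which preserves insertion order. This is a minimal sketch, not a production implementation:

```python
from collections import OrderedDict


class FIFOCache:
    """Evicts in insertion order, regardless of how often entries are read."""

    def __init__(self, capacity):
        self._capacity = capacity
        self._data = OrderedDict()  # preserves insertion order

    def put(self, key, value):
        if key not in self._data and len(self._data) >= self._capacity:
            self._data.popitem(last=False)  # evict the oldest insertion
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)          # reads do not affect order
```

The contrast with LRU is in `get`: here a read leaves the eviction order untouched, so the first item written is always the first one out.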

LRU Implementation

A cache is, at its core, a hash table. Every piece of data that goes in is hashed and stored, making it accessible in O(1).


Now, how do we kick out the least recently used item? We, by far, only have a hash function and its data. We need to store the order of access in some fashion.


We could keep an array and append each element as it is accessed, but finding and removing the least recently used entry from an array is expensive. Another hash table doesn’t help either; this is an ordering problem, not an access problem.


A doubly linked list might fit the purpose. Add an item to the linked list every time it is accessed, and maintain its reference in a hash table, enabling us to access it at O(1).

When an element is already present, remove it from its current position and add it back to the end of the linked list.
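The hash table plus doubly linked list described above is exactly the structure behind Python’s `collections.OrderedDict`, so a compact LRU sketch can lean on it:

```python
from collections import OrderedDict


class LRUCache:
    """LRU cache: a hash table whose entries are ordered by recency of use."""

    def __init__(self, capacity):
        self._capacity = capacity
        self._data = OrderedDict()  # key -> value, least recent first

    def get(self, key):
        if key not in self._data:
            return None                    # cache miss
        self._data.move_to_end(key)        # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)    # refresh recency on update
        self._data[key] = value
        if len(self._data) > self._capacity:
            self._data.popitem(last=False) # evict the least recently used
```

`move_to_end` is the O(1) “remove from its current position and add it to the end” step; `popitem(last=False)` pops from the cold end of the list.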

Where to Place the Caches?

The closer a cache is to its consumer, the faster it is. For a web application, this implicitly suggests placing the cache alongside the web server. But there are a couple of problems.


  1. When a server goes down, we lose all the data associated with the server’s cache.
  2. When there is a need to increase the size of the cache, it will invade the memory allocated for the server.


The most viable solution is to maintain the cache outside the server. Though this adds some latency, it is worth it for the reliability of the cache.


Distributed Caching is the concept of hosting a cache outside the server and scaling it independently.

When to Implement Caches?

Finding the technologies to implement caches is the easiest of all steps. Caches promise high-speed APIs, and it might feel stupid not to incorporate them, but if you do it for the wrong reasons, it just adds additional overhead to the system. So, before implementing caches, make sure that:


  1. The hit to your data store is high.
  2. You have done everything possible to improve speed at the DB level.
  3. You have learned and researched various caching methodologies and systems and found what fits your purpose.

Implementation in Python

To understand caching, we need to understand the data we are dealing with. For this example, I am using a simple MongoDB Schema of User and Event collections.


We will have APIs to get User and Event by their associated IDs. The following code snippet comprises a helper function to get a respective document from MongoDB.


def read_document(db, collection, _id): 
  collection = db[collection] 
  return collection.find_one({"_id": _id})

Python inbuilt LRU-Caching

from functools import lru_cache
from flask import jsonify

@app.route("/user/<uid>")
@lru_cache()
def get_user(uid):
  try:
    return jsonify(read_user(db, uid))
  except KeyError as e:
    return jsonify({"Status": "Error", "message": str(e)})

LRU caching, as you can see in the example above, is easy to implement since it has out-of-the-box Python support. But there are some disadvantages.


  1. It is simple and can’t be extended for advanced functionalities
  2. It supports only one type of caching algorithm
  3. LRU-Caching is a classic example of server-side caching. Hence, there is a possibility of memory overload in the server.
  4. Cache timeout is not implicit; you have to invalidate entries manually

Caching In Python Flask

To support other caches like Redis or Memcached, the Flask-Caching extension provides out-of-the-box support.


from flask import Flask, jsonify
from flask_caching import Cache

config = {"CACHE_TYPE": "redis"}  # or memcached
app = Flask(__name__)
app.config.from_mapping(config)
cache = Cache(app)

@app.route("/user/<uid>")
@cache.cached(timeout=30)
def get_user(uid):
    try:
        return jsonify(read_user(db, uid))
    except KeyError as e:
        return jsonify({"Status": "Error", "message": str(e)})


With that, we have covered what caches are, when to use one and how to implement it in Python Flask.

Resources

  1. You’re Probably Wrong About Caching
  2. Caching - Full Stack Python
  3. Redis Vs. Memcache
  4. Caching - A Trip Down the Rabbit Hole
  5. All About Caching

Photo by Alex Chumak on Unsplash