
January 31, 2022


How Does Redis Work? An Example

Redis is quite simple to get up and running. Let me show you how!


Patrick Zawadzki, Senior Software Engineer

What is a cache?

A cache is a temporary store of data that will need to be reused often. That can mean a lot of things, and how long data stays cached varies greatly depending on what it's used for. In practice, a cache is usually an in-memory storage system that gives you quick lookups for data that won't change often but may need to be read often.

Caching example

As an example, we'll use a user's salary as something that might be cached. A salary likely won't change more than once or twice per year for the average individual. Let's say we have a backend service that does calculations on a user's salary information, but the salary itself is stored in a separate service. That means our backend would need to request the salary from that separate service every single time it does a calculation, increasing response times and adding unnecessary delays. Maybe this application is a career growth calculator and a user wants to see projections for themselves; the salary would then be the basis for several different projections and would need to be requested often.

Instead of requesting the user's salary every time we need to make a calculation, we can request it once and store it in our cache with whatever TTL (time to live) we choose. Then, after say 4 hours, the cache clears the stored data, and the next time we make a request we fetch fresh salary information from our service and store it in the cache again.
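As a rough sketch of that flow with the ioredis client (the key name, salary value, and 4 hour TTL here are made up for illustration):

```javascript
const Redis = require("ioredis");

// Connects to a Redis server on 127.0.0.1:6379 by default.
const redis = new Redis();

async function demoTtl() {
  // Store the salary and ask Redis to expire it after 4 hours (14400 seconds).
  await redis.set("salary:user-123", "85000", "EX", 60 * 60 * 4);

  // Until the TTL elapses, reads come straight from the cache.
  console.log(await redis.get("salary:user-123")); // "85000"

  // TTL reports the remaining lifetime in seconds; once the key has
  // expired, GET returns null and we'd fetch a fresh value instead.
  console.log(await redis.ttl("salary:user-123"));
}

demoTtl().finally(() => redis.quit());
```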

Cache setup

A common in-memory cache system is Redis. Redis is an open-source cache used in many backend applications. Many cloud providers also offer their own managed versions, for example AWS's ElastiCache or Azure Cache for Redis. For our example, we'll use a Node.js backend and the ioredis NPM package to illustrate creating a cache connection and using the cache.

The other great part about Redis is that once you have the server running, there isn't much else you need to do: no schemas or custom configuration to get yourself up and running. As long as the service is running, you can connect to it and use it however your application needs.

1. Local install

You can install Redis locally and connect your server to it to experiment. This is probably the easiest route to go, since you don't have to tinker with the different cloud setups, and it works well for prototyping.
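Once a local instance is running, a quick way to confirm your application can reach it is a PING from ioredis (this check is just a suggestion, not from the original post):

```javascript
const Redis = require("ioredis");

// A local install listens on 127.0.0.1:6379 by default.
const redis = new Redis();

redis
  .ping()
  .then((reply) => console.log(reply)) // "PONG" means the local server is reachable
  .finally(() => redis.quit());
```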

2. Redis extension package

There is some hash functionality in Redis that people often want. For context, a hash in Redis works like key -> field(s) -> value(s): a root key holds multiple fields, and each field has its own value. In JSON it would look something like this.

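A rough illustration (the key and field names here are made up, not from the original post):

```json
{
  "user:123": {
    "name": "Jane Doe",
    "salary": "85000",
    "title": "Engineer"
  }
}
```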

The only problem is that the current implementation of Redis, at the time this is written, only allows expiration at the root-key level, not on individual fields. This NPM package provides some custom commands you can use with Redis to give you some of that functionality. I personally haven't tested it, but it may be worth knowing about if you have a potential use case for it.
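To make the limitation concrete, here's a small ioredis sketch (key and field names are made up): EXPIRE applies to the whole hash, never to a single field.

```javascript
const Redis = require("ioredis");
const redis = new Redis();

async function demoHashExpiry() {
  // One root key ("user:123") holding two fields.
  await redis.hset("user:123", "name", "Jane Doe", "salary", "85000");

  // EXPIRE targets the root key, so after 60 seconds the entire hash
  // (every field) is gone. Core Redis, at the time of the post, has no
  // command to expire just the "salary" field on its own.
  await redis.expire("user:123", 60);

  console.log(await redis.hget("user:123", "salary")); // "85000" until the key expires
}

demoHashExpiry().finally(() => redis.quit());
```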

3. Creating your cache connection

Creating a cache connection with ioredis is straightforward. There are multiple ways to do it: you can pass a connection string or an options object with your parameters. Outside of an example, the connection string or other connection-specific information should be supplied to the service through environment variables, but for demonstration purposes I have it shown directly.

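The original snippet isn't reproduced here, so below is a minimal sketch of both options (the host, port, and password are placeholders):

```javascript
// redisService.js
const Redis = require("ioredis");

// Option 1: a connection string (placeholder credentials shown inline for the demo).
const redis = new Redis("redis://:my-password@127.0.0.1:6379/0");

// Option 2: an options object with the same parameters.
// const redis = new Redis({
//   host: "127.0.0.1",
//   port: 6379,
//   password: "my-password",
//   db: 0,
// });

module.exports = redis;
```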

4. Using your cache

Using your cache is just as straightforward. A lot depends on how you want to structure your code and how you want to use the cache. For example, if you want to cache all responses from a certain service, it might be easier to implement the cache as Express middleware, as sketched below.
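A rough sketch of that middleware idea (the route, key naming, and TTL are all made up for illustration):

```javascript
const express = require("express");
const redis = require("./redisService"); // the ioredis client from the connection example

const app = express();

// Middleware: serve a cached response if one exists, otherwise fall through to the handler.
async function cacheResponse(req, res, next) {
  const cached = await redis.get(`response:${req.originalUrl}`);
  if (cached) {
    return res.json(JSON.parse(cached));
  }
  next();
}

app.get("/projections/:userId", cacheResponse, async (req, res) => {
  const projections = { userId: req.params.userId, growth: [] }; // placeholder payload
  // Cache the response body for 1 hour before sending it back.
  await redis.set(`response:${req.originalUrl}`, JSON.stringify(projections), "EX", 3600);
  res.json(projections);
});

app.listen(3000);
```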

In this scenario, we will take the salary number for a user and store it under their userId as a key-value pair in Redis.

Redis offers plenty of different ways to store data, which you can browse in the commands section of its documentation: key-value strings, lists, sets, etc., all depending on how you need to store and access your data. In this case, since each user has a unique userId and all we want to store for it is the salary, a simple key-value mapping is enough. It also gives us O(1) lookups.
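For contrast, here's a quick look at a couple of the other data types (the keys and values are invented for this sketch); for the salary we'll stick with plain key-value pairs.

```javascript
const Redis = require("ioredis");
const redis = new Redis();

async function demoDataTypes() {
  // A list: ordered values under one key.
  await redis.rpush("recent-searches:user-123", "salary calculator", "career growth");
  console.log(await redis.lrange("recent-searches:user-123", 0, -1));

  // A set: unique, unordered members under one key.
  await redis.sadd("roles:user-123", "engineer", "admin");
  console.log(await redis.smembers("roles:user-123"));
}

demoDataTypes().finally(() => redis.quit());
```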

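Since the original code isn't reproduced here, below is a rough reconstruction of what such a service file could look like (the salaryService module, its getSalary call, and the helper name are assumptions, not the post's actual code):

```javascript
// salary.js
const redis = require("./redisService"); // the ioredis client created earlier
const salaryService = require("./salaryService"); // hypothetical client for the separate salary API

async function getSalaryForUser(userId) {
  // First block: check the cache. If we've looked this user up recently,
  // return the cached salary and skip the API call entirely.
  const cachedSalary = await redis.get(userId);
  if (cachedSalary !== null) {
    return Number(cachedSalary);
  }

  // Next block: cache miss, so request the salary from the separate service...
  const response = await salaryService.getSalary(userId);

  // ...then store it under the userId with a 1 hour (3600 second) TTL so the
  // next lookup within the hour is served straight from Redis.
  await redis.set(userId, response.salary, "EX", 60 * 60);

  return response.salary;
}

module.exports = { getSalaryForUser };
```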

In this example we import the Redis service file we created previously into this service file. From there we can reach our Redis cache and read or write data however we choose.

In the first block we take the userId we've been given and check whether it exists in our Redis cache. The get command in Redis pulls the value for a provided key, so if we've previously retrieved the salary information for this user we can quickly return it to the user, eliminating the API request to the salary service.

Side note: any request like this travels over the network, which takes time. Whenever you set up your Redis instance, try to configure it to be near wherever your server is hosted. If you have a web server running in AWS's us-east-1, you should also put your Redis cache in us-east-1 to minimize latency.

The next block handles the case where we can't retrieve an existing salary from Redis and need to make a request to our salary service. We make the API request as usual, then take the response along with the userId and update the Redis cache. The final step simply takes the salary off of the response and stores it in Redis with a 1 hour TTL. The TTL is entirely configurable to your needs and can even be set in milliseconds if need be. It ensures that after 1 hour in the cache, Redis automatically clears that value, forcing the service to request fresh salary information for the user.

Conclusion

Redis is quite simple to get up and running and incredibly powerful for the average application. You can add it strategically to the parts of your application that your users hit hardest and greatly improve both their experience and the overall performance of your application.

Nowadays speed is everything in software, and a snappy, responsive user experience is huge. Consider a Redis cache the next time you run into performance issues and see if it fits your application!