Caching is a key aspect of designing scalable distributed systems. We first discuss what a cache is and why we use it. We then talk about the key features of a cache in a distributed system.
We cover the cache management policies of LRU and Sliding Window. For high performance, the cache eviction policy must be chosen carefully. To keep data consistent and the memory footprint low, we must also choose between a write-through and a write-back consistency policy.
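To make LRU eviction concrete, here is a minimal Python sketch (the `LRUCache` class, its capacity, and the keys are assumptions for illustration, not the production implementations linked below):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used key when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # keys ordered from least to most recently used

    def get(self, key):
        if key not in self.store:
            return None  # cache miss
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touching "a" makes it most recently used
cache.put("c", 3)  # capacity exceeded: "b" is evicted, not "a"
```

Libraries like Guava and Caffeine (linked below) implement far more sophisticated variants of this idea, but the access-order bookkeeping is the core of LRU.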
Cache management is important because it directly affects the cache hit ratio, and hence performance. We talk through various caching scenarios in a distributed environment.
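The write-through vs write-back trade-off mentioned above can also be sketched in a few lines (class names and the dict standing in for a database are assumptions for illustration):

```python
class WriteThroughCache:
    """Write-through: every write hits both the cache and the backing store,
    so the store is never stale, at the cost of slower writes."""

    def __init__(self, backing_store):
        self.cache = {}
        self.backing_store = backing_store  # stands in for a database

    def write(self, key, value):
        self.cache[key] = value
        self.backing_store[key] = value  # synchronous write to the store


class WriteBackCache:
    """Write-back: writes land only in the cache and are marked dirty;
    the store is updated later, trading consistency for write speed."""

    def __init__(self, backing_store):
        self.cache = {}
        self.dirty = set()
        self.backing_store = backing_store

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)  # store is stale until the next flush

    def flush(self):
        for key in self.dirty:
            self.backing_store[key] = self.cache[key]
        self.dirty.clear()
```

With write-through, a crash loses nothing but every write pays the store's latency; with write-back, writes are fast but any dirty entries not yet flushed are lost on a crash.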
System Design Video Course:
A complete course on how systems are designed. Along with video lectures, the course has architecture diagrams, capacity planning, API contracts and evaluation tests.
Use the coupon code ‘earlybird’ for a 20% discount!
System Design Playlist: https://www.youtube.com/playlist?list=PLMCXHnjXnTnvo6alSjVkgxV-VH6EPyvoX
Designing Data Intensive Applications - https://amzn.to/2yQIrxH
You can follow me on:
Guava Cache - https://github.com/google/guava/wiki/CachesExplained
LRU - http://www.mathcs.emory.edu/~cheung/Courses/355/Syllabus/9-virtual-mem/LRU-replace.html
Implementation of Sliding Window Cache policies (Caffeine) - https://github.com/ben-manes/caffeine
#SystemDesign #Caching #DistributedSystems