
Tech Glossary
Least Recently Used (LRU) Cache
The Least Recently Used (LRU) cache is a cache eviction policy that removes the least recently accessed item when the cache reaches its capacity. It is commonly used in operating systems, databases, and web browsers to improve performance by retaining frequently accessed data while discarding older, unused data.
How LRU Works
An LRU cache keeps track of recently accessed elements and prioritizes frequently used data. When the cache reaches its capacity and a new item needs to be stored, the least recently used item is removed to make room. The LRU policy is typically implemented using:
1. Linked Lists & Hash Maps – A doubly linked list maintains the order of access, while a hash map provides quick lookups.
2. Priority Queues – A min-heap keyed by last-access time can identify the eviction candidate, though updating the heap on every access costs O(log n), making this approach less common than the linked-list design.
3. Counters or Timestamps – Each item is assigned a timestamp that updates upon access, allowing the least recent item to be identified and removed.
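The first approach above can be sketched in Python with collections.OrderedDict, which combines a hash map with ordered storage and so stands in for the doubly-linked-list-plus-hash-map pair. The class and method names below are illustrative, not a standard API:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: OrderedDict keeps keys in access order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        # Mark as most recently used by moving the key to the end.
        self._data.move_to_end(key)
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            # Evict the least recently used item (front of the dict).
            self._data.popitem(last=False)

# Usage: with capacity 2, inserting a third key evicts the oldest.
cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # touching "a" makes "b" the least recently used
cache.put("c", 3)    # evicts "b"
```

Both get and put run in O(1) amortized time, which is why the hash-map-plus-ordered-list design dominates in practice.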
Applications of LRU Cache
- Operating Systems – Used in page replacement algorithms to manage virtual memory efficiently.
- Databases – Applied in buffer caching to speed up query processing.
- Web Browsers – Helps manage stored web pages to optimize loading times.
- Content Delivery Networks (CDNs) – Caches frequently accessed media to enhance performance.
Advantages and Limitations
Pros:
✔ Efficient for workloads where recent data is frequently reused.
✔ Simple and effective for memory management.
✔ Commonly implemented in hardware-level caching mechanisms.
Cons:
✖ Can be inefficient when access patterns do not favor recency (e.g., cyclic workloads).
✖ Requires additional data structures, increasing overhead.
Many programming languages and libraries provide built-in support for LRU caching, such as Python's functools.lru_cache decorator. LRU remains a fundamental technique in modern computing for optimizing speed and resource usage.
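For example, functools.lru_cache memoizes a pure function with an LRU-bounded result cache; the recursive Fibonacci below is a standard illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    """Naive recursion made fast: each result is cached by argument."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(50)           # completes instantly because subresults are cached
fib.cache_info()  # reports hits, misses, maxsize, and current size
```

Without the decorator this recursion is exponential; with it, each fib(n) is computed once, and older entries are evicted in LRU order once maxsize is exceeded.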