To operate efficiently, Facebook's infrastructure relies on caching in many different backend services. These services place very different demands on their caches, e.g., in terms of working set sizes, access patterns, and throughput requirements. Historically, each service used its own cache implementation, leading to inefficiency, duplicated code, and duplicated effort.
CacheLib is an embedded caching engine that addresses these demands with a unified API for building caches across many hardware media. CacheLib transparently combines volatile and non-volatile storage in a single caching abstraction, and thereby provides a flexible, high-performance solution for many different services at Facebook. In this talk, we describe CacheLib's design, the challenges we encountered, and several lessons learned.
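To make the idea of a single embedded caching API concrete, the sketch below shows roughly how a service might build and use a CacheLib cache, based on the open-source C++ documentation. The type and method names (LruAllocator, setCacheSize, addPool, allocate, insertOrReplace, find) come from the public docs, but exact signatures may differ across versions, and the hybrid DRAM-plus-flash configuration is omitted for brevity.

    // Minimal sketch of the CacheLib C++ API (names assumed from public docs).
    #include <cstring>
    #include <memory>
    #include <string>
    #include "cachelib/allocator/CacheAllocator.h"

    using Cache = facebook::cachelib::LruAllocator;

    int main() {
      // Configure a DRAM cache; flash can be layered in via additional
      // NvmCache configuration, which is omitted here.
      Cache::Config config;
      config.setCacheSize(1024 * 1024 * 1024)   // 1 GB of DRAM
            .setCacheName("example-cache")
            .validate();                        // throws on a bad config
      auto cache = std::make_unique<Cache>(config);
      auto pool = cache->addPool("default", 512 * 1024 * 1024);

      // Write: allocate an item, fill its memory, then make it visible.
      std::string value = "hello";
      if (auto handle = cache->allocate(pool, "key", value.size())) {
        std::memcpy(handle->getMemory(), value.data(), value.size());
        cache->insertOrReplace(handle);
      }

      // Read: look up by key; the handle pins the item while in scope.
      if (auto handle = cache->find("key")) {
        std::string out(reinterpret_cast<const char*>(handle->getMemory()),
                        handle->getSize());
      }
      return 0;
    }

The same calls work whether an item currently resides in DRAM or on flash; the engine moves items between media behind the single abstraction described above.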
To request accommodations for a disability, please contact Emily Lawrence at emilyl@cs.princeton.edu, at least one week prior to the event.