Abstract
We present a set-associative page cache for scalable parallelism of IOPS in multicore systems. The design eliminates lock contention and hardware cache misses by partitioning the global cache into many independent page sets, each requiring only a small amount of metadata that fits in a few processor cache lines. We extend this design with message passing among processors in a nonuniform memory architecture (NUMA). We evaluate the set-associative cache on 12-core processors and a 48-core NUMA system, showing that it realizes the scalable IOPS of direct I/O (no caching) and matches the cache hit rates of Linux’s page cache. Set-associative caching maintains IOPS at scale, in contrast to Linux, for which IOPS collapse beyond eight parallel threads.
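The core idea described above — hashing each page to one of many independent sets, where each set keeps its metadata compact and holds its own lock — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation; the names (`pc_lookup`, `pc_insert`, `NSETS`, `SET_WAYS`), the 4-way associativity, and the round-robin eviction within a set are all assumptions made for the example.

```c
/* Sketch of a set-associative page cache: pages hash to independent
 * sets, each with a few-cache-line footprint and its own lock, so
 * lookups on different sets never contend. Illustrative only. */
#include <pthread.h>
#include <stdint.h>

#define NSETS    256   /* number of independent page sets (assumed) */
#define SET_WAYS 4     /* associativity within one set (assumed) */

struct pc_set {
    pthread_mutex_t lock;       /* per-set lock: no global contention */
    uint64_t tag[SET_WAYS];     /* page numbers cached in this set */
    uint8_t  valid[SET_WAYS];
    uint8_t  hand;              /* round-robin eviction cursor */
};

static struct pc_set sets[NSETS];

static struct pc_set *pc_set_for(uint64_t pageno)
{
    return &sets[pageno % NSETS];   /* hash page number to a set */
}

void pc_init(void)
{
    for (int i = 0; i < NSETS; i++)
        pthread_mutex_init(&sets[i].lock, NULL);
}

/* Returns 1 on hit, 0 on miss; scans only SET_WAYS entries. */
int pc_lookup(uint64_t pageno)
{
    struct pc_set *s = pc_set_for(pageno);
    int hit = 0;
    pthread_mutex_lock(&s->lock);
    for (int w = 0; w < SET_WAYS; w++)
        if (s->valid[w] && s->tag[w] == pageno) { hit = 1; break; }
    pthread_mutex_unlock(&s->lock);
    return hit;
}

/* Inserts a page, evicting round-robin within its set when full. */
void pc_insert(uint64_t pageno)
{
    struct pc_set *s = pc_set_for(pageno);
    pthread_mutex_lock(&s->lock);
    for (int w = 0; w < SET_WAYS; w++) {
        if (!s->valid[w]) {             /* reuse an empty way */
            s->tag[w] = pageno;
            s->valid[w] = 1;
            pthread_mutex_unlock(&s->lock);
            return;
        }
    }
    s->tag[s->hand] = pageno;           /* evict within this set only */
    s->hand = (s->hand + 1) % SET_WAYS;
    pthread_mutex_unlock(&s->lock);
}
```

Because eviction and replacement decisions are confined to one set, no global lock or shared LRU list is touched on the fast path, which is what lets throughput scale with thread count.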
Original language | English (US)
---|---
State | Published - 2012
Event | 4th USENIX Workshop on Hot Topics in Storage and File Systems, HotStorage 2012 - Boston, United States
Duration | Jun 13 2012 → Jun 14 2012
Conference
Conference | 4th USENIX Workshop on Hot Topics in Storage and File Systems, HotStorage 2012
---|---
Country/Territory | United States
City | Boston
Period | 6/13/12 → 6/14/12
ASJC Scopus subject areas
- Computer Networks and Communications
- Hardware and Architecture
- Information Systems
- Software