A parallel page cache: IOPs and caching for multicore systems

Da Zheng, Randal Burns, Alexander S. Szalay

Research output: Contribution to conference › Paper › peer-review

Abstract

We present a set-associative page cache for scalable parallelism of IOPS in multicore systems. The design eliminates lock contention and hardware cache misses by partitioning the global cache into many independent page sets, each requiring a small amount of metadata that fits in a few processor cache lines. We extend this design with message passing among processors in a nonuniform memory architecture (NUMA). We evaluate the set-associative cache on 12-core processors and a 48-core NUMA machine to show that it realizes the scalable IOPS of direct I/O (no caching) and matches the cache hit rates of Linux's page cache. Set-associative caching maintains IOPS at scale, in contrast to Linux, for which IOPS crash beyond eight parallel threads.
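The core idea in the abstract — partitioning one global cache into many small, independently locked page sets so that threads touching different sets never contend — can be illustrated with a minimal sketch. This is not the authors' implementation; it is a hedged illustration in Python, where the set count, associativity, the modulo hash, and the `load_page` callback are all hypothetical choices made for the example.

```python
import threading
from collections import OrderedDict

class SetAssociativeCache:
    """Sketch of a set-associative page cache: the cache is split into
    many small sets, each with its own lock and its own LRU order, so
    lock contention and shared metadata are confined to one set."""

    def __init__(self, num_sets=64, ways=8):
        self.num_sets = num_sets
        self.ways = ways  # pages held per set (associativity)
        # Per-set state: an LRU-ordered map and a dedicated lock.
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.locks = [threading.Lock() for _ in range(num_sets)]

    def _index(self, page_no):
        # Hypothetical hash: map a page number to its set.
        return page_no % self.num_sets

    def read(self, page_no, load_page):
        """Return (data, hit). `load_page` is a hypothetical callback
        that fetches a page from storage on a miss."""
        i = self._index(page_no)
        with self.locks[i]:              # only this one set is locked
            s = self.sets[i]
            if page_no in s:
                s.move_to_end(page_no)   # LRU touch
                return s[page_no], True  # cache hit
            data = load_page(page_no)    # miss: go to storage
            if len(s) >= self.ways:
                s.popitem(last=False)    # evict LRU page of this set only
            s[page_no] = data
            return data, False
```

Because eviction and lookup touch only one set's metadata, each operation reads and writes a small, fixed-size structure — the property the abstract relies on to keep metadata within a few processor cache lines and avoid cross-core lock contention.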

Original language: English (US)
State: Published - 2012
Event: 4th USENIX Workshop on Hot Topics in Storage and File Systems, HotStorage 2012 - Boston, United States
Duration: Jun 13, 2012 – Jun 14, 2012


ASJC Scopus subject areas

  • Computer Networks and Communications
  • Hardware and Architecture
  • Information Systems
  • Software

