Building Cheap and Large CAMs Using BufferHash

Authors

Anand, Ashok
Kappes, Steven
Akella, Aditya
Nath, Suman

Type

Technical Report

Publisher

University of Wisconsin-Madison Department of Computer Sciences

Abstract

We show how to build cheap and large CAMs, or CLAMs, using flash memory. These CLAMs are targeted at an emerging class of networking applications that require massive indexes running into a hundred GB or more, with items being inserted, updated, and looked up at a rapid rate. Examples of such applications include WAN optimizers, data de-duplication, network monitoring, and traffic analyzers. For such applications, DRAM-based indexes are quite expensive, while on-disk approaches are too slow. In contrast, our flash-memory-based CLAMs cost nearly the same as existing on-disk approaches but offer orders of magnitude better performance. While flash memory inherently offers the efficient random reads required for fast lookups, it does not support the efficient small random writes required for inserts and updates. To address this, we design an efficient data structure called BufferHash that significantly lowers the amortized cost of all write operations. Our design of BufferHash also incorporates efficient and flexible eviction policies. We build CLAMs using BufferHash on SSDs and disks. We find that SSD-based CLAMs can offer average insert and lookup latencies of 0.02ms and 0.06ms (for 40% lookup success rates), respectively. We show that using such a CLAM in a WAN optimization application can improve throughput by 3X over current designs.
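The central idea described above, absorbing small random writes in memory and flushing them to flash in large batches so the per-insert write cost is amortized, can be illustrated with a minimal Python sketch. This is not the authors' implementation or API; the class name, the buffer capacity, and the list standing in for flash are all illustrative assumptions.

```python
class BufferedIndex:
    """Illustrative sketch of write buffering: small inserts land in an
    in-memory buffer (DRAM); when the buffer fills, the whole batch is
    written out at once to an append-only store (a list standing in for
    flash), replacing many small random writes with one large write."""

    def __init__(self, buffer_capacity=4):
        self.buffer_capacity = buffer_capacity
        self.buffer = {}       # in-memory write buffer ("DRAM")
        self.generations = []  # flushed batches, oldest first ("flash")

    def insert(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.buffer_capacity:
            self._flush()

    def _flush(self):
        # One large sequential write instead of many small random ones.
        self.generations.append(dict(self.buffer))
        self.buffer = {}

    def lookup(self, key):
        # Check the in-memory buffer first, then flushed batches,
        # newest first, so the most recent update wins.
        if key in self.buffer:
            return self.buffer[key]
        for gen in reversed(self.generations):
            if key in gen:
                return gen[key]
        return None
```

In the real design, lookups are kept fast by per-batch Bloom filters that avoid probing flash-resident batches that cannot contain the key, and old batches are reclaimed by eviction policies; both are omitted from this sketch.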

Citation

TR1651
