BMC: Accelerating Memcached using In-kernel Caching and Pre-stack Processing

Because BMC runs on commodity hardware and requires modification of neither the Linux kernel nor the Memcached application, it can be widely deployed on existing systems. BMC focuses on accelerating the processing of small GET requests over UDP, as previous work from Facebook has shown that these requests make up a significant portion of Memcached traffic. Because BMC processes packets before they enter the kernel network stack, it can serve requests with low latency and fall back to the Memcached application whenever a request cannot be handled in the kernel. Of course, if your Memcached application listens only on TCP, BMC won't be of much use.

Updating the in-kernel cache with SET requests would require that both BMC and Memcached process SET requests in the same order to keep the BMC cache consistent, which is difficult to guarantee without an overly costly synchronization mechanism. BMC therefore passes SET requests to the userspace Memcached and invalidates the corresponding in-kernel entries instead of updating them. The few requests BMC can't process are served by Memcached in userspace, so the only downside is a small loss in caching efficiency.

In the evaluation, the authors allocate 2.5 GB of memory to BMC and 10 GB to Memcached in userspace, which makes for a fairly small Memcached compared to production servers. Memcached + BMC significantly outperforms Memcached running alone, whether the latter is patched or not.
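To make the pre-stack GET path concrete, here is a minimal XDP sketch in the spirit of BMC; it is not the actual BMC code. It parses a Memcached GET request over UDP, looks the key up in an in-kernel BPF hash map, and returns XDP_PASS (falling back to userspace Memcached) for anything it cannot handle. The map name `kernel_cache`, the fixed key/value sizes, and the omission of the in-place reply construction (the real BMC rewrites the packet into a Memcached response and returns XDP_TX) are simplifications of mine.

```c
// Simplified sketch of an in-kernel GET fast path, in the spirit of BMC.
// Assumptions: map name, sizes, and fixed-length key/value layout.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define MEMCACHED_PORT 11211
#define MAX_KEY_LEN    64
#define MAX_VAL_LEN    1024

struct cache_key {
    char data[MAX_KEY_LEN];
};

struct cache_entry {
    __u32 len;
    char  data[MAX_VAL_LEN];
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 4096);
    __type(key, struct cache_key);
    __type(value, struct cache_entry);
} kernel_cache SEC(".maps");

SEC("xdp")
int memcached_get_fastpath(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end || ip->protocol != IPPROTO_UDP)
        return XDP_PASS;

    struct udphdr *udp = (void *)ip + ip->ihl * 4;
    if ((void *)(udp + 1) > data_end || udp->dest != bpf_htons(MEMCACHED_PORT))
        return XDP_PASS;

    /* Memcached's UDP protocol prepends an 8-byte frame header;
     * the ASCII command ("get <key>\r\n") follows it. */
    char *payload = (char *)(udp + 1) + 8;
    if ((void *)(payload + 4) > data_end)
        return XDP_PASS;
    if (payload[0] != 'g' || payload[1] != 'e' ||
        payload[2] != 't' || payload[3] != ' ')
        return XDP_PASS; /* not a GET: let userspace Memcached handle it */

    /* Copy the key (up to '\r' or ' ') into a zero-padded lookup key. */
    struct cache_key key = {};
    char *kp = payload + 4;
#pragma unroll
    for (int i = 0; i < MAX_KEY_LEN; i++) {
        if ((void *)(kp + i + 1) > data_end || kp[i] == '\r' || kp[i] == ' ')
            break;
        key.data[i] = kp[i];
    }

    struct cache_entry *entry = bpf_map_lookup_elem(&kernel_cache, &key);
    if (!entry)
        return XDP_PASS; /* miss: fall back to Memcached in userspace */

    /* The real BMC rewrites the packet in place into a Memcached reply
     * and returns XDP_TX; that step is omitted from this sketch. */
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```

Note that in this sketch the map is only read; in BMC the in-kernel cache is populated on the egress path, from the GET responses that the userspace Memcached sends, which is what makes the invalidate-on-SET strategy sufficient for consistency.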