Taming Go’s memory usage, or how we avoided rewriting our client in Rust

www.akitasoftware.com · jeanyang
Not only does it take a while for Go's garbage collector to catch up when the allocation rate temporarily increases, but Go deliberately lets the heap size grow so that programs don't stall waiting for memory to become available. In one of our incidents, memory usage grew by 3GB over a period of 40 seconds, an allocation rate of about 75MB/second.

All of these changes reduced memory usage to just the memory required for the individual hash computations in the OneOfOne/xxhash library that objecthash-proto was using.

While these improvements may seem obvious in hindsight, there were definitely times during the Great Memory Reduction when the team and I considered rewriting the system in Rust, a language that gives you complete control over memory.

PRO-REWRITE: Rust has manual memory management, so we would avoid having to wrestle with a garbage collector: we could deallocate unused memory ourselves, or more carefully engineer the response to increased load.

PRO-REWRITE: Rust is very popular among hip programmers, and many startup-inclined developers seem to want to join Rust-based startups. People seem to complain less about the ergonomics of Rust than about the ergonomics of Go.

ANTI-REWRITE: Rust has manual memory management, which means that whenever we write code we have to take the time to manage memory ourselves.

In our production environment, the 99th percentile memory footprint is now below 200MB, and the 99.9th percentile footprint is below 280MB. We've avoided having to rewrite our system in Rust.
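The heap-growth behavior described above can be observed, and since Go 1.19 even soft-bounded, from inside the program itself. The sketch below is illustrative, not Akita's actual code: the 256 MiB limit and the 1 MiB allocation burst are made-up numbers chosen only to show the runtime APIs involved.

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	// Soft heap limit (Go 1.19+): the GC works harder as the total
	// heap approaches this, instead of letting it grow unbounded
	// during an allocation spike. 256 MiB here is an arbitrary example.
	debug.SetMemoryLimit(256 << 20)

	// Simulate a temporary burst in allocation rate by retaining
	// 64 MiB of live data in 1 MiB chunks.
	var keep [][]byte
	for i := 0; i < 64; i++ {
		keep = append(keep, make([]byte, 1<<20))
	}

	// Inspect what the runtime actually did: HeapAlloc is live heap,
	// NextGC is the heap size at which the next collection triggers.
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("HeapAlloc = %d MiB, NextGC target = %d MiB\n",
		m.HeapAlloc>>20, m.NextGC>>20)

	runtime.KeepAlive(keep)
}
```

Watching `NextGC` climb during a burst makes the "Go lets the heap grow" behavior concrete: the collector pacing deliberately trades memory for latency unless a memory limit tells it otherwise.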