The Optimizations in Erlang/OTP 27 (erlang.org)
190 points by pjmlp 9 days ago | 27 comments





This is not mentioned in the article, but since it has performance implications for a ton of BEAM applications, it's probably relevant: OTP 27 will also include a new builtin JSON module, and I did a quick and dirty benchmark [1] of it against Elixir's (excellent) Jason library. It seems like it'll be a pretty good performance win for BEAM applications that work with JSON.

[1] https://zeroclarkthirty.com/2024-04-21-benchmarking-erlangs-...
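
For anyone who wants a taste before 27 lands, here's a minimal sketch of how I understand the new module's API from the PR (encode/1 returning iodata, decode/1 returning a term with binary object keys); details may differ in the final release:

    %% Hedged sketch of the OTP 27 builtin json module, as I read the PR.
    Encoded = iolist_to_binary(json:encode(#{<<"release">> => 27})),
    %% Encoded should now be <<"{\"release\":27}">>
    Decoded = json:decode(Encoded),
    %% Decoded should be #{<<"release">> => 27} -- object keys come back as binaries
    Decoded.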


Coincidentally I started playing around with CouchDB today. I wonder if this will have much of an impact on its performance.

How do these compare with jiffy?

There are more benchmarks and quite a bit more detail here: https://github.com/erlang/otp/pull/8111

Belated thanks, this was very interesting!

I'm legitimately surprised by their performance achievements there.


> handle errors with exceptions

What's the logic behind this?


I don't know. I'm sure there is a design reason for it, but I haven't been able to find it in any of the public discussion.

> ...It measures the time to convert a binary holding 1,262,000 digits to an integer.

> Running an unpatched Erlang/OTP 26 on my Intel-based iMac from 2017, the benchmark finishes in about 10 seconds.

> The same benchmark run using Erlang/OTP 26.0.2 finishes in about 0.4 seconds.

I love that Erlang even supports an integer with 1,262,000 digits, but I imagine that's one reason it'll never be a mathematical speed demon.
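
For reference, the kind of measurement quoted above can be reproduced in a few lines at the shell; this is my reconstruction, not the article's actual benchmark code:

    %% Build a 1,262,000-digit binary and time binary_to_integer/1 on it.
    Digits = list_to_binary(lists:duplicate(1262000, $7)),
    {Micros, _Int} = timer:tc(fun() -> binary_to_integer(Digits) end),
    io:format("binary_to_integer/1 took ~p ms~n", [Micros div 1000]).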


Arbitrary-precision integers aren't necessarily that expensive to implement. They do require overflow checking for operations on regular fixed integers, and keeping track of the type, but if you're doing that anyway (many languages are), then they're almost free; the bignum code only runs when a value doesn't fit in a fixed integer. Common Lisp, Python, and Haskell use big integers for all integers, too. And others, I'm sure. It can be optimized away by a compiler to just fixed integers, sometimes.
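
To make the "almost free" point concrete, here's a trivial Erlang shell example; the exact fixnum cutoff is an implementation detail of the 64-bit BEAM, so treat the comments as assumptions:

    1> 1 bsl 40.            %% fits in a machine-word "small" integer
    1099511627776
    2> (1 bsl 64) + 1.      %% too big for a word: silently promoted to a bignum
    18446744073709551617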

> It can be optimized away by a compiler to just fixed integers, sometimes

Yep, SBCL does range analysis and can do inline fixnum or unboxed arithmetic.


Racket/Chez Scheme can too :-)

It really pisses me off because sometimes I have numbers with 1,262,001 digits and erlang leaves me completely stranded!

I don't think there's an explicit limit on large integers, other than memory? The benchmark just needs to pick a size, and 1_262_000 digits is what it uses.

And Erlang/OTP follows a pattern of not setting limits on things unless needed, but not necessarily being well optimized at large sizes. Big integers have been there more or less forever, with no explicit size limit, but pretty slow for the last several decades.


If you need mathematical performance and parallel computing (SIMD or matrix multiplication), it might be worth using the Nx library, which is basically the Elixir equivalent of NumPy. Otherwise you could probably write the maths in a Rustler NIF if there really is a performance issue. Generally it's immutability slowing down the maths not the calculations.

> Generally it’s immutability slowing down the maths not the calculations

>> A brief history of recent optimizations

>> Erlang/OTP 22 introduced a new SSA-based intermediate representation in the compiler.


I actually really love this!

Using Karatsuba fixes a problem I've been complaining about for a while, whenever I'd run into certain make-believe leetcode-style problems or do weird, nonsensical math benchmarks across different languages.

If you're not familiar with Karatsuba, check out the following (and the toy sketch after the links):

* https://www.youtube.com/watch?v=frT1UPiJUO0

* https://www.youtube.com/watch?v=JCbZayFr9RE
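
For anyone who'd rather read code than watch the videos, here's a toy Erlang sketch of the trick (illustration only; OTP's actual implementation lives in the runtime's C big-integer code and differs in cutoffs and details):

    -module(kara).
    -export([mul/2]).

    %% Toy Karatsuba for non-negative integers. Small operands fall back to
    %% the built-in multiplication.
    mul(X, Y) when X < 16#10000; Y < 16#10000 ->
        X * Y;
    mul(X, Y) ->
        M = max(bits(X), bits(Y)) div 2,
        Mask = (1 bsl M) - 1,
        {Xh, Xl} = {X bsr M, X band Mask},
        {Yh, Yl} = {Y bsr M, Y band Mask},
        A = mul(Xh, Yh),            %% high halves
        B = mul(Xl, Yl),            %% low halves
        C = mul(Xh + Xl, Yh + Yl),  %% one multiplication covers both cross terms
        (A bsl (2 * M)) + ((C - A - B) bsl M) + B.

    %% Number of significant bits in a non-negative integer.
    bits(0) -> 0;
    bits(N) -> 1 + bits(N bsr 1).

The point is that three recursive multiplications replace the four a schoolbook split would need, which is what takes you from O(n^2) to roughly O(n^1.585).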


I had a problem last year with opentelemetry.js where it started concatenating numbers as strings instead of summing them. The backend, which I believe is Python, did not blow up until we hit somewhere north of 300 digits, which is still pretty good.

But a million digits is a pretty good BigInteger. What is that? Half a megabyte?
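
Roughly, yes. As a back-of-the-envelope check (my arithmetic, ignoring the BEAM's per-term overhead), the benchmark's 1,262,000 digits come out to about 512 KiB of magnitude data:

    1> Bits = round(1262000 * math:log2(10)).   %% ~3.32 bits per decimal digit
    4192273
    2> Bits div 8.                              %% bytes of magnitude data
    524034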


Python>2.6 has excellent bignum support. Do you happen to know what blew up?

“OpenTelemetry.js” is written in TypeScript, and they thought putting argument types on functions would protect them from bad inputs and thus from “1” + 1 problems. Only none of that checking happens when you call it from JavaScript.

Somewhere in our half-million-line application (I actually don’t know the exact size, because we didn’t have a monorepo) someone turned a stat into a string, and I never did track it down.

When we migrated off StatsD, that became a problem.


Well, that's nice. Maybe 2025 will be the year I finally get an Erlang/Elixir job

Seems like most of the optimizations were actually in 26.0.2 (not 27).

Most of the post is about changes in 27, although most of the binary_to_integer/1 improvement is indeed from 26.0.2.

If no more optimisations could be done in OTP 22, does that mean 22 is completed software?

I was going to say that OTP 22 was certainly done, but it got a patch last month [1]. I don't expect many more. But just because OTP 22 is done doesn't mean OTP is done. There's obviously lots of optimization that could still be done, as well as ports to future architectures, and the requirements aren't set in stone; it's got a TLS stack, so that's never done, for example.

[1] https://www.erlang.org/patches/otp-22.3.4.27


Software isn’t complete until it is lost.

Not all which is lost is complete though.

Not all which is lost, wanders


