Binary representation of floating-point numbers

trekhleb · github.com · 2 years ago
Have you ever wondered how computers store floating-point numbers like 3.1416 or 9.109 × 10⁻³¹ in memory, which is limited to a finite number of ones and zeroes?

The IEEE 754 standard describes a way of using a fixed number of bits to store numbers of a wide range, including small fractional numbers. To get the idea behind the standard, we might recall scientific notation: a way of expressing numbers that are too large or too small to be conveniently written in decimal form.

I've tried to describe the logic behind converting floating-point numbers from a binary format back to the decimal format in the image below. A 16-bit number is used here for simplicity, but the same approach works for 32-bit and 64-bit numbers as well.

The article links to two JavaScript examples: one showing how to convert an array of bits to a floating-point number, and one showing how to see the actual binary representation of a floating-point number in JavaScript.
[Figure: Binary representation of floating-point numbers]
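The bits-to-float conversion the article describes can be sketched in JavaScript. This is a minimal illustration, not the article's actual code: it decodes a 16-bit IEEE 754 half-precision number (1 sign bit, 5 exponent bits with a bias of 15, and 10 fraction bits) from an array of bits.

```javascript
// Decode a 16-bit IEEE 754 half-precision number from an array of 16 bits.
// Bit layout: [sign | 5 exponent bits | 10 fraction bits], exponent bias = 15.
function bitsToFloat16(bits) {
  const sign = bits[0] === 1 ? -1 : 1;

  // Exponent: 5 bits read as an unsigned integer.
  let exponent = 0;
  for (let i = 1; i <= 5; i += 1) {
    exponent = exponent * 2 + bits[i];
  }

  // Fraction: 10 bits with weights 2^-1 … 2^-10.
  let fraction = 0;
  for (let i = 6; i <= 15; i += 1) {
    fraction += bits[i] * 2 ** (5 - i);
  }

  if (exponent === 0) {
    // Subnormal number: no implicit leading 1, fixed exponent of -14.
    return sign * 2 ** -14 * fraction;
  }

  // Normalized number: implicit leading 1 and biased exponent.
  return sign * 2 ** (exponent - 15) * (1 + fraction);
}

// 0 01111 1000000000 → +1 · 2^(15−15) · (1 + 0.5) = 1.5
console.log(bitsToFloat16([0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0])); // 1.5
```

The same decoding idea carries over to 32-bit and 64-bit formats; only the bit widths and the exponent bias (127 and 1023, respectively) change.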
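The second example the article mentions, seeing the actual bits of a number, can be sketched with a `DataView` over an `ArrayBuffer`. This is an illustrative sketch, not the article's code: JavaScript numbers are 64-bit IEEE 754 doubles, so writing one with `setFloat64` and reading the bytes back exposes its binary representation.

```javascript
// Return the 64-bit IEEE 754 representation of a JavaScript number
// as a string of 64 '0'/'1' characters (sign, 11 exponent bits, 52 fraction bits).
function floatToBinaryString(number) {
  const buffer = new ArrayBuffer(8);
  const view = new DataView(buffer);
  view.setFloat64(0, number); // big-endian by default

  let result = '';
  for (let byteIndex = 0; byteIndex < 8; byteIndex += 1) {
    result += view.getUint8(byteIndex).toString(2).padStart(8, '0');
  }
  return result;
}

// 0.5 = +1 · 2^-1, so sign = 0, biased exponent = 1022 = 01111111110, fraction = 0.
console.log(floatToBinaryString(0.5));
```

Grouping the output as 1 + 11 + 52 bits makes the sign, exponent, and fraction fields of the diagram above directly visible.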


