Studio Concepts: Why Record at 32-Bit Float?

Pros and Cons of 32-Bit Floating Point Audio Recording

Stephen Mcleod Blythe · 06/26/24

I am probably what most people would consider to be something of a "technical person." Since I was wee, I have always enjoyed learning about and experimenting with different kinds of technology, and ultimately been fortunate enough to forge a career off the back of that curiosity. When it comes to making music, however, things have been notably different. I never really paid much attention to what the individual settings within my DAW did, or took any real interest in the specifics of how or why audio worked. For me, the creative process itself and the end result were all that really mattered.

One of the many facets of potential knowledge that I threw upon this particular bonfire of deliberate ignorance was bit depth. I had some vague idea that 16-bit was the standard, and knew that it was always a pain to have to convert my samples before loading them onto the MPC 2500 as it didn’t support 24-bit WAVs, but aside from that—it was all just details.

Once I started recording videos as part of my incredible rise to fame on YouTube, and paying more attention to the challenges involved in recording audio on the fly, I realised that there was actually probably some value in doing a bit more research—especially as I kept hearing the term "32-bit float" getting bandied about. I had always just stuck with 16-bit out of habit. If I don’t even really need 24-bit for my tunes, then why would I bother with 32…and why is it floating?!

Bit Depth and Dynamic Range

Before diving into the err, pool of 32-bit (I tried), it’s probably worth explaining briefly what bit depth actually is, or at least, why it matters in the world of digital audio recording. This is obviously a topic which can very easily become incredibly complicated, and I am clearly no scientist. However, I am going to dare to suggest that at its core, it is a fairly straightforward concept.

The nature in which digital data is stored means that it has discrete values, i.e. individual numbers. In order to represent information accurately (or at least to a degree where it appears indistinguishable from the source), there need to be enough of those values available. This is especially difficult with something like an analog soundwave, which by its nature is not made up of distinct, individual pieces. You might already have come across something like this in practice with digital synthesizers, where you can sometimes hear the "stepping" as you adjust a VCO or filter. The greater the bit depth, the more values can be assigned, and the smoother any resulting variations in the signal will be. When it comes to audio recordings, this means that more dynamic range can be stored. In other words, a greater spread of signal strength or amplitude, which is handy for reproducing all of the nuances involved.
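If you fancy seeing that stepping effect in code, here is a tiny sketch (in Python, with a made-up signal value) of what snapping a signal to a given bit depth does:

```python
def quantize(x, bits):
    """Snap a -1.0..1.0 signal value to the nearest of 2**bits discrete levels."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

# At a low bit depth the "steps" are coarse; at 16-bit they are already tiny.
print(quantize(0.3333, 3))    # 0.25
print(quantize(0.3333, 16))   # 0.33331298828125
```

At 3 bits, the nearest available step is a long way from the true value; at 16 bits the error is already minuscule, which is why the stepping becomes inaudible.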

16-bit audio (apparently) has the ability to record 65,536 different amplitude values per sample. In contrast, 24-bit can record almost 17 million, a resolution which means it ostensibly provides a far more accurate representation of the sound in question. I may not understand much, but a jump from under a hundred thousand to millions seems to be quite the difference, and upon discovering this, I can’t help but wonder if I should have reconsidered my apathy towards bit-depth choices much earlier.
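For the curious, those counts, along with the rule of thumb of roughly 6.02 dB of dynamic range per bit that applies to integer (fixed-point) PCM, can be checked with a few lines of Python:

```python
import math

# Discrete values per sample, and the resulting theoretical dynamic range
# (20 * log10 of the value count, i.e. about 6.02 dB per bit) for integer PCM.
for bits in (16, 24):
    values = 2 ** bits
    dynamic_range_db = 20 * math.log10(values)
    print(f"{bits}-bit: {values:,} values, ~{dynamic_range_db:.0f} dB")
# 16-bit: 65,536 values, ~96 dB
# 24-bit: 16,777,216 values, ~144 dB
```

Note that this per-bit rule only holds for integer formats; 32-bit float, as the next section explains, plays by different rules entirely.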

32-Bit Float

The obvious assumption at this point is that 32-bit float audio can store far more information than either 16 or 24-bit—and that is kind of correct. If we’re talking about amplitude values, the number of individual values that can potentially be recorded works out at something like 4.2 billion (gasp!). However, what is actually more interesting is the way in which 32-bit float technology achieves that.

Rather than relying on a fixed scale of integer values, it instead expresses the stored information using a mathematical concept known as floating point numbers. From my limited understanding of mathematics, these work a lot like scientific notation: each number is stored as a "detail" part (the mantissa) multiplied by a "scale" part (the exponent), which allows a fixed number of bits to cover an enormous range of magnitudes. In practice, this means that a single audio recording can capture everything from the faintest whisper to a signal far louder than full scale, and thus more accurately represent a diverse range of amplitudes.
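To make that concrete, here is a small Python sketch that unpacks a 32-bit float into those three fields, following the standard IEEE 754 single-precision layout that 32-bit float audio uses:

```python
import struct

def float32_parts(x):
    """Split an IEEE 754 single-precision float into its three bit fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                  # 1 bit: positive or negative
    exponent = (bits >> 23) & 0xFF     # 8 bits: the "scale"
    mantissa = bits & 0x7FFFFF         # 23 bits: the "detail"
    return sign, exponent, mantissa

# 0.5 is stored as +1.0 x 2^-1: sign 0, biased exponent 126, empty mantissa.
print(float32_parts(0.5))    # (0, 126, 0)
```

The 8-bit exponent is what gives the format its vast range: it can shift that 23-bit "detail" up or down by huge factors, which is exactly the latitude we will rely on in the next section.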

Use Cases

Whether they realise it or not, anybody that has ever done any sort of recording will understand the importance of "gain staging." At its simplest, this means ensuring that you have enough headroom so that the highest levels of the signal or signals that you want to capture aren’t going to clip and cause unpleasant distortion. Similarly, you want to ensure that your incoming signal is strong enough that it can be used without having to contend with any additional unwanted sounds (e.g. the "noise floor").

This is all well and good when you are in a controlled environment, or where there are unlikely to be sudden spikes in volume. However, that isn’t always going to be the case. In an interview scenario, for example, you may find that your otherwise softly spoken guest suddenly lets out an unexpectedly riotous laugh. Without some kind of built-in compression or limiter, this just doesn’t sound good—and is one of the reasons that 32-bit float audio has become so popular with products like the Rode Wireless Pro and Zoom H6essential field recorders.

The dynamic range that 32-bit float is capable of handling means that there is much greater latitude to cope with those differences. If you are pressed for time, or unable to predict what levels you might have to deal with, recording in 32-bit float can be an excellent solution. You can tame those unruly peaks in editing, and give a clean boost to sections that would otherwise be too quiet in the mix. As you might imagine, this kind of logic also applies to a variety of other scenarios, such as mixing for live sound, or recording instruments with a large dynamic range.
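A toy example in Python, using made-up sample values, shows why this matters:

```python
# A take that peaks 6 dB "too hot" (2.0x full scale). Values are hypothetical.
samples = [0.1, 0.5, 2.0, -1.7, 0.3]

# Fixed-point capture: anything beyond full scale is flattened at the
# moment of recording, and that information is gone for good.
clipped = [max(-1.0, min(1.0, s)) for s in samples]

# Floating-point capture keeps the true values, so the whole take can
# simply be turned down afterwards with nothing lost.
peak = max(abs(s) for s in samples)
rescued = [s / peak for s in samples]

print(clipped)    # [0.1, 0.5, 1.0, -1.0, 0.3]
print(rescued)    # [0.05, 0.25, 1.0, -0.85, 0.15]
```

In the clipped version, the two loud peaks have been flattened into identical ceilings and the waveform shape is destroyed; in the float version, dividing by the peak restores everything to within full scale with the shape intact.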

Why Not Use 32-Bit Float All the Time?

So if 32-bit float is so great, then why don’t we all just use it for everything? The first thing to understand is that floating point numbers trade precision for range: of the 32 bits, only 24 carry the "detail" of the signal, while the other 8 set the scale. At any given level, then, a 32-bit float recording is only about as precise as a 24-bit integer one—the extra bits buy range, not finer steps. Therefore, if a massive range is not required, then 32-bit float may not be the best choice.

This is particularly the case for final mastered recordings. For example, the standard for CDs has long been 16-bit, which might not sound like much after reading through all of this—but is higher quality than most streaming services offer. If you want to push the boat out, high-quality audio is generally offered at a bit depth of 24—but that is fairly unusual.

Part of the reason, of course, that Spotify et al don’t just provide the highest quality audio in 32-bit float by default is that the resulting file sizes are much larger. MP3 became the world’s most popular format for music for a reason: with compressed formats, whole albums can be transferred quickly and easily, even on limited data plans and slow Internet speeds.

In situations where you don’t need to worry about unexpected peaks in the audio, it might also be advantageous to stick with 24-bit to make your workflow easier. If we accept that the final output of our track is going to end up as a 24 or 16-bit WAV file anyway (and probably just a grubby MP3 at some stage), then using that resolution from the start means there will be less post-processing involved, and less risk of distortion being introduced at a later step. While audio software and enlightened musicians like us may be able to handle 32-bit float perfectly fine, that isn’t necessarily the case across the board. Dragging a 32-bit float file into a video editing program that doesn’t support the format natively will trigger a conversion that could cause unexpected changes. To avoid this, you need to take extra steps in post-processing to ensure that any peaks are limited before exporting to the lower bit depth. If you don’t need the range that 32-bit float provides, why not just save yourself that hassle in the first place?
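That extra step can be as simple as checking the peak level before converting down. A minimal Python sketch (a crude peak normaliser rather than a proper limiter) might look like this:

```python
def to_int16(samples):
    """Convert float samples (nominal -1.0..1.0 range) to 16-bit integers,
    pulling the whole take down first if any peak exceeds full scale."""
    peak = max(abs(s) for s in samples)
    gain = 1.0 / peak if peak > 1.0 else 1.0   # only ever attenuate
    return [round(s * gain * 32767) for s in samples]

# The 1.5 peak would clip hard in 16-bit; reducing the gain first preserves it.
print(to_int16([0.5, -1.5, 0.25]))    # [10922, -32767, 5461]
```

A real mastering chain would use a proper limiter and dither at this stage, but the principle is the same: get the peaks inside full scale before the narrower format forces the issue.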


When I was first trying to wrap my head around this, it helped to think of it in relation to the world of digital cameras. This may result in a terrible over-simplification that I will regret immediately upon publication of this article, but it could provide some useful parallels, so I’ll take that chance. If we view things through the lens of a photographer (sorry, sorry), then 32-bit float can be compared to shooting RAW.

With these files, a much greater range of data about a scene is captured, which allows far more latitude when post-processing. Accidentally under-exposed your scene? No problem, you can just bring the levels up in Lightroom without worrying about introducing additional digital noise. Blown your highlights? Not an issue. Just dial them down a bit. Once you’ve gone through that process and landed on how your final masterpiece should look, all of the extra information provided by a RAW file is unnecessary. It makes far more sense to convert the picture to a compressed format like JPEG which has a lower file size, and can be shared more readily.

In many ways, this is very similar to the idea of 32-bit float audio. It can be advantageous at the recording stage, when you want to gather as much information as possible across a wide range, but once we move on to mixing, mastering, and online distribution—it isn’t necessary, or even the best choice.


The era of 32-bit float audio has officially arrived, and I for one am glad that I finally managed to get over my indifference towards bit depth, because in practice it actually does make a difference. 32-bit float has proven to be incredibly useful for capturing recordings spontaneously, where I just want to focus on what I’m doing in the moment, as opposed to worrying about getting levels just so. As a relatively new technology, it is a great tool to have in our arsenal—so long as we know when to deploy it most effectively.