There’s some confusing stuff in this response, so before I get into the weeds, for all the people reading out there: you don’t lose quality by using your operating system’s volume control.
Okay, with that out of the way, let’s say you wanted to adjust the volume of a digital stream that’s composed of samples. Each sample represents the original analog signal’s voltage at the slice of time when it was encoded. The number of slices per second is the sample rate, expressed in kilohertz. The voltage of the original signal is converted to a number and stored as a binary value whose length, expressed in bits (each of which can be either a one or a zero), is referred to as the stream’s bit depth.
So you could have a stream whose sample rate is 44.1kHz, for example, and that would mean it was sampled 44,100 times per second. That same stream might have a bit depth of 16, meaning the original signal’s voltage level was divided into 65,536 possible values. Depending on some other factors, that stream might just be CDDA (a compact disc’s digitally encoded song information).
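To make that concrete, here’s a rough sketch in Python of how a tone gets turned into those samples (the 1kHz tone and one-millisecond length are just arbitrary choices for illustration):

```python
import math

SAMPLE_RATE = 44100                     # slices per second (44.1kHz)
FULL_SCALE = 2 ** 15 - 1                # max value of a signed 16 bit sample

def sample_sine(freq_hz, duration_s):
    """Quantize a full-scale sine wave into signed 16 bit integers."""
    count = int(SAMPLE_RATE * duration_s)
    return [round(FULL_SCALE * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE))
            for n in range(count)]

samples = sample_sine(1000, 0.001)      # one millisecond of a 1kHz tone
print(len(samples))                     # 44 samples
print(all(-32768 <= s <= 32767 for s in samples))   # True: every value fits in 16 bits
```

Each number in that list is one “slice” of the original voltage, quantized to one of the 65,536 possible levels.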
Now let’s say you had a computer handling that stream, and a user who can only stand to listen at half volume asked it to cut the stream’s volume in half.
One way to do that job would be to decode the stream back into an analog voltage, attenuate it, re-encode it, and then send it on its merry way. That would incur a decoding operation, require routing the signal to dedicated hardware to perform the attenuation and send it back, and then an encoding operation to turn that now half-as-loud signal back into a digital stream that can be sent wherever it’s destined.
Another way of handling that operation is simply dividing the 16-bit value of every sample in the stream by two, something that computers are very good at doing quickly.
It should come as no surprise then that the latter process is generally how it’s done.
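Here’s that halving operation sketched out (assuming signed 16-bit samples):

```python
def halve_volume(samples):
    """Digital attenuation: divide each signed 16 bit sample by two."""
    # Floor division is effectively an arithmetic right shift; any sub-LSB
    # detail is simply dropped.
    return [s // 2 for s in samples]

print(halve_volume([32767, -32768, 1001, -3, 0]))   # [16383, -16384, 500, -2, 0]
```

Note what happens to the odd values: the half-a-step remainder just disappears, which is exactly the “one bit or less” variation discussed next.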
But does that reduce quality or result in worse signal? That’s the question, right?
Well, any variation of one bit or less could be essentially deleted. A person could say “ah hah! The signal has been degraded!” And they’d be technically correct, but it wouldn’t matter.
In our example, the computer whose hands are all over our precious data stream is sending that adulterated information to a DAC, which, true to its moniker, will convert the digital stream into an analog signal. That analog signal will then be sent to an amplifier with an analog volume control and from there to a set of speakers.
The amplifier’s analog volume control is a resistor in the shape of a 3/4 arc with a wiper that can move back and forth across it, allowing anything put in one side to be resisted (or, in the case of our AC signal, impeded) by a varying amount depending on the user’s selected position of the knob attached to the wiper.
Okay, but what is resisting a signal? Well, a resistor will drop the voltage between its two ends in proportion to its resistance, measured in ohms. More ohms means more resistance.
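As a side note, a lone resistor only drops voltage when current flows through it; the usual way to model a volume pot is as a two-resistor voltage divider, which a few lines of Python can sketch (the resistor values here are made up):

```python
def divider_out(v_in, r_top, r_bottom):
    """Classic voltage divider: Vout = Vin * Rb / (Rt + Rb)."""
    return v_in * r_bottom / (r_top + r_bottom)

# A pot with the wiper at its midpoint splits the track into two equal
# halves, so a 1V input comes out at 0.5V.
print(divider_out(1.0, 5000.0, 5000.0))   # 0.5
```

Turning the knob just changes how the track resistance is split between the top and bottom halves.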
For the purposes of our example, let’s assume the user has chosen an amplifier and DAC combination such that the amplifier’s volume control at its minimum setting applies the minimum resistance necessary to completely attenuate the DAC’s maximum output, and applies no resistance at its maximum setting. In other words, it all works as expected and is perfect.
In this case, what’s the difference between sending a stream with data corresponding to a .5V signal that gets amplified as opposed to a stream with data corresponding to a 1V signal that goes through a resistor to bring it down to .5V before being amplified?
nothing
In fact, the digitally attenuated stream will probably sound better (closer to the original) because it’s not subject to the Bourns/Alpha ppm lottery!
Now.
Don’t let this stop you from listening to music however you like. My ass is ITT admitting to using 40-year-old record players to make sounds to cook to. But don’t worry about the computer’s volume control.
Yes, if everything aligns perfectly, there is no impact. The clean bit shift only happens when you set the volume to exactly half, but that’s probably not going to be the case. The app volume control alters the signal slightly, multiplied by the OS altering it slightly, which is virtually certain to introduce a floating point rounding error on every single sample, so now the ratios between your samples are ever so slightly different. And for what reason? What did that operation gain you?
And no you’re not going to hear a difference, but the point of being an audiophile is less about hearing a difference, and more about good quality preservation practices.
okay, let’s consider the worst possible rounding error in a 16 bit division operation:
i’ll divide the level of one sample by some number that will not divide evenly, let’s say three, and consider the impact of the rounding error. to make sure there will be a rounding error, i’ll choose a number that three has a really bad time dividing into, say 65536.
65536/3=21845.3 with the .3 component repeating. perfect, that’s exactly horrible!
so if we were to just do the simplest rounding possible to fit that into a 16 bit integer, the decimal component is dropped, rounding down to 21845.
but what is the significance of that error? a sample’s volume level that’s a full integer value off would introduce a .003% error in level, but this sample isn’t off by a whole integer: it was supposed to be 21845.3 and we stored 21845. so the error that’s introduced is only about .001%!
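that arithmetic is easy to check in a couple of lines (same numbers as above):

```python
exact = 65536 / 3          # 21845.333..., the level we actually wanted
truncated = 65536 // 3     # 21845: the decimal component gets dropped
error_pct = (exact - truncated) / exact * 100
print(truncated)                # 21845
print(round(error_pct, 4))      # 0.0015 (percent), a tiny error indeed
```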
that’s a pretty tiny error, but what if it was periodic and consistent enough to produce a harmonic component? that’d create harmonic distortion!
let’s say there’s a periodic and consistent rounding error with a frequency of 1000Hz. so every thousandth of a second, the rounding caused by the volume being set at 1/3 makes a sample off by .001%. such a repeating error would introduce a harmonic component into the signal the dac produces, measurable as harmonic distortion at 1000Hz!
but how measurable? well, if, for example, the harmonic component introduced was at its absolute worst and oscillated between a positive-going error and a negative-going error, it could introduce a peak at 1000Hz of…
.002% of your dac’s full scale. so far below the noise floor it’s immeasurable.
even if you daisy-chained two software volume controls set at 1/3, doubling that error, it would be immeasurable. although, if we picked a real software package instead of a bunch of hypothetical worst cases, the gain changes would be combined and applied once in order to minimize just this problem. the people writing that software are pretty smart, and doing that saves their program a step!
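here’s a toy sketch of why combining the gains first helps: two cascaded stages that each round don’t always agree with a single stage at the combined gain (the 1/2 and 1/3 gains are arbitrary):

```python
def stage(samples, gain):
    """one naive volume stage: scale, then round back to integer samples."""
    return [round(s * gain) for s in samples]

samples = list(range(32768))                 # every positive 16 bit level

chained = stage(stage(samples, 1/2), 1/3)    # two stages, two roundings
combined = stage(samples, (1/2) * (1/3))     # gains combined, one rounding

diffs = sum(1 for a, b in zip(chained, combined) if a != b)
print(diffs > 0)   # True: the two pipelines disagree on some samples
```

rounding once instead of twice keeps the output as close as possible to the ideal value.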
but measurability or audibility isn’t the point, as you said. the point is to reproduce the sound as accurately as possible! so what really matters is how the tiny rounding error from prime-denominator volume settings compares to the error of the analog volume control that whatever signal the dac manages to reconstruct from our mangled stream gets put through anyway. we’re trying to adjust the volume down to a comfortable listening level, after all.
so how bad is the volume control? well, if i were to go to mouser and look at the potentiometers section, i could choose one with a tolerance as low as… .5%! and that’s a three-gang model that costs $50!
but what if i used a precision potentiometer? why, there’s a .15% tolerance part that’s available for the very reasonable price of $825!
okay, the precision parts are out of my price range, but those $50 ones could work. tolerance is just a number anyway, right? we want linearity! what does the datasheet say about linearity… 2%!
that’s not even considering the amplifier design’s distortion. lets assume it’s perfect.
so just passing the signal from the dac through the amp’s volume control causes possibly hundreds of times more error and distortion than adjusting the volume control in the computer.
i get the pursuit of the best possible reproduction, but the computer volume’s got those cermet pots beat hands down.
Interesting point. What about the case where you have your digital volume set to 1%? Would this not squeeze the samples into 1/100 the dynamic range? If I set my volume to 1% it seems to me like those samples now have to all exist within the bottom 1% of the 16b range. Do you not lose at least 5-6 bits of precision on your signal doing this?
You don’t lose precision when you lower the volume (in either an analog or digital realm). You lose actual information!
Let’s say you have a recording you can only listen to with your volume at the 1% setting. Analog or digital, it doesn’t matter.
Your whole system has an acoustic noise floor at something like, idk, 10 acoustic decibels. That’s a really charitable number, because I’ve never measured one that low, and it corresponds to the loudness of another healthy person’s breathing at rest. To give you an idea of how quiet that is, the acoustic decibel scale generally puts a ticking mechanical watch at twice as loud (20 decibels).
I don’t want to talk about decibels because I don’t want to explain the math in the detail I’ve been giving these posts, but we gotta at least cover a little:
Decibels are a measure of sound energy. Their scale is logarithmic, so the base of the log function determines how many decibels make for twice as much.
There are different decibels for measuring in different mediums with different references and they even use different logarithm bases.
Acoustic decibels are log10, so that 20 decibel ticking wristwatch is twice as loud as a person breathing and half as loud as whatever the workplace safety scale says 30 decibels is equivalent to.
Okay, so now that we have a floor, we need to establish a ceiling. Let’s say that you did everything right and hooked all your stuff up, turned the volume on the amplifier all the way down, put on your headphones, played a maximum volume test tone, maxed out the volume in the software, then turned the amplifier’s volume control up until it caused you immediate physical pain. If you have really good hearing, that’s 115 acoustic decibels. Let’s say you got to 120 with the amplifier’s volume control all the way up.
Okay, so the noise floor of your headphones on your head is 2^11 times as quiet as the loudest sound you can tolerate hearing.
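Using the rule of thumb from above (each 10 decibel step reads as roughly a doubling), the arithmetic is just:

```python
ceiling_db = 120   # the pain threshold you hit with the amp maxed out
floor_db = 10      # our charitable acoustic noise floor

doublings = (ceiling_db - floor_db) / 10   # one perceived doubling per 10dB
print(doublings)                            # 11.0 -> a 2**11 span, floor to ceiling
print(2 ** doublings)                       # 2048.0
```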
Now you set the volume control to 1%. Doesn’t matter which one. Everything gets 99% quieter. The parts of the signal that were at 120 decibels before are now at 1.2 decibels. They have been divided by 100, and it’s possible that rounding errors have added .006% error to their harmonic content. .006% of 1.2 is .000072 decibels. Not only is the loudest sound you can stand to hear now quieter than a person breathing, it’s below the noise floor of your system. Far, far below the noise floor. And any rounding error from dividing by 100 is as well!
Okay but what happens when you’re listening to music though? Let’s put aside all that hypothetical stuff and get rockin! Instead of talking about test signals and boring crap, let’s talk about a song!
So, same setup as before, but now you’re listening to a recording of someone playing the banjo while rocking in a chair. There’s a lot of different harmonic content in this signal: the birds chirping, the person’s breathing, the wind, the chair creaking on the boards of the porch and, of course, the instrument itself!
All these different things are at different volumes, and they represent components of the harmonic content of the signal you’re listening to. When you turn the recording down, you’re attenuating the signal. All of the signal. If you apply enough attenuation through your chosen volume control to lower the level of the banjo by 40 acoustic decibels, then all the other components of the signal are lowered by 40 decibels too. If they were previously at 50 acoustic decibels through your headset, they’re now part of the noise floor.
The quietest information is simply lost.
Edit: massive amounts of information have been simplified here, enough to make this post pretty inaccurate. Please do not use this as a reference for understanding how we measure or interact with sound. I’m sorry for not going into greater detail, but it’s too early to explain the relationship and history of acoustic decibels and decibels per volt.
I’m sorry you have to type so much. I am familiar with most of it, but I appreciate your effort to make sure we’re on the same page without being a douche about it lol. It sounds like we’re saying similar things, but I don’t understand why lower precision is different from losing information. To me, that’s the same thing; it’s a lossy operation.
So the thing is, I have a pair of desktop speakers without any physical volume control that I primarily use for convenience. And for whatever reason, a comfortable listening volume with them is between 1-8% in the OS volume control. I guess the internal amp is just hardwired to be way too loud?
Anyway, I assume that this setup is resulting in objectively lower quality output than if I were to have a 100% signal going to a decent quality DAC/amp with analog volume outputting to the same speakers. And not in a “technically” kind of way, but in a very real “we just crushed the signal into 1/25th of its original scale” way. Would you agree? Am I mistaken?
no worries. i’ve been enjoying going back through this. you’re basically me 25 years ago, emailing the winamp ppl to find out which volume control i should use to turn down (for what!).
so there’s a misconception here between compressing a signal and attenuating it. imagine you are looking at a frequency spectrum chart of some song instead of listening to it. it’s got some loud sounds, which show up as big peaks on the chart, some quiet sounds, which show up as small peaks, and a noise floor, which is the stuff on the chart that’s not a peak at all.
if you plug a potentiometer in between your signal and the spectrum analyzer and turn down the volume, you’ll see all the peaks, loud and small, reduced in amplitude by the same amount. this is called attenuation. the quieter sounds could be reduced until they are part of the noise floor and become imperceptible, while the louder sounds would still show up.
it doesn’t matter whether you achieve attenuation by dividing the 16 bit level component of a stream of samples or by using a resistor as a voltage divider. the quiet and loud sounds are affected equally. those two ways of achieving attenuation function the same because they are performing the same operation.
now let’s say you plug a rack mount compressor effects module in between your signal and your spectrum analyzer instead. you could apply more compression to the signal and achieve exactly what you describe: a smaller distance between the quiet and loud sounds, reduction of the original scale, removal of dynamic range, effective bit depth reduction! it would be actual factual “we just crushed the signal into 1/25th of its original scale”.
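here’s a little sketch of the difference (the threshold and ratio numbers are made up, not any particular module):

```python
def attenuate(samples, gain):
    """turn everything down by the same factor; ratios between sounds hold."""
    return [s * gain for s in samples]

def compress(samples, threshold, ratio):
    """above the threshold, gain shrinks: loud peaks come down more than quiet ones."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

peaks = [1.0, 0.1]                      # one loud peak, one quiet peak

att = attenuate(peaks, 0.25)
print(round(att[0] / att[1], 3))        # 10.0 -> loud/quiet ratio unchanged

comp = compress(peaks, 0.2, 4.0)
print(round(comp[0] / comp[1], 3))      # 4.0 -> the peaks got crushed together
```

attenuation preserves the relative shape of the chart; compression squeezes it.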
and if you used a module (or software package) with the capacity for it, you could tie the compression ratio to a gain control so that the compressors output got quieter when you turn the compression ratio up, resulting in more heavily compressed sounds at a quieter volume. that’s a neat little mastering trick to make recordings sound “lively” and “intimate”. makes all those pick scrapes and finger swishes stand out alongside the plucked strings.
you could also do the inverse, make all the quiet sounds louder, so that the guitar is as loud as the kick drum’s transient, and it would make your whole song sound much louder and stand out better against background noise in a difficult listening environment like a car radio or a cell phone inside a solo cup.
there are even modules that do the opposite, called… expanders! they do what you might expect, increase the dynamic range between loud and quiet sounds. a company called DBX made models for use in home stereos in between tape decks and the amplifier in order to reduce the noise floor of tapes.
but none of that is attenuation, the operation that your volume control provides.
and you’re correct, both compression and attenuation are lossy operations, no matter whether they’re done with analog electronics or by a microprocessor operating on a buffer somewhere in memory. the difference is that attenuation is literally required to prevent permanent hearing loss and possible equipment damage, while compression is not.
it doesn’t matter whether you achieve attenuation by dividing the 16 bit level component of a stream of samples or by using a resistor as a voltage divider.
This is the part where I’m not following. In my head, if you’re using analog hardware of sufficient quality, you can attenuate the signal to be very quiet but still preserve its dynamic range. In fact, the DAC is already outputting a very weak but faithful analog reproduction of the signal, and an amp with a decent S/N ratio is able to bring that very weak signal up to a listening volume without introducing enough noise to matter.
Hypothetically, if for some reason you took the signal post-amp, used a pot to attenuate it back down to the post-DAC level, and ran it through another amp, you would theoretically still have the same signal (I understand that in the real world we would start amplifying noise and the signal would degrade, but stick with me). Nothing about the process necessarily introduces noise and thus destroys the signal; you’re only limited by the quality of the components at that point. If you had an infinite chain of theoretically perfect amps and pots, you could repeatedly attenuate and amplify the signal forever without ever losing any quality. It’s an analog process that theoretically preserves the signal, +/- some amount of error due to physics.
Meanwhile, 16b is 16b. If you start shrinking all samples relative to each other (e.g. down to 1/64th the original volume, or 10b of resolution), different values inevitably have to clamp to the same values (fitting 64k values into 1,024 values), losing information and resulting in poorer quality. If you then try to send that 10b signal through a DAC/amp to achieve the same listening volume that you would have had before digital attenuation, it’s just a 10b signal bit-shifted up. All your LSBs are 0s. You can’t possibly attenuate digitally and then amplify in any way and hope to get the same signal back. It’s a discrete math process which destroys the signal by design.
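Here’s a quick sketch of what I mean (the sample value and divisor are arbitrary):

```python
original = 12345            # some 16b sample; its low bits carry fine detail

quiet = original // 64      # digital attenuation to 1/64th: 192
restored = quiet * 64       # "amplify" it back up: 12288, low 6 bits all zero

print(original - restored)  # 57 -> that detail is gone for good
print(restored % 64)        # 0  -> all the restored LSBs are zeros
```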
the effect of attenuation is the loss of intensity of signal.
loss. it goes away.
attenuation is a lossy process. information in the signal is literally absorbed and radiated away as heat. it cannot be reconstructed because it’s gone.
it isn’t an analog process that theoretically preserves the signal, it’s an analog process that explicitly destroys a component of the signal.
but what if it wasn’t…
okay, let’s assume for a second that you have a signal with the same harmonic content as one of my previous examples: a high peak when viewed on a frequency spectrum chart, a low peak, and everything else. these three parts of the signal represent the loud, quiet and “silent” parts of the signal, respectively. unlike the previous example, we’ll let our noise floor for the silent parts be infinitely low. for now. so you start hooking up your perfect amps and pots in line, setting them all to 1% or so, and listening. it’s sounding pretty good at first, but once you get a few deep, you start getting white noise and clicks and pops and all kinds of craziness.
what the hell! all this equipment is theoretically perfect, why is there noise? it can’t be coming from the perfect equipment!
it’s not. it’s coming from the medium. in our theoretical example all these amplifiers and pots are hooked up with conductive wire. the signal has to propagate through that wire from component to component. atoms of copper are being excited and losing their excitation in proportion to the signal. their state of excitation is being amplified over and over again. the noise is in the wire. by amplifying it over and over again you made it audible. you can’t ever escape it. signed, listening to noise gang. come to my modular synth show.
okay, so now that the possibility of ever attenuating a signal without losing information is hopefully put to rest, lets turn to the digital attenuation of the signal in comparison.
level attenuation in the digital domain is also a lossy process. what’s being misunderstood here is that the levels aren’t being shrunk relative to each other; they’re each being divided, and the signal that’s reconstructed by the DAC no longer contains the quiet parts.
just like those quiet parts were absorbed and radiated as heat by the resistor, the digital version of attenuation does away with the need for all that physics crap and simply deletes them from the stream.
if the levels were being shrunk relative to each other, you’d be compressing the signal like when you use the bitcrusher pedal for your guitar and there would be lots of harmonic distortion. but attenuation and compression are different processes and have significantly different results.
consider a quiet sound, your 1/64th volume signal. a sine wave. it’s encoded to represent 1/64th of the maximum level of the adc’s input because, when it was recorded, it represented 1/64th of the maximum level of the preamp/microphone/whatever that was plugged into the adc.
is the quiet sine wave of lower quality than one that uses the full bit depth of the adc’s output because it’s intended to represent the maximum level that the adc’s input saw from the preamp/microphone/whatever?
of course it isn’t. it just wasn’t loud.
and if your loud sine wave was electrically generated by a theoretically perfect function generator, which contains no distortion or other sonic content, before being sent to the adc, would it be more damaged if its amplitude were divided by 64 before being decoded, or if it were decoded and sent through a resistor whose value was chosen specifically to dissipate 63/64ths of it as heat in order to make it as quiet as the quiet sine wave?
of course it wouldn’t.
to your last question, let me rephrase it into something I can agree with: you cannot possibly attenuate and then amplify in any way and hope to get the same signal back. It’s a lossy process which destroys the signal by design.
There’s some confusing stuff in this response so before I get into the weeds, for all the people reading out there: you don’t lose quality by using your operating systems volume control.
Okay, with that out of the way, let’s say you wanted to adjust the volume of a digital stream that’s composed of samples. Each sample represents the original analog signals voltage at that slice of time when it was encoded. The number of slices per second is the sample rate, expressed in kilohertz and the voltage of the original signal is converted to a number, which is stored as a binary value whose length is expressed in bits, each of which can be either a one or zero and is referred to as the streams bit depth.
So you could have a stream whose sample rate is 44.1khz for example and that would mean that it was sampled 44,100 times per second. That same stream might have a bit depth of 16, and that would mean that the original signals voltage level was divided into 65,536 possible values. Depending on some other factors, that stream might just be cdda (a compact discs digitally encoded song information).
Now let’s say you had a computer that was handling that stream and was asked to reduce the volume of the stream by half by a user who can only stand to listen to it at that volume.
One way to do that job would be to decode the stream back into an analog voltage, attenuate it, recode it and then send it on its merry way. That would incur a decoding operation, require routing of that signal to either dedicated hardware to perform the attenuation and send the signal back and an encoding operation to make that now half as loud signal back into a digital stream that can then be sent wherever it’s destined.
Another way of handling that operation is simply dividing every slice of the streams 16 bit component by two, something that computers are very good at doing quickly.
It should come as no surprise then that the latter process is generally how it’s done.
But does that reduce quality or result in worse signal? That’s the question, right?
Well, any variation of one bit or less could be essentially deleted. A person could say “ah hah! The signal has been degraded!” And they’d be technically correct, but it wouldn’t matter.
In our example, the computer whose hands are all over our precious data stream is sending that adulterated information to a dac, which true to its moniker will convert the signal from a digital stream into an analog signal. That analog signal will then be sent to an amplifier with an analog volume control and from there to a set of speakers.
The amplifiers analog volume control is a resistor in the shape of a 3/4 arc with a wiper that can move back and forth across it, allowing anything put in one side to be resisted (or in the case of our ac signal, impeded) a varying amount depending on the users selected position of the knob attached to the wiper.
Okay but what is resisting a signal though? Well, a resistor will reduce the voltage between its two ends proportional to its resistance, measured in ohms. More ohms means more resistance.
For the purposes of our example, let’s assume the user has chosen an amplifier and dac combination such that the amplifiers volume control at minimum setting applies the minimum resistance necessary to completely attenuate the dacs maximum output and is not applying any resistance at its maximum setting. In other words, that it all works as expected and is perfect.
In this case, what’s the difference between sending a stream with data corresponding to a .5V signal that gets amplified as opposed to a stream with data corresponding to a 1V signal that goes through a resistor to bring it down to .5V before being amplified?
nothing
In fact, the digitally attenuated stream will probably sound better (closer to the original) because it’s not subject to the bourns/alpha ppm lottery!
Now.
Don’t let this stop you from listening to music however you like. My ass is itt admitting to using 40 year old record players to make sounds to cook to. But don’t worry about the computers volume control.
Yes, if everything aligns perfectly, there is no impact. The bit shift would be when you set the volume to exactly half, but that’s probably not going to be the case. The app volume control alters the signal slightly, multiplied by the OS altering it slightly, which has a virtual certainty of introducing a floating point rounding error on every single sample, so now the ratios between your samples is ever so slightly different. And for what reason? What did that operation gain you?
And no you’re not going to hear a difference, but the point of being an audiophile is less about hearing a difference, and more about good quality preservation practices.
okay, lets consider the worst possible rounding error in a 16 bit division operation:
i’ll divide the level of one sample by some number that will not divide evenly, lets say three, and consider the impact of the rounding error. for the purposes of making it so I know there will be a rounding error i’ll choose a number that three has a really bad time dividing into, say 65536.
65536/3=21845.3 with the .3 component repeating. perfect, that’s exactly horrible!
so if we were to just do the simplest rounding possible to fit that into a 16 bit integer, the decimal component is dropped, rounding down to 21845.
but what is the significance of that error? a samples volume level that’s one integer value off introduces a .003% error in level, but this isn’t supposed to be 21846, it’s supposed to be 21845.3. so the error that’s introduced is .001%!
that’s a pretty tiny error, but what if it was periodic and consistent enough to produce a harmonic component? that’d create harmonic distortion!
lets say there’s a periodic and consistent rounding error that has a frequency of 1000Hz. so every thousandth of a second, the rounding caused by the volume being set at 1/3 causes a rounding error and makes a sample off by .001%. such a repeating error would introduce a harmonic component into the signal that the dac produces and be measurable as harmonic distortion at 1000Hz!
but how measurable? well if, for example, the harmonic component of the signal introduced was at it’s absolute worst, and oscillated between a positive going error and a negative going error, it could introduce a peak at 1000hz of…
.002% of your dacs DBFS. so far below the noise floor it’s immeasurable.
even if you had two software volume controls set at 1/3 daisy chained together doubling that error it would be immeasurable. although if we picked a computer software package to use instead of a bunch of hypothetical worst cases the total volume of a signal would be summed and then applied once in order to minimize just this problem. the people writing that software are pretty smart and doing that saves their program a step!
but measurability or audibility isn’t the point, as you said. the point is to reproduce the sound as accurately as possible! so it really doesn’t matter how tiny the effect of rounding errors due to prime denominator volume settings is if it’s larger than the effect of the analog volume control that whatever signal the dac manages to reconstruct from our mangled stream is put through. we’re trying to adjust the volume down to a comfortable listening level, after all.
so how bad is the volume control? well, if i were to go to mouser and look at the potentiometers section, i could choose one with a tolerance as low as… .5%! and that’s a three-gang model that costs $50!
but what if i used a precision potentiometer? why, there’s a .15% tolerance part that’s available for the very reasonable price of $825!
okay the precisions are out of my price range, but those $50 ones could work. tolerance is just a number anyway, right? we want linearity! what does the datasheet say about linearity… 2%!
that’s not even considering the amplifier design’s distortion. lets assume it’s perfect.
so just passing the signal from the dac through the amps volume control causes possibly 200 times more error and distortion than adjusting the volume control in the computer.
i get the pursuit of the best possible reproduction, but the computer volumes got those cermet pots beat hands down.
Interesting point. What about the case where you have your digital volume set to 1%? Would this not squeeze the samples into 1/100 the dynamic range? If I set my volume to 1% it seems to me like those samples now have to all exist within the bottom 1% of the 16b range. Do you not lose at least 5-6 bits of precision on your signal doing this?
You don’t lose precision when you lower the volume (in either an analog or digital realm). You lose actual information!
Let’s say you have a recording you can only listen to with your volume at the 1% setting. Analog or digital, it doesn’t matter.
Your whole system has an acoustic noise floor at something like idk, 10 acoustic decibels. That is a really charitable number because I’ve never measured one that low and it directly corresponds to the loudness of another healthy persons breathing at rest. To give you an idea of how quiet that is, the acoustic decibel scale generally puts a ticking mechanical watch at twice as loud (20 decibels).
I don’t want to talk about decibels because I don’t want to explain the math in the detail I’ve been giving these posts, but we gotta at least cover a little:
Decibels measure sound energy on a logarithmic scale: equal steps in decibels mean equal ratios, and as a rule of thumb, an increase of about 10 decibels reads as roughly twice as loud.
There are different flavors of decibel for measuring in different mediums, with different reference levels, and some use a multiplier of 10 (for power ratios) while others use 20 (for amplitude ratios).
Acoustic decibels are referenced to the threshold of hearing, so that 20 decibel ticking wristwatch is roughly twice as loud as a person breathing and half as loud as whatever the workplace safety scale says 30 decibels is equivalent to.
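Since I'm skipping the math, here's the bare minimum as a sketch (assuming the standard conventions of 10·log10 for power ratios and 20·log10 for pressure ratios; the "10 dB is twice as loud" rule is a perceptual approximation, not an exact law):

```python
import math

def power_db(power_ratio: float) -> float:
    """Level change in decibels for a ratio of acoustic powers (10*log10)."""
    return 10 * math.log10(power_ratio)

def pressure_db(pressure_ratio: float) -> float:
    """Level change in decibels for a ratio of sound pressures (20*log10)."""
    return 20 * math.log10(pressure_ratio)

print(power_db(10))     # 10.0 -- ten times the power is +10 dB, roughly "twice as loud"
print(pressure_db(10))  # 20.0 -- ten times the pressure is +20 dB
print(power_db(2))      # ~3.01 -- doubling the power is only about 3 dB
```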
Okay, so now that we have a floor, we need to establish a ceiling. Let's say that you did everything right and hooked all your stuff up: turned the volume on the amplifier all the way down, put on your headphones, played a maximum volume test tone, maxed out the volume in the software, then turned the amplifier's volume control up until it caused you immediate physical pain. If you have really good hearing, that's 115 acoustic decibels. Let's say you got to 120 with the amplifier's volume control all the way up.
Okay, so the noise floor of your headphones on your head is 110 decibels below the loudest sound you can tolerate hearing: eleven 10-decibel halvings of loudness, or 2^11 times quieter.
Now you set the volume control to 1%. Doesn't matter which one. Every part of the signal is divided by 100, which is a 40 decibel reduction, so the parts that were at 120 decibels before are now at 80. Any rounding error introduced by that division is at most half of one 16-bit step, about 96 decibels below digital full scale. In this setup that puts the error somewhere around the loudness of a ticking watch, more than 50 decibels below the attenuated signal itself. The error the volume control adds is buried whenever anything is actually playing.
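The arithmetic for the 1% setting, as a sketch (assuming a 16-bit stream and simple sample scaling; real volume controls often work in floating point, which only shrinks the rounding error further):

```python
import math

gain = 0.01                           # the 1% volume setting
reduction_db = 20 * math.log10(gain)  # change in level, in decibels
print(round(reduction_db, 6))         # -40.0

# Worst-case rounding error from scaling a 16-bit sample: half of one step,
# about 96 dB below digital full scale, regardless of the gain chosen.
lsb_error_db = 20 * math.log10(0.5 / 2**15)
print(round(lsb_error_db, 1))         # -96.3
```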
Okay but what happens when you’re listening to music though? Let’s put aside all that hypothetical stuff and get rockin! Instead of talking about test signals and boring crap, let’s talk about a song!
So, same established setup as before, but now you're listening to a recording of someone playing the banjo while rocking in a chair. There's a lot of different harmonic content in this signal: the birds chirping, the person's breathing, the wind, the chair creaking on the boards of the porch and, of course, the instrument itself!
All these different things are at different volumes and they represent components of the harmonic content of the signal you're listening to. When you turn the recording down, you're attenuating the signal. All of the signal. If you apply enough attenuation through your chosen volume control to lower the level of the banjo by 40 acoustic decibels, then all the other components of the signal are lowered by 40 decibels too. If they were previously at 50 acoustic decibels through your headset, they're now at 10: part of the noise floor.
The quietest information is simply lost.
Edit: there are massive amounts of information that have been simplified so much as to make this post incredibly inaccurate. Please do not use this as a reference for understanding how we measure or interact with sound. I’m sorry for not going into greater detail but it’s too early to explain the relationship and history of acoustic decibels and decibels per volt.
I’m sorry you have to type so much, I am familiar with most of it, but I appreciate your effort to make sure we’re on the same page without being a douche about it lol. It sounds like we’re saying similar things, but I don’t understand why lower precision is different from losing information. To me, that’s the same thing, it’s a lossy operation.
So the thing is, I have a pair of desktop speakers without any physical volume control that I primarily use for convenience. And for whatever reason, a comfortable listening volume with them is between 1-8% in the OS volume control. I guess the internal amp is just hardwired to be way too loud?
Anyway, I assume that this setup is resulting in objectively lower quality output than if I were to have a 100% signal going to a decent quality DAC/amp with analog volume outputting to the same speakers. And not in a “technically” kind of way, but in a very real “we just crushed the signal into 1/25th of its original scale” way. Would you agree? Am I mistaken?
no worries. i’ve been enjoying going back through this. you’re basically me 25 years ago emailing the winamp ppl to find out what volume control i should use to turn down (for what!).
so there's a misconception here, conflating compressing a signal with attenuating it. imagine you are looking at a frequency spectrum chart of some song instead of listening to it. it's got some loud sounds, which show up as big peaks on the chart, some quiet sounds, which show up as small peaks, and a noise floor, which is the stuff on the chart that's not a peak at all.
if you plug a potentiometer in between your signal and the spectrum analyzer and turn down the volume, you'll see all the peaks, loud and small, reduced in amplitude by the same amount. this is called attenuation. the quieter sounds could be reduced until they are part of the noise floor and become imperceptible while the louder sounds would still show up.
it doesn't matter if you achieve attenuation by dividing the 16 bit level component of a stream of samples or by using a resistor as a voltage divider. the quiet and loud sounds are affected equally. those two ways of achieving attenuation function the same because they are performing the same operation.
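a tiny sketch of that equivalence (made-up sample values, naive integer halving standing in for the computer's volume control):

```python
# two made-up "peaks" from the chart: a loud component and a quiet one,
# expressed as 16 bit sample magnitudes. quiet is 1/100th of loud.
loud, quiet = 30000, 300

# attenuate by half, the way a computer volume control does it:
loud_att, quiet_att = loud // 2, quiet // 2

# both peaks dropped by the same factor, so the distance between them
# (their ratio) is untouched:
print(loud / quiet)          # 100.0
print(loud_att / quiet_att)  # 100.0
```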
now let's say you plug a rack mount compressor effects module in between your signal and your spectrum analyzer instead. you could apply enough compression to the signal to achieve exactly what you describe: a smaller distance between the quiet and loud sounds, reduction of the original scale, removal of dynamic range, effective bit depth reduction! it would be actual factual “we just crushed the signal into 1/25th of its original scale”.
and if you used a module (or software package) with the capacity for it, you could tie the compression ratio to a gain control so that the compressors output got quieter when you turn the compression ratio up, resulting in more heavily compressed sounds at a quieter volume. that’s a neat little mastering trick to make recordings sound “lively” and “intimate”. makes all those pick scrapes and finger swishes stand out alongside the plucked strings.
you could also do the inverse, make all the quiet sounds louder, so that the guitar is as loud as the kick drums’ transient and it would make your whole song sound much louder and stand out better against background noise in a difficult listening environment like a car radio or cell phone inside a solo cup.
there are even modules that do the opposite, called… expanders! they do what you might expect, increase the dynamic range between loud and quiet sounds. a company called DBX made models for use in home stereos in between tape decks and the amplifier in order to reduce the noise floor of tapes.
but none of that is attenuation, the operation that your volume control performs.
and you’re correct, both compression and attenuation are lossy operations no matter if they’re done with analog electronics or by a microprocessor operating on a buffer somewhere in memory. the difference is that attenuation is literally required to prevent permanent hearing loss and possible equipment damage, while compression is not.
This is the part where I'm not following. In my head, if you're using analog hardware of sufficient quality, you can attenuate the signal to be very quiet but still preserve its dynamic range. In fact, the DAC is already outputting a very weak but faithful analog reproduction of the signal, and an amp with a decent S/N ratio is able to bring that very weak signal up to a listening volume without introducing enough noise to matter.
Hypothetically, if for some reason, you took the signal post-amp, used a pot to attenuate it again down to the energy of the post-DAC level, and again ran it through another amp you would theoretically have the same signal still (I understand that in the real world we would start amplifying noise and the signal would degrade, but stick with me). Nothing about the process necessarily introduces noise and thus destroys the signal, you’re only limited to the quality of the components at that point. If you had an infinite chain of theoretically perfect amps and pots, you could repeatedly attenuate and amplify the signal forever without ever losing any quality. It’s an analog process that theoretically preserves the signal, +/- some amount of error due to physics.
Meanwhile, 16b is 16b. If you start shrinking all samples relative to each other (ex. down to 1/64 the original volume, or 10b of resolution), different values inevitably have to clamp to the same values (fitting 64k values into 1024 values), losing information and resulting in poorer quality. If you then try to send that 10b signal through a DAC/amp to achieve the same listening volume that you would have had before digital attenuation, it’s just a 10b signal bit shifted up. All your LSBs are 0s. You can’t possibly attenuate digitally, and then amplify it in any way and hope to get the same signal back. It’s a discrete math process which destroys the signal by design.
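To make the clamping I mean concrete, a quick sketch (integer division as a stand-in for whatever the mixer actually does; real pipelines may use floats or dither, but the counting argument is the point):

```python
# Scale every possible 16-bit magnitude down to 1/64 with integer division
# and count how many distinct output codes survive.
survivors = {v // 64 for v in range(65536)}
print(len(survivors))  # 1024 -- a 10-bit alphabet
```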
Would you agree?
the effect of attenuation is the loss of intensity of signal.
loss. it goes away.
attenuation is a lossy process. the signal's energy is literally absorbed and radiated away as heat, and whatever falls below the noise floor cannot be reconstructed because it's gone.
it isn’t an analog process that theoretically preserves the signal, it’s an analog process that explicitly destroys a component of the signal.
but what if it wasn’t…
okay, let's assume for a second that you have a signal with the same harmonic content as one of my previous examples: a high peak on a frequency spectrum chart, a low peak, and everything else. these three parts represent the loud, quiet and “silent” parts of the signal respectively. unlike the previous example, we'll let our noise floor for the silent parts be infinitely low. for now. so you start hooking up your perfect amps and pots in line, setting them all to 1% or so, and listening. it's sounding pretty good at first, but once you get a few deep, you start getting white noise and clicks and pops and all kinds of craziness.
what the hell! all this equipment is theoretically perfect, why is there noise? it can’t be coming from the perfect equipment!
it's not. it's coming from the medium. in our theoretical example all these amplifiers and pots are hooked up with conductive wire, and the signal has to propagate through that wire from component to component. the atoms of copper are jostling around thermally and generating their own tiny voltage (johnson noise). the noise is in the wire. by amplifying it over and over again you made it audible. you can't ever escape it. signed, listening to noise gang. come to my modular synth show.
okay, so now that the possibility of ever attenuating a signal without losing information is hopefully put to rest, let's turn to the digital attenuation of the signal in comparison.
level attenuation in the digital domain is also a lossy process. what's being misunderstood here is that the levels aren't being squeezed toward each other; they're each divided by the same factor, and the signal that's reconstructed by the DAC no longer contains the quietest parts.
just like those quiet parts were absorbed and radiated as heat by the resistor, the digital version of attenuation does away with the need for all that physics crap and simply deletes them from the stream.
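a toy sketch of that deletion (naive integer division on made-up values; a real volume control usually scales in floating point and may dither, so this is the worst case):

```python
# made-up 16 bit sample values: a loud component and a very quiet one.
loud, quiet = 20000, 40

# attenuate the stream to 1% by integer division:
loud_att = loud // 100    # 200 -- still there, just quieter
quiet_att = quiet // 100  # 0 -- deleted from the stream
print(loud_att, quiet_att)
```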
if the levels were being shrunk relative to each other, you'd be compressing the signal's dynamic range, and if you crushed its bit depth like a bitcrusher pedal does to your guitar, there would be lots of harmonic distortion. but attenuation and compression are different processes and have significantly different results.
consider a quiet sound, your 1/64th volume signal. a sine wave. it's encoded to represent 1/64th of the maximum level of the adc's input, because when it was recorded it represented 1/64th the maximum level of the preamp/microphone/whatever that was plugged into the adc.
is that quiet sine wave of lower quality than one that uses the full bit depth of the adc's output because it's intended to represent the maximum level the adc's input saw from the preamp/microphone/whatever?
of course it isn’t. it just wasn’t loud.
and if your loud sine wave was electrically generated by a theoretically perfect function generator which contains no distortion or other sonic content before being sent to the adc, would it be more damaged if its amplitude were divided by 64 before being decoded, or if it were decoded and sent through a resistor whose value was chosen specifically to dissipate 63/64ths of it as heat in order to make it as quiet as the quiet sine wave?
of course it wouldn’t.
to your last question, let me rephrase it into something I can agree with: you cannot possibly attenuate and then amplify in any way and hope to get the same signal back. It’s a lossy process which destroys the signal by design.