Resample and dither or dither and resample?

Anonymous
March 2, 2005 5:50:10 PM

Archived from groups: rec.audio.pro

What is the correct order of these two processes when going from e.g. 96/24
to 44.1/16 ?
March 2, 2005 5:50:11 PM


Michael Hansen wrote:
> What is the correct order of these two processes when going from e.g.
> 96/24 to 44.1/16 ?

dither then re-sample


just like when you sample for the first time in the A/D, you add the
dither first so that when you sample, there is no signal smaller than 1
lsb.


Mark
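
(A minimal numpy sketch of the point Mark makes above, assuming a uniform
quantizer with a step size of 1: a tone smaller than 1 LSB disappears
completely when quantized without dither, but survives as a noisy yet
still-present component when dither is added first. numpy and the variable
names are assumptions for illustration, not anything from the thread.)

    import numpy as np

    # A 1 kHz tone whose peak is well below 1 LSB (quantizer step = 1).
    n = np.arange(48000)
    x = 0.4 * np.sin(2 * np.pi * 1000 * n / 48000)

    undithered = np.round(x)                        # every sample rounds to 0
    dither = np.random.uniform(-0.5, 0.5, x.shape)  # roughly 1 LSB of flat dither
    dithered = np.round(x + dither)                 # error is now noise-like

    print(np.max(np.abs(undithered)))               # 0.0: the tone is gone
    # The dithered output still correlates with the input, i.e. the
    # sub-LSB tone is still represented, buried in the dither noise.
    print(np.corrcoef(x, dithered)[0, 1])
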
Anonymous
March 2, 2005 5:50:11 PM


"Michael Hansen" <dyster_tid@hotmail.com> wrote in message
news:D 04gai$2cq7$1@news.cybercity.dk...
> What is the correct order of these two processes when going from e.g.
> 96/24 to 44.1/16 ?

resample then dither.
Anonymous
March 2, 2005 5:50:11 PM


"Michael Hansen" <dyster_tid@hotmail.com> wrote in message
news:D 04gai$2cq7$1@news.cybercity.dk...
> What is the correct order of these two processes when going from e.g.
> 96/24 to 44.1/16 ?

Dither last.

Poly
Anonymous
March 2, 2005 5:50:11 PM


Michael,

> What is the correct order of these two processes when going from e.g.
> 96/24 to 44.1/16 ? <

Not to belabor the obvious, but why don't you just record at 44.1 with 16
bits in the first place?

--Ethan
March 2, 2005 5:50:11 PM


Ethan Winer wrote:
> Michael,
>
> > What is the correct order of these two processes when going from
> > e.g. 96/24 to 44.1/16 ? <
>
> Not to belabor the obvious, but why don't you just record at 44.1
> with 16 bits in the first place?
>

Correct me if I'm wrong, but this is the common wisdom I've gathered
about this issue:

Recording at 24 then dithering to 16 instead of just recording at 16:
you have more dynamic range, can afford to record the signal quieter
for more headroom, and have more precision in volume adjustments. Also,
to produce a high quality dithered 16-bit audio file, you need to start
from a higher bit depth when you dither down.

Recording at 96 then resampling to 44.1 instead of just recording to
44.1: The digital filter that goes from 96->44.1 (or better,
88.2->44.1) will be better quality than the ADC's filter for 44.1
directly. Better time precision if you're moving clips around and
editing them. Possibly lower latency if you're monitoring digital
effects in real time. (I've also heard some mystical theories about how
the information above 44.1 somehow still can "affect" the final 44.1
waveform audibly, but I'm not sure if I buy that)

Ken
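
(As a concrete sketch of the downsampling step Ken describes: a software
sample rate converter can do 96 kHz to 44.1 kHz as one rational-ratio
polyphase filter, since 44100/96000 reduces to 147/320. This assumes
scipy/numpy and is only one way to do it, not something prescribed in the
thread.)

    import numpy as np
    from scipy.signal import resample_poly

    fs_in, fs_out = 96000, 44100
    t = np.arange(fs_in) / fs_in
    x96 = 0.5 * np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone at 96 kHz

    # 44100 / 96000 = 147 / 320; resample_poly applies the anti-aliasing
    # low-pass filter as part of the rate change.
    x441 = resample_poly(x96, up=147, down=320)   # result is still floating point
    print(len(x96), len(x441))                    # 96000 -> 44100 samples
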
March 2, 2005 5:50:11 PM


Michael Hansen wrote:
> > (I've also heard some mystical theories about how
> > the information above 44.1 somehow still can "affect" the final
> > 44.1 waveform audibly, but I'm not sure if I buy that)
>
> Well, if you're familiar with antialiasing in image processing, you
> can compare to that.. the final result will be more clear and
> smooth.. same is the case with audio files..

Again, I would attribute this to the digital antialiasing filter being
of higher quality than whatever "smoothing" filtering is being done at
the initial A->D (image capture?) step. According to all known theory,
when you sample something, it's impossible to have any information
above the Nyquist limit, no matter what.

> But the best example why one should edit in a higher samplerate and
> bitrate is shown by comparing math with pure integers to math with
> real numbers:
>
> Simple calculation with real numbers: 1.5 + 1.5 = 3
> Same with integers (rounded): 1.5 ~= 2 ... 2 + 2 = 4
>
> So if we keep all sound "calculations" in the highest precision, and
> finally "round" our result, as much precision will be kept in the
> result...

Oh yes, I forgot to mention rounding errors. But, in terms of audio, I
can only see this affecting the bit depth and not the sample rate.
However, editing at a higher sample rate can improve temporal
precision, as I mentioned (for example, you could move or cut a sound
at a position that will end up "between" samples when downsampled to a
lower rate, and the resulting sound will be truer to your intentions
than if you had edited directly at the lower rate).

> imagine how that would change all dynamics and effects..
>
> /M
>
> But I'm still confused about where the dithering step should be
> introduced ;)

Not entirely sure, but I would lean to downsampling first -- the higher
bit depth will let the downsampling filter produce a more accurate
result, which the dither could at least approximate to some extent when
reducing the bit depth. If you dither first, the downsampling filter
has to work at the lower bit depth and may not produce as accurate a
result, possibly resulting in more artifacts. Very much like in image
processing, as you mentioned elsewhere.

Given that the other responses in the thread are split 50/50 so far,
however, I'm really beginning to wonder. Does anyone have the *real*
answer here, hopefully with some solid theory to back it up?

Ken
March 2, 2005 5:50:11 PM


PenguiN wrote:
> Given that the other responses in the thread are split 50/50 so far,
> however, I'm really beginning to wonder. Does anyone have the *real*
> answer here, hopefully with some solid theory to back it up?
>
> Ken
OK I think the problem is that we are assuming that the re-sampling
down to 44.1 and re-quantizing down to 16 bits has to happen at the
same time when they are actually separate issues. Going from 96 to
44.1 is re-sampling. Going from 24 bits to 16 bits is re-quantizing.

You should definitely dither BEFORE re-quantizing from 24 to 16 bits.

I can see there may be some advantage to re-sampling at the higher bit
depth but I'm not sure.

So:
1: re-sample
2: add dither
3: re-quantize

OR

1: add dither
2: re-sample and re-quantize


But the dither MUST be added before quantizing for the dither to do any
good.

Mark
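
(A minimal sketch of option 1 above, resample first and dither right before
the re-quantize step, assuming the audio is kept in floating point until the
final 16-bit conversion. numpy, the TPDF dither choice, and the function name
are illustrative assumptions only; the resampling itself would already have
happened on the un-dithered high-resolution data.)

    import numpy as np

    def to_16_bit(x_float, rng=None):
        """Dither, then re-quantize floating-point audio (-1.0 .. 1.0) to 16 bits."""
        rng = rng or np.random.default_rng()
        scale = 2 ** 15                     # 1 LSB of a 16-bit word = 1/32768
        # TPDF dither, 2 LSB peak to peak: the sum of two uniform values.
        d = (rng.uniform(-0.5, 0.5, x_float.shape) +
             rng.uniform(-0.5, 0.5, x_float.shape))
        q = np.round(x_float * scale + d)   # re-quantizing comes last
        return np.clip(q, -scale, scale - 1).astype(np.int16)
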
Anonymous
March 2, 2005 5:50:11 PM


On Wed, 2 Mar 2005 14:50:10 +0100, "Michael Hansen" <dyster_tid@hotmail.com>
wrote:

>What is the correct order of these two processes when going from e.g. 96/24
>to 44.1/16 ?
>

OK, I'm not going to claim to have the right answer here, but I'll tell you my
reasoning for dither-first.

When you record, the signal is dithered before it is quantized by a fraction
of a bit, and this prevents quantization noise. If we added dither afterward,
we could not add fractional bits because a bit is our smallest unit of
measurement. Furthermore, we wouldn't be avoiding quantization noise, we'd be
adding useless white noise to a signal that already had a quantization noise
problem.

To me this would imply that when we do a sample rate/depth conversion, we want
to first add a dither of a fraction of one bit of level in the target
resolution. That is to say, we would dither before.
Anonymous
March 2, 2005 5:50:12 PM


"polymod" <polymod@optonline.net> wrote in message
news:p ZkVd.27383$aX3.1422@fe08.lga
> "Michael Hansen" <dyster_tid@hotmail.com> wrote in message
> news:D 04gai$2cq7$1@news.cybercity.dk...
>> What is the correct order of these two processes when going from
>> e.g. 96/24 to 44.1/16 ?
>
> Dither last.

Bass ackwards. Dither first.
Anonymous
March 2, 2005 5:50:13 PM


"Arny Krueger" <arnyk@hotpop.com> wrote in message
news:7r6dnTMdNfr7f7jfRVn-pw@comcast.com...
> "polymod" <polymod@optonline.net> wrote in message
> news:p ZkVd.27383$aX3.1422@fe08.lga
> > "Michael Hansen" <dyster_tid@hotmail.com> wrote in message
> > news:D 04gai$2cq7$1@news.cybercity.dk...
> >> What is the correct order of these two processes when going from
> >> e.g. 96/24 to 44.1/16 ?
> >
> > Dither last.
>
> Bass ackwards. Dither first.

You say Tomato, I say tomato.

Poly
Anonymous
March 2, 2005 5:50:13 PM


"Arny Krueger" <arnyk@hotpop.com> wrote in message
news:7r6dnTMdNfr7f7jfRVn-pw@comcast.com...
> "polymod" <polymod@optonline.net> wrote in message
> news:p ZkVd.27383$aX3.1422@fe08.lga
> > "Michael Hansen" <dyster_tid@hotmail.com> wrote in message
> > news:D 04gai$2cq7$1@news.cybercity.dk...
> >> What is the correct order of these two processes when going from
> >> e.g. 96/24 to 44.1/16 ?
> >
> > Dither last.
>
> Bass ackwards. Dither first.

Not according to the Waves L2 software guide.

"All sample rate conversion must be done FIRST"


Poly
Anonymous
March 2, 2005 5:54:16 PM


"polymod" <polymod@optonline.net> wrote in message
news:vnoVd.5753$RK6.3098@fe12.lga
> "Arny Krueger" <arnyk@hotpop.com> wrote in message
> news:7r6dnTMdNfr7f7jfRVn-pw@comcast.com...
>> "polymod" <polymod@optonline.net> wrote in message
>> news:p ZkVd.27383$aX3.1422@fe08.lga
>>> "Michael Hansen" <dyster_tid@hotmail.com> wrote in message
>>> news:D 04gai$2cq7$1@news.cybercity.dk...
>>>> What is the correct order of these two processes when going from
>>>> e.g. 96/24 to 44.1/16 ?
>>>
>>> Dither last.
>>
>> Bass ackwards. Dither first.
>
> Not according to the Waves L2 software guide.
>
> "All sample rate conversion must be done FIRST"

Sample rate conversion and resampling aren't always the same thing. Sample
rate conversion is a subset of resampling. Resampling can involve either
sample rate conversion or word format conversion, or both. So, it's possible
to do sample rate conversion without changing the word format and
vice-versa.

The point of all this is that dither is only required when the word format
involves a decrease in resolution.

Now, look at the title of this thread. It is "Resample and dither or dither
and resample?" Since dither is involved in either case, it must be that the
question is about decreasing resolution. If you decrease the resolution,
you're not necessarily doing a sample rate conversion. Therefore the
authority you cited is not always relevant to the thread's question.

I suspect that what they are saying is that if you are for example going
from 24/96 to 16/44 you should convert from 24/96 to 24/44 without dithering
and then convert from 24/44 to 16/44 with dither.
Anonymous
March 2, 2005 6:17:44 PM


"Arny Krueger" <arnyk@hotpop.com> wrote in message
news:APKdnWV4ush9hLvfRVn-vg@comcast.com...

> I suspect that what they are saying is that if you are for example going
> from 24/96 to 16/44 you should convert from 24/96 to 24/44 without
> dithering and then convert from 24/44 to 16/44 with dither.

Ok. I get it.

Poly
Anonymous
March 2, 2005 7:13:16 PM


> dither then re-sample

Thanks :) 

> just like when you sample for the first time in the A/D, you add the
> dither first so that when you sample, there is no signal smaller than 1
> lsb.

Hmm.. I don't understand this .. do you mean when I record something, then I
should immediately dither the signal? - That makes no sense to me.. why add
something that makes the sound more diffused at this level? And what exactly
is this "1 lsb" you talk about? (sorry.. probably me that's too noob here)
Anonymous
March 2, 2005 7:27:31 PM


"Ricky Hunt" <rhunt22@hotmail.com> wrote in message
news:j9kVd.83767$4q6.37297@attbi_s01...
> "Michael Hansen" <dyster_tid@hotmail.com> wrote in message
> news:D 04gai$2cq7$1@news.cybercity.dk...
>> What is the correct order of these two processes when going from e.g.
>> 96/24 to 44.1/16 ?
>
> resample then dither.
>

Hehe.. two possible answers... and I got them both ;)  Alright... let me
hear your arguments.
In image processing one would resample and then dither.. the reason is that
the resampling calculations will be done with numbers of higher precision,
thus a more accurate result. But I'm not sure whether image and audio
processing can be compared here ...

/M
March 2, 2005 8:44:40 PM


Chris Hornbeck wrote:
> On Wed, 2 Mar 2005 14:50:10 +0100, "Michael Hansen"
> <dyster_tid@hotmail.com> wrote:
>
> >What is the correct order of these two processes when going from
> >e.g. 96/24 to 44.1/16 ?
>
> Arny has already said it, but some confusion remains. Dither is
> needed prior to word length shortening. Period.
>
> Chris Hornbeck

Yep agreed,

I think it becomes clearer when you call it quantizing distortion,
which is what it actually is. The dither (random noise) that is added
before quantizing smooths over the quantizing steps and eliminates the
quantizing non-linearity. This applies when going from analog to
digital or when going from 24 bits to 16 bits etc.

Mark
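
(A small sketch of the linearization Mark describes, using a deliberately
coarse quantizer so the effect is easy to see; numpy and the exact numbers
are assumptions for illustration. Without dither the error is a fixed,
signal-dependent stair-step; with dither it behaves like independent noise,
so averaging many dithered passes converges back to the input.)

    import numpy as np

    n = np.arange(5000)
    x = 3.3 * np.sin(2 * np.pi * 997 * n / 44100)   # low-level sine, step = 1

    # Undithered: the error never goes away, no matter how much you average.
    print(np.max(np.abs(np.round(x) - x)))          # about 0.5 LSB, always

    # Dithered: average 500 independent passes and the stair-step is gone,
    # i.e. the quantizer has been linearized by the dither.
    avg = np.mean([np.round(x + np.random.uniform(-0.5, 0.5, x.shape))
                   for _ in range(500)], axis=0)
    print(np.max(np.abs(avg - x)))                  # far below 0.5 LSB
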
Anonymous
March 2, 2005 9:11:41 PM


> (I've also heard some mystical theories about how
> the information above 44.1 somehow still can "affect" the final 44.1
> waveform audibly, but I'm not sure if I buy that)

Well, if you're familiar with antialiasing in image processing, you can
compare to that.. the final result will be more clear and smooth.. same is
the case with audio files..
But the best example why one should edit in a higher samplerate and bitrate
is shown by comparing math with pure integers to math with real numbers:

Simple calculation with real numbers: 1.5 + 1.5 = 3
Same with integers (rounded): 1.5 ~= 2 ... 2 + 2 = 4

So if we keep all sound "calculations" in the highest precision, and finally
"round" our result, as much precision will be kept in the result...
imagine how that would change all dynamics and effects..

/M

But I'm still confused about where the dithering step should be introduced
;) 
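
(The 1.5 + 1.5 point above, as a tiny numeric sketch in plain Python: doing
the arithmetic at full precision and rounding once at the end keeps the error
within half a step, while rounding after every operation lets the errors pile
up. The numbers are made up for illustration.)

    values = [1.5] * 10

    exact = sum(values)                  # 15.0
    rounded_once = round(sum(values))    # 15

    total = 0
    for v in values:
        total = round(total + v)         # round after every single step
    print(exact, rounded_once, total)    # 15.0 15 20
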
Anonymous
March 2, 2005 11:14:53 PM


"Steve Jorgensen" <nospam@nospam.nospam> wrote in message
news:8o4c21dh2e3efhhrjhdo43rjnr7dgon7jg@4ax.com


> When you record, the signal is dithered before it is quantized by a
> fraction of a bit, and this prevents quantization noise.

Err, no. The dither prevents the quantization noise from being correlated
with the signal, the clock frequency, etc. The quantization noise is in some
sense irreducible, but changing its form eliminates a lot of distortion.
Anonymous
March 3, 2005 3:07:34 AM


> When you record, the signal is dithered before it is quantized by a
> fraction of a bit, and this prevents quantization noise.

"before quantizied.." .. that can only be outside the digitial domain then..
that is the analog.. but then dithering doesn't make much sense? If I
understand the process of dithering, it is a temporal distribution of the
precision loss you get from going from a higher bit depth to a
lower..consider the following sequence of numbers ("samples") in floating
point:

1.2
1.3
1.4
1.5

a simple dithering algorithm would do something like:

1.2 ~= 1 (error 0.2 passes on to the next sample)
1.3 ~= 2 (1.3 + 0.2 = 1.5 ~= 2... error -0.5 passes on)
1.4 ~= 1 (1.4 - 0.5 = 0.9 ~= 1... error -0.1 passes on)
1.5 ~= 1 (1.5 - 0.1 = 1.4 ~= 1)

so in order to dither something, it must already be quantized..

I could be wrong though?

>If we added dither afterward,
> we could not add fractional bits because a bit is our smallest unit of
> measurement.

A bit must be a bit, whether it expresses fractional parts or integers.. it's
about the amount of bits.. and if you've already introduced the notion of
bits, you're in the digital domain, meaning that your signal is already
quantized

> Furthermore, we wouldn't be avoiding quantization noise, we'd be
> adding useless white noise to a signal that already had a quantization
> noise
> problem.

I'm beginning to think the opposite is true.. if you reduce bitrate and want
to resample afterwards, you'll be resampling a signal containing less
precision and artificial information.. thus dithering (implicit: bit
reduction) should always be the last step..

It would be great to get this reasoning confirmed though..

/M
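
(For what it's worth, the rounding scheme Michael walks through above,
carrying each sample's rounding error into the next sample, looks like this
as a short plain-Python sketch. Strictly speaking it is error feedback /
noise shaping rather than dither, which is part of what the rest of the
thread is untangling; the function name is made up.)

    def error_feedback_round(samples):
        """Round each sample, feeding its rounding error into the next one."""
        out = []
        carry = 0.0
        for s in samples:
            adjusted = s + carry
            q = round(adjusted)
            carry = adjusted - q     # error passed on to the next sample
            out.append(q)
        return out

    print(error_feedback_round([1.2, 1.3, 1.4, 1.5]))   # [1, 2, 1, 1]
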
Anonymous
March 3, 2005 3:43:16 AM


On Wed, 2 Mar 2005 14:50:10 +0100, "Michael Hansen"
<dyster_tid@hotmail.com> wrote:

>What is the correct order of these two processes when going from e.g. 96/24
>to 44.1/16 ?

Arny has already said it, but some confusion remains. Dither is
needed prior to word length shortening. Period.

Chris Hornbeck
Anonymous
March 3, 2005 11:07:52 AM


Ethan Winer wrote:
> Ken,
>
> > Recording at 24 then dithering to 16 instead of just recording at
> > 16: you have more dynamic range, can afford to record the signal
> > quieter for more headroom, and have more precision in volume
> > adjustments. <
>
> Nobody here has a bigger propeller on their beanie than I do. So I
> understand well the *theoretical* reason some people use 24 bits and
> high sample rates. But all of the advantages of 24 bits over 16, and
> sample rates higher than 44.1 KHz, and dithering, etc., are overly
> "tweeky" to the point of silliness. All 24 bits offers over 16 are a
> lower noise floor and less distortion. But 16 bits is perfectly
> capable of distortion far below what anyone could hear, and noise 20
> or more dB below what you'll ever capture in a room with a microphone.
>
> So why waste the disk space and bandwidth which just reduce your
> track count and the number of plug-ins you can use? Why bother with
> all those extra steps to record at a resolution so high you have to
> reduce it later to actually get the music onto a CD? If you have to
> turn up the volume 30 dB during a reverb tail to hear the improvement
> 24 bits offers, who cares?
>
> I agree it's useful to record at a lower level to avoid an unexpected
> loud passage. This might be useful for recording classical music, but
> a pop tune?
>
> --Ethan

I have been trying to convince people of this
since the dawn of the digital age. But (most)
everyone seems so enamored with piling high on the
192kHz bandwagon and barreling down the dirt road
that no one hears the improvement because of the
clattering of the wagon wheels on the ruts.
Me, I'll continue to track at 48k if I'm going to mix
analog and 44.1 if it's going to stay "in the box".
I will agree that I like the flexibility of 24 bit
but none of my demo clients have ever cared (or noticed)
if they were tracked at 16 or 20 bit.

rd
"the crown frog of 44.1/16 recording"
Anonymous
March 3, 2005 11:15:03 AM


Sorry if I go off-topic at times, I'm recovering from a head injury ...

To respond to the thread topic -
I follow the theory put forth by both the Waves
and T-Racks people:
Dither only once, and only as the very last step after all other
processing is done.

rd
Anonymous
March 3, 2005 11:58:01 AM


Ken,

> Recording at 24 then dithering to 16 instead of just recording at 16: you
> have more dynamic range, can afford to record the signal quieter for more
> headroom, and have more precision in volume adjustments. <

Nobody here has a bigger propeller on their beanie than I do. So I
understand well the *theoretical* reason some people use 24 bits and high
sample rates. But all of the advantages of 24 bits over 16, and sample rates
higher than 44.1 KHz, and dithering, etc., are overly "tweeky" to the point
of silliness. All 24 bits offers over 16 are a lower noise floor and less
distortion. But 16 bits is perfectly capable of distortion far below what
anyone could hear, and noise 20 or more dB below what you'll ever capture in
a room with a microphone.

So why waste the disk space and bandwidth which just reduce your track count
and the number of plug-ins you can use? Why bother with all those extra
steps to record at a resolution so high you have to reduce it later to
actually get the music onto a CD? If you have to turn up the volume 30 dB
during a reverb tail to hear the improvement 24 bits offers, who cares?

I agree it's useful to record at a lower level to avoid an unexpected loud
passage. This might be useful for recording classical music, but a pop tune?

--Ethan
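
(The noise-floor side of Ethan's argument can be put into numbers using the
standard textbook figure, computed here rather than taken from the thread:
the theoretical dynamic range of an ideal N-bit channel is roughly
6.02 * N + 1.76 dB.)

    # Sine-wave signal-to-quantization-noise ratio of an ideal N-bit quantizer.
    for bits in (16, 20, 24):
        print(bits, "bits:", round(6.02 * bits + 1.76, 1), "dB")
    # 16 -> 98.1 dB, 20 -> 122.2 dB, 24 -> 146.2 dB; the room and the mic
    # chain are usually the limiting factor long before 16 bits is.
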
Anonymous
March 3, 2005 12:35:42 PM


"Arny Krueger" <arnyk@hotpop.com> wrote in message
news:APKdnWV4ush9hLvfRVn-vg@comcast.com...
>
> I suspect that what they are saying is that if you are for example going
> from 24/96 to 16/44 you should convert from 24/96 to 24/44 without
> dithering
> and then convert from 24/44 to 16/44 with dither.
>

I understood that to be exactly what he wanted to do.
Anonymous
March 3, 2005 12:36:22 PM


"Michael Hansen" <dyster_tid@hotmail.com> wrote in message
news:D 04m14$2ibm$1@news.cybercity.dk...
>
>
> Hehe.. two possible answers... and I got them both ;)  Allright... let me
> hear your arguments
> In image processing one would resample and then dither.. the reason is
> that the resampling calculations will be done with numbers of higher
> precision, thus a more accurate result. But I'm not sure wether image and
> audio processing can be compared here ...
>

Right. Same with audio.
Anonymous
March 3, 2005 2:05:05 PM


> Sample rate conversion and resampling aren't always the same thing.
> Sample
> rate conversion is a subset of resampling. Resampling can involve either
> sample rate conversion or word format conversion, or both. So, its
> possible
> to do sample rate conversion without changing the word format and
> vice-versa.

I think that's a confused understanding of the term resampling.. The term in
itself indicates that you deal with re - sampling .. not "re - bitting"
(kinda awkward term ;)  ).
In all audio apps I've worked with so far, I've never seen dithering
mentioned as an option under resampling, you can choose antialiasing and
sometimes choose between different algorithms.. but not dithering (perhaps
you're saying that this happens implicitly?).
As I see it, you can understand resampling in two ways, actual resampling
(which crunches the whole file) and sample rate information change (which
leaves the data intact but changes the information about the sample rate, and
makes the tones change).

Word format conversion or bitrate conversion can, and should, involve
dithering. Many plugins allow you to dither the output even though the
actual bitrate is not changed (the sound is "prepared" for a specific
bitrate).. This is probably because the plugin internally works at a high
bit depth.. but since the actual bitrate is not changed, I think that's one of
the reasons there's so much confusion about it.

> Now, look at the title of this thread. It is "Resample and dither or
> dither
> and resample?" Since dither is involved in either case, it must be that
> the
> question is about decreasing resolution. If you decrease the resolution,
> you're not necessarily doing a sample rate conversion. Therefore the
> authority you cited is not always relevant to the thread's question.
>
> I suspect that what they are saying is that if you are for example going
> from 24/96 to 16/44 you should convert from 24/96 to 24/44 without
> dithering
> and then convert from 24/44 to 16/44 with dither.

Alright.. let me ask another way..

Should I change the bitrate (involving dithering) before resampling or
afterwards?
Anonymous
March 3, 2005 7:56:34 PM


In article <1109866072.165648.246490@g14g2000cwa.googlegroups.com> annonn@juno.com writes:

> I have been trying to convince people of this
> since the dawn of the digital age. But (most)
> everyone seems so enamored with piling high on the
> 192kHz bandwagon and barreling down the dirt road
> that no-one hears the improvemnt because of the
> clattering of the wagonwheels on the rutts.

They don't seem to have anything else to market but bits. The musical
talent doesn't seem to have improved much.

I can be convinced that there's some benefit to recording at 24-bit
resolution if the analog stages are quiet enough so that you don't end
up with more noise when recording conservatively and bringing the
level back up to "normal" when you know it's safe than if you recorded
closer to maximum level at 16-bit resolution.

Besides, all you can buy today (as a consumer) are 24-bit A/D
converters. I know that the 24-bit converter chips of a couple of
years ago had a pretty good 16-bit dither implementation that could be
turned on, but nobody used it so it's possible that today you get 24
bits whether you want them or not. And swallowing all 24 bits is
preferable to truncating before you're finished with the project.

> Me, I'll continue to track at 48k if I'm going to mix
> analog and 44.1 if it's going to stay "in the box".
> I will agree that I like the flexability of 24 bit
> but none of my demo clients have ever cared (or noticed)
> if they were tracked at 16 or 20 bit.

Most of the time I'll track at 44.1/24, but I never notice if I've
inadvertently switched to 16-bit. Mostly because I don't compare it to
the same thing at 24 bits. I'm just not that picky.



--
I'm really Mike Rivers (mrivers@d-and-d.com)
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me here: double-m-eleven-double-zero at yahoo
Anonymous
March 4, 2005 3:37:03 AM


"Michael Hansen" <dyster_tid@hotmail.com> wrote in message
news:D 04s5q$2oh8$1@news.cybercity.dk...
>...I'm still confused about where the dithering step should be introduced

Dithering needs to be introduced every time word length is reduced. The vast
majority of signal processing routines (including most SRC routines)
increase the word length beyond what can be stored in a file or sent to a D
to A converter. These all need to be dithered, although only to the new word
length.

--
Bob Olhsson Audio Mastery, Nashville TN
Mastering, Audio for Picture, Mix Evaluation and Quality Control
Over 40 years making people sound better than they ever imagined!
615.385.8051 http://www.hyperback.com
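
(A sketch of what Bob describes, under the assumption that processing happens
in floating point and each result has to be stored back at some fixed word
length: every such reduction gets its own dither, scaled to that word length,
not just the final one. numpy and the helper name are made up for
illustration.)

    import numpy as np

    rng = np.random.default_rng()

    def reduce_word_length(x_float, bits):
        """TPDF-dither and re-quantize float audio (-1.0 .. 1.0) to `bits` bits."""
        scale = 2 ** (bits - 1)
        d = (rng.uniform(-0.5, 0.5, x_float.shape) +
             rng.uniform(-0.5, 0.5, x_float.shape))
        q = np.clip(np.round(x_float * scale + d), -scale, scale - 1)
        return q / scale                   # back to float, now on a coarser grid

    x = 0.5 * np.sin(2 * np.pi * 997 * np.arange(44100) / 44100)
    y = reduce_word_length(x * 0.8, 24)    # e.g. a gain stage stored back at 24 bits
    z = reduce_word_length(y, 16)          # final reduction for the 16-bit release
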
Anonymous
March 4, 2005 3:43:54 AM


"Michael Hansen" <dyster_tid@hotmail.com> wrote in message
news:D 05h0u$fjq$1@news.cybercity.dk...
> I'm beginning to think the opposite is true.. if you reduce bitrate and
> want to resample afterwards, you'll be resampling a signal containing less
> precision and artificial information.. thus dithering (implicit: bit
> reduction) should always be the last step..

Bit reduction is the last step; however, failing to dither each intermediate
word length reduction always generates distortion. For this reason,
dithering needs to be done every time and not just before the final
reduction.

--
Bob Olhsson Audio Mastery, Nashville TN
Mastering, Audio for Picture, Mix Evaluation and Quality Control
Over 40 years making people sound better than they ever imagined!
615.385.8051 http://www.hyperback.com
March 4, 2005 9:43:18 AM


I give up.
Mark
Anonymous
March 4, 2005 12:40:28 PM


RD,

> I have been trying to convince people of this since the dawn of the
> digital age. <

Me too. I've even tested it. :->) One time I tried all three dither options
offered in a DAW program I was using at the time, and I couldn't hear any
difference between any of them, or even versus undithered. I did not raise
the level during reverb tails, or record really soft or anything like that -
I just played normal music at a reasonably loud level. It might be me, but I
doubt it. As Mike says, those extra bits are for marketing.

--Ethan
Anonymous
March 4, 2005 12:41:05 PM


In article <ebOVd.98437$Th1.12692@bgtnsc04-news.ops.worldnet.att.net> olh@hyperback.com writes:

> Bit reduction is the last step however failing to dither each intermediate
> word length reduction always generates distortion. For this reason,
> dithering needs to be done every time and not just before the final
> reduction.

You really have to know what's going on inside your system, and this
is often difficult to determine. It's pretty common to let word length
grow internally since there's usually plenty of space so there's no
need to dither between stages of the same program. But when you go
outside the program, perhaps to apply a plug-in, that plug-in may want
to start with a given word length that's shorter than the current one.

It seems to me that a transparent solution would be to dither at
inputs (both hard and soft) to assure that a longer word coming in
would suffer the least damage. But this may be one of those "let the
other guy do it" things. And sometimes you're "the other guy" and you
don't know it.


--
I'm really Mike Rivers (mrivers@d-and-d.com)
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me here: double-m-eleven-double-zero at yahoo
Anonymous
March 4, 2005 1:29:32 PM


On Wed, 2 Mar 2005 20:14:53 -0500, "Arny Krueger" <arnyk@hotpop.com> wrote:

>"Steve Jorgensen" <nospam@nospam.nospam> wrote in message
>news:8o4c21dh2e3efhhrjhdo43rjnr7dgon7jg@4ax.com
>
>
>> When you record, the signal is dithered before it is quantized by a
>> fraction of a bit, and this prevents quantization noise.
>
>Err, no. The dither prevents the quantization noise from being correlated
>with the signal, the clock frequency, etc. The quantization noise is in some
>sense irreducible, but changing its form eliminates a lot of distortion.

So perhaps the correct term would be "quantization artifacts", then?
March 4, 2005 1:45:10 PM


Steve Jorgensen wrote:
> On Wed, 2 Mar 2005 20:14:53 -0500, "Arny Krueger" <arnyk@hotpop.com>
> wrote:
>
> >"Steve Jorgensen" <nospam@nospam.nospam> wrote in message
> >news:8o4c21dh2e3efhhrjhdo43rjnr7dgon7jg@4ax.com
> >
> >
> >> When you record, the signal is dithered before it is quantized by a
> >> fraction of a bit, and this prevents quantization noise.
> >
> >Err, no. The dither prevents the quantization noise from being
> >correlated with the signal, the clock frequency, etc. The
> >quantization noise is in some sense irreducible, but changing its
> >form eliminates a lot of distortion.
>
> So perhaps the correct term would be "quantization artifacts", then?

Quantization distortion is a good term for it since the quantization is
a non-linear process (which gets linearized by the dither noise).

Mark
Anonymous
March 4, 2005 6:11:21 PM


Michael Hansen wrote:
> What is the correct order of these two processes when going from e.g. 96/24
> to 44.1/16 ?

Dither must always be the last thing in the chain.

br,
Sven
Anonymous
March 5, 2005 12:26:01 PM


Ethan Winer wrote:

> It might be me, but I doubt it.
> As Mike says, those extra bits are for marketing.
>
> --Ethan

Of course, there's something to be said for both effective
and responsible marketing. Without it we'd be tracking with
wire recorders and spinning 78's. That doesn't mean that
every intermediate technological advance needs to be
incorporated into the accepted standards. Marketing has
allowed major new ideas to become widespread by the
"survival of the fittest" (or fattest) and by being cost
effective. And I'm not just talking about digital stuff here.
But effective and responsible have become divergent in
that some numbers now are really just for bragging rights,
i.e. 192kHz, 64 bit ... etc.
Even though there is a clearly measurable improvement, if I
(or more importantly) my customer can't hear it, how can
any expenditure or any significant marketing be justified?

rd
Anonymous
March 6, 2005 4:32:40 AM


"RD Jones" <annonn@juno.com> wrote in message news:1110043561.640028.224170@o13g2000cwo.googlegroups.com...
>
> Ethan Winer wrote:
>
> > It might be me, but I doubt it.
> > As Mike says, those extra bits are for marketing.
> >
> > --Ethan

> Even though there is a clearly measurable improvement if I
> (or more importantly) my customer can't hear it how can
> any expenditure or any significant marketing be justified ?
>
> rd

Measurable is the key word.... designers can actually throw out a
little math that substantiates their claims, but the math and the
human ear don't often correlate so significantly when it comes
down to practical application. It seems like the big craze on the
'numbers' game right now is based on how many home computer
recording set-ups are entering the market. In most cases, the new
purchasers of those systems have never had the chance to hear
their audio in an analogue chain or in earlier stages of digital recording
and development.... so it's not all about what their gear sounds like
or the quality of what they can produce with it, but rather about the
ability to claim high numbers and 'state of the art' technology.

DM
Anonymous
March 6, 2005 1:04:03 PM


David,

> designers can actually throw out a little math that substantiates their
> claims, but the math and the human ear don't often correlate <

Yes, just because we can measure the difference between 0.01 and 0.001
percent distortion doesn't mean that anyone can hear it. For most specs, we
can measure to 100 or even 1000 times lower than anyone can hear. This makes
those audiophile claims laughable, when they say science hasn't yet found a
way to measure what they can hear. Clearly it's the other way around!

--Ethan
Anonymous
March 6, 2005 5:52:14 PM


In article <btedndrgqObIgbbfRVn-pg@giganews.com> "Ethan Winer" <ethanw at ethanwiner dot com> writes:

> Yes, just because we can measure the difference between 0.01 and 0.001
> percent distortion doesn't mean that anyone can hear it. For most specs, we
> can measure to 100 or even 1000 times lower than anyone can hear.

But there are some things we can hear that we either can't measure or
haven't yet figured out how to correlate them with what we can
measure.

--
I'm really Mike Rivers (mrivers@d-and-d.com)
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me here: double-m-eleven-double-zero at yahoo
Anonymous
March 6, 2005 8:33:54 PM


In article <znr1110127601k@trad>, Mike Rivers <mrivers@d-and-d.com> wrote:
>In article <btedndrgqObIgbbfRVn-pg@giganews.com> "Ethan Winer" <ethanw at ethanwiner dot com> writes:
>
>> Yes, just because we can measure the difference between 0.01 and 0.001
>> percent distortion doesn't mean that anyone can hear it. For most specs, we
>> can measure to 100 or even 1000 times lower than anyone can hear.
>
>But there are some things we can hear that we either can't measure or
>haven't yet figured out how to correlate them with what we can
>measure.

That's true. But we do know a lot of things NOT to measure, which is a good
part of the battle.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Anonymous
March 7, 2005 2:02:08 PM


"Steve Jorgensen" <nospam@nospam.nospam> wrote in message
news:57ah2155i4jgsan21evsts6hgcc2s7pssp@4ax.com
> On Wed, 2 Mar 2005 20:14:53 -0500, "Arny Krueger" <arnyk@hotpop.com>
> wrote:
>
>> "Steve Jorgensen" <nospam@nospam.nospam> wrote in message
>> news:8o4c21dh2e3efhhrjhdo43rjnr7dgon7jg@4ax.com
>>
>>
>>> When you record, the signal is dithered before it is quantized by a
>>> fraction of a bit, and this prevents quantization noise.
>>
>> Err, no. The dither prevents the quantization noise from being
>> correlated with the signal, the clock frequency, etc. The
>> quantization noise is in some sense irreducible, but changing its
>> form eliminates a lot of distortion.
>
> So perhaps the correct term would be "quantization artifacts", then?

"Quantization artifacts" sounds nice.
Anonymous
March 7, 2005 2:04:12 PM


"Mark" <makolber@yahoo.com> wrote in message
news:1109961910.573293.284580@z14g2000cwz.googlegroups.com
> Steve Jorgensen wrote:
>> On Wed, 2 Mar 2005 20:14:53 -0500, "Arny Krueger" <arnyk@hotpop.com>
>> wrote:
>>
>>> "Steve Jorgensen" <nospam@nospam.nospam> wrote in message
>>> news:8o4c21dh2e3efhhrjhdo43rjnr7dgon7jg@4ax.com
>>>
>>>
>>>> When you record, the signal is dithered before it is quantized by a
>>>> fraction of a bit, and this prevents quantization noise.
>>>
>>> Err, no. The dither prevents the quantization noise from being
>>> correlated with the signal, the clock frequency, etc. The
>>> quantization noise is in some sense irreducible, but changing its
>>> form eliminates a lot of distortion.
>>
>> So perhaps the correct term would be "quantization artifacts", then?
>
> Quantization distortion is a good term for it since the quantization
> is a non-linear process (which gets linearized by the dither noise).

I think that gets to the point. There's been this unfortunate terminology
where what was really quantization distortion was somehow called
quantization noise. What we call noise shaping would really be distortion
shaping without proper dither.
Anonymous
March 7, 2005 2:04:43 PM


"Sven" <sven_usenetis@metal.ee> wrote in message
news:42285e47$0$177$bb624dac@diablo.uninet.ee
> Michael Hansen wrote:
>> What is the correct order of these two processes when going from
>> e.g. 96/24 to 44.1/16 ?
>
> Dither must always be the last thing in chain.

Nope, it's right before requantization.
Anonymous
March 7, 2005 4:23:57 PM


Mike,

> But there are some things we can hear that we either can't measure or
> haven't yet figured out how to correlate them with what we can measure. <

Such as?

--Ethan
Anonymous
March 7, 2005 9:41:51 PM


In article <mKCdnYtgaeeiAbHfRVn-oQ@giganews.com> "Ethan Winer" <ethanw at ethanwiner dot com> writes:

> > But there are some things we can hear that we either can't measure or
> haven't yet figured out how to correlate them with what we can measure. <
>
> Such as?

Toob vs. transistor distortion. Everything we can measure about
transistor amplifiers, even crummy transistor amplifiers, is lower
than that of even a good tube amplifier, yet many people will find
that the tube amplifier sounds subjectively better.



--
I'm really Mike Rivers (mrivers@d-and-d.com)
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me here: double-m-eleven-double-zero at yahoo
Anonymous
March 8, 2005 12:11:55 AM


Arny Krueger wrote:

> "Quantization artifacts" sounds nice.

<j>

They does? I thought we's supposed to avoid thems.

</j>

--
ha
Anonymous
March 8, 2005 12:11:56 AM


hank alrich <walkinay@thegrid.net> wrote:
>Arny Krueger wrote:
>
>> "Quantization artifacts" sounds nice.
>
>They does? I thought we's supposed to avoid thems.

Man, Quantization Artifacts is my favorite band. I saw them open for
Total Harmonic Distortion at the Budokhan!
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Anonymous
March 8, 2005 4:14:37 AM


On Mon, 7 Mar 2005 13:23:57 -0500, "Ethan Winer" <ethanw at ethanwiner
dot com> wrote:

>> But there are some things we can hear that we either can't measure or
>> haven't yet figured out how to correlate them with what we can measure. <
>
>Such as?

It's an honest and serious question, so I wouldn't want to demean it
with *baseless* speculation, but I do have a couple possible avenues
of enquiry based on lotsa observation.

In the analog world, intrinsic linearity may turn out to be
significant. We currently give it zero value, but it may turn
out differently. Each component's transfer function, delay
effects, storage effects, non-linear loading of preceding
components, etc., rather than the "black-box" model of the
whole-stage-plus-feedback-loop's performance.

Still in the analog world, we tend to be terribly short sighted
about circuits' capabilities to deal with out of band signals.
This is something we know, understand completely, but too
often ignore and don't measure. It's a mean world out there.

I could mention several others, but not without a pie fight.

In the digital world, where I know notink!, notink!, there
certainly remain questions about filter characteristics, including
clipping (!) and audibility, to be resolved.

And the conversion between the two worlds has very difficult to
measure, but very likely audible issues such as monotonicity
and signal level dependent "jitter" (for lack of a better word).

At no time have any of these potential issues ever mattered
even remotely, compared to getting a good musician to give a
good performance. And at the current state of recording art,
they're way down the list of things to worry about. But they
are some of the things not "measurable", or at least, measured.

Sorry for the bandwidth,

Chris Hornbeck
"There are no ordinary cats". -Colette