
88.2 vs. 96 K sampling rate

August 14, 2005 1:18:41 PM

Archived from groups: rec.audio.pro (More info?)

Maybe everyone knows but me.... :) 

If the target product is a CD at 44.1 K, it would seem logical that
recording at 88.2K rather than 96 K would avoid all sorts of
downsampling problems (??dithering??). The digital imaging equivalent
would be interpolation. I would intuit that the errors in
downsampling from 96 to 44.1 would outweigh the benefit of the
only-8K-better sampling rate.

Obviously my intuition must be wrong or everyone would use 88.2
instead of 96K.... is that correct?? why?? has anyone got samples
anywhere that show that you can hear the difference??

.....relative newbie about to embark on digital recording, looking
forward to further education in RAP... it makes great reading, btw.,
so thanks to the regulars!!


Peter


Anonymous
August 14, 2005 1:18:42 PM


"bohemian" <alias@snowcrest.net> wrote in message
news:ja2uf1ddork08klh40aq53qln81e96a635@4ax.com
> Maybe everyone knows but me.... :) 
>
> If the target product is a CD at 44.1 K, it would seem
> logical that recording at 88.2K rather than 96 K would
> avoid all sorts of downsampling problems (??dithering??).

Come on, use a little common sense.

If the target product is a CD at 44.1 K, it would seem
logical that recording at 44.1 K rather than 96 K or 88.2 K
would avoid any and all downsampling problems. ;-)
Anonymous
August 14, 2005 1:18:42 PM


Carey Carlan wrote:

> > The digital imaging equivalent
> > would be interpolation. I would intuit that the errors in
> > downsampling from 96 to 44.1 would outweigh the benefit of the
> > only-8K-better sampling rate.
>
> Theoretically, downsampling from 88.2 to 44.1 is simply throwing half the
> samples away. There's not a lot of benefit in that over recording
> directly at 44.1.

There are remarkably few programs and outboard SRC devices that
actually function in this manner...as it's the worst possible way of
performing sample rate conversion with regards to the effective
application of anti-aliasing filters.

I thought the matter of first upping the sample rate to the least
common multiple of the available rates, applying the anti-aliasing
filter, then downsampling to destination rate even in the case of even
multiple rates (in other words, today's most common method of SRC) was
made clear in earlier posts to this thread.

The most common method of performing SRC today uses much more complex
methods than "simply throwing half the samples away".
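The upsample-filter-downsample scheme described above can be sketched in a few lines. This is a toy illustration only, not how any particular product implements SRC; the function name, the windowed-sinc filter, the tap count, and the test rates are all arbitrary choices:

```python
import numpy as np

def resample_rational(x, up, down, ntaps=101):
    """Toy rational-ratio sample rate converter: zero-stuff by `up`,
    low-pass at the narrower of the two Nyquist limits, then keep
    every `down`-th sample. Real converters use efficient polyphase
    forms of the same idea rather than this brute-force version."""
    # 1. Raise the rate: insert up-1 zeros between samples.
    y = np.zeros(len(x) * up)
    y[::up] = x * up                       # scale to preserve amplitude
    # 2. Anti-alias: windowed-sinc FIR, cutoff at half the lower rate.
    cutoff = 0.5 / max(up, down)           # fraction of the raised rate
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(ntaps)
    y = np.convolve(y, h, mode="same")
    # 3. Lower the rate: keep every down-th sample.
    return y[::down]
```

For 88.2k to 44.1k the ratio is 1:2, so step 1 is a no-op and the whole job is the anti-aliasing filter plus discarding alternate samples of its output.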
Anonymous
August 14, 2005 4:34:26 PM


bohemian wrote:
> Maybe everyone knows but me.... :) 
>
> If the target product is a CD at 44.1 K, it would seem logical that
> recording at 88.2K rather than 96 K would avoid all sorts of
> downsampling problems (??dithering??). The digital imaging equivalent
> would be interpolation.

First, as Arny says, why not record at 44.1k if that's what you want to
end up with...

But if you want to start with a higher rate for some reason, your
intuition is wrong about interpolation. The mathematics behind sample
rate conversion is well defined (if not understood by everybody) and
produces results as close to perfect as rounding errors will allow, for
absolutely any combination of sample rates. The process amounts to
digital low pass filtering with a sharp cutoff at half the lower of the
two sampling rates.

Even if the conversion is to exactly half the original rate, a simple
linear averaging of adjacent samples (maybe what you were thinking
"intuitively") does not produce a correct result.

> Obviously my intuition must be wrong or everyone would use 88.2
> instead of 96K.

I don't know why 96k was chosen. I do know that 44.1 has a historical
derivation from TV line rates, which is mostly irrelevant now but we're
stuck with it. Around the time professional digital audio systems were
first being developed, 48kHz and 50 kHz were competing "standards" - 48k
surviving as an option in DAT recorders.

--
Anahata
anahata@treewind.co.uk -+- http://www.treewind.co.uk
Home: 01638 720444 Mob: 07976 263827
Anonymous
August 14, 2005 4:34:27 PM


On Sun, 14 Aug 2005 07:34:26 -0400, anahata wrote
(in article <42ff2c43$0$97100$ed2619ec@ptn-nntp-reader03.plus.net>):


> I don't know why 96k was chosen. I do know that 44.1 has a historical
> derivation from TV line rates, which is mostly irrelevant now but we're
> stuck with it. Around the time professional digital audio systems were
> first being developed, 48kHz and 50 kHz were competing "standards" - 48k
> surviving as an option in DAT recorders.
>


96 was chosen because it's 2 x 48kHz, the sampling rate for digital audio
that accompanies digital picture.

Ty Ford





-- Ty Ford's equipment reviews, audio samples, rates and other audiocentric
stuff are at www.tyford.com
Anonymous
August 14, 2005 6:32:38 PM


bohemian <alias@snowcrest.net> wrote in
news:ja2uf1ddork08klh40aq53qln81e96a635@4ax.com:

> Maybe everyone knows but me.... :) 
>
> If the target product is a CD at 44.1 K, it would seem logical that
> recording at 88.2K rather than 96 K would avoid all sorts of
> downsampling problems (??dithering??).

You won't need dithering to change sample rates, only sample size (e.g.
24 bit to 16 bit).

> The digital imaging equivalent
> would be interpolation. I would intuit that the errors in
> downsampling from 96 to 44.1 would outweigh the benefit of the
> only-8K-better sampling rate.

Theoretically, downsampling from 88.2 to 44.1 is simply throwing half the
samples away. There's not a lot of benefit in that over recording
directly at 44.1.

> Obviously my intuition must be wrong or everyone would use 88.2
> instead of 96K.... is that correct?? why?? has anyone got samples
> anywhere that show that you can hear the difference??

The folks using 96K are either (a) working harder to get it to 44.1 or
(b) using video (which has both 96K and 48K sampling rates).

> ....relative newbie about to embark on digital recording, looking
> forward to further education in RAP... it makes great reading, btw.,
> so thanks to the regulars!!

You'll find that your ears are the final arbiters. Compare your "88.2 to
44.1" material to material recorded at 44.1. I expect that you won't
hear much of a difference.
Anonymous
August 14, 2005 8:14:35 PM


Carey Carlan wrote:
> Theoretically, downsampling from 88.2 to 44.1 is simply throwing half the
> samples away.

It is not, if done properly.

--
Anahata
anahata@treewind.co.uk -+- http://www.treewind.co.uk
Home: 01638 720444 Mob: 07976 263827
Anonymous
August 14, 2005 9:35:37 PM


anahata wrote:


> I don't know why 96k was chosen.

96 is double 48.

> I do know that 44.1 has a historical
> derivation from TV line rates, which is mostly irrelevant now but we're
> stuck with it. Around the time professional digital audio systems were
> first being developed, 48kHz and 50 kHz were competing "standards" - 48k
> surviving as an option in DAT recorders.

It's a nice round, easily divisible number - 24, 12, 6 - for lower
sample rates. Not that it necessarily makes a big difference in
processing, but it's easier for humans to track (vs. 11.025 or something).



Anonymous
August 15, 2005 12:42:55 AM


Just like Heisenberg's Uncertainty Principle!
Anonymous
August 15, 2005 1:01:38 AM


"Chris Cavell" <chriscavell@cavellstudios.com> wrote in
news:1124032876.681200.10000@g43g2000cwa.googlegroups.com:

> I thought the matter of first upping the sample rate to the least
> common multiple of the available rates, applying the anti-aliasing
> filter, then downsampling to destination rate even in the case of even
> multiple rates (in other words, today's most common method of SRC) was
> made clear in earlier posts to this thread.
>
> The most common method of performing SRC today uses much more complex
> methods than "simply throwing half the samples away".

Please explain.

I envision a curve. On this curve are 88,200 points per second of content.

If I only want 44,100 of those points, would I not just choose every other
point? That would give me 44,100 points correctly spaced on that same
curve. How could you improve on that?
Anonymous
August 15, 2005 1:01:39 AM


"Carey Carlan" <gulfjoe@hotmail.com> wrote in message
news:Xns96B2AD396FC66gulfjoehotmailcom@140.99.99.130
> "Chris Cavell" <chriscavell@cavellstudios.com> wrote in
> news:1124032876.681200.10000@g43g2000cwa.googlegroups.com:
>
>> I thought the matter of first upping the sample rate to
>> the least common multiple of the available rates,
>> applying the anti-aliasing filter, then downsampling to
>> destination rate even in the case of even multiple rates
>> (in other words, today's most common method of SRC) was
>> made clear in earlier posts to this thread.
>>
>> The most common method of performing SRC today uses much
>> more complex methods than "simply throwing half the
>> samples away".
>
> Please explain.
>
> I envision a curve. On this curve are 88,200 points per
> second of content.
>
> If I only want 44,100 of those points, would I not just
> choose every other point? That would give me 44,100
> points correctly spaced on that same curve. How could
> you improve on that?

For openers, downsamplers usually involve decimation. This
is the process of averaging adjacent samples to come up with
new samples at a lowered sample rate.

When you throw away samples you throw away information.

When you decimate data, you preserve the information in a
sense.

Information theory suggests that when you drop the sample
rate of data, and preserve the information in the right
sense, you should improve the dynamic range of the
downsampled data. IOW, you trade bandwidth for reduced
noise.
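The difference between the two approaches is easy to see in numbers. The sketch below (illustrative only; the 40 kHz test tone, rates, and buffer length are arbitrary choices) feeds both methods a tone that lies above the new Nyquist limit: plain sample-dropping lets it alias through at nearly full level, while even a crude two-sample average knocks it well down.

```python
import numpy as np

fs = 88200
n = np.arange(4096)
# A 40 kHz tone: legal at fs = 88.2k, but above the 22.05 kHz Nyquist
# limit of the 44.1k target, so a proper SRC must remove it entirely.
tone = np.sin(2 * np.pi * 40000 * n / fs)

dropped = tone[::2]                        # throw every other sample away
averaged = (tone[0::2] + tone[1::2]) / 2   # crude 2-tap average first

# The dropped version aliases to about 4.1 kHz at nearly full
# amplitude; averaging first attenuates the illegal tone (a real SRC
# filter, of course, does far better than a 2-tap average).
```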
Anonymous
August 15, 2005 1:56:10 AM


"Carey Carlan" <gulfjoe@hotmail.com> wrote in message
news:Xns96B26B505EB08gulfjoehotmailcom@140.99.99.130...
> bohemian <alias@snowcrest.net> wrote in
> news:ja2uf1ddork08klh40aq53qln81e96a635@4ax.com:
>
> > Maybe everyone knows but me.... :) 
> >
> > If the target product is a CD at 44.1 K, it would seem logical that
> > recording at 88.2K rather than 96 K would avoid all sorts of
> > downsampling problems (??dithering??).
>
> You won't need dithering to change sample rates, only sample size (e.g.
> 24 bit to 16 bit).
>
> > The digital imaging equivalent
> > would be interpolation. I would intuit that the errors in
> > downsampling from 96 to 44.1 would outweigh the benefit of the
> > only-8K-better sampling rate.
>
> Theoretically, downsampling from 88.2 to 44.1 is simply throwing half the
> samples away. There's not a lot of benefit in that over recording
> directly at 44.1.
>
> > Obviously my intuition must be wrong or everyone would use 88.2
> > instead of 96K.... is that correct?? why?? has anyone got samples
> > anywhere that show that you can hear the difference??
>
> The folks using 96K are either (a) working harder to get it to 44.1 or
> (b) using video (which has both 96K and 48K sampling rates).
>
> > ....relative newbie about to embark on digital recording, looking
> > forward to further education in RAP... it makes great reading, btw.,
> > so thanks to the regulars!!
>
> You'll find that your ears are the final arbiters. Compare your "88.2 to
> 44.1" material to material recorded at 44.1. I expect that you won't
> hear much of a difference.


What is the consensus on the benefits of recording with any sample rate
higher than 44,1 kHz if there's a moderate amount of software processing
(light eq, gain change) involved, in addition to editing, but the project
will still end up on a CD?

Predrag
Anonymous
August 15, 2005 1:56:11 AM


On Sun, 14 Aug 2005 21:56:10 +0200, Predrag Trpkov wrote:

> What is the consensus on the benefits of recording with any sample rate
> higher than 44,1 kHz if there's a moderate amount of software processing
> (light eq, gain change) involved, in addition to editing, but the project
> will still end up on a CD?

AFAICT, that's a good reason to choose 24 bit over 16 bit. But shouldn't
be affected by sample rate at all.
Anonymous
August 15, 2005 1:56:11 AM


"Predrag Trpkov" <predrag.trpkovNeSpamu@ri.htnet.hr> wrote
in message news:ddo7m2$ifd$1@ss405.t-com.hr

> What is the consensus on the benefits of recording with
> any sample rate higher than 44,1 kHz if there's a
> moderate amount of software processing (light eq, gain
> change) involved, in addition to editing, but the project
> will still end up on a CD?

None. The most significant sample-rate dependent processing
is the anti-aliasing during digitization, and the low pass
filtering for the reconstruction filter. If the target sample
rate is 44.1, that processing is always going to be part of
the signal chain. If you downsample, there has to be
anti-aliasing at 44.1 at that point, so you can't get away
from it.
Anonymous
August 15, 2005 1:56:12 AM


"Agent 86" <maxwellsmart@control.gov> wrote in message
news:pan.2005.08.14.20.19.46.300519@control.gov...
> On Sun, 14 Aug 2005 21:56:10 +0200, Predrag Trpkov wrote:
>
> > What is the consensus on the benefits of recording with any sample rate
> > higher than 44,1 kHz if there's a moderate amount of software processing
> > (light eq, gain change) involved, in addition to editing, but the
project
> > will still end up on a CD?
>
> AFAICT, that's a good reason to choose 24 bit over 16 bit. But shouldn't
> be affected by sample rate at all.

Although I have heard that some plugins sound better at 88.2 or 96, I think
something about producing less artifacts in the normal audio bandwidth. I
think my solution would be to get a better plugin.

Sean
Anonymous
August 15, 2005 2:34:03 AM


Carey Carlan wrote:
> I envision a curve. On this curve are 88,200 points per second of content.
>
> If I only want 44,100 of those points, would I not just choose every other
> point? That would give me 44,100 points correctly spaced on that same
> curve. How could you improve on that?

Here's a simple example that shows how that might not work.

Original signal : 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1 1
converted (1) 0 1 0 1 0 1 0 1
converted (2) 0 0 1 1 0 0 1 1

Depending which set of samples you take,
you get signals at two different frequencies in 2:1 ratio!

Your curve visualisation is fine until you look at the fine detail and
realize the curve may be jagged rather than smooth. It is at the HF end
that your method will make the biggest errors.

Taking the average of samples in pairs is better, but it's possible to
prove that's still not perfect. It's a filter but not a good one.

Another way of looking at this is in the frequency domain.
Any content between 22kHz and 44 kHz in the 88k sampled signal will be
aliased by your "ignore alternate samples" method into the 0-22kHz range
where it will be audible and different from the original.

--
Anahata
anahata@treewind.co.uk -+- http://www.treewind.co.uk
Home: 01638 720444 Mob: 07976 263827
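The toy signal in the post above can be checked directly; a tiny sketch (illustrative only):

```python
# The toy signal from the post: keeping alternate samples gives two
# different signals depending on which half you keep.
orig = [0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1]
even = orig[0::2]   # one full cycle every 2 samples
odd  = orig[1::2]   # one full cycle every 4 samples
print(even)  # [0, 1, 0, 1, 0, 1, 0, 1]
print(odd)   # [0, 0, 1, 1, 0, 0, 1, 1]
```

Neither result can be called "the" downsampled signal: the content near the original Nyquist limit had to be filtered out before decimating, exactly as the post says.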
Anonymous
August 15, 2005 3:01:32 AM


On Sun, 14 Aug 2005 22:34:03 +0100, anahata wrote:

> Carey Carlan wrote:
>> I envision a curve. On this curve are 88,200 points per second of
>> content.
>>
>> If I only want 44,100 of those points, would I not just choose every
>> other point? That would give me 44,100 points correctly spaced on that
>> same curve. How could you improve on that?
>
> Here's a simple example that should how that might not work.
>
> Original signal : 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1 1
> converted (1) 0 1 0 1 0 1 0 1
> converted (2) 0 0 1 1 0 0 1 1

Not a very good example. What's getting sampled every 1/44,100 second is
not a single bit that can be represented by a one or a zero. Methinks you
have your bits mixed up with your bytes.
Anonymous
August 15, 2005 3:06:04 AM


Carey Carlan wrote:

> I envision a curve. On this curve are 88,200 points per second of content.
>
> If I only want 44,100 of those points, would I not just choose every other
> point? That would give me 44,100 points correctly spaced on that same
> curve. How could you improve on that?

You must lowpass it first. If those 88200 pps contain
harmonic content above 22050 Hz then simply selecting
alternate samples aliases it down into the baseband.


Bob
--

"Things should be described as simply as possible, but no
simpler."

A. Einstein
Anonymous
August 15, 2005 3:18:28 AM


Predrag Trpkov wrote:

> What is the consensus on the benefits of recording with any sample rate
> higher than 44,1 kHz if there's a moderate amount of software processing
> (light eq, gain change) involved, in addition to editing, but the project
> will still end up on a CD?

This depends on the quality and nature of your processing
software. If it introduces non-linearity, such as a
compressor does, then it must be really careful to upsample
to well above what it is given before processing and
downsample it back with filtering after the process so that
the non-linearities of the process produce fewer harmonics
that get aliased down into the baseband.

If you are working with a higher sample rate to start with,
and resampling back down after all is said and done, that
problem, should the effects suffer from it, is ameliorated.

I've never seen anything published that evaluates the
effectiveness of processes in that regard. It's not
something that plugin vendors spec.

If all the processes are linear then there is no reason I
can conceive of why a higher recording sample rate would be
of any value when distribution is to be at a lower one.


Bob
--

"Things should be described as simply as possible, but no
simpler."

A. Einstein
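A numeric sketch of the aliasing described above, using a bare x**3 nonlinearity as a stand-in for any nonlinear process such as compression (the 15 kHz tone, the rates, and the FFT size are all arbitrary illustrations):

```python
import numpy as np

# Cube a 15 kHz tone at 44.1 kHz. sin^3 contains a component at
# 3f = 45 kHz, which cannot exist below Nyquist and folds down to
# |45000 - 44100| = 900 Hz, squarely in the audible band.
fs, f, N = 44100, 15000, 8192
n = np.arange(N)
tone = np.sin(2 * np.pi * f * n / fs)

win = np.hanning(N)
clean = np.abs(np.fft.rfft(tone * win))
cubed = np.abs(np.fft.rfft(tone**3 * win))

alias_bin = round(900 * N / fs)          # FFT bin of the folded harmonic
band = slice(alias_bin - 3, alias_bin + 4)
# The clean tone has essentially nothing near 900 Hz; the cubed one has
# a strong aliased component there. Running the nonlinear process at a
# higher rate and filtering before downsampling keeps that junk out.
```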
Anonymous
August 15, 2005 3:28:51 AM


"Arny Krueger" <arnyk@hotpop.com> wrote in message
news:wMWdnZ2dnZ1_wZWgnZ2dnYs2Yt-dnZ2dRVn-z52dnZ0@comcast.com...
> "Predrag Trpkov" <predrag.trpkovNeSpamu@ri.htnet.hr> wrote
> in message news:ddo7m2$ifd$1@ss405.t-com.hr
>
> > What is the consensus on the benefits of recording with
> > any sample rate higher than 44,1 kHz if there's a
> > moderate amount of software processing (light eq, gain
> > change) involved, in addition to editing, but the project
> > will still end up on a CD?
>
> None. The most significant sample-rate dependent processing
> is tha anti-aliasing during digitization, and the low pass
> filtering for the reconstuction filter. If the target sample
> rate is 44.1, that processing is always going to be part of
> the signal chain. If you downsample, there has to be
> anti-aliasing at 44.1 at that point, so you can't get away
> from it.


Thanks. It'll make my life easier. I've started recording with high sample
rates, but have never been quite sold on the idea.

You may be surprised, but I have a lot of respect for your vast theoretical
knowledge. I listen carefully when you're dealing with factual info. Just
don't expect me to accept as gospel your personal preferences on subjective
matters.

Predrag
Anonymous
August 15, 2005 4:09:49 AM


"If all the processes are linear then there is no reason I
can conceive of why a higher recording sample rate would be
of any value when distribution is to be at a lower one. "

I would say there are a number of reasons not all of which are sonic.
First of all it's always wise to have a master recording done with
highest possible quality even if it's cut back later. There is this
little thing known as the future and one never knows when a certain
recording will be viewed as a priceless gem at some point.
Technology marches on and while a certain sample rate may be fine for
today, tomorrow you may be wishing you'd used up a bit more space on
that recording.

Secondly as you yourself note, if there to be some non-linear
processing now or in the future, it would be best to have that higher
rate master in the can.

And lastly, there actually CAN be some sonic improvement with higher
sample rates if the downsampling algorithms are properly done. The
reason has to do with the phase sensitivity of digital sampling near
the Nyquist rate. Everyone loves to say that if you sample a 20 kHz
wave at 40k sps you can get the wave back perfectly. Yes, IF the
samples happen to hit the positive and negative peaks. If you hit the
zero crossings you get absolutely NOTHING back! But sample the same
wave 4 times and the phase situation is much improved. It is possible
with the proper downsampling methods to use the added information to
produce a wave (though maybe not totally correct) where the lower rate
was giving nothing. In other words the phase problems even of the
lower sampling rate can be reduced. Basically you are using the
greater information to adjust phases of tones to better match the lower
sampling rate. Sonically phase shifts are not very audible but tones
dropping out are much more noticeable.

Finally, there is the philosophical question of throwing information
away. (Which is what recording at lower sampling rates does) This is
generally a bad policy. Whether it's your old IRS data or a master
recording it's always better to have the data and never use it than to
need it and not have it! :-)

Benj
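The zero-crossing claim above is easy to reproduce for the degenerate case of a tone exactly AT the Nyquist frequency. Note the sampling theorem requires the signal to be strictly BELOW fs/2, so this sketch (numbers taken from the post, purely illustrative) shows only the boundary case:

```python
import numpy as np

fs, f = 40000.0, 20000.0    # sample at exactly twice the tone frequency
n = np.arange(64)

# Phase such that samples land on the peaks: full-scale +1/-1 values.
on_peaks = np.cos(2 * np.pi * f * n / fs)   # cos(pi*n) = +1, -1, +1, ...
# Phase such that samples land on the zero crossings: nothing at all.
on_zeros = np.sin(2 * np.pi * f * n / fs)   # sin(pi*n) = 0, 0, 0, ...
```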
Anonymous
August 15, 2005 7:18:58 AM


Chris Cavell <chriscavell@cavellstudios.com> wrote:

> I thought the matter of first upping the sample rate to the least
> common multiple of the available rates, applying the anti-aliasing
> filter, then downsampling to destination rate even in the case of even
> multiple rates (in other words, today's most common method of SRC) was
> made clear in earlier posts to this thread.

In the case of 88.2k to 44.1k, upsampling to the least-common multiple
does not require upsampling at all.

But, when you downsample, the low-pass filter will require an
arithmetic calculation to be performed on each sample which will
obliterate whatever tidiness you perceive in your Nice Round SRC
maneuver.

I can think of one very good reason to record with a higher sampling
rate than the delivery medium, but I have no idea how often this
scenario occurs today: If you have an anti-aliasing filter in a
non-real-time SRC algorithm that sounds demonstrably better than the
real-time filter in the AD converter. At least in theory, recording at
96k will allow the use of a lower-order low-pass filter that has fewer
in-band artefacts (in-band for the final delivery medium, that is).
It's possible that a more sophisticated filter could be used later for
the SRC that would produce fewer or less objectionable in-band
artefacts.

ulysses
Anonymous
August 15, 2005 9:03:05 AM


On Sun, 14 Aug 2005 21:01:38 GMT, Carey Carlan <gulfjoe@hotmail.com>
wrote:

>I envision a curve. On this curve are 88,200 points per second of content.
>
>If I only want 44,100 of those points, would I not just choose every other
>point? That would give me 44,100 points correctly spaced on that same
>curve. How could you improve on that?

Strangely enough, you'd first have to implement a low pass filter.
Otherwise, you'd have actually doubled the signal bandwidth,
effectively making an "alias" in the twice-passband band.

Clear as mud, isn't it? Maybe Bob Cain will weigh in with
a convincing explanation that's understandable.

Chris Hornbeck
August 15, 2005 10:09:21 AM


Great reading....
Point 1.... I was wrong about dithering....
Point 2.... throwing out half the data would mean sampling at 44.1 would
be equivalent, and I agree that done properly they MUST average (or
use a more sophisticated algorithm)

The real question is - would the final result on CD at 44.1 not be
better if recorded and processed (particularly the plugins, etc.) at a
higher sample rate and then downsampled as the last step??

Bottom line, should the newbie get in the habit of using 44.1 or 96K??
Why and why not??

Thanks in advance for the comments!

Peter
On Mon, 15 Aug 2005 05:03:05 GMT, Chris Hornbeck
<chrishornbeckremovethis@att.net> wrote:

>On Sun, 14 Aug 2005 21:01:38 GMT, Carey Carlan <gulfjoe@hotmail.com>
>wrote:
>
>>I envision a curve. On this curve are 88,200 points per second of content.
>>
>>If I only want 44,100 of those points, would I not just choose every other
>>point? That would give me 44,100 points correctly spaced on that same
>>curve. How could you improve on that?
>
>Strangely enough, you'd first have to implement a low pass filter.
>Otherwise, you'd have actually doubled the signal bandwidth,
>effectively making an "alias" in the twice-passband band.
>
>Clear as mud, isn't it? Maybe Bob Cain will weigh in with
>a convincing explanation that's understandable.
>
>Chris Hornbeck
Anonymous
August 15, 2005 10:18:54 AM


"bohemian" <alias@snowcrest.net> wrote in message
news:p1c0g1hlnihlk5ul9spn8csvhb7lr0c5df@4ax.com

> The real question is - would the final result on CD at
> 44.1 not be better if recorded and processed
> (particularly the plugins, etc.) at a higher sample rate
> and then downsampled as the last step??

No.

However, it can make sense to record and process in 24 or 32
bits and dither down to 16 bits as your last step.
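The word-length reduction mentioned here (dither down to 16 bits as the last step) looks roughly like this sketch. TPDF dither is one common choice; the function name, scaling, and RNG seed are illustrative, and real mastering tools often add noise shaping on top:

```python
import numpy as np

rng = np.random.default_rng(0)

def dither_to_16bit(x):
    """Reduce float samples in [-1.0, 1.0) to 16-bit integers with
    TPDF dither: the sum of two uniform random values, spanning
    +/- 1 LSB, added before rounding. Sketch only."""
    lsb = 1.0 / 32768.0
    tpdf = (rng.random(len(x)) - rng.random(len(x))) * lsb  # triangular PDF
    q = np.round((x + tpdf) * 32768.0)
    return np.clip(q, -32768, 32767).astype(np.int16)
```

The dither decorrelates the rounding error from the signal, so quiet material fades into benign noise rather than truncation distortion.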
Anonymous
August 15, 2005 10:56:40 AM


On Mon, 15 Aug 2005 06:09:21 GMT, bohemian <alias@snowcrest.net>
wrote:

>point2 throwing out half the data would mean sampling at 44.1 would
>be equivalent,

Perhaps clouding, perhaps enlightening, but let me just
interject that ALL modern, meaning in the last coupla
decades, converters, both A/D and D/A, run oversampled.

Meaning that low pass filters are a significant, critical
part of their works. We civilians don't talk about their
workings but they matter.

IOW, *nothing* actually runs at 44.1; only the firmware
matters and nobody talks about the firmware (here, because
we can't; because we don't know enough).


>The real question is - would the final result on CD at 44.1 not be
>better if recorded and processed (particularly the plugins, etc.) at a
>higher sample rate and then downsampled as the last step??

Mon, that's the best phrased question of recent memory, and
I hope it gets an appropriately well considered response from
those who know.

Good fortune,

Chris Hornbeck
Anonymous
August 15, 2005 11:22:57 AM


On 15 Aug 2005 00:09:49 -0700, bjacoby@iwaynet.net wrote:

>And lastly, there actually CAN be some sonic improvement with higher
>sample rates if the downsampling algorithms are properly done. The
>reason has to do with the phase sensitivity of digital sampling near
>the Nyquist rate. Everyone loves to say that if you sample a 20 Khz
>wave at 40k sps you can get the wave back perfectly. Yes, IF the
>samples happen to hit the positive and negative peaks. If you hit the
>zero crossings you get absolutely NOTHING back! But sample the same
>wave 4 times and the phase situation is much improved.

Your example at the Nyquist limit is incorrect in two significant
ways. Number one is pretty obvious; what's the second?

Hint: it might be really significant to your basic argument.

Good fortune,

Chris Hornbeck
Anonymous
August 15, 2005 11:49:16 AM


In article <p1c0g1hlnihlk5ul9spn8csvhb7lr0c5df@4ax.com> same writes:

> The real question is - would the final result on CD at 44.1 not be
> better if recorded and processed (particularly the plugins, etc.) at a
> higher sample rate and then downsampled as the last step??

It depends on what you're recording and how you're recording it. In
theory, there are some advantages to recording and processing at
higher sample rates, and little disadvantage other than the obvious
(reduction in bandwidth and resolution) when converting to "CD format";
however, in practice it may not matter in the end.

> Bottom line, should the newbie get in the habit of using 44.1 or 96K??
> Why and why not??

The newbie should simplify his process until he fills up a bag of
tricks and techniques, learns to listen accurately, develops a
monitoring environment that lets him hear what he's doing, fills up
his equipment closet, and so on. It's simpler to record at 44.1 kHz
than 88.2 or 96. It takes less computer resources, and there are no
decisions as to what conversion process should be used, when, and by
whom. Nobody seems to worry much about converting from 24-bit to
16-bit any more (that's where you need dithering) so you can probably
safely take advantage of 24-bit resolution and available dynamic range
and enjoy the headroom and the simplicity of setting record levels
conservatively and not worry about overloads.

As you gain more experience and have better listening facilities, you
can experiment with higher sample rates and decide when it's worth
while and when it isn't.


--
I'm really Mike Rivers (mrivers@d-and-d.com)
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me here: double-m-eleven-double-zero at yahoo
Anonymous
August 15, 2005 1:49:04 PM


On Mon, 15 Aug 2005 03:18:58 -0500, Justin Ulysses Morse wrote:

> Chris Cavell <chriscavell@cavellstudios.com> wrote:
>
>> I thought the matter of first upping the sample rate to the least
>> common multiple of the available rates, applying the anti-aliasing
>> filter, then downsampling to destination rate even in the case of even
>> multiple rates (in other words, today's most common method of SRC) was
>> made clear in earlier posts to this thread.
>
> In the case of 88.2k to 44.1k, upsampling to the least-common multiple
> does not require upsampling at all.
>
In making the conversion, you may not have to upsample, but that doesn't
mean that it doesn't happen. It will just be built into the conversion
algorithm. The conversion process needs antialiasing at 20kHz, and to do
this a filter must be implemented at a clock frequency much higher than the
original 88.2kHz. Since oversampling is already done on the original source
signal, this same frequency will probably be used, with standard
interpolation algorithms. Once the dots have been joined with a smooth
curve, it doesn't really make any difference what lower frequency you
resample to.

> But, when you downsample, the low-pass filter will require an
> arithmetic calculation to be performed on each sample which will
> obliterate whatever tidiness you perceive in your Nice Round SRC
> maneuver.
>
> I can think of one very good reason to record with a higher sampling
> rate than the delivery medium, but I have no idea how often this
> scenario occurs today: If you have an anti-aliasing filter in a
> non-real-time SRC algorithm that sounds demonstrably better than the
> real-time filter in the AD converter. At least in theory, recording at
> 96k will allow the use of a lower-order low-pass filter that has fewer
> in-band artefacts (in-band for the final delivery medium, that is).
> It's possible that a more sophisticated filter could be used later for
> the SRC that would produce fewer or less objectionable in-band
> artefacts.
>
> ulysses

Again, what you describe here ignores the fact that oversampling is always
used in playback. 16x is quite common, and this means that the analogue
anti-alias filter is trivial in the extreme, and need not have any audible
effects on the signal. And of course it makes no difference whether or not
this happens in real time.

d
Anonymous
August 15, 2005 4:57:57 PM

Archived from groups: rec.audio.pro (More info?)

On Mon, 15 Aug 2005 00:09:49 -0700, bjacoby wrote:

> There is this little thing
> known as the future and one never knows when a certain recording will be
> viewed as a priceless gem at some point in it. Technology marches on and
> while a certain sample rate may be fine for today, tomorrow you may be
> wishing you'd used up a bit more space on that recording.

What's your prediction regarding just how the average person's hearing is
going to get better in the future?


> And lastly, there actually CAN be some sonic improvement with higher
> sample rates if the downsampling algorithms are properly done. The
> reason has to do with the phase sensitivity of digital sampling near
> the Nyquist rate. Everyone loves to say that if you sample a 20 Khz wave
> at 40k sps you can get the wave back perfectly. Yes, IF the samples
> happen to hit the positive and negative peaks. If you hit the zero
> crossings you get absolutely NOTHING back!

That's why we sample at 44.1 kHz or 48 kHz.
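The zero-crossing claim above is easy to check numerically: sampling a tone at exactly twice its frequency really is phase-sensitive, which is precisely why the theorem demands a margin. A toy Python sketch (names are mine):

```python
import math

def sample_tone(freq_hz, fs_hz, phase_rad, n):
    """Sample a unit-amplitude sine of the given frequency and phase
    at rate fs_hz, returning the first n sample values."""
    return [math.sin(2 * math.pi * freq_hz * k / fs_hz + phase_rad)
            for k in range(n)]

# A 20 kHz tone sampled at exactly 40 k samples/sec (the critical rate):
zeros = sample_tone(20_000, 40_000, 0.0, 8)          # hits the zero crossings
peaks = sample_tone(20_000, 40_000, math.pi / 2, 8)  # hits the peaks

print(max(abs(v) for v in zeros))  # ~0: the tone vanishes entirely
print(max(abs(v) for v in peaks))  # ~1: the tone survives at full level
```

At 44.1 kHz the same 20 kHz tone is below the Nyquist frequency, so no sampling phase can make it disappear.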
Anonymous
August 15, 2005 4:57:58 PM

Archived from groups: rec.audio.pro (More info?)

"Agent 86" <maxwellsmart@control.gov> wrote in message
news:pan.2005.08.15.12.57.56.517002@control.gov
> On Mon, 15 Aug 2005 00:09:49 -0700, bjacoby wrote:
>
>> There is this little thing
>> known as the future and one never knows when a certain
>> recording will be viewed as a priceless gem at some
>> point in it. Technology marches on and while a certain
>> sample rate may be fine for today, tomorrow you may be
>> wishing you'd used up a bit more space on that
>> recording.
>
> What's your prediction regarding just how the average
> person's hearing is going to get better in the future?

In recent times it seems like the hearing of the average
person has gone down hill, due to all the new sources of
ear-damaging sound levels.

>> And lastly, there actually CAN be some sonic improvement
>> with higher sample rates if the downsampling algorithms
>> are properly done.

This is going to be good! ;-)

>> The reason has to do with the phase
>> sensitivity of digital sampling near the Nyquist rate.

If the Nyquist frequency is above a few kHz the ear has just
about zero sensitivity to phase shift.

>> Everyone loves to say that if you sample a 20 Khz wave
>> at 40k sps you can get the wave back perfectly.

There's no need to get the wave back perfectly. With a
little creative phase shifting I can show you waveforms that
look like they've been to North Korea and back, and they
will completely fool you in a good listening test.

>> Yes, IF the samples happen to hit the positive and
>> negative
>> peaks. If you hit the zero crossings you get absolutely
>> NOTHING back!

That only happens at the Nyquist frequency, which violates
the Nyquist theorem.

> That's why we sample at 44.1KHz or 48KHz.

Fact is we could sample as low as 32-38 kHz and listeners
could remain very happy.
Anonymous
August 15, 2005 6:04:43 PM

Archived from groups: rec.audio.pro (More info?)

bjacoby@iwaynet.net wrote:
> "If all the processes are linear then there is no reason I
> can conceive of why a higher recording sample rate would be
> of any value when distribution is to be at a lower one. "
>
> I would say there are a number of reasons not all of which are sonic.
> First of all it's always wise to have a master recording done with
> highest possible quality even if it's cut back later. There is this
> little thing known as the future and one never knows when a certain
> recording will be viewed as a priceless gem at some point in it.
> Technology marches on and while a certain sample rate may be fine for
> today, tomorrow you may be wishing you'd used up a bit more space on
> that recording.

Hi, Benj. This gets all wrapped up in the question of what
sample rate is adequate for reproduction given human ears
(now and until we genetically improve our bandwidth) and
that debate seems never-ending despite the ease of
experimental determination.

> Secondly as you yourself note, if there to be some non-linear
> processing now or in the future, it would be best to have that higher
> rate master in the can.

This presumes that the non-linear processes aren't designed
to suppress higher-frequency harmonics. While there
hasn't been much written on that, I would certainly hope that
DSP designers are well aware of such an obvious
problem and address it in the design.

>
> And lastly, there actually CAN be some sonic improvement with higher
> sample rates if the downsampling algorithms are properly done. The
> reason has to do with the phase sensitivity of digital sampling near
> the Nyquist rate. Everyone loves to say that if you sample a 20 Khz
> wave at 40k sps you can get the wave back perfectly. Yes, IF the
> samples happen to hit the positive and negative peaks. If you hit the
> zero crossings you get absolutely NOTHING back!

The sampling theorem requires that bandwidth be strictly
less than half the sample rate for perfect reconstruction.
In practice, reconstruction (or SRC lowpass) filters have a
slope no matter how high the order or the oversampling and
things get fuzzy up there regardless of how SRC is done.

> But sample the same
> wave 4 times and the phase situation is much improved. It is possible
> with the proper downsampling methods to use the added information to
> produce a wave (though maybe not totally correct) where the lower rate
> was giving nothing. In other words the phase problems even of the
> lower sampling rate can be reduced. Basically you are using the
> greater information to adjust phases of tones to better match the lower
> sampling rate. Sonically phase shifts are not very audible but tones
> dropping out are much more noticeable.

But what happens when you eventually SRC down to the
distribution sample rate? Any such advantage disappears.


Bob
--

"Things should be described as simply as possible, but no
simpler."

A. Einstein
August 15, 2005 6:19:07 PM

Archived from groups: rec.audio.pro (More info?)

Thanks to all for the replies... seems pretty clear that I would do
well to sample at 44.1 24 bit for starters... mics, their placement,
rooms, etc. will have so much more impact that there is where my focus
needs to be for quite some time I suspect.
Anonymous
August 15, 2005 6:19:08 PM

Archived from groups: rec.audio.pro (More info?)

"Peter" <alias@snowcrest.net> wrote in message
news:6s81g11sljb7bn9hrfv9kipjk5tdghijrs@4ax.com

> Thanks to all for the replies... seems pretty clear that
> I would do well to sample at 44.1 24 bit for starters...
> mics, their placement, rooms, etc. will have so much more
> impact that there is where my focus needs to be for quite
> some time I suspect.

Exactly.
Anonymous
August 15, 2005 6:19:08 PM

Archived from groups: rec.audio.pro (More info?)

Peter wrote:
> Thanks to all for the replies... seems pretty clear that I would do
> well to sample at 44.1 24 bit for starters... mics, their placement,
> rooms, etc. will have so much more impact that there is where my focus
> needs to be for quite some time I suspect.

I certainly agree with that and will even more so when 24
bit converters generally get more than 16-18 honestly
significant bits. :-)


Bob
--

"Things should be described as simply as possible, but no
simpler."

A. Einstein
Anonymous
August 15, 2005 9:11:09 PM

Archived from groups: rec.audio.pro (More info?)

Easy. For archival purposes if nothing else. Better yet, if you've got
really decent converters that can handle 192 kHz, then do that. At some
point in the future higher sampling rates will be normal, but even if they
aren't there's no reason not to use the highest sampling rate you can, and
this comes to me from Glenn Meadows.

For the average person doing some work, 16/44.1 done correctly can be quite
enough for a CD release. In fact, you'll probably find more variations in
dithering possibilities going from 24 bit to 16 than you would find in a
decent SRC algorithm. In Samplitude I've got two different POW-R dithering
algorithms, and a couple of selections of noise shaping. And it also
depends on WHERE you are recording. Live events usually don't raise enough
frequency-response concerns to warrant higher sample rates or bit depths. If
a room is -55 dB at its quietest, why use 24 bit recording? And it depends
on what you are recording. A guitar band, for instance, isn't going to
require anything higher than 22.05 kHz, so 44.1 kHz is just fine, and again,
probably not a big concern for the bit depth unless you happen to be at
Skywalker or some place really nicely built.
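The 24-to-16 dithering step mentioned above can be illustrated with plain TPDF dither — a toy sketch only, nothing as sophisticated as the POW-R algorithms named in the post, and all names here are mine:

```python
import random

def quantize(x, rng=None):
    """Round x (expressed in units of one 16-bit LSB) to an integer,
    first adding triangular (TPDF) dither of 2 LSB peak-to-peak
    when an rng is supplied."""
    if rng is not None:
        x += rng.random() - rng.random()  # sum of two uniforms: triangular pdf
    return round(x)

level = 0.4                     # a detail smaller than one 16-bit step
rng = random.Random(0)          # seeded so the demo is repeatable
plain = [quantize(level) for _ in range(100_000)]
dithered = [quantize(level, rng) for _ in range(100_000)]

print(sum(plain) / len(plain))        # 0.0: undithered rounding erases the detail
print(sum(dithered) / len(dithered))  # close to 0.4: preserved as benign noise
```

The undithered quantizer simply truncates anything below half an LSB; with dither the same low-level detail survives, traded for a small noise floor — which is the whole argument for dithering the word-length reduction.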

So I don't think there's only the concern of what sampling rate to record
at. That's just one factor, and for all practical purposes, as long as the
converters are good at the higher rates, you are only taking up more data
space to record. If it were two track orchestral recordings with a nice
Decca tree configuration I'd look at DSD.

--


Roger W. Norman
SirMusic Studio
http://blogs.salon.com/0004478/

"bohemian" <alias@snowcrest.net> wrote in message
news:ja2uf1ddork08klh40aq53qln81e96a635@4ax.com...
> Maybe everyone knows but me.... :) 
>
> If the target product is a CD at 44.1 K, it would seem logical that
> recording at 88.2K rather than 96 K would avoid all sorts of
> downsampling problems (??dithering??). The digital imaging equivalent
> would be interpolation. I would intuit that the errors in
> downsampling from 96 to 44.1 would outweigh the benefit of the
> only-8K-better sampling rate.
>
> Obviously my intuition must be wrong or everyone would use 88.2
> instead of 96K.... is that correct?? why?? has anyone got samples
> anywhere that show that you can hear the difference??
>
> ....relative newbie about to embark on digital recording, looking
> forward to further education in RAP... it makes great reading, btw.,
> so thanks to the regulars!!
>
>
> Peter
Anonymous
August 15, 2005 10:42:37 PM

Archived from groups: rec.audio.pro (More info?)

Agent 86 wrote:
> On Sun, 14 Aug 2005 22:34:03 +0100, anahata wrote:
>>Here's a simple example that should show how that might not work.
>>
>>Original signal : 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1 1
>>converted (1) 0 1 0 1 0 1 0 1
>>converted (2) 0 0 1 1 0 0 1 1
>
> Not a very good example. What's getting sampled every 1/44.100 second is
> not a single bit that can be represented by a one or a zero. Methinks you
> have your bits mixed up with your bytes.

Not at all. For simplicity of illustration I chose two simple data
values. Perhaps I should have written 26489 and -14752 instead of 1 and
0. Would that have been clearer?

(no, don't bother to answer that)

--
Anahata
anahata@treewind.co.uk -+- http://www.treewind.co.uk
Home: 01638 720444 Mob: 07976 263827
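Anahata's even-versus-odd illustration can be reproduced with real sample values: decimate a tone that lies above the new Nyquist limit by keeping the even samples, then the odd samples, and compare. An illustrative Python sketch (values are mine):

```python
import math

fs_in = 88_200
f = 33_000   # legal at 88.2k, but above the 22 050 Hz Nyquist limit of 44.1k
x = [math.sin(2 * math.pi * f * n / fs_in) for n in range(512)]

even = x[0::2]   # one way to "throw half the samples away"
odd = x[1::2]    # the equally valid other way

# The two results disagree wildly, because the 33 kHz energy aliases
# down to 11.1 kHz with a different phase in each case:
diff = max(abs(a - b) for a, b in zip(even, odd))
print(diff)   # large -- comparable to the full signal amplitude
```

With the content lowpassed below 22.05 kHz first, the two decimations would differ only by a half-sample delay — which is exactly why the anti-alias filter, not the sample selection, is the real work of SRC.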
Anonymous
August 15, 2005 11:45:18 PM

Archived from groups: rec.audio.pro (More info?)

> Not a very good example. What's getting sampled every 1/44.100 second is
> not a single bit that can be represented by a one or a zero. Methinks you
> have your bits mixed up with your bytes.


Yeah, but it's a moot point anyway. Arny's correct. A new curve is
interpolated from the average of the samples. If you simply threw out every
other sample, you'd have changes in amplitude that wouldn't correctly be
represented over time. So the algorithm has to make a new plotted curve that
represents the old curve's amplitude but in the correct time frame. In
fact, if one were to try to just throw away every other sample I believe
you'd really have digital hash. It might play, but it surely wouldn't sound
right. In other words, if you simply removed half the samples you'd have
44,100 samples for a second but they'd be in the 88,200 (with an 88,200
sample missing) spaces. To get the timeframes correct one has to place the
44,100 sample between the two 88,200 samples. That means that data point is
at a different place in time, and the amplitude of that signal will be
slightly different from either of the two higher-rate samples in order to
represent the exact same curve.

--


Roger W. Norman
SirMusic Studio
http://blogs.salon.com/0004478/

"Agent 86" <maxwellsmart@control.gov> wrote in message
news:pan.2005.08.14.23.01.31.48305@control.gov...
> On Sun, 14 Aug 2005 22:34:03 +0100, anahata wrote:
>
> > Carey Carlan wrote:
> >> I envision a curve. On this curve are 88,200 points per second of
> >> content.
> >>
> >> If I only want 44,100 of those points, would I not just choose every
> >> other point? That would give me 44,100 points correctly spaced on that
> >> same curve. How could you improve on that?
> >
> > Here's a simple example that should show how that might not work.
> >
> > Original signal : 0 0 1 0 0 1 1 1 0 0 1 0 0 1 1 1
> > converted (1) 0 1 0 1 0 1 0 1
> > converted (2) 0 0 1 1 0 0 1 1
>
>
>
Anonymous
August 16, 2005 3:05:05 AM

Archived from groups: rec.audio.pro (More info?)

Bob Cain <arcane@arcanemethods.com> wrote in
news:D dpbc80cfj@enews1.newsguy.com:

> Carey Carlan wrote:
>
>> I envision a curve. On this curve are 88,200 points per second of
>> content.
>>
>> If I only want 44,100 of those points, would I not just choose every
>> other point? That would give me 44,100 points correctly spaced on
>> that same curve. How could you improve on that?
>
> You must lowpass it first. If those 88200 pps contain
> harmonic content above 22050 Hz then simply selecting
> alternate samples aliases it down into the baseband.

Absolutely correct. I was erroneously envisioning a wave that could be
reproduced at 44.1K.

In fact, recording at 88.2K means I WILL have content above
the 22.05K Nyquist limit of the target rate whether I want it or not. Even if it's just
thermal noise, it will be there and have to be removed. The "smoothing"
function will move those 44,100 points to slightly different locations.
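The folding Bob describes is easy to demonstrate: once sampled at 44.1k, a tone at 30 kHz is sample-for-sample the negation of a tone at 14.1 kHz. A small Python check (illustrative only):

```python
import math

fs = 44_100
f_high = 30_000        # above the 22 050 Hz Nyquist limit
f_img = fs - f_high    # 14 100 Hz, its image in the baseband

high = [math.sin(2 * math.pi * f_high * n / fs) for n in range(64)]
img = [math.sin(2 * math.pi * f_img * n / fs) for n in range(64)]

# sin(2*pi*(fs - f)*n/fs) = -sin(2*pi*f*n/fs), so the two sequences
# cancel exactly: once sampled, they are indistinguishable.
print(max(abs(a + b) for a, b in zip(high, img)))  # ~0
```

This is why anything above the target Nyquist frequency — signal or just noise — must be filtered out before (or as part of) decimation rather than after.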
Anonymous
August 16, 2005 7:33:08 AM

Archived from groups: rec.audio.pro (More info?)

Don Pearce <donald@pearce.uk.com> wrote:

> Again, what you describe here ignores the fact that oversampling is always
> used in playback. 16x is quite common, and this means that the analogue
> anti-alias filter is trivial in the extreme, and need not have any audible
> effects on the signal. And of course it makes no difference whether or not
> this happens in real time.

How can an inaudible 20k brickwall filter be trivial, regardless of
the (up)sample rate? Filters have inband artefacts, and steeper
filters have more. Has somebody developed a perfect filter I haven't
heard about? Are you claiming that a 6dB/Octave filter with a corner
at 44kHz is no better in the audio band than a 12dB/Octave filter with
a corner at 22kHz?

The idea of a non-real-time filter algorithm potentially working better
than a real-time filter is not my original idea. There was a lot of
talk about that maybe 5 years ago, and I figured at some point the
hardware would become powerful enough to do whatever number-crunching
needed to be done. Has that day arrived? I don't know.

ulysses
Anonymous
August 16, 2005 8:02:14 AM

Archived from groups: rec.audio.pro (More info?)

Peter wrote:

> Thanks to all for the replies... seems pretty clear that I would do
> well to sample at 44.1 24 bit for starters... mics, their placement,
> rooms, etc. will have so much more impact that there is where my focus
> needs to be for quite some time I suspect.

This is what I call a successful thread.

The OP waded through a myriad of technical banter,
distilled a 'bottom line' trend and accepted the
most useful and productive conclusion, while allowing
for a closer approach to the 'bleeding edge' in the future.

<I also use 44.1/24 for basic tracking>

good luck
rd
Anonymous
August 16, 2005 11:58:10 AM

Archived from groups: rec.audio.pro (More info?)

"Justin Ulysses Morse" <ulyssesnospam@rollmusic.com> wrote
in message
news:1124181189.2f30baacaa7464dab85e5abe262ff261@teranews
> Don Pearce <donald@pearce.uk.com> wrote:
>
>> Again, what you describe here ignores the fact that
>> oversampling is always used in playback. 16x is quite
>> common, and this means that the analogue anti-alias
>> filter is trivial in the extreme, and need not have any
>> audible effects on the signal. And of course it makes no
>> difference whether or not this happens in real time.

> How can an inaudible 20k brickwall filter be trivial?

Not trivial, but a problem that can be solved and for which
many good approximations exist.

> Regardless of the (up)sample rate. Filters have inband
> artefacts, and steeper filters have more.

Not necessarily. Classic filter theory shows that you can
have, say, a Butterworth filter of a high order, very steep
cutoff, and highly controlled ripple in the passband.

> Has somebody
> developed a perfect filter I haven't heard about?

The most common kind of filter in a high end converter has
linear phase within the passband which extends up to about
95% of the Nyquist frequency. At the Nyquist frequency there
will be 60 to 90+ dB of attenuation. Ripple in the passband
will be like 0.05 dB or less. Since the filter's phase shift
follows the pattern dictated by the linear phase rule, its
phase shift is the same as a simple, short delay.
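A linear-phase FIR of the general character described above can be sketched from scratch with a windowed sinc. This is a textbook construction, not the filter inside any particular converter, and the function names are mine:

```python
import math
import cmath

def windowed_sinc_lowpass(num_taps, cutoff):
    """Linear-phase FIR lowpass: ideal sinc truncated by a Blackman
    window, normalized for exactly unity gain at DC.
    cutoff is in cycles/sample (0.5 = the Nyquist frequency)."""
    m = num_taps - 1
    h = []
    for n in range(num_taps):
        x = n - m / 2  # center the sinc so the taps are symmetric
        ideal = (2 * cutoff if x == 0
                 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x))
        w = (0.42 - 0.5 * math.cos(2 * math.pi * n / m)
                  + 0.08 * math.cos(4 * math.pi * n / m))  # Blackman window
        h.append(ideal * w)
    s = sum(h)
    return [c / s for c in h]

def magnitude(h, f):
    """|H(f)| of the FIR at frequency f (cycles/sample), by direct DTFT."""
    return abs(sum(c * cmath.exp(-2j * math.pi * f * n)
                   for n, c in enumerate(h)))

h = windowed_sinc_lowpass(201, 0.45)  # passband to ~90% of Nyquist
print(20 * math.log10(magnitude(h, 0.0)))  # ~0 dB: unity in the passband
print(20 * math.log10(magnitude(h, 0.5)))  # well below -60 dB at Nyquist
```

Because the taps are symmetric, the phase response is exactly linear — the "simple, short delay" of the post — while the Blackman window keeps passband ripple tiny and pushes the stopband down past 70 dB.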

> Are you claiming that a 6dB/Octave filter with a corner
> at
> 44kHz is no better in the audio band than a 12dB/Octave
> filter with a corner at 22kHz?

Those would be very crude filters by modern digital audio
standards. So crude as to be outside the realm of a
reasonable discussion.

> The idea of a non-real-time filter algorithm potentially
> working better than a real-time filter is not my original
> idea.

The need for such a thing is based on the availability of
processing speed and processing components. Being that this
is the 21st century, from an audio perspective both speed
and complexity are more than readily available to do
just about anything that needs to be done in a filter and
then some. For example, downsampling done right implies some
very nice brick wall filtering, but even with quality set to
the highest, Audition/CE downsamples for me in about 1/5 of
real time.

> There was a lot of talk about that maybe 5 years
> ago, and I figured at some point the hardware would
> become powerful enough to do whatever number-crunching
> needed to be done. Has that day arrived? I don't know.

I think it has come and passed.
Anonymous
August 16, 2005 1:47:59 PM

Archived from groups: rec.audio.pro (More info?)

On Tue, 16 Aug 2005 03:33:08 -0500, Justin Ulysses Morse wrote:

> Don Pearce <donald@pearce.uk.com> wrote:
>
>> Again, what you describe here ignores the fact that oversampling is always
>> used in playback. 16x is quite common, and this means that the analogue
>> anti-alias filter is trivial in the extreme, and need not have any audible
>> effects on the signal. And of course it makes no difference whether or not
>> this happens in real time.
>
> How can an inaudible 20k brickwall filter be trivial? Regardless of
> the (up)sample rate. Filters have inband artefacts, and steeper
> filters have more. Has somebody developed a perfect filter I haven't
> heard about? Are you claiming that a 6dB/Octave filter with a corner
> at 44kHz is no better in the audio band than a 12dB/Octave filter with
> a corner at 22kHz?
>
> The idea of a non-real-time filter algorithm potentially working better
> than a real-time filter is not my original idea. There was a lot of
> talk about that maybe 5 years ago, and I figured at some point the
> hardware would become powerful enough to do whatever number-crunching
> needed to be done. Has that day arrived? I don't know.
>
> ulysses

With upsampling the filter is not steep, nor need it be at 20kHz - all
it is required to do is remove the alias artifacts from the upsampled
clock. It is indeed a trivial filter. The hard stuff is done digitally -
where it is easy.

d
Anonymous
August 16, 2005 1:49:00 PM

Archived from groups: rec.audio.pro (More info?)

On Tue, 16 Aug 2005 09:47:59 +0100, Don Pearce wrote:

> On Tue, 16 Aug 2005 03:33:08 -0500, Justin Ulysses Morse wrote:
>
>> Don Pearce <donald@pearce.uk.com> wrote:
>>
>>> Again, what you describe here ignores the fact that oversampling is always
>>> used in playback. 16x is quite common, and this means that the analogue
>>> anti-alias filter is trivial in the extreme, and need not have any audible
>>> effects on the signal. And of course it makes no difference whether or not
>>> this happens in real time.
>>
>> How can an inaudible 20k brickwall filter be trivial? Regardless of
>> the (up)sample rate. Filters have inband artefacts, and steeper
>> filters have more. Has somebody developed a perfect filter I haven't
>> heard about? Are you claiming that a 6dB/Octave filter with a corner
>> at 44kHz is no better in the audio band than a 12dB/Octave filter with
>> a corner at 22kHz?
>>
>> The idea of a non-real-time filter algorithm potentially working better
>> than a real-time filter is not my original idea. There was a lot of
>> talk about that maybe 5 years ago, and I figured at some point the
>> hardware would become powerful enough to do whatever number-crunching
>> needed to be done. Has that day arrived? I don't know.
>>
>> ulysses
>
> With upsampling the filter is not steep, and nor need it be at 20kHz - all
> it is required to do is remove the alias artifacts from the upsampled
> clock. It is indeed a trivial filter. The hard stuff is done digitally -
> where it is easy.
>
> d

Sorry - make that oversampling.

d
September 21, 2010 9:43:33 AM

What I do is record at 88.2 with Pro Tools HD and all Apogee converters.
I also use the Grimm clock, which can put out multiple clocks.
When I've mixed a session I go into my second PSX 100's analog in (using the 44.1 clock out of the Grimm), then go to my HHB CD recorder, also clocked by the Grimm at 44.1.
I use the PSX 100's UV22 to go to CD and the result is fabulous.
I love to record at 88.2 because the transients are recorded with more air!