
Is relative phase audible?

Anonymous
December 10, 2004 2:47:13 AM

Archived from groups: rec.audio.pro

I know I'm hardly the first person to study whether relative phase is
audible :-) Probably not even the first person this week. Nonetheless, I
never actually tried it myself till now. So:

Two samples each containing a signal comprising 220Hz, 440Hz, 660Hz, and
880Hz sines at the same relative levels but with different phase
relationships sound pretty different, to me. (That is, I can reliably tell
them apart in blind randomized trials.)
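For anyone who wants to try this at home, signals of this kind can be approximated with a short pure-Python sketch. The phase sets below are illustrative guesses, not necessarily the ones used in the posted .wav files:

```python
import math
import struct
import wave

SR = 44100                      # sample rate, Hz
FREQS = [220, 440, 660, 880]    # the four components described above

def tone_sum(phases, n=SR):
    """One second of equal-amplitude sines with the given start phases (radians)."""
    return [sum(math.sin(2 * math.pi * f * t / SR + p)
                for f, p in zip(FREQS, phases)) / len(FREQS)
            for t in range(n)]

def write_wav(path, samples):
    """Write mono 16-bit PCM."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32000)) for s in samples))

# Same component levels, different phase relationships (illustrative):
write_wav("inphase.wav", tone_sum([0, 0, 0, 0]))
write_wav("shifted.wav", tone_sum([0, math.pi / 2, 0, math.pi / 2]))
```

Playing the two files back to back (or shuffling them for a blind trial) reproduces the kind of comparison described above.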

Thought a few others might be interested to take a listen. So, I put the
.wav files and a bit of discussion up on my web page, at
http://www.cafewalter.com/cafewalter/signals/phase.htm.

By the way, does anyone know offhand whether MP3 encoders preserve relative
phase?

-walter

Anonymous
December 10, 2004 9:16:54 AM

Walter,

I think your experiment is a good one, but any conclusions need to be
carefully drawn for at least two reasons that I can think of:

[a] It's crucial to define exactly what question is being answered.
Psychoacoustics describes (and the anatomy of human hearing supports)
an ability to hear relative phase in the sense that you're using the
term, but only below 1500 Hz or so. Above that range the ability
disappears, but all your test signals were well below that point. So
let's beware the false dichotomy: "we can hear relative phase" / "we
cannot hear relative phase" when it's not quite so simple.

[b] When you vary the phase relationships among signal components, you
alter the peak levels of the composite signal (sometimes greatly) even
though the effective values remain the same. Some of your test
equipment may well behave differently given these changes in peak
level--a power amplifier or a recorder used in the experiment may
produce audibly higher or lower distortion levels, for example.
Listeners may well respond differently to these differences in peak
level.

Of course you can't be blamed for the impossibility of keeping both the
effective loudness and the peak levels constant while altering the
phase relationships. This point can indeed be considered a valid reason
to maintain relative phase relationships carefully. But this
uncontrolled variable limits the conclusions that can fairly be drawn
from any such experiment.

--best regards
Anonymous
December 10, 2004 11:00:33 AM

On Thu, 9 Dec 2004 23:47:13 -0800, "Walter Harley"
<walterh@cafewalterNOSPAM.com> wrote:

>I know I'm hardly the first person to study whether relative phase is
>audible :-) Probably not even the first person this week. Nonetheless, I
>never actually tried it myself till now. So:
>
>Two samples each containing a signal comprising 220Hz, 440Hz, 660Hz, and
>880Hz sines at the same relative levels but with different phase
>relationships sound pretty different, to me. (That is, I can reliably tell
>them apart in blind randomized trials.)
>
>Thought a few others might be interested to take a listen. So, I put the
>.wav files and a bit of discussion up on my web page, at
>http://www.cafewalter.com/cafewalter/signals/phase.htm.
>
>By the way, does anyone know offhand whether MP3 encoders preserve relative
>phase?
>
> -walter
>

They do indeed sound very different. The one you call inphase has an
"aaaaahhh" sound, and the inverted an "ohhhhhh" sound. Roughly-

Saving them as MP3 preserves the phase information very nicely.

d

Pearce Consulting
http://www.pearce.uk.com
December 10, 2004 12:21:21 PM

If you combine the fundamental with its harmonics and vary the phase of
the harmonics, the peak amplitude of the waveform will change. If you
are not careful with scaling etc., the peaks can be clipped. It is
because of this distortion and other non-linearities that you might be
able to tell. If the playback system (and your ears) were linear, you
should not be able to perceive a change in the phase of the harmonics
relative to the fundamental.

Mark
Anonymous
December 10, 2004 12:23:40 PM

In article <kOWdnUwiosTiySTcRVn-jA@speakeasy.net> walterh@cafewalterNOSPAM.com writes:

> I know I'm hardly the first person to study whether relative phase is
> audible :-)

Certainly not, since, by definition, phase IS relative. There is no
such thing as absolute phase.

> Two samples each containing a signal comprising 220Hz, 440Hz, 660Hz, and
> 880Hz sines at the same relative levels but with different phase
> relationships sound pretty different, to me. (That is, I can reliably tell
> them apart in blind randomized trials.)

That's not at all surprising. By changing the phase relationship,
you're changing the shape of the complex waveform. Certainly that
should make a difference. With the right (or wrong) phase
relationship, certain frequencies could be completely cancelled.

> By the way, does anyone know offhand whether MP3 encoders preserve relative
> phase?

I don't know that anyone has ever specified this, but considering how
they work, I would think not.



--
I'm really Mike Rivers (mrivers@d-and-d.com)
However, until the spam goes away or Hell freezes over,
lots of IP addresses are blocked from this system. If
you e-mail me and it bounces, use your secret decoder ring
and reach me here: double-m-eleven-double-zero at yahoo
Anonymous
December 10, 2004 12:23:41 PM

>> I know I'm hardly the first person to study whether relative phase
>> is audible :-)


>Certainly not, since, by definition, phase IS relative. There is no
>such thing as absolute phase.

I think it depends on the definition. To me "absolute phase" is often the
term used when someone really means "polarity". But really, polarity is
also a relative term, requiring some reference, i.e. "point of origin"
(such as "positive polarity is a positive voltage on pin 2 and a
positive excursion of the woofer").

But it has been determined that "absolute phase" can be heard, if the
original waveform is asymmetrical, such as from a single-reed instrument
(sax, clarinet, etc.), human voice, drums and string instruments. In
other words, if the kick drum produces a compression of the local
volume of air on a hit, then the woofer reproducing it should also
create an increase in pressure, i.e. it is in "absolute phase" with the
point of origin.

Which brought me to a thought I had yesterday... many engineers use two
mics on the toms and snare of the drum kit. And to have the "absolute
phase" match for these two mics, they flip the bottom mic out of
polarity. However, I think it would be better to flip the *top* mic out
of polarity, since when the skin is first hit, it goes *down* on the
heads, thus pulling the diaphragm of the top mic out and pushing the
diaphragm of the bottom mic in. Thus to get a positive excursion at the
speaker, the bottom mic should be used as the reference, with the top
mic "flipped" to match.

Undoubtedly, there are engineers who are already doing this, and maybe
I've exposed a "secret". Sorry about that...
Karl Winkler
Lectrosonics, Inc.
http://www.lectrosonics.com
Anonymous
December 10, 2004 1:58:59 PM

Mike, this is indeed a good point you are making. In the "for my
method" camp I would say that by presenting a positive waveform to the
listener, it would create an "impact" that is associated with the
snare. But what you say is true, and really, what would be needed is
something that recreates a realistic impression of *height* as well as
L-R information. And at this point, no one seems to be working on this
principle. Clearly, that's what Ambisonics was/is doing... but none of
the current 5.1, 7.1 etc. systems seem to take this into account.

I'm personally a huge fan of minimal drum miking when it can be done
right. In fact, for several years when I was touring with the Air Force
jazz big band, I used 3 mics for the drums: two overheads and a kick
mic. I would often get comments about how "realistic" the overall sound
was. My goal was to maintain the impact of the drums, and the natural
relationships between the different sources in the kit, rather than
trying to isolate and present each source.
Karl Winkler
Lectrosonics, Inc.
http://www.lectrosonics.com
Anonymous
December 10, 2004 2:19:56 PM

"Karl Winkler" <karlwinkler66@yahoo.com> wrote in message
news:1102698432.554387.270860@f14g2000cwb.googlegroups.com...
>
> Which brought me to a thought I had yesterday... many engineers use two
> mics on the toms and snare of the drum kit. And to have the "absolute
> phase" match for these two mics, they flip the bottom mic out of
> polarity. However, I think it would be better to flip the *top* mic out
> of polarity, since when the skin is first hit, it goes *down* on the
> heads, thus pulling the diaphragm of the top mic out and pushing the
> diaphragm of the bottom mic in. Thus to get a positive excursion at the
> speaker, the bottom mic should be used as the reference, with the top
> mic "flipped" to match.
>
> Karl Winkler
> Lectrosonics, Inc.
> http://www.lectrosonics.com
>

But why? When you sit and listen to a drummer play, you are at the side of
the snare. So, micing a snare the way you suggest (or even the other way)
would produce something other than what is heard in the room. That might be
why minimalist micing of drums is preferred by many... more realistic.

Mike
Anonymous
December 10, 2004 3:05:22 PM

On Thu, 9 Dec 2004 23:47:13 -0800, "Walter Harley"
<walterh@cafewalterNOSPAM.com> wrote:

>Two samples each containing a signal comprising 220Hz, 440Hz, 660Hz, and
>880Hz sines at the same relative levels but with different phase
>relationships sound pretty different, to me. (That is, I can reliably tell
>them apart in blind randomized trials.) <snip>

This has been known for years. The Hammond "organ", first sold in
1935, uses additive synthesis in a failed attempt to recreate the
sound of various organ stops. Due to the construction of the
tonewheel generator, all the tonewheels are in a different phase
relationship every time the organ starts up, since all the wheels are
clutch driven and slip slightly upon startup. The combination you
cite above would be equivalent to A below Middle C with the 8', 4', 2
2/3' and 2' drawbars pulled out. However, there's a catch here, since
Hammonds are roughly tuned (not exactly) to Equal Temperament, so the
2 2/3' pitch would be slightly flat of Just Temperament, and thus,
not exactly 660 Hz. In this case, the 2 2/3' pitch would be 659.255 Hz
in ET, and Hammonds "stretch and shrink" ET just a tad here and there
because of the limitations of the mathematics of the tonewheel
generator. In any event, there'll be some beating from this,
regardless of phase.

Every time the organ is started up, this combination will sound
different to the ear. The same goes when you use tones derived
from a top octave generator, or a bank of free-running oscillators
locked with a PLL; vary the phase, and the tone will sound slightly
different here and there.

So, to answer your question, yes, phase angle can change the timbre of
harmonically complex tones. How much is a matter of conjecture.
Tests done by the Allen Organ Company showed that relative phase is
far less a determining factor in timbre "footprint" than is relative
amplitude of harmonics, but, contrary to what many had said in the
past, it IS discernible. However, as Messrs. Fletcher and Munson
learned at Bell Telephone Labs in the '20s, pitch recognition up above
the midrange area, say above 1800 Hz, becomes less accurate as
frequency increases.

Thus, it can be argued quite well that changes in phase angle that
mostly affect the top end, such as we see in digital PCM, do not
materially affect the tonality of the sound, but rather become mostly
indecipherable to the human ear. Again, amplitude is the prime
consideration, with phase angle ranking way down the list. At
frequencies above about 8 KHz, pitch recognition in most people goes
away, anyway, so phase is totally irrelevant UNLESS the change in
phase angle changes the difference products of IM distortion. THEN
you open a whole new kettle of fish!

dBdB
Anonymous
December 10, 2004 3:26:25 PM

Mark <makolber@yahoo.com> wrote:
>If you combine the fundamental with its harmonics and vary the phase of
>the harmonics, the peak amplitude of the waveform will change. If you
>are not careful with scaling etc., the peaks can be clipped. It is
>because of this distortion and other non-linearities that you might be
>able to tell. If the playback system (and your ears) were linear, you
>should not be able to perceive a change in the phase of the harmonics
>relative to the fundamental.

What the original poster is measuring is the audibility of group delay.
When someone says "relative phase" I figure they are talking about phase
differences between channels.

There is some good research on the audibility of group delay out there.
Including Koray Oczam's paper, AES preprint 5740.
--scott

--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Anonymous
December 10, 2004 9:17:08 PM

"Michael Putrino" <putrino@juno.com> wrote in message news:cpclru$88f$1@news01.intel.com...
>
> "Karl Winkler" <karlwinkler66@yahoo.com> wrote in message
> news:1102698432.554387.270860@f14g2000cwb.googlegroups.com...
> >
> > Which brought me to a thought I had yesterday... many engineers use two
> > mics on the toms and snare of the drum kit. And to have the "absolute
> > phase" match for these two mics, they flip the bottom mic out of
> > polarity. However, I think it would be better to flip the *top* mic out
> > of polarity, since when the skin is first hit, it goes *down* on the
> > heads, thus pulling the diaphragm of the top mic out and pushing the
> > diaphragm of the bottom mic in. Thus to get a positive excursion at the
> > speaker, the bottom mic should be used as the reference, with the top
> > mic "flipped" to match.
> >
> > Karl Winkler
> > Lectrosonics, Inc.
> > http://www.lectrosonics.com
> >
>
> But why? When you sit and listen to a drummer play, you are at the side of
> the snare. So, micing a snare the way you suggest (or even the other way)
> would produce something other than what is heard in the room. That might be
> why minimalist micing of drums is prefered by many...more realistic.
>
> Mike


Unfortunately, 'realistic' is very often not do-able because of bad rooms.
Close miking a drum kit allows the producer to at least have a chance at
creating a space for the kit which actually fits a mix, rather than spend the
time figuring out how to eliminate the horrible-sounding room that came with
the minuscule number of tracks available, without destroying the source.

In my experience, minimal miking only works in a special set of circumstances,
which span every variable from the drummer's performance to the room itself.
With a little practice, more sources will allow a good engineer to re-create what
may have been missing from the tracking quality. Sound design is a big part of
putting together the mix... it doesn't have to be something bigger than life and
can still end up sounding "natural". Two or three sources that are crammed
with multiple and possibly unbalanced, awful tones with waaay too much 'room'
are often nigh on impossible to work with if drums are meant to cut through
the mix at all.

Most cases that I have witnessed (in my 30 years of doing this) where
minimal miking was used on the drums were very genre-specific, as in
jazz. But even more so lately, the main reason people purport to be
'minimalist' in technique is that they simply don't have the space, the mics,
the available tracks, or the experience to take a bigger picture to work with.

It may be preferred by many, but in the global scheme of things, in large
studios, that's actually very few.

--
David Morgan (MAMS)
http://www.m-a-m-s DOT com
Morgan Audio Media Service
Dallas, Texas (214) 662-9901
_______________________________________
http://www.artisan-recordingstudio.com
December 10, 2004 11:35:10 PM

DeserTBoB wrote:
> On Thu, 9 Dec 2004 23:47:13 -0800, "Walter Harley"
> <walterh@cafewalterNOSPAM.com> wrote:
>
> >Two samples each containing a signal comprising 220Hz, 440Hz, 660Hz, and
> >880Hz sines at the same relative levels but with different phase
> >relationships sound pretty different, to me. (That is, I can reliably tell
> >them apart in blind randomized trials.) <snip>
>
> This has been known for years. The Hammond "organ", first sold in
> 1935, uses additive synthesis in a failed attempt to recreate the
> sound of various organ stops. Due to the construction of the
> tonewheel generator, all the tonewheels are in a different phase
> relationship every time the organ starts up, since all the wheels are
> clutch driven and slip slightly upon startup. The combination you
> cite above would be equivalent to A below Middle C with the 8', 4', 2
> 2/3' and 2' drawbars pulled out. However, there's a catch here, since
> Hammonds are roughly tuned (not exactly) to Equal Temperament, so the
> 2 2/3' pitch would be slightly flat of Just Temperament, and thus,
> not exactly 660 Hz. In this case, the 2 2/3' pitch would be 659.255 Hz
> in ET, and Hammonds "stretch and shrink" ET just a tad here and there
> because of the limitations of the mathematics of the tonewheel
> generator. In any event, there'll be some beating from this,
> regardless of phase.
>
> Every time the organ is started up, this combination will sound
> different to the ear. The same goes when you use tones derived
> from a top octave generator, or a bank of free-running oscillators
> locked with a PLL; vary the phase, and the tone will sound slightly
> different here and there.
>
This is not the same thing as the OP talked about. The OP talked about
one note and harmonics of that note. What you described are different
notes of an organ, and the fact that when they are not exactly equally
tempered it gives the organ body, which is a good thing. These are two
different things.

Your ear cannot perceive a change in phase between the fundamental and
the harmonics unless there is a non-linearity that distorts the
waveform which then changes the amplitude of additional harmonics.
Mark
Anonymous
December 11, 2004 1:46:41 AM

"David Morgan (MAMS)" wrote:

> Unfortunately, 'realistic' is very often not do-able because of bad rooms.

OT to the subject, but here is my thought on *realistic* sounds and drums.

I don't think Realistic sounding drums is possible since *realistic* is too relative a term for
anything as eclectic as drums. First where do we hear drums? Clubs? Bars? Concerts? Living
Rooms? What constitutes a realistic drum sound then? It can't be in a nice recording studio
room with great acoustics, since very few places that we listen to drums are like that, and
rarely do people listen to drums in those kinds of rooms. But we don't record drums to sound
like the aforementioned locations either (well mostly we don't).

So it appears recording *natural* sounding drums is not about letting them sound natural like
they really do in the majority of rooms we listen to them in, rather it is about eliminating
the bad acoustical drum noise to make room in a recording for some good acoustical drum
noise. Which BTW I think you do point out later in your post. Am I off base in this thinking
though?

--
Nathan

"Imagine if there were no Hypothetical Situations"
Anonymous
December 11, 2004 1:59:20 AM

"David Satz" <DSatz@msn.com> wrote in message
news:1102688214.469577.190220@z14g2000cwz.googlegroups.com...
> Walter,
>
> I think your experiment is a good one, but any conclusions need to be
> carefully drawn for at least two reasons that I can think of: [...]

Both very good points. I do discuss those in my writeup.

-w
Anonymous
December 11, 2004 2:07:49 AM

"Mark" <makolber@yahoo.com> wrote in message
news:1102739709.996765.218310@f14g2000cwb.googlegroups.com...
> Your ear cannot perceive a change in phase between the fundamental and
> the harmonics unless there is a non-linearity that distorts the
> waveform which then changes the amplitude of additional harmonics.

Your ear might not be able to, but I just demonstrated that my ear can. I
do not believe there is any substantive nonlinearity in the system on which
I explored this (described in my writeup).

It is precisely this (mis-)conception which I hoped to address.

Out of interest: Mark, can you hear the difference between the two .wav
files?

-walter
Anonymous
December 11, 2004 2:20:05 AM

"Scott Dorsey" <kludge@panix.com> wrote in message
news:cpcm81$i3h$1@panix2.panix.com...
> What the original poster is measuring is the audibility of group delay.
> When someone says "relative phase" I figure they are talking about phase
> differences between channels.

Thanks for the correction in terminology. I'll update my web page.


> There is some good research on the audibility of group delay out there.
> Including Koray Oczam's paper, AES preprint 5740.

...and thanks for the reference. It was an article in the latest JAES that
motivated me to go explore this (I've always wondered about the schism
between people saying it's inaudible and people complaining about graphic
EQ's screwing up the sound, but never done anything about it). I've just
downloaded Oczam's paper and will check it out.

-walter
Anonymous
December 11, 2004 4:42:01 AM

"Karl Winkler" <karlwinkler66@yahoo.com> wrote in message
news:1102698432.554387.270860@f14g2000cwb.googlegroups.com...
> However, I think it would be better to flip the *top* mic out
> of polarity, since when the skin is first hit, it goes *down* on the
> heads, thus pulling the diaphragm of the top mic out and pushing the
> diaphragm of the bottom mic in. Thus to get a positive excursion at the
> speaker, the bottom mic should be used as the reference, with the top
> mic "flipped" to match.
>
> Undoubtedly, there are engineers who are already doing this, and maybe
> I've exposed a "secret". Sorry about that...

No, I don't think so, anyway... the response of the head is fast enough that
unless you're both top & bottom mic'ing a given drum, the difference is
negligible, relative to the whole kit... unless by doing so you generate
other phase related issues such as how that mic's signal now relates to,
let's say, that of the overheads. Think about it... you've got all these
variables to consider:
1.) Is the mic ONLY picking up the sound generated by the center of the
head, where the stick (assumedly) strikes? No, it's also picking up the
sound from the shell, and from the outer edges of the head (which generate a
wave faster than the center does, since the edges have less distance to
travel on "recoil" than the center of the head does). So it's a pretty
complex sound that a drum mic is picking up, even apart from reflections
from room surfaces.
2.) Is the mic pointed with the capsule directly downward right at the
strike point? Never - except in the case of a mic stuck right in front of a
kick drum beater, perhaps... therefore you've got more "relative" phase
happening than "absolute" phase in every circumstance on each drum.
3.) The mic is also picking up reflections from the floor, and any
surrounding walls - how do these reflections relate to, for example, those
picked up from the overheads if you were to flip the phase on a top snare
mic?
4.) Is the few microseconds of difference in when the sound arrives at the
mic on a single close-mic'ed snare (again assuming top-micing only) if you
flip the polarity, going to make a detectable difference in phase relative
to - let's say - the kick mic, which is normally/often mic'ed so that the
head is in excursion relative to the mic diaphragm when it's "kicked"?
Probably not - what's more likely to happen is that the combination of the
waves generated by the kick shell & the floor & the surrounding wall
surfaces are going to have more of an impact as to whether it sounds more
in-phase or out of phase. Same goes for what the in-phase kick mic picks up
from the snare when the snare is struck (which is normally/often at a much
lower level if the mic is located inside the kick).

There's more, I'm sure, but off the top of my head, that's the stuff that
immediately comes to mind. I've tried messing around with what you mentioned
before, just out of curiosity, and to me it just makes more sense to keep
everything in the same relative polarity, but just maintain awareness of the
normal stuff that can cause phase issues (distances between mics, esp. how
the overheads are set up, dealing with reflections, etc).

Anyway, having said all that; Karl, have you tried that which you mentioned,
and do you prefer it that way?

Neil Henderson
Anonymous
December 11, 2004 4:43:28 AM

"David Satz" <DSatz@msn.com> wrote in message
news:1102688214.469577.190220@z14g2000cwz.googlegroups.com...
> Walter,
>
> I think your experiment is a good one, but any conclusions need to be
> carefully drawn for at least two reasons that I can think of:
>
> [a] It's crucial to define exactly what question is being answered.
> Psychoacoustics describes (and the anatomy of human hearing supports)
> an ability to hear relative phase in the sense that you're using the
> term, but only below 1500 Hz or so. Above that range the ability
> disappears,

Hey David... since the ear has a natural presence peak at around 3k,
wouldn't one be able to detect things in that range even more readily?

Neil Henderson
December 11, 2004 12:22:12 PM

Walter Harley wrote:
> "Mark" <makolber@yahoo.com> wrote in message
> news:1102739709.996765.218310@f14g2000cwb.googlegroups.com...
> > Your ear cannot perceive a change in phase between the fundamental
and
> > the harmonics unless there is a non-linearity that distorts the
> > waveform which then changes the amplitude of additional harmoincs.
>
> Your ear might not be able to, but I just demonstrated that my ear
can. I
> do not believe there is any substantive nonlinearity in the system on
which
> I explored this (described in my writeup).
>
> It is precisely this (mis-)conception which I hoped to address.
>
> Out of interest: Mark, can you hear the difference between the two
..wav
> files?
>
> -walter

Why, no, Walter, I could not hear a difference, but that means nothing.

Have you demonstrated that you hear a difference using a double-blind
test as you describe in the article?

If you wish to address the (mis)conception as you say, then you need to
also verify the validity of the experiment by checking the 2 waveforms
with an oscilloscope and a spectrum analyzer. Please use the scope to
verify that neither waveform is being distorted, and use the SA to
verify that all the harmonics are at the same relative amplitude in
both cases. This will verify that the ONLY thing different is the
phase, that there are no additional harmonics being added, and that
the amplitudes of the ones you put in are unchanged.
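The spectrum-analyzer half of that check can also be done numerically on the sample data. Here is a sketch (a single-bin DFT, run on illustrative phase sets rather than the actual files):

```python
import cmath
import math

SR = 8000
FREQS = [220, 440, 660, 880]

def sig(phases):
    """One second of four equal-amplitude sines with the given start phases."""
    return [sum(math.sin(2 * math.pi * f * t / SR + p)
                for f, p in zip(FREQS, phases)) / 4
            for t in range(SR)]

def component_amplitude(x, f):
    """Amplitude of the f Hz component via a single-bin DFT (valid because
    f spans a whole number of cycles over the len(x)-sample window)."""
    n = len(x)
    z = sum(x[t] * cmath.exp(-2j * math.pi * f * t / SR) for t in range(n))
    return 2 * abs(z) / n

a = sig([0, 0, 0, 0])
b = sig([0, math.pi, 0, math.pi])
for f in FREQS:
    print(f, round(component_amplitude(a, f), 6), round(component_amplitude(b, f), 6))
```

All eight readings come out at 0.25, confirming that the two signals differ only in phase, not in harmonic content.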


If you can verify the experiment is valid this way, and can still
statistically hear a difference in a double-blind test, then you will
begin to get my attention.

I applaud you for questioning the party line, but you must apply
rigorous checks or else it's cold fusion.


Mark
December 11, 2004 12:41:41 PM

Walter,

I just tried your experiment in my lab in a different way. I used 2
audio oscillators, set 1 to 440 and the other to 880. I summed them
and looked at the combination waveform on the scope and listened on my
monitors. Since the 880 is not exactly 2x the 440, the relative phase
drifts through slowly. To my surprise, I could hear a difference as
the waveform changed. But then I thought about it and realized that my 440
generator (and yours) is not perfect and generates some 880. This 880
combines with the 880 that I added from the other generator, and as the
phase relationship changes, the AMPLITUDE of the 880 changes. This in
fact is what it sounded like. So again, I believe you need to verify
your experimental setup to ensure that ALL the harmonics are at the
same AMPLITUDE in both waveforms. A change in the amplitude of the
harmonics will obviously change the "timbre" of the sound.
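The mechanism described here is ordinary phasor addition: two components at the same frequency sum to an amplitude that depends only on their phase difference. A sketch, with an assumed 1% second-harmonic leakage from the 440 Hz generator:

```python
import math

A = 0.01   # assumed 880 Hz leakage from the 440 Hz oscillator (1%)
B = 1.00   # the deliberately added 880 Hz tone

def resultant(phi):
    """Amplitude of A*sin(wt) + B*sin(wt + phi); same frequency, so the
    components add as phasors: sqrt(A^2 + B^2 + 2*A*B*cos(phi))."""
    return math.sqrt(A * A + B * B + 2 * A * B * math.cos(phi))

for deg in (0, 90, 180):
    print(f"{deg:3d} deg: amplitude {resultant(math.radians(deg)):.4f}")
```

As the relative phase drifts, the 880 Hz amplitude swings between 0.99 and 1.01 (about 0.17 dB); what is heard is an amplitude change, not the phase itself.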
Thanks

Mark
Anonymous
December 11, 2004 1:30:25 PM

Walter Harley <walterh@cafewalterNOSPAM.com> wrote:
>
>...and thanks for the reference. It was an article in the latest JAES that
>motivated me to go explore this (I've always wondered about the schism
>between people saying it's inaudible and people complaining about graphic
>EQ's screwing up the sound, but never done anything about it). I've just
>downloaded Oczam's paper and will check it out.

Well, graphic EQs screw up the sound in enough different ways that the group
delay issue may not even be the most serious one. Just looking at the actual
frequency response of a graphic configured for a gradual rise will make you
feel queasy.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Anonymous
December 11, 2004 10:13:12 PM

Hi Walter,

This is an interesting experiment. However, I feel, as some other
posters do, that there are some sources of error in it.

1) The maximum peak-to-RMS ratio becomes larger as the number of tones
added together is increased. Hence, to keep the maximum peak-to-RMS
ratio to a minimum, I suggest that you use only two tones.

2) Sound generation equipment generates harmonic distortion. By choosing
tones that are harmonically related, you will augment or diminish the
harmonics generated by the equipment, thereby accentuating the effect.
So, I suggest that you not only use two tones but that they not be
related harmonically. Perhaps 220 Hz and pi*220 Hz may work.

3) Putting two or more tones through an amplifier results in intermods.
I suggest that you put one tone in the left channel and the other in
the right channel just to get this out of the picture.
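Point 1) is easy to quantify. A sketch (mine) that measures the peak-to-RMS ratio for n in-phase harmonics of 220 Hz, roughly the worst case:

```python
import math

SR = 8000   # low rate keeps the pure-Python loop quick

def crest_factor(n_tones):
    """Peak/RMS of n equal-amplitude harmonics of 220 Hz, all starting in phase."""
    freqs = [220 * k for k in range(1, n_tones + 1)]
    x = [sum(math.sin(2 * math.pi * f * t / SR) for f in freqs)
         for t in range(SR)]
    rms = math.sqrt(sum(v * v for v in x) / len(x))
    return max(map(abs, x)) / rms

for n in (1, 2, 4, 8):
    print(n, round(crest_factor(n), 2))
```

The ratio climbs steadily with the number of tones (a lone sine is about 1.41), which is why fewer tones leave more headroom before anything clips.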

Btw, I did listen to your experiment on junky cheap powered computer
speakers I bought many years ago and could not hear any difference.
(This being rec.audio.pro, should I run for cover? :)  But I do believe
that on better equipment the difference would be audible. And I did see
that the in-phase set had higher peaks than the out-of-phase set.

Joe
Anonymous
December 12, 2004 1:20:50 AM

> the ear has a natural presence peak at around 3k

That isn't quite what equal loudness curves (e.g. Fletcher/Munson)
represent, but OK--your meaning is clear enough.


> wouldn't one be able to detect things in that range even more readily?

That just doesn't turn out to be true in practice. So you've just
offered a highly intelligent explanation for a fact that doesn't exist.
--best regards
Anonymous
December 12, 2004 8:54:42 AM

"Neil Henderson" <neil.henderson@sbcglobal.netNOSPAM> wrote in message
news:4hsud.28774$fC4.22202@newssvr11.news.prodigy.com
> "David Satz" <DSatz@msn.com> wrote in message
> news:1102688214.469577.190220@z14g2000cwz.googlegroups.com...
>> Walter,
>>
>> I think your experiment is a good one, but any conclusions need to be
>> carefully drawn for at least two reasons that I can think of:
>>
>> [a] It's crucial to define exactly what question is being answered.
>> Psychoacoustics describes (and the anatomy of human hearing supports)
>> an ability to hear relative phase in the sense that you're using the
>> term, but only below 1500 Hz or so. Above that range the ability
>> disappears,
>
> Hey David... since the ear has a natural presence peak at around 3k,
> wouldn't one be able to detect things in that range even more readily?

For what this "Me too" post is worth, David's got it exactly right.

The frequency normally given for the point where the ear is most
sensitive to intensity is more like 4 kHz than 3. The reason usually
given is the ear-canal resonance that you seem to be referring to.

While the ear is most sensitive to intensity at about 4 kHz, it is most
sensitive to other aspects of sound at other frequencies.

For example, the ear is most sensitive to FM distortion when the FM
modulation occurs at very low frequencies, a few Hz. The ear is often
most sensitive to nonlinear distortion when the test signal is at some
frequency other than 4 kHz but a spurious tone generated by the
distortion appears around 4 kHz, and so on.
Anonymous
December 12, 2004 10:03:06 AM

Archived from groups: rec.audio.pro (More info?)

"David Satz" <DSatz@msn.com> wrote in message
news:1102832450.413744.190500@c13g2000cwb.googlegroups.com...
>> the ear has a natural presence peak at around 3k
>
> That isn't quite what equal loudness curves (e.g. Fletcher/Munson)
> represent, but OK--your meaning is clear enough.
>
>
>> wouldn't one be able to detect things in that range even more
> readily?
>
> That just doesn't turn out to be true in practice. So you've just
> offered a highly intelligent explanation for a fact that doesn't exist.
> --best regards

Hmmm... I don't get what you're saying - can you run that by me again? That
was a serious question, BTW - not a flippant remark, if that's what you
thought.

Neil Henderson
Anonymous
December 12, 2004 12:58:49 PM

Archived from groups: rec.audio.pro (More info?)

On Sun, 12 Dec 2004 01:20:50 -0500, David Satz wrote
(in article <1102832450.413744.190500@c13g2000cwb.googlegroups.com>):

>> the ear has a natural presence peak at around 3k
>
> That isn't quite what equal loudness curves (e.g. Fletcher/Munson)
> represent, but OK--your meaning is clear enough.
>
>
>> wouldn't one be able to detect things in that range even more
> readily?
>
> That just doesn't turn out to be true in practice. So you've just
> offered a highly intelligent explanation for a fact that doesn't exist.
> --best regards
>

The Fletcher-Munson curves illustrate that at low levels the 3 kHz area is
dominant. As the SPL rises, however, the peak is not as dominant.

Regards,

Ty Ford



-- Ty Ford's equipment reviews, audio samples, rates and other audiocentric
stuff are at www.tyford.com
Anonymous
December 12, 2004 6:00:10 PM

Archived from groups: rec.audio.pro (More info?)

"Arny Krueger" <arnyk@hotpop.com> wrote in message
news:WdOdnRz20Jl0viHcRVn-ow@comcast.com...

> While the ear is most sensitive to sound based on intensity at about 4 KHz
> , the ear is most sensitive to other aspects of sound at other
> frequencies.

OK, I get what you're saying now. Thanks!

Neil Henderson
December 13, 2004 12:17:55 PM

Archived from groups: rec.audio.pro (More info?)

snipped lots of interesting information about tone wheels etc.

That was interesting, and I thank you for it. However, if I understood
you correctly, the harmonics in a tone-wheel organ are created by
various wheels which may slip at startup and therefore have arbitrary
phase relationships to the fundamental. Fine. But you also said
yourself that these wheel-generated harmonics combine with the "actual"
harmonics created by the fundamental wheel, and as the phase changes,
the AMPLITUDE of the combined harmonics changes. There is no argument
that these amplitude changes are audible. You also said yourself that
the harmonic amplitudes changed by about 0.5 dB. This is a lot of
change for the kind of subtle effects we are talking about here.
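
That amplitude change follows directly from vector addition of two
equal-frequency sinusoids. A quick sketch (the 1.0 and 0.03 amplitudes
are my own illustrative assumption, picked so the swing lands near the
0.5 dB figure quoted above):

```python
import math

def combined_level_db(a, b, phi):
    """Level in dB of the sum of two equal-frequency sines with
    amplitudes a and b and relative phase phi (radians)."""
    amp = math.sqrt(a * a + b * b + 2 * a * b * math.cos(phi))
    return 20 * math.log10(amp)

# Hypothetical case: a unit "true" harmonic plus a wheel-generated
# harmonic roughly 30 dB down; sweep the relative phase 0 -> 180 deg
swing = (combined_level_db(1.0, 0.03, 0.0)
         - combined_level_db(1.0, 0.03, math.pi))
print(round(swing, 2))  # prints 0.52
```

So a secondary harmonic only 30 dB below the primary one is enough to
produce an amplitude swing on the order of 0.5 dB as its phase drifts.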


You also seem to be saying that if the harmonic is off pitch, that is
audible. OK, fine, that's a frequency change, and I certainly agree
that a frequency change can be audible as a pitch shift.

So I stand by my original contention: the phase relationship of the
harmonics to the fundamental is not audible.

Any AMPLITUDE changes to the harmonic are audible as a change in
timbre.

Any FREQUENCY changes to the harmonic can be audible as a pitch shift.

You have not cited a case where the organ generates a harmonic phase
change that was audible without also a change to the amplitude or
frequency of the harmonic.

thanks

Mark
Anonymous
December 13, 2004 6:09:39 PM

Archived from groups: rec.audio.pro (More info?)

DeserTBoB wrote:

> By the way, it's NOT that feature which provides the "warmth" of a
> Hammond organ...it's that 22-H or 122 Leslie over there.

Hey, those'll even warm up a Telecaster on bridge pickup. <g>

--
ha
Anonymous
December 13, 2004 9:40:15 PM

Archived from groups: rec.audio.pro (More info?)

<< it's that 22-H or 122 Leslie over there.>

<Hey, those'll even warm up a Telecaster on bridge pickup. <g> >>



They've done some great stuff to some violin & viola tracks of my
acquaintance, too.

Scott Fraser