How much separation is there in live sound, anyhow?

Archived from groups: rec.audio.pro

This weekend I was out gigging regional band festivals - not the most
glorious work, but it seemed like a fun thing to do, and it pays.

I was provided with a Rode NT4, an SX202, and a CDR-W66 CD recorder.
Simplistic recording like this could get old fast, but the room sounded good
and the people were very nice.

During some of the intermissions I did a few experiments related to what
sort of imaging this setup could produce.

My measurement technique was to start recording, and walk across the stage
under the proscenium, tapping the floor with a walking-stick on the seams
in the resilient flooring. This was a constant spatially-dispersed stimulus.

On-site playback through headphones demonstrated a very clear sound stage
that rather closely duplicated the live sound, from a front-row seating
position.

I brought the recordings home. Using Audition, I found that the channel
unbalance of the stick-taps maxed out at almost exactly 6 dB, which was
achieved at the edges of the proscenium. Anybody else ever try something
like this?
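
For anyone who wants to repeat the measurement outside Audition, here is a
minimal Python sketch of the same channel-comparison idea (the filename and
the tap position are hypothetical):

    # Estimate the channel unbalance of one tap, in dB, from a stereo WAV.
    import numpy as np
    from scipy.io import wavfile

    rate, data = wavfile.read("taps.wav")   # hypothetical file, shape (n, 2)
    left = data[:, 0].astype(float)
    right = data[:, 1].astype(float)

    def rms_db(x):
        # RMS level in dB re an arbitrary reference; the small offset
        # avoids log(0) on silent windows.
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

    # Compare a ~50 ms window around one tap (index chosen by inspection).
    i0, win = 100_000, int(0.05 * rate)
    print(rms_db(left[i0:i0 + win]) - rms_db(right[i0:i0 + win]))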
  1. Archived from groups: rec.audio.pro

    Remember that 6 dB is equivalent to a doubling of distance between two
    in-line sound sources, and also to a doubling of amplitude between the
    channels. It sounds like the mic did very well for this fairly narrow
    sound field. It is likely realistic to what was heard, minus the
    time-of-arrival cues that this XY mic is going to miss, since it is
    designed to avoid phase problems.
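
    As a quick arithmetic check of those equivalences (both follow from the
    20*log10 definition of decibels for amplitudes):

        import math
        # Doubling an amplitude raises its level by 20*log10(2):
        print(20 * math.log10(2))      # ~6.02 dB
        # Doubling the distance to a point source halves the pressure
        # amplitude (inverse-square law), the same ~6 dB change:
        print(20 * math.log10(1 / 2))  # ~-6.02 dB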

    Audition is a great program for taking different kinds of looks at a sound
    file and gaining perspective, isn't it?

    Rich


    "Arny Krueger" <arnyk@hotpop.com> wrote in message
    news:-rKdnWxu8c4l47HfRVn-hw@comcast.com...
    > This weekend I was out gigging regional band festivals - not the most
    > glorious work, but it seemed like a fun thing to do, and it pays.
    >
    > I was provided with a Rode NT4, an SX202, and a CDR-W66 CD recorder.
    > Simplistic recording like this could get old fast, but the room sounded
    > good
    > and the people were very nice.
    >
    > During some of the intermissions I did a few experiments related to what
    > sort of imaging this setup could produce.
    >
    > My measurement technique was to start recording, and walk across the stage
    > under the proscenium, tapping the floor with a walking-stick on the seams
    > in the resilient flooring. This was a constant spatially-dispersed
    > stimulus.
    >
    > On-site playback through headphones demonstrated a very clear sound stage
    > that rather closely duplicated the live sound, from a front-row seating
    > position.
    >
    > I brought the recordings home. Using Audition, I found that the channel
    > unbalance of the stick-taps maxed out at almost exactly 6 dB, which was
    > achieved at the edges of the proscenium. Anybody else ever try something
    > like this?
    >
    >
  2. Archived from groups: rec.audio.pro

    >Yes, this is intensity stereo, so the outputs of the mic should only
    >differ in amplitude.

    Then you should be able to get the same effect with a mono recording,
    just by using the pan control? Can you?
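
    For the level cue alone the answer should be yes; here is a sketch of the
    kind of constant-power pan law a mixer might use, and the inter-channel
    difference it produces (the exact law varies from console to console):

        import numpy as np

        def constant_power_pan(pan):
            # pan in [-1, 1]; the sin/cos law keeps L^2 + R^2 constant.
            theta = (pan + 1) * np.pi / 4
            return np.cos(theta), np.sin(theta)

        for pan in (-1.0, -0.5, 0.0, 0.5, 1.0):
            gl, gr = constant_power_pan(pan)
            # Small offsets avoid log(0) at the full-left/right extremes.
            diff = 20 * np.log10((gr + 1e-12) / (gl + 1e-12))
            print(f"pan={pan:+.1f}  R-L difference = {diff:+.1f} dB")

    What a panned mono source cannot recreate is any residual time or phase
    difference in the real recording, which is the point of the question.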


    >Looking at the left-most tapping sound, when both are normalized, the two
    >waves just about sit perfectly on top of each other. The arrival times match
    >within 5 microseconds or less.


    What about the phase of the signal - not the delay of the envelope, but
    the actual phase of the sine waves? I don't think you can see that on
    the DAW, can you?

    Mark
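
    Per-frequency phase is straightforward to compute offline even when a DAW
    will not display it; a sketch using the cross-spectrum of one analysis
    frame (the filename is hypothetical, as above):

        import numpy as np
        from scipy.io import wavfile

        rate, data = wavfile.read("taps.wav")   # hypothetical stereo file
        n = 1 << 14                             # one ~0.37 s frame at 44.1 kHz
        fl = np.fft.rfft(data[:n, 0].astype(float))
        fr = np.fft.rfft(data[:n, 1].astype(float))
        # Angle of the cross-spectrum = inter-channel phase per FFT bin.
        phase_deg = np.degrees(np.angle(fl * np.conj(fr)))
        freqs = np.fft.rfftfreq(n, d=1 / rate)
        for f, p in zip(freqs[::1024], phase_deg[::1024]):
            print(f"{f:8.0f} Hz  {p:+7.1f} deg")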
  3. Archived from groups: rec.audio.pro

    "Rich" <RichPeet@hotmail.com> wrote in message
    news:iYSdnRQyZpSHFbHfRVn-ig@comcast.com

    > Remember that 6 dB is equivalent to a doubling of distance between
    > two in-line sound sources, and also to a doubling of amplitude between
    > the channels. It sounds like the mic did very well for
    > this fairly narrow sound field.

    Let me expand. The NT4 was pole-mounted maybe a dozen feet above the stage,
    center, front lip of the stage. IOW the mic stand was on the floor just in
    front of the stage. So, the included angle between the sounds at the extreme
    left and right was about 90 degrees. I would not call this a narrow sound
    field.
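
    A rough check of that geometry (the height is from the description above;
    the half-width is illustrative):

        import math
        height = 12.0      # ft, mic above the stage lip
        half_width = 12.0  # ft to each proscenium edge, hypothetical
        half_angle = math.degrees(math.atan(half_width / height))
        print(2 * half_angle)  # ~90 degrees included angle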

    > It is likely realistic to what was
    > heard, minus the time-of-arrival cues that this XY mic is going to
    > miss, since it is designed to avoid phase problems.

    Yes, this is intensity stereo, so the outputs of the mic should only
    differ in amplitude.

    Looking at the left-most tapping sound, when both are normalized, the two
    waves just about sit perfectly on top of each other. The arrival times match
    within 5 microseconds or less.
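
    Worth noting: 5 microseconds is well under one sample period at 44.1 kHz
    (about 22.7 microseconds), so resolving it requires sub-sample
    interpolation of the cross-correlation peak. A sketch of one common way
    to do that, with a parabolic fit (filename and window position
    hypothetical):

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import correlate

        rate, data = wavfile.read("taps.wav")
        seg = slice(100_000, 100_000 + 4096)    # window around one tap
        xc = correlate(data[seg, 0].astype(float),
                       data[seg, 1].astype(float), mode="full")
        k = int(np.argmax(xc))
        # Three-point parabolic interpolation around the integer peak:
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
        lag = (k - len(xc) // 2) + frac         # lag in samples
        print(lag / rate * 1e6, "microseconds")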

    I noticed that some of the cymbal crashes were relatively flat out to
    about 15 kHz.

    > Audition is a great program for taking different kinds of looks at a
    > sound file and gaining perspective, isn't it?

    It's a DAW program, no it's test equipment, no it's a DAW program, no it's
    test equipment... ;-)
  4. Archived from groups: rec.audio.pro

    Yes, Audition does have a phase relation display.

    Not as good as the one that came with my RME Multiface but it does work.

    Rich

    "Mark" <makolber@yahoo.com> wrote in message
    > What about the phase of the signal - not the delay of the envelope, but
    > the actual phase of the sine waves? I don't think you can see that on
    > the DAW, can you?
    >
    > Mark
    >
  5. Archived from groups: rec.audio.pro

    You are (I think) confusing separation with imaging. (A similar fallacy is the
    basis of justifying "discrete" surround recording.)

    If the imaging is correct, the measured electrical separation doesn't matter.
    And vice versa.
  6. Archived from groups: rec.audio.pro

    When an artist paints a canvas, or a camera takes a snapshot, of a
    multi-layered event (depth of field), the result is a composition of a
    foreground subject, things masked behind it, or apparent beside it, in
    layers, and lastly the background or whatever is final.
    But when a cardioid microphone relays its pick-up pattern to a listener,
    live or recorded, it detects the "subject" as well as millions of secondary
    and tertiary.... echoes or ricochets and all the other guff it cannot ever
    discriminate against. The perspective is wrong because of an inverse square
    law.
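
    To put the inverse-square point in numbers (the distances are
    illustrative):

        import math
        # Pressure amplitude falls as 1/r, so a reflection travelling 12 m
        # arrives about 12 dB below a direct path of 3 m:
        direct, reflected = 3.0, 12.0
        print(20 * math.log10(reflected / direct))  # ~12 dB difference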
    Why can't somebody concoct a specially-processed aspect format of audio,
    maybe stereophonically, so that all that ambient but unnecessary detail
    still protrudes as human-perspectively correct and present, and without the
    sterilised treatment of a studio or concert arena?
  7. Archived from groups: rec.audio.pro

    Jim Gregory wrote:

    > When an artist paints a canvas, or a camera takes a snapshot, of a
    > multi-layered event (depth of field), the result is a composition of a
    > foreground subject, things masked behind it, or apparent beside it, in
    > layers, and lastly the background or whatever is final.
    > But when a cardioid microphone relays its pick-up pattern to a listener,
    > live or recorded, it detects the "subject" as well as millions of secondary
    > and tertiary.... echoes or ricochets and all the other guff it cannot ever
    > discriminate against. The perspective is wrong because of an inverse square
    > law.

    Firstly, use two mics, for stereo, and then note that how much of the
    ambience is heard comes down to one's placement of the mics.

    > Why can't somebody concoct a specially-processed aspect format of audio,
    > maybe stereophonically, so that all that ambient but unnecessary detail
    > still protrudes as human-perspectively correct and present, and without the
    > sterilised treatment of a studio or concert arena?

    Investigate how the various stereo recording practices work. Find the
    FAQ.

    http://www.recaudiopro.net

    --
    ha
  8. Archived from groups: rec.audio.pro

    "William Sommerwerck" <williams@nwlink.com> wrote in message
    news:112pvtuanfrvh4c@corp.supernews.com

    >> My measurement technique was to start recording, and walk across the stage
    >> under the proscenium, tapping the floor with a walking-stick on the seams
    >> in the resilient flooring. This was a constant spatially-dispersed stimulus.

    >> On-site playback through headphones demonstrated a very clear sound stage
    >> that rather closely duplicated the live sound, from a front-row seating
    >> position.

    >> I brought the recordings home. Using Audition, I found that the channel
    >> unbalance of the stick-taps maxed out at almost exactly 6 dB, which was
    >> achieved at the edges of the proscenium. Anybody else ever try something
    >> like this?

    > You are (I think) confusing separation with imaging.


    Really? I think they are two different things - separation is in this case
    based on comparison of electrical signals, while imaging is related to
    perception.

    > (A similar fallacy is the basis of justifying "discrete" surround
    > recording.)

    I didn't say that 6 dB is a lot or a little. My multichannel close-miked
    recordings would show lots more separation - maybe 20 dB or so. When I mix
    down I throw lots of that away.

    > If the imaging is correct, the measured electrical separation doesn't
    > matter. And vice versa.

    In intensity stereo, it seems like the relationship between perceived
    imaging or soundstaging and the level ratios of otherwise very similar
    signals should be at least somewhat understandable.
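
    The textbook form of that relationship for coincident (intensity) stereo
    is the stereophonic "law of sines" for amplitude differences; a sketch,
    assuming the usual +/-30 degree loudspeaker pair:

        import math

        def image_angle_deg(level_diff_db, speaker_deg=30.0):
            # sin(image) / sin(speaker) = (gL - gR) / (gL + gR)
            g = 10 ** (level_diff_db / 20)  # left/right gain ratio
            s = math.sin(math.radians(speaker_deg)) * (g - 1) / (g + 1)
            return math.degrees(math.asin(s))

        for d in (0, 3, 6, 12, 20):
            print(f"{d:2d} dB -> image at ~{image_angle_deg(d):.1f} deg")

    On those assumptions a 6 dB inter-channel difference images only about a
    third of the way out to the loudspeaker, which is one way to read the
    measurement that started this thread.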
  9. Archived from groups: rec.audio.pro

    >>> I brought the recordings home. Using Audition, I found that the channel
    >>> unbalance of the stick-taps maxed out at almost exactly 6 dB, which was
    >>> achieved at the edges of the proscenium. Anybody else ever try something
    >>> like this?

    >> You are (I think) confusing separation with imaging.

    > Really? I think they are two different things - separation is in this case
    > based on comparison of electrical signals, while imaging is related to
    > perception.

    Perhaps I was a bit unfair. I agree that you performed an interesting
    experiment, which showed that what you expect and what you measure are not
    always the same.


    >> If the imaging is correct, the measured electrical separation doesn't
    >> matter. And vice versa.

    > In intensity stereo, it seems like the relationship between perceived
    > imaging or soundstaging and ratios of the size of very similar signals
    > should be at least somewhat understandable.

    Agreed. But I suspect that "soundstaging" also depends significantly on signal
    "phasing."

    Yes, I know you said "intensity stereo." But...

    Have you considered looking at the antiphase components in the signal and
    seeing how they relate to the imaging? It would be a really worthwhile
    experiment.
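
    One offline way to gauge antiphase content is a mid/side split (the
    filename is hypothetical, as before):

        import numpy as np
        from scipy.io import wavfile

        rate, data = wavfile.read("taps.wav")
        left = data[:, 0].astype(float)
        right = data[:, 1].astype(float)
        mid, side = (left + right) / 2, (left - right) / 2
        # Strongly negative -> mostly in-phase (intensity) content;
        # near 0 dB or above -> significant antiphase energy.
        print(10 * np.log10(np.sum(side ** 2) / np.sum(mid ** 2)))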
  10. Archived from groups: rec.audio.pro

    On Mon, 07 Mar 2005 22:05:59 +0000, Jim Gregory wrote:

    > When an artist paints a canvas, or a camera takes a snapshot, of a
    > multi-layered event (depth of field), the result is a composition of a
    > foreground subject, things masked behind it, or apparent beside it, in
    > layers, and lastly the background or whatever is final.
    > But when a cardioid microphone relays its pick-up pattern to a listener,
    > live or recorded, it detects the "subject" as well as millions of secondary
    > and tertiary.... echoes or ricochets and all the other guff it cannot ever
    > discriminate against. The perspective is wrong because of an inverse square
    > law.
    > Why can't somebody concoct a specially-processed aspect format of audio,
    > maybe stereophonically, so that all that ambient but unnecessary detail
    > still protrudes as human-perspectively correct and present, and without the
    > sterilised treatment of a studio or concert arena?

    If the recording and the playback system are good enough, your brain can
    discriminate and ignore extraneous sounds in the same way as when you
    listen live. Your ears have a kinda omni pattern anyway, so they pick up
    pretty much everything; it's the brain that decides what to ignore or
    focus on.

    The problem is that all recording and playback equipment, even the really
    expensive stuff, is still not really good enough to reproduce all the
    tiny little cues you need to hear so you can discriminate what to ignore
    and what to focus on. Let alone modelling the shape of the ears, shoulders
    and the angle your head happens to be at.

    I've made binaural recordings with those little OKM in ear mics that get
    some of the way there when I listen back on headphones though.
  11. Archived from groups: rec.audio.pro

    > This weekend I was out gigging regional band festivals - not the most
    > glorious work, but it seemed like a fun thing to do, and it pays.
    >
    > I was provided with a Rode NT4, an SX202, and a CDR-W66 CD recorder.
    > Simplistic recording like this could get old fast, but the room sounded good
    > and the people were very nice.
    >
    > During some of the intermissions I did a few experiments related to what
    > sort of imaging this setup could produce.
    >
    > My measurement technique was to start recording, and walk across the stage
    > under the proscenium, tapping the floor with a walking-stick on the seams
    > in the resilient flooring. This was a constant spatially-dispersed stimulus.
    >
    > On-site playback through headphones demonstrated a very clear sound stage
    > that rather closely duplicated the live sound, from a front-row seating
    > position.
    >
    > I brought the recordings home. Using Audition, I found that the channel
    > unbalance of the stick-taps maxed out at almost exactly 6 dB, which was
    > achieved at the edges of the proscenium. Anybody else ever try something
    > like this?

    Well, Rode did when they mapped the response pattern, which is basically all
    you've done. At 90 degrees off-axis the response is 6 dB less than on-axis,
    just like any typical cardioid pattern.
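
    That figure drops straight out of the first-order cardioid polar
    equation, sensitivity = (1 + cos(angle)) / 2:

        import math
        for deg in (0, 45, 90, 135):
            s = (1 + math.cos(math.radians(deg))) / 2
            print(f"{deg:3d} deg off-axis: {20 * math.log10(s):+.1f} dB")
        # 90 deg -> -6.0 dB, matching the quoted cardioid response.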

    That mic position cries out for ORTF though. I only use coincident XY when
    the sound is mostly SR.
  12. Archived from groups: rec.audio.pro

    "philicorda" <philicorda@localhost.com> wrote in message
    news:pan.2005.03.10.18.46.40.247407@localhost.com...
    > On Mon, 07 Mar 2005 22:05:59 +0000, Jim Gregory wrote:
    >
    > > When an artist paints a canvas, or a camera takes a snapshot, of a
    > > multi-layered event (depth of field), the result is a composition of a
    > > foreground subject, things masked behind it, or apparent beside it, in
    > > layers, and lastly the background or whatever is final.
    > > But when a cardioid microphone relays its pick-up pattern to a listener,
    > > live or recorded, it detects the "subject" as well as millions of secondary
    > > and tertiary.... echoes or ricochets and all the other guff it cannot ever
    > > discriminate against. The perspective is wrong because of an inverse square
    > > law.
    >

    Well, a painter can't avoid applying an interpretive process to the
    representation, which perverts it no less than secondary signals. And could
    some theoretical "vocal cord pick-up" really be more realistic? Many
    instruments have some means of direct signal capture, but often a microphone
    is preferred wherever feasible. Neither is perfect, but mics offer better
    trade-offs, despite the "unwanted guests".

    > > Why can't somebody concoct a specially-processed aspect format of audio,
    > > maybe stereophonically, so that all that ambient but unnecessary detail
    > > still protrudes as human-perspectively correct and present, and without the
    > > sterilised treatment of a studio or concert arena?
    >

    I wouldn't call a concert arena sterile, more like an epidemic.

    > If the recording and the playback system are good enough, your brain can
    > discriminate and ignore extraneous sounds in the same way as when you
    > listen live. Your ears have a kinda omni pattern anyway, so they pick up
    > pretty much everything; it's the brain that decides what to ignore or
    > focus on.
    >
    > The problem is that all recording and playback equipment, even the really
    > expensive stuff, is still not really good enough to reproduce all the
    > tiny little cues you need to hear so you can discriminate what to ignore
    > and what to focus on. Let alone modelling the shape of the ears, shoulders
    > and the angle your head happens to be at.
    >
    > I've made binaural recordings with those little OKM in ear mics that get
    > some of the way there when I listen back on headphones though.
    >

    I would blame the shortcomings of sound reinforcement long before blaming
    the recording/playback processes. At a live show your eyes give you cues
    and head movements provide spatial information that help your brain model
    the acoustic space and better interpret what is heard, which of course is
    rife with imaging discrepancies. I never thought much of live recordings
    until I started doing my own, and even then I liked my recordings better
    because I know the rooms they were recorded in, and can "revisit the room"
    during playback, so to speak.

    If a listener were blindfolded and kept completely still for the entire
    performance, I expect it would be only marginally more appreciable than a
    competent recording with mics in the same position. Similarly, stereo
    reproduction is most effective in only one listening position.
  13. Archived from groups: rec.audio.pro

    On Thu, 10 Mar 2005 01:18:09 -0500, Zigakly wrote:

    >
    > "philicorda" <philicorda@localhost.com> wrote in message
    > news:pan.2005.03.10.18.46.40.247407@localhost.com...
    >> On Mon, 07 Mar 2005 22:05:59 +0000, Jim Gregory wrote:
    >>
    >> > When an artist paints a canvas, or a camera takes a snapshot, of a
    >> > multi-layered event (depth of field), the result is a composition of a
    >> > foreground subject, things masked behind it, or apparent beside it, in
    >> > layers, and lastly the background or whatever is final.
    >> > But when a cardioid microphone relays its pick-up pattern to a listener,
    >> > live or recorded, it detects the "subject" as well as millions of secondary
    >> > and tertiary.... echoes or ricochets and all the other guff it cannot ever
    >> > discriminate against. The perspective is wrong because of an inverse square
    >> > law.
    >>
    >
    > Well, a painter can't avoid applying an interpretive process to the
    > representation, which perverts it no less than secondary signals. And could
    > some theoretical "vocal cord pick-up" really be more realistic? Many
    > instruments have some means of direct signal capture, but often a microphone
    > is preferred wherever feasible. Neither is perfect, but mics offer better
    > trade-offs, despite the "unwanted guests".
    >
    >> > Why can't somebody concoct a specially-processed aspect format of audio,
    >> > maybe stereophonically, so that all that ambient but unnecessary detail
    >> > still protrudes as human-perspectively correct and present, and without the
    >> > sterilised treatment of a studio or concert arena?
    >>
    >
    > I wouldn't call a concert arena sterile, more like an epidemic.
    >
    >> If the recording and the playback system are good enough, your brain can
    >> discriminate and ignore extraneous sounds in the same way as when you
    >> listen live. Your ears have a kinda omni pattern anyway, so they pick up
    >> pretty much everything; it's the brain that decides what to ignore or
    >> focus on.
    >>
    >> The problem is that all recording and playback equipment, even the really
    >> expensive stuff, is still not really good enough to reproduce all the
    >> tiny little cues you need to hear so you can discriminate what to ignore
    >> and what to focus on. Let alone modelling the shape of the ears, shoulders
    >> and the angle your head happens to be at.
    >>
    >> I've made binaural recordings with those little OKM in ear mics that get
    >> some of the way there when I listen back on headphones though.
    >>
    >
    > I would blame the shortcomings of sound reinforcement long before blaming
    > the recording/playback processes. At a live show your eyes give you cues
    > and head movements provide spatial information that help your brain model
    > the acoustic space and better interpret what is heard, which of course is
    > rife with imaging discrepancies. I never thought much of live recordings
    > until I started doing my own, and even then I liked my recordings better
    > because I know the rooms they were recorded in, and can "revisit the room"
    > during playback, so to speak.

    I was thinking more of recordings of acoustic performances. If there is a
    PA, the sound is quite mashed to start with, and getting a direct feed
    from the desk in addition to the mics is quite common.... For me anyway
    the mics are more for atmosphere, at least when the PA rig is big enough
    that there is little of the direct acoustic sound of the player's amps etc
    getting to the audience.

    I agree that head movements and visual cues are important too, but I have
    found with binaural recordings that I can tell the size of the space and
    location of instruments fairly well. At least enough so that once the
    image becomes solid to me, other distracting sounds that I don't want to
    hear in the recording seem to get much quieter.

    It is nice to revisit locations you know with recordings, isn't it? I've
    been archiving DATs recently and found a folk event in a barn I'd done a
    rough recording of last summer, with a pair of omnis hanging from the
    rafters. I went to sleep there sometime in the very early hours, but left
    the recorder going with a two hour tape. Hearing people waking up around
    dawn, the first few notes of a song picked out, and the barn creaking as
    the sun starts heating up the old beams...

    >
    > If a listener were blindfolded and kept completely still for the entire
    > performance, I expect it would be only marginally more appreciable than a
    > competent recording with mics in the same position. Similarly, stereo
    > reproduction is most effective in only one listening position.

    I've not found any recording method that can really fool me into
    thinking there is an unamplified instrument in the same room.
    (Apart from player piano rolls. :)
  14. Archived from groups: rec.audio.pro

    >
    > I've not found any recording method that can really fool me into
    > thinking there is an unamplified instrument in the same room.
    > (Apart from player piano rolls. :)

    That is because no sound system made of paper cones, coils of wire,
    magnets, and wood boxes can sound like an instrument with its own tensions
    and resonances; nor can a paper cone sound like a vibrating steel string or
    a fluttering reed. They are so basically different that I am amazed at how
    much enjoyment those paper cones can deliver.
    George
  15. Archived from groups: rec.audio.pro

    > I've not found any recording method that can really fool me into
    > thinking there is an unamplified instrument in the same room.
    > (Apart from player piano rolls. :)

    Considering how well conventional reproduction does on so wide a variety of
    instruments which cost a shitload more than the stereo, it's hard to
    complain too much, but I see your point. Kinda hard to hope that musicians
    become fully replaceable though; things are bad enough as it is.