virtual surround intended to be used on stereo speakers (not headphones) relies on bouncing the soundwave off a wall.
since the sound from most television speakers hardly bounces off a wall.. they rely on stereo widening instead.
it's possible to try and set the distance between the television speaker and the wall.. to forcefully emphasize a wall reflection, but generally you have to fill the room with soundwaves.
to be a bit more technical, you throw out a sound signal that has squared reflection coordinates.
(squared as in math)
that way, when the soundwave does eventually bounce into a wall, it's all set to make its way through the airspace.
if you throw out an audio signal designed for one foot of distance before it bounces.. and the tail is set to one foot so that the head of the soundwave can pass through without any cancellations.. you can then fill a room of whatever size.
the chance of it working from one size room to the next increases, as long as the walls are separated in one-foot increments.
it's possible that an uneven measurement will cause soundwave cancellations.. but it's generally advisable to use average room sizes from the typical layouts of living rooms already built.
it helps to keep the sound processing as small as possible (again the one foot increments)
that way you have a better chance of success in a room that is 8ft x 12ft
or 10ft x 16ft
the idea is that if the sound processing is set for one foot, all walls separated in one-foot increments will see the benefit.
that is a brief explanation of the most advanced (state of the art) virtual surround that can be had.
usually the virtual surround is a simple stereo widener that allows for greater panning from left to right (or right to left)... with the option to keep the sound dull, so it is perceived as the center channel.
stereo widening generally covers all the space directly in front of your ear opening, all the way to the front of your face, and anything in between.
so it's not technically accurate to say virtual 'surround'
because what the sound processing is trying to mimic is simply a center channel with a front left and front right speaker.
state of the art in acoustics has made it possible to be anywhere in front of the television and hear quite solid 3D surround.
no matter where you walk in front of the television, as long as both ears are set to receive audio from the two speakers, you can hear front/side/behind you.
but it is really a challenging task to trick the mind into believing there is a sound coming from behind you when the speaker is 6ft - 8ft in front of you.
there is a lot of air in the room and distance from the speaker that need to be compensated for.
because if you can hear the air/distance.. it will subtract from the effect.
virtual surround for headphones is so much easier to understand.
you use a mannequin (dummy) head with ears and microphones inside the ears.
those microphones pick up audio exactly like a human ear does (give or take earhole size)
and what this does is allow the microphones to pick up the soundwaves from the source of the audio and record it.
as the sound source starts to go behind you.. your ear gets in the way between your ear's microphone and the audio source.
the fact that the ear starts to get in the way is why the sound changes.
our brain has been listening since we were babies.. and our brains have been trained to properly comprehend the change in sound so we know exactly where the sound is coming from.
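to put a rough number on that change in sound: one of the cues the brain uses is the tiny arrival-time difference between the two ears. here is a sketch in python using the textbook woodworth approximation.. the head radius and the function name are just illustrative assumptions, not anything from a specific product:

```python
import math

# woodworth's approximation for interaural time difference (ITD).
# a = head radius (m), c = speed of sound (m/s), angle in radians
# measured from straight ahead.
def itd_seconds(angle_rad, head_radius=0.0875, speed_of_sound=343.0):
    return (head_radius / speed_of_sound) * (angle_rad + math.sin(angle_rad))

# a source straight ahead arrives at both ears at the same time
front = itd_seconds(0.0)            # 0.0 s
# a source 90 degrees to the side arrives ~0.65 ms later at the far ear
side = itd_seconds(math.pi / 2)
```

roughly two thirds of a millisecond at 90 degrees.. a tiny difference, but it is one of the things the brain has been trained on since birth.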
the reason why the sound is different from being directly in front of you and directly to the side of you is because of the 'tragus'
the tragus is the part of the ear that kinda covers the ear canal.. you will see people with an earring in this part of the ear.
you can see a solid chart of the ear and all of its different names that create the ear at this link: http://www.virtualmedicalcentre.com/uploads/VMC/Treatme...
there is really nothing 'virtual' about it except that the sound sources are generally not accurate with today's video games and movies.
(well.. maybe today's movies are better, but yesterday's movies and video games weren't even close)
what has been released to the public is not the full result of audio and all the work that has been done to perfect it.
if you use the dummy head with microphones to record a demo much like the 'barbershop chair' audio clip.. the results played back through a pair of headphones will be identical and thus realistic, as well as immersive.
i'm quite sure there are audio labs that have tried to re-train the human brain's natural ability to hear and pinpoint the audio source.. simply for the sake of experimentation.
if you are using a dummy head with microphones to record the acting, the recording should be completely identical and shouldn't have any processing done to it from the time it gets recorded to the time it gets played back.
and based on this situation, the only thing that could ruin the positional audio would be inferior microphones.
the world leaders were wise to not release all of the positional audio capabilities at the very start of the craze.. because people would have had heart attacks and/or strokes .. potentially causing death.
audio has such a strong hold on most people.
it could have led to mental disorders .. personality disorders .. or simply not going to work or raising your children because you are highly captivated by the new technology.
these are, of course, the extreme cases.
and by slowly releasing the correct audio positions, we have overcome these extreme cases as best as possible.
people not going to work is one thing.. but suing the government because of death can get expensive, is embarrassing, and would ultimately show a lack of responsibility.
you should know that the work has been finalized for quite a long time.
some evidence of this would be 'dolby pro logic'
pro logic is designed to make two stereo speakers (with or without a center channel speaker) reproduce a quadraphonic (four speaker) setup.
it relies on bouncing soundwaves off of the walls.
its all about reverb really.
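the core of that kind of matrix surround can be sketched with a simple sum/difference decode. this is only the passive part of the idea.. a real pro logic decoder adds a 90-degree phase shift, a surround delay, and active steering, so treat the function below as an illustrative assumption rather than the actual dolby algorithm:

```python
import numpy as np

# simplified passive matrix decode: center comes from the sum of the
# two channels, surround comes from the difference. anything panned
# dead center cancels out of the surround channel entirely.
def matrix_decode(left, right):
    center = (left + right) / np.sqrt(2)
    surround = (left - right) / np.sqrt(2)
    return center, surround

# demo: a mono signal fed equally to both channels
t = np.linspace(0, 1, 1000)
mono = np.sin(2 * np.pi * 5 * t)
center, surround = matrix_decode(mono, mono)
# surround is all zeros, center carries the full signal
```

that cancellation is why dialogue mixed to the center stays out of the rear speakers in a matrix system.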
humans have been fascinated with audio all the way back to when the colosseums were built.
but before that, they used hallways of castles.
picking and choosing which ones were better suited for audio clarity throughout a wide listening pattern.
this made announcements go faster/easier.
it made listening to a singer more pleasurable.
and overall created a stage for entertainment.
now let me get into the technical aspect of stereo surround sound.
we go back to the dummy head with microphones.
and then we take a frequency response by recording a sine sweep.
a sine sweep is a piece of audio that plays from a low frequency all the way up to 20,000hz (or higher)
that sine sweep recording is like an equalizer with all of the knobs already set for us.
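to make the sweep idea concrete, here is a rough python sketch that generates a linear sine sweep.. the sample rate, duration, and frequency range are arbitrary choices for the example:

```python
import numpy as np

# generate a linear sine sweep (chirp) from 20 hz to 20 khz.
def sine_sweep(f_start=20.0, f_end=20000.0, duration=2.0, fs=48000):
    t = np.arange(int(duration * fs)) / fs
    # the instantaneous frequency rises linearly, so the phase is the
    # integral of frequency over time:
    # f_start*t + (f_end - f_start) * t^2 / (2 * duration)
    phase = 2 * np.pi * (f_start * t + (f_end - f_start) * t**2 / (2 * duration))
    return np.sin(phase)

sweep = sine_sweep()  # 2 seconds of audio at 48 khz = 96000 samples
```

you play this through the speaker, record it at the dummy head, and the recording tells you how every frequency was changed along the way.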
if you want 7.1 surround sound, you record a frequency response from seven different locations.
you put your headphones on and select which speaker you want to hear the audio from.
which speaker is selected determines which equalizer setting is used.
this is a very crude method though, because generally equalizers don't have enough knobs to fully recreate a different speaker source.
it's a problem because there is not enough resolution .. not enough equalizer knobs .. to create a perfect representation of what the microphone in the dummy head recorded.
going to digital with the help of computers has really helped.
because you can use an equalizer that has 20,000 knobs and set all of those knobs easier than twisting them by hand.
back in the day.. surround sound relied on the fact that there is an actual speaker placed behind you.
they would use the fact that the speaker is physically behind you to play back the audio that is supposed to be behind you.
there was no special processing done.
the rear audio was simply mapped to the rear speaker, and it sounded just like the normal audio coming from the front speakers.
but what can be done to make the sounds more unique is to apply the equalizer setting associated with sounds coming from behind you to the audio that is coming from the rear speakers.
now, not only is the speaker physically behind you, it is also playing a processed/equalized version to fully represent audio coming from behind you.
if it isn't done this way.. you have to make certain that your rear soundstage is optimized with distance and timing calibration.
in fact, the entire soundfield has to be calibrated so you get the effect of being surrounded by audio.
otherwise it will sound like there is audio playing behind you, and it will sound obnoxious as if the rear speaker is louder than the rest of the speakers.
you wont get the feeling of realism because the amplitude levels arent accurate enough to trick your brain into thinking you are in an environment.
one technique that sound studios have been using lately is inserting a frequency response into their sound system to recreate an entirely different room.
this sorta thing is great for sound mastering.
you can be inside a closet and shape the audio so it sounds best in a typical sized living room.
but what has more benefit would be optimizing the audio for the typical sized movie theater.
you can send movies with alternate audio tracks.. one optimized for theater number one (usually the biggest room/screen designed for opening day)
and another audio track optimized for the other smaller rooms.
i have been in the theaters recently and they sound obnoxious.
the audio is relying on the simple fact that the speaker is physically where it's supposed to be.
then everything is turned up loud so that it fills the room with audio.
it is so loud, in fact, that the side speakers are overtaken by the front speakers.. and the rear speakers are struggling to push any sound into the airspace.
and that means if you are sitting towards the back, the rear surround speakers dominate the soundfield.
you really need front row seats or something close to the middle to get better positional sound.
if that same system was applying separate equalization to each speaker.. to further represent its functional place in position.. the entire room would benefit from an astounding capability of position.
in fact, if each speaker had separate equalization assigned to it that associates that speaker with where it is supposed to be.. you could move those speakers around quite a bit without hurting the effect of surround sound.
you could actually have two speakers directly next to each other as long as the speaker cone is pointed in the direction that it is intended to represent.
if you had the center channel right next to the front right speaker, you would have to twist the center channel so that it is pointing at the center of the room.
and the same thing can be said for the rear speaker(s) in a 6.1 or 7.1 surround setup.
that speaker that is supposed to be centered on the middle of the wall (behind you) can be placed in the corner of the room, as long as the speaker cone is pointed towards the listening position.
you would obviously have to adjust the distance so that the audio arrives at the listening position on time.. but it can be done, and it can help with room design/speaker placement.
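that distance adjustment comes down to the speed of sound. a minimal sketch, assuming roughly 343 m/s and made-up distances, of how much delay the closer speaker needs so everything arrives together:

```python
# time-align a speaker that sits closer than the reference distance:
# delay its feed so its sound arrives with the others. the numbers
# below are illustrative, not from any calibration standard.
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def alignment_delay_ms(reference_m, actual_m):
    # positive result: how long to delay the closer speaker
    return (reference_m - actual_m) / SPEED_OF_SOUND * 1000.0

# a rear speaker moved into a corner, 1 m closer than the 3 m reference
delay = alignment_delay_ms(3.0, 2.0)   # roughly 2.9 ms
```

a few milliseconds doesn't sound like much, but it is exactly the kind of offset that makes a rear speaker stick out of the soundfield.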
it is said that the frequency response is taken for a room, and from the position of the audio source.
those results are transformed into an impulse response.
that impulse response is loaded into a convolver and the convolver applies the impulse response to the audio to get the final result.
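the convolver step is plain convolution. a tiny sketch with a made-up two-tap impulse response (a direct sound plus one reflection) standing in for a measured room response:

```python
import numpy as np

# apply an impulse response to a signal by convolution -- the basic
# operation a convolver performs. a real IR would be thousands of
# samples long; this one is a toy.
def convolve_ir(signal, impulse_response):
    return np.convolve(signal, impulse_response)

signal = np.array([1.0, 0.0, 0.0, 0.0])  # a single click
ir = np.zeros(4)
ir[0] = 1.0   # direct sound
ir[3] = 0.5   # one reflection, 3 samples later, at half amplitude
wet = convolve_ir(signal, ir)
# the output contains the click followed by its reflection
```

swap the toy IR for one measured in a cathedral or a submarine and the same one-line convolution puts your audio in that space.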
and now we are seeing impulse responses from famous cathedrals/concert halls/colesseums.. and even exotic places like a submarine.
you can record seven different frequency responses around the head to be played back on a 7.1 system.
but you should record more positions so that the audio doesn't snap from one speaker to the next.
the snap occurs when there is a gap between one speaker and the next.
you can play more than one position from a speaker.
the more surround positions you record, the higher the resolution.. and the smoother the sound will travel.
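one crude way to smooth the travel between two recorded positions is to crossfade their impulse responses.. the filters below are made-up three-tap examples, and real hrtf interpolation is more involved than a straight blend, so treat this purely as a sketch of the idea:

```python
import numpy as np

# blend two measured position filters so a sound can sit anywhere
# between them instead of snapping from one to the other.
def blend_positions(ir_a, ir_b, fraction):
    # fraction = 0.0 -> purely position A, 1.0 -> purely position B
    return (1.0 - fraction) * ir_a + fraction * ir_b

ir_front = np.array([1.0, 0.2, 0.0])  # toy filter for "front"
ir_side = np.array([0.6, 0.5, 0.3])   # toy filter for "side"
halfway = blend_positions(ir_front, ir_side, 0.5)
```

sweep the fraction from 0 to 1 over time and the sound appears to travel smoothly from one position to the other.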
you can also record from positions like above your head.
the playback doesn't have to be with headphones to benefit from 50 different sound sources.
as long as there are enough speakers in the room to fill in the voids/gaps.. you can map the correct sound source to the closest speaker and enjoy an audio experience so real that it could still give you a heart attack/stroke.
when you are dealing with realism... it helps to have as many surround sound positions as possible so that the movement is smooth.
otherwise you will be listening to something move in 3D space and it sounds choppy like watching a video at 20 frames per second (or a video game at 25fps).
if the audio is too chunky, you are going to lose all sense of realism.
there are 360 degrees that surround your head.. but it isn't necessary to record 360 different positions.
if we were to talk about available memory used to cache these positions.. i would rather have fewer, so that i could include positions above the head.
listening to a helicopter that has the equalization from an above the head sound source would really benefit movies.
video games would benefit even more.
if there was a helicopter above your head and it sounded like it was above your head, you should be able to look down at the ground and hear the helicopter sounds move to the back of your head.
since video games offer the ability to move your vision freely, they would be a much better suited place for positional audio.
because watching a movie, the camera is in a fixed position and all of the audio will be in a fixed position.
in a video game, you can move around and listen to the audio change with you being in control.
these new soundcards have memory on them that can be used to store all of the different surround source positions.
but i don't see any reason why it would be hard to have the rest of the audio sitting in the RAM on the motherboard.
it should be fast enough to transfer the audio to the soundcard without any delay from what is seen on the screen.
headphones can be considered much better for virtual surround because oftentimes a 7.1 surround system in a room isn't fully optimized to create a solid soundfield.
if you can listen to music from all of your speakers at once and not be able to tell where the speakers are, this doesnt apply to you and your surround sound experience will be better than most.
but usually you can easily tell that there is a speaker on the wall or on a speaker stand behind you.
i would suggest that you look into some time-alignment to get the ball rolling if you have such a problem.
because headphones don't have that problem.. and therefore they can reproduce the 3D soundfield with extra precision.
the only problem is, most (if not all) of the surround positions available to the public are inaccurate .. and/or they are missing actual sound points, so they blend two of them together so you don't hear any gaps.
you hear it as one solid piece of audio.. but your brain doesn't have the information to process where it is in the 3D space.
in all honesty, if you recorded a quadraphonic speaker setup (2 in the front and 2 in the rear) you would get a fantastic virtual surround.
but the industry isn't using the rear speakers in their surround schemes.
instead they are using one of the side speakers from a 7.1 setup.
you can see much of what i am talking about in this picture: http://en.wikipedia.org/wiki/File:Sound-10_2.svg
there really isn't anything behind the listener except for the center surround.
and now they are trying to increase the number of surround speakers from 7.1 to 9.1 or 10.2 or even 22.2
all of these try to force a surround sound field by using physical speakers instead of playing state of the art positional audio across the speakers we already have.
by refusing to apply the equalizer settings for each recorded point around the dummy head with microphones, these large speaker setups become necessary, and they will help to some extent.
but if you have ANY two speakers playing the exact same piece of audio, you are filling the room with sound and losing the immersive effect of sound position.
now if you used one of these higher surround sound speaker formats to play audio that has been processed to perfectly represent the point of location taken from a recorded dummy head with microphones.. you would be looking at a situation with very high potential.
but with that many speakers.. that means more calibration and more mapping of the audio to the correct speaker.
that will usually make the equipment cost more.
plus you have to have more amplifiers.. and that will raise the cost.
you only need enough speakers to fill the room evenly with sound.
and if you can do that professionally with a quadraphonic speaker setup.. i don't see why you would need any more speakers, as long as the audio being played through those speakers has the positional audio processed into it.
the more speakers you have, the simpler it will be to calibrate the room for an even soundfield that emits sound from all directions, making the source of audio impossible to find.
using two speakers to fill the entire room and make it impossible to find the source is quite an impossible task.
there are too many reflections of the sound bouncing off the walls that need to be calculated and compensated for.
besides, average speakers would struggle to play back all of the processing needed, because they end up having to play the same thing four or five different times.. all while playing something else.
usually with reverb you have to play the head and then play the tail to cancel out the head.
the more reflections that you are trying to purposefully get, the more times you have to play the head.
and the more times you play the head, the more times you have to play a tail.
the actual audio from the speaker itself isn't even comprehensible anymore.. you have to sit in the room to make sense of what is being played.
having more speakers will allow for more chances to cancel out reflections.
you can use rear speakers dedicated to help cancel out the front speakers.
and that means the front speakers have to do less.
but all of this talk is about reflections and room LFOs and the ability to completely eliminate those LFOs.
its engineering, and it gets complex when you start trying to master it.
but thankfully its all done and in the books.
i mean, if you wanted to use a different pair of headphones.. it should work because the processing is done on the soundcard.
but there might be any number of things that make those headphones work together with the usb soundcard.
that means something as small as impedance matching.. or power/amplifier requirements.
we generally live in a technological economy where things are built to be a perfect match for each other.
an example of that would be, the usb soundcard plays something that sounds colored.. and when played through the headphones that come with the soundcard, the colored sound is compensated for and the final result is perfect.
if you use different headphones, you would hear the colored sound that was being sent to the headphones that the set came with.
the only way for an average person to tell you is if they tried it themselves and found out already.
otherwise you would have to try it on your own.
so if you want to try it on your own, make sure the place you buy it from has a return policy that will allow you to return it if you arent happy with it.
as our intelligence evolves.. it is only natural for us to make things go together as a perfect match.
designing a perfect match creates jobs.. and those jobs are a bit more difficult for the sake of a challenge.
you can't always pick just any capacitor off the shelf and expect it to work perfectly.
instead, you have to hunt down the right component that will do the job.
its kinda like a crossword puzzle for designers.
it makes their job more interesting and fun.
and more complicated so that not just anybody can do it.. which offers a reward for your effort and a benefit for your intelligence.