
What is wrong in my understanding?

Anonymous
December 14, 2004 4:14:19 PM

Archived from groups: rec.audio.pro

Dear All,

Any light you can shed on this will help me out.

I am working on a module in which I have to mix two (audio/speech) files.
It looks simple: add each pair of samples from the two different audio files
and write the sums into the mixed file.

But here comes the problem: if I simply add the samples of the two files,
the result may overflow the sample range. So I decided to divide each
sample by two before adding, and then write the data into the file.

What I observed is that the resulting mixed WAV file has low volume. That
is obvious: since I am dividing each sample value by two, I am halving the
amplitude.
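
A minimal sketch of that divide-by-two approach (file I/O omitted; the
sample lists and 16-bit range here are illustrative assumptions, not code
from the post):

```python
def mix_halved(a, b):
    """Mix two equal-length lists of integer samples (e.g. 16-bit,
    -32768..32767) by halving each sample before adding."""
    # Halving first guarantees the sum stays inside the original sample
    # range, but it also halves the loudness of each source.
    return [x // 2 + y // 2 for x, y in zip(a, b)]
```

The overflow problem is gone, but so is half the volume, exactly as
described above.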

So I tried another way to mix the audio files.

Let the two signals be A and B respectively, each in the range 0 to 255.

Y = A + B - A * B / 255

where Y is the resultant signal containing both A and B. Merging two
audio streams into a single stream by this method solves the problem of
overflow and information loss to an extent.

If the range of 8-bit signed sampling is taken as -127 to 128:

If both A and B are negative: Y = A + B - (A * B / (-127))
Else: Y = A + B - (A * B / 128)

For an n-bit sampled audio signal:

If both A and B are negative: Y = A + B - (A * B / (-(2^(n-1) - 1)))
Else: Y = A + B - (A * B / 2^(n-1))
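
A sketch of the unsigned 8-bit case of this formula, with integer division
standing in for the "/" (an assumption on my part; the post doesn't say
how the division is performed):

```python
def mix_product_u8(a, b):
    """Mix two unsigned 8-bit samples (0..255) by the formula
    Y = A + B - A*B/255 described above."""
    # The subtracted cross term shrinks the sum just enough that the
    # result never leaves the 0..255 range for in-range inputs.
    return a + b - (a * b) // 255
```

At the extremes the formula saturates rather than overflows:
mixing 255 with 255 yields 255, and mixing 0 with 0 yields 0.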

Applying the above approach, I get good sound quality when mixing two
audio signals. But as I increase the number of files being mixed, I hear
some sort of disturbance (noise): the more files I mix, the more the
disturbance in the mixed file increases.

What is the reason behind this? Is there some underlying hardware
problem, or does the quality of the sound depend on the recording device?

I would like to have your views on this.

Personally, I think it may be due to the following factors:

1: Digital computation error
http://www.filter-solutions.com/quant.html

2: Aggressive growth of the amplitude of the mixed file as the number of
audio files increases. That is, the higher the number of files, the larger
the resultant sample values of the mix become, tending toward the upper
limit (32767) for positive samples and toward -32768 when the samples
being added are negative. (Here I am talking about 16-bit audio data
recorded at an 8 kHz sample rate.)

So is there any other approach by which I can satisfy myself that the
mixed audio data is noise-free? (At most I have to mix 10 audio files.)

One more query: what is the reason behind the distortion when a recording
is made at a low level and we then play back the same file? In my
perception there is distortion between the recorded and the played-back
versions of the same audio file, for which I state my views below.
(Correct me wherever I am wrong.)

Explanation 1 -->

Even if we have a good A/D-D/A converter for recording and playing back
the audio files, distortion still enters the picture. We know that
digital recording is extremely accurate due to its high signal-to-noise
(S/N) ratio. At low levels, digital is actually better than analog, due
to its roughly 90 dB dynamic range. The best we can get from phonograph
records (recording and playing software/devices) is around 60 dB; more
typically it is around 40 dB.

We can hear a range of 120-plus dB. This is why recordings use a lot of
compression (a compressor is an electronic device that quickly turns the
volume up when the music/speech is soft and quickly turns it down when it
is loud).

Here the word "quickly" implies some loss of digital data at both ends
(high and low). Low-level ambient detail is completely stripped by
digitizing when we record at a low level.

So digitizing a low-level signal loses relevant information, which
results in distortion.

Note:
Sound cards use A/D and D/A converters, which depend on the sampling
frequency, and the exact sampling frequency is not guaranteed to be the
same across different sound cards; it may vary slightly. This can also
cause distortion at low levels.

Explanation 2 -->

Now suppose we record the audio data on one system (recording device)
with the volume control set to a very low recording level. When this
recorded audio file is played back on another system at the same low
volume-control setting, without varying anything, it will play the same
without distortion.

But if there is a difference between the volume-control setting at which
it was recorded and the one at which the file is played back, the result
will be some sort of distortion.

Note:

If there is a mismatch between the recording and playback volume-control
settings, there will also be distortion. So for low-level recording and
listening, some distortion will be heard if we play this low-level
recorded file on another system at a very high level.


Explanation 3 -->

Some software and hardware use the concept of normalization in their
algorithms. Some normalizers are basically "volume expanders," and some
are "limiters." They stretch the dynamic range of the material: the quiet
sounds in the original remain quiet, at their original level, while the
level of the loudest sounds is raised to the peak level permitted by the
recording process, and whatever lies in between is raised in level
proportionately (an adaptive increase). This also distorts the original
recorded sound: to hear the soft parts of the audio file we have to
increase the volume, and then all the boosted material is played louder
as well, causing distortion.

Note:
Sound recorded at a low level under this normalization concept can also
be distorted. Very loud music and speech are recorded with
compressor/expander algorithms, which use normalization.


One more thing: what are the lower and upper limits for recording 16-bit
data at an 8 kHz sampling frequency so that we don't get noise between
the recorded and the played-back audio file?

Any light you can shed on this will help me out.
Thanks in advance

Regards
Ranjeet


Anonymous
December 15, 2004 4:26:14 PM


ranjeet.gupta@gmail.com (ranjeet) wrote:

> Let the two signal be A and B respectively, the range is between 0 and 255.
>
> Y = A + B – A * B / 255

What is the term A*B/255 good for? And why is that subtracted from the mix?
Do you use linear quantization?

I use simply Y = A/n + B/n + C/n with n being the number of channels to add.
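
Norbert's formula generalized to any number of channels might look like
this sketch (equal-length float sample lists are my assumption; he does
not specify a sample format):

```python
def mix_average(channels):
    """Mix a list of equal-length sample lists by pre-scaling each
    channel by 1/n, i.e. Y = A/n + B/n + C/n + ..."""
    n = len(channels)
    # Pre-scaling keeps the sum in range at the cost of per-channel
    # level; doing the division in floats avoids cumulative rounding.
    return [sum(samples) / n for samples in zip(*channels)]
```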

Norbert
Anonymous
December 16, 2004 10:12:00 PM


On Wed, 15 Dec 2004 13:26:14 +0100, Norbert Hahn <me@privacy.net>
wrote:

>ranjeet.gupta@gmail.com (ranjeet) wrote:
>
>> Let the two signal be A and B respectively, the range is between 0 and 255.
>>
>> Y = A + B – A * B / 255
>
>What is the term A*B/255 good for? And why is that subtracted from the mix?
>Do you use linear quantization?

It appears the product of the two samples creates a pseudo-random
number (not even that, it makes highly correlated sum and difference
terms!) that when scaled down to the lowest bit of the 8-bit word just
happens to add a reasonable amount of dither "noise." This appears
operational but it's crude. It really doesn't matter if this is
mathematically added or subtracted (or if it were uncorrelated with
the signal it wouldn't matter).

>I use simply Y = A/n + B/n + C/n with n being the number of channels to add.

The equation works (though to pick a nit, I'd do it in fractional
fixed point or floating point, and multiply each term by the
precalculated value 1/n), but the result depends on the number format.
For best quality you have to do this at substantially higher bit depth
than your "target" bit depth, and just before truncating to the target
bit depth, add the appropriate amount, type and spectrum of noise, to
do what's called dithering. If the noise has something other than a
flat spectrum (for audio, it's usually high-pass filtered for the top
octave or two, but this should perhaps be different depending on your
sample rate), this is called "noise shaping."

The full details are beyond a short Usenet post (though perhaps not
a full thread), but the article below on dither should help.

To respond and hopefully answer the original message: if you're
aiming for a final file of 8-bit or 16-bit depth at 8 kHz, record at
the standard CD rate of 16-bit, 44.1 kHz (this is a standard rate for
any soundcard) and use a .wav editor/DAW program (Adobe Audition,
N-Track Studio, etc.) to mix all the files, then convert them to the
final bit depth and sample rate. Even if the files you're trying to
mix are already at 8-bit 8 kHz, using a DAW program will make it a lot
easier than writing your own code.

If you really need to write the code, this link should tell you
what dither is, why you need it and why you need to keep the bit depth
high until you do the final bit reduction to 16 or 8 bits. Click on
articles, then dither:

http://digido.com


>Norbert

-----
http://mindspring.com/~benbradley
Anonymous
December 17, 2004 2:43:21 AM


On Thu, 16 Dec 2004 19:12:00 GMT, Ben Bradley
<ben_nospam_bradley@mindspring.com> wrote:

>On Wed, 15 Dec 2004 13:26:14 +0100, Norbert Hahn <me@privacy.net>
>wrote:
>
>>ranjeet.gupta@gmail.com (ranjeet) wrote:
>>
>>> Let the two signal be A and B respectively, the range is between 0 and 255.
>>>
>>> Y = A + B – A * B / 255
>>
>>What is the term A*B/255 good for? And why is that subtracted from the mix?
>>Do you use linear quantization?
>
> It appears the product of the two samples creates a pseudo-random
>number (not even that, it makes highly correlated sum and difference
>terms!) that when scaled down to the lowest bit of the 8-bit word just
>happens to add a reasonable amount of dither "noise." This appears
>operational but it's crude. It really doesn't matter if this is
>mathematically added or subtracted (or if it were uncorrelated with
>the signal it wouldn't matter).

Thanks for the explanation! I never thought of doing it like that.
A long time ago I programmed a 24-bit DSP for operating on S/PDIF
signals. As S/PDIF inherently uses a 20-bit word length, I didn't pay
much attention to dithering.

>For best quality you have to do this at substantially higher bit depth
>than your "target" bit depth, and just before truncating to the target
>bit depth, add the appropriate amount, type and spectrum of noise, to
>do what's called dithering.
[snip]
> To respond and hopefully answer the original message, if you're
>aiming for a final file of 8-bit or 16-bit depth at 8 kHz, record at
>the standard CD rate of 16-bit, 44.1kHz (this is a standard rate for
>any soundcard) and use a .wav editor/DAW program (Adobe Audition,
>N-Track Studio, etc.) to mix all the files, then convert them to the
>final bit depth and sample rate. Even if the files you're trying to
>mix are already at 8-bit 8KHz, using a DAW program will make it a lot
>easier than writing your own code.

The AC-97 standard (intended for consumer sound cards) may give a
couple of hints for implementation, including un-synchronized digital
input.

HTH
Norbert