Archived from groups: rec.video.desktop
"PDTV" <pdtv_info@yahoo.com> wrote in message
news:2689a1aa.0410262248.4518dd51@posting.google.com...
> Thanks for backing me up here, but I think you're muddying the point
> further.
>
Muddying the water, LOL. That's a first for me. I'll try to be clearer.
> I don't expect "consumers" to keep their footage in pristine quality
> -- they needn't use DV, or hi-def, or what have you. In fact, I'll
> say yet again that MPEG-2 is FINE as a storage format, and I archive
> PLENTY of stuff to MPEG-2 so I won't have to bother with VHS, and I
> don't even keep the original tapes in most cases.
>
Well, I always keep the original VHS and a pristine first-generation digital
copy on DV tape as an archive. That way you can move it to whatever new,
longer-lasting media comes along. The point of an archive should be to keep
as much of the original detail as possible, since you can never get back
detail you've thrown away, especially since everyone will be watching hi-def
TV in the future.
> The fact remains that editing MPEG video and re-encoding into MPEG is
> very much like analog recording -- you're not only going to GET a kind
> of "generation loss", it MULTIPLIES as you edit and re-edit.
>
I'm not sure I agree with that statement, since a lot of MPEG editing
software now uses smart rendering, which re-encodes only a few seconds of
video before and after the edit points. So if you edit MPEG-2 and add a
title, only the scene with the title is affected, etc.
Plus, once you're down to MPEG-2 levels of compression, re-compression
basically leaves the video as it is, since there is only so much redundant
data to remove in the first place. The encoder doesn't just start making the
remaining video look bad; it analyzes the video and removes only redundant
information. So an already-compressed MPEG-2 video will remain essentially
the same MPEG-2 video if you re-encode the entire movie for some strange
reason. It might clean up any missed frames, but it should keep a
good-looking final picture, unless you ask it to compress to a low bitrate
(a higher level of compression) each time.
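The smart-rendering idea can be sketched in a few lines. The GOP length and
frame numbers below are made-up illustration values of mine, not from any
particular editor:

```python
# Sketch of smart-rendering bookkeeping: only the GOPs (groups of
# pictures) that contain an edit point need re-encoding; every other
# GOP is copied bit-for-bit, with no generation loss at all.
GOP_LEN = 15  # frames per GOP; a common value, assumed for illustration


def gops_to_reencode(edit_frames, gop_len=GOP_LEN):
    """Return the set of GOP indices touched by the given edit points."""
    return {frame // gop_len for frame in edit_frames}


# A hypothetical 30-minute clip (~54000 frames) with two edit points:
dirty = gops_to_reencode([120, 30000])
print(sorted(dirty))                   # the only GOPs that get re-encoded
print(54000 // GOP_LEN - len(dirty))   # GOPs copied through untouched
```

The point of the sketch is just the ratio: two edits dirty two GOPs out of
thousands, and the rest of the stream passes through unchanged.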
> Open up a .jpeg picture in a program like Photoshop. Looks fine,
> doesn't it? JPEG is a "lossy" compression scheme, but there's nothing
> wrong with the picture just because it's a JPEG, is there? Nope,
> looks fine, doesn't it? Now save that JPEG as another JPEG using
> quality setting 8, which is probably higher than what it was
> originally saved at (you can tell by whatever setting is already in
> the dialog box). Now, open the resulting file and take a look at it.
> Better yet, save that file again as yet another JPEG, also at the
> "High" (8) quality setting. It shouldn't take more than a couple
> iterations of this process for the picture to reach "unacceptable"
> quality, especially since you've seen the original JPEG that was
> probably saved from an uncompressed image.
>
Hmm, I've never actually done this test before; I'll try it today and see. I
work with JPEGs all the time. If you choose a low-quality compression right
off the bat, you will see bad artifacts in the image. But if you choose a
high-quality setting, then open the image, work on it a bit, and save again,
then again and again, saving at the same high-quality JPEG setting each
time, I don't see why the compression algorithm would suddenly start messing
up and introducing artifacts. I believe it will keep working as it's
supposed to, saving at the quality level you selected and compressing to
that level and no more. It shouldn't introduce any artifacts that didn't get
introduced the first time it compressed to that level.
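For what it's worth, the test is easy to run programmatically. Here's a
rough sketch using the Pillow library; the synthetic image, quality setting,
and generation count are arbitrary choices of mine:

```python
from io import BytesIO
from PIL import Image  # Pillow; pip install Pillow


def resave_jpeg(im, quality=80):
    """Save an image as JPEG in memory and load it straight back."""
    buf = BytesIO()
    im.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")


def mean_abs_diff(a, b):
    """Average per-channel pixel difference between two same-size images."""
    pa, pb = list(a.getdata()), list(b.getdata())
    total = sum(abs(x - y) for p, q in zip(pa, pb) for x, y in zip(p, q))
    return total / (len(pa) * 3)


# A small synthetic gradient stands in for a photo here.
original = Image.new("RGB", (64, 64))
original.putdata([(x * 4 % 256, y * 4 % 256, (x + y) * 2 % 256)
                  for y in range(64) for x in range(64)])

gen = original
for g in range(1, 6):
    gen = resave_jpeg(gen)
    print(f"generation {g}: diff from original = "
          f"{mean_abs_diff(original, gen):.2f}")
```

This just reports the numbers; whether the drift keeps compounding or
flattens out after a generation or two is exactly what we're arguing about.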
Maybe you're thinking of printing out the JPEG, scanning it back in as a BMP,
recompressing it to JPEG, then printing and rescanning it as a BMP again and
compressing to JPEG again? Done like that you will introduce new artifacts,
but that's because each time you're starting over with a fresh BMP that
already contains artifacts and compressing it all over again.
That would be like taking an AVI, compressing it to MPEG-2, playing it on a
projection screen, videotaping the image to get a new AVI, and making a new
MPEG-2 from that. I'm not sure, but the loss of detail there would come from
taping the projected image, not so much from the recompression. I believe
that if an MPEG-2 encoder processes an already-encoded MPEG-2 file, it will
change nothing in a file that's already encoded to that quality level, just
the parts that have been edited and can use some cleaning up and removal of
redundant detail.
> Now, you may ask, how can this be when every time you saved out a new
> JPEG, you chose the "High" quality setting? Look at it this way --
> even if you save an uncompressed image as a JPEG at the highest
> quality setting, you're probably only getting 80% of the detail that's
> in the image. Furthermore, every time you save it back out as a JPEG,
> EVEN AT THE HIGHEST SETTING, you're only getting 80% of the detail in
> the first JPEG, which is already only 80% of what was in the
> uncompressed file. Also keep in mind that you can save a JPEG as a
> .psd, TIFF, or whatever, and it's still not any better than the
> original JPEG, and will continue to deteriorate if you re-save the
> image as a JPEG, even without editing it. Here's how it works:
>
> UNCOMPRESSED IMAGE (100% detail)
>
> JPEG at "10" setting (80% of original detail)
>
> SAVE AGAIN AS JPEG at "10" (80% of 80% which equals 64% of original
> detail)
>
> SAVE AS UNCOMPRESSED .PSD FILE (preserves all of the 64% detail you
> have left, but doesn't get you back any lost detail)
>
> SAVE AGAIN AS JPEG at "10" (80% of 64% which equals 51.2% detail)
>
>
I think the math is wrong here: once the redundant data is gone, it's gone!
Re-saving to JPEG, or to MPEG-2 for that matter, won't find any new
redundant data in the file to remove.
Think of it like data compression, OK? You save a program into a zip file.
The compressor removes redundant and unnecessary data from the original file
and packs it down for better storage size. That is all it can do.
If you re-zip it, you aren't going to keep getting the same amount of
compression over and over and start losing data. No, it can only go to a
certain point.
Same with encoding using smart rendering: it will maintain a certain level
of quality, and after that it can't compress any further no matter how many
times you ask it to re-save as JPEG or MPEG-2.
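Just so we're arguing about the same numbers: the percentages in the quoted
post do follow from the "fixed 80% per save" assumption; my objection is to
that assumption, not the arithmetic:

```python
# The quoted model: each re-save keeps a fixed 80% of the remaining
# detail. This premise is what's in dispute, but the arithmetic itself
# checks out against the figures in the quoted post.
detail = 1.0
for generation in range(3):
    detail *= 0.80
    print(f"after save {generation + 1}: "
          f"{detail * 100:.1f}% of original detail")
# prints 80.0%, then 64.0%, then 51.2%
```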
> Anyway, I think you can see where this is going. The point I'm trying
> to make is this: Is 80% of the original detail "acceptable"? Sure,
> you probably won't even notice it. If you decide you REALLY want to
> edit your material and you don't mind a little drop in quality, is 64%
> "watchable"? Depends on your personal tastes, but for argument's sake
> I'll accept that. Thing is, now you've got footage that's 64% of what
> it once was sitting around. Say in a couple years, new equipment
> comes out that's backwards-compatible with the format your footage is
> in so it'll still play it, but even at 100% quality, it will show
> limitations of the recording medium (Hi8, whatever) you originally
> used. So with that information (that even 100% quality isn't gonna
> look so hot because it was recorded on older technology), is 64% still
> good enough? In fact, since you now only have 64%, are you really all
> that confident that you can still go back and edit if you want to? If
> 100% will show its age on your new equipment, are you that sure that
> "because it's digital", going down to 51.2% isn't going to be
> noticeable???
>
I would totally agree with that if I believed the math involved, but I don't.
> Now, if anyone still doesn't get what I'm saying here, find yourself a
> nice full-size VHS camcorder and stick with it -- you have no business
> with anything more advanced.
>
> Good day to you all.
As technology progresses, people will be editing MPEG-2 as their original
source material, captured on high-definition camcorders. According to you
they should stick to VHS rather than edit in MPEG? I don't get the logic.
In my opinion, removing redundant data is more necessary now, with
high-definition camcorders, than ever before.
With DV, a stationary background already had a large number of pixels that
could be compressed with MPEG, but with the new high-definition consumer
camcorders coming out soon, the number of pixels required to store that
stationary background is enormous, and storing all those millions of pixels
for every frame is a big waste of space. That is why intelligent compression
schemes are needed to detect redundant data and remove it. The key is
removing only redundant data; that's what I mean by intelligent encoders. If
a file is already done, with no more redundant data to compress, there is
nothing further for the encoder to do; it shouldn't just compress for the
sake of compression without analyzing the data!
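The raw numbers make the point. Assuming a 1920x1080 frame at 3 bytes per
pixel and 30 fps (round figures for illustration, not any specific camcorder
spec):

```python
# Back-of-the-envelope storage cost of uncompressed HD video.
width, height = 1920, 1080   # assumed HD frame size
bytes_per_pixel = 3          # 8-bit RGB, before any chroma subsampling
fps = 30                     # round figure

frame_bytes = width * height * bytes_per_pixel
per_second = frame_bytes * fps
per_hour_gb = per_second * 3600 / 1e9

print(f"{frame_bytes / 1e6:.1f} MB per frame")     # about 6.2 MB
print(f"{per_second / 1e6:.0f} MB per second")     # about 187 MB/s
print(f"{per_hour_gb:.0f} GB per hour uncompressed")
```

Hundreds of gigabytes per hour is exactly why you can't just store all those
pixels and why the encoder has to throw the redundant ones away.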
I hope you can maybe understand this a bit better yourself; it will help you
accept technology and all its power and glory. It can be used to make things
better, if people aren't quick to compare it to older technology and fear
it.
AnthonyR.