
Another flight of fancy

August 27, 2004 7:38:20 AM

Archived from groups: alt.tv.tech.hdtv

Here's a different possible direction for this technology to develop.

Suppose someone comes up with a super hot video graphics engine:
a chip that does rendering, ray tracing, viewing planes, sprites, etc.,
on the fly, in real time. Its development is driven by the
computer game market, of course, but wouldn't it be usable for
passive viewing as well?

Then the video "feed" becomes a stream of instruction codes defining
the presence of objects and how they are to move -- something
along the lines of MIDI codes for music.

Naturally, cartoons and animation would be easiest. So instead
of video info for every screen pixel at the refresh rate,
there would be an initial load of objects... such as Nemo
the clown fish.

Then our display system receives instructions as to how Nemo
should swim about, speak, etc.

The first benefit of this system would be more efficient use
of bandwidth and therefore better possible image resolution.
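[Editor's note: a minimal sketch of what such an instruction stream might look like, in Python. Every name here (Scene, load_object, the "move" command tuples) is hypothetical, invented purely to illustrate the load-once-then-stream-commands idea.]

```python
# Hypothetical sketch: a scene is loaded once, then driven by a
# compact stream of motion commands instead of per-pixel video.

class Scene:
    def __init__(self):
        self.objects = {}          # object id -> current (x, y) position

    def load_object(self, obj_id, x=0.0, y=0.0):
        # One-time "initial load" of an object (e.g. a clown-fish model).
        self.objects[obj_id] = (x, y)

    def apply(self, command):
        # Each command is tiny compared with a full frame of pixels.
        op, obj_id, *args = command
        if op == "move":
            dx, dy = args
            x, y = self.objects[obj_id]
            self.objects[obj_id] = (x + dx, y + dy)

scene = Scene()
scene.load_object("nemo", x=10.0, y=5.0)

# The "feed": a handful of numbers per object per frame,
# rather than millions of pixel values.
stream = [("move", "nemo", 1.5, 0.0),
          ("move", "nemo", 1.5, -0.5)]
for cmd in stream:
    scene.apply(cmd)

print(scene.objects["nemo"])   # (13.0, 4.5)
```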

However -- here's the kicker. We don't necessarily have to
visualize Nemo as a clown fish! Why not make him a sea horse?
The turtles can be Teenage Mutant Ninja Turtles (I suppose
we'd have to pay some kind of royalty for using their images),
but their lingo would be authentic.

Soon there would be a whole industry creating viewable
characters for this system. (And lots of public domain
ones of course).

Of course Nemo need not be a fish at all. I might like
to see the story showing him as, say, a tropical parrot.
The Great Barrier Reef scenery becomes South American
jungle canopy, and Nemo's teacher, the ray, is now a condor.

Nemo gets caught and put in a cage, not a fishtank...
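[Editor's note: the substitution idea -- same broadcast motion stream, different local artwork -- amounts to letting each viewer rebind an object ID to whatever model they like. A hypothetical sketch; the names and "render" strings are made up for illustration.]

```python
# Hypothetical sketch: the broadcast stream only ever says "object 'nemo'
# moved"; which model is drawn for 'nemo' is a purely local choice.

model_library = {
    "clownfish": "renders an orange-striped fish",
    "seahorse":  "renders a sea horse",
    "parrot":    "renders a tropical parrot",
}

# Each viewer keeps their own binding of stream IDs to models.
bindings = {"nemo": "clownfish"}          # the default look

def draw(obj_id):
    return model_library[bindings[obj_id]]

print(draw("nemo"))                        # renders an orange-striped fish

bindings["nemo"] = "parrot"                # viewer's preference; no change
                                           # to the broadcast stream needed
print(draw("nemo"))                        # renders a tropical parrot
```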

Ok, maybe it's a bit far-fetched, but we can still dream,
can't we?

Sean


Anonymous
August 27, 2004 10:30:32 PM


"Sean" <no.spam@no.spam> wrote in message news:412EACF7.EF4E15C8@no.spam...
> [snip]

It's an interesting idea, though it may never end up being the best way
to do things, or at least not something that anyone spends the money to
make available. One other benefit of such a system could be the ability
to change the viewing angle arbitrarily (look at something from the
back), zoom in or out at will, etc. I think most possible applications
of such a system are pretty far away, though.

But at some point in the distant future, I could possibly see (I don't
know whether anyone would bother to do it, just that it would be
possible) watching, say, a soccer game this way. The stadium, players
(plus refs, coaches, etc.) and ball could be modelled in advance
(probably not hand-modelled; rather, quick camera-generated models could
be made just before they walk onto the field). Cameras would track the
positions of everything at high speed, including player positioning and
deformations for animation, and video tracking could alter expressions
on faces, etc. It might not be perfect really up close, but it might be
possible for it to be as good as or better than regular (HD)TV at a
normal viewing angle/distance, plus the ability to view the game from
any virtual viewpoint (even from on the field, or from the eyes of one
of the players!). How much detail the crowd would get, etc., would be an
open question.

With the highest level of detail for everything, this could easily use a
lot MORE bandwidth than video, since it must include detailed
information sufficient for up-close viewing of everything in the
stadium. But with a reasonable level of detail, while it still might
look/feel a bit like watching a high-quality video game (only
approximating a "being there" experience), it would still be a
fascinating way to watch a live sporting event. Such a thing might start
off as a way to "capture" actual games for playback in regular sports
video games (initially just for given actions or plays, as motion
capture is used already, but ultimately maybe for complete games).
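[Editor's note: the arbitrary-viewpoint benefit follows directly from having a 3D scene model rather than baked-in pixels: the same data can be projected through any camera. A toy sketch using a plain pinhole projection; all of the positions and names are made-up illustration values.]

```python
# Toy sketch: with a 3D scene model, the viewpoint is chosen at the
# receiver. Here a single "ball" position is projected onto a 2D image
# plane through two different cameras looking down the z-axis.

def project(point, camera, focal=1.0):
    # Simple pinhole projection: shift into camera space, divide by depth.
    x, y, z = (p - c for p, c in zip(point, camera))
    return (focal * x / z, focal * y / z)

ball = (2.0, 1.0, 6.0)                # one object in the shared scene

broadcast_cam = (0.0, 0.0, 0.0)       # the conventional "TV" viewpoint
onfield_cam   = (1.0, 0.0, 4.0)       # a viewpoint from on the pitch

print(project(ball, broadcast_cam))   # far away: small image coordinates
print(project(ball, onfield_cam))     # (0.5, 0.5) -- closer, so larger
```

Same scene data, two completely different pictures; a pixel-based feed can only ever replay the camera that was broadcast.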

As for instructions as proxies for the pixels of a straight 2D image or
video feed, that could easily describe many existing compression
systems. As for replacing one fish with another, that requires not just
instruction-driven rendering but also some intelligence about that
object being a discrete object (which regular video doesn't
automatically "know"), as well as being a fish. The biggest problem with
this (as with the 3D system I described) is that a very high level of
abstraction is required (knowing that each player is a separate object,
for instance), and computers aren't good at making these kinds of
distinctions without a lot of help. So something that is set up well in
advance (with a lot of human instruction and tweaking) might work, but
something captured fully in real time is a very, very long way away.

Another way to look at this is the current use of such systems on the
internet. This is how Shockwave/Flash animations work, for instance.
They've caught on for some things (low-bandwidth animations), but aren't
likely to be used for much else for some time to come. We've also been
waiting for effective internet 3D for many years, as none of the
previous or current standards and technologies has ever caught on,
though by now many if not most people have sufficiently powerful 3D
cards to manage some pretty impressive stuff. The biggest problem is
that it still takes a fair amount of time/bandwidth to transmit enough
models, textures, animations, shaders, lighting information, etc. to
make a good presentation.

Anonymous
August 28, 2004 8:16:11 PM


I get what you are saying, and it reminds me of the compressed digital
video that we use today. When I look closely at compressed SD video, a
guy walking down the street has a sort of cloud around him where you can
see the compression.

I may be wrong, but I think the compression re-uses the existing video.
If the street the guy is walking on isn't moving, it more or less
re-uses the street and only updates the walking guy. I think that's why
high-motion video can't be compressed as much as, say, a still scene of
a field or something. Again, I could be wrong, but I am pretty sure it
works something like that. It's not so far from what you are describing,
only this method introduces noticeable compression artifacts, where your
scheme would use predefined objects.

--Dan

"Sean" <no.spam@no.spam> wrote in message news:412EACF7.EF4E15C8@no.spam...
> [snip]