New Encoding Tech May Bring Billions of Colors to Blu-ray
Earlier this week, a company called Folded Space said it has created encoding/decoding algorithms that will bring deeper color to media such as movies and TV shows, which are filmed using high dynamic range digital cameras. The set of algorithms, called DCE (Deep Color Encoding), processes original content with 12 bits per color channel instead of the current 8 bits.
"With DCE, studios can now release Blu-ray discs and even next generation UHD/4K physical media to support what's commonly considered to be the most important, most visual improvement in next generation video," said John Schuermann, who leads business development for Folded Space.
Although the wave of 4K UHD TVs, Blu-ray players and other equipment is beginning to saturate the market, most of the media that will play on these devices is encoded with 8 bits per color channel. When you look at that media up close and personal, it's easy to see color "banding": visible steps where several shades should blend smoothly into one another.
That said, encoding media at 10, 12 or even 16 bits per channel all but eliminates banding, because there are billions of colors to draw on instead of roughly 16.7 million. Folded Space's DCE can do this while keeping the content roughly the same size as the 8-bit version, and while preserving backwards compatibility. That means one Blu-ray disc will cough up a movie that works on both 8-bit and 12-bit players.
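To get a feel for the difference, here is a rough Python sketch (our own illustration, not Folded Space's DCE) that quantizes a dark, slowly changing gradient at several bit depths and reports how wide each resulting band of identical pixels ends up on a 4K-wide frame:

```python
import numpy as np

def band_width(bits, frame_width=3840):
    """Quantize a dark horizontal gradient (0 to 10% of full scale, like a
    shaded night sky) to `bits` per channel and report how many bands of
    identical pixels appear and how wide each one is."""
    ramp = np.linspace(0.0, 0.1, frame_width)
    codes = np.round(ramp * (2 ** bits - 1))
    bands = len(np.unique(codes))
    return bands, frame_width / bands

for bits in (8, 10, 12):
    bands, width = band_width(bits)
    print(f"{bits}-bit: {bands} bands, each ~{width:.0f} px wide")
```

At 8 bits, each band in this gradient is well over a hundred pixels wide, which is exactly the stepping you see in dark skies; at 12 bits the bands shrink to a handful of pixels and effectively disappear.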
Of course, the big drawback is getting the industry to accept the new technology. This applies not only to the media burned onto a Blu-ray disc, but to the optical drives that read them as well. That means devices would need to be on the Internet to receive a firmware update that supports the algorithms.
"The company's proprietary yet simple and fast algorithms process original content with 12-bits per color and imperceptibly encode information about the fine color detail into a standard, backward compatible 8-bit Blu-ray disc," reads the company's press release. "Newer displays and Blu-ray players with the decoding algorithm can then restore a 12-bit equivalent of the original image in support of much greater color range of recently announced displays."
Folded Space plans on licensing the encoding algorithm to software partners free of charge to "stimulate" deep color and high dynamic range content production as soon as possible. The company also plans to license the decoding algorithm to player and display partners for a "modest" fee.
For more information about Folded Space and DCE, head here.
While banding with 8 bits usually is not too bad, there are times when it can be distracting enough to make me wish 10 bits were more common.
With that attitude, let's just go back to black and white to save space.
Normally, movies try to avoid banding by using dithering. This is particularly bad for compression of animations (cartoons, if you wish), where there are usually huge blocks of single-tone colors or very regular gradients that would otherwise be easy to compress; the dither noise breaks those flat regions into high-frequency detail the encoder has to spend bits on.
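Quick numpy sketch of that trade-off, using plain random dither rather than whatever a given studio actually uses: the dithered gradient looks smooth, but it stops compressing like the flat, banded one does.

```python
import numpy as np
import zlib

rng = np.random.default_rng(0)

def quantize(img, dither=False):
    """Quantize a float image in [0, 1] to 8 bits per channel. With
    dither=True, add up to one code step of random noise before rounding,
    which hides the banding but fills the flat areas with noise."""
    noise = rng.uniform(-0.5, 0.5, img.shape) if dither else 0.0
    return np.clip(np.round(img * 255 + noise), 0, 255).astype(np.uint8)

# A dark, slowly varying gradient: the kind of flat region that costs a
# cartoon encoder almost nothing.
grad = np.tile(np.linspace(0.0, 0.1, 1920), (1080, 1))

plain = quantize(grad)
dithered = quantize(grad, dither=True)

print("zlib bytes, plain 8-bit:   ", len(zlib.compress(plain.tobytes())))
print("zlib bytes, dithered 8-bit:", len(zlib.compress(dithered.tobytes())))
```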
I certainly look forward to the day the old 8-bit encoded video gets replaced by something newer, preferably 16-bit, since it's a power of 2.
No, it is not. I have never once noticed any banding. The large dark blocks the previous poster mentioned have nothing to do with banding. That is simply poor compression, end of discussion.
Depends on your display; some displays can show 30-bit color... and a portion of people can distinguish more than 256 shades of grey and thus see more banding. A portion of these people work in graphic arts...
The biggest problem with 8 bits per plane, banding and video encoding is that at 8 bits, encoders cannot tell the difference between 1-2 LSB variations in white/black/color levels and noise, or they drop details of that magnitude because they require too much bandwidth to encode for what little detail they seem to provide. So you end up with roughly 6 effective bits (ENOB), and that greatly magnifies the blotching in shadows and highlights. With two extra bits, the encoder can tell the difference between noise and detail that much better, and encodes end up looking cleaner, with a full ~8 ENOB output from 10-bit source material.
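Here's a quick numpy illustration of that shadow-crushing effect (quantization only, no encoder involved): a faint detail roughly 1.5 8-bit steps tall sitting in the shadows, quantized at 8 and then 10 bits.

```python
import numpy as np

# A faint shadow detail about 1.5 8-bit code steps tall, sitting on a dark
# background (values normalized to 0..1). Quantization only; no encoder.
x = np.linspace(0.0, 1.0, 1000)
detail = 0.02 + (1.5 / 255) * np.exp(-((x - 0.5) / 0.1) ** 2)

for bits in (8, 10):
    levels = 2 ** bits - 1
    codes = np.round(detail * levels)
    worst = np.abs(codes / levels - detail).max() * 255   # error in 8-bit steps
    print(f"{bits}-bit: the detail survives as {len(np.unique(codes))} code values, "
          f"worst rounding error ~{worst:.2f} of an 8-bit step")
```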
Try looking at an 8x8 full-screen checkerboard pattern displaying all 256 shades of individual colors. The steps between individual shades are quite obvious on a still image, so the human eye can certainly discern more than 8 bits per channel worth of information when presented in a way that maximizes perception.
This is why image and video encoders rely on human perception models to decide which details can be dropped without affecting perceived image quality too much.
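If anyone wants to try that kind of test on their own monitor, here's a small Python sketch of one way to build it (assumes Pillow is installed; I used a 16x16 grid of patches so it can actually cover all 256 grey levels):

```python
import numpy as np
from PIL import Image   # assumes Pillow is installed

# A 16x16 grid of patches covering all 256 grey levels, so every patch
# differs from its neighbours by exactly one 8-bit code value.
PATCH = 120   # pixels per patch edge, giving a 1920x1920 image
shades = np.arange(256, dtype=np.uint8).reshape(16, 16)
img = np.kron(shades, np.ones((PATCH, PATCH), dtype=np.uint8)).astype(np.uint8)

Image.fromarray(img, mode="L").save("grey_steps_8bit.png")
print("Wrote grey_steps_8bit.png", img.shape)
```

View it full screen at native resolution; whether you can spot the edges between neighbouring patches, especially in the dark rows, is a decent hint of how much banding you would notice in real footage.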
10 bits per channel x 3 channels (R+G+B) = 30 bits. 2^30 = 1.07 billion.
With 12 bits per channel, you have 36 bits total, which is about 68.7 billion.
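Quick Python check of those totals:

```python
for bits in (8, 10, 12, 16):
    shades = 2 ** bits          # shades per channel
    colors = shades ** 3        # three channels: R, G and B
    print(f"{bits:>2}-bit: {shades:>6} shades/channel -> {colors:,} colors")
```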
GPUs have been processing images in 16-32 bits per plane (high dynamic range) for several years already, but displays capable of more than 8 bits per channel are still uncommon... some high-speed panels are still using 6 bits per channel with dithering.
Btw, SGI was doing 48-bit RGBA more than 20 years ago. It's nothing new. Only the switch to flat panels has meant the fidelity of perceived images has gone down since then; in the past, CRTs could show the 4,096 different shades of each colour just fine with 36-bit RGB. 12 bits for alpha has always been important for accurate low-light/contrast visual simulation and precision imaging/video, while 16-bit to 64-bit greyscale has been used in medical imaging, GIS and defense imaging analysis for ages as well, e.g. the Group Station for Defense Imaging was dealing with 60GB 2D images probably a decade before many of the readers of this forum were even born.

Modern display tech is merely catching up to what high-end gfx was doing in the last century with CRTs. *yawn*

Ian.
Most stuff you see on TV was encoded by the camera in 4:2:2. Most news footage is 4:1:1 or 4:2:0, depending on the format they use. But it doesn't matter, because the encode for digital broadcast is 4:2:0. Most dramatic series are shot in 4:4:4, and some are even still shot on film.
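For anyone who hasn't run into the notation, here's a rough numpy sketch of the chroma half of 4:2:0 (BT.601 full-range constants, with a random frame standing in for real video): the Cb and Cr planes get box-averaged down to quarter resolution, so the frame carries half the samples of 4:4:4.

```python
import numpy as np

def to_ycbcr(rgb):
    """BT.601 full-range RGB -> YCbCr (rgb is a float array in [0, 1])."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)
    cr = 0.713 * (r - y)
    return y, cb, cr

def subsample_420(plane):
    """Average each 2x2 block, i.e. the chroma downsampling used by 4:2:0."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(2)
frame = rng.random((1080, 1920, 3))           # stand-in for a real RGB frame

y, cb, cr = to_ycbcr(frame)
cb420, cr420 = subsample_420(cb), subsample_420(cr)

full = y.size + cb.size + cr.size             # 4:4:4 sample count
sub = y.size + cb420.size + cr420.size        # 4:2:0 sample count
print(f"4:4:4 samples: {full:,}   4:2:0 samples: {sub:,} ({sub / full:.0%} of 4:4:4)")
```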