Granite Texture Streaming 101: An Overview From Graphine’s CEO

Graphine recently released the fourth generation of its texture streaming middleware, Granite. We’ve covered Granite before, so we know it improves graphics performance and enables the use of higher resolution textures for 3D objects and environments. We’ve just never truly understood how Graphine’s magic works. Following the launch of Granite SDK 4.0, we decided it was high time we found out.

Shortly after the launch of Granite SDK 4.0, we had the chance to speak with Aljosha Demeulemeester, Graphine’s CEO, to get a better understanding of how Granite works and what it’s used for. If you’re a 3D content developer, especially if you make VR games, you may want to pay attention. Granite has the potential to significantly reduce the GPU workload while improving visual fidelity.

Demeulemeester started off with a quick summary of his company’s software: “Granite SDK is a middleware that can be integrated into any real-time 3D engine that is entirely focused on texture streaming,” he remarked. 

Clever Techniques Bring Out Better Performance

At its core, Granite is designed to work around the hardware limitations that restrict the visual fidelity of 3D objects and environments, which depends largely on the quality of the textures covering their surfaces. Even the highest-polygon model will still look terrible if you cover it with a low-resolution texture. Higher-resolution textures improve the visual quality, but as Demeulemeester put it, “[...] at a certain point, you just use too much video memory.”

He went on to paint a clearer picture of what he was talking about.

“To give you an example, if you have an 8K x 8K PBR (physically based rendering) texture, which has a couple of different channels, then that might be 256MB of data for that one single texture, which could be just a character or a table or something like that. So, if you have 10 of these in your scene close to the camera, they already would need more than 2GB of video memory to render that correctly.”

That’s an awful lot of video memory for just “10 objects.” You can see how this scenario would get out of hand, even for today’s top-of-the-line graphics cards, which routinely carry 6GB, 8GB, and even 12GB of graphics memory.
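
To put rough numbers behind that quote, here is a minimal sketch of the arithmetic. It assumes a single uncompressed 8-bit RGBA layer per texture; a full PBR set has several such layers, and GPU block compression shrinks the totals again, so treat these figures as ballpark only:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t width  = 8192;              // "8K"
    const std::uint64_t height = 8192;
    const std::uint64_t bytesPerTexel = 4;          // one uncompressed RGBA8 layer

    const std::uint64_t base     = width * height * bytesPerTexel;   // 256 MiB
    const std::uint64_t withMips = base + base / 3;                  // a full mip chain adds ~1/3

    std::printf("base level      : %llu MiB\n", (unsigned long long)(base / (1024 * 1024)));
    std::printf("with mip chain  : %llu MiB\n", (unsigned long long)(withMips / (1024 * 1024)));
    std::printf("10 such textures: ~%llu GiB\n",
                (unsigned long long)(10 * withMips / (1024ull * 1024 * 1024)));
    return 0;
}
```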

If you want to use higher-than-8K resolution textures, the problem gets even worse. With 16K resolution textures, for example, you’re limited by more than memory constraints—at that point, you’re dealing with software limitations. “And if you want to go even higher in resolution, then the graphics drivers will just stop,” said Demeulemeester. “Graphics APIs like DirectX and OpenGL kind of limit the developers of using more than 16K texture resolution.”
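
You can see that ceiling for yourself. The snippet below is a minimal sketch that queries OpenGL’s per-dimension texture limit (it assumes a GL context has already been created; many drivers report 16384 here, matching the 16K figure, though some expose more):

```cpp
#include <GL/gl.h>
#include <cstdio>

// Assumes an OpenGL context is already current (window/context setup omitted).
void printMaxTextureSize() {
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   // driver-reported per-dimension limit
    std::printf("GL_MAX_TEXTURE_SIZE = %d texels per side\n", maxSize);

    // Direct3D 11 hard-codes a comparable ceiling:
    // D3D11_REQ_TEXTURE2D_U_OR_V_DIMENSION == 16384.
}
```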

Granite works around these texture resolution limitations by segmenting texture data into tiny pieces. The middleware splits each texture into 128 x 128-pixel tiles to create a “texture tile database” called a tileset. Granite calls upon the tiles as needed in real time using prefetch algorithms, sending only the texture segments required to render the visible scene to the GPU. In a way, the Granite SDK is built on the same principle as foveated rendering: It spends resources only on the parts of the scene you can actually see.

Demeulemeester continued:

“What [Granite] actually does is in the data loading side of things. It only loads the data into the GPU memory that is actually visible—the data that will actually be needed to render the screen pixels. We take the entire digital image and divide it into very small pieces. And then we're going to figure out, while the game is running—while the player is running around in the environment—which of those tiles are actually visible. The great thing about Granite is that all these extra pieces that are not visible; they’re not needed in memory. And so, we actually need much less memory, and you can add much more texture content to your environment, and have a more unique, and much more interesting environment.”
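
In code, that core idea reduces to bookkeeping over tile coordinates. The sketch below is purely illustrative (Granite’s actual API isn’t shown in the article, so every name here is hypothetical); it mirrors the 128 x 128-pixel tile size, and the feedback-pass approach to gathering the visible set is one common virtual-texturing technique, not necessarily Graphine’s:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <utility>
#include <vector>

constexpr int kTileSize = 128;   // Granite splits textures into 128 x 128 tiles

// Hypothetical tile identifier: which tile of which mip level of a texture.
struct TileId {
    std::uint16_t x, y;
    std::uint8_t  mip;
    bool operator==(const TileId& o) const { return x == o.x && y == o.y && mip == o.mip; }
};

struct TileIdHash {
    std::size_t operator()(const TileId& t) const {
        const std::uint64_t h = (std::uint64_t(t.mip) << 32) |
                                (std::uint64_t(t.y)   << 16) | t.x;
        return std::size_t(h);
    }
};

// Given the texel coordinates the renderer actually touched this frame
// (e.g. gathered by a feedback pass), work out which tiles must be resident
// in video memory; every other tile can stay on disk.
std::vector<TileId> tilesNeeded(const std::vector<std::pair<int, int>>& touchedTexels,
                                std::uint8_t mip) {
    std::unordered_set<TileId, TileIdHash> unique;
    for (const auto& texel : touchedTexels) {
        unique.insert(TileId{ std::uint16_t(texel.first / kTileSize),
                              std::uint16_t(texel.second / kTileSize), mip });
    }
    return { unique.begin(), unique.end() };
}
```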

Of course, there’s a practical limitation to everything, and Granite can scale beyond practicality. Graphine’s middleware supports unfathomably large 256,000 x 256,000-pixel textures. And a single Granite project can use up to 64 individual tilesets that can each theoretically contain 128GB of texture data, for a potential total of 8TB of texture information.
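
A quick back-of-the-envelope check of those ceilings (the figures are taken straight from Graphine’s stated limits; nothing here is derived from the SDK itself):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t maxTextureSide  = 256000;             // texels per side
    const std::uint64_t tileSide        = 128;                // Granite tile size
    const std::uint64_t tilesPerSide    = maxTextureSide / tileSide;   // 2,000
    const std::uint64_t maxTilesets     = 64;
    const std::uint64_t bytesPerTileset = 128ull << 30;       // 128 GB per tileset

    std::printf("tiles per side : %llu (%llu tiles in the largest texture)\n",
                (unsigned long long)tilesPerSide,
                (unsigned long long)(tilesPerSide * tilesPerSide));
    std::printf("project ceiling: %llu TB of texture data\n",
                (unsigned long long)((maxTilesets * bytesPerTileset) >> 40));
    return 0;
}
```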

You would never be able to ship a game with that much texture data, but Granite isn’t limited to game development. It could also be used for visualization projects that demand incredibly high-resolution textures but don’t necessarily require the same rendering performance (framerate) that video games demand.

It’s not likely that a workstation would have access to enough hard drive storage for 8TB of texture data, but a project of that size wouldn’t rely on local storage. Granite is designed to work with local storage, but Graphine is working on network storage solutions. “We have prototypes of network streaming,” said Demeulemeester. “Right now, we store the texture data on disk, but the next step is streaming data from a cloud server.”

Streaming data from a network storage solution presents a new set of problems for Graphine to solve. Granite relies on the predictable time it takes to retrieve texture data from a hard drive in order to fill the visible screen area at a consistent framerate. Network transfer is far less reliable and often much slower than local hard drive access. Graphine counters the network latency by “prefetching more content,” and the company holds a patent on prediction techniques that help address this issue.
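
One common way to hide that latency (a generic illustration, not Graphine’s patented predictor) is to widen the prefetch window as the source gets slower and less predictable, so tile requests are issued earlier:

```cpp
#include <algorithm>

// Measured characteristics of whatever the tiles are fetched from
// (local SSD, HDD, or a network/cloud source).
struct StreamingSource {
    double avgLatencyMs;   // average time to fetch one tile
    double jitterMs;       // variation on top of that average
};

// How many extra "rings" of tiles around the visible set to request ahead of
// time, given a per-frame budget (11.1 ms at 90 fps). The heuristic is an
// illustrative assumption, not Graphine's algorithm.
int prefetchRadius(const StreamingSource& src, double frameBudgetMs = 11.1) {
    const double worstCaseMs     = src.avgLatencyMs + 2.0 * src.jitterMs;
    const double framesOfLatency = worstCaseMs / frameBudgetMs;
    return std::clamp(static_cast<int>(framesOfLatency) + 1, 1, 8);
}
```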

The network streaming option isn’t actually part of Granite, but Graphine is willing to entertain the idea of sharing the tech with the right developer.

“I can’t give all the details,” said Demeulemeester. “But if a developer would be looking for that, we can work with them, but it's not an off-the-shelf product. Granite is a product that streams from the local drive into video memory.”

Past releases of the Granite SDK supported only tile-based texture streaming, but that solution doesn’t work for all types of textures. Granite “needed a generic streaming system, instead of just one for super high-resolution textures” because the tile-based streaming system “doesn’t make sense,” Demeulemeester said, for 10-30% of textures.

“So, one of the interesting things that we added is a completely new streaming system to Granite,” said Demeulemeester. “We have this tile-based virtual texturing streaming system, but then now we also have a more old-school MIP map [streaming] system, which is what Unreal is still using.”
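
For contrast with the tile-based approach, a mip-level streamer picks a whole mip level per texture based on how much screen area the texture covers, then streams that level and everything coarser. The heuristic below is a generic illustration of that idea, not Unreal’s or Granite’s actual code:

```cpp
#include <algorithm>
#include <cmath>

// Pick which mip level of a texture needs to be resident, given how many
// texels of that texture map onto a single screen pixel at its current
// distance/size on screen. Larger ratios mean a coarser mip is enough.
int desiredMipLevel(float texelsPerScreenPixel, int mipCount) {
    const float lod = std::log2(std::max(texelsPerScreenPixel, 1.0f));
    return std::clamp(static_cast<int>(lod), 0, mipCount - 1);
}
```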

Implications For Virtual Reality

Technology like Graphine’s Granite texture streaming middleware is invaluable for virtual reality development. The more interactive and immersive the experience, the greater the need for high-resolution textures. Survios saw that need for Raw Data and recently partnered with Graphine for that very reason.

Demeulemeester continued:

“In virtual reality, you have a tendency to get really close to objects and really inspect them. So, once you get super close to them, you need super high-resolution textures to have some detail still. And also, you have these high-resolution screens, meaning you have more screen pixels, and there's a very strong desire to go even much higher resolution. And so, for every screen pixel, you actually need to have some texture data. So, more screen pixels mean you need more texture data; getting closer to objects means you also need more texture. So, there’s a very strong demand there to use super high-resolution textures. But on the other hand, there’s a very critical demand of having a solid performance on Vive and Oculus; solid 90fps. And on Daydream; solid 60[fps]. And so, you cannot have a streaming system that messes with your framerate—that suddenly starts to lock the rendering pipeline because it’s uploading a very big chunk of data.”
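
A minimal sketch of the kind of throttling he’s describing (the names, types, and budget mechanism are illustrative assumptions, not Granite’s implementation): cap how many bytes are uploaded to the GPU per frame and defer the rest, so one big texture copy never stalls a 90fps render loop.

```cpp
#include <cstdint>
#include <queue>

struct PendingUpload {
    const void*   data;       // tile or mip data staged in system RAM
    std::uint32_t sizeBytes;
};

class UploadThrottle {
public:
    explicit UploadThrottle(std::uint32_t bytesPerFrame) : budget_(bytesPerFrame) {}

    void enqueue(const PendingUpload& upload) { queue_.push(upload); }

    // Called once per frame: upload until the byte budget is spent, then defer
    // the rest to the next frame so the frame-time target is never broken.
    template <typename UploadFn>
    void flush(UploadFn uploadTile) {
        std::uint32_t spent = 0;
        while (!queue_.empty() && spent + queue_.front().sizeBytes <= budget_) {
            spent += queue_.front().sizeBytes;
            uploadTile(queue_.front());
            queue_.pop();
        }
    }

private:
    std::uint32_t budget_;
    std::queue<PendingUpload> queue_;
};
```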

Granite Is Perfect For Photogrammetry

If you think Granite would help improve performance in photogrammetry environments, you’re not alone. In fact, Graphine is way ahead of you:

“Photogrammetry is a very good use case because Granite is very good at streaming very large resolution textures,” said Demeulemeester. “With photogrammetry, it’s a completely different approach where you actually scan an existing environment. If you’re scanning, you end up with every pixel in your environment being unique, so, you end up with a lot of texture content, and that is where Granite really shines.”

Flexible Licensing Plans

Graphine doesn’t advertise the price of a Granite SDK license because the cost varies for every customer. Granite SDK is sold as a project license, which is to say developers negotiate the license for a specific development project and keep it for the duration of that project. Because every development team is different and projects have variable development lengths, Graphine negotiates the price with every licensee. Demeulemeester said the company tries to cater to low-volume projects, too, though those projects usually opt for the less expensive plug-in licenses.

Graphine offers Unity and Unreal Engine Granite plug-ins, which it sells on a per-seat basis. Developers can opt for annual subscriptions, which give them access to the Granite plug-in and all updates for one year. There’s also a perpetual license option, which includes lifetime access to the Granite plug-in and two years of updates.

"It’s a per case basis. We try to work something out for smaller projects," said Demeulemeester. "A lot of VR projects are smaller than larger game projects, so we want to be a good match with these VR projects, and we work something out. But in most cases, they are using Unity or Unreal, and for those people, we have a plug-in based on the SDK."

Graphine offers Indie and Pro versions of the Unity and Unreal Engine plug-ins. The Indie Unity license is $65 and lasts forever. The Pro version costs $199 per year, or you can throw down $349 for a lifetime license. The Unreal Engine plug-ins are much more expensive. The lifetime Indie version sells for $249, and the lifetime Pro version sells for $890. If you’d prefer an annual subscription, it will set you back $490 per year.

You can purchase a seat license for the Granite plug-ins directly from the Graphine website. You must fill out a request form if you're interested in an SDK license.

 Kevin Carbotte is a contributing writer for Tom's Hardware who primarily covers VR and AR hardware. He has been writing for us for more than four years. 

  • computerguy72
    ARK Survival Evolved would seem to be the game that needs this middleware the most. Streaming all those repeating textures that make up player-made buildings is the slowest part of the game.
    Reply
  • bit_user
    I'm not sold on the idea, here. In most cases, I think more sophisticated procedural textures are a better solution. In the sample image, above, what's really needed is a vector-based texture format (or something better at compressing line art).

    Part of the problem with their solution is that even though they solve the problem of video memory, the assets still have to come from somewhere. That means more system RAM, more HDD/SSD space, and more network utilization (either in the form of bigger downloads or if they stream over the net, as mentioned). There's no free lunch, here.

    Of course, I can believe there are exceptions, when nothing will work as well as a high-res PBR texture. If any game devs have any examples to share, please do.
    Reply
  • michaelzehr
    Good point, Bit_user. Though encapsulating texture fetching into a middleware could lead to smart textures that might be vector graphics or a PBR, with the data presented to the next layer in the best/fastest possible way, depending on circumstances.

    It sounds related to smart, predictive cache algorithms. It's not really a surprise that some of these problem solutions have gone from "very complex caching logic to keep the working unit fed" to "put everything in memory" and back to the former (with side trips into "wait for Moore's law to catch up"). In theory a smart middleware like this would behave differently on an HDD than on an SSD (in practice not caring about the technology itself, but observing how long it takes to load things, and making space/time tradeoff decisions based on that).
    Reply
  • RomeoReject
    Based upon what was said about it, wouldn't offloading some of the work to the RAM and SSD be a great way to improve things compared to relying 100% on the GPU? Even a high end card these days still typically ends at around 8-12GB. In comparison, 16GB is basically the bare-minimum for RAM these days, and plenty of kits offer 32GB or more. Storing super-high resolution files and just using them as needed sounds like a really intelligent solution.
    Reply
  • bit_user
    19044010 said:
    Based upon what was said about it, wouldn't offloading some of the work to the RAM and SSD be a great way to improve things compared to relying 100% on the GPU? Even a high end card these days still typically ends at around 8-12GB. In comparison, 16GB is basically the bare-minimum for RAM these days,
    Perhaps 16 GB is bare minimum for anyone doing 4k gaming. But I'd say 8 GB is still more common.

    The thing is that system RAM is used for a lot besides textures. For the game, it needs to hold 3D models, AI, sounds, and datastructures needed for physics. Those are probably the main things, anyway.

    But, you don't want a game consuming all your physical memory. The OS needs some, plus a few background apps will typically be running. And any unused RAM gets turned into disk cache. So, there's typically a performance rationale for leaving a bit of headroom.

    Anyway, he wasn't talking about storing it exclusively in RAM. The article mentions streaming textures from HDD/SSD. So, the RAM is used like a cache/staging area. That said, it's still going to use up RAM and memory bandwidth vs. if you somehow didn't need to do any texture streaming.
    Reply
  • bit_user
    In case anyone is unfamiliar with procedural texturing, which I mentioned as an alternative to large bitmaps:

    https://en.wikipedia.org/wiki/Procedural_texture

    The need to execute little programs for each pixel is basically how GPUs evolved into the compute powerhouses they are today.
    Reply
  • sixto1972
    You must also realize it doesn't stream in all the textures at high resolution. Only the areas closest to the field of view are streamed in full resolution. Areas further from view use mip-mapped textures of lower resolution. As you move closer to an area, higher resolution textures are streamed in as needed, and textures that cannot be seen or are out of view are culled from video RAM using a predictive algorithm. The entire game scene is turned into a tileset that is optimized, indexed, and stored in an optimized arrangement on disk. It is very similar to the MegaTexture engine in id Software's RAGE.
    Reply
  • bit_user
    19044486 said:
    You must also realize it doesn't stream in all the textures at high resolution.
    I thought that was pretty clear, from the article (besides being obvious to anyone with a graphics background).

    Of course, the downside of MIP mapping is that it comes at a price. If you have the whole texture loaded in memory, the overhead is about a 33% larger footprint. However, their texture streamer probably uses the same tile sizes for each level of detail. So, it might do a bit worse than that (assuming tri-linear interpolation or better). But, if you're using texture maps, that's a price worth paying, IMO.
    Reply
  • d_kuhn
    Procedural textures would be a good solution for some parts of the application space... just like vector-based approaches. The reality (of reality) is that it's not going to be solved by any one approach. These tools go into a bag of tricks that the developer can apply intelligently to solve their particular problems based on what they're trying to do. If the systems are well integrated then maybe the application deploys approaches based on the storage and compute assets available.

    Regarding the article, you'd think we'd learn in the computer world to "never say never". Applications with 8TB of texture data are only a couple orders of magnitude away... we'll DEFINITELY see them unless something drastic happens to alter the graphics engine landscape. Might be a few years but graphics have already progressed farther than we have left to go.
    Reply
  • bit_user
    19045789 said:
    Regarding the article, you'd think we'd learn in the computer world to "never say never". Applications with 8TB of texture data are only a couple orders of magnitude away... we'll DEFINITELY see them unless something drastic happens to alter the graphics engine landscape. Might be a few years but graphics have already progressed farther than we have left to go.
    Not sure about that. With Moore's law slowing, leading to a tapering of memory & flash capacities, and increasing focus on download vs. sales of physical media, I don't see how 8 TB is going to be practical to deliver or store. It would take me 7.4 days to download that @ 100 Mbps (or 17 hrs @ 1 Gbps - and I don't see any potential for mainstream internet access beyond that, in the next decade). Plus, ever faster GPUs are going to be more capable of generating procedural textures than ever before.

    For certain professional applications, GPUs are already accessing TB of data via AMD's SSG technology. But those are for specialized applications involving GIS and volumetric datasets.
    Reply