
Clever VR Tricks: A 60fps VR Experience On An Old Phone With Tessellation And Vertex Displacement

Running VR content on a phone is a technical challenge. Desktop VR systems such as the Oculus Rift and HTC Vive recommend an Intel Core i5 CPU and a GeForce GTX 970 as minimum specs. Phones lack those high-end desktop components; they have to run VR content on a mobile SoC while simultaneously handling all of the motion tracking.

Framing the Issue

Although Google’s new “Daydream VR” specification hasn’t been fully detailed, it will almost certainly apply only to the newest high-end phones. But Google Cardboard, the ubiquitous folded-cardboard viewer, was supposed to be the one-size-fits-all VR solution for everyone. How, then, do you get an immersive VR experience to run on an iPhone 5, or even a Galaxy S3?

Let’s lay out the challenges. For VR, 60 fps is generally considered the bare minimum for an immersive experience (90 fps is the more common target, and Sony’s PSVR even pushes for 120 fps). Below that, we start to notice lag in motion controls and head tracking, and for some people nausea becomes a real problem. Even on a low-end phone, that means pushing out a lot of pixels per second.
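To put a rough number on that, here’s a quick back-of-the-envelope calculation; the 1280x720 resolution is the Galaxy S3’s panel, used here purely as an illustrative low-end target.

```python
# Back-of-the-envelope pixel throughput for a 1280x720 panel at 60 fps.
width, height, fps = 1280, 720, 60

pixels_per_frame = width * height            # 921,600 pixels
pixels_per_second = pixels_per_frame * fps   # roughly 55 million pixels/s

print(f"{pixels_per_frame:,} pixels per frame")
print(f"{pixels_per_second:,} pixels per second")
```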

Representation of corrected images for VR. Via Brian Kehrer

Through the Looking Glass

Then there’s the fundamental nature of how we view VR. VR systems render a separate image for each eye, but these images are each roughly square, and it’s the lenses and our perception that create the illusion of a wide scene with depth. The lenses are responsible for much of the perceived image quality, but no matter how good they are, they invariably introduce distortion, warping and stretching the image.

The solution has generally been to render the image, apply an inverse distortion that cancels out the lens distortion, and then feed the result to the screen buffer for output on the phone’s display. This pre-distortion means the final image looks correct when viewed through the lens.

In short, inverse distortion plus lens distortion equals an undistorted image.

In photography terms, these are called “barrel” and “pincushion” distortion. The lenses of a head-mounted display (HMD) generally cause pincushion distortion, which makes the image look as if it’s being pulled in toward a central point, and the correction takes the form of barrel distortion, which looks as if the image were stretched over part of a sphere.
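For intuition, here is a minimal sketch of the simple polynomial radial model these distortions are usually described with. The coefficient is made up for illustration, not Cardboard’s real lens profile: a negative value pulls points inward more the farther they are from the lens center (barrel), and a positive one pushes them outward (pincushion).

```python
def radial_distort(x, y, k1):
    """Simple polynomial radial distortion on coordinates normalized
    around the lens center. k1 < 0 pulls points inward with radius
    (barrel); k1 > 0 pushes them outward (pincushion). The coefficient
    is illustrative, not a real lens profile."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

point = (0.8, 0.0)                       # a point near the edge of the image
print(radial_distort(*point, k1=-0.22))  # barrel: moves toward the center
print(radial_distort(*point, k1=0.22))   # pincushion: moves outward
```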

However, distorting and interpolating the pixels of each image to cancel out the lenses adds considerable processing time to each frame and significantly increases the work the phone has to do. It also lowers the effective resolution: the area near the center of the image gets stretched outward and is essentially smeared across too many pixels, similar to zooming into a low-res photo. Conversely, extra detail gets compressed into the edges of the frame, wasting rendering time packing in detail where the user is least likely to be looking.
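As a rough illustration of what that per-pixel pass involves, here is a plain-Python sketch of a post-process barrel correction. On a real device this would run as a fragment shader on the GPU; the loop structure and the coefficient are simplified stand-ins, not the actual Cardboard implementation.

```python
def barrel_correct(rendered, width, height, k1=0.22):
    """For every output pixel, sample the rendered eye buffer at a
    radially scaled coordinate, so the result comes out barrel-
    distorted and cancels the lens's pincushion. k1 is illustrative."""
    corrected = [[(0, 0, 0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Normalize the pixel position to [-1, 1] around the center.
            nx = x / (width - 1) * 2.0 - 1.0
            ny = y / (height - 1) * 2.0 - 1.0
            # Sampling farther out as the radius grows squeezes the
            # source image's edges inward in the output.
            scale = 1.0 + k1 * (nx * nx + ny * ny)
            sx = int(round((nx * scale + 1.0) / 2.0 * (width - 1)))
            sy = int(round((ny * scale + 1.0) / 2.0 * (height - 1)))
            if 0 <= sx < width and 0 <= sy < height:
                corrected[y][x] = rendered[sy][sx]
    return corrected

# Hypothetical usage, one eye's half of a 1280x720 panel:
# frame = barrel_correct(eye_buffer, 640, 720)
```

Every pixel of every frame goes through a loop like this, which is exactly the cost the technique below tries to avoid.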

A look at the underlying vertex geometry in Arctic Journey. Via Brian Kehrer

A Million Little Pieces: Vertex Displacement And Tessellation

One studio has come up with an innovative solution. Brian Kehrer is a game designer who worked with ustwo (the developers behind Monument Valley) to create Arctic Journey, a VR tour of a polar wilderness that has become one of the default experiences in Google Cardboard. To make a truly universal VR experience, Kehrer’s team set a goal of running the software at a stable 60 fps, with dynamic lighting, on a Galaxy S3, a phone from 2012.

The team’s preliminary testing showed that there was no way a phone that old could render the image, apply the distortion, and output it to the screen fast enough to maintain 60 fps. So Kehrer and his team tried a different approach: Instead of rendering the game world and then distorting it, what if you warped the world itself beforehand? The idea, called Vertex Displacement, is to warp the geometry of the game to approximate the inverse lens distortion. The scene then renders with the lens correction already baked in and can be sent straight to the screen, cutting out the middle step.
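To make the idea concrete, here is a minimal sketch of what the per-vertex version of that warp could look like, again in plain Python standing in for what would really live in a vertex shader. The radial model and coefficient are illustrative, not Kehrer’s actual implementation.

```python
def displace_vertex(clip_x, clip_y, clip_z, clip_w, k1=-0.18):
    """Warp a single vertex's projected position instead of warping
    finished pixels. The radial model and coefficient are illustrative."""
    # Perspective divide to get normalized device coordinates.
    ndc_x, ndc_y = clip_x / clip_w, clip_y / clip_w
    # Pull vertices toward the lens center, increasingly with radius,
    # so the rendered frame comes out barrel-distorted and the lens's
    # pincushion distortion cancels it.
    scale = 1.0 + k1 * (ndc_x * ndc_x + ndc_y * ndc_y)
    # Multiply back by w so the hardware's own perspective divide
    # lands on the displaced position.
    return ndc_x * scale * clip_w, ndc_y * scale * clip_w, clip_z, clip_w
```

Because this runs once per vertex rather than once per pixel, the cost scales with scene complexity instead of screen resolution, which is the whole point of the trick.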

If you remember that all computer graphics are built from collections of polygons, this starts to make sense. Vertex Displacement doesn’t warp lines; the edge between two vertices (the corners of a polygon) always remains straight. But by shifting where those vertices sit, you can approximate the lens correction in the scene itself and render the final image in one pass.

There is one challenge here: Low-polygon objects (a simple square, for example) can’t be properly corrected by Vertex Displacement. To compensate for the lens distortion, a square’s sides actually need to bow out in slight arcs; as Kehrer pointed out, a curved lens makes curved lines look straight. Without that curvature, the lens distortion makes the square look like it’s collapsing inward. But because Vertex Displacement doesn’t warp lines, a four-vertex square keeps its straight sides no matter where its corners move. That’s especially challenging when you consider that most user-interface elements are built from simple polygons, such as squares and rectangles.

Example of using tessellation to compensate for vertex displacement. Via Brian Kehrer

The solution is something that has become commonplace for rendering complex geometry: tessellation. Tessellation is the process of using repeating, interlocking simple shapes to approximate a more complex object. In practice, this means taking fairly rough, large polygons and breaking them down into dozens or hundreds of small triangles, which lets you approximate much more complex curves and surfaces out of simple shapes. The number of triangles can easily be scaled up or down based on how much processing power is available.

In Vertex Displacement, this means you can break a simple polygon, such as a square, down into one hundred smaller squares, each made up of two interlocking triangles. Rendering the extra vertices isn’t significantly taxing, even for a mobile GPU.

Kehrer said most mobile devices can reliably render 100,000 to 400,000 vertices, so turning a few four-vertex squares into 121-vertex tessellated objects doesn’t have a huge impact (121 vertices is what it takes to split a square into our theoretical 10x10 grid of paired triangles). With that tessellation in place, Vertex Displacement can shift the corners of all those small triangles and bend the square into a rough approximation of the curved shape needed for lens correction.
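Here is a short sketch of that combination under the same illustrative radial model: subdivide a quad into a 10x10 grid of cells (121 vertices, 200 triangles) and displace every grid vertex. The helper names are my own, not from Kehrer’s code.

```python
def tessellate_quad(n=10):
    """Vertices of a quad spanning [-1, 1] on each axis, split into an
    n-by-n grid of cells: (n + 1) * (n + 1) vertices, 2 * n * n triangles."""
    return [(-1.0 + 2.0 * i / n, -1.0 + 2.0 * j / n)
            for j in range(n + 1) for i in range(n + 1)]

def displace(verts, k1=-0.18):
    """Apply the illustrative radial barrel pre-warp to every vertex."""
    return [(x * (1.0 + k1 * (x * x + y * y)),
             y * (1.0 + k1 * (x * x + y * y))) for x, y in verts]

grid = tessellate_quad(10)
warped = displace(grid)
print(len(grid))                   # 121 vertices
print(grid[0], "->", warped[0])    # a corner vertex moves inward
print(grid[60], "->", warped[60])  # the center vertex stays put
```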

There is another advantage: Because the image isn’t rendered and then stretched, but rendered at its final proportions, detail isn’t lost in the stretched-out middle of the frame, which improves the perceived resolution.

This shows rendering after vertex displacement vs post-processing distortion correction. Thicker lines indicate lower effective resolution. Via Brian Kehrer

The technique makes Arctic Journey (and Cardboard Design Lab, another VR app that uses Vertex Displacement) something of a technological marvel, but it may not see wide adoption. Google’s new VR platform, Daydream, doesn’t appear to support Vertex Displacement, and Oculus and Vive rely on a more processor-intensive approach called “Timewarp.”

It also highlights a serious question that Kehrer raised in a blog post: If his team can run a VR game on a Galaxy S3, why do the Oculus Rift and Vive require such powerful hardware? We don’t have the answer yet, but it’s one of the things Tom’s Hardware will be investigating as we dive deeper into VR.


Chris Schodt is an Associate Contributing Writer for Tom's Hardware US. He writes news and features, specializing in virtual reality.