Converting a WebGL application to WebVR

A couple of months ago I ported the Pathfinder demo app to WebVR. It was an interesting experience, and I feel like I learned a bunch of things about porting WebGL applications to WebVR that would be generally useful to folks, especially folks coming to WebVR from a non-web programming background.

Pathfinder is a GPU-based font rasterizer in Rust, and it comes with a demo app that runs the Rust code on the server side but does all the GPU work in WebGL in a TypeScript website.

We had a 3D demo showing a representation of the Mozilla Monument as a way to demo text rasterization in 3D. What I was hoping to do was convert this to a WebVR application that would let you view the monument by moving your head instead of using arrow keys.

I started working on this problem with an understanding of OpenGL and WebGL, but almost no background in VR or WebVR. I’d written an Android Cardboard app three years previously, and that was about it.

I’m hoping this article may be useful for others from similar backgrounds.

The converted triangle demo running in WebVR

What is WebVR?

WebVR is a set of APIs for writing VR applications on the web. It lets us request jumping into VR mode, at which point we can render things directly to the eyes of a VR display, rather than rendering to a flat surface inside the display as the browser usually does. When the user is on a device like the Cardboard or Daydream where a regular phone substitutes for the VR display, this is the point where the user puts their phone inside the headset.

WebVR APIs help with transitioning to/from VR mode, obtaining pose information, rendering in VR, and dealing with device input. Some of these things are being improved in the work in progress on the new WebXR Device API specification.

Do I need any devices to work with WebVR?

Ideally, a good VR device will make it easier to test your work in progress, but depending on how much resolution you need, a Daydream or Cardboard (where you use your phone in a headset casing) is enough. You can even test stuff without the headset casing, though things will look weird and distorted.

For local testing, Chrome has a WebVR API emulation extension that’s pretty useful. You can use the devtools panel in it to tweak the pose, and you get a non-distorted display of what the eyes see.

Firefox supports WebVR, and Chrome Canary supports it if you enable some flags. There’s also a polyfill which should work in more browsers.

How does it work under the hood?

I think not understanding this part was the source of a lot of confusion and bugs for me when I was getting started. The core of the API is basically “render something to a canvas and then magic happens”, and I had trouble figuring out how that magic worked.

Essentially, there’s a bunch of work we’re supposed to do, and then there’s extra work the browser (or polyfill) does.

Once we enter VR mode, there’s a callback triggered whenever the device requests a frame. Within this callback we have access to pose information.

Using this pose information, we can figure out what each eye should see, and provide this to the WebVR API in some form.

What the WebVR API expects is that we render each eye’s view to a canvas, split horizontally (this canvas will have been passed to the API when we initialize things).

That’s it from our side; the browser (or polyfill) does the rest. It uses our rendered canvas as a texture, and for each eye, it distorts the rendered half to appropriately work with the lenses used in your device. For example, the distortion for Daydream and Cardboard follows this code in the polyfill.
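As an illustration only, this kind of lens-compensating (“barrel”) distortion can be modeled as a radial polynomial. The sketch below is a toy version, not the polyfill’s actual code, and the k1/k2 coefficients are made up for illustration; real viewer profiles ship measured ones:

```javascript
// Toy sketch of barrel distortion: a point (u, v) relative to the lens
// center is pushed outward by a radial polynomial,
//   r' = r * (1 + k1 * r^2 + k2 * r^4).
// k1/k2 here are invented for illustration; real devices ship
// measured coefficients in their viewer profiles.
function distort(u, v, k1 = 0.22, k2 = 0.24) {
    const r2 = u * u + v * v;
    const scale = 1 + k1 * r2 + k2 * r2 * r2;
    return [u * scale, v * scale];
}

distort(0, 0);   // the lens center is unchanged: [0, 0]
distort(0.5, 0); // points away from the center move outward, ≈ [0.535, 0]
```

The browser applies the inverse of whatever the lens does optically, so the image looks correct once viewed through the headset.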

It’s important to note that, as application developers, we don’t have to worry about this: the WebVR API handles it for us! We just need to provide undistorted views from each eye to the canvas (the left view on the left half and the right view on the right half), and the browser handles the rest!
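To make the split concrete, here’s a small sketch (eyeViewport is a made-up helper, not part of the WebVR API) that computes the pixel rectangle each eye’s view occupies on such a side-by-side canvas:

```javascript
// Made-up helper: compute the half of a side-by-side canvas that one
// eye's undistorted view should be rendered into. Returns a rectangle
// in the {x, y, w, h} form a gl.viewport(x, y, w, h) call would take.
function eyeViewport(isLeft, width, height) {
    return isLeft
        ? { x: 0, y: 0, w: width / 2, h: height }          // left half
        : { x: width / 2, y: 0, w: width / 2, h: height }; // right half
}
```

This is the same split we perform with gl.viewport() when porting the render loop below.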

Porting WebGL applications

I’m going to try to keep this self-contained, however I’ll mention off the bat that some really good resources for learning this stuff can be found at webvr.info and MDN. webvr.info has a bunch of nice samples if, like me, you learn better by looking at code and playing around with it.

Getting into VR mode

First up, we need to get access to the VR display and enter VR mode.

 let vrDisplay;
navigator.getVRDisplays().then(displays => {
    if (displays.length === 0) {
        return;
    }
    vrDisplay = displays[displays.length - 1];

    // optional, but recommended
    vrDisplay.depthNear = /* near clip plane distance */;
    vrDisplay.depthFar = /* far clip plane distance */;
});
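Grabbing the last display in the list is a bit arbitrary; if you want to be more careful you could filter on VRDisplayCapabilities.canPresent first. A sketch under that assumption (pickDisplay is my own helper, not part of the API):

```javascript
// Hypothetical helper: prefer displays that can actually enter
// presentation mode (capabilities.canPresent), falling back to the
// last display in the list, or null if there are none.
function pickDisplay(displays) {
    const presentable = displays.filter(
        d => d.capabilities && d.capabilities.canPresent);
    const pool = presentable.length > 0 ? presentable : displays;
    return pool.length > 0 ? pool[pool.length - 1] : null;
}
```

You would call this inside the getVRDisplays() promise handler in place of the direct indexing above.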


We need to add an event handler for when we enter/exit VR:

 let canvas = document.getElementById(/* canvas id */);
let inVR = false;

window.addEventListener('vrdisplaypresentchange', () => {
  // no VR display, exit
  if (vrDisplay == null) {
      return;
  }

  // are we entering or exiting VR?
  if (vrDisplay.isPresenting) {
    // We should make our canvas the size expected
    // by WebVR
    const eye = vrDisplay.getEyeParameters("left");
    // multiply by two since we're rendering both eyes side
    // by side
    canvas.width = eye.renderWidth * 2;
    canvas.height = eye.renderHeight;

    const vrCallback = () => {
        if (vrDisplay == null || !inVR) {
            return;
        }
        // reregister callback if we're still in VR
        vrDisplay.requestAnimationFrame(vrCallback);

        // render the scene
        render();
    };
    // register callback
    vrDisplay.requestAnimationFrame(vrCallback);
  } else {
    inVR = false;
    // resize canvas to regular non-VR size if necessary
  }
});

And, in order to enter VR itself:

 if (vrDisplay != null) {
    inVR = true;
    // hand the canvas to the WebVR API
    vrDisplay.requestPresent([{ source: canvas }]);

    // requestPresent() will request permission to enter VR mode,
    // and once the user has done this our `vrdisplaypresentchange`
    // callback will be triggered
}


Rendering in VR

Well, we’ve entered VR, now what? In the above code snippets we had a render() call that was doing all the hard work.

Since we’re starting with an existing WebGL application, we’ll have some function like this already.

 let width = canvas.width;
let height = canvas.height;

function render() {
    let gl = canvas.getContext("webgl");
    // set clearColor, then clear
    gl.clearColor(/* .. */);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    gl.bindBuffer(/* .. */);
    // ...
    let uProjection = gl.getUniformLocation(program, "uProjection");
    let uModelView = gl.getUniformLocation(program, "uModelview");
    gl.uniformMatrix4fv(uProjection, false, /* .. */);
    gl.uniformMatrix4fv(uModelView, false, /* .. */);
    // set more parameters
    // run gl.drawElements()
}


So first we’re going to have to split this up a bit further, to handle rendering the two eyes:

// entry point for WebVR, called by vrCallback()
function renderVR() {
    let gl = canvas.getContext("webgl");
    // set clearColor and call gl.clear()
    clear(gl);

    renderEye(true);
    renderEye(false);
    // Send the rendered frame over to the VR display
    vrDisplay.submitFrame();
}

// entry point for non-WebVR rendering
// called by whatever mechanism (likely keyboard/mouse events)
// you used before to trigger redraws
function render() {
    let gl = canvas.getContext("webgl");
    // set clearColor and call gl.clear()
    clear(gl);
    renderSceneOnce();
}

function renderEye(isLeft) {
    // choose which half of the canvas to draw on
    if (isLeft) {
        gl.viewport(0, 0, width / 2, height);
    } else {
        gl.viewport(width / 2, 0, width / 2, height);
    }
    renderSceneOnce();
}

function renderSceneOnce() {
    // the actual GL program and draw calls go here
}

This looks like a good step forward, but notice that we’re rendering the same thing to both eyes, and not handling movement of the head at all.

To implement this we need to use the projection and view matrices provided by WebVR via the VRFrameData object.

The VRFrameData object contains a pose member with all of the head pose information (its position, orientation, and even velocity and acceleration for devices that support these). However, for the purpose of correctly positioning the camera while rendering, VRFrameData provides projection and view matrices which we can directly use.

We can do this like so:

 let frameData = new VRFrameData();
vrDisplay.getFrameData(frameData);

// use frameData.leftViewMatrix / frameData.leftProjectionMatrix
// for the left eye, and
// frameData.rightViewMatrix / frameData.rightProjectionMatrix for the right

In graphics, we often find ourselves dealing with the model, view, and projection matrices. The model matrix defines the position of the object we wish to render in the coordinates of our world, the view matrix defines the transformation between the camera space and the world space, and the projection matrix handles the transformation between clip space and camera space (also potentially dealing with perspective). Sometimes we’ll deal with the combination of some of these, like the “model-view” matrix.
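To make these roles concrete, here’s a minimal worked example of building a model-view matrix, assuming flat, column-major 16-element arrays (the convention WebGL and gl-matrix use); mat4mul is a stand-in for glmatrix.mat4.mul:

```javascript
// Multiply two column-major 4x4 matrices stored as flat 16-element
// arrays: out = a * b, with element (row, col) at index col * 4 + row.
function mat4mul(a, b) {
    const out = new Array(16).fill(0);
    for (let col = 0; col < 4; col++) {
        for (let row = 0; row < 4; row++) {
            for (let k = 0; k < 4; k++) {
                out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
            }
        }
    }
    return out;
}

// A model matrix placing the object at (1, 2, 3) in world space
// (translation lives in the last column, indices 12..14).
const model = [1,0,0,0, 0,1,0,0, 0,0,1,0, 1,2,3,1];
// A view matrix for a camera sitting at (0, 0, 5): it shifts the
// whole world by (0, 0, -5).
const view = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,-5,1];

const modelview = mat4mul(view, model);
// modelview now places the object at (1, 2, -2) in camera space
```

The projection matrix would then be applied on top of this in the vertex shader, exactly as in the uniformMatrix4fv calls in the render code.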

You can see these matrices in use in the cubesea code in the stereo rendering example on webvr.info.

There’s a good chance our app has some concept of a model/view/projection matrix already. If not, we can pre-multiply our positions with the view matrix in our vertex shaders.

So now our code will look something like this:

 // entry point for non-WebVR rendering
// called by whatever mechanism (likely keyboard/mouse events)
// we used before to trigger redraws
function render() {
    let gl = canvas.getContext("webgl");
    // set clearColor and call gl.clear()
    clear(gl);
    let projection = /*
        calculate projection using something
        like glmatrix.mat4.perspective()
        (we should be doing this already in the normal WebGL app)
    */;
    let view = /*
        use our view matrix if we have one,
        or an identity matrix
    */;
    renderSceneOnce(projection, view);
}

function renderEye(isLeft) {
    // choose which half of the canvas to draw on
    let projection, view;
    let frameData = new VRFrameData();
    vrDisplay.getFrameData(frameData);
    if (isLeft) {
        gl.viewport(0, 0, width / 2, height);
        projection = frameData.leftProjectionMatrix;
        view = frameData.leftViewMatrix;
    } else {
        gl.viewport(width / 2, 0, width / 2, height);
        projection = frameData.rightProjectionMatrix;
        view = frameData.rightViewMatrix;
    }
    renderSceneOnce(projection, view);
}

function renderSceneOnce(projection, view) {
    let model = /* obtain model matrix if we have one */;
    let modelview = glmatrix.mat4.create();
    glmatrix.mat4.mul(modelview, view, model);

    gl.bindBuffer(/* .. */);
    // ...

    let uProjection = gl.getUniformLocation(program, "uProjection");
    let uModelView = gl.getUniformLocation(program, "uModelview");
    gl.uniformMatrix4fv(uProjection, false, projection);
    gl.uniformMatrix4fv(uModelView, false, modelview);
    // set more parameters
    // run gl.drawElements()
}


That’s it! Moving your head around should now trigger movement in the scene to match it! You can see the code at work in this demo app, which takes a rotating triangle WebGL application and turns it into a WebVR-capable triangle-viewing application using the techniques from this blog post.

If we had further input we might need to use the Gamepad API to design a VR interface that works with typical VR controllers, but that’s out of scope for this post.

Manish works on the experimental Servo browser at Mozilla, and is quite active in the Rust community.

More articles by Manish Goregaokar…

