How the Oculus Rift Works to Create Virtual Reality
This article is an excerpt from Oculus Rift in Action by Bradley Austin Davis, Karen Bryla, and Alex Benton.
Virtual reality is about constructing an experience that simulates a user’s physical presence in another environment. The Rift accomplishes this by acting both as a specialized input device and a specialized output device.
As an input device, the Rift uses a combination of several sensors to let an application query the current orientation and position of the user’s head. This is commonly referred to as the head pose. It allows an application to change its output in response to where the user is looking and how their head is positioned.
Head Pose
In VR applications, a head pose is a combination of the orientation and position of the head relative to some fixed coordinate system.
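To make that definition concrete, a head pose can be modeled as an orientation (a quaternion) paired with a position (a 3D vector). The C++ sketch below is illustrative only; the Oculus SDK expresses the same idea with its own types, and the names here are hypothetical.

```cpp
// A minimal sketch of a head pose, assuming a quaternion for orientation
// and a 3D vector for position. These types are illustrative, not SDK types.
struct Quaternion { float x, y, z, w; };   // which way the head is facing
struct Vector3    { float x, y, z; };      // where the head is, in meters

struct HeadPose {
    Quaternion orientation;   // rotation relative to a fixed coordinate system
    Vector3    position;      // translation relative to the same origin
};
```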
As an output device, the Rift is a display that creates a deep sense of immersion and presence by attempting to reproduce the sensation of looking at an environment as if you were actually there, far more closely than viewing it on a monitor can. It does this by:
- providing a much wider field of view than conventional displays
- providing a different image to each eye
- blocking out the real environment around you, which would otherwise contradict the rendered environment
On the Rift display, we can show frames that have been generated to conform to this wide field of view and to offer a distinct image to each eye.
Frame
Because developing for the Rift involves rendering multiple images, it’s important to have terminology that makes clear which image we might be talking about at a given moment. When we use the term frame, we’re referring to the final image that ends up on the screen. In the case of a Rift application, each frame image is composed of two eye images, one for the left eye and one for the right. Each eye image is distorted specifically to account for the lens under which it will appear, and the two are then composited together during the final rendering step before being displayed on the screen.
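To put some numbers on that vocabulary, here is a sketch of how a single output frame might be divided into two eye viewports, one per half of the panel. The 1920×1080 resolution matches the DK2’s panel, but the exact numbers are incidental; the point is that each eye image occupies its own half of the frame.

```cpp
// Illustrative sketch: one output frame spans the whole panel,
// and each eye image occupies one half of it.
struct Viewport { int x, y, width, height; };

const int panelWidth  = 1920;   // e.g., the DK2 panel
const int panelHeight = 1080;

// The left eye image fills the left half; the right eye image, the right half.
Viewport leftEye  = { 0,              0, panelWidth / 2, panelHeight };
Viewport rightEye = { panelWidth / 2, 0, panelWidth / 2, panelHeight };
```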
These specializations do not happen automatically. You can’t simply replace your monitor with a Rift and expect to continue to use your computer in the same way. Only applications that have been specifically written to read the Rift’s input and to customize their output to conform to the Rift’s display will provide a good experience.
To understand what makes an application running on the Rift different, it’s worth looking first at how conventional, non-Rift applications work.
Conventional Applications
All applications have input and output, and most graphical applications run a loop that conceptually looks something like Figure 1.
Figure 1: The typical loop for conventional applications
The details can be abstracted in many ways, but just about any program can ultimately be viewed as an implementation of this loop: for as long as the application is running, it responds to user input, renders a frame, and outputs that frame to the display.
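In C++, that conceptual loop might look like the sketch below. The four functions are hypothetical placeholders standing in for whatever input, rendering, and buffer-swap calls your platform actually provides.

```cpp
// A minimal sketch of the conventional loop in Figure 1.
// All names are hypothetical placeholders, not real API calls.
bool applicationRunning();   // false once the user quits
void handleUserInput();      // keyboard, mouse, gamepad, etc.
void renderScene();          // draw the frame
void presentFrame();         // send the finished frame to the display

void runConventionalLoop() {
    while (applicationRunning()) {
        handleUserInput();
        renderScene();
        presentFrame();
    }
}
```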
Rift Applications
Rift-specific applications embellish this loop, as seen in Figure 2.
Figure 2: A typical loop for a Rift application
In addition to the conventional user input, we have another step that fetches the current head pose from the Rift. An application typically uses this to change how it renders the frame; specifically, if you’re rendering a 3D virtual environment, you want the view of the scene to change in response to the user’s head movements.
In addition, after the rendering step, we need to distort the rendered image to account for the effects of the Rift’s lenses.
Practically speaking, the head pose is really a specialized kind of user input, and the Rift-required distortion is part of the overall process of rendering a frame, but we’ve called them out here as separate boxes to emphasize the distinction between Rift and non-Rift applications.
However, as we said, the design of the Rift is such that it shows a different image to each eye by showing each eye only one half of the display panel on the device. As part of generating a single frame of output, we render an individual image for each eye and distort that image before moving on to the next eye. Then, after both per-eye images have been rendered and distorted, we send the resulting output frame to the device.
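Putting those pieces together, the embellished loop of Figure 2 might look like the sketch below, reusing the HeadPose struct from earlier. As before, every name is a hypothetical placeholder; a real application would call into the Oculus SDK to fetch the tracking state and to perform (or delegate) the per-eye distortion.

```cpp
// A minimal sketch of the Rift loop in Figure 2.
// All function names are hypothetical placeholders, not SDK calls.
enum Eye { LeftEye, RightEye };

bool applicationRunning();
void handleUserInput();                           // conventional input, as before
HeadPose fetchHeadPose();                         // Rift-specific input: query the sensors
void renderScene(Eye eye, const HeadPose& pose);  // draw this eye's view of the scene
void distortForLens(Eye eye);                     // pre-distort for the lens over this eye
void presentFrame();                              // composite both eye images and display

void runRiftLoop() {
    const Eye eyes[] = { LeftEye, RightEye };
    while (applicationRunning()) {
        handleUserInput();
        HeadPose pose = fetchHeadPose();  // the view must follow the user's head

        // Render and distort one image per eye before assembling the frame.
        for (Eye eye : eyes) {
            renderScene(eye, pose);
            distortForLens(eye);
        }

        presentFrame();  // both per-eye images end up in a single output frame
    }
}
```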