VR + Eye Tracking SOLUS Demo — Part One

This week I’d like to give you all a slightly more technical look at our eye tracking implementation in the SOLUS VR demo.

It was a lot of fun to work with, and is an interesting case study to look at when designing eye tracking interaction in VR games.

In SOLUS we implemented five different features, and I will go through each of them in a series of three posts. This is part one.

Fredrik Lindh,
Game Developer, Tobii Eye Tracking

Foveated Depth of Field

The first thing you might notice when trying out the demo is the foveated depth of field effect. Depth of field is a term most commonly found in photography, where it denotes the band of distances at which objects in a scene, as perceived by a physical camera, end up sharp in the final image. Objects in front of and beyond that band conversely experience some amount of blurring, the strength of which depends on factors such as the camera's aperture and focal length.
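To make that band concrete, here is a small sketch using the standard thin-lens photographic approximations for the near and far limits of acceptable sharpness. The numbers are illustrative, not taken from the demo:

```python
# Approximate depth-of-field limits from the standard thin-lens model.
# All distances in metres; purely illustrative values.

def hyperfocal(focal_length, f_number, coc):
    """Hyperfocal distance: focusing here makes everything from roughly
    half this distance out to infinity acceptably sharp."""
    return focal_length ** 2 / (f_number * coc) + focal_length

def dof_limits(subject_dist, focal_length, f_number, coc):
    """Near and far limits of acceptable sharpness around the subject."""
    h = hyperfocal(focal_length, f_number, coc)
    near = subject_dist * (h - focal_length) / (h + subject_dist - 2 * focal_length)
    if subject_dist >= h:
        far = float("inf")  # beyond the hyperfocal distance, everything is sharp
    else:
        far = subject_dist * (h - focal_length) / (h - subject_dist)
    return near, far

# A 50 mm lens at f/2.8 with a 0.03 mm circle of confusion, subject at 5 m:
near, far = dof_limits(5.0, 0.05, 2.8, 0.00003)
# sharp band runs from roughly 4.3 m to 6.0 m
```

Widening the aperture (lowering the f-number) shrinks that band, which is exactly the shallow-focus look the effect imitates.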

Another interesting thing about depth of field is that virtual cameras, such as those found in games, do not have this focus problem: they don't actually focus light rays onto a sensor, but instead use linear algebra to project the game scene onto a plane in front of the virtual camera. That does not mean achieving depth of field is impossible, though. There are many techniques that let a game developer add blur on top of the rendered image, each trading off performance against visual fidelity. All of them, however, require the developer to specify a focal distance at which the depth of field should be located. This is all well and good if you want to mimic how a movie or a photograph leverages the technique, that is, by drawing attention to a specific character or object in your scene… but if you instead want to replicate how a human experiences the phenomenon, you're out of luck, since the developer cannot know which object in the current scene the user is actually looking at, and consequently what distance to choose for the depth of field. Unless they have an eye tracker, of course!
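The obvious way to wire the eye tracker into the effect is to sample the scene's depth under the gaze point and ease the focal plane toward it. A minimal sketch with invented names (no real engine or Tobii API is assumed here):

```python
import math

def focal_distance_from_gaze(depth_buffer, gaze_x, gaze_y):
    """Naive approach: sample the depth buffer at the gaze point.

    depth_buffer: 2D list of view-space distances, indexed [row][col].
    gaze_x, gaze_y: normalized screen coordinates in 0..1.
    """
    rows = len(depth_buffer)
    cols = len(depth_buffer[0])
    px = min(int(gaze_x * cols), cols - 1)
    py = min(int(gaze_y * rows), rows - 1)
    return depth_buffer[py][px]

def smoothed_focal_distance(current, target, dt, speed=8.0):
    """Exponentially ease the focal plane toward the gazed depth each
    frame, so the focus pull feels like an eye accommodating rather
    than a hard snap."""
    alpha = 1.0 - math.exp(-speed * dt)
    return current + (target - current) * alpha
```

This per-pixel sampling is only the naive baseline; as the next section explains, it runs into trouble wherever nearby pixels have very different depths.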

It sounds like this should be an easy task, but in reality it is quite difficult. For scenes with very smooth distance gradients it works really well, but as soon as you get many objects that are close in screen space yet far apart in z (distance), you start running into problems, since the eye tracker has small errors in both precision and accuracy. In these cases it becomes necessary to use some heuristic to determine which object the user is in fact looking at. At Tobii we call these heuristics GTOM (Gaze-To-Object-Mapping) techniques. Using GTOM, we can guess which object in the scene the user is focusing on, instead of sampling the pixel depth, which leads to better and more convincing results.
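To give a feel for the idea, here is a deliberately simplified GTOM-style heuristic: score each candidate object by how close its centre lies to the gaze ray and focus on the best match. All names are invented for illustration; Tobii's actual GTOM algorithms also weigh things like object size, salience and gaze history:

```python
import math

def angle_between(v1, v2):
    """Angle in radians between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def pick_gazed_object(gaze_dir, objects, max_angle=math.radians(3.0)):
    """Toy gaze-to-object mapping: return the candidate whose centre
    lies closest to the gaze ray, or None if nothing is plausible.

    objects: dicts with 'dir' (direction from the eye to the object's
    centre) and 'dist' (view-space distance to use for the focal plane).
    """
    best, best_score = None, 0.0
    for obj in objects:
        ang = angle_between(gaze_dir, obj["dir"])
        if ang > max_angle:
            continue  # outside the tracker's plausible error cone
        score = 1.0 - ang / max_angle  # 1 at dead centre, 0 at the cone's edge
        if score > best_score:
            best, best_score = obj, score
    return best  # use best["dist"] as focal distance; keep the previous one if None
```

Because the winner's whole-object distance drives the focus, a slightly jittery gaze point near an object's silhouette no longer flips the focal plane between foreground and background every frame.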

And of course; the better the GTOM algorithm, the better it feels!

This video explains how foveated depth of field makes for a more realistic experience

Read Part Two

Read Part Three

Official Eye Tracking Blog

Eye Tracking and User Sensing Technology for Improved Human-Device Interactions

