Different Kinds of Eye Tracking Devices

13 min read
By Mark A. Mento
June 12, 2020

Recent advancements in eye tracking technology have expanded the field to include applications in many different areas, both as a tool for research and as a source of real-time data for interaction. Eye tracking has expanded well beyond the original research in vision and attention.

As a result, the field now includes very disparate use cases. Systems are designed to fit certain applications and are often much less suited for others. In this entry, we will discuss four of the major types of eye tracking devices, along with some basic examples of applications for each.

Differences in Technique

In our previous blog post on eye tracking, we discussed the basics of eye tracking technology and how these systems function to measure movements of the human eye. 

Eye tracking systems are used in the measurement of eye position and visual attention for research purposes, medical diagnosis, or to provide an alternative interface method for a computer or device. 

They mostly use similar techniques for pupil and corneal reflection detection, but there are some significant differences in form factor and functionality. Here are some of the ways that eye tracking hardware may differ:

  1. Human Interface: The most immediately obvious difference between eye tracking devices is in how they interface with the user and environment. Some systems require head-stabilization via a chinrest or bite-bar. Other devices are built into a headband or glasses and are worn by the participant. Probably the most common type does not touch the person at all and measures the eye from a distance. 

    Differences in human interface usually exist to accommodate #2 and #3 in this list, and we’ll discuss that in more detail below.

    Other more invasive methods of eye tracking (scleral search coil systems, for example) are outside of the scope of this entry.

  2. Tracking Area: Most eye tracking devices use a computer screen as the stimulus area and do not track eye movements elsewhere. Some systems are capable of tracking relative to more complex geometries (like a cockpit or multiple-screen area) and a few are designed for real-world tracking over almost anything the participant looks at.

    Understanding the limitations in tracking scope is one of the most important parts of purchasing an eye tracking device.

  3. Specifications: Measures such as spatial resolution, sample rate, and accuracy are important for many research applications and can have an impact in other areas too. There are some tradeoffs between performance, human interface, and tracking area. 

We’ll discuss some of these tradeoffs below, and explain these metrics in much more detail in a subsequent blog post. 

Types of Eye Tracking Devices

Most modern eye tracking systems fall into one of four categories: head-stabilized, remote, mobile (head-mounted), and embedded (integrated).

1. Head-Stabilized Eye Tracking

These eye tracking systems utilize some method of constraining the participant’s head movements, usually via bite-bar or chinrest. These are typically high-fidelity research systems that are used in neurophysiology or vision experiments where participant comfort is secondary to accuracy and precision. Sometimes head stabilization is done in conjunction with another technology that already immobilizes the head (fMRI, MEG, etc). 


The ultra-high precision EyeLink 1000 Plus system can be used at 1000 Hz binocular in remote mode, or at 2000 Hz monocular with the chinrest. 

There are typically three reasons for doing this: 

  1. Enhanced accuracy and precision: All eye tracking systems have to accommodate head movements. By stabilizing the head, these systems can remove at least some head-movement artifacts and noise from the eye tracking data. These systems sacrifice participant freedom of movement and comfort for data quality. 
  2. Controlled visual experience: Head-stabilized systems control aspects of the visual experience between participants. For example, with the head in a fixed position, a researcher can be sure that a saccade target is exactly 15 degrees from the center fixation point. If the participant is free to move, this angle may change for participants who lean closer to the screen. Eye tracking studies aimed at understanding perception and the visual system can benefit from making a uniform visual experience for all participants.
  3. Used in conjunction with secondary technology that already requires head stabilization: Eye tracking systems used in fMRI, MEG, and other research areas are head-fixed because the other research tool already requires it. In fMRI experiments, the head is stabilized to protect the quality of the scanner data, but this stabilizes the head for the eye tracking device as well. 
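The geometry behind point #2 is easy to verify. A brief sketch in Python (the target offset and viewing distances are hypothetical):

```python
import math

def eccentricity_deg(offset_cm: float, distance_cm: float) -> float:
    # Visual angle between the screen center and a target placed
    # offset_cm away, as seen from a viewing distance of distance_cm.
    return math.degrees(math.atan(offset_cm / distance_cm))

# Place the target so it sits at exactly 15 degrees for a 60 cm viewing distance:
offset = 60 * math.tan(math.radians(15))  # about 16.1 cm from center

print(round(eccentricity_deg(offset, 60), 1))  # 15.0 at the intended distance
print(round(eccentricity_deg(offset, 45), 1))  # 19.7 if the participant leans in
```

With the head fixed, the second case cannot occur, which is exactly the uniformity that head stabilization buys.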

Head-stabilized eye tracking systems typically reach a level of precision that is not possible for other types of systems. This is in part because a high-resolution camera can take a much closer image of the eye without having to adjust the view for head movements. These types of systems are also often capable of much higher sample rates, drastically increasing the temporal resolution for fast eye movement analysis. Head-stabilized systems can be monocular or binocular. 


The main limitations of head-stabilized tracking are participant comfort and the loss of natural interaction. Many experiments that use head stabilization do not require the participant to feel or act naturally, and these systems are used in controlled lab settings only. 

It is also important to note that head-stabilized eye tracking systems usually must still perform some level of corneal-reflection-based head-movement or slip compensation. This is because, even in a chinrest or bite-bar, some head movement is still possible in human experiments. Very small head movements, on the order of 0.1 degrees of rotation, will still affect the accuracy of the eye movement data.

Some of the peripheral technology mentioned here, like fMRI, imposes a different set of limitations on eye tracking, especially in terms of performance and accuracy. 


Animal-based eye tracking research systems typically utilize a fixed camera and head stabilization. These are often quite different in terms of setup, performance and calibration, and are outside of the scope of this discussion. 

2. Remote Eye Tracking

Modern remote systems are called “remote” because they do not require contact with the participant at all. The camera is set up with a view of the eyes from a distance, and the systems can automatically alter the camera field of view to compensate for head movements. They use the pupil center and corneal reflection to track eye position and head orientation. 
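As a rough illustration of the pupil-center/corneal-reflection principle, calibration can be sketched as a polynomial regression from pupil-minus-CR vectors to screen positions. This is a simplification (commercial systems use more sophisticated, typically model-based 3D methods), and every number below is hypothetical:

```python
import numpy as np

# Hypothetical calibration: pupil-minus-CR vectors (camera pixels) recorded
# while the participant fixated known screen points (screen pixels).
pcr = np.array([[-10, -8], [0, -8], [10, -8],
                [-10,  0], [0,  0], [10,  0],
                [-10,  8], [0,  8], [10,  8]], dtype=float)
screen = np.array([[160, 120], [320, 120], [480, 120],
                   [160, 240], [320, 240], [480, 240],
                   [160, 360], [320, 360], [480, 360]], dtype=float)

def design(v):
    # Second-order polynomial terms, a common choice for this kind of mapping.
    x, y = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# Fit the mapping from eye-feature vectors to screen coordinates.
coef, *_ = np.linalg.lstsq(design(pcr), screen, rcond=None)

def gaze_px(vec):
    # Estimate the on-screen gaze position for a new pupil-minus-CR vector.
    return design(np.atleast_2d(np.asarray(vec, float))) @ coef

print(gaze_px([5, -4]).round(1))  # maps to ~(400, 180) on this synthetic screen
```

The regression interpolates between calibration points, which is why a calibration routine with a grid of fixation targets precedes most eye tracking sessions.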

Remote eye tracking systems typically consist of a camera and IR source positioned below the stimulus area, most often a computer screen. It is possible for a remote system to be positioned above the display (which is useful for touch screens, for example). However, due to the shape of the eye and eyelids, the pupil is somewhat more visible and less likely to be occluded from below. The remote camera can be set in front of the screen, attached to it, or embedded in a laptop, monitor, or kiosk.


An unobtrusive, naturalistic screen-based (remote) eye tracking device - Tobii Pro X2-30

These systems always have a functional working area, called a “headbox”, and often can only map eye movements on a defined “calibration plane”, usually the computer screen. If the participant leaves the headbox or looks beyond the calibration plane, tracking will be temporarily interrupted. A good remote system will reacquire the eyes very quickly, with minimal loss of tracking, once the eyes are back within range or gaze returns to the calibration plane. Remote experiments in which a participant divides attention between the computer screen and something elsewhere are not uncommon, but gaze data is only collected for the computer screen.
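In practice, samples off the calibration plane (or where the eyes were lost entirely) simply show up as invalid data. A minimal sketch of how such samples might be flagged (the screen size and sample values are hypothetical):

```python
# Hypothetical gaze samples in screen pixels; None means the eyes were not
# found (participant outside the headbox or looking far off-screen).
SCREEN_W, SCREEN_H = 1920, 1080

def on_calibration_plane(sample):
    # A sample is usable only if the eyes were found and gaze landed
    # within the bounds of the calibration plane (the screen).
    if sample is None:
        return False
    x, y = sample
    return 0 <= x < SCREEN_W and 0 <= y < SCREEN_H

samples = [(960, 540), (2100, 400), None, None, (900, 500)]
valid = [on_calibration_plane(s) for s in samples]
print(valid)  # [True, False, False, False, True]
```

Counting runs of invalid samples like these is also how track-loss and reacquisition time are typically quantified.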


Remote systems are most commonly used for screen-based interaction or experiments. They are also useful for gaze-contingent interfaces, such as assistive technology devices or gaming laptops.

Some advantages of this type of tracking:

  1. Natural Interaction: Ideally, the participant can use a computer completely naturally while the eye tracking system is working. This approach is excellent for usability testing, various human behavior, psychology, and vision experiments, screen-based market research, etc., where a more obtrusive interface could alter the participant’s behavior.
  2. Non-Contact: Remote systems are often the only option for research into infant or neurocompromised participant populations, both of whom may not tolerate something touching the head. It is also the basis for various assistive communication devices. For example, a person with quadriplegia or locked-in syndrome can use a remote eye tracking device to communicate via eye movements. 
  3. Compatibility with EEG: Because they do not touch the participant, and the electronics are somewhat distant, remote systems work very well alongside other research technologies such as EEG, NIRS, and other biosignal measures.

Remote eye tracking systems are virtually always binocular. Some are capable of measuring, or at least accommodating, vergence (a property of binocular vision we will discuss in a later entry), which increases accuracy at different depths.
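When a system does measure vergence, the fixation depth can be estimated from the angle between the two lines of sight. A simplified, symmetric-fixation sketch (the 6.3 cm interpupillary distance is a typical adult average, assumed here for illustration):

```python
import math

def fixation_depth_cm(vergence_deg: float, ipd_cm: float = 6.3) -> float:
    # Distance at which the two lines of sight cross, given the vergence
    # angle between them (symmetric fixation straight ahead assumed).
    return (ipd_cm / 2) / math.tan(math.radians(vergence_deg) / 2)

print(round(fixation_depth_cm(6.0), 1))   # ~60.1 cm
print(round(fixation_depth_cm(12.0), 1))  # ~30.0 cm
```

Doubling the vergence angle roughly halves the fixation distance, which is why accounting for vergence helps when the participant moves closer to or further from the screen.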


Some limitations of this type of tracking:

  1. Working Area: Remote systems understand only a fixed working area and will not track beyond it. It is difficult to track eye movements relative to a real-world object, like a mobile device or a document, unless that object is fixed relative to the camera. However, this kind of setup makes for an awkward human interaction that undermines the “natural” advantages of remote eye tracking. 
  2. Touch-Screens: Touch-screens can be difficult (with the camera mounted below the screen), because the participant will constantly have to reach across the camera and/or illumination, which will cause gaps in the data.
  3. Head-Movement: Because the participant is able to move freely, they can change the angle and distance to the screen and significantly alter their view of the stimulus vs other participants. These systems will generally be able to accommodate head movements to a point, but excessive movement can cause gaps in the data, inaccuracy and artifact. Moving significantly closer or further from the screen will require the system to interpolate the calibration and contend with changes in vergence, which can result in increased error. 
  4. Multiple Participants: Care must also be taken to prevent more than one person from being visible to the camera. For example, an infant sitting on a parent’s lap may be tracked well at first, but if the infant looks away, the eye tracking device could switch to tracking the parent instead. Modern eye tracking devices are very good at finding eyes in the headbox, but they have no way of distinguishing the participant from another individual who enters the tracking range.
  5. Sunlight: Most remote systems have an optical filter that prevents non-infrared light from interfering with the tracking. They will work well in any level of artificial light. However, these systems are intolerant of IR sources like sunlight, especially if the sun is reflecting in the participant’s eyes (i.e. facing a sunny window). This is typically not a problem for most researchers, but it can cause significant difficulties in using assistive communication devices outdoors.


Some special variations of this type of tracking:

  1. Scene Camera: A few remote eye tracking systems can be used with a scene camera. This is a fixed camera that records a real-world field of view, as an alternative to a stimulus monitor. It is often useful for recording eye movements in a real-world human interaction study (imagine two participants sitting across a desk from each other). As with the head-mounted systems below, this requires a more complex analysis, because objects in the scene camera will move constantly and vary between participants. Also, unlike a head-mounted system, the scene has to be fixed - the camera absolutely cannot move relative to the eye tracking camera, or the calibration will be broken and all data will be inaccurate.


    Real world scenario recorded with a scene camera (not pictured, mounted behind the baby) and a Tobii Pro Spectrum eye tracking system.
  2. Multiple-Camera: Another special variation of the remote system is one that uses multiple cameras, often in a vehicle, simulator, or control panel environment. This kind of device requires a more complicated setup and calibration, but it is a very useful option for complex interfaces - critical in human factors and ergonomics research and development.

3. Mobile Eye Tracking

Mobile eye tracking, sometimes called “head-mounted”, consists of a device worn by the participant, usually in the form of eye tracking glasses or a headband. This type of system typically requires a camera or a mirror to be positioned in the visual path of one eye (monocular) or both (binocular), and an additional camera that records the scene or field of view. 

Tobii Pro Glasses3 Headshot Frontview2

The Tobii Pro Glasses 3 system, which uses two extremely small eye cameras and several illuminators installed directly on the lens, plus a front-facing scene camera between the eyes.

Gaze tracking on a head-mounted system is done relative to the entire field of view, which makes it ideal for real-world experiments. This includes research applications in sports, driving, wayfinding, social communication, hand-eye coordination, mobile device and store shelf testing, etc.

Modern mobile systems are untethered, which allows for experiments in much more realistic contexts, including simulator and vehicle use, motor-control and gait experiments, sports training, store-shelf shopping, and wayfinding. 

Head-mounted systems built into glasses are generally more comfortable, less invasive, and can be worn with other technologies like EEG. With mobile eye tracking, many eye tracking studies in perception, communication, and other fields that were simulated on screens in the past can now be moved into a naturalistic and realistic context.


In human factors and usability, many applications involve studying interaction in an industrial context or when using real-world objects. A car company, for example, can study how changing the design of the dashboard or sightlines of a car can affect driver perception. An industrial engineer can study attention-based safety hazards in a factory or warehouse. A usability engineer working on informational signs in an airport can study a participant’s gaze during a wayfinding test. Head-mounted eye-tracking systems are also employed to analyze the UX of smartphone applications. 


Mobile systems are almost always binocular. If only one eye is tracked, the device will suffer from parallax error (a problem calculating gaze at different depths, due to the offset between the eye and the scene camera; with both eyes tracked, vergence can be used to estimate depth and compensate). 
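The magnitude of this error can be approximated from the eye-to-camera offset. A small-angle sketch with hypothetical values:

```python
import math

def parallax_error_deg(offset_cm, calib_depth_cm, target_depth_cm):
    # Angular difference between the eye's and the scene camera's view of a
    # target, relative to the depth the system was calibrated at
    # (small-angle approximation).
    err_rad = offset_cm * (1 / target_depth_cm - 1 / calib_depth_cm)
    return math.degrees(err_rad)

# Scene camera 3 cm from the tracked eye, calibrated at 100 cm:
print(round(parallax_error_deg(3, 100, 100), 2))  # 0.0 at the calibration depth
print(round(parallax_error_deg(3, 100, 30), 2))   # ~4 degrees of error at 30 cm
```

The error vanishes at the calibration depth and grows quickly as the viewed object moves closer, which is why monocular mobile systems struggle with near targets such as handheld phones.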


Some limitations of this type of tracking:

  1. Sunlight: Like all eye tracking devices, these devices can have trouble tracking eye movements in sunlight, and it can also be difficult to track if the participant is squinting due to excessive brightness or glare. In such cases, the tinted IR shields offered with the best glasses systems are necessary, and it may also be useful to shade the participant’s eyes with a hat or visor.
  2. Eccentric Eye Movements: Because the eye tracking cameras need an unobstructed view of the eyes, eye movements to the periphery can be hard to track and often show reduced accuracy. This may be more likely for upward eye movements (if the cameras are mounted below). 
    Since eye movements are calibrated to the scene camera, the scene camera lens itself can create a limitation. A telephoto scene camera will capture more detail, but it will be relatively easy for the participant to look beyond the edge of the camera view. A particularly wide-angle scene camera will capture all possible eye movements but will lose some detail (and therefore target differentiation) in the scene.
    Since humans tend to look at the horizon and lower, most scene cameras are oriented downward. Even so, it can be challenging to track eye movements when the participant is looking down at an object held in the hands (e.g. a mobile phone). 
  3. Relative Coordinate System: Unlike the other kinds of systems discussed here, there is no absolute coordinate system when using a mobile eye tracking device. The system records gaze data in a coordinate system defined by the scene camera. This scene-based coordinate system acts like an imaginary screen that moves with the participant’s head. For example, on a remote system, you could show a moving target on a computer screen. If you know the target’s position relative to the screen, it would be trivial to determine whether each participant’s gaze position (also formatted in screen pixels) coincided with this target.

    However, with a mobile eye tracking system, the target may be a real-world object recorded by the scene camera, like a kicked soccer ball. The position of the ball in the scene camera depends on the participant’s head position and can change as a function of both the ball’s movement and the participant’s movement at the same time. And this will vary significantly between each participant.

    While a real soccer ball is a much more natural way to present a moving object in a sports training experiment, the analysis will require much more care and subjective interpretation, given the significant variation between each participant’s experience. Each kicked soccer ball will appear differently in the scene video, and each participant’s recording will have to be analyzed individually. 

    There are multiple methods for analyzing this kind of experiment in post-processing. Some systems can use markers to delimit a uniform tracking area (e.g. a section of a store shelf, or a movie screen). There are also methods to “map” eye tracking data onto a static representation of the scene. These tools can produce quantitative data output, but they generally take time and involve some level of subjectivity. For this reason, it is often much harder to scale a mobile eye tracking study to a large number of participants, and the complexity of analysis should be taken into account when designing one.

    Mobile systems are also not typically useful in a gaze-contingent environment (like assistive technology), as real-time data lacks context without an analysis of the scene video. 
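The marker-based approach mentioned above can be sketched as a coordinate transform: markers with known positions on the reference surface are detected in each scene frame, and gaze is mapped from frame pixels into that fixed surface. A minimal affine version with hypothetical coordinates (real pipelines typically detect four or more markers and fit a homography):

```python
import numpy as np

# Three markers with known positions on a store shelf (reference coords, cm),
# detected in the current scene-video frame (pixel coords). All hypothetical.
frame_pts = np.array([[100, 80], [540, 90], [120, 400]], dtype=float)
shelf_pts = np.array([[0, 0], [120, 0], [0, 90]], dtype=float)

# Solve shelf = A @ [x, y, 1] for the 2x3 affine matrix A.
X = np.column_stack([frame_pts, np.ones(3)])
A = np.linalg.solve(X, shelf_pts).T

def gaze_on_shelf(gaze_px):
    # Map a gaze sample from scene-frame pixels into shelf coordinates.
    return A @ np.append(np.asarray(gaze_px, float), 1.0)

print(gaze_on_shelf([100, 80]))  # gaze at the first marker maps to the shelf origin
```

Because the transform is re-estimated for every frame, gaze data from different participants (and different head positions) ends up in one shared shelf coordinate system, which is what makes aggregation across participants possible.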


Some special variations of this type of tracking:

  1. Screen-Based, Head-Mounted: A very small number of head-mounted systems are used for screen-based experiments by tracking the screen from the device itself. The original SR Research EyeLink and EyeLink II are examples of this: they consist of a headband with a front-facing IR camera and four IR LED markers denoting the corners of the screen. These devices excel in high-fidelity screen-based experiments but generally do not work in the real-world scenarios mentioned here.
  2. Add Head Tracking: One potentially powerful add-on to a head-mounted system is a head or whole-body tracking system. If the orientation of the head is known in space, gaze can be calculated as a 3D vector relative to the environment. This is a complex setup, but it solves the relative coordinate system problem described above and enables powerful quantitative analysis of attention in an immersive real-world environment.
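Combining the two data streams comes down to a rotation: the eye-in-head gaze vector is rotated by the measured head orientation. A minimal sketch (the head yaw is hypothetical; y-up, right-handed coordinates assumed):

```python
import numpy as np

def yaw_matrix(deg):
    # Rotation about the vertical (y) axis - the head turning left/right.
    t = np.radians(deg)
    return np.array([[np.cos(t), 0, np.sin(t)],
                     [0,         1, 0        ],
                     [-np.sin(t), 0, np.cos(t)]])

# Eye-in-head gaze direction from the eye tracker (unit vector, head frame),
# here looking straight ahead relative to the head:
gaze_head = np.array([0.0, 0.0, 1.0])

# Head orientation from a hypothetical motion-capture / head-tracking system:
head_rot = yaw_matrix(30)  # head turned 30 degrees

# Gaze direction in room coordinates:
gaze_world = head_rot @ gaze_head
print(gaze_world.round(3))
```

In a full setup the head's position (not just orientation) is also tracked, so the gaze vector can be intersected with room geometry to find the actual point of regard.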

4. Integrated or Embedded Systems

This category is a catch-all for eye tracking devices built into other kinds of technology. This can include aiming devices in eye surgery systems and other medical products. These systems have even found their way into consumer electronics. Canon has released several cameras with an autofocus system based on gaze position within the viewfinder. Eye tracking devices have also been integrated into vehicle dashboards.

More recently, integrated systems include those embedded in virtual or augmented reality devices. This can be done for research applications, somewhat analogous to remote eye tracking, except that the screen is immersive and the stimuli are completely controllable.


Systems integrated into Virtual Reality and Augmented Reality devices can also be used as a control scheme, where the user can interact with content via eye movements. The technology can provide an intuitive control method to menus within AR and VR, where there is no mouse or keyboard.

One particularly interesting use of eye tracking in VR is foveated rendering. The human visual system has high acuity only at the fovea (the point of gaze); peripheral vision has comparatively low acuity and does not resolve detail. Foveated rendering exploits this to save processing power: the system renders higher-quality graphics at the point of gaze and lower-quality graphics in the periphery. This requires embedded eye tracking with a sample rate and real-time data transmission fast enough to react to rapid eye movements.
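The idea can be sketched as a simple eccentricity-based level-of-detail choice (the pixels-per-degree value and quality bands below are illustrative, not taken from any particular headset):

```python
def render_quality(px, py, gaze_x, gaze_y, ppd=20):
    # Choose a shading level from the angular distance between a pixel and
    # the current gaze point (ppd = display pixels per degree, illustrative).
    ecc_deg = ((px - gaze_x) ** 2 + (py - gaze_y) ** 2) ** 0.5 / ppd
    if ecc_deg < 5:
        return "full"     # foveal region: native resolution
    if ecc_deg < 15:
        return "half"     # parafoveal: reduced shading rate
    return "quarter"      # periphery: coarse shading

print(render_quality(980, 540, 960, 540))   # full (1 degree from gaze)
print(render_quality(1500, 540, 960, 540))  # quarter (27 degrees from gaze)
```

Because a saccade can move the fovea tens of degrees in a few dozen milliseconds, the gaze estimate feeding this decision must be updated with very low latency or the reduced-quality regions become visible to the user.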

Some examples of each type of eye tracking system

[Table: uses and limitations of each type of eye tracking system]


These are the major types of eye tracking systems used in research and real-world applications today. Each uses a tracking method and interface tailored to different applications. Some use cases, like screen-based psychology experiments, can be handled with more than one type of system, with tradeoffs for each. Others, like kinesiology, can really only be done properly with a particular type of system. 

In subsequent entries, we will discuss eye tracking data and how the results from these systems are analyzed and interpreted. We will also discuss some of the features one may look for when purchasing an eye tracking system. Finally, we’ll cover some of the actual applications of eye tracking devices in the field today. 

About the Author

Mark A. Mento - Director of Business Development, Bitbrain North America

Mark A. Mento holds a BSc in Biomedical Engineering from Boston University and has twenty years of experience in eye tracking at SensoMotoric Instruments (SMI) and Apple, Inc. He has also worked in neurotechnology product development and on other applications within the medical device and research fields. Mark joined Bitbrain this year as the Director of Business Development for our new office in Boston, USA.


