This Is How Eye Tracking Technology Works

By Mark A. Mento
May 5, 2020

Eye tracking has become an invaluable tool for understanding attention and visual behavior in a number of diverse fields, from psychology and neurophysiology to user experience and market research/neuromarketing. The technology can also be used for medical analysis and screening, and it provides a new method of interaction.

Historically, eye tracking systems were invasive and immobile, and therefore useful only in very limited experiments. While the technology still has some limitations, recent advancements have allowed for much simpler setup and more universal applications.

In this entry, we will introduce eye tracking technology and discuss the methods that most systems use to track the eyes. In subsequent entries, we will look at the different kinds of eye tracking systems, how the data is analyzed, and some of the features and applications of the technology.

What is eye tracking? 

Eye tracking is the process of measuring the movement or position of the eyes. The retina has an area of densely packed photoreceptors and high visual acuity called the fovea. The lens of the eye focuses light on the fovea, and a person moves their eyes to ‘aim’ the lens and fovea where they want to look.

In short, an eye tracking system measures where a person is looking or how the eyes react to stimuli.  

By tracking and analyzing these eye movements, researchers can gain valuable insights into human behavior, physiology, psychology, perception, and visual attention. Eye movements can also provide an alternative method of interacting with the environment, a computer interface, a virtual reality headset, etc.

Eye-tracking methods

1. Traditional Methods

Eye tracking research was originally done by direct observation: the researcher would sit and watch the participant’s eye movements. The first automated systems required mechanical contact with the eye and were remarkably intrusive and uncomfortable.

One of the first electronic systems was the electromagnetic scleral search coil (SSC): a coil of wire embedded in a silicone contact lens, with a lead connected to a recording device. SSC systems provide high precision and speed, but they are still intrusive and are typically used in a Faraday cage.

Later eye tracking devices like the dual Purkinje image (DPI) system didn’t physically touch the eye, but still required a bite bar for head stabilization and had a very small allowable visual field (albeit with very high resolution and accuracy).

SSC and DPI systems are both still used under very controlled settings, typically for ultra-high-fidelity neurophysiology, vision, and ophthalmology research, as they remain capable of extremely high precision and accuracy.

A binocular dual Purkinje image eye tracking system at the University of Rochester Active Perception Lab

Electrooculography (EOG), another semi-intrusive technique, uses electrodes attached to the face to measure the small electrical potential between the front and back of the eye. EOG is not particularly accurate, but it is still sometimes used to remove eye movement noise from EEG recordings.

2. Video-based Eye Tracking

Researchers experimented with camera-based eye tracking methods from the turn of the 20th century onward. More recent advancements in digital camera technology have made possible a new class of video-based eye tracking systems that are far less intrusive and work in a variety of scenarios, including real-world environments and in real time. Video-based tracking is now the standard in most human eye tracking applications, and it is the only type considered in the subsequent sections of this document.

Most video-based eye tracking systems consist of an infrared-sensitive camera, infrared (IR) illumination, and a sophisticated algorithm for pupil center detection and artifact rejection. Image processing and data collection are handled by dedicated hardware, or by software on a computer or mobile device. IR illumination has several advantages: it is largely invisible to the participant, and artifacts from artificial light sources can easily be filtered out by wavelength.

There are certain characteristics of the pupil and cornea that are unique under IR illumination, making it easy to selectively detect the eye and to reject “false eyes” from the camera view.

  1. Pupil Tracking Methods

Bright pupil systems use an IR source on the same axis as the camera and track the glowing reflection from the retina through the pupil, similar to the “red eye” effect seen in photographs. This approach can help compensate for lower-quality cameras and generally works best in a dimly lit room with consistent illumination. It may also work better on a participant with very pale blue eyes, or on an infant who has not yet developed pigment in the iris.

Eye Tracking Bright Pupil System

Source: Wikipedia - IR-illuminated bright-pupil image

By contrast, dark pupil eye tracking systems use an IR source that is off-axis from the camera, which lights up everything in the camera view except the pupil. The image processing system then tracks the darkest, roundest object in the field of view. Dark pupil systems are more robust in varying light conditions and with participants who have dark eyes and small pupils. They are also usually larger, since the IR sources have to be physically separated from the camera; this accounts for the elongated rectangular shape of many commercial eye tracking systems.

Eye Tracking Dark Pupil System

Source: Wikipedia - IR-illuminated dark-pupil image

Bright Dark Pupil Effect Eye Tracking Technology

Source: Tobii Pro
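As a toy illustration of the dark pupil idea (not any vendor's actual algorithm), the pupil center can be approximated by thresholding an IR image for its darkest pixels and taking their centroid. The `pupil_center` function and the synthetic image below are hypothetical; real systems also check roundness and reject artifacts such as eyelashes or makeup:

```python
import numpy as np

def pupil_center(image, threshold=40):
    """Estimate the pupil center as the centroid of the darkest pixels."""
    ys, xs = np.nonzero(image < threshold)
    if len(xs) == 0:
        return None  # tracking lost (blink, occlusion, etc.)
    return xs.mean(), ys.mean()

# Synthetic IR eye image: bright background, dark circular pupil at (60, 40).
img = np.full((80, 120), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:80, 0:120]
img[(xx - 60) ** 2 + (yy - 40) ** 2 < 10 ** 2] = 10

cx, cy = pupil_center(img)
print(int(cx), int(cy))  # → 60 40
```

Note that the centroid remains a reasonable estimate even when the region is partially occluded, which is one reason centroid- and ellipse-fitting methods tolerate drooping eyelids up to a point.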

  2. Cornea Reflex Tracking

In both cases, these systems also track the corneal reflection, or reflex (CR): the bright ‘glint’ of the illumination off the spherical surface of the cornea. This provides a mechanism for differentiating between eye movements and head movements. The human visual system uses both to acquire and track a visual target, but an eye tracking system that only sees the eye has to tell them apart. When the eye rotates, the pupil moves but the CR stays in place (like a flashlight shining on a spinning ball). When the head moves, however, the pupil and CR move together. Gaze position is the primary indicator of human attention and the basis for subsequent analysis metrics (dwell time, glance, area of interest, etc.). Before calibration, gaze can be calculated in simple terms as:

(gaze position) = (pupil position) - (CR position)

Cornea Reflex Tracking

Source: Wikipedia - Visible light eye tracking algorithm
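The pupil-minus-CR relationship can be demonstrated with simple vectors (the pixel coordinates below are made up for illustration): a head movement shifts the pupil and CR together, leaving the difference unchanged, while an eye rotation moves only the pupil:

```python
import numpy as np

def raw_gaze(pupil, cr):
    """Uncalibrated gaze signal: pupil center minus corneal reflection."""
    return np.asarray(pupil) - np.asarray(cr)

baseline = raw_gaze(pupil=(100, 80), cr=(95, 78))

# Head translation: pupil and CR move together, so raw gaze is unchanged.
head_moved = raw_gaze(pupil=(110, 85), cr=(105, 83))

# Eye rotation: the pupil moves but the CR stays put, so raw gaze changes.
eye_rotated = raw_gaze(pupil=(108, 80), cr=(95, 78))

print(baseline, head_moved, eye_rotated)
# baseline == head_moved == [5 2]; eye_rotated == [13 2]
```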

For head-mounted systems, including eye tracking glasses and VR-integrated devices, CR tracking cannot be used to determine head position (since the IR source moves with the head). However, it is still used for slip compensation: a process that removes small movements between the eye tracking gear and the eye from the data.

  3. Other types of video-based systems

Visible light web camera systems have seen some use in recent years. The idea is that eye tracking experiments can be run with large numbers of participants by utilizing common webcams in people’s homes. However, these systems have some significant drawbacks in terms of accuracy and data quality.  This can be compensated for, to some extent, by increasing the total number of possible test participants. If, for example, you need 100 participants but can only reliably track 10% of them, you can recruit 1000. 

The problem with this approach is that tracking difficulty is not evenly distributed among test participants.  These systems will often have trouble tracking the eyes of older participants, those with long eyelashes or eye makeup, people with glasses, etc, causing a significant selection bias problem in the resulting data.  

Setup, calibration and validation methods

Most modern eye tracking systems require relatively little setup. Thresholds for pupil and CR detection and artifact rejection are now automated and continuously adjusted. Eye tracking systems do a reasonably good job at adapting to different eye color, size, shape, and interpupillary distance.  

The participant has to be positioned in front of the camera (or, in the case of eye tracking glasses, AR/VR devices, or a headband, the camera has to be placed on the participant). The experiment has to be designed so that the participant stays in the camera view throughout the test and avoids common sources of tracking difficulty (see “Methodology limitations” below).

Eye tracking systems do require a calibration, which is a method of algorithmically associating the physical position of the eye with the point in space that the participant is looking at (gaze). This is because there are some variations in eye size, fovea position and general physiology that have to be accommodated for each individual. To a degree, gaze position is a function of the perception of the participant. A calibration typically involves the participant looking at fixed, known points in the visual field.  These can be displayed on a computer screen for a screen-based eye tracking system, or displayed in the physical world for eye tracking glasses.  

Calibrations can use as few as a single, centered target, but more commonly use 5, 9, or even 13 points. The algorithm creates a mathematical translation between eye position (minus CR) and gaze position for each target, then builds a matrix covering the entire calibration area, interpolating between points. The more targets used, the higher and more uniform the accuracy across the visual field. The calibration area defines the highest-accuracy part of the eye tracking system’s range, with accuracy falling as the eye moves beyond the outermost calibration points.

Eye Tracking Calibration

Typical 9-point calibration sequence.  The participant fixates on each target as they appear.  
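One way to sketch the calibration mapping is a least-squares fit from raw pupil-minus-CR vectors to known on-screen target positions. The affine fit below is a simplification (commercial systems typically use higher-order polynomial or model-based mappings), and all coordinates are hypothetical:

```python
import numpy as np

# Hypothetical calibration data: raw pupil-minus-CR vectors recorded while the
# participant fixated 9 known on-screen targets (pixel coordinates).
raw = np.array([(x, y) for y in (-10, 0, 10) for x in (-10, 0, 10)], float)
targets = np.array([(100 + 40 * x, 80 + 35 * y) for x, y in raw / 10])

# Fit an affine map raw -> screen by least squares.
A = np.column_stack([raw, np.ones(len(raw))])
coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)

def gaze_on_screen(raw_vec):
    """Translate a raw eye-position vector into screen coordinates."""
    return np.append(raw_vec, 1.0) @ coeffs

print(gaze_on_screen((0.0, 0.0)))  # center target, ≈ [100. 80.]
```

With more targets, the residuals of the fit also give a first estimate of how uniform the accuracy will be across the calibrated area.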

Since calibrations require some degree of cooperation and ability, it is necessary in many cases to perform a validation to measure the success of the calibration. Some systems do this by showing new targets and measuring the accuracy of the calculated gaze. Tolerance for calibration accuracy depends on the application, but very generally an error of between 0.25 and 0.5 degrees of visual angle is considered acceptable, and well within the expected tolerances of good commercial eye tracking systems (0.3 degrees for most Tobii products, for example). For many applications, more than 1 degree is considered a failed calibration and requires another attempt. Many participants will improve on the second or third try.  Participants who consistently have a high validation error may have a vision or physiological problem that precludes their participation in an experiment.


Source: Tobii Pro website

Validation results expressed in degrees of visual angle and displayed graphically.  The purple X points are the validation targets, orange is where gaze was recorded.
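Validation error in degrees of visual angle can be computed from the on-screen distance between the target and the recorded gaze, together with the viewing distance. The setup numbers below are hypothetical:

```python
import math

def angular_error_deg(offset_mm, distance_mm):
    """Offset between target and recorded gaze, in degrees of visual angle."""
    return math.degrees(math.atan2(offset_mm, distance_mm))

# Hypothetical setup: 600 mm viewing distance, gaze lands 5 mm from the target.
err = angular_error_deg(5, 600)
print(f"{err:.2f} deg")  # → 0.48 deg, near the upper end of acceptable
```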

Note that some very advanced systems are able to self-calibrate by creating sophisticated models of the eye and passively measuring the characteristics of each individual. Calibration can also sometimes be done without the participant’s active cooperation by making assumptions about gaze position based on content, effectively “hiding” calibration targets in other visual information.  

With some participant populations (those with macular degeneration, for example, or people who are unwilling or unable to fixate on targets), not calibrating, or using a generic calibration, may ultimately give higher accuracy.  

Some devices do not need to be calibrated if useful data can be taken from raw pupil position (e.g. medical VOR equipment, fatigue monitoring systems).

Methodology limitations

Modern eye tracking systems still have some functional limitations that have to be planned for when designing an eye tracking experiment or a real-world application.  

  1. Pupil Obstruction: One major issue is that a video-based eye tracking system has to have a (mostly) unobstructed view of the pupil. Eyelids and eyelashes can intrude into the pupil space, obscuring the view from the eye tracking camera. Modern pupil detection algorithms can determine the center even if the pupil is partially occluded, but if enough of the view is obscured, tracking will stop, even though the participant can still see.

  2. Eye Makeup: This is much less of a problem with better systems than it used to be, but eye makeup (eyeliner and mascara) can sometimes absorb IR light and look very much like the pupil to the eye tracking system. Some researchers will insist on removing eye makeup before eye tracking experiments.

  3. Corrective Lenses: Any glasses will distort the camera’s view of the eye and cut down on some of the IR illumination that reflects back. Most modern eye tracking systems can accommodate normal glasses, but bifocals are sometimes avoided due to the non-linear distortion they cause at the transition point between prescriptions. Contacts usually do not cause accuracy problems, though high-end eye tracking experiments (reading research, for example) may avoid them because they float on the eye, distorting the pupil and CR positions during fast eye movements.

  4. Cooperation: Most eye tracking systems that record or use gaze position require a calibration, and this process requires cooperation from the participant.  Populations who will not reliably look at targets on command (infants, non-human primates, etc) have to be enticed (or trained) to look at calibration targets.  People with partial blindness or oculomotor deficiencies (macular degeneration, cataracts, spontaneous nystagmus, etc) are also typically difficult test participants (when gaze is required) unless the researcher is willing to accept inaccurate data or has a novel way to calibrate that does not require fixation on targets.

  5. Sunlight: Modern eye tracking systems are very good at filtering and adjusting for artificial illumination, but sunlight has a broad IR component that can obscure the pupil and blind the eye tracking camera. Only a few commercially available systems can deal with sunlight, and even these typically require some way of shading the eyes. Even if the participant is in the shade, squinting in bright sunlight will likely cause the system to lose the pupil due to pupil obstruction. In a static environment where some sunlight is unavoidable (e.g. a bright window), it is usually better to position the participant with the sun behind them, rather than reflecting in their eyes.


Eye tracking is a 100+ year-old methodology that was originally used for basic research into vision and neurophysiology. The field has seen many advancements in the last twenty years that have increased the versatility and flexibility of the technology, opening it to new applications beyond vision and neurophysiology research.

Eye tracking, along with other technologies, monitoring devices, and research techniques such as EEG, MEG, fMRI, GSR, BVP, EMG, HRV, indoor positioning systems, and implicit bias tests (IAT and priming), provides researchers with deeper insight into human behavior and alternative methods for human interaction.

In the next sections, we will provide additional information about eye tracking, including the different kinds of devices available today, eye tracking data, characteristics such as sampling rate, precision and accuracy, and some of the more common eye tracking studies and applications of the technology.

About the author

A. Mark Mento - Director of Business Development Bitbrain North America (LinkedIn)

A. Mark Mento holds a BSc in Biomedical Engineering from Boston University and has twenty years of experience in eye tracking at SensoMotoric Instruments (SMI) and Apple, Inc.  He has also worked in neurotechnology product development and on other applications within the medical device and research fields.  Mark joined Bitbrain this year as the Director of Business Development for our new office in Boston, USA.


