To Eye-track or not to Eye-track?

Advances in eye-tracking have caught many people's attention lately in various blogs, journals and articles. Most eye-tracking hardware uses a combination of regular and infrared cameras to measure users' ocular movement (gaze) as they look at a screen, monitor or object. The hardware comes in various forms with different degrees of intrusiveness, such as desktop modules, Head Mounted Displays (HMDs), or units integrated into displays or glasses, and the choice of hardware usually depends on the application for which it is being used. Setup and operation are generally easy, and the (for the most part) low intrusiveness and minimal calibration make it an ideal tool for testing certain features or parameters of new products or applications.

Physiological tools like these offer a way of objectively obtaining insights into users' behavior. This has attracted many researchers and product developers, since many of the current tools (e.g. questionnaires) introduce various forms of bias and often call for subjective interpretation. However, one issue is that even though eye-tracking may reveal where a user is looking, it does not directly tell us why the user is focusing on a specific area. A long fixation may indicate that the user is interested in something, but it can also mean that he/she is distracted or confused. Some companies have tried to couple eye-tracking with additional physiological measures such as Electroencephalogram (EEG), Heart Rate (HR) or Galvanic Skin Response (GSR) in an attempt to decipher what specific fixations really mean, but their efforts have had limited results because much of this technology is still very new. Typically, additional follow-up questions are required to get a deeper understanding of the specific behavior.

Although eye-tracking may not directly tell a complete story, it is still a good indicator of certain user behaviors and a great complement to conventional methods for user testing, market research or product development. For example, by collecting eye-tracking data one can obtain information about navigation patterns, about areas or details that catch users' eyes more frequently, and about things that users fail to notice entirely.

In domains with tasks that require a lot of attention, such as monitoring tasks (e.g. control rooms, Air Traffic Control or Command and Control), eye-tracking can offer insights into operator situational awareness and help explain certain actions or potential mistakes (e.g. did the operator fail to notice a gauge or a change on a display?). Further, by combining the eye-tracking data with appropriate systems (e.g. preventive alerts and clever mitigation strategies), many potential errors may be avoided.
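To make that idea a bit more concrete, here is a rough Python sketch of how a simple neglect-style alert could sit on top of a stream of fixations. Everything in it (the Fixation record, the gauge_aoi rectangle, the 10-second threshold) is a placeholder of my own, not something any particular eye-tracker provides out of the box:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    t: float  # timestamp in seconds
    x: float  # screen coordinates in pixels
    y: float

def fixated_in_aoi(fixations, aoi, window_start, window_end):
    """Return True if any fixation in the time window falls inside the AOI.

    aoi is (left, top, right, bottom) in screen pixels.
    """
    left, top, right, bottom = aoi
    return any(
        window_start <= f.t <= window_end
        and left <= f.x <= right
        and top <= f.y <= bottom
        for f in fixations
    )

def neglect_alert(fixations, aoi, now, max_neglect=10.0):
    """Flag the AOI if the operator has not looked at it for max_neglect seconds."""
    return not fixated_in_aoi(fixations, aoi, now - max_neglect, now)

# Hypothetical example: has the pressure-gauge region been fixated in the last 10 s?
gauge_aoi = (100, 50, 220, 130)  # made-up gauge location on the display
stream = [Fixation(1.2, 400, 300), Fixation(3.8, 150, 90), Fixation(14.5, 600, 420)]
if neglect_alert(stream, gauge_aoi, now=25.0):
    print("Operator may have missed the pressure gauge")
```

A real system would of course need to tune the threshold per task and handle tracking loss, but the basic idea of watching dwell on critical areas is this simple.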

In a product or software development setting, eye-tracking can be used to obtain information regarding specific details or areas. For example, did something specific catch the user's eye (e.g. an advertisement, color, shape or form)? Are there areas of the application (e.g. top, bottom, left, right) that users scanned more frequently, and where more pertinent information should therefore be placed? Do users tend to navigate or scan using a specific pattern? Should information be provided in a specific sequence? This is all interesting information that can be obtained using eye-tracking.
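As an illustration of the kind of area analysis I mean, here is a small Python sketch that tallies fixations per screen quadrant. The coarse top/bottom/left/right split and the pixel coordinates are made up for the example; a real study would typically define proper areas of interest instead:

```python
from collections import Counter

def region_of(x, y, width, height):
    """Map a fixation to a coarse screen region (simple quadrant split)."""
    vertical = "top" if y < height / 2 else "bottom"
    horizontal = "left" if x < width / 2 else "right"
    return f"{vertical}-{horizontal}"

def fixation_counts_by_region(fixations, width=1920, height=1080):
    """Count fixations per region; fixations are (x, y) tuples in screen pixels."""
    return Counter(region_of(x, y, width, height) for x, y in fixations)

# Hypothetical fixations from one session on a 1920x1080 screen.
fixations = [(200, 150), (1700, 200), (300, 900), (250, 180)]
print(fixation_counts_by_region(fixations))
# Counter({'top-left': 2, 'top-right': 1, 'bottom-left': 1})
```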

Eye-tracking also has its limitations, and I feel that, as with most data, it's the way one presents and interprets the collected data that has the most value. From eye-tracking data, graphs and images such as heat maps and gaze plots can be created. Heat maps are a visual representation of where users fixated most frequently during a certain period of time. Gaze plots display all the fixations and the order in which they occurred. Both are overlaid on top of the specific image or object viewed. Various statistics can also be obtained, such as fixation duration or number of fixations for specific details, areas, or the overall session. To my knowledge, these outputs are only available when observing static views or still images. It is not possible to create meaningful heat maps or gaze plots, for any number of users, for dynamically changing views (e.g. navigating through a web site or completing a task across multiple screens). The eye-tracker simply has no way of determining what is displayed on the screen without some kind of markers, such as key presses or timestamps, and since users navigate down different paths or complete tasks using different methods, this is impossible to predict.
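For readers curious what a heat map actually is under the hood, here is a minimal Python/NumPy sketch of one common approach: accumulating a Gaussian blob per fixation, weighted by duration, into an intensity grid that can later be colour-mapped onto a screenshot. The fixation format and the sigma spread are assumptions for the example, not any vendor's actual algorithm:

```python
import numpy as np

def heatmap(fixations, width, height, sigma=40.0):
    """Accumulate a Gaussian blob per fixation into a 2-D intensity grid.

    fixations: iterable of (x, y, duration_ms); longer fixations weigh more.
    The resulting grid can be colour-mapped and overlaid on a screenshot.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.zeros((height, width))
    for x, y, duration in fixations:
        grid += duration * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    if grid.max() > 0:
        grid /= grid.max()  # normalise to 0..1 for colour mapping
    return grid

# Hypothetical example: three fixations on a 1920x1080 screenshot, weighted by duration.
fix = [(640, 360, 220), (660, 380, 450), (1500, 900, 120)]
hm = heatmap(fix, 1920, 1080)
```

Note that this only makes sense when every fixation refers to the same underlying image, which is exactly why the approach breaks down for dynamically changing views.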

For dynamically changing views that do not follow a clear path, eye-tracking can instead provide an output referred to as a gaze replay. The gaze replay is a video of a user's fixations and scan path over time, overlaid on top of whatever is being viewed (e.g. a web site, product, book etc.). This can give a general picture of where each specific user was focusing and how they scanned, but it requires interpreting the output for each user individually. I have not yet come across anyone who has been able to effectively collect and present this data across multiple users in a presentable way for dynamically changing views.
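For completeness, a gaze replay is conceptually simple to produce if the screen recording and the gaze stream are already synchronised. The sketch below uses OpenCV to draw one gaze sample per frame; the one-sample-per-frame format is an assumption of mine, and real recordings usually need resampling to line the two streams up:

```python
import cv2

def render_gaze_replay(video_path, gaze_points, out_path="gaze_replay.mp4"):
    """Overlay one gaze sample per frame onto a screen recording.

    gaze_points[i] is the (x, y) gaze position for frame i, or None if
    tracking was lost; the recording and gaze stream are assumed to be
    synchronised and sampled at the same rate.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the file reports no FPS
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    for point in gaze_points:
        ok, frame = cap.read()
        if not ok:
            break
        if point is not None:
            cv2.circle(frame, (int(point[0]), int(point[1])), 20, (0, 0, 255), 3)
        writer.write(frame)

    cap.release()
    writer.release()
```

The output is still one video per user, which is exactly the aggregation problem described above.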

So do I feel eye-tracking is useful? I think it can be a great tool for objectively gaining insights into users' behaviors, as long as researchers and their clients are aware of its capabilities and limitations. I don't think it should be relied on as a stand-alone research tool; however, I do think it can be a very powerful tool when combined with other data-capturing methods.