6 July, 2022

Eye Tracking Systems Guide for Research: Visualization, Analysis, and Reporting

Many of our research clients prefer to work with raw eye tracking data feeds rather than visualizations. We have seen this at Smart Eye, where many of our university clients don't necessarily prioritize data visualizations: the raw data is most important to them, because they can analyze it directly. Other researchers, however, value additional visualization and analysis capabilities. We already covered some of the common visualization capabilities such as gaze trails, fixations, heat maps, and Areas of Interest (AOIs). Beyond those, you can also make use of the following features:


Data Recording, Logging + Playback

Smart Eye Pro (SEP) can output any of the data parameters that it reports (gaze, eyelid opening, head tilt, etc.). One output option is a log file stored on the local PC. Another is the same data (as in the log file) streamed to another PC via Ethernet.
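As a rough illustration of how a receiving PC might consume such a stream, the sketch below reads newline-delimited records over a TCP socket in Python. The host address, port, and plain-text, tab-separated record layout are all assumptions made for this example, not Smart Eye Pro's actual network output format, which is configured in the software itself.

```python
import socket

HOST = "192.168.0.10"  # hypothetical address of the PC running Smart Eye Pro
PORT = 5002            # hypothetical port chosen for the data stream

# Connect to the assumed stream and print each record as it arrives.
with socket.create_connection((HOST, PORT)) as sock:
    buffer = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break  # sender closed the connection
        buffer += chunk
        # Assumed layout: one tab-separated record per line,
        # e.g. timestamp, gaze direction x/y/z, eyelid opening.
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            print(line.decode(errors="replace").split("\t"))
```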

In the case of video data recording, video recorded with SEP can be replayed in SEP, and all of the measurements and algorithms operate as though the recorded video were arriving as a live feed from the cameras. In other words, it doesn't matter whether the video feed is live (from the connected cameras) or replayed (from a recording): SEP acts on it the same way, including logging to a log file or streaming to another PC via Ethernet.


Graphical Representations of Data / Pre-Formatted Analysis Reports

Smart Eye Pro can graph the data using X-Y style plots, with each individual data value placed on its own graph. Pre-formatted analysis reports refer to the reports that our data analysis partner software can produce. Their software takes data values from Smart Eye Pro and produces reports on topics such as drowsy driving, attentiveness, and workload. Although Smart Eye Pro doesn't directly produce values for these topics, it does provide the data and information necessary to perform the calculations.
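As a sketch of what this kind of graph involves if you post-process a log yourself, the example below plots one logged value over time with matplotlib. The file name, delimiter, and column names (time_s, eyelid_opening_mm) are assumptions for the example; the actual columns depend on which data values you choose to log.

```python
import csv
import matplotlib.pyplot as plt

# Assumed log layout: tab-separated columns "time_s" and "eyelid_opening_mm".
times, openings = [], []
with open("sep_log.txt", newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        times.append(float(row["time_s"]))
        openings.append(float(row["eyelid_opening_mm"]))

plt.plot(times, openings)
plt.xlabel("Time (s)")
plt.ylabel("Eyelid opening (mm)")
plt.title("Eyelid opening over time")
plt.show()
```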


Performing Gaze Overlays

This is essentially laying the gaze on top of a video or display image. One method uses the Scene CAM, where the gaze is overlaid directly onto an image from the scene camera. This method works well, but it introduces parallax errors caused by the angular offset between the subject's eyes and the camera position. Another option is a Screen Grabbing overlay, where the live video output from a display is split and placed directly into the 3D world model for a precise gaze overlay. This method provides more precision and eliminates issues with parallax errors.
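To make the idea of an overlay concrete, the sketch below draws a gaze marker onto each frame of a recorded video with OpenCV. The video file name and the per-frame gaze samples (normalized screen coordinates) are invented for the example; in practice the gaze data would come from the eye tracker, time-synchronized to the video.

```python
import cv2

# Hypothetical per-frame gaze samples as normalized (x, y) coordinates.
gaze_samples = [(0.50, 0.50), (0.52, 0.48), (0.55, 0.47)]  # one per frame

cap = cv2.VideoCapture("scene_video.mp4")  # assumed recorded scene/display video
frame_idx = 0
while cap.isOpened() and frame_idx < len(gaze_samples):
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    gx, gy = gaze_samples[frame_idx]
    # Convert normalized gaze coordinates to pixels and draw the marker.
    cv2.circle(frame, (int(gx * w), int(gy * h)), 15, (0, 0, 255), 3)
    cv2.imshow("Gaze overlay", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break
    frame_idx += 1
cap.release()
cv2.destroyAllWindows()
```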

A parallax error can be described as follows: a camera is placed near a subject's head to track what the subject is looking at through the windshield of an automobile. The object being observed lies on a plane some distance from both the camera and the subject's eye. Because the camera is not placed at the same location in 3D space as the subject's eye, there is an angular offset between where the camera "thinks" the subject is looking and where the subject is actually looking. This angular offset is the parallax error. The distance from the subject's eye and the camera to that plane changes the size of the error: the closer the plane, the larger the error.
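To put rough numbers on this, the sketch below computes that angular offset for an assumed geometry: a scene camera mounted 30 cm below and 10 cm in front of the eye, with a target 2 m away. All positions are invented for illustration.

```python
import numpy as np

# Hypothetical positions in metres, in a shared 3D coordinate frame.
eye = np.array([0.0, 0.0, 0.0])        # subject's eye
camera = np.array([0.0, -0.30, 0.10])  # scene camera 30 cm below, 10 cm forward
target = np.array([0.0, 0.0, 2.0])     # observed point, 2 m ahead

def angle_between(v1, v2):
    """Angle in degrees between two direction vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Parallax error: angular offset between the eye-to-target ray
# and the camera-to-target ray caused by the camera/eye separation.
print(f"at 2 m:  {angle_between(target - eye, target - camera):.1f} degrees")

# Moving the target plane further away shrinks the error.
far_target = np.array([0.0, 0.0, 10.0])
print(f"at 10 m: {angle_between(far_target - eye, far_target - camera):.1f} degrees")
```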


Merging Multiple Systems

Another capability of the visualization tools is merging multiple eye tracking systems for simultaneous multi-person studies. In this case you can see when the test subjects are looking at their own screens and when they are interacting directly with each other. This capability can be used for worker/co-worker evaluations, or for pilot/co-pilot studies.


Learn more in our latest e-book: A Guide to Selecting Eye Tracking Solutions.

Written by Ashley McManus