Measuring Expertise – A New Era in Training


In today’s world, highly skilled jobs are becoming more demanding as personnel are expected to perform critical tasks using highly complex systems in difficult environments. Surgeons and pilots are just two examples. Pressures on the medical and aviation industries are enormous. In the medical realm, our aging population means more medical conditions to treat and thus an ever-increasing demand for skilled medical personnel. In the aviation arena, more people are travelling to more places, adding further strain on the industry to supply ever more pilots both now and into the future.

Though these two examples seem quite different, they face a common basic problem: time-efficient and resource-efficient training. The quality of training needs to be maintained, and ideally improved, as time goes on. Today, training in both fields is conducted with a mixture of time in the classroom, time using simulators (computer based or otherwise) and ultimately time doing the actual job in a highly supervised environment. Typically, trainees must successfully complete a set of defined “tasks” in order to exit the training and move towards the appropriate form of certification.

The problem with the current approach (as many trainers in these fields will tell you) is that the candidates who complete these training programs are highly varied and don’t necessarily have comparable levels of proficiency. I myself was shocked when one trainer (in charge of training surgeons in a particular technique) showed me a list of surgical residents who would be graduating that year. Of that list, he commented, “I would only allow two of them to touch me.” The ultimate decision as to whether a person qualifies comes from a combination of checkboxes showing that a trainee completed the prerequisite tasks and possibly a subjective determination from one or more supervisors that the candidate is ready. The latter may be hard to withhold if the trainee did in fact complete the mandated tasks.

EyeTracking has now begun working with a number of organizations to bring its patented technology to bear on this problem. The training community has long searched for an objective measure of expertise or competence in order to make better and more consistent determinations of when a trainee is proficient. Some organizations already use eye movement data to understand where trainees look as they perform their training scenarios. Eye movement information is an important and valuable asset, especially in an aircraft, where pilots must maintain specific scan patterns to ensure that specific instruments and readouts are viewed continually in a pre-defined order and time period.

Now, instructors understand that scan behavior can be trained and measured. What the instructors don’t know, however, is how hard each task is for each trainee. Ideally, when a trainee passes a task, he or she does so using a reasonable level of mental effort. The worrisome case is the trainee who passes but is at or near his or her upper limit of manageable cognitive effort, and thus is on the verge of making serious mistakes because of high cognitive workload. Looking at the scan pattern “trace” alone fails to identify when the operator’s workload began to climb or became too high. Perhaps it had already been elevated for the last ten minutes, and ultimately mental fatigue is what led to the pilot’s error. Simply put, eye movement data alone usually only pinpoints the ultimate point of failure, which in many situations is too late. Our goal should be to fix the problem when it starts, not let it escalate into a catastrophe.

Let’s dig deeper into this thought for a moment, as I think this example underlines many training and certification issues of today. While we can train a person to perform a set of actions, we can’t actually know how hard those actions are for the person to complete. I am told that many of the jobs we are discussing are highly competitive, so simply asking trainees how hard they find a training task is unlikely to get you an answer other than “no problem” or “fine.” The upsides of attaining certification in these jobs include a highly competitive salary, the chance to realize a lifetime’s ambition, or both. Military trainee pilots, for example, often do not want to show any weakness to their superiors and/or peers, a trait that instructors try hard to train out of them.

EyeTracking’s revolutionary technology for measuring level of mental effort goes to the heart of this issue. It can be easily integrated into a wide range of training environments today, including medical, aircraft and automotive simulators, to give instructors a purely objective understanding of their trainees’ mental effort. Using eye tracking cameras (either worn as glasses or unobtrusively mounted onto a desk, console, dashboard or cockpit), we monitor small changes in pupil diameter to provide a measure of cognitive workload. And because we are using eye tracking, we can of course also tell where a person is looking. So now we know where a person is looking and how hard (or not) the brain is working. We know whether trainees are “spacing out” as they look at a display or whether they are mentally engaged. We know when each person’s workload rises or drops outside its norm. We also know whether a person’s workload for a given scenario is elevated compared to other trainees. So even if that person successfully completes a given scenario, the instructor may wish to concentrate on retraining for that scenario if that trainee was working much harder than the others to complete it.

Why is this important? Imagine a pilot who successfully completes an instrument-only nighttime approach and landing scenario but in actuality had such high cognitive workload that he nearly did not complete the task. What if that pilot is ultimately certified and finds himself in a similar situation, but now additional stressors are introduced: the co-pilot is unconscious, an engine fails, or some other unforeseen problem occurs? The pilot with the high workload in training would likely not cope with the additional demands as easily as another pilot with more spare cognitive capacity.
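To make that concrete, here is a minimal sketch (hypothetical names and thresholds, not EyeTracking’s actual implementation) of how an instructor’s dashboard might flag a trainee whose workload on a scenario runs well above the cohort norm:

```python
import statistics

def flag_elevated_workload(trainee_scores, cohort_scores, z_threshold=2.0):
    """Flag a trainee whose mean workload on a scenario sits well above the
    cohort norm. Scores are per-scenario workload values in arbitrary
    units (e.g. a pupil-derived workload index)."""
    trainee_mean = statistics.mean(trainee_scores)
    cohort_mean = statistics.mean(cohort_scores)
    cohort_sd = statistics.stdev(cohort_scores)
    # how many standard deviations above the cohort this trainee sits
    z = (trainee_mean - cohort_mean) / cohort_sd if cohort_sd else 0.0
    return z > z_threshold, z

# Example: the trainee passed the scenario but worked much harder than peers.
flagged, z = flag_elevated_workload([0.71, 0.78, 0.81], [0.42, 0.51, 0.47, 0.55])
if flagged:
    print(f"Consider retraining this scenario (workload {z:.1f} SD above norm)")
```

The point is not the particular statistic; it is that the instructor gets an objective, comparable signal instead of a checkbox.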

In today’s training programs, many trainees who are not yet fully trained may well get the OK to proceed. If we can identify which ones are having cognitive difficulty, the instructors of tomorrow can adapt and tailor training to the pilots and surgeons under their supervision, ensuring a higher level of safety and success.

Reach out to us today to see how you can use our technology in your training program. Take advantage of the latest developments in training and move your training environment to the next level. Email us at info@eyetracking.com

Patent Notice: EyeTracking, Inc.’s Cognitive Workload, Cognitive State and Level of Proficiency technologies are protected by Patents: US 7,344,251, US 7,438,418 and US 6,090,051 and all International Counterparts.


iPhone 6 Eye Tracking and the FOVIO Eye Tracker


Scene Camera Data Collection – Mobile / Tablet Example

Testing on a monitor, testing with a projector, testing on a laptop, a command and control station, a TV… the list goes on. Wherever a person meets a machine, there is a way that eye tracking can be employed. As new interfaces and devices are released, eye tracking must evolve to ensure that it can be used easily with those devices.

The latest such device was released yesterday, and that’s when mine turned up in the mail. I am of course referring to the much-anticipated iPhone 6. Here at EyeTracking, we have many customers who use our EyeWorks software to test mobile apps on a variety of devices. We ourselves run usability services (using EyeWorks, of course) for a range of companies testing mobile apps. With an iPhone 6 in hand, we thought we should perform a quick test to ensure that all is working well between EyeWorks and the latest top-end phone on the market.

For those who have not used the EyeWorks Scene Camera Module yet, it is the easiest-to-use and most powerful scene camera solution on the market. We will get more into this in a future blog. Just to make things more interesting, we decided to use the newest eye tracker on the market: the much-talked-about FOVIO system from Seeing Machines. The first production FOVIO systems only started shipping to the research community this week, so it seemed only fitting to use one for this test.

Setup took about three minutes, and we recorded simultaneous, synchronized high-definition videos of the iPhone 6 screen and a picture-in-picture view of the subject’s hands. No geometry configuration is needed: just click start, calibrate four points, and everything is running.
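Incidentally, four points are enough because mapping the phone’s flat screen into the scene camera’s view is a planar perspective transform (a homography) with eight degrees of freedom, and each calibration point supplies two constraints. Here is a rough sketch of the underlying math, with made-up coordinates and no claim to match EyeWorks’ internals:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping src -> dst via the direct
    linear transform. Four non-collinear point pairs give the eight
    constraints needed for H's eight degrees of freedom."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)   # null-space vector, reshaped into H

def map_gaze(H, x, y):
    """Project a gaze point through H into device-screen coordinates."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Illustrative calibration: the four corners of the phone screen as seen
# in the scene video (pixels) vs. the phone's own 750x1334 screen space.
scene_pts = [(312, 188), (655, 201), (641, 842), (298, 830)]
phone_pts = [(0, 0), (750, 0), (750, 1334), (0, 1334)]
H = homography_from_points(scene_pts, phone_pts)
print(map_gaze(H, 480, 515))  # gaze in scene video -> point on iPhone screen
```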

Click the embedded clip below to view the raw unedited video from our test. We’ll be sure to post more in the near future, so be sure to check back often and subscribe to our YouTube channel.

Contact our sales team if you are interested in learning more about EyeWorks or any of our other products and services.


King Midas and the Golden Gaze


Gaze-controlled systems have been in the news quite a bit lately. Fujitsu made headlines at CEATEC (the Consumer Electronics Show of Japan) last week when it demoed a prototype tablet that uses eye movements for navigation. In September, CNN Tech ran a story about a $30 pair of eye tracking glasses that “opens the door to a new era of hands-free computers, allowing us to use them without a mouse, keyboard or touch screen.” Such innovations are certainly impressive, but before we all throw away our antiquated hand-controlled devices and start practicing eye-clicks, let’s get some perspective on this application.

History: The technology to control systems with our eyes has been around for about three decades. Since the early 1980s, disabled users have benefited greatly from gaze-controlled systems as a means of clicking and typing. As eye tracking has improved, these systems have grown more accurate, less invasive, easier to calibrate and more broadly applied. It’s been a life-changing advancement for users with cerebral palsy, spinal injury, Parkinson’s, muscular dystrophy and a variety of other disabilities.

So why hasn’t gaze-control been implemented in all computing platforms? Well, one reason is that the technology is not small enough, fast enough or cheap enough for a standard computer. Evidently, that barrier is on the verge of being eliminated. There is, however, another reason that you navigated to this blog using your fingers instead of your eyeballs: because it’s easier that way. In a digital environment, clicking, swiping and typing are currently the best ways to get from pixel A to pixel B. Why complicate things? Our hands are well suited to fine motor tasks. Although the idea of controlling the world using only your eyes may appeal to your inner Jedi, it really isn’t the most practical option for able-bodied users.

Obstacles: King Midas thought it would be great if everything he touched turned to gold, but that didn’t work out so well for him. This legend has been adopted by eye tracking researchers to describe a fundamental obstacle of gaze-controlled systems: the Midas Touch Problem. Here it is in a nutshell: the eye has evolved over millions of years to view the environment, not to manipulate it. In a gaze-controlled interface, the eye needs to do both. The system is therefore required to distinguish between (1) gaze intended to gather visual information and (2) gaze intended to activate a specific command. Otherwise, the user finds that everywhere he or she looks, voluntarily or involuntarily, a new function is activated (just like King Midas, get it?). To combat the Midas Touch Problem, dwell time and blinks are used as clicking modalities in many gaze-controlled systems, but that doesn’t really solve the issue. How many times did you blink while reading this paragraph? How many times did you stare at a part of the screen for more than 500 milliseconds? You probably don’t know, because these actions often occur unconsciously. So now King Midas has some gloves, but they have a few pretty big holes in them.

And Midas Touch isn’t the only problem. You also have to worry about head box constraints, calibration drift and mechanical issues inherent in all practical applications of eye tracking. Plus, in the computer age our visual system is already over-strained. How will the eye respond to repetitive selection tasks? How long until we have a disorder called Pupil Tunnel Syndrome? All of these factors must be considered when evaluating this technology.
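To see why dwell-based clicking only patches the hole rather than closing it, consider a minimal sketch of the usual approach (the 500 ms threshold and radius are illustrative): any stare that crosses the threshold fires a click, whether the user meant it or not.

```python
DWELL_MS = 500    # a common, if arbitrary, activation threshold
RADIUS_PX = 40    # how far gaze may wander and still count as one dwell

def detect_dwell_clicks(samples):
    """samples: iterable of (timestamp_ms, x, y) gaze points.
    Yields (x, y) wherever gaze lingers within RADIUS_PX for DWELL_MS.
    Note the Midas Touch Problem: the system cannot tell a deliberate
    'click' from an involuntary stare."""
    anchor = None
    for t, x, y in samples:
        if anchor is None:
            anchor = (t, x, y)
            continue
        t0, x0, y0 = anchor
        if (x - x0) ** 2 + (y - y0) ** 2 > RADIUS_PX ** 2:
            anchor = (t, x, y)      # gaze moved on; restart the timer
        elif t - t0 >= DWELL_MS:
            yield (x0, y0)          # threshold crossed: fire a "click"
            anchor = (t, x, y)      # reset so one long stare != many clicks
```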

Conclusions: Gaze-controlled systems provide a wonderful benefit to the disabled, offering people who would otherwise be excluded the opportunity to read, write, communicate and use the internet. The smaller/faster/cheaper gaze-controlled systems in today’s news definitely represent important breakthroughs for eye tracking as an assistive technology. That said, it’s hard to imagine gaze-controlled systems replacing hand-controlled systems for the population at large. Maybe a hybrid arrangement (hands + eyes) could improve digital interactions, but the eye alone does not seem to be the best option. There are just too many complications (accidental clicks, slower dwell-based navigation, accuracy issues, camera problems, eye stress, etc.). If we are indeed on the precipice of “a new era of hands-free computing,” we might end up learning the same lesson that King Midas did: be careful what you wish for.


Literature Review: A Decade of the Index of Cognitive Activity


In 2002, Dr. Sandra Marshall presented a landmark paper at the IEEE 7th Conference on Human Factors and Power Plants, introducing the Index of Cognitive Activity (ICA). This innovative technique “provides an objective psychophysiological measurement of cognitive workload” from pupil-based eye tracking data. In the decade since this conference, the ICA has been used by eye tracking researchers all over the world in a wide variety of contexts.
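The ICA computation itself is patented (it is based on wavelet analysis of the pupil signal) and is not reproduced here, but the core idea, separating abrupt task-evoked pupil dilations from slow changes such as the light reflex, can be conveyed with a deliberately simplified, unofficial sketch:

```python
import numpy as np

def abrupt_dilation_rate(pupil_samples, hz, window_s=0.5, k=3.0):
    """Crude stand-in for an ICA-style measure: count abrupt upward jumps
    in pupil diameter relative to a smoothed baseline, per second of data.
    The real ICA uses wavelet analysis; this moving-average version only
    conveys the idea of splitting fast dilation events from slow drift."""
    pupil = np.asarray(pupil_samples, dtype=float)
    n = max(1, int(window_s * hz))
    baseline = np.convolve(pupil, np.ones(n) / n, mode="same")  # slow part
    residual = pupil - baseline                                  # fast part
    threshold = k * residual.std()
    # count rising crossings of the threshold (one count per event)
    events = np.sum((residual[1:] > threshold) & (residual[:-1] <= threshold))
    return events / (len(pupil) / hz)
```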

In this installment of the EyeTracking blog, we’ll take a look at some of the most interesting applications of the ICA. There are many to choose from, but here are a few of the greatest hits…

The ICA in Automotive Research

Understanding the workload of drivers is central to automotive design and regulation. Schwalm et al. collected ICA data during a driving simulation including lane changes and secondary tasks. Analyses of workload for the entire task and on a second-by-second basis indicated that the ICA (a) responded appropriately to changes in task demands, (b) correlated well with task success and self-reported workload and (c) identified shifts in participant strategy throughout the task. The researchers conclude that the ICA could be a valuable instrument in driver safety applications including learning, skill acquisition, drug effects and aging effects.

The ICA in Surgical Skill Assessment

Currently, surgical skill assessments rely heavily on subjective measures, which are susceptible to multiple biases. Richstone et al. investigated the use of the ICA and other eye metrics as an objective tool for assessing skill among laparoscopic surgeons. In this study, a sample of surgeons participated in live and simulated surgeries. Non-linear neural network analysis with the ICA and other eye metrics as inputs was able to classify expert and non-expert surgeons with greater than 90% accuracy. This application of the ICA may play an integral role in future documentation of skill throughout surgical training and provide meaningful metrics for surgeon credentialing.

The ICA in Military Team Environments

Many activities require teams of individuals to work together productively over a sustained period of time. Dr. Sandra Marshall describes a networked system for evaluating cognitive workload and/or fatigue of team members as they perform a task. The research was conducted at the Naval Postgraduate School in Monterey, CA under the Adaptive Architectures for Command and Control (A2C2) Research Program sponsored by the Office of Naval Research. Results demonstrated the viability of the ICA as a real-time monitor of team workload. This data can be examined by a supervisor or input directly into the operating system to manage unacceptable levels of workload in individual team members.

The ICA Across Eye Tracking Hardware Systems

Different research scenarios demand different eye tracking equipment. Because the ICA is utilized in so many disparate fields of study, it is important to validate this metric across different hardware systems. Bartels & Marshall evaluated four eye trackers (SMI’s Red 250, SR Research’s EyeLink II, Tobii’s TX 300 and Seeing Machines’ faceLAB 5) to determine the extent to which manufacturer, system type (head-mounted vs. remote) and sampling rate (60 Hz vs. 250 Hz) affected the recording of cognitive workload data. Each of the four systems successfully captured the ICA during a workload-inducing task. These results demonstrate the robustness of the ICA as a valid workload measure that can be applied in almost any eye tracking context.

The Index of Cognitive Activity is offered as part of EyeTracking, Inc.’s research services. It is also available through the EyeWorks Cognitive Workload Module.

References

Bartels, M., & Marshall, S. (2012). Measuring cognitive workload across different eye tracking hardware platforms. Paper presented at the 2012 Eye Tracking Research and Applications Symposium, Santa Barbara, CA, March 2012.

Marshall, S. (2009). What the eyes reveal: Measuring the cognitive workload of teams. In Proceedings of the 13th International Conference on Human-Computer Interaction, San Diego, CA, July 2009.

Richstone, L., Schwartz, M., Seideman, C., Cadeddu, J., Marshall, S., & Kavoussi, L. (2010). Eye metrics as an objective assessment of surgical skill. Annals of Surgery, 252(1), 177–182.

Schwalm, M., Keinath, A., & Zimmer, H. (2008). Pupillometry as a method for measuring mental workload within a simulated driving task. In Human Factors for Assistance and Automation (pp. 75–87). Shaker Publishing.

Patent Notice:

Methods, processes and technology in this document are protected by patents, including US Patent Nos.: 6,090,051, 7,344,251, 7,438,418 and 6,572,562 and all corresponding foreign counterparts.

EyeWorks™: Dynamic Region Analysis


There’s a lot to like about EyeWorks™. Its unique brand of flexible efficacy makes it an ideal software solution for eye tracking professionals in a variety of academic, applied and marketing fields. To put it simply, EyeWorks™ IS the collective expertise of EyeTracking, Inc., refined and packaged for researchers everywhere. In the coming months we will highlight a few unique features of EyeWorks™ in the EyeTracking Blog.

Dynamic Region Analysis (Patent Pending)

All good science must quantify results. Eye tracking research is no exception, be it academic, applied, marketing or any other discipline. Unless you have an objective way to evaluate the precise activity of the eye, there is little value in collecting such data. Thus, most eye tracking software offers the ability to draw regions (or AOIs, if you like) as a way to quantify the number and timing of data points within any static area. In other words, if you want to know how long the user of your training software spends viewing the dashboard, or when your website user sees the navigation, or how many eyes run across your magazine ad, you can simply draw the shape and let the software generate the results. This is quite useful, but there’s a limitation (hint: it’s the word static). Most eye tracking analysis software allows you to draw regions for static content only. That means no Flash, no dropdowns, no moving features of a simulation, no video, no objects moving in a scene camera view. As you can imagine, this seriously inhibits the researcher’s ability to quantify the results of any study of dynamic content.

…Unless that researcher is using EyeWorks, a software platform that does not limit regions to the static variety. Dynamic Region Analysis allows you to build regions that change shape, regions that move closer and farther away, regions that disappear and reappear. Generally speaking, any region that is visible at any time during your testing session can be tracked. This patent-pending feature has been part of EyeWorks for the past five years, and we’ve used it in analysis of video games, websites, television, simulators, advertisements, package design and sponsorship research. Because of EyeWorks, the results of these dynamic content studies include more than just approximations of viewing behavior and subjective counting of visual hits; they include detailed statistical analysis of precise eye activity. Our clients appreciate this distinction.
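As a rough illustration of what such a feature must do under the hood (a hypothetical sketch, not EyeWorks’ patented implementation), a dynamic region can be modeled as a polygon keyframed over time, with each gaze sample hit-tested against the polygon interpolated at its timestamp:

```python
def lerp_polygon(poly_a, poly_b, t):
    """Linearly interpolate two same-length keyframe polygons (0 <= t <= 1)."""
    return [(ax + (bx - ax) * t, ay + (by - ay) * t)
            for (ax, ay), (bx, by) in zip(poly_a, poly_b)]

def point_in_polygon(x, y, poly):
    """Standard ray-casting test: does (x, y) fall inside the polygon?"""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def gaze_hits(gaze_samples, keyframes):
    """keyframes: time-sorted list of (timestamp, polygon). Returns the
    samples whose gaze point lies inside the region as it existed at
    that instant -- i.e. region hits for a moving, morphing AOI."""
    hits = []
    for t, x, y in gaze_samples:
        for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
            if t0 <= t <= t1:
                frac = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                if point_in_polygon(x, y, lerp_polygon(p0, p1, frac)):
                    hits.append((t, x, y))
                break
    return hits
```

A region that disappears and reappears is simply a gap between keyframe spans; samples falling in the gap match nothing.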

Here’s a video in case you are having trouble visualizing (so to speak) dynamic regions. We’ve taken a very subtle product placement scene from a film and used EyeWorks’ Dynamic Region Analysis to identify the hidden advertising (outlined in green). In a study of this content, these regions would allow us to analyze precisely (1) when each product was seen, (2) how many viewers saw it and (3) how long they spent looking at it. Click the embedded clip below and watch the dynamic regions in action.

This is yet another example of an area where other eye tracking software says “No Way,” and EyeWorks says “Way!” Contact our sales team if you are interested in learning more about EyeWorks or any of our other products and services.

EyeWorks™: Multi-Display Data Collection


There’s a lot to like about EyeWorks™. Its unique brand of flexible efficacy makes it an ideal software solution for eye tracking professionals in a variety of academic, applied and marketing fields. To put it simply, EyeWorks™ IS the collective expertise of EyeTracking, Inc., refined and packaged for researchers everywhere. In the coming months we will highlight a few unique features of EyeWorks™ in the EyeTracking Blog.

Multi-Display Data Collection

A typical eye tracking study takes place within the borders of a single display, be it a monitor, projection, television or scene camera view. EyeWorks, however, is far from typical. In addition to managing standard data collection, our software offers the opportunity to collect data across multiple displays simultaneously. This innovative feature is fully integrated into all components of the EyeWorks research model, from study design through data analysis.

There is a wide variety of applications for which multi-display testing is essential. To name just a few, it is possible to collect data on users of/in:

  • Multi-screen software interfaces
  • Command & control workstations
  • Driving, flight and other vehicle simulators
  • Competing media (e.g. using iPad while watching TV)
  • Digital display + environment (e.g. taking notes on a computer while viewing a live lecture)
  • 360-degree environments (e.g. multiple scene cameras)

EyeWorks is the only eye tracking software capable of collecting data and recording video in these scenarios. Up to five independent displays are currently supported, and as computer processing speeds increase that number will grow. Regarding hardware, the multi-display feature is currently available only for researchers using a faceLAB eye tracker from Seeing Machines.
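Conceptually (a simplified sketch with made-up layout and names, not the EyeWorks API), multi-display collection amounts to routing each gaze sample, expressed in a shared coordinate space, to whichever display it lands on and converting it into that display’s local coordinates:

```python
from dataclasses import dataclass

@dataclass
class Display:
    """One surface in the rig, positioned in a shared 2D gaze space."""
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx, gy):
        return self.x <= gx < self.x + self.w and self.y <= gy < self.y + self.h

    def to_local(self, gx, gy):
        # convert a shared-space gaze point to display-relative coordinates
        return gx - self.x, gy - self.y

def route_sample(displays, gx, gy):
    """Credit one gaze sample to whichever display it falls on (or none)."""
    for d in displays:
        if d.contains(gx, gy):
            return d.name, d.to_local(gx, gy)
    return None, None

# Illustrative layout: three side-by-side surfaces, like the example below.
rig = [Display("brochure", 0, 0, 800, 1000),
       Display("monitor", 820, 0, 1920, 1080),
       Display("ipad", 2760, 0, 768, 1024)]
print(route_sample(rig, 1200, 400))   # -> ('monitor', (380, 400))
```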

The video above demonstrates multi-display data collection and introduces another component that we haven’t yet mentioned: your displays can record more than just eye data. You may wish to use one display to record the foot on the gas pedal of a driver in a simulator. You may be interested in capturing the face of a system operator to make sure he or she is alert. Any video stream may be recorded in synchrony with your eye tracking data.

The example provided here shows a participant interacting with three different media simultaneously (top). We have collected multi-display eye tracking data capturing (bottom left) his eyes viewing a print brochure, (bottom center) his eyes exploring CNN.com on a computer monitor and (bottom right) his eyes interacting with an iPad 2. You can watch the user in the scene camera view and follow his point of gaze as it moves across each of the displays.

Our eyes are rarely, if ever, confined to a single visual plane, so why should our eye data be treated in such a way? Contact our sales team if you are interested in learning more about EyeWorks Multi-Display or any of our other products and services.

The Danger of Safety


The semiautonomous vehicle is the future of the automotive industry. Innovations such as forward collision avoidance radar and lane departure warning systems are evidence of a clear trend – little by little, demands on the driver are being shifted to the car. It’s easy to see how these and other safety advances could make our roadways less dangerous. After all, the vast majority of traffic accidents are the result of human error. Any technology that can take a bit of responsibility away from the guy fiddling with the radio and playing Angry Birds while traveling 70 MPH down the freeway is welcome.

But let’s not forget the ‘semi’ in semiautonomous. A recent feature in Wired Magazine explains the risks inherent in automating certain aspects of the driving experience. While computerized assistance can improve safety in stressful situations, it may actually have the opposite effect in less taxing ones. The deciding factor is cognitive load. Until vehicles become fully autonomous, the driver must remain mentally engaged at all times. That isn’t a problem when navigating the gridlock of downtown at rush hour (i.e. high cognitive load), but consider the open road at its most hypnotic: a long, straight, featureless desert highway late at night. It can get quite boring. You might flip on the cruise control. You might activate voice navigation to let you know when to exit. Such actions reduce the cognitive load of a task that is already, perhaps, too low. The potential consequences include decreased situational awareness and increased reaction time. This can be a dangerous combination as you speed toward that stalled truck in your lane a few miles ahead.

So it seems that a safeguard is required to ensure that our safety features do indeed keep us safe. More specifically, the semiautonomous vehicle needs a means of monitoring the mental state of the driver, a way to determine whether or not he or she is sufficiently engaged in steering, braking, accelerating, etc. There are several ways to measure task-based cognitive workload. They run the gamut from paper-and-pencil subjective ratings (e.g. the NASA-TLX) to complex objective readings of brain activity (e.g. EEG). Obviously, you aren’t going to ask people to fill out a questionnaire or wear a network of electrodes every time they take a trip to the supermarket. The goal is to make driving safer without adding further complications. If we want to monitor workload in a real world driving scenario, we’re going to need something a bit more subtle.

EyeTracking, Inc. has a solution. The Index of Cognitive Activity (ICA) is an objective, unobtrusive means of measuring cognitive workload. Instead of relying on driver feedback or direct physiological sensors, the ICA algorithm analyzes fluctuations in pupil size while minimizing light effects. Best of all, this patented metric relies on a tool that will most likely be available in tomorrow’s cars anyway: eye tracking. The benefits of monitoring not only point of gaze but also workload are undeniable. In this model of ICA-enhanced eye tracking, your car will be able to address four critical driving questions: (1) are your eyes open? (2) are your eyes focused on the road? (3) are you cognitively overwhelmed? and (4) are you cognitively underwhelmed? This information can be used in real time to alert you to the greatest hazard out there: your own visual and mental behavior.
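As a toy illustration of how those four checks might combine into a single real-time advisory (the thresholds are invented, and any actual in-vehicle logic would be the automaker’s), consider:

```python
def assess_driver(eyes_open, gaze_on_road, workload, low=0.2, high=0.8):
    """Combine the four driving checks into one advisory. `workload` is a
    normalized 0-1 cognitive workload estimate (e.g. ICA-derived); the
    low/high thresholds here are illustrative, not validated values."""
    if not eyes_open:
        return "ALERT: eyes closed"
    if not gaze_on_road:
        return "ALERT: eyes off road"
    if workload > high:
        return "CAUTION: cognitively overloaded -- reduce assistance prompts"
    if workload < low:
        return "CAUTION: cognitively underloaded -- re-engage the driver"
    return "OK"

# The hypnotic empty highway at night: eyes open, on the road, mind idle.
print(assess_driver(eyes_open=True, gaze_on_road=True, workload=0.12))
```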

Several major automakers have discovered this valuable metric and put it to use in their testing labs. For example, the BMW Group conducts groundbreaking research using the ICA to evaluate cognitive workload during critical driving events (Schwalm et al., 2008). For another automaker, the ICA has been employed to examine the differences between professional racers and normal drivers. These and other applications represent key steps toward integration of a cognitive workload gauge into the next generation of automobiles. Additional R&D is required, but hopefully a new breed of semiautonomous vehicles, capable of evaluating the mental state of the driver, is just a bit further down the road.


The Evolution of a ‘Promising’ Technology


Over the past hundred years or so, the word “promising” has been employed quite often to describe eye tracking technology – from the very first noninvasive eye data collection by Dodge and Cline in 1901, through Fitts’ work with pilots in the 40s and 50s, right up to modern day uses in a diverse array of applied and research fields. Indeed, it is a promising technology. Absolutely, unquestionably, indubitably, there is great promise in the precise evaluation of visual behavior.

However, as noted by Jacob and Karn (2003), to be described as “promising” for such a lengthy interval is a dubious distinction. On one hand eye tracking must really hold promise or else it would have been discarded long ago. On the other hand, it raises a difficult question: when will this long-heralded promise finally be fulfilled?

I’ve worked in the industry for roughly seven years, and I can count on one hand the number of times that I have stated my occupation to someone who showed even the smallest modicum of recognition. The most common response that I get is a vaguely interested “Hmm.” It seems that even now, after a century of development and important discovery, eye tracking is still relegated to the fringes of public awareness. Think about some of the other inventions around the time that Dodge and Cline were measuring eye movements: the X-ray, the modern microscope, the diesel engine. While these contemporary advances famously changed the world, eye tracking continued to be thought of (if at all) as “promising,” and so it remains to this day. Present company excluded, of course. Anyone reading this blog probably already knows that eye tracking is a great deal more than just some potential futuristic possible down-the-road solution, so I won’t bother with a list of its accolades. What I’d like to discuss instead is the aforementioned perception.

From my vantage, there are two reasons that eye tracking has spent so long in the limbo of “promise.” The first is that the pertinent technologies have been slow to develop. Visual behavior is both subtle and swift. In order to accurately analyze gaze position, pupil dilation and other eye activity, you need an advanced configuration of cameras and software, the likes of which have only recently become available. Past generations of eye trackers were nowhere near the level of precision, automation and flexibility that we now enjoy. Also, today’s eye tracking systems are more than just noninvasive; they are unobtrusive. That may seem like a purely semantic difference, but it’s actually a key component in delivering on the promise. It means that for the first time we can track the eye in a truly natural setting. Consider, for example, the eye tracking-aided 3D televisions unveiled by LG last week. Without our current standard of accurate unobtrusiveness, such a device would have been impossible. And who knows? Maybe this invention will serve as the final nudge that pushes eye tracking across the tipping point of our collective consciousness. Maybe the next time I tell someone what I do for a living they’ll say, “Oh, you mean that thingy in my 3D TV?” to which I will joyously reply, “Sort of, yeah!”

So there was this snail’s pace of development over the course of a century that contributed to the perception (or lack thereof) of eye tracking, and yet that isn’t the whole story. There’s another reason that people are still calling eye tracking a “promising” technology today. It’s because no matter how many new frontiers are reached, there are always promising new ones. One need only look at the history of the field to see what I mean. By the time eye tracking had become an established tool in physiological research, it had developed into a promising one for HCI. Then, as it grew into an established tool for HCI, it became promising as an assistive technology. Over time that newly-established assistive technology was applied to promising areas of defense, security, automotive, medicine, marketing, entertainment, and on and on and on. If you consider such myriad applications, it isn’t any wonder that eye tracking has remained perennially “promising.” In fact, with every evolution and expansion, this descriptor becomes all the more appropriate…which is a good thing. I promise.


Monitoring Wakefulness of Air Traffic Controllers


At midnight on Wednesday, March 23rd, two commercial airplanes approaching Ronald Reagan Airport in Washington, D.C. requested permission to land. The tower responded with only silence. After repeated attempts at communication, both pilots were forced to navigate their descent through the darkness without the assistance of Air Traffic Control. The landings were successful and no one was injured, but when it was revealed that the controller on duty was asleep at his post, the story captured national attention.

Fatigue is unavoidable for the air traffic controller. The combination of long hours, monotonous tasks and high stress will eventually lead to physical and mental exhaustion, no matter how many cups of coffee are consumed. The event described above is just one of five cases reported in the past month. This is not a pleasant thought for the frequent flyers among us. It means that at any given time, as we hurtle through the atmosphere in a combustible tube traveling 500 miles per hour, suspended 30,000 feet above the earth, the person charged with guiding us safely to the ground might be fighting that pesky recurrent nod of the head that we have all experienced during one workday or another (hopefully in lower-leverage situations). To say the least, this prospect raises concerns.

New government regulations have already been put in place to increase staffing and decrease hours, but technology may offer a more proactive solution. The application of eye tracking to aviation and transportation security is not new. Over the past decade we have conducted research with the FAA, TSA, ONR and NASA to examine the visual behavior and cognitive state of system operators. It’s easy to see how this technology might be applied to our current situation with ATC. The challenge, after all, is making sure that the controller’s eyes are open and pointed at the screen. What better method for achieving this than eye tracking? It’s the most objective and reliable tool available for ensuring that attention remains focused during critical aviation events.

And while you’re at it, you might as well get the most out of this technology. In addition to detecting when the eyes are open and directed at the screen, eye tracking can determine whether or not a person is looking at the appropriate SECTION of the screen. Such data could be used in real time to alert the controller to an unnoticed situation before it becomes a crisis. Another applicable component of eye tracking is the detection of cognitive state. Fatigue, boredom and mental overload each leave a unique signature upon the eye. By examining fluctuations in pupil size (using the Index of Cognitive Activity) along with eye movements, blinks and divergence, we are able to determine whether or not a person is cognitively impaired. In the case of ATC, this information could be used to alert the supervisor when a given controller is too tired or stressed and needs to take a break.
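One well-established drowsiness statistic that such a system could compute from the same cameras is PERCLOS, the proportion of time over a rolling window that the eyes are mostly closed. A minimal sketch (the alert threshold is illustrative, not an operational standard):

```python
from collections import deque

class PerclosMonitor:
    """Rolling PERCLOS: the proportion of recent samples in which the
    eyelids are mostly closed (openness below 20% is a common convention).
    The alert level here is illustrative, not an operational standard."""
    def __init__(self, window_samples=3600, alert_level=0.15):
        self.samples = deque(maxlen=window_samples)  # e.g. 60 s at 60 Hz
        self.alert_level = alert_level

    def update(self, eyelid_openness):
        """eyelid_openness: 0.0 (fully closed) to 1.0 (fully open).
        Returns the current PERCLOS value and whether to raise an alert."""
        self.samples.append(1.0 if eyelid_openness < 0.2 else 0.0)
        perclos = sum(self.samples) / len(self.samples)
        return perclos, perclos > self.alert_level

monitor = PerclosMonitor()
perclos, drowsy = monitor.update(0.1)   # called per frame from the tracker
```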

Putting more controllers in the tower for shorter periods of time is certainly a step in the right direction. However, the use of eye tracking in air traffic control would provide an additional safeguard, one that most air travellers would be delighted to know is in place.