‘iTrack’ That App!

How many times each day do you use an app on your smart phone? Once? Twice? Dozens? Most of us use apps almost without thinking; they have become a part of the way we work and play. You are no doubt aware of how apps are changing the way you travel (buy a ticket, check in, reserve a car, find the nearest coffee, get a weather report, and on and on). And there’s a visible impact on our shopping, with apps for all the major online shopping sites becoming more and more popular.

At latest count, there are at least 700,000 apps EACH for iPhones and Android smart phones. That’s a lot of apps. Just as you might expect, some are good and some aren’t. Remember about 10-15 years ago when everyone rushed to build a website? Some worked, some didn’t. We’re going through the same thing with apps right now.

Apps that don’t deliver what consumers want are quickly abandoned. One thing to keep in mind is that people don’t necessarily use smart phones in the same ways they use a full-size keyboard and monitor, or even a tablet. Building an app that mimics too closely how people interact with a traditional website may be a mistake. Each display option has its own advantages, and app developers need to recognize and take advantage of the features particular to each platform.

What does eye tracking have to do with all this? Just as savvy commercial entities routinely carry out usability studies on their ecommerce websites, they are now beginning to do similar studies on their smart phone apps. The only way you are going to find out whether your customers see the major features of your app is to follow their eyes as they use it. Users’ interactions with a smart phone app are far quicker than their interactions with a traditional computer website. It takes only a fraction of a second for your users to scan the phone display and move on. Do you know what they saw? Are your navigation features clear and visible even when the user is swiping or scrolling? The real estate on a smart phone is valuable; you can’t afford to waste any of it. And time is precious: milliseconds matter, because transactions take at most a few seconds. Usability testing can improve the way your app functions and make it more valuable for you and your customers.

For the Eye Tracking Skeptic, Seeing is Believing

When evaluating new technology, skepticism is a useful reflex. Are you sure it works? How do you know? Where is your evidence? Such questions help to weed out the ineffective tools and improve the ones that show promise. In this way, skeptics guide the evolution of the very technologies of which they are skeptical. And if you’re skeptical of that conclusion, just look at eye tracking. Today’s excellent visual behavior analysis tools are in part the result of a century and a half of skepticism regarding the accuracy of the data collected, its applicability to different research areas, and the realism of the testing setup. For example, if no one had ever said, “Hmm, that chin rest and bite bar sure do seem to distract the participant,” then progress toward systems with head movement correction would not have been as swift.

At this point, the technical quality of eye tracking is well-established. Most serious researchers accept that the data collected is accurate and realistic, but that doesn’t mean that the skepticism has disappeared. The main challenge that we currently hear from skeptics is one of utility: “Do I really even need eye tracking?” It’s an important question, one that drives us to constantly improve our services and software. However, as practically useful as this question may be, it isn’t a simple one to answer, at least not in a way that will satisfy the true skeptic. I can list the ways in which eye tracking is beneficial, but that’s just marketing. I can provide references to studies that have effectively used our technology, but those are just someone else’s findings. To the skeptic wondering if eye tracking offers them any real advantage, seeing is believing.

Here’s what I mean. To combat skepticism in website testing, I have two options: I can say “evidence suggests that traditional usability research does not fully account for the richness of the user experience” (and then listen to the crickets), OR I can simply show this video…

As you might expect, the video is the more effective means of conveying the value of eye tracking. It’s not about replacing traditional methods; it’s about using all of the tools available to describe the interaction between user and site completely. If you are only capturing outward behavior, then you aren’t getting the whole picture. You’re watching the highlight reel instead of seeing the whole game. You’re skimming the blurb on the back cover instead of reading the whole book. I could probably rattle off a few more analogies, but again, I think the video does a much better job of turning eye tracking skeptics into eye tracking believers.

Even Eye Trackers Have Blind Spots

“To the man with a hammer, everything looks like a nail.”

The origins of the preceding quotation are unclear – most likely Kaplan or Maslow, though some argue that Mark Twain said it first. No matter the author, the analogy aptly describes a current trend in our industry. After roughly a half-century of amazing technological advancements and staggering feats of R&D, eye tracking researchers have created some extremely useful hammers. We have hammers that measure every fixation, saccade and flicker of your pupil. We have hammers that sit on your desk and hammers that rest unobtrusively on the bridge of your nose. We have hammers that can track the eye of pretty much anyone, pretty much anywhere, doing pretty much anything. I am referring, of course, to our eye tracking hardware systems, which seamlessly translate raw physiology into accurate visual behavior data. And however well-worn the quote may be, its warning applies just as readily to these high-tech tools.

Eye tracking researchers must resist the temptation to approach every study objective exclusively with eye tracking. Although there is a treasure trove of valuable information available through this methodology, there are many questions that analysis of visual behavior alone simply cannot answer. Eye tracking cannot tell you for certain which item a shopper will purchase. It provides no means of divining click or scrolling data. Most glaringly, there is no configuration of cameras, software and infrared lights capable of capturing the thoughts, expectations and perceptions of the consumer. When planning a research study, it is important to ask yourself the following two questions: Which of my objectives can be addressed by eye tracking? And what other methodologies might I employ to fill in the blanks?

This may seem like common sense, and yet the reach of eye tracking is sometimes overstated by its practitioners. In the field of web usability, for example, there are researchers who suggest that heat maps and gaze plots will tell you virtually everything you need to know. Obviously, we agree that the eye of the user is an indispensable resource for answering a great many questions, but we only recommend it as a standalone approach when the goal of the research is extremely simple. In addition to eye tracking, any comprehensive evaluation must include analysis of click patterns, pages viewed, time on page, usability errors and scrolling. Websites are not static test stimuli; they are dynamic, multilayered interfaces that cannot be assessed without considering both visual and navigational behavior.

Perhaps even more critically, there is the subjective component. Everyone knows that analyzing qualitative data can be somewhat messy. That doesn’t mean it isn’t useful. In our experience, the best way to understand the implications of eye and click data is to ask the user to explain it in an eye tracking-enhanced post-test interview, in which the eye movements of the user serve as a powerful memory cue. A gaze plot will tell you exactly what a user looked at during a given task, but unless you ask why, you can only guess at the underlying motivations. Yes, interview data can be unreliable. Yes, the process can be inefficient and the data difficult to quantify. However, if you ask the right questions and interpret the answers carefully, your understanding of the user experience will undoubtedly be enriched by your qualitative efforts.

Questionnaires, physiological sensors, task success metrics – there are too many other methods to mention, most of which complement eye tracking nicely (and are supported by EyeWorks). The best approach is one that pairs each research question with the appropriate research technique, for example (see the sketch after this list):

  • How quickly do users notice the left navigation? Utilize eye tracking.
  • How often do users click links in the left navigation? Track navigational behavior.
  • Do users like how the left navigation is organized? Interview the user after the session.
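To make the pairing concrete, here is a minimal sketch of how those three questions might each be answered from a different data source. The data structures, AOI coordinates and numbers below are hypothetical illustrations, not the EyeWorks format or API.

```python
# Illustrative sketch only: hypothetical data structures, not a real tracker's API.
# Each research question about the left navigation is answered by a different source.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fixation:           # one eye tracking fixation
    t: float              # onset time in seconds from task start
    x: int                # screen coordinates in pixels
    y: int
    duration: float       # seconds

@dataclass
class ClickEvent:         # one entry from the navigation/click log
    t: float
    element: str          # e.g. "left_nav", "search_box"

LEFT_NAV = (0, 100, 250, 700)   # hypothetical AOI: x_min, y_min, x_max, y_max

def in_aoi(f: Fixation, aoi) -> bool:
    x_min, y_min, x_max, y_max = aoi
    return x_min <= f.x <= x_max and y_min <= f.y <= y_max

# Q1 (eye tracking): how quickly do users notice the left navigation?
def time_to_first_fixation(fixations: List[Fixation], aoi) -> Optional[float]:
    hits = [f.t for f in fixations if in_aoi(f, aoi)]
    return min(hits) if hits else None        # None means it was never viewed

# Q2 (navigation log): how often do users click links in the left navigation?
def click_count(clicks: List[ClickEvent], element: str) -> int:
    return sum(1 for c in clicks if c.element == element)

# Q3 (post-session interview): do users like how it is organized?
# That answer comes from the moderator's questions, not from any sensor.

if __name__ == "__main__":
    fixations = [Fixation(0.4, 640, 300, 0.21),
                 Fixation(1.1, 120, 400, 0.35),   # lands inside the left nav
                 Fixation(2.0, 700, 520, 0.28)]
    clicks = [ClickEvent(3.2, "left_nav"), ClickEvent(9.8, "search_box")]

    print("Time to first fixation on left nav:",
          time_to_first_fixation(fixations, LEFT_NAV), "s")
    print("Left nav clicks:", click_count(clicks, "left_nav"))
```

The point of the sketch is simply that the first question cannot be answered from the click log, the second cannot be answered from the gaze data, and the third cannot be answered from either.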

Eye tracking is an unparalleled research methodology with applications in a wide variety of fields. Nevertheless, that doesn’t mean it must stand alone. Don’t limit yourself. Use any and all tools necessary to meet the specific objectives of your study. If you can do that, there’s a good chance your results will hit the nail on the head.

What Gets Lost in the Heat Map

[Figure: eye tracking heat map (GazeSpot) of the eyetracking.com Services page]

If you perform a Google image search for ‘eye tracking,’ your results will consist primarily of heat maps – heat maps of webpages, heat maps of advertisements, heat maps of grocery store shelves, heat maps, heat maps and more heat maps. They are the most recognizable eye tracking analysis tool. They are the most commonly requested eye tracking deliverable. At this point, it isn’t too much of a stretch to say that the heat map has become the logo for the eye tracking industry as a whole.

However, this post will not be another puff piece about the unmitigated value of this oft-used data rendering. EyeTracking, Inc. will toot its own horn just this once to say that we were the originators of the heat map (or GazeSpot as we call it) back in the 1990s, and then we will proceed to a more objective discussion. What we’d like to talk about today is the manner in which these graphics are misused and misinterpreted. In doing so, we hope to shed some light on what gets lost in the heat map.

Take a look at the example above. This GazeSpot shows the aggregated visual behavior of ten users interacting with the Services page of eyetracking.com. Over 7,000 data points are represented here, and yet it doesn’t tell the whole story. Where is the eye drawn first? Is there a pattern in the way users move between elements of the page? How long do they stay here before clicking away? What usability problems are encountered? Did one user’s atypical viewing habits unduly influence the rendering as a whole? No matter what you may have heard, none of these important questions can be answered by the heat map alone.
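To see why, consider a minimal sketch of what aggregation does to the underlying data. The sessions and named regions below are hypothetical, and real heat maps aggregate at the pixel level rather than by region, but the principle is the same: once fixations are summed into totals, the order in which they happened is gone.

```python
# Illustrative sketch (hypothetical fixation sequences): aggregation discards
# the order of viewing and the contribution of individual users. Two very
# different sessions can produce exactly the same totals that feed a heat map.

from collections import Counter

# Each session is an ordered list of (region, duration_in_seconds) fixations.
session_a = [("headline", 1.0), ("menu", 0.5), ("body", 3.0)]   # top-down reader
session_b = [("body", 3.0), ("menu", 0.5), ("headline", 1.0)]   # starts at the bottom

def aggregate(session):
    """Collapse an ordered session into per-region viewing time."""
    totals = Counter()
    for region, duration in session:
        totals[region] += duration
    return totals

print(aggregate(session_a) == aggregate(session_b))   # True
# A heat map built from these totals is identical for both users, even though
# one saw the headline first and the other saw it last. First fixation, scan
# order and time-to-click live in the sequence, not in the aggregate.
```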

And what about the pictures in our example? One of the most common misinterpretations of heat maps is the assumption that a particular non-text element was not viewed because it does not have an associated hot spot. Actually, the pictures on the page shown here were all viewed by all users. The reason that they don’t show up as hot spots is that it takes much longer to read a paragraph than it does to view an image. Thus, the impact of each user’s glance toward the picture grows more diluted with each second spent reading the text. As you can see, interpretation is not always as straightforward as it seems.     
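To put a rough number on that dilution, here is a minimal sketch of a duration-weighted aggregation. The region names, fixation durations and the simple weighting scheme are hypothetical, not the GazeSpot algorithm, but they show how a brief glance can all but vanish next to sustained reading.

```python
# Illustrative sketch with hypothetical numbers: why a briefly viewed picture
# barely registers on a duration-weighted heat map. Each fixation contributes
# its duration to the region it lands in, and the rendering is scaled to the
# hottest region, so short glances are diluted by long stretches of reading.

def relative_heat(durations_by_region):
    """Total viewing time per region, scaled so the hottest region equals 1.0."""
    totals = {region: sum(durations)
              for region, durations in durations_by_region.items()}
    hottest = max(totals.values())
    return {region: round(total / hottest, 2) for region, total in totals.items()}

# Ten users each glance at the picture once (about 0.3 s) but spend about 6 s
# reading the paragraph beside it.
data = {
    "picture":   [0.3] * 10,   # roughly 3 s of attention in total
    "paragraph": [6.0] * 10,   # roughly 60 s of attention in total
}

print(relative_heat(data))    # {'picture': 0.05, 'paragraph': 1.0}
# Every participant looked at the picture, yet it renders at about 5% of the
# maximum heat, which is visually hard to distinguish from not being viewed.
```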

This is not to say that the heat map has no value. In fact, we use heat maps quite often in all kinds of studies – websites, packages, advertisements, applied science and more. They are both elegant and intuitive as a means of demonstrating the total amount of attention allocated to specific features of a medium. However, attempts to apply them to deeper research questions are misguided. Any expert in the analysis of eye data will tell you that heat maps serve a precise purpose, one that should not be stretched too far.

In our experience, no single graphic deliverable tells the whole story of visual behavior. That’s why we use a range of them – GazeTraces, GazeStats, GazeClips, Bee Swarms, GazeSpots and Dynamic GazeSpots (video heat maps with the added dimension of time). All of these deliverables are integrated with statistical analysis of the data, as well as with traditional marketing research and usability measures, to fully describe the interaction with our test materials. That’s the approach we recommend for any comprehensive eye tracking study: use all of the tools at your disposal. There are many fascinating results to be found in a heat map, but if you aren’t careful how you use it, you may lose more than you find.