Step through the doors of the Full Sail User Experience Lab, and the first things you’re likely to notice are the workstations outfitted with the latest technology. High-definition cameras mounted in the corner of each room offer a panoptic view of what’s playing out on screen—and on a gamer’s face—at any given moment. Two-way mirrors allow researchers to observe test subjects unobtrusively, while a host of futuristic gadgets churn out biometric data.
“We have a lot of capabilities here,” says Dr. Adams Greenwood-Ericksen, a Course Director in Full Sail’s Game Design master’s program who oversees studies conducted in the UX Lab. “From really nice eye tracking equipment to a brand new facial recognition software package that comes out of MIT, we can take all sorts of physiological and brain-based measurements. That’s what most people think of when they think about user experience assessment. The shiny stuff.”
“But at the end of the day,” he adds, “most of what we do still relies on old school observation techniques.”
The research methods used by Adams and other scientists have been around for decades—in some cases, 80 years or more. So how do techniques that predate video games by nearly a century result in a better, more playable product? The answer lies in a field of study called Human Factors, something that both Adams and his colleague Dr. Shawn Stafford specialize in.
“The goal of Human Factors is to get machines to play nice with humans. Engineers are really good at building machines with amazing capabilities, but getting those machines to keep a human operator in the loop and functioning along with the machine in a way that’s safe; that’s a whole different problem.”
The study of Human Factors was born out of necessity during the Industrial Revolution, at a time when advancements in production techniques put workers in contact with machines that were often as dangerous as they were inefficient. After World War II, the same techniques used to build a safer steam engine in the 19th century and better radar systems in the 1940s were applied to a whole generation of product design—from toasters to space shuttles.
“This actually dovetails nicely with the evolution of video games, which are very complex functional systems that have to work with a human operator. And unlike other areas, where a human can be trained to work around an existing machine, there is no higher purpose to a video game other than entertainment and enjoyment. In other words, if a game doesn’t work with a human, it’s completely worthless,” says Adams.
Enter UX. Bringing researchers in early is key to ensuring that development runs smoothly, and it can make a huge difference in return on investment.
Generally, there are two approaches researchers take when compiling UX data. Qualitative research methods focus on observed or anecdotal data—the kind that isn’t typically expressed in numbers. Quantitative methods, on the other hand, are all about things that can be numerically measured and verified, like gameplay statistics.
Here’s how the process works: A client comes to the UX Lab with a specific project in mind. More often than not, Adams and his team will recommend a usability study, which begins by compiling qualitative data through the systematic observation of a test subject. At Full Sail, every step of the usability assessment is handled by graduate students in the Game Design master’s program, with Adams and Shawn overseeing the process.
“Students who choose to work in the UX Lab as part of their capstone usually finish the program with 20 to 25 studies under their belt, so they’re pretty sharp by the time they graduate.”
The graduate students work with the client to determine their needs, then design custom test plans built to answer certain questions. Once the client agrees on a test plan, the team moves forward with data collection.
Remember the shiny stuff? This is where it comes into play, says Adams. Study participants are hooked up to machines that note subtle changes in their vital signs. Eye trackers use infrared light reflected off the subject’s corneas to track where their visual focus is at any given moment. And facial recognition algorithms detect micro-expressions—tiny changes in a person’s affect lasting between one twenty-fifth and one third of a second—which help researchers determine what the test subject might be feeling.
“If we detect a change in affect, we’ll look at the eye tracking data to figure out what they’re looking at, then we’ll go back and ask them, ‘Hey, at this particular moment, when you were looking at this particular thing, you smiled. Anything you can tell us about that?’ That helps us drill down to what’s happening during gameplay and why.”
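The logic Adams describes—flagging an affect change in the micro-expression duration window, then cross-referencing it with gaze data—can be sketched in a few lines. This is purely an illustration, not the lab’s actual tooling; the function, data shapes, and sample values are all assumptions, though the duration bounds come from the article (between one twenty-fifth and one third of a second).

```python
# Illustrative sketch only -- not Full Sail's actual pipeline.
# Flag affect changes whose duration falls in the micro-expression
# window, then pair each with the gaze sample closest in time, so a
# researcher can ask a targeted follow-up about that exact moment.

MIN_DURATION = 1 / 25  # ~0.04 s, lower bound cited for micro-expressions
MAX_DURATION = 1 / 3   # ~0.33 s, upper bound

def micro_expressions(affect_events, gaze_samples):
    """affect_events: list of (start_time, end_time, label) tuples.
    gaze_samples: list of (time, screen_x, screen_y) tuples.
    Returns flagged events paired with the nearest gaze position."""
    flagged = []
    for start, end, label in affect_events:
        duration = end - start
        if MIN_DURATION <= duration <= MAX_DURATION:
            # Find the gaze sample nearest the expression's midpoint.
            midpoint = (start + end) / 2
            t, x, y = min(gaze_samples, key=lambda s: abs(s[0] - midpoint))
            flagged.append((start, label, (x, y)))
    return flagged

# A 0.1 s smile is flagged; a 1.5 s frown is too long to count.
events = [(10.0, 10.1, "smile"), (12.0, 13.5, "frown")]
gaze = [(10.04, 640, 360), (12.5, 100, 200)]
print(micro_expressions(events, gaze))
# -> [(10.0, 'smile', (640, 360))]
```

The duration filter is the key idea: expressions held longer than about a third of a second are ordinary, deliberate expressions, so only the brief involuntary ones get paired with gaze data for follow-up questions.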
Of course, qualitative methods aren’t perfect. Asking a player to describe what they’re doing as they do it has the potential to distract from the game. This is where quantitative data such as gameplay statistics come in handy.
“We look for descriptive statistics that describe broad trends such as means, medians, and modes. This combination of [qualitative and quantitative] techniques offers the most helpful and actionable feedback.”
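The descriptive statistics Adams mentions are straightforward to compute over gameplay telemetry. A minimal sketch, assuming a hypothetical set of level-completion times (the variable name and values are invented for illustration):

```python
# Illustrative sketch: descriptive statistics over hypothetical
# gameplay telemetry -- time to complete a level, in seconds.
from statistics import mean, median, mode

completion_times = [42, 38, 55, 42, 61, 47, 42, 90]

print(f"mean:   {mean(completion_times)}")    # average time: 52.125
print(f"median: {median(completion_times)}")  # middle value, robust to the 90 s outlier: 44.5
print(f"mode:   {mode(completion_times)}")    # most common time: 42
```

Reporting all three together is what reveals the broad trend: a mean pulled upward by a few players who got stuck, a median closer to the typical experience, and a mode showing the single most common outcome.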
Once the data has been collected and analyzed, the team provides a report to the client, which is often followed by an adjustment period before heading into another round of testing. It’s an arduous (and often expensive) process, but Adams says the cost benefits of UX testing are well documented.
“Usability problems account for about 80% of maintenance costs [in gaming]. The reason it’s so expensive is because you’re changing assets, you’re changing code and architecture, in many cases to support functionality.”
From AAA games to indie projects, the amount of money UX testing can save developers is no small consideration. Beyond that, the data provided by Adams and his team offers another incentive, one that could be considered even more important than monetary savings.
“Personally, I think the big value in UX is that you decrease the probability, sometimes drastically, that players will encounter something that makes them quit your game.”