
Understanding Users through Data: UX, Ratings, and Audiences

Image: Hands placing sticky notes with drawings onto a grid hung on a wall, illustrating how audiences move through information. Photo by Luca Mascaro, some rights reserved (Creative Commons license).

“It needs to be usable by distracted individuals in a hurry. It needs to be extremely legible and intuitive,” began the client emphatically as he leaned forward, one of several people gathered at a conference table on the 16th floor of an office tower in Houston, Texas. He rested back in his chair and waited, drumming his hands on the table. The project lead and two of the designers nodded, as one pulled a vast library of application mockups up onto the demo screen. As she scrolled through these, the other explained the rationale behind their user-interface elements: “we tested this prototype with [x user base]. We have seen that they need to take [y action] immediately, and if they are hindered in this, the company itself cannot track projects or time spent by employees. [Staff] are too busy on the job to engage in lengthy bookkeeping procedures.”

This project, a massive one spanning more than a year’s research and development, is one among many for which I am currently acting as a participant observer at [Intellisoft Inc.]. In foregrounding research, this company is not unique, but it is among an ever-growing number of organizations appropriating anthropological methods to understand how audiences interface with technological artifacts. Occasionally, these methods employ terminology that diverges between the academic and applied social sciences; it took me a moment to realize, for example, that “contextual inquiry” is field research, that is, ethnography.

In this post, I map out how emerging fields such as UX conceptualize audiences in ways that differ from, and improve upon, earlier audience measurement models. I also want to outline ways that mass media outlets are rethinking how social media can be harnessed to deliver clearer understandings of audience behavior.

Conceptualizing an audience in UX and mass media work

In a recent post, I outlined the history of User Experience research (UX) and its divergence from Human Factors Engineering/User-Centered Design, but it bears clarification that the field of UX as such was born out of the growth of the Internet and an increasing interest in making design and information architecture decisions based on more than just intuition. Indeed, one of the most prominent names in the field of UX is Don Norman, co-founder (with Jakob Nielsen) of the Nielsen Norman Group, and the first to label himself a “user experience professional.” UX represents a departure from the models used to track audience behavior and measure demographics that still dominate the mass media industry.

Television, for example, still relies largely on aggregate measurements of its audience—for reasons of scale it is reluctant to employ research methods that go beyond accessing (presumably) representative sub-samples. And although anthropologists have unpacked the decoding of television by observing interactions between individual audience members and the medium (see, for example, the work of Lila Abu-Lughod, Tejaswini Ganti, Purnima Mankekar, and Andrew Shryock, among others) in the form of reception studies, the first-hand intensive observation of individuals interacting with media has largely remained the purview of academic social scientists and user experience researchers. This model of industry practice led Ien Ang, among others, to define audiences as an unknowable category, a mere discourse object intended to function as shorthand for a body whose particulars could not be understood.

Part of the problem is that mass media frequently relies on users to self-assess their behavior and habits; participants in such surveys or in the still-common Nielsen system have often been shown to engage in aspirational thinking—underestimating, for example, the time they spend watching television or misidentifying the kinds of programs they prefer. Where television becomes social TV, such as in the case of the pioneering programs whose development I witnessed in Japan (developed by companies such as Bascule), we start to gain some of the benefits of big data in understanding the nuances of user behavior.

Tracking user behavior: data vs. observation

To that end, I am sitting behind Sanjay as he walks me through the Google Analytics for a smartphone application he has developed. We can see that not many of the application’s current users have upgraded to the recent version—this enables Sanjay and his team to begin a conversation about the barriers to upgrade affecting their clients, and to identify a recent price increase in app versions as the likely culprit. Further, the data collected through Google’s system shows him precisely which parts of the app users are accessing, and when. From this information, he is composing a document recommending which features are to be maintained, and which elements of the app can presumably be dropped. Because this is an in-house project, and not for a client, field testing has been limited.
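
To give a concrete sense of what Sanjay is working from, here is a minimal sketch of the kind of summary one might compute from an exported event log. The file name, field names, and event labels are my own assumptions for illustration, not Google Analytics’ actual schema or export format.

```python
import csv
from collections import Counter

# Hypothetical export: one row per logged event, e.g.
#   user_id,app_version,feature,timestamp
#   u123,2.1,timesheet_entry,2013-06-01T09:14:00
# (Field names and layout are assumptions for illustration.)

def summarize(path):
    feature_counts = Counter()   # how often each part of the app is used
    user_versions = {}           # last app version observed per user

    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            user_versions[row["user_id"]] = row["app_version"]
            feature_counts[row["feature"]] += 1

    total_users = len(user_versions)
    print("Share of users on each version:")
    for version, n in Counter(user_versions.values()).most_common():
        print(f"  {version}: {n / total_users:.0%}")

    print("Feature usage (event counts), most to least used:")
    for feature, n in feature_counts.most_common():
        print(f"  {feature}: {n}")

if __name__ == "__main__":
    summarize("app_events.csv")
```

A report like this is what lets a team argue, from usage rather than intuition, which features to keep and which to drop.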

“But we did guerrilla testing,” reports Robert, one of the design leads. I ask what this is, and he explains: “we went to a coffee shop and randomly asked several people to use the app while we observed them. We requested they complete specific tasks, and recorded how long it took them, as well as whether they were able to do so.”
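
The arithmetic behind such a session is straightforward. Here is a rough sketch, with invented participants, tasks, and timings, of turning those observations into the completion rates and times a team might report:

```python
from statistics import median

# Hypothetical guerrilla-testing notes: (participant, task, seconds_taken, completed)
observations = [
    ("p1", "log_time_entry", 42, True),
    ("p2", "log_time_entry", 95, True),
    ("p3", "log_time_entry", 120, False),
    ("p1", "find_project_report", 30, True),
    ("p2", "find_project_report", 55, True),
    ("p3", "find_project_report", 48, True),
]

for task in sorted({t for _, t, _, _ in observations}):
    rows = [o for o in observations if o[1] == task]
    completed_times = [secs for _, _, secs, done in rows if done]
    rate = sum(1 for *_, done in rows if done) / len(rows)
    med = f"{median(completed_times)}s" if completed_times else "n/a"
    print(f"{task}: {rate:.0%} completed, median time {med}")
```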

Tracking user access to sites and features has been a strategy employed by the authors of interactive content since page hits were first counted and web logs were analyzed in the early 1990s. (Although it wasn’t until 1996 that the first odometer-style web counters emerged—remember those? For more on the history of web analytics see ClickTale.) Still, while the Internet and later forms of interactive technology aimed increasingly to refine their measurement of precisely what users were accessing and when, I have only observed detailed metrics quantifying audience engagement employed in a select few mass media cases. One of these was a collaboration between Japanese technology developer Bascule, and two of the country’s networks: NHK and NTV. The program, dubbed “60 Ban Shōbu” (60-Year Battle), was among the first in that country to tie a smartphone/computer application to a television screen directly, so that when users pushed an application button indicating they “liked” television content (the ii button), a chart measuring such button pushes reacted in real-time on the TV screen.
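
A system like this implies some server-side aggregation: button presses arriving from many viewers must be counted over a short window so the on-screen chart can move in near real time. The sketch below is my own illustration of that idea, not Bascule’s implementation.

```python
import time
from collections import deque

class SlidingWindowCounter:
    """Count 'ii' (like) button presses received within the last `window` seconds."""

    def __init__(self, window=10.0):
        self.window = window
        self.presses = deque()  # timestamps of recent presses

    def record_press(self, now=None):
        self.presses.append(now if now is not None else time.monotonic())

    def current_count(self, now=None):
        now = now if now is not None else time.monotonic()
        # Drop presses that have fallen outside the window.
        while self.presses and now - self.presses[0] > self.window:
            self.presses.popleft()
        return len(self.presses)

# Example: three presses arrive, then the on-screen chart polls the counter.
counter = SlidingWindowCounter(window=10.0)
for _ in range(3):
    counter.record_press()
print(counter.current_count())  # -> 3, feeding the real-time bar on the TV screen
```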

Combining interactive technology’s capacity to quantify and measure user behavior with a medium like television necessitates that we discard a lingering notion of audiences as passive bodies, a notion still perpetuated by some recalcitrant academics committed to a sharp division between producers and consumers of media.

So why does UX research need to observe people actually using software, websites, and so forth? What is to be gained in this process? Recalling the classic work of Stuart Hall, such an observation process recognizes individuals as appropriators of media, inclined to experiment with and customize uses of products consistent with their own needs and goals. These uses often cannot be predicted by a product’s designer, who necessarily has “intended functionality” and “ideal use” in mind. Fieldwork, or “contextual inquiry,” thus takes as its basic premise that individuals engage media unpredictably, and can perceive usability flaws that designers—privy to the structural underpinnings and architecture of a product—frequently cannot. This is not to say that UX doesn’t use aggregates, trends, and categories—it does (see the creation of personas, for example)—but its processes offer an interesting complement to quantitative big data, resting on the recognition that users are always doing something with the media they encounter.

We’re currently in the midst of a revolution in the way that corporations (mass media or other) visualize, assess, and speak to audiences. As such, our thinking about audiences must be updated to account for the new methods, and, to venture a conservative prediction, ever more nuanced understandings of user behavior will continue to circulate between industry and academic writing. Presently, I would argue, academia has some catching up to do.
