What UX research maturity looks like and how we get there [Advancing Research Community Workshop Series] (Videoconference)
Three of your research colleagues discussed and defended their respective positions on what UX research “maturity” looks like. Participants then engaged with them in a discussion and Q&A, facilitated by Lada Gorlenko.
“Absent a strong baseline level of data fluency, product teams struggle to harness the power of insight in their work. As UX and UXR leaders, we are uniquely positioned to define what fluency looks like for our organizations and help teams transform to achieve it.”
– Megan Blocker
“Research has come a long way, but we have a long way to go. Our future success rests on two pivotal aspects of maturity: our leadership’s proximity to power and the use of ‘Strategic Research Programs’ to deliver value beyond Design and Product.”
– Fatimah Richmond
“Does UX maturity matter in the age of Generative AI? Is your ability to do well as a team going to depend on your UX skills? Or your tech or people skills?”
– Molly Stevens
Breaking the Tension: The Power of Enabling Your Employees to Show Up Authentically
Last July, the Silicon Valley Business Journal named B. Pagels-Minor, Senior Data Project Manager at Netflix, to its 40 Under 40 Class of 2021. To quote the article, “B. Pagels-Minor is a thought leader on product and culture development within technology companies. They are passionate about creating a culture of accountability and sustainable processes that allow teams to do the work well.”
Little did anyone know what was to come just a few months later. B. was thrust into the public eye for a stance they took on an issue that was important to them and others at their place of employment. A walkout ensued. So did termination.
B. joins us as a member of the greater UX community, and speaks to us about a topic that fits squarely into our day 3 theme: Design People—Caring for Individuals and Teams.
In this talk, B. Pagels-Minor discusses their life, their experience at Netflix, and, most importantly, what they’ve learned about the power of authenticity in both doing and enabling great work.
The Basics of Automotive UX & Why Phones Are a Part of That Future
In this session with James Rampton, dive into the automotive UX industry, the types of screens that go into cars, and the impact that tech companies like Apple and Google have had on the industry with products like Apple CarPlay and Android Auto. Discover the role phones play in shaping the strategy for companies designing software-defined vehicles. Learn what to call that screen in front of you while you’re driving and why car companies struggle to make good experiences. Plus, James speculates why Apple gave up making a car and guesses what future experiences will look like for both everyday drivers and those who can afford to have the car drive itself.
People Are Sick of Change: Psychological Safety is the Cure (Videoconference)
At the heart of any organization, people are the driving force.
And people are exhausted, depleted, languishing, detached, and burned out from continuous change. The tech layoffs, bank collapses, org restructuring, and being asked to do more with less have challenged workers like never before.
There is a global human energy crisis, and creating a culture of psychological safety is the cure.
Sample Chapter: Deliberate Intervention
This is a sample chapter from Alexandra Schmidt’s book Deliberate Intervention: Using Policy and Design to Blunt the Harms of New Technology. 2022, Rosenfeld Media.
Chapter 1
A View of the Future
A father in a suburban U.S. town returns home after dropping his children off at school and unfastens his seatbelt, just in time to receive a package delivered by a drone. He walks inside and checks his retirement account on his phone, using facial recognition to log in. In a large city a few hundred miles away, an undocumented immigrant walks into his job at a restaurant. Unbeknownst to him, his movements are tracked through his smartphone so that health experts can trace the spread of a new viral infection. Overseas, a deployed member of the military checks her social media feed and sees political ads regarding an upcoming election. She puts her phone in her pocket and walks to a nearby barracks for training on a new piece of military targeting technology.
Invisibly, these individuals are experiencing a designed world of technologies, tools, and built environment. Policies have in some cases brought these tools into being—think of government-funded GPS and the internet. And then, once the private sector promotes uptake among the public, policies constrain and shape those designs to varying degrees. In some cases—like a seatbelt in a car—policies that inform the design are well formed. In others, like the online bank account and health tracking, they are just beginning to emerge and take shape. And in yet others, like AI used in military technologies, few policies exist at all. As a result, the impacts of these technologies are felt in both predictable and unpredictable ways. Society is shaped by the technologies as they emerge, and in turn, society responds and shapes them back.
The act of forming policy to drive outcomes “in the public interest” (which we’ll talk more about in Chapter 2, “Policy and Design Basics”) has never been a simple undertaking. Both policy and design work on shaping the future, and they do so in varied, overlapping, and sometimes unpredictable ways. This book doesn’t propose an elegant solution that will help the future come into being in a neatly planned-out fashion that causes no harm to anyone. Rather, it offers a way for people working at the intersection of policy and design of new technology to think more clearly about these issues and better understand their place in the puzzle.
Increasing Speed and the “Pacing Problem”
Many thinkers and writers have detailed the increasing speed of technological progress. In the 1940s, economist Joseph Schumpeter wrote about the concept of “creative destruction,” which he posited underpinned all societal progress. Creative destruction is the process by which existing business models are disrupted by newcomers, which Schumpeter called “the essential fact about capitalism.”
Such business growth has an extra fast, exponential flavor in the digital age, as Douglas Rushkoff observed in his book Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity. Venture capitalists buy companies not to own them, but to sell them at a steep profit. Because of that, VC-backed companies need to show quick disruption and exponential growth, rather than build a slow and steady, sustainable company with a reliable customer base. That’s why thriving companies like Twitter, which produced over $1 billion in revenue in 2020, are considered failures—they produce revenue rather than growth. “A more realistically financed company gets to grow whenever it’s appropriate,” Rushkoff stated. “You get to grow at the rate of increasing demand rather than the rate demanded by your debt structure.”
The speed of tech development is exacerbated by the development of technology itself. Gordon Moore, describing the development of semiconductors in what came to be known as Moore’s Law, theorized that computing power would increase dramatically, and its relative cost fall, at an exponential pace. Building on that insight came writers like Ray Kurzweil and his “Law of Accelerating Returns,” which extrapolated Moore’s insight from computing power more widely to all technology. R. Buckminster Fuller, an American architect, designer, and philosopher, theorized in his book Critical Path that human knowledge was increasing at an exponential rate—doubling every 100 years as of 1900, he estimated, then every 25 years by the end of World War II, and so on.
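To make those doubling claims concrete, growth with a fixed doubling period can be written as a simple exponential. This is a worked illustration, not a formula from the book:

```latex
% Illustrative only: growth with a fixed doubling period T.
\[
K(t) = K_0 \cdot 2^{\,t/T}
\]
% K_0 is the starting quantity (computing power, or Fuller's stock of knowledge); T is the doubling period.
% Over a single century, T = 100 years yields 2^{100/100} = 2x growth,
% while T = 25 years yields 2^{100/25} = 16x growth.
```

Shrinking the doubling period, rather than the exponential form itself, is what makes the curve feel explosive to the people living through it.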
Pull it all together, and we have a tech world evolving at a rapid pace. This trend has led to what’s known as the “pacing problem,” where technology moves ever faster, but policymakers move at the same speed as they always have. Regulatory agencies typically collect facts over a long period of time and engage in trial-like processes that go through multiple levels of oversight before being codified. When the U.S. Department of Justice sought to break up Microsoft in the late 1990s, for example, the case dragged on into the 2000s. By then, competitors like Google Chrome and Mozilla Firefox had appeared, rendering the case all but moot. Our current model of top-down enforcement, thoughtful as it is, may not be ideal for the rapidly moving technological environment we find ourselves in.
A run-of-the-mill UX’er is thinking 6 months out. Product innovation teams are looking 2-3 years out, standards people are 5 years out, deep academic research is 10 years out, and policy probably looks 15 years out.
—Whitney Quesenbery, director and co-founder of the Center for Civic Design
Proactive and Reactive Policy
A key concept to start out with is proactive and reactive policy, which we will revisit over the course of this book. Proactive policy shapes the future ex ante, before it has transpired. A current example of this involves the bans on the use of facial recognition software, with the IRS pulling back on a facial recognition scheme for tax filing before it had been tried. Reactive policy is post hoc, in response to something that has already occurred. An example of this could be safety requirements for refrigerators, which were implemented in 1956 in response to children getting caught in the appliances! As we’ll see, most policy responses throughout history are reactive in nature. Indeed, facial recognition is widely used in the private sector, with proactive bans being far rarer.
The province of new designs is, typically and appropriately, that of the private sector. With their expertise in understanding customers, building things people want, marketing those things, and driving their adoption, it makes sense that private companies would work on creating “the new.” Entrepreneurs and private companies do not typically ask for permission in creating what they do (though there come points in the development of any company when laws come into play, some of which may even halt progress). And herein lies the messiness—stuff gets built out there in the world and, if and when something goes wrong, policymakers step in to “intervene.”
At this point, we must ask: “Who gets to decide which interventions are appropriate and when they should be attempted?” Consider the Winston Churchill saying: “Democracy is the worst form of government, except for every other one.” The idea is that, even if policies are not perfect, if they are implemented by leaders whom people have elected democratically to foster shared values and serve their interests, policies will be about as good as they can get. (Of course, whether particular democracies, including the U.S., are healthy and well-functioning is up for debate.)
Every new thing brings with it indisputable problems. “When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution,” according to cultural theorist Paul Virilio. There is no objective right or wrong about whether a new thing is or was good or bad, the thinking goes. Likewise, in some cases (particularly in the digital world), there is not always a clear answer as to what constitutes a “harm” of new technology, or what the right intervention should be to address it. Instead of coming up with the “right” answer, if we decide on these things together through transparent debate and a democratic system, we are more likely to achieve an outcome that the majority of us are happy with.
The Inscrutability of the Future and Reactive Policy
One of this book’s arguments is that we typically cannot know the impacts of new designs or technologies until those things have had some time “out there,” among people in the world, to make themselves known. History is littered with examples of worry about new things that later came to be assimilated and accepted as normal. For example, there was the fear that television and radio would lead to dangerous addiction, that photographs would lead to people collecting “fake friends,” and that roller-skating could lead to charges of impropriety. Figure 1.1 shows an excerpt from The Chicago Tribune of 1885 defending the roller skate against such charges.
Figure 1.1
A newspaper clipping expressing fear of the roller skate’s negative side effects.
Twitter account Pessimists Archive @pessimistsarc
Writer Douglas Adams nicely sums up the worry about new things: “I’ve come up with a set of rules that describe our reactions to technologies:
- Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
- Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
- Anything invented after you’re thirty-five is against the natural order of things.”
Adams’ theory doesn’t mean that new technologies can’t have negative side effects—they can, and they do. Think of the misidentification of Black people by facial recognition software leading to false arrests, or the countless deaths as a result of automobiles. But the point is that, from our subjective individual perspectives, we can’t know which technologies will have mass uptake by the public, we can’t usually know for sure what their negative effects will be ahead of time, and we often can’t predict what the right policy interventions might be. There are too many unknown links in the chain of future events. This view means that policy tends to be “reactive” to the harms that we see emerging from technology.
The unimaginable is ordinary, [and] the way forward is almost never a straight line you can glance down but a convoluted path of surprises, gifts, and affliction you prepare for by accepting your blind spots as well as your intuitions.
—Rebecca Solnit, Hope in the Dark
Notably, as policy attempts to shape the world, like design, it too produces unintended consequences. An example of an unintended consequence of well-meaning policy is cookie pop-ups (those notifications on websites asking you to “accept cookies”) brought forth by EU regulations including the ePrivacy Directive and GDPR (General Data Protection Regulation). Because giving users control over their data has long been seen as the most ethical way of protecting privacy, it logically followed that every website should inform users of how their data would be used and seek their consent. Some years on, the pop-ups are widely panned as ruining the experience of the internet (see Figure 1.2). Future privacy legislation may seek to ban mass data collection outright, rather than put the onus on users for giving “consent”—thus removing the need for a pop-up at all. Just like design, policy is iterative as understanding evolves and unintended consequences make themselves known.
Figure 1.2
Cookie pop-ups have received broad criticism for “ruining” the experience of the internet, although they were intended to help protect user privacy.
Making Reactive Policy as Proactive as Possible
The future’s inscrutability does not mean we can’t be a bit wiser about it as it starts to come into focus—about bias and other harms. By bringing policy and design closer together, society can attempt to shape technology more thoughtfully, rather than (just) the other way around. While we often can’t know the impacts of technology until it’s had some time in the world, we can make reactive responses as proactive as possible. This book will run through proactive and reactive responses to harms of tech, both in the private and public sectors, as well as provide some ideas for how these can move closer together.
The funny thing is that the tech world is very much fixated on the future and attempts to “hack” it—it’s just that they’re not trying to hack harms, but rather hack product-market fit and figure out which products will have uptake by users. VC firms run hundreds of “anthropological tech” experiments at a time to see which few ideas might stick, as a way to hack the inscrutable future and make it move faster.
And well they should focus on product-market fit. Without something that lots of people can use, we wouldn’t be having this conversation about policy interventions to begin with. And there’s the rub at the core of progress: We don’t want to thwart innovation and progress, but as a society we also need to understand the trade-offs, and if and how to intervene. Makes you wonder what might happen if these firms ran similar experiments to anticipate future harms. Could we perceive them faster?
If we learn from revenge effects [or unintended consequences of technology], we will not be led to renounce technology, but we will instead refine it: watching for unforeseen problems, managing what we know are limited strengths, applying no less but also no more than is really needed…I am not arguing against change, but for a modest, tentative, and skeptical acceptance of it.
—Edward Tenner, Why Things Bite Back: Technology and the Revenge of Unintended Consequences
Pulling It Together
The world we live in is invisibly shaped by designs we interact with and policies that go on to shape those designs. But in many spaces, humans are not good at predicting the impact of the new. On top of that, the speed at which new technologies enter society is increasing at a rapid pace. Thus, with some notable exceptions, policy tends to be more reactive than proactive in nature. By being a bit more thoughtful and collaborative, we can hope to be as proactive about our reactions as possible, and address harms as they emerge.
Sample Chapter: Conversations with Things
This is a sample chapter from Diana Deibel & Rebecca Evanhoe’s book Conversations with Things. 2021, Rosenfeld Media.
Chapter 1: Why Conversation Design?
Rebecca: Hey, folks. Does anyone out there know how to make voice experiences more accessible?
Diana: Yes, I heard a great talk about that a month ago. This is something I’m trying to learn more about, too. I’d be happy to share my notes—want me to send them to you?
Rebecca: I’d love to see your notes! Thanks for being so helpful.
This conversation, held over a community Slack channel in 2018, is the origin of our friendship and this book. At the time, we were both about six years into careers in the tech industry. We finally felt like we knew what we were doing—enough to start noticing where the technology was short-changing people, and enough to start getting opinionated about the conversational interfaces we worked on.
Terms Defined: Conversational Interfaces
When you talk to technology and it answers back—whether it’s speaking or typing—that’s a conversational interface. A voice assistant like Siri? Conversational interface. A fridge that says you’re low on milk when you ask? That’s one, too. An interactive virtual reality game where you can talk with characters to advance the game? Yep. It’s a broad term that encompasses nonhuman things that do their best to listen, speak, and chat similarly to the way humans do.
Since we met, conversational technology has only gotten more ubiquitous: chatbots waving hi from the corners of websites and apps, smart speakers hanging out on countertops, people walking around talking to their watches and glasses. But we still see that the industry’s approach tends to be technology-centered, rather than human-centered—strange, considering conversational interfaces are supposed to be modeled after human communication. To us, this disconnect is a huge reason why these technologies aren’t living up to the hype. But we remain fascinated with this work, and optimistic about its potential. Because you’ve picked up this book, it seems like you’re interested in conversational interfaces, too. Keep reading, and you’ll learn everything you need to get started, including how to be a critical, ethical, inclusive thinker.
Let’s begin with a look at what makes conversational interfaces unusual—remarkable, even. First of all, conversational interfaces include lots of kinds of technology; there’s a ton of variety. (Figure 1.1 gives a snapshot of devices that fall under this umbrella.)
Figure 1.1
Pick a conversation partner.
Conversational interactions are different from, say, typing a question into Google; a search engine uses words, too, but it’s not a conversational exchange. Here are some hallmarks of true conversational interfaces:
- Language (words) is the primary input and output.
- The nature of the interaction is a back-and-forth conversation.
- The person’s input is similar to how they’d say it to another person.
- The system’s output is meant to mimic natural language—to answer on human terms.
Conversational interfaces are powerful because people are language super-users. People learn language intuitively and use it all day, every day, in both speaking and reading. That’s why these interfaces can be so effective: when people experience their technology talking to them, they click right into this easy mode. It’s a deeply innate way to navigate interactions.
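Those hallmarks are easy to see even in a toy exchange. Below is a minimal sketch—purely illustrative, not from the book—of the back-and-forth pattern: language in, language out, with the system answering on human terms.

```python
# A minimal, illustrative sketch (not from the book) of a conversational back-and-forth:
# the person's input is ordinary language, and the system answers in ordinary language.
RESPONSES = {
    "hours": "We're open 9 a.m. to 5 p.m., Monday through Friday.",
    "refund": "I can help with that. What's your order number?",
}

def reply(user_utterance: str) -> str:
    """Return a conversational answer based on keywords in the user's input."""
    text = user_utterance.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't catch that. Could you say it another way?"

# One turn of the conversation: the person asks in their own words, the system answers back.
print(reply("Hey, what are your hours today?"))
```

Even this crude keyword matcher shows the shape of the interaction; real systems replace the lookup with far richer language understanding, but the back-and-forth contract stays the same.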
Coming to Terms
Conversation design is interdisciplinary, so its practitioners use a lot of jargon coming from different fields—and this jargon isn’t standardized. We’re word nerds, so for this book, we scrutinized what terms people used and where those terms came from. This book uses the term conversational interface broadly, to refer to talking technology, including both spoken and typed interactions. For aural interactions, we use these terms:
- Voice user interface, or VUI (pronounced voo-ey, rhymes with chewy): A general category of interactions that use out-loud speech as an input or output.
- Voice assistants: A VUI system that’s meant to help you with daily life at home, work, in the car, or everywhere. (These are your Alexas, your Siris, or Googles.)
- Interactive voice response, or IVR: Older, computer-automated phone systems where users listen to pre-recorded or synthetic voices, and respond by pressing numbers or saying simple voice commands.
- Text to speech (TTS): Technology that takes text (letters, words, numbers, sentences) and speaks it aloud with a synthetic voice. (A short illustrative sketch follows this glossary.)
For text-based interactions (which necessarily involve a screen), we use these terms:
- Chatbot: An interactive system where the conversation is typed (instead of spoken). Some chatbots use clickable buttons, too.
- Multimodal: Systems that use more than one sensory input or output. (For example, a combination of voice and visuals.)
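As a concrete illustration of the text-to-speech item above—purely a sketch, and assuming the third-party pyttsx3 package rather than anything the book prescribes—here is what that pipeline looks like in a few lines of Python:

```python
# Minimal text-to-speech sketch using the third-party pyttsx3 package (an assumption; the
# book does not prescribe any particular library). Text goes in, a synthetic voice speaks it.
import pyttsx3

engine = pyttsx3.init()                      # pick up the platform's default speech engine
engine.say("Your package has shipped.")      # queue a phrase for the synthetic voice
engine.runAndWait()                          # block until the phrase has been spoken aloud
```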
Conversational interfaces have other key uses, too:
- Convenient multitasking: Walking through your front door with two bags of groceries, you say “Alexa, I’m home,” and voilà, your lights turn on and your playlist starts. In the kitchen, hands covered in cookie dough, you can holler “Hey Google, set a timer for twelve minutes.” You’re driving, and you say “Hey Siri, text Chloé that I’m five minutes late” without taking your eyes off the road or your hands off the wheel.
- Information “spearfishing”: Navigating apps and websites can involve searching, scanning, clicking, and scrolling. A well-designed bot can cut through the muck and deliver concise bits of information: “What’s my credit card balance?” “When was Bea Arthur born?” When a user nabs the info they want in one quick jab, it’s more frictionless than any web search.
- Hands-free public spaces: When the 2020 pandemic started, people’s aversion to touching an ATM or vending machine skyrocketed. Voice interactions can create a less germy future where people can speak with interfaces, rather than tapping sticky screens or pushing grubby crosswalk buttons.
- Judgment-free help: Research shows that in some situations, people feel more comfortable spilling the beans when they know they’re talking to a “fake” person—a virtual therapist, for example. Shame can be a powerful silencer. In conversations where people often feel judged—about drug use or debt, for example—a neutral speaking partner can ease the stress.
- Accessibility: For people in the Deaf community, or hard-of-hearing folks, a chatbot can be a much smoother way to get customer support, for example. And using voice makes so many things easier—ordering takeout, calling friends, checking the news—for people who are blind, sight-impaired, or have limited mobility for any reason.
- Infinite patience: Voice assistants don’t mind being woken up at 3 a.m. Chatbots don’t mind if you wait twenty minutes before responding to them. You can ask a bot the same question over and over—it won’t mind.
Conversational interfaces can accomplish things that screens alone can’t. When they’re designed well, they tap into human instincts and emotions, and they feel personal and familiar like no other form of technology. And building a conversational product is a hard, interdisciplinary puzzle—who wouldn’t want to solve a puzzle like that?
Coming to Terms
Speaking of bots in general, that little syllable has been used since the 1960s to denote “a type of robot or automated device,” according to the Oxford English Dictionary, thank you very much. We use the shorthand “bot,” or even “thing,” to refer to conversational interfaces or devices.
What about this sloppy meatball: artificial intelligence? We blame the media, corporate talking heads, and the public imagination for this one devolving into near meaninglessness. We’ll take a stab at a definition that works for this book:
- Artificial intelligence: Algorithmic systems that try to “think,” speak, or behave like people can.
Sometimes this book uses conversational AIs to refer to more advanced systems that get closer to mimicking human intelligence.
Finally, the most important word we need to address: user.
If you’re in design, you’re probably acclimated to an odd convention: refer to the people who are interacting with the technology—the app, the website, the printer, the smart fridge—as users. It’s right there in the name: it’s the U in UX. There’s been well-founded pushback on the term in recent years. Criticism coming from grassroots UXers, as well as tech bigheads like Jack Dorsey, calls it out as dehumanizing, creating abstraction instead of highlighting the humanity in people the field is trying to center.
These are valid criticisms. This book employs user because in certain places, the term people felt too general, and we wanted to specifically connote someone using the technology being discussed. When the industry clicks on a better term, we’ll be all in.
Conversation Designers to the Rescue
Conversation design falls under the umbrella of user experience (UX) design, so it’s both human-centered and data-driven—just with a tight focus on talking. Conversation designers are the practitioners of this craft, and they aim to help people and bots have good conversations, starting with what people need and how they use language to express those needs. They think in terms of scripts and flows and user journeys. (Figure 1.2 shows a literal sketch of a conversation design brainstorm. Beware!)
In simple terms, conversation designers usually do these things:
- Research to understand how people talk and what their needs are.
- Design personalities for bots.
- Write responses that the bot will say.
- Study different ways that users ask for things or express the same idea.
- Craft diagrams, charts, or sketches of how conversations flow.
- Create prototypes to test how people react to different personalities, voices, and scenarios.
- Advocate for accessibility and inclusive design.
- Collaborate with the people around them.
Figure 1.2
A page from a conversation designer’s notebook.
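One lightweight way to capture a flow like the one in Figure 1.2 is as plain data plus a tiny bit of code. This is a hypothetical illustration of the flow-mapping work listed above, not a method the authors prescribe:

```python
# Hypothetical, minimal representation of a conversation flow as data (not from the book).
# Each step has a prompt the bot says, plus the next step to move to, keyed by the user's intent.
FLOW = {
    "greet":        {"prompt": "Hi! Do you want to check an order or start a return?",
                     "next": {"check_order": "ask_order_id", "return": "ask_reason"}},
    "ask_order_id": {"prompt": "Sure, what's your order number?", "next": {}},
    "ask_reason":   {"prompt": "No problem. What's the reason for the return?", "next": {}},
}

def next_step(current: str, intent: str) -> str:
    """Follow the flow from the current step; stay put if the intent isn't recognized."""
    return FLOW[current]["next"].get(intent, current)

step = "greet"
print(FLOW[step]["prompt"])
step = next_step(step, "return")   # the user says they want to make a return
print(FLOW[step]["prompt"])
```

Real tools use richer representations, but even a sketch like this makes the branches, prompts, and dead ends of a conversation easy to see and discuss.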
Conversation design has interdisciplinary roots. Its techniques stem from research on how people ingest, comprehend, and produce language—which means conversation designers often come from diverse backgrounds like linguistics, sociology, psychology, neurology, and more. (And yes, it can take inspiration from the arts, like screenwriting, acting, poetry, and improvisation.)
If you’re trying to find a conversation designer for your team, or wondering how you fit into the conversation design landscape, know that people with a wide and diverse set of backgrounds have this job. Greg Bennett, linguist and conversation designer, says that including these diverse perspectives is a strength, especially for language-driven products: “Because your lens on the world is going to be slightly different than mine, and your lens on how to use language will be slightly different, which reveals something that I can’t see. That’s the best part about it.”
No matter where conversation designers come from, it’s a crucial role, because conversational interfaces are a strange, ever-surprising form of technology. To get them right requires expertise, and without it, a lot of voice and chat interactions end up pretty unhelpful and frustrating. See Figure 1.3 for a sampling of tweets explaining what can happen when conversation design is left out.
Figure 1.3
When Rebecca tweeted “What can go wrong when voice or chat projects don’t have a dedicated conversation designer?” these three folks nailed it: Brooke Hawkins, conversation designer; Lauren Golembiewski, CEO and co-founder of Voxable; and Roger Kibbe, voice and conversational AI technologist.
Conversation design isn’t easy. First, its users are still learning to trust conversational tech. They worry that voice assistants are “always listening.” They’ve been burned before by an obtuse chatbot. They’re traumatized from years of bad computerized phone systems. So designers face an uphill battle trying to build user trust.
Combine that with the fact that people really, really notice when bots can’t hold up their end of the conversation. Most people are such natural language machines that any anomalies are obvious and jarring: that’s why a crappy, stilted conversation feels so wrong. Everyone is a harsh critic, with the highest of expectations for the interface.
Content Warning
Throughout this book, you’ll encounter an unfortunate truth: Because these technologies imitate people (and are created by people), they can be biased and harmful just as people can.
A common theme in technological bias is racism. Ruha Benjamin, author of Race After Technology, sums up this potential for any technology: “So can robots—and by extension, other technologies—be racist? Of course they can. Robots, designed in a world drenched in racism, will find it nearly impossible to stay dry.”1
Conversational AIs have a complicated relationship with femininity, too. They are often criticized for “sexist overtones, demising of women in traditional feminized roles, and inability to assertively rebuke sexual advances,” as authors Yolande Strengers and Jenny Kennedy wrote in their book The Smart Wife.2 This book gives several examples of where racial and gender bias rear their heads.
But these aren’t the only forms of oppression a bot can put out there: they are just the ones with the most research thus far. Conversation designers need to understand intersectionality: “the complex, cumulative way in which the effects of multiple forms of discrimination (such as racism, sexism, and classism) combine, overlap, or intersect especially in the experiences of marginalized individuals or groups,” according to Merriam-Webster. Lots of factors impact how people experience oppression and privilege, like sexual orientation and identity, disability, age, body size, and more.
This book calls attention to bias throughout. It’s a complicated topic, but understanding where it surfaces and how it impacts people is integral to human-centered design.
1 Ruha Benjamin, Race After Technology (Cambridge: Polity, 2019), 62.
2 Yolande Strengers and Jenny Kennedy, The Smart Wife (Cambridge: The MIT Press, 2020), 11.
From a business perspective, companies often misunderstand, underestimate, or simply ignore the need for conversation design.
These are commonly held viewpoints that may lead to trouble:
- Underestimating the complexity and role of language: “The bot’s essentially an FAQ page.”
- Treating the project as a purely technological endeavor: “All we need are developers.”
- Approaching production as if the project were screen-based: “We’ve got a UX team already.”
- Miscalculating the benchmark for MVP (minimum viable product): “Let’s get something out fast to test and learn.”
These viewpoints have repercussions. Rebecca did her fair share of “chatbot doctoring”—being brought in as a consultant to save an ailing bot. More often than not, when she took a look under the hood, the whole bot had to be discarded, from soup to nuts, because of those assumptions.
That said, it’s totally normal that users and businesses are still getting their sea legs with conversational interfaces—the technology is still hitting its stride. And, by their very nature, conversations are hard to design because language is complex. That’s exactly what this book will teach you, starting with the differences between human and mechanical conversations in the next chapter.
The Last Word
Of course, conversation design is unique. Think about it: You’re creating a product that’s modeled after the human mind and its ability to interpret and respond to language. That’s a daunting task.
With good design and process, amazing conversational experiences are possible. Your chatbot or voice experience can be great right out of the gate. You could launch the world’s most elegant talking dishwasher, or make a virtual debate coach. Your talking car could teach a million teens to drive! Your mental health bot could improve lives.
This is why being a conversation designer is fascinating: you get to think big about the complexity of language, the wildness of human behavior, and the inner workings of technology. It’s weird and it’s fun and it’s hard. Never forget, though, that the ultimate goal of a conversational interface is for it to be good—that is to say, easy to talk to, on human terms.
Sample Chapter: Product Management for UX People
This is a sample chapter from Christian Crumlish’s book Product Management for UX People: From Designing to Thriving in a Product World. 2022, Rosenfeld Media.
Chapter 1: What Exactly Does a Product Manager Do?
If you’re not sure what product managers do, you’re not alone. Quite a few hiring managers—not to mention entire businesses—are also confused about this job title and what exactly it means. It doesn’t help that there are a wide variety of legitimate approaches to product management that tend to emphasize one or another of the constituent proficiencies at the expense of the others. As confusing as this may seem, those approaches are all legitimate in practice today, because the work itself depends so heavily on context. That being said, every product manager has the same core responsibility: value.
Product Management Is Responsible for Value
The product manager is responsible for value, through the coordination and delivery of customer experiences, and for making sure that the experience being delivered to customers (and other stakeholders) provides enough value to be “hired” by the user and developed as a sustainable concern, ideally in service of a broader vision.
OK, but sustainable in what sense? It’s a broad goal. LinkedIn product lead and social change evangelist B. Pagels-Minor suggested at least one dimension of this: “Something the user values and repeatedly uses.” In addition to that, for a system of any kind, business or otherwise, to become sustainable, it needs to find repeatable cycles of inputs and outcomes that literally keep the system going. Some of the inputs, usually those related to people or money, need to be at least steady and consistent, if not growing. Whatever you’re building has to keep these cycles flowing.
So think of it this way: any sustaining value to the organization is derived by taking a fair share of the value created for the “customer” (or end user, subject, actor, protagonist).
Responsibility for value helps clarify a few roles that are often confused with product managers: project managers and product owners. Before digging into the building blocks of product management, let’s first get those different titles defined and distinguished.
From the Trenches…
What We Talk About When We Talk About Value
The first person who taught me to focus on “value” as the lodestar of product management was Jay Zaveri, who was my chief product officer at the time, at a start-up called CloudOn, and now runs a product incubator at Social Capital, a VC firm in Palo Alto.
I checked back with him because when people ask what defines value, it’s hard to avoid circularity of the “you know it when you see it” variety. Some people emphasize value to the whole system vs. monetary value, or value that accrues to the owner of the organization alone. However, Jay put it this way: “Value is something special that a person or customer experiences that never existed in the same way for them in the past—it’s a product that is useful, usable, and desirable. Value fulfills a deep need, desire, or want for the customer that they did not even know existed. It’s apparent when something is technologically differentiated (‘cheaper, faster, better’), abundantly available (‘accessible in a way that was only available to few before’), and changes human behavior (in a way that is beneficial to the person or customer).”
When asked who gets this value, he said, “I think people get confused by adding financial metrics as value metrics. Some of those are necessary, but not sufficient, and some are pure garbage. No true value is created by just financial and growth metrics; in fact, we now know there are serious unintended consequences if you are focused only on them. Nothing beats staying focused on true value to your customer—everyone wins!”
A Product Manager Is Not a Project Manager
Product managers are frequently mixed up with project managers. Even people who know the difference will occasionally confuse them in speech. Abbreviations are no help, as both are commonly referred to as PMs with only context making the meaning clear. (Sometimes that context is “this company doesn’t have any project managers” or vice versa; other times, it’s based on the speaker, the team, and the conversation itself.)
Note: In this book, PM means Product Manager
Forget PrM or ProM, too, as potential abbreviations—still no distinguishing characters. And I haven’t met anyone yet who wants to go around calling them ProjMs and ProdMs, or PjMs and PdMs for that matter. In this book, PM stands for product manager.
To make things worse, project management can be one of the responsibilities of a product manager. PMs care a lot about schedules, know how to read a Gantt chart, strive to keep everything on track, and work to hold everyone to their commitments, but this should only be a sliver of their time and attention.
A project manager is a specialist whose subject matter knowledge helps them excel at understanding the fine points, but whose core expertise is keeping projects on track, on time, and on budget, not on defining the value of a product and driving the strategy to maximize that value.
Some project managers do become product managers and when they do, just as with UX designers, they must master a whole series of adjacent skills beyond “keeping the trains running on time.”
Product consultant and author Matt LeMay, co-founder of Sudden Compass, put it this way: “Product managers have both the opportunity and the responsibility to ask ‘why?’”
A Product Manager Is Not a Product Owner
There are core differences between a product manager and a product owner. Although companies often use the terms indiscriminately to mean the same thing or apply their own meaning, for this book, we’ll define them this way:
- A product manager orchestrates the efforts of a cross-disciplinary team to ship software experiences as part of accomplishing strategic business goals.
- A product owner is a person who shapes and guides the engineering team as they develop software. In this model, they are a bit like a very tactical product manager, but one who is primarily focused on tracking tasks. This is an engineer-centric role invented in the absence of true product managers.
Originally, the product owner tended to be drawn from the company’s engineering pool, and some teams used a specialized scrum master role that required training and certification and focused on the project management dimensions of an Agile scrum development environment. Product owners from the engineering team were often a team lead but not always. However, today, there are many different real-world uses of this title in practice, including teams where the primary business stakeholder is called the product owner, or in some government contexts in which the “product owner” is the person ultimately responsible for what the team delivers, more akin to what most businesses would call a head of product or what some academic projects would call a principal investigator.
Product owner activities likewise are often part of the work of a product manager, to the extent that some businesses even treat product owner as an entry- or low-level product manager job title, but again this somewhat obscures the fact that the role originated outside the product management tradition.
Where Did Product Managers Come From?
So where did the tradition of a product manager come from? Why does everyone now seem to speak in terms of “products” at all in this digital age, and why are the people called upon to pull it all together called product managers?
The deep history of product management came from the 20th century concept of marketing, which emerged as an attempt to really understand a potential customer and to be more scientific about measuring the size of the market, the reach of a product, and so on.
(Some of this should sound familiar, as new generations rediscover these ideas and frame them in terms of research, humans, users, experience, experimentation, or analysis.)
The product metaphor itself is a bit double-edged in the internet age. The value it offers is to help focus and concretize the offering you are building to meet the needs of real people, or do jobs for them, or ease their journeys, and so on.
But the very real need to be specific and clear about what you are making (and what you are not) can easily hide the slippery nature of online products, which differ from their industrial counterparts in two major ways that both fall under the heading of “actually being a service”:
- In contrast to physical products in the old “packaged widget in a box on a shelf” sense, most software made these days is SaaS (software as a service), hosted in the cloud, accessible via the web and sometimes with native app clients, and resistant to some of the finite constraints of the manufacturing process (sunk costs, waterfall processes, and limited ability to make affordable changes once the product starts shipping).
- Online products also tend to be services, in the sense of working for or providing assistance to their users in an ongoing way (vs. the concrete experience of using an object or tool).
Regardless of the subtext of the word product and the mental frames that may get dragged along by its use, it has emerged as a way of talking about the product or service being built to meet the needs of real people in a valuable way.
A mid-20th century product manager would have usually been someone with a business background, if not a degree in business, and the earliest digital equivalents inherited some of that DNA.
Product Manager as Business Manager
Product management to this day is perceived by most people as a business discipline or practice. Core to the role of the product manager is the responsibility for the business case, business strategy, and financial viability of a product.
Unfortunately, this stereotype can be negative: for example, the “suit,” the bean-counter, or the boss man who only cares about the bottom line. Yes, there are plenty of people with product titles out there living up to those clichés, but it doesn’t have to be that way. UX designers interested in product management can start by embracing the realities, necessities, and even the joy of business. It doesn’t have to be a dirty word.
When the product manager role first emerged in large software and other tech companies, it came with that business foundation and was often paired with technology or balanced by engineering and perhaps one or more other pillars (such as clinical expertise in a health enterprise, or editorial content in a media company, etc., depending on the nature of the business).
The equivalent role that emerged at Microsoft at the time was called program manager. Today, program management usually refers to a separate discipline dedicated to operational execution of complex programs, generally consisting of multiple ongoing interrelated projects.
These PMs nearly always had MBAs and at times rubbed seasoned engineers and designers the wrong way when “put in charge” directly out of school.
A number of business titles and roles have contributed to how product management is practiced today, and along the way, many people have done product management work under titles such as business analyst, product marketer, customer success specialist, and others. Execution-related business skills, such as project management, decision-making, strategic alignment, and leadership, factor in there somewhere as well.
Sometimes the business aspect of the role is summarized with a saying, “The product manager is the CEO of the product,” but this really isn’t true. The only value of that expression is that in an extremely broad way it suggests that the PM has a business responsibility for their product that is central and critical. The buck stops with the product manager.
But the expression is frankly more misleading than helpful because CEOs control the budget, CEOs can hire or fire the team, and just about everybody reports ultimately to the CEO. Product managers have business responsibilities, sure, but they do not wield anything like CEO power.
From the Trenches…
MBA Not Required
A couple of years ago, I was part of a team led by Matte Scheinker that was charged with raising the product bar at AOL, which had newly spun off from its disappointing Time/Warner merger and was playing catch-up with a decade-old style of web development. One of the things we did was review and update the HR ladders for product managers and UX designers, indicating what level of accomplishment was required across a series of criteria to be hired at or promoted to each level—from associate to VP (with an individual-contributor track leading to principal, at the director level for designers).
The old grid required the product managers to have an MBA. We removed this. The HR department asked if we could make it “MBA preferred,” but we said that this wasn’t the case. If anything, we were MBA-neutral. An MBA might help make a PM better at the business side of the role, or it might not. The time spent getting the MBA yielded one set of experiences and contacts, and the equivalent time spent working yielded another. By itself, the degree didn’t tell us much; however, we didn’t penalize anyone for having an MBA!
Joff Redfern, a VP of Product at Atlassian (and formerly of LinkedIn and Facebook), prefers to frame this aspect of the role as thinking like a general manager. It has some of the same limitations in terms of direct authority, but more closely matches the notion of one person with business-related responsibility for a coherent chunk of work.
Clement Kao of Product Manager HQ points out that GMs also have hiring and firing responsibilities, and he prefers to frame these operational and strategic leadership aspects as being “both coaches and janitors.”
Alongside this business-focused type of product manager, the turn of the millennium saw some managers and lead developers emerge from engineering departments and take on product management roles, sometimes, at first, in the absence of a true product practice, but more generally as a new career path open to engineering managers.
Product Manager as Marketing Manager
Another antecedent of today’s product manager roles lies in the concept of a marketing manager or even a product marketing manager, which is the historic origin of the role in 20th century business practices. Interestingly, the obsession with customer needs that is inherent in product management derives from this fundamental DNA. The obsessions today with addressing markets and achieving product-market fit are other elements of continuity with the marketing orientation of early product management.
Both roles still exist as distinct positions in many organizations. This dichotomy can potentially lead to turf or coordination issues when the product manager wants to approach product/marketing issues from a product-centric point of view and the product marketing manager wants to approach these same issues from a marketing-centric framework.
The article “Product Marketing Manager vs. Product Manager: Where Do You Draw the Line?” (www.productplan.com/learn/product-manager-vs-product-marketing-manager/) does a nice job of delineating these roles and making a case for them being separate, boiling down the essence to this:
- Product management’s role is strategic oversight.
- Product marketing’s role is message creation.
Product Manager as Engineering Manager
Given that the context of all of this is software and technology, science and engineering, on some level any product manager in the internet age is a technical product manager, at least in the eyes of people who don’t work in tech. (In practice, roles defined as technical product managers almost always require a computer science background, or analytical or subject matter expertise aligned with the specific technical approach of the business.)
Engineers with big-picture skills (such as technical design and architecture), a vision for the purpose and value of what the team is building, and the ability to debate pros and cons with other stakeholders to make the case for a specific direction, may find they have greater leverage and ability to steer the ship as product managers.
The influx of engineer-trained PMs into the field started rebalancing the mix of skills expected from product people, with the business sense still a core orientation and now coupled with a deep mastery of the technical issues involved in software development.
When the role is literally advertised as a “technical product manager,” or at an engineering-led company such as Google or any of its many imitators, the hiring process will include several technical interviews involving puzzles and problem-solving questions very similar to the ones presented to programmers, without necessarily requiring the candidate to write any code.
Questions involving sorting, efficiency, algorithmic complexity, etc. reflect product cultures that are heavily centered on engineering skill sets, experience, and frames of reference.
Google is famous for making product managers “earn” engineering resources and buy-in. There’s no guarantee that producing a spec means anyone will build it for you. But Google is also famous for cultivating and empowering product managers. The Associate Product Manager program with its structured training and rotation that Marissa Mayer pioneered there has been widely imitated at other aspiring tech giants.
But again, the type of product manager favored in shops with Google or Google-adjacent cultures tends to be highly technical, hence these brain-teaser type interview sessions that really only make sense as a filter for programming-capable minds and not so much as the far-fetched notion that the PM will routinely debate the “big-O complexity” of several competing algorithmic approaches.
Note: The Technical PM Interview
I still fondly recall a day I spent on the Google campus being interviewed and taken to lunch by a sequence of 11 white men of varying ages and hair colors and degrees of athleticism or geekiness. Many of the interviews were a lot of fun and, to be honest, I’ve always liked puzzles, although not so much under pressure with big bucks at stake. They rotate these questions over time so that you can usually find expired examples with a little searching. For example, I was asked at one point how an algorithm might work to efficiently zero in on the correct five-digit passcode on a keypad, given certain rules or constraints about the numbers (to do with repetition, etc.). As I said, it was almost fun.
And to be fair, nowadays most decent places that pose challenges like these encourage the interviewers to collaborate with you and help you along with your thinking. (If a role like that is your goal and you aren’t a computer scientist, there are books to help you cram. More about choosing your career path in Chapter 2, “Do You Want to Be a Product Manager?”)
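For flavor, here is a hypothetical sketch of that style of question. The actual constraints from the interview aren’t spelled out above, so assume, purely for illustration, that no digit may repeat; the point is the shape of the reasoning, not the specific puzzle.

```python
from itertools import product

def candidate_codes():
    """Hypothetical example: enumerate five-digit keypad codes in which no digit repeats."""
    for digits in product("0123456789", repeat=5):
        if len(set(digits)) == 5:        # the illustrative constraint: all digits distinct
            yield "".join(digits)

def crack(is_correct):
    """Try candidates in order until the (simulated) keypad accepts one."""
    for code in candidate_codes():
        if is_correct(code):
            return code
    return None

# Simulate a keypad whose secret code is known only to this test function.
print(crack(lambda code: code == "40213"))
```

A constraint like "no repeated digits" cuts the search space from 100,000 codes to 30,240, and that kind of back-of-the-envelope reasoning is usually what these interviews are probing for.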
At Yahoo, the product organization was a peer to the engineering organization and equally powerful. From the beginning, Yahoo’s websites were planned and built by people called producers (adopting terminology from media and broadcasting).
Over several years, these jobs gained in complexity and ultimately diverged into two distinct roles, one focused on planning and directing what got built (product managers) and the other doing the actual building (front-end engineers). It actually took some time for the front-end developers to be accepted as peers in the engineering organization, given prejudices at the time against HTML markup and the other front-end languages, but the significance here is that the product role, at least at one of the 90s era internet tech companies (“dot coms”), shared a common ancestor with a programming job.
Fast forward to today, and the role is still a highly technical one. A strong UX practitioner is going to take a serious interest in the technology they are designing with and for, just as an artist takes pains to understand their materials, but at the same time the designer is empowered to explore possibilities without constantly bringing to mind the apparent limitations of the existing technical stack and codebase.
Product managers (and not just “technical” product managers), by contrast, must delve even more deeply into the substance and properties and limitations of the technologies being worked with and never really have the luxury of putting those factors to one side. (PMs also spend much more time working directly with engineers than most UX designers do, which creates further pressure to demonstrate a thorough command of the technical factors that figure into any difficult decision.)
The new hybrid engineering/business product model still left a lot to be desired as practiced, as most companies still follow waterfall and command-and-control software development lifecycle practices, but in the first decade or so of the millennium a few influential practitioners studying what worked well in Silicon Valley started articulating a fresh model of “lean” and “agile” and “fully empowered” product management.
Product Manager as Experimental Explorer
Marty Cagan, a consultant with the Silicon Valley Product Group and author of the book Inspired, made a strong case for empowering the product team to investigate problem spaces, conduct discovery processes, meet customers and prospects face-to-face, and seek to deeply understand what people need and what they will love to bring valuable products to the market.
Rich Mironov, a product consultant, advises companies, takes interim product executive roles (he calls this smoke jumping), and writes and teaches workshops. He, Cagan, and others have sought to raise the bar and to highlight the most effective techniques, approaches, and mindsets, while remaining clear-eyed and cautionary about the institutional patterns and incentives that can push back against these approaches.
For example, an empowered product team should understand the goals and outcomes it is seeking and be engaged in a process of iterating experimentally toward meeting those goals. The team should be capable of communicating to others what the current snapshot of the plan is, in the form of a roadmap (much more on this to come), expressed in terms of what is underway right now, what is coming up next, and what is expected to come later.
Many leaders in traditional organizations balk when they see a roadmap communicated this way, especially if what they had in mind when they asked to see the roadmap was really a set of firm release dates on which clearly defined features would be delivered and launched.
But committing to deliver a feature on a certain date, based on a fully baked specification, is a recipe for disaster. That process is too brittle and fails in the face of new information: data from users, input from stakeholders, and changing market conditions, to name just a few.
This is the same notion from the “lean” movement, popularized by Eric Ries’s book The Lean Startup: The product manager in an empowered team is facilitating a constant cycle that involves learning from what is currently “shipped” and “in the wild,” feeding these insights back into renewed discovery processes driven by qualitative inquiry to explore hypotheses and seek understanding, redefinition of the problem space, identification of further opportunities or experiments worth exploring, decisions about what to build or fix next, and shipping to start the cycle anew.
This cycle, shown in Figure 1.1, can be modeled in great detail but is most often reduced to “Build, Measure, Learn.”
Figure 1.1
“Build, measure, learn” is a simple but powerful model that lies at the heart of lean product management, with its bias to action and emphasis on learning and experimentation.
It’s worth noting that despite the title, and despite the fact that it’s a cycle, you generally do not start with building. You start by measuring (initially assessing) something, learning from what you find, and only then building a thing.
This process of constant learning, feeding back into discovery, redefinition of problems and opportunities, and iterative design is applicable in the early stages while prototyping new ideas, as well as throughout the life of a product. The approach is still gaining adherents (for example, people like Jeff Gothelf and Josh Seiden have worked hard to bring lean ideas about experimentation to the UX community). Outside of innovative tech companies and start-ups, though, the idea of a product manager as an experimenter (or “mad scientist”) is not as fully distributed and accepted.
But all product managers work with data and spend hours every week studying it closely. Whatever the cycle of learning and iteration, the job cannot be done without accurate signals about what is working and how the software is actually being used, and this focus on managing what you can measure carries through all of these strands mentioned so far—business, engineering, and entrepreneurial experimentation.
The most recent archetype to contribute their superpowers to the ideal product manager is one you should be familiar with: the designer.
Product Manager as Creative Artist
The experimental product manager is already more of a creative type than a simple number cruncher or bean counter. This person is someone who is intensely exploring possibilities and looking for ways to discover new solutions to acute problems.
The rise of user experience design in all its various forms, alongside the business-school friendly notion of “design thinking,” coincided in the culture with widespread awareness of the creative mythology of Apple Computer, the heroic figure of Steve Jobs, and, for design aficionados, Jobs’s collaboration with industrial designer Jonathan Ive.
Suddenly, creative founders were getting funding for their own start-ups. Other start-ups were making their first design hires much earlier in the process. Design (or “design thinking”) offered proven methods for harnessing creativity and inventing innovative solutions to interesting problems.
Product management evolved as well. At first, PMs paid lip service to UX design, made their own wireframes based on zero research, and asked designers to, more or less, color them in. But now product practitioners take user experience research and design seriously as core disciplines with invaluable, necessary skills and techniques. They also foster mindsets that are required to develop product experiences that people will love and return to again and again.
The lean movement had already shifted its focus emphatically onto the customer (or potential customer). Conveniently, user experience research and design revolve around this exact same obsession! UX has methods and traditions for learning from customers, and it provides systems, models, and tools for exploring and communicating solutions.
Design also excels at redefining problems and questioning prior assumptions, and much like UX design leads, product managers are charged with inspiring and rallying creativity in others. So alongside the business heads, coders, and founder types turning into product managers, some user experience designers, managers, directors, and VPs now jump to the adjacent product track as their careers evolve.
Three Other Traits Shared by All Great Product Managers
Great product managers tend to have the following personality characteristics:
- Curious
- Connecting
- Courageous
They are curious almost to the extent of feeling like a busybody, being nosy, wanting to “know all the things,” keeping tabs on everything, and being incredibly “high context” and thorough about understanding.
They are connecting in the sense of constantly “connecting the dots” to form the big picture, orchestrating the performance, keeping people in the loop, providing the glue or lubrication or the circulatory fluid or whatever metaphor you prefer for the combination of emotional intelligence, “soft skills,” and implicit ties that enable teams to thrive and work well.
And they are courageous in the sense of being brave enough to take risks, make mistakes, face problems square on, evaluate failures coldly, and learn ferociously from every experience, good or bad. This courageous behavior sets a tone that encourages others to try harder and seek more difficult goals.
So What Does a PM Do?
OK, so now you know what product managers are responsible for (value and focus), where product managers came from (from all over), and what makes a good product manager (business sense, entrepreneurialism, technical chops, experimentalism, creativity, inquisitiveness, emotional intelligence, and courage—easy peasy, right?).
But how does a PM apply that mix of skills and aptitudes to fulfill these responsibilities? What are the primary activities, processes, and tasks of a product manager, or in other words, “What exactly does a PM do all day?”
For a day in the life of a product manager, check out the section “A Typical Day” in Chapter 2, “Do You Want to Be a Product Manager?”
Key Insights
- Product managers are responsible for value. A sufficiently valuable product will delight customers and financially support the business that makes it.
- Don’t confuse product management with project management, though product managers do usually have some project management responsibilities.
- A product owner fills a role in Agile scrum development practices and is not the same as a product manager, but some product managers do fill this role as well.
- Product management originated as a business discipline, influenced by product marketing, business analysis, and program management, among other practices studied in MBA programs.
- In the software development world, many engineering managers have evolved into product managers.
- In Silicon Valley (writ large), product management has taken on the entrepreneurial virtues of experimentation and exploration, and the “build, measure, learn” cycle.
- Today, it is becoming more common for UX design practitioners and managers to move into product management roles, bringing the creativity and innovation of design crafts with them. (You are here.)
- Great product managers are (benign) busybodies who constantly weave together the motley strands that make up software and are brave enough to lead teams into the unknown in search of insights and ever greater value.
Sample Chapter: Life and Death Design
This is a sample chapter from Katie Swindler’s book Life and Death Design: What Life-Saving Technology Can Teach Everyday UX Designers. 2021, Rosenfeld Media.
Chapter 1: A Designer’s Guide to the Human Stress Response
The human stress response is key to our survival as a species. A human without a stress response would never avoid pain, defend themself from an attack, or invent solutions to overcome problems. Stress is unpleasant, but that unpleasantness has a purpose. It’s by design. It’s meant to drive humans away from harmful things. Stress is a powerful motivator and a major driver of human behavior, which is why it’s critical that designers of all types of products, services, and experiences understand the ins and outs of the human stress response.
Stress takes many forms. There’s eustress, which is a healthy, beneficial type of stress experienced when engaging in enjoyable, challenging experiences. There’s also hypostress, caused by boredom, that drives the person to take action to relieve the tedium. But when most people think of stress, they think of distress, which is a stress response in reaction to negative or overwhelming things. Under the umbrella of distress are the two most common types of stress that scientists study: acute stress, stress in response to a momentary crisis that quickly passes, and chronic stress, which lasts weeks, months, or years.
Although the causes, severity, and durations of different types of stress responses may vary, at its core, the basic biology of stress is always the same. The stress response has stood the test of evolution for millions of years with the core processes, hormones, and reactions remaining largely unchanged from the time of our earliest mammalian ancestors. In fact, the core difference between the human stress response and the stress response of animals is not a change to the core functionality at all but rather an additional feature—logic. The development of the prefrontal cortex, the part of the brain that handles logic and reason, gave humans a way to interrupt, redirect, and suppress their instinctual stress reactions. This additional layer of control means that humans’ reaction to stress is significantly more complex than that of other animals, but when you understand the driving forces at the core of the stress response, patterns of behavior emerge. It is through understanding these patterns and better anticipating users’ needs in these moments of crisis that designers can make a positive impact.
This book will focus primarily on the acute stress response, because the effect of stress on human behavior is most pronounced in moments of peak stress. The extreme reactions that humans experience during acute stress make it an ideal teaching tool: it’s easier to break down, share relevant examples, and establish applicable techniques for designers. And, once you understand how to create designs that support users through an acute stress response, the lessons can be adapted and applied to any other type of stress.
What Happens in a Moment of Acute Stress?
During an acute stress response, the body diverts energy from some physical and mental functions to supercharge others. These changes favor qualities that are associated with “primal” attributes—strength, speed, and aggression. This response sharpens senses, intensifies focus, and drives fast, intuitive decision-making. But these advantages come at a cost: the loss of fine motor control, a suppression of impulse control, a degradation of rational thought and higher-order thinking like reading and math, and a loss of empathy and creativity. The things that people value most about their humanity are momentarily set aside in a raw bid for survival.
Some industries have invested heavily in research around designing for the human stress response. For example, designers who make airplane dashboards, weapon interfaces, and medical devices understand that their users are likely going to be in intense, high-pressure, life-and-death situations when using their products, and they invest in the research necessary to ensure that their products allow humans to function at peak performance. But flying planes, fighting wars, and performing surgeries aren’t the only situations where humans experience stress; far from it.
Consider the visceral reaction someone might have to logging in to their banking app to find their account unexpectedly overdrawn, or receiving the devastating news through a social media app that a friend has died. Or think about how often people lose their temper at chat bots or threaten to throw their laptop across the room in frustration over some technical issue.
Of course, digital interfaces aren’t just the cause of acute spikes in stress. They are also the place where people often turn for help during a moment of extreme distress—for example, dialing emergency services after a car crash, checking for updates on an approaching storm, frantically Googling for medical information to determine if they should take their child to the ER, or ordering a taxi to escape a date that’s turned threatening.
Because technology has become so integrated into every aspect of people’s lives, it is a near certainty that, over the course of your career as a digital designer, you will design products that will be used by someone during or directly following a moment of crisis, even if that is not the product’s primary intent. By studying and understanding the human stress response, you can better anticipate people’s needs and behaviors in those moments, allowing your product to support them in the moments that really matter.
In this chapter, we’re going to start with an overview of the acute stress response, which includes five major stages:
- Startle reflex
- Intuitive assessment
- Fight, flight, or freeze
- Reasoned reaction
- Recovery
In order to illustrate all the steps of the response, we’ll follow the tale of a woman named Amy (named for the neurological star of the stress response—the amygdala) as she experiences a frightening event. Amy and the following story are fictional, but the science underlying Amy’s adventure is as real as it gets.
Amy’s Accident
Amy is about to have a very bad morning. She’s driving to work—it’s a route she’s followed a hundred times before. She’s driving on the kind of mental autopilot only daily commuters can achieve. Engrossed in her podcast episode, Amy begins to execute a standard maneuver, switching from the center lane to the left lane of a three-lane highway, when an unexpected movement is caught by the very edge of her peripheral vision. This is when the trouble begins.
The unexpected movement triggers the first phase of the acute stress response, the startle reflex. Her system is flooded with adrenaline, a stress hormone that supercharges the body for survival reactions like fighting or fleeing danger. This unconscious reflex causes her to simultaneously turn her attention toward the threat while moving her body away from it. Her arms jerk the wheel sharply to the right in order to move herself and her car out of the path of the incoming object.
Now that Amy has turned to face the approaching object, she enters the second phase of the response, intuitive assessment. She effortlessly and instantly identifies the mystery object as a motorcycle recklessly speeding in the left lane. Intuitively, she assesses the trajectory of the motorcycle compared to the trajectory of her own vehicle and, without the need for any actual math, she correctly calculates that she is no longer in danger of colliding with the driver. However, she is so focused on avoiding the cyclist that she doesn’t realize her reflexive motion, fueled by adrenaline, was an overcorrection, sending her car veering into the right lane. BAM!
Amy’s front bumper clips the side of a minivan. Her vehicle bounces back, thrown into the center lane. This is the moment when Amy officially enters a full fight-or-flight response, the third stage of the acute stress response. This stage is fueled by even more adrenaline along with a healthy dose of cortisol, another important stress hormone that increases focus, drives immediate action, and prioritizes fast, intuitive decision-making over logic and reason. She wrestles with the wheel to keep from rebounding into the motorcycle on her left. Supercharged by adrenaline, her foot slams down hard on the brake. She hears the squeal of tires from multiple vehicles and sees cars swerving all around as they try to avoid rear-ending her. In her panic, Amy is frozen in her seat. Only her eyes move, darting between her mirrors and windows, as the traffic slows to a crawl around her. Miraculously, no additional crashes occur.
A robotic voice fills the car, “Vehicle crash detected. Connecting to OnStar Emergency.” Amy remembers that OnStar crash support service came included with her car purchase and is equal parts embarrassed and relieved when a few seconds later a “real human” comes on the line. “This is Randall with OnStar. Is anyone injured?” This rational question kicks Amy’s brain out of the panic mode of fight or flight and into the fourth stage, reasoned reaction. She reports that she is uninjured, but she is unsure about the people in the other vehicle. Centered by Randall’s clear, step-by-step directions, Amy follows his instructions to pull her badly damaged car to the side of the road and confer with the other driver. Luckily, no one in the van is injured either. Still, Randall offers to send a police officer to the scene to create an accident report for the insurance company and Amy agrees.
By the time Amy parks her car on the shoulder, her heart rate and breathing have almost returned to normal. Amy has now entered the recovery period, the fifth and final stage. Since her response to the accident involved very little physical exertion, Amy still has plenty of leftover adrenaline in her system, causing shaking hands and jitteriness. While she waits for the police to arrive, she channels her restless energy into using her phone to do all kinds of things: taking down the other driver’s contact information, texting her boss to tell her she’ll be late for work, and taking pictures of the damage to her own car and the minivan. She even uses her insurance app to file a claim and order a tow truck for her car, and then orders an Uber for herself to the nearest car rental agency. Although it will be another hour or two before the effects of the adrenaline and cortisol fully wear off, Amy, her car, and all the other people and vehicles involved in the incident will make a full recovery from this stressful morning.
Amy’s acute stress response both helped and hurt her during this encounter. First, it helped her reflexively avoid a collision with the motorcycle, but this caused her to overcorrect and hit the minivan. Next, it helped her wrestle the car back under control, though in bringing it to a sudden stop, she nearly caused a pileup. And finally, it took her back to rational thinking in the end, allowing her to execute a flurry of activity in the aftermath of the event.
At each step of the response, her ancient instincts had to work with multiple types of modern interfaces: digital, physical, and voice-controlled. Some of those interactions were more successful than others. Different parts of the stress response require different design approaches. Understanding those various needs is critical to creating designs that can properly protect and empower a user in a moment of crisis.
Startle Reflex Considerations
When you are designing for a startle response, it’s helpful to remember that a startle response is a powerful force of nature, allowing humans to respond to danger with lightning-fast reactions. You can attempt to harness it through your designs, but this requires your interface to be in just the right place at just the right time. In Amy’s story, her hands were on the steering wheel at the moment she was startled, so she was able to use the wheel to jerk out of the path of the motorcycle. This reaction is exactly why drivers are instructed to keep their hands on the wheel at all times while controlling a vehicle. Physical interfaces like steering wheels, buttons, and knobs tend to be better at capturing these speedy reactions than touchscreens, but there are design lessons that can be borrowed from these analog controls to maximize the responsiveness of all kinds of digital interfaces including touchscreens, which we’ll explore more in Chapter 2, “The Startle Reflex.”
It’s also important for designers to keep in mind that, more often than not, startle reflexes are just a nuisance. Either they are false alarms, or they cause someone to overreact, like Amy careening into the right lane after jerking away from the motorcycle. Designers should always take steps to minimize false startle moves, or, if prevention is impossible, put systems in place to protect users from themselves in these uncontrolled, reactionary moments. We’ll dig into specific techniques for accomplishing these goals in Chapter 2.
Intuitive Assessment Considerations
When Amy turned to assess the threat level of the speeding motorcycle, this assessment, like all intuitive knowledge, happened instantaneously in her subconscious through a process of pattern matching. Even though she had never seen that particular vehicle before, she was able to match it to the category of objects she had learned to call “motorcycles.” Similarly, she was able to predict the motorcycle’s trajectory intuitively, based on how she had seen similar vehicles move in the past.
It’s important to note that nearly all of the driving maneuvers Amy executed throughout this story were powered by her intuition. She never once stopped to calculate how many degrees to turn her wheel to avoid a collision. All of her interactions came from an intuitive understanding of the car’s interface, which she had developed through years of repeated use.
Note: A Dangerous Learning Curve
It takes time and practice to develop reliable intuition for tasks as complex as driving. This is why the first 18 months of driving are so dangerous for new drivers, with car accidents topping the list of causes of death among American teenagers.
Designing for intuition can be a mixed bag. On the one hand, a truly intuitive interface can make technology feel almost like an extension of the user, allowing the person to focus all of their conscious efforts on the problem they are trying to solve. However, there are times when creating interfaces that rely too heavily on intuitive decision-making, unrestrained by fact checking or logic, can lead users to make hasty decisions that are overly influenced by harmful biases and stereotypes. When you are designing for intuition, it is critical to understand the types of environments where this subconscious ability to match patterns and automate decision-making is helpful, and the kinds of situations where users need to have their bias checked by the systems they use. We’ll explore different techniques for maximizing the benefits of intuition and minimizing the drawbacks in Chapter 3, “Intuitive Assessment.”
Fight, Flight, or Freeze Considerations
When a fight-or-flight response is triggered, the user’s rational mind is no longer in charge. No matter what their original objective was, now survival is the only goal. Often, users forget about technical solutions altogether in this state, falling back on more primal methods of dealing with danger. But, occasionally, users are forced through circumstances to interface with technology while in the grips of panic.
During Amy’s accident, her use of technology (AKA the car interface) during her fight-or-flight response was highly inconsistent. On the one hand, the fight-or-flight response enhanced her physical strength in a way that helped her get the car steering back under control after hitting the minivan. But the instinct to slam hard on her brakes, a form of the freeze response, actually increased her risk of a second collision with the cars behind her. This kind of unpredictable performance is very common when someone is panicking.
The best thing that technology can do when someone is in fight-or-flight mode is to protect them from harm and get them back to a rational state of mind as quickly as possible. For someone triggered to flee a situation, always allow them to exit or quit if they wish. Additionally, consider ways to provide clear, unobstructed paths to help, preferably human help. For someone in fight mode, look for ways to deescalate the situation. For those who are frozen in fright, give clear, specific direction to help them snap out of their indecision. (For example, the type of direction the OnStar operator Randall provided for Amy.) All of these techniques and more will be explored in Chapter 4, “Fight, Flight, or Freeze.”
Reasoned Reaction Considerations
More often than not, to survive and thrive in the modern world requires more logic than instincts. Well-designed systems and services can help users act rationally in a stressful situation by taking complex, multifaceted procedures and breaking them down into step-by-step processes. As illustrated in Amy’s story, companies like OnStar will design talk paths for their operators that allow them to triage the information-gathering process quickly and efficiently in an emergency. The goal is to ask about injuries first, and then address safety concerns like getting out of traffic, before moving on to more mundane issues like accident reports and insurance claims. Having these conversation flows written out and streamlined by the design team ahead of time allows the operator to stay calm and focus on the needs of the person in the crash, while executing each step correctly and in the right order of priority.
Experiences that are well crafted for reasoned response help users focus on the most relevant information for the task at hand and make well-informed choices at every step in the process. Techniques for designs that support rational decision-making under stress are explored thoroughly in Chapter 5, “Reasoned Reaction.”
Follow the Signal
Your senses capture raw data from the outside world just the same way a microphone, video camera, thermometer, accelerometer, or other electronic sensor might. And just like a machine, incoming data from each of your senses are turned into electric signals that travel through the brain for processing. When an acute stress response is triggered by something the senses can see, hear, or feel, that signal takes a very specific path through your brain and body, which is mapped in Figure 1.1.
Figure 1.1
A map of the signals related to the acute stress response as they travel through the brain and body.
SENSORY THALAMUS: The sensory thalamus is essentially the switchboard for the senses. Normally, it sorts incoming information from the senses and passes it on to the appropriate parts of the brain for decoding. But when it detects something sudden and unexpected, it sends out a special super-fast signal to the amygdala.
AMYGDALA: The amygdala is the central coordinator within the sympathetic system, the body system that controls the fear response. It’s located in the mid-brain sitting just above the spinal cord. When the amygdala receives the emergency signal from the sensory thalamus, it sends two signals, one to the hypothalamus and one to the hippocampus.
HYPOTHALAMUS: The hypothalamus is a small but mighty region at the base of the brain that controls a number of functions; the most critical to the stress response is the release of the hormone adrenaline that triggers the startle reflex. All of this happens within about 100 milliseconds (a 10th of a second) from when the motion was caught by the peripheral vision.
HIPPOCAMPUS: The hippocampus holds experiential memories, that is, memories gained through seeing and experiencing things, as opposed to facts or concepts learned in other ways. If the suspected threat matches a memory of something dangerous, then the amygdala will trigger a full fight-or-flight response by activating the HPA axis.
HPA AXIS: HPA stands for hypothalamus, pituitary gland, and adrenal gland, which, when triggered, flood the bloodstream with the stress hormones adrenaline and cortisol. When this happens, energy is redirected from nonessential systems like digestion, reproduction, and immunity to supercharge the circulatory and respiratory systems—the moment when the fight-or-flight reaction truly sets in.
PREFRONTAL CORTEX: Logic and reason come from the part of the brain at the very front of the forehead called the prefrontal cortex. Although its response time is a fraction of a second slower than the hippocampus’s, it has the ability to overrule the more primal survival responses if it thinks the body is taking the wrong actions.
PARASYMPATHETIC SYSTEM: When the crisis is over, the parasympathetic system sends a signal down the spinal cord, telling each body system to return to normal. The circulatory, respiratory, and immune systems bring the heart rate and breathing to normal. The release of acetylcholine restarts any temporarily suppressed processes in the digestive, reproductive, and other nonessential systems.
Recovery Period Considerations
Users often turn to technology for help in the aftermath of a stressful event. In Amy’s story, as soon as she resolved the immediate threat, she was on her phone engaged in a flurry of activity.
While the physical aftereffects of adrenaline, like shaking hands, may be of some concern to interface designers, of much more relevance are the lingering mental effects of cortisol during this period. Cortisol drives action. In low-to-moderate levels, it is extremely helpful in terms of focus and motivation. During an acute stress response, however, cortisol floods the brain, intensifying focus to the point of tunnel vision and compelling a person to address the immediate threat to the exclusion of all other goals. High levels of cortisol can cause poor decision-making, reduce creative problem solving, and increase aggression in users. And while the effects of an adrenaline rush rarely last for more than an hour, cortisol can take several hours to be filtered out of the bloodstream.
These negative mental effects have major implications for all kinds of designs used in the aftermath of a stressful event. Luckily, designers can help shorten this recovery period by designing aesthetics that calm the person and lead to empowering actions. In Chapter 6, “Recovery,” we’ll examine all the techniques that designers can leverage to help users weather the recovery period.
Finding the Peak Stress Points for Your Users
As a designer, being aware of the potential moments of crisis or stress for your users is critical to designing a supportive experience. Here are some questions to ask yourself or discuss with your colleagues to help you find the moments of peak stress within the user experiences you design:
- What are the circumstances where you must deliver bad news to your user? (Example: Service outage or telling a customer a bill is past due.)
- What are the points where the user may discover bad news that your company/digital product is not necessarily aware of yet? (Example: Their account was hacked.)
- What are some upsetting things that might have happened to someone that would cause them to turn to your company/digital product for help? (Example: A customer is in a car crash and uses an insurance app to order a tow truck.)
- If a bully or domestic abuser were trying to use your app/product to harass or harm one of your users, how might they use your digital product to do so? (Example: Abuser cancels power to victim’s apartment through a previously shared utility account.)
For additional insights, it can be helpful to map these stressful moments. These stress points can be added to a high-level strategy document like a customer journey map that illustrates all of the stages a customer goes through when interacting with your company. Or, if your product is already designed, you can map them directly to your interface designs. Are there certain screens that handle a lot of stressful moments? Or if you are designing voice-based interfaces or diagramming service experiences, are there places within your flow of user interactions that are hot spots for stress? These can be the first places you start to apply the lessons in this book.
It can also be helpful to interview users who have gone through the types of stressful situations you identify and ask them about the type of help they wish they had had in their moment of crisis. Performing a contextual inquiry, which means observing someone as they actually experience and deal with the stressful experience, would generate even richer and more accurate insights—however, many crises are rare, dangerous, emotionally traumatic, or unpredictable, making direct observation impossible or inadvisable. Additionally, it’s important to be especially mindful of research ethics when interviewing users about past upsetting experiences. Make sure that you are giving them proper compensation, providing a safe environment, and not asking them to relive their trauma.
Critical Information: The Human Stress Response
An acute stress response is triggered when a human is faced with an immediate threat. When designers understand the specific neurological and physiological effects of the stress response, they are better able to create experiences that support users in critical moments. See the phases of the stress response in Figure 1.2.
Figure 1.2
The five stages of the stress response elicit different emotions, actions, and neurological activity.
Sample Chapter: Changemakers
This is a sample chapter from Maria Giudice and Christopher Ireland’s book Changemakers: How Leaders Can Design Change in an Insanely Complex World. 2023, Rosenfeld Media.
Chapter 1
The Imperfect Future
When Neil Armstrong landed on the moon, the concept of progress was almost universally popular. Few people protested the arrival of new vaccines, faster food, advanced appliances, or more powerful cars. The envisioned future had its own neighborhood in Disneyland and popular television shows imagined even more transformations on the horizon.
In this context, “change” was synonymous with “improved.” New companies were created to commercialize inventions, while older companies focused on what could be enhanced, remodeled, or extended. Ambitious graduates sought emerging fields like computer science and genetic engineering as sure paths to prosperity, and young children pretended to live in a world with flying cars and robot dogs. Underlying all this was a promise—inferred but nonetheless clear: change, and those who led it, would deliver a positive trajectory of technical, social, and organizational advancements that consistently produced benefits and left all longing for more.
Nobody is living on the moon right now, but some of the envisioned advancements arrived as promised. The 21st century started with flip phones, cable TV, and encyclopedias on CD. In barely 20 years, phones morphed into supercomputers with immediate access to near infinite knowledge. Billions of people rose up from extreme poverty, and medical advances improved life on every continent. World leaders communicate on Twitter for anyone to follow and women, people of color, and LGBTQ folks finally have a modicum of power.
This is progress by any definition, and much of it was on display in that long-ago Disney exhibit, but its trajectory has not been smooth, and its benefits are countered by unanticipated outcomes. Tech behemoths barely out of their adolescence connect the world beyond physical barriers, but also distribute a daily tsunami of misinformation and lies. People worry that their phones track them, their smart homes spy on them, and their personal data is being sold to the highest bidders. Employers are likely to reorganize every other year, and employee skill sets need constant upgrading because a replacement can come from anywhere at any time for almost any reason and may not even be human.
Change now happens so pervasively, so exponentially fast, and with such erratic impact that it is as likely to cause stress as it is to bring delight. Ask a cross section of people how they feel about change and you’re unlikely to hear the uniform support it once enjoyed. Instead, from those who lead change, you’ll hear that it’s inspiring and frustrating, stressful and satisfying. From those being changed, you’ll hear that it’s needed and threatening, beneficial and frightening. Unquestioning support has disappeared, along with the assumption of a positive trajectory.
But change is needed—perhaps even more and faster. It’s needed for existential problems like managing climate change and morally important issues like administering social justice. Organizations need change in order to stay relevant and competitive. Institutions and communities need change to help shift them to new priorities and to embrace new tools. Governments need change to help them meet a range of challenges from economic security to environmental sustainability and more. Change is needed at all levels and in many diverse circumstances. It just needs to happen in a way that creates more benefits than damage.
What’s in the Way?
The 1960s vision of progress as a smooth flow of relentlessly positive innovation was certainly a fairytale. It focused too keenly on optimistic outcomes and ignored challenging realities. But it is worth asking why change that’s imagined and desired by so many people rarely happens as envisioned. Why do innovations disrupt and distort social norms instead of fitting seamlessly into everyone’s lives? Why are business, community, and political leaders blind to obstacles that result in unintended consequences? Why are high-level goals like peace, inclusivity, and an enhanced experience of life considered out of reach?
These are deep and difficult questions to answer. Somewhere a grad student is developing robust arguments and reams of evidence supporting a well-rounded theory of how progress inevitably descends into chaos. That analysis will be enlightening, but in the meantime three obvious suspects stand out: a fragmented world, intractable problems, and outdated approaches to making change.
A Fragmented World
When significant cultural or technological transitions take place, they often produce messy, conflict-ridden divides. Consequential advancements, like the advent of printing, electricity, or computerization, deliver significant benefits but also cause disorder and resistance as they impact people’s lives. Some people learn of and adapt to a change early, while others remain unaware of what’s happening or actively resist adapting. As a result, the world fractures into different segments where some people have advantages that others don’t have, and some fear problems that others don’t see.
It’s hard to imagine an era more fragmented than this one. News, books, and media have splintered into a mosaic of perspectives that all reflect a different version of truth. A constrained set of respected thought leaders has been replaced by an army of influencers, and shared experiences are increasingly rare. At least five generations of adults compete for relevance and authority in organizational hierarchies, on moral issues, and through lifestyle choices. New definitions of gender vie for acceptance, and new recognitions of sexual preference confound traditional perceptions. On a more foundational level, some world cultures are living in the 21st century, while others are barely out of the 10th. In some countries, women are regarded as equals; in others, they are equated with pets. On some roads, people drive Teslas; on others, they ride donkeys.
Fragmentation inevitably leads to tribal inclinations urging people who think or act similarly to band together and try to ignore those who are different from them. Forming a tribe or sect may calm tensions, but it creates an almost impermeable barrier to problem solving and progress because it’s impossible to gain consensus. The chasm between world views is too wide to cross.
Intractable Problems
This increasingly fragmented world is also awash in problems that resist straightforward resolution. They are suitably labeled as “wicked,” “untamed,” or most recently “VUCA” (volatile, uncertain, complex, and ambiguous). Where the cause of a traditional problem can be isolated and analyzed, a complex problem is linked to multiple other systems, each of which contributes new inputs and intricacies. Often, the true source of a wicked problem is hidden or misunderstood, and cause and effect are extremely difficult to identify or model. A solution may require imaginative leaps and several iterations to get right.
Each era has its challenges, but as technology bleeds into every aspect of life and interconnects people, their thoughts, and their things, straightforward problems become multidimensional and increasingly daunting. Solutions need to address not only the stated problem, but also its context and potential ramifications. Often, the only course of action is a “best guess.”
Suspension bridges and skyscrapers were feats of incredible engineering in their time, but the underlying rules of physics as they applied to construction were reasonably well understood. Compare those instances to the current development of artificial intelligence, which is being implemented before it is fully understood, or climate change, which is exceeding the extent of scientific knowledge. Similarly, conquering smallpox and decoding DNA required amazing ingenuity and perseverance, but the pioneers who led these pursuits could count on a relatively receptive population.
Compare those accomplishments and their public receptions with the more recent need to create a Covid vaccine. To experts around the world, the difficulty was scientific in nature: find a means of protecting humans from a deadly virus. They did it in record time, employing novel technology and admirable collaboration, but that didn’t turn out to be the full problem. Other problems branched out from the original. Some were predictable, such as how to reach people in remote areas or how to make the vaccine affordable to poorer countries. Some were not predictable, such as how to convince celebrities that the vaccine wouldn’t reduce male virility or that horse paste was not a viable alternative.
There are wicked problems galore right now in all countries, among all communities, and at all levels of organizations. Should nations pursue democracy, socialism, or authoritarianism? Should cryptocurrency become the foundation of all transactions? Should populations be compensated for past discrimination? Should the prices of life-saving drugs be subject to market forces? Should work be done remotely, in offices, or a hybrid of the two? Any solution to these problems will have upsides and downsides. Any solution will have people in favor and people opposed. Any solution will result in unanticipated consequences.
Outdated Approaches to Change
Contrary to what some people think, significant or systemic change doesn’t just happen. Minor change can occur quickly and relatively easily if a need is urgent enough or an opportunity rich enough. But more notable change requires extensive effort, substantial resources, and highly capable leadership. Traditionally, it follows a process or a specific approach endorsed by change management specialists who frame the way an organization defines and implements any desired transformation. Interestingly, these approaches tend to reflect the function or specialty that businesses value most at that time.
For example, when Disneyland showcased progress in the 1960s, the approach to change endorsed by most executives mirrored a mid-century emphasis on manufacturing: change was carefully planned and precisely executed in an assembly line fashion. To modify anything meant to “freeze” the current state, make the revision, and then “unfreeze” it. Leaders were similar to military commanders. Whatever top executives decided, everyone else had to follow.
When companies shifted from manufacturing to service offerings in the 1980s, finance became the dominant function. Change was newly branded as “re-engineering,” and was sought as a way to improve capital allocation and increase per-share value. Leaders were strategic visionaries. Top executives still made most decisions, but employees wanted to follow them so as to not get left behind.
Corporate and social attitudes toward change management morphed again in the 1990s, as the Web spread beyond Silicon Valley. Echoing the attributes of startup culture, companies were birthed in garages, and products were created overnight. Change became innovation—a concept that promised to renovate tired companies into transformative juggernauts capable of keeping up with the dizzying pace of technological growth and global competition. Leaders were inventive renegades who moved fast and broke things. Everyone else hoped they could be like them.
The process of making change evolves
Each of these different approaches to making change depended on the tools and mental models common to corporations and their dominant business function at the time. Each new proposal developed different philosophies and theories of how to help people adjust and how to ensure that the envisioned outcomes turned out as planned, and for a while each had some success. But notably, each approach began to lose its dominance and relevance as the times, the tools, and the techniques changed.
That’s happening now and it’s adding to the conflict and confusion. Most organizations and communities continue to push change in a siloed, top-down manner, ignoring the social, educational, and technological shifts that have made people more independent and less willing to blindly follow leaders. Outdated approaches to change remain rigid and rule-bound, despite technology that enables expansive connectivity and fluid communication. While popular culture and younger generations emphasize the importance of empathy and inclusion, these entities fail to consider how improvements for some might worsen situations for others. They rush experimentation, turning invention into a contest where the biggest and boldest initiative wins, regardless of what is lost in the race.
This misalignment between expectations and delivery leaves individuals struggling to understand their role and responsibility in making change. Empowered and overinformed by the web, cell phones, and social media, many people lack any training on how to use these gifts effectively. In an effort to contribute, they spread their thoughts like seeds, casting ideas and complaints broadly across networks, hoping to land on fertile soil. Others give up and retreat. They consign the future to forces beyond their control, crediting a deity, science, historical precedent, or random luck. With no role to play, their only responsibility is to warn or critique and hope for more favorable trends.
Startup culture is still popular, and innovation is still a potent buzzword, but the environment of change is rapidly evolving. Respect for authoritative, top-down approaches is losing favor to laterally connected, distributed collaborations. Wealthy tech gurus are still idolized, but so are humbler, values-driven leaders who spur collective action. Strategy documents are shrinking to the size of posters, and grand visions are being diminished by an explosion of experimentation. Little improvement is possible if leaders and the teams that support them continue to promote change haphazardly and impulsively as though everything is a startup. Nor is there any benefit attained when people assume they have no power to alter course and resign themselves to whatever happens. In a fragmented world struggling with complex challenges, the current approaches to making change have largely stopped working.
What’s Next?
The transitional era fragmenting the world has only begun. People everywhere are still digesting the impacts of the internet, and it’s about to be served up artificial intelligence, virtual reality, and a whole new version of the web. Climate change, the needs of emerging populations, global conflicts, and the god-like powers of gene-editing will provide a steady stream of new complexities. These conditions are impossible to ignore or to modify, but how change is implemented is a choice and there’s some early evidence of how it might be evolving.
Chef José Andrés noted the patterns of poor disaster relief response amid an abundance of underutilized food resources. He tried working within existing hierarchical structures, but quickly realized that the problems were often due to the regulations and bureaucracies that enforced them. As an alternative, he founded World Central Kitchen, and led an ad hoc community of chefs and food providers to rapidly focus resources exactly where needed without hierarchies, strategic plans, or ROI concerns. He continues to iterate, learning from failures or flaws.
Black Lives Matter united a like-minded but highly diverse population of activists, with no headquarters, no central planning, and no official leader. Noting that nothing had changed despite decades of promises, they identified the problem of invisibility and found ways to gain attention. Using social media, they united powerful coalitions of people who shared the same experiences and frustrations—not only Black people, but people of all ethnicities. This collective action has prompted more change in response to systemic racism than traditional organizations achieved while pursuing that identical goal for over a hundred years.
Web3 is being developed by a devoted crowd of engineers and community leaders working together to shape the future of the internet. Ridiculed by some as naive and delusional, they are steadily building Web3—a distributed, iterative assemblage of networks with the potential to disrupt and reorder nearly everything. The loose collection of believers building this space has noted the growing desire for transparency, distributed ownership, and authentication. They share an appreciation for the problems that technology has created and seek to address them. They work collaboratively in different roles and with different approaches, iterating as they go. The ecosystem they are building is nascent, but its potential has attracted some of the brightest minds of this generation and earned massive venture investment.
None of these organizations are perfect in the expression of their vision or execution, but each is making change in a new and significant way that’s worth unpacking. Each of these examples hints at a new approach to organizational or community change that suits the time and fits the evolutionary trajectory of corporate change management. These are approaches that don’t slow change down but ensure a higher-quality outcome. They leverage newly connected communities and help people channel their passion and ability. These approaches pursue methodologies that encourage a deeper appreciation of people and their perspectives. They embrace processes that pay attention to potential downsides and consciously avoid disastrous outcomes. They offer an improved approach to change that uses the tools and technology needed to build an inclusive future that works for more than just those in charge.
These real-time instances of how to make change in today’s world are further bolstered by early insights from Bill Drayton, the founder of Ashoka, an organization that helped define the social entrepreneur movement. His thoughts captured the value of these recent examples and spurred our thinking about what this new approach to change might demand in terms of leadership and process. Drayton described modern changemakers as those who would no longer treat problems as if they were fixed in time, but rather seek solutions suitable to evolving and complex circumstances. In 2006, he defined this new breed of change leaders as:
People who can see the patterns around them, identify the problems in any situation, figure out ways to solve the problem, organize fluid teams, lead collective action, and then continually adapt as situations change.
This insight is a compelling description of people capable of building a more desirable future amid an increasingly complex and irrational world. It is also a very accurate description of how good design leaders think and behave, whether they are changing a graphic, a platform, or an institution.
Designing Change
“Design” is an ambitiously flexible word. It can mean a dozen different things as a noun, another dozen as a verb, and for good measure, it can also serve as an adjective. It refers to both a process and its end result. To design can include doing, making, having, seeing, or formulating. A design can be a thing, a place, an interface, or an experience. It can be done by businesses, households, schoolchildren, even nature.
In this context, where it is central to change, it means to develop or bring about the existence of a future state or condition in collaboration with those affected by it.
This is a mindset and capability increasingly adopted by large organizations seeking to survive and thrive in a rapidly evolving world. While manufacturing, finance, and innovative zeal still matter to organizations, the function currently taking center stage is design. Apple, the world’s most valuable company, began touting the value of design in the 1990s. For a while, it was the only technology firm courageous enough to bet its business on this claim. But when cell phones, social media, and the cloud began connecting everyone, the world’s attention shifted from a singular focus on technology to one that included the user’s experience of technology. It was not enough for phones and apps to work; they also had to be desirable and intuitive. That required design.
As digital connections wrapped the world, the corporate move to design-driven innovation gained urgency. IBM conveyed legitimacy by hiring thousands of designers as part of reorienting its business. Facebook and Google flooded their campuses with UX and UI designers. Nearly every large consulting firm bought a design firm to augment its offerings, and companies everywhere hired at least one designer—if only to say they were design-driven. This transition continues to benefit every design school graduate and many who have converted from other fields, and it shows no signs of slowing.
Elevating design from a job to a strategy shifts perspective. Problems become opportunities and customers or stakeholders become important contributors in the search for viable solutions. Experimentation, captured in renderings, comps, hypotheses, and trial balloons, mitigates risk. Iteration delivers refinement and failure offers guidance. Imagination rises to the same lofty height as analysis and intuition is no longer just guesswork. The strategic use of design respects the context, constraints, and requirements of business but marries them with the abstraction and openness of inventiveness. Most importantly, it recognizes that authoritative directives are more a hindrance than a help, and that the most innovative solutions arise from diverse collaborations, not singular dictates.
A Modern Mindset
Design provokes and responds to change. No one hires a design team with the goal of keeping everything the same. Designers enter the fray when a problem needs to be solved or something needs to be improved. Good designers become adept at identifying benefits in change. They can imagine a better way to communicate, a simpler means of creating engagement, or a different function that addresses a hidden problem. As a result, they are more comfortable with change as a continual flow in their lives. Each new client, new tool, new material, or new perspective represents the possibility of positive change.
Design is famously useful in addressing issues that are ill-defined, unknown, or insanely complex. Perhaps because designers are trained to see every challenge as a problem that can be solved, they’ve developed tools, techniques, and processes that help them uncover insights, experiment and prototype, and deliver clear, valued results, regardless of the context. They push to look beyond what’s expected or what’s been done before, connecting to novel approaches and ideas.
In addition to embracing change and thriving on tough problems, design is collaborative. Few designers work alone. They have clients, customers, or colleagues. Depending on the assignment, they engage with engineers, authors, suppliers, coders, color experts, and more. While some designers may prefer to dominate collaborations, that’s a remnant of the past that is rapidly becoming the exception. Most are comfortable as contributors, taking the lead when their expertise is most relevant.
Lastly, but of equal importance, design is “human-focused,” meaning it is squarely focused on the behaviors, beliefs, and motivations of real people. Decades before neuroscience confirmed the importance of understanding people’s mental and emotional states to connect and communicate better with them, designers were interviewing, surveying, and observing people in all aspects of their lives. They do this because a design only succeeds if people adopt it.
It’s not much of a stretch to suggest that design’s power and prowess can be extended from making products, services, and experiences to making change in an era struggling with fragmented perspectives and complex problems. Treating the future as a design space is a viable approach. Using the processes common to design is an appropriate choice. Employing the tools and techniques that designers value allows new perspectives and enables more creative solutions.
Designing the future doesn’t mean swapping out MBAs for MFAs or shifting from learning statistics to learning to draw. It means adopting a practical, beneficial framework that encourages and incorporates diverse input and creative output. It means embracing change as a constant and directing it toward a carefully considered purpose, weighted to benefit the people most impacted.
Why This Book?
As your authors and guides on this journey, we have a deep familiarity and respect for designers and their abilities. We each ran successful, independent design firms in Silicon Valley for 30 years starting in the 1990s. We led research, development, and execution of design projects for startups, nonprofits, community services, government agencies, academic institutions, and hundreds of large corporations. We regularly transitioned our practices, moving from graphic design to experience design, from press visits to Zoom calls, from the printed page to VR environments.
We led change relentlessly for our teams and our clients throughout every decade, always cognizant of its cost and doing our best to make sure that we weren’t leading others off a cliff. We made countless mistakes and learned a lifetime of lessons. We also benefited from others leading similar charges in different companies, many of whom shared their experience and wisdom with us as we created this book. We appreciate their generosity and note many of their observations and advice in the chapters ahead.
We no longer lead teams exploring what’s possible or building experiences on the latest technology platforms. Others have assumed those roles, and we actively support and coach them. Our role now is to share what we know, to pass forward what we gained in those 30 years of designing change, and hopefully to stimulate a new appreciation of what design can do.
Perhaps the Disney version of progress was naive, but its opposite is worse. To cling to the past and refuse to change is to invite atrophy and eventually fade from relevance. Smooth progress with no bumps along the way is unlikely, but none can afford to simply shrug their shoulders and accept a future that spins increasingly out of control. The future can’t be a perfect expression of a meticulously calibrated vision. There are too many unforeseeable variables for that type of optimism. But there’s a lot of middle ground between unmanageable chaos and godlike manifestation. It’s that middle space that seems attainable—a balance between the rigid funk of stagnation and the craziness of chaos.
For these reasons and others that we’ll explore in this book, we anticipate that the next approach to change will be design-driven, and its leaders—at all levels and in a wide range of circumstances—will be changemakers.
These changemakers will be people who can view the future of communities, companies, and even countries as a design problem: an opportunity space that can be clearly defined, intentionally studied, and reliably addressed. That’s the goal of this book—to describe the leaders and approaches appropriate for this time and its uniquely complex challenges, and to encourage those who can make change to act in the right way, in the right place, and with the right support.
Takeaways
The future needs help.
Fast-paced and chaotic change has divided the world. Partly this is due to the messiness and fragmentation common to transitional eras, the increasing complexity of problems and challenges, and outdated approaches to making change.
A new approach to change is emerging.
To progress in a more inclusive and less damaging way, the traditional approach of pushing change down from the top needs to be replaced by a process that doesn't slow change down but ensures a higher-quality outcome: an approach based on a deeper understanding of people and problems, one that tries to anticipate and adapt to potential downsides and encourages cooperation and partnership.
Design—in its broadest definition—fits this need.
It provokes and responds to change, is famously useful in addressing issues that are ill-defined or insanely complex, and is almost always collaborative. It empowers human-focused solutions and is increasingly accepted as the preferred approach to innovation.
Sample Chapter: Closing the Loop
This is a sample chapter from Sheryl Cababa’s book Closing the Loop: Systems Thinking for Designers. 2023, Rosenfeld Media.
Chapter 1
The Shortcomings of User-Centered Design
As a designer working in technology, I never thought I would see Black Mirror, the anthology series focused on a future of dystopian technologies, used for product inspiration.
I was in an ideation workshop with a client team that was working on a design strategy for augmented reality. We were talking about potential features and adding sticky notes with ideas to a whiteboard. During our discussion, we started talking about potential unintended consequences to features and design decisions. One of my colleagues brought up an example from the show Black Mirror in which soldiers, implanted with an augmented reality system, saw other humans as monsters that must be killed.
We discussed it for a bit, and everyone was quiet. Finally, one of our clients spoke up.
“Yeah, that’s a good idea—add it to a note on the whiteboard.”
“Add what?” my colleague asked.
“You know, the idea that people can use avatars and disguise themselves.”
It was one of the key moments in which I realized that our methods—our user-centered design methods—were failing us.
We reminded our client that, no, Black Mirror wasn't meant to be a feature inspiration—rather, it's a cautionary tale. We reminded him that it's a good example of showing the ramifications of technology—that not all scenarios are good. And even if he were thinking of it strictly from the technical problem-solving perspective, the horror of it should give him pause.
It showed me that the ideation process was too myopic, too idealistic, and way too technology-solution-centered.
And so is the rest of user-centered design.
The design practice is experiencing a critical moment in time. Designers design products and services, especially in technology, that often have millions, and even billions, of users, yet they often fail to see design beyond individual users and the immediacy of their interactions with those products and services. They often fail to anticipate and design for the impact on those who are not the direct users of their products, or for long-term effects on those they design for. And before that, they fail to clearly understand the problem space and the context in which their products will live.
In order to address the problems of user-centered design, you first need to understand what it is, why this is an approach that is widely used, and why it’s so problematic in the first place.
The Beginnings of User-Centered Design
If you’ve ever seen a Dutch bike, known as an omafiets, you might notice that it’s got quite a different design than the typical racing bike or modern commuter bike. Its handles are swept back, curved toward the rider in a way that keeps your arms and wrists free of pressure when you are sitting upright on the bike (see Figure 1.1). This type of bike is a good example of user-centered design: it’s meant to make the act of riding the bike more comfortable and enjoyable. It’s designed for the context in which these types of bikes are used, such as getting to work, carrying kids, running errands, all while wearing street clothes, which is quite a different context than, say, a racing bike. It’s a design decision made more than a century ago that prioritizes how the rider experiences the bike. It does not appear to have prioritized a more efficient manufacturing process, or cheaper materials, although perhaps with the popularity of this design over time, these processes may have responded to the demand. Ultimately, it’s a design that puts the user first.
Figure 1.1
A woman is riding a Dutch-style omafiets.
Image: Todd Fahrner
This type of design, which prioritizes the user’s experience, is certainly not new. However, a user-centered approach has not been inherent to, nor codified within, the design process, particularly in digital technology, until fairly recently. Much of the user-centered approach to design in the technology industry was pioneered by designers in the 1980s, and the spread of its ideas can be attributed to the writings of Donald Norman.
In his 1988 book, The Design of Everyday Things, Norman referred to a conceptual model that has three parts: a designer, a user, and a system. The interaction between the designer's decisions and a user's actions is facilitated through what he called a system, which, in this case, consists of objects and products. This book popularized the notion of conducting user-centered research and framed "good design" as that which is intentionally directed toward, and considerate of, a user's mental models of how things should work.
These ideas shaped the tenets of the modern user experience design practice. Designers in recent decades have rallied around and emphasized the importance of these ideas: Designers must develop an understanding of end users by engaging with them directly through the course of their design decisions! They should emphasize ease of use and efficiency as it maps to a user’s expectations!
These were much-needed advancements in the philosophy of product design, particularly as many products entered the digital realm. For example, think about many products—particularly electronics—that existed before the popularization of user-centered design: the first personal computers or VCRs when they initially appeared on the market. They were barely usable, with buttons and interfaces that were impossible to decipher. The approach that has brought us the iPhone, and the obsession with user-friendliness, was a much-needed shift that users of digital products have all benefited from.
This approach has been built upon and articulated in a process known as design thinking, popularized by the Hasso Plattner Institute of Design at Stanford (also known as the d.school). The process was borrowed from a method called challenge mapping developed by Min Basadur, which sought to emphasize problem generation and conceptualization prior to solution development.
The five parts of the design-thinking process are typically articulated in the following way:
- Empathize: This phase emphasizes gaining empathy with your user and their situation, in order to understand the context in which your product or service will potentially solve something. It involves conducting foundational user research.
- Define: This phase is focused on taking your insights from the Empathize phase and synthesizing them to create problem statements, user journeys, or other aspects that will serve as a foundation for your ideas for solutions to your user’s problems.
- Ideate: This phase is where designers brainstorm possibilities for solving the problems identified and articulated in the first two phases. Often, designers ask "How might we [solve for X]?" to spur creative thinking around potential solutions.
- Prototype: Core to the design-thinking process is experimentation. This phase occurs when designers create quick versions of design solutions—they could be paper prototypes, digital, physical, or anything that is lightweight—to take to end users for testing purposes.
- Test: In this phase, designers test the prototypes with end users, so they can continue to learn, iterate, and refine.
This process has helped many organizations take a user-centered approach to product and service design and development. It has also gained broader appeal as business leaders can see their return on investment for user-centered design. Apple is a good example of a company whose products have had wild success due to their user-friendliness, and many apps and services, such as Airbnb, which was founded by designers, have become prominent and widely used. User advocacy has become core to business decision-making, which has been an advancement from the bad old days in which users of products were poorly understood, seen as just buyers, or considered to be an obstacle to a business’s profits. An understanding of users has become essential.
And yet … this philosophy, with its thoughtful approach to how people interact with products, has led to a myopia in which designers—and others within organizations—have failed to recognize other consequences that fall outside the realm of the direct use of the products they are designing. There have always been outsize effects to both good and bad design. However, there is a growing sense among user-centered designers that the process does not address contextual understanding of people beyond just users, nor take into account impact at scale, nor acknowledge the complexity of the technology for which they are designing.
To clarify, there is nothing inherently wrong with the design-thinking process articulated previously. And, in fact, much of the criticism directed at the design process is actually related to how this process is practiced. For example, for the Empathize phase, the process itself does not necessarily prescribe who you need to empathize with. But in the practical and common execution of the Empathize phase, most practitioners are focused on one type of stakeholder only: the end user of a product. And this particular way in which design thinking and UCD are commonly understood informs how design is typically practiced far and wide.
Ultimately, there are three key problems with the approach and process of user-centered design that contribute to its shortcomings and lead to shortsightedness and potential unintended consequences.
- Users are viewed as nothing more than users.
- A user-centered approach does not acknowledge or address potential harm, and it limits the potential impact of design.
- A user-centered process does not inherently take into account systemic forces.
Users Are Not Just Users
Users are multifaceted humans who affect others and are themselves affected by contexts that go beyond their relationship with the products being designed.
Yet the approach in design is often oriented toward only the direct benefit of usage. Think about the tools that designers often use to create understanding about people: personas, user journeys, and user stories are common frameworks for synthesizing insight about users into something that can be used to make design decisions (see Figure 1.2).
Figure 1.2
A typical user or customer journey map articulates the direct relationship between a person and the product.
Image: Nielsen Norman Group
These tools, such as the customer journey, provide nuances about an individual’s contextual situation as it applies to how they might perceive and interact with your product. They help articulate customer pain points when it comes to that product relationship, and they can be used to identify opportunities for making the product experience better. The commonality is that they tend to focus exclusively on an individual who might buy or use a specific product.
Take a mad lib that is popular in the product design process: As an [individual], I want to [do something] so that [I can achieve something]. Although it seems to take an outcome into account (the “so that” part of the statement), this mad lib is most often used to define software features. In most design processes, people are defined solely by their relationship with your product. This is the most efficient way to think about people—as users—but it narrows your purview of the ways their lives and situations interconnect with other people and circumstances.
In addition, designers tend to only design for a "typical" user. By focusing on a typical user, and trying to solve for their immediate needs, they can go through the motions of being a user advocate. Let's take a designer working on a social media platform. She's been tasked with designing a fun experience that allows users to look at their past year and enjoy photos and posts they may have forgotten about. Surfacing this experience keeps the users engaged and makes it fun and interesting for them to keep returning. She creates an experience: fun celebratory illustrations that include text that says, "It's been a great year! Thanks for being a part of it!" She engages in evaluative testing to see which versions resonate most, and the feature goes live.
Although this is hypothetical, it is actually close to a feature that Facebook launched and continues to run. You may be familiar with the fallout from the platform’s “Year in Review” feature. In 2014, designer and developer Eric Meyer wrote a blog post addressing his experience with this feature. Rather than surfacing a fun party, an amazing trip, an excellent meal, or any of the other scenarios that Facebook’s designers may have considered, the Year in Review feature surfaced Meyer’s most interacted-with post: the photo that he posted when his six-year-old daughter, Rebecca, died of cancer. Instead of celebrating, he was forced to relive his grief over and over again every time he logged into Facebook, because this feature continually surfaced in his feed for weeks on end. In writing about this experience, Meyer said, “The design is for the ideal user—the happy, upbeat, good-life user. It doesn’t take other use cases into account.”
The designers who worked on this feature almost certainly did not intend to cause this type of trauma for Meyer or other users like him. But they failed to consider people at the margins, who might not be having what they assumed to be the typical user experience.
The tools that designers use contribute to this “flattening” of people into users viewed in isolation of others. In addition, the narrow focus on ideal users, and a failure to recognize additional contexts, contributes to unintended consequences time and time again.
User-Centered Harm and the Limited Impact of Design
In November 2020, design researcher Erika Hall tweeted, “This is all too often how UX design is considered and practiced.” The image she referred to was an anglerfish, with its glowing lure marked “UX” and its mouth marked “business model” (see Figure 1.3).
Figure 1.3
The shiny beacon of design!
Image: Erika Hall on Twitter (@mulegirl)
When you consider many tech industry products, particularly those based on ad tech such as Facebook, this is an apt and depressing metaphor. Designers often create shiny, wonderfully interactive experiences that can lead to detrimental outcomes.
The UCD process does not take into account the impact that designers have on users beyond the direct use of the product, and in fact, there are no codified tools that help designers think specifically about harm. The repercussions of this are vast, particularly in tech.
Take, for example, Instagram's infinite scroll. Aligning with tenets of persuasive design, Instagram's feed provides what twentieth-century psychologist B.F. Skinner referred to as variable rewards. The three components of these rewards are described by writer Nir Eyal, a former proponent of addictive technology. He outlined them as rewards of the tribe (validation by others), rewards of the hunt (satisfaction gained by searching for information), and rewards of the self (gratification through stimulation). Instagram's feed provides all of these things, with a touch of randomness that keeps its users engaged and interested.
Although users are constantly entertained by scrolling through their feeds for seconds, minutes, and hours, other outcomes aren't so wholesome. Teens are less happy, misinformation thrives, and users experience addiction to these products.
If you asked designers whether Instagram’s infinite scroll constitutes good design, well, it would be hard to say no based on what makes user experience design successful. Is it easy to use? Yes. Does it satisfy and keep users engaged? Yes. Does it give people what they want from the product? Yes.
And yet, this feature, in keeping users glued to the product, can lead to harmful behaviors. Aza Raskin, who is often credited with (or blamed for?) the concept of infinite scroll, has said, "One of my lessons from infinite scroll: that optimizing something for ease-of-use does not mean best for the user or humanity."
In another example, Tony Fadell, a former executive at Apple and founder of the thermostat company Nest, once remarked that he now regrets the design choices that his team made while designing the iPhone because of the destructive patterns of distraction, interruption, and digital addiction with which the device has been associated: “I wake up in cold sweats every so often thinking, what did we bring to the world?”
From a user-centered design perspective, those very decisions for the iPhone are wildly successful: engagement, ease of use, its fulfillment of users’ needs and desires. On a societal level, it’s a much more nuanced story: there are problems of distraction and addiction, as well as a pattern, particularly with teens, of deteriorating mental health associated with relationships with their smartphones. Unfortunately, the tools and methods of user-centered design make designers ill-equipped to address those nuances.
A key aspect worth acknowledging is that these examples are not about design gone wrong. It's very different when a product is created and people are harmed because of flaws in the product or misunderstandings of how people might use it. These examples are not like Samsung phones catching fire. The technologists who worked on those did not intend for exploding phones to be an outcome. The difference between the Samsung phones and something like infinite scroll is that technologists intended the outcome of infinite scroll to be users spending more and more time in the product. In prioritizing the short-term goals that benefitted their product financially, they failed to account for the long-term harmful outcomes for individuals as well as for society.
It’s important to recognize that the responsibility to do better does not just sit on the shoulders of designers. Everyone in an organization, particularly those empowered to make fundamental business decisions, is responsible for whether or not products have a positive impact on humanity. To return to the anglerfish metaphor, a fundamental problem is that a designer’s influence only extends to the glowing lure, and the lure sits within the context of a harmful business model. It’s not just a designer’s job, but everyone’s job, to consider context and actively strive to create less harmful products and services.
If organizations were shaped differently to prevent this deterministic use of design, then designers could be more empowered to use their skills to question the purpose of their work and think holistically about it.
UCD and Outside Forces
When conducting work with communities with whom I lack familiarity, whether in global health or regionally, I typically partner with practitioners from those communities to help give cultural context to our research work. This approach is not inherent to the user-centered design process, and adherents to the process are often willfully agnostic to the problem spaces and contexts for which they will later be tasked to design. This is hubris.
Although you can learn a lot by using the design process to “empathize” and “define,” designers often fail to acknowledge their own blind spots and assumptions.
In his Medium article entitled "Stop Designing for Africa," designer Chris Elawa discussed the failure of the XO laptop, the device from the famed "One Laptop Per Child" initiative in the early 2000s. The initiative failed for a myriad of reasons. One key reason was that the designers and distributors did not understand the context of ownership: "The XO laptop was rejected by adults in African [low-income economies] because it focused on empowering one child as opposed to providing value to a family or entire community." In addition, "Factors such as marketing, distribution, and funding do play a role in the relative success of a product designed for the socio-economic development of African [low-income economies]."
These factors are not typically considered through the course of user-centered design. An understanding of individual users can only take you so far. Now that designers, particularly in tech, are often designing in contexts for millions, and in some cases, billions of individuals, the practice needs better tools and processes to consider these challenges of consequence, context, and scale.
A Change in Perspective: Systems Thinking
Knowing about the variety of pitfalls in user-centered design, what is a designer to do? Given that so many tools of the practice represent the narrow philosophy inherent in UX and user-centered design, it's difficult to shift your mindset without also shifting the tools you use for problem-solving. Broadening your lens requires both. Systems thinking is that mindset, and it comes with the requisite set of tools and methods.
So, what is systems thinking? In Systems Thinking for Social Change, David Peter Stroh describes the practice as “the ability to understand… interconnections in such a way as to achieve a desired purpose.” What he acknowledges through this definition is that systems often do the opposite of achieving a desired purpose: sometimes the system achieves a purpose that is unintended, undesirable to most, or both unintended and undesirable.
For example, picture, as a system, a city working to decrease homelessness within its borders. The city spends money and resources on attempts to solve the problem. Yet, unfortunately, homelessness in the city continues to increase, and despite its efforts, so does the city's spending in this area. In this instance, the stakeholders within the city's government may need to ask themselves: Is our spending having an impact? Are there other interventions, or root causes to address, that we need to engage with? Are there actions that are exacerbating the problem? If homelessness continues to increase, then, harkening back to Stroh's definition, the system is not achieving the desired purpose.
Designing for Greater Impact
One of the most famous myths in animal lore is that lemmings collectively kill themselves by jumping off cliffs. It’s a fanciful idea that definitely fascinated me in elementary school, leading my vivid imagination to picture cute little hamster-like animals dutifully marching toward a cliff edge, robot-like, and quietly plummeting to their deaths. (I never imagined them screaming because I had zero notion of what a lemming might sound like.) In case you were wondering, according to the Alaska Department of Fish and Game, this myth began after the release of a Disney nature documentary in the 1950s. Apparently, during the filming, “The lemmings supposedly committing mass suicide by leaping into the ocean were actually thrown off a cliff by the Disney filmmakers.” Monsters!
Despite its questionable origins, the lemming myth is often used as a metaphor for human behaviors. I find it useful for explaining the benefits of systems thinking, as a thought exercise in solving certain problems more broadly. If, as a designer, you were faced with the problem of lemmings throwing themselves off a cliff en masse, you could take a couple of different approaches to reducing the harm. You could set up a lemming clinic at the bottom of the cliff and treat lemmings that were injured, or had survived the fall, maybe even catching some as they were falling. Or, you could build a barrier, a fence, at the top of the cliff to prevent them from falling in the first place.
It's a good parallel for the way that many of our institutions, both public and private, often approach complex problems. Lots of investment is spent at the bottom of the cliff, treating symptoms or the results of problems, rather than at the top of the cliff, on preventive measures. This plays out in many domains that have "wicked problems," a term coined in 1973 by Horst Rittel, a design professor at the Ulm School of Design. Wicked problems include unique, multidimensional issues such as climate change, homelessness, public health crises, and extreme poverty. As the Interaction Design Foundation described it, they are often "problems with many interdependent factors making them seem impossible to solve."
Often in the design practice, the course of problem-solving is already determined. I’ve had multiple client organizations clarify—as my team and I were conducting foundational research to understand a problem space—that they already knew that the solution would take a digital form, that it would solve a specific acute problem, and that inquiry into other spaces was not needed because of that. This narrow and deterministic focus often leads to solutions that have minor impact.
Take, for example, solutions that are meant to reduce youth homelessness. One can treat the issues that young people face once they are homeless: a lack of shelter and safety, inaccessibility to healthcare and other resources, and a myriad of services that are confusing and dispersed. In the lemmings metaphor, this would be positioning the solutions at the bottom of the cliff. A preventive approach would be to address the issues that would keep young people from becoming homeless in the first place: increased housing or family stability, financial and educational support, a culture that protects LGBTQ youth, an end to institutional racism, and many other interventions, both narrow and broad.
The beauty of systems thinking is that it allows designers to consider multiple forms of potential intervention, even if, as a designer, you may have the skill set to design for only some areas of intervention. Acknowledging the diversity of possible interventions within a problem space, both preventive and treatment-focused, enables designers and their stakeholders to take a broader lens, and perhaps consider alternate areas of investment or direct ideas oriented toward policymaking and other domains, involving the appropriate experts. In the meantime, if a designer is, for example, a digital experience expert, she can more accurately focus on the most meaningful execution of her intervention space, while understanding the broader context in which that digital solution or experience will sit.
What Systems Thinking Is Made Of
In relation to the ability to understand interconnections, several methods make up systems thinking. The act of systems thinking, according to researchers Samir Patel and Khanjan Mehta in their comparative study on design frameworks, is focused on "not any one thing, but a set of tools, habits, and practices that help in mapping dynamic complexities. Systems thinking focuses on the cyclical cause and effects, as opposed to linear cause and effect relationships." Often, in the methods employed in the design practice, designers are essentially required to engage in linear thinking. For example, in product design, think about the typical product lifecycle process. At a high level, it consists of four steps: introduction, growth, maturity, and decline. Similarly, many expressions of user journeys or experience mapping follow a linear process of product use. Systems thinking, through its focus on cycles, as well as radiating effects, extends designers' thinking beyond just the linear format. It requires you to recognize that problems aren't solved and done: in fact, solutions feed back into a cycle of effects, and could cause problems themselves, or create cycles of positive outcomes.
If you connect these ideas to some of the outcomes of user-centered design, this approach of focusing on system goals and interconnectedness can help avoid the shortcomings—the narrow view of "users," the limited impact, the unacknowledged outside forces—and create very different conversations from the outset, conversations that automatically prioritize multiple views of a problem space. And, in fact, when designers consider systems, they can utilize their skills in design thinking to think further and more thoughtfully about how to change the system.
Takeaways
- User-centered design, when it was first developed, was a much-needed advancement in prioritizing the users of products and services. As time progressed, and products and services became more complex, its limitations became apparent.
- Some of the shortcomings of user-centered design include an assumption that users are only users; a practice that fails to acknowledge potential harms and limits the potential impact of design; and a process that fails to account for wider forces.
- A response to the shortcomings of UCD is combining design with systems thinking, which prioritizes a focus on understanding interconnections, recognizes multiple forms of intervention, and articulates whether system goals are meeting the outcomes we want as a society.