
Sample Chapter: Deliberate Intervention

This is a sample chapter from Alexandra Schmidt’s book Deliberate Intervention: Using Policy and Design to Blunt the Harms of New Technology (Rosenfeld Media, 2022).

Chapter 1

A View of the Future

A father in a suburban U.S. town returns home after dropping his children off at school and unfastens his seatbelt, just in time to receive a package delivered by a drone. He walks inside and checks his retirement account on his phone, using facial recognition to log in. In a large city a few hundred miles away, an undocumented immigrant walks into his job at a restaurant. Unbeknownst to him, his movements are tracked through his smartphone so that health experts can trace the spread of a new viral infection. Overseas, a deployed member of the military checks her social media feed and sees political ads regarding an upcoming election. She puts her phone in her pocket and walks to a nearby barracks for training on a new piece of military targeting technology.

Invisibly, these individuals are experiencing a designed world of technologies, tools, and the built environment. Policies have in some cases brought these tools into being—think of government-funded GPS and the internet. And then, once the private sector promotes uptake among the public, policies constrain and shape those designs to varying degrees. In some cases—like a seatbelt in a car—the policies that inform the design are well formed. In others, like the online bank account and health tracking, they are just beginning to emerge and take shape. And in yet others, like AI used in military technologies, few policies exist at all. As a result, the impacts of these technologies are felt in both predictable and unpredictable ways. Society is shaped by the technologies as they emerge, and in turn, society responds and shapes them back.

The act of forming policy to drive outcomes “in the public interest” (which we’ll talk more about in Chapter 2, “Policy and Design Basics”) has never been a simple undertaking. Both policy and design work on shaping the future, and they do so in varied, overlapping, and sometimes unpredictable ways. This book doesn’t propose an elegant solution that will help the future come into being in a neatly planned-out fashion, causing no harm to anyone. Rather, it offers a way for people working at the intersection of policy and the design of new technology to think more clearly about these issues and better understand their place in the puzzle.

Increasing Speed and the “Pacing Problem”

Many thinkers and writers have detailed the increasing speed of technological progress. In the 1940s, economist Joseph Schumpeter wrote about the concept of “creative destruction,” which he posited underpinned all societal progress. Creative destruction is the process by which existing business models are disrupted by newcomers, which Schumpeter called “the essential fact about capitalism.”

Such business growth has an extra fast, exponential flavor in the digital age, as Douglas Rushkoff observed in his book Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity. Venture capitalists buy companies not to own them, but to sell them at a steep profit. Because of that, VC-backed companies need to show quick disruption and exponential growth, rather than build slow, steady, sustainable businesses with reliable customer bases. That’s why thriving companies like Twitter, which produced over $1 billion in revenue in 2020, are considered failures—they produce revenue rather than growth. “A more realistically financed company gets to grow whenever it’s appropriate,” Rushkoff stated. “You get to grow at the rate of increasing demand rather than the rate demanded by your debt structure.”

The speed of tech development is compounded by technology itself. Gordon Moore, in what came to be known as Moore’s Law, predicted while describing the development of semiconductors that computing power would increase dramatically, and its relative cost decrease, at an exponential pace. Building on that insight came writers like Ray Kurzweil, whose “Law of Accelerating Returns” extrapolated Moore’s observation from computing power to technology more broadly. R. Buckminster Fuller, an American architect, designer, and philosopher, theorized in his book Critical Path that human knowledge was increasing at an exponential rate—it had doubled roughly every 100 years up to 1900, he estimated, then every 25 years by the end of World War II, and so on.

Pull it all together, and we have a tech world evolving at a rapid pace. This trend has led to what’s known as the “pacing problem”: technology moves ever faster, but policymakers move at the same speed they always have. Regulatory agencies typically collect facts over long periods and engage in trial-like processes that pass through multiple levels of oversight before rules are codified. When the U.S. Department of Justice sought to break up Microsoft in the late 1990s, for example, the case dragged on into the 2000s. By then, competing browsers like Google Chrome and Mozilla Firefox had appeared, rendering the case all but moot. Our current model of top-down enforcement, thoughtful as it is, may not be ideal for the rapidly moving technological environment we find ourselves in.

A run-of-the-mill UX’er is thinking 6 months out. Product innovation teams are looking 2-3 years out, standards people are 5 years out, deep academic research is 10 years out, and policy probably looks 15 years out.
—Whitney Quesenbery, director and co-founder of the Center for Civic Design

Proactive and Reactive Policy

A key concept to start with is the distinction between proactive and reactive policy, which we will revisit over the course of this book. Proactive policy shapes the future ex ante, before it has transpired. A current example involves bans on the use of facial recognition software, with the IRS pulling back a facial recognition scheme for tax filing before it had been tried. Reactive policy is post hoc, made in response to something that has already occurred. An example is the safety requirements for refrigerators, implemented in 1956 in response to children getting trapped inside the appliances! As we’ll see, most policy responses throughout history have been reactive in nature. Indeed, facial recognition is widely used in the private sector, and proactive bans like these are far rarer.

The province of new designs is, typically and appropriately, that of the private sector. With their expertise in understanding customers, building things people want, marketing those things, and driving their adoption, it makes sense that private companies would work on creating “the new.” Entrepreneurs and private companies do not typically ask for permission to create what they do (though there come points in the development of any company when laws come into play, some of which may even halt progress). And herein lies the messiness—stuff gets built out there in the world and, if and when something goes wrong, policymakers step in to “intervene.”

At this point, we must ask: “Who gets to decide which interventions are appropriate and when they should be attempted?” Consider the Winston Churchill saying: “Democracy is the worst form of government, except for every other one.” The idea is that even imperfect policies, if implemented by leaders whom people have democratically elected to foster shared values and serve their interests, will be about as good as policies can get. (Of course, whether particular democracies, including the U.S., are healthy and well-functioning is up for debate.)

Every new thing brings its own problems with it. “When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution,” according to cultural theorist Paul Virilio. There is no objective right or wrong about whether a new thing is good or bad, the thinking goes. Likewise, in some cases (particularly in the digital world), there is not always a clear answer as to what constitutes a “harm” of new technology, or what the right intervention to address it should be. Rather than trying to arrive at the “right” answer, if we decide on these things together through transparent debate and a democratic system, we are more likely to achieve an outcome that the majority of us are happy with.

The Inscrutability of the Future and Reactive Policy

One of this book’s arguments is that we typically cannot know the impacts of new designs or technologies until those things have had some time “out there,” among people in the world, to make themselves known. History is littered with examples of worry about new things that later came to be assimilated and accepted as normal: the fear that television and radio would lead to dangerous addiction, that photographs would lead people to collect “fake friends,” and that roller-skating would lead to impropriety. The Chicago Tribune defended the roller skate against such charges in 1885 (see Figure 1.1).

Figure 1.1
A newspaper clipping addressing fears of the roller skate’s negative side effects. The 1885 Chicago Tribune clipping, “The Morale of Roller-Skating,” reads in part: “What is there in a smooth rink-floor, covered with roller-skaters, that should endanger personal morals more than the smooth ice covered with steel-shod skaters? These are some of the questions which are now agitating the community.”
Source: Twitter account Pessimists Archive, @pessimistsarc

Writer Douglas Adams nicely sums up the worry about new things: “I’ve come up with a set of rules that describe our reactions to technologies:

  1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
  2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
  3. Anything invented after you’re thirty-five is against the natural order of things.”

Adams’ theory doesn’t mean that new technologies can’t have negative side effects—they can, and they do. Think of the misidentification of Black people by facial recognition software leading to false arrests, or the countless deaths as a result of automobiles. But the point is that, from our subjective individual perspectives, we can’t know which technologies will have mass uptake by the public, we can’t usually know for sure what their negative effects will be ahead of time, and we often can’t predict what the right policy interventions might be. There are too many unknown links in the chain of future events. This view means that policy tends to be “reactive” to the harms that we see emerging from technology.

The unimaginable is ordinary, [and] the way forward is almost never a straight line you can glance down but a convoluted path of surprises, gifts, and affliction you prepare for by accepting your blind spots as well as your intuitions.
—Rebecca Solnit, Hope in the Dark

Notably, as policy attempts to shape the world, it, like design, also produces unintended consequences. An example of an unintended consequence of well-meaning policy is cookie pop-ups (those notifications on websites asking you to “accept cookies”), brought forth by EU regulations including the ePrivacy Directive and the GDPR (General Data Protection Regulation). Because giving users control over their data has long been seen as the most ethical way of protecting privacy, it logically followed that every website should inform users of how their data would be used and seek their consent. Some years on, the pop-ups are widely panned as ruining the experience of the internet (see Figure 1.2). Future privacy legislation may seek to ban mass data collection outright, rather than put the onus on users to give “consent”—removing the need for a pop-up at all. Just like design, policy is iterative as understanding evolves and unintended consequences make themselves known.

A tweet from Andy Budd, dated January 2, 2022, which reads: “A typical website visit in 2022: 1. Figure out how to decline all but essential cookies. 2. Close the support widget asking if I need help. 3. Stop the auto-playing video. 4. Close the ‘subscribe to our newsletter’ pop-up. 5. Try and remember why I came here in the first place.”

Figure 1.2
Cookie pop-ups have received broad criticism for “ruining” the experience of the internet, although they were intended to help protect user privacy.

Making Reactive Policy as Proactive as Possible

The future’s inscrutability does not mean we can’t be a bit wiser about bias and other harms as they start to come into focus. By bringing policy and design closer together, society can attempt to shape technology more thoughtfully, rather than (just) the other way around. While we often can’t know the impacts of technology until it’s had some time in the world, we can make our reactive responses as proactive as possible. This book will run through proactive and reactive responses to the harms of tech, in both the private and public sectors, and offer some ideas for how the two can move closer together.

The funny thing is that the tech world is very much fixated on the future and attempts to “hack” it—it’s just that companies aren’t trying to hack harms, but rather to hack product-market fit and figure out which products will have uptake by users. VC firms run hundreds of “anthropological tech” experiments at a time to see which few ideas might stick, as a way to hack the inscrutable future and make it move faster.

And well they should focus on product-market fit. Without something that lots of people can use, we wouldn’t be having this conversation about policy interventions to begin with. And there’s the rub at the core of progress: we don’t want to thwart innovation, but as a society we also need to understand the trade-offs, and whether and how to intervene. It makes you wonder what might happen if these firms ran similar experiments to anticipate future harms. Could we perceive them faster?

If we learn from revenge effects [or unintended consequences of technology], we will not be led to renounce technology, but we will instead refine it: watching for unforeseen problems, managing what we know are limited strengths, applying no less but also no more than is really needed…I am not arguing against change, but for a modest, tentative, and skeptical acceptance of it.
—Edward Tenner, Why Things Bite Back: Technology and the Revenge of Unintended Consequences

Pulling It Together

The world we live in is invisibly shaped by the designs we interact with and the policies that go on to shape those designs. But in many spaces, humans are not good at predicting the impact of the new. On top of that, new technologies are entering society at an ever-increasing pace. Thus, with some notable exceptions, policy tends to be more reactive than proactive in nature. By being a bit more thoughtful and collaborative, we can hope to be as proactive about our reactions as possible, and address harms as they emerge.
