
Designing Agentive Technology

AI That Works for People

By Christopher Noessel

Published: May 2017
Paperback: 280 pages
ISBN: 978-1933820-63-7
Digital ISBN: 978-1933820-70-5

Advances in narrow artificial intelligence make possible agentive systems that do things directly for their users (like, say, an automatic pet feeder). They deliver on the promise of user-centered design, but present fresh challenges in understanding their unique promises and pitfalls. Designing Agentive Technology provides both a conceptual grounding and practical advice to unlock agentive technology’s massive potential.

Hear author Chris Noessel on The Rosenfeld Review Podcast

Paperback + Ebooks: All of our Paperbacks come with a FREE ebook in 4 common formats.


Ebooks only: All ebooks come in DRM-free Kindle (MOBI), PDF, ePub, and DAISY formats.


More about Designing Agentive Technology


Noessel’s book cuts through the AI hype to give practical advice and valuable context on how to apply today’s powerful-but-limited AI technology to create useful, helpful, supportive, and valuable services that go way beyond silly chatbots.


Mike Kuniavsky, Xerox PARC

A fun, friendly, and accessible guide on how to design for an emerging world where software-enabled machines become intelligent servants, laced with sci-fi wonder and moral dilemmas.

Dave Gray, founder of XPLANE, author of Liminal Thinking, Gamestorming, and The Connected Company

Engaging and accessible—Designing Agentive Technology takes on one of the most relevant topics of this era by making it fun and understandable.

Adam Cutler, distinguished designer for Cognitive Computing, IBM

This book taps us on the shoulder and asks us to look up and realize that the world around us has changed. It’s time for us to design ‘narrow’ AI to start making things that work for users.

Nir Eyal, author of Hooked: How to Build Habit-Forming Products

There’s a lot of pie-in-the-sky speculation about AI. On the other hand, Designing Agentive Technology makes a corner of the AI universe accessible in this straightforward and fun-to-read book.

Molly Wright Steenson, associate professor, Carnegie Mellon School of Design

This book is an ideal entry point for professionals considering the design and implications of AI, and for anyone who wishes to understand the important new space between automation and manual control.

Amber Case, author of Calm Technology

AI isn’t just technology—it’s how we interact and experience it. Chris Noessel gives a fantastic framework that opens the field so that more diverse voices can collectively design our future.

David Young

Table of Contents

Part I: Seeing

Chapter 1: The Thermostat That Evolved
Chapter 2: Fait Accompli: Agentive Tech Is Here
Chapter 3: Agentive Tech Can Change the World
Chapter 4: Six Takeaways from the History of Agentive Thinking

Part II: Doing

Chapter 5: A Modified Frame for Interaction
Chapter 6: Ramping Up with an Agent
Chapter 7: Everything Running Smoothly
Chapter 8: Handling Exceptions
Chapter 9: Handoff and Takeback
Chapter 10: Evaluating Agents

Part III: Thinking

Chapter 11: How Will Our Practice Evolve?
Chapter 12: Utopia, Dystopia, and Cat Videos
Chapter 13: Your Mission, Should You Choose to Accept It


These common questions and their short answers are taken from Christopher Noessel’s book Designing Agentive Technology: AI That Works for People. You can find longer answers to each in your copy of the book, in either the printed or digital version.

  1. How do you pronounce “agentive”?
    “Agentive” is a once-languishing adjective that is built on the word “agent,” so I pronounce it emphasizing the first syllable, “A-jen-tiv.” I like that this pronunciation points back to its root, which helps people suss out its meaning when they’re hearing it for the first time. I’ve heard people stress the second syllable, as “uh-JEN-tiv,” which rolls off the tongue just fine, but doesn’t do much to help people’s understanding.
  2. Did you invent this kind of technology?
    Oh no, far from it. As you’ll read in Chapter 4, “Six Takeaways from the History of Agentive Thinking,” thoughts about machines that take some sort of initiative go all the way back to at least ancient Greece. So, no, I didn’t invent it. I have designed several agentive systems over the past few years, though, and on about my third such project, realized I was seeing some recurring patterns (in the Christopher Alexander sense). I looked for a book on a user-centered approach to this kind of technology, and when I could not find one, decided to write it.
  3. What’s the most accessible example of agentive technology you can give me?
    Chapter 1, “The Thermostat That Evolved,” goes into some detail on one example that is popular in the United States, the Nest Thermostat. If you’re not in the U.S., or unfamiliar with that product, imagine an automatic pet feeder. It is not a tool for you to feed your cat. It has tools for you to specify how you want the machine to feed your cat, and the feeder does most of the rest. You will still need to refill it, free food stuck in its rotors, and occasionally customize or pause feeding schedules. These maintenance and customization touchpoints are what distinguish it from automation, and they are where design plays a major role. To flesh out this singular example, see Appendix B for a list of every other real-world example included in the book.
  4. I have an agentive project beginning. How can you help me start it out right?
    Begin with the first diagram shown in Appendix A, “Consolidated Touchpoints.” It shows common use cases in a rough, chronological order. Think through your product and identify which use cases apply to your project and which don’t. Reference the chapters in Part II for details on the use cases and begin to construct scenarios around them. This should give you a great head start.
  5. Why didn’t you go into depth about interfaces?
    Agentive technology differs primarily in use cases, rather than interfaces, so Part II is dedicated to identifying and describing these. Readers can draw on the existing practices of interaction and interface design for best practices around individual touchpoints. The notable exception is the interface by which a user specifies triggers and behaviors. See Chapter 5, “A Modified Frame for Interaction,” for an introduction to these concepts, and Chapter 8, “Handling Exceptions,” for an interface pattern called a “Constrained Natural Language Builder,” which you can consider customizing in your agentive interfaces.
  6. You’re just another cheerleader for the future, blithely bringing artificial intelligence doom down on us all! Wake up, sheeple!

    Technically, that’s not a question, and frankly it’s a little hyperbolic. But I’m still here to help. There’s a distinction to learn in Chapter 2, “Fait Accompli: Agentive Tech Is Here,” between narrow artificial intelligence and general artificial intelligence. Once you understand that difference, it becomes easier to understand that, unlike general AI, narrow AI gets safer as it gets smarter. And as you’ll read at the end of Chapter 12, “Utopia, Dystopia, and Cat Videos,” I believe a worldwide body of agentive rules is a useful data set to hand to a general AI if/when one comes online, to help it understand how humans like to be treated. This is on the good side of the fight.
  7. Aren’t you that sci-fi interfaces guy?
    I am one of them. I keep the blog, and you may have heard me speaking on the topic, attended a workshop, or been to one of my sci-fi movie nights. Also, Nathan Shedroff and I co-authored Make It So: Interaction Design Lessons from Science Fiction in 2012, which is all about what real-world designers can learn from speculative interfaces. Predictably, sci-fi makes appearances in this book. You’ll see some quick mentions in Chapter 2, and two important mentions in Chapter 13, “Your Mission, Should You Choose to Accept It.” These serve as a telling contrast of sci-fi written with and without agentive concepts. You also can search the #agentive tag on the sci-fi blog to find even more.
  8. If you could wave your hands and make anything an agent, what would it be?
    Well, I must admit that part of the reason I chose Mr. McGregor to be the illustrative example is that I grew up in big cities, far from farmsteads, and never got the knack of raising plants. If, like me, you have a brown thumb, but dream of growing your own garden-fresh food, read about Mr. McGregor in sections placed after Chapters 5, 6, 7, and 8. My second choice might be an agent on mobile phones that listens in on conversations and does some socially adept fact-checking and frame-checking to encourage skeptical thinking and discourage lies or bullshit, in the H. G. Frankfurt sense.


Chris has written a surprising book. He’s written something so inherently human that you can’t help but be swept up into the new world of agentive technology. To be honest, it’s just not fair! It sneaks up on you because Chris effortlessly shows the progression toward agentive technology, toward our giving agency to technology, to be simply an obvious step in human evolution. That’s a profound, and useful, shift of mindset.

Chris tells the story of artificial intelligence from the perspective of human imagination (sci-fi, scary) and of technical capabilities from the perspective of human needs and desires (real, narrow, beneficial). It’s a sleight of hand that brings perspective to some of the “sky is falling” noise that’s out there right now around AI. More importantly, this approach makes it all so relatable (and, yes, readable). You won’t leave here knowing how machines learn, but you will appreciate better how machine learning might impact the humans who rely on it. You’ll also be introduced to the implications of that reliance over time. These might surprise you—it’s not about AI as a sinister overlord, but rather the seemingly mundane implications of a machine’s lack of empathy.

Again, humanity. I’m so struck with how human this book is.

It’s a book about invention and the evolution of ideas, technologies, and desires. I think maybe the single biggest trick Chris performs here is that by providing the history of various tools and their creators (like temperature control technology), the obviousness of technical assistants is almost shown to be a refined human need, as opposed to a new technical capability. This completely changes how we should approach the design of agents. It argues for, well, human-centered design, now, doesn’t it?

And that, finally, is what leaps from these pages: the need for new practices in human-centered design. Without approaching the problem from a “framework” perspective (thank you), Chris offers the first word on some of these practices. He adds depth to the understanding of how agents differ from tools (both hardware and software). And by covering agentive technology’s human impacts, he shows that industrial design, UX design, and service design don’t adequately address what’s required to understand and solve problems of agentive technologies.

This is just the beginning of a new conversation in design, for sure, but wow—what a great start!

Phil Gilbert
General Manager, Design, IBM Corporation
March 22, 2017