Frequently Asked Questions

These common questions and their short answers are taken from Christopher Noessel’s book Designing Agentive Technology: AI That Works for People. You can find longer answers to each in your copy of the book, whether in print or digital form.

  1. How do you pronounce “agentive”?
    “Agentive” is a once-languishing adjective that is built on the word “agent,” so I pronounce it emphasizing the first syllable, “A-jen-tiv.” I like that this pronunciation points back to its root, which helps people suss out its meaning when they’re hearing it for the first time. I’ve heard people stress the second syllable, as “uh-JEN-tiv,” which rolls off the tongue just fine, but doesn’t do much to help people’s understanding.
  2. Did you invent this kind of technology?
    Oh no, far from it. As you’ll read in Chapter 4, “Six Takeaways from the History of Agentive Thinking,” thoughts about machines that take some sort of initiative go all the way back to at least ancient Greece. So, no, I didn’t invent it. I have designed several agentive systems over the past few years, though, and on about my third such project, I realized I was seeing some recurring patterns (in the Christopher Alexander sense). I looked for a book on a user-centered approach to this kind of technology, and when I could not find one, I decided to write it.
  3. What’s the most accessible example of agentive technology you can give me?
    Chapter 1, “The Thermostat That Evolved,” goes into some detail on one example that is popular in the United States, the Nest Thermostat. If you’re not in the U.S., or unfamiliar with that product, imagine an automatic pet feeder. It is not a tool for you to feed your cat. It has tools for you to specify how you want the machine to feed your cat, and the feeder does most of the rest. You will still need to refill it, free food stuck in its rotors, and occasionally customize or pause feeding schedules. These maintenance and customization touchpoints are what distinguish it from automation, and they are where design plays a major role. To flesh out this singular example, see Appendix B for a list of every other real-world example included in the book.
  4. I have an agentive project beginning. How can you help me start it out right?
    Begin with the first diagram shown in Appendix A, “Consolidated Touchpoints.” It shows common use cases in a rough, chronological order. Think through your product and identify which use cases apply to your project and which don’t. Reference the chapters in Part II for details on the use cases and begin to construct scenarios around them. This should give you a great head start.
  5. Why didn’t you go into depth about interfaces?
    Agentive technology differs primarily in use cases, rather than interfaces, so Part II is dedicated to identifying and describing these. Readers can draw on the existing practices of interaction and interface design for best practices around individual touchpoints. The notable exception is the interface by which a user specifies triggers and behaviors. See Chapter 5, “A Modified Frame for Interaction,” for an introduction to these concepts, and Chapter 8, “Handling Exceptions,” for an interface pattern called a “Constrained Natural Language Builder,” which you can consider customizing in your agentive interfaces.
  6. You’re just another cheerleader for the future, blithely bringing artificial intelligence doom down on us all! Wake up, sheeple!
    Technically, that’s not a question, and frankly it’s a little hyperbolic. But I’m still here to help. There’s a distinction to learn in Chapter 2, “Fait Accompli: Agentive Tech Is Here,” between narrow artificial intelligence and general artificial intelligence. Once you understand that difference, it becomes easier to see that, unlike general AI, narrow AI gets safer as it gets smarter. And as you’ll read at the end of Chapter 12, “Utopia, Dystopia, and Cat Videos,” I believe a worldwide body of agentive rules is a useful data set to hand to a general AI if/when one comes online, to help it understand how humans like to be treated. This is on the good side of the fight.
  7. Aren’t you that sci-fi interfaces guy?
    I am one of them. I keep the blog scifiinterfaces.com, and you may have heard me speaking on the topic, attended a workshop, or been to one of my sci-fi movie nights. Also, Nathan Shedroff and I co-authored Make It So: Interaction Design Lessons from Science Fiction in 2012, which is all about what real-world designers can learn from speculative interfaces. Predictably, sci-fi makes appearances in this book. You’ll see some quick mentions in Chapter 2, and two important mentions in Chapter 13, “Your Mission, Should You Choose to Accept It.” These serve as a telling contrast of sci-fi written with and without agentive concepts. You also can search the #agentive tag on the scifiinterfaces.com blog to find even more.
  8. If you could wave your hands and make anything an agent, what would it be?
    Well, I must admit that part of the reason I chose Mr. McGregor to be the illustrative example is that I grew up in big cities, far from farmsteads, and never got the knack of raising plants. If, like me, you have a brown thumb, but dream of growing your own garden-fresh food, read about Mr. McGregor in sections placed after Chapters 5, 6, 7, and 8. My second choice might be an agent on mobile phones that listens in on conversations and does some socially adept fact-checking and frame-checking to encourage skeptical thinking and discourage lies or bullshit, in the H. G. Frankfurt sense.
