Author Archives: Rachel Hinman

The latest from Rosenfeld Media

  • Animation Principle Four: Straight Ahead and Pose to Pose


    straight_ahead.jpg

Straight ahead and pose to pose are animation techniques that refer directly to the animation drawing process. In order to capture fast, dynamic action with unusual movement, animators use the straight ahead technique and draw every single frame of an animation. The pose to pose technique employs keyframes (the important frames of a sequence) and inbetweens (the intermediate frames that express movement between the keyframes).
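The keyframe-and-inbetween idea can be sketched in code. A minimal, hypothetical example using simple linear interpolation (real inbetweening also eases timing, but the structure is the same):

```python
def inbetween(keyframes, frame):
    """Linearly interpolate a value for `frame` from sparse keyframes.

    keyframes: dict mapping frame number -> value (the "important" poses).
    Frames between two keyframes are the generated "inbetweens".
    """
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    # Find the surrounding pair of keyframes and interpolate between them.
    for lo, hi in zip(frames, frames[1:]):
        if lo <= frame <= hi:
            t = (frame - lo) / (hi - lo)  # 0..1 progress between poses
            return keyframes[lo] + t * (keyframes[hi] - keyframes[lo])

# A ball's x position keyframed at frames 0, 10, and 20 (made-up values):
poses = {0: 0.0, 10: 100.0, 20: 50.0}
print(inbetween(poses, 5))   # halfway to the first pose -> 50.0
print(inbetween(poses, 15))  # an inbetween on the way back -> 75.0
```

Only the three keyframed poses are authored; every other frame is computed, which is exactly the division of labor pose to pose describes.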

    (Caption:
    The first illustration depicts the straight ahead drawing technique in which
    every single frame of an animation is rendered. The second illustration
    represents the keyframes that will be used in a pose to pose animation.)

The vast majority of animations and transitions used in mobile experiences employ the pose to pose technique. Pose to pose will usually suffice for transitions that are not overly complex and can be described easily. If you’d like to incorporate unusual or dynamic movement in your experience that can’t be achieved using pose to pose, you’ll likely need to use the straight ahead drawing technique in order to capture the motion you are envisioning.

    examples.jpg


(Caption 1.0: Popular games like Plants vs. Zombies for the iPad employ pose to pose animation techniques.)


    (Caption 2.0: Games with more complex
    movement, like the ones found in the iPad game Fruit Ninja, use straight ahead animation techniques to capture
    dynamic motion.)

     

    Animation Principle Three: Staging


    god_bad_staging.jpg

    People keen on selling property often “stage”
    a home, meaning they arrange each room in such a way that its purpose is
    completely clear. The principle of staging in animation is similar – good
    staging makes the central idea of an animation completely clear to the viewer.
    In the world of mobile user experience, the principle of staging is most
    relevant when considering the transitions between screens and interactions. Interactions
    that are well staged combine light, color, composition, and motion to direct
    the user’s eye to exactly where it needs to be as they interact with an
    experience. Well-staged mobile experiences have a sense of flow and ease,
    whereas poorly staged ones feel disjointed.

    (Caption: The well-staged illustration
    makes the central idea – two characters engaged in conversation – completely
    clear. The poorly staged illustration leaves the dynamic between the two
    characters open for interpretation, making the central idea unclear.)

    Staging is a subtle yet important
    consideration when applying animation and motion to mobile experiences. A key
    challenge for natural user interfaces is that they lack a strong conceptual
    anchor. As a result, users new to NUIs often feel anchorless as they navigate
    touchscreen experiences. If good, strong staging is applied to the animation
    and transitions of your experience, users will likely feel more grounded in the
    experience.


    Mobile UX Staging Examples

    staging_example.jpg

    (Caption:
    Good staging used in the iPad version of Keynote allows users to see exactly
    where the file they are currently working on lives in the application’s file
    structure. This subtle use of staging allows the user to feel grounded in the
    experience.)

     

    Animation Principle Two: Anticipation




    bowling_2.jpg

When an illustrator depicts a moving object or character, there are three distinct phases that should be considered to make the object’s movement seem realistic:
• the preparation for the action
• the action itself
• the results of the movement

(Caption: The crouching pose of a bowler, winding up before swinging a bowling ball, is an example of the type of pose the principle of anticipation should capture.)

    Whether it’s a baseball batter winding up before a swing, or the recoil
    of a spring before it’s sprung, anticipation
    is the animation principle used to address the preparation of an object for
    movement.  Anticipation is about
    orchestrating components of a scene – be it the lighting, composition, or even
    manipulating the shape and form of an object or character – in order to give
    the viewer insight into what is about to happen. 

    Similar to its application in animated film and cartoons, when applied to
    the realm of mobile UX, anticipation is all about giving the user insight into
    what is about to happen next. For example, it’s a principle that can be applied
    to the visual treatment of the interface as a user opens up an application. It
    can also be applied to transitions between experiences. Because gesture
    languages are relatively new for users, the principle of anticipation can also
be used to provide affordances for gestural UIs. Anticipation gives insight into the speed and direction with which objects within a UI can move, as well as the gestural possibilities of those objects.
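One common way motion designers implement anticipation in code is a “back” easing curve, which pulls the animated value slightly in the opposite direction before the main movement. A minimal sketch; the overshoot constant below is the conventional back-easing value, not something from this post:

```python
def ease_in_back(t, s=1.70158):
    """Easing that dips below 0 before accelerating toward 1.

    t: normalized time in [0, 1]. The early negative values are the
    "wind-up" -- the object briefly moves backward, telegraphing the
    coming motion, before committing to it.
    """
    return t * t * ((s + 1) * t - s)

# Early in the animation the value is negative (the anticipatory pull-back):
print(ease_in_back(0.2) < 0)          # True
print(round(ease_in_back(1.0), 6))    # 1.0 -- ends at the target
```

Multiplying a transition’s displacement by this curve gives a UI element a small wind-up, much like the bowler’s crouch before the swing.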

    anticipation_eamples.jpg

    (Caption 1: The aperture animation
    found on the camera application of many smart phones prepares the user for the
    action of taking a photograph.)

(Caption 2: The window shade animation on the homescreen of Windows Phone 7 employs the principle of anticipation by giving users a peek into the phone’s dynamic tile UI.)

(Caption 3: The way in which the cards of the Palm Pre’s user interface move acts as an affordance for users, giving them insight into the gestural language of the UI.)

     

    Animation Principle One: Squash and Stretch


    squash_stretch.jpg

    People and objects inherently have a sense of mass. When an
    object moves, the character of the movement often indicates the rigidity of the
object. Man-made, real-world objects such as bookshelves or wooden chairs are
    rigid and have little flexibility. Soft surfaces, like clothing, and organic
    objects, like the leaves of a plant, have less rigidity and a higher level of
    flexibility. Squash and stretch is the animation principle used to accurately
    express the rigidity of an object.

(Caption: Organic and soft-surface objects, such as a balloon filled with water, have some level of flexibility in their shape. Squash and stretch is the animation principle that helps depict this character in animation.)

    This principle should be used to help communicate the feeling
    you want your mobile experience to evoke as users engage with it. Is your
    mobile experience a world of solid planes, rigid surfaces and sharp, exact
    movement? Or is it a world that’s more organic, with softer, pliable surfaces
    with easy, graceful movement? Squash and stretch is the principle that can help
    you express your decision through movement.
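In code, squash and stretch is often approximated by scaling one axis and compensating the other so the object’s apparent area stays constant. A sketch of that standard animation heuristic (the rule of thumb is general practice, not something prescribed by this chapter):

```python
def squash_stretch(stretch):
    """Return (scale_x, scale_y) for a given vertical stretch factor.

    stretch > 1 elongates the object (e.g. a ball in fast flight);
    stretch < 1 squashes it (e.g. on impact). Scaling the other axis
    by the reciprocal keeps the area constant, so the object reads as
    flexible rather than as growing or shrinking.
    """
    if stretch <= 0:
        raise ValueError("stretch factor must be positive")
    return (1.0 / stretch, stretch)

sx, sy = squash_stretch(1.25)   # stretched in flight
print(sx * sy)                  # 1.0 -- area preserved
sx, sy = squash_stretch(0.8)    # squashed on impact
```

A rigid, “board-like” UI would keep the stretch factor at (or very near) 1.0; an organic, pliable one would let it swing further from 1.0 during fast motion.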

    Mobile UX Squash and Stretch Examples

    squash_stretch_2.1.jpg

(Caption: Screen transitions in Flipboard use the principle of squash and stretch to express that the surfaces inside the world of the application are rigid and “board-like.” In contrast, the screen transitions in Apple’s iBooks use the principle of squash and stretch to echo the flexible and organic movement of turning the pages of an analog book.)

     

    12 Days of Animation Principles



    illusion_of_life.jpg

I recently finished the first draft of Chapter 5 – Animation and Motion: A New Design Material. It struck me as an important subject to cover in the book since motion is becoming an important design component on all the major mobile platforms, yet most designers don’t have a lot of experience working with it.

The backbone of the chapter explores the Twelve Basic Principles of Animation from the bible on animation – The Illusion of Life: Disney Animation.

    I’ve
    learned a lot writing this chapter so I thought it would be fun content to
    share here. So, for the next 12 days, I’ll be posting a principle a day along
    with mobile examples. I’d be thrilled if you provide feedback, share any mobile
    examples you’ve seen of the principles in action, or share your experiences
    applying animation to your own mobile design work.

    Good Mobile Experiences Unfold & Progressively Reveal their Nature


Shrinky Dinks are a children’s toy/activity kit consisting of large flexible sheets which, when heated in an oven, shrink to small hard plates without altering their color or shape. Most sets are pre-printed with outline images of popular children’s characters.

Growing up, my sister and I were big fans of Shrinky Dink Kits. I fondly remember the many hours we spent meticulously adding color to outlined images of our favorite cartoon characters printed on the weird, slightly slippery Shrinky Dink plastic paper. While the activity itself was akin to coloring in coloring books, the final product was infinitely cooler. A standard kitchen oven was all that was needed to unlock the magical powers of Shrinky Dinks. Bake the colored paper characters in a hot oven like a batch of cookies and they’d magically turn into tiny versions of themselves.

    Shrinky Dinks and Mobile UX
    Shrinky Dinks come to mind when I think of the often-cited screen real estate disparity between mobile devices and personal computers; mobile experiences have substantially less screen real estate to work with than their PC counterparts. A common yet unwise method for dealing with less screen real estate is to employ a Shrinky Dink strategy – to simply shrink a PC experience, load it onto a mobile device and call it a mobile experience. While my fondness for Shrinky Dinks clearly runs deep, miniaturizing a PC experience for a mobile device is a bad idea. It’s a surface solution to a structural problem. Successful PC and mobile experiences are built on fundamentally different conceptual models and leverage different psychological functions of the user. Understanding these differences will help you create better experiences for both contexts.

    PC Design Patterns: Anchors, Stacking, and Recognition
    All PC experiences have a conceptual anchor – the desktop – from which users can navigate. Similar to a Jenga tower or a stack of papers, PC experiences have a conceptual “bottom” and “top” that’s anchored to the desktop. Like stacks of paper placed on a table, the desktop metaphor enables multiple cascading application windows to be open at once. These open windows can be shifted and shuffled (reinforced by functions like “bring to front” or “send to back”.) This sense of a static anchor coupled with the ability to layer and cascade application windows enables users to traverse between applications with ease and multi-task.

Similar to a Jenga tower, PC experiences have a conceptual “bottom” and “top”, making it easy to stack cascading application windows in a layered fashion.

GUIs focus on the psychological function of recognition. Users can see all their options and designers give minimal visual differentiation to the interface elements. GUIs rely on a user’s sense of sight – WYSIWYG (What you see is what you get).

Users of desktop experiences interact with graphical user interfaces (aka GUIs). Graphical user interfaces are built on the psychological function of recognition. Users click on a menu item, the interface provides a list of actions, and the user recognizes the appropriate action and clicks on it. GUIs’ reliance on recognition gave rise to the term WYSIWYG (What you see is what you get). Users can see all their options, and minimal visual differentiation between interface elements is commonly used.

    Unfolding, “Topping In” to Information, and Intuition
    In contrast, mobile experiences – especially those with touch screens and natural user interfaces – can feel anchorless by comparison. Instead of cascading windows stacked on top of each other, open mobile applications take up the entire screen. Lacking the screen real estate to present all the interface options at once, mobile UIs intelligently truncate and compartmentalize information into bite-size portions that users can navigate in a way that feels intuitive. If PC experiences are anchored, mobile experiences are about movement and unfolding.

Instead of possessing a strong conceptual anchor, mobile experiences unfold and progressively reveal their nature. While PC experiences present all the content and functionality at once, great mobile experiences allow users to “top in” to information, revealing more content and complexity as the user engages with the application or experience.

The natural user interfaces (aka NUIs) found on most modern mobile devices are built on the psychological function of intuition. Instead of recognizing an action from a list, users must be able to sense from the presentation of the interface what is possible. Instead of “what you see is what you get,” NUIs are about “what you do is what you get.” Users see their way through GUI experiences and sense their way through NUI ones. Unlike GUIs, with their minimal differentiation between interface elements, NUIs typically have fewer options and more visual differentiation and hierarchy between the interface elements.

Unlike GUIs, natural user interfaces don’t have a strong conceptual anchor, which can make users feel anchorless. In order to adapt to limited screen real estate, designers must allow users to “top in” to content, progressively revealing more information as the user engages with the experience.

    Patterns are emerging with regard to the way in which mobile experiences unfold. The following examples are some patterns I’ve been tracking.

Mobile experiences that employ the nesting doll pattern are all about funneling users to detailed content. This pattern allows users to toggle easily between an overview screen displaying many pieces of content and a detail-level view of a specific piece of content. It’s a pattern that has a strong sense of forward/back movement.
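Structurally, the nesting doll pattern maps onto a navigation back-stack: drilling in pushes a more detailed screen, and the back action pops it. A minimal sketch, with hypothetical screen names for illustration:

```python
class NavStack:
    """Back-stack navigation for a 'nesting doll' mobile flow."""

    def __init__(self, root):
        self._stack = [root]  # the overview screen anchors the stack

    def drill_in(self, screen):
        """Move forward to a more detailed view."""
        self._stack.append(screen)

    def back(self):
        """Return to the previous, more general view (root stays put)."""
        if len(self._stack) > 1:
            self._stack.pop()

    @property
    def current(self):
        return self._stack[-1]

# Hypothetical email flow: inbox overview -> thread list -> one message.
nav = NavStack("inbox")
nav.drill_in("thread")
nav.drill_in("message")
print(nav.current)  # message
nav.back()
print(nav.current)  # thread
```

The strong forward/back feel of the pattern falls out of the data structure: the only moves are push (funnel deeper) and pop (retreat toward the overview).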

    Nested Doll Examples: iPhone Email App, BravoTV App, Netflix App

The iPhone’s email application employs the nested doll pattern. The user is gradually funneled to a detailed view of an individual email.

Mobile experiences with a hub and spoke pattern have a strong central anchor point from which users can navigate. Similar to the model of an airport hub, users can bypass the hub and navigate to other “spokes” of the system. However, users interacting with a hub and spoke experience often traverse through the hub of the application several times while engaging with the experience. This pattern works best when applied to experiences with large quantities of content or with several disparate types of functionality.

    Hub and Spoke Examples: Flipboard App, Facebook App, FourSquare App

Flipboard employs the hub and spoke unfolding pattern. The “contents” page is the hub, with the various content sources creating the spokes of the experience.

Just like a bento box from a Japanese restaurant, this pattern carves up the surface area of a mobile device’s screen into small compartments, each portion contributing to the overall experience. This pattern is a good way to express sets of information that are strongly related to each other, and is more commonly used in tablet experiences than in smartphone ones.

The Kayak application for the iPad uses the bento box pattern. Small pieces of tightly related information are displayed on the screen at one time, allowing the user to see the interrelationships of information easily.

Similar to the optical refractor used in an optometrist’s office, the filtered view pattern allows users to navigate the same data set using different views. It’s a pattern that’s especially well suited for navigating large sets of similar digital media such as music, photos, video, or movies.
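In data terms, the filtered view pattern is one collection rendered through interchangeable groupings. A sketch with a small, hypothetical music library (the track data is invented for illustration):

```python
from itertools import groupby

# One data set, several views over it.
tracks = [
    {"title": "Track A", "artist": "Ada", "year": 2009},
    {"title": "Track B", "artist": "Ada", "year": 2011},
    {"title": "Track C", "artist": "Ben", "year": 2009},
]

def view_by(tracks, key):
    """Group the same tracks under a different lens (artist, year, ...)."""
    ordered = sorted(tracks, key=lambda t: t[key])  # groupby needs sorted input
    return {k: [t["title"] for t in g]
            for k, g in groupby(ordered, key=lambda t: t[key])}

print(view_by(tracks, "artist"))  # {'Ada': ['Track A', 'Track B'], 'Ben': ['Track C']}
print(view_by(tracks, "year"))    # {2009: ['Track A', 'Track C'], 2011: ['Track B']}
```

Switching views never touches the underlying data; only the grouping key changes, which is what makes the pattern feel like swapping lenses on the same eye chart.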

    Filtered View Examples: iPod on the iPad or iPhone, CoolIris App, Calendar Apps on most smartphones and tablets

Similar to the optical refractor at an optometrist’s office, which offers a patient multiple views of an eye chart, the filtered view pattern provides users multiple views of the same data. The iPod application for the iPad employs the filtered view pattern and allows users to navigate the same set of content using different views.

    What mobile “unfolding” patterns have you been noticing?

    Mobile Prototyping Essentials


    Last week I presented the following talk on Mobile Prototyping at Web Directions Unplugged in Seattle. It was a great opportunity to share content from my latest chapter of The Mobile Frontier on prototyping. Thanks to John Allsopp, Maxine Sherrin, and Brian Fling for including me in such an inspiring event.



    Mobile Context – The Chapter that Nearly Killed Me


    context_killing_blog_post.jpg


The mantra I scrawled in serial-killer-styled handwriting across a draft of the latest chapter of The Mobile Frontier should give you a sense of how fun the chapter on “mobile context” was to write.

    Haunted, tortured,
    frustrated, angry… all those words fit my state of mind over the last month and
    a half.

    But it’s done! I finished
    it today and it didn’t kill me. I’m hoping for smoother sailing as I tackle the
    chapters that lie ahead. Plus, I’m planning to start the mobile expert
    interviews in earnest and plan to post them here. They should be fun so stay
    tuned. 

    I’ve enclosed some excerpts from
    the context chapter (that I slayed like a dragon) below. Comments and
    feedback are welcomed!


    mobile_context_dragon_slaying.jpg

     


     

    Practitioners of mobile UX
    often cite context as the biggest difference between designing for mobile user
    experiences and other design spaces. But what does “the mobile context” really
    mean?

    Early in my career I worked
    on a research project designed to answer that very question. We recruited ten
    participants and asked them to photograph their surrounding environment each
    and every time they used their mobile device. The research team hoped these
    photos would reveal a pattern – that somehow through analyzing all these
    images, we could crack the code of the mysterious and elusive mobile context.

    A week later, we were
    drowning in a sea of photographs. Some photos were predictable “on-the-go”
    shots often associated with the mobile context: the inside of a supermarket,
    interiors of buses or trains, and “street shots” taken by users while rushing
    down the street. Other environments were surprising: an interior shot of a
    public restroom, a bedroom, the interior of a church even. After many days of
    sifting, sorting and clustering photographs the research team came to a
    sobering conclusion:

    Mobile context = anywhere and
    everywhere.

    It wasn’t the
    earth-shattering, code-cracking conclusion we’d hoped for. It did however
    underscore one of the most fundamental aspects of designing for mobile user
    experiences. Unlike the static and predictable PC context, the mobile context
    is a lot like life. It’s unpredictable, ambiguous… it’s everywhere. The sheer
    number and variance of environments depicted in the photographs we received
    emphasized one of the most magical aspects of mobile user experience that is
    still true today. The ability to access, consume, share, and create information
    from anywhere – untethered from a keyboard and mouse – is a latent human need
    mobile technology has only begun to solve.

     

    Developing a Sympathy to
    Context

    Compelling mobile
    experiences share a common characteristic – they are sympathetic to the
    constraints of the mobile context. Underneath all the hoopla mobile folk make
    about the importance of context is the recognition of a skill that everyone
    interested in this medium must grow: both sympathy and curiosity for the
    complexity of designing for everywhere. It’s not a skill most grow overnight –
    but rather something most of us learn through trial and error. And like any
    skill, the learning never stops.

    Throughout the course of my
    career, there are three design principles that have emerged through my own
painful process of developing this skill. I’ve found them invaluable for coping with “the everywhere” that is the mobile context of use.

    They are:

     

1. Focus on creating experiences that are uniquely mobile
2. Design for partial attention and interruption
3. Reduce cognitive load and opportunity cost

     

    2. Design for Partial
    Attention and Interruption


    A key
    constraint most users are confronted with on a daily basis is allocating their
    two most precious resources: their time and attention. In a chaotic,
    time-and-attention-starved world, the last thing people want is to be
    fascinated by a mobile phone. Even if the mobile application or web site you’ve
    designed is the coolest experience in the world, you’ll be lucky to get a
    fraction of a user’s time and attention. Expecting mobile experiences to be
    immersive is unrealistic. Instead, be sympathetic to the conditions of the
    mobile context and design with partial attention and interruption in mind.

    How?

    A metaphor that helped me
    tune my sympathies to the time and attention constraints of a typical mobile
    user has been to compare PC and mobile experiences to a pair of similar yet
    fundamentally different water activities; scuba diving and snorkeling.

    scuba_snorkeling.jpg

     

    PC experiences
    are like scuba diving. 


    Mobile experiences are like snorkeling.

    PC experiences are
    scuba-like because they are designed to be immersive. Just as a wet suit and a
    tank of air enables scuba divers to plunge deep into the ocean and become
    immersed in the exploration of a different world, the large screen, and static
    environment implicit during PC use enables users to become immersed in the
    rich, graphical world behind their computer monitor. Just as it’s easy for
    scuba divers to maneuver through the water, it’s easy for PC users to move
    through content quickly and easily with the precision afforded by a keyboard
    and mouse. Overlapping windows and visual cues allow for easy exploration of
    multiple applications and documents at one time. Just like the world beneath
    the ocean, the PC invites exploration and discovery. Engagement is prized.

    Mobile is akin to snorkeling
    because attention is divided. Similar to snorkelers who float on the surface of
    the water and must ride with the ebb and flow of the ocean, mobile users often
    need to access content while in an uncontrollable and unpredictable
    environment. Snorkelers tend to dip in and out of the water in search of
    interesting seascapes, just as mobile users “dip in and dip out” of content and
    information. The dynamics of both snorkeling and mobile experiences make it
    inherently difficult for users to get totally immersed because attention is
    divided. Slow connection speeds and small screen sizes do not allow users to
    multi-task or become engrossed.         

    Emergent Computing Paradigms


    Bill squared, the interaction design duo of Bill Verplank and Bill Moggridge, created a framework to describe six computing paradigms. Bill Verplank asserts the first three are firmly established computing paradigms, while the final three are paradigm predictions he thinks will take shape in the future. (sketch by Bill Verplank below)

    diagram1-300x226.png

• Computer as PERSON
• Computer as TOOL
• Computer as MEDIA
• Computer as LIFE
• Computer as FASHION
• Computer as VEHICLE

I agree with their first three paradigms. Computer as person, tool and media accurately express the paradigms that have given shape to the computing landscape for the last 50 years. Where my opinion differs is on the predictions for future paradigms. Perhaps I’m splitting hairs, but LIFE, VEHICLE and FASHION seem vague and difficult to envision with any specificity.

    There are three similar yet distinct future paradigms I’ve been tracking that I believe will become important and emergent in the years to come. They are:

• Computers as ORGANIC MATERIAL
• Computers as INFRASTRUCTURE
• Computers as SOCIAL CURRENCY

    What does this have to do with mobile experiences? Gone are the days of computing in a static environment. I’ve long believed that mobile phones aren’t really phones anymore. Instead, they are precursors – tangible instantiations of what computing experiences will evolve into. As such, our experiences with mobile devices offer early glimpses into new computing paradigms. Nothing illuminates ideas about the future like a good science fiction reference, so I’ve leaned on a couple favorites to communicate these ideas in the list below.

    Computing Paradigms: Past, Present, and Future

    computer_person_small.jpg

    Computer as Person
Initially, computers were conceived of as “intelligent agents” or “electronic brains”. In this paradigm, computers act as intermediaries between humans and the digital world of information. To reinforce the notion of computer as person, designers give systems that reflect this paradigm anthropomorphic qualities such as interfaces that “listen” or “hear” human commands. Computers in this paradigm are intelligent agents that can replace the need for humans to perform mundane tasks. Research areas like computer vision, artificial intelligence and robotics continue in this tradition by trying to give computers human-like attributes.

    Key Values:
    • Computers as an assistant or servant
    • Command and control
    • Computers can replace people

    Expressed in interactions through:
    • Voice Commands
    • Text/language interfaces
    • Text input
    • Programming

    Existing Examples:
    • Clippy
• Command line interfaces
    • Voice-driven interfaces

Science Fiction Examples:
• HAL in 2010
• Sonny in I, Robot




    computer_as_small.jpg

    Computer as Tool

The notion that computers are a tool that can augment human intelligence emerged in the 1970s and has been best exemplified by the desktop metaphor and the graphical user interface. Instead of replacing people, the computer as tool paradigm relies on our ability to view computers as we would a hammer or a pen – as a tool for completing tasks. This paradigm supports the notion that computers exist to enable people to be more efficient through our own agency. It celebrates values like utility, task completion, and efficiency. Many of the hallmarks of interaction design used today are deeply anchored in the “computer as tool” paradigm.

    Key Values:
    • Computers should empower people
    • “Getting stuff done”
    • Utility and usability
    • Computers should be useful and efficient

    Expressed in interactions through:
    • Metaphorics
    • “The desktop”
    • Graphical user interface
    • WIMP
    • Mouse and Keyboard

    Existing Examples:
    • Microsoft Office
    • Email
    • Folders and Files
    • Using your mobile phone as a remote

    Science Fiction Examples:
    • PreCrime tools in Minority Report



    little_media.jpg
    Computer as Media
    The notion that computers could act as distributors of media existed before the 1990s. However, it wasn’t until the widespread proliferation of the Internet that the “computers as media” paradigm got traction and became convincing. Instead of tools for efficiency, computers bear a likeness to televisions and radios in that they distribute media. Instead of helping people complete tasks, computers provide content that can be watched, read, heard, engaged with and enjoyed. This paradigm celebrates values like engagement, expression, content distribution, play and access. In this paradigm content can be prismed through a variety of devices – televisions, computers, mobile phones, and portable media players. As such, anything that can deliver content and provides an engaging and immersive experience is “a computer.”

    Key Values:
    • Computers should entertain us
    • Expression and Engagement
    • Immersive experiences
    • Focus on content
    • Play
    • Persuasion

    Expressed in interactions through:
    • Web Pages
    • Content stores (iTunes, Netflix)
    • Game consoles
    • Convergence
    • GUI/NUI Hybrid interfaces
    • Content as the interface

    Existing Examples:
• YouTube
    • Online publication (ex: newyorktimes.com)
    • MP3 Players
    • Napster
    • Wii
    • Reading a book on an iPad or mobile phone

Science Fiction Examples:
• Tron
• WarGames


    small_organic.jpg

    Computers as Organic Material
What if everything in the environment was embedded with computing power? Or if computing and information had organic qualities? Similar to Verplank and Moggridge’s “computer as life” metaphor, the “computers as organic material” paradigm predicts a fluid, natural, almost biological perspective on our relationship to computers and information. Instead of media streaming through “dumb terminals” such as computers, TVs and mobile devices, computing and information are ambient forces woven into the fabric of the world. Sensors are everywhere; computers are embedded into everything in the environment. Monolithic devices are not only de-emphasized, they are supplanted by an ecosystem of smaller, more portable devices or large public monitors built into the environment. Instead of focusing on devices, people focus on data and information. We come to understand data and data patterns as if they’re a biological form. The dynamic and life-like qualities of data are celebrated. Systems allow information to form and reform by connecting to other data, making computer experiences contextual and adaptive. Computers can anticipate human intent, making interactions “quiet” and “dissolving into human behavior.”


    Key Values:
    • Computing is embedded into the fabric of the world.
    • Computing is quiet and seamless
    • Computing has biological qualities
    • Focus on data and information instead of devices
    • Data empowers us to make better decisions

    Existing Precursors:
    • Smart Environments
    • Organic interfaces
    • Sensors that turn lights on and off
    • Sensors embedded into textiles
    • Biometrics
    • Glucose sensors inserted into the skin
    • Plants and bridges that Twitter

    Science Fiction Examples:
    • The Matrix
    • Cylon spaceships on Battlestar Galactica


    small_infrastructure.jpg

    Computing as Infrastructure

    What if computing power and information were like water and electricity? The “computing as infrastructure” paradigm is based on the idea that eventually we’ll live in a world where computing power and information are a man-made utility built over or into the environment. We assume it is always there, waiting for us to engage with it. Just like plugging in a hairdryer or turning on a water faucet, people can “tap into” computing functionality through physical mechanisms in the environment like RFID, NFC, and free public WiFi. Interactions become about orchestrating networks, people, and objects through physical computing and tangible interactions. Similar to the hand gesture we make to indicate “I’m on the phone,” our interactions with this infrastructure become so pervasive that gestures and mechanisms embedded into the environment serve as a way to communicate our behavior.

    Key Values:
    • Computers and information access are utilities
    • Computing is physical and tangible

    Precursors:
    • Oyster Card
    • Nike Plus
    • RFID
    • NFC
    • GPS

    Science Fiction Examples:
    • Magical wands in Harry Potter
    • Avatar operators in Avatar



    small_social_currency.jpg

    Computers as Social Currency
    Since humans are inherently social critters, we’re innately tuned to how our actions and behaviors are perceived by our family and friends, our tribes, and society at large. What if computing and information consumption became yet another form of social expression? The “computers as social currency” paradigm amplifies Jyri Engeström’s theory of object-centered sociality, our use of book covers, and the inherent shame we feel for perusing Perezhilton.com. In this future paradigm, computing reflects social behavior. Computers, data, and information are social objects that people use both to form connections with others and to express their identity and values in the world. What we own and consume matters greatly. People become highly conscious of their content consumption and computing ecosystems because computing behaviors are expressions of class, education, socio-economic status, and social standing within a given society or tribe.

    Key Values:
    • Computers and information consumption are a reflection of social identity
    • I am what I consume

    Precursors:
    • Apple Fanboys
    • Facebook
    • “Checking in” on Foursquare
    • LinkedIn
    • The digital divide

    Science Fiction Examples:
    • Sensor Web in The Caryatids




    Detailed sketches of these paradigms are on my Flickr stream.

    Curious if these three emergent paradigms make sense to you:

    • ORGANIC MATERIAL
    • INFRASTRUCTURE
    • SOCIAL CURRENCY

    Are there any I’m missing? Let me know in the comments below.