Straight ahead and pose to pose are animation
techniques that refer directly to the animation drawing process. In order to
capture fast, dynamic action with unusual movement, animators will use the straight ahead technique and draw every
single frame of an animation. The pose
to pose drawing technique employs the use of keyframes (the important frames of a sequence) and inbetweens (the intermediate frames that
express movement between the keyframes).
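The relationship between keyframes and inbetweens can be sketched in code: an inbetween is just an interpolated pose. Below is a minimal illustration of the idea — the `inbetween` helper and the pose dictionaries are hypothetical, not from any animation library — using simple linear interpolation between two keyframe poses.

```python
def inbetween(key_a, key_b, num_frames):
    """Generate the intermediate frames between two keyframe poses.

    Each pose is a dict mapping a property name (e.g. x position)
    to a numeric value; inbetweens are produced by linearly
    interpolating each property between the two keyframes.
    """
    frames = []
    for i in range(1, num_frames + 1):
        t = i / (num_frames + 1)  # fraction of the way from A to B
        frames.append({k: key_a[k] + (key_b[k] - key_a[k]) * t for k in key_a})
    return frames

# Two keyframes of a falling ball, with three inbetweens drawn between them
start = {"x": 0.0, "y": 100.0}
end = {"x": 40.0, "y": 0.0}
print(inbetween(start, end, 3))
```

In pose to pose terms, an animator draws `start` and `end` and lets the inbetweens fill the gap; straight ahead would mean drawing every frame by hand instead.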
(Caption: The first illustration depicts the straight ahead drawing technique, in which every single frame of an animation is rendered. The second illustration represents the keyframes that will be used in a pose to pose animation.)
The vast majority of animations and transitions used
in mobile experiences employ the pose-to-pose animation technique. Pose to pose
will usually suffice for transitions that are not overly complex and can be
described easily. If you’d like to incorporate unusual or dynamic movement in your experience that can’t be achieved using pose to pose, you’ll likely need to use the straight ahead drawing technique to capture the motion you’re after.
(Caption 1.0: Popular games like Plants vs. Zombies for the iPad employ pose to pose animation.)
(Caption 2.0: Games with more complex movement, like the ones found in the iPad game Fruit Ninja, use straight ahead animation techniques to capture their dynamic motion.)
People keen on selling property often “stage”
a home, meaning they arrange each room in such a way that its purpose is
completely clear. The principle of staging in animation is similar – good
staging makes the central idea of an animation completely clear to the viewer.
In the world of mobile user experience, the principle of staging is most
relevant when considering the transitions between screens and interactions. Interactions
that are well staged combine light, color, composition, and motion to direct
the user’s eye to exactly where it needs to be as they interact with an
experience. Well-staged mobile experiences have a sense of flow and ease,
whereas poorly staged ones feel disjointed.
(Caption: The well-staged illustration
makes the central idea – two characters engaged in conversation – completely
clear. The poorly staged illustration leaves the dynamic between the two
characters open for interpretation, making the central idea unclear.)
Staging is a subtle yet important
consideration when applying animation and motion to mobile experiences. A key
challenge for natural user interfaces is that they lack a strong conceptual
anchor. As a result, users new to NUIs often feel anchorless as they navigate
touchscreen experiences. If good, strong staging is applied to the animations and transitions of your experience, users will likely feel more grounded in the experience.
Mobile UX Staging Examples
Good staging used in the iPad version of Keynote allows users to see exactly where the file they are currently working on lives in the application’s file structure. This subtle use of staging allows the user to feel grounded in the application.
When an illustrator depicts a moving object or
character, there are three distinct phases that should be considered to make the
object’s movement seem realistic:
• the preparation for the movement
• the action itself
• the results of the movement
(Caption: The crouching pose of a bowler, winding up before swinging a bowling ball, is an example of the type of pose the principle of anticipation should capture.)
Whether it’s a baseball batter winding up before a swing, or the recoil
of a spring before it’s sprung, anticipation
is the animation principle used to address the preparation of an object for
movement. Anticipation is about
orchestrating components of a scene – be it the lighting, composition, or even
manipulating the shape and form of an object or character – in order to give
the viewer insight into what is about to happen.
Similar to its application in animated film and cartoons, when applied to
the realm of mobile UX, anticipation is all about giving the user insight into
what is about to happen next. For example, it’s a principle that can be applied
to the visual treatment of the interface as a user opens up an application. It
can also be applied to transitions between experiences. Because gesture
languages are relatively new for users, the principle of anticipation can also
be used to provide affordances for gestural UIs. Anticipation gives insight into the speed and direction with which objects within a UI can move, as well as the gestural possibilities of those objects.
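In motion design terms, anticipation often shows up as an easing curve that briefly moves away from the target before committing to it. The sketch below uses the common "ease-in-back" convention (the function name and overshoot constant follow that convention, not any specific mobile SDK):

```python
def ease_in_back(t, overshoot=1.70158):
    """Ease-in-back: the value dips below its starting point (a small
    backward move) before accelerating toward the target -- a numeric
    analogue of an animator's anticipation pose. t runs from 0.0 to 1.0."""
    return t * t * ((overshoot + 1) * t - overshoot)

# Sample the curve: the early values are negative -- that's the "wind-up"
samples = [round(ease_in_back(i / 10), 3) for i in range(11)]
print(samples)
```

The negative dip at the start is the on-screen equivalent of the batter’s wind-up: a small contrary motion that tells the user what is about to happen.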
(Caption 1: The aperture animation
found on the camera application of many smart phones prepares the user for the
action of taking a photograph.)
(Caption 2: The window shade animation on the homescreen of the Windows Phone 7 employs the principle of anticipation by giving users a peek into the phone’s dynamic tile UI.)
(Caption 3: The way in which the cards of the Palm Pre’s user interface move acts as an affordance for users, giving them insight into the gestural language of the UI.)
People and objects inherently have a sense of mass. When an
object moves, the character of the movement often indicates the rigidity of the
object. Man-made, real-world objects such as bookshelves or wooden chairs are
rigid and have little flexibility. Soft surfaces, like clothing, and organic
objects, like the leaves of a plant, have less rigidity and a higher level of
flexibility. Squash and stretch is the animation principle used to accurately
express the rigidity of an object.
(Caption: Organic and soft surface objects, such as a balloon filled with water, have some level of flexibility in their shape. Squash and stretch is the animation principle that helps depict this character in animation.)
This principle should be used to help communicate the feeling
you want your mobile experience to evoke as users engage with it. Is your
mobile experience a world of solid planes, rigid surfaces and sharp, exact
movement? Or is it a world that’s more organic, with softer, pliable surfaces
with easy, graceful movement? Squash and stretch is the principle that can help
you express your decision through movement.
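The principle has a simple numeric core: deform a shape along its axis of motion while keeping its volume (or, in 2D, its area) constant. A minimal sketch of that idea, with a hypothetical `squash_stretch` helper:

```python
def squash_stretch(width, height, stretch):
    """Deform a shape along its axis of motion while preserving area.

    stretch > 1.0 elongates the shape (stretch); stretch < 1.0
    flattens it (squash). Because width * height stays constant,
    the object reads as pliable rather than as changing size.
    """
    return width / stretch, height * stretch

# A 10x10 ball stretching as it falls, then squashing on impact
print(squash_stretch(10.0, 10.0, 2.0))   # taller and thinner
print(squash_stretch(10.0, 10.0, 0.5))   # shorter and wider
```

A rigid, "board-like" world would use stretch values close to 1.0; a soft, organic one would push them further from it.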
Mobile UX Squash and Stretch Examples
(Caption: Screen transitions in Flipboard use the principle of squash and stretch to express that the surfaces inside the world of the application are rigid and “board-like”. In contrast, the screen transitions in Apple’s iBooks use the principle of squash and stretch to echo the flexible and organic movement of turning the pages of an analog book.)
I recently finished the first draft of Chapter 5 –
Animation and Motion: A New Design Material. It struck me as an important subject
to cover in the book since motion is becoming an important design component in
all the major mobile platforms, yet most designers don’t have a lot of
experience working with it.
The backbone of the chapter explores the Twelve Basic Principles of
Animation from the
bible on animation – The Illusion of Life: Disney Animation.
I learned a lot writing this chapter, so I thought it would be fun content to
share here. So, for the next 12 days, I’ll be posting a principle a day along
with mobile examples. I’d be thrilled if you provide feedback, share any mobile
examples you’ve seen of the principles in action, or share your experiences
applying animation to your own mobile design work.
Growing up, my sister and I were big fans of Shrinky Dink Kits. I fondly remember the many hours we spent meticulously adding color to outlined images of our favorite cartoon characters printed on the weird, slightly slippery Shrinky Dink plastic paper. While the activity itself was akin to coloring in coloring books, the final product was infinitely cooler. A standard kitchen oven was all that was needed to unlock the magical powers of Shrinky Dinks. Bake the colored paper characters in a hot oven like a batch of cookies and they’d magically turn into tiny versions of themselves.
Shrinky Dinks and Mobile UX
Shrinky Dinks come to mind when I think of the often-cited screen real estate disparity between mobile devices and personal computers; mobile experiences have substantially less screen real estate to work with than their PC counterparts. A common yet unwise method for dealing with less screen real estate is to employ a Shrinky Dink strategy – to simply shrink a PC experience, load it onto a mobile device and call it a mobile experience. While my fondness for Shrinky Dinks clearly runs deep, miniaturizing a PC experience for a mobile device is a bad idea. It’s a surface solution to a structural problem. Successful PC and mobile experiences are built on fundamentally different conceptual models and leverage different psychological functions of the user. Understanding these differences will help you create better experiences for both contexts.
PC Design Patterns: Anchors, Stacking, and Recognition
All PC experiences have a conceptual anchor – the desktop – from which users can navigate. Similar to a Jenga tower or a stack of papers, PC experiences have a conceptual “bottom” and “top” that’s anchored to the desktop. Like stacks of paper placed on a table, the desktop metaphor enables multiple cascading application windows to be open at once. These open windows can be shifted and shuffled (reinforced by functions like “bring to front” or “send to back”.) This sense of a static anchor coupled with the ability to layer and cascade application windows enables users to traverse between applications with ease and multi-task.
Similar to a Jenga tower, PC experiences have a conceptual “bottom” and “top”, making it easy to stack cascading application windows in a layered fashion.
Users of desktop experiences interact with graphical user interfaces (aka GUIs). Graphical user interfaces are built on the psychological function of recognition: users click on a menu item, the interface provides a list of actions, and the user recognizes the appropriate action and clicks on it. GUIs’ reliance on recognition gave rise to the term WYSIWYG (what you see is what you get). Users can see all their options, and minimal visual differentiation between interface elements is commonly used.
Unfolding, “Topping In” to Information, and Intuition
In contrast, mobile experiences – especially those with touch screens and natural user interfaces – can feel anchorless by comparison. Instead of cascading windows stacked on top of each other, open mobile applications take up the entire screen. Lacking the screen real estate to present all the interface options at once, mobile UIs intelligently truncate and compartmentalize information into bite-size portions that users can navigate in a way that feels intuitive. If PC experiences are anchored, mobile experiences are about movement and unfolding.
Instead of possessing a strong conceptual anchor, mobile experiences unfold and progressively reveal their nature. While PC experiences present all the content and functionality at once, great mobile experiences allow users to “top in” to information, and reveal more content and complexity as the user engages with the application or experience.
The natural user interfaces (aka NUIs) found on most modern mobile devices are built on the psychological function of intuition. Instead of recognizing an action from a list, users must be able to sense from the presentation of the interface what is possible. Instead of “what you see is what you get,” NUIs are about “what you do is what you get.” Users see their way through GUI experiences, and sense their way through NUI ones. Unlike GUI interfaces with minimal differentiation between interface elements, NUI interfaces typically have fewer options, and there is more visual differentiation and hierarchy between the elements that are presented.
Patterns are emerging with regard to the way in which mobile experiences unfold. The following examples are some patterns I’ve been tracking.
Mobile experiences that employ the nesting doll pattern are all about funneling users to detailed content. This pattern allows users to toggle easily between an overview screen displaying many pieces of content and a detail-level view of a specific piece of content. It’s a pattern that has a strong sense of forward/back movement.
Nesting Doll Examples: iPhone Email App, BravoTV App, Netflix App
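The forward/back movement of the nesting doll pattern maps naturally onto a navigation stack: drilling in pushes a screen, backing out pops one. A rough sketch under that assumption — the class and screen names are illustrative, not from any mobile framework:

```python
class NestingDollNav:
    """Navigation as a stack: overview at the bottom, detail on top."""

    def __init__(self, root_screen):
        self._stack = [root_screen]

    def drill_in(self, screen):
        """Move forward into more detailed content."""
        self._stack.append(screen)

    def back(self):
        """Move back toward the overview; the root screen never pops."""
        if len(self._stack) > 1:
            self._stack.pop()
        return self.current

    @property
    def current(self):
        return self._stack[-1]

# Overview -> message -> attachment, then back out one level
nav = NestingDollNav("inbox")
nav.drill_in("message")
nav.drill_in("attachment")
print(nav.back())  # prints "message"
```

The stack is what gives the pattern its strong sense of direction: forward always means deeper, back always means toward the overview.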
Mobile experiences with a hub and spoke pattern have a strong central anchor point from which users can navigate. Similar to the model of an airport hub, users can bypass the hub and navigate to other “spokes” of the system. However, users interacting with a hub and spoke experience often traverse through the hub of the application several times while engaging with the experience. This pattern works best when applied to experiences with large quantities of content or to experiences with several disparate types of functionality.
Hub and Spoke Examples: Flipboard App, Facebook App, FourSquare App
Just like a bento box from a Japanese restaurant, this pattern carves up the surface area of a mobile device’s screen into small compartments, each portion contributing to the overall experience. This pattern is a good way to express sets of information that are strongly related to each other and is more commonly used on tablets than smartphone experiences.
Similar to the optical refractor used in an optometrist’s office, the filtered view pattern allows users to navigate the same data set using different views. It’s a pattern that’s especially well suited for navigating large sets of similar digital media such as music, photos, video, or movies.
Filtered View Examples: iPod on the iPad or iPhone, CoolIris App, Calendar Apps on most smartphones and tablets
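The filtered view pattern can be sketched as grouping one collection of records under different facets. The track list and `filtered_view` helper below are hypothetical, purely to show a single data set yielding multiple views:

```python
# A hypothetical track list, viewed two different ways over the same data
tracks = [
    {"title": "Go", "artist": "Moby", "year": 1991},
    {"title": "Porcelain", "artist": "Moby", "year": 1999},
    {"title": "One", "artist": "U2", "year": 1991},
]

def filtered_view(data, key):
    """Group the same records under a different facet -- the filtered
    view pattern: one data set, many ways to look at it."""
    views = {}
    for item in data:
        views.setdefault(item[key], []).append(item["title"])
    return views

print(filtered_view(tracks, "artist"))  # grouped by artist
print(filtered_view(tracks, "year"))    # the same tracks, grouped by year
```

Switching views never changes the underlying data, only the lens it is seen through — which is what makes the pattern feel like the optometrist’s refractor.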
What mobile “unfolding” patterns have you been noticing?
Last week I had the opportunity to share some fresh content from **The Mobile Frontier** at [WebVisions 2011](http://www.webvisionsevent.com/) in Portland. Thanks to all who attended the talk and tweeted such kind words – especially to conference organizer Brad Smith for inviting me to be a part of a fantastic event.
Last week I presented the following talk on Mobile Prototyping at Web Directions Unplugged in Seattle. It was a great opportunity to share content from my latest chapter of The Mobile Frontier on prototyping. Thanks to John Allsopp, Maxine Sherrin, and Brian Fling for including me in such an inspiring event.
The mantra I scrawled in
serial-killer-styled handwriting across a draft of the latest chapter of The
Mobile Frontier should give you a sense of how fun the chapter on
“mobile context” was to write.
Frustrated, angry… all those words fit my state of mind over the last month.
But it’s done! I finished
it today and it didn’t kill me. I’m hoping for smoother sailing as I tackle the
chapters that lie ahead. Plus, I’m planning to start the mobile expert
interviews in earnest and plan to post them here. They should be fun, so stay tuned.
I’ve enclosed some excerpts from
the context chapter (that I slayed like a dragon) below. Comments and
feedback are welcomed!
PC experiences are like scuba diving.
Mobile experiences are like snorkeling.
PC experiences are
scuba-like because they are designed to be immersive. Just as a wet suit and a
tank of air enables scuba divers to plunge deep into the ocean and become
immersed in the exploration of a different world, the large screen and static environment implicit during PC use enable users to become immersed in the
rich, graphical world behind their computer monitor. Just as it’s easy for
scuba divers to maneuver through the water, it’s easy for PC users to move
through content quickly and easily with the precision afforded by a keyboard
and mouse. Overlapping windows and visual cues allow for easy exploration of
multiple applications and documents at one time. Just like the world beneath
the ocean, the PC invites exploration and discovery. Engagement is prized.
Mobile is akin to snorkeling
because attention is divided. Similar to snorkelers who float on the surface of
the water and must ride with the ebb and flow of the ocean, mobile users often
need to access content while in an uncontrollable and unpredictable
environment. Snorkelers tend to dip in and out of the water in search of
interesting seascapes, just as mobile users “dip in and dip out” of content and
information. The dynamics of both snorkeling and mobile experiences make it
inherently difficult for users to get totally immersed because attention is
divided. Slow connection speeds and small screen sizes do not allow users to
multi-task or become engrossed.

Bill squared, the interaction design duo of Bill Verplank and Bill Moggridge, created a framework to describe six computing paradigms. Bill Verplank asserts the first three are firmly established computing paradigms, while the final three are paradigm predictions he thinks will take shape in the future. (sketch by Bill Verplank below)
• Computer as PERSON
• Computer as TOOL
• Computer as MEDIA
• Computer as LIFE
• Computer as FASHION
• Computer as VEHICLE
I agree with their first three paradigms. Computer as person, tool and media accurately express the paradigms that have given shape to the computing landscape for the last 50 years. Where my opinions differ is on predictions for future paradigms. Perhaps I’m splitting hairs, but LIFE, VEHICLE and FASHION seem vague and difficult to envision with any specificity.
There are three similar yet distinct future paradigms I’ve been tracking that I believe will become important and emergent in the years to come. They are:
• Computer as ORGANIC MATERIAL
• Computers as INFRASTRUCTURE
• Computers as SOCIAL CURRENCY
What does this have to do with mobile experiences? Gone are the days of computing in a static environment. I’ve long believed that mobile phones aren’t really phones anymore. Instead, they are precursors – tangible instantiations of what computing experiences will evolve into. As such, our experiences with mobile devices offer early glimpses into new computing paradigms. Nothing illuminates ideas about the future like a good science fiction reference, so I’ve leaned on a couple favorites to communicate these ideas in the list below.

Computing Paradigms: Past, Present, and Future
Computer as Person
Initially, computers were conceived of as “intelligent agents” or “electronic brains”. In this paradigm, computers act as intermediaries between humans and the digital world of information. To reinforce the notion of computer as person, designers give systems that reflect this paradigm anthropomorphic qualities, such as interfaces that “listen” or “hear” human commands. Computers in this paradigm are intelligent agents that can replace the need for humans to perform mundane tasks. Research areas like computer vision, artificial intelligence and robotics continue in this tradition by trying to give computers human-like attributes.

Key Values:
• Computers as an assistant or servant
• Command and control
• Computers can replace people
Expressed in interactions through:
• Voice Commands
• Text/language interfaces
• Text input
• Command line interfaces
• Voice-driven interfaces
Examples from Science Fiction
• HAL in 2001: A Space Odyssey
• Sonny in I, Robot
Computer as Tool

The notion that computers are a tool that can augment human intelligence emerged in the 1970s and has been best exemplified by the desktop metaphor and the graphical user interface. Instead of replacing people, the computer as tool paradigm relies on our ability to view computers as we would a hammer or a pen – as a tool for completing tasks. This paradigm supports the notion that computers exist to enable people to be more efficient through our own agency. It celebrates values like utility, task completion, and efficiency. Many of the hallmarks of interaction design used today are deeply anchored in the “computer as tool” paradigm.
Key Values:
• Computers should empower people
• “Getting stuff done”
• Utility and usability
• Computers should be useful and efficient
Expressed in interactions through:
• “The desktop”
• Graphical user interface
• Mouse and Keyboard
• Microsoft Office
• Folders and Files
• Using your mobile phone as a remote
Science Fiction Examples:
• PreCrime tools in Minority Report
Computer as Media
The notion that computers could act as distributors of media existed before the 1990s. However, it wasn’t until the widespread proliferation of the Internet that the “computers as media” paradigm got traction and became convincing. Instead of tools for efficiency, computers bear a likeness to televisions and radios in that they distribute media. Instead of helping people complete tasks, computers provide content that can be watched, read, heard, engaged with and enjoyed. This paradigm celebrates values like engagement, expression, content distribution, play and access. In this paradigm content can be prismed through a variety of devices – televisions, computers, mobile phones, and portable media players. As such, anything that can deliver content and provides an engaging and immersive experience is “a computer.”
Key Values:
• Computers should entertain us
• Expression and Engagement
• Immersive experiences
• Focus on content
Expressed in interactions through:
• Web Pages
• Content stores (iTunes, Netflix)
• Game consoles
• GUI/NUI Hybrid interfaces
• Content as the interface
• Online publication (ex: newyorktimes.com)
• MP3 Players
• Reading a book on an iPad or mobile phone
Example from Science Fiction:
Computers as Organic Material
What if everything in the environment was embedded with computing power? Or if computing and information had organic qualities? Similar to Verplank and Moggridge’s “computer as life” metaphor, the “computers as organic material” paradigm predicts a fluid, natural, almost biological perspective on our relationship to computers and information. Instead of media streaming through “dumb terminals” such as computers, TVs and mobile devices, computing and information are ambient forces woven into the fabric of the world. Sensors are everywhere; computers are embedded into everything in the environment. Monolithic devices are not only de-emphasized, they are supplanted by an ecosystem of smaller, more portable devices or large public monitors built into the environment. Instead of focusing on devices, people focus on data and information. We come to understand data and data patterns as if they’re a biological form. The dynamic and life-like qualities of data are celebrated. Systems allow information to form and re-form by connecting to other data, making computer experiences contextual and adaptive. Computers can anticipate human intent, making interactions “quiet” and “dissolving into human behavior.”

Key Values:
• Computing is embedded into the fabric of the world.
• Computing is quiet and seamless
• Computing has biological qualities
• Focus on data and information instead of devices
• Data empowers us to make better decisions
Expressed in interactions through:
• Smart environments
• Organic interfaces
• Sensors that turn lights on and off
• Sensors embedded into textiles
• Glucose sensors inserted into the skin
• Plants and bridges that Twitter
Science Fiction Examples:
• The Matrix
• Cylon spaceships on Battlestar Galactica
Computing as Infrastructure

What if computing power and information were like water and electricity? The “computer as infrastructure” paradigm prediction is based on the idea that eventually we’ll live in a world where computing power and information are a man-made utility built over or into the environment. We assume it is always there, waiting for us to engage with it. Just like plugging in a hairdryer or turning on a water faucet, people can “tap into” computing functionality through physical mechanisms in the environment like RFID, NFC, and free public WiFi. Interactions become about orchestrating networks, people, and objects through physical computing and tangible interactions. Similar to the hand gesture we make to indicate “I’m on the phone”, our interactions with this infrastructure become so pervasive that gestures and mechanisms embedded into the environment serve as a way to communicate our behavior.
Key Values:
• Computers and information access are utilities
• Computing is physical and tangible
Expressed in interactions through:
• Oyster Card
• Nike Plus
Science Fiction Examples:
• Magical wands in Harry Potter
• Avatar operators in Avatar
Computers as Social Currency
Since humans are inherently social critters, we’re innately tuned to understand how our actions and behaviors are perceived by our family and friends, our tribes and by society. What if the focus of computing and information consumption became yet another form of social expression? The “computers as social currency” paradigm prediction amplifies Jyri Engeström’s theory on object-centered sociality, our use of book covers, and the inherent shame we feel for perusing Perezhilton.com. In this future paradigm, computing reflects social behavior. Computers, data and information are social objects that people use to both form connections with others and to express their identity and values in the world. What we own and consume matters greatly. People become highly conscious of their content consumption and computing ecosystems because computing behaviors are expressions of class, education, socio-economic status and social standing within a given society or tribe.
Key Values:
• Computers and information consumption are a reflection of social identity
• I am what I consume
Expressed in interactions through:
• Apple fanboys
• “Checking-in” to FourSquare
• The digital divide
Science Fiction Examples:
• Sensor Web in The Caryatids

Detailed sketches of these paradigms are on my Flickr stream.

Curious if these three emergent paradigms make sense to you:
• ORGANIC MATERIAL
• INFRASTRUCTURE
• SOCIAL CURRENCY

Are there any I’m missing? Let me know in the comments below.