Super stoked to announce that after what seems like forever, The Mobile Frontier: A Guide to Creating Mobile Experiences, is now available. Hooray! I hope you enjoy it …
Alex is Head of
Product at foursquare. Alex brings 12 years of product development experience
and a multidisciplinary background to his work, with a focus on mobile, social
and emerging technologies. Previously, he co-founded Dodgeball, one of the
first mobile social services in the U.S., which he sold to Google in May 2005.
He is a lifelong New Yorker currently living in Brooklyn with his wife,
daughter, and dog. Alex holds a master’s degree from New York University’s
Interactive Telecommunications Program and a bachelor’s degree in philosophy
from Trinity College.
How did you find
your way into the mobile user experience space?
I started getting interested in mobile when I attended
New York University’s Interactive Telecommunications graduate program. I went
to ITP in 2003 and 2004 when, believe it or not, Friendster was still in vogue.
At that time, mobile technology was still super frustrating, but just starting
to turn the corner to be a little bit more consumer friendly. ITP is an
environment where students are encouraged to play around with the newest
technology as part of the curriculum.
I’ve always been interested in the idea of mobility and
presence and how you can alter and enhance the way people interact with the
world around them through technology in a non-intrusive way. At ITP, I started
working with Dennis Crowley on an application called Scout. When students arrived at school, they had to swipe their ID
cards to enter the building. We designed Scout around that core interaction.
When students entered the building and swiped their card, Scout would drop them
into a virtual space and then other students could query that space with
questions like, “Is there anyone on the floor right now who knows ActionScript?”
Scout used the idea of presence and social connection to enhance the way
students were interacting with each other based on space. In a lot of ways,
foursquare has been a natural extension of that idea. We’ve tried to take
something simple like a check-in and build a rich experience around it.
One thing that has been challenging – both with the
early version of Scout and now foursquare – is that when you’re designing
mobile experiences, it often feels like you’re trying to build things that help
pull people over that hump to appreciate the richer experience that can come
from designing around the intersection of mobile, social, and place.
How do you pull
people over that hump so that they can realize the value of the types of mobile
experiences you’re designing?
Part of pulling people over the hump is staying focused.
The foursquare team is a group of people who have an incredibly active
relationship with our phones. It’s easy to forget that not everybody has that type
of a relationship with their mobile devices, and we have to always make sure
we’re designing for those outside of our power user set.
foursquare has always been a social utility at its
core – find out what your friends are doing, tell your friends what you’re
doing. We use levers like game mechanics (encouragement through points, the
leaderboard, badges), recommendations, and specials to encourage engagement
with the app. The challenge is tweaking all those different levers without
losing sight of what is central to the app’s experience – social and place.
Now that people can carry around these powerful devices,
and have access to rich content like maps, images, and video, it’s easy to
think, “Oh, you can watch videos on it” or “We can create an augmented reality
lens to enhance people’s view of the world.” We don’t want people to open up
foursquare and be buried in there or force people to look ridiculous waving
their phone in the air to see things. That’s definitely not the kind of
experience we’re trying to create. We want to build something that people can
pop open anywhere in the world that provides a quick, valuable interaction, and
then it’s done. They can close it and get back to enjoying what they were doing.
From day one, we’ve been building the foursquare
experience for people to share things in the real world – to share rich
experiences – and everything we’ve done has gone into building towards that
vision. We feel that’s our beachhead – being able to keep plugging away and
focus on that area is our competitive advantage.
There seems to be
a theme in your professional history. Dodgeball, Scout, and foursquare all
combine mobile and a sense of place with a social layer. Where does that interest come from?
I think part of it is my personality. I’m personally
drawn to things that bring people together. I love that a big part of my job is
building the team that builds the product. I’ve been managing a softball team
for 12 years, and I run a football office pool. I know the latter two are sort
of trivial examples, but it’s coordinating groups of people around a thing, and
that thing can be a fantasy baseball league, or that thing can be going out for
happy hour. That’s something that’s been true about me my whole life.
Do you think the
fact that you have spent so much time in New York City has influenced your
thoughts about mobile design?
Definitely. New York is a unique place to design things
around real-time place-based social interactions. Designing mobile experiences
in New York is very much a gift, but it’s also a challenge not to get too
swayed by that. Currently, foursquare has over 20 million users. We have to
design for the next 40 million users and not the first 20 if we want to build
the type of experience that I think we can, and a lot of those 40 aren’t
necessarily going to be urban dwellers.
What mobile design topics interest you the most?
I’m really interested in
designing experiences that leverage mobile devices as location-aware sensors.
There’s something really powerful about the idea that the phones people carry
with them can act as sensors alerting people about interesting things in their
environments. Devices can know about the people you’ve been at places with, the
things you’ve done and shared… even the speed at which you’re moving. That
opens up the opportunity to build experiences that are even less disruptive
than the experiences we have now. Now, it’s still very much like, “Let me
open up Google Maps and get directions to go do such and such.”
Granted, this all has to be done with
the user’s privacy always kept front of mind, and I think the technology is
finally getting to a point where we can find that balance and design an
incredibly engaging augmented experience while respecting a user’s privacy.
Ultimately, I think we’ll settle into some place where people will feel
comfortable sharing more information than they are now, and I’m interested in
seeing the kinds of mobile experiences we can create based on that information.
It seems weird to think that in our
lifetime, we had computers in our homes that were not connected to a network,
but I can vividly remember that. But that’s something my daughter will never
experience. I think a similar change will happen with some of the information
sharing questions that we have today.
There’s a weird line, though. Those kinds of experiences
can get creepy super fast. I think the important thing to remember is that some
problems are human problems. They’re problems a computer can’t solve. I’m
definitely not one of those people who says stuff like, “We think phones
will know what you want to do before you want to do it.” I think there’s a
real danger in over-relying on the algorithm to solve human problems. I think it’s
finding the right balance of how you can leverage the technology to help
improve someone’s experience, but not expect that you’re going to
wholeheartedly hand everything over to a computer to solve. It’s a really
difficult dance to try and be the technology in between human beings. However,
no matter how far the technology goes, there’s always going to be that nuance
that needs to be solved by people.
Here’s a little fact that feels surprising: Today on our small blue planet,
more people have access to cell phones than to working plumbing. Think about
that. Primitive plumbing has been around for over a thousand years. Modern
working plumbing has been around for at least 200 years – far longer than the
fleeting few years since 1984, when Motorola first ripped the phone off the wall
and allowed us to carry it around. Most people find plumbing useful. Apparently
many millions more find cellular phones indispensable.
Because the things that are a big part of modern life – the Internet, video
games, search engines, smartphones, iPads, social networking systems, digital
wallet payment systems – are so useful that we can no longer imagine life
without them, we act as if they will forever be the way they are now. This
childlike instinct has its charms, but it is always wrong and particularly
dangerous for designers. People who
think deeply about the built world necessarily must view it as fungible, not
fixed. It is the job of thoughtful designers to notice the petty annoyances
that accumulate when we use even devices we love; to stand in the future and think
of ways to make it more elegantly functional, less intrusive, more natural, far
more compelling. In the best such cases, designers need to surprise us–by
radically altering what we think is possible. To create the futures we cannot
even yet imagine.
The future is a scary place replete with endless options, endless unknowns. Of
course, like everyone else, designers don’t have a crystal ball. There is a
constant risk that we will make assumptions which turn out to be either too
bold or too timid. Designers must rely instead on methods to think through
which evolutionary and revolutionary shifts are most likely–among an infinite
array of possibilities.
In The Mobile Frontier, Rachel Hinman has
tackled one of the most vital issues in the future of design: how will our lives change while we are on
the go? She has used her vast prior experience in working to shape the
future for Nokia, then added disciplined methods to do us four vital favors:
the structures of current and coming mobile interfaces…
Just as cars have gone through several design eras (remember tailfins?), The Mobile Frontier clarifies four
successive waves of strategies that make a device easier and more
pleasant to use. Whether you are a
designer or simply an enthusiast, this is a revelation. It shows how the
metaphors and strategies for using a device evolve as more
processing power, memory, and display capability become available.
patterns in how we behave when we are mobile…
When you observe people deeply enough you discover
something fundamental. While there are an infinite number of things people
theoretically might do with mobile devices, inevitably the real activities we
choose to do can be distilled into clear patterns with a few themes and
variations. The Mobile Frontier has made
these clear, so that the challenge of thinking about mobility becomes vastly
more interesting, more tractable and far easier to either improve or reinvent.
strategies for designing better mobile experiences…
Whenever we want to improve
or reinvent a category there are some methods that are better than others. The Mobile Frontier helps lay out active
design and prototyping strategies that make the otherwise daunting task of
building new interface alternatives likely to succeed instead of fail. This allows designers to proceed with
courage and confidence, knowing they can reliably imagine, develop and test
alternative interfaces, in order to get the future to show up ahead of its
regularly scheduled arrival.
about what will come next…
The Mobile Frontier bravely peers down a foggy, windy road to guess what lies
around the corner. This is a task always doomed to failure in detail, but
Rachel does a brilliant job of giving us the broad outlines. This is essential for helping us get past
the trap of merely filigreeing around the edges of the known, to instead
imagine the breakthroughs still to come.
Collectively, these four deep insights advance the
known boundaries of understanding today’s mobile devices and experiences. Thus
they help usher in the vastly new ones sure to emerge soon. Here’s why that
matters: we are only three decades into one of the most important revolutions
the world has ever seen. In design development terms, that is a mere blink. Just
as the mobile device world has zipped past plumbing like a rocket sled would
pass a slug, we simply must see ourselves at the very beginning of this
revolution. With mobile devices, we are today where autocars were when the
Model T was the hottest thing on wheels. We will see vastly more change than
most of us can possibly imagine. Through our mobile devices we will find new
advances in learning, security, community, interaction, understanding,
commerce, communication and exploration.
Rachel Hinman is helping us make all that come along
a little sooner, a lot easier, and far more reliably. See for yourself. Better
yet, join in. Get a move on. Oh, and
bring your devices. Let’s make ’em more amazing.
President and Co-Founder
Unlike personal computer experiences, which involve many physical
buttons like keyboard keys and mice with scroll wheels, most mobile touch
screen experiences involve interactions with nothing more than flat screens of
glass. While there are few physical buttons, the nature of touch screen
interactions is highly physical because they are explored through human hands.
Consequently, it’s important that touch screen layouts not only offer generous
touch targets, but also accommodate the ergonomics of fingers and thumbs.
Smartphones and the “Thumb Zone”
One of the great things about smartphones is that they’re designed to fit in
the palm of your hand – often resulting in one-handed use. This means touch
screen interfaces must not only be aesthetically pleasing, they should be
organized for the fingers, especially the thumb. The thumb is the finger that
gets the workout, which is why most major interface elements are located at the
bottom of the screen instead of the top.
Interfaces designed for the desktop experience typically follow
the design convention of placing major menu items across the top of the screen.
The reverse is true of mobile experiences. Major menu items of your mobile
experience should reside in “the thumb zone” – the area of the screen that is navigable using just a thumb.
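The thumb zone is a design heuristic rather than a precise specification, but a rough sketch can make it concrete. The Python sketch below models reachability as distance from the bottom corner where the hand grips the phone; the screen size and reach radius are illustrative assumptions, not published metrics.

```python
import math

# Rough sketch of the "thumb zone": the area within the thumb's reach,
# measured from the bottom corner where the hand grips the phone.
# All numbers below are illustrative assumptions, not published metrics.

SCREEN_W, SCREEN_H = 375, 667   # points, roughly a smartphone-class screen
THUMB_REACH = 480               # assumed comfortable reach radius, in points

def in_thumb_zone(x, y, right_handed=True):
    """Return True if the point (x, y) is comfortably reachable by the thumb."""
    pivot_x = SCREEN_W if right_handed else 0   # thumb pivots near a bottom corner
    pivot_y = SCREEN_H
    return math.hypot(x - pivot_x, y - pivot_y) <= THUMB_REACH

# A tab-bar item at the bottom center is reachable; a menu at the top is not.
print(in_thumb_zone(188, 620))   # True
print(in_thumb_zone(188, 40))    # False
```

By this model, major menu items placed along the bottom edge fall inside the zone for either grip, which is one way to see why mobile layouts invert the desktop top-menu convention.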
What about Tablets?
While tablets share many characteristics with smartphones (few physical
buttons, the user mostly interacting with a piece of glass), their ergonomic
considerations are quite different, mostly because one-handed use of a tablet
is very difficult. Instead, people use tablets in a variety of ergonomic
configurations – from curling up with one like a book, to holding it like a
clipboard, to propping it up in a kitchen while cooking. The variety of ways
people use tablets makes it difficult to recommend a single set of heuristics
about navigation and content placement.
Instead, it’s important to consider how mutual reconfiguration of the user’s body and the device occurs during tablet use. This
involves considering the ways a user will likely configure their body when
using a tablet application and placing the
primary navigation elements accordingly. Here are a few examples:
“Curling Up” Stance
For tablet experiences that encourage the “curling up” user stance, opt for navigation at the top and consider incorporating horizontal gesture controls.
“Convergence” is a word that’s floated around the vernacular of the mobile industry for as
long as I can remember. To be honest, I’m guilty of dismissing it. More often
than not, when people use the term I relegate it to the pile of meaningless
buzzwords nobody can quite define, along with the likes of “synergy.”
But the frequency with which I hear this word in recent times has become somewhat
alarming, leaving me to wonder… when people say “convergence,” what do they
actually mean? What does this word mean to me?
Some Thoughts on Convergence
When I think of convergence, shapeshifting comes to mind. Just like the Wonder Twins
transforming into “the form of” a convenient animal/water configuration that
will save the day, convergence is what enables experiences to shapeshift
between different devices and environments. Instead of being siloed and trapped,
experiences can move fluidly through multiple devices.
My thinking of late is that convergence actually occurs on three levels that are
separate but interrelated:
Technology convergence is when
a set of devices contain a similar technology, which enables experiences to
move across multiple devices. Examples: Wireless Internet or a software
platform like Android.
Media convergence is when
content/information is prismed through multiple devices or touchpoints. The
content and interactions often respond appropriately to the context
(smartphone vs. big-screen TV, etc.) – but the focus is on the throughline of
the content through the ecosystem of devices. Examples: Pandora, Netflix.
Activity convergence enables a
user to perform an activity regardless of the device. The key to this type of
convergence is figuring out how to allow users to complete a task or achieve their
goal in a way that is intuitive, given the high degree of variance between types
of devices and the vast number of use contexts. Examples: email, browsing the
Internet, looking up a restaurant on Yelp.
When I asked some friends at work what convergence meant to them, they referred
me to the video below.
What does convergence mean to you?
Please let me know in the comments below!
Imagine a squirrel running across your lawn. The movement of the squirrel’s spry legs
(considered the primary action) would be animated to express the light, nimble
nature of his gait. The agile, undulating movement of the squirrel’s tail –
considered the secondary action – would have a separate and slightly different
type of movement than his legs. The squirrel’s tail is an example of secondary
action – an animation principle that governs movement that supports the primary
action of an animation sequence without distracting from it. Secondary action is applied to
reinforce the mood or enrich the main action of an animated scene. The key to secondary action is that it should
emphasize, rather than take attention away from, the main action being animated.
(Caption: The primary action of this animation is the squirrel’s body and
legs moving. The shape and character of the squirrel’s tail as it moves is the
secondary action. The secondary action serves to reinforce the mood and
character of the primary action and is used to make the animation feel more natural.)
Mobile UX Secondary Action Example
(Caption: The transition that occurs when a user clicks a URL in an
email, activating the phone’s browser on an iPhone, is an example of secondary
action. The primary action is the browser window emerging forward into the user’s
view. The secondary action is the email view receding into the background. Both
actions occur simultaneously, but the secondary action of the email application
supports the primary action – opening a browser window.)
Secondary Action and Mobile UX
When used prudently, the subtle incorporation of secondary action can make the animation and transitions within your mobile experiences really sing. Subtlety is the key, though. It’s a natural novice tendency to go a little “nutso” when learning to integrate motion into your work. The principle of secondary action can help you edit your use of motion and prevent your experiences from feeling like a trip to a carnival’s fun house for users.
– Support, not upstage. Secondary action should reinforce the primary action, not detract from it.
– Subtlety is key. If the secondary action/movement is competing with the primary animation, the motion phrase will feel superfluous or confusing for the user. Think squirrel tail 🙂
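One way to see "support, not upstage" is in numbers. In this sketch (plain Python, with scale values that are illustrative assumptions, not taken from any real toolkit), the primary and secondary actions share the same timing but have very different amplitudes, so the secondary motion never competes:

```python
# Primary action: the browser window grows forward. Secondary action: the
# email view recedes slightly. Same timing, much smaller amplitude for the
# secondary motion - the scale values here are illustrative assumptions.

FRAMES = 10

def primary_scale(t):
    """Browser window grows from 50% to 100% over the transition."""
    return 0.5 + 0.5 * t

def secondary_scale(t):
    """Email view shrinks only slightly, from 100% to 90%."""
    return 1.0 - 0.1 * t

for i in range(FRAMES + 1):
    t = i / FRAMES
    print(f"t={t:.1f}  primary={primary_scale(t):.2f}  secondary={secondary_scale(t):.2f}")
```

Keeping the secondary amplitude a small fraction of the primary's is one concrete way to honor the squirrel-tail rule.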
What examples of secondary action in mobile UX have you seen?
Objects don’t move through space at random. Instead, they move along relatively
predictable paths that are influenced by forces such as thrust, wind resistance, and gravity.
The outline of a sparkler on the Fourth of July or skid marks on the pavement
from a braking car are rare examples of the physical traces of these paths. Usually
an object’s trajectory is invisible. While these paths lie largely unseen by
the human eye, patterns exist for trajectory paths based on whether an object
is organic or mechanical. Objects that are mechanical in nature, such as cars,
bicycles, and trains, tend to move along straight trajectories, whereas organic
objects such as plants, people, and animals tend to move along arched trajectories. The object you wish to
animate should reflect these characteristics of movement for greater realism.
(Caption: An object’s trajectory lies largely
unseen except in rare occasions, such as the glowing sparks of a lit sparkler
that traces the path of where it’s been.)
When integrating motion into a mobile experience, it’s
important to consider whether the object being animated should reflect organic
or mechanical qualities. If the object possesses organic qualities, the arc
animation principle suggests the object should move along an arched trajectory.
An object that is mechanical in nature would move along a straight or angular one.
(Caption: The animation used to express the
motion of elements such as fish and water in the iPhone application Koi Pond moves along arched trajectories,
giving the experience an organic feeling. The interface elements in an iteration of the Android mobile
platform tend to move along straight trajectories, giving the UI a mechanical feeling.)
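As a sketch of the arc principle, the two generators below (hypothetical helpers, not from any animation library) produce a straight "mechanical" trajectory and an arched "organic" one between the same endpoints:

```python
import math

def mechanical_path(start, end, steps):
    """Straight-line trajectory: equal linear steps from start to end."""
    (x0, y0), (x1, y1) = start, end
    return [(x0 + (x1 - x0) * i / steps, y0 + (y1 - y0) * i / steps)
            for i in range(steps + 1)]

def organic_path(start, end, steps, lift=40):
    """Arched trajectory: same endpoints, but bowed upward mid-flight.
    (Screen coordinates: smaller y is higher, so the arc subtracts from y.)"""
    (x0, y0), (x1, y1) = start, end
    points = []
    for i in range(steps + 1):
        t = i / steps
        arc = lift * math.sin(math.pi * t)   # zero at the ends, peaks midway
        points.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t - arc))
    return points
```

Animating a leaf or a fish along `organic_path` and a sliding panel along `mechanical_path` gives each the character the principle describes.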
Yesterday I taught a workshop about the ways and means of mobile prototyping at UX Australia in Sydney. Great fun. Thanks to all who attended.
Whether it’s a car peeling out from a dead
stop, or a sprinter bursting out of the blocks and making tracks in a race,
objects need time to accelerate and slow down. The sixth animation principle,
slow in and out, deals with the spacing required to accurately depict the
inherent law of inertia that governs all objects and people. A
strategy for accurately depicting this type of motion when creating an
animation is to include more frames of the object near the beginning and end of
a movement, and fewer in the middle. This principle goes for characters moving
between two extreme poses, such as sitting down and standing up, but also for
inanimate, moving objects, such as a bouncing ball.
(Caption: A strategy for accurately
depicting the laws of inertia that govern most objects is to include more
frames of the object near the beginning and end of a movement, and fewer frames
in the middle.)
While the experiences we create for mobile UX often live in another
world – the world behind the glass of our mobile device – allowing some of the
laws of physics to exist in that world makes those experiences more relatable
to users. Whether it’s a subtle timing difference in how a list view of data
scrolls, or the transition between two applications, the principle of slow in
and out (more frames at the beginning and end of a movement) will help your
animations feel more natural and intuitive.
(Caption: The principle of slow in and out is applied to the scrolling lists of many mobile
UIs. There are more frames at the beginning and end of the movement. This
effect makes the UI appear as if it is governed by the laws of inertia.)
(Caption: There are more frames at the beginning and end of the scrolling transition of the
home screen of the iPhone, making the application icons’ movement feel more
natural and intuitive.)
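In code, "more frames at the beginning and end" is an easing curve. This sketch uses a standard cubic ease-in-out (a common formulation, not tied to any particular mobile platform) and shows that the frame-to-frame distance is smallest at the ends of the movement:

```python
# Cubic ease-in-out: maps time t in [0, 1] to progress in [0, 1].
# With evenly spaced frames in time, progress changes least at the start
# and end - i.e., more frames cover the ends of the movement.

def ease_in_out(t):
    return 4 * t ** 3 if t < 0.5 else 1 - (-2 * t + 2) ** 3 / 2

FRAMES = 10
positions = [ease_in_out(i / FRAMES) for i in range(FRAMES + 1)]
deltas = [b - a for a, b in zip(positions, positions[1:])]

# The first and last steps are tiny; the middle step is large.
print(round(deltas[0], 3), round(deltas[5], 3), round(deltas[-1], 3))
```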
Imagine a big dog with giant jowls – like a Boxer or a Bulldog – shaking his head side to side.
The dynamic movement of the flabby skin on his face as he shakes his head to
and fro is an example of the fifth animation principle: follow through and overlapping
action. While anticipation is the preparation of an action, follow through deals with the end of an action. Actions rarely
come to a sudden and complete stop, but are generally carried past their
endpoint. Follow through captures how parts of an
object continue to move even after other parts of that object have stopped moving.
(Caption: Follow through captures how parts of an object (like
the dog’s jowls) continue to move even after other parts of that object (like
the dog’s head) have stopped moving.)
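Follow through can be sketched numerically as an "overshoot" easing curve: the animated value carries past its endpoint before settling, just as the jowls carry past the head. Below is a standard ease-out-back formulation (the constant 1.70158 is a conventional choice, not from any specific mobile framework):

```python
def ease_out_back(t, c=1.70158):
    """Progress curve that overshoots past 1.0, then settles back at t = 1.
    Like the dog's jowls, part of the motion carries past the stopping point."""
    return 1 + (c + 1) * (t - 1) ** 3 + c * (t - 1) ** 2

# Sample the curve: it exceeds 1.0 partway through, then returns to 1.0.
peak = max(ease_out_back(i / 100) for i in range(101))
print(round(peak, 3), ease_out_back(1.0))
```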
Now imagine that very same dog walking down a sidewalk
with his owner. The dog’s entire body is moving, but different parts of his
body are moving at different rates. The timing of his legs is different than
the timing of the movement of his tail, or head. Overlapping action is the animation principle that captures how
parts of an object move at different rates. Capturing the nature of the movement as well as the slight
variations in timing and speed of these parts makes objects seem more natural.
An action should never be brought to a complete stop before starting another
action. Overlapping maintains a continual flow between whole phrases of action.
A user interface is a collection of small elements that
work together to form a system. Just as parts of a dog’s body move at different
rates, the elements within a UI should move with different yet
appropriate timing. While UI elements of a mobile experience should work
together to form a whole, the principles of follow through and overlapping
action can help define and communicate the nature of the relationships between UI elements. Follow through and
overlapping action are subtle principles that can help express how elements of
a UI interrelate to each other with the use of movement.
Mobile UX Follow Through and Overlapping Action Examples:
(Caption: The transition animation to and from the dynamic tiles experience on
Windows Phone 7 employs the principle of overlapping action. The tiles do not travel
as one unit; rather, each tile moves at a different rate.)
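In a UI, overlapping action often comes down to staggered start times. In this sketch (with illustrative durations, not measurements of any real tile UI), each tile starts slightly after the previous one but before it finishes, so the group reads as related parts of one flowing whole rather than a single rigid unit:

```python
# Staggered tile animation: each tile starts a bit later than the previous
# one, but before the previous one finishes - overlapping action.
# STAGGER and DURATION are illustrative assumptions.

STAGGER = 0.05    # seconds between successive tiles starting to move
DURATION = 0.30   # each tile's own animation length, in seconds

def tile_timing(index):
    """Return the (start, end) time of the animation for one tile."""
    start = index * STAGGER
    return start, start + DURATION

timings = [tile_timing(i) for i in range(4)]
for i, (start, end) in enumerate(timings):
    print(f"tile {i}: starts at {start:.2f}s, ends at {end:.2f}s")
```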