Eurostar Card Sorting Case Study

    Etre

    In September 2005, our company—Etre—was engaged by Eurostar to help redevelop its global web presence. Eurostar is the high-speed train service that connects the UK with mainland Europe and has been named “World’s Leading Rail Service” at the World Travel Awards every year since 1998.

    The project was extremely ambitious in nature—the main objectives being to make significant improvements to the usability of the company’s various websites; to introduce a host of new travel booking features; and to incorporate a new global brand identity—in a timeframe of just six months.

    An important part of the programme involved assisting Eurostar in redesigning its global information architecture—a taxonomy that serves not only as the backbone of the company’s main website, www.eurostar.com, but as a template for the company’s 27 country- and region-specific websites too. In order to ensure that the redesigned version was as user-friendly as possible, we decided to make card sorting an integral part of the redevelopment process.

    Playing our cards right

    Eurostar’s websites contained more than 11,000 different webpages and PDFs (and that was just in English!). This presented us with a real challenge. Something told us that users would find the prospect of sorting 11,000 cards a little overwhelming, so we figured that we would have to get this number down to something more bearable. Luckily, our initial mandate was to focus on the high-level structure of the global information architecture only—deeper levels would be addressed in subsequent projects, once the overarching classification scheme had been nailed—this made our job a little easier.

    We began by inventorying the content found at the top few levels, using a spreadsheet to capture the details of each item and its position within the information hierarchy. At the end of this process, we’d logged about 500 items, which gave us a good steer as to the types of content that were available in Eurostar’s various sites.
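    An inventory like this needn't be elaborate. As a minimal sketch (the items, paths and formats below are hypothetical), each row simply records an item's title, its position in the hierarchy and its format, from which its depth can be derived:

```python
import csv
import io

# Hypothetical inventory: one row per content item, with its position
# in the hierarchy recorded as a slash-separated path.
inventory = """title,path,format
Timetables,Travel Information/Timetables,webpage
Route Maps,Travel Information/Route Maps,webpage
Luggage Policy,Travel Information/At the Station/Luggage,pdf
"""

rows = list(csv.DictReader(io.StringIO(inventory)))
for row in rows:
    # Depth within the hierarchy = number of path segments
    row["level"] = row["path"].count("/") + 1

print([(r["title"], r["level"]) for r in rows])
# → [('Timetables', 2), ('Route Maps', 2), ('Luggage Policy', 3)]
```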

    We next held a series of workshops with Eurostar’s business stakeholders to select the most important items from our spreadsheet for inclusion in the card sort. We chose to define “most important” as items that were either “most important to users” or “most important to the business” (in terms of fulfilling its commercial objectives); rather than as those “most representative of the site as a whole”. We excluded items from any area that was deemed to be of low value, in order to ensure that the study was focussed on understanding the content that really mattered. With this in mind, we also omitted items from well-defined areas like “News”, “About Us”, “Customer Service” and “Contact Us”. These areas are common to the majority of websites we’ve worked on, and rarely differ dramatically. As such, we felt confident that we’d be able to design them well, without input from users. (We’ve actually created a set of standardised “blueprints” that we use to give us a head start when constructing these areas for clients. These blueprints have been user-tested many times to ensure that they are user-friendly).

    At the end of this content selection process, we ended up with about 80 cards for our users to sort—a large, but manageable number.

    Recruiting the right sorters

    It is often said that user testing your website with one person is better than testing it with no one at all. The same is true of card sorting. That said, however, in our experience, your results are only as good as your recruitment. If you don’t take the appropriate level of care in vetting and selecting your users, you can end up redesigning your site based on the input of people who will never need to use it. This is obviously less than ideal. Thus, we always make sure we recruit participants that are as representative as possible of the real-life end-users of the websites we test.

    In order to gather meaningful data during our Eurostar study, representatives of three demographic groups were sought: “Inexperienced users”, “Experienced users” and “Business users”. These groups were provided to us by Eurostar’s marketing department and, while their names might make them seem slightly woolly, each one had in fact been very tightly defined based upon extensive market research and ethnographic data. Indeed, each group was defined as a function of between 30 and 40 different demographic variables that took into account age range, socioeconomic status, job title, internet / computer use, frequency of travel, patterns of travel behaviour and so on. At a high level, however, “Inexperienced users” were defined as people who used the internet infrequently and did not regularly book travel online; “Experienced users” used the internet daily, had registered to use a Eurostar website and often booked travel online; and “Business users” used the internet daily, regularly travelled abroad and were members of Eurostar’s Frequent Traveller reward scheme. “Inexperienced users” were selected from Etre’s participant recruitment database, while “Experienced users” and “Business users” were selected from Eurostar’s customer database. All were screened via an online questionnaire based on the criteria outlined above.

    180 participants were recruited in total—20 per demographic group [1] in each of the company’s three main target markets: UK, France and Belgium. As the majority of Eurostar’s revenue is generated from UK residents, it might have been tempting for the company to focus its design efforts around the needs of UK users alone. However, to its credit, Eurostar insisted upon involving representatives from other operating countries. (After all, it would have been extremely unwise to have redesigned a global information architecture based on the feedback provided by users of a single country).

    Involving French and Belgian users did, of course, increase our preparation time, as we had to translate all of our card sorting materials into both French and Flemish, but it was well worth it. These users proved to be extremely important in helping us get to grips with various localisation and translation issues, and involving them proved to be essential in helping us garner support for the study from business stakeholders in these countries (since the redesign work arising from the study would have a major influence on the composition of their sites, they were naturally very pleased to have been involved in the project).

    Choosing our weapons

    One of the main disadvantages of conducting a traditional card sort is that it can take a great deal of time to complete. Inviting participants to attend facilitated one-to-one sessions at your premises—or visiting them in their homes and workplaces—isn’t always cost-effective either, particularly when you’re testing large numbers of users. In order to address these limitations, we’ve developed our own web-based card sorting application, allowing us to conduct the whole process online. This enables us to collect data from participants over the internet, without the need or expense of conducting in-person interviews or visiting users in situ. [2]

    As our study involved 180 participants in three different countries, our online application proved invaluable. It saved us, and our client, thousands of pounds in travel and accommodation expenses; and it meant that we were able to deliver the whole project in just three weeks (one week to recruit participants; another to test them; and one more to analyse the data and write up the results).

    We chose an open card sort, rather than a closed sort, because we wanted to find out how users grouped Eurostar’s content, rather than imposing groups upon them. In our experience, closed card sorting is best suited to situations where you either need to validate an information architecture with end-users or add content to an information architecture that has already been validated with end-users. Eurostar’s pre-existing global information architecture met neither of these criteria. It hadn’t been validated with end-users previously and, as we only had time to complete a single round of card sorting, we decided that attempting to validate it at this point was too risky. After all, if it proved invalid, we’d have had little information to go on with respect to redesigning it (all we would have known is that the pre-existing classification scheme didn’t work).

    Eurostar’s pre-existing global information architecture had been constructed without user input and, as such, could only be said to represent the business’ best guess as to how users would expect travel information to be organised. We wanted to find out how Eurostar’s users understood its content, without influencing them in any particular direction—thus open card sorting was the only option.

    Covering our backs

    As mentioned previously, our card sorting study was only one component of a much larger redesign programme. As such, a number of other user experience activities were running in parallel with it—one of which was an extensive usability testing exercise involving the British, French and Belgian Eurostar sites (also conducted by Etre). The results of this activity were crucial in validating and substantiating the findings of our card sorting work. An obvious disadvantage of online card sorting is that you aren’t there to see how users pull groups together or to ask them questions about the categorisation systems they employ. Thus, a degree of fidelity is lost. However, by running in-person usability testing at the same time, we had an opportunity to pick up on questions raised during our card sorting and ask them of users in the flesh. This helped us to fill in many of the gaps in our analysis.

    In addition to usability testing, we also conducted an online survey that asked users in the UK, France and Belgium to define key terms used within their respective country’s website. This was equally useful, as one of the things that card sorting doesn’t always make explicit is terms that users don’t understand. Sometimes users will put any items they are unsure of in any old group—just to put them somewhere. This can be very hard to pick up on (especially when card sorting is conducted online). And even when users create explicit “Don’t Know” groups, you can be left with questions. For example, it isn’t always clear what “Don’t Know” means. While you might be tempted to think that it means “I don’t know what the terms in this group mean”, it could just as likely mean “I know what these terms mean, I just don’t know where to put them”. The distinction between the two is critical.

    By conducting usability testing and an online survey alongside our card sorting study we were able to cover all our bases (or as many as possible, at least). We were able to synthesise what we learnt during the card sort with what we’d learnt from these other activities to get a complete picture of the usability of Eurostar’s global information architecture. We weren’t reliant on card sorting to provide us with a silver bullet solution.

    Analysing the results

    Once all users had completed our card sorting exercise, we were able to begin our analysis process. If we had run the study in person or asked users to send us their results by email or post, we would probably have gone crazy at this point. The mere thought of having to enter 180 participants’ worth of data is enough to make grown men weep (especially when you consider that each participant had sorted 80 cards—that’s 14,400 items to catalogue!). Thankfully, as we’d used our online application to run the study, this laborious data preparation step had already been taken care of. Thus, when our study was complete, all of the data was right there—in the right format—waiting for us to begin our analysis. Phew!

    First off, we examined the group names that users had created and straight away we found something interesting: inexperienced users tended to have grouped content chronologically, while experienced users had grouped it in line with established web conventions.

    Yes, inexperienced users had grouped content based around activities performed at certain points during the travel booking experience. Their logic seemed to have worked as follows: “I’d start my journey by planning it, then I’d book it, then I’d prepare to go (by obtaining a visa, packing my stuff, and so on), then on the day of travel I’d go to the station, then I’d actually travel…and finally, I’d arrive at my destination”. This mode of thinking resulted in them creating group names like: “Planning Your Travel”, “Booking Your Travel”, “Before You Go”, “At the Station”, “During Travel” and “At My Destination”.

    By contrast, experienced users had grouped content around existing online conventions. This mode of thinking resulted in group names like: “Buy”, “About Us”, “Manage Your Account”, “Help”, “News” and “Latest Offers”. Their past online experience had clearly influenced the categories they created, as these group names are similar to those found on many other ecommerce websites.

    This presented an interesting challenge—the redesigned version of Eurostar’s information architecture would need to hold inexperienced users’ hands as they made their way through the “travel booking lifecycle”, while at the same time supporting experienced users by using established web conventions to ensure an intuitive, familiar “website experience”.

    Having analysed the group names, we then used our online tool’s clustering capabilities to combine the data obtained from all participants so as to uncover the single best categorisation across the board.
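    Clustering of this kind starts from a co-occurrence matrix: for every pair of cards, a count of how many participants placed them in the same group. A minimal sketch of that aggregation step, with two invented participants:

```python
from collections import defaultdict
from itertools import combinations

def co_occurrence(sorts):
    """Count how often each pair of cards landed in the same group.

    `sorts` holds one dict per participant, mapping the group names
    that participant created to the cards they placed in each group.
    """
    counts = defaultdict(int)
    for groups in sorts:
        for cards in groups.values():
            for a, b in combinations(sorted(cards), 2):
                counts[(a, b)] += 1
    return dict(counts)

# Two hypothetical participants sorting four cards
sorts = [
    {"Planning": ["Route Maps", "Timetables"],
     "Booking": ["Book Now", "Book a Hotel"]},
    {"Plan": ["Route Maps", "Timetables", "Book a Hotel"],
     "Buy": ["Book Now"]},
]
counts = co_occurrence(sorts)
print(counts[("Route Maps", "Timetables")])  # → 2 (both participants paired them)
```

    Dividing each count by the number of participants gives the agreement rate for a pair of cards, which clustering algorithms then convert into a distance.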

    Like all methods of analysing card sorting data, clustering analysis has one or two flaws (as described elsewhere in this book). However, we find it to be extremely useful when analysing data from large numbers of participants, as it can reveal patterns that would be undetectable if you were to sift through each participant’s data manually.

    Different clustering algorithms can produce different results, so we’re always careful to run several and look for the consensus view. During this particular project we ran several hierarchical clustering algorithms (including single linkage, complete linkage and average linkage) and a partitional clustering algorithm (k-means) for good measure. We also took a random sample of about 50 users and examined their groupings manually—just to make sure that clustering wasn’t leading us down the wrong path.
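    Our own tool performs this analysis internally, but the same comparison of linkage methods can be sketched with SciPy’s hierarchical clustering routines (the cards and co-occurrence counts below are invented for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical co-occurrence counts for four cards across 20 participants
cards = ["Route Maps", "Timetables", "Book Now", "Book a Hotel"]
co = np.array([
    [20, 18,  2,  3],
    [18, 20,  1,  4],
    [ 2,  1, 20, 16],
    [ 3,  4, 16, 20],
])
n_participants = 20

# Cards grouped together often should sit close together, so convert
# agreement into a distance: 0 = always together, 1 = never together.
dist = 1.0 - co / n_participants
np.fill_diagonal(dist, 0.0)
condensed = squareform(dist)  # SciPy expects a condensed distance vector

# Run several linkage methods and compare the two-cluster cut of each
for method in ("single", "complete", "average"):
    labels = fcluster(linkage(condensed, method=method), t=2, criterion="maxclust")
    print(method, dict(zip(cards, labels)))
```

    Where the methods disagree about which cards belong together, that disagreement is itself informative—it usually flags cards that participants were genuinely ambivalent about.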

    The results of our clustering analysis revealed more important insights. For example, users had made a clear distinction between planning- and booking-related items. Items like “Route Maps”, “Timetables” and “Passports and Visas” appeared in one group; while items like “Book Now”, “Purchase Upgrades” and “Book a Hotel” appeared in another. This was interesting as, in the pre-existing global information architecture design, Eurostar had merged these two groups, displaying both types of information in a single, unified “Plan Your Travel” area.

    We were also surprised to see that “Special Offers” and “Promotions” were often grouped with items relating to Eurostar’s Frequent Traveller reward scheme. It appeared that users felt that special offers were only available to members of this loyalty program.

    Finally, we were happy to find that some of the groups in Eurostar’s pre-existing global information architecture were validated. Users consistently grouped cards like “Paris”, “London”, “Rome” and “Prague” together, for example, usually under the title of “Destinations”. This was encouraging as this was exactly how they were grouped in the information architecture at that time. [3]

    Having concluded our analysis, we fed back our findings to Eurostar’s business stakeholders in the form of a comprehensive report. Per recommendations made elsewhere in this book, we chose not to use dendrograms (or indeed, any other complex diagrams) when presenting our findings. We simply listed the contents of each cluster, which made our results far easier to digest. These listings were supplemented with detailed interpretation of the issues we’d uncovered and a series of actionable recommendations for addressing them.

    We also presented the report in person at Eurostar’s premises and held a Q&A session with attendees thereafter. In our experience, clients sometimes find card sorting to be something of a technical, esoteric—and therefore confusing—subject. Meeting them face-to-face gives us a chance to tackle any queries they have and ensure that our findings are understood. This, of course, significantly improves the likelihood that our recommendations will be taken forward in their redesign work.

    Redesigning the information architecture

    Using our report as a basis, Eurostar’s design team produced a revised version of the global information architecture, mapping the company’s 11,000+ content items to new homes within it. This redesigned structure was then translated, localised and incorporated into all of Eurostar’s international websites in time for their re-launch in March 2006. Many of our recommendations made the grade:

    Information was grouped chronologically in certain areas to assist inexperienced users, while standard web conventions like “Your Account”, “Customer Care”, “About Eurostar” and “Contact Us” appeared as top level content channels to make experienced users feel right at home.

    Booking- and planning-related information was separated into distinct top level groups labelled “Book Online” and “Travel Information”; while groups that were validated by our card sorting—like “Destinations”, for example—were retained without modification. Separate areas labelled “Latest Deals” and “Frequent Traveller” were also included to help users recognise that special offers and promotions weren’t necessarily tied to Eurostar’s customer loyalty scheme.

    In May 2006 we conducted a second card sort to verify that the redesigned global information architecture had improved findability for users. (Ideally, we’d have conducted this study before the redesigned global information architecture had gone live, however time constraints had rendered this impossible). This study was identical in composition to its predecessor—involving 180 users across the UK, France and Belgium and the same deck of cards. However, this time around we chose to employ a closed card sort, as opposed to an open one, in order to validate the redesigned classification system. We therefore gave users the five groups that comprised the top level of the redesigned global information architecture—i.e. “Book Online”, “Travel Information”, “Destinations”, “Latest Deals” and “Frequent Traveller”—and asked them to place cards within them.

    Overall, the results of this second sort were excellent. Indeed, four of the five top-level groups were validated—namely, “Book Online”, “Destinations”, “Latest Deals” and “Frequent Traveller”—with the vast majority of users placing the cards that belonged to these groups in the right places. However, we did find a few kinks that needed to be ironed out. “Travel Information”, for example, proved to be something of a catchall group for users, who placed cards of all varieties therein—this is unsurprising when you consider that every piece of information found on Eurostar’s site is essentially “travel information”. This area has therefore been scheduled for revision as part of a subsequent redesign project.
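    Scoring a closed sort like this comes down to a simple agreement calculation: for each card, what share of participants placed it in the group we intended? A minimal sketch (the placements below are invented):

```python
from collections import Counter

def placement_agreement(placements, intended):
    """For each card, the share of participants who placed it in its
    intended top-level group during a closed sort."""
    rates = {}
    for card, group in intended.items():
        votes = Counter(p[card] for p in placements)
        rates[card] = votes[group] / len(placements)
    return rates

# Hypothetical closed-sort results: each dict maps card -> chosen group
placements = [
    {"Paris": "Destinations", "Book Now": "Book Online"},
    {"Paris": "Destinations", "Book Now": "Travel Information"},
    {"Paris": "Destinations", "Book Now": "Book Online"},
    {"Paris": "Destinations", "Book Now": "Book Online"},
]
intended = {"Paris": "Destinations", "Book Now": "Book Online"}

rates = placement_agreement(placements, intended)
print(rates)  # → {'Paris': 1.0, 'Book Now': 0.75}
```

    A low agreement rate flags a card whose home isn’t obvious—exactly the kind of signal that led us to earmark “Travel Information” for further work.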

    Measuring the impact

    As mentioned previously, our card sorting activities were only one component in a much larger programme of user experience work that involved overhauling almost every aspect of the design of Eurostar’s global web presence. This makes it difficult to separate the impact of card sorting from the impact of the other activities that comprised the project when discussing performance. That said, in the year since the redesigned site launched, Eurostar’s online revenues grew from £110 million to £136 million (an increase of 24%, or £26 million!). Furthermore, while online sales accounted for 23.7% of Eurostar’s total sales in 2005 (i.e. sales from all channels including online, offline and third-party travel agents), this figure increased to 25.9% in 2006. Email and call volumes fell by about a third during the same period, delivering massive cost savings for the company to boot. The redesigned global information architecture presumably played an important role in helping to achieve these remarkable results.

    Eurostar’s Head of eCommerce and Relationship Marketing, Jonathan Davies said: “Our relationship with Etre has been one of the key contributors to the ongoing success of Eurostar.com. A rigorous approach to user-centred design coupled with their commercial understanding has delivered consistently high standards of insight and recommendations. I would not hesitate to recommend Etre to any company looking to take their online channel to new heights of performance”.

    Footnotes

    1. We chose to test 20 users per group because 15-20 participants is the minimum number recommended by usability experts in order to obtain a reliable set of data. (This number helps cancel out the variability in people’s mental models and the vocabulary they use when labelling their card groups).

    2. We’re currently developing our card sorting application—Sort My Info—for public release.

    3. If you’re familiar with Eurostar, you’ll know that the company serves neither Rome nor Prague. You’ll therefore be wondering why we included cards like these in our study. Well, since many of our participants were recruited from Eurostar’s own customer database – i.e. our “Experienced users” and “Business users” – we feared that they might be tempted to log on to www.eurostar.com and group their cards in alignment with the site’s pre-existing information architecture design. This would have biased our results, so we decided to include a small number of “dummy” cards – representing items that weren’t present in Eurostar’s sites at the time of testing, but were consistent with the types of content that the company provides – to disrupt them. These cards prevented participants from seeing a one-to-one relationship between the cards in our deck and the contents of the pre-existing sites, and in doing so, reduced the chances of them adopting a “copycat” strategy.
