Conference Program

Session Type:
Time Zone:

Asking after the dynamics of artificial intelligence’s extraordinary recent rise recalls Hemingway’s famous line about going bankrupt: “Two ways. Gradually, then suddenly.” That combination characterizes the emergence of many technologies, of course. It creeps up on us, and then is suddenly moving at speed, everywhere. This makes it hard for cities and places to work with the grain of tech in order to produce equitable or sustainable outcomes. Tech asks us to step back and ask the deeper questions lurking behind all the noise, though we rarely do.

In this talk I’ll describe how everyday technologies, digital and physical combined, define how we live together; how they tend to articulate what we stand for as a society, or how our cities work — and what’s on the table now. Drawing together inspiring projects and cases ranging from Norway to Japan, and from new cities to reimagined regions, I’ll suggest how we might align design practices in order to address our contemporary shared challenges, like climate breakdown, social cohesion, and sweeping demographic changes. As AI moves beyond shuffling playlists or improving grammar and starts coordinating mobility, energy, and water systems, or how housing is allocated or buildings are made, we must rapidly figure out how design, governance, and community best understand and take advantage of these new distributed, decentralised and collaborative technologies. In doing so, we might well challenge our preconceptions of technology, economy, and community themselves.

Break

It seems like every company is adding a conversational AI chatbot to their website lately, but how do you actually go about making these experiences valuable and intuitive? Savannah Carlin will present a case study on a conversational AI chatbot—Marqeta Docs AI—that she designed for a developer documentation site in the fintech industry. She will share her insights, mistakes, and perspectives on how to use AI in a meaningful, seamless way, especially for companies like Marqeta that operate in highly regulated industries with strict compliance standards.

The talk will use specific examples and visuals to show what makes conversational AI interactions uniquely challenging and the design patterns that can address those challenges. These include managing user expectations, handling errors or misunderstandings within the conversation, and ensuring that users can quickly judge the quality of a bot’s response.

You’ll gain a deeper understanding of the intricacies involved in designing interactions for AI, along with practical advice you can apply in your own design processes.

Take-aways

  • What to consider before you add AI to your product to ensure it will be valuable, usable, and safe for its intended workflows
  • The interactions that are unique to conversational AI experiences and the design patterns that work for them
  • Common challenges in designing conversational AI experiences and how to overcome them

Break

Following the emergence of Generative AI as a potential revolution in the UX field, a wave of AI-driven tools has arisen to enhance the efficiency of UX research, including data analysis. Qualitative data analysis conventionally relies on human intelligence to discern patterns, establish connections, and derive actionable insights and frameworks. Many studies have compared the quality of qualitative analyses generated by humans with those produced by AI language models like ChatGPT (Hamilton et al., 2023).

Despite the undeniable appeal of automation and speed, there is ongoing debate about AI’s ability to replace human intelligence in qualitative analysis, a replacement that seems unlikely at this moment. The question, then, is: To what extent can AI contribute to qualitative data analysis?

In this case study, I delved into thematic analysis and the post-analysis stage, i.e., synthesizing insights into a framework. A framework, in this context, is a conceptual structure that illustrates the components of a human experience and how those components interconnect and operate within the structure: a concise model that encapsulates the entirety of the research insights.

The topic of my case study is “trust relationships between job seekers and hirers in the marketplace,” aligning with the business focus of my company. From my secondary research, I found that ChatGPT needed multiple rounds of training using diverse prompts to conduct precise and comprehensive thematic analysis. ChatGPT can execute fine-quality thematic analysis with the help of the right prompts, yet it falls short of replacing human intelligence when synthesizing insights and crafting frameworks for engaging narratives.

Its limitation lies in lacking depth of contextual understanding within a company, such as understanding what’s missing from the company’s mainstream discourse in order to create a human-centered story from data analysis. Crafting a framework that delivers good storytelling and organizational impact requires the researcher’s introspection into knowledge gaps in the specific organizational context. Thus, the best practice is to combine human interpretation with AI production. In my talk, I will demonstrate several principles to guide this practice.

Takeaways

We’ll cover principles of how to employ ChatGPT in qualitative analysis, specifically focusing on its application in synthesizing and crafting frameworks that convey compelling and insightful narratives:

  • Effectiveness of ChatGPT in thematic analysis: Learn about my process of training ChatGPT to conduct precise thematic analysis. You’ll gain insights into the capabilities and limitations of ChatGPT in providing accurate and comprehensive analysis for framework construction
  • Value of human potential: We’ll address the value of human self-reflection and the ability to interpret organizational context when crafting engaging frameworks
  • Comparison between human and ChatGPT: By comparing human-driven outcomes against ChatGPT’s in qualitative analysis, you’ll see how effectively the researcher and ChatGPT each generate synthesized frameworks
  • Collaboration between human and ChatGPT: You’ll also learn when and how to incorporate human interpretation with ChatGPT to achieve the best practice in qualitative analysis and synthesis

Long Break

Take a break to grab a bite, stretch your legs, chat with speakers and attendees in the conference Slack, or even take a nap. Just remember to set your alarm and be back in time for the second half of Design in Product!

In this talk, I’ll delve into how AI has helped to catapult me from a researcher to a strategist—a “super researcher” if you like. AI tools have streamlined my research processes, enhanced stakeholder communication, and freed up more time for strategic thinking. I’ll demonstrate how rapid data synthesis and efficient communication via AI have led to faster project turnarounds, allowing me to build better relationships with stakeholders and giving me more time to think and move more strategically in my company. I’ll illustrate the significant time-saving advantages of AI integration in UX research, positioning it as a crucial tool for staying ahead in a fast-paced tech world.

Takeaways

  • Strategies for using AI to expedite UX research processes
  • Techniques for leveraging AI in enhancing stakeholder communication
  • Insights into transforming data analysis into strategic UX decision-making
  • Demonstration of the efficiency of AI in UX project turnaround
  • Understanding of AI’s role in maintaining agility in fast-paced tech environments
  • Practical tips on integrating AI tools into UX research workflows

In this session, I will explore the groundbreaking approach taken by the Austin Public Library’s Innovation Lab in integrating AI with GitHub for UX enhancement. Our primary challenge was to streamline and enrich the documentation process, adhering to the complex guidelines of the U.S. Web Design System (USWDS) while ensuring the content was accessible and user-friendly.

Leveraging AI, we developed a system that not only automates the creation of diverse documentation formats like Markdown, but also ensures alignment with USWDS standards. This innovative method significantly improved the efficiency of our documentation process, making it more coherent and navigable. A key highlight is our recent revamp of the Breadcrumbs navigation strategy, demonstrating AI’s role in enhancing user experience and interface design. The session showcases how AI can transform UX practices, offering practical insights for UX professionals looking to leverage AI in their work.

Takeaways

  • Insight into the innovative use of AI for streamlining documentation processes, adhering to USWDS guidelines in a public library setting
  • Practical examples of how AI can transform UX practices, with a focus on creating efficient, user-friendly, and standards-compliant documentation
  • Demonstration of the AI-assisted update of the Breadcrumbs navigation strategy, showing tangible improvements in UX design and user navigation
  • Strategies for leveraging AI to enhance collaboration among UX designers, developers, and content creators, fostering a more integrated and effective workflow
  • Tips for utilizing tools like GitHub Pages for rapid deployment and continuous integration, highlighting the role of AI in accelerating project development

In this high-pressure scenario, the challenge was to conduct 17 user interviews in three days and synthesize a comprehensive report in just one additional day. I’ll explore how we used AI to streamline the research process, from transcription to synthesis, and how tools like ChatGPT contributed to efficient data processing and insight generation. We’ll reflect on the potential and pitfalls of using AI in accelerated user research, from practical aspects to more philosophical considerations on potential changes to the research process.

Takeaways

  • Practical insights into integrating AI with traditional research methodologies to expedite the research process
  • An overview of the effectiveness of AI transcription and synthesis tools in real-world research scenarios
  • Critical examination of AI’s role in data processing and how it compares with human analysis
  • Strategic considerations for service designers when employing AI to support rapid user research
  • Reflection on the ethical implications and potential impact on the quality of insights and researcher well-being when relying on AI to speed up research processes

Break

Day 1 Wrap up

The Cozy Juicy Real Experience: Connect with your peers


Join us the evening of Monday, March 25 to play Cozy Juicy Real—an engaging online board game where the purpose is anything but trivial: creating authentic and truly meaningful connections with your peers.

This is an interactive session and spaces are limited. RSVP is required!

Looking for a fun and unique way to (virtually) connect with others in the community? Then this one’s for you!

Whether you’re looking to expand your network, meet your next client or connect with collaborators, this is the perfect opportunity to make it happen.

Cozy Juicy Real has been played in 71 countries and is proven to create stronger team bonds at the world’s most successful organizations – including Google, Adobe and the UN.

“You will connect. Cozy Juicy Real is the best way to foster connection online.”
– Marcia Goddard, Chief Culture Officer, The Contentment Foundation

Learn more…

Teams edition

❤️ Friends edition

⭐️ Top-Rated on Trustpilot

This talk is a case presentation about using generative AI and graph languages to rapidly produce complex enterprise designs. We use a repository-based enterprise architecture tool and EDGY, an open-source visual language, to feed GPT-4 with context-rich queries. The resulting maps and models are … wrong.

But they have proven to inspire, and even provoke, conversations across a diverse stakeholder community, and they shortcut our way to a set of correct and useful models that inform design decisions. Moreover, they can highlight blind spots and previously unknown interrelationships, thereby enriching the design process with minimal effort.

Takeaways

  • Recognising blank page moments in complex challenges
  • How to embed context and ad hoc training in an LLM prompt
  • How to generate a web of coherent maps (Journey, JTBD, Organization, and Process Maps) that cover a complete design for a given challenge
  • How to use these maps and how not to use them when co-creating with others
  • When to keep tackling the blank page yourself instead
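As a purely hypothetical sketch of the “embed context in an LLM prompt” idea above (the organisation, facets, and example below are invented for illustration; this is not the speakers’ actual EDGY-to-GPT-4 pipeline), one might prepend repository context and a few ad hoc training examples to the task itself:

```python
# Hypothetical context-rich prompt assembly. CONTEXT stands in for content
# exported from an enterprise architecture repository; EXAMPLES is the
# "ad hoc training" that steers the model's output style.
CONTEXT = """\
Identity: regional energy cooperative
Capabilities: grid operations, customer billing, field maintenance
"""

EXAMPLES = """\
Example journey step -> "Customer reports outage: phone, app, or smart meter"
"""

def make_prompt(challenge: str) -> str:
    """Combine organisational context, style examples, and the design task."""
    return (
        "You are assisting with enterprise design.\n\n"
        f"Organisation context:\n{CONTEXT}\n"
        f"Follow the style of these examples:\n{EXAMPLES}\n"
        f"Task: draft a customer journey map for: {challenge}"
    )

print(make_prompt("switching a household to a dynamic tariff"))
```

The value is in the ordering: context and examples come first, so the model’s (possibly wrong, but conversation-starting) maps at least speak the organisation’s language.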

Break

Documentation technology is the foundation of modern healthcare delivery. Convoluted, redundant, and excessive documentation is a pervasive problem that causes inefficiency in all aspects of the industry.

At IncludedHealth, we are developing an AI-assisted documentation tool that summarizes and documents conversations between patients and their care providers. A care provider can push one button and have their entire patient encounter captured in a succinct, standardized format. In our pilot launch, the results were staggering: within six months, we demonstrated a 64% reduction in time per encounter!

However, despite our promising results, there still remain challenges specific to the demands of the healthcare domain. As our team continues to develop solutions to meet these challenges, we gain even more clarity on what it takes to design a human-backed, AI-powered healthcare system.

Takeaways

From this session, you can expect to learn the following:

  • Developing AI design in healthcare requires close collaboration between end users and your data science team
  • Piloting GenAI solutions may be more effective than traditional prototyping
  • Trading accuracy for efficiency is a barrier to adopting GenAI tools in healthcare
  • GenAI design in healthcare requires establishing critical boundaries as well as a good understanding of cognitive processing
  • Other factors to consider when designing AI solutions for service-based industries include understanding how training might be impacted, the importance of standardization vs. personalization of data output, and the need for more autonomy and control elements given the consequences of unpredictable output errors

Break

AI tools like ChatGPT have exploded in popularity with good reason: they allow users to draft, summarize, and edit content with unprecedented speed. While these generic tools can generate any type of content or perform any type of content task, the user needs to craft an effective prompt to get high-quality output, and often needs to exchange multiple messages with additional guidance and requirements in order to improve results.

When you’re building an AI-powered text generation feature, such as a product description or email writer, you typically can’t expect users to craft their own prompts. And unless you’re building a chat interface, you’re unlikely to offer the ability to iteratively improve the output. Instead, your feature needs a robust prompt skeleton that combines with user input to produce high-quality output in a single response.

For the designer, this means building an interface that helps users provide the exact information that creates a successful prompt. This process is more complex than simple form design or a mad-lib prompt completion tool. The user input, often including free form text fields, might be required to fill in prompt variables, but it also could change the prompt structure itself, or even override base instructions.

The effectiveness of the user input significantly influences the quality of the output, underscoring the need for designers to be deeply familiar with the backend prompt architecture so they can design the frontend.

Drawing on recent text generation projects, I’ll demonstrate how the interface design can respond to and evolve with the prompt architecture. I’ll talk about how to determine which prompt components to make invisible to the user, which to provide as predefined options, and which should be authored by the user in free-form text fields.

Takeaways

  1. How prompt structure can impact user interface design and conversely, how design can impact prompt structure
  2. Techniques to provide effective user guidance within AI generation contexts to ensure consistently high-quality output
  3. Real-world examples and learnings from recent generative AI projects in an e-commerce software product

Session description to come.

Under biometric privacy laws like BIPA and CCPA, user research recordings containing users’ faces or voices can put your company at risk for lawsuits and fines. Legal departments are increasingly requiring more stringent redaction, and in some cases banning recording outright. This comes at a high cost for UX teams who are already being asked to do more with less, as losing access to recordings can increase duplicative research effort and reduce the accuracy of results.

AI offers new solutions for UX teams who want to keep research recordings longer without violating biometric privacy laws. In this demo, we’ll show how we used off-the-shelf tools to intelligently redact users’ voices, faces, and bodies in research videos. By removing biometric identifiers, you can compliantly archive research recordings indefinitely, enabling your team to mine them for insights for years to come.
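The demo’s specific off-the-shelf tools aren’t named here, so as a generic sketch of one common redaction technique: given a bounding box from a face or person detector (assumed, not shown), the region can be pixelated in place so the biometric detail is unrecoverable:

```python
import numpy as np

def pixelate(frame: np.ndarray, box: tuple[int, int, int, int],
             block: int = 8) -> np.ndarray:
    """Replace the (x, y, w, h) region with coarse blocks, in place.

    Keeps one pixel per `block`-sized tile and repeats it, so a face or
    body inside the box cannot be reconstructed from the output.
    """
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    small = roi[::block, ::block]                       # one pixel per tile
    filled = np.repeat(np.repeat(small, block, axis=0), block, axis=1)
    frame[y:y + h, x:x + w] = filled[:h, :w]
    return frame

# Dummy 64x64 RGB frame standing in for one decoded video frame; in a real
# pipeline the box would come from a detector run on every frame.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
pixelate(frame, box=(16, 16, 32, 32))
```

Applied frame-by-frame (with matching audio voice alteration, handled separately), this is the shape of the “remove biometric identifiers, keep the research signal” workflow the demo describes.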

Long Break

Take a break to grab a bite, stretch your legs, chat with speakers and attendees in the conference Slack, or even take a nap. Just remember to set your alarm and be back in time for the second half of Design in Product!

Break

What we design is changing; therefore, how we design is also changing. Design innovation is being affected by emerging AI technologies. In this talk, I will set the context for the role of design in creating purposeful and pragmatic technology, both historically and today. I will talk about some of the problems with AI innovation and I will show some examples from our research showing the impact of design in creating, developing, and deploying AI systems, with the goal of creating better social systems, better economic relations, and a better world in which to live.

Day 2 Wrap up