NASA faced increasing complexity in managing data, meeting customer needs, and streamlining processes across its diverse missions and operations. With fragmented data sources, manual workflows, and a lack of integrated AI tools, teams struggled to maintain efficiency, consistency, and innovation.
Through a cross-organizational AI-readiness initiative led by the OCIO, NASA engaged stakeholders across Service Lines, Centers, and Mission Support Offices to identify challenges, articulate use cases, and prioritize AI opportunities. Workshops, assessments, and stakeholder collaboration provided a roadmap for integrating AI solutions, enabling secure data usage, and automating repetitive tasks. Focused efforts on training, governance, and iterative implementation ensured alignment with organizational goals.
The Challenge
Users engaging with the AI shopping assistant often felt constrained by limited options, excessive follow-up questions, and a lack of personalization. These shortcomings led to user fatigue, misunderstandings, and a subpar shopping experience. Insights from user research (UXR) and transcripts revealed that users wanted more intuitive, human-like interactions that catered to their unique needs.
The Solution
A robust, adaptable framework was designed to transform AI conversations into sales-like consultations. By breaking user queries into three core components (use-case, constraints, and preferences), the framework enabled the bot to understand intent and deliver relevant, personalized results; a code sketch of this decomposition follows the list below. Key enhancements included:
Allowing users to skip questions and navigate freely.
Providing contextual help for technical queries.
Transitioning to open-ended interactions after gathering essential details to prevent over-questioning.
Displaying diverse and curated results aligned with user preferences.
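Here is a minimal sketch of the query decomposition described above, assuming an OpenAI-style chat API with JSON output; the model name, prompt wording, and schema are illustrative assumptions, not the team's actual implementation.

```python
# Hypothetical sketch: split a shopper's message into use-case,
# constraints, and preferences. Model and schema are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """Decompose the shopper's message into three components.
Return JSON with keys: use_case (what the product is for),
constraints (hard limits like budget or size), and
preferences (soft wishes like colour or brand).

Message: {message}"""

def decompose_query(message: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": PROMPT.format(message=message)}],
    )
    return json.loads(response.choices[0].message.content)

print(decompose_query(
    "I need a laptop for video editing under $1,200, ideally lightweight."
))
# e.g. {"use_case": "video editing", "constraints": ["under $1,200"],
#       "preferences": ["lightweight"]}
```

Once the three components are filled in, the bot can stop asking clarifying questions and switch to the open-ended mode described above.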
LLMs are known for generating ‘original’ content by predicting the next most likely word. But their real power lies in generating thoughtful, nuanced feedback. Drawing from my background in writing and technology, I discovered that LLMs can be remarkably insightful critics when prompted with the right questions.
Using composite examples inspired by Hear Me Out’s culture assessments with leadership teams and employees, I’ll demonstrate how extended conversations with LLMs like Anthropic’s Claude help us pressure-test our research from multiple angles: anticipating participant concerns, surfacing potential misinterpretations, and identifying gaps in our analysis that could undermine stakeholder trust.
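As an illustration of what one turn of that pressure-testing might look like in code, here is a sketch using the Anthropic Python SDK; the prompt wording and the draft finding are hypothetical, not Hear Me Out's actual materials.

```python
# Illustrative "critic" round with Claude; the finding and prompt
# are invented stand-ins for a real culture-assessment draft.
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

draft_finding = (
    "Employees describe leadership communication as open, yet several "
    "interviewees said major decisions feel pre-made."
)

critique = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model choice
    max_tokens=800,
    messages=[{
        "role": "user",
        "content": (
            "You are reviewing a draft culture-assessment finding before "
            "it goes to a leadership team.\n\n"
            f"Finding: {draft_finding}\n\n"
            "Critique it from three angles: (1) concerns a participant "
            "might raise about how their words were used, (2) ways a "
            "stakeholder could misinterpret the claim, and (3) gaps in "
            "the analysis that could undermine its credibility."
        ),
    }],
)
print(critique.content[0].text)
```

In practice this runs as an extended back-and-forth rather than a single call: each critique feeds a revision, and the revision goes back for another round.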
We built a novel math tutoring app for 11-year-olds in the UK. Since this was our first AI project, we expected lots of technical issues. Those duly happened, and on top of them we ended up questioning the value of designers in AI-driven products.
AI moves fast. In the time since we began programming DwAI25 in December, we’ve already seen important new developments like DeepSeek R1, Deep Research from OpenAI and Perplexity, and Mira Murati’s Thinking Machines Lab. By June, the terrain will have transformed even further. In this panel, we’ll hear about the latest developments in AI from representatives of major AI players, and what they mean for UX professionals.
“Feels Like Paper!” is a series of prototypes about augmenting physical paper through AI. Various ML models, LLMs and a mixed reality headset are used to infuse physical paper and ink with properties of the digital world without compromising on their physical traits.
As a data visualization designer and developer, the challenge I often face is what to do with unstructured data. One case study I can show is exploring survey results, where the multiple-choice questions are straightforward to analyze but interesting open-ended questions like “What do your colleagues not understand about data visualization?” are much harder to crack.
Latent Scope is an open source tool I built that streamlines a process of embedding text, mapping it to 2D, clustering the data points on the map, and summarizing those clusters with an LLM. Once the process has run on a dataset, structure emerges from the unstructured text, allowing us to get a sense of the patterns in the survey answers. Themes like “the time it takes” to develop data visualization pop out, as does “the importance of good design.” While people don’t use the same language to describe these themes, they show up as clusters in the tool thanks to the power of embedding models.
https://github.com/enjalot/latent-scope
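For readers who want to see the shape of that pipeline, here is a minimal sketch of the embed, map, cluster, and summarize steps using common open-source libraries (sentence-transformers, umap-learn, hdbscan); Latent Scope wraps an equivalent process behind its own interface, so this is not its actual API.

```python
# Sketch of the embed -> map -> cluster -> summarize pipeline.
from sentence_transformers import SentenceTransformer
import umap
import hdbscan

answers = [
    "It takes far longer than people expect.",
    "Good design matters more than the charting library.",
    # ... the rest of the open-ended survey responses
]

# 1. Embed each free-text answer into a vector.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(answers)

# 2. Project the vectors to 2D for a scatterplot-style map.
coords = umap.UMAP(n_components=2, metric="cosine").fit_transform(embeddings)

# 3. Cluster the points so recurring themes group together.
labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(coords)

# 4. Hand each cluster's answers to an LLM for a short theme label
#    (prompt omitted; any chat-completion API works here).
for cluster_id in sorted(set(labels) - {-1}):
    members = [a for a, l in zip(answers, labels) if l == cluster_id]
    print(cluster_id, len(members), "answers")
```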
We partnered with a non-profit to reduce opioid-related deaths by combining human expertise with AI-driven research, streamlining workflows, enhancing engagement through empathy-driven materials, and delivering actionable, high-impact insights for community-focused funding initiatives.
Generative AI has helped a number of designers “think in code” for the first time, but those designers often hit a wall when trying to ship that code to production. Let’s talk about how we break down the wall.