Day 3: Research in an Automated Future
— Thank you everyone for inviting me to Advancing Research again
— This is my dip into the future, where I'll discuss the birth of AI/ML design research
— I've led teams responsible for AI in platforms, products, and services
-
My work includes building tools for model developers
— I’ll begin with a design ethics statement: “To amplify the beauty of humanity with design while avoiding practices that exploit its fragility”
— I’ll tell you about myself as well:
-
Prior to Capital One, I was the lead design researcher at IDEO. I was both a researcher and a designer, and I consider design and research a symbiotic relationship, not knowing where one begins or ends
-
I also teach Ethical AI at DePaul University
— My background includes 30 years of working with data and people
-
I'm a former journalist, and have designed intelligent systems using IoT, automated healthcare appointments, and worked in finance and banking
— I believe that in an automated world, we need to determine what not to design, in order to preserve human culture and values
-
We are very close to a world where everything can be automated
-
There will be a shift to valuing more abstract things like trust and culture, in order to make the tradeoffs about what technology should not do
— But first some definitions
-
Design, for me, is an effort to impose order upon chaos, and I consider design a verb
— Research is design in my view, and I have a strong opinion that research and design are two sides of the same coin
— When I lead design teams, it's difficult to understand where research ends and design begins, as they are so symbiotic
-
Research is an active form of design
— Now I'll go through Machine Learning, AI, and MLXD, to provide an overview of what they are and what they're not
-
Hopefully you’ll learn something new.
-
I also do workshops on ML + AI and how it fits with design
— Machine learning is a category of computer science in which computers learn to achieve desired outcomes by applying problem-solving rules automatically
-
Some ML networks are "neural", mimicking the human brain, and achieve outcomes without explicit human programming
-
Machine learning is limited to past observations, though, rather than actively interacting with its environment
-
It depends on data already collected
-
— An algorithm is a series of unambiguous rules to solve a problem
-
We create algorithms every day, such as grabbing an umbrella after it rains
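The umbrella rule can be written out as a literal algorithm. This is a hypothetical sketch; the function name and the 50% threshold are my own illustrative choices, not from the talk:

```python
def should_grab_umbrella(is_raining: bool, chance_of_rain: float) -> bool:
    """An everyday algorithm: an unambiguous rule that turns inputs into a decision."""
    return is_raining or chance_of_rain >= 0.5

print(should_grab_umbrella(False, 0.7))  # True: the rule fires on a high forecast
```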
— If you’ve taken HCI, you’ve seen the visual above
-
To summarize: There is input of sensory information, which you run through memory, and determine your behavior from there
— This is how we make decisions about how we want to act, and process the world around us
— Data science models work in the same way
-
There is input of historical/environmental data
-
The model is trained on the data, and runs through validation to see if training was accurate
-
Fine-tuning happens every time the model is trained on tasks, such as recognizing a cat versus a dog
-
The model developer will tune the model, and keep training until it passes validation
— We will then look at the model's results, with the aim of providing a high level of accuracy for developers
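The train/validate/tune loop described above can be sketched in miniature. This is a toy example of my own (synthetic data and a one-parameter "model"), not a real pipeline:

```python
import random

random.seed(0)

# Toy data: label each number 1 ("big") if it is above an unknown cutoff of 0.6.
data = [(x, int(x > 0.6)) for x in (random.random() for _ in range(200))]
train, validation = data[:150], data[150:]   # hold out data for validation

def accuracy(threshold, rows):
    """Share of rows the one-parameter 'model' classifies correctly."""
    return sum((int(x > threshold) == y) for x, y in rows) / len(rows)

# "Training": tune the single parameter against the training split.
best = max((t / 100 for t in range(100)), key=lambda t: accuracy(t, train))

# "Validation": check the tuned model on data it never saw; keep tuning if it fails.
val_acc = accuracy(best, validation)
assert val_acc >= 0.9, "model failed validation - keep tuning"
```

The held-out split is the key idea: passing validation means the tuned parameter generalizes beyond the data it was fit on.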
— So what does machine learning look like in practice?
-
Detecting anomalies that stand out (e.g., seeing 100-degree temperatures in January)
-
Classifying family photos and placing them in the same album
-
Recommendation models such as those on Netflix or Amazon
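The anomaly-detection case can be illustrated with a simple z-score rule. The temperature readings here are hypothetical, and real anomaly detectors are more sophisticated:

```python
# Flag temperatures that stand out from the rest of the data (z-score rule).
january_temps = [30, 28, 33, 31, 29, 27, 32, 100]  # degrees F; 100 is the oddball

mean = sum(january_temps) / len(january_temps)
std = (sum((t - mean) ** 2 for t in january_temps) / len(january_temps)) ** 0.5

# Anything more than 2 standard deviations from the mean is flagged.
anomalies = [t for t in january_temps if abs(t - mean) / std > 2]
print(anomalies)  # [100]
```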
— To bring machine learning models to life, I'll use the example of a dog I took care of for seven days, Silver
-
Supervised Learning: Teaching the dog to pick up bones and giving positive reinforcement
-
I specified the outcome that I wanted and trained the dog to produce that outcome every time
-
-
Unsupervised Learning: Letting the dog find the pattern and bring it back
-
This was analogous to reviewing datasets and surfacing a pattern
-
-
Reinforcement: I would leave the dog crate open and drop treats in there to encourage the dog to stay in the crate
-
This is analogous to training a model to respond to the right cues through positive and negative rewards
-
This model is used a lot in self-driving cars
-
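The supervised vs. unsupervised distinction from the dog analogy can be shown in code. The data, labels, and methods (1-nearest-neighbor, largest-gap split) are my illustrative choices:

```python
# Supervised: labeled examples, like rewarding the dog for the bone I chose.
labeled = [(1.0, "bone"), (1.2, "bone"), (5.0, "ball"), (5.3, "ball")]

def nearest_label(x, examples):
    """1-nearest-neighbor: predict the label of the closest labeled example."""
    return min(examples, key=lambda pair: abs(pair[0] - x))[1]

print(nearest_label(1.1, labeled))  # "bone"

# Unsupervised: no labels; the structure (two groups) is discovered in the data,
# like letting the dog find the pattern on its own.
points = [1.0, 1.2, 5.0, 5.3]                                   # already sorted
split = max(zip(points, points[1:]), key=lambda p: p[1] - p[0])  # largest gap
clusters = ([p for p in points if p <= split[0]],
            [p for p in points if p > split[0]])
print(clusters)  # ([1.0, 1.2], [5.0, 5.3])
```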
— As people, we do AI/ML in the real world
-
Computer vision corresponds to sight
-
Auditory sensors correspond to hearing
-
Natural Language Processing corresponds to speaking
-
Automation corresponds to how we act
-
Recommendation models correspond to how we make decisions
— But AI/ML does have limitations
-
It needs data to function
-
Bad data leads to bad outcomes
-
No rules/No action, at least not yet
-
Nuance is the enemy of AI
-
The obstacle of human irrationality, which ML/AI can’t replicate
— As user researchers, designing for AI is a bit different from designing for software/enterprise/interaction
-
The way things are processed is different
— Current tech
-
Current technology is static, and doesn’t change over time
-
Interactions are contained between user and device
-
The user controls the device and has total control over it
-
Interaction is one way
-
Technology is task-based, like clicking on a link
-
There are affordances where people interact in the same consistent way
-
Technology is static and performs task in same way every time
— For AI/ML
-
The user or the machine can be in control
-
Multi-agency context of use, where both the machine and the user can act
-
Things are decision-based, not task-based, which makes things harder
-
Affordances can change over time as the machine learns
-
-
Technology is dynamic, and ML performs differently with new knowledge
-
Google searches evolve based on the information you feed into them
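The idea that affordances shift as the machine learns can be sketched with a toy recommender whose ranking changes with every interaction. The class and items here are hypothetical, not a real recommendation system:

```python
class ToyRecommender:
    """Ranking changes as the system learns from each click (a dynamic affordance)."""

    def __init__(self, items):
        self.scores = {item: 0.0 for item in items}

    def record_click(self, item):
        self.scores[item] += 1.0  # learn from one interaction

    def ranked(self):
        return sorted(self.scores, key=self.scores.get, reverse=True)

r = ToyRecommender(["news", "sports", "cooking"])
print(r.ranked()[0])   # "news": initial ordering just reflects insertion order
r.record_click("cooking")
r.record_click("cooking")
print(r.ranked()[0])   # "cooking": the same interface now behaves differently
```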
-
— Challenges for research in AI systems
-
It's future-oriented instead of present-oriented
-
Speculative research is hard with current UXR methods
-
Sometimes it is hard for UXRs to realize all the things the machine can do
-
Technology isn’t static
-
It goes beyond a simple agency framework
-
— Pre-software focuses on how a product is made, while post-software focuses on how a product behaves
-
Need to seek out unexpressed rituals, cultures, and values
— I use the framework of design anthropology for an automated world
-
It's a hybrid mode of investigation for grappling with speculative objects
— Design anthropology puts ethnography and anthropology together
-
Focuses on how design translates human values into tangible experiences
-
There is a focus on trust in machine-human interaction
— Characteristics of design anthropology include
-
Trans-disciplinary work
-
Multi-agency, requiring co-participatory design that puts people in speculative scenarios and situations to figure out how things work
-
Research-led, so we can focus on what the needs are and work backwards
— Methods
-
Provocation prototypes to capture what people think about tools, rather than surveying
-
Perpetually synthesizing things
-
Future-oriented
-
Highly considerate of value orientation needs
— So I’ll conclude for now and will take questions
Q&A
-
What’s the way forward for ML algorithms becoming more transparent? What skillset expansion is needed?
—> This is key, which is why I explained how ML works at a conceptual level
-
As a UXR, you need high-level data and AI + ML literacy, as you can't impact a model once it's made
-
Need to impact the model in the data collection phase of the process
-
—> No model runs well without data, and you need to be literate about how data is used in models
-
There are many kinds of cognitive biases, including non-mathematical ones, which you can't tweak in the model development process
—> Need to look out for these, and think about what goes on under the hood
-
e.g. what's missing in the dataset, like FICO scores, and alternative ways to assess creditworthiness
-
Consider sins of omission, and who is excluded from the model
-
Start at the beginning to see if what goes into a model is fair, and mitigate bias as much as possible
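Checking "what goes into a model" can start with a simple dataset audit before any training happens. The records and field names below are hypothetical:

```python
from collections import Counter

# Hypothetical applicant records: audit who is in (and missing from) the data
# before any model is trained on it.
applicants = [
    {"age_group": "18-25", "has_fico": True},
    {"age_group": "26-40", "has_fico": True},
    {"age_group": "26-40", "has_fico": False},
    {"age_group": "65+", "has_fico": False},
]

# How evenly are the groups represented?
coverage = Counter(a["age_group"] for a in applicants)

# What share of people would be excluded if the model requires a FICO score?
missing_fico = sum(not a["has_fico"] for a in applicants) / len(applicants)

print(coverage)
print(missing_fico)  # 0.5
```

Surfacing numbers like these at the data-collection stage is where a UXR can still shape the model.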
-
—> As far as transparency, have clarity at every level
-
It's hard to explain in depth how a model works for some people, and hard to trace and audit it
-
You need stage gates to make the model building process a glass box versus a black box
-
Have model governance so that someone audits the model development process to make sure the model adheres to our values
-
Explainability is constantly worked on at Capital One
-