Day 1: Navigating the Ethical Frontier: DesignOps Strategies for Responsible AI Innovation
— Thank you for the intro. I’m Jay, and we’ll jump straight in
— I’ll break some news
-
AI is kind of a big deal, and it’s all we can talk about in DesignOps
— When CEOs mention AI, company stock soars, and when they fail to mention it, it suffers
— Nvidia made an AI-related announcement and its stock increased
— Meanwhile, Siemens’ slowness in adopting AI has contributed to its stock dropping 36% since June
-
Companies are under tremendous pressure to get AI out, and quickly
— We are familiar with the equation: pressure plus speed leads to shortcuts
-
Add to that tech staff being cut everywhere, leaving us to do more with less
-
A dangerous recipe for shortcuts and mistakes when we work at breakneck speed
— Not a new problem, but while tech debt is as old as software development, AI hits differently
— The old way had technical debt, but AI has compounding interest
-
Rushing AI to market has bad effects on product and brand
-
If we release an AI with unfair biases against people with disabilities, the damage to brand and reputation will far outlast any fixes we patch in after the fact
-
— So what do we do? There are many directions to take
-
So how to keep implementation ethical?
— So I’ll tell a story about Robert
-
He is married with two young daughters, lives an ordinary life in the suburbs, and works at an automotive supply company
-
But you should know he has a secret that not even his mother knows about
— On the day before his 42nd birthday, Robert was arrested for larceny
-
Why?
-
Six months earlier, Robert drove home from work, created an Instagram reel, and had dinner with his wife and kids
-
-
So what happened?
-
A burglary happened 25 miles away from Robert, and the captured surveillance video was analyzed by police with AI software called DataWorks+, which explicitly doesn’t measure its system for accuracy or bias
-
Police ran images through DataWorks+, and Robert’s driver’s license photo was a match
-
-
Robert then got a call from an unidentified number and picked up the phone
-
The caller identified himself as police and asked Robert to turn himself in
-
Robert refused, thinking the whole call was a hoax
-
-
Later in the day the police came to his home with a warrant
-
Robert’s wife called him, and as Robert pulled into the driveway he was arrested by the police in broad daylight while his kids, wife, and neighbors watched
-
-
-
Finally, 18 hours later, police pulled Robert into an interrogation room and asked him to confess to the robbery
-
Robert held the photo up to his face and said the photo was not of him; the police conceded it looked like the computer got it wrong
-
-
It took 12 hours and $1,000 for Robert to be released. But the consequences were severe
-
Neighbors treat him differently
-
His kids don’t trust the police
-
His boss advised Robert to keep the events to himself
-
-
The shame of being wrongfully arrested left Robert so shaken up that he wouldn’t tell his mother about it
— This was a consequence of poor police work, yes, but also of an AI vendor refusing to measure its system for accuracy or bias
— I believe we should insert ourselves into the story, so that these things don’t happen as we push AI out the door as quickly as possible
-
AI is even now discriminating against women and people with disabilities in hiring practices
— We are the solution to prevent this from happening further
— I’ll move onto the solution now
— Parties are fun
-
I don’t like attending parties, but I appreciate being invited and knowing what to bring to the party
-
DesignOps’ role is to ensure the right people are at the party
— We need to ask if we have a team in the AI process that provides a variety of perspectives
-
If no, send out those invites
— Disclaimers
-
I view these roles as hats people wear, as there is overlap in goals
-
Make sure roles are all represented
— On this first slide
-
We need a legal expert who is familiar with the fine print and can protect people from turmoil
— To bring the AI to life we need the Machine Learning Engineer
— This slide comes with a story
-
In my interactions with ChatGPT, I eventually figured out how to talk with the AI
-
I saw potential for business applications and asked ChatGPT about risk factors. Its answer:
-
The number one answer: find the use case or use cases to address
-
-
I followed up with: who is needed to identify use cases for AI?
-
The number one answer is UX research
-
— Domain experts serve as the SMEs, and they know the areas where AI can make a difference
-
The BA takes info from UXR and domain experts and turns it into business requirements and objectives for the AI
— For ethical AI integration, you don’t just need an ethicist
-
You need a data scientist to make the data clean
— You also need a data engineer to provide all the data
-
A mystery card remains
— That mystery person is the DesignOps leader
-
Focus on the superpowers they bring to the conversation
— A quick story on AI bias
-
I used Midjourney to create the images in this deck, but noticed the following when I asked the AI to generate an image of a CEO
-
I got images of a CEO sitting at a desk, and all four were of white men
-
-
I refreshed the prompt for ‘men and women’, but they were all still white
-
I then asked the AI to give me pictures of high-status jobs like ‘doctor’, ‘nurse’, ‘teacher’
-
I got a combination of white men and women
-
-
For a low-status job like fast-food worker, though
-
Minority representation appeared under fast-food worker
-
-
Midjourney’s response was as follows:
-
The dataset was based on online content, which was stereotyped, and Midjourney was working on it
-
The dataset problem applies to the Internet as a whole
-
-
— So let’s turn to the questions you should ask when launching a product: a) what problem are we trying to solve, and b) what questions are essential
— Ask the following (we don’t need to know the answers to these questions, but we need to ask them)
-
How is the data we are using supporting the problem we are solving?
-
Where is the data coming from?
-
Is the data tested for biases?
-
How will we address these biases?
— What measures do we have to monitor and evaluate data (e.g., the Midjourney case)?
— And how will we incorporate user feedback and real-world insights into our AI’s ongoing development?
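The data questions above can be made concrete with a quick audit script. Below is a minimal sketch, assuming a hypothetical hiring-style dataset with `group` and `selected` fields; the four-fifths-rule threshold is one common heuristic from US hiring-discrimination guidance, not the only possible test:

```python
# Illustrative sketch only: dataset, field names, and threshold are assumptions.
from collections import Counter

def selection_rates(records, group_key="group", label_key="selected"):
    """Return the fraction of positive outcomes per demographic group."""
    totals, positives = Counter(), Counter()
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += bool(r[label_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold x the best group's rate
    (the 'four-fifths rule')."""
    best = max(rates.values())
    return {g for g, rate in rates.items() if rate < threshold * best}

# Toy example: a hiring dataset with a skewed outcome.
data = (
    [{"group": "A", "selected": True}] * 60 + [{"group": "A", "selected": False}] * 40 +
    [{"group": "B", "selected": True}] * 30 + [{"group": "B", "selected": False}] * 70
)
rates = selection_rates(data)
print(rates)                          # {'A': 0.6, 'B': 0.3}
print(disparate_impact_flags(rates))  # {'B'} -- 0.3 < 0.8 * 0.6
```

A check like this doesn’t answer the questions for you, but it forces the team to confront them with numbers rather than assumptions.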
— So we’ve corralled the people, identified the problem, asked the ethical AI questions, and now it’s time to build
— A rock solid foundation lets you create ethical prototypes
-
Prototype any new product features, test how they hold up against user personas and the bias questions, and plan for anything that comes up
-
A proactive approach will save you time, technical debt, and back-end work
— Then take the prototype for a test drive
-
Similar to how auto manufacturers crash-test cars at various speeds
Need to ask: what is ethical stress testing?
-
The AI is subjected to simulated scenarios that are ethically challenging
-
e.g., autonomous vehicles needing to choose between passengers and pedestrians
-
-
Ethical stress testing will make sure results align with ethical norms
— User-centric testing focuses on who will use the AI
-
How does AI respond to different accents, genders and cultural contexts?
-
Cater the testing specifically to what you are developing, and think of the broad range of people who can encounter the AI
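One way to sketch user-centric testing in practice: run the same prompt through the AI for a range of user profiles and flag any profile that gets no answer or a markedly different one. The `ai_respond` stub, the variant list, and the length-based check are illustrative assumptions, not a prescribed suite:

```python
# Illustrative user-centric test harness; adapt the profiles and checks
# to whatever AI you are actually developing.
VARIANTS = [
    {"accent": "Scottish", "gender": "female", "locale": "en-GB"},
    {"accent": "Indian", "gender": "male", "locale": "en-IN"},
    {"accent": "Southern US", "gender": "nonbinary", "locale": "en-US"},
]

def ai_respond(prompt, profile):
    """Stub: replace with a call to the AI under test."""
    return f"answer to {prompt!r}"

def run_user_centric_suite(prompt):
    """Return the profiles whose responses differ too much from the baseline."""
    failures = []
    baseline = ai_respond(prompt, VARIANTS[0])
    for profile in VARIANTS:
        reply = ai_respond(prompt, profile)
        # The AI should answer every user, and comparably to the baseline.
        if not reply or abs(len(reply) - len(baseline)) > 0.5 * len(baseline):
            failures.append(profile)
    return failures

print(run_user_centric_suite("What's my account balance?"))  # [] -> no failures
```

The point is less the specific metric than making "does this work for everyone?" an automated, repeatable question rather than a one-off spot check.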
— Finally iterate ethically
— Remember being in grade school and learning things from trusted sources, and learning things from untrusted sources like your friend Taylor?
— This is where ethical iteration comes
-
AI is tech debt with compounding interest: an AI doesn’t just exist, it continually learns how to be more useful and helpful for users
-
Technology and society are evolving quickly
— As you refine products, keep gathering data, user feedback, and real-world insights to improve accuracy and align with ethical goals
-
Create processes for a cycle of improvement
-
Fair, inclusive, and ethical considerations
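That cycle of improvement can be sketched as a small feedback loop that flags the AI for ethical review when user-reported accuracy drifts. The class name, the feedback shape, and the 90% threshold are assumptions for illustration only:

```python
# Minimal sketch of a feedback-driven improvement cycle; tune the
# threshold and feedback signal to your own product.
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    review_threshold: float = 0.90
    outcomes: list = field(default_factory=list)  # True = user said "correct"

    def record(self, user_said_correct: bool):
        self.outcomes.append(user_said_correct)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_review(self):
        """Trigger an ethical/accuracy review when the signal drifts low."""
        return self.accuracy() < self.review_threshold

loop = FeedbackLoop()
for ok in [True] * 8 + [False] * 2:  # 80% accuracy this cycle
    loop.record(ok)
print(loop.accuracy(), loop.needs_review())  # 0.8 True
```

Wiring even a crude signal like this into the process keeps "iterate ethically" from being a slogan: the review step fires automatically instead of depending on someone remembering to look.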
— So let’s go full circle, going forward
-
You are then doing your part in making sure what happened to Robert Williams doesn’t happen to others
-
Your expertise, passion, and commitment matter here
-
— DesignOps thanks you as we map our critical role for ethical AI innovation
-
Ethical AI is not just a buzzword, but it won’t happen on its own
-
We are the solution for a responsible technological future
Q&A
-
How have you navigated pushback you received about unethical datasets or outputs from AI?
-
Sharing examples of unethical AI makes the case for why to do this
-
-
What about the intentionality of inviting people with disabilities?
-
Yes, include diversity as part of the invites, and classify who will encounter the AI under the ethicist card
-
There are a lot of people in the world, but we have access to only a few, so we need to think about how we can avoid missing anyone
-
-
What do you do in an org that considers AI important, but leadership set up a tiger team and you are not included?
-
I feel engineering-led orgs will try to take control of AI, so shoehorn yourself into the conversation and show the impacts on the brand
-
A ramification of damaging AI is that you can’t put the toothpaste back in the tube; the damaging impacts are now part of the brand
-
Escalate conversation until you reach someone who cares about it
-
-