Design at Scale 2021 - Responsible Design in Reality (Cennydd Bowles)
—> Fantastic to be here, so let’s get started
—> I’m here to talk about a revolution in ethical technology in recent years
—> I run a design studio in London focused on design ethics, and I’ll discuss industry trends and tensions I’m seeing. My thoughts fall into two buckets.
-
What do we do? What are our responses?
—> The eruption of interest in design ethics comes from three tectonic plates converging: customer, regulatory, and employee pressure
—> For a long time the tech industry was optimistic, and industry scandals didn’t appear to be changing public attitudes
-
But now the public is losing confidence, per Pew Research, and industry reputation has fallen off a cliff in the past five years
— We’ve reached the point where only 19% of UK citizens believe tech companies design with their best interests in mind
-
This is a horrifying indictment of the ethical problems in the industry, and a clear sign we need to do better
—> Drilling into the stats, I see a common theme
-
People believe technology will have a positive effect on them individually
-
But there is a murkier impact on society as a whole
—> This pattern holds across multiple research studies
—> Public disquiet is creeping into the media, with documentaries like Coded Bias and The Social Dilemma, and dramas like Black Mirror
—> These shows and programs reflect concerns about design ethics, and amplify fears that technology is harmful and out to get people
—> The press is also receptive to critical voices, and increasingly in contact with disgruntled employees of various tech companies
-
Spotlight on the field and growing disenchantment with industry
—> Something strange is happening though:
-
Expressed beliefs about the industry and behavior toward the industry are dissonant with each other
-
Various tech stocks are up despite the souring of public attitudes, and people still like technology despite the best efforts of the tech sector
-
There is a sense of being resigned to being abused by the products
—> Customers want companies to behave more responsibly and ethically
—> Per Salesforce research, there are key rewards that can be provided by ethical tech
—> It’s part of a wider narrative that a shift is overdue
-
Financial Times and Economist, paragons of business interest, pushing back on the current shareholder value model to focus on stakeholder capitalism
—> I’m also confident in saying that regulatory momentum will continue to gather
-
Democracies haven’t really regulated technology, leaving it to self-regulate, in contrast to more authoritarian states like China
—> This is changing with areas like privacy
-
GDPR in the EU
-
Local enforcement
-
Countries like India and Brazil
—> Expect more regulation outside privacy, such as limits on the use of facial recognition
-
AI decisions are in the spotlight, and the EU has an appetite to regulate AI
-
California already mandates disclosure when a bot is used
-
We will likely see laws prohibiting synthetic content like “deep fakes”
—> Dark patterns also run the risk of being labeled fraudulent or anti-competitive
—> So is the point of tech ethics to avoid future regulation?
-
I’ve never witnessed that as an explicit motivation in the companies I’ve worked with
-
But no one wants “bad regulation”, including the regulators
-
If the tech sector takes these challenges seriously, we can prevent “bad” regulation down the line
—> We’ve also seen employees learning to band together to push for ethical changes
-
Example of Google pushing back at DoD project that involved drones
-
There was also a wildcat strike over sexual harassment and coverups at the company
—> Employee action is starting to trickle down into smaller firms
-
Among clients I’ve worked with, even those without formal advocacy groups, there is still grassroots pressure on tech ethics
—> It tells me that technologists are deeply motivated by values and the importance of these issues, and are in a position of power and authority, with a responsibility to use that power responsibly
-
Activism will tap into a latent desire for a better workplace and world
—> Companies will have to manage this
—> Moreover, younger entrants into the workforce are more values-driven, and know the luxury of having many options
-
They are being selective and tough on “toxic” brands
—> Example of Facebook losing its luster, as a result of its various scandals
-
Prudential now reports that 1 in 4 workers are looking for a new job, and they are recognizing they hold collective power
—> Tech employees are expensive and hard to hire, and can literally prevent things from being built
—> On the surface, this looks like labor activism, and it makes employers nervous
—> To a certain extent this is true:
-
But the activism is not around pay or conditions, but on changing ethical direction and social impacts
—> I’ve seen a surge of interest in technology ethics, categorized into three forms
-
Philosophical (Ethics is important and we need to learn from it)
-
Technical (Focused on AI and auditing algorithms)
-
Social Justice (Key feature that emerged in the US after BLM protests)
—> All are valid, but none sufficient on their own
-
Need to look at all angles, as well as climate and ecological issues
—> This is causing a backlash against the tech-lash, with teams feeling they’ve been painted into narrow corners
-
The MIT Tech Review wrote about Facebook’s Responsible AI group, which was kept away from key Facebook issues like hate speech and disinformation
—> There is a risk of “ethics washing” preventing significant change
—> So will the teams be setup for success? Or will they be boxed in by higher-ups?
—> Some companies are withdrawing from politics altogether, such as Coinbase, which paid staff to leave the company over social justice issues.
—> This is understandable to a point
-
But there is profound self-deception happening there
-
It’s absurd for a crypto company to say “we don’t do politics,” as cryptocurrency is an incredibly political sector
—> Choosing not to engage with political/ethical consequences is a choice, and a vote for the status quo
-
That’s okay, but this needs to be a conscious decision
—> Better to handle this thoughtfully, rather than say it’s not my problem
—> In addition, prominent ethics advocates are starting to be resisted and fired
-
Amazon fired employees over complaints about Amazon’s climate record
-
Google fired its AI ethics specialists
—> As a result of this, we enter a state of “akrasia”, where you know the right thing to do, but do the wrong thing anyway
—> Companies are aware their moves are damaging, but they are not making changes. This is due to things like:
-
Inertia
-
Lack of incentives
—> But mainly it’s a lack of operational knowledge, with no way to really tackle the issues
-
My work focuses on operationalizing ethics, especially with product and design teams
—> So we move to responses
—> What practical advice can I offer to people tackling these issues?
—> First, push every button: tackle ethics at the individual, collective, and systems levels
-
As a caveat, I will not focus on systemic issues like capitalism in this talk
—> I will focus on companies and collectives, and on thinking through issues individually as well
—> The first step in ethical design process is to de-center thinking, and consider what actors are relevant to the design process
—> As designers, we are trained to focus on the user, which is an understandable impulse
-
The assumption baked into UX design is that people and tech have a positive interaction
—> But tech is sometimes imposed on people without their consent, and we might even want some people to have a negative outcome with the technology
-
e.g. personae non gratae like a terrorist or stalker
—> We need to broaden the range of issues we consider, and realize interactions might not be positive
—> I spent three years at Twitter, where I encountered the Gamergate campaign, which was a systematic harassment campaign done under the pretext of pursuing journalistic ethics
-
Working at the company at that time was a profound wake-up call
-
I recognized I could build/design things that would ruin people’s lives, and that I worked at a company that did little about that fact
—> Moreover, even a positive interaction with one user can damage other users
—> Helping someone buy a gun or an SUV can damage the safety of the overall community
—> We need to recognize that impacts go beyond the individual; tech impacts families, neighborhoods, and nations
—> User-centered design (UCD) is inadequate for the 21st century: individual task success is promoted, but wider considerations are ignored
—> The worst consequences will fall on non-users
-
For example, if you live near an Airbnb rental, you suffer as local rents get pushed up
-
Automation pushes people out of jobs
-
Policing algorithms are used against minorities
—> Design impacts social relations and power, and the impact of design falls on those who are already worse off under the status quo
—> Design justice attempts to square that circle, by recognizing design can address inequity and unfairness
-
Amplifying unheard voices and including them in the design and innovation process
—> Design also impacts social constructs and social goods like trust, democracy, and law and order, and we need to account for them in the design process as well
—> We need to label these things as worth defending
—> We also need to include non-human actors exploited in the technology process, whose survival depends on our stewardship
—> There’s lots of discussion over user-centered versus society-centered design, but we need to weigh the relevance of all stakeholders and choose approaches that address their concerns
—> The metaphor I like to use is a mixing desk that balances disparate elements
-
Alternating between business needs and environmental needs
—> Next, we need to think about anticipating consequences
-
We need to understand the unintended uses and harms of our work, and make space to anticipate consequences
—> Closing the natural loop of design: understanding how the world influences a product, and living up to the natural responsibilities of the technology
—> We can use tools like the Ethical Explorer to explore ethical risks, with good prompt questions to start imagining what might go wrong
—> We can learn from the field of foresight and futures studies.
—> If you speak with an ethicist, they will discuss moral imagination, considering who wins and loses in imaginary scenarios
-
Many tools can be borrowed that look at consequences of decisions like impact maps and futures cones
—> Designers also have a superpower, despite lacking the ultimate authority of executives and product people
-
Can make things that stimulate moral imagination and make the future feel real
-
This takes decisions out of the future and into today
—> This can be done with things like design fiction
-
Example of the Dutch Burn charging station, designed for a world of energy scarcity, which deals with who gets priority for charging
—> Designers prototyped RFID cards, with some people given high-priority IDs and others probationary IDs
—> I’m not saying this is the correct solution; rather, it asks the right questions, shortcuts the thought experiment, and brings people into the process
—> It’s not useful to ask non-tech people to imagine the evolution of tech
-
It’s more effective to give them a future and engage them with the moral consequences of decisions
—> This work means we need space to think about impacts, but space for anticipation is out of fashion
—> Lean/Agile are very empirical movements, but they treat ethical dimensions as something that can be fixed after release
-
Need to anticipate impacts today, not tomorrow
—> We need a way to make space for this anticipation; I recommend design critiques as a way to include more ethical perspectives
-
Need to start conversation, and give space for conversations
—> Once you have scenarios of what can go wrong, you need to figure out what’s most important
—> We need to lose the habit of thinking we are pioneers in the field
-
People in other fields, from the humanities on, have been exploring the social impacts of tech for decades, and design has ignored them
-
It’s time we listen to these people
—> There are no guaranteed answers, but these fields will provide structure for the debate
—> People think ethics is subjective and flimsy
-
In reality it’s far more robust and rigorous
—> We need to include other people in the process, and broaden who gets to decide what is ethical
-
Currently, it’s the domain of the elite and privileged
—> We talk about democratizing technology
-
We shouldn’t fall into the trap of thinking technocrats control the direction of technology
—> We need to design with people, not exclusively for them
-
Starting to see progress in radical inclusion and participation in design
—> Once risks are evaluated then we need to do something
-
Defensive design can stop bad ideas from happening, but design can also look for positive opportunities
—> We need to reframe how we think about responsibility and ethics; too many people see ethics as a negative influence that will slow things down
—> Ethical thinking is a positive force, that will drive innovation and creativity
-
I like to use the analogy of a trellis: a frame on which products grow, letting them take on the shape of your values
-
This will help you stand out in crowded marketplaces
—> Process is helpful but can’t be a one-off; we need infrastructure to build an “ethos” of ethics
—> Here we reach for documents, benchmarks to track ethics, appointments, and oversight boards and committees
-
Step into infrastructure slowly, and don’t make it a structure forced upon everyone else
—> A process like this is helpful, but just a start. Needs to be paired with critical mindset, and that takes guts.
-
You need safety to speak out, and I don’t judge people who don’t step up, as a comprehensive sense of safety is rare
—> If you feel comfortable, safe, and respected, you are in a good position to speak up and push for the change we need in the field.
—> We need all the help we can get!
Q&A
1. It seems like many of the new privacy / tracking regulations all come down to just clicking “OK” in a privacy notification on every website we visit. What are your thoughts on that?
—> That’s a fairly accurate statement of the flaws of GDPR, which individualizes a collective problem, as users don’t fully understand the consequences of their consent
—> Changes are afoot, with pending proposals that will make it less about clicking boxes and more of a whole-browser experience
—> Apple is pushing the change, especially against the ad-tracking industry
—> We also need enforcement of existing laws to make it work
2. Have you seen examples of organized “I would not work at your company for these reasons” — where not just employees but potential employees can affect the policies or management?
—> Yes, there is a movement called “Tech Won’t Build It”.
—> There’s also a stigma: Lots of fairly well-sourced rumors that Uber employees who worked at the company when Travis Kalanick ran it have trouble finding work
—> Collective pressure is building in this space, but Tech Won’t Build It is biggest example