Tinkering with Inequity in Emerging Tech: Week One

The d.school was bustling last week with the launch of the Fall academic quarter, and it felt good to get back to it!

This quarter I’m teaching an IntroSem, Tinkering with Inequity in Emerging Tech, for Stanford undergrads. A short snippet from the course description:

“Throughout history, innovations in science and technology, while bold and visionary, have often resulted in catastrophic consequences for Indigenous and Black communities, immigrants and the natural world…What have we failed to learn from past historical events that we could learn today? This course is grounded in applying a socio-historical lens to the current design of emerging technologies, in order to better evaluate the implications for various communities and the environment.”

Week one was hectic, and it was also wonderful to finally meet my students. They’re a diverse, curious, creative bunch, and we’re slowly building community together. This week we dug into readings from Design Justice by Sasha Costanza-Chock and Race After Technology by Ruha Benjamin. Students shared really wonderful reflections and questions like…

“All objects have politics, and so, when designing anything, it is imperative to have these axes of inequality at the forefront of the design process.”

“I think that this idea should be at the center of the conversation on design: while many may assume a hands-off approach when developing technology, it’s important to look at the potential long-term consequences, and not just the immediate affordances, of a product. Regardless of whether a piece of technology is intentionally discriminatory, its failure to dismantle societal barriers for marginalized groups within its design can still be considered a design flaw.”

“In the discussion of what technologies are necessary, we must discuss not only whether something is profitable, but whether it is a positive contribution to the progression of social efforts and equity. Maybe need finding should be reframed to not only look for what some might need, but also to look through a societal lens to see what is needed AND beneficial.”

Big themes and ideas for week one fell along a few intersecting axes:

#1: Technological solutionism (“we can use technology to solve every problem”) and technological determinism (“the machines are coming and will replace us”) both rob us of human agency and absolve us of our responsibility to design equitable technologies. These narratives are distractions that ultimately serve a few actors.

#2: Value-neutral technology is a fallacy. In many examples from our readings and real-world stories, we saw that values, perspectives, biases, interests, and desires are embedded in our technologies, and that they can reflect those of some groups of people over others. We also reflected on the Matrix of Domination as a framework for examining how these embedded values intersect with systems of power and oppression.

#3: Because no technology is “neutral”, design is political and can be hostile. In Race After Technology, Ruha Benjamin gives a striking example: the overpasses on the Southern State Parkway, commissioned by Robert Moses, were engineered so low that buses (carrying youth from lower socioeconomic backgrounds) could not pass under them to reach the beach. There is some dispute over why the overpasses were built so low, but the intent matters less than the outcome: when equitable outcomes are not prioritized, particularly for historically oppressed communities, technology simply reproduces existing inequities.

We took these themes into our design work for the day, a historical excavation centered on uncovering trends across science and technology disasters from the recent and distant past. This included Cambridge Analytica, the Chernobyl nuclear disaster, the Rohingya genocide and Facebook, the NSA Surveillance program, and a few other events.

We explored questions like:

  • Who benefitted?
  • Who was harmed?
  • What were the implications or consequences for different communities?
  • What inequities did this amplify or reinforce?

Some of the common trends we saw after synthesis were:

  1. Those with little decision-making power were consistently harmed: workers, non-dominant groups (however that might be defined within a given context), women, and others.
    • Reflection: How do we center those who have been historically harmed the most, so that we can prevent this continuous cycle? How can we lean more into universal design and other justice-based design frameworks?
  2. Sacrificing safety and fairness for efficiency and optimization was, and still is, the status quo.
    • Reflection: More legal frameworks and policies need to be put in place to protect people and the natural world, even at the cost of efficiency and optimization.
  3. Business models are built on the exploitation of people’s data, and there is little incentive to conduct business otherwise.
    • Reflection: How do we completely dismantle the data economy model?

In week two, we’ll further explore these themes through algorithmic decision-making.
