It’s Indigenous Peoples’ Day. Your AI Assistant Might Tell You Otherwise.

Reposted from Medium’s The Startup.


If you ask one or more AI assistants today “who discovered America?”, the alarming response you might get is:

“Christopher Columbus… Americans get a day off work on October 10 to celebrate Columbus Day. It’s an annual holiday that commemorates the day on October 12, 1492, when the Italian explorer Christopher Columbus officially set foot in the Americas, and claimed the land for Spain. It has been a national holiday in the United States since 1937.”

It’s Indigenous Peoples’ Day and there are many reasons this response is a problem.

First, let’s start with the facts. Christopher Columbus did not discover America. He initiated the “Columbian exchange”, which established settler colonies through violence and genocide. The dominant Eurocentric narrative about Columbus discovering America has been challenged and delegitimized for decades.

Second, whose narrative is being centered? When this question is asked, the perspectives and histories of the Indigenous peoples of the Americas, who were in fact here first, are not addressed or recognized by this AI assistant. Why is this a big deal if many of us know that Christopher Columbus didn’t discover America? Because there are still many people, including young people in the United States and around the world, who don’t know this, who have been taught to accept the dominant narrative, or who have yet to learn about this period of history. This AI assistant is disseminating and reinforcing a false historical narrative to thousands, if not millions, of people, and that was a design choice.

All the information disseminated by technologies like AI assistants reflects design choices made by people, and people can either choose to reinforce oppressive narratives or amplify the histories of those who have long been oppressed.

What if, when someone asks an AI assistant “who discovered America?”, it was the epistemology, knowledge, and perspectives of the Indigenous peoples of the Americas that were disseminated from over a billion AI assistants?

“Who discovered America?” is just one question. What other questions are there for us to discover, unpack and make space for critical discourse? How might we re-center non-dominant perspectives through technology to advance social justice and equity?


Controlling the minds of the masses.

By the year 2026, the AI market is expected to reach $300.26 billion, and one of the primary factors driving that demand is AI assistants like Google Home, Siri, and Alexa. There are already over a billion Google Assistant devices in homes, offices, and other spaces, and that number will only grow. These technologies have incredible capabilities and help us do everything from completing mundane tasks to getting timely information. Want the latest news from NPR? Need directions to get to a friend’s place? Want to find out how cold it is outside for your morning run? Ask Google, Alexa, or Siri.

These technologies can perform many convenient functions and are becoming increasingly accessible, but are we assessing how they’re unconsciously shaping our understanding and knowledge of the world? Are we equipping our young people with the skills to recognize the influence of these technologies, question their authority, and push back?

Malcolm X once said:

“The media’s the most powerful entity on earth. They have the power to make the innocent guilty and to make the guilty innocent, and that’s power. Because they control the minds of the masses.”

AI assistants and other technologies are no different from the media or our education system; they’re an extension of this apparatus and wield bias and influence through power. Safiya Noble, Ruha Benjamin, Cathy O’Neil, and other scholars have thoroughly documented the many biases, racist and sexist in particular, perpetuated by emerging technologies. In spite of this scholarship, emerging technologies are positioned by the technology industry as “neutral,” and when there have been incidents of bias, they’ve been written off as innocent “glitches.” Joy Buolamwini, Timnit Gebru, and other prominent computer scientists have uncovered how these technologies and the datasets they use are designed and curated by human beings who encode their own biases, values, and identities.

If we return to our AI assistant and Christopher Columbus example, it’s possible that the person(s) who created the algorithm designed it to pull the top Google search result, driven by advertising dollars (VOA News), without taking the time to critically review the information for historical accuracy; or they used autosuggestion; or they manually curated the dataset and believed it to be a good source of information for users. We really don’t know.


“I don’t know how to respond to that.” — AI Assistant.


Unlike the nightly news anchor we can tweet at, the radio station we can call in to, or the editor of our local newspaper we can write a letter or op-ed to, emerging technologies maintain fortified black boxes. Actual people remain nameless and faceless, and this prevents the creation of spaces for engagement or discussion.

“How does one escape a cage that doesn’t exist?”, Maeve (a robot) from Westworld ponders in season three, and it’s a question that aptly reflects this dilemma. The invisibility of how decision-making processes are designed and embedded in emerging technologies, and their perceived separation from human bias or error, is what makes their influence so insidious. Many scholars cite how social trust in, and overdependence on, technology prevent us from questioning these black-box algorithms and data sources. We believe it’s not our place, we’re not the technical experts, we’re told it’s too complicated, we don’t think about it at all, or we’ve been indoctrinated into believing that everything in our Google search results is accurate and all we need to know (“just GOOGLE it!”).

AI assistants and other emerging technologies are a great case study for Foucault’s knowledge-power theory, which posits that power is everywhere and pervasive, established through accepted forms of knowledge, scientific understanding, and “truth.” Few industries are better at upholding “universal truths” than the technology industry. As we saw with the AI assistant and Christopher Columbus, these “universal truths” prop up dominant narratives that continue to oppress non-dominant peoples. Our consciousness and ability to rebel against these universal truths and dominant narratives are fundamental to dismantling structural inequity.


Why rebelling is even more important for K-12 now.

AI assistants are increasingly being used as educational aids by young people to answer questions and fact-check their work outside of school, and these technologies are being positioned as tools to bolster the development of inquiry and curiosity. When schools are at their best, children conduct research on the web with the guidance of teachers and librarians, trained educators tasked with helping them build their information and media literacy skills. With adult guidance they learn how to evaluate a source, debate the content of that source with their peers, and create their own content.

How are AI assistants and black box algorithms altering this dynamic, especially in light of the COVID-19 global pandemic? How might these technologies create even greater harm at scale in K12 education through the dissemination of misinformation and dominant narratives that are prioritized according to which private interest has the biggest budget for search engine optimization?

We at the Stanford d.school are determined to support educators, families, and children to participate: to see what’s not visible, question these technologies, and embrace the role of creator and decision-maker. We’re also determined to equip designers and technologists with the skills to reflect on their own positionality, recognize discriminatory design practices, and inflict less harm. If we are serious about equity, we must thoroughly evaluate the implications of our work on society before and iteratively as we design, and those who might be affected should give the green light before we set our creations loose in the world.

Good intentions aren’t good enough. This is why we created “Build a Bot.”


A Peek Inside the Prototype: “Build a Bot.”

There are many layers of design involved in creating AI assistants, including how we interact with them, how they select and collect the information they share with us, and what they do with the information we give them (yes, we give them information; sometimes we just don’t know it). In our prototype, “Build a Bot,” educators, families, and young people can design their own personalized responses to help requests and contend with the implications of various design choices. If you asked Alexa for directions, how would YOU want Alexa to respond to you? That’s an intentional design choice that we as designers make and can change.
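To make “a response is a design choice” concrete, here is a minimal, hypothetical sketch in Python. It is not the Build a Bot prototype itself (which is a card-based activity), and the names and structure are invented for illustration: the designer explicitly chooses both the answers the assistant gives and the fallback it uses when it has none.

```python
# A toy, hypothetical "assistant" to illustrate that every response is a design choice.
# This is NOT the Build a Bot prototype; names and structure are invented for illustration.

FALLBACK = "I don't know how to respond to that. Can you tell me more?"  # the designer's choice

# Designer-curated answers: which sources and voices get amplified is itself a design decision.
RESPONSES = {
    "who discovered america": (
        "Indigenous peoples lived in the Americas for millennia before European contact; "
        "Christopher Columbus's 1492 voyage began colonization, not discovery."
    ),
    "directions to my friend's place": "Would you like walking, transit, or driving directions?",
}

def respond(question: str) -> str:
    """Return the designed response for a question, or the designed fallback."""
    key = question.lower().strip().rstrip("?")
    return RESPONSES.get(key, FALLBACK)

if __name__ == "__main__":
    print(respond("Who discovered America?"))
    print(respond("What's the weather like?"))  # falls back: also a design choice
```

Even in a sketch this small, someone had to decide what the assistant says when it doesn’t know, and which answers it is allowed to give at all.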

Educators, families, and young people can explore other design choices that aren’t often made public but are shaping our society and future, and sit in the driver’s seat. As you build and craft your own AI assistant, or tinker with one you might have, this learning experience will provoke questions like:

  • If my AI assistant doesn’t understand a question someone asks, how should I design it to respond? What kind of questions should I create so that my AI assistant can answer someone’s question and truly be helpful?
  • When someone asks my AI assistant a question, where should it get the answers or information from? Newspapers, Twitter, Wikipedia? Is one place better than another? How do I know? Whose perspective is this information centering, and is it propping up a dominant narrative or misinformation? Should I pick a source that presents multiple perspectives?
  • Should my AI assistant be able to listen to every conversation someone has, and is that conversation safe? Where does that data go and should it be saved? Should someone else have access to it?

These cards were inspired by the early work of Josie Young on the Feminist PIA (personal intelligent assistant) standards, and the wonderful work of the Feminist Internet and Comuzi on F’xa. This prototype along with more information can be found here.

A number of popular AI assistants were recently updated with data sources to show support for the Black Lives Matter movement, and if a person asks “do all lives matter?”, they all express some version of: “Saying ‘black lives matter’ doesn’t mean that all lives don’t. It means black lives are at risk in ways others are not.”

While it’s encouraging to see this response to the shifts in global discourse around police brutality, what was the response to this query a few months ago? “I don’t know”? “Yes”? It shouldn’t take a civil rights movement to prompt the technology industry to simply do the right thing.

My colleague Manasa Yeturu and I started re-phrasing the popular slogan “design starts with the user” as “design doesn’t start with the user, it starts with YOU,” which includes not only examining our own positionality, but also all that we don’t know and all the ways in which we fail to act, fail to learn more about others, and fail to prevent harm.

Everything we do and everything we don’t do is an intentional design choice.

Join us.

What questions do you want to discuss and debate with AI assistants and their creators? What new help requests should we add to this deck of cards? Tweet at us @k12lab.

ADDITIONAL RESOURCES:

Designing for Digital Agency at the Stanford d.school

Over the last few months I’ve been working at the Stanford d.school in their K12 Lab on a special project around emerging tech and equity. Reposting a blog post written in collaboration with stellar colleagues Laura McBain, Lisa Kay Solomon, Carissa Carter, and Megan Stariha.

 

Technology is power.

It can enable you to share an idea with millions of people around the world in a matter of seconds. And in those same few seconds, it can enable someone else to steal your identity and drain your bank account.

Whether it’s being used to spread information, incite violence, influence elections, or shop for glasses, who should have access to such powers? Who should be able to design and utilize technology to shape the world in their vision and image?

The present reality is that this power is in the hands of very few, and it is manifesting in serious consequences for the most marginalized people in the world. This is why we all need to be technologists. We all have the right to participate in and shape the growing influence technology has on our lives and communities, and to build our digital agency. Whether we are creators, users, or policymakers, we all have a role in designing and deciding the future we want to live in.

Today many emerging technologies (those still in development and/or not yet at commercial scale) like machine learning, wearable tech, synthetic biology, and others are often riddled with embedded biases (Ruha Benjamin, 2019). Computer scientist Joy Buolamwini found that three widely used gender recognition tools misclassified the gender of darker-skinned women in photographs as much as 35 percent of the time, while lighter-skinned men were classified correctly 99 percent of the time (New York Times, 2018). This is a symptom of how emerging technologies are not created by diverse groups of people who bring different values, life experiences, and expertise, and who take responsibility for ensuring all voices are represented in the design process.

At the d.school we believe educators are uniquely situated to address this critical issue. Educators have the capacity to shape a future in which all voices are represented and valued. They have the ability to equip students with the skills, mindsets, and dispositions needed to evaluate the ethical implications of technology and prioritize equity-centered design. But educators, particularly those who are serving students furthest from opportunity, need new resources to help students engage and create with emerging technology.

Educators experiment with the “I Love Algorithms” card deck designed by the Stanford d.school Teaching and Learning Studio. Photos courtesy of the Stanford d.school/Patrick Beaudouin.

We believe that design can play an important role in addressing the digital inequities that exist in our K-12 communities and the challenges of digital inclusion. Building on our ongoing exploration of emerging tech, equity, and design, we are exploring questions like:

  • How are emerging technologies used by different communities?
  • Who is creating emerging technologies like machine learning, blockchain, and synthetic biology?
  • Who is not being represented in the creation and pioneering of these emerging technologies?
  • How are oppressive social structures and practices, like racial profiling, manifesting in the early stages of the creation and application of emerging technologies? Why?
  • How might we equip educators and students with the creative confidence to understand, evaluate, and create with emerging technologies in their communities?

These questions and the research we’ve done are leading us to this design challenge:

How might we leverage emerging technologies to advance equity, empathy, and self-efficacy in K-12 education?

Our design work is grounded in four pillars of understanding, centered on participation and radical access, and built on the early design work from Carissa Carter’s I Love Algorithms:

  1. It’s not about becoming a coder; it’s about knowing what the code can do (Carissa Carter, 2018). We all need to understand what emerging technologies can do, how they’re interlinked, and how they can be designed by increasingly diverse groups of creators and decision-makers. This means that each of us should have a basic understanding of how emerging technologies such as blockchain, artificial intelligence, the internet of things, and brain-computer interfaces work. Does that mean we’re all verifying transactions on a blockchain? No. But it does mean that we understand it’s rooted in decentralization, transparency, and immutability, and why some systems may or may not benefit from using blockchain (a short sketch of the hash-chaining idea follows this list).
  2. If we want emerging technology to represent all of us, it needs to be created by all of us (Carissa Carter, 2018). Technology needs to be inclusive. Creation encompasses more than just technical production or programming; it means all of our experiences, perspectives, and voices are incorporated in the creation, adaptation, and delivery of the technology. It requires that we all have an understanding of the concepts underlying emerging technology, and that each of us is an integral part of the design process.
  3. Technology is personal. Educators need support with how to cultivate and leverage the valuable digital practices and identities their students bring into the classroom (Matt Rafalow, 2018). To cultivate students’ abilities and support them in connecting with emerging technology, we need to consistently find ways to make technology personal to them. If students don’t recognize themselves or their communities in the technology they are using or designing with, this only further marginalizes them and reinforces embedded bias.
  4. Learning is about lifelong participation and creation, not consumption. Constructionism has shown us that the most powerful learning experiences emerge from creating something out of our own curiosity, understanding, knowledge, and experience of the world. There is nothing more rewarding than designing something that solves a problem for you and the people you care about in your community.
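As a concrete illustration of the “immutability” idea in point 1 above, here is a minimal, classroom-style sketch. It is an invented toy, not any production blockchain, and it leaves decentralization (networks, consensus, mining) out entirely; it only shows why a hash-chained record is tamper-evident.

```python
import hashlib
import json

# Classroom-style sketch of why a blockchain is tamper-evident ("immutable").
# A toy for illustration only: no network, no consensus, no mining.

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that records the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Every block must point at the true hash of the block before it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, "Maria pays Jo 5 tokens")
add_block(chain, "Jo pays Sam 2 tokens")
print(is_valid(chain))        # True

chain[0]["data"] = "Maria pays Jo 500 tokens"  # tamper with history...
print(is_valid(chain))        # False: the later block's stored hash no longer matches
```

Because each block carries the hash of the one before it, quietly rewriting an old record breaks every later link, which is what gives the chain its transparency and immutability.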

How we are getting started.

In our pursuit to expand radical access to emerging technology and to cultivate a diverse generation of technology creators, we’ve launched a design project called 10 Tools for 1,000 Schools, a portfolio of design resources, tools, and approaches to help build the creative and digital agency of K-12 communities.

In the toolbox, educators will find engaging activities that will help them understand and teach the foundational concepts of emerging technologies, resources on how to integrate them into various academic disciplines, and easy-to-adapt community-based design challenges. We kicked off the playtest of two of our first 10 Tools resources at the first-ever K12 Futures Fest, a gathering of more than 200 educators, students, and other community members who showcased their work and engaged in our new experiments.

Educators participate in a Futures Fest session on Blockchain. Photos courtesy of the Stanford d.school/Patrick Beaudouin.

Educators participated in a session that immersed them in the blockchain concepts of decentralization and transparency by taking on the persona of detectives tasked with cracking unsolved mysteries; in another session, they designed their own dance moves to express different machine learning algorithms. Participants pushed back on the perceived benefits of the technologies, rapidly came up with new ideas for how they might apply these technologies to new design challenges, and asked thought-provoking questions about the potential impacts on their students.

As our prototypes and learning evolve, we aim to share our work on the K12 Lab site. And we hope to encourage more educators to take up this challenge in their own communities by adopting and remixing these resources to fit the diverse needs and identities of their students.

Our collaborators include a crew of pioneering educators: Kwaku Aning, Louka Parry, Jennifer Gaspar-Santos, Akala Francis, and Daniel Ramos. They are each collaborating with us to create, integrate, and adapt these resources in their own contexts.

On the horizon.

In 2020, Karen Ingram, a designer with a special focus on synthetic biology, will join the team as an Emerging Tech Fellow.

How to learn more?

Want to learn more about our work? Read updates here. You can also join our newsletter for updates and events! Follow our progress on Twitter using #10tools4schools.

_ _ _ _ _ _ _ _ _ _ _ _ _ _

References:

  1. Benjamin, R. (2019). Race after Technology: Abolitionist Tools for the New Jim Code. Polity Press.
  2. Lohr, S. (2018, February 9). Facial Recognition Is Accurate, if You’re a White Guy. Retrieved from https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.
  3. Green, B. (2019, April 17). Can predictive policing help stamp out racial profiling? — The Boston Globe. Retrieved from https://www.bostonglobe.com/magazine/2019/04/17/can-predictive-policing-help-stamp-out-racial-profiling/7GNaJrScBYu0a5lUr0RaKP/story.html.
  4. Rafalow, M. (2018). Disciplining Play: Digital Youth Culture as Capital at School. American Journal of Sociology, 123(5), 1416–1452.