by Marsha Rakestraw
Technology is not neutral.
The algorithms behind our feeds, apps, and search results limit and shape what we see, how we spend our time, and what we think about.
Those algorithms can even exhibit racism and similar kinds of bias.
It’s easy for us to forget that behind those ones and zeros are fallible human beings, with specific worldviews and personal beliefs and experiences that influence their work, and thus, our lives.
Especially now that younger generations can’t live without their digital connections, it’s vital that they’re taught to think about how technology can influence our views and experiences.
Here are 6 short videos (each 13 minutes or less) you can use to jump-start discussion with your students about bias and influence in our technology.
1. “How a Handful of Tech Companies Control Billions of Minds Every Day.” (2017) (13 min)
Tristan Harris, a former Google design ethicist, talks about how technology companies “steer what a billion people are thinking today” and some of the things they do to ensure they get (and keep) our attention.
2. “Beware Online Filter Bubbles.” (2011) (9:04 min)
Eli Pariser talks about an invisible shift that’s happening online that can hinder our pursuit of democracy and meaningful citizenship: invisible algorithmic editing of the web. Pariser notes that algorithms and filters on platforms such as Facebook, search engines like Google, and news sites like Yahoo! influence what we see, without us knowing it.
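The "invisible algorithmic editing" Pariser describes can be sketched in a few lines of code. This is a deliberately tiny, hypothetical feed ranker, not any platform's real algorithm: it boosts stories whose topics match what a user has clicked before, so the feed quietly narrows toward what the user already reads.

```python
from collections import Counter

# Toy corpus of stories, each tagged with a topic (illustrative only).
stories = [
    ("Local election results", "politics"),
    ("New climate report", "science"),
    ("Celebrity gossip roundup", "entertainment"),
    ("Stadium opens downtown", "sports"),
    ("Vaccine trial update", "science"),
]

def rank_feed(stories, click_history):
    """Rank stories by how often the user clicked that topic before.

    This is the core of a filter bubble: past clicks boost similar
    content, so the feed drifts toward what the user already sees.
    """
    topic_counts = Counter(topic for _, topic in click_history)
    return sorted(stories, key=lambda s: topic_counts[s[1]], reverse=True)

# A user who has only ever clicked science stories...
history = [("Old physics story", "science"), ("Mars rover news", "science")]
feed = rank_feed(stories, history)
# ...now sees science first; the other topics sink without the user
# ever having chosen to hide them.
print([title for title, _ in feed])
```

Nothing here is malicious; the narrowing is just an emergent property of optimizing for clicks, which is Pariser's point.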
3. “The Moral Bias Behind Your Search Results.” (2015) (9:18 min)
Andreas Ekström highlights the fact that “behind every algorithm is a set of personal beliefs that no code can ever completely eradicate” and reminds us that there are very few “unbiased, clean” search results.
4. “How Emerging Technology Can Promote Racism More Than Ever.” (2017) (1:51 min)
Via Salon, Sara Wachter-Boettcher talks about how racism and other kinds of bias can arise with new technologies.
5. “Can Tech Be Biased?” (2017) (4:58 min)
CNN Money looks at how technology (code) can be biased, even prejudiced. The piece includes the work of Joy Buolamwini, who explores the problem of “algorithmic bias” and discrimination.
6. “Machine Learning and Human Bias.” (2017) (2:33 min)
Google offers a brief overview of how machine learning works and how human bias contributes to machine bias.
Here are a few questions to help spark discussion:
- What is my own worldview, and how does that influence my assumptions about others?
- How is my worldview influenced by the media and technology I consume/use?
- Why is it important that we’re aware of our own worldviews and biases, and how can we use that knowledge to help us be better citizens of the world?
- How often do I use technology without thinking about what I’m seeing/doing?
- When/where have I noticed bias or influence in technology?
- What might my own filter bubble look like? How can I expand my filter bubble?
- How can we reduce racism and other biases in technology?
- How accountable should technology companies be for their bias/influence, and how can we as citizens help hold them accountable?
- How can we reduce biases in ourselves/others and increase critical thinking skills about the technology we use?