Given our long history with tools, the idea that we inject bias into technology isn’t exactly new. What has changed is the way machine learning introduces subtle new forms of technology bias.
Technology Bias: the embedding of a particular tendency, trend, inclination, feeling, or opinion into technological systems
1) Designers and Technology Bias
The most obvious way we bias our tools is through the assumptions we bring to the design process. Sometimes those assumptions are deliberate, but more often than not, they are unconscious.
All design decisions are judgments, and as such, convey some form of bias. We often just don’t notice it—especially if a particular tool has been with us for a while.
2) End Users and Technology Bias
The way we like and share stuff on social media streams, for example, doesn’t just shape our own experience. It also influences what happens to our friends on these networks. Your bias for cute kittens, clever memes, and birthday messages increases the likelihood that I’ll see that stuff in my stream. Our interactions with one another cause the network to become our bias.
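To make that mechanism concrete, here’s a minimal sketch of engagement-weighted ranking. The function names, data shapes, and weighting are my own illustration, not any real platform’s algorithm:

```python
# A minimal, hypothetical sketch of how friends' engagement
# can become the ranking signal in your own stream.
from collections import Counter

def rank_feed(posts, friend_likes):
    """Order posts by how often friends engaged with each topic.

    posts        -- list of (post_id, topic) tuples
    friend_likes -- list of topics your friends liked
    """
    topic_weight = Counter(friend_likes)  # friends' bias becomes the weight
    # Posts on topics your friends favor float to the top of your stream.
    return sorted(posts, key=lambda p: topic_weight[p[1]], reverse=True)

posts = [("p1", "kittens"), ("p2", "politics"), ("p3", "memes")]
print(rank_feed(posts, ["kittens", "kittens", "memes"]))
# -> kittens first, memes second: their bias now shapes what you see
```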
The strange thing about end-user bias is that radically different kinds of bias can coexist on the same platform. Clusters of hatred and bigotry can thrive right beside communities of love and inspiration. Our engagement fragments us into echo chambers of shared bias.
3) Algorithm Trainers and Technology Bias
Machine learning algorithms learn by interacting with humans, often via services like Google Search and Facebook. That means we humans are training the artificial intelligence that fuels our intelligent devices. It also means that human trainers play an important role in determining the values, and the biases, of these systems.
Training bias is a serious concern. In machine learning, selecting human trainers is a core part of the design process. As more of these systems come online, learning and growing through their interactions with us, we must guard against imbuing them with harmful human bias.
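As a rough illustration of why trainer selection matters, consider the common step of resolving labeler disagreement by majority vote. The trainer pools and labels below are hypothetical; the point is that whoever we pick becomes the ground truth the system learns:

```python
# A hypothetical sketch: the makeup of the trainer pool
# determines the "ground truth" labels a model is trained on.
from collections import Counter

def majority_label(labels):
    """Resolve labeler disagreement by majority vote -- a common aggregation step."""
    return Counter(labels).most_common(1)[0][0]

# Three trainers drawn from one community vs. a mixed pool,
# labeling the same piece of content:
homogeneous_pool = ["toxic", "toxic", "toxic"]
mixed_pool = ["toxic", "ok", "ok"]

print(majority_label(homogeneous_pool))  # -> "toxic": the pool's bias becomes ground truth
print(majority_label(mixed_pool))        # -> "ok": a different pool, a different "truth"
```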
Designing Containers of Culture
Artificial intelligence also acts as a container for human culture. As intelligent systems control more and more aspects of society and our economy, it’s essential that we learn to identify and isolate harmful bias as a proactive part of the design process for any intelligent system. Doing so won’t just weaken the grip of frail human egos. It will strengthen the better angels of our culture.
[Image: Silent house party by Imokurnotok, CC BY-SA 3.0]
One person likes a boombox. Others like iPhones. Where is the bias? Is it “bias” that other people are hearing one person’s music? How?
How do you propose to “correct” the opinions of humans without introducing your own bias?
The bias in that example is really just a designer responding to customer preferences. It’s kind of a silly example, actually. I’m using the boombox, something I hadn’t seen anyone using on the street in a very long time, to show that most designers no longer build that product with that use in mind, though they might have a couple of decades ago. It’s a design assumption, or bias, that isn’t consciously stated anywhere. It just happens.
As for correcting people’s opinions, that’s not what this is about. It’s about building systems with a conscious eye toward the fact that bias is going to creep in through a variety of channels, and about building options into our services that give people a way of removing those filter biases, if they so choose.
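A hypothetical sketch of what such an opt-out could look like, with made-up field names standing in for whatever signals a real service uses:

```python
# A hypothetical sketch of a user-facing opt-out: fall back to a plain
# chronological feed, with no learned engagement model shaping what surfaces.
def build_feed(posts, personalized=True):
    """posts: list of dicts with 'timestamp' and 'relevance' keys."""
    # personalized=False is the opt-out: order by time, not learned relevance.
    key = "relevance" if personalized else "timestamp"
    return sorted(posts, key=lambda p: p[key], reverse=True)

posts = [
    {"id": "p1", "timestamp": 100, "relevance": 0.9},
    {"id": "p2", "timestamp": 200, "relevance": 0.2},
]
print([p["id"] for p in build_feed(posts)])                      # -> ['p1', 'p2']
print([p["id"] for p in build_feed(posts, personalized=False)])  # -> ['p2', 'p1']
```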
Great article! It got me thinking about how morality will play into the ethics of designing and training machine intelligence. Whose morality is the “correct” morality? Will artificial intelligence be used for propaganda, or to covertly quell an uprising?
Thanks, Freya. Good questions. The sad thing is that we’re already seeing it used for propaganda in Cambridge Analytica’s work for the Trump campaign (and others). Quelling dissent is even scarier.