5 Examples of Machine Bias and Flawed Algorithms

In light of the recent discussions around platforms and algorithms, such as Tumblr’s broken adult content filter, I would like to list a few examples of machine bias. At the moment I am working on a course on algorithms and narrow AI for Creative Business, so I have stumbled upon lots of interesting cases that you might like to hear about.

What most of these cases will tell you is that data is not neutral, and that algorithms often mirror a reality built on centuries of bias. In other words, it’s usually not the machines that are biased, but the data, which is socially constructed, or the humans in charge.

1. Tumblr banning #gay in 2013

Tumblr has made the news with its attempts to filter adult content for many years. In 2013, search terms like #gay and #bisexual were blocked. The company announced: ‘The solution is more intelligent filtering which our team is working diligently on. We’ll get there soon. In the meantime, you can browse #lgbtq — which is moderated by our community editors — in all of Tumblr’s mobile apps. You can also see unfiltered search results on tumblr.com using your mobile web browser.’

Obviously, sexual minorities were again at the losing end, and Tumblr reflected societal biases in a feeble attempt to stop porn.
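To make the mechanism concrete, here is a toy sketch of how a naive tag blocklist produces exactly this kind of over-blocking. The blocklist and matching logic are my own invention for illustration, not Tumblr’s actual system.

```python
# Hypothetical blocklist in the spirit of Tumblr's 2013 filter -- not the real list.
BLOCKED_TAGS = {"porn", "nsfw", "gay", "bisexual"}

def is_searchable(tag):
    """Return False for tags on the blocklist."""
    return tag.lower().lstrip("#") not in BLOCKED_TAGS

for tag in ["#gay", "#bisexual", "#lgbtq", "#baking"]:
    print(tag, "->", "searchable" if is_searchable(tag) else "blocked")

# #gay and #bisexual end up blocked alongside actual adult tags, while #lgbtq
# passes: the filter encodes a social judgment, not a porn detector.
```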

2. Racist soap dispenser

This example went viral, and rightly so. As IFLScience writes, the problem is most likely related to sensors and design bias: ‘The no-touch soap dispenser most likely uses some kind of light sensor to detect when a hand is beneath the contraption. Apparently, a dark-skinned hand wasn’t light enough to register on the sensor. This simple problem would have been avoided if it had been tested on a variety of skin tones. That, of course, requires people working in the industry from a variety of backgrounds.’

Diversity in technology is a must, but we need to overcome our design biases first.
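To see how this failure mode works in code, here is a minimal sketch of a reflectance-threshold sensor of the kind IFLScience describes. All numbers are invented for illustration; the point is that a cutoff calibrated on a narrow sample of skin tones silently excludes everyone outside it.

```python
# Toy model of the light sensor described above. All values are made up.
DETECTION_THRESHOLD = 0.5  # hypothetically tuned on light-skinned testers only

def hand_detected(reflected_light):
    """Dispense soap when enough emitted light bounces back to the sensor."""
    return reflected_light >= DETECTION_THRESHOLD

# Darker skin reflects less of the emitted light and falls under the cutoff.
for skin_tone, reflectance in [("light", 0.80), ("medium", 0.55), ("dark", 0.30)]:
    status = "soap dispensed" if hand_detected(reflectance) else "nothing happens"
    print(skin_tone, "->", status)
```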

3. White House on Maps 


I was a bit shocked to hear about this one. Googling for “N—– house” led to the White House on Google Maps. While this was most likely a joke, and obviously the result of user tagging, it’s surprising to see what users can get away with before the Google moderators catch them. One can wonder: will automatic filtering solve these problems or amplify them?

4. Breastfeeding on Facebook Controversy


Tumblr may have been critiqued for its long user guidelines and policies around “male-presenting nipples”, but nipples have been a contentious issue on platforms for a while. They are easy to recognize with image-recognition software, but the cultural context is always difficult: is a nipple sexual, educational, critical, gendered?

Here’s Facebook’s take on things as stated in Wired: ‘We also restrict some images of female breasts if they include the nipple, but we always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring.’
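As a thought experiment, Facebook’s stated rule can be written as a policy layer on top of a hypothetical image classifier. The label names and the classifier itself are invented; only the allow/restrict logic comes from the quoted policy.

```python
def allowed(labels):
    """Apply the stated exceptions before the blanket nipple restriction."""
    if "breastfeeding" in labels or "mastectomy_scarring" in labels:
        return True   # explicitly allowed in the quoted policy
    if "female_nipple" in labels:
        return False  # otherwise restricted
    return True

# The labels would come from some image classifier; these calls are illustrative.
print(allowed({"female_nipple", "breastfeeding"}))  # True
print(allowed({"female_nipple"}))                   # False
# What no label set captures: whether an image is sexual, educational, or critical.
```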

Custodians of the Internet has a powerful, in-depth chapter on this controversy, focusing also on how users took matters into their own hands, and how guidelines were eventually tweaked. The author, Tarleton Gillespie, concludes: ‘Disputing the rules set by platforms like Facebook provides an opening for political action online and in real spaces. And in the eyes of those newly motivated by this perceived injustice, the shift in Facebook’s policy was a victory in the broader dispute’.

5. Gendered Tools like Alexa


When Alexa was first released, she was very obedient. If you called her a b-tch, she’d respond: “Well, thanks for the feedback.” This changed in an update, and now she says things like: “I’m not going to respond to that.” Alexa will now even state, when asked, that she is a feminist.

Tools like Siri, Alexa and Google Home (once dubbed Holly) are heavily gendered and stereotyped technological interfaces. AI is portrayed as a seemingly obedient woman who caters to our needs, designed by an industry that remains predominantly male. Gendered virtual assistants are not a historical given, but they have become a common trope in both pop culture and actual technology.

The Atlantic also devoted an article to explaining why this is problematic, and why virtual assistants should reflect our reality. It concludes that presenting Alexa as a feminist is actually a setback: ‘When Amazon enjoys effusive praise for a version of feminism that amounts to koans and cold shoulders, then it can use that platform to justify ignoring the broader structural sexism of the Echo devices—software, made a woman, made a servant, and doomed to fail.’

Can we construct devices in an inclusive, less gendered way? This question is important, and it’s one that I also explore in an upcoming article in Image, a German media journal. Spoiler: AI-driven characters and interfaces are not detached from their cultures; they reflect the societies in which they are created.

We need to do better than this.
