Why Algorithmic Fairness is Elusive

In 2015, Google Photos classified a picture of two African-Americans as “gorillas.” Years later, Google had yet to do more than remove the word “gorillas” from its set of classification labels. In 2016, Amazon was shown to be disproportionately offering same-day delivery to European-American consumers. In Florida, algorithms used to inform detention and parole decisions on the basis of predicted risk of recidivism were shown to distribute their errors unevenly: African-Americans who would not go on to re-offend were more likely than European-Americans to be incorrectly recommended for detention. And when translating from a language with gender-neutral pronouns into a language with gendered pronouns, Google Translate’s neural network injects gender stereotypes into its output: a pronoun becomes “he” when paired with “doctor” (or “boss,” “financier,” etc.) but “she” when paired with “nurse” (or “homemaker,” or “nanny,” etc.).
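
The Florida example turns on a particular kind of statistic: an error rate computed separately for each group rather than for the population as a whole. As a rough illustration only (a minimal sketch in Python with pandas, using small fabricated numbers and invented column names, not the actual recidivism data or the analysis referenced above), the following computes overall accuracy and the false positive rate for two hypothetical groups:

```python
import pandas as pd

# Illustrative, fabricated data -- NOT the Florida recidivism dataset.
# Each row: a defendant's group, the model's high-risk flag, and whether
# the person actually re-offended during the follow-up period.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted_high_risk": [ 1,   1,   0,   1,   0,   1,   0,   0 ],
    "reoffended":          [ 0,   1,   0,   1,   0,   1,   0,   1 ],
})

def false_positive_rate(sub: pd.DataFrame) -> float:
    """Share of people who did NOT re-offend but were still flagged high risk."""
    non_reoffenders = sub[sub["reoffended"] == 0]
    return non_reoffenders["predicted_high_risk"].mean()

def accuracy(sub: pd.DataFrame) -> float:
    """Share of predictions that matched the observed outcome."""
    return (sub["predicted_high_risk"] == sub["reoffended"]).mean()

# A classifier can be equally "accurate" for both groups while its
# mistakes fall far more heavily on one of them.
for group, sub in df.groupby("group"):
    print(f"group {group}: accuracy={accuracy(sub):.2f}, "
          f"false positive rate={false_positive_rate(sub):.2f}")
```

With these toy numbers, both groups see identical overall accuracy (0.75), yet every non-re-offender who is wrongly flagged belongs to group A; a disparity of exactly this shape can hide entirely behind an aggregate error rate.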