
How Artificial Intelligence Is Affecting Society Today

AI biases have bigger implications than we thought.

By Veena Ramachandran

Human bias is seen everywhere in society. Resumes with names typically associated with African American or Hispanic candidates often receive fewer callbacks than those with names associated with Caucasian candidates. Women, on average, earn less than men for doing the same job. Law enforcement practices such as stop-and-frisk are often criticized for targeting people of color. Clinical trials have often underrepresented certain demographics, leading to treatments that may not be as safe or effective for everyone. But how can AI, something with no feelings or opinions, have a bias?


To answer this question, we need to understand some basics about AI. Artificial intelligence models are trained on millions of pieces of data. When a model is “trained,” it is fed a dataset that should be diverse and relevant to the task the model has to perform. For example, if we’re building a model whose purpose is to predict an audience’s reaction to a new movie, we’ll probably feed it movie reviews, and the model will then predict a reaction based on reviews of similar movies from the past. This is where the concept of bias comes in. If the majority of our data comes from a group of sci-fi fanatics, the model will likely predict a glowing reception for a movie like Ex Machina, but it will probably predict a poor one for a rom-com like 13 Going on 30. (A short code sketch later in this section makes this effect concrete.) Although our movie review model probably wouldn’t do any significant damage to people’s lives, there are models that can.

Take Myanmar as an example. In the early 2010s, Facebook became widely accessible to the citizens of Myanmar. The platform quickly gained popularity and became the primary source of news and information for many of them. Although this seems harmless, Facebook had far bigger implications in Myanmar. Rohingya Muslims, an ethnic group in Myanmar,

“have been persecuted by Myanmar’s Buddhist majority for decades, but Facebook exacerbated this situation”

states Amnesty International, a human rights organization. Amnesty International reports that Myanmar’s armed forces spread propaganda and hate speech against the Rohingya on Facebook, and that the platform’s recommendation algorithms amplified that content. Because Facebook was a primary source of news for Myanmar’s citizens, many believed the propaganda and supported the military as it committed mass killings, torture, and arson against Rohingya Muslims. A seemingly small flaw in Facebook’s algorithms became a contributing factor to the Rohingya crisis.
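
To make the earlier point about skewed data concrete, here is a toy sketch of the movie-review idea using scikit-learn. Every review, label, and movie prompt below is invented for illustration; a real system would train on far more data, but the skew works the same way.

```python
# A toy sentiment model trained on a skewed set of reviews.
# All reviews and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data drawn almost entirely from sci-fi fans:
# the sci-fi reviews are glowing, the few rom-com reviews are dismissive.
reviews = [
    "brilliant sci-fi, the androids felt real",   # positive
    "stunning sci-fi visuals and a smart plot",   # positive
    "another thoughtful sci-fi masterpiece",      # positive
    "clever sci-fi twist, loved the ending",      # positive
    "boring rom-com, predictable love story",     # negative
    "sappy rom-com, not my thing",                # negative
]
labels = [1, 1, 1, 1, 0, 0]  # 1 = positive, 0 = negative

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)

# The model has largely learned "sci-fi = good, rom-com = bad",
# so a perfectly pleasant rom-com gets a low positive probability.
for review in [
    "ambitious sci-fi about artificial intelligence",
    "charming rom-com with a sweet love story",
]:
    prob_positive = model.predict_proba([review])[0][1]
    print(f"{review!r}: P(positive) = {prob_positive:.2f}")
```

Because the training set pairs “sci-fi” with praise and “rom-com” with complaints, the model learns the genre words themselves as signals, which is exactly the kind of bias described above.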


So what has to change? If we go back to our movie review model, a glaring problem is our dataset. By only including reviews from people who love sci-fi movies, we’ve ignored an entire population of other moviegoers. We can diversify our dataset by gathering reviews of all types of movies, from romances to heist films, which will lead to a fairer result when we ask our model to predict a reaction to a given movie. Another step AI developers can take is to build diverse teams of experts who bring different perspectives to the development process; a diverse team is more likely to identify and address biases effectively. There are many other ways to reduce bias in AI, such as collecting user feedback and conducting regular assessments of a model’s performance, and we need to put them into practice.
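
As a sketch of the first fix, the snippet below retrains the same kind of toy model on a broader, more balanced set of reviews; again, every review and label is invented. The other fixes mentioned above, such as diverse teams, user feedback, and regular audits, are process changes rather than code.

```python
# The same toy setup, but with several genres and both kinds of
# opinions represented. All reviews and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews_and_labels = [
    ("brilliant sci-fi, smart and gripping", 1),
    ("flat sci-fi, boring and confusing", 0),
    ("charming rom-com, sweet and funny", 1),
    ("dull rom-com, boring and predictable", 0),
    ("gripping heist movie, smart and tense", 1),
    ("dull heist movie, confusing and predictable", 0),
]
texts = [text for text, _ in reviews_and_labels]
labels = [label for _, label in reviews_and_labels]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# With the genre words appearing in both good and bad reviews, the model
# has to rely on the opinion words instead, so a rom-com is no longer
# penalized simply for being a rom-com.
prob_positive = model.predict_proba(["a sweet and funny rom-com"])[0][1]
print(f"P(positive) = {prob_positive:.2f}")
```

Balancing the data does not guarantee fairness on its own, which is why the feedback and regular assessments mentioned above still matter.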



Sources:

  • https://www.sciencedaily.com/releases/2023/11/231102135100.htm

  • https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

  • https://time.com/6217730/myanmar-meta-rohingya-facebook/
