
Tackling Deep-Rooted Bias in AI To Shape a Fairer Digital Future

Since ChatGPT's launch and subsequent viral success, AI has become a hot topic. While many are excited about its potential, it's crucial not to overlook the pitfalls. What if AI amplifies human biases instead of eliminating them?
Updated April 20, 2023

Ever since ChatGPT launched and went viral, AI has been all over the news. And while most tech-savvy people are amazed and can’t wait to test out the new possibilities, its downsides shouldn’t be ignored.

What if instead of being better than humans and eradicating our mistakes, AI adopts biases and starts discriminating against certain groups of people?

Unfortunately, stereotypes and prejudice are still very much present in machine learning, and they are harmful: they carry a long list of tragic consequences, and those who cause them are often not held accountable. So how do we erase these biases and genuinely pave the way for equality? It sounds almost too good to be true, given that AI learns these very patterns from the humans who build and train it.

What Does AI Bias Look Like?

Let’s go over a few examples of AI seeing people the way a human might - with all the prejudice and pigeonholing that entails.

1) The Dutch tax authorities started using a self-learning algorithm to reduce social security fraud. Specifically, it was supposed to flag cases in which the risk of childcare benefits fraud was very high. What originally seemed like a reasonable plan ended with the algorithm adopting a racist bias.

Factors like non-Dutch nationality increased a person’s calculated fraud risk - effectively lumping everyone under one label and linking nationality to a higher likelihood of criminality.

As a consequence, the individuals the algorithm flagged had to deal with bureaucratic and legal trouble and were even ordered to pay back childcare allowances - sums that could reach tens of thousands of euros - often leaving them in severe debt and upending their private lives, all because an algorithm said so.

2) A large number of U.S. hospitals and insurance companies use an algorithm to identify patients for “high-risk care management” programs, which make them eligible for special care and extra medical attention.

To determine who qualifies, the algorithm used patients’ previous health care spending as a factor. Looking at the numbers, even though black and white patients often had similar spending, black patients rarely qualified for this special care program.

The study found that black and white patients with the same spending had different health care requirements, with black patients often requiring more active interventions. Despite having more chronic illnesses, black patients received lower risk scores, leading to the program overlooking them for high-risk care management.

This issue is exacerbated by the fact that race and income are correlated, and poorer patients, who are disproportionately people of color, tend to use medical services less frequently or have reduced access to them.

Implicit racial bias also contributes to disparities in health care, with black patients receiving lower-quality care and having less trust in doctors, leading to reduced health care utilization. 

Thus, using health care spending as a proxy for medical needs inadvertently creates a biased system that fails to account for the unique challenges faced by different racial and socioeconomic groups.

3) Arrested, humiliated, and innocent: Robert Williams was wrongfully locked up after facial recognition software falsely identified him as a man wanted for a robbery. The technology was unable to differentiate between two men of the same race and similar build. Although the charges were dropped, his DNA samples, fingerprints, and mugshot remain in the system.

These incidents highlight that technology does not solely operate on impartial data; rather, it inadvertently absorbs the biases of those who develop and train it. Consequently, the algorithms can perpetuate systemic prejudices, reinforcing societal inequalities and perpetrating injustices against marginalized communities.

How Can AI Possess Bias?

Automation streamlines processes, saves time, and reduces the need for human labor, ideally leading to improved efficiency and fewer human errors. When functioning optimally, technologies like facial recognition can even help solve and prevent crimes. As a result, numerous companies and developers strive to make their algorithms and AI systems as intelligent as possible to save time and resources. But how does an algorithm learn to identify patterns and filter information? The answer lies in training. Similar to humans, AI systems are fed data to learn and draw conclusions based on that information.
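To make that concrete, here is a minimal sketch of supervised training in Python with scikit-learn. The feature names and records are invented for illustration; the point is simply that the model infers its rules entirely from the examples it is given.

```python
# Minimal sketch of supervised learning: the model infers its rules
# purely from the examples we feed it (illustrative data only).
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [income, years_at_address]
# and whether a past application was approved (1) or denied (0).
X_train = [[52_000, 4], [18_000, 1], [75_000, 9], [23_000, 2]]
y_train = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # "training" = fitting patterns in past data

# The model now reproduces whatever pattern the history contained,
# including any discrimination baked into those past decisions.
print(model.predict([[30_000, 3]]))
```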

One contributing factor to AI bias is the bias inherent in the training data, which mirrors the stereotypes and prejudices of the world around us. These biases appear in many corners of everyday life, from Google search results to statistical datasets. In one study, researchers found that more than a third of the data they analyzed exhibited bias. Consequently, individuals from different religious, gender, racial, or occupational backgrounds may experience disparate treatment, either positively or negatively.
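One simple way to surface such bias is to compare outcomes across groups directly in the data. Below is a rough sketch of such an audit in Python with pandas; the records and column names are made up for illustration, and the check shown is a basic comparison of favourable-outcome rates per group.

```python
# A basic fairness audit: compare favourable-outcome rates per group.
# Data and column names are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1],
})

rates = records.groupby("group")["approved"].mean()
print(rates)
# group
# A    0.666667
# B    0.333333
# A large gap like this is a red flag worth investigating:
# the disparity may come from the data itself, not the algorithm.
```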

Another reason for the imperfections in AI algorithms is the lack of diversity within the developer and AI professional community. Diverse teams typically produce more nuanced and comprehensive results that better represent the complexities of our modern, globalized world. By fostering diversity in the field, we can work towards developing AI systems that mitigate, rather than perpetuate, existing biases and inequalities.

Consequences of Biased AI

Biased AI doesn't just hurt its victims; it also creates risks for anyone considering self-learning algorithms. Business leaders know that adopting new technology carries risks.

They worry about losing the trust of customers and employees, bad publicity, legal problems, and ethical issues. These worries make sense: lost trust can mean lost revenue and more legal trouble. So it's important for businesses to fix AI biases to keep a good reputation and avoid problems.

[Graph: the impact of data bias on businesses - 62% report lost revenue and customers]

So What Can Be Done?

While the initial sections of this post painted a somewhat bleak picture, it is important to recognize that these issues can be addressed. Ideally, eradicating prejudice from society and human consciousness would be the ultimate solution from an ethical standpoint. However, this is a long-term goal and not the primary focus of this blog. Fortunately, AI can be improved in the meantime if the quality of training data is enhanced.

One suggested approach is to educate data scientists about the challenges associated with their work and how to maintain ethical awareness while handling databases. This can involve removing specific categories from their data, such as race or gender, to prevent the introduction of biases in the algorithm. By taking these steps, data scientists can contribute to the development of more equitable AI systems.
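As a rough sketch of that last step, dropping protected attributes from a dataset can look like the snippet below (Python with pandas; the file and column names are hypothetical). Note that this alone is rarely enough, since remaining columns such as postal code can act as proxies for the removed attributes.

```python
# Sketch: remove protected attributes before training a model.
# File name and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("applications.csv")  # hypothetical input data

PROTECTED = ["race", "gender", "nationality"]
features = df.drop(columns=PROTECTED, errors="ignore")

# `features` is what would be handed to model training. Caveat:
# remaining columns (e.g. postal code) can still correlate with the
# dropped attributes, so bias may re-enter through such proxies -
# outcome audits like the earlier sketch remain necessary.
print(features.columns.tolist())
```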

Additionally, promoting diversity within AI development teams can lead to a broader range of perspectives and help identify potential biases that may be overlooked by a more homogeneous group. Encouraging collaboration among professionals with different backgrounds and experiences can foster a more comprehensive understanding of potential biases and facilitate the creation of AI systems that better represent the complexities of our diverse, global society.

[Graph, McKinsey & Company: why minimizing bias is necessary for artificial intelligence to reach its potential]

The EU AI Act

The European Union is finalizing the AI Act, a pioneering law on artificial intelligence (AI) that categorizes AI applications into three risk levels. This groundbreaking legislation has the potential to influence AI regulations worldwide, much like the EU's General Data Protection Regulation (GDPR) did in 2018. AI impacts numerous aspects of daily life, including online content recommendations, facial recognition, and medical diagnoses.

The AI Act assigns AI applications to three risk categories. The first category bans systems that pose an unacceptable risk, such as government-run social scoring systems. The second category, high-risk applications, includes tools like CV scanners that rank job applicants and mandates specific legal requirements for these applications. The third category leaves applications not explicitly banned or listed as high-risk largely unregulated.

While the EU AI Act is a significant step forward, it has its limitations and loopholes. For instance, police facial recognition is prohibited except in cases where images are captured with a delay or used to locate missing children. Additionally, the law lacks flexibility to respond to emerging threats, as it does not provide a mechanism for labeling new, dangerous AI applications as "high-risk" in the future.

To further enhance the effectiveness of the AI Act, lawmakers could work on closing these loopholes, refining exceptions, and increasing the law's adaptability to accommodate unforeseen risks. The EU AI Act could become a global standard for AI regulation, ensuring that AI is a force for good in people's lives, regardless of their location.

This legislation may inspire other countries to adopt similar frameworks, as evidenced by Brazil's recent move to create a legal framework for AI, which is currently awaiting Senate approval.

Final Thoughts and Outlook

AI offers amazing chances for progress and new ideas we couldn't imagine before. But we must remember that we're still learning about AI. Until we're sure it's better than humans, we should be careful using AI in important situations.

As we learn more about AI, it's important to talk openly and listen to different viewpoints. By fixing biases in AI and focusing on ethical development, we can create AI that helps everyone. People need to be involved and aware so that developers and lawmakers make sure AI is a good thing, not something that continues old problems and unfairness.

We want to hear what you think about this article and your experiences with AI and biases. Your thoughts can help us understand AI's challenges and opportunities and keep the conversation going about making technology fair and inclusive. Please leave a comment or let us know how we can do better!

Want To Learn Even More?
If you enjoyed this article, subscribe to our free monthly newsletter
where we share tips & tricks on how to use tech & AI to grow and optimize your business, career, and life.
Written by Justin Gluska
Justin is the founder of Gold Penguin, a business technology blog providing the latest news and tools in the artificial intelligence, business, and SaaS world. If it can help you make more money or save you time, he will write about it!