
Artificial Intelligence - What Is Algorithmic Error and Bias?

 




Bias in algorithmic systems has emerged as one of the most pressing issues surrounding artificial intelligence ethics.

Algorithmic bias refers to recurrent, systematic errors in a computer system that discriminate against certain groups or individuals.

It's crucial to remember that bias isn't necessarily a bad thing: it may be deliberately built into a system in order to correct for an unjust system or reality.

Bias becomes a problem when it leads to unjust or discriminatory outcomes that affect people's lives and opportunities.

Individuals and communities that are already vulnerable in society are often most at risk from algorithmic bias and error.

As a result, algorithmic bias may exacerbate social inequality by restricting people's access to services and goods.

Algorithms are increasingly being utilized to guide government decision-making, notably in the criminal justice sector for sentencing and bail, as well as in migration management using biometric technology like face and gait recognition.

When a government's algorithms are shown to be biased, individuals may lose faith in the AI system as well as its usage by institutions, whether they be government agencies or private businesses.

There have been several well-documented incidents of algorithmic bias over the past few years.

A high-profile example is Facebook's targeted advertising, which is based on algorithms that identify which demographic groups a given advertisement should be viewed by.

Indeed, according to one study, Facebook job advertisements for janitors and related occupations are often delivered to lower-income groups and minorities, while ads for nurses or secretaries are delivered predominantly to women (Ali et al. 2019).

This amounts to profiling people by protected categories, such as race, gender, and income bracket, in order to maximize the effectiveness and profitability of advertising.

Another well-known example is Amazon's algorithm for sorting and evaluating résumés, intended to increase efficiency and ostensible impartiality in the recruiting process.

Amazon's algorithm was trained using data from the company's previous recruiting practices.

However, once the algorithm was deployed, it became evident that it was biased against women: résumés that contained the terms "women" or "gender," or that indicated the candidate had attended a women's institution, received lower rankings.

Because the algorithm had been trained on data from Amazon's prior recruiting practices, little could be done to correct its biases.

Although the algorithm itself was plainly biased, the example demonstrates how such biases often mirror social prejudices, in this case the bias against hiring women embedded in Amazon's own past practices.

Indeed, bias in an algorithmic system may develop in a variety of ways.

Algorithmic bias often arises when a group of people and their lived experiences are not taken into account while the algorithm is being designed.

This can happen at any point during the algorithm development process, from collecting data that isn't representative of all demographic groups to labeling data in ways that reproduce discriminatory profiling to the rollout of an algorithm that ignores the differential impact it may have on a specific group.
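One of the failure modes above, skewed data and skewed outcomes across demographic groups, can be checked with very simple measurements. The following sketch is purely illustrative: the group labels, the toy records, and the 80% ("four-fifths") red-flag threshold are assumptions for the example, not part of any particular system discussed here.

```python
# Illustrative sketch (hypothetical data): measuring group representation
# in a dataset and the disparate-impact ratio of a set of decisions.
from collections import Counter

def representation(records, group_key):
    """Share of each demographic group in the data."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def disparate_impact(records, group_key, favorable_key):
    """Ratio of favorable-outcome rates between the worst- and
    best-treated groups (1.0 = parity; < 0.8 is a common red flag)."""
    rates = {}
    for g in {r[group_key] for r in records}:
        group = [r for r in records if r[group_key] == g]
        rates[g] = sum(r[favorable_key] for r in group) / len(group)
    return min(rates.values()) / max(rates.values())

# Toy decisions: 1 = shown the ad / shortlisted, 0 = not.
data = [
    {"group": "A", "favorable": 1}, {"group": "A", "favorable": 1},
    {"group": "A", "favorable": 1}, {"group": "A", "favorable": 0},
    {"group": "B", "favorable": 1}, {"group": "B", "favorable": 0},
    {"group": "B", "favorable": 0}, {"group": "B", "favorable": 0},
]
print(representation(data, "group"))                  # A: 0.5, B: 0.5
print(disparate_impact(data, "group", "favorable"))   # ~0.33, well below 0.8
```

Here the groups are equally represented in the data, yet the decisions favor group A three times as often, showing why representation alone does not guarantee fair outcomes.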

Partly in response to the significant publicity given to algorithmic biases, recent years have seen a proliferation of policy documents addressing the ethical responsibilities of state and non-state bodies that use algorithmic processing, aiming to guard against unfair bias and other negative effects (Jobin et al. 2019).

The European Union's "Ethics Guidelines for Trustworthy AI," issued in 2019, is one of the most important documents in this area.

The EU guidelines lay out seven principles for fair and ethical AI and algorithmic processing.

Furthermore, with the General Data Protection Regulation (GDPR), which came into effect in 2018, the European Union has been at the forefront of legislative responses to algorithmic processing.

The GDPR applies in the first instance to the processing of all personal data within the EU; under it, a company whose algorithm is found to discriminate on the basis of race, gender, or another protected category may be fined up to 4% of its annual worldwide turnover.

The difficulty of determining where a bias occurred and what dataset caused prejudice is a persisting challenge for algorithmic processing regulation.

This is sometimes referred to as the algorithmic black box problem: an algorithm's data-processing layers are so numerous and intricate that a human cannot comprehend them.

One response has been to identify where the bias occurred via counterfactual explanations, building on the GDPR's right to an explanation when one is subject to an automated decision: different data is fed into the algorithm to observe where the unequal results emerge (Wachter et al. 2018).
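The counterfactual idea can be sketched with a toy probe: change one input feature at a time and check whether the decision flips. Everything here is a hypothetical stand-in, the scoring rule, the feature names, and the deliberately biased penalty, used only to show the mechanism, not the method of any real system or of Wachter et al.'s formal approach.

```python
# Hypothetical sketch of a counterfactual probe: flip one input feature
# at a time and see whether the model's decision changes. The scoring
# rule below is a toy, deliberately biased classifier for illustration.

def model(applicant):
    # Toy biased classifier: penalizes a protected attribute.
    score = applicant["experience"] * 2
    if applicant["gender"] == "female":   # the bias we want to surface
        score -= 3
    return score >= 6

def counterfactuals(applicant, alternatives):
    """Yield single-feature changes that flip the model's decision."""
    base = model(applicant)
    for feature, values in alternatives.items():
        for v in values:
            if v == applicant[feature]:
                continue
            probe = dict(applicant, **{feature: v})
            if model(probe) != base:
                yield (feature, applicant[feature], v)

applicant = {"experience": 4, "gender": "female"}
flips = list(counterfactuals(
    applicant,
    {"gender": ["male", "female"], "experience": [3, 4, 5]},
))
print(flips)  # changing gender alone flips the decision, exposing the bias
```

That changing only the protected attribute flips the outcome is exactly the kind of signal a counterfactual explanation is meant to surface.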

In addition to legal and legislative instruments, technical responses to algorithmic bias include building synthetic datasets that seek to repair the biases present in naturally occurring data, or to provide an unbiased and representative dataset.
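One of the simplest dataset-repair techniques is rebalancing: oversampling underrepresented groups until group proportions are equal. The sketch below assumes toy records and a made-up group label; real synthetic-data approaches (e.g., generative models) are far more involved, and this only illustrates the rebalancing idea.

```python
# Hypothetical sketch: oversample minority groups (with replacement)
# so every group is equally represented in the training data.
import random

def rebalance(records, group_key, seed=0):
    """Return a copy of the data in which all groups reach the size
    of the largest group, by resampling smaller groups."""
    rng = random.Random(seed)
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Draw extra samples to bring this group up to the target size.
        balanced.extend(rng.choice(members)
                        for _ in range(target - len(members)))
    return balanced

data = [{"group": "A"}] * 6 + [{"group": "B"}] * 2
balanced = rebalance(data, "group")
counts = {g: sum(r["group"] == g for r in balanced) for g in ("A", "B")}
print(counts)  # {'A': 6, 'B': 6}
```

Note that resampling can only equalize representation; it cannot repair labels that already encode discriminatory decisions, which is why the diverse-team and auditing measures discussed next remain essential.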

While such channels for redress are vital, one of the most comprehensive solutions to the issue is to have far more varied human teams developing, producing, using, and monitoring the effect of algorithms.

A mix of life experiences within diverse teams makes it more likely that biases will be discovered and corrected sooner.


~ Jai Krishna Ponnappan

You may also want to read more about Artificial Intelligence here.



See also: Biometric Technology; Explainable AI; Gender and AI.

Further Reading

Ali, Muhammed, Piotr Sapiezynski, Miranda Bogen, Aleksandra Korolova, Alan Mislove, and Aaron Rieke. 2019. “Discrimination through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes.” In Proceedings of the ACM on Human-Computer Interaction, vol. 3, CSCW, Article 199 (November). New York: Association for Computing Machinery.

European Union. 2018. “General Data Protection Regulation (GDPR).” https://gdpr-info.eu/.

European Union. 2019. “Ethics Guidelines for Trustworthy AI.” https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (September): 389–99.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Pasquale, Frank. 2016. The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2018. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” Harvard Journal of Law & Technology 31, no. 2 (Spring): 841–87.

Zuboff, Shoshana. 2018. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.



