
Human Rights in the Era of Artificial Intelligence and Automation

Article by Dr. Saravanan Thangarajan | October 29, 2024 | Global Rights Defenders

Introduction

Artificial Intelligence (AI) and automation are transforming nearly every aspect of society, from how we communicate to how governments administer justice. These technologies promise improved efficiency, cost savings, and societal advancement. Yet they carry serious risks that threaten fundamental human rights, particularly in the realms of policing, privacy, and freedom of speech.


The issue is not just technical but deeply ethical. How do we balance the benefits of AI with its potential to infringe on rights, especially for marginalized communities already vulnerable to discrimination? In this context, AI can both amplify systemic injustices and create new forms of oppression. This article examines the implications of AI for human rights, the moral dilemmas we face, and the urgent need for robust safeguards to ensure that technological advancement does not come at the cost of individual freedoms.


The Intersection of AI and Human Rights

At its core, AI technology is designed to improve efficiency, predict outcomes, and expedite decision-making. However, when deployed without careful oversight, AI has the potential to violate key human rights, such as privacy, fairness, and equality. From criminal justice to immigration services, AI is now making decisions that were once the sole responsibility of humans.


When decision-making is transferred to AI systems, issues of transparency, accountability, and bias become critical. AI algorithms, designed by humans, reflect the data they are trained on. If that data carries historical biases, such as racial or gender discrimination, those biases become entrenched in AI outputs. As Kate Crawford, a leading AI researcher, explains, "AI is neither artificial nor intelligent. It is made from natural resources, human labor, and it reflects human biases—often exacerbating inequalities."[1] The consequence? AI systems reinforce social hierarchies rather than dismantling them.


AI in Policing and Law Enforcement

AI's role in law enforcement is among its most controversial applications. Technologies like predictive policing and facial recognition promise enhanced crime prevention but raise alarms about fairness and racial bias.


Predictive Policing: Exacerbating Bias

Predictive policing tools, which forecast where crimes might occur, rely heavily on historical data. That data is not neutral; it reflects decades of biased policing practices, particularly in communities of color. As a result, predictive policing often targets minority neighborhoods, not because crime rates are higher there, but because the data is skewed. This leads to over-policing, reinforces stereotypes, and escalates tensions between law enforcement and marginalized communities. Research shows that predictive policing systems frequently target areas with higher minority populations because of this biased historical data.[2]
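
To see how this feedback loop works, consider a deliberately simplified simulation (a hypothetical sketch, not any vendor's actual system): two districts with identical true crime rates, where one starts with more recorded incidents simply because it was patrolled more heavily. All names and numbers below are invented for illustration.

    import random

    random.seed(42)

    TRUE_CRIME_RATE = 0.05  # identical underlying rate in both districts
    PATROLS_PER_DAY = 100
    recorded = {"district_A": 60, "district_B": 30}  # A starts over-represented

    for day in range(365):
        total = sum(recorded.values())
        for district, count in list(recorded.items()):
            # Patrols are allocated in proportion to past recorded incidents,
            # and crime can only be recorded where a patrol is present.
            patrols = round(PATROLS_PER_DAY * count / total)
            recorded[district] += sum(
                random.random() < TRUE_CRIME_RATE for _ in range(patrols)
            )

    print(recorded)

Even though both districts offend at the same underlying rate, district_A's two-to-one head start in the records never washes out: the data keeps "confirming" that it is the high-crime district, which is precisely the dynamic critics of these tools describe.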

 

Facial Recognition and Systemic Discrimination

Facial recognition technology, widely adopted by law enforcement, is marketed as an objective tool. Yet research by Joy Buolamwini and Timnit Gebru found that commercial facial analysis systems misclassify people of color, especially darker-skinned women, at alarmingly high rates. Their study revealed error rates of up to 34.7% for darker-skinned women, compared with just 0.8% for lighter-skinned men.[3] These inaccuracies lead to wrongful arrests and other severe consequences.


In 2019, the National Institute of Standards and Technology (NIST) published an analysis revealing that many facial recognition algorithms misidentified Black and Asian faces 10 to 100 times more often than white faces.[4] Such disparities underscore the risks of deploying AI technologies without proper safeguards.
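
Findings like these only surface when evaluation is disaggregated by demographic group; a single headline accuracy number hides them. The sketch below uses made-up audit records (a real audit would use a properly labeled benchmark, as Buolamwini, Gebru, and NIST did) to show the idea:

    from collections import defaultdict

    # Hypothetical audit records: (demographic group, was the match correct?)
    results = [
        ("lighter-skinned men", True), ("lighter-skinned men", True),
        ("lighter-skinned men", True), ("lighter-skinned men", False),
        ("darker-skinned women", True), ("darker-skinned women", False),
        ("darker-skinned women", False), ("darker-skinned women", False),
    ]

    tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
    for group, correct in results:
        tallies[group][0] += not correct
        tallies[group][1] += 1

    for group, (errors, n) in tallies.items():
        print(f"{group}: error rate {errors / n:.0%} (n={n})")
    # The aggregate error rate here is 50%, which conceals a 25% vs. 75% gap.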


AI in Migration and Refugee Status Determination

AI in immigration services and refugee status determination processes represents another area where automation can fail. While AI promises to speed up overburdened immigration systems, it often struggles to handle complex, life-altering decisions.


The Role of AI in Migration Management

AI is now used to process refugee applications, screen asylum seekers, and assess immigration risks. However, machine learning models struggle to account for the nuanced social, political, and humanitarian contexts necessary for informed decisions. As a result, some asylum seekers face wrongful deportation or unfair denial of refugee status. Petra Molnar points out that the opacity of these systems leaves individuals unable to understand or challenge AI decisions, stripping them of their rights.[5]


The Problem of Bias in AI Systems

Bias in AI is one of the most pressing concerns. AI systems are trained on datasets that often carry the historical biases and prejudices of the societies that generated them. For example, an AI trained on biased hiring data may perpetuate discriminatory practices, favoring male over female candidates if historical hiring was gender-biased.
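
A minimal sketch of that mechanism, using an invented résumé dataset and a deliberately crude scoring rule (this illustrates the bias-in, bias-out dynamic; it is not any real screening product):

    from collections import Counter

    # Synthetic history: past recruiters rejected résumés containing the
    # hypothetical proxy token "womens_club", regardless of skills.
    history = [
        (["python", "ml", "womens_club"], 0),   # 0 = rejected
        (["python", "sql", "womens_club"], 0),
        (["python", "ml"], 1),                  # 1 = hired
        (["python", "sql"], 1),
        (["java", "ml"], 1),
    ]

    hired, rejected = Counter(), Counter()
    for tokens, label in history:
        (hired if label else rejected).update(tokens)

    def score(tokens):
        # Crude learned rule: tokens seen mostly in rejections count against you.
        return sum(hired[t] - rejected[t] for t in tokens)

    print(score(["python", "ml"]))                 # 1  -> looks hirable
    print(score(["python", "ml", "womens_club"]))  # -1 -> penalized by the proxy

Gender never appears as a feature, yet the model reproduces the historical discrimination through a proxy. This is why simply deleting the "sensitive" column from a dataset does not remove the bias.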


Bias in AI can have far-reaching consequences across healthcare, education, employment, and criminal justice. In healthcare, biased algorithms may lead to unequal treatment based on race or socioeconomic status, while in hiring they may limit opportunities for marginalized groups. Virginia Eubanks stresses the need for algorithmic accountability and regular auditing to ensure fairness in AI systems.[6]


AI’s Influence on Privacy Rights

One of AI's most profound impacts on society is its effect on privacy rights. AI systems can collect and analyze vast amounts of data, often without individuals' knowledge or consent, and can infer sensitive personal details from seemingly unrelated data points, raising concerns about privacy and autonomy.
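
To make the inference risk concrete, here is a toy sketch (entirely synthetic data; the postal codes and group labels are invented) in which a trivial model recovers a sensitive attribute that the "anonymous" records never mention, purely from a correlated proxy:

    from collections import Counter, defaultdict

    # Training records pairing an innocuous field (postal code) with a
    # sensitive attribute; real-world analogues come from data broker files.
    train = [
        ("10001", "group_x"), ("10001", "group_x"), ("10001", "group_x"),
        ("20002", "group_y"), ("20002", "group_y"), ("20002", "group_y"),
    ]

    by_zip = defaultdict(Counter)
    for postal_code, label in train:
        by_zip[postal_code][label] += 1
    # The "model" just memorizes the majority label per postal code.
    model = {z: c.most_common(1)[0][0] for z, c in by_zip.items()}

    # "Anonymized" records with no sensitive field at all:
    for postal_code in ["10001", "20002"]:
        print(postal_code, "->", model.get(postal_code, "unknown"))

Because attributes like neighborhood, purchase history, and social ties correlate strongly with traits people never disclosed, "we only collect harmless data" is a weak privacy guarantee.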


In the era of surveillance capitalism, a term coined by Shoshana Zuboff, data is the new oil: collected, commodified, and sold to the highest bidder.[7] The commodification of personal data by AI technologies raises critical questions about the future of privacy. How do we protect individuals in a world where their every move can be tracked and analyzed by algorithms?


AI in Mass Surveillance

Governments and corporations increasingly use AI to monitor citizens, often infringing on privacy rights. AI-driven surveillance, such as facial recognition and predictive analytics, enables unprecedented tracking of individuals' behavior. In authoritarian regimes, these technologies are weaponized to suppress dissent and control populations.


Safeguarding Human Rights through Regulation

To mitigate AI's risks, robust legal and ethical frameworks are essential. The European Union's AI Act and UNESCO's AI Ethics Recommendations represent significant efforts to regulate AI in ways that protect human rights.


The EU AI Act and UNESCO’s AI Ethics Recommendations

The EU AI Act, first proposed in 2021, seeks to regulate AI in high-risk areas such as law enforcement and migration, emphasizing transparency, accountability, and human oversight.[12] Similarly, UNESCO's Recommendation on the Ethics of Artificial Intelligence (2022) provides guidelines for the ethical development of AI, with a focus on fairness and human rights protections.[8]


Regular Audits and Ethical AI Development

Regular audits and algorithmic transparency are critical for ensuring fairness in AI systems. Audits help detect bias and prevent AI from reinforcing social inequalities. Ethical AI development also demands a commitment to transparency, where people have the right to understand how AI systems function and make decisions.
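
What might such a recurring audit actually check? One common starting point is comparing positive-decision rates across groups against a tolerance. The sketch below is illustrative only (the tolerance value and field names are assumptions; a production audit would also compare error rates against ground truth and track drift between audits):

    def audit(decisions, tolerance=0.05):
        """Flag gaps in positive-outcome rates across demographic groups.

        decisions: list of (group, got_positive_outcome) pairs.
        """
        rates = {}
        for group in {g for g, _ in decisions}:
            outcomes = [o for g, o in decisions if g == group]
            rates[group] = sum(outcomes) / len(outcomes)
        gap = max(rates.values()) - min(rates.values())
        return rates, gap, gap <= tolerance

    decisions = ([("group_a", 1)] * 70 + [("group_a", 0)] * 30
                 + [("group_b", 1)] * 50 + [("group_b", 0)] * 50)
    rates, gap, passed = audit(decisions)
    print(rates, f"gap={gap:.2f}", "PASS" if passed else "FLAG FOR REVIEW")

Publishing results like these on a fixed schedule is one concrete form the transparency obligation can take.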


AI’s Misuse by Authoritarian Regimes

Perhaps one of AI's greatest dangers is its misuse by authoritarian regimes. Under repressive governments, AI technologies provide tools for mass surveillance, control, and the suppression of dissent.


The Weaponization of AI in China's Social Credit System

One of the most notorious examples of AI misuse is China's Social Credit System, which assigns citizens scores based on their political and social behavior. These scores determine access to services, jobs, and freedom of movement. As Rogier Creemers notes, this system is a dangerous fusion of AI and state control, in which AI is used not to empower citizens but to punish them.[9]


AI, Free Speech, and Censorship

In authoritarian regimes, AI-driven content moderation is often used to censor free speech and control information. As Shoshana Zuboff warns, "AI becomes weaponized, and human rights and civil liberties are trampled on a massive scale."[10] The chilling effect on freedom of expression is one of the most dangerous aspects of unregulated AI.


The Role of Human Oversight in AI Systems

AI should never act autonomously in high-stakes decisions affecting human rights. While AI can assist decision-making in areas like refugee status determination or law enforcement, human oversight is essential. Decisions impacting individual freedoms and rights must remain under human control, ensuring empathy, context, and fairness.


Human-AI Collaboration for Ethical Decisions

Kearns and Roth advocate for AI systems that work in collaboration with human decision-makers, ensuring that ultimate responsibility lies with humans. In contexts like criminal justice or immigration, human oversight ensures that decisions consider individual circumstances and prevents AI from making cold, context-free judgments.[11]
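
One way to encode that division of labor is a routing rule that treats the model's output as advisory and escalates any high-stakes or low-confidence case to a person. The sketch below is a hypothetical pattern, not Kearns and Roth's own code; the threshold and field names are assumptions for illustration:

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        case_id: str
        outcome: str       # the model's suggested decision
        confidence: float  # the model's self-reported confidence, 0-1
        high_stakes: bool  # e.g., asylum, bail, or deportation cases

    def route(rec: Recommendation, threshold: float = 0.9) -> str:
        # High-stakes cases always go to a human, whatever the confidence;
        # routine cases are escalated whenever the model is unsure.
        if rec.high_stakes or rec.confidence < threshold:
            return f"{rec.case_id}: escalate to human reviewer (model advisory: {rec.outcome})"
        return f"{rec.case_id}: auto-process, subject to human review on appeal"

    print(route(Recommendation("A-101", "grant asylum", 0.97, high_stakes=True)))
    print(route(Recommendation("B-202", "renew permit", 0.72, high_stakes=False)))

The design choice that matters here is the default: when in doubt, the system hands the case to a person, and a human remains answerable for the outcome either way.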


Conclusion

AI and automation are reshaping society, offering tremendous benefits but also posing unprecedented risks. From predictive policing to refugee status determination, AI has the potential to exacerbate systemic inequalities, infringe on privacy rights, and become a tool of oppression, especially in authoritarian regimes.


However, with strong regulation, ethical development, and international cooperation, AI can be harnessed for good. Governments, corporations, and civil society must work together to ensure that AI technologies respect human rights and promote social justice. Only by establishing robust frameworks for transparency, accountability, and human oversight can we ensure that AI serves humanity rather than undermining it.


References

[1] Crawford, K. (2021). Atlas of AI. Yale University Press.


[2] Richardson, R., et al. (2019). Predictive policing explained: How data-driven policing algorithms work. Data & Society Research Institute. https://datasociety.net/pubs/ia/Data-Driven-Policing.pdf


[3] Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 77-91. https://proceedings.mlr.press/v81/buolamwini18a.html


[4] National Institute of Standards and Technology. (2019). Study evaluates effects of race, age, sex on face recognition software. https://doi.org/10.6028/NIST.SP.500-332


[5] Molnar, P. (2019). Technology on the margins: AI and global migration management. Refugee Studies Quarterly, 38(3), 83-97. https://doi.org/10.1093/rsq/hdz003


[6] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.


[7] Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.


[8] UNESCO. (2022). Recommendation on the ethics of artificial intelligence. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000381137


[9] Creemers, R. (2020). China's social credit system: An evolving practice of control. China Law Review, 22(2), 42-57. https://doi.org/10.1093/clr/clraa010


[10] Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.


[11] Kearns, M., & Roth, A. (2020). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press. https://doi.org/10.1093/oso/9780190948207.001.0001


[12] European Commission. (2021). Proposal for a regulation laying down harmonized rules on artificial intelligence (AI Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2021%3A206%3AFIN
