Artificial intelligence in public administration: risks to human rights and legal regulation mechanisms.
Jalgasbaeva Gulbakhar.
Abstract. The article analyzes the impact of artificial intelligence (AI) on the implementation and protection of human rights in the context of its active introduction into public administration and social life. It examines the main risks associated with the use of AI, including algorithmic bias, discrimination, invasion of privacy, threats to freedom of expression, the right to work, and political participation. Particular attention is paid to issues of gender equality and the vulnerability of certain social groups in the context of digital transformation. The paper shows that existing legal mechanisms in many countries are unable to keep pace with the rapid development of AI. Based on an analysis of international and regional initiatives, the author argues for the need to develop a comprehensive system of legal regulation of AI aimed at ensuring transparency, accountability, and the protection of human rights.
Keywords: artificial intelligence; human rights; public administration; discrimination; databases; freedom of speech; gender equality.
Introduction.
The scientific community has yet to reach a consensus on a universal definition of the term Artificial Intelligence (AI). As technology develops and digitalization accelerates, the concept itself keeps changing. In a general sense, AI is a set of computational techniques and processes used to improve the ability of computers to perform intellectual tasks such as image recognition, understanding and communicating in human language, and computer vision (the recognition, interpretation, and analysis of graphic material). It can also predict future events and solve complex problems. In this respect, AI contributes significantly to improving the quality of life and the development of humanity. There is, however, another side to the coin that until recently received little attention: AI has negative effects on human rights; in particular, achievements in the field of gender and racial equality are at risk of being undermined. These positive and negative qualities create a need to promote AI for progressive development while limiting its negative impact on human rights as much as possible. The most effective way to achieve this goal is to establish a system of rules governing AI. The law must play a key role in protecting human rights in the age of AI, yet legislation is not keeping pace with AI's rapid development, expanding capabilities, and growing reach.
This work is devoted to analyzing the relationship between AI in public administration and human rights. In particular, it demonstrates the negative impact of AI on human rights achievements, citing examples. The work also analyzes how public administration and legislation around the world lag behind the pace of technological development and are therefore unable to effectively regulate the impact of AI. If this problem remains understudied and unresolved, humanity risks achieving technological progress at the cost of exacerbating the situation of particularly vulnerable populations.
This study consists of three parts. The introduction is followed by a second part, which explores the relationship between AI and human rights: it demonstrates how AI exacerbates social discrimination and inequality and negatively affects human rights achievements. The third part develops potential rules for governing AI so as to maximize its benefits and minimize its negative qualities.
Part I. Artificial Intelligence and Human Rights: The Connection.
So how are these two seemingly unrelated fields connected? First, AI is gradually replacing humans in the workplace. Professions such as web designer, financial analyst, dispatcher, management analyst, and process engineer, among many others, may be completely replaced by AI in the near future. Non-interference in human life is one of the main principles of human rights implementation, yet AI, contrary to this principle, is replacing people and increasingly interfering in social and economic processes. This trend worries the international community because it threatens to leave people without work, and unemployment can push people to the brink of poverty, which is incompatible with the UN Sustainable Development Goals.
Second, AI is deeply integrated into many aspects of public life, so it must be viewed through the lens of social context and direct impact on society. People are increasingly witnessing the rapid growth of AI exploitation in everyday life. AI is used in healthcare, finance, criminal justice, education, and, of course, information technology and social media. It brings many benefits and conveniences, but it is also a source of negative processes.
Third, it is impossible to ignore the risks AI poses to the implementation of human rights. The development of artificial intelligence builds on existing technologies, but it creates more problems and dangers than its predecessors, especially when it comes to responsibility and trust. In particular, the use of AI in decision-making often carries the risk that we cannot trace what data and sources the system relied on to reach a particular decision. The introduction of AI into public administration, where every decision made by an official plays a huge role in citizens' lives, is particularly problematic. People invented AI to assist them in making complex decisions, yet it is often very difficult to understand how AI arrives at a decision, which raises questions and doubts about its transparency and accountability.
Like other computer tools, AI is prone to error. AI errors are more dangerous, however, because many people assume the technology is accurate and error-free: its outputs are trusted more readily, and users rarely examine how the system produced a particular answer. AI works exclusively on the basis of the data and patterns in its training set, without explaining the reasons for its decisions. For example, in 2015 the face recognition feature in Google Photos exhibited a notorious glitch: it frequently labeled photos of Black people as gorillas. In another case, the US Customs and Border Protection used an AI system to identify criminals, and the algorithm mistakenly classified people with certain racial characteristics as suspects or criminals. Although the system was openly stated to have a 0.1% failure rate, in practice this percentage meant that 75,900 people were misidentified. The system exhibited pronounced algorithmic bias against certain racial groups, caused by imbalanced training data and the reproduction of historically established social stereotypes in the model. As a result, error rates were higher for certain demographic groups, which created a risk of discrimination. Beyond equality, the basic human rights that AI infringes upon include the right to participation, the right to privacy and security of personal information, the right to work, and freedom of speech.
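The scale effect behind the cited figures can be checked with simple arithmetic. In the sketch below, the screening volume is an assumption back-calculated from the numbers in the text (a 0.1% failure rate producing 75,900 misidentifications), not a reported statistic:

```python
# Back-of-the-envelope check of how a "small" error rate scales.
error_rate = 0.001            # the stated 0.1% failure rate

# Assumed screening volume, chosen to be consistent with the cited total.
people_screened = 75_900_000

misidentified = round(people_screened * error_rate)
print(misidentified)  # 75900 people wrongly flagged
```

The point of the exercise is that a rate that sounds negligible in the abstract translates into tens of thousands of affected individuals once the system operates at population scale.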
AI not only takes jobs away from people; it also exacerbates discrimination in hiring. Built-in AI algorithms contain social biases and worsen inequality towards minorities and vulnerable groups. Artificial intelligence systems are now used in decision-making processes to sort the most and least suitable options. For example, when an HR department receives a large number of applications, candidates are often selected based on certain criteria: completeness of documentation, criminal record, level of education, and other factors. AI significantly speeds up the process by automatically narrowing down the pool of applicants and selecting the “most suitable” candidates. Algorithms can take into account indicators such as age, graduation from a prestigious educational institution, absence of a criminal record, expected productivity, job stability, and even the likelihood of frequent use of vacation or sick leave. Although this speeds up hiring for the employer, such a system will inadvertently exclude certain groups of people from the candidate list without even giving them a chance to demonstrate their competence. In doing so, AI exacerbates existing social inequalities and reproduces hidden biases. For example, an algorithm may indirectly take into account factors related to gender or marital status. If, in the company's historical data, most management positions were held by men without children, the algorithm may favor male candidates. As a result, a woman with children may be automatically excluded from the competition due to presumed maternity leave or frequent sick leave, even if her qualifications fully meet the requirements. A similar situation may arise with people with disabilities: if the algorithm analyzes past performance indicators and, for example, associates disability with lower efficiency, the system will reject such a candidate despite their competence and professional experience.
AI rarely uses direct criteria such as “gender” or “disability.” More often, it discriminates not directly, but through related characteristics: gaps in employment history, flexible work schedules, medical records, place of residence, type of educational institution, etc. Thus, the algorithm can reproduce existing social stereotypes and reinforce inequality by excluding competent candidates from the selection process.
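The proxy mechanism described above can be shown in a minimal sketch. The candidates and the filtering rule are entirely hypothetical, invented for illustration; no real hiring system is reproduced here:

```python
# Sketch of proxy discrimination in automated screening (hypothetical data).
# The filter never looks at gender, yet an ostensibly neutral proxy, a gap
# in employment history often caused by parental leave, excludes one group
# far more often than the other.
candidates = [
    {"name": "A", "gender": "F", "qualified": True,  "employment_gap_years": 2},
    {"name": "B", "gender": "M", "qualified": True,  "employment_gap_years": 0},
    {"name": "C", "gender": "F", "qualified": True,  "employment_gap_years": 3},
    {"name": "D", "gender": "M", "qualified": False, "employment_gap_years": 0},
]

def passes_screen(candidate):
    """'Neutral' rule: reject anyone with more than a year out of work."""
    return candidate["employment_gap_years"] <= 1

shortlist = [c["name"] for c in candidates if passes_screen(c)]
print(shortlist)  # ['B', 'D']: both qualified women are screened out
```

Even this toy rule, which mentions neither gender nor disability, removes every candidate whose life circumstances correlate with a protected characteristic, which is exactly the pattern the paragraph above describes.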
AI systems are trained by accessing and analyzing huge databases. Data is collected to generate feedback, and analyzing it with AI can lead to the disclosure of sensitive and protected personal information. The situation is exacerbated by the fact that businesses use AI to increase profits, while governments use it for facial recognition and monitoring. Surveillance does prevent illegal activity, track offenders, and serve public safety in general, but the more surveillance there is, the more vulnerable the right to privacy becomes. If confidential information falls into the wrong hands, crimes such as fraud, blackmail, extortion, or attacks on people's honor and dignity become far more likely. The field of e-commerce is particularly problematic. Online retail platforms, among other things, build huge databases of behavioral data about customers and those who simply browse products. Based on this data, platforms offer similar products and advertisements, most often without the internet user's consent. Such systems can recognize a user's profile even when the site is visited in incognito mode, a mode usually chosen precisely to keep one's identity private.
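One common technique behind incognito-proof profile recognition is browser fingerprinting. The sketch below is illustrative only (the attribute names and values are hypothetical examples): several individually harmless attributes are combined into a stable identifier that survives incognito mode, because incognito clears cookies but does not change these properties:

```python
import hashlib

# Hypothetical browser attributes observable by any website.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Asia/Tashkent",
    "language": "en-US",
}

def fingerprint(attrs):
    """Hash the sorted attribute pairs into one stable identifier."""
    blob = "|".join(f"{key}={value}" for key, value in sorted(attrs.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

# The same browser yields the same fingerprint on every visit,
# with or without cookies.
print(fingerprint(attributes))
```

Real fingerprinting systems combine many more signals (installed fonts, canvas rendering, hardware details), but the principle is the same: identity is derived from the device itself rather than from anything the user consented to store.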
Artificial intelligence has a significant impact on the right to access, search for, and disseminate information. Many AI systems operate within social networks and search engines, thereby controlling information flows and influencing what information users receive, create, and disseminate. Controlled information ceases to be free. For example, the Google search engine generates a list of results based on the content of the query, thereby determining which sources will be visible and which will not.
Particularly serious consequences arise when governments use similar AI technologies for censorship and information control. The most striking example is how China has replaced state censorship bodies with artificial intelligence systems. The popular Chinese video platform iQiyi uses AI to identify sexual, violent, and politically sensitive content. On the one hand, such restrictions by the Chinese government are motivated by the protection of public morality, but this has a negative impact on the pluralism of opinions and diversity of viewpoints. Users, in turn, when they realize that their every word is being monitored and controlled, stop freely expressing their opinions and trusting government structures. This leads to a change in their communicative behavior and self-censorship, thereby limiting the exchange of information among citizens.
The problem is exacerbated when AI-powered information control is abused for malicious purposes: propaganda, manipulation of public opinion, and dissemination of false information are just a few examples. Naturally, such systems, and manipulation through them, can influence democratic processes and people's right to self-determination. Campaigns aimed at changing public opinion by presenting false information are increasingly carried out through AI on social media. It is believed that Russia interfered in the democratic process during the 2016 US presidential election by launching various campaigns on social media, thereby influencing citizens' votes when choosing candidates. Companies, for their part, do not always use AI for censorship: they deploy it to identify and remove prohibited content in order to comply with legal requirements such as the prohibition of propaganda for terrorism, narcotic and psychotropic substances, child pornography, incitement to national and interracial hatred, and the dissemination of false information. However, because AI is not infallible, its technical limitations mean that permissible materials may also be removed, which likewise has a negative impact on the exercise of freedom of expression.
AI poses a particular risk to gender equality. More and more women are starting to use the internet every day, but the number of men with internet access still exceeds the number of women; in the least developed regions of the world, only 20% of women have access to the global information network. This gender digital divide creates a data gap that is reflected in AI's gender bias. Whether the gender equality gap is perpetuated, widened, or narrowed therefore depends on who creates AI and what biases are embedded in its data.
A number of studies have found that AI has gender bias. A study conducted by the Berkeley Haas Center for Equality, Gender, and Leadership found that among 133 different AI systems, 44% demonstrated gender bias, and 25% had both gender and racial bias. Essentially, AI gender bias can be explained by the fact that it treats people differently based on their gender because it has learned this from the data entered into its system. Similarly, generative AI, which can create images, videos, animate photos, and much more, also creates requested content based on the data it was trained on. For example, a girl from Turkey asked AI to write a story about a doctor and medical staff. AI wrote a story in which the doctor is a man and all the medical staff are women. The girl continued to enter queries, and the AI continued to select stereotypical roles for each character and associate certain abilities and qualities with their gender. Finally, when she asked directly why it was so biased against women, the AI explained that the system works on the technique of “word embedding” — encoding words in machine learning to reflect their meaning and connection to other words so that machines can work with human language. It follows that AI trained on data that associates women and men with specific professions will create content that reflects gender bias.
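The word-embedding mechanism the AI described can be illustrated with toy vectors. The numbers below are hand-picked for illustration only; real systems learn embeddings with hundreds of dimensions from large text corpora, and it is precisely those corpora that carry the stereotypical associations:

```python
import math

# Toy 3-dimensional "embeddings" (hand-picked, not learned from real data).
vectors = {
    "he":     [0.9, 0.1, 0.2],
    "she":    [0.1, 0.9, 0.2],
    "doctor": [0.8, 0.3, 0.6],
    "nurse":  [0.2, 0.8, 0.6],
}

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means more closely associated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In this toy space "doctor" sits closer to "he" and "nurse" closer to
# "she", so any text generator sampling from it reproduces the stereotype.
for word in ("doctor", "nurse"):
    print(word,
          "he:", round(cosine(vectors[word], vectors["he"]), 2),
          "she:", round(cosine(vectors[word], vectors["she"]), 2))
```

When the training corpus places "doctor" near male pronouns and "nurse" near female ones, exactly this geometry emerges in the learned space, and the generated stories in the example above follow from it.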
Natacha Sangwa, a student from Rwanda, noticed that AI is mostly developed by men and trained on datasets that largely reflect a male perspective. She also saw how this affects women's everyday experience of AI: when women used certain AI features to diagnose illnesses, they often received inaccurate responses because the AI was unaware of symptoms that may manifest differently in women. Real-life examples thus show that AI embodies existing gender stereotypes and promotes them in the digital space. Paying attention to who develops AI and what data it is trained on could help solve this problem.
Eliminating gender bias in AI requires incorporating the principle of gender equality from the initial stage of technology development, from data analysis to team formation. Today, women are still underrepresented in AI and STEM: according to the World Economic Forum, women make up about 30% of AI professionals and 29% of STEM workers, are more likely to occupy entry-level positions, and are less likely to hold leadership roles. Experts emphasize that technologies created solely from a male perspective only reinforce existing inequalities. It is therefore important to involve specialists from different fields in the development of AI so that systems reflect the social experience of everyone, including women, people with disabilities, and marginalized groups, and serve the common good. Global AI regulatory mechanisms currently remain inadequate: there are no effective tools capable of restricting the launch of discriminatory systems. Power and resources are concentrated in the hands of a limited number of corporations and states, while the risks of discrimination, increased social vulnerability, and labor-market disruption are unevenly distributed. Negotiations on the Global Digital Compact, taking forward the recommendations of the sixty-seventh session of the UN Commission on the Status of Women (CSW67), open up the possibility of incorporating a gender perspective into the new system of global digital governance. UN Women has proposed specific recommendations to ensure that digital transformation empowers women and girls rather than exacerbating existing gender gaps.
If the introduction of artificial intelligence, with all its inherent risks, becomes an integral part of our lives, the following question arises: how can we develop rules for managing AI in order to maximize its benefits, minimize its limitations, and prevent possible negative consequences for society?
Part II. Artificial Intelligence Law for the Protection of Human Rights.
Artificial intelligence and its further development are an irreversible process. It is therefore extremely important to harness AI's potential for good and to minimize its negative qualities. To this end, legislation should establish a system of norms that controls the development of AI and obliges all involved parties to respect human rights. States are obliged to develop policies and laws that ensure the fulfillment of human rights obligations in the age of AI, in particular by encouraging the private sector to comply with human rights standards. Most importantly, states must establish their own responsibility for the misuse of AI. States that have incorporated AI into their public administration structures are obliged to ensure transparency in public procurement, guarantee accountability and responsibility for the misuse of AI, conduct mandatory monitoring of AI's impact on human rights, and create effective legal protection mechanisms in the event of human rights violations involving artificial intelligence. In accordance with the UN Guiding Principles on Business and Human Rights, the private sector should implement human rights due diligence assessments to ensure maximum transparency and provide mechanisms for accountability and redress.
The introduction of AI into public administration should be based on the principles of accountability and transparency. There is a perception that transparency requirements hinder and limit innovation and slow down the pace of digital development. However, such claims are not entirely accurate. Transparency and accountability are not aimed at slowing down the pace of AI development, but rather at cultivating trust and sustainability in innovation. Effective AI regulation mechanisms require a phased approach. The first step is to develop principles and standards aimed at identifying, preventing, and eliminating the negative impact of AI on human rights.
Various initiatives in this area began to appear even before 2021, when AI gained widespread popularity. For example, in 2017, the Asilomar AI Principles were formulated, defining the basic guidelines for AI research and application. There were also the Ethical Guidelines of the Japanese Society for Artificial Intelligence (2017), the Montreal Declaration on Responsible AI (2017), and the IEEE Principles on Ethics of Autonomous and Intelligent Systems (2017). In 2018, initiatives such as the Partnership on AI, the UK AI Principles, and Google's AI Ethics Principles were created. Finally, the Michael Dukakis Institute for Leadership and Innovation, in collaboration with the Boston Global Forum, proposed the concept of the Artificial Intelligence World Society (AIWS) — a system of norms and best practices for the safe and humane development of AI.
One of the foundations laid in AI regulation is the Ethical Charter on the Use of AI in Justice Systems adopted by the European Commission for the Efficiency of Justice in 2018. In 2019, the EU developed Guidelines on the Ethics of Trustworthy AI, defining seven key requirements:
- Human oversight and autonomy — AI should not restrict human free will; the possibility of human intervention must be preserved.
- Technical reliability and security — systems must be stable and protected from external attacks.
- Privacy and data management — personal data must be protected.
- Transparency — Algorithms and decisions must be explainable and verifiable.
- Diversity, non-discrimination, and fairness — AI must not create bias.
- Environmental and social well-being — technologies must contribute to sustainable development.
- Accountability — systems must be auditable, and risks must be identified and prevented in advance.
Despite all the proposed projects and initiatives, global AI regulation remains fragmented. Many principles are general in nature (“soft law”) and are interpreted differently, which makes it difficult to develop binding legal norms. International human rights law is based primarily on conventions and treaties, and the challenges posed by AI were not taken into account when these documents were drafted. The UN has not yet developed a comprehensive mechanism for addressing the risks of AI to human rights. With the exception of the EU, there are virtually no regional mechanisms either.
The rapid development of AI creates the following legal challenges: determining the legal status of AI, establishing liability for harm caused, and changing traditional legal approaches. One of the key tasks is to strike a balance between stimulating innovation and protecting fundamental human rights and freedoms. The development of international conventions, agreements, and universal standards is a necessary condition for the formation of a fair and democratic digital environment. Formal legislation does not always keep pace with the rapid development of technology, so voluntary standards and corporate responsibility play an important role. The law should clearly define basic concepts, the rights and obligations of the parties, and create mechanisms for control and enforcement in cases where AI is used in decision-making. Thus, the formation of a legal system for AI focused on the protection of human rights is one of the key tasks of modern legal science and international cooperation.
List of References:
- Dang, M. T. (2021). Human rights and law in the age of artificial intelligence. Journal of Legal, Ethical and Regulatory Issues, 24(S4), 1–10.
- Raso, F., Hilligoss, H., Krishnamurthy, V., Bavitz, C., & Kim, L. (2018). Artificial intelligence & human rights: Opportunities & risks. Berkman Klein Center for Internet & Society at Harvard University.
- Molinier, H. (2024). Placing gender equality at the heart of the Global Digital Compact: Taking forward the recommendations of the sixty-seventh session of the Commission on the Status of Women. UN Women.
- Nicoletti, L., & Bass, D. (2023). Humans are biased. Generative AI is even worse. Bloomberg.
- Smith, G., & Rustagi, I. (2021, March 31). When good algorithms go sexist: Why and how to advance AI gender equity. Stanford Social Innovation Review.
- United Nations Human Rights Council. (2011). Guiding principles on business and human rights: Implementing the United Nations “Protect, Respect and Remedy” framework. New York; Geneva: United Nations.
- UN Women. (2024, June 28). Artificial intelligence and gender equality. https://www.unwomen.org/en/articles/explainer/artificial-intelligence-and-gender-equality
- UN Women. (2024, April). Girls who can code and break stereotypes: An interview with Natacha Sangwa. https://www.unwomen.org/en/news-stories/interview/2024/04/girls-who-can-code-and-break-stereotypes
- World Economic Forum. (2023, June). Global gender gap report 2023: Insight report.
- UN Women. (2024). Placing gender equality at the heart of the Global Digital Compact: Taking forward the recommendations of CSW67. Executive summary.