“Maria1” is “Maria”, or Would AI promptly say otherwise?
- Ellen Fraga

- Oct 27
- 6 min read

Maria is not a statistical datum; she is a subject of rights. Categorized by algorithms, yet not understood by them. Her resilience, her origins, her history: elements that should be recognized in all their complexity, but that are too often reduced to risk variables, exceptions, or noise. Rendered invisible in data, marginalized by patterns, Maria represents womanhood: strong, combative, and often misaligned with the predefined models of Artificial Intelligence (AI) and its multifaceted expressions.
In these terms, if the legal universe were a stage, AI technologies would undoubtedly occupy the spotlight. It is common knowledge in this sphere that AI is, and – spoiler alert! – will continue to be, the protagonist whenever the discussion revolves around the triad of innovation, rights, and regulation. Maria is complex; Maria is multifaceted. Yet so-called discriminatory biases have ignited a heated debate, fueling legal discourse on the matter.
The increasing adoption of AI systems in sensitive areas such as public safety, employment recruitment, criminal justice, credit approval, and social benefits has revealed a troubling dimension of the technology: the reproduction and amplification of algorithmic biases that disproportionately affect historically marginalized groups. Despite being presented as advanced and objective tools, these systems are built on datasets often marked by structural inequalities, including racism, sexism, and socio-economic exclusion.
At the outset, it is worth noting that, according to the Department of Computer Science at Stanford University, AI is “the science and engineering of making intelligent machines, especially intelligent computer programs” (NATIONAL GEOGRAPHIC, 2023). In other words, these so-called “intelligent machines”, or even “robots”, aim to execute tasks and processes characteristic of human behavior, reproducing human intelligence through algorithmic learning from past experiences, known as “Machine Learning”.
Given this context, it is necessary to promptly identify the various biases embedded in AI systems, many of which are considered discriminatory and directly impact fundamental human rights. These biases challenge the very structure of the internationally recognized and protected human rights framework, raising substantial regulatory challenges both globally and within Brazil’s emerging legal debate.
In her book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016), Cathy O’Neil categorizes certain algorithms as “Weapons of Math Destruction” (WMDs) due to their opacity (“black box” nature), mass impact, and their potential to amplify social inequalities. Though seemingly neutral and objective, these systems are fed by biased historical data, which leads them to reproduce and reinforce racial, gender, and class prejudices, thereby violating fundamental rights.
Operating from the premise that society itself is inherently biased, other scholars contribute enriching perspectives to this ongoing discussion. In the article Why Fairness Cannot Be Automated (Computer Law & Security Review, 2021), Sandra Wachter, Brent Mittelstadt, and Chris Russell of the Oxford Internet Institute criticize attempts to address algorithmic bias solely through technical “fairness” metrics.
For these authors, “fairness” is a legal and moral concept that cannot be automated by statistical measures such as equal opportunity or demographic parity. They highlight the tension with European anti-discrimination law, which is grounded in individualized treatment and rejects the legitimization of discrimination through seemingly favorable aggregate statistics.
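To make those metrics concrete, here is a minimal sketch, using entirely hypothetical decision and outcome data, of how demographic parity and equal opportunity are typically computed. The groups, numbers, and helper functions are invented for illustration only; they do not come from any real system discussed above.

```python
# Toy illustration of two statistical "fairness" metrics.
# All decisions, outcomes, and groups below are hypothetical.

def selection_rate(decisions):
    """Share of a group that received a positive (approval) decision."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, outcomes):
    """Among genuinely qualified people (outcome == 1), share approved."""
    approved_if_qualified = [d for d, y in zip(decisions, outcomes) if y == 1]
    return sum(approved_if_qualified) / len(approved_if_qualified)

# 1 = approved / qualified, 0 = rejected / not qualified (invented numbers)
group_a = {"decisions": [1, 1, 0, 0, 1, 0], "outcomes": [1, 1, 1, 0, 0, 0]}
group_b = {"decisions": [0, 1, 1, 0, 1, 0], "outcomes": [1, 1, 1, 1, 0, 0]}

# Demographic parity compares selection rates across groups
print(selection_rate(group_a["decisions"]))  # 0.5
print(selection_rate(group_b["decisions"]))  # 0.5 -> parity holds

# Equal opportunity compares true positive rates across groups
print(true_positive_rate(group_a["decisions"], group_a["outcomes"]))  # ~0.67
print(true_positive_rate(group_b["decisions"], group_b["outcomes"]))  # 0.5 -> fails
```

Under these made-up figures, the two groups have identical selection rates, so demographic parity holds, yet qualified members of one group are approved less often, so equal opportunity fails. This is precisely the gap the authors point to: an aggregate statistic can look reassuring while individual decisions remain legally indefensible.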
Accordingly, while these metrics may be valuable in engineering contexts, they are legally insufficient within the European framework. It is worth noting, however, that the “Old European Continent” has proven anything but outdated in addressing this issue. Quite the opposite: the European Union’s AI Act (2024) and General Data Protection Regulation (GDPR, 2016) demonstrate advanced and innovative regulatory frameworks, with growing concern for algorithmic bias and discrimination.
The AI Act, for instance, mandates that data used to train high-risk AI systems must not reinforce historical discrimination. This requirement compels companies to reassess their models before entering the European market, with detailed provisions on bias, oversight, and potentially strong sanctions enforced by competent authorities. Moreover, the GDPR addresses automated individual decision-making, including profiling, in its Article 22.
Brazil, in contrast, still lacks a specific AI law. However, regulation is under debate through the proposed Bill No. 2,338/2023, drafted by a commission of jurists and currently under consideration in the Senate. The Bill outlines general principles such as non-discrimination, transparency, accountability and explainability regarding algorithmic discrimination. It also includes provisions for risk analysis and impact assessments for high-risk systems, similar to the European standards.
In the United States, regulation remains more fragmented and sector-specific, as there is no unified federal AI law. Instead, civil litigation and regulatory agency action, driven by strong civil society advocacy, guide the approach. Federal agencies are pressured to ensure greater algorithmic equity across sectors such as public services, employment practices, and facial recognition, seeking to mitigate the risks posed by discriminatory AI biases.
Globally, there is a growing consensus on the need for ethical data treatment and human oversight, particularly in decisions that affect fundamental rights. This is because human rights are breached when AI systems operate on probabilities and generic patterns, triggering exclusionary automated decisions that ignore individual context and rest solely on mathematical justifications lacking legal grounding from the perspective of fundamental rights frameworks.
There are, unfortunately, numerous real-world examples across diverse domains, including criminal justice, education, labor markets, and police surveillance, which underscore issues such as racial bias, lack of explainability, socio-economic inequality, privacy violations, and gender discrimination.
One particularly emblematic case involves a major tech company that developed an AI system to automate résumé screening. The algorithm, however, began to reject female candidates and favor male applicants, because it had been trained on the company’s own historically male-dominated hiring data. As a result, the system was internally scrapped in 2018 before public release, exemplifying how biased historical data can perpetuate discrimination.
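The mechanism behind that case can be illustrated with a deliberately naive sketch. The résumé snippets, hiring labels, and word-scoring rule below are entirely invented and are not the company’s actual (undisclosed) model; they only show how a skewed hiring history teaches even a trivial learner to penalize gender-correlated terms.

```python
# Hypothetical illustration of how skewed historical hiring data can teach a
# naive screening model to penalize gender-correlated terms.
from collections import defaultdict
import math

# Invented historical record: (résumé keywords, was the candidate hired?)
history = [
    (["python", "leadership", "rugby club"], True),
    (["java", "leadership", "chess club"], True),
    (["python", "teamwork", "rugby club"], True),
    (["java", "teamwork", "women's chess club"], False),
    (["python", "leadership", "women's coding society"], False),
]

hired_counts, rejected_counts = defaultdict(int), defaultdict(int)
for words, hired in history:
    for w in words:
        (hired_counts if hired else rejected_counts)[w] += 1

def word_score(w):
    """Smoothed log-odds of being hired given that the word appears."""
    return math.log((hired_counts[w] + 1) / (rejected_counts[w] + 1))

def screen(resume):
    """Score a new résumé by summing the learned word scores."""
    return sum(word_score(w) for w in resume)

# Two equally skilled hypothetical candidates, differing in a single term
print(screen(["python", "leadership", "chess club"]))          # higher score
print(screen(["python", "leadership", "women's chess club"]))  # lower score
```

No explicitly discriminatory rule appears anywhere in this sketch; the penalty emerges from the historical labels themselves, which is the pattern described in the case above.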
This historical context conveys important messages about social biases and distorted algorithmic forecasting, which lead to stereotyped and discriminatory outcomes. One must then ask: How – and from whom – are these AI systems learning? The answer, though complex, is relatively straightforward: from people. And people are embedded in a society marked by inequality and bias, which inherently amplifies the risk of violating principles and rights related to equality and non-discrimination that are enshrined in constitutional norms, international treaties, consumer protection law, and data privacy legislation.
Hence, the notion of technological neutrality must be rejected. In societies shaped by racism, sexism and inequality, algorithms inevitably reflect these distortions, even if unintentionally. The data feeding these systems are records of human decisions – with all their flaws and biases. By learning from this information, algorithms replicate and often intensify discriminatory patterns, presenting them with a veneer of objectivity that conceals their biased roots.
Additionally, bias stems from the lack of diversity among the teams that develop and train these systems. The absence of varied social experiences among programmers, engineers, and analysts narrows the scope of technological vision, contributing to automated responses that ignore human plurality. Algorithms tend to reinforce what they know best: historically exclusionary patterns.
AI often behaves like a child in a learning phase: absorbing patterns uncritically and repeating what it has been taught. As Professor Ana Cristina Bicharra, PhD in Engineering and AI specialist from Stanford University, explains, algorithms learn from the data we provide; if those data reflect social prejudices, the systems will simply reproduce and amplify them.
The core issue lies not solely with technology but also with those who feed, design, and apply it. Just as a child needs ethical guidance, AI requires intentionality, diversity, and responsibility. Without this, we risk transforming human prejudice into automated “truths” under the guise of neutrality. As O’Neil rightly argues, algorithmic transparency, independent audits, and proactive government regulation, coupled with impact assessments, form the triad necessary to combat discriminatory bias.
Therefore, if AI learns from past experiences, we must invest in societal reeducation. The closer these systems get to mimicking natural rationality, the greater the risk of encoding structural discriminatory biases.
In interactions with generative AI, for instance, a seemingly simple gesture, such as saying “Thank you!”, may appear irrelevant or even raise concerns about resource usage. Nevertheless, it reveals how human behavior shapes machine learning patterns. Ethical and courteous social interactions help AI learn from positive examples, fostering more inclusive and socially acceptable outcomes.
In stark contrast to this human symbolic force, AI algorithms trained on biased data tend to perpetuate historical stigmas and erase individual trajectories. In the end, we are all “Marias”; we are diverse, and diversity is the cornerstone of a more equal society, even in the digital and algorithmic domains that emerge from the deepest layers of AI.
Milton Nascimento and Fernando Brant, both key figures in Brazilian popular music, expressed this powerfully in their song “Maria, Maria”, which continues to remind us that society can be revitalized by embracing its complexity. After all: “[...] quem traz na pele essa marca possui / a estranha mania de ter fé na vida” (those who bear this mark upon their skin / carry the strange habit of believing in life).
