
In an evolving digital world, algorithms are everywhere and increasingly shape our lives. In such a scenario, how can humans ensure they stay in control?
We as humans have become more reliant on machines to make processes more efficient. But this reliance has raised the potential for conflict between Artificial Intelligence and human rights. If we do not take control of the situation at hand, it is likely that inequality will spread across the globe, denying people their human rights. Yet if used optimally, AI can enhance human rights and help create a better future for us.
One thing is clear: algorithms have become increasingly prominent in our everyday lives. What is less clear is what we can do about it. As techniques for making decisions, algorithms are not new to us. What is new is the unprecedented computational power of machines, the accessibility of data, and the continuous evolution of AI. Whether an algorithmic decision-making process is fully or semi-automated depends on the extent of human influence over the actions that follow from the outcomes the algorithm generates. Although these systems improve the efficiency of decision-making, they also carry certain risks.
Algorithms created to regulate speech online have censored content ranging from religious expression to sexual diversity. In many cases, AI systems built to monitor illegal activities have been used to track and target human rights defenders. Much of this can be explained by algorithmic discrimination. A common cause of algorithmic discrimination is that differences between people in the real world are reflected in the virtual world. Human stereotypes about gender and race compound the problem.
The algorithmic decision-making process comprises five stages: conceptualization and initial analysis, design, testing, deployment, and monitoring and evaluation. In a nutshell, the first stage identifies the specific problem the algorithm is intended to solve and the context in which it will be used. In the next stages, the algorithm is designed and tested in an iterative process to refine it before it is deployed. Discrimination can enter at any of these stages; if it is present at an early stage and goes undetected, its effects can be amplified as they flow through the algorithmic system. Such discrimination can stem from a flawed design model or from biased data inputs.
So, the question is, what is being done to address this concern?
Many companies voluntarily adopt ethical frameworks to avoid violations of social justice, but these are often difficult to implement and have little effect. Ethical frameworks revolve around environmental well-being, transparency, and human agency, which is clearly not enough when it comes to AI. They are based not on rights but on values, and ethical values differ widely across the spectrum. Moreover, these frameworks cannot be enforced, making it difficult for people to hold corporations accountable for violations.

Consider Canada's Algorithmic Impact Assessment (AIA), a mandatory risk assessment tool intended to support the Treasury Board's Directive on Automated Decision-Making. The tool is a questionnaire that determines the impact level of an automated decision system; it is composed of 48 risk questions and 33 mitigation questions (a sketch of such a scoring scheme appears below). Yet frameworks like this act merely as guidelines supporting best practices and have not proven to be of much use. These self-regulatory approaches contribute little toward reforming the laws that govern AI.

Many have even argued that amending laws will not remedy violations of human rights. For instance, the European Union's draft AI regulation, which aims to develop human-centric AI that is safe and trustworthy by eliminating mistakes and biases, illustrates the drawbacks of such laws. The bill assesses the scope of risk associated with various uses of AI and then subjects these technologies to obligations proportional to the threats they pose. Critics have noted that this approach merely permits companies to adopt AI technologies so long as their operational risks are low; it will not protect human rights in the long run. At its core, this approach is anchored in inequality: it stems from an attitude that conceives of fundamental freedoms as negotiable.
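To make the AIA's questionnaire mechanics concrete, here is a minimal sketch of how such a tool might translate scored answers into an impact level. This is an illustration only: the one-point-per-question scoring, the 80% mitigation rule, and the level cutoffs are assumptions made for the example, not the Treasury Board's published methodology.

```python
# Hypothetical sketch of a questionnaire-based impact assessment in the
# spirit of the AIA. Weights, the mitigation rule, and the cutoffs are
# illustrative assumptions, not the tool's actual methodology.

def impact_level(risk_points: list[int], mitigation_points: list[int]) -> int:
    """Return an impact level from 1 (little impact) to 4 (very high impact)."""
    raw_score = sum(risk_points)          # scored answers to the 48 risk questions
    mitigation = sum(mitigation_points)   # scored answers to the 33 mitigation questions

    # Assumed rule: strong mitigation (>= 80% of the maximum possible
    # mitigation score, here taken as one point per question) discounts
    # the raw risk score by 15%.
    adjusted = raw_score * (0.85 if mitigation >= 0.8 * len(mitigation_points) else 1.0)

    # Hypothetical cutoffs partitioning adjusted scores into four levels.
    for level, cutoff in ((1, 12), (2, 24), (3, 36)):
        if adjusted <= cutoff:
            return level
    return 4

# Example: moderate risk answers with strong mitigation.
print(impact_level([1] * 20 + [0] * 28, [1] * 30 + [0] * 3))  # -> 2
```

The design point the sketch makes concrete is the one critics raise above: the output is a risk tier used to calibrate obligations, not a judgment about whether a system should exist at all.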
When considering laws that anchor human rights, we can look to the European Union's General Data Protection Regulation (GDPR). Implemented in 2018, the GDPR is a legal framework that sets guidelines for the collection and processing of personal information from individuals who live in the European Union (EU). The Regulation applies regardless of where a website is based: it must be heeded by all sites that attract European visitors, even those that do not specifically market goods or services to EU residents.
Though the GDPR gave users control over their personal data and obliged companies to respect those rights, the law did not have its desired effect. One of the main reasons is that it failed to help users understand the value of safeguarding their personal information.
Both of the above approaches attempt to mediate between the subjective interests of citizens and those of industry. They try to protect human rights while ensuring that the laws adopted do not impede technological progress. But this balancing act often results in merely illusory protection, without offering concrete safeguards for citizens' fundamental freedoms.
To achieve this balance, the solutions adopted must be tailored to the needs and interests of individuals, and citizen participation must be given real weight. Today, legislative solutions implemented to protect human rights seek only to regulate technology's negative side effects rather than address its ideological and societal biases. Addressing human rights violations triggered by technology after the fact is not enough. Technological solutions must be grounded primarily in principles of social justice and human dignity rather than in technological risk.
Follow LexTalk World for more news and updates from the international legal industry.