LexTalk World

Application of AI by Governmental Agencies – Detrimental Fallout and Repercussions


The Salzburg Questions for Law and Technology is an online discussion series introduced and led by the people of the Salzburg Global Law and Technology Forum. In their recent global seminar, many recognized authors and commentators presented their views on various topics related to law and technology. This article sheds light on Lee Hibbard's view of the relationship between government and artificial intelligence (AI).


People often associate AI with science-fiction robots, but in reality it is a technology that helps accomplish everyday tasks in less time. Social media, digital assistants such as Google Assistant and Alexa, web stores, online services, and much more have become part of our daily lives, and AI is connected to nearly all of them.


For example, the advertisements shown on social media are generated by AI. The Google Assistant we use and the web searches we perform all run through AI programs. Even Spotify recommendations and online-shopping suggestions are influenced by AI.


However, when we consider AI as a socio-technical system, the picture changes. The way AI is used in the public sector differs from how it affects an individual's life.


The report "AI Watch: Artificial Intelligence in Public Services – Overview of the Use and Impact of AI in Public Services in the EU" examines the use of AI in support of public services in the EU. It identifies adjudication as one of five categories of governmental tasks employing AI. Adjudication is the act of judging a case or issuing a ruling; in this context, it is the process in which AI systems assist in or carry out the granting of benefits or entitlements to citizens.


A survey conducted between May 2019 and February 2020 found that, of the 230 AI initiatives in European government bodies and agencies, five percent were used to assist in adjudicating social benefits and entitlements, while 20 percent helped monitor social media behavior.


But as every coin has two sides, adopting this system has pitfalls too, especially in the public sector. Since much of this AI is used to assist in adjudicating social benefits, there is a large probability that something will go wrong. Machine-learning models are trained on human patterns and behaviors, and behavior is bound to change. A shift in the input patterns changes the model's performance, making it fragile and unreliable in practice.
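To make this fragility concrete, here is a minimal, purely illustrative sketch in Python (the synthetic "behavioral" features, thresholds, and numbers are hypothetical assumptions, not drawn from any real benefits system). It trains a simple classifier on one behavioral pattern and then scores data whose pattern has shifted:

```python
# Illustrative sketch of behavior drift degrading a model (hypothetical data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, mean):
    # Two synthetic "behavioral" features; the true rule moves with the
    # population's behavior (a hypothetical stand-in, not real benefits data).
    X = rng.normal(loc=mean, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * mean).astype(int)
    return X, y

# Train on yesterday's behavioral patterns...
X_train, y_train = make_data(5000, mean=1.0)
model = LogisticRegression().fit(X_train, y_train)

# ...then evaluate on familiar data versus drifted data.
X_old, y_old = make_data(1000, mean=1.0)
X_new, y_new = make_data(1000, mean=2.5)

print("accuracy on familiar patterns:", accuracy_score(y_old, model.predict(X_old)))
print("accuracy after behavior drift:", accuracy_score(y_new, model.predict(X_new)))
```

On the familiar data the model scores near-perfectly; on the drifted data it falls to roughly coin-flip accuracy, because the rule it learned no longer matches the behavior it is judging.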


Below are two such instances from the Netherlands that show how faults in AI-enabled adjudication can harm society.

  1. January 2021 – Amid an escalating scandal over child benefits, in which families were wrongly accused of fraud by the tax authority, the Dutch government resigned. From 2012 onward, the Dutch tax authorities had wrongly accused about 26,000 parents of fraudulently claiming child allowance. Roughly 10,000 families were forced to repay thousands of euros, pushing many into unemployment and, in some cases, bankruptcy and divorce.

  2. February 2020 – A court ordered an algorithm-based system used by the government of the Netherlands to be taken down after finding that the system pooled citizens' data and was "exclusively targeted at neighborhoods with mostly low-income and minority residents." The system was meant to hunt down potential housing and benefit cheats.


The decision to take down the AI-enabled system sets a strong example for other courts to follow. It was the first time a court had stopped welfare authorities' use of digital technology and abundant digital information on human rights grounds.


What do we learn from this?

Would technical improvements to the quality of training data be enough to compensate for the wider and deeper effects on people wrongly accused by faulty AI-enabled adjudication? And how can governments avoid such traps while using AI?


  1. Encourage the procurement of AI-enabled products and services that "design in" ethical principles, such as transparency.

  2. Balance efficiency in the provision of social services against the protection of, and respect for, human rights.

  3. Strengthen the training of civil servants so they can oversee and improve the machine-learning models in use.

 
 

Follow LexTalk World for more news and updates from the international legal industry.

 








