
ARTIFICIAL INTELLIGENCE (AI) AND ITS LEGAL IMPLICATIONS



What is AI? AI refers to the ability of machines to perform cognitive tasks such as thinking, perceiving, learning, problem-solving, and decision-making. Initially conceived as a technology that could mimic human intelligence, AI has evolved in ways that far exceed its original conception. These systems are powered by machine learning, deep learning, natural language processing, image recognition, machine reasoning, computer vision, and related techniques, which enable them to perform a wide variety of tasks. As AI's capabilities have dramatically expanded, so has its utility in a growing number of fields. Even at this relatively nascent stage, AI permeates many aspects of our society: urban infrastructure, manufacturing, logistics, law enforcement, banking, healthcare, legal services, and customer support, besides recommending your next show, beating you at a game of chess, acting as your personal assistant, and cracking a joke based on your current mood.

The impact AI has on our lives grows every day and will only increase with advances in data collection, processing, and computational power. Policymakers are enthusiastic about AI's potential to improve human lives and transform their countries' economies. Yet while the field of AI remains a regulatory vacuum, the legal issues associated with its use cannot be ignored.

AI, the Responsibility Quotient? An important issue is tortious liability for decisions made by AI. The most common tort, the tort of negligence, asks whether one party owed a duty of care to another and whether damage was caused by a breach of that duty. The further AI systems move away from classical algorithms and explicit coding, the more they may display behaviors that are wholly unforeseeable to their creators. Where foreseeability is lacking, we may be left in a position where no one is liable for a result that damages others.
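
To make the foreseeability point concrete, here is a minimal, purely illustrative Python sketch (the braking scenario and all data are invented for this example). The decision rule is induced from training examples rather than written out by a developer, so the system's behavior on a novel input cannot simply be read off the source code:

```python
# Hypothetical sketch: the "braking rule" below is learned from toy data,
# not written by an engineer, so its behavior on unseen inputs is not
# directly visible in the source code.
from sklearn.tree import DecisionTreeClassifier

# Invented training data: [speed_kmh, obstacle_distance_m] -> brake? (1/0)
X = [[30, 50], [60, 10], [90, 5], [40, 80], [80, 15], [20, 100]]
y = [0, 1, 1, 0, 1, 0]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# No one coded an explicit "if speed > x and distance < y then brake" rule;
# it was induced from data, and an unusual input may be handled in a way
# the creators never anticipated.
print(model.predict([[70, 12]]))
```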

Autonomous vehicles and robots, which featured in sci-fi novels and movies only a few years ago, are now a reality. Autonomous vehicles that rely on AI can significantly reduce road accidents, and companies like Tesla and Uber already have plans to operate autonomous taxis. However, where an accident is caused by such a vehicle, it may be difficult to fix liability on anyone. We face similar risks when AI is used in the healthcare sector to diagnose and detect diseases or perform complicated surgeries, and in the manufacturing sector, where AI-driven robots build products.

The inherent nature of AI may require individuals or entities contracting for AI services to seek specific contractual protections. In the past, software was expected to behave according to a pre-defined algorithm; machine learning systems, by contrast, are not static but constantly evolving. Parties may therefore consider provisions stipulating that if the AI technology produces unwanted outcomes, contractual remedies will follow. Such provisions might include audit rights over the algorithms, appropriate service levels, a determination of who owns improvements created by the AI, and indemnities in the event of malfunction.

AI, a Creative Genius? Another issue is the ownership of works and innovations produced by AI. In May 2017, Microsoft's artificial intelligence chatbot Xiaoice published a book of poetry titled "The Sunlight That Lost the Glass Window". In 2018, Adam Basanta created a computer system that operates on its own and produces a series of randomly generated abstract pictures. Basanta was sued in the Quebec Superior Court on the claim that an image created by the system infringed the copyright and trademark in the claimant's photographic work. This raises the question: who owns the works created by these systems? Is it the inventor of the software, or the system itself?

Many experts have mooted conferring legal personality on AI. The European Parliament has considered creating a specific legal status for robots in the long run, so that the most sophisticated robots could be given the status of electronic persons responsible for making good any damage they cause, with electronic personality possibly applying to cases where robots make autonomous decisions or otherwise interact with third parties independently. This would imply that owners would have to buy insurance for their robots to cover damages arising from the robots' acts.

AI, a Criminal Mastermind? The constructive uses of AI seem to be infinite; however, AI may also become an enabler of criminal offences. Many prominent figures, including the likes of Elon Musk and Stephen Hawking, have warned of the potential risks associated with AI. Recently, cyber-criminals used AI-powered voice-mimicking ("deepfake") software to imitate a company CEO's voice and trick the other party into transferring €220,000 (US$243,000) to their account.

Online streaming platforms and content libraries recommend content based on the digital footprint of a person's viewing history in order to keep them engaged. A case may arise in which content that normalizes or portrays violence, substance abuse, anti-social behavior, suicide, and the like is pitched to a young adult with an impressionable mind, who is inspired by it and acts on it. The question then arises whether AI can be prosecuted for participating in a criminal conspiracy.
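
A minimal sketch, with invented titles and tag weights, shows why this happens: a content-based recommender simply ranks unwatched items by their similarity to what the user has already consumed, so a history dominated by one theme keeps surfacing more of that theme:

```python
# Hypothetical sketch of a content-based recommender: items most similar
# to the viewing history score highest.
import numpy as np

# Invented catalog: each title is a vector of tag weights,
# ordered [action, violence, comedy, drama].
catalog = {
    "Title A": np.array([0.9, 0.8, 0.0, 0.1]),
    "Title B": np.array([0.1, 0.0, 0.9, 0.2]),
    "Title C": np.array([0.8, 0.9, 0.1, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The user's profile is the average of the titles already watched.
watched = ["Title A"]
profile = np.mean([catalog[t] for t in watched], axis=0)

# Rank unwatched titles by similarity to the profile.
scores = {t: cosine(profile, v) for t, v in catalog.items() if t not in watched}
print(max(scores, key=scores.get))  # "Title C": the closest thematic match wins
```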

AI, the Big Brother? Companies collect large amounts of information about consumers and use it for marketing, advertising, and other business purposes. What AI brings to the table is the ability to gather, analyze, and combine vast quantities of data from different sources, dramatically increasing these companies' information-gathering capabilities. In 2018, it emerged that Cambridge Analytica had used AI to analyze the Facebook profiles of millions of people to predict their voting behavior and target political advertisements; Facebook was subsequently fined $100 million by the SEC and $5 billion by the FTC for failing to protect users' data. Recently, Apple confirmed that contractors regularly heard private conversations recorded by the company's personal assistant, Siri. The program was shut down after a huge public backlash.
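
As a minimal sketch (all column names and values are fabricated), the "combining data from different sources" step can be as simple as joining separately collected datasets on a shared identifier to assemble one richer profile per person:

```python
# Hypothetical sketch: three independently collected datasets are joined
# on a shared user_id into a single enriched consumer profile.
import pandas as pd

purchases = pd.DataFrame({"user_id": [1, 2], "last_purchase": ["shoes", "books"]})
browsing = pd.DataFrame({"user_id": [1, 2], "top_category": ["sports", "politics"]})
location = pd.DataFrame({"user_id": [1, 2], "city": ["Mumbai", "Delhi"]})

profile = purchases.merge(browsing, on="user_id").merge(location, on="user_id")
print(profile)  # one row per user, combining three previously separate sources
```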

Voice recognition and facial recognition are two methods of identification that AI is becoming increasingly adept at executing. If these methods are deployed by the government, what becomes of the right to privacy that the Supreme Court of India has recognized as belonging to all people? Using AI, China has created a social credit rating system that can be used to impose flight bans, exclusion from private schools, throttled internet connections, exclusion from high-prestige work, exclusion from hotels, and registration on a public blacklist.

In light of recent developments in this space, various countries have introduced data protection laws to protect their citizens' interests. The European Parliament has passed the General Data Protection Regulation ("GDPR"), and India has introduced the Personal Data Protection Bill, 2019, which is yet to be enacted by Parliament. These laws will be at the forefront of the discussion and will have to address concerns ranging from the creation of surveillance states to the disclosure of, and access to, personal data by AI systems.

The Way Ahead The champions of AI technology will argue that any regulation at this early stage will stifle the use of AI and create a regulatory minefield for its developers. However, such a regulatory vacuum discourages the big players from investing in the technology, as there is uncertainty regarding its legality.

Microsoft and Google have each published a set of principles to guide AI development. Both seek to promote fairness, safety, reliability, privacy and security, inclusiveness, transparency, and accountability, and could be a useful starting point for an organization's own AI principles. This suggests that industry-wide collaboration to build ethics into AI systems and develop best practices is possible. However, such efforts will have to be complemented by a comprehensive framework developed in consultation with all stakeholders. We would need a robust data protection law, like the one envisaged by the Personal Data Protection Bill, 2019, coupled with sectoral regulatory frameworks that provide additional protection for user privacy and security. Japan and Germany have developed new frameworks for specific AI issues, regulating next-generation robots and self-driving cars respectively. International standards are also needed to ensure safe development, with standard practices and measures that may be mandated as part of every program.

NITI Aayog published a report titled "National Strategy for Artificial Intelligence" in 2018, which proposes setting up a Centre for Data Ethics and Innovation, along the lines of the UK's, aimed at enabling and ensuring the ethical, safe, and innovative use of data. The report also proposes BIS standards for AI systems and a regulatory sandbox approach to live-test innovations, as has been done by SEBI, RBI, and IRDAI.

The regulatory vacuum and non-existent jurisprudence on these issues will be a big test for legislatures and judicial institutions around the world, which will have to balance the interests of people, corporations, and the State.

 


 

Follow LexTalk World for more news and updates from the international legal industry.
