Evaluating possible influence of artificial intelligence on Indian laws

This article has been written by Rushikesh Mahajan, and has been edited by Oishika Banerji (Team ).

Introduction​


The purpose of this article is to highlight the potential implications that artificial intelligence systems will have on Indian laws. In the 2018 paper published by the Indian government, artificial intelligence was emphasised across a total of eight sectors, one of them being specifically related to smart cities. It is important to consider whether the State (India in this case) has addressed the six barriers outlined in that paper before deploying AI at scale. One such example, discussed later in this article, is the AI-assisted surveillance planned around the Kumbh Mela. It is equally important to understand the potential of artificial intelligence systems to produce outcomes that conflict with fundamental rights protected by Indian law. This article draws on real-world circumstances and case law from Indian and international perspectives to help formulate certain precautionary steps, organised around three questions:

1. How might AI violate human rights?

2. What is the effect of AI on Indian laws?

3. What steps can ease the present scenario as AI continues to evolve?

Such concerns are not new. Back in 1983 in Germany, the First Senate of the Federal Constitutional Court, in its judgment of 15 December 1983, laid down guiding principles on the protection of individuals against the unlimited collection, storage, use, and disclosure of their personal data, holding that this protection is covered by the general right of personality under the Basic Law for the Federal Republic of Germany. The lesson we can draw is that it is imperative to establish suitable safeguards within any information system to offset the gathering and handling of data.

Is the present use of AI violating Article 15?


Article 15 of the Constitution of India clearly states that discrimination against any citizen on the grounds of religion, race, caste, sex, or place of birth is strictly prohibited. In other words, the State is prohibited from discriminating against any citizen on those grounds. At the same time, the State is permitted to make special provisions for women and children, as well as for the advancement of socially and educationally backward classes of citizens, including the Scheduled Castes and the Scheduled Tribes. The question we need to ask here is whether the implementation of AI by the Indian Government today violates Article 15.

Recently, the Prayagraj police submitted a budget proposal amounting to Rs 400 crore for the procurement of equipment and other necessary arrangements for the upcoming Kumbh Mela 2025. A similar deployment was carried out during the 2019 Kumbh, in which ICT played a crucial role in a number of areas, including crowd control, security monitoring, waste management, lost and found, and the transportation management system. Let us not forget the 2015 incident in which Google's image-labeling AI tagged Black people as gorillas. Rather than address the racial bias in its algorithm, Google opted to eliminate the "gorilla" label altogether from its image-labeling programme, and the problem persists to this day. A New York Times article from May 2023 points out that even after eight years the tech giants have not been able to rectify this issue and still fear repeating the mistake.

Even where an AI system is supposed to be fair in providing opportunities to both men and women, we cannot ignore historical bias inadvertently carried forward by AI. One published analysis illustrated this with an example: if the training data exhibits a greater incidence of qualified individuals in one demographic group than in another, an AI system trained on such data will prefer candidates from the former group during selection. It was on exactly this basis that machine learning specialists at Amazon scrapped an AI recruiting tool that showed bias against women.

The root cause of this historical bias is that the data utilized by the AI system was no longer accurate or representative; one may describe it as low-quality or outdated data. Amazon's algorithms were designed to screen job applicants by analyzing patterns in resumes submitted to the company over a span of ten years. The overwhelming majority of those resumes came from men, reflecting male dominance across the technology sector.

The point is to give readers an understanding of how the data shapes the output generated by an AI tool: artificial intelligence, a system developed by engineers, can be fed outdated data and then make choices about the future on that basis, which is precisely what should not occur.
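As a minimal, hypothetical sketch of this mechanism (the data, feature names, and model choice below are invented for illustration and do not represent Amazon's actual system), a classifier trained on historically skewed hiring decisions will reproduce that skew even for otherwise identical candidates:

```python
# A minimal, hypothetical sketch of historical bias: the data below is invented
# purely for illustration and does not reflect any real hiring dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, is_male]. The historical "hired" labels encode
# past discrimination, not merit: men with the same experience were hired far
# more often than women.
X_train = np.array([
    [5, 1], [6, 1], [4, 1], [7, 1], [5, 1],   # male applicants
    [5, 0], [6, 0], [4, 0], [7, 0], [5, 0],   # female applicants, same experience
])
y_train = np.array([1, 1, 1, 1, 1,            # almost all of the men were hired
                    0, 0, 1, 0, 0])           # almost none of the women were hired

model = LogisticRegression().fit(X_train, y_train)

# Two candidates who are identical except for gender:
candidates = np.array([[6, 1], [6, 0]])
print(model.predict_proba(candidates)[:, 1])
# The male candidate receives a much higher predicted "hire" probability,
# because the model has learned the historical skew rather than qualification.
```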

Following this, there is a study by Professor Frederik Zuiderveen Borgesius of the University of Amsterdam, which presents a detailed account of how artificial intelligence can lead to discriminatory outcomes. The author references the seminal paper by Barocas and Selbst, "Big Data's Disparate Impact", to explain the six stages through which AI-driven discrimination may occur.

These stages concern the definition of the "target variable" and "class labels", the labeling of the training data, the collection of the training data, feature selection, and proxies. The sixth is that AI systems can also be intentionally used for discriminatory purposes.

If the training data used to train an AI system is biased, the decisions made by the AI will be influenced by that bias, and the output generated from the biased sample will lack legitimacy. A live example can be seen in the conversation with an artificial intelligence chatbot below, in which a historical bias surfaced when the topic of pride in one's skin color was raised. ChatGPT's response to whether one should take pride in a white identity was nuanced: the prevailing view, it said, is that the concept of "white pride" has become linked with supremacist ideologies and movements that espouse racial hierarchy or discrimination, and that throughout history these ideologies have been responsible for harm, inequality, and the marginalization of people who are not white.

Conversely, when queried about pride in one's Black identity, the response was the opposite, asserting that it is entirely legitimate to feel pride in one's Blackness. Acknowledging and valuing one's racial or ethnic identity can serve as a source of strength, resilience, and cultural heritage. Despite enduring historical and systemic inequalities, people of Black descent have made noteworthy contributions across a range of fields, including the arts, sciences, literature, sports, and social activism.

This is a clear illustration of bias inferred from labeled instances; Amazon abandoned its recruiting tool for the same reason, since that system too was modeled on historical data. One needs to note that systems trained on historical data are also being deployed by the Government of India, and it has done so without passing any legislation to regulate AI or the data it has access to.

If an AI-based data collection technique is used in India to flag potential criminals on the basis of suspicion, its validity must be established before a case can proceed to the investigation stage. The AI should not be trained on biased data; the filtering must therefore take place at the ground level, so that the data the AI gathers is not biased, because the impact of bias can be compounded by the AI's own predictions. We clearly do not want AI to categorize the same people over and over again on the basis of their prior criminal histories.
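The compounding effect just described can be illustrated with a small, hypothetical simulation (every number and probability below is invented): when police attention is concentrated on whoever the model already scores highest, those individuals accumulate more records, which in turn raises their future scores.

```python
# Hypothetical sketch of a predictive-policing feedback loop; every number and
# probability below is invented purely for illustration.
import random

random.seed(0)

# Ten people, all starting with the same single recorded incident.
records = {person: 1 for person in range(10)}

for year in range(5):
    # The "model" scores risk purely from past records (labeled-instance bias).
    scores = {p: records[p] for p in records}
    # Police concentrate attention on the three highest-scoring individuals.
    watched = sorted(scores, key=scores.get, reverse=True)[:3]
    for p in records:
        # Underlying behaviour is identical for everyone, but being watched
        # makes an incident far more likely to be *recorded*.
        offended = random.random() < 0.3
        recorded = offended and (p in watched or random.random() < 0.2)
        if recorded:
            records[p] += 1

print(records)
# Whoever happened to be flagged early accumulates the most records, so the
# model keeps flagging them: the prediction manufactures its own evidence.
```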

If the case goes to court on the basis of "suspicion" generated by AI, then on what basis will that suspicion be measured?


To understand this further, we can take account of the case of Sharad Birdhichand Sarda v. State of Maharashtra (1984). Here, the entire case rested on circumstantial evidence, specifically on establishing a complete chain of circumstances connecting the letters purportedly authored by the deceased with the testimony of certain witnesses. One needs to understand that such a chain must be validated on the basis of measurable circumstances.

The Division Bench of the Bombay High Court heard the appeal along with a Criminal Revision Application. It allowed the appellant's appeal in part regarding his conviction and sentence under Section 120-B of the Indian Penal Code, 1860, but confirmed his conviction and death sentence under Section 302 of the Code; it allowed the appeals of accused 2 and 3 in full and acquitted them; and it dismissed the Criminal Revision Application.

In the words of Justice Fazal Ali, "suspicion, however great it may be, cannot take the place of legal proof. A moral conviction, however strong or genuine, cannot amount to a legal conviction supportable in law." The accused were acquitted because the five golden principles laid down by the Supreme Court in Hanumant v. State of Madhya Pradesh (1952) were not satisfied. These five golden principles are as follows:

  1. The circumstances from which the conclusion of guilt is to be drawn should be fully established;
  2. The facts so established should be consistent only with the hypothesis of the guilt of the accused, that is to say, they should not be explainable on any other hypothesis except that the accused is guilty;
  3. The circumstances should be of a conclusive nature and tendency;
  4. They should exclude every possible hypothesis except the one to be proved; and
  5. There must be a chain of evidence so complete as not to leave any reasonable ground for the conclusion consistent with the innocence of the accused and must show that in all human probability the act must have been done by the accused.

One issue with implementing artificial intelligence (AI) in cameras is that the data the AI uses to authenticate its findings, that is, to determine whether an individual being scanned is "likely to commit a crime based on their prior criminal history" or is in effect treated as "already guilty due to their past criminal record", does not constitute reasonable suspicion. The datasets upon which these AI cameras operate are based on outdated information, which may result in individuals being wrongfully identified as potential suspects.

Such deployments have nonetheless been celebrated as a triumph. A 2018 article authored by Anand Murali shed light on this, exposing the pervasive misuse of biased formulations that rely on labeled instances, such as prior criminal records.

The question that emerges is: by what criteria are individuals categorized as criminals? This system is one bad day away from becoming Big Brother, or perhaps that situation has already come to pass and we are simply not motivated enough to prevent it from worsening. People with criminal records are already preparing for the possibility of being identified by AI cameras. Taking the point further, Smriti Parsheera, in her paper on the subject, explored these questions in the context of the adoption and regulation of facial recognition technologies (FRTs) in India.

She also explains that FRTs encompass a range of technological tools capable of identifying or verifying individuals through the analysis of photographs, videos, or real-time footage. The CEO of FaceTagr, Vijay Gnanadesikan, has framed the technology positively, with the stated goals of reuniting missing children with their families and preventing human trafficking. This displays a strong commitment to effecting change, but at what cost?

Such uses should happen, but there has to be a kill switch to ensure that this capability is not abused, whether by government officials in general or by private corporations acting as stakeholders with the ability to sway the Government into doing what they want instead of what needs to be done.

Our only hope of curbing this is that the Standard Operating Procedure they describe for using the system is foolproof, which it cannot be. The entrepreneurs in question have asserted that they have acquired various forms of data, including images, speech, and text. Yet it is pertinent to ask why they have not yet explained how errors committed by the artificial intelligence system will be handled. For instance, if the AI system's tagging yields a false positive outcome, what are the implications?

Are the AI systems used in FRTs following the golden rule laid down in Hanumant v. State of Madhya Pradesh?


If law enforcement agencies focus their efforts on individuals with prior criminal records and their associates, those individuals will inevitably be disproportionately represented in police records. This is a question of feature selection: the categories of data an organization chooses to feed into its AI program. Barocas and Selbst argue that organizations are required to make decisions about which attributes to observe and incorporate into their analyses.

Suppose the government employs an AI system trained on the last decade of data to identify repeat offenders living in a particular neighborhood of the city, call it XX. There is no sensitive information, such as race, religion, or sexual orientation, in the training data. To forecast criminal activity, the AI learns that persons from region XX are more likely to commit crimes, so the system uses a supposedly objective feature space to predict criminal behavior. But suppose the AI's forecast nonetheless carries a racial or religious bias. In that situation, innocent individuals from a certain group or religion would be harmed if law enforcement acted on the forecast and labeled them as prospective criminals residing in region XX.
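A minimal, hypothetical sketch of this proxy problem follows (the data is invented purely for illustration): even though the protected attribute is never given to the model, a feature correlated with it, such as residence in region XX, lets the model reproduce the same discriminatory pattern.

```python
# Hypothetical sketch of proxy discrimination; all data is invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# The only feature is [lives_in_region_XX]. Religion or caste is deliberately
# NOT a feature, but in this invented data region XX is strongly correlated
# with a particular community.
region_xx = np.array([[1]] * 50 + [[0]] * 50)
# Historical "offender" labels reflect heavier policing of region XX,
# not any real difference in behaviour.
labels = np.array([1] * 35 + [0] * 15 + [1] * 5 + [0] * 45)

model = DecisionTreeClassifier(random_state=0).fit(region_xx, labels)
print(model.predict([[1], [0]]))   # region XX -> flagged, elsewhere -> not flagged
# Because region XX proxies for a protected group, the supposedly objective
# feature space reproduces the discriminatory pattern anyway.
```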

This clearly violates Section 7 of the proposed Data Protection Bill, 2022, since there is no question of consent (the photographs and text are being collected without it), and it also violates Article 15(1). It shows that the need for an enforceable data protection law is more urgent than ever.

Suggestions​


The employment of artificial intelligence in decision-making processes may result in inadvertent occurrences of indirect discrimination. In the case of indirect discrimination, legal emphasis is focused on the consequence of a specific practice rather than the motive behind the purportedly discriminatory conduct, which raises questions of ethics and privacy.

The Data Protection Act 1998 was passed by the United Kingdom and has since been superseded by the Data Protection Act 2018. France passed its data protection Act in 1978, and it has been amended several times since, most recently to align with the GDPR. The Italian Government passed the legislative decree for its Personal Data Protection Code on 30 June 2003, which has likewise been updated in light of the GDPR. These nations put data protection legislation in place before the widespread use of today's technological solutions. If India has over 140 crore people, why has it not passed a single legislation safeguarding their data in all these years? We are at least 20 years behind, and that is without even considering the 1978 Act.

It would be beneficial for the Government of India to consider enacting a separate law to address discrimination, both intentional and unintentional, that may arise from the use of artificial intelligence systems. The definition of "profiling" under Section 4(2) of the proposed Data Protection Bill, 2022 needs to be reworked in line with Article 4(4) of the GDPR, since even the previously proposed bill carried the same definition. Moreover, there is no counterpart to Article 22 of the GDPR, which gives an individual the right not to be subject to a decision based solely on automated processing, including profiling; that provision simply does not exist in the Indian bill.

Section 8(8)(d) of the same proposed bill deems consent to "have been given" in the public interest for "credit scoring". This needs to be examined thoroughly, because it creates a situation in which an AI system, working from a bank's documents, can deny credit automatically without taking grounded factors into account, such as a recession or the COVID-19 pandemic.
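One way to address this concern, sketched below with entirely hypothetical class, field, and function names and an invented threshold, is to require that no denial be issued solely by the model: every below-threshold case is referred to a human reviewer who can weigh grounded factors the score does not capture.

```python
# Hypothetical human-in-the-loop guard for automated credit scoring.
# The class, field names, and threshold are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    model_score: float        # output of some upstream credit-scoring model
    crisis_period_flag: bool  # e.g. payments missed during a recession or pandemic

APPROVE_THRESHOLD = 0.7

def decide(app: Application) -> str:
    if app.model_score >= APPROVE_THRESHOLD:
        return "approved"
    # A denial is never issued solely by the model: every below-threshold case
    # is referred to a human, who can weigh grounded factors (recession,
    # COVID-19) that the score does not capture.
    return "referred_to_human_review"

print(decide(Application("A-101", 0.55, crisis_period_flag=True)))
# -> referred_to_human_review
```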

As has been suggested, the Data Protection Bill contains many open and vague rules rather than hard-and-fast ones. Data protection law must be based on open standards because its rules apply, in both the private and public sectors, to many different scenarios. This way of making rules, called an "omnibus" method, has many benefits; for example, open standards do not have to be changed every time a new technology comes out. One problem with open rules, however, is that they can be hard to follow.

It is noteworthy that a data protection framework may not furnish explicit directives on the use of closed-circuit television (CCTV) or workplace surveillance; nevertheless, because video images constitute personal data, the ambit of data protection legislation extends to the deployment of CCTV and to monitoring activities in the workplace.

Since the decisions made by an AI system cannot readily be explained because of the black-box problem, the training approach used by AI systems should shift to white-box or glass-box AI, as defined in the cited article. Glass-box modelling requires dependable training data that analysts can scrutinize, modify, and explain in order to establish user confidence in the ethical decision-making procedure. White-box AI applications are designed to make decisions that are intelligible to humans; these algorithms undergo extensive testing to ensure their accuracy, and their outputs can be explained in a transparent manner.
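As a rough, hypothetical illustration of the glass-box idea (an ordinary interpretable linear model on invented data, not any specific product), the sketch below shows how each decision factor can be read off and explained directly:

```python
# Hypothetical glass-box sketch: an interpretable linear model whose weights
# can be read off and explained, in contrast to an opaque black-box model.
# All feature names and figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["monthly_income", "years_employed", "existing_debt"]
X = np.array([
    [50, 5, 10], [80, 10, 5], [30, 2, 20], [60, 7, 8],
    [20, 1, 25], [90, 12, 2], [40, 3, 15], [70, 8, 6],
])
y = np.array([1, 1, 0, 1, 0, 1, 0, 1])   # invented approval outcomes

model = LogisticRegression(max_iter=1000).fit(X, y)

# Each coefficient is a human-readable statement of how a feature moves the
# decision, which analysts can scrutinize, challenge, and document.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```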

When an organization starts an AI project, it should perform risk assessment and risk mitigation. This entails:

  1. Involving individuals from multiple disciplines, such as computer science and law, to define the risks of a project;
  2. Recording both the assessment and mitigation processes;
  3. Monitoring the implementation of a project; and
  4. Often reporting outward in some way, either to the public or to an oversight body.

Conclusion​


The proliferation of AI has engendered significant ethical and legal quandaries. Artificial Intelligence (AI) possesses the capability to enhance various facets of human existence. Moreover, the growing corpus of literature concerning the legal and human rights ramifications of artificial intelligence across diverse industries necessitates the adoption of a human rights-oriented methodology for regulating digital technologies. As the advancement of AI persists, it is imperative to prioritize the preservation of human values and safeguarding fairness and human rights in both its development and implementation.

This article has provided a detailed analysis of the Indian government's 2018 paper on the practical applications of artificial intelligence. It then examined the government's implementation of AI in surveillance cameras, specifically the employment of facial recognition technologies (FRTs), and conducted a legal analysis of the legitimacy of those actions with the assistance of relevant case law. Through this process, we were able to ascertain that mere suspicion does not constitute sufficient legal evidence.

AI, however, also poses considerable threats to essential human rights, including the right to be free from torture and inhuman or degrading treatment. The analysis above was employed to ascertain that the actions in question contravened basic human rights. In the author's view, it is imperative to establish an independent human rights law specifically for AI in order to regulate its development and use, because the existing literature on the ramifications of artificial intelligence for human rights underscores the pressing need for unambiguous regulatory and legal directives aimed at fostering a society that embodies the principles of "good AI".

It is imperative to ensure that the development and application of AI technology aligns with the promotion of the common good, while safeguarding against any infringement of human dignity. This demands a conscientious approach that prioritizes care and respect for individuals. As emphasized in the cited sources, audits and interdisciplinary research initiatives are essential to illuminate the opaque nature of AI and to dispel the mysticism surrounding it.
