Recommender Systems: Their Ethical Challenges Amidst the Nascent European Union and Indian Legal Landscapes

By Ritvik Chouhan & Vaishnavi Tiwari



Recommender Systems have become a pervasive part of our digital space and, in turn, of our political reality because of their use as intermediaries. This technology has, prima facie, an innocent task to accomplish: recommending to users content that adheres to their liking. In doing so, however, it notoriously picks up every interaction, which the user does not necessarily consent to submit, in order to suffuse the interface with content that falls within that direct or inferred liking. In this post, we discuss the ethical implications that follow the deployment of recommender systems by tech giants; while their effects have several dimensions, the ones discussed here are the political and the rights-based dimensions. The post then comments on how various legal systems, specifically those of the European Union and India, are endeavoring to combat the fallout of such systems through legislation.



Introduction


Recommender Systems (“RS”) are machine learning algorithms that collect and analyse user data, often using deep learning, and provide personalized recommendations based on that analysis. These systems draw on various types of data, including user preferences, contextual information, and historical behavior or past interactions. Through their information-filtering mechanisms, they increase a user’s engagement with the platform, as recommendations curated around the user's needs and preferences drive satisfaction and loyalty.

Recommender Systems shape our reality by exerting control over how we, as users of social media, navigate the digital space. These systems employ nudging, a form of choice architecture that molds individual behaviour through content curation and presentation without actually restricting a person’s choices. Simply put, for the purposes of this post, these systems carry the danger of reinforcing bigoted beliefs in a user who already holds such notions, instead of presenting fresh content that does not align with their existing preferences.

A 2020 internal report titled “Communal Conflicts in India Part I” by researchers at the social media giant Facebook paints a gruesome image of the Indian democratic system. It shows how major devastating events the nation has witnessed, from protests to riots, were aided by the use of the application and its algorithmic systems. Though ostensibly platforms for content sharing, social media applications such as Facebook and WhatsApp notoriously gravitate towards amplifying the dark side of freedom of speech and expression.

The report demonstrates how incendiary content freely uploaded on the platform strikes the communal chord at its worst, promoting hatred and fear-mongering towards particular religious, ethnic, or cultural groups. It also describes how the algorithms of such applications operate to favour elites by amplifying their content and targeting users, allowing discord and communal violence to breed. The algorithms that augment biases in this way, by amplifying and feeding targeted content to users, are recommender systems.


Down the Rabbit Hole Through Recommender Systems


RS are Machine Learning (a subset of Artificial Intelligence) algorithms that are deeply ingrained within social media platforms such as Instagram, YouTube, Facebook, and WhatsApp, and that produce filtered output when a user interacts with these platforms. To break it down further, every digital platform, from search engines and news providers to social media, tailors personalized content for its users through algorithms that first take input from the user in the form of metadata, inferential data based on the user's interaction with a piece of content, demographics, and even interactions between users, and then present the user with an interface filled with content they might like.

These systems, by taking in user feedback, project a unique user experience that bolsters several biases: popularity bias, by recommending content that gains attention within a population or area; cultural, racial, ethnic, religious, or gender bias, by allowing discordant or even hateful content towards a particular group into the panel of recommendations; and confirmation bias, by suggesting content that affirms an individual’s prior beliefs, stereotypes, or hypotheses. They select information through a feedback loop and filter it largely in two ways:

Content-based filtering – where the assortment is driven by the user's own inputs and how they have interacted with pieces of content.

Collaborative filtering – where the assortment is driven by the feedback of other users on the platform who have expressed similar opinions over a set of content, thus grouping users with similar biases together.
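The two filtering approaches above can be sketched in a few lines of Python. This is a minimal illustrative toy, not any platform's actual system: the users, items, ratings, and tags are entirely hypothetical.

```python
import math

# Toy user-item ratings: 1 = liked, 0 = seen but not liked (hypothetical data).
ratings = {
    "alice": {"post_a": 1, "post_b": 1, "post_c": 0},
    "bob":   {"post_a": 1, "post_b": 1, "post_c": 1},
    "carol": {"post_a": 0, "post_b": 0, "post_c": 1},
}

# Toy item tags for content-based filtering (hypothetical labels).
features = {
    "post_a": {"politics", "local"},
    "post_b": {"politics", "national"},
    "post_c": {"sports"},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    items = set(u) | set(v)
    dot = sum(u.get(i, 0) * v.get(i, 0) for i in items)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def collaborative_recommend(user):
    """Recommend items liked by the most similar other user."""
    _, nearest = max((cosine(ratings[user], ratings[o]), o)
                     for o in ratings if o != user)
    return [i for i, r in ratings[nearest].items()
            if r and not ratings[user].get(i)]

def content_recommend(user):
    """Recommend unliked items whose tags overlap the user's liked items."""
    liked = [features[i] for i, r in ratings[user].items() if r]
    liked_tags = set().union(*liked) if liked else set()
    return [i for i, r in ratings[user].items()
            if not r and features[i] & liked_tags]
```

Note the contrast: for "alice", content-based filtering recommends nothing outside her existing "politics" tags, while collaborative filtering pushes "post_c" to her simply because a similar user liked it — a crude picture of how users with overlapping histories get grouped and fed each other's content.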

With the help of the aforementioned sorting bases, RS eliminate the need to consume content with conscious awareness; they severely impair an individual’s capacity to understand why a certain piece of content is targeted at them and thus hamper their right to self-determination. To keep users hooked on the platform, these systems deploy ‘captivation metrics’ designed to retain user attention. In doing so, they reinforce cognitive biases, creating a ‘filter bubble’ that cuts down the diversity of content presented to a user. This leads users down a rabbit hole of controversial content and obstructs the display of any content incongruent with their biases.
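The feedback loop behind the filter bubble can be illustrated with a toy simulation (a sketch with assumed numbers, not a model of any real platform): the recommender samples a slate of content in proportion to past clicks, the user engages with everything shown, and an initially slight leaning snowballs.

```python
import random

random.seed(0)  # deterministic toy run

CATEGORIES = ["politics", "sports", "science", "arts"]

def simulate_feedback_loop(rounds=50, slate_size=4):
    """Simulate a click-driven recommendation loop.

    Returns the per-round slate diversity (unique categories shown divided
    by total categories) and the final click counts per category.
    """
    clicks = {c: 1 for c in CATEGORIES}  # uniform prior
    clicks["politics"] += 1              # slight initial leaning (assumed)
    diversity = []
    for _ in range(rounds):
        # Recommender: sample a slate proportional to accumulated clicks.
        slate = random.choices(CATEGORIES,
                               weights=[clicks[c] for c in CATEGORIES],
                               k=slate_size)
        diversity.append(len(set(slate)) / len(CATEGORIES))
        # User: engages with everything shown, reinforcing the weights.
        for c in slate:
            clicks[c] += 1
    return diversity, clicks

diversity, clicks = simulate_feedback_loop()
```

Under this rich-get-richer dynamic, the category with the initial edge tends to dominate the final click counts, and the slate's diversity tends to shrink over the run — a crude filter bubble emerging from nothing more than engagement-weighted sampling.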


The Laws of The European Union Amidst The Recommendation Paradigm


After introducing the General Data Protection Regulation (“GDPR”) in 2016, the European Union (“EU”) has another framework lined up: the Artificial Intelligence Act (“AI Act”). The AI Act contains provisions that specifically flag and heavily regulate AI-driven software that accrues potential risk through the data provided by the users of the system.

The AI Act takes a significant step by banning high-risk AI technology that negatively manipulates human behavior, thoughts, or decisions, including by exploiting vulnerable groups or using harmful subliminal techniques to steer people towards certain online content. A major downside of RS is their algorithmic opacity, or ‘black box effect’, which prevents users from truly understanding the ungraspable programming techniques by which their interface is personalized.

To address this, Articles 9, 13 and 14 of the AI Act require a risk management system and transparency at every step of the design and creation of an AI system. Moreover, the EU has also recognized RS in Articles 29 and 30 of the proposed EU Digital Services Act (“DSA”), which raise the level of transparency owed to users and to the public at large. A striking feature that emerges from a perusal of the two Articles is that the EU has actively taken steps to embed user autonomy in the digital space.

RS affect the informed decision-making capacity of individuals, who are presented with a cramped perspective, or filter bubble, which more often than not becomes a tool for polarisation or radicalisation of political views. Article 29 of the DSA requires platforms using RS to disclose their main parameters and to mitigate identified risks, subject to audits and disclosure. This allows users to modify those parameters, regain control over their choices, and receive recommendations that are not based on personal profiling.

RS collect user data both directly and through inferences from implicit data, such as user actions and interactions with content, directly affecting users’ privacy, identity, and informational self-determination. The GDPR makes user privacy and data security key principles in all technological advancements, and Articles 25 and 32 require that AI systems incorporate these values ‘by design’. The ePrivacy Directive issued by the European Parliament further mandates informed consent for cookies and explicit opt-out mechanisms.

The EU has thus taken several steps to address the issues related to RS and, over time, to regulate AI-driven software, including by banning the use of certain high-risk AI technologies that negatively manipulate human behavior, thoughts, or decisions.


How far has India come in recognizing Recommender Systems?


The Indian legal landscape for technological advancements such as RS is still in its infancy. Social media giants that employ RS largely fall under the category of ‘intermediaries’ under the Information Technology Act, 2000 (“IT Act”). Social media intermediaries such as Facebook, YouTube, and WhatsApp only store or transmit information uploaded by their users and are therefore generally exempt from liability for uploaded content under the “safe harbour” provision of Section 79(1) of the IT Act. However, safe harbour is not a blanket protection: intermediaries are required to observe due diligence with respect to content that falls foul of the restrictions contemplated by Article 19(2) of the Indian Constitution.

The introduction of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”) has changed the dynamics for Significant Social Media Intermediaries by requiring compliance reports and grievance redressal mechanisms. However, while Rule 4 of the IT Rules contemplates automated tools to filter out harmful content, it fails to address the methods platforms use to recommend such content to users. The laws of India therefore lack the EU's transparency-oriented approach to intermediaries' design and creation processes.

As such, the Indian legal system is constantly evolving when it comes to imposing liability on intermediaries. There are robust laws that fairly cover illicit acts threatening users, their personal data, and their sensitive personal data. Yet the recent disclosures of the Meta whistleblower, Frances Haugen, show that social media platforms have not taken down inflammatory content as required and have even promoted it through their AI systems.

With the proposal of the Personal Data Protection Bill, 2019 (“PDPB”), which drew inspiration from the GDPR, the technology sector was set to become more consent-centric: ‘data fiduciaries’, apart from maintaining strict due diligence, would be required to protect personal as well as non-personal data. The Bill underwent various modifications owing to the continuous advancement in technology, which has resulted in a significant amount of personal data being generated by data principals. Furthermore, the computational capability possessed by companies today enables them to process such data in ways that can negatively impact the autonomy, self-determination, freedom of choice, and privacy of data principals.

Owing to its failure to encapsulate all the relevant aspects of a data protection law, the government withdrew the PDPB. Instead, the Indian Ministry of Electronics and Information Technology released a draft Digital Personal Data Protection Bill, 2022 (“DPDP Bill”) to create a comprehensive data privacy and protection framework. The purpose of the DPDP Bill is to establish clear rights and duties for Data Principals, the individuals to whom the personal data relates, and Data Fiduciaries, the entities responsible for processing such personal data.

Section 7 of the DPDP Bill mandates that a company obtain explicit consent from users before collecting and processing their data. This means the Data Principal must be informed about how their data is being used by the Data Fiduciary or Data Processor, and must have the option of correcting or erasing such personal data. The Bill further obliges Data Fiduciaries to regularly audit their data practices to ensure they do not pose any harm or risk. It can therefore be inferred that the DPDP Bill will require companies to provide users with clear and concise explanations of the purposes for which their data is collected at every step. By extension, it will impose a positive duty on entities that deploy RS to specify how the RS works and why specific recommendations are being made to the user.

While the laws persistently address the root of the problem, i.e., the flagging and removal of incendiary content and the primacy of user data, the nation still has a long way to go in addressing the harmful effects of RS that fall outside the scope of these laws. These AI systems are not inherently harmful, but there is as yet no effective way to account for their opacity.

Moreover, the RS deployed by tech giants have the potential to shape public opinion on vital moral issues. In India, where societal values and beliefs vary widely, these systems can reinforce cultural biases and stereotypes. They can also compromise user privacy by collecting and analyzing personal data without consent, resulting in the marginalization and exclusion of certain groups and the reinforcement of existing power imbalances. It is therefore essential for companies deploying RS in India to carefully consider the ethical implications of their algorithms and to take measures to mitigate potential harm.


Conclusion


In the EU, there is growing concern about the ethical implications of RS, particularly regarding user consent and the biases that ensue. The DSA requires platforms using RS to disclose their main parameters and to mitigate the risks exposed through audits and disclosure. The autonomy and ability of users to make informed decisions should be the most important consideration when introducing any new technology. A growing and dismal trend indicates that social media companies often prioritize profits over this fundamental requirement, which can undermine democracy by eroding individual independence. While the EU is on the brink of addressing the harmful effects of RS through two dynamic pieces of legislation, further improvement regarding transparency and accountability is needed.

In India, on the other hand, there is growing awareness of the ethical challenges posed by RS, but currently no specific legislation regulates these systems. Instead, one must resort to broader legal frameworks such as the Constitution of India and the IT Act, which mandates the protection of user data privacy. However, owing to the profit motive, compliance with privacy directives often becomes hollow, reduced to boilerplate privacy agreements that do not truly honor privacy and self-determination, ultimately leading to unfair outcomes. There is therefore a growing need for ethical guidelines and regulations to ensure the responsible development and deployment of RS in India.

Organizations must ensure that users have greater control over their data, including the right to reject cookies, to opt out of tracking of their online activity, and to have their data deleted upon request. Companies should ensure that their RS are transparent and explainable to users, clearly and concisely explaining how the system works and why certain recommendations are being made. This will help build trust and reduce privacy concerns about data use. Moreover, to avoid perpetuating biases and discrimination, RS should be designed to prioritize diversity and fairness, ensuring that recommendations are not based on gender, race, religion, or caste.


The authors, Ritvik Chouhan & Vaishnavi Tiwari, are undergraduate law students at the Institute of Law, Nirma University, Ahmedabad.



Maha Lakshmi N & Krithika J, Recommendation System on Cloud Environment: A Descriptive Study on This Marketing Strategy 10 IJRASET XI, 1 (2022).

Robin Burke, Alexander Felfernig & Mehmet H. Göker, Recommender Systems: An Overview 32 AI Magazine 3, 13-14 (2011).

Mathias Jesse & Dietmar Jannach, Digital Nudging with Recommender Systems: Survey and Future Directions, 3 Computers in Human Behavior Reports, 1 (2021).

Tricon Infotech, How Recommender Systems Influence Customer Behavior, June 28, 2019, available at (Last visited on March 14, 2023).

Casey Newton, Meta might let anti-vax posts back onto Facebook and Instagram, July 27, 2022, available at (Last visited on March 14, 2023).

Newley Purnell & Jeff Horwitz, Facebook Services Are Used to Spread Religious Hatred in India, Internal Documents Show, October 23, 2021, available at (Last visited on March 14, 2023).

Sahana Udupa, Digital Technology and Extreme Speech: Approaches to Counter Online Hate, April 2021, available at (Last visited on March 14, 2023).

Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher & Robin Burke, Feedback Loop and Bias Amplification in Recommender Systems, Arxiv Cornell University, 2-3 (2020).

Sourabh Mehta, Why is the fairness in recommender systems required?, July 4, 2022, available at (Last visited on March 14, 2023).

University of York, AI search and recommendation algorithms, available at (Last visited on March 15, 2023).

Kyle Chayka, The Age of Algorithmic Anxiety, July 05, 2022, available at (Last visited on March 15, 2023).

Himan Abdollahpouri, Masoud Mansoury, Robin Burke & Bamshad Mobasher, The Unfairness of Popularity Bias in Recommendation, Arxiv Cornell University, 1 (2019).

Greg Jarboe, Understanding Biases in Search & Recommender Systems, December 13, 2019, available at (Last visited on March 15, 2023).

Min Dong, Xianyi Zeng, Ludovic Koehl & Junjie Zhang, An interactive knowledge-based recommender system for fashion product design in the big data environment, 540 Information Sciences, 2 (2020).

Victor Dey, Collaborative Filtering Vs Content-Based Filtering for Recommender Systems, August 25, 2021, available at (Last visited on March 15, 2023).

Lav R. Varshney, Respect for Human Autonomy in Recommender Systems, Arxiv Cornell University, 2 (2020).

Nick Seaver, Captivating algorithms: Recommender systems as traps, 24 Journal of Material Culture 4, 9 (2019).

Tien T. Nguyen, Pik-Mai Hui, F. Maxwell Harper, Loren Terveen & Joseph A. Konstan, Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity, Association for Computing Machinery, 677 (2014).

General Data Protection Regulation, 2016 (European Union).

Artificial Intelligence Act, 2021 (European Union).

Yavar Bathaee, The Artificial Intelligence Black Box and The Failure of Intent and Causation, 31 Harvard Journal of Law & Technology 2, 923-924 (2018).

Digital Services Act, 2020 (European Union).

Supra note 16.

Dr. Amy Ross Arguedas, Dr. Craig T. Robertson, Dr. Richard Fletcher & Prof. Rasmus Kleis Nielsen, Echo chambers, filter bubbles, and polarisation: a literature review, January 19, 2022, available at (Last visited on March 15, 2023).

James Davidson, Benjamin Liebald, Junning Liu, Palash Nandy & Taylor Van Vleet, The YouTube Video Recommendation System, Association for Computing Machinery, 294-295 (2010).

John Perrino, Using ‘safety by design’ to address online harms, July 26, 2022, available at (Last visited on March 16, 2023).

Directive on privacy and electronic communications, 2002 (European Union).

The Information Technology Act, 2000, §79(1).

IT (Intermediary Guidelines and Digital Media Ethics Code), 2021.

The Constitution of India, 1950, Art. 19(2).

Infra note 29, Rule 4.

Vijay Pal Dalmia, India: Data Protection Laws In India – Everything You Must Know, December 13, 2017, available at (Last visited on March 16, 2023).

Ministry of Communications and Information Technology, G.S.R. 313(E) (Notified on April 11, 2011).

Pranav Mukul, Facebook didn’t flag India hate content because it lacked tools: Whistleblower, October 7, 2021, available at (Last visited on March 16, 2023).

The Personal Data Protection Bill, 2019, 373 of 2019.

The Digital Personal Data Protection Bill, 2022, Cl. 2.

Id., Cl. 7.

Id., Cl. 13.

Shrikant Saxena & Shweta Jain, Exploring and Mitigating Gender Bias in Recommender Systems With Explicit Feedback, Arxiv Cornell University, 1-2 (2021).

Silvia Milano, Mariarosaria Taddeo & Luciano Floridi, Recommender systems and their ethical challenges, 35 AI & Society, 964 (2020).

Ryan Mac & Cecilia Kang, Whistle-Blower Says Facebook ‘Chooses Profits Over Safety’, October 03, 2021, available at (Last visited on March 16, 2023).
 