In February 2020, Delhi witnessed one of the worst incidents of communal violence in its recent history: 53 people, mostly Muslims, were killed, hundreds were injured, and property worth millions was destroyed. While the complicit role of the police, as an instrument of the state, in enabling the violence became visible, the role of social media, especially big tech firms like Meta (formerly Facebook), in exacerbating it remained imperceptible. Although the preponderance of hate speech and its unbridled reposting on social media is neither new nor surprising, what merits attention is the way in which big tech is absolved of accountability for the social damage caused by its platforms. Yet Meta’s complicity in the violence became evident during the Delhi Legislative Assembly’s proceedings in 2021, when the chair of the Peace and Harmony Committee asked Facebook India what steps it had taken to curb hate speech propagated on its platform and to address users’ complaints about content posted in the days before the violence broke out. Not only did Facebook India lack a clear, exact definition of what counts as “hate speech” in the Indian context, but its representative also outright refused to say what steps the company had taken before and during the outbreak of violence in Delhi to moderate incendiary speeches and vitriolic content on its platform. Instead, Facebook India explained, in a rather generic tone, that the free speech and safety of its users are mediated and moderated through the complex processes of “machine learning tools,” “community standard measures,” and “third-party fact-checking.” In the event of communal violence, however, the “machine learning tools” and “fact-checking measures” used by Facebook appear to be little more than paper tigers: ineffectual mechanisms mired in political bias.
The objectivity attributed to algorithms and the sacrosanct status given to machine learning tools thwart any understanding of how digital infrastructure works, of the subjective biases it encodes, and of how, when left unregulated, it can become a potent partisan instrument of hate. Frances Haugen, a data scientist turned whistleblower, explains how Facebook’s algorithms “give the most reach to the most extreme and divisive ideas,” which become instrumental in producing social crises and destabilizing political regimes. She contends that because social media algorithms are designed to elicit the most reactions from a post, more extreme posts are favored, since they generate more reactions than posts that are moderate in nature and content. This preoccupation with “engagement” forecloses solutions that go beyond censorship: solutions that change the dynamics of the algorithmic system itself so that moderate, neutral ideas receive comparable reach and preference. To argue that machine learning tools are objective algorithms capable of circumventing all social, political, and religious biases is to misunderstand the content and form of their operation.
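Haugen’s argument about engagement-driven ranking can be made concrete with a toy model. The sketch below, in Python, is not Facebook’s ranking system; the Post structure, the reaction weights, and the example posts are all invented for illustration. It shows only the general dynamic she describes: when a feed is sorted by predicted engagement, and reactions that spread content are rewarded most, the most provocative post gains the most reach even if it attracts the fewest likes.

```python
# A deliberately simplified, hypothetical model of engagement-based ranking.
# This is not Facebook's actual system: the Post structure, the weights,
# and the example posts are invented purely to illustrate the dynamic
# Haugen describes, in which reach is allocated to whatever provokes the
# most reactions.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int  # reshares push a post into new networks


def engagement_score(post: Post) -> float:
    # Assumed weights: reactions that propagate content (comments, shares)
    # count for more than passive likes.
    return post.likes + 3 * post.comments + 5 * post.shares


feed = [
    Post("Neighbourhood clean-up drive this Sunday", likes=120, comments=8, shares=4),
    Post("Local library extends its opening hours", likes=90, comments=5, shares=2),
    Post("THEY are plotting against US. Share before it gets deleted!",
         likes=80, comments=60, shares=150),
]

# Ordering the feed purely by predicted engagement puts the inflammatory
# post first, even though more people 'liked' the moderate ones.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.0f}  {post.text}")
```

Read this way, the alternative Haugen points to is not only removal of the inflammatory post but a change to the scoring rule itself, for instance discounting reactions accumulated through long reshare chains, so that moderate content is no longer structurally disadvantaged.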
As digital infrastructure, algorithms work in totalizing ways that are increasingly dynamic. Burrell and Fourcade argue that
The more one interacts with digital systems, the more the course of one’s personal and social life becomes dependent on algorithmic operations and choices… As new data flow in, categories and classifications get dynamically readjusted, and so do the actions that computing systems take on the basis of those categories. This has important implications for how people ultimately perceive themselves and for how social identities are formed (2021: 227).
As digital infrastructures become instrumental in shaping our social identities, beliefs, and even our behavior as social agents, the uninhibited and unchecked spread of hate speech on online platforms can rip apart the social fabric and communal harmony of a society. As a case in point, Haugen referred to the unchecked spread of “fear-mongering content” on Facebook in India in her complaint to the US Securities and Exchange Commission. The complaint noted that Facebook was well aware of the anti-Muslim content promoted by right-wing groups like the Rashtriya Swayamsevak Sangh (RSS) and took no action to classify the RSS and its affiliates as a “dangerous organization” for fear of a political backlash from the ruling dispensation. The complaint also underscored Facebook’s lack of technical classifiers capable of tracking and flagging hate speech in local languages such as Bengali and Hindi, and, drawing on Facebook’s own data analysis, presented a strong correlation between misinformation and the number of “deep reshares”: posts that are reshared many times and contain divisive, sensational content. Moreover, Facebook India’s public policy head Ankhi Das, responsible for fact-checking posts and applying hate speech rules to politicians, failed to act on numerous instances of hate speech and community standard violations by politicians of the right-wing coterie. She said on record that “punishing violations by politicians from Mr. Modi’s party would damage the company’s business prospects in the country, Facebook’s biggest global market by number of users.” Under such digital crony capitalism, the algorithms and “community standard measures” of big tech become tools for institutionalizing hate. While big tech needs to implement better checks and balances and strengthen transparency, the Indian state’s meddling in digital platforms has only entrenched political bias and exacerbated the marginalization of minorities. The proposed amendments to the Information Technology (IT) Rules, 2021 are a case in point.
According to the amendments, the government will appoint appellate committees to redress the grievances of users who are dissatisfied with the decisions of a platform’s grievance officer. This would incentivize social media platforms to suppress content critical of the government and empower the government to censor speech it considers “unlawful.” The rules state that any content that is “ethnically objectionable” and “misleading in nature” can be taken down by intermediaries, thereby granting unbridled power to arbitrate over vaguely defined phrases. Since the government becomes the final arbiter, it can have content published by digital news media and other OTT platforms deleted or modified. This will not only compel news platforms to publish content palatable to the ruling dispensation but also stifle citizens’ right to access multiple perspectives and points of view, as the nature of available content will be circumscribed. While big tech fails to address the unchecked spread of hate speech, the Indian state, through the IT Rules, 2021, seeks to thwart dissent and the sharing of accurate information in a post-truth digital era, offering mechanisms of impunity to hate mongers instead.
This debilitating nexus of big tech and the state furthers the shrinking of the public space accessible to minorities and civil society in the digital landscape. Authoritarian regimes, with the compliance of big tech, set discursive boundaries for minorities, who not only suffer the brunt of online hate speech as victims in real life but also run the risk of being punished for dissent. Big tech’s social media platforms can be easily co-opted by right-wing ideologues, leaving minorities and civil society policed, trolled, disciplined, and terrorized online. The shrinking of the internet’s “public space” and the marginalization of minorities online reveal how tech-mediated authoritarianism and bigotry perpetuate a culture of hate, mediate violence, and effectively undermine democracy.
Bibliography
Burrell, J., & Fourcade, M. (2021). The Society of Algorithm. The Annual Review of Sociology Vol. 47, 213-237