AI and the Modern Nation State

AI World School
7 min read · Oct 21, 2021


Nayanika Mathur, in her book ‘Paper Tiger: Law, Bureaucracy and the Developmental State in Himalayan India’, details the troubles that a tiger causes in a small village in northern India. She argues that the progressive developmental laws and policies put in place by the Indian state to safeguard villagers in situations like these were in fact never implemented, because of a complicated and ineffective bureaucracy. The excessive emphasis on ‘paper’ (documents) instead of the ‘real’ scenario, she notes through an extensive ethnography, becomes the reason for this inefficiency. This emphasis gets in the way of ever actualizing the goals of well-intentioned laws and policies.

Today, similarly, a great deal of emphasis is placed on technology, and increasingly on AI. Most offices rely on biometric devices to log the entry and exit of their employees, and universities record students’ attendance using biometric technology too. A person’s physical presence is no longer proof of their presence; the machine’s record is. Many universities even use algorithms in their student selection processes. Traffic systems have also ‘evolved’ to include CCTV cameras, and in the past year there has been some discussion about introducing drones to monitor compliance with lockdown protocols. Machines such as these do more than the work of an attendance register or a camera: they are fed data, which they store and use to analyze the people and places in front of their lenses.

For instance, the college attendance biometric devices not only record students’ times of arrival and exit, but also their fingerprints, iris scans, and even facial identity. This data is connected to the name, gender, age, grades, and every other piece of information about the student that the college holds, including sensitive information such as the student’s medical history and their parents’ income levels. Similarly, the CCTV cameras on the road are sometimes programmed to recognize individuals’ faces in order to track people who flout norms. In this situation, however, there is a lot more at stake: the city and the state hold records of the entire population, the systems are fed with this data, and they are then trusted to make accurate matches between data and people. This bears a striking similarity to what Mathur notes about the increasing emphasis on ‘paper’ rather than the real; in this case, it is the algorithm that is trusted over people. However, it does not always work in a fair and non-discriminatory manner. The installation of such devices is supposedly well-intentioned, yet, just like the policies and laws that Mathur describes, the intentions seldom translate into practice.

Michel Foucault, the French philosopher, formulated the concept of ‘Governmentality’ to describe the rational form of governance that is the modern nation state. In his conceptualization, he posits that the modern nation state designs systems to ‘define’ populations in ways that make a diverse people easier to govern. Defining a population often means creating categories based on the visible and knowable attributes of persons, for the purposes of easy governance. This system is not always effective because it seeks to define people through an ‘outside-in’ approach, one whose origins go back to the colonial era. For example, in colonial India, the vast diversity of the Indian population, its varied castes and communities (differing across regions), was reduced to four simple overarching categories (Brahmins, Kshatriyas, Vaishyas, and Shudras) for the sake of easy control. These categories unfortunately persist to this day, and have formed the basis of many contemporary laws and policies too. The state’s limited understanding of the diversity of the Indian population reduces people’s identities to neat categories that nullify their varied cultures. They are broad generalizations, and are often insensitive or simply inaccurate.

The algorithms of surveillance that are fed with data like this thus follow the same pattern of inaccuracy and insensitivity. Just like the police and other officials in power, who use a method of ‘profiling’ based on population categories such as race or sexuality to detect criminals, AI-based facial recognition technology recognizes faces based on similar generalizations and categories, resulting in inaccurate conclusions. For example, a campus study at UCLA found that a prominent tech company’s facial recognition software returned at least 58 false positives linking students to criminal records, and that “the vast majority of incorrect matches were of people of color.” When such AI-based facial recognition technology is deployed for surveillance of people in public, the issue of consent is vastly overlooked.
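To make the scale of the problem concrete, here is a minimal, hypothetical sketch (not tied to any particular vendor’s system, and not using figures from the UCLA study): facial recognition systems typically declare a ‘match’ when two face representations are similar enough, and even a tiny per-comparison false match rate produces dozens of wrong hits once every probe photo is searched against a large database.

```python
# Illustrative sketch only: expected false positives when searching probe
# photos against a large gallery with a fixed per-comparison false match rate.

def expected_false_positives(num_probes: int,
                             gallery_size: int,
                             false_match_rate: float) -> float:
    """Expected number of incorrect matches across all probe-gallery comparisons."""
    return num_probes * gallery_size * false_match_rate

# Hypothetical numbers, chosen only to show the scale of the effect.
probes = 400        # photos of students and staff being searched
gallery = 10_000    # records in the database they are compared against
fmr = 1e-5          # an optimistic 1-in-100,000 false match rate per comparison

print(expected_false_positives(probes, gallery, fmr))  # -> 40.0 expected wrong matches

# The false match rate is also not uniform: audits such as NIST's face
# recognition vendor tests have found it varies across demographic groups,
# so these wrong matches tend to cluster on the groups with the highest rates.
```

The point is not the exact numbers but the structure: a small error rate, multiplied over thousands of comparisons and distributed unevenly across groups, is how dozens of false positives happen.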

Foucault, in his conceptualization of Governmentality, also highlights that in a rational form of government such as the modern nation state, where people are defined as populations, citizens actively consent to being governed. In a modern nation state intertwined with technology, this consent to be governed translates into widespread surveillance. The point of contention is that the individual’s consent to being surveilled is now taken as a given, since surveillance is meant to be in the interest of the population, safeguarding it against any threats.

For instance, in England, AI-based facial recognition technology was used by the police to detect criminals. This method often involved a major invasion of people’s privacy and personal space, based on the wrongful conclusions that the algorithm drew. This issue of consent was highlighted, with particular examples, in the Netflix documentary ‘Coded Bias’. The documentary records an instance in which an English man refused to show his face to the facial recognition system. This refusal resulted in a conflict with law enforcement: he was accused of ‘obstructing’ the enforcement of public safety, and was finally forced to cooperate with the surveillance. His will to refuse to be subjected to the lens of an untrustworthy AI technology, one prone to biased errors, was overridden by state-sanctioned law enforcement officers. If AI is so prone to errors, why do the state and the law place so much faith in it?

The military and defense industries define AI as ‘autonomous systems’ that they can rely on for various purposes. Abigail Thorn (formerly Oliver Thorn), an academic-turned-YouTuber, argued in her 2018 lecture to students of The Hague University on the ethics of using AI in warfare that such a definition is deeply misleading about what the technology can really do: AI is not autonomous by any stretch. Janelle Shane, in her TED talk about the dangers of AI, makes the same point very specifically: “the problem with AI is not that it has its own goal, but that it does exactly what we tell it to do.” Clearly, she too agrees that AI is not autonomous. However, people continue to place so much faith in the technology because it is sanctioned by powerful national and international bodies. The modern nation state’s power hinges on the population’s faith in the autonomy and credibility of AI, very much like the collective faith citizens hold in their government identity card, or any other piece of paper that the government signs off on. The credibility of AI and other state-sponsored algorithms plays an important role in governance, that is, in surveillance. This form of surveillance is not simply meant for detecting criminals accurately; more importantly, it is meant to be internalized by the people. A collective fear of being detected by unquestionable, state-sanctioned algorithms further reinforces the state’s legitimacy and power over its populations.
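Shane’s point can be made concrete with a small, hypothetical example (mine, not hers): if we tell a learning system only to maximize accuracy on data in which genuine threats are rare, the literal best answer is to flag no one at all. The objective we wrote down is satisfied perfectly; the intent behind it is not.

```python
# Toy illustration of "it does exactly what we tell it to do":
# a classifier asked only to maximize accuracy on imbalanced data.
from collections import Counter

# Hypothetical training labels: 990 harmless cases, 10 genuine threats.
labels = ["harmless"] * 990 + ["threat"] * 10

def train_majority_classifier(train_labels):
    """'Learns' the single answer that maximizes accuracy on the training data."""
    most_common_label, _ = Counter(train_labels).most_common(1)[0]
    return lambda example: most_common_label

model = train_majority_classifier(labels)
predictions = [model(x) for x in labels]

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
threats_caught = sum(p == y == "threat" for p, y in zip(predictions, labels))

print(f"accuracy: {accuracy:.0%}")          # 99% -- exactly what we asked for
print(f"threats caught: {threats_caught}")  # 0  -- not at all what we meant
```

The code does precisely what it was told; the failure sits in the objective its owners chose, which is why questions of who writes and owns the objective matter so much.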

When AI and algorithms are created by powerful people for their own ends and goals, such a perception of AI’s autonomy is deeply problematic. Virginia Eubanks, a scholar who studies the entanglements between AI and inequality, highlights that “the most punitive, most invasive, most surveillance-focused tools that we have, go to the poor and working-class communities first”. Cathy O’Neil, a mathematician and the author of the book ‘Weapons of Math Destruction’, echoes Eubanks’ point. She notes that “the people who own the code (or AI, or algorithm) then deploy it on other people. There is no symmetry of power here”. It seems as though the combination of AI and power results in a world that the powerful want. This creates an illusion of the technology’s autonomy, when in fact it is the power backing the technology that makes it a weapon in the hands of the powerful, a weapon of mass destruction indeed.

Increasingly, the use of AI by the world’s major militaries has cast it in a new light in terms of what it means to the nation state. It takes on a nationalistic, patriotic value, whereby questioning the legitimacy of state-sponsored AI becomes akin to doubting the nation’s credibility. In countries with strongly nationalistic politics, the risk of creating dangerous AI is higher because dissent and critique are scarce. The danger here does not lie in the capacity of the weapon to think autonomously; it lies in the intention of the military body that writes the code the weapon acts on. Once deployed, the coder cannot walk it back on human, ethical grounds: the AI will act on the code as written, which has damaging implications when the weapons in question are capable of mass destruction. This is precisely what Janelle Shane refers to as the single biggest ‘danger’ of AI in her TED talk.

It is clear that AI-based technology and algorithms are flawed, and that they draw discriminatory conclusions based on generalized data. It is therefore important to note that AI technology and algorithms are not autonomous, and that such modes of intrusive surveillance need not always be consented to. The conclusions drawn by AI must be questioned, their codes rewritten, and the owners of the algorithms held responsible for weaponizing technology against the vulnerable. The modern nation state is built on people’s consent to be governed by those they elect, not by Artificial Intelligence.


AI World School

AIWS is an online self-learning platform providing transformational AI and Coding technology education to students at home, to homeschoolers and in K12 schools.