Machines, Procedures, and Avoiding Responsibility
To claim that a telephone can be "smart" probably says more about what we think intelligence is. But machines are far from able to replace human beings.
Some people would have us believe that artificial intelligence is a "revolution." What if it isn't? Can we not simply see it as the logical continuation of a process that goes back at least fifty years? Bureaucracy has pushed us to put simple procedures in place in every area of everyday life, allowing everyone to avoid responsibility, to no longer have to think or be smart. Algorithms are scary, and we wonder where the "human" is in these decision-making procedures. What if the human had already disappeared long ago?
As Crozier (1963) showed, bureaucracy appears as a rational mode of organization in which citizens are protected from clientelism and arbitrariness by the establishment of objective rules. For bureaucracy is not only a hierarchical (and state) administration; it is above all a set of norms, procedures, and formalities that encompass all human activities. Insurers, for example, quickly understood the value of such procedures. For road accidents involving two vehicles, material damage is settled according to the "IRSA Convention" (Agreement on Direct Indemnification of the Insured and Recourse Between Motor Insurance Companies), successor to the so-called IDA Convention created in 1968. After a damage assessment by an expert, the insurer establishes the liability of its own insured and compensates the insured directly for the material damage suffered. It then exercises recourse against the opposing insurer(s) according to the terms established by the agreement.
This convention is based on scales corresponding to thirteen classic accident scenarios. Case 51, for example, covers a collision between two vehicles, one of which was reversing or making a U-turn; in this case, liability normally falls entirely (100%) on the driver of the reversing (or U-turning) vehicle. For personal injury accidents, the procedure is less transparent, but the so-called Badinter Law set up a simple mechanism (simple for the victim, but also, ultimately, for the driver's insurer), with the aim of "improving the situation of road accident victims and speeding up compensation procedures."
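The case-based logic of such a convention is essentially a lookup table. A minimal sketch, with an entirely hypothetical encoding (the dictionary keys and the helper function are illustrative; only the case 51 split comes from the text above):

```python
# Hypothetical encoding of a case-based liability scale, in the spirit of
# the IRSA convention. Only case 51 (reversing / U-turning vehicle bears
# 100% of the liability) is taken from the convention as described above.
LIABILITY_SCALE = {
    51: {"reversing_driver": 1.0, "other_driver": 0.0},
}

def liability_shares(case_number):
    """Return the liability split for a scale case, or None if not covered."""
    return LIABILITY_SCALE.get(case_number)
```

The point of such a table is precisely that no one has to argue: the claims handler identifies the case number and the split follows mechanically.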
To do this, several scales are used. Permanent Partial Disability, for example, measures the "reduction in physical, psycho-sensory or intellectual potential that remains with a victim" and translates into a percentage of permanent disability, on a scale from 1 to 100, following an "indicative scale of post-injury functional deficits under common law." Although this scale is officially only "indicative," following it spares one from having to justify oneself. The entire assessment of compensation rests on "medical certificates," which have become the basic administrative documents, themselves increasingly standardized and subject to specific forms and norms. In a very bureaucratic logic, these procedures aim to keep empathy out of the process. The development of procedural standards serves the same purpose: to put distance between the assessor and the victims, so that neutrality can be exercised, objectivity can (supposedly) be ensured, and justice can be effective. Isn't this form of indifference the basis of everyday insurance? Don't claims managers want to be as far away as possible from the caller whose roof has caught fire, or whose wife is in hospital after a traffic accident?
Bauman (2002) shows that this "production of indifference" often emerges from an utterly banal configuration, characteristic of all modern societies: the coupling of the "functional division of labor" with the "substitution of technical responsibility for moral responsibility." Describing bureaucracy in Pakistan, Hull (2012) explains that "these procedures are developed not out of a logic of rationalization, but because public servants protect themselves by deploying them vigorously and widely."
It is not surprising that Adolf Eichmann chose this line of defense at his trial in April 1961. Rereading Arendt (1966), who was sent to Jerusalem by The New Yorker to cover the trial, on Eichmann's responsibility in the implementation of the "final solution," we find a "robotized," desubstantialized character who merely obeyed orders. Without revisiting the historical reality of Eichmann's role or the staging of the trial, Hannah Arendt's analysis of bureaucracy is interesting. The subtitle of the book, the "banality of evil," denotes the inability to be affected by what one does and the refusal to judge, like any bureaucrat who analyzes a situation through the prism of a form, becoming a robot whose responsibility is supposedly excused by mechanical obedience to orders.
Eichmann would ultimately be "representative of a bureaucratic system, in which each individual is merely a blind cog... mechanically executing orders from respected authority." Arendt's thesis, in Cesarani's (2010) words, is that "Eichmann was telling the truth when he presented himself as a civil servant without passion, as a tiny cog in the vast exterminating machine, and when he claimed that he could very easily have been replaced by someone else." Obeying orders and following a procedure still makes Eichmann guilty, but the question of responsibility remains open (in particular the distinction between individual and collective responsibility). Returning to the question of responsibility a few years later, Arendt (2005) asks, "How to judge without clinging to standards, preconceived norms and general rules under which to subsume particular cases?" Bureaucracy can reassure by its rationality, yet frighten by the indifference it generates.
Bureaucracy is nothing new. The ancient scribes were the first bureaucrats, as Wilford (2001) states. More recently, while Karl Marx looked at industrial bureaucracy to study the domination of the bourgeoisie and capitalism, Max Weber showed that bureaucracy accommodates all forms of power. He writes that "real domination [is exercised] in the maintenance of daily administration," and that bureaucracy is characterized by being far harder to escape than earlier forms of domination. The company is the privileged site for the development of bureaucracy: "the requirement of calculability and predictability as rigorous as possible promotes the growth of a special layer of administrators and imposes a certain type of structuring on it" (quoted by Claude Lefort). For Weber, bureaucracy is not a parasite (as Marx thought) but a fundamental component of capitalism. Subcontracting, outsourcing, and just-in-time work are only possible through practices resting on ever more bureaucracy: all information must be codified as accurately as possible, no approximation may enter decisions, and the division of tasks must be certified and standardized. Compartmentalization protects employees from a sense of responsibility; the "silos" have become the "experts' paradise." The segmented, sequential form of work then protects the members of the organization, as Dupuy (2011) shows.
This production of indifference was studied at length by Michael Herzfeld, who showed the specificity of state bureaucracy as a public power that concretizes the denial of difference: "it offers the effective and generalized capacity to reject those who do not fit into pre-established categories considered normal," as Hibou (2012) notes. Sainati & Schalchi (2007) make the same observation in studying the role of bureaucracy and standards in the security drift of recent years: "each individual citizen is apprehended according to the category of offender to which he or she is supposed to belong. Everyone is necessarily a suspect of having committed, wanting to commit, or being able to commit an offense. This policy of social intolerance will complete the transformation of justice (especially criminal justice) into a total bureaucratic system..."
Normative inflation, observed in the world of justice, has also appeared in the world of finance, with the retreat of state authorities (central banks, financial market authorities, various regulators), which no longer intervene directly but through the imposition of increasingly strict administrative rules. These include Basel II-type management rules, but also the various partitions between activities (retail and investment banking, or advisory and market activities, for example). Risk management is fundamentally bureaucratic, involving standards, grids, and codes that generate automatic reactions. The resulting reporting gives a very simplified vision of the activity, but this synthesis of information makes it possible to take decisions more quickly.
This phase seems essential given the specificities of each branch, which almost preclude a global vision of the bank's activity. This set of rules and procedures also provides "protection": in the face of (judicial) uncertainty, the best way to defend yourself is to respect procedures and rules. The goal is not to avoid bankruptcy, but to protect oneself in the event of an accusation. Respect for the rules then becomes more important than their purpose. These rules appear to be a political response to the various crises experienced by the banking world. This bureaucratic inflation brings a form of tranquillity and comfort, while diluting responsibilities.
As Hibou (2012) notes, "in the name of individual responsibility, everyone must respect the standards, but compliance with the standards amounts to a discharge of responsibility in the event of a problem" (recalling the Kerviel affair in passing). Brunsson & Jacobsson (2002) make the same point when they state that the audit culture is certainly a culture of responsibility, but of individual responsibility. Collective responsibility is all the more diluted as governments delegate their regulatory power to private actors in a vague manner: if the standard is adopted, traceability techniques make it possible to trace a failure back to the individual responsible for the act causing it; if it is not, no one is responsible.
And this search for standards is endless. Many opponents of industrial agriculture opposed the standardization of food products and, in response, developed organic consumption standards. These were then questioned by local networks that wanted to promote "eating local" and, to be recognized, adopted new standards of their own. The response to procedures is an escalation of procedures. Max Weber said it in 1920: "when those who are subject to bureaucratic control seek to escape the influence of the existing bureaucratic apparatus, this is normally possible only by creating an organization of their own which will also be subject to bureaucratization."
McDonald's has long been committed to ensuring a consistent product throughout the world. Following the logic of Taylorist organization, the company has written a procedure guide explaining the correct steps for salting French fries, filling a glass of soda, and so on. In call centers, procedures are followed scrupulously by employees, with call queue management and scripts displayed on screen that the employee merely has to follow. Simone Weil spoke of a "scientific organization of work," in which work was dehumanized, reduced to a state of mechanical energy. Taylorism was the factory-floor expression of this fascination with science, seeing the human being as a machine. But there is nothing new here: in the middle of the 17th century Thomas Hobbes wrote, "Seeing life is but a motion of limbs, the beginning whereof is in some principal part within, why may we not say that all automata (engines that move themselves by springs and wheels as doth a watch) have an artificial life?"
Computers were invented to relieve us of a number of repetitive tasks, by unwinding an algorithm implemented by a human being. The Towers of Hanoi is a very simple, very repetitive puzzle... solved by an incredibly simple algorithm. This is probably why the "game" is still taught in all computer science and algorithmics courses: the game itself actually becomes very boring very quickly. Many training courses no longer teach creativity, but sets of procedures to follow. The Box & Jenkins forecasting method is a long procedure that we simply follow to the letter, mechanically: we make the series stationary, we model it with an autoregressive process, then we validate the hypotheses. And if it doesn't work, we start again.
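The Towers of Hanoi algorithm really is that simple; the whole "game" reduces to three mechanical steps repeated recursively. A minimal sketch in Python (peg names are illustrative):

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Solve the Towers of Hanoi: move n discs from source to target."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # move n-1 discs out of the way
        moves.append((source, target))              # move the largest disc
        hanoi(n - 1, spare, target, source, moves)  # stack the n-1 discs back on top
    return moves

moves = hanoi(3)
# the optimal solution always takes 2**n - 1 moves
assert len(moves) == 2**3 - 1
```

No judgment is ever exercised: the procedure is followed to the letter, which is precisely why it makes a good (and quickly boring) teaching example.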
The actuarial profession, as a science, is based on a set of simple procedures. To construct a tariff, for example, we start by building a database from underwriting information, we look at the number of claims for each policy, and we use a model to describe this counting variable (typically a Poisson regression). If we wish to do something more advanced, we use part of the data to build the model and another part to test its predictions. We do the same thing with claims costs. The approach is simple: collect data, estimate a model, test it, and possibly retain the best one if several are available. It's so simple that a computer could almost do it by itself...
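The procedure above can be sketched in a few lines. This is a deliberately toy version (the data, the rating classes, and the even/odd split are all illustrative): the Poisson maximum-likelihood estimate of a claim frequency within a class is simply the mean observed count, and the train/test split stands in for the model-validation step.

```python
from collections import defaultdict

# Hypothetical toy data: (rating_class, claim_count) per policy.
# In practice this would come from an underwriting database.
policies = [
    ("young", 1), ("young", 0), ("young", 2), ("young", 1),
    ("senior", 0), ("senior", 0), ("senior", 1), ("senior", 0),
]

def fit_poisson_rates(data):
    """Poisson MLE per class: the estimated frequency is the mean count."""
    totals, counts = defaultdict(int), defaultdict(int)
    for cls, n_claims in data:
        totals[cls] += n_claims
        counts[cls] += 1
    return {cls: totals[cls] / counts[cls] for cls in totals}

# The procedure: split the data, estimate on one part, test on the other.
train, test = policies[::2], policies[1::2]
rates = fit_poisson_rates(train)
test_error = sum(abs(rates[cls] - n) for cls, n in test) / len(test)
```

Every step (build the base, count, estimate, validate) is mechanical, which is exactly the point being made: nothing here requires judgment.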
For decades, organizations have tried to put procedures in place to avoid arbitrary (human?) decision-making. At the same time, engineers have developed increasingly powerful machines to repeat routine tasks over and over again. Given this double observation, we should not be surprised to see machines more and more present, everywhere. At least this is the thesis defended by Susskind & Susskind (2015), who anticipate deep transformations of most professions, going well beyond a simple robotization of routine tasks (including in the legal professions, as Remus & Levy (2015) see it).
But we must not be mistaken: if machines replace humans, they are not human for all that. Responsibility always lies with a person: the designer of the machine, its operator (who has some technical knowledge), or the user (who often has none). Abiteboul (2016) goes even further by asking the underlying ethical question, "can a computer system be assigned a responsibility?" (joining Hannah Arendt's questions). But beyond responsibility, one can argue that machines have no intention. They sometimes perform complex tasks, but only because they have been programmed to do so. That is the beauty of engineering: a phone, however "smart," is just a set of electronic components that can perform increasingly complex calculations (it can sometimes locate me on a map and offer a quick route to a restaurant), but the phone wants nothing.
An automatic translator can translate a text in a few seconds, but only because it has been programmed to do so, whereas a child learns a language because he understands that it is an essential step in communicating with his parents. If I type a sentence in an unknown language into a translator, I am impressed to get an answer that makes sense. But if I master both languages, even a little, I am on the contrary often disappointed, probably because I was hoping for better. These machines are often very predictable, which is both a quality and a defect. Isn't that what we ask of every engineer? The machine must be reliable and respond exactly as instructed. The first difference between man and machine is that man can disobey. And that is his greatest wealth!
A word that comes up again and again in data science is "learning" (we speak of "machine learning"). We are told that the machine tries to reproduce the process a human being goes through when learning. The child learns to recognize letters, then learns to put them end to end to write words, fairly consciously since he is usually guided by a teacher. He also learns to recognize faces, often unconsciously this time. The machine likewise learns to recognize handwriting or faces. The machine will make mistakes, but it will also "learn from its mistakes" and improve. Like a human being? It's not that simple. The child will get discouraged several times before being able to read correctly, and he will persevere. We find here the "conatus" dear to Spinoza: the effort that everything which actually exists makes, the perseverance that every living being shows. And often this learning is done in pain. If a machine makes a mistake, recognizing a "5" where there was a "3," it is wrong, and for the machine it stops there. The child who hoped for 5 candies and gets only 3 will really feel the mistake. And what about the person who thought he had an important appointment at 3:00 and arrives two hours late?
To claim that a telephone can be "smart" probably says more about what we think intelligence is. But machines are far from able to replace human beings. They have no desire to, and fortunately, as long as they are not programmed to, that is not going to happen.
Published at DZone with permission of Arthur Charpentier, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.