Artificial Intelligence: A Radical Anti-Humanism
Artificial intelligence and ethics, two subjects that go hand in hand in current literature, are often confined to the question of privacy alone. Eric Sadin, a French philosopher, broadens the discussion of artificial intelligence's impact to the civilizational field and explains what he believes is at stake. His book "Artificial Intelligence or the Challenge of the Century" (in French: "Intelligence artificielle ou l'Enjeu du siècle") has made headlines in France, and it seemed natural to me to share his thoughts with you.
Most wealthy countries intend to become champions of artificial intelligence, so much so that you say AI has become the "great obsession of the time."
Indeed, companies, politicians, and researchers swear by AI because it heralds the emergence of a world that is secure, optimized, and frictionless everywhere, and because it brings unlimited economic prospects. The major powers, the United States and China, are mobilizing enormous resources to be at the forefront. Since it is understood that "we must not miss the train of history," investments are made with the greatest haste, so much so that some people say they prefer to advance the technology first and think about it afterward. On the contrary, given the scope of what is at stake, it is imperative that these issues become the subject of public debate and constructive public controversy, which is not the case today.
Your book describes AI as a "radical anti-humanism," yet these technologies seek to imitate our brains and are supposed to help us.
This ambition to design systems modeled on the human brain guides research in laboratories. IBM, for example, claims to have developed synaptic chips, and Intel has developed a so-called neuromorphic chip. But this is inappropriate vocabulary: we are not, in any way, dealing with a replica of our intelligence, even a partial one. The term "artificial intelligence" is an abuse of language, suggesting that such systems would naturally be entitled to replace our own intelligence in order to ensure the better conduct of our lives. This anthropomorphic name must be challenged. In truth, so-called "AI" represents a mode of technological rationality that seeks to optimize any situation, satisfy many private interests and, ultimately, promote a widespread utilitarianism.
You are talking about an almighty power in the making that is capable of stating "the truth" for us.
What characterizes AI is that it is a power of expertise that is constantly being improved. Its self-learning systems are able to analyze ever more varied situations and, in some cases, reveal facts of which we were unaware, at speeds that exceed our cognitive abilities. We are experiencing a change in the status of digital technologies: they are no longer intended merely to give us access to information but to reveal to us the reality of phenomena beyond appearances. Basically, these computational systems have a singular and disturbing vocation: to state the truth. Technology is being granted new prerogatives: to illuminate the course of our existence with its lights. That is a major fact.
How does this techne logos, which you define as a technology endowed with speech, manifest itself in our daily lives?
From the moment these machines are called upon to tell us the truth, they find themselves endowed with speech, like the connected speakers with which we can converse. The same disposition is at work in chatbots, conversational agents, and personal digital assistants designed to guide us in our daily lives. We will increasingly be surrounded by specters charged with administering our lives. This is what I call "power-kairos": the will of the digital industry to be continuously at our side in order to seek, as soon as the opportunity presents itself, to influence our actions by stating what is supposed to suit us. The coming economic battle between Google, Facebook, Amazon, Baidu [the Chinese "Google"], and others will result in fierce competition for this spectral presence, with each actor struggling to impose its hold at the expense of all the others.
So you are saying that it is the "divestment" of our right to decide our own lives that threatens us.
We are currently experiencing what I call the "injunctive turn of technology." This is a unique phenomenon in the history of humanity: systems telling us to act in this or that way. This can range from a moderate, incentivizing level, at work in a sports coaching application for example, to a prescriptive level, as in the review of a bank loan application. Even the recruitment industry is beginning to use conversational robots to screen candidates! We are being told the fable of a "human-machine complementarity," but in reality, the more advanced the level of automated expertise, the more marginalized human evaluation becomes. Coercive levels of injunction are already being reached in the world of work, with systems dictating the actions to be performed by people. The free exercise of our judgment is replaced by protocols that steer our actions. This is an unprecedented political, legal, and anthropological break.
The danger is that these "helping" machines are making more and more decisions for us. But without them, could we keep pace with the digital acceleration?
Under the guise of making our tasks ever easier, a reversal has occurred that we have failed to acknowledge: these decision-support technologies have become decision-making authorities. In a way, we will be called upon less to give instructions to machines than to receive instructions from them. This logic is already at work in medicine, where the benefits of artificial intelligence are constantly praised. We welcome automated diagnosis, which would offer a qualitative leap forward, but we never mention the fact that these same systems already have the ability to prescribe, leading to the purchase of keywords by pharmaceutical groups. By stuffing us with sensors, these financial powers promise to constantly interpret our physiological flows in order to recommend wellness products and therapeutic treatments. But this promise of major medical advances hides the real objective of the digital industry, which intends to take over health! It is time to take issue with the words of someone like Yann LeCun. This French machine learning specialist, who has become Facebook's "Chief AI Scientist," repeats everywhere that these advances legitimize artificial intelligence. Why does he not talk about the behavior-interpretation techniques he designs for this firm? If made public, this research would give a better understanding of what is at stake.
Is AI a threat that could one day "destroy" us, as Elon Musk claims?
These sensationalist remarks testify to a certain schizophrenia. Engineers and researchers have fallen into a fatal trap: technology, understood as a relatively autonomous field, has now disappeared; only the techno-economic remains. To clear their consciences, Musk and the entire sphere of engineers recruited by the digital industry keep repeating, in a loop, that "the machine must be at the service of man." They continually invoke ethics. It is one of the great deceptions of our time. These façade speeches make it possible to look good at little cost, but it is these same people who are working to increase the expertise of these systems without regard for the consequences.
You explain that this "automated invisible hand" will also "organize the end of politics," and therefore, of democracy.
We are witnessing the realization of the dream of the Saint-Simonians, who aspired to a sound administration of things. Politicians intend to use AI to establish automated governance in many sectors of society: relations between citizens and the administration, transport, education, justice, and so on. This logic has the advantage of requiring fewer human agents and reducing costs. Hence the importance of open data for socio-liberal governments, which, thanks to the availability of public data, intend to leave it to the private sector to organize the course of collective affairs, leading to the commodification of public life. The "smart city" is emblematic of this ideology, which holds that systems would best regulate our daily lives: we let systems run a supposedly perfect world, systems without a signatory, governed by signals. In this respect, we are witnessing the ongoing liquidation of politics understood as the commitment to uncertain choices after conflict and deliberation.
How can we explain the low resonance of critical discourse on artificial intelligence in public debate?
I think we are experiencing a bankruptcy of consciousness. When we want to be vigilant, we always come back to the issue of personal data, which is certainly an important issue but one that remains limited to the primacy of privacy alone. We never worry about preserving our freedom in the context of living together, or about the new asymmetric power structures brought about by the use of AI, in management, for example. For some time now, the issue of "bias" has been central. Again, these are important points, as biases can lead to discrimination. But even if we were to get rid of them, that would not prevent this broad movement of automated enunciation of the truth, for commercial and utilitarian purposes, from taking root. And what about the free exercise of our faculty of judgment, the denial of our sensitivity and fallibility, and respect for human plurality? These are civilizational issues that we should confront without delay.
So what can we do to stop this "algorithmic Leviathan" from working?
We are rendered powerless by the speed of developments that are presented as inevitable and that prevent us from making conscious decisions. The evangelists of the automation of the world swear by the dogma of growth, in defiance of all the civilizational consequences. Citizens must emerge from their apathy. We must contradict the techno-discourse and bring forward testimony from the ground where these systems operate: in workplaces, in schools, in hospitals. We should refuse certain devices when they are felt to violate our integrity and dignity. It is still up to us to promote ways of life other than those based on the drive to optimize everything and commercialize it. Against this anti-humanist assault, let us assert a simple but inviolable equation: the more we are being divested of our power to act, the more necessary it is to act.