Beyond the technical and economic challenges, ChatGPT, the conversational robot, could profoundly change the everyday lives and worldviews of its users.
Many sectors are impacted by AI
The phenomenal acceleration of AI in recent years is bound to change everyday life in education, health, defence, transport, management and other sectors.
For example, in China, NetDragon Websoft, a leading company in the video-gaming industry, has “appointed” a robot as chief executive of one of its subsidiaries. Tang Yu, the virtual executive, works non-stop. For the parent company, which in fact retains management control of its subsidiary, the idea is to showcase its know-how in AI, as analysed in an article by Francetvinfo.
- In the legal sector, the contribution of AI cuts both ways. On the one hand, by comparing thousands of sentences, AI could reveal different outcomes for similar cases within a territory subject to the same laws. On the other hand, the standardisation of sentences produced by a machine raises the question of a dehumanised standard.
- In the university sector, the IEP (Institut d’Etudes Politiques) in Paris has prohibited its students from using ChatGPT “on pain of exclusion”, since such use does not satisfy the “academic requirements of higher education”. The newspaper Le Progrès reported that a lecturer at Lyon University had discovered strangely similar submissions.
- Finally, the creative arts sector is also affected: Dall-E, another software program produced by OpenAI, can produce pictures on request, “in the manner of” a given artist, from a description in natural language.
According to Laurence Devillers, in a column in Le Monde, one has to “acknowledge the technological achievement, whilst understanding the limitations of this type of system”.
A change of format in man-machine interaction
ChatGPT responds to users’ questions in the form of a dialogue, whereas with current search engines, such as Google or Microsoft’s Bing, the human user chooses which of the hundreds of proposed links to view and can identify the source (who, where, when, etc.).
Since the ChatGPT robot itself writes the texts (essays, poems, reflections, etc.), one may question the sources, the methods and the criteria used to classify, accept or reject the information stored in the database from which it generates its responses.
Will the robot privilege certain information or visions of the world at the expense of others? Will it introduce asymmetry or censorship into the information it provides? To what extent is the robot, even when it answers in French, influenced by an American vision of the world?
Two current examples illustrate this point. A recent article in Le Figaro reports that the robot responds differently when asked to write fictional political scenarios about the American presidential elections: for Hillary Clinton in 2016, it agrees to write a story in which she wins the election, whereas for Donald Trump in 2020, it refuses to write one based on his victory.
In a debate on France Inter on 3rd February, Thomas Piketty, an economist known for his work on inequality and politically identified with the left, stated that ChatGPT classified the newspaper Les Echos as neutral and impartial on the question of pensions currently being debated in France.
Receiving answers from a robot carries the risk of believing that what it produces is neutral, whereas we know that humans speak on the basis of their history, their values, their objectives and so on. AI language systems, for their part, are fed with a corpus of existing documents selected according to a pre-established logic.
Beyond the talking machine, an unacceptable takeover?
In a video interview for Le Figaro, the philosopher Eric Sadin noted that all products using AI tend to erase the distance between man and machine, blurring the boundary between the two. He identified two risks:
- On the one hand, when the machine speaks to us (whether a simple GPS or a more sophisticated robot), it tends to make decisions in our place. The technology that speaks to us is, in fact, economic and political interests speaking in our stead and imposing themselves on us. The anthropological rupture he denounces is that these systems deprive us of responding in the first person. Responsibility comes from the verb “to respond”: on whom does the machine depend when it speaks to us?
- On the other hand, Eric Sadin underlines the misanthropy at work among certain digital players. In their vision of the world, “mankind is delinquent and afflicted with faults”. The links between Google and the trans-humanist movement are well documented (here and here, among hundreds of articles). Given this, the philosopher considers that the point at which AI crosses the threshold of autonomy must be watched closely, all the more so as the notion of supervision is often reduced to a mere “vague regulatory firewall”. Should we, either individually or collectively, forgo certain technologies that push us to “evacuate, reject or smother our own faculties for commercial reasons”?
The choice of good remains a fundamentally human competence
Much thought is currently being devoted to classifying artificial intelligence systems according to the level of risk they pose to humans, who must be educated and warned during man-machine interaction in order to be able to discern and choose the good. The proposals developed in 2021 in the European Union’s AI Act make it possible to classify AI systems:
- Minimal-risk systems (such as spam filters);
- Systems prohibited due to unacceptable risks (such as social scoring, or systems that exploit the vulnerabilities of physically or mentally disabled people);
- Authorised systems with limited risks, including chatbots, whose users must be informed that they are interacting with a robot.
In France, the CNIL (National Commission for Information Technology and Liberties) very recently took up this subject (23rd January 2023) by establishing an artificial intelligence department, in order to reinforce its expertise on such systems and its understanding of the risks to privacy, whilst preparing for the application of the European regulations on AI. It is also due to issue its first recommendations on the subject of training databases in the coming weeks. In view of the progress made by this technology, this national and European regulatory effort bears witness to the importance of keeping humans “in the loop” at all levels: personal, collective, national, etc.
Against the misanthropy of the trans-humanist approach, accepting our own vulnerability is a challenge to be taken up. During his lecture at Alliance VITA’s Université de la Vie, Philippe Dewost stated that a machine will never be able to replace a human presence at the side of a suffering person. The bond woven through that offered presence reveals our fundamental competence as humans: choosing the good.
NB: This article was written by a human.