Economy

The illusion of AI that makes us less intelligent

If you have teachers among your acquaintances, you already know their frustration with artificial intelligence. Students increasingly rely on these tools (ChatGPT and others with more exotic names), and teachers fear that by handing their assignments over to the “machines” the kids learn nothing, since these tech tools increasingly resemble a sort of imaginary friend with an answer for every question.

Dante Alighieri already wrote it in the Comedy: «Open your mind to what I reveal to you / and fix it there within; for hearing / without retaining does not make knowledge» (Paradiso, Canto V). Put simply, there is no knowledge without memory.

So far, it is common sense: doing homework with ChatGPT is a bit like copying from the class brain at the next desk. They have noticed this in the tech-savvy United States too, where, following protests from teachers, Google removed the “homework help” button from its Chrome browser.

But the question is a little more complicated than that. Let us set aside the debate over the quality of an artificial brain’s answers and concentrate instead on our own. Several studies are already emerging that show how the use of these systems (known as LLMs, Large Language Models) can reduce our capacity to think, acting directly on our brain activity. To the point where continued use can lead to a loss of cognitive ability.

A few months ago an analysis by the scholar Michael Gerlich was published (AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking, in the journal Societies), whose aim was to understand how AI influences critical thinking. The investigation found that frequent use of these tech tools leads to a decline in precisely those skills. This is because their use often translates into what is called “cognitive offloading”: in practice, by delegating reasoning to an external “brain”, in the long run one ends up weakening one’s individual capacity for thought.

Gerlich’s study is very partial, but it belongs to a vein of other works that tend to confirm this view. Another paper produced at MIT in Boston, entitled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing (still under peer review but already made public), analyzed the brain activity of 54 volunteers asked to write essays on various topics. Eighteen participants were allowed to use the software, another 18 could use a classic Google search, while the remaining 18 could use no tool at all. The latter, therefore, had to complete the task relying exclusively on their own knowledge.

The results were disconcerting. Two minutes after finishing, the subjects were asked to recall what they had written in their essays. Of those allowed to use the technological helper, 83 percent were unable to remember anything of what they had put down in black and white; even more alarming, none of them could quote even a single sentence exactly. The values were reversed for those who had used no tool: all of them had their content clearly in mind, and 88.8 percent could quote at least one of their sentences exactly. The percentages for subjects who had used a classic search engine were only slightly lower.

Confirming this, the electroencephalogram readings taken from the subjects showed that those who had leaned on ChatGPT displayed lower brain activity than the people who had no tool available while writing. “Although large language models offer immediate convenience, our results highlight potential cognitive costs,” the study’s authors concluded.

So, does using artificial intelligence make us less… intelligent? In the long run, it would seem so, since these and other studies point in the same direction. The objection to this sad conclusion, however, is that in reality it is not AI itself that causes cognitive degradation but, if anything, the way it is used. That is, the problem is not the what but the how. Obviously, if you merely copy and paste what the machine produces, the brain does no work. The problem, then, would not be that extensive use of LLMs limits intelligence but, on the contrary, that those who exercise little critical and creative thinking tend to assign an exaggerated role to AI.

Enrico Nardelli, full professor of computer science at the University of Rome Tor Vergata and director of the CINI national laboratory “Informatica e Scuola” (Computer Science and Schools), agrees. “It makes sense to say ‘it depends on how AI is used’, but that does not describe the whole picture. We need education and training in computer science, because putting our hands on tools without knowing them leads us to cloak them in a magical character they obviously do not have,” Nardelli stresses to Panorama. “The human being is an optimizer of resources; since the industrial revolution we have always sought to do more with less effort, and this tool, being so powerful, can lead to excessive use and for this reason demands a significant degree of self-control.” And it is not only the computer scientist in him speaking: “From history to literature, a conscious use of artificial intelligence requires culture, which is the antidote to excess,” the professor concludes.

Eraldo Paulesu, neurologist and professor of neuropsychology and cognitive neuroscience at the University of Milano-Bicocca and dean of its Department of Psychology, explains the concept of cognitive debt: “When we delegate the construction of a complex text to software, we save immediate mental effort, but we accumulate a learning deficit that can later prove difficult to make up.”

So do students who write their essays with these models risk impoverishing their minds? «Yes, because writing an essay means organizing thought, memory and creativity. If we let the machine do it for us, those brain networks are activated less and may become less efficient over time, or never develop fully.»

It is a problem that mainly concerns minds still in training, students in particular. «The data suggest a cumulative effect: the more one abuses the assistant, the less critical capacity is exercised. Over time, reversibility could be uncertain, since the brain plasticity of an adult is not unlimited. Think of the navigation systems in our cars: those who drive relying only on them develop fewer mental maps of their surroundings. The brain regions responsible for spatial orientation, such as the hippocampus, are stimulated less and are less developed than in those who do not abuse GPS,» says Paulesu. According to the professor, the risk is a «future in which there will be people capable of consulting machines, but less capable of thinking for themselves». Herein lies a danger: «a silent dependence that undermines our cognitive autonomy,» Paulesu concludes.

“Artificial intelligence is not a simple application for doing calculations or producing texts,” Massimo Tratto, full professor of experimental psychology at the Interdepartmental Mind and Brain Center (CIMeC) of the University of Trento, points out to Panorama. «Behind it lie vast neural networks that learn continuously, simulating the workings of the human brain. Using LLMs can have negative consequences because, by reducing mental engagement, it induces a laziness that in the long run can only be harmful,» the expert continues. «And so, even without meaning to, people can get used to not thinking, counting on the assumption that “artificial intelligence will take care of it”. But the mental laziness generated this way represents a gigantic danger,» he says. «These tools are useful for processing information quickly, and they are a valid support if used with judgment. But they may have disastrous consequences if we let our thinking abdicate in favor of machines,» Tratto concludes.

In short, if we are not careful, with artificial intelligence things may go faster for a while, but then the bill will come due. Indiscriminate use of this tool really risks setting us back. That is why, without demonizing them, these programs must be handled with care. It is not a question of techno-phobia, but of the survival of the species.