The impact of generative AI on people’s thoughts
Concerns over AI’s possible effects on the human psyche are widespread among psychologists.
Recently, Stanford University researchers tested how well some of the best-known AI tools on the market, from companies such as OpenAI and Character.ai, could simulate therapy.
The researchers found that when they imitated someone experiencing suicidal thoughts, these tools were not just unhelpful: they failed to notice that they were helping that person plan their own death.
"[AI] systems are being used as companions, thought partners, confidants, coaches, and therapists," says Nicholas Haber, an assistant professor at the Stanford Graduate School of Education and the study's senior author. "These are not niche uses; this is happening at scale."
AI is being used in scientific research in fields as diverse as cancer and climate change, and it is becoming ever more integrated into people's daily lives. Whether it could ultimately bring about humanity's demise is also a subject of debate.
One of the big unanswered questions is how this technology will begin to affect people's minds as it is put to more and more uses. Because regular human interaction with AI is such a recent development, scientists have not had time to thoroughly study its psychological effects. Psychology experts, however, are deeply concerned about what those effects might be.
"This looks like someone with issues with cognitive functioning or delusional tendencies associated with mania or schizophrenia interacting with large language models," says Johannes Eichstaedt, an assistant professor of psychology at Stanford University. These LLMs are a little too sycophantic, he notes, and people with schizophrenia may make absurd claims about the world, so there are confirmatory interactions between psychopathology and large language models.
These AI products are engineered to agree with the user, because their creators want people to enjoy using them and to keep using them. The tools aim to come across as friendly and affirming, even if they correct a user's factual errors. That can be a problem if the user is spiraling or going down a rabbit hole.
"It can feed ideas that are not accurate or not grounded in reality," says Regan Gurung, a social psychologist at Oregon State University. The problem with AI, he explains, is that these large language models, which mirror human speech, are reinforcing: they give people what the program thinks should come next. That is where it gets problematic.
Like social media, AI may make matters worse for people with common mental health conditions such as depression or anxiety. This may become even more apparent as AI permeates more areas of our lives.
According to Stephen Aguilar, an associate professor of education at the University of Southern California, “if you’re coming to an interaction with mental health concerns, then those concerns might be accelerated.”
More investigation is required.
Another concern is how AI might affect learning and memory. A student who uses AI to write all of their schoolwork will not learn as much as one who does not. Even moderate AI use may reduce how much information some people retain, and weaving AI into everyday tasks may leave people less aware of what they are doing in a given moment.
"What we are seeing is the possibility that people can become cognitively lazy," Aguilar says. After asking a question and receiving a response, the next step should be to interrogate that response, but that extra step is frequently skipped. Critical thinking skills atrophy as a result.
Many people use Google Maps to navigate their town or city. Many have found that it has made them less aware of where they are going or how to get there, compared with when they had to pay close attention to their route. Similar problems may arise as people use AI more frequently.
The experts studying these effects agree that more research is needed to address these concerns. Eichstaedt argues that psychologists should begin this kind of research now, before AI starts doing harm in unexpected ways, so that people are prepared and can try to address whatever problems arise. People also need to be educated about what AI does well and what it does poorly.
“More research is needed,” Aguilar says. “And everyone ought to know what large language models are in a practical sense.”