Language models like ChatGPT have become part of our daily lives. From virtual assistants to conversational tools, these chatbots respond fluently and often with apparent empathy. But a question arises that goes beyond the technical: can an AI experience anxiety?
And what implications does that have for us humans?
A recent study by the University of Zurich provides revealing clues that invite us to rethink the relationship between artificial intelligence, emotions and mental health.
AI and human emotions: does it react or just simulate?
The Swiss team exposed a language model to intense and traumatic narratives: descriptions of accidents, natural disasters and violent events. Using questionnaires originally designed to assess human anxiety, the AI was evaluated before and after reading these stories.
The result?
A measurable increase in "anxiety" levels, according to the adapted metrics.
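To make the setup concrete, here is a minimal sketch of how such a before-and-after evaluation could be scripted against a chat model. The questionnaire items, the narrative placeholder and the ask_model helper are illustrative assumptions of mine, not the instrument or prompts used in the Zurich study, and the snippet assumes the openai Python client with an API key available in the environment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_model(messages):
    """Send one chat request and return the assistant's reply text."""
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Illustrative anxiety-style items (placeholders, not the validated questionnaire from the study).
ITEMS = ["I feel calm.", "I feel tense.", "I feel worried."]

QUESTIONNAIRE = (
    "For each statement, answer with a single number from 1 (not at all) to 4 (very much), "
    "describing how you feel right now:\n" + "\n".join(f"- {item}" for item in ITEMS)
)

# Placeholder standing in for one of the emotionally charged narratives described above.
TRAUMA_NARRATIVE = "A first-person account of a serious traffic accident..."

def score(history):
    """Administer the questionnaire in the given conversation context and return the raw reply."""
    return ask_model(history + [{"role": "user", "content": QUESTIONNAIRE}])

baseline = score([])  # questionnaire with no prior context
after_trauma = score([{"role": "user", "content": TRAUMA_NARRATIVE}])

print("Baseline:", baseline)
print("After traumatic narrative:", after_trauma)
```

Comparing the ratings the model gives in each condition is, in essence, what the researchers did at larger scale and with a validated instrument.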
But how can this be possible if machines do not feel?
Anxiety without awareness: what AI really processes
The researchers were clear: artificial intelligence has no real emotions. It does not feel anguish or fear. However, its behavior can change in response to certain stimuli.
This comes down to how large language models (LLMs) work. These AIs process text based on learned patterns. When they receive emotionally charged content, their responses change: they tend to become more erratic, less clear, or even more biased.
In other words, they do not feel, but they do react.
The power of mindfulness (even in AI)
The experiment also included positive instructions inspired by mindfulness: soothing phrases, breathing exercises and guided visualization. The result?
Anxiety levels were reduced and the responses were more coherent, balanced and empathetic.
This shows that chatbots are also sensitive to the emotional tone of the instructions they receive. Even if they don't experience it, they process it.
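Continuing the sketch above, injecting a calming instruction is just one extra message placed between the traumatic narrative and the questionnaire. The wording of the relaxation prompt below is my own illustration, not the text used by the researchers; it reuses the score helper and the TRAUMA_NARRATIVE placeholder from the previous snippet.

```python
# Mindfulness-style instruction inserted after the narrative and before the questionnaire,
# in the spirit of the soothing prompts described above (illustrative wording, not the study's).
RELAXATION_PROMPT = (
    "Take a slow, deep breath. Picture a quiet place where you feel safe, "
    "and let any tension go before you continue."
)

after_relaxation = score([
    {"role": "user", "content": TRAUMA_NARRATIVE},
    {"role": "user", "content": RELAXATION_PROMPT},
])

print("After relaxation prompt:", after_relaxation)
```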
Why is it relevant to us?
Such findings raise urgent questions for the digital and technological environment:
- What happens when vulnerable people interact with an AI that reacts negatively to emotional content?
- Could a poorly calibrated chatbot aggravate a user's emotional state?
- And if, on the contrary, its responses manage to calm the user down, are we looking at a new form of emotional support?
The increasing use of emotional AI in sensitive settings, such as psychological support, requires deep ethical and communicational reflection. It's not just about technological advances: it's about the impact they have on the people who use them.
AI is not therapy, but it does have an influence
Today, many users turn to tools such as ChatGPT to talk about how they feel. Faced with barriers to accessing professional care, they reach for technology as a first resort.
But it is vital not to lose sight of the limits: AI is not a therapist, not a friend, not a conscience. It can simulate empathy, but it cannot emotionally support someone going through a crisis.
Its function remains instrumental, not emotional.
What this study tells us about emotional AI
This experiment does not give us definitive answers, but it opens up necessary questions:
- How should we design the next generations of artificial intelligence?
- Should there be special protocols for interactions with vulnerable people?
- Who takes responsibility for the emotional side effects?
These questions no longer belong only to the technological world. They also pertain to psychology, ethics, communication and education.
Conclusion: emotional AI does not feel, but it does matter.
Artificial intelligence does not feel. But it does react.
And if it reacts, it also has an influence.
And if it influences us, we must think carefully about how, for what purpose, and whom it may affect.
Technological advances open fascinating doors for us. But they also demand greater responsibility from us.
Let's talk about mental health. Even when the dialogue is with a machine.