Two-thirds of people believe AI has some form of consciousness or feeling, a belief that could affect how they interact with machine learning systems.
A recent survey from the University of Waterloo revealed that most people (two-thirds) believe that AI tools like ChatGPT have some form of consciousness and can experience feelings and memories. This is not true, but consumer-facing large language models tend to adopt a conversational style, which may be causing the confusion.
The researchers argue that if people believe AI has some level of consciousness, it could change how they interact with AI tools. Believing an inanimate tool has feelings and thoughts could lead people to develop social bonds with the models.
While this might initially result in greater trust in the tools, it could eventually lead to emotional dependence, reduced human interaction, and over-reliance on AI for critical decisions, even though the technology has repeatedly been shown to be fallible.
“While most experts deny that current AI could be conscious, our research shows that for most of the general public, AI consciousness is already a reality,” said Dr. Clara Colombatto, professor of psychology at Waterloo’s Arts faculty.
Is ChatGPT conscious?
The survey covered a stratified sample of 300 people in the US, asking if they thought ChatGPT had the capacity for consciousness, as well as a variety of other mental states like the ability to make plans, reason, and feel emotions. The researchers also asked how often participants used the tool.
The results showed that the more people used ChatGPT, the more likely they were to attribute some level of consciousness to it.
“These results demonstrate the power of language because a conversation alone can lead us to think that an agent that looks and works very differently from us can have a mind,” said Colombatto.
“Alongside emotions, consciousness is related to intellectual abilities that are essential for moral responsibility: the capacity to formulate plans, act intentionally, and have self-control are tenets of our ethical and legal systems. These public attitudes should thus be a key consideration in designing and regulating AI for safe use, alongside expert consensus.”
This is especially important as AI models become more capable and their reasoning grows more sophisticated.
But just to be clear: ChatGPT is a tool for generating human-like text that can perform a multitude of language-based tasks. It does not possess consciousness or self-awareness… yet.
Featured image: Ideogram