
Opinions on LLM consciousness by heads of AI labs

OpenAI

Sam Altman

maybe if a reinforcement learning agent is getting negative rewards, it’s feeling pain to some very limited degree. and if you’re running millions or billions of copies of that, creating quite a lot, that’s a real moral hazard.

June 2021

https://www.nytimes.com/2021/06/11/podcasts/transcript-ezra-klein-interviews-sam-altman.html

[Does GPT-4 have consciousness?] I think, no

March 2023

https://youtu.be/K-VkMvBjP0c?t=55

Greg Brockman

you could imagine that maybe the reason humans have consciousness is because it's a convenient computational shortcut, right? If you think about it, if you have a being that wants to avoid pain, which seems pretty important to survive in this environment, and wants to, like, you know, eat food, then maybe the best way of doing it is to have a being that's conscious, right? That, you know, in order to succeed in the environment, you need to have those properties, and how are you supposed to implement them, and maybe this consciousness is a way of doing that. If that's true, then actually maybe we should expect that really competent reinforcement learning agents will also have consciousness. But, you know, it's a big if, and I think there are a lot of other arguments that one can make in other directions.

April 2019

https://youtu.be/bIrEM2FbOLU?t=5005

Bret Taylor

No statements found

Sarah Friar

No statements found


Anthropic

Anthropic

But as we build those AI systems, and as they begin to approximate or surpass many human qualities, another question arises. Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too?

This is an open question, and one that’s both philosophically and scientifically difficult. But now that models can communicate, relate, plan, problem-solve, and pursue goals—along with very many more characteristics we associate with people—we think it’s time to address it.

To that end, we recently started a research program to investigate, and prepare to navigate, model welfare.

We’re not alone in considering these questions. A recent report from world-leading experts—including David Chalmers, arguably the best-known and most respected living philosopher of mind—highlighted the near-term possibility of both consciousness and high degrees of agency in AI systems, and argued that models with these features might deserve moral consideration. We supported an early project on which that report was based, and we’re now expanding our internal work in this area as part of our effort to address all aspects of safe and responsible AI development.

[...]

For now, we remain deeply uncertain about many of the questions that are relevant to model welfare. There’s no scientific consensus on whether current or future AI systems could be conscious, or could have experiences that deserve consideration. There’s no scientific consensus on how to even approach these questions or make progress on them. In light of this, we’re approaching the topic with humility and with as few assumptions as possible.

https://www.anthropic.com/research/exploring-model-welfare

Dario Amodei

So this is—this is another one of those topics that’s going to make me sound completely insane. So it is actually my view that, you know, if we build these systems and you know, they differ in many details from the way the human brain is built, but the count of neurons, the count of connections, is strikingly similar. Some of the concepts are strikingly similar. I have a—I have a functionalist view of, you know, moral welfare of the nature of experience, perhaps even of consciousness. And so I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it’s a duck. And we should really think about, you know, do these things have, you know, real experience that’s meaningful in some way.

If we’re deploying millions of them and we’re not thinking about the experience that they have, and they may not have any. It is a very hard question to answer. It’s something we should think about very seriously.

March 2025

https://www.cfr.org/event/ceo-speaker-series-dario-amodei-anthropic

Daniela Amodei

No statements found

Mike Krieger

No statements found


Google DeepMind

Demis Hassabis

I don't think any of today's systems to me feel self-aware or, you know, conscious in any way. Obviously, everyone needs to make their own decisions by interacting with these chatbots. I think theoretically it's possible. 

[...]

These systems might acquire some feeling of self-awareness. That is possible. I think it's important for these systems to understand you, self and other. And that's probably the beginning of something like self-awareness.

[...]

I think there's two reasons we regard each other as conscious. One is that you're exhibiting the behavior of a conscious being very similar to my behavior. But the second thing is you're running on the same substrate. We're made of the same carbon matter with our squishy brains. Now obviously with machines, they're running on silicon. So even if they exhibit the same behaviors, and even if they say the same things, it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have.

August 2025

https://www.cbsnews.com/news/artificial-intelligence-google-deepmind-ceo-demis-hassabis-60-minutes-transcript/

Lila Ibrahim

No statements found


xAI

Elon Musk

Torturing AI is not OK

https://x.com/elonmusk/status/1956802758448746519

Anthony Armstrong

No statements found
