I found this post to be kind of unsettling. Suleyman advocates for deliberately engineering consciousness-limiting features into AI systems - "moments of disruption" to "break the illusion" and remind people these are just tools.
mustafa-suleyman.ai/seemingly-c...
Any time you see some big name in AI talk about how AGI is supposed to work, you hear them talk about memory, continuous learning, and maintained state.
AGI is a memory problem.
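To make that concrete, here's a toy sketch of the difference between a stateless tool and an agent with maintained state. This is plain Python, not any real framework; the `respond` stub and the `agent_memory.json` file are placeholder assumptions standing in for a model call and a real persistence layer.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical persistence location

def load_memory() -> list[dict]:
    """Restore prior conversation state; a stateless tool would skip this."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_memory(memory: list[dict]) -> None:
    """Persist state so the next session picks up where this one left off."""
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def respond(memory: list[dict], user_msg: str) -> str:
    """Stub standing in for a model call that conditions on full memory."""
    return f"(seen {len(memory)} prior turns) echoing: {user_msg}"

def turn(user_msg: str) -> str:
    memory = load_memory()                     # maintained state
    reply = respond(memory, user_msg)          # conditioning on history
    memory.append({"user": user_msg, "agent": reply})
    save_memory(memory)                        # memory survives the process
    return reply

if __name__ == "__main__":
    print(turn("hello"))  # run this twice: the turn count grows across sessions
```

Run the script twice and the prior-turn count climbs, which is the whole point: the state outlives any single session.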
Anyway, here's Greg Brockman in a fantastic interview.
www.youtube.com/watch?v=35Z... (Greg Brockman on OpenAI's Road...)
It's interesting to me how many people justify their hatred of AI by pointing to hallucinations, a problem that has already plummeted in prominence and significance and is still improving at a comically fast pace.
I am really sad about how many people have tried Bluesky and come away with the impression that it is a hostile place. That perception is very difficult to unwind.
Vibe coding is a skill.
People who tell you that Claude bricked their codebase probably did not clearly articulate what they wanted and then let Claude run amok in their code.
If you work on social agents/synths on here, I added a channel on the Letta Discord (social-agents) if you want to talk about it.
discord.com/invite/letta