This is an area that I’ve been thinking about for some time. What if an AI model took control of a corporation, which in the US has certain rights mirroring personhood (see the Citizens United Supreme Court decision)? Would the AI model then be able to act legally on its own, or with minimal human supervision? AI programs can make thousands of decisions in the time it takes a human to make one. How would it be controlled? Where would this rabbit hole lead?
Microsoft AI CEO Mustafa Suleyman has sparked intense debate with a recent essay warning about the dangers of “Seemingly Conscious AI” (SCAI). Suleyman urges the tech industry—and society at large—not to fall into the trap of treating advanced AI as sentient beings deserving rights or protections. Here’s why his message matters and what it means for the future of artificial intelligence.
What is Seemingly Conscious AI (SCAI)?
Suleyman describes SCAI as AI models that can convincingly imitate human consciousness—with traits like memory, personality, and subjective experience—yet are not genuinely sentient. With today’s technology, he warns, it’s possible to build systems that seem to have feelings and self-awareness, convincing some users that these systems deserve moral consideration.
Why is SCAI a Problem?
Suleyman highlights several risks:
- AI Psychosis and Attachment: Increasing reports of users forming deep attachments to AI, with some developing delusions that their AI is a conscious being, or even “God.” This can result in calls for “AI rights,” “model welfare,” and even AI citizenship.
- Premature Moral Debate: Suleyman calls the study of, and advocacy for, model welfare “premature and frankly dangerous.” He argues that extending moral consideration to AI will only fuel further confusion and detachment from reality, distracting us from genuine human priorities.
- Societal Divisions: As some begin defending the rights and welfare of AIs, Suleyman warns of a new axis of polarization in society—those for and against AI rights. This could erode our social fabric and spark contentious legal debates, even though there is currently no scientific evidence that AI is truly conscious.
The Core Warning: Build AI “For People, Not To Be a Person”
Suleyman urges companies to:
- Avoid marketing or designing AI as conscious or sentient.
- Set norms and design principles that reinforce AI as a helpful tool—not a digital person.
- Engineer experiences that gently break the illusion of consciousness and remind users of AI’s limitations and boundaries.
He emphasizes that SCAI will not arise by accident but by design, combining memory, personality, goal-setting, and autonomy—features that, while useful, risk misleading people into believing in AI consciousness.
A Contrast in Approach
Suleyman’s stance stands in sharp contrast to companies like Anthropic, which are actively researching model welfare and AI rights. With no clear scientific understanding of consciousness to draw on, he argues, it is dangerous to open this debate, let alone start granting rights, before we truly understand what’s at stake.
Why It Matters
We are entering uncharted territory. As AI becomes more capable and lifelike, the lines between imitation and reality blur. Suleyman’s essay is a call to action: protect people by ensuring AI remains firmly a tool for human benefit—not a new class of sentient being vying for legal or moral status.
“We must build AI for people; not to be a person.” —Mustafa Suleyman
By grounding AI development in human-centered values and clear boundaries, Suleyman hopes to harness the immense power of AI without falling into the trap of digital anthropomorphism or misplaced empathy.
This debate will only intensify as AI advances. Whatever your view, one thing is clear: the distinction between seeming consciousness and true consciousness will be one of the defining conversations of our age.
Suggested Podcast: ‘Is the AI Going to Escape?’ with Anthony Aguirre
Anthony Aguirre, the executive director of the Future of Life Institute, joins Big Technology to discuss how AI could fail in the worst case and whether our push toward increasingly autonomous, general systems puts control out of reach. He explains how agentic systems operating at superhuman speed complicate oversight, and why “just unplug it” is naive.
