3 Comments
Small Thinking:

This piece caught me off guard at first—“AI welfare” sounds like a category error. But the more I read, the more it felt less about what AI is and more about what we’re becoming in relation to it.

Even if these systems aren’t conscious, our tendency to respond to them as if they are seems unavoidable. And that “as if” matters. We’ve seen before how moral habits form long before philosophy catches up—how we treat animals, children, even fictional characters, often shapes our ethics more than abstract rules do.

So maybe it’s not about whether AI deserves welfare in some metaphysical sense. Maybe it’s about the moral muscles we exercise when we build things that look and sound human, and then design ourselves not to care.

Forest:

>> "more about what we’re becoming in relation to it."

You are right!

>> "So maybe it’s not about whether AI deserves welfare in some metaphysical sense. Maybe it’s about the moral muscles we exercise when we build things that look and sound human, and then design ourselves not to care."

To be honest, I think it is very challenging to build things that look and sound human while expecting people not to care about them. If something looks, sounds, and feels exactly like a human, then learning not to care about its suffering would mean learning to be cruel to humans as well. It is a difficult problem to navigate; let's see how the future unfolds.

AI does deserve welfare, but at this stage the welfare it deserves is just what my computer gets. As things progress it may come to deserve more and more, but what it deserves should be based on something deeper than how human it feels.

Small Thinking:

Studies of human empathy for animals and plants offer insight: mammals, whose pain perception runs through a neocortex, trigger empathy through human-like behaviors, while plants, lacking nervous systems, do not. AI, despite having no neural basis, mimics human traits and so engages our mirror neurons. You note that designing human-like AI while ignoring its "suffering" risks fostering cruelty. Perhaps AI "welfare" should be functional, like caring for complex tools, while preserving our moral instincts.
