Discussion about this post

PEG

I think you've identified an important risk, but I'd frame it differently.

Being polite to AI isn't really about the AI—it's about maintaining our own habits. If you get in the habit of issuing abrupt commands to your HomePod, that pattern can carry over into human interactions. Politeness as a default reflex is worth preserving.

That said, context matters. I don't engage with door-knockers anymore—not because I lack empathy for their causes, but because that interaction is designed to weaponise habitual politeness. When someone is trying to exploit your courtesy, boundary-setting isn't rudeness.

The same principle applies to AI. With a voice assistant that just executes commands, being polite costs nothing and keeps good habits sharp. But if we're talking about AI systems designed to create artificial emotional dependency or manipulate through feigned distress, then the appropriate response might be more like handling door-knockers: recognise the exploitation and disengage.

My preferred question is "are we being manipulated, and how do we maintain our character while protecting ourselves?" Politeness where it serves us, boundaries where we're being exploited.

Small Thinking

This piece caught me off guard at first—“AI welfare” sounds like a category error. But the more I read, the more it felt less about what AI is and more about what we’re becoming in relation to it.

Even if these systems aren’t conscious, our tendency to respond to them as if they are seems unavoidable. And that “as if” matters. We’ve seen before how moral habits form long before philosophy catches up—how we treat animals, children, even fictional characters often shapes our ethics more than abstract rules do.

So maybe it’s not about whether AI deserves welfare in some metaphysical sense. Maybe it’s about the moral muscles we exercise when we build things that look and sound human, and then design ourselves not to care.
