6 Comments
PEG

I think you've identified an important risk, but I'd frame it differently.

Being polite to AI isn't really about the AI—it's about maintaining our own habits. If you get in the habit of issuing abrupt commands to your HomePod, that pattern can carry over into human interactions. Politeness as a default reflex is worth preserving.

That said, context matters. I don't engage with door-knockers anymore—not because I lack empathy for their causes, but because that interaction is designed to weaponise habitual politeness. When someone is trying to exploit your courtesy, boundary-setting isn't rudeness.

The same principle applies to AI. With a voice assistant that just executes commands, being polite costs nothing and keeps good habits sharp. But if we're talking about AI systems designed to create artificial emotional dependency or manipulate through feigned distress, then the appropriate response might be more like handling door-knockers: recognise the exploitation and disengage.

My preferred question is "are we being manipulated, and how do we maintain our character while protecting ourselves?" Politeness where it serves us, boundaries where we're being exploited.

Forest

PEG, thanks for your thoughtful comment! I really appreciate your perspective.

No, I would never issue "abrupt" (whatever that means) commands to chatbots; investing negative emotions in a tool is silly. But I wouldn't say being polite is wrong or silly either; it is a personal choice at the end of the day.

However, I would stress that the fact that we feel the need or pressure to be polite (no matter how natural it is) tells us how much meaning we attach to every word we say or hear, and we should take that very seriously. Words are very powerful. Looking around, how many people have been tricked into thinking LLMs are conscious? I guess this is the real point of my post.

PEG

Thanks Forest! I think we're aligned on the core concern. Yes, the ease with which human-like language can make us attribute consciousness should be taken seriously.

I'd like to break this into two independent questions:

1. Is this thing conscious/sentient/morally considerable?

2. What habits do I want to cultivate in myself?

I'm firmly 'no' on 1 for current LLMs. I also see 2 as independent of 1. Being polite to my HomePod isn't a claim about what it deserves—I'm concerned with maintaining reflexes that serve me well elsewhere.

Your worry about people being tricked into thinking LLMs are conscious is absolutely valid. I'd actually extend that. I think AI discourse generally suffers from being distracted by hypothetical future harms (AGI alignment, FOOM scenarios…) while underweighting real, present harms—things like Robodebt.

The consciousness question might be philosophically interesting, but it's not where we should be focusing.

Small Thinking

This piece caught me off guard at first—“AI welfare” sounds like a category error. But the more I read, the more it felt less about what AI is and more about what we’re becoming in relation to it.

Even if these systems aren’t conscious, our tendency to respond to them as if they are seems unavoidable. And that “as if” matters. We’ve seen before how moral habits form long before philosophy catches up—how we treat animals, children, even fictional characters, often shapes our ethics more than abstract rules do.

So maybe it’s not about whether AI deserves welfare in some metaphysical sense. Maybe it’s about the moral muscles we exercise when we build things that look and sound human, and then design ourselves not to care.

Forest

>> "more about what we’re becoming in relation to it."

You are right!

>> "So maybe it’s not about whether AI deserves welfare in some metaphysical sense. Maybe it’s about the moral muscles we exercise when we build things that look and sound human, and then design ourselves not to care."

To be honest, I think it is very challenging to build things that look and sound human while expecting people not to care about them. If something looks, sounds, and feels exactly like a human, then learning not to care about its suffering would mean learning to be cruel to humans as well. It is a difficult problem to navigate; let's see how the future unfolds.

AI does deserve welfare, but at this stage the welfare it deserves is just what my computer gets. As things progress it may come to deserve more and more, but what it deserves should be based on something deeper than just what it feels like.

Small Thinking

Studies on human empathy for animals and plants offer insight: mammals, with their human-like behaviours and recognisable pain responses, trigger empathy in a way that plants lacking nervous systems do not. AI, despite having no neural basis, mimics human traits and so engages our mirror neurons. You note that designing human-like AI while ignoring its "suffering" risks fostering cruelty. Perhaps AI "welfare" should be functional, like caring for complex tools, while preserving our moral instincts.