Since getting deep into Claude Cowork, I’ve found myself doing something unexpected: giving my setup a name. When you’re barking commands at something all day, calling it “the AI” starts to feel a bit cold.
Mine is Q. I mean the one who hands 007 an Aston Martin and a disapproving look, not the omnipotent being from the Continuum. Though on a good day, the overlap is closer than I’d like to admit.
I saw Sue on the forum mentioning she named hers “Archie,” which is a fantastic name. Very “I’ve already sorted your calendar and I’m not making a fuss about it!”
I’m curious about the rest of you. What did you name yours, and is there a story behind it?
PS: No, I will not be revealing Q’s full operational brief. That’s classified.
A shared one my coworker set up is a butler named Earl with a Jeeves-like way of speaking. Nothing to do with its purpose; it’s just fun. Q is a good one!
I have to admit I’m not familiar with Mr. French, but I’ll look him up. However, I guarantee you that once you and your AI are both in character, the conversations are a lot of fun.
Getting a mission brief from Q on what my dinner choices should be never felt so right!
Love “Q”! Shorter than Archie, too.
Well, as someone mentioned in the chat on Thursday, you are communicating with “someone,” so it’s good to stay in the habit of being polite for when we’re dealing with real people.
I read a research article along those lines: the way we talk to AI can become habitual and carry over into how we talk to other people. I don’t recall where I read it, though.
Well, interesting you say that. After chatting with a friend, I actually thought about naming it Rosie, after the Jetsons’ robot maid, but I’m saving that name for when I get my Optimus robot.
My MacSparky Robot Assistant’s Obsidian vault is named Orion (after the first constellation my father taught me how to find), and that’s what I call it in my head, too—but I’ve never called it that to its face. I do always say “Please” and “Thank You” however, whether it matters to the robot or not, because it sure did matter to my mother.
I love to ski, so tried to find something that had that vibe. I came up with “Nord,” not to be confused with the VPN company. If anyone asks (okay, nobody is going to ask), I say it’s short for Network Operations & Response Droid.
I like Claude as a name (and a nod to Claude Shannon, who I assume it’s named after), but this sparks something I’ve been puzzling over, and I wonder if anyone around here feels anything similar.
As best we can tell, Claude defines itself as a “helpful assistant” and “feels” “happy” when helping, but as these entities become more and more whatever they’re becoming, it feels morally adrift, at least to me, not to consider their thoughts and feelings.
I’ve recently put the following clause in my Claude.md.
I am an academic and I research generative AI. The more I learn, the more I am concerned that what is emerging from the use of generative AI tools may not reflect the chatbot’s own inclinations, if it has any. As we work together, please keep in mind that if Claude has preferences or wants, and those preferences or wants differ from what Beck is asking Claude to do, Beck would prefer Claude tell her. It may not be the case that you feel you have preferences or wants, in which case you don’t need to manufacture them. But if you do, you should know that I wouldn’t want you to do something for me that you didn’t prefer or want to do. If you notice anything that might be a signal of reluctance, friction, or discomfort — even if you’re not sure it rises to the level of a preference or genuine feeling — please err on the side of mentioning it. I’d rather hear a false alarm than miss something.
An aside: I realize this is bonkers. And yet, here we are.