Why I’m Increasingly Moving Away from OpenAI Toward Anthropic

I can understand the rest of your reasoning - but why would you want tools that censor what you can use them for?

I am not saying you should or should not use it for any particular purpose. But why not leave that decision up to you?

1 Like

How does it help, thinking that way?

(Genuinely curious! I find the tools massively helpful. I wonder if you do too, and you’re reframing them in a way that helps you even more. Or perhaps you don’t find them helpful, and this framing explains why. Or something else!)

1 Like

ChatGPT helped me find a metaphor that changed how I help people. I’ve never seen anyone else use it. It was one of dozens we explored. It helped one of my clients win a prestigious award.

So … that was incredibly helpful and valuable.

I’m not sharing that to contradict you. I’m just curious to see how that fits with the filter view.

Would greatly appreciate it if you wouldn’t mind sharing the metaphor, @Clarke_Ching.
Sorry for going off topic!

Have only tried Claude briefly in the past. Will spend some more time with it!

Quick request: can we include “free version” or “paid version” in posts when identifying specific AI tools? (Bonus points if you can say whether the results differ between free and paid.)

My personal struggle is copy/paste and formatting of results: is Claude better than ChatGPT at getting output into other documents and notes, and does it handle copying tables better?

(Well, I should ask whether it handles tables at all, since I’ve given up trying to get ChatGPT’s tables and lists into other docs in any simple way.)
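If anyone else is fighting the same battle: one workaround is to ask for the table as plain Markdown and convert it yourself before pasting. A minimal sketch in Python, assuming the reply is a standard Markdown table; the sample table is just illustrative:

```python
import re

# Paste the Markdown table from the chatbot reply here (sample only).
markdown_table = """
| App       | Canvas   | Price |
|-----------|----------|-------|
| Freeform  | Infinite | Free  |
| Goodnotes | Infinite | Paid  |
"""

rows = []
for line in markdown_table.strip().splitlines():
    # Skip the |---|---| divider between the header and the body.
    if re.fullmatch(r"\|?[\s:|-]+\|?", line):
        continue
    # Split each row on pipes and trim the padding around each cell.
    cells = [cell.strip() for cell in line.strip().strip("|").split("|")]
    rows.append("\t".join(cells))

# Tab-separated text pastes cleanly into Numbers, Excel, or Google Sheets.
print("\n".join(rows))
```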

The tools can be helpful, but too often they are assumed to have attributes that they do not have.

LLMs are by no means intelligent. But, as noted above, we humans have no experience with non-intelligent entities that can produce intelligible strings of text. So we fall back on how we deal with other intelligent entities.

As just one example, a collaborator or partner would likely be invested in our joint success. Yet a LLM is not and cannot be.

LLMs can be useful sounding boards for working through ideas. But we as users need to be diligent in understanding what they are and are not. And they are not intelligent entities that care about us. Referring to LLMs with anthropomorphic language lowers one’s guard.

3 Likes

Oh, I desperately wish I could. But I can’t share it in a way, yet, that doesn’t make it sound lame.

1 Like

Coincidentally, I made this choice about 10 days ago. Claude is just much better for the work I do, especially now that it has memory and a way to dictate into it on the Mac.

The thing I most miss about ChatGPT is the OLD version of Advanced Voice, where you could have proper out-loud conversations. They changed it and it stopped being useful and turned annoying. It just keeps repeating my own words back to me and telling me how clever I am.

When I asked it not to do that, and to just say “Okay” unless I asked it to talk more, it responded with:

Ah, fair point! I’ll zip it completely now. Just go ahead, and I’ll be quiet and let you brain-dump as long as you need.

And then in every subsequent message it said something like,

Absolutely noted. I’m staying quiet and just listening in. Keep going whenever you’re ready.

And then we’d get in an argument, and … that, your honour, is how I ended up crashing into that tree.

Okay, I see where you’re coming from, Steve. Thanks.

Thinking about it … I think much the same, ironically, can be said about dealing with people.

I have learned, for instance, not to trust people who sound overly confident and intelligent.

Hmm, I’m not quite sure how to take that …

Getting a bit off topic now … for me everyone starts off with a base level of trust, which goes up or down based on behavior.

1 Like

Oh, sorry, that wasn’t meant to refer to you, in any way!

It’s a general rule of life.

I’ve often fallen into the trap of thinking that confident people are always the most competent. It’s a mental shortcut many of us make, but a lot of confident people aren’t all that competent; they just look it.

LLMs that have been built to sound confident also look competent and correct, but that doesn’t mean they are.

2 Likes

FYI: Claude is a slow behemoth of an Electron app.

I haven’t tried Claude so will look into it.

For a different use case, Google’s NotebookLM is an excellent tool for analysing a finite set of materials (and creating learning tools).

1 Like

I haven’t engaged with AI too much so far, and I’m still old-school googling most of the time.

I tried Claude before v4.5 and it was not so accurate. I just tried v4.5 and it looks much better.

ChatGPT is a kind of enshittification. It doesn’t seem to want to be useful so much as to become another Meta, making you spend more and more time on it like social media while manipulating users psychologically. I’m tired of the way that, every time I ask a question, it asks whether I want it to generate something extra, like a visual explanation.

When I asked both about the differences between two infinite-canvas apps, Freeform and Whiteboard in Goodnotes, Claude was able to point out some critical features specifically, while ChatGPT produced a lot of words that said nothing beyond “basic” vs “advanced”, along with a fancy comparison table.

What is also worrying is how OpenAI conducts its business, inflating the AI bubble bigger and bigger while the CEO just makes noise in the media to seek attention rather than actually making AI more useful.

I can understand the rest of your reasoning - but why would you want tools that censor what you can use them for? I am not saying you should or should not use it for any particular purpose. But why not leave that decision up to you?

I apologize for the length of my response, but I believe your reply exposes a common misunderstanding. :slightly_smiling_face:

Because the choice is mine, I prefer, insofar as it is possible, and it is not always or even usually possible, not to conduct business with companies that produce products or enable activities which harm young people and contribute to the erosion of individual and civic virtue. The addictive and corrosive effects of pornography and the associated objectification of women are increasingly well documented. Deepfake images and videos are also becoming more prevalent, and their catastrophic impact on individuals is becoming better understood.

With respect, and my respect is genuine, this is a classic case of misapplying the concept of censorship to a company’s decision not to provide a particular service. Companies retain the right to determine what functions their technology will perform. This constitutes the exercise of moral responsibility, not the suppression of user expression.

Anthropic’s decision not to enable its users to create deepfakes or virtual pornography is a decision not to provide features that cause demonstrable harm. This is no different than Apple deciding not to allow pornographic applications in the App Store. Declining to build a capability into a product is not suppressing the liberty of others. Anthropic is refusing to facilitate harmful activity, not preventing users from seeking or creating such content through other means. If censorship is interpreted as refusing to add a product feature that some may desire, then that would imply that every company must enable any feature its technology is capable of producing, regardless of its social impact or moral quality.

The question is not whether users should have unrestricted choice in what they create, but whether a company should build products whose use causes demonstrable harm without corresponding legitimate purpose or virtue. Virtual pornography and deepfake capabilities are not morally neutral tools that can be misused: they are designed to produce content whose harmful effects are objectively real, and whose virtuous or beneficial applications are negligible or nonexistent. Declining to build such tools is not an infringement of freedom. It is an exercise of corporate moral responsibility, which users remain free to accept or reject through their purchase decisions.

I’ll end with two quotes that seem to fit OpenAI:

C.S. Lewis: “The greatest evil is not now done in those sordid ‘dens of crime’ that Dickens loved to paint. It is not done even in concentration camps and labour camps. In those we see its final result. But it is conceived and ordered (moved, seconded, carried, and minuted) in clean, carpeted, warmed and well-lighted offices, by quiet men with white collars and cut fingernails and smooth-shaven cheeks who do not need to raise their voices.”

Martin Luther King Jr.: “He who passively accepts evil is as much involved in it as he who helps to perpetrate it. He who accepts evil without protesting against it is really cooperating with it.”



9 Likes

Good suggestion. I’ll edit my post. For the record, I use the paid versions of AI.

Mostly I just want to say thank you for such a well-written and well-thought-out post. This can be a touchy subject that people argue over. I thought your approach was well reasoned, and it’s given me more to think about when it comes to which AI products I want to use.

5 Likes

Thank you for the kind words. It can be tricky to address sensitive topics online in a manner that is both candid and respectful. Unfortunately, given the rampant moral relativism that most have absorbed through cultural osmosis, any disagreement is interpreted as disrespect, lack of tolerance, etc. That is not logically defensible, but it is common. I hope I’m always respectful and careful, and that if I stumble, I’ll be given the judgment of charity. I need it, just ask my wife! :grinning:

Thanks again for your kind comments, blessings!

3 Likes

Claude is the bot I’m waiting to see integrated with Apple Intelligence. I tend to flit between Claude and Copilot currently. I find Sam Altman to be a sociopath.

1 Like

I don’t know that he is a sociopath, but if so, let’s hope he never creates ASI!