The people refusing to use AI

This particular article wasn’t behind a paywall and I was able to find a link via search. But note that people do link to Apple News articles that are behind both the publisher’s paywall and the Apple News paywall, and those aren’t readily available to those of us who aren’t Apple News subscribers.

In any event, I’m not trying to make a big deal about this.

1 Like

I think all of the people who had criticisms of the “technology” did have a point. It’s just a matter of whether what was lost outweighed what was gained in efficiency.

I understand. I’ve tried Apple News+ multiple times, but I hate the app’s design, as well as how the news is presented. However, I find many of the stories that are posted to be very interesting.

So once I’m able to access the linked site in a browser, I find some paywalled stories can be read in some popular read-later apps.

1 Like

Sometimes you can. I choose, as much as possible, to use Adobe Firefly image generation and the AI image tools inside Adobe apps (powered by Firefly), because they are trained on opt-in data which Adobe pays for.

(I know, because I’ve gotten a small check from Adobe when I elected to allow my meager photo collection submitted to Adobe Stock to be used for AI training.)

1 Like

Some effort should be made to properly delineate between human- and LLM-produced content. I saw one such proposal yesterday.

We call what humans produce “art”.

For LLMs, the term is “Computer Rendered Artificial Pictures.” The resulting acronym is apt as well.

Sigh. I’ve grown to loathe this genre of article. Exploring, analyzing, and evaluating the reasons people resist using a newly emergent technology, or, alternatively, why they rush to embrace it, is a valid and valuable use of a journalist’s time. This article had more of the flavor of “I talked to four people about using AI and they had some thoughts, which I will repeat back to you but not probe in any way.”

“Use AI,” meaning what, exactly? AI is more than ChatGPT, and it can be used in any number of ways, with varying degrees of skill. There are plenty of reasons not to use AI, too. Some are practical—it’s not the right tool for the job, it’s expensive, it isn’t accurate enough—and some are philosophical. Most of the people quoted in the article fell into the latter camp, and some of what they had to say sounded half-baked, frankly, either because it was, or because we weren’t given any context for how they’d reached their conclusions.

2 Likes

I think “blathering” (meaning “talking without thinking”) is a better term. They just stick words together using some impenetrable algorithm.

3 Likes

I’ve used the paid versions of both Claude and ChatGPT, and found both useful in the way I found smart interns useful: they gathered, digested, and reported back a lot of information, but I had to check their work. Having interns at my disposal did not impair my ability to think critically, and I don’t expect that using Claude as my e-intern will either.

You will pry AI-powered photo processing tools—e.g., noise reduction or distraction removal—out of my cold, dead hands. Being able to reduce the noise in a photo I had no choice but to take at a high ISO in low light will not impair my ability to be thoughtful and creative when I look through the viewfinder.

A lot of the AI slop being served up to us is akin to what Edward Tufte referred to as “chartjunk” way back in the days when Excel made it possible to crank out 3-D bar charts with the click of a mouse. But it would have been a mistake to throw out the spreadsheet (one of tech’s great gifts, IMO) because of the proliferation of chartjunk.

An aside: I can’t recommend The Visual Display of Quantitative Information highly enough.

2 Likes

Is this a new insight into human behavior?

I hear some people refuse to eat cheesecake.

I like cheesecake. I think I’ll keep eating it.

Katie

1 Like

The pushback against AI has even inspired the Not By AI movement.

This reminds me of the Ani DiFranco lyric:

“Every tool is a weapon if you hold it right.”

Should we be concerned about AI – about the privacy, the ethics, the environmental impact, the harm to creators? Absolutely. But AI isn’t going anywhere. The more integrated it becomes into our lives, the harder it will be to resist. I think the more effective resistance will be pushing for regulation and ethical use, rather than refusing to use it entirely.

I want to talk about accessibility and access to knowledge, which is something I don’t hear about often when people speak about AI. As a counter to arguments about the dangers of AI, I want to offer this: it’s making massive amounts of knowledge available to people who may otherwise not have access.

Let me give a niche example from my own research.

I research a prominent family from the Great Power era in Sweden. A lot of this information is buried in archives, museum collections, or church records.

There are dozens of old court records from the 17th century that mention this family. They’ve been transcribed, but the text is in early modern Swedish, so many of the words are not known to modern speakers. Google Translate and DeepL fail miserably in their translations - their models are designed for modern Swedish, not this earlier period.

So I pasted all of these into the Gemini 2.5 Pro model. Here’s where AI blew my mind. Not only did it identify the text as early modern Swedish, but it provided summaries, full translations, and even line-by-line breakdowns of the text. It even knew very archaic words and phrases. I was confident of the accuracy because I had a couple of these records professionally translated, and the results were similar.
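(For anyone who wants to script the same workflow rather than paste into the chat interface, a rough sketch with Google’s google-genai Python SDK might look like this. The file name and prompt wording are just illustrative, not exactly what I used.)

```python
# Rough sketch only: translating one transcribed court record with the
# Gemini API. Assumes `pip install google-genai` and a GEMINI_API_KEY
# environment variable; file name and prompt wording are illustrative.
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

with open("court_record_1654.txt", encoding="utf-8") as f:
    record_text = f.read()

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=(
        "This is a transcription of a 17th-century Swedish court record "
        "in early modern Swedish. Identify the language and period, give "
        "a summary and a full English translation, then a line-by-line "
        "breakdown explaining archaic words and phrases.\n\n" + record_text
    ),
)
print(response.text)
```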

Having all of them translated wasn’t an option within my budget. It would have cost thousands of dollars.

This kind of access and analysis is unprecedented in historical research, imo. In fact, the Swedish National Archives has now used AI to convert more than a million handwritten documents into searchable text.

It’s staying in my research arsenal, but the ethics and people’s reasons for using and refusing are fascinating.

2 Likes

I do all that all the time!!!

But, when A.I. and I blather together, the content is far better thought out, and far easier to read.

I get the cynicism… but if you choose to use A.I. cleverly and thoughtfully, it’s like working with a clever friend.

And who knows how that clever friend thinks?

@WayneG: for me on the Mac (running Sequoia 15.4.1), the “Open in Safari” option is grayed out. Do you know how to “turn it on”?

Sorry, AFAIK there is no setting for this. If you have any Safari extensions active, you could try turning them off, in case one of them is the problem.

If you have opened a link in Safari, like the BBC News link at the top of this thread, the option to “Open in Safari” should be available.

But if you are in the Apple News macOS app, click the share button to access “Open in Safari”.

The biggest question to me is one of how we move forward.

The underlying premise of current AI training is that people’s intellectual property, server resources, etc. have fundamentally no value. I don’t think we can just say that everything a crawler can find and slurp down is fair game, throwing all copyright concerns to the wind. That’s untenable. We need some sort of legal/ethical framework to cover this sort of thing going forward.

But on the other hand we have these AI models, no matter how we got them.

In parliamentary procedure, they use a term - “continuing breach” - to describe a violation that’s still occurring. Nominating a board member that’s not eligible isn’t a continuing breach. Electing that board member and allowing them to continue to serve is a continuing breach.

I think that one of the big questions here is: is the existence of the current models (NOT future models) a “continuing breach”? Are the people whose data was slurped up being harmed in the present moment? If so, there’s a reasonable argument for not using the models. If not, then the current models are an artifact of bad actions, but are themselves arguably neutral.

It’s an interesting question.

1 Like

I don’t think we can describe current LLMs as neutral because they regurgitate sentences and more from prior art without citation or permission.

Scholarly writing, outside of textbooks, generally makes very little money. It may even cost money, especially in the sciences, to publish even in reputable journals. The currency of scholarship is the citation. Stripping out citations is metaphorical theft, theft which alters the value of both the current scholarship and the copied scholarship.

7 Likes

Or possibly with misinformation. There are already enough groups trying to rewrite history. Imagine the power of flooding social media with AI-generated misinformation.

1 Like

It’s already happening. This article is behind the Bloomberg paywall but is also available on Apple News+.

Here is a short excerpt:

Filippo Menczer caught his first whiff of what he calls “social bots” in the early 2010s. He was mapping how information travels on Twitter when he stumbled onto a few clusters of accounts that looked a little suspicious. Some of them shared the same post thousands of times. Others reshared thousands of posts from each account. “These are not human,” he remembers thinking.

So began an extensive career in bot watching. As a distinguished professor of informatics at Indiana University at Bloomington, Menczer has studied the way bots proliferate, manipulate human beings and turn them against one another. In 2014 he was part of a team that developed the tool BotOrNot to help people spot fake accounts in the wild. He’s now regarded as one of the internet’s preeminent bot hunters.

https://www.bloomberg.com/news/articles/2025-05-08/maybe-ai-slop-is-killing-the-internet-after-all?srnd=undefined&embedded-checkout=true

I run a digital transformation agency, and you have to be all in or you will be left behind. There is so much more to generative AI than writing, although it helps with that too.

For example: I had a set of core themes I wanted to communicate about, so I had Gemini propose 50 article topics and summary outlines. It took about 20 minutes. I kept about half of them and then used deep research to gather some incredible material on those topics. Does that make me unethical? Was it unethical when you used a search engine to do research instead of going to the microfiche (love that word)? Not really.

At the same time, when we prep for a new prospect, I have a long and detailed prompt we created, in the form of an agent, that does deep industry research and analysis on the company, plus research on the individual: what they have written about, the themes they discuss, key facts about education, location, etc.
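To give a feel for the shape of that prompt (a bare-bones placeholder, not our actual agent; every field and instruction here is a made-up example):

```python
# Bare-bones placeholder for a prospect-research prompt template.
# All fields and wording are invented examples, not our real agent.
PROSPECT_BRIEFING_PROMPT = """\
You are a research analyst preparing a briefing for a first meeting.

Company: {company} ({industry})
Contact: {name}, {title}

1. Deep industry analysis: market position, recent news, competitive
   pressures, and where the company appears to be investing.
2. Individual research: what the contact has written or spoken about,
   recurring themes, education, location, and other public background.
Cite a source for every factual claim.
"""

print(PROSPECT_BRIEFING_PROMPT.format(
    company="Example Logistics Co.",
    industry="freight and logistics",
    name="Jane Doe",
    title="VP of Operations",
))
```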

I still spend time studying the output, but it takes a tenth of the time to prepare. The customer experience is better, we are more prepared, etc.

I think the environmental concern is real, but people are not nearly as concerned about driving, leaving computers on all night, bright outdoor lighting, running their pool heaters, etc. I know it is a bit apples and oranges, but just saying.

My rant is over. :stuck_out_tongue: Thanks for listening.

1 Like

I wouldn’t presume to speak for others, but my concern isn’t with the use of AI itself. While ethical, legal, and environmental issues must be addressed, I suspect AI will inevitably become a professional necessity. Acknowledging those concerns, my primary issue is not that AI is used, but how it’s used.

Several things trouble me, but as I’ve mentioned, authenticity is a significant one. Passing off machine-generated content as one’s own lacks integrity, obscures the writer’s voice, and often results in flattened, homogenized, bland prose. It’s akin to replacing a pianist with a player piano—both produce music, but only one creates art.

Would you pay $100 to sit in a concert hall and listen to a player piano? I wouldn’t—nor will I knowingly spend my time reading AI-generated articles or books. Can I be fooled? Of course. But there’s a difference between being deceived and willingly consuming something artificial.


Perhaps I’m overthinking this, and I admit my perspective may be off. Still, I have no real interest in reading machine-generated text from a colleague, friend, journalist, or politician. I want to hear their voice—their words—not the product of ones and zeros.

Speaking of which, there is this from New York magazine’s Intelligencer:

ChatGPT has unraveled the entire academic project.

By James D. Walsh, May 7, 2025

“Roy” Lee stepped onto Columbia University’s campus this past fall and, by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in. “At the end, I’d put on the finishing touches. I’d just insert 20 percent of my humanity, my voice, into it,” Lee told me recently …

Link to article (behind a paywall). Or, from Apple News+:

3 Likes

Paraphrasing the old quote: in my opinion, “authenticity is in the eye of the beholder.” Not because authenticity is debatable (it isn’t), but because the relevance it has depends on the receiver of the content, the use case, and the cost. I couldn’t care less if a support call is handled by an AI as long as I get a prompt and satisfactory answer. I don’t care that much if YouTube content is machine-generated: it will be bland and undifferentiated, but if I’m getting some value out of it, for example learning something new, then it’s OK, even without my knowing its machine provenance. But I definitely would not pay $100 to watch a piano roll instead of a real human piano player!

Edit to add that the ethical and environmental issues are not related to this point of view; they have their own angles, which would trigger other discussions where I am less lenient toward AI.