The people refusing to use AI

Let me use an analogy – calculators.

When calculators were introduced, some people argued that they would make everyone more efficient, more accurate, and more productive, while others predicted that they would collapse people’s ability to think for themselves when it came to mathematics.

An engineer who grew up in a world of slide rules certainly benefited from huge productivity gains when calculators were introduced. But more than once I have heard older engineers lament that although they themselves happily use calculators, younger generations who grew up with calculators at their side are so reliant on them that they can no longer spot when an answer is wrong by multiple orders of magnitude (due to a trivial input error), because they can’t even roughly estimate a ballpark of what the correct number should be.

And why would they develop that judgement? Errors, blatant errors are just blindly accepted because the calculator can’t be wrong. Fortunately with calculations, while they are important, they can be easily checked, and we don’t all necessarily need mental arithmetic to function in daily life. Memorised times tables don’t form the fundamental building blocks of how we think, and more importantly, how we learn to think.

But I worry that there are parallels here with the introduction of generative AI. People who have grown up and worked for decades without it have developed their ability to think, process information, judge, compose an argument and critique it without the constant temptation of an AI toolset that will relieve them of any burden of having to think for themselves. Many are now giddy with excitement at the potential of generative AI to make themselves more ‘productive’.

Will future generations who grow up in a world where there is the constant siren call of generative AI offering effortless answers, which may or may not contain authoritative and convincing bullshit, expend the effort of learning critical thinking skills? Will they see the need to exert all the sweat, energy and hard work that you all have over your lifetimes, to hone your own ability to write, critically think, and refine your own thoughts? To spot the obvious flaws in the output of generative AI?

And why would future generations bother spending the same years or decades developing and honing the skills we all have to create new art, new ideas, if what we create can be effortlessly plagiarised in seconds by others using derivative generative AI?

5 Likes

This does raise the interesting question of whether there’s going to be an impasse at some point. I mean, the only reason AI can get better is because it gets to slurp up new, human-created content. As that becomes more and more rare, AI will lose the fuel it needs to improve.

1 Like

Not every organization is this porous, alas. At one point in the early 90s I was put in charge of a department that still did a lot of clerical work in pen and ink. During one budget cycle I requested desktop computers for them so that they would be able to access our burgeoning corporate LAN / WAN and could get comfortable with digital tools before everything transitioned away from manual data input into one of the big mainframes in the basement, which was clearly going to happen sooner rather than later. The division head called me into his office and asked why they needed computers—“They don’t even know how to use them!” he fumed before denying the request. And there was no opportunity for them to shadow anyone: they were to sit in their cubicles all day, pen in hand, writing out journal entries on paper forms for someone else to type into the mainframe.

Salaried employees above the most junior level could be more entrepreneurial in building an area of particular expertise and the constituency to go with it—I certainly did—but that just wasn’t an option for the hourly-paid employees.

Because there is genuine pleasure in the act of creation, and in making something new. I think people will always take joy in that, be it standing in front of a canvas with a paintbrush, or knitting a sweater, or even using AI as a tool to probe new paths for collaborative creation.

You don’t need AI for a culture to get stuck in a sterile rut: it seems like every Hollywood movie has a number after its name, exploiting the same tired IP franchise yet again. It’s hard to look at Disney+ and think that they’re up to more than self-plagiarisation. It’s the same with popular fiction: a lot of the franchise titles already read like they were written by a chatbot.

I never felt so free as I did on the day I could finally use my slide rule as a straightedge and toss the log tables into the trash can. :wink:

4 Likes

You had a slide rule and log tables!

I still have an abacus on my desk. :slight_smile:

Albeit as a paperweight. Although once upon a time I did know how to use it.

1 Like

Seems the new pope gets it:

1 Like

High school chemistry was my first introduction to slide rules and log tables. Born at the right time.

True artificial intelligence will have arisen when self-awareness is realized in some corner of a networked logic system. Until that time AI, as we know it today, is plagiarism of human beings on a grand scale.

2 Likes

Here’s one hypothetical and fairly hopeful way that it could happen, as told in a song called The Collars, which was based on a short story to which I can no longer find a public link.

I think you’ll be able to listen to it here.

The Collars - Bandcamp

Here’s a link to the story. I liked it and the song!

https://www.cyphertext.net/collars.html

1 Like

I think we are being trapped semantically by the vagueness of AI.

It appears that most of the strong negative opinions deal mostly with generative AI (diffusion generated images and LLM-generated text).

Although I use generative AI, I have found the greatest utility in embedded AI/ML tools that assist (not replace) classic tools (software, not physical tools) that have been in use for many years.

Slight off-ramp here, but I was just watching a YouTube video about a “new and improved” object deletion algorithm added to Adobe Photoshop’s latest release.

The demo showed the exact same command run on the same image with the user-selectable “use cloud” option enabled versus disabled, and how much better the cloud results were.

The demo explained that “use cloud” went through the cloud interface to Adobe Firefly’s generative fill, combining classic object selection with generative deletion, while the non-cloud option relied on the five-year-old (or older) Content-Aware Fill, which used entirely local processing (and presumably more heavily algorithmic rather than generative AI techniques).

FWIW, for some of these AI-based tools Adobe allows the user to select “local only”, “cloud”, or “auto”, while other tools that use AI have no exposed levers or user controls.
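Just to illustrate the idea (purely a sketch; none of these function or class names are Adobe’s actual API), that “local only / cloud / auto” choice amounts to a little dispatch along these lines:

```python
# Hypothetical sketch only: none of these names are Adobe's real API, just an
# illustration of the "local only / cloud / auto" switch described above.
from enum import Enum


class FillBackend(Enum):
    LOCAL_ONLY = "local"  # classic Content-Aware-style fill, processed on-device
    CLOUD = "cloud"       # generative fill delegated to a remote service
    AUTO = "auto"         # let the application pick per job


def local_content_aware_fill(image, selection):
    # Stand-in for the older, purely algorithmic local fill.
    return f"local fill of '{selection}' in {image}"


def cloud_generative_fill(image, selection):
    # Stand-in for a round trip to a cloud generative-fill service.
    return f"cloud generative fill of '{selection}' in {image}"


def remove_object(image, selection, backend=FillBackend.AUTO, cloud_reachable=True):
    """Delete `selection` from `image`, choosing local or cloud processing."""
    if backend is FillBackend.AUTO:
        # Fall back to local processing when the cloud service is unreachable.
        backend = FillBackend.CLOUD if cloud_reachable else FillBackend.LOCAL_ONLY
    if backend is FillBackend.CLOUD:
        return cloud_generative_fill(image, selection)
    return local_content_aware_fill(image, selection)


print(remove_object("photo.jpg", "lamp post", cloud_reachable=False))
# -> local fill of 'lamp post' in photo.jpg
```

The interesting part is the “auto” branch: when the cloud service isn’t reachable, the tool quietly drops back to the older local routine.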

Yep, I totally agree - the lack of proper agreement on what we mean when we toss out “AI” is hurting the discussion. The computational AI inside the “Photonic Engine” that powers the camera, for instance, is just AMAZING. Getting similar results with a traditional DSLR/mirrorless system takes much more skill at both capture and editing.

This is use-case dependent at this point. The “Use Cloud” option for background / texture generation will return lower-resolution imagery than the locally generated Content-Aware stuff does. Local generation is still the current pro workflow for high-resolution editing and compositing. This will probably change once Firefly (or whatever generator is used) can match the detail and resolution of the source image.

Now, creating a selection with the “Use Cloud” option may already give you an enhanced result compared to the first pass local selection. I say may because it also depends on the image and the required complexity of the resulting selection.

Good point. Although the video also mentioned that the higher-resolution local fill did not blend in well and looked less natural than the lower-resolution cloud-based fill, which blended in much more naturally.

We are already way off subject, but do you think the lower-resolution results of Adobe’s cloud generation are a technical limitation or simply a desire to reduce the compute power / compute time needed?

(i.e., could they simply ‘turn a knob’ to increase the resolution, or would it take more on their end than just justifying the resource costs?)

Right now I use these tools for quick stuff, not portfolio-quality hand-crafting of my images, so I’m ok with the results either way.

My guess is that yes, the limitation is “artificial” and could probably be overcome by either:

  1. Adding even more compute power
  2. Extending the render times

Either way, this increases the compute, power, and cooling costs, and I’m sure Adobe needs to strike a balance between cranking out high-resolution renders and serving a lot of clients.
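For a rough sense of why that knob is expensive to turn (my own back-of-envelope, not anything Adobe has published):

```python
# Back-of-envelope only (illustrative numbers, not anything Adobe has published):
# pixel count grows with the square of the linear resolution, so each doubling
# of the output size means at least ~4x the work per render.
base = 1024
for side in (1024, 2048, 4096):
    factor = (side / base) ** 2
    print(f"{side} x {side}: {side * side:,} pixels, "
          f"~{factor:.0f}x the cost of a {base} x {base} render")
```

Which is presumably why rendering every candidate at full resolution, for every client, doesn’t pencil out.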

As you say, most people use the tools for “quick stuff” at lower resolutions, where the current resolution is more than good enough. Even for video, the current image sizes will probably be sufficient for 4K, even if you DO need a lot of frames per second :slightly_smiling_face:

That said, once you have generated a series of images and finally decided on the one to go with, an option to re-render that image at a higher resolution is a probable candidate for a future service offering. (It’s already available for other cloud rendering jobs.)

I do like that quote:

Why would I bother to read something someone couldn’t be bothered to write?

5 Likes