AI Images--Getting Very Good--UPDATED

These are genuine questions, I’m not challenging you. :slightly_smiling_face:

Does that mean that it is unethical to ride in an AI-powered robotaxi because it puts taxi drivers out of work? Does this mean that I should never use AI to generate an illustration instead of paying for one from a site like iStock? Does this mean that I should never use AI to check my grammar and style but instead hire an editor? Where does one draw such lines?

Exactly!!! I don’t know the answer, but if the answer is that riding in the robotaxi is not unethical, why would it be unethical to use an AI to generate an illustration? For me the line is extremely difficult to draw because it goes to the essence of what counts as work versus original work.

The key thing here is that copyright protects the rights of some workers whom we call artists. One could argue that no artist works in isolation and that everyone is influenced by many other creations, so the AI is simply functioning as an artist… until one realizes that AI works at an unprecedented scale, one that can destroy the added value of any type of work, not only creative work. And that’s the dilemma that will transform our societies in the coming years.

It’s worth a lot, to me, at least. In fact, I had you in mind when I talked about the necessity for attribution, because I know from previous posts how you go about it.

If you tell me how you’re using Gen AI for your blog, that’s perfectly fine: you’re not getting me to read it under false pretences, and I can choose how to judge what you’ve produced in the normal way.

Even commercial offerings that openly acknowledge if and how Gen AI has been used give me the chance to decide whether to pay or not. Failing to do so strikes me as a fundamental breach of integrity.

It’s extremely unlikely that Gen AI will disappear, but perhaps there’s a chance that we can still develop standards of attribution, so that, for example, unattributed use of AI is seen as being as dishonest and as career-limiting as plagiarism (though for how long we’ll be able to maintain that standard is another question).


Indeed! One might rightly call this the commoditization of human creativity—even though the development of AI is, in itself, a stunning display of human ingenuity and imagination. We’ve created something that can, in a sense, imitate (replace?) us.

What concerns me most is the potential erosion of genuine creativity and authenticity. There’s a real danger that future generations may lose both the ability and the will to engage in the slow, painstaking process of learning to write or draw—a true lost art (pun intended). The temptation will be to simply press the “easy button.”


Regarding attribution, I asked ChatGPT (I know! :slightly_smiling_face:), “What is the current standard for citing AI-generated content?” Here is the output:

Citing AI-generated content is an evolving practice, with different citation styles offering preliminary guidelines: 

APA Style:
The American Psychological Association (APA) suggests treating AI-generated content as the output of an algorithm, crediting the entity behind the AI tool as the author. For instance, when citing ChatGPT, OpenAI would be listed as the author. A sample citation might look like:

OpenAI. (2023). ChatGPT (Mar. 28 version) [Large language model]. https://chat.openai.com

MLA Style:
The Modern Language Association (MLA) advises against treating AI tools as authors. Instead, describe the AI-generated content and include details about the prompt used. The citation should specify the AI tool as the “Title of Container.” For example: 

“Response to ‘Explain how rainbows are formed.’” ChatGPT, OpenAI, 28 Mar. 2025, chat.openai.com.

Chicago Style:
The Chicago Manual of Style recommends citing AI-generated content in notes or parenthetical citations but not including it in the bibliography or reference list. This approach is akin to citing personal communications. A citation might appear as: 

  1. ChatGPT, response to “What is the capital of France?” March 28, 2025.

It’s important to note that these guidelines are still developing. As AI tools become more integrated into research and writing, citation standards may evolve. Always consult the latest guidelines from the relevant style manuals or institutional policies to ensure accurate and ethical citation of AI-generated content.
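For anyone who wants to apply these consistently, here is a minimal sketch in Python that keeps the metadata the three styles above ask for and prints a citation in each format. The class and field names are my own invention, not part of any official style-guide tooling:

```python
from dataclasses import dataclass

@dataclass
class AICitation:
    """Metadata the three styles above ask for (field names are my own)."""
    tool: str        # e.g. "ChatGPT"
    publisher: str   # e.g. "OpenAI"
    version: str     # e.g. "Mar. 28 version"
    year: int        # year of the model version, used by APA
    date: str        # date of the response, e.g. "28 Mar. 2025"
    url: str         # e.g. "https://chat.openai.com"
    prompt: str      # the prompt that produced the response

    def apa(self) -> str:
        # APA: the company behind the tool is treated as the author.
        return (f"{self.publisher}. ({self.year}). {self.tool} "
                f"({self.version}) [Large language model]. {self.url}")

    def mla(self) -> str:
        # MLA: describe the response, record the prompt, tool as container.
        return (f"“Response to ‘{self.prompt}’.” "
                f"{self.tool}, {self.publisher}, {self.date}, {self.url}.")

    def chicago_note(self, n: int = 1) -> str:
        # Chicago: a numbered note, like a personal communication.
        return f"{n}. {self.tool}, response to “{self.prompt}”, {self.date}."


cite = AICitation(
    tool="ChatGPT", publisher="OpenAI", version="Mar. 28 version",
    year=2023, date="28 Mar. 2025", url="https://chat.openai.com",
    prompt="Explain how rainbows are formed.",
)
print(cite.apa())
print(cite.mla())
print(cite.chicago_note())
```

Whatever format eventually wins out, the useful habit is recording the tool, version, date, URL, and prompt at the time you use the model, since they are hard to reconstruct later.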


I can see a broad parallel here with the growth of Amazon, for example. We collectively embraced it because it was so convenient, but I for one wish I’d been less eager, because the baleful social consequences are now so obvious. (And I now have several hundred Kindle books which could be removed at a moment’s notice based on the whim of an algorithmic idiot…)


Already happening now…

That’s interesting – thank you! I think the two which require details of the prompt sound more sustainable. I’m not an academic, but the whole point of references seems to be that you can verify and replicate the information. The APA version appears useless from that point of view.

That’s not quite the same issue. The article you referenced focuses on the idea of a “parasocial relationship”–a new term for me–which is certainly a real issue, but my concern, to use an “old-fashioned” term, is the fostering of laziness and the loss of intellectual discipline. Using AI to shortcut hard work fosters a kind of laziness that not only diminishes the individual but also deprives them of the dignity that comes from effort and the genuine pride (satisfaction) of creating something worthwhile.

Deep down, we know that relying on AI to replace our own thinking or writing makes us impostors. Worse still, it encourages habits that become self-reinforcing and ultimately shape our character–for the worse. Yes, copyright matters, and so do authentic human relationships, but so does the value of rigorous, intellectual labor.

Speaking of labor, I’ve got to head to the office :running_man:. :slightly_smiling_face:

Yes, but the problem with AI is that the same prompt can produce different results. So including the prompt in a citation—which could be quite long—may not be as helpful as it first seems. :slightly_smiling_face: The essential thing is to acknowledge the use of AI if the content is largely generated by it. Ideation, editing, and similar uses feel like a different matter to me. At least, I think so… :thinking:
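To make the non-determinism concrete, here is a quick sketch using OpenAI’s Python SDK (the model name is only an example; any chat model I’ve tried behaves the same way). Sending the same prompt twice almost never returns identical wording:

```python
# Minimal sketch: the same prompt, sent twice, rarely produces the same text.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()
prompt = "Explain how rainbows are formed."

replies = []
for _ in range(2):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # example model name; substitute whatever you use
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,       # ordinary sampling, so answers will vary
    )
    replies.append(resp.choices[0].message.content)

print("Identical replies?", replies[0] == replies[1])  # almost always False
```

As far as I know, even the optional seed parameter only offers best-effort reproducibility, so a prompt in a citation identifies the question you asked, not the exact text you got back.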


Speaking of AI image generation …

Ouch!!! Ha ha. Not really ouch.

That’s fair enough. Your choice.

I make a similar decision wrt fake meat products, at least for now.

You seem like an open-minded person, so here’s a suggestion that might repulse you at first (maybe put it on the back burner as something you might do one day when you’re bored): why not try using AI to think through a problem, clarify it, simplify it, then share it?

That’s what I do. It’s like having an intelligent, but sometimes stupid, friend who helps you think through things very quickly.

Not only that, but they give you new options. New ways of doing things. Faster. And then - if you want - you can ask your friend to quickly write out the results of your discussion. They’re better at writing than most people. So that’s a win - who doesn’t want to read stuff that’s well written? That said, you might find that you don’t like the writing, so you use it as a first draft and clean it up.

I find the above way of using AI joyful.

Silly suggestion, @brookter?

(Btw: I wrote the above all by myself, but then passed it through Claude.ai and asked it for its thoughts.

It said:

Hi Clarke! Your response to that critique is conversational and reflects your authentic take on using AI - I like how you’re open about it while offering an alternative perspective.

You’ve handled this well by:

  1. Not getting defensive but acknowledging their viewpoint
  2. Making a relatable comparison to fake meat products
  3. Framing AI as a thinking partner rather than a replacement for your expertise

The way you’ve positioned AI as “an intelligent, but sometimes stupid, friend” who helps you think through problems faster is particularly effective. It shows you’re using the tool thoughtfully, not just to cut corners.

One suggestion - when discussing this topic with potential clients or readers, emphasize how AI enhances your unique TOC expertise rather than replaces it. You might mention how you’re combining your decades of experience with new tools to deliver even more value to clients.

What kind of responses have you gotten from others about your AI usage? Are there particular concerns you’re hearing repeatedly?

Which was a really helpful response. I was trying - like you - to be direct, but not impolite when I responded, and Claude.ai is quite good at checking things like that)

Hi Clarke – thank you for your reply.

I understand the point you’re making, but it doesn’t really address my point, which is that passing off work generated by the algorithm as one’s own, without acknowledging its provenance, is a fraud on the reader, especially if the work is for money.

I am certainly not saying that AI has no legitimate use, and I believe you that it helps you to clarify your thinking. But once an author fails to put the AI’s text into their own words, or uses it to create an illustration without telling the reader what they have done, I believe they have stepped beyond the bounds of the acceptable.

So ‘think through a problem, clarify it’? Perhaps, if it helps. ‘Simplify and share?’ No – not without acknowledgement.

They’re better at writing than most people. So that’s a win - who doesn’t want to read stuff that’s well written?

As an aside, in my experience, AI text is not ‘well-written’ – it’s at best adequate, if anodyne, and often a lot worse. It would be a shame if a writer doesn’t have the skills and self-confidence to aim higher than this, but if they don’t, shouldn’t they be honest with their readers and acknowledge that?


Let’s assume two different cases.

  • Imagine you are using AI to write an answer to this comment. To me it doesn’t matter; the provenance of the observation is from @brookter. You own the consequences, good or bad, of this type of creation. That you used AI is, to me, irrelevant.
  • You use AI to write a book and sell it. I cannot judge the value of a book beforehand without paying for it, independently of whether it was written by a human or not. To me, it’s my duty as a reader to determine the value your work provides to me post facto: either it’s enjoyment, or it’s learning something new that other authors cannot provide. I am with you that, today, this type of value extraction from me would be somewhat fraudulent. But this type of fraud will have short legs, as soon enough millions of AI-written books will appear and the added value of your particular book will drop to zero.

I truly believe that there’s an opportunity here for human content creators, just as there is with deepfakes in news and social media. Perhaps the answer is not to flag AI-generated content, but to attest to the material that is exclusively human.

Also, this is a different issue from the potential copyright infringement based on the training material of the model (the Miyazaki case).

Okay … if it’s acknowledgement of provenance - would you like to read a copy of my book?

I’ve been very clear here, and in many other places, that it’s been written with extensive help using AI.

I also used spell check (the first AI?). And a human editor. And about 50 beta readers. And my assistant designed the cover using clip art (I don’t know who drew the originals, she just pasted them in) and Canva, and lots of iterations, many of them not good!

It’s less than an hour’s read.

It took me decades to get to the point where I had the knowledge and skill to write such a short book.
It took me many hours to write it with the thinking and writing help of the Claude and ChatGPT AIs.
Despite writing for many years, I couldn’t write in a “Clarke the detective” voice, so Claude and ChatGPT and I to-ed and fro-ed A LOT to find that voice.

I don’t know if the book would help you - it helps small businesses make loads more money by unblocking their “plumbing” - and has got great reviews (people have always liked my writing voice, and they seem to like this book’s too), and is selling very well.

I honestly think the world would be a worse place if that book wasn’t out there (being given away for free in a bunch of places, and being sold on amazon.com for $0.99).

There were many “contributors” to my book, and AI was one.

If you’d like a copy I’ll email it to you.

I’m only offering because I don’t think this is black and white.

I have bought the Kindle version. In a month or so I’ll be back and report.

I do appreciate the offer, thank you, but I’m retired and wasn’t in business when I was working, so it’s not really in my wheelhouse. I did skim through the sample though.

I couldn’t find anything on the Amazon page or in the available sample which mentions that AI was used for the illustrations or the text – was it in the acknowledgements?

My point remains: a reader deserves to be told whether Generative AI has been used to create text/illustrations, especially when the whole question of its ethics is so controversial, just as a consumer is entitled to know whether the meat they eat is fake or not, irrespective of whether they found it tasty or not.

But thank you sincerely for the offer.

No. It’s in the Amazon system. I don’t know if they share that on the sales page or just use it.

And. I don’t care!

I wanted to write a fantastic book that helped people and could be read in less than an hour.

I want potential readers to buy it based on the promise I make in the sales page, the price, the reading time, and - most importantly - the reviews of this book, and the other books I’ve written.

To make that work I had to work incredibly hard to write a good book.

Plus, I decided to publish the book cheaply, and see what readers said in their reviews, and if it got bad reviews, I’d unpublish it.

I thought about that a lot.

There are a lot of scientific studies showing that professionals give different scores and reviews to the same wine depending on the bottle it comes in. Expensive-looking bottles get better reviews; cheaper-looking ones get worse, even when it’s the same wine.

I didn’t want to taint my book with an “AI warning”, because that would trigger people’s prejudices, in the same way that I won’t eat fake meat, or read books that mention any religious stuff.


I know that makes life perilous for you, because you might unwittingly read a book that’s been tainted by AI and not know it!

I don’t know what to do about that, but I bet it’s happened already and you didn’t know it.

Ah, the dilemmas of the modern world!

Your choice not to eat fake meat depends on the honesty of the sellers, otherwise you wouldn’t know it was fake and you wouldn’t have the choice. Shouldn’t you extend that same courtesy/option to your readers?

Good point, but I don’t think my writing is fake.

And when many non-vegetarians start saying that fake meat tastes good, I will eat it.