763: Workflows with William Gallagher

I have a question about this. I am preparing a presentation that addresses the national teacher shortage. I’m making the point that, given our criteria for hiring teachers, we fish in a small pond even as the ocean of candidates is shrinking. I worked with AI to generate an image I could use on a slide to illustrate this point. I have a caption on the image that says, “AI-Generated Image,” but the caption is not visible on the slide as I don’t want to distract the audience from my point.

I’m curious if anyone considers using such an image on a slide inappropriate.

I really enjoyed this one too. WG is a great interviewee, and David was able to pull him back on topic when necessary.

1 Like

There is no doubt whatsoever that the technology allows authors to create worse books: that can be seen by the flood of AI generated drivel on Amazon.

As for producing ‘better’ books – doesn’t that depend on the reader’s expectations? (Which is why the thought experiment is couched the way it is.)

My expectation is that the author who sells a book under their own name is competent enough to do basic authorial tasks themselves and that means they have thought through their ideas and they can express themselves clearly and cogently, without the need for extensive, active, support from AI. Or at the least, if they have used such tools, they are honest about the fact.

By active I mean that the AI has suggested things which go beyond, say, looking something up in a dictionary or a reference book (where the information gained is passive and has to be reworked by the author) and into the realms of suggesting rephrasing or finding holes in plots, and further into writing the plots themselves, where the reader has no idea whether the idea was the work of the author or not.

If an author is using such active support, then to me that means it isn’t ‘all their own work’, and I would cease to trust them if I found out that they’d used it without making that clear.

For one thing, the dubious sourcing of much of the data behind the models is an issue for many people, and may be a factor in their decision whether to buy, particularly when its use is likely to make it harder for the living artists and writers whose works the models were trained on to continue to sell their own work.

1 Like

I think there are all sorts of grey areas around what can be used and when, and it is going to take a long time to sort out the implications, which is why I think it’s useful, in the meantime, to think in terms of our expectations as readers or audience members.

Are the audience going to expect that this picture was hand-produced unless you tell them otherwise? In this case, I doubt it, but equally, I wouldn’t be distracted by a small caption given the provenance, any more than I would be by a copyright caption on a Getty stock image.

2 Likes

I don’t know if this is a cultural thing, an ethical thing, a scientist thing, or maybe a nice soup made of all of it, but my view very much is that using AI for writing and not attributing it as such absolutely is plagiarism. There’s no doubt for me on this. Plagiarism is the act of taking work you didn’t do and attributing it to yourself. If AI wrote something and you edit it and take credit, that is plagiarism. You did not write it. It doesn’t matter that you took it from a machine - what matters is that you’re claiming as your own work that you didn’t produce.

(And for what it’s worth, I do some copy-editing and proofing work for scientific writing, and editors rarely get or ask for credit unless the draft is a mess and they contribute significant changes. Even book editors often only get credit for big projects, or from authors following the convention of thanking them in the acknowledgements.)

I would like to see all AI work indicated as such, like in the checklist @brookter shared. It’s a matter of trust. Scientists have to declare their contribution on published works (and as mentioned editing is rarely credited). It’s not unreasonable to ask the same of other published writing that’s listed under an author (or authors).

2 Likes

I think my answer for you is “it depends”. If you’re representing your employer, you probably should add the credit to the slide (or have a last slide or sheet somewhere with all the credits). You should be doing that for all image use. (I am saying that as someone in the UK and EU though, I don’t know what best practice is in the U.S.)

If you’re giving the talk as an individual, it’s still best practice to attribute all your images, but most people don’t, and the expectations are different for individuals vs. businesses.

(For artwork, I’ve seen “created by Bob using Midjourney”, which seems a good way to label it.)

2 Likes

I don’t know the answer, but I do know when I’m reading a useful and well written book.

It’s a good picture! A nice companion to your wise words, Barret.

A lot of folk will know it’s AI by looking at it, and they won’t care.
The rest? They won’t care.

It’s easy to put a little “source: ChatGPT” on the slide. No one will care, though.

And, if it makes you feel more comfortable, you could always make a joke of it: “Ironically, when it comes to artistic skills, I have none! But I love this little picture that I created with ChatGPT. I’m not sure if it’s art, but I hope it helps convey my message. The small pool …”


If you do find someone who cares that you used ChatGPT, they’re allowed their own opinion, but if they focus on that rather than your story, I suspect they’re in the wrong room. You’re doing big picture, change the world stuff, and they are counting the number of angels on a pinhead.

1 Like

Does the image illustrate your point though? It appears the person is fishing where the majority of the fish are. Which seems to me to be the opposite of the point you are trying to make.

And I would say upfront that you used AI to generate the image. Otherwise people will notice, and will be wondering, “is that an AI generated image?” and “what else is AI generated?”. And not be focused on your talk. (Whenever I give a talk, I make sure the examples are realistic and any math always checks out, otherwise people get distracted.)

Good luck with the talk.

1 Like

What a great idea! People who use AI to write should declare that they’re using it, if they want to, and be proud of the fact.

I have crippling pain when I write, and long afterwards. That stopped me writing for years, and it’s been horrible. Every day hurts, and it’s been 3 years since I last published a book. Not so good for an author.

ChatGPT, especially its dictation feature, has got me writing again.

My writing and thinking and ability to help people has soared.

It’s so brilliant!

And although I don’t think writing with ChatGPT is plagiarism, I do think it’s a wonderful idea for people who are using it to make a big deal of the fact.

Good point, but my speech should make it clear, as I have data about the shrinking pool of educators. At least I hope I make it clear! :slightly_smiling_face: It is certainly not a perfect image, but it’s better than anything I could find searching the net and checking image sites.

My wife thinks “washing up” includes what I’d call “washing up” AND what we both call “drying”.

I think of all of those things as being part of writing, Mitch.

Doesn’t matter, it’s just words.

“Helpful and well written” is not my primary criterion for reading creative writing. I want it to be much more than just those utilitarian basics. Helpful and well written could equally describe a traffic sign.

When I read a blog post or a book, I hope to encounter another person’s unique thoughts expressed in their unique way. Both of those elements are important to me as a reader. I read to hear what a human author says and how it’s said, not a machine’s predictive text edited by a human.

To me, there’s a significant difference between a traffic sign and a human’s creative writing, both of which might be described as “helpful and well written.”

2 Likes

Somewhat ironic in the context of this discussion.

1 Like

I am following the discussion and believe you and Clarke both want this. The same goes for your previous paragraph; he’s not advocating for achieving the low standard of utility writing.

The disagreements are whether a unique voice can be achieved with early use of an LLM in the writing process, what authorship credits would be owed, and whether LLMs should be used early at all, regardless of creative thought and writing quality.

1 Like

This is a bit disingenuous now, isn’t it?

Using an LLM to generate the first draft is very different from using an LLM as a dictation tool.

Ummm … you can do both. I do.

(And, I wonder if you could be a little more generous … and not accuse me of being “disingenuous”!!! Maybe it’s a cultural thing, but where I come from that’s quite an insulting thing to say to someone you’re chatting with.)

I know you can do both. And now I know that you do.

There is nothing wrong with that.

But the comment you were responding to was about the ethics of claiming that one is the author of work generated by an LLM, and you respond, in what looks like a play for sympathy, that LLMs have let you write again despite your physical issues.

Which is a non sequitur. And as you are obviously an intelligent fellow, it looks as if you were intentionally attempting to change the narrative. Which is disingenuous.

I find it insulting that you tried to play the sympathy card. (In fact that entire post appears to be sarcastic and insulting.)

1 Like

Thanks for that, @cornchip, that’s a nice summary. A good egg, indeed!

I’m sure we all love the idea of using tools to create great, helpful writing.

Likewise, I’m confident we all HATE that these same tools can be used to pollute the world with even more horrible words.

The good thing is … MacPowerUsers tend not to fall into the second camp. I don’t write that lightly; I genuinely think that the people who join this forum are good, decent people.

1 Like

“A play for sympathy”.

It wasn’t, Steve.