How do you cite AI results in your work?

As part of my work I produce sermons and Bible teaching materials. I use software from Logos and have over 2,000 resources. The software recently introduced AI, which will answer questions based on the content of your resources.

How do you quote such responses from AI when preaching or teaching?

The AI is helpful in that the summary it produces has superscripts linking to the resources each part came from. But the entire summary is actually produced by the AI. In writing, citing the underlying sources is simple, but the summary text itself was generated by the AI, so to whom should it be attributed?

EDIT
I thought I’d show an example to help clarify the issue.

I use Logos extensively, but I’m not paying for the subscription for the AI capabilities. That said, I’ve instructed ChatGPT to always provide me with an APA-formatted citation for any content generated by it. If I use that content, I footnote the reference. Here is an example:

OpenAI. (2025). Response generated by ChatGPT based on user-provided prompts and context. Retrieved May 2025, from https://openai.com/chatgpt

This may not be helpful for your context, but that is how I’m handling this issue.


They are not an author and should not be cited.

Explain how you used the tool instead.


We don’t bother to cite AIs. But we do have a transparency notice on our company website telling people in broad terms how we use AI, and also in what circumstances we won’t use AI.

This is interesting, thanks for sharing. Given the arguments offered, I’ll not cite AI as a source, but I’ll continue my practice of disclosing how I’ve used it. I’ll need to refine my process, perhaps by using footnotes or endnotes. For example, in this article, because it was about using AI to write, I include how I used AI for the article:

How I Used AI for This Article

Like most people, I’m navigating the appropriate use of AI in both my professional and personal life. To illustrate this, I used AI in writing this article in the following ways:

• Searching for examples of AI use in writing.

• Correcting spelling and grammar.

• Editing for clarity while keeping my words and tone.

• Checking for overlap or redundancies.

• Suggesting a “best practice” title.

However, one would not want to do this for articles not specifically about how to use AI for writing. Instead, a general statement in a footnote or endnote disclosing how AI was used may work best. I don’t have AI writing my content, but I do use it to edit, which means I instruct AI to use my vocabulary but to refine for flow and correct grammar. I also use it for ideation, including suggestions for article titles.

On my blog site, I also include this statement in the sidebar:


HUMAN-CREATED CONTENT POLICY

All content on this blog is created by humans unless explicitly stated otherwise. Any use of AI-generated content, such as images or text, will be clearly noted in the respective post or section.

As I explore and experiment with AI tools, I will indicate if and how AI was used in the creation of any content to ensure clarity and integrity.


I’d love to hear suggestions for others. It is important to me to be transparent about where and how AI is used in my writing.


I’m wondering if AI is making me lazy. In reality, if I read the sections in the books and articles that the AI cites (clicking the link takes you to the text and highlights the relevant passage), I could write my own synopsis. AI is already making me lazy, sheesh!


This is a legitimate temptation and danger for all of us. We already suffer from short attention spans. I’m fortunate in that I’m old enough to have lived much of my life without social media or smartphones, so I am comfortable with extended reading sessions. Even so, smartphones, social media, and the internet in general tend to promote skimming and short bursts of reading, which is not conducive to deep thinking while reading. This is a problem I’m conscientiously seeking to avoid. Similarly, I’m striving to avoid the temptation to take shortcuts with AI, while simultaneously trying to figure out how to leverage it as a tool, not an “easy button” that over time diminishes my ability and/or willingness to struggle with good writing and deep reading.

I have no problem with technology making things easier; I just don’t want it to make me intellectually lazy.


I identify text or images generated by AI, the model and date, and in my own notes I record the prompt(s).

I don’t think these models are making me stupid or lazy. (I’m pretty good at generating those results on my own.) I’m learning to probe and confront the answers the models give. I also triple-check any citations; these models are never trustworthy on that front.

The area where I think they are making me (us, maybe) stupider is the AI-generated answers that Google and others now place above search results.

Katie

Aren’t we all at times! :grinning:

This reminds me of this clip:

At the 1 minute mark:

Google’s AI replies in search are simply awful; they’re wrong all the time, even with simple queries. Perplexity, however, is outstanding, a real game changer when your work involves constant searching all day like mine. Once you get in the habit of grilling it rather than just passively accepting what it says, you can get great results.

It’s worth checking the sources in perplexity.

I’ve had occasions where it provided me with what looked like well-sourced, accurate information, but … when I checked the sources, I realised they were fake. I wouldn’t have known that if I hadn’t checked.

If I’d seen the sources, I would have realised they were skanky, fake, clickbaity stuff within moments, but perplexity’s answers looked genuine.

Why? Because it (unintentionally) hid all the signals from the original web pages that would have shown they were dodgy.

Garbage in, garbage out … but the garbage out sometimes looks really convincing.

Yes, checking sources is good practice, thanks.