AI: “Not Hiding the Machine, but Learning How to Speak Through It”

I came across this statement in the Literature and Latte forum. I’m not entirely sure what I think about it or even if I fully understand its implications, but it’s an intriguing perspective on the dilemma of where AI fits into the creative process.

How would you interpret this, and based on your interpretation, what is your reaction?

Really interesting analogy, and I can see where it comes from. The more you use AI, the more you feel able to work through it. It's just that you're doing it at the level of composition: you choose the structure, shape and purpose of the text, rather than placing one word after another.

I'm a governor at a local primary school, and a teacher gave us a presentation the other day about how they teach writing to 5 to 7 year olds. It was all about things like the difference between a joke and a story, about finding a hook (like "happy") and threading it through your writing, or about how you write differently if the reader knows the topic vs if they don't. What I pointed out at the end is that this wasn't so much about writing as about composition, and the teacher agreed. Writing out words by rote, and even spelling and grammar, are of secondary importance. Their aim is to teach kids to find their voice through the structure and form of their writing.

This really struck me as a good analogy for how I use AI, even in a business context. If I'm writing a business proposal or a Board report, it doesn't boil down to "how do I write a snappy title". Instead I'm able to focus on the structure, shape and purpose of the prose and let the AI fill in the words in between.

Whether I would want to read a book written by someone else doing this is an interesting question. Depends on the book, I guess. I've read a lot of sci-fi and fantasy where the writing is downright appalling, but I had a fun old time, because heroes and villains and spaceships and monsters are almost always fun. I imagine that sort of thing, written with an AI, could frankly be a lot better.

But then I also love a lot of classic English authors, like Iris Murdoch, Graham Greene, Ian McEwan and Virginia Woolf. It's hard to imagine any AI reliably managing a turn of phrase as well as those writers, no matter how masterful the AI controller.

This is an interesting approach to AI use. While my thinking on the matter is still evolving—along with the technology itself and my proficiency with it—I find myself inclined to reverse the typical process. Rather than using AI to do the writing for me, I prefer to use it to help shape the outline, structure, and direction of the work, while I supply the words and arguments myself.

I’m currently facing this very question as I develop a white paper on AI in education, with specific application to our school. Out of curiosity, I asked ChatGPT, Claude, and Gemini to propose an outline, conduct deep research, and even generate a first draft. I also provided them with a paper I had previously written on technology as context. However, I have no intention of submitting the AI-generated draft. I will write the paper myself. I do not want to submit something to our board that I did not personally write.

That said, I see no issue with using AI to generate ideas for structure, main themes, or discussion points, and I’m fully comfortable using it after the fact to edit for grammar and improve clarity and flow.

What I’m trying to avoid—despite the temptation—is outsourcing the hard work of thinking and writing. I want to ensure that the words are mine. At the same time, I want to leverage the power of AI to support the process without compromising my responsibility to do the intellectual labor.

A recent Verge article highlights the abuse and practical danger of relying on AI to do one’s writing. For me, however, the concern runs deeper. It’s a matter of authenticity and integrity.

Thoughts? I’m not arguing a point, I’m ruminating. :slightly_smiling_face:


The obvious question then - why would you not be happy submitting an AI-generated piece?

That is a fair question. I have several reasons:

– If I submit a report that was generated by AI, I could not, with integrity, put my name on it. It would not be my work–it would be the work of the AI. Doing so would compromise my credibility and the trust I’ve built with the board.

– There is substantial intellectual value in wrestling with ideas during the writing process. Bypassing that process short-circuits both critical thinking and creativity–two disciplines essential to thoughtful leadership, and to good writing. :slightly_smiling_face:

– I’m also concerned that relying on AI to write content could lead to the development of lazy habits. As with any tool, overdependence can dull one’s skills.

– I do not want our students submitting papers generated by AI as if they were their own. That would violate the very principles of academic integrity we strive to uphold. As a leader, I must model the standards we expect from our students. Anything less would be hypocritical and detrimental to our mission.

That said, we are actively working through the question of when and how it is appropriate to encourage students to leverage AI as a tool in their academic work. We have not yet reached a final position, and I suspect this will remain a moving target as the technology and its uses continue to evolve.

I’m willing to be shown where my thinking is wrong.


I think this is a question of culture and context. In my organisation we've been going through something of a transformation with regards to AI. We're a non-profit working in healthcare, with never enough time or money, and a clear social mission that often results in a by-any-means-necessary approach to what we do in all sorts of ways.

Firstly, we established ground rules within my org:

a) what matters is the quality of the material, rather than how it was produced
b) people will say when something was made with an AI if/when asked
c) people are responsible for what they produce, whether it was made by an AI or not. If it's not good enough, it's on the staff member.
d) no-one will ever send out a piece of work produced by an AI without human review first. Guided creation is the preferred approach.

These are formal policy.

We have an AI transparency statement on our website, linked in our business proposals and tenders, that explains in broad terms how we use AI at my org, and when we don't.

In my organisation, people passing something off as their own work doesn't really happen, because it's assumed that people are using AI much of the time. Again, what matters is the output rather than the process; that's what we judge each other on. When someone comes up with some cool new approach to an AI tool that produces an even higher quality product than before, we share and celebrate it.

The concern about laziness is also culturally driven. The use of AI in our work has resulted in a huge uplift in the quality of written work right across the business. People here aren't producing the same quality and quantity for less effort; they're producing higher quantity and quality with the usual effort. It's been noticeable that our policies, documents, web posts, leaflets, social media posts, fundraising proposals and so on are all much better than before.

I'm not in higher ed, so I don't know any of the details from within, and I do recognise the challenge that AI would represent there. When a student analyses a topic and reflects on it in writing, what matters is the process, not the document. However, as a leader within the organisation, writing reports and such, you have different goals to your students. Your primary aim is the document, not the process. So I personally wouldn't stop on that account.


I wonder if you could sometimes use AI to write sentences and paragraphs that you then edit the ar5e off, but most times write from fresh.

I know it’s a slippery slope, but I reckon you’re gonna know when you’ve gone too far.

I just wonder if you’ve gone far enough?

This is a very interesting and worthwhile topic, and I hope you take my thoughts in the constructive spirit in which they are intended - and I am open to being critiqued myself as well.

That said, I would suggest that your stance on using AI is doing a disservice to both yourself and your students.

Suppose you came to my office as a patient with a challenging medical problem. Now suppose I used AI to search the medical literature for a list of diagnostic possibilities. Would that mean that ultimately your diagnosis is from AI rather than from me? I think not. While it is acceptable (arguably mandatory in 2025) to use AI in professional work, I still need to be responsible for the final intellectual conclusion, with no exceptions. AI may help to point out some possibilities I missed, it may list some items that don't belong on the list at all, and it may omit some possibilities. No matter the situation, I own the final decision ("own" as in I am professionally responsible/liable). You came to me because I know how to use medical knowledge tools - it's my job to use them properly.

If I take you up flying in a small airplane and set up the autopilot to fly an instrument approach, does that mean I cannot log the autopilot time as pilot in command? Of course I can log it - because I remain ultimately responsible for the outcome of the flight both legally and morally. If the autopilot malfunctions or misinterprets an approach procedure, I cannot blame the adverse outcome on the autopilot. My job is to monitor aircraft automation but to always keep my skills and knowledge up so I can identify an autopilot problem and take over.

The common theme is that despite computer automation, I own the outcome - legally, morally, and in every other way. The same should be true in any other field where AI/automation is used - be it teaching, educational administration, law, engineering, accounting, or wherever else.

There is no need to "credit" AI as your tool any more than a construction contractor needs to credit the company that makes all of the tools he uses to build or fix a house. He is welcome to take credit for the work as "his" even if tools improved the quality of his work; but he is also not allowed to blame the tools if something goes wrong and he does not know how to detect and fix the problem.

I suggest your students should be permitted and in fact encouraged to use AI as a tool to check for arguments they omitted, identify errors, and otherwise polish their work. They need not report AI as a citation - indeed, they would be permitted to rely on facts quoted from a traditional printed citation but not permitted to rely on facts from AI! The key point is - if they use AI, they own the consequences.

If your students use AI, the fear of God must be drilled into them to personally check every single fact and to never rely on it as being complete. The potential ethical issue with AI is not failing to cite AI but rather turning in a paper which has hallucinations or false citations or even a logically non-applicable citation. The cardinal academic sin is not failing to cite AI but instead using AI content without unmistakably confirming its accuracy. If I were a professor I would strongly encourage students to use AI for its strengths, while at the same time I would make it clear that even a single instance of AI hallucination in work submitted by a student is grounds for failing the course and potentially ethics charges per the school’s honor code.

This ethic regarding AI use will serve your students well in their careers. Avoiding AI would instead reflect a failure to give them an essential tool they will need for the long-term.


Do all AI models hallucinate? If so, do some hallucinate more than others? Has anyone created a ‘hallucination’ score for some of the larger, popular models? How do I know which hallucinate more or less??? :person_shrugging:

All models can hallucinate. You can mitigate it to a significant degree by how you use the AI, but human checking of important content is still a must. There are lots of approaches for comparing hallucination rates, but no universally agreed upon metric.
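One simple mitigation you can script yourself is self-consistency checking: ask the model the same factual question several times and treat any disagreement between the answers as a flag for manual verification. Here's a minimal sketch in Python, assuming the OpenAI SDK and an API key in the environment; the model name and question are just illustrative:

```python
# Self-consistency check: sample the same question several times and flag
# disagreement between answers as a warning sign of hallucination.
# Heuristic only: a consistent answer can still be wrong, so important
# facts always need human verification.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample_answers(question: str, n: int = 3) -> list[str]:
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": question}],
            temperature=1.0,  # keep sampling variance, so agreement means more
        )
        answers.append(resp.choices[0].message.content.strip())
    return answers

answers = sample_answers("In what year was the Hubble Space Telescope launched?")
counts = Counter(answers)
if len(counts) > 1:
    print("Answers disagree; check by hand:", dict(counts))
else:
    print("Consistent answer (still verify important facts):", answers[0])
```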


@rkaplan, you never have to worry that I’ll take pushback or disagreement personally–I don’t. I’ve always learned best through thoughtful, respectful discussion. Besides, I’m pretty laid-back about most things. So never hesitate to challenge my thinking. :slightly_smiling_face:

Just to clarify, as I said originally, I’m not opposed to students or staff using AI. In fact, I agree–it should be encouraged as part of preparing students for college and career. The challenge is determining when and how it’s appropriate for students to leverage AI as a tool in their learning.

I think we’re approaching this from different perspectives. In an academic context–especially where writing is concerned–the process is not just a means to an end. It’s the point. Writing is where critical thinking, reflection, synthesis, and imagination happen. It’s not only about producing a polished document–it’s about forming understanding and developing the art and discipline of writing itself.

One cannot learn to write–or, I would argue, think critically and imaginatively–without actually writing. If we outsource that process to AI, we risk bypassing the very intellectual and creative work that writing is designed to cultivate.

Students learn by doing.

If AI is doing all the doing, then students won’t truly learn to write. In an academic setting, authorship matters–not only for ethical reasons, but because the learning is in the doing.

To put it in practical terms: suppose I submit an AI-generated white paper to my board–a paper that, for all intents and purposes, I did not write. It’s well-crafted and well-received. I’m praised for my insight and leadership. But am I truly worthy of that praise? Even if I edited and approved the final version, the fact remains: I didn’t write it. A machine did, using scraped material from the work of others. Perhaps I’m wrong, but that doesn’t feel honest. Any recognition I received would feel unearned and hollow. It feels like winning a game by cheating.

That’s why this issue matters deeply to me–not just for my own integrity, but because I’m helping shape a culture that values original thought, honest effort, and intellectual ownership. If I begin submitting work written by AI, I’m undermining the very standards I want our students to uphold.

As I said earlier, I’m not opposed to using AI to support the process–brainstorming, outlining, offering stylistic feedback.

But I draw a clear line between support and substitution.

That line protects the purpose of writing in academic life: learning, not merely producing a product.

I value this discussion. It’s a complex and evolving issue, and hearing other perspectives helps sharpen my thinking. For me, this is a conversation about learning, not persuading. So if there’s a flaw in my reasoning or something I’ve missed, please don’t hesitate to push back. I welcome it. :slightly_smiling_face:


As I’ve mentioned before, my current approach is that we should do the fundamental writing ourselves and use tools like AI in the role of editor—not the other way around. AI can be helpful in suggesting improvements to sentence structure, vocabulary, or grammar—much like a human editor would.

But that’s very different from having AI do the writing while we merely review or revise it. In that scenario, we’re no longer the authors—we’ve become editors of someone else’s work. I believe it should be the reverse: we should be the authors, and AI the editor.

Does that distinction make sense?

Again, what matters is the output rather than the process … I’m not in higher ed so I don’t know any of the details from within and I do recognise the challenge that AI would represent there. When a student analyses a topic and reflects on it in writing, what matters is the process, not the document. However as a leader within the organisation, writing reports and such, you have different goals to your students. Your primary aim is the document, not the process. So I personally wouldn’t stop on that account.

I think that’s a fair distinction; however, see my response here.

It seems to me that when one is leading an educational institution–unlike other types of organizations–it’s essential to model the standards we expect from our students. It would be difficult, though perhaps not impossible, to argue persuasively that it’s acceptable for the head of school to use AI to generate his work, but not acceptable for students to do the same. That argument rests on the assumption that the head of school no longer needs to grow in the craft of writing or clarity of thought. In my case, that’s certainly not true! :slightly_smiling_face:

Bottom line: students will not be persuaded by a “do as I say, not as I do” approach. If we want them to develop academic integrity and personal ownership of their work, we must embody those same principles ourselves.

I hear you on that distinction and I understand why the fundamental skill of writing is so important. Indeed, the literal, physical skill of writing is a lost art - I realized that when my daughter graduated #1 in her high school class but is unable to write in cursive.

That said - Might I suggest you could design the curriculum / assignments to naturally encourage and develop both writing and AI skills? That’s the best of both worlds.

This reminds me of the debate over calculators, which was evolving when I was in engineering school. Ultimately the professors did several things: (a) permit calculators; (b) require that students show how they solved a problem - the right answer with the wrong method gets zero credit, the wrong answer with a good method and a minor calculation error gets most of the credit; (c) add some questions to the exam that are very hard to do on a calculator due to the calculator's inherent limitations; (d) add some questions to the exam which require superior calculator skills to solve.

As for AI, a great exercise would be for each student to put together a portfolio (for AI to use as a reference) and a system prompt which helps AI to write in the student's own natural tone. I would much rather read your AI content that sounds like @Bmosbacker than AI content that sounds like generic ChatGPT.
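To make that concrete, here is a rough sketch of what I have in mind (my own illustration, with invented file names, prompt wording, and model): the portfolio is just a folder of the student's own writing samples, spliced into a system prompt so drafts come back in their voice.

```python
# Hypothetical sketch: use a folder of a student's own writing samples as a
# style "portfolio" inside a system prompt, so the model drafts in the
# student's tone. File names, prompt text, and model name are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Concatenate the student's essays as style references.
portfolio = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("portfolio").glob("*.txt"))
)

system_prompt = (
    "You are a writing assistant. Match the tone, vocabulary, and sentence "
    "rhythm of the writing samples below. Do not invent facts.\n\n"
    "WRITING SAMPLES:\n" + portfolio
)

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Draft an opening paragraph about school uniforms."},
    ],
)
print(resp.choices[0].message.content)
```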

Another great exercise would be to assign students to have AI write a paper, then critique it and show how they revised AI's initial product to make it superior and in their own tone.

There are no doubt many other creative exercises which can be assigned to students which force both use of AI and use of human skills for verification and editing. That’s the best of all worlds. That’s the real world your students will encounter both in their careers and in their lives.

Calculus was a required course in my MBA program. (We had four required math courses—five if you count operations research.) The professor allowed us to use both a calculator and handwritten notecards during our exams. He was of the opinion that there was no need to squander neurons remembering, say, the chain rule. He did require us to explain what calculus was in a midterm essay question. I don’t remember the chain rule, but I do remember the basic principles underlying calculus.

Anyway, I think students should be taught how to use AI as a tool for learning and thinking, but not as a substitute for learning and thinking.


The immediate thought that comes to mind - would you refuse to use a scientific calculator yourself, simply because you expect your first year chemistry undergrads to work through formulae manually? Or refuse to use a code library, because computer science students are expected to write all parts of their own code? Would you refuse to use DeepL to translate a webpage, because you have people learning foreign languages?

You are not a student. You’re doing a completely different job to them and as such have access to different tools and resources, in order to achieve different outcomes, for different reasons, according to different standards and ethics.

I suspect that this fact of life doesn’t bother you in a hundred and one other ways in your work. I’m curious why AI might be different in this respect?


I appreciate your excellent responses, thanks for engaging so thoughtfully!

Those are good points and helpful analogies. Perhaps it’s useful to consider spreadsheets, which I use frequently—and am happy to delegate more of to my CFO (my resident AI! :rofl:). They calculate things I’d rather not do by hand. So yes, in that sense, spreadsheets substitute for certain kinds of thinking—specifically, computation.

But here’s the key distinction: even when I use a spreadsheet or calculator, I still have to understand the problem, determine what data matters, choose the right formulas, and make sense of the output. The tool performs the operations, but it doesn’t generate the logic or interpret the results. It carries out my reasoning—it doesn’t replace it. In that sense, tools like spreadsheets extend and support thought; they don’t substitute for it.

Generative AI, by contrast, doesn’t just assist—it can generate the structure, the reasoning, and the language itself. That’s not just support—that’s substituting assembled words for my thinking and imagination. That’s the boundary I’m trying to maintain.

I do use AI—to help with outlining, improve clarity, and strengthen flow. But I want to do the thinking and the writing myself. This isn’t about resisting new technology. It’s about not outsourcing the very work that sharpens understanding and produces insight.

Let me give another example. I’m currently working on a section of my book on school leadership that addresses the selection of one’s senior leadership team. I could easily ask AI to generate an outline of topics to include. That would be faster and might surface ideas I hadn’t considered. I have no particular objection to using AI in that way—as a tool to help generate ideas or highlight gaps.

However, there’s something about wrestling with the content—especially with pen and paper—that sharpens my thinking. It’s not as efficient, and yes, it might mean that some ideas don’t make it into the final version. But I believe the process produces a more authentic result and encourages deeper thought.

The result, WITHOUT AI, is this first draft outline for this section of one of my chapters:

All of the initial thinking is mine. I will at some point ask AI for suggested topics I may have missed.

I take a fivefold approach to writing. First, I jot down initial ideas and a rough outline by hand. Second, I develop a more detailed outline in Scrivener. Third, I may consult AI to suggest additional topics I might have overlooked—but only at the level of structure or content areas, never for generating content. Fourth, I write the section myself. Fifth, once the draft is complete, I use AI to assist with editing and improving flow, while ensuring that my own words, voice, and tone remain intact.

Am I just swatting at gnats and swallowing a camel?


Precisely the point I’m trying to make with reference to my use of AI as well as student use of AI.

From what I’ve seen, Barret, you’re probably a 1 in 100,000 writer - you find it far easier, and you’re much better at it than almost every other human in the world, so it may well be that your process is what’s best for you - now, or maybe forever.

Maybe take a look at this, though: Whisper is an AI-powered jet engine for writing – Six Colors

There are two key points:

  1. Dictation with Whisper is extraordinary.
  2. Clean-up with AI makes it even better.

This is the bit I hope people will notice:

I’m really looking forward to writing more books.
I’m not, however, looking forward to the holy war that feels like it’s coming.
Because there’s a second step to getting a good, clean copy out of a dictation rig like MacWhisper Pro: You have to feed the transcript to an AI like Claude or ChatGPT and ask it to clean it up for you.
The prompt I used for this piece, for instance, was: “I recorded this blog post using a speech recognition AI, so it rambles around a bit and is full of transcription errors and artifacts. Can you clean it up while keeping as close to my intended tone and content as possible?”
It’s not generative writing. It’s not even close. But for a lot of writers, it’s too much.

That’s largely how I use AI for writing:

  1. Some chatting and thinking with the AI.
  2. Some dictation.
  3. Some AI clean-up of my dictation (see the sketch below).
  4. Some chatting with AI, to see what it thinks about my writing, some reordering, and rewriting (either manually, or by AI).
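If it helps to make steps 2 and 3 concrete, here's a rough sketch in Python using the open-source whisper package and the OpenAI SDK; the model names and the audio file path are placeholders, not my exact setup.

```python
# Hypothetical sketch of the dictate-then-clean-up workflow:
# step 2 transcribes audio with the open-source Whisper model, and
# step 3 asks a chat model to tidy the transcript without rewriting it.
# Model names and the file path are placeholders.
import whisper
from openai import OpenAI

# Step 2: dictation -> raw transcript.
stt = whisper.load_model("base")
transcript = stt.transcribe("draft_dictation.mp3")["text"]

# Step 3: AI clean-up, using a prompt like the one quoted above.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{
        "role": "user",
        "content": (
            "I recorded this post using speech recognition, so it rambles "
            "and has transcription errors. Clean it up while keeping as "
            "close to my intended tone and content as possible:\n\n"
            + transcript
        ),
    }],
)
print(resp.choices[0].message.content)
```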

It’s probably hard to imagine how much better it is when you’ve not done it.

It might not help you - because you’re already a prolific writer and don’t need it - but I hope I never have to go back to only being able to type my drafts and edit them by hand.

Nope - it cannot. At least not if the accuracy and quality of the content is important to you.

Surely you are not going to just mindlessly accept the output AI offers. You will validate it against your own knowledge, review the references/sources that AI provides, and then add, edit, or delete from the first draft it gives you.