Thoughts on Using AI as a Writing Tutor?

There has been robust discussions in prior posts about the ethical and practical use of AI for writing. Moreover, @MacSparky often refers to AI as an “intern.” As I’ve continued to consider the role of AI in my workflow, always conscious of the importance of transparency and ethical use, it occurred to me that I could use AI for more than an editor. In the spirit of being a life-long learner (admittedly an overused platitude by educators), I can use AI as a writing tutor without compromising my integrity. With that in mind, I asked Claude, Gemini, and ChatGPT how they could serve as my writing tutor while avoiding do my writing for me. Of the three, ChatGPT provided the best response (see below). With that in mind, I’m going to try this for a few weeks. My purpose is not for AI to do my writing, but to teach me how to improve it.

To demonstrate how this might work, I submitted the above paragraph, warts and all, to ChatGPT and requested that it serve as my writing tutor. Here is what it returned:


Here is your revised paragraph and explanation formatted in Markdown:

Revised Paragraph

There has been robust discussion in earlier posts about the ethical and practical use of AI in writing. @MacSparky often describes AI as an “intern,” a helpful metaphor. As I’ve continued to reflect on AI’s role in my workflow—with a consistent focus on transparency and ethical use—it occurred to me that AI could serve a purpose beyond editing. In the spirit of lifelong learning (an admittedly overused platitude among educators), I realized I could use AI as a writing tutor without compromising my integrity.

With that in mind, I asked Claude, Gemini, and ChatGPT how each might help me improve my writing without doing the work for me. Of the three, ChatGPT offered the most helpful response (see below). Over the next few weeks, I’ll test this approach. My goal is not for AI to write on my behalf, but to teach me how to write better.


Explanation of Changes

| Original | Revision | Why It’s Stronger |
| --- | --- | --- |
| There has been robust discussions… | There has been robust discussion… | “Discussion” is uncountable here; singular agreement is correct. |
| Moreover, @MacSparky often refers to… | @MacSparky often describes AI as… | “Describes” fits the context more naturally than “refers to.” |
| always conscious of the importance of transparency and ethical use… | with a consistent focus on transparency and ethical use… | Streamlined for clarity and improved sentence rhythm. |
| that I could use AI for more than an editor. | AI could serve a purpose beyond editing. | “More than an editor” is awkward; this phrasing is cleaner and clearer. |
| life-long learner | lifelong learning | More natural expression in this construction. |
| while avoiding do my writing for me. | without doing the work for me. | Corrects grammatical error and smooths the flow. |
| I’m going to try this for a few weeks. | Over the next few weeks, I’ll test this approach. | More polished and purposeful tone. |
| not for AI to do my writing, but to teach me how to improve it. | not for AI to write on my behalf, but to teach me how to write better. | Stronger contrast and phrasing. |

Using AI in this way doesn’t mean one should accept every suggestion. AI’s phrasing can sometimes feel stilted or impersonal. Still, it can offer valuable feedback that helps improve one’s writing over time. This approach leverages the strengths of AI while preserving both integrity and authenticity—if, and only if, one takes the time to study the explanations, learning from them or rejecting them.

Here is how ChatGPT responded to my original query about serving as a tutor.


Your Role (What You’ll Submit)

You will send:

  • Drafts of any length (paragraphs, blog posts, articles, etc.)
  • Your intended audience and purpose (e.g., professional, devotional, persuasive)
  • Any relevant constraints (tone, word count, biblical references, style preferences)

You want to:

  • Preserve your authentic tone and voice
  • Improve clarity, vocabulary, engagement, and flow
  • Eliminate redundancy, filler, and weak phrasing

You may submit as needed—no schedule required.

My Role (What I Will Do)

For each submission, I will provide:

1. Overall Feedback

  • Brief evaluation of clarity, tone, structure, and flow

2. Line-by-Line Suggestions

  • Edits that retain your style but improve readability and precision
  • Highlights of areas where stronger vocabulary, conciseness, or better phrasing would help

3. Explanations

  • Clear rationale for each significant change to help you grow as a writer

4. Optional Style Tips

  • Occasional guidance on structure, rhythm, and rhetorical strength
  • Only as appropriate and never at the expense of your voice

This will be an iterative, tutoring-style process designed to sharpen your writing while respecting your authorship. Ready when you are to begin the next session.


Thoughts? Is this a wise use of AI, a misuse, or something in between?

I think our resident journalism major, @ismh86, should weigh in. :slightly_smiling_face:

2 Likes

I taught Freshman English composition for many years and gave my students weekly writing assignments. I found I had to spend far too much time correcting problems with grammar and syntax and thus had too little time to attend to important things like word choice, clarity, focus, logic, and argumentative structure.

If we’d had the technology then, I might have built a GPT for them to use to identify—and explain!—the grammar problems with their first drafts, and maybe word choice, but nothing else. (I might require them to submit their first draft chats so I could identify common errors that might warrant 15 minutes of class time.) My job was to teach them how to write a coherent document, but, more important than that, how to use writing to think. I’d want them to leave Composition 101 with the tools to do that on their own. Composition 201 would definitely be all about how they might use a chatbot to help them probe an issue and shape a piece of writing about that issue.

I found it intriguing how the author of a recent New Yorker article grappled with using AI as a teaching tool. It’s behind a paywall, alas, and I can’t provide a gift link.

“Will the Humanities Survive Artificial Intelligence? Maybe not as we’ve known them. But, in the ruins of the old curriculum, something vital is stirring.” It’s by D. Graham Burnett, the Henry Charles Lea Professor of History at Princeton, which is a fine credential to be sure, but I’m more interested in his affiliation with Friends of Attention.

2 Likes

The best ways of improving your writing, aside from practice, are:

  1. Reading
  2. Getting feedback from humans
  3. More reading

The humans in 2 will depend on your circumstances, but could include teachers, colleagues, editors, knowledgeable friends/family, members of writing groups, or even fellow forum users. :slight_smile:

We could also add a 4: online resources with actual people sharing their knowledge. Writers or teachers on YouTube, essays, blog posts, podcasts, etc. There is practically an infinity of resources available in all media types, for all writing styles and genres.

2 Likes

I’m not sure why or how, but I was able to access the article even though I’m not a subscriber. I created a PDF for reading later. Thanks for the link. I’m looking forward to reading the article.

1 Like

[quote=“Bmosbacker, post:1, topic:40724”]
There has been robust discussions[/quote]

The LLM is wrong. It should be There have been robust discussions . . .
It’s more than one discussion because the discussions occur in different places, and at different times, with different participants. You could reframe it in present tense: There are robust discussions . . ., if you wanted.

In the 1980s a number of software packages to help undergraduates in Freshman comp classes flourished. One, WANDAH, developed at UCLA, was bought from UCLA and sold as HBJ Writer.

Another app, also created at UCLA, was based on the principles discussed in a textbook called Revising Prose, by Richard Lanham. These both pre-dated the Mac.

2 Likes

Its* unethical to not use AI to improve one’s** writing, imo.

We shouldn’t limit it to line-by-line grammatical stuff though. A lot of that stuff is very important (and easy, maybe) to some people, and it puts a lot of other people off writing. Clever people never learnt the rules, hated the rules, and don’t want to look stupid.

Where AI is going to make the world a better place, for millions eventually, is when they can dictate into their cheap phone what they want to tell someone else, the phone rewords the dictation, asks for clarification (out loud), then sends an easy to read message to the other person.

I think of my Dad who was one of the cleverer people I’ve met, and could write short messages and read the newspaper, but couldn’t read my books, or write long messages. That’s limiting, and theirs a stigma, and one day people like him won’t “use AI” to read and write better, they’ll just “use their phones” to communicate with people. They won’t even need to speak (let alone write) in the same language. An ebike for billions of minds.

* I think this should be “it’s”
** is this how one*** should use “one’s”?
*** and what about this “one”?

Here’s another one for you, from Adam Tooze:

Chartbook 385: Red AI. Or, how ChatGPT went from helping me translate the Little Red Book to generating a revolutionary program for the USA today. And how DeepSeek won’t go there.

ChatGPT did far more than merely translate. At the AI’s suggestion we went down a path that decoded the structure of Maoist revolutionary language in general, offered templates for generating new phrases and then, with a little prompting from me, used the AI-generated Mao formulae to generate a revolutionary program for contemporary China and the USA, including slogans, a manifesto and posters to match.

Turns out LLMs are very good at decoding and generating systematically and deliberately formulaic political language. Who’da thunk it!

1 Like

Excellent advice. I find the prose generated by AIs is either flat and uninteresting, or strange and sophomoric. Language is a human thing – robots will never be so clever – we’ll just have clever people who enjoy robots.

Besides, there are few things more enjoyable than sitting quietly and experimenting with words.

Katie

I think that’s good advice - and, realistically, it’s what got most of us here - but it’s not the only way to improve your writing.

It doesn’t match my recent experience though.

My writing has improved enormously over the last year, and that’s because I’m using AI to help me write better. It turns out it knows a lot more about writing - the bigger patterns, not just spelling and grammar - than I do. I’ve published 5 books, which I learnt to write the hard way, and I’d gotten as good as I was gonna get … but I’ve had several comments recently from long-time readers who say that my writing has been amazing.

This is important to me because I can help more people.

It should be it’s; you are correct. The ’ marks something left out. In this case, the “i” of is has been left out.

You should almost never use “one” unless you are counting. It’s archaic, and represents a more formal and elevated style that is generally inappropriate (and even pompous). Think of "one does not simply walk into Mordor . . . "

Thanks for that!

I thought I’d got it’s correct, but I wasn’t sure. I spent a weekend when I was in my late 30s trying to memorise that one.

The one one … I never realised it was considered archaic. That’s good to know. I just thought I was a bit thick.

Oh you’re going to have fun with this site -
https://beta.lmarena.ai/

Old version is at
https://lmarena.ai/?leaderboard

Choose Arena

It is a head-to-head chatbot battle: you pick which one is better, then the model is revealed. Don’t limit yourself to only a few models; find one that suits your style.

Claude has a nice feature that allows you to upload samples of your writing and then it generates a style based on your input.

https://support.anthropic.com/en/articles/10181068-configuring-and-using-styles
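
If you want to play with the same idea outside the Claude app, here’s a minimal sketch of how one might approximate a “learn my style from samples” tutor with the Anthropic Python SDK. To be clear, this is not the Styles feature itself, just an illustration under a few assumptions: the model name is a placeholder to swap for whatever is current, and my_writing_samples.txt is a hypothetical file of your own paragraphs.

```python
# Rough sketch only: approximating a "style from samples" tutoring workflow
# with the Anthropic Python SDK (pip install anthropic). The Styles feature in
# the Claude app works differently; this simply puts writing samples into the
# system prompt so the model critiques and explains rather than ghost-writing.
# The model name and file path below are placeholders, not recommendations.

import pathlib
import anthropic

# Hypothetical file containing a few paragraphs of your own writing.
samples = pathlib.Path("my_writing_samples.txt").read_text()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

system_prompt = (
    "You are a writing tutor. Study the author's samples below to learn their "
    "voice, then critique the draft they send: point out grammar, clarity, and "
    "structure issues and explain each suggestion. Do not rewrite the draft "
    "wholesale; preserve the author's voice.\n\n"
    f"--- WRITING SAMPLES ---\n{samples}"
)

draft = "There has been robust discussions in prior posts about the ethical use of AI."

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; check current model names
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": f"Please tutor me on this draft:\n\n{draft}"}],
)

print(response.content[0].text)  # the tutor-style feedback
```

The point of putting the samples and instructions in the system prompt mirrors the tutor arrangement earlier in the thread: the model is asked to explain its suggestions so you learn from them, not to write on your behalf.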

You are absolutely, positively, not even a little bit thick.

I dearly love the English language, but on the surface, much of English makes very little sense.

theirs? AI let you do this?

No. That was all me!

“It’s unethical not to buy dodgy watches from Billy Burglar in the pub.”

I do understand that there are benefits to AI, but minimising legitimate concerns about the provenance of AI and its pushers is hardly the way to convince people that it’s an unalloyed force for good, is it?

1 Like

Yeah, I get triggered too when people use the word ethical!

Especially when they take one side - when there are (at least) two sides to every dilemma, otherwise there’s no dilemma.

Have you taken sides? I haven’t. I have just articulated one side.

There’s no ‘two sides’ involved in deciding whether Gen AI is ethical, given what we know about the provenance and the toxicity of the people producing it.

What we have is a debate about how far one is prepared to compromise to get the benefits it brings, real and imagined, given the additional damage it causes.

We all make compromises every day, and none of us is perfect – I make absolutely no claims to personal virtue – but it’s daft to suggest that there isn’t a compromise there. Hence the analogy to knowingly buying a stolen watch from a criminal: what one is doing is clearly ethically wrong, but the hope is that the benefits justify the means. Does one need to save the money to buy food for one’s starving child or to pay off one’s drugs debt?

No, I genuinely haven’t decided whether I think the compromise is worth swallowing yet – or more accurately, where I should draw the line personally. I’m also very lucky that I’m not forced to use Gen AI by an unscrupulous employer, so I have a free choice.

Obviously, ‘it helps me make stupid memes for a laugh’ isn’t worth it, but ‘it helps me with important research which otherwise couldn’t be done’ seems different.

I’ve seen some very good justifications on this forum (e.g. enabling research that otherwise wouldn’t be possible in the humanities and science), but there’s also been a fair amount of trivia.

Personally, I feel that if I can’t develop my thoughts and express myself coherently without AI homogenisation, then I probably shouldn’t be inflicting those thoughts on other people, but then it matters a lot to me that good writers and artists continue to be compensated for their efforts. Other people will make their own judgement as to where to draw the line.

This is a different argument to whether I think AI writing is any good (it clearly has a long way to go before it matches good human writers) because it may one day soon get there. If the AI companies were to compensate their victims appropriately, then that may change the calculus, but these are toxic powerful people and that seems unlikely.

2 Likes

I heard an interesting twist on this. Author David Brin said that great writers put a spell on us. In order to lift the spell, and figure out how they pull it off, he advises taking the time to recopy good prose. I’ve never tried it myself, but it sounds like good advice.

1 Like

Well … I politely disagree - there are two sides - the “it’s unethical to use AI” side, and the “it’s unethical to not use AI” side.

Like you, I hope there is some compensation for authors because pirated PDFs were used.

For me, the upside to humanity of using AI massively outweighs the downside, and it’s unethical to not use AI.

I suspect you’ll not agree with me! Which would be healthy.