Selling the craft of writing and soul of the “writer” for 30 pieces of silver

It’s a huge problem for trade publishing and scholarly publishing.

When you submit a ms., at least one person sits down to read at least part of it. If the ms. was not specifically requested by the potential publisher, it’s one piece in a sea of “slush” submissions. I’ve read slush for short fiction and poetry journals (both those paying commercial rates and free ones), novel publishers, and academic publishers.

A journal or publisher contending with AI/LLM-created submissions can’t keep up with the reading. Some are offering shorter and shorter windows for consideration. Increasingly, those few publishers who accept unsolicited mss. are accepting agented works only.

It’s not that the AI texts are hard to spot; it’s that there are so many. Each piece has to be entered into a tracking database and assigned to a slush reader. And there are hundreds in any given week.


How much do they pay slush readers? Perhaps that could be a side gig. :rofl:

Thank you for sharing @Bmosbacker. I particularly enjoyed your initial criticism that prefaced the link, as well as @chrisecurtis’s response (and book referral). Even though the article is a glorified ad, as @krocnyc pointed out, the fact that there are people throwing money behind this idea makes it worthy of a discussion like this.

While I agree with @NiranS that AI is useful for exploring ideas, in my experience the exploration ends up creating more work than it saves.

An exchange with ChatGPT, for example, will yield either a breadth or a depth of information masquerading as wisdom that you still need to make sense of on your own, unless you aim to cauterize your own intellectual abilities through passive consumption.

Information is already “a form of garbage”, as Neil Postman described it.

Can you hear the monotonous pulse of the automated dump truck approaching the landfill that thus far we’ve built with our hands?


Plato wasn’t entirely wrong. The further societies drifted from orality in favor of literacy, the greater their reliance on the authority of the text became. Just last night I read an account in Richard Altick’s The Art of Literary Research of how it was commonly believed, for some 70 years, that Charlotte and Anne Brontë walked through a snowstorm in the middle of July en route to reveal their identities to their publisher. This was due to a mistranscription of “thunderstorm”.

AI for writing and research will have similar effects on the provenance of facts, and the historical methods of discerning such facts will need to be given a new lease on life by whoever cares about this. AI “hallucinations” aren’t so different from the errors found in the centuries before ours. We just hold computers to a thinner margin of error than we do humans (particularly apprentice scribes, printers, and other assistants).

We are a long way from AI making anyone a better writer, or thinker, than the influence and discernment of human beings, past and present, can afford us.

As of right now I’m indifferent to how this concerns technical, mundane ephemera/boilerplate. Although I did use ChatGPT to write the introductory paragraph to my resume, which is by no means remarkable otherwise. Nonetheless, I’m partial to @AllanC’s reservations about how the sheer use of the technology will affect us long term.

Technical communication is a discipline in its own right for a reason, and what qualifies as “functional literacy” is becoming murkier with each graduating high school class. So I don’t think even ephemeral communications are safe from the detriment that AI poses for literacy and cognition.

Anticipating the effect that AI will have on humanity will require us to synthesize the effects of virtually every technology before it. I reckon that the remedy lies in a similar method as well.

Had we world enough and time, yes. In my profession, numeracy and the ability to genuinely think in math is rightly prized; if you’ve got someone on your team with that portfolio of skills, you learn to accommodate their deficiencies in other areas, including a predilection for run-on sentences that seem purpose-built to never get to the point.

I also worked with colleagues who couldn’t calculate themselves out of a paper bag, but could bend the tax code to their will; we might have said that the discipline of basic algebra—not to mention calculus in an era of financial engineering—was exactly what they needed intellectually and professionally (and I believe that it was, by the way), but we accommodated them, too. (They were sitting ducks for bankers selling snake oil; half of my career was spent throwing my body between them and said bankers’ barrage of beguiling jargon.)

I worked for a major pharmaceutical company. I had colleagues who could write the corporate memo equivalent of deathless prose, but who had a careful array of magnets and crystals laid out on their desks and were within a hair’s breadth of not believing in germ theory. The discipline of biology and chemistry was exactly what they needed intellectually.

I wish we’d wring our hands as earnestly about innumeracy and scientific illiteracy as we do about chatbots writing emails. I loathe AI slop as much as anyone; I think students who let chatbots write their papers for them are cheating themselves as much as they are the system; I believe the last thing this world needs is more formulaic writing cranked out for a quick buck. But I really do think it’s OK to let a chatbot untangle one’s prose, just as I think it’s OK to use Excel instead of a thirteen-column analysis pad and CAD/CAM systems instead of protractors and slide rules.


Unless you are an assistant editor, you don’t get paid.

No one can read slush for more than a few hours at a time.

Occasionally the trade publisher and the various academic publishers pack up a box of books and send them to me, and I often get ARCs.

Well, well said. I’d read Postman’s Technopoly a long while ago. I need to reread it. For those interested: Neil Postman. 1992. Technopoly: The Surrender of Culture to Technology. Vintage Books.

As to the statement, “The further societies drifted from orality in favor of literacy, the greater their reliance on the authority of the text became,” I have mixed thoughts. While that is fundamentally true, the converse is also true. Prior to widespread textual transmission, people had to rely on oral accounts and place their trust in the authority of the one delivering them. In either case, we are always placing our trust in some form of authority—even our own rationality and presupposed objectivity. The potential advantage of written authority, however, is that it allows for more ways to test for veracity. It also provides a trail of development, which aids in that endeavor.


A wonderfully thoughtful response, thank you.

I agree! I often metaphorically wring my hands over both innumeracy and scientific illiteracy—which is why our school places strong emphasis on STEM education. I also share your concern that students—and I would add adults—cheat themselves when they rely on AI as a substitute for doing their own writing. In fact, I have a blog article in progress with the working title Students and Self-Theft. Perhaps I shouldn’t restrict the focus to students alone. :slightly_smiling_face:


Good for you! Lead them to the water, and entice them to drink!

As someone who was allowed to be innumerate for far too long into my education, I can’t stress enough how important it is to coach students through their discomfort with math and science. There has been nothing more empowering in my life than learning calculus in my late 20s, and I was absolutely furious that no one had made me learn it sooner. I came of age when it was deemed perfectly acceptable to give girls a pass on math and science, especially if we were adept at learning things like literature or history. (Ahem, or home ec, which was still a thing back then.) I hope those days are well behind us.

PS: You might find this article of interest if you haven’t encountered it already: Everyone Is Cheating Their Way Through College - ChatGPT has unraveled the entire academic project.

New York Magazine lets you access a few articles a month before you hit a paywall.

I agree. If I had to advocate for one form over the other, it would be written knowledge, for it extends (as you noted) and gives a means of preserving and revising the oral record. But that by no means negates the importance of practical instruction of the oral kind, and it is indeed orality that affords us the ability to perceive the texture and meter of the written word. The two are like fruit and seed.

The point that I wanted to make in comparing the two was in the paragraph that followed the statement you quoted, which is that AI obscures authority. I can’t help but cringe inside when I see people defer to it to prove even the most banal arguments.

Society seems to be in the middle of a “neo-oral” culture mediated by technology: the memetic transmission of ephemeral information, optimized for mass, passive consumption, that is detrimental to our sensibilities and to our capacity to discern truth from falsehood.

I worry about how AI will exacerbate this. While authors, artists, and the like litigate for compensation from AI companies over the use of their work, we may all benefit more in the long run if they argued for attribution instead.

I should clarify. I will use AI to summarize or tidy my voice dictation, or to expand my medical notes into something palatable for a client. My professional conclusions are my own; only the presentation of those conclusions is altered.

I will agree that AI should not be used to draw conclusions on a topic in which I have no expertise. I should have enough knowledge or experience to know if things are wrong, or enough time to research properly.

For low-stakes conclusions (e.g., which router should I buy: Ubiquiti?), I am OK with researching some AI-derived conclusions.


That’s funny. At a school I once led, I replaced Home Ec with computer design courses within my first six months. A much better use of the space. :slightly_smiling_face:


And it’s not just about the written word, but about music too. The following song is by a band called Velvet Sundown (not to be confused with the Velvet Underground).

This “soulful” song got half a million listens on Spotify. I’m not sure myself, but a lot of people are convinced that they’re an AI-generated band playing AI-generated songs.

A bit more:


Good for you! If we must cling to Home Ec and Shop, we could at least make girls and boys take both, with a side of personal finance while we’re at it.

A few mothers were upset, but I explained—as graciously and empathetically as I could—that if they wanted their daughters (and only girls were enrolled in Home Ec) to learn to cook, sew, and so forth, they were welcome to teach them at home. They weren’t happy campers, but it was the right decision.


Interesting. And frustrating.

Is it a worry when it’s not easy to identify slop and distinguish it from the non-slop?

What incentivises the slop makers?

I replaced traditional Home Ec with high-quality catering and fashion design, part of the compulsory-for-all technology curriculum (alongside design and manufacture, and design and display) for younger students, and then as an option for older ones. Being able to competently and efficiently plan and deliver a meal for a purpose to a number of people, and to have a grasp of product design principles, is a core skill, every bit as much as computer design or writing.

Interestingly, we had huge support from local chefs, including some fairly famous ones, who donated time to teach skills on our courses (though I remember some brief local anxiety about teaching “knife skills” to young teenagers!) and who were delighted to see cooking being treated as a serious, high-value employment area.


Indeed, I would go further: what we call AI—specifically, large language models—appears to presume an authority it neither possesses nor can ever possess. I say “appears” because LLMs are not capable of presumption in any meaningful sense; they lack awareness altogether. Because they lack judgment, self-awareness, metacognition, moral or ethical awareness, conscience, lived experience in the physical world, and the intuition that arises from these, they are incapable of genuine thought.

An LLM is not a mind, a moral agent, or a knower. It has no volition. It is sophisticated software that reassembles and generates language based on statistical probabilities derived from datasets. What it produces is not thought, but statistically generated language masquerading as understanding. It may sound authoritative, but it merely echoes patterns it was trained to replicate—patterns often shaped by flawed, biased, inaccurate, or decontextualized input.

That is why deferring to such systems as if they carried intrinsic authority is not only naïve and deeply misguided—it is potentially dangerous.

Authority implies responsibility, judgment, and credibility grounded in knowledge or experience. LLMs have none of these. They generate output based on training data and probability–not understanding, not wisdom, and not ethical discernment.

In short, an LLM can generate content that reflects genuine human expertise and may therefore sound authoritative. But it is never authoritative in itself. It has no understanding, no discernment, and no responsibility for the truthfulness of what it produces. AI is not like a teacher or a scholar; it is more akin to an unverified anthology—capable of echoing insight or error indiscriminately, without knowing or caring about the difference. It cannot, therefore, ever be truly authoritative. It not only obscures authority—it lacks it entirely.


That’s interesting—we’re in the process of developing a greenhouse as part of our science program. Additionally, because we have a commercial kitchen connected to our campus café, we’re considering launching a Culinary Academy for students and parents as an extension of the science curriculum. Like our Aviation Academy, it would be fully integrated into and enrich our broader STEM programs.

The key differentiator between the traditional home economics model and a program like this is that the latter is deeply integrated into the sciences. It also offers a service component by providing fresh produce to those in need and, potentially, to local farmers markets.
