LLM as posts on this forum

Since you use Grammarly: Georgia college student used Grammarly, now she is on academic probation

I disagree with the University’s stance. I’d be willing to bet a steak dinner at Ruth’s Chris Steak House that UNG administrators and other staff use Grammarly or other grammar checkers. Moreover, what will they do as Copilot, Gemini, and whatever Apple’s AI will be called are integrated into office suites? My prediction? The University will inevitably modify its policy. I’m still formulating my thoughts on this matter, but I have tentatively distinguished between AI-empowered grammar checking and AI-generated plagiarism.

2 Likes

Sadly, my hunch is that the overwhelming majority won’t be able to distinguish, with many unintended (or intended?) consequences. Or even distinguish between real and artificial “intelligence”. We’re seeing a lot of that already in the world, with humans creating false and erroneous “intelligence” that is believed by many, with no computer involved (but for using the computer as a word processor).

2 Likes

Indeed, this is precisely my point. I’m far less concerned about a “Terminator” scenario than I am about the already existing, and likely to be accelerated, evil use of AI by evil people. I am perhaps overreacting, but I grow weary of apocalyptic prophecies and cries of “crisis” at every turn. The machines will not kill us, but we are likely to use machines to kill, mislead, and for many other nefarious purposes.

5 Likes

I’ll do more than disagree, and say that the professor really botched it.

First, while there might be reasons to discourage the use of Grammarly in some circumstances (for example, in a first-year writing course where we want students to learn to correct their own grammar), it isn’t inherently wrong to use it.

Second, AI detectors just aren’t sufficiently reliable to penalize a student based on the detector’s evidence alone.

Third, given how common Grammarly is, no action should be taken against a student for using it unless the professor has explicitly disallowed it in either the syllabus or the assignment sheet (and perhaps that’s the case here, but if it is, the student missed it). No one is likely to understand a ban on AI use as including Grammarly — because in the sense we’ve all come to understand “AI” since the release of ChatGPT, Grammarly isn’t AI.

5 Likes

This is an excellent summary of where things stand. Having researched these tools and their underlying concepts, and having recently worked with them a lot, I don’t think the catastrophizing happening elsewhere in this thread is warranted.

However, I disagree with this take:

Sure, LLMs won’t replace high-quality original thought anytime soon (or more specifically, abduction and inference to the best explanation). But they do make it much easier and faster to create certain things (and to do them better). Someone skilled at working with LLMs can manage, delegate to, and supervise them as assistants to do some really cool things. For an example of this, listen to the AI segment in the recent episode of MPU with Jeff Richardson.

As a result, support people will likely be able to help more people in less time, illustrators will be able to iterate on concepts at a greater scale, and writers can write and edit more. In turn, it’s likely that organizations will need fewer of each of these people. Of course, the flip side of this is that it will be easier to build smaller, more focused organizations.

This kind of democratization tends to increase both volatility and capacity: capacity, because we can literally do more; volatility, because anyone can do more. Hopefully the capacity increases will be enough to counter the more dangerous kinds of volatility.

Note that I’m not talking about the democratization of knowledge here, but knowledge work. I think it’s a subtle but important distinction.

1 Like

This is not democratisation of work. It’s literally describing the opposite. If your suggestion is accurate, then work becomes available to fewer people, and moves more into the domain of privilege.

This is so good I just saved it in my AI research file with complete attribution. :slightly_smiling_face:

1 Like

I don’t follow your logic, sorry.

If an independent illustrator can more easily generate large scale work, a small software firm can more easily provide high quality support, and a freelance lawyer can more quickly deep-dive on the relevant case law, those kinds of work are democratized by these tools.

I’m sure there is truth in this, as there is with any industrial revolution, or in this case, technological revolution. Some will be displaced. But there will also be whole new categories of work created. Forbes recently had an article stating that AI represents an entirely new industry. New industries represent novel and expanded opportunities.

You are correct; technology can have unintended consequences, including the loss of life. But what I’m responding to is the fear that AI will become uncontrollable and potentially “intelligent” enough to decide to destroy and kill. I do not find that scenario convincing.

1 Like

A killing machine would not have to actually be intelligent or itself decide to destroy and kill. Imagine an AI-powered killer responding to a text-based prompt that you typed on your computer similar to when you requested a summary of a set of articles. You would be the one providing the evil directive to destroy and kill using an AI-enhanced machine. And if you really can’t imagine this scenario, read Philip K. Dick’s short story Second Variety.

I so wish that would be the case. In theory, an illustrator can generate more work. In practice, someone with no skills will type into a little box and get something “good enough,” a hundred times over, before they’d hire that illustrator. And between that potential client and the illustrator will be a surplus of hustlers offering to type into that box for them, for a fraction of the cost the illustrator would charge.

I know a fashion photographer who was just replaced with AI by a good client, even with the weird hands and other artifacts. Photographer, models, stylists, makeup artists, assistants… all replaced with words in a box.

What can gently be called democratization might better be described as value extraction.

The problem is the scale. AI will cut across all knowledge workers, in varying degrees and on varying timelines, and the number of roles it creates can only pale in comparison. There will be many new jobs, just not enough.

Truck driver is the top job in roughly 30 states. That’s not going away tomorrow, but over time, you betcha. Warehouse work? Ditto. The trades will take a drubbing too, maybe not as much from AI directly as from being flooded with displaced workers. This is fundamentally different from previous technology advances.

I’m jaded; my experience with tech, VC, and PE firms doesn’t afford one much optimism. It just feels like there’s a whole lot of whistling in the dark going on. I do hope I’m wrong.

4 Likes

Exactly. It no longer requires a special caste for the task. Seems like democratization to me.

Once upon a time, photography was a skill possessed by few. A complicated, arcane art. I think most folks are happy that is no longer the case.

And that’s just one example of new tools disrupting the status quo. And the then-current “experts” (I need a better word here) in the field almost always rail against the hoi polloi joining the ranks of the privileged few.

3 Likes

It’s both/and. The work will be democratized, and because of that democratization all but the most exceptional versions of that work will be significantly devalued.

Jobs of all sorts exist because it’s worth paying somebody X dollars for Y value, in hopes of realizing a return on that investment. At the point where a person’s effort of an hour or two can be replaced by $0.10 in AI compute cycles, the odds of that job disappearing go up significantly.

This isn’t a huge problem when it happens to a small number of people, and there’s plenty of work to reskill them into. The concern with AI is that it’s less likely to be a small number, and we’re not sure what work those people would do.

The question is one of what we do in that situation.

2 Likes

Technology protests and fears have been going on for at least 300 years, and yet you would be hard-pressed to find a large portion of the populace who cannot find work. Technology displaces, but it does not render people unemployable, nor does it prevent them from finding new work. Uncomfortable, yes. Dire catastrophe, no.

Unless, of course, it truly is different this time. I mean, hey, there’s a first time for everything, right?

2 Likes

This is why I said above I am more concerned about humans using the tech for evil not machines operating on their own. :blush:

Indeed, this is precisely my point. I’m far less concerned about a “Terminator” scenario than I am about the already existing, and likely to be accelerated, evil use of AI by evil people.

2 Likes

I think you miss my point, which is that AI looks to be a force multiplier, on a scale previously unknown, where typing text into a little box on a computer screen will send an AI-enhanced machine to deal out death and destruction. Previously people had to both give the order and deliver the death and destruction.

Although, as some will point out, that has already started to change.

I grant you that, and I claim no expertise on this matter. That said, I have more immediate concerns about the human condition and predicament than a hypothetical singularity. :slightly_smiling_face:

And here I am, after being off forum for many, many months — and knee-jerk clicking on this post wondering “what do they have against postgraduate law students posting on MPU”… Been a long day. :sunglasses:

6 Likes