LLM as posts on this forum

We tend to win when we are writing the scripts for the movies. But maybe not so much after the Artificial Intelligences have taken over that chore for us?

1 Like

A lot of people predicting that LLMs are going to improve dramatically in the next few years are right, but also completely wrong.

LLMs are not spontaneously going to manifest general intelligence. They’ll get better at what they do now, which is produce average output (literally, like, mathematically averaged) based on averaged input. This will be great for a lot of really cool things, like image editing, autocorrection, identifying proteins, finding novel drug treatments, urban planning, and who knows what else, wherever they can support humans and validate or repeat patterns.
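To put that loosely in code, here’s a toy sketch (invented vocabulary and scores, not any real model’s internals) of the core move: a language model scores every candidate next token and samples from the resulting probability distribution, so its output is a statistical blend of its training data:

```python
import math
import random

def softmax(logits):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores a model might assign to candidate next words.
vocab = ["cat", "dog", "protein", "quest"]
logits = [2.0, 1.5, 0.3, -1.0]

probs = softmax(logits)
next_word = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_word)
```

That’s why they’re so good at pattern-shaped work and so unlikely to produce something genuinely new.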

What they’re not going to do is become the next William Shakespeare or John Constable.

Watch what Apple and Google do with practical implementations of machine learning, not OpenAI, which is cashing in on hype, misdirection, and early success. Just look at the wreckage of the few publications that have tried to move to ChatGPT-written articles at the expense of their human staff. Look at the absolute disaster that is Microsoft’s Copilot.

LLMs are not a replacement for people, and given the way they actually work they never can be.

Personally, I’m looking forward to when they’re good enough that RPGs can use them to generate unique radiant quests with dungeons, short storylines, NPCs, and voiced dialog.
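Something like this, perhaps (a hypothetical sketch: the complete() helper, the names, and the sample quest are all invented; a real game would call whatever local or hosted model the engine uses):

```python
import json

def complete(prompt: str) -> str:
    # Placeholder for a real LLM call; returns a canned quest here.
    return json.dumps({
        "title": "The Salt-Mine Heirloom",
        "giver": "Maren the innkeeper",
        "dungeon": "Dreckholm Salt Mine",
        "objective": "Recover the locket from the bandit chief",
        "dialog": ["Please, traveler, that locket was my mother's."],
    })

def generate_radiant_quest(region: str) -> dict:
    prompt = (
        f"Write a short fetch quest set in {region} as JSON with keys: "
        "title, giver, dungeon, objective, dialog."
    )
    quest = json.loads(complete(prompt))
    # The dialog lines would then go to a text-to-speech engine for voicing.
    return quest

print(generate_radiant_quest("the northern marshes")["title"])
```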

5 Likes

That is a fair point, but we write the AI script, literally. We also keep the power going. I’ve read the “doomsday” articles, but I’m not convinced. I’m nowhere near as intelligent as AI developers, and I may be overly simplistic, but until AI is mobile on its own, can control access to needed power, becomes sentient (which I do not believe is possible), is weaponized, and has a motive to kill humans, I have no fear of AI. What I do fear are humans using AI to steal, mislead, harm, and destroy. That is far more likely, and a real and present danger.

I also fear an “AI Divide” where only some humans have access to AI.

1 Like

Good point!


Check out the film Colossus: The Forbin Project. It might make you a little less sanguine about AI.

Before Skynet and The Matrix, This 50-Year-Old Movie Predicted the Rise of AI - IGN

One of the earliest entries of the AI genre came in 1970 – way before audiences had any real sense of where the digital revolution was about to take the world … It remains, 53 years after it was released, one of the most gripping and prophetic films to ask the question: What happens when we create something that is smarter than us?

I will watch it as I want to be fair. My first question will be the premise of “smarter than.” The definition of “smarter” and the underlying presupposed definition of “intelligence” are determinative of the merits of the argument. :slightly_smiling_face:

Sometime in the next few years, the advancement of research and technology will pass the inflection point where problems are no longer solved faster by applying more human brains but by applying more computing cores.

Intelligence is basically an irrelevance, because it’s not really a meaningful word. We know we have it, and we have a limited way of measuring it, albeit fairly inaccurately, but however sophisticated computers get, they will never actually work like a human mind. Brains are not computers, and it’s a paradoxical idea to simulate the behaviour of a brain by building a more complex machine in order to do so. It violates thermodynamics. Not to mention that no arrangement of matter in the known universe comes close to the complexity of the human brain.

Computers do, however, have exponentially increasing computing power and software complexity, and control of systems that humans rely on to provide for their basic needs. Almost every advanced weapons technology is computerised and networked. Modern aircraft fly themselves. Large machinery is computerised. Energy grids are computerised. Nuclear reactors are computerised (and air-gapped, but Stuxnet exists, so that’s irrelevant). Transport systems are computerised. Water provision is computerised.

Computers do not need a motive to kill humans in order to kill humans. The contrary is true. A computer can follow its programming perfectly on a task that is completely mundane and intended to benefit human beings, and still kill people, because it always follows its programming and has absolutely no capability to make a reactive decision other than those it was programmed for. A computer literally can’t stop itself from killing humans. It can’t make a choice for itself. The more access and automation computers possess, the greater scope they have for triggering an unintended catastrophe.
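As a toy illustration (an invented scenario, not a real incident): a heater controller that executes its program flawlessly, yet keeps heating forever once its temperature sensor sticks, because nothing in the program lets it notice that anything is wrong:

```python
SETPOINT = 65.0

def read_sensor(t: int) -> float:
    # The sensor fails at t=3 and sticks at a low reading forever.
    return 20.0 + t * 15.0 if t < 3 else 20.0

actual = 20.0
for t in range(8):
    reading = read_sensor(t)
    heater_on = reading < SETPOINT  # the controller's entire "decision"
    if heater_on:
        actual += 15.0              # the real temperature keeps climbing
    print(f"t={t} reading={reading:.0f} "
          f"heater={'ON' if heater_on else 'off'} actual={actual:.0f}")
# From t=3 on, the heater never shuts off: the program runs exactly as
# written, with no capacity to react outside what it was programmed for.
```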

Since you use Grammarly: Georgia college student used Grammarly, now she is on academic probation

I disagree with the University’s stance. I’d be willing to bet a steak dinner at Ruth’s Chris Steak House that UNG administrators and other staff use Grammarly or other grammar checkers. Moreover, what will they do as Copilot, Gemini, and whatever Apple’s AI will be called are integrated into office suites? My prediction? The University will inevitably modify its policy. I’m still formulating my thoughts on this matter, but I have tentatively distinguished between AI-empowered grammar checking and AI-generated plagiarism.

2 Likes

Sadly, my hunch is that the overwhelming majority won’t be able to distinguish, with many unintended (or intended?) consequences, or even distinguish between real and artificial “intelligence.” We’re seeing a lot of that already in the world, with humans creating false and erroneous “intelligence,” believed by many, with no computer involved (but for using the computer as a word processor).

2 Likes

Indeed, this is precisely my point. I’m far less concerned about a “Terminator” scenario than I am about the already existing, and likely to be accelerated, evil use of AI by evil people. I am perhaps overreacting, but I grow weary of apocalyptic prophecies and cries of “crisis” at every turn. The machines will not kill us, but we are likely to use machines to kill and mislead, and for many other nefarious purposes.

5 Likes

I’ll do more than disagree, and say that the professor really botched it.

First, while there might be reasons to discourage the use of Grammarly in some circumstances (for example, in a first-year writing course where we want students to learn to correct their own grammar), it isn’t inherently wrong to use it.

Second, AI detectors just aren’t sufficiently reliable to penalize a student based on the detector’s evidence alone.

Third, given how common Grammarly is, no action should be taken against a student for using it unless the professor has explicitly disallowed it in either the syllabus or the assignment sheet (and perhaps that’s the case here, but if it is, the student missed it). No one is likely to understand a ban on AI use as including Grammarly — because in the sense we’ve all come to understand “AI” since the release of ChatGPT, Grammarly isn’t AI.

5 Likes

This is an excellent summary of where things stand. Having researched these tools and their underlying concepts, and recently having worked with them a lot, I don’t think the catastrophizing happening elsewhere in this thread is warranted.

However, I disagree with this take:

Sure, LLMs won’t replace high-quality original thought anytime soon (or more specifically, abduction and inference to the best explanation). But they do make it much easier and faster to create certain things (and to do them better). Someone skilled at working with LLMs can manage, delegate to, and supervise them as assistants to do some really cool things. For an example of this, listen to the AI segment in the recent episode of MPU with Jeff Richardson.
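A minimal sketch of that manage/delegate/supervise pattern (hypothetical throughout; ask_llm() is a stand-in for whatever chat-completion API you actually use):

```python
def ask_llm(role: str, task: str) -> str:
    # Placeholder: swap in a real API call (hosted or local model).
    return f"[{role} output for: {task}]"

def delegate(task: str) -> str:
    draft = ask_llm("writer", task)
    critique = ask_llm("editor", f"List the problems in this draft: {draft}")
    revised = ask_llm(
        "writer", f"Revise the draft.\nDraft: {draft}\nNotes: {critique}"
    )
    return revised  # a human still reviews the result before it ships

print(delegate("Summarize this week's support tickets for the team lead."))
```

The human stays in the loop as the manager; the models do the volume work.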

As a result, support people will likely be able to help more people in less time, illustrators will be able to iterate on concepts at a greater scale, and writers can write and edit more. In turn, it’s likely that organizations will need fewer of each of these people. Of course, the flip side of this is that it will be easier to build smaller, more focused organizations.

This kind of democratization tends to increase both volatility and capacity: capacity, because we can literally do more; volatility, because anyone can do more. Hopefully the capacity increases will be enough to counter the more dangerous kinds of volatility.

Note that I’m not talking about the democratization of knowledge here, but knowledge work. I think it’s a subtle but important distinction.

1 Like

This is not democratisation of work. It’s literally describing the opposite. If your suggestion is accurate, then work becomes available to fewer people, and moves more into the domain of privilege.

This is so good I just saved it in my AI research file with complete attribution. :slightly_smiling_face:

1 Like

I don’t follow your logic, sorry.

If an independent illustrator can more easily generate large-scale work, a small software firm can more easily provide high-quality support, and a freelance lawyer can more quickly deep-dive into the relevant case law, then those kinds of work are democratized by these tools.

I’m sure there is truth in this, as there is with any industrial revolution, or in this case technological revolution. Some will be displaced, but there will also be whole new categories of work created. Forbes recently had an article stating that AI represents an entirely new industry. New industries represent novel and expanded opportunities.

You are correct; technology can have unintended consequences, including the loss of life. But what I’m responding to is the fear that AI will become uncontrollable and potentially “intelligent” enough to decide to destroy and kill. I do not find that scenario convincing.

1 Like

A killing machine would not have to actually be intelligent or decide for itself to destroy and kill. Imagine an AI-powered killer responding to a text-based prompt typed on your computer, much as you might request a summary of a set of articles. You would be the one providing the evil directive to destroy and kill using an AI-enhanced machine. And if you really can’t imagine this scenario, read Philip K. Dick’s short story “Second Variety.”