An Interesting Take on LLMs

It’s not that difficult to understand what @pantulis meant by the play on words. People have different interpretations of “intelligence.” I mean, the linked article basically says exactly the same thing at the very beginning.

Speaking of which, that article, intentionally or not, seems to avoid one item critical to the discussion (I’ll admit I read the first part closely but only scanned the latter half or so, so I may have missed it), and that is a definition of “intelligence.” The closest I found was the author’s contention that since “LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think,” we should accept that intelligence requires a lump of grey organic matter, the mechanisms animals or people use, and the capacity for thinking and reasoning.

But the thesis kind of depends on the definition, yes? Everybody needs to be on the same page. If I right-click on the word, Apple tells me it means “the ability to acquire and apply knowledge and skills.” It’s at least arguable that LLMs can do that, in some form.

I know it’s hard to walk the line between constructive conversation and arguing semantics, as I regularly trip over it. I am not trying to do that now. I just got interested in the discrepancy between @karlnyhus’s reference to enthusiasm, skepticism, and usefulness, and @pantulis’s follow-up.

Fun discussion, either way.

I am absolutely not an AI enthusiast; at best I’m an open-minded skeptic. But I will point out that this is the case whether you’re using an AI, relying on another person (or people), or doing it yourself (for whatever “it” is).

Yes, but there is a significant difference. If I ask a question on this forum, I can reasonably expect the answer will be given in good faith and be accurate to the best of the answerer’s knowledge. This is generally true when interacting with humans.

This is not true of an LLM, because the LLM has no understanding of the words it is stringing together.

The problem is that many people, strongly encouraged by those who stand to profit, extend the same assumption of good faith to LLMs that they grant to other humans, when it is unwarranted.