It’s not that difficult to understand what @pantulis meant by the play on words. People have different interpretations of “intelligence.” I mean, the linked article basically says exactly the same thing at the very beginning.
Speaking of which, that article seems, intentionally or not, to avoid one item critical to the discussion (I’ll admit I read the first part but only scanned the latter half or so, so I may have missed it): a definition of “intelligence.” The closest I found was his contention that since “LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think,” we need to accept that intelligence requires a lump of grey organic matter and the mechanisms that animals or people use, and that intelligence requires thinking and reasoning.
But the thesis kind of depends on the definition, yes? Everybody needs to be on the same page. If I right click on the word, Apple tells me that it means “the ability to acquire and apply knowledge and skills.” It’s at least arguable that LLMs can do that, in some form.
I know it’s hard to walk the line between constructive conversation and semantics, as I regularly trip over it. I am not trying to do that now. I just got interested in the discrepancy between @karlnyhus’s reference to enthusiasm, skepticism, and usefulness, and @pantulis’s follow-up.
I am absolutely not an AI enthusiast; at best I’m an open-minded skeptic, but I will point out that this is the case whether you’re using an AI, relying on another person (or people), or doing it yourself (for whatever “it” is).
Yes, but there is a significant difference. If I ask a question on this forum, I can reasonably expect the answer will be given in good faith and be accurate to the best of the answerer’s knowledge. This is generally true when interacting with humans.
This is not true of an LLM, because the LLM has no understanding of the words it is stringing together.
The problem is that many people, strongly encouraged by those who stand to profit, extend the same assumption of good faith and accuracy to LLMs that they give to other humans, when it is unwarranted.
I would like to point out that, in a sense, they do understand the words. LLMs work because they are built on distributional semantics, i.e. the idea that the meaning of a word can be inferred from the words that accompany it. Vector embeddings can capture the meaning of a word in that sense, which is why an LLM’s output is not total gibberish; on the contrary, it can be difficult to tell whether an answer comes from a human or a machine. (The toy sketch below illustrates the idea.)
But of course, in another sense, which is where you were coming from, machines do not understand the words they string together.
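To make the distributional-semantics point concrete, here is a toy sketch in Python. It is my own illustration, not how any real LLM is built: it just counts which words appear near each other in a handful of made-up sentences and shows that words used in similar contexts (“dog” and “cat”) end up with more similar vectors than words used in different contexts (“dog” and “red”).

```python
# Toy distributional semantics: represent each word by the counts of the
# words that appear near it. Words used in similar contexts end up with
# similar vectors. (Real LLM embeddings are learned, dense, and trained on
# enormous corpora, but the intuition is the same.)
from collections import Counter, defaultdict
import math

corpus = [
    "the dog chased the ball in the park",
    "the cat chased the mouse in the house",
    "the dog slept on the warm rug",
    "the cat slept on the warm sofa",
    "the sky turned red at sunset",
    "the barn was painted red last summer",
]

window = 2  # neighbours on each side that count as "context"
cooc = defaultdict(Counter)

for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[word][tokens[j]] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    # Context-count vector over the whole vocabulary.
    return [cooc[word][w] for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

for w1, w2 in [("dog", "cat"), ("dog", "red"), ("cat", "red")]:
    print(f"{w1} ~ {w2}: {cosine(vector(w1), vector(w2)):.2f}")
```

Run it and “dog”/“cat” come out far more similar than either does to “red”, purely from the company the words keep; nothing in the program knows what a dog is.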
That is a new use of the word “understand” with which I am unfamiliar.
LLMs have statistical links between how words appear in texts. LLMs do not know what “joy”, or “red”, or “dog” mean. So yes, while humans can infer the meaning of a word from context (the “links”), that is because they have some base understanding of a core set of words, and of the objects in the world those words refer to. LLMs have no such base understanding.
There is a rich literature on this topic. Look up “Searle’s Chinese Room” if you are curious.
Edit: I think it is important to be very precise with the terminology used to describe LLMs and AI in general. The word “understand” carries a lot of unconscious assumptions which apply to humans (or animals) but are not really applicable to LLMs. (Which I think is why a number of folks on this forum eschew the term “AI” when discussing these tools.) Saying an LLM “understands” brings those assumptions along with it, leading many to ascribe human-level abilities that don’t really exist.
I’ve found it is more useful the longer the text you are writing is. Shorter things, like a text message, are not a good fit for this type of tool.