Can AI / LLMs really reason? A view from Apple researchers

I stumbled across this article I thought some of you here would find interesting. It is based on the findings of Apple researchers who asked the question “can LLMs really reason?” (TL;DR: the conclusion was no, not yet). The paper itself is academic and very mathematical (way beyond my capabilities), but I think there are many here who would understand and enjoy it.

Here is the original article:

An interesting thread on X.com:

The actual paper:
https://arxiv.org/pdf/2410.05229

1 Like

Genuine intelligence, in my understanding, involves the ability to reflect on one’s thoughts, evaluate arguments, and adjust conclusions based on new information or self-correction. Self-consciousness enables an individual to recognize themselves as a thinker, allowing for metacognition. René Descartes emphasized the importance of self-consciousness in reasoning with his famous assertion “Cogito, ergo sum” (“I think, therefore I am”). Descartes argued that the capacity to doubt or contemplate one’s existence stems from a form of reasoning rooted in self-awareness. According to Descartes’ argument, genuine intelligence is inherently linked to self-awareness, a quality lacking in AI.

Within Judeo-Christian theology and philosophy, including epistemology, this self-awareness, or metacognition, is considered a characteristic of what is commonly referred to as the soul. It is a unique attribute granted to humans, which, along with awareness of morality and beauty (axiology), is what fundamentally defines humans. Artificial intelligence can simulate the functional aspects of intelligence, such as solving complex problems, learning, adapting to new information, and mimicking human cognitive processes. AI can perform tasks that require sophisticated pattern recognition and decision-making. However, this does not involve self-consciousness or metacognition—AI can execute tasks that resemble intelligent behavior without subjective experience or self-awareness.

AI may continue to improve at replicating human general intelligence, but in my view, it will never achieve it. In short, AI mimics the brain, but not the mind. I believe there is a symbiotic relationship between the brain and the mind that touches on much deeper issues than are appropriate for this forum. Hence, I’m not worried about AI initiating an apocalypse. Humans may use AI to initiate an apocalypse, but it won’t be self-generated from within AI. We are our own worst enemy, not AI. :slightly_smiling_face:

8 Likes

I couldn’t agree more. Perfectly stated. That’s why the discussion of AI podcasts is ridiculous. The technology is interesting, but I will not waste one minute listening to two computers argue when humans do such a good job of it.

3 Likes

Indeed, I have zero interest in listening to a computer “talk to itself” while I listen in. I want to hear from humans. I’ll use AI like I use other technology tools, but AI will not replace people in my listening habits. The same goes for my reading. I have zero interest in someone posting an article generated by AI. Nor will I use AI as my ghostwriter. I will use it to edit and perhaps refine, but not to write on my behalf.

2 Likes

I agree, almost entirely.

It depends. I listen to a number of thoughtful podcasts made by humans. However, I am unlikely to find a podcast discussing 30 articles I’ve curated on a subject, so in that instance I would listen to a computer “talk to itself.”

Which assumes, of course, that the AI is not hallucinating as it creates the dialogue from the documents. Then again, I’ve listened to many podcasters (I’m NOT referring to MPU) who seem to be hallucinating as well. :slightly_smiling_face:

1 Like

Genuine intelligence also involves conscience – moral recognition of a certain level of right and wrong. Conscience is not infallible but inherent within mankind. AI has no moral consciousness. AI also has no inherent sense of creator/Creator. In Christian terms, AI does not bear the image of God.

2 Likes

That is well said. Have you come across the theory that LLMs are inhabited by demons, though? :slight_smile: Angels and fallen angels are generally intelligent even though they don’t have souls (in the trichotomic distinction between soul and spirit, anyway).

Kidding aside, the technique in this paper seems sound. It tries to measure the performance variations in LLMs that a symbolic reasoning engine wouldn’t exhibit, variations that were already known. Nothing alarming or revelatory, but a good contribution to understanding the limits of LLMs and of benchmark techniques.
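
For the curious, here is a minimal sketch of that idea as I understand it, in Python. This is my own toy illustration, not the paper’s actual harness: the template, the names, and the `ask_llm` callable are all hypothetical placeholders for whatever model you would query.

```python
import random

# Toy illustration of the idea (my sketch, not the paper's code):
# instantiate one word-problem template with varying names and numbers.
# A symbolic solver would score identically on every variant; the paper
# reports that LLM accuracy drifts as these surface details change.

TEMPLATE = ("{name} picks {x} apples on Monday and {y} more on Tuesday. "
            "How many apples does {name} have in total?")

def make_variant(rng):
    """Return a (prompt, ground_truth) pair for one random instantiation."""
    name = rng.choice(["Ava", "Liam", "Noor", "Kenji"])
    x, y = rng.randint(2, 50), rng.randint(2, 50)
    return TEMPLATE.format(name=name, x=x, y=y), x + y

def accuracy_over_variants(ask_llm, n=100, seed=0):
    """ask_llm is a placeholder: a callable that takes a prompt string
    and returns an integer answer from whatever model you're testing."""
    rng = random.Random(seed)
    variants = [make_variant(rng) for _ in range(n)]
    correct = sum(ask_llm(prompt) == answer for prompt, answer in variants)
    return correct / n
```

If I’m reading the paper right, it also adds plausible-sounding but irrelevant clauses to problems (a variant they call GSM-NoOp) and finds that accuracy drops further, which is exactly the kind of brittleness a symbolic engine wouldn’t show.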

3 Likes

Have the horses already left the barn? Though AI cannot reason, it gives the impression that it can. As such, it is already being incorporated into various applications (e.g., electronic health records). We’re being told to double-check its output for accuracy, but in my experience, when someone else is doing the work for busy, lazy humans, eventually the humans stop checking, accept the output as truth, and move on with their lives.

2 Likes

No, but I wouldn’t be surprised if conspiracy theorists, and those prone to being swayed by conspiracies, promoted such nonsense! Maybe if you listen to AI backwards? :slightly_smiling_face:

1 Like

Indeed, hence my statement: “It is a unique attribute granted to humans, which, along with awareness of morality and beauty (axiology) …” :slightly_smiling_face:

Which, sadly, is also true of how many people respond to what is in the news, entertainment, and social media.

50% of the human population has below-average intelligence.

I wonder if we’re comparing LLMs and AI with the wrong half of the population.

There’s a brilliant, longish thread by Paul Cantrell on Mastodon about how programming code differs from human laws, and about what LLMs can and can’t do.

Highly recommended. Starts here:

3 Likes