This paragraph supports my stance that current “AI” products aren’t actually AI: they simply use pattern matching, i.e. they follow their programming.
“The fragility highlighted in these new results helps support previous research suggesting that LLMs’ use of probabilistic pattern matching is missing the formal understanding of underlying concepts needed for truly reliable mathematical reasoning capabilities. ‘Current LLMs are not capable of genuine logical reasoning,’ the researchers hypothesize based on these results. ‘Instead, they attempt to replicate the reasoning steps observed in their training data.’”
Yes, this is a very good paper. And it reinforces the fact that these systems are not intelligent.
There is already a discussion here:
This is a textbook example of AI: a process done by a computer that would be considered intelligent if done by a human. Again, we agree that LLMs are not intelligent.
And I would not be so dismissive of “pattern matching.” While there is much we do not know about human intelligence (and biologically based intelligence in general), pattern matching does appear to be an important component of such systems. It is, of course, the “processing modules” that operate after pattern matching has occurred that differentiate biologically based intelligence from current state-of-the-art AI systems.
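The fragility the paper describes can be caricatured with a toy sketch (my own illustration, not anything from the paper itself): a “solver” that merely recalls answers to question strings it has seen before breaks the moment one number changes, while a solver that actually computes does not.

```python
import re

# Hypothetical "training data": question strings mapped to answers.
train = {"What is 2 + 3?": "5", "What is 7 + 1?": "8"}

def pattern_matcher(question):
    # Pure surface-level recall: return the memorized answer for an
    # exact string match, else fall back to a memorized answer.
    return train.get(question, "5")

def symbolic_solver(question):
    # Actually parse the two operands and add them.
    a, b = map(int, re.findall(r"\d+", question))
    return str(a + b)

# On a seen question, both agree.
assert pattern_matcher("What is 2 + 3?") == "5"
assert symbolic_solver("What is 2 + 3?") == "5"

# Change one number: recall fails, computation does not.
assert pattern_matcher("What is 2 + 4?") == "5"   # wrong answer
assert symbolic_solver("What is 2 + 4?") == "6"   # correct answer
```

This is obviously a caricature; real LLMs interpolate rather than do exact lookup. But it makes the distinction concrete: reliability under small perturbations is what separates computing an answer from recalling one.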
Sadly, most people, including the CEO of my company, do not have even a layman’s understanding of LLMs. Worse, there is a multitude of snake-oil sales reps peddling “AI” solutions that are just problems in disguise.
Just to be clear, I use these tools and find them useful. But the huckster hype is overwhelming.