Selling AI Snake Oil - “Superintelligence”

WSJ:

The title of a fresh paper from Apple says it all: “The Illusion of Thinking.” In it, a half-dozen top researchers probed reasoning models—large language models that “think” about problems longer, across many steps—from the leading AI labs, including OpenAI, DeepSeek and Anthropic. They found little evidence that these are capable of reasoning anywhere close to the level their makers claim.

Research from Apple:

2 Likes

I think this is correct, but it is largely a discussion about semantics (which is not to say that semantics is unimportant).

Reaching superintelligence will either require substantial architectural changes, such as incorporating symbolic reasoning… or it will emerge through sheer scale: more training and more parameters. Maybe it's 3, 5, or 10 years down the line.

The urgent matter at hand is that the current wave of models, applications and, more importantly, enterprise platforms is already poised to disrupt whole sectors of the economy, and thus our societies. Will it make us more productive and generate a boom similar to the Internet, or will it make most of us redundant and unemployed? Will it be both? How will we manage the divide?

1 Like

There’s been more than a little pushback on this paper. As Benj Edwards notes in an Ars Technica article discussing the paper and the response to it:

[T]he results of the Apple study have so far been explosive in the AI community. Generative AI is a controversial topic, with many people gravitating toward extreme positions in an ongoing ideological battle over the models’ general utility. Many proponents of generative AI have contested the Apple results, while critics have latched onto the study as a definitive knockout blow for LLM credibility.

This video by math professor Dr. Trefor Bazett provides a straightforward overview of the paper's methodology and results, and what they suggest (and don't).

This video addresses some of the study’s methodological constraints.

1 Like

Analysis of the pushback:

And his thoughts on the paper:

1 Like

Just to add some perspectives to the discussion, here are two books taking critical stances against generative AI. Both are very well researched and reasoned. (I am still reading “The AI Con”, but the highlighter is already getting a lot of use.) The first focuses a bit more on the risks of relying on the technology; the second, I'd say, takes a wider, societal view as well. Both are well worth reading.

The Intelligence Illusion by Icelandic writer Baldur Bjarnason

“It details, in depth, the risks that come from using generative models at work, with approachable high-level explanations of the flaws inherent in its design.”

The AI Con by Emily Bender and Alex Hanna

A smart, incisive look at the technologies sold as artificial intelligence, the drawbacks and pitfalls of technology sold under this banner, and why it’s crucial to recognize the many ways in which AI hype covers for a small set of power-hungry actors at work and in the world.

It is worth remembering that the loudest proponents of generative AI come from the companies that have already sunk billions into its development. They are extremely incentivised to turn it into a mass-market product that consumers and companies feel is worth paying for. Always be mindful of who is behind the claims about the unprecedented breakthroughs we will definitely see (always in the near future).

The drumbeat of “it will only get better with time” was indeed borne out for the early web, streaming services and smartphones. It was not borne out for blockchain, NFTs, cryptocurrencies, the Metaverse or any other over-hyped but not-fit-for-purpose technology out there.

4 Likes