Superintelligence? I don’t think so. Today, I asked AI to perform a very simple task—revise an email I had written that included dates for this week and next. It completed the revision but paired the wrong day of the week with the dates. Worse, it used 2024 dates instead of 2025.
If the paid version of ChatGPT 4.0 can’t even get the days and dates right, then so-called superintelligence is still a long, long way off.
I work with a lot of full-time Machine Learning researchers and none of them think AGI is coming soon. Only people with no clue (such as the ex-metaverse evangelists turned AI “experts”) or people selling AI claim that it’s imminent.
It’s worse than that…none of the machine learning folks I know think there is anything like a “road map” toward AGI from what we know now, partly because there is no clear definition of AGI in the first place. There’s interesting work going on in making ML systems more flexible, better able to respond to the unexpected, and less dependent on huge models (people can solve problems, and even work out ways to communicate with other people when they don’t share a common language, without having to be “trained” on petabytes of data), but that might turn out to be a dead end too, at least in terms of AGI.
AI is just astonishing. I am usually amazed. However, it does sometimes fail spectacularly.
An example: I was recently in hospital and asked ChatGPT to summarize my discharge information in English bullet points, along with any concerns I should look out for in the future. It just made stuff up, and denied having done so when I challenged it…
The current situation and hype for AI reminds me vaguely of other waves of hype in the technology/business world. Businesses once thought that by “moving everything over to the computer” life would be great. Hah! Computerizing inefficient, brain-dead business practices, policies, and procedures does not magically render them efficient. Or, as was said at the time: “Computers are wonderful. In a split second they can perform calculations that 100 accountants would take 100 years to do. Also, in a split second, they can create a mistake that will take 100 accountants 100 years to undo.” Or something like that. As long as AI continues to rely on its vaunted LLMs, and as long as the LLMs are composed mostly of what the world’s people have said and done, I don’t expect any superintelligence to emerge.
This may be a poor analogy, but it occurred to me that Legos can be used as an example of the difference between human intelligence and AI.
Humans imagined and created the concept of Legos—the idea of interlocking pieces that can be assembled into endless forms. We not only designed the individual pieces, but also conceived the entire system. AI, by contrast, can rearrange those pieces in novel ways, but it cannot invent the idea of Legos in the first place. AI has no imagination. It can simulate creativity by recombining what already exists—perhaps in ways that seem new—but it cannot originate truly new concepts. Even its so-called novelty is merely the product of statistical prediction or random variation, not genuine invention.
Be careful, as you appear to be equating LLMs with AI.
While LLMs exhibit limited creativity, they are just one subset of the computer science discipline of AI. And there are AI systems that do exhibit creativity and come up with novel solutions to problems or to game play (chess, Go).
Furthermore, much of human creativity is simply building upon what others have done. As Newton remarked, although it may have just been to insult Hooke, “If I have seen further it is by standing on the shoulders of Giants.”
Yes, I was equating them—thank you for the correction.
As for whether AI systems are genuinely creative, I’m not qualified to make that assessment, so I should be more circumspect. I ought to be careful not to wade into waters I’m not equipped to swim in. Still, I wonder if part of the issue lies in how we define creativity. As you note, most creative work, after all, builds on what came before—no one creates in an intellectual or artistic vacuum.
That said, if the trend of defining superintelligence downward is any indication, we should be cautious not to do the same with creativity.
I read an interesting piece the other day, alas I forget where, discussing the OpenAI–Microsoft arrangement. It seems that Microsoft has exclusive rights to OpenAI’s technology, in that OpenAI cannot sell to anyone but Microsoft, until such time as OpenAI has a model that achieves AGI.
Thus it is in OpenAI’s interest to declare AGI as early as possible, and in Microsoft’s to deny that AGI has been achieved.
Which offers an interesting lens through which to view the comments of Messrs. Altman and Nadella.
This may also shed light on why Apple is not paying OpenAI for ChatGPT integration in the Apple Intelligence offerings, as doing so would violate the “can’t sell to anyone but Microsoft” clause. This, however, is entirely speculation on my part.
I read that piece as well (WSJ?). It gave me the impression that OpenAI and other companies have a financial incentive to define superintelligence down, which may explain why there is no standard, accepted definition.
This article made me laugh. I don’t think AI will replace us or take over the world anytime soon.
The Morning After: Don’t let an AI run a vending machine
Hey, you know those politicians and captains of industry who tell us AI will be running the world in a few years’ time? Turns out one of the most sophisticated models currently in use can’t even operate a vending machine without screwing things up. Anthropic has released findings of a test where it put a chatbot in charge of a “store” (really, some baskets, a small refrigerator and a payment terminal in its office). The ‘bot was told to run the store at a profit, and was in charge of everything including calling in items from a “wholesaler,” who would restock the shelves on its behalf.
You can probably guess what happened next: The bot missed easy opportunities to make a fast buck, handed silly discounts to employees and lost a ton of money. Worse, it ran itself down some odd rabbit holes, like buying tungsten cubes and then giving them away for free. It hallucinated payment details, tried to fire the humans who helped restock its shelves and attempted to contact building security, insisting that it had a flesh-and-blood body. Naturally, Anthropic says that this experiment was a great success, and it knows what to do next time to prevent the AI from turning us all into paperclips.
Sorry to quote Gary Marcus (“Rebooting AI”) again, but I laughed out loud when he listed the many ways to defend yourself from AI (Terminator-style) robots trying to take over the world, starting with closing the nearest door and possibly locking it, then walking upstairs. Current robots can’t turn doorknobs, unlock a door, or go up stairs as anything like a generalised task, though a few can be specifically programmed to deal with specific doors and stairs. It’s the issue of how poor current AI is at solving any kind of problem: “robots” that are really remote-controlled devices with a human “pilot” can do lots of things like this mechanically; they just can’t do it for themselves.
It reminded me of a cartoon from years ago showing a Dalek (from Doctor Who) angry with humans for unfairly using stairs in their buildings.