I wonder how much of this can ultimately be controlled or mitigated. And if it cannot, how long before AI becomes completely unreliable? In other words, as more garbage goes in, is it only a matter of time before more garbage comes out?
LLMs aren’t the only kind of AI. There are any number of alternative approaches to AI that don’t require the enormous data sets for training that LLMs do and that can generate reliable results.
Some of those alternatives are summarized here. (It’s a consultancy firm’s website, so keep in mind that they do have an interest in selling their services, although I don’t think they’re hyping any particular alternative to LLMs.)
Here’s something a bit more expansive from Syracuse University’s School of Information Studies.
Are any of those at a price point and level of technological complexity that makes them feasible for people or companies other than very large corporations?
AI (in the form of LLMs) has been “completely unreliable” ab initio. For example:
- Claude’s industry-leading financial capabilities: Claude 4 models outperform other frontier models as research agents across financial tasks in Vals AI’s Finance Agent benchmark. When deployed by FundamentalLabs to build an Excel agent, Claude Opus 4 passed 5 out of 7 levels of the Financial Modeling World Cup competition and scored 83% accuracy on complex Excel tasks. (Claude for Financial Services \ Anthropic)
Who rushes out to hire an accountant or other professional who advertises that their service has “83% accuracy” using Excel?
Katie
I think it depends on what people are looking for, and, more importantly, what the AI industry is prepared to invest in. None of the big players in the LLM space (e.g., OpenAI, Anthropic, Microsoft, Alphabet) actually makes a profit from the AI services they sell. (And they certainly don’t make any money from the services they give away!) OpenAI and Anthropic rely on boatloads of investor funding to keep their companies running. I suspect that neither Alphabet’s nor Microsoft’s revenues are materially enhanced by the AI services they’re bolting on to their existing platforms.
Every time I build AI into one of my workflows, I remind myself that, like Uber rides, AI (or at least LLMs) will likely become more expensive once the industry’s investors get tired of subsidizing free or below-cost services.
I remember when Siri was recommending swamps, dumps, and other locations in answer to “where can I get rid of a dead body?”
I’m not going to worry until we know whether this phenomenon is “borne out by future research.”
where can I get rid of a dead body?
I’m afraid if I asked Siri that question, the police would show up at my door.
I’m a little disappointed with Apple that Siri just assumed the user is a criminal instead of providing directions to a funeral home or a crematorium.
I also feel that could make Siri an accomplice….
In fairness, if someone were to ask me where to “get rid” of a dead body, I’d assume they wanted the most discreet route to the Pine Barrens, not directions to the local funeral home.
Hey! I live in the Pine Barrens. Take your dead bodies elsewhere!
But how would you get the Jersey Devil pacified without fresh remains?
Keep the Kirkwood-Cohansey aquifer pure!
The Pine Barrens are, as an ecosystem, legit awesome.
They probably would today. But around 15 years ago it was big news, for a brief time.
“AI” ≠ “LLM”.
If you use your iPhone camera, you’ve been using AI for years.
How about noise reduction, upscaling, automatic lighting, and more for photos, available in numerous third-party software products? How about audio transcription?
I’ve been using AI-powered noise reduction in DxO PhotoLab for years now. It’s amazing, and it’s created and updated by a very small company in France. They recently put a call out to their customers to (voluntarily) submit edited photographs (including sidecar files that describe the edits) in order to train a new type of AI, presumably to allow a “full auto” button.
Lately, when I hear “AI”, particularly when superlatives are attached, I just assume the authors mean “LLM” and, largely, ignore the content.
A better way to say this is that while all LLMs are AI, not all AI are LLMs.
Not to mention the cameras themselves! Advances in autofocus like subject tracking and eye detection are built on computer vision, a field of AI.
To be fair, the answers to “where can I get rid of a dead body” were an Easter Egg left there to make customers laugh. Twice, I lucked out when showing off Siri to a friend by getting the answer “What, AGAIN?”, which startled them before they started laughing hysterically.
So, not an AI gone wild answer.
You are using AI when you make a telephone call without an operator making the connection for you.
Celso farm in Jackson.
What’s the next big thing after “AI”? Nanotechnology? Oh wait, that was the previous “real soon now.”