“A new study just upended AI safety”

When everything is AI, nothing is AI.

How so?

To that point, a passage from Arvind Narayanan and Sayash Kapoor’s book AI Snake Oil:

There’s a humorous AI definition that’s worth mentioning, because it reveals an important point: “AI is whatever hasn’t been done yet.” In other words, once an application starts working reliably, it fades into the background and people take it for granted, so it’s no longer thought of as AI. There are many examples: Robot vacuum cleaners like the Roomba. Autopilot in planes. Autocomplete on our phones. Handwriting recognition. Speech recognition. Spam filtering. Spell-check. Yes, there was a time when spell-check was considered a hard problem!

Note: the authors most definitely do not think all AI is snake oil, though the book’s title might give that impression. Here’s their definition of the term:

AI snake oil is AI that does not and cannot work, like the hiring video analysis software that originally motivated the research that led to this book. The goal of this book is to identify AI snake oil—and to distinguish it from AI that can work well if used in the right ways. While some cases of snake oil are clear cut, the boundaries are a bit fuzzy. In many cases, AI works to some extent but is accompanied by exaggerated claims by the companies selling it. That hype leads to overreliance, such as using AI as a replacement for human expertise instead of as a way to augment it.

You can download a PDF of the book’s introduction here.

1 Like

At one time, when I was a child, before “Direct Distance Dialing”, to make a long-distance call you would call the operator, tell her the city and phone number you wanted, and then wait while she, and other operators, made the connection. The operators had the intelligence to make the connection. Now the operators have been made obsolete; computers do the work, taking no breaks, working 24/7. When my parents were children, telephones had no dials and an operator was needed to make the connection on every call. Automated switching equipment eliminated the need for operators on local calls.

Yes, I know that story. But I do not see where AI is involved. Automation is not AI.

1 Like

I’m old enough to remember my grandmother’s party line

Indeed! Below is a clip from the Andy Griffith Show. For some reason I can’t get the preview to show correctly, but the link works.

It’s funny!

2 Likes

Some families still had party lines when I was in high school. Something you would keep in mind when you asked a young lady for a date.

1 Like

LLMs seem to be a dead-end technology: useful at a few things, flawed at far more:

(Caveat: this is my professional writing, so this is a form of self-promotion.)

1 Like

What is viewed as Artificial Intelligence has changed over time. Being an operator required voice recognition, speech synthesis, knowledge of geography and the telephone network, and of how to use a switchboard. All of this has been taken over by machines over the years.

The Dial Comes To Town (1940)

Still doesn’t make it AI.

1 Like

As I recall, the first rotary phones sent out electrical pulses as you dialed a number: one pulse when you dialed a 1, two pulses for a 2, and so on, up to ten pulses for the 0.

My uncle was in the Signal Corps in WWII and worked for a phone company when he returned home. Years later we were talking, and he explained that as soon as you started dialing you captured a circuit and, of course, the call didn’t connect until you finished dialing.

So larger cities were assigned a “short pull”, i.e. a prefix with lower digits, which meant it took less time to dial a number in those area codes. That’s why New York City has the 212 area code and Los Angeles the 213 area code.
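To make the pulse arithmetic concrete, here is a minimal Python sketch (my own illustration, not anything from the phone company) that totals the pulses a rotary dial would send for a given prefix, assuming one pulse per unit of each digit and ten for a 0:

```python
def pulse_count(number: str) -> int:
    """Total pulses a rotary dial sends for the given digits (0 = ten pulses)."""
    return sum(10 if d == "0" else int(d) for d in number if d.isdigit())

# Lower digits meant fewer pulses, hence faster dialing of big-city prefixes:
print(pulse_count("212"))  # New York City -> 5 pulses
print(pulse_count("213"))  # Los Angeles   -> 6 pulses
print(pulse_count("907"))  # a high-digit code like Alaska's -> 26 pulses
```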

He was an interesting guy. He also told me how they used explosives to install a telephone pole in sandy soil. :grinning:

This thread illustrates one of the difficulties in this field. There is no comprehensive and clear definition of “Artificial Intelligence”.

Computerised telephone exchanges are definitely AI in the sense of one of the common definitions (using computers to perform tasks that previously required human intelligence, as well as or better than humans can), but not AI if your definition requires AI to be “non-deterministic”, e.g. able to solve novel problems for itself.

I like the idea that AI is what hasn’t been done yet. When things work reliably, they stop being thought of as AI and become part of the technology.

1 Like

The term “AI” is pure marketing and its meaning drifts over time, a moving target.

When IBM’s Deep Blue beat Kasparov in chess in 1997, we called that AI. Today it is just a computer game. Being able to talk to your computer and have it understand you used to be AI as well. It’s now reliable enough to simply be a feature of the OS.

“Deep Blue’s win was seen as symbolically significant, a sign that artificial intelligence was catching up to human intelligence, and could defeat one of humanity’s great intellectual champions.” - Wikipedia

1 Like

No. “AI” is not a “pure marketing term”. It is a subfield of Computer Science. Deep Blue is a perfect example of AI, and it is still AI. That things become commonplace does not change what they are, although it does change the layman’s impression. Newtonian Mechanics is still physics, even though it is centuries old and has been superseded by General Relativity.

As @karlnyhus pointed out above, when everything is AI, then nothing is AI. And the definition used by many in this thread is vacuous, as anything a computer does can be called AI. Calculators can add, subtract, multiply, divide, and more, all things that require intelligence in humans. Thus, by this definition, calculators are AI.

Just because something used to be done by a human and is now done by a computer does not mean it is AI.

You are of course right, @MevetS, and most of us here are computer nerds, some even proper “seniors” by now. However, few young people answering “sure, I use AI on a daily basis” are considering the calculator or even the smartphone camera system.

My comment was more geared towards the current AI hype, which seems to lack all historical memory and an accepted common definition. In general public perception, AI is (currently) confined to various implementations of LLMs and prompt-based media generation of images, audio, and video.

LLMs are surprisingly capable in many text applications, while also hopelessly incompetent in that they won’t (can’t?) call out to a math API for a calculation or run a regex search to correctly return the number of R’s in “strawberry”.
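To show how trivial that task is once it is handed to a deterministic tool, here is a minimal Python sketch (purely my own illustration; the function name is made up) of the kind of call an LLM could delegate to instead of guessing:

```python
import re

def count_letter(text: str, letter: str) -> int:
    """Count case-insensitive occurrences of `letter` in `text` with a regex."""
    return len(re.findall(re.escape(letter), text, flags=re.IGNORECASE))

print(count_letter("strawberry", "r"))  # -> 3
```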

The marketing hype is driven by the massive amount of capital already invested with an expectation to bring huge returns. We’re still waiting to discover whether consumers and businesses will find these services valuable enough to pay what’s needed to produce that ROI.

:balloon:

3 Likes

Even for some areas of text generation, they hit real problems and limitations quickly.

My own work:

1 Like