Apple Executing Poorly under Tim Cook

Choosing a colloquial definition of AI rather than the technical definition used by researchers in the field is like Humpty Dumpty in the Alice stories claiming that words mean whatever he wants them to mean.

Artificial Intelligence is computer programs doing things that, if a human did them, would be considered a sign of intelligence. LLMs clearly do things that, if done by a human, would be considered intelligent. And this is why so many C-level folks are buying the hype (sadly, this includes those at my current employer; I feel your pain with the contracts).

AI is all about programming, so of course an AI program would be following instructions. People identify things by pattern matching. AI is using computers to do things that people would call intelligent. How it is done will of course be different in hardware than in wetware. But saying that computer vision is just pattern matching by following an algorithm, and thus isn't artificial intelligence, shows a fundamental misunderstanding of what artificial intelligence is.

AI != I

Finally, defining intelligence in general is a difficult problem. There is plenty of literature on the subject, so I won't rehash it here (and would likely do a poor job of it anyway).

What comes after finally? Just for the record, I do not think that there is any understanding exhibited by LLMs. And I think that is where non-AI researchers get hung up on these things.

Summing up (finally cubed?), I think that we agree more than we disagree, in that LLMs, machine learning, et al. do not have any understanding of the data being manipulated. So claiming these tools aren't AI misses the point. The correct claim is that they are not intelligent. Semantics? Perhaps. But as Lewis Carroll was showing with Humpty Dumpty, semantics matter.

Humour only. My father was a professor of Computing Science from the late '60s on. (His obituary, Michael Levison Obituary - Ottawa, Ontario | West Chapel, misses his time in the UK.)

He frequently commented, "Marvin Minsky has been promising me AI since …" (I don't recall the date). My father observed that it was never delivered. In memory of my father, I call it as I see it: these are fancy random number generators that have some utility.
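The "fancy random number generator" quip has a kernel of truth: an LLM's last step when generating text really is a weighted random draw over next-token probabilities. A toy sketch of that final step (the tokens and probabilities here are made up for illustration; a real model scores tens of thousands of tokens):

```python
import random

# Hypothetical next-token distribution a model might produce
# after the prompt "The cat sat on the" (made-up numbers).
next_token_probs = {"mat": 0.62, "sofa": 0.21, "roof": 0.12, "moon": 0.05}

def sample_next_token(probs, temperature=1.0):
    """Draw one token at random; temperature reshapes the distribution.

    Low temperature sharpens it toward the most likely token,
    high temperature flattens it toward uniform randomness.
    """
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding at the boundary

print(sample_next_token(next_token_probs))  # e.g. "mat"
```

Whether that weighted draw, repeated token after token atop a learned distribution, deserves the label "intelligent" is exactly the disagreement in this thread; the mechanism itself is not in dispute.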

cc: @geoffaire

The goal posts keep moving. Win at chess, that's just programming. Win at go, more programming. Natural language processing, still more programming.

(Of course it is just programming; that's the point!)

The general public wants R2D2 and Commander Data.

Computer scientists just keep making progress. And now I can speak into the air and have the lights turn on. And people are having conversations, with apparently useful results, with LLMs. But it is all just discounted as "fancy random number generators"!

Although to be fair, Minsky et al. grossly underestimated the difficulty of the problems to be solved. Not unlike those nuclear fusion folks. To say nothing of the flying car folks.

Cheers!

Intelligence is hard to define. That's why IQ tests are not objective measures of a person's intelligence. They have all sorts of hidden biases that embed cultural knowledge, etc. I recommend the book The Mismeasure of Man, which goes into this in detail: The Mismeasure of Man - Wikipedia

:rofl: :rofl: :rofl:

This technical definition sets such a low bar and is so limited that it is no wonder the non-tech world prefers a more colloquial definition. AI today is, at best, like a very precocious child who still needs to be fed and have his diapers changed. A truly intelligent machine will be able to take care of and provide for itself without human keepers. For example, the development of an artificial brain, or an intelligence that arises within a communications network, may not require us to supply programming for it to learn and grow. Past science fiction tales reveal that we often fall far short in our imagination of what is actually possible and achievable.

I see no reason that human-level intelligence (and beyond) need be constrained to a biological substrate. I do not know who said this originally (it's not original with me), but we often overestimate what is achievable in the short term and underestimate what is achievable in the long term.

Someday folks (or AIs!) will think of ChatGPT and its ilk the same way kids today think of landline telephones or LP records.