Count me in.
Those who fail to study history…
"In the late 19th and early 20th centuries, the advent of automobiles faced significant skepticism. Many believed that cars were noisy, dangerous, and unnecessary, especially when horses had been a reliable mode of transport for centuries."
"When typewriters were introduced, they were met with resistance from traditionalists. Critics argued that typed documents lacked the personal touch of handwritten ones and even raised concerns about potential health issues, such as eye strain from reading typed text."
"In early 19th-century England, the Luddite movement emerged as a response to the mechanization of the textile industry. Skilled artisans destroyed machinery that they believed threatened their livelihoods."
(Oh, I used ChatGPT to find quotes about resistance to technological innovations of the past…)
Radioactive materials used to be added to a wide range of high-end beauty products. You probably know why that's no longer the case.
In the early stages of the Industrial Revolution, workers from children to the elderly endured grueling 16-hour shifts alongside fancy, state-of-the-art machinery. You most likely understand why we now have strict laws limiting who can work with machines, at what times, and under what mandatory conditions.
Do you have a friend who can certainly afford an automobile but nevertheless elects to go to work by foot, bicycle or public transport? Do you know another friend who keeps a beautiful handwritten journal or notebook? You probably do.
You don't need ChatGPT for any of this information.
That's why some people don't use AI.
History is forgetful. Barely anyone remembers the innovations that were once trendy but were eventually abandoned because of fatal flaws. But such innovations did exist, then and now.
I would compare today's generative AI with the piston-engine aircraft. Both were revolutionary and impressive in their capabilities. Both required another major revolution to address concerns about cost, safety and reliability before they could be ready for more widespread adoption.
The longer my company's competitors don't use AI, the better, as far as I'm concerned.
"Count me in" - sorry to ask, but are you "in" for AI, or "in" for refusing to use AI, Jim?
Just asking for clarity - either option is good!
Steve Jobs - 1981:
I remember uh reading an article when I was about 12 years old. I think it might have been in Scientific American where they measured the efficiency of locomotion for all these species on planet Earth. Uh how many kilocalories did they expend to get from point A to point B?
And the condor won uh came in at the top of the list, uh surpassed everything else.
And humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.
But somebody there had the imagination to test the efficiency of a human riding a bicycle.
Human riding a bicycle blew away the condor all the way off the top of the list.
And it it made a really big impression on me that we humans are tool builders. And that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes.
And so for me, a computer has always been a bicycle of the mind.
Uh something that that takes us far beyond our inherent abilities. And uh I think we're just at the early stages of this tool, very early stages. And we've come only a very short distance and it's still in its formation, but already we've seen enormous changes.
I think that's nothing compared to what's coming in the next hundred years.
First off, I can't help but wonder what would happen if we taught condors to ride bicycles.
Second, that was 45 years ago.
Third, it's not as poetic, but AI feels (to me) like an e-bike for the mind.
It looks like they interviewed two people who don't use AI about not using AI, which is a bit … weird? The third person had used it and found it valuable.
This week, I used ChatGPT to solve a massive personal/professional problem. It took about 4 hours, 2 of which were in advanced voice mode.
It started off with me asking it:
"Hey, you know a lot about me, maybe stuff I don't know, what are my blind spots?"
We then spent 3h59m untangling those blind spots, and figuring out how to tackle them.
I don't often say that things are life-changing, but this was.
Given that the first two interviewees have decided not to use AI, they won't have experienced this kind of help, so I don't really value their opinions all that much.
It seems to me that the issue is not "Luddite vs. Tech Fanboy." I think the issue that is brought up in the article, and in many other places, is authenticity.
At the very beginning of the article, the reporter quotes Zetteler:
"I read a really great phrase recently that said something along the lines of āwhy would I bother to read something someone couldnāt be bothered to writeā and that is such a powerful statement and one that aligns absolutely with my views ⦠"Whatās the point of sending something we didnāt write, reading a newspaper written by bots, listening to a song created by AI, or me making a bit more money by sacking my administrator who has four kids?ā
I believe it is both possible and desirable to appreciate and use AI, as @Clarke_Ching aptly put it, as an "e-bike of the mind," while also refusing to rely on it as a substitute for one's own work and thinking. I don't want our students doing that, and I refuse to do it myself. I agree with Zetteler: if someone isn't willing to take the time to compose a thoughtful email, I'm not inclined to take the time to read it (and it is not difficult to recognize an AI-generated email). I have no objection to using AI as an editor, but I draw the line at using it as a ghostwriter.
For years, my mantra regarding technology has been: "Technology has its place, and must be kept in its place." As with so many things, the devil is in the details. Discerning the place of technology is easier said than done.
Another big issue is the morality of the way current models are trained and the theft of intellectual property as a result. That is a big sticking point as well.
This is, for me, the big issue. I can choose whether or not I want to use an AI. But I can't choose how the foundational models are trained or how creators are compensated for the use of copyrighted materials.
I think it will be a matter of a very short time until the models mix information with advertising. It is simply too lucrative, financially, to slip (more or less subtle) "recommendations" for products, technologies, media offerings, etc. into an AI's answers, where they can and will influence the thinking and (purchasing) decisions of anybody who uses an AI.
The latter, mostly. I'm a curmudgeon.
A respectful request: could you provide a link to the original article for those of us who don't use or don't subscribe to Apple News?
I do genuinely appreciate it when you bring our attention to articles of interest.
I have used and worked on early, very specific scholarly LLMs built for particular use cases and corpora: LLMs designed for text parsing, stylistics, and textual analysis and comparison.
I don't use commercial LLMs. I don't use essay mills either.
I don't think current LLMs are worth the energy costs; I'm appalled by the IP theft; and I'm concerned about the underlying issue of GIGO (garbage in, garbage out) and about increasing hallucinations.
That time will be very short indeed.
Hallucinating, or to use the better term, "bullshitting" [note 1], is what they are built to do.
Unlike human brains, which have a variety of goals and behaviors, LLMs have a singular objective: to generate text that closely resembles human language. This means their primary function is to replicate the patterns and structures of human speech and writing, not to understand or convey factual information.
Source:
Because LLMs statistically mimic the language people have used, they often fool people into thinking that they operate like people.
But they don't operate like people. They don't, for example, ever fact-check (as humans sometimes, when well motivated, do). They mimic the kinds of things people say in various contexts. And that's essentially all they do.
Source:
Note 1: On Bullshit - Wikipedia
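Not from either of the quoted sources, but to make their point concrete: here is a toy, deliberately oversimplified sketch of what "statistically mimicking language" means. It learns only which words tend to follow which words in a tiny made-up training text and then samples from those counts. Real LLMs use neural networks trained on vast corpora rather than bigram counts, but the shape of the objective is the same, and nothing in the loop checks whether the output is true.

```python
# Toy illustration (mine, not the quoted sources'): a bigram "language model".
# It records which words tend to follow which, then samples plausible
# continuations. Plausibility is the only objective; truth never enters.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for each word, the words that followed it in the training text.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start="the", length=12):
    """Repeatedly pick a statistically plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sampled for plausibility, never checked for truth
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the dog sat on the mat . the cat chased the dog"
```

Scaled up by many orders of magnitude and with vastly better statistics, that is still "generate text that closely resembles human language" rather than "convey factual information", which is why the bullshitting framing fits.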