ChatGPT is just awesome

Agree, a soul is not required for “artificial” intelligence but I believe it is for real intelligence, self-consciousness, imagination, moral and ethical decision making and more. :slightly_smiling_face:

3 Likes

If it is clearly identifiable as “fiction”, why not!?
There have already been a couple of books available (since at least 2017) written by so-called “AI”.

An excellent question, but we need to define our terms (spot the scientist :grimacing:).

If we start with the assumption that only literature published 100+ years ago can be defined as good, on the basis that anything more recent has not yet “stood the test of time” (a common enough definition for our purposes), and we also assume that most “great works of literature” survive because they speak to something about the human condition (also a common enough definition), it is hard to see how an AI could ever write good literature. At best it would only be mimicking something about human life. Even if it mimics it really, really well, would we find the same value in it?

If however we’re just defining good as “readable” or even “marketable”, one might argue that an AI-authored book is probably better than some of the rubbish already being published :grimacing:

1 Like

There was an “AI” project that wanted to “complete” Beethoven’s 10th Symphony.
They took everything they could get from Beethoven as input for the software, and at the end they presented the “completed 10th Symphony” to the public.

Well, at least it was something that sounds a little bit like Beethoven, and it was only around 20 minutes long: around 25% shorter than the shortest symphony Beethoven himself wrote. The 9th, for comparison, normally runs over 70 minutes.

There are a couple of articles about that project you can find on the Internet.
Almost all of those articles skip one important part of how the “AI” really worked.
It took about 18 months to get to the final product, but they failed to tell the press that those 18 months were not spent producing the presented piece of music; they were spent generating literally thousands of samples, which were then listened to, with the developers deciding at the end which one to present to the public.
This was not an “AI” production, but simply software mimicking classical music, plus a group of people in a trial-and-error process selecting the best outcome for presentation.

This is similar to how those “writing AIs” work. They use existing texts as input, and the system pretty much just exchanges some words for new ones in order to “write” a new book.

1 Like

I’m guessing most people reading this far will enjoy the below piece by Janelle Shane, whose AI Weirdness blog is one of the more entertaining on the subject:

Apparently I Am A Robot

She was also the one who turned me on to the sounds-good-but-is-bogus problem:

Galactica: the AI knowledge base that makes stuff up

2 Likes

This article explains in part why I’m not concerned about AI wiping us out.

1 Like

The current chat AI is all about the probability of which word, or part of a word, will come next, based on analyzing a huge corpus of text. There’s no real intelligence beyond that.

I’m sitting here drinking a cup of ________

The first man on the moon was _________

Leeloo has a ________
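The fill-in-the-blank idea described above can be sketched with a toy model. To be clear, this is only an illustration of next-word prediction from counted text: real chat models use large neural networks over subword tokens, and the tiny corpus below is made up for the example.

```python
# Toy next-word predictor: count which word follows each word in a
# (made-up) corpus, then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = (
    "i am drinking a cup of coffee . "
    "she is drinking a cup of tea . "
    "he poured a cup of coffee ."
).split()

# follows["cup"] counts every word seen immediately after "cup".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("of"))   # -> "coffee" (it follows "of" twice, "tea" once)
```

Scaled up to billions of documents and a much richer notion of context than a single preceding word, this counting-and-guessing scheme is the basic intuition behind filling in “a cup of ________”.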

So, you are claiming that if we can fill in the blanks we have no real intelligence?

Hmmm …

:slight_smile:

1 Like

This morning I visited StackOverflow and noticed a banner pointing to their policy on Why posting GPT and ChatGPT generated answers is not currently acceptable. In short, the concern is that the generated answers cannot be trusted to be accurate. Users who ignore the policy can be immediately suspended. Not sure how they determine that has happened though. It will be interesting to see how these sorts of things play out over time.

1 Like

Yes, but it’s the ratio between the sizes of the prompts and the blanks that can be filled that is exciting. It seems to be hovering around 1:10 for factual explanations, and 1:20 for creative work.

While I am very impressed by what ChatGPT and other AI projects have to offer these days, I am with those who have trouble considering stuff like that intelligent (and I even cringe to some degree when I am confronted with “machine learning”, which at least is a more honest phrase). Software, in combination with processing power and huge storage, can achieve impressive results these days, and it is getting better and better. But intelligence? No.

I grew up with the Computerclub, a fantastic TV show about computers by WDR (a German national broadcaster): WDR Computerclub - Wikipedia. It was a true marvel, educating viewers about computers, software, programming, tinkering and whatnot.

One of the former hosts, Wolfgang Rudolph, well into retirement age these days, now publishes content on YouTube and as audio podcasts via https://cc2.tv. It has been a long time since I last listened or watched, but there is one quote by Wolfgang Rudolph that always comes to my mind when I am confronted with AI:

“KI = Künstliche Intelligenz. Man könnte auch Kleine oder Keine Intelligenz sagen.”

The pun in this quote gets lost to some degree in translation, but:

“AI = Artificial Intelligence. You could also call it little or no intelligence.”

I have huge respect for the efforts of developers and companies publishing AI stuff. But intelligence? No. :slight_smile:

1 Like

I think it is very important to distinguish between “intelligence” and “consciousness” and perhaps even “wisdom” in these discussions.

There is quite a bit of literature on this subject, by a lot of folks much smarter than me, well worth investigating if you are interested in this topic.

I took some courses on AI back in my grad school days and did a lot of reading on intelligence, both computer and biological. I was always bothered by the arguments claiming that humans do things differently than computers. The argument goes like this (vastly simplified): “humans and computers play chess differently, because we know how computers play chess.” Yep, we know how computers play chess. Where the argument falls down for me is that no one can tell me how humans play chess at the same level of detail as we have for computers.

1 Like

Well, as much as a parrot (which is significant) :wink:
Although knowing the answer to the last one is a sign of great intelligence.

1 Like

I agree that ChatGPT is a real breakthrough. But it doesn’t move us towards artificial general intelligence much further than what Eliza did in its time.

Yes, it looks “intelligent” and is going to have multiple useful use cases, but for all the distance between Eliza and ChatGPT, there is still much to do in terms of artificial intelligence. ChatGPT does not have beliefs, it does not have feelings, and it won’t fail a test because its significant other dumped it.

While the output is impressive, I tried to engage it and ask a few questions about sports and players. It was not correct on many stats. The interesting thing is that when I typed in information contradicting what it had shared, the words it spat out made it seem like I hadn’t asked the other questions. I don’t think I would trust the information it shares just yet. It is fun to play with, but it seems to have a long way to go.

TidBITS has published a good article about ChatGPT. In this excerpt from the article, there are several examples that explain very well what is going on and why this is anything but intelligence:

Ben Thompson of Stratechery asked ChatGPT, “Did Thomas Hobbes believe in separation of powers?” and received this reply:

Yes, Thomas Hobbes believed in the concept of separation of powers. In his book “Leviathan,” Hobbes argued that a strong, centralized government is necessary to maintain order and prevent conflict in society. However, he also believed that a single ruler or group should not have complete control over all aspects of government. Instead, he advocated for the separation of powers into different branches, with each branch having specific responsibilities and powers.

This is fascinating. It’s concise, well-written English.

It’s also wrong.

(…)

It’s easy to think that since ChatGPT is actually a computer program, it’s simply running this command for you, like a real Unix shell. This is wrong. It’s going through millions of pieces of training data showing how Unix shells respond, and it’s returning its best guess at the correct text. It has no understanding that a Unix shell is a computer program, while Shakespeare was a person.

Similarly, Thompson asked ChatGPT what 4839 + 3948 – 45 is. It said 8732, and when Adam Engst tried while editing this article, it answered 8632. Both answers are wrong—it should be 8742. Again, ChatGPT may be a computer program, but it isn’t doing any arithmetic. It’s looking through its huge text model for the most likely next words, and its training data was both wrong and inconsistent. But at least it showed its work!

(…)

Computer scientists and philosophers have pondered for years if it’s possible to create a conscious computer program. We may get programs that almost everyone thinks are intelligent and talks to as if they’re intelligent, even though programmers can show there’s nothing intelligent going on inside them. It’s just complex pattern-matching.

Via ChatGPT: The Future of AI Is Here - TidBITS

1 Like

Nature has published an article about ChatGPT which some of you may find interesting: AI bot ChatGPT writes smart essays — should professors worry?

(Nature is a journal for science academics, so their focus is on what this means for professors.)

1 Like

And this graph… HMMM.

(I’m not entirely convinced it means anything, lots of people are just playing with it because of the hype.)

1 Like

I think that there absolutely is a correlation to the hype.

I am impressed by the quality of the output by ChatGPT in matters of “concise, well-written English”. I am deeply concerned about the actual content because there is basically no guarantee that anything of this “well-written” stuff is true or correct. Which is dangerous to say the least.

Fact-checking is becoming more and more important these days. It is already a big challenge to deal with the “output” of actual humans; it will become even more so when chat software is “writing” texts.

4 Likes

I was building an email in MailChimp today and I noticed their new beta builder offers an AI function to help you build and write better emails :roll_eyes: I don’t like the idea of newsletters being written by a non-human. (Some marketing emails already read like that’s the case and marketing emails are of less value to me personally so I don’t care about that!)