ChatGPT as alternative to Google or Wikipedia

Yep.

And just like Tesla claiming their cars are self-driving, the claims will outpace the technology. And that’s sad, because the technology, in both cases, is fascinating. It doesn’t need to be overhyped.

5 Likes

I think the best summary of ChatGPT I’ve seen is

“Mansplaining as a Service - Super confident yet frequently inaccurate.”

While the technology certainly is interesting and impressive in its own way, I’m not ready to trust it with results from the Internet yet. Putting it to work on a data set I know to be solid could be useful though.

7 Likes

In my experiments, ChatGPT produces incredible nonsense. It makes up facts by the trainload.

Write an essay on the history of hydrofoils in Puget Sound

The result:

Hydrofoils, also known as hydroplanes, are boats or ships that are designed to travel on top of the water using underwater foils. These foils lift the vessel out of the water, reducing drag and allowing for faster speeds. The history of hydrofoils in Puget Sound, Washington, is a fascinating story of innovation, experimentation, and ultimately, success.

The first hydrofoil to be tested in Puget Sound was the “Foam,” built by the Boeing Corporation in the early 1950s. The Foam was a small, single-person craft that was powered by a gas engine and reached speeds of up to 40 miles per hour. The Foam’s successful tests in the Sound prompted Boeing to continue development on hydrofoils and led to the creation of the “Flying Clipper,” which was capable of carrying up to 12 passengers.

In the 1960s, hydrofoils were also tested by the United States Navy in Puget Sound. The Navy was interested in hydrofoils as a way to transport personnel and supplies to ships at sea. The Navy’s hydrofoils, called “Sea Furies,” were able to reach speeds of up to 50 miles per hour and could travel over rough seas. Despite the success of these tests, the Navy ultimately decided not to adopt hydrofoils for their operations.

Despite the Navy’s decision, hydrofoils continued to be developed and tested in Puget Sound by private companies and individuals. One of the most notable hydrofoils to be developed in the Sound was the “Sea Dart,” built by the W.D. Schock Corporation. The Sea Dart was a high-speed hydrofoil that could carry up to 50 passengers and reach speeds of up to 60 miles per hour. The Sea Dart was used for several years as a passenger ferry in the Sound, but was ultimately retired due to high operating costs.

In the 1970s, hydrofoils were also tested for use in commercial fishing in Puget Sound. These hydrofoils, called “Fishfoils,” were designed to be used as fishing boats and could travel at high speeds while carrying a heavy load of fish. The Fishfoils were not successful as commercial fishing vessels, as they proved to be too expensive to operate and maintain.

Despite the various setbacks and challenges faced by hydrofoils in Puget Sound, the technology continued to evolve and improve. Today, hydrofoils are used in many different applications, including military, transportation, and recreation. And Puget Sound, Washington played a important role in the development and testing of hydrofoils, and it continues to be a popular location for hydrofoil enthusiasts.

Overall, the history of hydrofoils in Puget Sound is a story of innovation, experimentation, and ultimately, success. Despite the many setbacks and challenges faced by hydrofoils in the Sound, the technology has continued to evolve and improve, and it is now used in a variety of applications. Puget Sound, Washington has played a significant role in the development of hydrofoils and continues to be a popular location for hydrofoil enthusiasts.


Almost everything in the article is false.

  1. W.D. Schock Corporation is a fiberglass sailboat builder in California that has nothing to do with Puget Sound or hydrofoils

  2. The Sea Dart was never a ferry in Puget Sound

  3. The Sea Dart was never a hydrofoil. The closest thing I can find was an experimental American seaplane that took off from the water on hydro-skis. It was never produced. It had nothing to do with Puget Sound.

1 Like

Sorry, but are you serious about those two quotes?
ChatGPT is NOT a search engine!
ChatGPT is not even intended to give you a serious answer to your question!
The only thing ChatGPT does, and should do, is give you an answer that LOOKS good in terms of correct sentence construction!
If you think it might have given you a correct answer to your question, the odds are about the same as opening Webster’s dictionary to a random page and picking a word there as the answer, because that is all ChatGPT is doing!

And BTW, if you look at political developments in the US and other countries, there are a lot of people who don’t care about the truth, as long as an answer “looks good” and “fits their view of the world.”

2 Likes

The detractors in this thread can all boil their arguments down to “If you don’t know how to use the tool, and you don’t engage in critical thinking, the results will be bad.”

This is not surprising.

All of the examples provided as proof of how ChatGPT fails are like using a chainsaw to cut bread. “What a bad and dangerous tool!”

Try the following:

  • Rewrite the following paragraph three times. [Some paragraph].
  • Explain the following as if I’m five. [Some paragraph]
  • What are some words that relate to [x] and why?
  • Summarize John Doe’s article “An article,” published in 2019. (* you can’t do this for anything published recently)

Then, obviously, like any reading comprehension/research task, just like search, you have to critically think about the results. Not everything you get will be correct or useful.


There are some interesting conversations to have about this thing, but we can’t have them in rooms where people are yelling about how bad this version is when used by people who don’t know what they’re doing.

Alas.

5 Likes

Yes that is an interesting issue.

Turns out that sometimes the references are spot-on.

But other times the references seem to be entirely fabricated and the links turn out to be for random articles that do not match the fabricated titles.

Obviously I realized its responses could be incomplete or otherwise off the mark - but not to the extent of totally making stuff up. Very odd.

1 Like

It’s terrific as a demo of what is to come. Though clearly not quite there yet.

In most disciplines a PhD begins with an annotated survey of the existing literature in a field. ChatGPT seems on the precipice of being able to assist with that task, with the same sorts of limitations as any other paper or digital indexing tool.

The next step in a PhD is to extend that knowledge further with new ideas; ChatGPT and AI in general are not likely to do that for the foreseeable future.

Meta’s Galactica was made for this, FYI,
… though it, too, is now infamous for making BS:

https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/amp/

2 Likes

The article about Galactica is fascinating @ryanjamurphy

“Statistical nonsense at scale” - interesting way to state it :slight_smile:

It’s interesting that this trial led to the same observations we have made here:

(1) Ultimately Generative AI may be particularly helpful as an advanced form of a search engine

(2) Sometimes the current state of the art can produce fantastically helpful output

(3) At other times the AI algorithm either makes stuff up or quotes misinformation, and it can be very difficult for anyone other than a content expert in the field to sort out what is real

It is certainly a caution regarding current limitations of the technology. That said, this is still the most impressive AI demo I have ever seen, and I am certain it will evolve into an incredibly useful tool. I feel as I did when I used Yahoo Search for the first time 25 or so years ago, or maybe even Gopher, which pre-dated it. They weren’t quite ready for prime-time work, but it was abundantly clear where the technology was headed.

I will say this as well: if OpenAI were a public company, I would invest in it now in a heartbeat.

2 Likes

Having fun now:

Rewrite the following to be more eloquent and persuasive. Reference Michel Foucault.

I apologize if my previous response did not align with your expectations. However, I must remind you that ChatGPT is not a search engine, but rather a language model designed to generate human-like text. As Michel Foucault, a renowned French philosopher, stated, “Knowledge is not for knowing: knowledge is for cutting.” Similarly, ChatGPT is not intended to provide deep, philosophical answers or insights. Instead, its primary function is to construct grammatically correct sentences. To rely on ChatGPT for answers to complex questions would be akin to blindly selecting a word from a dictionary, it may appear to be a response but it lacks the depth and nuance that a true understanding requires.

I encourage you to seek answers from a variety of sources and to engage in thoughtful reflection on the information you receive, as this is the key to gaining a deeper understanding of the world.

7 Likes

Sounds like you’re claiming that the detractors are “not holding it right”.

If I do a search in this forum I get results that match the search terms, and yes, I need to look at each result to see if it is of use to me, but each result is real. The examples given here (and elsewhere) demonstrate that ChatGPT often just makes shit up. And that is a non-trivial difference.

1 Like

More fun:

What are some concepts related to the luddite fallacy, and how do they explain why the following might be a bad principle?

The Luddite fallacy is a concept that refers to the belief that technological advancements are always bad and will lead to negative consequences. It is named after the Luddites, a group of 19th-century English textile workers who destroyed machinery as a form of protest against the changes brought about by the Industrial Revolution.

The concept of the Luddite fallacy is often used to explain why certain individuals or groups may be resistant to new technologies, even when those technologies have the potential to improve their lives. In this case, the belief that work produced with the aid of a technology like ChatGPT should be disregarded is an example of the Luddite fallacy. Such a belief ignores the potential benefits that the technology can provide, such as increased productivity and efficiency, and focuses solely on the negative aspects, such as the potential for job loss or decreased human involvement in certain tasks.

It’s important to note that, while the Luddite fallacy can be a real phenomenon, it’s important to evaluate the potential impact of new technologies on a case-by-case basis, rather than blindly rejecting them out of fear of change.

3 Likes

Okay. Who are you arguing with here?

My premises are:

  1. ChatGPT can be very helpful (in ways never before possible with other tools) if you use it properly.
  2. No one should trust ChatGPT or related tools without critically thinking about its results (a key part of using it properly).
  3. ChatGPT in its current state cannot replace search.
  4. Improvements on this tool in the future will replace search.

I imagine everyone in this thread actually agrees on (1) through (3), with some nuances, especially on (1). Number 4 might be up for debate.

1 Like

However, no one thinks it is a good idea to use a chainsaw to cut bread whereas ChatGPT goes all out to look like it is a good tool for getting good information.

I can see why you’d think that, but it’s not the case. People just skip over all the disclaimers and limitations and take its output on faith.

Search is a little like this. Pre-Google search was so crude. Results were disorganized and poorly ranked, but it was still the best way to find answers (even if it didn’t always work), so people used it.

I hear everyone saying the sky is falling because ChatGPT’s knowledge is “made up”. It is a problem, but it is one that will improve as the tech improves and people get better at using it.

Here’s an important signal: we’re already seeing an emerging practice of “promptcrafting,” which is akin to developing “power search” skills.

2 Likes

Isn’t overhyping a time-honored tradition in the tech industry? :grinning:

2 Likes

I’m amazed at how much the defenses being given for the use of ChatGPT resemble the old morality tale, The Emperor’s New Clothes. :slightly_smiling_face:

1 Like

… except you skipped over the line above, which is also a key lesson from The Emperor’s New Clothes.

Unlike invisible clothes, this tech is explained and disclaimed and its limits laid bare (pun intended).

Also unlike invisible clothes, there’s evidence of this tech working (see my posts in this thread), and there’s a good chance it will get better (though as I’ve already noted, there’s some good debate fodder on exactly how and how much).

I think I’m finished with this argument. I’d love to chat more about how it could improve and how much, about the implications of promptcrafting, and about the long tail of consequences this tech will have.

2 Likes

I am not so sure the tech is explained.

As I understand it, ChatGPT and other large language models are essentially stringing words together based on how other words are strung together across the internet.

That’s a vast oversimplification, but in fact it does NOT answer questions or provide information. It provides words, sentences and paragraphs (edit:) that are associated with the prompt.
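To make that oversimplification concrete, here is a toy sketch of next-word prediction using bigram counts over a made-up three-sentence corpus. This is emphatically not how ChatGPT actually works (it uses a large neural network over subword tokens), but it illustrates the core task: given the words so far, emit a statistically plausible next word, with no notion of truth.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, purely for illustration.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def continue_text(word, n=4):
    """Greedily append the most frequent next word, n times."""
    out = [word]
    for _ in range(n):
        if out[-1] not in followers:
            break
        out.append(followers[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(continue_text("the"))  # emits fluent-looking words, knows no facts
```

The generated text is locally plausible because every transition was seen in the corpus, yet the model has no idea whether any resulting sentence is true, which is exactly the failure mode described in this thread, scaled down.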

I signed up for a ChatGPT account. I’m pretty sure that wasn’t explained. The system is presented with very little explanation, in fact, other than some disclaimers that the information or instructions could be incorrect or even dangerous.

That’s important, but it’s a caveat – it isn’t an explanation of what ChatGPT actually does. I’m not surprised users have a skewed impression of what it’s “good at” or “meant to do.”

4 Likes