ChatGPT as alternative to Google or Wikipedia

I find prompt-crafting fascinating for how quickly it has picked up momentum in blogs and in both free and paid marketplaces. There is also a template-based prompt-crafting app I saw today on Product Hunt called Pickaxe. It is almost like the evolution of an entirely new genre of coding or app generation.

If the issue of “fake data” can be solved I think the implications are stunning.

I am pondering whether the fake data issue can be resolved simply by instructing the software which database to use, for example: “Show me annotated articles on how vegetables improve health, using references from PubMed.” Currently ChatGPT cannot search the Internet, but it does not seem like an impossible step to add that capability and to allow a prompt to limit the software to specific source(s) for a response.
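
One pattern along these lines is to fetch passages from the chosen database first and then confine the model to them in the prompt. A minimal sketch of assembling such a prompt (a hypothetical helper; nothing about current ChatGPT guarantees the instruction is actually obeyed):

```python
def source_restricted_prompt(question, passages):
    """Assemble a prompt that confines the model to the supplied sources.

    Hypothetical pattern: the instruction is only a request to the model,
    not an enforced constraint.
    """
    numbered = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the numbered sources below, citing them as [n]. "
        "If the sources are insufficient, say so.\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {question}"
    )

print(source_restricted_prompt(
    "Do vegetables improve health?",
    ["Higher vegetable intake was associated with lower mortality."],
))
```

The passages themselves would have to come from a real retrieval step against the named database, which is exactly the capability ChatGPT lacks today.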

3 Likes

It certainly seems like it could be a powerful tool for skilled reference librarians.

1 Like

I meant “explained” simply in the sense that the limitations are known. But yes, explainable AI is a big thing right now and is a very unfinished problem.

My supervisor has done some work in this area, in fact:

1 Like

I’m not sure it can, given the current nature of these large language models. It’s entirely possible to feed them only accurate information, and they will nonetheless construct inaccurate sources, citations and phrases. That’s because they aren’t simply regurgitating what’s been put into them, but recombining and permuting it.

Without an understanding of what’s “accurate” and “inaccurate”, and without even being asked (by their designers) to apply one, a system that rearranges words and concepts can’t be made accurate.

Think of all the grammatically correct but logically invalid arguments that can be constructed in proper English. Think how difficult it is for human beings to learn (secondary school, college, graduate programs…), and we (usually) do have a sense of “accurate” and “inaccurate”, which these systems do not.

Is this a solvable problem? Probably, but I don’t see the path from the current tools, at the moment.

5 Likes

The reason I look things up on Google and Wikipedia is because I do not know them. I could never trust the current version of ChatGPT to provide truthful information; I would be a poor judge of anything outside of my interests and expertise. I am, however, more than willing to trust it as a tool for exploration of thought.

1 Like

Conceptual modelling is one such path, as per the link above.

Basically, it’s possible to construct ontologies of domains so that the model can understand how concepts — the words it’s regurgitating — relate to one another.
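
As a rough illustration of the idea, an ontology makes the relations between concepts explicit instead of leaving them implicit in word statistics. A minimal sketch (the triples and relation names are invented for illustration, not any particular knowledge-representation standard):

```python
# A toy ontology: concept relations stated explicitly as triples.
# (Invented examples; a sketch of the idea only.)
triples = {
    ("chair", "used_with", "table"),
    ("chair", "is_a", "furniture"),
    ("table", "is_a", "furniture"),
    ("fish", "is_a", "animal"),
}

def related(a, b):
    """True if any direct relation links the two concepts, in either order."""
    return any({x, y} == {a, b} for x, _, y in triples)

print(related("chair", "table"))  # True: an explicit relation exists
print(related("chair", "fish"))   # False: no stated relation
```

The point is that a model grounded in such structures has something to check its word choices against, rather than frequency alone.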

I am taking issue with the statement that if I am a “detractor” that I am dismissed as using the tool wrong.

Let’s use the example of the chainsaw. If one fired up a chainsaw and sometimes it cut logs as expected and other times it ran and ran but didn’t cut anything, I would argue that the tool isn’t working, not that the user was using it wrong.

So if ChatGPT sometimes gives useful info and other times straight-up word salad, then I don’t see how the issue is not with ChatGPT.

As to these:

I fully agree with 3.
I somewhat agree with 4, in that I fully expect AI tools will one day replace traditional search.

1 and 2 seem like saying, “hey, this self-driving car works great if you use it correctly” and “to use it correctly, always maintain manual control”.

I’m sure there are limited cases where this works, and if you find it useful great. But to me, ChatGPT is more like that chainsaw that only works some of the time than a viable tool.

Spot-on analysis of both the good and the bad. This is a perfect demonstration of the issue we discussed in this thread: ChatGPT’s accuracy in some areas can be misleading, considering that at times it will flat-out make things up.

1 Like

The good doctor says, “It’s like Google, but it’s got AI.” I would argue that Google has AI, too.

After all, what are the Google Answer Boxes and “People also ask” boxes that have been around for years? Google extracts them through an automated process; they are not hand-curated by real people. Often they are useful, but sometimes they provide incorrect and even laughable information.

Guides are available that will tell you all about them and suggest ways to write and format your online posts to increase their chances of being extracted and used by Google, for example:

The ChatGPT answers remind me of the sort of answers I would give off the top of my head on subjects I knew nothing about … when I was a teenager.

My brain wasn’t fully formed back then. But I didn’t know that.

3 Likes

I enjoyed asking ChatGPT for scholarly references on a subject I want to learn more about (the effects of wood selection in solid-body electric guitars). I also enjoyed using ChatGPT’s answers to gently troll folks in another guitar-related forum I participate in. So double thanks for this!

2 Likes

I tried ChatGPT in the area of my professional work (Scrum). Many answers were well written but low on facts. Some were drivel. The problem is that you would need to be an expert in Scrum to spot some of the errors.

Wikipedia is several orders of magnitude more reliable.

The problem is, ChatGPT IS NOT a search tool!
It is not made for that, it is not intended to be that, and most importantly, it does not have the ability to be that, or to become that!
ChatGPT is simply looking for the next fitting word to complete a sentence. It does not care, nor does it know, why it uses “chair” instead of “fish”; it is simply because, in a sentence containing “table”, “chair” appears more often than “fish”!
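
That “next fitting word” behavior can be made concrete with a toy bigram model; a minimal sketch in Python (raw word counts standing in for what a neural network does at vastly greater scale; the corpus is invented for illustration):

```python
from collections import Counter, defaultdict

# Invented toy corpus in which "chair" follows "and" five times as often
# as "fish" does.
corpus = ("the table and chair set " * 5 + "the table and fish tank ").split()

# Count which word follows each word: a bigram model, a drastically
# simplified stand-in for a large language model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the most frequent next word: pure statistics, no meaning."""
    return following[word].most_common(1)[0][0]

print(next_word("and"))  # "chair", chosen by frequency alone
```

Nothing in the counting process knows what a chair or a fish is; the choice falls out of the statistics.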

And NO, a huge number of people do not care about disclaimers. They just do not read them, and if they do, they do not understand them!
If ChatGPT offers “answers” to their questions that sound “real”, they are fine with that!
You can see how people act on false information if you simply look at the many discussions, videos, “news” articles and so on regarding MAGA and Trump in the US, or at similar “trends” all over the world.
In Germany we have similar groups of people who fall for the fake news within their bubble, with no chance of getting them out of it.
And yesterday the Tsar of Russia proclaimed that the USA is still occupying Germany. Simply because HE said so, I am pretty sure we “lost” some more people to the growing “Reichsbuerger” communities who believe the same nonsense!
Or, to give another example: a news outlet in Germany (Auto Motor & Sport) has released some interesting articles comparing what Elon Musk has said over the last years about what is coming next, or what is supposedly already true, versus what really shows up, or the truth about his products.
There is a LARGE discrepancy between those two things, but there is also a large group of people, or fan-boys, who do not want to see that, and who defend everything their idol says as the truth.
If those people “miss” the disclaimer, they take everything ChatGPT spits out as real, and this is the BIG DANGER for our communities!!

ChatGPT is only in Beta; Wikipedia is a production website.

The huge plus of ChatGPT is that you can specify the output format - and you can do so with natural language. That benefit is HUGE.

Obviously I am assuming that ChatGPT will advance to the point of accurate data before it is ready for real production use. But the potential is immense.

The nature of an AI is that you will never know whether it was accurate. It is only as good as its training data. Without it citing sources, I can’t know if its basis for describing something is good or not.

There are some people in the Agile/Scrum community such that, if they say something I disagree with, I will carefully rethink why I hold that position. When they state something, it is usually so well considered that I look to my own weakness first. Example: Dan North (Blog - Dan North & Associates Ltd) has moved my thinking on key ideas over the years.

Others I won’t name, I skim looking for ideas, but they’re unlikely to shake me from current beliefs.

ChatGPT et al, aren’t intelligent. They regurgitate the data they were fed without knowing if the data was accurate in the first place.

Wikipedia is, in general, also a very unreliable source of trustworthy information!

1 Like

That’s precisely my point - I anticipate ChatGPT as a means to search/retrieve information with references.

Indeed you can request references now in various formats and it does exactly as requested. The problem is that sometimes the references are spot-on and sometimes they are totally fabricated.

A production version of the software that does not fabricate references and can always provide references in a format you request could well be a gamechanger; that is what I very much hope for and anticipate.

Richard - I apologize; I don’t know your technical background, experience with AIs, etc. So this isn’t intended to be the tech equivalent of mansplaining.

With ChatGPT, if you ask for justification or references, it generates them, but they’re not related to the answer it first gave you.

I think the current class/generation of language models is incapable of telling you their sources. I think the flaw is built into the model. It’s not like they go Source Material → Result. There are many, many feedback loops between the many sources and the outputs, so we would be hard pressed to find the original source(s).

1 Like

I have a good understanding of how classic computing languages work. My understanding of AI is limited to some experiments I did trying to train early models which did just pattern recognition.

That is an interesting observation of yours, if true. You don’t think it would be possible for OpenAI to train ChatGPT to answer questions like “Please search PubMed for articles with the words Covid and Outcome in the title and display the results in a table including the title, abstract, and URL”?
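
For what it’s worth, the retrieval half of such a request is already mechanical: PubMed exposes a public search API (the NCBI E-utilities). A sketch of the query a model would have to construct (the esearch endpoint is a real NCBI API; the natural-language-to-query translation step is the open question, and this helper is my own illustration):

```python
from urllib.parse import urlencode

# PubMed's public E-utilities search endpoint (a real NCBI API).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_title_search_url(words, retmax=20):
    """Build an esearch URL for articles with all given words in the title."""
    term = " AND ".join(f"{w}[Title]" for w in words)
    return ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    )

print(pubmed_title_search_url(["Covid", "Outcome"]))
```

Translating free-form requests into such queries reliably, and then summarizing the results without fabrication, is exactly where today’s models fall down.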

Or “Search CNN, FoxNews, and MSNBC for the 5 most popular stories today on each site and create a list of hyperlinked titles to those articles”?

1 Like

OpenAI is (currently) not developing a search engine, so why should they train ChatGPT for this?