So I’ve been running an experiment: queries I would normally do on Google, I take to ChatGPT first. I’m surprised at how often ChatGPT is wrong.
I’m a big baseball fan, and I was talking with a friend about .400 hitters and how long it has been since anyone did it (1941, Ted Williams). So I asked ChatGPT for the list: “How many hitters hit over .400 in a season?” Its response was a list of four:
I’m also a software developer, so I’ve asked ChatGPT to write some code. It’s impressive how quickly it can spit out boilerplate, but it is often wrong. You can’t really trust ChatGPT; it still requires expertise to figure out whether it is giving you a BS answer or not.
What is all the hype about? Why are people worried about an AI that can’t get simple facts straight? How is it going to replace humans when it can’t even replace Google?
These days, when I want to use generative text AI, I go to Bing first instead of ChatGPT itself. In addition to the generated text, Bing provides a set of links to the source material.
It’s like grading a test where you require the students to “show their work”: it makes validating the results much easier, and it also gives you a great jumping-off point for doing further research on your own.
How useful is that? I manage a team of developers. When someone first starts, I check all of their work, but as time goes on, trust builds, and the best developers are the ones I know I can assign a task to, confident they will do it right. The worst are the ones I never trust and whose work I always have to check. It sounds like ChatGPT is the second kind of developer, the kind where you’d rather do the work yourself because you never know when it will just make stuff up.
Going a bit deeper, and somewhat off topic: it’s the same reason I do generative imaging in the Adobe Photoshop beta rather than in standalone image AI tools (such as MidJourney).
The generated fill is right there in a full-featured photo editing app, ready for further adjustments or additional changes.
Gotta give Adobe massive credit for popularizing generative image in-fill with the “generative fill” feature in the Photoshop beta.
Most people don’t realize that the tech, known as “inpainting,” has been around for a year or more in DALL-E, Runway ML, and other tools, but it wasn’t until Adobe made it easy to use inside Photoshop itself that everyone “discovered” what it can do.
Even now, most people don’t know that you can use Adobe’s free image generation website, firefly.adobe.com, to upload a photo, apply generative fill to it, and download the results to use in any photo editor, all at no cost.
In many situations, the second part of your sentence (“you never know when it will just make stuff up”) can be true without the first part (“you’d rather do the work yourself”) being true.
Asking ChatGPT for a dozen points regarding a book you’re working on could be useful. If you’re a subject matter expert, you might realize one of them isn’t important - but numbers 9 and 11 might be something you hadn’t considered.
Asking ChatGPT for an argument against the position you’re taking in an op-ed might give you additional insight to work from in your article.
It’s a tool for augmenting your work, not a person to replace a worker. Just like a drill doesn’t turn me into a professional carpenter - but a drill saves a professional carpenter a ton of time if used properly.
I’ve found that ChatGPT can be useful as a catalyst for articles and presentations. If I provide enough detail in my query, it will often generate a useful outline or topic ideas as a starting point. It will sometimes offer points I’d not considered. It certainly cannot do the work for me, nor would I want it to.
ChatGPT Prompt: give me six bullet points for an article about why chatgpt is unreliable
Lack of Contextual Understanding: One of the major reasons why ChatGPT can be unreliable is its limited ability to grasp the context of a conversation. It often generates responses based on patterns it has learned from training data, rather than fully understanding the nuances of the conversation at hand. This can lead to inaccurate or irrelevant responses.
Inconsistent and Contradictory Answers: ChatGPT’s responses can vary widely depending on how a question is phrased or the order in which information is provided. This inconsistency can make it difficult to rely on the model for accurate and coherent answers. Users may receive different responses to the same question, leading to confusion and mistrust.
Propensity for Biased or Inappropriate Content: Since ChatGPT learns from vast amounts of text data available on the internet, it can sometimes generate biased or inappropriate responses. This is particularly true when it comes to sensitive topics such as race, gender, or politics. Without careful monitoring and filtering, the model’s responses can reinforce stereotypes or propagate misinformation.
Limited Fact-Checking Capabilities: While ChatGPT can provide information on a wide range of topics, it lacks the ability to fact-check its responses. It may inadvertently present inaccurate or outdated information as factual, without the ability to verify the correctness of its statements. Users must independently fact-check any information provided by the model.
Vulnerability to Adversarial Inputs: ChatGPT is susceptible to manipulation through carefully crafted inputs. Adversaries can exploit the model’s weaknesses and biases by intentionally phrasing questions or providing information in a way that leads to misleading or harmful responses. This vulnerability makes it unreliable in situations where the model’s output can have significant consequences.
Lack of Accountability and Transparency: As an AI model, ChatGPT lacks transparency in its decision-making process. It can be challenging to understand how the model arrived at a specific response or to identify any underlying biases or errors. Additionally, there is limited accountability for the model’s actions, making it difficult to address or rectify any issues that arise from its unreliability.
(Side note) I originally wrote “This is great!” but it was below the 20-character minimum. So I asked ChatGPT for an alternative to “this is great” with more than 20 characters. It responded “This is fantastic!”, which, if you count, is only 18 characters. So not being able to count is another flaw of ChatGPT!
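For what it’s worth, the count is easy to verify yourself. A couple of lines of Python confirm the reply falls short of the minimum:

```python
# Check the character counts from the anecdote above.
reply = "This is fantastic!"
minimum = 20  # the forum's minimum post length

length = len(reply)            # 18 characters
clears_minimum = length >= minimum  # False: two characters short
```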
I am surprised that you are surprised!
I thought it was no secret, at least in this community, that ChatGPT is simply not able, not constructed, and not intended to give you correct answers to your questions!?
A hammer is a very useful tool, but you could not use it to saw wood…!
You are right, Bing works much better. It even gives you options like “More Creative” and “More Precise”, so I chose More Precise, asked my over-.400 question there, and it gave the right answer, plus links to verify. This is cool!
ChatGPT Prompt: give me three bullet points for an article about why chatgpt is unreliable and three more about why chatgpt is reliable
Why ChatGPT is Unreliable:
Inherent Bias: ChatGPT, like any other AI language model, is trained on large datasets that reflect the biases and prejudices present in the data. This can result in biased responses that perpetuate stereotypes, promote misinformation, or exhibit discriminatory behavior. The model lacks the ability to critically analyze information and may provide unreliable or misleading answers as a result.
Lack of Contextual Understanding: While ChatGPT can generate coherent responses, it often struggles with comprehending the context and nuances of a conversation. It may misinterpret queries or fail to grasp the underlying intent of the user, leading to inaccurate or irrelevant responses. This limitation makes the model unreliable in situations that require a deep understanding of complex topics or require precise and accurate information.
Generation of False Information: ChatGPT does not possess fact-checking capabilities, and it can generate plausible-sounding but false information. It relies solely on patterns and associations learned from its training data, without external verification mechanisms. Consequently, it may provide unreliable answers when confronted with ambiguous or unverified queries, potentially spreading misinformation.
Why ChatGPT is Reliable:
Vast Knowledge Base: ChatGPT is trained on a vast corpus of text from various sources, allowing it to draw from a wealth of information. It can provide accurate and detailed responses on a wide range of topics, making it a valuable tool for retrieving factual information, historical data, or general knowledge. This breadth of knowledge enhances its reliability as an information resource.
Continuous Learning and Improvement: OpenAI regularly updates and refines ChatGPT based on user feedback, addressing its limitations and enhancing its capabilities. By continually learning from its interactions with users, the model becomes more accurate, reliable, and responsive over time. This commitment to improvement ensures that ChatGPT evolves into a more trustworthy and dependable conversational partner.
Flexibility and Adaptability: ChatGPT’s ability to generate coherent and contextually appropriate responses, even in open-ended conversations, contributes to its reliability. It can engage in meaningful discussions, understand and respond to user prompts, and adapt its tone and style to match the conversation’s context. This adaptability allows it to maintain consistency and generate reliable responses across a wide range of conversational scenarios.
Do the sources always check out? I haven’t had a chance to try Bing seriously, but for a while, when ChatGPT blew up and people were posting their exchanges (including asking for sources), a lot of the sources were either invented links or linked to something not particularly related.
Bing is different from ChatGPT. Bing GPT uses the Bing index: it basically runs a Bing search, finds the most relevant pages, and then feeds those pages, along with your original query, to the GPT engine. While ChatGPT can hallucinate URLs, Bing probably gets them right because the links come from actual search results.
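That search-then-generate flow can be sketched in a few lines. This is a toy illustration only: the index contents, URLs, and function names here are all made up for the example, not Bing’s actual API or pipeline.

```python
# Toy sketch of the flow described above: run a search over an index,
# take the top pages, and assemble a prompt that pairs the retrieved
# sources with the user's original query before it reaches the model.
# TOY_INDEX, the URLs, and the scoring are illustrative, not Bing's.

TOY_INDEX = {
    "https://example.com/400-hitters": "List of MLB players who hit .400 in a season",
    "https://example.com/ted-williams": "Ted Williams hit .406 in 1941",
    "https://example.com/pitching": "How to grip and throw a curveball",
}

def search_index(query: str, top_k: int = 2) -> list:
    """Naive relevance scoring: count query words appearing in each page."""
    words = query.lower().split()
    scored = [
        (sum(w in text.lower() for w in words), url, text)
        for url, text in TOY_INDEX.items()
    ]
    scored.sort(reverse=True)
    return [(url, text) for _, url, text in scored[:top_k]]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt: retrieved pages first, then the query."""
    sources = "\n".join(f"[{url}] {text}" for url, text in search_index(query))
    return f"Sources:\n{sources}\n\nQuestion: {query}"

prompt = build_prompt("who hit .400 in a season")
```

Because the URLs the model sees come out of the index rather than out of its own generation, the links it cites exist by construction, which is why this setup is less prone to invented sources.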