I hope one day ChatGPT will be able to summarise these topics in one-liners.
I wonder, did we have posts with the title “Google helps me solve a Mac problem” in the past?
(But maybe I just don’t understand people’s fascination with ChatGPT.)
I’m happy for everyone who is finding ChatGPT so helpful and don’t want to rain on their parade, but personally I think it’s pretty terrible.
Whenever I’ve tried it, after numerous rounds of it making stuff up and apologising, it either says it can’t help, or arrives at the right answer seemingly randomly.
In this particular instance, typing the search term (“Finder stealing focus on Mac”) into Google and Kagi gave me the solution as the first result. I assume this might be where GPT is getting its answer from, but as it’s a black box that doesn’t reveal its sources, it’s impossible to know if it’s drawing on verifiable information.
My experience is that it makes stuff up without apology. It speaks nonsense with great authority.
So do a lot of people I work with …
Yes I like this about Bing too.
Also my anecdote: I used ChatGPT to modify a Chrome extension that I use extensively. I had been wanting to do it for a long time but couldn’t be bothered to figure out the code on GitHub. ChatGPT helped me with that and it worked, so I see the huge potential here.
All the bots (ChatGPT, Bard, Claude, Llama) have this problem, because they draw their “knowledge” from text found on the Internet, and, as pointed out above, the posters know nothing.
That said, I have had some success with ChatGPT providing outline code for specific programming problems. The proposed solutions needed tweaking but were basically okay. Although if anyone is a real expert in sed, I could do with some help fixing a sed script that sort of works; it seems as if the hold space is not cleared out at the end of each iteration of the cycle, despite the presence of the appropriate command.
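To show what I mean, here is a greatly simplified sketch, not my actual script: a hypothetical filter that collects lines until an END marker (the marker and the input file name are made up), assuming GNU sed. The point is that the hold space has to be emptied explicitly, because sed never clears it between cycles on its own.

```sh
# simplified sketch (GNU sed); the END marker and input file are made up
sed -n '
  # On the end-of-record marker: swap the accumulated record into the
  # pattern space, print it, empty it (z is a GNU extension), and swap
  # the now-empty buffer back so the hold space starts the next record
  # clean. Note the first H leaves a leading blank line in the record,
  # which a real script would trim.
  /^END$/ {
    x
    p
    z
    x
    d
  }
  # Every other line just gets appended to the hold space.
  H
' input.txt
```

Without an explicit emptying step like that `z`/`x` pair, whatever is sitting in the hold space simply carries over into the next record.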
ChatGPT has been quite useful at work. It is helping with some templates and letters to clients, as well as documentation for clients. However, it does need strong editorial correction, and I do not trust it implicitly, as it has a tendency to make mistakes. Since my programming skills are poor, it has also helped with transforming text to CSV or a markdown table.
Interesting. That was not on the first page of the results when I did the search three days ago.
I find ChatGPT4 extremely helpful when I need to do heavy thinking.
For instance:
- It helped me write a short story last week - something I need for a training programme I’m creating. It did a better job than I would have, for the first draft at least, and I’m a professional writer and have sold thousands of books.
- I’ve used it as an interactive coach, by asking it to pretend it’s one of my gurus. It did a great job - not just of guiding me, but reframing things. I then asked it what “Clarke Ching” would suggest (that’s me!) and it wrapped up with some lovely advice, which I’ve followed.
- I do a lot of my thinking - i.e. creating my intellectual property - by “playing” with words and it’s been so helpful for that. I explain my problem, ask it questions, and it helps me create simple frameworks and clever wording.
In all cases, I do the big-upfront heavy thinking - maybe the first 10-20% of the work - by figuring out what I want to chat about. I then use ChatGPT4 to untangle things, create new words, reframe things - maybe 70-80% of the work. If I want to take things beyond thinking and publish stuff, I spend a bit of time doing good old-fashioned cutting-and-pasting, editing, and other rewriting.
It never occurred to me to ask it how to fix a Mac problem, though!
This is awesome. Thank you for sharing that experience. It’s inspired me to continue learning how to use ChatGPT (and similar AI tools) better. I am also super excited about the potential of this to improve people’s work in so many areas!
The only thing I’m worried about is that it could encourage verbosity where brevity would be best.
You can ask it to reduce the word count or to write so it’s suitable for a 12-year-old or to write like a New York Times article or Malcolm Gladwell, for instance.
The last 10%, where you do the clean-up, is the most important.
And, also, starting small, not expecting miracles, but still being astonished with some of the things it helps you with.
There are a few tricks to getting ChatGPT to be optimally useful. Here’s a great video on a prompt formula that will help fine-tune the responses to give you exactly what you need:
That is a great video; thanks for sharing. I’ll be passing it along to my senior leadership team, IT head, and my EA.
It plagiarised other people’s work for you.
I really don’t feel like people should feel good about this.
I do not know what @Clarke_Ching did, but that may not be the case.
If one writes a story, and asks a LLM to rewrite it in a different voice, that would not be plagiarism. This would be “helping to write a story”.
If on the other hand, one asks ChatGPT to write the story, I would not see this as “helping to write a story”. I could see this being a real problem at all levels of academia.
Why did it plagiarise other people’s work?
It’s literally a composite of other people’s work, whatever texts were used as training material for the LLM. There is no way for an LLM to output something that isn’t, in essence, a tapestry of other people’s words. It creates nothing.
This displays a common but fundamental misunderstanding of LLMs.
There are problems with the current state of the art, as the hallucinations show.
But if used correctly, LLMs can create.