Agreed, and I’d say the language models are as well. They’re frustrating in their inconsistency but it’s incredible technology that we’re seeing solve real problems and create real opportunities all the time.
No worries, you’re not being a smart alec; you’re being helpful.
To answer your question, yes, I have, but I did not think to do so in this instance. Given my abysmal experience, I should have. That said, I’m not sure where the threshold is when AI use increases productivity versus reduces it, but in hindsight, I would have been better off asking for suggested missing topic headings and then adding them myself. I’ll chalk this one up as “user error” and “lesson learned.”
And start a whole new conversation (unless that’s implied by “new prompt” to you?). In a “conversation,” the entire previous history of the conversation effectively becomes a prompt.
Let’s say you started off asking it about how to make pancakes, then asked about the weather in your city, then asked it for advice about buying a new pen, then asked it for essay feedback.
A sane human compartmentalizes those things. But if the AI doesn’t have an explicitly clean slate, it’s effectively “thinking” (I know, it’s not really thinking, bear with me), “given our conversations about pancake recipes, the weather in Boise, and buying a new fountain pen, how would I re-write this essay in light of that?”
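To make the point above concrete, here’s a minimal Python sketch of why chat history “leaks” between tasks. It assumes an OpenAI-style chat API where every request carries the full message list; `send_to_model()` is a hypothetical stand-in, not a real API call.

```python
# Minimal sketch: in chat-style APIs, each request carries the ENTIRE
# message history, so earlier topics become part of the "prompt."
# send_to_model() is a hypothetical stand-in for a real API call.

def send_to_model(messages):
    # A real client would POST `messages` to the model; here we just
    # report what the model would actually "see" as its prompt.
    return f"model saw {len(messages)} prior messages"

conversation = []  # one long-running chat

for question in ["How do I make pancakes?",
                 "What's the weather in Boise?",
                 "Which fountain pen should I buy?",
                 "Please critique my essay."]:
    conversation.append({"role": "user", "content": question})
    reply = send_to_model(conversation)  # the whole history goes along
    conversation.append({"role": "assistant", "content": reply})

# By the essay question, the effective prompt still includes pancakes,
# weather, and pens. Starting fresh gives the model a clean slate:
fresh = [{"role": "user", "content": "Please critique my essay."}]
```

By the last turn, `conversation` holds eight messages spanning all four topics, which is exactly the “given our conversations about pancakes, the weather, and pens…” effect described above; `fresh` holds just one.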
If you’re switching tasks/goals - agreed. Start a new conversation.
My specific advice here was for when you’re unhappy with the results of a prompt. Rather than tell it that it did a bad job, write a new prompt that is more specific in the areas of your dissatisfaction. In this case, the immediately preceding context may help but the rational critique is unlikely to and can sometimes hurt.
And, to your point: if you have more than one “round” of this, yes, start a new conversation and (to Clarke’s point) apply what you’ve learned from seeing the unsatisfactory results to write a new prompt. Just don’t think you can teach the Chatbot to learn how to respond to future prompts in the narrow way we all naturally hope we can.
I know you were not replying to me, but to be clear, I always start with a new prompt unless I’m following up on a previous one. I use a separate chat for each unique task. That said, I had a lot of revised, extended, clarified prompts in this most recent effort. Once I realized things were going off the rails, I probably should’ve started a new prompt.
AGI (whatever it means) is probably far off in the future. Someone, possibly on this site, recommended a great podcast (Podcasts: Complexity Podcast | Santa Fe Institute). I’m currently reading Melanie Mitchell’s “Artificial Intelligence: A Guide for Thinking Humans” (https://a.co/d/fWJU0v0). The book is maybe a little “mathy,” but if you have some background in science and/or engineering, I think you will learn a lot.
This artificial intelligence thread has been simply amazing in its honesty.
Finally, disillusionment sets in, and the serious confusion and false hope of amateur practitioners are revealed.
I think the only definition of AGI that people actually care about is the idea that a given AI can think like a human.
The frustrations with ChatGPT usually stem from the fact that it doesn’t do that. Going back to OP’s list, most of those are mistakes that a human could make - but not mistakes that a human inevitably must make.
And when a human made a mistake, they could double back and fix only the broken parts.
Imagine a kid writing an essay, and a teacher giving feedback and asking for corrections, to which the kid responds by crumpling up the whole essay, throwing it away, and taking another shot from scratch. Every. Single. Time.
Or imagine a reporter writing an article about a plane crash where a hundred people died, with the editor giving the feedback that it was very “dark” - to which the reporter responds by crumpling up the whole article, throwing it away, and writing an essay riddled with jokes where, instead, all of the passengers survive.
Or you don’t even have to imagine this one - ask AI to write a 500-word essay. Watch how often you get 200 or 1000 words. A human might give you 410, or 590 - but they wouldn’t stop 40% of the way through the word count.
That’s basically where we’re at with AI.
I’m not a financial analyst, and certainly no prophet, but I believe we are in an AI tech bubble and we are in for a major market correction. Hold on tight; the ride may get rough!
Don’t worry, crypto will save us
I think you’re right in terms of the level of investment huge tech companies make vs the amount of money that investment will actually generate. But in terms of general everyday use, I don’t see AI going anywhere.
A tough part to solve is going to be the economics though. I’m not saying we won’t solve it, just that it will be tough.
As I mentioned, we’re spending a ton of money using AI and doing AI/ML research. We’re not spending anything even remotely approaching what it costs to deliver these products/services to us. It’s being heavily (insanely so) subsidized by VC and other investors afraid of missing the AI profit boom.
But what happens when a lot of normal people get used to using AI Chatbots for various tasks but it costs $200/mo for a subscription to use them because that’s closer to what it costs to deliver the functionality? At some point, the cost to supply and what the market will bear will even out, but I think there’s going to be a very ugly gap in the middle and I’m not sure how long that will last.
Maybe we’ll buy a humanoid unit from U.S. Robotics with built-in AI? Or, more economically, perhaps, we’ll buy a dedicated AI appliance? Or maybe a post-scarcity economy will save us and supply such “wants” for free? Or, one of the many imagined dark futures might be what we are forced to cope with. Science fiction has so far been very bad at predicting the future.
I agree with that. I was only speaking in terms of an impending market correction.
I can confidently say there is no way I’m spending that per month in this lifetime. I’ll do my work the “old fashioned way,” with my brain.
The scary thing is that these AI bots cost… Zero. So the bubble will definitely burst.
Just go to OpenRouter and see how many of the models are free.
Scary because many people are using them to replace work that previously belonged to humans.
Really?
I’m not saying we’re there yet, but “no way in my lifetime” is tough to sign on to.
We both have a rough sense of our hourly “bill rate.” If I can offload $200/mo or more of tasks from my brain to the robot, why wouldn’t I do that?
There’s plenty of other work for my brain to do each week.
For people who put language models to work on long-running tasks, especially ones involving token-expensive steps or calls to other services, $100–200/mo is already being spent readily. As supporting tools improve, those calls will become more efficient, but that will also lead to broader use of long-running tasks. There’s also willingness to spend on larger context windows, or on tools that mimic or compensate for the lack of them.
Your brain is probably worth more than mine!
Seriously, I cannot ever see myself paying $2,400/year (plus inflation-adjusted increases) to use AI. My brain may not be worth as much as yours, but it “ain’t too bad—yet,” so I’ll just rely on it and save the $2K. It’s got me this far…I’m pretty sure it will take me to the end.