Chatbot moratorium

Wozniak, Musk & more call for ‘out-of-control’ AI development pause | AppleInsider

Good for Wozniak, Musk, and the rest! Too many people have let their ill-founded enthusiasm lead them to expect far more than the word-prediction approach currently used by AI chatbots can ever give them.


Even the skeptics can see the world is getting warmer, but some are continuing to build coal-fired power plants even faster than before. If the world has had a motto during my lifetime, it’s been “We’ll worry about that later”.

This request is probably a good idea, but no one is going to risk falling behind.


Not if you look over a longer period of time than some are currently using for their comparisons.


I’ll only note that their concern is that these technologies are becoming too good too quickly, not that they’re useless.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.


Who used the word “useless?”

The history of technology says any proposed “moratorium” is DOA. Not going to happen.



That’s another discussion for another forum. I mentioned climate only to illustrate my point that those in authority have always “kicked the can down the road”.

Around the time I was finishing school the concern was a half degree drop in ground temperature between 1945 and 1968. I tend to take a longer view of things myself.

Likely meant only as a symbolic gesture. But as such gestures go, will probably have a greater impact than any post I’ve ever made on the web. :slightly_smiling_face:

I rather think that Elon Musk realized how far behind his own “A.I.” developments are, and that he is trying to get back into the game with this “moratorium”… :thinking:


I really need to start collecting all these “AI is just X/garbage/autocomplete” posts to save for a couple years. See how they age.


The jury is out on that one.

Check out this ShareGPT conversation

I read that around 50 years ago. The jury is probably dead. :grinning:


They probably died from all the acid rain that was gonna take us all out.


It’s almost like humans are bad at predicting the future or something. :slight_smile:


I enjoy making jokes about “this is how SkyNet started” as much as the next person, but a moratorium is unlikely to be implemented. And any country that does implement one will doom itself to falling exponentially behind the rest of the world.

Feel free to remind me of this post when our AI overlords require daily manual input and/or money before they let us on the internet each day…

My daughter gave me a framed cross-stitch that says “I, for one, welcome our new robot overlords” - in binary.


No, they are not!

But there are always economic reasons to hide these predictions from the public; and when they do become publicly available, to pay certain politicians to keep denying them, politicians with enough gullible followers to believe their lies!


Nobody is claiming that absolutely nobody in the past was able to create an accurate prediction of what might actually happen in the future.

But “it was an accurate prediction, in retrospect” isn’t useful. The past is littered with predictions - many of them from very smart people - of things that absolutely did not happen (see the many predictions from the 1950s about what the year 2000 would be like!)

People tend to be relatively poor at consistently making correct predictions about the future. The question almost always comes down to which predictions we should believe, and what we can do to alter undesired outcomes.

I would think that’s especially true with things like AI. We have no idea what we’re in for, because we don’t really have much to compare it to. There are a fair number of potentially undesirable outcomes here, so that brings us back to the topic of the OP - what would we do to mitigate those outcomes?

I think a moratorium would be sane IF it were likely to be respected. But I also don’t think it will be.


I well remember the “crisis of population explosion” in the 70s and the predicted “new ice age”. At this point in my life, I don’t let the prophecies of doom and the media fearmongers of “crisis” and “the extinction of the human race” (by asteroids, climate change, AI, population explosion and, more recently, concerns about population decline, nuclear annihilation, plagues, and more) worry me.

As individuals, we’re more likely to be taken out prematurely by someone texting and driving than by some global catastrophe. I can do little about predicted national and global catastrophes. I’ve learned to focus my energies and concerns on “my sphere of influence and corresponding responsibilities” and not let those things over which I have virtually no influence or control preoccupy my thinking or my emotions.


You should distinguish between someone sketching a vision of the future, like someone at the Smithsonian Institution (or Disney), and the results of scientific research.

That is a rather simple question to answer: we should believe the outcomes that are reached through scientific methods.
But there are far too many people out there who would rather believe the stupid things some politicians, or paid lobbyists, tell them than what they could research for themselves if they got to the bottom of that information.