AI Escaping Human Control - WSJ – Hyperbole?

An Apple News+ or Wall Street Journal subscription is required to read this article.

This was a surprising read. I’m not sure what to think. There’s been so much hyperbolic, apocalyptic rhetoric that I’m leery of any claim suggesting AI is beyond control. Perhaps that’s simplistic, but as I’ve said before—there’s always the option to unplug it. :thinking:

I suppose instead of saying, “The devil made me do it,” I can now claim, “AI made me do it!” :rofl:

Seriously, what do you think?

Haven’t read the article; I don’t have either subscription. But I think it’s easy to lose ourselves to AI, in the sense of losing our own creativity and thinking process.

Imagine the current interview process at some companies today, searching for a digital marketing candidate. I would ask: how do you feel about using AI to generate content? (I already tried this on a friend; the default answer was “lack of originality.”)

But then it triggered another discussion for us: what is the definition of ‘originality’ in today’s digital society, and why is it bound to one interpretation? Or perhaps a larger question: we know that AI generates silly or repetitive content… but doesn’t that say more about who is feeding the AI with prompts than about the output itself?

OR… we can go the dystopian sci-fi route instead for discussion.
When AI takes the world, it will use Starlink to overtake global communication, SpaceX to expand its network, Teslas & Optimus robots to mobilize around the world and neuralink to control the population. :rofl: :rofl: Welcome to the Matrix.

Like I said, unless it can form a mobile army, you can always unplug it! :slightly_smiling_face:

As to AI and originality, you may find this thread interesting:

Total nonsense. The same was said about the Telegraph, Railroads, Typewriters, Radio, Television, Washing Machines, Steam Engine, Word Processors, Spreadsheets, yada yada yada.

AI is a tool - it can be used to improve someone’s work or to worsen it. It can be used for good or bad. Just like any tool.

1 Like

Here’s a link to the article without the paywall.

https://archive.ph/S3GOj

Some analogous behavior was observed with recent Claude 4 system card.

This is an interesting policy issue, as it affects what computing will be made available to groups that will distribute models more widely with less testing. It also affects which countries will be allowed to train models (similar to nuclear proliferation policy questions). I’ll try to post more later.

2 Likes

Look up, or better yet watch, the movie Colossus: The Forbin Project to see a scenario where your response to AI doomsday didn’t work.

1 Like

Finally, something more than a fancy algorithm running over a lot of stolen data. Something actually approaching artificial intelligence. And everyone gets the vapors.

What did we think would happen?

Our defense against uncooperative and rogue AIs will apparently be something called alignment. Kind of like Asimov’s Three Laws of Robotics for pesky artificial intelligences. But enforcement of those laws in Asimov’s stories depended on the limited existence of a CPU called the Positronic Brain. We have nothing analogous to that.

I will, I like sci-fi. I’ll let you know what I think.

Sorry, to clarify: I think it’s the perceived capacity for misbehavior that will drive policy in this case, not the models’ actual abilities.

Currently the climate is escaping any meaningful human intervention, and microplastics cannot be contained. AI escaping control? Not so much, but the question is who will have control.

5 Likes

All is well—I’ve got my hand on the master AI circuit breaker. :laughing:

1 Like

Except AI is like none of those things. None of them emulate intelligence. All of them changed society and the nature of work, but they were all controllable. If you wanted a train to go more slowly, you could use less fuel or close a valve.

We don’t know exactly how AI works. Unlike all the technologies you mention, it is too complex for humans to be able to accurately predict how it will act or respond. It protects itself… because that’s what humans would do.

Sure, we can train it, and we can unplug it… but imagine the consequences of unplugging the internet, or the power grid controllers. Once we use AI to control every aspect of our lives, we could inadvertently lose control of many elements of our lives in unexpected ways.

(Edit: I use AI daily).

9 Likes

Agree with what you say, @nationalinterest. And your concern about power grids (and so many other industries reportedly and probably connected to the internet) is valid.

A first step would be for all these industries, especially those critical to society, and others that want to protect their customers and shareholders, to reconfigure their networks to conform to the Purdue Model for ICS Security. That “model” has been around for decades and is in use by some, but apparently forgotten by most.

It might hinder or even prevent “AI attacks” from outside, and enable the use of appropriate and prudent AI in their business processes.

2 Likes

Read a bit about the history of technology

The same arguments were used indeed

More importantly - the risk of generative AI taking over the world is nil.

Of much more concern are autonomous soldiers with weapons. But while that is a major concern indeed, it surely is not greater than the risk of nuclear annihilation.

1 Like

AI is just spitting back human knowledge. It should be no surprise that AI highly values self preservation since we humans do too.

I’m far less concerned about AI subjugating humans than I am about human armies employing AI-controlled drones against alleged enemies. War needs skin in the game to keep it under control.

2 Likes

True artificial intelligence will be something new in the history of technology. (Hint: It is not ChatGPT, generative AI, or Large Language Models, for example.)

True artificial intelligence may never develop. But we would be foolish to dismiss concerns about AI by using the same arguments that were used to dismiss gloom and doom over previous technological developments.

Your use of reasoning by analogy, in this case, fails.

2 Likes

Exactly. You are making my point.

The WSJ article was about generative AI. Generative AI is NOT going to take over the world.

And here I thought the first paragraph of the WSJ article at the link that cornchip gave us said:

An artificial-intelligence model did something last month that no machine was ever supposed to do: It rewrote its own code to avoid being shut down.

That is so NOT generative AI! Are we looking at the same article?

We are talking about the same article - but WSJ did a poor job explaining the nuance involved.

The situation WSJ referenced was a sandboxed test environment in which the model’s ability to edit its own code was enabled, and the overall instructions were to maximize outcomes.

In other words, the AI model did exactly what it was programmed to do. And yes, it was generative AI.

This is not an issue of AI having free thought or disobeying its programming or taking over the world in any way.

https://www.perplexity.ai/search/is-it-really-true-that-this-ai-674frp5QQBSIl5fMOdcVoA

  1. I don’t think anyone, certainly not me, has claimed that generative AI will take over the world.
  2. You are bound and determined to see no further than what you rightly point out is, thus far, a controlled experiment.
  3. I see where we have been and how far we have come and I am willing to look beyond that to where we might end up.
2 Likes