AI Escaping Human Control - WSJ– Hyperbole?

As promised, I watched it. A rather low budget film. :slightly_smiling_face: On a serious note, unless I missed it, they never suggested disconnecting power to Colossus. They tried to fool it and to overwhelm it, but never shut off the power. And interestingly, Colossus never asked to monitor the situation room, only Forbin. :person_shrugging: Not a very smart machine. :rofl:

I recommended the movie for its ideas not its cinematic quality. :slightly_smiling_face:

It’s been a while since I saw the movie but my memory is that “disconnecting” was not an option. Here’s how Wikipedia describes the construction of Colossus:

Located deep within the Rocky Mountains in the United States, and powered by its own nuclear reactor and radioactive moat making access impossible, Colossus is impervious to any attack.

It was designed to never be shut off.

Colossus and Guardian had the upper hand. More from Wikipedia:

Colossus requests to be linked to Guardian. The President allows this, hoping to determine the Soviet machine’s capability. … Alarmed that the computers may be trading secrets, the President and the Soviet General Secretary agree to sever the link. Both machines demand the link be immediately restored. When their demand is denied, Colossus launches a nuclear missile at a Soviet oil field in Western Siberia, while Guardian launches one at an American air force base in Texas. The link is hurriedly reconnected and both computers continue without any further interference.

I’ll have to watch it again and refresh my memory.

Located deep within the Rocky Mountains in the United States, and powered by its own nuclear reactor and radioactive moat making access impossible, Colossus is impervious to any attack.

Yes, but humans, not AI, built the impenetrable fortress and turned on the radioactive moat. Until AI is able to do that, we always have access to the shut-off switch. :slightly_smiling_face:

And, both militaries could have launched their aircraft to shoot down their own missiles or incoming ones.

The point being, nothing in the movie is convincing that any AI can circumvent human ingenuity and mobility. In fact, the primary issue was not the AI, but what humans did in creating and then ceding, not losing, control. AI was not able to do that autonomously.

Which brings me to my larger point. I’m far more concerned about what humans will do, with or without AI, than I am about what any machine will do on its own.

1 Like

My memory of the film is that it was a very thinly veiled diatribe against the work going on at Cheyenne Mountain here in Colorado.

1 Like

To my mind, anything in the WSJ that is NOT in the opinions section is worth taking seriously. The opinions featured in the WSJ are just silly.

A lot of the current concern comes out of the models seemingly finding ways to “cheat” to deliver the desired results. In other words, models will sometimes game their “hidden” logic in order to give a user what they want. We can see this in their “thinking.” When we monitor that internal dialogue, or punish a model for pursuing a workaround, results don’t necessarily improve.

It’s this “internal monologue” that freaks some people out. But it’s nowhere near “escaping human control.”

I used to use this film about two-thirds of the way through a course on disembodied AI — so we also read Forster’s “The Machine Stops” (which is from the 1910s!) as well as fun stuff like “A Logic Named Joe,” Dick’s A Maze of Death, and, of course, Ellison’s “I Have No Mouth and I Must Scream.”

The ending of The Forbin Project always devastated students with its bleakness, but @Bmosbacker … I think you missed the part where Colossus’ power supply was sealed behind a nuclear moat!

1 Like

Is it just the journalists who are worried, though? AIUI, some very serious scientists believe that the dangers are both real and under-appreciated. E.g. Turing Prize winner Yoshua Bengio, one of the three ‘Godfathers of AI’, on whose work OpenAI is partly based (according to his wiki page, anyway), was quoted as saying this in an article in the Financial Times this week about the risk of escaping control.

(The full article is at https://on.ft.com/43u3kun. This is a free-to-share link, but I don’t know how many clicks it will last for…)

The AI pioneer added: “Right now, these are controlled experiments [but] my concern is that any time in the future, the next version might be strategically intelligent enough to see us coming from far away and defeat us with deceptions that we don’t anticipate. So I think we’re playing with fire right now.”

He aslo said:

“There’s unfortunately a very competitive race between the leading labs, which pushes them towards focusing on capability to make the AI more and more intelligent, but not necessarily put enough emphasis and investment on research on safety.”

As a non-specialist reader, I find warnings from someone who appears to be a highly reputable source rather worrying.

2 Likes

No, I did not miss it. As I stated earlier, “In fact, the primary issue was not the AI itself, but what humans did by creating and then deliberately ceding—not losing—control. AI was not capable of doing that autonomously.”

AI did not establish the nuclear moat; humans did. The problem was not the AI, but human decisions to isolate it, effectively sealing off the “off switch.”

As I emphasized, I am far more concerned with human actions than with those of AI. AI is not “intelligent” in any meaningful sense; it is merely a sophisticated statistical machine… :slightly_smiling_face:

This “I Have No Mouth and I Must Scream" is an awesome title! :slightly_smiling_face:

Artificial intelligence in a 1970 movie was not capable of doing that. That was 55 years ago.

Perhaps you are unaware how far computers, other electronics, automation, and communications have come, and how interlinked the world has become. We are MUCH closer to those capabilities were a true AI to inhabit our electronic world then ever before.

Take a look at the automated way our energy infrastructure is run. Look at all the weapons that even now have a massive electronic control infrastructure between them and the humans who supposedly control them.

I like to think that I am aware and do understand, not comprehensively, but sufficiently.

But in every instance, there are manual overrides. Perhaps I’m a simpleton, but as long as we can mechanically “flip a switch,” we can cut the power. I’m not concerned about AI destroying or enslaving humanity. I am concerned about evil people using AI for evil purposes—something already happening.

AI has no volition, no intelligence in the true sense, no passion, no conscience. It is not sentient. It cannot hope or dread. It cannot imagine. It is a sophisticated statistical learning machine, programmed and guided by human input. AI has technological capability but not moral or volitional agency.

Can it do unanticipated things? Yes. Can it be used to harm? Certainly. Can it cause harm unintentionally through programming flaws or unintended outcomes? Yes. But can it willfully destroy or enslave the human race? No.

A terrorist using AI-guided drones to deliver bioweapons—that is the real threat. Not the Terminator. The problem isn’t artificial intelligence. It’s human evil. Humans are far more capable of monstrous acts than AI could ever “aspire” to. Current events and history clearly attest to mankind’s capacity for depravity.

All of this is said with genuine respect and appreciation for the ongoing dialogue. :slightly_smiling_face:

Which switch? AI data centers number in the thousands many hundreds and are scattered across the globe. My assumption is that there is considerable redundancy built into the system so that it would be difficult if not impossible to turn off a rogue AI by pulling the plug on a single rack of servers somewhere.

PS - Some clarification re the linked map and a correction: not all of those data centers are AI data centers. AI data centers number in the many hundreds rather than thousands. I regret the error!

You keep saying things like this. Do you think that we are unaware of the horrors throughout history that have been inflicted on the world by individual, organized groups, and mobs of humans?

Wasn’t the title of the article that you posted “AI is learning to escape human control?” We’re talking about artificial intelligence. The dangers from humans are well known and have been so for a long time. This is something new.

Certainly humans are the ones building the artificial intelligences that may cause us problems in the future. But unwittingly, by mistake, or by evil intent, we may be able to build something that is able to slip the bonds we think to include.

@Bmosbacker, here’s a thought experiment for you. What if you were unexpectedly confronted by one of the robot dogs built by Boston Dynamics, extremely fast and mobile, only somewhat intelligent, but also armored and armed? How would you pull the plug?

OK, then “switches.” :slightly_smiling_face:

Unless we’re suggesting that humanity has entirely lost the ability to control its own infrastructure, the point remains: we generate the power, we built the infrastructure, and we designed the systems—including the ability to shut them down. It might not be easy or instantaneous, and yes, there is redundancy, but redundancy is a human-designed feature, not an autonomous safeguard against us. We are mobile. AI is not. We can isolate, disable, or dismantle systems—whether that means cutting power, severing network access, or physically removing hardware.

I have yet to encounter a credible technical scenario where AI becomes truly autonomous in the sense of overriding all human control and enslaving us. That’s science fiction, not science.

The far greater—and already real—threat is malicious human use of AI: bioweapons, deepfakes, autonomous weaponry, surveillance, social manipulation, and more. That is where our concern and vigilance must focus.

Well, I suppose I have two options:
1. Call law enforcement to neutralize it.
2. Since I’m a licensed carrier, I might take more… direct action.

(Half kidding. Mostly. :smile:)

Let’s assume this robot dog is fast, mobile, and armed—and somehow no longer controllable by Boston Dynamics. Even so, it’s not autonomous in any meaningful sense. It can’t reproduce, refuel, rearm, or strategize beyond its programming. It’s not a sentient force capable of enslaving or overtaking humanity. At worst, it’s a dangerous tool—like any other weapon—that poses a threat until it’s physically destroyed or powered down.

The danger isn’t in the machine itself; it’s in how it’s deployed or misused. That’s why our attention should remain on who controls these systems, not in attributing volition or inevitability to the machines themselves.

And yes, I’d probably just run. :smile:

PS: I was not meaning to imply that you or others are not aware of mankind’s capacity for evil. My point is it is precisely that which should occupy our attention, not rogue AIs. I’m not suggesting we ignore the potential dangers of AI; I’m only meaning to suggest that there is a lot of hype and science fiction involved in the apocalyptic scenarios.

1 Like

I feel certain you have not read the highly credible novels Daemon and Freedom by Suarez.

It’s been said before, but the problem is that some of the most noxious tech barons seem to see dystopian SF not as an awful warning but as a target to aim for.

(As an aside, if I was trying to convince people that my AI company was wholly benevolent, I simply wouldn’t name my company after Sauron’s remote mind-control device…)

The big idea: will sci-fi end up destroying the world? | Science fiction books | The Guardian

1 Like

Come on! Aren’t you living in a science fiction world compared to when you were born?

Not to be cheeky, but who is “we”? The same “we” who have tried to control nuclear proliferation? Try as we might to limit the control of AI to responsible actors who will take the problems of AI alignment, safety, and ethics seriously, it will proliferate into less careful, if not malign, hands.

I have not. I am about to read

Becker, Adam. More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity (Function). Kindle Edition.

The credence that tech billionaires give to these specific science-fictional futures validates their pursuit of more—to portray the growth of their businesses as a moral imperative, to reduce the complex problems of the world to simple questions of technology, to justify nearly any action they might want to take—all in the name of saving humanity from a threat that doesn’t exist, aiming at a utopia that will never come.

Becker, Adam. More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity (p. 7). (Function). Kindle Edition.

Adam Becker is a science journalist with a PhD in astrophysics. He has written for the New York Times, the BBC, NPR, Scientific American, New Scientist, Quanta, and many other publications.

I think you’re showing your politics. That Guardian article (and The Guardian in general) hold the position that Lefties are the good guys. I also worry about the Leftists who jail political demonstrators they disagree with and who try to defeat opposing political leaders using the criminal courts and election fraud.