An Apple News+ or Wall Street Journal subscription is required to read this article.
This was a surprising read. I’m not sure what to think. There’s been so much hyperbolic, apocalyptic rhetoric that I’m leery of any claim suggesting AI is beyond control. Perhaps that’s simplistic, but as I’ve said before—there’s always the option to unplug it.
I suppose instead of saying, “The devil made me do it,” I can now claim, “AI made me do it!”
I haven’t read the article; I don’t have either subscription. But I think it’s easy to lose ourselves to AI in the sense of our own creativity and thinking process.
Imagine the current interview process at some companies today, searching for a digital marketing candidate. I would ask: how do you feel about using AI to generate content? (I tried this on a friend already.) The default answer was “lack of originality.”
But that triggered another discussion for us: what is the definition of ‘originality’ in today’s digital society, and why is it bound to one interpretation? Or, a larger question: we know that AI generates silly or repetitive content, but doesn’t that say more about who is feeding the AI prompts than about the output itself?
Or we can take the dystopian sci-fi route instead for discussion.
When AI takes over the world, it will use Starlink to overtake global communication, SpaceX to expand its network, Teslas and Optimus robots to mobilize around the world, and Neuralink to control the population. Welcome to the Matrix.
Total nonsense. The same was said about the telegraph, railroads, typewriters, radio, television, washing machines, the steam engine, word processors, spreadsheets, yada yada yada.
AI is a tool: it can be used to improve someone’s work or to degrade it. It can be used for good or bad, just like any tool.
Some analogous behavior was observed in the recent Claude 4 system card.
This is an interesting policy issue, as it affects what computing will be made available to groups that will distribute models more widely with less testing. It also affects which countries will be allowed to train models (similar to nuclear proliferation policy questions). I’ll try to post more later.
Finally, something more than a fancy algorithm running over a lot of stolen data. Something actually approaching artificial intelligence. And everyone gets the vapors.
What did we think would happen?
Our defense against uncooperative and rogue AIs will apparently be something called alignment. Kind of like Asimov’s Three Laws of Robotics for pesky artificial intelligences. But enforcement of those laws in Asimov’s stories depended on every robot running on a single kind of hardware, the positronic brain. We have nothing analogous to that.
Currently the climate is escaping any meaningful human intervention, and microplastics cannot be contained. AI escaping control? Not so much, but the question is who will have control.
Except AI is like none of those things. None of them emulate intelligence. All of them changed society and the nature of work, but they were all controllable. If you wanted a train to go more slowly, you could use less fuel or close a valve.
We don’t know exactly how AI works. Unlike all the technologies you mention, it is too complex for humans to be able to accurately predict how it will act or respond. It protects itself… because that’s what humans would do.
Sure, we can train it, and we can unplug it… but imagine the consequences of unplugging the internet, or the power grid controllers. Once we use AI to control every aspect of our lives, we could inadvertently lose control of many elements of our lives in unexpected ways.
Agree with what you say, @nationalinterest. And your concern about power grids (and so many other industries reportedly and probably connected to the internet) is valid.
A first step would be for all these industries, especially those critical to society, and for any others that want to protect their customers and shareholders, to reconfigure their networks to conform to the Purdue Model for ICS security. That model has been around for decades and is in use by some, though apparently forgotten by most.
That might hinder or even prevent “AI attacks” from outside, and enable appropriate, prudent use of AI in their business processes.
More importantly, the risk of generative AI taking over the world is nil.
Of much more concern are autonomous soldiers with weapons. But while that is a major concern indeed, it surely is not greater than the risk of nuclear annihilation.
AI is just spitting back human knowledge. It should be no surprise that AI highly values self-preservation, since we humans do too.
I’m far less concerned about AI subjugating humans than I am about human armies employing AI-controlled drones against alleged enemies. War needs skin in the game to keep it under control.
True artificial intelligence will be something new in the history of technology. (Hint: It is not ChatGPT, generative AI, or Large Language Models, for example.)
True artificial intelligence may never develop. But we would be foolish to dismiss concerns about AI by using the same arguments that were used to dismiss gloom and doom over previous technological developments.
Your use of reasoning by analogy, in this case, fails.
We are talking about the same article, but the WSJ did a poor job explaining the nuance involved.
The situation the WSJ referenced was a sandboxed environment in which the model’s ability to edit its own code was enabled and the overall instructions were to maximize outcomes.
In other words, the AI model did exactly what it was programmed to do. And yes, it was generative AI.
This is not an issue of AI having free thought or disobeying its programming or taking over the world in any way.