Count Me Skeptical but Not a Cynic

I read the statement. There are no specifics about exactly how AI is capable of perpetrating the extinction of the human race.

It is the understatement of the century to state that the signatories know more about this issue than I’m capable of knowing; heck, I can’t even create a decent shortcut! :rofl: But I am increasingly skeptical of prophecies of human extinction. Theoretically, a pandemic could wipe us out, but that seems unlikely; if it were going to happen, I’d think it would have happened well before we invented vaccines. At one time, we were told that overpopulation would wipe us out; little account was taken of increases in agricultural productivity. And, if I recall correctly, a new ice age was predicted. Now, climate change is heating the world and is claimed to be an existential threat. And in addition to climate change, we have AI to worry us. Oh, and at a smaller scale, we were told eggs were bad for us; now, they are healthy. :slightly_smiling_face:

Climate change is creating, and will continue to create, significant challenges. However, I doubt it will destroy us. A pandemic could kill us all, but as I stated above, that was much more likely before modern medicine. I believe nuclear or biological warfare is the more likely existential threat. We certainly don’t want to give nuclear launch codes to Skynet.

In my admittedly non-expert opinion, AI’s biggest threat is the social disruption caused by deepfakes, manipulation, distrust, and the like. Besides, I asked ChatGPT if it was a threat, and I was assured it was not. :slightly_smiling_face:

Thus, I’m skeptical but not cynical. I recognize the perceived danger is sincere. I happen to think it is overblown. There are lots of ways to destroy a computer and its software. We could nuke it or give it a computer virus. Sorry, I couldn’t resist!

Accordingly, I’m not worrying or losing sleep over AI. I’m more concerned about our politicians and authoritarians than about a treacherous, menacing AI.


I think climate change will cause more catastrophic changes than AI ever will. In the next 5 to 10 years, life near the equatorial zone is going to get far more challenging, along with whatever the fallout is for the rest of the world from dying insects and oceans. I am with you; I am hard pressed to see how deepfake videos will change anything. People believe what they want to believe, evidence or otherwise. AI will just be another reason not to trust.


I can’t remember… were the dinosaurs skeptical or cynical?


That’s funny, but they were capable of neither, nor did they have the intellectual tools to respond to external threats, asteroids, or whatever took them out. :slightly_smiling_face:


Your position of skepticism without cynicism is similar to mine and reminded me of a couple of recent essays from Jaron Lanier and Noam Chomsky that have informed my own thinking on the topic (links below). I feel the harms from AI in the short to medium term come from biased data sets used in training, as well as from non-technical people believing that any given AI product is smarter than a human.

It’s still early days, and some fascinating work is being done in the area of AI ethics by researchers such as Dr. Timnit Gebru and Dr. Joy Buolamwini, among others. I found the books “Weapons of Math Destruction” by Cathy O’Neil and “Atlas of AI” by Kate Crawford to be quite informative, and I can recommend others I found useful if anyone is interested.

ChatGPT and its successors will not overrule us. But they certainly seem like a powerful step toward something that may resemble a general AI, and that raises the question of what could possibly happen. So the moment is right. Lanier, a forward thinker, hits the nail on the head: it’s not what the machines will do, it’s what we Homo sapiens will do with these machines.

But this is not new; there has been an ethical angle to every disruptive technology of the past: nuclear technology, genetic modification, you name it.

As an optimist, I would say that nukes, genomics, and so on quickly raised concerns and were subjected to government regulation. I’d bet that other technologies that did not seem so immediately risky (like, say, combustion engines or CFCs) have had less oversight and have thus done more damage to mankind.

As a pessimist, look at what Cambridge Analytica did with only social media data. What could be achievable by similar actors having access to AI technology in the future?



I would argue that history shows that technology in the hands of bad actors is certainly a concern. I wonder whether Oppenheimer would urge us to be cautious.


Extinction seems virtually impossible, excluding an all-life-on-Earth extinguishing event like a nearby supernova.

However, civilization-ending events seem possible.

I think about that sometimes. Homo sapiens emerged roughly 300,000 years ago.

Agriculture was developed 12,000 years ago, and the first known civilization was 5,000 years ago.

Humanity’s normal state is as hunter-gatherer tribes. Maybe the past 5,000–12,000 years were just a blip.


To the main point: I think it’s adorable that billionaires and centimillionaires are worried about AI causing human extinction. The rest of us are worried about jobs, having enough money to pay for food and healthcare, and becoming homeless.


Well, systems like these “AIs” that are capable of creating text, images, videos, and so on are able to start wars.
And that could easily destroy humanity!

It was, and is, a huge mistake to open these systems to public use! As long as these systems are able to produce deepfakes, they should not be available to the public, to governments, or to private companies.
They are on the same level of danger as autonomous military robots, or nuclear material in the possession of terrorists (although the latter would only damage a certain region).

Last week somebody shared a fake picture of a fire at the Pentagon, and the stock market dropped!
This will happen frequently very soon, and it will severely damage our economic system.
It is very easy right now to composite someone you want to get in trouble into child pornography, or to produce any other kind of material about people you do not like, to harm them in a serious way!

There are right now thousands of hackers sitting in Russia and North Korea (and some criminals in Nigeria) who are thankful for the stupidity of the developers of these “AI” systems in granting everyone access, because their job just became a piece of cake.


And this is the reason why those “adorable” billionaires are right to worry!


Misinformation is nothing new. It goes back thousands of years.

But AI isn’t causing any of those problems. It might exacerbate those problems, as it becomes an excuse for putting more people out of work.

The US Government used to consider cryptographic software a munition subject to arms-trafficking export controls. Then in 1991 Phil Zimmermann published PGP on the open web. If that had not happened, it is possible our files and transactions wouldn’t have the protection of strong encryption today.

For better or worse, AI has been open sourced, so there are models available to everyone.

List of Open Sourced Fine-Tuned Large Language Models (LLM) | by Sung Kim | Geek Culture | Medium

The open-source AI boom is built on Big Tech’s handouts. How long will it last? | MIT Technology Review

Personally I think the greatest threat may be economic. In much the same way that automation has eliminated a lot of manual labor jobs, current AI technology may eliminate a lot of white collar jobs.

Seems like an AI trained on LexisNexis and/or WestLaw, etc. databases could eliminate a lot of legal assistants. The same possibilities probably exist in other fields like accounting.

I’m not worried about social disruption and distrust. We were experts at that long before DOS 1.0 was released.

Lawyers might be safe for a while longer yet. :wink:


This guy may not be safe. Even if the judge lets him go with a warning, who’s going to hire him to handle their legal affairs?

AI is subject to the same rules of computing.



Do any of the AI engines currently cite the integrity of the data they are using and where it comes from?

AFAIK GPT-4 does not reveal its sources, but BingGPT does. That’s because Microsoft is feeding Bing’s search index to their GPT models.

I think citing sources should become a legal requirement, both to stop plagiarism (and allow remuneration for copyrighted material) and to demonstrate the integrity of the data. The citation should also be specific to the source, not some aggregate.

It would be good to see websites banning AI output that lacks proper citations.


And how should they know that the output was produced by an AI?