I read the statement. It offers no specifics about exactly how AI could bring about the extinction of the human race.
It is the understatement of the century to say that the signatories know more about this issue than I ever could; heck, I can’t even create a decent shortcut! But I am increasingly skeptical of prophecies of human extinction. Theoretically, a pandemic could wipe us out, but that seems unlikely; if it were going to happen, I think it would have happened well before we invented vaccines. At one time, we were told that overpopulation would wipe us out; little account was taken of increases in agricultural productivity. And, if I recall correctly, a new ice age was once predicted. Now, climate change is heating the world and is claimed to be an existential threat. And in addition to climate change, we have AI to worry about. Oh, and on a smaller scale, we were told eggs were bad for us; now, they are healthy.
Climate change is creating, and will continue to create, significant challenges. However, I doubt it will destroy us. A pandemic could kill all of us, but as I said above, that was far more likely before modern medicine. I believe nuclear or biological warfare is the more plausible existential threat. We certainly don’t want to hand the nuclear launch codes to Skynet.
In my admittedly non-expert opinion, AI’s biggest threat is social disruption caused by deepfakes, manipulation, distrust, and the like. Besides, I asked ChatGPT if it was a threat, and I was assured it was not.
Thus, I’m skeptical but not cynical. I recognize the concern is sincere; I just happen to think it is overblown. There are lots of ways to destroy a computer and its software. We could nuke it or give it a computer virus. Sorry, I couldn’t resist!
Accordingly, I’m not losing any sleep over AI. I’m more concerned about our politicians and authoritarians than about a treacherous, menacing AI.