The “we” is humankind. You are actually making my point. “Try as we might to limit the control of AI to responsible actors.” It is the “actors,” specifically bad actors, not the AI, that we need to be concerned with. Safeguards are needed, not because AI will gain volition and break loose, but because we have not yet put safeguards in place against the bad actors.
Well now, you are making assumptions about how old I am!
I have got to run to a meeting, but it seems to me that until we are able to create an evil “Data” (I believe Data had an evil twin) or any Data for that matter, we have more important things to worry about.
I don’t think anything like Data is anywhere near.
I’ve seen some truly bizarre bits of irrelevant daftness in my time on the internet, but that’s right up there at Daft Con 1.
Well done.
Just trying to add a little balance. You introduced politics into this thread. The article you linked to slurred any number of successful people (and their fans) and tagged them as right-wing. The vehicle for doing this was a bizarre analysis of science fiction and what the article’s author imagined these people would want to do in the future based on their partial reading habits.
Doubling down on the daftness I see. I shall leave you to it.
I’m about halfway through Becker’s book. It’s good.
I believe that humankind is more than capable of creating a technology it can’t control, even with the best will in the world. We are even more capable of creating a technology whose externalities we can’t control. It’s the latter that keeps me up at night. In some cases—the various technologies that have led to climate change, for instance—we couldn’t (or wouldn’t) see the externalities until it was too late to do more than hope we could somehow mitigate their worst effects. (Don’t get me started on factory farming and the role it plays in the crisis of antibiotic resistance.)
Entirely? No. More than I’d like? Most certainly.
It’s the negative externalities that always get us in the end.
https://www.askwoody.com/newsletter/free-edition-what-goes-on-inside-an-llm/#ai
I didn’t really know where to put this article, but it seems that “AI getting out of control” and “we don’t really know what goes on inside of an AI, but this is the way to start figuring it out” go together in thought.
Perhaps we can start designing an LLM that we can control because it’s more rigorous in its internal thought. Captain Kirk on Star Trek was known to disable artificial intelligences by using logic bombs. I don’t think you can use logic bombs on modern LLMs.
So interesting - many thanks for the link!
I highlighted this sentence as one of the key takeaways:
That is why we must not view LLMs as repositories of knowledge. They are repositories of ability to construct answers.