This, I think, is the best summary I’ve read of the real and imagined dangers of “A.I.”: “This is my plea to all my colleagues. Think of people. People are the answer to the problems of bits.”
I can’t wait to read this article, because I completely agree with the premise (and the first few paragraphs that I’ve read so far). Thank you for sharing this.
I also think the tendency to mythologize (the author’s term) and then anthropomorphize mathematical models gets applied backwards to the human brain. There are scholars who seem to take the position that the brain is essentially a mathematical model: given the same inputs, all brains will produce the same output. Of course, you never experience this, because you can never get precisely the same inputs. On one side, this tendency elevates a mathematical formula to sentience; on the other, it abases (denies?) free will and thus diminishes sentience in us because of the brain’s supposed mathematical operation.
Okay, maybe that’s too philosophical, but it’s an observation that keeps recurring to me.
Not too philosophical at all. I love philosophy and related disciplines. You rightly point out the problems of reductionism and scientism (my terms) as applied to our thinking and conversations about “A.I.” I’ve never bought into the hype that “A.I.” will annihilate us; we are more likely to do that to ourselves than have it done to us.
That said, and as the author so eloquently points out, it is how we use “A.I.” or any technology that determines whether it is a force for good or bad.
While I don’t initially agree with that position, do you have a link or reference to share? It’s quite an interesting idea, and I’d like to explore it further.
This one is also a good read to complement what you shared. By Cal Newport.
Here are two. One is just an article based on the other, though, so they’re not two distinct analyses.
Here are three studies that seem to argue that the human brain operates predictably in response to inputs. I’m no scientist (neuro or otherwise), so I won’t be offended if you or someone more knowledgeable than me tells me that I’ve completely misunderstood the arguments/findings/conclusions of these studies.
- Naselaris, T., Kay, K. N., Nishimoto, S., & Gallant, J. L. (2011). Encoding and decoding in fMRI. NeuroImage, 56(2), 400–410. doi: 10.1016/j.neuroimage.2010.07.073
- Meyniel, F., Schlunegger, D., & Dehaene, S. (2015). The Sense of Confidence during Probabilistic Learning: A Normative Account. PLoS Computational Biology, 11(6), e1004305. doi: 10.1371/journal.pcbi.1004305
- Nguyen, M., Vanderwal, T., & Hasson, U. (2019). Shared understanding of narratives is correlated with shared neural responses. NeuroImage, 184, 161–170. doi: 10.1016/j.neuroimage.2018.09.010
Now that’s a nice rabbit hole. Thanks!
The circular spin is phrased as: How can we create something that we guess could ultimately destroy us, yet keep our hands entirely clean of blame when it does? The soundtrack from Kate Bush’s “Experiment IV” comes to mind.
It was music we were making here until …