A.I.-Doomerism has officially gone mainstream. Here's why you should be less scared than many people new to the topic now seem to be (although you should still be somewhat scared).
I think this sentence “The emotionally resonant, attention-grabbing threats are not the most dire ones” sums up the human response perfectly. We are still programmed to respond to the tiger chasing us as opposed to a glacier melting 8 cm per year. Thanks for a clear look at a complicated issue.
Exactly so! "AI might decide to turn evil" is just so much closer to "tiger chasing me" than "AI might cause unemployment and ennui in populations large enough to cause political instability on a grand scale". The environmental analogy is an apt one.
“These Chatbots are constructing their responses one word at a time, based on guessing the next word that would follow on the last word they wrote. They have no higher-level conception of “what the conversation is about” or “how do I come across”. They are literally just doing a really complicated version of “guess-the-next-word-in-the-sentence”.”
Now this does raise the question: if A.I. becomes sentient in the future, how will we be able to tell the difference between “this machine is alive” and “this machine is just guessing what its next word will be”?
The very question! I also think it goes the other way: If A.I. can be so very very human-like while just playing the word-guessing game, then how sure are we that humans are "sentient" in some categorically different way than "very clever and complicated reaction-havers"?
These things go right to the most basic philosophical questions about the mind-body problem, consciousness, etc., that people have been puzzling over for centuries.
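For anyone who wants to see that “guess-the-next-word” loop in concrete terms, here is a minimal Python sketch. It swaps in a toy bigram frequency table rather than a real neural network (the sample corpus, the `follows` table, and the `generate` function are all made up for illustration), but the overall shape is the point: pick a plausible next word based only on what came last, append it, repeat.

```python
# Toy illustration of the "guess-the-next-word" loop described in the quote above.
# Real chatbots predict subword tokens with a large neural network; this sketch
# uses simple bigram counts purely to show the shape of the generation loop.
from collections import Counter, defaultdict
import random

corpus = (
    "the model guesses the next word and then the next word after that "
    "until the model decides the sentence is done"
).split()

# Count, for each word, which words tend to follow it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Repeatedly pick a likely next word; no plan, no 'topic', just one word at a time."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed the last one.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Notice that nothing in the loop ever looks further back than the previous word, and nothing represents “what the sentence is about”; large language models condition on a much longer context, but the one-word-at-a-time structure is the same.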
Interesting stuff! A good start to a dystopian-world post.