Happy NFL Kickoff Day to all who celebrate! I will be changing diapers and watching football most of the day, darting about doing chores during timeouts (and I can’t wait), but you should use the commercial breaks to engage in some intellectual enrichment, and if you’re so inclined, do I ever have the links for you to explore!
The man who married a hologram can no longer communicate with his virtual wife. I’m filing this one under “stories I wish I’d written”, except it actually happened, and it’s only the beginning. Probably no one reading this will ever marry a hologram, but almost everyone reading this will, in their lifetime, have relationships with A.I. entities human enough to stir genuine emotion in us. We must start preparing ourselves.
Images from the James Webb Space Telescope have arrived. The deployment of this successor to Hubble was far from a certain thing even at launch, but they did it, and humanity has new eyes. It’s amazing how much of what we take for granted now is this way—an uncertain thing until it happened. Dive in and get your mind blown. The Southern Ring Nebula is a particular favorite of mine.
A man posing as an A.I. in a cage convinced his guard to let him out, even though the guard stood to earn money for keeping him caged. Eliezer Yudkowsky is the prophet of doom among A.I. thinkers. In short, he says we’re all screwed. To bolster his point, he devised an experiment: Through a text conversation, he would play-act as an A.I. cut off from any internet connection, and thus contained. His partner (he chose unfriendly interlocutors—public intellectuals who disagree with him) played a guard who simply had to refuse to let him out. If the guard resisted, they’d earn a meaningful amount of money. And yet, Eliezer was able to talk his way out of the cage. He refuses to disclose what he said to convince them, and they are sworn to secrecy. The point isn’t what he said—it’s to think about what a being smarter than you could convince you is necessary and right. If Eliezer can do it, a superintelligent A.I. could *definitely* do it.
If you’re not aware of Effective Altruism (EA), you should follow this link. EA is—in a very, very general nutshell—the idea that you have a moral obligation not just to do charity, but to make sure your charity has the maximum possible impact. So as an example, it’s immoral to donate to your college’s alumni fund (which I have done, not judging) when for the same money you could buy anti-malaria bednets for someone in Africa and literally save their life. It’s a morally forceful position, and as a philosophy its influence on society punches way above its popularity. In other words: A *lot* of billionaires and politicians and thought leaders are paying attention to this stuff, and it’s already shaping the direction of our society. Pays to be aware!
Hope you find something worthwhile in the above, enjoy your Sunday, and I will be back next week with another original story!