Happy Halloween everyone! Definitely one of my favorite holidays of the year. The wife and son and I are going as Nadja, Laszlo, and Baby Colin Robinson from the incomparable “What We Do In The Shadows”, now streaming on Hulu, for the uninitiated. Can’t wait! And now, here are some things that have spooked me this year, in ascending order of spookiness.
1) This is one of the coolest things I’ve seen so far that was made with A.I. Image Generation Tech:
Absolutely stunning imagery and a bizarre vision of where we might go. To be clear—this is not *entirely* an A.I.’s creation. Each of the images is generated from a human prompt and then selected by a human from a list of possible A.I. image-responses. Still, what a result! The whole pace and feel of it has what I’m starting to think of as a particularly “A.I.-creative” vibe. It’s iterative, sequential, exuberant, almost pressurized by its endless creative capacity.
This sort of hybrid creation—humans using A.I. “tools” that are so expansive they almost don’t seem like tools—is how more and more (and more) work is going to be done, and that’s true even if we never reach a “true” AGI. (AGI = Artificial General Intelligence, the holy grail.)
2) This isn’t “creepy” enough to take the top spot, but a reminder—there’s a good chance that by far the biggest thing that has happened this year is that we may have encountered alien spacecraft. At the very least, these phenomena were observed by experienced Navy pilots, recorded on cutting-edge technology, and have no terrestrial explanation under our current understanding of the laws of physics.
Definitely food for thought this holiday season! As far as I’m concerned, this is the most under-discussed story in the world right now. It’s not definitely aliens, but it’s not definitely-not-aliens either, and even that is momentous.
3) One important A.I. concept is “latent space”. This term has to do with representation. Consider dogs. How do you recognize a dog walking down the street? Approximate answer: You keep a general and abstract but compressed concept of what a dog is like, based on agglomerated input from many similar-but-not-identical experiences of seeing dogs. Then you apply that compressed concept in novel situations. That’s how you can recognize a dog you haven’t seen before, even though it doesn’t look exactly like any of the dogs you have actually seen—you’ve managed to collage together what *a* dog looks like, and you somehow make a very-accurate-but-not-infallible judgment about whether any given dog-like animal actually is a dog or is not-a-dog.
A.I. has to do similar representation. The rub is that today’s A.I.s are so complex, and contain so many parameters and so many representations, that humans cannot actually understand how they are put together. We can evaluate how *well* they are doing representation, but we can’t actually figure out exactly what they’re doing.
The region inside the A.I.’s mind where the representation is kept is called “the latent space”. It is a black box, from within which strange things are starting to emerge.
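If you want a feel for what “compressing experiences into a latent space” means in code, here is a toy sketch (mine, not from any real A.I. system, and vastly simpler than a deep network): it uses plain PCA, the simplest linear version of the idea, to squeeze fake five-number “dog measurements” down into a two-number latent code and then reconstruct them. The data and all the names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake data: 100 "dogs", each described by 5 correlated measurements.
# Under the hood the data really has only 2 hidden factors, plus noise.
hidden_factors = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 5))
dogs = hidden_factors @ mixing + 0.05 * rng.normal(size=(100, 5))

# "Encoder": project each dog onto the top-2 principal directions.
mean = dogs.mean(axis=0)
_, _, vt = np.linalg.svd(dogs - mean, full_matrices=False)

def encode(x):
    # 5 measurements in -> 2-number latent code out
    return (x - mean) @ vt[:2].T

def decode(z):
    # 2-number latent code in -> 5 reconstructed measurements out
    return z @ vt[:2] + mean

codes = encode(dogs)               # the dataset's tiny "latent space"
reconstructed = decode(codes)
error = np.abs(dogs - reconstructed).mean()

print(codes.shape)                 # (100, 2): each dog is now just 2 numbers
print("reconstruction error:", round(error, 3))  # small: structure survived
```

The spooky part is that real models do a wildly nonlinear version of this with billions of parameters, so unlike the two principal directions here, nobody can read off what their latent dimensions mean.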
Only follow this link if you do not scare easily.
END
Happy (early) Halloween! I’ll be back next Sunday with a spoooooooky original story and (hopefully if it works out) a picture of our costume!