A thought experiment: What would you have to do to explain the computer revolution to a caveman? Assume that the caveman can speak modern English, so you can communicate in any level of detail you like, and assume that the caveman is not afraid of you and is interested in sitting down and listening to your explanation indefinitely. But he has no modern concepts. He knows how to hunt and fish and make simple tools, and literally nothing else.
Where do you begin? What are the first concepts you try to explain, and what is the order of concepts to explain after that? Electricity sounds like magic. “First, you harness the thing that comes down in lightning”? Please. So go back before that. Start with math. Take away and add rocks, show him the language of science. But of course, math doesn’t “really” explain electricity. Even if you gave him a thorough mathematical grounding, in pure words, I don’t think you could really get as far as electricity. It doesn’t sound believable and it’s too complex.
Of course that raises the question: Why is it too complex? In other words, why is mere explanation an inadequate tool for getting a caveman brain to comprehend modern concepts?
*
Moore’s Law states that the number of transistors on a chip—and with it, roughly, the speed and power of computers—doubles about every two years. That held true for decades, almost uncannily, and why it held so precisely is still debated.
Less famous but just as important for understanding modern society is Wright’s Law, which states that across a wide variety of manufacturing processes, a doubling of the cumulative output results in an approximately 15% reduction in the per-unit cost. This is a result of individual engineers shaving waste off individual steps within the production chain.
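For the numerically curious, the 15%-per-doubling claim is usually modeled as a power law. A minimal sketch, with purely illustrative numbers (the $100 first-unit cost and the 0.85 progress ratio are assumptions, not data from any real factory):

```python
# Wright's Law as a power law: each doubling of cumulative output
# multiplies per-unit cost by a fixed "progress ratio" (0.85 = a 15% cut).
import math

def wright_unit_cost(cumulative_units, first_unit_cost=100.0, progress_ratio=0.85):
    """Per-unit cost after `cumulative_units` total units have been produced."""
    exponent = math.log2(progress_ratio)  # negative: cost falls as output grows
    return first_unit_cost * cumulative_units ** exponent

# Each doubling shaves ~15% off: 100 -> 85 -> 72.25 -> ...
for n in [1, 2, 4, 8]:
    print(n, round(wright_unit_cost(n), 2))
```

Note that the curve is driven by *cumulative* output, not output per year—which is why young, fast-growing production lines see costs fall so quickly.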
Interestingly, though, those efficiency improvements tend to be non-transferable, meaning that, outside of copying a novel design wholesale, it’s difficult for the engineers to explain to someone trying to set up a new factory exactly “what they did” at the first factory that resulted in gains. There are examples, of course, but “all expertise is local” remains a stubbornly sticky concept.
Where does the lost knowledge go when factories try to teach each other how to operate?
*
It turns out that progress in A.I. is unevenly distributed towards knowledge work, and particularly towards creative work. Where it struggles is in motor skills and fine control—gardening, laying brick, doing a sink full of dishes without breaking any. Building a robot and a control A.I. that can do manual labor is still in the “barely there and still not great” stage, while at the same time a large language model can take a two-sentence prompt and return a story of publishable quality, or take as input a series of real estate listings and generate cap rates based on market comps—all in the span of a few moments.
Meanwhile, my not-quite-two-year-old can navigate solo up playground stairs and go down the spiral slide while holding a large rock clutched in each fist, but can’t articulate why on earth he feels the need to hold the rocks, and starts signing for “more” like an otter performing at SeaWorld before he even reaches the spiral’s second turn. Dummy wouldn’t know a good sci-fi story if Borges himself walked up and handed him one. The progression is the exact opposite of how an A.I. gains capabilities.
What about A.I. minds is different from human minds that causes this opposition?
*
I still don’t personally use A.I. that much. I ask it questions and receive excellent answers. It has in that sense replaced Google, at least partially. But other than that, what I mostly do is on a lark. I generate images and have a laugh. I brainstorm ideas, but they’re mostly retread ideas, and the genuinely good ones are still ideas I think of myself. I quiz the new models as they release to see how they perform on trick questions and some benchmarks that I find interesting and illuminating.
What I haven’t done, though, is integrate them into my normal workflow, not as a writer, nor a builder, nor a dad. I’ve been meaning to build websites, which is supposed to be faster and easier with A.I., but it still requires a big investment of time to figure out exactly how, and to this point I haven’t had that time available. My attention keeps getting drawn away from the world of bits and back into the world of atoms, which I think is good for my overall happiness.
It seems strange, though. A.I. is truly a science fiction power, and I am an outlier in my level of interest and (for a non-technical person) knowledge of the state of the art in A.I.
When will A.I. be so useful in my daily life that I feel obligated to and rewarded for making it a larger part of my consciousness?
*
I think the answer to all these questions is the same: Compression.
What the caveman lacks is not any individual concept, it is the *experience* of new concepts as a reference class. We moderns, despite all our resistance to change, at least have the conceptual apparatus to imagine that things *can* change quickly, wholly, and irreversibly. If an alien being descended and tried to explain to us a technology as far beyond us as the computer revolution is beyond a caveman, we would need a ton of background concepts explained, but most of us would at least understand the *idea* of “some new technology based on unknown natural laws”.
The reason we don’t take millennia to grasp the idea of such concepts is that we are capable of compression, which is the ability to translate previous experience into general principle.
The lost knowledge of efficient manufacturing is a casualty of the irreducible uncertainty of real-world situations, with their copious and chaotic initial conditions. In other words, when one really great factory gets going, and then tries to impart that knowledge to a second factory, the second factory is not actually exactly the same. (Unless they’re copying wholesale rather than retooling.) In a retooled factory, the space will be different, at least subtly. The air outside will be different. The workers they can hire will differ in unknown ways. “Making factories more efficient” is a lucrative business with a defined skill set, but each individual problem is different.
Management problems resist compression. They are not generalizable. (Although a lot of people have sold a lot of books promising otherwise!)
My son is able to walk before he can speak because he is able to compress movement into highly generalizable experience: He learns to go up and down a particular set of stairs in one to two tries, despite the technically infinite possibility space of approach angles, speeds, step heights, etc. However, learning one word doesn’t make him know other words. A.I.s are the opposite: They use a token prediction system for language that is based entirely on generalizable knowledge, but there’s no way to input what it “feels like” to move your body in the proper steps to weed and mulch a garden.
The infinite possibility space of what a garden can look like, and the distribution of weeds throughout it, resists the demands of compression in a way that language does not.
So it goes with my workflow and its lack of A.I. assistance. I’ve now realized that, outside of writing, I am fundamentally insulated from compressible work. I like to garden, I like to play music and sing, I like to build things and fix things, I like to work out, and I am not a slave to economic productivity. Besides which, I have no interest in using A.I. to write science fiction, even though I could. This activity (though it is precious to me, and I’m hugely grateful to anyone reading) is not lucrative enough to justify destroying my enjoyment of *having the idea* and bringing it to life in the painstaking way I’ve crafted over years of trying.
*
According to the definition I led with—compression is converting previous experience into general principle—A.I. in a very real sense just *is* compression. It is a face-to-face encounter with the collected knowledge of our civilization, poured down a processing-power funnel and squirted out a decorating tip like even, succulent frosting.
Alternate definition: Compression is the ability to iterate and adapt, at speed. A caveman cannot understand computers because he lacks the relevant experience to develop the emotional orientation to accept new things as not just possible but inevitable. My son learned to run in two years because he tried walking every which way with the threat of a fall hanging over his head for motivation. He saw me walking and got the idea, but he walks in a way that works for his body and it looks nothing like how I move.
A.I. is also compression in still another sense: the quicker uptake of information. When a medieval king’s army won a war, the king would routinely have to wait weeks or months for word of the victory to return to the capital. Now I can learn about the latest on Israel-Palestine in literal moments, and it’s taking place in a country I’ve never been to that nobody I know controls.
The secret to when A.I. will start to take over the life of somebody like me (and that will happen) is when it can automate “lower-skill” (that phrase makes my blood boil) work at speed and scale. (Yes, this post was partially inspired by my garden getting its first weeding of the new year. Bring on the weeding robots!)
Until then, it’ll just keep kicking sci-fi writers’ asses with very decent stories written on a compressed (like, instants) timeline.