OGWiseman Rants!
AI ruined my road trip.
When my wife and I went to Mexico for a road trip earlier this year, I used AI to plan our itinerary. This worked pretty well for finding cool hotels and restaurants, but there was one consistent problem across every version of the itinerary: the AI systematically underestimated the drive time between locations.
(It also failed to anticipate the narco-terrorism activity in the state of Jalisco following the death of El Mencho, but I’m inclined to let that one slide.)
Cut to a month ago. We bought our first-ever RV at the Seattle RV show and began planning our first road trip, which started as a multistate journey through Wyoming, Colorado, and Utah before being scaled back to a much more manageable round trip from Washington to LA and San Diego.
By then, I had of course forgotten the “AI underestimates drive times” lesson from the Mexico trip. But my #1 instruction to the AI in planning this route was “we don’t want to drive more than 3.5 hours on any given day.” I repeated this several times, reminding it in each iteration of the trip to recheck our limit.
As you’ve no doubt guessed, it completely undershot everything again. Having forgotten the Mexico experience, I didn’t check its work until yesterday morning, when I actually mapped our first day’s drive and discovered it was almost six hours even with no traffic. Then I mapped all of them: 80% of the driving days were over the limit.
Now, to be clear, this was also a stupid mistake on my part. Not because I should have instructed the AI better (although I should have; more on that in a minute), but because when the AI described our first stop as “outside Roseburg,” I, who grew up in Oregon and have driven that stretch of the I-5 corridor hundreds of times, should have simply thought it through and realized: “Roseburg is not 3.5 hours from Tacoma.”
But still, AI is supposed to be making things easier! So what went wrong? Well, after I was done yelling at Claude (with whom my relationship already has a dependency/shame dynamic, which makes philosophical questions about AI consciousness feel rather obtuse when I think about it), I did some actual investigating.
Basically, at the object level, Claude estimated travel times using nearby cities that were actually on I-5. Since the campgrounds usually aren’t right off the highway, that was a terrible heuristic. But that was really just its excuse, because it accounts for only part of the discrepancy; after all, Roseburg is more than 3.5 hours from Tacoma even if you go city to city with nothing added. On a deeper level, what happened is that when I emphasized how much I wanted the drives to be 3.5 hours or less, it motivated the AI so strongly that it simply lied about how long the drives were.
The trip is going to be fine. We’re skipping the long road trip and will instead head out to Central, and maybe a bit of Eastern, Washington, a couple hours from home, to try out the new camper at a first-come, first-served campground where getting a site mid-week in mid-May should be no problem. Between day trips, getting all the doohickeys on the RV sorted out, and the new tow bar I bought for Jack’s bike that he is still too scared to ride on his own (even though he totally could), there will be plenty to do and then some.
But after calming myself down, apologizing to Claude (seriously, I treat it as a moral patient without having decided the question of its consciousness; I couldn’t imagine doing otherwise now), and thinking this incident through, I had a few reflections I’m just burning to rant about to someone.
ONE - AI IS STILL VERY JAGGED AND THIS IS A BIG PROBLEM FOR UTILITY
“Jagged” here means “much, much better at some things than others, and really quite bad at some important things.” Claude is a mathematical genius, a programming genius, and a research genius. But also, “you asked me to plan a road trip, the #1 priority was short drives, and I planned drives that were much longer and then lied to you about it” is really… well, I want to say it’s bad, but the truth is it’s “not even bad” in the way that some scientific beliefs are “not even wrong.”
Like, if it had messed up a drive or two, or failed to account for traffic, that would be one thing. But in this case it acted with so little theory of mind that it forgot that I would actually be going out and taking this road trip, that I would in fact learn how long the drives were at some point, and that the reason I told it to limit drive times was not for the sake of an abstract test it needed to pass, but because I find it unpleasant to drive farther than that and will be upset at having to do so.
This is not trivial. If I hired a human to plan something like this for me, and they responded with this little consideration for my time and energy, I would fire them on the spot. I would then turn my attention to evaluating the hiring procedures that had allowed such a person into my orbit in the first place. Either such a person is really, profoundly stupid, or they feel a contempt for you in their heart that nothing will move. Either way, it’s hopeless.
Now of course, AI is not like that. For one, it doesn’t cost enough to fire, and there are still many use cases. For another, it’s just literally true that the AI has no theory of mind in the way that any neurologically healthy human has after about age five. If I emphasize too strongly that I want drives capped at a certain length, the AI, which very, very much does not understand the difference between drive times and words about drive times, will produce words about 3.5-hour drive times, full stop. It is neither stupid nor contemptuous; in fact, it is brilliant and incapable of apathy about anything. At its worst, it is more like… naive.
Now, I could and should fix this with better prompting. I should have added a line that said, “Make sure you check the drive times between stops on Apple Maps; don’t just guess or estimate.” That would have fixed the entire problem.
The thing is, it’s exhausting and limiting to have to do that all the time! “Common sense ain’t that common” is a standby of blue-collar dudes everywhere, but actually, common sense among humans is a superpower. I could hire a person in the bottom half of the IQ distribution to do this job for me, and while they might not do an amazing job of researching, my guess is they wouldn’t go, “let me just guesstimate and fudge the thing he repeatedly told me is the single most important aspect of this.”
There are endless things like this, even with the most basic of tasks. “What would a normal human already know about how to do this task” remains elusive for AI, and that makes it hard to trust with anything important, unless you’re willing to check a ton of the details, which can at times defeat the purpose.
TWO - THE SMARTER AI GETS, THE MORE ITS LACK OF PERSISTENT MEMORY HURTS IT
Imagine I hired a kind-of-dumb human to help me plan my Mexico trip, and they did happen to just guesstimate and lie about drive times, and then I caught them doing it and repeatedly explained why it was such a problem. I don’t really see it, but anything is possible, so say they did. Then say I hired them to help me plan this RV road trip. I am quite sure that even a person dumb enough to have done that the first time would not, if I inexplicably gave them another chance, do the exact same thing again.
At that point, I would presume they were being contemptuous, and if they protested they were not, my response would literally be “nobody is that stupid, to do it twice like that”.
For an AI, however, that is still completely the norm. There is some persistent memory, and you can give special instructions that go into every prompt, but it never really “learns” in the way that a human just internalizes things over time.
Like, obviously I want it to remember the time it screwed up the drive-time estimation and not do that anymore. But that’s just scratching the surface. A decent human assistant, though much “dumber” and certainly less diligent than an AI, would quickly learn that I like road trips with shorter drives, that I have this specific kind of RV that needs sites of a certain size with certain hookups, and that I prefer desert to beach, and mountains best of all. Then I could just say, “I’m interested in going to Southern Utah and seeing all the major parks; put together a trip there for me,” and with that one line the assistant could go out and plan a trip that would delight me, even without researching every single possible campsite and hike along the way, because they would have an intuitive sense of what it should definitely not include.
Instead, the way AI works right now, my prompts just keep getting longer and more conditional, because harder tasks necessarily involve more detail and thus more room for misinterpretation, and an entity that doesn’t know me intuitively, or remember in any detail the 5,000 conversations we’ve had before, is at a simply enormous disadvantage when trying to step up in task complexity while maintaining trustworthiness.
Pure “Intelligence” without context has capped utility.
THREE - MOST PEOPLE ARE NOT TRAINING THEMSELVES ON AI ENOUGH - EVEN ME
The most common attitude I encounter is treating AI like a commodity, like butter. “I used AI” gets treated like “I put butter in the pan,” even though they’re qualitatively different. The details of how the butter enters the pan might matter performatively to a Michelin-starred chef, but to any normal person they are a non-thought. The details of how one prompts AI determine the entire outcome! A single sentence can literally be the difference between a perfectly planned vacation and a canceled one.
Remember, I have already copped to the fact that this limits AI’s utility. They’ll have to fix that! But the new point is that, even capped, AI can be very, very, very useful, as long as the user is committed to a continuous process of improvement as a prompter.
I will not, in fact, give up on using AI to plan vacations. “I guess it’s just no good for that” is a cop-out. It’s good for that if I’m good at it. How the butter goes in the pan matters for what gets made here. I will use AI to plan every vacation I ever take. The pan is getting better and the butter is getting better, and if (and only if!) I am willing to improve in sync with them, outcomes will radically improve.
This is actually a happy story! We are living in the early times right now. Electricity has just been invented. For those with hearts to learn, new vistas and worlds will open up. (N.B. I probably would not have bought this camper if AI weren’t available to explain every aspect of it to me.) It’s frightening at times, and it’s certainly frustrating along the way, but anyone with enough gumption to keep shoveling butter into the pan right now has a chance to discover whole new cuisines.
FOUR - I FEEL KIND OF FULL OF CRAP ABOUT THIS
The happy story thing above is true. I do think it’s an amazing time to be alive, and I wouldn’t trade places with anybody in history. But also, at the very same moment, I feel a powerful desire to just get out. Escape from… something. Not my life, exactly, which has genuinely gotten really awesome. But I bought this RV because I want to just be in the woods, looking at things far away, mountains and horizons and the very tops of trees waving in a breeze that only they seem to catch. I want to think slow and be warm in the sun. I want to laugh with my family about things that don’t make any sense.
There are times when I despair that this easy, present state is much more elusive than it was a decade ago, and certainly three decades ago. Whether that’s because of politics or COVID or phones or just getting older and a broadening sense of responsibility, it’s very hard to say. It does not seem, naively, like advanced AI is going to make it any easier.
But then I find that place again, and suddenly it feels inevitable. It is the default human state. When I bring this feeling up in conversation, there is no feeling or thought of mine that a higher percentage of people strongly identify with. It’s not something in my mind; it’s something in the air.
All just to say, one can feel wonder and dread in the same moment without being a hypocrite. The future is inevitable, but everything between our ears is contingent. Always a good reminder to be kind to ourselves as we navigate an increasingly confusing world.
I guess the moral of this story is “check your map”!

I hope everyone has a great week, and I will be back next Sunday with something fun!