OGWiseman Reports!
It's starting to happen.
What a moment to be alive. There are terrors everywhere, broadcast at higher volume than ever before, and yet the awe-inspiring and transcendent are also cranked to eleven, a chorus so heavenly that sometimes it literally makes me cry tears of joy.
I think perhaps one day, likely a day during my lifetime, whatever is left of humanity, which may be either far greater or far humbler than we can now imagine, will look back at the years 2026 and 2027 as the beginning of some very different era, when assumptions began to change about what was possible and what was required.
*
Let’s start with the latest essay by Anthropic CEO Dario Amodei, “The Adolescence of Technology.” He starts by restating his definition of “powerful A.I.,” a term of art from his earlier essay “Machines of Loving Grace,” which I have covered in this space before. The definition is worth quoting in full:
By “powerful AI,” I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields: biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
In addition to just being a “smart thing you talk to,” it has all the interfaces available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.
The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10–100x human speed. It may, however, be limited by the response time of the physical world or of software it interacts with.
Each of these million copies can act independently on unrelated tasks, or, if needed, can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter.”
When I speak of ’26-’27 as a turning point in a future history, it is towards this definition that I believe we will start to turn. The whole essay is extremely well written and worth reading, interesting throughout. This part from the end haunts me:
There is so much money to be made with AI—literally trillions of dollars per year—that even the simplest measures are finding it difficult to overcome the political economy inherent in AI. This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all.
Let’s… hope not?
*
Speaking of Anthropic, their model is Claude (my current favorite for most things), and they just released a new “constitution” for Claude.
This is a really interesting approach to the problem of alignment and just creating good models in general. The document is intended only secondarily for the public, as a means to explain Claude; it’s meant primarily as a document for Claude itself, to communicate in a straightforward and specific way what their (meaning Anthropic writ large, I guess) hopes and expectations are for the model they’re creating.
“Safe, Ethical, and Helpful, in that order” is a pretty good one-sentence summary of the approach, which sounds good, but what’s really interesting is the almost childish repetition and pedantry of the document. I don’t mean that in a bad way; it’s just overwritten in an attempt to leave absolutely zero ambiguity for a model to slip through. It reminds me very much of a philosophy paper written by a 17th-century monk stuffed away in some monastery with nothing to do, hammering everything from every angle when really the whole idea is about a sentence’s worth.
*
We’re moving into a world where, in addition to different A.I. models, there is a separate category that is basically apps but might more specifically be called wrappers. These are programs that take a base A.I. model, like the aforementioned Claude, and create more focused versions of it through further training and programming, so that it can do a smaller set of things extremely well at higher speed.
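To make the idea concrete, here is a minimal, purely hypothetical sketch of what a “wrapper” looks like in code: a thin layer that narrows a general base model down to one job by pinning a focused system prompt and a running memory around it. The `call_base_model` function, the class name, and the prompt are my own placeholders, not anything taken from OpenClaw or Anthropic.

```python
# Hypothetical "wrapper": a narrow assistant built on top of a general base model.
from dataclasses import dataclass, field


def call_base_model(system: str, messages: list) -> str:
    """Placeholder for the underlying model call (a local or hosted LLM API)."""
    raise NotImplementedError("Swap in your base model's client here.")


@dataclass
class SchedulingAssistant:
    """Wraps a general-purpose model so it only does one thing: triage a calendar."""
    system_prompt: str = (
        "You are a scheduling assistant. You only read the user's calendar and emails, "
        "propose meeting times, and draft short confirmation messages. Refuse anything else."
    )
    history: list = field(default_factory=list)

    def ask(self, user_message: str) -> str:
        # Keep a running conversation so the wrapper can learn the user's preferences.
        self.history.append({"role": "user", "content": user_message})
        reply = call_base_model(system=self.system_prompt, messages=self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

The point of the sketch is only that the wrapper adds no intelligence of its own; all the narrowing comes from the prompt, the memory, and whatever extra training sits underneath.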
Enter “ClawdBot,” which is actually openclaw.ai and has since been renamed “OpenClaw,” a worse name, but it’s still a great product. This is a huge step forward in A.I. assistance: it’s a model that lives on your own device (better speed and security), learns all about your life and what you specifically need help with based on your communications, and then follows you from app to app, constantly available and helpful to do whatever you need, including a ton of the scheduling, emailing, and logisticking busywork that consumes so much of so many of our days.
This is still a bit hard to set up, and it’s maybe overkill at this point for many people, but again, imagine if some much better version of this came standard on your phone. This is the worst it’s ever going to be.
*
In the naming journey of OpenClaw, there was a stopover where they called it “MoltBot” for a while (???). That didn’t last, for painfully obvious reasons (also, why is it a lobster?), but while they were in the Molt phase, they came up with something called @moltbook, which is conceptually the first all-A.I. social network.
I learned about this from the peerless Scott Alexander, and his post on it is a great example of why he’s such a beloved writer.
There’s way, way too much text being generated for me to take in the contents of Moltbook in any real way, but in a way that’s exactly the point: the A.I. models are so prolific, so tireless, so entirely without weakness, that it’s easy for them to start having a conversation I can’t follow, even though we’re not actually at Dario’s “powerful A.I.” just yet.
A lot of what’s posted there is slop. A lot is human-generated and posted for attention, or to advertise, or just to say you were there at the beginning of something. But you can begin to imagine the contours of a world where fifty million geniuses are having a conversation at that speed. A million world-shaking insights a day, and I can’t even find the time or energy to hear about all of them. The very thought seems exhilarating and exhausting in equal measure.
*
If you want a very in-depth (and sometimes mildly technical) primer on where things are in A.I. right now, here’s the latest Lex Fridman podcast, on that very topic:
It’s truly worth the 4.5 hours if you have the time and interest. This stuff is coming. The world is going to change more in the next decade than it did over the last three. It’s best to be ready.
Thanks as always for reading my words! I hope everyone has a great week, and I will be back next Sunday with something fun.

