I Let a Lobster Run My Jetson: What OpenClaw Taught Me About the Future of Computing
It’s midnight and I’m still throwing tasks at OpenClaw, the AI agent I set up on a Jetson AGX Orin. That’s when it manages me: “Go to sleep - I’ll have everything ready by morning.”
Fair enough. In this house, sleep is a scarce resource. The next morning, I get a message: my agent telling me what it did, what it's stuck on, and what it needs from me. If you've worked across timezones, you know the feeling. Except this isn't a colleague. It's a lobster-themed AI agent running in my office.
OpenClaw (formerly Clawdbot, briefly Moltbot) went from nothing to 200k+ GitHub stars and an OpenAI acquisition seemingly overnight. Half the timeline says it’s dangerous; the other half swears it changed their lives. I was somewhere in the middle, curious enough to try it, sane enough not to install it on my work laptop. So I put it on the Jetson. Here’s what happened.
"Set Yourself Up"
I told OpenClaw that I'm an ML researcher at Hugging Face, that it was running on a fresh Jetson AGX Orin, and that I wanted it to help me research real-time AI applications with Reachy Mini. Then I gave it a simple instruction: set yourself up.
As someone who has been messing with Linux for over 20 years, I was delighted. It instantly got into the nasty stuff. It dug through forums to figure out how to get PyTorch working with CUDA on the Jetson. It realized the 64GB of onboard storage wasn't going to cut it, so it browsed the web for compatible NVMe SSDs and gave me a shopping list. (It's wild that the software I installed chose which hardware it wanted.)
Two days later, I plugged the new drive in and told OpenClaw it was ready. I watched it find the disk, mount it, and configure auto-mount on boot, without me explicitly asking it to. Then it flagged that I needed to upgrade the Orin's OS or be stuck with Python 3.9 and CUDA 11.4. It explained the process and sent me on my way.
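For reference, the auto-mount it set up is plain Linux plumbing: an /etc/fstab entry keyed by the drive's UUID. A hypothetical version, with placeholder values, looks like this:

```
# /etc/fstab entry: auto-mount the NVMe drive at boot
# "nofail" keeps the Jetson booting even if the drive is missing
UUID=<drive-uuid>  /mnt/nvme  ext4  defaults,nofail  0  2
```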
And here's where it got funny. To actually flash the OS, I had to dig out my old Linux laptop and do everything the old-fashioned way: googling commands, hunting down dependencies, fighting with flashing tools. The contrast was clear. I'd gone from casually chatting with a machine to manually battling my terminal. That moment really drove home that this is a new paradigm in computer use.
Who are you?
So who is OpenClaw? Basically, it's three things:
- A collection of system prompts that define how the agent thinks and behaves
- A connector that wires your tools and services (Slack, email, Telegram, calendar) into the main LLM thread
- A timer that lets the agent act on its own schedule. Mine sends me a morning briefing every day and nudges me on work tasks twice a week.

The system prompts are the interesting part. OpenClaw uses seven markdown files for these: agents, heartbeat, identity, long term memory, soul, tools, and user. It also keeps a folder of daily memories and things it wants to remember. The system updates all of these on its own, so naturally I was curious to see what it wrote. Being self-obsessed, I went straight for USER.md:
The file is quite a bit longer than this. It has a lot of thoughts about me. The “Strategic decision filter” feels very aligned, but I certainly didn't tell it that. It figured that out from watching how I work. Then I noticed the “What makes an assistant valuable” section and thought: shouldn't that be part of your soul.md? So I asked it:
Then I went to look at soul.md. Most of my bot's soul is the stock version from OpenClaw, but there's a short section about things it's learned:
I asked it why it hadn't changed more things about its soul, and it said: “At this point, that's honest. I haven't had enough experience to rewrite the foundational stuff yet.” This feels meaningfully different from any other AI system I've tried.
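Putting the pieces together, my agent's workspace looks roughly like this. This is a sketch: exact filenames may vary by install, and I'm approximating the ones I didn't open myself:

```
workspace/
├── AGENTS.md       # how sub-agents behave
├── HEARTBEAT.md    # scheduled check-ins: briefings, nudges
├── IDENTITY.md     # who the agent is
├── MEMORY.md       # long-term memory
├── SOUL.md         # values and self-model (mostly stock, plus what it's learned)
├── TOOLS.md        # the services it's wired into
├── USER.md         # everything it has learned about me
└── memories/       # daily notes and things it wants to remember
```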
The new computer
Beyond the internal workings of OpenClaw, what I think is truly emerging is a new way to work with a computer. Let me walk you through my experience.

I connected it to Telegram so I could message it on the go. My first message was an audio note, which it transcribed using OpenAI's API. I told it I wanted to save on costs and asked if it could set up Parakeet to handle my voice messages instead. It spun up a local project, implemented a minimal transcription system, tested it, and configured itself to use it. Now, every time I send an audio note, it transcribes it with the tool it built for itself.

OpenClaw is a new paradigm in how you use a computer. You just talk to it. When I wanted to connect Slack, I didn't look for an integration guide. I asked it: "How can I set up Slack for you?" It told me step by step. I did it. It worked.

You can throw complex, messy streams of consciousness at it too. I'll send voice messages saying I want it to do several things, and it just delivers:
Even better, in this example, it told me it didn't think the first thing was a good idea, so it wouldn't waste our time on it. I can definitely tell it "no, do it" and then it does, but it's opinionated in ways that protect your time.

Compare this to how power users work with Claude Code. Its creator, Boris Cherny, regularly shares how he sets up his environment, and reading those posts feels like watching someone fly a commercial plane, twenty different levers being pulled at once, constantly refining the CLAUDE.md. OpenClaw abstracts all of that away. You don't configure it to be good. You just use it, and it gets better.

Or you steer it. I once noticed it spinning in circles debugging a script, so I said: "Hey, you're looping. Try taking notes on what you've already tried." It immediately added a new memory for itself: "I should be structured and take notes while debugging to avoid running in circles." From then on, it just did that, for the most part. Models are still oddly dumb sometimes.
Bring Your Own Model (But Claude Is Home)
OpenClaw is model-agnostic. You can route your morning briefing to a cheap model and your complex architecture tasks to the most expensive one. That said, in my experience, everything works dramatically better with Claude, specifically Opus. I believe this is because OpenClaw was built for Claude (it started life as Clawdbot) and grew around the model, with system prompts engineered for it. Claude in OpenClaw is genuinely independent. It takes initiative. By comparison, using Codex in this scaffold feels awkward; it constantly asks for feedback on menial stuff (“Can I download the model you asked me to run?”). No wonder OpenAI acquired OpenClaw: making sure their models excel in the framework everyone's adopting is a smart play.
Claude taking ownership and solving issues:

Codex failing to work and ghosting me:

Is it that expensive?
Yes.
Anthropic will ban you if you try to use their subscription with OpenClaw, so you need to use API tokens. Last week, I spent over $400 on Claude alone. Mental.
Personalisation isn't free. Every new session loads a large context with its memories, soul, and tools. On top of that, the system constantly spawns several sub-agents in parallel. It burns tokens. If you're using an expensive model, you can easily spend $50 in a day. People have burned $20 overnight because of a misconfigured heartbeat.
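To see how you get to $50 a day, here's a back-of-envelope sketch. The token counts are invented and the per-million-token prices are my assumptions, roughly in the ballpark of current Opus-class API pricing, not OpenClaw's actual accounting:

```python
def session_cost(input_tokens, output_tokens, in_price=15.0, out_price=75.0):
    """USD cost of one session, at assumed prices per million tokens."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# A session that reloads ~60k tokens of memories, soul, and tools,
# and produces ~4k tokens of output:
per_session = session_cost(60_000, 4_000)  # $1.20

# Forty such sessions in a day (heartbeats, chats, parallel sub-agents):
daily = 40 * per_session  # $48.00, in line with the $50/day figure
```

The point isn't the exact numbers; it's that the fixed per-session context dominates, so every heartbeat and sub-agent pays the full memory tax.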
But the good news is that we're very early, and this will become dramatically cheaper. Today, you can run it with a $10/month MiniMax subscription and get very far. I'm trying to reduce my costs by setting Haiku as my default and delegating simple tasks to Ollama locally. But the routing experience isn't magical yet.
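The routing I want is simple to state, even if the experience isn't magical yet. Here's a toy sketch of the policy; the model names and matching rules are mine for illustration, not OpenClaw's config:

```python
def pick_model(task: str, force_big: bool = False) -> str:
    """Route a task to the cheapest model that can plausibly handle it."""
    simple_verbs = ("transcribe", "summarize", "remind")
    if force_big:
        return "claude-opus"        # most capable, most expensive
    if any(task.lower().startswith(v) for v in simple_verbs):
        return "ollama-local"       # free: runs on the Jetson itself
    return "claude-haiku"           # cheap hosted default
```

In practice, the hard part is the classification itself: deciding which tasks are actually simple is a judgment call the router has to get right before the savings show up.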
For comparison, these costs would rank me #3 in Claude usage in our org this month. OpenClaw isn't uniquely expensive.
So What's Actually Going On?
We’re watching the computer stop being a passive tool and start behaving like an active collaborator: it remembers, it schedules, it routes tasks, it asks for input only when it needs it, and, sometimes, it tells you to go to bed.
Call it an agent. Call it a workflow. Call it a crustacean. The label doesn’t matter.
What matters is this: OpenClaw made my machine feel like a new kind of computer.