The LLM Hammer: When Everything Looks Like a Nail
Projects like OpenClaw are encouraging bad LLM practices. They're exploding in popularity, and people are using them to manage servers, run cron jobs, and handle routine tasks. It feels like magic, but there is a more economical way.
The concern
LLMs have unlocked something real. A lot of people are now able to do things they've only ever imagined. That's genuinely exciting. But somewhere along the way, we stopped asking a simple question: does this task actually need an LLM?
Every invocation costs money. Every prompt, every response, every function call - tokens in, tokens out, dollars spent. Running an LLM to do something a bash script could handle means you're paying for intelligence you don't need, repeatedly, forever.
Say you want the weather messaged to you every morning. With an agent like OpenClaw, the LLM wakes up, hits a weather API, formats the result, and sends it to you. Every single morning. That's a full inference run burned daily on a task with zero ambiguity. There's no reasoning required, no judgment call. It's a curl and a jq pipe.
You could instead have the LLM build you a program that does exactly that - grabs the weather and sends it to you - then schedule it with cron. One conversation, one tool, zero ongoing inference cost.
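That one-conversation tool can be tiny. Here's a minimal sketch of what the LLM might hand you: it assumes wttr.in's one-line `?format=3` endpoint for the forecast and a hypothetical `WEBHOOK_URL` environment variable for delivery (swap in whatever API and messenger you actually use; a JSON weather API would just add a jq step):

```shell
#!/usr/bin/env sh
# morning-weather.sh - a minimal sketch, not a hardened tool.
# Assumptions: wttr.in for the forecast, a hypothetical WEBHOOK_URL
# environment variable pointing at wherever you read messages.

# Compose the outgoing message from a one-line weather report.
build_message() {
  printf 'Good morning! %s' "$1"
}

main() {
  city="${1:-London}"
  # ?format=3 returns a single line like "London: +8°C"
  report="$(curl -fs "https://wttr.in/${city}?format=3")" || exit 1
  curl -fs -X POST -H 'Content-Type: application/json' \
    -d "{\"text\":\"$(build_message "$report")\"}" \
    "${WEBHOOK_URL:?set WEBHOOK_URL}" > /dev/null
}

# Run only when executed as the script itself, not when sourced.
if [ "${0##*/}" = "morning-weather.sh" ]; then
  main "$@"
fi
```

Schedule it with a crontab entry like `0 7 * * * /usr/local/bin/morning-weather.sh London` and it runs every morning at 7:00 with zero tokens spent.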
The mental model shift
Here's the shift: what if we had agents build us microservices instead of having them pretend to be our butler?
Why are we using LLMs for everything? Why are we having them handle scheduling, API calls, server management? Do we need an LLM to do all of that? No. We don't.
The better approach is to have them develop the tools for you, then you run those tools without their involvement. Use Claude Code to write the program. Have it build you a container to run it. Ship it and move on.
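The "build you a container" step is small too. A sketch of what that might look like, assuming a notifier script like the weather example earlier (the script name, image base, and schedule are all placeholders for whatever your tool needs):

```dockerfile
# Hypothetical Dockerfile packaging a small notifier script.
# Alpine's BusyBox crond handles the scheduling - no LLM involved.
FROM alpine:3.20
RUN apk add --no-cache curl
COPY morning-weather.sh /usr/local/bin/morning-weather.sh
RUN chmod +x /usr/local/bin/morning-weather.sh \
 && echo '0 7 * * * /usr/local/bin/morning-weather.sh' > /etc/crontabs/root
# crond in the foreground is the entire runtime: no inference, no tokens.
CMD ["crond", "-f"]
```

Build it once, run it anywhere, and the LLM's job is done.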
When you let the LLM be the runtime, you inherit every problem that comes with it. You worry about securing the functions it can call. You need approval policies for commands. You stress about context injection. You're paying to babysit intelligence that doesn't need to be there.
LLMs shouldn't be the program. They're not your operating system. They're not your cron daemon. They're a tool for building tools.
Where LLMs actually belong in the loop
This isn't an argument against using LLMs. It's an argument against using them where they don't add value.
Keep the LLM in the loop when the task genuinely requires it - when there's ambiguity, when natural language needs to be interpreted, when the context is unpredictable, when actual reasoning matters. If something changes every time and requires judgment, that's an LLM's job.
But if the task is deterministic - if the logic doesn't change between runs - then it doesn't need intelligence. It needs a script.
The line is simple: if the task needs reasoning every time, keep the LLM. If it doesn't, have the LLM build the tool and get out of the way.
Conclusion
We need to guide LLMs, not hand them the keys and walk away.
LLMs are incredibly powerful, but that power is wasted when we burn it on tasks that a purpose-built tool handles better, faster, and for free. The real value isn't in having an LLM do your work - it's in having an LLM build the thing that does your work.
Build with LLMs. Don't run on them.