Home | Updates | Now | LinkedIn | GitHub | Use of AI

On being honest about AI

By Magnus Hultberg • 31 March 2026

Last edited: 31 March 2026

I use AI for almost everything on this site. Writing, building, thinking. That's not a confession; it's just true.

Right now I'm spending a lot of time exploring what's possible with LLMs and agentic coding, specifically Claude.ai and Claude Code, because it fascinates me. There's a maker revolution happening around these tools that reminds me of how I felt when I first started tinkering with the web in the early nineties: the sense that something once limited to specialists was suddenly available to anyone curious enough to try.

That barrier coming down is the interesting part, not the technology itself.

What matters more than the fact of AI involvement is what you do with it. Anthropic recently published a useful guide to writing an AI diligence statement, and it crystallised something I'd been meaning to do for a while: put in writing how I work, and what I'm responsible for.

So I did. You can read it here.

The short version: nothing I publish is unedited AI output. Every piece of writing starts with me and ends with me editing it until it sounds like I actually said it. The code running my sites is almost entirely AI-generated, built against specifications I've written, and reviewed by sub-agents before anything is merged. I don't write code, but I do make all the decisions that matter.

The other thing worth saying plainly: all of this is for me. If something here is broken or wrong, I'm the one who bears the consequence. That's the limit I've set deliberately. I wouldn't use this approach for anything someone else was depending on, paying for, or trusting with anything sensitive.

Transparency about AI use is the responsible thing. Not because AI involvement diminishes the work, but because it's a tool. And the tool only adds value if you're honest about what it's doing and what you're doing.
