I don’t use AI to write for me. I use it to argue with me. More specifically: I use it to borrow other people’s brains on demand.

Lately, when I’m thinking about a topic (incident response, decision-making, whatever), I’ll feed AI a pile of stuff from people I respect. Blog posts. Talks. Essays. Sometimes opinions I don’t fully agree with. Then I ask it to apply those ways of thinking to the problem I’m stuck on. Not summarize. Not quote. Apply.

“What would this person notice here?” “What would they challenge?” “What assumption would they attack first?”

That’s where things get interesting.

A weird thing happens when you do this consistently. You start seeing the gaps in your own thinking much faster. The places where you’re being lazy. The places where you’re repeating something because it sounds right, not because you’ve really tested it.

AI is good at pattern remixing. Humans are good at judgment. The combo is surprisingly sharp.

Here’s a concrete example. I’ll be stuck on a system design question that feels muddy. Too many tradeoffs. Too many half-truths. I ask AI to “think like” a few different people I’ve read over the years and run the same problem through each lens. One angle will obsess over incentives. Another will zoom out to system dynamics. Another will basically say, “you’re solving the wrong problem.”

I don’t accept all of it. Some of it is wrong. Some of it is shallow. But almost every time, there’s something that makes me pause and rethink my framing. And that pause is the value.

The unexpected part: this doesn’t make my thinking more derivative. It makes it more distinct. Because once you’ve pressure-tested an idea against multiple perspectives, what’s left is usually something you actually believe, not just something you inherited. You start noticing when an argument only works under one worldview. You start seeing where ideas break when you move them out of their original context.

It’s like intellectual cross-training. You don’t lift weights so you can lift weights. You lift so you move better everywhere else.

One strong opinion: people who say “AI makes you lazy” are usually using it lazily. If you treat it like a vending machine for answers, yeah, your thinking will rot. If you treat it like a sparring partner that never gets tired, it does the opposite.

I still do the hard part myself. Deciding what matters. Choosing what to keep. Living with the consequences of being wrong. AI doesn’t get to do any of that for me.

What it does do is shorten the distance between “vague intuition” and “clear thought.” It helps me explore paths I wouldn’t naturally walk down. And sometimes it says something so off that it forces me to articulate why I disagree, which is another way of learning what I actually think.

That’s the real trick. AI doesn’t give me answers. It gives me better questions, faster. And then I still have to sit there and think.

March 25, 2026, Mill Valley. Still thinking.