Engineering Management in the New Age of AI
Yep, of course, another article about AI. But I couldn’t not talk about it. At Spotify, we’ve been lucky to get access to all the tools the cool kids are using, and that changes the pace of how you learn, build, and lead.
Check out Spotify’s blog; we have cool stuff like https://engineering.atspotify.com/2025/11/spotifys-background-coding-agent-part-1.
I used these tools for a while as a Senior Engineer, but switching into management made things feel different. There’s a lot to catch up on, a lot of methodology, and a lot of language that isn’t always obvious when you’re learning on the fly. Google is often noisy, and sometimes I knew the concept but not the exact term to search for. At the same time, I noticed I was relying on muscle memory from years of working with great leaders, and while that helped me make an impact, it also left me feeling like I was operating on instinct more than clarity. Instinct is useful, but in management you need to understand the what and the why.
That is where AI became genuinely useful for me, not as a replacement for judgment, but as leverage across three areas: execution support, communication quality, and leadership learning.
AI as a Junior Engineer
Engineering is still my passion, and now that I’m not coding full-time, the fun of creation is back in a different way. I don’t need to be side by side with engineers writing production features; they are the brains of the operation, and pretending otherwise helps no one. But I do need enough proximity to the stack to understand their pain points, unblock work, and reduce execution risk.
So I keep Claude Code open and use it for repetitive and annoying tasks: metadata updates, cleanup work, deprecations, and small automation scripts that nobody wants to do but everyone benefits from once done. That alone has been one of the highest-leverage changes in my recent career. It improves operational hygiene, reduces decision latency on low-value work, and frees engineers to spend more time on problems that actually move us forward.
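To make that concrete, here is a minimal sketch of the kind of cleanup script I mean. Everything in it is hypothetical: the deprecated metadata key, its replacement, and the file layout are made up for illustration, not taken from any real codebase.

```python
from pathlib import Path

# Hypothetical deprecated metadata key and its replacement -- both made up
OLD_KEY = "owner_squad"
NEW_KEY = "owning_team"

def migrate_file(path: Path) -> bool:
    """Rename the deprecated key where it starts a line; return True if the file changed."""
    text = path.read_text()
    updated = "".join(
        line.replace(f"{OLD_KEY}:", f"{NEW_KEY}:", 1)
        if line.lstrip().startswith(f"{OLD_KEY}:")
        else line
        for line in text.splitlines(keepends=True)
    )
    if updated != text:
        path.write_text(updated)
        return True
    return False

if __name__ == "__main__":
    # Walk the repo and report how many config files were migrated
    changed = [p for p in Path(".").rglob("*.yaml") if migrate_file(p)]
    print(f"Updated {len(changed)} file(s)")
```

Nothing here is clever, and that is the point: it is exactly the kind of tedious, low-risk work an AI assistant can draft in a minute and a human can review in two.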
I also use it for small hacks and prototypes. Most never become platforms that “change the world,” but they sharpen my context, improve conversations with the team, and keep my technical instincts alive without stealing focus from management responsibilities.
AI as an EM Copilot
I also use AI directly for management work: drafting documents, improving writing clarity, pressure-testing narratives, and helping me quickly understand leadership concepts in plain language when my brain is overloaded. Honestly, the overload seems constant these days.
This has helped most in stakeholder alignment work. I can take a rough update, clarify the core message, and make the intended ask obvious before it reaches directors, peers, or partner teams. I can turn fuzzy thoughts into structured options with tradeoffs and risks. I can prepare for difficult conversations with better framing and better questions.
I’ve also built a few Claude Skills and leveraged ALL the MCP integrations I could find to accelerate context gathering and reduce busywork. The result is not “AI wrote my management”; it’s that AI shortened the path from idea to clear communication.
AI as a Thought Partner
One of the most practical uses has been leadership learning. I regularly use AI to explain terms in context, compare management approaches, and challenge my assumptions before I act. Sometimes I literally ask for ELI5 explanations just to clear noise and rebuild understanding from first principles.
Used this way, AI becomes a safe rehearsal space. You can test framing before a 1:1, simulate counterarguments before a review, and sharpen how you explain decisions tied to career ladders, priorities, or team boundaries. AI can help you prepare, but the responsibility is still yours; always remember that.
Guardrails
This only works if boundaries are clear. I treat AI as support, not authority.
I use AI for:
- Drafting and refining communication
- Structuring plans and tradeoffs
- Scaffolding low-risk technical work
- Learning and terminology clarification
I do not use AI for:
- Performance judgments or people decisions
- Sensitive feedback that should be written directly
- Anything that requires confidential details beyond policy boundaries
- Final decision-making responsibility
Everything important still gets human review, context checks, and ownership.
Failure Modes to Watch
There are real downsides if you overdo it. Tone can drift into generic “AI voice.” Documents can become polished but shallow. You can over-automate judgment and under-invest in human context. And if you stop staying close to the technical reality of your team, your management quality will decay even if your writing looks better.
The fix is simple but not automatic: keep your own point of view, keep your own accountability, and keep your own standards.
Final Thought
Old habits still matter. The same fundamentals that worked in strong engineering environments still work now: clarity, context, ownership, and follow-through. AI just compresses the distance between intent and execution if you use it well.
AI can draft, accelerate, and challenge your thinking, but judgment, trust, and accountability are still human responsibilities.