Stop Perfecting Prompts. Start Managing Agents.
The hours you spent last year sharpening prompts are worth less this year. The skill that separates effective AI users in spring 2026 is management: scoping the task, writing the guardrails, and choosing where to step back in.
By Forge Team
That framing is not me being dramatic. Ethan Mollick put it plainly in March 2026: we have entered "an era of managing AIs, rather than working with them." Tools like Claude Code, OpenAI Codex, and OpenClaw let you assign hours of work and come back to finished output. The shift is not subtle.
Harvard Business Review went further in February, arguing that companies now need a specific role: "agent managers," people responsible for orchestrating how AI agents learn, collaborate, and hand work back to humans. It is not a formal job title yet. It is becoming one.
The most concrete version of the shift came from Claire Vo on Lenny Rachitsky's podcast in early April. Vo described going from OpenClaw skeptic to running nine dedicated AI agents that handle parts of her business, write code, close sales deals, and keep her schedule sane. Nine. Vo is not a prompt engineer. She is a manager who happens to manage software instead of people.
What this looks like at your desk
"Managing agents" sounds abstract until you make it specific to a role.
A marketing manager briefs a campaign agent the way she would brief a junior marketer: here is the audience, here is the offer, here are three examples of the tone we want, here is what must never appear in the copy, flag anything that touches legal language you haven't approved. That is not a prompt. That is a standing brief.
A financial analyst hands monthly variance analysis to an agent but bounds it: use last quarter's assumptions, flag any line that moves more than eight percent, and never propose a reclassification without listing two alternatives and the tradeoffs. The analyst still owns the judgement. The agent owns the mechanical work the analyst used to do on autopilot.
An HR director lets an agent screen the first hundred résumés for a role, but defines the rubric: these three criteria are hard requirements, these two are nice-to-have, surface anyone who does not fit the rubric cleanly so a human can decide. The director still interviews. She just starts with a shorter stack.
A product manager runs a weekly synthesis agent: it reads the support tickets, sales-call notes, and sprint retros, then produces a one-page summary in a fixed format. The PM reviews what surfaces and spends the saved hours on the conversations the agent cannot have.
In each case, the job description is unchanged. The daily texture changes because the person has learned to brief work instead of doing it.
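The analyst's bound above, flag any line that moves more than eight percent, is concrete enough to sketch in code. This is a minimal illustration, not any tool's real API; the function name and sample figures are hypothetical.

```python
THRESHOLD = 0.08  # flag any line that moves more than eight percent

def variance_flags(prior, current):
    """Return (line, prior, current) for lines whose relative change exceeds the threshold."""
    flags = []
    for line, prev in prior.items():
        curr = current.get(line, 0.0)
        if prev and abs(curr - prev) / abs(prev) > THRESHOLD:
            flags.append((line, prev, curr))
    return flags

# travel moved 12%, cloud 2.5%: only travel gets surfaced for human judgment
print(variance_flags({"travel": 100.0, "cloud": 200.0},
                     {"travel": 112.0, "cloud": 205.0}))
# → [('travel', 100.0, 112.0)]
```

The point of the sketch is the shape of the delegation: the threshold is the analyst's standing rule, written once, and everything it catches comes back for judgment instead of being decided by the agent.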
Scope an agent task the way a good manager would.
The three sub-skills you are actually learning
Agent management is not one skill. It is three, and most professionals are practising none of them on purpose.
Scoping. The most common failure is giving an agent a task that is too vague, too big, or too ambiguous. A good manager knows the difference between "help me with the Q2 plan" and "draft the Q2 plan as a one-page memo with three options, assuming the budget holds flat." The same difference applies to agents. The scoping sentence you write before delegating determines the quality of everything downstream.
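The difference between the two briefs above is structure, not length. One way to make that concrete is to treat a brief as a small spec with required fields; the `TaskBrief` structure here is a hypothetical sketch, not a real tool's format.

```python
from dataclasses import dataclass, field

@dataclass
class TaskBrief:
    goal: str                                       # one sentence, specific and bounded
    assumptions: list = field(default_factory=list)  # what the agent may take as given
    output_format: str = ""                          # what "done" looks like

# The vague version leaves every field but the goal empty.
vague = TaskBrief(goal="help me with the Q2 plan")

# The scoped version fills in what the vague one left to chance.
scoped = TaskBrief(
    goal="draft the Q2 plan as a one-page memo with three options",
    assumptions=["budget holds flat"],
    output_format="one-page memo, three options with tradeoffs",
)
```

Writing the brief as fields makes the missing pieces visible before you delegate: an empty `output_format` is a prompt for you, not the agent.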
Guardrails. Good delegation always includes what not to do. What must the agent never touch? What must it flag instead of deciding? What must it stop and ask about? A legal associate briefing an agent for contract review writes "never modify indemnity clauses without surfacing them first" for the same reason a CFO writes "never wire money without two approvals." Guardrails are not limits on the agent. They are what make the agent trustworthy enough to use at all.
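Guardrails reduce to three verdicts: never do it, do it only after a human signs off, or proceed. A minimal sketch, with hypothetical action names drawn from the examples above:

```python
# Standing rules, written once per workflow.
NEVER = {"modify_indemnity_clause", "wire_money"}   # block outright, surface to a human
FLAG = {"edit_legal_language"}                      # pause and ask before proceeding

def review(action):
    """Map a proposed agent action to a verdict under the standing rules."""
    if action in NEVER:
        return "block"
    if action in FLAG:
        return "flag"
    return "allow"

print(review("modify_indemnity_clause"))  # → block
print(review("summarize_contract"))       # → allow
```

Note what the rules do not contain: instructions for doing the work. They only mark the boundaries, which is exactly why they make the agent trustworthy rather than capable.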
Write the rules that make an agent safe to delegate to.
Checkpoints. The reviewer's job is to choose where to step in — not everywhere, not nowhere. A recruiter running an agent through a first-round screen might check the first ten outputs, spot-check every tenth after that, and always review borderline cases by hand. A consultant running a research agent might review sources before reviewing conclusions. Choosing your checkpoint pattern is the part most people skip, and it is the part that decides whether agent work saves time or creates hidden risk.
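The recruiter's pattern above, first ten outputs, every tenth after that, borderline cases always, is a sampling rule you can write down once. A hypothetical sketch:

```python
def needs_review(index, borderline=False):
    """Checkpoint rule: review the first ten, every tenth after, and all borderline cases."""
    if borderline:
        return True          # borderline cases always go to a human
    if index < 10:
        return True          # full review while the agent earns trust
    return index % 10 == 0   # spot-check every tenth thereafter

# Over forty outputs, the rule reviews 13: the first ten plus items 10, 20, 30.
checked = [i for i in range(40) if needs_review(i)]
```

Writing the rule down is the point: a checkpoint pattern you can state in three lines is one you can also tighten or loosen deliberately as the agent's track record grows.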
Design the checkpoint pattern for a real workflow.
Monday morning
Pick one recurring task you currently do yourself. Something weekly. Something that takes at least an hour. Write a two-hundred-word brief for it as if you were handing it to a new hire — goal, inputs, output format, what to flag, what never to touch. That brief is your first agent spec. Try it on whichever tool you have access to, review the output the way a manager reviews a junior's work, and rewrite the brief once based on what went wrong.
Nine agents is far away. One working brief is not. The professionals who come out of 2026 ahead will be the ones who ran the first brief this month instead of sharpening prompts for a skill that is already being absorbed into the job title above it.
Put this into practice
Reading is a start — but skill comes from doing. Try these drills now.