AI Agents Just Got a Normal Person Interface
Anthropic's Cowork, Perplexity's Comet, and ChatGPT in CarPlay all launched within weeks. AI agents are no longer a developer concept. Here's how to actually use them.
By Forge Team
Three things happened in the same month. Anthropic launched Cowork — a desktop agent that reads, creates, and manipulates your files without you writing a line of code. Perplexity shipped Comet, a browser with AI built into every page you visit. ChatGPT launched in Apple CarPlay.
None of these require a terminal or an API key. Until now, AI agents lived behind code editors and command lines. If you weren't technical, agents were something you read about, not something you used.
That just changed.
What actually happened
Cowork is a visual agent that sits on your desktop and works with your actual files — spreadsheets, documents, PDFs. Anthropic's design lead Jenny Wen demonstrated it turning raw user feedback into prioritised product ideas and generating automated weekly reports. No code. You describe the task, and the agent does it.
The AI newsletter The Neuron reported in April that non-technical salespeople at Anthropic itself had fully migrated from Claude Code (the developer tool) to Cowork because the visual interface made agent delegation accessible. When the company building the AI switches its own non-technical staff to the new tool, that tells you who it's actually for.
Perplexity's Comet points in the same direction. The AI assistant lives alongside your browsing and handles multi-step research tasks across tabs. A reported six-to-eighteen-fold increase in user questions suggests that when AI becomes the environment you work in — rather than a separate destination — people use it dramatically more.
What this means for your Tuesday morning
If you're a project manager, Cowork means you hand it a folder of status updates from five teams and describe the consolidated report you want. If you're a customer success lead, Comet means the AI is already in your browser, ready to synthesise across sources. If you're a marketing director, the barrier between "I know AI could help" and "AI is actually helping" just collapsed. The gap was never intelligence. It was interface.
But easier access doesn't mean easier results. An agent that can manipulate your files is powerful when directed well and dangerous when directed poorly. The skill is not "can I use this tool" — it's "can I define clearly what I want it to do."
The delegation skill
Harvard Business Review argued earlier this year that companies now need "agent managers" — people responsible for orchestrating how AI agents learn, collaborate, and work alongside humans. Product leader Claire Vo, profiled on Lenny Rachitsky's product management podcast, went from agent sceptic to running nine dedicated AI agents that manage parts of her business, write code, and close sales deals. The shift is real. But the framing hides the hard part: the quality of what comes back depends entirely on the quality of how you defined the task.
When you delegate to a human colleague, you've learned (often painfully) to be specific about the outcome, the constraints, and the definition of "done." When you delegate to an AI agent, the same skills apply — except the agent won't ask clarifying questions if your brief is vague. It'll just produce something confident and potentially wrong.
The professionals who'll get the most from Cowork, Comet, and similar tools are the ones who can:
- Define the scope before handing it off. What exactly should the agent do? What should it not touch? Where does it stop and you start?
- Build a simple workflow. If the task has more than one step, what order should they happen in? What does the agent need to complete step one before moving to step two?
- Review the output with clear criteria. What does "good" look like for this specific task? If you can't articulate it before the agent starts, you won't be able to evaluate what it produces.
The mistake to avoid
The most common mistake will be treating these tools like better chatbots: typing a vague request into Cowork the way you'd type into ChatGPT and expecting the agent to figure out what you meant. That produces disappointing results at scale — which is worse than disappointing results one response at a time.
The fix is the same skill that separates effective managers from ineffective ones: briefing clearly. If you're a financial analyst asking Cowork to process quarterly data, the difference between a useful output and a dangerous one is whether you specified which calculations, which assumptions, and which anomalies to flag for human review. If you're an HR director researching compensation benchmarks, the difference is whether you told it which geographies, roles, and sources to prioritise.
The tool got easier. The skill didn't.
What to try this week
Pick one recurring task that currently takes you more than thirty minutes. Something with files — a weekly report, a research synthesis, a document review. Don't start with your most important work. Start with something where a mediocre first pass is still useful.
Define the task as if you were briefing a new hire on their first day. What's the goal? What are the inputs? What does the output look like? What should they flag if they're unsure?
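To make those four questions concrete, here is what a brief might look like for the weekly-report example. Every specific in it — the folder name, the sections, the teams — is invented for illustration; substitute your own.

```
Goal: one consolidated status report for Monday's leadership meeting.
Inputs: the five team updates in the "weekly-updates" folder. Touch
nothing outside that folder.
Output: a one-page summary with three sections (progress, risks,
decisions needed), matching last week's format.
Flag if unsure: any conflicting numbers between teams, and any risk
raised by more than one team. List these for my review rather than
guessing.
```

Notice that every line answers one of the new-hire questions, and the last line gives the agent a way to surface uncertainty instead of papering over it.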
Then hand it to one of these tools. Cowork if you have it. Comet if you're a Perplexity user. ChatGPT if that's what you have access to.
The point isn't which tool you use. The point is practising the delegation skill before the tools get even more capable — because they will, and the people who learned to direct them early will have a significant advantage over those who are still figuring out what to ask for.