OpenAI Just Killed a Billion-Dollar Product to Build Your Replacement. Here's What to Do About It.
OpenAI killed Sora and redirected compute toward automated researchers. That's a capital allocation signal about which knowledge work skills survive. Three specific ones to invest in now.
By Forge Team
OpenAI killed Sora. The consumer app shuts down April 26 (the API follows in September), and a reported billion-dollar partnership with Disney was scrapped — Disney reportedly learned of the decision less than an hour before the public announcement.
That's not a product pivot. That's a capital allocation decision, and capital allocation decisions tell you more about the future than any keynote.
What the money is moving toward
On iHeartPodcasts' Mostly Human (a podcast about how technology shapes daily life), OpenAI CEO Sam Altman said the company has "a few times in our history realized something really important is working, or about to work so well, that we have to stop a bunch of other projects." He described what's coming next as "the next generation of automated researchers and companies."
Read that phrase carefully: automated researchers. Not "research assistants." Not "tools that help you research faster." Automated researchers — AI that conducts research, synthesises findings, and produces recommendations on its own.
That's not a creative tool. That's a description of what a strategy analyst does. What a market researcher does. What a policy adviser, a management consultant, or an investment associate does. The compute that was generating video clips is being redirected toward replicating the core loop of knowledge work: gather, synthesise, recommend.
The two wrong reactions
There are exactly two ways to get this wrong.
Reaction one: panic. "AI is coming for all knowledge work. Nothing is safe. Start learning plumbing." This ignores the fact that every previous wave of automation created more cognitive work, not less. The spreadsheet didn't eliminate accountants. It eliminated the version of accounting that was pure calculation and created a version that required judgment about what the numbers meant.
Reaction two: dismissal. "Sora was overhyped anyway. This is just corporate reshuffling." This ignores that a company worth hundreds of billions just abandoned a product line with a major entertainment partnership attached because they think something else will be more valuable. When that much capital moves, it's worth understanding why.
The useful reaction is neither. It's asking: if AI is moving from creative assistant to cognitive worker, which parts of cognitive work can it still not do?
Three skills that protect you
AI that automates research can gather information, spot patterns, and generate recommendations. What it cannot do — structurally, not just "not yet" — comes down to three things.
1. Frame the problem
An automated researcher can answer a question. It cannot decide which question matters. If you're a product manager and you ask AI to research competitors, it will produce a thorough competitive landscape. But it won't tell you that the real threat isn't your competitors — it's that your customers are solving the problem themselves with a spreadsheet and a chatbot.
Framing is the skill of defining the right problem before anyone starts solving. AI is a powerful engine, but it doesn't know where to point itself. The person who frames well becomes more valuable as AI gets better at executing, not less.
2. Verify the output
Automated researchers will produce confident, well-structured analysis. Some of it will be wrong. Not obviously wrong — subtly wrong. A statistic that's real but outdated. A recommendation that makes sense in isolation but contradicts your organisation's risk appetite. A synthesis that weights one source too heavily because it appeared more frequently in training data.
If you're a finance lead reviewing an AI-generated market analysis, your job isn't to check whether it reads well. It will read beautifully. Your job is to check whether the claims hold up, the sources are current, and the conclusions follow from evidence rather than pattern-matching.
Verification isn't scepticism. It's a structured practice: checking claims against sources, testing logic for internal consistency, and asking "what would have to be true for this recommendation to be wrong?"
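Verification gets easier to sustain when you give it a shape. Here's a minimal sketch in Python of what that structured practice might look like written down. Everything in it is invented for illustration — the Claim fields, the 14% statistic, the sources — it's the shape of the habit, not real data or any particular tool's API.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """One factual claim pulled from an AI-generated analysis."""
    text: str                      # the claim as the AI stated it
    source: str                    # where the AI says it came from
    checked_against: str = ""      # the primary source you actually consulted
    holds_up: bool | None = None   # None until you have checked it
    failure_mode: str = ""         # answer to "what would make this wrong?"

def verification_pass(claims: list[Claim]) -> list[Claim]:
    """Return the claims that failed or were never checked.

    The useful output of a verification pass isn't a clean bill of
    health; it's the short list of claims you can't yet stand behind.
    """
    return [c for c in claims if c.holds_up is not True]

# Hypothetical example: one claim lifted from an AI market analysis.
claims = [
    Claim(
        text="EU SaaS spend grew 14% year over year",
        source="cited: 'industry report' (no year given)",
        checked_against="the publisher's own 2024 report",
        holds_up=False,            # real statistic, but from 2021
        failure_mode="accurate number, outdated year",
    ),
]
print(verification_pass(claims))
```

Notice what the failure_mode field forces: you can't mark a claim verified until you've articulated how it could be wrong. That's the "what would have to be true for this to be wrong?" question, made mandatory.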
3. Design the workflow
Even powerful automated researchers need someone to decide: where does AI work alone, where does a human review, and where does the human lead entirely? That's workflow design.
If you're an operations director deploying AI to handle vendor evaluations, someone needs to decide: does AI shortlist candidates, or does it make the final recommendation? Where is the human checkpoint? What happens when the AI's recommendation contradicts the team's instinct?
These aren't technical questions. They're judgment calls about risk, trust, and accountability. And they're the questions that determine whether AI deployment actually works or just creates faster mistakes.
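One way to make those judgment calls explicit is to write the boundary down. The sketch below is illustrative only — the stage names and the vendor-evaluation workflow are invented, not any real system — but it shows the property worth copying: every stage declares an owner, so each AI-to-human handoff is a deliberate checkpoint rather than an accident.

```python
# A minimal sketch of an explicit workflow boundary for vendor evaluation.
# Every name here is hypothetical; the point is that each stage declares
# who owns it, so the human checkpoint is a design decision.

WORKFLOW = [
    # (stage,                 owner,   what happens)
    ("gather vendor data",    "ai",    "AI collects pricing, terms, references"),
    ("shortlist candidates",  "ai",    "AI narrows the field to a handful"),
    ("review shortlist",      "human", "human checks for missing context, bias"),
    ("final recommendation",  "human", "human decides and owns the outcome"),
]

def handoff_points(workflow):
    """Yield each stage where ownership switches from AI to human.

    These are the checkpoints: if the AI's shortlist contradicts the
    team's instinct, this is where that conflict surfaces and gets
    resolved by a person instead of passing silently downstream.
    """
    for (prev_stage, prev_owner, _), (stage, owner, _) in zip(workflow, workflow[1:]):
        if prev_owner == "ai" and owner == "human":
            yield f"checkpoint: after '{prev_stage}', before '{stage}'"

for checkpoint in handoff_points(WORKFLOW):
    print(checkpoint)
```

If you find a stage whose owner you can't name, you've located the accountability gap before the deployment does.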
What to do this week
You don't need to overhaul your career. You need to start practising three things:
Frame before you prompt. Next time you're about to ask AI a question, spend sixty seconds writing down what problem you're actually trying to solve. Not the query — the problem. You'll be surprised how often the two don't match.
Verify one output properly. Take something AI produced for you this week. Don't just scan it. Check one specific claim against a primary source. Check whether the conclusion follows from the evidence presented. Time yourself — it takes less time than you think, and it trains a skill that compounds.
Draw a workflow boundary. Pick one process where you use AI and draw a line: here's where AI works, here's where I review, here's where I decide. If you can't draw that line clearly, you've found the thing to fix first.
The Sora shutdown isn't a warning that your job is disappearing. It's a signal about where the investment is going. The companies building AI are betting that the next valuable frontier is automating the research-synthesise-recommend loop. The professionals who practise framing, verification, and workflow design are the ones who'll direct those automated researchers rather than compete with them.
That's not a prediction. It's a capital allocation signal. And capital doesn't lie.
Put this into practice
Reading is a start, but skill comes from doing. Three drills to try now:
Define what AI handles vs. what stays human.
Practise splitting complex work into AI-ready and human-owned pieces.
Identify where human oversight belongs in an AI workflow.