Mar 3, 2026

Building the Network of Human Knowledge

Brendan Foody, Co-founder / CEO

There’s a paradox in AI right now. We are witnessing undeniable leaps in model performance. AI models are coding and reasoning at superhuman levels. Yet the massive economic promises of AI still haven’t materialized. Most companies are still relying on the same manual workflows they used five years ago.

Why hasn't the revolution hit the bottom line?

The answer is that models are smart, but they’re not trained to do the job. Today’s frontier models are like new college graduates – talented, but if you drop them into a company without onboarding, they'll fail. They don’t know how to navigate messy workflows, use the tools and apps your systems are built on, or recognize what ‘good’ looks like for your specific customers in your industry.

The future of work requires training agents with human expertise: from teaching models the fundamentals of professional work to encoding your company's specific context. In order to unlock their economic potential, agents will have to be taught to navigate the huge distribution of real-world workflows.

Meeting this new demand for human expertise at scale requires a new kind of infrastructure – a platform that understands what every person knows, what they’re capable of, and where their expertise can create the most value.

That is what we are building at Mercor: the definitive network for human knowledge and capabilities.

The Future of Foundation Model Training

While frontier models have improved dramatically in general reasoning, we’re in the first inning of the training required to unlock the full economic potential of knowledge work.

The surface area of knowledge work is vastly larger than most people realize. There are over 800 distinct occupations in the U.S. alone, each with dozens of core workflows. A corporate lawyer doesn't just "practice law" – they review NDAs, redline vendor contracts, flag regulatory risk in M&A filings, and negotiate indemnification clauses, all while toggling between a document management system, email, and a spreadsheet tracking deal terms. Multiply that complexity across every profession, and you begin to see the surface area that models need to master.

We've seen what happens when that surface area gets measured through evals. SWE-bench gave AI labs a structured way to evaluate coding agents against real GitHub issues. Once researchers could measure it, they could hillclimb it. However, software engineering benefits from a pre-existing corpus of open-source repositories and unit tests that make evaluation natural. The vast majority of knowledge work – law, medicine, finance, consulting, operations – has no equivalent.

The hyperscalers will need to pour hundreds of billions into constructing environments that mimic the full distribution of professional contexts, apps, and tasks. Because complex environments are far more labor-intensive to build than legacy training approaches, budgets are shifting dramatically toward human data, with many labs 10x-ing their environment budgets this year alone and already spending over $1 billion each.

While labs will continue to build synthetic data pipelines and training infrastructure in house, none of them want to recreate an enormous, operationally complex talent network from scratch. And they can’t afford to wait. For the companies whose core businesses depend on workspace productivity, this transformation is existential. Hundreds of billions in annual revenue hang in the balance if they fall behind or launch ineffective agents.

The Future of Enterprise Deployment

Even a model that has perfected simulated tasks in law or finance still lacks the one thing no foundation model can arrive with: your company's internal context. The new job of the knowledge worker will be to encode their expertise – the intuition that lives in their head – into agents.

At Mercor, we’re already training agents like employees. Our customer support representatives no longer respond redundantly to hundreds of similar tickets. They update documentation and evals to train an agent instead, giving themselves time to focus on higher-value tasks. And our fraud team doesn't manually catch people cheating in interviews. They build evals that teach an agent to help them spot the patterns.

This is what convergence looks like. When you can turn repetition into training, you get compounding returns: faster turnaround, consistent quality, and larger scale. The boring parts of the job get absorbed into the system, leaving people to focus on the work that actually requires strategy and judgment.

This creates a flywheel between the labs and enterprise. Benchmarks for real-world workflows make models robust enough for production. Once deployed, real enterprise usage will reveal the edge cases and reasoning gaps that no simulation could predict. That signal feeds back to the labs to refine core capabilities, and back to the enterprise to sharpen the agent's playbook – each cycle unlocking more complex workflows.

The Most Important Network Effect

The prevailing narrative is that AI will render human workers obsolete. The reality is more nuanced. Because of the long tail of human expertise, there will be tasks that humans can solve where models fail for decades to come. Models are improving quickly within the broad surface of knowledge work, but progress in the physical world will be far slower.

There are millions of different categories of professional work, and the same job has completely different requirements at different enterprises. As we move from chatbots to agents, and from demos to deployment, the limiting factor – the fuel that powers the entire system – will be human expertise.

So the real question isn’t whether human expertise is needed but who will build the infrastructure to find it, organize it, and deploy it at the scale AI demands.

That’s what Mercor is building. We've created a network of over 4 million vetted experts and over 1.9 million referrals – a self-reinforcing system that turns loose professional knowledge into structured, reliable data. Every project teaches us more about what our experts know and what they're capable of, so the next match is better than the last. We scaled from paying out nothing to over $2M/day within two years by answering society's most important question: what role will humans play in the AI economy?

Our unique ability to answer this question comes from the fact that we know who's an expert in what, all around the world. Uber built a driver network. Airbnb built a host network. Mercor is building a knowledge network – the definitive platform that understands everyone's expertise and the economic value they can deliver. This is the most important network effect that hasn't been built yet.