Entire: The Ex-GitHub CEO Just Raised $60M to Fix the Biggest Problem With AI-Generated Code

The Problem
AI coding agents — Claude Code, Gemini CLI, Copilot, Cursor — can now write entire features, refactor codebases, and ship pull requests. But there's a gap that's getting wider with every line of AI-generated code:
Nobody knows why the AI wrote what it wrote.
When a human developer writes code, there's context: the ticket, the Slack discussion, the PR description, the review comments. When an AI agent writes code, you get a diff. Maybe a commit message. But the reasoning — the prompts, the back-and-forth, the decisions the agent made along the way — is lost the moment the session ends.
That's fine for a one-off script. It's a nightmare for a production codebase where multiple developers and multiple AI agents are generating thousands of lines of code per day.
Enter Entire
Entire is the new company from Thomas Dohmke, who served as CEO of GitHub for four years and oversaw the rise and commercialization of GitHub Copilot. On February 11, 2026, the company announced a $60 million seed round at a $300 million valuation — the largest seed round ever for a developer tools startup, according to lead investor Felicis.
The pitch is straightforward: Entire is building the infrastructure layer for AI-generated code. Not another coding agent. Not another AI model. The connective tissue between AI agents and human developers.
Dohmke's framing: "We are not training models or building agents — we are integrating with them."
How It Works
Entire's platform has three core components:
1. Git-Compatible Database
A storage layer that unifies AI-produced code with its creation context. It's compatible with Git, so it works with existing workflows, but adds a metadata layer that Git doesn't have.
2. Universal Semantic Reasoning Layer
This allows multiple AI agents to collaborate on the same codebase without stepping on each other. If Claude Code refactors a module and Gemini CLI updates the tests, the reasoning layer tracks both changes and their relationship to each other.
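Entire hasn't published how this layer is implemented, but the idea can be sketched with a toy in-memory model. Everything below — the `AgentChange` record, the `related_to` links, the overlap heuristic — is a hypothetical illustration, not Entire's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentChange:
    """One agent's change to the codebase (hypothetical record shape)."""
    agent: str            # e.g. "claude-code", "gemini-cli"
    files: list[str]      # paths touched by this change
    summary: str          # the agent's stated intent
    related_to: list[int] = field(default_factory=list)  # indices of linked changes

class ContextLayer:
    """Toy shared ledger: each agent registers its changes, and changes
    that touch overlapping files get linked so a later agent can see
    related prior work instead of operating in a silo."""
    def __init__(self):
        self.changes: list[AgentChange] = []

    def record(self, change: AgentChange) -> int:
        for i, prior in enumerate(self.changes):
            if set(prior.files) & set(change.files):
                change.related_to.append(i)
        self.changes.append(change)
        return len(self.changes) - 1

layer = ContextLayer()
layer.record(AgentChange("claude-code", ["src/auth.py"],
                         "refactor auth module"))
i = layer.record(AgentChange("gemini-cli", ["src/auth.py", "tests/test_auth.py"],
                             "update tests for refactored auth"))
```

In this sketch, the Gemini CLI change is automatically linked back to the Claude Code refactor because both touch `src/auth.py` — the kind of relationship the article says the reasoning layer tracks.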
3. AI-Native Interface
A UI designed for agent-to-human workflows — reviewing what AI agents did, understanding why, and providing feedback that improves future outputs.
Checkpoints: The First Product
Entire's first release is an open-source tool called Checkpoints. It's already available and does something deceptively simple:
Every time an AI agent submits code, Checkpoints automatically captures and stores:
- The prompts that generated the code
- The full conversation transcript between developer and agent
- The reasoning and decisions the agent made
- Token usage and model metadata
- The resulting code changes
All of this is paired with the code itself, so when you look at a function six months later, you can trace back not just what changed, but why — and what instructions produced it.
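As a rough sketch, the captured bundle might look something like the record below. The field names, example values, and JSON pairing are illustrative assumptions — Checkpoints' real storage format isn't described in detail here:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Checkpoint:
    """Hypothetical shape of one captured AI coding session."""
    commit_sha: str          # the code change this context is paired with
    model: str               # model metadata
    prompts: list[str]       # the prompts that generated the code
    transcript: list[dict]   # full developer <-> agent conversation
    reasoning: str           # decisions the agent reported making
    tokens_used: int         # token usage

    def to_json(self) -> str:
        """Serialize so the context can be stored alongside the commit."""
        return json.dumps(asdict(self), indent=2)

cp = Checkpoint(
    commit_sha="a1b2c3d",
    model="example-model",
    prompts=["Refactor the auth module to use async sessions"],
    transcript=[{"role": "user",
                 "content": "Refactor the auth module to use async sessions"}],
    reasoning="Chose async sessions to match the existing request handlers.",
    tokens_used=18423,
)
```

The key design point is the `commit_sha` pairing: the session context travels with the code change it produced, so it survives long after the agent session itself ends.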
Launch integrations: Claude Code and Google's Gemini CLI, with OpenAI and GitHub support coming within weeks.
The Investors
The $60M round tells you a lot about who believes in this:
- Felicis Ventures (lead)
- Madrona Venture Group
- M12 (Microsoft's venture arm — notable given Dohmke's GitHub/Microsoft background)
- Basis Set
- Olivier Pomel (Datadog CEO)
- Garry Tan (Y Combinator CEO)
- Jerry Yang (former Yahoo CEO)
- Gergely Orosz (The Pragmatic Engineer)
- Theo Browne (t3.gg / create-t3-app)
Microsoft investing through M12 is especially interesting — they're effectively funding a tool that integrates with competitors' AI agents (Claude, Gemini) in addition to their own.
The Team
Entire is a 15-person, fully remote company staffed primarily with developers who previously worked at GitHub and Atlassian. It plans to expand ahead of a broader platform launch later in 2026.
Why This Matters
The Accountability Gap
Right now, when an AI agent introduces a bug, the debugging process looks like this:
1. Find the broken code
2. Check `git blame` — it says "ai-generated" or the developer who accepted it
3. Try to figure out what prompt produced it
4. Realize the session is gone
5. Start from scratch
With Checkpoints, step 3 becomes: "Look up the exact prompt, reasoning, and context that generated this code." That's the difference between 10 minutes and 2 hours of debugging.
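The lookup itself can be as simple as an index from commit SHA to stored context. A toy version, assuming records are serialized as JSON files named by SHA (the store layout and field names are hypothetical, not Checkpoints' actual mechanism):

```python
import json
import tempfile
from pathlib import Path

def save_context(store: Path, sha: str, record: dict) -> None:
    """Write the captured session context, keyed by the commit it produced."""
    store.mkdir(parents=True, exist_ok=True)
    (store / f"{sha}.json").write_text(json.dumps(record))

def lookup_context(store: Path, sha: str) -> dict:
    """git blame gives you the SHA; this returns the prompt and reasoning."""
    return json.loads((store / f"{sha}.json").read_text())

store = Path(tempfile.mkdtemp()) / "checkpoints"
save_context(store, "a1b2c3d", {
    "prompt": "Refactor the auth module to use async sessions",
    "reasoning": "Matched the existing request handlers.",
})
rec = lookup_context(store, "a1b2c3d")
```

With something like this in place, the dead end at step 4 above disappears: the SHA from `git blame` resolves directly to the prompt and reasoning that produced the code.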
The Collaboration Problem
As more teams run multiple AI agents on the same codebase — Claude for backend, Copilot for frontend, Gemini for tests — coordination becomes critical. Without a shared context layer, agents work in silos, make conflicting changes, and create merge conflicts that neither agent understands.
Entire's semantic reasoning layer is designed to solve exactly this: giving agents awareness of each other's work.
The Audit Trail
For regulated industries — finance, healthcare, defense — you need to explain why code exists and who (or what) wrote it. AI-generated code without provenance is a compliance nightmare. Entire creates the paper trail that auditors need.
What Developers Should Watch
Try Checkpoints
It's open-source and available now. If you're using Claude Code or Gemini CLI, you can start capturing AI context alongside your code today. It's low-risk and immediately useful.
Think About AI Code Provenance
As AI agents write more code, the question of "why was this written this way?" becomes increasingly important. Start building habits around documenting AI interactions — whether through Entire or your own process.
Watch the Platform Play
Entire is positioning itself as a neutral layer between all AI coding agents. If they succeed, they become the standard for AI code metadata — the way Git became the standard for version control. That's a powerful position.
The Bottom Line
GitHub Copilot showed that AI can write code. Entire is betting that the next big problem isn't generating code — it's understanding, auditing, and managing the code that AI generates.
Given that Dohmke literally ran the company that launched Copilot and watched this problem emerge firsthand, it's a well-informed bet.
The era of "just accept the AI's diff" is ending. The era of AI code provenance is starting.