OpenAI and Anthropic No Longer Sell Only Models: They Sell Deployment
OpenAI and Anthropic are investing in consulting, partners, and Forward Deployed Engineers because enterprise AI value is won in the last mile: data, workflows, permissions, adoption, and ROI.
For the last few years, many companies have treated artificial intelligence as a tool you buy, hand to the team, and expect to generate productivity on its own. Reality has been less comfortable: interesting pilots, promising demos, plenty of enthusiasm, and also many projects that never reach production.
That is why the recent moves from OpenAI and Anthropic matter so much. They are not only saying "our models are better." They are saying something more pragmatic: enterprise AI value is won in the last mile.
That last mile is not the model. It is the workflow. It is connecting AI to real data, permissions, CRM, ERP, internal policies, human teams, metrics, and operational governance. It is exactly the space where an AI agency stops selling promises and starts building systems a company can use every day.
What Happened
On May 11, 2026, OpenAI announced the OpenAI Deployment Company, a firm designed to help organizations build and deploy AI systems across critical work. The move came with more than $4 billion in initial investment and an agreement to acquire Tomoro, an applied AI consulting and engineering firm with a team of around 150 deployment specialists.
Earlier, on February 23, 2026, OpenAI had already announced its Frontier Alliances with BCG, McKinsey, Accenture, and Capgemini to help enterprises define strategy, integrate systems, redesign workflows, and scale AI agents.
Anthropic made a similar move on May 4, 2026: it announced a new enterprise AI services company with Blackstone, Hellman & Friedman, and Goldman Sachs. Its stated goal is to bring Claude into the core operations of mid-sized companies, with applied AI engineers working alongside the new firm's engineering team.
Anthropic has also reinforced its ecosystem with the Claude Partner Network, including an initial $100 million investment for 2026 focused on training, technical support, certifications, and joint market development.
The signal is clear: AI labs want to be much closer to implementation.
Access to AI Is Not the Main Problem
Almost any company can now open ChatGPT, Claude, Gemini, or Copilot. Access is no longer the main barrier.
The real barrier appears when a company asks:
- Which specific process are we improving?
- What data does the AI need?
- Who can approve its actions?
- How do we prevent hallucinated information?
- How do we measure savings or revenue impact?
- How does it integrate with the CRM, ERP, or email?
- What happens when the system makes a mistake?
This is where many projects stall. A generic chatbot can answer questions. But a system that checks orders, generates proposals, updates a CRM, reviews documents, or prioritizes incidents needs architecture, security, and operational judgment.
If you want to go deeper into that distinction, we explain it in our guide to AI agents for SMEs: an agent does not just chat; it executes. And when it executes, it needs limits.
What the "Last Mile" Means in Enterprise AI
The last mile is everything that turns AI from a demo into part of the business.
It includes:
- Choosing use cases with real impact
- Redesigning workflows before automating them
- Connecting internal data securely
- Defining permissions, identity, and audit trails
- Creating quality evaluations
- Integrating tools such as CRM, ERP, email, ticketing, or document bases
- Training the teams that will use the system
- Measuring results with financial and operational indicators
- Maintaining the system as workflows, models, and data change
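Of these items, "creating quality evaluations" is the one teams most often skip. A useful first version can be very small: a set of golden questions with facts the answer must contain, run against the system before every change. The questions, expected facts, and scoring rule below are invented examples, not a standard.

```python
# Hedged sketch of a quality evaluation harness: golden questions with
# required facts. Cases are invented for illustration.

GOLDEN_CASES = [
    {"question": "What is the return window?", "must_contain": ["30 days"]},
    {"question": "Who approves large discounts?", "must_contain": ["sales manager"]},
]

def evaluate(answer_fn) -> float:
    """Return the fraction of golden cases whose answer contains every
    required fact — a crude but practical production smoke test."""
    passed = 0
    for case in GOLDEN_CASES:
        answer = answer_fn(case["question"]).lower()
        if all(fact.lower() in answer for fact in case["must_contain"]):
            passed += 1
    return passed / len(GOLDEN_CASES)
```

Even ten golden cases turn "the model seems fine" into a number you can track as workflows, models, and data change.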
This explains why OpenAI talks about Forward Deployed Engineers and why Anthropic talks about applied engineers working close to customers. The model may be powerful, but someone has to convert that power into workflow.
Why AI Labs Are Entering Consulting
There are several reasons.
First, enterprise adoption is slow if it remains "try this tool." Companies need help moving from pilot to production. Without integration, governance, and adoption, the project remains an experiment.
Second, consulting creates product feedback. When a technical team enters a real customer environment, it discovers patterns that do not appear in a lab: inherited permissions, duplicated data, informal processes, legal exceptions, teams that do not trust the tool, and old systems nobody wants to touch.
Third, the services market is huge. Large consultancies and systems integrators capture a lot of value in technology transformations. If AI becomes the new layer of enterprise work, the labs want to participate in implementation too.
Fourth, investment firms have thousands of portfolio companies where patterns can be tested, repeated, and scaled. If an AI system reduces costs or increases productivity in one portfolio company, it may be replicated in others.
The New Model: Technology Plus Execution
The shift from the old narrative to the new one can be summarized like this:
| Before | Now |
|---|---|
| Buy licenses | Redesign workflows |
| Launch pilots | Measure real impact |
| Use isolated prompts | Integrate data and tools |
| Expect spontaneous adoption | Support internal change |
| Choose a model | Design an architecture |
| Make a demo | Operate in production |
Enterprise AI looks less like installing software and more like building a new operating capability.
That is why concepts such as Forward Deployed Engineer, RAG, observability, evaluations, agent governance, and data architecture are becoming central.
What This Means for a Mid-Sized Company
A mid-sized company does not need to copy the strategy of a multinational. It does not need a team of 40 consultants or a three-year plan before starting.
But it does need to take the last mile seriously.
In practice, that means starting with a few well-chosen use cases:
- Customer support with an internal knowledge base
- Email summary and classification
- Assisted proposal generation
- Data extraction from invoices, delivery notes, or contracts
- Automatic follow-up for sales opportunities
- Chatbots connected to FAQs and booking systems
- Recurring report automation
- Internal search across company documentation
Many of these cases can be solved with a combination of AI and automation, RAG, connectors, and human supervision.
The key is not to do "an AI project." The key is to choose a process that currently consumes time, money, or quality, and turn it into a more reliable system.
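To make the RAG idea above concrete, here is a toy sketch of its retrieval half: score internal documents against the question and hand the best snippets to the model to ground its answer. Real systems use embeddings and a vector store; this keyword-overlap version, with invented documents, only shows the shape of the pipeline.

```python
# Toy sketch of RAG retrieval via keyword overlap. Documents are
# invented examples; production systems would use embeddings.

DOCS = {
    "returns-policy": "Customers may return items within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3 to 5 business days within the EU.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the question."""
    words = set(question.lower().split())
    scored = sorted(
        DOCS.items(),
        key=lambda item: len(words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]
```

The retrieved snippets then go into the prompt, so the model answers from company documentation instead of guessing, and a human reviews anything the retrieval step could not ground.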
The Risk: When the Consultant Also Sells the Model
There is a delicate point. If the company recommending the strategy also owns the model, provider bias can appear.
That is not always bad. Working close to the vendor can provide technical knowledge, roadmap access, and good practices. But it is worth asking:
- What happens if another model performs better tomorrow?
- Can we switch providers without rebuilding everything?
- Are data assets decoupled from the model?
- Does business logic live in our architecture or inside a closed box?
- Are there independent logs, evaluations, and controls?
In some cases, proprietary models will make sense. In others, a hybrid architecture with local models, European cloud, or multiple providers will be more prudent. We explore that in Local AI, European Cloud, or ChatGPT.
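One way to keep that option open is to decouple business logic from the provider behind a thin interface, so switching models is a configuration change rather than a rebuild. The sketch below uses placeholder provider classes; the interface shape is an assumption, not any vendor's SDK.

```python
# Sketch of provider decoupling: business logic depends on an interface,
# each vendor gets an adapter. Provider classes are placeholders.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"

def summarize_ticket(model: ChatModel, ticket: str) -> str:
    """The workflow only knows the interface, never the vendor."""
    return model.complete(f"Summarize this support ticket: {ticket}")
```

With this structure, prompts, data, logs, and evaluations stay in your architecture, and the model becomes a replaceable component instead of a closed box.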
How to Act Now
If your company wants to capture this wave without falling into hype, a reasonable order is:
- Inventory repetitive and costly workflows.
- Prioritize two or three use cases with clear impact.
- Review data, permissions, and systems involved.
- Build a prototype connected to reality, not an isolated demo.
- Measure quality, savings, errors, and adoption.
- Define governance: who approves, who audits, who corrects.
- Scale only what proves value.
If the project involves agents that execute actions, first read how to define permissions, identity, and limits before connecting an AI agent to your ERP.
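The minimum viable version of those limits is a named identity with explicit scopes and an audit trail for every attempt, allowed or not. The scope names and in-memory log below are assumptions for illustration; a real deployment would use your identity provider and ERP permissions.

```python
# Illustrative sketch of identity + audit for an agent touching an ERP.
# Scope names and the in-memory log are invented for the example.

AUDIT_LOG: list[dict] = []
AGENT_SCOPES = {"invoices:read", "orders:read"}   # deliberately no write scope

def erp_action(identity: str, scope: str, action: str) -> bool:
    """Record every attempt, then allow it only if the scope is granted."""
    allowed = scope in AGENT_SCOPES
    AUDIT_LOG.append({"identity": identity, "scope": scope,
                      "action": action, "allowed": allowed})
    return allowed
```

Note the order: the attempt is logged even when it is denied, so auditors see what the agent tried to do, not only what it was permitted to do.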
The Bottom Line
OpenAI and Anthropic are entering consulting because they have understood something many companies are quietly experiencing: AI does not transform an organization simply by being available. It transforms it when it is integrated into real work.
The model matters, but it is not enough.
Data, workflows, people, permissions, metrics, and the ability to turn an idea into a stable system matter too. That is the last mile. And it will likely decide which companies get value from AI and which remain stuck in pilots.
At Navel Digital, we help companies cross that last mile: detect use cases, design architecture, connect systems, automate workflows, and deploy AI solutions that can be measured and maintained.