Before Connecting an AI Agent to Your ERP, Define This
Connecting an AI agent to an ERP, CRM, or internal system can save many hours, but it can also create risks if you don't define permissions, identity, approvals, limits, and auditing before granting access.
The question is no longer whether an AI agent can connect to your ERP. It can. It can also query clients, review invoices, prepare orders, update CRM fields, read histories, generate quotes, and launch automations.
The important question is different: should it be able to do all of this without limits?
According to Deloitte, AI agents are scaling faster than their control mechanisms. In their 2026 global survey, only 21% of companies claim to have a mature governance model for AI agents, even though adoption is growing rapidly.
For an SME, this matters greatly. An agent connected to real data stops being a chatbot. It becomes an operational user within the company.
An Agent is an Identity, Not a Function
Many companies treat the agent as "an AI tool." This is a mistake. If the agent can enter internal systems, execute actions, and move data, it must be managed as a digital identity.
This means answering specific questions:
- Who owns the agent?
- Which systems can it use?
- What data can it read?
- What actions can it execute?
- Which actions require approval?
- How are its permissions revoked?
- Where are its actions recorded?
If you cannot answer these questions, you should not connect it to an ERP yet.
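To make this concrete, here is a minimal sketch of what managing an agent as a digital identity could look like. Every field maps to one of the questions above; all names and values are illustrative, not tied to any specific IAM product.

```python
from dataclasses import dataclass

# Sketch of an agent managed as a digital identity.
# Each field answers one of the governance questions above.
@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                      # who owns the agent
    allowed_systems: set[str]       # which systems it can use
    readable_data: set[str]         # what data it can read
    allowed_actions: set[str]       # what actions it can execute
    needs_approval: set[str]        # which actions require a human
    audit_log: str                  # where its actions are recorded
    revoked: bool = False           # flipping this revokes all access

    def can_execute(self, action: str) -> bool:
        # A revoked agent can do nothing; otherwise only listed actions.
        return not self.revoked and action in self.allowed_actions

# Hypothetical example instance.
erp_agent = AgentIdentity(
    agent_id="erp-assistant-01",
    owner="operations@example.com",
    allowed_systems={"crm"},
    readable_data={"faqs", "commercial_docs"},
    allowed_actions={"tag_ticket", "create_email_draft"},
    needs_approval={"issue_invoice"},
    audit_log="agent_actions.log",
)
```

If you can fill in every field for your agent without hesitation, you have answered the questions above; if a field is blank, that is your gap.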
What It Can Read
The first level of control is reading. It seems harmless, but it is not. Reading client data, margins, payroll, contracts, or incidents already implies risk.
A good design starts with the minimum possible access:
| Data Type | Recommended Access |
|---|---|
| FAQs, manuals, and internal processes | Broad reading if they do not contain sensitive data |
| Commercial documents | Limited reading by department or role |
| Client data | Reading filtered by use case |
| Invoicing and accounting | Restricted and audited reading |
| Payroll, health, or legal data | Only if there is a clear need and strong controls |
In internal knowledge bases, a solution like Polp can help organize documents, answer with sources, and limit access according to context. But the rule remains the same: the agent should not see more information than it needs.
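The table above can be sketched as a deny-by-default read policy. The data types mirror the table; the role names, use-case checks, and access levels are illustrative assumptions, not a prescribed schema.

```python
# Least-privilege read rules, mirroring the table above.
# Anything not listed is denied by default.
READ_POLICY = {
    "faqs": "broad",
    "commercial_docs": "by_role",
    "client_data": "by_use_case",
    "invoicing": "restricted_audited",
    "payroll": "need_plus_controls",
}

def can_read(data_type: str, *, role: str = "",
             use_case: str = "", audited: bool = False) -> bool:
    level = READ_POLICY.get(data_type, "deny")  # unknown data stays closed
    if level == "broad":
        return True
    if level == "by_role":
        return role in {"sales", "commercial"}  # hypothetical roles
    if level == "by_use_case":
        return bool(use_case)                   # must name a concrete use case
    if level == "restricted_audited":
        return audited and role == "finance"
    # Payroll/legal data needs controls beyond a lookup table, so this
    # sketch keeps it (and anything unknown) closed.
    return False
```

The important design choice is the default: a data type the policy does not mention is unreadable, so new ERP tables are protected until someone explicitly opens them.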
What It Can Write
Writing is where operational risk begins. An agent that only responds can, at worst, get the text wrong. An agent that writes can change the state of your business.
Low-risk actions:
- Creating email drafts
- Preparing proposals pending review
- Tagging tickets
- Classifying leads
- Updating non-critical fields
Medium-risk actions:
- Changing order statuses
- Creating tasks in CRM
- Scheduling meetings
- Generating commercial documents
- Updating contact data
High-risk actions:
- Issuing invoices or credits
- Applying discounts
- Canceling orders
- Modifying contracts
- Changing payment terms
- Deleting records
The practical recommendation: start with reading and drafts. Then allow writing for reversible actions. Keep economic, legal, or irreversible actions behind human approval.
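The three tiers above can be encoded so the check lives in code rather than in a prompt. The action names are examples drawn from the lists; which tier each action belongs to is your decision, not a standard.

```python
from enum import Enum

class Risk(Enum):
    LOW = "auto"           # agent may execute directly
    MEDIUM = "reversible"  # allowed once drafts have proven reliable
    HIGH = "human"         # always behind human approval

# Illustrative classification of the write actions listed above.
ACTION_RISK = {
    "create_email_draft": Risk.LOW,
    "tag_ticket": Risk.LOW,
    "update_order_status": Risk.MEDIUM,
    "schedule_meeting": Risk.MEDIUM,
    "issue_invoice": Risk.HIGH,
    "apply_discount": Risk.HIGH,
    "delete_record": Risk.HIGH,
}

def requires_human(action: str) -> bool:
    # An action nobody classified defaults to the highest tier.
    return ACTION_RISK.get(action, Risk.HIGH) is Risk.HIGH
```

As with reading, the default matters: a write action missing from the table is treated as high risk until someone deliberately lowers it.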
Who Approves
An agent should not decide alone when the action has real impact. But it also doesn't make sense for everything to go through the director.
You need an approval matrix:
| Action | Approves |
|---|---|
| Response to FAQ | Agent automatically |
| Email to client with order info | Agent if the source is reliable |
| New quote | Commercial manager |
| Discount above a certain margin | Commercial manager |
| Refund or credit | Administration |
| Contract change | Director or legal |
This matrix avoids two extremes: agents that are too dangerous and agents that are so blocked that no one uses them.
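The matrix translates directly into a routing table. Action keys and role names below are illustrative labels for the rows above; map them to your own roles.

```python
# The approval matrix above as a routing table.
APPROVAL_MATRIX = {
    "faq_response": "agent",
    "order_info_email": "agent_if_source_reliable",
    "new_quote": "commercial_manager",
    "discount_above_margin": "commercial_manager",
    "refund_or_credit": "administration",
    "contract_change": "director_or_legal",
}

def approver_for(action: str) -> str:
    # Unlisted actions escalate by default rather than auto-approve.
    return APPROVAL_MATRIX.get(action, "director_or_legal")
```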
Behavioral Limits
Technical permissions are not enough. You must also define behavioral limits.
For example:
- Not promising deadlines if the ERP does not return a confirmed date
- Not offering discounts outside of commercial policy
- Not diagnosing legal, medical, or financial problems
- Not asking for unnecessary personal data
- Not continuing a conversation if the user expresses severe anger
- Not executing a tool if the intention is unclear
These limits must be in instructions, evaluations, and business rules. If they only live in a prompt, they are fragile.
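One way to take these limits out of the prompt is a pre-execution check that runs before any tool call. The predicates and the confidence threshold below are placeholders for your real business rules, not a recommended configuration.

```python
# Behavioral limits as code-level checks, run before any tool executes.
# The request fields and the 0.8 threshold are illustrative assumptions.
def pre_execution_checks(request: dict) -> list[str]:
    violations = []
    if request.get("promises_deadline") and not request.get("erp_confirmed_date"):
        violations.append("no deadline promise without an ERP-confirmed date")
    if request.get("discount", 0) > request.get("policy_max_discount", 0):
        violations.append("discount outside commercial policy")
    if request.get("intent_confidence", 1.0) < 0.8:
        violations.append("intention unclear: do not execute the tool")
    return violations  # an empty list means the action may proceed
```

A prompt can be talked around; a check like this cannot, which is why the article insists these limits live in business rules and evaluations as well.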
Auditing: The Ability to Reconstruct Any Decision
Deloitte warns that many agent risks appear when there is no monitoring, clear limits, or audit trails. For a company, auditing is not a technical luxury: it is the way to respond to a complaint, an error, or a legal review.
A minimum record should include:
- User who initiated the action
- Agent that intervened
- Agent version
- Systems consulted
- Data used
- Tools executed
- Result of each tool
- Human approver, if any
- Date and time
Without this, you cannot know if the failure was due to the agent, the data, the prompt, the user, or an integration.
The ERP Should Not Be the First Experiment
If your company has never used agents, do not start by connecting the entire ERP. Start with a lower-risk environment:
- Internal document base
- Email inbox
- CRM with read permissions
- Tickets or support
- ERP in read-only mode
- ERP with limited actions
- Automations with human approval
This path allows you to build confidence without exposing the core business system from day one.
How We Can Help
At Navel Digital, we implement agents connected to real systems with governance from the start: permissions, roles, limits, approvals, records, and human fallback. We can also integrate them with knowledge tools like Polp so that the agent consults internal documentation before acting on a CRM or ERP.
An agent connected to your ERP can save many hours. But only if you first decide exactly what identity it has within your company.