
For two decades, integration platform as a service (iPaaS) has been the connective tissue of the enterprise. Vendors like MuleSoft, Boomi, Workato, SnapLogic, Informatica and Frends sold the same essential promise: a unified control plane for moving data between systems, scheduling batch jobs, exposing APIs, and applying governance to inter-application traffic. That promise still holds for the workloads it was designed for. What it does not hold for is the new dominant consumer of enterprise data in 2026: the autonomous AI agent.

Agents do not want a scheduled CSV drop at 02:00 UTC. They want a discoverable, callable, governed, auditable surface that exposes business actions in real time and that they can reason about during a single inference loop. That surface, increasingly, is the Model Context Protocol. And the most interesting structural shift of the year is not that MCP exists. It is that MCP is rapidly absorbing the integration layer of the enterprise stack, becoming the new front door that AI agents knock on whenever they need to read, write, search, or act against a system of record.

Informatica's 2026 trend report frames this directly. In "AI-Led Integration: 6 Emerging Trends Shaping the Future of iPaaS", the leading prediction is that iPaaS itself becomes "the MCP layer" — the governed bridge between AI agents and enterprise systems of record. MIT Technology Review's coverage in "Consolidating Systems for AI With iPaaS" reaches a similar conclusion from a different angle: enterprises are now running fewer integration platforms but exposing far more endpoints through them, because every endpoint is potentially callable by an agent.

This article lays out a reference architecture for that emerging shape. It introduces the MCP-as-Integration-Layer model — a four-layer architecture for hybrid integration in the agent era — and walks through the eight enterprise patterns that show up repeatedly in production deployments. It is written for integration architects, platform engineers, and enterprise CTOs who already operate iPaaS at scale and now need to figure out what stays, what goes, and what gets wrapped in an MCP server in the next four quarters.

Why MCP Eats Part of the iPaaS Stack

Classical iPaaS was optimized for human-authored, schedule-driven, point-to-point integration. The atomic unit was a recipe, flow, or pipeline that a developer or citizen integrator built in a visual editor. The execution model was either scheduled or event-triggered. The consumer was almost always another deterministic system: a CRM updating an ERP, a data warehouse ingesting from Salesforce, a webhook firing into a workflow engine. The governance model assumed that the caller and callee were both trusted, identifiable, machine-shaped components.

Agents break almost every assumption in that model. The caller is non-deterministic. The exact sequence of calls cannot be enumerated in advance. The agent may decide at inference time whether it needs to call a tool, which tool, with what arguments, and how to recover if it fails. The classical iPaaS playbook of "build a flow, schedule it, monitor it" maps poorly to a world in which the integration is composed at runtime by a probabilistic system that is reasoning about the user's intent.

MCP is the protocol that closes this gap. As Frends notes in its analysis of the BOAT (Business Orchestration and Automation Technologies) wave, the shift is from human-authored flows to agent-callable capabilities. An MCP server publishes tools, resources, and prompts in a standardized, discoverable schema. Agents introspect that schema at runtime, decide what to call, and the protocol handles transport, authentication context, and response shaping. From the agent's perspective, every enterprise system collapses into a uniform interface. From the enterprise's perspective, every agent collapses into a uniform consumer that can be governed centrally.
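The discovery-then-call loop can be sketched in a few lines. This is an illustrative stand-in, not the official MCP SDK: the descriptor shapes, tool names, and `list_tools`/`validate_call` helpers are invented here to show how an agent introspects a published schema and how the server can check arguments against it.

```python
# Illustrative sketch of MCP-style tool discovery (not the official SDK).
# A server publishes tool descriptors; an agent introspects them at
# runtime and decides what to call based on name, description, and schema.

TOOLS = {
    "lookup_account": {
        "description": "Fetch a CRM account record by name.",
        "input_schema": {
            "type": "object",
            "properties": {"account_name": {"type": "string"}},
            "required": ["account_name"],
        },
    },
    "search_docs": {
        "description": "Full-text search over internal documentation.",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

def list_tools():
    """What an agent sees when it introspects the server."""
    return [{"name": name, **spec} for name, spec in sorted(TOOLS.items())]

def validate_call(tool_name, arguments):
    """Minimal argument check against the published schema."""
    spec = TOOLS.get(tool_name)
    if spec is None:
        return False, f"unknown tool: {tool_name}"
    missing = [key for key in spec["input_schema"]["required"]
               if key not in arguments]
    if missing:
        return False, f"missing required arguments: {missing}"
    return True, "ok"
```

The key property is that the agent never sees Salesforce or Confluence; it sees a uniform list of named, schema-described capabilities.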

| Dimension | Classical iPaaS | MCP-as-Integration-Layer |
| --- | --- | --- |
| Primary consumer | Other systems, scheduled jobs | AI agents (Claude, GPT, Gemini, custom) |
| Composition model | Design-time (human-built flows) | Runtime (agent reasoning) |
| Discovery | Catalog of pre-built recipes | Tool/resource introspection per session |
| Trigger pattern | Schedule, webhook, polling | Agent inference loop |
| Auth model | API keys, OAuth per connector | Standardized auth context + delegation |
| Latency tolerance | Seconds to minutes | Sub-second to low-second |
| Throughput shape | Bulk batch, large payloads | Many small, conversational calls |
| Failure recovery | Retry queue, dead-letter | In-context retry, agent reasoning |
| Audit primitive | Pipeline run | Tool call with full prompt context |
| Governance unit | Recipe, connector | Tool, scope, policy bundle |
| Strength | Bulk data movement, ETL/ELT | Real-time agent action, tool use |
| Weakness | Agent ergonomics, schema discoverability | Bulk movement, complex state machines |

The conclusion is not that iPaaS dies. It is that the integration stack splits into two complementary halves. Bulk, batch, scheduled, and large-payload workloads stay in classical iPaaS. Conversational, real-time, agent-driven, action-oriented workloads move to MCP. The hybrid integration platform of 2026 runs both.

The 4-Layer MCP-as-Integration-Layer Architecture

Production MCP deployments converge on a four-layer reference architecture. This is the same general shape we see in SnapLogic's 2026 vendor analysis and in NeosAlpha's enterprise integration trend report, even though the layer names vary. The diagram below is the canonical Swfte rendering.

MCP-as-Integration-Layer — Reference Architecture

┌─────────────────────────────────────────────────────────────┐
│  Layer 4: Agent Consumers                                    │
│  Claude Opus 4.7 · GPT-5.5 · Gemini 3.1 · Custom Agents      │
└────────────────┬────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│  Layer 3: Policy / Governance                                │
│  RBAC · ABAC · Rate Limits · Audit · DLP · Approval Routing  │
└────────────────┬────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│  Layer 2: MCP Server Mesh                                    │
│  CRM-MCP · ITSM-MCP · ERP-MCP · DW-MCP · Knowledge-MCP       │
└────────────────┬────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│  Layer 1: Source Systems                                     │
│  Salesforce · ServiceNow · NetSuite · Snowflake · Confluence │
└─────────────────────────────────────────────────────────────┘
Source: Swfte MCP-as-Integration-Layer reference, May 2026

The directional read is bottom-up at design time and top-down at runtime. At design time, you start with the systems of record at Layer 1 and decide what surface area each one should expose. You build or adopt MCP servers at Layer 2 that wrap those systems. You attach policy at Layer 3. And you only then introduce agent consumers at Layer 4. At runtime, the flow inverts: an agent issues a tool call from Layer 4, which is intercepted and evaluated at Layer 3, which is routed to the appropriate MCP server at Layer 2, which translates the request into a system-of-record API call at Layer 1.

The four layers are not collapsible. Skipping Layer 3 is the single most common reason MCP-as-iPaaS pilots fail to reach production. We will return to this in the governance section.
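The top-down runtime flow can be made concrete with a small sketch. Everything here is a hypothetical stand-in — the route table, the `(role, tool)` allow-list, and the `handle_tool_call` function are invented for illustration; a real Layer 3 evaluates far richer policy, and Layer 1 calls would hit actual system-of-record APIs.

```python
# Hedged sketch of the runtime flow: a tool call from Layer 4 passes a
# policy check (Layer 3), is routed to an MCP server (Layer 2), which
# would translate it into a system-of-record API call (Layer 1).

ROUTES = {"lookup_account": "crm-mcp", "create_incident": "itsm-mcp"}
ALLOWED = {("analyst", "lookup_account"), ("analyst", "run_query")}

def handle_tool_call(role, tool, args):
    if (role, tool) not in ALLOWED:          # Layer 3: policy gate
        return {"status": "denied", "tool": tool}
    server = ROUTES.get(tool)                # Layer 2: route to a server
    if server is None:
        return {"status": "no_route", "tool": tool}
    # Layer 1: the server would issue the Salesforce/ServiceNow API call
    # here; this sketch just stubs the response.
    return {"status": "ok", "server": server, "args": args}
```

Note that a permitted-but-unrouted call is a distinct failure from a policy denial — the gateway should report them differently so agents can reason about recovery.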

Layer 1: Source Systems (and Why You Don't Touch Them)

The first principle of the MCP-as-Integration-Layer model is that you do not modify your systems of record to support agents. Salesforce, ServiceNow, NetSuite, Workday, SAP, Snowflake, and Confluence are not changing in shape because of MCP. Their existing APIs, webhooks, bulk export interfaces, and CDC streams remain the contract surface. What changes is who calls them and through which abstraction.

This matters for three reasons. First, ROI: most enterprises have already paid for and stabilized their integration footprint with these systems. Reworking that integration to be "AI-native" at the source layer would be a rip-and-replace project measured in years. Second, governance: the existing systems carry the existing access controls, audit trails, and compliance certifications. You want to ride those, not rebuild them. Third, vendor independence: by leaving Layer 1 untouched, you preserve the option to swap any specific MCP server at Layer 2 without disturbing the underlying system.

The practical implication is that Layer 1 work in an MCP migration is mostly inventory work. You catalog every system, every API endpoint, every existing iPaaS connector, every batch job, and every authentication scheme. That inventory becomes the input to Layer 2 design.

Layer 2: The MCP Server Mesh

Layer 2 is where the new investment goes. An MCP server is a small, focused service that wraps one or more capabilities of an underlying system in the MCP tool/resource/prompt schema. The server mesh is the collection of these servers, deployed in a way that an agent gateway can discover and route to.

A well-designed MCP server mesh has three properties. It is decomposed by business function, not by source system — a single CRM-MCP server can pull from Salesforce and HubSpot if the underlying business concept is unified. It is versioned independently, so that you can ship a v2 of the CRM-MCP without touching the ITSM-MCP. And it has clear ownership, typically aligned to the team that already owns the underlying system integration.

The table below shows the canonical MCP server inventory we recommend for a mid-to-large enterprise. This is the smallest mesh that covers the bulk of agent demand we see in 2026 deployments.

| MCP Server | Enterprise Function | Typical Source System(s) | Primary Tools Exposed | Read/Write |
| --- | --- | --- | --- | --- |
| CRM-MCP | Customer & opportunity data | Salesforce, HubSpot, Microsoft Dynamics | lookup_account, get_opportunity, update_contact | RW |
| ITSM-MCP | IT service management | ServiceNow, Jira Service Management | create_incident, update_ticket, search_kb | RW |
| ERP-MCP | Finance & operations | NetSuite, SAP, Oracle Fusion | get_invoice, approve_purchase_order, lookup_vendor | RW |
| DW-MCP | Analytics & reporting | Snowflake, Databricks, BigQuery | run_query, describe_table, get_metric | R only |
| Knowledge-MCP | Internal documentation | Confluence, Notion, SharePoint | search_docs, get_page, list_recent | R only |
| HRIS-MCP | People & org data | Workday, BambooHR, Rippling | lookup_employee, get_org_chart, request_time_off | RW |
| Comms-MCP | Messaging & email | Slack, Microsoft Teams, Outlook | send_message, search_threads, create_channel | RW |
| Identity-MCP | Access & permissions | Okta, Entra ID, Auth0 | lookup_user, check_group_membership, request_access | RW |

The right number of MCP servers in a mesh is usually between six and twelve for a single business unit. Fewer than six and you are bundling unrelated functions into single servers, which makes governance hard. More than twelve and you are fragmenting capability across servers in a way that complicates agent reasoning and tool selection.
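The "decomposed by business function, not by source system" principle can be shown in miniature. The backend fetchers below are hypothetical stubs for real Salesforce and HubSpot clients; the point is that the agent-facing tool presents one unified business concept while the source systems stay invisible behind it.

```python
# Sketch of one CRM-MCP tool fronting two source systems. The fetchers
# are stand-ins for real API clients; field names are invented.

def fetch_salesforce(name):
    return {"source": "salesforce", "name": name, "tier": "enterprise"}

def fetch_hubspot(name):
    return {"source": "hubspot", "name": name, "owner": "jdoe"}

def lookup_account(name):
    """Single agent-facing tool; the split across backends is hidden."""
    merged = {}
    for fetch in (fetch_salesforce, fetch_hubspot):
        record = fetch(name)
        record.pop("source")  # internal routing detail, not agent-facing
        merged.update(record)
    return merged
```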

Layer 3: Policy and Governance (Where Most Implementations Fail)

If Layer 2 is where the new investment goes, Layer 3 is where the new thinking has to happen. Most failed MCP-as-iPaaS pilots we have reviewed in 2025 and 2026 failed at Layer 3 — not because the team did not implement governance, but because they implemented the wrong shape of governance. They tried to apply classical iPaaS governance (per-connector API key rotation, recipe-level approval) to a runtime where the unit of work is an agent tool call, not a pipeline run.

The governance unit in MCP-as-Integration-Layer is the tool call in context. That means the policy engine needs to evaluate not just "can this user call this tool" but "given the prompt, the calling agent, the user's role, the time of day, the data sensitivity of the arguments, and the recent history of calls, should this specific call be allowed, denied, sampled, or escalated?" That is fundamentally an attribute-based access control (ABAC) problem with audit and approval routing layered on top.

The table below maps the governance primitives that need to exist at Layer 3, the policy mechanism that implements each, and the artifact that gets produced for audit.

| Governance Concern | Policy Mechanism | Implementation Pattern | Audit Artifact |
| --- | --- | --- | --- |
| Who can call | RBAC | Role-to-tool mapping in policy bundle | Tool call log with subject identity |
| Under what condition | ABAC | Attribute evaluation (time, env, data class, geo) | Decision log with evaluated attributes |
| How often | Rate limiting | Token bucket per (user, tool) and (agent, tool) | Rate decision log with bucket state |
| With what data | Data loss prevention (DLP) | Argument & response inspection for PII/PHI/secrets | DLP scan result with redaction map |
| With what approval | Approval routing | Sync wait or async approval depending on risk score | Approval ticket with approver identity |
| With what lineage | Lineage tracking | Trace propagation across tool call chain | Lineage graph linking calls to outcomes |
| With what evidence | Audit log | Append-only immutable log of all calls + decisions | Cryptographically signed audit record |
| Under what break | Break-glass override | Time-boxed elevated access with mandatory review | Break-glass event with justification text |

The implementation pattern that most production deployments converge on is a thin gateway component that sits between Layer 4 and Layer 2, intercepts every MCP call, evaluates the policy bundle, and either permits, denies, modifies, or escalates the call. This component is functionally equivalent to an API gateway for agents — and indeed several teams call it an "agent gateway" even though it is structurally different from classical API gateways in that it terminates and re-emits MCP rather than just HTTP.
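A minimal ABAC decision function shows the shape of that evaluation. The attribute names, thresholds, and three-way allow/deny/escalate outcome here are all invented for illustration — a production policy engine would evaluate a declarative policy bundle, not hard-coded Python.

```python
# Hedged ABAC sketch of the Layer 3 decision: the gateway evaluates
# attributes of the call in context, not just the caller's identity.
# Attribute names and thresholds are illustrative assumptions.

def evaluate(call):
    # RBAC floor: deny if the caller's role is not mapped to the tool.
    if call["tool"] not in call.get("role_tools", []):
        return "deny"
    # ABAC: escalate writes that touch sensitive data classes.
    if call.get("write") and call.get("data_class") in {"pii", "financial"}:
        return "escalate"
    # Rate signal: escalate bursts in the recent call window.
    if call.get("recent_calls", 0) > 50:
        return "escalate"
    return "allow"
```

The important design point is the third outcome: classical API gateways mostly know allow and deny, whereas an agent gateway routinely needs escalate (to a human) and, in some deployments, sample (log deeply but permit).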

For a deeper dive on this distinction, see our companion piece on iPaaS vs AI Gateway: Enterprise Guide 2026.

Layer 4: Agent Consumers (Provider-Agnostic)

The fourth layer is the agent fleet itself. The single most important architectural property at Layer 4 is provider independence. Your CRM-MCP server should not care whether the agent calling it is Claude Opus 4.7, GPT-5.5, Gemini 3.1, an open-weights Llama derivative running on-prem, or a custom agent stitched together from a workflow engine. That neutrality is precisely the value of MCP as an integration layer: it decouples the rate at which you adopt new model providers from the rate at which you can rework your integration stack.

In practice, this means agent code at Layer 4 should never import a vendor SDK that hard-codes a specific MCP server endpoint. The discovery should always go through the gateway at Layer 3. The auth context should always be propagated from the user session, not baked into the agent. And the agent should be able to fall back gracefully when a tool returns an error, a timeout, or a policy denial — because in production those will happen with frequency.
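The graceful-degradation requirement can be sketched as follows. `call_gateway` is a hypothetical stand-in for the real transport to the Layer 3 gateway (here it is stubbed to always deny, so the fallback path is exercised); the response shape and `quiet_hours` reason code are invented for illustration.

```python
# Sketch of provider-agnostic fallback: the agent degrades gracefully
# on policy denial or tool failure instead of crashing or retrying
# blindly. All names and response shapes are illustrative.

def call_gateway(tool, args):
    # Stand-in: a real implementation would POST to the Layer 3 gateway.
    return {"status": "denied", "reason": "quiet_hours"}

def call_tool_with_fallback(tool, args):
    result = call_gateway(tool, args)
    status = result.get("status")
    if status == "ok":
        return result["data"]
    if status == "denied":
        # Surface the denial so the agent can explain it, not hide it.
        return (f"Action not permitted ({result.get('reason')}); "
                "proceeding without this data.")
    return "Tool unavailable; answering from existing context."
```

Because this logic lives against the gateway interface rather than any vendor SDK, swapping the underlying model provider changes nothing here.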

Multi-agent deployments add another wrinkle: a planner agent may call a research agent which calls a billing agent, each issuing tool calls along the way. The lineage tracking at Layer 3 needs to handle nested calls, and the gateway needs to enforce policy at every hop. We cover this pattern in detail in Multi-Agent AI Systems: Enterprise Guide.

Patterns 1-2: Read-Only Lookup, Write-Through With Approval

We now turn to the eight enterprise patterns that show up repeatedly in production MCP deployments. Each pattern has a distinct latency profile, governance requirement, and tooling fit. We will introduce them two at a time.

Pattern 1: Read-Only Agent Lookup. This is the gateway drug of MCP. An agent receives a user question, calls one or more read-only MCP tools to fetch context, and returns a synthesized answer. There are no side effects on the system of record. The CRM-MCP lookup_account tool, the DW-MCP run_query tool, and the Knowledge-MCP search_docs tool are the workhorses here. Latency budget is typically 200ms to 2s per call. Governance is light — RBAC on the tool, DLP on the response, audit log of the lookup. This pattern is where almost every enterprise starts.

Pattern 2: Write-Through With Approval. The agent prepares a write — updating a contact record, creating a ticket, modifying an invoice — but the actual mutation is gated behind a human approval. The MCP tool returns an "approval pending" response immediately, and the gateway routes an approval request to the appropriate human (the user, the user's manager, or a designated approver based on data sensitivity). Once approved, the gateway issues the underlying API call. This pattern is the safest write pattern in early production and is typically how teams cross from read-only to read-write deployments.
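The approval-gated write can be sketched with an in-memory queue. The `PENDING` dict stands in for the gateway's durable approval store, and the function names are invented; the essential behavior is that `propose_write` returns immediately with a pending status and the mutation only happens inside `approve`.

```python
# Hedged sketch of Pattern 2 (Write-Through With Approval). The
# in-memory structures stand in for the gateway's approval queue and
# the system-of-record write; all names are illustrative.

import uuid

PENDING = {}    # approval_id -> proposed write, awaiting a human
EXECUTED = []   # writes that actually reached the system of record

def propose_write(tool, args, approver):
    approval_id = str(uuid.uuid4())
    PENDING[approval_id] = {"tool": tool, "args": args, "approver": approver}
    return {"status": "approval_pending", "approval_id": approval_id}

def approve(approval_id, approver):
    write = PENDING.get(approval_id)
    if write is None or write["approver"] != approver:
        return {"status": "rejected"}
    del PENDING[approval_id]
    EXECUTED.append(write)  # gateway issues the real API call here
    return {"status": "executed", "tool": write["tool"]}
```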

Patterns 3-4: Federated Search, Chained Action With Saga

Pattern 3: Federated Search. A single agent query fans out across multiple MCP servers in parallel, each running a constrained search against its underlying system. Knowledge-MCP searches Confluence, CRM-MCP searches account notes, ITSM-MCP searches recent tickets, and the agent synthesizes a unified answer. The challenge is consistency — what does it mean for the federated result set to be coherent when each underlying system has different freshness guarantees? In practice, agents handle this by surfacing the source and timestamp of each fragment.
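The fan-out with source and timestamp tagging can be sketched directly. The backend lambdas below are stubs with hard-coded results; the server names follow the mesh table above, and the `as_of` field is an invented stand-in for each system's freshness metadata.

```python
# Sketch of Pattern 3 (Federated Search): fan a query out to several
# MCP servers in parallel and tag every fragment with its source and
# freshness timestamp. Backends are hard-coded stubs for illustration.

from concurrent.futures import ThreadPoolExecutor

BACKENDS = {
    "knowledge-mcp": lambda q: {"hits": 3, "as_of": "2026-05-01"},
    "crm-mcp":       lambda q: {"hits": 1, "as_of": "2026-05-12"},
    "itsm-mcp":      lambda q: {"hits": 0, "as_of": "2026-05-12"},
}

def federated_search(query):
    def run(item):
        name, search = item
        return {"source": name, **search(query)}
    with ThreadPoolExecutor() as pool:
        fragments = list(pool.map(run, BACKENDS.items()))
    # Surface only sources that returned something, freshest first, so
    # the agent can attribute each fragment when it synthesizes.
    return sorted(
        (f for f in fragments if f["hits"] > 0),
        key=lambda f: f["as_of"], reverse=True,
    )
```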

Pattern 4: Chained Action With Saga. An agent executes a multi-step action that spans multiple MCP servers — for example, creating a customer in the CRM, provisioning an identity in Identity-MCP, and sending a welcome message via Comms-MCP. Each step is a separate MCP tool call. The saga pattern requires that each step have a defined compensating action, so that if step three fails, the gateway can issue compensations for steps one and two. This is the most demanding pattern in terms of tooling design, because every write tool needs to be paired with an inverse.
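The compensation mechanics can be sketched generically. This is a minimal saga runner under the assumption that every write tool is registered with its inverse; the `(action, compensate)` pairing and the runner itself are illustrative, not a real orchestration API.

```python
# Sketch of Pattern 4 (Chained Action With Saga): each step is paired
# with a compensating action; on failure, completed steps are undone in
# reverse order. Step callables stand in for real MCP tool calls.

def run_saga(steps):
    """steps: list of (action, compensate) callables."""
    done = []
    log = []
    for action, compensate in steps:
        try:
            log.append(action())
            done.append(compensate)
        except Exception as exc:
            log.append(f"failed: {exc}")
            for undo in reversed(done):  # roll back in reverse order
                log.append(undo())
            return {"status": "rolled_back", "log": log}
    return {"status": "committed", "log": log}
```

The hard part in practice is not the runner but the inverses themselves: "delete the customer we just created" is easy, while "un-send the welcome message" is not, which is why saga-pattern tools often pair a send with a follow-up correction rather than a true inverse.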

Patterns 5-6: Background Sync, Audit-First Action

Pattern 5: Background Sync With CDC. This is the pattern where classical iPaaS still wins, but it is wrapped in MCP at the consumption end. A change data capture stream from Snowflake or Salesforce flows through the iPaaS pipeline as it always did, lands in a derived store, and is exposed to agents via a thin MCP server that wraps the derived store. The agent never sees the CDC pipeline; it only sees an MCP tool that returns "current state." This hybrid is the cleanest expression of "iPaaS for batch, MCP for action."

Pattern 6: Audit-First Action. Some actions are too sensitive to take without provable evidence of intent. The audit-first pattern flips the normal sequence: the agent first writes a structured audit record describing the proposed action and the reasoning, the gateway returns a confirmation token, and only then does the agent issue the actual write call referencing that token. This pattern is common in regulated industries where the audit trail is the legal record of the decision. The Swfte Connect MCP server templates ship with an audit-first wrapper option for exactly this case.

Patterns 7-8: Quorum Approval, Sandboxed Trial Execution

Pattern 7: Quorum Approval. For high-blast-radius actions — bulk customer updates, large financial transactions, mass communications — a single human approver is insufficient. The quorum pattern requires N-of-M approvers from a designated group, with the gateway holding the action in escrow until quorum is reached or a timeout fires. This is computationally inexpensive to implement but operationally expensive because of the human coordination cost. It is reserved for the highest-risk write paths.
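The N-of-M escrow logic itself is simple, which supports the point that the cost is human coordination, not computation. The names below are invented for illustration; note that duplicate approvals from the same person collapse via the set, so one approver cannot satisfy quorum alone.

```python
# Sketch of Pattern 7 (Quorum Approval): the action sits in escrow
# until N distinct approvers from a designated group sign off. Names
# and return values are illustrative.

def make_escrow(action, approver_group, quorum):
    return {"action": action, "group": set(approver_group),
            "quorum": quorum, "approvals": set()}

def cast_approval(escrow, approver):
    if approver not in escrow["group"]:
        return "not_an_approver"
    escrow["approvals"].add(approver)  # duplicates collapse via the set
    if len(escrow["approvals"]) >= escrow["quorum"]:
        return "released"              # gateway now issues the write
    return "held"
```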

Pattern 8: Sandboxed Trial Execution. Before issuing a real write, the agent issues a "what would happen if" call against a sandbox or shadow copy of the system of record. The MCP server runs the action against the sandbox, returns a diff of the predicted change, and the agent presents that diff to the user before issuing the real write. This pattern is increasingly common for ERP and finance MCP servers where the cost of a wrong write is high. The sandbox itself is typically maintained by a classical iPaaS pipeline that clones production state on a schedule.
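The diff that the sandbox returns can be sketched as a field-level comparison. This is a minimal illustration under the assumption that records are flat dicts; real ERP records would need nested and typed diffing.

```python
# Sketch of Pattern 8 (Sandboxed Trial Execution): apply the proposed
# write against a sandbox record and return a field-level diff for the
# user to review before the real write is issued.

def diff_write(sandbox_record, proposed_update):
    changes = {}
    for field, new_value in proposed_update.items():
        old_value = sandbox_record.get(field)
        if old_value != new_value:
            changes[field] = {"from": old_value, "to": new_value}
    return changes
```

An empty diff is itself useful signal: it tells the agent the write would be a no-op and can be skipped entirely.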

Pattern Adoption vs Latency Profile (May 2026, n=240 enterprises)

Latency budget (p95)
10s │                                          ◆ Quorum
    │                                          ◆ Sandboxed Trial
    │                                ◆ Chained-Saga
 5s │                       ◆ Audit-First
    │                       ◆ Write-Approval
    │            ◆ Federated
 1s │  ◆ Read-Only
    │              ◆ Background Sync (consumption)
    └────┬────┬────┬────┬────┬────┬────┬────┬────▶
        10%  25%  40%  55%  70%  85%  100%
                  Adoption rate among MCP-deployed enterprises

Source: Swfte MCP enterprise survey, May 2026

The adoption curve is informative. Read-only lookup is nearly universal among any enterprise that has deployed MCP at all. Write-through with approval is the next most common, present in around 70% of deployments. The more complex patterns — chained sagas, quorum approval, sandboxed trial — are concentrated in regulated industries and large enterprises where the cost of a wrong action is high enough to justify the additional complexity.

Where Classical iPaaS Still Wins

The MCP-as-Integration-Layer model is not a wholesale replacement for classical iPaaS. There are at least four workload classes where classical iPaaS continues to dominate and should not be migrated.

The first is bulk batch movement. Moving 50 million rows from Snowflake to a downstream warehouse, transforming them through 30 dbt models, and landing them in a reporting mart is not a job for MCP. It is a job for a classical iPaaS or ELT pipeline, scheduled, monitored, with bulk APIs. MCP's per-call overhead and conversational shape make it a poor fit for this workload.

The second is system-of-record-to-system-of-record sync. When Salesforce contact updates need to flow to Marketo, NetSuite, and a custom data lake on a guaranteed-delivery basis, that is classical iPaaS territory. The reliability semantics, the dead-letter queues, the retry policies, the bulk reconciliation jobs — all of that exists in mature form in iPaaS and would have to be rebuilt poorly in MCP.

The third is complex stateful workflows with human approval steps that span days. While MCP can handle approval routing, long-running stateful workflows with multi-day SLAs, escalation paths, and parallel branches are better expressed in a workflow engine that the iPaaS vendor already provides.

The fourth is legacy connector breadth. Major iPaaS vendors maintain hundreds of connectors to systems that no MCP server author has yet wrapped. For the next several quarters, a classical iPaaS will reach more systems out of the box than your MCP mesh will.

The hybrid integration architecture acknowledges all four of these. It runs classical iPaaS for batch, sync, and long workflows, and runs MCP for agent-callable real-time action. The two layers share authentication context, share audit infrastructure, and increasingly share the same governance plane.

This is broadly the same conclusion that Frends arrives at in its BOAT analysis and that SnapLogic articulates in its top vendor comparison: the integration platform of 2026 is bimodal. The platforms that survive will offer both modes, ideally with a unified governance layer spanning them.

For the broader context on how this hybrid emerges out of the agentic AI landscape rather than the classical integration landscape, see our analysis in Agentic AI: From Sprawl to Interoperability and the deeper protocol-level treatment in MCP Protocol: The Agentic AI Interoperability Standard.

Swfte Connect ships MCP server templates for the eight servers in the canonical mesh, plus the gateway layer and policy bundle templates. It also brokers calls between agents at Layer 4 and the systems of record at Layer 1, providing the audit, lineage, and approval routing that Layer 3 requires without forcing teams to build a gateway from scratch.

What to Do This Quarter

If you are an integration architect, platform engineer, or enterprise CTO planning the next four quarters of integration work, here are the actions that will compound the most.

  1. Inventory your Layer 1. Before designing any MCP server, build a complete inventory of every system of record, every existing iPaaS connector, every API endpoint, and every authentication scheme. This inventory is the input to every subsequent decision and most teams skip it.
  2. Deploy the canonical six-server mesh. Start with CRM-MCP, ITSM-MCP, DW-MCP, Knowledge-MCP, HRIS-MCP, and Identity-MCP. These six cover roughly 80% of agent demand in a typical enterprise and give you a clean baseline before you start adding domain-specific servers.
  3. Build the gateway before you build the third server. Do not let MCP servers proliferate without a Layer 3 gateway in place. The cost of retrofitting governance onto a sprawling mesh is significantly higher than the cost of building it in from the start.
  4. Pick three patterns and exclude the rest, for now. Most enterprises should start with Pattern 1 (Read-Only Lookup), Pattern 2 (Write-Through With Approval), and Pattern 3 (Federated Search). The complex patterns — saga, quorum, sandboxed trial — should wait until you have a year of operational data on the simpler ones.
  5. Keep classical iPaaS for what it is good at. Resist the urge to migrate batch, CDC, and bulk sync workloads to MCP. The hybrid architecture is the goal; pick the right tool per workload class.
  6. Establish a single governance plane across both modes. Whatever you use for authentication, audit, and lineage in classical iPaaS should be the same system serving Layer 3 of your MCP mesh. A bifurcated governance estate is the failure mode that takes three years to recover from.
  7. Pilot provider-agnostic agent code. Make sure your Layer 4 agent implementations can swap between Claude, GPT, Gemini, and any open-weights model without changes to the integration layer. That property is the single biggest dividend MCP pays back over time.

The integration platform of 2026 looks structurally different from the integration platform of 2022. It has two complementary halves. It is governed at the tool-call level, not the pipeline level. It treats AI agents as first-class consumers, not as edge cases bolted onto existing flows. The architects who get there first will spend the rest of the decade compounding that lead. The ones who do not will spend it migrating.

