The idea of running your own AI agent is appealing. An agent that connects to your tools, understands your workflows, and executes tasks on your behalf -- deployed on infrastructure you control, processing data that never leaves your perimeter. The open-source ecosystem, led by projects like OpenClaw (formerly ClawdBot) with its 145,000 GitHub stars, has demonstrated that the demand for self-hosted AI agents is enormous.
The reality of deploying one has been less appealing. Clone a repository. Install Node.js. Configure environment variables. Paste API keys into plaintext configuration files. Install plugins. Debug dependency conflicts. Set up a reverse proxy. Configure authentication. Pray that none of the 430,000 lines of community-contributed code contains anything malicious.
That was the state of the art until recently. This guide covers a different approach: deploying a production-ready AI agent in minutes -- on the cloud, on your local machine, or in a hybrid architecture -- using Swfte's no-code builder with enterprise security built in from the first request.
The Setup Problem: Why Most AI Agent Deployments Stall
Before we walk through the deployment paths, it is worth understanding why self-hosted AI agent deployment has historically been difficult, and why this difficulty matters.
The OpenClaw Setup Process (For Context)
To deploy an OpenClaw agent on your own infrastructure, the process looks like this:
1. Install Node.js v20+ -- if you do not already have it, this involves downloading the installer or using a version manager like nvm. Estimated time: 2-5 minutes.
2. Install OpenClaw globally -- npm install -g openclaw. This downloads the core package and its dependencies. Estimated time: 1-2 minutes.
3. Run the setup wizard -- openclaw init. This interactive terminal process prompts you for AI model API keys, default model selection, and basic preferences. Estimated time: 3-5 minutes.
4. Configure API keys -- You need API keys for every AI model you want to use (OpenAI, Anthropic, Google, etc.) and every service you want to connect (Slack, GitHub, Jira, Gmail, etc.). Each key must be obtained from the respective service's developer console and pasted into OpenClaw's configuration file. These keys are stored in plaintext. Estimated time: 10-30 minutes depending on the number of services.
5. Install skills -- Browse the skill registry, select the ones you need, and install each with openclaw skill install <name>. Each skill may have its own configuration requirements. Estimated time: 5-15 minutes.
6. Debug dependency conflicts -- Community skills use a wide range of npm dependencies, and version conflicts between skills are common. Resolving these requires understanding npm's dependency resolution and often involves manually pinning versions. Estimated time: 0-30 minutes (highly variable).
7. Start the server -- openclaw start. If everything went well, you can access the web UI at localhost:3000.
Total estimated time for a developer experienced with Node.js: 10-30 minutes. For a non-developer, the process frequently takes over an hour, and OpenClaw's own survey data indicates a 62% abandonment rate among users without prior Node.js experience.
For production deployment on a cloud server, add HTTPS configuration, authentication (OpenClaw's web UI has no built-in login), firewall rules, process management, log rotation, backup, and monitoring. This extends the timeline to 2-4 hours for an experienced DevOps engineer.
And at the end of that process, you have a running agent with plaintext API keys, no execution sandboxing, no access control, no audit logging, and no compliance documentation. Every security and governance feature must be built on top, by your team, at your cost.
Why This Matters
The complexity is a filter. It filters out exactly the people who would benefit most from AI agents -- business users, operations teams, customer support managers, marketing professionals -- and limits adoption to the subset of the organization that is comfortable with terminal commands and npm. The result: AI agent capability remains siloed in engineering teams instead of being available across the organization.
The security gaps are structural. They are not bugs that will be fixed in the next release. They are architectural decisions that reflect the project's design for individual developer use. Retrofitting enterprise security onto that architecture is possible but requires significant engineering investment.
Swfte's Approach: Three Deployment Paths, One Builder
Swfte separates the act of building an AI agent from the act of deploying it. You build your agent once, using the same visual builder regardless of where it will run. Then you choose a deployment path based on your requirements for latency, data sovereignty, compliance, and control.
All three paths start in the same place: the Swfte agent builder at swfte.com/try.
Path 1: Cloud Deploy -- The Fastest Path (2 Minutes)
Cloud deployment is the fastest way to get an AI agent running in production. The agent runs on Swfte's managed infrastructure with HTTPS, monitoring, auto-scaling, and audit logging included.
Step-by-Step
Step 1: Open the agent builder. Navigate to swfte.com/try. You will see the agent builder interface -- a visual workspace where you define what your agent does, which tools it has access to, and how it behaves.
Step 2: Describe your agent. You have two options. The first is natural language: describe what you want the agent to do in plain English. "An agent that monitors our Slack channels for customer questions, searches our knowledge base for answers, and posts responses as a threaded reply." The builder translates your description into an agent configuration with the appropriate tools, knowledge sources, and behavior rules.
The second option is template-based: select from a library of pre-built agent templates covering common use cases (customer support, sales enablement, internal IT helpdesk, document processing, code review, data analysis). Templates come pre-configured with recommended tools and behavior rules that you can customize.
Step 3: Configure tools and knowledge sources. The builder presents the tools your agent needs based on your description or template. For each tool, you authorize access through a standard OAuth flow -- no API keys to copy and paste, no configuration files to edit. Your credentials are stored in Swfte's encrypted secrets vault, never in plaintext, never accessible to the agent runtime in raw form.
Knowledge sources are configured the same way. Point the agent at a Notion workspace, a Confluence instance, a Google Drive folder, a database, or upload documents directly. The builder indexes the content and makes it available for retrieval-augmented generation.
Step 4: Configure agent behavior. Set the agent's personality and rules. What tone should it use? What topics should it decline to answer? What actions require human approval before execution? What data classification levels can it access? These behavior rules are enforced at the platform level, not just at the prompt level, meaning they cannot be bypassed through prompt injection.
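To make the idea of platform-enforced behavior rules concrete, here is a purely illustrative sketch of what such a rule set might look like as declarative configuration. The field names and values below are assumptions for illustration, not Swfte's actual schema:

```yaml
# Hypothetical behavior-rule configuration (illustrative schema, not Swfte's)
behavior:
  tone: professional-friendly          # personality setting from Step 4
  declined_topics:                     # topics the agent refuses to answer
    - legal-advice
    - pricing-negotiation
  human_approval_required:             # actions gated on human sign-off
    - send-external-email
    - modify-crm-records
  max_data_classification: internal    # blocks access to "confidential" sources
```

Because rules like these would be enforced by the platform rather than embedded in the prompt, a prompt-injection attempt cannot talk the agent out of them.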
Step 5: Deploy. Click "Deploy to Cloud." Within 30-60 seconds, your agent is live with:
- HTTPS endpoint with a unique URL for API access
- Web chat interface embeddable in any website or internal tool
- Slack, Teams, or Discord integration if configured
- Auto-scaling that handles traffic spikes without configuration
- Monitoring dashboard showing request volume, latency, error rates, and costs
- Audit logging recording every interaction for compliance
- RBAC enforcement based on your organization's configured roles and permissions
Total time from start to live agent: approximately 2 minutes for a template, 5-10 minutes for a custom build. No terminal. No code. No server configuration.
When to Use Cloud Deploy
- Rapid prototyping. Test an agent concept in minutes, iterate based on results, deploy to production when ready -- all in the same platform.
- Production workloads without data sovereignty requirements. Customer support agents, sales enablement bots, internal helpdesk agents, and other use cases where data is not subject to geographic or infrastructure restrictions.
- Teams without DevOps resources. Marketing, sales, support, and operations teams can deploy and manage agents without engineering involvement.
- Multi-region availability. Swfte's cloud infrastructure runs in multiple regions, and agents can be deployed to the region closest to your users for lowest latency.
Comparison with OpenClaw
Deploying the equivalent agent with OpenClaw requires: Node.js installation, npm setup, API key configuration, skill installation, dependency resolution, server startup, and testing. Minimum 10 minutes for an experienced developer. No HTTPS, no auth, no monitoring, no audit logging, no RBAC without additional engineering work.
Path 2: Self-Hosted via Docker Compose -- Data Sovereignty in 5 Minutes
For organizations that need to keep data on their own infrastructure -- whether for regulatory compliance, data sovereignty, or security policy -- Swfte provides a self-hosted deployment path using Docker Compose. Your agent runs on your hardware, your API keys never leave your infrastructure, and all data processing happens within your network perimeter.
Step-by-Step
Step 1: Build your agent on Swfte. The agent builder process is identical to Path 1. Describe your agent, configure tools and knowledge sources, set behavior rules.
Step 2: Download the Docker Compose configuration. Instead of clicking "Deploy to Cloud," select "Self-Hosted" from the deployment options. Swfte generates a Docker Compose file (docker-compose.yml) that packages your agent configuration, the Swfte runtime, and all required services (encrypted secrets store, audit log collector, health check endpoint) into a set of containers.
The configuration file is a standard Docker Compose specification. It defines:
- Agent runtime container: the Swfte runtime executing your agent's logic
- Secrets management container: a lightweight vault service that stores credentials encrypted at rest
- Audit log container: collects and stores interaction logs locally; optionally forwards to your SIEM
- Reverse proxy container: handles HTTPS termination and request routing
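For orientation, a generated file along these lines would use standard Docker Compose syntax. The sketch below is an assumption about its shape -- the service and image names are illustrative, not Swfte's actual artifact:

```yaml
# Illustrative docker-compose.yml sketch; image names are hypothetical
services:
  agent-runtime:
    image: swfte/agent-runtime:latest    # hypothetical: executes the agent logic
    env_file: .env                       # credentials, encrypted by the vault at startup
    depends_on: [secrets-vault, audit-log]
  secrets-vault:
    image: swfte/secrets-vault:latest    # hypothetical: credentials encrypted at rest
    volumes: [vault-data:/data]
  audit-log:
    image: swfte/audit-log:latest        # hypothetical: local interaction logs
    volumes: [audit-data:/logs]
  proxy:
    image: swfte/proxy:latest            # hypothetical: HTTPS termination and routing
    ports: ["8443:8443"]
volumes:
  vault-data:
  audit-data:
```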
Step 3: Configure environment variables. The Docker Compose file references environment variables for your AI model API keys and service credentials. Set these in a .env file on your host machine or inject them through your secrets management pipeline. Unlike OpenClaw's plaintext configuration, these values are encrypted by the secrets management container at startup and never written to disk in cleartext.
Step 4: Run docker compose up. On your machine, your on-premises server, or any Docker-capable host:
docker compose up -d
The containers start, the agent initializes, and within 60-90 seconds, your agent is accessible at https://localhost:8443 (or whatever hostname you configure).
Step 5: Integrate with your applications. The self-hosted agent exposes the same API as the cloud-hosted version. Embed the chat widget in internal tools, connect it to Slack or Teams via webhook, or call the API programmatically from your applications. Authentication is handled by the platform -- users authenticate through your SSO provider, and RBAC policies are enforced locally.
Total time from Docker Compose download to running agent: approximately 5 minutes assuming Docker is already installed.
What Runs Where
This is the critical question for data sovereignty, and the answer is clear:
- Agent runtime, secrets, audit logs, and all data processing: on your infrastructure
- Agent configuration and builder UI: Swfte's cloud (used only during the build phase; the exported configuration is self-contained)
- AI model inference: depends on your model choice. Use OpenAI or Anthropic APIs (data leaves your network for inference), or configure a local model via Ollama or vLLM (all inference stays on your hardware)
For organizations that require zero data egress -- not even to AI model APIs -- the Docker Compose deployment supports local model inference out of the box. Configure the MODEL_PROVIDER environment variable to point to your local Ollama or vLLM instance, and all processing stays within your network perimeter.
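Assuming an Ollama server running on its default port, the zero-egress configuration might look like the following. MODEL_PROVIDER appears above; the other variable names are illustrative assumptions:

```shell
# Illustrative .env fragment for local inference; only MODEL_PROVIDER is documented above
MODEL_PROVIDER=ollama
MODEL_BASE_URL=http://localhost:11434   # Ollama's default API port
MODEL_NAME=llama3.1:8b                  # any model already pulled locally
```

With this in place, prompts and completions never leave the host, which is what makes the air-gapped scenario below possible.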
When to Use Docker Compose
- Data sovereignty requirements. Government agencies, healthcare organizations, financial institutions, and any organization whose data policies prohibit processing on third-party infrastructure.
- Development and testing. Run agents locally for development, then promote to cloud or CloudFormation for production.
- Air-gapped environments. With local model inference, the Docker Compose deployment can run in environments with no internet access.
- Cost optimization. For high-volume use cases, self-hosted deployment on your own hardware can be more cost-effective than cloud deployment, particularly when combined with local model inference.
Comparison with OpenClaw
OpenClaw also supports Docker deployment, and the container provides some isolation from the host system. However, the fundamental differences remain:
- Credentials: OpenClaw stores API keys in plaintext config files mounted as Docker volumes. Swfte encrypts credentials in a dedicated secrets management container.
- Sandboxing: OpenClaw skills run with full container permissions. Swfte enforces per-skill permission boundaries within the container.
- Audit logging: OpenClaw's Docker deployment does not add audit logging. Swfte includes an audit log container that captures every interaction.
- RBAC: OpenClaw's Docker deployment has no access control. Swfte enforces RBAC with SSO integration even in self-hosted mode.
- Maintenance: OpenClaw's Docker image must be manually updated. Swfte provides automated update notifications and one-command upgrades.
Path 3: Enterprise/Hybrid via CloudFormation -- Full Control in 10 Minutes
For enterprises with stringent infrastructure requirements -- VPC isolation, IAM integration, CloudWatch monitoring, compliance with internal cloud governance policies -- Swfte provides CloudFormation templates that deploy the full agent stack into your AWS account.
Step-by-Step
Step 1: Build your agent on Swfte. Same builder, same process as Paths 1 and 2.
Step 2: Download the CloudFormation template. Select "Enterprise / AWS" from the deployment options. Swfte generates a CloudFormation YAML template customized for your agent configuration.
Step 3: Review the template. The CloudFormation template is human-readable YAML that defines:
- VPC configuration: private subnets for agent runtime, public subnets for load balancer, NAT gateway for outbound API calls
- ECS Fargate service: serverless container execution for the agent runtime (no EC2 instances to manage)
- Secrets Manager integration: agent credentials stored in AWS Secrets Manager with automatic rotation
- CloudWatch: logs, metrics, and alarms pre-configured for agent health, latency, error rates, and cost
- IAM roles: least-privilege IAM roles for the agent runtime, with explicit deny policies for actions outside the agent's scope
- Application Load Balancer: HTTPS termination with your ACM certificate, health checks, and request routing
- WAF integration: optional Web Application Firewall rules for the agent's HTTPS endpoint
The template follows AWS Well-Architected Framework principles and is designed to pass enterprise cloud governance reviews without modification.
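As a rough sketch of what reviewers would see, an excerpt of such a template could look like the following. The logical IDs and property values are illustrative assumptions; only the resource types (standard CloudFormation) are real:

```yaml
# Illustrative CloudFormation excerpt; logical IDs and values are hypothetical
Resources:
  AgentService:
    Type: AWS::ECS::Service
    Properties:
      LaunchType: FARGATE                  # serverless runtime, no EC2 to manage
      DesiredCount: 2
      TaskDefinition: !Ref AgentTaskDef    # defined elsewhere in the template
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets: [!Ref PrivateSubnetA, !Ref PrivateSubnetB]
          AssignPublicIp: DISABLED         # runtime stays in private subnets
  AgentCredentials:
    Type: AWS::SecretsManager::Secret
    Properties:
      Description: Model and integration credentials for the agent runtime
```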
Step 4: Deploy the stack. In the AWS Console or via CLI:
aws cloudformation deploy \
  --template-file swfte-agent-stack.yaml \
  --stack-name my-ai-agent \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides \
    AgentConfigBucket=my-config-bucket \
    CertificateArn=arn:aws:acm:... \
    VpcCidr=10.0.0.0/16
CloudFormation provisions all resources, configures networking, deploys the agent, and outputs the agent's HTTPS endpoint. Estimated time: 8-12 minutes for the stack to complete provisioning.
Step 5: Connect to your organization's infrastructure. The CloudFormation deployment integrates natively with your existing AWS infrastructure:
- VPC peering: connect the agent's VPC to your existing VPCs for access to internal databases, APIs, and services
- AWS PrivateLink: expose the agent's endpoint through PrivateLink for access without traversing the public internet
- CloudTrail integration: all API calls made by the agent's IAM role are captured in CloudTrail
- AWS Config: resource configuration compliance is monitored automatically
- SSO via AWS IAM Identity Center: authenticate users through your organization's identity provider
What You Get
After the CloudFormation stack completes, you have:
- A production-grade AI agent running in an isolated VPC in your AWS account
- No shared infrastructure -- single-tenant deployment with resources exclusively allocated to your organization
- AWS-native security -- IAM roles, Security Groups, NACLs, VPC Flow Logs, CloudTrail, GuardDuty integration
- AWS-native monitoring -- CloudWatch dashboards, metrics, logs, and pre-configured alarms
- Automated scaling -- ECS Fargate scales the agent runtime based on request volume
- Automated credential rotation -- AWS Secrets Manager rotates API keys on a configurable schedule
- Full audit trail -- every interaction logged to CloudWatch Logs with configurable retention and S3 archival
When to Use CloudFormation
- Enterprise compliance requirements. SOC 2, HIPAA, PCI-DSS, FedRAMP, or internal cloud governance policies that require infrastructure-as-code deployment, VPC isolation, and AWS-native security controls.
- Integration with existing AWS infrastructure. Organizations that have invested in AWS networking, security, and monitoring tooling and want AI agents to participate in the same governance framework.
- Multi-account strategies. Deploy agent stacks to dedicated AWS accounts within your AWS Organization for maximum isolation.
- Regulated industries. Healthcare, financial services, government, and other sectors where deployment architecture is subject to regulatory scrutiny and audit.
Comparison with OpenClaw
There is no direct comparison here. OpenClaw does not provide infrastructure-as-code templates for any cloud provider. Deploying OpenClaw on AWS requires manually provisioning an EC2 instance, installing Node.js, configuring security groups, setting up an ALB, and building every layer of the infrastructure by hand. The engineering time for this is measured in days, not minutes, and the result still lacks the security, monitoring, and governance features that Swfte's CloudFormation template provides out of the box.
Use Cases: Matching Deployment Path to Requirements
Cloud Deploy is best for customer support agents, sales enablement bots, internal helpdesk agents, and content generation workflows -- use cases that benefit from auto-scaling, rapid iteration, and embeddable interfaces without data sovereignty constraints.
Docker Compose is best for code review agents that access proprietary codebases, medical research assistants handling PHI, financial analysis agents processing proprietary data, and development/testing environments -- any scenario where data must stay on your infrastructure.
CloudFormation is best for multi-department AI platforms needing centralized governance, regulated industry deployments requiring compliance-ready architecture, high-availability production agents with 99.9%+ uptime requirements, and government/public sector deployments with FedRAMP-aligned controls.
Security Across All Three Paths
Regardless of which deployment path you choose, every Swfte agent deployment includes:
Encrypted credential storage. API keys and OAuth tokens are encrypted at rest and in transit. They are never stored in plaintext, never written to configuration files, and never accessible to the agent runtime in their raw form.
Execution sandboxing. Each tool and integration executes in an isolated environment with explicit permission boundaries. A tool that reads from your database cannot write to the filesystem. A tool that sends Slack messages cannot make arbitrary HTTP requests. Permissions are defined at deployment time and enforced at runtime.
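Permission boundaries of this kind are naturally expressed as declarative allow/deny policies. The sketch below illustrates the model only -- the policy format, tool names, and capability strings are all hypothetical, not Swfte's actual syntax:

```yaml
# Illustrative per-tool permission policy; format and names are hypothetical
tools:
  database-reader:
    allow: [db:read]
    deny: [fs:write, net:outbound]     # read-only: no filesystem or network side effects
  slack-notifier:
    allow: [slack:chat.write]
    deny: [net:outbound:*]             # may post to Slack only, no arbitrary HTTP
```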
RBAC and SSO. Users authenticate through your identity provider. Roles determine which agents, tools, and data sources each user can access. Permission changes are logged and auditable.
Audit logging. Every interaction -- prompts, responses, tools invoked, data accessed, actions taken -- is logged to tamper-proof storage. Logs can be exported to your SIEM or retained locally.
Prompt injection protection. Input sanitization and output filtering at the platform level, independent of the underlying model's safety features. Behavior rules are enforced architecturally, not just through prompt engineering.
For a comprehensive view of Swfte's security posture, certifications, and practices, visit swfte.com/security.
Migrating from OpenClaw to Swfte
For organizations currently running OpenClaw that want to migrate to a managed platform, the process is straightforward:
Step 1: Inventory your OpenClaw skills. List the skills you are actively using. Most organizations use 15-30 of their installed skills in practice.
Step 2: Map skills to Swfte integrations. Swfte's 200+ enterprise integrations cover the most common use cases. For custom workflows, Swfte Studio provides a visual builder to recreate any skill logic.
Step 3: Migrate credentials. Remove plaintext API keys from OpenClaw's configuration files. Re-authorize services through Swfte's OAuth flows. Credentials are stored encrypted from the first moment.
Step 4: Rebuild agents. Use the Swfte agent builder to recreate your OpenClaw workflows. For most users, the builder's natural language description feature can replicate OpenClaw skill logic in minutes.
Step 5: Test and validate. Run the Swfte agent alongside your OpenClaw deployment to verify equivalent functionality. Once validated, decommission the OpenClaw instance and securely delete all plaintext configuration files.
For organizations needing assistance with migration, Swfte's customer success team provides guided migration support for enterprise customers.
The Bottom Line
Deploying your own AI agent should not require a computer science degree, a tolerance for plaintext API keys, or a weekend of terminal debugging. The technology has matured past the point where the deployment experience should be the bottleneck.
If you need an agent running in 2 minutes with no infrastructure to manage: Cloud Deploy.
If you need data sovereignty and on-premises control without sacrificing security features: Docker Compose via Swfte's self-hosted option.
If you need enterprise-grade infrastructure with VPC isolation, IAM integration, and compliance-ready architecture: CloudFormation in your own AWS account.
All three paths start in the same builder. All three paths include the same security, governance, and observability features. The only difference is where the agent runs -- and that decision is yours, not your vendor's.
For more context on why enterprises are moving from DIY AI agents to managed platforms, read our comparison: ClawdBot vs Swfte: Why Enterprises Are Choosing Managed AI Agents Over DIY. For a deeper look at the security risks of open-source AI agent deployments, see: ClawdBot, OpenClaw, and Molt Walk Into Your Production Environment.