The world of AI is shifting from chatbots that talk to agents that do. Over the past few months I’ve been building a self-hosted “24/7 Jarvis” that lives on my local hardware. This post is the full story — from picking the right OS to shipping a fully automated deployment pipeline.
All the code lives in the openclaw-journey repo. The README is a TL;DR if you’re in a hurry; this post goes deeper.
What is OpenClaw?
OpenClaw is an open-source autonomous AI agent framework created by Peter Steinberger. It went viral in late 2025 (337k GitHub stars and counting). Unlike a chatbot, OpenClaw gives LLMs a “body” — the ability to run terminal commands, manage files, control browsers, and proactively reach out to you via messaging apps. Persistent memory lives in plain Markdown files, which makes it inspectable and portable.
I chose OpenClaw because it is genuinely model-agnostic. One day Claude for reasoning, the next Llama via Ollama for full offline privacy. No vendor lock-in. That freedom comes with significant security responsibility — which is the main point of this post.
Note: NVIDIA introduced NemoClaw in early 2026 as a security-hardened enterprise wrapper. If you work in a regulated environment it is worth evaluating. I chose OpenClaw for the model flexibility.
Why a Proxmox VM?
My first instinct was to run OpenClaw in a container on my MacBook. Rootless Podman in a Kind cluster, network policy, the works. It is technically viable, but it has two practical problems:
1. It ties the agent to your laptop. Close the lid, agent stops. An “always-on” assistant that goes offline every time you leave your desk is not very useful.
2. macOS is the wrong substrate. Kernel namespaces, cgroup v2, and container security primitives are Linux-native. On macOS they run inside a hidden HyperKit/QEMU VM anyway — you get all the complexity with none of the transparency.
A dedicated Proxmox VM solves both. It runs 24/7 on always-on hardware. Each VM is a proper isolation boundary — a crash inside the VM does not touch the host. You can snapshot before risky experiments and restore in seconds. And the entire environment is reproducible with Terraform.
Tip: You do not need a rack server. I run Proxmox on a mini PC with 32 GB RAM. A VM with 2 vCPUs and 2 GB RAM is plenty for OpenClaw.
Choosing the Operating System
I considered four candidates for the VM guest OS:
| OS | Pros | Cons |
|---|---|---|
| Debian 12 | Minimal, stable, long LTS, AppArmor default | Slightly older packages |
| Ubuntu 24.04 LTS | Large ecosystem, up-to-date packages | Heavier default install, snap overhead |
| Alpine Linux | Tiny footprint | Binary compat issues with Node.js native modules |
| Rocky Linux 9 | Enterprise hardening, SELinux | SELinux complexity unjustified for a personal VM |
I went with Debian 12 (Bookworm). It is the right balance of minimal, secure, and well-supported. AppArmor ships enabled by default. The package selection is stable and audited. It has excellent cloud-init support and the Proxmox community maintains well-tested cloud image templates for it.
Architecture Overview
Here is what we are building:
Proxmox Host
└── VM: openclaw (Debian 12, 2 vCPU / 2 GB)
├── OpenClaw (systemd service, non-root user)
├── UFW (default deny inbound, outbound allowlist)
├── fail2ban (SSH brute-force protection)
└── Tailscale (WireGuard mesh — the only inbound channel)
Your devices (phone, laptop)
└── Tailscale → VM:4096 → OpenClaw gateway → Discord
No ports are exposed to the public internet. The only way in is through the Tailscale encrypted mesh. The only outbound traffic is through an explicit UFW allowlist.
The Repo
All automation lives in openclaw-journey:
terraform/
main.tf # VM resource + cloud-init
variables.tf # All inputs
outputs.tf # VM IP
terraform.tfvars.example # Starter config — copy and fill in
cloud-init/
user-data.yaml.tpl # Bootstrap: SSH key, base packages
ansible/
site.yml # Playbook — hardening → tailscale → openclaw
inventory.ini # Target host
ansible.cfg # Sensible defaults
vars/
secrets.yml # ansible-vault encrypted secrets
roles/
hardening/ # SSH, UFW, fail2ban, sysctl
tailscale/ # Install + auth
openclaw/ # Node.js, clone, configure, systemd service
Terraform handles the VM lifecycle. Ansible handles everything inside it. cloud-init bridges the gap — it sets up SSH access so Ansible can connect.
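To make the bridge concrete, here is a sketch of what `cloud-init/user-data.yaml.tpl` might contain. The template variable names (`${username}`, `${ssh_public_key}`) are illustrative, not necessarily the repo's exact ones:

```yaml
#cloud-config
# Illustrative sketch — the real template lives in cloud-init/user-data.yaml.tpl.
users:
  - name: ${username}
    groups: [sudo]
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ${ssh_public_key}
package_update: true
packages:
  - qemu-guest-agent   # lets Proxmox/Terraform read the VM's IP
  - python3            # required on the target for Ansible modules
runcmd:
  - systemctl enable --now qemu-guest-agent
```

The only job here is to get SSH and Python in place; everything else is Ansible's responsibility.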
Step 1: Prepare the Proxmox Template
Before Terraform can do anything, you need a cloud-init-ready Debian 12 template on your Proxmox host. This is a one-time manual step.
SSH into your Proxmox host as root and run:
wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-genericcloud-amd64.qcow2
qm create 9000 --name debian-12-cloud --memory 2048 --cores 2 \
--net0 virtio,bridge=vmbr0 --serial0 socket --vga serial0
qm importdisk 9000 debian-12-genericcloud-amd64.qcow2 local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --boot c --bootdisk scsi0
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --ipconfig0 ip=dhcp
qm set 9000 --agent enabled=1
qm template 9000
Warning: Template VM ID `9000` is a convention — change it if it conflicts with existing VMs on your host, and update `template_vm_id` in `terraform.tfvars` accordingly.
Tip: You only need to do this once. Terraform clones from this template every time you provision a new VM.
Step 2: Terraform — Provision the VM
The Terraform config uses the bpg/proxmox provider, which is the most actively maintained Proxmox provider available.
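As a rough sketch (assuming the bpg/proxmox resource schema; variable names are illustrative — the real configuration is in `terraform/main.tf`), the core VM resource looks something like this:

```hcl
# Sketch only — clone the Step 1 template and inject the SSH key via cloud-init.
resource "proxmox_virtual_environment_vm" "openclaw" {
  name      = "openclaw"
  node_name = var.proxmox_node

  clone {
    vm_id = var.template_vm_id   # the 9000 template from Step 1
  }

  cpu {
    cores = 2
  }

  memory {
    dedicated = 2048
  }

  initialization {
    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
    user_account {
      username = "debian"
      keys     = [var.ssh_public_key]
    }
  }
}
```

The `initialization` block is what drives cloud-init on first boot, so no manual console login is ever needed.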
Authentication uses API tokens rather than a username/password. Create one in Proxmox under Datacenter → Permissions → API Tokens, then export:
export PROXMOX_VE_USERNAME="root@pam"
export PROXMOX_VE_API_TOKEN="root@pam!openclaw=<your-token-secret>"
Warning: Never commit API tokens. The repo's `.gitignore` excludes `*.tfvars` and `*.tfvars.json`, but always double-check before pushing.
Clone the repo and configure your variables:
git clone https://github.com/teerakarna/openclaw-journey
cd openclaw-journey/terraform
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars — set your Proxmox URL, node, VM IP, SSH key, etc.
terraform init
terraform plan
terraform apply
Terraform clones the template, injects your SSH public key via cloud-init, and boots the VM. When it completes, the output gives you the VM’s IP address and an SSH connection string.
Step 3: Ansible — Harden and Deploy
With the VM running, point ansible/inventory.ini at the IP from Terraform’s output, then create and encrypt your secrets file:
cd ../ansible
# Create the secrets file with your real values, then encrypt it
cp vars/secrets.yml.example vars/secrets.yml
# Edit vars/secrets.yml with your Tailscale auth key, Discord token, etc.
ansible-vault encrypt vars/secrets.yml
Then run the playbook:
ansible-playbook -i inventory.ini site.yml --ask-vault-pass
The playbook runs three roles in order:
hardening — SSH config (key-only auth, root login disabled), UFW rules (default deny inbound, allowlisted outbound), fail2ban for SSH brute-force protection, unattended-upgrades for automatic security patches, and a set of sysctl hardening knobs.
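For a feel of what the UFW portion of the hardening role looks like, here is a sketch using Ansible's `community.general.ufw` module (task names and the exact port list are illustrative, not the repo's verbatim tasks):

```yaml
# Sketch of UFW tasks in the hardening role (illustrative).
- name: Default deny inbound
  community.general.ufw:
    direction: incoming
    default: deny

- name: Allow SSH inbound (tightened to the Tailscale interface later)
  community.general.ufw:
    rule: allow
    port: "22"
    proto: tcp

- name: Allow outbound HTTPS (LLM APIs, apt, GitHub)
  community.general.ufw:
    rule: allow
    direction: out
    port: "443"
    proto: tcp

- name: Enable UFW
  community.general.ufw:
    state: enabled
```

Defaulting to deny and enumerating outbound ports means a compromised agent cannot quietly open arbitrary connections.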
tailscale — Installs Tailscale from the official apt repo and authenticates using your auth key. After this role completes, you can reach the VM over its Tailscale IP.
openclaw — Installs Node.js 22 LTS, clones OpenClaw, creates the openclaw system user, writes the .env config file, and starts the service under systemd.
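The systemd unit the role writes looks roughly like this (paths and the exec command are assumptions for illustration; the hardening directives match the security summary below):

```ini
# Sketch of /etc/systemd/system/openclaw.service — illustrative paths.
[Unit]
Description=OpenClaw agent
After=network-online.target
Wants=network-online.target

[Service]
User=openclaw
Group=openclaw
WorkingDirectory=/opt/openclaw
EnvironmentFile=/opt/openclaw/.env
ExecStart=/usr/bin/node /opt/openclaw/index.js
Restart=on-failure
# Hardening knobs: no privilege escalation, private /tmp, read-only system
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/opt/openclaw

[Install]
WantedBy=multi-user.target
```

With `ProtectSystem=strict`, the agent can only write inside the paths you explicitly allow — a cheap but effective blast-radius limit.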
Tip: Run `ansible-playbook -i inventory.ini site.yml --tags hardening` first to confirm you can still SSH in after the hardening role. Catching a misconfigured `sshd_config` before you lock yourself out is much easier than recovering from it.
Warning: Once Tailscale is up and you have confirmed access over the Tailscale IP, restrict SSH in UFW to the Tailscale interface only. This closes the last publicly routable inbound port.
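One way to express that tightening as an Ansible task (a sketch — the `interface`, `direction`, and `delete` parameters are standard in `community.general.ufw`, but the task wording is illustrative):

```yaml
# Sketch: allow SSH only via the Tailscale interface, then drop the open rule.
- name: Allow SSH on the Tailscale interface only
  community.general.ufw:
    rule: allow
    interface: tailscale0
    direction: in
    port: "22"
    proto: tcp

- name: Remove the general SSH allow rule
  community.general.ufw:
    rule: allow
    port: "22"
    proto: tcp
    delete: true
```

Apply it over the Tailscale connection itself, so a mistake cannot lock you out.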
Step 4: Tailscale and Discord
After the playbook completes, your OpenClaw instance is reachable only over the Tailscale mesh.
On your phone or laptop:
- Install Tailscale and join the same tailnet.
- The OpenClaw gateway is reachable at `http://<tailscale-ip>:4096`.
- Connect your Discord bot — the bot token is already in `.env` via the Ansible role.
I use Discord as my primary interface. Separate channels (#research, #devops, #admin) keep task contexts clean, and Discord’s thread model maps naturally to long-running agentic tasks.
Warning: Always set `GATEWAY_AUTH_MODE=token` in `.env` and generate a strong random token (`openssl rand -hex 32`). Never use `none` mode — anyone on your tailnet would be able to talk to your agent without authentication.
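Generating and installing the token can be sketched like this — note that `GATEWAY_AUTH_TOKEN` is an assumed variable name here; check OpenClaw's own configuration docs for the real one:

```shell
# Generate a 32-byte random token (64 hex characters) and append it to .env.
# GATEWAY_AUTH_TOKEN is an assumed variable name -- verify against the docs.
TOKEN="$(openssl rand -hex 32)"
printf 'GATEWAY_AUTH_MODE=token\nGATEWAY_AUTH_TOKEN=%s\n' "$TOKEN" >> .env
echo "token length: ${#TOKEN}"
```

A 32-byte token is far beyond brute-force range even for an attacker already inside the tailnet.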
Security Summary
Here is a quick checklist of what the automation puts in place:
- SSH key auth only — password authentication disabled
- Root login disabled over SSH
- UFW default deny inbound, outbound allowlisted
- fail2ban watching SSH with a 1-hour ban after 5 failed attempts
- Unattended security upgrades enabled
- OpenClaw runs as a dedicated non-root `openclaw` user
- systemd service hardening: `NoNewPrivileges`, `PrivateTmp`, `ProtectSystem`
- All secrets managed through ansible-vault — nothing in plaintext in the repo
- No public ports — Tailscale is the only inbound path
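The fail2ban line in the checklist corresponds to a jail configuration along these lines (a sketch — the actual values live in the hardening role; `findtime` here is my assumption):

```ini
# Sketch of /etc/fail2ban/jail.local — ban for 1 hour after 5 failed
# SSH attempts within the findtime window.
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```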
What’s Next
This setup gives me a reproducible, hardened, always-on AI agent reachable from anywhere without exposing anything to the public internet. I can tear the whole thing down and rebuild it in under 15 minutes.
From here I want to explore:
- Snapshot-on-task — take a Proxmox snapshot before any destructive agentic task, auto-restore on failure via the Proxmox API.
- Ollama sidecar — a second VM on the same tailnet running Llama 3 locally for fully offline tasks.
- SOPS for secrets — replace ansible-vault with SOPS for a cleaner secrets workflow.
The openclaw-journey repo will keep evolving as this project grows.