The sysadmin that watches your entire infrastructure 24/7

Pulse Patrol acts as a senior engineer that never sleeps. It automatically detects failing backups, ZFS rot, stalled containers, and config drift across Proxmox, Docker, and Kubernetes.

Secure Stripe checkout, instant license delivery after payment, and a 14-day money-back guarantee.
Join 3,200+ developers automating their monitoring
Self-hosted
Privacy-first
Bring your own LLM
Native support for
Proxmox VE
Docker
Kubernetes
Ceph
ZFS
Pulse dashboard
Pulse Patrol finding · Just now
Warning
Backup job "pbs-daily" failing
Last 3 runs failed with "connection refused". PBS at 192.168.1.50 may be unreachable.
Pricing

Choose the self-hosted plan that fits your setup.

Secure checkout by Stripe
License available immediately after payment
14-day money-back guarantee

Dashboards look cool.
But they don't fix things.

You have Grafana graphs and Zabbix alerts. But do you actually look at them? Most homelab outages happen because "alerts were noisy" or "I didn't check the dashboard."

Pulse is different. It doesn't just graph CPU usage. It analyzes why CPU is high, connects it to that failing backup job, and tells you the root cause in plain English.

Traditional Monitoring
⚠️ CPU > 90% (Node pve1)
⚠️ IO Delay > 10% (Node pve1)
⚠️ Backup Job failed
...and 20 more emails
Pulse Patrol
Root Cause Identified
"Backup job stalled causing high IO delay. CPU wait is high. Kill the stuck backup process."
What Pulse Patrol Catches

Real issues from real clusters

These aren't hypotheticals. These are real findings from production homelabs running Pulse.

💾
ZFS pool "tank" 94% full

At current growth rate, pool will be full in 3 days. ZFS performance degrades significantly above 80%.

→ Run `zfs list -o space`
🔄
VM 105 in a restart loop

6 unexpected reboots in the last 24 hours. Memory usage spikes to 98% before each restart.

→ Check memory allocation
📦
Backup job silent fail

"nightly-backup" last succeeded on Dec 18. VM 102 (postgres) has no recent protection.

→ Check PBS storage
🖥️
Node clock drift

System time is 47s behind the cluster. This can cause Ceph issues and VM migration failures.

→ Check NTP sync
🐳
Stalled container

Container "plex" running but failing health checks on port 32400 for > 2 hours.

→ Check app logs
⚙️
Config drift

Network config differs on node pve3. vmbr0 MTU is 1500 vs 9000 on other nodes.

→ Fix MTU mismatch
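The capacity math behind a finding like "full in 3 days" is straightforward. Here is a minimal sketch of the idea (illustrative only, not Pulse's actual prediction code — the growth rate would come from observed pool history):

```shell
# Sketch: estimate days until a pool fills, given current usage (%)
# and observed growth (percentage points per day). Illustrative only.
days_to_full() {
  used_pct=$1    # current pool usage, percent
  growth_pct=$2  # growth, percentage points per day
  awk -v u="$used_pct" -v g="$growth_pct" \
    'BEGIN { if (g <= 0) { print "n/a"; exit } printf "%d\n", (100 - u) / g }'
}

# Example: 94% full, growing 2 points/day
days_to_full 94 2
# prints: 3
```

Patrol does this continuously against real pool history, so the "3 days" figure tightens as the trend changes.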
★★★★★ Featured Review

"Pulse is by far the best way to monitor my Proxmox hosts in a single dashboard. No overwhelming metrics that nobody cares about. Just what you need."

r/selfhosted community member
Get Started →
# Pulse Patrol runs automatically
[17:35:02] Starting patrol run...
[17:35:02] Analyzing 3 nodes, 24 VMs
[17:35:04] Scanning logs, metrics...
[17:35:07] Patrol complete: 2 warnings, 1 critical
[CRITICAL] ZFS pool at 94%
[WARNING] Backup job failed
[WARNING] VM 105 memory pressure
Pulse Pro: Pulse Patrol
Pulse Patrol findings in Pulse dashboard
Why This Isn't Just Another Chatbot

Your infrastructure, not just a chat window

When you ask ChatGPT about your server, you have to describe everything manually. Patrol sees your entire infrastructure at once, with historical context no generic LLM can access.

📊 Full Infrastructure State
  • Proxmox nodes, VMs, containers
  • Docker/Podman hosts & containers
  • Kubernetes pods, deployments, services
  • PBS/PMG backup status & jobs
  • Ceph clusters & OSD health
  • ZFS pools & storage usage
📈 Historical Intelligence
  • 24h & 7d trends: rising, stable, or volatile
  • Learned baselines: what's normal for your env
  • Capacity predictions: "full in X days"
  • Anomaly detection: z-score deviations
  • Change tracking: config drift and migrations
  • Incident memory: past investigations
🧠 Operational Memory
  • Your notes: "runs hot for transcoding"
  • Dismissed alerts: won't nag again
  • Past remediations: what fixed it before
  • Resource correlations: cross-host patterns
  • User feedback: learns from your input
  • Suppression rules: expected behavior

This is what makes Patrol catch issues that static alerts miss.

It's not just "CPU > 90%". It's "CPU spiked to 85%, but that's normal for this VM during backups; however, the backup hasn't completed in 3 hours, which is unusual."
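The "z-score deviations" mentioned above boil down to comparing a sample against a learned baseline. A minimal sketch of that idea (illustrative only — Patrol's baselines are learned per-resource over time):

```shell
# Sketch: how far a sample deviates from a baseline, in standard
# deviations (z-score). Illustrative of the anomaly-detection idea.
zscore() {
  awk -v x="$1" -v mean="$2" -v sd="$3" \
    'BEGIN { printf "%.1f\n", (x - mean) / sd }'
}

# Example: CPU at 85% against a baseline of mean 40%, stddev 10
zscore 85 40 10
# prints: 4.5
```

A static threshold fires at "CPU > 90%" everywhere; a z-score fires when a value is abnormal for *that* resource, which is why 85% can be an anomaly on one VM and routine on another.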

Pro Exclusive

The 3am problem, solved

Traditional alerts wake you up with noise. Pulse wakes you up with answers.

Traditional monitoring · 3:14 AM
⚠️ ALERT: CPU > 90%
⚠️ ALERT: Memory > 85%
⚠️ ALERT: IO Wait > 20%
...and 12 more
"Great, everything is on fire. Where do I even start?"
Pulse Alert Analysis · 3:14 AM
Root Cause: Runaway backup process
PBS backup job for VM 102 (postgres) is stuck in a write loop, consuming 94% CPU and causing IO saturation. All other alerts are symptoms.
→ One root cause, not 15 alerts
Skip the 20-minute triage. Start fixing immediately.
Pulse uses AI to analyze every alert the moment it fires, so you don't have to correlate logs at 3am.
Pro Exclusive

Pulse that fixes, not just reports

Patrol doesn't just find problems. It can fix them too. With your approval, Pulse Assistant executes safe remediation commands on connected hosts.

🤖
Pulse Assistant
Investigating patrol finding
I found a stuck backup process on pve1. The PBS task UPID:pve1:00003A2F has been running for 6+ hours with no progress.
Recommended action: Kill the stuck task to free resources.
PROPOSED FIX
pvesh delete /nodes/pve1/tasks/UPID:pve1:00003A2F
Fix applied successfully
Task terminated. CPU usage returning to normal.
🛡️ Commands require explicit approval. You stay in control.
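Finding a task like the stuck UPID above is essentially filtering the node's task list by runtime. A minimal sketch of that filter (the input format and the 6-hour cutoff are illustrative; a real check would read the task list from the Proxmox API, e.g. `pvesh get /nodes/<node>/tasks`):

```shell
# Sketch: flag tasks running longer than a threshold. Illustrative
# only -- stdin carries "<upid> <runtime_seconds>" per line.
flag_stuck() {
  max_secs=$1
  awk -v max="$max_secs" '$2 > max { print $1 }'
}

# Example: one backup task stuck past the 6-hour (21600s) mark
printf '%s\n' \
  'UPID:pve1:00003A2F 23400' \
  'UPID:pve1:00001B10 120' | flag_stuck 21600
# prints: UPID:pve1:00003A2F
```

Patrol runs checks like this for you, then proposes the fix; nothing executes until you approve it.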
Unified Agent

One agent. Every platform.

Install once, monitor everything. The agent auto-detects Docker, Kubernetes, and Proxmox with no manual configuration.

🔍
Auto-Detection

Detects Docker, Podman, Kubernetes, and Proxmox automatically

🔄
Auto-Updates

Agent updates itself when new versions are released

📦
Single Binary

One install command. Works on Linux, macOS, and Synology

🛡️
Pulse Commands

Let Pulse run diagnostics and fixes on connected hosts

# Install on any host - Docker, K8s, Proxmox auto-detected
$ curl -fsSL http://pulse:7655/install.sh | bash -s -- --token <token>
[install] Detecting platforms...
[install] Found: Docker ✓ Kubernetes ✓ Host metrics ✓
[install] Installing pulse-agent to /usr/local/bin/
[install] Creating systemd service...
[install] ✓ Agent connected to Pulse
For Teams

Built for production environments

Running Pulse for a team or business? Pro includes compliance and collaboration features out of the box.

📋
Audit Logging

Tamper-proof logs with HMAC signing. Stream to your SIEM via webhooks. Configurable retention.

👥
Role-Based Access

Define roles and permissions. Control who can view, modify, or execute commands across your infrastructure.

🔐
SSO & SAML

OIDC and SAML support. Connect to Okta, Azure AD, Google Workspace. Map roles automatically.

📊
Infrastructure Reports

Generate PDF/CSV reports. Capacity trends, resource usage, patrol history. Share with stakeholders.

Setup

Up and running in minutes

1

Deploy Pulse

Docker or LXC. Add your Proxmox API credentials.

docker run -d pulse
2

Add your LLM key

OpenAI, Anthropic, or local Ollama. Your choice, your data.

Settings → Pulse Assistant → Add Key
3

Pulse Patrol starts

Automated scanning on your schedule. Findings appear in your dashboard.

🟢 Watching
Secure checkout by Stripe
14-day money-back guarantee
Works worldwide
Community

What users are saying

"Pulse is by far the best way to monitor my Proxmox hosts in a single dashboard."

Homelab user

"No overwhelming metrics that nobody cares about. Just CPU, memory, and disk. Simple and easy."

r/selfhosted member

"I wanted to try Pulse and instantly fell in love with it once installed."

Proxmox cluster admin

"The Setup Wizard was easy as can be. Upgrade was super easy, with no issues at all."

v5 beta tester

"Really clean, easy to look at interface. Love what you've done with this."

Docker host user

"Found this project and was very impressed. Thank you for your efforts and good work."

Multi-node user
Reviews

See Pulse in action

Community reviews of Pulse

FAQ

Common Questions

Can I use my own LLM? +
Yes! Pulse supports OpenAI, Anthropic Claude, Google Gemini, and local Ollama. Your data stays on your infrastructure. We never see it.
What's the refund policy? +
14-day money-back guarantee, no questions asked. If it's not for you, we'll refund you immediately.
Do I need a Pro license for each server? +
No. One license covers your entire Pulse installation. Top-level monitored systems are counted once for product analytics, and child resources (VMs, containers, pods, disks, and backup jobs) are all included. Self-hosted plans never charge by monitoring volume.
Will Pulse v6 change how licensing works? +
Today you are buying the current self-hosted Pulse Pro subscription. If you later move to Pulse v6, paid installs migrate to the newer activation model; legacy continuity terms are covered in separate migration guidance rather than in this FAQ.
Does it work offline / air-gapped? +
Yes! Pulse runs entirely self-hosted. Use a local LLM like Ollama for fully air-gapped environments. No internet required after setup.
Need an invoice with VAT ID? +
Yes, we provide official invoices for businesses. After purchase, email [email protected] with your company name, address, and VAT ID.
Didn't receive your license email? +
If your order completed but you can't find the license email, use the license retrieval page. Enter the email address you used at checkout and we'll send a verification code so you can reveal your current license key immediately. Also check your spam or junk folder for messages from [email protected].

Stop trading your evenings and weekends for uptime.

Ready to run Pulse in production? Choose a plan
Choose Plan Try Demo