OVH Dual-Node VPS Infrastructure
A fully automated setup toolkit for a dual-node OVH VPS Kubernetes environment with K3s, Tailscale, Cloudflare Tunnel, and a complete observability stack.

Project Overview
A production-ready, fully scripted infrastructure for two OVH VPS nodes running K3s. VPS-1 (master) hosts the K3s control plane, ArgoCD, Prometheus, Grafana, Loki, the Traefik ingress, and the Cloudflared tunnel. VPS-2 (worker) runs the K3s agent, Node Exporter, and Promtail. The databases (PostgreSQL 16, MongoDB 8, Redis 7) deliberately run in Docker outside K3s to avoid StatefulSet complexity. Tailscale provides the internal VPN mesh between the nodes, and rclone backs up data to OVH S3 and Google Drive (see the backup sketch below).
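
For illustration, a nightly job combining the database dumps and the rclone replication could look like the following sketch. The container names (postgres, mongo, redis), the backup path, and the remote names ovh-s3 and gdrive are assumptions for the example, not taken from the actual scripts.

```bash
#!/usr/bin/env bash
# Hypothetical nightly backup job: dump each Docker-hosted database,
# then replicate the archives to both rclone remotes.
set -euo pipefail

BACKUP_DIR="/var/backups/$(date +%F)"
mkdir -p "$BACKUP_DIR"

# Dump each database from its container (container names are placeholders)
docker exec postgres pg_dumpall -U postgres | gzip > "$BACKUP_DIR/postgres.sql.gz"
docker exec mongo mongodump --archive --gzip > "$BACKUP_DIR/mongo.archive.gz"
docker exec redis redis-cli SAVE
docker cp redis:/data/dump.rdb "$BACKUP_DIR/redis.rdb"

# Push to both off-site targets ("ovh-s3" and "gdrive" are assumed remote names)
rclone copy "$BACKUP_DIR" "ovh-s3:vps-backups/$(date +%F)"
rclone copy "$BACKUP_DIR" "gdrive:vps-backups/$(date +%F)"
```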
Technical Challenges & Solutions
Secure Node-to-Node Networking
OVH's private network (vRack) adds extra configuration and cost, yet the two VPS nodes still need a secure private channel for K3s cluster traffic. Tailscale's WireGuard-based mesh replaces vRack here; a minimal join sketch follows.
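
A minimal sketch of that channel, assuming a reusable Tailscale auth key and the default tailscale0 interface; the master's Tailscale IP (100.64.0.1) and the hostnames are placeholders:

```bash
# On both nodes: install Tailscale and join the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up --authkey "$TS_AUTHKEY" --hostname vps-1   # vps-2 on the worker

# On VPS-1 (master): bind K3s to the Tailscale interface so cluster
# traffic never crosses the public NIC
curl -sfL https://get.k3s.io | sh -s - server \
  --node-ip "$(tailscale ip -4)" \
  --flannel-iface tailscale0

# On VPS-2 (worker): join over the tailnet. K3S_URL points at the master's
# Tailscale IP (placeholder); the token comes from
# /var/lib/rancher/k3s/server/node-token on the master.
curl -sfL https://get.k3s.io | \
  K3S_URL="https://100.64.0.1:6443" K3S_TOKEN="$NODE_TOKEN" \
  sh -s - agent \
  --node-ip "$(tailscale ip -4)" \
  --flannel-iface tailscale0
```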
Zero-Exposure External Access
Exposing Kubernetes services via a public NodePort or LoadBalancer widens the attack surface; the requirement was HTTPS access with no inbound firewall ports open at all. Cloudflare Tunnel meets this: cloudflared only dials out to Cloudflare's edge, so nothing listens publicly on the nodes (see the sketch below).
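
A sketch of the tunnel side, assuming K3s's bundled ServiceLB exposes Traefik on the node's port 80; the tunnel name, hostname, and file paths are illustrative:

```bash
# Authenticate and create a named tunnel (an outbound-only connector)
cloudflared tunnel login
cloudflared tunnel create vps-tunnel

# Route public hostnames to the local Traefik ingress; the final
# http_status:404 rule is cloudflared's required catch-all
cat <<'EOF' > /etc/cloudflared/config.yml
tunnel: vps-tunnel
credentials-file: /root/.cloudflared/vps-tunnel.json  # placeholder path
ingress:
  - hostname: grafana.example.com
    service: http://localhost:80   # Traefik via K3s ServiceLB (assumed)
  - service: http_status:404
EOF

# Publish the DNS record and run cloudflared as a systemd service
cloudflared tunnel route dns vps-tunnel grafana.example.com
cloudflared service install
```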
Database Stability Outside K3s
Running stateful databases in Kubernetes StatefulSets adds scheduling complexity and risks data loss during node failures, so the databases run as plain Docker containers on the host instead (sketched below).
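
A sketch of that layout, with the versions from the overview. Binding the ports to the Tailscale IP, so only tailnet peers can reach the databases, is an assumption about the setup; passwords and volume names are placeholders.

```bash
TS_IP="$(tailscale ip -4)"   # expose only on the tailnet (assumed design)

docker run -d --name postgres --restart unless-stopped \
  -e POSTGRES_PASSWORD="$PG_PASSWORD" \
  -v pg-data:/var/lib/postgresql/data \
  -p "$TS_IP:5432:5432" postgres:16

docker run -d --name mongo --restart unless-stopped \
  -v mongo-data:/data/db \
  -p "$TS_IP:27017:27017" mongo:8

docker run -d --name redis --restart unless-stopped \
  -v redis-data:/data \
  -p "$TS_IP:6379:6379" redis:7 redis-server --appendonly yes
```

With --restart unless-stopped, Docker brings the containers back after a reboot without any scheduler involved, which is exactly the simplicity the design aims for.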
Architecture
- Node-to-node communication: Tailscale VPN mesh (replacing OVH vRack)
- External HTTPS: Cloudflare Tunnel (zero open inbound ports)
- Deployments: workloads shipped to K3s via ArgoCD GitOps (see the Application sketch below); Traefik handles ingress routing
- Observability: Prometheus + Grafana (metrics), Loki + Promtail (logs), Node Exporter (host metrics)
- Setup automation: numbered Bash scripts (00–06), run once per node
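
As an example of the GitOps flow, a workload could be registered with ArgoCD like this; the repo URL, path, and app name are placeholders, not the project's actual values:

```bash
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: observability          # placeholder app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/ovh-vps-infra   # placeholder repo
    targetRevision: main
    path: k8s/observability
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift to the Git state
EOF
```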
Learnings
Designing this infrastructure taught me how to compose enterprise-grade reliability patterns on a budget: using Tailscale instead of vRack, Cloudflare Tunnel instead of load balancers, and intentionally keeping stateful workloads outside Kubernetes. The result is a resilient, observable, and cost-effective self-hosted platform.