GCP Free Stack — Zero-Cost Self-Hosted Infrastructure
A zero-cost self-hosted infrastructure stack on GCP's free-tier e2-micro VM, using IPv6-only networking, Cloudflare Tunnel, WARP proxy, and socat to eliminate a $3.65/month IPv4 charge while running six production Docker services.

Project Overview
gcp-free-stack is a personal infrastructure project and engineering journal documenting how to run multiple production-grade Docker services on GCP's e2-micro free tier at $0/month. When GCP started billing for external IPv4 addresses (~$3.65/month as of 2024), the project systematically evaluated alternatives (Cloud NAT, at $4.67/month, was worse) before landing on a definitive solution: an IPv6-only VM, Cloudflare Tunnel for public ingress, Cloudflare WARP in SOCKS5 proxy mode to bridge the IPv4 gap for Docker image pulls, a socat TCP forwarder to expose the WARP proxy to containers, and a Cloudflare Worker as a serverless IPv4 relay for notifications. Six services — Portainer, Uptime Kuma, Umami (analytics), IT-Tools, Homer (dashboard), and Dash. (system monitor) — all run under Docker Compose on a 1 vCPU / 1 GB RAM VM.
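As a sketch of how such a stack might be declared (service names, image tags, and volume names here are illustrative assumptions, not the project's actual compose file), a minimal docker-compose.yml fragment with two representative services:

```yaml
# Hypothetical docker-compose.yml fragment — images and names are
# illustrative assumptions.
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    network_mode: host          # native IPv6 stack + direct access to WARP SOCKS5 on loopback
    volumes:
      - kuma-data:/app/data
    restart: unless-stopped
  portainer:
    image: portainer/portainer-ce:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer-data:/data
    restart: unless-stopped     # reached publicly via Cloudflare Tunnel, not published host ports
volumes:
  kuma-data:
  portainer-data:
```

Note that no service publishes a port to the public internet; Cloudflare Tunnel connects outbound and routes traffic to the local container ports.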
Technical Challenges & Solutions
Docker Image Pulls Fail on IPv6-Only Host
After removing the external IPv4 address to eliminate the $3.65/month charge, docker pull commands failed: Docker Hub, GHCR, and other registries resolve only to IPv4 addresses, so the IPv6-only VM has no route to reach them.
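One way to give dockerd itself IPv4 reach is to route its registry traffic through WARP's SOCKS5 listener — dockerd is a Go program, and Go's HTTP stack accepts socks5:// proxy URLs in the standard proxy environment variables. A sketch, assuming WARP's SOCKS5 proxy listens on 127.0.0.1:4000 as described in the architecture section:

```ini
# /etc/systemd/system/docker.service.d/proxy.conf — hypothetical drop-in.
# Routes docker pull traffic through the WARP SOCKS5 proxy, giving the
# IPv6-only host a path to IPv4-only registries.
[Service]
Environment="HTTP_PROXY=socks5://127.0.0.1:4000"
Environment="HTTPS_PROXY=socks5://127.0.0.1:4000"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After installing the drop-in, `systemctl daemon-reload && systemctl restart docker` applies it.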
Uptime Kuma Cannot Reach Monitoring Targets or Send Notifications
Uptime Kuma runs in a container and needs to both connect to IPv4 monitoring targets (external websites) and send alerts to Discord webhooks — but all those endpoints are IPv4 and the VM has no direct IPv4 egress.
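A quick sanity check of IPv4 egress through the proxy can be run from the host or a container (a sketch requiring the live VM; the bridge-gateway address 172.17.0.1 is an assumption matching Docker's default bridge network, and port 4001 is the socat forwarder described in the architecture section):

```shell
# From a bridged container, reach an IPv4-only endpoint through the
# socat-exposed WARP SOCKS5 proxy on the host.
docker run --rm curlimages/curl \
  --silent --socks5-hostname 172.17.0.1:4001 https://api.ipify.org
```

Uptime Kuma itself can then be pointed at the same SOCKS5 endpoint through its built-in proxy settings, assignable per monitor, while notifications go out via the Worker relay.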
SSH Access Without a Public IP
Removing the external IPv4 address also removes the standard SSH entry point — how to manage the VM remotely without paying for a public address?
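GCP's Identity-Aware Proxy answers this: gcloud can tunnel SSH over IAP's TCP forwarding with no external address at all. A sketch (the VM and zone names are placeholders):

```shell
# SSH to the IP-less VM through Google's IAP TCP forwarding.
# Requires a firewall rule allowing tcp:22 from IAP's range 35.235.240.0/20.
gcloud compute ssh my-vm \
  --zone=us-west1-b \
  --tunnel-through-iap
```

Traffic enters through Google's frontend and is relayed to the VM's internal interface, so no public IPv4 or IPv6 listener for SSH is ever exposed.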
Architecture
Public ingress: Cloudflare Tunnel (cloudflared, QUIC/HTTP2) terminates TLS and routes to local container ports.
VM networking: GCP dual-stack subnet with external IPv6 only (no IPv4); SSH access via GCP IAP tunnel.
IPv4 bridge: Cloudflare WARP in proxy mode (SOCKS5 on :4000) — not VPN mode, to avoid disrupting system routing.
Container proxy: a socat systemd service (TCP 0.0.0.0:4001 → 127.0.0.1:4000) exposes the WARP proxy to Docker containers.
Notification relay: a Cloudflare Worker proxies IPv6-origin webhook calls to IPv4 Discord/Slack endpoints.
Containers: Docker Compose; Uptime Kuma uses host networking for a native IPv6 stack plus WARP SOCKS5 access.
OS tuning: kernel socket buffer parameters enlarged for QUIC on 1 GB RAM.
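The socat forwarder above could be installed as a small systemd unit — a sketch; the unit name is an assumption, though warp-svc.service is the WARP daemon's actual unit name:

```ini
# /etc/systemd/system/warp-bridge.service — hypothetical unit name.
# Re-exposes WARP's loopback-only SOCKS5 listener on all interfaces so
# bridged Docker containers can reach it on port 4001.
[Unit]
Description=socat bridge: 0.0.0.0:4001 -> 127.0.0.1:4000 (WARP SOCKS5)
After=network.target warp-svc.service

[Service]
ExecStart=/usr/bin/socat TCP-LISTEN:4001,fork,reuseaddr,bind=0.0.0.0 TCP:127.0.0.1:4000
Restart=always

[Install]
WantedBy=multi-user.target
```

The OS tuning mentioned above typically means raising net.core.rmem_max and net.core.wmem_max via sysctl; cloudflared logs a warning when the UDP receive buffer is too small for QUIC.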
Learnings
This project deepened my understanding of cloud networking trade-offs at the infrastructure cost boundary. Diagnosing why docker pull fails on IPv6-only hosts (registries are IPv4-only) and solving it with a WireGuard-based SOCKS5 proxy (WARP) gave me hands-on experience with multi-layer network bridging. I also learned that WARP's proxy mode is critical on servers — full VPN mode rewrites the system routing table and locks you out of SSH. Using a Cloudflare Worker as a serverless IPv6-to-IPv4 translation layer for notification webhooks was an elegant, zero-cost architectural decision.