Discord-style Chat App

A Discord-style real-time chat application with servers, channels, direct messages, and a friend system, built with Vue 3 + Go. The backend now supports K3s horizontal scaling and Redis Pub/Sub for cross-pod realtime broadcast.

Project Overview

A Discord-inspired real-time chat application with a Vue 3 frontend and a Go backend. It supports servers (guilds), channels, direct messages, friendship flows, file uploads, and WebSocket messaging. The original version centered on a single-instance in-memory WebSocket hub, then evolved toward a K3s-deployable realtime system: Redis Pub/Sub now propagates room events across pods so multiple chat-app replicas can preserve a consistent realtime experience under horizontal scaling. MongoDB, Redis, Prometheus, pprof, k6, and ArgoCD/GitOps complete the project by moving it beyond feature delivery into production-like deployment and validation.

Technical Challenges & Solutions

Evolving from Single-instance WebSockets to Cross-pod Broadcast

The original realtime layer kept room state in a single process. Once the service runs as multiple pods, a message only reaches the clients connected to the pod that owns the WebSocket connection; there is no cross-pod broadcast.

Solution:
Kept the local RoomManager model, but after writing each message to MongoDB, published it through Redis Pub/Sub on room:<type>:<id> channels. Each pod subscribes to the relevant room channel during room initialization and fans the event out to its local clients, preserving a consistent chat experience after horizontal scaling.
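The publish/subscribe fan-out can be sketched as below. This is a minimal, self-contained illustration: the real service uses a Redis client (e.g. go-redis), while here an in-memory fakeBus stands in for Redis so the sketch runs without a server; RoomManager, Join, and fanOut are illustrative names, not necessarily the project's actual identifiers.

```go
package main

import (
	"fmt"
	"sync"
)

// Publisher abstracts the Redis Pub/Sub client; the in-memory fake below
// stands in for Redis so this sketch runs without a server.
type Publisher interface {
	Publish(channel string, payload []byte)
	Subscribe(channel string, handler func(payload []byte))
}

// roomChannel builds the channel name used in this project: room:<type>:<id>.
func roomChannel(roomType, roomID string) string {
	return fmt.Sprintf("room:%s:%s", roomType, roomID)
}

// RoomManager tracks only the WebSocket clients connected to THIS pod.
type RoomManager struct {
	mu      sync.Mutex
	clients map[string][]chan []byte // room channel -> local client send queues
}

func NewRoomManager() *RoomManager {
	return &RoomManager{clients: make(map[string][]chan []byte)}
}

// Join registers a local client; on the room's first local join, the pod
// subscribes to the room's channel so remote events are fanned out locally.
func (m *RoomManager) Join(pub Publisher, ch string) chan []byte {
	m.mu.Lock()
	defer m.mu.Unlock()
	c := make(chan []byte, 8)
	if len(m.clients[ch]) == 0 {
		pub.Subscribe(ch, func(p []byte) { m.fanOut(ch, p) })
	}
	m.clients[ch] = append(m.clients[ch], c)
	return c
}

// fanOut delivers a published event to every local client in the room.
func (m *RoomManager) fanOut(ch string, payload []byte) {
	m.mu.Lock()
	defer m.mu.Unlock()
	for _, c := range m.clients[ch] {
		select {
		case c <- payload:
		default: // drop rather than block on a slow client
		}
	}
}

// fakeBus is an in-memory stand-in for Redis Pub/Sub.
type fakeBus struct {
	mu   sync.Mutex
	subs map[string][]func([]byte)
}

func (b *fakeBus) Publish(ch string, p []byte) {
	b.mu.Lock()
	hs := append([]func([]byte){}, b.subs[ch]...)
	b.mu.Unlock()
	for _, h := range hs {
		h(p)
	}
}

func (b *fakeBus) Subscribe(ch string, h func([]byte)) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.subs == nil {
		b.subs = map[string][]func([]byte){}
	}
	b.subs[ch] = append(b.subs[ch], h)
}

func main() {
	bus := &fakeBus{}
	m := NewRoomManager()
	ch := roomChannel("guild", "42")
	client := m.Join(bus, ch)
	// After persisting to MongoDB, the message handler publishes the event;
	// every subscribed pod (including this one) fans it out to local clients.
	bus.Publish(ch, []byte("hello"))
	fmt.Println(ch, string(<-client))
}
```

The key property is that each pod only ever writes to sockets it owns; Redis is the single broadcast plane between pods.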

JWT Auth & Security

Implementing a secure, stateless auth system for an SPA while keeping login, refresh, and revocation flows consistent across multiple pods.

Solution:
Adopted a JWT access/refresh token rolling strategy. The refresh token is stored in an httpOnly cookie and persisted in MongoDB so any replica can validate or revoke it. CSRF middleware adds an extra defense layer.
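The rolling-refresh flow can be sketched as below. To stay self-contained it omits the JWT library and MongoDB driver: a package-level map stands in for the MongoDB token collection, and the function names (newRefreshToken, rotate, setRefreshCookie) are illustrative assumptions, not the project's actual API.

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
	"time"
)

// tokenStore stands in for the MongoDB collection that persists refresh
// tokens so any replica can validate or revoke them. Only a hash is stored.
var tokenStore = map[string]string{} // sha256(token) -> userID

func newRefreshToken(userID string) (string, error) {
	raw := make([]byte, 32)
	if _, err := rand.Read(raw); err != nil {
		return "", err
	}
	token := hex.EncodeToString(raw)
	sum := sha256.Sum256([]byte(token))
	tokenStore[hex.EncodeToString(sum[:])] = userID
	return token, nil
}

// setRefreshCookie issues the refresh token as an httpOnly cookie so the SPA
// cannot read it from JavaScript; Secure and SameSite harden it further.
func setRefreshCookie(w http.ResponseWriter, token string) {
	http.SetCookie(w, &http.Cookie{
		Name:     "refresh_token",
		Value:    token,
		Path:     "/api/auth", // assumed refresh endpoint prefix
		HttpOnly: true,
		Secure:   true,
		SameSite: http.SameSiteStrictMode,
		Expires:  time.Now().Add(7 * 24 * time.Hour),
	})
}

// rotate validates an incoming refresh token against the shared store,
// revokes it, and issues a replacement — the "rolling" part of the strategy.
func rotate(old string) (string, bool) {
	sum := sha256.Sum256([]byte(old))
	key := hex.EncodeToString(sum[:])
	userID, ok := tokenStore[key]
	if !ok {
		return "", false // unknown or already-revoked token
	}
	delete(tokenStore, key) // single-use: revoke on rotation
	next, err := newRefreshToken(userID)
	return next, err == nil
}

func main() {
	first, _ := newRefreshToken("user-1")
	_, ok := rotate(first)
	_, replayed := rotate(first) // the old token must now be rejected
	fmt.Println(ok, replayed)    // true false
}
```

Because validity lives in the shared store rather than in any one pod's memory, any replica can refresh or revoke, which is what keeps the flow consistent under horizontal scaling.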

K3s Deployment and Horizontal Scaling Validation

Docker Compose can run the app, but it does not validate real multi-replica load balancing, readiness probes, rolling updates, or autoscaling behavior.

Solution:
Added Kubernetes manifests, Kustomize overlays, and an ArgoCD Application so the system can run on K3s with Deployments and Services. HPA scales the app between 2 and 10 pods based on CPU and memory, and the project includes repeatable local K8s scaling commands for validation.
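The 2–10 pod policy corresponds to an autoscaling/v2 HorizontalPodAutoscaler roughly like the sketch below; the resource name and utilization thresholds are assumptions, not the project's actual manifest.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: chat-app            # assumed Deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: chat-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # assumed threshold
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80   # assumed threshold
```

Utilization targets are computed against the pods' resource requests, so the Deployment must declare requests for cpu and memory or the HPA cannot act.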

File Storage Under Stateless Pods

Chat apps need avatars and server images, but K3s pods are stateless. A local uploads directory can disappear or diverge when pods restart or scale.

Solution:
Abstracted file storage behind a provider layer. Development keeps fast local uploads, while the architecture preserves a clean switch path to MinIO or another object storage backend when the deployment environment requires shared persistent storage.

Observability and Scaling Validation

After scaling out, it becomes difficult to tell whether bottlenecks come from WebSockets, Redis, MongoDB, or pod resource limits unless the system is observable.

Solution:
Integrated go-gin-prometheus for metrics, added pprof and monitoring guidance, and used k6 to validate single-instance, multi-instance Docker Compose, and Kubernetes scaling scenarios so performance tuning and HPA behavior are backed by data.

Architecture

Frontend: Vue 3 + TypeScript + Pinia + Element Plus.

Backend: Go 1.25 + Gin + gorilla/websocket, organized with Controller → Service → Repository layering and handwritten dependency injection.

Realtime: RoomManager tracks rooms and local clients. After persisting a message, MessageHandler publishes the event to a Redis channel of the form room:<type>:<id>; each pod subscribes when a room is initialized and forwards incoming events only to its own local WebSocket clients, solving cross-instance broadcast after horizontal scaling.

Deployment & validation: K3s, Deployments, HPA, Kustomize, and ArgoCD for deployment; Prometheus, pprof, and k6 to observe and validate scaling behavior.

Learnings

This project pushed me beyond building a single-node realtime app toward designing a scalable realtime system. I not only learned Go backend development, JWT flows, and WebSocket lifecycle management, but also had to address the harder problem that appears after scaling out: once WebSocket connections are spread across multiple pods, in-memory broadcast is no longer enough. Implementing Redis Pub/Sub and validating the behavior on K3s with HPA made me understand how realtime data flow, stateless deployments, monitoring, and load testing need to be designed together.

Tech Stack

Frontend

Vue 3, TypeScript, Element Plus, Pinia, UnoCSS

Backend

Go 1.25, Gin, gorilla/websocket, JWT, Redis Pub/Sub

Database

MongoDB, Redis

Container Orchestration

K3s, HPA, Kustomize, ArgoCD

Monitoring & Observability

Prometheus, pprof, k6