When it came time to deploy Wooster, I had what I thought was a solid deployment strategy:
Provision a $12/month DigitalOcean droplet (2GB RAM, seemed plenty)
Clone the repo, npm install, build with Vite
Configure Nginx as a reverse proxy
Set up PM2 for process management
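In shell terms, the plan looked something like this (repo URL, paths, and names illustrative):

```bash
# SSH in, build on the droplet, serve with Nginx + PM2
ssh root@<droplet-ip>
git clone https://github.com/<me>/wooster.git && cd wooster
npm install
npm run build                     # Vite production build
sudo apt install nginx            # reverse proxy in front of the app
npm install -g pm2
pm2 start server.js --name wooster
```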
Spoiler alert: I learned a lot about Linux memory management that day.
The Great Memory Crisis of 2024
My first deployment attempt followed what seemed like a sensible pattern: clone the entire repo to my Linux droplet and build it there. After all, that's basically what I was doing in development, right?
And then:
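The build crashed with Node's classic out-of-memory failure; the tail of the log looked roughly like this:

```
<--- Last few GCs --->
...
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
```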
Turns out, Vite's build process is quite memory-intensive, and my 2GB droplet was not up to the task. After some research and a few failed attempts at increasing Node's memory limits, I realized I was approaching this wrong.
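The failed attempts looked like this (value illustrative) - raising V8's heap cap doesn't help when physical RAM is the actual ceiling:

```bash
# Tell Node it may use a bigger heap... which a 2GB droplet doesn't have
NODE_OPTIONS="--max-old-space-size=4096" npm run build
```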
The solution? GitHub Actions. Here's my actual workflow:
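In outline, it looks like this (secret names, action versions, and server paths are illustrative):

```yaml
name: Deploy Wooster
on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  build-and-deploy:
    # Run only when the PR was actually merged, not just closed
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Install and build (on GitHub's runner, not the droplet)
        run: |
          npm ci
          npm run build
        env:
          VITE_SUPABASE_URL: ${{ secrets.VITE_SUPABASE_URL }}
          VITE_SUPABASE_ANON_KEY: ${{ secrets.VITE_SUPABASE_ANON_KEY }}

      - name: Prepare target directory and permissions
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.DROPLET_HOST }}
          username: ${{ secrets.DROPLET_USER }}
          key: ${{ secrets.DROPLET_SSH_KEY }}
          script: |
            sudo mkdir -p /var/www/wooster
            sudo chown -R $USER:$USER /var/www/wooster

      - name: Copy build output to the droplet
        uses: appleboy/scp-action@v0.1.7
        with:
          host: ${{ secrets.DROPLET_HOST }}
          username: ${{ secrets.DROPLET_USER }}
          key: ${{ secrets.DROPLET_SSH_KEY }}
          source: "dist/*"
          target: /var/www/wooster
```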
The key improvements here:
Builds happen on GitHub's beefy runners, not my modest droplet
Only deploys on merged PRs, preventing accidental deployments
Properly handles environment variables and secrets
Sets up correct permissions before copying files
The OAuth Configuration Saga
Implementing Supabase auth with Google OAuth looked straightforward in the docs:
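The client-side call is only a few lines (sketch using supabase-js v2; the env variable names and callback path are mine):

```typescript
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY
)

// Kick off the Google OAuth flow; Supabase handles the redirect dance
async function signInWithGoogle() {
  const { error } = await supabase.auth.signInWithOAuth({
    provider: 'google',
    options: { redirectTo: `${window.location.origin}/auth/callback` },
  })
  if (error) console.error('Sign-in failed:', error.message)
}
```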
But Google OAuth had other plans. It required:
A valid domain (thanks GitHub Education for the free .live domain)
HTTPS configuration
Correct OAuth redirect URIs
The Case-Sensitive Catastrophe
Here's a fun one: everything worked perfectly in Windows development, but after deployment:
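The build was green locally, but on Linux I hit module-resolution errors along these lines (component name illustrative):

```
// On disk:   src/components/Navbar.tsx
// In code:   import NavBar from './components/NavBar'

[vite]: Rollup failed to resolve import "./components/NavBar" from "src/App.tsx"
```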
The culprit? Linux's case-sensitive filesystem versus Windows' case-insensitive one - a seemingly minor detail that cost me an hour of debugging.
Nginx Configuration: The Final Piece
After sorting out the build process and OAuth, here's my production Nginx configuration:
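Trimmed to the essentials (domain and paths are placeholders):

```nginx
server {
    listen 80;
    server_name wooster.example;
    root /var/www/wooster;
    index index.html;

    # Explicit content type for the SPA entry point, never cached
    location = /index.html {
        types { } default_type text/html;
        add_header Cache-Control "no-cache";
    }

    # Aggressive caching for hashed static assets (1 year)
    location /assets/ {
        expires 31536000s;
        add_header Cache-Control "public, immutable";
    }

    # API proxy with WebSocket upgrade support
    location /api/ {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }

    # SPA routing: unknown paths fall back to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }
}
```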
Key features:
Explicit content-type handling for index.html
Aggressive caching for static assets (31536000 seconds = 1 year)
Proper WebSocket support in the API proxy
SPA-friendly routing with fallback to index.html
Rate Limiting: Teaching Wooster Some Self-Control
An AI-powered app without rate limits is like a golden retriever at an all-you-can-eat buffet - enthusiastic but potentially problematic. I added two tiers of rate limiting:
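In production this is off-the-shelf Express middleware (express-rate-limit), but the logic reduces to a per-IP, per-tier window counter - here's the idea in miniature, with the numbers from the real config:

```typescript
// Illustrative fixed-window limiter; the real app uses ready-made
// middleware, but this is what it does under the hood.
type Tier = { windowMs: number; max: number }

const TIERS: Record<'api' | 'ai', Tier> = {
  api: { windowMs: 15 * 60 * 1000, max: 100 }, // general endpoints
  ai: { windowMs: 60 * 60 * 1000, max: 20 },   // expensive AI endpoints
}

const hits = new Map<string, { count: number; windowStart: number }>()

// Returns true if this request is allowed, false if it should get a 429
function allow(tier: 'api' | 'ai', ip: string, now = Date.now()): boolean {
  const { windowMs, max } = TIERS[tier]
  const key = `${tier}:${ip}`
  const entry = hits.get(key)
  if (!entry || now - entry.windowStart >= windowMs) {
    // First request in a fresh window: reset the counter
    hits.set(key, { count: 1, windowStart: now })
    return true
  }
  entry.count += 1
  return entry.count <= max
}
```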
The two-tier approach means:
Regular API endpoints get a generous 100 requests per 15 minutes
AI-powered endpoints are limited to 20 requests per hour (because API credits aren't free!)
This protects both the server and my wallet from unexpected traffic spikes.
Lessons Learned
Build processes can be surprisingly resource-intensive - use CI/CD when possible
Case sensitivity matters in cross-platform development
OAuth providers have strict security requirements - plan accordingly
Configuration details matter!
A solid CI/CD pipeline saves time and prevents deployment headaches
Next up: Adding monitoring and error tracking to Wooster, because even AI dogs need a health check now and then!