# Local Docker Testing Guide

## Prerequisites

### 1. Install Docker Desktop (Mac)

Download and install from: https://www.docker.com/products/docker-desktop

Or use Homebrew:

```bash
brew install --cask docker
```

Start Docker Desktop from Applications.

### 2. Verify Installation

```bash
docker --version
docker-compose --version
```

## Quick Start

### 1. Create Local .env File

```bash
# Copy example
cp .env.example .env

# Edit with your values
nano .env
```

**Minimal .env for local testing:**

```bash
# Database
MYSQL_ROOT_PASSWORD=localrootpass123
MYSQL_PASSWORD=localvoxblogpass123

# Application
ADMIN_PASSWORD=admin123
OPENAI_API_KEY=sk-your-actual-openai-key-here
GHOST_ADMIN_API_KEY=leave-empty-if-not-using

# S3 Storage (use your actual credentials)
S3_BUCKET=your-bucket-name
S3_REGION=us-east-1
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_ENDPOINT=https://s3.amazonaws.com

# Frontend (for local testing)
VITE_API_URL=http://localhost:3001
```

### 2. Build and Start

```bash
# Build and start all containers
docker-compose up --build

# Or run in background (detached mode)
docker-compose up --build -d
```

This will:

- Build the API Docker image
- Build the Admin Docker image
- Start the MySQL database
- Start all services

**First build takes 5-10 minutes** (downloading dependencies, building images)

### 3. Wait for Services to Start

Watch the logs until you see:

```
voxblog-mysql  | ready for connections
voxblog-api    | Server listening on port 3001
voxblog-admin  | Configuration complete
```

### 4. Access Your Application

- **Frontend**: http://localhost:3000
- **API**: http://localhost:3001
- **API Health**: http://localhost:3001/health

### 5. Test the Application

1. Open http://localhost:3000 in your browser
2. Login with your ADMIN_PASSWORD
3. Create a post
4. Upload images
5. Generate content with AI

## Useful Commands

### View Logs

```bash
# All services
docker-compose logs -f

# Specific service
docker-compose logs -f api
docker-compose logs -f admin
docker-compose logs -f mysql

# Last 50 lines
docker-compose logs --tail=50 api
```

### Check Status

```bash
# See running containers
docker-compose ps

# See all Docker containers
docker ps
```

### Stop Services

```bash
# Stop all containers
docker-compose down
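
# Optional: dump the database before removing volumes so local data can be
# restored later. This is only a sketch; it assumes the compose file passes
# MYSQL_PASSWORD into the mysql container and uses the voxblog user/database
# names from the .env example above.
docker-compose exec -T mysql sh -c 'exec mysqldump -u voxblog -p"$MYSQL_PASSWORD" voxblog' > voxblog-backup.sql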

# Stop and remove volumes (deletes database!)
docker-compose down -v
```

### Restart Services

```bash
# Restart all
docker-compose restart

# Restart specific service
docker-compose restart api
docker-compose restart admin
```

### Rebuild After Code Changes

```bash
# Rebuild and restart
docker-compose up --build

# Rebuild specific service
docker-compose up --build api
docker-compose up --build admin
```

### Access Container Shell

```bash
# API container
docker-compose exec api sh

# MySQL container
docker-compose exec mysql mysql -u voxblog -p
# Enter MYSQL_PASSWORD when prompted
```

### Clean Up Everything

```bash
# Stop and remove containers
docker-compose down

# Remove all unused images
docker image prune -a

# Remove all unused volumes
docker volume prune

# Nuclear option - clean everything
docker system prune -a --volumes
```

## Troubleshooting

### Port Already in Use

If you get a "port is already allocated" error:

```bash
# Check what's using the port
sudo lsof -i :3000
sudo lsof -i :3001

# Kill the process (use the PID reported by lsof)
kill -9 <PID>
```

Or change the ports in `docker-compose.yml`:

```yaml
ports:
  - "3002:80"  # Use 3002 instead of 3000
```

### Build Fails

```bash
# Clean build cache
docker-compose build --no-cache

# Remove old images
docker image prune -a

# Try again
docker-compose up --build
```

### Database Connection Error

```bash
# Check if MySQL is healthy
docker-compose ps

# View MySQL logs
docker-compose logs mysql

# Restart MySQL
docker-compose restart mysql
# Wait 30 seconds for MySQL to be ready
```

### Out of Disk Space

```bash
# Check Docker disk usage
docker system df

# Clean up
docker system prune -a
docker volume prune
```

### Container Keeps Restarting

```bash
# View logs to see the error
docker-compose logs api

# Common issues:
# - Missing environment variables in .env
# - Database not ready (wait longer)
# - Port conflict
```

### Can't Access Frontend

1. Check if the container is running: `docker-compose ps`
2. Check logs: `docker-compose logs admin`
3. Try accessing it directly: `curl http://localhost:3000`
4. Check if port 3000 is free: `lsof -i :3000`

### API Returns 502 Error

1. Check if the API is running: `docker-compose ps`
2. Check API logs: `docker-compose logs api`
3. Test the API directly: `curl http://localhost:3001/health`
4. Check environment variables: `docker-compose exec api env`
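
If 502s keep coming back, the four checks above can be bundled into one quick triage script. This is only a sketch; it assumes the `api` service name, port 3001, and the variable names used elsewhere in this guide.

```bash
#!/usr/bin/env bash
# Quick 502 triage (sketch): container status, recent API logs, a direct
# health probe, and a peek at the API container's environment.
docker-compose ps
docker-compose logs --tail=50 api
curl -fsS http://localhost:3001/health || echo "API health check failed"
docker-compose exec -T api env | grep -E 'MYSQL|OPENAI|S3' || echo "expected variables not found"
```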

## Development Workflow

### Making Code Changes

**Backend (API) changes:**

```bash
# Edit code in apps/api/
# Rebuild and restart
docker-compose up --build api
```

**Frontend (Admin) changes:**

```bash
# Edit code in apps/admin/
# Rebuild and restart
docker-compose up --build admin
```

### Database Changes

```bash
# Run migrations
docker-compose exec api pnpm run drizzle:migrate

# Generate a new migration
docker-compose exec api pnpm run drizzle:generate
```

### View Database

```bash
# Access MySQL
docker-compose exec mysql mysql -u voxblog -p

# Inside the MySQL shell:
SHOW DATABASES;
USE voxblog;
SHOW TABLES;

# Query data
SELECT * FROM posts;
```

## Testing Checklist

- [ ] Docker Desktop running
- [ ] `.env` file created with all values
- [ ] `docker-compose up --build` successful
- [ ] All 3 containers running (mysql, api, admin)
- [ ] Can access http://localhost:3000
- [ ] Can login with ADMIN_PASSWORD
- [ ] Can create a post
- [ ] Can upload images
- [ ] Can generate AI content
- [ ] Can save and publish

## Performance Tips

### Speed Up Builds

```bash
# Use BuildKit for faster builds
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
docker-compose up --build
```

### Reduce Image Size

Already optimized with:

- Multi-stage builds
- Alpine Linux base images
- Production dependencies only

### Persistent Data

Database data is stored in the Docker volume `mysql_data`:

```bash
# List volumes
docker volume ls

# Inspect the volume
docker volume inspect voxblog_mysql_data

# Back up the volume
docker run --rm -v voxblog_mysql_data:/data -v $(pwd):/backup alpine tar czf /backup/mysql-backup.tar.gz /data
```

## Next Steps

Once local testing is successful:

1. ✅ Commit your changes
2. ✅ Push to Gitea
3. ✅ Deploy to the VPS using deploy.sh
4. ✅ Set up the Caddy reverse proxy
5. ✅ Configure CI/CD

## Common Issues & Solutions

### Issue: "Cannot connect to Docker daemon"

**Solution**: Start Docker Desktop.

### Issue: "Port 3000 is already allocated"

**Solution**: Stop other services or change the port in docker-compose.yml.

### Issue: "Build takes too long"

**Solution**: The first build is slow (5-10 min). Subsequent builds are faster thanks to layer caching.

### Issue: "MySQL not ready"

**Solution**: Wait about 30 seconds after starting. MySQL needs time to initialize.

### Issue: "API returns 500 error"

**Solution**: Check that the .env file has all required variables, especially OPENAI_API_KEY.

### Issue: "Images not uploading"

**Solution**: Check the S3 credentials in the .env file.

## Quick Reference

```bash
# Start
docker-compose up -d

# Stop
docker-compose down

# Logs
docker-compose logs -f

# Rebuild
docker-compose up --build

# Clean up
docker-compose down -v
docker system prune -a
```

---

**Ready to test!** Start Docker Desktop, then run `docker-compose up --build` 🚀
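
If you would rather script that last step, here is a minimal bring-up sketch. It assumes a compose file in the repository root, the ports and `/health` endpoint described above, and macOS's `open` command; adjust as needed.

```bash
#!/usr/bin/env bash
# Minimal local bring-up sketch: build and start the stack, wait for the API
# health endpoint, then open the admin UI. Names and ports are the ones used
# in this guide.
set -euo pipefail

docker-compose up --build -d

# Poll the health endpoint for up to ~2 minutes.
for _ in $(seq 1 60); do
  if curl -fsS http://localhost:3001/health > /dev/null; then
    echo "API is up, opening http://localhost:3000"
    open http://localhost:3000   # macOS; use xdg-open on Linux
    exit 0
  fi
  sleep 2
done

echo "API did not become healthy in time; check: docker-compose logs api" >&2
exit 1
```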