# VoxBlog Production Deployment Guide

## Overview

Complete CI/CD pipeline for deploying VoxBlog to your VPS with Gitea, using Docker and Gitea Actions (similar to GitHub Actions).

## Architecture

```
┌─────────────────────────────────────────────────────────┐
│                    Your VPS Server                      │
│                                                         │
│  ┌────────────┐    ┌──────────────┐    ┌─────────────┐  │
│  │   Gitea    │    │ Gitea Runner │    │   Docker    │  │
│  │ Repository │ →  │   (CI/CD)    │ →  │ Containers  │  │
│  └────────────┘    └──────────────┘    └─────────────┘  │
│                                              ↓          │
│                 ┌────────────────────────┐              │
│                 │  voxblog-api:3301      │              │
│                 │  voxblog-admin:3300    │              │
│                 │  mysql:3306            │              │
│                 └────────────────────────┘              │
└─────────────────────────────────────────────────────────┘
```

## Project Structure

```
voxblog/
├── apps/
│   ├── api/          # Backend (Express + TypeScript)
│   └── admin/        # Frontend (React + Vite)
├── packages/
│   └── config-ts/
├── .gitea/
│   └── workflows/
│       └── deploy.yml
├── docker/
│   ├── api.Dockerfile
│   ├── admin.Dockerfile
│   └── nginx.conf
├── docker-compose.yml
└── deploy.sh
```

## Step 1: Create Dockerfiles

### API Dockerfile

```dockerfile
# docker/api.Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app

# Copy workspace files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
COPY packages/config-ts/package.json ./packages/config-ts/

# Install pnpm
RUN npm install -g pnpm

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy source
COPY apps/api ./apps/api
COPY packages/config-ts ./packages/config-ts

# Build
WORKDIR /app/apps/api
RUN pnpm run build || echo "No build script, using ts-node"

# Production image
FROM node:18-alpine
WORKDIR /app

# Install pnpm
RUN npm install -g pnpm

# Copy package files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
COPY packages/config-ts/package.json ./packages/config-ts/

# Install production dependencies only
RUN pnpm install --frozen-lockfile --prod

# Copy built app
COPY --from=builder /app/apps/api ./apps/api
COPY \
  --from=builder /app/packages/config-ts ./packages/config-ts

WORKDIR /app/apps/api
EXPOSE 3301

# Use your production start script here; running the dev server in production is not recommended
CMD ["pnpm", "run", "start"]
```

### Admin Dockerfile

```dockerfile
# docker/admin.Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app

# Copy workspace files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/admin/package.json ./apps/admin/
COPY packages/config-ts/package.json ./packages/config-ts/

# Install pnpm
RUN npm install -g pnpm

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy source
COPY apps/admin ./apps/admin
COPY packages/config-ts ./packages/config-ts

# Accept the API URL that docker-compose passes as a build arg
ARG VITE_API_URL
ENV VITE_API_URL=$VITE_API_URL

# Build
WORKDIR /app/apps/admin
RUN pnpm run build

# Production image with nginx
FROM nginx:alpine

# Copy built files
COPY --from=builder /app/apps/admin/dist /usr/share/nginx/html

# Copy nginx config
COPY docker/nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

### Nginx Config

```nginx
# docker/nginx.conf
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # SPA routing - all routes go to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
}
```

## Step 2: Docker Compose

```yaml
# docker-compose.yml
version: '3.8'

services:
  mysql:
    image: mysql:8.0
    container_name: voxblog-mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: voxblog
      MYSQL_USER: voxblog
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - mysql_data:/var/lib/mysql
    networks:
      - voxblog-network
    healthcheck:
      test: ["CMD",
             "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

  api:
    build:
      context: .
      dockerfile: docker/api.Dockerfile
    container_name: voxblog-api
    restart: unless-stopped
    ports:
      - "3301:3301"
    environment:
      NODE_ENV: production
      PORT: 3301
      DATABASE_URL: mysql://voxblog:${MYSQL_PASSWORD}@mysql:3306/voxblog
      ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      GHOST_ADMIN_API_KEY: ${GHOST_ADMIN_API_KEY}
      S3_BUCKET: ${S3_BUCKET}
      S3_REGION: ${S3_REGION}
      S3_ACCESS_KEY: ${S3_ACCESS_KEY}
      S3_SECRET_KEY: ${S3_SECRET_KEY}
      S3_ENDPOINT: ${S3_ENDPOINT}
    depends_on:
      mysql:
        condition: service_healthy
    networks:
      - voxblog-network
    volumes:
      - ./data:/app/data

  admin:
    build:
      context: .
      dockerfile: docker/admin.Dockerfile
      args:
        VITE_API_URL: ${VITE_API_URL:-http://localhost:3301}
    container_name: voxblog-admin
    restart: unless-stopped
    ports:
      - "3300:80"
    networks:
      - voxblog-network
    depends_on:
      - api

networks:
  voxblog-network:
    driver: bridge

volumes:
  mysql_data:
```

## Step 3: Gitea Actions Workflow

```yaml
# .gitea/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Create .env file
        run: |
          cat > .env << EOF
          MYSQL_ROOT_PASSWORD=${{ secrets.MYSQL_ROOT_PASSWORD }}
          MYSQL_PASSWORD=${{ secrets.MYSQL_PASSWORD }}
          ADMIN_PASSWORD=${{ secrets.ADMIN_PASSWORD }}
          OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}
          GHOST_ADMIN_API_KEY=${{ secrets.GHOST_ADMIN_API_KEY }}
          S3_BUCKET=${{ secrets.S3_BUCKET }}
          S3_REGION=${{ secrets.S3_REGION }}
          S3_ACCESS_KEY=${{ secrets.S3_ACCESS_KEY }}
          S3_SECRET_KEY=${{ secrets.S3_SECRET_KEY }}
          S3_ENDPOINT=${{ secrets.S3_ENDPOINT }}
          VITE_API_URL=${{ secrets.VITE_API_URL }}
          EOF

      - name: Build and deploy
        run: |
          docker-compose down
          docker-compose build --no-cache
          docker-compose up -d

      - name: Run database migrations
        run: |
          docker-compose exec -T api pnpm run drizzle:migrate

      -
        name: Health check
        run: |
          sleep 10
          curl -f http://localhost:3301/api/health || exit 1
          curl -f http://localhost:3300 || exit 1

      - name: Clean up old images
        run: |
          docker image prune -af --filter "until=24h"
```

## Step 4: Deployment Script (Alternative to Gitea Actions)

If Gitea Actions is not available, use a webhook + script approach:

```bash
#!/bin/bash
# deploy.sh
set -e

echo "🚀 Starting deployment..."

# Pull latest code
echo "📥 Pulling latest code..."
git pull origin main

# Create .env if not exists
if [ ! -f .env ]; then
  echo "⚠️ .env file not found! Please create it from .env.example"
  exit 1
fi

# Stop existing containers
echo "🛑 Stopping existing containers..."
docker-compose down

# Build new images
echo "🔨 Building new images..."
docker-compose build --no-cache

# Start containers
echo "▶️ Starting containers..."
docker-compose up -d

# Wait for services to be ready
echo "⏳ Waiting for services..."
sleep 10

# Run migrations
echo "🗄️ Running database migrations..."
docker-compose exec -T api pnpm run drizzle:migrate

# Health check
echo "🏥 Health check..."
if curl -f http://localhost:3301/api/health; then
  echo "✅ API is healthy"
else
  echo "❌ API health check failed"
  exit 1
fi

if curl -f http://localhost:3300; then
  echo "✅ Admin is healthy"
else
  echo "❌ Admin health check failed"
  exit 1
fi

# Clean up
echo "🧹 Cleaning up old images..."
docker image prune -af --filter "until=24h"

echo "✅ Deployment complete!"
```

Make it executable:

```bash
chmod +x deploy.sh
```

## Step 5: Gitea Webhook Setup

### Option A: Using Gitea Actions (Recommended)

1.
   **Install Gitea Runner on your VPS:**

   ```bash
   # Download Gitea Runner
   wget https://dl.gitea.com/act_runner/latest/act_runner-latest-linux-amd64
   chmod +x act_runner-latest-linux-amd64
   sudo mv act_runner-latest-linux-amd64 /usr/local/bin/act_runner

   # Register runner
   act_runner register --instance https://your-gitea-url --token YOUR_RUNNER_TOKEN

   # Run as a service (paths below are examples; adjust to where you registered the runner)
   sudo tee /etc/systemd/system/gitea-runner.service > /dev/null << 'EOF'
   [Unit]
   Description=Gitea Actions Runner
   After=network.target

   [Service]
   ExecStart=/usr/local/bin/act_runner daemon
   WorkingDirectory=/var/lib/gitea-runner
   Restart=always

   [Install]
   WantedBy=multi-user.target
   EOF

   sudo systemctl daemon-reload
   sudo systemctl enable --now gitea-runner
   ```

2. **Add repository secrets** in Gitea (Settings → Actions → Secrets): every `secrets.*` variable referenced in `.gitea/workflows/deploy.yml`.

### Option B: Webhook + deploy.sh

If the runner is not available, add a webhook in Gitea (Settings → Webhooks) that calls a small listener on the VPS, and have the listener run `./deploy.sh`.

## Best Practices

1. **Always test locally first:**
   ```bash
   docker-compose up --build
   ```
2. **Use health checks** in docker-compose.yml
3. **Backup database regularly:**
   ```bash
   docker-compose exec mysql mysqldump -u voxblog -p voxblog > backup.sql
   ```
4. **Monitor disk space:**
   ```bash
   docker system df
   docker system prune -a
   ```
5. **Use secrets management** - never commit `.env` to git
6. **Set up monitoring** (optional):
   - Portainer for Docker management
   - Grafana + Prometheus for metrics
   - Uptime Kuma for uptime monitoring

## Troubleshooting

### Container won't start

```bash
docker-compose logs api
docker-compose exec api sh  # Debug inside container
```

### Database connection issues

```bash
docker-compose exec mysql mysql -u voxblog -p
# Check if the database exists
SHOW DATABASES;
```

### Port already in use

```bash
sudo lsof -i :3301
sudo kill -9 <PID>
```

### Out of disk space

```bash
docker system prune -a --volumes
```

## Security Checklist

- [ ] Use strong passwords in `.env`
- [ ] Enable firewall (ufw)
- [ ] Keep Docker updated
- [ ] Use SSL/TLS (HTTPS)
- [ ] Limit SSH access
- [ ] Regular backups
- [ ] Monitor logs for suspicious activity
- [ ] Use Docker secrets for sensitive data (advanced)

## Next Steps

1. Create all Docker files
2. Set up Gitea Runner or webhook
3. Configure environment variables
4. Test deployment locally
5. Deploy to production
6. Set up monitoring
7. Configure backups

---

**Status**: Ready for production deployment! 🚀
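Both the workflow's `.env` generation and `deploy.sh` assume every variable is populated; a missing secret typically only surfaces as a container crash later. A small guard that fails fast before `docker-compose up` can catch this. This is an optional sketch: the `require_env` helper and the demo values are illustrative, not part of the project.

```shell
#!/bin/sh
# Sketch of a pre-deploy guard: verify that the variables docker-compose.yml
# needs are present in the environment before touching any containers.
# Call it near the top of deploy.sh, after loading .env (e.g. `set -a; . ./.env; set +a`).

require_env() {
  missing=""
  for var in "$@"; do
    # Indirect expansion: read the value of the variable named by $var
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}

# Demo with placeholder values (real values come from your .env)
MYSQL_ROOT_PASSWORD=changeme MYSQL_PASSWORD=changeme ADMIN_PASSWORD=changeme

require_env MYSQL_ROOT_PASSWORD MYSQL_PASSWORD ADMIN_PASSWORD   # prints "ok"
require_env MYSQL_ROOT_PASSWORD NOT_SET_EXAMPLE || true          # prints "missing: NOT_SET_EXAMPLE"
```

In `deploy.sh` itself you would follow a failed check with `exit 1`, mirroring the existing `.env`-file check.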