# VoxBlog Production Deployment Guide

## Overview

Complete CI/CD pipeline for deploying VoxBlog to your VPS with Gitea, using Docker and Gitea Actions (similar to GitHub Actions).

## Architecture

```
┌─────────────────────────────────────────────────────────┐
│                     Your VPS Server                      │
│                                                          │
│  ┌────────────┐    ┌──────────────┐    ┌─────────────┐  │
│  │   Gitea    │    │ Gitea Runner │    │   Docker    │  │
│  │ Repository │ →  │   (CI/CD)    │ →  │ Containers  │  │
│  └────────────┘    └──────────────┘    └─────────────┘  │
│                                               ↓          │
│                            ┌────────────────────────┐   │
│                            │  voxblog-api:3301      │   │
│                            │  voxblog-admin:3300    │   │
│                            │  mysql:3306            │   │
│                            └────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
```

## Project Structure

```
voxblog/
├── apps/
│   ├── api/              # Backend (Express + TypeScript)
│   └── admin/            # Frontend (React + Vite)
├── packages/
│   └── config-ts/
├── .gitea/
│   └── workflows/
│       └── deploy.yml
├── docker/
│   ├── api.Dockerfile
│   ├── admin.Dockerfile
│   └── nginx.conf
├── docker-compose.yml
└── deploy.sh
```

## Step 1: Create Dockerfiles

### API Dockerfile

```dockerfile
# docker/api.Dockerfile
FROM node:18-alpine AS builder

WORKDIR /app

# Copy workspace files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
COPY packages/config-ts/package.json ./packages/config-ts/

# Install pnpm
RUN npm install -g pnpm

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy source
COPY apps/api ./apps/api
COPY packages/config-ts ./packages/config-ts

# Build
WORKDIR /app/apps/api
RUN pnpm run build || echo "No build script, using ts-node"

# Production image
FROM node:18-alpine

WORKDIR /app

# Install pnpm
RUN npm install -g pnpm

# Copy package files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
COPY packages/config-ts/package.json ./packages/config-ts/

# Install production dependencies only
RUN pnpm install --frozen-lockfile --prod

# Copy built app
COPY --from=builder /app/apps/api ./apps/api
COPY --from=builder /app/packages/config-ts ./packages/config-ts

WORKDIR /app/apps/api

EXPOSE 3301

# The API has no dedicated start script yet, so the container runs the dev entry point (ts-node)
CMD ["pnpm", "run", "dev"]
```

### Admin Dockerfile

```dockerfile
# docker/admin.Dockerfile
FROM node:18-alpine AS builder

# Build-time API URL, passed in by docker-compose (build args); Vite inlines it into the bundle
ARG VITE_API_URL
ENV VITE_API_URL=$VITE_API_URL

WORKDIR /app

# Copy workspace files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/admin/package.json ./apps/admin/
COPY packages/config-ts/package.json ./packages/config-ts/

# Install pnpm
RUN npm install -g pnpm

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy source
COPY apps/admin ./apps/admin
COPY packages/config-ts ./packages/config-ts

# Build
WORKDIR /app/apps/admin
RUN pnpm run build

# Production image with nginx
FROM nginx:alpine

# Copy built files
COPY --from=builder /app/apps/admin/dist /usr/share/nginx/html

# Copy nginx config
COPY docker/nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]
```

### Nginx Config

```nginx
# docker/nginx.conf
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # SPA routing - all routes go to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
}
```
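Before wiring these images into Compose, it can be worth confirming that both Dockerfiles build on their own. A minimal check from the repository root might look like the sketch below; the `:local` tags and the sample `VITE_API_URL` value are placeholders, not part of the project:

```bash
# Build both images straight from the Dockerfiles (tags are illustrative)
docker build -f docker/api.Dockerfile -t voxblog-api:local .

# The admin bundle bakes in the API URL at build time, so pass it as a build arg
docker build -f docker/admin.Dockerfile \
  --build-arg VITE_API_URL=http://localhost:3301 \
  -t voxblog-admin:local .

# Quick smoke test: serve the built SPA on port 3300 and open it in a browser
docker run --rm -p 3300:80 voxblog-admin:local
```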
## Step 2: Docker Compose

```yaml
# docker-compose.yml
version: '3.8'

services:
  mysql:
    image: mysql:8.0
    container_name: voxblog-mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_DATABASE: ${DB_NAME:-voxblog}
      MYSQL_USER: ${DB_USER:-voxblog}
      MYSQL_PASSWORD: ${DB_PASSWORD}
    volumes:
      - mysql_data:/var/lib/mysql
    networks:
      - voxblog-network
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

  api:
    build:
      context: .
      dockerfile: docker/api.Dockerfile
    container_name: voxblog-api
    restart: unless-stopped
    ports:
      - "3301:3301"
    environment:
      NODE_ENV: production
      PORT: 3301
      DB_HOST: ${DB_HOST:-mysql}
      DB_PORT: ${DB_PORT:-3306}
      DB_USER: ${DB_USER:-voxblog}
      DB_PASSWORD: ${DB_PASSWORD}
      DB_NAME: ${DB_NAME:-voxblog}
      ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      GHOST_ADMIN_API_KEY: ${GHOST_ADMIN_API_KEY}
      GHOST_ADMIN_API_URL: ${GHOST_ADMIN_API_URL}
      S3_BUCKET: ${S3_BUCKET}
      S3_REGION: ${S3_REGION}
      S3_ACCESS_KEY: ${S3_ACCESS_KEY}
      S3_SECRET_KEY: ${S3_SECRET_KEY}
      S3_ENDPOINT: ${S3_ENDPOINT}
    depends_on:
      mysql:
        condition: service_healthy
    networks:
      - voxblog-network
    volumes:
      - ./data:/app/data

  admin:
    build:
      context: .
      dockerfile: docker/admin.Dockerfile
      args:
        VITE_API_URL: ${VITE_API_URL:-http://localhost:3301}
    container_name: voxblog-admin
    restart: unless-stopped
    ports:
      - "3300:80"
    networks:
      - voxblog-network
    depends_on:
      - api

networks:
  voxblog-network:
    driver: bridge

volumes:
  mysql_data:
```

## Step 3: Gitea Actions Workflow

```yaml
# .gitea/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      COMPOSE_PROJECT_NAME: voxblog
      INFISICAL_TOKEN: ${{ secrets.INFISICAL_TOKEN }}
      INFISICAL_SITE_URL: ${{ secrets.INFISICAL_SITE_URL }}
      INFISICAL_CLI_IMAGE: infisical/cli:latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Create placeholder .env
        run: touch .env

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Load secrets from Infisical
        shell: bash
        run: |
          set -euo pipefail

          if [ -z "${INFISICAL_TOKEN}" ]; then
            echo "INFISICAL_TOKEN is not configured"
            exit 1
          fi

          CLI_IMAGE="${INFISICAL_CLI_IMAGE:-infisical/cli:latest}"
          docker pull "$CLI_IMAGE" >/dev/null

          tmp_file=$(mktemp)

          if [ -n "${INFISICAL_API_URL:-}" ]; then
            docker run --rm \
              -e INFISICAL_TOKEN="$INFISICAL_TOKEN" \
              ${INFISICAL_SITE_URL:+-e INFISICAL_SITE_URL="$INFISICAL_SITE_URL"} \
              -e INFISICAL_API_URL="$INFISICAL_API_URL" \
              "$CLI_IMAGE" export --format=dotenv > "$tmp_file"
          elif [ -n "${INFISICAL_SITE_URL:-}" ]; then
            api_url="${INFISICAL_SITE_URL%/}/api"
            docker run --rm \
              -e INFISICAL_TOKEN="$INFISICAL_TOKEN" \
              -e INFISICAL_SITE_URL="$INFISICAL_SITE_URL" \
              -e INFISICAL_API_URL="$api_url" \
              "$CLI_IMAGE" export --format=dotenv > "$tmp_file"
          else
            docker run --rm \
              -e INFISICAL_TOKEN="$INFISICAL_TOKEN" \
              "$CLI_IMAGE" export --format=dotenv > "$tmp_file"
          fi

          cp "$tmp_file" .env

          while IFS= read -r line || [ -n "$line" ]; do
            if [ -z "$line" ] || [[ "$line" == \#* ]]; then
              continue
            fi
            key="${line%%=*}"
            value="${line#*=}"
            echo "::add-mask::$value"
            printf '%s=%s\n' "$key" "$value" >> "$GITHUB_ENV"
          done < "$tmp_file"

          rm -f "$tmp_file"

      - name: Build and deploy
        run: |
          docker-compose down
          docker-compose build --no-cache
          docker-compose up -d

      - name: Run database migrations
        run: |
          docker-compose exec -T api pnpm run drizzle:migrate

      - name: Health check
        run: |
          sleep 10
          curl -f http://localhost:3301/api/health || exit 1
          curl -f http://localhost:3300 || exit 1

      - name: Clean up old images
        run: |
          docker image prune -af --filter "until=24h"
```

## Step 4: Deployment Script (Alternative to Gitea Actions)

If Gitea Actions is not available, you can still trigger deployments by SSH, cron, or a webhook that calls `deploy.sh`. The script now:

- Prefers `INFISICAL_TOKEN` and (optionally) `INFISICAL_SITE_URL` to pull secrets from Infisical via the official CLI container.
- Falls back to a local `.env` file only when no token is exported (for development/testing).
- Runs the same build → up → migrate → health-check flow afterwards.

Before running it manually or from a webhook, export the token:

```bash
export INFISICAL_TOKEN=st.your_service_token
export INFISICAL_SITE_URL=https://secrets.yourdomain.com  # optional
./deploy.sh
```

Everything else inside the script remains unchanged; see [deploy.sh](deploy.sh) for details.
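The repository's [deploy.sh](deploy.sh) remains the source of truth. As a quick reference, the flow it implements boils down to roughly the following sketch; the Infisical and compose commands mirror the ones in the workflow above, while file layout and extra logging are simplified away:

```bash
#!/usr/bin/env bash
# Condensed sketch of the deploy flow described above (not the repository's deploy.sh)
set -euo pipefail

# 1. Secrets: prefer Infisical, fall back to an existing local .env
if [ -n "${INFISICAL_TOKEN:-}" ]; then
  docker run --rm \
    -e INFISICAL_TOKEN="$INFISICAL_TOKEN" \
    ${INFISICAL_SITE_URL:+-e INFISICAL_SITE_URL="$INFISICAL_SITE_URL"} \
    infisical/cli:latest export --format=dotenv > .env
elif [ ! -f .env ]; then
  echo "No INFISICAL_TOKEN exported and no .env file found; aborting" >&2
  exit 1
fi

# 2. Rebuild and restart the stack
docker-compose down
docker-compose build --no-cache
docker-compose up -d

# 3. Migrate, then verify both services respond
docker-compose exec -T api pnpm run drizzle:migrate
sleep 10
curl -f http://localhost:3301/api/health
curl -f http://localhost:3300
```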
## Step 5: Gitea Webhook Setup

### Option A: Using Gitea Actions (Recommended)

1. **Install Gitea Runner on your VPS:**

```bash
# Download Gitea Runner
wget https://dl.gitea.com/act_runner/latest/act_runner-latest-linux-amd64
chmod +x act_runner-latest-linux-amd64
sudo mv act_runner-latest-linux-amd64 /usr/local/bin/act_runner

# Register runner
act_runner register --instance https://your-gitea-url --token YOUR_RUNNER_TOKEN

# Run as a service (minimal unit; point WorkingDirectory at the directory
# where you ran "act_runner register" so the daemon finds its .runner file)
sudo tee /etc/systemd/system/gitea-runner.service > /dev/null <<'EOF'
[Unit]
Description=Gitea Actions Runner
After=network.target docker.service

[Service]
ExecStart=/usr/local/bin/act_runner daemon
WorkingDirectory=/root
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now gitea-runner
```

2. **Add the workflow secrets** (`INFISICAL_TOKEN` and, for a self-hosted Infisical, `INFISICAL_SITE_URL`) under the repository's **Settings → Actions → Secrets** in Gitea.

### Option B: Using a Webhook

If the runner is not an option, point a webhook handler, cron job, or SSH command at the deployment script:

```bash
cd /path/to/voxblog && ./deploy.sh
```

## Best Practices

1. **Always test locally first:**

```bash
docker-compose up --build
```

2. **Use health checks** in `docker-compose.yml`.

3. **Backup the database regularly** (a cron-ready sketch is included at the end of this guide):

```bash
docker-compose exec mysql mysqldump -u voxblog -p voxblog > backup.sql
```

4. **Monitor disk space:**

```bash
docker system df
docker system prune -a
```

5. **Use secrets management:** keep secrets in Infisical, never commit `.env`.

6. **Set up monitoring** (optional):
   - Portainer for Docker management
   - Grafana + Prometheus for metrics
   - Uptime Kuma for uptime monitoring

## Troubleshooting

### Container won't start

```bash
docker-compose logs api
docker-compose exec api sh   # Debug inside the container
```

### Database connection issues

```bash
docker-compose exec mysql mysql -u voxblog -p

# Then, at the MySQL prompt, check that the database exists:
SHOW DATABASES;
```

### Port already in use

```bash
sudo lsof -i :3301
sudo kill -9 <PID>
```

### Out of disk space

```bash
docker system prune -a --volumes
```

## Security Checklist

- [ ] Infisical secrets configured with strong values
- [ ] Enable firewall (ufw)
- [ ] Keep Docker updated
- [ ] Use SSL/TLS (HTTPS)
- [ ] Limit SSH access
- [ ] Regular backups
- [ ] Monitor logs for suspicious activity
- [ ] Use Docker secrets for sensitive data (advanced)

## Next Steps

1. Create all Docker files
2. Set up Gitea Runner or webhook
3. Configure environment variables
4. Test deployment locally
5. Deploy to production
6. Set up monitoring
7. Configure backups

---

**Status**: Ready for production deployment! 🚀
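## Appendix: Database Backup Sketch

The backup commands and the "Regular backups" checklist item above can be automated with a small cron job. The sketch below is an assumption-heavy starting point: the backup directory, retention window, script path in the crontab line, and the `voxblog` credentials all need to be adapted to your server.

```bash
#!/usr/bin/env bash
# Nightly MySQL dump for the VoxBlog stack (sketch; paths and retention are assumptions)
set -euo pipefail

BACKUP_DIR=/var/backups/voxblog   # assumed location
RETENTION_DAYS=14                 # keep two weeks of dumps
STAMP=$(date +%Y%m%d-%H%M%S)

mkdir -p "$BACKUP_DIR"

# Dump through the running container; DB_PASSWORD must be exported in the calling shell
docker-compose exec -T mysql \
  mysqldump -u voxblog -p"$DB_PASSWORD" voxblog \
  | gzip > "$BACKUP_DIR/voxblog-$STAMP.sql.gz"

# Drop dumps older than the retention window
find "$BACKUP_DIR" -name 'voxblog-*.sql.gz' -mtime +"$RETENTION_DAYS" -delete

# Example crontab entry (03:15 every night, run from the compose project directory):
# 15 3 * * * cd /path/to/voxblog && DB_PASSWORD=... /usr/local/bin/voxblog-backup.sh
```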