VoxBlog Production Deployment Guide
Overview
A complete CI/CD pipeline for deploying VoxBlog to a VPS that hosts Gitea, using Docker and Gitea Actions (Gitea's equivalent of GitHub Actions).
Architecture
┌──────────────────────────────────────────────────────────┐
│                     Your VPS Server                      │
│                                                          │
│  ┌────────────┐  ┌──────────────┐  ┌─────────────┐       │
│  │   Gitea    │  │ Gitea Runner │  │   Docker    │       │
│  │ Repository │→ │   (CI/CD)    │→ │ Containers  │       │
│  └────────────┘  └──────────────┘  └─────────────┘       │
│                                           ↓              │
│                          ┌────────────────────────┐      │
│                          │  voxblog-api:3301      │      │
│                          │  voxblog-admin:3300    │      │
│                          │  mysql:3306            │      │
│                          └────────────────────────┘      │
└──────────────────────────────────────────────────────────┘
Project Structure
voxblog/
├── apps/
│ ├── api/ # Backend (Express + TypeScript)
│ └── admin/ # Frontend (React + Vite)
├── packages/
│ └── config-ts/
├── .gitea/
│ └── workflows/
│ └── deploy.yml
├── docker/
│ ├── api.Dockerfile
│ ├── admin.Dockerfile
│ └── nginx.conf
├── docker-compose.yml
└── deploy.sh
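If you are adding these files to an existing checkout, a quick way to scaffold the deployment-related paths is shown below (a sketch; adjust if your layout differs):
# Create the deployment directories and placeholder files at the repo root
mkdir -p docker .gitea/workflows
touch docker/api.Dockerfile docker/admin.Dockerfile docker/nginx.conf
touch .gitea/workflows/deploy.yml docker-compose.yml deploy.sh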
Step 1: Create Dockerfiles
API Dockerfile
# docker/api.Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
# Copy workspace files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
COPY packages/config-ts/package.json ./packages/config-ts/
# Install pnpm
RUN npm install -g pnpm
# Install dependencies
RUN pnpm install --frozen-lockfile
# Copy source
COPY apps/api ./apps/api
COPY packages/config-ts ./packages/config-ts
# Build
WORKDIR /app/apps/api
RUN pnpm run build || echo "No build script, using ts-node"
# Production image
FROM node:18-alpine
WORKDIR /app
# Install pnpm
RUN npm install -g pnpm
# Copy package files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
COPY packages/config-ts/package.json ./packages/config-ts/
# Install production dependencies only
RUN pnpm install --frozen-lockfile --prod
# Copy built app
COPY --from=builder /app/apps/api ./apps/api
COPY --from=builder /app/packages/config-ts ./packages/config-ts
WORKDIR /app/apps/api
EXPOSE 3301
CMD ["pnpm", "run", "dev"]
Admin Dockerfile
# docker/admin.Dockerfile
FROM node:18-alpine AS builder
WORKDIR /app
# Copy workspace files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/admin/package.json ./apps/admin/
COPY packages/config-ts/package.json ./packages/config-ts/
# Install pnpm
RUN npm install -g pnpm
# Install dependencies
RUN pnpm install --frozen-lockfile
# Copy source
COPY apps/admin ./apps/admin
COPY packages/config-ts ./packages/config-ts
# Build - declare the VITE_API_URL build arg so the value passed from
# docker-compose reaches Vite at build time
ARG VITE_API_URL
ENV VITE_API_URL=$VITE_API_URL
WORKDIR /app/apps/admin
RUN pnpm run build
# Production image with nginx
FROM nginx:alpine
# Copy built files
COPY --from=builder /app/apps/admin/dist /usr/share/nginx/html
# Copy nginx config
COPY docker/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Nginx Config
# docker/nginx.conf
server {
    listen 80;
    server_name _;

    root /usr/share/nginx/html;
    index index.html;

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # SPA routing - all routes go to index.html
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
}
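You can validate the config syntax without building the full image, for example by mounting it into a stock nginx container (a sketch):
# Syntax-check docker/nginx.conf against the same base image used above
docker run --rm \
  -v "$PWD/docker/nginx.conf":/etc/nginx/conf.d/default.conf:ro \
  nginx:alpine nginx -t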
Step 2: Docker Compose
# docker-compose.yml
version: '3.8'

services:
  mysql:
    image: mysql:8.0
    container_name: voxblog-mysql
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: voxblog
      MYSQL_USER: voxblog
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - mysql_data:/var/lib/mysql
    networks:
      - voxblog-network
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5

  api:
    build:
      context: .
      dockerfile: docker/api.Dockerfile
    container_name: voxblog-api
    restart: unless-stopped
    ports:
      - "3301:3301"
    environment:
      NODE_ENV: production
      PORT: 3301
      DATABASE_URL: mysql://voxblog:${MYSQL_PASSWORD}@mysql:3306/voxblog
      ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      GHOST_ADMIN_API_KEY: ${GHOST_ADMIN_API_KEY}
      S3_BUCKET: ${S3_BUCKET}
      S3_REGION: ${S3_REGION}
      S3_ACCESS_KEY: ${S3_ACCESS_KEY}
      S3_SECRET_KEY: ${S3_SECRET_KEY}
      S3_ENDPOINT: ${S3_ENDPOINT}
    depends_on:
      mysql:
        condition: service_healthy
    networks:
      - voxblog-network
    volumes:
      - ./data:/app/data

  admin:
    build:
      context: .
      dockerfile: docker/admin.Dockerfile
      args:
        VITE_API_URL: ${VITE_API_URL:-http://localhost:3301}
    container_name: voxblog-admin
    restart: unless-stopped
    ports:
      - "3300:80"
    networks:
      - voxblog-network
    depends_on:
      - api

networks:
  voxblog-network:
    driver: bridge

volumes:
  mysql_data:
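Before the first automated deployment, it is worth confirming the compose file parses and the stack comes up on the VPS (a sketch; essentially the same commands the workflow runs, minus --no-cache):
# Validate, build, and start the stack manually once
docker-compose config --quiet && echo "compose file OK"
docker-compose build
docker-compose up -d
docker-compose ps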
Step 3: Gitea Actions Workflow
# .gitea/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Create .env file
        run: |
          cat > .env << EOF
          MYSQL_ROOT_PASSWORD=${{ secrets.MYSQL_ROOT_PASSWORD }}
          MYSQL_PASSWORD=${{ secrets.MYSQL_PASSWORD }}
          ADMIN_PASSWORD=${{ secrets.ADMIN_PASSWORD }}
          OPENAI_API_KEY=${{ secrets.OPENAI_API_KEY }}
          GHOST_ADMIN_API_KEY=${{ secrets.GHOST_ADMIN_API_KEY }}
          S3_BUCKET=${{ secrets.S3_BUCKET }}
          S3_REGION=${{ secrets.S3_REGION }}
          S3_ACCESS_KEY=${{ secrets.S3_ACCESS_KEY }}
          S3_SECRET_KEY=${{ secrets.S3_SECRET_KEY }}
          S3_ENDPOINT=${{ secrets.S3_ENDPOINT }}
          VITE_API_URL=${{ secrets.VITE_API_URL }}
          EOF

      - name: Build and deploy
        run: |
          docker-compose down
          docker-compose build --no-cache
          docker-compose up -d

      - name: Run database migrations
        run: |
          docker-compose exec -T api pnpm run drizzle:migrate

      - name: Health check
        run: |
          sleep 10
          curl -f http://localhost:3301/api/health || exit 1
          curl -f http://localhost:3300 || exit 1

      - name: Clean up old images
        run: |
          docker image prune -af --filter "until=24h"
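Once the runner is registered (Step 5) and the secrets are in place, any push to main triggers this workflow; an empty commit is a convenient way to test it without touching code:
# Trigger a test deployment from your workstation
git commit --allow-empty -m "ci: trigger production deploy"
git push origin main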
Step 4: Deployment Script (Alternative to Gitea Actions)
If Gitea Actions is not available, use a webhook + script approach:
#!/bin/bash
# deploy.sh
set -e
echo "🚀 Starting deployment..."
# Pull latest code
echo "📥 Pulling latest code..."
git pull origin main
# Create .env if not exists
if [ ! -f .env ]; then
echo "⚠️ .env file not found! Please create it from .env.example"
exit 1
fi
# Stop existing containers
echo "🛑 Stopping existing containers..."
docker-compose down
# Build new images
echo "🔨 Building new images..."
docker-compose build --no-cache
# Start containers
echo "▶️ Starting containers..."
docker-compose up -d
# Wait for services to be ready
echo "⏳ Waiting for services..."
sleep 10
# Run migrations
echo "🗄️ Running database migrations..."
docker-compose exec -T api pnpm run drizzle:migrate
# Health check
echo "🏥 Health check..."
if curl -f http://localhost:3301/api/health; then
echo "✅ API is healthy"
else
echo "❌ API health check failed"
exit 1
fi
if curl -f http://localhost:3300; then
echo "✅ Admin is healthy"
else
echo "❌ Admin health check failed"
exit 1
fi
# Clean up
echo "🧹 Cleaning up old images..."
docker image prune -af --filter "until=24h"
echo "✅ Deployment complete!"
Make it executable:
chmod +x deploy.sh
Step 5: Gitea Webhook Setup
Option A: Using Gitea Actions (Recommended)
- Install Gitea Runner on your VPS:
# Download Gitea Runner
wget https://dl.gitea.com/act_runner/latest/act_runner-latest-linux-amd64
chmod +x act_runner-latest-linux-amd64
sudo mv act_runner-latest-linux-amd64 /usr/local/bin/act_runner
# Register runner
act_runner register --instance https://your-gitea-url --token YOUR_RUNNER_TOKEN
# Run as service
sudo tee /etc/systemd/system/gitea-runner.service > /dev/null <<EOF
[Unit]
Description=Gitea Actions Runner
After=network.target
[Service]
Type=simple
User=git
WorkingDirectory=/home/git
ExecStart=/usr/local/bin/act_runner daemon
Restart=always
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable gitea-runner
sudo systemctl start gitea-runner
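After starting the service, confirm the runner is online and watching for jobs; it should also appear under Site Administration → Actions → Runners in Gitea:
sudo systemctl status gitea-runner
journalctl -u gitea-runner -f   # follow the runner logs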
- Add secrets in Gitea:
- Go to your repository → Settings → Secrets
- Add all environment variables as secrets
Option B: Using Webhook + Script
- Create webhook endpoint:
# Install webhook listener
sudo apt-get install webhook
# Create webhook config
sudo tee /etc/webhook.conf > /dev/null <<EOF
[
  {
    "id": "voxblog-deploy",
    "execute-command": "/path/to/voxblog/deploy.sh",
    "command-working-directory": "/path/to/voxblog",
    "response-message": "Deployment started",
    "trigger-rule": {
      "match": {
        "type": "payload-hash-sha256",
        "secret": "YOUR_WEBHOOK_SECRET",
        "parameter": {
          "source": "header",
          "name": "X-Gitea-Signature"
        }
      }
    }
  }
]
EOF
# Run webhook as service
sudo tee /etc/systemd/system/webhook.service > /dev/null <<EOF
[Unit]
Description=Webhook Service
After=network.target
[Service]
Type=simple
ExecStart=/usr/bin/webhook -hooks /etc/webhook.conf -verbose
Restart=always
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable webhook
sudo systemctl start webhook
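Verify the webhook listener is up and bound to its default port (9000) before pointing Gitea at it:
sudo systemctl status webhook
sudo ss -tlnp | grep 9000   # the webhook daemon listens on port 9000 by default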
- Configure Gitea webhook:
- Repository → Settings → Webhooks → Add Webhook
- URL: http://your-vps:9000/hooks/voxblog-deploy
- Secret: YOUR_WEBHOOK_SECRET
- Trigger: Push events on the main branch
Step 6: Reverse Proxy (Nginx)
# /etc/nginx/sites-available/voxblog
server {
    listen 80;
    server_name voxblog.yourdomain.com;

    # Redirect to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name voxblog.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/voxblog.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/voxblog.yourdomain.com/privkey.pem;

    # Admin frontend
    location / {
        proxy_pass http://localhost:3300;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    # API backend
    location /api {
        proxy_pass http://localhost:3301;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

        # Increase timeout for AI streaming
        proxy_read_timeout 600s;
        proxy_send_timeout 600s;
    }
}
Enable site:
sudo ln -s /etc/nginx/sites-available/voxblog /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Step 7: SSL Certificate
# Install certbot
sudo apt-get install certbot python3-certbot-nginx
# Get certificate
sudo certbot --nginx -d voxblog.yourdomain.com
# Auto-renewal is set up automatically
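The packaged certbot sets up automatic renewal (a systemd timer or cron job); a dry run confirms renewal will succeed when the certificate approaches expiry:
sudo certbot renew --dry-run
systemctl list-timers | grep certbot   # on systemd-based installs, certbot.timer handles renewal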
Step 8: Environment Variables
Create .env on your VPS:
cd /path/to/voxblog
cp .env.example .env
nano .env
Fill in all values from .env.example.
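If the repository does not yet have an .env.example, the variables referenced by docker-compose.yml and the deploy workflow are the ones below (a sketch; every value is a placeholder to replace):
# .env (placeholders only - replace with real values, never commit this file)
MYSQL_ROOT_PASSWORD=change-me
MYSQL_PASSWORD=change-me
ADMIN_PASSWORD=change-me
OPENAI_API_KEY=change-me
GHOST_ADMIN_API_KEY=change-me
S3_BUCKET=change-me
S3_REGION=change-me
S3_ACCESS_KEY=change-me
S3_SECRET_KEY=change-me
S3_ENDPOINT=https://s3.example.com
VITE_API_URL=https://voxblog.yourdomain.com/api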
Step 9: Initial Deployment
# Clone repository
cd /var/www # or your preferred location
git clone https://your-gitea-url/your-username/voxblog.git
cd voxblog
# Create .env file
cp .env.example .env
nano .env # Fill in values
# Initial deployment
./deploy.sh
Step 10: Monitoring & Logs
# View logs
docker-compose logs -f
# View specific service
docker-compose logs -f api
docker-compose logs -f admin
# Check status
docker-compose ps
# Restart services
docker-compose restart api
docker-compose restart admin
Deployment Workflow
Developer pushes to main
↓
Gitea detects push
↓
Triggers Gitea Actions / Webhook
↓
Runs deploy.sh or workflow
↓
1. Pull latest code
2. Build Docker images
3. Stop old containers
4. Start new containers
5. Run migrations
6. Health check
7. Clean up
↓
Deployment complete! ✅
Rollback Strategy
# View recent images
docker images | grep voxblog
# Rollback to previous version
docker tag voxblog-api:latest voxblog-api:backup
docker tag voxblog-api:previous voxblog-api:latest
docker-compose up -d
# Or use git
git log --oneline
git checkout <previous-commit-hash>
./deploy.sh
Best Practices
- Always test locally first:
  docker-compose up --build
- Use health checks in docker-compose.yml
- Backup database regularly (a cron sketch follows this list):
  docker-compose exec mysql mysqldump -u voxblog -p voxblog > backup.sql
- Monitor disk space:
  docker system df
  docker system prune -a
- Use secrets management - never commit .env to git
- Set up monitoring (optional):
  - Portainer for Docker management
  - Grafana + Prometheus for metrics
  - Uptime Kuma for uptime monitoring
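A minimal sketch of automating the backup item above with cron (assumes a backups/ directory exists next to docker-compose.yml and that .env defines MYSQL_PASSWORD):
# crontab -e  (run as the user that owns the deployment)
# Nightly dump at 03:00; % must be escaped inside crontab entries
0 3 * * * cd /path/to/voxblog && . ./.env && docker-compose exec -T mysql mysqldump -u voxblog -p"$MYSQL_PASSWORD" voxblog > backups/voxblog-$(date +\%F).sql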
Troubleshooting
Container won't start
docker-compose logs api
docker-compose exec api sh # Debug inside container
Database connection issues
docker-compose exec mysql mysql -u voxblog -p
# Check if database exists
SHOW DATABASES;
Port already in use
sudo lsof -i :3301
sudo kill -9 <PID>
Out of disk space
docker system prune -a --volumes
Security Checklist
- Use strong passwords in .env
- Enable firewall (ufw) - a sample ufw setup follows this checklist
- Keep Docker updated
- Use SSL/TLS (HTTPS)
- Limit SSH access
- Regular backups
- Monitor logs for suspicious activity
- Use Docker secrets for sensitive data (advanced)
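For the firewall item, a minimal ufw setup that leaves only SSH and the reverse proxy reachable from outside (a sketch):
sudo ufw allow OpenSSH   # or: sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose
# Note: ports published by Docker (3300/3301) can bypass ufw rules; bind them to
# localhost in docker-compose.yml (e.g. "127.0.0.1:3301:3301") if they should
# only be reachable through Nginx.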
Next Steps
- Create all Docker files
- Set up Gitea Runner or webhook
- Configure environment variables
- Test deployment locally
- Deploy to production
- Set up monitoring
- Configure backups
Status: Ready for production deployment! 🚀