docs: add comprehensive local Docker testing guide

- Created new LOCAL_TESTING.md with detailed instructions for setting up local development environment
- Added step-by-step setup guide covering Docker installation, environment configuration, and common commands
- Included troubleshooting section with solutions for common issues like port conflicts and build failures
- Added development workflow guidelines and testing checklist for quality assurance
- Documented performance tips and best practices for Docker
Author: Ender, 2025-10-25 23:42:24 +02:00
Parent: 51999669af
Commit: d8c41cc206
9 changed files with 450 additions and 36 deletions

LOCAL_TESTING.md (new file)

@@ -0,0 +1,409 @@
# Local Docker Testing Guide
## Prerequisites
### 1. Install Docker Desktop (Mac)
Download and install from: https://www.docker.com/products/docker-desktop
Or use Homebrew:
```bash
brew install --cask docker
```
Start Docker Desktop from Applications.
### 2. Verify Installation
```bash
docker --version
docker-compose --version
```
## Quick Start
### 1. Create Local .env File
```bash
# Copy example
cp .env.example .env
# Edit with your values
nano .env
```
**Minimal .env for local testing:**
```bash
# Database
MYSQL_ROOT_PASSWORD=localrootpass123
MYSQL_PASSWORD=localvoxblogpass123
# Application
ADMIN_PASSWORD=admin123
OPENAI_API_KEY=sk-your-actual-openai-key-here
GHOST_ADMIN_API_KEY=leave-empty-if-not-using
# S3 Storage (use your actual credentials)
S3_BUCKET=your-bucket-name
S3_REGION=us-east-1
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_ENDPOINT=https://s3.amazonaws.com
# Frontend (for local testing)
VITE_API_URL=http://localhost:3001
```
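Before the first build, it is worth sanity-checking that none of these values were left empty. A small sketch (hypothetical helper, not part of the repo; the variable list mirrors the minimal `.env` above):
```bash
# Hypothetical check (not part of the repo): verify that every variable
# the minimal .env above needs is present and non-empty.
check_env() {
  required="MYSQL_ROOT_PASSWORD MYSQL_PASSWORD ADMIN_PASSWORD OPENAI_API_KEY S3_BUCKET S3_REGION S3_ACCESS_KEY S3_SECRET_KEY"
  missing=0
  for var in $required; do
    # Each variable must appear with at least one character after "=".
    grep -q "^${var}=..*" .env || { echo "MISSING: $var"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "All required variables set"
}
# Run from the project root, next to .env:
if [ -f .env ]; then check_env; fi
```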
### 2. Build and Start
```bash
# Build and start all containers
docker-compose up --build
# Or run in background (detached mode)
docker-compose up --build -d
```
This will:
- Build the API Docker image
- Build the Admin Docker image
- Start MySQL database
- Start all services
**First build takes 5-10 minutes** (it downloads dependencies and builds both images).
### 3. Wait for Services to Start
Watch the logs until you see:
```
voxblog-mysql | ready for connections
voxblog-api | Server listening on port 3001
voxblog-admin | Configuration complete
```
### 4. Access Your Application
- **Frontend**: http://localhost:3000
- **API**: http://localhost:3001
- **API Health**: http://localhost:3001/health
### 5. Test the Application
1. Open http://localhost:3000 in your browser
2. Login with your ADMIN_PASSWORD
3. Create a post
4. Upload images
5. Generate content with AI
## Useful Commands
### View Logs
```bash
# All services
docker-compose logs -f
# Specific service
docker-compose logs -f api
docker-compose logs -f admin
docker-compose logs -f mysql
# Last 50 lines
docker-compose logs --tail=50 api
```
### Check Status
```bash
# See running containers
docker-compose ps
# See all Docker containers
docker ps
```
### Stop Services
```bash
# Stop all containers
docker-compose down
# Stop and remove volumes (deletes database!)
docker-compose down -v
```
### Restart Services
```bash
# Restart all
docker-compose restart
# Restart specific service
docker-compose restart api
docker-compose restart admin
```
### Rebuild After Code Changes
```bash
# Rebuild and restart
docker-compose up --build
# Rebuild specific service
docker-compose up --build api
docker-compose up --build admin
```
### Access Container Shell
```bash
# API container
docker-compose exec api sh
# MySQL container
docker-compose exec mysql mysql -u voxblog -p
# Enter MYSQL_PASSWORD when prompted
```
### Clean Up Everything
```bash
# Stop and remove containers
docker-compose down
# Remove all unused images
docker image prune -a
# Remove all unused volumes
docker volume prune
# Nuclear option - clean everything
docker system prune -a --volumes
```
## Troubleshooting
### Port Already in Use
If you get a "port is already allocated" error:
```bash
# Check what's using the port
sudo lsof -i :3000
sudo lsof -i :3001
# Kill the process
kill -9 <PID>
# Or change ports in docker-compose.yml
ports:
- "3002:80" # Use 3002 instead of 3000
```
### Build Fails
```bash
# Clean build cache
docker-compose build --no-cache
# Remove old images
docker image prune -a
# Try again
docker-compose up --build
```
### Database Connection Error
```bash
# Check if MySQL is healthy
docker-compose ps
# View MySQL logs
docker-compose logs mysql
# Restart MySQL
docker-compose restart mysql
# Wait 30 seconds for MySQL to be ready
```
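Instead of sleeping a fixed 30 seconds, you can poll until MySQL actually answers. A hedged sketch (assumes the `mysql` service name from docker-compose.yml and that `MYSQL_ROOT_PASSWORD` is exported from your `.env`):
```bash
# Poll mysqladmin ping inside the container until it responds.
# -T disables TTY allocation so this also works from scripts.
until docker-compose exec -T mysql mysqladmin ping -u root -p"$MYSQL_ROOT_PASSWORD" --silent; do
  echo "waiting for mysql..."
  sleep 2
done
echo "MySQL is ready"
```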
### Out of Disk Space
```bash
# Check Docker disk usage
docker system df
# Clean up
docker system prune -a
docker volume prune
```
### Container Keeps Restarting
```bash
# View logs to see error
docker-compose logs api
# Common issues:
# - Missing environment variables in .env
# - Database not ready (wait longer)
# - Port conflict
```
### Can't Access Frontend
1. Check if container is running: `docker-compose ps`
2. Check logs: `docker-compose logs admin`
3. Try accessing: `curl http://localhost:3000`
4. Check if port 3000 is free: `lsof -i :3000`
### API Returns 502 Error
1. Check if API is running: `docker-compose ps`
2. Check API logs: `docker-compose logs api`
3. Test API directly: `curl http://localhost:3001/health`
4. Check environment variables: `docker-compose exec api env`
## Development Workflow
### Making Code Changes
**Backend (API) changes:**
```bash
# Edit code in apps/api/
# Rebuild and restart
docker-compose up --build api
```
**Frontend (Admin) changes:**
```bash
# Edit code in apps/admin/
# Rebuild and restart
docker-compose up --build admin
```
### Database Changes
```bash
# Run migrations
docker-compose exec api pnpm run drizzle:migrate
# Generate new migration
docker-compose exec api pnpm run drizzle:generate
```
### View Database
```bash
# Access MySQL
docker-compose exec mysql mysql -u voxblog -p
# Show databases
SHOW DATABASES;
USE voxblog;
SHOW TABLES;
# Query data
SELECT * FROM posts;
```
## Testing Checklist
- [ ] Docker Desktop running
- [ ] `.env` file created with all values
- [ ] `docker-compose up --build` successful
- [ ] All 3 containers running (mysql, api, admin)
- [ ] Can access http://localhost:3000
- [ ] Can login with ADMIN_PASSWORD
- [ ] Can create a post
- [ ] Can upload images
- [ ] Can generate AI content
- [ ] Can save and publish
## Performance Tips
### Speed Up Builds
```bash
# Use BuildKit for faster builds
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
docker-compose up --build
```
### Reduce Image Size
The images are already optimized with:
- Multi-stage builds
- Alpine Linux base images
- Production dependencies only
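The multi-stage pattern looks roughly like this. This is a simplified sketch, not the project's actual build (see docker/api.Dockerfile for the real one); the `build` script and `dist/` path are illustrative:
```dockerfile
# Stage 1: build with full dev dependencies
FROM node:18-alpine AS builder
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN npm install -g pnpm && pnpm install
COPY . .
RUN pnpm run build

# Stage 2: ship only what production needs
FROM node:18-alpine
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN npm install -g pnpm && pnpm install --prod
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]
```
Only the second stage ends up in the final image, so compilers and dev dependencies never ship.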
### Persistent Data
Database data is stored in Docker volume `mysql_data`:
```bash
# List volumes
docker volume ls
# Inspect volume
docker volume inspect voxblog_mysql_data
# Backup volume
docker run --rm -v voxblog_mysql_data:/data -v $(pwd):/backup alpine tar czf /backup/mysql-backup.tar.gz /data
```
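Restoring that archive is the same command in reverse. A hedged sketch (assumes the volume name above and that the backup was made with the command shown; the backup archived `/data`, so tar stores entries as `data/...` and must be extracted relative to `/`):
```bash
# Stop services first so MySQL isn't writing during the restore.
docker-compose down
# Wipe the volume contents and unpack the archive back into it.
docker run --rm -v voxblog_mysql_data:/data -v $(pwd):/backup alpine \
  sh -c "rm -rf /data/* && tar xzf /backup/mysql-backup.tar.gz -C /"
docker-compose up -d
```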
## Next Steps
Once local testing is successful:
1. ✅ Commit your changes
2. ✅ Push to Gitea
3. ✅ Deploy to VPS using deploy.sh
4. ✅ Set up Caddy reverse proxy
5. ✅ Configure CI/CD
## Common Issues & Solutions
### Issue: "Cannot connect to Docker daemon"
**Solution**: Start Docker Desktop
### Issue: "Port 3000 is already allocated"
**Solution**: Stop other services or change port in docker-compose.yml
### Issue: "Build takes too long"
**Solution**: First build is slow (5-10 min). Subsequent builds are faster.
### Issue: "MySQL not ready"
**Solution**: Wait 30 seconds after starting. MySQL needs time to initialize.
### Issue: "API returns 500 error"
**Solution**: Check that your `.env` file has all required variables, especially `OPENAI_API_KEY`
### Issue: "Images not uploading"
**Solution**: Check the S3 credentials in your `.env` file
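Most of these reduce to two checks. A hedged sketch (hypothetical helper, not from the repo) that distinguishes a stopped daemon from a missing `.env`:
```bash
# Hypothetical helper (not part of the repo): checks the two most common
# failure modes above -- a stopped Docker daemon and a missing .env file.
doctor() {
  if docker info >/dev/null 2>&1; then
    echo "Docker daemon: running"
  else
    echo "Docker daemon: not reachable (start Docker Desktop)"
  fi
  if [ -f .env ]; then
    echo ".env: found"
  else
    echo ".env: missing (run: cp .env.example .env)"
  fi
}
doctor
```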
## Quick Reference
```bash
# Start
docker-compose up -d
# Stop
docker-compose down
# Logs
docker-compose logs -f
# Rebuild
docker-compose up --build
# Clean up
docker-compose down -v
docker system prune -a
```
---
**Ready to test!** Start Docker Desktop, then run `docker-compose up --build` 🚀

---

@@ -58,7 +58,6 @@ export default function EditorShell({ onLogout: _onLogout, initialPostId, onBack
refreshPreview,
toggleGenImage,
toggleReferenceImage,
triggerAutoSave,
triggerImmediateAutoSave,
} = usePostEditor(initialPostId);
@@ -199,7 +198,6 @@ export default function EditorShell({ onLogout: _onLogout, initialPostId, onBack
draftHtml={draft}
onChangeDraft={setDraft}
generatedDraft={generatedDraft}
imagePlaceholders={imagePlaceholders}
selectedImageKeys={genImageKeys}
onAutoSave={triggerImmediateAutoSave}
/>

---

@@ -10,7 +10,6 @@ export default function StepEdit({
draftHtml,
onChangeDraft,
generatedDraft,
imagePlaceholders,
selectedImageKeys,
onAutoSave,
}: {
@@ -18,7 +17,6 @@ export default function StepEdit({
draftHtml: string;
onChangeDraft: (html: string) => void;
generatedDraft?: string;
imagePlaceholders?: string[];
selectedImageKeys?: string[];
onAutoSave?: () => void;
}) {

---

@@ -1,4 +1,4 @@
import { Alert, Box, Button, Stack, Typography } from '@mui/material';
import { Alert, Box, Button, Stack } from '@mui/material';
import StepHeader from './StepHeader';
export default function StepPublish({

---

@@ -110,33 +110,6 @@ export default function Recorder({ postId, initialClips, onInsertAtCursor, onTra
})));
}, [initialClips]);
const uploadClip = async (idx: number) => {
const c = clips[idx];
if (!c) return;
if (!c.blob) {
setClips((prev) => prev.map((x, i) => i === idx ? { ...x, error: 'No local audio data for upload' } : x));
return;
}
setClips((prev) => prev.map((x, i) => i === idx ? { ...x, isUploading: true, error: '' } : x));
try {
const form = new FormData();
const ext = c.mime.includes('mp4') ? 'm4a' : 'webm';
const blob = c.blob as Blob;
form.append('audio', blob, `recording.${ext}`);
const url = postId ? `/api/media/audio?postId=${encodeURIComponent(postId)}` : '/api/media/audio';
const res = await fetch(url, { method: 'POST', body: form });
if (!res.ok) {
const txt = await res.text();
throw new Error(`Upload failed: ${res.status} ${txt}`);
}
const data = await res.json();
setClips((prev) => prev.map((x, i) => i === idx ? { ...x, id: (data.clipId || x.id), uploadedKey: data.key || 'uploaded', uploadedBucket: data.bucket || null } : x));
} catch (e: any) {
setClips((prev) => prev.map((x, i) => i === idx ? { ...x, error: e?.message || 'Upload failed' } : x));
} finally {
setClips((prev) => prev.map((x, i) => i === idx ? { ...x, isUploading: false } : x));
}
};
const transcribeClip = async (idx: number) => {
const c = clips[idx];

---

@@ -25,6 +25,8 @@ services:
build:
context: .
dockerfile: docker/api.Dockerfile
args:
PNPM_FLAGS: --no-frozen-lockfile
container_name: voxblog-api
restart: unless-stopped
ports:
@@ -55,6 +57,7 @@ services:
dockerfile: docker/admin.Dockerfile
args:
VITE_API_URL: ${VITE_API_URL:-http://localhost:3001}
PNPM_FLAGS: --no-frozen-lockfile
container_name: voxblog-admin
restart: unless-stopped
ports:

---

@@ -4,6 +4,7 @@ WORKDIR /app
# Build args
ARG VITE_API_URL=http://localhost:3001
ARG PNPM_FLAGS=--frozen-lockfile
# Copy workspace files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
@@ -13,7 +14,7 @@ COPY apps/admin/package.json ./apps/admin/
RUN npm install -g pnpm
# Install dependencies
RUN pnpm install --frozen-lockfile
RUN pnpm install ${PNPM_FLAGS}
# Copy source
COPY apps/admin ./apps/admin

---

@@ -1,6 +1,7 @@
FROM node:18-alpine AS builder
WORKDIR /app
ARG PNPM_FLAGS=--frozen-lockfile
# Copy workspace files
COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
@@ -10,7 +11,7 @@ COPY apps/api/package.json ./apps/api/
RUN npm install -g pnpm
# Install dependencies
RUN pnpm install --frozen-lockfile
RUN pnpm install ${PNPM_FLAGS}
# Copy source
COPY apps/api ./apps/api
@@ -19,6 +20,7 @@ COPY apps/api ./apps/api
FROM node:18-alpine
WORKDIR /app
ARG PNPM_FLAGS=--frozen-lockfile
# Install pnpm and ts-node
RUN npm install -g pnpm ts-node typescript
@@ -28,7 +30,7 @@ COPY package.json pnpm-lock.yaml pnpm-workspace.yaml ./
COPY apps/api/package.json ./apps/api/
# Install production dependencies
RUN pnpm install --frozen-lockfile
RUN pnpm install ${PNPM_FLAGS}
# Copy app from builder
COPY --from=builder /app/apps/api ./apps/api
@@ -42,6 +44,6 @@ EXPOSE 3001
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
CMD node -e "require('http').get('http://localhost:3001/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"
CMD node -e "require('http').get('http://localhost:3001/api/health', (r) => {process.exit(r.statusCode === 200 ? 0 : 1)})"
CMD ["pnpm", "run", "dev"]

---

docker/nginx.conf (new file)

@@ -0,0 +1,30 @@
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml;
# SPA routing - all routes go to index.html
location / {
try_files $uri $uri/ /index.html;
}
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2|ttf|eot)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# Don't cache index.html
location = /index.html {
add_header Cache-Control "no-cache, no-store, must-revalidate";
add_header Pragma "no-cache";
add_header Expires "0";
}
}