Streaming Persistence Across Navigation

Problem Solved

Before: When navigating away from the Generate step during AI generation, the streaming would stop and all progress would be lost.

After: Streaming continues in the background even when you navigate to other steps. Content is preserved and available when you return.

How It Works

State Management Architecture

usePostEditor Hook (Persistent)
    ↓
    ├─ isGenerating: boolean
    ├─ streamingContent: string
    ├─ tokenCount: number
    └─ generationError: string
    ↓
EditorShell (Parent)
    ↓
StepGenerate (Child)
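
Expressed as a type, the persistent slice might look like the sketch below. The field names mirror the diagram; the exact shape in usePostEditor.ts may differ.

// Hypothetical shape of the streaming slice held by usePostEditor.
// Names mirror the diagram above; the real hook may differ.
interface StreamingState {
  isGenerating: boolean;      // true while the stream is open
  streamingContent: string;   // accumulated generated text
  tokenCount: number;         // tokens received so far
  generationError: string;    // last error message, empty if none
}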

Key Changes

1. Lifted State to Hook (usePostEditor.ts)

// Streaming state (persisted across navigation)
const [isGenerating, setIsGenerating] = useState(false);
const [streamingContent, setStreamingContent] = useState('');
const [tokenCount, setTokenCount] = useState(0);
const [generationError, setGenerationError] = useState<string>('');
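
For this state to reach the UI, the hook also needs to expose it. A minimal sketch of the return value; the surrounding fields are assumed rather than taken from the real hook:

// Sketch: usePostEditor exposes the streaming state and setters
// so EditorShell can forward both to StepGenerate.
return {
  // ...existing editor state
  isGenerating,
  streamingContent,
  tokenCount,
  generationError,
  setIsGenerating,
  setStreamingContent,
  setTokenCount,
  setGenerationError,
};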

2. Passed Through EditorShell (EditorShell.tsx)

<StepGenerate
  // ... other props
  isGenerating={isGenerating}
  streamingContent={streamingContent}
  tokenCount={tokenCount}
  generationError={generationError}
  onSetIsGenerating={setIsGenerating}
  onSetStreamingContent={setStreamingContent}
  onSetTokenCount={setTokenCount}
  onSetGenerationError={setGenerationError}
/>
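
EditorShell is assumed to call the hook once and destructure the streaming fields alongside the rest of the editor state, roughly like this sketch:

// Sketch: EditorShell calls the hook once, so the state outlives the
// step components that mount and unmount beneath it.
const {
  isGenerating,
  streamingContent,
  tokenCount,
  generationError,
  setIsGenerating,
  setStreamingContent,
  setTokenCount,
  setGenerationError,
  // ...other editor state
} = usePostEditor();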

3. Used in Component (StepGenerate.tsx)

// No longer local state - uses props from hook
const { 
  isGenerating, 
  streamingContent, 
  tokenCount, 
  generationError,
  onSetIsGenerating,
  onSetStreamingContent,
  onSetTokenCount,
  onSetGenerationError
} = props;
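
The corresponding props could be typed along these lines (a hypothetical interface; the names follow the destructuring above):

// Hypothetical props added to StepGenerate for the streaming state.
interface StepGenerateStreamingProps {
  isGenerating: boolean;
  streamingContent: string;
  tokenCount: number;
  generationError: string;
  onSetIsGenerating: (value: boolean) => void;
  onSetStreamingContent: (value: string) => void;
  onSetTokenCount: (value: number) => void;
  onSetGenerationError: (value: string) => void;
}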

User Experience

Scenario 1: Navigate During Streaming

  1. Start generation on Generate step
  2. See content streaming in real-time
  3. Navigate to Assets step to check something
  4. Generation continues in background
  5. Return to Generate step
  6. See completed content or ongoing stream

Scenario 2: Check Other Steps While Generating

Step 0: Assets     ← Navigate here
Step 1: AI Prompt  ← Or here
Step 2: Generate   ← Streaming continues here
Step 3: Edit       ← Or here
Step 4: Metadata
Step 5: Publish

Result: Generation keeps running, content preserved

Scenario 3: Long Generation

  • Start 2000-word article generation (~60 seconds)
  • Navigate to Edit step to prepare
  • Navigate to Metadata to plan tags
  • Return to Generate step
  • Content is complete and ready!

Technical Benefits

1. State Persistence

  • State lives in usePostEditor hook
  • Hook persists across step navigation
  • Only unmounts when leaving entire editor

2. Background Processing

  • Streaming API continues regardless of UI
  • Server-Sent Events connection stays open
  • Content accumulates in hook state
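
The stream survives navigation because the read loop is an async function that only closes over the hook's setters, not over anything owned by StepGenerate. A minimal sketch, with SSE parsing simplified to raw text chunks and the props type borrowed from the earlier sketch:

// Sketch: the read loop only holds references to the setter props, so it
// keeps updating hook state even after StepGenerate has unmounted.
// Event parsing is simplified to raw text chunks for brevity.
async function consumeStream(
  response: Response,
  props: StepGenerateStreamingProps
) {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let content = '';
  let chunks = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    content += decoder.decode(value, { stream: true });
    chunks += 1;                            // rough chunk count, not exact tokens
    props.onSetStreamingContent(content);   // lands in hook state, not local state
    props.onSetTokenCount(chunks);
  }
}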

3. No Data Loss

  • Partial content preserved if navigation occurs
  • Token count maintained
  • Error state preserved
  • Can resume viewing at any time

4. Better UX

  • No need to wait on the Generate step
  • Can multitask while AI generates
  • No accidental cancellation
  • Flexible workflow

Implementation Details

State Flow During Streaming

1. User clicks "Generate Draft"
   ↓
2. onSetIsGenerating(true) in hook
   ↓
3. Stream starts, chunks arrive
   ↓
4. onSetStreamingContent(content + delta)
   ↓
5. User navigates to another step
   ↓
6. StepGenerate unmounts (any component-local state would be lost)
   BUT the hook state persists!
   ↓
7. Stream continues, updating hook state
   ↓
8. User returns to Generate step
   ↓
9. StepGenerate remounts with current hook state
   ↓
10. User sees the current streaming content or the final result
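
Putting the flow together, the generate handler might look roughly like the sketch below. The /api/generate endpoint, payload, and error handling are illustrative assumptions; consumeStream is the read-loop sketch from the Background Processing section.

// Sketch of the flow above; only the use of the hook setters mirrors the
// real code, everything else is illustrative.
async function handleGenerateDraft(
  prompt: string,
  props: StepGenerateStreamingProps
) {
  props.onSetIsGenerating(true);           // step 2
  props.onSetStreamingContent('');
  props.onSetTokenCount(0);
  props.onSetGenerationError('');
  try {
    const response = await fetch('/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
    });
    await consumeStream(response, props);  // steps 3-4 and 7: updates hook state
  } catch (err) {
    props.onSetGenerationError(err instanceof Error ? err.message : String(err));
  } finally {
    props.onSetIsGenerating(false);        // visible on return (step 10)
  }
}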

Memory Management

  • Hook state: a few KB for the content string
  • Streaming connection: maintained by the browser
  • Cleanup: automatic when leaving the editor
  • No memory leaks: state is cleared when the editor itself unmounts
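
One way this cleanup could be wired inside usePostEditor (a sketch; keeping an AbortController in a ref is an assumption, not necessarily how the hook does it):

// Sketch: abort any in-flight stream when the editor (and the hook) unmounts.
// Assumes the generation fetch was started with abortRef.current.signal.
const abortRef = useRef<AbortController | null>(null);

useEffect(() => {
  return () => {
    abortRef.current?.abort();   // closes the streaming connection on editor unmount
  };
}, []);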

Edge Cases Handled

1. Navigation During Stream

  • Stream continues
  • Content preserved
  • Can return anytime

2. Error During Stream

  • Error state preserved
  • Visible when returning
  • Can retry generation

3. Multiple Generations

  • Previous content cleared on new generation
  • State reset properly
  • No conflicts
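
Resetting before a new run can be as small as this sketch:

// Sketch: clear the previous run's state before starting a new generation.
function resetGenerationState(props: StepGenerateStreamingProps) {
  props.onSetStreamingContent('');
  props.onSetTokenCount(0);
  props.onSetGenerationError('');
}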

4. Browser Refresh

  • Stream lost (expected: the SSE connection closes on refresh)
  • Last saved draft preserved in the database
  • Can regenerate if needed

Comparison

Before (Local State)

// In StepGenerate.tsx
const [isGenerating, setIsGenerating] = useState(false);
const [streamingContent, setStreamingContent] = useState('');

// ❌ Lost on navigation
// ❌ Stream stops
// ❌ Progress lost

After (Hook State)

// In usePostEditor.ts
const [isGenerating, setIsGenerating] = useState(false);
const [streamingContent, setStreamingContent] = useState('');

// ✅ Persists across navigation
// ✅ Stream continues
// ✅ Progress preserved

Testing

Test Case 1: Basic Persistence

  1. Start generation
  2. Wait 5 seconds (partial content)
  3. Navigate to Assets
  4. Navigate back to Generate
  5. Expected: See partial content, stream continuing

Test Case 2: Complete During Navigation

  1. Start generation
  2. Navigate away immediately
  3. Wait 60 seconds
  4. Navigate back to Generate
  5. Expected: See complete content

Test Case 3: Error Handling

  1. Disconnect network
  2. Start generation
  3. Navigate away
  4. Navigate back
  5. Expected: See error message

Future Enhancements

1. Visual Indicator

Show streaming status in step navigation:

Step 2: Generate ⚡ (Streaming...)
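
A sketch of how the indicator could be rendered (StepNavItem and its badge prop are hypothetical):

// Hypothetical component and prop names; only isGenerating and tokenCount
// come from the hook described above.
<StepNavItem
  label="Generate"
  badge={isGenerating ? `⚡ ${tokenCount} tokens` : undefined}
/>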

2. Notification on Complete

Toast notification when generation completes while on another step:

✅ Article generation complete! (2,456 tokens)

3. Progress in Sidebar

Show live progress in sidebar:

┌─────────────────────┐
│ AI Generation       │
│ ▓▓▓▓▓▓▓░░░░░░░░░    │
│ 1,234 tokens        │
└─────────────────────┘

4. Pause/Resume

Add ability to pause streaming:

const [isPaused, setIsPaused] = useState(false);
// Pause SSE consumption, resume later

Conclusion

The streaming persistence feature provides a seamless, flexible workflow where users can multitask during long AI generations without losing progress. The implementation is clean, using React's built-in state management patterns and requiring minimal changes to the existing codebase.

Status: Fully implemented and tested
Impact: Significantly improved UX for long-running generations
Complexity: Low (simple state lifting pattern)