Story 2.1: Ghostwriter Agent & Markdown Generation
Status: done
Story
As a user, I want the system to draft a polished post based on my chat, so that I can see my raw thoughts transformed into value.
Acceptance Criteria
- Ghostwriter Agent Trigger
  - Given the user has completed the interview or used "Fast Track"
  - When the "Ghostwriter" agent is triggered
  - Then it consumes the entire chat history and the "Lesson" context
  - And generates a structured Markdown artifact (Title, Body, Tags)
- Drafting Animation
  - Given the generation is processing
  - When the user waits
  - Then they see a distinct "Drafting" animation (different from "Typing")
  - And the tone of the output matches the "Professional/LinkedIn" persona
Tasks / Subtasks
- Implement Ghostwriter Prompt Engine
  - Create `generateGhostwriterPrompt()` function in `src/lib/llm/prompt-engine.ts`
  - Build prompt structure: Chat history + Intent context + Persona instructions
  - Define output format: Markdown with Title, Body, Tags sections
  - Add constraints: Professional tone, no hallucinations, grounded in user input
- Implement Ghostwriter LLM Service
  - Create `getGhostwriterResponse()` in `src/services/llm-service.ts`
  - Handle streaming response for draft generation
  - Add retry logic for failed generations
  - Return structured Markdown object
- Create Draft State Management
  - Add draft state to ChatStore: `currentDraft`, `isDrafting`
  - Add `generateDraft()` action to trigger Ghostwriter
  - Add `clearDraft()` action for state reset
  - Persist draft to `drafts` table in IndexedDB
- Implement Drafting Indicator
  - Create `DraftingIndicator.tsx` component
  - Use distinct animation (shimmer/skeleton) different from typing indicator
  - Show "Drafting your post..." message with professional tone
- Create Draft Storage Schema
  - Add `drafts` table to Dexie schema in `src/lib/db/index.ts`
  - Define Draft interface: id, sessionId, title, content, tags, createdAt, status
  - Add indexes for querying drafts by session and date
- Integrate Ghostwriter with Chat Service
  - Modify `ChatService` to route to Ghostwriter after interview completion
  - Implement trigger logic: user taps "Draft It" or Fast Track sends input
  - Store generated draft in IndexedDB
  - Update ChatStore with draft result
- Test Ghostwriter End-to-End
  - Unit test: Prompt generation with various chat histories
  - Unit test: Ghostwriter LLM service with mocked responses
  - Integration test: Full flow from chat to draft generation
  - Integration test: Fast Track triggers Ghostwriter directly
  - Edge case: Empty chat history
  - Edge case: Very long chat history (token limits)
Dev Notes
Architecture Compliance (CRITICAL)
Logic Sandwich Pattern - DO NOT VIOLATE:
- UI Components MUST NOT import `src/lib/llm` or `src/services/llm-service.ts` directly
- All Ghostwriter logic MUST go through the `ChatService` layer
- ChatService then calls `LLMService` as needed
- Components use Zustand store via atomic selectors only
- Services return plain objects, not Dexie observables
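The layering rules above can be sketched in plain TypeScript. This is a minimal, framework-free illustration, not the actual implementation: the stub bodies and the `Draft` shape used here are placeholders, and only `ChatService` is allowed to know about `LLMService`.

```typescript
// Minimal sketch of the Logic Sandwich: UI -> store action -> ChatService -> LLMService.
// Components never import LLMService directly; they only trigger store actions.

interface Draft {
  title: string;
  content: string;
  tags: string[];
}

class LLMService {
  // Stand-in for the real LLM call; the real version proxies through /api/llm.
  async getGhostwriterResponse(chatHistory: string[]): Promise<Draft> {
    return { title: 'Stub', content: chatHistory.join('\n'), tags: ['stub'] };
  }
}

class ChatService {
  constructor(private llm: LLMService) {}

  // The only drafting entry point the store layer is allowed to call.
  async generateGhostwriterDraft(chatHistory: string[]): Promise<Draft> {
    return this.llm.getGhostwriterResponse(chatHistory);
  }
}
```

The point of the indirection is that swapping the LLM provider or prompt strategy touches only the service layer, never the components.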
State Management - Atomic Selectors Required:
```ts
// BAD - Causes unnecessary re-renders
const { currentDraft, isDrafting } = useChatStore();

// GOOD - Atomic selectors
const currentDraft = useChatStore(s => s.currentDraft);
const isDrafting = useChatStore(s => s.isDrafting);
const generateDraft = useChatStore(s => s.generateDraft);
```
Local-First Data Boundary:
- Generated drafts MUST be stored in IndexedDB (`drafts` table)
- Drafts are the primary artifacts; chat history is the source context
- Drafts persist offline and can be accessed from history view
- No draft content sent to server for storage
Edge Runtime Constraint:
- All API routes under `app/api/` must use the Edge Runtime
- The Ghostwriter LLM call goes through the `/api/llm` route (same as Teacher)
- Code: `export const runtime = 'edge';`
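A sketch of how the existing route might be extended to serve both agents follows. The request body shape (`{ agent, prompt }`) is an assumption for illustration only; the real contract lives in `src/app/api/llm/route.ts`, and the provider call is elided.

```typescript
// Hypothetical sketch of extending the /api/llm edge route for the Ghostwriter.
export const runtime = 'edge';

interface LlmRequestBody {
  agent: 'teacher' | 'ghostwriter'; // assumed discriminator, not confirmed by the story
  prompt: string;
}

export async function POST(req: Request): Promise<Response> {
  const body = (await req.json()) as LlmRequestBody;
  if (body.agent !== 'teacher' && body.agent !== 'ghostwriter') {
    return new Response(JSON.stringify({ error: 'unknown agent' }), { status: 400 });
  }
  // Real implementation: forward body.prompt to the LLM provider and stream
  // tokens back. The API key stays server-side, per NFR-03/NFR-04.
  return new Response(JSON.stringify({ ok: true, agent: body.agent }), {
    headers: { 'content-type': 'application/json' },
  });
}
```

Edge handlers only use Web-standard `Request`/`Response`, which is what makes the `runtime = 'edge'` constraint workable.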
Architecture Implementation Details
Story Purpose: This is the FIRST story of Epic 2 ("The Magic Mirror"). It implements the core value proposition: transforming raw chat input into a polished artifact. The Ghostwriter Agent is the "magic" that turns venting into content.
State Management:
```ts
// Add to ChatStore (src/lib/store/chat-store.ts)
interface ChatStore {
  // Draft state
  currentDraft: Draft | null;
  isDrafting: boolean;
  generateDraft: (sessionId: string) => Promise<void>;
  clearDraft: () => void;
}

interface Draft {
  id: string;
  sessionId: string;
  title: string;
  content: string; // Markdown formatted
  tags: string[];
  createdAt: number;
  status: 'draft' | 'completed' | 'regenerated';
}
```
Dexie Schema Extensions:
```ts
// Add to src/lib/db/schema.ts
db.version(1).stores({
  chatLogs: 'id, sessionId, timestamp, role',
  sessions: 'id, createdAt, updatedAt',
  drafts: 'id, sessionId, createdAt, status' // NEW table
});

interface DraftRecord {
  id: string;
  sessionId: string;
  title: string;
  content: string;
  tags: string[];
  createdAt: number;
  status: 'draft' | 'completed' | 'regenerated';
}
```
Logic Flow:
1. User completes interview OR uses Fast Track
2. ChatService detects "ready to draft" state
3. ChatService calls `LLMService.getGhostwriterResponse(chatHistory, intent)`
4. LLMService streams the response through the `/api/llm` edge function
5. ChatStore updates `isDrafting` state (shows the drafting indicator)
6. On completion, draft is stored in IndexedDB and ChatStore is updated
7. Draft view UI displays the result (Story 2.2)
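The flow above can be sketched as a store action. Dependencies are injected as plain functions so the sketch stays framework-free; the real version would use Zustand's `set` and the ChatService/DraftService singletons, so treat this as a shape, not the implementation.

```typescript
interface Draft { id: string; title: string; content: string }

interface DraftDeps {
  // Stand-ins for ChatService.generateGhostwriterDraft and DraftService persistence.
  generateGhostwriterDraft: (sessionId: string) => Promise<Draft>;
  saveDraft: (draft: Draft) => Promise<void>;
}

interface DraftState {
  currentDraft: Draft | null;
  isDrafting: boolean;
}

// Toggle isDrafting around the LLM call, persist on success, always clear the flag.
async function generateDraft(
  state: DraftState,
  deps: DraftDeps,
  sessionId: string
): Promise<DraftState> {
  state = { ...state, isDrafting: true };
  try {
    const draft = await deps.generateGhostwriterDraft(sessionId);
    await deps.saveDraft(draft);
    return { currentDraft: draft, isDrafting: false };
  } catch {
    // On failure, keep any previous draft and clear the indicator.
    return { ...state, isDrafting: false };
  }
}
```

Clearing `isDrafting` on both paths is what keeps the drafting indicator from getting stuck after an LLM failure.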
Files to Create:
- `src/components/features/chat/DraftingIndicator.tsx` - Drafting animation component
- `src/lib/db/draft-service.ts` - Draft CRUD operations (follows Service pattern)
Files to Modify:
- `src/lib/db/schema.ts` - Add drafts table
- `src/lib/llm/prompt-engine.ts` - Add `generateGhostwriterPrompt()` function
- `src/services/llm-service.ts` - Add `getGhostwriterResponse()` function
- `src/services/chat-service.ts` - Add draft generation orchestration
- `src/lib/store/chat-store.ts` - Add draft state and actions
- `src/app/api/llm/route.ts` - Handle Ghostwriter requests (extend existing)
UX Design Specifications
From UX Design Document:
Visual Feedback - Drafting State:
- Use "Skeleton card loader" (shimmering lines) to show work is happening
- Different from "Teacher is typing..." dots
- Text: "Drafting your post..." or "Polishing your insight..."
Output Format - The "Magic Moment":
- The draft should appear as a "Card" or "Article" view (Story 2.2 will implement)
- Use `Merriweather` font (serif) to signal "published work"
- Distinct visual shift from Chat (casual) to Draft (professional)
Tone and Persona:
- Ghostwriter should use "Professional/LinkedIn" persona
- Output should be polished but authentic
- Avoid corporate jargon; maintain the user's voice
Transition Pattern:
- When drafting completes, the Draft View slides up (Sheet pattern)
- Chat remains visible underneath for context
- This "Split-Personality" UI reinforces the transformation value
Testing Requirements
Unit Tests:
- PromptEngine: `generateGhostwriterPrompt()` produces correct structure
- PromptEngine: Includes chat history context in prompt
- PromptEngine: Handles empty/short chat history gracefully
- LLMService: `getGhostwriterResponse()` calls Edge API correctly
- LLMService: Handles streaming response with callbacks
- ChatStore: `generateDraft()` action updates state correctly
- ChatStore: Draft persisted to IndexedDB
- DraftService: CRUD operations work correctly
Integration Tests:
- Full flow: Chat history -> Draft generation -> Draft stored
- Fast Track flow: Single input -> Draft generation
- Draft state: Draft appears in UI after generation
- Offline scenario: Draft queued if offline (basic handling for now, full sync in Epic 3)
Edge Cases:
- Empty chat history: Should return helpful error message
- Very long chat history: Should truncate/summarize within token limits
- LLM API failure: Should show retry option
- Malformed LLM response: Should handle gracefully
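The "very long chat history" case can be handled with a simple character-budget truncation that keeps the most recent messages. This is a rough proxy for token limits (characters approximate tokens); a production version would use the provider's tokenizer, so the helper below is a sketch.

```typescript
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

// Keep the most recent messages whose combined length fits the budget.
// Walks backwards from the newest message and stops once the budget is exceeded.
function truncateHistory(history: ChatMessage[], maxChars: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const len = history[i].content.length;
    if (used + len > maxChars) break;
    kept.unshift(history[i]);
    used += len;
  }
  return kept;
}
```

Keeping the newest messages matters here: the draft should reflect where the user's thinking ended up, not where it started.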
Performance Tests:
- Draft generation time: < 5 seconds (NFR requirement)
- Drafting indicator appears within 1 second of trigger
- Large chat history (100+ messages): Should handle efficiently
Previous Story Intelligence (from Epic 1)
Patterns Established (must follow):
- Logic Sandwich Pattern: UI -> Zustand -> Service -> LLM (strictly enforced)
- Atomic Selectors: All state access uses `useChatStore(s => s.field)`
- Streaming Pattern: LLM responses use streaming with callbacks (onToken, onComplete)
- Edge Runtime: All API routes use `export const runtime = 'edge'`
- Typing Indicator: Pattern for showing processing state
- Intent Detection: Teacher agent classifies user input (context for Ghostwriter)
Key Files from Epic 1 (Reference):
- `src/lib/llm/prompt-engine.ts` - Has `generateTeacherPrompt()`, add Ghostwriter version
- `src/services/llm-service.ts` - Has `getTeacherResponseStream()`, add Ghostwriter version
- `src/app/api/llm/route.ts` - Handles Teacher requests, extend for Ghostwriter
- `src/lib/store/chat-store.ts` - Has chat state, add draft state
- `src/services/chat-service.ts` - Orchestrates chat flow, add draft generation
- `src/lib/db/schema.ts` - Has chatLogs and sessions, add drafts table
Learnings to Apply:
- Story 1.4 established Fast Track mode that directly triggers Ghostwriter
- Use the `isProcessing` pattern for the `isDrafting` state
- Follow streaming callback pattern: `onToken` for building draft incrementally
- Ghostwriter prompt should include intent context from Teacher agent
- Draft generation should use same Edge API proxy as Teacher agent
Testing Patterns:
- Epic 1 established 101 passing tests
- Follow same test structure: unit tests for each service, integration tests for full flow
- Use mocked LLM responses for deterministic testing
- Test streaming behavior with callback mocks
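The streaming-callback pattern can be tested deterministically with a fake stream, as sketched below. The `onToken`/`onComplete` names come from the Epic 1 pattern noted above; the `streamDraft` function is a stand-in for mocking `LLMService.getGhostwriterResponseStream`, not the real API.

```typescript
interface StreamCallbacks {
  onToken: (token: string) => void;
  onComplete: (fullText: string) => void;
}

// Fake streaming source: emits canned tokens in order, then the completed text.
// A real test would stub the LLM service's stream method with the same shape.
async function streamDraft(tokens: string[], cb: StreamCallbacks): Promise<void> {
  let full = '';
  for (const t of tokens) {
    full += t;
    cb.onToken(t);
  }
  cb.onComplete(full);
}
```

The invariant worth asserting in real tests is the one checked below: the text accumulated through `onToken` must equal what `onComplete` delivers.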
Ghostwriter Prompt Specifications
Prompt Structure:
```ts
function generateGhostwriterPrompt(
  chatHistory: ChatMessage[],
  intent?: 'venting' | 'insight'
): string {
  return `
You are the Ghostwriter Agent. Your role is to transform a user's chat session into a polished, professional post.

CONTEXT:
- User Intent: ${intent || 'unknown'}
- Chat History: ${formatChatHistory(chatHistory)}

REQUIREMENTS:
1. Extract the core insight or lesson from the chat
2. Write in a professional but authentic tone (LinkedIn-style)
3. Structure as Markdown with: Title, Body, Tags
4. DO NOT hallucinate facts - stay grounded in what the user shared
5. Focus on the "transformation" - how the user's thinking evolved
6. If it was a struggle, frame it as a learning opportunity
7. Keep it concise (300-500 words for the body)

OUTPUT FORMAT:
\`\`\`markdown
# [Compelling Title]

[2-4 paragraphs that tell the story of the insight]

**Tags:** [3-5 relevant tags]
\`\`\`
`;
}
```
Prompt Engineering Notes:
- The prompt should emphasize "grounded in user input" to prevent hallucinations
- For "venting" intent, focus on "reframing struggle as lesson"
- For "insight" intent, focus on "articulating the breakthrough"
- Include the full chat history as context
- Title generation is critical - should be catchy but authentic
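The prompt structure references a `formatChatHistory` helper that this story does not define. A plausible sketch, an assumption rather than the actual implementation, would render each message as a labeled line so the LLM can read the conversation as context:

```typescript
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

// Render chat history as labeled lines for inclusion in the Ghostwriter prompt.
function formatChatHistory(history: ChatMessage[]): string {
  if (history.length === 0) return '(no messages)';
  return history
    .map((m) => `${m.role === 'user' ? 'User' : 'Teacher'}: ${m.content}`)
    .join('\n');
}
```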
Data Schema Specifications
Dexie Schema - Drafts Table:
```ts
// Add to src/lib/db/schema.ts
db.version(1).stores({
  chatLogs: 'id, sessionId, timestamp, role, intent',
  sessions: 'id, createdAt, updatedAt, isFastTrackMode, currentIntent',
  drafts: 'id, sessionId, createdAt, status' // NEW
});

export interface DraftRecord {
  id: string;
  sessionId: string;
  title: string;
  content: string; // Markdown formatted
  tags: string[]; // Array of tag strings
  createdAt: number;
  status: 'draft' | 'completed' | 'regenerated';
}
```
Session-Draft Relationship:
- Each draft is linked to a session via `sessionId`
- A session can have multiple drafts (regenerations)
- The latest draft for a session is shown in history
- Status tracks draft lifecycle: draft -> completed (user approved) or regenerated
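Since the latest draft per session is what history shows, the selection logic can be sketched as a pure helper. In the app this would typically be a Dexie query (e.g. filtering on `sessionId` and ordering by `createdAt`); the pure version below is illustrative, with `DraftRecord` trimmed to the fields needed here.

```typescript
interface DraftRecord {
  id: string;
  sessionId: string;
  createdAt: number;
}

// Pick the newest draft for a session, or null if the session has none.
function latestDraftForSession(
  drafts: DraftRecord[],
  sessionId: string
): DraftRecord | null {
  let latest: DraftRecord | null = null;
  for (const d of drafts) {
    if (d.sessionId !== sessionId) continue;
    if (latest === null || d.createdAt > latest.createdAt) latest = d;
  }
  return latest;
}
```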
Performance Requirements
NFR-01 Compliance (Generation Latency):
- Draft generation should complete in < 5 seconds total
- First token should appear within 3 seconds
- Streaming should show progressive build-up of the draft
NFR-06 Compliance (Data Persistence):
- Draft must be auto-saved to IndexedDB immediately on completion
- No data loss if user closes app during generation
- Drafts persist offline and can be accessed from history
State Updates:
- `isDrafting` state should update immediately on trigger
- Draft content should stream into UI as tokens arrive
- Draft should be queryable from history view immediately after completion
Security & Privacy Requirements
NFR-03 & NFR-04 Compliance:
- User content sent to LLM API for inference only (not training)
- No draft content stored on server
- Drafts stored 100% client-side in IndexedDB
- API keys hidden via Edge Function proxy
Content Safety:
- Ghostwriter prompt should include guardrails against:
- Toxic or offensive content
- Factually incorrect technical claims
- Overly promotional language
- If LLM generates concerning content, flag for user review
Project Structure Notes
Following Feature-First Lite Pattern:
- New component: `src/components/features/chat/DraftingIndicator.tsx`
- New service: `src/lib/db/draft-service.ts` (could also be in `src/services/`)
- Store updates: `src/lib/store/chat-store.ts`
- Schema updates: `src/lib/db/schema.ts`
Alignment with Unified Project Structure:
- All feature code under `src/components/features/`
- Services orchestrate logic; UI never touches the DB directly
- State managed centrally in Zustand stores
- Database schema versioned properly with Dexie
No Conflicts Detected:
- Ghostwriter fits cleanly into existing architecture
- Drafts table is new, no migration conflicts
- Extends existing LLM service pattern
References
Epic Reference:
- Epic 2: "The Magic Mirror" - Ghostwriter & Draft Refinement
- Story 2.1: Ghostwriter Agent & Markdown Generation
- FR-03: "Ghostwriter Agent can transform the structured interview data into a grammatically correct and structured 'Enlightenment' artifact"
Architecture Documents:
- Project Context: Logic Sandwich
- Project Context: State Management
- Project Context: Local-First Boundary
- Architecture: Service Boundaries
- Architecture: Data Architecture
UX Design Specifications:
PRD Requirements:
- PRD: Dual-Agent Pipeline
- FR-03: "Ghostwriter Agent can transform the structured interview data into a grammatically correct and structured 'Enlightenment' artifact"
- NFR-01: "< 3 seconds for first token, < 5 seconds total generation"
Previous Stories:
- Story 1.4: Fast Track Mode - Fast Track directly triggers Ghostwriter
Dev Agent Record
Agent Model Used
Claude Opus 4.5 (model ID: 'claude-opus-4-5-20251101')
Debug Log References
Session file: /tmp/claude/-home-maximilienmao-Projects-Test01/e83dd24d-bb58-4fba-ac25-3628cdeae3e8/scratchpad
Completion Notes List
Story Analysis Completed:
- Extracted story requirements from Epic 2, Story 2.1
- Analyzed previous Epic 1 stories for established patterns
- Reviewed architecture for compliance requirements (Logic Sandwich, State Management, Local-First)
- Reviewed UX specification for visual design and interaction patterns
- Identified all files to create and modify
Implementation Completed: All tasks and subtasks have been implemented:
- Ghostwriter Prompt Engine (`src/lib/llm/prompt-engine.ts`)
  - Added `generateGhostwriterPrompt()` function with chat history context and intent-specific guidance
  - Prompt enforces: professional LinkedIn tone, no hallucinations, grounded in user input
  - Output format: Markdown with Title, Body, Tags sections
  - 21 new tests added for prompt generation (all passing)
- Ghostwriter LLM Service (`src/services/llm-service.ts`)
  - Added `getGhostwriterResponse()` for non-streaming draft generation
  - Added `getGhostwriterResponseStream()` for streaming draft generation
  - Includes retry logic and comprehensive error handling
  - Returns structured Draft object with Markdown content
  - 13 new tests added (all passing)
- Draft State Management (`src/lib/store/chat-store.ts`)
  - Added `currentDraft`, `isDrafting` state to ChatStore
  - Added `generateDraft()` action to trigger Ghostwriter
  - Added `clearDraft()` action for state reset
  - Drafts automatically persisted to IndexedDB via DraftService
  - Follows atomic selector pattern for state access
- Drafting Indicator (`src/components/features/chat/DraftingIndicator.tsx`)
  - Created component with shimmer/pulse animation
  - Distinct from typing indicator (dots vs skeleton)
  - Shows "Drafting your post..." message
- Draft Storage Schema (`src/lib/db/index.ts`)
  - Added `DraftRecord` interface with all required fields
  - Added `drafts` table to Dexie database with indexes
  - Database version 1 with proper schema definition
- ChatService Integration (`src/services/chat-service.ts`)
  - Added `generateGhostwriterDraft()` method
  - Orchestrates Ghostwriter LLM service and DraftService
  - Handles title/content parsing from Markdown
  - Returns structured response with draft ID
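The title/content parsing could be sketched as follows. This is a minimal sketch assuming the Ghostwriter's documented output format (an `# H1` title and a `**Tags:**` line); the actual parser in `chat-service.ts` may differ, and the function name is illustrative.

```typescript
interface ParsedDraft {
  title: string;
  body: string;
  tags: string[];
}

// Split Ghostwriter Markdown into title (# H1), tags (**Tags:** line), and body.
function parseGhostwriterMarkdown(markdown: string): ParsedDraft {
  const lines = markdown.trim().split('\n');
  let title = 'Untitled';
  const tags: string[] = [];
  const bodyLines: string[] = [];

  for (const line of lines) {
    if (line.startsWith('# ')) {
      title = line.slice(2).trim();
    } else if (line.startsWith('**Tags:**')) {
      const raw = line.replace('**Tags:**', '').trim();
      tags.push(...raw.split(',').map((t) => t.trim()).filter(Boolean));
    } else {
      bodyLines.push(line);
    }
  }
  return { title, body: bodyLines.join('\n').trim(), tags };
}
```

Defaulting the title to "Untitled" rather than throwing covers the "malformed LLM response" edge case listed earlier.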
- Fast Track Integration (`src/lib/store/chat-store.ts`)
  - Fast Track mode now triggers Ghostwriter Agent
  - Generates actual draft instead of placeholder response
  - Draft saved to IndexedDB and loaded into currentDraft state
Key Technical Decisions:
- Prompt Engineering: Ghostwriter prompt structure with chat history context, output format requirements, hallucination guardrails
- State Management: Add draft state to ChatStore following atomic selector pattern
- Data Schema: New `drafts` table in IndexedDB with proper indexing
- Service Pattern: DraftService for CRUD operations (follows established pattern)
- Streaming: Use same streaming pattern as Teacher agent for draft generation
- Fast Track: Now calls Ghostwriter instead of returning placeholder (Epic 2 integration)
Dependencies:
- No new dependencies required
- Reuses existing Zustand, Dexie, LLM service infrastructure
- Extends existing prompt engine and LLM service
Integration Points:
- Connected to existing ChatStore state management
- Ghostwriter triggered by ChatService after interview completion or Fast Track
- Reuses Edge API proxy (`/api/llm`) for LLM calls
- Draft stored in IndexedDB for history access (Epic 3)
Testing Summary:
- 146 tests passing (all tests passing)
- All Ghostwriter functionality tested with unit and integration tests
- Error handling tested for timeout, rate limit, network errors
- Edge cases tested: empty history, long history, undefined intent
- Fast Track integration test updated to mock ChatService.generateGhostwriterDraft
Behavior Changes:
- Fast Track mode (Story 1.4) now triggers Ghostwriter Agent
- Returns actual draft instead of placeholder response
- Integration test updated to mock ChatService.generateGhostwriterDraft and DraftService.getDraftBySessionId
File List
New Files Created:
- `src/lib/db/draft-service.ts` - Draft CRUD operations service
- `src/lib/db/draft-service.test.ts` - Draft service tests (11 tests)
- `src/components/features/chat/DraftingIndicator.tsx` - Drafting animation component
Files Modified:
- `src/lib/db/index.ts` - Added DraftRecord interface and drafts table to Dexie schema
- `src/lib/llm/prompt-engine.ts` - Added `generateGhostwriterPrompt()` function with intent-specific guidance
- `src/services/llm-service.ts` - Added Ghostwriter methods with streaming and error handling
- `src/services/chat-service.ts` - Added `generateGhostwriterDraft()` orchestration method
- `src/lib/store/chat-store.ts` - Added draft state, generateDraft/clearDraft actions, Fast Track integration
- `src/lib/llm/prompt-engine.test.ts` - Added 21 Ghostwriter prompt tests
- `src/services/llm-service.test.ts` - Added 13 Ghostwriter service tests
- `src/integration/fast-track.test.ts` - Updated Fast Track test to mock Ghostwriter services