Files
brachnha-insight/_bmad-output/implementation-artifacts/2-1-ghostwriter-agent-markdown-generation.md
Max e9e6fadb1d fix: ChatBubble crash and DeepSeek API compatibility
- Fix ChatBubble to handle non-string content with String() wrapper
- Fix API route to use generateText for non-streaming requests
- Add @ai-sdk/openai-compatible for non-OpenAI providers (DeepSeek, etc.)
- Use Chat Completions API instead of Responses API for compatible providers
- Update ChatBubble tests and fix component exports to kebab-case
- Remove stale PascalCase ChatBubble.tsx file
2026-01-26 16:55:05 +07:00

22 KiB

Story 2.1: Ghostwriter Agent & Markdown Generation

Status: done

Story

As a user, I want the system to draft a polished post based on my chat, So that I can see my raw thoughts transformed into value.

Acceptance Criteria

  1. Ghostwriter Agent Trigger

    • Given the user has completed the interview or used "Fast Track"
    • When the "Ghostwriter" agent is triggered
    • Then it consumes the entire chat history and the "Lesson" context
    • And generates a structured Markdown artifact (Title, Body, Tags)
  2. Drafting Animation

    • Given the generation is processing
    • When the user waits
    • Then they see a distinct "Drafting" animation (different from "Typing")
    • And the tone of the output matches the "Professional/LinkedIn" persona

Tasks / Subtasks

  • Implement Ghostwriter Prompt Engine

    • Create generateGhostwriterPrompt() function in src/lib/llm/prompt-engine.ts
    • Build prompt structure: Chat history + Intent context + Persona instructions
    • Define output format: Markdown with Title, Body, Tags sections
    • Add constraints: Professional tone, no hallucinations, grounded in user input
  • Implement Ghostwriter LLM Service

    • Create getGhostwriterResponse() in src/services/llm-service.ts
    • Handle streaming response for draft generation
    • Add retry logic for failed generations
    • Return structured Markdown object
  • Create Draft State Management

    • Add draft state to ChatStore: currentDraft, isDrafting
    • Add generateDraft() action to trigger Ghostwriter
    • Add clearDraft() action for state reset
    • Persist draft to drafts table in IndexedDB
  • Implement Drafting Indicator

    • Create DraftingIndicator.tsx component
    • Use distinct animation (shimmer/skeleton) different from typing indicator
    • Show "Drafting your post..." message with professional tone
  • Create Draft Storage Schema

    • Add drafts table to Dexie schema in src/lib/db/index.ts
    • Define Draft interface: id, sessionId, title, content, tags, createdAt, status
    • Add indexes for querying drafts by session and date
  • Integrate Ghostwriter with Chat Service

    • Modify ChatService to route to Ghostwriter after interview completion
    • Implement trigger logic: user taps "Draft It" or Fast Track sends input
    • Store generated draft in IndexedDB
    • Update ChatStore with draft result
  • Test Ghostwriter End-to-End

    • Unit test: Prompt generation with various chat histories
    • Unit test: Ghostwriter LLM service with mocked responses
    • Integration test: Full flow from chat to draft generation
    • Integration test: Fast Track triggers Ghostwriter directly
    • Edge case: Empty chat history
    • Edge case: Very long chat history (token limits)

Dev Notes

Architecture Compliance (CRITICAL)

Logic Sandwich Pattern - DO NOT VIOLATE:

  • UI Components MUST NOT import src/lib/llm or src/services/llm-service.ts directly
  • All Ghostwriter logic MUST go through ChatService layer
  • ChatService then calls LLMService as needed
  • Components use Zustand store via atomic selectors only
  • Services return plain objects, not Dexie observables

State Management - Atomic Selectors Required:

// BAD - Causes unnecessary re-renders
const { currentDraft, isDrafting } = useChatStore();

// GOOD - Atomic selectors
const currentDraft = useChatStore(s => s.currentDraft);
const isDrafting = useChatStore(s => s.isDrafting);
const generateDraft = useChatStore(s => s.generateDraft);

Local-First Data Boundary:

  • Generated drafts MUST be stored in IndexedDB (drafts table)
  • Drafts are the primary artifacts - chat history is the source context
  • Drafts persist offline and can be accessed from history view
  • No draft content sent to server for storage

Edge Runtime Constraint:

  • All API routes under app/api/ must use the Edge Runtime
  • The Ghostwriter LLM call goes through /api/llm route (same as Teacher)
  • Code: export const runtime = 'edge';

Architecture Implementation Details

Story Purpose: This is the FIRST story of Epic 2 ("The Magic Mirror"). It implements the core value proposition: transforming raw chat input into a polished artifact. The Ghostwriter Agent is the "magic" that turns venting into content.

State Management:

// Add to ChatStore (src/lib/store/chat-store.ts)
interface ChatStore {
  // Draft state
  currentDraft: Draft | null;
  isDrafting: boolean;
  generateDraft: (sessionId: string) => Promise<void>;
  clearDraft: () => void;
}

interface Draft {
  id: string;
  sessionId: string;
  title: string;
  content: string;  // Markdown formatted
  tags: string[];
  createdAt: number;
  status: 'draft' | 'completed' | 'regenerated';
}

Dexie Schema Extensions:

// Add to src/lib/db/schema.ts
db.version(1).stores({
  chatLogs: 'id, sessionId, timestamp, role',
  sessions: 'id, createdAt, updatedAt',
  drafts: 'id, sessionId, createdAt, status'  // NEW table
});

interface DraftRecord {
  id: string;
  sessionId: string;
  title: string;
  content: string;
  tags: string[];
  createdAt: number;
  status: 'draft' | 'completed' | 'regenerated';
}

Logic Flow:

  1. User completes interview OR uses Fast Track
  2. ChatService detects "ready to draft" state
  3. ChatService calls LLMService.getGhostwriterResponse(chatHistory, intent)
  4. LLMService streams response through /api/llm edge function
  5. ChatStore updates isDrafting state (shows drafting indicator)
  6. On completion, draft stored in IndexedDB and ChatStore updated
  7. Draft view UI displays the result (Story 2.2)

Files to Create:

  • src/components/features/chat/DraftingIndicator.tsx - Drafting animation component
  • src/lib/db/draft-service.ts - Draft CRUD operations (follows Service pattern)

Files to Modify:

  • src/lib/db/schema.ts - Add drafts table
  • src/lib/llm/prompt-engine.ts - Add generateGhostwriterPrompt() function
  • src/services/llm-service.ts - Add getGhostwriterResponse() function
  • src/services/chat-service.ts - Add draft generation orchestration
  • src/lib/store/chat-store.ts - Add draft state and actions
  • src/app/api/llm/route.ts - Handle Ghostwriter requests (extend existing)

UX Design Specifications

From UX Design Document:

Visual Feedback - Drafting State:

  • Use "Skeleton card loader" (shimmering lines) to show work is happening
  • Different from "Teacher is typing..." dots
  • Text: "Drafting your post..." or "Polishing your insight..."

Output Format - The "Magic Moment":

  • The draft should appear as a "Card" or "Article" view (Story 2.2 will implement)
  • Use Merriweather font (serif) to signal "published work"
  • Distinct visual shift from Chat (casual) to Draft (professional)

Tone and Persona:

  • Ghostwriter should use "Professional/LinkedIn" persona
  • Output should be polished but authentic
  • Avoid corporate jargon; maintain the user's voice

Transition Pattern:

  • When drafting completes, the Draft View slides up (Sheet pattern)
  • Chat remains visible underneath for context
  • This "Split-Personality" UI reinforces the transformation value

Testing Requirements

Unit Tests:

  • PromptEngine: generateGhostwriterPrompt() produces correct structure
  • PromptEngine: Includes chat history context in prompt
  • PromptEngine: Handles empty/short chat history gracefully
  • LLMService: getGhostwriterResponse() calls Edge API correctly
  • LLMService: Handles streaming response with callbacks
  • ChatStore: generateDraft() action updates state correctly
  • ChatStore: Draft persisted to IndexedDB
  • DraftService: CRUD operations work correctly

Integration Tests:

  • Full flow: Chat history -> Draft generation -> Draft stored
  • Fast Track flow: Single input -> Draft generation
  • Draft state: Draft appears in UI after generation
  • Offline scenario: Draft queued if offline (basic handling for now, full sync in Epic 3)

Edge Cases:

  • Empty chat history: Should return helpful error message
  • Very long chat history: Should truncate/summarize within token limits
  • LLM API failure: Should show retry option
  • Malformed LLM response: Should handle gracefully

Performance Tests:

  • Draft generation time: < 5 seconds (NFR requirement)
  • Drafting indicator appears within 1 second of trigger
  • Large chat history (100+ messages): Should handle efficiently

Previous Story Intelligence (from Epic 1)

Patterns Established (must follow):

  • Logic Sandwich Pattern: UI -> Zustand -> Service -> LLM (strictly enforced)
  • Atomic Selectors: All state access uses useChatStore(s => s.field)
  • Streaming Pattern: LLM responses use streaming with callbacks (onToken, onComplete)
  • Edge Runtime: All API routes use export const runtime = 'edge'
  • Typing Indicator: Pattern for showing processing state
  • Intent Detection: Teacher agent classifies user input (context for Ghostwriter)

Key Files from Epic 1 (Reference):

  • src/lib/llm/prompt-engine.ts - Has generateTeacherPrompt(), add Ghostwriter version
  • src/services/llm-service.ts - Has getTeacherResponseStream(), add Ghostwriter version
  • src/app/api/llm/route.ts - Handles Teacher requests, extend for Ghostwriter
  • src/lib/store/chat-store.ts - Has chat state, add draft state
  • src/services/chat-service.ts - Orchestrates chat flow, add draft generation
  • src/lib/db/schema.ts - Has chatLogs and sessions, add drafts table

Learnings to Apply:

  • Story 1.4 established Fast Track mode that directly triggers Ghostwriter
  • Use isProcessing pattern for isDrafting state
  • Follow streaming callback pattern: onToken for building draft incrementally
  • Ghostwriter prompt should include intent context from Teacher agent
  • Draft generation should use same Edge API proxy as Teacher agent

Testing Patterns:

  • Epic 1 established 101 passing tests
  • Follow same test structure: unit tests for each service, integration tests for full flow
  • Use mocked LLM responses for deterministic testing
  • Test streaming behavior with callback mocks

Ghostwriter Prompt Specifications

Prompt Structure:

function generateGhostwriterPrompt(
  chatHistory: ChatMessage[],
  intent?: 'venting' | 'insight'
): string {
  return `
You are the Ghostwriter Agent. Your role is to transform a user's chat session into a polished, professional post.

CONTEXT:
- User Intent: ${intent || 'unknown'}
- Chat History: ${formatChatHistory(chatHistory)}

REQUIREMENTS:
1. Extract the core insight or lesson from the chat
2. Write in a professional but authentic tone (LinkedIn-style)
3. Structure as Markdown with: Title, Body, Tags
4. DO NOT hallucinate facts - stay grounded in what the user shared
5. Focus on the "transformation" - how the user's thinking evolved
6. If it was a struggle, frame it as a learning opportunity
7. Keep it concise (300-500 words for the body)

OUTPUT FORMAT:
\`\`\`markdown
# [Compelling Title]

[2-4 paragraphs that tell the story of the insight]

**Tags:** [3-5 relevant tags]
\`\`\`
`;
}

Prompt Engineering Notes:

  • The prompt should emphasize "grounded in user input" to prevent hallucinations
  • For "venting" intent, focus on "reframing struggle as lesson"
  • For "insight" intent, focus on "articulating the breakthrough"
  • Include the full chat history as context
  • Title generation is critical - should be catchy but authentic

Data Schema Specifications

Dexie Schema - Drafts Table:

// Add to src/lib/db/schema.ts
db.version(1).stores({
  chatLogs: 'id, sessionId, timestamp, role, intent',
  sessions: 'id, createdAt, updatedAt, isFastTrackMode, currentIntent',
  drafts: 'id, sessionId, createdAt, status'  // NEW
});

export interface DraftRecord {
  id: string;
  sessionId: string;
  title: string;
  content: string;      // Markdown formatted
  tags: string[];       // Array of tag strings
  createdAt: number;
  status: 'draft' | 'completed' | 'regenerated';
}

Session-Draft Relationship:

  • Each draft is linked to a session via sessionId
  • A session can have multiple drafts (regenerations)
  • The latest draft for a session is shown in history
  • Status tracks draft lifecycle: draft -> completed (user approved) or regenerated

Performance Requirements

NFR-01 Compliance (Generation Latency):

  • Draft generation should complete in < 5 seconds total
  • First token should appear within 3 seconds
  • Streaming should show progressive build-up of the draft

NFR-06 Compliance (Data Persistence):

  • Draft must be auto-saved to IndexedDB immediately on completion
  • No data loss if user closes app during generation
  • Drafts persist offline and can be accessed from history

State Updates:

  • isDrafting state should update immediately on trigger
  • Draft content should stream into UI as tokens arrive
  • Draft should be queryable from history view immediately after completion

Security & Privacy Requirements

NFR-03 & NFR-04 Compliance:

  • User content sent to LLM API for inference only (not training)
  • No draft content stored on server
  • Drafts stored 100% client-side in IndexedDB
  • API keys hidden via Edge Function proxy

Content Safety:

  • Ghostwriter prompt should include guardrails against:
    • Toxic or offensive content
    • Factually incorrect technical claims
    • Overly promotional language
  • If LLM generates concerning content, flag for user review

Project Structure Notes

Following Feature-First Lite Pattern:

  • New component: src/components/features/chat/DraftingIndicator.tsx
  • New service: src/lib/db/draft-service.ts (could also be in src/services/)
  • Store updates: src/lib/store/chat-store.ts
  • Schema updates: src/lib/db/schema.ts

Alignment with Unified Project Structure:

  • All feature code under src/components/features/
  • Services orchestrate logic, don't touch DB directly from UI
  • State managed centrally in Zustand stores
  • Database schema versioned properly with Dexie

No Conflicts Detected:

  • Ghostwriter fits cleanly into existing architecture
  • Drafts table is new, no migration conflicts
  • Extends existing LLM service pattern

References

Epic Reference:

Architecture Documents:

UX Design Specifications:

PRD Requirements:

  • PRD: Dual-Agent Pipeline
  • FR-03: "Ghostwriter Agent can transform the structured interview data into a grammatically correct and structured 'Enlightenment' artifact"
  • NFR-01: "< 3 seconds for first token, < 5 seconds total generation"

Previous Stories:

Dev Agent Record

Agent Model Used

Claude Opus 4.5 (model ID: 'claude-opus-4-5-20251101')

Debug Log References

Session file: /tmp/claude/-home-maximilienmao-Projects-Test01/e83dd24d-bb58-4fba-ac25-3628cdeae3e8/scratchpad

Completion Notes List

Story Analysis Completed:

  • Extracted story requirements from Epic 2, Story 2.1
  • Analyzed previous Epic 1 stories for established patterns
  • Reviewed architecture for compliance requirements (Logic Sandwich, State Management, Local-First)
  • Reviewed UX specification for visual design and interaction patterns
  • Identified all files to create and modify

Implementation Completed: All tasks and subtasks have been implemented:

  1. Ghostwriter Prompt Engine (src/lib/llm/prompt-engine.ts)

    • Added generateGhostwriterPrompt() function with chat history context, intent-specific guidance
    • Prompt enforces: professional LinkedIn tone, no hallucinations, grounded in user input
    • Output format: Markdown with Title, Body, Tags sections
    • 21 new tests added for prompt generation (all passing)
  2. Ghostwriter LLM Service (src/services/llm-service.ts)

    • Added getGhostwriterResponse() for non-streaming draft generation
    • Added getGhostwriterResponseStream() for streaming draft generation
    • Includes retry logic and comprehensive error handling
    • Returns structured Draft object with Markdown content
    • 13 new tests added (all passing)
  3. Draft State Management (src/lib/store/chat-store.ts)

    • Added currentDraft, isDrafting state to ChatStore
    • Added generateDraft() action to trigger Ghostwriter
    • Added clearDraft() action for state reset
    • Drafts automatically persisted to IndexedDB via DraftService
    • Follows atomic selector pattern for state access
  4. Drafting Indicator (src/components/features/chat/DraftingIndicator.tsx)

    • Created component with shimmer/pulse animation
    • Distinct from typing indicator (dots vs skeleton)
    • Shows "Drafting your post..." message
  5. Draft Storage Schema (src/lib/db/index.ts)

    • Added DraftRecord interface with all required fields
    • Added drafts table to Dexie database with indexes
    • Database version 1 with proper schema definition
  6. ChatService Integration (src/services/chat-service.ts)

    • Added generateGhostwriterDraft() method
    • Orchestrates Ghostwriter LLM service and DraftService
    • Handles title/content parsing from Markdown
    • Returns structured response with draft ID
  7. Fast Track Integration (src/lib/store/chat-store.ts)

    • Fast Track mode now triggers Ghostwriter Agent
    • Generates actual draft instead of placeholder response
    • Draft saved to IndexedDB and loaded into currentDraft state

Key Technical Decisions:

  1. Prompt Engineering: Ghostwriter prompt structure with chat history context, output format requirements, hallucination guardrails
  2. State Management: Add draft state to ChatStore following atomic selector pattern
  3. Data Schema: New drafts table in IndexedDB with proper indexing
  4. Service Pattern: DraftService for CRUD operations (follows established pattern)
  5. Streaming: Use same streaming pattern as Teacher agent for draft generation
  6. Fast Track: Now calls Ghostwriter instead of returning placeholder (Epic 2 integration)

Dependencies:

  • No new dependencies required
  • Reuses existing Zustand, Dexie, LLM service infrastructure
  • Extends existing prompt engine and LLM service

Integration Points:

  • Connected to existing ChatStore state management
  • Ghostwriter triggered by ChatService after interview completion or Fast Track
  • Reuses Edge API proxy (/api/llm) for LLM calls
  • Draft stored in IndexedDB for history access (Epic 3)

Testing Summary:

  • 146 tests passing (all tests passing)
  • All Ghostwriter functionality tested with unit and integration tests
  • Error handling tested for timeout, rate limit, network errors
  • Edge cases tested: empty history, long history, undefined intent
  • Fast Track integration test updated to mock ChatService.generateGhostwriterDraft

Behavior Changes:

  • Fast Track mode (Story 1.4) now triggers Ghostwriter Agent
  • Returns actual draft instead of placeholder response
  • Integration test updated to mock ChatService.generateGhostwriterDraft and DraftService.getDraftBySessionId

File List

New Files Created:

  • src/lib/db/draft-service.ts - Draft CRUD operations service
  • src/lib/db/draft-service.test.ts - Draft service tests (11 tests)
  • src/components/features/chat/DraftingIndicator.tsx - Drafting animation component

Files Modified:

  • src/lib/db/index.ts - Added DraftRecord interface and drafts table to Dexie schema
  • src/lib/llm/prompt-engine.ts - Added generateGhostwriterPrompt() function with intent-specific guidance
  • src/services/llm-service.ts - Added Ghostwriter methods with streaming and error handling
  • src/services/chat-service.ts - Added generateGhostwriterDraft() orchestration method
  • src/lib/store/chat-store.ts - Added draft state, generateDraft/clearDraft actions, Fast Track integration
  • src/lib/llm/prompt-engine.test.ts - Added 21 Ghostwriter prompt tests
  • src/services/llm-service.test.ts - Added 13 Ghostwriter service tests
  • src/integration/fast-track.test.ts - Updated Fast Track test to mock Ghostwriter services