Test01 - Epic Breakdown
Overview
This document provides the complete epic and story breakdown for Test01, decomposing the requirements from the PRD, the UX Design (where available), and the Architecture into implementable stories.
Requirements Inventory
Functional Requirements
FR-01: System can detect "Venting" vs. "Insight" intent from initial user input.
FR-02: "Teacher Agent" can generate probing questions to elicit specific missing details based on the user's initial input.
FR-03: "Ghostwriter Agent" can transform the structured interview data into a grammatically correct and structured "Enlightenment" artifact (e.g., Markdown post).
FR-04: Users can "Regenerate" the outcome with specific critique (e.g., "Make it less corporate", "Focus more on the technical solution").
FR-05: System provides a "Fast Track" option to bypass the interview and go straight to generation for advanced users.
FR-06: Users can view a chronological feed of past "Enlightenments" (history).
FR-07: Users can "One-Click Copy" the formatted text to clipboard.
FR-08: Users can delete past entries.
FR-09: Users can edit the generated draft manually before exporting.
FR-10: Users can access the app and view history while offline.
FR-11: Users can complete a full "Venting Session" offline; system queues generation for reconnection.
FR-12: System actively prompts users to "Add to Home Screen" (A2HS) upon meeting engagement criteria.
FR-13: System stores all chat history locally (persistent client-side storage) by default.
FR-14: Users can export their entire history as a JSON/Markdown file.
FR-15: Users can configure a custom API Base URL for their LLM provider.
FR-16: User API credentials are stored securely on the client and never sent to the app backend.
FR-17: Users can select which AI model to use.
FR-18: System validates the provider connection before use.
FR-19: Users can switch between multiple saved providers.
Non-Functional Requirements
NFR-01 (Chat Latency): The "Teacher" agent must generate the first follow-up question within < 3 seconds to maintain conversational flow.
NFR-02 (App Load Time): The app must be interactive (Time to Interactive) in < 1.5 seconds on 4G networks.
NFR-03 (Data Sovereignty): User chat logs are stored 100% Client-Side (persistent client-side storage) in the MVP. No user content is sent to the cloud except for the temporary API inference call.
NFR-04 (Inference Privacy): Data sent to the LLM API must be stateless (not used for training).
NFR-05 (Offline Behavior): The app shell and local history must remain accessible in Aeroplane Mode. Active Chat interactions will be unavailable offline as they require live LLM access.
NFR-06 (Data Persistence): Drafts must be auto-saved locally every 2 seconds to prevent data loss.
NFR-07 (Visual Accessibility): Dark Mode is the default. Contrast ratios must meet WCAG AA standards to reduce eye strain for late-night users.
NFR-08 (Secure Key Storage): User-provided API keys are stored client-side only, not in plain text, and are never sent to the app backend.
Additional Requirements
- [Arch] Use Next.js 14+ App Router + ShadCN UI starter template
- [Arch] Implement "Local-First" architecture with Dexie.js (IndexedDB)
- [Arch] Implement Vercel Edge Functions for secure LLM API proxy
- [Arch] Use Zustand for global state management
- [Arch] Implement Service Worker for offline support and sync queue
- [UX] Implement "Morning Mist" theme with Inter (UI) and Merriweather (Content) fonts
- [UX] Implement "Chat" vs "Draft" view split pattern/slide-up sheet
- [UX] Ensure mobile-first responsive design (375px+) with centered container for desktop
- [UX] Adhere to WCAG AA accessibility standards (contrast, focus, zoom)
FR Coverage Map
FR-01: Epic 1 - Initial intent detection logic in the main chat loop.
FR-02: Epic 1 - Teacher agent logic and prompt engineering for elicitation.
FR-03: Epic 2 - Ghostwriter agent logic and Markdown artifact generation.
FR-04: Epic 2 - Regeneration workflow for draft refinement.
FR-05: Epic 1 - Option to skip straight to generation (Fast Track).
FR-06: Epic 3 - History feed UI and data retrieval.
FR-07: Epic 2 - Copy to clipboard functionality in draft view.
FR-08: Epic 3 - Deletion management in history feed.
FR-09: Epic 2 - Manual editing capabilities for generated drafts.
FR-10: Epic 3 - Offline history access via IndexedDB.
FR-11: Epic 3 - Offline/Online sync queue for venting sessions.
FR-12: Epic 3 - PWA installation prompt logic.
FR-13: Epic 1 - Chat storage infrastructure (Dexie.js).
FR-14: Epic 3 - Data export functionality.
FR-15: Epic 4 (Story 4.1) - Custom API URL configuration.
FR-16: Epic 4 (Story 4.1) - Secure local credential storage.
FR-17: Epic 4 (Story 4.3) - Model selection logic.
FR-18: Epic 4 (Story 4.2) - Connection validation.
FR-19: Epic 4 (Story 4.4) - Provider switching logic.
Epic List
Epic 1: "Active Listening" - Core Chat & Teacher Agent
Goal: Enable users to start a session, "vent" their raw thoughts, and have the system "Active Listen" (store chat) and "Teach" (probe for details) using a local-first architecture.
User Outcome: Users can open the app, chat safely (locally), and get probing questions from the AI.
FRs covered: FR-01, FR-02, FR-05, FR-13
NFRs: NFR-01, NFR-03, NFR-04
Epic 2: "The Magic Mirror" - Ghostwriter & Draft Refinement
Goal: Transform the structured chat context into a tangible "Enlightenment" artifact (the post) that users can review, refine, and export.
User Outcome: Users get a high-quality post from their vent, which they can edit and ultimately copy for publishing.
FRs covered: FR-03, FR-04, FR-07, FR-09
NFRs: NFR-07 (Visuals), NFR-04
Epic 3: "My Legacy" - History, Offline Action Replay & PWA Polish
Goal: Turn single sessions into a persistent "Journal" of growth, ensuring the app works flawlessly offline and behaves like a native app.
User Outcome: Users can view past wins, use the app on the subway (offline), and install it to their home screen.
FRs covered: FR-06, FR-08, FR-10, FR-11, FR-12, FR-14
NFRs: NFR-02, NFR-05, NFR-06
Epic 4: "Power User Settings" - BYOD & Configuration
Goal: Enable users to bring their own Intelligence (BYOD) by configuring custom API providers, models, and keys, satisfying the "Privacy-First" and "Vendor Independence" requirements.
User Outcome: Users can configure and switch between different AI providers with their own API keys, ensuring data privacy and vendor flexibility.
FRs covered: FR-15, FR-16, FR-17, FR-18, FR-19
NFRs: NFR-03 (Data Sovereignty), NFR-08 (Secure Key Storage)
Epic 1: "Active Listening" - Core Chat & Teacher Agent
Goal: Enable users to start a session, "vent" their raw thoughts, and have the system "Active Listen" (store chat) and "Teach" (probe for details) using a local-first architecture.
Story 1.1: Local-First Setup & Chat Storage
As a user, I want my chat sessions to be saved locally on my device, So that my data is private and accessible offline.
Acceptance Criteria:
Given a new user visits the app
When they load the page
Then a Dexie.js database is initialized with the correct schema
And no data is sent to the server without explicit action

Given the user sends a message
When the message is sent
Then it is stored in the chatLogs table in IndexedDB with a timestamp
And is immediately displayed in the UI

Given the user reloads the page
When the page loads
Then the previous chat history is retrieved from IndexedDB and displayed correctly
And the session state is restored

Given the device is offline
When the user opens the app
Then the app loads successfully and shows stored history from the local database
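The chat record written by Story 1.1 can be sketched as a typed entry plus a factory. The `chatLogs` table name comes from the acceptance criteria above; the field list and the Dexie schema string in the comment are assumptions, not a confirmed design.

```typescript
// Sketch of a chat log record for the local-first store (Story 1.1).
// With Dexie.js the table would be declared roughly as:
//   db.version(1).stores({ chatLogs: "++id, sessionId, timestamp" });
interface ChatLogEntry {
  id?: number;                // auto-incremented by IndexedDB
  sessionId: string;          // groups messages into one venting session
  role: "user" | "assistant";
  content: string;
  timestamp: number;          // epoch millis, used to restore order on reload
}

// Factory that stamps each message before it is written locally
// (e.g. via `db.chatLogs.add(entry)` in Dexie).
function createChatLogEntry(
  sessionId: string,
  role: ChatLogEntry["role"],
  content: string,
): ChatLogEntry {
  return { sessionId, role, content, timestamp: Date.now() };
}
```

Keeping the timestamp on the record (rather than relying on insertion order) is what lets the reload scenario above re-sort history deterministically.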
Story 1.2: Chat Interface Implementation
As a user, I want a clean, familiar chat interface, So that I can focus on venting without fighting the UI.
Acceptance Criteria:
Given a user is on the main chat screen
When they look at the UI
Then they see a "Morning Mist" themed interface with distinct bubbles for User (Right) and AI (Left)
And the design matches the "Telegram-style" visual specification

Given the user is typing
When they press "Send"
Then the input field clears and the message appears in the chat
And the view scrolls to the bottom

Given the user is on a mobile device
When they view the chat
Then the layout is responsive and all touch targets are at least 44px
And the text size is legible (Inter font)

Given the AI is processing
When the user waits
Then a "Teacher is typing..." indicator is visible
And the UI remains responsive
Story 1.3: Teacher Agent Logic & Intent Detection
As a user, I want the AI to understand if I'm venting or sharing an insight, So that it responds appropriately.
Acceptance Criteria:
Given a user sends a first message
When the AI processes it
Then it classifies the intent as "Venting" or "Insight"
And stores this context in the session state

Given the intent is "Venting"
When the AI responds
Then it validates the emotion first
And asks a probing question to uncover the underlying lesson

Given the AI is generating a response
When the request is sent
Then it makes a direct client-side request to the configured Provider
And the user's stored API key is retrieved from local secure storage

Given the API response takes time
When the user waits
Then the response time is optimized to be under 3 seconds for the first token (if streaming)
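To hit the sub-3-second first-token target, the client would stream the provider's response and surface the first delta immediately. A minimal sketch of parsing one server-sent-events line, assuming the OpenAI-style Chat Completions chunk format (`data: {...}` / `data: [DONE]`); the function name is illustrative:

```typescript
// Extracts the incremental token text from one OpenAI-style SSE line
// of a streaming chat completion. Returns null for non-data lines,
// the terminal "[DONE]" sentinel, or malformed payloads.
function extractDelta(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  try {
    const chunk = JSON.parse(payload);
    // Each chunk carries the next token(s) in choices[0].delta.content
    return chunk.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null; // ignore malformed lines rather than breaking the stream
  }
}
```

In the app, each non-null delta would be appended to the visible AI bubble as it arrives, so NFR-01 is measured to the first delta rather than the full response.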
Story 1.4: Fast Track Mode
As a Power User, I want to bypass the interview questions, So that I can generate a post immediately if I already have the insight.
Acceptance Criteria:
Given a user is in the chat
When they toggle "Fast Track" or press a specific "Just Draft It" button
Then the AI skips the probing phase
And proceeds directly to the "Ghostwriter" generation phase (transition to Epic 2 workflow)

Given "Fast Track" is active
When the user sends their input
Then the system interprets it as the final insight
And immediately triggers the draft generation
Epic 2: "The Magic Mirror" - Ghostwriter & Draft Refinement
Goal: Transform the structured chat context into a tangible "Enlightenment" artifact (the post) that users can review, refine, and export.
Story 2.1: Ghostwriter Agent & Markdown Generation
As a user, I want the system to draft a polished post based on my chat, So that I can see my raw thoughts transformed into value.
Acceptance Criteria:
Given the user has completed the interview or used "Fast Track"
When the "Ghostwriter" agent is triggered
Then it consumes the entire chat history and the "Lesson" context
And generates a structured Markdown artifact (Title, Body, Tags)

Given the generation is processing
When the user waits
Then they see a distinct "Drafting" animation (different from "Typing")
And the tone of the output matches the "Professional/LinkedIn" persona
Story 2.2: Draft View UI (The Slide-Up)
As a user, I want to view the generated draft in a clean, reading-focused interface, So that I can review it without the distraction of the chat.
Acceptance Criteria:
Given the draft generation is complete
When the result is ready
Then a "Sheet" or modal slides up from the bottom
And it displays the post in "Medium-style" typography (Merriweather font)

Given the draft view is open
When the user scrolls
Then the reading experience is comfortable with appropriate whitespace
And the "Thumbs Up" and "Thumbs Down" actions are sticky or easily accessible
Story 2.3: Refinement Loop (Regeneration)
As a user, I want to provide feedback if the draft isn't right, So that I can get a better version.
Acceptance Criteria:
Given the user is viewing a draft
When they click "Thumbs Down"
Then the draft sheet closes and returns to the Chat UI
And the AI proactively asks "What should we change?"

Given the user provides specific critique (e.g., "Make it shorter")
When they send the feedback
Then the "Ghostwriter" regenerates the draft respecting the new constraint
And the new draft replaces the old one in the Draft View
Story 2.4: Export & Copy Actions
As a user, I want to copy the text or save the post, So that I can publish it on LinkedIn or save it for later.
Acceptance Criteria:
Given the user likes the draft
When they click "Thumbs Up" or "Copy"
Then the full Markdown text is copied to the clipboard
And a success toast/animation confirms the action

Given the draft is finalized
When the user saves it
Then it is marked as "Completed" in the local database
And the user is returned to the Home/History screen
Epic 3: "My Legacy" - History, Offline Action Replay & PWA Polish
Goal: Turn single sessions into a persistent "Journal" of growth, ensuring the app works flawlessly offline and behaves like a native app.
Story 3.1: History Feed UI
As a user, I want to see a list of my past growing moments, So that I can reflect on my journey.
Acceptance Criteria:
Given the user is on the Home screen
When they view the feed
Then they see a chronological list of past "Completed" sessions (Title, Date, Tags)
And the list supports lazy loading/pagination for performance

Given the user clicks a history card
When the card opens
Then the full "Enlightenment" artifact opens in a reading view
And the "Copy" action is available
Story 3.2: Deletion & Management
As a user, I want to delete old entries, So that I can control my private data.
Acceptance Criteria:
Given the user is viewing a past entry
When they select "Delete"
Then they are prompted with a confirmation dialog (Destructive Action)
And the action cannot be undone

Given the deletion is confirmed
When the action completes
Then the entry is permanently removed from IndexedDB
And the History Feed updates immediately to remove the item
Story 3.3: Offline Action Replay
As a user, I want my actions to be queued when offline, So that I don't lose work on the subway.
Acceptance Criteria:
Given the device is offline
When the user performs an LLM-dependent action (e.g., Send message, Regenerate draft)
Then the action is added to a persistent "Action Queue" in Dexie
And the UI shows a subtle "Offline - Queued" indicator

Given connection is restored
When the app detects the network
Then the Sync Manager replays queued actions to the LLM API
And the indicator updates to "Processed"
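The replay half of Story 3.3 can be sketched as a pure function with the network call injected, which keeps the queue logic testable. The `QueuedAction` shape and the `send` callback are assumptions; in the app the queue would live in a Dexie table and `send` would call the LLM API:

```typescript
// Sketch of the Sync Manager's replay step (Story 3.3).
interface QueuedAction {
  id: number;
  type: "send-message" | "regenerate-draft"; // assumed action types
  payload: unknown;
  status: "queued" | "processed" | "failed";
}

// Replays queued actions in order once the network is back. Each action
// is marked "processed" on success; failures are kept as "failed" so the
// UI can surface them and a later pass can retry.
async function replayQueue(
  queue: QueuedAction[],
  send: (action: QueuedAction) => Promise<void>,
): Promise<QueuedAction[]> {
  const result: QueuedAction[] = [];
  for (const action of queue) {
    try {
      await send(action);
      result.push({ ...action, status: "processed" });
    } catch {
      result.push({ ...action, status: "failed" });
    }
  }
  return result;
}
```

Replaying sequentially (rather than in parallel) preserves the conversational order of queued messages, which matters for the Teacher/Ghostwriter context.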
Story 3.4: PWA Install Prompt & Manifest
As a user, I want to install the app to my home screen, So that it feels like a native app.
Acceptance Criteria:
Given the user visits the web app
When the browser parses the site
Then it finds a valid manifest.json with correct icons, name ("Test01"), and display: standalone settings
Given the user has engaged with the app (e.g., completed 1 session)
When the browser supports it (beforeinstallprompt event)
Then a custom "Install App" UI element appears (non-intrusive)
And clicking it triggers the native install prompt

Given the app is installed
When it launches from Home Screen
Then it opens without the browser URL bar (Standalone mode)
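The gating condition for the custom install UI (Story 3.4 / FR-12) can be isolated as a small predicate. The one-completed-session threshold comes from the acceptance criteria; the function name and parameters are assumptions:

```typescript
// Sketch of the A2HS prompt gate (Story 3.4 / FR-12).
// The app would only render its custom "Install App" element when this
// returns true, then call the deferred beforeinstallprompt event's
// prompt() method on click.
function shouldOfferInstall(
  completedSessions: number,
  promptSupported: boolean,  // true once `beforeinstallprompt` has fired
  alreadyInstalled: boolean, // e.g. display-mode: standalone media query matched
): boolean {
  return completedSessions >= 1 && promptSupported && !alreadyInstalled;
}
```

Checking `alreadyInstalled` keeps the element from appearing inside the installed standalone app, which would otherwise violate the "non-intrusive" criterion.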
Epic 4: "Power User Settings" - BYOD & Configuration
Goal: Enable users to bring their own Intelligence (BYOD) by configuring custom API providers, models, and keys, satisfying the "Privacy-First" and "Vendor Independence" requirements.
Story 4.1: API Provider Configuration UI
As a user, I want to enter my own API Key and Base URL, So that I can use my own LLM account (e.g., DeepSeek, OpenAI).
Acceptance Criteria:
Given the user navigates to "Settings"
When they select "AI Provider"
Then they see a form to enter: "Base URL" (Default: OpenAI), "API Key", and "Model Name"
Given the user enters a key
When they save
Then the key is stored in localStorage with basic encoding (not plain text)
And it is NEVER sent to the app backend (Client-Side only)
Given the user has saved a provider
When they return to chat
Then the new settings are active immediately
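The "basic encoding (not plain text)" requirement can be met with Base64 obfuscation. To be clear, this is not encryption — anyone with device access can decode it — but it satisfies the AC as written. The storage key name is an assumption:

```typescript
// Sketch of the key obfuscation for Story 4.1. Base64 only; the AC
// requires "not plain text", not cryptographic protection.
const STORAGE_KEY = "test01.provider.apiKey"; // assumed key name

function encodeKey(apiKey: string): string {
  return btoa(apiKey); // btoa/atob are available in browsers and Node 16+
}

function decodeKey(encoded: string): string {
  return atob(encoded);
}

// In the browser the key would be written client-side only, e.g.:
//   localStorage.setItem(STORAGE_KEY, encodeKey(userInput));
// and never included in any request to the app's own backend.
```

If stronger protection is ever required, the Web Crypto API (`crypto.subtle`) with a non-extractable key would be the upgrade path, but that exceeds what this story specifies.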
Story 4.2: Connection Validation
As a user, I want to know if my key works, So that I don't get errors in the middle of a chat.
Acceptance Criteria:
Given the user enters new credentials
When they click "Connect" or "Save"
Then the system sends a tiny "Hello" request to the provider
And shows "Connected ✅" if successful, or the error message if failed
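The "tiny Hello request" can be sketched as a request builder, with the actual `fetch` and error-to-UI mapping left out. The `/chat/completions` path and Bearer-token header follow the OpenAI-compatible convention the settings form targets; all names here are assumptions:

```typescript
// Sketch of the connection check for Story 4.2: build a minimal,
// cheap chat request against the user's configured base URL.
interface ProviderConfig {
  baseUrl: string; // e.g. "https://api.openai.com/v1"
  apiKey: string;
  model: string;
}

function buildValidationRequest(config: ProviderConfig): {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
} {
  return {
    // Strip a trailing slash so both ".../v1" and ".../v1/" work
    url: `${config.baseUrl.replace(/\/$/, "")}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${config.apiKey}`,
      },
      // max_tokens: 1 keeps the "Hello" probe as cheap as possible
      body: JSON.stringify({
        model: config.model,
        messages: [{ role: "user", content: "Hello" }],
        max_tokens: 1,
      }),
    },
  };
}
```

The settings screen would `fetch(url, init)` and show "Connected ✅" on a 2xx status, otherwise surface the provider's error body.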
Story 4.3: Model Selection & Configuration
As a user, I want to specify which AI model to use, So that I can choose between different capabilities (e.g., fast vs. smart).
Acceptance Criteria:
Given the user is in the API Provider settings
When they view the form
Then they see a "Model Name" field with examples (e.g., "gpt-4o", "deepseek-chat")

Given the user enters a custom model name
When they save
Then the model name is stored alongside the API key and base URL
And all future LLM requests use this model identifier

Given the user doesn't specify a model
When they save provider settings
Then a sensible default is used (e.g., "gpt-3.5-turbo" for OpenAI endpoints)
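The default-model fallback in the last scenario can be sketched as a small resolver. The "gpt-3.5-turbo" default comes from the acceptance criteria; restricting it to OpenAI endpoints (and rejecting empty model names elsewhere) is an assumption about the intended behavior:

```typescript
// Sketch of the model fallback for Story 4.3.
function resolveModel(userModel: string | undefined, baseUrl: string): string {
  const trimmed = userModel?.trim();
  if (trimmed) return trimmed; // explicit choice always wins

  // Only assume an OpenAI default for OpenAI endpoints; other providers
  // have no universally safe default, so require an explicit name.
  if (baseUrl.includes("api.openai.com")) return "gpt-3.5-turbo";
  throw new Error("Model name is required for non-OpenAI providers");
}
```

Throwing (rather than silently guessing) lets the settings form block the save with a clear validation message instead of failing later mid-chat.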
Story 4.4: Provider Switching
As a user, I want to switch between different saved providers, So that I can use different AI services for different needs.
Acceptance Criteria:
Given the user has configured multiple providers
When they open Settings
Then they see a list of saved providers with labels (e.g., "OpenAI GPT-4", "DeepSeek Chat")

Given the user selects a different provider
When they confirm the switch
Then the app immediately uses the new provider for all LLM requests
And the active provider is persisted in local storage

Given the user starts a new chat session
When they send messages
Then the currently active provider is used
And the provider selection is maintained across page reloads
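Story 4.4's switching-plus-persistence behavior can be sketched as a plain store with the storage injected, so the same logic could back the app's Zustand store with `localStorage` in the browser. The storage key and `Provider` shape are assumptions:

```typescript
// Sketch of provider switching for Story 4.4.
interface Provider {
  label: string;   // e.g. "OpenAI GPT-4", "DeepSeek Chat"
  baseUrl: string;
  model: string;
}

// Minimal subset of the Web Storage interface, so tests can pass a fake
// and the browser can pass localStorage.
interface KeyValueStorage {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const ACTIVE_PROVIDER_KEY = "test01.activeProvider"; // assumed key name

class ProviderStore {
  constructor(
    private providers: Map<string, Provider>,
    private storage: KeyValueStorage,
  ) {}

  // Persisting the label (not the whole config) is enough to restore
  // the selection across page reloads.
  setActive(label: string): void {
    if (!this.providers.has(label)) throw new Error(`Unknown provider: ${label}`);
    this.storage.setItem(ACTIVE_PROVIDER_KEY, label);
  }

  getActive(): Provider | null {
    const label = this.storage.getItem(ACTIVE_PROVIDER_KEY);
    return label ? this.providers.get(label) ?? null : null;
  }
}
```

Because every LLM request reads the active provider through `getActive()`, a confirmed switch takes effect immediately without any per-session wiring.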