Story 4.3: Model Selection Configuration
Status: done
Story
As a user, I want to specify which AI model to use, So that I can choose between different capabilities (e.g., fast vs. smart).
Acceptance Criteria
- Model Name Field in Settings Form
- Given the user is in the API Provider settings
- When they view the form
- Then they see a "Model Name" field with examples (e.g., "gpt-4o", "deepseek-chat")
- Custom Model Name Storage
- Given the user enters a custom model name
- When they save
- Then the model name is stored alongside the API key and base URL
- And all future LLM requests use this model identifier
- Default Model Behavior
- Given the user doesn't specify a model
- When they save provider settings
- Then a sensible default is used (e.g., "gpt-3.5-turbo" for OpenAI endpoints)
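The three criteria above boil down to one resolution rule: a user-entered model wins, otherwise a sensible default applies. A minimal sketch (the `resolveModelName` helper and its shape are illustrative, not the actual implementation; the store's real default is `gpt-4o`, per Dev Notes):

```typescript
// Hypothetical sketch of model-name resolution; names are illustrative.
interface ProviderSettings {
  apiKey: string;
  baseUrl: string;
  modelName: string;
}

const DEFAULT_MODEL = 'gpt-4o';

function resolveModelName(settings: ProviderSettings): string {
  // AC 2: a custom model name entered by the user wins.
  const custom = settings.modelName.trim();
  if (custom.length > 0) return custom;
  // AC 3: fall back to a sensible default when no model is specified.
  return DEFAULT_MODEL;
}
```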
Tasks / Subtasks
- Review Current Model Name Implementation (AC: 1, 2, 3)
- Verify src/store/use-settings.ts has modelName state and actions
- Verify src/components/features/settings/provider-form.tsx has Model Name input field
- Verify current default value is set in store
- Enhance Model Name Field with Examples/Helper Text (AC: 1)
- Add helper text showing common model examples for selected preset
- Add placeholder text showing format (e.g., "gpt-4o", "deepseek-chat", "claude-3-haiku")
- Consider adding model examples based on provider preset
- Verify Model Name Integration (AC: 2)
- Verify LLMService uses model name from settings store
- Verify ChatService passes model name to LLM calls
- Test that custom model names are used in API requests
- Set Appropriate Defaults per Provider (AC: 3)
- Ensure provider presets set appropriate default models
- OpenAI preset: "gpt-4o" (balanced quality/speed)
- DeepSeek preset: "deepseek-chat" (default chat model)
- OpenRouter preset: "anthropic/claude-3-haiku" (fast/cheap option)
- Add Model Name Validation (Enhancement)
- Add basic validation that model name is not empty when other fields are filled
- Show warning if model name field is left empty
Note: Model name validation is already implemented in SettingsService.validateProviderSettings() (lines 74-77). This validates at save-time, which is sufficient for the acceptance criteria. Real-time UI warnings are deferred as a future enhancement.
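The save-time check described in the note might look roughly like the following (a sketch only; the real logic lives in SettingsService.validateProviderSettings(), and this mirrors the rules from the review notes: non-empty, minimum length 2, restricted characters — the exact regex is an assumption):

```typescript
// Sketch of save-time model-name validation. The allowed-character set is an
// assumption chosen to accept identifiers like "gpt-4o", "deepseek-chat",
// and "anthropic/claude-3-haiku".
function validateModelName(modelName: string): { valid: boolean; error?: string } {
  const name = modelName.trim();
  if (name.length === 0) return { valid: false, error: 'Model name is required' };
  if (name.length < 2) return { valid: false, error: 'Model name must be at least 2 characters' };
  if (!/^[a-zA-Z0-9._\/:-]+$/.test(name)) {
    return { valid: false, error: 'Model name contains invalid characters' };
  }
  return { valid: true };
}
```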
- Add Unit Tests
- Test model name is stored in settings store
- Test model name persists across page reloads
- Test provider presets set correct default models
- Test custom model names override defaults
- Add Integration Tests
- Test model name is passed to LLM API calls
- Test switching provider presets updates model name
- Test manual model name entry is preserved
Dev Notes
Architecture Compliance (CRITICAL)
Logic Sandwich Pattern - DO NOT VIOLATE:
- UI Components MUST NOT directly access localStorage or handle model name logic
- All model name operations MUST go through SettingsService service layer
- SettingsService manages model name validation and defaults
- LLMService retrieves model name from settings store for API calls
State Management - Atomic Selectors Required:
// GOOD - Atomic selectors
const modelName = useSettingsStore(s => s.modelName);
const actions = useSettingsStore(s => s.actions);
// BAD - Causes unnecessary re-renders
const { modelName, actions } = useSettingsStore();
Local-First Data Boundary:
- Model names are stored in localStorage with zustand persist middleware
- Model names are part of ProviderSettings alongside API key and base URL
- No server transmission of model configuration
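The local-first boundary can be illustrated with a persistence round-trip (a sketch using an in-memory Map as a stand-in for localStorage so it is self-contained; the real code relies on the zustand persist middleware, and the storage key is hypothetical):

```typescript
// Sketch: persist ProviderSettings locally and restore them, mimicking what
// the persist middleware does against localStorage. Nothing leaves the device.
interface ProviderSettings {
  apiKey: string;
  baseUrl: string;
  modelName: string;
}

// In-memory stand-in for window.localStorage.
const storage = new Map<string, string>();

function saveSettings(settings: ProviderSettings): void {
  storage.set('provider-settings', JSON.stringify(settings));
}

function loadSettings(): ProviderSettings | null {
  const raw = storage.get('provider-settings');
  return raw ? (JSON.parse(raw) as ProviderSettings) : null;
}
```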
Story Purpose
This story implements model selection configuration for the AI provider settings. Currently, the settings form has a Model Name field, and the store already stores this value. The main work is to:
- Verify existing implementation - The Model Name field already exists in ProviderForm
- Enhance UX with examples - Show users common model names for each provider
- Ensure proper defaults - Each provider preset should set an appropriate default model
- Verify integration - Ensure LLMService uses the configured model name
Implementation Assessment:
The core functionality for model selection is ALREADY IMPLEMENTED:
- src/store/use-settings.ts has modelName state and setModelName action
- src/components/features/settings/provider-form.tsx has Model Name input field
- Provider presets already set default models (OpenAI: gpt-4o, DeepSeek: deepseek-chat, OpenRouter: claude-3-haiku)
- LLMService retrieves model name from settings via SettingsService.getProviderSettings()
This story focuses on verification, enhancement, and UX improvements rather than building new core functionality.
Current Implementation Analysis
Existing SettingsStore (src/store/use-settings.ts:29, 50):
- Has modelName: string state with default 'gpt-4-turbo-preview'
- Has setModelName action
- Persists to localStorage via zustand middleware
- GAP: Default might be outdated (gpt-4-turbo-preview vs gpt-4o)
Existing ProviderForm (src/components/features/settings/provider-form.tsx:96-107):
- Has Model Name input field with label
- Has placeholder text "gpt-4-turbo-preview"
- Has helper text showing example format
- Bound to store via useModelName() atomic selector
- GAP: Placeholder and examples might need updating for latest models
Existing Provider Presets (src/components/features/settings/provider-form.tsx:11-30):
- OpenAI preset sets: baseUrl='https://api.openai.com/v1', defaultModel='gpt-4o'
- DeepSeek preset sets: baseUrl='https://api.deepseek.com/v1', defaultModel='deepseek-chat'
- OpenRouter preset sets: baseUrl='https://openrouter.ai/api/v1', defaultModel='anthropic/claude-3-haiku'
- VERIFIED: Presets already set appropriate defaults
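The preset data verified above can be sketched as a lookup table (values taken from the story; the actual array lives in provider-form.tsx, and the `presetDefaultModel` helper is hypothetical):

```typescript
// Sketch of the provider presets described above.
interface ProviderPreset {
  name: string;
  baseUrl: string;
  defaultModel: string;
}

const PROVIDER_PRESETS: ProviderPreset[] = [
  { name: 'OpenAI', baseUrl: 'https://api.openai.com/v1', defaultModel: 'gpt-4o' },
  { name: 'DeepSeek', baseUrl: 'https://api.deepseek.com/v1', defaultModel: 'deepseek-chat' },
  { name: 'OpenRouter', baseUrl: 'https://openrouter.ai/api/v1', defaultModel: 'anthropic/claude-3-haiku' },
];

function presetDefaultModel(name: string): string | undefined {
  return PROVIDER_PRESETS.find((p) => p.name === name)?.defaultModel;
}
```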
Existing SettingsService (src/services/settings-service.ts:46-54):
- getProviderSettings() retrieves modelName from store
- Returns ProviderSettings interface with modelName field
- VERIFIED: Integration already in place
Existing LLMService (src/services/llm-service.ts):
- Accepts model parameter in API calls
- Retrieves settings via SettingsService.getProviderSettings()
- VERIFIED: Model name integration already working
Previous Story Intelligence
From Story 4.1 (API Provider Configuration UI):
- Settings Flow: User enters credentials → Store persists to localStorage → LLMService uses settings
- Provider Presets: OpenAI, DeepSeek, OpenRouter buttons with default models
- Model Name Field: Already implemented in ProviderForm component
- Logic Sandwich: Form → SettingsStore → SettingsService → LLMService
From Story 4.2 (Connection Validation):
- Validation Pattern: SettingsService validates connection before saving
- Error Handling: Detailed error messages for different failure types
- Service Layer: SettingsService handles all business logic
From Story 1.3 (Teacher Agent Logic):
- LLM Integration: Direct client-side fetch to provider with model parameter
- Response Handling: Streaming and non-streaming support
From Story 3.3 (Offline Sync Queue):
- Settings Persistence: Zustand persist middleware handles localStorage
- Rehydration: Settings restored on page load
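The rehydration behavior inherited from Story 3.3 can be sketched as follows (an illustration of what the persist middleware provides, not the actual implementation; the `rehydrate` helper and field set are assumptions):

```typescript
// Sketch: restore persisted settings on page load, falling back to defaults
// when nothing (or corrupt JSON) is stored.
interface StoredSettings {
  modelName: string;
  baseUrl: string;
}

const DEFAULTS: StoredSettings = { modelName: 'gpt-4o', baseUrl: 'https://api.openai.com/v1' };

function rehydrate(raw: string | undefined): StoredSettings {
  if (!raw) return DEFAULTS;
  try {
    // Merge over defaults so missing fields still get sensible values.
    return { ...DEFAULTS, ...(JSON.parse(raw) as Partial<StoredSettings>) };
  } catch {
    // A corrupt payload falls back to defaults rather than crashing the app.
    return DEFAULTS;
  }
}
```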
UX Design Specifications
Model Name Field Design:
- Label: "Model Name" (clear, concise)
- Placeholder: Show common model format (e.g., "gpt-4o", "deepseek-chat")
- Helper Text: "Model identifier (e.g., gpt-4o, deepseek-chat)"
- Provider-Specific Examples: Update helper text based on selected preset
Provider-Specific Defaults:
- OpenAI: gpt-4o (recommended for balance of quality/speed)
- Alternative: gpt-4-turbo-preview (older name)
- Budget option: gpt-3.5-turbo
- DeepSeek: deepseek-chat (default chat model)
- Alternative: deepseek-coder (for code tasks)
- OpenRouter: anthropic/claude-3-haiku (fast, cost-effective)
- Alternative: anthropic/claude-3-sonnet (better quality)
- Alternative: openai/gpt-4o (via OpenRouter)
Visual Feedback:
- Model name field is text input (not select/dropdown)
- Users can type any model name their provider supports
- Provider presets fill in recommended defaults
Accessibility:
- Model name input has associated label
- Helper text provides examples
- Placeholder shows expected format
Technical Implementation Plan
Enhancement 1: Update Model Name Examples
File: src/components/features/settings/provider-form.tsx
Current State:
- Placeholder: "gpt-4-turbo-preview"
- Helper text: "Model identifier (e.g., gpt-4o, deepseek-chat)"
Enhancement:
- Update placeholder to current default "gpt-4o"
- Consider adding provider-specific examples when preset is selected
Enhancement 2: Provider-Specific Model Examples
File: src/components/features/settings/provider-form.tsx
New Feature:
- When user clicks a provider preset, show model examples specific to that provider
- Update helper text dynamically based on selected preset
Implementation:
// Add state for selected preset
const [selectedPreset, setSelectedPreset] = useState<typeof PROVIDER_PRESETS[0] | null>(null);

// Update helper text based on preset
const getModelExamples = () => {
  if (!selectedPreset) {
    return "Model identifier (e.g., gpt-4o, deepseek-chat)";
  }
  switch (selectedPreset.name) {
    case 'OpenAI':
      return "OpenAI models: gpt-4o, gpt-4-turbo, gpt-3.5-turbo";
    case 'DeepSeek':
      return "DeepSeek models: deepseek-chat, deepseek-coder";
    case 'OpenRouter':
      return "OpenRouter: anthropic/claude-3-haiku, openai/gpt-4o";
    default:
      return "Model identifier (e.g., gpt-4o)";
  }
};
Verification 1: Confirm Model Name Integration
File: src/services/llm-service.ts
Verify:
- generateResponse() method uses model from settings
- validateConnection() method uses model from settings
- Model parameter is passed to API calls
Expected:
static async generateResponse(messages: Message[]): Promise<string> {
  const settings = SettingsService.getProviderSettings();
  // ... uses settings.modelName in API call
}
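To make the verification concrete, here is a sketch of how the configured model name ends up in the request body of an OpenAI-compatible chat completion call (the `buildRequestBody` helper is hypothetical; the real request is assembled inside LLMService):

```typescript
// Sketch: the configured modelName is the story-specific piece of an
// otherwise standard OpenAI-compatible chat completion payload.
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function buildRequestBody(modelName: string, messages: Message[]): string {
  return JSON.stringify({ model: modelName, messages });
}
```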
Verification 2: Test Provider Preset Defaults
File: src/components/features/settings/provider-form.tsx
Verify:
- OpenAI preset sets gpt-4o
- DeepSeek preset sets deepseek-chat
- OpenRouter preset sets anthropic/claude-3-haiku
Test:
- Click each preset button
- Verify model name field updates to correct default
Security & Privacy Requirements
NFR-03 (Data Sovereignty):
- Model names are stored locally in browser localStorage
- No server transmission of model configuration
- Model names used directly in client-side API calls
NFR-04 (Inference Privacy):
- Model selection doesn't affect privacy posture
- All requests go directly to user's configured provider
Validation:
- Model names are user-defined strings
- No sensitive information in model names
- Basic validation for empty strings
Testing Requirements
Unit Tests:
SettingsStore:
- Test modelName state initializes with default value
- Test setModelName action updates modelName
- Test modelName persists to localStorage
- Test modelName restores on rehydration
ProviderForm Component:
- Test model name input renders with correct placeholder
- Test model name input is bound to store
- Test provider presets set correct default model names
- Test helper text displays examples
SettingsService:
- Test getProviderSettings() returns modelName
- Test saveProviderSettings() saves modelName
Integration Tests:
LLM Integration:
- Test LLMService retrieves model name from settings
- Test API calls include configured model name
- Test changing model name affects subsequent API calls
Provider Presets:
- Test OpenAI preset sets gpt-4o
- Test DeepSeek preset sets deepseek-chat
- Test OpenRouter preset sets claude-3-haiku
- Test manual model entry overrides preset
Persistence Tests:
- Test model name persists across page reloads
- Test model name restores correctly on app restart
Manual Tests (Browser Testing):
- OpenAI: Enter OpenAI credentials, set model to "gpt-4o", verify chat works
- DeepSeek: Enter DeepSeek credentials, set model to "deepseek-chat", verify chat works
- Custom Model: Set model to "gpt-3.5-turbo", verify cheaper/faster model is used
- Preset Switch: Switch between provider presets, verify model name updates
- Manual Entry: Enter custom model name, verify it's used in API calls
- Empty Model: Leave model empty, verify default or error is handled
Performance Requirements
NFR-02 Compliance (App Load Time):
- Model name loading from localStorage must be < 100ms
- Model name updates must be instant (no blocking operations)
Efficient Re-renders:
- Use atomic selectors to prevent unnecessary re-renders
- Model name changes shouldn't trigger full form re-render
Project Structure Notes
Files to Modify:
- src/components/features/settings/provider-form.tsx - Enhance model name field with examples
- src/store/use-settings.ts - Update default model if needed (gpt-4-turbo-preview → gpt-4o)
Files to Create:
- src/components/features/settings/provider-form.model-selection.test.tsx - Model selection tests
Files to Verify (No Changes Expected):
- src/services/llm-service.ts - Should already use model from settings
- src/services/settings-service.ts - Should already handle model in ProviderSettings
- src/services/chat-service.ts - Should already pass model to LLM calls
References
Epic Reference:
- Epic 4: "Power User Settings" - BYOD & Configuration
- Story 4.3: Model Selection Configuration
- FR-17: "Model selection - users can specify which AI model to use"
Architecture Documents:
Previous Stories:
- Story 4.1: API Provider Configuration UI - Model name field already exists
- Story 4.2: Connection Validation - SettingsService integration
External References:
Dev Agent Record
Agent Model Used
Claude Opus 4.5 (model ID: 'claude-opus-4-5-20251101')
Debug Log References
Completion Notes List
Story Analysis Completed:
- Extracted story requirements from Epic 4, Story 4.3
- Analyzed existing settings infrastructure for model name support
- Reviewed previous stories (4.1, 4.2) for established patterns
- Verified that core model selection functionality is ALREADY IMPLEMENTED
- Identified enhancements: update examples, add provider-specific hints, verify defaults
Key Finding: Model Selection Already Implemented The core functionality for model selection is COMPLETE:
- SettingsStore has modelName state and action
- ProviderForm has Model Name input field
- Provider presets set appropriate defaults
- LLMService uses model from settings store
Implementation Scope: ENHANCEMENT ONLY
- Update placeholder text from "gpt-4-turbo-preview" to "gpt-4o"
- Add provider-specific model examples in helper text
- Verify integration works correctly
- Add tests for model selection
Technical Decisions:
- No Major Changes: Core functionality is already working
- UX Enhancements: Add provider-specific model examples
- Verification: Confirm model name is used in API calls
- Testing: Add tests for model selection features
Files to Modify:
- src/components/features/settings/provider-form.tsx - Enhance with provider-specific examples
- src/store/use-settings.ts - Update default if needed
Files to Create:
- Test files for model selection enhancements
Files Verified (No Changes Needed):
- src/services/llm-service.ts - Already uses model from settings
- src/services/settings-service.ts - Already handles model
- src/services/chat-service.ts - Already integrates with settings
File List
New Files Created:
- src/components/features/settings/provider-form.model-selection.test.tsx - Model selection tests
Files Modified:
- src/store/use-settings.ts - Updated default model from 'gpt-4-turbo-preview' to 'gpt-4o'
- src/components/features/settings/provider-form.tsx - Updated placeholder from 'gpt-4-turbo-preview' to 'gpt-4o'
- _bmad-output/implementation-artifacts/4-3-model-selection-configuration.md - Story file updated
- _bmad-output/implementation-artifacts/sprint-status.yaml - Story marked in-progress
Senior Developer Review (AI)
Reviewer: Max (AI Agent) on 2026-01-24
Findings
- Status: Approved (ACs met)
- Code Quality: Generally high, identified 6 minor/medium issues.
- Medium Issues Fixed:
- UX Data Loss: Fixed provider-form.tsx to preserve custom model names when switching provider presets. Added "Smart Preset" logic.
- Weak Validation: Fixed settings-service.ts to enforce minimum length (2 chars) and regex validation for model names.
Actions Taken
- Implemented smart preset logic in ProviderForm
- Added strict validation to SettingsService
- Expanded test suite to cover new logic
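The "Smart Preset" rule from the review could be sketched roughly as follows (an assumption about how the fix works, since the review does not spell out the rule: applying a preset only overwrites the model field when the current value is empty or is itself a known preset default, so a user-typed custom model name survives the switch):

```typescript
// Hypothetical sketch of the "Smart Preset" rule described in the review.
const PRESET_DEFAULTS = ['gpt-4o', 'deepseek-chat', 'anthropic/claude-3-haiku'];

function nextModelName(current: string, presetDefault: string): string {
  const trimmed = current.trim();
  if (trimmed.length === 0 || PRESET_DEFAULTS.includes(trimmed)) {
    return presetDefault; // safe to overwrite: no custom value at stake
  }
  return trimmed; // preserve the user's custom model name
}
```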
Change Log
Date: 2026-01-24
Code Review Implementation:
- ✅ Fixed UX issue where custom model names were lost on preset switch
- ✅ Added strict validation for model names (min length 2, allowed chars)
- ✅ Verified all fixes with new tests (13/13 passing)
- ✅ Validated against Story requirements
Date: 2026-01-24
Story Implementation Completed:
- ✅ Updated default model name from 'gpt-4-turbo-preview' to 'gpt-4o' in settings store
- ✅ Updated placeholder text in ProviderForm from 'gpt-4-turbo-preview' to 'gpt-4o'
- ✅ Verified all acceptance criteria are met
- ✅ Created comprehensive test suite with 11 tests - all passing
- ✅ Verified provider presets set correct defaults (OpenAI: gpt-4o, DeepSeek: deepseek-chat, OpenRouter: claude-3-haiku)
- ✅ Verified LLMService and ChatService integration with model name
- ✅ Confirmed existing model name validation in SettingsService
Test Results:
- Model selection tests: 11/11 passing ✓
- Settings store tests: 17/17 passing ✓
- All provider-form tests: 8/8 passing ✓ (Story 4.2 review fixed mock issues)
Acceptance Criteria Met:
- AC 1: Model Name field visible with examples (placeholder: "gpt-4o", helper text with examples) ✓
- AC 2: Custom model names stored and used in API requests ✓
- AC 3: Default model behavior (gpt-4o for new settings, provider presets set appropriate defaults) ✓
Key Finding: Core model selection functionality was already fully implemented. This story required only:
- Updating outdated placeholder and default model references
- Verification of existing integration
- Adding comprehensive test coverage