Ignore and untrack BMad directories

Max
2026-01-26 15:49:36 +07:00
parent 7b732372e3
commit 6b113e0392
525 changed files with 2 additions and 112645 deletions


@@ -1,655 +0,0 @@
# Requirements Traceability & Gate Decision - Validation Checklist
**Workflow:** `testarch-trace`
**Purpose:** Ensure a complete traceability matrix with actionable gap analysis AND make a deployment readiness decision (PASS/CONCERNS/FAIL/WAIVED)
This checklist covers **two sequential phases**:
- **PHASE 1**: Requirements Traceability (always executed)
- **PHASE 2**: Quality Gate Decision (executed if `enable_gate_decision: true`)
---
# PHASE 1: REQUIREMENTS TRACEABILITY
## Prerequisites Validation
- [ ] Acceptance criteria are available (from story file OR inline)
- [ ] Test suite exists (or gaps are acknowledged and documented)
- [ ] If tests are missing, recommend `*atdd` (trace does not run it automatically)
- [ ] Test directory path is correct (`test_dir` variable)
- [ ] Story file is accessible (if using BMad mode)
- [ ] Knowledge base is loaded (test-priorities, traceability, risk-governance)
---
## Context Loading
- [ ] Story file read successfully (if applicable)
- [ ] Acceptance criteria extracted correctly
- [ ] Story ID identified (e.g., 1.3)
- [ ] `test-design.md` loaded (if available)
- [ ] `tech-spec.md` loaded (if available)
- [ ] `PRD.md` loaded (if available)
- [ ] Relevant knowledge fragments loaded from `tea-index.csv`
---
## Test Discovery and Cataloging
- [ ] Tests auto-discovered using multiple strategies (test IDs, describe blocks, file paths)
- [ ] Tests categorized by level (E2E, API, Component, Unit)
- [ ] Test metadata extracted:
- [ ] Test IDs (e.g., 1.3-E2E-001)
- [ ] Describe/context blocks
- [ ] It blocks (individual test cases)
- [ ] Given-When-Then structure (if BDD)
- [ ] Priority markers (P0/P1/P2/P3)
- [ ] All relevant test files found (no tests missed due to naming conventions)
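The discovery step above lends itself to a small script. A minimal sketch in Python, assuming tests live under `tests/` as `*.spec.ts` files and embed IDs like `1.3-E2E-001` in their titles (the glob pattern and ID regex are assumptions, not part of the workflow spec):

```python
import re
from collections import defaultdict
from pathlib import Path

# Assumed ID convention: {epic}.{story}-{LEVEL}-{seq}, e.g. 1.3-E2E-001
TEST_ID = re.compile(r"\b(\d+\.\d+)-(E2E|API|COMPONENT|UNIT)-(\d{3})\b")

def discover_tests(test_dir: str = "tests") -> dict[str, list[str]]:
    """Map each test file to the test IDs found inside it."""
    catalog: dict[str, list[str]] = defaultdict(list)
    for path in Path(test_dir).rglob("*.spec.ts"):
        for match in TEST_ID.finditer(path.read_text(encoding="utf-8")):
            catalog[str(path)].append(match.group(0))
    return dict(catalog)

if __name__ == "__main__":
    for file, ids in discover_tests().items():
        print(file, "->", sorted(set(ids)))
```

In practice this is one of several strategies; describe-block and file-path matching catch tests that lack explicit IDs.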
---
## Criteria-to-Test Mapping
- [ ] Each acceptance criterion mapped to tests (or marked as NONE)
- [ ] Explicit references found (test IDs, describe blocks mentioning criterion)
- [ ] Test level documented (E2E, API, Component, Unit)
- [ ] Given-When-Then narrative verified for alignment
- [ ] Traceability matrix table generated:
- [ ] Criterion ID
- [ ] Description
- [ ] Test ID
- [ ] Test File
- [ ] Test Level
- [ ] Coverage Status
---
## Coverage Classification
- [ ] Coverage status classified for each criterion:
- [ ] **FULL** - All scenarios validated at appropriate level(s)
- [ ] **PARTIAL** - Some coverage but missing edge cases or levels
- [ ] **NONE** - No test coverage at any level
- [ ] **UNIT-ONLY** - Only unit tests (missing integration/E2E validation)
- [ ] **INTEGRATION-ONLY** - Only API/Component tests (missing unit confidence)
- [ ] Classification justifications provided
- [ ] Edge cases considered in FULL vs PARTIAL determination
---
## Duplicate Coverage Detection
- [ ] Duplicate coverage checked across test levels
- [ ] Acceptable overlap identified (defense in depth for critical paths)
- [ ] Unacceptable duplication flagged (same validation at multiple levels)
- [ ] Recommendations provided for consolidation
- [ ] Selective testing principles applied
---
## Gap Analysis
- [ ] Coverage gaps identified:
- [ ] Criteria with NONE status
- [ ] Criteria with PARTIAL status
- [ ] Criteria with UNIT-ONLY status
- [ ] Criteria with INTEGRATION-ONLY status
- [ ] Gaps prioritized by risk level using test-priorities framework:
- [ ] **CRITICAL** - P0 criteria without FULL coverage (BLOCKER)
- [ ] **HIGH** - P1 criteria without FULL coverage (PR blocker)
- [ ] **MEDIUM** - P2 criteria without FULL coverage (nightly gap)
- [ ] **LOW** - P3 criteria without FULL coverage (acceptable)
- [ ] Specific test recommendations provided for each gap:
- [ ] Suggested test level (E2E, API, Component, Unit)
- [ ] Test description (Given-When-Then)
- [ ] Recommended test ID (e.g., 1.3-E2E-004)
- [ ] Explanation of why test is needed
---
## Coverage Metrics
- [ ] Overall coverage percentage calculated (FULL coverage / total criteria)
- [ ] P0 coverage percentage calculated
- [ ] P1 coverage percentage calculated
- [ ] P2 coverage percentage calculated (if applicable)
- [ ] Coverage by level calculated:
- [ ] E2E coverage %
- [ ] API coverage %
- [ ] Component coverage %
- [ ] Unit coverage %
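A minimal sketch of the percentage math, assuming the mapping step produced criterion→status and criterion→priority dictionaries (the data shapes are assumptions):

```python
def coverage_pct(statuses: dict[str, str], priorities: dict[str, str],
                 priority: str | None = None) -> float:
    """FULL coverage / total criteria, optionally filtered by priority."""
    keys = [c for c in statuses
            if priority is None or priorities.get(c) == priority]
    if not keys:
        return 0.0
    full = sum(1 for c in keys if statuses[c] == "FULL")
    return round(100 * full / len(keys), 1)

statuses = {"AC-1": "FULL", "AC-2": "PARTIAL", "AC-3": "FULL"}
priorities = {"AC-1": "P0", "AC-2": "P0", "AC-3": "P1"}
print(coverage_pct(statuses, priorities))        # overall: 66.7
print(coverage_pct(statuses, priorities, "P0"))  # P0 only: 50.0
```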
---
## Test Quality Verification
For each mapped test, verify:
- [ ] Explicit assertions are present (not hidden in helpers)
- [ ] Test follows Given-When-Then structure
- [ ] No hard waits or sleeps (deterministic waiting only)
- [ ] Self-cleaning (test cleans up its data)
- [ ] File size < 300 lines
- [ ] Test duration < 90 seconds
Quality issues flagged:
- [ ] **BLOCKER** issues identified (missing assertions, hard waits, flaky patterns)
- [ ] **WARNING** issues identified (large files, slow tests, unclear structure)
- [ ] **INFO** issues identified (style inconsistencies, missing documentation)
Knowledge fragments referenced:
- [ ] `test-quality.md` for Definition of Done
- [ ] `fixture-architecture.md` for self-cleaning patterns
- [ ] `network-first.md` for Playwright best practices
- [ ] `data-factories.md` for test data patterns
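Several of these checks can be automated. A minimal sketch that flags hard waits and oversized files, with thresholds taken from the checklist (the wait-call patterns scanned for are assumptions about the test stack):

```python
import re
from pathlib import Path

HARD_WAIT = re.compile(r"waitForTimeout|\bsleep\(")  # assumed flaky patterns
MAX_LINES = 300  # file size limit from the checklist

def lint_test_file(path: Path) -> list[str]:
    """Return BLOCKER/WARNING findings for one test file."""
    issues = []
    text = path.read_text(encoding="utf-8")
    if HARD_WAIT.search(text):
        issues.append("BLOCKER: hard wait detected (use deterministic waiting)")
    if len(text.splitlines()) > MAX_LINES:
        issues.append(f"WARNING: file exceeds {MAX_LINES} lines")
    return issues

for spec in Path("tests").rglob("*.spec.ts"):
    for issue in lint_test_file(spec):
        print(f"{spec}: {issue}")
```

Duration checks need actual run data and belong in the results-parsing step rather than static analysis.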
---
## Phase 1 Deliverables Generated
### Traceability Matrix Markdown
- [ ] File created at `{output_folder}/traceability-matrix.md`
- [ ] Template from `trace-template.md` used
- [ ] Full mapping table included
- [ ] Coverage status section included
- [ ] Gap analysis section included
- [ ] Quality assessment section included
- [ ] Recommendations section included
### Coverage Badge/Metric (if enabled)
- [ ] Badge markdown generated
- [ ] Metrics exported to JSON for CI/CD integration
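The exported metrics file might look like the following; the field names are illustrative assumptions and should match whatever your CI consumes:

```json
{
  "story_id": "1.3",
  "coverage": { "overall": 85.0, "p0": 100.0, "p1": 80.0 },
  "gaps": { "critical": 0, "high": 1, "medium": 2, "low": 1 }
}
```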
### Updated Story File (if enabled)
- [ ] "Traceability" section added to story markdown
- [ ] Link to traceability matrix included
- [ ] Coverage summary included
---
## Phase 1 Quality Assurance
### Accuracy Checks
- [ ] All acceptance criteria accounted for (none skipped)
- [ ] Test IDs correctly formatted (e.g., 1.3-E2E-001)
- [ ] File paths are correct and accessible
- [ ] Coverage percentages calculated correctly
- [ ] No false positives (tests incorrectly mapped to criteria)
- [ ] No false negatives (existing tests missed in mapping)
### Completeness Checks
- [ ] All test levels considered (E2E, API, Component, Unit)
- [ ] All priorities considered (P0, P1, P2, P3)
- [ ] All coverage statuses used appropriately (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- [ ] All gaps have recommendations
- [ ] All quality issues have severity and remediation guidance
### Actionability Checks
- [ ] Recommendations are specific (not generic)
- [ ] Test IDs suggested for new tests
- [ ] Given-When-Then provided for recommended tests
- [ ] Impact explained for each gap
- [ ] Priorities clear (CRITICAL, HIGH, MEDIUM, LOW)
---
## Phase 1 Documentation
- [ ] Traceability matrix is readable and well-formatted
- [ ] Tables render correctly in markdown
- [ ] Code blocks have proper syntax highlighting
- [ ] Links are valid and accessible
- [ ] Recommendations are clear and prioritized
---
# PHASE 2: QUALITY GATE DECISION
**Note**: Phase 2 executes only if `enable_gate_decision: true` in workflow.yaml
---
## Prerequisites
### Evidence Gathering
- [ ] Test execution results obtained (CI/CD pipeline, test framework reports)
- [ ] Story/epic/release file identified and read
- [ ] Test design document discovered or explicitly provided (if available)
- [ ] Traceability matrix discovered or explicitly provided (available from Phase 1)
- [ ] NFR assessment discovered or explicitly provided (if available)
- [ ] Code coverage report discovered or explicitly provided (if available)
- [ ] Burn-in results discovered or explicitly provided (if available)
### Evidence Validation
- [ ] Evidence freshness validated (warn if >7 days old, recommend re-running workflows)
- [ ] All required assessments available or user acknowledged gaps
- [ ] Test results are complete (not partial or interrupted runs)
- [ ] Test results match current codebase (not from outdated branch)
### Knowledge Base Loading
- [ ] `risk-governance.md` loaded successfully
- [ ] `probability-impact.md` loaded successfully
- [ ] `test-quality.md` loaded successfully
- [ ] `test-priorities.md` loaded successfully
- [ ] `ci-burn-in.md` loaded (if burn-in results available)
---
## Process Steps
### Step 1: Context Loading
- [ ] Gate type identified (story/epic/release/hotfix)
- [ ] Target ID extracted (story_id, epic_num, or release_version)
- [ ] Decision thresholds loaded from workflow variables
- [ ] Risk tolerance configuration loaded
- [ ] Waiver policy loaded
### Step 2: Evidence Parsing
**Test Results:**
- [ ] Total test count extracted
- [ ] Passed test count extracted
- [ ] Failed test count extracted
- [ ] Skipped test count extracted
- [ ] Test duration extracted
- [ ] P0 test pass rate calculated
- [ ] P1 test pass rate calculated
- [ ] Overall test pass rate calculated
**Quality Assessments:**
- [ ] P0/P1/P2/P3 scenarios extracted from test-design.md (if available)
- [ ] Risk scores extracted from test-design.md (if available)
- [ ] Coverage percentages extracted from traceability-matrix.md (available from Phase 1)
- [ ] Coverage gaps extracted from traceability-matrix.md (available from Phase 1)
- [ ] NFR status extracted from nfr-assessment.md (if available)
- [ ] Security issues count extracted from nfr-assessment.md (if available)
**Code Coverage:**
- [ ] Line coverage percentage extracted (if available)
- [ ] Branch coverage percentage extracted (if available)
- [ ] Function coverage percentage extracted (if available)
- [ ] Critical path coverage validated (if available)
**Burn-in Results:**
- [ ] Burn-in iterations count extracted (if available)
- [ ] Flaky tests count extracted (if available)
- [ ] Stability score calculated (if available)
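The pass-rate math from the test-results block above reduces to a few lines. A minimal sketch, assuming results arrive as (test_id, priority, passed) tuples (the input shape is an assumption):

```python
def pass_rate(results: list[tuple[str, str, bool]],
              priority: str | None = None) -> float:
    """Percentage of passing tests, optionally filtered by priority."""
    subset = [r for r in results if priority is None or r[1] == priority]
    if not subset:
        return 100.0  # nothing to evaluate at this priority
    return round(100 * sum(1 for _, _, ok in subset if ok) / len(subset), 1)

results = [
    ("1.3-E2E-001", "P0", True),
    ("1.3-E2E-002", "P0", True),
    ("1.3-API-001", "P1", False),
]
print(pass_rate(results, "P0"))  # 100.0
print(pass_rate(results, "P1"))  # 0.0
print(pass_rate(results))        # overall: 66.7
```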
### Step 3: Decision Rules Application
**P0 Criteria Evaluation:**
- [ ] P0 test pass rate evaluated (must be 100%)
- [ ] P0 acceptance criteria coverage evaluated (must be 100%)
- [ ] Security issues count evaluated (must be 0)
- [ ] Critical NFR failures evaluated (must be 0)
- [ ] Flaky tests evaluated (must be 0 if burn-in enabled)
- [ ] P0 decision recorded: PASS or FAIL
**P1 Criteria Evaluation:**
- [ ] P1 test pass rate evaluated (threshold: min_p1_pass_rate)
- [ ] P1 acceptance criteria coverage evaluated (threshold: 95%)
- [ ] Overall test pass rate evaluated (threshold: min_overall_pass_rate)
- [ ] Code coverage evaluated (threshold: min_coverage)
- [ ] P1 decision recorded: PASS or CONCERNS
**P2/P3 Criteria Evaluation:**
- [ ] P2 failures tracked (informational, don't block if allow_p2_failures: true)
- [ ] P3 failures tracked (informational, don't block if allow_p3_failures: true)
- [ ] Residual risks documented
**Final Decision:**
- [ ] Decision determined: PASS / CONCERNS / FAIL / WAIVED
- [ ] Decision rationale documented
- [ ] Decision is deterministic (follows rules, not arbitrary)
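The deterministic rule set above reduces to a small function. A sketch with threshold names mirroring the workflow variables; the default values are assumptions and should come from workflow.yaml:

```python
def gate_decision(p0_pass_rate: float, p0_coverage: float,
                  security_issues: int, critical_nfr_failures: int,
                  flaky_tests: int, p1_pass_rate: float,
                  overall_pass_rate: float, overall_coverage: float,
                  min_p1_pass_rate: float = 95.0,
                  min_overall_pass_rate: float = 95.0,
                  min_coverage: float = 80.0) -> str:
    # P0 rules: any miss is a hard FAIL (only a waiver can override).
    if (p0_pass_rate < 100 or p0_coverage < 100 or security_issues > 0
            or critical_nfr_failures > 0 or flaky_tests > 0):
        return "FAIL"
    # P1 rules: misses downgrade to CONCERNS rather than blocking.
    if (p1_pass_rate < min_p1_pass_rate
            or overall_pass_rate < min_overall_pass_rate
            or overall_coverage < min_coverage):
        return "CONCERNS"
    return "PASS"
```

WAIVED is deliberately absent: per the rules above it is a manual override of FAIL that requires a named approver, expiry date, and remediation plan.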
### Step 4: Documentation
**Gate Decision Document Created:**
- [ ] Story/epic/release info section complete (ID, title, description, links)
- [ ] Decision clearly stated (PASS / CONCERNS / FAIL / WAIVED)
- [ ] Decision date recorded
- [ ] Evaluator recorded (user or agent name)
**Evidence Summary Documented:**
- [ ] Test results summary complete (total, passed, failed, pass rates)
- [ ] Coverage summary complete (P0/P1 criteria, code coverage)
- [ ] NFR validation summary complete (security, performance, reliability, maintainability)
- [ ] Flakiness summary complete (burn-in iterations, flaky test count)
**Rationale Documented:**
- [ ] Decision rationale clearly explained
- [ ] Key evidence highlighted
- [ ] Assumptions and caveats noted (if any)
**Residual Risks Documented (if CONCERNS or WAIVED):**
- [ ] Unresolved P1/P2 issues listed
- [ ] Probability × impact estimated for each risk
- [ ] Mitigations or workarounds described
**Waivers Documented (if WAIVED):**
- [ ] Waiver reason documented (business justification)
- [ ] Waiver approver documented (name, role)
- [ ] Waiver expiry date documented
- [ ] Remediation plan documented (fix in next release, due date)
- [ ] Monitoring plan documented
**Critical Issues Documented (if FAIL or CONCERNS):**
- [ ] Top 5-10 critical issues listed
- [ ] Priority assigned to each issue (P0/P1/P2)
- [ ] Owner assigned to each issue
- [ ] Due date assigned to each issue
**Recommendations Documented:**
- [ ] Next steps clearly stated for decision type
- [ ] Deployment recommendation provided
- [ ] Monitoring recommendations provided (if applicable)
- [ ] Remediation recommendations provided (if applicable)
### Step 5: Status Updates and Notifications
**Status File Updated:**
- [ ] Gate decision appended to bmm-workflow-status.md (if append_to_history: true)
- [ ] Format correct: `[DATE] Gate Decision: {DECISION} - Target {ID} - {rationale}`
- [ ] Status file committed or staged for commit
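For instance, a CONCERNS entry in the stated format might read:

```text
[2026-01-26] Gate Decision: CONCERNS - Target 1.3 - P1 coverage 88% below 90% threshold
```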
**Gate YAML Created:**
- [ ] Gate YAML snippet generated with decision and criteria
- [ ] Evidence references included in YAML
- [ ] Next steps included in YAML
- [ ] YAML file saved to output folder
**Stakeholder Notification Generated:**
- [ ] Notification subject line created
- [ ] Notification body created with summary
- [ ] Recipients identified (PM, SM, DEV lead, stakeholders)
- [ ] Notification ready for delivery (if notify_stakeholders: true)
**Outputs Saved:**
- [ ] Gate decision document saved to `{output_file}`
- [ ] Gate YAML saved to `{output_folder}/gate-decision-{target}.yaml`
- [ ] All outputs are valid and readable
---
## Phase 2 Output Validation
### Gate Decision Document
**Completeness:**
- [ ] All required sections present (info, decision, evidence, rationale, next steps)
- [ ] No placeholder text or TODOs left in document
- [ ] All evidence references are accurate and complete
- [ ] All links to artifacts are valid
**Accuracy:**
- [ ] Decision matches applied criteria rules
- [ ] Test results match CI/CD pipeline output
- [ ] Coverage percentages match reports
- [ ] NFR status matches assessment document
- [ ] No contradictions or inconsistencies
**Clarity:**
- [ ] Decision rationale is clear and unambiguous
- [ ] Technical jargon is explained or avoided
- [ ] Stakeholders can understand next steps
- [ ] Recommendations are actionable
### Gate YAML
**Format:**
- [ ] YAML is valid (no syntax errors)
- [ ] All required fields present (target, decision, date, evaluator, criteria, evidence)
- [ ] Field values are correct data types (numbers, strings, dates)
**Content:**
- [ ] Criteria values match decision document
- [ ] Evidence references are accurate
- [ ] Next steps align with decision type
---
## Phase 2 Quality Checks
### Decision Integrity
- [ ] Decision is deterministic (follows rules, not arbitrary)
- [ ] P0 failures result in FAIL decision (unless waived)
- [ ] Security issues result in FAIL decision (unless waived - but should never be waived)
- [ ] Waivers have business justification and approver (if WAIVED)
- [ ] Residual risks are documented (if CONCERNS or WAIVED)
### Evidence-Based
- [ ] Decision is based on actual test results (not guesses)
- [ ] All claims are supported by evidence
- [ ] No assumptions without documentation
- [ ] Evidence sources are cited (CI run IDs, report URLs)
### Transparency
- [ ] Decision rationale is transparent and auditable
- [ ] Criteria evaluation is documented step-by-step
- [ ] Any deviations from standard process are explained
- [ ] Waiver justifications are clear (if applicable)
### Consistency
- [ ] Decision aligns with risk-governance knowledge fragment
- [ ] Priority framework (P0/P1/P2/P3) applied consistently
- [ ] Terminology consistent with test-quality knowledge fragment
- [ ] Decision matrix followed correctly
---
## Phase 2 Integration Points
### BMad Workflow Status
- [ ] Gate decision added to `bmm-workflow-status.md`
- [ ] Format matches existing gate history entries
- [ ] Timestamp is accurate
- [ ] Decision summary is concise (<80 chars)
### CI/CD Pipeline
- [ ] Gate YAML is CI/CD-compatible
- [ ] YAML can be parsed by pipeline automation
- [ ] Decision can be used to block/allow deployments
- [ ] Evidence references are accessible to pipeline
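A minimal gating step, assuming PyYAML is available in the pipeline image, the placeholders in the gate YAML have been filled in, and the field layout from the trace template's snippet:

```python
import sys
import yaml  # PyYAML; assumed available in the pipeline image

with open(sys.argv[1], encoding="utf-8") as f:
    gate = yaml.safe_load(f)

decision = gate["traceability_and_gate"]["gate_decision"]["decision"]
print(f"Gate decision: {decision}")

# Non-zero exit blocks the deployment stage; WAIVED is allowed to proceed
# because it carries an approved waiver with expiry and remediation plan.
if decision == "FAIL":
    sys.exit(1)
```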
### Stakeholders
- [ ] Notification message is clear and actionable
- [ ] Decision is explained in non-technical terms
- [ ] Next steps are specific and time-bound
- [ ] Recipients are appropriate for decision type
---
## Phase 2 Compliance and Audit
### Audit Trail
- [ ] Decision date and time recorded
- [ ] Evaluator identified (user or agent)
- [ ] All evidence sources cited
- [ ] Decision criteria documented
- [ ] Rationale clearly explained
### Traceability
- [ ] Gate decision traceable to story/epic/release
- [ ] Evidence traceable to specific test runs
- [ ] Assessments traceable to workflows that created them
- [ ] Waiver traceable to approver (if applicable)
### Compliance
- [ ] Security requirements validated (no unresolved vulnerabilities)
- [ ] Quality standards met or waived with justification
- [ ] Regulatory requirements addressed (if applicable)
- [ ] Documentation sufficient for external audit
---
## Phase 2 Edge Cases and Exceptions
### Missing Evidence
- [ ] If test-design.md missing, decision still possible with test results + trace
- [ ] If traceability-matrix.md missing, decision still possible with test results (but Phase 1 should provide it)
- [ ] If nfr-assessment.md missing, NFR validation marked as NOT ASSESSED
- [ ] If code coverage missing, coverage criterion marked as NOT ASSESSED
- [ ] User acknowledged gaps in evidence or provided alternative proof
### Stale Evidence
- [ ] Evidence freshness checked (if validate_evidence_freshness: true)
- [ ] Warnings issued for assessments >7 days old
- [ ] User acknowledged stale evidence or re-ran workflows
- [ ] Decision document notes any stale evidence used
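A minimal freshness check, assuming file modification time is an acceptable proxy for assessment age:

```python
import time
from pathlib import Path

MAX_AGE_DAYS = 7  # freshness threshold from the checklist

def stale_evidence(paths: list[str]) -> list[str]:
    """Return evidence files older than the freshness threshold."""
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    return [p for p in paths
            if Path(p).exists() and Path(p).stat().st_mtime < cutoff]

for path in stale_evidence(["nfr-assessment.md", "traceability-matrix.md"]):
    print(f"WARNING: {path} is older than {MAX_AGE_DAYS} days - re-run its workflow")
```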
### Conflicting Evidence
- [ ] Conflicts between test results and assessments resolved
- [ ] Most recent/authoritative source identified
- [ ] Conflict resolution documented in decision rationale
- [ ] User consulted if conflict cannot be resolved
### Waiver Scenarios
- [ ] Waiver only used for FAIL decision (not PASS or CONCERNS)
- [ ] Waiver has business justification (not technical convenience)
- [ ] Waiver has named approver with authority (VP/CTO/PO)
- [ ] Waiver has expiry date (does NOT apply to future releases)
- [ ] Waiver has remediation plan with concrete due date
- [ ] Security vulnerabilities are NOT waived (enforced)
---
# FINAL VALIDATION (Both Phases)
## Non-Prescriptive Validation
- [ ] Traceability format adapted to team needs (not rigid template)
- [ ] Examples are minimal and focused on patterns
- [ ] Teams can extend with custom classifications
- [ ] Integration with external systems supported (JIRA, Azure DevOps)
- [ ] Compliance requirements considered (if applicable)
---
## Documentation and Communication
- [ ] All documents are readable and well-formatted
- [ ] Tables render correctly in markdown
- [ ] Code blocks have proper syntax highlighting
- [ ] Links are valid and accessible
- [ ] Recommendations are clear and prioritized
- [ ] Gate decision is prominent and unambiguous (Phase 2)
---
## Final Validation
**Phase 1 (Traceability):**
- [ ] All prerequisites met
- [ ] All acceptance criteria mapped or gaps documented
- [ ] P0 coverage is 100% OR documented as BLOCKER
- [ ] Gap analysis is complete and prioritized
- [ ] Test quality issues identified and flagged
- [ ] Deliverables generated and saved
**Phase 2 (Gate Decision):**
- [ ] All quality evidence gathered
- [ ] Decision criteria applied correctly
- [ ] Decision rationale documented
- [ ] Gate YAML ready for CI/CD integration
- [ ] Status file updated (if enabled)
- [ ] Stakeholders notified (if enabled)
**Workflow Complete:**
- [ ] Phase 1 completed successfully
- [ ] Phase 2 completed successfully (if enabled)
- [ ] All outputs validated and saved
- [ ] Ready to proceed based on gate decision
---
## Sign-Off
**Phase 1 - Traceability Status:**
- [ ] ✅ PASS - All quality gates met, no critical gaps
- [ ] ⚠️ WARN - P1 gaps exist, address before PR merge
- [ ] ❌ FAIL - P0 gaps exist, BLOCKER for release
**Phase 2 - Gate Decision Status (if enabled):**
- [ ] ✅ PASS - Deploy to production
- [ ] ⚠️ CONCERNS - Deploy with monitoring
- [ ] ❌ FAIL - Block deployment, fix issues
- [ ] 🔓 WAIVED - Deploy with business approval and remediation plan
**Next Actions:**
- If PASS (both phases): Proceed to deployment
- If WARN/CONCERNS: Address gaps/issues, proceed with monitoring
- If FAIL (either phase): Run `*atdd` for missing tests, fix issues, re-run `*trace`
- If WAIVED: Deploy with approved waiver, schedule remediation
---
## Notes
Record any issues, deviations, or important observations during workflow execution:
- **Phase 1 Issues**: [Note any traceability mapping challenges, missing tests, quality concerns]
- **Phase 2 Issues**: [Note any missing, stale, or conflicting evidence]
- **Decision Rationale**: [Document any nuanced reasoning or edge cases]
- **Waiver Details**: [Document waiver negotiations or approvals]
- **Follow-up Actions**: [List any actions required after gate decision]
---
<!-- Powered by BMAD-CORE™ -->

File diff suppressed because it is too large.


@@ -1,675 +0,0 @@
# Traceability Matrix & Gate Decision - Story {STORY_ID}
**Story:** {STORY_TITLE}
**Date:** {DATE}
**Evaluator:** {user_name or TEA Agent}
---
Note: This workflow does not generate tests. If gaps exist, run `*atdd` or `*automate` to create coverage.
## PHASE 1: REQUIREMENTS TRACEABILITY
### Coverage Summary
| Priority | Total Criteria | FULL Coverage | Coverage % | Status |
| --------- | -------------- | ------------- | ---------- | ------------ |
| P0 | {P0_TOTAL} | {P0_FULL} | {P0_PCT}% | {P0_STATUS} |
| P1 | {P1_TOTAL} | {P1_FULL} | {P1_PCT}% | {P1_STATUS} |
| P2 | {P2_TOTAL} | {P2_FULL} | {P2_PCT}% | {P2_STATUS} |
| P3 | {P3_TOTAL} | {P3_FULL} | {P3_PCT}% | {P3_STATUS} |
| **Total** | **{TOTAL}** | **{FULL}** | **{PCT}%** | **{STATUS}** |
**Legend:**
- ✅ PASS - Coverage meets quality gate threshold
- ⚠️ WARN - Coverage below threshold but not critical
- ❌ FAIL - Coverage below minimum threshold (blocker)
---
### Detailed Mapping
#### {CRITERION_ID}: {CRITERION_DESCRIPTION} ({PRIORITY})
- **Coverage:** {COVERAGE_STATUS} {STATUS_ICON}
- **Tests:**
- `{TEST_ID}` - {TEST_FILE}:{LINE}
- **Given:** {GIVEN}
- **When:** {WHEN}
- **Then:** {THEN}
- `{TEST_ID_2}` - {TEST_FILE_2}:{LINE}
- **Given:** {GIVEN_2}
- **When:** {WHEN_2}
- **Then:** {THEN_2}
- **Gaps:** (if PARTIAL or UNIT-ONLY or INTEGRATION-ONLY)
- Missing: {MISSING_SCENARIO_1}
- Missing: {MISSING_SCENARIO_2}
- **Recommendation:** {RECOMMENDATION_TEXT}
---
#### Example: AC-1: User can login with email and password (P0)
- **Coverage:** FULL ✅
- **Tests:**
- `1.3-E2E-001` - tests/e2e/auth.spec.ts:12
- **Given:** User has valid credentials
- **When:** User submits login form
- **Then:** User is redirected to dashboard
- `1.3-UNIT-001` - tests/unit/auth-service.spec.ts:8
- **Given:** Valid email and password hash
- **When:** validateCredentials is called
- **Then:** Returns user object
---
#### Example: AC-3: User can reset password via email (P1)
- **Coverage:** PARTIAL ⚠️
- **Tests:**
- `1.3-E2E-003` - tests/e2e/auth.spec.ts:44
- **Given:** User requests password reset
- **When:** User clicks reset link in email
- **Then:** User can set new password
- **Gaps:**
- Missing: Email delivery validation
- Missing: Expired token handling (error path)
- Missing: Invalid token handling (security test)
- Missing: Unit test for token generation logic
- **Recommendation:** Add `1.3-API-001` for email service integration testing and `1.3-UNIT-003` for token generation logic. Add `1.3-E2E-004` for error path validation (expired/invalid tokens).
---
### Gap Analysis
#### Critical Gaps (BLOCKER) ❌
{CRITICAL_GAP_COUNT} gaps found. **Do not release until resolved.**
1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P0)
- Current Coverage: {COVERAGE_STATUS}
- Missing Tests: {MISSING_TEST_DESCRIPTION}
- Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL})
- Impact: {IMPACT_DESCRIPTION}
---
#### High Priority Gaps (PR BLOCKER) ⚠️
{HIGH_GAP_COUNT} gaps found. **Address before PR merge.**
1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P1)
- Current Coverage: {COVERAGE_STATUS}
- Missing Tests: {MISSING_TEST_DESCRIPTION}
- Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL})
- Impact: {IMPACT_DESCRIPTION}
---
#### Medium Priority Gaps (Nightly) ⚠️
{MEDIUM_GAP_COUNT} gaps found. **Address in nightly test improvements.**
1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P2)
- Current Coverage: {COVERAGE_STATUS}
- Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL})
---
#### Low Priority Gaps (Optional)
{LOW_GAP_COUNT} gaps found. **Optional - add if time permits.**
1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P3)
- Current Coverage: {COVERAGE_STATUS}
---
### Quality Assessment
#### Tests with Issues
**BLOCKER Issues**
- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION}
**WARNING Issues** ⚠️
- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION}
**INFO Issues**
- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION}
---
#### Example Quality Issues
**WARNING Issues** ⚠️
- `1.3-E2E-001` - 145 seconds (exceeds 90s target) - Optimize fixture setup to reduce test duration
- `1.3-UNIT-005` - 320 lines (exceeds 300 line limit) - Split into multiple focused test files
**INFO Issues**
- `1.3-E2E-002` - Missing Given-When-Then structure - Refactor describe block to use BDD format
---
#### Tests Passing Quality Gates
**{PASSING_TEST_COUNT}/{TOTAL_TEST_COUNT} tests ({PASSING_PCT}%) meet all quality criteria** ✅
---
### Duplicate Coverage Analysis
#### Acceptable Overlap (Defense in Depth)
- {CRITERION_ID}: Tested at unit (business logic) and E2E (user journey) ✅
#### Unacceptable Duplication ⚠️
- {CRITERION_ID}: Same validation at E2E and Component level
- Recommendation: Remove {TEST_ID} or consolidate with {OTHER_TEST_ID}
---
### Coverage by Test Level
| Test Level | Tests | Criteria Covered | Coverage % |
| ---------- | ----------------- | -------------------- | ---------------- |
| E2E | {E2E_COUNT} | {E2E_CRITERIA} | {E2E_PCT}% |
| API | {API_COUNT} | {API_CRITERIA} | {API_PCT}% |
| Component | {COMP_COUNT} | {COMP_CRITERIA} | {COMP_PCT}% |
| Unit | {UNIT_COUNT} | {UNIT_CRITERIA} | {UNIT_PCT}% |
| **Total** | **{TOTAL_TESTS}** | **{TOTAL_CRITERIA}** | **{TOTAL_PCT}%** |
---
### Traceability Recommendations
#### Immediate Actions (Before PR Merge)
1. **{ACTION_1}** - {DESCRIPTION}
2. **{ACTION_2}** - {DESCRIPTION}
#### Short-term Actions (This Sprint)
1. **{ACTION_1}** - {DESCRIPTION}
2. **{ACTION_2}** - {DESCRIPTION}
#### Long-term Actions (Backlog)
1. **{ACTION_1}** - {DESCRIPTION}
---
#### Example Recommendations
**Immediate Actions (Before PR Merge)**
1. **Add P1 Password Reset Tests** - Implement `1.3-API-001` for email service integration and `1.3-E2E-004` for error path validation. P1 coverage currently at 80%, target is 90%.
2. **Optimize Slow E2E Test** - Refactor `1.3-E2E-001` to use faster fixture setup. Currently 145s, target is <90s.
**Short-term Actions (This Sprint)**
1. **Enhance P2 Coverage** - Add E2E validation for session timeout (`1.3-E2E-005`). Currently UNIT-ONLY coverage.
2. **Split Large Test File** - Break `1.3-UNIT-005` (320 lines) into multiple focused test files (<300 lines each).
**Long-term Actions (Backlog)**
1. **Enrich P3 Coverage** - Add tests for edge cases in P3 criteria if time permits.
---
## PHASE 2: QUALITY GATE DECISION
**Gate Type:** {story | epic | release | hotfix}
**Decision Mode:** {deterministic | manual}
---
### Evidence Summary
#### Test Execution Results
- **Total Tests**: {total_count}
- **Passed**: {passed_count} ({pass_percentage}%)
- **Failed**: {failed_count} ({fail_percentage}%)
- **Skipped**: {skipped_count} ({skip_percentage}%)
- **Duration**: {total_duration}
**Priority Breakdown:**
- **P0 Tests**: {p0_passed}/{p0_total} passed ({p0_pass_rate}%) {✅ | ❌}
- **P1 Tests**: {p1_passed}/{p1_total} passed ({p1_pass_rate}%) {✅ | ⚠️ | ❌}
- **P2 Tests**: {p2_passed}/{p2_total} passed ({p2_pass_rate}%) {informational}
- **P3 Tests**: {p3_passed}/{p3_total} passed ({p3_pass_rate}%) {informational}
**Overall Pass Rate**: {overall_pass_rate}% {✅ | ⚠️ | ❌}
**Test Results Source**: {CI_run_id | test_report_url | local_run}
---
#### Coverage Summary (from Phase 1)
**Requirements Coverage:**
- **P0 Acceptance Criteria**: {p0_covered}/{p0_total} covered ({p0_coverage}%) {✅ | ❌}
- **P1 Acceptance Criteria**: {p1_covered}/{p1_total} covered ({p1_coverage}%) {✅ | ⚠️ | ❌}
- **P2 Acceptance Criteria**: {p2_covered}/{p2_total} covered ({p2_coverage}%) {informational}
- **Overall Coverage**: {overall_coverage}%
**Code Coverage** (if available):
- **Line Coverage**: {line_coverage}% {✅ | ⚠️ | ❌}
- **Branch Coverage**: {branch_coverage}% {✅ | ⚠️ | ❌}
- **Function Coverage**: {function_coverage}% {✅ | ⚠️ | ❌}
**Coverage Source**: {coverage_report_url | coverage_file_path}
---
#### Non-Functional Requirements (NFRs)
**Security**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌}
- Security Issues: {security_issue_count}
- {details_if_issues}
**Performance**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌}
- {performance_metrics_summary}
**Reliability**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌}
- {reliability_metrics_summary}
**Maintainability**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌}
- {maintainability_metrics_summary}
**NFR Source**: {nfr_assessment_file_path | not_assessed}
---
#### Flakiness Validation
**Burn-in Results** (if available):
- **Burn-in Iterations**: {iteration_count} (e.g., 10)
- **Flaky Tests Detected**: {flaky_test_count} {✅ if 0 | ⚠️ if >0}
- **Stability Score**: {stability_percentage}%
**Flaky Tests List** (if any):
- {flaky_test_1_name} - {failure_rate}
- {flaky_test_2_name} - {failure_rate}
**Burn-in Source**: {CI_burn_in_run_id | not_available}
---
### Decision Criteria Evaluation
#### P0 Criteria (Must ALL Pass)
| Criterion             | Threshold | Actual                    | Status               |
| --------------------- | --------- | ------------------------- | -------------------- |
| P0 Coverage           | 100%      | {p0_coverage}%            | {✅ PASS \| ❌ FAIL} |
| P0 Test Pass Rate     | 100%      | {p0_pass_rate}%           | {✅ PASS \| ❌ FAIL} |
| Security Issues       | 0         | {security_issue_count}    | {✅ PASS \| ❌ FAIL} |
| Critical NFR Failures | 0         | {critical_nfr_fail_count} | {✅ PASS \| ❌ FAIL} |
| Flaky Tests           | 0         | {flaky_test_count}        | {✅ PASS \| ❌ FAIL} |
**P0 Evaluation**: {✅ ALL PASS | ❌ ONE OR MORE FAILED}
---
#### P1 Criteria (Required for PASS, May Accept for CONCERNS)
| Criterion              | Threshold                 | Actual               | Status                              |
| ---------------------- | ------------------------- | -------------------- | ----------------------------------- |
| P1 Coverage            | ≥{min_p1_coverage}%       | {p1_coverage}%       | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} |
| P1 Test Pass Rate      | ≥{min_p1_pass_rate}%      | {p1_pass_rate}%      | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} |
| Overall Test Pass Rate | ≥{min_overall_pass_rate}% | {overall_pass_rate}% | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} |
| Overall Coverage       | ≥{min_coverage}%          | {overall_coverage}%  | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} |
**P1 Evaluation**: {✅ ALL PASS | ⚠️ SOME CONCERNS | ❌ FAILED}
---
#### P2/P3 Criteria (Informational, Don't Block)
| Criterion | Actual | Notes |
| ----------------- | --------------- | ------------------------------------------------------------ |
| P2 Test Pass Rate | {p2_pass_rate}% | {allow_p2_failures ? "Tracked, doesn't block" : "Evaluated"} |
| P3 Test Pass Rate | {p3_pass_rate}% | {allow_p3_failures ? "Tracked, doesn't block" : "Evaluated"} |
---
### GATE DECISION: {PASS | CONCERNS | FAIL | WAIVED}
---
### Rationale
{Explain decision based on criteria evaluation}
{Highlight key evidence that drove decision}
{Note any assumptions or caveats}
**Example (PASS):**
> All P0 criteria met with 100% coverage and pass rates across critical tests. All P1 criteria exceeded thresholds with 98% overall pass rate and 92% coverage. No security issues detected. No flaky tests in validation. Feature is ready for production deployment with standard monitoring.
**Example (CONCERNS):**
> All P0 criteria met, ensuring critical user journeys are protected. However, P1 coverage (88%) falls below threshold (90%) due to missing E2E test for AC-5 edge case. Overall pass rate (96%) is excellent. Issues are non-critical and have acceptable workarounds. Risk is low enough to deploy with enhanced monitoring.
**Example (FAIL):**
> CRITICAL BLOCKERS DETECTED:
>
> 1. P0 coverage incomplete (80%) - AC-2 security validation missing
> 2. P0 test failures (75% pass rate) in core search functionality
> 3. Unresolved SQL injection vulnerability in search filter (CRITICAL)
>
> Release MUST BE BLOCKED until P0 issues are resolved. Security vulnerability cannot be waived.
**Example (WAIVED):**
> Original decision was FAIL due to P0 test failure in legacy Excel 2007 export module (affects <1% of users). However, release contains critical GDPR compliance features required by regulatory deadline (Oct 15). Business has approved waiver given:
>
> - Regulatory priority overrides legacy module risk
> - Workaround available (use Excel 2010+)
> - Issue will be fixed in v2.4.1 hotfix (due Oct 20)
> - Enhanced monitoring in place
---
### {Section: Delete if not applicable}
#### Residual Risks (For CONCERNS or WAIVED)
List unresolved P1/P2 issues that don't block release but should be tracked:
1. **{Risk Description}**
- **Priority**: P1 | P2
- **Probability**: Low | Medium | High
- **Impact**: Low | Medium | High
- **Risk Score**: {probability × impact}
- **Mitigation**: {workaround or monitoring plan}
- **Remediation**: {fix in next sprint/release}
**Overall Residual Risk**: {LOW | MEDIUM | HIGH}
---
#### Waiver Details (For WAIVED only)
**Original Decision**: ❌ FAIL
**Reason for Failure**:
- {list_of_blocking_issues}
**Waiver Information**:
- **Waiver Reason**: {business_justification}
- **Waiver Approver**: {name}, {role} (e.g., Jane Doe, VP Engineering)
- **Approval Date**: {YYYY-MM-DD}
- **Waiver Expiry**: {YYYY-MM-DD} (**NOTE**: Does NOT apply to next release)
**Monitoring Plan**:
- {enhanced_monitoring_1}
- {enhanced_monitoring_2}
- {escalation_criteria}
**Remediation Plan**:
- **Fix Target**: {next_release_version} (e.g., v2.4.1 hotfix)
- **Due Date**: {YYYY-MM-DD}
- **Owner**: {team_or_person}
- **Verification**: {how_fix_will_be_verified}
**Business Justification**:
{detailed_explanation_of_why_waiver_is_acceptable}
---
#### Critical Issues (For FAIL or CONCERNS)
Top blockers requiring immediate attention:
| Priority | Issue | Description | Owner | Due Date | Status |
| -------- | ------------- | ------------------- | ------------ | ------------ | ------------------ |
| P0 | {issue_title} | {brief_description} | {owner_name} | {YYYY-MM-DD} | {OPEN/IN_PROGRESS} |
| P0 | {issue_title} | {brief_description} | {owner_name} | {YYYY-MM-DD} | {OPEN/IN_PROGRESS} |
| P1 | {issue_title} | {brief_description} | {owner_name} | {YYYY-MM-DD} | {OPEN/IN_PROGRESS} |
**Blocking Issues Count**: {p0_blocker_count} P0 blockers, {p1_blocker_count} P1 issues
---
### Gate Recommendations
#### For PASS Decision ✅
1. **Proceed to deployment**
- Deploy to staging environment
- Validate with smoke tests
- Monitor key metrics for 24-48 hours
- Deploy to production with standard monitoring
2. **Post-Deployment Monitoring**
- {metric_1_to_monitor}
- {metric_2_to_monitor}
- {alert_thresholds}
3. **Success Criteria**
- {success_criterion_1}
- {success_criterion_2}
---
#### For CONCERNS Decision ⚠️
1. **Deploy with Enhanced Monitoring**
- Deploy to staging with extended validation period
- Enable enhanced logging/monitoring for known risk areas:
- {risk_area_1}
- {risk_area_2}
- Set aggressive alerts for potential issues
- Deploy to production with caution
2. **Create Remediation Backlog**
- Create story: "{fix_title_1}" (Priority: {priority})
- Create story: "{fix_title_2}" (Priority: {priority})
- Target sprint: {next_sprint}
3. **Post-Deployment Actions**
- Monitor {specific_areas} closely for {time_period}
- Weekly status updates on remediation progress
- Re-assess after fixes deployed
---
#### For FAIL Decision ❌
1. **Block Deployment Immediately**
- Do NOT deploy to any environment
- Notify stakeholders of blocking issues
- Escalate to tech lead and PM
2. **Fix Critical Issues**
- Address P0 blockers listed in Critical Issues section
- Owner assignments confirmed
- Due dates agreed upon
- Daily standup on blocker resolution
3. **Re-Run Gate After Fixes**
- Re-run full test suite after fixes
- Re-run `bmad tea *trace` workflow
- Verify decision is PASS before deploying
---
#### For WAIVED Decision 🔓
1. **Deploy with Business Approval**
- Confirm waiver approver has signed off
- Document waiver in release notes
- Notify all stakeholders of waived risks
2. **Aggressive Monitoring**
- {enhanced_monitoring_plan}
- {escalation_procedures}
- Daily checks on waived risk areas
3. **Mandatory Remediation**
- Fix MUST be completed by {due_date}
- Issue CANNOT be waived in next release
- Track remediation progress weekly
- Verify fix in next gate
---
### Next Steps
**Immediate Actions** (next 24-48 hours):
1. {action_1}
2. {action_2}
3. {action_3}
**Follow-up Actions** (next sprint/release):
1. {action_1}
2. {action_2}
3. {action_3}
**Stakeholder Communication**:
- Notify PM: {decision_summary}
- Notify SM: {decision_summary}
- Notify DEV lead: {decision_summary}
---
## Integrated YAML Snippet (CI/CD)
```yaml
traceability_and_gate:
  # Phase 1: Traceability
  traceability:
    story_id: "{STORY_ID}"
    date: "{DATE}"
    coverage:
      overall: {OVERALL_PCT}%
      p0: {P0_PCT}%
      p1: {P1_PCT}%
      p2: {P2_PCT}%
      p3: {P3_PCT}%
    gaps:
      critical: {CRITICAL_COUNT}
      high: {HIGH_COUNT}
      medium: {MEDIUM_COUNT}
      low: {LOW_COUNT}
    quality:
      passing_tests: {PASSING_COUNT}
      total_tests: {TOTAL_TESTS}
      blocker_issues: {BLOCKER_COUNT}
      warning_issues: {WARNING_COUNT}
    recommendations:
      - "{RECOMMENDATION_1}"
      - "{RECOMMENDATION_2}"

  # Phase 2: Gate Decision
  gate_decision:
    decision: "{PASS | CONCERNS | FAIL | WAIVED}"
    gate_type: "{story | epic | release | hotfix}"
    decision_mode: "{deterministic | manual}"
    criteria:
      p0_coverage: {p0_coverage}%
      p0_pass_rate: {p0_pass_rate}%
      p1_coverage: {p1_coverage}%
      p1_pass_rate: {p1_pass_rate}%
      overall_pass_rate: {overall_pass_rate}%
      overall_coverage: {overall_coverage}%
      security_issues: {security_issue_count}
      critical_nfrs_fail: {critical_nfr_fail_count}
      flaky_tests: {flaky_test_count}
    thresholds:
      min_p0_coverage: 100
      min_p0_pass_rate: 100
      min_p1_coverage: {min_p1_coverage}
      min_p1_pass_rate: {min_p1_pass_rate}
      min_overall_pass_rate: {min_overall_pass_rate}
      min_coverage: {min_coverage}
    evidence:
      test_results: "{CI_run_id | test_report_url}"
      traceability: "{trace_file_path}"
      nfr_assessment: "{nfr_file_path}"
      code_coverage: "{coverage_report_url}"
    next_steps: "{brief_summary_of_recommendations}"
    waiver: # Only if WAIVED
      reason: "{business_justification}"
      approver: "{name}, {role}"
      expiry: "{YYYY-MM-DD}"
      remediation_due: "{YYYY-MM-DD}"
```
---
## Related Artifacts
- **Story File:** {STORY_FILE_PATH}
- **Test Design:** {TEST_DESIGN_PATH} (if available)
- **Tech Spec:** {TECH_SPEC_PATH} (if available)
- **Test Results:** {TEST_RESULTS_PATH}
- **NFR Assessment:** {NFR_FILE_PATH} (if available)
- **Test Files:** {TEST_DIR_PATH}
---
## Sign-Off
**Phase 1 - Traceability Assessment:**
- Overall Coverage: {OVERALL_PCT}%
- P0 Coverage: {P0_PCT}% {P0_STATUS}
- P1 Coverage: {P1_PCT}% {P1_STATUS}
- Critical Gaps: {CRITICAL_COUNT}
- High Priority Gaps: {HIGH_COUNT}
**Phase 2 - Gate Decision:**
- **Decision**: {PASS | CONCERNS | FAIL | WAIVED} {STATUS_ICON}
- **P0 Evaluation**: {✅ ALL PASS | ❌ ONE OR MORE FAILED}
- **P1 Evaluation**: {✅ ALL PASS | ⚠️ SOME CONCERNS | ❌ FAILED}
**Overall Status:** {STATUS} {STATUS_ICON}
**Next Steps:**
- If PASS ✅: Proceed to deployment
- If CONCERNS ⚠️: Deploy with monitoring, create remediation backlog
- If FAIL ❌: Block deployment, fix critical issues, re-run workflow
- If WAIVED 🔓: Deploy with business approval and aggressive monitoring
**Generated:** {DATE}
**Workflow:** testarch-trace v4.0 (Enhanced with Gate Decision)
---
<!-- Powered by BMAD-CORE™ -->


@@ -1,55 +0,0 @@
# Test Architect workflow: trace (enhanced with gate decision)
name: testarch-trace
description: "Generate requirements-to-tests traceability matrix, analyze coverage, and make quality gate decision (PASS/CONCERNS/FAIL/WAIVED)"
author: "BMad"

# Critical variables from config
config_source: "{project-root}/_bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated

# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/testarch/trace"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
template: "{installed_path}/trace-template.md"

# Variables and inputs
variables:
  # Directory paths
  test_dir: "{project-root}/tests" # Root test directory
  source_dir: "{project-root}/src" # Source code directory

  # Workflow behavior
  coverage_levels: "e2e,api,component,unit" # Which test levels to trace
  gate_type: "story" # story | epic | release | hotfix - determines gate scope
  decision_mode: "deterministic" # deterministic (rule-based) | manual (team decision)

  # Output configuration
  default_output_file: "{output_folder}/traceability-matrix.md"

# Required tools
required_tools:
  - read_file # Read story, test files, BMad artifacts
  - write_file # Create traceability matrix, gate YAML
  - list_files # Discover test files
  - search_repo # Find tests by test ID, describe blocks
  - glob # Find test files matching patterns

tags:
  - qa
  - traceability
  - test-architect
  - coverage
  - requirements
  - gate
  - decision
  - release

execution_hints:
  interactive: false # Minimize prompts
  autonomous: true # Proceed without user input unless blocked
  iterative: true