Test Appendices
Parent Document: TEST_PLAN.md | Version: 1.2 | Date: January 5, 2026
Appendix A: Record of Changes
| Version | Date | Author | Description |
|---|---|---|---|
| 0.1 | 2025-11-15 | QA Lead | Initial draft |
| 0.2 | 2025-11-20 | QA Lead | Added sections 1-6 |
| 0.3 | 2025-11-25 | QA Lead | Added sections 7-12 |
| 1.0 | 2025-12-02 | QA Lead | Completed all sections |
| 1.1 | 2025-12-20 | QA Lead | Added Dust Ranger RLS policies testing |
| 1.2 | 2026-01-05 | QA Lead | Added Device Timeline Data Gaps test cases (TC-REPORT-021 to TC-REPORT-026) |
Appendix B: Glossary
| Term | Definition |
|---|---|
| Acceptance Criteria | Conditions for feature completion |
| BOM | Bureau of Meteorology (Australia) |
| CI/CD | Continuous Integration/Deployment |
| DustRanger | PM monitoring device |
| E2E | End-to-End testing |
| LCP | Largest Contentful Paint |
| PII | Personally Identifiable Information |
| PM2.5/PM10 | Particulate matter ≤2.5 µm / ≤10 µm in diameter |
| RLS | Row-Level Security |
| RTM | Requirements Traceability Matrix |
| SPA | Single Page Application |
| UAT | User Acceptance Testing |
| WCAG | Web Content Accessibility Guidelines |
Appendix C: Acronyms
| Acronym | Full Term |
|---|---|
| API | Application Programming Interface |
| CSV | Comma-Separated Values |
| JWT | JSON Web Token |
| OWASP | Open Web Application Security Project |
| PDF | Portable Document Format |
| QA | Quality Assurance |
| RTO | Recovery Time Objective |
| RPO | Recovery Point Objective |
| SQL | Structured Query Language |
| SSL/TLS | Secure Sockets Layer / Transport Layer Security |
| URL | Uniform Resource Locator |
| VRR | Validation Readiness Review |
| IRR | Implementation Readiness Review |
| ORR | Operational Readiness Review |
Appendix D: Test Case Template
# Test Case: TC-[MODULE]-[NUMBER]
## Metadata
| Field | Value |
|-------|-------|
| **Test Case ID** | TC-MODULE-XXX |
| **Test Case Name** | [Descriptive name] |
| **Priority** | P0/P1/P2/P3 |
| **Type** | Functional/Integration/E2E/Performance |
| **Requirement** | REQ-XXX |
## Objective
[Brief description of what is being tested]
## Preconditions
- [Prerequisite 1]
- [Prerequisite 2]
## Test Data
- [Input data requirements]
## Test Steps
| Step | Action | Expected Result |
|------|--------|-----------------|
| 1 | [Action] | [Expected] |
| 2 | [Action] | [Expected] |
## Expected Results
[Overall expected outcome]
## Execution (filled during testing)
- **Executed By:** [Name]
- **Date:** [YYYY-MM-DD]
- **Environment:** Dev/Staging/Production
- **Status:** ✅ Pass | ❌ Fail | ⏸️ Blocked
- **Defects:** [IDs if any]
- **Notes:** [Observations]
Appendix E: Defect Report Template
# Defect Report
## Summary
[Brief description - 50 chars max]
## Environment
- **Browser:** Chrome/Firefox/Safari/Edge [version]
- **OS:** macOS/Windows/iOS/Android [version]
- **Environment:** Dev/Staging/Production
- **User Role:** Standard/Admin
## Classification
- **Severity:** S0 (Blocker) | S1 (Critical) | S2 (Major) | S3 (Minor) | S4 (Trivial)
- **Priority:** P0 (Critical) | P1 (High) | P2 (Medium) | P3 (Low)
## Steps to Reproduce
1. [Step 1]
2. [Step 2]
3. [Step 3]
## Expected Behavior
[What should happen]
## Actual Behavior
[What actually happens]
## Test Data
- **File:** [filename if applicable]
- **Account:** [test account used]
- **Date Range:** [if applicable]
## Attachments
[Screenshots, console logs, network traces]
## Additional Context
- **Frequency:** Always/Sometimes/Rarely
- **First Observed:** [date or version]
- **Workaround:** [if available]
- **Related Issues:** #XXX
Appendix F: Test Summary Report Template
# Test Summary Report
**Project:** Dustac Environmental Monitoring Dashboard
**Release:** [Version]
**Date:** [YYYY-MM-DD]
**Author:** [QA Lead]
## Executive Summary
[Brief overview of testing activities and results]
## Test Scope
- Features tested: [list]
- Features not tested: [list with reasons]
## Test Execution Summary
| Metric | Value |
|--------|-------|
| Total Test Cases | XXX |
| Executed | XXX |
| Passed | XXX |
| Failed | XXX |
| Blocked | XXX |
| Pass Rate | XX% |
## Defect Summary
| Severity | Open | Closed | Total |
|----------|------|--------|-------|
| S0 - Blocker | X | X | X |
| S1 - Critical | X | X | X |
| S2 - Major | X | X | X |
| S3 - Minor | X | X | X |
| **Total** | X | X | X |
## Coverage Analysis
[Requirements coverage, code coverage metrics]
## Risks & Issues
[Outstanding risks, unresolved issues]
## Recommendations
[Go/No-Go recommendation with justification]
## Sign-Off
- QA Lead: _____________ Date: _______
- Dev Lead: _____________ Date: _______
- Product Owner: _____________ Date: _______
Appendix G: Risk Mitigation Strategies
| Risk ID | Risk | Mitigation |
|---|---|---|
| R-001 | Supabase Outage | Local instance for dev, monitor status |
| R-002 | Data Loss | Daily backups, soft-delete |
| R-003 | Performance Issues | Query optimization, caching |
| R-004 | CSV Format Issues | Robust validation, preview |
| R-005 | Security Breach | RLS, JWT, audits |
| R-006 | PDF Failure | Retry logic, timeouts |
| R-007 | External API Down | Graceful degradation, caching |
Appendix H: Testing Tools Summary
| Tool | Purpose | License | Cost |
|---|---|---|---|
| Vitest | Unit testing | MIT | Free |
| Playwright | E2E testing | Apache 2.0 | Free |
| React Testing Library | Component testing | MIT | Free |
| MSW | API mocking | MIT | Free |
| axe-core | Accessibility | MPL 2.0 | Free |
| Lighthouse CI | Performance | Apache 2.0 | Free |
| k6 | Load testing | AGPL 3.0 | Free |
| Snyk | Security scanning | Proprietary | Free tier |
| GitHub Actions | CI/CD | - | Free tier |
Total Cost: $0/month (free tiers)
Appendix I: Device Timeline Data Gaps - Detailed Test Cases
TC-REPORT-021: Render Continuous Data Period
| Field | Value |
|---|---|
| Test Case ID | TC-REPORT-021 |
| Priority | P1 (High) |
| Type | Functional |
| Requirement | REQ-045 |
Objective: Verify device timeline chart correctly displays continuous data collection periods as single segments.
Preconditions:
- User logged in with valid session
- Upload session with single device data available
- All consecutive measurements less than 1 hour apart (no gaps)
Test Data:
- Device: TEST-DEVICE-001
- Records: 100 measurements, 15-minute intervals
- Duration: ~25 hours continuous
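Fixed-interval fixture series like the one above can be produced with a small helper; this is a sketch assuming timestamps are plain epoch milliseconds (`makeSeries` is a hypothetical name, not part of the codebase):

```typescript
// Generate `count` timestamps starting at `startMs`, spaced `stepMin` minutes
// apart. Hypothetical helper for building gap-free fixture data.
function makeSeries(startMs: number, count: number, stepMin: number): number[] {
  return Array.from({ length: count }, (_, i) => startMs + i * stepMin * 60_000);
}

// 100 readings at 15-minute intervals: 99 intervals, a 24.75-hour span
// (matching the ~25-hour continuous period in the test data).
const series = makeSeries(Date.parse("2026-01-01T00:00:00Z"), 100, 15);
console.log(series.length);                        // 100
console.log((series[99] - series[0]) / 3_600_000); // 24.75
```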
Steps:
- Navigate to Reports page
- Select upload session with continuous data
- Generate report with Device Timeline chart enabled
- Verify timeline chart rendering
Expected Results:
- Single continuous bar displayed for device
- No gap indicators shown
- `dataGaps` count equals 0
- `segments` array contains exactly 1 segment
TC-REPORT-022: Detect Single Data Gap
| Field | Value |
|---|---|
| Test Case ID | TC-REPORT-022 |
| Priority | P1 (High) |
| Type | Functional |
| Requirement | REQ-045 |
Objective: Verify system correctly detects a gap when >1 hour exists between consecutive records.
Preconditions:
- User logged in with valid session
- Upload session with gap in data (>1 hour between records)
Test Data:
- Device: TEST-DEVICE-002
- Records: 50 measurements before gap, 50 after
- Gap duration: 2 hours (exceeds 1 hour threshold)
Steps:
- Navigate to Reports page
- Select upload session with gap in data
- Generate report with Device Timeline chart enabled
- Verify gap detection in timeline
Expected Results:
- Two separate segments displayed for device
- Visual gap indicator between segments
- `dataGaps` count equals 1
- `segments` array contains exactly 2 segments
- Gap location correctly corresponds to data discontinuity
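These expectations assume a single segmentation pass over sorted timestamps: any interval strictly greater than one hour closes the current segment and counts as one gap. A minimal sketch of that logic (the `Segment` shape and `segmentTimeline` name are illustrative, not the actual implementation):

```typescript
const GAP_THRESHOLD_MS = 60 * 60 * 1000; // 1 hour, per REQ-045

interface Segment { startMs: number; endMs: number; records: number; }

// Split sorted epoch-ms timestamps into continuous segments. An interval
// strictly greater than the threshold starts a new segment; by construction
// each segment boundary corresponds to exactly one gap.
function segmentTimeline(timestamps: number[]): { segments: Segment[]; dataGaps: number } {
  const segments: Segment[] = [];
  for (const t of timestamps) {
    const last = segments[segments.length - 1];
    if (last && t - last.endMs <= GAP_THRESHOLD_MS) {
      last.endMs = t;
      last.records += 1;
    } else {
      segments.push({ startMs: t, endMs: t, records: 1 });
    }
  }
  return { segments, dataGaps: Math.max(0, segments.length - 1) };
}
```

With the TC-REPORT-022 fixture (50 readings, a 2-hour hole, 50 more readings) this yields two segments and `dataGaps === 1`; the same pass covers TC-REPORT-021 and TC-REPORT-023 without special cases.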
TC-REPORT-023: Multiple Gaps Detection
| Field | Value |
|---|---|
| Test Case ID | TC-REPORT-023 |
| Priority | P1 (High) |
| Type | Functional |
| Requirement | REQ-045 |
Objective: Verify system correctly detects and displays multiple data gaps within a single device's timeline.
Preconditions:
- User logged in
- Upload session with multiple gaps in data
Test Data:
- Device: TEST-DEVICE-003
- 3 continuous periods separated by 2 gaps
- Gap 1: 3 hours, Gap 2: 5 hours
Steps:
- Navigate to Reports page
- Select upload session with multiple gaps
- Generate report
- Inspect timeline chart rendering
Expected Results:
- Three separate segments displayed
- Two visual gap indicators
- `dataGaps` count equals 2
- `segments` array contains exactly 3 segments
- Segment record counts sum to total record count
TC-REPORT-024: Multi-Device Display
| Field | Value |
|---|---|
| Test Case ID | TC-REPORT-024 |
| Priority | P1 (High) |
| Type | Functional |
| Requirement | REQ-045 |
Objective: Verify timeline chart correctly displays multiple devices with varying gap patterns.
Preconditions:
- User logged in
- Upload session with 3+ devices
Test Data:
- Device A: Continuous data (0 gaps)
- Device B: 1 gap (2 segments)
- Device C: 3 gaps (4 segments)
Steps:
- Navigate to Reports page
- Select multi-device upload session
- Generate report
- Verify all devices displayed in timeline
Expected Results:
- All devices displayed on separate rows
- Devices sorted by `firstSeen` timestamp
- Each device shows correct segment count
- Gap counts accurate per device
- Tooltip shows device summary (total records, duration, gaps)
TC-REPORT-025: Gap Threshold Boundary
| Field | Value |
|---|---|
| Test Case ID | TC-REPORT-025 |
| Priority | P2 (Medium) |
| Type | Boundary |
| Requirement | REQ-045 |
Objective: Verify 1-hour gap threshold is correctly applied.
Preconditions:
- User logged in
- Test data prepared with specific gap intervals
Test Data:
- Scenario A: Gap of exactly 59 minutes (should NOT create gap)
- Scenario B: Gap of exactly 61 minutes (SHOULD create gap)
- Scenario C: Gap of exactly 60 minutes (boundary - should NOT create gap)
Steps:
- Upload test data for each scenario
- Generate reports for each
- Verify gap detection behavior
Expected Results:
- Scenario A: `dataGaps` = 0 (under threshold)
- Scenario B: `dataGaps` = 1 (exceeds threshold)
- Scenario C: `dataGaps` = 0 (at threshold, not exceeded)
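The boundary semantics these scenarios pin down ("strictly greater than one hour") reduce to a single predicate that a unit test can assert directly; a sketch with a hypothetical `isGap` helper:

```typescript
const HOUR_MS = 3_600_000;

// A gap exists only when the interval STRICTLY exceeds one hour: Scenario C
// requires that exactly 60 minutes does NOT count. Hypothetical predicate,
// shown to make the > vs >= choice explicit.
const isGap = (deltaMs: number): boolean => deltaMs > HOUR_MS;

console.log(isGap(59 * 60_000)); // false — Scenario A, under threshold
console.log(isGap(61 * 60_000)); // true  — Scenario B, exceeds threshold
console.log(isGap(60 * 60_000)); // false — Scenario C, at threshold
```

An implementation using `>=` instead of `>` would pass Scenarios A and B but fail C, which is why the 60-minute case is worth its own assertion.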
TC-REPORT-026: ECharts Rendering Performance
| Field | Value |
|---|---|
| Test Case ID | TC-REPORT-026 |
| Priority | P2 (Medium) |
| Type | Performance |
| Requirement | REQ-045 |
Objective: Verify timeline chart renders efficiently with large datasets.
Preconditions:
- User logged in
- Large upload session available
Test Data:
- 10 devices
- 1000 records per device
- Mixed gap patterns (0-5 gaps per device)
Steps:
- Navigate to Reports page
- Select large dataset upload
- Generate report with timeline chart
- Measure chart rendering time
Expected Results:
- Chart renders within 3 seconds
- No browser freezing or lag
- All segments displayed correctly
- Scrolling/zooming responsive
- Memory usage remains stable
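The 3-second budget can be measured objectively with a small timing wrapper around the chart's render call; a sketch using the standard `performance.now()` API, where `renderFn` is a stand-in for the real chart render (e.g. the ECharts option-setting call):

```typescript
// Measure wall-clock duration of a render callback against a budget.
function timeRender(renderFn: () => void, budgetMs: number): { elapsedMs: number; withinBudget: boolean } {
  const start = performance.now();
  renderFn();
  const elapsedMs = performance.now() - start;
  return { elapsedMs, withinBudget: elapsedMs <= budgetMs };
}

// Dummy workload standing in for rendering 10 devices × 1000 records:
const result = timeRender(() => {
  let acc = 0;
  for (let i = 0; i < 1_000_000; i++) acc += i;
}, 3000);
console.log(result.withinBudget);
```

This captures synchronous render cost only; animation frames and layout work after the call would need browser-level tooling (e.g. the Playwright tracing already in the toolchain) to observe.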
Appendix J: Document Revision History
| Version | Date | Sections Changed | Reason |
|---|---|---|---|
| 1.0 | 2025-12-02 | All | Initial test plan |
| 1.1 | 2025-12-20 | Section 6 | Dust Ranger RLS testing |
| 1.2 | 2026-01-05 | Sections 3,4,6, Appendix I | Device Timeline Data Gaps |
Review Schedule:
- Quarterly reviews
- Phase reviews at start of each major phase
- Ad-hoc updates for significant changes
Next Scheduled Review: 2026-04-01
This document is confidential and proprietary to Dustac.