System Design Document (SDD)
Dustac Environmental Monitoring Dashboard
Version: 1.0 Date: December 2, 2025 Status: Draft
Document Control
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0 | 2025-12-02 | Dustac Development Team | Initial draft |
Table of Contents
- Introduction
- General Overview and Design Guidelines/Approach
- Design Considerations
- System Architecture and Architecture Design
- System Design
- Operational Scenarios
- Detailed Design
- System Integrity Controls
- External Interfaces
- Appendices
List of Figures
| Figure # | Title | Page |
|---|---|---|
| Figure 4-1 | High-Level System Architecture Diagram | TBD |
| Figure 4-2 | Logical View Diagram | TBD |
| Figure 4-3 | Hardware Architecture Diagram | TBD |
| Figure 4-4 | Software Architecture Diagram | TBD |
| Figure 4-5 | Security Architecture Diagram | TBD |
| Figure 4-6 | System Architecture Diagram | TBD |
| Figure 5-1 | Entity-Relationship Diagram | TBD |
| Figure 5-2 | User Interface Navigation Flow | TBD |
| Figure 6-1 | User Registration Flow Diagram | TBD |
| Figure 6-2 | Data Ingestion Flow Diagram | TBD |
| Figure 6-3 | Report Generation Flow Diagram | TBD |
| Figure 7-1 | Component Interaction Diagram | TBD |
| Figure 7-2 | Sequence Diagram - Authentication | TBD |
| Figure 9-1 | External Interface Architecture | TBD |
List of Tables
| Table # | Title | Page |
|---|---|---|
| Table 2-1 | Risk Assessment Matrix | TBD |
| Table 4-1 | Hardware Specifications | TBD |
| Table 4-2 | Software Components | TBD |
| Table 5-1 | Database Tables Overview | TBD |
| Table 5-2 | Data Dictionary | TBD |
| Table 5-3 | Input Data Formats | TBD |
| Table 5-4 | Output Data Formats | TBD |
| Table 9-1 | External Interface Inventory | TBD |
| Table A-1 | Record of Changes | TBD |
| Table B-1 | Acronyms | TBD |
| Table C-1 | Glossary | TBD |
| Table D-1 | Referenced Documents | TBD |
| Table E-1 | Approvals | TBD |
1. Introduction
1.1 Purpose of the SDD
This System Design Document (SDD) provides a comprehensive technical specification for the Dustac Environmental Monitoring Dashboard system. The document serves multiple critical purposes:
Technical Blueprint: Provides detailed architecture and design specifications to guide the development team in implementing system components, ensuring consistency and adherence to established patterns.
Stakeholder Communication: Enables clear communication between technical and non-technical stakeholders by documenting system capabilities, constraints, and design decisions in a structured format.
Quality Assurance Foundation: Establishes the baseline for testing strategies, validation procedures, and acceptance criteria, ensuring the delivered system meets specified requirements.
Maintenance and Evolution: Serves as authoritative reference documentation for system maintenance, troubleshooting, and future enhancements, reducing knowledge transfer friction.
Risk Mitigation: Documents assumptions, constraints, and risk mitigation strategies to proactively address potential issues during development and deployment.
This document bridges the gap between high-level business requirements and low-level implementation details, providing sufficient information for developers to build the system while maintaining alignment with business objectives.
1.2 Project Background
The Dustac Environmental Monitoring Dashboard addresses critical needs in the mining industry for real-time environmental compliance monitoring and data management. Mining operations must continuously monitor airborne particulate matter (dust) to comply with environmental regulations, protect worker health, and maintain community relations.
Business Context:
- Mining sites deploy DustRanger monitoring devices that collect environmental measurements including PM1.0, PM2.5, PM4.0, and PM10 particulate concentrations, along with temperature, humidity, and location data
- Regulatory requirements mandate accurate record-keeping, reporting, and trend analysis of environmental metrics
- Operations teams need accessible, actionable insights from collected data to make informed decisions about dust suppression activities
Historical Evolution: The system has evolved through five major development phases:
- Phase 1 (Q1 2025): Foundation - User authentication, database schema, and automated data scraping capabilities
- Phase 2 (Q1 2025): Core features - PDF report generation, dashboard analytics, and data visualization
- Phase 3 (Q2 2025): Report management - Calendar views, bulk operations, and report templates
- Phase 4 (Q2 2025): Data management - Multi-site support, device tracking, and advanced filtering
- Phase 5+ (Q3-Q4 2025): Extended features - Real-time dust level monitoring, weekly field reports, flow meter tracking, climate integration, and AI-powered chart descriptions
Current State: The system is deployed in production, serving multiple mining sites with daily active users managing environmental compliance data. The architecture leverages modern cloud technologies (Supabase, Cloudflare Pages) to provide scalable, secure, and performant services.
1.3 Intended Audience
This document is intended for the following audiences, each with specific interests:
Primary Audiences:
- Software Developers: Detailed technical specifications for implementation, including database schemas, API designs, component architecture, and coding standards
- System Architects: High-level architecture patterns, technology stack decisions, integration approaches, and scalability considerations
- DevOps Engineers: Deployment architecture, infrastructure requirements, CI/CD pipelines, monitoring strategies, and operational procedures
Secondary Audiences:
- Project Managers: Project scope, constraints, risks, timelines, and resource requirements for planning and tracking
- Quality Assurance Engineers: Testing strategies, validation scenarios, acceptance criteria, and quality metrics
- Security Officers: Security architecture, authentication mechanisms, data protection strategies, and compliance considerations
- Business Stakeholders: System capabilities, business value delivery, operational scenarios, and user experience design
Supporting Audiences:
- Technical Support Teams: System operation, troubleshooting procedures, and maintenance guidelines
- External Auditors: Compliance verification, security controls, and audit trail mechanisms
- Future Development Teams: System understanding for maintenance and enhancement activities
Each audience should focus on sections most relevant to their role while maintaining awareness of the complete system context.
1.4 Document Scope
Inclusions:
This document comprehensively covers:
- System Architecture: Complete hardware, software, information, and security architecture specifications
- Database Design: Physical data models, table schemas, indexing strategies, and Row-Level Security (RLS) policies
- Feature Modules: Detailed design for nine feature modules including data ingestion, reports, dust levels, weekly reports, flow meter, climate, report templates, mine sites, and dashboard
- User Interface Design: UI framework, component structure, navigation patterns, and accessibility compliance
- API Design: REST endpoints, RPC functions, Edge Functions, and external integrations
- Security Design: Authentication flows, authorization models, encryption standards, and audit logging
- Performance Design: Optimization strategies, caching mechanisms, scalability patterns, and performance targets
- Operational Scenarios: Step-by-step user workflows for key system functions
- Deployment Architecture: Development, staging, and production environments with CI/CD pipelines
Exclusions:
This document explicitly does not cover:
- Project Management: Project plans, sprint schedules, resource allocation, or budget details (covered in separate project management documentation)
- Business Requirements: Detailed business requirements specification (referenced from Requirements Document)
- User Training: End-user training materials, tutorials, or help documentation
- Marketing Materials: Product positioning, sales collateral, or customer-facing materials
- Third-Party System Internal Design: Internal implementation details of Supabase, Cloudflare, or external APIs (only integration interfaces documented)
- Future Enhancements: Speculative features beyond Phase 5+ scope (documented separately in product roadmap)
- Source Code: Actual implementation code (maintained in version control repository)
Document Boundaries:
This SDD focuses on the "how" of system implementation rather than the "why" of business decisions. Business justifications and requirement specifications are referenced from the Requirements Document and SYSTEM_DESIGN.md. Implementation-level details such as specific code patterns and algorithms are provided at an appropriate level of abstraction for technical understanding without duplicating source code.
2. General Overview and Design Guidelines/Approach
2.1 General Overview
The Dustac Environmental Monitoring Dashboard is a cloud-native web application that enables mining operations to collect, analyze, visualize, and report on environmental dust monitoring data. The system provides a comprehensive platform for managing regulatory compliance while delivering actionable insights for operational decision-making.
System Purpose:
The system serves three primary functions:
Data Ingestion and Management: Automatically collect data from DustRanger monitoring devices via Edge Function scrapers, validate and store measurements in a centralized database, and maintain data integrity through comprehensive validation rules.
Analysis and Visualization: Transform raw measurement data into meaningful insights through interactive dashboards, statistical summaries, trend analysis, and comparative visualizations across multiple sites and time periods.
Reporting and Compliance: Generate professional PDF reports for regulatory submission, maintain audit trails for compliance verification, and provide structured weekly field reports for operational documentation.
Key System Capabilities:
- Multi-Site Management: Support for multiple mining sites with independent data streams and reporting
- Real-Time Monitoring: Live dust level tracking with automated data collection from external scrapers
- Flexible Reporting: Customizable report templates with AI-generated chart descriptions
- Water Management: Flow meter tracking for dust suppression water usage monitoring
- Climate Integration: Weather data correlation for environmental context
- Role-Based Access: Secure, user-specific data access with administrative oversight capabilities
- Audit Compliance: Comprehensive activity logging for regulatory audit trails
High-Level Context Diagram:
┌─────────────────────────────────────────────────────────────────────────┐
│ External Entities │
└─────────────────────────────────────────────────────────────────────────┘
│
┌──────────────┼──────────────┐
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌──────────┐ ┌──────────────┐
│ End Users │ │ DustRanger│ │ External │
│ (Browser) │ │ Devices │ │ Data APIs │
└──────┬──────┘ └────┬─────┘ └──────┬───────┘
│ │ │
│ │ │
▼ ▼ ▼
┌─────────────────────────────────────────────────────────────────────────┐
│ Dustac Dashboard System │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ Frontend (React 19 SPA) │ │
│ │ • Dashboard • Reports • Weekly Reports • Dust Levels │ │
│ └────────────────────┬────────────────────────────────────────────┘ │
│ │ │
│ │ Supabase Client SDK │
│ │ │
│ ┌────────────────────┴────────────────────────────────────────────┐ │
│ │ Supabase Backend-as-a-Service │ │
│ │ ┌──────────────┐ ┌─────────────┐ ┌─────────────────────┐ │ │
│ │ │ PostgreSQL │ │ Auth │ │ Storage (S3-like) │ │ │
│ │ │ Database │ │ (JWT) │ │ • CSV Files │ │ │
│ │ │ • RLS │ │ • Email │ │ • PDF Reports │ │ │
│ │ └──────────────┘ └─────────────┘ └─────────────────────┘ │ │
│ │ ┌──────────────────────────────────────────────────────────┐ │ │
│ │ │ Edge Functions (Deno Runtime) │ │ │
│ │ │ • Chart Description AI • Scraper Triggers │ │ │
│ │ └──────────────────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
│
┌──────────────┼──────────────┐
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌──────────┐ ┌──────────────┐
│ Dustac │ │ Google │ │ BOM Weather │
│ Scraper │ │ Gemini │ │ API │
│ API │ │ AI │ │ │
└─────────────┘  └──────────┘  └──────────────┘

System Boundaries:
- Internal: User interface, business logic, data storage, report generation, authentication, and authorization
- External: DustRanger device data (CSV), external scraper APIs (Dustac, Flow Meter), AI services (Google Gemini), weather data (Bureau of Meteorology)
2.2 Assumptions/Constraints/Risks
2.2.1 Assumptions
The following assumptions underpin the system design:
Technical Assumptions:
- Browser Support: Users access the system via modern web browsers (Chrome 90+, Firefox 88+, Safari 14+, Edge 90+) with JavaScript enabled
- Network Connectivity: Users have stable internet connectivity with minimum 1 Mbps download speed for dashboard functionality
- CSV Data Format: DustRanger devices produce CSV files with consistent column structure as defined in system specifications
- Supabase Availability: Supabase platform maintains 99.9% uptime SLA with automatic failover capabilities
- Cloudflare CDN: Cloudflare Pages provides global CDN distribution with acceptable latency (<200ms) for target user locations
Operational Assumptions:
- User Training: Users receive basic training on system navigation and data management before production use
- Data Volume: Automated scrapers collect data at regular intervals; system handles up to 500MB of daily data per site
- Concurrent Users: System supports up to 50 concurrent users during peak operational hours
- Report Generation: PDF generation completes within 60 seconds for standard reports (up to 30 pages)
- Data Retention: Users require access to historical data for minimum 12 months for compliance purposes
Business Assumptions:
- Single Tenant Model: Each organization operates independently with isolated data; no cross-organization data sharing required
- English Language: Primary user interface and reports in English; internationalization not required in Phase 5
- Email Verification: Users have access to corporate email for account verification and notifications
- Regulatory Stability: Environmental reporting requirements remain stable during system lifecycle
2.2.2 Constraints
The following constraints influence system design decisions:
Technical Constraints:
Supabase Limitations:
- Free tier: 500MB database storage, 1GB file storage
- Pro tier: 8GB database storage, 100GB file storage
- Row-Level Security overhead impacts query performance for complex policies
- PostgreSQL connection pool limited to 60 concurrent connections (Pro tier)
Browser Constraints:
- LocalStorage limited to 10MB for session data
- Memory constraints for client-side PDF generation (html2canvas/jsPDF)
External API Constraints:
- Google Gemini API: 60 requests/minute rate limit
- Dustac Scraper API: 10 requests/minute per user
- BOM Weather API: Public data access, subject to throttling under heavy load
Business Constraints:
Budget Limitations:
- Development budget constrains third-party service selections to cost-effective solutions
- Hosting costs must remain under $500/month for production deployment
Timeline Constraints:
- Phase 5+ features delivered incrementally over 6-month period
- No breaking changes to existing data structures during active development
Regulatory Constraints:
Data Privacy:
- Must comply with general data protection principles
- User data must remain within specified geographical regions (if required by local regulations)
- Audit trails required for all data modifications
Environmental Reporting:
- PDF reports must meet formatting standards for regulatory submission
- Data retention policies must align with environmental compliance requirements (typically 5-7 years)
Resource Constraints:
- Development Team: Core team of 2-3 developers; limited parallel development capacity
- Testing Infrastructure: Manual testing supplemented by automated unit/integration tests; limited E2E test coverage
- Documentation: Documentation maintained alongside development; dedicated technical writer not available
2.2.3 Risks
| Risk ID | Description | Probability | Impact | Mitigation Strategy |
|---|---|---|---|---|
| R-001 | Supabase Platform Outage: Extended Supabase service disruption affecting system availability | Low (5%) | High | • Implement comprehensive error handling with user-friendly messages • Monitor Supabase status page • Maintain backup CSV exports for critical data • Document manual data access procedures • Consider multi-region deployment for critical customers |
| R-002 | Data Loss: Database corruption or accidental deletion leading to data loss | Low (5%) | Critical | • Leverage Supabase automatic backups (daily snapshots) • Implement soft-delete patterns for user data • Maintain audit logs for data recovery • Regular backup verification testing • Point-in-time recovery capability (Supabase Pro) |
| R-003 | Performance Degradation: System slowdown under high concurrent user load | Medium (30%) | Medium | • Implement database query optimization and indexing • Use Supabase connection pooling • Add client-side caching for static data • Monitor query performance with slow query logs • Implement pagination for large datasets • Consider read replicas for scaling |
| R-004 | CSV Format Inconsistency: DustRanger devices output inconsistent CSV formats causing import failures | Medium (25%) | Medium | • Implement robust CSV validation with detailed error messages • Provide CSV preview before import • Maintain format version detection • Create CSV format documentation for users • Build flexible parser with configurable column mapping |
| R-005 | Security Breach: Unauthorized access to sensitive environmental data | Low (10%) | High | • Enforce Supabase Row-Level Security policies • Implement JWT token expiration and refresh • Regular security audits of RLS policies • Monitor suspicious activity via activity logs • Apply HTTPS everywhere with HSTS headers • Regular dependency vulnerability scanning |
| R-006 | PDF Generation Failure: Report generation timeouts or failures under complex reports | Medium (20%) | Medium | • Implement retry logic with exponential backoff • Optimize chart rendering performance • Add timeout handling with partial report generation • Provide alternative export formats (CSV, Excel) • Queue system for background report generation |
| R-007 | External API Dependency: Dustac scraper or weather API unavailability | Medium (25%) | Low | • Implement graceful degradation when external APIs unavailable • Cache recently fetched data • Provide manual data entry alternatives • Display clear status indicators for external service health • Document known service windows |
| R-008 | Browser Compatibility: Issues with specific browser versions or configurations | Low (15%) | Low | • Test on major browser versions during development • Provide browser compatibility documentation • Implement feature detection and polyfills • Display browser compatibility warnings • Monitor user-agent analytics for problem patterns |
| R-009 | Scalability Limitations: System unable to scale beyond 100+ concurrent users | Low (10%) | Medium | • Architecture designed for horizontal scaling via Supabase • Monitor usage metrics and performance • Plan for database scaling (vertical or read replicas) • Implement rate limiting to prevent abuse • Load testing before major user onboarding |
| R-010 | AI Service Costs: Google Gemini API costs exceed budget due to high usage | Medium (20%) | Low | • Implement AI description caching • Rate limit AI generation requests per user • Provide option to disable AI descriptions • Monitor API usage dashboards • Set billing alerts and quotas |
2.3 Alignment with Enterprise Architecture Standards
The Dustac Environmental Monitoring Dashboard aligns with modern enterprise architecture principles and industry best practices:
1. Cloud-Native Architecture:
- Twelve-Factor App Principles: System adheres to 12-factor methodology including configuration management via environment variables, stateless processes, disposable infrastructure, and development/production parity
- Microservices Pattern: Backend services decomposed into focused, independently deployable components (authentication, storage, edge functions)
- API-First Design: All data access through well-defined APIs (Supabase REST/GraphQL, Edge Functions), enabling future mobile apps or third-party integrations
2. Security Best Practices:
- Zero Trust Security Model: No implicit trust; every request authenticated and authorized via JWT tokens and RLS policies
- Defense in Depth: Multiple security layers including transport encryption (TLS 1.3), database-level RLS, application-level authorization, and audit logging
- OWASP Top 10 Compliance: Design addresses common vulnerabilities including injection attacks (parameterized queries), broken authentication (JWT with expiration), XSS (React's built-in escaping), and security misconfiguration (least privilege access)
- Data Encryption: Data encrypted at rest (AES-256) and in transit (TLS 1.3); sensitive data in database encrypted at column level where required
3. Data Architecture Standards:
- Single Source of Truth: PostgreSQL database serves as authoritative data store; no data duplication across services
- Data Normalization: Database schema follows third normal form (3NF) to minimize redundancy while maintaining query performance
- Audit Trail: Comprehensive activity logging for compliance and forensic analysis
- Data Lifecycle Management: Defined retention policies, archival strategies, and GDPR-compliant data deletion procedures
4. Integration Patterns:
- RESTful APIs: External integrations follow REST principles with proper HTTP verbs, status codes, and versioning
- Asynchronous Processing: Long-running operations (report generation, data imports) handled asynchronously with status polling
- Circuit Breaker Pattern: External API calls wrapped with timeout and retry logic to prevent cascading failures (see the sketch after this list)
- Event-Driven Updates: Real-time subscriptions (Supabase Realtime) for collaborative features and live data updates
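To make the circuit breaker and timeout handling concrete, here is a minimal TypeScript sketch; the failure threshold, reset window, class shape, and example endpoint are illustrative assumptions rather than the system's actual implementation:

```typescript
// A simplified circuit breaker for external API calls.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly maxFailures = 3,
    private readonly resetMs = 30_000,
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    // While open, fail fast until the cool-down window elapses
    if (this.failures >= this.maxFailures) {
      if (Date.now() - this.openedAt < this.resetMs) {
        throw new Error('Circuit open: upstream service unavailable');
      }
      this.failures = 0; // half-open: allow one trial request through
    }
    try {
      const result = await fn();
      this.failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      this.failures += 1;
      this.openedAt = Date.now();
      throw err;
    }
  }
}

// Usage: wrap an external weather API call with a request timeout
const weatherBreaker = new CircuitBreaker();
const forecast = await weatherBreaker.call(() =>
  fetch('https://api.example.com/weather', { signal: AbortSignal.timeout(5_000) })
    .then((res) => res.json()),
);
```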
5. Development Standards:
- Infrastructure as Code: Infrastructure configuration managed in version control (Supabase migrations, Cloudflare configuration)
- Continuous Integration: Automated testing (unit, integration, E2E) in CI pipeline before deployment
- Version Control: Git-based version control with conventional commits and semantic versioning
- Code Quality: ESLint rules, Prettier formatting, TypeScript strict mode, and code review requirements
6. Accessibility and Usability:
- WCAG 2.1 AA Compliance: User interface designed for accessibility with keyboard navigation, screen reader support, and sufficient color contrast
- Responsive Design: Mobile-first design approach with responsive layouts for desktop, tablet, and mobile viewports
- Progressive Enhancement: Core functionality works without JavaScript; enhanced features layer progressively
- Internationalization Ready: Architecture supports future multi-language expansion (i18next integration)
7. Performance Standards:
- Core Web Vitals: Target metrics include LCP <2.5s, FID <100ms, CLS <0.1
- API Response Times: 95th percentile response time <500ms for read operations, <2s for write operations
- Scalability: Architecture supports horizontal scaling to handle 10x current user load without redesign
- Caching Strategy: Multi-level caching (browser, CDN, database) to minimize redundant data transfers
8. Compliance and Governance:
- Environmental Compliance: System supports regulatory reporting requirements for mining environmental monitoring
- Audit Readiness: Complete audit trails with tamper-evident logging for compliance verification
- Data Sovereignty: Deployment configuration supports data residency requirements for different jurisdictions
- Disaster Recovery: RPO <24 hours, RTO <4 hours via automated backups and recovery procedures
Industry Framework Alignment:
- NIST Cybersecurity Framework: Security controls mapped to NIST CSF categories (Identify, Protect, Detect, Respond, Recover)
- ISO 27001 Principles: Information security management practices aligned with ISO 27001 standards
- SOC 2 Type II: Supabase infrastructure maintains SOC 2 Type II compliance for service provider assurance
This alignment ensures the system is built on proven patterns, maintainable by industry-standard practices, and positioned for long-term sustainability and growth.
3. Design Considerations
3.1 Goals and Guidelines
The system design is governed by a set of prioritized goals and guidelines that ensure consistency, maintainability, and alignment with business objectives.
Design Priority Hierarchy:
- Security First: Data protection and access control take precedence over convenience features
- Reliability: System availability and data integrity prioritized over new feature velocity
- User Experience: Intuitive, accessible interfaces preferred over feature density
- Performance: Acceptable response times maintained even under load
- Maintainability: Code clarity and documentation valued for long-term sustainability
- Scalability: Architecture supports growth without fundamental redesign
Coding Standards and Conventions:
TypeScript Standards:
- Strict Mode Enabled: Full TypeScript strict mode enforced (`strict: true` in tsconfig.json)
- Explicit Types: Function parameters, return types, and complex objects explicitly typed
- Type Imports: Use `import type` for type-only imports (enforced by ESLint)
- No Any: The `any` type is prohibited except for documented exceptional cases
- Null Safety: `noUncheckedIndexedAccess` enabled; all array/object access checked for undefined (illustrated below)
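As a quick illustration of the null-safety rule, a minimal sketch assuming `noUncheckedIndexedAccess` is enabled in tsconfig.json:

```typescript
// With noUncheckedIndexedAccess, indexed access yields T | undefined,
// so the compiler forces an explicit check before use.
function firstSite(sites: string[]): string {
  const first = sites[0];    // typed as string | undefined, not string
  return first ?? 'unknown'; // handle the undefined case explicitly
}
```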
Import Organization (ESLint enforced):
```typescript
// 1. Type imports
import type { Database } from '@/lib/supabaseTypes';

// 2. External dependencies
import { useState, useEffect } from 'react';
import { createClient } from '@supabase/supabase-js';

// 3. Internal imports (@ alias)
import { cn } from '@/lib/utils';
import { Button } from '@/components/ui/button';
```
React Component Standards:
- Functional Components: Use function declarations, not arrow functions for components
- Named Exports: No default exports for components (improves refactoring)
- Props Interface: Define explicit interface for all component props
- JSX Prop Ordering (ESLint enforced): reserved props → shorthand → callbacks → multiline
- Hooks Rules: Follow hooks rules (only at top level, only in React functions)
File Naming Conventions:
- Components: PascalCase (e.g., `ReportTemplate.tsx`, `DataUploadForm.tsx`)
- Hooks: camelCase with `use` prefix (e.g., `useReportGenerator.tsx`, `useAuth.tsx`)
- Services: camelCase (e.g., `csvParser.ts`, `uploadService.ts`)
- Types: PascalCase for interfaces/types, camelCase for files (e.g., `types.ts` exports `UploadStatus`)
- Tests: Same as source file with `.test.ts` or `.spec.ts` suffix
Git Commit Conventions (Commitlint enforced):
```
<type>(<scope>): <subject>

feat(upload): add support for multi-file CSV upload
fix(reports): correct date range filtering in PDF generation
chore(deps): update TanStack Router to v1.134.9
docs(phase4): add report management implementation notes
```
- Types: feat, fix, chore, docs, test, refactor, style, perf
- Scopes: upload, reports, dashboard, auth, dust-levels, flow-meter, etc.
- Subject: Imperative mood, lowercase, no period, max 72 characters
UI/UX Guidelines:
Design System:
- Component Library: shadcn/ui (Radix UI primitives + Tailwind CSS)
- Utility-First CSS: Tailwind CSS for styling; avoid custom CSS unless necessary
- Class Name Merging: Use the `cn()` utility for conditional class names
- Spacing Scale: Tailwind's default spacing scale (4px increments)
- Color System: Consistent color palette defined in Tailwind config
Accessibility Requirements:
- Keyboard Navigation: All interactive elements accessible via keyboard
- Focus Indicators: Visible focus states for all interactive elements
- ARIA Labels: Descriptive labels for screen readers where visual labels insufficient
- Color Contrast: WCAG AA contrast ratios (4.5:1 for normal text, 3:1 for large text)
- Semantic HTML: Use appropriate HTML elements (buttons, links, headings)
Responsive Design:
- Mobile-First: Design for mobile viewports first, enhance for larger screens
- Breakpoints: Tailwind default breakpoints (sm: 640px, md: 768px, lg: 1024px, xl: 1280px)
- Touch Targets: Minimum 44x44px for touch interactive elements
- Layout Patterns: Flexbox and Grid for responsive layouts
User Feedback:
- Loading States: Show loading indicators for operations >300ms
- Error Messages: Clear, actionable error messages with recovery guidance
- Success Confirmations: Toast notifications for successful operations
- Progress Indicators: Progress bars/percentages for long-running operations
API Design Principles:
Supabase REST API Usage:
- Parameterized Queries: Always use parameterized queries to prevent SQL injection
- Selective Columns: Select only needed columns to minimize data transfer
- Pagination: Use `range()` for large result sets; default page size 50 items
- Error Handling: Wrap all Supabase calls in try-catch with user-friendly error messages (see the sketch below)
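A minimal sketch of these conventions, assuming a shared `supabase` client exported from a hypothetical `@/lib/supabase` module; the table and column names are illustrative, not the production schema:

```typescript
import { supabase } from '@/lib/supabase'; // assumed shared client module

const PAGE_SIZE = 50;

// Selective columns, parameterized filters, pagination, and try-catch
// with a user-friendly error, per the principles above.
async function fetchMeasurementsPage(site: string, page: number) {
  try {
    const { data, error } = await supabase
      .from('measurements')
      .select('time, pm2_5, pm10')        // only needed columns, no SELECT *
      .eq('site', site)                   // parameterized filter
      .order('time', { ascending: false })
      .range(page * PAGE_SIZE, (page + 1) * PAGE_SIZE - 1); // default page size 50
    if (error) throw error;
    return data;
  } catch (err) {
    console.error('Failed to load measurements:', err);
    throw new Error('Could not load measurements. Please try again.');
  }
}
```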
RPC Function Design:
- Single Responsibility: Each RPC function performs one clear operation
- Input Validation: Validate parameters within function; return descriptive errors
- Transaction Safety: Use transactions for multi-table operations
- Performance: Optimize queries; avoid N+1 patterns
Edge Function Standards (a sketch follows this list):
- Stateless: No server-side state between requests
- Timeout Handling: Operations complete within 60s timeout (Supabase limit)
- Error Response Format: Consistent JSON error format with status codes
- Logging: Structured logging for debugging and monitoring
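A sketch of an Edge Function following these standards, written for the Deno runtime; the endpoint's purpose, payload fields, and upstream URL are illustrative assumptions:

```typescript
// A stateless Edge Function: no state is kept between requests.
Deno.serve(async (req: Request): Promise<Response> => {
  try {
    const { chartId } = await req.json();
    if (typeof chartId !== 'string') {
      // Consistent JSON error format with an explicit status code
      return Response.json({ error: 'chartId is required' }, { status: 400 });
    }

    // Keep the upstream call well inside the 60s platform timeout
    const upstream = await fetch(`https://example.com/charts/${chartId}`, {
      signal: AbortSignal.timeout(30_000),
    });
    const payload = await upstream.json();

    // Structured, machine-parseable log line for monitoring
    console.log(JSON.stringify({ fn: 'chart-description', chartId, status: upstream.status }));

    return Response.json({ data: payload });
  } catch (err) {
    return Response.json({ error: String(err) }, { status: 500 });
  }
});
```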
3.2 Development Methods & Contingencies
Development Methodology:
The project follows an Agile iterative approach with continuous delivery:
Sprint Structure:
- Sprint Duration: 2-week sprints
- Planning: Sprint planning at start; story pointing with Fibonacci sequence
- Daily Standups: Asynchronous updates in project management tool
- Sprint Reviews: Demo completed features to stakeholders
- Retrospectives: Team reflection and process improvement
Development Workflow:
- Feature Branch: Create a feature branch from `main` (e.g., `feat/upload-validation`)
- Development: Implement the feature with tests; commit frequently with conventional commits
- Testing: Run unit tests, integration tests, and manual testing
- Code Review: Pull request reviewed by at least one team member
- CI Pipeline: Automated tests and linting in GitHub Actions
- Merge: Squash and merge to `main` after approval
- Deploy: Automatic deployment to staging; manual promotion to production
Quality Gates:
- All tests passing (unit, integration, E2E for critical paths)
- No new TypeScript errors
- ESLint warnings below threshold (500 max)
- Code review approval
- Security scan passes (dependency vulnerabilities)
Contingency Plans:
1. Third-Party Service Failure (Supabase/Cloudflare):
- Detection: Monitor service status pages and health check endpoints
- Response: Display user-friendly error messages with estimated recovery time
- Mitigation: Implement circuit breakers; graceful degradation where possible
- Recovery: Automatic retry with exponential backoff; manual escalation if the outage persists beyond 30 minutes
2. Design Ambiguity Discovered During Development:
- Detection: Developer identifies unclear requirement or conflicting specifications
- Response: Document ambiguity in issue tracker with specific questions
- Resolution: Schedule clarification session with product owner/stakeholders
- Timeline: Block development on ambiguous feature; pivot to other sprint items
- Documentation: Update requirements document with clarified decisions
3. Performance Issues During Development:
- Detection: Identify slow queries, high latency, or poor user experience during testing
- Analysis: Use browser DevTools, Supabase query analyzer, and performance profiling
- Response Options:
- Quick win: Add database indexes, optimize queries, add caching
- Architectural: Redesign data access patterns, introduce background jobs
- Scope: Reduce feature complexity or split into phases
- Decision: Evaluate effort vs. impact; escalate if affects core functionality
4. Breaking Changes in Dependencies:
- Prevention: Pin dependency versions; review changelogs before upgrading
- Detection: Automated dependency update PRs (Dependabot) with CI checks
- Response: Test thoroughly in development environment before merging
- Rollback: Revert to previous version if breaking changes discovered post-deployment
5. Database Schema Changes Required:
- Process: Create Supabase migration file; test on local database first
- Backward Compatibility: Prefer additive changes; avoid breaking changes to production data
- Deployment: Apply migration during low-traffic window; monitor for errors
- Rollback: Prepare rollback migration in advance; test rollback procedure
3.3 Architectural Strategies
Technology Selection Rationale:
Frontend Stack:
| Technology | Rationale | Trade-offs |
|---|---|---|
| React 19 | Industry-standard UI library; large ecosystem; excellent TypeScript support; concurrent features for performance | Learning curve for new developers; bundle size larger than lightweight alternatives |
| Vite | Fastest build tool; excellent DX with HMR; native ESM; smaller production bundles than Webpack | Less mature plugin ecosystem than Webpack |
| React Router v7 | Type-safe routing; excellent TypeScript integration; nested layouts; data loading patterns | More opinionated than alternatives like React Navigation |
| Tailwind CSS | Utility-first rapid development; consistent design system; excellent tree-shaking; responsive design utilities | Large class names in JSX; less intuitive for CSS experts |
| shadcn/ui | Accessible components (Radix UI); full customization; copy-paste approach (not dependency); Tailwind integration | Requires manual updates for component improvements |
| Zustand | Lightweight state management; minimal boilerplate; TypeScript support; no Provider wrapping | Less structured than Redux; fewer dev tools |
| React Hook Form | Performant form validation; excellent UX; Zod integration; minimal re-renders | API different from traditional controlled forms |
Backend Stack:
| Technology | Rationale | Trade-offs |
|---|---|---|
| Supabase | Backend-as-a-Service; PostgreSQL; built-in auth; RLS security; real-time subscriptions; S3-compatible storage; cost-effective | Vendor lock-in; less control than self-hosted; RLS complexity |
| PostgreSQL | Production-grade RDBMS; JSON support; full-text search; excellent query optimizer; mature ecosystem | More complex than SQLite; requires understanding of SQL optimization |
| Edge Functions | Serverless compute; Deno runtime; TypeScript native; integrated with Supabase; pay-per-use pricing | Cold start latency; 60s timeout limit; limited CPU/memory |
| Supabase Storage | S3-compatible object storage; CDN integration; access control; image transformations | Less feature-rich than AWS S3; 5GB free tier limit |
Infrastructure Stack:
| Technology | Rationale | Trade-offs |
|---|---|---|
| Cloudflare Pages | Global CDN; automatic deployments from Git; preview deployments; free tier generous; edge caching | Limited server-side rendering options; configuration less flexible than Vercel |
| GitHub Actions | Integrated with repository; generous free tier; extensive marketplace; easy configuration | Slower than dedicated CI services; limited concurrent jobs on free tier |
Design Patterns Employed:
1. Feature-Driven Architecture:
- Pattern: Organize code by feature/domain rather than technical layer
- Implementation: `/src/features/` with self-contained modules (components, services, hooks, types)
- Benefits: Better code organization, easier to find related code, supports team scalability
- Trade-off: Some code duplication across features; shared utilities in `/src/lib/`
2. Service Layer Pattern:
- Pattern: Separate business logic from UI components
- Implementation: Service files handle data fetching, transformations, and business rules
- Benefits: Testable business logic, reusable across components, clearer separation of concerns
- Example: `uploadService.ts` handles CSV processing independently of React components
3. Repository Pattern (via Supabase):
- Pattern: Abstract data access layer behind service interfaces
- Implementation: Services encapsulate Supabase queries; components don't call Supabase directly
- Benefits: Easier to test (mock services), potential to swap data source, consistent error handling
- Example: `reportDataService.ts` provides `getReportData()` without exposing SQL details
4. Hooks Pattern:
- Pattern: Encapsulate stateful logic in reusable custom hooks
- Implementation: Custom hooks for data fetching, form management, authentication state
- Benefits: Reusable logic, easier testing, cleaner components
- Example: The `useUser()` hook provides authentication state to any component (sketched below)
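A minimal sketch of such a hook, assuming the shared `supabase` client module referenced elsewhere in this document:

```typescript
import { useEffect, useState } from 'react';
import type { User } from '@supabase/supabase-js';
import { supabase } from '@/lib/supabase'; // assumed shared client module

export function useUser(): { user: User | null; loading: boolean } {
  const [user, setUser] = useState<User | null>(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    // Resolve the current session once on mount
    supabase.auth.getUser().then(({ data }) => {
      setUser(data.user);
      setLoading(false);
    });

    // Keep state in sync with sign-in/sign-out events
    const { data: sub } = supabase.auth.onAuthStateChange((_event, session) => {
      setUser(session?.user ?? null);
    });
    return () => sub.subscription.unsubscribe();
  }, []);

  return { user, loading };
}
```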
5. Composition Pattern:
- Pattern: Build complex components from smaller, focused components
- Implementation: shadcn/ui components composed into feature-specific components
- Benefits: Reusable building blocks, easier testing, consistent UI
- Example:
DataUploadFormcomposesButton,Input,Label,Cardprimitives
6. Error Boundary Pattern:
- Pattern: Catch React errors and display fallback UI
- Implementation: Error boundaries wrap route components
- Benefits: Prevents white screen of death, graceful degradation, error logging opportunity
- Example: App-level error boundary catches uncaught errors and displays a friendly message (a sketch follows below)
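Error boundaries must be class components in React; a minimal sketch (the component name and logging destination are assumptions):

```tsx
import { Component, type ErrorInfo, type ReactNode } from 'react';

interface Props { fallback: ReactNode; children: ReactNode }
interface State { hasError: boolean }

export class RouteErrorBoundary extends Component<Props, State> {
  state: State = { hasError: false };

  static getDerivedStateFromError(): State {
    return { hasError: true }; // switch to the fallback UI on render errors
  }

  componentDidCatch(error: Error, info: ErrorInfo) {
    // Logging hook: forward to an error-tracking service in production
    console.error('Uncaught route error:', error, info.componentStack);
  }

  render() {
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}
```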
Trade-off Decisions and Justifications:
Decision 1: Client-Side PDF Generation vs. Server-Side:
- Options: (A) Server-side with Puppeteer, (B) Client-side with jsPDF/html2canvas
- Decision: Client-side generation (B)
- Rationale: Simpler architecture, no server infrastructure, leverages user's CPU, better for Supabase serverless
- Trade-off: Limited to client memory, slower on low-end devices, less control over rendering
- Mitigation: Optimize charts before rendering, chunk large reports, provide loading indicators
Decision 2: Row-Level Security vs. Application-Layer Authorization:
- Options: (A) RLS policies in PostgreSQL, (B) Authorization checks in application code
- Decision: RLS policies (A)
- Rationale: Defense in depth, prevents data leaks even if application logic buggy, Supabase best practice
- Trade-off: Complex RLS policies impact query performance, harder to debug, requires SQL expertise
- Mitigation: Optimize RLS policies, add indexes on user_id columns, document policies thoroughly
Decision 3: Monolithic SPA vs. Micro-frontends:
- Options: (A) Single React SPA, (B) Micro-frontends per feature module
- Decision: Monolithic SPA (A)
- Rationale: Simpler deployment, shared state easier, team size doesn't require splitting, consistent UX
- Trade-off: Larger bundle size, all features deployed together, potential merge conflicts
- Mitigation: Code splitting per route, lazy loading, modular architecture within monolith
Decision 4: Real-time Updates vs. Polling:
- Options: (A) Supabase Realtime subscriptions, (B) Polling with setInterval
- Decision: Polling for most features, Realtime for collaborative editing (B with selective A)
- Rationale: Simpler implementation, fewer open connections, polling sufficient for non-critical updates
- Trade-off: Less immediate updates, higher server load, potential stale data
- Mitigation: Refresh on user action, optimistic UI updates, use Realtime for weekly reports collaboration
Decision 5: TypeScript Strict Mode:
- Options: (A) TypeScript strict mode, (B) Permissive mode
- Decision: Strict mode (A)
- Rationale: Catch errors at compile time, better IDE support, maintainability, industry best practice
- Trade-off: More verbose code, steeper learning curve, migration effort for existing code
- Mitigation: Gradual migration, helpful error messages, team training on TypeScript patterns
3.4 Performance Engineering
Performance Requirements and Targets:
| Metric | Target | Measurement Method | Priority |
|---|---|---|---|
| Page Load Time (LCP) | < 2.5s (75th percentile) | Lighthouse, Core Web Vitals | High |
| First Input Delay (FID) | < 100ms | Core Web Vitals | High |
| Cumulative Layout Shift (CLS) | < 0.1 | Core Web Vitals | Medium |
| Time to Interactive (TTI) | < 3.5s | Lighthouse | Medium |
| API Response Time (Read) | < 500ms (95th percentile) | Supabase logs | High |
| API Response Time (Write) | < 2s (95th percentile) | Supabase logs | Medium |
| PDF Generation | < 60s for 30-page report | Application logs | High |
| CSV Upload Processing | < 30s for 10K rows | Application logs | High |
| Dashboard Query | < 1s for 90-day data | Supabase query analyzer | High |
| Concurrent Users | 50 users without degradation | Load testing (Playwright) | Medium |
Performance Design Strategies:
1. Frontend Performance:
Code Splitting and Lazy Loading:
```typescript
// Route-based code splitting
const DustLevels = lazy(() => import('@/features/dust-levels'));
const WeeklyReports = lazy(() => import('@/features/weekly-reports'));

// Component-level lazy loading for heavy components
const PDFViewer = lazy(() => import('@/components/PDFViewer'));
```
Benefits: Smaller initial bundle, faster page loads, features loaded on demand.
Trade-off: Slight delay when navigating to new routes (mitigated with Suspense, as sketched below).
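A sketch of the Suspense wrapper around lazy routes; the fallback element and route path are illustrative:

```tsx
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router';

const DustLevels = lazy(() => import('@/features/dust-levels'));

export function AppRoutes() {
  return (
    // The fallback renders while the route chunk downloads
    <Suspense fallback={<p>Loading…</p>}>
      <Routes>
        <Route path="/dust-levels" element={<DustLevels />} />
      </Routes>
    </Suspense>
  );
}
```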
Asset Optimization:
- Image Optimization: Use modern formats (WebP), responsive images with `srcset`, lazy loading below the fold
- Bundle Size: Tree-shake unused code; analyze the bundle with the Vite rollup visualizer
- Minification: Automatic minification in production build (Terser)
- Compression: Brotli compression via Cloudflare CDN
React Performance Patterns:
- Memoization: Use `useMemo` for expensive calculations and `useCallback` for callbacks passed to children
- Virtualization: Virtual scrolling for large lists (react-window) if needed
- Debouncing: Debounce search inputs and filter changes (usehooks-ts); see the sketch after this list
- Optimistic Updates: Update UI immediately, sync with server in background
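A sketch combining debouncing and memoization, assuming usehooks-ts's `useDebounceValue` API; the data shape and hook name are illustrative:

```typescript
import { useMemo } from 'react';
import { useDebounceValue } from 'usehooks-ts';

interface Measurement { site: string; pm10: number }

export function useFilteredMeasurements(rows: Measurement[]) {
  // Debounce keystrokes so filtering runs at most every 300ms
  const [query, setQuery] = useDebounceValue('', 300);

  // Recompute the filtered list only when inputs actually change
  const filtered = useMemo(
    () => rows.filter((r) => r.site.toLowerCase().includes(query.toLowerCase())),
    [rows, query],
  );

  return { filtered, setQuery };
}
```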
2. Database Performance:
Indexing Strategy:
```sql
-- Composite indexes for common query patterns
CREATE INDEX idx_measurements_site_time ON measurements(site, time);
CREATE INDEX idx_measurements_upload_time ON measurements(upload_id, time);

-- Partial indexes for filtered queries
CREATE INDEX idx_uploads_active ON uploads(user_id, status)
  WHERE status IN ('pending', 'importing');

-- GIN indexes for JSONB columns
CREATE INDEX idx_uploads_sites ON uploads USING GIN(sites);
```
Query Optimization:
- Selective Columns: Select only needed columns; avoid `SELECT *`
- Limit Results: Always paginate large result sets; default limit 50
- Avoid N+1: Use joins or batch queries instead of multiple round trips (see the sketch after this list)
- Analyze Queries: Use `EXPLAIN ANALYZE` to identify slow queries
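A sketch contrasting the N+1 anti-pattern with a single batched query, using the assumed shared client and illustrative column names:

```typescript
import { supabase } from '@/lib/supabase'; // assumed shared client module

// N+1 anti-pattern: one round trip per upload
// for (const id of uploadIds) await supabase.from('measurements').select().eq('upload_id', id);

// Batched alternative: one round trip for all uploads
async function fetchMeasurementsForUploads(uploadIds: string[]) {
  const { data, error } = await supabase
    .from('measurements')
    .select('upload_id, time, pm2_5, pm10')
    .in('upload_id', uploadIds) // single IN (...) query instead of a loop
    .limit(50);                 // default page size per the guidance above
  if (error) throw error;
  return data;
}
```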
Connection Pooling:
- Supabase provides connection pooling (PgBouncer)
- The application uses a single Supabase client instance (sketched below)
- Connections automatically managed and reused
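A sketch of the shared client module; the env variable names follow Vite conventions and are assumptions:

```typescript
import { createClient } from '@supabase/supabase-js';
import type { Database } from '@/lib/supabaseTypes';

// A single module-level instance: every importer shares this client,
// so auth state and connection handling stay centralized.
export const supabase = createClient<Database>(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY,
);
```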
3. Caching Strategy:
Multi-Level Caching:
┌─────────────┐
│ Browser │ (60s stale-while-revalidate for static assets)
└──────┬──────┘
│
┌──────▼──────┐
│ Cloudflare │ (CDN cache for HTML/JS/CSS/images)
│ CDN │ (Edge caching rules)
└──────┬──────┘
│
┌──────▼──────┐
│ Supabase │ (Database query cache, prepared statements)
│ Database │
└─────────────┘

Client-Side Caching:
- React Query: Cache API responses with stale-time and cache-time configurations (see the sketch below)
- LocalStorage: Cache user preferences, session data (max 10MB)
- Service Worker: Cache static assets for offline support (future enhancement)
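A sketch of the caching configuration, assuming TanStack Query v5 (where `gcTime` replaces v4's `cacheTime`); the timings and query key are illustrative:

```typescript
import { QueryClient, useQuery } from '@tanstack/react-query';
import { supabase } from '@/lib/supabase'; // assumed shared client module

export const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 60_000, // serve cached data for 60s before background refetch
      gcTime: 300_000,   // drop unused cache entries after 5 minutes
    },
  },
});

// Usage: results are cached and deduplicated by query key
export function useSites() {
  return useQuery({
    queryKey: ['mine_sites'],
    queryFn: async () => {
      const { data, error } = await supabase.from('mine_sites').select('id, name');
      if (error) throw error;
      return data;
    },
  });
}
```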
CDN Caching:
- Static Assets: Cache-Control headers for long-term caching (1 year for fingerprinted assets)
- HTML: Short cache (5 minutes) with stale-while-revalidate
- API Responses: No caching (dynamic, user-specific)
4. Load Handling Strategies:
Request Rate Limiting:
- Supabase built-in rate limiting (per IP)
- Client-side request throttling for expensive operations
- Exponential backoff for failed requests (see the sketch below)
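A generic backoff helper sketch; the attempt count, delays, and example endpoint are illustrative defaults rather than tuned production values:

```typescript
// Retry an async operation with exponential backoff and jitter.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts - 1) throw err;
      // 500ms, 1s, 2s, ... plus jitter to avoid synchronized retries
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage: retry a flaky status check up to 4 times
const status = await withBackoff(() => fetch('https://api.example.com/status'));
```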
Background Processing:
- Heavy Operations: PDF generation, large CSV imports in background
- Status Polling: Client polls for completion rather than blocking
- Queue System: Future enhancement for job queue (Supabase pg_cron or external service)
Graceful Degradation:
- Reduce chart complexity if rendering takes too long
- Paginate or sample data for very large datasets
- Show simplified view if full dashboard takes >3s
5. Performance Monitoring:
Monitoring Tools:
- Supabase Dashboard: Query performance, connection pool stats, database size
- Vercel Analytics: Core Web Vitals and real-user metrics (as a fallback if Cloudflare analytics prove insufficient)
- Application Logs: Performance timing logs for key operations
- Error Tracking: Sentry or similar for error monitoring and performance issues
Performance Budget:
- Initial Bundle Size: < 300KB gzipped
- Route Chunks: < 100KB gzipped per route
- Total JavaScript: < 1MB gzipped
- Images: < 500KB per page
- Database Queries: < 10 queries per page load
Alerting Thresholds:
- Page load time > 5s for 3+ consecutive measurements
- API error rate > 5% over 5-minute window
- Database connection pool > 80% utilization
- PDF generation failure rate > 10%
Performance Testing:
- Lighthouse CI: Run Lighthouse in CI pipeline; fail build if scores drop below thresholds
- Load Testing: Playwright or k6 for simulating concurrent users
- Database Load Testing: pgbench for database stress testing
- Real User Monitoring: Track Core Web Vitals from production users
This performance engineering approach ensures the system meets user expectations while remaining maintainable and cost-effective at scale.
4. System Architecture and Architecture Design
4.1 Logical View
The system follows a three-tier architecture pattern with clear separation of concerns between presentation, application logic, and data layers. The logical view abstracts away physical deployment details to focus on component responsibilities and interactions.
Logical Architecture Layers:
┌─────────────────────────────────────────────────────────────────────┐
│ PRESENTATION LAYER │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ React 19 Application │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ Upload │ │Dashboard │ │ Reports │ │ Dust │ │ │
│ │ │ Module │ │ Module │ │ Module │ │ Levels │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ Weekly │ │ Flow │ │ Climate │ │ Mine │ │ │
│ │ │ Reports │ │ Meter │ │ Module │ │ Sites │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │ │
│ (Props, Events, State) │
│ │ │
│ ┌────────────────────────┴───────────────────────────────────┐ │
│ │ UI Component Library (shadcn/ui) │ │
│ │ (Button, Card, Form, Modal, Table, etc.) │ │
│ └──────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
│
(HTTP/REST, WebSocket)
│
┌─────────────────────────────────┼───────────────────────────────────┐
│ APPLICATION LOGIC LAYER │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Business Logic Services │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ CSV │ │ Report │ │ PDF │ │ Chart │ │ │
│ │ │ Parser │ │ Data │ │Generator │ │ Data │ │ │
│ │ │ Service │ │ Service │ │ Service │ │Transformer│ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ Upload │ │ Weather │ │Flow Meter│ │ Scraper │ │ │
│ │ │ Service │ │ Service │ │ Service │ │ Service │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────────────────┴───────────────────────────────────┐ │
│ │ Supabase Client SDK (API Gateway) │ │
│ │ (REST API, RPC Functions, Realtime Subscriptions) │ │
│ └──────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
│
(PostgreSQL Protocol)
│
┌─────────────────────────────────┼───────────────────────────────────┐
│ DATA LAYER │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ PostgreSQL Database │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ users/ │ │ mine_ │ │ uploads/ │ │ measure- │ │ │
│ │ │ profiles │ │ sites │ │csv_files │ │ ments │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ reports │ │ weather_ │ │flow_meter│ │ weekly_ │ │ │
│ │ │ │ │ data │ │ data │ │ reports │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ │ │ │
│ │ ┌──────────────────────────────────────────────────────┐ │ │
│ │ │ Row-Level Security Policies (RLS) │ │ │
│ │ │ • User isolation • Role-based access • Auditing │ │ │
│ │ └──────────────────────────────────────────────────────┘ │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Supabase Storage │ │
│ │ • CSV Files Bucket • PDF Reports Bucket │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Supabase Auth (User Management) │ │
│ │ • JWT Tokens • Email Verification • Session Management │ │
│ └────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────┘

Component Responsibilities:
| Layer | Component | Responsibility |
|---|---|---|
| Presentation | Feature Modules | Render UI, handle user interactions, manage local UI state |
| Presentation | UI Components | Reusable presentational components with no business logic |
| Application | Services | Encapsulate business logic, data transformations, API calls |
| Application | Supabase SDK | Abstract data access, handle authentication, manage connections |
| Data | PostgreSQL | Persistent data storage, enforce data integrity, execute queries |
| Data | RLS Policies | Enforce row-level security, isolate user data |
| Data | Supabase Storage | Store binary files (CSV, PDF), serve via signed URLs |
| Data | Supabase Auth | User authentication, JWT token management, session handling |
Cross-Cutting Concerns:
- Logging: Application-level logging in frontend, database triggers for audit logs
- Error Handling: Centralized error boundaries (React), consistent error responses (Supabase)
- Configuration: Environment variables (Vite), feature flags (future)
- Monitoring: Performance metrics, error tracking, usage analytics
4.2 Hardware Architecture
The system leverages cloud infrastructure with no on-premise hardware. All components run on managed services provided by Supabase (Backend) and Cloudflare (Frontend CDN).
Infrastructure Provider Architecture:
┌───────────────────────────────────────────────────────────────────┐
│ CLOUDFLARE GLOBAL NETWORK │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Cloudflare Pages (Static Hosting) │ │
│ │ • 300+ Edge Locations • Automatic HTTPS • DDoS Protection│ │
│ └────────────────────────────────────────────────────────────┘ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Cloudflare CDN │ │
│ │ • Asset Caching • Brotli Compression • Image Optimization│ │
│ └────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────┘
│
│ (HTTPS)
│
┌───────────────────────────────┼───────────────────────────────────┐
│ SUPABASE CLOUD │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Supabase Managed PostgreSQL │ │
│ │ Instance: AWS RDS (us-east-1) │ │
│ │ Specs: 2 vCPU, 8GB RAM, 50GB SSD (Pro Tier) │ │
│ │ Features: Automated backups, Point-in-time recovery │ │
│ └────────────────────────────────────────────────────────────┘ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Supabase Storage (S3-Compatible) │ │
│ │ Provider: AWS S3 (us-east-1) │ │
│ │ Capacity: 100GB (Pro Tier) │ │
│ │ CDN: Integrated CloudFront distribution │ │
│ └────────────────────────────────────────────────────────────┘ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Supabase Edge Functions │ │
│ │ Runtime: Deno v1.40+ on Fly.io infrastructure │ │
│ │ Regions: Multi-region deployment (automatic) │ │
│ │ Limits: 60s timeout, 512MB memory per invocation │ │
│ └────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────┘

Hardware Specifications:
| Component | Specification | Justification |
|---|---|---|
| Database Server | 2 vCPU, 8GB RAM, 50GB SSD (AWS RDS) | Sufficient for 50 concurrent users, 10GB data storage |
| Connection Pool | 60 max connections (PgBouncer) | Handles concurrent queries from 50 users |
| Storage Capacity | 100GB object storage (AWS S3) | Accommodates 20,000 CSV files + PDFs (avg 2MB each) |
| CDN Edge Nodes | 300+ locations (Cloudflare) | <100ms latency for global users |
| Edge Function Compute | 512MB RAM, 60s timeout (Fly.io) | Sufficient for AI API calls, scraper triggers |
4.2.1 Security Hardware Architecture
Network Security Components:
- DDoS Protection: Cloudflare's built-in DDoS mitigation at network edge (automatic)
- Web Application Firewall (WAF): Cloudflare WAF rules (managed ruleset)
- TLS Termination: Cloudflare edge terminates TLS 1.3, re-encrypts to origin
- Database Network Isolation: Supabase PostgreSQL accessible only via authenticated connections
- VPC Configuration: Supabase runs in isolated VPC (AWS), no public database access
Security Infrastructure:
┌───────────────────────────────────────────────────────────────┐
│ Internet (Users) │
└──────────────────────────┬────────────────────────────────────┘
│
┌────────▼────────┐
│ Cloudflare WAF │ <- DDoS Protection, Bot Detection
└────────┬────────┘
│ (TLS 1.3)
┌────────▼────────┐
│ Cloudflare CDN │ <- HTTPS Enforcement, HSTS
└────────┬────────┘
│
┌────────────┼────────────┐
│ │ │
┌───────▼──────┐ │ ┌──────▼─────────┐
│ Static Assets│ │ │ API Requests │
│ (Cached) │ │ │ (Pass-through) │
└──────────────┘ │ └──────┬─────────┘
│ │
│ ┌────────▼─────────┐
│ │ Supabase Auth │ <- JWT Verification
│ └────────┬─────────┘
│ │
│ ┌────────▼─────────┐
│ │ RLS Policies │ <- Row-level filtering
│ └────────┬─────────┘
│ │
│ ┌────────▼─────────┐
└──│ PostgreSQL DB │
└──────────────────┘
(VPC Isolated)

SSL/TLS Configuration:
- Protocol: TLS 1.3 (minimum TLS 1.2)
- Cipher Suites: Modern ciphers only (ECDHE-RSA-AES256-GCM-SHA384, etc.)
- Certificate: Cloudflare-managed SSL certificate (auto-renewal)
- HSTS: Strict-Transport-Security header with 1-year max-age
4.2.2 Performance Hardware Architecture
High Availability Configuration:
- Database: Supabase managed PostgreSQL with automated failover (Pro tier)
- Storage: AWS S3 with 99.99% availability SLA, multi-AZ replication
- CDN: Cloudflare global anycast network (automatic failover to healthy nodes)
- Edge Functions: Multi-region deployment on Fly.io (automatic routing)
Load Balancing:
- Frontend: Cloudflare anycast IP routes users to nearest edge location
- Database: PgBouncer connection pooling distributes connections
- API: Supabase REST API auto-scales within infrastructure limits
Scaling Characteristics:
| Component | Scaling Method | Limits | Upgrade Path |
|---|---|---|---|
| Frontend | Automatic (CDN) | No practical limit | N/A (inherently scalable) |
| Database | Vertical (upgrade instance) | 64 vCPU, 256GB RAM | Add read replicas (Enterprise) |
| Storage | Automatic | 10TB (practical) | Unlimited with higher tier |
| Edge Functions | Horizontal (auto-scale) | Concurrent invocations | No limit (pay-per-use) |
Resource Allocation:
Production Load Profile (50 concurrent users):
┌─────────────────────────────────────────────────────────┐
│ Database │
│ ├─ CPU Usage: 30-40% average, 70% peak │
│ ├─ Memory: 4GB / 8GB (50%) │
│ ├─ Connections: 15-25 / 60 (40%) │
│ └─ Storage: 12GB / 50GB (24%) │
├─────────────────────────────────────────────────────────┤
│ Object Storage │
│ ├─ Used: 25GB / 100GB (25%) │
│ ├─ Request Rate: ~500 req/min (CSV downloads, PDFs) │
│ └─ Bandwidth: ~2GB/day egress │
├─────────────────────────────────────────────────────────┤
│ Edge Functions │
│ ├─ Invocations: ~50/hour (AI descriptions, scrapers) │
│ ├─ Duration: 2-5s average │
│ └─ Concurrency: 1-3 concurrent executions │
└─────────────────────────────────────────────────────────┘
4.3 Software Architecture
The software architecture follows a modular, feature-driven design with clear boundaries between features and shared infrastructure.
Software Stack Overview:
┌───────────────────────────────────────────────────────────────┐
│ FRONTEND SOFTWARE STACK │
├───────────────────────────────────────────────────────────────┤
│ Runtime Environment │
│ └─ Browser (Chrome 90+, Firefox 88+, Safari 14+, Edge 90+) │
├───────────────────────────────────────────────────────────────┤
│ UI Framework & Libraries │
│ ├─ React 19.2.0 (UI rendering, component model) │
│ ├─ React Router v7.9.1 (Client-side routing) │
│ ├─ Tailwind CSS 4.1.16 (Utility-first styling) │
│ └─ shadcn/ui (Accessible component library) │
├───────────────────────────────────────────────────────────────┤
│ State Management │
│ ├─ Zustand 5.0.8 (Global state) │
│ ├─ React Hook Form 7.66 (Form state) │
│ └─ usehooks-ts 3.1.1 (Common hooks) │
├───────────────────────────────────────────────────────────────┤
│ Data Visualization │
│ ├─ Recharts 3.3.0 (Charts for reports) │
│ ├─ Nivo 0.99.0 (Dashboard analytics charts) │
│ └─ ApexCharts 1.7.0 (Additional chart types) │
├───────────────────────────────────────────────────────────────┤
│ Document Generation │
│ ├─ jsPDF 3.0.3 (PDF generation) │
│ ├─ html2canvas 1.4.1 (DOM to canvas conversion) │
│ └─ docx 9.5.1 (Word document generation) │
├───────────────────────────────────────────────────────────────┤
│ Data Processing │
│ ├─ PapaParse 5.5.3 (CSV parsing) │
│ ├─ JSZip 3.10.1 (ZIP file handling) │
│ ├─ date-fns 4.1.0 (Date manipulation) │
│ └─ Zod 4.1.12 (Runtime validation) │
├───────────────────────────────────────────────────────────────┤
│ HTTP & API Client │
│ ├─ @supabase/supabase-js 2.80.0 (Supabase SDK) │
│ └─ Native Fetch API (HTTP requests) │
├───────────────────────────────────────────────────────────────┤
│ Development Tools │
│ ├─ TypeScript 5.9.3 (Type system) │
│ ├─ Vite 7.1.12 (Build tool, dev server) │
│ ├─ ESLint 9.39.1 (Linting) │
│ └─ Prettier 3.6.2 (Code formatting) │
└───────────────────────────────────────────────────────────────┘
┌───────────────────────────────────────────────────────────────┐
│ BACKEND SOFTWARE STACK │
├───────────────────────────────────────────────────────────────┤
│ Database │
│ ├─ PostgreSQL 15.x (Primary database) │
│ ├─ PgBouncer (Connection pooling) │
│ └─ PostgREST (Auto-generated REST API) │
├───────────────────────────────────────────────────────────────┤
│ Authentication │
│ └─ Supabase Auth (JWT-based authentication) │
├───────────────────────────────────────────────────────────────┤
│ Object Storage │
│ └─ Supabase Storage (S3-compatible API) │
├───────────────────────────────────────────────────────────────┤
│ Serverless Functions │
│ ├─ Deno Runtime v1.40+ (Edge function runtime) │
│ └─ TypeScript (Function implementation language) │
├───────────────────────────────────────────────────────────────┤
│ External Integrations │
│ ├─ Google Gemini API (AI chart descriptions) │
│ ├─ Dustac Scraper API (Dust level data collection) │
│ └─ BOM Weather API (Weather data fetching) │
└───────────────────────────────────────────────────────────────┘
Software Component Diagram:
Frontend Components:
┌─────────────────────────────────────────────────────────────┐
│ React Application │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Feature Modules (src/features/) │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────┐ │ │
│ │ │ upload │ │ reports │ │ dust- │ │weekly- │ │ │
│ │ │ │ │ │ │ levels │ │reports │ │ │
│ │ │ ├─comp. │ │ ├─comp. │ │ ├─comp. │ │├─comp. │ │ │
│ │ │ ├─serv. │ │ ├─serv. │ │ ├─serv. │ │├─serv. │ │ │
│ │ │ ├─hooks │ │ ├─hooks │ │ ├─hooks │ │├─hooks │ │ │
│ │ │ └─types │ │ └─types │ │ └─types │ │└─types │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └────────┘ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌────────┐ │ │
│ │ │flow-meter│ │ climate │ │ report- │ │ mine- │ │ │
│ │ │ │ │ │ │templates │ │ sites │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └────────┘ │ │
│ └────────────────────────────────────────────────────┘ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Shared Infrastructure (src/lib/, src/components/) │ │
│ │ ├─ supabase.ts (Supabase client singleton) │ │
│ │ ├─ supabaseTypes.ts (Generated DB types) │ │
│ │ ├─ utils.ts (Common utilities) │ │
│ │ └─ components/ui/ (shadcn/ui components) │ │
│ └────────────────────────────────────────────────────┘ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Routing (src/routes/) │ │
│ │ ├─ Routes.tsx (Route definitions) │ │
│ │ └─ _layout/ (Layout components) │ │
│ └────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Backend Components:
┌─────────────────────────────────────────────────────────────┐
│ Supabase Platform │
│ ┌────────────────────────────────────────────────────┐ │
│ │ PostgreSQL Database │ │
│ │ ├─ Tables (profiles, uploads, measurements, etc.) │ │
│ │ ├─ RLS Policies (user isolation, role-based) │ │
│ │ ├─ Functions (log_report_read, etc.) │ │
│ │ ├─ Triggers (auto-logging, updated_at) │ │
│ │ └─ Indexes (performance optimization) │ │
│ └────────────────────────────────────────────────────┘ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Edge Functions (supabase/functions/) │ │
│ │ ├─ generate-chart-descriptions (Gemini AI) │ │
│ │ ├─ trigger-dust-level-scraper (Dustac API) │ │
│ │ └─ trigger-flow-meter-scraper (Flow Meter API) │ │
│ └────────────────────────────────────────────────────┘ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Supabase Auth │ │
│ │ ├─ User management (signup, login, reset) │ │
│ │ ├─ JWT token generation/verification │ │
│ │ └─ Email verification │ │
│ └────────────────────────────────────────────────────┘ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ Supabase Storage (Buckets) │ │
│ │ ├─ csv-uploads (CSV files) │ │
│ │ ├─ pdf-reports (Generated PDF reports) │ │
│ │ └─ Storage policies (user-level access) │ │
│ └────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Component Interaction Patterns:
Data Flow (Read):
User Action → React Component → Service Layer → Supabase Client SDK → REST API → RLS Policy Check → PostgreSQL Query → Response
Data Flow (Write):
User Input → Form Validation (Zod) → Service Layer → Supabase Client SDK → REST API → RLS Policy Check → PostgreSQL INSERT/UPDATE → Trigger → Response
File Upload Flow:
File Selection → Upload Service → Supabase Storage API → S3 Upload → Metadata to PostgreSQL → Status Update
External API Flow:
User Action → Frontend → Edge Function Invocation → External API Call → Response Processing → Database Update → Frontend Notification
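To make the write path concrete, the following is a minimal service-layer sketch. The @/lib/supabase import alias for the client singleton (src/lib/supabase.ts), and the table and column names shown, are illustrative assumptions, not the actual schema:
// Sketch of the write data flow: Zod validation in the service layer, then
// an insert through the Supabase client SDK (RLS enforces ownership).
// Table and column names below are illustrative assumptions.
import { z } from 'zod';
import { supabase } from '@/lib/supabase'; // client singleton (src/lib/supabase.ts)

const WeeklyReportInput = z.object({
  reportPeriodEnd: z.string().regex(/^\d{4}-\d{2}-\d{2}$/), // 'YYYY-MM-DD'
  siteSummary: z.string().min(1).max(5000),
});

export async function saveWeeklyReport(input: unknown) {
  // 1. Form validation (Zod): reject bad input before any network call
  const parsed = WeeklyReportInput.parse(input);

  // 2. Supabase client SDK → REST API → RLS policy check → INSERT → trigger
  const { data, error } = await supabase
    .from('weekly_reports')
    .insert({
      report_period_end: parsed.reportPeriodEnd,
      site_summary: parsed.siteSummary, // illustrative column name
    })
    .select()
    .single();

  if (error) throw new Error(`Insert failed: ${error.message}`);
  return data;
}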
4.3.1 Security Software Architecture
Authentication Flow:
┌──────────┐ ┌─────────────┐
│ User │ │ Supabase │
│ Browser │ │ Auth │
└────┬─────┘ └──────┬──────┘
│ │
│ 1. POST /auth/signup │
│ { email, password, full_name } │
├───────────────────────────────────────────────>│
│ │
│ 2. Validate input │
│ 3. Hash password │
│ 4. Create user │
│ 5. Send verify email
│ │
│ 6. { user, session: null } │
│<───────────────────────────────────────────────┤
│ │
│ 7. Click verification link in email │
│ GET /auth/confirm?token=... │
├───────────────────────────────────────────────>│
│ │
│ 8. Verify token │
│ 9. Mark confirmed │
│ │
│ 10. Redirect to /login │
│<───────────────────────────────────────────────┤
│ │
│ 11. POST /auth/signin │
│ { email, password } │
├───────────────────────────────────────────────>│
│ │
│ 12. Verify password
│ 13. Generate JWT │
│ 14. Create session
│ │
│ 15. { user, session: { access_token, refresh_token }}
│<───────────────────────────────────────────────┤
│ │
│ 16. Store tokens in localStorage │
│ │
│ 17. Subsequent requests include: │
│ Authorization: Bearer <access_token> │
├───────────────────────────────────────────────>│
│ │
│ 18. Verify JWT │
│ 19. Extract user_id
     │  20. Apply RLS                                  │
Authorization Model (Row-Level Security):
-- Example RLS Policy for measurements table
CREATE POLICY "Users can view own measurements"
ON measurements FOR SELECT
USING (
EXISTS (
SELECT 1 FROM uploads
WHERE uploads.id = measurements.upload_id
AND uploads.user_id = auth.uid()
)
);
-- Admin override policy
CREATE POLICY "Admins can view all measurements"
ON measurements FOR SELECT
USING (
EXISTS (
SELECT 1 FROM profiles
WHERE id = auth.uid() AND role = 'admin'
)
);
Security Software Components:
| Component | Technology | Function |
|---|---|---|
| JWT Token Generation | Supabase Auth | Creates signed JWT with user_id, role, expiration |
| JWT Token Verification | Supabase PostgREST | Verifies token signature, checks expiration |
| Password Hashing | bcrypt (Supabase Auth) | One-way hash with salt, cost factor 10 |
| Session Management | Supabase Auth | Tracks active sessions, handles refresh tokens |
| RLS Policy Engine | PostgreSQL | Filters query results based on auth.uid() |
| Email Verification | Supabase Auth | Generates verification tokens, sends emails |
| CSRF Protection | SameSite cookies | Prevents cross-site request forgery |
| XSS Prevention | React (automatic escaping) | Prevents script injection in rendered content |
| SQL Injection Prevention | Parameterized queries | All queries use bound parameters |
Encryption:
- Data in Transit: TLS 1.3 for all HTTPS connections
- Data at Rest: AES-256 encryption for database and storage (provider-managed)
- JWT Signing: HS256 algorithm with secret key
- Password Storage: bcrypt with salt (managed by Supabase Auth)
4.3.2 Performance Software Architecture
Caching Layers:
Request Flow with Caching:
User Request
│
▼
┌─────────────────┐
│ Browser Cache │ <-- Cache-Control: max-age=31536000 (static assets)
└────────┬────────┘ ETag validation for HTML
│ (Cache Miss)
▼
┌─────────────────┐
│ Cloudflare CDN │ <-- Edge cache with stale-while-revalidate
└────────┬────────┘ Brotli compression
│ (Cache Miss)
▼
┌─────────────────┐
│ Supabase API │ <-- No response caching (dynamic, user-specific)
└────────┬────────┘ Connection pooling (PgBouncer)
│
▼
┌─────────────────┐
│ PostgreSQL │ <-- Query result caching (internal)
└─────────────────┘      Prepared statement cache
Performance Optimization Techniques:
| Technique | Implementation | Impact |
|---|---|---|
| Code Splitting | Vite automatic chunking, React.lazy() | Reduces initial bundle from 1.2MB to 300KB |
| Tree Shaking | Vite production build | Removes unused code, saves ~40% bundle size |
| Image Optimization | WebP format, lazy loading, srcset | Reduces image payload by 60-80% |
| Database Indexing | Composite indexes on common queries | Reduces query time from 500ms to <50ms |
| Connection Pooling | PgBouncer with 60 connections | Eliminates connection overhead (~100ms) |
| Debouncing | usehooks-ts debounce (300ms) | Reduces API calls by 80% for search/filter |
| Pagination | Limit 50 items per page | Reduces query time and data transfer |
| Lazy Loading | React.lazy() for route components | Defers loading until route accessed |
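As a concrete instance of the Code Splitting and Lazy Loading rows above, a minimal route-level sketch (route paths and component file locations are illustrative assumptions):
// Route-level code splitting: each route's chunk is fetched only when the
// route is first visited. Paths and component files are illustrative.
import { lazy, Suspense } from 'react';
import { Routes, Route } from 'react-router-dom';

const Dashboard = lazy(() => import('@/features/dashboard/DashboardPage'));
const Reports = lazy(() => import('@/features/reports/ReportsPage'));

export function AppRoutes() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <Routes>
        <Route path="/" element={<Dashboard />} />
        <Route path="/reports" element={<Reports />} />
      </Routes>
    </Suspense>
  );
}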
Bundle Optimization:
// vite.config.ts - Build optimization
export default defineConfig({
build: {
rollupOptions: {
output: {
manualChunks: {
'vendor-react': ['react', 'react-dom', 'react-router-dom'],
'vendor-supabase': ['@supabase/supabase-js'],
'vendor-charts': ['recharts', '@nivo/core', '@nivo/bar'],
'vendor-ui': ['@radix-ui/react-dialog', '@radix-ui/react-dropdown-menu'],
},
},
},
minify: 'terser',
terserOptions: {
compress: {
drop_console: true, // Remove console.log in production
},
},
},
});
Async Processing:
- PDF Generation: Client-side rendering with loading indicator, no blocking
- CSV Upload: Chunked processing with progress updates (see the sketch after this list)
- External API Calls: Async/await with timeout handling
- Database Writes: Batch inserts for multiple records
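For the chunked CSV processing above, a minimal sketch using PapaParse's chunk callback; the progress wiring is illustrative:
// Chunked CSV parsing with progress updates, so large files never block
// the UI thread for long. PapaParse's chunk callback and meta.cursor
// (bytes consumed so far) are real APIs; onProgress wiring is illustrative.
import Papa from 'papaparse';

export function parseCsvInChunks(
  file: File,
  onProgress: (percent: number) => void,
): Promise<Record<string, string>[]> {
  const rows: Record<string, string>[] = [];
  return new Promise((resolve, reject) => {
    Papa.parse<Record<string, string>>(file, {
      header: true,
      skipEmptyLines: true,
      chunk: (results) => {
        rows.push(...results.data);
        onProgress(Math.round((results.meta.cursor / file.size) * 100));
      },
      complete: () => resolve(rows),
      error: (err) => reject(err),
    });
  });
}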
4.4 Information Architecture
The information architecture defines how data is organized, categorized, and accessed within the system.
Data Classification:
| Data Category | Examples | Sensitivity | Retention |
|---|---|---|---|
| User Identity Data | Email, full_name, password hash | High | Account lifetime + 30 days |
| Environmental Measurements | PM10, PM2.5, temperature, humidity | Medium | 7 years (regulatory) |
| Operational Data | CSV files, upload metadata | Medium | 2 years |
| Generated Reports | PDF reports | Low | Permanent (user managed) |
| Audit Logs | Activity logs, access logs | High | 3 years (compliance) |
| System Configuration | User preferences, templates | Low | Account lifetime |
| Weather Data | BOM weather observations | Low | 1 year |
| Flow Meter Data | Water usage measurements | Medium | 2 years |
4.4.1 Records Management
Data Lifecycle Management:
┌──────────────────────────────────────────────────────────────┐
│ Data Lifecycle Stages │
└──────────────────────────────────────────────────────────────┘
1. CREATION
├─ CSV Upload: User uploads file → Supabase Storage → Metadata in DB
├─ Measurements: Parsed from CSV → Batch insert to PostgreSQL
├─ Reports: Generated from measurements → PDF to Storage → Metadata in DB
└─ Audit Logs: Triggered automatically on CRUD operations
2. ACTIVE USE (0-12 months)
├─ Full access via dashboard and reports
├─ Indexed for fast queries
├─ Cached at multiple levels
└─ Available for export
3. ARCHIVAL (12+ months)
├─ Data remains in database (searchable but less frequently accessed)
├─ CSV files retained in storage (user can download)
├─ PDF reports retained permanently
└─ Query performance may degrade for very old data
4. DISPOSAL
├─ User-initiated deletion: Soft delete (mark as deleted, retain 30 days)
├─ Account deletion: User data deleted after 30-day grace period
├─ Regulatory retention met: Audit logs deleted after 3 years
└─ Automatic cleanup: Temporary files deleted after 24 hours
Retention Policies:
| Data Type | Retention Period | Justification | Disposal Method |
|---|---|---|---|
| Measurements | 7 years | Environmental compliance (typical regulatory requirement) | Hard delete from database |
| CSV Files | 2 years | Operational need for re-processing | Delete from Storage bucket |
| PDF Reports | Permanent (user-controlled) | User maintains for records | User-initiated deletion |
| Audit Logs | 3 years | Security/compliance requirements | Hard delete from database |
| User Accounts (inactive) | 2 years | Reasonable retention for inactive users | Full account deletion |
| Deleted Data (soft delete) | 30 days | Allow recovery from accidental deletion | Hard delete after grace period |
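For the soft-delete grace period, a minimal sketch; a deleted_at timestamp column is assumed here (the real schema may use a status flag instead), and the scheduled hard-delete mechanism is illustrative:
// Soft delete: mark the row rather than removing it, so it can be restored
// within the 30-day grace period. The 'deleted_at' column is an assumption.
import { supabase } from '@/lib/supabase';

export async function softDeleteUpload(uploadId: string) {
  const { error } = await supabase
    .from('uploads')
    .update({ deleted_at: new Date().toISOString() })
    .eq('id', uploadId);
  if (error) throw error;
}

// Hard deletion would be deferred to a scheduled cleanup (e.g., a pg_cron
// job or Edge Function) that removes rows where deleted_at is >30 days old.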
Archival Procedures:
Automated Archival (Future Enhancement):
- PostgreSQL partitioning by year for measurements table
- Older partitions moved to cheaper storage tier (AWS Glacier)
- Metadata remains in main database for searchability
User-Initiated Export:
- Users can export all data as ZIP file (CSV + PDFs)
- Export includes data dictionary and README
- Export functionality available via dashboard
Backup and Recovery:
- Supabase automatic daily backups (retained 7 days)
- Point-in-time recovery available (Pro tier)
- User can request manual backup before major changes
4.4.2 Data
All data entering the system is electronic; no paper-based inputs.
| Data Type | Format | Source | Description | Volume |
|---|---|---|---|---|
| Environmental Measurements | CSV (UTF-8) | DustRanger devices (via user upload) | PM measurements, temperature, humidity, location, timestamp | ~10K rows/file, 50MB max |
| User Registration | JSON (Web form) | User manual entry via browser | Email, full name, password | 1 record per user |
| PDF Reports | Binary (PDF/A) | System-generated (jsPDF) | Formatted report with charts and statistics | 1-5MB per report |
| Weather Data | JSON (API response) | BOM API (automated fetch) | Temperature, rainfall, wind, pressure | ~30 records/day/station |
| Dust Level Data | JSON (API response) | Dustac Scraper API (automated) | Real-time dust measurements from external system | ~100 records/hour |
| Flow Meter Data | JSON (API response) | Flow Meter Scraper API (automated) | Water usage measurements | ~50 records/day |
| Weekly Reports | JSON (Web form) | User manual entry via dashboard | Structured weekly field report (8 sections) | 1 report per user per week |
| Report Templates | Markdown (text) | User manual entry via editor | Customizable report template | <10KB per template |
| Mine Site Information | JSON (Web form) | User manual entry | Site name, location, device list | <1KB per site |
Data Validation at Entry:
- CSV Files: Column validation, data type checking, date format parsing, range validation
- User Input: Zod schema validation, React Hook Form validation, backend validation
- API Responses: JSON schema validation, error handling for malformed data
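These rules translate naturally into a row-level Zod schema; a sketch follows (column names match the DustRanger mapping elsewhere in this section, but the exact schema shape is illustrative):
// Row-level validation sketch matching the entry rules above. Ranges follow
// the measurement business rules (PM ≥ 0, temp -50..60°C, humidity 0..100%).
import { z } from 'zod';

export const MeasurementRow = z.object({
  time: z.coerce.date(),
  site: z.string().min(1).max(100),
  massconcentration2p5: z.coerce.number().min(0),   // PM values non-negative
  massconcentration10p0: z.coerce.number().min(0),
  temperature: z.coerce.number().min(-50).max(60).optional(),
  humidity: z.coerce.number().min(0).max(100).optional(),
});
export type MeasurementRow = z.infer<typeof MeasurementRow>;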
4.4.3 Manual/Electronic Inputs
Input Processing Flow:
CSV File Upload:
User selects CSV → Client validation → Upload to Storage → Parse CSV → Validate measurements → Batch insert to DB → Create upload record → Generate PDF → Update status
Web Form Input (e.g., Weekly Report):
User enters data → React Hook Form validation → Zod schema validation → Submit to Supabase → RLS policy check → Insert/Update in DB → Trigger updated_at → Return success
External API Data (e.g., Weather):
Edge Function invoked → Call BOM API → Parse JSON response → Transform to DB schema → Insert with user_id → Return status
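A minimal Edge Function sketch of the weather flow above; the upstream endpoint URL and the weather_data column shapes are illustrative assumptions, not the production code:
// supabase/functions/fetch-weather/index.ts (sketch of external-API ingestion)
import { createClient } from 'jsr:@supabase/supabase-js@2';

Deno.serve(async (req) => {
  const { stationId, date } = await req.json();

  // Call the external API (illustrative URL, not the real BOM endpoint)
  const res = await fetch(
    `https://example-bom-endpoint/observations?station=${stationId}&date=${date}`,
  );
  if (!res.ok) {
    return new Response(JSON.stringify({ error: 'Upstream fetch failed' }), { status: 502 });
  }
  const observation = await res.json();

  // Service-role client so the function can write on behalf of the system
  const supabase = createClient(
    Deno.env.get('SUPABASE_URL')!,
    Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!,
  );
  const { error } = await supabase.from('weather_data').insert({
    station_id: stationId,
    date,
    raw: observation, // illustrative JSONB column
  });

  return new Response(JSON.stringify({ ok: !error }), {
    headers: { 'Content-Type': 'application/json' },
    status: error ? 500 : 200,
  });
});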
Post-Entry Processing:
- CSV Measurements: After import, indexed for queries, available for dashboard, trigger PDF generation
- Weekly Reports: Auto-saved as draft, marked complete when user submits, available for viewing
- Weather Data: Stored with station_id + date, available for climate charts
- User Edits: Trigger updated_at timestamp update, create audit log entry
4.4.4 Master Files
Primary Data Tables (Master Files):
| Table | Description | Key Data | Update Frequency |
|---|---|---|---|
| profiles | User profile information | full_name, email, role | Rarely (user edits profile) |
| mine_sites | Mine site locations | site_name, location, status | Occasionally (add/edit sites) |
| devices | Monitoring devices | device_id, device_name, site_id | Occasionally (add/edit devices) |
| uploads | CSV upload tracking | upload_type, period, status, file count | Frequently (daily/weekly uploads) |
| csv_files | Individual CSV files | filename, site_name, period, record_count | Frequently (one per CSV upload) |
| measurements | Environmental readings | PM10, PM2.5, temperature, humidity, time | Very frequently (10K+ rows per upload) |
| reports | Generated PDF reports | pdf_storage_path, generation_time, status | Frequently (one per CSV file) |
| weekly_reports | Structured weekly reports | 8 JSONB sections, status, period_end | Weekly (one per user per week) |
| report_templates | Custom report templates | name, content (markdown), is_default | Rarely (user creates/edits) |
| weather_data | Weather observations | temperature, rainfall, wind, station_id | Daily (automated import) |
| flow_meter_data | Water usage data | volume, timestamp, site_name | Daily (automated import) |
| activity_logs | Audit trail | user_id, action, resource_type, details | Continuously (all CRUD operations) |
Data Relationships:
profiles (1) ──< (many) uploads
uploads (1) ──< (many) csv_files
csv_files (1) ──< (many) measurements
csv_files (1) ──< (1) reports
mine_sites (1) ──< (many) devices
profiles (1) ──< (many) weekly_reports
profiles (1) ──< (many) report_templates
weather_stations (1) ──< (many) weather_data
profiles (1) ──< (many) flow_meter_data
profiles (1) ──< (many) activity_logs
4.5 Internal Communications Architecture
Communication Protocols:
| Communication Type | Protocol | Use Case |
|---|---|---|
| Frontend ↔ Supabase API | HTTPS/REST | All data queries, mutations |
| Frontend ↔ Supabase Auth | HTTPS/REST | Authentication, session management |
| Frontend ↔ Supabase Storage | HTTPS/REST | File uploads, downloads |
| Frontend ↔ Edge Functions | HTTPS/REST | AI generation, scraper triggers |
| Edge Functions ↔ External APIs | HTTPS/REST | Google Gemini, Dustac, BOM |
| Supabase API ↔ PostgreSQL | PostgreSQL Wire Protocol | Database queries |
| Frontend ↔ Supabase Realtime | WebSocket | Live updates (future) |
API Communication Patterns:
// Example: Data Query Pattern
const { data, error } = await supabase
.from('measurements')
.select('pm10, pm25, temperature, time')
.eq('site', 'Boddington')
.gte('time', '2025-01-01')
.lte('time', '2025-01-31')
.order('time', { ascending: true })
.limit(1000);
// Example: File Upload Pattern
const { data, error } = await supabase.storage
.from('csv-uploads')
.upload(`${userId}/daily/2025-01-15_${uploadId}.csv`, file, {
contentType: 'text/csv',
upsert: false,
});
// Example: Edge Function Invocation
const { data, error } = await supabase.functions.invoke(
'generate-chart-descriptions',
{
body: { chartType: 'dustLevelsTemp', chartData, siteName, dateRange },
}
);
Message Formats:
REST API Requests:
- Content-Type: application/json
- Authorization: Bearer <access_token> (JWT)
- Body: JSON-encoded payload
REST API Responses:
- Content-Type: application/json
- Status: 200 (success), 400 (validation error), 401 (unauthorized), 500 (server error)
- Body: JSON-encoded response or error
File Upload:
- Content-Type: multipart/form-data (browser upload)
- Content-Type: text/csv, application/pdf (storage upload)
WebSocket (Realtime):
- Protocol: WebSocket over TLS
- Message Format: JSON with event type and payload
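Realtime updates are listed as future work; for reference, a minimal subscription sketch over the Supabase Realtime WebSocket (the channel name and filter are illustrative):
// Sketch of a future live-update subscription over the Realtime WebSocket.
import { supabase } from '@/lib/supabase';

const channel = supabase
  .channel('measurements-feed') // illustrative channel name
  .on(
    'postgres_changes',
    { event: 'INSERT', schema: 'public', table: 'measurements' },
    (payload) => {
      console.log('New measurement row:', payload.new);
    },
  )
  .subscribe();

// On component unmount: supabase.removeChannel(channel);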
Network Topology:
┌──────────────┐
│ User Browser │
└──────┬───────┘
│ (HTTPS)
│
┌───▼────────────────────┐
│ Cloudflare CDN (Edge) │
└───┬────────────────────┘
│
├─── (Static Assets) ─── Cached at Edge
│
├─── (API Requests) ────┐
│ │
│ ┌────▼──────────┐
│ │ Supabase API │
│ │ (us-east-1) │
│ └────┬──────────┘
│ │
│ ┌────▼──────────┐
│ │ PostgreSQL │
│ │ Database │
│ └───────────────┘
│
└─── (Edge Functions) ──┐
│
┌────▼────────────┐
│ Deno Runtime │
│ (Fly.io) │
└────┬────────────┘
│
┌─────────┼─────────┐
│ │ │
┌───▼───┐ ┌──▼───┐ ┌──▼────┐
│Gemini │ │Dustac│ │ BOM │
│ API │ │ API │ │ API │
└───────┘ └──────┘ └───────┘
4.6 Security Architecture
Defense-in-Depth Strategy:
Layer 1: Network Security
├─ Cloudflare WAF (blocks malicious requests)
├─ DDoS protection (rate limiting, bot detection)
└─ TLS 1.3 encryption (transport security)
Layer 2: Application Security
├─ JWT authentication (validates user identity)
├─ CORS policy (prevents unauthorized origins)
├─ Input validation (prevents injection attacks)
└─ CSRF protection (SameSite cookies)
Layer 3: Data Security
├─ Row-Level Security (isolates user data)
├─ Parameterized queries (prevents SQL injection)
├─ Data encryption at rest (AES-256)
└─ Signed URLs (time-limited file access)
Layer 4: Audit & Monitoring
├─ Activity logs (tracks all data access)
├─ Error logging (detects anomalies)
├─ Session monitoring (tracks active users)
└─ Security alerts (notifies on suspicious activity)
Security Controls Matrix:
| Threat | Control | Implementation |
|---|---|---|
| Unauthorized Access | Authentication | Supabase Auth with JWT, email verification |
| Privilege Escalation | Authorization | RLS policies based on user role |
| Data Leakage | Data Isolation | RLS policies filter by user_id |
| SQL Injection | Input Sanitization | Parameterized queries, ORM-style API |
| XSS Attacks | Output Encoding | React automatic escaping |
| CSRF Attacks | CSRF Tokens | SameSite cookies, CORS policy |
| Brute Force Login | Rate Limiting | Supabase built-in rate limiting |
| Session Hijacking | Secure Sessions | JWT with expiration, HttpOnly cookies |
| Man-in-the-Middle | Encryption | TLS 1.3, HSTS headers |
| Data Tampering | Audit Logging | Activity logs with trigger-based logging |
4.7 Performance
Performance Architecture Principles:
- Minimize Round Trips: Batch queries, use joins instead of N+1 queries
- Cache Aggressively: Multi-level caching (browser, CDN, database)
- Optimize Assets: Code splitting, image optimization, compression
- Index Strategically: Composite indexes on common query patterns
- Parallelize When Possible: Concurrent API calls, async processing
Performance Monitoring Dashboard:
Real-Time Performance Metrics:
┌─────────────────────────────────────────────────────────────┐
│ Frontend Metrics (User Experience) │
│ ├─ LCP (Largest Contentful Paint): 1.8s ✓ (<2.5s) │
│ ├─ FID (First Input Delay): 45ms ✓ (<100ms) │
│ ├─ CLS (Cumulative Layout Shift): 0.05 ✓ (<0.1) │
│ └─ Bundle Size: 285KB ✓ (<300KB) │
├─────────────────────────────────────────────────────────────┤
│ Backend Metrics (API Performance) │
│ ├─ Database Query Time (p95): 120ms ✓ (<500ms) │
│ ├─ API Response Time (p95): 180ms ✓ (<500ms) │
│ ├─ Database Connection Pool: 18/60 ✓ (<80%) │
│ └─ Storage Bandwidth: 1.2GB/day │
├─────────────────────────────────────────────────────────────┤
│ Business Metrics (User Operations) │
│ ├─ CSV Upload Processing: 12s ✓ (<30s) │
│ ├─ PDF Generation: 22s ✓ (<60s) │
│ ├─ Dashboard Load: 0.8s ✓ (<1s) │
│ └─ Concurrent Users: 32/50 ✓ │
└─────────────────────────────────────────────────────────────┘
Bottleneck Analysis:
| Potential Bottleneck | Symptoms | Mitigation |
|---|---|---|
| Database Connections | Connection pool exhausted, timeouts | Optimize query count, use connection pooling, upgrade tier |
| Large CSV Processing | Upload timeouts, UI freezing | Chunk processing, Web Workers, progress indicators |
| PDF Generation | Long wait times, memory errors | Optimize chart rendering, reduce image quality, paginate |
| Dashboard Queries | Slow load times for large date ranges | Add indexes, implement pagination, cache aggregations |
| Storage Bandwidth | Slow downloads for large files | Use CDN, compress files, implement progressive loading |
4.8 System Architecture Diagram
Integrated System Architecture:
USERS
│
│ (HTTPS)
▼
┌─────────────────────────────────────────────────────────────────┐
│ CLOUDFLARE GLOBAL EDGE │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Cloudflare CDN │ │
│ │ • DDoS Protection • WAF Rules • TLS Termination │ │
│ │ • Static Asset Caching • Brotli Compression │ │
│ └────────────────────────────────────────────────────────────┘ │
└─────────────────────────────┬───────────────────────────────────┘
│
┌────────────┴────────────┐
│ │
│ (Static) │ (API)
▼ ▼
┌────────────────────────────┐ ┌───────────────────────────────┐
│ REACT 19 SPA (Frontend) │ │ SUPABASE PLATFORM │
│ ┌──────────────────────┐ │ │ ┌─────────────────────────┐ │
│ │ Feature Modules: │ │ │ │ PostgreSQL Database │ │
│ │ • Upload │ │ │ │ • Tables (15+) │ │
│ │ • Reports │◄─┼──┼──┤ • RLS Policies │ │
│ │ • Dust Levels │ │ │ │ • Triggers & Functions │ │
│ │ • Weekly Reports │ │ │ │ • Indexes │ │
│ │ • Flow Meter │ │ │ └─────────────────────────┘ │
│ │ • Climate │ │ │ ┌─────────────────────────┐ │
│ │ • Mine Sites │ │ │ │ Supabase Auth │ │
│ │ • Dashboard │ │ │ │ • JWT Generation │ │
│ └──────────────────────┘ │ │ │ • Email Verification │ │
│ ┌──────────────────────┐ │ │ │ • Session Management │ │
│ │ UI Component Library │ │ │ └─────────────────────────┘ │
│ │ • shadcn/ui (Radix) │ │ │ ┌─────────────────────────┐ │
│ │ • Tailwind CSS │ │ │ │ Supabase Storage │ │
│ └──────────────────────┘ │ │ │ • csv-uploads bucket │ │
│ ┌──────────────────────┐ │ │ │ • pdf-reports bucket │ │
│ │ State Management │ │ │ │ • Signed URLs │ │
│ │ • Zustand │ │ │ └─────────────────────────┘ │
│ │ • React Hook Form │ │ │ ┌─────────────────────────┐ │
│ └──────────────────────┘ │ │ │ Edge Functions (Deno) │ │
│ ┌──────────────────────┐ │ │ │ • Chart Descriptions │ │
│ │ Visualization │ │ │ │ • Dust Scraper Trigger │ │
│ │ • Recharts │ │ │ │ • Flow Meter Trigger │ │
│ │ • Nivo │ │ │ └─────────────────────────┘ │
│ └──────────────────────┘ │ └───────────────────────────────┘
│ ┌──────────────────────┐ │
│ │ PDF Generation │ │
│ │ • jsPDF │ │
│ │ • html2canvas │ │
│ └──────────────────────┘ │
└────────────────────────────┘
│
│ (HTTPS)
▼
┌─────────────────────────────────────────────────────────────────┐
│ EXTERNAL SERVICES │
│ ┌─────────────┐ ┌─────────────┐ ┌──────────────────────┐ │
│ │ Google │ │ Dustac │ │ Bureau of │ │
│ │ Gemini │ │ Scraper │ │ Meteorology (BOM) │ │
│ │ AI │ │ API │ │ Weather API │ │
│ └─────────────┘ └─────────────┘ └──────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
DEPLOYMENT TOPOLOGY:
┌─────────────────────────────────────────────────────────────────┐
│ Cloudflare Pages: Production (main branch) │
│ Staging (staging branch) │
│ Preview (feature branches) │
├─────────────────────────────────────────────────────────────────┤
│ Supabase: Project: hwogexspejrzvadfjmce │
│ Region: us-east-1 │
│ Tier: Pro ($25/month) │
├─────────────────────────────────────────────────────────────────┤
│ GitHub: Repository with GitHub Actions CI/CD │
│ Auto-deploy to Cloudflare Pages │
└─────────────────────────────────────────────────────────────────┘
This architecture diagram shows all major components, their interactions, and the deployment topology. The system is designed for scalability, security, and performance while keeping operational complexity low through managed services.
5. System Design
5.1 Business Requirements
The system design directly addresses the following key business requirements:
Primary Business Requirements:
Environmental Compliance Management
- Requirement: Enable mining operations to collect, store, and report on dust monitoring data to meet environmental regulations
- Design Response: PostgreSQL database stores all measurements with 7-year retention; PDF reports generated for regulatory submission
- Success Criteria: 100% of uploaded measurements retained and queryable; PDF reports meet regulatory formatting standards
Multi-Site Operations Support
- Requirement: Support multiple mining sites with independent data collection and reporting
- Design Response: Site-based data organization; users can manage multiple sites with isolated data streams
- Success Criteria: Users can upload and view data for 10+ distinct sites without performance degradation
Real-Time Operational Insights
- Requirement: Provide dashboard analytics and visualizations for operational decision-making
- Design Response: Interactive dashboards with Recharts/Nivo; real-time dust level monitoring via scraper integration
- Success Criteria: Dashboard loads within 1 second for 90-day data range; charts update within 500ms
Automated Reporting
- Requirement: Generate professional PDF reports automatically from uploaded CSV data
- Design Response: Client-side PDF generation with jsPDF; customizable templates; AI-generated chart descriptions
- Success Criteria: PDF generation completes within 60 seconds for 30-page reports
Water Usage Tracking
- Requirement: Monitor water consumption for dust suppression activities
- Design Response: Flow meter data integration; automated collection via scraper API; usage analytics
- Success Criteria: Flow meter data synchronized hourly; usage trends visualized in dashboard
Weather Correlation
- Requirement: Correlate environmental measurements with weather conditions
- Design Response: BOM weather data integration; climate module with combined visualizations
- Success Criteria: Weather data available for all measurement dates; correlation charts display both datasets
Audit and Compliance
- Requirement: Maintain complete audit trail for regulatory compliance
- Design Response: Comprehensive activity logging via database triggers; immutable audit logs
- Success Criteria: All data modifications logged with user_id, timestamp, and action details
Secondary Business Requirements:
- User Access Control: Row-Level Security ensures users only access their own data; admin role for oversight
- Weekly Field Reporting: Structured weekly reports capture operational activities in standardized format
- Data Export: Users can export all data (CSV, PDF) for external analysis or archival
Non-Functional Requirements Driving Design:
- Performance: Sub-second dashboard load times, <30s CSV processing for 10K rows
- Scalability: Support 50 concurrent users; architecture supports 10x growth
- Security: Zero-trust model with JWT authentication, RLS policies, encrypted data
- Reliability: 99.9% uptime target; automated backups; point-in-time recovery
- Usability: Intuitive UI requiring minimal training; accessible to users with varying technical skills
5.2 Database Design
The database follows a normalized relational model (third normal form) optimized for transactional integrity and query performance.
5.2.1 Data Objects and Resultant Data Structures
Core Entity Definitions:
1. profiles (User Profiles):
- Purpose: Extended user information beyond Supabase Auth
- Key Attributes: id (UUID, PK), full_name, organization, role (user/admin)
- Relationships: 1:many with uploads, weekly_reports, weather_data
- Business Rules:
- Auto-created when user signs up (trigger)
- Role defaults to 'user'; only admins can elevate to 'admin'
- Profile deletion cascades from auth.users deletion
2. uploads (Upload Batches):
- Purpose: Track CSV upload operations with batch-level metadata
- Key Attributes: id (UUID, PK), user_id (FK), upload_type, period_start/end, status, file counts
- Relationships: 1:many with csv_files; many:1 with profiles
- Business Rules:
- Status transitions: pending → validating → importing → generating_reports → completed/failed
- Cannot delete upload with status 'importing' or 'generating_reports'
- Soft delete retains data for 30 days
3. csv_files (Individual CSV Files):
- Purpose: Metadata for each CSV file in an upload batch
- Key Attributes: id (UUID, PK), upload_id (FK), site_name, period_start/end, record_count, devices (JSONB)
- Relationships: many:1 with uploads; 1:many with measurements; 1:1 with reports
- Business Rules:
- Unique constraint on (upload_id, filename)
- Period must be within upload batch period
- Device list extracted during CSV parsing
4. measurements (Environmental Readings):
- Purpose: Individual dust measurement records from DustRanger devices
- Key Attributes: id (UUID, PK), time (timestamptz), pm10/pm25/pm40 (numeric), temperature, humidity, site
- Relationships: many:1 with csv_files and uploads
- Business Rules:
- PM values must be non-negative
- Temperature range: -50°C to 60°C
- Humidity range: 0% to 100%
- Time stored in UTC; converted to local for display
5. reports (Generated PDF Reports):
- Purpose: Metadata for generated PDF reports
- Key Attributes: id (UUID, PK), upload_id/csv_file_id (FKs), pdf_storage_path, generation_timestamp
- Relationships: many:1 with uploads and csv_files
- Business Rules:
- Reports are publicly viewable (all authenticated users)
- Version increments on regeneration
- PDF file stored in Supabase Storage
6. weekly_reports (Structured Weekly Reports):
- Purpose: Structured field reports with 8 sections (JSONB)
- Key Attributes: id (UUID, PK), user_id (FK), report_period_end, status (draft/completed), 8 JSONB sections
- Relationships: many:1 with profiles
- Business Rules:
- Auto-save drafts every 30 seconds
- Cannot submit incomplete sections
- One report per user per week
7. weather_data (Weather Observations):
- Purpose: Daily weather data from Bureau of Meteorology
- Key Attributes: id (UUID, PK), station_id (FK), date, temperature/rainfall/wind measurements
- Relationships: many:1 with weather_stations and profiles
- Business Rules:
- Unique constraint on (user_id, station_id, date)
- Measurements at 9am and 3pm
- Auto-fetched from BOM API
8. activity_logs (Audit Trail):
- Purpose: Immutable audit log for compliance
- Key Attributes: id (UUID, PK), user_id (FK), action, resource_type/id, details (JSONB), created_at
- Relationships: many:1 with profiles
- Business Rules:
- No updates or deletes (append-only)
- Auto-generated by triggers for report CRUD
- Manually logged for read/download operations
Data Type Specifications:
| PostgreSQL Type | Usage | Precision | Rationale |
|---|---|---|---|
| UUID | All primary keys | 128-bit | Globally unique, non-sequential (security), standard in distributed systems |
| TIMESTAMPTZ | All timestamps | Microsecond | Timezone-aware, stores UTC internally, converts to local for display |
| NUMERIC(p,s) | Measurements | Variable | Arbitrary precision, no floating-point rounding errors (critical for compliance) |
| JSONB | Semi-structured | N/A | Binary JSON, indexable with GIN, flexible schema for evolving data structures |
| TEXT | Strings | Unlimited | Variable length, no artificial 255-char limits, efficient storage |
| INTEGER | Counts, IDs | 32-bit signed | Range: -2B to +2B, sufficient for counts and small IDs |
| BIGINT | File sizes | 64-bit signed | Range: -9 quintillion to +9 quintillion, handles large file sizes |
Relationships and Cardinality:
profiles (1) ──< (many) uploads
│ │
│ └──< (many) csv_files
│ │
│ ├──< (many) measurements
│ └──< (1) reports
│
├──< (many) weekly_reports
├──< (many) weather_data
├──< (many) flow_meter_data (future)
└──< (many) activity_logs
weather_stations (1) ──< (many) weather_data
mine_sites (1) ──< (many) devices (future)
Constraints and Validation:
- Foreign Key Constraints: All FKs with ON DELETE CASCADE (parent deletion removes children) or ON DELETE SET NULL (preserve orphans)
- Check Constraints: Enum-like fields (status, role, upload_type) use CHECK (field IN (...))
- Unique Constraints: Natural keys (upload_id + filename, user_id + station_id + date)
- Not Null Constraints: Required business fields enforced at database level
- Default Values: Timestamps default to NOW(), UUIDs auto-generated, status fields default to initial state
5.2.2 File and Database Structures
5.2.2.1 Database Management System Files
PostgreSQL Configuration (Supabase Managed):
| Parameter | Value | Purpose | Impact |
|---|---|---|---|
| max_connections | 60 (PgBouncer pool) | Concurrent connection limit | Prevents connection exhaustion; scales to 50 concurrent users |
| shared_buffers | ~2GB (25% RAM) | Database cache size | Frequently accessed data cached in memory |
| effective_cache_size | ~6GB (75% RAM) | Query planner assumption | Optimizer prefers index scans when data likely cached |
| work_mem | 32MB | Per-operation memory | Sufficient for sorting 10K-row result sets |
| maintenance_work_mem | 512MB | VACUUM, INDEX creation | Faster index builds and maintenance operations |
| checkpoint_timeout | 15min | WAL checkpoint frequency | Balance durability vs. write performance |
| wal_buffers | 16MB | Write-ahead log buffer | Reduces I/O for transaction logging |
| random_page_cost | 1.1 | SSD cost factor | Encourages index usage on SSD storage |
Complete Table Schemas (Summary):
See supabase/migrations/20250105000001_initial_schema.sql for full DDL. Key tables:
| Table | Columns | Indexes | RLS Policies | Purpose |
|---|---|---|---|---|
| profiles | 5 | 1 | 3 (SELECT own/all, UPDATE own) | User management |
| uploads | 15 | 5 | 4 (CRUD own) + 1 (admin all) | Upload tracking |
| csv_files | 17 | 4 | 2 (SELECT/UPDATE own) + 1 (admin) | File metadata |
| measurements | 22 | 7 | 2 (SELECT own/all) | Environmental data |
| reports | 12 | 5 | 3 (SELECT all, DELETE/UPDATE own) + 1 (admin) | Report metadata |
| weekly_reports | 12 | 3 | 4 (CRUD own) | Field reports |
| weather_data | 23 | 4 | 0 (public read via station policy) | Weather observations |
| activity_logs | 9 | 4 | 2 (SELECT own/all) | Audit trail |
Stored Procedures and Functions:
-- Function: handle_new_user()
-- Trigger: AFTER INSERT ON auth.users
-- Purpose: Auto-create profile when user signs up
CREATE OR REPLACE FUNCTION public.handle_new_user()
RETURNS TRIGGER AS $$
BEGIN
INSERT INTO public.profiles (id, full_name, role)
VALUES (
NEW.id,
COALESCE(NEW.raw_user_meta_data->>'full_name', 'User'),
'user'
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
-- Function: log_report_activity()
-- Trigger: AFTER INSERT/UPDATE/DELETE ON reports
-- Purpose: Automatically log report CRUD operations
CREATE OR REPLACE FUNCTION log_report_activity()
RETURNS TRIGGER AS $$
DECLARE
action_name TEXT;
report_name TEXT;
BEGIN
CASE TG_OP
WHEN 'INSERT' THEN action_name := 'create';
WHEN 'UPDATE' THEN action_name := 'update';
WHEN 'DELETE' THEN action_name := 'delete';
END CASE;
report_name := COALESCE(NEW.site_name, OLD.site_name) || ' Report ' ||
TO_CHAR(COALESCE(NEW.generation_timestamp, OLD.generation_timestamp), 'YYYY-MM-DD');
INSERT INTO activity_logs (user_id, action, resource_type, resource_id, resource_name, details)
VALUES (
(SELECT user_id FROM uploads WHERE id = COALESCE(NEW.upload_id, OLD.upload_id)),
action_name,
'report',
COALESCE(NEW.id, OLD.id),
report_name,
jsonb_build_object('site', COALESCE(NEW.site_name, OLD.site_name), 'version', COALESCE(NEW.version, OLD.version))
);
RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
-- RPC Function: log_report_read()
-- Called from: Frontend when user views/downloads report
-- Purpose: Manually log read operations (not captured by triggers)
CREATE OR REPLACE FUNCTION log_report_read(
p_report_id UUID,
p_action TEXT -- 'view' or 'download'
)
RETURNS VOID AS $$
DECLARE
v_report reports%ROWTYPE;
v_user_id UUID;
v_report_name TEXT;
BEGIN
SELECT * INTO v_report FROM reports WHERE id = p_report_id;
IF NOT FOUND THEN
RAISE EXCEPTION 'Report not found';
END IF;
v_user_id := auth.uid();
v_report_name := v_report.site_name || ' Report ' || TO_CHAR(v_report.generation_timestamp, 'YYYY-MM-DD');
INSERT INTO activity_logs (user_id, action, resource_type, resource_id, resource_name, details)
VALUES (
v_user_id,
p_action,
'report',
p_report_id,
v_report_name,
jsonb_build_object('site', v_report.site_name, 'file_size_bytes', v_report.file_size_bytes)
);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
Database Triggers:
-- Auto-create profile on user signup
CREATE TRIGGER on_auth_user_created
AFTER INSERT ON auth.users
FOR EACH ROW EXECUTE FUNCTION public.handle_new_user();
-- Auto-log report CRUD operations
CREATE TRIGGER report_activity_logger
AFTER INSERT OR UPDATE OR DELETE ON reports
FOR EACH ROW EXECUTE FUNCTION log_report_activity();
-- Auto-update updated_at timestamp
CREATE TRIGGER weekly_reports_updated_at
BEFORE UPDATE ON public.weekly_reports
FOR EACH ROW EXECUTE FUNCTION update_weekly_reports_updated_at();
-- (Similar updated_at triggers for weather_data, weather_stations)
Indexing Strategy (Performance-Critical Indexes):
-- Composite indexes for common query patterns
CREATE INDEX idx_measurements_site_time ON measurements(site, time);
-- Supports: WHERE site = 'X' ORDER BY time (dashboard queries)
CREATE INDEX idx_measurements_upload_time ON measurements(upload_id, time);
-- Supports: WHERE upload_id = 'X' ORDER BY time (report generation)
CREATE INDEX idx_uploads_user_status ON uploads(user_id, status);
-- Supports: WHERE user_id = 'X' AND status IN ('pending', 'importing')
-- Partial indexes for filtered queries
CREATE INDEX idx_uploads_active ON uploads(user_id, status)
WHERE status IN ('pending', 'importing', 'generating_reports');
-- Smaller index for active uploads only
-- GIN indexes for JSONB columns
CREATE INDEX idx_uploads_files_summary ON uploads USING GIN(files_summary);
-- Supports: WHERE files_summary @> '{"status": "completed"}'
-- B-tree indexes for timestamp ordering
CREATE INDEX idx_activity_logs_created_desc ON activity_logs(created_at DESC);
-- Optimizes: ORDER BY created_at DESC LIMIT 100 (recent activity)
5.2.2.2 Non-Database Management System Files
Supabase Storage Bucket Structure:
csv-uploads/
{user_id}/
daily/
2025-01-15_{upload_id}_boddington.csv
2025-01-16_{upload_id}_weipa.csv
weekly/
2025-01-W03_{upload_id}_combined.csv
monthly/
2025-01_{upload_id}_all_sites.csv
pdf-reports/
{user_id}/
Boddington/
2025-01-01_2025-01-31_daily_report.pdf
2025-02-01_2025-02-28_daily_report.pdf
Weipa/
      2025-01_monthly_report.pdf
File Type Specifications:
| File Type | Purpose | Location | Access Method | Retention | Max Size | Backup |
|---|---|---|---|---|---|---|
| CSV Upload Files | Raw environmental data | csv-uploads/{user_id}/{type}/ | Signed URLs (1-hour expiry) | 2 years | 50MB | S3 multi-AZ replication |
| PDF Reports | Generated compliance reports | pdf-reports/{user_id}/{site}/ | Signed URLs (24-hour expiry) | Permanent (user-managed) | 10MB | S3 multi-AZ replication |
| Configuration | Vite, TypeScript, ESLint | Git repository root | File system | N/A | <100KB | Git version control |
| Environment Variables | API keys, URLs | .env.local (gitignored) | Process environment | N/A | <10KB | Not backed up (secrets) |
| Database Migrations | Schema definitions | supabase/migrations/ | Supabase CLI | N/A | <1MB per file | Git version control |
| Application Logs | Runtime errors, performance | Supabase platform | Dashboard API | 90 days | N/A | Platform-managed |
CSV File Structure Detail:
DustRanger CSV Format:
Header Row: loc,time,speed,massconcentration1p0,massconcentration2p5,...
Sample Data Row:
A,2025-01-15 10:30:00,5.2,12.3,45.6,78.9,123.4,567.8,234.5,89.2,34.1,12.7,1.8,28.5,35.2,65.3,5
Column Mapping:
loc → measurements.loc (TEXT)
time → measurements.time (TIMESTAMPTZ, converted to UTC)
speed → measurements.speed (NUMERIC)
massconcentration1p0 → measurements.massconcentration1p0 (NUMERIC)
massconcentration2p5 → measurements.massconcentration2p5 (NUMERIC)
massconcentration4p0 → measurements.massconcentration4p0 (NUMERIC)
massconcentration10p0 → measurements.massconcentration10p0 (NUMERIC)
... (additional columns)
Validation Rules:
- Header row required (exact column names)
- UTF-8 encoding (BOM stripped if present)
- Comma delimiter (RFC 4180 compliant)
- Date/time parsing: flexible formats (ISO 8601, DD/MM/YYYY, etc.)
- Numeric validation: non-negative for PM values, range checks for temp/humidity
- Missing values: NULL for optional columns, error for required columns
Estimated File Characteristics:
- Row count: 1,000 - 50,000 (typical daily file: 10,000 rows)
- Row size: ~200-300 bytes (20 columns × 10-15 bytes avg)
- File size: 2-15 MB (10K rows × 250 bytes ≈ 2.5MB)
- Compression: Not applied (small files, processing overhead not justified)
PDF Report Structure Detail:
PDF/A Format Specification:
- Standard: PDF/A-1b (ISO 19005-1:2005) for long-term archival
- Page Size: A4 (210mm × 297mm)
- Orientation: Portrait (switchable to Landscape for wide charts)
- Color Space: sRGB IEC61966-2.1
- Fonts: Embedded subset (Helvetica, Courier for tables)
- Compression: JPEG for charts (quality 85), Flate for text
- Metadata: XMP metadata with title, author, creation date, keywords
Typical Report Structure (8-12 pages):
Page 1: Cover
- Site name (36pt bold)
- Report period (24pt)
- Generation date and time
- Company logo (if configured)
- Regulatory compliance statement
Page 2: Executive Summary
- Total measurements: 10,234
- Average PM2.5: 45.6 μg/m³
- Average PM10: 87.3 μg/m³
- Max PM2.5: 234.5 μg/m³ (timestamp)
- Exceedance events: 12 (if thresholds defined)
- Weather summary (if weather data available)
Pages 3-8: Visualizations
- PM2.5 Time Series (line chart, full page)
- PM10 Time Series (line chart, full page)
- Temperature & Humidity (dual-axis line chart)
- Dust Level Heatmap (day × hour matrix)
- Daily Averages (bar chart)
- Hourly Distribution (stacked area chart)
Pages 9-10: Statistical Tables
- Daily summary (date, count, avg, max, min)
- Hourly summary (hour, count, avg, max)
- Percentile analysis (p50, p75, p90, p95, p99)
Page 11: Compliance & Appendices
- Regulatory standards referenced
- Compliance status (pass/fail with criteria)
- Data quality notes
- Signature block (digital signature future enhancement)
Page 12: Footer on All Pages
- Page number (center)
- Generation timestamp (right)
- "Generated by Dustac Dashboard" (left)
Estimated File Characteristics:
- Base PDF structure: ~100KB
- Chart images (6 charts × 250KB avg): ~1.5MB
- Tables and text: ~100KB
- Total typical size: 1.7MB
- Maximum size (30-page report with high-res charts): 8-10MB
Backup and Recovery Specifications:
| Backup Type | Frequency | Retention | Recovery Time (RTO) | Recovery Point (RPO) | Tested |
|---|---|---|---|---|---|
| Database Snapshot | Daily (3am UTC) | 7 days | <1 hour (full restore) | <24 hours (last snapshot) | Monthly |
| Point-in-Time Recovery | Continuous (WAL) | 7 days | <2 hours (restore to timestamp) | <5 minutes (last WAL) | Quarterly |
| Storage Files (S3) | Continuous (replication) | Permanent (unless deleted) | <30 min (S3 eleven-nines, 99.999999999%, durability) | Near-zero (multi-AZ) | N/A (AWS SLA) |
| User Data Export | On-demand | User-managed | Immediate (ZIP download) | Current state | Per export |
| Configuration/Code | Git commits | Permanent (Git history) | Minutes (git checkout) | Last commit | Per deploy |
5.3 Data Conversion
Current State: Greenfield deployment with no legacy data migration required.
Future Data Conversion Scenarios:
Scenario 1: CSV Format Evolution (High Probability)
DustRanger devices may update CSV format with new measurement types or renamed columns.
| Aspect | Specification |
|---|---|
| Trigger | Device firmware update adds new particle size measurements (PM0.3, PM0.5) |
| Source Format | Existing CSV files with 20 columns |
| Target Format | New schema with 22 columns (2 additional PM measurements) |
| Detection | CSV parser detects missing/extra columns; format version inferred from column set |
| Transformation | Version-aware parser maps old column names to database; new columns set to NULL for old files |
| Validation | Row count matches; spot-check converted values against originals; statistical comparison (averages, max/min) |
| Testing | Import 10 sample files (5 old format, 5 new format); verify both formats load correctly |
| Rollback | Keep original CSV files in storage; re-import with corrected parser if issues detected |
Implementation:
// CSV Parser with format version detection
function detectFormatVersion(headers: string[]): 'v1' | 'v2' {
if (headers.includes('massconcentration0p3') && headers.includes('massconcentration0p5')) {
return 'v2'; // New format with additional PM measurements
}
return 'v1'; // Original format
}
function parseCSV(file: File, formatVersion: 'v1' | 'v2') {
const columnMapping = formatVersion === 'v1'
? V1_COLUMN_MAPPING // Maps old columns to database
: V2_COLUMN_MAPPING; // Maps new columns to database
// Parse with version-specific mapping
// V1 files: pm0p3 and pm0p5 set to NULL
// V2 files: all columns populated
}
Scenario 2: Measurement Unit Conversion (Medium Probability)
Regulatory requirements change from μg/m³ to mg/m³ for certain measurements.
| Aspect | Specification |
|---|---|
| Trigger | Regulatory authority updates standards to mg/m³ for PM10 and above |
| Source | measurements table with PM values in μg/m³ |
| Target | Same table with PM10+ columns in mg/m³ (divide by 1000) |
| Transformation | SQL UPDATE statement with division: pm10 = pm10 / 1000 |
| Affected Data | Historical measurements (future measurements imported in new units) |
| Validation | Statistical checks: ensure averages shift by factor of 1000; spot-check conversions; verify regulatory thresholds still pass |
| Rollback | Database backup before conversion; multiply by 1000 to revert if issues found |
SQL Script:
-- Backup: Performed automatically via Supabase daily snapshot
-- Conversion script (run in transaction)
BEGIN;
-- Add column to track conversion
ALTER TABLE measurements ADD COLUMN unit_conversion_applied BOOLEAN DEFAULT FALSE;
-- Update PM10 and above to mg/m³
UPDATE measurements
SET
massconcentration10p0 = massconcentration10p0 / 1000,
unit_conversion_applied = TRUE
WHERE time < '2025-06-01' -- Only historical data
AND massconcentration10p0 IS NOT NULL;
-- Verify conversion
SELECT
COUNT(*) as total_converted,
AVG(massconcentration10p0) as new_avg,
AVG(massconcentration10p0 * 1000) as old_avg_check
FROM measurements
WHERE unit_conversion_applied = TRUE;
-- Expect: old_avg_check ≈ historical average
COMMIT; -- Only commit if validation passes
Scenario 3: User Data Import from External Systems (Future Enhancement)
Users migrating from competitor dashboards or Excel-based tracking.
| Aspect | Specification |
|---|---|
| Source Formats | Excel (.xlsx), CSV, competitor database exports |
| Process | 1. User uploads file via import wizard 2. System auto-detects format (file extension + content analysis) 3. Column mapping UI presented (auto-match by name similarity) 4. User confirms mappings and validation rules 5. Dry-run import with validation report 6. User approves; data imported with user_id tagging |
| Data Quality | Duplicate detection (timestamp + site), date range validation, measurement range checks |
| Error Handling | Partial import supported; failed rows logged to error CSV; user can review and re-import |
| Validation | Compare row counts, statistical summaries (avg, max, min), spot-check random samples |
5.4 User Machine-Readable Interface
5.4.1 Inputs
1. CSV File Upload:
| Attribute | Specification |
|---|---|
| Format | CSV (RFC 4180), UTF-8 encoding with optional BOM |
| Required Columns | time, massconcentration2p5, massconcentration10p0, site |
| Optional Columns | loc, speed, temperature, humidity, numberconcentrations, typicalparticlesize, roadbumpscount |
| Header Row | Required (first row, exact column names case-insensitive) |
| Date/Time Format | Flexible: ISO 8601 (YYYY-MM-DD HH:MM:SS), Australian (DD/MM/YYYY HH:MM:SS), Unix timestamp |
| Decimal Separator | Period (.) only |
| Row Delimiter | CRLF (\r\n) or LF (\n) |
| BOM Handling | UTF-8 BOM (\uFEFF) automatically stripped during parsing |
| Maximum File Size | 50MB (client-side check), enforced by Supabase Storage (100MB hard limit) |
| Maximum Rows | 100,000 rows per file (performance limit for browser parsing) |
| Validation Rules | PM values ≥ 0; temperature: -50°C to 60°C; humidity: 0% to 100%; dates within last 10 years, not future; site name: non-empty, max 100 chars |
Validation Process:
Client-Side (Immediate Feedback):
1. Check file size < 50MB
2. Parse first 100 rows with PapaParse
3. Detect column headers (case-insensitive match)
4. Verify required columns present
5. Check data types (date parsing, numeric conversion)
6. Display preview table with first 10 rows
7. Show validation warnings (if any)
Server-Side (During Import):
1. Re-validate file size and format
2. Parse entire file
3. For each row:
a. Validate non-null required fields
b. Parse and validate date/time
c. Validate numeric ranges
d. Check data type compatibility
4. Aggregate validation errors
5. Reject file if error rate > 5% (configurable)
6. Log validation errors to processing_log (JSONB)
Error Handling:
| Error Type | Detection | User Feedback | Resolution |
|---|---|---|---|
| Invalid Format | File extension ≠ .csv or not parseable | "File must be valid CSV format" | Upload correct file |
| Missing Columns | Required columns not in header | "Missing columns: time, pm25" | Add columns or use correct file |
| Parse Errors | Row cannot be parsed (unquoted commas, etc.) | "Row 234: Unable to parse" | Fix CSV formatting |
| Value Out of Range | PM < 0 or temp > 60°C | "Row 456: PM2.5 = -5 (must be ≥ 0)" | Correct data in source |
| Date Format | Cannot parse date | "Row 789: Invalid date '13/45/2025'" | Use supported format |
| File Too Large | Size > 50MB | "File exceeds 50MB limit" | Split file or compress |
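A minimal sketch of the client-side preview step described above (PapaParse's preview option limits parsing to the first N rows; the required column names follow the input specification table):
// Client-side quick validation: parse only the first 100 rows and confirm
// the required columns are present before uploading anything.
import Papa from 'papaparse';

const REQUIRED = ['time', 'massconcentration2p5', 'massconcentration10p0', 'site'];

export function previewValidate(file: File): Promise<string[]> {
  return new Promise((resolve) => {
    Papa.parse<Record<string, string>>(file, {
      header: true,
      preview: 100,          // parse only the first 100 data rows
      skipEmptyLines: true,
      complete: ({ meta }) => {
        const headers = (meta.fields ?? []).map((h) => h.toLowerCase());
        const missing = REQUIRED.filter((c) => !headers.includes(c));
        resolve(missing);    // empty array means all required columns present
      },
    });
  });
}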
2. Web Form Inputs (Weekly Reports, Settings):
| Input Type | Validation | Error Handling |
|---|---|---|
| Text Fields | Zod schema: z.string().min(1).max(255); required vs. optional marked with asterisk | Inline error below field; red border on invalid; submit button disabled until valid |
| Date Pickers | Flatpickr library; date range validation; future dates blocked (for reports) | Calendar prevents invalid dates; manual entry validated on blur; error message: "Date must be in past" |
| Number Inputs | z.number().min(0).max(1000000); decimal places limited (2 for measurements) | Input mask forces numeric; increment/decrement buttons; error shows acceptable range |
| Dropdown Selects | z.enum(['option1', 'option2']); required unless explicitly optional | Placeholder "Select..."; error if unchanged on submit; searchable for long lists |
| JSONB Data | Nested Zod schemas for weekly report sections | Field-level validation; auto-save draft every 30s; visual indicators for incomplete sections |
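A sketch of the form-validation wiring above; the @hookform/resolvers package is an assumed (commonly used) bridge between React Hook Form and Zod, and the field names are illustrative:
// React Hook Form + Zod integration sketch. Field names are illustrative;
// @hookform/resolvers is an assumption, not a confirmed project dependency.
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import { z } from 'zod';

const SettingsSchema = z.object({
  fullName: z.string().min(1).max(255),
  organization: z.string().max(255).optional(),
});
type SettingsForm = z.infer<typeof SettingsSchema>;

export function useSettingsForm() {
  return useForm<SettingsForm>({
    resolver: zodResolver(SettingsSchema),
    mode: 'onBlur', // validate when a field loses focus
  });
}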
3. External API Inputs:
| API | Input Format | Validation | Rate Limiting |
|---|---|---|---|
| Dustac Scraper | POST /api/scraper/dustlevels with { siteName: string, startDate: string, endDate: string } | Site name in allowed list; date range < 90 days; dates in YYYY-MM-DD format | 10 req/min per user |
| BOM Weather | GET https://bom.gov.au/climate/data/...?station=027045&date=YYYYMMDD | Station ID from valid list; date format YYYYMMDD; date within last 5 years | 60 req/min (BOM limit) |
| Google Gemini | POST /functions/v1/generate-chart-descriptions with { chartType, chartData, siteName, dateRange } | Chart type from enum; chart data is valid JSON; token count < 30K | 60 req/min (Gemini limit) |
5.4.2 Outputs
1. PDF Reports:
| Attribute | Specification |
|---|---|
| Format | PDF/A-1b (ISO 19005-1) for archival compliance |
| File Naming | {site_name}_{start_date}_{end_date}_report.pdf; example: Boddington_2025-01-01_2025-01-31_report.pdf |
| Typical Size | 1-3MB (6-8 charts) |
| Maximum Size | 10MB (30-page report with high-res charts) |
| Resolution | 150 DPI for charts (balance quality vs. size) |
| Color Profile | sRGB IEC61966-2.1 |
| Metadata | Title: "{Site} Environmental Report {Date Range}"; Author: "Dustac Dashboard"; Keywords: site, compliance, dust, monitoring; Creation Date: generation timestamp (ISO 8601) |
| Accessibility | Searchable text (no image-only PDFs); logical reading order; tagged PDF (future enhancement) |
| Delivery | Browser download (Content-Disposition: attachment); stored in Supabase Storage (permanent); signed URL (24-hour expiry) for sharing |
2. CSV Data Exports:
| Attribute | Specification |
|---|---|
| Format | CSV (RFC 4180), UTF-8 with BOM for Excel compatibility |
| File Naming | {site}_{start_date}_{end_date}_export.csv |
| Columns | All measurement columns + metadata (upload_id, csv_file_id, site, time) |
| Header Row | Yes (descriptive column names) |
| Date Format | ISO 8601 (YYYY-MM-DD HH:MM:SS) in user's local timezone |
| Number Format | Maximum 2 decimal places, period decimal separator |
| Row Limit | Unlimited (streaming for large datasets via client-side generation) |
| Delivery | Client-side generation (FileSaver.js), no server roundtrip for performance |
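A minimal sketch of this export path, assuming PapaParse for serialization and FileSaver.js for delivery (both named above); the row shape is illustrative.

```typescript
// Sketch of client-side CSV export: BOM + RFC 4180 output, no server roundtrip.
import Papa from "papaparse";
import { saveAs } from "file-saver";

interface ExportRow {
  upload_id: string;
  csv_file_id: string;
  site: string;
  time: string;   // ISO 8601, already converted to the user's local timezone
  pm25: number;
  pm10: number;
}

function exportCsv(rows: ExportRow[], site: string, start: string, end: string): void {
  const csv = Papa.unparse(rows);           // header row derived from object keys
  const blob = new Blob(["\uFEFF", csv], {  // UTF-8 BOM for Excel compatibility
    type: "text/csv;charset=utf-8",
  });
  saveAs(blob, `${site}_${start}_${end}_export.csv`);
}
```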
3. JSON API Responses:
// Success Response
{
"data": [...] | {...}, // Array or single object
"count": number, // Total count (for pagination)
"status": "success",
"timestamp": "2025-01-15T10:30:00.000Z"
}
// Error Response
{
"error": {
"code": "VALIDATION_ERROR", // ENUM: VALIDATION_ERROR, AUTH_ERROR, NOT_FOUND, SERVER_ERROR
"message": "Invalid date range", // User-friendly
"details": { // Optional debugging info
"field": "start_date",
"value": "2025-13-01",
"reason": "Month must be 1-12"
}
},
"status": "error",
"timestamp": "2025-01-15T10:30:00.000Z"
}
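For client code, the same envelope can be captured as a TypeScript discriminated union (a convenience sketch; names mirror the JSON above):

```typescript
// Sketch of the response envelope as TypeScript types.
type ApiError = {
  code: "VALIDATION_ERROR" | "AUTH_ERROR" | "NOT_FOUND" | "SERVER_ERROR";
  message: string;                    // user-friendly
  details?: Record<string, unknown>;  // optional debugging info
};

type ApiResponse<T> =
  | { status: "success"; data: T | T[]; count?: number; timestamp: string }
  | { status: "error"; error: ApiError; timestamp: string };
```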
// HTTP Status Codes
200 OK - Successful GET/POST/PUT
201 Created - Resource created (POST)
204 No Content - Successful DELETE
400 Bad Request - Validation error
401 Unauthorized - Missing/invalid authentication
403 Forbidden - Insufficient permissions (RLS)
404 Not Found - Resource not found
409 Conflict - Duplicate resource
429 Too Many Requests - Rate limit exceeded
500 Internal Server Error - Server error
4. Real-Time Notifications (Toast Messages):
| Event | Message | Duration | Action Button |
|---|---|---|---|
| Upload Success | "CSV files uploaded successfully (3 files, 25,430 records)" | 5s | View Reports |
| Upload Error | "Upload failed: Invalid date format in row 234" | 10s | View Details |
| PDF Generated | "Report generated: Boddington Jan 2025 (2.3 MB)" | 5s | View PDF |
| Draft Saved | "Weekly report auto-saved" | 2s | - |
| Permission Denied | "Access denied: You don't have permission to view this resource" | 5s | - |
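These events map onto react-hot-toast (listed under interaction patterns in Section 5.5). A brief sketch with durations taken from the table; action buttons would need toast.custom and are omitted here:

```typescript
// Sketch of toast notifications with react-hot-toast (durations from the table).
import toast from "react-hot-toast";

toast.success("CSV files uploaded successfully (3 files, 25,430 records)", { duration: 5_000 });
toast.error("Upload failed: Invalid date format in row 234", { duration: 10_000 });
toast("Weekly report auto-saved", { duration: 2_000 });
```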
5.5 User Interface Design
UI Framework: React 19 with Tailwind CSS and shadcn/ui component library
Navigation Structure:
Main Navigation (Sidebar - Always Visible):
Dashboard (/)
Upload (/upload)
Reports (/reports)
Dust Levels (/dust-levels)
Weekly Reports (/weekly-reports)
Flow Meter (/flow-meter)
Climate (/climate)
Report Templates (/report-templates)
Settings (/settings)
User Profile Dropdown (Top-Right):
View Profile
Settings
Help & Documentation
Sign Out
Screen Layouts (ASCII mockups):
Dashboard Layout:
┌────────────────────────────────────────────────────────────────┐
│ Dustac Dashboard [Search] [👤 User Menu ▼] │
├─────────┬──────────────────────────────────────────────────────┤
│ │ Home > Dashboard │
│ ├───────────────────────────────────────────────────────┤
│ Dash │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐│
│ Upload │ │ Total │ │ Total │ │ Total │ │ Active ││
│ Reports │ │ Uploads │ │Measuremts│ │ Reports │ │ Sites ││
│ Dust │ │ 142 │ │ 1.2M │ │ 87 │ │ 5 ││
│ Weekly │ └──────────┘ └──────────┘ └──────────┘ └──────────┘│
│ Flow │ │
│ Climate │ Recent Activity View All > │
│ Templates│ ┌───────────────────────────────────────────────┐ │
│ Settings│ │Date │Action │Site │Status │ │
│ │ ├───────────────────────────────────────────────┤ │
│ │ │2025-01-15│CSV Upload │Boddington│✓ Complete │ │
│ │ │2025-01-14│Report Gen │Weipa │✓ Complete │ │
│ │ └───────────────────────────────────────────────┘ │
│ │ │
│ │ PM2.5 Trends (Last 30 Days) │
│ │ ┌──────────────────────────────────────────────┐ │
│ │ │ [Line Chart with multiple sites] │ │
│ │ │ │ │
│ │ └──────────────────────────────────────────────┘ │
└─────────┴──────────────────────────────────────────────────────┘
User Interaction Patterns:
| Pattern | Implementation | Use Case |
|---|---|---|
| Drag & Drop | react-dropzone | CSV file upload |
| Inline Editing | Click-to-edit with auto-save | Weekly report sections |
| Modal Dialogs | Radix UI Dialog | PDF preview, delete confirmations |
| Toast Notifications | react-hot-toast | Success/error messages (top-right) |
| Loading States | Skeleton loaders + spinners | Data fetching placeholders |
| Infinite Scroll | Intersection Observer | Activity feed (future) |
| Filtering | Dropdown + date range pickers | Reports by site/date |
| Sorting | Table header clicks | Sort by date, size, status |
| Tooltips | Radix UI Tooltip | Help text on icons/abbreviations |
5.5.1 Section 508 Compliance
WCAG 2.1 AA Compliance Strategy:
1. Perceivable:
- Text Alternatives: Alt text for all images; chart descriptions via AI
- Color Contrast: Minimum 4.5:1 for normal text, 3:1 for large text (Tailwind enforced)
- Resize Text: Responsive typography (rem units), scalable to 200% without horizontal scroll
- Reflow: Content reflows at 320px width, no 2D scrolling (except data tables)
2. Operable:
- Keyboard Accessible: All interactive elements in tab order, logical sequence
- Focus Visible: 2px outline on focus (Tailwind focus:ring-2)
- No Keyboard Trap: Modals close on Escape, return focus on dismiss
- Timing Adjustable: No time limits except session timeout (1 hour, extendable)
3. Understandable:
- Language: <html lang="en">, consistent terminology
- Predictable: Consistent navigation, no unexpected context changes
- Input Assistance: Labels for all fields, error messages describe problem and solution
4. Robust:
- Valid HTML: Semantic HTML5, no duplicate IDs
- ARIA: Landmarks (role="navigation"), labels for icon buttons, live regions for status updates
Testing Approach:
- Automated: Lighthouse (score > 90), axe DevTools, ESLint jsx-a11y
- Manual: Keyboard navigation, screen reader (NVDA/JAWS), color contrast tools
- User Testing: Quarterly review with assistive technology users
This completes Section 5 with comprehensive system design specifications.
6. Operational Scenarios
This section describes detailed operational scenarios that illustrate how users interact with the Dustac Environmental Monitoring Dashboard. Each scenario follows a structured format including actors, preconditions, main flow, postconditions, alternative flows, and exception handling.
6.1 Scenario: User Registration and Authentication
6.1.1 Scenario Overview
Description: A new user registers for access to the Dustac system and subsequently authenticates to access the dashboard.
Primary Actor: Environmental Officer / Mine Site Manager
Supporting Actors:
- Supabase Authentication Service
- Email Service Provider
- System Administrator (for role assignment)
Business Goal: Enable secure access to the environmental monitoring system while maintaining audit trails and enforcing role-based access control.
6.1.2 Preconditions
| Precondition | Description |
|---|---|
| PRE-01 | User has a valid corporate email address |
| PRE-02 | System is accessible via network (internet connectivity) |
| PRE-03 | Supabase authentication service is operational |
| PRE-04 | User's organization has active subscription |
| PRE-05 | User has received invitation or authorization to register |
6.1.3 Main Success Flow
Phase 1: Registration
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to registration page (https://app.dustac.com/register) | System displays registration form |
| 2 | User | Enters email address, full name, password (min 12 characters) | System validates input format in real-time |
| 3 | User | Selects organization from dropdown | System loads organization metadata |
| 4 | User | Clicks "Register" button | System validates all fields |
| 5 | System | Checks email uniqueness in auth.users table | No duplicate found |
| 6 | System | Creates user record in auth.users with status='unconfirmed' | UUID generated, encrypted password stored |
| 7 | System | Triggers handle_new_user() function | Profile record created in user_profiles table |
| 8 | System | Sends confirmation email via Supabase Auth | Email contains magic link (valid 1 hour) |
| 9 | System | Displays confirmation message | "Check your email to verify your account" |
| 10 | System | Logs activity in activity_logs table | Action: 'user_registration', includes IP address |
Phase 2: Email Verification
| Step | Actor | Action | System Response |
|---|---|---|---|
| 11 | User | Checks email inbox and clicks verification link | Browser opens verification URL |
| 12 | System | Validates token from URL parameter | Token is valid and not expired |
| 13 | System | Updates auth.users.email_confirmed_at timestamp | Status changes to 'confirmed' |
| 14 | System | Creates JWT access token (expires 1 hour) and refresh token (expires 30 days) | Tokens stored in localStorage |
| 15 | System | Redirects to dashboard home page | User sees welcome message and onboarding tour |
| 16 | System | Logs activity | Action: 'email_verified' |
Phase 3: Subsequent Authentication
| Step | Actor | Action | System Response |
|---|---|---|---|
| 17 | User | Navigates to login page (https://app.dustac.com/login) | System displays login form |
| 18 | User | Enters email and password | System validates format |
| 19 | User | Clicks "Sign In" button | System submits credentials to Supabase Auth |
| 20 | System | Verifies credentials against auth.users table | Password hash matches |
| 21 | System | Checks account status | Status is 'confirmed' and not locked |
| 22 | System | Generates new JWT access and refresh tokens | Tokens returned to client |
| 23 | System | Stores tokens in localStorage | Tokens persisted in browser localStorage |
| 24 | System | Fetches user profile from user_profiles table | Includes role, organization, preferences |
| 25 | System | Redirects to last visited page or dashboard home | User sees personalized dashboard |
| 26 | System | Logs activity | Action: 'user_login', timestamp, IP address, user agent |
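Steps 19-24 correspond roughly to the following Supabase JS v2 calls. This is a sketch: the URL/key placeholders and the user_profiles column names are assumptions.

```typescript
// Sketch of the login exchange (steps 19-24) with the Supabase client.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://<project>.supabase.co", "<anon-key>"); // placeholders

async function signIn(email: string, password: string) {
  // Steps 19-22: credential verification and token issuance.
  const { data, error } = await supabase.auth.signInWithPassword({ email, password });
  if (error) throw error; // invalid credentials -> alternative flow A4

  // Step 24: fetch role, organization, and preferences for the session.
  const { data: profile, error: profileError } = await supabase
    .from("user_profiles")
    .select("role, organization, preferences")
    .eq("id", data.user.id)
    .single();
  if (profileError) throw profileError;

  return { session: data.session, profile }; // tokens persisted by the client (step 23)
}
```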
6.1.4 Postconditions
| Postcondition | Description |
|---|---|
| POST-01 | User record exists in auth.users with confirmed email |
| POST-02 | User profile exists in user_profiles with default role='user' |
| POST-03 | Valid JWT tokens stored in browser localStorage |
| POST-04 | User session active with 1-hour expiry |
| POST-05 | Activity log entries created for registration, verification, and login |
| POST-06 | User can access dashboard and upload data |
6.1.5 Alternative Flows
A1: Email Already Registered
| Step | Condition | Alternative Action |
|---|---|---|
| 5a | Email exists in auth.users | System displays error: "Email already registered. Please sign in or reset password." |
| 5b | User clicks "Forgot Password" link | System redirects to password reset flow |
A2: Weak Password
| Step | Condition | Alternative Action |
|---|---|---|
| 4a | Password fails complexity requirements | System displays inline error: "Password must contain 12+ characters, uppercase, lowercase, number, and special character" |
| 4b | User updates password | System re-validates and allows submission |
A3: Email Verification Expired
| Step | Condition | Alternative Action |
|---|---|---|
| 12a | Token expired (>1 hour old) | System displays error: "Verification link expired" |
| 12b | System provides "Resend Verification Email" button | User clicks button |
| 12c | System generates new token and sends email | New email sent with fresh link |
A4: Invalid Credentials at Login
| Step | Condition | Alternative Action |
|---|---|---|
| 20a | Password does not match hash | System increments failed_login_attempts counter |
| 20b | Failed attempts < 5 | System displays: "Invalid email or password. X attempts remaining" |
| 20c | Failed attempts >= 5 | System locks account for 30 minutes, sends security alert email |
A5: Session Expired
| Step | Condition | Alternative Action |
|---|---|---|
| - | User accesses system with expired access token | System attempts to refresh using refresh token |
| - | Refresh token valid | System issues new access token, user continues session |
| - | Refresh token expired | System redirects to login page with message: "Session expired. Please sign in again." |
6.1.6 Exception Handling
| Exception | Cause | System Behavior | Recovery Action |
|---|---|---|---|
| EX-01 | Supabase Auth service down | Display error page with retry button | User waits and retries; system logs incident |
| EX-02 | Email service unavailable | Registration succeeds but email fails | System queues email for retry; user can request resend |
| EX-03 | Network connection lost during registration | Transaction rolled back | User refreshes page and resubmits form |
| EX-04 | Browser blocks third-party cookies | Token storage fails | System displays warning to enable cookies or use different browser |
| EX-05 | Database constraint violation | Registration fails | System logs error, displays generic message, notifies admin |
6.1.7 Business Rules
| Rule ID | Business Rule |
|---|---|
| BR-01 | Only users with confirmed email addresses can access the system |
| BR-02 | Passwords must meet minimum complexity: 12 characters, mixed case, numbers, special characters |
| BR-03 | Failed login attempts > 5 within 30 minutes trigger account lockout |
| BR-04 | User sessions expire after 1 hour of inactivity (access token TTL) |
| BR-05 | All authentication events must be logged for audit compliance |
| BR-06 | Users must belong to an active organization subscription |
6.1.8 Performance Requirements
| Metric | Target | Measurement |
|---|---|---|
| Registration form load time | < 2 seconds | Time to interactive |
| Registration submission processing | < 3 seconds | Server response time |
| Email delivery time | < 60 seconds | SMTP queue to inbox |
| Login processing time | < 1.5 seconds | Credential validation to token issuance |
| Token refresh time | < 500ms | Background refresh without user interruption |
6.2 Scenario: CSV Data Upload
6.2.1 Scenario Overview
Description: An environmental officer uploads CSV files containing dust monitoring data from field devices to the Dustac platform for processing and analysis.
Primary Actor: Environmental Officer / Field Technician
Supporting Actors:
- CSV Parser Service
- Supabase Storage Service
- Database Import Service
- Notification Service
Business Goal: Ingest environmental monitoring data efficiently and accurately to enable compliance reporting and analysis.
6.2.2 Preconditions
| Precondition | Description |
|---|---|
| PRE-01 | User is authenticated with valid session |
| PRE-02 | User has at least one mine site configured in the system |
| PRE-03 | CSV files are available locally (exported from Dustac devices) |
| PRE-04 | CSV files conform to expected format (18 columns, UTF-8 encoding) |
| PRE-05 | User has sufficient storage quota remaining |
| PRE-06 | Network bandwidth adequate for file transfer |
6.2.3 Main Success Flow
Phase 1: Upload Initiation
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to Upload page (/upload) | System displays upload interface with drag-drop zone |
| 2 | System | Fetches user's mine sites from database | Displays site selection dropdown |
| 3 | User | Selects target mine site from dropdown | System caches selection |
| 4 | User | Selects upload type (Daily/Weekly/Monthly) | System adjusts date range picker |
| 5 | User | Sets period start and end dates | System validates date range is logical |
Phase 2: File Selection and Validation
| Step | Actor | Action | System Response |
|---|---|---|---|
| 6 | User | Drags and drops 3 CSV files into upload zone OR clicks "Browse" and selects files | System displays file list with names and sizes |
| 7 | System | Performs client-side pre-validation | Checks file extensions (.csv), sizes (< 50MB each), count (≤ 10 files) |
| 8 | System | Reads first 5 rows of each file for preview | Displays data preview table |
| 9 | System | Detects and strips UTF-8 BOM if present | Normalizes encoding |
| 10 | System | Parses CSV headers | Validates 18 expected columns exist |
| 11 | System | Validates data types in sample rows | Checks numeric fields, date formats, required fields |
| 12 | System | Displays validation results | Shows ✓ for valid files, ⚠ for warnings, ✗ for errors |
Phase 3: Upload Execution
| Step | Actor | Action | System Response |
|---|---|---|---|
| 13 | User | Reviews validation results and clicks "Upload" button | System disables button, shows progress modal |
| 14 | System | Creates upload record in uploads table | Status='pending', generates UUID |
| 15 | System | For each CSV file, creates csv_files record | Links to upload_id, status='pending' |
| 16 | System | Initiates parallel upload to Supabase Storage | Files uploaded to bucket: csv-uploads/{user_id}/{upload_id}/ |
| 17 | System | Displays progress bar per file | Updates % completion in real-time |
| 18 | System | On successful upload, updates csv_files.storage_path | Stores full path for retrieval |
| 19 | System | Updates upload.status='validating' | Triggers validation phase |
Phase 4: Data Processing
| Step | Actor | Action | System Response |
|---|---|---|---|
| 20 | System | Invokes CSV parser for each file | Parses all rows, extracts measurements |
| 21 | System | Validates each measurement record | Checks: time format, numeric ranges, required fields |
| 22 | System | Updates csv_files.record_count | Stores total valid records per file |
| 23 | System | Updates upload.status='importing' | Begins database import |
| 24 | System | Batch inserts measurements into measurements table | Uses batch size of 1000 records per query |
| 25 | System | Updates progress indicator | Shows "Importing: X of Y records" |
| 26 | System | On completion, updates csv_files.status='completed' | Marks individual file as processed |
| 27 | System | Updates upload.status='completed' | All files processed successfully |
| 28 | System | Updates summary statistics | total_record_count, successful_files, files_summary JSON |
| 29 | System | Logs activity in activity_logs | Action: 'data_upload', details include file count and record count |
| 30 | System | Displays success notification | "Upload completed: 3 files, 4,523 records imported" |
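The batched import in step 24 might look like the sketch below; the Measurement shape, progress callback, and client setup are illustrative.

```typescript
// Sketch of step 24: batch-inserting measurements 1,000 rows at a time.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://<project>.supabase.co", "<anon-key>"); // placeholders

interface Measurement {
  site: string;
  time: string; // ISO 8601
  pm25: number;
  pm10: number;
}

async function importMeasurements(
  rows: Measurement[],
  onProgress: (done: number, total: number) => void,
): Promise<void> {
  const BATCH_SIZE = 1000;
  for (let i = 0; i < rows.length; i += BATCH_SIZE) {
    const batch = rows.slice(i, i + BATCH_SIZE);
    const { error } = await supabase.from("measurements").insert(batch);
    if (error) throw error; // caller marks the upload as 'failed' (EX-02)
    onProgress(Math.min(i + BATCH_SIZE, rows.length), rows.length); // step 25 UI update
  }
}
```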
6.2.4 Postconditions
| Postcondition | Description |
|---|---|
| POST-01 | Upload record exists in uploads table with status='completed' |
| POST-02 | CSV files stored in Supabase Storage with unique paths |
| POST-03 | csv_files records created for each file with metadata |
| POST-04 | Measurement records inserted into measurements table |
| POST-05 | User can query uploaded data immediately |
| POST-06 | Activity logged for compliance audit trail |
| POST-07 | User receives confirmation notification |
6.2.5 Alternative Flows
A1: Invalid File Format
| Step | Condition | Alternative Action |
|---|---|---|
| 10a | CSV missing required columns | System displays error: "File missing columns: [list]. Please export from Dustac device." |
| 10b | User clicks "Remove" to exclude invalid file | System removes file from upload queue |
| 10c | User proceeds with valid files only | System uploads remaining valid files |
A2: Network Interruption During Upload
| Step | Condition | Alternative Action |
|---|---|---|
| 17a | Network connection lost mid-upload | System pauses upload, displays "Connection lost. Retrying..." |
| 17b | System attempts retry every 5 seconds | Uses exponential backoff (5s, 10s, 20s) |
| 17c | Connection restored within 2 minutes | System resumes upload from last checkpoint |
| 17d | Connection not restored after 2 minutes | System marks upload as 'failed', allows user to retry |
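The retry cadence in A2 reduces to a small helper (a sketch; the real uploader would also resume from the last checkpoint rather than restarting the transfer):

```typescript
// Sketch of the A2 retry schedule: immediate attempt, then 5s, 10s, 20s.
async function withRetry<T>(
  fn: () => Promise<T>,
  delaysMs: number[] = [5_000, 10_000, 20_000],
): Promise<T> {
  let lastError: unknown;
  for (const delay of [0, ...delaysMs]) {
    if (delay > 0) await new Promise((r) => setTimeout(r, delay)); // exponential backoff
    try {
      return await fn();
    } catch (err) {
      lastError = err; // connection lost: fall through and try again
    }
  }
  throw lastError; // window exhausted -> upload marked 'failed' (step 17d)
}
```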
A3: Duplicate Data Detected
| Step | Condition | Alternative Action |
|---|---|---|
| 24a | Measurement records overlap with existing data | System detects duplicate (site + timestamp) |
| 24b | System displays warning dialog | "X records already exist. Skip duplicates or cancel upload?" |
| 24c | User selects "Skip Duplicates" | System inserts only new records, updates summary with skipped count |
A4: Partial File Failure
| Step | Condition | Alternative Action |
|---|---|---|
| 21a | File 2 of 3 has validation errors in 10% of rows | System marks those rows as invalid |
| 21b | System continues processing valid rows | Imports 90% of data from File 2 |
| 21c | Updates csv_files.status='partial_success' | Stores validation_errors JSON with row numbers |
| 21d | Displays warning notification | "File 2: 450 of 500 records imported. 50 records had errors." |
| 21e | System provides "Download Error Report" button | User downloads CSV with error details |
A5: Storage Quota Exceeded
| Step | Condition | Alternative Action |
|---|---|---|
| 16a | User storage quota exceeded | System aborts upload before starting |
| 16b | Displays error message | "Storage quota exceeded. Delete old uploads or contact admin." |
| 16c | Provides link to Manage Storage page | User can delete old CSV files and reports |
6.2.6 Exception Handling
| Exception | Cause | System Behavior | Recovery Action |
|---|---|---|---|
| EX-01 | Supabase Storage unavailable | Upload fails immediately | System displays error, user retries later |
| EX-02 | Database connection timeout during import | Transaction rolled back | System marks upload as 'failed', allows retry |
| EX-03 | Malformed CSV (encoding issues) | Parser throws exception | System logs error, marks file as 'failed', displays helpful error message |
| EX-04 | File size exceeds 50MB limit | Client-side validation blocks upload | System displays error: "File too large. Split data into multiple files." |
| EX-05 | Browser crashes during upload | Upload abandoned | On restart, system shows "Resume Upload?" prompt if partial data exists |
6.2.7 Business Rules
| Rule ID | Business Rule |
|---|---|
| BR-01 | CSV files must contain exactly 18 columns in specified order |
| BR-02 | Maximum file size: 50MB per file, maximum 10 files per upload |
| BR-03 | Duplicate measurements (same site + timestamp) are skipped by default |
| BR-04 | Data retention: Uploaded CSV files retained for 90 days, measurements retained indefinitely |
| BR-05 | Users can only upload data for mine sites they have access to |
| BR-06 | Failed uploads can be retried; system cleans up partial data after 24 hours |
6.2.8 Data Validation Rules
| Field | Validation Rule | Error Handling |
|---|---|---|
| time | ISO 8601 format, not future date | Reject row, log error |
| massconcentration* | Numeric, >= 0, <= 10000 μg/m³ | Reject row if required field invalid |
| temperature | Numeric, -50 to 80 °C | Accept with warning if out of range |
| humidity | Numeric, 0 to 100 % | Reject row if out of range |
| speed | Numeric, >= 0 km/h | Accept null, reject negative |
| loc | Text, max 255 characters | Truncate if longer |
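These rules reduce to a single row validator; the sketch below mirrors the table's reject/warn split (the loc truncation is noted but elided):

```typescript
// Sketch of per-row validation with the table's reject/warn semantics.
interface RowResult { ok: boolean; errors: string[]; warnings: string[] }

function validateRow(row: Record<string, string>): RowResult {
  const errors: string[] = [];
  const warnings: string[] = [];

  const t = Date.parse(row.time);
  if (Number.isNaN(t) || t > Date.now()) errors.push("time: invalid or future date");

  const mass = Number(row.massconcentration);
  if (Number.isNaN(mass) || mass < 0 || mass > 10_000) {
    errors.push("massconcentration: out of range (0-10000 μg/m³)");
  }

  const temp = Number(row.temperature);
  if (!Number.isNaN(temp) && (temp < -50 || temp > 80)) {
    warnings.push("temperature: out of range (-50 to 80 °C), accepted with warning");
  }

  const humidity = Number(row.humidity);
  if (Number.isNaN(humidity) || humidity < 0 || humidity > 100) {
    errors.push("humidity: out of range (0-100%)");
  }

  if (row.speed && Number(row.speed) < 0) errors.push("speed: negative"); // null accepted
  // loc: truncated to 255 characters during import (not shown here).

  return { ok: errors.length === 0, errors, warnings };
}
```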
6.2.9 Performance Requirements
| Metric | Target | Measurement |
|---|---|---|
| File upload speed | > 5 MB/s | Network transfer rate |
| CSV parsing throughput | > 10,000 rows/second | Parser processing rate |
| Database insert throughput | > 5,000 records/second | Batch insert performance |
| Total upload time (10MB file) | < 30 seconds | End-to-end from file selection to completion |
| Progress update frequency | Every 500ms | Real-time UI refresh rate |
6.3 Scenario: PDF Report Generation
6.3.1 Scenario Overview
Description: A user generates a comprehensive PDF report containing dust monitoring data, charts, and analysis for a specified date range and set of monitoring devices.
Primary Actor: Environmental Manager / Compliance Officer
Supporting Actors:
- Report Data Service
- Chart Rendering Service (Recharts)
- PDF Generation Service (jsPDF + html2canvas)
- Supabase Storage Service
Business Goal: Produce professional, compliant environmental reports for regulatory submission and stakeholder communication.
6.3.2 Preconditions
| Precondition | Description |
|---|---|
| PRE-01 | User is authenticated with valid session |
| PRE-02 | Measurement data exists in database for selected period |
| PRE-03 | User has selected at least one mine site and device |
| PRE-04 | Browser supports HTML5 Canvas API |
| PRE-05 | User has sufficient storage quota for report |
| PRE-06 | Adequate browser memory (recommended 4GB+ available) |
6.3.3 Main Success Flow
Phase 1: Report Configuration
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to Reports page (/reports) | System displays report generation interface |
| 2 | System | Loads user's mine sites from database | Populates site dropdown |
| 3 | User | Selects mine site from dropdown | System fetches devices for selected site |
| 4 | System | Queries devices table filtered by site_id | Displays device multiselect with device names |
| 5 | User | Selects 2 devices (e.g., "Device-A1", "Device-B3") | System updates selection state |
| 6 | User | Selects date range using calendar picker | System validates: start < end, range <= 90 days |
| 7 | User | Selects report template (Standard/Detailed/Summary) | System loads template configuration |
| 8 | User | Clicks "Preview Data" button | System triggers data preview |
Phase 2: Data Retrieval and Preview
| Step | Actor | Action | System Response |
|---|---|---|---|
| 9 | System | Constructs SQL query with filters | WHERE site=X AND device IN (A,B) AND time BETWEEN start AND end |
| 10 | System | Executes query against measurements table | Retrieves matching records (assume 4,523 records) |
| 11 | System | Aggregates data for preview | Calculates: record count, date range, avg/max PM2.5, PM10 |
| 12 | System | Displays data summary table | Shows per-device record count, date range, and avg/max PM2.5 and PM10 |
| 13 | System | Renders preview chart (line chart) | Shows PM2.5 trend over time for both devices |
| 14 | User | Reviews preview, adjusts filters if needed | System updates preview dynamically |
| 15 | User | Satisfied with data, clicks "Generate Report" | System initiates PDF generation |
Phase 3: PDF Generation
| Step | Actor | Action | System Response |
|---|---|---|---|
| 16 | System | Creates report record in reports table | Status='generating', stores parameters |
| 17 | System | Displays modal: "Generating report..." with progress bar | User cannot navigate away |
| 18 | System | Renders React components for report in hidden div | Components include: cover page, summary, charts, data tables |
| 19 | System | Waits for all charts to complete rendering | Uses useEffect hooks to detect completion |
| 20 | System | Captures cover page as image using html2canvas | Resolution: 1240x1754px (A4 portrait at 150 DPI) |
| 21 | System | Updates progress: "Rendering page 1 of 12" | Progress bar: 8% |
| 22 | System | Captures summary page | Contains: site info, date range, device list, key statistics |
| 23 | System | Updates progress: "Rendering page 2 of 12" | Progress bar: 16% |
| 24 | System | Captures chart pages (6 pages) | PM2.5 trend, PM10 trend, PM2.5 vs PM10, Temperature, Humidity, Daily averages |
| 25 | System | Updates progress after each chart | Progress bar: 24%, 32%, 40%, 48%, 56%, 64% |
| 26 | System | Captures data table pages (4 pages) | Paginated tables with 100 records per page |
| 27 | System | Updates progress after each table | Progress bar: 72%, 80%, 88%, 96% |
| 28 | System | Initializes jsPDF instance | Format: A4, orientation: portrait |
| 29 | System | Adds each captured image to PDF | Maintains aspect ratio, adds page breaks |
| 30 | System | Generates PDF blob | File size: ~3.2 MB |
| 31 | System | Updates progress: "Saving report..." | Progress bar: 100% |
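Steps 18-30 reduce to the html2canvas-plus-jsPDF loop below; the capture scale and the list of page elements are assumptions.

```typescript
// Sketch of PDF assembly: capture each rendered page, append as an A4 image.
import html2canvas from "html2canvas";
import { jsPDF } from "jspdf";

async function buildPdf(pageElements: HTMLElement[]): Promise<Blob> {
  const pdf = new jsPDF({ unit: "mm", format: "a4", orientation: "portrait" });

  for (const [i, el] of pageElements.entries()) {
    const canvas = await html2canvas(el, { scale: 2 }); // higher scale ≈ 150 DPI target
    const img = canvas.toDataURL("image/png");
    if (i > 0) pdf.addPage();
    pdf.addImage(img, "PNG", 0, 0, 210, 297); // full A4 page in millimetres
    // A progress callback would fire here: "Rendering page {i + 1} of {total}".
  }
  return pdf.output("blob"); // step 30: PDF blob for storage and download
}
```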
Phase 4: Storage and Download
| Step | Actor | Action | System Response |
|---|---|---|---|
| 32 | System | Generates filename | Format: "{site}_{start-date}_to_{end-date}_Report.pdf" |
| 33 | System | Uploads PDF to Supabase Storage | Bucket: reports/{user_id}/{upload_id}/ |
| 34 | System | Updates reports table | pdf_storage_path, file_size_bytes, page_count, status='completed' |
| 35 | System | Calculates generation duration | Stores generation_duration_ms (e.g., 45,300 ms = 45.3 seconds) |
| 36 | System | Logs activity in activity_logs | Action: 'report_generated', includes report_id and parameters |
| 37 | System | Triggers browser download | Uses FileSaver.js to download PDF |
| 38 | System | Closes progress modal | Displays success notification: "Report generated successfully!" |
| 39 | System | Displays report card in Reports list | Card shows: filename, date, size, download/share buttons |
6.3.4 Postconditions
| Postcondition | Description |
|---|---|
| POST-01 | PDF report file saved to user's local downloads folder |
| POST-02 | Report record exists in reports table with status='completed' |
| POST-03 | PDF file stored in Supabase Storage with unique path |
| POST-04 | User can re-download report from Reports page |
| POST-05 | Activity logged for audit trail |
| POST-06 | Report appears in user's report history |
| POST-07 | Report metadata includes generation timestamp and parameters |
6.3.5 Alternative Flows
A1: No Data for Selected Criteria
| Step | Condition | Alternative Action |
|---|---|---|
| 10a | Query returns 0 records | System displays warning: "No data found for selected criteria" |
| 10b | System disables "Generate Report" button | User must adjust filters (date range, devices) |
| 10c | System provides suggestions | "Try expanding date range or selecting different devices" |
A2: Browser Memory Insufficient
| Step | Condition | Alternative Action |
|---|---|---|
| 20a | html2canvas fails due to memory error | System catches exception |
| 20b | System displays error modal | "Report too large to generate in browser. Please reduce date range or contact support." |
| 20c | Updates reports.status='failed' | Stores error_message in database |
| 20d | Logs error with details | Includes browser info, data size, memory usage |
A3: User Cancels Generation
| Step | Condition | Alternative Action |
|---|---|---|
| 25a | User clicks "Cancel" button in progress modal | System stops rendering process |
| 25b | System deletes partial report files | Cleans up temporary data |
| 25c | Updates reports.status='cancelled' | Marks report as cancelled |
| 25d | Displays notification | "Report generation cancelled" |
A4: Large Dataset (>10,000 records)
| Step | Condition | Alternative Action |
|---|---|---|
| 10a | Query returns >10,000 records | System displays warning: "Large dataset detected (15,423 records)" |
| 10b | System offers options | "Generate full report (may take 2-3 minutes) OR Generate summary report (faster)" |
| 10c | User selects "Summary" | System aggregates data by day instead of individual records |
| 10d | PDF contains summary tables | Daily averages instead of raw data |
A5: Report Already Exists
| Step | Condition | Alternative Action |
|---|---|---|
| 16a | Identical report parameters exist in last 24 hours | System detects duplicate based on site, devices, date range |
| 16b | System displays prompt | "Similar report exists. Re-generate or download existing?" |
| 16c | User selects "Download Existing" | System retrieves existing PDF from storage and downloads |
| 16d | User selects "Re-generate" | System proceeds with new generation |
6.3.6 Exception Handling
| Exception | Cause | System Behavior | Recovery Action |
|---|---|---|---|
| EX-01 | Database query timeout | Query exceeds 30-second limit | System displays error, suggests reducing date range |
| EX-02 | Chart rendering failure | Recharts component throws error | System logs error, generates report without that chart, includes placeholder |
| EX-03 | Storage upload failure | Supabase Storage unavailable | Report generated but not saved; offers "Save to Device Only" |
| EX-04 | PDF generation crash | jsPDF throws exception | System catches error, logs details, displays user-friendly error message |
| EX-05 | Network interruption during upload | Storage upload fails mid-stream | System retries upload 3 times with exponential backoff |
6.3.7 Business Rules
| Rule ID | Business Rule |
|---|---|
| BR-01 | Reports can span maximum 90 days to ensure reasonable file size |
| BR-02 | PDF reports must include: cover page, summary, charts, data tables, footer with generation timestamp |
| BR-03 | All reports are stored for 90 days, then automatically archived |
| BR-04 | Users can re-generate reports with same parameters; new version increments version number |
| BR-05 | Report filenames must be unique and descriptive (include site, date range) |
| BR-06 | Generated reports are viewable by all users in the same organization (RLS policy) |
6.3.8 Report Content Specification
| Section | Page # | Content Description |
|---|---|---|
| Cover Page | 1 | Organization logo, report title, site name, date range, generation date |
| Executive Summary | 2 | Key findings, compliance status, exceedance events, recommendations |
| Site Information | 3 | Site details, device list, monitoring parameters, data completeness |
| PM2.5 Analysis | 4-5 | Time series chart, statistical summary, exceedance analysis, regulatory comparison |
| PM10 Analysis | 6-7 | Time series chart, statistical summary, exceedance analysis, regulatory comparison |
| Environmental Conditions | 8-9 | Temperature and humidity charts, correlation analysis |
| Daily Averages | 10 | Table with daily average values for all parameters |
| Raw Data Tables | 11-12 | Paginated tables with individual measurements (sample or full data) |
| Footer (all pages) | - | Page number, report ID, confidentiality notice, generation timestamp |
6.3.9 Performance Requirements
| Metric | Target | Measurement |
|---|---|---|
| Data query time | < 5 seconds | For up to 10,000 records |
| Chart rendering time | < 2 seconds per chart | Time to complete useEffect hook |
| html2canvas capture time | < 3 seconds per page | Image generation duration |
| PDF assembly time | < 5 seconds | jsPDF processing time |
| Total generation time (standard report) | < 60 seconds | End-to-end for 5,000 records, 12 pages |
| PDF file size | < 5 MB | For standard 90-day report |
6.4 Scenario: Dust Level Data Query and Export
6.4.1 Scenario Overview
Description: A user queries historical dust monitoring data using filters and exports the results to CSV format for external analysis or reporting.
Primary Actor: Data Analyst / Environmental Scientist
Supporting Actors:
- Query Service
- Data Export Service
- Supabase Database
Business Goal: Enable flexible data access and integration with external analysis tools and regulatory reporting systems.
6.4.2 Preconditions
| Precondition | Description |
|---|---|
| PRE-01 | User is authenticated with valid session |
| PRE-02 | Historical measurement data exists in database |
| PRE-03 | User has access to at least one mine site |
| PRE-04 | Browser supports Blob API for file download |
6.4.3 Main Success Flow
Phase 1: Query Configuration
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to Dust Levels page (/dust-levels) | System displays data query interface |
| 2 | System | Loads user's accessible mine sites | Populates site filter dropdown |
| 3 | User | Selects mine site(s) from multiselect | System fetches devices for selected sites |
| 4 | User | Selects specific devices (optional) | System updates device filter state |
| 5 | User | Sets date range (start and end dates) | System validates range (max 365 days) |
| 6 | User | Configures additional filters: PM2.5 range, temperature range | System stores filter criteria |
| 7 | User | Clicks "Apply Filters" button | System constructs and executes query |
Phase 2: Data Retrieval and Display
| Step | Actor | Action | System Response |
|---|---|---|---|
| 8 | System | Constructs SQL query with WHERE clauses | Applies all user-selected filters |
| 9 | System | Executes paginated query (limit 100 records) | Retrieves first page of results |
| 10 | System | Calculates total result count | COUNT(*) query with same filters |
| 11 | System | Displays results in data grid | Columns: Time, Site, Device, PM2.5, PM10, Temp, Humidity |
| 12 | System | Shows pagination controls | "Showing 1-100 of 2,456 results" |
| 13 | System | Renders summary statistics | Min, Max, Average, Median for PM2.5 and PM10 |
| 14 | System | Displays interactive chart preview | Line chart showing PM2.5 trend over time |
Phase 3: Data Interaction
| Step | Actor | Action | System Response |
|---|---|---|---|
| 15 | User | Clicks column header to sort | System re-queries with ORDER BY clause |
| 16 | User | Navigates to page 3 using pagination | System fetches records 201-300 (OFFSET 200 LIMIT 100) |
| 17 | User | Clicks on specific row | System displays detail modal with all 18 measurements |
| 18 | User | Closes detail modal | System returns to data grid view |
Phase 4: Data Export
| Step | Actor | Action | System Response |
|---|---|---|---|
| 19 | User | Clicks "Export to CSV" button | System displays export configuration modal |
| 20 | User | Selects export scope: "Current Page" or "All Results" | System updates export settings |
| 21 | User | Selects columns to include (default: all) | System updates column selection |
| 22 | User | Clicks "Download CSV" button | System initiates export process |
| 23 | System | Fetches all data matching filters (no pagination) | Executes query without LIMIT |
| 24 | System | Converts data to CSV format using PapaParse | Includes headers, formats timestamps, handles nulls |
| 25 | System | Generates CSV blob | File size: ~850 KB for 2,456 records |
| 26 | System | Generates filename | Format: "DustLevels_{site}_{start-date}_to_{end-date}.csv" |
| 27 | System | Triggers browser download | Uses FileSaver.js |
| 28 | System | Logs activity in activity_logs | Action: 'data_export', includes record count |
| 29 | System | Displays success notification | "Exported 2,456 records to CSV" |
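Step 23's unpaginated fetch is better done in batches for large result sets (see A1 below); a sketch using the Supabase client's .range() reads, with illustrative filter columns:

```typescript
// Sketch of batched export fetching: 10,000 rows per round trip.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://<project>.supabase.co", "<anon-key>"); // placeholders

async function fetchAllRows(site: string, start: string, end: string) {
  const BATCH = 10_000;
  const all: Record<string, unknown>[] = [];
  for (let from = 0; ; from += BATCH) {
    const { data, error } = await supabase
      .from("measurements")
      .select("*")
      .eq("site", site)
      .gte("time", start)
      .lte("time", end)
      .order("time", { ascending: true })
      .range(from, from + BATCH - 1); // inclusive start/end offsets
    if (error) throw error;
    all.push(...(data ?? []));
    if (!data || data.length < BATCH) break; // last page reached
  }
  return all; // handed to PapaParse for CSV conversion (step 24)
}
```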
6.4.4 Postconditions
| Postcondition | Description |
|---|---|
| POST-01 | CSV file downloaded to user's local machine |
| POST-02 | Export activity logged for audit compliance |
| POST-03 | User can immediately open CSV in Excel or analysis tools |
| POST-04 | Original query filters remain active for further exploration |
6.4.5 Alternative Flows
A1: Large Dataset Warning
| Step | Condition | Alternative Action |
|---|---|---|
| 23a | Query matches >50,000 records | System displays warning: "Large export (50K+ records). This may take 1-2 minutes." |
| 23b | User confirms or cancels | If confirmed, system proceeds with progress indicator |
| 23c | System streams data in batches | Fetches 10,000 records at a time to avoid memory issues |
A2: No Results Found
| Step | Condition | Alternative Action |
|---|---|---|
| 9a | Query returns 0 results | System displays message: "No data matches your filters" |
| 9b | System provides suggestions | "Try expanding date range or removing filters" |
| 9c | Export button disabled | User must adjust filters to get results |
A3: Export All vs. Current Page
| Step | Condition | Alternative Action |
|---|---|---|
| 20a | User selects "Current Page" | System exports only visible 100 records |
| 20b | CSV contains only current page data | Filename includes "_Page3" suffix |
6.4.6 Exception Handling
| Exception | Cause | System Behavior | Recovery Action |
|---|---|---|---|
| EX-01 | Database query timeout | Query exceeds 30-second limit | Display error suggesting to reduce date range or add more filters |
| EX-02 | Browser memory exceeded during export | Large dataset exhausts available memory | Split export into multiple files (by date range) |
| EX-03 | Network interruption during data fetch | Connection lost mid-query | Retry with exponential backoff (3 attempts) |
| EX-04 | Invalid date range | Start date after end date | Client-side validation prevents submission |
6.4.7 Business Rules
| Rule ID | Business Rule |
|---|---|
| BR-01 | Query date range cannot exceed 365 days |
| BR-02 | Maximum 100,000 records per export (split into multiple files if needed) |
| BR-03 | Exported data retains full precision (no rounding) |
| BR-04 | All data exports are logged for audit compliance |
| BR-05 | Users can only query data for mine sites they have access to (RLS enforced) |
6.4.8 Performance Requirements
| Metric | Target | Measurement |
|---|---|---|
| Initial query execution | < 3 seconds | For typical 100-record page |
| Pagination response time | < 1 second | Fetching next page |
| CSV generation (10K records) | < 5 seconds | Data fetch + conversion + download |
| Chart rendering | < 2 seconds | Line chart with 1,000 data points |
6.5 Scenario: Weekly Report Creation
6.5.1 Scenario Overview
Description: A field supervisor creates and submits a structured weekly report documenting site observations, flow meter usage, dashboard updates, and other operational activities.
Primary Actor: Field Supervisor / Site Manager
Supporting Actors:
- Report Auto-Save Service
- Validation Service
- Notification Service
Business Goal: Capture structured field data for operational tracking, compliance documentation, and stakeholder communication.
6.5.2 Preconditions
| Precondition | Description |
|---|---|
| PRE-01 | User is authenticated with valid session |
| PRE-02 | User has field supervisor or manager role |
| PRE-03 | Week to be reported has ended (report period end date <= today) |
| PRE-04 | User has access to relevant mine sites and equipment data |
6.5.3 Main Success Flow
Phase 1: Report Initiation
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to Weekly Reports page (/weekly-reports) | System displays report dashboard |
| 2 | System | Fetches user's existing reports | Displays list of draft and completed reports |
| 3 | User | Clicks "Create New Report" button | System displays week selection modal |
| 4 | User | Selects report period end date (Saturday) | System calculates period start (7 days prior) |
| 5 | System | Checks for duplicate reports | Validates no completed report exists for this week |
| 6 | System | Creates report record in weekly_reports table | Status='draft', initializes empty JSONB sections |
| 7 | System | Redirects to report editor | Displays 8-section form interface |
Phase 2: Section 1 - Site Observations
| Step | Actor | Action | System Response |
|---|---|---|---|
| 8 | User | Clicks "Add Observation" in Section 1 | System displays observation entry form |
| 9 | User | Selects site from dropdown | System loads site details |
| 10 | User | Enters: Ranger ID, Asset Location, Status, Issue, Action | System validates required fields |
| 11 | User | Clicks "Save Observation" | System adds entry to site_observations JSONB array |
| 12 | System | Auto-saves report (debounced 2 seconds) | Updates weekly_reports.updated_at |
| 13 | User | Adds 3 more observations (repeat steps 8-12) | Each saved to JSONB array |
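The 2-second debounced auto-save in step 12 (see also BR-05) might look like the sketch below; the column names are assumptions, and the offline fallback mirrors A3 later in this scenario.

```typescript
// Sketch of debounced auto-save for the weekly report editor.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://<project>.supabase.co", "<anon-key>"); // placeholders

let saveTimer: ReturnType<typeof setTimeout> | undefined;

function queueAutoSave(reportId: string, sections: Record<string, unknown>): void {
  clearTimeout(saveTimer); // restart the 2s window on every input event
  saveTimer = setTimeout(async () => {
    const { error } = await supabase
      .from("weekly_reports")
      .update({ ...sections, updated_at: new Date().toISOString() })
      .eq("id", reportId);
    if (error) {
      // Offline (A3): keep a local backup until connectivity returns.
      localStorage.setItem(`weekly-report-draft-${reportId}`, JSON.stringify(sections));
    }
  }, 2_000);
}
```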
Phase 3: Section 2 - Flow Meter Usage
| Step | Actor | Action | System Response |
|---|---|---|---|
| 14 | User | Navigates to Section 2 tab | System displays flow meter usage form |
| 15 | User | Clicks "Add Usage Record" | System displays usage entry form |
| 16 | User | Selects site, enters usage dates, volume used, notes | System validates numeric fields |
| 17 | User | Clicks "Save Record" | System adds entry to flow_meter_usage JSONB array |
| 18 | System | Auto-saves report | Updates database with new data |
Phase 4: Sections 3-8 (Continued Entry)
| Step | Actor | Action | System Response |
|---|---|---|---|
| 19 | User | Completes Section 3: Dashboard Updates | Adds 2 update entries (category, description) |
| 20 | User | Completes Section 4: Vendor Activities | Adds 1 vendor activity (name, type, description) |
| 21 | User | Completes Section 5: Water Truck Testing | Fills structured form (sites, hardware, test_summary) |
| 22 | User | Completes Section 6: Hardware & Installations | Adds 2 installation records (type, location, details) |
| 23 | User | Completes Section 7: Admin & Reporting | Adds travel and reporting entries |
| 24 | User | Completes Section 8: Other Tasks | Adds site support, stakeholder, internal task entries |
| 25 | System | Auto-saves after each section | Incrementally updates weekly_reports record |
Phase 5: Validation and Submission
| Step | Actor | Action | System Response |
|---|---|---|---|
| 26 | User | Clicks "Review Report" button | System performs validation checks |
| 27 | System | Validates all required sections have data | Checks at least 1 entry in critical sections |
| 28 | System | Displays validation summary | Shows ✓ Complete or ⚠ Missing for each section |
| 29 | User | Reviews summary, clicks "Submit Report" | System displays confirmation modal |
| 30 | User | Confirms submission | System updates status='completed' |
| 31 | System | Timestamps completion | Sets updated_at to current timestamp |
| 32 | System | Logs activity in activity_logs | Action: 'weekly_report_submitted' |
| 33 | System | Sends notification to manager | Email with report summary and link |
| 34 | System | Redirects to report list page | Displays success message: "Weekly report submitted!" |
Phase 6: Export to DOCX
| Step | Actor | Action | System Response |
|---|---|---|---|
| 35 | User | Clicks "Export to Word" button | System retrieves completed report data |
| 36 | System | Generates DOCX using docx library | Creates formatted document with 8 sections |
| 37 | System | Includes tables, lists, formatting | Professional report layout |
| 38 | System | Generates blob and triggers download | Filename: "WeeklyReport_{date}_{user}.docx" |
| 39 | System | Logs export activity | Action: 'weekly_report_exported' |
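Steps 36-38 with the docx library might look like the sketch below; the section-to-paragraph mapping is illustrative.

```typescript
// Sketch of DOCX export: one heading + body per report section.
import { Document, HeadingLevel, Packer, Paragraph } from "docx";
import { saveAs } from "file-saver";

async function exportWeeklyReport(
  sections: { title: string; body: string }[],
  filename: string, // e.g. "WeeklyReport_{date}_{user}.docx"
): Promise<void> {
  const doc = new Document({
    sections: [
      {
        children: sections.flatMap((s) => [
          new Paragraph({ text: s.title, heading: HeadingLevel.HEADING_2 }),
          new Paragraph({ text: s.body }),
        ]),
      },
    ],
  });
  const blob = await Packer.toBlob(doc); // browser-side generation, no server
  saveAs(blob, filename);
}
```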
6.5.4 Postconditions
| Postcondition | Description |
|---|---|
| POST-01 | Weekly report record exists with status='completed' |
| POST-02 | All 8 sections populated with structured JSONB data |
| POST-03 | Manager notified of report submission |
| POST-04 | Activity logged for audit trail |
| POST-05 | Report available for export to DOCX |
| POST-06 | Draft report cannot be edited after submission (read-only) |
6.5.5 Alternative Flows
A1: Save as Draft
| Step | Condition | Alternative Action |
|---|---|---|
| 26a | User clicks "Save as Draft" instead of Submit | System saves current state, status remains 'draft' |
| 26b | User can exit and return later | Report appears in "Draft Reports" section |
| 26c | User clicks "Continue Editing" on draft | System loads report editor with saved data |
A2: Duplicate Week Detected
| Step | Condition | Alternative Action |
|---|---|---|
| 5a | Completed report exists for selected week | System displays warning: "Report already exists for this week" |
| 5b | System offers options | "View existing report" or "Create revision" |
| 5c | User selects "Create revision" | System creates new report with version number incremented |
A3: Auto-Save Failure
| Step | Condition | Alternative Action |
|---|---|---|
| 12a | Network connection lost during auto-save | System displays warning banner: "Offline - changes not saved" |
| 12b | System stores changes in localStorage | Temporary local backup |
| 12c | Connection restored | System syncs local changes to database, displays "Synced" confirmation |
A4: Incomplete Sections at Submission
| Step | Condition | Alternative Action |
|---|---|---|
| 27a | Critical sections (1, 2) are empty | System displays error: "Sections 1 and 2 are required" |
| 27b | Submit button remains disabled | User must add at least one entry to each required section |
| 27c | Non-critical sections empty | System allows submission with warning: "Section 4 is empty" |
6.5.6 Exception Handling
| Exception | Cause | System Behavior | Recovery Action |
|---|---|---|---|
| EX-01 | Browser crash during editing | Changes lost if not auto-saved recently | localStorage backup attempts recovery on restart |
| EX-02 | Database update conflict | Two users editing same report simultaneously | Last write wins; system warns of potential conflict |
| EX-03 | JSONB data corruption | Malformed JSON in JSONB field | Validation catches error; system rejects save, logs issue |
| EX-04 | DOCX generation failure | docx library throws exception | System displays error, offers CSV export alternative |
6.5.7 Business Rules
| Rule ID | Business Rule |
|---|---|
| BR-01 | Weekly reports must cover a 7-day period ending on Saturday |
| BR-02 | Only one completed report allowed per week per user |
| BR-03 | Reports can be saved as draft unlimited times before submission |
| BR-04 | Completed reports are read-only; revisions create new report record |
| BR-05 | Auto-save occurs every 2 seconds after user input (debounced) |
| BR-06 | Sections 1 and 2 (Site Observations, Flow Meter Usage) are mandatory |
| BR-07 | Managers receive notification within 5 minutes of report submission |
6.5.8 Data Structure Specification
Section 1: Site Observations (JSONB Array)
[
{
"site_name": "Mine Site A",
"ranger_id": "R-123",
"asset_location": "North Pit",
"status": "Operational",
"issue": "Dust monitor battery low",
"action": "Replaced battery, verified operation"
}
]
Section 2: Flow Meter Usage (JSONB Array)
[
{
"site_name": "Mine Site B",
"usage_dates": "2025-12-01 to 2025-12-07",
"volume_used": "125000",
"notes": "Increased usage due to dry conditions"
}
]
Section 5: Water Truck Testing (JSONB Object)
{
"sites": ["Site A", "Site B"],
"hardware": ["GPS Tracker", "Flow Sensor"],
"test_summary": "All systems operational. GPS accuracy within 5m."
}
6.5.9 Performance Requirements
| Metric | Target | Measurement |
|---|---|---|
| Form load time | < 2 seconds | Time to display editor interface |
| Auto-save latency | < 500ms | Time from input to database update |
| Section navigation | < 200ms | Tab switching response time |
| DOCX generation time | < 8 seconds | For typical 8-section report |
| Report list page load | < 3 seconds | Displaying 50 recent reports |
6.6 Scenario: Flow Meter Data Management
6.6.1 Scenario Overview
Description: A system administrator configures and manages flow meter data sources, including refreshing data from external scraper services and viewing water usage analytics.
Primary Actor: System Administrator / Operations Manager
Supporting Actors:
- Flow Meter Scraper Edge Function
- External Flow Meter API
- Notification Service
Business Goal: Maintain accurate water usage tracking for mine site operations and compliance reporting.
6.6.2 Preconditions
| Precondition | Description |
|---|---|
| PRE-01 | User is authenticated with admin or manager role |
| PRE-02 | Flow meter devices configured in system |
| PRE-03 | Scraper edge function deployed to Supabase |
| PRE-04 | External flow meter API accessible |
| PRE-05 | Database table flow_meter_data exists and is accessible |
6.6.3 Main Success Flow
Phase 1: View Current Data
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to Flow Meter page (/flow-meter) | System displays flow meter dashboard |
| 2 | System | Fetches latest flow meter readings from database | Queries flow_meter_data table |
| 3 | System | Displays data in table view | Columns: Site, Meter ID, Last Reading, Volume (L), Timestamp, Status |
| 4 | System | Shows last refresh timestamp | "Last updated: 2 hours ago" |
| 5 | System | Renders usage chart | Bar chart showing daily water usage by site |
| 6 | User | Filters by site or date range | System updates table and chart |
Phase 2: Manual Data Refresh
| Step | Actor | Action | System Response |
|---|---|---|---|
| 7 | User | Clicks "Refresh Data" button | System displays confirmation modal |
| 8 | User | Confirms refresh action | System disables button, shows spinner |
| 9 | System | Invokes Supabase Edge Function trigger-flow-meter-scraper | HTTP POST to function endpoint |
| 10 | Edge Function | Validates authentication | Checks service role key |
| 11 | Edge Function | Calls external flow meter API | Fetches latest readings for all devices |
| 12 | External API | Returns JSON with meter readings | Example: [{meter_id, volume, timestamp}] |
| 13 | Edge Function | Parses and validates response data | Checks data types, required fields |
| 14 | Edge Function | Batch inserts data into flow_meter_data table | Uses upsert to handle duplicates |
| 15 | Edge Function | Returns success response | HTTP 200 with summary: "45 records updated" |
| 16 | System | Displays success notification | "Flow meter data refreshed: 45 records updated" |
| 17 | System | Reloads dashboard data | Displays updated readings and chart |
| 18 | System | Updates last refresh timestamp | "Last updated: Just now" |
| 19 | System | Logs activity in activity_logs | Action: 'flow_meter_refresh', includes record count |
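From the dashboard's side, the manual refresh (steps 9 and 16) reduces to a single functions.invoke call; the response shape here is an assumption.

```typescript
// Sketch of invoking the scraper edge function from the dashboard.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://<project>.supabase.co", "<anon-key>"); // placeholders

async function refreshFlowMeterData(): Promise<string> {
  const { data, error } = await supabase.functions.invoke("trigger-flow-meter-scraper", {
    body: {}, // full refresh; no parameters assumed
  });
  if (error) throw error; // timeouts and auth failures surface here (A1, EX-01)
  return data.message as string; // e.g. "45 records updated"
}
```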
Phase 3: Configure Flow Meter Settings
| Step | Actor | Action | System Response |
|---|---|---|---|
| 20 | User | Clicks "Settings" button | System displays configuration modal |
| 21 | User | Views current settings | API endpoint, refresh frequency, timeout settings |
| 22 | User | Updates refresh frequency (from 6 hours to 4 hours) | System validates input |
| 23 | User | Clicks "Save Settings" | System updates configuration in database |
| 24 | System | Validates new settings | Checks frequency is between 1-24 hours |
| 25 | System | Updates system configuration | Stores in configuration table |
| 26 | System | Schedules next automatic refresh | Calculates next run time based on new frequency |
| 27 | System | Displays confirmation | "Settings updated. Next refresh at 14:00" |
Phase 4: View Usage Analytics
| Step | Actor | Action | System Response |
|---|---|---|---|
| 28 | User | Navigates to Analytics tab | System displays usage analytics dashboard |
| 29 | System | Aggregates usage data by site and date | SQL queries with GROUP BY clauses |
| 30 | System | Displays KPI cards | Total usage (week), Average daily, Peak day, Number of sites |
| 31 | System | Renders trend chart | Line chart showing 30-day usage trend |
| 32 | System | Displays site comparison chart | Horizontal bar chart: usage by site |
| 33 | User | Selects date range "Last 7 days" | System updates all charts and KPIs |
| 34 | User | Clicks "Export Report" | System generates CSV with usage data |
Phase 5: Handle Anomalies
| Step | Actor | Action | System Response |
|---|---|---|---|
| 35 | System | Detects anomaly (usage spike > 200% of average) | Triggers anomaly detection algorithm |
| 36 | System | Creates alert record | Stores in flow_meter_alerts table |
| 37 | System | Displays alert banner on dashboard | "Alert: Site B usage 250% above normal" |
| 38 | User | Clicks "View Details" on alert | System displays alert detail modal |
| 39 | System | Shows anomaly analysis | Chart comparing current vs historical usage |
| 40 | User | Marks alert as "Investigated" with notes | System updates alert status |
| 41 | System | Logs investigation | Stores notes and status change |
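The detection in step 35 (usage above 200% of the 30-day rolling average; see BR-04 below) reduces to a simple comparison; the input shape is illustrative.

```typescript
// Sketch of the anomaly check against a 30-day rolling average.
interface DailyUsage { date: string; volumeLiters: number }

function isUsageAnomaly(history: DailyUsage[], today: DailyUsage): boolean {
  const window = history.slice(-30); // last 30 days of per-day totals
  if (window.length === 0) return false;
  const avg = window.reduce((sum, d) => sum + d.volumeLiters, 0) / window.length;
  return today.volumeLiters > 2 * avg; // spike > 200% of rolling average
}
```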
6.6.4 Postconditions
| Postcondition | Description |
|---|---|
| POST-01 | Flow meter data in database reflects latest readings from external API |
| POST-02 | Dashboard displays up-to-date usage information |
| POST-03 | Last refresh timestamp updated |
| POST-04 | Activity logged for audit trail |
| POST-05 | Alerts generated for usage anomalies |
| POST-06 | Configuration changes applied for next scheduled refresh |
6.6.5 Alternative Flows
A1: Scraper Function Timeout
| Step | Condition | Alternative Action |
|---|---|---|
| 10a | External API does not respond within 30 seconds | Edge function times out |
| 10b | Edge function returns error response | HTTP 504 Gateway Timeout |
| 10c | System displays error notification | "Data refresh failed: API timeout. Please try again." |
| 10d | System logs error with details | Includes timestamp, API endpoint, timeout duration |
| 10e | User waits and retries | System attempts refresh again |
A2: Partial Data Fetch Failure
| Step | Condition | Alternative Action |
|---|---|---|
| 12a | External API returns data for only 30 of 45 meters | Some meters offline or unresponsive |
| 13a | Edge function validates partial data | Identifies missing meters |
| 14a | Edge function updates available data | Inserts 30 successful readings |
| 15a | Edge function returns partial success | HTTP 207 Multi-Status with details |
| 16a | System displays warning notification | "Partial refresh: 30 of 45 meters updated. 15 meters unavailable." |
| 16b | System marks unavailable meters with warning icon | Dashboard shows status: "Offline" |
A3: Scheduled Auto-Refresh
| Step | Condition | Alternative Action |
|---|---|---|
| - | Scheduled refresh time reached (e.g., every 4 hours) | System triggers auto-refresh |
| - | System invokes edge function automatically | No user interaction required |
| - | Edge function executes refresh process | Same as steps 10-15 |
| - | On success, system logs refresh | Action: 'flow_meter_auto_refresh' |
| - | On failure, system retries once after 5 minutes | If still fails, alerts admin |
A4: Invalid Configuration
| Step | Condition | Alternative Action |
|---|---|---|
| 24a | User enters refresh frequency < 1 hour | Validation fails |
| 24b | System displays inline error | "Refresh frequency must be between 1 and 24 hours" |
| 24c | Save button remains disabled | User must correct input |
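A minimal sketch of the validation behind steps 24a-24c (the function name is illustrative):
// Returns an error message for out-of-range values, or null when valid.
function validateRefreshFrequency(hours: number): string | null {
  if (!Number.isFinite(hours) || hours < 1 || hours > 24) {
    return 'Refresh frequency must be between 1 and 24 hours';
  }
  return null;
}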
A5: No New Data Available
| Step | Condition | Alternative Action |
|---|---|---|
| 12a | External API returns same data as last fetch | No changes detected |
| 14a | Edge function skips database update | No upsert operations performed |
| 15a | Edge function returns "no changes" response | HTTP 200 with message: "0 records updated" |
| 16a | System displays info notification | "Data refresh complete: No new data available" |
6.6.6 Exception Handling
| Exception | Cause | System Behavior | Recovery Action |
|---|---|---|---|
| EX-01 | External API authentication failure | API returns 401 Unauthorized | System displays error, alerts admin to check API credentials |
| EX-02 | Database connection failure during insert | Cannot write to flow_meter_data table | Edge function retries 3 times, then logs error and alerts admin |
| EX-03 | Malformed JSON from external API | API returns invalid data structure | Edge function logs error with raw response, returns error to frontend |
| EX-04 | Edge function deployment failure | Function not accessible or misconfigured | System displays: "Service temporarily unavailable. Contact support." |
| EX-05 | Rate limit exceeded on external API | API returns 429 Too Many Requests | System waits for rate limit reset, then retries automatically |
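For EX-05, one way to respect the provider's limits is to wait for the Retry-After header before retrying; a sketch under that assumption (the wrapper name and the 60-second fallback are illustrative):
// Hypothetical wrapper: on HTTP 429, wait for Retry-After, then retry once.
async function fetchWithRateLimitRetry(url: string, token: string): Promise<Response> {
  const headers = { Authorization: `Bearer ${token}` };
  const response = await fetch(url, { headers });
  if (response.status !== 429) return response;
  const waitSeconds = Number(response.headers.get('Retry-After') ?? '60');
  await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
  return fetch(url, { headers });
}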
6.6.7 Business Rules
| Rule ID | Business Rule |
|---|---|
| BR-01 | Flow meter data is automatically refreshed every 4 hours (configurable) |
| BR-02 | Manual refresh can only be triggered once per 15 minutes to avoid API rate limits |
| BR-03 | Historical flow meter data retained for 2 years, then archived |
| BR-04 | Anomalies are detected if usage exceeds 200% of the 30-day rolling average (see the sketch below) |
| BR-05 | Only admin and manager roles can trigger manual refresh or change settings |
| BR-06 | Failed refresh attempts are automatically retried once after 5 minutes |
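The BR-04 threshold check reduces to a single comparison against the rolling average; a minimal sketch (the function name is illustrative):
// Flags a day's usage that exceeds 200% of the 30-day rolling average (BR-04).
function isUsageAnomalous(dailyUsageLiters: number, last30Days: number[]): boolean {
  if (last30Days.length === 0) return false; // no baseline yet
  const average = last30Days.reduce((sum, v) => sum + v, 0) / last30Days.length;
  return dailyUsageLiters > 2 * average;
}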
6.6.8 External API Specification
Endpoint: https://api.flowmeter-provider.com/v1/readings
Authentication: Bearer token (stored in Supabase secrets)
Request Example:
GET /v1/readings?site_id=all&since=2025-12-01T00:00:00Z
Authorization: Bearer {token}
Response Example:
{
"status": "success",
"data": [
{
"meter_id": "FM-001",
"site_id": "SITE-A",
"volume_liters": 125000,
"timestamp": "2025-12-02T14:30:00Z",
"status": "online"
},
{
"meter_id": "FM-002",
"site_id": "SITE-B",
"volume_liters": 98500,
"timestamp": "2025-12-02T14:30:00Z",
"status": "online"
}
],
"total_meters": 45,
"fetched_at": "2025-12-02T14:30:15Z"
}
6.6.9 Performance Requirements
| Metric | Target | Measurement |
|---|---|---|
| Dashboard load time | < 3 seconds | Time to display table and charts |
| Manual refresh duration | < 15 seconds | Edge function execution + UI update |
| External API response time | < 10 seconds | API call latency |
| Chart rendering time | < 2 seconds | For 30 days of data across 10 sites |
| Anomaly detection latency | < 5 seconds | Post-refresh analysis |
This completes Section 6: Operational Scenarios with comprehensive coverage of all six key user workflows.
7. Detailed Design
This section provides comprehensive detailed design specifications for all system components, including hardware infrastructure, software architecture, security implementation, performance optimization, and internal communications.
7.1 Hardware Detailed Design
7.1.1 Infrastructure Overview
The Dustac Environmental Monitoring Dashboard utilizes a cloud-native infrastructure with no on-premises hardware requirements. All hardware resources are provisioned and managed by third-party cloud providers.
Infrastructure Architecture:
┌─────────────────────────────────────────────────────────────┐
│ Client Devices │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Desktop │ │ Laptop │ │ Tablet │ │ Mobile │ │
│ │ Windows │ │ macOS │ │ iOS │ │ Android │ │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ │
└───────┼─────────────┼─────────────┼─────────────┼──────────┘
│ │ │ │
└─────────────┴─────────────┴─────────────┘
│ HTTPS
▼
┌─────────────────────────────────────────────────────────────┐
│ Cloudflare Global Network (CDN) │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Edge Locations: 300+ cities worldwide │ │
│ │ - DDoS Protection (Unmetered) │ │
│ │ - Web Application Firewall (WAF) │ │
│ │ - SSL/TLS Termination │ │
│ │ - Static Asset Caching │ │
│ │ - Bot Management │ │
│ └──────────────────────────────────────────────────────┘ │
└────────────────────────┬────────────────────────────────────┘
│ HTTPS
▼
┌─────────────────────────────────────────────────────────────┐
│ Supabase Infrastructure (AWS) │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Compute (Deno Edge Functions) │ │
│ │ - Runtime: Deno 1.x │ │
│ │ - Auto-scaling: 0-1000 concurrent executions │ │
│ │ - Memory: 512MB per function │ │
│ │ - Timeout: 60 seconds │ │
│ └──────────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Database (AWS RDS - PostgreSQL 15.x) │ │
│ │ - Instance Type: db.t4g.medium │ │
│ │ - vCPU: 2 cores (ARM Graviton2) │ │
│ │ - Memory: 4 GB │ │
│ │ - Storage: 100 GB SSD (gp3) │ │
│ │ - IOPS: 3000 baseline │ │
│ │ - Backup: Automated daily snapshots (7-day retain) │ │
│ └──────────────────────────────────────────────────────┘ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ Storage (AWS S3) │ │
│ │ - Bucket: supabase-storage-[project-id] │ │
│ │ - Redundancy: S3 Standard (99.999999999% durable) │ │
│ │ - Versioning: Enabled │ │
│ │ - Lifecycle: 90-day retention → Glacier │ │
│ └──────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
7.1.2 Client Device Requirements
Minimum Specifications:
| Component | Minimum Requirement | Recommended |
|---|---|---|
| Processor | Dual-core 1.6 GHz | Quad-core 2.4 GHz+ |
| RAM | 4 GB | 8 GB+ |
| Storage | 500 MB free space | 2 GB+ |
| Display | 1280x720 resolution | 1920x1080+ |
| Network | 5 Mbps download, 1 Mbps upload | 25 Mbps+ download, 5 Mbps+ upload |
| Browser | Chrome 90+, Firefox 88+, Safari 14+, Edge 90+ | Latest stable versions |
| Operating System | Windows 10, macOS 10.15, iOS 13, Android 10 | Latest OS versions |
Browser Requirements:
- JavaScript enabled
- Cookies and local storage enabled
- HTML5 Canvas API support
- WebSocket support (for real-time features)
- Minimum viewport: 320px width (mobile responsive)
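A client-side capability check corresponding to this list might look like the following sketch (the function is illustrative, not part of the current codebase):
// Reports which of the required browser features are missing.
function detectMissingFeatures(): string[] {
  const missing: string[] = [];
  if (typeof WebSocket === 'undefined') missing.push('WebSocket');
  if (typeof window.localStorage === 'undefined') missing.push('Local storage');
  if (!navigator.cookieEnabled) missing.push('Cookies');
  const canvas = document.createElement('canvas');
  if (!canvas.getContext || !canvas.getContext('2d')) missing.push('HTML5 Canvas');
  return missing; // empty array means the device meets the requirements
}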
7.1.3 Cloud Infrastructure Specifications
Cloudflare Pages (Frontend Hosting):
| Resource | Specification |
|---|---|
| Compute | Serverless (V8 Isolates) |
| Memory | 128 MB per request |
| Execution Time | 30 seconds per request |
| Concurrent Requests | Unlimited (auto-scaling) |
| Bandwidth | Unlimited |
| SSL/TLS | Automatic HTTPS, TLS 1.3 |
| HTTP Version | HTTP/2, HTTP/3 (QUIC) support |
| Cache TTL | 4 hours (static assets) |
| Geographic Distribution | 300+ edge locations |
Supabase Database (AWS RDS PostgreSQL):
| Resource | Specification | Scaling Plan |
|---|---|---|
| Instance Class | db.t4g.medium | Upgrade to db.m6g.large at 10K users |
| CPU | 2 vCPU (ARM Graviton2) | 2 → 4 vCPU at scale |
| Memory | 4 GB | 4 → 16 GB at scale |
| Storage | 100 GB SSD (gp3) | Auto-scaling enabled (max 1 TB) |
| IOPS | 3000 baseline, 16000 burst | Scales with storage |
| Max Connections | 100 concurrent | 100 → 500 with connection pooling |
| Backup Retention | 7 days automated | 30 days for production |
| Multi-AZ | Disabled (dev), Enabled (prod) | HA for production workloads |
| Encryption | AES-256 at rest | AWS KMS managed keys |
Supabase Storage (AWS S3):
| Resource | Specification | Limits |
|---|---|---|
| Storage Class | S3 Standard | Hot data access |
| Redundancy | 99.999999999% durability | Multi-AZ replication |
| Availability | 99.99% SLA | |
| Max Object Size | 5 TB | Practical limit: 50 MB per file |
| Throughput | 3,500 PUT/COPY/POST/DELETE per prefix per second | Auto-scaling |
| Throughput | 5,500 GET/HEAD per prefix per second | Auto-scaling |
| Encryption | AES-256 (SSE-S3) | Server-side encryption |
| Versioning | Enabled | 90-day retention |
| Lifecycle Policy | 90 days → Glacier | Archive old uploads |
Supabase Edge Functions (Deno Runtime):
| Resource | Specification |
|---|---|
| Runtime | Deno 1.38+ (TypeScript native) |
| Memory | 512 MB per execution |
| Timeout | 60 seconds |
| Cold Start | ~100-300ms |
| Warm Execution | ~5-20ms |
| Concurrent Executions | 1000 per function |
| Environment Variables | Stored in Supabase Secrets (encrypted) |
| External API Calls | Allowed (fetch API) |
7.1.4 Network Architecture
Network Topology:
Internet
│
├─── Cloudflare Edge (Layer 7)
│ ├─── DDoS Protection (Unmetered)
│ ├─── WAF Rules (OWASP Core Ruleset)
│ ├─── Rate Limiting (100 req/min per IP)
│ └─── SSL/TLS Termination (TLS 1.3)
│
├─── Cloudflare Pages (Static Assets)
│ ├─── React SPA (Vite Build)
│ ├─── Cache-Control: public, max-age=14400
│ └─── Brotli Compression
│
└─── Supabase API Gateway (AWS API Gateway)
├─── Authentication (JWT Verification)
├─── Authorization (RLS Enforcement)
├─── Rate Limiting (1000 req/min per user)
│
├─── Database Connections (PostgreSQL)
│ ├─── Connection Pool (PgBouncer)
│ ├─── SSL/TLS Required (sslmode=require)
│ └─── Private Subnet (VPC Isolated)
│
├─── Storage API (S3 Presigned URLs)
│ ├─── Time-limited URLs (15 min expiry)
│ ├─── CORS Headers
│ └─── Private Buckets
│
└─── Edge Functions (Deno Workers)
├─── HTTP/HTTPS Only
├─── JWT Auth Required
└─── Service Role Key for Internal Calls
Network Security:
| Layer | Security Measure | Implementation |
|---|---|---|
| DNS | DNSSEC | Cloudflare DNS with DNSSEC validation |
| Transport | TLS 1.3 | Mandatory HTTPS, HSTS enabled (max-age=31536000) |
| Application | WAF | Cloudflare WAF with OWASP rules |
| DDoS | Layer 3/4/7 Protection | Cloudflare unmetered DDoS protection |
| Rate Limiting | Per-IP and Per-User | 100 req/min (IP), 1000 req/min (user) |
| Firewall | Geo-blocking | Optional: Allow-list specific countries |
7.1.5 Disaster Recovery and Business Continuity
Backup Strategy:
| Component | Backup Frequency | Retention | Recovery Time Objective (RTO) | Recovery Point Objective (RPO) |
|---|---|---|---|---|
| Database | Daily automated snapshots | 7 days (dev), 30 days (prod) | < 4 hours | < 24 hours |
| Database | Point-in-time recovery | 7 days | < 1 hour | < 15 minutes |
| Storage (S3) | Continuous (versioning) | 90 days | < 1 hour | Real-time |
| Application Code | Git commits | Indefinite | < 30 minutes | < 1 hour |
| Configuration | Infrastructure-as-Code | Version controlled | < 30 minutes | Real-time |
High Availability Configuration:
| Component | HA Strategy | Failover Method |
|---|---|---|
| Frontend (Cloudflare) | Multi-region edge caching | Automatic (anycast routing) |
| Database (RDS) | Multi-AZ deployment (prod) | Automatic failover (60-120 seconds) |
| Storage (S3) | Multi-AZ replication | Automatic (transparent to application) |
| Edge Functions | Multi-region deployment | Automatic (Supabase routing) |
Disaster Recovery Plan:
Database Failure:
- Detection: Automated health checks every 30 seconds
- Alerting: PagerDuty notification to on-call engineer
- Action: Automatic failover to standby replica (Multi-AZ)
- Recovery Time: 60-120 seconds
Storage Failure:
- Detection: S3 service health monitoring
- Action: S3 automatically handles failover (transparent)
- Recovery Time: < 1 minute
Complete Region Failure:
- Detection: Multi-region health checks
- Action: Manual DNS update to secondary region
- Recovery Time: 2-4 hours (manual intervention required)
Data Corruption:
- Detection: Application monitoring, user reports
- Action: Restore from point-in-time backup
- Recovery Time: 1-4 hours depending on data volume
7.2 Software Detailed Design
7.2.1 Frontend Component Architecture
Component Hierarchy:
App (Root)
│
├── Routes (React Router v7)
│ ├── PublicLayout
│ │ ├── LandingPage
│ │ ├── LoginPage
│ │ └── RegisterPage
│ │
│ └── ProtectedLayout (requires auth)
│ ├── Header
│ │ ├── Logo
│ │ ├── MainNavigation
│ │ ├── UserMenu
│ │ └── NotificationBell
│ │
│ ├── Sidebar
│ │ ├── NavigationMenu
│ │ ├── QuickActions
│ │ └── UserProfile
│ │
│ └── MainContent (Outlet)
│ ├── DashboardPage
│ │ ├── KPICards (4)
│ │ ├── UsageChart (Nivo Line)
│ │ ├── SiteStatusTable
│ │ └── RecentActivityFeed
│ │
│ ├── UploadPage
│ │ ├── UploadWizard
│ │ │ ├── Step1_FileSelection
│ │ │ ├── Step2_SiteConfiguration
│ │ │ ├── Step3_Validation
│ │ │ └── Step4_Upload
│ │ ├── DropZone (react-dropzone)
│ │ ├── FileList
│ │ ├── ValidationResults
│ │ └── ProgressBar
│ │
│ ├── ReportsPage
│ │ ├── ReportGenerator
│ │ │ ├── DateRangePicker
│ │ │ ├── DeviceMultiSelect
│ │ │ ├── TemplateSelector
│ │ │ └── GenerateButton
│ │ ├── ReportPreview
│ │ │ ├── DataSummaryTable
│ │ │ └── PreviewChart (Recharts)
│ │ ├── ReportList
│ │ └── ReportCard (per report)
│ │
│ ├── DustLevelsPage
│ │ ├── FilterPanel
│ │ │ ├── SiteFilter
│ │ │ ├── DeviceFilter
│ │ │ ├── DateRangeFilter
│ │ │ └── ThresholdFilter
│ │ ├── DataGrid (100 rows paginated)
│ │ ├── PaginationControls
│ │ ├── SummaryStatistics
│ │ ├── TrendChart (Recharts Line)
│ │ └── ExportButton
│ │
│ ├── WeeklyReportsPage
│ │ ├── ReportListView
│ │ ├── ReportEditor
│ │ │ ├── SectionTabs (8 sections)
│ │ │ ├── Section1_SiteObservations
│ │ │ │ └── ObservationForm (repeatable)
│ │ │ ├── Section2_FlowMeterUsage
│ │ │ │ └── UsageForm (repeatable)
│ │ │ ├── Section3_DashboardUpdates
│ │ │ ├── Section4_VendorActivities
│ │ │ ├── Section5_WaterTruckTesting
│ │ │ ├── Section6_HardwareInstallations
│ │ │ ├── Section7_AdminReporting
│ │ │ └── Section8_OtherTasks
│ │ ├── AutoSaveIndicator
│ │ ├── ValidationSummary
│ │ └── SubmitButton
│ │
│ └── FlowMetersPage
│ ├── FlowMeterDashboard
│ │ ├── LastRefreshTimestamp
│ │ ├── RefreshButton
│ │ ├── MeterDataTable
│ │ └── UsageChart (Nivo Bar)
│ ├── AnalyticsTab
│ │ ├── KPICards
│ │ ├── TrendChart (30 days)
│ │ └── SiteComparisonChart
│ ├── AlertsTab
│ │ ├── AlertBanner
│ │ └── AlertList
│ └── SettingsModal
│
└── GlobalProviders
├── AuthProvider (Supabase Auth)
├── QueryClientProvider (React Query - future)
└── ToastProvider (react-hot-toast)
7.2.2 Key Component Specifications
Component: UploadWizard
Responsibility: Orchestrate multi-step CSV file upload process with validation
Props Interface:
interface UploadWizardProps {
onUploadComplete: (uploadId: string) => void;
onCancel: () => void;
maxFiles?: number; // default: 10
maxFileSize?: number; // default: 50MB
}
State Management:
interface UploadWizardState {
currentStep: 1 | 2 | 3 | 4;
selectedFiles: File[];
siteConfig: {
siteId: string;
siteName: string;
uploadType: 'daily' | 'weekly' | 'monthly';
periodStart: Date;
periodEnd: Date;
};
validationResults: ValidationResult[];
uploadProgress: Map<string, number>; // filename -> progress %
errors: UploadError[];
}
Key Methods:
handleFileSelection(files: File[]): void
validateFiles(): Promise<ValidationResult[]>
configureSite(config: SiteConfig): void
executeUpload(): Promise<UploadResult>
handleUploadProgress(filename: string, progress: number): void
Component Lifecycle:
- Mount: Initialize wizard state, fetch user's mine sites
- Step 1: User selects files → Validate file types and sizes
- Step 2: User configures site and date range → Store configuration
- Step 3: System validates CSV structure → Display validation results
- Step 4: Execute upload → Show progress bars → Complete
- Unmount: Clean up file references
Error Handling:
- Client-side validation errors: Display inline with retry option
- Network errors: Implement exponential backoff retry (3 attempts)
- Server errors: Display user-friendly message with support contact
Component: ReportGenerator
Responsibility: Configure and generate PDF reports from dust monitoring data
Props Interface:
interface ReportGeneratorProps {
onReportGenerated: (reportId: string) => void;
onCancel: () => void;
preselectedSite?: string;
preselectedDateRange?: { start: Date; end: Date };
}
State Management:
interface ReportGeneratorState {
selectedSite: string | null;
selectedDevices: string[];
dateRange: { start: Date; end: Date };
template: 'standard' | 'detailed' | 'summary';
dataPreview: {
recordCount: number;
dateRange: { min: Date; max: Date };
statistics: {
avgPM25: number;
maxPM25: number;
avgPM10: number;
maxPM10: number;
};
} | null;
isGenerating: boolean;
generationProgress: number; // 0-100
currentPage: number;
totalPages: number;
}
Key Methods:
fetchDataPreview(): Promise<DataPreview>
validateSelections(): boolean
generateReport(): Promise<ReportResult>
captureReportComponent(component: ReactElement): Promise<ImageBlob>
assemblePDF(images: ImageBlob[]): Promise<PDFBlob>
uploadToStorage(pdf: PDFBlob): Promise<StoragePath>
saveReportMetadata(metadata: ReportMetadata): Promise<ReportId>
PDF Generation Algorithm:
async function generateReport(): Promise<ReportResult> {
// Phase 1: Fetch data
const data = await fetchMeasurements(filters);
// Phase 2: Render React components (hidden)
const components = [
<CoverPage {...data} />,
<SummaryPage {...data} />,
<PM25ChartPage data={data.pm25} />,
<PM10ChartPage data={data.pm10} />,
<DataTablePage data={data.measurements} />
];
// Phase 3: Capture as images. html2canvas captures mounted DOM nodes,
// not JSX elements, so each component is first rendered into a hidden
// container (renderOffscreen is an assumed app-specific helper that
// returns that container element).
const images = await Promise.all(
components.map(async comp => html2canvas(await renderOffscreen(comp), {
scale: 2, // 2x for high quality
useCORS: true,
logging: false
}))
);
// Phase 4: Assemble PDF
const pdf = new jsPDF('portrait', 'mm', 'a4');
images.forEach((img, index) => {
if (index > 0) pdf.addPage();
pdf.addImage(img, 'PNG', 0, 0, 210, 297); // A4 dimensions
});
// Phase 5: Save
const blob = pdf.output('blob');
const storagePath = await uploadToStorage(blob);
const reportId = await saveMetadata({
storagePath,
fileSize: blob.size,
pageCount: images.length,
...filters
});
return { reportId, storagePath, fileSize: blob.size };
}
Performance Optimization:
- Use React.memo for chart components to avoid re-renders
- Render charts with fixed dimensions to prevent layout thrashing
- Process images in batches of 3 to manage memory usage (see the sketch below)
- Use Web Workers for heavy computation (future enhancement)
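A sketch of the batched capture from the third point above, using the same html2canvas library as the generation algorithm (element preparation via the assumed renderOffscreen helper is taken as already done):
import html2canvas from 'html2canvas';

// Capture page elements three at a time so only a few full-page
// canvases are held in memory at once.
async function captureInBatches(
  elements: HTMLElement[],
  batchSize = 3
): Promise<HTMLCanvasElement[]> {
  const canvases: HTMLCanvasElement[] = [];
  for (let i = 0; i < elements.length; i += batchSize) {
    const batch = elements.slice(i, i + batchSize);
    const captured = await Promise.all(
      batch.map(el => html2canvas(el, { scale: 2, useCORS: true, logging: false }))
    );
    canvases.push(...captured);
  }
  return canvases;
}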
7.2.3 State Management Architecture
Zustand Store Structure:
// stores/authStore.ts
interface AuthState {
user: User | null;
session: Session | null;
isAuthenticated: boolean;
isLoading: boolean;
// Actions
setUser: (user: User) => void;
setSession: (session: Session) => void;
signOut: () => Promise<void>;
refreshSession: () => Promise<void>;
}
// stores/uploadStore.ts
interface UploadState {
currentUpload: Upload | null;
uploads: Upload[];
isUploading: boolean;
// Actions
createUpload: (files: File[]) => Promise<string>;
updateUploadProgress: (uploadId: string, progress: number) => void;
fetchUploads: () => Promise<void>;
cancelUpload: (uploadId: string) => void;
}
// stores/filterStore.ts
interface FilterState {
dustLevels: {
sites: string[];
devices: string[];
dateRange: { start: Date; end: Date };
pm25Range: { min: number; max: number };
};
// Actions
updateFilters: (filters: Partial<FilterState['dustLevels']>) => void;
resetFilters: () => void;
}
Store Usage Pattern:
// In component
import { useAuthStore } from '@/stores/authStore';
function ProtectedPage() {
const { user, isAuthenticated, signOut } = useAuthStore();
if (!isAuthenticated) {
return <Navigate to="/login" />;
}
return (
<div>
<h1>Welcome, {user.full_name}</h1>
<button onClick={signOut}>Sign Out</button>
</div>
);
}
7.2.4 Backend Service Architecture
Supabase Edge Functions:
Function: trigger-flow-meter-scraper
Purpose: Fetch latest flow meter readings from external API and import to database
Invocation: HTTP POST or scheduled cron job
Implementation:
// supabase/functions/trigger-flow-meter-scraper/index.ts
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts'
import { createClient } from 'https://esm.sh/@supabase/supabase-js@2'
interface FlowMeterReading {
meter_id: string;
site_id: string;
volume_liters: number;
timestamp: string;
status: 'online' | 'offline';
}
serve(async (req) => {
try {
// 1. Authenticate
const authHeader = req.headers.get('Authorization');
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return new Response('Unauthorized', { status: 401 });
}
// 2. Fetch from external API
const apiUrl = Deno.env.get('FLOW_METER_API_URL')!;
const apiToken = Deno.env.get('FLOW_METER_API_TOKEN')!;
const response = await fetch(`${apiUrl}/readings?site_id=all`, {
headers: {
'Authorization': `Bearer ${apiToken}`,
'Content-Type': 'application/json'
},
signal: AbortSignal.timeout(30000) // 30 second timeout
});
if (!response.ok) {
throw new Error(`API error: ${response.status}`);
}
const { data } = await response.json();
const readings: FlowMeterReading[] = data;
// 3. Validate data
const validReadings = readings.filter(r =>
r.meter_id && r.volume_liters >= 0 && r.timestamp
);
// 4. Upsert to database
const supabase = createClient(
Deno.env.get('SUPABASE_URL')!,
Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')!
);
const { data: inserted, error } = await supabase
.from('flow_meter_data')
.upsert(
validReadings.map(r => ({
meter_id: r.meter_id,
site_id: r.site_id,
volume_liters: r.volume_liters,
timestamp: r.timestamp,
status: r.status,
updated_at: new Date().toISOString()
})),
{ onConflict: 'meter_id,timestamp' }
);
if (error) throw error;
// 5. Log activity
await supabase
.from('activity_logs')
.insert({
user_id: null,
action: 'flow_meter_auto_refresh',
resource_type: 'flow_meter',
details: {
records_fetched: readings.length,
records_inserted: inserted?.length || 0,
timestamp: new Date().toISOString()
}
});
return new Response(
JSON.stringify({
success: true,
records_updated: inserted?.length || 0,
skipped: readings.length - validReadings.length
}),
{ status: 200, headers: { 'Content-Type': 'application/json' } }
);
} catch (error) {
console.error('Flow meter scraper error:', error);
return new Response(
JSON.stringify({ error: error.message }),
{ status: 500, headers: { 'Content-Type': 'application/json' } }
);
}
})
Deployment Configuration:
# Deploy function
supabase functions deploy trigger-flow-meter-scraper
# Set secrets
supabase secrets set FLOW_METER_API_URL=https://api.flowmeter-provider.com/v1
supabase secrets set FLOW_METER_API_TOKEN=<secret-token>
# Configure cron schedule (every 4 hours)
# In Supabase Dashboard: Functions → trigger-flow-meter-scraper → Cron
# Schedule: 0 */4 * * * (every 4 hours)
7.3 Security Detailed Design
7.3.1 Authentication Implementation
Supabase Auth Configuration:
// lib/supabase.ts
import { createClient } from '@supabase/supabase-js';
const supabaseUrl = import.meta.env.VITE_SUPABASE_URL;
const supabaseAnonKey = import.meta.env.VITE_SUPABASE_ANON_KEY;
export const supabase = createClient(supabaseUrl, supabaseAnonKey, {
auth: {
autoRefreshToken: true,
persistSession: true,
detectSessionInUrl: true,
storage: window.localStorage, // or AsyncStorage for React Native
flowType: 'pkce' // PKCE flow for enhanced security
},
global: {
headers: {
'X-Client-Info': 'dustac-dashboard@1.0.0'
}
}
});
JWT Token Structure:
{
"aud": "authenticated",
"exp": 1733184000,
"iat": 1733180400,
"iss": "https://hwogexspejrzvadfjmce.supabase.co/auth/v1",
"sub": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"email": "user@example.com",
"phone": "",
"app_metadata": {
"provider": "email",
"providers": ["email"]
},
"user_metadata": {
"full_name": "John Doe",
"organization": "Mining Corp"
},
"role": "authenticated",
"aal": "aal1",
"amr": [
{
"method": "password",
"timestamp": 1733180400
}
],
"session_id": "x9y8z7w6-v5u4-t3s2-r1q0-p9o8n7m6l5k4"
}
Password Policy Enforcement:
| Requirement | Validation Rule | Error Message |
|---|---|---|
| Minimum Length | >= 12 characters | "Password must be at least 12 characters long" |
| Uppercase | >= 1 uppercase letter | "Password must contain at least one uppercase letter" |
| Lowercase | >= 1 lowercase letter | "Password must contain at least one lowercase letter" |
| Digit | >= 1 number | "Password must contain at least one number" |
| Special Character | >= 1 of !@#$%^&*()_+-=[]{};':"\|,.<>/? | "Password must contain at least one special character" |
| No Common Passwords | Not in HIBP database | "Password is too common. Please choose a stronger password" |
Implementation:
import zxcvbn from 'zxcvbn';
function validatePassword(password: string): ValidationResult {
const errors: string[] = [];
if (password.length < 12) {
errors.push("Password must be at least 12 characters long");
}
if (!/[A-Z]/.test(password)) {
errors.push("Password must contain at least one uppercase letter");
}
if (!/[a-z]/.test(password)) {
errors.push("Password must contain at least one lowercase letter");
}
if (!/[0-9]/.test(password)) {
errors.push("Password must contain at least one number");
}
if (!/[!@#$%^&*()_+\-=\[\]{}\\|;:'",.<>/?]/.test(password)) {
errors.push("Password must contain at least one special character");
}
// Check password strength using zxcvbn
const strength = zxcvbn(password);
if (strength.score < 3) {
errors.push(`Password is too weak. ${strength.feedback.suggestions.join(' ')}`);
}
return {
isValid: errors.length === 0,
errors,
strength: strength.score
};
}
7.3.2 Authorization and Row-Level Security (RLS)
RLS Policy Design Pattern:
-- Pattern 1: User owns the resource directly
CREATE POLICY "Users can view own uploads"
ON uploads FOR SELECT
USING (auth.uid() = user_id);
-- Pattern 2: User owns resource through relationship
CREATE POLICY "Users can view own measurements"
ON measurements FOR SELECT
USING (
EXISTS (
SELECT 1 FROM uploads
WHERE uploads.id = measurements.upload_id
AND uploads.user_id = auth.uid()
)
);
-- Pattern 3: Role-based access (Admin sees all)
CREATE POLICY "Admins can view all uploads"
ON uploads FOR ALL
USING (
EXISTS (
SELECT 1 FROM user_profiles
WHERE id = auth.uid()
AND role = 'admin'
)
);
-- Pattern 4: Public read access (reports)
CREATE POLICY "All authenticated users can view reports"
ON reports FOR SELECT
USING (auth.uid() IS NOT NULL);
-- Pattern 5: Organization-level access
CREATE POLICY "Users can view organization data"
ON uploads FOR SELECT
USING (
EXISTS (
SELECT 1 FROM user_profiles up1
JOIN user_profiles up2 ON up1.organization = up2.organization
WHERE up1.id = auth.uid()
AND up2.id = uploads.user_id
)
);
RLS Policy Testing:
-- Test as specific user
SET LOCAL role = authenticated;
SET LOCAL request.jwt.claims = '{"sub": "user-uuid-here"}';
-- Verify user can only see their own uploads
SELECT * FROM uploads; -- Should return only user's uploads
-- Test as admin
SET LOCAL request.jwt.claims = '{"sub": "admin-uuid-here"}';
SELECT * FROM uploads; -- Should return all uploads
7.3.3 Data Encryption
Encryption at Rest:
| Data Type | Encryption Method | Key Management |
|---|---|---|
| Database | AES-256 (RDS encryption) | AWS KMS managed keys |
| File Storage | AES-256 (S3 SSE-S3) | AWS S3 managed keys |
| Backups | AES-256 (automated) | AWS KMS managed keys |
| Secrets | AES-256-GCM | Supabase Vault |
Encryption in Transit:
| Connection | Protocol | Cipher Suite |
|---|---|---|
| Client → Cloudflare | TLS 1.3 | TLS_AES_256_GCM_SHA384 |
| Cloudflare → Supabase | TLS 1.2+ | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 |
| Application → Database | TLS 1.2+ (sslmode=require) | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 |
| Application → S3 | TLS 1.2+ | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 |
Sensitive Data Handling:
// Never log sensitive data
function uploadFile(file: File, authToken: string) {
// ❌ BAD
console.log('Uploading file with token:', authToken);
// ✅ GOOD
console.log('Uploading file:', file.name);
// Use environment variables for secrets
const apiKey = import.meta.env.VITE_API_KEY; // Client-side (public)
const serviceKey = Deno.env.get('SERVICE_ROLE_KEY'); // Server-side (secret)
}
// Sanitize user input before logging
function logActivity(userInput: string) {
const sanitized = userInput.replace(/[^\w\s-]/g, '');
console.log('User input:', sanitized);
}
7.3.4 Input Validation and Sanitization
Client-Side Validation (Zod Schema):
import { z } from 'zod';
// Upload form validation
const uploadSchema = z.object({
siteId: z.string().uuid('Invalid site ID'),
siteName: z.string().min(1, 'Site name is required').max(255),
uploadType: z.enum(['daily', 'weekly', 'monthly']),
periodStart: z.date().max(new Date(), 'Start date cannot be in the future'),
periodEnd: z.date(),
files: z.array(
z.instanceof(File)
.refine(file => file.size <= 50 * 1024 * 1024, 'File size must be less than 50MB')
.refine(file => file.name.endsWith('.csv'), 'Only CSV files are allowed')
).min(1, 'At least one file is required').max(10, 'Maximum 10 files allowed')
}).refine(
data => data.periodEnd >= data.periodStart,
{ message: 'End date must be after start date', path: ['periodEnd'] }
);
// Usage
try {
const validated = uploadSchema.parse(formData);
// Proceed with upload
} catch (error) {
if (error instanceof z.ZodError) {
// Display validation errors
error.errors.forEach(err => {
console.error(`${err.path.join('.')}: ${err.message}`);
});
}
}
Server-Side Validation (PostgreSQL Constraints):
-- Check constraints
ALTER TABLE uploads
ADD CONSTRAINT check_upload_dates
CHECK (period_end >= period_start);
ALTER TABLE measurements
ADD CONSTRAINT check_pm25_range
CHECK (massconcentration2p5 >= 0 AND massconcentration2p5 <= 10000);
-- Domain constraints
CREATE DOMAIN email_address AS TEXT
CHECK (VALUE ~ '^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$');
ALTER TABLE user_profiles
ALTER COLUMN email TYPE email_address;
SQL Injection Prevention:
// ❌ BAD: String concatenation (vulnerable to SQL injection)
async function getUserByEmail(email: string) {
const { data } = await supabase.rpc('get_user', {
query: `SELECT * FROM users WHERE email = '${email}'`
});
return data;
}
// ✅ GOOD: Parameterized queries (Supabase automatically escapes)
async function getUserByEmail(email: string) {
const { data } = await supabase
.from('user_profiles')
.select('*')
.eq('email', email)
.single();
return data;
}
XSS Prevention:
// React automatically escapes JSX expressions
function UserProfile({ user }: { user: User }) {
// ✅ SAFE: React escapes HTML entities
return <h1>Welcome, {user.full_name}</h1>;
// ❌ DANGEROUS: dangerouslySetInnerHTML bypasses escaping
return <div dangerouslySetInnerHTML={{ __html: user.bio }} />;
// ✅ SAFE: Sanitize HTML before rendering
import DOMPurify from 'dompurify';
const sanitizedBio = DOMPurify.sanitize(user.bio);
return <div dangerouslySetInnerHTML={{ __html: sanitizedBio }} />;
}
7.3.5 Security Testing Requirements
Automated Security Testing:
| Test Type | Tool | Frequency | Threshold |
|---|---|---|---|
| Dependency Vulnerability Scan | npm audit, Snyk | Every commit (CI/CD) | 0 high/critical vulnerabilities |
| Static Application Security Testing (SAST) | ESLint security plugins | Every commit | 0 errors |
| Secrets Detection | GitGuardian | Every commit | 0 secrets exposed |
| SQL Injection Testing | SQLMap (penetration test) | Monthly | 0 vulnerabilities |
| XSS Testing | Burp Suite (penetration test) | Monthly | 0 vulnerabilities |
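As an example of the SAST row above, a minimal ESLint configuration wiring in a security plugin (eslint-plugin-security is one common choice; treat the exact plugin and ruleset names as assumptions):
// .eslintrc.cjs — sketch of a security-focused lint pass run on every commit
module.exports = {
  plugins: ['security'],
  extends: ['plugin:security/recommended'],
};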
Manual Security Review Checklist:
- [ ] All API endpoints require authentication
- [ ] RLS policies tested for each table
- [ ] File uploads validated for type and size
- [ ] User input sanitized before database insertion
- [ ] Sensitive data (passwords, tokens) never logged
- [ ] HTTPS enforced (HSTS header set)
- [ ] CORS configured with specific origins (no wildcard)
- [ ] Rate limiting enabled for all public endpoints
- [ ] Error messages don't leak sensitive information
- [ ] Session timeout configured (1 hour)
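For the session-timeout item, one client-side enforcement sketch using the shared client from lib/supabase.ts (the monitored events and the sign-out-on-idle behavior are assumptions; server-side JWT expiry still applies):
import { supabase } from '@/lib/supabase';

// Sign the user out after 1 hour of inactivity.
const SESSION_TIMEOUT_MS = 60 * 60 * 1000;
let idleTimer: ReturnType<typeof setTimeout> | undefined;

function armIdleTimer(): void {
  if (idleTimer) clearTimeout(idleTimer);
  idleTimer = setTimeout(() => { void supabase.auth.signOut(); }, SESSION_TIMEOUT_MS);
}

['mousemove', 'keydown', 'click'].forEach(eventName =>
  window.addEventListener(eventName, armIdleTimer)
);
armIdleTimer();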
7.4 Performance Detailed Design
7.4.1 Frontend Performance Optimization
Code Splitting Strategy:
// routes/Routes.tsx
import { lazy, Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';
// Eager load (included in main bundle)
import LoginPage from '@/pages/LoginPage';
import RegisterPage from '@/pages/RegisterPage';
// Lazy load (separate chunks)
const DashboardPage = lazy(() => import('@/pages/DashboardPage'));
const UploadPage = lazy(() => import('@/pages/UploadPage'));
const ReportsPage = lazy(() => import('@/pages/ReportsPage'));
const DustLevelsPage = lazy(() => import('@/pages/DustLevelsPage'));
const WeeklyReportsPage = lazy(() => import('@/pages/WeeklyReportsPage'));
const FlowMetersPage = lazy(() => import('@/pages/FlowMetersPage'));
function AppRoutes() {
return (
<BrowserRouter>
<Suspense fallback={<LoadingSpinner />}>
<Routes>
<Route path="/login" element={<LoginPage />} />
<Route path="/register" element={<RegisterPage />} />
<Route path="/dashboard" element={<DashboardPage />} />
<Route path="/upload" element={<UploadPage />} />
<Route path="/reports" element={<ReportsPage />} />
<Route path="/dust-levels" element={<DustLevelsPage />} />
<Route path="/weekly-reports" element={<WeeklyReportsPage />} />
<Route path="/flow-meters" element={<FlowMetersPage />} />
</Routes>
</Suspense>
</BrowserRouter>
);
}
Bundle Size Optimization:
| Optimization | Implementation | Savings |
|---|---|---|
| Tree Shaking | Vite automatic | ~15% bundle size |
| Code Splitting | React.lazy() | Load time: 40% faster |
| Dynamic Imports | import() for large libraries | ~200 KB initial bundle reduction |
| Minification | Vite (esbuild) | ~30% size reduction |
| Compression | Brotli (Cloudflare) | ~70% transfer size reduction |
Expected Bundle Sizes:
| Chunk | Size (uncompressed) | Size (Brotli) | Load Time (4G) |
|---|---|---|---|
| Main (vendor) | 450 KB | 135 KB | ~1.2s |
| Dashboard | 80 KB | 24 KB | ~200ms |
| Reports | 120 KB | 36 KB | ~300ms |
| Upload | 60 KB | 18 KB | ~150ms |
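One lever for keeping the vendor chunk near these targets is Rollup's manualChunks option in the Vite config; a sketch (the library groupings are examples, not a fixed list):
// vite.config.ts
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          // Split heavy chart libraries out of the main vendor chunk
          charts: ['recharts', '@nivo/line', '@nivo/bar'],
          react: ['react', 'react-dom', 'react-router-dom'],
        },
      },
    },
  },
});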
React Component Optimization:
// Use React.memo for expensive components
const DataGrid = React.memo(({ data }: { data: Measurement[] }) => {
return (
<table>
{data.map(row => <DataRow key={row.id} row={row} />)}
</table>
);
}, (prevProps, nextProps) => {
// Custom comparison function
return prevProps.data.length === nextProps.data.length
&& prevProps.data[0]?.id === nextProps.data[0]?.id;
});
// Use useMemo for expensive calculations
function DashboardKPIs({ measurements }: { measurements: Measurement[] }) {
const statistics = useMemo(() => {
return {
avgPM25: measurements.reduce((sum, m) => sum + m.massconcentration2p5, 0) / measurements.length,
maxPM25: Math.max(...measurements.map(m => m.massconcentration2p5)),
avgPM10: measurements.reduce((sum, m) => sum + m.massconcentration10p0, 0) / measurements.length,
maxPM10: Math.max(...measurements.map(m => m.massconcentration10p0))
};
}, [measurements]);
return (
<div>
<KPICard title="Avg PM2.5" value={statistics.avgPM25} />
<KPICard title="Max PM2.5" value={statistics.maxPM25} />
</div>
);
}
// Use useCallback for event handlers passed as props
function ParentComponent() {
const [count, setCount] = useState(0);
const handleClick = useCallback(() => {
setCount(c => c + 1);
}, []); // Empty deps: function never changes
return <ChildComponent onClick={handleClick} />;
}
7.4.2 Database Performance Optimization
Query Optimization:
-- ❌ BAD: N+1 query problem
-- Frontend fetches uploads, then for each upload fetches csv_files
SELECT * FROM uploads WHERE user_id = $1;
-- Then for each upload:
SELECT * FROM csv_files WHERE upload_id = $2;
-- ✅ GOOD: Single query with JOIN
SELECT
u.id,
u.upload_timestamp,
u.status,
json_agg(
json_build_object(
'id', cf.id,
'filename', cf.filename,
'status', cf.status,
'record_count', cf.record_count
)
) AS csv_files
FROM uploads u
LEFT JOIN csv_files cf ON cf.upload_id = u.id
WHERE u.user_id = $1
GROUP BY u.id
ORDER BY u.upload_timestamp DESC
LIMIT 50;
Index Strategy:
-- Single-column indexes (already created in schema)
CREATE INDEX idx_measurements_time ON measurements(time);
CREATE INDEX idx_measurements_site ON measurements(site);
-- Composite indexes for common query patterns
CREATE INDEX idx_measurements_site_time
ON measurements(site, time DESC);
-- Supports: WHERE site = 'X' ORDER BY time DESC
CREATE INDEX idx_measurements_user_time
ON measurements(upload_id, time DESC);
-- Supports: JOIN with uploads WHERE user_id = 'X' ORDER BY time DESC
-- Partial indexes for common filters
CREATE INDEX idx_measurements_high_pm25
ON measurements(massconcentration2p5)
WHERE massconcentration2p5 > 35; -- EPA PM2.5 threshold
-- Supports: WHERE massconcentration2p5 > 35 (exceedance queries)
-- Expression indexes for computed columns
CREATE INDEX idx_user_profiles_lower_email
ON user_profiles(LOWER(email));
-- Supports case-insensitive email lookup
Query Performance Monitoring:
-- Enable query statistics (in PostgreSQL config)
shared_preload_libraries = 'pg_stat_statements'
-- Find slow queries
SELECT
query,
calls,
total_exec_time,
mean_exec_time,
max_exec_time
FROM pg_stat_statements
WHERE mean_exec_time > 1000 -- > 1 second
ORDER BY mean_exec_time DESC
LIMIT 20;
-- Analyze query plan
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT * FROM measurements
WHERE site = 'Site A'
AND time BETWEEN '2025-01-01' AND '2025-01-31'
ORDER BY time DESC
LIMIT 100;
7.4.3 Caching Strategy
Multi-Level Caching Architecture:
┌─────────────────────────────────────────────────────────────┐
│ Level 1: Browser Cache (Cache-Control headers) │
│ - Static assets: 7 days │
│ - HTML: no-cache (always revalidate) │
│ - API responses: no-store (never cache) │
└─────────────────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────────────────┐
│ Level 2: Cloudflare CDN Cache │
│ - Static assets (JS, CSS, images): 4 hours │
│ - Purge on deploy │
└─────────────────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────────────────┐
│ Level 3: Application Memory Cache (React State) │
│ - User profile: session duration │
│ - Mine sites list: 5 minutes │
│ - Reports list: 1 minute │
└─────────────────────────────────────────────────────────────┘
│
┌─────────────────────────────────────────────────────────────┐
│ Level 4: Database Query Cache (PostgreSQL) │
│ - Shared buffers: 1 GB │
│ - Effective cache: 3 GB │
└─────────────────────────────────────────────────────────────┘
Cache-Control Headers:
# Cloudflare Pages configuration (_headers file)
/*
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Referrer-Policy: strict-origin-when-cross-origin
/assets/*
Cache-Control: public, max-age=604800, immutable
/index.html
Cache-Control: no-cache, must-revalidate
/api/*
Cache-Control: no-store, no-cache, must-revalidate
Application-Level Caching:
// Simple in-memory cache with TTL
class Cache<T> {
private cache = new Map<string, { value: T; expiry: number }>();
set(key: string, value: T, ttlMs: number): void {
this.cache.set(key, {
value,
expiry: Date.now() + ttlMs
});
}
get(key: string): T | null {
const entry = this.cache.get(key);
if (!entry) return null;
if (Date.now() > entry.expiry) {
this.cache.delete(key);
return null;
}
return entry.value;
}
clear(): void {
this.cache.clear();
}
}
// Usage
const mineSitesCache = new Cache<MineSite[]>();
async function getMineSites(): Promise<MineSite[]> {
const cached = mineSitesCache.get('mine_sites');
if (cached) return cached;
const { data } = await supabase.from('mine_sites').select('*');
mineSitesCache.set('mine_sites', data, 5 * 60 * 1000); // 5 min TTL
return data;
}
7.4.4 Performance Monitoring
Core Web Vitals Targets:
| Metric | Target | Measurement Tool |
|---|---|---|
| Largest Contentful Paint (LCP) | < 2.5s | Lighthouse, Chrome DevTools |
| First Input Delay (FID) | < 100ms | Real User Monitoring (RUM) |
| Cumulative Layout Shift (CLS) | < 0.1 | Lighthouse, Chrome DevTools |
| Time to First Byte (TTFB) | < 600ms | Chrome DevTools Network tab |
| First Contentful Paint (FCP) | < 1.8s | Lighthouse |
| Total Blocking Time (TBT) | < 300ms | Lighthouse |
Performance Monitoring Implementation:
// Track Web Vitals
import { onCLS, onFID, onLCP, Metric } from 'web-vitals';
function sendToAnalytics({ name, value, id }: Metric) {
// Send to analytics service
console.log(`${name}: ${value}ms (ID: ${id})`);
// Optionally send to Supabase for logging
supabase.from('performance_metrics').insert({
metric_name: name,
value,
metric_id: id,
user_agent: navigator.userAgent,
timestamp: new Date().toISOString()
});
}
onCLS(sendToAnalytics);
onFID(sendToAnalytics);
onLCP(sendToAnalytics);
7.5 Internal Communications Detailed Design
7.5.1 API Communication Patterns
Supabase Client-Server Communication:
┌─────────────────┐ ┌─────────────────┐
│ React Frontend │ │ Supabase API │
│ (Browser) │ │ Gateway │
└────────┬────────┘ └────────┬────────┘
│ │
│ 1. Initial Request │
│ GET /rest/v1/uploads │
│ Authorization: Bearer <JWT> │
├───────────────────────────────────────────>│
│ │
│ │ 2. Verify JWT
│ │ & RLS check
│ │
│ 3. Response │
│ 200 OK │
│ Content-Type: application/json │
│ [{id, user_id, status, ...}] │
│<───────────────────────────────────────────┤
│ │
API Request/Response Format:
// Standard API Request
interface APIRequest {
method: 'GET' | 'POST' | 'PUT' | 'PATCH' | 'DELETE';
endpoint: string;
headers: {
'Authorization': string; // "Bearer <access-token>"
'Content-Type': 'application/json';
'apikey': string; // Supabase anon key
'Prefer'?: 'return=representation' | 'return=minimal';
};
body?: Record<string, unknown>;
params?: Record<string, string>;
}
// Standard API Response
interface APIResponse<T> {
data: T | null;
error: {
message: string;
details?: string;
hint?: string;
code?: string;
} | null;
count?: number; // For paginated responses
status: number;
statusText: string;
}
Error Response Format:
{
"error": {
"message": "Permission denied",
"details": "Row-level security policy violation",
"hint": "User does not have access to this resource",
"code": "42501"
},
"status": 403,
"statusText": "Forbidden"
}
7.5.2 Real-Time Communication (WebSocket)
Supabase Realtime Subscriptions:
// Subscribe to changes on uploads table
const subscription = supabase
.channel('uploads-channel')
.on(
'postgres_changes',
{
event: '*', // INSERT, UPDATE, DELETE
schema: 'public',
table: 'uploads',
filter: `user_id=eq.${userId}` // Only user's uploads
},
(payload) => {
console.log('Change received:', payload);
switch (payload.eventType) {
case 'INSERT':
// New upload created
addUploadToList(payload.new);
break;
case 'UPDATE':
// Upload status changed
updateUploadInList(payload.new);
break;
case 'DELETE':
// Upload deleted
removeUploadFromList(payload.old);
break;
}
}
)
.subscribe((status) => {
if (status === 'SUBSCRIBED') {
console.log('Realtime subscription active');
}
});
// Cleanup on unmount
useEffect(() => {
return () => {
supabase.removeChannel(subscription);
};
}, []);
WebSocket Message Format:
{
"event": "postgres_changes",
"schema": "public",
"table": "uploads",
"commit_timestamp": "2025-12-02T14:30:00.000Z",
"eventType": "UPDATE",
"new": {
"id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"user_id": "x9y8z7w6-v5u4-t3s2-r1q0-p9o8n7m6l5k4",
"status": "completed",
"total_record_count": 4523,
"updated_at": "2025-12-02T14:30:00.000Z"
},
"old": {
"status": "importing"
}
}
7.5.3 File Upload Communication
Multipart Upload Flow:
async function uploadCSVFile(file: File, uploadId: string): Promise<StoragePath> {
const filePath = `${userId}/${uploadId}/${file.name}`;
// 1. Upload to Supabase Storage
const { data, error } = await supabase.storage
.from('csv-uploads')
.upload(filePath, file, {
cacheControl: '3600',
upsert: false,
contentType: 'text/csv'
});
if (error) throw error;
// 2. Get public URL (if needed)
const { data: urlData } = supabase.storage
.from('csv-uploads')
.getPublicUrl(filePath);
return {
path: data.path,
fullPath: data.fullPath,
publicUrl: urlData.publicUrl
};
}
// With progress tracking
async function uploadWithProgress(
file: File,
filePath: string, // destination path, e.g. `${userId}/${uploadId}/${file.name}`
onProgress: (progress: number) => void
): Promise<StoragePath> {
const xhr = new XMLHttpRequest();
return new Promise((resolve, reject) => {
xhr.upload.addEventListener('progress', (e) => {
if (e.lengthComputable) {
const progress = (e.loaded / e.total) * 100;
onProgress(progress);
}
});
xhr.addEventListener('load', () => {
if (xhr.status === 200) {
resolve(JSON.parse(xhr.responseText));
} else {
reject(new Error(`Upload failed: ${xhr.statusText}`));
}
});
xhr.addEventListener('error', () => {
reject(new Error('Network error'));
});
const formData = new FormData();
formData.append('file', file);
xhr.open('POST', `${supabaseUrl}/storage/v1/object/csv-uploads/${filePath}`);
xhr.setRequestHeader('Authorization', `Bearer ${accessToken}`);
xhr.setRequestHeader('apikey', supabaseAnonKey);
xhr.send(formData);
});
}
7.5.4 Inter-Service Communication
Edge Function to Database:
// Edge function authenticates with service role key
const supabase = createClient(
Deno.env.get('SUPABASE_URL')!,
Deno.env.get('SUPABASE_SERVICE_ROLE_KEY')! // Bypasses RLS
);
// Insert data (bypasses RLS policies)
const { data, error } = await supabase
.from('flow_meter_data')
.insert([
{ meter_id: 'FM-001', volume_liters: 125000, timestamp: '2025-12-02T14:30:00Z' }
]);
Frontend to Edge Function:
// Frontend calls edge function with user JWT
async function triggerFlowMeterRefresh(): Promise<RefreshResult> {
const { data: { session } } = await supabase.auth.getSession();
const response = await fetch(
`${supabaseUrl}/functions/v1/trigger-flow-meter-scraper`,
{
method: 'POST',
headers: {
'Authorization': `Bearer ${session.access_token}`,
'Content-Type': 'application/json'
},
body: JSON.stringify({ siteIds: ['SITE-A', 'SITE-B'] })
}
);
if (!response.ok) {
throw new Error(`Function error: ${response.statusText}`);
}
return response.json();
}
7.5.5 Error Handling and Retry Logic
Exponential Backoff Retry:
async function fetchWithRetry<T>(
fetcher: () => Promise<T>,
maxRetries: number = 3,
baseDelay: number = 1000
): Promise<T> {
let lastError: Error;
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await fetcher();
} catch (error) {
lastError = error as Error;
// Don't retry on 4xx errors (client errors)
const status = (error as { status?: number }).status;
if (status !== undefined && status >= 400 && status < 500) {
throw error;
}
// Calculate delay with exponential backoff
const delay = baseDelay * Math.pow(2, attempt);
const jitter = Math.random() * 0.3 * delay; // Add 0-30% jitter
console.log(`Retry attempt ${attempt + 1} after ${delay + jitter}ms`);
await new Promise(resolve => setTimeout(resolve, delay + jitter));
}
}
throw lastError!;
}
// Usage
const data = await fetchWithRetry(async () => {
const response = await supabase.from('uploads').select('*');
if (response.error) throw response.error;
return response.data;
});
Circuit Breaker Pattern:
class CircuitBreaker {
private failures = 0;
private state: 'CLOSED' | 'OPEN' | 'HALF_OPEN' = 'CLOSED';
private nextAttempt = 0;
constructor(
private threshold: number = 5,
private timeout: number = 60000 // 60 seconds
) {}
async execute<T>(fn: () => Promise<T>): Promise<T> {
if (this.state === 'OPEN') {
if (Date.now() < this.nextAttempt) {
throw new Error('Circuit breaker is OPEN');
}
this.state = 'HALF_OPEN';
}
try {
const result = await fn();
this.onSuccess();
return result;
} catch (error) {
this.onFailure();
throw error;
}
}
private onSuccess(): void {
this.failures = 0;
this.state = 'CLOSED';
}
private onFailure(): void {
this.failures++;
if (this.failures >= this.threshold) {
this.state = 'OPEN';
this.nextAttempt = Date.now() + this.timeout;
}
}
}
// Usage
const externalAPIBreaker = new CircuitBreaker(5, 60000);
async function callExternalAPI(): Promise<Data> {
return externalAPIBreaker.execute(async () => {
const response = await fetch('https://external-api.com/data');
return response.json();
});
}
This completes Section 7: Detailed Design with comprehensive specifications for hardware, software, security, performance, and communications.
8. System Integrity Controls
This section documents the controls and mechanisms implemented to ensure system integrity across data, application, and security layers.
8.1 Data Integrity Controls
8.1.1 Input Validation
Multi-Layer Validation Strategy:
| Validation Layer | Implementation | Purpose |
|---|---|---|
| Client-Side | Zod schema validation, React Hook Form | Immediate user feedback, reduce server load |
| API Layer | Supabase REST API validation | Reject malformed requests before database |
| Database Layer | CHECK constraints, triggers, domain types | Last line of defense, enforce data rules |
Validation Rules by Data Type:
// CSV Upload Validation
const csvValidationRules = {
fileSize: { max: 50 * 1024 * 1024, message: 'File exceeds 50MB limit' },
fileType: { allowed: ['.csv'], message: 'Only CSV files accepted' },
columnCount: { expected: 18, message: 'CSV must have 18 columns' },
encodingFormat: { expected: 'UTF-8', autoConvert: true },
dateFormat: { pattern: 'ISO 8601', example: '2025-12-02T14:30:00Z' },
numericRange: {
pm25: { min: 0, max: 10000 },
pm10: { min: 0, max: 10000 },
temperature: { min: -50, max: 80 },
humidity: { min: 0, max: 100 }
}
};
// User Input Validation
const userInputRules = {
email: /^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$/,
password: { minLength: 12, requireUppercase: true, requireLowercase: true, requireNumber: true, requireSpecial: true },
siteName: { minLength: 1, maxLength: 255, pattern: /^[\w\s\-]+$/ },
dateRange: { maxDays: 365, noFutureDates: true }
};
8.1.2 Data Consistency Checks
Database Constraints:
-- Referential integrity
ALTER TABLE measurements
ADD CONSTRAINT fk_measurements_upload
FOREIGN KEY (upload_id) REFERENCES uploads(id) ON DELETE CASCADE;
ALTER TABLE csv_files
ADD CONSTRAINT fk_csv_files_upload
FOREIGN KEY (upload_id) REFERENCES uploads(id) ON DELETE CASCADE;
-- Data consistency constraints
ALTER TABLE uploads
ADD CONSTRAINT check_dates CHECK (period_end >= period_start);
ALTER TABLE uploads
ADD CONSTRAINT check_file_counts
CHECK (successful_files + failed_files = total_csv_files);
-- Status transitions
CREATE TYPE upload_status AS ENUM (
'pending', 'validating', 'importing',
'generating_reports', 'completed', 'failed', 'partial_success'
);
-- Prevent invalid status transitions
CREATE OR REPLACE FUNCTION validate_upload_status_transition()
RETURNS TRIGGER AS $$
BEGIN
IF OLD.status = 'completed' AND NEW.status NOT IN ('completed', 'failed') THEN
RAISE EXCEPTION 'Cannot transition from completed to %', NEW.status;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER check_upload_status_transition
BEFORE UPDATE ON uploads
FOR EACH ROW
EXECUTE FUNCTION validate_upload_status_transition();
Application-Level Consistency Checks:
async function validateUploadConsistency(uploadId: string): Promise<ConsistencyReport> {
const upload = await fetchUpload(uploadId);
const csvFiles = await fetchCSVFiles(uploadId);
const issues: string[] = [];
// Check file count consistency
if (csvFiles.length !== upload.total_csv_files) {
issues.push(`File count mismatch: expected ${upload.total_csv_files}, found ${csvFiles.length}`);
}
// Check record count totals
const totalRecords = csvFiles.reduce((sum, file) => sum + file.record_count, 0);
if (totalRecords !== upload.total_record_count) {
issues.push(`Record count mismatch: expected ${upload.total_record_count}, found ${totalRecords}`);
}
// Check status consistency
const completedFiles = csvFiles.filter(f => f.status === 'completed').length;
const failedFiles = csvFiles.filter(f => f.status === 'failed').length;
if (completedFiles !== upload.successful_files || failedFiles !== upload.failed_files) {
issues.push('File status counts inconsistent with upload record');
}
return {
isConsistent: issues.length === 0,
issues
};
}
8.1.3 Transaction Management
ACID Transaction Guarantees:
-- Upload with file records transaction
BEGIN;
-- 1. Create upload record
INSERT INTO uploads (user_id, upload_type, period_start, period_end, total_csv_files)
VALUES ($1, $2, $3, $4, $5)
RETURNING id;
-- 2. Create CSV file records
INSERT INTO csv_files (upload_id, filename, file_size_bytes, site_name)
VALUES ($6, $7, $8, $9), ($6, $10, $11, $12), ($6, $13, $14, $15);
-- 3. If any step fails, rollback entire transaction
COMMIT;
-- Handle errors (illustrative: an EXCEPTION handler like this belongs in a
-- plpgsql function; a plain SQL script aborts the transaction on error)
EXCEPTION WHEN OTHERS THEN
ROLLBACK;
RAISE;
Distributed Transaction Handling (Storage + Database):
async function uploadFilesWithRollback(files: File[], uploadId: string): Promise<UploadResult> {
const uploadedFiles: string[] = [];
const dbRecords: string[] = [];
try {
// Phase 1: Upload files to storage
for (const file of files) {
const path = await supabase.storage
.from('csv-uploads')
.upload(`${userId}/${uploadId}/${file.name}`, file);
uploadedFiles.push(path.data.path);
}
// Phase 2: Create database records
const { data, error } = await supabase
.from('csv_files')
.insert(files.map((f, i) => ({
upload_id: uploadId,
filename: f.name,
storage_path: uploadedFiles[i]
})))
.select();
if (error) throw error;
dbRecords.push(...data.map(r => r.id));
return { success: true, uploadId };
} catch (error) {
// Rollback: Delete uploaded files
for (const path of uploadedFiles) {
await supabase.storage.from('csv-uploads').remove([path]);
}
// Rollback: Delete database records
if (dbRecords.length > 0) {
await supabase.from('csv_files').delete().in('id', dbRecords);
}
throw error;
}
}
8.1.4 Backup and Recovery
Backup Strategy:
| Component | Method | Frequency | Retention | Automation |
|---|---|---|---|---|
| Database | RDS automated snapshots | Daily at 03:00 UTC | 7 days (dev), 30 days (prod) | Automatic |
| Database | Point-in-time recovery | Continuous (WAL logs) | 7 days | Automatic |
| Storage | S3 versioning | On every change | 90 days | Automatic |
| Storage | S3 replication | Real-time | Cross-region (prod only) | Automatic |
| Code | Git repository | On every commit | Indefinite | Manual |
| Configuration | Infrastructure-as-Code | On every change | Version controlled | Manual |
Recovery Procedures:
# Restore database from snapshot
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier dustac-restored \
  --db-snapshot-identifier dustac-snapshot-20251202

# Point-in-time recovery to a specific timestamp
aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier dustac-prod \
  --target-db-instance-identifier dustac-recovered \
  --restore-time 2025-12-02T14:30:00Z

# Restore an S3 object to a previous version
aws s3api list-object-versions \
  --bucket supabase-storage-hwogexspejrzvadfjmce \
  --prefix csv-uploads/user123/upload456/data.csv

aws s3api get-object \
  --bucket supabase-storage-hwogexspejrzvadfjmce \
  --key csv-uploads/user123/upload456/data.csv \
  --version-id abc123xyz \
  data-restored.csv
8.2 Application Integrity Controls
8.2.1 Error Handling
Error Classification:
| Error Type | HTTP Status | User Message | Action |
|---|---|---|---|
| Validation Error | 400 | "Invalid input: {field} {reason}" | Show inline error, allow correction |
| Authentication Error | 401 | "Session expired. Please sign in again." | Redirect to login |
| Authorization Error | 403 | "Access denied. Contact administrator." | Show error page |
| Not Found | 404 | "Resource not found" | Show not found page |
| Server Error | 500 | "An error occurred. Our team has been notified." | Show error page, log incident |
| Network Error | - | "Network connection lost. Retrying..." | Auto-retry with backoff |
Error Handling Implementation:
class ApplicationError extends Error {
constructor(
public code: string,
public message: string,
public statusCode: number = 500,
public userMessage?: string,
public details?: unknown
) {
super(message);
this.name = 'ApplicationError';
}
}
// Error boundary component
class ErrorBoundary extends React.Component<Props, State> {
state = { hasError: false, error: null };
static getDerivedStateFromError(error: Error) {
return { hasError: true, error };
}
  componentDidCatch(error: Error, errorInfo: React.ErrorInfo) {
    // Log to error tracking service
    console.error('Error boundary caught:', error, errorInfo);
    // Send to backend logging; calling .then() triggers the lazy Supabase query builder
    void supabase.from('error_logs').insert({
      error_message: error.message,
      error_stack: error.stack,
      component_stack: errorInfo.componentStack,
      timestamp: new Date().toISOString()
    }).then(({ error: logError }) => {
      if (logError) console.error('Failed to persist error log:', logError);
    });
  }
render() {
if (this.state.hasError) {
return (
<ErrorFallback
error={this.state.error}
onReset={() => this.setState({ hasError: false, error: null })}
/>
);
}
return this.props.children;
}
}
// Global error handler
window.addEventListener('unhandledrejection', (event) => {
  console.error('Unhandled promise rejection:', event.reason);
  // Log to backend; calling .then() triggers the lazy Supabase query builder
  void supabase.from('error_logs').insert({
    error_type: 'unhandled_rejection',
    error_message: event.reason?.message || String(event.reason),
    timestamp: new Date().toISOString()
  }).then(() => {});
});
8.2.2 Logging and Monitoring
Log Levels and Usage:
| Level | Usage | Example | Storage |
|---|---|---|---|
| ERROR | Unhandled exceptions, critical failures | Database connection failed | Supabase logs, 30-day retention |
| WARN | Recoverable issues, deprecated features | API rate limit approaching | Supabase logs, 7-day retention |
| INFO | Significant events, user actions | User logged in, Report generated | Supabase logs, 7-day retention |
| DEBUG | Development debugging (prod disabled) | Function X called with params | Browser console only |
Structured Logging:
interface LogEntry {
level: 'ERROR' | 'WARN' | 'INFO' | 'DEBUG';
message: string;
timestamp: string;
user_id?: string;
context?: Record<string, unknown>;
stack_trace?: string;
}
async function log(entry: LogEntry): Promise<void> {
  // Console log for development (narrow the level to a valid console method)
  const method = entry.level.toLowerCase() as 'error' | 'warn' | 'info' | 'debug';
  console[method](entry.message, entry.context);
// Send to Supabase for production
if (import.meta.env.PROD && entry.level !== 'DEBUG') {
await supabase.from('application_logs').insert({
...entry,
user_agent: navigator.userAgent,
url: window.location.href
});
}
}
// Usage
log({
level: 'INFO',
message: 'PDF report generated',
user_id: user.id,
context: { reportId, fileSize, pageCount, duration: '45.3s' }
});Performance Monitoring:
// Track performance metrics
function trackPerformance(metricName: string, duration: number, metadata?: object) {
  // Fire-and-forget insert; calling .then() triggers the lazy Supabase query builder
  void supabase.from('performance_metrics').insert({
    metric_name: metricName,
    duration_ms: duration,
    metadata,
    user_id: auth.user?.id,
    timestamp: new Date().toISOString()
  }).then(() => {});
}
// Usage with performance API
const startTime = performance.now();
await generatePDFReport(data);
const endTime = performance.now();
trackPerformance('pdf_generation', endTime - startTime, { recordCount: data.length });
8.2.3 Health Checks
Health Check Endpoints:
| Endpoint | Check | Frequency | Timeout | Alert Threshold |
|---|---|---|---|---|
| /api/health | Basic liveness | Every 30s | 5s | 3 consecutive failures |
| /api/health/db | Database connection | Every 60s | 10s | 2 consecutive failures |
| /api/health/storage | Storage access | Every 60s | 10s | 2 consecutive failures |
| /api/health/detailed | Full diagnostic | On-demand | 30s | Manual review |
Implementation:
// Edge function: health-check
// Deno std serve helper, as commonly used in Supabase edge functions
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts';

interface HealthCheck {
  name: string;
  status: 'healthy' | 'unhealthy';
  latency?: number;
  error?: string;
}

serve(async (_req) => {
  const checks: HealthCheck[] = [];
// Database check
try {
const start = Date.now();
const { error } = await supabase.from('uploads').select('id').limit(1);
checks.push({
name: 'database',
status: error ? 'unhealthy' : 'healthy',
latency: Date.now() - start,
error: error?.message
});
  } catch (error) {
    checks.push({
      name: 'database',
      status: 'unhealthy',
      error: error instanceof Error ? error.message : String(error)
    });
  }
// Storage check
try {
const start = Date.now();
const { error } = await supabase.storage.from('csv-uploads').list('', { limit: 1 });
checks.push({
name: 'storage',
status: error ? 'unhealthy' : 'healthy',
latency: Date.now() - start,
error: error?.message
});
  } catch (error) {
    checks.push({
      name: 'storage',
      status: 'unhealthy',
      error: error instanceof Error ? error.message : String(error)
    });
  }
const allHealthy = checks.every(c => c.status === 'healthy');
return new Response(JSON.stringify({
status: allHealthy ? 'healthy' : 'degraded',
timestamp: new Date().toISOString(),
checks
}), {
status: allHealthy ? 200 : 503,
headers: { 'Content-Type': 'application/json' }
});
});
8.3 Security Integrity Controls
8.3.1 Access Controls
Role-Based Access Control (RBAC):
| Role | Permissions | RLS Policy |
|---|---|---|
| Anonymous | View public pages | No database access |
| Authenticated User | View own data, upload CSV, generate reports | auth.uid() = user_id |
| Manager | View organization data, manage team uploads | organization = user.organization |
| Admin | Full system access, user management | role = 'admin' |
Permission Matrix:
| Resource | Anonymous | User | Manager | Admin |
|---|---|---|---|---|
| View Public Pages | ✓ | ✓ | ✓ | ✓ |
| Register/Login | ✓ | ✓ | ✓ | ✓ |
| Upload CSV | ✗ | ✓ | ✓ | ✓ |
| Generate Reports | ✗ | ✓ | ✓ | ✓ |
| View Own Data | ✗ | ✓ | ✓ | ✓ |
| View Team Data | ✗ | ✗ | ✓ | ✓ |
| Delete Others' Data | ✗ | ✗ | ✗ | ✓ |
| Manage Users | ✗ | ✗ | ✗ | ✓ |
| System Configuration | ✗ | ✗ | ✗ | ✓ |
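In application code, the matrix above can be mirrored by a lookup helper that gates UI affordances before Row-Level Security performs the authoritative check. A minimal sketch, with the Role and Permission names being illustrative rather than defined elsewhere in this design:
type Role = 'anonymous' | 'user' | 'manager' | 'admin';
type Permission =
  | 'upload_csv' | 'generate_reports' | 'view_own_data'
  | 'view_team_data' | 'delete_others_data' | 'manage_users' | 'system_config';

// Client-side mirror of the permission matrix; RLS remains the authoritative check
const PERMISSIONS: Record<Role, Permission[]> = {
  anonymous: [],
  user: ['upload_csv', 'generate_reports', 'view_own_data'],
  manager: ['upload_csv', 'generate_reports', 'view_own_data', 'view_team_data'],
  admin: [
    'upload_csv', 'generate_reports', 'view_own_data',
    'view_team_data', 'delete_others_data', 'manage_users', 'system_config'
  ]
};

function hasPermission(role: Role, permission: Permission): boolean {
  return PERMISSIONS[role].includes(permission);
}
A check like hasPermission(currentRole, 'upload_csv') only hides or disables UI; every data path is still validated by the RLS policies listed in Section 8.3.1.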
8.3.2 Audit Trails
Audit Log Structure:
CREATE TABLE activity_logs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES auth.users(id),
action TEXT NOT NULL, -- 'create', 'read', 'update', 'delete'
resource_type TEXT NOT NULL, -- 'upload', 'report', 'user', etc.
resource_id UUID,
resource_name TEXT,
details JSONB,
ip_address INET,
user_agent TEXT,
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX idx_activity_logs_user ON activity_logs(user_id);
CREATE INDEX idx_activity_logs_created ON activity_logs(created_at DESC);
CREATE INDEX idx_activity_logs_action ON activity_logs(action);
Audited Actions:
| Action | Logged Details | Retention |
|---|---|---|
| User Login | User ID, IP, timestamp, user agent | 90 days |
| User Registration | Email, IP, timestamp | 365 days |
| Password Reset | User ID, IP, timestamp | 90 days |
| Data Upload | Upload ID, file count, record count, site | 365 days |
| Report Generation | Report ID, date range, devices | 365 days |
| Data Export | Record count, filters, format | 365 days |
| User Role Change | Old role, new role, changed by | 365 days |
| Data Deletion | Resource type, ID, deleted by | 365 days |
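Audited actions are written through a single helper so every code path produces consistent rows. A minimal sketch against the activity_logs schema above; the helper name and call site are illustrative, and ip_address/user_agent are assumed to be attached server-side:
async function logActivity(params: {
  userId: string;
  action: 'create' | 'read' | 'update' | 'delete';
  resourceType: string;
  resourceId?: string;
  resourceName?: string;
  details?: Record<string, unknown>;
}): Promise<void> {
  // Insert one row into activity_logs (Section 8.3.2); failures are logged,
  // not thrown, so auditing never blocks the user-facing operation
  const { error } = await supabase.from('activity_logs').insert({
    user_id: params.userId,
    action: params.action,
    resource_type: params.resourceType,
    resource_id: params.resourceId,
    resource_name: params.resourceName,
    details: params.details
  });
  if (error) console.error('Audit log write failed:', error.message);
}

// Usage: record a data upload
await logActivity({
  userId: user.id,
  action: 'create',
  resourceType: 'upload',
  resourceId: uploadId,
  details: { fileCount: 3, recordCount: 10080, site: 'SITE-A' }
});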
8.3.3 Intrusion Detection
Detection Mechanisms:
| Threat | Detection Method | Response |
|---|---|---|
| Brute Force Login | 5+ failed attempts in 30 min | Account lockout (30 min), CAPTCHA |
| SQL Injection | Pattern matching in logs | Block request, alert admin |
| XSS Attempt | Content Security Policy violation | Block script, log incident |
| Abnormal Traffic | Cloudflare WAF rules | Rate limit, challenge, block |
| Suspicious File Upload | File type validation, size check | Reject upload, log attempt |
| Session Hijacking | Token fingerprinting, IP change | Invalidate session, require re-auth |
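The brute-force handler below depends on a getFailedAttempts helper that is not shown elsewhere. A minimal sketch, assuming a hypothetical login_attempts table that records each failed sign-in:
// Count failed login attempts for this email/IP within the last `windowMinutes`
// Assumes a hypothetical login_attempts table with email, ip_address, created_at
async function getFailedAttempts(email: string, ip: string, windowMinutes: number): Promise<number> {
  const since = new Date(Date.now() - windowMinutes * 60 * 1000).toISOString();
  const { count, error } = await supabase
    .from('login_attempts')
    .select('id', { count: 'exact', head: true })
    .eq('email', email)
    .eq('ip_address', ip)
    .gte('created_at', since);
  if (error) throw error;
  return count ?? 0;
}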
Automated Response:
// Detect and respond to brute force
async function handleFailedLogin(email: string, ip: string): Promise<void> {
const failedAttempts = await getFailedAttempts(email, ip, 30); // Last 30 minutes
if (failedAttempts >= 5) {
// Lock account
await supabase.from('user_profiles').update({
account_locked_until: new Date(Date.now() + 30 * 60 * 1000)
}).eq('email', email);
// Log security event
await supabase.from('security_events').insert({
event_type: 'brute_force_detected',
email,
ip_address: ip,
failed_attempts: failedAttempts,
action_taken: 'account_locked_30min'
});
// Alert admin
await sendSecurityAlert({
type: 'brute_force',
severity: 'high',
details: { email, ip, failedAttempts }
});
}
}
8.3.4 Vulnerability Management
Security Scanning Schedule:
| Scan Type | Tool | Frequency | Remediation SLA |
|---|---|---|---|
| Dependency Vulnerabilities | npm audit, Snyk | Every commit | Critical: 24h, High: 7 days |
| Static Code Analysis | ESLint security rules | Every commit | High: 7 days, Medium: 30 days |
| Container Scanning | Trivy (if using Docker) | Weekly | Critical: 48h, High: 14 days |
| Penetration Testing | Manual + automated | Quarterly | Critical: Immediate, High: 7 days |
| Security Review | Manual code review | Per major release | Address before release |
Patch Management:
# Weekly dependency audit
npm audit --audit-level=moderate
# Update dependencies with known vulnerabilities
npm audit fix
# Review and apply patches
npm update --save
# Test after updates
npm run test
npm run build
# Deploy patched version
git commit -m "chore(security): apply security patches"
git push
9. External Interfaces
9.1 Interface Architecture
The Dustac system integrates with four primary external services to enrich functionality and automate data collection.
Interface Inventory:
| Interface | Purpose | Protocol | Authentication | Data Exchange | Frequency |
|---|---|---|---|---|---|
| Dustac Dust Level Scraper API | Fetch dust monitoring sensor data | HTTP REST | API Key | JSON | On-demand |
| Flow Meter Scraper API | Fetch water usage data | HTTP REST | Bearer Token | JSON | Every 4 hours (scheduled) |
| Google Gemini AI API | Generate chart descriptions for accessibility | HTTP REST | API Key | JSON | On report generation |
| BOM Weather Data Interface | Fetch weather station data | HTTP REST | Public (no auth) | CSV | On-demand |
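All four integrations pass through the shared middleware layer diagrammed below, which applies timeouts, error translation, and latency logging uniformly. A minimal sketch of such a wrapper; the function name and options type are illustrative:
interface ExternalCallOptions {
  timeoutMs?: number;
  headers?: Record<string, string>;
}

// Shared wrapper for every outbound integration call:
// uniform timeout, error translation, and latency logging
async function callExternal(url: string, opts: ExternalCallOptions = {}): Promise<Response> {
  const start = Date.now();
  try {
    const response = await fetch(url, {
      headers: opts.headers,
      signal: AbortSignal.timeout(opts.timeoutMs ?? 30000)
    });
    if (!response.ok) {
      throw new Error(`External API error: ${response.status} (${url})`);
    }
    return response;
  } finally {
    console.info(`external_call ${url} ${Date.now() - start}ms`);
  }
}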
Integration Pattern:
┌──────────────────────────────────────────────────────────────┐
│ Dustac Dashboard │
│ ┌────────────────────────────────────────────────────────┐ │
│ │ Supabase Edge Functions (Middleware) │ │
│ │ - Authentication, rate limiting, error handling │ │
│ │ - Data transformation, validation │ │
│ │ - Caching, retry logic │ │
│ └───┬──────────────┬──────────────┬──────────────┬───────┘ │
└──────┼──────────────┼──────────────┼──────────────┼──────────┘
│ │ │ │
▼ ▼ ▼ ▼
┌────────────┐ ┌────────────┐ ┌────────────┐ ┌────────────┐
│ Dustac │ │Flow Meter │ │ Gemini AI │ │ BOM Weather│
│ API │ │ API │ │ API │ │ API │
└────────────┘ └────────────┘ └────────────┘ └────────────┘
9.2 Interface Detailed Design
9.2.1 Dustac Dust Level Scraper API
Purpose: Fetch real-time and historical dust monitoring data from Dustac sensors deployed at mine sites.
Interface Specifications:
| Attribute | Value |
|---|---|
| Base URL | https://api.dustac.com/v1 (example) |
| Protocol | HTTPS (TLS 1.2+) |
| Authentication | API Key in header (X-API-Key) |
| Data Format | JSON |
| Rate Limit | 100 requests/minute |
| Timeout | 30 seconds |
API Endpoints:
GET /sites
Retrieve list of available monitoring sites.
Request:
GET /v1/sites HTTP/1.1
Host: api.dustac.com
X-API-Key: {api_key}
Accept: application/json
Response:
{
"sites": [
{
"site_id": "SITE-A",
"site_name": "Mine Site Alpha",
"location": { "lat": -23.6980, "lon": 133.8807 },
"status": "active"
}
]
}
GET /measurements
Fetch dust measurements for specified site and date range.
Request:
GET /v1/measurements?site_id=SITE-A&start_date=2025-12-01&end_date=2025-12-07 HTTP/1.1
Host: api.dustac.com
X-API-Key: {api_key}
Accept: application/json
Response:
{
"site_id": "SITE-A",
"measurements": [
{
"timestamp": "2025-12-01T00:00:00Z",
"pm2_5": 12.5,
"pm10": 25.3,
"temperature": 28.5,
"humidity": 45.2
}
],
"total_count": 10080
}
Error Handling:
| Error Code | Meaning | Response Action |
|---|---|---|
| 401 | Invalid API key | Alert admin to update credentials |
| 404 | Site not found | Notify user, check site configuration |
| 429 | Rate limit exceeded | Implement exponential backoff |
| 500 | Server error | Retry up to 3 times, then log incident |
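The 429 and 500 rows above call for retries with exponential backoff. A minimal sketch of a backoff wrapper that could surround the fetch implementation below; the function name and delay schedule are illustrative:
async function withBackoff<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxAttempts) throw error;
      // Exponential backoff: 1s, 2s, 4s, ... (a production version would also
      // confirm the failure is retryable, e.g. HTTP 429 or 500)
      const delayMs = 1000 * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: retry the Dustac fetch up to 3 times
const data = await withBackoff(() => fetchDustacData('SITE-A', '2025-12-01', '2025-12-07'));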
Implementation (Edge Function):
async function fetchDustacData(siteId: string, startDate: string, endDate: string) {
const apiKey = Deno.env.get('DUSTAC_API_KEY')!;
const response = await fetch(
`https://api.dustac.com/v1/measurements?site_id=${siteId}&start_date=${startDate}&end_date=${endDate}`,
{
headers: {
'X-API-Key': apiKey,
'Accept': 'application/json'
},
signal: AbortSignal.timeout(30000)
}
);
if (!response.ok) {
throw new Error(`Dustac API error: ${response.status}`);
}
return response.json();
}
9.2.2 Flow Meter Scraper API
Purpose: Automatically fetch water flow meter readings from water trucks at mine sites for usage tracking and compliance reporting.
Interface Specifications:
| Attribute | Value |
|---|---|
| Base URL | https://api.flowmeter-provider.com/v1 |
| Protocol | HTTPS (TLS 1.3) |
| Authentication | Bearer Token (JWT) |
| Data Format | JSON |
| Rate Limit | 60 requests/hour |
| Timeout | 30 seconds |
| Invocation | Scheduled (every 4 hours via cron) + Manual trigger |
API Endpoint:
GET /readings
Request:
GET /v1/readings?site_id=all&since=2025-12-01T00:00:00Z HTTP/1.1
Host: api.flowmeter-provider.com
Authorization: Bearer {token}
Accept: application/json
Response:
{
"status": "success",
"data": [
{
"meter_id": "FM-001",
"site_id": "SITE-A",
"volume_liters": 125000,
"timestamp": "2025-12-02T14:30:00Z",
"status": "online"
},
{
"meter_id": "FM-002",
"site_id": "SITE-B",
"volume_liters": 98500,
"timestamp": "2025-12-02T14:30:00Z",
"status": "online"
}
],
"total_meters": 45,
"fetched_at": "2025-12-02T14:30:15Z"
}
Data Transformation:
| Source Field | Target Field | Transformation |
|---|---|---|
| meter_id | meter_id | Direct mapping |
| site_id | site_id | Direct mapping |
| volume_liters | volume_liters | Validate >= 0 |
| timestamp | timestamp | Convert to ISO 8601 if needed |
| status | status | Map to enum ('online', 'offline') |
Implementation: See Section 7.2.4 for complete edge function code.
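For reference, the mapping table above corresponds to a per-reading transform along these lines; the types and helper name are illustrative, and the complete function is in Section 7.2.4:
interface FlowMeterReading {
  meter_id: string;
  site_id: string;
  volume_liters: number;
  timestamp: string;
  status: 'online' | 'offline';
}

function transformReading(raw: Record<string, unknown>): FlowMeterReading {
  const volume = Number(raw.volume_liters);
  // Validate >= 0 per the transformation table
  if (!Number.isFinite(volume) || volume < 0) {
    throw new Error(`Invalid volume_liters for meter ${raw.meter_id}`);
  }
  return {
    meter_id: String(raw.meter_id),
    site_id: String(raw.site_id),
    volume_liters: volume,
    // Normalize to ISO 8601 (throws on an unparseable timestamp)
    timestamp: new Date(String(raw.timestamp)).toISOString(),
    // Map anything other than 'online' to 'offline'
    status: raw.status === 'online' ? 'online' : 'offline'
  };
}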
9.2.3 Google Gemini AI API
Purpose: Generate natural language descriptions of charts for screen readers and accessibility compliance (WCAG 2.1 AA).
Interface Specifications:
| Attribute | Value |
|---|---|
| Base URL | https://generativelanguage.googleapis.com/v1beta |
| Protocol | HTTPS (TLS 1.3) |
| Authentication | API Key (query parameter) |
| Data Format | JSON |
| Rate Limit | 60 requests/minute (free tier) |
| Timeout | 60 seconds |
| Invocation | During PDF report generation |
API Endpoint:
POST /models/gemini-pro:generateContent
Request:
POST /v1beta/models/gemini-pro:generateContent?key={api_key} HTTP/1.1
Host: generativelanguage.googleapis.com
Content-Type: application/json
{
"contents": [{
"parts": [{
"text": "Describe this chart for a screen reader user: The chart shows PM2.5 levels over 7 days. Values range from 12 to 45 μg/m³. There was a spike to 45 μg/m³ on Dec 3rd, exceeding the EPA 24-hour standard of 35 μg/m³. Other days remained below the threshold."
}]
}],
"generationConfig": {
"temperature": 0.4,
"maxOutputTokens": 150
}
}
Response:
{
"candidates": [{
"content": {
"parts": [{
"text": "This chart displays particulate matter (PM2.5) concentrations measured over a seven-day period. Most readings stayed below 35 micrograms per cubic meter, which is the EPA's 24-hour air quality standard. However, on December 3rd, levels peaked at 45 micrograms per cubic meter, exceeding safe limits. This spike may indicate increased dust activity or adverse weather conditions on that day."
}]
}
}]
}
Implementation:
async function generateChartDescription(chartData: ChartData): Promise<string> {
const apiKey = Deno.env.get('GEMINI_API_KEY')!;
const prompt = `Describe this chart for a screen reader user: ${JSON.stringify(chartData)}`;
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=${apiKey}`,
{
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
contents: [{ parts: [{ text: prompt }] }],
generationConfig: { temperature: 0.4, maxOutputTokens: 150 }
}),
signal: AbortSignal.timeout(60000)
}
);
  if (!response.ok) {
    throw new Error(`Gemini API error: ${response.status}`);
  }
  const data = await response.json();
  // Guard against an empty candidates array before indexing into it
  const text = data.candidates?.[0]?.content?.parts?.[0]?.text;
  if (!text) {
    throw new Error('Gemini API returned no description');
  }
  return text;
}
9.2.4 BOM Weather Data Interface
Purpose: Fetch historical weather data from Australian Bureau of Meteorology (BOM) for correlation analysis with dust levels.
Interface Specifications:
| Attribute | Value |
|---|---|
| Base URL | http://www.bom.gov.au/climate/dwo/ |
| Protocol | HTTP (public data) |
| Authentication | None (public API) |
| Data Format | CSV |
| Rate Limit | Respect robots.txt, ~1 request/second |
| Timeout | 30 seconds |
| Invocation | On-demand (user-initiated weather data import) |
Data Retrieval Pattern:
URL Structure:
http://www.bom.gov.au/climate/dwo/{YYYYMM}/text/{STATION_CODE}.{YYYYMM}.csv
Example:
http://www.bom.gov.au/climate/dwo/202512/text/IDCJDW4133.202512.csv
CSV Structure:
Date,Minimum temperature (°C),Maximum temperature (°C),Rainfall (mm),Evaporation (mm),Sunshine (hours),Direction of maximum wind gust,Speed of maximum wind gust (km/h),Time of maximum wind gust,9am Temperature (°C),9am relative humidity (%),9am cloud amount (oktas),9am wind direction,9am wind speed (km/h),9am MSL pressure (hPa),3pm Temperature (°C),3pm relative humidity (%),3pm cloud amount (oktas),3pm wind direction,3pm wind speed (km/h),3pm MSL pressure (hPa)
2025-12-01,24.5,32.1,0,8.2,10.5,NW,41,14:23,28.2,65,2,NE,15,1012.3,31.5,42,1,W,22,1010.8
Parsing Implementation:
// PapaParse via an npm specifier (assumed available in the Deno edge runtime)
import Papa from 'npm:papaparse';

async function fetchBOMWeatherData(stationCode: string, year: number, month: number) {
const yyyymm = `${year}${String(month).padStart(2, '0')}`;
const url = `http://www.bom.gov.au/climate/dwo/${yyyymm}/text/${stationCode}.${yyyymm}.csv`;
const response = await fetch(url, { signal: AbortSignal.timeout(30000) });
if (!response.ok) {
throw new Error(`BOM API error: ${response.status}`);
}
const csvText = await response.text();
  // Parse CSV using PapaParse; rows are keyed by their trimmed header names
  const parsed = Papa.parse<Record<string, string>>(csvText, {
header: true,
skipEmptyLines: true,
transformHeader: (header) => header.trim()
});
// Transform to database format
const weatherData = parsed.data.map(row => ({
station_id: stationCode,
date: row['Date'],
min_temp: parseFloat(row['Minimum temperature (°C)']),
max_temp: parseFloat(row['Maximum temperature (°C)']),
rainfall: parseFloat(row['Rainfall (mm)']),
evaporation: parseFloat(row['Evaporation (mm)']),
sunshine: parseFloat(row['Sunshine (hours)']),
wind_gust_direction: row['Direction of maximum wind gust'],
wind_gust_speed: parseFloat(row['Speed of maximum wind gust (km/h)']),
// ... map other fields
}));
return weatherData;
}
Error Handling:
| Error | Cause | Response |
|---|---|---|
| 404 | Station or date not available | Notify user, suggest alternative station |
| Timeout | Slow network or server | Retry with exponential backoff (3 attempts) |
| Parse error | Malformed CSV | Log error, skip invalid rows, import valid data |
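The parse-error row above implies row-level validation rather than an all-or-nothing import. A minimal sketch that drops rows with an unparseable date or temperature; the helper name and chosen fields are illustrative:
function filterValidRows(rows: Record<string, string>[]): Record<string, string>[] {
  return rows.filter((row) => {
    const dateOk = !Number.isNaN(Date.parse(row['Date']));
    const tempOk = Number.isFinite(parseFloat(row['Minimum temperature (°C)']));
    if (!dateOk || !tempOk) {
      // Skip and log invalid rows; valid rows still import
      console.warn('Skipping malformed BOM row:', row['Date']);
      return false;
    }
    return true;
  });
}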
Appendices
Appendix A: Record of Changes
| Version | Date | Author | Change Description |
|---|---|---|---|
| 1.0 | 2025-12-02 | Dustac Team | Initial document creation |
Appendix B: Acronyms
| Acronym | Definition |
|---|---|
| API | Application Programming Interface |
| BaaS | Backend-as-a-Service |
| BOM | Bureau of Meteorology |
| CDN | Content Delivery Network |
| CI/CD | Continuous Integration/Continuous Deployment |
| CSV | Comma-Separated Values |
| E2E | End-to-End |
| JWT | JSON Web Token |
| PDF | Portable Document Format |
| PM | Particulate Matter |
| RLS | Row-Level Security |
| SDD | System Design Document |
| SPA | Single Page Application |
| SQL | Structured Query Language |
| SSR | Server-Side Rendering |
| UI | User Interface |
| UUID | Universally Unique Identifier |
| WCAG | Web Content Accessibility Guidelines |
Appendix C: Glossary
| Term | Definition |
|---|---|
| Dust Level | Concentration of particulate matter in the air, measured in μg/m³ |
| Edge Function | Serverless function executed at the network edge |
| Flow Meter | Device measuring water flow rate and consumption |
| Mine Site | Physical location of mining operations being monitored |
| Row-Level Security | Database security feature that restricts data access per row |
| Supabase | Open-source Backend-as-a-Service platform |
| Weekly Report | Structured field report submitted on a weekly basis |
Appendix D: Referenced Documents
| Document ID | Document Title | Version | Date |
|---|---|---|---|
| REF-001 | CLAUDE.md (Project Instructions) | 1.0 | 2025-12-02 |
| REF-002 | SYSTEM_DESIGN.md (Technical Design) | 1.0 | 2025-12-02 |
| REF-003 | Supabase Documentation | - | Current |
| REF-004 | React 19 Documentation | - | Current |
| REF-005 | Cloudflare Pages Documentation | - | Current |
Appendix E: Approvals
| Role | Name | Signature | Date |
|---|---|---|---|
| Project Manager | |||
| Technical Lead | |||
| Security Officer | |||
| Quality Assurance Lead |
Appendix F: Additional Appendices
[Reserved for additional supporting documentation as needed.]
End of Document