System Design Document (SDD) v2
Dustac Environmental Monitoring Dashboard
Version: 2.0 Date: February 5, 2026 Status: Final Draft
Document Control
| Version | Date | Author | Description |
|---|---|---|---|
| 1.0 | 2025-12-02 | Dustac Development Team | Initial draft |
| 2.0 | 2026-02-05 | Dustac Development Team | Comprehensive update with new features, RBAC system, and updated architecture |
Table of Contents
- Introduction
- General Overview and Design Guidelines
- Design Considerations
- System Architecture
- System Design
- Operational Scenarios
- Detailed Design
- System Integrity Controls
- External Interfaces
- Feature Modules
- RBAC and Permissions
- Appendices
List of Figures
| Figure # | Title |
|---|---|
| Figure 4-1 | High-Level System Architecture Diagram |
| Figure 4-2 | Logical View Diagram |
| Figure 4-3 | Software Architecture Diagram |
| Figure 4-4 | Security Architecture Diagram |
| Figure 5-1 | Entity-Relationship Diagram |
| Figure 5-2 | User Interface Navigation Flow |
| Figure 10-1 | Feature Module Structure |
| Figure 11-1 | RBAC Permission Flow |
List of Tables
| Table # | Title |
|---|---|
| Table 2-1 | Risk Assessment Matrix |
| Table 3-1 | Technology Stack |
| Table 5-1 | Database Tables Overview |
| Table 5-2 | Data Dictionary |
| Table 9-1 | External Interface Inventory |
| Table 10-1 | Feature Modules Summary |
| Table 11-1 | Permission Modules |
1. Introduction
1.1 Purpose of the SDD
This System Design Document (SDD) provides a comprehensive technical specification for the Dustac Environmental Monitoring Dashboard system. Version 2.0 represents a major update incorporating:
- New Feature Modules: Tank correction management, email template system, heatmap overlays, and enhanced flow meter functionality
- RBAC System: Complete role-based access control with module and site-level permissions
- Updated Technology Stack: React 19, TypeScript 5.9, Vite 7, and latest dependencies
- AI Integration: DeepSeek V3.2 for chart descriptions and data analysis
- Enhanced Architecture: Self-hosted Supabase deployment with improved security
1.2 Project Background
The Dustac Environmental Monitoring Dashboard addresses critical needs in the mining industry for real-time environmental compliance monitoring and data management.
Business Context:
- Mining sites deploy DustRanger monitoring devices collecting PM1.0, PM2.5, PM4.0, and PM10 particulate concentrations
- Regulatory requirements mandate accurate record-keeping and trend analysis
- Operations teams need actionable insights for dust suppression activities
System Evolution:
| Phase | Timeline | Key Features |
|---|---|---|
| Phase 1 | Q1 2025 | Foundation - Authentication, database schema, automated scraping |
| Phase 2 | Q1 2025 | Core features - PDF reports, dashboard analytics, visualization |
| Phase 3 | Q2 2025 | Report management - Calendar views, bulk operations, templates |
| Phase 4 | Q2 2025 | Data management - Multi-site support, device tracking, filtering |
| Phase 5 | Q3-Q4 2025 | Extended features - Real-time monitoring, weekly reports, flow meter, climate, AI |
| Phase 6 | Q1 2026 | Advanced features - RBAC, tank corrections, email templates, heatmap overlays |
Current State: Production deployment serving multiple mining sites with comprehensive RBAC, real-time monitoring, and AI-powered analytics.
1.3 Intended Audience
| Audience | Focus Areas |
|---|---|
| Software Developers | Technical specifications, database schemas, API designs, coding standards |
| System Architects | Architecture patterns, technology decisions, integration approaches |
| DevOps Engineers | Deployment architecture, CI/CD pipelines, monitoring strategies |
| Project Managers | Project scope, constraints, risks, timelines |
| QA Engineers | Testing strategies, validation scenarios, acceptance criteria |
| Security Officers | Security architecture, authentication, data protection |
1.4 Document Scope
Inclusions:
- Complete system architecture (hardware, software, information, security)
- Database design with all tables and RLS policies
- 16 feature modules with detailed specifications
- RBAC system design
- API and external integration specifications
- Operational scenarios and workflows
Exclusions:
- Project management documentation
- End-user training materials
- Third-party system internal design
- Source code implementation details
2. General Overview and Design Guidelines
2.1 General Overview
The Dustac Environmental Monitoring Dashboard is a cloud-native web application enabling mining operations to collect, analyze, visualize, and report on environmental dust monitoring data.
System Purpose:
- Data Ingestion and Management: Automatically collect data from DustRanger devices via Edge Function scrapers
- Analysis and Visualization: Transform raw data into insights through interactive dashboards
- Reporting and Compliance: Generate professional PDF reports for regulatory submission
Key System Capabilities:
| Capability | Description |
|---|---|
| Multi-Site Management | Independent data streams and reporting per mining site |
| Real-Time Monitoring | Live dust level tracking with automated data collection |
| Flexible Reporting | Customizable templates with AI-generated descriptions |
| Water Management | Flow meter tracking with tank level visualization |
| Climate Integration | Weather data correlation for environmental context |
| Role-Based Access | Secure, user-specific data access with RBAC |
| Audit Compliance | Comprehensive activity logging for regulatory trails |
High-Level Context Diagram:
┌─────────────────────────────────────────────────────────────────────────────┐
│ External Entities │
└─────────────────────────────────────────────────────────────────────────────┘
│
┌──────────────┼──────────────┐
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌──────────┐ ┌──────────────┐
│ End Users │ │ DustRanger│ │ External │
│ (Browser) │ │ Devices │ │ Data APIs │
└──────┬──────┘ └────┬─────┘ └──────┬───────┘
│ │ │
▼ ▼ ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ Dustac Dashboard System │
│ ┌────────────────────────────────────────────────────────────────────┐ │
│ │ Frontend (React 19 SPA) │ │
│ │ • Dashboard • Reports • Weekly Reports • Dust Levels │ │
│ │ • Flow Meter • Heatmap • Email Schedules • User Management │ │
│ └────────────────────┬───────────────────────────────────────────────┘ │
│ │ │
│ │ Supabase Client SDK │
│ │ │
│ ┌────────────────────┴───────────────────────────────────────────────┐ │
│ │ Supabase Backend (Self-Hosted) │ │
│ │ ┌──────────────┐ ┌─────────────┐ ┌─────────────────────┐ │ │
│ │ │ PostgreSQL │ │ Auth │ │ Storage (S3-like) │ │ │
│ │ │ Database │ │ (JWT) │ │ • CSV Files │ │ │
│ │ │ • RLS │ │ • Email │ │ • PDF Reports │ │ │
│ │ │ • RBAC │ │ • OAuth │ │ • Attachments │ │ │
│ │ └──────────────┘ └─────────────┘ └─────────────────────┘ │ │
│ │ ┌──────────────────────────────────────────────────────────┐ │ │
│ │ │ Edge Functions (Deno Runtime) │ │ │
│ │ │ • Chart AI • PDF Generation • Scraper Triggers │ │ │
│ │ │ • Email Sending • Calibration Reminders │ │ │
│ │ └──────────────────────────────────────────────────────────┘ │ │
│ └────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────┘
│
┌──────────────┼──────────────┐
│ │ │
▼ ▼ ▼
┌─────────────┐ ┌──────────┐ ┌──────────────┐
│ Dustac │ │ DeepSeek │ │ BOM Weather │
│ Scraper │ │ V3.2 AI │ │ API │
│ API │ │ │ │ │
└─────────────┘ └──────────┘ └──────────────┘
2.2 Assumptions/Constraints/Risks
2.2.1 Assumptions
Technical Assumptions:
- Users access via modern browsers (Chrome 90+, Firefox 88+, Safari 14+, Edge 90+)
- Stable internet connectivity with minimum 1 Mbps download speed
- DustRanger devices produce CSV files with consistent column structure
- Self-hosted Supabase maintains 99.9% uptime with proper infrastructure
Operational Assumptions:
- Users receive basic training before production use
- System handles up to 500MB of daily data per site
- Supports up to 100 concurrent users during peak hours
- PDF generation completes within 60 seconds for standard reports
2.2.2 Constraints
Technical Constraints:
| Constraint | Limit | Impact |
|---|---|---|
| Database Storage | Self-hosted (scalable) | Plan capacity based on data growth |
| PostgreSQL Connections | 60 concurrent (PgBouncer) | Connection pooling required |
| Edge Function Timeout | 60 seconds | Long operations need chunking |
| Supabase Row Limit | 1000 per query | Pagination required for large datasets |
| Browser LocalStorage | 10MB | Session data size limited |
Business Constraints:
- Hosting costs target under $500/month for production
- Core team of 2-3 developers
- Documentation maintained alongside development
2.2.3 Risk Assessment Matrix
| Risk ID | Description | Probability | Impact | Mitigation Strategy |
|---|---|---|---|---|
| R-001 | Supabase Platform Outage | Low (5%) | High | Comprehensive error handling, backup procedures, multi-region consideration |
| R-002 | Data Loss | Low (5%) | Critical | Automatic backups, soft-delete patterns, audit logs, point-in-time recovery |
| R-003 | Performance Degradation | Medium (30%) | Medium | Query optimization, indexing, connection pooling, caching, pagination |
| R-004 | CSV Format Inconsistency | Medium (25%) | Medium | Robust validation, preview before import, flexible parser |
| R-005 | Security Breach | Low (10%) | High | RLS policies, JWT expiration, security audits, HTTPS, dependency scanning |
| R-006 | PDF Generation Failure | Medium (20%) | Medium | Retry logic, timeout handling, alternative export formats |
| R-007 | External API Dependency | Medium (25%) | Low | Graceful degradation, caching, manual entry alternatives |
| R-008 | AI Service Costs | Medium (20%) | Low | Description caching, rate limiting, usage monitoring |
| R-009 | RBAC Policy Complexity | Medium (25%) | Medium | Thorough testing, policy documentation, admin tools |
| R-010 | Self-Hosted Infrastructure | Low (15%) | High | Monitoring, automated backups, disaster recovery plan |
2.3 Alignment with Enterprise Architecture Standards
Cloud-Native Architecture:
- Twelve-Factor App Principles compliance
- API-First Design for future integrations
- Microservices pattern via Edge Functions
Security Best Practices:
- Zero Trust Security Model
- Defense in Depth (TLS, RLS, JWT, audit logging)
- OWASP Top 10 Compliance
Data Architecture Standards:
- Single Source of Truth (PostgreSQL)
- Third Normal Form (3NF) database design
- Comprehensive audit trails
Performance Standards:
- Core Web Vitals targets: LCP <2.5s, FID <100ms, CLS <0.1
- API Response: 95th percentile <500ms (read), <2s (write)
- Horizontal scaling support
3. Design Considerations
3.1 Goals and Guidelines
Design Priority Hierarchy:
- Security First: Data protection and access control
- Reliability: System availability and data integrity
- User Experience: Intuitive, accessible interfaces
- Performance: Acceptable response times under load
- Maintainability: Code clarity and documentation
- Scalability: Growth without fundamental redesign
TypeScript Standards:
- Strict mode enabled (strict: true)
- Explicit types for function parameters and return types
- import type for type-only imports
- No any type except documented exceptions
React Component Standards:
- Functional components with function declarations
- Named exports (no default exports)
- Explicit props interfaces
- Hooks rules compliance
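The following sketch illustrates these standards with a hypothetical component; the component name and props are illustrative, not taken from the codebase.
// Hypothetical component: function declaration, named export, explicit props
// interface, and a type-only import, per the standards above.
import type { ReactNode } from 'react';
interface StatusBadgeProps {
  label: string;
  active: boolean;
  children?: ReactNode;
}
export function StatusBadge({ label, active, children }: StatusBadgeProps) {
  return (
    <span className={active ? 'badge badge-active' : 'badge'}>
      {label}
      {children}
    </span>
  );
}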
File Naming Conventions:
| Type | Convention | Example |
|---|---|---|
| Components | PascalCase | ReportTemplate.tsx |
| Hooks | camelCase with use prefix | useReportGenerator.tsx |
| Services | camelCase | csvParser.ts |
| Types | PascalCase for interfaces | types.ts exports UploadStatus |
| Tests | Source file with .test.ts | csvParser.test.ts |
Git Commit Conventions:
<type>(<scope>): <subject>
feat(flow-meter): add tank correction management
fix(reports): correct date range filtering
chore(deps): update React to 19.2.3
3.2 Technology Stack
3.2.1 Frontend Stack
| Technology | Version | Purpose |
|---|---|---|
| React | 19.2.3 | UI rendering, component model |
| TypeScript | 5.9.3 | Type system |
| Vite | 7.1.12 | Build tool, dev server |
| React Router | 7.12.0 | Client-side routing |
| Tailwind CSS | 4.1.16 | Utility-first styling |
| Radix UI | Latest | Accessible component primitives |
| Preline | 3.2.3 | UI component library |
3.2.2 State Management
| Technology | Version | Purpose |
|---|---|---|
| React Context | Built-in | Auth state, layout preferences |
| useState/useReducer | Built-in | Local component state |
| React Hook Form | 7.66.x | Form state management |
| Zod | 4.1.12 | Runtime validation |
3.2.3 Data Visualization
| Technology | Version | Purpose |
|---|---|---|
| Nivo | 0.99.0 | Dashboard analytics charts |
| Recharts | 3.3.0 | Report charts |
| ApexCharts | 1.7.0 | Additional chart types |
| ECharts | 6.0.0 | Complex visualizations |
| QuickChart.io | API | Server-side chart rendering |
3.2.4 Document Generation
| Technology | Version | Purpose |
|---|---|---|
| jsPDF | 4.0.0 | PDF generation |
| html2canvas | 1.4.1 | DOM to canvas conversion |
| docx | 9.5.1 | Word document generation |
| PapaParse | 5.5.3 | CSV parsing |
| JSZip | 3.10.1 | ZIP file handling |
3.2.5 Backend Stack
| Technology | Version | Purpose |
|---|---|---|
| Supabase | 2.80.0 | Backend-as-a-Service |
| PostgreSQL | 15.8 | Primary database |
| Deno | 1.40+ | Edge function runtime |
| PgBouncer | Included | Connection pooling |
3.2.6 Testing Stack
| Technology | Version | Purpose |
|---|---|---|
| Vitest | 4.0.6 | Unit testing |
| Playwright | 1.56.1 | E2E testing |
| Testing Library | 16.3.0 | React component testing |
| Storybook | 10.1.10 | Component development |
3.2.7 Development Tools
| Technology | Version | Purpose |
|---|---|---|
| ESLint | 9.39.1 | Code linting |
| Prettier | 3.6.2 | Code formatting |
| Husky | 9.1.7 | Git hooks |
| Supabase CLI | 2.54.11 | Database management |
3.3 Architectural Strategies
Design Patterns Employed:
| Pattern | Implementation | Benefits |
|---|---|---|
| Feature-Driven Architecture | /src/features/ with self-contained modules | Better organization, easier to find related code |
| Service Layer Pattern | Service files handle data fetching and business rules | Testable logic, reusable across components |
| Repository Pattern | Services encapsulate Supabase queries | Easier testing, consistent error handling |
| Hooks Pattern | Custom hooks for stateful logic | Reusable logic, cleaner components |
| Composition Pattern | shadcn/ui components composed into features | Reusable building blocks, consistent UI |
| Error Boundary Pattern | Error boundaries wrap route components | Graceful degradation, error logging |
Trade-off Decisions:
| Decision | Choice | Rationale |
|---|---|---|
| PDF Generation | Client-side (jsPDF) | Simpler architecture, no server infrastructure |
| Authorization | RLS policies in PostgreSQL | Defense in depth, prevents data leaks |
| Architecture | Monolithic SPA | Simpler deployment, shared state easier |
| Real-time Updates | Polling (selective Realtime) | Simpler implementation, sufficient for most features |
| TypeScript | Strict mode | Catch errors at compile time, better maintainability |
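To illustrate the polling trade-off, a generic hook along the following lines is sufficient for most views; the fetcher and interval are illustrative assumptions rather than the project's actual values.
// Minimal polling sketch consistent with the "polling over Realtime" decision.
import { useEffect, useState } from 'react';
export function usePolling<T>(fetcher: () => Promise<T>, intervalMs = 60_000) {
  const [data, setData] = useState<T | null>(null);
  useEffect(() => {
    let cancelled = false;
    const tick = async () => {
      const result = await fetcher();
      if (!cancelled) setData(result); // ignore responses after unmount
    };
    tick(); // initial fetch
    const id = setInterval(tick, intervalMs); // periodic refresh
    return () => {
      cancelled = true;
      clearInterval(id);
    };
  }, [fetcher, intervalMs]);
  return data;
}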
3.4 Performance Engineering
Performance Targets:
| Metric | Target | Priority |
|---|---|---|
| Page Load Time (LCP) | < 2.5s (75th percentile) | High |
| First Input Delay (FID) | < 100ms | High |
| Cumulative Layout Shift (CLS) | < 0.1 | Medium |
| API Response Time (Read) | < 500ms (95th percentile) | High |
| API Response Time (Write) | < 2s (95th percentile) | Medium |
| PDF Generation | < 60s for 30-page report | High |
| CSV Upload Processing | < 30s for 10K rows | High |
| Concurrent Users | 100 users without degradation | Medium |
Optimization Strategies:
- Code splitting and lazy loading per route
- Database indexing on common query patterns
- Connection pooling via PgBouncer
- Multi-level caching (browser, CDN, database)
- Debouncing for search/filter operations
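A minimal sketch of the first and last strategies, assuming an @/ path alias and a named ReportsPage export from the reports feature (both assumptions for illustration):
// Route-level code splitting with React.lazy (named export mapped to default).
import { lazy, Suspense, useEffect, useState } from 'react';
const ReportsPage = lazy(() =>
  import('@/features/reports').then((m) => ({ default: m.ReportsPage }))
);
export function ReportsRoute() {
  return (
    <Suspense fallback={<div>Loading…</div>}>
      <ReportsPage />
    </Suspense>
  );
}
// Debounce a search term so filter queries fire only after typing pauses.
export function useDebouncedValue<T>(value: T, delayMs = 300): T {
  const [debounced, setDebounced] = useState(value);
  useEffect(() => {
    const timer = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(timer);
  }, [value, delayMs]);
  return debounced;
}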
4. System Architecture
4.1 Logical View
┌─────────────────────────────────────────────────────────────────────────┐
│ PRESENTATION LAYER │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ React 19 Application │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │Dashboard │ │ Dust │ │ Flow │ │ Reports │ │ │
│ │ │ Module │ │ Levels │ │ Meter │ │ Module │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ Weekly │ │ Email │ │ User │ │ Heatmap │ │ │
│ │ │ Reports │ │Schedules │ │Management│ │ Module │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ Assets │ │ Climate │ │ Settings │ │ Upload │ │ │
│ │ │ Module │ │ Module │ │ Module │ │ Module │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ └────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────────────────┴───────────────────────────────────────┐ │
│ │ UI Component Library (shadcn/ui + Radix) │ │
│ └────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
│
(HTTP/REST, WebSocket)
│
┌─────────────────────────────────┼───────────────────────────────────────┐
│ APPLICATION LOGIC LAYER │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ Business Logic Services │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ CSV │ │ Report │ │ PDF │ │ Chart │ │ │
│ │ │ Parser │ │ Data │ │Generator │ │ Data │ │ │
│ │ │ Service │ │ Service │ │ Service │ │Transformer│ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │Permission│ │ Email │ │ Tank │ │ Scraper │ │ │
│ │ │ Service │ │ Service │ │ Service │ │ Service │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ └────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ┌────────────────────────┴───────────────────────────────────────┐ │
│ │ Supabase Client SDK (API Gateway) │ │
│ └────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
│
(PostgreSQL Protocol)
│
┌─────────────────────────────────┼───────────────────────────────────────┐
│ DATA LAYER │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ PostgreSQL Database │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ user_ │ │ cfg_ │ │ data_ │ │ rpt_ │ │ │
│ │ │ profiles │ │mine_sites│ │dust_levels│ │ reports │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌───────────┐ │ │
│ │ │ user_ │ │ data_ │ │ ops_ │ │ email_ │ │ │
│ │ │permissions│ │flow_meters│ │corrections│ │ schedules │ │ │
│ │ └──────────┘ └──────────┘ └──────────┘ └───────────┘ │ │
│ │ │ │
│ │ ┌──────────────────────────────────────────────────────────┐ │ │
│ │ │ Row-Level Security Policies (RLS) + RBAC │ │ │
│ │ └──────────────────────────────────────────────────────────┘ │ │
│ └────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ Supabase Storage │ │
│ │ • csv-uploads • pdf-reports • email-attachments │ │
│ └────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────────┐ │
│ │ Supabase Auth (User Management) │ │
│ │ • JWT Tokens • Email/Password • OAuth (Azure, Google) │ │
│ └────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
4.2 Hardware Architecture
The backend runs on self-hosted Oracle Cloud infrastructure orchestrated with Docker Compose; the frontend is served as a static SPA from Cloudflare Pages.
Infrastructure Components:
┌───────────────────────────────────────────────────────────────────┐
│ ORACLE CLOUD INFRASTRUCTURE │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Docker Compose Stack │ │
│ │ ┌──────────────┐ ┌─────────────┐ ┌─────────────────┐ │ │
│ │ │ PostgreSQL │ │ Supabase │ │ Supabase │ │ │
│ │ │ Database │ │ Auth │ │ Storage │ │ │
│ │ │ (15.8) │ │ (GoTrue) │ │ (S3-like) │ │ │
│ │ └──────────────┘ └─────────────┘ └─────────────────┘ │ │
│ │ ┌──────────────┐ ┌─────────────┐ ┌─────────────────┐ │ │
│ │ │ PostgREST │ │ Realtime │ │ Edge Functions │ │ │
│ │ │ (REST API) │ │ (WebSocket)│ │ (Deno) │ │ │
│ │ └──────────────┘ └─────────────┘ └─────────────────┘ │ │
│ │ ┌──────────────────────────────────────────────────────┐ │ │
│ │ │ Kong API Gateway │ │ │
│ │ └──────────────────────────────────────────────────────┘ │ │
│ └────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────┘
│
│ (HTTPS)
│
┌───────────────────────────────┼───────────────────────────────────┐
│ CLOUDFLARE NETWORK │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Cloudflare Pages (Static Hosting) │ │
│ │ • Global CDN • Automatic HTTPS • DDoS Protection │ │
│ └────────────────────────────────────────────────────────────┘ │
└───────────────────────────────────────────────────────────────────┘
Hardware Specifications:
| Component | Specification | Purpose |
|---|---|---|
| Database Server | 4 vCPU, 16GB RAM, 100GB SSD | PostgreSQL with RLS |
| Connection Pool | 60 max connections (PgBouncer) | Handle concurrent queries |
| Storage | 200GB object storage | CSV files, PDFs, attachments |
| CDN | Cloudflare global network | <100ms latency worldwide |
| Edge Functions | 512MB RAM, 60s timeout | AI calls, scraper triggers |
4.3 Software Architecture
Frontend Components:
src/
├── features/ # Feature modules (16 modules)
│ ├── dashboard/ # Dashboard analytics
│ ├── dust-levels/ # Dust monitoring
│ ├── flow-meter/ # Flow meter management
│ ├── dust-ranger/ # Dust Ranger devices
│ ├── reports/ # PDF report generation
│ ├── weekly-reports/ # Weekly field reports
│ ├── email-schedules/ # Email automation
│ ├── user-management/ # RBAC administration
│ ├── heatmap/ # Geographic visualization
│ ├── assets/ # Asset tracking
│ ├── climate/ # Weather integration
│ ├── mine-sites/ # Site configuration
│ ├── report-templates/ # Template management
│ ├── settings/ # App settings
│ └── upload/ # CSV upload
├── components/ # Shared UI components
│ └── ui/ # shadcn/ui components
├── contexts/ # React contexts
│ ├── AuthContext.tsx # Authentication state
│ └── LayoutContext.tsx # Theme, sidebar state
├── hooks/ # Shared hooks
│ ├── useAuth.ts # Auth hook
│ └── usePermissions.ts # Permission checks
├── lib/ # Utilities
│ ├── supabase.ts # Supabase client
│ ├── supabaseTypes.ts # Generated types
│ └── utils.ts # Common utilities
└── routes/ # Routing
├── Routes.tsx # Route definitions
└── ProtectedRoute.tsx # Auth guard
Backend Components:
supabase/
├── functions/ # Edge Functions
│ ├── generate-chart-descriptions/ # AI chart analysis
│ ├── generate-pdf-report/ # Server-side PDF
│ ├── trigger-dust-level-scraper/ # Dust data scraper
│ ├── trigger-flow-meter-scraper/ # Flow meter scraper
│ ├── trigger-heatmap-scraper/ # Heatmap scraper
│ ├── trigger-asset-location-scraper/ # Asset scraper
│ ├── send-email/ # Email sending
│ └── send-calibration-reminder/ # Reminders
├── migrations/ # Database migrations
└── config.toml # Supabase configuration
4.4 Security Architecture
Authentication Flow:
┌──────────┐ ┌─────────────┐
│ User │ │ Supabase │
│ Browser │ │ Auth │
└────┬─────┘ └──────┬──────┘
│ │
│ 1. POST /auth/signin { email, password } │
├───────────────────────────────────────────────>│
│ │
│ 2. Verify password │
│ 3. Check user_profiles.is_active
│ 4. Generate JWT with role
│ │
│ 5. { user, session: { access_token, refresh_token }}
│<───────────────────────────────────────────────┤
│ │
│ 6. Store tokens in localStorage │
│ │
│ 7. API requests with Authorization: Bearer │
├───────────────────────────────────────────────>│
│ │
│ 8. Verify JWT │
│ 9. Extract user_id │
│ 10. Apply RLS + RBAC
Authorization Model (RBAC):
-- Permission check flow
User Request
│
▼
┌─────────────────────────────────────────────────────┐
│ 1. JWT Token Verification (Supabase Auth) │
└─────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────┐
│ 2. Role Check: is_admin() function │
│ SELECT role = 'admin' FROM user_profiles │
│ WHERE id = auth.uid() │
└─────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────┐
│ 3. Module Permission Check: has_module_permission() │
│ SELECT can_view/can_edit FROM user_module_permissions
│ WHERE user_id = auth.uid() AND module = ? │
└─────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────┐
│ 4. Site Permission Check: get_user_permitted_sites()│
│ SELECT site_id FROM user_site_permissions │
│ WHERE user_id = auth.uid() AND can_view = true │
└─────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────┐
│ 5. RLS Policy Application │
│ Data filtered based on permitted sites │
└─────────────────────────────────────────────────────┘
Security Layers:
| Layer | Technology | Function |
|---|---|---|
| Transport | TLS 1.3 | Encrypt data in transit |
| Authentication | Supabase Auth (JWT) | Verify user identity |
| Authorization | RLS + RBAC | Control data access |
| Data at Rest | AES-256 | Encrypt stored data |
| API Security | Kong Gateway | Rate limiting, request validation |
| XSS Prevention | React escaping | Prevent script injection |
| SQL Injection | Parameterized queries | Prevent SQL attacks |
5. System Design
5.1 Database Design
5.1.1 Database Tables Overview
The database is organized into logical groups with consistent naming conventions:
| Prefix | Category | Description |
|---|---|---|
| user_ | User Management | User profiles, permissions, groups |
| cfg_ | Configuration | System and site configuration |
| data_ | Data Storage | Monitoring data (dust, flow, weather) |
| ops_ | Operations | Operational records (refills, corrections) |
| rpt_ | Reports | Generated reports and templates |
| email_ | Email System | Schedules, logs, templates |
5.1.2 User Management Tables
user_profiles
CREATE TABLE user_profiles (
id UUID PRIMARY KEY REFERENCES auth.users(id),
email TEXT NOT NULL,
full_name TEXT,
role TEXT NOT NULL DEFAULT 'user' CHECK (role IN ('admin', 'user')),
is_active BOOLEAN NOT NULL DEFAULT true,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
user_module_permissions
CREATE TABLE user_module_permissions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES user_profiles(id) ON DELETE CASCADE,
module TEXT NOT NULL,
can_view BOOLEAN NOT NULL DEFAULT false,
can_edit BOOLEAN NOT NULL DEFAULT false,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(user_id, module)
);
user_site_permissions
CREATE TABLE user_site_permissions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES user_profiles(id) ON DELETE CASCADE,
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id) ON DELETE CASCADE,
can_view BOOLEAN NOT NULL DEFAULT false,
can_edit BOOLEAN NOT NULL DEFAULT false,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(user_id, site_id)
);
user_groups
CREATE TABLE user_groups (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL UNIQUE,
description TEXT,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
user_group_members
CREATE TABLE user_group_members (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
group_id UUID NOT NULL REFERENCES user_groups(id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES user_profiles(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(group_id, user_id)
);
5.1.3 Configuration Tables
cfg_mine_sites
CREATE TABLE cfg_mine_sites (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL UNIQUE,
code TEXT NOT NULL UNIQUE,
location TEXT,
latitude DECIMAL(10, 8),
longitude DECIMAL(11, 8),
timezone TEXT DEFAULT 'Australia/Perth',
is_active BOOLEAN NOT NULL DEFAULT true,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
cfg_app_settings
CREATE TABLE cfg_app_settings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
key TEXT NOT NULL UNIQUE,
value JSONB NOT NULL,
description TEXT,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
cfg_tank_capacities
CREATE TABLE cfg_tank_capacities (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id) ON DELETE CASCADE,
tank_name TEXT NOT NULL,
capacity_liters DECIMAL(10, 2) NOT NULL,
is_active BOOLEAN NOT NULL DEFAULT true,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(site_id, tank_name)
);
5.1.4 Data Tables
data_dust_levels
CREATE TABLE data_dust_levels (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id),
device_id TEXT NOT NULL,
year INTEGER NOT NULL,
month INTEGER NOT NULL,
day INTEGER NOT NULL,
hour INTEGER NOT NULL,
minute INTEGER NOT NULL,
pm1_0 DECIMAL(10, 4),
pm2_5 DECIMAL(10, 4),
pm4_0 DECIMAL(10, 4),
pm10 DECIMAL(10, 4),
temperature DECIMAL(6, 2),
humidity DECIMAL(5, 2),
latitude DECIMAL(10, 8),
longitude DECIMAL(11, 8),
speed DECIMAL(8, 2),
location TEXT,
created_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(site_id, device_id, year, month, day, hour, minute)
);
-- Indexes for common queries
CREATE INDEX idx_dust_levels_site_date ON data_dust_levels(site_id, year, month, day);
CREATE INDEX idx_dust_levels_device ON data_dust_levels(device_id);
data_flow_meters
CREATE TABLE data_flow_meters (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id),
asset_id UUID REFERENCES data_assets(id),
meter_name TEXT NOT NULL,
reading_date DATE NOT NULL,
reading_time TIME,
total_volume DECIMAL(12, 2),
daily_usage DECIMAL(10, 2),
tank_level_percent DECIMAL(5, 2),
notes TEXT,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
data_weather
CREATE TABLE data_weather (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
station_id TEXT NOT NULL,
observation_time TIMESTAMPTZ NOT NULL,
temperature DECIMAL(5, 2),
humidity DECIMAL(5, 2),
wind_speed DECIMAL(6, 2),
wind_direction TEXT,
rainfall DECIMAL(8, 2),
pressure DECIMAL(7, 2),
created_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(station_id, observation_time)
);
data_assets
CREATE TABLE data_assets (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id),
asset_type TEXT NOT NULL,
asset_name TEXT NOT NULL,
serial_number TEXT,
latitude DECIMAL(10, 8),
longitude DECIMAL(11, 8),
last_seen TIMESTAMPTZ,
status TEXT DEFAULT 'active',
metadata JSONB,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
5.1.5 Operations Tables
ops_dustloc_refills
CREATE TABLE ops_dustloc_refills (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id),
asset_id UUID REFERENCES data_assets(id),
refill_date DATE NOT NULL,
volume_liters DECIMAL(10, 2) NOT NULL,
notes TEXT,
created_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
ops_tank_corrections
CREATE TABLE ops_tank_corrections (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id),
tank_name TEXT NOT NULL,
correction_date DATE NOT NULL,
previous_level DECIMAL(5, 2),
corrected_level DECIMAL(5, 2) NOT NULL,
reason TEXT,
created_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now()
);
5.1.6 Report Tables
rpt_reports
CREATE TABLE rpt_reports (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES user_profiles(id),
site_id UUID REFERENCES cfg_mine_sites(id),
report_type TEXT NOT NULL,
title TEXT NOT NULL,
period_start DATE,
period_end DATE,
parameters JSONB,
pdf_storage_path TEXT,
file_size_bytes INTEGER,
page_count INTEGER,
status TEXT NOT NULL DEFAULT 'pending',
generation_duration_ms INTEGER,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
rpt_weekly_reports
CREATE TABLE rpt_weekly_reports (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES user_profiles(id),
period_start DATE NOT NULL,
period_end DATE NOT NULL,
status TEXT NOT NULL DEFAULT 'draft',
site_observations JSONB DEFAULT '[]',
flow_meter_usage JSONB DEFAULT '[]',
dashboard_updates JSONB DEFAULT '[]',
vendor_activities JSONB DEFAULT '[]',
water_truck_testing JSONB DEFAULT '{}',
hardware_installations JSONB DEFAULT '[]',
admin_reporting JSONB DEFAULT '[]',
other_tasks JSONB DEFAULT '{}',
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
UNIQUE(user_id, period_end)
);
5.1.7 Email System Tables
email_schedules
CREATE TABLE email_schedules (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
description TEXT,
schedule_type TEXT NOT NULL,
cron_expression TEXT,
recipients JSONB NOT NULL DEFAULT '[]',
subject_template TEXT NOT NULL,
body_template TEXT NOT NULL,
is_active BOOLEAN NOT NULL DEFAULT true,
last_sent_at TIMESTAMPTZ,
next_send_at TIMESTAMPTZ,
created_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
email_logs
CREATE TABLE email_logs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
schedule_id UUID REFERENCES email_schedules(id),
recipients JSONB NOT NULL,
subject TEXT NOT NULL,
body TEXT NOT NULL,
status TEXT NOT NULL DEFAULT 'pending',
error_message TEXT,
sent_at TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT now()
);
email_snippet_templates
CREATE TABLE email_snippet_templates (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL UNIQUE,
description TEXT,
content TEXT NOT NULL,
category TEXT,
tags JSONB DEFAULT '[]',
is_active BOOLEAN NOT NULL DEFAULT true,
created_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
email_variable_definitions
CREATE TABLE email_variable_definitions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL UNIQUE,
description TEXT,
data_source TEXT NOT NULL,
query_template TEXT,
format_template TEXT,
example_value TEXT,
is_active BOOLEAN NOT NULL DEFAULT true,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
5.1.8 RLS Policies
Admin Override Policy:
-- Admins can access all data
CREATE POLICY "Admins have full access"
ON data_dust_levels FOR ALL
USING (
EXISTS (
SELECT 1 FROM user_profiles
WHERE id = auth.uid() AND role = 'admin'
)
);
Site-Based Access Policy:
-- Users can only access permitted sites
CREATE POLICY "Users access permitted sites"
ON data_dust_levels FOR SELECT
USING (
site_id IN (
SELECT site_id FROM user_site_permissions
WHERE user_id = auth.uid() AND can_view = true
)
);
Helper Functions:
-- Check if current user is admin
CREATE OR REPLACE FUNCTION is_admin()
RETURNS BOOLEAN AS $$
BEGIN
RETURN EXISTS (
SELECT 1 FROM user_profiles
WHERE id = auth.uid() AND role = 'admin'
);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
-- Get user's permitted site IDs
CREATE OR REPLACE FUNCTION get_user_permitted_sites()
RETURNS SETOF UUID AS $$
BEGIN
IF is_admin() THEN
RETURN QUERY SELECT id FROM cfg_mine_sites WHERE is_active = true;
ELSE
RETURN QUERY
SELECT site_id FROM user_site_permissions
WHERE user_id = auth.uid() AND can_view = true;
END IF;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
-- Check module permission
CREATE OR REPLACE FUNCTION has_module_permission(module_name TEXT, permission_type TEXT)
RETURNS BOOLEAN AS $$
BEGIN
IF is_admin() THEN
RETURN true;
END IF;
IF permission_type = 'view' THEN
RETURN EXISTS (
SELECT 1 FROM user_module_permissions
WHERE user_id = auth.uid() AND module = module_name AND can_view = true
);
ELSIF permission_type = 'edit' THEN
RETURN EXISTS (
SELECT 1 FROM user_module_permissions
WHERE user_id = auth.uid() AND module = module_name AND can_edit = true
);
END IF;
RETURN false;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
5.2 Entity-Relationship Diagram
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ user_profiles │ │ user_module_ │ │ user_site_ │
│─────────────────│ │ permissions │ │ permissions │
│ id (PK) │──┐ │─────────────────│ │─────────────────│
│ email │ │ │ id (PK) │ │ id (PK) │
│ full_name │ ├───<│ user_id (FK) │ │ user_id (FK) │>──┐
│ role │ │ │ module │ │ site_id (FK) │ │
│ is_active │ │ │ can_view │ │ can_view │ │
└─────────────────┘ │ │ can_edit │ │ can_edit │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ user_groups │ │ cfg_mine_sites │ │
│ │─────────────────│ │─────────────────│ │
│ │ id (PK) │ │ id (PK) │<──┘
│ │ name │ │ name │
│ │ description │ │ code │
│ └────────┬────────┘ │ location │
│ │ │ is_active │
│ ▼ └────────┬────────┘
│ ┌─────────────────┐ │
│ │ user_group_ │ │
│ │ members │ │
│ │─────────────────│ │
└───<│ user_id (FK) │ │
│ group_id (FK) │ │
└─────────────────┘ │
│
┌─────────────────┐ ┌─────────────────┐ │
│ data_dust_levels│ │ data_flow_meters│ │
│─────────────────│ │─────────────────│ │
│ id (PK) │ │ id (PK) │ │
│ site_id (FK) │>──────│ site_id (FK) │>───────────────┤
│ device_id │ │ asset_id (FK) │ │
│ year, month, day│ │ meter_name │ │
│ pm1_0 - pm10 │ │ reading_date │ │
│ temperature │ │ total_volume │ │
│ humidity │ │ tank_level_% │ │
└─────────────────┘ └─────────────────┘ │
│
┌─────────────────┐ ┌─────────────────┐ │
│ ops_tank_ │ │ cfg_tank_ │ │
│ corrections │ │ capacities │ │
│─────────────────│ │─────────────────│ │
│ id (PK) │ │ id (PK) │ │
│ site_id (FK) │>──────│ site_id (FK) │>───────────────┘
│ tank_name │ │ tank_name │
│ correction_date │ │ capacity_liters │
│ corrected_level │ │ is_active │
│ created_by (FK) │ └─────────────────┘
└─────────────────┘
5.3 API Design
5.3.1 Supabase REST API Patterns
Standard CRUD Operations:
// Read with filters
const { data, error } = await supabase
.from('data_dust_levels')
.select('*')
.eq('site_id', siteId)
.gte('year', 2026)
.order('created_at', { ascending: false })
.range(0, 99);
// Insert with returning
const { data, error } = await supabase
.from('ops_tank_corrections')
.insert({
site_id: siteId,
tank_name: tankName,
corrected_level: level,
reason: reason
})
.select()
.single();
// Update with filters
const { data, error } = await supabase
.from('email_schedules')
.update({ is_active: false })
.eq('id', scheduleId)
.select()
.single();
// Delete
const { error } = await supabase
.from('rpt_reports')
.delete()
.eq('id', reportId);
5.3.2 RPC Functions
Complex Queries:
-- Get dashboard summary
CREATE OR REPLACE FUNCTION get_dashboard_summary(p_site_id UUID)
RETURNS JSON AS $$
DECLARE
result JSON;
BEGIN
SELECT json_build_object(
'total_devices', (SELECT COUNT(DISTINCT device_id) FROM data_dust_levels WHERE site_id = p_site_id),
'latest_reading', (SELECT MAX(created_at) FROM data_dust_levels WHERE site_id = p_site_id),
'avg_pm25_today', (SELECT AVG(pm2_5) FROM data_dust_levels WHERE site_id = p_site_id AND created_at > CURRENT_DATE),
'alerts_count', (SELECT COUNT(*) FROM data_dust_levels WHERE site_id = p_site_id AND pm10 > 150 AND created_at > CURRENT_DATE)
) INTO result;
RETURN result;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
5.3.3 Edge Functions API
Chart Description Generation:
// POST /functions/v1/generate-chart-descriptions
{
"chartType": "pm25_trend",
"data": {
"labels": ["Mon", "Tue", "Wed", "Thu", "Fri"],
"values": [45.2, 52.1, 38.7, 61.3, 44.8]
},
"context": {
"site": "Mine Site A",
"period": "2026-02-01 to 2026-02-05"
}
}
// Response
{
"description": "PM2.5 levels at Mine Site A showed moderate variation over the week, with a peak of 61.3 μg/m³ on Thursday. The weekly average of 48.4 μg/m³ remains within acceptable limits.",
"insights": [
"Thursday spike may correlate with increased mining activity",
"Overall trend shows stable air quality"
]
}
Scraper Trigger:
// POST /functions/v1/trigger-dust-level-scraper
{
"site_id": "uuid",
"date_range": {
"start": "2026-02-01",
"end": "2026-02-05"
}
}
// Response
{
"status": "success",
"records_fetched": 1250,
"records_inserted": 1248,
"duplicates_skipped": 2
}
6. Operational Scenarios
6.1 Scenario: User Authentication and Authorization
6.1.1 Overview
Description: A user logs into the system and accesses features based on their role and permissions.
Primary Actor: End User (Admin or Regular User)
6.1.2 Main Success Flow
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to /login | System displays login form |
| 2 | User | Enters email and password | System validates input format |
| 3 | User | Clicks "Sign In" | System sends credentials to Supabase Auth |
| 4 | System | Verifies credentials | Checks password hash, user_profiles.is_active |
| 5 | System | Generates JWT token | Token includes user_id and role |
| 6 | System | Fetches user permissions | Queries user_module_permissions and user_site_permissions |
| 7 | System | Stores session | Saves tokens to localStorage |
| 8 | System | Redirects to dashboard | Renders UI based on permissions |
| 9 | User | Navigates to /dust-levels | System checks has_module_permission('dust_levels', 'view') |
| 10 | System | Loads permitted sites | Filters data by get_user_permitted_sites() |
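A sketch of steps 3-6 on the client, assuming the supabase-js v2 client exported from src/lib/supabase.ts (the export name and the @/ alias are assumptions); table and column names follow Section 5.
import { supabase } from '@/lib/supabase';
export async function signInAndLoadPermissions(email: string, password: string) {
  // Steps 3-5: verify credentials and receive a JWT-backed session
  const { data, error } = await supabase.auth.signInWithPassword({ email, password });
  if (error || !data.user) throw error ?? new Error('Sign-in failed');
  // Step 6: fetch module and site permissions for the signed-in user
  const userId = data.user.id;
  const [{ data: modules }, { data: sites }] = await Promise.all([
    supabase.from('user_module_permissions').select('module, can_view, can_edit').eq('user_id', userId),
    supabase.from('user_site_permissions').select('site_id, can_view, can_edit').eq('user_id', userId),
  ]);
  return { session: data.session, modules: modules ?? [], sites: sites ?? [] };
}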
6.1.3 Alternative Flows
A1: Invalid Credentials
| Step | Condition | Action |
|---|---|---|
| 4a | Password incorrect | Display "Invalid email or password" |
| 4b | User not found | Display same generic error (security) |
A2: Inactive Account
| Step | Condition | Action |
|---|---|---|
| 4a | user_profiles.is_active = false | Display "Account is deactivated. Contact administrator." |
A3: Missing Module Permission
| Step | Condition | Action |
|---|---|---|
| 9a | can_view = false for module | Redirect to /403 or hide navigation item |
6.2 Scenario: CSV Data Upload
6.2.1 Overview
Description: A user uploads CSV files containing dust monitoring data from DustRanger devices.
Primary Actor: Data Manager / Environmental Officer
6.2.2 Preconditions
- User is authenticated with upload module permission
- User has site permissions for target mine sites
- CSV files follow expected format (18 columns)
6.2.3 Main Success Flow
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to /upload | System displays upload interface |
| 2 | User | Selects CSV files (drag & drop or browse) | System validates file types |
| 3 | System | Parses CSV headers | Validates column structure |
| 4 | System | Displays preview | Shows first 10 rows, detected sites |
| 5 | User | Confirms upload | System starts processing |
| 6 | System | Validates each row | Checks data types, ranges, required fields |
| 7 | System | Detects duplicates | Compares site + device + timestamp |
| 8 | System | Batch inserts records | Uses transaction for atomicity |
| 9 | System | Updates progress | Shows percentage and record count |
| 10 | System | Displays summary | "Imported 4,523 records. 12 duplicates skipped." |
6.2.4 Data Validation Rules
| Field | Validation | Error Handling |
|---|---|---|
| time | ISO 8601, not future | Reject row |
| pm2_5, pm10 | Numeric, 0-10000 μg/m³ | Reject row if missing or out of range |
| temperature | Numeric, -50 to 80°C | Accept with warning |
| humidity | Numeric, 0-100% | Reject if out of range |
| latitude/longitude | Valid coordinates | Accept null |
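A sketch of the row-validation step using PapaParse from the stack; the CSV header names (time, pm2_5, pm10, humidity) are assumed from the database columns, and the thresholds mirror the table above.
import Papa from 'papaparse';
interface RawRow {
  time: string;
  pm2_5: string;
  pm10: string;
  humidity: string;
}
export function validateCsv(file: File): Promise<{ valid: RawRow[]; rejected: number }> {
  return new Promise((resolve, reject) => {
    Papa.parse<RawRow>(file, {
      header: true,
      skipEmptyLines: true,
      complete: (results) => {
        const valid: RawRow[] = [];
        let rejected = 0;
        for (const row of results.data) {
          const ts = Date.parse(row.time);
          const pm25 = Number(row.pm2_5);
          const pm10 = Number(row.pm10);
          const humidity = Number(row.humidity);
          const ok =
            Number.isFinite(ts) && ts <= Date.now() && // valid timestamp, not in the future
            pm25 >= 0 && pm25 <= 10000 &&              // PM2.5 within 0-10000 μg/m³
            pm10 >= 0 && pm10 <= 10000 &&              // PM10 within 0-10000 μg/m³
            humidity >= 0 && humidity <= 100;          // humidity within 0-100 %
          if (ok) valid.push(row);
          else rejected += 1;
        }
        resolve({ valid, rejected });
      },
      error: reject,
    });
  });
}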
6.2.5 Performance Requirements
| Metric | Target |
|---|---|
| File upload speed | > 5 MB/s |
| CSV parsing | > 10,000 rows/second |
| Database insert | > 5,000 records/second |
| Total time (10MB file) | < 30 seconds |
6.3 Scenario: PDF Report Generation
6.3.1 Overview
Description: A user generates a comprehensive PDF report with charts and data tables.
Primary Actor: Environmental Manager / Compliance Officer
6.3.2 Main Success Flow
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to /reports | System displays report interface |
| 2 | User | Selects mine site | System loads available devices |
| 3 | User | Selects devices and date range | System validates (max 90 days) |
| 4 | User | Clicks "Preview Data" | System queries measurements |
| 5 | System | Displays summary | Record count, averages, chart preview |
| 6 | User | Clicks "Generate Report" | System starts PDF generation |
| 7 | System | Renders React components | Cover page, charts, tables in hidden div |
| 8 | System | Captures pages with html2canvas | 1240x1754px (A4 at 150 DPI) |
| 9 | System | Assembles PDF with jsPDF | Adds pages, maintains aspect ratio |
| 10 | System | Uploads to Supabase Storage | Stores in reports/{user_id}/ |
| 11 | System | Creates report record | Saves metadata to rpt_reports |
| 12 | System | Triggers download | User receives PDF file |
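A sketch of steps 7-9, assuming each report page is rendered into its own DOM node at 1240x1754 px (A4 at 150 DPI); element handling is simplified for illustration.
import html2canvas from 'html2canvas';
import { jsPDF } from 'jspdf';
export async function assembleReportPdf(pageElements: HTMLElement[]): Promise<Blob> {
  // Page size matches the capture resolution used in step 8
  const pdf = new jsPDF({
    orientation: 'portrait',
    unit: 'px',
    format: [1240, 1754],
    hotfixes: ['px_scaling'], // keep px units 1:1 with the capture size
  });
  for (let i = 0; i < pageElements.length; i++) {
    const canvas = await html2canvas(pageElements[i], { scale: 1 }); // rasterize the rendered page
    const image = canvas.toDataURL('image/png');
    if (i > 0) pdf.addPage([1240, 1754], 'portrait');
    pdf.addImage(image, 'PNG', 0, 0, 1240, 1754); // full-page image keeps the aspect ratio
  }
  return pdf.output('blob'); // ready for upload to Supabase Storage (step 10)
}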
6.3.3 Report Content
| Section | Pages | Content |
|---|---|---|
| Cover Page | 1 | Logo, title, site, date range |
| Executive Summary | 1 | Key findings, compliance status |
| PM2.5 Analysis | 2 | Time series, statistics, exceedances |
| PM10 Analysis | 2 | Time series, statistics, exceedances |
| Environmental | 2 | Temperature, humidity charts |
| Data Tables | 2-4 | Daily averages, raw data sample |
6.3.4 Performance Requirements
| Metric | Target |
|---|---|
| Data query | < 5 seconds (10K records) |
| Chart rendering | < 2 seconds per chart |
| Page capture | < 3 seconds per page |
| Total generation | < 60 seconds (12 pages) |
| File size | < 5 MB |
6.4 Scenario: Tank Correction Management
6.4.1 Overview
Description: A user manually corrects tank level readings when automatic sensors are inaccurate.
Primary Actor: Field Supervisor / Site Manager
6.4.2 Main Success Flow
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to /correction-management | System displays correction interface |
| 2 | User | Selects mine site | System loads tanks for site |
| 3 | User | Selects tank | System shows current level and history |
| 4 | User | Clicks "Add Correction" | System displays correction modal |
| 5 | User | Enters corrected level (%) | System validates 0-100 range |
| 6 | User | Enters reason | Optional but recommended |
| 7 | User | Confirms correction | System inserts ops_tank_corrections record |
| 8 | System | Updates tank display | Shows corrected level with indicator |
| 9 | System | Logs activity | Records user, timestamp, old/new values |
6.4.3 Business Rules
| Rule | Description |
|---|---|
| BR-01 | Only users with flow_meter edit permission can add corrections |
| BR-02 | Corrections cannot be deleted, only superseded |
| BR-03 | Each correction is timestamped and attributed to user |
| BR-04 | Correction history is visible for audit purposes |
6.5 Scenario: Weekly Report Creation
6.5.1 Overview
Description: A field supervisor creates a structured weekly report documenting site activities.
Primary Actor: Field Supervisor / Site Manager
6.5.2 Main Success Flow
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to /weekly-reports | System displays report list |
| 2 | User | Clicks "Create New Report" | System shows week selection |
| 3 | User | Selects report period end date | System calculates 7-day period |
| 4 | System | Creates draft report | Initializes empty JSONB sections |
| 5 | User | Fills Section 1: Site Observations | System auto-saves (debounced 2s) |
| 6 | User | Fills Section 2: Flow Meter Usage | System validates numeric fields |
| 7 | User | Completes remaining sections | 8 sections total |
| 8 | User | Clicks "Review Report" | System validates required fields |
| 9 | User | Clicks "Submit Report" | System updates status to 'completed' |
| 10 | User | Clicks "Export to Word" | System generates DOCX with progress |
6.5.3 Report Sections
| Section | Content | Required |
|---|---|---|
| 1. Site Observations | Ranger ID, location, status, issues | Yes |
| 2. Flow Meter Usage | Site, dates, volume, notes | Yes |
| 3. Dashboard Updates | Category, description | No |
| 4. Vendor Activities | Vendor name, type, description | No |
| 5. Water Truck Testing | Sites, hardware, summary | No |
| 6. Hardware Installations | Type, location, details | No |
| 7. Admin & Reporting | Travel, reporting entries | No |
| 8. Other Tasks | Site support, stakeholder, internal | No |
6.5.4 DOCX Export Progress
| Step | Progress | Description |
|---|---|---|
| 1 | 0% | Initializing document |
| 2 | 10% | Creating header and title |
| 3 | 20-70% | Processing sections (8% each) |
| 4 | 80% | Formatting tables |
| 5 | 90% | Finalizing document |
| 6 | 100% | Download ready |
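A simplified sketch of the DOCX assembly in step 10 using the docx library from the stack; the section shape is illustrative and omits tables and progress reporting.
import { Document, HeadingLevel, Packer, Paragraph } from 'docx';
export async function exportWeeklyReport(
  title: string,
  sections: { heading: string; lines: string[] }[]
): Promise<Blob> {
  const children: Paragraph[] = [new Paragraph({ text: title, heading: HeadingLevel.TITLE })];
  for (const section of sections) {
    children.push(new Paragraph({ text: section.heading, heading: HeadingLevel.HEADING_1 }));
    for (const line of section.lines) {
      children.push(new Paragraph({ text: line })); // one paragraph per entry
    }
  }
  const doc = new Document({ sections: [{ children }] });
  return Packer.toBlob(doc); // handed to the browser download step
}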
6.6 Scenario: Email Schedule Configuration
6.6.1 Overview
Description: An administrator configures automated email schedules with templates.
Primary Actor: System Administrator
6.6.2 Main Success Flow
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | Admin | Navigates to /email-schedules | System displays schedule list |
| 2 | Admin | Clicks "Create Schedule" | System shows configuration form |
| 3 | Admin | Enters schedule name and description | System validates uniqueness |
| 4 | Admin | Selects schedule type (daily/weekly/monthly) | System shows cron options |
| 5 | Admin | Configures recipients | Email addresses or user groups |
| 6 | Admin | Creates subject template | Can use variables like {{site_name}} |
| 7 | Admin | Creates body template | Rich text with snippet insertion |
| 8 | Admin | Inserts snippet template | System loads from email_snippet_templates |
| 9 | Admin | Previews email | System renders with sample data |
| 10 | Admin | Saves and activates | System calculates next_send_at |
6.6.3 Template Variables
| Variable | Source | Example |
|---|---|---|
| {{site_name}} | cfg_mine_sites.name | "Mine Site Alpha" |
| {{date}} | Current date | "2026-02-05" |
| {{pm25_avg}} | Calculated from data | "45.2 μg/m³" |
| {{alert_count}} | Count of exceedances | "3" |
| {{user_name}} | Recipient's name | "John Smith" |
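A minimal sketch of the substitution step; the real system resolves values through email_variable_definitions, so the variables map here is assumed to be pre-computed.
// Replace {{variable}} placeholders, leaving unknown placeholders untouched.
export function renderTemplate(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in variables ? variables[name] : match
  );
}
// Example:
// renderTemplate('Daily summary for {{site_name}} on {{date}}',
//   { site_name: 'Mine Site Alpha', date: '2026-02-05' })
// => 'Daily summary for Mine Site Alpha on 2026-02-05'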
6.7 Scenario: Heatmap Visualization
6.7.1 Overview
Description: A user views geographic dust concentration data on an interactive map.
Primary Actor: Environmental Analyst
6.7.2 Main Success Flow
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | User | Navigates to /heatmap | System loads Google Maps |
| 2 | System | Fetches site coordinates | Queries cfg_mine_sites |
| 3 | System | Fetches latest dust data | Queries data_dust_levels with coordinates |
| 4 | System | Renders heatmap overlay | Color gradient based on PM values |
| 5 | User | Selects date range | System updates heatmap data |
| 6 | User | Selects PM metric (PM2.5/PM10) | System recalculates colors |
| 7 | User | Clicks on hotspot | System shows detail popup |
| 8 | User | Toggles contour lines | System renders concentration contours |
6.7.3 Color Scale
| PM2.5 Level | Color | Status |
|---|---|---|
| 0-12 μg/m³ | Green | Good |
| 12-35 μg/m³ | Yellow | Moderate |
| 35-55 μg/m³ | Orange | Unhealthy (Sensitive) |
| 55-150 μg/m³ | Red | Unhealthy |
| 150+ μg/m³ | Purple | Very Unhealthy |
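A direct mapping of this scale to overlay colours; the hex values are illustrative stand-ins for the actual palette.
export function pm25Color(value: number): string {
  if (value < 12) return '#2ecc71';  // Good (green)
  if (value < 35) return '#f1c40f';  // Moderate (yellow)
  if (value < 55) return '#e67e22';  // Unhealthy for sensitive groups (orange)
  if (value < 150) return '#e74c3c'; // Unhealthy (red)
  return '#8e44ad';                  // Very unhealthy (purple)
}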
6.8 Scenario: User Permission Management
6.8.1 Overview
Description: An administrator manages user permissions and group memberships.
Primary Actor: System Administrator
6.8.2 Main Success Flow
| Step | Actor | Action | System Response |
|---|---|---|---|
| 1 | Admin | Navigates to /settings/users | System displays user list |
| 2 | Admin | Clicks on user row | System shows user detail page |
| 3 | Admin | Views current permissions | Module and site permissions displayed |
| 4 | Admin | Toggles module permission | System updates user_module_permissions |
| 5 | Admin | Adds site permission | System inserts user_site_permissions |
| 6 | Admin | Assigns to user group | System inserts user_group_members |
| 7 | System | Applies group permissions | User inherits group's permissions |
| 8 | Admin | Saves changes | System confirms update |
6.8.3 Permission Modules
| Module | Description | Default (New User) |
|---|---|---|
| dashboard | Main dashboard access | View only |
| dust_levels | Dust monitoring data | None |
| flow_meter | Flow meter management | None |
| dust_ranger | Dust Ranger devices | None |
| reports | PDF report generation | None |
| weekly_reports | Weekly field reports | None |
| email_schedules | Email automation | None |
| heatmap | Geographic visualization | None |
| assets | Asset tracking | None |
| climate | Weather data | None |
| settings | Application settings | None |
| user_management | User administration | Admin only |
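A sketch of the client-side counterpart to has_module_permission(), used to hide navigation items (scenario 6.1, step 9); the shapes mirror the AuthContext in Section 7.2.1, and the check is advisory only since RLS remains the enforcement layer.
interface ModulePermission {
  module: string;
  can_view: boolean;
  can_edit: boolean;
}
export function hasModulePermission(
  permissions: ModulePermission[],
  isAdmin: boolean,
  module: string,
  type: 'view' | 'edit'
): boolean {
  if (isAdmin) return true; // admins bypass module checks, matching is_admin()
  const entry = permissions.find((p) => p.module === module);
  if (!entry) return false;
  return type === 'view' ? entry.can_view : entry.can_edit;
}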
7. Detailed Design
7.1 Frontend Component Architecture
7.1.1 Feature Module Structure
Each feature module follows a consistent structure:
src/features/[feature-name]/
├── components/ # React components
│ ├── [Feature]Page.tsx # Main page component
│ ├── [Feature]List.tsx # List/table component
│ ├── [Feature]Form.tsx # Form component
│ └── charts/ # Chart components
├── services/ # Business logic
│ └── [feature]Service.ts # API calls, data transformation
├── hooks/ # Custom hooks
│ └── use[Feature].ts # State management hook
├── types/ # TypeScript types
│ └── index.ts # Type definitions
└── index.ts # Public exports
7.1.2 Service Pattern
// Example: Tank Correction Service
export class TankCorrectionService {
static async getCorrections(siteId: string): Promise<TankCorrection[]> {
const { data, error } = await supabase
.from('ops_tank_corrections')
.select(`
*,
site:cfg_mine_sites(name),
created_by_user:user_profiles(full_name)
`)
.eq('site_id', siteId)
.order('correction_date', { ascending: false });
if (error) throw error;
return data;
}
static async addCorrection(correction: NewTankCorrection): Promise<TankCorrection> {
const { data, error } = await supabase
.from('ops_tank_corrections')
.insert(correction)
.select()
.single();
if (error) throw error;
return data;
}
}
7.1.3 Hook Pattern
// Example: useTankCorrections hook
export function useTankCorrections(siteId: string | null) {
const [corrections, setCorrections] = useState<TankCorrection[]>([]);
const [loading, setLoading] = useState(false);
const [error, setError] = useState<Error | null>(null);
const fetchCorrections = useCallback(async () => {
if (!siteId) return;
setLoading(true);
setError(null);
try {
const data = await TankCorrectionService.getCorrections(siteId);
setCorrections(data);
} catch (err) {
setError(err as Error);
} finally {
setLoading(false);
}
}, [siteId]);
useEffect(() => {
fetchCorrections();
}, [fetchCorrections]);
const addCorrection = async (correction: NewTankCorrection) => {
const newCorrection = await TankCorrectionService.addCorrection(correction);
setCorrections(prev => [newCorrection, ...prev]);
return newCorrection;
};
return { corrections, loading, error, addCorrection, refetch: fetchCorrections };
}
7.2 State Management
7.2.1 Auth Context
interface AuthContextType {
user: User | null;
session: Session | null;
loading: boolean;
isAdmin: boolean;
permissions: {
modules: ModulePermission[];
sites: SitePermission[];
};
signIn: (email: string, password: string) => Promise<void>;
signOut: () => Promise<void>;
hasModulePermission: (module: string, type: 'view' | 'edit') => boolean;
hasSitePermission: (siteId: string, type: 'view' | 'edit') => boolean;
}
7.2.2 Layout Context
interface LayoutContextType {
theme: 'light' | 'dark' | 'system';
sidebarCollapsed: boolean;
rtl: boolean;
setTheme: (theme: 'light' | 'dark' | 'system') => void;
toggleSidebar: () => void;
setRtl: (rtl: boolean) => void;
}
7.3 Routing Architecture
7.3.1 Route Protection
// ProtectedRoute component
interface ProtectedRouteProps {
children: React.ReactNode;
requiredModule?: string;
adminOnly?: boolean;
}
export function ProtectedRoute({
children,
requiredModule,
adminOnly
}: ProtectedRouteProps) {
const { user, loading, isAdmin, hasModulePermission } = useAuth();
const location = useLocation();
if (loading) {
return <LoadingSpinner />;
}
if (!user) {
return <Navigate to="/login" state={{ from: location }} replace />;
}
if (adminOnly && !isAdmin) {
return <Navigate to="/403" replace />;
}
if (requiredModule && !hasModulePermission(requiredModule, 'view')) {
return <Navigate to="/403" replace />;
}
return <>{children}</>;
}
7.3.2 Route Definitions
export const layoutsRoutes: RouteObject[] = [
{
path: '/',
element: <ProtectedRoute requiredModule="dashboard"><DashboardPage /></ProtectedRoute>
},
{
path: '/dust-levels/*',
element: <ProtectedRoute requiredModule="dust_levels"><DustLevelsPage /></ProtectedRoute>
},
{
path: '/flow-meter',
element: <ProtectedRoute requiredModule="flow_meter"><FlowMeterPage /></ProtectedRoute>
},
{
path: '/correction-management',
element: <ProtectedRoute requiredModule="flow_meter"><CorrectionManagementPage /></ProtectedRoute>
},
{
path: '/settings/users/*',
element: <ProtectedRoute requiredModule="user_management" adminOnly><UserManagementPage /></ProtectedRoute>
},
// ... more routes
];
7.4 Error Handling
7.4.1 Error Boundary
class ErrorBoundary extends React.Component<Props, State> {
state = { hasError: false, error: null };
static getDerivedStateFromError(error: Error) {
return { hasError: true, error };
}
componentDidCatch(error: Error, errorInfo: React.ErrorInfo) {
// Log to error tracking service
console.error('Error boundary caught:', error, errorInfo);
// Log to database
supabase.from('error_logs').insert({
error_message: error.message,
error_stack: error.stack,
component_stack: errorInfo.componentStack,
url: window.location.href,
user_agent: navigator.userAgent
});
}
render() {
if (this.state.hasError) {
return <ErrorFallback error={this.state.error} onReset={() => this.setState({ hasError: false })} />;
}
return this.props.children;
}
}
7.4.2 API Error Handling
async function handleApiCall<T>(
operation: () => Promise<{ data: T | null; error: PostgrestError | null }>
): Promise<T> {
const { data, error } = await operation();
if (error) {
// Map Supabase errors to user-friendly messages
const message = mapErrorMessage(error);
throw new ApplicationError(error.code, message, error.details);
}
if (!data) {
throw new ApplicationError('NO_DATA', 'No data returned from server');
}
return data;
}
function mapErrorMessage(error: PostgrestError): string {
switch (error.code) {
case '23505': return 'This record already exists';
case '23503': return 'Referenced record not found';
case '42501': return 'You do not have permission for this action';
default: return error.message;
}
}
8. System Integrity Controls
8.1 Data Integrity
8.1.1 Input Validation
Client-Side (Zod):
const tankCorrectionSchema = z.object({
site_id: z.string().uuid(),
tank_name: z.string().min(1).max(100),
corrected_level: z.number().min(0).max(100),
reason: z.string().max(500).optional()
});
type TankCorrectionInput = z.infer<typeof tankCorrectionSchema>;
Database Constraints:
-- Check constraints
ALTER TABLE ops_tank_corrections
ADD CONSTRAINT check_level_range
CHECK (corrected_level >= 0 AND corrected_level <= 100);
-- Foreign key constraints
ALTER TABLE ops_tank_corrections
ADD CONSTRAINT fk_site
FOREIGN KEY (site_id) REFERENCES cfg_mine_sites(id) ON DELETE RESTRICT;
8.1.2 Transaction Management
// Atomic operations with rollback
async function uploadWithRollback(files: File[], uploadId: string) {
const uploadedPaths: string[] = [];
try {
// Phase 1: Upload files
for (const file of files) {
const { data, error } = await supabase.storage
.from('csv-uploads')
.upload(`${uploadId}/${file.name}`, file);
if (error) throw error; // storage client returns errors rather than throwing
uploadedPaths.push(data.path);
}
// Phase 2: Create database records
await supabase.from('csv_files').insert(
uploadedPaths.map(path => ({ upload_id: uploadId, storage_path: path }))
);
} catch (error) {
// Rollback: Delete uploaded files
await supabase.storage.from('csv-uploads').remove(uploadedPaths);
throw error;
}
}
8.2 Backup and Recovery
| Component | Method | Frequency | Retention |
|---|---|---|---|
| Database | PostgreSQL pg_dump | Daily 03:00 UTC | 30 days |
| Database | WAL archiving | Continuous | 7 days |
| Storage | S3 versioning | On change | 90 days |
| Code | Git repository | On commit | Indefinite |
8.3 Audit Logging
-- Activity log table
CREATE TABLE activity_logs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES user_profiles(id),
action TEXT NOT NULL,
resource_type TEXT NOT NULL,
resource_id UUID,
details JSONB,
ip_address INET,
user_agent TEXT,
created_at TIMESTAMPTZ DEFAULT now()
);
-- Automatic logging trigger
CREATE OR REPLACE FUNCTION log_activity()
RETURNS TRIGGER AS $$
BEGIN
INSERT INTO activity_logs (user_id, action, resource_type, resource_id, details)
VALUES (
auth.uid(),
TG_OP,
TG_TABLE_NAME,
COALESCE(NEW.id, OLD.id),
jsonb_build_object('old', OLD, 'new', NEW)
);
RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
9. External Interfaces
9.1 External API Inventory
| API | Provider | Purpose | Auth Method |
|---|---|---|---|
| DeepSeek V3.2 | DeepSeek | AI chart descriptions | API Key |
| Dustac Scraper | Internal | Dust level data collection | API Key |
| Flow Meter Scraper | Internal | Flow meter data collection | API Key |
| BOM Weather | Bureau of Meteorology | Weather observations | Public |
| Google Maps | Google | Heatmap visualization | API Key |
| QuickChart.io | QuickChart | Server-side charts | Public |
9.2 AI Integration (DeepSeek V3.2)
9.2.1 Configuration
// Edge Function: generate-chart-descriptions
const DEEPSEEK_API_URL = 'https://api.deepseek.com/v1/chat/completions';
const DEEPSEEK_MODEL = 'deepseek-chat';
interface ChartDescriptionRequest {
chartType: string;
data: ChartData;
context: {
site: string;
period: string;
metric: string;
};
}
9.2.2 Request/Response Format
// Request to DeepSeek
const response = await fetch(DEEPSEEK_API_URL, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${DEEPSEEK_API_KEY}`
},
body: JSON.stringify({
model: DEEPSEEK_MODEL,
messages: [
{
role: 'system',
content: 'You are an environmental data analyst. Provide concise, professional descriptions of dust monitoring charts.'
},
{
role: 'user',
content: `Analyze this ${chartType} chart for ${context.site} from ${context.period}. Data: ${JSON.stringify(data)}`
}
],
max_tokens: 500,
temperature: 0.7
})
});
// Response
{
"description": "PM2.5 levels showed a gradual increase throughout the week...",
"insights": [
"Peak concentration occurred on Thursday at 14:00",
"Levels remained within regulatory limits"
],
"recommendations": [
"Consider increased dust suppression during afternoon hours"
]
}
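The raw chat-completions payload from DeepSeek carries the generated text in choices[0].message.content; the Edge Function maps it into the description/insights/recommendations shape shown above. The mapping below is a sketch of that step, not the verbatim Edge Function code.
// Sketch only: coerce the model output into the documented response shape.
interface ChartDescriptionResult {
  description: string;
  insights: string[];
  recommendations: string[];
}

function toChartDescription(completion: {
  choices: { message: { content: string } }[];
}): ChartDescriptionResult {
  const content = completion.choices?.[0]?.message?.content ?? '';
  try {
    // Works when the model is prompted to answer in JSON
    return JSON.parse(content) as ChartDescriptionResult;
  } catch {
    // Otherwise use the full text as the description
    return { description: content, insights: [], recommendations: [] };
  }
}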
9.3 Scraper APIs
9.3.1 Dust Level Scraper
// Edge Function: trigger-dust-level-scraper
interface ScraperRequest {
site_id: string;
date_range: {
start: string; // ISO date
end: string; // ISO date
};
device_ids?: string[];
}
interface ScraperResponse {
status: 'success' | 'partial' | 'failed';
records_fetched: number;
records_inserted: number;
duplicates_skipped: number;
errors?: string[];
}
9.3.2 Flow Meter Scraper
// Edge Function: trigger-flow-meter-scraper
interface FlowMeterScraperRequest {
site_id: string;
meter_ids?: string[];
}
interface FlowMeterScraperResponse {
status: 'success' | 'failed';
meters_updated: number;
latest_readings: {
meter_id: string;
reading_date: string;
total_volume: number;
tank_level_percent: number;
}[];
}
9.4 Weather API (BOM)
// Weather data fetching
interface BOMObservation {
station_id: string;
observation_time: string;
temperature: number;
humidity: number;
wind_speed: number;
wind_direction: string;
rainfall: number;
pressure: number;
}
async function fetchWeatherData(stationId: string): Promise<BOMObservation[]> {
const response = await fetch(
`http://www.bom.gov.au/fwo/${stationId}.json`
);
const data = await response.json();
return data.observations.data;
}
9.5 Google Maps Integration
// Heatmap configuration
const heatmapConfig = {
apiKey: import.meta.env.VITE_GOOGLE_MAPS_API_KEY,
libraries: ['visualization'],
mapOptions: {
center: { lat: -23.5, lng: 119.5 }, // Western Australia
zoom: 10,
mapTypeId: 'satellite'
},
heatmapOptions: {
radius: 50,
opacity: 0.6,
gradient: [
'rgba(0, 255, 0, 0)', // Green (low)
'rgba(255, 255, 0, 1)', // Yellow
'rgba(255, 165, 0, 1)', // Orange
'rgba(255, 0, 0, 1)', // Red (high)
'rgba(128, 0, 128, 1)' // Purple (very high)
]
}
};
10. Feature Modules
10.1 Feature Modules Overview
The application consists of 16 feature modules, each self-contained with components, services, hooks, and types.
| Module | Path | Permission | Description |
|---|---|---|---|
| Dashboard | / | dashboard | Main analytics overview |
| Dust Levels | /dust-levels | dust_levels | Dust monitoring data |
| Flow Meter | /flow-meter | flow_meter | Water usage tracking |
| Dust Ranger | /dust-ranger-data | dust_ranger | Device management |
| Reports | /report-template | reports | PDF report generation |
| Weekly Reports | /weekly-reports | weekly_reports | Field reports |
| Email Schedules | /email-schedules | email_schedules | Email automation |
| Heatmap | /heatmap | heatmap | Geographic visualization |
| Assets | /asset-management | assets | Asset tracking |
| Climate | /climate | climate | Weather integration |
| Mine Sites | /settings/sites | settings | Site configuration |
| Settings | /settings | settings | App configuration |
| User Management | /settings/users | user_management | RBAC administration |
| Report Templates | /report-templates | reports | Template management |
| Upload | /upload | upload | CSV data upload |
| Tank Configuration | /tank-configuration | flow_meter | Tank capacity setup |
10.2 Dashboard Module
Purpose: Provide at-a-glance overview of system health and key metrics.
Components:
- DashboardPage - Main dashboard layout
- OverviewStats - Key metric cards
- AssetHealthWidget - Asset status summary
- DustRangerHealthWidget - Device health status
- RefillFrequencyWidget - Refill tracking
Key Metrics:
| Metric | Source | Update Frequency |
|---|---|---|
| Active Devices | data_dust_levels | Real-time |
| Average PM2.5 | data_dust_levels | Hourly |
| Tank Levels | data_flow_meters | 15 minutes |
| Alerts Count | Calculated | Real-time |
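The widgets above are backed by small aggregate queries over data_dust_levels. The hook below is a minimal sketch of that pattern; the hook name, the '@/lib/supabase' client import, and the 25 µg/m³ alert threshold are illustrative assumptions rather than part of the implemented module.
// Sketch only: aggregate the dashboard key metrics for one site.
import { useEffect, useState } from 'react';
import { supabase } from '@/lib/supabase'; // assumed client export

interface OverviewStats {
  activeDevices: number;
  avgPm25: number;
  alertsCount: number;
}

export function useOverviewStats(siteId: string | null) {
  const [stats, setStats] = useState<OverviewStats | null>(null);

  useEffect(() => {
    if (!siteId) return;
    (async () => {
      // Last 24 hours of readings for the selected site
      const since = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString();
      const { data, error } = await supabase
        .from('data_dust_levels')
        .select('device_id, pm2_5')
        .eq('site_id', siteId)
        .gte('created_at', since);
      if (error || !data) return;

      const devices = new Set(data.map(d => d.device_id));
      const avgPm25 = data.length
        ? data.reduce((sum, d) => sum + (d.pm2_5 ?? 0), 0) / data.length
        : 0;
      setStats({
        activeDevices: devices.size,
        avgPm25,
        alertsCount: data.filter(d => (d.pm2_5 ?? 0) > 25).length // illustrative threshold
      });
    })();
  }, [siteId]);

  return stats;
}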
10.3 Dust Levels Module
Purpose: Monitor and analyze dust concentration data from DustRanger devices.
Components:
- DustLevelsPage - Main page with filters
- StatisticsTable - Data summary table
- SiteSummaryCard - Per-site overview
- SiteVisibilityManager - Toggle site visibility
- CompareExportModal - Multi-site comparison export
Charts:
- DustWeatherCorrelationChart - PM vs weather correlation
- MultiSiteComparisonChart - Cross-site comparison
- DustDistributionAnalysis - Distribution histogram
- DustlocUsageChart - Dustloc usage patterns
- SpikeDustDataChart - Spike detection
Services:
class DustLevelsService {
static async fetchByDateRange(siteId: string, start: Date, end: Date): Promise<DustLevel[]>;
static async fetchStatistics(siteId: string, period: string): Promise<Statistics>;
static async exportToCSV(filters: DustLevelFilters): Promise<Blob>;
}
10.4 Flow Meter Module
Purpose: Track water usage for dust suppression activities.
Components:
- FlowMeterPage - Main dashboard with sidebar
- TankLevelDisplay - 3D cylinder visualization
- AddRefillModal - Record refill events
- DeleteConfirmModal - Confirm deletions
- ScraperConfigModal - Configure data scraping
Sub-Pages:
- /refill-management - Manage refill records
- /correction-management - Tank level corrections
- /tank-configuration - Tank capacity settings
Tank Visualization:
interface TankDisplayProps {
level: number; // 0-100%
capacity: number; // Liters
lastUpdated: Date;
hasCorrection: boolean;
}
10.5 Weekly Reports Module
Purpose: Create structured weekly field reports with auto-save.
Components:
- WeeklyReportPage - Report editor
- SiteSelector - Week/site selection
- AutoSaveIndicator - Save status display
- ObservationEditor - Rich text editing
Sections:
- SiteObservationsSection - Field observations
- DashboardUpdatesSection - System updates
- HardwareInstallationsSection - Equipment changes
- VendorActivitiesSection - Vendor work
- WaterTruckTestingSection - Testing records
- AdminReportingSection - Administrative tasks
- OtherTasksSection - Miscellaneous
Auto-Save Hook:
function useWeeklyReportAutoSave(reportId: string, data: WeeklyReportData) {
const [saveStatus, setSaveStatus] = useState<'saved' | 'saving' | 'error'>('saved');
const debouncedSave = useDebouncedCallback(async (data) => {
setSaveStatus('saving');
try {
await WeeklyReportService.update(reportId, data);
setSaveStatus('saved');
} catch {
setSaveStatus('error');
}
}, 2000);
useEffect(() => {
debouncedSave(data);
}, [data]);
return saveStatus;
}
10.6 Email Schedules Module
Purpose: Configure and manage automated email notifications.
Components:
- EmailSchedulesPage - Schedule list
- EmailScheduleForm - Create/edit schedule
- SnippetTemplateSelector - Insert templates
- EmailLogDetails - View sent emails
- AttachmentConfig - Configure attachments
Template System:
interface EmailTemplate {
id: string;
name: string;
subject_template: string; // "Weekly Report - {{site_name}}"
body_template: string; // HTML with {{variables}}
variables: string[]; // ["site_name", "date", "pm25_avg"]
}
interface SnippetTemplate {
id: string;
name: string;
content: string; // Reusable HTML block
category: string; // "header", "footer", "table"
tags: string[];
}
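Subject and body templates are plain strings containing {{variable}} placeholders. A minimal interpolation sketch is shown below; the helper name and the behaviour for unknown variables are assumptions, not part of the implemented service.
// Sketch only: substitute {{variable}} placeholders in a template string.
// Unknown variables are left in place so problems stay visible in the output.
function renderTemplate(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in values ? values[name] : match
  );
}

// Usage with the EmailTemplate fields defined above
const subject = renderTemplate('Weekly Report - {{site_name}}', {
  site_name: 'Mine Site A'
});
// => "Weekly Report - Mine Site A"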
10.7 Heatmap Module
Purpose: Visualize dust concentration geographically.
Components:
- HeatmapPage - Map container
- HeatmapControls - Date/metric selectors
- HeatmapLegend - Color scale legend
- SiteMarkers - Mine site markers
Integration:
// Google Maps with Heatmap Layer
const { isLoaded } = useGoogleMapsApi({
apiKey: GOOGLE_MAPS_API_KEY,
libraries: ['visualization']
});
const heatmapData = useMemo(() =>
dustData.map(d => ({
location: new google.maps.LatLng(d.latitude, d.longitude),
weight: d.pm25
})),
[dustData]
);
10.8 User Management Module
Purpose: Administer users, permissions, and groups.
Components:
- UserListPage - User directory
- UserEditPage - Edit user permissions
- GroupListPage - User groups
- GroupEditPage - Edit group members
Permission Components:
// PermissionGate - Conditional rendering
<PermissionGate module="reports" permission="edit">
<Button>Generate Report</Button>
</PermissionGate>
// usePermissions hook
const { canView, canEdit } = usePermissions('flow_meter');
10.9 Assets Module
Purpose: Track physical assets and their locations.
Components:
- AssetManagementPage - Asset list
- AssetMap - Geographic view
- AssetDetails - Asset information
Asset Types:
| Type | Description |
|---|---|
| dust_ranger | DustRanger monitoring device |
| flow_meter | Water flow meter |
| weather_station | Weather monitoring station |
| water_truck | Dust suppression vehicle |
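The asset types above map naturally onto a string union; the type below is a sketch of how asset records could be modelled and is not a verbatim copy of the implemented types (field names are assumptions).
// Sketch only: asset record typed against the asset types listed above.
type AssetType = 'dust_ranger' | 'flow_meter' | 'weather_station' | 'water_truck';

interface Asset {
  id: string;
  site_id: string;
  asset_type: AssetType;
  name: string;
  latitude: number | null;     // mobile assets may have no fixed coordinates
  longitude: number | null;
  last_seen_at: string | null; // ISO timestamp
}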
10.10 Reports Module
Purpose: Generate and manage PDF reports.
Components:
ReportTemplatePage- Report configurationReportTemplate- PDF layoutChartFilterPanel- Chart selection
Chart Components:
- PM10ConcentrationChart
- DailyPMAveragesChart
- HourlyPMAveragesChart
- EnvironmentalTrendChart
- TemperatureHumidityScatter
- ParticleDistributionChart
- DeviceTimelineChart
- WeeklyDustLevelComparisonChart
11. RBAC and Permissions
11.1 Permission Model Overview
The system implements a comprehensive Role-Based Access Control (RBAC) model with three levels:
┌─────────────────────────────────────────────────────────────┐
│ Permission Hierarchy │
├─────────────────────────────────────────────────────────────┤
│ │
│ Level 1: User Role │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ Admin │ │ User │ │
│ │ (Full Access)│ │ (Limited) │ │
│ └─────────────┘ └──────┬──────┘ │
│ │ │
│ Level 2: Module Permissions │
│ ┌─────────────────────────┴─────────────────────────┐ │
│ │ dashboard │ dust_levels │ flow_meter │ reports │ │
│ │ can_view │ can_view │ can_view │ can_view │ │
│ │ can_edit │ can_edit │ can_edit │ can_edit │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ Level 3: Site Permissions │
│ ┌─────────────────────────┴─────────────────────────┐ │
│ │ Site A │ Site B │ Site C │ Site D │ │
│ │ can_view │ can_view │ can_view │ can_view │ │
│ │ can_edit │ can_edit │ can_edit │ can_edit │ │
│ └─────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
11.2 User Roles
| Role | Description | Capabilities |
|---|---|---|
| admin | System administrator | Full access to all modules and sites, can manage users |
| user | Regular user | Access based on granted module and site permissions |
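The isAdmin flag used by the permission checks below is derived from the role column on user_profiles (see also 12.5.1). A minimal sketch, written as a standalone helper rather than the actual Auth Context implementation:
// Sketch only: derive the admin flag from user_profiles.role after sign-in.
async function loadIsAdmin(userId: string): Promise<boolean> {
  const { data, error } = await supabase
    .from('user_profiles')
    .select('role')
    .eq('id', userId)
    .single();
  if (error || !data) return false;
  return data.role === 'admin';
}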
11.3 Module Permissions
| Module | Description | View | Edit |
|---|---|---|---|
| dashboard | Main dashboard | See overview stats | N/A |
| dust_levels | Dust monitoring | View data, charts | Export data |
| flow_meter | Flow meter data | View readings | Add refills, corrections |
| dust_ranger | Device management | View devices | Configure devices |
| reports | PDF reports | View reports | Generate reports |
| weekly_reports | Weekly reports | View reports | Create/edit reports |
| email_schedules | Email automation | View schedules | Create/edit schedules |
| heatmap | Geographic view | View heatmap | Configure overlays |
| assets | Asset tracking | View assets | Edit asset info |
| climate | Weather data | View weather | N/A |
| settings | App settings | View settings | Modify settings |
| user_management | User admin | View users | Edit permissions |
11.4 Permission Check Flow
// Frontend permission check
function usePermissions(module: string) {
const { user, isAdmin, permissions } = useAuth();
const canView = useMemo(() => {
if (isAdmin) return true;
const perm = permissions.modules.find(p => p.module === module);
return perm?.can_view ?? false;
}, [isAdmin, permissions, module]);
const canEdit = useMemo(() => {
if (isAdmin) return true;
const perm = permissions.modules.find(p => p.module === module);
return perm?.can_edit ?? false;
}, [isAdmin, permissions, module]);
return { canView, canEdit };
}
// Backend RLS policy
CREATE POLICY "Module permission check"
ON data_dust_levels FOR SELECT
USING (
is_admin() OR
has_module_permission('dust_levels', 'view')
);
11.5 Site Permission Filtering
// Hook to get permitted sites
function usePermittedSites() {
const { user, isAdmin, permissions } = useAuth();
const [sites, setSites] = useState<MineSite[]>([]);
useEffect(() => {
async function fetchSites() {
if (isAdmin) {
// Admin sees all sites
const { data } = await supabase
.from('cfg_mine_sites')
.select('*')
.eq('is_active', true);
setSites(data ?? []);
} else {
// User sees only permitted sites
const permittedIds = permissions.sites
.filter(p => p.can_view)
.map(p => p.site_id);
const { data } = await supabase
.from('cfg_mine_sites')
.select('*')
.in('id', permittedIds)
.eq('is_active', true);
setSites(data ?? []);
}
}
fetchSites();
}, [isAdmin, permissions]);
return sites;
}
11.6 User Groups
User groups allow bulk permission assignment:
interface UserGroup {
id: string;
name: string;
description: string;
module_permissions: ModulePermission[];
site_permissions: SitePermission[];
members: User[];
}
// When user is added to group, they inherit group permissions
async function addUserToGroup(userId: string, groupId: string) {
// Add membership
await supabase.from('user_group_members').insert({ user_id: userId, group_id: groupId });
// Copy group permissions to user
const group = await fetchGroupWithPermissions(groupId);
for (const perm of group.module_permissions) {
await supabase.from('user_module_permissions').upsert({
user_id: userId,
module: perm.module,
can_view: perm.can_view,
can_edit: perm.can_edit
});
}
for (const perm of group.site_permissions) {
await supabase.from('user_site_permissions').upsert({
user_id: userId,
site_id: perm.site_id,
can_view: perm.can_view,
can_edit: perm.can_edit
});
}
}
11.7 Permission UI Components
PermissionGate:
// Conditionally render based on permission
<PermissionGate module="reports" permission="edit" fallback={<ViewOnlyBadge />}>
<Button onClick={generateReport}>Generate Report</Button>
</PermissionGate>
ProtectedRoute:
// Route-level protection
<Route
path="/settings/users"
element={
<ProtectedRoute requiredModule="user_management" adminOnly>
<UserManagementPage />
</ProtectedRoute>
}
/>
Navigation Filtering:
// Hide nav items user can't access
const navItems = allNavItems.filter(item => {
if (item.adminOnly && !isAdmin) return false;
if (item.module && !hasModulePermission(item.module, 'view')) return false;
return true;
});
Appendices
Appendix A: Acronyms and Abbreviations
| Acronym | Definition |
|---|---|
| API | Application Programming Interface |
| AWST | Australian Western Standard Time |
| BOM | Bureau of Meteorology |
| CDN | Content Delivery Network |
| CRUD | Create, Read, Update, Delete |
| CSV | Comma-Separated Values |
| DDoS | Distributed Denial of Service |
| DOCX | Microsoft Word Document Format |
| E2E | End-to-End |
| ER | Entity-Relationship |
| HSTS | HTTP Strict Transport Security |
| HTML | HyperText Markup Language |
| HTTP | HyperText Transfer Protocol |
| HTTPS | HTTP Secure |
| JSON | JavaScript Object Notation |
| JSONB | JSON Binary (PostgreSQL) |
| JWT | JSON Web Token |
| LCP | Largest Contentful Paint |
| OAuth | Open Authorization |
| PDF | Portable Document Format |
| PM | Particulate Matter |
| PM1.0 | Particulate Matter ≤1.0 micrometers |
| PM2.5 | Particulate Matter ≤2.5 micrometers |
| PM4.0 | Particulate Matter ≤4.0 micrometers |
| PM10 | Particulate Matter ≤10 micrometers |
| RBAC | Role-Based Access Control |
| REST | Representational State Transfer |
| RLS | Row-Level Security |
| RPC | Remote Procedure Call |
| SDD | System Design Document |
| SDK | Software Development Kit |
| SPA | Single Page Application |
| SQL | Structured Query Language |
| SSL | Secure Sockets Layer |
| TLS | Transport Layer Security |
| UI | User Interface |
| URL | Uniform Resource Locator |
| UTC | Coordinated Universal Time |
| UUID | Universally Unique Identifier |
| UX | User Experience |
| VPC | Virtual Private Cloud |
| WAF | Web Application Firewall |
| WAL | Write-Ahead Logging |
| WCAG | Web Content Accessibility Guidelines |
| XSS | Cross-Site Scripting |
Appendix B: Glossary
| Term | Definition |
|---|---|
| DustRanger | Portable dust monitoring device that measures particulate matter concentrations |
| Dustloc | Dust suppression location/equipment |
| Edge Function | Serverless function running on Supabase's Deno runtime |
| Heatmap | Geographic visualization showing data intensity through color gradients |
| Mine Site | Physical mining operation location being monitored |
| Refill | Event of replenishing water in dust suppression tanks |
| Row-Level Security | PostgreSQL feature that filters data based on user identity |
| Scraper | Automated process that fetches data from external sources |
| Tank Correction | Manual adjustment of tank level readings |
| Weekly Report | Structured field report documenting weekly site activities |
Appendix C: Database Table Reference
| Table | Records (Est.) | Growth Rate |
|---|---|---|
| user_profiles | 50-100 | Low |
| user_module_permissions | 500-1000 | Low |
| user_site_permissions | 500-1000 | Low |
| cfg_mine_sites | 10-50 | Low |
| cfg_tank_capacities | 50-200 | Low |
| data_dust_levels | 10M+ | High (50K/day) |
| data_flow_meters | 100K+ | Medium (1K/day) |
| data_weather | 500K+ | Medium (500/day) |
| data_assets | 100-500 | Low |
| ops_dustloc_refills | 10K+ | Medium |
| ops_tank_corrections | 1K+ | Low |
| rpt_reports | 5K+ | Medium |
| rpt_weekly_reports | 2K+ | Low (52/year/user) |
| email_schedules | 50-200 | Low |
| email_logs | 50K+ | Medium |
| activity_logs | 1M+ | High |
Appendix D: Environment Variables
| Variable | Description | Required |
|---|---|---|
| VITE_SUPABASE_URL | Supabase project URL | Yes |
| VITE_SUPABASE_ANON_KEY | Supabase anonymous key | Yes |
| SUPABASE_SERVICE_ROLE_KEY | Service role key (server-side) | Yes |
| VITE_HEATMAP_SUPABASE_URL | Heatmap database URL | Yes |
| VITE_HEATMAP_ANON_KEY | Heatmap database key | Yes |
| VITE_GOOGLE_MAPS_API_KEY | Google Maps API key | Yes |
| DEEPSEEK_API_KEY | DeepSeek AI API key | Yes |
| VITE_APP_NAME | Application display name | No |
| VITE_APP_VERSION | Application version | No |
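Client-side variables are read through import.meta.env at build time. A small startup check such as the sketch below can fail fast when a required variable is missing; the check itself is illustrative and not part of the codebase.
// Sketch only: verify required client-side variables before the app boots.
const requiredVars = [
  'VITE_SUPABASE_URL',
  'VITE_SUPABASE_ANON_KEY',
  'VITE_GOOGLE_MAPS_API_KEY'
] as const;

for (const name of requiredVars) {
  if (!import.meta.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}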
Appendix E: API Rate Limits
| API | Rate Limit | Scope |
|---|---|---|
| Supabase REST | 1000 req/min | Per project |
| Supabase Auth | 30 req/min | Per IP |
| DeepSeek AI | 60 req/min | Per API key |
| Google Maps | 25,000 loads/day | Per API key |
| BOM Weather | No official limit | Fair use |
| Edge Functions | 500K invocations/month | Per project |
Appendix F: Performance Benchmarks
| Operation | Target | Actual (P95) |
|---|---|---|
| Login | < 2s | 1.2s |
| Dashboard load | < 3s | 2.1s |
| Dust levels query (30 days) | < 2s | 1.5s |
| PDF generation (12 pages) | < 60s | 45s |
| CSV upload (10K rows) | < 30s | 22s |
| Weekly report save | < 1s | 0.3s |
| Heatmap render | < 3s | 2.5s |
Appendix G: Change Log
| Version | Date | Changes |
|---|---|---|
| 1.0 | 2025-12-02 | Initial document |
| 2.0 | 2026-02-05 | Major update: RBAC system, tank corrections, email templates, updated tech stack, new feature modules |
Appendix H: Referenced Documents
| Document | Location | Description |
|---|---|---|
| CLAUDE.md | /CLAUDE.md | Development guidelines |
| Security Analysis | /docs/security/SECURITY_ANALYSIS.md | Security assessment |
| Test Plan | /docs/testing/TEST_PLAN.md | Testing strategy |
| API Documentation | Supabase Dashboard | Auto-generated API docs |
| Heatmap Feature | /docs/features/HEATMAP.md | Heatmap specifications |
Document Approval
| Role | Name | Date | Signature |
|---|---|---|---|
| Technical Lead | |||
| Project Manager | |||
| Security Officer | |||
| QA Lead |
End of System Design Document v2
12. Extended Technical Specifications
12.1 Business Requirements Mapping
12.1.1 Functional Requirements
| Req ID | Requirement | Priority | Module | Status |
|---|---|---|---|---|
| FR-001 | User authentication with email/password | High | Auth | Implemented |
| FR-002 | OAuth authentication (Azure, Google) | Medium | Auth | Implemented |
| FR-003 | Role-based access control (Admin/User) | High | RBAC | Implemented |
| FR-004 | Module-level permissions | High | RBAC | Implemented |
| FR-005 | Site-level permissions | High | RBAC | Implemented |
| FR-006 | CSV file upload with validation | High | Upload | Implemented |
| FR-007 | Batch data import (10K+ records) | High | Upload | Implemented |
| FR-008 | PDF report generation | High | Reports | Implemented |
| FR-009 | AI-powered chart descriptions | Medium | Reports | Implemented |
| FR-010 | Weekly field report creation | High | Weekly Reports | Implemented |
| FR-011 | Auto-save for weekly reports | Medium | Weekly Reports | Implemented |
| FR-012 | DOCX export with progress | Medium | Weekly Reports | Implemented |
| FR-013 | Real-time dust level monitoring | High | Dust Levels | Implemented |
| FR-014 | Multi-site comparison charts | Medium | Dust Levels | Implemented |
| FR-015 | Flow meter data tracking | High | Flow Meter | Implemented |
| FR-016 | Tank level visualization | Medium | Flow Meter | Implemented |
| FR-017 | Tank correction management | Medium | Flow Meter | Implemented |
| FR-018 | Tank capacity configuration | Low | Flow Meter | Implemented |
| FR-019 | Email schedule automation | Medium | Email Schedules | Implemented |
| FR-020 | Email template management | Low | Email Schedules | Implemented |
| FR-021 | Geographic heatmap visualization | Medium | Heatmap | Implemented |
| FR-022 | Asset location tracking | Medium | Assets | Implemented |
| FR-023 | Weather data integration | Low | Climate | Implemented |
| FR-024 | User group management | Low | User Mgmt | Implemented |
12.1.2 Non-Functional Requirements
| Req ID | Requirement | Target | Measurement |
|---|---|---|---|
| NFR-001 | Page load time | < 3 seconds | Lighthouse LCP |
| NFR-002 | API response time | < 500ms (P95) | Server logs |
| NFR-003 | System availability | 99.5% uptime | Monitoring |
| NFR-004 | Concurrent users | 100 users | Load testing |
| NFR-005 | Data retention | 7 years | Database policy |
| NFR-006 | Backup frequency | Daily | Automated |
| NFR-007 | Recovery time | < 4 hours | DR testing |
| NFR-008 | Security compliance | OWASP Top 10 | Security audit |
| NFR-009 | Browser support | Chrome, Firefox, Safari, Edge | Manual testing |
| NFR-010 | Mobile responsiveness | Tablet and above | Manual testing |
12.2 User Interface Design
12.2.1 Design System
Color Palette:
/* Primary Colors */
--primary-50: #eff6ff;
--primary-100: #dbeafe;
--primary-500: #3b82f6;
--primary-600: #2563eb;
--primary-700: #1d4ed8;
/* Semantic Colors */
--success: #22c55e;
--warning: #f59e0b;
--error: #ef4444;
--info: #3b82f6;
/* Neutral Colors */
--gray-50: #f9fafb;
--gray-100: #f3f4f6;
--gray-500: #6b7280;
--gray-900: #111827;
/* Dark Mode */
--dark-bg: #1f2937;
--dark-surface: #374151;
--dark-text: #f9fafb;
Typography:
/* Font Family */
--font-sans: 'Inter', system-ui, sans-serif;
--font-mono: 'JetBrains Mono', monospace;
/* Font Sizes */
--text-xs: 0.75rem; /* 12px */
--text-sm: 0.875rem; /* 14px */
--text-base: 1rem; /* 16px */
--text-lg: 1.125rem; /* 18px */
--text-xl: 1.25rem; /* 20px */
--text-2xl: 1.5rem; /* 24px */
--text-3xl: 1.875rem; /* 30px */
Spacing Scale:
--space-1: 0.25rem; /* 4px */
--space-2: 0.5rem; /* 8px */
--space-3: 0.75rem; /* 12px */
--space-4: 1rem; /* 16px */
--space-6: 1.5rem; /* 24px */
--space-8: 2rem; /* 32px */
--space-12: 3rem; /* 48px */
12.2.2 Component Hierarchy
App (Root)
│
├── Routes (React Router v7)
│ ├── PublicLayout
│ │ ├── LandingPage
│ │ ├── LoginPage
│ │ │ ├── LoginForm
│ │ │ ├── OAuthButtons
│ │ │ └── ForgotPasswordLink
│ │ └── RegisterPage
│ │
│ └── ProtectedLayout (requires auth)
│ ├── Header
│ │ ├── Logo
│ │ ├── MainNavigation
│ │ ├── ThemeToggle
│ │ ├── NotificationBell
│ │ └── UserMenu
│ │
│ ├── Sidebar
│ │ ├── NavigationMenu (filtered by permissions)
│ │ ├── QuickActions
│ │ └── CollapsibleToggle
│ │
│ └── MainContent (Outlet)
│ ├── DashboardPage
│ │ ├── KPICards (4)
│ │ ├── AssetHealthWidget
│ │ ├── DustRangerHealthWidget
│ │ └── RefillFrequencyWidget
│ │
│ ├── DustLevelsPage
│ │ ├── FilterPanel
│ │ │ ├── SiteMultiSelect
│ │ │ ├── DeviceMultiSelect
│ │ │ ├── DateRangePicker
│ │ │ └── ThresholdSlider
│ │ ├── StatisticsTable
│ │ ├── SiteSummaryCards
│ │ ├── ChartSection
│ │ │ ├── DustWeatherCorrelationChart
│ │ │ ├── MultiSiteComparisonChart
│ │ │ └── DustDistributionAnalysis
│ │ └── ExportModal
│ │
│ ├── FlowMeterPage
│ │ ├── Sidebar
│ │ │ ├── SiteCards
│ │ │ ├── DateRangeSelector
│ │ │ └── ActionButtons
│ │ ├── TankLevelDisplay (3D Cylinder)
│ │ ├── UsageChart
│ │ └── RefillHistory
│ │
│ ├── WeeklyReportsPage
│ │ ├── ReportList
│ │ ├── ReportEditor
│ │ │ ├── SectionTabs (8)
│ │ │ ├── AutoSaveIndicator
│ │ │ └── ValidationSummary
│ │ └── ExportButton (DOCX)
│ │
│ ├── ReportsPage
│ │ ├── ReportGenerator
│ │ │ ├── SiteSelector
│ │ │ ├── DeviceMultiSelect
│ │ │ ├── DateRangePicker
│ │ │ ├── TemplateSelector
│ │ │ └── GenerateButton
│ │ ├── ReportPreview
│ │ └── ReportList
│ │
│ ├── EmailSchedulesPage
│ │ ├── ScheduleList
│ │ ├── ScheduleForm
│ │ │ ├── BasicInfo
│ │ │ ├── RecipientSelector
│ │ │ ├── TemplateEditor
│ │ │ └── SnippetInserter
│ │ └── EmailLogs
│ │
│ ├── HeatmapPage
│ │ ├── GoogleMap
│ │ ├── HeatmapLayer
│ │ ├── SiteMarkers
│ │ ├── ControlPanel
│ │ └── Legend
│ │
│ ├── UserManagementPage (Admin only)
│ │ ├── UserList
│ │ ├── UserEditForm
│ │ │ ├── ProfileSection
│ │ │ ├── ModulePermissions
│ │ │ └── SitePermissions
│ │ ├── GroupList
│ │ └── GroupEditForm
│ │
│ └── SettingsPage
│ ├── GeneralSettings
│ ├── ThresholdSettings
│ └── IntegrationSettings
│
└── GlobalProviders
├── AuthProvider
├── LayoutProvider
├── ToastProvider
└── ErrorBoundary
12.2.3 Navigation Flow
┌─────────────────────────────────────────────────────────────────┐
│ Navigation Map │
└─────────────────────────────────────────────────────────────────┘
Login ──────────────────────────────────────────────────────────┐
│ │
▼ │
Dashboard ◄──────────────────────────────────────────────────────┤
│ │
├──► Dust Levels ──► Compare Export │
│ │ │
│ └──► Site Details │
│ │
├──► Flow Meter ──► Refill Management │
│ │ │
│ ├──► Correction Management │
│ │ │
│ └──► Tank Configuration │
│ │
├──► Dust Ranger ──► Device Details │
│ │
├──► Reports ──► Generate Report ──► Preview ──► Download │
│ │ │
│ └──► Report History │
│ │
├──► Weekly Reports ──► Create/Edit ──► Submit ──► Export │
│ │
├──► Email Schedules ──► Create/Edit ──► View Logs │
│ │
├──► Heatmap │
│ │
├──► Assets │
│ │
├──► Climate │
│ │
└──► Settings ──► Users (Admin) ──► Edit User │
│ │
├──► Groups (Admin) ──► Edit Group │
│ │
└──► General Settings │
12.3 Information Architecture
12.3.1 Data Classification
| Category | Examples | Sensitivity | Retention | Access |
|---|---|---|---|---|
| User Identity | Email, name, password hash | High | Account lifetime + 30 days | User + Admin |
| Environmental Data | PM10, PM2.5, temperature | Medium | 7 years (regulatory) | Based on site permissions |
| Operational Data | CSV files, upload metadata | Medium | 2 years | Based on site permissions |
| Generated Reports | PDF reports, DOCX exports | Low | Permanent (user managed) | Creator + Admin |
| Audit Logs | Activity logs, access logs | High | 3 years (compliance) | Admin only |
| Configuration | User preferences, templates | Low | Account lifetime | User + Admin |
| Weather Data | BOM observations | Low | 1 year | All authenticated users |
| Flow Meter Data | Water usage, tank levels | Medium | 2 years | Based on site permissions |
12.3.2 Data Lifecycle Management
┌─────────────────────────────────────────────────────────────────┐
│ Data Lifecycle Stages │
└─────────────────────────────────────────────────────────────────┘
1. CREATION
├─ CSV Upload: User uploads → Storage → Parse → Database
├─ Scraper: Edge Function → External API → Validate → Database
├─ Reports: Generate → PDF/DOCX → Storage → Metadata in DB
└─ Audit Logs: Automatic triggers on CRUD operations
2. ACTIVE USE (0-12 months)
├─ Full access via dashboard and reports
├─ Indexed for fast queries
├─ Cached at multiple levels
└─ Available for export
3. WARM STORAGE (12-24 months)
├─ Data remains in primary database
├─ Less frequently accessed
├─ May have slower query performance
└─ Still available for reports
4. COLD STORAGE (24+ months)
├─ Consider archival to separate storage
├─ Available on request
└─ Longer retrieval times acceptable
5. DISPOSAL
├─ User-initiated: Soft delete → 30-day grace → Hard delete
├─ Account deletion: All user data removed after grace period
├─ Regulatory retention met: Audit logs deleted after 3 years
└─ Automated cleanup jobs run weekly
12.3.3 Data Flow Diagrams
CSV Upload Data Flow:
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ User │ │ Frontend │ │ Supabase │ │PostgreSQL│
│ Browser │ │ React │ │ Storage │ │ Database │
└────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘
│ │ │ │
│ 1. Select CSV │ │ │
├──────────────>│ │ │
│ │ │ │
│ │ 2. Validate │ │
│ │ client-side│ │
│ │ │ │
│ │ 3. Upload file│ │
│ ├──────────────>│ │
│ │ │ │
│ │ 4. Storage URL│ │
│ │<──────────────┤ │
│ │ │ │
│ │ 5. Parse CSV │ │
│ │ (PapaParse)│ │
│ │ │ │
│ │ 6. Batch insert │
│ ├──────────────────────────────>│
│ │ │ │
│ │ 7. Insert result │
│ │<──────────────────────────────┤
│ │ │ │
│ 8. Success │ │ │
│<──────────────┤ │ │
Report Generation Data Flow:
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ User │ │ Frontend │ │ Edge Fn │ │PostgreSQL│
│ Browser │ │ React │ │(Optional)│ │ Database │
└────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘
│ │ │ │
│ 1. Configure │ │ │
│ report │ │ │
├──────────────>│ │ │
│ │ │ │
│ │ 2. Fetch data │ │
│ ├──────────────────────────────>│
│ │ │ │
│ │ 3. Data result│ │
│ │<──────────────────────────────┤
│ │ │ │
│ │ 4. Render charts │
│ │ (Recharts) │ │
│ │ │ │
│ │ 5. (Optional) AI description │
│ ├──────────────>│ │
│ │ │ 5a. DeepSeek │
│ │ │ API call │
│ │ 5b. Description │
│ │<──────────────┤ │
│ │ │ │
│ │ 6. html2canvas│ │
│ │ capture │ │
│ │ │ │
│ │ 7. jsPDF │ │
│ │ assembly │ │
│ │ │ │
│ 8. Download │ │ │
│ PDF │ │ │
│<──────────────┤ │ │
12.4 Internal Communications Architecture
12.4.1 Frontend-Backend Communication
Communication Protocols:
| Protocol | Use Case | Implementation |
|---|---|---|
| HTTPS REST | CRUD operations | Supabase Client SDK |
| WebSocket | Real-time updates | Supabase Realtime (selective) |
| HTTP POST | Edge Function calls | Fetch API |
Request/Response Patterns:
// Standard REST pattern
interface ApiResponse<T> {
data: T | null;
error: {
code: string;
message: string;
details?: unknown;
} | null;
count?: number;
}
// Pagination pattern
interface PaginatedRequest {
page: number;
pageSize: number;
sortBy?: string;
sortOrder?: 'asc' | 'desc';
filters?: Record<string, unknown>;
}
interface PaginatedResponse<T> {
data: T[];
pagination: {
page: number;
pageSize: number;
totalCount: number;
totalPages: number;
hasNext: boolean;
hasPrevious: boolean;
};
}
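Supabase serves this contract with range() plus an exact row count. The helper below is a sketch of the pattern against a single table; the function name and the DustLevel type are assumed.
// Sketch only: answer a PaginatedRequest using range() and count: 'exact'.
async function fetchDustLevelsPage(req: PaginatedRequest): Promise<PaginatedResponse<DustLevel>> {
  const from = (req.page - 1) * req.pageSize;
  const to = from + req.pageSize - 1;

  const { data, count, error } = await supabase
    .from('data_dust_levels')
    .select('*', { count: 'exact' })
    .order(req.sortBy ?? 'created_at', { ascending: req.sortOrder !== 'desc' })
    .range(from, to);
  if (error) throw error;

  const totalCount = count ?? 0;
  const totalPages = Math.ceil(totalCount / req.pageSize);
  return {
    data: (data ?? []) as DustLevel[],
    pagination: {
      page: req.page,
      pageSize: req.pageSize,
      totalCount,
      totalPages,
      hasNext: req.page < totalPages,
      hasPrevious: req.page > 1
    }
  };
}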
12.4.2 Service Communication
┌─────────────────────────────────────────────────────────────────┐
│ Service Communication Map │
└─────────────────────────────────────────────────────────────────┘
Frontend (React)
│
├─── REST API ──────────────────► Supabase PostgREST
│ │
│ ▼
│ PostgreSQL
│ │
│ │ RLS Policies
│ │
├─── Storage API ───────────────► Supabase Storage
│ │
│ ▼
│ S3-Compatible
│
├─── Auth API ──────────────────► Supabase Auth (GoTrue)
│ │
│ ▼
│ JWT Tokens
│
└─── Edge Functions ────────────► Deno Runtime
│
├──► DeepSeek API
├──► Dustac Scraper
├──► BOM Weather API
└──► Email Service
12.4.3 Message Formats
Supabase REST Request:
// Query with filters
const { data, error } = await supabase
.from('data_dust_levels')
.select(`
id,
site_id,
device_id,
pm2_5,
pm10,
temperature,
humidity,
created_at,
site:cfg_mine_sites(name, code)
`)
.eq('site_id', siteId)
.gte('created_at', startDate)
.lte('created_at', endDate)
.order('created_at', { ascending: false })
.range(0, 99);
// Translates to HTTP:
// GET /rest/v1/data_dust_levels?select=...&site_id=eq.{siteId}&...
// Headers: Authorization: Bearer {jwt}, apikey: {anon_key}
Edge Function Request:
// Invoke Edge Function
const { data, error } = await supabase.functions.invoke(
'generate-chart-descriptions',
{
body: {
chartType: 'pm25_trend',
data: chartData,
context: { site: 'Mine Site A', period: '2026-02-01 to 2026-02-05' }
}
}
);
// Translates to HTTP:
// POST /functions/v1/generate-chart-descriptions
// Headers: Authorization: Bearer {jwt}
// Body: JSON payload
12.5 Security Detailed Design
12.5.1 Authentication Implementation
Login Flow:
async function signIn(email: string, password: string): Promise<AuthResult> {
// 1. Validate input
const validation = loginSchema.safeParse({ email, password });
if (!validation.success) {
throw new ValidationError(validation.error);
}
// 2. Authenticate with Supabase
const { data, error } = await supabase.auth.signInWithPassword({
email,
password
});
if (error) {
// Map error codes to user-friendly messages
throw new AuthError(mapAuthError(error));
}
// 3. Check if user is active
const { data: profile } = await supabase
.from('user_profiles')
.select('is_active, role')
.eq('id', data.user.id)
.single();
if (!profile?.is_active) {
await supabase.auth.signOut();
throw new AuthError('Account is deactivated');
}
// 4. Fetch permissions
const permissions = await fetchUserPermissions(data.user.id);
// 5. Store session
return {
user: data.user,
session: data.session,
profile,
permissions
};
}
JWT Token Structure:
{
"aud": "authenticated",
"exp": 1707177600,
"iat": 1707174000,
"iss": "https://your-project.supabase.co/auth/v1",
"sub": "user-uuid-here",
"email": "user@example.com",
"role": "authenticated",
"app_metadata": {
"provider": "email"
},
"user_metadata": {
"full_name": "John Doe"
}
}
12.5.2 Authorization Implementation
RLS Policy Examples:
-- Policy: Users can only view their permitted sites' data
CREATE POLICY "site_permission_select" ON data_dust_levels
FOR SELECT USING (
-- Admin bypass
is_admin()
OR
-- Check site permission
site_id IN (
SELECT site_id FROM user_site_permissions
WHERE user_id = auth.uid() AND can_view = true
)
);
-- Policy: Users can only insert to permitted sites
CREATE POLICY "site_permission_insert" ON data_dust_levels
FOR INSERT WITH CHECK (
is_admin()
OR
site_id IN (
SELECT site_id FROM user_site_permissions
WHERE user_id = auth.uid() AND can_edit = true
)
);
-- Policy: Users can only update their own weekly reports
CREATE POLICY "own_weekly_reports" ON rpt_weekly_reports
FOR ALL USING (
is_admin() OR user_id = auth.uid()
);
-- Policy: Only admins can manage users
CREATE POLICY "admin_only_users" ON user_profiles
FOR ALL USING (is_admin());
Frontend Permission Checks:
// Permission hook
function usePermissions(module: string) {
const { isAdmin, permissions } = useAuth();
const canView = useMemo(() => {
if (isAdmin) return true;
return permissions.modules.some(
p => p.module === module && p.can_view
);
}, [isAdmin, permissions, module]);
const canEdit = useMemo(() => {
if (isAdmin) return true;
return permissions.modules.some(
p => p.module === module && p.can_edit
);
}, [isAdmin, permissions, module]);
return { canView, canEdit };
}
// Permission gate component
function PermissionGate({
module,
permission = 'view',
fallback = null,
children
}: PermissionGateProps) {
const { canView, canEdit } = usePermissions(module);
const hasPermission = permission === 'view' ? canView : canEdit;
if (!hasPermission) {
return fallback;
}
return <>{children}</>;
}
12.5.3 Security Controls
Input Sanitization:
// Zod schema for user input
const userInputSchema = z.object({
email: z.string().email().max(255),
full_name: z.string().min(1).max(100).regex(/^[\w\s\-']+$/),
site_name: z.string().min(1).max(255).regex(/^[\w\s\-]+$/),
notes: z.string().max(2000).optional()
});
// HTML sanitization for rich text
import DOMPurify from 'dompurify';
function sanitizeHtml(dirty: string): string {
return DOMPurify.sanitize(dirty, {
ALLOWED_TAGS: ['p', 'br', 'strong', 'em', 'ul', 'ol', 'li', 'a'],
ALLOWED_ATTR: ['href', 'target']
});
}
Rate Limiting:
// Client-side rate limiting
class RateLimiter {
private requests: number[] = [];
constructor(
private maxRequests: number,
private windowMs: number
) {}
canMakeRequest(): boolean {
const now = Date.now();
this.requests = this.requests.filter(t => now - t < this.windowMs);
if (this.requests.length >= this.maxRequests) {
return false;
}
this.requests.push(now);
return true;
}
}
// Usage
const aiRateLimiter = new RateLimiter(10, 60000); // 10 requests per minute
async function generateDescription(data: ChartData) {
if (!aiRateLimiter.canMakeRequest()) {
throw new Error('Rate limit exceeded. Please wait before generating more descriptions.');
}
return await supabase.functions.invoke('generate-chart-descriptions', { body: data });
}
12.6 Performance Detailed Design
12.6.1 Frontend Performance Optimization
Code Splitting Strategy:
// Route-based code splitting
const DustLevelsPage = lazy(() => import('@/features/dust-levels/DustLevelsPage'));
const FlowMeterPage = lazy(() => import('@/features/flow-meter/FlowMeterPage'));
const ReportsPage = lazy(() => import('@/features/reports/ReportsPage'));
const WeeklyReportsPage = lazy(() => import('@/features/weekly-reports/WeeklyReportsPage'));
const HeatmapPage = lazy(() => import('@/features/heatmap/HeatmapPage'));
const UserManagementPage = lazy(() => import('@/features/user-management/UserManagementPage'));
// Component-level lazy loading
const PDFViewer = lazy(() => import('@/components/PDFViewer'));
const ChartExportModal = lazy(() => import('@/components/ChartExportModal'));
Memoization Patterns:
// Expensive computation memoization
const processedData = useMemo(() => {
return rawData
.filter(d => d.pm25 > threshold)
.map(d => ({
...d,
date: formatDate(d.created_at),
status: getStatus(d.pm25)
}))
.sort((a, b) => b.pm25 - a.pm25);
}, [rawData, threshold]);
// Callback memoization
const handleFilterChange = useCallback((filters: Filters) => {
setFilters(filters);
fetchData(filters);
}, [fetchData]);
// Component memoization
const MemoizedChart = memo(function DustChart({ data, options }: ChartProps) {
return <ResponsiveLine data={data} {...options} />;
}, (prevProps, nextProps) => {
return isEqual(prevProps.data, nextProps.data);
});
Debouncing and Throttling:
// Search input debouncing
const [searchTerm, setSearchTerm] = useState('');
const debouncedSearch = useDebouncedValue(searchTerm, 300);
useEffect(() => {
if (debouncedSearch) {
performSearch(debouncedSearch);
}
}, [debouncedSearch]);
// Scroll event throttling
const throttledScroll = useThrottledCallback((e: Event) => {
updateScrollPosition(e);
}, 100);
useEffect(() => {
window.addEventListener('scroll', throttledScroll);
return () => window.removeEventListener('scroll', throttledScroll);
}, [throttledScroll]);
12.6.2 Database Performance Optimization
Index Strategy:
-- Composite indexes for common query patterns
CREATE INDEX idx_dust_levels_site_date
ON data_dust_levels(site_id, year, month, day);
CREATE INDEX idx_dust_levels_device_time
ON data_dust_levels(device_id, created_at DESC);
-- Partial indexes for filtered queries
CREATE INDEX idx_dust_levels_high_pm
ON data_dust_levels(site_id, pm10)
WHERE pm10 > 150;
CREATE INDEX idx_reports_pending
ON rpt_reports(user_id, status)
WHERE status IN ('pending', 'generating');
-- GIN indexes for JSONB columns
CREATE INDEX idx_weekly_reports_observations
ON rpt_weekly_reports USING GIN(site_observations);
-- Expression indexes
CREATE INDEX idx_dust_levels_date
ON data_dust_levels((year || '-' || month || '-' || day));
Query Optimization Examples:
-- Before: Slow query (full table scan)
SELECT * FROM data_dust_levels
WHERE site_id = 'uuid'
ORDER BY created_at DESC;
-- After: Optimized with index and limit
SELECT id, site_id, device_id, pm2_5, pm10, temperature, humidity, created_at
FROM data_dust_levels
WHERE site_id = 'uuid'
AND created_at >= NOW() - INTERVAL '30 days'
ORDER BY created_at DESC
LIMIT 100;
-- Aggregation with materialized view (for dashboard)
CREATE MATERIALIZED VIEW mv_daily_averages AS
SELECT
site_id,
date_trunc('day', created_at) AS day,
AVG(pm2_5) AS avg_pm25,
AVG(pm10) AS avg_pm10,
MAX(pm2_5) AS max_pm25,
MAX(pm10) AS max_pm10,
COUNT(*) AS reading_count
FROM data_dust_levels
WHERE created_at >= NOW() - INTERVAL '90 days'
GROUP BY site_id, date_trunc('day', created_at);
-- Refresh materialized view (scheduled)
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_daily_averages;
Connection Pooling Configuration:
# supabase/config.toml
[db.pooler]
enabled = true
port = 6543
pool_mode = "transaction"
default_pool_size = 20
max_client_conn = 100
12.6.3 Caching Strategy
Multi-Level Caching:
┌─────────────────────────────────────────────────────────────────┐
│ Caching Architecture │
└─────────────────────────────────────────────────────────────────┘
Level 1: Browser Cache
├─ Static assets: Cache-Control: max-age=31536000 (1 year)
├─ HTML: Cache-Control: no-cache (always revalidate)
└─ API responses: Not cached (dynamic, user-specific)
Level 2: CDN Cache (Cloudflare)
├─ Static assets: Cached at edge locations
├─ HTML: Cached with stale-while-revalidate
└─ API: Pass-through (not cached)
Level 3: Application Cache
├─ React state: In-memory during session
├─ LocalStorage: User preferences, recent selections
└─ SessionStorage: Temporary form data
Level 4: Database Cache
├─ PostgreSQL shared_buffers: Query result caching
├─ Prepared statements: Query plan caching
└─ Materialized views: Pre-computed aggregations
Client-Side Caching Implementation:
// LocalStorage cache with expiration
class CacheService {
private prefix = 'dustac_cache_';
set<T>(key: string, value: T, ttlMinutes: number = 60): void {
const item = {
value,
expiry: Date.now() + ttlMinutes * 60 * 1000
};
localStorage.setItem(this.prefix + key, JSON.stringify(item));
}
get<T>(key: string): T | null {
const itemStr = localStorage.getItem(this.prefix + key);
if (!itemStr) return null;
const item = JSON.parse(itemStr);
if (Date.now() > item.expiry) {
localStorage.removeItem(this.prefix + key);
return null;
}
return item.value as T;
}
invalidate(pattern: string): void {
Object.keys(localStorage)
.filter(key => key.startsWith(this.prefix + pattern))
.forEach(key => localStorage.removeItem(key));
}
}
// Usage
const cache = new CacheService();
async function fetchSites(): Promise<MineSite[]> {
// Check cache first
const cached = cache.get<MineSite[]>('sites');
if (cached) return cached;
// Fetch from API
const { data } = await supabase.from('cfg_mine_sites').select('*');
// Cache for 30 minutes
cache.set('sites', data, 30);
return data;
}
12.6.4 Performance Monitoring
Key Metrics:
// Performance tracking
interface PerformanceMetrics {
// Page load metrics
ttfb: number; // Time to First Byte
fcp: number; // First Contentful Paint
lcp: number; // Largest Contentful Paint
fid: number; // First Input Delay
cls: number; // Cumulative Layout Shift
// Application metrics
apiLatency: number; // Average API response time
renderTime: number; // Component render time
memoryUsage: number; // JS heap size
}
// Track performance
function trackPerformance(metric: string, value: number, metadata?: object) {
// Log to console in development
if (import.meta.env.DEV) {
console.log(`[Performance] ${metric}: ${value}ms`, metadata);
}
// Send to analytics in production
if (import.meta.env.PROD) {
supabase.from('performance_metrics').insert({
metric_name: metric,
value_ms: value,
metadata,
user_agent: navigator.userAgent,
url: window.location.pathname,
timestamp: new Date().toISOString()
});
}
}
// Usage
const startTime = performance.now();
await generatePDFReport(data);
trackPerformance('pdf_generation', performance.now() - startTime, {
pageCount: 12,
recordCount: data.length
});
12.7 Disaster Recovery and Business Continuity
12.7.1 Backup Strategy
| Component | Method | Frequency | Retention | RTO | RPO |
|---|---|---|---|---|---|
| Database | pg_dump automated | Daily 03:00 UTC | 30 days | 4 hours | 24 hours |
| Database | WAL archiving | Continuous | 7 days | 1 hour | 15 minutes |
| Storage | S3 versioning | On change | 90 days | 1 hour | Real-time |
| Code | Git repository | On commit | Indefinite | 30 minutes | Real-time |
| Config | Infrastructure-as-Code | On change | Version controlled | 30 minutes | Real-time |
12.7.2 Recovery Procedures
Database Recovery:
# Restore from daily backup
pg_restore -h localhost -U postgres -d dustac_restored backup_20260205.dump
# Point-in-time recovery (pg_restore has no time-target option):
# restore the latest physical base backup, then replay archived WAL
# up to the target time using recovery settings in postgresql.conf:
#   restore_command = 'cp /wal_archive/%f %p'
#   recovery_target_time = '2026-02-05 14:30:00'
Storage Recovery:
# List object versions
aws s3api list-object-versions \
--bucket dustac-storage \
--prefix csv-uploads/
# Restore specific version
aws s3api get-object \
--bucket dustac-storage \
--key csv-uploads/user123/data.csv \
--version-id abc123 \
restored-data.csv
12.7.3 High Availability Configuration
┌─────────────────────────────────────────────────────────────────┐
│ High Availability Architecture │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────┐
│ Cloudflare │
│ (Global CDN) │
└────────┬────────┘
│
┌──────────────┼──────────────┐
│ │ │
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Edge │ │ Edge │ │ Edge │
│ Node 1 │ │ Node 2 │ │ Node 3 │
│ (Sydney) │ │ (Perth) │ │(Singapore)│
└──────────┘ └──────────┘ └──────────┘
│
▼
┌─────────────────┐
│ Load Balancer │
│ (Kong) │
└────────┬────────┘
│
┌──────────────┼──────────────┐
│ │ │
▼ ▼ ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│PostgreSQL│ │ Supabase │ │ Supabase │
│ Primary │ │ Auth │ │ Storage │
└────┬─────┘ └──────────┘ └──────────┘
│
▼
┌──────────┐
│PostgreSQL│
│ Replica │
│(Standby) │
└──────────┘
12.8 Detailed Component Specifications
12.8.1 Upload Wizard Component
Props Interface:
interface UploadWizardProps {
onUploadComplete: (uploadId: string, summary: UploadSummary) => void;
onCancel: () => void;
maxFiles?: number; // default: 10
maxFileSize?: number; // default: 50MB
allowedExtensions?: string[]; // default: ['.csv']
}
State Management:
interface UploadWizardState {
currentStep: 1 | 2 | 3 | 4;
selectedFiles: FileWithPreview[];
siteConfig: {
siteId: string;
siteName: string;
uploadType: 'daily' | 'weekly' | 'monthly';
periodStart: Date;
periodEnd: Date;
};
validationResults: ValidationResult[];
uploadProgress: Map<string, UploadProgress>;
errors: UploadError[];
isProcessing: boolean;
}
interface FileWithPreview {
file: File;
preview: {
headers: string[];
rows: string[][];
rowCount: number;
};
validation: {
isValid: boolean;
errors: string[];
warnings: string[];
};
}
interface UploadProgress {
filename: string;
status: 'pending' | 'uploading' | 'processing' | 'completed' | 'failed';
progress: number;
recordsProcessed: number;
recordsTotal: number;
error?: string;
}
Step Flow:
Step 1: File Selection
├─ Drag & drop or file browser
├─ Validate file types and sizes
├─ Parse CSV headers for preview
└─ Show file list with previews
Step 2: Site Configuration
├─ Select mine site (filtered by permissions)
├─ Select upload type (daily/weekly/monthly)
├─ Set date range
└─ Validate site access
Step 3: Validation
├─ Validate CSV structure (18 columns)
├─ Validate data types and ranges
├─ Check for duplicates
└─ Display validation summary
Step 4: Upload
├─ Upload files to Supabase Storage
├─ Parse and insert records in batches
├─ Show progress per file
└─ Display final summary
12.8.2 Report Generator Component
Props Interface:
interface ReportGeneratorProps {
onReportGenerated: (report: GeneratedReport) => void;
onCancel: () => void;
preselectedSite?: string;
preselectedDateRange?: DateRange;
templates?: ReportTemplate[];
}
Generation Algorithm:
async function generateReport(config: ReportConfig): Promise<GeneratedReport> {
const { siteId, deviceIds, dateRange, template } = config;
// Phase 1: Fetch data (10%)
updateProgress(0, 'Fetching data...');
const data = await fetchMeasurements({ siteId, deviceIds, dateRange });
updateProgress(10, `Loaded ${data.length} records`);
// Phase 2: Calculate statistics (20%)
updateProgress(10, 'Calculating statistics...');
const statistics = calculateStatistics(data);
updateProgress(20, 'Statistics calculated');
// Phase 3: Generate AI descriptions (30%) - optional
if (config.includeAIDescriptions) {
updateProgress(20, 'Generating AI descriptions...');
const descriptions = await generateAIDescriptions(data, statistics);
updateProgress(30, 'AI descriptions generated');
}
// Phase 4: Render components (60%)
updateProgress(30, 'Rendering report pages...');
const components = [
<CoverPage site={site} dateRange={dateRange} />,
<SummaryPage statistics={statistics} />,
<PM25AnalysisPage data={data} />,
<PM10AnalysisPage data={data} />,
<EnvironmentalPage data={data} />,
<DataTablesPage data={data} />
];
const images: HTMLCanvasElement[] = [];
for (let i = 0; i < components.length; i++) {
const canvas = await html2canvas(renderToHidden(components[i]), {
scale: 2,
useCORS: true,
logging: false
});
images.push(canvas);
updateProgress(30 + (i + 1) * 5, `Rendered page ${i + 1} of ${components.length}`);
}
// Phase 5: Assemble PDF (80%)
updateProgress(60, 'Assembling PDF...');
const pdf = new jsPDF('portrait', 'mm', 'a4');
images.forEach((canvas, index) => {
if (index > 0) pdf.addPage();
const imgData = canvas.toDataURL('image/png');
pdf.addImage(imgData, 'PNG', 0, 0, 210, 297);
updateProgress(60 + (index + 1) * 2, `Added page ${index + 1}`);
});
// Phase 6: Save (100%)
updateProgress(80, 'Saving report...');
const blob = pdf.output('blob');
const filename = generateFilename(site, dateRange);
// Upload to storage
const { data: storageData } = await supabase.storage
.from('pdf-reports')
.upload(`${userId}/${filename}`, blob);
// Save metadata
const { data: report } = await supabase
.from('rpt_reports')
.insert({
user_id: userId,
site_id: siteId,
title: filename,
period_start: dateRange.start,
period_end: dateRange.end,
pdf_storage_path: storageData.path,
file_size_bytes: blob.size,
page_count: images.length,
status: 'completed'
})
.select()
.single();
updateProgress(100, 'Report generated successfully!');
return {
id: report.id,
filename,
storagePath: storageData.path,
fileSize: blob.size,
pageCount: images.length,
blob
};
}
12.8.3 Tank Level Display Component
Props Interface:
interface TankLevelDisplayProps {
level: number; // 0-100%
capacity: number; // Liters
tankName: string;
lastUpdated: Date;
hasCorrection: boolean;
correctionInfo?: {
correctedLevel: number;
correctedAt: Date;
correctedBy: string;
reason: string;
};
onAddCorrection?: () => void;
showAnimation?: boolean;
}
3D Cylinder Rendering:
function TankLevelDisplay({ level, capacity, tankName, ...props }: TankLevelDisplayProps) {
const fillHeight = (level / 100) * CYLINDER_HEIGHT;
const fillColor = getTankColor(level);
return (
<div className="tank-container">
<svg viewBox="0 0 200 300" className="tank-svg">
{/* Tank body (cylinder) */}
<ellipse cx="100" cy="30" rx="80" ry="20" fill="#e5e7eb" />
<rect x="20" y="30" width="160" height="220" fill="#f3f4f6" />
<ellipse cx="100" cy="250" rx="80" ry="20" fill="#d1d5db" />
{/* Liquid fill */}
<clipPath id="tankClip">
<rect x="20" y={250 - fillHeight} width="160" height={fillHeight} />
</clipPath>
<g clipPath="url(#tankClip)">
<ellipse cx="100" cy="250" rx="78" ry="18" fill={fillColor} />
<rect x="22" y="32" width="156" height="218" fill={fillColor} opacity="0.8" />
{/* Liquid animation */}
{props.showAnimation && (
<animate
attributeName="opacity"
values="0.7;0.9;0.7"
dur="2s"
repeatCount="indefinite"
/>
)}
</g>
{/* Glass highlight effect */}
<rect x="30" y="40" width="20" height="200" fill="white" opacity="0.3" rx="10" />
{/* Level indicator line */}
<line x1="180" y1={250 - fillHeight} x2="200" y2={250 - fillHeight} stroke="#374151" strokeWidth="2" />
<text x="205" y={255 - fillHeight} fontSize="12" fill="#374151">{level.toFixed(1)}%</text>
</svg>
{/* Tank info */}
<div className="tank-info">
<h3>{tankName}</h3>
<p>{(capacity * level / 100).toFixed(0)}L / {capacity}L</p>
<p className="text-sm text-gray-500">
Updated: {formatRelativeTime(props.lastUpdated)}
</p>
{props.hasCorrection && (
<Badge variant="warning">
Manually corrected
</Badge>
)}
</div>
</div>
);
}
function getTankColor(level: number): string {
if (level >= 70) return '#22c55e'; // Green
if (level >= 40) return '#f59e0b'; // Yellow
if (level >= 20) return '#f97316'; // Orange
return '#ef4444'; // Red
}
12.9 Data Conversion and Migration
12.9.1 CSV Data Format Specification
Standard CSV Structure (18 columns):
| Column | Name | Type | Required | Validation |
|---|---|---|---|---|
| 1 | time | ISO 8601 datetime | Yes | Valid date, not future |
| 2 | massconcentration_pm1p0 | Decimal | Yes | 0-10000 |
| 3 | massconcentration_pm2p5 | Decimal | Yes | 0-10000 |
| 4 | massconcentration_pm4p0 | Decimal | Yes | 0-10000 |
| 5 | massconcentration_pm10p0 | Decimal | Yes | 0-10000 |
| 6 | numberconcentration_pm0p5 | Decimal | No | >= 0 |
| 7 | numberconcentration_pm1p0 | Decimal | No | >= 0 |
| 8 | numberconcentration_pm2p5 | Decimal | No | >= 0 |
| 9 | numberconcentration_pm4p0 | Decimal | No | >= 0 |
| 10 | numberconcentration_pm10p0 | Decimal | No | >= 0 |
| 11 | typicalparticlesize | Decimal | No | >= 0 |
| 12 | temperature | Decimal | No | -50 to 80 |
| 13 | humidity | Decimal | No | 0-100 |
| 14 | latitude | Decimal | No | -90 to 90 |
| 15 | longitude | Decimal | No | -180 to 180 |
| 16 | speed | Decimal | No | >= 0 |
| 17 | loc | String | No | Max 255 chars |
| 18 | device_id | String | Yes | Non-empty |
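A minimal example of the expected file content is shown below (header row plus one data row; all values, the location, and the device ID are illustrative only):
time,massconcentration_pm1p0,massconcentration_pm2p5,massconcentration_pm4p0,massconcentration_pm10p0,numberconcentration_pm0p5,numberconcentration_pm1p0,numberconcentration_pm2p5,numberconcentration_pm4p0,numberconcentration_pm10p0,typicalparticlesize,temperature,humidity,latitude,longitude,speed,loc,device_id
2026-02-05T08:30:00Z,12.4,18.7,21.3,25.9,45.2,52.1,55.8,56.3,56.9,0.84,31.5,42.0,-22.59000000,117.18000000,0,Haul Road 3,DR-001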
CSV Parsing Implementation:
import Papa from 'papaparse';
interface ParsedCSVResult {
data: DustLevelRecord[];
errors: ParseError[];
warnings: ParseWarning[];
meta: {
totalRows: number;
validRows: number;
invalidRows: number;
duplicates: number;
};
}
async function parseCSVFile(file: File): Promise<ParsedCSVResult> {
return new Promise((resolve, reject) => {
const results: DustLevelRecord[] = [];
const errors: ParseError[] = [];
const warnings: ParseWarning[] = [];
let rowNumber = 0;
Papa.parse(file, {
header: true,
skipEmptyLines: true,
dynamicTyping: true,
step: (row, parser) => {
rowNumber++;
try {
const validated = validateRow(row.data, rowNumber);
if (validated.isValid) {
results.push(validated.record);
} else {
errors.push(...validated.errors);
}
if (validated.warnings.length > 0) {
warnings.push(...validated.warnings);
}
} catch (error) {
errors.push({
row: rowNumber,
message: `Parse error: ${error.message}`
});
}
},
complete: () => {
resolve({
data: results,
errors,
warnings,
meta: {
totalRows: rowNumber,
validRows: results.length,
invalidRows: errors.length,
duplicates: 0 // Calculated during insert
}
});
},
error: (error) => {
reject(new Error(`CSV parsing failed: ${error.message}`));
}
});
});
}
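// Illustrative usage sketch only: assumes the caller provides a File from an
// upload control and a target siteId; batchInsertRecords is defined in 12.9.2.
async function importCSV(file: File, siteId: string): Promise<BatchInsertResult | null> {
  const parsed = await parseCSVFile(file);
  console.info(
    `Parsed ${parsed.meta.validRows}/${parsed.meta.totalRows} rows ` +
    `(${parsed.errors.length} errors, ${parsed.warnings.length} warnings)`
  );
  if (parsed.data.length === 0) return null;
  return batchInsertRecords(parsed.data, siteId, (p) => {
    console.info(`Inserting batches... ${p.toFixed(0)}%`);
  });
}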
function validateRow(row: Record<string, unknown>, rowNumber: number): ValidationResult {
const errors: ParseError[] = [];
const warnings: ParseWarning[] = [];
// Required field validation
if (!row.time) {
errors.push({ row: rowNumber, field: 'time', message: 'Time is required' });
}
if (!row.device_id) {
errors.push({ row: rowNumber, field: 'device_id', message: 'Device ID is required' });
}
// Range validation
const pm25 = Number(row.massconcentration_pm2p5);
if (isNaN(pm25) || pm25 < 0 || pm25 > 10000) {
errors.push({ row: rowNumber, field: 'pm2_5', message: 'PM2.5 must be between 0 and 10000' });
}
// Warning for unusual values
const temp = Number(row.temperature);
if (!isNaN(temp) && (temp < -20 || temp > 60)) {
warnings.push({ row: rowNumber, field: 'temperature', message: `Unusual temperature: ${temp}°C` });
}
if (errors.length > 0) {
return { isValid: false, errors, warnings, record: null };
}
// Transform to database record (parse the timestamp once; preserve legitimate
// zero readings instead of coercing them to null)
const ts = new Date(row.time as string);
const toNum = (v: unknown): number | null => {
if (v === null || v === undefined || v === '') return null;
const n = Number(v);
return Number.isFinite(n) ? n : null;
};
const record: DustLevelRecord = {
device_id: String(row.device_id),
year: ts.getFullYear(),
month: ts.getMonth() + 1,
day: ts.getDate(),
hour: ts.getHours(),
minute: ts.getMinutes(),
pm1_0: toNum(row.massconcentration_pm1p0),
pm2_5: toNum(row.massconcentration_pm2p5),
pm4_0: toNum(row.massconcentration_pm4p0),
pm10: toNum(row.massconcentration_pm10p0),
temperature: toNum(row.temperature),
humidity: toNum(row.humidity),
latitude: toNum(row.latitude),
longitude: toNum(row.longitude),
speed: toNum(row.speed),
location: row.loc ? String(row.loc) : null
};
return { isValid: true, errors: [], warnings, record };
}
12.9.2 Batch Insert Strategy
async function batchInsertRecords(
records: DustLevelRecord[],
siteId: string,
onProgress: (progress: number) => void
): Promise<BatchInsertResult> {
const BATCH_SIZE = 500;
const batches = chunk(records, BATCH_SIZE);
let inserted = 0;
let duplicates = 0;
let errors: InsertError[] = [];
for (let i = 0; i < batches.length; i++) {
const batch = batches[i];
try {
const { data, error, count } = await supabase
.from('data_dust_levels')
.upsert(
batch.map(r => ({ ...r, site_id: siteId })),
{
onConflict: 'site_id,device_id,year,month,day,hour,minute',
ignoreDuplicates: true
}
)
.select('id');
if (error) throw error;
inserted += data?.length || 0;
duplicates += batch.length - (data?.length || 0);
} catch (error) {
errors.push({
batch: i,
message: error.message,
recordCount: batch.length
});
}
onProgress(((i + 1) / batches.length) * 100);
}
return {
totalRecords: records.length,
inserted,
duplicates,
errors,
success: errors.length === 0
};
}
12.10 Testing Specifications
12.10.1 Unit Testing
Test Coverage Requirements:
| Category | Target Coverage |
|---|---|
| Services | 80% lines, 75% branches |
| Hooks | 80% lines, 75% branches |
| Utilities | 90% lines, 85% branches |
| Components | 70% lines, 65% branches |
Example Service Test:
// src/features/flow-meter/services/tankCorrectionService.test.ts
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { TankCorrectionService } from './tankCorrectionService';
import { supabase } from '@/lib/supabase';
vi.mock('@/lib/supabase');
describe('TankCorrectionService', () => {
beforeEach(() => {
vi.clearAllMocks();
});
describe('getCorrections', () => {
it('should fetch corrections for a site', async () => {
const mockData = [
{ id: '1', site_id: 'site-1', tank_name: 'Tank A', corrected_level: 75 }
];
vi.mocked(supabase.from).mockReturnValue({
select: vi.fn().mockReturnValue({
eq: vi.fn().mockReturnValue({
order: vi.fn().mockResolvedValue({ data: mockData, error: null })
})
})
} as any);
const result = await TankCorrectionService.getCorrections('site-1');
expect(result).toEqual(mockData);
expect(supabase.from).toHaveBeenCalledWith('ops_tank_corrections');
});
it('should throw error on database failure', async () => {
vi.mocked(supabase.from).mockReturnValue({
select: vi.fn().mockReturnValue({
eq: vi.fn().mockReturnValue({
order: vi.fn().mockResolvedValue({
data: null,
error: { message: 'Database error' }
})
})
})
} as any);
await expect(TankCorrectionService.getCorrections('site-1'))
.rejects.toThrow('Database error');
});
});
describe('addCorrection', () => {
it('should insert a new correction', async () => {
const newCorrection = {
site_id: 'site-1',
tank_name: 'Tank A',
corrected_level: 80,
reason: 'Manual measurement'
};
const mockResult = { id: '1', ...newCorrection };
vi.mocked(supabase.from).mockReturnValue({
insert: vi.fn().mockReturnValue({
select: vi.fn().mockReturnValue({
single: vi.fn().mockResolvedValue({ data: mockResult, error: null })
})
})
} as any);
const result = await TankCorrectionService.addCorrection(newCorrection);
expect(result).toEqual(mockResult);
});
it('should validate level range', async () => {
const invalidCorrection = {
site_id: 'site-1',
tank_name: 'Tank A',
corrected_level: 150, // Invalid: > 100
reason: 'Test'
};
await expect(TankCorrectionService.addCorrection(invalidCorrection))
.rejects.toThrow('Level must be between 0 and 100');
});
});
});
12.10.2 E2E Testing
Playwright Test Example:
// e2e/flow-meter.spec.ts
import { test, expect } from '@playwright/test';
test.describe('Flow Meter Module', () => {
test.beforeEach(async ({ page }) => {
// Login
await page.goto('/login');
await page.fill('[name="email"]', 'test@dustac.com.au');
await page.fill('[name="password"]', 'TestPassword123!');
await page.click('button[type="submit"]');
await page.waitForURL('/');
});
test('should display tank levels', async ({ page }) => {
await page.goto('/flow-meter');
// Wait for data to load
await page.waitForSelector('.tank-container');
// Verify tank display
const tanks = await page.locator('.tank-container').count();
expect(tanks).toBeGreaterThan(0);
// Verify level percentage is shown
const levelText = await page.locator('.tank-level-text').first().textContent();
expect(levelText).toMatch(/\d+(\.\d+)?%/);
});
test('should add tank correction', async ({ page }) => {
await page.goto('/correction-management');
// Click add correction button
await page.click('button:has-text("Add Correction")');
// Fill form
await page.selectOption('[name="site_id"]', { index: 1 });
await page.fill('[name="tank_name"]', 'Tank A');
await page.fill('[name="corrected_level"]', '75');
await page.fill('[name="reason"]', 'Manual measurement verification');
// Submit
await page.click('button:has-text("Save Correction")');
// Verify success
await expect(page.locator('.toast-success')).toBeVisible();
await expect(page.locator('table')).toContainText('Tank A');
await expect(page.locator('table')).toContainText('75%');
});
test('should export data to CSV', async ({ page }) => {
await page.goto('/flow-meter');
// Click export button
const [download] = await Promise.all([
page.waitForEvent('download'),
page.click('button:has-text("Export CSV")')
]);
// Verify download
expect(download.suggestedFilename()).toMatch(/flow-meter.*\.csv$/);
});
});
12.10.3 Performance Testing
// performance/load-test.ts
import { check, sleep } from 'k6';
import http from 'k6/http';
export const options = {
stages: [
{ duration: '1m', target: 20 }, // Ramp up to 20 users
{ duration: '3m', target: 50 }, // Ramp up to 50 users
{ duration: '2m', target: 100 }, // Ramp up to 100 users
{ duration: '5m', target: 100 }, // Stay at 100 users
{ duration: '2m', target: 0 }, // Ramp down
],
thresholds: {
http_req_duration: ['p(95)<500'], // 95% of requests under 500ms
http_req_failed: ['rate<0.01'], // Less than 1% failure rate
},
};
export default function () {
const BASE_URL = __ENV.BASE_URL || 'https://dashboard.dustac.com.au';
// Login (the Supabase password grant expects a JSON body; a self-hosted gateway
// typically also requires the anon apikey header, omitted here for brevity)
const loginRes = http.post(
`${BASE_URL}/auth/v1/token?grant_type=password`,
JSON.stringify({ email: 'loadtest@dustac.com.au', password: 'LoadTestPassword123!' }),
{ headers: { 'Content-Type': 'application/json' } }
);
check(loginRes, {
'login successful': (r) => r.status === 200,
});
const token = loginRes.json('access_token');
const headers = { Authorization: `Bearer ${token}` };
// Fetch dashboard data
const dashboardRes = http.get(`${BASE_URL}/rest/v1/rpc/get_dashboard_summary`, { headers });
check(dashboardRes, {
'dashboard loads': (r) => r.status === 200,
'dashboard fast': (r) => r.timings.duration < 500,
});
sleep(1);
// Fetch dust levels
const dustRes = http.get(
`${BASE_URL}/rest/v1/data_dust_levels?select=*&limit=100&order=created_at.desc`,
{ headers }
);
check(dustRes, {
'dust levels load': (r) => r.status === 200,
'dust levels fast': (r) => r.timings.duration < 500,
});
sleep(2);
}
13. Conclusion
This System Design Document v2 provides a comprehensive technical specification for the Dustac Environmental Monitoring Dashboard. The document covers:
- Complete System Architecture - From high-level logical views to detailed component specifications
- Database Design - Full schema with 20+ tables, RLS policies, and optimization strategies
- 16 Feature Modules - Each with components, services, hooks, and types
- RBAC System - Role-based access control with module and site-level permissions
- Security Implementation - Authentication, authorization, and data protection
- Performance Engineering - Optimization strategies, caching, and monitoring
- Operational Scenarios - Detailed workflows for key system functions
- External Integrations - AI (DeepSeek), scrapers, weather API, Google Maps
- Testing Specifications - Unit, E2E, and performance testing approaches
- Disaster Recovery - Backup, recovery, and high availability configurations
The system is designed to be:
- Secure - Defense in depth with RLS, JWT, and comprehensive audit logging
- Scalable - Horizontal scaling support with optimized database queries
- Maintainable - Feature-driven architecture with clear separation of concerns
- Performant - Multi-level caching, code splitting, and optimized rendering
Document Version: 2.0 | Last Updated: February 5, 2026 | Total Pages: ~120
14. Extended Database Specifications
14.1 PostgreSQL Configuration
Database Server Configuration (Self-Hosted Supabase):
| Parameter | Value | Purpose | Impact |
|---|---|---|---|
| max_connections | 60 (PgBouncer pool) | Concurrent connection limit | Prevents connection exhaustion; scales to 100 concurrent users |
| shared_buffers | ~4GB (25% RAM) | Database cache size | Frequently accessed data cached in memory |
| effective_cache_size | ~12GB (75% RAM) | Query planner assumption | Optimizer prefers index scans when data likely cached |
| work_mem | 64MB | Per-operation memory | Sufficient for sorting 50K-row result sets |
| maintenance_work_mem | 1GB | VACUUM, INDEX creation | Faster index builds and maintenance operations |
| checkpoint_timeout | 15min | WAL checkpoint frequency | Balance durability vs. write performance |
| wal_buffers | 32MB | Write-ahead log buffer | Reduces I/O for transaction logging |
| random_page_cost | 1.1 | SSD cost factor | Encourages index usage on SSD storage |
| effective_io_concurrency | 200 | Concurrent I/O operations | Optimized for SSD storage |
| default_statistics_target | 100 | Statistics sampling | Better query plans for complex queries |
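These targets can be applied with ALTER SYSTEM, as sketched below (a sketch only: the memory figures assume the ~16 GB host implied by the table, shared_buffers and wal_buffers take effect only after a server restart, and the remaining parameters are picked up by pg_reload_conf()).
-- Apply tuned parameters (written to postgresql.auto.conf)
ALTER SYSTEM SET shared_buffers = '4GB';            -- restart required
ALTER SYSTEM SET effective_cache_size = '12GB';
ALTER SYSTEM SET work_mem = '64MB';
ALTER SYSTEM SET maintenance_work_mem = '1GB';
ALTER SYSTEM SET checkpoint_timeout = '15min';
ALTER SYSTEM SET wal_buffers = '32MB';              -- restart required
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET effective_io_concurrency = 200;
ALTER SYSTEM SET default_statistics_target = 100;
SELECT pg_reload_conf();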
Connection Pooling Configuration (PgBouncer):
[pgbouncer]
pool_mode = transaction
max_client_conn = 200
default_pool_size = 20
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
max_db_connections = 60
max_user_connections = 60
server_idle_timeout = 600
server_lifetime = 3600
server_connect_timeout = 15
query_timeout = 300
query_wait_timeout = 120
client_idle_timeout = 0
14.2 Complete Table Schemas
14.2.1 User Management Tables (Detailed)
user_profiles (Complete Schema):
CREATE TABLE user_profiles (
-- Primary Key
id UUID PRIMARY KEY REFERENCES auth.users(id) ON DELETE CASCADE,
-- Identity
email TEXT NOT NULL,
full_name TEXT,
avatar_url TEXT,
-- Authorization
role TEXT NOT NULL DEFAULT 'user' CHECK (role IN ('admin', 'user')),
is_active BOOLEAN NOT NULL DEFAULT true,
-- Preferences
preferences JSONB DEFAULT '{
"theme": "system",
"notifications": true,
"defaultSite": null,
"timezone": "Australia/Perth"
}'::jsonb,
-- Metadata
last_login_at TIMESTAMPTZ,
login_count INTEGER DEFAULT 0,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
CONSTRAINT email_format CHECK (email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')
);
-- Indexes
CREATE INDEX idx_user_profiles_email ON user_profiles(email);
CREATE INDEX idx_user_profiles_role ON user_profiles(role);
CREATE INDEX idx_user_profiles_active ON user_profiles(is_active) WHERE is_active = true;
-- Trigger for updated_at
CREATE TRIGGER update_user_profiles_updated_at
BEFORE UPDATE ON user_profiles
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
user_module_permissions (Complete Schema):
CREATE TABLE user_module_permissions (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
user_id UUID NOT NULL REFERENCES user_profiles(id) ON DELETE CASCADE,
-- Permission Data
module TEXT NOT NULL CHECK (module IN (
'dashboard', 'dust_levels', 'flow_meter', 'dust_ranger',
'reports', 'weekly_reports', 'email_schedules', 'heatmap',
'assets', 'climate', 'settings', 'user_management', 'upload'
)),
can_view BOOLEAN NOT NULL DEFAULT false,
can_edit BOOLEAN NOT NULL DEFAULT false,
-- Metadata
granted_by UUID REFERENCES user_profiles(id),
granted_at TIMESTAMPTZ DEFAULT now(),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
UNIQUE(user_id, module)
);
-- Indexes
CREATE INDEX idx_user_module_permissions_user ON user_module_permissions(user_id);
CREATE INDEX idx_user_module_permissions_module ON user_module_permissions(module);
CREATE INDEX idx_user_module_permissions_view ON user_module_permissions(user_id, module) WHERE can_view = true;
user_site_permissions (Complete Schema):
CREATE TABLE user_site_permissions (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
user_id UUID NOT NULL REFERENCES user_profiles(id) ON DELETE CASCADE,
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id) ON DELETE CASCADE,
-- Permission Data
can_view BOOLEAN NOT NULL DEFAULT false,
can_edit BOOLEAN NOT NULL DEFAULT false,
-- Metadata
granted_by UUID REFERENCES user_profiles(id),
granted_at TIMESTAMPTZ DEFAULT now(),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
UNIQUE(user_id, site_id)
);
-- Indexes
CREATE INDEX idx_user_site_permissions_user ON user_site_permissions(user_id);
CREATE INDEX idx_user_site_permissions_site ON user_site_permissions(site_id);
CREATE INDEX idx_user_site_permissions_view ON user_site_permissions(user_id) WHERE can_view = true;
user_groups (Complete Schema):
CREATE TABLE user_groups (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Group Data
name TEXT NOT NULL UNIQUE,
description TEXT,
color TEXT DEFAULT '#3b82f6',
-- Default Permissions (applied to new members)
default_module_permissions JSONB DEFAULT '[]'::jsonb,
default_site_permissions JSONB DEFAULT '[]'::jsonb,
-- Metadata
is_active BOOLEAN NOT NULL DEFAULT true,
created_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_user_groups_name ON user_groups(name);
CREATE INDEX idx_user_groups_active ON user_groups(is_active) WHERE is_active = true;
user_group_members (Complete Schema):
CREATE TABLE user_group_members (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
group_id UUID NOT NULL REFERENCES user_groups(id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES user_profiles(id) ON DELETE CASCADE,
-- Metadata
added_by UUID REFERENCES user_profiles(id),
added_at TIMESTAMPTZ DEFAULT now(),
created_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
UNIQUE(group_id, user_id)
);
-- Indexes
CREATE INDEX idx_user_group_members_group ON user_group_members(group_id);
CREATE INDEX idx_user_group_members_user ON user_group_members(user_id);
14.2.2 Configuration Tables (Detailed)
cfg_mine_sites (Complete Schema):
CREATE TABLE cfg_mine_sites (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Site Identity
name TEXT NOT NULL UNIQUE,
code TEXT NOT NULL UNIQUE,
description TEXT,
-- Location
address TEXT,
city TEXT,
state TEXT DEFAULT 'WA',
country TEXT DEFAULT 'Australia',
latitude DECIMAL(10, 8),
longitude DECIMAL(11, 8),
-- Configuration
timezone TEXT DEFAULT 'Australia/Perth',
weather_station_id TEXT,
-- Thresholds
pm25_warning_threshold DECIMAL(6, 2) DEFAULT 35.0,
pm25_critical_threshold DECIMAL(6, 2) DEFAULT 55.0,
pm10_warning_threshold DECIMAL(6, 2) DEFAULT 150.0,
pm10_critical_threshold DECIMAL(6, 2) DEFAULT 250.0,
-- Metadata
is_active BOOLEAN NOT NULL DEFAULT true,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_mine_sites_code ON cfg_mine_sites(code);
CREATE INDEX idx_mine_sites_active ON cfg_mine_sites(is_active) WHERE is_active = true;
CREATE INDEX idx_mine_sites_location ON cfg_mine_sites(latitude, longitude) WHERE latitude IS NOT NULL;
cfg_app_settings (Complete Schema):
CREATE TABLE cfg_app_settings (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Setting Data
key TEXT NOT NULL UNIQUE,
value JSONB NOT NULL,
description TEXT,
category TEXT DEFAULT 'general',
-- Validation
value_type TEXT DEFAULT 'string' CHECK (value_type IN ('string', 'number', 'boolean', 'array', 'object')),
validation_schema JSONB,
-- Metadata
is_system BOOLEAN DEFAULT false,
updated_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_app_settings_key ON cfg_app_settings(key);
CREATE INDEX idx_app_settings_category ON cfg_app_settings(category);
-- Default Settings
INSERT INTO cfg_app_settings (key, value, description, category, is_system) VALUES
('ai_model', '"deepseek-chat"', 'AI model for chart descriptions', 'ai', true),
('ai_enabled', 'true', 'Enable AI-powered features', 'ai', false),
('default_date_range_days', '30', 'Default date range for queries', 'display', false),
('max_export_records', '100000', 'Maximum records per export', 'limits', true),
('session_timeout_minutes', '60', 'Session timeout duration', 'security', true),
('maintenance_mode', 'false', 'Enable maintenance mode', 'system', true);
cfg_tank_capacities (Complete Schema):
CREATE TABLE cfg_tank_capacities (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id) ON DELETE CASCADE,
-- Tank Data
tank_name TEXT NOT NULL,
tank_type TEXT DEFAULT 'water' CHECK (tank_type IN ('water', 'fuel', 'chemical', 'other')),
capacity_liters DECIMAL(12, 2) NOT NULL CHECK (capacity_liters > 0),
-- Thresholds
low_level_threshold DECIMAL(5, 2) DEFAULT 20.0 CHECK (low_level_threshold >= 0 AND low_level_threshold <= 100),
critical_level_threshold DECIMAL(5, 2) DEFAULT 10.0 CHECK (critical_level_threshold >= 0 AND critical_level_threshold <= 100),
-- Metadata
is_active BOOLEAN NOT NULL DEFAULT true,
notes TEXT,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
UNIQUE(site_id, tank_name),
CHECK (critical_level_threshold <= low_level_threshold)
);
-- Indexes
CREATE INDEX idx_tank_capacities_site ON cfg_tank_capacities(site_id);
CREATE INDEX idx_tank_capacities_active ON cfg_tank_capacities(site_id) WHERE is_active = true;
14.2.3 Data Tables (Detailed)
data_dust_levels (Complete Schema):
CREATE TABLE data_dust_levels (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id) ON DELETE RESTRICT,
-- Device Identification
device_id TEXT NOT NULL,
device_name TEXT,
-- Time Components (for efficient querying without timezone issues)
year INTEGER NOT NULL CHECK (year >= 2020 AND year <= 2100),
month INTEGER NOT NULL CHECK (month >= 1 AND month <= 12),
day INTEGER NOT NULL CHECK (day >= 1 AND day <= 31),
hour INTEGER NOT NULL CHECK (hour >= 0 AND hour <= 23),
minute INTEGER NOT NULL CHECK (minute >= 0 AND minute <= 59),
-- Mass Concentrations (μg/m³)
pm1_0 DECIMAL(10, 4) CHECK (pm1_0 IS NULL OR (pm1_0 >= 0 AND pm1_0 <= 10000)),
pm2_5 DECIMAL(10, 4) CHECK (pm2_5 IS NULL OR (pm2_5 >= 0 AND pm2_5 <= 10000)),
pm4_0 DECIMAL(10, 4) CHECK (pm4_0 IS NULL OR (pm4_0 >= 0 AND pm4_0 <= 10000)),
pm10 DECIMAL(10, 4) CHECK (pm10 IS NULL OR (pm10 >= 0 AND pm10 <= 10000)),
-- Number Concentrations (#/cm³)
nc_pm0_5 DECIMAL(12, 4),
nc_pm1_0 DECIMAL(12, 4),
nc_pm2_5 DECIMAL(12, 4),
nc_pm4_0 DECIMAL(12, 4),
nc_pm10 DECIMAL(12, 4),
-- Particle Size
typical_particle_size DECIMAL(8, 4),
-- Environmental Conditions
temperature DECIMAL(6, 2) CHECK (temperature IS NULL OR (temperature >= -50 AND temperature <= 80)),
humidity DECIMAL(5, 2) CHECK (humidity IS NULL OR (humidity >= 0 AND humidity <= 100)),
-- Location
latitude DECIMAL(10, 8) CHECK (latitude IS NULL OR (latitude >= -90 AND latitude <= 90)),
longitude DECIMAL(11, 8) CHECK (longitude IS NULL OR (longitude >= -180 AND longitude <= 180)),
speed DECIMAL(8, 2) CHECK (speed IS NULL OR speed >= 0),
location TEXT,
-- Metadata
data_quality TEXT DEFAULT 'normal' CHECK (data_quality IN ('normal', 'suspect', 'invalid')),
created_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
UNIQUE(site_id, device_id, year, month, day, hour, minute)
);
-- Performance Indexes
CREATE INDEX idx_dust_levels_site_date ON data_dust_levels(site_id, year, month, day);
CREATE INDEX idx_dust_levels_site_datetime ON data_dust_levels(site_id, year, month, day, hour);
CREATE INDEX idx_dust_levels_device ON data_dust_levels(device_id);
CREATE INDEX idx_dust_levels_device_date ON data_dust_levels(device_id, year, month, day);
CREATE INDEX idx_dust_levels_created ON data_dust_levels(created_at DESC);
-- Partial indexes for high PM values (alerts)
CREATE INDEX idx_dust_levels_high_pm25 ON data_dust_levels(site_id, year, month, day)
WHERE pm2_5 > 35;
CREATE INDEX idx_dust_levels_high_pm10 ON data_dust_levels(site_id, year, month, day)
WHERE pm10 > 150;
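-- Illustrative query shape served by idx_dust_levels_site_date (the site UUID
-- and the date range below are placeholders only):
-- SELECT device_id,
--        ROUND(AVG(pm2_5)::numeric, 2) AS avg_pm25,
--        ROUND(AVG(pm10)::numeric, 2) AS avg_pm10
-- FROM data_dust_levels
-- WHERE site_id = '00000000-0000-0000-0000-000000000000'
--   AND year = 2026 AND month = 2 AND day BETWEEN 1 AND 5
-- GROUP BY device_id;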
-- Statistics for query optimization
ALTER TABLE data_dust_levels ALTER COLUMN site_id SET STATISTICS 1000;
ALTER TABLE data_dust_levels ALTER COLUMN device_id SET STATISTICS 500;
data_flow_meters (Complete Schema):
CREATE TABLE data_flow_meters (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id) ON DELETE RESTRICT,
asset_id UUID REFERENCES data_assets(id) ON DELETE SET NULL,
-- Meter Identification
meter_name TEXT NOT NULL,
meter_serial TEXT,
-- Reading Data
reading_date DATE NOT NULL,
reading_time TIME,
reading_timestamp TIMESTAMPTZ GENERATED ALWAYS AS (
(reading_date + COALESCE(reading_time, '00:00:00'::TIME))::TIMESTAMPTZ
) STORED,
-- Volume Data
total_volume DECIMAL(12, 2) CHECK (total_volume IS NULL OR total_volume >= 0),
daily_usage DECIMAL(10, 2) CHECK (daily_usage IS NULL OR daily_usage >= 0),
flow_rate DECIMAL(8, 2) CHECK (flow_rate IS NULL OR flow_rate >= 0),
-- Tank Status
tank_level_percent DECIMAL(5, 2) CHECK (tank_level_percent IS NULL OR (tank_level_percent >= 0 AND tank_level_percent <= 100)),
tank_volume_liters DECIMAL(12, 2),
-- Status
meter_status TEXT DEFAULT 'online' CHECK (meter_status IN ('online', 'offline', 'error', 'maintenance')),
battery_level DECIMAL(5, 2) CHECK (battery_level IS NULL OR (battery_level >= 0 AND battery_level <= 100)),
-- Metadata
notes TEXT,
data_source TEXT DEFAULT 'scraper' CHECK (data_source IN ('scraper', 'manual', 'import')),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
UNIQUE(site_id, meter_name, reading_date)
);
-- Indexes
CREATE INDEX idx_flow_meters_site ON data_flow_meters(site_id);
CREATE INDEX idx_flow_meters_site_date ON data_flow_meters(site_id, reading_date DESC);
CREATE INDEX idx_flow_meters_meter ON data_flow_meters(meter_name);
CREATE INDEX idx_flow_meters_asset ON data_flow_meters(asset_id) WHERE asset_id IS NOT NULL;
CREATE INDEX idx_flow_meters_timestamp ON data_flow_meters(reading_timestamp DESC);
data_weather (Complete Schema):
CREATE TABLE data_weather (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Station Identification
station_id TEXT NOT NULL,
station_name TEXT,
-- Observation Time
observation_time TIMESTAMPTZ NOT NULL,
observation_date DATE GENERATED ALWAYS AS (observation_time::DATE) STORED,
-- Temperature (°C)
temperature DECIMAL(5, 2),
temperature_max DECIMAL(5, 2),
temperature_min DECIMAL(5, 2),
apparent_temperature DECIMAL(5, 2),
dew_point DECIMAL(5, 2),
-- Humidity (%)
humidity DECIMAL(5, 2) CHECK (humidity IS NULL OR (humidity >= 0 AND humidity <= 100)),
-- Wind
wind_speed DECIMAL(6, 2) CHECK (wind_speed IS NULL OR wind_speed >= 0),
wind_gust DECIMAL(6, 2) CHECK (wind_gust IS NULL OR wind_gust >= 0),
wind_direction TEXT,
wind_direction_degrees INTEGER CHECK (wind_direction_degrees IS NULL OR (wind_direction_degrees >= 0 AND wind_direction_degrees <= 360)),
-- Precipitation
rainfall DECIMAL(8, 2) CHECK (rainfall IS NULL OR rainfall >= 0),
rainfall_24h DECIMAL(8, 2),
-- Pressure (hPa)
pressure DECIMAL(7, 2),
pressure_msl DECIMAL(7, 2),
-- Visibility and Cloud
visibility DECIMAL(8, 2),
cloud_cover TEXT,
cloud_base_height DECIMAL(8, 2),
-- Metadata
data_source TEXT DEFAULT 'bom' CHECK (data_source IN ('bom', 'manual', 'api')),
created_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
UNIQUE(station_id, observation_time)
);
-- Indexes
CREATE INDEX idx_weather_station ON data_weather(station_id);
CREATE INDEX idx_weather_station_time ON data_weather(station_id, observation_time DESC);
CREATE INDEX idx_weather_date ON data_weather(observation_date DESC);
data_assets (Complete Schema):
CREATE TABLE data_assets (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id) ON DELETE RESTRICT,
-- Asset Identification
asset_type TEXT NOT NULL CHECK (asset_type IN (
'dust_ranger', 'flow_meter', 'weather_station',
'water_truck', 'pump', 'tank', 'other'
)),
asset_name TEXT NOT NULL,
asset_code TEXT,
serial_number TEXT,
manufacturer TEXT,
model TEXT,
-- Location
latitude DECIMAL(10, 8),
longitude DECIMAL(11, 8),
location_description TEXT,
-- Status
status TEXT DEFAULT 'active' CHECK (status IN ('active', 'inactive', 'maintenance', 'retired')),
last_seen TIMESTAMPTZ,
last_maintenance TIMESTAMPTZ,
next_maintenance TIMESTAMPTZ,
-- Configuration
configuration JSONB DEFAULT '{}'::jsonb,
-- Metadata
notes TEXT,
metadata JSONB DEFAULT '{}'::jsonb,
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
UNIQUE(site_id, asset_code)
);
-- Indexes
CREATE INDEX idx_assets_site ON data_assets(site_id);
CREATE INDEX idx_assets_type ON data_assets(asset_type);
CREATE INDEX idx_assets_status ON data_assets(status);
CREATE INDEX idx_assets_site_type ON data_assets(site_id, asset_type);
CREATE INDEX idx_assets_location ON data_assets(latitude, longitude) WHERE latitude IS NOT NULL;
14.2.4 Operations Tables (Detailed)
ops_dustloc_refills (Complete Schema):
CREATE TABLE ops_dustloc_refills (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id) ON DELETE RESTRICT,
asset_id UUID REFERENCES data_assets(id) ON DELETE SET NULL,
-- Refill Data
refill_date DATE NOT NULL,
refill_time TIME,
volume_liters DECIMAL(10, 2) NOT NULL CHECK (volume_liters > 0),
-- Tank Status
level_before DECIMAL(5, 2) CHECK (level_before IS NULL OR (level_before >= 0 AND level_before <= 100)),
level_after DECIMAL(5, 2) CHECK (level_after IS NULL OR (level_after >= 0 AND level_after <= 100)),
-- Source
water_source TEXT,
truck_id TEXT,
-- Metadata
notes TEXT,
created_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
CHECK (level_after IS NULL OR level_before IS NULL OR level_after >= level_before)
);
-- Indexes
CREATE INDEX idx_refills_site ON ops_dustloc_refills(site_id);
CREATE INDEX idx_refills_site_date ON ops_dustloc_refills(site_id, refill_date DESC);
CREATE INDEX idx_refills_asset ON ops_dustloc_refills(asset_id) WHERE asset_id IS NOT NULL;
CREATE INDEX idx_refills_created_by ON ops_dustloc_refills(created_by);
ops_tank_corrections (Complete Schema):
CREATE TABLE ops_tank_corrections (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
site_id UUID NOT NULL REFERENCES cfg_mine_sites(id) ON DELETE RESTRICT,
tank_capacity_id UUID REFERENCES cfg_tank_capacities(id) ON DELETE SET NULL,
-- Correction Data
tank_name TEXT NOT NULL,
correction_date DATE NOT NULL,
correction_time TIME DEFAULT CURRENT_TIME,
-- Level Data
previous_level DECIMAL(5, 2) CHECK (previous_level IS NULL OR (previous_level >= 0 AND previous_level <= 100)),
corrected_level DECIMAL(5, 2) NOT NULL CHECK (corrected_level >= 0 AND corrected_level <= 100),
level_difference DECIMAL(6, 2) GENERATED ALWAYS AS (corrected_level - COALESCE(previous_level, 0)) STORED,
-- Reason
reason TEXT,
correction_type TEXT DEFAULT 'manual' CHECK (correction_type IN ('manual', 'calibration', 'sensor_error', 'other')),
-- Verification
verified_by UUID REFERENCES user_profiles(id),
verified_at TIMESTAMPTZ,
-- Metadata
notes TEXT,
created_by UUID NOT NULL REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_corrections_site ON ops_tank_corrections(site_id);
CREATE INDEX idx_corrections_site_date ON ops_tank_corrections(site_id, correction_date DESC);
CREATE INDEX idx_corrections_tank ON ops_tank_corrections(tank_name);
CREATE INDEX idx_corrections_created_by ON ops_tank_corrections(created_by);
CREATE INDEX idx_corrections_created ON ops_tank_corrections(created_at DESC);
14.2.5 Report Tables (Detailed)
rpt_reports (Complete Schema):
CREATE TABLE rpt_reports (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
user_id UUID NOT NULL REFERENCES user_profiles(id) ON DELETE CASCADE,
site_id UUID REFERENCES cfg_mine_sites(id) ON DELETE SET NULL,
-- Report Identity
report_type TEXT NOT NULL CHECK (report_type IN ('daily', 'weekly', 'monthly', 'custom', 'comparison')),
title TEXT NOT NULL,
description TEXT,
-- Report Period
period_start DATE,
period_end DATE,
-- Generation Parameters
parameters JSONB DEFAULT '{}'::jsonb,
devices JSONB DEFAULT '[]'::jsonb,
chart_types JSONB DEFAULT '[]'::jsonb,
include_ai_descriptions BOOLEAN DEFAULT false,
-- File Information
pdf_storage_path TEXT,
file_size_bytes INTEGER CHECK (file_size_bytes IS NULL OR file_size_bytes > 0),
page_count INTEGER CHECK (page_count IS NULL OR page_count > 0),
-- Generation Status
status TEXT NOT NULL DEFAULT 'pending' CHECK (status IN (
'pending', 'generating', 'completed', 'failed', 'cancelled'
)),
error_message TEXT,
generation_started_at TIMESTAMPTZ,
generation_completed_at TIMESTAMPTZ,
generation_duration_ms INTEGER,
-- Version Control
version INTEGER DEFAULT 1,
parent_report_id UUID REFERENCES rpt_reports(id),
-- Access Tracking
view_count INTEGER DEFAULT 0,
download_count INTEGER DEFAULT 0,
last_viewed_at TIMESTAMPTZ,
last_downloaded_at TIMESTAMPTZ,
-- Metadata
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
CHECK (period_end IS NULL OR period_start IS NULL OR period_end >= period_start)
);
-- Indexes
CREATE INDEX idx_reports_user ON rpt_reports(user_id);
CREATE INDEX idx_reports_user_status ON rpt_reports(user_id, status);
CREATE INDEX idx_reports_site ON rpt_reports(site_id) WHERE site_id IS NOT NULL;
CREATE INDEX idx_reports_type ON rpt_reports(report_type);
CREATE INDEX idx_reports_created ON rpt_reports(created_at DESC);
CREATE INDEX idx_reports_period ON rpt_reports(period_start, period_end);
-- Trigger for updated_at
CREATE TRIGGER update_reports_updated_at
BEFORE UPDATE ON rpt_reports
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
rpt_weekly_reports (Complete Schema):
CREATE TABLE rpt_weekly_reports (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
user_id UUID NOT NULL REFERENCES user_profiles(id) ON DELETE CASCADE,
-- Report Period
period_start DATE NOT NULL,
period_end DATE NOT NULL,
week_number INTEGER GENERATED ALWAYS AS (EXTRACT(WEEK FROM period_end)) STORED,
year INTEGER GENERATED ALWAYS AS (EXTRACT(YEAR FROM period_end)) STORED,
-- Status
status TEXT NOT NULL DEFAULT 'draft' CHECK (status IN ('draft', 'submitted', 'approved', 'rejected')),
submitted_at TIMESTAMPTZ,
approved_by UUID REFERENCES user_profiles(id),
approved_at TIMESTAMPTZ,
-- Section 1: Site Observations
site_observations JSONB DEFAULT '[]'::jsonb,
/*
[
{
"id": "uuid",
"site_id": "uuid",
"ranger_id": "string",
"asset_location": "string",
"status": "operational|issue|offline",
"issue_description": "string",
"action_taken": "string",
"photos": ["url1", "url2"]
}
]
*/
-- Section 2: Flow Meter Usage
flow_meter_usage JSONB DEFAULT '[]'::jsonb,
/*
[
{
"id": "uuid",
"site_id": "uuid",
"start_date": "date",
"end_date": "date",
"volume_used": number,
"notes": "string"
}
]
*/
-- Section 3: Dashboard Updates
dashboard_updates JSONB DEFAULT '[]'::jsonb,
/*
[
{
"id": "uuid",
"category": "data|feature|bug|other",
"description": "string",
"completed": boolean
}
]
*/
-- Section 4: Vendor Activities
vendor_activities JSONB DEFAULT '[]'::jsonb,
/*
[
{
"id": "uuid",
"vendor_name": "string",
"activity_type": "maintenance|installation|repair|other",
"description": "string",
"date": "date"
}
]
*/
-- Section 5: Water Truck Testing
water_truck_testing JSONB DEFAULT '{}'::jsonb,
/*
{
"sites_tested": ["site1", "site2"],
"hardware_checked": ["item1", "item2"],
"test_summary": "string",
"issues_found": "string",
"recommendations": "string"
}
*/
-- Section 6: Hardware & Installations
hardware_installations JSONB DEFAULT '[]'::jsonb,
/*
[
{
"id": "uuid",
"type": "installation|removal|upgrade|repair",
"location": "string",
"equipment": "string",
"details": "string",
"date": "date"
}
]
*/
-- Section 7: Admin & Reporting
admin_reporting JSONB DEFAULT '[]'::jsonb,
/*
[
{
"id": "uuid",
"category": "travel|meeting|report|training|other",
"description": "string",
"hours": number,
"date": "date"
}
]
*/
-- Section 8: Other Tasks
other_tasks JSONB DEFAULT '{}'::jsonb,
/*
{
"site_support": [{ "description": "string", "hours": number }],
"stakeholder_engagement": [{ "description": "string", "contact": "string" }],
"internal_tasks": [{ "description": "string", "status": "string" }]
}
*/
-- Summary
summary_notes TEXT,
next_week_priorities TEXT,
-- Export Tracking
docx_exported_at TIMESTAMPTZ,
docx_export_count INTEGER DEFAULT 0,
-- Metadata
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now(),
-- Constraints
UNIQUE(user_id, period_end),
CHECK (period_end > period_start),
CHECK (period_end - period_start <= 7)
);
-- Indexes
CREATE INDEX idx_weekly_reports_user ON rpt_weekly_reports(user_id);
CREATE INDEX idx_weekly_reports_user_status ON rpt_weekly_reports(user_id, status);
CREATE INDEX idx_weekly_reports_period ON rpt_weekly_reports(period_end DESC);
CREATE INDEX idx_weekly_reports_year_week ON rpt_weekly_reports(year, week_number);
-- GIN indexes for JSONB searching
CREATE INDEX idx_weekly_reports_observations_gin ON rpt_weekly_reports USING GIN(site_observations);
CREATE INDEX idx_weekly_reports_flow_meter_gin ON rpt_weekly_reports USING GIN(flow_meter_usage);
-- Trigger for updated_at
CREATE TRIGGER update_weekly_reports_updated_at
BEFORE UPDATE ON rpt_weekly_reports
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
rpt_report_templates (Complete Schema):
CREATE TABLE rpt_report_templates (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Template Identity
name TEXT NOT NULL UNIQUE,
description TEXT,
template_type TEXT NOT NULL CHECK (template_type IN ('pdf', 'docx', 'email')),
-- Template Content
content JSONB NOT NULL DEFAULT '{}'::jsonb,
/*
{
"sections": [
{ "type": "cover", "enabled": true },
{ "type": "summary", "enabled": true },
{ "type": "chart", "chartType": "pm25_trend", "enabled": true },
{ "type": "table", "tableType": "daily_averages", "enabled": true }
],
"styling": {
"primaryColor": "#3b82f6",
"fontFamily": "Inter",
"logoUrl": "url"
},
"options": {
"includeAI": true,
"pageNumbers": true,
"confidentialityNotice": true
}
}
*/
-- Preview
preview_image_url TEXT,
-- Usage
is_default BOOLEAN DEFAULT false,
is_system BOOLEAN DEFAULT false,
usage_count INTEGER DEFAULT 0,
-- Metadata
created_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_report_templates_type ON rpt_report_templates(template_type);
CREATE INDEX idx_report_templates_default ON rpt_report_templates(is_default) WHERE is_default = true;
14.2.6 Email System Tables (Detailed)
email_schedules (Complete Schema):
CREATE TABLE email_schedules (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Schedule Identity
name TEXT NOT NULL,
description TEXT,
-- Schedule Configuration
schedule_type TEXT NOT NULL CHECK (schedule_type IN ('once', 'daily', 'weekly', 'monthly', 'custom')),
cron_expression TEXT,
timezone TEXT DEFAULT 'Australia/Perth',
-- Timing
start_date DATE,
end_date DATE,
send_time TIME DEFAULT '09:00:00',
-- Recipients
recipients JSONB NOT NULL DEFAULT '[]'::jsonb,
/*
[
{ "type": "email", "value": "user@example.com" },
{ "type": "user", "value": "user-uuid" },
{ "type": "group", "value": "group-uuid" }
]
*/
cc_recipients JSONB DEFAULT '[]'::jsonb,
bcc_recipients JSONB DEFAULT '[]'::jsonb,
-- Email Content
subject_template TEXT NOT NULL,
body_template TEXT NOT NULL,
body_format TEXT DEFAULT 'html' CHECK (body_format IN ('html', 'text', 'markdown')),
-- Attachments
include_attachments BOOLEAN DEFAULT false,
attachment_config JSONB DEFAULT '{}'::jsonb,
-- Variables
variables JSONB DEFAULT '[]'::jsonb,
-- Status
is_active BOOLEAN NOT NULL DEFAULT true,
last_sent_at TIMESTAMPTZ,
last_send_status TEXT,
next_send_at TIMESTAMPTZ,
send_count INTEGER DEFAULT 0,
-- Metadata
created_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_email_schedules_active ON email_schedules(is_active) WHERE is_active = true;
CREATE INDEX idx_email_schedules_next_send ON email_schedules(next_send_at) WHERE is_active = true;
CREATE INDEX idx_email_schedules_type ON email_schedules(schedule_type);
CREATE INDEX idx_email_schedules_created_by ON email_schedules(created_by);
email_logs (Complete Schema):
CREATE TABLE email_logs (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Foreign Keys
schedule_id UUID REFERENCES email_schedules(id) ON DELETE SET NULL,
-- Email Details
recipients JSONB NOT NULL,
cc_recipients JSONB,
bcc_recipients JSONB,
subject TEXT NOT NULL,
body TEXT NOT NULL,
body_format TEXT DEFAULT 'html',
-- Attachments
attachments JSONB DEFAULT '[]'::jsonb,
/*
[
{ "filename": "report.pdf", "size": 1234567, "type": "application/pdf" }
]
*/
-- Status
status TEXT NOT NULL DEFAULT 'pending' CHECK (status IN (
'pending', 'sending', 'sent', 'delivered', 'failed', 'bounced'
)),
error_message TEXT,
error_code TEXT,
-- Tracking
sent_at TIMESTAMPTZ,
delivered_at TIMESTAMPTZ,
opened_at TIMESTAMPTZ,
clicked_at TIMESTAMPTZ,
-- Provider Info
provider TEXT DEFAULT 'resend',
provider_message_id TEXT,
-- Metadata
triggered_by TEXT DEFAULT 'schedule' CHECK (triggered_by IN ('schedule', 'manual', 'system')),
triggered_by_user UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_email_logs_schedule ON email_logs(schedule_id);
CREATE INDEX idx_email_logs_status ON email_logs(status);
CREATE INDEX idx_email_logs_created ON email_logs(created_at DESC);
CREATE INDEX idx_email_logs_sent ON email_logs(sent_at DESC) WHERE sent_at IS NOT NULL;
email_snippet_templates (Complete Schema):
CREATE TABLE email_snippet_templates (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Template Identity
name TEXT NOT NULL UNIQUE,
description TEXT,
-- Content
content TEXT NOT NULL,
content_format TEXT DEFAULT 'html' CHECK (content_format IN ('html', 'text', 'markdown')),
-- Organization
category TEXT DEFAULT 'general',
tags JSONB DEFAULT '[]'::jsonb,
-- Variables used in this snippet
variables_used JSONB DEFAULT '[]'::jsonb,
-- Preview
preview_text TEXT,
-- Status
is_active BOOLEAN NOT NULL DEFAULT true,
usage_count INTEGER DEFAULT 0,
-- Metadata
created_by UUID REFERENCES user_profiles(id),
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_snippet_templates_category ON email_snippet_templates(category);
CREATE INDEX idx_snippet_templates_active ON email_snippet_templates(is_active) WHERE is_active = true;
CREATE INDEX idx_snippet_templates_tags ON email_snippet_templates USING GIN(tags);
email_variable_definitions (Complete Schema):
CREATE TABLE email_variable_definitions (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Variable Identity
name TEXT NOT NULL UNIQUE,
display_name TEXT NOT NULL,
description TEXT,
-- Data Source
data_source TEXT NOT NULL CHECK (data_source IN (
'static', 'database', 'api', 'calculated', 'user_input'
)),
source_config JSONB DEFAULT '{}'::jsonb,
/*
For database:
{ "table": "cfg_mine_sites", "column": "name", "filter": "id = {{site_id}}" }
For calculated:
{ "formula": "AVG(pm2_5)", "table": "data_dust_levels", "period": "last_7_days" }
For api:
{ "endpoint": "/api/weather", "field": "temperature" }
*/
-- Formatting
format_template TEXT,
default_value TEXT,
-- Validation
value_type TEXT DEFAULT 'string' CHECK (value_type IN ('string', 'number', 'date', 'boolean', 'array')),
validation_regex TEXT,
-- Example
example_value TEXT,
-- Status
is_active BOOLEAN NOT NULL DEFAULT true,
is_system BOOLEAN DEFAULT false,
-- Metadata
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_variable_definitions_source ON email_variable_definitions(data_source);
CREATE INDEX idx_variable_definitions_active ON email_variable_definitions(is_active) WHERE is_active = true;
-- Default Variables
INSERT INTO email_variable_definitions (name, display_name, description, data_source, example_value, is_system) VALUES
('current_date', 'Current Date', 'Today''s date', 'calculated', '2026-02-05', true),
('current_time', 'Current Time', 'Current time', 'calculated', '14:30:00', true),
('site_name', 'Site Name', 'Name of the mine site', 'database', 'Mine Site Alpha', true),
('user_name', 'User Name', 'Recipient''s full name', 'database', 'John Smith', true),
('pm25_avg_7d', 'PM2.5 7-Day Average', 'Average PM2.5 over last 7 days', 'calculated', '45.2 μg/m³', true),
('pm10_avg_7d', 'PM10 7-Day Average', 'Average PM10 over last 7 days', 'calculated', '123.4 μg/m³', true),
('alert_count', 'Alert Count', 'Number of threshold exceedances', 'calculated', '3', true);
14.2.7 Audit and Activity Tables (Detailed)
activity_logs (Complete Schema):
CREATE TABLE activity_logs (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Actor
user_id UUID REFERENCES user_profiles(id) ON DELETE SET NULL,
user_email TEXT,
user_role TEXT,
-- Action
action TEXT NOT NULL CHECK (action IN (
'create', 'read', 'update', 'delete',
'login', 'logout', 'password_change', 'password_reset',
'export', 'import', 'download', 'upload',
'permission_grant', 'permission_revoke',
'report_generate', 'report_view', 'report_download',
'scraper_trigger', 'email_send',
'settings_change', 'system_event'
)),
-- Resource
resource_type TEXT NOT NULL,
resource_id UUID,
resource_name TEXT,
-- Details
details JSONB DEFAULT '{}'::jsonb,
/*
{
"old_value": {...},
"new_value": {...},
"changes": ["field1", "field2"],
"metadata": {...}
}
*/
-- Request Context
ip_address INET,
user_agent TEXT,
request_id TEXT,
-- Location (optional)
country TEXT,
city TEXT,
-- Timestamp
created_at TIMESTAMPTZ DEFAULT now(),
-- Constraints (append-only - no updates or deletes via application)
CONSTRAINT activity_logs_immutable CHECK (true)
);
-- Indexes for common queries
CREATE INDEX idx_activity_logs_user ON activity_logs(user_id);
CREATE INDEX idx_activity_logs_action ON activity_logs(action);
CREATE INDEX idx_activity_logs_resource ON activity_logs(resource_type, resource_id);
CREATE INDEX idx_activity_logs_created ON activity_logs(created_at DESC);
CREATE INDEX idx_activity_logs_user_created ON activity_logs(user_id, created_at DESC);
-- Partial indexes for specific actions
CREATE INDEX idx_activity_logs_auth ON activity_logs(user_id, created_at DESC)
WHERE action IN ('login', 'logout', 'password_change');
CREATE INDEX idx_activity_logs_reports ON activity_logs(resource_id, created_at DESC)
WHERE resource_type = 'report';
-- Enable RLS and prevent updates/deletes (only INSERT and SELECT policies are defined)
ALTER TABLE activity_logs ENABLE ROW LEVEL SECURITY;
CREATE POLICY "activity_logs_insert_only" ON activity_logs
FOR INSERT WITH CHECK (true);
CREATE POLICY "activity_logs_select_own" ON activity_logs
FOR SELECT USING (
is_admin() OR user_id = auth.uid()
);
-- No UPDATE or DELETE policies = immutable
error_logs (Complete Schema):
CREATE TABLE error_logs (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Error Classification
error_type TEXT NOT NULL CHECK (error_type IN (
'javascript', 'api', 'database', 'auth', 'validation',
'network', 'timeout', 'permission', 'system', 'unknown'
)),
severity TEXT DEFAULT 'error' CHECK (severity IN ('debug', 'info', 'warning', 'error', 'critical')),
-- Error Details
error_code TEXT,
error_message TEXT NOT NULL,
error_stack TEXT,
-- Context
user_id UUID REFERENCES user_profiles(id) ON DELETE SET NULL,
url TEXT,
component TEXT,
action TEXT,
-- Request Info
request_method TEXT,
request_url TEXT,
request_body JSONB,
response_status INTEGER,
response_body JSONB,
-- Environment
user_agent TEXT,
browser TEXT,
browser_version TEXT,
os TEXT,
device_type TEXT,
-- Additional Context
metadata JSONB DEFAULT '{}'::jsonb,
-- Resolution
is_resolved BOOLEAN DEFAULT false,
resolved_by UUID REFERENCES user_profiles(id),
resolved_at TIMESTAMPTZ,
resolution_notes TEXT,
-- Timestamp
created_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_error_logs_type ON error_logs(error_type);
CREATE INDEX idx_error_logs_severity ON error_logs(severity);
CREATE INDEX idx_error_logs_created ON error_logs(created_at DESC);
CREATE INDEX idx_error_logs_unresolved ON error_logs(created_at DESC) WHERE is_resolved = false;
CREATE INDEX idx_error_logs_user ON error_logs(user_id) WHERE user_id IS NOT NULL;
performance_metrics (Complete Schema):
CREATE TABLE performance_metrics (
-- Primary Key
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-- Metric Identity
metric_name TEXT NOT NULL,
metric_category TEXT DEFAULT 'general' CHECK (metric_category IN (
'page_load', 'api_call', 'database', 'render', 'export', 'import', 'general'
)),
-- Values
value_ms DECIMAL(12, 2) NOT NULL,
value_bytes BIGINT,
value_count INTEGER,
-- Context
user_id UUID REFERENCES user_profiles(id) ON DELETE SET NULL,
url TEXT,
endpoint TEXT,
-- Additional Data
metadata JSONB DEFAULT '{}'::jsonb,
-- Environment
user_agent TEXT,
connection_type TEXT,
-- Timestamp
created_at TIMESTAMPTZ DEFAULT now()
);
-- Indexes
CREATE INDEX idx_perf_metrics_name ON performance_metrics(metric_name);
CREATE INDEX idx_perf_metrics_category ON performance_metrics(metric_category);
CREATE INDEX idx_perf_metrics_created ON performance_metrics(created_at DESC);
-- Partitioning by month (for large datasets)
-- CREATE TABLE performance_metrics_y2026m02 PARTITION OF performance_metrics
-- FOR VALUES FROM ('2026-02-01') TO ('2026-03-01');
14.3 Database Functions and Triggers
14.3.1 Utility Functions
-- Function: Update updated_at timestamp
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = now();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Function: Generate slug from text
CREATE OR REPLACE FUNCTION generate_slug(input_text TEXT)
RETURNS TEXT AS $$
BEGIN
RETURN lower(regexp_replace(
regexp_replace(input_text, '[^a-zA-Z0-9\s-]', '', 'g'),
'\s+', '-', 'g'
));
END;
$$ LANGUAGE plpgsql IMMUTABLE;
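-- Example (illustrative):
-- SELECT generate_slug('Mine Site Alpha');  --> 'mine-site-alpha'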
-- Function: Calculate date range overlap
CREATE OR REPLACE FUNCTION date_ranges_overlap(
start1 DATE, end1 DATE,
start2 DATE, end2 DATE
) RETURNS BOOLEAN AS $$
BEGIN
RETURN start1 <= end2 AND end1 >= start2;
END;
$$ LANGUAGE plpgsql IMMUTABLE;
14.3.2 RBAC Helper Functions
-- Function: Check if current user is admin
CREATE OR REPLACE FUNCTION is_admin()
RETURNS BOOLEAN AS $$
BEGIN
RETURN EXISTS (
SELECT 1 FROM user_profiles
WHERE id = auth.uid()
AND role = 'admin'
AND is_active = true
);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER STABLE;
-- Function: Get user's permitted site IDs
CREATE OR REPLACE FUNCTION get_user_permitted_sites(permission_type TEXT DEFAULT 'view')
RETURNS SETOF UUID AS $$
BEGIN
-- Admins can access all active sites
IF is_admin() THEN
RETURN QUERY SELECT id FROM cfg_mine_sites WHERE is_active = true;
END IF;
-- Regular users get their permitted sites
IF permission_type = 'view' THEN
RETURN QUERY
SELECT site_id FROM user_site_permissions
WHERE user_id = auth.uid() AND can_view = true;
ELSIF permission_type = 'edit' THEN
RETURN QUERY
SELECT site_id FROM user_site_permissions
WHERE user_id = auth.uid() AND can_edit = true;
END IF;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER STABLE;
-- Function: Check module permission
CREATE OR REPLACE FUNCTION has_module_permission(
module_name TEXT,
permission_type TEXT DEFAULT 'view'
) RETURNS BOOLEAN AS $$
BEGIN
-- Admins have all permissions
IF is_admin() THEN
RETURN true;
END IF;
-- Check specific permission
IF permission_type = 'view' THEN
RETURN EXISTS (
SELECT 1 FROM user_module_permissions
WHERE user_id = auth.uid()
AND module = module_name
AND can_view = true
);
ELSIF permission_type = 'edit' THEN
RETURN EXISTS (
SELECT 1 FROM user_module_permissions
WHERE user_id = auth.uid()
AND module = module_name
AND can_edit = true
);
END IF;
RETURN false;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER STABLE;
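-- Example check (illustrative; 'flow_meter' is one of the modules accepted by
-- user_module_permissions); RLS policies and RPCs can call it directly:
-- SELECT has_module_permission('flow_meter', 'edit');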
-- Function: Check site permission
CREATE OR REPLACE FUNCTION has_site_permission(
site_uuid UUID,
permission_type TEXT DEFAULT 'view'
) RETURNS BOOLEAN AS $$
BEGIN
-- Admins have all permissions
IF is_admin() THEN
RETURN true;
END IF;
-- Check specific permission
IF permission_type = 'view' THEN
RETURN EXISTS (
SELECT 1 FROM user_site_permissions
WHERE user_id = auth.uid()
AND site_id = site_uuid
AND can_view = true
);
ELSIF permission_type = 'edit' THEN
RETURN EXISTS (
SELECT 1 FROM user_site_permissions
WHERE user_id = auth.uid()
AND site_id = site_uuid
AND can_edit = true
);
END IF;
RETURN false;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER STABLE;
14.3.3 Business Logic Functions
-- Function: Get dashboard summary for a site
CREATE OR REPLACE FUNCTION get_dashboard_summary(p_site_id UUID DEFAULT NULL)
RETURNS JSON AS $$
DECLARE
result JSON;
site_filter UUID[];
BEGIN
-- Get permitted sites
IF p_site_id IS NOT NULL THEN
site_filter := ARRAY[p_site_id];
ELSE
SELECT ARRAY_AGG(id) INTO site_filter FROM get_user_permitted_sites();
END IF;
SELECT json_build_object(
'total_devices', (
SELECT COUNT(DISTINCT device_id)
FROM data_dust_levels
WHERE site_id = ANY(site_filter)
),
'latest_reading', (
SELECT MAX(created_at)
FROM data_dust_levels
WHERE site_id = ANY(site_filter)
),
'avg_pm25_today', (
SELECT ROUND(AVG(pm2_5)::numeric, 2)
FROM data_dust_levels
WHERE site_id = ANY(site_filter)
AND created_at >= CURRENT_DATE
),
'avg_pm10_today', (
SELECT ROUND(AVG(pm10)::numeric, 2)
FROM data_dust_levels
WHERE site_id = ANY(site_filter)
AND created_at >= CURRENT_DATE
),
'alerts_today', (
SELECT COUNT(*)
FROM data_dust_levels
WHERE site_id = ANY(site_filter)
AND created_at >= CURRENT_DATE
AND (pm2_5 > 55 OR pm10 > 150)
),
'active_sites', (
SELECT COUNT(DISTINCT site_id)
FROM data_dust_levels
WHERE site_id = ANY(site_filter)
AND created_at >= CURRENT_DATE - INTERVAL '1 day'
)
) INTO result;
RETURN result;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
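-- Example invocation (illustrative; the UUID is a placeholder). PostgREST exposes
-- this function at /rest/v1/rpc/get_dashboard_summary, the endpoint exercised by
-- the k6 load test in 12.10.3.
-- SELECT get_dashboard_summary();  -- all sites the caller may view
-- SELECT get_dashboard_summary('00000000-0000-0000-0000-000000000000');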
-- Function: Get dust level statistics for a period
CREATE OR REPLACE FUNCTION get_dust_statistics(
p_site_id UUID,
p_start_date DATE,
p_end_date DATE,
p_device_id TEXT DEFAULT NULL
) RETURNS JSON AS $$
DECLARE
result JSON;
BEGIN
-- Check permission
IF NOT has_site_permission(p_site_id, 'view') THEN
RAISE EXCEPTION 'Access denied to site %', p_site_id;
END IF;
SELECT json_build_object(
'record_count', COUNT(*),
'date_range', json_build_object(
'start', MIN(created_at),
'end', MAX(created_at)
),
'pm25', json_build_object(
'min', ROUND(MIN(pm2_5)::numeric, 2),
'max', ROUND(MAX(pm2_5)::numeric, 2),
'avg', ROUND(AVG(pm2_5)::numeric, 2),
'median', ROUND(PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY pm2_5)::numeric, 2)
),
'pm10', json_build_object(
'min', ROUND(MIN(pm10)::numeric, 2),
'max', ROUND(MAX(pm10)::numeric, 2),
'avg', ROUND(AVG(pm10)::numeric, 2),
'median', ROUND(PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY pm10)::numeric, 2)
),
'temperature', json_build_object(
'min', ROUND(MIN(temperature)::numeric, 1),
'max', ROUND(MAX(temperature)::numeric, 1),
'avg', ROUND(AVG(temperature)::numeric, 1)
),
'humidity', json_build_object(
'min', ROUND(MIN(humidity)::numeric, 1),
'max', ROUND(MAX(humidity)::numeric, 1),
'avg', ROUND(AVG(humidity)::numeric, 1)
),
'exceedances', json_build_object(
'pm25_warning', COUNT(*) FILTER (WHERE pm2_5 > 35),
'pm25_critical', COUNT(*) FILTER (WHERE pm2_5 > 55),
'pm10_warning', COUNT(*) FILTER (WHERE pm10 > 150),
'pm10_critical', COUNT(*) FILTER (WHERE pm10 > 250)
)
) INTO result
FROM data_dust_levels
WHERE site_id = p_site_id
AND created_at >= p_start_date
AND created_at < p_end_date + INTERVAL '1 day'
AND (p_device_id IS NULL OR device_id = p_device_id);
RETURN result;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
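-- Example invocation (illustrative; the UUID is a placeholder and the device
-- filter is left at its default of all devices):
-- SELECT get_dust_statistics('00000000-0000-0000-0000-000000000000',
--                            '2026-01-01', '2026-01-31');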
-- Function: Log activity
CREATE OR REPLACE FUNCTION log_activity(
p_action TEXT,
p_resource_type TEXT,
p_resource_id UUID DEFAULT NULL,
p_resource_name TEXT DEFAULT NULL,
p_details JSONB DEFAULT '{}'::jsonb
) RETURNS UUID AS $$
DECLARE
v_log_id UUID;
v_user_email TEXT;
v_user_role TEXT;
BEGIN
-- Get user info
SELECT email, role INTO v_user_email, v_user_role
FROM user_profiles WHERE id = auth.uid();
INSERT INTO activity_logs (
user_id, user_email, user_role,
action, resource_type, resource_id, resource_name,
details
) VALUES (
auth.uid(), v_user_email, v_user_role,
p_action, p_resource_type, p_resource_id, p_resource_name,
p_details
) RETURNING id INTO v_log_id;
RETURN v_log_id;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
14.3.4 Triggers
-- Trigger: Auto-create user profile on signup
CREATE OR REPLACE FUNCTION handle_new_user()
RETURNS TRIGGER AS $$
BEGIN
INSERT INTO user_profiles (id, email, full_name, role)
VALUES (
NEW.id,
NEW.email,
COALESCE(NEW.raw_user_meta_data->>'full_name', split_part(NEW.email, '@', 1)),
'user'
);
-- Grant default dashboard permission
INSERT INTO user_module_permissions (user_id, module, can_view, can_edit)
VALUES (NEW.id, 'dashboard', true, false);
RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
CREATE TRIGGER on_auth_user_created
AFTER INSERT ON auth.users
FOR EACH ROW EXECUTE FUNCTION handle_new_user();
-- Trigger: Log report generation
CREATE OR REPLACE FUNCTION log_report_activity()
RETURNS TRIGGER AS $$
BEGIN
IF TG_OP = 'INSERT' THEN
PERFORM log_activity(
'report_generate',
'report',
NEW.id,
NEW.title,
jsonb_build_object(
'report_type', NEW.report_type,
'site_id', NEW.site_id,
'period_start', NEW.period_start,
'period_end', NEW.period_end
)
);
ELSIF TG_OP = 'DELETE' THEN
PERFORM log_activity(
'delete',
'report',
OLD.id,
OLD.title,
jsonb_build_object('deleted_by', auth.uid())
);
END IF;
RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
CREATE TRIGGER report_activity_logger
AFTER INSERT OR DELETE ON rpt_reports
FOR EACH ROW EXECUTE FUNCTION log_report_activity();
-- Trigger: Update login tracking
CREATE OR REPLACE FUNCTION update_login_tracking()
RETURNS TRIGGER AS $$
BEGIN
UPDATE user_profiles
SET
last_login_at = now(),
login_count = COALESCE(login_count, 0) + 1
WHERE id = NEW.id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
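-- Note: no CREATE TRIGGER statement is defined above for update_login_tracking.
-- A plausible binding (an assumption, not the deployed definition) would fire it
-- whenever Supabase Auth stamps a sign-in time on auth.users:
-- CREATE TRIGGER on_auth_user_login
--   AFTER UPDATE OF last_sign_in_at ON auth.users
--   FOR EACH ROW EXECUTE FUNCTION update_login_tracking();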
-- Trigger: Validate tank correction
CREATE OR REPLACE FUNCTION validate_tank_correction()
RETURNS TRIGGER AS $$
BEGIN
-- Ensure user has edit permission for the site
IF NOT has_site_permission(NEW.site_id, 'edit') THEN
RAISE EXCEPTION 'No edit permission for site %', NEW.site_id;
END IF;
-- Set created_by if not provided
IF NEW.created_by IS NULL THEN
NEW.created_by := auth.uid();
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
CREATE TRIGGER validate_tank_correction_trigger
BEFORE INSERT ON ops_tank_corrections
FOR EACH ROW EXECUTE FUNCTION validate_tank_correction();
14.4 Row-Level Security Policies
14.4.1 User Management RLS
-- user_profiles policies
ALTER TABLE user_profiles ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can view own profile"
ON user_profiles FOR SELECT
USING (id = auth.uid());
CREATE POLICY "Admins can view all profiles"
ON user_profiles FOR SELECT
USING (is_admin());
CREATE POLICY "Users can update own profile"
ON user_profiles FOR UPDATE
USING (id = auth.uid())
WITH CHECK (id = auth.uid() AND role = (SELECT role FROM user_profiles WHERE id = auth.uid()));
CREATE POLICY "Admins can update all profiles"
ON user_profiles FOR UPDATE
USING (is_admin());
CREATE POLICY "Admins can insert profiles"
ON user_profiles FOR INSERT
WITH CHECK (is_admin());
-- user_module_permissions policies
ALTER TABLE user_module_permissions ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can view own permissions"
ON user_module_permissions FOR SELECT
USING (user_id = auth.uid());
CREATE POLICY "Admins can manage all permissions"
ON user_module_permissions FOR ALL
USING (is_admin());
-- user_site_permissions policies
ALTER TABLE user_site_permissions ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can view own site permissions"
ON user_site_permissions FOR SELECT
USING (user_id = auth.uid());
CREATE POLICY "Admins can manage all site permissions"
ON user_site_permissions FOR ALL
USING (is_admin());
14.4.2 Data Tables RLS
-- cfg_mine_sites policies
ALTER TABLE cfg_mine_sites ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can view permitted sites"
ON cfg_mine_sites FOR SELECT
USING (
is_admin() OR
id IN (SELECT site_id FROM user_site_permissions WHERE user_id = auth.uid() AND can_view = true)
);
CREATE POLICY "Admins can manage sites"
ON cfg_mine_sites FOR ALL
USING (is_admin());
-- data_dust_levels policies
ALTER TABLE data_dust_levels ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can view permitted sites data"
ON data_dust_levels FOR SELECT
USING (
is_admin() OR
site_id IN (SELECT site_id FROM user_site_permissions WHERE user_id = auth.uid() AND can_view = true)
);
CREATE POLICY "Users can insert to permitted sites"
ON data_dust_levels FOR INSERT
WITH CHECK (
is_admin() OR
site_id IN (SELECT site_id FROM user_site_permissions WHERE user_id = auth.uid() AND can_edit = true)
);
CREATE POLICY "Admins can manage all dust data"
ON data_dust_levels FOR ALL
USING (is_admin());
-- data_flow_meters policies
ALTER TABLE data_flow_meters ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can view permitted sites flow data"
ON data_flow_meters FOR SELECT
USING (
is_admin() OR
site_id IN (SELECT site_id FROM user_site_permissions WHERE user_id = auth.uid() AND can_view = true)
);
CREATE POLICY "Users can manage permitted sites flow data"
ON data_flow_meters FOR ALL
USING (
is_admin() OR
site_id IN (SELECT site_id FROM user_site_permissions WHERE user_id = auth.uid() AND can_edit = true)
);
-- ops_tank_corrections policies
ALTER TABLE ops_tank_corrections ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can view permitted sites corrections"
ON ops_tank_corrections FOR SELECT
USING (
is_admin() OR
site_id IN (SELECT site_id FROM user_site_permissions WHERE user_id = auth.uid() AND can_view = true)
);
CREATE POLICY "Users can insert corrections for permitted sites"
ON ops_tank_corrections FOR INSERT
WITH CHECK (
is_admin() OR
site_id IN (SELECT site_id FROM user_site_permissions WHERE user_id = auth.uid() AND can_edit = true)
);
-- Corrections are immutable (no update/delete for regular users)
CREATE POLICY "Admins can manage all corrections"
ON ops_tank_corrections FOR ALL
USING (is_admin());
14.4.3 Report Tables RLS
-- rpt_reports policies
ALTER TABLE rpt_reports ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can view own reports"
ON rpt_reports FOR SELECT
USING (user_id = auth.uid());
CREATE POLICY "Users can view reports for permitted sites"
ON rpt_reports FOR SELECT
USING (
site_id IN (SELECT site_id FROM user_site_permissions WHERE user_id = auth.uid() AND can_view = true)
);
CREATE POLICY "Admins can view all reports"
ON rpt_reports FOR SELECT
USING (is_admin());
CREATE POLICY "Users can create own reports"
ON rpt_reports FOR INSERT
WITH CHECK (user_id = auth.uid());
CREATE POLICY "Users can update own reports"
ON rpt_reports FOR UPDATE
USING (user_id = auth.uid());
CREATE POLICY "Users can delete own reports"
ON rpt_reports FOR DELETE
USING (user_id = auth.uid());
CREATE POLICY "Admins can manage all reports"
ON rpt_reports FOR ALL
USING (is_admin());
-- rpt_weekly_reports policies
ALTER TABLE rpt_weekly_reports ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users can manage own weekly reports"
ON rpt_weekly_reports FOR ALL
USING (user_id = auth.uid());
CREATE POLICY "Admins can view all weekly reports"
ON rpt_weekly_reports FOR SELECT
USING (is_admin());
CREATE POLICY "Admins can manage all weekly reports"
ON rpt_weekly_reports FOR ALL
USING (is_admin());
14.4.4 Email System RLS
-- email_schedules policies
ALTER TABLE email_schedules ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users with email permission can view schedules"
ON email_schedules FOR SELECT
USING (has_module_permission('email_schedules', 'view'));
CREATE POLICY "Users with email edit permission can manage schedules"
ON email_schedules FOR ALL
USING (has_module_permission('email_schedules', 'edit'));
CREATE POLICY "Admins can manage all schedules"
ON email_schedules FOR ALL
USING (is_admin());
-- email_logs policies
ALTER TABLE email_logs ENABLE ROW LEVEL SECURITY;
CREATE POLICY "Users with email permission can view logs"
ON email_logs FOR SELECT
USING (has_module_permission('email_schedules', 'view'));
CREATE POLICY "Admins can view all logs"
ON email_logs FOR SELECT
USING (is_admin());
-- System can insert logs
CREATE POLICY "System can insert logs"
ON email_logs FOR INSERT
WITH CHECK (true);
15. Storage Architecture
15.1 Supabase Storage Bucket Structure
supabase-storage/
│
├── csv-uploads/ # CSV file uploads
│ └── {user_id}/
│ ├── daily/
│ │ └── {date}_{upload_id}_{site}.csv
│ ├── weekly/
│ │ └── {week}_{upload_id}_{site}.csv
│ └── monthly/
│ └── {month}_{upload_id}_{site}.csv
│
├── pdf-reports/ # Generated PDF reports
│ └── {user_id}/
│ └── {site_code}/
│ └── {date_range}_report_{version}.pdf
│
├── docx-exports/ # Weekly report DOCX exports
│ └── {user_id}/
│ └── weekly_report_{period_end}_{timestamp}.docx
│
├── email-attachments/ # Email attachment files
│ └── {schedule_id}/
│ └── {timestamp}_{filename}
│
├── assets/ # Asset images and documents
│ └── {site_id}/
│ └── {asset_id}/
│ ├── photos/
│ └── documents/
│
└── system/ # System files
├── logos/
├── templates/
└── backups/
15.2 Storage Policies
-- CSV uploads bucket policy
CREATE POLICY "Users can upload own CSV files"
ON storage.objects FOR INSERT
WITH CHECK (
bucket_id = 'csv-uploads' AND
(storage.foldername(name))[1] = auth.uid()::text
);
CREATE POLICY "Users can view own CSV files"
ON storage.objects FOR SELECT
USING (
bucket_id = 'csv-uploads' AND
(storage.foldername(name))[1] = auth.uid()::text
);
CREATE POLICY "Users can delete own CSV files"
ON storage.objects FOR DELETE
USING (
bucket_id = 'csv-uploads' AND
(storage.foldername(name))[1] = auth.uid()::text
);
-- PDF reports bucket policy
CREATE POLICY "Users can upload own reports"
ON storage.objects FOR INSERT
WITH CHECK (
bucket_id = 'pdf-reports' AND
(storage.foldername(name))[1] = auth.uid()::text
);
CREATE POLICY "Users can view own reports"
ON storage.objects FOR SELECT
USING (
bucket_id = 'pdf-reports' AND
(storage.foldername(name))[1] = auth.uid()::text
);
CREATE POLICY "Admins can view all reports"
ON storage.objects FOR SELECT
USING (
bucket_id = 'pdf-reports' AND
is_admin()
);
15.3 File Type Specifications
| File Type | Bucket | Max Size | Retention | Access |
|---|---|---|---|---|
| CSV Upload | csv-uploads | 50 MB | 2 years | Owner + Admin |
| PDF Report | pdf-reports | 20 MB | Permanent | Owner + Site Users + Admin |
| DOCX Export | docx-exports | 10 MB | 90 days | Owner + Admin |
| Email Attachment | email-attachments | 25 MB | 1 year | Schedule Owner + Admin |
| Asset Photo | assets | 10 MB | Permanent | Site Users + Admin |
| Asset Document | assets | 50 MB | Permanent | Site Users + Admin |
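Client-side uploads must land inside the caller's own folder for the csv-uploads policies in section 15.2 to pass. A minimal supabase-js sketch, assuming a shared supabase client module and following the naming convention in section 15.4 below (uploadId and siteCode are placeholder variables):
import { format } from 'date-fns';
import { supabase } from '@/lib/supabaseClient';

// Upload a daily CSV under {user_id}/daily/ so the
// "Users can upload own CSV files" storage policy allows the insert.
async function uploadDailyCsv(file: File, uploadId: string, siteCode: string): Promise<string> {
  const { data: { user } } = await supabase.auth.getUser();
  if (!user) throw new Error('Not authenticated');
  const path = `${user.id}/daily/${format(new Date(), 'yyyy-MM-dd')}_${uploadId}_${siteCode}.csv`;
  const { data, error } = await supabase.storage.from('csv-uploads').upload(path, file);
  if (error) throw error;
  return data.path;
}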
15.4 File Naming Conventions
// CSV Upload filename
const csvFilename = `${format(date, 'yyyy-MM-dd')}_${uploadId}_${siteCode}.csv`;
// Example: 2026-02-05_abc123_BDD.csv
// PDF Report filename
const pdfFilename = `${siteCode}_${format(periodStart, 'yyyy-MM-dd')}_to_${format(periodEnd, 'yyyy-MM-dd')}_report_v${version}.pdf`;
// Example: BDD_2026-01-01_to_2026-01-31_report_v1.pdf
// DOCX Export filename
const docxFilename = `weekly_report_${format(periodEnd, 'yyyy-MM-dd')}_${format(new Date(), 'HHmmss')}.docx`;
// Example: weekly_report_2026-02-01_143052.docx
// Email Attachment filename
const attachmentFilename = `${format(new Date(), 'yyyyMMdd_HHmmss')}_${originalFilename}`;
// Example: 20260205_143052_monthly_summary.pdf
16. API Specifications
16.1 REST API Endpoints
16.1.1 Authentication Endpoints
| Method | Endpoint | Description | Auth Required |
|---|---|---|---|
| POST | /auth/v1/signup | Register new user | No |
| POST | /auth/v1/token?grant_type=password | Login with email/password | No |
| POST | /auth/v1/token?grant_type=refresh_token | Refresh access token | No |
| POST | /auth/v1/logout | Logout user | Yes |
| POST | /auth/v1/recover | Request password reset | No |
| PUT | /auth/v1/user | Update user password | Yes |
| GET | /auth/v1/user | Get current user | Yes |
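In the frontend these endpoints are normally exercised through supabase-js rather than raw HTTP. A minimal sketch, assuming an initialized supabase client (the credentials shown are placeholders):
// Sign in (wraps POST /auth/v1/token?grant_type=password)
const { data, error } = await supabase.auth.signInWithPassword({
  email: 'user@example.com',
  password: 'example-password',
});
if (error) throw new Error(`Login failed: ${error.message}`);

// Sign out (wraps POST /auth/v1/logout)
await supabase.auth.signOut();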
16.1.2 Data Endpoints
| Method | Endpoint | Description | Permission |
|---|---|---|---|
| GET | /rest/v1/data_dust_levels | List dust level records | dust_levels:view |
| POST | /rest/v1/data_dust_levels | Insert dust level records | dust_levels:edit |
| GET | /rest/v1/data_flow_meters | List flow meter records | flow_meter:view |
| POST | /rest/v1/data_flow_meters | Insert flow meter records | flow_meter:edit |
| PATCH | /rest/v1/data_flow_meters?id=eq.{id} | Update flow meter record | flow_meter:edit |
| DELETE | /rest/v1/data_flow_meters?id=eq.{id} | Delete flow meter record | flow_meter:edit |
| GET | /rest/v1/data_weather | List weather records | climate:view |
| GET | /rest/v1/data_assets | List assets | assets:view |
| POST | /rest/v1/data_assets | Create asset | assets:edit |
| PATCH | /rest/v1/data_assets?id=eq.{id} | Update asset | assets:edit |
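Because these routes are generated by PostgREST, the dashboard typically reaches them through the supabase-js query builder, and RLS trims the results to the caller's permitted sites. A hedged sketch of the dust-level listing (siteId is a placeholder variable, and an initialized supabase client is assumed):
const { data, error } = await supabase
  .from('data_dust_levels')
  .select('id, device_id, pm2_5, pm10, temperature, humidity, created_at')
  .eq('site_id', siteId)
  .gte('created_at', '2026-02-01')
  .lt('created_at', '2026-02-06')
  .order('created_at', { ascending: false })
  .limit(100);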
16.1.3 Configuration Endpoints
| Method | Endpoint | Description | Permission |
|---|---|---|---|
| GET | /rest/v1/cfg_mine_sites | List mine sites | Any (filtered by RLS) |
| POST | /rest/v1/cfg_mine_sites | Create mine site | Admin only |
| PATCH | /rest/v1/cfg_mine_sites?id=eq.{id} | Update mine site | Admin only |
| GET | /rest/v1/cfg_app_settings | List app settings | settings:view |
| PATCH | /rest/v1/cfg_app_settings?key=eq.{key} | Update setting | settings:edit |
| GET | /rest/v1/cfg_tank_capacities | List tank capacities | flow_meter:view |
| POST | /rest/v1/cfg_tank_capacities | Create tank capacity | flow_meter:edit |
16.1.4 Operations Endpoints
| Method | Endpoint | Description | Permission |
|---|---|---|---|
| GET | /rest/v1/ops_dustloc_refills | List refills | flow_meter:view |
| POST | /rest/v1/ops_dustloc_refills | Create refill | flow_meter:edit |
| GET | /rest/v1/ops_tank_corrections | List corrections | flow_meter:view |
| POST | /rest/v1/ops_tank_corrections | Create correction | flow_meter:edit |
16.1.5 Report Endpoints
| Method | Endpoint | Description | Permission |
|---|---|---|---|
| GET | /rest/v1/rpt_reports | List reports | reports:view |
| POST | /rest/v1/rpt_reports | Create report record | reports:edit |
| PATCH | /rest/v1/rpt_reports?id=eq.{id} | Update report | reports:edit |
| DELETE | /rest/v1/rpt_reports?id=eq.{id} | Delete report | reports:edit |
| GET | /rest/v1/rpt_weekly_reports | List weekly reports | weekly_reports:view |
| POST | /rest/v1/rpt_weekly_reports | Create weekly report | weekly_reports:edit |
| PATCH | /rest/v1/rpt_weekly_reports?id=eq.{id} | Update weekly report | weekly_reports:edit |
16.1.6 User Management Endpoints
| Method | Endpoint | Description | Permission |
|---|---|---|---|
| GET | /rest/v1/user_profiles | List users | Admin only |
| PATCH | /rest/v1/user_profiles?id=eq.{id} | Update user | Admin only |
| GET | /rest/v1/user_module_permissions | List permissions | Admin only |
| POST | /rest/v1/user_module_permissions | Grant permission | Admin only |
| DELETE | /rest/v1/user_module_permissions?id=eq.{id} | Revoke permission | Admin only |
| GET | /rest/v1/user_groups | List groups | Admin only |
| POST | /rest/v1/user_groups | Create group | Admin only |
16.2 RPC Functions
| Function | Parameters | Returns | Description |
|---|---|---|---|
| get_dashboard_summary | site_id? | JSON | Dashboard statistics |
| get_dust_statistics | site_id, start_date, end_date, device_id? | JSON | Dust level statistics |
| log_activity | action, resource_type, resource_id?, resource_name?, details? | UUID | Log user activity |
| is_admin | - | BOOLEAN | Check if current user is admin |
| has_module_permission | module, permission_type | BOOLEAN | Check module permission |
| has_site_permission | site_id, permission_type | BOOLEAN | Check site permission |
| get_user_permitted_sites | permission_type? | UUID[] | Get permitted site IDs |
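From the client, these functions are invoked with supabase.rpc(); a minimal sketch for the statistics function (parameter names follow section 14.3.3, siteId is a placeholder, and an initialized supabase client is assumed):
const { data: stats, error } = await supabase.rpc('get_dust_statistics', {
  p_site_id: siteId,
  p_start_date: '2026-02-01',
  p_end_date: '2026-02-05',
  // p_device_id is optional and defaults to NULL (all devices)
});
if (error) throw error;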
16.3 Edge Functions
| Function | Method | Path | Description |
|---|---|---|---|
| generate-chart-descriptions | POST | /functions/v1/generate-chart-descriptions | AI chart analysis |
| generate-pdf-report | POST | /functions/v1/generate-pdf-report | Server-side PDF |
| trigger-dust-level-scraper | POST | /functions/v1/trigger-dust-level-scraper | Dust data scraper |
| trigger-flow-meter-scraper | POST | /functions/v1/trigger-flow-meter-scraper | Flow meter scraper |
| trigger-heatmap-scraper | POST | /functions/v1/trigger-heatmap-scraper | Heatmap data scraper |
| trigger-asset-location-scraper | POST | /functions/v1/trigger-asset-location-scraper | Asset location scraper |
| send-email | POST | /functions/v1/send-email | Send email |
| send-calibration-reminder | POST | /functions/v1/send-calibration-reminder | Calibration reminder |
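Edge Functions are called with supabase.functions.invoke(), which forwards the caller's JWT automatically. A sketch for the chart-description function, with the request body mirroring the example in section 16.4.4:
const { data, error } = await supabase.functions.invoke('generate-chart-descriptions', {
  body: {
    chartType: 'pm25_trend',
    data: {
      labels: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri'],
      datasets: [{ label: 'PM2.5', data: [45.2, 52.1, 38.7, 61.3, 44.8] }],
    },
    context: {
      site: 'Mine Site Alpha',
      period: '2026-02-01 to 2026-02-05',
      metric: 'PM2.5 Concentration',
    },
  },
});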
16.4 API Request/Response Examples
16.4.1 Query Dust Levels with Filters
Request:
GET /rest/v1/data_dust_levels?select=id,device_id,pm2_5,pm10,temperature,humidity,created_at,site:cfg_mine_sites(name)&site_id=eq.{site_uuid}&created_at=gte.2026-02-01&created_at=lt.2026-02-06&order=created_at.desc&limit=100
Authorization: Bearer {jwt_token}
apikey: {anon_key}
Response:
[
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"device_id": "DR-001",
"pm2_5": 45.23,
"pm10": 123.45,
"temperature": 28.5,
"humidity": 35.2,
"created_at": "2026-02-05T14:30:00+08:00",
"site": {
"name": "Mine Site Alpha"
}
}
]
16.4.2 Create Tank Correction
Request:
POST /rest/v1/ops_tank_corrections
Authorization: Bearer {jwt_token}
apikey: {anon_key}
Content-Type: application/json
Prefer: return=representation
{
"site_id": "550e8400-e29b-41d4-a716-446655440001",
"tank_name": "Tank A",
"correction_date": "2026-02-05",
"previous_level": 45.5,
"corrected_level": 72.0,
"reason": "Manual measurement after sensor recalibration",
"correction_type": "calibration"
}
Response:
{
"id": "550e8400-e29b-41d4-a716-446655440002",
"site_id": "550e8400-e29b-41d4-a716-446655440001",
"tank_name": "Tank A",
"correction_date": "2026-02-05",
"previous_level": 45.5,
"corrected_level": 72.0,
"level_difference": 26.5,
"reason": "Manual measurement after sensor recalibration",
"correction_type": "calibration",
"created_by": "550e8400-e29b-41d4-a716-446655440003",
"created_at": "2026-02-05T14:35:00+08:00"
}
16.4.3 Call RPC Function
Request:
POST /rest/v1/rpc/get_dust_statistics
Authorization: Bearer {jwt_token}
apikey: {anon_key}
Content-Type: application/json
{
"p_site_id": "550e8400-e29b-41d4-a716-446655440001",
"p_start_date": "2026-02-01",
"p_end_date": "2026-02-05"
}
Response:
{
"record_count": 4523,
"date_range": {
"start": "2026-02-01T00:00:00+08:00",
"end": "2026-02-05T23:59:00+08:00"
},
"pm25": {
"min": 12.34,
"max": 89.56,
"avg": 45.23,
"median": 42.10
},
"pm10": {
"min": 45.67,
"max": 234.56,
"avg": 123.45,
"median": 115.30
},
"temperature": {
"min": 18.5,
"max": 42.3,
"avg": 28.7
},
"humidity": {
"min": 15.2,
"max": 78.5,
"avg": 35.4
},
"exceedances": {
"pm25_warning": 156,
"pm25_critical": 23,
"pm10_warning": 89,
"pm10_critical": 12
}
}
16.4.4 Edge Function - Generate Chart Description
Request:
POST /functions/v1/generate-chart-descriptions
Authorization: Bearer {jwt_token}
Content-Type: application/json
{
"chartType": "pm25_trend",
"data": {
"labels": ["Mon", "Tue", "Wed", "Thu", "Fri"],
"datasets": [
{
"label": "PM2.5",
"data": [45.2, 52.1, 38.7, 61.3, 44.8]
}
]
},
"context": {
"site": "Mine Site Alpha",
"period": "2026-02-01 to 2026-02-05",
"metric": "PM2.5 Concentration"
}
}
Response:
{
"success": true,
"description": "PM2.5 levels at Mine Site Alpha showed moderate variation over the monitoring period from February 1-5, 2026. The weekly average of 48.4 μg/m³ indicates generally acceptable air quality, though Thursday recorded a notable peak of 61.3 μg/m³ which exceeded the warning threshold of 55 μg/m³.",
"insights": [
"Thursday's elevated reading (61.3 μg/m³) may correlate with increased mining activity or adverse weather conditions",
"Wednesday showed the lowest concentration (38.7 μg/m³), suggesting effective dust suppression",
"Overall trend remains within regulatory compliance limits"
],
"recommendations": [
"Investigate Thursday's spike to identify contributing factors",
"Consider enhanced dust suppression during peak activity periods",
"Continue monitoring for sustained elevated readings"
],
"metadata": {
"model": "deepseek-chat",
"tokens_used": 245,
"processing_time_ms": 1234
}
}
17. Deployment and Operations
17.1 Environment Configuration
17.1.1 Environment Variables
Frontend (.env.local):
# Supabase Configuration
VITE_SUPABASE_URL=https://your-project.supabase.co
VITE_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
# Heatmap Database (separate instance)
VITE_HEATMAP_SUPABASE_URL=https://heatmap-project.supabase.co
VITE_HEATMAP_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
# Google Maps
VITE_GOOGLE_MAPS_API_KEY=AIzaSy...
# Application
VITE_APP_NAME=Dust Ranger Data Management System
VITE_APP_VERSION=2.1.0
VITE_APP_ENV=production
Backend (supabase/.env.local):
# Database Connection
SUPABASE_DB_URL=postgresql://postgres:password@localhost:54322/postgres
# Service Keys
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
# External APIs
DEEPSEEK_API_KEY=sk-...
DUSTAC_SCRAPER_API_KEY=...
FLOW_METER_API_KEY=...
# Email Service
RESEND_API_KEY=re_...
EMAIL_FROM_ADDRESS=noreply@dustac.com.au
17.1.2 Deployment Environments
| Environment | URL | Database | Purpose |
|---|---|---|---|
| Development | localhost:3000 | Local Supabase | Feature development |
| Staging | staging.dustac.com.au | Staging DB | Pre-production testing |
| Production | dashboard.dustac.com.au | Production DB | Live system |
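The VITE_-prefixed variables above are inlined by Vite at build time and consumed when the Supabase clients are created; a minimal initialization sketch (the file path is illustrative):
// src/lib/supabaseClient.ts
import { createClient } from '@supabase/supabase-js';

// Primary application database
export const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY
);

// Separate heatmap database instance
export const heatmapClient = createClient(
  import.meta.env.VITE_HEATMAP_SUPABASE_URL,
  import.meta.env.VITE_HEATMAP_ANON_KEY
);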
17.2 CI/CD Pipeline
17.2.1 GitHub Actions Workflow
# .github/workflows/ci.yml
name: CI/CD Pipeline
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: pnpm/action-setup@v2
with:
version: 8
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'pnpm'
- run: pnpm install
- run: pnpm lint
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: pnpm/action-setup@v2
with:
version: 8
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'pnpm'
- run: pnpm install
- run: pnpm test:unit
- run: pnpm test:unit:coverage
- uses: codecov/codecov-action@v3
build:
runs-on: ubuntu-latest
needs: [lint, test]
steps:
- uses: actions/checkout@v4
- uses: pnpm/action-setup@v2
with:
version: 8
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: 'pnpm'
- run: pnpm install
- run: pnpm build
- uses: actions/upload-artifact@v3
with:
name: dist
path: dist
deploy-staging:
runs-on: ubuntu-latest
needs: build
if: github.ref == 'refs/heads/develop'
steps:
- uses: actions/download-artifact@v3
with:
name: dist
- name: Deploy to Cloudflare Pages (Staging)
uses: cloudflare/pages-action@v1
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
projectName: dustac-staging
directory: dist
deploy-production:
runs-on: ubuntu-latest
needs: build
if: github.ref == 'refs/heads/main'
steps:
- uses: actions/download-artifact@v3
with:
name: dist
- name: Deploy to Cloudflare Pages (Production)
uses: cloudflare/pages-action@v1
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
projectName: dustac-dashboard
directory: dist
17.3 Monitoring and Alerting
17.3.1 Health Check Endpoints
| Endpoint | Check | Frequency | Alert Threshold |
|---|---|---|---|
| /api/health | Basic liveness | 30s | 3 consecutive failures |
| /api/health/db | Database connection | 60s | 2 consecutive failures |
| /api/health/storage | Storage access | 60s | 2 consecutive failures |
17.3.2 Key Metrics to Monitor
| Metric | Warning | Critical | Action |
|---|---|---|---|
| API Response Time (P95) | > 500ms | > 2000ms | Investigate slow queries |
| Error Rate | > 1% | > 5% | Check error logs |
| Database Connections | > 70% | > 90% | Scale connection pool |
| Storage Usage | > 70% | > 90% | Archive old data |
| CPU Usage | > 70% | > 90% | Scale infrastructure |
| Memory Usage | > 70% | > 90% | Optimize queries |
17.3.3 Alerting Configuration
# alerting-rules.yml
groups:
- name: dustac-alerts
rules:
- alert: HighErrorRate
expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
for: 5m
labels:
severity: critical
annotations:
summary: High error rate detected
- alert: SlowAPIResponse
expr: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) > 2
for: 10m
labels:
severity: warning
annotations:
summary: API response time exceeds threshold
- alert: DatabaseConnectionsHigh
expr: pg_stat_activity_count / pg_settings_max_connections > 0.9
for: 5m
labels:
severity: critical
annotations:
summary: Database connections near limit
17.4 Backup and Recovery Procedures
17.4.1 Backup Schedule
| Component | Method | Schedule | Retention |
|---|---|---|---|
| Database | pg_dump | Daily 03:00 UTC | 30 days |
| Database | WAL archiving | Continuous | 7 days |
| Storage | S3 versioning | On change | 90 days |
| Configuration | Git | On commit | Indefinite |
17.4.2 Recovery Procedures
Database Recovery from Backup:
# 1. Stop application
sudo systemctl stop dustac-app
# 2. Restore database
pg_restore -h localhost -U postgres -d dustac_db \
--clean --if-exists \
/backups/dustac_db_20260205.dump
# 3. Verify data integrity
psql -h localhost -U postgres -d dustac_db \
-c "SELECT COUNT(*) FROM data_dust_levels;"
# 4. Restart application
sudo systemctl start dustac-app
Point-in-Time Recovery:
# Point-in-time recovery replays archived WAL on top of a base backup;
# it is not driven by pg_restore. Outline (paths are illustrative):
# 1. Restore the latest base backup into a clean data directory
# 2. Set the WAL source and recovery target in postgresql.conf:
#    restore_command = 'cp /backups/wal/%f %p'
#    recovery_target_time = '2026-02-05 14:30:00'
# 3. Create recovery.signal and start PostgreSQL to begin replay
touch /var/lib/postgresql/data/recovery.signal
sudo systemctl start postgresql
18. Appendices (Extended)
Appendix I: Complete Module Permission Matrix
| Module | View Permission | Edit Permission | Admin Only |
|---|---|---|---|
| dashboard | See overview stats, charts | N/A | No |
| dust_levels | View data, charts, export | Import data, configure | No |
| flow_meter | View readings, tank levels | Add refills, corrections | No |
| dust_ranger | View device data | Configure devices | No |
| reports | View, download reports | Generate reports | No |
| weekly_reports | View reports | Create, edit, submit | No |
| email_schedules | View schedules, logs | Create, edit schedules | No |
| heatmap | View heatmap | Configure overlays | No |
| assets | View assets | Edit asset info | No |
| climate | View weather data | N/A | No |
| settings | View settings | Modify settings | No |
| user_management | View users | Edit permissions | Yes |
| upload | N/A | Upload CSV files | No |
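UI gating against this matrix can reuse the has_module_permission RPC from section 16.2 rather than duplicating rules client-side; a hedged sketch (the helper name is illustrative, and the argument names follow Table 16.2 rather than a confirmed signature):
// Returns true when the signed-in user may edit the flow_meter module
async function canEditFlowMeter(): Promise<boolean> {
  const { data, error } = await supabase.rpc('has_module_permission', {
    module: 'flow_meter',
    permission_type: 'edit',
  });
  return !error && data === true;
}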
Appendix J: Error Code Reference
| Code | HTTP Status | Description | User Message |
|---|---|---|---|
| AUTH001 | 401 | Invalid credentials | Invalid email or password |
| AUTH002 | 401 | Session expired | Your session has expired. Please sign in again. |
| AUTH003 | 403 | Account inactive | Your account is deactivated. Contact administrator. |
| PERM001 | 403 | Module access denied | You don't have access to this feature. |
| PERM002 | 403 | Site access denied | You don't have access to this site's data. |
| PERM003 | 403 | Edit permission denied | You don't have permission to modify this data. |
| DATA001 | 400 | Validation failed | Please check your input and try again. |
| DATA002 | 404 | Resource not found | The requested item was not found. |
| DATA003 | 409 | Duplicate record | This record already exists. |
| SYS001 | 500 | Database error | An error occurred. Please try again later. |
| SYS002 | 503 | Service unavailable | The service is temporarily unavailable. |
| SYS003 | 504 | Request timeout | The request took too long. Please try again. |
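Frontend error handling can map these codes directly to the user messages above; a minimal lookup helper (the names are illustrative, and the table remains the source of truth):
const ERROR_MESSAGES: Record<string, string> = {
  AUTH001: 'Invalid email or password',
  AUTH002: 'Your session has expired. Please sign in again.',
  PERM001: "You don't have access to this feature.",
  SYS001: 'An error occurred. Please try again later.',
};

function userMessageFor(code: string): string {
  return ERROR_MESSAGES[code] ?? 'An unexpected error occurred.';
}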
Appendix K: Keyboard Shortcuts
| Shortcut | Action | Context |
|---|---|---|
| Ctrl/Cmd + K | Open command palette | Global |
| Ctrl/Cmd + / | Toggle sidebar | Global |
| Ctrl/Cmd + S | Save current form | Forms |
| Ctrl/Cmd + Enter | Submit form | Forms |
| Escape | Close modal/dialog | Modals |
| Ctrl/Cmd + E | Export data | Data tables |
| Ctrl/Cmd + F | Focus search | Lists |
| Arrow Up/Down | Navigate list items | Lists |
| Enter | Select item | Lists |
Appendix L: Browser Support Matrix
| Browser | Minimum Version | Status | Notes |
|---|---|---|---|
| Chrome | 90+ | Full Support | Recommended |
| Firefox | 88+ | Full Support | |
| Safari | 14+ | Full Support | |
| Edge | 90+ | Full Support | Chromium-based |
| Opera | 76+ | Full Support | Chromium-based |
| IE | Any | Not Supported | |
Appendix M: Data Retention Policy
| Data Type | Active Period | Archive Period | Deletion |
|---|---|---|---|
| Dust Level Data | 12 months | 7 years | After 7 years |
| Flow Meter Data | 12 months | 2 years | After 2 years |
| Weather Data | 12 months | 1 year | After 1 year |
| PDF Reports | Permanent | N/A | User-initiated |
| Weekly Reports | Permanent | N/A | User-initiated |
| Activity Logs | 12 months | 3 years | After 3 years |
| Error Logs | 3 months | 1 year | After 1 year |
| CSV Uploads | 6 months | 2 years | After 2 years |
| User Accounts | Active | 30 days after deactivation | After grace period |
End of System Design Document v2
Document Statistics:
- Total Sections: 18 main sections + 13 appendices
- Database Tables: 25+ tables with complete schemas
- API Endpoints: 50+ REST endpoints
- Edge Functions: 8 serverless functions
- RLS Policies: 40+ security policies
Version History:
| Version | Date | Author | Changes |
|---|---|---|---|
| 1.0 | 2025-12-02 | Dustac Team | Initial release |
| 2.0 | 2026-02-05 | Dustac Team | Major update with RBAC, new features, extended specifications |