Tank Levels Query Performance Optimization
Date: 2026-02-10
Type: Performance Optimization
Feature: Flow Meter → Tank Levels Display
File Changed: src/features/flow-meter/services/tankService.ts
Problem
The Tank Levels section on the Flow Meter page (TankLevelDisplay component) loaded very slowly. On a typical deployment with 10+ flow meter assets, the loading spinner would persist for 3–6 seconds before tank data appeared.
Root Cause Analysis
The calculateAllTankLevels() function in tankService.ts used an N+1 query anti-pattern: after 2 initial bulk queries, it entered a for loop that issued 3 sequential Supabase HTTP requests per asset.
Old Query Pattern (Before)
Phase 1 (parallel):
Query 1: SELECT * FROM cfg_tank_capacities ← all capacities
Query 2: SELECT * FROM ops_tank_corrections ← all corrections
Phase 2 (sequential loop, per asset):
for each asset (N iterations):
Query 3a: SELECT litres_refilled FROM ops_dustloc_refills
WHERE asset_id = ? AND refill_datetime > ?
Query 3b: SELECT litres_dispensed FROM data_flow_meters
WHERE asset_id = ? AND is_ignored = false AND datetime_dispensed > ?
Query 3c: SELECT datetime_dispensed FROM data_flow_meters
WHERE asset_id = ? AND is_ignored = false
ORDER BY datetime_dispensed DESC LIMIT 1

Total queries: 2 + 3N (sequential)
| Assets (N) | Total Queries | Est. Time (100ms/req) | Est. Time (200ms/req) |
|---|---|---|---|
| 5 | 17 | 1.7s | 3.4s |
| 10 | 32 | 3.2s | 6.4s |
| 20 | 62 | 6.2s | 12.4s |
The self-hosted Supabase instance adds network latency per request, making the sequential pattern especially costly.
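For reference, here is a minimal sketch of what the old per-asset loop looked like. Table and column names come from the query pattern above; the function shape, the Asset type, and the placeholder result object are illustrative assumptions, not the original code.

```typescript
import type { SupabaseClient } from '@supabase/supabase-js';

interface Asset {
  assetId: string;
  correctionDatetime: string; // timestamp of the asset's most recent tank correction
}

// Illustrative sketch of the old N+1 pattern: three awaited queries per asset.
async function oldPerAssetQueries(supabase: SupabaseClient, assets: Asset[]) {
  const results: Array<{
    assetId: string;
    refillRows: number;
    usageRows: number;
    lastDispensed: string | null;
  }> = [];

  for (const asset of assets) {
    // Query 3a: refills since the asset's correction (one round-trip per asset)
    const { data: refills } = await supabase
      .from('ops_dustloc_refills')
      .select('litres_refilled')
      .eq('asset_id', asset.assetId)
      .gt('refill_datetime', asset.correctionDatetime);

    // Query 3b: usage since the correction (a second round-trip)
    const { data: usage } = await supabase
      .from('data_flow_meters')
      .select('litres_dispensed')
      .eq('asset_id', asset.assetId)
      .eq('is_ignored', false)
      .gt('datetime_dispensed', asset.correctionDatetime);

    // Query 3c: latest dispensing timestamp (a third round-trip)
    const { data: last } = await supabase
      .from('data_flow_meters')
      .select('datetime_dispensed')
      .eq('asset_id', asset.assetId)
      .eq('is_ignored', false)
      .order('datetime_dispensed', { ascending: false })
      .limit(1);

    // Placeholder summary; the real function derives remaining litres, percentage and status here.
    results.push({
      assetId: asset.assetId,
      refillRows: refills?.length ?? 0,
      usageRows: usage?.length ?? 0,
      lastDispensed: last?.[0]?.datetime_dispensed ?? null,
    });
  }
  return results;
}
```

Every iteration blocks on three network round-trips before the next asset starts, which is where the 3N sequential cost comes from.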
Solution
Refactored calculateAllTankLevels() to use bulk queries + client-side computation, following the same pattern already proven in TankAlertManagementService.getTankLevelsForAlerts().
New Query Pattern (After)
Phase 1 (parallel):
Query 1: SELECT * FROM cfg_tank_capacities ← all capacities
Query 2: SELECT * FROM ops_tank_corrections ← all corrections
Phase 2 (parallel, all 3 at once):
Query 3: SELECT asset_id, litres_refilled, refill_datetime
FROM ops_dustloc_refills
WHERE asset_id IN (...) AND refill_datetime > earliest_correction
Query 4: SELECT asset_id, litres_dispensed, datetime_dispensed
FROM data_flow_meters
WHERE asset_id IN (...) AND is_ignored = false
AND datetime_dispensed > earliest_correction
Query 5: SELECT asset_id, datetime_dispensed
FROM data_flow_meters
WHERE asset_id IN (...) AND is_ignored = false
ORDER BY datetime_dispensed DESC
Phase 3 (client-side, zero queries):
for each asset:
- Filter refill rows by asset_id + correction_datetime
- Filter usage rows by asset_id + correction_datetime
- Pick first dispensing row per asset (already sorted desc)
- Calculate remaining = correction + refills - usage
- Derive percentage and status

Total queries: 5 (constant), regardless of asset count.
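A minimal sketch of the new Phase 2, with all three bulk queries issued in parallel. Table and column names match the pattern above; the helper name, the client parameter, and the earliestCorrection argument are assumptions for illustration, not the exact implementation.

```typescript
import type { SupabaseClient } from '@supabase/supabase-js';

// Fetch all per-asset data in three parallel bulk queries instead of 3N sequential ones.
async function fetchBulkTankData(
  supabase: SupabaseClient,
  assetIds: string[],
  earliestCorrection: string // ISO timestamp of the oldest correction across all assets
) {
  const [refillsRes, usageRes, dispensingRes] = await Promise.all([
    // Query 3: all refills after the earliest correction, for all assets at once
    supabase
      .from('ops_dustloc_refills')
      .select('asset_id, litres_refilled, refill_datetime')
      .in('asset_id', assetIds)
      .gt('refill_datetime', earliestCorrection),
    // Query 4: all non-ignored usage rows after the earliest correction
    supabase
      .from('data_flow_meters')
      .select('asset_id, litres_dispensed, datetime_dispensed')
      .in('asset_id', assetIds)
      .eq('is_ignored', false)
      .gt('datetime_dispensed', earliestCorrection),
    // Query 5: all non-ignored dispensing rows, newest first (for "last dispensed" per asset)
    supabase
      .from('data_flow_meters')
      .select('asset_id, datetime_dispensed')
      .in('asset_id', assetIds)
      .eq('is_ignored', false)
      .order('datetime_dispensed', { ascending: false }),
  ]);

  return {
    refills: refillsRes.data ?? [],
    usage: usageRes.data ?? [],
    dispensing: dispensingRes.data ?? [],
  };
}
```

Because the three builders are awaited together, Phase 2 costs roughly one network round-trip instead of 3N.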
Performance Comparison
| Assets (N) | Old Queries | New Queries | Speedup Factor |
|---|---|---|---|
| 5 | 17 seq | 5 parallel | ~6× |
| 10 | 32 seq | 5 parallel | ~10× |
| 20 | 62 seq | 5 parallel | ~15× |
Expected load time: ~200–400ms (down from 3–6s for 10 assets).
Key Design Decisions
1. Earliest Correction as Lower Bound
Bulk queries for refills and usage use the earliest correction datetime across all assets as a lower bound filter (gt("refill_datetime", earliestCorrection)). This limits the data fetched from the database while still capturing all records needed for per-asset calculations.
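A short sketch of how that lower bound can be derived from the corrections already fetched in Phase 1. The row shape and helper name are assumptions for illustration; the comparison assumes consistently formatted ISO-8601 timestamps.

```typescript
// Hypothetical row shape for ops_tank_corrections results.
interface CorrectionRow {
  asset_id: string;
  correction_datetime: string;
}

// Find the oldest correction datetime across all assets to use as the shared lower bound.
function earliestCorrectionDatetime(corrections: CorrectionRow[]): string | null {
  if (corrections.length === 0) return null;
  // Consistently formatted ISO-8601 timestamps compare correctly as strings.
  return corrections.reduce(
    (earliest, row) =>
      row.correction_datetime < earliest ? row.correction_datetime : earliest,
    corrections[0].correction_datetime
  );
}
```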
2. Client-Side Per-Asset Filtering
After fetching bulk data, the function filters rows by asset_id and each asset's specific correction_datetime in memory. This trades a small amount of client-side CPU for a massive reduction in network round-trips.
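A sketch of that in-memory reduction for a single asset, assuming the row shapes below (column names match the bulk queries above; the helper itself is illustrative).

```typescript
interface RefillRow {
  asset_id: string;
  litres_refilled: number;
  refill_datetime: string;
}

interface UsageRow {
  asset_id: string;
  litres_dispensed: number;
  datetime_dispensed: string;
}

// Sum refills and usage for one asset from the bulk result sets.
function sumForAsset(
  assetId: string,
  correctionDatetime: string,
  refills: RefillRow[],
  usage: UsageRow[]
) {
  // Each asset only counts rows after its own correction, even though the bulk
  // queries used the earliest correction across all assets as the filter.
  const litresRefilled = refills
    .filter(r => r.asset_id === assetId && r.refill_datetime > correctionDatetime)
    .reduce((sum, r) => sum + r.litres_refilled, 0);

  const litresUsed = usage
    .filter(u => u.asset_id === assetId && u.datetime_dispensed > correctionDatetime)
    .reduce((sum, u) => sum + u.litres_dispensed, 0);

  return { litresRefilled, litresUsed };
}
```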
3. Last Dispensing via Sort + First-Occurrence
Instead of N individual LIMIT 1 queries, we fetch all non-ignored dispensing records sorted by datetime_dispensed DESC and pick the first occurrence per asset_id client-side. This is a single query regardless of asset count.
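A sketch of the first-occurrence pick: because the bulk dispensing rows arrive sorted by datetime_dispensed DESC, the first row seen for each asset_id is that asset's latest dispensing event. The row shape and function name are assumptions for illustration.

```typescript
interface DispensingRow {
  asset_id: string;
  datetime_dispensed: string;
}

// Map each asset_id to its latest dispensing timestamp from a DESC-sorted list.
function latestDispensingPerAsset(rowsSortedDesc: DispensingRow[]): Map<string, string> {
  const latest = new Map<string, string>();
  for (const row of rowsSortedDesc) {
    if (!latest.has(row.asset_id)) {
      latest.set(row.asset_id, row.datetime_dispensed);
    }
  }
  return latest;
}
```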
4. Zero API Changes
The function signature and return type (TankLevel[]) are unchanged. The TankLevelDisplay component and all other callers require no modifications.
Existing Pattern Reference
The TankAlertManagementService already implemented this bulk-query approach in its getTankLevelsForAlerts() method (lines 368–488 of tankAlertManagementService.ts), with the comment:
"Instead of calculateAllTankLevels() which makes 3N sequential queries (one per asset for refills, usage, and last dispensing), this fetches all needed data in 3 bulk queries and computes client-side."
This optimization applies the same proven pattern to the main calculateAllTankLevels() function, which is the one used by the Flow Meter page UI.
Files Changed
| File | Change |
|---|---|
| src/features/flow-meter/services/tankService.ts | Rewrote calculateAllTankLevels() to use bulk queries |
Verification
- ✅ All 232 unit tests pass (including 18 tankService tests)
- ✅ TypeScript strict mode compilation passes
- ✅ ESLint: 0 errors
- ✅ Production build succeeds
Future Considerations
Supabase Row Limit (1000): If data_flow_meters grows very large, the bulk query for last dispensing times may hit the 1000-row API limit. If this becomes an issue, consider adding a date range filter (e.g., last 90 days) or creating a database view/materialized view for latest dispensing per asset.

Database-Side Aggregation: For even better performance at scale, the refill/usage summation could be moved to a PostgreSQL function (SUM ... WHERE datetime > correction_datetime GROUP BY asset_id), reducing data transfer to just the aggregated totals per asset.

Caching: Tank levels change infrequently (only on new dispensing events, refills, or corrections). A short TTL cache (e.g., 60 seconds) could further reduce perceived load time on repeated visits.
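If the caching idea is pursued, a simple wrapper would be enough. The sketch below is hypothetical and does not exist in the codebase; it memoises the result of an async call for a configurable TTL, and the usage line assumes calculateAllTankLevels() can be called with no arguments.

```typescript
// Hypothetical TTL cache wrapper for an async data source (not in the codebase).
function withTtlCache<T>(fn: () => Promise<T>, ttlMs = 60_000): () => Promise<T> {
  let cached: { value: T; expiresAt: number } | null = null;
  return async () => {
    const now = Date.now();
    if (cached && now < cached.expiresAt) return cached.value;
    const value = await fn();
    cached = { value, expiresAt: now + ttlMs };
    return value;
  };
}

// Illustrative usage, assuming a zero-argument call:
// const getTankLevels = withTtlCache(() => calculateAllTankLevels(), 60_000);
```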