
Tank Levels Query Performance Optimization

Date: 2026-02-10
Type: Performance Optimization
Feature: Flow Meter → Tank Levels Display
File Changed: src/features/flow-meter/services/tankService.ts


Problem

The Tank Levels section on the Flow Meter page (TankLevelDisplay component) loaded very slowly. On a typical deployment with 10+ flow meter assets, the loading spinner would persist for 3–6 seconds before tank data appeared.

Root Cause Analysis

The calculateAllTankLevels() function in tankService.ts used an N+1 query anti-pattern: after 2 initial bulk queries, it entered a for loop that issued 3 sequential Supabase HTTP requests per asset.

Old Query Pattern (Before)

Phase 1 (parallel):
  Query 1: SELECT * FROM cfg_tank_capacities        ← all capacities
  Query 2: SELECT * FROM ops_tank_corrections        ← all corrections

Phase 2 (sequential loop, per asset):
  for each asset (N iterations):
    Query 3a: SELECT litres_refilled FROM ops_dustloc_refills
              WHERE asset_id = ? AND refill_datetime > ?
    Query 3b: SELECT litres_dispensed FROM data_flow_meters
              WHERE asset_id = ? AND is_ignored = false AND datetime_dispensed > ?
    Query 3c: SELECT datetime_dispensed FROM data_flow_meters
              WHERE asset_id = ? AND is_ignored = false
              ORDER BY datetime_dispensed DESC LIMIT 1

Total queries: 2 + 3N (sequential)
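
In supabase-js terms, the loop body looked roughly like this. This is a sketch reconstructed from the query pattern above, not the actual source: local names (`assets`, `since`, `supabase`) are illustrative, while table and column names match the queries shown.

```typescript
// Sketch of the old per-asset loop. Assumes `supabase` is the app's
// configured client and `assets` carries each asset's id and last
// correction timestamp.
for (const asset of assets) {
  const since = asset.correction_datetime;

  // Query 3a: refills since the asset's last correction
  const { data: refills } = await supabase
    .from('ops_dustloc_refills')
    .select('litres_refilled')
    .eq('asset_id', asset.asset_id)
    .gt('refill_datetime', since);

  // Query 3b: usage since the asset's last correction
  const { data: usage } = await supabase
    .from('data_flow_meters')
    .select('litres_dispensed')
    .eq('asset_id', asset.asset_id)
    .eq('is_ignored', false)
    .gt('datetime_dispensed', since);

  // Query 3c: most recent dispensing event for this asset
  const { data: last } = await supabase
    .from('data_flow_meters')
    .select('datetime_dispensed')
    .eq('asset_id', asset.asset_id)
    .eq('is_ignored', false)
    .order('datetime_dispensed', { ascending: false })
    .limit(1);

  // ...tank level for this asset computed from the three results
}
```

Each iteration awaits three round-trips before the next asset starts, which is where the 2 + 3N sequential cost comes from.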

| Assets (N) | Total Queries | Est. Time (100ms/req) | Est. Time (200ms/req) |
| --- | --- | --- | --- |
| 5 | 17 | 1.7s | 3.4s |
| 10 | 32 | 3.2s | 6.4s |
| 20 | 62 | 6.2s | 12.4s |

The self-hosted Supabase instance adds network latency per request, making the sequential pattern especially costly.

Solution

Refactored calculateAllTankLevels() to use bulk queries + client-side computation, following the same pattern already proven in TankAlertManagementService.getTankLevelsForAlerts().

New Query Pattern (After)

Phase 1 (parallel):
  Query 1: SELECT * FROM cfg_tank_capacities        ← all capacities
  Query 2: SELECT * FROM ops_tank_corrections        ← all corrections

Phase 2 (parallel, all 3 at once):
  Query 3: SELECT asset_id, litres_refilled, refill_datetime
           FROM ops_dustloc_refills
           WHERE asset_id IN (...) AND refill_datetime > earliest_correction

  Query 4: SELECT asset_id, litres_dispensed, datetime_dispensed
           FROM data_flow_meters
           WHERE asset_id IN (...) AND is_ignored = false
             AND datetime_dispensed > earliest_correction

  Query 5: SELECT asset_id, datetime_dispensed
           FROM data_flow_meters
           WHERE asset_id IN (...) AND is_ignored = false
           ORDER BY datetime_dispensed DESC

Phase 3 (client-side, zero queries):
  for each asset:
    - Filter refill rows by asset_id + correction_datetime
    - Filter usage rows by asset_id + correction_datetime
    - Pick first dispensing row per asset (already sorted desc)
    - Calculate remaining = correction + refills - usage
    - Derive percentage and status

Total queries: 5 (constant) — regardless of asset count.
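
Concretely, Phase 2 reduces to a single Promise.all over three supabase-js filter builders. The sketch below is illustrative rather than a copy of the diff: `assetIds` and `earliestCorrection` are assumed to be derived from the Phase 1 results (see Key Design Decision 1), and `supabase` is the app's existing client.

```typescript
// Sketch of the new Phase 2: all three bulk queries in flight at once.
const [refillsRes, usageRes, dispensingRes] = await Promise.all([
  // Query 3: all refills for the relevant assets
  supabase
    .from('ops_dustloc_refills')
    .select('asset_id, litres_refilled, refill_datetime')
    .in('asset_id', assetIds)
    .gt('refill_datetime', earliestCorrection),
  // Query 4: all non-ignored usage for the relevant assets
  supabase
    .from('data_flow_meters')
    .select('asset_id, litres_dispensed, datetime_dispensed')
    .in('asset_id', assetIds)
    .eq('is_ignored', false)
    .gt('datetime_dispensed', earliestCorrection),
  // Query 5: all non-ignored dispensing events, newest first, so the
  // first row per asset is its latest dispensing
  supabase
    .from('data_flow_meters')
    .select('asset_id, datetime_dispensed')
    .in('asset_id', assetIds)
    .eq('is_ignored', false)
    .order('datetime_dispensed', { ascending: false }),
]);
```

Because the three builders are awaited together, total Phase 2 latency is bounded by the slowest of the three requests rather than their sum.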

Performance Comparison

| Assets (N) | Old Queries | New Queries | Speedup Factor |
| --- | --- | --- | --- |
| 5 | 17 seq | 5 parallel | ~6× |
| 10 | 32 seq | 5 parallel | ~10× |
| 20 | 62 seq | 5 parallel | ~15× |

Expected load time: ~200–400ms (down from 3–6s for 10 assets).

Key Design Decisions

1. Earliest Correction as Lower Bound

Bulk queries for refills and usage use the earliest correction datetime across all assets as a lower bound filter (gt("refill_datetime", earliestCorrection)). This limits the data fetched from the database while still capturing all records needed for per-asset calculations.
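
A minimal sketch of deriving that lower bound from the Phase 1 corrections result. The row shape is assumed from the queries above, and the comparison relies on uniformly formatted ISO-8601 timestamps:

```typescript
// Assumed row shape for ops_tank_corrections results.
interface CorrectionRow {
  asset_id: string;
  correction_datetime: string; // ISO-8601 timestamp
}

// Uniform ISO-8601 strings sort lexicographically in chronological
// order, so the earliest correction is the minimum string. Assumes at
// least one correction row exists.
const timestamps = corrections.map((c: CorrectionRow) => c.correction_datetime);
const earliestCorrection = timestamps.reduce(
  (min, dt) => (dt < min ? dt : min),
  timestamps[0],
);

// Used as the shared lower bound in both bulk queries, e.g.:
//   .gt('refill_datetime', earliestCorrection)
```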

2. Client-Side Per-Asset Filtering

After fetching bulk data, the function filters rows by asset_id and each asset's specific correction_datetime in memory. This trades a small amount of client-side CPU for a massive reduction in network round-trips.
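
A sketch of that in-memory rollup for one asset, assuming `refillRows` is the data array from the bulk refills query (the usage rows are rolled up the same way with litres_dispensed):

```typescript
// Per-asset filter: the bulk query used the *earliest* correction as a
// lower bound, so rows older than this asset's own correction must be
// excluded here.
const refillsForAsset = refillRows.filter(
  (r) =>
    r.asset_id === asset.asset_id &&
    r.refill_datetime > asset.correction_datetime,
);

// Sum litres refilled since this asset's correction.
const totalRefilled = refillsForAsset.reduce(
  (sum, r) => sum + r.litres_refilled,
  0,
);
```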

3. Last Dispensing via Sort + First-Occurrence

Instead of N individual LIMIT 1 queries, we fetch all non-ignored dispensing records sorted by datetime_dispensed DESC and pick the first occurrence per asset_id client-side. This is a single query regardless of asset count.
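
The first-occurrence pick is a single pass over the sorted rows. In this sketch, `dispensingRows` is the data array from Query 5:

```typescript
// Because dispensingRows is ordered by datetime_dispensed DESC, the
// first row encountered for each asset_id is that asset's most recent
// dispensing event.
const lastDispensedByAsset = new Map<string, string>();
for (const row of dispensingRows) {
  if (!lastDispensedByAsset.has(row.asset_id)) {
    lastDispensedByAsset.set(row.asset_id, row.datetime_dispensed);
  }
}
```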

4. Zero API Changes

The function signature and return type (TankLevel[]) are unchanged. The TankLevelDisplay component and all other callers require no modifications.

Existing Pattern Reference

The TankAlertManagementService already implemented this bulk-query approach in its getTankLevelsForAlerts() method (lines 368–488 of tankAlertManagementService.ts), with the comment:

"Instead of calculateAllTankLevels() which makes 3N sequential queries (one per asset for refills, usage, and last dispensing), this fetches all needed data in 3 bulk queries and computes client-side."

This optimization applies the same proven pattern to the main calculateAllTankLevels() function, which is the one used by the Flow Meter page UI.

Files Changed

| File | Change |
| --- | --- |
| src/features/flow-meter/services/tankService.ts | Rewrote calculateAllTankLevels() to use bulk queries |

Verification

  • ✅ All 232 unit tests pass (including 18 tankService tests)
  • ✅ TypeScript strict mode compilation passes
  • ✅ ESLint: 0 errors
  • ✅ Production build succeeds

Future Considerations

  1. Supabase Row Limit (1000): If data_flow_meters grows very large, the bulk query for last dispensing times may hit the 1000-row API limit. If this becomes an issue, consider adding a date range filter (e.g., last 90 days) or creating a database view/materialized view for latest dispensing per asset.

  2. Database-Side Aggregation: For even better performance at scale, the refill/usage summation could be moved to a PostgreSQL function (SUM ... GROUP BY asset_id WHERE datetime > correction_datetime), reducing data transfer to just the aggregated totals per asset.

  3. Caching: Tank levels change infrequently (only on new dispensing events, refills, or corrections). A short TTL cache (e.g., 60 seconds) could further reduce perceived load time on repeated visits; a sketch follows this list.
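
If caching is pursued, a minimal module-level TTL wrapper could look like the sketch below. The wrapper name getTankLevelsCached is hypothetical, and it is assumed to live in tankService.ts alongside the existing calculateAllTankLevels() and TankLevel exports:

```typescript
// Hypothetical 60-second TTL cache around calculateAllTankLevels().
const TANK_LEVELS_TTL_MS = 60_000;

let cachedLevels: { levels: TankLevel[]; fetchedAt: number } | null = null;

export async function getTankLevelsCached(): Promise<TankLevel[]> {
  // Serve from cache while the entry is still fresh.
  if (cachedLevels && Date.now() - cachedLevels.fetchedAt < TANK_LEVELS_TTL_MS) {
    return cachedLevels.levels;
  }
  // Otherwise recompute via the bulk-query implementation and cache it.
  const levels = await calculateAllTankLevels();
  cachedLevels = { levels, fetchedAt: Date.now() };
  return levels;
}
```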