- Change HTTP method from POST to PUT for Matrix API v3
- Matrix API requires PUT when transaction ID is included in URL path
- Move transaction ID construction before URL building for clarity
- Fixes "405 Method Not Allowed" error when sending notifications
The Matrix API v3 endpoint structure:
PUT /_matrix/client/v3/rooms/{roomId}/send/{eventType}/{txnId}
Previous error:
POST request was being rejected with 405 Method Not Allowed
Now working:
PUT request successfully sends messages to Matrix rooms
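The corrected request shape can be sketched as follows. This is a minimal illustration, not the project's actual code; the homeserver URL, room ID, and transaction ID are placeholders:

```python
from urllib.parse import quote

def build_send_event_request(homeserver, room_id, event_type, txn_id):
    """Build the (method, url) pair for a Matrix client v3 send-event call.

    Because the transaction ID is part of the URL path, the Matrix spec
    requires PUT (idempotent per txn_id), not POST.
    """
    url = (
        f"{homeserver}/_matrix/client/v3/rooms/{quote(room_id)}"
        f"/send/{event_type}/{quote(txn_id)}"
    )
    return "PUT", url
```

Sending the same `txn_id` twice with PUT is treated as a retry of the same event, which is why POST (non-idempotent) is rejected with 405 on this path.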
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add 8 water level zones plus NewEdge threshold for P.1 station
- Zone 1: 3.7m (Info), Zone 2: 3.9m (Info)
- Zone 3-5: 4.0-4.2m (Warning levels)
- Zone 6-7: 4.3-4.6m (Critical levels)
- Zone 8/NewEdge: 4.8m (Emergency level)
- Implement special zone-based checking logic for P.1
- Maintain backward compatibility with standard warning/critical/emergency thresholds
- Keep standard threshold checking for other stations
Zone progression for P.1:
- 3.7m: Zone 1 alert (Info)
- 3.9m: Zone 2 alert (Info)
- 4.0m: Zone 3 alert (Warning)
- 4.1m: Zone 4 alert (Warning)
- 4.2m: Zone 5 alert (Warning)
- 4.3m: Zone 6 alert (Critical)
- 4.6m: Zone 7 alert (Critical)
- 4.8m: Zone 8/NewEdge alert (Emergency)
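The zone progression above can be sketched as a threshold table plus a lookup; the names (`P1_ZONES`, `check_p1_zone`) are illustrative, not the monitor's actual API:

```python
# Hypothetical encoding of the P.1 zone table described above.
P1_ZONES = [  # (threshold_m, zone, severity), ascending
    (3.7, 1, "Info"), (3.9, 2, "Info"),
    (4.0, 3, "Warning"), (4.1, 4, "Warning"), (4.2, 5, "Warning"),
    (4.3, 6, "Critical"), (4.6, 7, "Critical"),
    (4.8, 8, "Emergency"),  # Zone 8 / NewEdge
]

def check_p1_zone(level_m):
    """Return (zone, severity) for the highest threshold reached, or None."""
    hit = None
    for threshold, zone, severity in P1_ZONES:
        if level_m >= threshold:
            hit = (zone, severity)
    return hit
```

A reading of 4.25 m crosses the 4.2 m threshold but not 4.3 m, so it reports Zone 5 (Warning); anything below 3.7 m triggers no zone, falling through to the standard threshold checks used by other stations.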
- Remove tolerance windows and grace periods from data freshness checks
- Require data from current hour only - no exceptions or fallbacks
- If hourly check runs at 21:xx but only has data up to 20:xx, immediately switch to retry mode
- Simplify logic: latest_hour >= current_hour for fresh data
- Remove complex age calculations and tolerance conditions
This ensures the scheduler immediately detects when new hourly data
is not yet available and switches to minute-based retries without delay.
Behavior:
- 21:02 with data up to 21:xx → Fresh (continue hourly)
- 21:02 with data up to 20:xx → Stale (immediate retry mode)
- No grace periods, no tolerance windows, strict hour-based detection
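The strict rule above reduces to a single hour-truncation comparison; a minimal sketch, with illustrative names:

```python
from datetime import datetime

def is_fresh(latest: datetime, now: datetime) -> bool:
    """Strict freshness: data is fresh only if its latest timestamp
    falls in the current hour (latest_hour >= current_hour)."""
    latest_hour = latest.replace(minute=0, second=0, microsecond=0)
    current_hour = now.replace(minute=0, second=0, microsecond=0)
    return latest_hour >= current_hour
```

At 21:02 this returns True for data stamped 21:00 and False for anything stamped 20:59 or earlier, matching the behavior table above.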
- Modify _check_data_freshness() to verify current hour data exists
- If running at 20:00 but data only reaches 19:xx, consider it stale
- Add tolerance: accept previous hour data if within first 10 minutes
- Combine current hour check with age limit (≤2 hours) for robustness
- Add detailed logging for current vs latest hour comparison
This solves the core issue where scheduler stayed in hourly mode despite
missing the expected current hour data from the API.
Example scenarios:
- 20:57 with data up to 20:xx: Fresh (has current hour)
- 20:57 with data up to 19:xx: Stale (missing current hour) → Retry mode
- 20:05 with data up to 19:xx: Fresh (tolerance for early hour)
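The tolerance logic in this commit can be sketched as below (names are illustrative); it combines the current-hour check, the 10-minute grace window, and the 2-hour age cap:

```python
from datetime import datetime, timedelta

def is_fresh_with_tolerance(latest: datetime, now: datetime) -> bool:
    """Fresh if the current hour is present, or (within the first 10
    minutes of the hour) the previous hour is present; always stale
    once data is more than 2 hours old."""
    if now - latest > timedelta(hours=2):
        return False  # age cap for robustness
    current_hour = now.replace(minute=0, second=0, microsecond=0)
    if latest >= current_hour:
        return True  # current hour data exists
    # Grace period: accept previous-hour data early in the hour.
    return now.minute < 10 and latest >= current_hour - timedelta(hours=1)
```

This reproduces the three example scenarios: 20:57 with 20:xx data is fresh, 20:57 with only 19:xx data is stale, and 20:05 with 19:xx data is still fresh thanks to the early-hour tolerance.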
- Add _check_data_freshness() method to detect stale vs fresh data
- Consider data fresh only if latest timestamp is within 2 hours
- Modify run_scraping_cycle() to check data freshness, not just existence
- Return False for stale data to trigger adaptive scheduler retry mode
- Add detailed logging for data age and freshness decisions
This solves the issue where scheduler stayed in hourly mode despite getting
stale data from the API. Now it correctly detects when API returns old data
and switches to retry mode until fresh data becomes available.
Example behavior:
- Fresh data (0.6 hours old): Returns True, stays in hourly mode
- Stale data (68.6 hours old): Returns False, switches to retry mode
- Make discharge field optional in data validator
- Remove discharge from required fields list
- Add explicit null check for discharge before float conversion
- Prevent "float() argument must be a string or a real number, not 'NoneType'" errors
- Allow records with valid water levels but malformed/null discharge data
This completes the malformed data handling fix by updating the validator
to match the parser's new behavior of allowing null discharge values.
Before: Validator rejected records with null discharge
After: Validator accepts records with null discharge, validates only if present
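The before/after contrast can be sketched as a validator that requires `water_level` but checks `discharge` only when it is present; field names and the function shape are assumptions, not the project's actual validator:

```python
def validate_record(record: dict) -> bool:
    """Hypothetical validator sketch: water_level is required,
    discharge is optional and only validated when non-null."""
    try:
        float(record["water_level"])
    except (KeyError, TypeError, ValueError):
        return False
    discharge = record.get("discharge")
    if discharge is not None:  # null discharge is now allowed
        try:
            float(discharge)
        except (TypeError, ValueError):
            return False
    return True
```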
- Change data parsing logic to make discharge data optional
- Water level data is now saved even when discharge values are malformed (e.g., "***")
- Handle malformed discharge values gracefully with null instead of skipping entire record
- Add specific handling for "***" discharge values from API
- Improve data completeness by not discarding valid water level measurements
Before: Entire station record was skipped if discharge was malformed
After: Water level data is preserved, discharge set to null for malformed values
Example fix:
- wlvalues8: 1.6 (valid) + qvalues8: "***" (malformed)
- Before: No record saved
- After: Record saved with water_level=1.6, discharge=null
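The parser-side behavior can be sketched like this; the raw field handling is illustrative (the real parser reads `wlvalues*`/`qvalues*` keys from the API payload):

```python
def parse_station_values(wl_raw, q_raw):
    """Parser sketch: keep the record whenever the water level parses;
    malformed discharge (e.g. "***") becomes None instead of dropping
    the whole record."""
    try:
        water_level = float(wl_raw)
    except (TypeError, ValueError):
        return None  # no usable water level -> skip record
    try:
        discharge = float(q_raw)
    except (TypeError, ValueError):
        discharge = None  # "***" or missing discharge
    return {"water_level": water_level, "discharge": discharge}
```

With `wl_raw="1.6"` and `q_raw="***"` this now yields a record with `discharge=None`, where the old logic returned nothing.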
- Add intelligent date selection based on current time
- Before 01:00: fetch yesterday's data only (API not updated yet)
- After 01:00: try today's data first, fallback to yesterday if needed
- Improve data availability by adapting to API update patterns
- Add comprehensive logging for date selection decisions
This ensures optimal data fetching regardless of the time of day:
- Early morning (00:00-00:59): fetches yesterday (reliable)
- Rest of day (01:00-23:59): tries today first, falls back to yesterday
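The date-selection rule can be sketched as a helper that returns candidate dates in fetch order; the function name is illustrative:

```python
from datetime import date, datetime, timedelta

def dates_to_try(now: datetime) -> list[date]:
    """Before 01:00 fetch only yesterday (API not updated yet);
    from 01:00 on, try today first and fall back to yesterday."""
    today = now.date()
    yesterday = today - timedelta(days=1)
    if now.hour < 1:
        return [yesterday]
    return [today, yesterday]
```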
- Replace fixed hourly schedule with adaptive scheduling system
- Switch to 1-minute retries when no data is available from API
- Return to hourly schedule once data is successfully fetched
- Fix data fetching to use yesterday's date (API has 1-day delay)
- Add comprehensive logging for scheduler mode changes
- Improve resilience against API data availability issues
The scheduler now intelligently adapts to data availability:
- Normal mode: hourly runs at top of each hour
- Retry mode: minute-based retries until data is available
- Automatic mode switching based on fetch success/failure
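The mode-switching behavior can be sketched as a small state machine (class and method names are illustrative, not the project's actual API):

```python
from datetime import datetime, timedelta

class AdaptiveScheduler:
    """Sketch of the adaptive scheduling described above."""
    def __init__(self):
        self.mode = "hourly"

    def record_result(self, fetch_succeeded: bool):
        # Success returns to hourly mode; failure drops to retry mode.
        self.mode = "hourly" if fetch_succeeded else "retry"

    def next_run(self, now: datetime) -> datetime:
        if self.mode == "retry":
            return now + timedelta(minutes=1)  # minute-based retries
        # Hourly mode: top of the next hour.
        top = now.replace(minute=0, second=0, microsecond=0)
        return top + timedelta(hours=1)
```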
- Add import_historical_data() method to EnhancedWaterMonitorScraper
- Support date range imports with Buddhist calendar API format
- Add CLI arguments --import-historical and --force-overwrite
- Include API rate limiting and skip existing data option
- Enable importing years of historical water level data
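For the Buddhist-calendar API format, the year conversion is BE = CE + 543; a sketch of the date formatting and range iteration, assuming a dd/mm/yyyy layout (the actual API format should be checked):

```python
from datetime import date, timedelta

def buddhist_api_date(d: date) -> str:
    """Format a Gregorian date for a Buddhist-calendar API.
    The dd/mm/yyyy layout is an assumption; BE year = CE year + 543."""
    return f"{d.day:02d}/{d.month:02d}/{d.year + 543}"

def date_range(start: date, end: date):
    """Yield each date from start to end inclusive."""
    d = start
    while d <= end:
        yield d
        d += timedelta(days=1)
```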
- Run initial data collection immediately on startup
- Calculate wait time to next full hour (e.g., 22:12 start waits until 23:00)
- Schedule subsequent runs at top of each hour (:00 minutes)
- Display next scheduled run time to user for better visibility
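The wait-time calculation can be sketched as follows (function name illustrative):

```python
from datetime import datetime, timedelta

def seconds_until_next_hour(now: datetime) -> float:
    """Startup wait: e.g. a 22:12 start waits 48 minutes until 23:00."""
    next_hour = now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
    return (next_hour - now).total_seconds()
```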
- Implement custom Python alerting system (src/alerting.py) with water level monitoring, data freshness checks, and Matrix notifications
- Add complete Grafana Matrix alerting setup guide (docs/GRAFANA_MATRIX_SETUP.md) with webhook configuration, alert rules, and notification policies
- Create Matrix quick start guide (docs/MATRIX_QUICK_START.md) for rapid deployment
- Integrate alerting commands into main application (--alert-check, --alert-test)
- Add Matrix configuration to environment variables (.env.example)
- Update Makefile with alerting targets (alert-check, alert-test)
- Enhance status command to show Matrix notification status
- Support station-specific water level thresholds and escalation rules
- Provide dual alerting approach: native Grafana alerts and custom Python system
- **Migration to uv package manager**: Replace pip/requirements with modern pyproject.toml
- Add pyproject.toml with complete dependency management
- Update all scripts and Makefile to use uv commands
- Maintain backward compatibility with existing workflows
- **PostgreSQL integration and migration tools**:
- Enhanced config.py with automatic password URL encoding
- Complete PostgreSQL setup scripts and documentation
- High-performance SQLite to PostgreSQL migration tool (91x speed improvement)
- Support for both connection strings and individual components
- **Executable distribution system**:
- PyInstaller integration for standalone .exe creation
- Automated build scripts with batch file generation
- Complete packaging system for end-user distribution
- **Enhanced data management**:
- Fix --fill-gaps command with proper method implementation
- Add gap detection and historical data backfill capabilities
- Implement data update functionality for existing records
- Add comprehensive database adapter methods
- **Developer experience improvements**:
- Password encoding tools for special characters
- Interactive setup wizards for PostgreSQL configuration
- Comprehensive documentation and migration guides
- Automated testing and validation tools
Checkout Action Migration:
- Replace all 'actions/checkout@v4' with 'https://gitea.com/actions/checkout'
- Fixes 'Bad credentials' errors when workflows try to access GitHub API
- Native Gitea checkout action eliminates authentication issues
- Applied across all 4 workflow files (CI, Security, Release, Docs)
Version Increment: 3.1.1 → 3.1.2
- Core application version updates
- Web API version synchronization
- Documentation version alignment
- Badge and release example updates
Problem Solved:
- Workflows no longer attempt GitHub API calls
- Gitea-native checkout action handles repository access properly
- Eliminates 'Retrieving the default branch name' failures
- Cleaner workflow execution without authentication errors
Files Updated:
- 4 workflow files: checkout action replacement
- 13 files: version number updates
- Consistent v3.1.2 across all components
Benefits:
- Workflows will now run successfully in Gitea
- No more GitHub API authentication failures
- Native Gitea action compatibility
- Ready for successful CI/CD pipeline execution
Features:
- Real-time water level monitoring for Ping River Basin (16 stations)
- Coverage from Chiang Dao to Nakhon Sawan in Northern Thailand
- FastAPI web interface with interactive dashboard and station management
- Multi-database support (SQLite, MySQL, PostgreSQL, InfluxDB, VictoriaMetrics)
- Comprehensive monitoring with health checks and metrics collection
- Docker deployment with Grafana integration
- Production-ready architecture with enterprise-grade observability
CI/CD & Automation:
- Complete Gitea Actions workflows for CI/CD, security, and releases
- Multi-Python version testing (3.9-3.12)
- Multi-architecture Docker builds (amd64, arm64)
- Daily security scanning and dependency monitoring
- Automated documentation generation
- Performance testing and validation
Production Ready:
- Type safety with Pydantic models and comprehensive type hints
- Data validation layer with range checking and error handling
- Rate limiting and request tracking for API protection
- Enhanced logging with rotation, colors, and performance metrics
- Station management API for dynamic CRUD operations
- Comprehensive documentation and deployment guides
Technical Stack:
- Python 3.9+ with FastAPI and Pydantic
- Multi-database architecture with adapter pattern
- Docker containerization with multi-stage builds
- Grafana dashboards for visualization
- Gitea Actions for CI/CD automation
- Enterprise monitoring and alerting
Ready for deployment to B4L infrastructure!