cleanup
.kiro/specs/checkpoint-persistence-fix/requirements.md (new file, 124 lines)
@@ -0,0 +1,124 @@
# Requirements Document

## Introduction

The Checkpoint Persistence Fix addresses a critical system flaw: model training progress is not saved during training, so all learning progress is lost when the system restarts. Despite a well-implemented `CheckpointManager` and proper checkpoint loading at startup, the system never saves checkpoints during training operations. As a result, models train continuously but never persist their improved weights, making continuous improvement impossible and wasting computational resources.

## Requirements

### Requirement 1: Real-time Checkpoint Saving During Training

**User Story:** As a system operator, I want model improvements to be automatically saved during training, so that training progress is never lost when the system restarts.

#### Acceptance Criteria

1. WHEN the DQN model is trained in `_train_models_on_decision` THEN the system SHALL save a checkpoint if the loss improves.
2. WHEN the CNN model is trained THEN the system SHALL save a checkpoint if the loss improves.
3. WHEN the COB RL model is trained THEN the system SHALL save a checkpoint if the loss improves.
4. WHEN the Extrema trainer is trained THEN the system SHALL save a checkpoint if the loss improves.
5. WHEN any model training completes THEN the system SHALL compare current performance to best performance and save if improved.
6. WHEN checkpoint saving occurs THEN the system SHALL update the `model_states` dictionary with new performance metrics.
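
A minimal sketch of what this hook could look like after each training step. The `save_checkpoint` signature, `trainer.train_step`, and the `model_states` layout shown here are illustrative assumptions, not the project's actual API:

```python
# Illustrative sketch: CheckpointManager.save_checkpoint, trainer.train_step,
# and the model_states layout are assumptions, not the project's actual API.
def train_and_maybe_checkpoint(model_name, model, trainer,
                               checkpoint_manager, model_states):
    loss = trainer.train_step(model)  # hypothetical per-model training step
    best = model_states.get(model_name, {}).get("best_loss", float("inf"))
    if loss < best:  # Criterion 5: save only on improvement
        checkpoint_manager.save_checkpoint(model, model_name, {"loss": loss})
        # Criterion 6: keep model_states in sync with the new best metrics.
        model_states.setdefault(model_name, {})["best_loss"] = loss
    return loss
```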

### Requirement 2: Performance-Based Checkpoint Management

**User Story:** As a developer, I want checkpoints to be saved only when model performance improves, so that storage is used efficiently and only the best models are preserved.

#### Acceptance Criteria

1. WHEN evaluating whether to save a checkpoint THEN the system SHALL compare current loss to the best recorded loss.
2. WHEN loss decreases by a configurable threshold THEN the system SHALL trigger checkpoint saving.
3. WHEN multiple models are trained simultaneously THEN each model SHALL have independent performance tracking.
4. WHEN checkpoint rotation occurs THEN the system SHALL keep only the best performing checkpoints.
5. WHEN performance metrics are updated THEN the system SHALL log the improvement for monitoring.
6. WHEN no improvement is detected THEN the system SHALL skip checkpoint saving to avoid unnecessary I/O.
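
A sketch of the improvement gate these criteria describe; the threshold constant and the per-model `best_losses` dict are assumptions for illustration:

```python
MIN_LOSS_IMPROVEMENT = 0.01  # hypothetical configurable threshold (Criterion 2)

def should_save_checkpoint(model_name, current_loss, best_losses):
    """Return True only when loss improves by at least the threshold."""
    best = best_losses.get(model_name, float("inf"))  # independent per model
    if best - current_loss >= MIN_LOSS_IMPROVEMENT:
        best_losses[model_name] = current_loss
        return True
    return False  # Criterion 6: skip the save and avoid unnecessary I/O
```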

### Requirement 3: Periodic Checkpoint Saving

**User Story:** As a system administrator, I want checkpoints to be saved periodically regardless of performance, so that progress is preserved even during long training sessions without significant improvement.

#### Acceptance Criteria

1. WHEN a configurable number of training iterations have passed THEN the system SHALL save a checkpoint regardless of performance.
2. WHEN periodic saving occurs THEN the system SHALL use a separate checkpoint category to distinguish them from performance-based saves.
3. WHEN the system runs for extended periods THEN periodic checkpoints SHALL ensure no more than X minutes of training progress can be lost.
4. WHEN periodic checkpoints accumulate THEN the system SHALL maintain a rolling window of recent saves.
5. WHEN storage space is limited THEN periodic checkpoints SHALL be cleaned up while preserving performance-based checkpoints.
6. WHEN the system restarts THEN it SHALL load the most recent checkpoint (either performance-based or periodic).
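
One way to express the periodic trigger, sketched under the assumption that both the iteration interval and the wall-clock bound are configurable; the values shown are placeholders:

```python
import time

SAVE_EVERY_N_ITERATIONS = 500   # placeholder value (Criterion 1)
MAX_MINUTES_BETWEEN_SAVES = 30  # placeholder for "X minutes" (Criterion 3)

class PeriodicCheckpointTrigger:
    def __init__(self):
        self.iterations = 0
        self.last_save = time.monotonic()

    def due(self) -> bool:
        """True when either the iteration or the wall-clock bound is hit."""
        self.iterations += 1
        minutes_elapsed = (time.monotonic() - self.last_save) / 60
        return (self.iterations % SAVE_EVERY_N_ITERATIONS == 0
                or minutes_elapsed >= MAX_MINUTES_BETWEEN_SAVES)

    def mark_saved(self):
        self.last_save = time.monotonic()
```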

### Requirement 4: Enhanced Training System Integration

**User Story:** As a developer, I want the `EnhancedRealtimeTrainingSystem` to properly save checkpoints, so that continuous learning progress is preserved across system restarts.

#### Acceptance Criteria

1. WHEN the `EnhancedRealtimeTrainingSystem` trains models THEN it SHALL integrate with the `CheckpointManager`.
2. WHEN training episodes complete THEN the system SHALL evaluate and save improved models.
3. WHEN the training system initializes THEN it SHALL load the best available checkpoints.
4. WHEN training data is collected THEN the system SHALL track performance metrics for checkpoint decisions.
5. WHEN the training system shuts down THEN it SHALL save final checkpoints before termination.
6. WHEN training resumes THEN the system SHALL continue from the last saved checkpoint state.
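
A sketch of where those integration points could sit, assuming `CheckpointManager` methods named `load_best_checkpoint` and `save_checkpoint`; all names here are illustrative, not the real `EnhancedRealtimeTrainingSystem` interface:

```python
# Illustrative integration shape only; method names are assumptions.
class CheckpointedTrainingSystem:
    def __init__(self, checkpoint_manager, models):
        self.cm = checkpoint_manager
        self.models = models
        self.best_losses = {}
        for name, model in models.items():
            self.cm.load_best_checkpoint(model, name)  # Criterion 3

    def on_episode_complete(self, name, loss):
        if loss < self.best_losses.get(name, float("inf")):  # Criterion 2
            self.best_losses[name] = loss
            self.cm.save_checkpoint(self.models[name], name, {"loss": loss})

    def shutdown(self):
        for name, model in self.models.items():  # Criterion 5
            self.cm.save_checkpoint(model, name, {"reason": "shutdown"})
```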

### Requirement 5: Complete Training Data Storage

**User Story:** As a developer, I want complete training episodes to be stored with full input dataframes, so that training can be replayed and analyzed with all original context.

#### Acceptance Criteria

1. WHEN training episodes are saved THEN the system SHALL store the complete input dataframe with all model inputs (price data, indicators, market structure, etc.).
2. WHEN model actions are recorded THEN the system SHALL store the full context that led to the decision, not just the action result.
3. WHEN training cases are saved THEN they SHALL include timestamps, market conditions, and all feature vectors used by the models.
4. WHEN storing training data THEN the system SHALL preserve the exact state that can be used to reproduce the model's decision.
5. WHEN training episodes are replayed THEN the system SHALL be able to reconstruct the exact same inputs that were originally used.
6. WHEN analyzing training performance THEN complete dataframes SHALL be available for debugging and improvement.
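
A sketch of one possible on-disk layout for a complete episode; the directory structure and field names are assumptions, chosen so the original inputs can be reloaded for replay:

```python
import json
from pathlib import Path

import pandas as pd

def save_training_episode(episode_dir, case_id, inputs_df: pd.DataFrame,
                          action, metadata: dict):
    """Persist the full decision context, not just the action result."""
    case_path = Path(episode_dir) / case_id
    case_path.mkdir(parents=True, exist_ok=True)
    # Complete input dataframe: price data, indicators, market structure, etc.
    inputs_df.to_parquet(case_path / "inputs.parquet")
    # Timestamps, market conditions, and feature vectors alongside the inputs.
    (case_path / "case.json").write_text(
        json.dumps({"case_id": case_id, "action": action, **metadata},
                   default=str))
```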

### Requirement 6: Comprehensive Performance Tracking

**User Story:** As a system operator, I want detailed performance metrics to be tracked and persisted, so that I can monitor training progress and model improvement over time.

#### Acceptance Criteria

1. WHEN models are trained THEN the system SHALL track loss values, accuracy metrics, and training timestamps.
2. WHEN performance improves THEN the system SHALL log the improvement amount and save metadata.
3. WHEN checkpoints are saved THEN the system SHALL store performance metrics alongside model weights.
4. WHEN the system starts THEN it SHALL display the performance history of loaded checkpoints.
5. WHEN multiple training sessions occur THEN the system SHALL maintain a continuous performance history.
6. WHEN performance degrades THEN the system SHALL provide alerts and, if configured, revert to better checkpoints.
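
A sketch of the metadata that could be written next to the weights; the exact fields are assumptions:

```python
from datetime import datetime, timezone

def build_checkpoint_metadata(loss, accuracy, previous_best_loss):
    """Metrics stored alongside model weights (Criteria 1-3)."""
    return {
        "loss": loss,
        "accuracy": accuracy,
        "saved_at": datetime.now(timezone.utc).isoformat(),
        # Criterion 2: record the size of the improvement for monitoring.
        "improvement": previous_best_loss - loss,
    }
```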

### Requirement 7: Robust Error Handling and Recovery

**User Story:** As a system administrator, I want checkpoint operations to be resilient to failures, so that training can continue even if individual checkpoint saves fail.

#### Acceptance Criteria

1. WHEN checkpoint saving fails THEN the system SHALL log the error and continue training without crashing.
2. WHEN disk space is insufficient THEN the system SHALL clean up old checkpoints and retry saving.
3. WHEN checkpoint files are corrupted THEN the system SHALL fall back to previous valid checkpoints.
4. WHEN concurrent access conflicts occur THEN the system SHALL use proper locking mechanisms.
5. WHEN the system recovers from failures THEN it SHALL validate checkpoint integrity before loading.
6. WHEN critical checkpoint operations fail repeatedly THEN the system SHALL alert administrators.
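
A sketch of the failure-handling wrapper Criteria 1 and 2 imply; the retry policy and the `cleanup_old_checkpoints` callback are assumptions:

```python
import logging

logger = logging.getLogger(__name__)

def save_checkpoint_safely(save_fn, cleanup_old_checkpoints, retries=1):
    """Attempt a save, clean up and retry on disk errors, never crash."""
    for attempt in range(retries + 1):
        try:
            save_fn()
            return True
        except OSError as exc:  # e.g. insufficient disk space
            logger.error("Checkpoint save failed: %s", exc)
            if attempt < retries:
                cleanup_old_checkpoints()  # Criterion 2: free space, retry
    return False  # Criterion 1: report failure, let training continue
```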

### Requirement 8: Configuration and Monitoring

**User Story:** As a developer, I want configurable checkpoint settings and monitoring capabilities, so that I can optimize checkpoint behavior for different training scenarios.

#### Acceptance Criteria

1. WHEN configuring the system THEN checkpoint saving frequency SHALL be adjustable.
2. WHEN setting performance thresholds THEN the minimum improvement required for saving SHALL be configurable.
3. WHEN monitoring training THEN checkpoint save events SHALL be visible in logs and dashboards.
4. WHEN analyzing performance THEN checkpoint metadata SHALL be accessible for review.
5. WHEN tuning the system THEN checkpoint storage limits SHALL be configurable.
6. WHEN debugging issues THEN detailed checkpoint operation logs SHALL be available.
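
The tunables above might collect into a single config object; field names and defaults here are placeholders, not the actual schema:

```python
from dataclasses import dataclass

@dataclass
class CheckpointConfig:
    save_every_n_iterations: int = 500   # Criterion 1: save frequency
    min_loss_improvement: float = 0.01   # Criterion 2: improvement threshold
    max_checkpoints_per_model: int = 5   # Criterion 5: storage limit
    verbose_logging: bool = True         # Criteria 3 and 6: observability
```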

### Requirement 9: Backward Compatibility and Migration

**User Story:** As a user, I want existing checkpoints to remain compatible, so that current model progress is preserved when the checkpoint system is enhanced.

#### Acceptance Criteria

1. WHEN the enhanced checkpoint system starts THEN it SHALL load existing checkpoints without issues.
2. WHEN checkpoint formats are updated THEN migration utilities SHALL convert old formats.
3. WHEN new metadata is added THEN existing checkpoints SHALL work with default values.
4. WHEN the system upgrades THEN checkpoint directories SHALL be preserved and enhanced.
5. WHEN rollback is needed THEN the system SHALL support reverting to previous checkpoint versions.
6. WHEN compatibility issues arise THEN clear error messages SHALL guide resolution.
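
A sketch of default-filling when loading an older checkpoint (Criterion 3); the field names and format are assumptions:

```python
DEFAULT_METADATA = {"loss": float("inf"), "accuracy": 0.0, "format_version": 1}

def load_checkpoint_with_defaults(raw: dict) -> dict:
    """Older checkpoints without the new metadata still load cleanly."""
    metadata = {**DEFAULT_METADATA, **raw.get("metadata", {})}
    return {"weights": raw["weights"], "metadata": metadata}
```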
Binary files not shown.
@@ -1,87 +0,0 @@
{
  "cases": [
    {
      "case_id": "loss_20250527_022635_ETHUSDT",
      "timestamp": "2025-05-27T02:26:35.435596",
      "symbol": "ETH/USDT",
      "loss_amount": 3.0,
      "loss_percentage": 1.0,
      "training_priority": 1,
      "retraining_count": 0
    },
    {
      "case_id": "loss_20250527_022710_ETHUSDT",
      "timestamp": "2025-05-27T02:27:10.436995",
      "symbol": "ETH/USDT",
      "loss_amount": 30.0,
      "loss_percentage": 5.0,
      "training_priority": 3,
      "retraining_count": 0
    },
    {
      "case_id": "negative_20250626_005640_ETHUSDT_pnl_neg0p0018",
      "timestamp": "2025-06-26T00:56:05.060395",
      "symbol": "ETH/USDT",
      "pnl": -0.0018115494511830841,
      "training_priority": 2,
      "retraining_count": 0,
      "feature_counts": {
        "market_state": 0,
        "cnn_features": 0,
        "dqn_state": 2,
        "cob_features": 0,
        "technical_indicators": 7,
        "price_history": 50
      }
    },
    {
      "case_id": "negative_20250626_140647_ETHUSDT_pnl_neg0p0220",
      "timestamp": "2025-06-26T14:04:41.195630",
      "symbol": "ETH/USDT",
      "pnl": -0.02201592485230835,
      "training_priority": 2,
      "retraining_count": 0,
      "feature_counts": {
        "market_state": 0,
        "cnn_features": 0,
        "dqn_state": 2,
        "cob_features": 0,
        "technical_indicators": 7,
        "price_history": 50
      }
    },
    {
      "case_id": "negative_20250626_140726_ETHUSDT_pnl_neg0p0220",
      "timestamp": "2025-06-26T14:04:41.195630",
      "symbol": "ETH/USDT",
      "pnl": -0.02201592485230835,
      "training_priority": 2,
      "retraining_count": 0,
      "feature_counts": {
        "market_state": 0,
        "cnn_features": 0,
        "dqn_state": 2,
        "cob_features": 0,
        "technical_indicators": 7,
        "price_history": 50
      }
    },
    {
      "case_id": "negative_20250626_140824_ETHUSDT_pnl_neg0p0071",
      "timestamp": "2025-06-26T14:07:26.180914",
      "symbol": "ETH/USDT",
      "pnl": -0.007136478005372933,
      "training_priority": 2,
      "retraining_count": 0,
      "feature_counts": {
        "market_state": 0,
        "cnn_features": 0,
        "dqn_state": 2,
        "cob_features": 0,
        "technical_indicators": 7,
        "price_history": 50
      }
    }
  ],
  "last_updated": "2025-06-26T14:08:24.042558"
}
Binary files not shown.
@@ -1,11 +0,0 @@
{
  "session_id": "session_loss_20250527_022635_ETHUSDT_1748302030",
  "start_time": "2025-05-27T02:27:10.436995",
  "end_time": "2025-05-27T02:27:15.464739",
  "cases_trained": [
    "loss_20250527_022635_ETHUSDT"
  ],
  "epochs_completed": 100,
  "loss_improvement": 0.3923485547642519,
  "accuracy_improvement": 0.15929913816087232
}
File diff suppressed because it is too large