
Implementation Plan

Enhanced Data Provider and COB Integration

  • 1. Enhance the existing DataProvider class with standardized model inputs

    • Extend the current implementation in core/data_provider.py
    • Implement standardized COB+OHLCV data frame for all models
    • Create unified input format: 300 OHLCV frames for each timeframe (1s, 1m, 1h, 1d) on ETH plus 300 seconds of 1s BTC data
    • Integrate with existing multi_exchange_cob_provider.py for COB data
    • Requirements: 1.1, 1.2, 1.3, 1.6
  • 1.1. Implement standardized COB+OHLCV data frame for all models

    • Create a BaseDataInput class with a standardized format for all models (see the sketch after this task)
    • Implement OHLCV input: 300 frames for each timeframe (1s, 1m, 1h, 1d) on ETH plus 300 seconds of 1s BTC data
    • Add COB input: ±20 buckets of COB amounts in USD for each 1s OHLCV frame
    • Include 1s, 5s, 15s, and 60s moving averages of COB imbalance computed over ±5 COB buckets
    • Ensure all models receive identical input format for consistency
    • Requirements: 1.2, 1.3, 8.1
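
The following is a minimal sketch of what the standardized BaseDataInput from task 1.1 could look like; the field names, array shapes, and the `validate()` helper are illustrative assumptions, not the final contract.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict

import numpy as np


@dataclass
class BaseDataInput:
    """Standardized input frame shared by all models (illustrative sketch).

    Shapes are assumptions: 300 OHLCV frames per timeframe for ETH,
    300 seconds of 1s BTC frames, and +/-20 COB price buckets per 1s frame.
    """
    symbol: str
    timestamp: datetime
    # 300 frames x 5 columns (open, high, low, close, volume) per timeframe
    eth_ohlcv: Dict[str, np.ndarray] = field(default_factory=dict)   # keys: "1s", "1m", "1h", "1d"
    btc_ohlcv_1s: np.ndarray = field(default_factory=lambda: np.zeros((300, 5)))
    # 300 x 40: +/-20 COB buckets (USD amounts) aligned with each 1s frame
    cob_buckets: np.ndarray = field(default_factory=lambda: np.zeros((300, 40)))
    # moving averages of COB imbalance over +/-5 buckets, keyed by window
    cob_imbalance_ma: Dict[str, np.ndarray] = field(default_factory=dict)  # keys: "1s", "5s", "15s", "60s"
    # last outputs from other models, for cross-model feeding (ModelOutput, see task 1.2)
    last_model_outputs: Dict[str, Any] = field(default_factory=dict)

    def validate(self) -> bool:
        """Cheap completeness check before inference (see task 5.1)."""
        return all(tf in self.eth_ohlcv and self.eth_ohlcv[tf].shape == (300, 5)
                   for tf in ("1s", "1m", "1h", "1d"))
```
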
  • 1.2. Implement extensible model output storage

    • Create standardized ModelOutput data structure (see the sketch after this task)
    • Support CNN, RL, LSTM, Transformer, and future model types
    • Include model-specific predictions and cross-model hidden states
    • Add metadata support for extensible model information
    • Requirements: 1.10, 8.2
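
A possible shape for the extensible ModelOutput record from task 1.2, again with assumed field names; the `hidden_states` and `metadata` fields carry the cross-model states and model-specific extras mentioned above.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict, Optional

import numpy as np


@dataclass
class ModelOutput:
    """Extensible output record for any model type (illustrative sketch)."""
    model_type: str                 # "cnn", "rl", "lstm", "transformer", ...
    model_name: str                 # concrete model identifier / version
    symbol: str
    timestamp: datetime
    action: str                     # "BUY" or "SELL"
    confidence: float               # 0.0 .. 1.0
    predictions: Dict[str, Any] = field(default_factory=dict)   # model-specific values
    hidden_states: Optional[Dict[str, np.ndarray]] = None       # for cross-model feeding
    metadata: Dict[str, Any] = field(default_factory=dict)      # extensible extras
```
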
  • 1.3. Enhance Williams Market Structure pivot point calculation

    • Extend existing williams_market_structure.py implementation
    • Improve recursive pivot point calculation accuracy
    • Add unit tests to verify pivot point detection
    • Integrate with COB data for enhanced pivot detection
    • Requirements: 1.5, 2.7
  • 1.4. Optimize real-time data streaming with COB integration

    • Enhance existing WebSocket connections in enhanced_cob_websocket.py
    • Implement 10Hz COB data streaming alongside OHLCV data
    • Add data synchronization across different refresh rates
    • Ensure thread-safe access to multi-rate data streams
    • Requirements: 1.6, 8.5
  • 1.5. Fix WebSocket COB data processing errors

    • Fix 'NoneType' object has no attribute 'append' errors in COB data processing
    • Ensure proper initialization of data structures in MultiExchangeCOBProvider
    • Add validation and defensive checks before accessing data structures
    • Implement proper error handling for WebSocket data processing
    • Requirements: 1.1, 1.6, 8.5
  • 1.6. Enhance error handling in COB data processing

    • Add validation for incoming WebSocket data
    • Implement reconnection logic with exponential backoff (see the sketch after this task)
    • Add detailed logging for debugging COB data issues
    • Ensure system continues operation with last valid data during failures
    • Requirements: 1.6, 8.5
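
Tasks 1.5 and 1.6 amount to defensive initialization plus a reconnect loop. Below is a sketch under the assumption that enhanced_cob_websocket.py is built on the `websockets` package; the class and method names are hypothetical.

```python
import asyncio
import json
import logging
from collections import defaultdict, deque

import websockets  # assumption: the existing WebSocket layer uses this package

logger = logging.getLogger(__name__)


class CobStream:
    """Reconnecting COB stream with exponential backoff (illustrative sketch)."""

    def __init__(self, url: str, max_backoff: float = 60.0):
        self.url = url
        self.max_backoff = max_backoff
        # defensive initialization avoids "'NoneType' object has no attribute 'append'"
        self.order_book_updates = defaultdict(lambda: deque(maxlen=300))

    def _handle(self, raw) -> None:
        msg = json.loads(raw)
        symbol = msg.get("symbol")
        if not symbol or "bids" not in msg or "asks" not in msg:
            logger.warning("Dropping malformed COB message: %s", str(raw)[:200])
            return
        self.order_book_updates[symbol].append(msg)   # deque always exists

    async def run(self) -> None:
        backoff = 1.0
        while True:
            try:
                async with websockets.connect(self.url) as ws:
                    backoff = 1.0                      # reset after a successful connect
                    async for raw in ws:
                        self._handle(raw)
            except Exception as exc:
                logger.error("COB stream error: %s; reconnecting in %.1fs", exc, backoff)
                await asyncio.sleep(backoff)
                backoff = min(backoff * 2, self.max_backoff)
```
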

Enhanced CNN Model Implementation

  • 2. Enhance the existing CNN model with standardized inputs/outputs

    • Extend the current implementation in NN/models/enhanced_cnn.py
    • Accept the standardized COB+OHLCV data frame: 300 frames for each timeframe (1s, 1m, 1h, 1d) on ETH plus 300 seconds of 1s BTC
    • Include COB ±20 buckets and the 1s, 5s, 15s, and 60s MAs of COB imbalance over ±5 buckets
    • Output BUY/SELL trading actions with confidence scores
    • Requirements: 2.1, 2.2, 2.8, 1.10
  • 2.1. Implement CNN inference with standardized input format

    • Accept BaseDataInput with standardized COB+OHLCV format
    • Process 300 frames of multi-timeframe data with COB buckets
    • Output BUY/SELL recommendations with confidence scores
    • Make hidden layer states available for cross-model feeding
    • Optimize inference performance for real-time processing
    • Requirements: 2.2, 2.6, 2.8, 4.3
  • 2.2. Enhance CNN training pipeline with checkpoint management

    • Integrate with checkpoint manager for training progress persistence
    • Store the top 5-10 checkpoints ranked by performance metrics (see the sketch after this task)
    • Automatically load best checkpoint at startup
    • Implement training triggers based on orchestrator feedback
    • Store metadata with checkpoints for performance tracking
    • Requirements: 2.4, 2.5, 5.2, 5.3, 5.7
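
A compact sketch of the checkpoint management described in task 2.2, assuming PyTorch models and a JSON index file; the CheckpointManager name and on-disk layout are illustrative. The same mechanism is reused by the RL pipeline in task 3.2.

```python
import json
import time
from pathlib import Path
from typing import Any, Dict, List, Optional

import torch  # assumption: models expose a PyTorch state_dict


class CheckpointManager:
    """Keeps only the top-N checkpoints ranked by a performance metric (sketch)."""

    def __init__(self, checkpoint_dir: str, keep_best: int = 10):
        self.dir = Path(checkpoint_dir)
        self.dir.mkdir(parents=True, exist_ok=True)
        self.keep_best = keep_best
        self.index_file = self.dir / "index.json"
        self.index: List[Dict[str, Any]] = (
            json.loads(self.index_file.read_text()) if self.index_file.exists() else []
        )

    def save(self, model: torch.nn.Module, metric: float, metadata: Dict[str, Any]) -> None:
        path = self.dir / f"ckpt_{int(time.time() * 1000)}.pt"   # unique, time-based name
        torch.save(model.state_dict(), path)
        self.index.append({"path": str(path), "metric": metric, "metadata": metadata})
        # rank by metric (higher is better here) and prune files beyond keep_best
        self.index.sort(key=lambda e: e["metric"], reverse=True)
        for entry in self.index[self.keep_best:]:
            Path(entry["path"]).unlink(missing_ok=True)
        self.index = self.index[:self.keep_best]
        self.index_file.write_text(json.dumps(self.index, indent=2))

    def load_best(self, model: torch.nn.Module) -> Optional[Dict[str, Any]]:
        """Load the best-ranked checkpoint at startup; returns its metadata."""
        if not self.index:
            return None
        best = self.index[0]
        model.load_state_dict(torch.load(best["path"]))
        return best["metadata"]
```
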
  • 2.3. Implement CNN model evaluation and checkpoint optimization

    • Create evaluation methods using standardized input/output format
    • Implement performance metrics for checkpoint ranking
    • Add validation against historical trading outcomes
    • Support automatic checkpoint cleanup (keep only top performers)
    • Track model improvement over time through checkpoint metadata
    • Requirements: 2.5, 5.8, 4.4

Enhanced RL Model Implementation

  • 3. Enhance the existing RL model with standardized inputs/outputs

    • Extend the current implementation in NN/models/dqn_agent.py
    • Accept the standardized COB+OHLCV data frame: 300 frames for each timeframe (1s, 1m, 1h, 1d) on ETH plus 300 seconds of 1s BTC
    • Include COB ±20 buckets and the 1s, 5s, 15s, and 60s MAs of COB imbalance over ±5 buckets
    • Output BUY/SELL trading action with confidence scores
    • Requirements: 3.1, 3.2, 3.7, 1.10
  • 3.1. Implement RL inference with standardized input format

    • Accept BaseDataInput with standardized COB+OHLCV format
    • Process CNN hidden states and predictions as part of state input
    • Output BUY/SELL recommendations with confidence scores
    • Include expected rewards and value estimates in output
    • Optimize inference performance for real-time processing
    • Requirements: 3.2, 3.7, 4.3
  • 3.2. Enhance RL training pipeline with checkpoint management

    • Integrate with checkpoint manager for training progress persistence
    • Store the top 5-10 checkpoints ranked by trading performance metrics
    • Automatically load best checkpoint at startup
    • Implement experience replay with profitability-based prioritization (see the sketch after this task)
    • Store metadata with checkpoints for performance tracking
    • Requirements: 3.3, 3.5, 5.4, 5.7, 4.4
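
A sketch of profitability-based experience replay prioritization for task 3.2: the sampling probability grows with the absolute realized reward (PnL) of each stored experience. Class and field names are assumptions.

```python
from collections import deque
from dataclasses import dataclass
from typing import Any, List

import numpy as np


@dataclass
class Experience:
    state: Any
    action: int
    reward: float          # realized PnL used as the reward signal
    next_state: Any
    done: bool


class ProfitPrioritizedReplay:
    """Replay buffer that samples high-|reward| trades more often (sketch)."""

    def __init__(self, capacity: int = 100_000, epsilon: float = 1e-3):
        self.buffer: deque = deque(maxlen=capacity)
        self.epsilon = epsilon   # keeps zero-profit samples selectable

    def add(self, exp: Experience) -> None:
        self.buffer.append(exp)

    def sample(self, batch_size: int) -> List[Experience]:
        priorities = np.array([abs(e.reward) + self.epsilon for e in self.buffer])
        probs = priorities / priorities.sum()
        idx = np.random.choice(len(self.buffer), size=min(batch_size, len(self.buffer)),
                               replace=False, p=probs)
        return [self.buffer[i] for i in idx]
```
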
  • 3.3. Implement RL model evaluation and checkpoint optimization

    • Create evaluation methods using standardized input/output format
    • Implement trading performance metrics for checkpoint ranking
    • Add validation against historical trading opportunities
    • Support automatic checkpoint cleanup (keep only top performers)
    • Track model improvement over time through checkpoint metadata
    • Requirements: 3.3, 5.8, 4.4

Enhanced Orchestrator Implementation

  • 4. Enhance the existing orchestrator with centralized coordination

    • Extend the current implementation in core/orchestrator.py
    • Implement DataSubscriptionManager for multi-rate data streams
    • Add ModelInferenceCoordinator for cross-model coordination
    • Create ModelOutputStore for extensible model output management
    • Add TrainingPipelineManager for continuous learning coordination
    • Requirements: 4.1, 4.2, 4.5, 8.1
  • 4.1. Implement data subscription and management system

    • Create a DataSubscriptionManager class (see the sketch after this task)
    • Subscribe to 10Hz COB data, OHLCV, market ticks, and technical indicators
    • Implement intelligent caching for "last updated" data serving
    • Maintain synchronized base dataframe across different refresh rates
    • Add thread-safe access to multi-rate data streams
    • Requirements: 4.1, 1.6, 8.5
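
A minimal, thread-safe sketch of the DataSubscriptionManager from task 4.1: each stream keeps only its last-updated snapshot behind a lock, and subscribers are notified outside the lock. Method names are illustrative.

```python
import threading
from datetime import datetime, timezone
from typing import Any, Callable, Dict, List, Optional


class DataSubscriptionManager:
    """Caches the latest value of each data stream behind a lock (illustrative sketch).

    Streams refresh at different rates (10Hz COB, 1s OHLCV, ticks, indicators);
    consumers always read the most recently updated snapshot.
    """

    def __init__(self):
        self._lock = threading.RLock()
        self._latest: Dict[str, Any] = {}
        self._updated_at: Dict[str, datetime] = {}
        self._subscribers: Dict[str, List[Callable[[Any], None]]] = {}

    def publish(self, stream: str, payload: Any) -> None:
        with self._lock:
            self._latest[stream] = payload
            self._updated_at[stream] = datetime.now(timezone.utc)
            callbacks = list(self._subscribers.get(stream, []))
        for cb in callbacks:            # call outside the lock to avoid deadlocks
            cb(payload)

    def subscribe(self, stream: str, callback: Callable[[Any], None]) -> None:
        with self._lock:
            self._subscribers.setdefault(stream, []).append(callback)

    def latest(self, stream: str) -> Optional[Any]:
        with self._lock:
            return self._latest.get(stream)
```
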
  • 4.2. Implement model inference coordination

    • Create ModelInferenceCoordinator class
    • Trigger model inference based on data availability and requirements
    • Coordinate parallel inference execution for independent models
    • Handle model dependencies (e.g., RL waiting for CNN hidden states)
    • Assemble appropriate input data for each model type
    • Requirements: 4.2, 3.1, 2.1
  • 4.3. Implement model output storage and cross-feeding

    • Create ModelOutputStore class using standardized ModelOutput format
    • Store CNN predictions, confidence scores, and hidden layer states
    • Store RL action recommendations and value estimates
    • Support extensible storage for LSTM, Transformer, and future models
    • Implement cross-model feeding of hidden states and predictions
    • Include "last predictions" from all models in base data input
    • Requirements: 4.3, 1.10, 8.2
  • 4.4. Implement training pipeline management

    • Create TrainingPipelineManager class
    • Call each model's training pipeline with prediction-result pairs
    • Manage training data collection and labeling
    • Coordinate online learning updates based on real-time performance
    • Track prediction accuracy and trigger retraining when needed
    • Requirements: 4.4, 5.2, 5.4, 5.7
  • 4.5. Implement enhanced decision-making with MoE

    • Create enhanced DecisionMaker class
    • Implement Mixture of Experts approach for model integration
    • Apply confidence-based filtering to avoid uncertain trades (see the sketch after this task)
    • Support configurable thresholds for buy/sell decisions
    • Consider market conditions and risk parameters in decisions
    • Requirements: 4.5, 4.8, 6.7
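
One way to realize the confidence-based filtering and configurable thresholds in task 4.5 is a weighted vote over model outputs that abstains when neither side clears its threshold. A sketch with assumed input shapes:

```python
from typing import Dict, Optional


def combine_decisions(outputs: Dict[str, dict],
                      weights: Dict[str, float],
                      buy_threshold: float = 0.65,
                      sell_threshold: float = 0.65) -> Optional[str]:
    """Confidence-weighted mixture of model outputs with a do-nothing fallback (sketch).

    `outputs` maps model name -> {"action": "BUY"|"SELL", "confidence": float};
    `weights` holds per-model gating weights. Thresholds are configurable as in task 4.5.
    """
    buy_score, sell_score, total = 0.0, 0.0, 0.0
    for name, out in outputs.items():
        w = weights.get(name, 1.0) * out["confidence"]
        total += w
        if out["action"] == "BUY":
            buy_score += w
        else:
            sell_score += w
    if total == 0:
        return None
    if buy_score / total >= buy_threshold:
        return "BUY"
    if sell_score / total >= sell_threshold:
        return "SELL"
    return None   # not confident enough: skip the trade


# example: CNN and RL disagree, so confidence filtering returns None (no trade)
print(combine_decisions(
    {"cnn": {"action": "BUY", "confidence": 0.6}, "rl": {"action": "SELL", "confidence": 0.55}},
    weights={"cnn": 1.0, "rl": 1.0}))
```
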
  • 4.6. Implement extensible model integration architecture

    • Create MoEGateway class supporting dynamic model addition
    • Support CNN, RL, LSTM, Transformer model types without architecture changes
    • Implement model versioning and rollback capabilities
    • Handle model failures and fallback mechanisms
    • Provide model performance monitoring and alerting
    • Requirements: 4.6, 8.2, 8.3

Model Inference Data Validation and Storage

  • 5. Implement comprehensive inference data validation system

    • Create InferenceDataValidator class for input validation
    • Validate complete OHLCV dataframes for all required timeframes
    • Check input data dimensions against model requirements
    • Log missing components and prevent prediction on incomplete data
    • Requirements: 9.1, 9.2, 9.3, 9.4
  • 5.1. Implement input data validation for all models

    • Create validation methods for CNN, RL, and future model inputs
    • Validate OHLCV data completeness (300 frames for 1s, 1m, 1h, 1d)
    • Validate COB data structure (±20 buckets, MA calculations)
    • Raise specific validation errors reporting expected vs. actual dimensions (see the sketch after this task)
    • Ensure validation occurs before any model inference
    • Requirements: 9.1, 9.4
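
A sketch of the validator from tasks 5 and 5.1, assuming the 300-frame OHLCV and ±20-bucket COB shapes defined earlier; the class and error names are hypothetical.

```python
from typing import Dict

import numpy as np


class ValidationError(ValueError):
    """Raised when an input frame does not match the expected dimensions."""


class InferenceDataValidator:
    """Checks input completeness before any model inference (illustrative sketch)."""

    REQUIRED_TIMEFRAMES = ("1s", "1m", "1h", "1d")
    EXPECTED_OHLCV_SHAPE = (300, 5)     # 300 frames x OHLCV columns
    EXPECTED_COB_SHAPE = (300, 40)      # +/-20 buckets per 1s frame

    def validate_ohlcv(self, ohlcv: Dict[str, np.ndarray]) -> None:
        for tf in self.REQUIRED_TIMEFRAMES:
            if tf not in ohlcv:
                raise ValidationError(f"Missing OHLCV timeframe: {tf}")
            if ohlcv[tf].shape != self.EXPECTED_OHLCV_SHAPE:
                raise ValidationError(
                    f"OHLCV[{tf}] has shape {ohlcv[tf].shape}, "
                    f"expected {self.EXPECTED_OHLCV_SHAPE}")

    def validate_cob(self, cob: np.ndarray) -> None:
        if cob.shape != self.EXPECTED_COB_SHAPE:
            raise ValidationError(
                f"COB buckets have shape {cob.shape}, expected {self.EXPECTED_COB_SHAPE}")
        if np.isnan(cob).any():
            raise ValidationError("COB buckets contain NaN values")
```
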
  • 5.2. Implement persistent inference history storage

    • Create InferenceHistoryStore class for persistent storage
    • Store complete input data packages with each prediction
    • Include timestamp, symbol, input features, prediction outputs, confidence scores
    • Store model internal states for cross-model feeding
    • Implement compressed storage to minimize the storage footprint (sketched below)
    • Requirements: 9.5, 9.6
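
A minimal sketch of compressed inference-history storage for task 5.2 using gzip-compressed pickles, one file per prediction, grouped by symbol and day; paths and method names are assumptions.

```python
import gzip
import logging
import pickle
from datetime import datetime
from pathlib import Path
from typing import Any, Dict, Iterator

logger = logging.getLogger(__name__)


class InferenceHistoryStore:
    """Compressed, append-only storage of inference records (illustrative sketch)."""

    def __init__(self, root: str = "inference_history"):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def store(self, symbol: str, timestamp: datetime, record: Dict[str, Any]) -> Path:
        """Persist one prediction with its full input package; storage failures are
        logged, not propagated, so the prediction flow keeps running (see task 5.3)."""
        day_dir = self.root / symbol / timestamp.strftime("%Y-%m-%d")
        day_dir.mkdir(parents=True, exist_ok=True)
        path = day_dir / f"{timestamp.strftime('%H%M%S_%f')}.pkl.gz"
        try:
            with gzip.open(path, "wb") as fh:
                pickle.dump(record, fh, protocol=pickle.HIGHEST_PROTOCOL)
        except OSError as exc:
            logger.error("Inference history write failed: %s", exc)
        return path

    def query(self, symbol: str, day: str) -> Iterator[Dict[str, Any]]:
        """Yield records for one symbol and calendar day (YYYY-MM-DD)."""
        for path in sorted((self.root / symbol / day).glob("*.pkl.gz")):
            with gzip.open(path, "rb") as fh:
                yield pickle.load(fh)
```
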
  • 5.3. Implement inference history query and retrieval system

    • Create efficient query mechanisms by symbol, timeframe, and date range
    • Implement data retrieval for training pipeline consumption
    • Add data completeness metrics and validation results in storage
    • Handle storage failures gracefully without breaking prediction flow
    • Requirements: 9.7, 11.6

Inference-Training Feedback Loop Implementation

  • 6. Implement prediction outcome evaluation system

    • Create PredictionOutcomeEvaluator class
    • Evaluate prediction accuracy against actual price movements (sketched below)
    • Create training examples using stored inference data and actual outcomes
    • Feed prediction-result pairs back to respective models
    • Requirements: 10.1, 10.2, 10.3
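
A sketch of outcome evaluation for task 6: a stored prediction is scored against the realized price move over the evaluation horizon, yielding a signed reward usable as a training signal. The names and the scoring rule are illustrative.

```python
from dataclasses import dataclass


@dataclass
class EvaluatedPrediction:
    model_name: str
    predicted_action: str     # "BUY" or "SELL"
    price_at_prediction: float
    price_after_horizon: float
    correct: bool
    reward: float             # signed return, usable as an RL reward or label weight


def evaluate_prediction(model_name: str, predicted_action: str,
                        price_at_prediction: float,
                        price_after_horizon: float) -> EvaluatedPrediction:
    """Score a stored prediction against the actual price movement (illustrative sketch)."""
    realized_return = (price_after_horizon - price_at_prediction) / price_at_prediction
    went_up = realized_return > 0
    correct = (predicted_action == "BUY" and went_up) or (predicted_action == "SELL" and not went_up)
    # positive reinforcement for accurate calls, corrective (negative) signal otherwise
    reward = abs(realized_return) if correct else -abs(realized_return)
    return EvaluatedPrediction(model_name, predicted_action,
                               price_at_prediction, price_after_horizon, correct, reward)


# example: a BUY call followed by a 0.5% rise earns a positive reward
print(evaluate_prediction("cnn", "BUY", 3000.0, 3015.0))
```
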
  • 6.1. Implement adaptive learning signal generation

    • Create positive reinforcement signals for accurate predictions
    • Generate corrective training signals for inaccurate predictions
    • Retrieve last inference data for each model for outcome comparison
    • Implement model-specific learning signal formats
    • Requirements: 10.4, 10.5, 10.6
  • 6.2. Implement continuous improvement tracking

    • Track and report accuracy improvements/degradations over time
    • Monitor model learning progress through feedback loop
    • Create performance metrics for inference-training effectiveness
    • Generate alerts for learning regression or stagnation
    • Requirements: 10.7

Inference History Management and Monitoring

  • 7. Implement comprehensive inference logging and monitoring

    • Create InferenceMonitor class for logging and alerting
    • Log inference data storage operations with completeness metrics
    • Log training outcomes and model performance changes
    • Alert administrators on data flow issues with specific error details
    • Requirements: 11.1, 11.2, 11.3
  • 7.1. Implement configurable retention policies

    • Create RetentionPolicyManager class
    • Archive or remove oldest entries when limits are reached
    • Prioritize keeping most recent and valuable training examples
    • Implement storage space monitoring and alerts
    • Requirements: 11.4, 11.7
  • 7.2. Implement efficient historical data management

    • Compress inference data to minimize storage footprint
    • Maintain accessibility for training and analysis
    • Implement efficient query mechanisms for historical analysis
    • Add data archival and restoration capabilities
    • Requirements: 11.5, 11.6

Trading Executor Implementation

  • 8. Design and implement the trading executor

    • Create a TradingExecutor class that accepts trading actions from the orchestrator
    • Implement order execution through brokerage APIs
    • Add order lifecycle management
    • Requirements: 7.1, 7.2, 8.6
  • 8.1. Implement brokerage API integrations

    • Create a BrokerageAPI interface (see the sketch after this task)
    • Implement concrete classes for MEXC and Binance
    • Add error handling and retry mechanisms
    • Requirements: 7.1, 7.2, 8.6
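
A sketch of the BrokerageAPI interface from task 8.1 with a generic retry helper; the method set is an assumed abstraction over the real MEXC and Binance clients, not the exchanges' actual APIs.

```python
import time
from abc import ABC, abstractmethod
from typing import Dict


class BrokerageAPI(ABC):
    """Common interface the TradingExecutor talks to (illustrative sketch).

    Concrete MEXC/Binance classes would wrap the exchanges' real REST/WebSocket
    clients; the methods below are assumptions about the needed surface.
    """

    @abstractmethod
    def place_order(self, symbol: str, side: str, quantity: float) -> Dict:
        ...

    @abstractmethod
    def cancel_order(self, order_id: str) -> bool:
        ...

    @abstractmethod
    def get_order_status(self, order_id: str) -> Dict:
        ...


def with_retries(fn, attempts: int = 3, base_delay: float = 0.5):
    """Retry a brokerage call with exponential backoff before giving up."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```
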
  • 8.2. Implement order management

    • Create an OrderManager class
    • Implement methods for creating, updating, and canceling orders
    • Add order tracking and status updates
    • Requirements: 7.1, 7.2, 8.6
  • 8.3. Implement error handling

    • Add comprehensive error handling for API failures
    • Implement circuit breakers for extreme market conditions
    • Add logging and notification mechanisms
    • Requirements: 7.1, 7.2, 8.6

Risk Manager Implementation

  • 9. Design and implement the risk manager

    • Create a RiskManager class
    • Implement risk parameter management
    • Add risk metric calculation
    • Requirements: 7.1, 7.3, 7.4
  • 9.1. Implement stop-loss functionality

    • Create a StopLossManager class
    • Implement methods for creating and managing stop-loss orders
    • Add mechanisms to automatically close positions when stop-loss is triggered
    • Requirements: 7.1, 7.2
  • 9.2. Implement position sizing

    • Create a PositionSizer class
    • Implement methods for calculating position sizes based on risk parameters (see the sketch after this task)
    • Add validation to ensure position sizes are within limits
    • Requirements: 7.3, 7.7
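
A sketch of risk-based position sizing for task 9.2: the quantity is chosen so that hitting the stop loses a fixed fraction of equity, then capped by a maximum position limit. Parameter names and defaults are assumptions.

```python
class PositionSizer:
    """Risk-based position sizing with a hard cap (illustrative sketch)."""

    def __init__(self, risk_per_trade: float = 0.01, max_position_pct: float = 0.25):
        self.risk_per_trade = risk_per_trade          # fraction of equity risked per trade
        self.max_position_pct = max_position_pct      # cap as a fraction of equity

    def size(self, equity: float, entry_price: float, stop_price: float) -> float:
        """Quantity such that hitting the stop loses `risk_per_trade` of equity."""
        stop_distance = abs(entry_price - stop_price)
        if stop_distance <= 0:
            raise ValueError("stop_price must differ from entry_price")
        quantity = (equity * self.risk_per_trade) / stop_distance
        max_quantity = (equity * self.max_position_pct) / entry_price
        return min(quantity, max_quantity)            # validate against position limits


# example: 1% risk on $10,000 equity with a $30 stop distance -> ~3.33 units,
# capped at ~0.83 units by the 25% max position limit
sizer = PositionSizer()
print(sizer.size(equity=10_000, entry_price=3_000, stop_price=2_970))
```
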
  • 9.3. Implement risk metrics

    • Add methods to calculate risk metrics (drawdown, VaR, etc.)
    • Implement real-time risk monitoring
    • Add alerts for high-risk situations
    • Requirements: 7.4, 7.5, 7.6, 7.8

Dashboard Implementation

  • 10. Design and implement the dashboard UI

    • Create a Dashboard class
    • Implement the web-based UI using Flask/Dash
    • Add real-time updates using WebSockets
    • Requirements: 6.1, 6.8
  • 10.1. Implement chart management

    • Create a ChartManager class
    • Implement methods for creating and updating charts
    • Add interactive features (zoom, pan, etc.)
    • Requirements: 6.1, 6.2
  • 10.2. Implement control panel

    • Create a ControlPanel class
    • Implement start/stop toggles for system processes
    • Add sliders for adjusting buy/sell thresholds
    • Requirements: 6.6, 6.7
  • 10.3. Implement system status display

    • Add methods to display training progress
    • Implement model performance metrics visualization
    • Add real-time system status updates
    • Requirements: 6.5, 5.6
  • 10.4. Implement server-side processing

    • Ensure all processes run on the server without requiring the dashboard to be open
    • Implement background tasks for model training and inference
    • Add mechanisms to persist system state
    • Requirements: 6.8, 5.5

Integration and Testing

  • 11. Integrate all components

    • Connect the data provider to the CNN and RL models
    • Connect the CNN and RL models to the orchestrator
    • Connect the orchestrator to the trading executor
    • Requirements: 8.1, 8.2, 8.3
  • 11.1. Implement comprehensive unit tests

    • Create unit tests for each component
    • Implement test fixtures and mocks
    • Add test coverage reporting
    • Requirements: 8.1, 8.2, 8.3
  • 11.2. Implement integration tests

    • Create tests for component interactions
    • Implement end-to-end tests
    • Add performance benchmarks
    • Requirements: 8.1, 8.2, 8.3
  • 11.3. Implement backtesting framework

    • Create a backtesting environment
    • Implement methods to replay historical data (see the sketch after this task)
    • Add performance metrics calculation
    • Requirements: 5.8, 8.1
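
A minimal backtest replay loop for task 11.3, assuming candles are provided as dicts and decisions come from a pluggable function; the single-position, long-only bookkeeping keeps the sketch short.

```python
from typing import Callable, Dict, Iterable, List


def run_backtest(candles: Iterable[Dict], decide: Callable[[Dict], str],
                 fee_rate: float = 0.001) -> Dict[str, float]:
    """Replay historical candles through a decision function (illustrative sketch).

    `decide` returns "BUY", "SELL", or "HOLD" for each candle.
    """
    cash, position = 1_000.0, 0.0
    equity_curve: List[float] = []
    for candle in candles:
        price = candle["close"]
        action = decide(candle)
        if action == "BUY" and position == 0.0:
            position = (cash * (1 - fee_rate)) / price   # enter with full cash, minus fees
            cash = 0.0
        elif action == "SELL" and position > 0.0:
            cash = position * price * (1 - fee_rate)     # exit the position, minus fees
            position = 0.0
        equity_curve.append(cash + position * price)
    max_dd = max((max(equity_curve[:i + 1]) - v) / max(equity_curve[:i + 1])
                 for i, v in enumerate(equity_curve)) if equity_curve else 0.0
    return {"final_equity": equity_curve[-1] if equity_curve else cash,
            "max_drawdown": max_dd}
```
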
  • 11.4. Optimize performance

    • Profile the system to identify bottlenecks
    • Implement optimizations for critical paths
    • Add caching and parallelization where appropriate
    • Requirements: 8.1, 8.2, 8.3