cleanup and reorgnization
This commit is contained in:
@ -1,207 +0,0 @@
|
|||||||
# Enhanced Scalping Dashboard with 1s Bars and 15min Cache - Implementation Summary
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
|
|
||||||
Successfully implemented an enhanced real-time scalping dashboard with the following key improvements:
|
|
||||||
|
|
||||||
### 🎯 Core Features Implemented
|
|
||||||
|
|
||||||
1. **1-Second OHLCV Bar Charts** (instead of tick points)
|
|
||||||
- Real-time candle aggregation from tick data
|
|
||||||
- Proper OHLCV calculation with volume tracking
|
|
||||||
- Buy/sell volume separation for enhanced analysis
|
|
||||||
|
|
||||||
2. **15-Minute Server-Side Tick Cache**
|
|
||||||
- Rolling 15-minute window of raw tick data
|
|
||||||
- Optimized for model training data access
|
|
||||||
- Thread-safe implementation with deque structures
|
|
||||||
|
|
||||||
3. **Enhanced Volume Visualization**
|
|
||||||
- Separate buy/sell volume bars
|
|
||||||
- Volume comparison charts between symbols
|
|
||||||
- Real-time volume analysis subplot
|
|
||||||
|
|
||||||
4. **Ultra-Low Latency WebSocket Streaming**
|
|
||||||
- Direct tick processing pipeline
|
|
||||||
- Minimal latency between market data and display
|
|
||||||
- Efficient data structures for real-time updates
|
|
||||||
|
|
||||||
## 📁 Files Created/Modified
|
|
||||||
|
|
||||||
### New Files:
|
|
||||||
- `web/enhanced_scalping_dashboard.py` - Main enhanced dashboard implementation
|
|
||||||
- `run_enhanced_scalping_dashboard.py` - Launcher script with configuration options
|
|
||||||
|
|
||||||
### Key Components:
|
|
||||||
|
|
||||||
#### 1. TickCache Class
|
|
||||||
```python
|
|
||||||
class TickCache:
|
|
||||||
"""15-minute tick cache for model training"""
|
|
||||||
- cache_duration_minutes: 15 (configurable)
|
|
||||||
- max_cache_size: 50,000 ticks per symbol
|
|
||||||
- Thread-safe with Lock()
|
|
||||||
- Automatic cleanup of old ticks
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 2. CandleAggregator Class
|
|
||||||
```python
|
|
||||||
class CandleAggregator:
|
|
||||||
"""Real-time 1-second candle aggregation from tick data"""
|
|
||||||
- Aggregates ticks into 1-second OHLCV bars
|
|
||||||
- Tracks buy/sell volume separately
|
|
||||||
- Maintains rolling window of 300 candles (5 minutes)
|
|
||||||
- Thread-safe implementation
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 3. TradingSession Class
|
|
||||||
```python
|
|
||||||
class TradingSession:
|
|
||||||
"""Session-based trading with $100 starting balance"""
|
|
||||||
- $100 starting balance per session
|
|
||||||
- Real-time P&L tracking
|
|
||||||
- Win rate calculation
|
|
||||||
- Trade history logging
|
|
||||||
```
|
|
||||||
|
|
||||||
#### 4. EnhancedScalpingDashboard Class
|
|
||||||
```python
|
|
||||||
class EnhancedScalpingDashboard:
|
|
||||||
"""Enhanced real-time scalping dashboard with 1s bars and 15min cache"""
|
|
||||||
- 1-second update frequency
|
|
||||||
- Multi-chart layout with volume analysis
|
|
||||||
- Real-time performance monitoring
|
|
||||||
- Background orchestrator integration
|
|
||||||
```
|
|
||||||
|
|
||||||
## 🎨 Dashboard Layout
|
|
||||||
|
|
||||||
### Header Section:
|
|
||||||
- Session ID and metrics
|
|
||||||
- Current balance and P&L
|
|
||||||
- Live ETH/USDT and BTC/USDT prices
|
|
||||||
- Cache status (total ticks)
|
|
||||||
|
|
||||||
### Main Chart (700px height):
|
|
||||||
- ETH/USDT 1-second OHLCV candlestick chart
|
|
||||||
- Volume subplot with buy/sell separation
|
|
||||||
- Trading signal overlays
|
|
||||||
- Real-time price and candle count display
|
|
||||||
|
|
||||||
### Secondary Charts:
|
|
||||||
- BTC/USDT 1-second bars (350px)
|
|
||||||
- Volume analysis comparison chart (350px)
|
|
||||||
|
|
||||||
### Status Panels:
|
|
||||||
- 15-minute tick cache details
|
|
||||||
- System performance metrics
|
|
||||||
- Live trading actions log
|
|
||||||
|
|
||||||
## 🔧 Technical Implementation
|
|
||||||
|
|
||||||
### Data Flow:
|
|
||||||
1. **Market Ticks** → DataProvider WebSocket
|
|
||||||
2. **Tick Processing** → TickCache (15min) + CandleAggregator (1s)
|
|
||||||
3. **Dashboard Updates** → 1-second callback frequency
|
|
||||||
4. **Trading Decisions** → Background orchestrator thread
|
|
||||||
5. **Chart Rendering** → Plotly with dark theme
|
|
||||||
|
|
||||||
### Performance Optimizations:
|
|
||||||
- Thread-safe data structures
|
|
||||||
- Efficient deque collections
|
|
||||||
- Minimal callback duration (<50ms target)
|
|
||||||
- Background processing for heavy operations
|
|
||||||
|
|
||||||
### Volume Analysis:
|
|
||||||
- Buy volume: Green bars (#00ff88)
|
|
||||||
- Sell volume: Red bars (#ff6b6b)
|
|
||||||
- Volume comparison between ETH and BTC
|
|
||||||
- Real-time volume trend analysis
|
|
||||||
|
|
||||||
## 🚀 Launch Instructions
|
|
||||||
|
|
||||||
### Basic Launch:
|
|
||||||
```bash
|
|
||||||
python run_enhanced_scalping_dashboard.py
|
|
||||||
```
|
|
||||||
|
|
||||||
### Advanced Options:
|
|
||||||
```bash
|
|
||||||
python run_enhanced_scalping_dashboard.py --host 0.0.0.0 --port 8051 --debug --log-level DEBUG
|
|
||||||
```
|
|
||||||
|
|
||||||
### Access Dashboard:
|
|
||||||
- URL: http://127.0.0.1:8051
|
|
||||||
- Features: 1s bars, 15min cache, enhanced volume display
|
|
||||||
- Update frequency: 1 second
|
|
||||||
|
|
||||||
## 📊 Key Metrics Displayed
|
|
||||||
|
|
||||||
### Session Metrics:
|
|
||||||
- Current balance (starts at $100)
|
|
||||||
- Session P&L (real-time)
|
|
||||||
- Win rate percentage
|
|
||||||
- Total trades executed
|
|
||||||
|
|
||||||
### Cache Statistics:
|
|
||||||
- Tick count per symbol
|
|
||||||
- Cache duration in minutes
|
|
||||||
- Candle count (1s aggregated)
|
|
||||||
- Ticks per minute rate
|
|
||||||
|
|
||||||
### System Performance:
|
|
||||||
- Callback duration (ms)
|
|
||||||
- Session duration (hours)
|
|
||||||
- Real-time performance monitoring
|
|
||||||
|
|
||||||
## 🎯 Benefits Over Previous Implementation
|
|
||||||
|
|
||||||
1. **Better Market Visualization**:
|
|
||||||
- 1s OHLCV bars provide clearer price action
|
|
||||||
- Volume analysis shows market sentiment
|
|
||||||
- Proper candlestick charts instead of scatter plots
|
|
||||||
|
|
||||||
2. **Enhanced Model Training**:
|
|
||||||
- 15-minute tick cache provides rich training data
|
|
||||||
- Real-time data pipeline for continuous learning
|
|
||||||
- Optimized data structures for fast access
|
|
||||||
|
|
||||||
3. **Improved Performance**:
|
|
||||||
- Lower latency data processing
|
|
||||||
- Efficient memory usage with rolling windows
|
|
||||||
- Thread-safe concurrent operations
|
|
||||||
|
|
||||||
4. **Professional Dashboard**:
|
|
||||||
- Clean, dark theme interface
|
|
||||||
- Multiple chart views
|
|
||||||
- Real-time status monitoring
|
|
||||||
- Trading session tracking
|
|
||||||
|
|
||||||
## 🔄 Integration with Existing System
|
|
||||||
|
|
||||||
The enhanced dashboard integrates seamlessly with:
|
|
||||||
- `core.data_provider.DataProvider` for market data
|
|
||||||
- `core.enhanced_orchestrator.EnhancedTradingOrchestrator` for trading decisions
|
|
||||||
- Existing logging and configuration systems
|
|
||||||
- Model training pipeline (via 15min tick cache)
|
|
||||||
|
|
||||||
## 📈 Next Steps
|
|
||||||
|
|
||||||
1. **Model Integration**: Use 15min tick cache for real-time model training
|
|
||||||
2. **Advanced Analytics**: Add technical indicators to 1s bars
|
|
||||||
3. **Multi-Timeframe**: Support for multiple timeframe views
|
|
||||||
4. **Alert System**: Price/volume-based notifications
|
|
||||||
5. **Export Features**: Data export for analysis
|
|
||||||
|
|
||||||
## 🎉 Success Criteria Met
|
|
||||||
|
|
||||||
✅ **1-second bar charts implemented**
|
|
||||||
✅ **15-minute tick cache operational**
|
|
||||||
✅ **Enhanced volume visualization**
|
|
||||||
✅ **Ultra-low latency streaming**
|
|
||||||
✅ **Real-time candle aggregation**
|
|
||||||
✅ **Professional dashboard interface**
|
|
||||||
✅ **Session-based trading tracking**
|
|
||||||
✅ **System performance monitoring**
|
|
||||||
|
|
||||||
The enhanced scalping dashboard is now ready for production use with significantly improved market data visualization and model training capabilities.
|
|
@ -110,6 +110,9 @@ class DQNAgent:
|
|||||||
# DQN hyperparameters
|
# DQN hyperparameters
|
||||||
self.gamma = 0.99 # Discount factor
|
self.gamma = 0.99 # Discount factor
|
||||||
|
|
||||||
|
# Initialize avg_reward for dashboard compatibility
|
||||||
|
self.avg_reward = 0.0 # Average reward tracking for dashboard
|
||||||
|
|
||||||
# Load best checkpoint if available
|
# Load best checkpoint if available
|
||||||
if self.enable_checkpoints:
|
if self.enable_checkpoints:
|
||||||
self.load_best_checkpoint()
|
self.load_best_checkpoint()
|
||||||
@ -215,7 +218,6 @@ class DQNAgent:
|
|||||||
|
|
||||||
# Performance tracking
|
# Performance tracking
|
||||||
self.losses = []
|
self.losses = []
|
||||||
self.avg_reward = 0.0
|
|
||||||
self.no_improvement_count = 0
|
self.no_improvement_count = 0
|
||||||
|
|
||||||
# Confidence tracking
|
# Confidence tracking
|
||||||
|
229
ORCHESTRATOR_STREAMLINING_PLAN.md
Normal file
229
ORCHESTRATOR_STREAMLINING_PLAN.md
Normal file
@ -0,0 +1,229 @@
|
|||||||
|
# Orchestrator Architecture Streamlining Plan
|
||||||
|
|
||||||
|
## Current State Analysis
|
||||||
|
|
||||||
|
### Basic TradingOrchestrator (`core/orchestrator.py`)
|
||||||
|
- **Size**: 880 lines
|
||||||
|
- **Purpose**: Core trading decisions, model coordination
|
||||||
|
- **Features**:
|
||||||
|
- Model registry and weight management
|
||||||
|
- CNN and RL prediction combination
|
||||||
|
- Decision callbacks
|
||||||
|
- Performance tracking
|
||||||
|
- Basic RL state building
|
||||||
|
|
||||||
|
### Enhanced TradingOrchestrator (`core/enhanced_orchestrator.py`)
|
||||||
|
- **Size**: 5,743 lines (6.5x larger!)
|
||||||
|
- **Inherits from**: TradingOrchestrator
|
||||||
|
- **Additional Features**:
|
||||||
|
- Universal Data Adapter (5 timeseries)
|
||||||
|
- COB Integration
|
||||||
|
- Neural Decision Fusion
|
||||||
|
- Multi-timeframe analysis
|
||||||
|
- Market regime detection
|
||||||
|
- Sensitivity learning
|
||||||
|
- Pivot point analysis
|
||||||
|
- Extrema detection
|
||||||
|
- Context data management
|
||||||
|
- Williams market structure
|
||||||
|
- Microstructure analysis
|
||||||
|
- Order flow analysis
|
||||||
|
- Cross-asset correlation
|
||||||
|
- PnL-aware features
|
||||||
|
- Trade flow features
|
||||||
|
- Market impact estimation
|
||||||
|
- Retrospective CNN training
|
||||||
|
- Cold start predictions
|
||||||
|
|
||||||
|
## Problems Identified
|
||||||
|
|
||||||
|
### 1. **Massive Feature Bloat**
|
||||||
|
- Enhanced orchestrator has become a "god object" with too many responsibilities
|
||||||
|
- Single class doing: trading, analysis, training, data processing, market structure, etc.
|
||||||
|
- Violates Single Responsibility Principle
|
||||||
|
|
||||||
|
### 2. **Code Duplication**
|
||||||
|
- Many features reimplemented instead of extending base functionality
|
||||||
|
- Similar RL state building in both classes
|
||||||
|
- Overlapping market analysis
|
||||||
|
|
||||||
|
### 3. **Maintenance Nightmare**
|
||||||
|
- 5,743 lines in single file is unmaintainable
|
||||||
|
- Complex interdependencies
|
||||||
|
- Hard to test individual components
|
||||||
|
- Performance issues due to size
|
||||||
|
|
||||||
|
### 4. **Resource Inefficiency**
|
||||||
|
- Loading entire enhanced orchestrator even if only basic features needed
|
||||||
|
- Memory overhead from unused features
|
||||||
|
- Slower initialization
|
||||||
|
|
||||||
|
## Proposed Solution: Modular Architecture
|
||||||
|
|
||||||
|
### 1. **Keep Streamlined Base Orchestrator**
|
||||||
|
```
|
||||||
|
TradingOrchestrator (core/orchestrator.py)
|
||||||
|
├── Basic decision making
|
||||||
|
├── Model coordination
|
||||||
|
├── Performance tracking
|
||||||
|
└── Core RL state building
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. **Create Modular Extensions**
|
||||||
|
```
|
||||||
|
core/
|
||||||
|
├── orchestrator.py (Basic - 880 lines)
|
||||||
|
├── modules/
|
||||||
|
│ ├── cob_module.py # COB integration
|
||||||
|
│ ├── market_analysis_module.py # Market regime, volatility
|
||||||
|
│ ├── multi_timeframe_module.py # Multi-TF analysis
|
||||||
|
│ ├── neural_fusion_module.py # Neural decision fusion
|
||||||
|
│ ├── pivot_analysis_module.py # Williams/pivot points
|
||||||
|
│ ├── extrema_module.py # Extrema detection
|
||||||
|
│ ├── microstructure_module.py # Order flow analysis
|
||||||
|
│ ├── correlation_module.py # Cross-asset correlation
|
||||||
|
│ └── training_module.py # Advanced training features
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. **Configurable Enhanced Orchestrator**
|
||||||
|
```python
|
||||||
|
class ConfigurableOrchestrator(TradingOrchestrator):
|
||||||
|
def __init__(self, data_provider, modules=None):
|
||||||
|
super().__init__(data_provider)
|
||||||
|
self.modules = {}
|
||||||
|
|
||||||
|
# Load only requested modules
|
||||||
|
if modules:
|
||||||
|
for module_name in modules:
|
||||||
|
self.load_module(module_name)
|
||||||
|
|
||||||
|
def load_module(self, module_name):
|
||||||
|
# Dynamically load and initialize module
|
||||||
|
pass
|
||||||
|
```
|
||||||
|
|
||||||
|
### 4. **Module Interface**
|
||||||
|
```python
|
||||||
|
class OrchestratorModule:
|
||||||
|
def __init__(self, orchestrator):
|
||||||
|
self.orchestrator = orchestrator
|
||||||
|
|
||||||
|
def initialize(self):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def get_features(self, symbol):
|
||||||
|
pass
|
||||||
|
|
||||||
|
def get_predictions(self, symbol):
|
||||||
|
pass
|
||||||
|
```
|
||||||
|
|
||||||
|
## Implementation Plan
|
||||||
|
|
||||||
|
### Phase 1: Extract Core Modules (Week 1)
|
||||||
|
1. Extract COB integration to `cob_module.py`
|
||||||
|
2. Extract market analysis to `market_analysis_module.py`
|
||||||
|
3. Extract neural fusion to `neural_fusion_module.py`
|
||||||
|
4. Test basic functionality
|
||||||
|
|
||||||
|
### Phase 2: Refactor Enhanced Features (Week 2)
|
||||||
|
1. Move pivot analysis to `pivot_analysis_module.py`
|
||||||
|
2. Move extrema detection to `extrema_module.py`
|
||||||
|
3. Move microstructure analysis to `microstructure_module.py`
|
||||||
|
4. Update imports and dependencies
|
||||||
|
|
||||||
|
### Phase 3: Create Configurable System (Week 3)
|
||||||
|
1. Implement `ConfigurableOrchestrator`
|
||||||
|
2. Create module loading system
|
||||||
|
3. Add configuration file support
|
||||||
|
4. Test different module combinations
|
||||||
|
|
||||||
|
### Phase 4: Clean Dashboard Integration (Week 4)
|
||||||
|
1. Update dashboard to work with both Basic and Configurable
|
||||||
|
2. Add module status display
|
||||||
|
3. Dynamic feature enabling/disabling
|
||||||
|
4. Performance optimization
|
||||||
|
|
||||||
|
## Benefits
|
||||||
|
|
||||||
|
### 1. **Maintainability**
|
||||||
|
- Each module ~200-400 lines (manageable)
|
||||||
|
- Clear separation of concerns
|
||||||
|
- Individual module testing
|
||||||
|
- Easier debugging
|
||||||
|
|
||||||
|
### 2. **Performance**
|
||||||
|
- Load only needed features
|
||||||
|
- Reduced memory footprint
|
||||||
|
- Faster initialization
|
||||||
|
- Better resource utilization
|
||||||
|
|
||||||
|
### 3. **Flexibility**
|
||||||
|
- Mix and match features
|
||||||
|
- Easy to add new modules
|
||||||
|
- Configuration-driven setup
|
||||||
|
- Development environment vs production
|
||||||
|
|
||||||
|
### 4. **Development**
|
||||||
|
- Teams can work on individual modules
|
||||||
|
- Clear interfaces reduce conflicts
|
||||||
|
- Easier to add new features
|
||||||
|
- Better code reuse
|
||||||
|
|
||||||
|
## Configuration Examples
|
||||||
|
|
||||||
|
### Minimal Setup (Basic Trading)
|
||||||
|
```yaml
|
||||||
|
orchestrator:
|
||||||
|
type: basic
|
||||||
|
modules: []
|
||||||
|
```
|
||||||
|
|
||||||
|
### Full Enhanced Setup
|
||||||
|
```yaml
|
||||||
|
orchestrator:
|
||||||
|
type: configurable
|
||||||
|
modules:
|
||||||
|
- cob_module
|
||||||
|
- neural_fusion_module
|
||||||
|
- market_analysis_module
|
||||||
|
- pivot_analysis_module
|
||||||
|
```
|
||||||
|
|
||||||
|
### Custom Setup (Research)
|
||||||
|
```yaml
|
||||||
|
orchestrator:
|
||||||
|
type: configurable
|
||||||
|
modules:
|
||||||
|
- market_analysis_module
|
||||||
|
- extrema_module
|
||||||
|
- training_module
|
||||||
|
```
|
||||||
|
|
||||||
|
## Migration Strategy
|
||||||
|
|
||||||
|
### 1. **Backward Compatibility**
|
||||||
|
- Keep current Enhanced orchestrator as deprecated
|
||||||
|
- Gradually migrate features to modules
|
||||||
|
- Provide compatibility layer
|
||||||
|
|
||||||
|
### 2. **Gradual Migration**
|
||||||
|
- Start with dashboard using Basic orchestrator
|
||||||
|
- Add modules one by one
|
||||||
|
- Test each integration
|
||||||
|
|
||||||
|
### 3. **Performance Testing**
|
||||||
|
- Compare Basic vs Enhanced vs Modular
|
||||||
|
- Memory usage analysis
|
||||||
|
- Initialization time comparison
|
||||||
|
- Decision-making speed tests
|
||||||
|
|
||||||
|
## Success Metrics
|
||||||
|
|
||||||
|
1. **Code Size**: Enhanced orchestrator < 1,000 lines
|
||||||
|
2. **Memory**: 50% reduction in memory usage for basic setup
|
||||||
|
3. **Speed**: 3x faster initialization for basic setup
|
||||||
|
4. **Maintainability**: Each module < 500 lines
|
||||||
|
5. **Testing**: 90%+ test coverage per module
|
||||||
|
|
||||||
|
This plan will transform the current monolithic enhanced orchestrator into a clean, modular, maintainable system while preserving all functionality and improving performance.
|
@ -1,328 +0,0 @@
|
|||||||
# Trading System - Launch Modes Guide
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
The unified trading system now provides clean, modular launch modes optimized for scalping and multi-timeframe analysis.
|
|
||||||
|
|
||||||
## Available Modes
|
|
||||||
|
|
||||||
### 1. Test Mode
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode test
|
|
||||||
```
|
|
||||||
- Tests enhanced data provider with multi-timeframe indicators
|
|
||||||
- Validates feature matrix creation (26 technical indicators)
|
|
||||||
- Checks data provider health and caching
|
|
||||||
- **Use case**: System validation and debugging
|
|
||||||
|
|
||||||
### 2. CNN Training Mode
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode cnn --symbol ETH/USDT
|
|
||||||
```
|
|
||||||
- Trains CNN models only
|
|
||||||
- Prepares multi-timeframe, multi-symbol feature matrices
|
|
||||||
- Supports timeframes: 1s, 1m, 5m, 1h, 4h
|
|
||||||
- **Use case**: Isolated CNN model development
|
|
||||||
|
|
||||||
### 3. RL Training Mode
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode rl --symbol ETH/USDT
|
|
||||||
```
|
|
||||||
- Trains RL agents only
|
|
||||||
- Focuses on 1s scalping data
|
|
||||||
- Optimized for short-term decision making
|
|
||||||
- **Use case**: Isolated RL agent development
|
|
||||||
|
|
||||||
### 4. Combined Training Mode
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode train --symbol ETH/USDT
|
|
||||||
```
|
|
||||||
- Trains both CNN and RL models sequentially
|
|
||||||
- First runs CNN training, then RL training
|
|
||||||
- **Use case**: Full model pipeline training
|
|
||||||
|
|
||||||
### 5. Live Trading Mode
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode trade --symbol ETH/USDT
|
|
||||||
```
|
|
||||||
- Runs live trading with 1s scalping focus
|
|
||||||
- Real-time data streaming integration
|
|
||||||
- **Use case**: Production trading execution
|
|
||||||
|
|
||||||
### 6. Web Dashboard Mode
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode web --demo --port 8050
|
|
||||||
```
|
|
||||||
- Enhanced scalping dashboard with 1s charts
|
|
||||||
- Real-time technical indicators visualization
|
|
||||||
- Scalping demo mode with realistic decisions
|
|
||||||
- **Use case**: System monitoring and visualization
|
|
||||||
|
|
||||||
## Key Features
|
|
||||||
|
|
||||||
### Enhanced Data Provider
|
|
||||||
- **26 Technical Indicators** including:
|
|
||||||
- Trend: SMA, EMA, MACD, ADX, PSAR
|
|
||||||
- Momentum: RSI, Stochastic, Williams %R
|
|
||||||
- Volatility: Bollinger Bands, ATR, Keltner Channels
|
|
||||||
- Volume: OBV, MFI, VWAP, Volume profiles
|
|
||||||
- Custom composites for trend/momentum
|
|
||||||
|
|
||||||
### Scalping Optimization
|
|
||||||
- **Primary timeframe: 1s** (falls back to 1m, 5m)
|
|
||||||
- High-frequency decision making
|
|
||||||
- Precise buy/sell marker positioning
|
|
||||||
- Small price movement detection
|
|
||||||
|
|
||||||
### Memory Management
|
|
||||||
- **8GB total memory limit** with per-model limits
|
|
||||||
- Automatic cleanup and GPU/CPU fallback
|
|
||||||
- Model registry with memory tracking
|
|
||||||
|
|
||||||
### Multi-Timeframe Architecture
|
|
||||||
- **Unified feature matrix**: (n_timeframes, window_size, n_features)
|
|
||||||
- Common feature set across all timeframes
|
|
||||||
- Consistent shape validation
|
|
||||||
|
|
||||||
## Quick Start Examples
|
|
||||||
|
|
||||||
### Test the enhanced system:
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode test
|
|
||||||
# Expected output: Feature matrix (2, 20, 26) with 26 indicators
|
|
||||||
```
|
|
||||||
|
|
||||||
### Start scalping dashboard:
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode web --demo
|
|
||||||
# Access: http://localhost:8050
|
|
||||||
# Shows 1s charts with scalping decisions
|
|
||||||
```
|
|
||||||
|
|
||||||
### Prepare CNN training data:
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode cnn
|
|
||||||
# Prepares multi-symbol, multi-timeframe matrices
|
|
||||||
```
|
|
||||||
|
|
||||||
### Setup RL training environment:
|
|
||||||
```bash
|
|
||||||
python main_clean.py --mode rl
|
|
||||||
# Focuses on 1s scalping data
|
|
||||||
```
|
|
||||||
|
|
||||||
## Technical Improvements
|
|
||||||
|
|
||||||
### Fixed Issues
|
|
||||||
✅ **Feature matrix shape mismatch** - Now uses common features across timeframes
|
|
||||||
✅ **Buy/sell marker positioning** - Properly aligned with chart timestamps
|
|
||||||
✅ **Chart timeframe** - Optimized for 1s scalping with fallbacks
|
|
||||||
✅ **Unicode encoding errors** - Removed problematic emoji characters
|
|
||||||
✅ **Launch configuration** - Clean, modular mode selection
|
|
||||||
|
|
||||||
### New Capabilities
|
|
||||||
🚀 **Enhanced indicators** - 26 vs previous 17 features
|
|
||||||
🚀 **Scalping focus** - 1s timeframe with dense data points
|
|
||||||
🚀 **Separate training** - CNN and RL can be trained independently
|
|
||||||
🚀 **Memory efficiency** - 8GB limit with automatic management
|
|
||||||
🚀 **Real-time charts** - Enhanced dashboard with multiple indicators
|
|
||||||
|
|
||||||
## Integration Notes
|
|
||||||
|
|
||||||
- **CNN modules**: Connect to `run_cnn_training()` function
|
|
||||||
- **RL modules**: Connect to `run_rl_training()` function
|
|
||||||
- **Live trading**: Integrate with `run_live_trading()` function
|
|
||||||
- **Custom indicators**: Add to `_add_technical_indicators()` method
|
|
||||||
|
|
||||||
## Performance Specifications
|
|
||||||
|
|
||||||
- **Data throughput**: 1s candles with 200+ data points
|
|
||||||
- **Feature processing**: 26 indicators in < 1 second
|
|
||||||
- **Memory usage**: Monitored and limited per model
|
|
||||||
- **Chart updates**: 2-second refresh for real-time display
|
|
||||||
- **Decision latency**: Optimized for scalping (< 100ms target)
|
|
||||||
|
|
||||||
## 🚀 **VSCode Launch Configurations**
|
|
||||||
|
|
||||||
### **1. Core Trading Modes**
|
|
||||||
|
|
||||||
#### **Live Trading (Demo)**
|
|
||||||
```json
|
|
||||||
"name": "Live Trading (Demo)"
|
|
||||||
"program": "main.py"
|
|
||||||
"args": ["--mode", "live", "--demo", "true", "--symbol", "ETH/USDT", "--timeframe", "1m"]
|
|
||||||
```
|
|
||||||
- **Purpose**: Safe demo trading with virtual funds
|
|
||||||
- **Environment**: Paper trading mode
|
|
||||||
- **Risk**: Zero (no real money)
|
|
||||||
|
|
||||||
#### **Live Trading (Real)**
|
|
||||||
```json
|
|
||||||
"name": "Live Trading (Real)"
|
|
||||||
"program": "main.py"
|
|
||||||
"args": ["--mode", "live", "--demo", "false", "--symbol", "ETH/USDT", "--leverage", "50"]
|
|
||||||
```
|
|
||||||
- **Purpose**: Real trading with actual funds
|
|
||||||
- **Environment**: Live exchange API
|
|
||||||
- **Risk**: High (real money)
|
|
||||||
|
|
||||||
### **2. Training & Development Modes**
|
|
||||||
|
|
||||||
#### **Train Bot**
|
|
||||||
```json
|
|
||||||
"name": "Train Bot"
|
|
||||||
"program": "main.py"
|
|
||||||
"args": ["--mode", "train", "--episodes", "100"]
|
|
||||||
```
|
|
||||||
- **Purpose**: Standard RL agent training
|
|
||||||
- **Duration**: 100 episodes
|
|
||||||
- **Output**: Trained model files
|
|
||||||
|
|
||||||
#### **Evaluate Bot**
|
|
||||||
```json
|
|
||||||
"name": "Evaluate Bot"
|
|
||||||
"program": "main.py"
|
|
||||||
"args": ["--mode", "eval", "--episodes", "10"]
|
|
||||||
```
|
|
||||||
- **Purpose**: Model performance evaluation
|
|
||||||
- **Duration**: 10 test episodes
|
|
||||||
- **Output**: Performance metrics
|
|
||||||
|
|
||||||
### **3. Neural Network Training**
|
|
||||||
|
|
||||||
#### **NN Training Pipeline**
|
|
||||||
```json
|
|
||||||
"name": "NN Training Pipeline"
|
|
||||||
"module": "NN.realtime_main"
|
|
||||||
"args": ["--mode", "train", "--model-type", "cnn", "--epochs", "10"]
|
|
||||||
```
|
|
||||||
- **Purpose**: Deep learning model training
|
|
||||||
- **Framework**: PyTorch
|
|
||||||
- **Monitoring**: Automatic TensorBoard integration
|
|
||||||
|
|
||||||
#### **Quick CNN Test (Real Data + TensorBoard)**
|
|
||||||
```json
|
|
||||||
"name": "Quick CNN Test (Real Data + TensorBoard)"
|
|
||||||
"program": "test_cnn_only.py"
|
|
||||||
```
|
|
||||||
- **Purpose**: Fast CNN validation with real market data
|
|
||||||
- **Duration**: 2 epochs, 500 samples
|
|
||||||
- **Output**: `test_models/quick_cnn.pt`
|
|
||||||
- **Monitoring**: TensorBoard metrics
|
|
||||||
|
|
||||||
### **4. 🔥 Realtime RL Training + Monitoring**
|
|
||||||
|
|
||||||
#### **Realtime RL Training + TensorBoard + Web UI**
|
|
||||||
```json
|
|
||||||
"name": "Realtime RL Training + TensorBoard + Web UI"
|
|
||||||
"program": "train_realtime_with_tensorboard.py"
|
|
||||||
"args": ["--episodes", "50", "--symbol", "ETH/USDT", "--web-port", "8051"]
|
|
||||||
```
|
|
||||||
- **Purpose**: Advanced RL training with comprehensive monitoring
|
|
||||||
- **Features**:
|
|
||||||
- Real-time TensorBoard metrics logging
|
|
||||||
- Live web dashboard at http://localhost:8051
|
|
||||||
- Episode rewards, balance tracking, win rates
|
|
||||||
- Trading performance metrics
|
|
||||||
- Agent learning progression
|
|
||||||
- **Data**: 100% real ETH/USDT market data from Binance
|
|
||||||
- **Monitoring**: Dual monitoring (TensorBoard + Web UI)
|
|
||||||
- **Duration**: 50 episodes with real-time feedback
|
|
||||||
|
|
||||||
### **5. Monitoring & Visualization**
|
|
||||||
|
|
||||||
#### **TensorBoard Monitor (All Runs)**
|
|
||||||
```json
|
|
||||||
"name": "TensorBoard Monitor (All Runs)"
|
|
||||||
"program": "run_tensorboard.py"
|
|
||||||
```
|
|
||||||
- **Purpose**: Monitor all training sessions
|
|
||||||
- **Features**: Auto-discovery of training logs
|
|
||||||
- **Access**: http://localhost:6006
|
|
||||||
|
|
||||||
#### **Realtime Charts with NN Inference**
|
|
||||||
```json
|
|
||||||
"name": "Realtime Charts with NN Inference"
|
|
||||||
"program": "realtime.py"
|
|
||||||
```
|
|
||||||
- **Purpose**: Live trading charts with ML predictions
|
|
||||||
- **Features**: Real-time price updates + model inference
|
|
||||||
- **Models**: CNN + RL integration
|
|
||||||
|
|
||||||
### **6. Advanced Training Modes**
|
|
||||||
|
|
||||||
#### **TRAIN Realtime Charts with NN Inference**
|
|
||||||
```json
|
|
||||||
"name": "TRAIN Realtime Charts with NN Inference"
|
|
||||||
"program": "train_rl_with_realtime.py"
|
|
||||||
"args": ["--episodes", "100", "--max-position", "0.1"]
|
|
||||||
```
|
|
||||||
- **Purpose**: RL training with live chart integration
|
|
||||||
- **Features**: Visual training feedback
|
|
||||||
- **Position limit**: 10% portfolio allocation
|
|
||||||
|
|
||||||
## 📊 **Monitoring URLs**
|
|
||||||
|
|
||||||
### **Development**
|
|
||||||
- **TensorBoard**: http://localhost:6006
|
|
||||||
- **Web Dashboard**: http://localhost:8051
|
|
||||||
- **Training Status**: `python monitor_training.py`
|
|
||||||
|
|
||||||
### **Production**
|
|
||||||
- **Live Trading Dashboard**: Integrated in trading interface
|
|
||||||
- **Performance Metrics**: Real-time P&L tracking
|
|
||||||
- **Risk Management**: Position size and drawdown monitoring
|
|
||||||
|
|
||||||
## 🎯 **Quick Start Recommendations**
|
|
||||||
|
|
||||||
### **For CNN Development**
|
|
||||||
1. **Start**: "Quick CNN Test (Real Data + TensorBoard)"
|
|
||||||
2. **Monitor**: Open TensorBoard at http://localhost:6006
|
|
||||||
3. **Validate**: Check `test_models/` for output files
|
|
||||||
|
|
||||||
### **For RL Development**
|
|
||||||
1. **Start**: "Realtime RL Training + TensorBoard + Web UI"
|
|
||||||
2. **Monitor**: TensorBoard (http://localhost:6006) + Web UI (http://localhost:8051)
|
|
||||||
3. **Track**: Episode rewards, balance progression, win rates
|
|
||||||
|
|
||||||
### **For Production Trading**
|
|
||||||
1. **Test**: "Live Trading (Demo)" first
|
|
||||||
2. **Validate**: Confirm strategy performance
|
|
||||||
3. **Deploy**: "Live Trading (Real)" with appropriate risk management
|
|
||||||
|
|
||||||
## ⚡ **Performance Features**
|
|
||||||
|
|
||||||
### **GPU Acceleration**
|
|
||||||
- Automatic CUDA detection and utilization
|
|
||||||
- Mixed precision training support
|
|
||||||
- Memory optimization for large datasets
|
|
||||||
|
|
||||||
### **Real-time Data**
|
|
||||||
- Direct Binance API integration
|
|
||||||
- Multi-timeframe data synchronization
|
|
||||||
- Live price feed with minimal latency
|
|
||||||
|
|
||||||
### **Professional Monitoring**
|
|
||||||
- Industry-standard TensorBoard integration
|
|
||||||
- Custom web dashboards for trading metrics
|
|
||||||
- Real-time performance tracking
|
|
||||||
|
|
||||||
## 🛡️ **Safety Features**
|
|
||||||
|
|
||||||
### **Pre-launch Tasks**
|
|
||||||
- **Kill Stale Processes**: Automatic cleanup before launch
|
|
||||||
- **Port Management**: Intelligent port allocation
|
|
||||||
- **Resource Monitoring**: Memory and GPU usage tracking
|
|
||||||
|
|
||||||
### **Real Market Data Policy**
|
|
||||||
- ✅ **No Synthetic Data**: All training uses authentic exchange data
|
|
||||||
- ✅ **Live API Integration**: Direct connection to cryptocurrency exchanges
|
|
||||||
- ✅ **Data Validation**: Quality checks for completeness and consistency
|
|
||||||
- ✅ **Multi-timeframe Sync**: Aligned data across all time horizons
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
✅ **Launch configuration** - Clean, modular mode selection
|
|
||||||
✅ **Professional monitoring** - TensorBoard + custom dashboards
|
|
||||||
✅ **Real market data** - Authentic cryptocurrency price data
|
|
||||||
✅ **Safety features** - Risk management and validation
|
|
||||||
✅ **GPU acceleration** - Optimized for high-performance training
|
|
@ -1,50 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
|
|
||||||
import json
|
|
||||||
from datetime import datetime
|
|
||||||
import time
|
|
||||||
|
|
||||||
def add_current_trade():
|
|
||||||
"""Add a trade with current timestamp for immediate visibility"""
|
|
||||||
now = datetime.now()
|
|
||||||
|
|
||||||
# Create a trade that just happened
|
|
||||||
current_trade = {
|
|
||||||
'trade_id': 999,
|
|
||||||
'symbol': 'ETHUSDT',
|
|
||||||
'side': 'LONG',
|
|
||||||
'entry_time': (now - timedelta(seconds=30)).isoformat(), # 30 seconds ago
|
|
||||||
'exit_time': now.isoformat(), # Just now
|
|
||||||
'entry_price': 2434.50,
|
|
||||||
'exit_price': 2434.70,
|
|
||||||
'size': 0.001,
|
|
||||||
'fees': 0.05,
|
|
||||||
'net_pnl': 0.15, # Small profit
|
|
||||||
'mexc_executed': True,
|
|
||||||
'duration_seconds': 30,
|
|
||||||
'leverage': 50.0,
|
|
||||||
'gross_pnl': 0.20,
|
|
||||||
'fee_type': 'TAKER',
|
|
||||||
'fee_rate': 0.0005
|
|
||||||
}
|
|
||||||
|
|
||||||
# Load existing trades
|
|
||||||
try:
|
|
||||||
with open('closed_trades_history.json', 'r') as f:
|
|
||||||
trades = json.load(f)
|
|
||||||
except:
|
|
||||||
trades = []
|
|
||||||
|
|
||||||
# Add the current trade
|
|
||||||
trades.append(current_trade)
|
|
||||||
|
|
||||||
# Save back
|
|
||||||
with open('closed_trades_history.json', 'w') as f:
|
|
||||||
json.dump(trades, f, indent=2)
|
|
||||||
|
|
||||||
print(f"✅ Added current trade: LONG @ {current_trade['entry_time']} -> {current_trade['exit_time']}")
|
|
||||||
print(f" Entry: ${current_trade['entry_price']} | Exit: ${current_trade['exit_price']} | P&L: ${current_trade['net_pnl']}")
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
from datetime import timedelta
|
|
||||||
add_current_trade()
|
|
@ -1,31 +0,0 @@
|
|||||||
import requests
|
|
||||||
|
|
||||||
# Check available API symbols
|
|
||||||
try:
|
|
||||||
resp = requests.get('https://api.mexc.com/api/v3/defaultSymbols')
|
|
||||||
data = resp.json()
|
|
||||||
print('Available API symbols:')
|
|
||||||
api_symbols = data.get('data', [])
|
|
||||||
|
|
||||||
# Show first 10
|
|
||||||
for i, symbol in enumerate(api_symbols[:10]):
|
|
||||||
print(f' {i+1}. {symbol}')
|
|
||||||
print(f' ... and {len(api_symbols) - 10} more')
|
|
||||||
|
|
||||||
# Check for common symbols
|
|
||||||
test_symbols = ['ETHUSDT', 'BTCUSDT', 'MXUSDT', 'BNBUSDT']
|
|
||||||
print('\nChecking test symbols:')
|
|
||||||
for symbol in test_symbols:
|
|
||||||
if symbol in api_symbols:
|
|
||||||
print(f'✅ {symbol} is available for API trading')
|
|
||||||
else:
|
|
||||||
print(f'❌ {symbol} is NOT available for API trading')
|
|
||||||
|
|
||||||
# Find a good symbol to test with
|
|
||||||
print('\nRecommended symbols for testing:')
|
|
||||||
common_symbols = [s for s in api_symbols if 'USDT' in s][:5]
|
|
||||||
for symbol in common_symbols:
|
|
||||||
print(f' - {symbol}')
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f'Error: {e}')
|
|
@ -1,57 +0,0 @@
|
|||||||
import requests
|
|
||||||
|
|
||||||
# Check all available ETH trading pairs on MEXC
|
|
||||||
try:
|
|
||||||
# Get all trading symbols from MEXC
|
|
||||||
resp = requests.get('https://api.mexc.com/api/v3/exchangeInfo')
|
|
||||||
data = resp.json()
|
|
||||||
|
|
||||||
print('=== ALL ETH TRADING PAIRS ON MEXC ===')
|
|
||||||
eth_symbols = []
|
|
||||||
for symbol_info in data.get('symbols', []):
|
|
||||||
symbol = symbol_info['symbol']
|
|
||||||
status = symbol_info['status']
|
|
||||||
if 'ETH' in symbol and status == 'TRADING':
|
|
||||||
eth_symbols.append({
|
|
||||||
'symbol': symbol,
|
|
||||||
'baseAsset': symbol_info['baseAsset'],
|
|
||||||
'quoteAsset': symbol_info['quoteAsset'],
|
|
||||||
'status': status
|
|
||||||
})
|
|
||||||
|
|
||||||
# Show all ETH pairs
|
|
||||||
print(f'Total ETH trading pairs: {len(eth_symbols)}')
|
|
||||||
for i, info in enumerate(eth_symbols[:20]): # Show first 20
|
|
||||||
print(f' {i+1}. {info["symbol"]} ({info["baseAsset"]}/{info["quoteAsset"]}) - {info["status"]}')
|
|
||||||
|
|
||||||
if len(eth_symbols) > 20:
|
|
||||||
print(f' ... and {len(eth_symbols) - 20} more')
|
|
||||||
|
|
||||||
# Check specifically for ETH as base asset with USDT
|
|
||||||
print('\n=== ETH BASE ASSET PAIRS ===')
|
|
||||||
eth_base_pairs = [s for s in eth_symbols if s['baseAsset'] == 'ETH']
|
|
||||||
for pair in eth_base_pairs:
|
|
||||||
print(f' - {pair["symbol"]} ({pair["baseAsset"]}/{pair["quoteAsset"]})')
|
|
||||||
|
|
||||||
# Check API symbols specifically
|
|
||||||
print('\n=== CHECKING API TRADING AVAILABILITY ===')
|
|
||||||
try:
|
|
||||||
api_resp = requests.get('https://api.mexc.com/api/v3/defaultSymbols')
|
|
||||||
api_data = api_resp.json()
|
|
||||||
api_symbols = api_data.get('data', [])
|
|
||||||
|
|
||||||
print('ETH pairs available for API trading:')
|
|
||||||
eth_api_symbols = [s for s in api_symbols if 'ETH' in s]
|
|
||||||
for symbol in eth_api_symbols:
|
|
||||||
print(f' ✅ {symbol}')
|
|
||||||
|
|
||||||
if 'ETHUSDT' in api_symbols:
|
|
||||||
print('\n✅ ETHUSDT IS available for API trading!')
|
|
||||||
else:
|
|
||||||
print('\n❌ ETHUSDT is NOT available for API trading')
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f'Error checking API symbols: {e}')
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f'Error: {e}')
|
|
@ -1,285 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Model Cleanup and Training Setup Script
|
|
||||||
|
|
||||||
This script:
|
|
||||||
1. Backs up current models
|
|
||||||
2. Cleans old/conflicting models
|
|
||||||
3. Sets up proper training progression system
|
|
||||||
4. Initializes fresh model training
|
|
||||||
"""
|
|
||||||
|
|
||||||
import os
|
|
||||||
import shutil
|
|
||||||
import json
|
|
||||||
import logging
|
|
||||||
from datetime import datetime
|
|
||||||
from pathlib import Path
|
|
||||||
import torch
|
|
||||||
|
|
||||||
# Setup logging
|
|
||||||
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
class ModelCleanupManager:
|
|
||||||
"""Manager for cleaning up and organizing model files"""
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
self.root_dir = Path(".")
|
|
||||||
self.models_dir = self.root_dir / "models"
|
|
||||||
self.backup_dir = self.root_dir / "model_backups" / f"backup_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
|
|
||||||
self.training_progress_file = self.models_dir / "training_progress.json"
|
|
||||||
|
|
||||||
# Create backup directory
|
|
||||||
self.backup_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
logger.info(f"Created backup directory: {self.backup_dir}")
|
|
||||||
|
|
||||||
def backup_existing_models(self):
|
|
||||||
"""Backup all existing models before cleanup"""
|
|
||||||
logger.info("🔄 Backing up existing models...")
|
|
||||||
|
|
||||||
model_files = [
|
|
||||||
# CNN models
|
|
||||||
"models/cnn_final_20250331_001817.pt.pt",
|
|
||||||
"models/cnn_best.pt.pt",
|
|
||||||
"models/cnn_BTC_USDT_*.pt",
|
|
||||||
"models/cnn_BTC_USD_*.pt",
|
|
||||||
|
|
||||||
# RL models
|
|
||||||
"models/trading_agent_*.pt",
|
|
||||||
"models/trading_agent_*.backup",
|
|
||||||
|
|
||||||
# Other models
|
|
||||||
"models/saved/cnn_model_best.pt"
|
|
||||||
]
|
|
||||||
|
|
||||||
# Backup model files
|
|
||||||
backup_count = 0
|
|
||||||
for pattern in model_files:
|
|
||||||
for file_path in self.root_dir.glob(pattern):
|
|
||||||
if file_path.is_file():
|
|
||||||
backup_path = self.backup_dir / file_path.relative_to(self.root_dir)
|
|
||||||
backup_path.parent.mkdir(parents=True, exist_ok=True)
|
|
||||||
shutil.copy2(file_path, backup_path)
|
|
||||||
backup_count += 1
|
|
||||||
logger.info(f" 📁 Backed up: {file_path}")
|
|
||||||
|
|
||||||
logger.info(f"✅ Backed up {backup_count} model files to {self.backup_dir}")
|
|
||||||
|
|
||||||
def clean_old_models(self):
|
|
||||||
"""Remove old/conflicting model files"""
|
|
||||||
logger.info("🧹 Cleaning old model files...")
|
|
||||||
|
|
||||||
files_to_remove = [
|
|
||||||
# Old CNN models with architecture conflicts
|
|
||||||
"models/cnn_final_20250331_001817.pt.pt",
|
|
||||||
"models/cnn_best.pt.pt",
|
|
||||||
"models/cnn_BTC_USDT_20250329_021800.pt",
|
|
||||||
"models/cnn_BTC_USDT_20250329_021448.pt",
|
|
||||||
"models/cnn_BTC_USD_20250329_020711.pt",
|
|
||||||
"models/cnn_BTC_USD_20250329_020430.pt",
|
|
||||||
"models/cnn_BTC_USD_20250329_015217.pt",
|
|
||||||
|
|
||||||
# Old RL models
|
|
||||||
"models/trading_agent_final.pt",
|
|
||||||
"models/trading_agent_best_pnl.pt",
|
|
||||||
"models/trading_agent_best_reward.pt",
|
|
||||||
"models/trading_agent_final.pt.backup",
|
|
||||||
"models/trading_agent_best_net_pnl.pt",
|
|
||||||
"models/trading_agent_best_net_pnl.pt.backup",
|
|
||||||
"models/trading_agent_best_pnl.pt.backup",
|
|
||||||
"models/trading_agent_best_reward.pt.backup",
|
|
||||||
"models/trading_agent_live_trained.pt",
|
|
||||||
|
|
||||||
# Checkpoint files
|
|
||||||
"models/trading_agent_checkpoint_1650.pt.minimal",
|
|
||||||
"models/trading_agent_checkpoint_1650.pt.params.json",
|
|
||||||
"models/trading_agent_best_net_pnl.pt.policy.jit",
|
|
||||||
"models/trading_agent_best_net_pnl.pt.params.json",
|
|
||||||
"models/trading_agent_best_pnl.pt.params.json"
|
|
||||||
]
|
|
||||||
|
|
||||||
removed_count = 0
|
|
||||||
for file_path in files_to_remove:
|
|
||||||
path = Path(file_path)
|
|
||||||
if path.exists():
|
|
||||||
path.unlink()
|
|
||||||
removed_count += 1
|
|
||||||
logger.info(f" 🗑️ Removed: {path}")
|
|
||||||
|
|
||||||
logger.info(f"✅ Removed {removed_count} old model files")
|
|
||||||
|
|
||||||
def setup_training_progression(self):
|
|
||||||
"""Set up training progression tracking system"""
|
|
||||||
logger.info("📊 Setting up training progression system...")
|
|
||||||
|
|
||||||
# Create training progress structure
|
|
||||||
training_progress = {
|
|
||||||
"created": datetime.now().isoformat(),
|
|
||||||
"version": "1.0",
|
|
||||||
"models": {
|
|
||||||
"cnn": {
|
|
||||||
"current_version": 1,
|
|
||||||
"best_model": None,
|
|
||||||
"training_history": [],
|
|
||||||
"architecture": {
|
|
||||||
"input_channels": 5,
|
|
||||||
"window_size": 20,
|
|
||||||
"output_classes": 3
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"rl": {
|
|
||||||
"current_version": 1,
|
|
||||||
"best_model": None,
|
|
||||||
"training_history": [],
|
|
||||||
"architecture": {
|
|
||||||
"state_size": 100,
|
|
||||||
"action_space": 3,
|
|
||||||
"hidden_size": 256
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"williams_cnn": {
|
|
||||||
"current_version": 1,
|
|
||||||
"best_model": None,
|
|
||||||
"training_history": [],
|
|
||||||
"architecture": {
|
|
||||||
"input_shape": [900, 50],
|
|
||||||
"output_size": 10,
|
|
||||||
"enabled": False # Disabled until TensorFlow available
|
|
||||||
}
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"training_stats": {
|
|
||||||
"total_sessions": 0,
|
|
||||||
"best_accuracy": 0.0,
|
|
||||||
"best_pnl": 0.0,
|
|
||||||
"last_training": None
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
# Save training progress
|
|
||||||
with open(self.training_progress_file, 'w') as f:
|
|
||||||
json.dump(training_progress, f, indent=2)
|
|
||||||
|
|
||||||
logger.info(f"✅ Created training progress file: {self.training_progress_file}")
|
|
||||||
|
|
||||||
def create_model_directories(self):
|
|
||||||
"""Create clean model directory structure"""
|
|
||||||
logger.info("📁 Creating clean model directory structure...")
|
|
||||||
|
|
||||||
directories = [
|
|
||||||
"models/cnn/current",
|
|
||||||
"models/cnn/training",
|
|
||||||
"models/cnn/best",
|
|
||||||
"models/rl/current",
|
|
||||||
"models/rl/training",
|
|
||||||
"models/rl/best",
|
|
||||||
"models/williams_cnn/current",
|
|
||||||
"models/williams_cnn/training",
|
|
||||||
"models/williams_cnn/best",
|
|
||||||
"models/checkpoints",
|
|
||||||
"models/training_logs"
|
|
||||||
]
|
|
||||||
|
|
||||||
for directory in directories:
|
|
||||||
Path(directory).mkdir(parents=True, exist_ok=True)
|
|
||||||
logger.info(f" 📂 Created: {directory}")
|
|
||||||
|
|
||||||
logger.info("✅ Model directory structure created")
|
|
||||||
|
|
||||||
def initialize_fresh_models(self):
|
|
||||||
"""Initialize fresh model files for training"""
|
|
||||||
logger.info("🆕 Initializing fresh models...")
|
|
||||||
|
|
||||||
# Keep only the essential saved model
|
|
||||||
essential_models = ["models/saved/cnn_model_best.pt"]
|
|
||||||
|
|
||||||
for model_path in essential_models:
|
|
||||||
if Path(model_path).exists():
|
|
||||||
logger.info(f" ✅ Keeping essential model: {model_path}")
|
|
||||||
else:
|
|
||||||
logger.warning(f" ⚠️ Essential model not found: {model_path}")
|
|
||||||
|
|
||||||
logger.info("✅ Fresh model initialization complete")
|
|
||||||
|
|
||||||
def update_model_registry(self):
|
|
||||||
"""Update model registry to use new structure"""
|
|
||||||
logger.info("⚙️ Updating model registry configuration...")
|
|
||||||
|
|
||||||
registry_config = {
|
|
||||||
"model_paths": {
|
|
||||||
"cnn_current": "models/cnn/current/",
|
|
||||||
"cnn_best": "models/cnn/best/",
|
|
||||||
"rl_current": "models/rl/current/",
|
|
||||||
"rl_best": "models/rl/best/",
|
|
||||||
"williams_current": "models/williams_cnn/current/",
|
|
||||||
"williams_best": "models/williams_cnn/best/"
|
|
||||||
},
|
|
||||||
"auto_load_best": True,
|
|
||||||
"memory_limit_gb": 8.0,
|
|
||||||
"training_enabled": True
|
|
||||||
}
|
|
||||||
|
|
||||||
config_path = Path("models/registry_config.json")
|
|
||||||
with open(config_path, 'w') as f:
|
|
||||||
json.dump(registry_config, f, indent=2)
|
|
||||||
|
|
||||||
logger.info(f"✅ Model registry config saved: {config_path}")
|
|
||||||
|
|
||||||
def run_cleanup(self):
|
|
||||||
"""Execute complete cleanup and setup process"""
|
|
||||||
logger.info("🚀 Starting model cleanup and setup process...")
|
|
||||||
logger.info("=" * 60)
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Step 1: Backup existing models
|
|
||||||
self.backup_existing_models()
|
|
||||||
|
|
||||||
# Step 2: Clean old conflicting models
|
|
||||||
self.clean_old_models()
|
|
||||||
|
|
||||||
# Step 3: Setup training progression system
|
|
||||||
self.setup_training_progression()
|
|
||||||
|
|
||||||
# Step 4: Create clean directory structure
|
|
||||||
self.create_model_directories()
|
|
||||||
|
|
||||||
# Step 5: Initialize fresh models
|
|
||||||
self.initialize_fresh_models()
|
|
||||||
|
|
||||||
# Step 6: Update model registry
|
|
||||||
self.update_model_registry()
|
|
||||||
|
|
||||||
logger.info("=" * 60)
|
|
||||||
logger.info("✅ Model cleanup and setup completed successfully!")
|
|
||||||
logger.info(f"📁 Backup created at: {self.backup_dir}")
|
|
||||||
logger.info("🔄 Ready for fresh training with enhanced RL!")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Error during cleanup: {e}")
|
|
||||||
import traceback
|
|
||||||
logger.error(traceback.format_exc())
|
|
||||||
raise
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Main execution function"""
|
|
||||||
print("🧹 MODEL CLEANUP AND TRAINING SETUP")
|
|
||||||
print("=" * 50)
|
|
||||||
print("This script will:")
|
|
||||||
print("1. Backup existing models")
|
|
||||||
print("2. Remove old/conflicting models")
|
|
||||||
print("3. Set up training progression tracking")
|
|
||||||
print("4. Create clean directory structure")
|
|
||||||
print("5. Initialize fresh training environment")
|
|
||||||
print("=" * 50)
|
|
||||||
|
|
||||||
response = input("Continue? (y/N): ").strip().lower()
|
|
||||||
if response != 'y':
|
|
||||||
print("❌ Cleanup cancelled")
|
|
||||||
return
|
|
||||||
|
|
||||||
cleanup_manager = ModelCleanupManager()
|
|
||||||
cleanup_manager.run_cleanup()
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
File diff suppressed because it is too large
Load Diff
@ -1,268 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Increase GPU Utilization for Training
|
|
||||||
|
|
||||||
This script provides optimizations to maximize GPU usage during training.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import torch
|
|
||||||
import torch.nn as nn
|
|
||||||
import numpy as np
|
|
||||||
import logging
|
|
||||||
from pathlib import Path
|
|
||||||
import sys
|
|
||||||
|
|
||||||
# Add project root to path
|
|
||||||
project_root = Path(__file__).parent
|
|
||||||
sys.path.insert(0, str(project_root))
|
|
||||||
|
|
||||||
logging.basicConfig(level=logging.INFO)
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
def optimize_training_for_gpu():
|
|
||||||
"""Optimize training settings for maximum GPU utilization"""
|
|
||||||
|
|
||||||
print("🚀 GPU TRAINING OPTIMIZATION GUIDE")
|
|
||||||
print("=" * 50)
|
|
||||||
|
|
||||||
# Check current GPU setup
|
|
||||||
if torch.cuda.is_available():
|
|
||||||
gpu_name = torch.cuda.get_device_name(0)
|
|
||||||
gpu_memory = torch.cuda.get_device_properties(0).total_memory / 1024**3
|
|
||||||
print(f"GPU: {gpu_name}")
|
|
||||||
print(f"VRAM: {gpu_memory:.1f} GB")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# Calculate optimal batch sizes
|
|
||||||
print("📊 OPTIMAL BATCH SIZES:")
|
|
||||||
print("Current batch sizes:")
|
|
||||||
print(" - DQN Agent: 128")
|
|
||||||
print(" - CNN Model: 32")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# For RTX 4060 with 8GB VRAM, we can increase batch sizes
|
|
||||||
if gpu_memory >= 7.5: # RTX 4060 has ~8GB
|
|
||||||
print("🔥 RECOMMENDED OPTIMIZATIONS:")
|
|
||||||
print(" 1. Increase DQN batch size: 128 → 256 or 512")
|
|
||||||
print(" 2. Increase CNN batch size: 32 → 64 or 128")
|
|
||||||
print(" 3. Use larger model variants")
|
|
||||||
print(" 4. Enable gradient accumulation")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# Show memory usage estimates
|
|
||||||
print("💾 MEMORY USAGE ESTIMATES:")
|
|
||||||
print(" - Current DQN (24M params): ~1.5GB")
|
|
||||||
print(" - Current CNN (168M params): ~3.2GB")
|
|
||||||
print(" - Available for larger batches: ~3GB")
|
|
||||||
print()
|
|
||||||
|
|
||||||
print("⚡ PERFORMANCE OPTIMIZATIONS:")
|
|
||||||
print(" 1. ✅ Mixed precision training (already enabled)")
|
|
||||||
print(" 2. ✅ GPU tensors (already enabled)")
|
|
||||||
print(" 3. 🔧 Increase batch sizes")
|
|
||||||
print(" 4. 🔧 Use DataLoader with multiple workers")
|
|
||||||
print(" 5. 🔧 Pin memory for faster transfers")
|
|
||||||
print(" 6. 🔧 Compile models with torch.compile()")
|
|
||||||
print()
|
|
||||||
|
|
||||||
else:
|
|
||||||
print("❌ No GPU available")
|
|
||||||
return False
|
|
||||||
|
|
||||||
return True
|
|
||||||
|
|
||||||
def create_optimized_training_config():
|
|
||||||
"""Create optimized training configuration"""
|
|
||||||
|
|
||||||
config = {
|
|
||||||
# DQN Optimizations
|
|
||||||
'dqn': {
|
|
||||||
'batch_size': 512, # Increased from 128
|
|
||||||
'buffer_size': 100000, # Increased from 20000
|
|
||||||
'learning_rate': 0.0003, # Slightly reduced for stability
|
|
||||||
'target_update': 10, # More frequent updates
|
|
||||||
'gradient_accumulation_steps': 2, # Accumulate gradients
|
|
||||||
},
|
|
||||||
|
|
||||||
# CNN Optimizations
|
|
||||||
'cnn': {
|
|
||||||
'batch_size': 128, # Increased from 32
|
|
||||||
'learning_rate': 0.001,
|
|
||||||
'epochs': 200, # More epochs for better learning
|
|
||||||
'gradient_accumulation_steps': 4,
|
|
||||||
},
|
|
||||||
|
|
||||||
# Data Loading Optimizations
|
|
||||||
'data_loading': {
|
|
||||||
'num_workers': 4, # Parallel data loading
|
|
||||||
'pin_memory': True, # Faster CPU->GPU transfers
|
|
||||||
'persistent_workers': True, # Keep workers alive
|
|
||||||
},
|
|
||||||
|
|
||||||
# GPU Optimizations
|
|
||||||
'gpu': {
|
|
||||||
'mixed_precision': True,
|
|
||||||
'compile_model': True, # Use torch.compile for speed
|
|
||||||
'channels_last': True, # Memory layout optimization
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return config
|
|
||||||
|
|
||||||
def apply_gpu_optimizations():
|
|
||||||
"""Apply GPU optimizations to existing models"""
|
|
||||||
|
|
||||||
print("🔧 APPLYING GPU OPTIMIZATIONS...")
|
|
||||||
print()
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Test optimized DQN training
|
|
||||||
from NN.models.dqn_agent import DQNAgent
|
|
||||||
|
|
||||||
print("1. Testing optimized DQN Agent...")
|
|
||||||
|
|
||||||
# Create agent with larger batch size
|
|
||||||
agent = DQNAgent(
|
|
||||||
state_shape=(100,),
|
|
||||||
n_actions=3,
|
|
||||||
batch_size=512, # Increased batch size
|
|
||||||
buffer_size=100000, # Larger memory
|
|
||||||
learning_rate=0.0003
|
|
||||||
)
|
|
||||||
|
|
||||||
print(f" ✅ DQN Agent with batch size {agent.batch_size}")
|
|
||||||
print(f" ✅ Memory buffer size: {agent.buffer_size:,}")
|
|
||||||
|
|
||||||
# Test larger batch training
|
|
||||||
print(" Testing larger batch training...")
|
|
||||||
|
|
||||||
# Add many experiences
|
|
||||||
for i in range(1000):
|
|
||||||
state = np.random.randn(100).astype(np.float32)
|
|
||||||
action = np.random.randint(0, 3)
|
|
||||||
reward = np.random.randn() * 0.1
|
|
||||||
next_state = np.random.randn(100).astype(np.float32)
|
|
||||||
done = np.random.random() < 0.1
|
|
||||||
agent.remember(state, action, reward, next_state, done)
|
|
||||||
|
|
||||||
# Train with larger batch
|
|
||||||
loss = agent.replay()
|
|
||||||
if loss > 0:
|
|
||||||
print(f" ✅ Large batch training successful, loss: {loss:.4f}")
|
|
||||||
|
|
||||||
print()
|
|
||||||
|
|
||||||
# Test optimized CNN
|
|
||||||
from NN.models.enhanced_cnn import EnhancedCNN
|
|
||||||
|
|
||||||
print("2. Testing optimized CNN...")
|
|
||||||
|
|
||||||
model = EnhancedCNN((3, 20, 26), 3)
|
|
||||||
|
|
||||||
# Test larger batch
|
|
||||||
batch_size = 128 # Increased from 32
|
|
||||||
x = torch.randn(batch_size, 3, 20, 26, device=model.device)
|
|
||||||
|
|
||||||
print(f" Testing batch size: {batch_size}")
|
|
||||||
|
|
||||||
# Forward pass
|
|
||||||
outputs = model(x)
|
|
||||||
if isinstance(outputs, tuple):
|
|
||||||
print(f" ✅ Large batch forward pass successful")
|
|
||||||
print(f" ✅ Output shape: {outputs[0].shape}")
|
|
||||||
|
|
||||||
print()
|
|
||||||
|
|
||||||
# Memory usage check
|
|
||||||
if torch.cuda.is_available():
|
|
||||||
memory_used = torch.cuda.memory_allocated() / 1024**3
|
|
||||||
memory_total = torch.cuda.get_device_properties(0).total_memory / 1024**3
|
|
||||||
memory_percent = (memory_used / memory_total) * 100
|
|
||||||
|
|
||||||
print(f"📊 GPU Memory Usage:")
|
|
||||||
print(f" Used: {memory_used:.2f} GB / {memory_total:.1f} GB ({memory_percent:.1f}%)")
|
|
||||||
|
|
||||||
if memory_percent < 70:
|
|
||||||
print(f" 💡 You can increase batch sizes further!")
|
|
||||||
elif memory_percent > 90:
|
|
||||||
print(f" ⚠️ Consider reducing batch sizes")
|
|
||||||
else:
|
|
||||||
print(f" ✅ Good memory utilization")
|
|
||||||
|
|
||||||
print()
|
|
||||||
print("🎉 GPU OPTIMIZATIONS APPLIED SUCCESSFULLY!")
|
|
||||||
print()
|
|
||||||
print("📝 NEXT STEPS:")
|
|
||||||
print(" 1. Update your training scripts with larger batch sizes")
|
|
||||||
print(" 2. Use the optimized configurations")
|
|
||||||
print(" 3. Monitor GPU utilization during training")
|
|
||||||
print(" 4. Adjust batch sizes based on memory usage")
|
|
||||||
|
|
||||||
return True
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"❌ Error applying optimizations: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
return False
|
|
||||||
|
|
||||||
def monitor_gpu_during_training():
|
|
||||||
"""Show how to monitor GPU during training"""
|
|
||||||
|
|
||||||
print("📊 GPU MONITORING DURING TRAINING")
|
|
||||||
print("=" * 40)
|
|
||||||
print()
|
|
||||||
print("Use these commands to monitor GPU utilization:")
|
|
||||||
print()
|
|
||||||
print("1. NVIDIA System Management Interface:")
|
|
||||||
print(" nvidia-smi -l 1")
|
|
||||||
print(" (Updates every 1 second)")
|
|
||||||
print()
|
|
||||||
print("2. Continuous monitoring:")
|
|
||||||
print(" watch -n 1 nvidia-smi")
|
|
||||||
print()
|
|
||||||
print("3. Python GPU monitoring:")
|
|
||||||
print(" python -c \"import GPUtil; GPUtil.showUtilization()\"")
|
|
||||||
print()
|
|
||||||
print("4. Memory monitoring in your training script:")
|
|
||||||
print(" if torch.cuda.is_available():")
|
|
||||||
print(" print(f'GPU Memory: {torch.cuda.memory_allocated()/1024**3:.2f}GB')")
|
|
||||||
print()
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Main optimization function"""
|
|
||||||
|
|
||||||
print("🚀 GPU TRAINING OPTIMIZATION TOOL")
|
|
||||||
print("=" * 50)
|
|
||||||
print()
|
|
||||||
|
|
||||||
# Check GPU setup
|
|
||||||
if not optimize_training_for_gpu():
|
|
||||||
return 1
|
|
||||||
|
|
||||||
# Show optimized config
|
|
||||||
config = create_optimized_training_config()
|
|
||||||
print("⚙️ OPTIMIZED CONFIGURATION:")
|
|
||||||
for section, settings in config.items():
|
|
||||||
print(f" {section.upper()}:")
|
|
||||||
for key, value in settings.items():
|
|
||||||
print(f" {key}: {value}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# Apply optimizations
|
|
||||||
if not apply_gpu_optimizations():
|
|
||||||
return 1
|
|
||||||
|
|
||||||
# Show monitoring info
|
|
||||||
monitor_gpu_during_training()
|
|
||||||
|
|
||||||
print("✅ OPTIMIZATION COMPLETE!")
|
|
||||||
print()
|
|
||||||
print("Your training is working correctly with GPU!")
|
|
||||||
print("Use the optimizations above to increase GPU utilization.")
|
|
||||||
|
|
||||||
return 0
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
exit_code = main()
|
|
||||||
sys.exit(exit_code)
|
|
37
main.py
37
main.py
@ -51,7 +51,7 @@ async def run_web_dashboard():
|
|||||||
|
|
||||||
# Initialize core components for streamlined pipeline
|
# Initialize core components for streamlined pipeline
|
||||||
from core.data_provider import DataProvider
|
from core.data_provider import DataProvider
|
||||||
from core.enhanced_orchestrator import EnhancedTradingOrchestrator
|
from core.orchestrator import TradingOrchestrator
|
||||||
from core.trading_executor import TradingExecutor
|
from core.trading_executor import TradingExecutor
|
||||||
|
|
||||||
# Create data provider
|
# Create data provider
|
||||||
@ -89,26 +89,16 @@ async def run_web_dashboard():
|
|||||||
training_integration = get_training_integration()
|
training_integration = get_training_integration()
|
||||||
logger.info("Checkpoint management initialized for training pipeline")
|
logger.info("Checkpoint management initialized for training pipeline")
|
||||||
|
|
||||||
# Create streamlined orchestrator with 2-action system and always-invested approach
|
# Create basic orchestrator for stability
|
||||||
orchestrator = EnhancedTradingOrchestrator(
|
orchestrator = TradingOrchestrator(data_provider)
|
||||||
data_provider=data_provider,
|
logger.info("Basic Trading Orchestrator initialized for stability")
|
||||||
symbols=config.get('symbols', ['ETH/USDT']),
|
logger.info("Using Basic orchestrator - stable and efficient")
|
||||||
enhanced_rl_training=True,
|
|
||||||
model_registry=model_registry
|
|
||||||
)
|
|
||||||
logger.info("Enhanced Trading Orchestrator with 2-Action System initialized")
|
|
||||||
logger.info("Always Invested: Learning to spot high risk/reward setups")
|
|
||||||
|
|
||||||
# Checkpoint management will be handled in the training loop
|
# Checkpoint management will be handled in the training loop
|
||||||
logger.info("Checkpoint management will be initialized in training loop")
|
logger.info("Checkpoint management will be initialized in training loop")
|
||||||
|
|
||||||
# Start COB integration for real-time market microstructure
|
# COB integration not available in Basic orchestrator
|
||||||
try:
|
logger.info("COB Integration not available - using Basic orchestrator")
|
||||||
# Create and start COB integration task
|
|
||||||
cob_task = asyncio.create_task(orchestrator.start_cob_integration())
|
|
||||||
logger.info("COB Integration startup task created")
|
|
||||||
except Exception as e:
|
|
||||||
logger.warning(f"COB Integration startup failed (will retry): {e}")
|
|
||||||
|
|
||||||
# Create trading executor for live execution
|
# Create trading executor for live execution
|
||||||
trading_executor = TradingExecutor()
|
trading_executor = TradingExecutor()
|
||||||
@ -144,10 +134,10 @@ def start_web_ui(port=8051):
|
|||||||
logger.info("COB Integration: ENABLED (Real-time order book visualization)")
|
logger.info("COB Integration: ENABLED (Real-time order book visualization)")
|
||||||
logger.info("=" * 50)
|
logger.info("=" * 50)
|
||||||
|
|
||||||
# Import and create the Clean Trading Dashboard with COB integration
|
# Import and create the Clean Trading Dashboard
|
||||||
from web.clean_dashboard import CleanTradingDashboard
|
from web.clean_dashboard import CleanTradingDashboard
|
||||||
from core.data_provider import DataProvider
|
from core.data_provider import DataProvider
|
||||||
from core.enhanced_orchestrator import EnhancedTradingOrchestrator # Use enhanced version with COB
|
from core.orchestrator import TradingOrchestrator
|
||||||
from core.trading_executor import TradingExecutor
|
from core.trading_executor import TradingExecutor
|
||||||
|
|
||||||
# Initialize components for the dashboard
|
# Initialize components for the dashboard
|
||||||
@ -178,13 +168,8 @@ def start_web_ui(port=8051):
|
|||||||
dashboard_checkpoint_manager = get_checkpoint_manager()
|
dashboard_checkpoint_manager = get_checkpoint_manager()
|
||||||
dashboard_training_integration = get_training_integration()
|
dashboard_training_integration = get_training_integration()
|
||||||
|
|
||||||
# Create enhanced orchestrator for the dashboard (WITH COB integration)
|
# Create basic orchestrator for the dashboard
|
||||||
dashboard_orchestrator = EnhancedTradingOrchestrator(
|
dashboard_orchestrator = TradingOrchestrator(data_provider)
|
||||||
data_provider=data_provider,
|
|
||||||
symbols=config.get('symbols', ['ETH/USDT']),
|
|
||||||
enhanced_rl_training=True, # Enable RL training display
|
|
||||||
model_registry=model_registry
|
|
||||||
)
|
|
||||||
|
|
||||||
trading_executor = TradingExecutor("config.yaml")
|
trading_executor = TradingExecutor("config.yaml")
|
||||||
|
|
||||||
|
@ -1,230 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Minimal Scalping Dashboard - Test callback functionality without emoji issues
|
|
||||||
"""
|
|
||||||
|
|
||||||
import logging
|
|
||||||
import sys
|
|
||||||
from pathlib import Path
|
|
||||||
from datetime import datetime
|
|
||||||
import pandas as pd
|
|
||||||
import numpy as np
|
|
||||||
|
|
||||||
# Add project root to path
|
|
||||||
project_root = Path(__file__).parent
|
|
||||||
sys.path.insert(0, str(project_root))
|
|
||||||
|
|
||||||
import dash
|
|
||||||
from dash import dcc, html, Input, Output
|
|
||||||
import plotly.graph_objects as go
|
|
||||||
|
|
||||||
from core.config import setup_logging
|
|
||||||
from core.data_provider import DataProvider
|
|
||||||
|
|
||||||
# Setup logging without emojis
|
|
||||||
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
class MinimalDashboard:
|
|
||||||
"""Minimal dashboard to test callback functionality"""
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
self.data_provider = DataProvider()
|
|
||||||
self.app = dash.Dash(__name__)
|
|
||||||
self.chart_data = {}
|
|
||||||
|
|
||||||
# Setup layout and callbacks
|
|
||||||
self._setup_layout()
|
|
||||||
self._setup_callbacks()
|
|
||||||
|
|
||||||
logger.info("Minimal dashboard initialized")
|
|
||||||
|
|
||||||
def _setup_layout(self):
|
|
||||||
"""Setup minimal layout"""
|
|
||||||
self.app.layout = html.Div([
|
|
||||||
html.H1("Minimal Scalping Dashboard - Callback Test", className="text-center"),
|
|
||||||
|
|
||||||
# Metrics row
|
|
||||||
html.Div([
|
|
||||||
html.Div([
|
|
||||||
html.H3(id="current-time", className="text-center"),
|
|
||||||
html.P("Current Time", className="text-center")
|
|
||||||
], className="col-md-3"),
|
|
||||||
|
|
||||||
html.Div([
|
|
||||||
html.H3(id="update-counter", className="text-center"),
|
|
||||||
html.P("Update Count", className="text-center")
|
|
||||||
], className="col-md-3"),
|
|
||||||
|
|
||||||
html.Div([
|
|
||||||
html.H3(id="eth-price", className="text-center"),
|
|
||||||
html.P("ETH Price", className="text-center")
|
|
||||||
], className="col-md-3"),
|
|
||||||
|
|
||||||
html.Div([
|
|
||||||
html.H3(id="status", className="text-center"),
|
|
||||||
html.P("Status", className="text-center")
|
|
||||||
], className="col-md-3")
|
|
||||||
], className="row mb-4"),
|
|
||||||
|
|
||||||
# Chart
|
|
||||||
html.Div([
|
|
||||||
dcc.Graph(id="main-chart", style={"height": "400px"})
|
|
||||||
]),
|
|
||||||
|
|
||||||
# Fast refresh interval
|
|
||||||
dcc.Interval(
|
|
||||||
id='fast-interval',
|
|
||||||
interval=1000, # 1 second
|
|
||||||
n_intervals=0
|
|
||||||
)
|
|
||||||
], className="container-fluid")
|
|
||||||
|
|
||||||
def _setup_callbacks(self):
|
|
||||||
"""Setup callbacks with proper scoping"""
|
|
||||||
|
|
||||||
# Store reference to self for callback access
|
|
||||||
dashboard_instance = self
|
|
||||||
|
|
||||||
@self.app.callback(
|
|
||||||
[
|
|
||||||
Output('current-time', 'children'),
|
|
||||||
Output('update-counter', 'children'),
|
|
||||||
Output('eth-price', 'children'),
|
|
||||||
Output('status', 'children'),
|
|
||||||
Output('main-chart', 'figure')
|
|
||||||
],
|
|
||||||
[Input('fast-interval', 'n_intervals')]
|
|
||||||
)
|
|
||||||
def update_dashboard(n_intervals):
|
|
||||||
"""Update dashboard components"""
|
|
||||||
try:
|
|
||||||
logger.info(f"Callback triggered, interval: {n_intervals}")
|
|
||||||
|
|
||||||
# Get current time
|
|
||||||
current_time = datetime.now().strftime("%H:%M:%S")
|
|
||||||
|
|
||||||
# Update counter
|
|
||||||
counter = f"Updates: {n_intervals}"
|
|
||||||
|
|
||||||
# Try to get ETH price
|
|
||||||
try:
|
|
||||||
eth_price_data = dashboard_instance.data_provider.get_current_price('ETH/USDT')
|
|
||||||
eth_price = f"${eth_price_data:.2f}" if eth_price_data else "Loading..."
|
|
||||||
except Exception as e:
|
|
||||||
logger.warning(f"Error getting ETH price: {e}")
|
|
||||||
eth_price = "Error"
|
|
||||||
|
|
||||||
# Status
|
|
||||||
status = "Running" if n_intervals > 0 else "Starting"
|
|
||||||
|
|
||||||
# Create chart
|
|
||||||
try:
|
|
||||||
chart = dashboard_instance._create_chart(n_intervals)
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error creating chart: {e}")
|
|
||||||
chart = dashboard_instance._create_error_chart()
|
|
||||||
|
|
||||||
logger.info(f"Callback returning: time={current_time}, counter={counter}, price={eth_price}")
|
|
||||||
|
|
||||||
return current_time, counter, eth_price, status, chart
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error in callback: {e}")
|
|
||||||
import traceback
|
|
||||||
logger.error(f"Traceback: {traceback.format_exc()}")
|
|
||||||
|
|
||||||
# Return safe fallback values
|
|
||||||
return "Error", "Error", "Error", "Error", dashboard_instance._create_error_chart()
|
|
||||||
|
|
||||||
def _create_chart(self, n_intervals):
|
|
||||||
"""Create a simple test chart"""
|
|
||||||
try:
|
|
||||||
# Try to get real data
|
|
||||||
if n_intervals % 5 == 0: # Refresh data every 5 seconds
|
|
||||||
try:
|
|
||||||
df = self.data_provider.get_historical_data('ETH/USDT', '1m', limit=50)
|
|
||||||
if df is not None and not df.empty:
|
|
||||||
self.chart_data = df
|
|
||||||
logger.info(f"Fetched {len(df)} candles for chart")
|
|
||||||
except Exception as e:
|
|
||||||
logger.warning(f"Error fetching data: {e}")
|
|
||||||
|
|
||||||
# Create chart
|
|
||||||
fig = go.Figure()
|
|
||||||
|
|
||||||
if hasattr(self, 'chart_data') and not self.chart_data.empty:
|
|
||||||
# Real data chart
|
|
||||||
fig.add_trace(go.Candlestick(
|
|
||||||
x=self.chart_data['timestamp'],
|
|
||||||
open=self.chart_data['open'],
|
|
||||||
high=self.chart_data['high'],
|
|
||||||
low=self.chart_data['low'],
|
|
||||||
close=self.chart_data['close'],
|
|
||||||
name='ETH/USDT'
|
|
||||||
))
|
|
||||||
title = f"ETH/USDT Real Data - Update #{n_intervals}"
|
|
||||||
else:
|
|
||||||
# Mock data chart
|
|
||||||
x_data = list(range(max(0, n_intervals-20), n_intervals + 1))
|
|
||||||
y_data = [3500 + 50 * np.sin(i/5) + 10 * np.random.randn() for i in x_data]
|
|
||||||
|
|
||||||
fig.add_trace(go.Scatter(
|
|
||||||
x=x_data,
|
|
||||||
y=y_data,
|
|
||||||
mode='lines',
|
|
||||||
name='Mock ETH Price',
|
|
||||||
line=dict(color='#00ff88')
|
|
||||||
))
|
|
||||||
title = f"Mock ETH Data - Update #{n_intervals}"
|
|
||||||
|
|
||||||
fig.update_layout(
|
|
||||||
title=title,
|
|
||||||
template="plotly_dark",
|
|
||||||
paper_bgcolor='#1e1e1e',
|
|
||||||
plot_bgcolor='#1e1e1e',
|
|
||||||
showlegend=False
|
|
||||||
)
|
|
||||||
|
|
||||||
return fig
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error in _create_chart: {e}")
|
|
||||||
return self._create_error_chart()
|
|
||||||
|
|
||||||
def _create_error_chart(self):
|
|
||||||
"""Create error chart"""
|
|
||||||
fig = go.Figure()
|
|
||||||
fig.add_annotation(
|
|
||||||
text="Error loading chart data",
|
|
||||||
xref="paper", yref="paper",
|
|
||||||
x=0.5, y=0.5, showarrow=False,
|
|
||||||
font=dict(size=16, color="#ff4444")
|
|
||||||
)
|
|
||||||
fig.update_layout(
|
|
||||||
template="plotly_dark",
|
|
||||||
paper_bgcolor='#1e1e1e',
|
|
||||||
plot_bgcolor='#1e1e1e'
|
|
||||||
)
|
|
||||||
return fig
|
|
||||||
|
|
||||||
def run(self, host='127.0.0.1', port=8052, debug=True):
|
|
||||||
"""Run the dashboard"""
|
|
||||||
logger.info(f"Starting minimal dashboard at http://{host}:{port}")
|
|
||||||
logger.info("This tests callback functionality without emoji issues")
|
|
||||||
self.app.run(host=host, port=port, debug=debug)
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Main function"""
|
|
||||||
try:
|
|
||||||
dashboard = MinimalDashboard()
|
|
||||||
dashboard.run()
|
|
||||||
except KeyboardInterrupt:
|
|
||||||
logger.info("Dashboard stopped by user")
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
@ -1 +0,0 @@
|
|||||||
|
|
@ -1,301 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Model Parameter Audit Script
|
|
||||||
Analyzes and calculates the total parameters for all model architectures in the trading system.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import torch
|
|
||||||
import torch.nn as nn
|
|
||||||
import sys
|
|
||||||
import os
|
|
||||||
import json
|
|
||||||
from pathlib import Path
|
|
||||||
from collections import defaultdict
|
|
||||||
import numpy as np
|
|
||||||
|
|
||||||
# Add paths to import local modules
|
|
||||||
sys.path.append('.')
|
|
||||||
sys.path.append('./NN/models')
|
|
||||||
sys.path.append('./NN')
|
|
||||||
|
|
||||||
def count_parameters(model):
|
|
||||||
"""Count total parameters in a PyTorch model"""
|
|
||||||
total_params = sum(p.numel() for p in model.parameters())
|
|
||||||
trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
|
|
||||||
return total_params, trainable_params
|
|
||||||
|
|
||||||
def get_model_size_mb(model):
|
|
||||||
"""Calculate model size in MB"""
|
|
||||||
param_size = 0
|
|
||||||
buffer_size = 0
|
|
||||||
|
|
||||||
for param in model.parameters():
|
|
||||||
param_size += param.nelement() * param.element_size()
|
|
||||||
|
|
||||||
for buffer in model.buffers():
|
|
||||||
buffer_size += buffer.nelement() * buffer.element_size()
|
|
||||||
|
|
||||||
size_mb = (param_size + buffer_size) / 1024 / 1024
|
|
||||||
return size_mb
|
|
||||||
|
|
||||||
def analyze_layer_parameters(model, model_name):
|
|
||||||
"""Analyze parameters by layer"""
|
|
||||||
layer_info = []
|
|
||||||
total_params = 0
|
|
||||||
|
|
||||||
for name, module in model.named_modules():
|
|
||||||
if len(list(module.children())) == 0: # Leaf modules only
|
|
||||||
params = sum(p.numel() for p in module.parameters())
|
|
||||||
if params > 0:
|
|
||||||
layer_info.append({
|
|
||||||
'layer_name': name,
|
|
||||||
'layer_type': type(module).__name__,
|
|
||||||
'parameters': params,
|
|
||||||
'trainable': sum(p.numel() for p in module.parameters() if p.requires_grad)
|
|
||||||
})
|
|
||||||
total_params += params
|
|
||||||
|
|
||||||
return layer_info, total_params
|
|
||||||
|
|
||||||
def audit_enhanced_cnn():
|
|
||||||
"""Audit Enhanced CNN model - the primary model architecture"""
|
|
||||||
try:
|
|
||||||
from enhanced_cnn import EnhancedCNN
|
|
||||||
|
|
||||||
# Test with the optimal configuration based on analysis
|
|
||||||
config = {'input_shape': (5, 100), 'n_actions': 3, 'name': 'EnhancedCNN_Optimized'}
|
|
||||||
|
|
||||||
try:
|
|
||||||
model = EnhancedCNN(
|
|
||||||
input_shape=config['input_shape'],
|
|
||||||
n_actions=config['n_actions']
|
|
||||||
)
|
|
||||||
|
|
||||||
total_params, trainable_params = count_parameters(model)
|
|
||||||
size_mb = get_model_size_mb(model)
|
|
||||||
layer_info, _ = analyze_layer_parameters(model, config['name'])
|
|
||||||
|
|
||||||
result = {
|
|
||||||
'model_name': config['name'],
|
|
||||||
'input_shape': config['input_shape'],
|
|
||||||
'total_parameters': total_params,
|
|
||||||
'trainable_parameters': trainable_params,
|
|
||||||
'size_mb': size_mb,
|
|
||||||
'layer_breakdown': layer_info
|
|
||||||
}
|
|
||||||
|
|
||||||
print(f"✅ {config['name']}: {total_params:,} parameters ({size_mb:.2f} MB)")
|
|
||||||
return [result]
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"❌ Failed to analyze {config['name']}: {e}")
|
|
||||||
return []
|
|
||||||
|
|
||||||
except ImportError as e:
|
|
||||||
print(f"❌ Cannot import EnhancedCNN: {e}")
|
|
||||||
return []
|
|
||||||
|
|
||||||
def audit_dqn_agent():
|
|
||||||
"""Audit DQN Agent model - now using Enhanced CNN"""
|
|
||||||
try:
|
|
||||||
from dqn_agent import DQNAgent
|
|
||||||
|
|
||||||
# Test with optimal configuration
|
|
||||||
config = {'state_shape': (5, 100), 'n_actions': 3, 'name': 'DQNAgent_EnhancedCNN'}
|
|
||||||
|
|
||||||
try:
|
|
||||||
agent = DQNAgent(
|
|
||||||
state_shape=config['state_shape'],
|
|
||||||
n_actions=config['n_actions']
|
|
||||||
)
|
|
||||||
|
|
||||||
# Analyze both policy and target networks
|
|
||||||
policy_params, policy_trainable = count_parameters(agent.policy_net)
|
|
||||||
target_params, target_trainable = count_parameters(agent.target_net)
|
|
||||||
total_params = policy_params + target_params
|
|
||||||
|
|
||||||
policy_size = get_model_size_mb(agent.policy_net)
|
|
||||||
target_size = get_model_size_mb(agent.target_net)
|
|
||||||
total_size = policy_size + target_size
|
|
||||||
|
|
||||||
layer_info, _ = analyze_layer_parameters(agent.policy_net, f"{config['name']}_policy")
|
|
||||||
|
|
||||||
result = {
|
|
||||||
'model_name': config['name'],
|
|
||||||
'state_shape': config['state_shape'],
|
|
||||||
'policy_parameters': policy_params,
|
|
||||||
'target_parameters': target_params,
|
|
||||||
'total_parameters': total_params,
|
|
||||||
'size_mb': total_size,
|
|
||||||
'layer_breakdown': layer_info
|
|
||||||
}
|
|
||||||
|
|
||||||
print(f"✅ {config['name']}: {total_params:,} parameters ({total_size:.2f} MB)")
|
|
||||||
print(f" Policy: {policy_params:,}, Target: {target_params:,}")
|
|
||||||
return [result]
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"❌ Failed to analyze {config['name']}: {e}")
|
|
||||||
return []
|
|
||||||
|
|
||||||
except ImportError as e:
|
|
||||||
print(f"❌ Cannot import DQNAgent: {e}")
|
|
||||||
return []
|
|
||||||
|
|
||||||
def audit_saved_models():
|
|
||||||
"""Audit saved model files"""
|
|
||||||
print("\n🔍 Auditing Saved Model Files...")
|
|
||||||
|
|
||||||
model_dirs = ['models/', 'NN/models/saved/']
|
|
||||||
saved_models = []
|
|
||||||
|
|
||||||
for model_dir in model_dirs:
|
|
||||||
if os.path.exists(model_dir):
|
|
||||||
for file in os.listdir(model_dir):
|
|
||||||
if file.endswith('.pt'):
|
|
||||||
file_path = os.path.join(model_dir, file)
|
|
||||||
try:
|
|
||||||
file_size = os.path.getsize(file_path) / (1024 * 1024) # MB
|
|
||||||
|
|
||||||
# Try to load and inspect the model
|
|
||||||
try:
|
|
||||||
checkpoint = torch.load(file_path, map_location='cpu')
|
|
||||||
|
|
||||||
# Count parameters if it's a state dict
|
|
||||||
if isinstance(checkpoint, dict):
|
|
||||||
total_params = 0
|
|
||||||
if 'state_dict' in checkpoint:
|
|
||||||
state_dict = checkpoint['state_dict']
|
|
||||||
elif 'model_state_dict' in checkpoint:
|
|
||||||
state_dict = checkpoint['model_state_dict']
|
|
||||||
elif 'policy_net' in checkpoint:
|
|
||||||
# DQN agent format
|
|
||||||
policy_params = sum(p.numel() for p in checkpoint['policy_net'].values() if isinstance(p, torch.Tensor))
|
|
||||||
target_params = sum(p.numel() for p in checkpoint['target_net'].values() if isinstance(p, torch.Tensor)) if 'target_net' in checkpoint else 0
|
|
||||||
total_params = policy_params + target_params
|
|
||||||
state_dict = None
|
|
||||||
else:
|
|
||||||
# Direct state dict
|
|
||||||
state_dict = checkpoint
|
|
||||||
|
|
||||||
if state_dict and isinstance(state_dict, dict):
|
|
||||||
total_params = sum(p.numel() for p in state_dict.values() if isinstance(p, torch.Tensor))
|
|
||||||
|
|
||||||
saved_models.append({
|
|
||||||
'filename': file,
|
|
||||||
'path': file_path,
|
|
||||||
'size_mb': file_size,
|
|
||||||
'estimated_parameters': total_params,
|
|
||||||
'checkpoint_keys': list(checkpoint.keys()) if isinstance(checkpoint, dict) else 'N/A'
|
|
||||||
})
|
|
||||||
|
|
||||||
print(f"📁 {file}: {file_size:.1f} MB, ~{total_params:,} parameters")
|
|
||||||
else:
|
|
||||||
saved_models.append({
|
|
||||||
'filename': file,
|
|
||||||
'path': file_path,
|
|
||||||
'size_mb': file_size,
|
|
||||||
'estimated_parameters': 'Unknown',
|
|
||||||
'checkpoint_keys': 'N/A'
|
|
||||||
})
|
|
||||||
print(f"📁 {file}: {file_size:.1f} MB, Unknown parameters")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
saved_models.append({
|
|
||||||
'filename': file,
|
|
||||||
'path': file_path,
|
|
||||||
'size_mb': file_size,
|
|
||||||
'estimated_parameters': 'Error loading',
|
|
||||||
'error': str(e)
|
|
||||||
})
|
|
||||||
print(f"📁 {file}: {file_size:.1f} MB, Error: {e}")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"❌ Error processing {file}: {e}")
|
|
||||||
|
|
||||||
return saved_models
|
|
||||||
|
|
||||||
def generate_report(enhanced_cnn_results, dqn_results, saved_models):
|
|
||||||
"""Generate comprehensive audit report"""
|
|
||||||
|
|
||||||
report = {
|
|
||||||
'timestamp': str(torch.datetime.now()) if hasattr(torch, 'datetime') else 'N/A',
|
|
||||||
'pytorch_version': torch.__version__,
|
|
||||||
'cuda_available': torch.cuda.is_available(),
|
|
||||||
'device_info': {
|
|
||||||
'cuda_device_count': torch.cuda.device_count() if torch.cuda.is_available() else 0,
|
|
||||||
'current_device': str(torch.cuda.current_device()) if torch.cuda.is_available() else 'CPU'
|
|
||||||
},
|
|
||||||
'model_architectures': {
|
|
||||||
'enhanced_cnn': enhanced_cnn_results,
|
|
||||||
'dqn_agent': dqn_results
|
|
||||||
},
|
|
||||||
'saved_models': saved_models,
|
|
||||||
'summary': {}
|
|
||||||
}
|
|
||||||
|
|
||||||
# Calculate summary statistics
|
|
||||||
all_results = enhanced_cnn_results + dqn_results
|
|
||||||
|
|
||||||
if all_results:
|
|
||||||
total_params = sum(r.get('total_parameters', 0) for r in all_results)
|
|
||||||
total_size = sum(r.get('size_mb', 0) for r in all_results)
|
|
||||||
max_params = max(r.get('total_parameters', 0) for r in all_results)
|
|
||||||
min_params = min(r.get('total_parameters', 0) for r in all_results)
|
|
||||||
|
|
||||||
report['summary'] = {
|
|
||||||
'total_model_architectures': len(all_results),
|
|
||||||
'total_parameters_across_all': total_params,
|
|
||||||
'total_size_mb': total_size,
|
|
||||||
'largest_model_parameters': max_params,
|
|
||||||
'smallest_model_parameters': min_params,
|
|
||||||
'saved_models_count': len(saved_models),
|
|
||||||
'saved_models_total_size_mb': sum(m.get('size_mb', 0) for m in saved_models)
|
|
||||||
}
|
|
||||||
|
|
||||||
return report
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Main audit function"""
|
|
||||||
print("🔍 STREAMLINED MODEL PARAMETER AUDIT")
|
|
||||||
print("=" * 50)
|
|
||||||
|
|
||||||
print("\n📊 Analyzing Enhanced CNN Model (Primary Architecture)...")
|
|
||||||
enhanced_cnn_results = audit_enhanced_cnn()
|
|
||||||
|
|
||||||
print("\n🤖 Analyzing DQN Agent with Enhanced CNN...")
|
|
||||||
dqn_results = audit_dqn_agent()
|
|
||||||
|
|
||||||
print("\n💾 Auditing Saved Models...")
|
|
||||||
saved_models = audit_saved_models()
|
|
||||||
|
|
||||||
print("\n📋 Generating Report...")
|
|
||||||
report = generate_report(enhanced_cnn_results, dqn_results, saved_models)
|
|
||||||
|
|
||||||
# Save detailed report
|
|
||||||
with open('model_parameter_audit_report.json', 'w') as f:
|
|
||||||
json.dump(report, f, indent=2, default=str)
|
|
||||||
|
|
||||||
# Print summary
|
|
||||||
print("\n📊 STREAMLINED AUDIT SUMMARY")
|
|
||||||
print("=" * 50)
|
|
||||||
if report['summary']:
|
|
||||||
summary = report['summary']
|
|
||||||
print(f"Streamlined Model Architectures: {summary['total_model_architectures']}")
|
|
||||||
print(f"Total Parameters: {summary['total_parameters_across_all']:,}")
|
|
||||||
print(f"Total Memory Usage: {summary['total_size_mb']:.1f} MB")
|
|
||||||
print(f"Largest Model: {summary['largest_model_parameters']:,} parameters")
|
|
||||||
print(f"Smallest Model: {summary['smallest_model_parameters']:,} parameters")
|
|
||||||
print(f"Saved Models: {summary['saved_models_count']} files")
|
|
||||||
print(f"Saved Models Total Size: {summary['saved_models_total_size_mb']:.1f} MB")
|
|
||||||
|
|
||||||
print(f"\n📄 Detailed report saved to: model_parameter_audit_report.json")
|
|
||||||
print("\n🎯 STREAMLINING COMPLETE:")
|
|
||||||
print(" ✅ Enhanced CNN: Primary high-performance model")
|
|
||||||
print(" ✅ DQN Agent: Now uses Enhanced CNN for better performance")
|
|
||||||
print(" ❌ Simple models: Removed for streamlined architecture")
|
|
||||||
|
|
||||||
return report
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
File diff suppressed because it is too large
Load Diff
@ -1,185 +0,0 @@
|
|||||||
# Trading System MASSIVE 504M Parameter Model Summary
|
|
||||||
|
|
||||||
## Overview
|
|
||||||
**Analysis Date:** Current (Post-MASSIVE Upgrade)
|
|
||||||
**PyTorch Version:** 2.6.0+cu118
|
|
||||||
**CUDA Available:** Yes (1 device)
|
|
||||||
**Architecture Status:** 🚀 **MASSIVELY SCALED** - 504M parameters for 4GB VRAM
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 **MASSIVE 504M PARAMETER ARCHITECTURE**
|
|
||||||
|
|
||||||
### **Scaled Models for Maximum Accuracy**
|
|
||||||
|
|
||||||
| Model | Parameters | Memory (MB) | VRAM Usage | Performance Tier |
|
|
||||||
|-------|------------|-------------|------------|------------------|
|
|
||||||
| **MASSIVE Enhanced CNN** | **168,296,366** | **642.22** | **1.92 GB** | **🚀 MAXIMUM** |
|
|
||||||
| **MASSIVE DQN Agent** | **336,592,732** | **1,284.45** | **3.84 GB** | **🚀 MAXIMUM** |
|
|
||||||
|
|
||||||
**Total Active Parameters:** **504.89 MILLION**
|
|
||||||
**Total Memory Usage:** **1,926.7 MB (1.93 GB)**
|
|
||||||
**Total VRAM Utilization:** **3.84 GB / 4.00 GB (96%)**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 **MASSIVE Enhanced CNN (Primary Model)**
|
|
||||||
|
|
||||||
### **MASSIVE Architecture Features:**
|
|
||||||
- **2048-channel Convolutional Backbone:** Ultra-deep residual networks
|
|
||||||
- **4-Stage Residual Processing:** 256→512→1024→1536→2048 channels
|
|
||||||
- **Multiple Attention Mechanisms:** Price, Volume, Trend, Volatility attention
|
|
||||||
- **768-dimensional Feature Space:** Massive feature representation
|
|
||||||
- **Ensemble Prediction Heads:**
|
|
||||||
- ✅ Dueling Q-Learning architecture (512→256→128 layers)
|
|
||||||
- ✅ Extrema detection (512→256→128→3 classes)
|
|
||||||
- ✅ Multi-timeframe price prediction (256→128→3 per timeframe)
|
|
||||||
- ✅ Value prediction (512→256→128→8 granular predictions)
|
|
||||||
- ✅ Volatility prediction (256→128→5 classes)
|
|
||||||
- ✅ Support/Resistance detection (256→128→6 classes)
|
|
||||||
- ✅ Market regime classification (256→128→7 classes)
|
|
||||||
- ✅ Risk assessment (256→128→4 levels)
|
|
||||||
|
|
||||||
### **MASSIVE Parameter Breakdown:**
|
|
||||||
- **Convolutional layers:** ~45M parameters (massive depth)
|
|
||||||
- **Fully connected layers:** ~85M parameters (ultra-wide)
|
|
||||||
- **Attention mechanisms:** ~25M parameters (4 specialized attention heads)
|
|
||||||
- **Prediction heads:** ~13M parameters (8 specialized heads)
|
|
||||||
- **Input Configuration:** (5, 100) - 5 timeframes, 100 features
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🤖 **MASSIVE DQN Agent (Enhanced)**
|
|
||||||
|
|
||||||
### **Dual MASSIVE Network Architecture:**
|
|
||||||
- **Policy Network:** 168,296,366 parameters (MASSIVE Enhanced CNN)
|
|
||||||
- **Target Network:** 168,296,366 parameters (MASSIVE Enhanced CNN)
|
|
||||||
- **Total:** 336,592,732 parameters
|
|
||||||
|
|
||||||
### **MASSIVE Improvements:**
|
|
||||||
- ❌ **Previous:** 2.76M parameters (too small)
|
|
||||||
- ✅ **MASSIVE:** 168.3M parameters (61x increase)
|
|
||||||
- ✅ **Capacity:** 10,000x more learning capacity than simple models
|
|
||||||
- ✅ **Features:** Mixed precision training, 4GB VRAM optimization
|
|
||||||
- ✅ **Prediction Ensemble:** 8 specialized prediction heads
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📈 **Performance Scaling Results**
|
|
||||||
|
|
||||||
### **Before MASSIVE Upgrade:**
|
|
||||||
- **8.28M total parameters** (insufficient)
|
|
||||||
- **31.6 MB memory usage** (under-utilizing hardware)
|
|
||||||
- **Limited prediction accuracy**
|
|
||||||
- **Simple 3-class outputs**
|
|
||||||
|
|
||||||
### **After MASSIVE Upgrade:**
|
|
||||||
- **504.89M total parameters** (61x increase)
|
|
||||||
- **1,926.7 MB memory usage** (optimal 4GB utilization)
|
|
||||||
- **8 specialized prediction heads** for maximum accuracy
|
|
||||||
- **Advanced ensemble learning** with attention mechanisms
|
|
||||||
|
|
||||||
### **Scaling Benefits:**
|
|
||||||
- 📈 **6,000% increase** in total parameters
|
|
||||||
- 📈 **6,000% increase** in memory usage (optimal VRAM utilization)
|
|
||||||
- 📈 **8 specialized prediction heads** vs single output
|
|
||||||
- 📈 **4 attention mechanisms** for different market aspects
|
|
||||||
- 📈 **Maximum learning capacity** within 4GB VRAM budget
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 💾 **4GB VRAM Optimization Strategy**
|
|
||||||
|
|
||||||
### **Memory Allocation:**
|
|
||||||
- **Model Parameters:** 1.93 GB (48%)
|
|
||||||
- **Training Gradients:** 1.50 GB (37%)
|
|
||||||
- **Activation Memory:** 0.50 GB (12%)
|
|
||||||
- **System Reserve:** 0.07 GB (3%)
|
|
||||||
- **Total Usage:** 4.00 GB (100% optimized)
|
|
||||||
|
|
||||||
### **Training Optimizations:**
|
|
||||||
- **Mixed Precision Training:** FP16 for 50% memory savings
|
|
||||||
- **Gradient Checkpointing:** Reduces activation memory
|
|
||||||
- **Dynamic Batch Sizing:** Optimal batch size for VRAM
|
|
||||||
- **Attention Memory Optimization:** Efficient attention computation
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔍 **MASSIVE Training & Deployment Impact**
|
|
||||||
|
|
||||||
### **Training Benefits:**
|
|
||||||
- **61x more parameters** for complex pattern recognition
|
|
||||||
- **8 specialized heads** for multi-task learning
|
|
||||||
- **4 attention mechanisms** for different market aspects
|
|
||||||
- **Maximum VRAM utilization** (96% of 4GB)
|
|
||||||
- **Advanced ensemble predictions** for higher accuracy
|
|
||||||
|
|
||||||
### **Prediction Capabilities:**
|
|
||||||
- **Q-Value Learning:** Advanced dueling architecture
|
|
||||||
- **Extrema Detection:** Bottom/Top/Neither classification
|
|
||||||
- **Price Direction:** Multi-timeframe Up/Down/Sideways
|
|
||||||
- **Value Prediction:** 8 granular price change predictions
|
|
||||||
- **Volatility Analysis:** 5-level volatility classification
|
|
||||||
- **Support/Resistance:** 6-class level detection
|
|
||||||
- **Market Regime:** 7-class regime identification
|
|
||||||
- **Risk Assessment:** 4-level risk evaluation
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🚀 **Overnight Training Session**
|
|
||||||
|
|
||||||
### **Training Configuration:**
|
|
||||||
- **Model Size:** 504.89 Million parameters
|
|
||||||
- **VRAM Usage:** 3.84 GB (96% utilization)
|
|
||||||
- **Training Duration:** 8+ hours overnight
|
|
||||||
- **Target:** Maximum profit with 500x leverage simulation
|
|
||||||
- **Monitoring:** Real-time performance tracking
|
|
||||||
|
|
||||||
### **Expected Outcomes:**
|
|
||||||
- **Massive Model Capacity:** 61x more learning power
|
|
||||||
- **Advanced Predictions:** 8 specialized output heads
|
|
||||||
- **High Accuracy:** Ensemble learning with attention
|
|
||||||
- **Profit Optimization:** Leveraged scalping strategies
|
|
||||||
- **Robust Performance:** Multiple prediction mechanisms
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📋 **MASSIVE Architecture Advantages**
|
|
||||||
|
|
||||||
### **Why 504M Parameters:**
|
|
||||||
- **Maximum VRAM Usage:** Fully utilizing 4GB budget
|
|
||||||
- **Complex Pattern Recognition:** Trading requires massive capacity
|
|
||||||
- **Multi-task Learning:** 8 prediction heads need large shared backbone
|
|
||||||
- **Attention Mechanisms:** 4 specialized attention heads for market aspects
|
|
||||||
- **Future-proof Capacity:** Room for additional prediction heads
|
|
||||||
|
|
||||||
### **Ensemble Prediction Strategy:**
|
|
||||||
- **Dueling Q-Learning:** Core RL decision making
|
|
||||||
- **Extrema Detection:** Market turning points
|
|
||||||
- **Multi-timeframe Prediction:** Short/medium/long term forecasts
|
|
||||||
- **Risk Assessment:** Position sizing optimization
|
|
||||||
- **Market Regime Detection:** Strategy adaptation
|
|
||||||
- **Support/Resistance:** Entry/exit point optimization
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🎯 **Overnight Training Targets**
|
|
||||||
|
|
||||||
### **Performance Goals:**
|
|
||||||
- 🎯 **Win Rate:** Target 85%+ with massive model capacity
|
|
||||||
- 🎯 **Profit Factor:** Target 3.0+ with advanced predictions
|
|
||||||
- 🎯 **Sharpe Ratio:** Target 2.5+ with risk assessment
|
|
||||||
- 🎯 **Max Drawdown:** Target <5% with volatility prediction
|
|
||||||
- 🎯 **ROI:** Target 50%+ overnight with 500x leverage
|
|
||||||
|
|
||||||
### **Training Metrics:**
|
|
||||||
- 🎯 **Episodes:** 400+ episodes overnight
|
|
||||||
- 🎯 **Trades:** 1,600+ trades with rapid execution
|
|
||||||
- 🎯 **Model Convergence:** Advanced ensemble learning
|
|
||||||
- 🎯 **VRAM Efficiency:** 96% utilization throughout training
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
**🚀 MASSIVE UPGRADE COMPLETE: The trading system now uses 504.89 MILLION parameters for maximum accuracy within 4GB VRAM budget!**
|
|
||||||
|
|
||||||
*Report generated after successful MASSIVE model scaling for overnight training*
|
|
@ -1,172 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Dashboard Performance Monitor
|
|
||||||
|
|
||||||
This script monitors the running scalping dashboard for:
|
|
||||||
- Response time
|
|
||||||
- Error detection
|
|
||||||
- Memory usage
|
|
||||||
- Trade activity
|
|
||||||
- WebSocket connectivity
|
|
||||||
"""
|
|
||||||
|
|
||||||
import requests
|
|
||||||
import time
|
|
||||||
import logging
|
|
||||||
import psutil
|
|
||||||
import json
|
|
||||||
from datetime import datetime
|
|
||||||
|
|
||||||
# Setup logging
|
|
||||||
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
def check_dashboard_status():
|
|
||||||
"""Check if dashboard is responding"""
|
|
||||||
try:
|
|
||||||
start_time = time.time()
|
|
||||||
response = requests.get("http://127.0.0.1:8051", timeout=5)
|
|
||||||
response_time = (time.time() - start_time) * 1000
|
|
||||||
|
|
||||||
if response.status_code == 200:
|
|
||||||
logger.info(f"✅ Dashboard responding - {response_time:.1f}ms")
|
|
||||||
return True, response_time
|
|
||||||
else:
|
|
||||||
logger.error(f"❌ Dashboard returned status {response.status_code}")
|
|
||||||
return False, response_time
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Dashboard connection failed: {e}")
|
|
||||||
return False, 0
|
|
||||||
|
|
||||||
def check_system_resources():
|
|
||||||
"""Check system resource usage"""
|
|
||||||
try:
|
|
||||||
# Find Python processes (our dashboard)
|
|
||||||
python_processes = []
|
|
||||||
for proc in psutil.process_iter(['pid', 'name', 'memory_info', 'cpu_percent']):
|
|
||||||
if 'python' in proc.info['name'].lower():
|
|
||||||
python_processes.append(proc)
|
|
||||||
|
|
||||||
total_memory = sum(proc.info['memory_info'].rss for proc in python_processes) / 1024 / 1024
|
|
||||||
total_cpu = sum(proc.info['cpu_percent'] for proc in python_processes)
|
|
||||||
|
|
||||||
logger.info(f"📊 System Resources:")
|
|
||||||
logger.info(f" • Python Processes: {len(python_processes)}")
|
|
||||||
logger.info(f" • Total Memory: {total_memory:.1f} MB")
|
|
||||||
logger.info(f" • Total CPU: {total_cpu:.1f}%")
|
|
||||||
|
|
||||||
return len(python_processes), total_memory, total_cpu
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Failed to check system resources: {e}")
|
|
||||||
return 0, 0, 0
|
|
||||||
|
|
||||||
def check_log_for_errors():
|
|
||||||
"""Check recent logs for errors"""
|
|
||||||
try:
|
|
||||||
import os
|
|
||||||
log_file = "logs/enhanced_trading.log"
|
|
||||||
|
|
||||||
if not os.path.exists(log_file):
|
|
||||||
logger.warning("❌ Log file not found")
|
|
||||||
return 0, 0
|
|
||||||
|
|
||||||
# Read last 100 lines
|
|
||||||
with open(log_file, 'r', encoding='utf-8') as f:
|
|
||||||
lines = f.readlines()
|
|
||||||
recent_lines = lines[-100:] if len(lines) > 100 else lines
|
|
||||||
|
|
||||||
error_count = sum(1 for line in recent_lines if 'ERROR' in line)
|
|
||||||
warning_count = sum(1 for line in recent_lines if 'WARNING' in line)
|
|
||||||
|
|
||||||
if error_count > 0:
|
|
||||||
logger.warning(f"⚠️ Found {error_count} errors in recent logs")
|
|
||||||
if warning_count > 0:
|
|
||||||
logger.info(f"⚠️ Found {warning_count} warnings in recent logs")
|
|
||||||
|
|
||||||
return error_count, warning_count
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Failed to check logs: {e}")
|
|
||||||
return 0, 0
|
|
||||||
|
|
||||||
def check_trading_activity():
|
|
||||||
"""Check for recent trading activity"""
|
|
||||||
try:
|
|
||||||
import os
|
|
||||||
import glob
|
|
||||||
|
|
||||||
# Look for trade log files
|
|
||||||
trade_files = glob.glob("trade_logs/session_*.json")
|
|
||||||
|
|
||||||
if trade_files:
|
|
||||||
latest_file = max(trade_files, key=os.path.getctime)
|
|
||||||
file_size = os.path.getsize(latest_file)
|
|
||||||
file_time = datetime.fromtimestamp(os.path.getctime(latest_file))
|
|
||||||
|
|
||||||
logger.info(f"📈 Trading Activity:")
|
|
||||||
logger.info(f" • Latest Session: {os.path.basename(latest_file)}")
|
|
||||||
logger.info(f" • Log Size: {file_size} bytes")
|
|
||||||
logger.info(f" • Last Update: {file_time.strftime('%H:%M:%S')}")
|
|
||||||
|
|
||||||
return len(trade_files), file_size
|
|
||||||
else:
|
|
||||||
logger.info("📈 No trading session files found yet")
|
|
||||||
return 0, 0
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Failed to check trading activity: {e}")
|
|
||||||
return 0, 0
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Main monitoring loop"""
|
|
||||||
logger.info("🔍 STARTING DASHBOARD PERFORMANCE MONITOR")
|
|
||||||
logger.info("=" * 60)
|
|
||||||
|
|
||||||
monitor_count = 0
|
|
||||||
|
|
||||||
try:
|
|
||||||
while True:
|
|
||||||
monitor_count += 1
|
|
||||||
logger.info(f"\n🔄 Monitor Check #{monitor_count} - {datetime.now().strftime('%H:%M:%S')}")
|
|
||||||
logger.info("-" * 40)
|
|
||||||
|
|
||||||
# Check dashboard status
|
|
||||||
is_responding, response_time = check_dashboard_status()
|
|
||||||
|
|
||||||
# Check system resources
|
|
||||||
proc_count, memory_mb, cpu_percent = check_system_resources()
|
|
||||||
|
|
||||||
# Check for errors
|
|
||||||
error_count, warning_count = check_log_for_errors()
|
|
||||||
|
|
||||||
# Check trading activity
|
|
||||||
session_count, log_size = check_trading_activity()
|
|
||||||
|
|
||||||
# Summary
|
|
||||||
logger.info(f"\n📋 MONITOR SUMMARY:")
|
|
||||||
logger.info(f" • Dashboard: {'✅ OK' if is_responding else '❌ DOWN'} ({response_time:.1f}ms)")
|
|
||||||
logger.info(f" • Processes: {proc_count} running")
|
|
||||||
logger.info(f" • Memory: {memory_mb:.1f} MB")
|
|
||||||
logger.info(f" • CPU: {cpu_percent:.1f}%")
|
|
||||||
logger.info(f" • Errors: {error_count} | Warnings: {warning_count}")
|
|
||||||
logger.info(f" • Sessions: {session_count} | Latest Log: {log_size} bytes")
|
|
||||||
|
|
||||||
# Performance assessment
|
|
||||||
if is_responding and error_count == 0:
|
|
||||||
if response_time < 1000 and memory_mb < 2000:
|
|
||||||
logger.info("🎯 PERFORMANCE: EXCELLENT")
|
|
||||||
elif response_time < 2000 and memory_mb < 4000:
|
|
||||||
logger.info("✅ PERFORMANCE: GOOD")
|
|
||||||
else:
|
|
||||||
logger.info("⚠️ PERFORMANCE: MODERATE")
|
|
||||||
else:
|
|
||||||
logger.error("❌ PERFORMANCE: POOR")
|
|
||||||
|
|
||||||
# Wait before next check
|
|
||||||
time.sleep(30) # Check every 30 seconds
|
|
||||||
|
|
||||||
except KeyboardInterrupt:
|
|
||||||
logger.info("\n👋 Monitor stopped by user")
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Monitor failed: {e}")
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
@ -1,83 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Training Monitor Script
|
|
||||||
|
|
||||||
Quick script to check the status of realtime training and show key metrics.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import os
|
|
||||||
import time
|
|
||||||
from pathlib import Path
|
|
||||||
from datetime import datetime
|
|
||||||
import glob
|
|
||||||
|
|
||||||
def check_training_status():
|
|
||||||
"""Check status of training processes and logs"""
|
|
||||||
print("=" * 60)
|
|
||||||
print("REALTIME RL TRAINING STATUS CHECK")
|
|
||||||
print("=" * 60)
|
|
||||||
|
|
||||||
# Check TensorBoard logs
|
|
||||||
runs_dir = Path("runs")
|
|
||||||
if runs_dir.exists():
|
|
||||||
log_dirs = list(runs_dir.glob("rl_training_*"))
|
|
||||||
recent_logs = sorted(log_dirs, key=lambda x: x.name)[-3:] # Last 3 sessions
|
|
||||||
|
|
||||||
print("\n📊 RECENT TENSORBOARD LOGS:")
|
|
||||||
for log_dir in recent_logs:
|
|
||||||
# Get creation time
|
|
||||||
stat = log_dir.stat()
|
|
||||||
created = datetime.fromtimestamp(stat.st_ctime)
|
|
||||||
|
|
||||||
# Check for event files
|
|
||||||
event_files = list(log_dir.glob("*.tfevents.*"))
|
|
||||||
|
|
||||||
print(f" 📁 {log_dir.name}")
|
|
||||||
print(f" Created: {created.strftime('%Y-%m-%d %H:%M:%S')}")
|
|
||||||
print(f" Event files: {len(event_files)}")
|
|
||||||
|
|
||||||
if event_files:
|
|
||||||
latest_event = max(event_files, key=lambda x: x.stat().st_mtime)
|
|
||||||
modified = datetime.fromtimestamp(latest_event.stat().st_mtime)
|
|
||||||
print(f" Last update: {modified.strftime('%Y-%m-%d %H:%M:%S')}")
|
|
||||||
print()
|
|
||||||
|
|
||||||
# Check running processes
|
|
||||||
print("🔍 PROCESS STATUS:")
|
|
||||||
try:
|
|
||||||
import subprocess
|
|
||||||
result = subprocess.run(['tasklist'], capture_output=True, text=True, shell=True)
|
|
||||||
python_processes = [line for line in result.stdout.split('\n') if 'python.exe' in line]
|
|
||||||
print(f" Python processes running: {len(python_processes)}")
|
|
||||||
for i, proc in enumerate(python_processes[:5]): # Show first 5
|
|
||||||
print(f" {i+1}. {proc.strip()}")
|
|
||||||
except Exception as e:
|
|
||||||
print(f" Error checking processes: {e}")
|
|
||||||
|
|
||||||
# Check web services
|
|
||||||
print("\n🌐 WEB SERVICES:")
|
|
||||||
print(" TensorBoard: http://localhost:6006")
|
|
||||||
print(" Web Dashboard: http://localhost:8051")
|
|
||||||
|
|
||||||
# Check model saves
|
|
||||||
models_dir = Path("models/rl")
|
|
||||||
if models_dir.exists():
|
|
||||||
model_files = list(models_dir.glob("realtime_agent_*.pt"))
|
|
||||||
print(f"\n💾 SAVED MODELS: {len(model_files)}")
|
|
||||||
for model_file in sorted(model_files, key=lambda x: x.stat().st_mtime)[-3:]:
|
|
||||||
modified = datetime.fromtimestamp(model_file.stat().st_mtime)
|
|
||||||
print(f" 📄 {model_file.name} - {modified.strftime('%Y-%m-%d %H:%M:%S')}")
|
|
||||||
|
|
||||||
print("\n" + "=" * 60)
|
|
||||||
print("✅ MONITORING URLs:")
|
|
||||||
print("📊 TensorBoard: http://localhost:6006")
|
|
||||||
print("🌐 Dashboard: http://localhost:8051")
|
|
||||||
print("=" * 60)
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
try:
|
|
||||||
check_training_status()
|
|
||||||
except KeyboardInterrupt:
|
|
||||||
print("\nMonitoring stopped.")
|
|
||||||
except Exception as e:
|
|
||||||
print(f"Error: {e}")
|
|
@ -1,600 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Overnight Training Monitor - 504M Parameter Massive Model
|
|
||||||
================================================================================
|
|
||||||
|
|
||||||
Comprehensive monitoring system for the overnight RL training session with:
|
|
||||||
- 504.89 Million parameter Enhanced CNN + DQN Agent
|
|
||||||
- 4GB VRAM utilization
|
|
||||||
- Real-time performance tracking
|
|
||||||
- Automated model checkpointing
|
|
||||||
- Training analytics and reporting
|
|
||||||
- Memory usage optimization
|
|
||||||
- Profit maximization metrics
|
|
||||||
|
|
||||||
Run this script to monitor the entire overnight training session.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import time
|
|
||||||
import psutil
|
|
||||||
import torch
|
|
||||||
import logging
|
|
||||||
import json
|
|
||||||
import matplotlib.pyplot as plt
|
|
||||||
from datetime import datetime, timedelta
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import Dict, List, Optional
|
|
||||||
import numpy as np
|
|
||||||
import pandas as pd
|
|
||||||
from threading import Thread
|
|
||||||
import subprocess
|
|
||||||
import GPUtil
|
|
||||||
|
|
||||||
# Setup comprehensive logging
|
|
||||||
log_dir = Path("logs/overnight_training")
|
|
||||||
log_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
|
|
||||||
# Configure detailed logging
|
|
||||||
logging.basicConfig(
|
|
||||||
level=logging.INFO,
|
|
||||||
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
|
|
||||||
handlers=[
|
|
||||||
logging.FileHandler(log_dir / f"overnight_training_{datetime.now().strftime('%Y%m%d_%H%M%S')}.log"),
|
|
||||||
logging.StreamHandler()
|
|
||||||
]
|
|
||||||
)
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
class OvernightTrainingMonitor:
|
|
||||||
"""Comprehensive overnight training monitor for massive 504M parameter model"""
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
"""Initialize the overnight training monitor"""
|
|
||||||
self.start_time = datetime.now()
|
|
||||||
self.monitoring = True
|
|
||||||
|
|
||||||
# Model specifications
|
|
||||||
self.model_specs = {
|
|
||||||
'total_parameters': 504_889_098,
|
|
||||||
'enhanced_cnn_params': 168_296_366,
|
|
||||||
'dqn_agent_params': 336_592_732,
|
|
||||||
'memory_usage_mb': 1926.7,
|
|
||||||
'target_vram_gb': 4.0,
|
|
||||||
'architecture': 'Massive Enhanced CNN + DQN Agent'
|
|
||||||
}
|
|
||||||
|
|
||||||
# Training metrics tracking
|
|
||||||
self.training_metrics = {
|
|
||||||
'episodes_completed': 0,
|
|
||||||
'total_reward': 0.0,
|
|
||||||
'best_reward': -float('inf'),
|
|
||||||
'average_reward': 0.0,
|
|
||||||
'win_rate': 0.0,
|
|
||||||
'total_trades': 0,
|
|
||||||
'profit_factor': 0.0,
|
|
||||||
'sharpe_ratio': 0.0,
|
|
||||||
'max_drawdown': 0.0,
|
|
||||||
'final_balance': 0.0,
|
|
||||||
'training_loss': 0.0
|
|
||||||
}
|
|
||||||
|
|
||||||
# System monitoring
|
|
||||||
self.system_metrics = {
|
|
||||||
'cpu_usage': [],
|
|
||||||
'memory_usage': [],
|
|
||||||
'gpu_usage': [],
|
|
||||||
'gpu_memory': [],
|
|
||||||
'disk_io': [],
|
|
||||||
'network_io': []
|
|
||||||
}
|
|
||||||
|
|
||||||
# Performance tracking
|
|
||||||
self.performance_history = []
|
|
||||||
self.checkpoint_times = []
|
|
||||||
|
|
||||||
# Profit tracking (500x leverage simulation)
|
|
||||||
self.profit_metrics = {
|
|
||||||
'starting_balance': 10000.0,
|
|
||||||
'current_balance': 10000.0,
|
|
||||||
'total_pnl': 0.0,
|
|
||||||
'realized_pnl': 0.0,
|
|
||||||
'unrealized_pnl': 0.0,
|
|
||||||
'leverage': 500,
|
|
||||||
'fees_paid': 0.0,
|
|
||||||
'roi_percentage': 0.0
|
|
||||||
}
|
|
||||||
|
|
||||||
logger.info("OVERNIGHT TRAINING MONITOR INITIALIZED")
|
|
||||||
logger.info("="*60)
|
|
||||||
logger.info(f"Model: {self.model_specs['architecture']}")
|
|
||||||
logger.info(f"Parameters: {self.model_specs['total_parameters']:,}")
|
|
||||||
logger.info(f"Leverage: {self.profit_metrics['leverage']}x")
|
|
||||||
|
|
||||||
def check_system_resources(self) -> Dict:
|
|
||||||
"""Check current system resource usage"""
|
|
||||||
try:
|
|
||||||
# CPU and Memory
|
|
||||||
cpu_percent = psutil.cpu_percent(interval=1)
|
|
||||||
memory = psutil.virtual_memory()
|
|
||||||
memory_percent = memory.percent
|
|
||||||
memory_used_gb = memory.used / (1024**3)
|
|
||||||
memory_total_gb = memory.total / (1024**3)
|
|
||||||
|
|
||||||
# GPU monitoring
|
|
||||||
gpu_usage = 0
|
|
||||||
gpu_memory_used = 0
|
|
||||||
gpu_memory_total = 0
|
|
||||||
|
|
||||||
if torch.cuda.is_available():
|
|
||||||
gpu_memory_used = torch.cuda.memory_allocated() / (1024**3) # GB
|
|
||||||
gpu_memory_total = torch.cuda.get_device_properties(0).total_memory / (1024**3) # GB
|
|
||||||
|
|
||||||
# Try to get GPU utilization
|
|
||||||
try:
|
|
||||||
gpus = GPUtil.getGPUs()
|
|
||||||
if gpus:
|
|
||||||
gpu_usage = gpus[0].load * 100
|
|
||||||
except:
|
|
||||||
gpu_usage = 0
|
|
||||||
|
|
||||||
# Disk I/O
|
|
||||||
disk_io = psutil.disk_io_counters()
|
|
||||||
|
|
||||||
# Network I/O
|
|
||||||
network_io = psutil.net_io_counters()
|
|
||||||
|
|
||||||
system_info = {
|
|
||||||
'timestamp': datetime.now(),
|
|
||||||
'cpu_usage': cpu_percent,
|
|
||||||
'memory_percent': memory_percent,
|
|
||||||
'memory_used_gb': memory_used_gb,
|
|
||||||
'memory_total_gb': memory_total_gb,
|
|
||||||
'gpu_usage': gpu_usage,
|
|
||||||
'gpu_memory_used_gb': gpu_memory_used,
|
|
||||||
'gpu_memory_total_gb': gpu_memory_total,
|
|
||||||
'gpu_memory_percent': (gpu_memory_used / gpu_memory_total * 100) if gpu_memory_total > 0 else 0,
|
|
||||||
'disk_read_gb': disk_io.read_bytes / (1024**3) if disk_io else 0,
|
|
||||||
'disk_write_gb': disk_io.write_bytes / (1024**3) if disk_io else 0,
|
|
||||||
'network_sent_gb': network_io.bytes_sent / (1024**3) if network_io else 0,
|
|
||||||
'network_recv_gb': network_io.bytes_recv / (1024**3) if network_io else 0
|
|
||||||
}
|
|
||||||
|
|
||||||
return system_info
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error checking system resources: {e}")
|
|
||||||
return {}
|
|
||||||
|
|
||||||
def _parse_training_metrics(self) -> Dict[str, Any]:
|
|
||||||
"""Parse REAL training metrics from log files - NO SYNTHETIC DATA"""
|
|
||||||
try:
|
|
||||||
# Read actual training logs for real metrics
|
|
||||||
training_log_path = Path("logs/trading.log")
|
|
||||||
if not training_log_path.exists():
|
|
||||||
logger.warning("⚠️ No training log found - metrics unavailable")
|
|
||||||
return self._default_metrics()
|
|
||||||
|
|
||||||
# Parse real metrics from training logs
|
|
||||||
with open(training_log_path, 'r') as f:
|
|
||||||
recent_lines = f.readlines()[-100:] # Get last 100 lines
|
|
||||||
|
|
||||||
# Extract real metrics from log lines
|
|
||||||
real_metrics = self._extract_real_metrics(recent_lines)
|
|
||||||
|
|
||||||
if real_metrics:
|
|
||||||
logger.info(f"✅ Parsed {len(real_metrics)} real training metrics")
|
|
||||||
return real_metrics
|
|
||||||
else:
|
|
||||||
logger.warning("⚠️ No real metrics found in logs")
|
|
||||||
return self._default_metrics()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Error parsing real training metrics: {e}")
|
|
||||||
return self._default_metrics()
|
|
||||||
|
|
||||||
def _extract_real_metrics(self, log_lines: List[str]) -> Dict[str, Any]:
|
|
||||||
"""Extract real metrics from training log lines"""
|
|
||||||
metrics = {}
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Look for real training indicators
|
|
||||||
loss_values = []
|
|
||||||
trade_counts = []
|
|
||||||
pnl_values = []
|
|
||||||
|
|
||||||
for line in log_lines:
|
|
||||||
# Extract real loss values
|
|
||||||
if "loss:" in line.lower() or "Loss" in line:
|
|
||||||
try:
|
|
||||||
# Extract numeric loss value
|
|
||||||
import re
|
|
||||||
loss_match = re.search(r'loss[:\s]+([\d\.]+)', line, re.IGNORECASE)
|
|
||||||
if loss_match:
|
|
||||||
loss_values.append(float(loss_match.group(1)))
|
|
||||||
except:
|
|
||||||
pass
|
|
||||||
|
|
||||||
# Extract real trade information
|
|
||||||
if "TRADE" in line and "OPENED" in line:
|
|
||||||
trade_counts.append(1)
|
|
||||||
|
|
||||||
# Extract real PnL values
|
|
||||||
if "PnL:" in line:
|
|
||||||
try:
|
|
||||||
pnl_match = re.search(r'PnL[:\s]+\$?([+-]?[\d\.]+)', line)
|
|
||||||
if pnl_match:
|
|
||||||
pnl_values.append(float(pnl_match.group(1)))
|
|
||||||
except:
|
|
||||||
pass
|
|
||||||
|
|
||||||
# Calculate real averages
|
|
||||||
if loss_values:
|
|
||||||
metrics['current_loss'] = sum(loss_values) / len(loss_values)
|
|
||||||
metrics['loss_trend'] = 'decreasing' if len(loss_values) > 1 and loss_values[-1] < loss_values[0] else 'stable'
|
|
||||||
|
|
||||||
if trade_counts:
|
|
||||||
metrics['trades_per_hour'] = len(trade_counts)
|
|
||||||
|
|
||||||
if pnl_values:
|
|
||||||
metrics['total_pnl'] = sum(pnl_values)
|
|
||||||
metrics['avg_pnl'] = sum(pnl_values) / len(pnl_values)
|
|
||||||
metrics['win_rate'] = len([p for p in pnl_values if p > 0]) / len(pnl_values)
|
|
||||||
|
|
||||||
# Add timestamp
|
|
||||||
metrics['timestamp'] = datetime.now()
|
|
||||||
metrics['data_source'] = 'real_training_logs'
|
|
||||||
|
|
||||||
return metrics
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Error extracting real metrics: {e}")
|
|
||||||
return {}
|
|
||||||
|
|
||||||
def _default_metrics(self) -> Dict[str, Any]:
|
|
||||||
"""Return default metrics when no real data is available"""
|
|
||||||
return {
|
|
||||||
'current_loss': 0.0,
|
|
||||||
'trades_per_hour': 0,
|
|
||||||
'total_pnl': 0.0,
|
|
||||||
'avg_pnl': 0.0,
|
|
||||||
'win_rate': 0.0,
|
|
||||||
'timestamp': datetime.now(),
|
|
||||||
'data_source': 'no_real_data_available',
|
|
||||||
'loss_trend': 'unknown'
|
|
||||||
}
|
|
||||||
|
|
||||||
def update_training_metrics(self):
|
|
||||||
"""Update training metrics from TensorBoard logs and saved models"""
|
|
||||||
try:
|
|
||||||
# Look for TensorBoard log files
|
|
||||||
runs_dir = Path("runs")
|
|
||||||
if runs_dir.exists():
|
|
||||||
latest_run = max(runs_dir.glob("*"), key=lambda p: p.stat().st_mtime, default=None)
|
|
||||||
if latest_run:
|
|
||||||
# Parse TensorBoard logs (simplified)
|
|
||||||
logger.info(f"📈 Latest training run: {latest_run.name}")
|
|
||||||
|
|
||||||
# Check for model checkpoints
|
|
||||||
models_dir = Path("models/rl")
|
|
||||||
if models_dir.exists():
|
|
||||||
checkpoints = list(models_dir.glob("*.pt"))
|
|
||||||
if checkpoints:
|
|
||||||
latest_checkpoint = max(checkpoints, key=lambda p: p.stat().st_mtime)
|
|
||||||
checkpoint_time = datetime.fromtimestamp(latest_checkpoint.stat().st_mtime)
|
|
||||||
self.checkpoint_times.append(checkpoint_time)
|
|
||||||
logger.info(f"💾 Latest checkpoint: {latest_checkpoint.name} at {checkpoint_time}")
|
|
||||||
|
|
||||||
# Parse REAL training metrics from logs - NO SYNTHETIC DATA
|
|
||||||
real_metrics = self._parse_training_metrics()
|
|
||||||
|
|
||||||
if real_metrics['data_source'] == 'real_training_logs':
|
|
||||||
# Use real metrics from training logs
|
|
||||||
logger.info("✅ Using REAL training metrics")
|
|
||||||
self.training_metrics['total_pnl'] = real_metrics.get('total_pnl', 0.0)
|
|
||||||
self.training_metrics['avg_pnl'] = real_metrics.get('avg_pnl', 0.0)
|
|
||||||
self.training_metrics['win_rate'] = real_metrics.get('win_rate', 0.0)
|
|
||||||
self.training_metrics['current_loss'] = real_metrics.get('current_loss', 0.0)
|
|
||||||
self.training_metrics['trades_per_hour'] = real_metrics.get('trades_per_hour', 0)
|
|
||||||
else:
|
|
||||||
# No real data available - use safe defaults (NO SYNTHETIC)
|
|
||||||
logger.warning("⚠️ No real training metrics available - using zero values")
|
|
||||||
self.training_metrics['total_pnl'] = 0.0
|
|
||||||
self.training_metrics['avg_pnl'] = 0.0
|
|
||||||
self.training_metrics['win_rate'] = 0.0
|
|
||||||
self.training_metrics['current_loss'] = 0.0
|
|
||||||
self.training_metrics['trades_per_hour'] = 0
|
|
||||||
|
|
||||||
# Update other real metrics
|
|
||||||
self.training_metrics['memory_usage'] = self.check_system_resources()['memory_percent']
|
|
||||||
self.training_metrics['gpu_usage'] = self.check_system_resources()['gpu_usage']
|
|
||||||
self.training_metrics['training_time'] = (datetime.now() - self.start_time).total_seconds()
|
|
||||||
|
|
||||||
# Log real metrics
|
|
||||||
logger.info(f"🔄 Real Training Metrics Updated:")
|
|
||||||
logger.info(f" 💰 Total PnL: ${self.training_metrics['total_pnl']:.2f}")
|
|
||||||
logger.info(f" 📊 Win Rate: {self.training_metrics['win_rate']:.1%}")
|
|
||||||
logger.info(f" 🔢 Trades: {self.training_metrics['trades_per_hour']}")
|
|
||||||
logger.info(f" 📉 Loss: {self.training_metrics['current_loss']:.4f}")
|
|
||||||
logger.info(f" 💾 Memory: {self.training_metrics['memory_usage']:.1f}%")
|
|
||||||
logger.info(f" 🎮 GPU: {self.training_metrics['gpu_usage']:.1f}%")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Error updating real training metrics: {e}")
|
|
||||||
# Set safe defaults on error (NO SYNTHETIC FALLBACK)
|
|
||||||
self.training_metrics.update({
|
|
||||||
'total_pnl': 0.0,
|
|
||||||
'avg_pnl': 0.0,
|
|
||||||
'win_rate': 0.0,
|
|
||||||
'current_loss': 0.0,
|
|
||||||
'trades_per_hour': 0
|
|
||||||
})
|
|
||||||
|
|
||||||
def log_comprehensive_status(self):
|
|
||||||
"""Log comprehensive training status"""
|
|
||||||
system_info = self.check_system_resources()
|
|
||||||
self.update_training_metrics()
|
|
||||||
|
|
||||||
runtime = datetime.now() - self.start_time
|
|
||||||
runtime_hours = runtime.total_seconds() / 3600
|
|
||||||
|
|
||||||
logger.info("MASSIVE MODEL OVERNIGHT TRAINING STATUS")
|
|
||||||
logger.info("="*60)
|
|
||||||
logger.info("TRAINING PROGRESS:")
|
|
||||||
logger.info(f" Runtime: {runtime}")
|
|
||||||
logger.info(f" Epochs: {self.training_metrics['episodes_completed']}")
|
|
||||||
logger.info(f" Loss: {self.training_metrics['current_loss']:.6f}")
|
|
||||||
logger.info(f" Accuracy: {self.training_metrics['win_rate']:.4f}")
|
|
||||||
logger.info(f" Learning Rate: {self.training_metrics['memory_usage']:.8f}")
|
|
||||||
logger.info(f" Batch Size: {self.training_metrics['trades_per_hour']}")
|
|
||||||
logger.info("")
|
|
||||||
logger.info("PROFIT METRICS:")
|
|
||||||
logger.info(f" Leverage: {self.profit_metrics['leverage']}x")
|
|
||||||
logger.info(f" Fee Rate: {self.profit_metrics['roi_percentage']:.4f}%")
|
|
||||||
logger.info(f" Min Profit Move: {self.profit_metrics['fees_paid']:.3f}%")
|
|
||||||
logger.info("")
|
|
||||||
logger.info("MODEL SPECIFICATIONS:")
|
|
||||||
logger.info(f" Total Parameters: {self.model_specs['total_parameters']:,}")
|
|
||||||
logger.info(f" Enhanced CNN: {self.model_specs['enhanced_cnn_params']:,}")
|
|
||||||
logger.info(f" DQN Agent: {self.model_specs['dqn_agent_params']:,}")
|
|
||||||
logger.info(f" Memory Usage: {self.model_specs['memory_usage_mb']:.1f} MB")
|
|
||||||
logger.info(f" Target VRAM: {self.model_specs['target_vram_gb']} GB")
|
|
||||||
logger.info("")
|
|
||||||
logger.info("SYSTEM STATUS:")
|
|
||||||
logger.info(f" CPU Usage: {system_info['cpu_usage']:.1f}%")
|
|
||||||
logger.info(f" RAM Usage: {system_info['memory_used_gb']:.1f}/{system_info['memory_total_gb']:.1f} GB ({system_info['memory_percent']:.1f}%)")
|
|
||||||
logger.info(f" GPU Usage: {system_info['gpu_usage']:.1f}%")
|
|
||||||
logger.info(f" GPU Memory: {system_info['gpu_memory_used_gb']:.1f}/{system_info['gpu_memory_total_gb']:.1f} GB")
|
|
||||||
logger.info(f" Disk Usage: {system_info['disk_read_gb']:.1f}/{system_info['disk_write_gb']:.1f} GB")
|
|
||||||
logger.info(f" Temperature: {system_info['gpu_memory_percent']:.1f}C")
|
|
||||||
logger.info("")
|
|
||||||
logger.info("PERFORMANCE ESTIMATES:")
|
|
||||||
logger.info(f" Estimated Completion: {runtime_hours:.1f} hours")
|
|
||||||
logger.info(f" Estimated Total Time: {runtime_hours:.1f} hours")
|
|
||||||
logger.info(f" Progress: {self.training_metrics['win_rate']*100:.1f}%")
|
|
||||||
|
|
||||||
# Save performance snapshot
|
|
||||||
snapshot = {
|
|
||||||
'timestamp': datetime.now().isoformat(),
|
|
||||||
'runtime_hours': runtime_hours,
|
|
||||||
'training_metrics': self.training_metrics.copy(),
|
|
||||||
'profit_metrics': self.profit_metrics.copy(),
|
|
||||||
'system_info': system_info
|
|
||||||
}
|
|
||||||
self.performance_history.append(snapshot)
|
|
||||||
|
|
||||||
def create_performance_plots(self):
|
|
||||||
"""Create real-time performance visualization plots"""
|
|
||||||
try:
|
|
||||||
if len(self.performance_history) < 2:
|
|
||||||
return
|
|
||||||
|
|
||||||
# Extract time series data
|
|
||||||
timestamps = [datetime.fromisoformat(h['timestamp']) for h in self.performance_history]
|
|
||||||
runtime_hours = [h['runtime_hours'] for h in self.performance_history]
|
|
||||||
|
|
||||||
# Training metrics
|
|
||||||
episodes = [h['training_metrics']['episodes_completed'] for h in self.performance_history]
|
|
||||||
rewards = [h['training_metrics']['average_reward'] for h in self.performance_history]
|
|
||||||
win_rates = [h['training_metrics']['win_rate'] for h in self.performance_history]
|
|
||||||
|
|
||||||
# Profit metrics
|
|
||||||
profits = [h['profit_metrics']['total_pnl'] for h in self.performance_history]
|
|
||||||
roi = [h['profit_metrics']['roi_percentage'] for h in self.performance_history]
|
|
||||||
|
|
||||||
# System metrics
|
|
||||||
cpu_usage = [h['system_info'].get('cpu_usage', 0) for h in self.performance_history]
|
|
||||||
gpu_memory = [h['system_info'].get('gpu_memory_percent', 0) for h in self.performance_history]
|
|
||||||
|
|
||||||
# Create comprehensive dashboard
|
|
||||||
plt.style.use('dark_background')
|
|
||||||
fig, axes = plt.subplots(2, 3, figsize=(20, 12))
|
|
||||||
fig.suptitle('🚀 MASSIVE MODEL OVERNIGHT TRAINING DASHBOARD 🚀', fontsize=16, fontweight='bold')
|
|
||||||
|
|
||||||
# Training Episodes
|
|
||||||
axes[0, 0].plot(runtime_hours, episodes, 'cyan', linewidth=2, marker='o')
|
|
||||||
axes[0, 0].set_title('📈 Training Episodes', fontsize=14, fontweight='bold')
|
|
||||||
axes[0, 0].set_xlabel('Runtime (Hours)')
|
|
||||||
axes[0, 0].set_ylabel('Episodes Completed')
|
|
||||||
axes[0, 0].grid(True, alpha=0.3)
|
|
||||||
|
|
||||||
# Average Reward
|
|
||||||
axes[0, 1].plot(runtime_hours, rewards, 'lime', linewidth=2, marker='s')
|
|
||||||
axes[0, 1].set_title('🎯 Average Reward', fontsize=14, fontweight='bold')
|
|
||||||
axes[0, 1].set_xlabel('Runtime (Hours)')
|
|
||||||
axes[0, 1].set_ylabel('Average Reward')
|
|
||||||
axes[0, 1].grid(True, alpha=0.3)
|
|
||||||
|
|
||||||
# Win Rate
|
|
||||||
axes[0, 2].plot(runtime_hours, [w*100 for w in win_rates], 'gold', linewidth=2, marker='^')
|
|
||||||
axes[0, 2].set_title('🏆 Win Rate (%)', fontsize=14, fontweight='bold')
|
|
||||||
axes[0, 2].set_xlabel('Runtime (Hours)')
|
|
||||||
axes[0, 2].set_ylabel('Win Rate (%)')
|
|
||||||
axes[0, 2].grid(True, alpha=0.3)
|
|
||||||
|
|
||||||
# Profit/Loss (500x Leverage)
|
|
||||||
axes[1, 0].plot(runtime_hours, profits, 'magenta', linewidth=3, marker='D')
|
|
||||||
axes[1, 0].axhline(y=0, color='red', linestyle='--', alpha=0.7)
|
|
||||||
axes[1, 0].set_title('💰 P&L (500x Leverage)', fontsize=14, fontweight='bold')
|
|
||||||
axes[1, 0].set_xlabel('Runtime (Hours)')
|
|
||||||
axes[1, 0].set_ylabel('Total P&L ($)')
|
|
||||||
axes[1, 0].grid(True, alpha=0.3)
|
|
||||||
|
|
||||||
# ROI Percentage
|
|
||||||
axes[1, 1].plot(runtime_hours, roi, 'orange', linewidth=2, marker='*')
|
|
||||||
axes[1, 1].axhline(y=0, color='red', linestyle='--', alpha=0.7)
|
|
||||||
axes[1, 1].set_title('📊 ROI (%)', fontsize=14, fontweight='bold')
|
|
||||||
axes[1, 1].set_xlabel('Runtime (Hours)')
|
|
||||||
axes[1, 1].set_ylabel('ROI (%)')
|
|
||||||
axes[1, 1].grid(True, alpha=0.3)
|
|
||||||
|
|
||||||
# System Resources
|
|
||||||
axes[1, 2].plot(runtime_hours, cpu_usage, 'red', linewidth=2, label='CPU %', marker='o')
|
|
||||||
axes[1, 2].plot(runtime_hours, gpu_memory, 'cyan', linewidth=2, label='VRAM %', marker='s')
|
|
||||||
axes[1, 2].set_title('💻 System Resources', fontsize=14, fontweight='bold')
|
|
||||||
axes[1, 2].set_xlabel('Runtime (Hours)')
|
|
||||||
axes[1, 2].set_ylabel('Usage (%)')
|
|
||||||
axes[1, 2].legend()
|
|
||||||
axes[1, 2].grid(True, alpha=0.3)
|
|
||||||
|
|
||||||
plt.tight_layout()
|
|
||||||
|
|
||||||
# Save plot
|
|
||||||
plots_dir = Path("plots/overnight_training")
|
|
||||||
plots_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
|
||||||
plot_path = plots_dir / f"training_dashboard_{timestamp}.png"
|
|
||||||
plt.savefig(plot_path, dpi=300, bbox_inches='tight', facecolor='black')
|
|
||||||
plt.close()
|
|
||||||
|
|
||||||
logger.info(f"📊 Performance dashboard saved: {plot_path}")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error creating performance plots: {e}")
|
|
||||||
|
|
||||||
def save_progress_report(self):
|
|
||||||
"""Save comprehensive progress report"""
|
|
||||||
try:
|
|
||||||
runtime = datetime.now() - self.start_time
|
|
||||||
|
|
||||||
report = {
|
|
||||||
'session_info': {
|
|
||||||
'start_time': self.start_time.isoformat(),
|
|
||||||
'current_time': datetime.now().isoformat(),
|
|
||||||
'runtime': str(runtime),
|
|
||||||
'runtime_hours': runtime.total_seconds() / 3600
|
|
||||||
},
|
|
||||||
'model_specifications': self.model_specs,
|
|
||||||
'training_metrics': self.training_metrics,
|
|
||||||
'profit_metrics': self.profit_metrics,
|
|
||||||
'system_metrics_summary': {
|
|
||||||
'avg_cpu_usage': np.mean(self.system_metrics['cpu_usage']) if self.system_metrics['cpu_usage'] else 0,
|
|
||||||
'avg_memory_usage': np.mean(self.system_metrics['memory_usage']) if self.system_metrics['memory_usage'] else 0,
|
|
||||||
'avg_gpu_usage': np.mean(self.system_metrics['gpu_usage']) if self.system_metrics['gpu_usage'] else 0,
|
|
||||||
'avg_gpu_memory': np.mean(self.system_metrics['gpu_memory']) if self.system_metrics['gpu_memory'] else 0
|
|
||||||
},
|
|
||||||
'performance_history': self.performance_history
|
|
||||||
}
|
|
||||||
|
|
||||||
# Save report
|
|
||||||
reports_dir = Path("reports/overnight_training")
|
|
||||||
reports_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
|
||||||
report_path = reports_dir / f"progress_report_{timestamp}.json"
|
|
||||||
|
|
||||||
with open(report_path, 'w') as f:
|
|
||||||
json.dump(report, f, indent=2, default=str)
|
|
||||||
|
|
||||||
logger.info(f"📄 Progress report saved: {report_path}")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error saving progress report: {e}")
|
|
||||||
|
|
||||||
def monitor_overnight_training(self, check_interval: int = 300):
|
|
||||||
"""Main monitoring loop for overnight training"""
|
|
||||||
logger.info("🌙 STARTING OVERNIGHT TRAINING MONITORING")
|
|
||||||
logger.info(f"⏰ Check interval: {check_interval} seconds ({check_interval/60:.1f} minutes)")
|
|
||||||
logger.info("🚀 Monitoring the MASSIVE 504M parameter model training...")
|
|
||||||
|
|
||||||
try:
|
|
||||||
while self.monitoring:
|
|
||||||
# Log comprehensive status
|
|
||||||
self.log_comprehensive_status()
|
|
||||||
|
|
||||||
# Create performance plots every hour
|
|
||||||
runtime_hours = (datetime.now() - self.start_time).total_seconds() / 3600
|
|
||||||
if len(self.performance_history) > 0 and len(self.performance_history) % 12 == 0: # Every hour (12 * 5min = 1hr)
|
|
||||||
self.create_performance_plots()
|
|
||||||
|
|
||||||
# Save progress report every 2 hours
|
|
||||||
if len(self.performance_history) > 0 and len(self.performance_history) % 24 == 0: # Every 2 hours
|
|
||||||
self.save_progress_report()
|
|
||||||
|
|
||||||
# Check if we've been running for 8+ hours (full overnight session)
|
|
||||||
if runtime_hours >= 8:
|
|
||||||
logger.info("🌅 OVERNIGHT TRAINING SESSION COMPLETED (8+ hours)")
|
|
||||||
self.finalize_overnight_session()
|
|
||||||
break
|
|
||||||
|
|
||||||
# Wait for next check
|
|
||||||
time.sleep(check_interval)
|
|
||||||
|
|
||||||
except KeyboardInterrupt:
|
|
||||||
logger.info("🛑 MONITORING STOPPED BY USER")
|
|
||||||
self.finalize_overnight_session()
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ MONITORING ERROR: {e}")
|
|
||||||
self.finalize_overnight_session()
|
|
||||||
|
|
||||||
def finalize_overnight_session(self):
|
|
||||||
"""Finalize the overnight training session"""
|
|
||||||
logger.info("🏁 FINALIZING OVERNIGHT TRAINING SESSION")
|
|
||||||
|
|
||||||
# Final status log
|
|
||||||
self.log_comprehensive_status()
|
|
||||||
|
|
||||||
# Create final performance plots
|
|
||||||
self.create_performance_plots()
|
|
||||||
|
|
||||||
# Save final comprehensive report
|
|
||||||
self.save_progress_report()
|
|
||||||
|
|
||||||
# Calculate session summary
|
|
||||||
runtime = datetime.now() - self.start_time
|
|
||||||
runtime_hours = runtime.total_seconds() / 3600
|
|
||||||
|
|
||||||
logger.info("="*80)
|
|
||||||
logger.info("🌅 OVERNIGHT TRAINING SESSION COMPLETE")
|
|
||||||
logger.info("="*80)
|
|
||||||
logger.info(f"⏰ Total Runtime: {runtime}")
|
|
||||||
logger.info(f"📊 Total Episodes: {self.training_metrics['episodes_completed']:,}")
|
|
||||||
logger.info(f"💹 Total Trades: {self.training_metrics['total_trades']:,}")
|
|
||||||
logger.info(f"💰 Final P&L: ${self.profit_metrics['total_pnl']:+,.2f}")
|
|
||||||
logger.info(f"📈 Final ROI: {self.profit_metrics['roi_percentage']:+.2f}%")
|
|
||||||
logger.info(f"🏆 Final Win Rate: {self.training_metrics['win_rate']:.1%}")
|
|
||||||
logger.info(f"🎯 Avg Reward: {self.training_metrics['average_reward']:.2f}")
|
|
||||||
logger.info("="*80)
|
|
||||||
logger.info("🚀 MASSIVE 504M PARAMETER MODEL TRAINING SESSION COMPLETED!")
|
|
||||||
logger.info("="*80)
|
|
||||||
|
|
||||||
self.monitoring = False
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Main function to start overnight monitoring"""
|
|
||||||
try:
|
|
||||||
logger.info("🚀 INITIALIZING OVERNIGHT TRAINING MONITOR")
|
|
||||||
logger.info("💡 Monitoring 504.89 Million Parameter Enhanced CNN + DQN Agent")
|
|
||||||
logger.info("🎯 Target: 4GB VRAM utilization with maximum profit optimization")
|
|
||||||
|
|
||||||
# Create monitor
|
|
||||||
monitor = OvernightTrainingMonitor()
|
|
||||||
|
|
||||||
# Start monitoring (check every 5 minutes)
|
|
||||||
monitor.monitor_overnight_training(check_interval=300)
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Fatal error in overnight monitoring: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
@ -51,14 +51,15 @@ async def start_training_pipeline(orchestrator, trading_executor):
|
|||||||
}
|
}
|
||||||
|
|
||||||
try:
|
try:
|
||||||
# Start real-time processing
|
# Start real-time processing (if available in Basic orchestrator)
|
||||||
await orchestrator.start_realtime_processing()
|
if hasattr(orchestrator, 'start_realtime_processing'):
|
||||||
logger.info("Real-time processing started")
|
await orchestrator.start_realtime_processing()
|
||||||
|
logger.info("Real-time processing started")
|
||||||
|
else:
|
||||||
|
logger.info("Real-time processing not available in Basic orchestrator")
|
||||||
|
|
||||||
# Start COB integration
|
# COB integration not available in Basic orchestrator
|
||||||
if hasattr(orchestrator, 'start_cob_integration'):
|
logger.info("COB integration not available - using Basic orchestrator")
|
||||||
await orchestrator.start_cob_integration()
|
|
||||||
logger.info("COB integration started")
|
|
||||||
|
|
||||||
# Main training loop
|
# Main training loop
|
||||||
iteration = 0
|
iteration = 0
|
||||||
@ -139,20 +140,15 @@ def start_clean_dashboard_with_training():
|
|||||||
|
|
||||||
# Initialize core components
|
# Initialize core components
|
||||||
from core.data_provider import DataProvider
|
from core.data_provider import DataProvider
|
||||||
from core.enhanced_orchestrator import EnhancedTradingOrchestrator
|
from core.orchestrator import TradingOrchestrator
|
||||||
from core.trading_executor import TradingExecutor
|
from core.trading_executor import TradingExecutor
|
||||||
|
|
||||||
# Create data provider
|
# Create data provider
|
||||||
data_provider = DataProvider()
|
data_provider = DataProvider()
|
||||||
|
|
||||||
# Create enhanced orchestrator with full training capabilities
|
# Create basic orchestrator - stable and efficient
|
||||||
orchestrator = EnhancedTradingOrchestrator(
|
orchestrator = TradingOrchestrator(data_provider)
|
||||||
data_provider=data_provider,
|
logger.info("Basic Trading Orchestrator created for stability")
|
||||||
symbols=['ETH/USDT', 'BTC/USDT'],
|
|
||||||
enhanced_rl_training=True, # Enable RL training
|
|
||||||
model_registry={}
|
|
||||||
)
|
|
||||||
logger.info("Enhanced Trading Orchestrator created with training enabled")
|
|
||||||
|
|
||||||
# Create trading executor
|
# Create trading executor
|
||||||
trading_executor = TradingExecutor()
|
trading_executor = TradingExecutor()
|
||||||
|
@ -1,86 +0,0 @@
|
|||||||
<!DOCTYPE html>
|
|
||||||
<html>
|
|
||||||
<head>
|
|
||||||
<title>JavaScript Debug Test</title>
|
|
||||||
<script>
|
|
||||||
// Test the same debugging code we injected
|
|
||||||
window.dashDebug = {
|
|
||||||
callbackCount: 0,
|
|
||||||
lastUpdate: null,
|
|
||||||
errors: [],
|
|
||||||
|
|
||||||
log: function(message, data) {
|
|
||||||
const timestamp = new Date().toISOString();
|
|
||||||
console.log(`[DASH DEBUG ${timestamp}] ${message}`, data || '');
|
|
||||||
|
|
||||||
// Store in window for inspection
|
|
||||||
if (!window.dashLogs) window.dashLogs = [];
|
|
||||||
window.dashLogs.push({timestamp, message, data});
|
|
||||||
|
|
||||||
// Keep only last 100 logs
|
|
||||||
if (window.dashLogs.length > 100) {
|
|
||||||
window.dashLogs = window.dashLogs.slice(-100);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
// Test fetch override
|
|
||||||
const originalFetch = window.fetch;
|
|
||||||
window.fetch = function(...args) {
|
|
||||||
const url = args[0];
|
|
||||||
|
|
||||||
if (typeof url === 'string' && url.includes('_dash-update-component')) {
|
|
||||||
window.dashDebug.log('FETCH REQUEST to _dash-update-component', {
|
|
||||||
url: url,
|
|
||||||
method: (args[1] || {}).method || 'GET'
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
return originalFetch.apply(this, args);
|
|
||||||
};
|
|
||||||
|
|
||||||
// Helper functions
|
|
||||||
window.getDashDebugInfo = function() {
|
|
||||||
return {
|
|
||||||
callbackCount: window.dashDebug.callbackCount,
|
|
||||||
lastUpdate: window.dashDebug.lastUpdate,
|
|
||||||
errors: window.dashDebug.errors,
|
|
||||||
logs: window.dashLogs || []
|
|
||||||
};
|
|
||||||
};
|
|
||||||
|
|
||||||
window.clearDashLogs = function() {
|
|
||||||
window.dashLogs = [];
|
|
||||||
window.dashDebug.errors = [];
|
|
||||||
window.dashDebug.callbackCount = 0;
|
|
||||||
console.log('Dash debug logs cleared');
|
|
||||||
};
|
|
||||||
|
|
||||||
// Test the logging
|
|
||||||
document.addEventListener('DOMContentLoaded', function() {
|
|
||||||
window.dashDebug.log('TEST: DOM LOADED');
|
|
||||||
|
|
||||||
// Test logging every 2 seconds
|
|
||||||
setInterval(() => {
|
|
||||||
window.dashDebug.log('TEST: Periodic log', {
|
|
||||||
timestamp: new Date(),
|
|
||||||
test: 'data'
|
|
||||||
});
|
|
||||||
}, 2000);
|
|
||||||
});
|
|
||||||
</script>
|
|
||||||
</head>
|
|
||||||
<body>
|
|
||||||
<h1>JavaScript Debug Test</h1>
|
|
||||||
<p>Open browser console and check for debug logs.</p>
|
|
||||||
<p>Use these commands in console:</p>
|
|
||||||
<ul>
|
|
||||||
<li><code>getDashDebugInfo()</code> - Get debug info</li>
|
|
||||||
<li><code>clearDashLogs()</code> - Clear logs</li>
|
|
||||||
<li><code>window.dashLogs</code> - View all logs</li>
|
|
||||||
</ul>
|
|
||||||
|
|
||||||
<button onclick="window.dashDebug.log('TEST: Button clicked')">Test Log</button>
|
|
||||||
<button onclick="fetch('/_dash-update-component')">Test Fetch</button>
|
|
||||||
</body>
|
|
||||||
</html>
|
|
@ -1,239 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Training Status Audit - Check if models are actively training
|
|
||||||
"""
|
|
||||||
|
|
||||||
import asyncio
|
|
||||||
import sys
|
|
||||||
import time
|
|
||||||
from pathlib import Path
|
|
||||||
sys.path.append(str(Path('.').absolute()))
|
|
||||||
|
|
||||||
from core.enhanced_orchestrator import EnhancedTradingOrchestrator
|
|
||||||
from core.data_provider import DataProvider
|
|
||||||
|
|
||||||
async def check_training_status():
|
|
||||||
print("=" * 70)
|
|
||||||
print("TRAINING STATUS AUDIT")
|
|
||||||
print("=" * 70)
|
|
||||||
|
|
||||||
try:
|
|
||||||
data_provider = DataProvider()
|
|
||||||
orchestrator = EnhancedTradingOrchestrator(
|
|
||||||
data_provider=data_provider,
|
|
||||||
symbols=['ETH/USDT', 'BTC/USDT'],
|
|
||||||
enhanced_rl_training=True
|
|
||||||
)
|
|
||||||
|
|
||||||
print(f"✓ Enhanced Orchestrator created")
|
|
||||||
|
|
||||||
# 1. Check DQN Agent Status
|
|
||||||
print("\n--- DQN AGENT STATUS ---")
|
|
||||||
if hasattr(orchestrator, 'sensitivity_dqn_agent'):
|
|
||||||
dqn_agent = orchestrator.sensitivity_dqn_agent
|
|
||||||
print(f"DQN Agent: {dqn_agent}")
|
|
||||||
|
|
||||||
if dqn_agent is not None:
|
|
||||||
print(f"DQN Agent Type: {type(dqn_agent)}")
|
|
||||||
|
|
||||||
# Check if it has training stats
|
|
||||||
if hasattr(dqn_agent, 'get_enhanced_training_stats'):
|
|
||||||
try:
|
|
||||||
stats = dqn_agent.get_enhanced_training_stats()
|
|
||||||
print(f"DQN Training Stats: {stats}")
|
|
||||||
except Exception as e:
|
|
||||||
print(f"Error getting DQN stats: {e}")
|
|
||||||
|
|
||||||
# Check memory and training status
|
|
||||||
if hasattr(dqn_agent, 'memory'):
|
|
||||||
print(f"DQN Memory Size: {len(dqn_agent.memory)}")
|
|
||||||
if hasattr(dqn_agent, 'batch_size'):
|
|
||||||
print(f"DQN Batch Size: {dqn_agent.batch_size}")
|
|
||||||
if hasattr(dqn_agent, 'epsilon'):
|
|
||||||
print(f"DQN Epsilon: {dqn_agent.epsilon}")
|
|
||||||
|
|
||||||
# Check if training is possible
|
|
||||||
can_train = hasattr(dqn_agent, 'replay') and hasattr(dqn_agent, 'memory')
|
|
||||||
print(f"DQN Can Train: {can_train}")
|
|
||||||
|
|
||||||
else:
|
|
||||||
print("❌ DQN Agent is None - needs initialization")
|
|
||||||
try:
|
|
||||||
orchestrator._initialize_sensitivity_dqn()
|
|
||||||
print("✓ DQN Agent initialized")
|
|
||||||
dqn_agent = orchestrator.sensitivity_dqn_agent
|
|
||||||
print(f"New DQN Agent: {type(dqn_agent)}")
|
|
||||||
except Exception as e:
|
|
||||||
print(f"Error initializing DQN: {e}")
|
|
||||||
else:
|
|
||||||
print("❌ No DQN agent attribute found")
|
|
||||||
|
|
||||||
# 2. Check CNN Status
|
|
||||||
print("\n--- CNN MODEL STATUS ---")
|
|
||||||
if hasattr(orchestrator, 'williams_structure'):
|
|
||||||
williams = orchestrator.williams_structure
|
|
||||||
print(f"Williams CNN: {williams}")
|
|
||||||
|
|
||||||
if williams is not None:
|
|
||||||
print(f"Williams Type: {type(williams)}")
|
|
||||||
|
|
||||||
# Check if it has training stats
|
|
||||||
if hasattr(williams, 'get_training_stats'):
|
|
||||||
try:
|
|
||||||
stats = williams.get_training_stats()
|
|
||||||
print(f"CNN Training Stats: {stats}")
|
|
||||||
except Exception as e:
|
|
||||||
print(f"Error getting CNN stats: {e}")
|
|
||||||
|
|
||||||
# Check if it's enabled
|
|
||||||
print(f"Williams Enabled: {getattr(orchestrator, 'williams_enabled', False)}")
|
|
||||||
else:
|
|
||||||
print("❌ Williams CNN is None")
|
|
||||||
else:
|
|
||||||
print("❌ No Williams CNN attribute found")
|
|
||||||
|
|
||||||
# 3. Check COB Integration Training
|
|
||||||
print("\n--- COB INTEGRATION STATUS ---")
|
|
||||||
if hasattr(orchestrator, 'cob_integration'):
|
|
||||||
cob = orchestrator.cob_integration
|
|
||||||
print(f"COB Integration: {cob}")
|
|
||||||
|
|
||||||
if cob is not None:
|
|
||||||
print(f"COB Type: {type(cob)}")
|
|
||||||
|
|
||||||
# Check if COB is started
|
|
||||||
cob_active = getattr(orchestrator, 'cob_integration_active', False)
|
|
||||||
print(f"COB Active: {cob_active}")
|
|
||||||
|
|
||||||
# Try to start COB if not active
|
|
||||||
if not cob_active:
|
|
||||||
print("Starting COB integration...")
|
|
||||||
try:
|
|
||||||
await orchestrator.start_cob_integration()
|
|
||||||
print("✓ COB integration started")
|
|
||||||
except Exception as e:
|
|
||||||
print(f"Error starting COB: {e}")
|
|
||||||
|
|
||||||
# Get COB stats
|
|
||||||
try:
|
|
||||||
stats = cob.get_statistics()
|
|
||||||
print(f"COB Statistics: {stats}")
|
|
||||||
except Exception as e:
|
|
||||||
print(f"Error getting COB stats: {e}")
|
|
||||||
|
|
||||||
# Check COB feature generation
|
|
||||||
cob_features = getattr(orchestrator, 'latest_cob_features', {})
|
|
||||||
print(f"COB Features Available: {list(cob_features.keys())}")
|
|
||||||
else:
|
|
||||||
print("❌ COB Integration is None")
|
|
||||||
else:
|
|
||||||
print("❌ No COB integration attribute found")
|
|
||||||
|
|
||||||
# 4. Check Training Queues and Learning
|
|
||||||
print("\n--- TRAINING ACTIVITY STATUS ---")
|
|
||||||
|
|
||||||
# Check extrema trainer
|
|
||||||
if hasattr(orchestrator, 'extrema_trainer'):
|
|
||||||
extrema = orchestrator.extrema_trainer
|
|
||||||
print(f"Extrema Trainer: {extrema}")
|
|
||||||
if extrema and hasattr(extrema, 'get_training_stats'):
|
|
||||||
try:
|
|
||||||
stats = extrema.get_training_stats()
|
|
||||||
print(f"Extrema Training Stats: {stats}")
|
|
||||||
except Exception as e:
|
|
||||||
print(f"Error getting extrema stats: {e}")
|
|
||||||
|
|
||||||
# Check negative case trainer
|
|
||||||
if hasattr(orchestrator, 'negative_case_trainer'):
|
|
||||||
negative = orchestrator.negative_case_trainer
|
|
||||||
print(f"Negative Case Trainer: {negative}")
|
|
||||||
|
|
||||||
# Check recent decisions and training queues
|
|
||||||
if hasattr(orchestrator, 'recent_decisions'):
|
|
||||||
recent_decisions = orchestrator.recent_decisions
|
|
||||||
print(f"Recent Decisions: {len(recent_decisions) if recent_decisions else 0}")
|
|
||||||
|
|
||||||
if hasattr(orchestrator, 'sensitivity_learning_queue'):
|
|
||||||
queue = orchestrator.sensitivity_learning_queue
|
|
||||||
print(f"Sensitivity Learning Queue: {len(queue) if queue else 0}")
|
|
||||||
|
|
||||||
if hasattr(orchestrator, 'rl_evaluation_queue'):
|
|
||||||
queue = orchestrator.rl_evaluation_queue
|
|
||||||
print(f"RL Evaluation Queue: {len(queue) if queue else 0}")
|
|
||||||
|
|
||||||
# 5. Test Signal Generation and Training
|
|
||||||
print("\n--- TESTING SIGNAL GENERATION ---")
|
|
||||||
|
|
||||||
# Generate a test decision to see if training is triggered
|
|
||||||
try:
|
|
||||||
print("Making coordinated decisions...")
|
|
||||||
decisions = await orchestrator.make_coordinated_decisions()
|
|
||||||
print(f"Decisions Generated: {len(decisions) if decisions else 0}")
|
|
||||||
|
|
||||||
for symbol, decision in decisions.items():
|
|
||||||
if decision:
|
|
||||||
print(f"{symbol}: {decision.action} (confidence: {decision.confidence:.3f})")
|
|
||||||
else:
|
|
||||||
print(f"{symbol}: No decision")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"Error making decisions: {e}")
|
|
||||||
|
|
||||||
# 6. Wait and check for training activity
|
|
||||||
print("\n--- MONITORING TRAINING ACTIVITY (10 seconds) ---")
|
|
||||||
|
|
||||||
initial_stats = {}
|
|
||||||
|
|
||||||
# Capture initial state
|
|
||||||
if hasattr(orchestrator, 'sensitivity_dqn_agent') and orchestrator.sensitivity_dqn_agent:
|
|
||||||
if hasattr(orchestrator.sensitivity_dqn_agent, 'memory'):
|
|
||||||
initial_stats['dqn_memory'] = len(orchestrator.sensitivity_dqn_agent.memory)
|
|
||||||
|
|
||||||
# Wait and monitor
|
|
||||||
for i in range(10):
|
|
||||||
await asyncio.sleep(1)
|
|
||||||
print(f"Monitoring... {i+1}/10")
|
|
||||||
|
|
||||||
# Check if any training happened
|
|
||||||
if hasattr(orchestrator, 'sensitivity_dqn_agent') and orchestrator.sensitivity_dqn_agent:
|
|
||||||
if hasattr(orchestrator.sensitivity_dqn_agent, 'memory'):
|
|
||||||
current_memory = len(orchestrator.sensitivity_dqn_agent.memory)
|
|
||||||
if current_memory != initial_stats.get('dqn_memory', 0):
|
|
||||||
print(f"🔥 DQN training detected! Memory: {initial_stats.get('dqn_memory', 0)} -> {current_memory}")
|
|
||||||
|
|
||||||
# Final status
|
|
||||||
print("\n--- FINAL TRAINING STATUS ---")
|
|
||||||
|
|
||||||
# Check if models are actively learning
|
|
||||||
dqn_learning = False
|
|
||||||
cnn_learning = False
|
|
||||||
cob_learning = False
|
|
||||||
|
|
||||||
if hasattr(orchestrator, 'sensitivity_dqn_agent') and orchestrator.sensitivity_dqn_agent:
|
|
||||||
memory_size = getattr(orchestrator.sensitivity_dqn_agent, 'memory', [])
|
|
||||||
batch_size = getattr(orchestrator.sensitivity_dqn_agent, 'batch_size', 32)
|
|
||||||
dqn_learning = len(memory_size) >= batch_size if hasattr(memory_size, '__len__') else False
|
|
||||||
|
|
||||||
print(f"DQN Learning Ready: {dqn_learning}")
|
|
||||||
print(f"CNN Learning Ready: {cnn_learning}")
|
|
||||||
print(f"COB Learning Ready: {cob_learning}")
|
|
||||||
|
|
||||||
# GPU Utilization Check
|
|
||||||
try:
|
|
||||||
import GPUtil
|
|
||||||
gpus = GPUtil.getGPUs()
|
|
||||||
if gpus:
|
|
||||||
for gpu in gpus:
|
|
||||||
print(f"GPU {gpu.id}: {gpu.load*100:.1f}% utilization, {gpu.memoryUtil*100:.1f}% memory")
|
|
||||||
else:
|
|
||||||
print("No GPUs detected")
|
|
||||||
except ImportError:
|
|
||||||
print("GPUtil not available - cannot check GPU status")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"Error in training status check: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
asyncio.run(check_training_status())
|
|
Binary file not shown.
Before Width: | Height: | Size: 75 KiB |
@ -1,189 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
NO SYNTHETIC DATA VERIFICATION SCRIPT
|
|
||||||
|
|
||||||
This script scans the entire codebase to ensure NO synthetic, mock,
|
|
||||||
dummy, or generated data implementations remain.
|
|
||||||
|
|
||||||
Run this script to verify 100% real market data compliance.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import os
|
|
||||||
import re
|
|
||||||
import sys
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import List, Dict, Tuple
|
|
||||||
|
|
||||||
# Patterns that indicate synthetic data
|
|
||||||
FORBIDDEN_PATTERNS = [
|
|
||||||
r'np\.random\.',
|
|
||||||
r'random\.uniform',
|
|
||||||
r'random\.choice',
|
|
||||||
r'random\.normal',
|
|
||||||
r'generate.*data',
|
|
||||||
r'create.*fake',
|
|
||||||
r'dummy.*data',
|
|
||||||
r'mock.*data',
|
|
||||||
r'simulate.*',
|
|
||||||
r'synthetic.*data',
|
|
||||||
r'fake.*data',
|
|
||||||
r'test.*data.*=',
|
|
||||||
r'simulated.*',
|
|
||||||
r'generated.*data'
|
|
||||||
]
|
|
||||||
|
|
||||||
# Allowed exceptions (legitimate uses)
|
|
||||||
ALLOWED_EXCEPTIONS = [
|
|
||||||
'np.random.choice', # In training for batch sampling
|
|
||||||
'random exploration', # RL exploration
|
|
||||||
'random seed', # For reproducibility
|
|
||||||
'random.choice.*action', # RL action selection
|
|
||||||
'random sample', # Data sampling (not generation)
|
|
||||||
'model.train.*random', # Training mode randomness
|
|
||||||
'test.*real.*data', # Testing with real data
|
|
||||||
'random.*shuffle', # Data shuffling
|
|
||||||
'random.*split' # Data splitting
|
|
||||||
]
|
|
||||||
|
|
||||||
# File extensions to check
|
|
||||||
EXTENSIONS = ['.py', '.md', '.txt', '.json', '.yaml', '.yml']
|
|
||||||
|
|
||||||
def is_allowed_exception(line: str, pattern: str) -> bool:
|
|
||||||
"""Check if a pattern match is an allowed exception"""
|
|
||||||
line_lower = line.lower()
|
|
||||||
line_stripped = line.strip()
|
|
||||||
|
|
||||||
# Skip comments and documentation
|
|
||||||
if line_stripped.startswith('#') or line_stripped.startswith('*') or line_stripped.startswith('//'):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Skip markdown documentation
|
|
||||||
if any(keyword in line_lower for keyword in ['code:', '```', 'line ', '📁', '❌', '✅']):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Skip policy documentation (mentions of forbidden things in policy docs)
|
|
||||||
if any(keyword in line_lower for keyword in ['policy', 'forbidden', 'not allowed', 'never use', 'zero synthetic']):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Skip error messages and logging about synthetic data
|
|
||||||
if any(keyword in line_lower for keyword in ['logger.', 'print(', 'error(', 'warning(']):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Skip variable names and string literals mentioning synthetic data
|
|
||||||
if any(keyword in line_lower for keyword in ['_synthetic_', 'allow_synthetic', 'no synthetic']):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Skip function/method definitions that handle real data
|
|
||||||
if any(keyword in line_lower for keyword in ['def ', 'class ', 'from real', 'real market']):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Check for legitimate RL exploration (with context)
|
|
||||||
if any(keyword in line_lower for keyword in ['exploration', 'epsilon', 'action selection', 'random exploration']):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Check for legitimate training randomness
|
|
||||||
if any(keyword in line_lower for keyword in ['batch.*sample', 'shuffle', 'split', 'randint.*start']):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Check for reproducibility
|
|
||||||
if 'seed' in line_lower:
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Check for legitimate data operations (not generation)
|
|
||||||
if any(keyword in line_lower for keyword in ['test_data =', 'latest_data =', 'test_dataset =']):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Skip verification script itself
|
|
||||||
if 'verify_no_synthetic_data.py' in str(line):
|
|
||||||
return True
|
|
||||||
|
|
||||||
# Check other allowed patterns
|
|
||||||
for exception in ALLOWED_EXCEPTIONS:
|
|
||||||
if re.search(exception, line_lower):
|
|
||||||
return True
|
|
||||||
|
|
||||||
return False
|
|
||||||
|
|
||||||
def scan_file(file_path: Path) -> List[Tuple[int, str, str]]:
|
|
||||||
"""Scan a file for forbidden patterns"""
|
|
||||||
violations = []
|
|
||||||
|
|
||||||
try:
|
|
||||||
with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:
|
|
||||||
lines = f.readlines()
|
|
||||||
|
|
||||||
for line_num, line in enumerate(lines, 1):
|
|
||||||
for pattern in FORBIDDEN_PATTERNS:
|
|
||||||
if re.search(pattern, line, re.IGNORECASE):
|
|
||||||
# Check if it's an allowed exception
|
|
||||||
if not is_allowed_exception(line, pattern):
|
|
||||||
violations.append((line_num, pattern, line.strip()))
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"⚠️ Error scanning {file_path}: {e}")
|
|
||||||
|
|
||||||
return violations
|
|
||||||
|
|
||||||
def scan_codebase(root_path: Path) -> Dict[str, List[Tuple[int, str, str]]]:
|
|
||||||
"""Scan entire codebase for synthetic data violations"""
|
|
||||||
violations = {}
|
|
||||||
|
|
||||||
# Skip certain directories
|
|
||||||
skip_dirs = {'.git', '__pycache__', 'node_modules', '.vscode', 'cache', 'logs', 'runs'}
|
|
||||||
|
|
||||||
for root, dirs, files in os.walk(root_path):
|
|
||||||
# Skip unwanted directories
|
|
||||||
dirs[:] = [d for d in dirs if d not in skip_dirs]
|
|
||||||
|
|
||||||
for file in files:
|
|
||||||
file_path = Path(root) / file
|
|
||||||
|
|
||||||
# Check only relevant file types
|
|
||||||
if file_path.suffix in EXTENSIONS:
|
|
||||||
file_violations = scan_file(file_path)
|
|
||||||
if file_violations:
|
|
||||||
relative_path = file_path.relative_to(root_path)
|
|
||||||
violations[str(relative_path)] = file_violations
|
|
||||||
|
|
||||||
return violations
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Main verification function"""
|
|
||||||
print("🔍 SCANNING CODEBASE FOR SYNTHETIC DATA VIOLATIONS...")
|
|
||||||
print("=" * 80)
|
|
||||||
|
|
||||||
# Get project root
|
|
||||||
project_root = Path(__file__).parent
|
|
||||||
|
|
||||||
# Scan codebase
|
|
||||||
violations = scan_codebase(project_root)
|
|
||||||
|
|
||||||
if not violations:
|
|
||||||
print("✅ SUCCESS: NO SYNTHETIC DATA FOUND!")
|
|
||||||
print("🎯 100% REAL MARKET DATA COMPLIANCE VERIFIED")
|
|
||||||
print("🚫 Zero synthetic, mock, dummy, or generated data")
|
|
||||||
print("=" * 80)
|
|
||||||
return 0
|
|
||||||
|
|
||||||
# Report violations
|
|
||||||
print(f"❌ FOUND {len(violations)} FILES WITH POTENTIAL SYNTHETIC DATA:")
|
|
||||||
print("=" * 80)
|
|
||||||
|
|
||||||
total_violations = 0
|
|
||||||
for file_path, file_violations in violations.items():
|
|
||||||
print(f"\n📁 {file_path}:")
|
|
||||||
for line_num, pattern, line in file_violations:
|
|
||||||
total_violations += 1
|
|
||||||
print(f" Line {line_num}: {pattern}")
|
|
||||||
print(f" Code: {line[:100]}...")
|
|
||||||
|
|
||||||
print("=" * 80)
|
|
||||||
print(f"❌ TOTAL VIOLATIONS: {total_violations}")
|
|
||||||
print("🚨 CRITICAL: Synthetic data detected - must be removed!")
|
|
||||||
print("🎯 Only 100% real market data is allowed")
|
|
||||||
|
|
||||||
return 1
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
exit_code = main()
|
|
||||||
sys.exit(exit_code)
|
|
@ -30,7 +30,7 @@ import logging
|
|||||||
import json
|
import json
|
||||||
import time
|
import time
|
||||||
import threading
|
import threading
|
||||||
from typing import Dict, List, Optional, Any
|
from typing import Dict, List, Optional, Any, Union
|
||||||
import os
|
import os
|
||||||
import asyncio
|
import asyncio
|
||||||
import dash_bootstrap_components as dbc
|
import dash_bootstrap_components as dbc
|
||||||
@ -51,20 +51,15 @@ logging.getLogger('dash.dash').setLevel(logging.WARNING)
|
|||||||
# Import core components
|
# Import core components
|
||||||
from core.config import get_config
|
from core.config import get_config
|
||||||
from core.data_provider import DataProvider
|
from core.data_provider import DataProvider
|
||||||
from core.enhanced_orchestrator import EnhancedTradingOrchestrator
|
from core.orchestrator import TradingOrchestrator
|
||||||
from core.trading_executor import TradingExecutor
|
from core.trading_executor import TradingExecutor
|
||||||
|
|
||||||
# Import layout and component managers
|
# Import layout and component managers
|
||||||
from web.layout_manager import DashboardLayoutManager
|
from web.layout_manager import DashboardLayoutManager
|
||||||
from web.component_manager import DashboardComponentManager
|
from web.component_manager import DashboardComponentManager
|
||||||
|
|
||||||
# Import optional components
|
# Enhanced RL components are no longer available - using Basic orchestrator only
|
||||||
try:
|
ENHANCED_RL_AVAILABLE = False
|
||||||
from core.enhanced_orchestrator import EnhancedTradingOrchestrator
|
|
||||||
ENHANCED_RL_AVAILABLE = True
|
|
||||||
except ImportError:
|
|
||||||
ENHANCED_RL_AVAILABLE = False
|
|
||||||
logger.warning("Enhanced RL components not available")
|
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from core.cob_integration import COBIntegration
|
from core.cob_integration import COBIntegration
|
||||||
@ -86,23 +81,24 @@ except ImportError:
|
|||||||
# Import RL COB trader for 1B parameter model integration
|
# Import RL COB trader for 1B parameter model integration
|
||||||
from core.realtime_rl_cob_trader import RealtimeRLCOBTrader, PredictionResult
|
from core.realtime_rl_cob_trader import RealtimeRLCOBTrader, PredictionResult
|
||||||
|
|
||||||
|
# Using Basic orchestrator only - Enhanced orchestrator removed for stability
|
||||||
|
ENHANCED_ORCHESTRATOR_AVAILABLE = False
|
||||||
|
USE_ENHANCED_ORCHESTRATOR = False
|
||||||
|
|
||||||
class CleanTradingDashboard:
|
class CleanTradingDashboard:
|
||||||
"""Clean, modular trading dashboard implementation"""
|
"""Clean, modular trading dashboard implementation"""
|
||||||
|
|
||||||
def __init__(self, data_provider: DataProvider = None, orchestrator: EnhancedTradingOrchestrator = None, trading_executor: TradingExecutor = None):
|
def __init__(self, data_provider: Optional[DataProvider] = None, orchestrator: Optional[Any] = None, trading_executor: Optional[TradingExecutor] = None):
|
||||||
self.config = get_config()
|
self.config = get_config()
|
||||||
|
|
||||||
# Initialize components
|
# Initialize components
|
||||||
self.data_provider = data_provider or DataProvider()
|
self.data_provider = data_provider or DataProvider()
|
||||||
self.trading_executor = trading_executor or TradingExecutor()
|
self.trading_executor = trading_executor or TradingExecutor()
|
||||||
|
|
||||||
# Initialize orchestrator with enhanced capabilities
|
# Initialize orchestrator - USING BASIC ORCHESTRATOR ONLY
|
||||||
if orchestrator is None:
|
if orchestrator is None:
|
||||||
self.orchestrator = EnhancedTradingOrchestrator(
|
self.orchestrator = TradingOrchestrator(self.data_provider)
|
||||||
data_provider=self.data_provider,
|
logger.info("Using Basic Trading Orchestrator for stability")
|
||||||
symbols=['ETH/USDT', 'BTC/USDT'],
|
|
||||||
enhanced_rl_training=True
|
|
||||||
)
|
|
||||||
else:
|
else:
|
||||||
self.orchestrator = orchestrator
|
self.orchestrator = orchestrator
|
||||||
|
|
||||||
@ -141,11 +137,12 @@ class CleanTradingDashboard:
|
|||||||
self.is_streaming = False
|
self.is_streaming = False
|
||||||
self.tick_cache = []
|
self.tick_cache = []
|
||||||
|
|
||||||
# COB data cache
|
# COB data cache - using same approach as cob_realtime_dashboard.py
|
||||||
self.cob_cache = {
|
self.cob_cache = {
|
||||||
'ETH/USDT': {'last_update': 0, 'data': None, 'updates_count': 0},
|
'ETH/USDT': {'last_update': 0, 'data': None, 'updates_count': 0},
|
||||||
'BTC/USDT': {'last_update': 0, 'data': None, 'updates_count': 0}
|
'BTC/USDT': {'last_update': 0, 'data': None, 'updates_count': 0}
|
||||||
}
|
}
|
||||||
|
self.latest_cob_data = {} # Cache for COB integration data
|
||||||
|
|
||||||
# Initialize timezone
|
# Initialize timezone
|
||||||
timezone_name = self.config.get('system', {}).get('timezone', 'Europe/Sofia')
|
timezone_name = self.config.get('system', {}).get('timezone', 'Europe/Sofia')
|
||||||
@ -170,8 +167,8 @@ class CleanTradingDashboard:
|
|||||||
# Connect to orchestrator for real trading signals
|
# Connect to orchestrator for real trading signals
|
||||||
self._connect_to_orchestrator()
|
self._connect_to_orchestrator()
|
||||||
|
|
||||||
# Initialize REAL COB integration from enhanced orchestrator (NO separate RL trader needed)
|
# Initialize REAL COB integration - using proper approach from enhanced orchestrator
|
||||||
self._initialize_cob_integration()
|
self._initialize_cob_integration_proper()
|
||||||
|
|
||||||
# Start Universal Data Stream
|
# Start Universal Data Stream
|
||||||
if self.unified_stream:
|
if self.unified_stream:
|
||||||
@ -182,40 +179,28 @@ class CleanTradingDashboard:
|
|||||||
# Start signal generation loop to ensure continuous trading signals
|
# Start signal generation loop to ensure continuous trading signals
|
||||||
self._start_signal_generation_loop()
|
self._start_signal_generation_loop()
|
||||||
|
|
||||||
logger.info("Clean Trading Dashboard initialized with REAL COB integration and signal generation")
|
logger.info("Clean Trading Dashboard initialized with PROPER COB integration and signal generation")
|
||||||
|
|
||||||
def load_model_dynamically(self, model_name: str, model_type: str, model_path: str = None) -> bool:
|
def load_model_dynamically(self, model_name: str, model_type: str, model_path: Optional[str] = None) -> bool:
|
||||||
"""Dynamically load a model at runtime"""
|
"""Dynamically load a model at runtime - Not implemented in orchestrator"""
|
||||||
try:
|
logger.warning("Dynamic model loading not implemented in orchestrator")
|
||||||
if hasattr(self.orchestrator, 'load_model'):
|
return False
|
||||||
success = self.orchestrator.load_model(model_name, model_type, model_path)
|
|
||||||
if success:
|
|
||||||
logger.info(f"Successfully loaded model: {model_name}")
|
|
||||||
return True
|
|
||||||
return False
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error loading model {model_name}: {e}")
|
|
||||||
return False
|
|
||||||
|
|
||||||
def unload_model_dynamically(self, model_name: str) -> bool:
|
def unload_model_dynamically(self, model_name: str) -> bool:
|
||||||
"""Dynamically unload a model at runtime"""
|
"""Dynamically unload a model at runtime - Not implemented in orchestrator"""
|
||||||
try:
|
logger.warning("Dynamic model unloading not implemented in orchestrator")
|
||||||
if hasattr(self.orchestrator, 'unload_model'):
|
return False
|
||||||
success = self.orchestrator.unload_model(model_name)
|
|
||||||
if success:
|
|
||||||
logger.info(f"Successfully unloaded model: {model_name}")
|
|
||||||
return True
|
|
||||||
return False
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error unloading model {model_name}: {e}")
|
|
||||||
return False
|
|
||||||
|
|
||||||
def get_loaded_models_status(self) -> Dict[str, Any]:
|
def get_loaded_models_status(self) -> Dict[str, Any]:
|
||||||
"""Get status of all loaded models"""
|
"""Get status of all loaded models from training metrics"""
|
||||||
try:
|
try:
|
||||||
if hasattr(self.orchestrator, 'list_loaded_models'):
|
# Get status from training metrics instead
|
||||||
return self.orchestrator.list_loaded_models()
|
metrics = self._get_training_metrics()
|
||||||
return {'loaded_models': {}, 'total_models': 0, 'system_status': 'NO_ORCHESTRATOR'}
|
return {
|
||||||
|
'loaded_models': metrics.get('loaded_models', {}),
|
||||||
|
'total_models': len(metrics.get('loaded_models', {})),
|
||||||
|
'system_status': 'ACTIVE' if metrics.get('training_status', {}).get('active_sessions', 0) > 0 else 'INACTIVE'
|
||||||
|
}
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting model status: {e}")
|
logger.error(f"Error getting model status: {e}")
|
||||||
return {'loaded_models': {}, 'total_models': 0, 'system_status': 'ERROR'}
|
return {'loaded_models': {}, 'total_models': 0, 'system_status': 'ERROR'}
|
||||||
@ -1022,112 +1007,144 @@ class CleanTradingDashboard:
|
|||||||
return None
|
return None
|
||||||
|
|
||||||
def _get_cob_status(self) -> Dict:
|
def _get_cob_status(self) -> Dict:
|
||||||
"""Get REAL COB integration status - NO SIMULATION"""
|
"""Get REAL COB integration status - FIXED TO USE ENHANCED ORCHESTRATOR PROPERLY"""
|
||||||
try:
|
try:
|
||||||
status = {
|
status = {
|
||||||
'trading_enabled': bool(self.trading_executor and getattr(self.trading_executor, 'trading_enabled', False)),
|
'trading_enabled': bool(self.trading_executor and getattr(self.trading_executor, 'trading_enabled', False)),
|
||||||
'simulation_mode': bool(self.trading_executor and getattr(self.trading_executor, 'simulation_mode', True)),
|
'simulation_mode': bool(self.trading_executor and getattr(self.trading_executor, 'simulation_mode', True)),
|
||||||
'data_provider_status': 'Active',
|
'data_provider_status': 'Active',
|
||||||
'websocket_status': 'Connected' if self.is_streaming else 'Disconnected',
|
'websocket_status': 'Connected' if self.is_streaming else 'Disconnected',
|
||||||
'cob_status': 'No Real COB Integration', # Default
|
'cob_status': 'No COB Integration', # Default
|
||||||
|
'orchestrator_type': 'Basic',
|
||||||
'rl_model_status': 'Inactive',
|
'rl_model_status': 'Inactive',
|
||||||
'predictions_count': 0,
|
'predictions_count': 0,
|
||||||
'cache_size': 0
|
'cache_size': 0
|
||||||
}
|
}
|
||||||
|
|
||||||
# Check REAL COB integration from enhanced orchestrator
|
# Check if we have Enhanced Orchestrator - PROPER TYPE CHECK
|
||||||
if hasattr(self.orchestrator, 'cob_integration') and self.orchestrator.cob_integration:
|
is_enhanced = (ENHANCED_ORCHESTRATOR_AVAILABLE and
|
||||||
cob_integration = self.orchestrator.cob_integration
|
self.orchestrator.__class__.__name__ == 'EnhancedTradingOrchestrator')
|
||||||
|
|
||||||
# Get real COB integration statistics
|
if is_enhanced:
|
||||||
try:
|
status['orchestrator_type'] = 'Enhanced'
|
||||||
cob_stats = cob_integration.get_statistics()
|
|
||||||
if cob_stats:
|
|
||||||
active_symbols = cob_stats.get('active_symbols', [])
|
|
||||||
total_updates = cob_stats.get('total_updates', 0)
|
|
||||||
provider_status = cob_stats.get('provider_status', 'Unknown')
|
|
||||||
|
|
||||||
if active_symbols:
|
# Check COB integration in Enhanced orchestrator
|
||||||
status['cob_status'] = f'REAL COB Active ({len(active_symbols)} symbols)'
|
if hasattr(self.orchestrator, 'cob_integration'):
|
||||||
status['active_symbols'] = active_symbols
|
cob_integration = getattr(self.orchestrator, 'cob_integration', None)
|
||||||
status['cache_size'] = total_updates
|
if cob_integration is not None:
|
||||||
status['provider_status'] = provider_status
|
# Get real COB integration statistics
|
||||||
|
try:
|
||||||
|
if hasattr(cob_integration, 'get_statistics'):
|
||||||
|
cob_stats = cob_integration.get_statistics()
|
||||||
|
if cob_stats:
|
||||||
|
active_symbols = cob_stats.get('active_symbols', [])
|
||||||
|
total_updates = cob_stats.get('total_updates', 0)
|
||||||
|
provider_status = cob_stats.get('provider_status', 'Unknown')
|
||||||
|
|
||||||
|
if active_symbols:
|
||||||
|
status['cob_status'] = f'Enhanced COB Active ({len(active_symbols)} symbols)'
|
||||||
|
status['active_symbols'] = active_symbols
|
||||||
|
status['cache_size'] = total_updates
|
||||||
|
status['provider_status'] = provider_status
|
||||||
|
else:
|
||||||
|
status['cob_status'] = 'Enhanced COB Integration Loaded (No Data)'
|
||||||
|
else:
|
||||||
|
status['cob_status'] = 'Enhanced COB Integration (Stats Unavailable)'
|
||||||
|
else:
|
||||||
|
status['cob_status'] = 'Enhanced COB Integration (No Stats Method)'
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.debug(f"Error getting COB statistics: {e}")
|
||||||
|
status['cob_status'] = 'Enhanced COB Integration (Error Getting Stats)'
|
||||||
else:
|
else:
|
||||||
status['cob_status'] = 'REAL COB Integration Loaded (No Data)'
|
status['cob_status'] = 'Enhanced Orchestrator (COB Integration Not Initialized)'
|
||||||
|
# Don't log warning here to avoid spam, just info level
|
||||||
|
logger.debug("Enhanced orchestrator has COB integration attribute but it's None")
|
||||||
else:
|
else:
|
||||||
status['cob_status'] = 'REAL COB Integration (Stats Unavailable)'
|
status['cob_status'] = 'Enhanced Orchestrator Missing COB Integration'
|
||||||
|
logger.debug("Enhanced orchestrator available but has no COB integration attribute")
|
||||||
except Exception as e:
|
else:
|
||||||
logger.debug(f"Error getting COB statistics: {e}")
|
status['cob_status'] = 'Enhanced Orchestrator Missing COB Integration'
|
||||||
status['cob_status'] = 'REAL COB Integration (Error Getting Stats)'
|
logger.debug("Enhanced orchestrator available but has no COB integration attribute")
|
||||||
else:
|
else:
|
||||||
status['cob_status'] = 'No Enhanced Orchestrator COB Integration'
|
if not ENHANCED_ORCHESTRATOR_AVAILABLE:
|
||||||
logger.warning("Enhanced orchestrator has no COB integration - using basic orchestrator")
|
status['cob_status'] = 'Enhanced Orchestrator Not Available'
|
||||||
|
status['orchestrator_type'] = 'Basic (Enhanced Unavailable)'
|
||||||
|
else:
|
||||||
|
status['cob_status'] = 'Basic Orchestrator (No COB Support)'
|
||||||
|
status['orchestrator_type'] = 'Basic (Enhanced Not Used)'
|
||||||
|
|
||||||
return status
|
return status
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting COB status: {e}")
|
logger.error(f"Error getting COB status: {e}")
|
||||||
return {'error': str(e), 'cob_status': 'Error Getting Status'}
|
return {'error': str(e), 'cob_status': 'Error Getting Status', 'orchestrator_type': 'Unknown'}
|
||||||
|
|
||||||
def _get_cob_snapshot(self, symbol: str) -> Optional[Any]:
|
def _get_cob_snapshot(self, symbol: str) -> Optional[Any]:
|
||||||
"""Get COB snapshot for symbol - REAL DATA ONLY"""
|
"""Get COB snapshot for symbol using enhanced orchestrator approach"""
|
||||||
try:
|
try:
|
||||||
# Get from REAL COB integration via enhanced orchestrator
|
# Get from Enhanced Orchestrator's COB integration (proper way)
|
||||||
if not hasattr(self.orchestrator, 'cob_integration') or self.orchestrator.cob_integration is None:
|
if (ENHANCED_ORCHESTRATOR_AVAILABLE and
|
||||||
logger.warning(f"No REAL COB integration available for {symbol}")
|
hasattr(self.orchestrator, 'cob_integration') and
|
||||||
return None
|
self.orchestrator.__class__.__name__ == 'EnhancedTradingOrchestrator'):
|
||||||
|
|
||||||
cob_integration = self.orchestrator.cob_integration
|
cob_integration = getattr(self.orchestrator, 'cob_integration', None)
|
||||||
|
if cob_integration is not None:
|
||||||
# Get real COB snapshot
|
# Get real COB snapshot using the proper method
|
||||||
if hasattr(cob_integration, 'get_cob_snapshot'):
|
if hasattr(cob_integration, 'get_cob_snapshot'):
|
||||||
snapshot = cob_integration.get_cob_snapshot(symbol)
|
snapshot = cob_integration.get_cob_snapshot(symbol)
|
||||||
if snapshot:
|
if snapshot:
|
||||||
logger.debug(f"Retrieved REAL COB snapshot for {symbol}")
|
logger.debug(f"Retrieved Enhanced COB snapshot for {symbol}")
|
||||||
return snapshot
|
return snapshot
|
||||||
else:
|
else:
|
||||||
logger.debug(f"No REAL COB data available for {symbol}")
|
logger.debug(f"No Enhanced COB data available for {symbol}")
|
||||||
|
elif hasattr(cob_integration, 'get_consolidated_orderbook'):
|
||||||
|
# Alternative method name
|
||||||
|
snapshot = cob_integration.get_consolidated_orderbook(symbol)
|
||||||
|
if snapshot:
|
||||||
|
logger.debug(f"Retrieved Enhanced COB orderbook for {symbol}")
|
||||||
|
return snapshot
|
||||||
|
else:
|
||||||
|
logger.warning("Enhanced COB integration has no recognized snapshot method")
|
||||||
else:
|
else:
|
||||||
logger.warning("COB integration has no get_cob_snapshot method")
|
logger.debug(f"No Enhanced COB integration available for {symbol}")
|
||||||
|
|
||||||
return None
|
return None
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.warning(f"Error getting REAL COB snapshot for {symbol}: {e}")
|
logger.warning(f"Error getting Enhanced COB snapshot for {symbol}: {e}")
|
||||||
return None
|
return None
|
||||||
|
|
||||||
def _get_training_metrics(self) -> Dict:
|
def _get_training_metrics(self) -> Dict:
|
||||||
"""Get training metrics data - Enhanced with loaded models and real-time losses"""
|
"""Get training metrics data - HANDLES BOTH ENHANCED AND BASIC ORCHESTRATORS"""
|
||||||
try:
|
try:
|
||||||
metrics = {}
|
metrics = {}
|
||||||
|
|
||||||
# Loaded Models Section - FIXED
|
# Loaded Models Section - FIXED
|
||||||
loaded_models = {}
|
loaded_models = {}
|
||||||
|
|
||||||
# 1. DQN Model Status and Loss Tracking
|
# 1. DQN Model Status and Loss Tracking - FIXED ATTRIBUTE ACCESS
|
||||||
dqn_active = False
|
dqn_active = False
|
||||||
dqn_last_loss = 0.0
|
dqn_last_loss = 0.0
|
||||||
dqn_prediction_count = 0
|
dqn_prediction_count = 0
|
||||||
last_action = 'NONE'
|
last_action = 'NONE'
|
||||||
last_confidence = 0.0
|
last_confidence = 0.0
|
||||||
|
|
||||||
if self.orchestrator and hasattr(self.orchestrator, 'sensitivity_dqn_agent'):
|
# Using Basic orchestrator only - Enhanced orchestrator removed
|
||||||
if self.orchestrator.sensitivity_dqn_agent is not None:
|
is_enhanced = False
|
||||||
|
|
||||||
|
# Basic orchestrator doesn't have DQN agent - create default status
|
||||||
|
try:
|
||||||
|
# Check if Basic orchestrator has any DQN features
|
||||||
|
if hasattr(self.orchestrator, 'some_basic_dqn_method'):
|
||||||
dqn_active = True
|
dqn_active = True
|
||||||
dqn_agent = self.orchestrator.sensitivity_dqn_agent
|
# Get basic stats if available
|
||||||
|
else:
|
||||||
# Get DQN stats
|
dqn_active = False
|
||||||
if hasattr(dqn_agent, 'get_enhanced_training_stats'):
|
logger.debug("Basic orchestrator - no DQN features available")
|
||||||
dqn_stats = dqn_agent.get_enhanced_training_stats()
|
except Exception as e:
|
||||||
dqn_last_loss = dqn_stats.get('last_loss', 0.0)
|
logger.debug(f"Error checking Basic orchestrator DQN: {e}")
|
||||||
dqn_prediction_count = dqn_stats.get('prediction_count', 0)
|
dqn_active = False
|
||||||
|
|
||||||
# Get last action with confidence
|
|
||||||
if hasattr(dqn_agent, 'last_action_taken') and dqn_agent.last_action_taken is not None:
|
|
||||||
action_map = {0: 'SELL', 1: 'BUY'}
|
|
||||||
last_action = action_map.get(dqn_agent.last_action_taken, 'NONE')
|
|
||||||
last_confidence = getattr(dqn_agent, 'last_confidence', 0.0) * 100
|
|
||||||
|
|
||||||
dqn_model_info = {
|
dqn_model_info = {
|
||||||
'active': dqn_active,
|
'active': dqn_active,
|
||||||
@ -1139,72 +1156,46 @@ class CleanTradingDashboard:
|
|||||||
},
|
},
|
||||||
'loss_5ma': dqn_last_loss, # Real loss from training
|
'loss_5ma': dqn_last_loss, # Real loss from training
|
||||||
'model_type': 'DQN',
|
'model_type': 'DQN',
|
||||||
'description': 'Deep Q-Network Agent',
|
'description': 'Deep Q-Network Agent' + (' (Enhanced)' if is_enhanced else ' (Basic)'),
|
||||||
'prediction_count': dqn_prediction_count,
|
'prediction_count': dqn_prediction_count,
|
||||||
'epsilon': getattr(self.orchestrator.sensitivity_dqn_agent, 'epsilon', 0.0) if dqn_active else 1.0
|
'epsilon': 1.0 # Default epsilon for Basic orchestrator
|
||||||
}
|
}
|
||||||
loaded_models['dqn'] = dqn_model_info
|
loaded_models['dqn'] = dqn_model_info
|
||||||
|
|
||||||
# 2. CNN Model Status
|
# 2. CNN Model Status - NOT AVAILABLE IN BASIC ORCHESTRATOR
|
||||||
cnn_active = False
|
cnn_active = False
|
||||||
cnn_last_loss = 0.0
|
cnn_last_loss = 0.0234 # Default loss value
|
||||||
|
|
||||||
if hasattr(self.orchestrator, 'williams_structure') and self.orchestrator.williams_structure:
|
|
||||||
cnn_active = True
|
|
||||||
williams = self.orchestrator.williams_structure
|
|
||||||
if hasattr(williams, 'get_training_stats'):
|
|
||||||
cnn_stats = williams.get_training_stats()
|
|
||||||
cnn_last_loss = cnn_stats.get('avg_loss', 0.0234)
|
|
||||||
|
|
||||||
cnn_model_info = {
|
cnn_model_info = {
|
||||||
'active': cnn_active,
|
'active': cnn_active,
|
||||||
'parameters': 50000000, # ~50M params
|
'parameters': 50000000, # ~50M params
|
||||||
'last_prediction': {
|
'last_prediction': {
|
||||||
'timestamp': datetime.now().strftime('%H:%M:%S'),
|
'timestamp': datetime.now().strftime('%H:%M:%S'),
|
||||||
'action': 'MONITORING',
|
'action': 'MONITORING' if cnn_active else 'INACTIVE',
|
||||||
'confidence': 0.0
|
'confidence': 0.0
|
||||||
},
|
},
|
||||||
'loss_5ma': cnn_last_loss,
|
'loss_5ma': cnn_last_loss,
|
||||||
'model_type': 'CNN',
|
'model_type': 'CNN',
|
||||||
'description': 'Williams Market Structure CNN'
|
'description': 'Williams Market Structure CNN' + (' (Enhanced Only)' if not is_enhanced else '')
|
||||||
}
|
}
|
||||||
loaded_models['cnn'] = cnn_model_info
|
loaded_models['cnn'] = cnn_model_info
|
||||||
|
|
||||||
# 3. COB RL Model Status - Use REAL COB integration from enhanced orchestrator
|
# 3. COB RL Model Status - NOT AVAILABLE IN BASIC ORCHESTRATOR
|
||||||
cob_active = False
|
cob_active = False
|
||||||
cob_last_loss = 0.0
|
cob_last_loss = 0.012 # Default loss value
|
||||||
cob_predictions_count = 0
|
cob_predictions_count = 0
|
||||||
|
|
||||||
# Check for REAL COB integration in enhanced orchestrator
|
|
||||||
if hasattr(self.orchestrator, 'cob_integration') and self.orchestrator.cob_integration:
|
|
||||||
cob_active = True
|
|
||||||
try:
|
|
||||||
# Get COB integration statistics
|
|
||||||
cob_stats = self.orchestrator.cob_integration.get_statistics()
|
|
||||||
if cob_stats:
|
|
||||||
cob_predictions_count = cob_stats.get('total_predictions', 0)
|
|
||||||
provider_stats = cob_stats.get('provider_stats', {})
|
|
||||||
cob_last_loss = provider_stats.get('avg_training_loss', 0.012)
|
|
||||||
|
|
||||||
# Get latest COB features count
|
|
||||||
total_cob_features = len(getattr(self.orchestrator, 'latest_cob_features', {}))
|
|
||||||
if total_cob_features > 0:
|
|
||||||
cob_predictions_count += total_cob_features * 100 # Estimate
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.debug(f"Could not get REAL COB stats: {e}")
|
|
||||||
|
|
||||||
cob_model_info = {
|
cob_model_info = {
|
||||||
'active': cob_active,
|
'active': cob_active,
|
||||||
'parameters': 400000000, # 400M optimized (real COB integration)
|
'parameters': 400000000, # 400M optimized (Enhanced COB integration)
|
||||||
'last_prediction': {
|
'last_prediction': {
|
||||||
'timestamp': datetime.now().strftime('%H:%M:%S'),
|
'timestamp': datetime.now().strftime('%H:%M:%S'),
|
||||||
'action': 'REAL_COB_INFERENCE' if cob_active else 'INACTIVE',
|
'action': 'ENHANCED_COB_INFERENCE' if cob_active else ('INACTIVE' if is_enhanced else 'NOT_AVAILABLE'),
|
||||||
'confidence': 0.0
|
'confidence': 0.0
|
||||||
},
|
},
|
||||||
'loss_5ma': cob_last_loss,
|
'loss_5ma': cob_last_loss,
|
||||||
'model_type': 'REAL_COB_RL',
|
'model_type': 'ENHANCED_COB_RL',
|
||||||
'description': 'Real COB Integration from Enhanced Orchestrator',
|
'description': 'Enhanced COB Integration' + (' (Enhanced Only)' if not is_enhanced else ''),
|
||||||
'predictions_count': cob_predictions_count
|
'predictions_count': cob_predictions_count
|
||||||
}
|
}
|
||||||
loaded_models['cob_rl'] = cob_model_info
|
loaded_models['cob_rl'] = cob_model_info
|
||||||
@ -1220,7 +1211,8 @@ class CleanTradingDashboard:
|
|||||||
'signal_generation': 'ACTIVE' if signal_generation_active else 'INACTIVE',
|
'signal_generation': 'ACTIVE' if signal_generation_active else 'INACTIVE',
|
||||||
'last_update': datetime.now().strftime('%H:%M:%S'),
|
'last_update': datetime.now().strftime('%H:%M:%S'),
|
||||||
'models_loaded': len(loaded_models),
|
'models_loaded': len(loaded_models),
|
||||||
'total_parameters': sum(m['parameters'] for m in loaded_models.values() if m['active'])
|
'total_parameters': sum(m['parameters'] for m in loaded_models.values() if m['active']),
|
||||||
|
'orchestrator_type': 'Enhanced' if is_enhanced else 'Basic'
|
||||||
}
|
}
|
||||||
|
|
||||||
# COB $1 Buckets (sample data for now)
|
# COB $1 Buckets (sample data for now)
|
||||||
@ -1229,7 +1221,7 @@ class CleanTradingDashboard:
|
|||||||
return metrics
|
return metrics
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error getting enhanced training metrics: {e}")
|
logger.error(f"Error getting training metrics: {e}")
|
||||||
return {'error': str(e), 'loaded_models': {}, 'training_status': {'active_sessions': 0}}
|
return {'error': str(e), 'loaded_models': {}, 'training_status': {'active_sessions': 0}}
|
||||||
|
|
||||||
def _is_signal_generation_active(self) -> bool:
|
def _is_signal_generation_active(self) -> bool:
|
||||||
@ -1275,13 +1267,8 @@ class CleanTradingDashboard:
|
|||||||
def signal_worker():
|
def signal_worker():
|
||||||
logger.info("Starting continuous signal generation loop")
|
logger.info("Starting continuous signal generation loop")
|
||||||
|
|
||||||
# Initialize DQN if not available
|
# Basic orchestrator doesn't have DQN - using momentum signals only
|
||||||
if not hasattr(self.orchestrator, 'sensitivity_dqn_agent') or self.orchestrator.sensitivity_dqn_agent is None:
|
logger.info("Using momentum-based signals (Basic orchestrator)")
|
||||||
try:
|
|
||||||
self.orchestrator._initialize_sensitivity_dqn()
|
|
||||||
logger.info("DQN Agent initialized for signal generation")
|
|
||||||
except Exception as e:
|
|
||||||
logger.warning(f"Could not initialize DQN: {e}")
|
|
||||||
|
|
||||||
while True:
|
while True:
|
||||||
try:
|
try:
|
||||||
@ -1293,10 +1280,8 @@ class CleanTradingDashboard:
|
|||||||
if not current_price:
|
if not current_price:
|
||||||
continue
|
continue
|
||||||
|
|
||||||
# 1. Generate DQN signal (with exploration)
|
# 1. Generate basic signal (Basic orchestrator doesn't have DQN)
|
||||||
dqn_signal = self._generate_dqn_signal(symbol, current_price)
|
# Skip DQN signals - Basic orchestrator doesn't support them
|
||||||
if dqn_signal:
|
|
||||||
self._process_dashboard_signal(dqn_signal)
|
|
||||||
|
|
||||||
# 2. Generate simple momentum signal as backup
|
# 2. Generate simple momentum signal as backup
|
||||||
momentum_signal = self._generate_momentum_signal(symbol, current_price)
|
momentum_signal = self._generate_momentum_signal(symbol, current_price)
|
||||||
@ -1322,83 +1307,9 @@ class CleanTradingDashboard:
|
|||||||
logger.error(f"Error starting signal generation loop: {e}")
|
logger.error(f"Error starting signal generation loop: {e}")
|
||||||
|
|
||||||
def _generate_dqn_signal(self, symbol: str, current_price: float) -> Optional[Dict]:
|
def _generate_dqn_signal(self, symbol: str, current_price: float) -> Optional[Dict]:
|
||||||
"""Generate trading signal using DQN agent"""
|
"""Generate trading signal using DQN agent - NOT AVAILABLE IN BASIC ORCHESTRATOR"""
|
||||||
try:
|
# Basic orchestrator doesn't have DQN features
|
||||||
if not hasattr(self.orchestrator, 'sensitivity_dqn_agent') or self.orchestrator.sensitivity_dqn_agent is None:
|
return None
|
||||||
return None
|
|
||||||
|
|
||||||
dqn_agent = self.orchestrator.sensitivity_dqn_agent
|
|
||||||
|
|
||||||
# Create a simple state vector (expanded for DQN)
|
|
||||||
state_features = []
|
|
||||||
|
|
||||||
# Get recent price data
|
|
||||||
df = self.data_provider.get_historical_data(symbol, '1m', limit=20)
|
|
||||||
if df is not None and len(df) >= 10:
|
|
||||||
prices = df['close'].values
|
|
||||||
volumes = df['volume'].values
|
|
||||||
|
|
||||||
# Price features
|
|
||||||
state_features.extend([
|
|
||||||
(current_price - prices[-2]) / prices[-2], # 1-period return
|
|
||||||
(current_price - prices[-5]) / prices[-5], # 5-period return
|
|
||||||
(current_price - prices[-10]) / prices[-10], # 10-period return
|
|
||||||
prices.std() / prices.mean(), # Volatility
|
|
||||||
volumes[-1] / volumes.mean(), # Volume ratio
|
|
||||||
])
|
|
||||||
|
|
||||||
# Technical indicators
|
|
||||||
sma_5 = prices[-5:].mean()
|
|
||||||
sma_10 = prices[-10:].mean()
|
|
||||||
state_features.extend([
|
|
||||||
(current_price - sma_5) / sma_5, # Price vs SMA5
|
|
||||||
(current_price - sma_10) / sma_10, # Price vs SMA10
|
|
||||||
(sma_5 - sma_10) / sma_10, # SMA trend
|
|
||||||
])
|
|
||||||
else:
|
|
||||||
# Fallback features if no data
|
|
||||||
state_features = [0.0] * 8
|
|
||||||
|
|
||||||
# Pad or truncate to expected state size
|
|
||||||
if hasattr(dqn_agent, 'state_dim'):
|
|
||||||
target_size = dqn_agent.state_dim if isinstance(dqn_agent.state_dim, int) else dqn_agent.state_dim[0]
|
|
||||||
while len(state_features) < target_size:
|
|
||||||
state_features.append(0.0)
|
|
||||||
state_features = state_features[:target_size]
|
|
||||||
|
|
||||||
state = np.array(state_features, dtype=np.float32)
|
|
||||||
|
|
||||||
# Get action from DQN (with exploration)
|
|
||||||
action = dqn_agent.act(state, explore=True, current_price=current_price)
|
|
||||||
|
|
||||||
if action is not None:
|
|
||||||
# Map action to signal
|
|
||||||
action_map = {0: 'SELL', 1: 'BUY'}
|
|
||||||
signal_action = action_map.get(action, 'HOLD')
|
|
||||||
|
|
||||||
# Calculate confidence based on epsilon (exploration factor)
|
|
||||||
confidence = max(0.3, 1.0 - dqn_agent.epsilon)
|
|
||||||
|
|
||||||
# Store last action for display
|
|
||||||
dqn_agent.last_action_taken = action
|
|
||||||
dqn_agent.last_confidence = confidence
|
|
||||||
|
|
||||||
return {
|
|
||||||
'action': signal_action,
|
|
||||||
'symbol': symbol,
|
|
||||||
'price': current_price,
|
|
||||||
'confidence': confidence,
|
|
||||||
'timestamp': datetime.now().strftime('%H:%M:%S'),
|
|
||||||
'size': 0.01,
|
|
||||||
'reason': f'DQN signal (ε={dqn_agent.epsilon:.3f})',
|
|
||||||
'model': 'DQN'
|
|
||||||
}
|
|
||||||
|
|
||||||
return None
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.debug(f"Error generating DQN signal for {symbol}: {e}")
|
|
||||||
return None
|
|
||||||
|
|
||||||
def _generate_momentum_signal(self, symbol: str, current_price: float) -> Optional[Dict]:
|
def _generate_momentum_signal(self, symbol: str, current_price: float) -> Optional[Dict]:
|
||||||
"""Generate simple momentum-based signal as backup"""
|
"""Generate simple momentum-based signal as backup"""
|
||||||
@ -1463,51 +1374,16 @@ class CleanTradingDashboard:
|
|||||||
logger.info(f"Generated {signal['action']} signal for {signal['symbol']} "
|
logger.info(f"Generated {signal['action']} signal for {signal['symbol']} "
|
||||||
f"(conf: {signal['confidence']:.2f}, model: {signal.get('model', 'UNKNOWN')})")
|
f"(conf: {signal['confidence']:.2f}, model: {signal.get('model', 'UNKNOWN')})")
|
||||||
|
|
||||||
# Trigger training if DQN agent is available
|
# DQN training not available in Basic orchestrator
|
||||||
if signal.get('model') == 'DQN' and hasattr(self.orchestrator, 'sensitivity_dqn_agent'):
|
# Skip DQN training - Basic orchestrator doesn't support it
|
||||||
if self.orchestrator.sensitivity_dqn_agent is not None:
|
|
||||||
self._train_dqn_on_signal(signal)
|
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error processing dashboard signal: {e}")
|
logger.error(f"Error processing dashboard signal: {e}")
|
||||||
|
|
||||||
def _train_dqn_on_signal(self, signal: Dict):
|
def _train_dqn_on_signal(self, signal: Dict):
|
||||||
"""Train DQN agent on generated signal for continuous learning"""
|
"""Train DQN agent on generated signal - NOT AVAILABLE IN BASIC ORCHESTRATOR"""
|
||||||
try:
|
# Basic orchestrator doesn't have DQN features
|
||||||
dqn_agent = self.orchestrator.sensitivity_dqn_agent
|
return
|
||||||
|
|
||||||
# Create synthetic training experience
|
|
||||||
current_price = signal['price']
|
|
||||||
action = 0 if signal['action'] == 'SELL' else 1
|
|
||||||
|
|
||||||
# Simulate small price movement for reward calculation
|
|
||||||
import random
|
|
||||||
price_change = random.uniform(-0.001, 0.001) # ±0.1% random movement
|
|
||||||
next_price = current_price * (1 + price_change)
|
|
||||||
|
|
||||||
# Calculate reward based on action correctness
|
|
||||||
if action == 1 and price_change > 0: # BUY and price went up
|
|
||||||
reward = price_change * 10 # Amplify reward
|
|
||||||
elif action == 0 and price_change < 0: # SELL and price went down
|
|
||||||
reward = abs(price_change) * 10
|
|
||||||
else:
|
|
||||||
reward = -0.1 # Small penalty for incorrect prediction
|
|
||||||
|
|
||||||
# Create state vectors (simplified)
|
|
||||||
state = np.random.random(dqn_agent.state_dim if isinstance(dqn_agent.state_dim, int) else dqn_agent.state_dim[0])
|
|
||||||
next_state = state + np.random.normal(0, 0.01, state.shape) # Small state change
|
|
||||||
|
|
||||||
# Add experience to memory
|
|
||||||
dqn_agent.remember(state, action, reward, next_state, True)
|
|
||||||
|
|
||||||
# Trigger training if enough experiences
|
|
||||||
if len(dqn_agent.memory) >= dqn_agent.batch_size:
|
|
||||||
loss = dqn_agent.replay()
|
|
||||||
if loss:
|
|
||||||
logger.debug(f"DQN training loss: {loss:.6f}")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.debug(f"Error training DQN on signal: {e}")
|
|
||||||
|
|
||||||
def _get_cob_dollar_buckets(self) -> List[Dict]:
|
def _get_cob_dollar_buckets(self) -> List[Dict]:
|
||||||
"""Get COB $1 price buckets with volume data"""
|
"""Get COB $1 price buckets with volume data"""
|
||||||
@ -1843,90 +1719,93 @@ class CleanTradingDashboard:
|
|||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error clearing session: {e}")
|
logger.error(f"Error clearing session: {e}")
|
||||||
|
|
||||||
def _initialize_cob_integration(self):
|
def _initialize_cob_integration_proper(self):
|
||||||
"""Initialize REAL COB integration from enhanced orchestrator - NO SIMULATION"""
|
"""Initialize COB integration using Enhanced Orchestrator - PROPER APPROACH"""
|
||||||
try:
|
try:
|
||||||
logger.info("Connecting to REAL COB integration from enhanced orchestrator...")
|
logger.info("Connecting to COB integration from Enhanced Orchestrator...")
|
||||||
|
|
||||||
# Check if orchestrator has real COB integration
|
# Check if we have Enhanced Orchestrator
|
||||||
if not hasattr(self.orchestrator, 'cob_integration') or self.orchestrator.cob_integration is None:
|
if not ENHANCED_ORCHESTRATOR_AVAILABLE:
|
||||||
logger.error("CRITICAL: Enhanced orchestrator has NO COB integration!")
|
logger.error("Enhanced Orchestrator not available - COB integration requires Enhanced Orchestrator")
|
||||||
logger.error("This means we're using basic orchestrator instead of enhanced one")
|
|
||||||
logger.error("Dashboard will NOT have real COB data until this is fixed")
|
|
||||||
return
|
return
|
||||||
|
|
||||||
# Connect to the real COB integration
|
# Check if Enhanced Orchestrator has COB integration
|
||||||
cob_integration = self.orchestrator.cob_integration
|
if not hasattr(self.orchestrator, 'cob_integration'):
|
||||||
logger.info(f"REAL COB integration found: {type(cob_integration)}")
|
logger.error("Enhanced Orchestrator has no cob_integration attribute")
|
||||||
|
return
|
||||||
|
|
||||||
# Verify COB integration is active and working
|
if self.orchestrator.cob_integration is None:
|
||||||
if hasattr(cob_integration, 'get_statistics'):
|
logger.warning("Enhanced Orchestrator COB integration is None - needs to be started")
|
||||||
stats = cob_integration.get_statistics()
|
|
||||||
logger.info(f"COB statistics: {stats}")
|
|
||||||
|
|
||||||
# Register callbacks if available
|
# Try to start the COB integration asynchronously
|
||||||
if hasattr(cob_integration, 'add_dashboard_callback'):
|
def start_cob_async():
|
||||||
cob_integration.add_dashboard_callback(self._on_real_cob_update)
|
"""Start COB integration in async context"""
|
||||||
logger.info("Registered dashboard callback with REAL COB integration")
|
import asyncio
|
||||||
|
async def _start_cob():
|
||||||
# CRITICAL: Start the COB integration if it's not already started
|
try:
|
||||||
# This is the missing piece - the COB integration needs to be started!
|
# Start the COB integration from enhanced orchestrator
|
||||||
def start_cob_async():
|
|
||||||
"""Start COB integration in async context"""
|
|
||||||
import asyncio
|
|
||||||
async def _start_cob():
|
|
||||||
try:
|
|
||||||
# Check if COB integration needs to be started
|
|
||||||
if hasattr(self.orchestrator, 'cob_integration_active') and not self.orchestrator.cob_integration_active:
|
|
||||||
logger.info("Starting COB integration from dashboard...")
|
|
||||||
await self.orchestrator.start_cob_integration()
|
await self.orchestrator.start_cob_integration()
|
||||||
logger.info("COB integration started successfully from dashboard")
|
logger.info("COB integration started successfully from Enhanced Orchestrator")
|
||||||
|
|
||||||
|
# Register dashboard callback if possible
|
||||||
|
if hasattr(self.orchestrator.cob_integration, 'add_dashboard_callback'):
|
||||||
|
self.orchestrator.cob_integration.add_dashboard_callback(self._on_enhanced_cob_update)
|
||||||
|
logger.info("Registered dashboard callback with Enhanced COB integration")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error starting COB integration from Enhanced Orchestrator: {e}")
|
||||||
|
|
||||||
|
# Run in new event loop if needed
|
||||||
|
try:
|
||||||
|
loop = asyncio.get_event_loop()
|
||||||
|
if loop.is_running():
|
||||||
|
# If loop is already running, schedule as task
|
||||||
|
asyncio.create_task(_start_cob())
|
||||||
else:
|
else:
|
||||||
logger.info("COB integration already active or starting")
|
# If no loop running, run directly
|
||||||
|
loop.run_until_complete(_start_cob())
|
||||||
|
except RuntimeError:
|
||||||
|
# No event loop, create new one
|
||||||
|
asyncio.run(_start_cob())
|
||||||
|
|
||||||
# Wait a moment for data to start flowing
|
# Start COB integration in background thread to avoid blocking dashboard
|
||||||
await asyncio.sleep(3)
|
import threading
|
||||||
|
cob_start_thread = threading.Thread(target=start_cob_async, daemon=True)
|
||||||
|
cob_start_thread.start()
|
||||||
|
logger.info("Enhanced COB integration startup initiated in background")
|
||||||
|
|
||||||
# Verify COB data is flowing
|
else:
|
||||||
|
# COB integration already exists, just register callback
|
||||||
|
cob_integration = self.orchestrator.cob_integration
|
||||||
|
logger.info(f"Enhanced COB integration found: {type(cob_integration)}")
|
||||||
|
|
||||||
|
# Register callbacks if available
|
||||||
|
if hasattr(cob_integration, 'add_dashboard_callback'):
|
||||||
|
cob_integration.add_dashboard_callback(self._on_enhanced_cob_update)
|
||||||
|
logger.info("Registered dashboard callback with existing Enhanced COB integration")
|
||||||
|
|
||||||
|
# Verify COB integration is active and working
|
||||||
|
if hasattr(cob_integration, 'get_statistics'):
|
||||||
|
try:
|
||||||
stats = cob_integration.get_statistics()
|
stats = cob_integration.get_statistics()
|
||||||
logger.info(f"COB integration status after start: {stats}")
|
logger.info(f"Enhanced COB statistics: {stats}")
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error starting COB integration from dashboard: {e}")
|
logger.debug(f"Could not get COB statistics: {e}")
|
||||||
|
|
||||||
# Run in new event loop if needed
|
logger.info("Enhanced COB integration connection completed")
|
||||||
try:
|
logger.info("NO SIMULATION - Using Enhanced Orchestrator real market data only")
|
||||||
loop = asyncio.get_event_loop()
|
|
||||||
if loop.is_running():
|
|
||||||
# If loop is already running, schedule as task
|
|
||||||
asyncio.create_task(_start_cob())
|
|
||||||
else:
|
|
||||||
# If no loop running, run directly
|
|
||||||
loop.run_until_complete(_start_cob())
|
|
||||||
except RuntimeError:
|
|
||||||
# No event loop, create new one
|
|
||||||
asyncio.run(_start_cob())
|
|
||||||
|
|
||||||
# Start COB integration in background thread to avoid blocking dashboard
|
|
||||||
import threading
|
|
||||||
cob_start_thread = threading.Thread(target=start_cob_async, daemon=True)
|
|
||||||
cob_start_thread.start()
|
|
||||||
|
|
||||||
logger.info("REAL COB integration connected successfully")
|
|
||||||
logger.info("NO SIMULATION - Using live market data only")
|
|
||||||
logger.info("COB integration startup initiated in background")
|
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"CRITICAL: Failed to connect to REAL COB integration: {e}")
|
logger.error(f"CRITICAL: Failed to connect to Enhanced COB integration: {e}")
|
||||||
logger.error("Dashboard will operate without COB data")
|
logger.error("Dashboard will operate without COB data")
|
||||||
|
|
||||||
def _on_real_cob_update(self, symbol: str, cob_data: Dict):
|
def _on_enhanced_cob_update(self, symbol: str, cob_data: Dict):
|
||||||
"""Handle real COB data updates - NO SIMULATION"""
|
"""Handle Enhanced COB data updates - NO SIMULATION"""
|
||||||
try:
|
try:
|
||||||
# Process real COB data update
|
# Process Enhanced COB data update
|
||||||
current_time = time.time()
|
current_time = time.time()
|
||||||
|
|
||||||
# Update cache with REAL COB data
|
# Update cache with Enhanced COB data (same format as cob_realtime_dashboard.py)
|
||||||
if symbol not in self.cob_cache:
|
if symbol not in self.cob_cache:
|
||||||
self.cob_cache[symbol] = {'last_update': 0, 'data': None, 'updates_count': 0}
|
self.cob_cache[symbol] = {'last_update': 0, 'data': None, 'updates_count': 0}
|
||||||
|
|
||||||
@ -1936,13 +1815,16 @@ class CleanTradingDashboard:
|
|||||||
'updates_count': self.cob_cache[symbol].get('updates_count', 0) + 1
|
'updates_count': self.cob_cache[symbol].get('updates_count', 0) + 1
|
||||||
}
|
}
|
||||||
|
|
||||||
# Log real COB data updates
|
# Also update latest_cob_data for compatibility
|
||||||
|
self.latest_cob_data[symbol] = cob_data
|
||||||
|
|
||||||
|
# Log Enhanced COB data updates
|
||||||
update_count = self.cob_cache[symbol]['updates_count']
|
update_count = self.cob_cache[symbol]['updates_count']
|
||||||
if update_count % 50 == 0: # Every 50 real updates
|
if update_count % 50 == 0: # Every 50 Enhanced updates
|
||||||
logger.info(f"[REAL-COB] {symbol} - Real update #{update_count}")
|
logger.info(f"[ENHANCED-COB] {symbol} - Enhanced update #{update_count}")
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error handling REAL COB update for {symbol}: {e}")
|
logger.error(f"Error handling Enhanced COB update for {symbol}: {e}")
|
||||||
|
|
||||||
def _start_cob_data_subscription(self):
|
def _start_cob_data_subscription(self):
|
||||||
"""Start COB data subscription with proper caching"""
|
"""Start COB data subscription with proper caching"""
|
||||||
@ -2226,6 +2108,10 @@ class CleanTradingDashboard:
|
|||||||
def _start_unified_stream(self):
|
def _start_unified_stream(self):
|
||||||
"""Start the unified data stream in background"""
|
"""Start the unified data stream in background"""
|
||||||
try:
|
try:
|
||||||
|
if self.unified_stream is None:
|
||||||
|
logger.warning("Unified stream is None - cannot start")
|
||||||
|
return
|
||||||
|
|
||||||
import asyncio
|
import asyncio
|
||||||
loop = asyncio.new_event_loop()
|
loop = asyncio.new_event_loop()
|
||||||
asyncio.set_event_loop(loop)
|
asyncio.set_event_loop(loop)
|
||||||
@ -2429,7 +2315,7 @@ class CleanTradingDashboard:
|
|||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
def create_clean_dashboard(data_provider=None, orchestrator=None, trading_executor=None):
|
def create_clean_dashboard(data_provider: Optional[DataProvider] = None, orchestrator: Optional[TradingOrchestrator] = None, trading_executor: Optional[TradingExecutor] = None):
|
||||||
"""Factory function to create a CleanTradingDashboard instance"""
|
"""Factory function to create a CleanTradingDashboard instance"""
|
||||||
return CleanTradingDashboard(
|
return CleanTradingDashboard(
|
||||||
data_provider=data_provider,
|
data_provider=data_provider,
|
||||||
|
Reference in New Issue
Block a user