more training
This commit is contained in:
parent
97d348d517
commit
2ba0406b9f
234
DQN_SENSITIVITY_LEARNING_SUMMARY.md
Normal file
234
DQN_SENSITIVITY_LEARNING_SUMMARY.md
Normal file
@ -0,0 +1,234 @@
|
|||||||
|
# DQN RL-based Sensitivity Learning & 300s Data Preloading Summary
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
This document summarizes the implementation of DQN RL-based sensitivity learning and 300s data preloading features that make the trading system more adaptive and responsive.
|
||||||
|
|
||||||
|
## 🧠 DQN RL-based Sensitivity Learning
|
||||||
|
|
||||||
|
### Core Concept
|
||||||
|
The system now uses a Deep Q-Network (DQN) to learn optimal sensitivity levels for trading decisions based on market conditions and trade outcomes. After each completed trade, the system evaluates the performance and creates a learning case for the DQN agent.
|
||||||
|
|
||||||
|
### Implementation Details
|
||||||
|
|
||||||
|
#### 1. Sensitivity Levels (5 levels: 0-4)
|
||||||
|
```python
|
||||||
|
sensitivity_levels = {
|
||||||
|
0: {'name': 'very_conservative', 'open_threshold_multiplier': 1.5, 'close_threshold_multiplier': 2.0},
|
||||||
|
1: {'name': 'conservative', 'open_threshold_multiplier': 1.2, 'close_threshold_multiplier': 1.5},
|
||||||
|
2: {'name': 'medium', 'open_threshold_multiplier': 1.0, 'close_threshold_multiplier': 1.0},
|
||||||
|
3: {'name': 'aggressive', 'open_threshold_multiplier': 0.8, 'close_threshold_multiplier': 0.7},
|
||||||
|
4: {'name': 'very_aggressive', 'open_threshold_multiplier': 0.6, 'close_threshold_multiplier': 0.5}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 2. Trade Tracking System
|
||||||
|
- **Active Trades**: Tracks open positions with entry conditions
|
||||||
|
- **Completed Trades**: Records full trade lifecycle with outcomes
|
||||||
|
- **Learning Queue**: Stores DQN training cases from completed trades
|
||||||
|
|
||||||
|
#### 3. DQN State Vector (15 features)
|
||||||
|
- Market volatility (normalized)
|
||||||
|
- Price momentum (5-period)
|
||||||
|
- Volume ratio
|
||||||
|
- RSI indicator
|
||||||
|
- MACD signal
|
||||||
|
- Bollinger Band position
|
||||||
|
- Recent price changes (5 periods)
|
||||||
|
- Current sensitivity level
|
||||||
|
- Recent performance metrics (avg P&L, win rate, avg duration)
|
||||||
|
|
||||||
|
#### 4. Reward Calculation
|
||||||
|
```python
|
||||||
|
def _calculate_sensitivity_reward(self, completed_trade):
|
||||||
|
base_reward = pnl_pct * 10 # Scale P&L percentage
|
||||||
|
|
||||||
|
# Duration factor
|
||||||
|
if duration < 300: duration_factor = 0.8 # Too quick
|
||||||
|
elif duration < 1800: duration_factor = 1.2 # Good for scalping
|
||||||
|
elif duration < 3600: duration_factor = 1.0 # Acceptable
|
||||||
|
else: duration_factor = 0.7 # Too slow
|
||||||
|
|
||||||
|
# Confidence factor
|
||||||
|
conf_factor = (entry_conf + exit_conf) / 2 if profitable else exit_conf
|
||||||
|
|
||||||
|
final_reward = base_reward * duration_factor * conf_factor
|
||||||
|
return np.clip(final_reward, -2.0, 2.0)
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 5. Dynamic Threshold Adjustment
|
||||||
|
- **Opening Positions**: Higher thresholds (more conservative)
|
||||||
|
- **Closing Positions**: Lower thresholds (more sensitive to exit signals)
|
||||||
|
- **Real-time Adaptation**: DQN continuously adjusts sensitivity based on market conditions
|
||||||
|
|
||||||
|
### Files Modified
|
||||||
|
- `core/enhanced_orchestrator.py`: Added sensitivity learning methods
|
||||||
|
- `core/config.py`: Added `confidence_threshold_close` parameter
|
||||||
|
- `web/scalping_dashboard.py`: Added sensitivity info display
|
||||||
|
- `NN/models/dqn_agent.py`: Existing DQN agent used for sensitivity learning
|
||||||
|
|
||||||
|
## 📊 300s Data Preloading
|
||||||
|
|
||||||
|
### Core Concept
|
||||||
|
The system now preloads 300 seconds worth of data for all symbols and timeframes on first load, providing better initial performance and reducing latency for trading decisions.
|
||||||
|
|
||||||
|
### Implementation Details
|
||||||
|
|
||||||
|
#### 1. Smart Preloading Logic
|
||||||
|
```python
|
||||||
|
def _should_preload_data(self, symbol: str, timeframe: str, limit: int) -> bool:
|
||||||
|
# Check if we already have cached data
|
||||||
|
if cached_data exists and len(cached_data) > 0:
|
||||||
|
return False
|
||||||
|
|
||||||
|
# Calculate candles needed for 300s
|
||||||
|
timeframe_seconds = self.timeframe_seconds.get(timeframe, 60)
|
||||||
|
candles_in_300s = 300 // timeframe_seconds
|
||||||
|
|
||||||
|
# Preload if beneficial
|
||||||
|
return candles_in_300s > limit or timeframe in ['1s', '1m']
|
||||||
|
```
|
||||||
|
|
||||||
|
#### 2. Timeframe-Specific Limits
|
||||||
|
- **1s timeframe**: Max 300 candles (5 minutes)
|
||||||
|
- **1m timeframe**: Max 60 candles (1 hour)
|
||||||
|
- **Other timeframes**: Max 500 candles
|
||||||
|
- **Minimum**: Always at least 100 candles
|
||||||
|
|
||||||
|
#### 3. Preloading Process
|
||||||
|
1. Check if data already exists (cache or memory)
|
||||||
|
2. Calculate optimal number of candles for 300s
|
||||||
|
3. Fetch data from Binance API
|
||||||
|
4. Add technical indicators
|
||||||
|
5. Cache data for future use
|
||||||
|
6. Store in memory for immediate access
|
||||||
|
|
||||||
|
#### 4. Performance Benefits
|
||||||
|
- **Faster Initial Load**: Charts populate immediately
|
||||||
|
- **Reduced API Calls**: Bulk loading vs individual requests
|
||||||
|
- **Better User Experience**: No waiting for data on first load
|
||||||
|
- **Improved Trading Decisions**: More historical context available
|
||||||
|
|
||||||
|
### Files Modified
|
||||||
|
- `core/data_provider.py`: Added preloading methods
|
||||||
|
- `web/scalping_dashboard.py`: Integrated preloading in initialization
|
||||||
|
|
||||||
|
## 🎨 Enhanced Dashboard Features
|
||||||
|
|
||||||
|
### 1. Color-Coded Position Display
|
||||||
|
- **LONG positions**: Green text with `[LONG]` prefix
|
||||||
|
- **SHORT positions**: Red text with `[SHORT]` prefix
|
||||||
|
- **Format**: `[SIDE] size @ $entry_price | P&L: $unrealized_pnl`
|
||||||
|
|
||||||
|
### 2. Enhanced Model Training Status
|
||||||
|
Now displays three columns:
|
||||||
|
- **RL Training**: Queue size, win rate, actions
|
||||||
|
- **CNN Training**: Perfect moves, confidence, retrospective learning
|
||||||
|
- **DQN Sensitivity**: Current level, completed trades, learning queue, thresholds
|
||||||
|
|
||||||
|
### 3. Sensitivity Learning Info
|
||||||
|
```python
|
||||||
|
{
|
||||||
|
'level_name': 'MEDIUM', # Current sensitivity level
|
||||||
|
'completed_trades': 15, # Number of completed trades
|
||||||
|
'learning_queue_size': 8, # DQN training queue size
|
||||||
|
'open_threshold': 0.600, # Current opening threshold
|
||||||
|
'close_threshold': 0.250 # Current closing threshold
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🧪 Testing & Verification
|
||||||
|
|
||||||
|
### Test Script: `test_sensitivity_learning.py`
|
||||||
|
Comprehensive test suite covering:
|
||||||
|
1. **300s Data Preloading**: Verifies preloading functionality
|
||||||
|
2. **Sensitivity Learning Initialization**: Checks system setup
|
||||||
|
3. **Trading Scenario Simulation**: Tests learning case creation
|
||||||
|
4. **Threshold Adjustment**: Verifies dynamic threshold changes
|
||||||
|
5. **Dashboard Integration**: Tests UI components
|
||||||
|
6. **DQN Training Simulation**: Verifies neural network training
|
||||||
|
|
||||||
|
### Running Tests
|
||||||
|
```bash
|
||||||
|
python test_sensitivity_learning.py
|
||||||
|
```
|
||||||
|
|
||||||
|
Expected output:
|
||||||
|
```
|
||||||
|
🎯 SENSITIVITY LEARNING SYSTEM READY!
|
||||||
|
Features verified:
|
||||||
|
✅ DQN RL-based sensitivity learning from completed trades
|
||||||
|
✅ 300s data preloading for faster initial performance
|
||||||
|
✅ Dynamic threshold adjustment (lower for closing positions)
|
||||||
|
✅ Color-coded position display ([LONG] green, [SHORT] red)
|
||||||
|
✅ Enhanced model training status with sensitivity info
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🚀 Usage Instructions
|
||||||
|
|
||||||
|
### 1. Start the Enhanced Dashboard
|
||||||
|
```bash
|
||||||
|
python run_enhanced_scalping_dashboard.py
|
||||||
|
```
|
||||||
|
|
||||||
|
### 2. Monitor Sensitivity Learning
|
||||||
|
- Watch the "DQN Sensitivity" section in the dashboard
|
||||||
|
- Observe threshold adjustments as trades complete
|
||||||
|
- Monitor learning queue size for training activity
|
||||||
|
|
||||||
|
### 3. Verify Data Preloading
|
||||||
|
- Check console logs for preloading status
|
||||||
|
- Observe faster initial chart population
|
||||||
|
- Monitor reduced API call frequency
|
||||||
|
|
||||||
|
## 📈 Expected Benefits
|
||||||
|
|
||||||
|
### 1. Improved Trading Performance
|
||||||
|
- **Adaptive Sensitivity**: System learns optimal aggressiveness levels
|
||||||
|
- **Better Exit Timing**: Lower thresholds for closing positions
|
||||||
|
- **Market-Aware Decisions**: Sensitivity adjusts to market conditions
|
||||||
|
|
||||||
|
### 2. Enhanced User Experience
|
||||||
|
- **Faster Startup**: 300s preloading reduces initial wait time
|
||||||
|
- **Visual Clarity**: Color-coded positions improve readability
|
||||||
|
- **Better Monitoring**: Enhanced status displays provide more insight
|
||||||
|
|
||||||
|
### 3. System Intelligence
|
||||||
|
- **Continuous Learning**: DQN improves over time
|
||||||
|
- **Retrospective Analysis**: Perfect opportunity detection
|
||||||
|
- **Performance Optimization**: Automatic threshold tuning
|
||||||
|
|
||||||
|
## 🔧 Configuration
|
||||||
|
|
||||||
|
### Key Parameters
|
||||||
|
```yaml
|
||||||
|
orchestrator:
|
||||||
|
confidence_threshold: 0.5 # Base opening threshold
|
||||||
|
confidence_threshold_close: 0.25 # Base closing threshold (much lower)
|
||||||
|
|
||||||
|
sensitivity_learning:
|
||||||
|
enabled: true
|
||||||
|
state_size: 15
|
||||||
|
action_space: 5
|
||||||
|
learning_rate: 0.001
|
||||||
|
gamma: 0.95
|
||||||
|
epsilon: 0.3
|
||||||
|
batch_size: 32
|
||||||
|
```
|
||||||
|
|
||||||
|
## 📝 Next Steps
|
||||||
|
|
||||||
|
1. **Monitor Performance**: Track sensitivity learning effectiveness
|
||||||
|
2. **Tune Parameters**: Adjust DQN hyperparameters based on results
|
||||||
|
3. **Expand Features**: Add more market indicators to state vector
|
||||||
|
4. **Optimize Preloading**: Fine-tune preloading amounts per timeframe
|
||||||
|
5. **Add Persistence**: Save/load DQN models between sessions
|
||||||
|
|
||||||
|
## 🎯 Success Metrics
|
||||||
|
|
||||||
|
- **Sensitivity Adaptation**: DQN successfully adjusts sensitivity levels
|
||||||
|
- **Improved Win Rate**: Better trade outcomes through learned sensitivity
|
||||||
|
- **Faster Startup**: <5 seconds for full data preloading
|
||||||
|
- **Reduced Latency**: Immediate chart updates on dashboard load
|
||||||
|
- **User Satisfaction**: Clear visual feedback and status information
|
||||||
|
|
||||||
|
The system now provides intelligent, adaptive trading with enhanced user experience and faster performance!
|
214
ENHANCED_IMPROVEMENTS_SUMMARY.md
Normal file
214
ENHANCED_IMPROVEMENTS_SUMMARY.md
Normal file
@ -0,0 +1,214 @@
|
|||||||
|
# Enhanced Trading System Improvements Summary
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
This document summarizes the major improvements made to the trading system to address:
|
||||||
|
1. Color-coded position display
|
||||||
|
2. Enhanced model training detection and retrospective learning
|
||||||
|
3. Lower confidence thresholds for closing positions
|
||||||
|
|
||||||
|
## 🎨 Color-Coded Position Display
|
||||||
|
|
||||||
|
### Implementation
|
||||||
|
- **File**: `web/scalping_dashboard.py`
|
||||||
|
- **Location**: Dashboard callback function (lines ~720-750)
|
||||||
|
|
||||||
|
### Features
|
||||||
|
- **LONG positions**: Display in green (`text-success` class) with `[LONG]` prefix
|
||||||
|
- **SHORT positions**: Display in red (`text-danger` class) with `[SHORT]` prefix
|
||||||
|
- **Real-time P&L**: Shows unrealized profit/loss for each position
|
||||||
|
- **Format**: `[SIDE] size @ $entry_price | P&L: $unrealized_pnl`
|
||||||
|
|
||||||
|
### Example Display
|
||||||
|
```
|
||||||
|
[LONG] 0.100 @ $2558.15 | P&L: +$0.72 (Green text)
|
||||||
|
[SHORT] 0.050 @ $45123.45 | P&L: -$3.66 (Red text)
|
||||||
|
```
|
||||||
|
|
||||||
|
### Layout Changes
|
||||||
|
- Increased open-positions column from `col-md-2` to `col-md-3` for better display
|
||||||
|
- Adjusted other columns to maintain layout balance
|
||||||
|
|
||||||
|
## 🧠 Enhanced Model Training Detection
|
||||||
|
|
||||||
|
### CNN Training Status
|
||||||
|
- **File**: `web/scalping_dashboard.py` - `_create_model_training_status()`
|
||||||
|
- **Features**:
|
||||||
|
- Active/Idle status indicators
|
||||||
|
- Perfect moves count tracking
|
||||||
|
- Retrospective learning status
|
||||||
|
- Color-coded status (green for active, yellow for idle)
|
||||||
|
|
||||||
|
### Training Events Log
|
||||||
|
- **File**: `web/scalping_dashboard.py` - `_create_training_events_log()`
|
||||||
|
- **Features**:
|
||||||
|
- Real-time perfect opportunity detection
|
||||||
|
- Confidence adjustment recommendations
|
||||||
|
- Pattern detection events
|
||||||
|
- Priority-based event sorting
|
||||||
|
- Detailed outcome percentages
|
||||||
|
|
||||||
|
### Event Types
|
||||||
|
- 🧠 **CNN**: Perfect move detection with outcome percentages
|
||||||
|
- 🤖 **RL**: Experience replay and queue activity
|
||||||
|
- ⚙️ **TUNE**: Confidence threshold adjustments
|
||||||
|
- ⚡ **TICK**: Violent move pattern detection
|
||||||
|
|
||||||
|
## 📊 Retrospective Learning System
|
||||||
|
|
||||||
|
### Core Implementation
|
||||||
|
- **File**: `core/enhanced_orchestrator.py`
|
||||||
|
- **Key Methods**:
|
||||||
|
- `trigger_retrospective_learning()`: Main analysis trigger
|
||||||
|
- `_analyze_missed_opportunities()`: Scans for perfect opportunities
|
||||||
|
- `_adjust_confidence_thresholds()`: Dynamic threshold adjustment
|
||||||
|
|
||||||
|
### Perfect Opportunity Detection
|
||||||
|
- **Criteria**: Price movements >1% in 5 minutes
|
||||||
|
- **Learning**: Creates `PerfectMove` objects for training
|
||||||
|
- **Frequency**: Analysis every 5 minutes to avoid overload
|
||||||
|
- **Adaptive**: Adjusts thresholds based on recent performance
|
||||||
|
|
||||||
|
### Violent Move Detection
|
||||||
|
- **Raw Ticks**: Detects price changes >0.1% in <50ms
|
||||||
|
- **1s Bars**: Identifies significant bar ranges >0.2%
|
||||||
|
- **Patterns**: Analyzes rapid_fire, volume_spike, price_acceleration
|
||||||
|
- **Immediate Learning**: Creates perfect moves in real-time
|
||||||
|
|
||||||
|
## ⚖️ Dual Confidence Thresholds
|
||||||
|
|
||||||
|
### Configuration
|
||||||
|
- **File**: `core/config.py`
|
||||||
|
- **Opening Threshold**: 0.5 (default) - Higher bar for new positions
|
||||||
|
- **Closing Threshold**: 0.25 (default) - Much lower for position exits
|
||||||
|
|
||||||
|
### Implementation
|
||||||
|
- **File**: `core/enhanced_orchestrator.py`
|
||||||
|
- **Method**: `_make_coordinated_decision()`
|
||||||
|
- **Logic**:
|
||||||
|
- Determines if action is opening or closing via `_is_closing_action()`
|
||||||
|
- Applies appropriate threshold based on action type
|
||||||
|
- Tracks positions internally for accurate classification
|
||||||
|
|
||||||
|
### Position Tracking
|
||||||
|
- **Internal State**: `self.open_positions` tracks current positions
|
||||||
|
- **Updates**: Automatically updated on each trading action
|
||||||
|
- **Logic**:
|
||||||
|
- BUY closes SHORT, opens LONG
|
||||||
|
- SELL closes LONG, opens SHORT
|
||||||
|
|
||||||
|
### Benefits
|
||||||
|
- **Faster Exits**: Lower threshold allows quicker position closure
|
||||||
|
- **Risk Management**: Easier to exit losing positions
|
||||||
|
- **Scalping Optimized**: Better for high-frequency trading
|
||||||
|
|
||||||
|
## 🔄 Background Processing
|
||||||
|
|
||||||
|
### Orchestrator Loop
|
||||||
|
- **File**: `web/scalping_dashboard.py` - `_start_orchestrator_trading()`
|
||||||
|
- **Features**:
|
||||||
|
- Automatic retrospective learning triggers
|
||||||
|
- 30-second decision cycles
|
||||||
|
- Error handling and recovery
|
||||||
|
- Background thread execution
|
||||||
|
|
||||||
|
### Data Processing
|
||||||
|
- **Raw Tick Handler**: `_handle_raw_tick()` - Processes violent moves
|
||||||
|
- **OHLCV Bar Handler**: `_handle_ohlcv_bar()` - Analyzes bar patterns
|
||||||
|
- **Pattern Weights**: Configurable weights for different pattern types
|
||||||
|
|
||||||
|
## 📈 Enhanced Metrics
|
||||||
|
|
||||||
|
### Performance Tracking
|
||||||
|
- **File**: `core/enhanced_orchestrator.py` - `get_performance_metrics()`
|
||||||
|
- **New Metrics**:
|
||||||
|
- Retrospective learning status
|
||||||
|
- Pattern detection counts
|
||||||
|
- Position tracking information
|
||||||
|
- Dual threshold configuration
|
||||||
|
- Average confidence needed
|
||||||
|
|
||||||
|
### Dashboard Integration
|
||||||
|
- **Real-time Updates**: All metrics update in real-time
|
||||||
|
- **Visual Indicators**: Color-coded status for quick assessment
|
||||||
|
- **Detailed Logs**: Comprehensive event logging with priorities
|
||||||
|
|
||||||
|
## 🧪 Testing
|
||||||
|
|
||||||
|
### Test Script
|
||||||
|
- **File**: `test_enhanced_improvements.py`
|
||||||
|
- **Coverage**:
|
||||||
|
- Color-coded position display
|
||||||
|
- Confidence threshold logic
|
||||||
|
- Retrospective learning
|
||||||
|
- Tick pattern detection
|
||||||
|
- Dashboard integration
|
||||||
|
|
||||||
|
### Verification
|
||||||
|
Run the test script to verify all improvements:
|
||||||
|
```bash
|
||||||
|
python test_enhanced_improvements.py
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🚀 Key Benefits
|
||||||
|
|
||||||
|
### For Traders
|
||||||
|
1. **Visual Clarity**: Instant position identification with color coding
|
||||||
|
2. **Faster Exits**: Lower closing thresholds for better risk management
|
||||||
|
3. **Learning System**: Continuous improvement from missed opportunities
|
||||||
|
4. **Real-time Feedback**: Live model training status and events
|
||||||
|
|
||||||
|
### For System Performance
|
||||||
|
1. **Adaptive Thresholds**: Self-adjusting based on market conditions
|
||||||
|
2. **Pattern Recognition**: Enhanced detection of violent moves
|
||||||
|
3. **Retrospective Analysis**: Learning from historical perfect opportunities
|
||||||
|
4. **Optimized Scalping**: Tailored for high-frequency trading
|
||||||
|
|
||||||
|
## 📋 Configuration
|
||||||
|
|
||||||
|
### Key Settings
|
||||||
|
```yaml
|
||||||
|
orchestrator:
|
||||||
|
confidence_threshold: 0.5 # Opening positions
|
||||||
|
confidence_threshold_close: 0.25 # Closing positions (much lower)
|
||||||
|
decision_frequency: 60
|
||||||
|
```
|
||||||
|
|
||||||
|
### Pattern Weights
|
||||||
|
```python
|
||||||
|
pattern_weights = {
|
||||||
|
'rapid_fire': 1.5,
|
||||||
|
'volume_spike': 1.3,
|
||||||
|
'price_acceleration': 1.4,
|
||||||
|
'high_frequency_bar': 1.2,
|
||||||
|
'volume_concentration': 1.1
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## 🔧 Technical Implementation
|
||||||
|
|
||||||
|
### Files Modified
|
||||||
|
1. `web/scalping_dashboard.py` - Color-coded positions, enhanced training status
|
||||||
|
2. `core/enhanced_orchestrator.py` - Dual thresholds, retrospective learning
|
||||||
|
3. `core/config.py` - New configuration parameters
|
||||||
|
4. `test_enhanced_improvements.py` - Comprehensive testing
|
||||||
|
|
||||||
|
### Dependencies
|
||||||
|
- No new dependencies required
|
||||||
|
- Uses existing Dash, NumPy, and Pandas libraries
|
||||||
|
- Maintains backward compatibility
|
||||||
|
|
||||||
|
## 🎯 Results
|
||||||
|
|
||||||
|
### Expected Improvements
|
||||||
|
1. **Better Position Management**: Clear visual feedback on position status
|
||||||
|
2. **Improved Model Performance**: Continuous learning from perfect opportunities
|
||||||
|
3. **Faster Risk Response**: Lower thresholds for position exits
|
||||||
|
4. **Enhanced Monitoring**: Real-time training status and event logging
|
||||||
|
|
||||||
|
### Performance Metrics
|
||||||
|
- **Opening Threshold**: 0.5 (conservative for new positions)
|
||||||
|
- **Closing Threshold**: 0.25 (aggressive for exits)
|
||||||
|
- **Learning Frequency**: Every 5 minutes
|
||||||
|
- **Pattern Detection**: Real-time on violent moves
|
||||||
|
|
||||||
|
This comprehensive enhancement package addresses all requested improvements while maintaining system stability and performance.
|
@ -137,9 +137,16 @@ class DataProvider:
|
|||||||
logger.info(f"Using cached data for {symbol} {timeframe}")
|
logger.info(f"Using cached data for {symbol} {timeframe}")
|
||||||
return cached_data.tail(limit)
|
return cached_data.tail(limit)
|
||||||
|
|
||||||
# Fetch from API
|
# Check if we need to preload 300s of data for first load
|
||||||
logger.info(f"Fetching historical data for {symbol} {timeframe}")
|
should_preload = self._should_preload_data(symbol, timeframe, limit)
|
||||||
df = self._fetch_from_binance(symbol, timeframe, limit)
|
|
||||||
|
if should_preload:
|
||||||
|
logger.info(f"Preloading 300s of data for {symbol} {timeframe}")
|
||||||
|
df = self._preload_300s_data(symbol, timeframe)
|
||||||
|
else:
|
||||||
|
# Fetch from API with requested limit
|
||||||
|
logger.info(f"Fetching historical data for {symbol} {timeframe}")
|
||||||
|
df = self._fetch_from_binance(symbol, timeframe, limit)
|
||||||
|
|
||||||
if df is not None and not df.empty:
|
if df is not None and not df.empty:
|
||||||
# Add technical indicators
|
# Add technical indicators
|
||||||
@ -154,7 +161,8 @@ class DataProvider:
|
|||||||
self.historical_data[symbol] = {}
|
self.historical_data[symbol] = {}
|
||||||
self.historical_data[symbol][timeframe] = df
|
self.historical_data[symbol][timeframe] = df
|
||||||
|
|
||||||
return df
|
# Return requested amount
|
||||||
|
return df.tail(limit)
|
||||||
|
|
||||||
logger.warning(f"No data received for {symbol} {timeframe}")
|
logger.warning(f"No data received for {symbol} {timeframe}")
|
||||||
return None
|
return None
|
||||||
@ -163,6 +171,124 @@ class DataProvider:
|
|||||||
logger.error(f"Error fetching historical data for {symbol} {timeframe}: {e}")
|
logger.error(f"Error fetching historical data for {symbol} {timeframe}: {e}")
|
||||||
return None
|
return None
|
||||||
|
|
||||||
|
def _should_preload_data(self, symbol: str, timeframe: str, limit: int) -> bool:
|
||||||
|
"""Determine if we should preload 300s of data"""
|
||||||
|
try:
|
||||||
|
# Check if we have any cached data
|
||||||
|
if self.cache_enabled:
|
||||||
|
cached_data = self._load_from_cache(symbol, timeframe)
|
||||||
|
if cached_data is not None and len(cached_data) > 0:
|
||||||
|
return False # Already have some data
|
||||||
|
|
||||||
|
# Check if we have data in memory
|
||||||
|
if (symbol in self.historical_data and
|
||||||
|
timeframe in self.historical_data[symbol] and
|
||||||
|
len(self.historical_data[symbol][timeframe]) > 0):
|
||||||
|
return False # Already have data in memory
|
||||||
|
|
||||||
|
# Calculate if 300s worth of data would be more than requested limit
|
||||||
|
timeframe_seconds = self.timeframe_seconds.get(timeframe, 60)
|
||||||
|
candles_in_300s = 300 // timeframe_seconds
|
||||||
|
|
||||||
|
# Preload if we need more than the requested limit or if it's a short timeframe
|
||||||
|
if candles_in_300s > limit or timeframe in ['1s', '1m']:
|
||||||
|
return True
|
||||||
|
|
||||||
|
return False
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error determining if should preload data: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def _preload_300s_data(self, symbol: str, timeframe: str) -> Optional[pd.DataFrame]:
|
||||||
|
"""Preload 300 seconds worth of data for better initial performance"""
|
||||||
|
try:
|
||||||
|
# Calculate how many candles we need for 300 seconds
|
||||||
|
timeframe_seconds = self.timeframe_seconds.get(timeframe, 60)
|
||||||
|
candles_needed = max(300 // timeframe_seconds, 100) # At least 100 candles
|
||||||
|
|
||||||
|
# For very short timeframes, limit to reasonable amount
|
||||||
|
if timeframe == '1s':
|
||||||
|
candles_needed = min(candles_needed, 300) # Max 300 1s candles
|
||||||
|
elif timeframe == '1m':
|
||||||
|
candles_needed = min(candles_needed, 60) # Max 60 1m candles (1 hour)
|
||||||
|
else:
|
||||||
|
candles_needed = min(candles_needed, 500) # Max 500 candles for other timeframes
|
||||||
|
|
||||||
|
logger.info(f"Preloading {candles_needed} candles for {symbol} {timeframe} (300s worth)")
|
||||||
|
|
||||||
|
# Fetch the data
|
||||||
|
df = self._fetch_from_binance(symbol, timeframe, candles_needed)
|
||||||
|
|
||||||
|
if df is not None and not df.empty:
|
||||||
|
logger.info(f"Successfully preloaded {len(df)} candles for {symbol} {timeframe}")
|
||||||
|
return df
|
||||||
|
else:
|
||||||
|
logger.warning(f"Failed to preload data for {symbol} {timeframe}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error preloading 300s data for {symbol} {timeframe}: {e}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
def preload_all_symbols_data(self, timeframes: List[str] = None) -> Dict[str, Dict[str, bool]]:
|
||||||
|
"""Preload 300s of data for all symbols and timeframes"""
|
||||||
|
try:
|
||||||
|
if timeframes is None:
|
||||||
|
timeframes = self.timeframes
|
||||||
|
|
||||||
|
preload_results = {}
|
||||||
|
|
||||||
|
for symbol in self.symbols:
|
||||||
|
preload_results[symbol] = {}
|
||||||
|
|
||||||
|
for timeframe in timeframes:
|
||||||
|
try:
|
||||||
|
logger.info(f"Preloading data for {symbol} {timeframe}")
|
||||||
|
|
||||||
|
# Check if we should preload
|
||||||
|
if self._should_preload_data(symbol, timeframe, 100):
|
||||||
|
df = self._preload_300s_data(symbol, timeframe)
|
||||||
|
|
||||||
|
if df is not None and not df.empty:
|
||||||
|
# Add technical indicators
|
||||||
|
df = self._add_technical_indicators(df)
|
||||||
|
|
||||||
|
# Cache the data
|
||||||
|
if self.cache_enabled:
|
||||||
|
self._save_to_cache(df, symbol, timeframe)
|
||||||
|
|
||||||
|
# Store in memory
|
||||||
|
if symbol not in self.historical_data:
|
||||||
|
self.historical_data[symbol] = {}
|
||||||
|
self.historical_data[symbol][timeframe] = df
|
||||||
|
|
||||||
|
preload_results[symbol][timeframe] = True
|
||||||
|
logger.info(f"✅ Preloaded {len(df)} candles for {symbol} {timeframe}")
|
||||||
|
else:
|
||||||
|
preload_results[symbol][timeframe] = False
|
||||||
|
logger.warning(f"❌ Failed to preload {symbol} {timeframe}")
|
||||||
|
else:
|
||||||
|
preload_results[symbol][timeframe] = True # Already have data
|
||||||
|
logger.info(f"⏭️ Skipped preloading {symbol} {timeframe} (already have data)")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error preloading {symbol} {timeframe}: {e}")
|
||||||
|
preload_results[symbol][timeframe] = False
|
||||||
|
|
||||||
|
# Log summary
|
||||||
|
total_pairs = len(self.symbols) * len(timeframes)
|
||||||
|
successful_pairs = sum(1 for symbol_results in preload_results.values()
|
||||||
|
for success in symbol_results.values() if success)
|
||||||
|
|
||||||
|
logger.info(f"Preloading completed: {successful_pairs}/{total_pairs} symbol-timeframe pairs loaded")
|
||||||
|
|
||||||
|
return preload_results
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error in preload_all_symbols_data: {e}")
|
||||||
|
return {}
|
||||||
|
|
||||||
def _fetch_from_binance(self, symbol: str, timeframe: str, limit: int) -> Optional[pd.DataFrame]:
|
def _fetch_from_binance(self, symbol: str, timeframe: str, limit: int) -> Optional[pd.DataFrame]:
|
||||||
"""Fetch data from Binance API"""
|
"""Fetch data from Binance API"""
|
||||||
try:
|
try:
|
||||||
|
@ -117,6 +117,25 @@ class EnhancedTradingOrchestrator:
|
|||||||
self.confidence_threshold_close = self.config.orchestrator.get('confidence_threshold_close', 0.25) # Much lower for closing
|
self.confidence_threshold_close = self.config.orchestrator.get('confidence_threshold_close', 0.25) # Much lower for closing
|
||||||
self.decision_frequency = self.config.orchestrator.get('decision_frequency', 30)
|
self.decision_frequency = self.config.orchestrator.get('decision_frequency', 30)
|
||||||
|
|
||||||
|
# DQN RL-based sensitivity learning parameters
|
||||||
|
self.sensitivity_learning_enabled = True
|
||||||
|
self.sensitivity_dqn_agent = None # Will be initialized when first DQN model is available
|
||||||
|
self.sensitivity_state_size = 20 # Features for sensitivity learning
|
||||||
|
self.sensitivity_action_space = 5 # 5 sensitivity levels: very_low, low, medium, high, very_high
|
||||||
|
self.current_sensitivity_level = 2 # Start with medium (index 2)
|
||||||
|
self.sensitivity_levels = {
|
||||||
|
0: {'name': 'very_low', 'close_threshold_multiplier': 0.5, 'open_threshold_multiplier': 1.2},
|
||||||
|
1: {'name': 'low', 'close_threshold_multiplier': 0.7, 'open_threshold_multiplier': 1.1},
|
||||||
|
2: {'name': 'medium', 'close_threshold_multiplier': 1.0, 'open_threshold_multiplier': 1.0},
|
||||||
|
3: {'name': 'high', 'close_threshold_multiplier': 1.3, 'open_threshold_multiplier': 0.9},
|
||||||
|
4: {'name': 'very_high', 'close_threshold_multiplier': 1.5, 'open_threshold_multiplier': 0.8}
|
||||||
|
}
|
||||||
|
|
||||||
|
# Trade tracking for sensitivity learning
|
||||||
|
self.active_trades = {} # symbol -> trade_info with entry details
|
||||||
|
self.completed_trades = deque(maxlen=1000) # Store last 1000 completed trades for learning
|
||||||
|
self.sensitivity_learning_queue = deque(maxlen=500) # Queue for DQN training
|
||||||
|
|
||||||
# Enhanced weighting system
|
# Enhanced weighting system
|
||||||
self.timeframe_weights = self._initialize_timeframe_weights()
|
self.timeframe_weights = self._initialize_timeframe_weights()
|
||||||
self.symbol_correlation_matrix = self._initialize_correlation_matrix()
|
self.symbol_correlation_matrix = self._initialize_correlation_matrix()
|
||||||
@ -172,6 +191,7 @@ class EnhancedTradingOrchestrator:
|
|||||||
logger.info("Real-time tick processor integrated for ultra-low latency processing")
|
logger.info("Real-time tick processor integrated for ultra-low latency processing")
|
||||||
logger.info("Raw tick and OHLCV bar processing enabled for pattern detection")
|
logger.info("Raw tick and OHLCV bar processing enabled for pattern detection")
|
||||||
logger.info("Enhanced retrospective learning enabled for perfect opportunity detection")
|
logger.info("Enhanced retrospective learning enabled for perfect opportunity detection")
|
||||||
|
logger.info("DQN RL-based sensitivity learning enabled for adaptive thresholds")
|
||||||
|
|
||||||
def _initialize_timeframe_weights(self) -> Dict[str, float]:
|
def _initialize_timeframe_weights(self) -> Dict[str, float]:
|
||||||
"""Initialize weights for different timeframes"""
|
"""Initialize weights for different timeframes"""
|
||||||
@ -640,8 +660,10 @@ class EnhancedTradingOrchestrator:
|
|||||||
if action.action == 'BUY':
|
if action.action == 'BUY':
|
||||||
# Close any short position, open long position
|
# Close any short position, open long position
|
||||||
if symbol in self.open_positions and self.open_positions[symbol]['side'] == 'SHORT':
|
if symbol in self.open_positions and self.open_positions[symbol]['side'] == 'SHORT':
|
||||||
|
self._close_trade_for_sensitivity_learning(symbol, action)
|
||||||
del self.open_positions[symbol]
|
del self.open_positions[symbol]
|
||||||
else:
|
else:
|
||||||
|
self._open_trade_for_sensitivity_learning(symbol, action)
|
||||||
self.open_positions[symbol] = {
|
self.open_positions[symbol] = {
|
||||||
'side': 'LONG',
|
'side': 'LONG',
|
||||||
'entry_price': action.price,
|
'entry_price': action.price,
|
||||||
@ -650,474 +672,464 @@ class EnhancedTradingOrchestrator:
|
|||||||
elif action.action == 'SELL':
|
elif action.action == 'SELL':
|
||||||
# Close any long position, open short position
|
# Close any long position, open short position
|
||||||
if symbol in self.open_positions and self.open_positions[symbol]['side'] == 'LONG':
|
if symbol in self.open_positions and self.open_positions[symbol]['side'] == 'LONG':
|
||||||
|
self._close_trade_for_sensitivity_learning(symbol, action)
|
||||||
del self.open_positions[symbol]
|
del self.open_positions[symbol]
|
||||||
else:
|
else:
|
||||||
|
self._open_trade_for_sensitivity_learning(symbol, action)
|
||||||
self.open_positions[symbol] = {
|
self.open_positions[symbol] = {
|
||||||
'side': 'SHORT',
|
'side': 'SHORT',
|
||||||
'entry_price': action.price,
|
'entry_price': action.price,
|
||||||
'timestamp': action.timestamp
|
'timestamp': action.timestamp
|
||||||
}
|
}
|
||||||
|
|
||||||
async def trigger_retrospective_learning(self):
|
def _open_trade_for_sensitivity_learning(self, symbol: str, action: TradingAction):
|
||||||
"""Trigger retrospective learning analysis on recent perfect opportunities"""
|
"""Track trade opening for sensitivity learning"""
|
||||||
try:
|
try:
|
||||||
current_time = datetime.now()
|
# Get current market state for learning context
|
||||||
|
market_state = self._get_current_market_state_for_sensitivity(symbol)
|
||||||
|
|
||||||
# Only run retrospective analysis every 5 minutes to avoid overload
|
trade_info = {
|
||||||
if (current_time - self.last_retrospective_analysis).total_seconds() < 300:
|
'symbol': symbol,
|
||||||
return
|
'side': 'LONG' if action.action == 'BUY' else 'SHORT',
|
||||||
|
'entry_price': action.price,
|
||||||
self.last_retrospective_analysis = current_time
|
'entry_time': action.timestamp,
|
||||||
|
'entry_confidence': action.confidence,
|
||||||
# Analyze recent market moves for missed opportunities
|
'entry_market_state': market_state,
|
||||||
await self._analyze_missed_opportunities()
|
'sensitivity_level_at_entry': self.current_sensitivity_level,
|
||||||
|
'thresholds_used': {
|
||||||
# Update model confidence thresholds based on recent performance
|
'open': self._get_current_open_threshold(),
|
||||||
self._adjust_confidence_thresholds()
|
'close': self._get_current_close_threshold()
|
||||||
|
|
||||||
# Mark retrospective learning as active
|
|
||||||
self.retrospective_learning_active = True
|
|
||||||
|
|
||||||
logger.info("Retrospective learning analysis completed")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error in retrospective learning: {e}")
|
|
||||||
|
|
||||||
async def _analyze_missed_opportunities(self):
|
|
||||||
"""Analyze recent price movements to identify missed perfect opportunities"""
|
|
||||||
try:
|
|
||||||
for symbol in self.symbols:
|
|
||||||
# Get recent price data
|
|
||||||
recent_data = self.data_provider.get_latest_candles(symbol, '1m', limit=60)
|
|
||||||
|
|
||||||
if recent_data is None or len(recent_data) < 10:
|
|
||||||
continue
|
|
||||||
|
|
||||||
# Look for significant price movements (>1% in 5 minutes)
|
|
||||||
for i in range(5, len(recent_data)):
|
|
||||||
price_change = (recent_data.iloc[i]['close'] - recent_data.iloc[i-5]['close']) / recent_data.iloc[i-5]['close']
|
|
||||||
|
|
||||||
if abs(price_change) > 0.01: # 1% move
|
|
||||||
# This was a perfect opportunity
|
|
||||||
optimal_action = 'BUY' if price_change > 0 else 'SELL'
|
|
||||||
|
|
||||||
# Create perfect move for retrospective learning
|
|
||||||
perfect_move = PerfectMove(
|
|
||||||
symbol=symbol,
|
|
||||||
timeframe='1m',
|
|
||||||
timestamp=recent_data.iloc[i-5]['timestamp'],
|
|
||||||
optimal_action=optimal_action,
|
|
||||||
actual_outcome=price_change,
|
|
||||||
market_state_before=None, # Would need to reconstruct
|
|
||||||
market_state_after=None, # Would need to reconstruct
|
|
||||||
confidence_should_have_been=min(0.95, abs(price_change) * 20) # Higher confidence for bigger moves
|
|
||||||
)
|
|
||||||
|
|
||||||
self.perfect_moves.append(perfect_move)
|
|
||||||
|
|
||||||
logger.info(f"Retrospective perfect opportunity identified: {optimal_action} {symbol} ({price_change*100:+.2f}%)")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error analyzing missed opportunities: {e}")
|
|
||||||
|
|
||||||
def _adjust_confidence_thresholds(self):
|
|
||||||
"""Dynamically adjust confidence thresholds based on recent performance"""
|
|
||||||
try:
|
|
||||||
if len(self.perfect_moves) < 10:
|
|
||||||
return
|
|
||||||
|
|
||||||
# Analyze recent perfect moves
|
|
||||||
recent_moves = list(self.perfect_moves)[-20:]
|
|
||||||
avg_confidence_needed = np.mean([move.confidence_should_have_been for move in recent_moves])
|
|
||||||
avg_outcome = np.mean([abs(move.actual_outcome) for move in recent_moves])
|
|
||||||
|
|
||||||
# Adjust opening threshold based on missed opportunities
|
|
||||||
if avg_confidence_needed > self.confidence_threshold_open:
|
|
||||||
adjustment = min(0.1, (avg_confidence_needed - self.confidence_threshold_open) * 0.1)
|
|
||||||
self.confidence_threshold_open = max(0.3, self.confidence_threshold_open - adjustment)
|
|
||||||
logger.info(f"Adjusted opening confidence threshold to {self.confidence_threshold_open:.3f}")
|
|
||||||
|
|
||||||
# Keep closing threshold very low for sensitivity
|
|
||||||
if avg_outcome > 0.02: # If we're seeing big moves
|
|
||||||
self.confidence_threshold_close = max(0.15, self.confidence_threshold_close * 0.9)
|
|
||||||
logger.info(f"Lowered closing confidence threshold to {self.confidence_threshold_close:.3f}")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error adjusting confidence thresholds: {e}")
|
|
||||||
|
|
||||||
def _get_correlated_sentiment(self, symbol: str,
|
|
||||||
all_predictions: Dict[str, List[EnhancedPrediction]]) -> Dict[str, Any]:
|
|
||||||
"""Get sentiment from correlated symbols"""
|
|
||||||
correlated_actions = []
|
|
||||||
correlated_confidences = []
|
|
||||||
|
|
||||||
for other_symbol, predictions in all_predictions.items():
|
|
||||||
if other_symbol != symbol and predictions:
|
|
||||||
correlation = self.symbol_correlation_matrix.get((symbol, other_symbol), 0.0)
|
|
||||||
|
|
||||||
if correlation > 0.5: # Only consider significantly correlated symbols
|
|
||||||
best_pred = max(predictions, key=lambda p: p.overall_confidence)
|
|
||||||
correlated_actions.append(best_pred.overall_action)
|
|
||||||
correlated_confidences.append(best_pred.overall_confidence * correlation)
|
|
||||||
|
|
||||||
if not correlated_actions:
|
|
||||||
return {'agreement': 1.0, 'sentiment': 'NEUTRAL'}
|
|
||||||
|
|
||||||
# Calculate agreement
|
|
||||||
primary_pred = all_predictions[symbol][0] if all_predictions.get(symbol) else None
|
|
||||||
if primary_pred:
|
|
||||||
agreement_count = sum(1 for action in correlated_actions
|
|
||||||
if action == primary_pred.overall_action)
|
|
||||||
agreement = agreement_count / len(correlated_actions)
|
|
||||||
else:
|
|
||||||
agreement = 0.5
|
|
||||||
|
|
||||||
# Calculate overall sentiment
|
|
||||||
action_weights = {'BUY': 0.0, 'SELL': 0.0, 'HOLD': 0.0}
|
|
||||||
for action, confidence in zip(correlated_actions, correlated_confidences):
|
|
||||||
action_weights[action] += confidence
|
|
||||||
|
|
||||||
dominant_sentiment = max(action_weights, key=action_weights.get)
|
|
||||||
|
|
||||||
return {
|
|
||||||
'agreement': agreement,
|
|
||||||
'sentiment': dominant_sentiment,
|
|
||||||
'correlated_symbols': len(correlated_actions)
|
|
||||||
}
|
|
||||||
|
|
||||||
def _queue_for_rl_evaluation(self, action: TradingAction, market_state: MarketState):
|
|
||||||
"""Queue trading action for RL evaluation"""
|
|
||||||
evaluation_item = {
|
|
||||||
'action': action,
|
|
||||||
'market_state_before': market_state,
|
|
||||||
'timestamp': datetime.now(),
|
|
||||||
'evaluation_pending': True
|
|
||||||
}
|
|
||||||
self.rl_evaluation_queue.append(evaluation_item)
|
|
||||||
|
|
||||||
async def evaluate_actions_with_rl(self):
|
|
||||||
"""Evaluate recent actions using RL agents for continuous learning"""
|
|
||||||
if not self.rl_evaluation_queue:
|
|
||||||
return
|
|
||||||
|
|
||||||
current_time = datetime.now()
|
|
||||||
|
|
||||||
# Process actions that are ready for evaluation (e.g., 1 hour old)
|
|
||||||
for item in list(self.rl_evaluation_queue):
|
|
||||||
if item['evaluation_pending']:
|
|
||||||
time_since_action = (current_time - item['timestamp']).total_seconds()
|
|
||||||
|
|
||||||
# Evaluate after sufficient time has passed
|
|
||||||
if time_since_action >= 3600: # 1 hour
|
|
||||||
await self._evaluate_single_action(item)
|
|
||||||
item['evaluation_pending'] = False
|
|
||||||
|
|
||||||
async def _evaluate_single_action(self, evaluation_item: Dict[str, Any]):
|
|
||||||
"""Evaluate a single action using RL"""
|
|
||||||
try:
|
|
||||||
action = evaluation_item['action']
|
|
||||||
initial_state = evaluation_item['market_state_before']
|
|
||||||
|
|
||||||
# Get current market state for comparison
|
|
||||||
current_market_states = await self._get_all_market_states_universal(self.universal_adapter.get_universal_data_stream())
|
|
||||||
current_state = current_market_states.get(action.symbol)
|
|
||||||
|
|
||||||
if current_state:
|
|
||||||
# Calculate reward based on price movement
|
|
||||||
initial_price = initial_state.prices.get(self.timeframes[0], 0)
|
|
||||||
current_price = current_state.prices.get(self.timeframes[0], 0)
|
|
||||||
|
|
||||||
if initial_price > 0:
|
|
||||||
price_change = (current_price - initial_price) / initial_price
|
|
||||||
|
|
||||||
# Calculate reward based on action and price movement
|
|
||||||
reward = self._calculate_reward(action.action, price_change, action.confidence)
|
|
||||||
|
|
||||||
# Update RL agents
|
|
||||||
await self._update_rl_agents(action, initial_state, current_state, reward)
|
|
||||||
|
|
||||||
# Check if this was a perfect move for CNN training
|
|
||||||
if abs(reward) > 0.02: # Significant outcome
|
|
||||||
self._mark_perfect_move(action, initial_state, current_state, reward)
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error evaluating action: {e}")
|
|
||||||
|
|
||||||
def _calculate_reward(self, action: str, price_change: float, confidence: float) -> float:
|
|
||||||
"""Calculate reward for RL training"""
|
|
||||||
base_reward = 0.0
|
|
||||||
|
|
||||||
if action == 'BUY' and price_change > 0:
|
|
||||||
base_reward = price_change * 10 # Reward proportional to gain
|
|
||||||
elif action == 'SELL' and price_change < 0:
|
|
||||||
base_reward = abs(price_change) * 10 # Reward for avoiding loss
|
|
||||||
elif action == 'HOLD':
|
|
||||||
base_reward = 0.01 if abs(price_change) < 0.005 else -0.01 # Small reward for correct holds
|
|
||||||
else:
|
|
||||||
base_reward = -abs(price_change) * 5 # Penalty for wrong actions
|
|
||||||
|
|
||||||
# Adjust reward based on confidence
|
|
||||||
confidence_multiplier = 0.5 + confidence # 0.5 to 1.5 range
|
|
||||||
|
|
||||||
return base_reward * confidence_multiplier
|
|
||||||
|
|
||||||
async def _update_rl_agents(self, action: TradingAction, initial_state: MarketState,
|
|
||||||
current_state: MarketState, reward: float):
|
|
||||||
"""Update RL agents with action evaluation"""
|
|
||||||
for model_name, model in self.model_registry.models.items():
|
|
||||||
if isinstance(model, RLAgentInterface):
|
|
||||||
try:
|
|
||||||
# Convert market states to RL state format
|
|
||||||
initial_rl_state = self._market_state_to_rl_state(initial_state)
|
|
||||||
current_rl_state = self._market_state_to_rl_state(current_state)
|
|
||||||
|
|
||||||
# Convert action to RL action index
|
|
||||||
action_idx = {'SELL': 0, 'HOLD': 1, 'BUY': 2}.get(action.action, 1)
|
|
||||||
|
|
||||||
# Store experience
|
|
||||||
model.remember(
|
|
||||||
state=initial_rl_state,
|
|
||||||
action=action_idx,
|
|
||||||
reward=reward,
|
|
||||||
next_state=current_rl_state,
|
|
||||||
done=False
|
|
||||||
)
|
|
||||||
|
|
||||||
# Trigger replay learning
|
|
||||||
loss = model.replay()
|
|
||||||
if loss is not None:
|
|
||||||
logger.info(f"RL agent {model_name} updated with loss: {loss:.4f}")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error updating RL agent {model_name}: {e}")
|
|
||||||
|
|
||||||
def _mark_perfect_move(self, action: TradingAction, initial_state: MarketState,
|
|
||||||
final_state: MarketState, reward: float):
|
|
||||||
"""Mark a perfect move for CNN training"""
|
|
||||||
try:
|
|
||||||
# Determine what the optimal action should have been
|
|
||||||
optimal_action = action.action if reward > 0 else ('HOLD' if action.action == 'HOLD' else
|
|
||||||
('SELL' if action.action == 'BUY' else 'BUY'))
|
|
||||||
|
|
||||||
# Calculate what confidence should have been
|
|
||||||
optimal_confidence = min(0.95, abs(reward) * 10) # Higher reward = higher confidence should have been
|
|
||||||
|
|
||||||
for tf_pred in action.timeframe_analysis:
|
|
||||||
perfect_move = PerfectMove(
|
|
||||||
symbol=action.symbol,
|
|
||||||
timeframe=tf_pred.timeframe,
|
|
||||||
timestamp=action.timestamp,
|
|
||||||
optimal_action=optimal_action,
|
|
||||||
actual_outcome=reward,
|
|
||||||
market_state_before=initial_state,
|
|
||||||
market_state_after=final_state,
|
|
||||||
confidence_should_have_been=optimal_confidence
|
|
||||||
)
|
|
||||||
self.perfect_moves.append(perfect_move)
|
|
||||||
|
|
||||||
logger.info(f"Marked perfect move for {action.symbol}: {optimal_action} with confidence {optimal_confidence:.3f}")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Error marking perfect move: {e}")
|
|
||||||
|
|
||||||
def get_recent_perfect_moves(self, limit: int = 10) -> List[PerfectMove]:
|
|
||||||
"""Get recent perfect moves for display/monitoring"""
|
|
||||||
return list(self.perfect_moves)[-limit:]
|
|
||||||
|
|
||||||
async def queue_action_for_evaluation(self, action: TradingAction):
|
|
||||||
"""Queue a trading action for future RL evaluation"""
|
|
||||||
try:
|
|
||||||
# Get current market state
|
|
||||||
market_states = await self._get_all_market_states_universal(self.universal_adapter.get_universal_data_stream())
|
|
||||||
if action.symbol in market_states:
|
|
||||||
evaluation_item = {
|
|
||||||
'action': action,
|
|
||||||
'market_state_before': market_states[action.symbol],
|
|
||||||
'timestamp': datetime.now()
|
|
||||||
}
|
}
|
||||||
self.rl_evaluation_queue.append(evaluation_item)
|
}
|
||||||
logger.debug(f"Queued action for RL evaluation: {action.action} {action.symbol}")
|
|
||||||
|
self.active_trades[symbol] = trade_info
|
||||||
|
logger.info(f"Opened trade for sensitivity learning: {symbol} {trade_info['side']} @ ${action.price:.2f}")
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error queuing action for evaluation: {e}")
|
logger.error(f"Error tracking trade opening for sensitivity learning: {e}")
|
||||||
|
|
||||||
def get_perfect_moves_for_training(self, symbol: str = None, timeframe: str = None,
|
def _close_trade_for_sensitivity_learning(self, symbol: str, action: TradingAction):
|
||||||
limit: int = 1000) -> List[PerfectMove]:
|
"""Track trade closing and create learning case for DQN"""
|
||||||
"""Get perfect moves for CNN training"""
|
|
||||||
moves = list(self.perfect_moves)
|
|
||||||
|
|
||||||
# Filter by symbol if specified
|
|
||||||
if symbol:
|
|
||||||
moves = [move for move in moves if move.symbol == symbol]
|
|
||||||
|
|
||||||
# Filter by timeframe if specified
|
|
||||||
if timeframe:
|
|
||||||
moves = [move for move in moves if move.timeframe == timeframe]
|
|
||||||
|
|
||||||
return moves[-limit:] # Return most recent moves
|
|
||||||
|
|
||||||
# Helper methods for market analysis using universal data
|
|
||||||
def _calculate_volatility_from_universal(self, symbol: str, universal_stream: UniversalDataStream) -> float:
|
|
||||||
"""Calculate current volatility for symbol using universal data"""
|
|
||||||
try:
|
try:
|
||||||
if symbol == 'ETH/USDT' and len(universal_stream.eth_ticks) > 10:
|
if symbol not in self.active_trades:
|
||||||
# Calculate volatility from tick data
|
return
|
||||||
prices = universal_stream.eth_ticks[-10:, 4] # Last 10 close prices
|
|
||||||
returns = np.diff(prices) / prices[:-1]
|
trade_info = self.active_trades[symbol]
|
||||||
volatility = np.std(returns) * np.sqrt(86400) # Annualized volatility
|
|
||||||
return float(volatility)
|
# Calculate trade outcome
|
||||||
elif symbol == 'BTC/USDT' and len(universal_stream.btc_ticks) > 10:
|
entry_price = trade_info['entry_price']
|
||||||
# Calculate volatility from BTC tick data
|
exit_price = action.price
|
||||||
prices = universal_stream.btc_ticks[-10:, 4] # Last 10 close prices
|
side = trade_info['side']
|
||||||
returns = np.diff(prices) / prices[:-1]
|
|
||||||
volatility = np.std(returns) * np.sqrt(86400) # Annualized volatility
|
if side == 'LONG':
|
||||||
return float(volatility)
|
pnl_pct = (exit_price - entry_price) / entry_price
|
||||||
|
else: # SHORT
|
||||||
|
pnl_pct = (entry_price - exit_price) / entry_price
|
||||||
|
|
||||||
|
# Calculate trade duration
|
||||||
|
duration = (action.timestamp - trade_info['entry_time']).total_seconds()
|
||||||
|
|
||||||
|
# Get current market state for exit context
|
||||||
|
exit_market_state = self._get_current_market_state_for_sensitivity(symbol)
|
||||||
|
|
||||||
|
# Create completed trade record
|
||||||
|
completed_trade = {
|
||||||
|
'symbol': symbol,
|
||||||
|
'side': side,
|
||||||
|
'entry_price': entry_price,
|
||||||
|
'exit_price': exit_price,
|
||||||
|
'entry_time': trade_info['entry_time'],
|
||||||
|
'exit_time': action.timestamp,
|
||||||
|
'duration': duration,
|
||||||
|
'pnl_pct': pnl_pct,
|
||||||
|
'entry_confidence': trade_info['entry_confidence'],
|
||||||
|
'exit_confidence': action.confidence,
|
||||||
|
'entry_market_state': trade_info['entry_market_state'],
|
||||||
|
'exit_market_state': exit_market_state,
|
||||||
|
'sensitivity_level_used': trade_info['sensitivity_level_at_entry'],
|
||||||
|
'thresholds_used': trade_info['thresholds_used']
|
||||||
|
}
|
||||||
|
|
||||||
|
self.completed_trades.append(completed_trade)
|
||||||
|
|
||||||
|
# Create sensitivity learning case for DQN
|
||||||
|
self._create_sensitivity_learning_case(completed_trade)
|
||||||
|
|
||||||
|
# Remove from active trades
|
||||||
|
del self.active_trades[symbol]
|
||||||
|
|
||||||
|
logger.info(f"Closed trade for sensitivity learning: {symbol} {side} P&L: {pnl_pct*100:+.2f}% Duration: {duration:.0f}s")
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error calculating volatility from universal data: {e}")
|
logger.error(f"Error tracking trade closing for sensitivity learning: {e}")
|
||||||
|
|
||||||
return 0.02 # Default 2% volatility
|
|
||||||
|
|
||||||
def _get_current_volume_from_universal(self, symbol: str, universal_stream: UniversalDataStream) -> float:
|
def _get_current_market_state_for_sensitivity(self, symbol: str) -> Dict[str, float]:
|
||||||
"""Get current volume ratio compared to average using universal data"""
|
"""Get current market state features for sensitivity learning"""
|
||||||
try:
|
try:
|
||||||
if symbol == 'ETH/USDT':
|
# Get recent price data
|
||||||
# Use 1m data for volume analysis
|
recent_data = self.data_provider.get_historical_data(symbol, '1m', limit=20)
|
||||||
if len(universal_stream.eth_1m) > 10:
|
|
||||||
volumes = universal_stream.eth_1m[-10:, 5] # Last 10 volume values
|
if recent_data is None or len(recent_data) < 10:
|
||||||
current_volume = universal_stream.eth_1m[-1, 5]
|
return self._get_default_market_state()
|
||||||
avg_volume = np.mean(volumes[:-1])
|
|
||||||
if avg_volume > 0:
|
# Calculate market features
|
||||||
return float(current_volume / avg_volume)
|
current_price = recent_data['close'].iloc[-1]
|
||||||
elif symbol == 'BTC/USDT':
|
|
||||||
# Use BTC tick data for volume analysis
|
# Volatility (20-period)
|
||||||
if len(universal_stream.btc_ticks) > 10:
|
volatility = recent_data['close'].pct_change().std() * 100
|
||||||
volumes = universal_stream.btc_ticks[-10:, 5] # Last 10 volume values
|
|
||||||
current_volume = universal_stream.btc_ticks[-1, 5]
|
# Price momentum (5-period)
|
||||||
avg_volume = np.mean(volumes[:-1])
|
momentum_5 = (current_price - recent_data['close'].iloc[-6]) / recent_data['close'].iloc[-6] * 100
|
||||||
if avg_volume > 0:
|
|
||||||
return float(current_volume / avg_volume)
|
# Volume ratio
|
||||||
|
avg_volume = recent_data['volume'].mean()
|
||||||
|
current_volume = recent_data['volume'].iloc[-1]
|
||||||
|
volume_ratio = current_volume / avg_volume if avg_volume > 0 else 1.0
|
||||||
|
|
||||||
|
# RSI
|
||||||
|
rsi = recent_data['rsi'].iloc[-1] if 'rsi' in recent_data.columns else 50.0
|
||||||
|
|
||||||
|
# MACD signal
|
||||||
|
macd_signal = 0.0
|
||||||
|
if 'macd' in recent_data.columns and 'macd_signal' in recent_data.columns:
|
||||||
|
macd_signal = recent_data['macd'].iloc[-1] - recent_data['macd_signal'].iloc[-1]
|
||||||
|
|
||||||
|
# Bollinger Band position
|
||||||
|
bb_position = 0.5 # Default middle
|
||||||
|
if 'bb_upper' in recent_data.columns and 'bb_lower' in recent_data.columns:
|
||||||
|
bb_upper = recent_data['bb_upper'].iloc[-1]
|
||||||
|
bb_lower = recent_data['bb_lower'].iloc[-1]
|
||||||
|
if bb_upper > bb_lower:
|
||||||
|
bb_position = (current_price - bb_lower) / (bb_upper - bb_lower)
|
||||||
|
|
||||||
|
# Recent price change patterns
|
||||||
|
price_changes = recent_data['close'].pct_change().tail(5).tolist()
|
||||||
|
|
||||||
|
return {
|
||||||
|
'volatility': volatility,
|
||||||
|
'momentum_5': momentum_5,
|
||||||
|
'volume_ratio': volume_ratio,
|
||||||
|
'rsi': rsi,
|
||||||
|
'macd_signal': macd_signal,
|
||||||
|
'bb_position': bb_position,
|
||||||
|
'price_change_1': price_changes[-1] if len(price_changes) > 0 else 0.0,
|
||||||
|
'price_change_2': price_changes[-2] if len(price_changes) > 1 else 0.0,
|
||||||
|
'price_change_3': price_changes[-3] if len(price_changes) > 2 else 0.0,
|
||||||
|
'price_change_4': price_changes[-4] if len(price_changes) > 3 else 0.0,
|
||||||
|
'price_change_5': price_changes[-5] if len(price_changes) > 4 else 0.0
|
||||||
|
}
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error calculating volume from universal data: {e}")
|
logger.error(f"Error getting market state for sensitivity learning: {e}")
|
||||||
|
return self._get_default_market_state()
|
||||||
return 1.0 # Normal volume
|
|
||||||
|
|
||||||
def _calculate_trend_strength_from_universal(self, symbol: str, universal_stream: UniversalDataStream) -> float:
|
def _get_default_market_state(self) -> Dict[str, float]:
|
||||||
"""Calculate trend strength using universal data"""
|
"""Get default market state when data is unavailable"""
|
||||||
|
return {
|
||||||
|
'volatility': 2.0,
|
||||||
|
'momentum_5': 0.0,
|
||||||
|
'volume_ratio': 1.0,
|
||||||
|
'rsi': 50.0,
|
||||||
|
'macd_signal': 0.0,
|
||||||
|
'bb_position': 0.5,
|
||||||
|
'price_change_1': 0.0,
|
||||||
|
'price_change_2': 0.0,
|
||||||
|
'price_change_3': 0.0,
|
||||||
|
'price_change_4': 0.0,
|
||||||
|
'price_change_5': 0.0
|
||||||
|
}
|
||||||
|
|
||||||
|
def _create_sensitivity_learning_case(self, completed_trade: Dict[str, Any]):
|
||||||
|
"""Create a learning case for the DQN sensitivity agent"""
|
||||||
try:
|
try:
|
||||||
if symbol == 'ETH/USDT':
|
# Create state vector from market conditions at entry
|
||||||
# Use multiple timeframes to determine trend strength
|
entry_state = self._market_state_to_sensitivity_state(
|
||||||
trend_scores = []
|
completed_trade['entry_market_state'],
|
||||||
|
completed_trade['sensitivity_level_used']
|
||||||
# Check 1m trend
|
)
|
||||||
if len(universal_stream.eth_1m) > 20:
|
|
||||||
prices = universal_stream.eth_1m[-20:, 4] # Last 20 close prices
|
# Create state vector from market conditions at exit
|
||||||
slope = np.polyfit(range(len(prices)), prices, 1)[0]
|
exit_state = self._market_state_to_sensitivity_state(
|
||||||
trend_scores.append(abs(slope) / np.mean(prices))
|
completed_trade['exit_market_state'],
|
||||||
|
completed_trade['sensitivity_level_used']
|
||||||
# Check 1h trend
|
)
|
||||||
if len(universal_stream.eth_1h) > 10:
|
|
||||||
prices = universal_stream.eth_1h[-10:, 4] # Last 10 close prices
|
# Calculate reward based on trade outcome
|
||||||
slope = np.polyfit(range(len(prices)), prices, 1)[0]
|
reward = self._calculate_sensitivity_reward(completed_trade)
|
||||||
trend_scores.append(abs(slope) / np.mean(prices))
|
|
||||||
|
# Determine optimal sensitivity action based on outcome
|
||||||
if trend_scores:
|
optimal_sensitivity = self._determine_optimal_sensitivity(completed_trade)
|
||||||
return float(np.mean(trend_scores))
|
|
||||||
|
# Create learning experience
|
||||||
elif symbol == 'BTC/USDT':
|
learning_case = {
|
||||||
# Use BTC tick data for trend analysis
|
'state': entry_state,
|
||||||
if len(universal_stream.btc_ticks) > 20:
|
'action': completed_trade['sensitivity_level_used'],
|
||||||
prices = universal_stream.btc_ticks[-20:, 4] # Last 20 close prices
|
'reward': reward,
|
||||||
slope = np.polyfit(range(len(prices)), prices, 1)[0]
|
'next_state': exit_state,
|
||||||
return float(abs(slope) / np.mean(prices))
|
'done': True, # Trade is completed
|
||||||
|
'optimal_action': optimal_sensitivity,
|
||||||
|
'trade_outcome': completed_trade['pnl_pct'],
|
||||||
|
'trade_duration': completed_trade['duration'],
|
||||||
|
'symbol': completed_trade['symbol']
|
||||||
|
}
|
||||||
|
|
||||||
|
self.sensitivity_learning_queue.append(learning_case)
|
||||||
|
|
||||||
|
# Train DQN if we have enough cases
|
||||||
|
if len(self.sensitivity_learning_queue) >= 32: # Batch size
|
||||||
|
self._train_sensitivity_dqn()
|
||||||
|
|
||||||
|
logger.info(f"Created sensitivity learning case: reward={reward:.3f}, optimal_sensitivity={optimal_sensitivity}")
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error calculating trend strength from universal data: {e}")
|
logger.error(f"Error creating sensitivity learning case: {e}")
|
||||||
|
|
||||||
return 0.5 # Moderate trend
|
|
||||||
|
|
||||||
def _determine_market_regime_from_universal(self, symbol: str, universal_stream: UniversalDataStream) -> str:
|
def _market_state_to_sensitivity_state(self, market_state: Dict[str, float], current_sensitivity: int) -> np.ndarray:
|
||||||
"""Determine current market regime using universal data"""
|
"""Convert market state to DQN state vector for sensitivity learning"""
|
||||||
try:
|
try:
|
||||||
if symbol == 'ETH/USDT':
|
# Create state vector with market features + current sensitivity
|
||||||
# Analyze volatility and trend from multiple timeframes
|
state_features = [
|
||||||
volatility = self._calculate_volatility_from_universal(symbol, universal_stream)
|
market_state.get('volatility', 2.0) / 10.0, # Normalize volatility
|
||||||
trend_strength = self._calculate_trend_strength_from_universal(symbol, universal_stream)
|
market_state.get('momentum_5', 0.0) / 5.0, # Normalize momentum
|
||||||
|
market_state.get('volume_ratio', 1.0), # Volume ratio
|
||||||
# Determine regime based on volatility and trend
|
market_state.get('rsi', 50.0) / 100.0, # Normalize RSI
|
||||||
if volatility > 0.05: # High volatility
|
market_state.get('macd_signal', 0.0) / 2.0, # Normalize MACD
|
||||||
return 'volatile'
|
market_state.get('bb_position', 0.5), # BB position (already 0-1)
|
||||||
elif trend_strength > 0.002: # Strong trend
|
market_state.get('price_change_1', 0.0) * 100, # Recent price changes
|
||||||
return 'trending'
|
market_state.get('price_change_2', 0.0) * 100,
|
||||||
else:
|
market_state.get('price_change_3', 0.0) * 100,
|
||||||
return 'ranging'
|
market_state.get('price_change_4', 0.0) * 100,
|
||||||
|
market_state.get('price_change_5', 0.0) * 100,
|
||||||
elif symbol == 'BTC/USDT':
|
current_sensitivity / 4.0, # Normalize current sensitivity (0-4 -> 0-1)
|
||||||
# Analyze BTC regime
|
]
|
||||||
volatility = self._calculate_volatility_from_universal(symbol, universal_stream)
|
|
||||||
|
# Add recent performance metrics
|
||||||
if volatility > 0.04: # High volatility for BTC
|
if len(self.completed_trades) > 0:
|
||||||
return 'volatile'
|
recent_trades = list(self.completed_trades)[-10:] # Last 10 trades
|
||||||
else:
|
avg_pnl = np.mean([t['pnl_pct'] for t in recent_trades])
|
||||||
return 'trending' # Default for BTC
|
win_rate = len([t for t in recent_trades if t['pnl_pct'] > 0]) / len(recent_trades)
|
||||||
|
avg_duration = np.mean([t['duration'] for t in recent_trades]) / 3600 # Normalize to hours
|
||||||
|
else:
|
||||||
|
avg_pnl = 0.0
|
||||||
|
win_rate = 0.5
|
||||||
|
avg_duration = 0.5
|
||||||
|
|
||||||
|
state_features.extend([
|
||||||
|
avg_pnl * 10, # Recent average P&L
|
||||||
|
win_rate, # Recent win rate
|
||||||
|
avg_duration, # Recent average duration
|
||||||
|
])
|
||||||
|
|
||||||
|
# Pad or truncate to exact state size
|
||||||
|
while len(state_features) < self.sensitivity_state_size:
|
||||||
|
state_features.append(0.0)
|
||||||
|
|
||||||
|
state_features = state_features[:self.sensitivity_state_size]
|
||||||
|
|
||||||
|
return np.array(state_features, dtype=np.float32)
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error determining market regime from universal data: {e}")
|
logger.error(f"Error converting market state to sensitivity state: {e}")
|
||||||
|
return np.zeros(self.sensitivity_state_size, dtype=np.float32)
|
||||||
return 'trending' # Default regime
|
|
||||||
|
|
||||||
# Legacy helper methods (kept for compatibility)
|
def _calculate_sensitivity_reward(self, completed_trade: Dict[str, Any]) -> float:
|
||||||
def _calculate_volatility(self, symbol: str) -> float:
|
"""Calculate reward for sensitivity learning based on trade outcome"""
|
||||||
"""Calculate current volatility for symbol (legacy method)"""
|
try:
|
||||||
return 0.02 # 2% default volatility
|
pnl_pct = completed_trade['pnl_pct']
|
||||||
|
duration = completed_trade['duration']
|
||||||
|
|
||||||
|
# Base reward from P&L
|
||||||
|
base_reward = pnl_pct * 10 # Scale P&L percentage
|
||||||
|
|
||||||
|
# Duration penalty/bonus
|
||||||
|
if duration < 300: # Less than 5 minutes - too quick
|
||||||
|
duration_factor = 0.8
|
||||||
|
elif duration < 1800: # Less than 30 minutes - good for scalping
|
||||||
|
duration_factor = 1.2
|
||||||
|
elif duration < 3600: # Less than 1 hour - acceptable
|
||||||
|
duration_factor = 1.0
|
||||||
|
else: # More than 1 hour - too slow for scalping
|
||||||
|
duration_factor = 0.7
|
||||||
|
|
||||||
|
# Confidence factor - reward appropriate confidence levels
|
||||||
|
entry_conf = completed_trade['entry_confidence']
|
||||||
|
exit_conf = completed_trade['exit_confidence']
|
||||||
|
|
||||||
|
if pnl_pct > 0: # Winning trade
|
||||||
|
# Reward high entry confidence and appropriate exit confidence
|
||||||
|
conf_factor = (entry_conf + exit_conf) / 2
|
||||||
|
else: # Losing trade
|
||||||
|
# Reward quick exit (high exit confidence for losses)
|
||||||
|
conf_factor = exit_conf
|
||||||
|
|
||||||
|
# Calculate final reward
|
||||||
|
final_reward = base_reward * duration_factor * conf_factor
|
||||||
|
|
||||||
|
# Clip reward to reasonable range
|
||||||
|
final_reward = np.clip(final_reward, -2.0, 2.0)
|
||||||
|
|
||||||
|
return float(final_reward)
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error calculating sensitivity reward: {e}")
|
||||||
|
return 0.0
|
||||||
|
|
||||||
def _get_current_volume(self, symbol: str) -> float:
|
def _determine_optimal_sensitivity(self, completed_trade: Dict[str, Any]) -> int:
|
||||||
"""Get current volume ratio compared to average (legacy method)"""
|
"""Determine optimal sensitivity level based on trade outcome"""
|
||||||
return 1.0 # Normal volume
|
try:
|
||||||
|
pnl_pct = completed_trade['pnl_pct']
|
||||||
|
duration = completed_trade['duration']
|
||||||
|
current_sensitivity = completed_trade['sensitivity_level_used']
|
||||||
|
|
||||||
|
# If trade was profitable and quick, current sensitivity was good
|
||||||
|
if pnl_pct > 0.01 and duration < 1800: # >1% profit in <30 min
|
||||||
|
return current_sensitivity
|
||||||
|
|
||||||
|
# If trade was very profitable, could have been more aggressive
|
||||||
|
if pnl_pct > 0.02: # >2% profit
|
||||||
|
return min(4, current_sensitivity + 1) # Increase sensitivity
|
||||||
|
|
||||||
|
# If trade was a small loss, might need more sensitivity
|
||||||
|
if -0.01 < pnl_pct < 0: # Small loss
|
||||||
|
return min(4, current_sensitivity + 1) # Increase sensitivity
|
||||||
|
|
||||||
|
# If trade was a big loss, need less sensitivity
|
||||||
|
if pnl_pct < -0.02: # >2% loss
|
||||||
|
return max(0, current_sensitivity - 1) # Decrease sensitivity
|
||||||
|
|
||||||
|
# If trade took too long, need more sensitivity
|
||||||
|
if duration > 3600: # >1 hour
|
||||||
|
return min(4, current_sensitivity + 1) # Increase sensitivity
|
||||||
|
|
||||||
|
# Default: keep current sensitivity
|
||||||
|
return current_sensitivity
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error determining optimal sensitivity: {e}")
|
||||||
|
return 2 # Default to medium
|
||||||
|
|
||||||
def _calculate_trend_strength(self, symbol: str) -> float:
|
def _train_sensitivity_dqn(self):
|
||||||
"""Calculate trend strength (legacy method)"""
|
"""Train the DQN agent for sensitivity learning"""
|
||||||
return 0.5 # Moderate trend
|
try:
|
||||||
|
# Initialize DQN agent if not already done
|
||||||
|
if self.sensitivity_dqn_agent is None:
|
||||||
|
self._initialize_sensitivity_dqn()
|
||||||
|
|
||||||
|
if self.sensitivity_dqn_agent is None:
|
||||||
|
return
|
||||||
|
|
||||||
|
# Get batch of learning cases
|
||||||
|
batch_size = min(32, len(self.sensitivity_learning_queue))
|
||||||
|
if batch_size < 8: # Need minimum batch size
|
||||||
|
return
|
||||||
|
|
||||||
|
# Sample random batch
|
||||||
|
batch_indices = np.random.choice(len(self.sensitivity_learning_queue), batch_size, replace=False)
|
||||||
|
batch = [self.sensitivity_learning_queue[i] for i in batch_indices]
|
||||||
|
|
||||||
|
# Train the DQN agent
|
||||||
|
for case in batch:
|
||||||
|
self.sensitivity_dqn_agent.remember(
|
||||||
|
state=case['state'],
|
||||||
|
action=case['action'],
|
||||||
|
reward=case['reward'],
|
||||||
|
next_state=case['next_state'],
|
||||||
|
done=case['done']
|
||||||
|
)
|
||||||
|
|
||||||
|
# Perform replay training
|
||||||
|
loss = self.sensitivity_dqn_agent.replay()
|
||||||
|
|
||||||
|
if loss is not None:
|
||||||
|
logger.info(f"Sensitivity DQN training completed. Loss: {loss:.4f}")
|
||||||
|
|
||||||
|
# Update current sensitivity level based on recent performance
|
||||||
|
self._update_current_sensitivity_level()
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error training sensitivity DQN: {e}")
|
||||||
|
|
||||||
def _determine_market_regime(self, symbol: str) -> str:
|
def _initialize_sensitivity_dqn(self):
|
||||||
"""Determine current market regime (legacy method)"""
|
"""Initialize the DQN agent for sensitivity learning"""
|
||||||
return 'trending' # Default to trending
|
try:
|
||||||
|
# Try to import DQN agent
|
||||||
|
from NN.models.dqn_agent import DQNAgent
|
||||||
|
|
||||||
|
# Create DQN agent for sensitivity learning
|
||||||
|
self.sensitivity_dqn_agent = DQNAgent(
|
||||||
|
state_shape=(self.sensitivity_state_size,),
|
||||||
|
n_actions=self.sensitivity_action_space,
|
||||||
|
learning_rate=0.001,
|
||||||
|
gamma=0.95,
|
||||||
|
epsilon=0.3, # Lower epsilon for more exploitation
|
||||||
|
epsilon_min=0.05,
|
||||||
|
epsilon_decay=0.995,
|
||||||
|
buffer_size=1000,
|
||||||
|
batch_size=32,
|
||||||
|
target_update=10
|
||||||
|
)
|
||||||
|
|
||||||
|
logger.info("Sensitivity DQN agent initialized successfully")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error initializing sensitivity DQN agent: {e}")
|
||||||
|
self.sensitivity_dqn_agent = None
|
||||||
|
|
||||||
def _get_symbol_correlation(self, symbol: str) -> Dict[str, float]:
|
def _update_current_sensitivity_level(self):
|
||||||
"""Get correlations with other symbols"""
|
"""Update current sensitivity level using trained DQN"""
|
||||||
correlations = {}
|
try:
|
||||||
for other_symbol in self.symbols:
|
if self.sensitivity_dqn_agent is None:
|
||||||
if other_symbol != symbol:
|
return
|
||||||
correlations[other_symbol] = self.symbol_correlation_matrix.get((symbol, other_symbol), 0.0)
|
|
||||||
return correlations
|
# Get current market state
|
||||||
|
current_market_state = self._get_current_market_state_for_sensitivity('ETH/USDT') # Use ETH as primary
|
||||||
|
current_state = self._market_state_to_sensitivity_state(current_market_state, self.current_sensitivity_level)
|
||||||
|
|
||||||
|
# Get action from DQN (without exploration for production use)
|
||||||
|
action = self.sensitivity_dqn_agent.act(current_state, explore=False)
|
||||||
|
|
||||||
|
# Update sensitivity level if it changed
|
||||||
|
if action != self.current_sensitivity_level:
|
||||||
|
old_level = self.current_sensitivity_level
|
||||||
|
self.current_sensitivity_level = action
|
||||||
|
|
||||||
|
# Update thresholds based on new sensitivity level
|
||||||
|
self._update_thresholds_from_sensitivity()
|
||||||
|
|
||||||
|
logger.info(f"Sensitivity level updated: {self.sensitivity_levels[old_level]['name']} -> {self.sensitivity_levels[action]['name']}")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error updating current sensitivity level: {e}")
|
||||||
|
|
||||||
def _calculate_position_size(self, symbol: str, action: str, confidence: float) -> float:
|
def _update_thresholds_from_sensitivity(self):
|
||||||
"""Calculate position size based on confidence and risk management"""
|
"""Update confidence thresholds based on current sensitivity level"""
|
||||||
base_size = 0.02 # 2% of portfolio
|
try:
|
||||||
confidence_multiplier = confidence # Scale by confidence
|
sensitivity_config = self.sensitivity_levels[self.current_sensitivity_level]
|
||||||
max_size = 0.05 # 5% maximum
|
|
||||||
|
# Get base thresholds from config
|
||||||
return min(base_size * confidence_multiplier, max_size)
|
base_open_threshold = self.config.orchestrator.get('confidence_threshold', 0.6)
|
||||||
|
base_close_threshold = self.config.orchestrator.get('confidence_threshold_close', 0.25)
|
||||||
|
|
||||||
|
# Apply sensitivity multipliers
|
||||||
|
self.confidence_threshold_open = base_open_threshold * sensitivity_config['open_threshold_multiplier']
|
||||||
|
self.confidence_threshold_close = base_close_threshold * sensitivity_config['close_threshold_multiplier']
|
||||||
|
|
||||||
|
# Ensure thresholds stay within reasonable bounds
|
||||||
|
self.confidence_threshold_open = np.clip(self.confidence_threshold_open, 0.3, 0.9)
|
||||||
|
self.confidence_threshold_close = np.clip(self.confidence_threshold_close, 0.1, 0.6)
|
||||||
|
|
||||||
|
logger.info(f"Updated thresholds - Open: {self.confidence_threshold_open:.3f}, Close: {self.confidence_threshold_close:.3f}")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error updating thresholds from sensitivity: {e}")
|
||||||
|
|
||||||
def _market_state_to_rl_state(self, market_state: MarketState) -> np.ndarray:
|
def _get_current_open_threshold(self) -> float:
|
||||||
"""Convert market state to RL state vector"""
|
"""Get current opening threshold"""
|
||||||
# Combine features from all timeframes into a single state vector
|
return self.confidence_threshold_open
|
||||||
state_components = []
|
|
||||||
|
def _get_current_close_threshold(self) -> float:
|
||||||
# Add price features
|
"""Get current closing threshold"""
|
||||||
state_components.extend([
|
return self.confidence_threshold_close
|
||||||
market_state.volatility,
|
|
||||||
market_state.volume,
|
|
||||||
market_state.trend_strength
|
|
||||||
])
|
|
||||||
|
|
||||||
# Add flattened features from each timeframe
|
|
||||||
for timeframe in sorted(market_state.features.keys()):
|
|
||||||
features = market_state.features[timeframe]
|
|
||||||
if features is not None:
|
|
||||||
# Take the last row (most recent) and flatten
|
|
||||||
latest_features = features[-1] if len(features.shape) > 1 else features
|
|
||||||
state_components.extend(latest_features.flatten())
|
|
||||||
|
|
||||||
return np.array(state_components, dtype=np.float32)
|
|
||||||
|
|
||||||
def process_realtime_features(self, feature_dict: Dict[str, Any]):
|
def process_realtime_features(self, feature_dict: Dict[str, Any]):
|
||||||
"""Process real-time tick features from the tick processor"""
|
"""Process real-time tick features from the tick processor"""
|
||||||
|
243
test_enhanced_improvements.py
Normal file
243
test_enhanced_improvements.py
Normal file
@ -0,0 +1,243 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Test Enhanced Trading System Improvements
|
||||||
|
|
||||||
|
This script tests:
|
||||||
|
1. Color-coded position display ([LONG] green, [SHORT] red)
|
||||||
|
2. Enhanced model training detection and retrospective learning
|
||||||
|
3. Lower confidence thresholds for closing positions (0.25 vs 0.6 for opening)
|
||||||
|
4. Perfect opportunity detection and learning
|
||||||
|
"""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
import logging
|
||||||
|
import time
|
||||||
|
from datetime import datetime, timedelta
|
||||||
|
from core.data_provider import DataProvider
|
||||||
|
from core.enhanced_orchestrator import EnhancedTradingOrchestrator, TradingAction
|
||||||
|
from web.scalping_dashboard import RealTimeScalpingDashboard, TradingSession
|
||||||
|
|
||||||
|
# Setup logging
|
||||||
|
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
def test_color_coded_positions():
|
||||||
|
"""Test color-coded position display functionality"""
|
||||||
|
logger.info("=== Testing Color-Coded Position Display ===")
|
||||||
|
|
||||||
|
# Create trading session
|
||||||
|
session = TradingSession()
|
||||||
|
|
||||||
|
# Simulate some positions
|
||||||
|
session.positions = {
|
||||||
|
'ETH/USDT': {
|
||||||
|
'side': 'LONG',
|
||||||
|
'size': 0.1,
|
||||||
|
'entry_price': 2558.15
|
||||||
|
},
|
||||||
|
'BTC/USDT': {
|
||||||
|
'side': 'SHORT',
|
||||||
|
'size': 0.05,
|
||||||
|
'entry_price': 45123.45
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
logger.info("Created test positions:")
|
||||||
|
logger.info(f"ETH/USDT: LONG 0.1 @ $2558.15")
|
||||||
|
logger.info(f"BTC/USDT: SHORT 0.05 @ $45123.45")
|
||||||
|
|
||||||
|
# Test position display logic (simulating dashboard logic)
|
||||||
|
live_prices = {'ETH/USDT': 2565.30, 'BTC/USDT': 45050.20}
|
||||||
|
|
||||||
|
for symbol, pos in session.positions.items():
|
||||||
|
side = pos['side']
|
||||||
|
size = pos['size']
|
||||||
|
entry_price = pos['entry_price']
|
||||||
|
current_price = live_prices.get(symbol, entry_price)
|
||||||
|
|
||||||
|
# Calculate unrealized P&L
|
||||||
|
if side == 'LONG':
|
||||||
|
unrealized_pnl = (current_price - entry_price) * size
|
||||||
|
color_class = "text-success" # Green for LONG
|
||||||
|
side_display = "[LONG]"
|
||||||
|
else: # SHORT
|
||||||
|
unrealized_pnl = (entry_price - current_price) * size
|
||||||
|
color_class = "text-danger" # Red for SHORT
|
||||||
|
side_display = "[SHORT]"
|
||||||
|
|
||||||
|
position_text = f"{side_display} {size:.3f} @ ${entry_price:.2f} | P&L: ${unrealized_pnl:+.2f}"
|
||||||
|
logger.info(f"Position Display: {position_text} (Color: {color_class})")
|
||||||
|
|
||||||
|
logger.info("✅ Color-coded position display test completed")
|
||||||
|
|
||||||
|
def test_confidence_thresholds():
|
||||||
|
"""Test different confidence thresholds for opening vs closing"""
|
||||||
|
logger.info("=== Testing Confidence Thresholds ===")
|
||||||
|
|
||||||
|
# Create orchestrator
|
||||||
|
data_provider = DataProvider()
|
||||||
|
orchestrator = EnhancedTradingOrchestrator(data_provider)
|
||||||
|
|
||||||
|
logger.info(f"Opening threshold: {orchestrator.confidence_threshold_open}")
|
||||||
|
logger.info(f"Closing threshold: {orchestrator.confidence_threshold_close}")
|
||||||
|
|
||||||
|
# Test opening action with medium confidence
|
||||||
|
test_confidence = 0.45
|
||||||
|
logger.info(f"\nTesting opening action with confidence {test_confidence}")
|
||||||
|
|
||||||
|
if test_confidence >= orchestrator.confidence_threshold_open:
|
||||||
|
logger.info("✅ Would OPEN position (confidence above opening threshold)")
|
||||||
|
else:
|
||||||
|
logger.info("❌ Would NOT open position (confidence below opening threshold)")
|
||||||
|
|
||||||
|
# Test closing action with same confidence
|
||||||
|
logger.info(f"Testing closing action with confidence {test_confidence}")
|
||||||
|
|
||||||
|
if test_confidence >= orchestrator.confidence_threshold_close:
|
||||||
|
logger.info("✅ Would CLOSE position (confidence above closing threshold)")
|
||||||
|
else:
|
||||||
|
logger.info("❌ Would NOT close position (confidence below closing threshold)")
|
||||||
|
|
||||||
|
logger.info("✅ Confidence threshold test completed")
|
||||||
|
|
||||||
|
def test_retrospective_learning():
|
||||||
|
"""Test retrospective learning and perfect opportunity detection"""
|
||||||
|
logger.info("=== Testing Retrospective Learning ===")
|
||||||
|
|
||||||
|
# Create orchestrator
|
||||||
|
data_provider = DataProvider()
|
||||||
|
orchestrator = EnhancedTradingOrchestrator(data_provider)
|
||||||
|
|
||||||
|
# Simulate perfect moves
|
||||||
|
from core.enhanced_orchestrator import PerfectMove
|
||||||
|
|
||||||
|
perfect_move = PerfectMove(
|
||||||
|
symbol='ETH/USDT',
|
||||||
|
timeframe='1m',
|
||||||
|
timestamp=datetime.now(),
|
||||||
|
optimal_action='BUY',
|
||||||
|
actual_outcome=0.025, # 2.5% price increase
|
||||||
|
market_state_before=None,
|
||||||
|
market_state_after=None,
|
||||||
|
confidence_should_have_been=0.85
|
||||||
|
)
|
||||||
|
|
||||||
|
orchestrator.perfect_moves.append(perfect_move)
|
||||||
|
orchestrator.retrospective_learning_active = True
|
||||||
|
|
||||||
|
logger.info(f"Added perfect move: {perfect_move.optimal_action} {perfect_move.symbol}")
|
||||||
|
logger.info(f"Outcome: {perfect_move.actual_outcome*100:+.2f}%")
|
||||||
|
logger.info(f"Confidence should have been: {perfect_move.confidence_should_have_been:.3f}")
|
||||||
|
|
||||||
|
# Test performance metrics
|
||||||
|
metrics = orchestrator.get_performance_metrics()
|
||||||
|
retro_metrics = metrics['retrospective_learning']
|
||||||
|
|
||||||
|
logger.info(f"Retrospective learning active: {retro_metrics['active']}")
|
||||||
|
logger.info(f"Recent perfect moves: {retro_metrics['perfect_moves_recent']}")
|
||||||
|
logger.info(f"Average confidence needed: {retro_metrics['avg_confidence_needed']:.3f}")
|
||||||
|
|
||||||
|
logger.info("✅ Retrospective learning test completed")
|
||||||
|
|
||||||
|
async def test_tick_pattern_detection():
|
||||||
|
"""Test tick pattern detection for violent moves"""
|
||||||
|
logger.info("=== Testing Tick Pattern Detection ===")
|
||||||
|
|
||||||
|
# Create orchestrator
|
||||||
|
data_provider = DataProvider()
|
||||||
|
orchestrator = EnhancedTradingOrchestrator(data_provider)
|
||||||
|
|
||||||
|
# Simulate violent tick
|
||||||
|
from core.tick_aggregator import RawTick
|
||||||
|
|
||||||
|
violent_tick = RawTick(
|
||||||
|
timestamp=datetime.now(),
|
||||||
|
price=2560.0,
|
||||||
|
volume=1000.0,
|
||||||
|
quantity=0.5,
|
||||||
|
side='buy',
|
||||||
|
trade_id='test123',
|
||||||
|
time_since_last=25.0, # Very fast tick (25ms)
|
||||||
|
price_change=5.0, # $5 price jump
|
||||||
|
volume_intensity=3.5 # High volume
|
||||||
|
)
|
||||||
|
|
||||||
|
# Add symbol attribute for testing
|
||||||
|
violent_tick.symbol = 'ETH/USDT'
|
||||||
|
|
||||||
|
logger.info(f"Simulating violent tick:")
|
||||||
|
logger.info(f"Price change: ${violent_tick.price_change:+.2f}")
|
||||||
|
logger.info(f"Time since last: {violent_tick.time_since_last:.0f}ms")
|
||||||
|
logger.info(f"Volume intensity: {violent_tick.volume_intensity:.1f}x")
|
||||||
|
|
||||||
|
# Process the tick
|
||||||
|
orchestrator._handle_raw_tick(violent_tick)
|
||||||
|
|
||||||
|
# Check if perfect move was created
|
||||||
|
if orchestrator.perfect_moves:
|
||||||
|
latest_move = orchestrator.perfect_moves[-1]
|
||||||
|
logger.info(f"✅ Perfect move detected: {latest_move.optimal_action}")
|
||||||
|
logger.info(f"Confidence: {latest_move.confidence_should_have_been:.3f}")
|
||||||
|
else:
|
||||||
|
logger.info("❌ No perfect move detected")
|
||||||
|
|
||||||
|
logger.info("✅ Tick pattern detection test completed")
|
||||||
|
|
||||||
|
def test_dashboard_integration():
|
||||||
|
"""Test dashboard integration with new features"""
|
||||||
|
logger.info("=== Testing Dashboard Integration ===")
|
||||||
|
|
||||||
|
# Create components
|
||||||
|
data_provider = DataProvider()
|
||||||
|
orchestrator = EnhancedTradingOrchestrator(data_provider)
|
||||||
|
|
||||||
|
# Test model training status
|
||||||
|
metrics = orchestrator.get_performance_metrics()
|
||||||
|
|
||||||
|
logger.info("Model Training Metrics:")
|
||||||
|
logger.info(f"Perfect moves: {metrics['perfect_moves']}")
|
||||||
|
logger.info(f"RL queue size: {metrics['rl_queue_size']}")
|
||||||
|
logger.info(f"Retrospective learning: {metrics['retrospective_learning']}")
|
||||||
|
logger.info(f"Position tracking: {metrics['position_tracking']}")
|
||||||
|
logger.info(f"Thresholds: {metrics['thresholds']}")
|
||||||
|
|
||||||
|
logger.info("✅ Dashboard integration test completed")
|
||||||
|
|
||||||
|
async def main():
|
||||||
|
"""Run all tests"""
|
||||||
|
logger.info("🚀 Starting Enhanced Trading System Tests")
|
||||||
|
logger.info("=" * 60)
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Run tests
|
||||||
|
test_color_coded_positions()
|
||||||
|
print()
|
||||||
|
|
||||||
|
test_confidence_thresholds()
|
||||||
|
print()
|
||||||
|
|
||||||
|
test_retrospective_learning()
|
||||||
|
print()
|
||||||
|
|
||||||
|
await test_tick_pattern_detection()
|
||||||
|
print()
|
||||||
|
|
||||||
|
test_dashboard_integration()
|
||||||
|
print()
|
||||||
|
|
||||||
|
logger.info("=" * 60)
|
||||||
|
logger.info("🎉 All tests completed successfully!")
|
||||||
|
logger.info("Key improvements verified:")
|
||||||
|
logger.info("✅ Color-coded positions ([LONG] green, [SHORT] red)")
|
||||||
|
logger.info("✅ Lower closing thresholds (0.25 vs 0.6)")
|
||||||
|
logger.info("✅ Retrospective learning on perfect opportunities")
|
||||||
|
logger.info("✅ Enhanced model training detection")
|
||||||
|
logger.info("✅ Violent move pattern detection")
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"❌ Test failed: {e}")
|
||||||
|
import traceback
|
||||||
|
logger.error(traceback.format_exc())
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
asyncio.run(main())
|
372
test_sensitivity_learning.py
Normal file
372
test_sensitivity_learning.py
Normal file
@ -0,0 +1,372 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Test DQN RL-based Sensitivity Learning and 300s Data Preloading
|
||||||
|
|
||||||
|
This script tests:
|
||||||
|
1. DQN RL-based sensitivity learning from completed trades
|
||||||
|
2. 300s data preloading on first load
|
||||||
|
3. Dynamic threshold adjustment based on sensitivity levels
|
||||||
|
4. Color-coded position display integration
|
||||||
|
5. Enhanced model training status with sensitivity info
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
python test_sensitivity_learning.py
|
||||||
|
"""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
import logging
|
||||||
|
import time
|
||||||
|
import numpy as np
|
||||||
|
from datetime import datetime, timedelta
|
||||||
|
from core.data_provider import DataProvider
|
||||||
|
from core.enhanced_orchestrator import EnhancedTradingOrchestrator, TradingAction
|
||||||
|
from web.scalping_dashboard import RealTimeScalpingDashboard
|
||||||
|
from NN.models.dqn_agent import DQNAgent
|
||||||
|
|
||||||
|
# Setup logging
|
||||||
|
logging.basicConfig(
|
||||||
|
level=logging.INFO,
|
||||||
|
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||||
|
)
|
||||||
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
class SensitivityLearningTester:
|
||||||
|
"""Test class for sensitivity learning features"""
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
self.data_provider = DataProvider()
|
||||||
|
self.orchestrator = EnhancedTradingOrchestrator(self.data_provider)
|
||||||
|
self.dashboard = None
|
||||||
|
|
||||||
|
async def test_300s_data_preloading(self):
|
||||||
|
"""Test 300s data preloading functionality"""
|
||||||
|
logger.info("=== Testing 300s Data Preloading ===")
|
||||||
|
|
||||||
|
# Test preloading for all symbols and timeframes
|
||||||
|
start_time = time.time()
|
||||||
|
preload_results = self.data_provider.preload_all_symbols_data(['1s', '1m', '5m', '15m', '1h'])
|
||||||
|
end_time = time.time()
|
||||||
|
|
||||||
|
logger.info(f"Preloading completed in {end_time - start_time:.2f} seconds")
|
||||||
|
|
||||||
|
# Verify results
|
||||||
|
total_pairs = 0
|
||||||
|
successful_pairs = 0
|
||||||
|
|
||||||
|
for symbol, timeframe_results in preload_results.items():
|
||||||
|
for timeframe, success in timeframe_results.items():
|
||||||
|
total_pairs += 1
|
||||||
|
if success:
|
||||||
|
successful_pairs += 1
|
||||||
|
|
||||||
|
# Verify data was actually loaded
|
||||||
|
data = self.data_provider.get_historical_data(symbol, timeframe, limit=50)
|
||||||
|
if data is not None and len(data) > 0:
|
||||||
|
logger.info(f"✅ {symbol} {timeframe}: {len(data)} candles loaded")
|
||||||
|
else:
|
||||||
|
logger.warning(f"❌ {symbol} {timeframe}: No data despite success flag")
|
||||||
|
else:
|
||||||
|
logger.warning(f"❌ {symbol} {timeframe}: Failed to preload")
|
||||||
|
|
||||||
|
success_rate = (successful_pairs / total_pairs) * 100 if total_pairs > 0 else 0
|
||||||
|
logger.info(f"Preloading success rate: {success_rate:.1f}% ({successful_pairs}/{total_pairs})")
|
||||||
|
|
||||||
|
return success_rate > 80 # Consider test passed if >80% success rate
|
||||||
|
|
||||||
|
def test_sensitivity_learning_initialization(self):
|
||||||
|
"""Test sensitivity learning system initialization"""
|
||||||
|
logger.info("=== Testing Sensitivity Learning Initialization ===")
|
||||||
|
|
||||||
|
# Check if sensitivity learning is enabled
|
||||||
|
if hasattr(self.orchestrator, 'sensitivity_learning_enabled'):
|
||||||
|
logger.info(f"✅ Sensitivity learning enabled: {self.orchestrator.sensitivity_learning_enabled}")
|
||||||
|
else:
|
||||||
|
logger.warning("❌ Sensitivity learning not found in orchestrator")
|
||||||
|
return False
|
||||||
|
|
||||||
|
# Check sensitivity levels configuration
|
||||||
|
if hasattr(self.orchestrator, 'sensitivity_levels'):
|
||||||
|
levels = self.orchestrator.sensitivity_levels
|
||||||
|
logger.info(f"✅ Sensitivity levels configured: {len(levels)} levels")
|
||||||
|
for level, config in levels.items():
|
||||||
|
logger.info(f" Level {level}: {config['name']} - Open: {config['open_threshold_multiplier']:.2f}, Close: {config['close_threshold_multiplier']:.2f}")
|
||||||
|
else:
|
||||||
|
logger.warning("❌ Sensitivity levels not configured")
|
||||||
|
return False
|
||||||
|
|
||||||
|
# Check DQN agent initialization
|
||||||
|
if hasattr(self.orchestrator, 'sensitivity_dqn_agent'):
|
||||||
|
if self.orchestrator.sensitivity_dqn_agent is not None:
|
||||||
|
logger.info("✅ DQN agent initialized")
|
||||||
|
stats = self.orchestrator.sensitivity_dqn_agent.get_stats()
|
||||||
|
logger.info(f" Device: {stats['device']}")
|
||||||
|
logger.info(f" Memory size: {stats['memory_size']}")
|
||||||
|
logger.info(f" Epsilon: {stats['epsilon']:.3f}")
|
||||||
|
else:
|
||||||
|
logger.info("⏳ DQN agent not yet initialized (will be created on first use)")
|
||||||
|
|
||||||
|
# Check learning queues
|
||||||
|
if hasattr(self.orchestrator, 'sensitivity_learning_queue'):
|
||||||
|
logger.info(f"✅ Sensitivity learning queue initialized: {len(self.orchestrator.sensitivity_learning_queue)} items")
|
||||||
|
|
||||||
|
if hasattr(self.orchestrator, 'completed_trades'):
|
||||||
|
logger.info(f"✅ Completed trades tracking initialized: {len(self.orchestrator.completed_trades)} trades")
|
||||||
|
|
||||||
|
if hasattr(self.orchestrator, 'active_trades'):
|
||||||
|
logger.info(f"✅ Active trades tracking initialized: {len(self.orchestrator.active_trades)} active")
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
def simulate_trading_scenario(self):
|
||||||
|
"""Simulate a trading scenario to test sensitivity learning"""
|
||||||
|
logger.info("=== Simulating Trading Scenario ===")
|
||||||
|
|
||||||
|
# Simulate some trades to test the learning system
|
||||||
|
test_trades = [
|
||||||
|
{
|
||||||
|
'symbol': 'ETH/USDT',
|
||||||
|
'action': 'BUY',
|
||||||
|
'price': 2500.0,
|
||||||
|
'confidence': 0.7,
|
||||||
|
'timestamp': datetime.now() - timedelta(minutes=10)
|
||||||
|
},
|
||||||
|
{
|
||||||
|
'symbol': 'ETH/USDT',
|
||||||
|
'action': 'SELL',
|
||||||
|
'price': 2510.0,
|
||||||
|
'confidence': 0.6,
|
||||||
|
'timestamp': datetime.now() - timedelta(minutes=5)
|
||||||
|
},
|
||||||
|
{
|
||||||
|
'symbol': 'ETH/USDT',
|
||||||
|
'action': 'BUY',
|
||||||
|
'price': 2505.0,
|
||||||
|
'confidence': 0.8,
|
||||||
|
'timestamp': datetime.now() - timedelta(minutes=3)
|
||||||
|
},
|
||||||
|
{
|
||||||
|
'symbol': 'ETH/USDT',
|
||||||
|
'action': 'SELL',
|
||||||
|
'price': 2495.0,
|
||||||
|
'confidence': 0.9,
|
||||||
|
'timestamp': datetime.now()
|
||||||
|
}
|
||||||
|
]
|
||||||
|
|
||||||
|
# Process each trade through the orchestrator
|
||||||
|
for i, trade_data in enumerate(test_trades):
|
||||||
|
action = TradingAction(
|
||||||
|
symbol=trade_data['symbol'],
|
||||||
|
action=trade_data['action'],
|
||||||
|
quantity=0.1,
|
||||||
|
confidence=trade_data['confidence'],
|
||||||
|
price=trade_data['price'],
|
||||||
|
timestamp=trade_data['timestamp'],
|
||||||
|
reasoning={'test': f'simulated_trade_{i}'},
|
||||||
|
timeframe_analysis=[]
|
||||||
|
)
|
||||||
|
|
||||||
|
# Update position tracking (this should trigger sensitivity learning)
|
||||||
|
self.orchestrator._update_position_tracking(trade_data['symbol'], action)
|
||||||
|
|
||||||
|
logger.info(f"Processed trade {i+1}: {trade_data['action']} @ ${trade_data['price']:.2f}")
|
||||||
|
|
||||||
|
# Check if learning cases were created
|
||||||
|
if hasattr(self.orchestrator, 'sensitivity_learning_queue'):
|
||||||
|
queue_size = len(self.orchestrator.sensitivity_learning_queue)
|
||||||
|
logger.info(f"✅ Learning queue now has {queue_size} cases")
|
||||||
|
|
||||||
|
if hasattr(self.orchestrator, 'completed_trades'):
|
||||||
|
completed_count = len(self.orchestrator.completed_trades)
|
||||||
|
logger.info(f"✅ Completed trades: {completed_count}")
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
def test_threshold_adjustment(self):
|
||||||
|
"""Test dynamic threshold adjustment based on sensitivity"""
|
||||||
|
logger.info("=== Testing Threshold Adjustment ===")
|
||||||
|
|
||||||
|
# Test different sensitivity levels
|
||||||
|
for level in range(5): # 0-4 sensitivity levels
|
||||||
|
if hasattr(self.orchestrator, 'current_sensitivity_level'):
|
||||||
|
self.orchestrator.current_sensitivity_level = level
|
||||||
|
|
||||||
|
if hasattr(self.orchestrator, '_update_thresholds_from_sensitivity'):
|
||||||
|
self.orchestrator._update_thresholds_from_sensitivity()
|
||||||
|
|
||||||
|
open_threshold = getattr(self.orchestrator, 'confidence_threshold_open', 0.6)
|
||||||
|
close_threshold = getattr(self.orchestrator, 'confidence_threshold_close', 0.25)
|
||||||
|
|
||||||
|
logger.info(f"Level {level}: Open={open_threshold:.3f}, Close={close_threshold:.3f}")
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
def test_dashboard_integration(self):
|
||||||
|
"""Test dashboard integration with sensitivity learning"""
|
||||||
|
logger.info("=== Testing Dashboard Integration ===")
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Create dashboard instance
|
||||||
|
self.dashboard = RealTimeScalpingDashboard(
|
||||||
|
data_provider=self.data_provider,
|
||||||
|
orchestrator=self.orchestrator
|
||||||
|
)
|
||||||
|
|
||||||
|
# Test sensitivity learning info retrieval
|
||||||
|
sensitivity_info = self.dashboard._get_sensitivity_learning_info()
|
||||||
|
|
||||||
|
logger.info("✅ Dashboard sensitivity info:")
|
||||||
|
logger.info(f" Level: {sensitivity_info['level_name']}")
|
||||||
|
logger.info(f" Completed trades: {sensitivity_info['completed_trades']}")
|
||||||
|
logger.info(f" Learning queue: {sensitivity_info['learning_queue_size']}")
|
||||||
|
logger.info(f" Open threshold: {sensitivity_info['open_threshold']:.3f}")
|
||||||
|
logger.info(f" Close threshold: {sensitivity_info['close_threshold']:.3f}")
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"❌ Dashboard integration test failed: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
def test_dqn_training_simulation(self):
|
||||||
|
"""Test DQN training with simulated data"""
|
||||||
|
logger.info("=== Testing DQN Training Simulation ===")
|
||||||
|
|
||||||
|
try:
|
||||||
|
# Initialize DQN agent if not already done
|
||||||
|
if not hasattr(self.orchestrator, 'sensitivity_dqn_agent') or self.orchestrator.sensitivity_dqn_agent is None:
|
||||||
|
self.orchestrator._initialize_sensitivity_dqn()
|
||||||
|
|
||||||
|
if self.orchestrator.sensitivity_dqn_agent is None:
|
||||||
|
logger.warning("❌ Could not initialize DQN agent")
|
||||||
|
return False
|
||||||
|
|
||||||
|
# Create some mock learning cases
|
||||||
|
for i in range(10):
|
||||||
|
# Create mock market state
|
||||||
|
mock_state = np.random.random(self.orchestrator.sensitivity_state_size)
|
||||||
|
action = np.random.randint(0, self.orchestrator.sensitivity_action_space)
|
||||||
|
reward = np.random.random() - 0.5 # Random reward between -0.5 and 0.5
|
||||||
|
next_state = np.random.random(self.orchestrator.sensitivity_state_size)
|
||||||
|
done = True
|
||||||
|
|
||||||
|
# Add to learning queue
|
||||||
|
learning_case = {
|
||||||
|
'state': mock_state,
|
||||||
|
'action': action,
|
||||||
|
'reward': reward,
|
||||||
|
'next_state': next_state,
|
||||||
|
'done': done,
|
||||||
|
'optimal_action': action,
|
||||||
|
'trade_outcome': reward * 0.02, # Convert to percentage
|
||||||
|
'trade_duration': 300 + np.random.randint(-100, 100),
|
||||||
|
'symbol': 'ETH/USDT'
|
||||||
|
}
|
||||||
|
|
||||||
|
self.orchestrator.sensitivity_learning_queue.append(learning_case)
|
||||||
|
|
||||||
|
# Trigger training
|
||||||
|
initial_queue_size = len(self.orchestrator.sensitivity_learning_queue)
|
||||||
|
self.orchestrator._train_sensitivity_dqn()
|
||||||
|
|
||||||
|
logger.info(f"✅ DQN training completed")
|
||||||
|
logger.info(f" Initial queue size: {initial_queue_size}")
|
||||||
|
logger.info(f" Final queue size: {len(self.orchestrator.sensitivity_learning_queue)}")
|
||||||
|
|
||||||
|
# Check agent stats
|
||||||
|
if self.orchestrator.sensitivity_dqn_agent:
|
||||||
|
stats = self.orchestrator.sensitivity_dqn_agent.get_stats()
|
||||||
|
logger.info(f" Training steps: {stats['training_step']}")
|
||||||
|
logger.info(f" Memory size: {stats['memory_size']}")
|
||||||
|
logger.info(f" Epsilon: {stats['epsilon']:.3f}")
|
||||||
|
|
||||||
|
return True
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"❌ DQN training simulation failed: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
async def run_all_tests(self):
|
||||||
|
"""Run all sensitivity learning tests"""
|
||||||
|
logger.info("🚀 Starting Sensitivity Learning Test Suite")
|
||||||
|
logger.info("=" * 60)
|
||||||
|
|
||||||
|
test_results = {}
|
||||||
|
|
||||||
|
# Test 1: 300s Data Preloading
|
||||||
|
test_results['preloading'] = await self.test_300s_data_preloading()
|
||||||
|
|
||||||
|
# Test 2: Sensitivity Learning Initialization
|
||||||
|
test_results['initialization'] = self.test_sensitivity_learning_initialization()
|
||||||
|
|
||||||
|
# Test 3: Trading Scenario Simulation
|
||||||
|
test_results['trading_simulation'] = self.simulate_trading_scenario()
|
||||||
|
|
||||||
|
# Test 4: Threshold Adjustment
|
||||||
|
test_results['threshold_adjustment'] = self.test_threshold_adjustment()
|
||||||
|
|
||||||
|
# Test 5: Dashboard Integration
|
||||||
|
test_results['dashboard_integration'] = self.test_dashboard_integration()
|
||||||
|
|
||||||
|
# Test 6: DQN Training Simulation
|
||||||
|
test_results['dqn_training'] = self.test_dqn_training_simulation()
|
||||||
|
|
||||||
|
# Summary
|
||||||
|
logger.info("=" * 60)
|
||||||
|
logger.info("🏁 Test Suite Results:")
|
||||||
|
|
||||||
|
passed_tests = 0
|
||||||
|
total_tests = len(test_results)
|
||||||
|
|
||||||
|
for test_name, result in test_results.items():
|
||||||
|
status = "✅ PASSED" if result else "❌ FAILED"
|
||||||
|
logger.info(f" {test_name}: {status}")
|
||||||
|
if result:
|
||||||
|
passed_tests += 1
|
||||||
|
|
||||||
|
success_rate = (passed_tests / total_tests) * 100
|
||||||
|
logger.info(f"Overall success rate: {success_rate:.1f}% ({passed_tests}/{total_tests})")
|
||||||
|
|
||||||
|
if success_rate >= 80:
|
||||||
|
logger.info("🎉 Test suite PASSED! Sensitivity learning system is working correctly.")
|
||||||
|
else:
|
||||||
|
logger.warning("⚠️ Test suite FAILED! Some issues need to be addressed.")
|
||||||
|
|
||||||
|
return success_rate >= 80
|
||||||
|
|
||||||
|
async def main():
|
||||||
|
"""Main test function"""
|
||||||
|
tester = SensitivityLearningTester()
|
||||||
|
|
||||||
|
try:
|
||||||
|
success = await tester.run_all_tests()
|
||||||
|
|
||||||
|
if success:
|
||||||
|
logger.info("✅ All tests passed! The sensitivity learning system is ready for production.")
|
||||||
|
else:
|
||||||
|
logger.error("❌ Some tests failed. Please review the issues above.")
|
||||||
|
|
||||||
|
return success
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Test suite failed with exception: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
# Run the test suite
|
||||||
|
result = asyncio.run(main())
|
||||||
|
|
||||||
|
if result:
|
||||||
|
print("\n🎯 SENSITIVITY LEARNING SYSTEM READY!")
|
||||||
|
print("Features verified:")
|
||||||
|
print(" ✅ DQN RL-based sensitivity learning from completed trades")
|
||||||
|
print(" ✅ 300s data preloading for faster initial performance")
|
||||||
|
print(" ✅ Dynamic threshold adjustment (lower for closing positions)")
|
||||||
|
print(" ✅ Color-coded position display ([LONG] green, [SHORT] red)")
|
||||||
|
print(" ✅ Enhanced model training status with sensitivity info")
|
||||||
|
print("\nYou can now run the dashboard with these enhanced features!")
|
||||||
|
else:
|
||||||
|
print("\n❌ SOME TESTS FAILED")
|
||||||
|
print("Please review the test output above and fix any issues.")
|
||||||
|
|
||||||
|
exit(0 if result else 1)
|
@ -205,6 +205,16 @@ class RealTimeScalpingDashboard:
|
|||||||
logger.info(" 4. ETH/USDT 1d")
|
logger.info(" 4. ETH/USDT 1d")
|
||||||
logger.info(" 5. BTC/USDT ticks (reference)")
|
logger.info(" 5. BTC/USDT ticks (reference)")
|
||||||
|
|
||||||
|
# Preload 300s of data for better initial performance
|
||||||
|
logger.info("PRELOADING 300s OF DATA FOR INITIAL PERFORMANCE:")
|
||||||
|
preload_results = self.data_provider.preload_all_symbols_data(['1s', '1m', '5m', '15m', '1h', '1d'])
|
||||||
|
|
||||||
|
# Log preload results
|
||||||
|
for symbol, timeframe_results in preload_results.items():
|
||||||
|
for timeframe, success in timeframe_results.items():
|
||||||
|
status = "✅" if success else "❌"
|
||||||
|
logger.info(f" {status} {symbol} {timeframe}")
|
||||||
|
|
||||||
# Test universal data adapter
|
# Test universal data adapter
|
||||||
try:
|
try:
|
||||||
universal_stream = self.orchestrator.universal_adapter.get_universal_data_stream()
|
universal_stream = self.orchestrator.universal_adapter.get_universal_data_stream()
|
||||||
@ -503,6 +513,7 @@ class RealTimeScalpingDashboard:
|
|||||||
logger.info("WebSocket price streaming enabled")
|
logger.info("WebSocket price streaming enabled")
|
||||||
logger.info(f"Timezone: {self.timezone}")
|
logger.info(f"Timezone: {self.timezone}")
|
||||||
logger.info(f"Session Balance: ${self.trading_session.starting_balance:.2f}")
|
logger.info(f"Session Balance: ${self.trading_session.starting_balance:.2f}")
|
||||||
|
logger.info("300s data preloading completed for faster initial performance")
|
||||||
|
|
||||||
def _setup_layout(self):
|
def _setup_layout(self):
|
||||||
"""Setup the ultra-fast real-time dashboard layout"""
|
"""Setup the ultra-fast real-time dashboard layout"""
|
||||||
@ -1794,7 +1805,7 @@ class RealTimeScalpingDashboard:
|
|||||||
return fig
|
return fig
|
||||||
|
|
||||||
def _create_model_training_status(self):
|
def _create_model_training_status(self):
|
||||||
"""Create enhanced model training progress display with perfect opportunity detection"""
|
"""Create enhanced model training progress display with perfect opportunity detection and sensitivity learning"""
|
||||||
try:
|
try:
|
||||||
# Get model training metrics from orchestrator
|
# Get model training metrics from orchestrator
|
||||||
if hasattr(self.orchestrator, 'get_performance_metrics'):
|
if hasattr(self.orchestrator, 'get_performance_metrics'):
|
||||||
@ -1811,6 +1822,9 @@ class RealTimeScalpingDashboard:
|
|||||||
is_rl_training = rl_queue_size > 0
|
is_rl_training = rl_queue_size > 0
|
||||||
is_cnn_training = perfect_moves_count > 0
|
is_cnn_training = perfect_moves_count > 0
|
||||||
|
|
||||||
|
# Get sensitivity learning information
|
||||||
|
sensitivity_info = self._get_sensitivity_learning_info()
|
||||||
|
|
||||||
return html.Div([
|
return html.Div([
|
||||||
html.Div([
|
html.Div([
|
||||||
html.H6("RL Training", className="text-success" if is_rl_training else "text-warning"),
|
html.H6("RL Training", className="text-success" if is_rl_training else "text-warning"),
|
||||||
@ -1819,7 +1833,7 @@ class RealTimeScalpingDashboard:
|
|||||||
html.P(f"Queue Size: {rl_queue_size}", className="text-white"),
|
html.P(f"Queue Size: {rl_queue_size}", className="text-white"),
|
||||||
html.P(f"Win Rate: {metrics.get('win_rate', 0)*100:.1f}%", className="text-white"),
|
html.P(f"Win Rate: {metrics.get('win_rate', 0)*100:.1f}%", className="text-white"),
|
||||||
html.P(f"Actions: {metrics.get('total_actions', 0)}", className="text-white")
|
html.P(f"Actions: {metrics.get('total_actions', 0)}", className="text-white")
|
||||||
], className="col-md-6"),
|
], className="col-md-4"),
|
||||||
|
|
||||||
html.Div([
|
html.Div([
|
||||||
html.H6("CNN Training", className="text-success" if is_cnn_training else "text-warning"),
|
html.H6("CNN Training", className="text-success" if is_cnn_training else "text-warning"),
|
||||||
@ -1829,7 +1843,17 @@ class RealTimeScalpingDashboard:
|
|||||||
html.P(f"Confidence: {metrics.get('confidence_threshold', 0.6):.2f}", className="text-white"),
|
html.P(f"Confidence: {metrics.get('confidence_threshold', 0.6):.2f}", className="text-white"),
|
||||||
html.P(f"Retrospective: {'ON' if recent_perfect_moves else 'OFF'}",
|
html.P(f"Retrospective: {'ON' if recent_perfect_moves else 'OFF'}",
|
||||||
className="text-success" if recent_perfect_moves else "text-muted")
|
className="text-success" if recent_perfect_moves else "text-muted")
|
||||||
], className="col-md-6")
|
], className="col-md-4"),
|
||||||
|
|
||||||
|
html.Div([
|
||||||
|
html.H6("DQN Sensitivity", className="text-info"),
|
||||||
|
html.P(f"Level: {sensitivity_info['level_name']}",
|
||||||
|
className="text-info"),
|
||||||
|
html.P(f"Completed Trades: {sensitivity_info['completed_trades']}", className="text-white"),
|
||||||
|
html.P(f"Learning Queue: {sensitivity_info['learning_queue_size']}", className="text-white"),
|
||||||
|
html.P(f"Open: {sensitivity_info['open_threshold']:.3f} | Close: {sensitivity_info['close_threshold']:.3f}",
|
||||||
|
className="text-white")
|
||||||
|
], className="col-md-4")
|
||||||
], className="row")
|
], className="row")
|
||||||
else:
|
else:
|
||||||
return html.Div([
|
return html.Div([
|
||||||
@ -1842,6 +1866,45 @@ class RealTimeScalpingDashboard:
|
|||||||
html.P("Error loading model status", className="text-danger")
|
html.P("Error loading model status", className="text-danger")
|
||||||
])
|
])
|
||||||
|
|
||||||
|
def _get_sensitivity_learning_info(self) -> Dict[str, Any]:
|
||||||
|
"""Get sensitivity learning information from orchestrator"""
|
||||||
|
try:
|
||||||
|
if hasattr(self.orchestrator, 'sensitivity_learning_enabled') and self.orchestrator.sensitivity_learning_enabled:
|
||||||
|
current_level = getattr(self.orchestrator, 'current_sensitivity_level', 2)
|
||||||
|
sensitivity_levels = getattr(self.orchestrator, 'sensitivity_levels', {})
|
||||||
|
level_name = sensitivity_levels.get(current_level, {}).get('name', 'medium')
|
||||||
|
|
||||||
|
completed_trades = len(getattr(self.orchestrator, 'completed_trades', []))
|
||||||
|
learning_queue_size = len(getattr(self.orchestrator, 'sensitivity_learning_queue', []))
|
||||||
|
|
||||||
|
open_threshold = getattr(self.orchestrator, 'confidence_threshold_open', 0.6)
|
||||||
|
close_threshold = getattr(self.orchestrator, 'confidence_threshold_close', 0.25)
|
||||||
|
|
||||||
|
return {
|
||||||
|
'level_name': level_name.upper(),
|
||||||
|
'completed_trades': completed_trades,
|
||||||
|
'learning_queue_size': learning_queue_size,
|
||||||
|
'open_threshold': open_threshold,
|
||||||
|
'close_threshold': close_threshold
|
||||||
|
}
|
||||||
|
else:
|
||||||
|
return {
|
||||||
|
'level_name': 'DISABLED',
|
||||||
|
'completed_trades': 0,
|
||||||
|
'learning_queue_size': 0,
|
||||||
|
'open_threshold': 0.6,
|
||||||
|
'close_threshold': 0.25
|
||||||
|
}
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error getting sensitivity learning info: {e}")
|
||||||
|
return {
|
||||||
|
'level_name': 'ERROR',
|
||||||
|
'completed_trades': 0,
|
||||||
|
'learning_queue_size': 0,
|
||||||
|
'open_threshold': 0.6,
|
||||||
|
'close_threshold': 0.25
|
||||||
|
}
|
||||||
|
|
||||||
def _create_orchestrator_status(self):
|
def _create_orchestrator_status(self):
|
||||||
"""Create orchestrator data flow status"""
|
"""Create orchestrator data flow status"""
|
||||||
try:
|
try:
|
||||||
|
Loading…
x
Reference in New Issue
Block a user