models cleanup

This commit is contained in:
Dobromir Popov 2025-05-30 03:20:05 +03:00
parent 75dbac1761
commit 2a148b0ac6
21 changed files with 937 additions and 570 deletions

View File

@@ -1,114 +1,137 @@
# Williams Market Structure CNN Integration Summary
# Williams CNN Pivot Integration - CORRECTED ARCHITECTURE
## 🎯 Overview
The Williams Market Structure has been enhanced with CNN-based pivot prediction capabilities, enabling real-time training and prediction at each detected pivot point using multi-timeframe, multi-symbol data.

## ✅ Key Features Implemented
### 🔄 **Recursive Pivot Structure**
- **Level 0**: Raw OHLCV price data → Swing points using multiple strengths [2, 3, 5, 8, 13]
- **Level 1**: Level 0 pivot points → Treated as "price bars" for higher-level pivots
- **Level 2-4**: Recursive application on previous level's pivots
- **True Recursion**: Each level builds on the previous level's pivot points

### 🧠 **CNN Integration Architecture**
```
Each Pivot Detection Triggers:
1. Train CNN on previous pivot (features) → current pivot (ground truth)
2. Predict next pivot using current pivot features
3. Store current features for next training cycle
```

## 🔄 **CORRECTED: SINGLE TIMEFRAME RECURSIVE APPROACH**
The Williams Market Structure implementation has been corrected to use **ONLY 1s timeframe data** with recursive analysis, not multi-timeframe analysis.

### **RECURSIVE STRUCTURE (CORRECTED)**
```
Input: 1s OHLCV Data (from real-time data stream)
Level 0: 1s OHLCV → Swing Points (strength 2,3,5)
    ↓ (treat Level 0 swings as "price bars")
Level 1: Level 0 Swings → Higher-Level Swing Points
    ↓ (treat Level 1 swings as "price bars")
Level 2: Level 1 Swings → Even Higher-Level Swing Points
    ↓ (treat Level 2 swings as "price bars")
Level 3: Level 2 Swings → Top-Level Swing Points
    ↓ (treat Level 3 swings as "price bars")
Level 4: Level 3 Swings → Highest-Level Swing Points
```
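Here "strength" follows the common swing-point convention: a bar is a strength-N swing high when its high exceeds the highs of the N bars on each side (and symmetrically for lows). A minimal single-strength detector might look like the sketch below; this is an illustration, not the repo's actual `_find_swing_points_multi_strength` implementation:

```python
def find_swing_highs(bars, strength):
    """bars: rows of [timestamp, O, H, L, C, V]; returns (timestamp, high) tuples."""
    swing_highs = []
    for i in range(strength, len(bars) - strength):
        h = bars[i][2]  # column 2 = high
        left = all(h > bars[i - k][2] for k in range(1, strength + 1))
        right = all(h > bars[i + k][2] for k in range(1, strength + 1))
        if left and right:
            swing_highs.append((bars[i][0], h))
    return swing_highs
```

Running this with strengths 2, 3, and 5 and merging the results would yield the multi-strength Level 0 swings described above.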
### 📊 **Multi-Timeframe Input Features**
- **ETH Primary Symbol**:
- 900 x 1s bars with indicators (10 features)
- 900 x 1m bars with indicators (10 features)
- 900 x 1h bars with indicators (10 features)
- 5 minutes of tick-derived features (10 features)
- **BTC Reference Symbol**:
- 5 minutes of tick-derived features (4 features)
- **Pivot Context**: Recent pivot characteristics (3 features)
- **Chart Labels**: Symbol/timeframe identification (3 features)
- **Total**: 900 timesteps × 50 features
### 🎯 **Multi-Level Output Prediction**
- **10 Outputs Total**: 5 Williams levels × (type + price)
- Level 0-4: [swing_type (0=LOW, 1=HIGH), normalized_price]
- Allows prediction across all recursive levels simultaneously

### 📐 **Smart Normalization Strategy**
- **Data Flow**: Keep actual values throughout the pipeline for validation
- **Final Step**: Normalize using the 1h timeframe min/max range
- **Cross-Timeframe Preservation**: Maintains relationships between different timeframes
- **Price Features**: Normalized with the 1h range
- **Non-Price Features**: Feature-wise normalization (indicators, counts, etc.)

## 🔧 **Integration with TrainingDataPacket**
Successfully leverages the existing `TrainingDataPacket` from `core/unified_data_stream.py` (shown in the usage example below).

### **HOW RECURSION WORKS**
1. **Level 0**: Apply swing detection (strength 2, 3, 5) to raw 1s OHLCV data
   - Input: 1000 x 1s bars → Output: ~50 swing points
2. **Level 1**: Convert Level 0 swing points to "price bars" format (see the sketch after this list)
   - Each swing point becomes: [timestamp, swing_price, swing_price, swing_price, swing_price, 0]
   - Apply swing detection to these 50 "price bars" → Output: ~10 swing points
3. **Level 2**: Convert Level 1 swing points to "price bars" format
   - Apply swing detection to these 10 "price bars" → Output: ~3 swing points
4. **Level 3**: Convert Level 2 swing points to "price bars" format
   - Apply swing detection to these 3 "price bars" → Output: ~1 swing point
5. **Level 4**: Convert Level 3 swing points to "price bars" format
   - Apply swing detection → Output: Final highest-level structure
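A minimal sketch of the whole recursion, assuming a `find_swing_points(bars, strengths)` helper like the detector above; the names are illustrative, not the actual `training.williams_market_structure` API:

```python
import numpy as np

def swings_to_price_bars(swings):
    """Each (timestamp, price) swing becomes a 'price bar' with OHLC = swing price."""
    return np.array([[ts, p, p, p, p, 0.0] for ts, p in swings])

def recursive_pivots(ohlcv_1s, find_swing_points, max_levels=5):
    levels = {}
    bars = ohlcv_1s  # Level 0 input: raw 1s OHLCV [ts, O, H, L, C, V]
    for level in range(max_levels):
        swings = find_swing_points(bars, strengths=[2, 3, 5])
        levels[f"level_{level}"] = swings
        if len(swings) < 5:
            break  # too few swing points to recurse further
        bars = swings_to_price_bars(swings)  # Level N swings become Level N+1 "bars"
    return levels
```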
### **KEY CLARIFICATIONS**
- **NOT Multi-Timeframe**: Williams does NOT use 1m, 1h, 4h data
- **Single Timeframe Recursive**: Uses ONLY 1s data, analyzed recursively
- **NOT Cross-Timeframe**: Different levels ≠ different timeframes
- **Fractal Analysis**: Different levels = different magnifications of the same 1s data
- **NOT Mixed Data Sources**: All levels use derivatives of the original 1s data
- **Pure Recursion**: Level N uses Level N-1 swing points as input
## 🧠 **CNN INTEGRATION (Multi-Timeframe Features)**
While Williams structure uses only 1s data recursively, the CNN features can still use multi-timeframe data for enhanced context:
### **CNN INPUT FEATURES (900 timesteps × 50 features)**
**ETH Features (40 features per timestep):**
- 1s bars with indicators (10 features)
- 1m bars with indicators (10 features)
- 1h bars with indicators (10 features)
- Tick-derived 1s features (10 features)
**BTC Reference (4 features per timestep):**
- Tick-derived correlation features
**Williams Pivot Features (3 features per timestep):**
- Current pivot characteristics from recursive analysis
- Level-specific trend information
- Structure break indicators
**Chart Labels (3 features per timestep):**
- Data source identification
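For concreteness, a sketch of how one timestep's 50-feature vector could be assembled from these groups; the helper name and argument layout are assumptions, not the repo's actual functions:

```python
import numpy as np

def assemble_timestep_features(eth_1s, eth_1m, eth_1h, eth_ticks,
                               btc_ticks, pivot_ctx, chart_labels):
    # 10 + 10 + 10 + 10 (ETH) + 4 (BTC) + 3 (pivot) + 3 (labels) = 50 features
    vec = np.concatenate([eth_1s, eth_1m, eth_1h, eth_ticks,
                          btc_ticks, pivot_ctx, chart_labels])
    assert vec.shape == (50,)
    return vec

# Stacking 900 such vectors produces the (900, 50) CNN input window.
```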
### **CNN PREDICTION OUTPUT (10 values)**
For each newly detected pivot, predict next pivot for all 5 levels:
- Level 0: [type (0=LOW, 1=HIGH), normalized_price]
- Level 1: [type, normalized_price]
- Level 2: [type, normalized_price]
- Level 3: [type, normalized_price]
- Level 4: [type, normalized_price]
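A minimal Keras-style sketch matching these shapes, assuming TensorFlow is installed; the layer choices are illustrative assumptions, not the project's actual architecture:

```python
import tensorflow as tf

def build_pivot_cnn(timesteps=900, features=50, outputs=10):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(timesteps, features)),
        tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(128, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(outputs),  # [type, normalized_price] for each of 5 levels
    ])
```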
### **NORMALIZATION STRATEGY**
- Use 1h timeframe min/max range for price normalization
- Preserves cross-timeframe relationships in CNN features
- Williams structure calculations remain in actual values
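A sketch of that final normalization step, assuming OHLCV arrays with columns [timestamp, open, high, low, close, volume]; the function name is hypothetical:

```python
import numpy as np

def normalize_prices_with_1h_range(price_features, ohlcv_1h):
    lo = ohlcv_1h[:, 3].min()  # column 3 = low
    hi = ohlcv_1h[:, 2].max()  # column 2 = high
    span = max(hi - lo, 1e-9)  # guard against a flat window
    return (price_features - lo) / span
```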
## 📊 **IMPLEMENTATION STATUS**
- **Williams Recursive Structure**: Correctly implemented using 1s data only
- **Swing Detection**: Multi-strength detection (2, 3, 5) at each level
- **Pivot Conversion**: Level N swings → Level N+1 "price bars"
- **CNN Framework**: Ready for training (disabled without TensorFlow)
- **Dashboard Integration**: Fixed configuration and error handling
- **Online Learning**: Single-epoch training at each new pivot
## 🚀 **USAGE EXAMPLE**
```python
@dataclass
class TrainingDataPacket:
    timestamp: datetime
    symbol: str
    tick_cache: List[Dict[str, Any]]                        # ✅ Used for tick features
    one_second_bars: List[Dict[str, Any]]                   # ✅ Used for 1s data
    multi_timeframe_data: Dict[str, List[Dict[str, Any]]]   # ✅ Used for 1m, 1h data
    cnn_features: Optional[Dict[str, np.ndarray]]           # ✅ Populated by Williams
    cnn_predictions: Optional[Dict[str, np.ndarray]]        # ✅ Populated by Williams

from training.williams_market_structure import WilliamsMarketStructure

# Initialize Williams with simplified strengths
williams = WilliamsMarketStructure(
    swing_strengths=[2, 3, 5],    # Applied to ALL levels recursively
    enable_cnn_feature=False,     # Disable CNN (no TensorFlow)
    training_data_provider=None
)

# Calculate recursive structure from 1s OHLCV data only
ohlcv_1s_data = get_1s_data()  # Shape: (N, 6) [timestamp, O, H, L, C, V]
structure_levels = williams.calculate_recursive_pivot_points(ohlcv_1s_data)

# Extract features for RL training (250 features total)
rl_features = williams.extract_features_for_rl(structure_levels)

# Results: 5 levels of recursive swing analysis from single 1s timeframe
for level_key, level_data in structure_levels.items():
    print(f"{level_key}: {len(level_data.swing_points)} swing points")
    print(f"  Trend: {level_data.trend_analysis.direction.value}")
    print(f"  Bias: {level_data.current_bias.value}")
```
## 🔧 **NEXT STEPS**
1. **Test Recursive Structure**: Verify each level builds correctly from the previous level
2. **Enable CNN Training**: Install TensorFlow for enhanced pivot prediction
3. **Validate Features**: Ensure RL features maintain cross-level relationships
4. **Monitor Performance**: Check that the dashboard shows correct pivot detection across levels

## 🚀 **CNN Training Flow**
### **At Each Pivot Point Detection:**
1. **Training Phase** (if previous pivot exists):
```python
X_train = previous_pivot_features   # (900, 50)
y_train = current_actual_pivot      # (10,) for all levels
model.fit(X_train, y_train, epochs=1)  # Online learning
```
2. **Prediction Phase**:
```python
X_predict = current_pivot_features    # (900, 50)
y_predict = model.predict(X_predict)  # (10,) predictions for all levels
```
3. **State Management**:
```python
previous_pivot_details = {
    'features': X_predict,
    'pivot': current_pivot_object
}
```
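Taken together, the per-pivot cycle might look like the following sketch; `model`, `extract_features`, and `encode_pivot` are stand-ins for the project's actual objects, assuming a Keras-style model:

```python
previous_pivot_details = None  # persists between pivot detections

def on_new_pivot(pivot, model, extract_features, encode_pivot):
    global previous_pivot_details
    features = extract_features(pivot)  # (900, 50)
    if previous_pivot_details is not None:
        # Train: previous pivot's features -> the pivot that actually followed it
        model.fit(previous_pivot_details["features"][None, ...],
                  encode_pivot(pivot)[None, ...], epochs=1, verbose=0)
    prediction = model.predict(features[None, ...])[0]  # (10,) next-pivot estimate
    previous_pivot_details = {"features": features, "pivot": pivot}
    return prediction
```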
## 🛠 **Implementation Status**
### ✅ **Completed Components**
- [x] Recursive Williams pivot calculation (5 levels)
- [x] CNN integration hooks at each pivot detection
- [x] Multi-timeframe feature extraction from TrainingDataPacket
- [x] 1h-based normalization strategy
- [x] Multi-level output prediction (10 outputs)
- [x] Online learning with single-step training
- [x] Dashboard integration with proper diagnostics
- [x] Comprehensive test suite
### ⚠ **Current Limitations**
- CNN disabled due to TensorFlow dependencies not installed
- Placeholder technical indicators (TODO: Add real SMA, EMA, RSI, MACD, etc.; see the indicator sketch after this list)
- Higher-level ground truth uses simplified logic (needs full Williams context)
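When real indicators are wired in, a pandas sketch like the following could replace the placeholders; the column names and integration point are assumptions:

```python
import pandas as pd

def add_basic_indicators(df: pd.DataFrame) -> pd.DataFrame:
    df["sma_20"] = df["close"].rolling(20).mean()
    df["ema_12"] = df["close"].ewm(span=12, adjust=False).mean()
    df["ema_26"] = df["close"].ewm(span=26, adjust=False).mean()
    df["macd"] = df["ema_12"] - df["ema_26"]
    delta = df["close"].diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    df["rsi_14"] = 100 - 100 / (1 + gain / loss)
    return df
```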
### 🔄 **Real-Time Dashboard Integration**
Fixed dashboard Williams integration:
- **Reduced data requirement**: 20 bars minimum (from 50)
- **Proper configuration**: Uses swing_strengths=[2, 3, 5]
- **Enhanced diagnostics**: Data quality validation and pivot detection logging
- **Consistent timezone handling**: Proper timestamp conversion for pivot display
This corrected architecture ensures Williams Market Structure follows Larry Williams' true methodology: recursive fractal analysis of market structure within a single timeframe, not cross-timeframe analysis.
## 📈 **Performance Characteristics**

285
cleanup_and_setup_models.py Normal file
View File

@@ -0,0 +1,285 @@
#!/usr/bin/env python3
"""
Model Cleanup and Training Setup Script

This script:
1. Backs up current models
2. Cleans old/conflicting models
3. Sets up proper training progression system
4. Initializes fresh model training
"""

import os
import shutil
import json
import logging
from datetime import datetime
from pathlib import Path
import torch

# Setup logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

class ModelCleanupManager:
    """Manager for cleaning up and organizing model files"""

    def __init__(self):
        self.root_dir = Path(".")
        self.models_dir = self.root_dir / "models"
        self.backup_dir = self.root_dir / "model_backups" / f"backup_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
        self.training_progress_file = self.models_dir / "training_progress.json"

        # Create backup directory
        self.backup_dir.mkdir(parents=True, exist_ok=True)
        logger.info(f"Created backup directory: {self.backup_dir}")

    def backup_existing_models(self):
        """Backup all existing models before cleanup"""
        logger.info("🔄 Backing up existing models...")

        model_files = [
            # CNN models
            "models/cnn_final_20250331_001817.pt.pt",
            "models/cnn_best.pt.pt",
            "models/cnn_BTC_USDT_*.pt",
            "models/cnn_BTC_USD_*.pt",
            # RL models
            "models/trading_agent_*.pt",
            "models/trading_agent_*.backup",
            # Other models
            "models/saved/cnn_model_best.pt"
        ]

        # Backup model files
        backup_count = 0
        for pattern in model_files:
            for file_path in self.root_dir.glob(pattern):
                if file_path.is_file():
                    backup_path = self.backup_dir / file_path.relative_to(self.root_dir)
                    backup_path.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(file_path, backup_path)
                    backup_count += 1
                    logger.info(f"  📁 Backed up: {file_path}")

        logger.info(f"✅ Backed up {backup_count} model files to {self.backup_dir}")

    def clean_old_models(self):
        """Remove old/conflicting model files"""
        logger.info("🧹 Cleaning old model files...")

        files_to_remove = [
            # Old CNN models with architecture conflicts
            "models/cnn_final_20250331_001817.pt.pt",
            "models/cnn_best.pt.pt",
            "models/cnn_BTC_USDT_20250329_021800.pt",
            "models/cnn_BTC_USDT_20250329_021448.pt",
            "models/cnn_BTC_USD_20250329_020711.pt",
            "models/cnn_BTC_USD_20250329_020430.pt",
            "models/cnn_BTC_USD_20250329_015217.pt",
            # Old RL models
            "models/trading_agent_final.pt",
            "models/trading_agent_best_pnl.pt",
            "models/trading_agent_best_reward.pt",
            "models/trading_agent_final.pt.backup",
            "models/trading_agent_best_net_pnl.pt",
            "models/trading_agent_best_net_pnl.pt.backup",
            "models/trading_agent_best_pnl.pt.backup",
            "models/trading_agent_best_reward.pt.backup",
            "models/trading_agent_live_trained.pt",
            # Checkpoint files
            "models/trading_agent_checkpoint_1650.pt.minimal",
            "models/trading_agent_checkpoint_1650.pt.params.json",
            "models/trading_agent_best_net_pnl.pt.policy.jit",
            "models/trading_agent_best_net_pnl.pt.params.json",
            "models/trading_agent_best_pnl.pt.params.json"
        ]

        removed_count = 0
        for file_path in files_to_remove:
            path = Path(file_path)
            if path.exists():
                path.unlink()
                removed_count += 1
                logger.info(f"  🗑️ Removed: {path}")

        logger.info(f"✅ Removed {removed_count} old model files")

    def setup_training_progression(self):
        """Set up training progression tracking system"""
        logger.info("📊 Setting up training progression system...")

        # Create training progress structure
        training_progress = {
            "created": datetime.now().isoformat(),
            "version": "1.0",
            "models": {
                "cnn": {
                    "current_version": 1,
                    "best_model": None,
                    "training_history": [],
                    "architecture": {
                        "input_channels": 5,
                        "window_size": 20,
                        "output_classes": 3
                    }
                },
                "rl": {
                    "current_version": 1,
                    "best_model": None,
                    "training_history": [],
                    "architecture": {
                        "state_size": 100,
                        "action_space": 3,
                        "hidden_size": 256
                    }
                },
                "williams_cnn": {
                    "current_version": 1,
                    "best_model": None,
                    "training_history": [],
                    "architecture": {
                        "input_shape": [900, 50],
                        "output_size": 10,
                        "enabled": False  # Disabled until TensorFlow available
                    }
                }
            },
            "training_stats": {
                "total_sessions": 0,
                "best_accuracy": 0.0,
                "best_pnl": 0.0,
                "last_training": None
            }
        }

        # Save training progress
        with open(self.training_progress_file, 'w') as f:
            json.dump(training_progress, f, indent=2)

        logger.info(f"✅ Created training progress file: {self.training_progress_file}")

    def create_model_directories(self):
        """Create clean model directory structure"""
        logger.info("📁 Creating clean model directory structure...")

        directories = [
            "models/cnn/current",
            "models/cnn/training",
            "models/cnn/best",
            "models/rl/current",
            "models/rl/training",
            "models/rl/best",
            "models/williams_cnn/current",
            "models/williams_cnn/training",
            "models/williams_cnn/best",
            "models/checkpoints",
            "models/training_logs"
        ]

        for directory in directories:
            Path(directory).mkdir(parents=True, exist_ok=True)
            logger.info(f"  📂 Created: {directory}")

        logger.info("✅ Model directory structure created")

    def initialize_fresh_models(self):
        """Initialize fresh model files for training"""
        logger.info("🆕 Initializing fresh models...")

        # Keep only the essential saved model
        essential_models = ["models/saved/cnn_model_best.pt"]
        for model_path in essential_models:
            if Path(model_path).exists():
                logger.info(f"  ✅ Keeping essential model: {model_path}")
            else:
                logger.warning(f"  ⚠️ Essential model not found: {model_path}")

        logger.info("✅ Fresh model initialization complete")

    def update_model_registry(self):
        """Update model registry to use new structure"""
        logger.info("⚙️ Updating model registry configuration...")

        registry_config = {
            "model_paths": {
                "cnn_current": "models/cnn/current/",
                "cnn_best": "models/cnn/best/",
                "rl_current": "models/rl/current/",
                "rl_best": "models/rl/best/",
                "williams_current": "models/williams_cnn/current/",
                "williams_best": "models/williams_cnn/best/"
            },
            "auto_load_best": True,
            "memory_limit_gb": 8.0,
            "training_enabled": True
        }

        config_path = Path("models/registry_config.json")
        with open(config_path, 'w') as f:
            json.dump(registry_config, f, indent=2)

        logger.info(f"✅ Model registry config saved: {config_path}")

    def run_cleanup(self):
        """Execute complete cleanup and setup process"""
        logger.info("🚀 Starting model cleanup and setup process...")
        logger.info("=" * 60)

        try:
            # Step 1: Backup existing models
            self.backup_existing_models()
            # Step 2: Clean old conflicting models
            self.clean_old_models()
            # Step 3: Setup training progression system
            self.setup_training_progression()
            # Step 4: Create clean directory structure
            self.create_model_directories()
            # Step 5: Initialize fresh models
            self.initialize_fresh_models()
            # Step 6: Update model registry
            self.update_model_registry()

            logger.info("=" * 60)
            logger.info("✅ Model cleanup and setup completed successfully!")
            logger.info(f"📁 Backup created at: {self.backup_dir}")
            logger.info("🔄 Ready for fresh training with enhanced RL!")
        except Exception as e:
            logger.error(f"❌ Error during cleanup: {e}")
            import traceback
            logger.error(traceback.format_exc())
            raise

def main():
    """Main execution function"""
    print("🧹 MODEL CLEANUP AND TRAINING SETUP")
    print("=" * 50)
    print("This script will:")
    print("1. Backup existing models")
    print("2. Remove old/conflicting models")
    print("3. Set up training progression tracking")
    print("4. Create clean directory structure")
    print("5. Initialize fresh training environment")
    print("=" * 50)

    response = input("Continue? (y/N): ").strip().lower()
    if response != 'y':
        print("❌ Cleanup cancelled")
        return

    cleanup_manager = ModelCleanupManager()
    cleanup_manager.run_cleanup()

if __name__ == "__main__":
    main()

View File

@@ -2,494 +2,154 @@
{
"trade_id": 1,
"side": "LONG",
"entry_time": "2025-05-29T23:50:11.970081+00:00",
"exit_time": "2025-05-29T23:50:30.130190+00:00",
"entry_price": 2624.0,
"exit_price": 2625.1,
"size": 0.003292,
"gross_pnl": 0.0036211999999997005,
"fees": 0.008640018599999999,
"entry_time": "2025-05-30T00:13:47.305918+00:00",
"exit_time": "2025-05-30T00:14:20.443391+00:00",
"entry_price": 2640.28,
"exit_price": 2641.6,
"size": 0.003504,
"gross_pnl": 0.004625279999998981,
"fees": 0.00925385376,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.005018818600000298,
"duration": "0:00:18.160109",
"net_pnl": -0.00462857376000102,
"duration": "0:00:33.137473",
"symbol": "ETH/USDC",
"mexc_executed": false
"mexc_executed": true
},
{
"trade_id": 2,
"side": "SHORT",
"entry_time": "2025-05-29T23:50:30.130190+00:00",
"exit_time": "2025-05-29T23:50:39.216725+00:00",
"entry_price": 2625.1,
"exit_price": 2626.07,
"size": 0.003013,
"gross_pnl": -0.002922610000000767,
"fees": 0.007910887605,
"entry_time": "2025-05-30T00:14:20.443391+00:00",
"exit_time": "2025-05-30T00:14:21.418785+00:00",
"entry_price": 2641.6,
"exit_price": 2641.72,
"size": 0.003061,
"gross_pnl": -0.00036731999999966593,
"fees": 0.008086121259999999,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.010833497605000766,
"duration": "0:00:09.086535",
"net_pnl": -0.008453441259999667,
"duration": "0:00:00.975394",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 3,
"side": "LONG",
"entry_time": "2025-05-29T23:50:39.216725+00:00",
"exit_time": "2025-05-29T23:51:00.350490+00:00",
"entry_price": 2626.07,
"exit_price": 2626.61,
"size": 0.003618,
"gross_pnl": 0.0019537199999998685,
"fees": 0.009502098120000002,
"entry_time": "2025-05-30T00:14:21.418785+00:00",
"exit_time": "2025-05-30T00:14:26.477094+00:00",
"entry_price": 2641.72,
"exit_price": 2641.31,
"size": 0.003315,
"gross_pnl": -0.0013591499999995175,
"fees": 0.008756622225,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.007548378120000133,
"duration": "0:00:21.133765",
"net_pnl": -0.010115772224999518,
"duration": "0:00:05.058309",
"symbol": "ETH/USDC",
"mexc_executed": true
"mexc_executed": false
},
{
"trade_id": 4,
"side": "SHORT",
"entry_time": "2025-05-29T23:51:00.350490+00:00",
"exit_time": "2025-05-29T23:51:02.365767+00:00",
"entry_price": 2626.61,
"exit_price": 2627.1,
"size": 0.003044,
"gross_pnl": -0.0014915599999993354,
"fees": 0.007996146619999998,
"entry_time": "2025-05-30T00:14:26.477094+00:00",
"exit_time": "2025-05-30T00:14:30.535806+00:00",
"entry_price": 2641.31,
"exit_price": 2641.5,
"size": 0.002779,
"gross_pnl": -0.0005280100000001517,
"fees": 0.007340464494999999,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.009487706619999335,
"duration": "0:00:02.015277",
"net_pnl": -0.00786847449500015,
"duration": "0:00:04.058712",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 5,
"side": "LONG",
"entry_time": "2025-05-29T23:51:02.365767+00:00",
"exit_time": "2025-05-29T23:51:09.413383+00:00",
"entry_price": 2627.1,
"exit_price": 2627.01,
"size": 0.003371,
"gross_pnl": -0.0003033899999989576,
"fees": 0.008855802405000002,
"entry_time": "2025-05-30T00:14:30.535806+00:00",
"exit_time": "2025-05-30T00:14:31.552963+00:00",
"entry_price": 2641.5,
"exit_price": 2641.4,
"size": 0.00333,
"gross_pnl": -0.00033299999999969715,
"fees": 0.0087960285,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.009159192404998958,
"duration": "0:00:07.047616",
"net_pnl": -0.009129028499999699,
"duration": "0:00:01.017157",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 6,
"side": "SHORT",
"entry_time": "2025-05-29T23:51:09.413383+00:00",
"exit_time": "2025-05-29T23:51:15.458079+00:00",
"entry_price": 2627.01,
"exit_price": 2627.3,
"size": 0.003195,
"gross_pnl": -0.0009265499999998837,
"fees": 0.008393760225000001,
"entry_time": "2025-05-30T00:14:31.552963+00:00",
"exit_time": "2025-05-30T00:14:45.573808+00:00",
"entry_price": 2641.4,
"exit_price": 2641.44,
"size": 0.003364,
"gross_pnl": -0.0001345599999998776,
"fees": 0.00888573688,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.009320310224999885,
"duration": "0:00:06.044696",
"net_pnl": -0.009020296879999877,
"duration": "0:00:14.020845",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 7,
"side": "LONG",
"entry_time": "2025-05-29T23:51:15.458079+00:00",
"exit_time": "2025-05-29T23:51:29.589023+00:00",
"entry_price": 2627.3,
"exit_price": 2626.5,
"size": 0.003194,
"gross_pnl": -0.002555200000000581,
"fees": 0.0083903186,
"entry_time": "2025-05-30T00:14:45.573808+00:00",
"exit_time": "2025-05-30T00:15:20.170547+00:00",
"entry_price": 2641.44,
"exit_price": 2642.71,
"size": 0.003597,
"gross_pnl": 0.004568189999999935,
"fees": 0.009503543775,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.010945518600000582,
"duration": "0:00:14.130944",
"net_pnl": -0.004935353775000065,
"duration": "0:00:34.596739",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 8,
"side": "SHORT",
"entry_time": "2025-05-29T23:51:29.589023+00:00",
"exit_time": "2025-05-29T23:51:36.690687+00:00",
"entry_price": 2626.5,
"exit_price": 2627.31,
"size": 0.003067,
"gross_pnl": -0.0024842699999998324,
"fees": 0.008056717635,
"entry_time": "2025-05-30T00:15:20.170547+00:00",
"exit_time": "2025-05-30T00:15:44.336302+00:00",
"entry_price": 2642.71,
"exit_price": 2641.3,
"size": 0.003595,
"gross_pnl": 0.005068949999999477,
"fees": 0.009498007975,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.010540987634999832,
"duration": "0:00:07.101664",
"net_pnl": -0.004429057975000524,
"duration": "0:00:24.165755",
"symbol": "ETH/USDC",
"mexc_executed": true
},
{
"trade_id": 9,
"side": "LONG",
"entry_time": "2025-05-29T23:51:36.690687+00:00",
"exit_time": "2025-05-29T23:51:57.745006+00:00",
"entry_price": 2627.31,
"exit_price": 2628.8,
"size": 0.003616,
"gross_pnl": 0.005387840000000855,
"fees": 0.009503046880000001,
"entry_time": "2025-05-30T00:15:44.336302+00:00",
"exit_time": "2025-05-30T00:15:53.303199+00:00",
"entry_price": 2641.3,
"exit_price": 2640.69,
"size": 0.003597,
"gross_pnl": -0.002194170000000458,
"fees": 0.009499659015,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.004115206879999145,
"duration": "0:00:21.054319",
"net_pnl": -0.011693829015000459,
"duration": "0:00:08.966897",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 10,
"side": "SHORT",
"entry_time": "2025-05-29T23:51:57.745006+00:00",
"exit_time": "2025-05-29T23:52:01.784243+00:00",
"entry_price": 2628.8,
"exit_price": 2629.69,
"size": 0.003146,
"gross_pnl": -0.002799939999999599,
"fees": 0.00827160477,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.011071544769999598,
"duration": "0:00:04.039237",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 11,
"side": "LONG",
"entry_time": "2025-05-29T23:52:01.784243+00:00",
"exit_time": "2025-05-29T23:52:19.042322+00:00",
"entry_price": 2629.69,
"exit_price": 2630.16,
"size": 0.003158,
"gross_pnl": 0.0014842599999993682,
"fees": 0.00830530315,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.006821043150000632,
"duration": "0:00:17.258079",
"symbol": "ETH/USDC",
"mexc_executed": true
},
{
"trade_id": 12,
"side": "SHORT",
"entry_time": "2025-05-29T23:52:19.042322+00:00",
"exit_time": "2025-05-29T23:52:21.059640+00:00",
"entry_price": 2630.16,
"exit_price": 2629.8,
"size": 0.002501,
"gross_pnl": 0.0009003599999991812,
"fees": 0.00657757998,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.005677219980000819,
"duration": "0:00:02.017318",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 13,
"side": "LONG",
"entry_time": "2025-05-29T23:52:21.059640+00:00",
"exit_time": "2025-05-29T23:52:22.047822+00:00",
"entry_price": 2629.8,
"exit_price": 2630.1,
"size": 0.003603,
"gross_pnl": 0.001080899999999017,
"fees": 0.00947570985,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.008394809850000982,
"duration": "0:00:00.988182",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 14,
"side": "SHORT",
"entry_time": "2025-05-29T23:52:22.047822+00:00",
"exit_time": "2025-05-29T23:52:23.057350+00:00",
"entry_price": 2630.1,
"exit_price": 2630.21,
"size": 0.002891,
"gross_pnl": -0.0003180100000003681,
"fees": 0.007603778104999999,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.007921788105000367,
"duration": "0:00:01.009528",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 15,
"side": "LONG",
"entry_time": "2025-05-29T23:52:23.057350+00:00",
"exit_time": "2025-05-29T23:52:24.064707+00:00",
"entry_price": 2630.21,
"exit_price": 2630.2,
"size": 0.003294,
"gross_pnl": -3.294000000071901e-05,
"fees": 0.00866389527,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.00869683527000072,
"duration": "0:00:01.007357",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 16,
"side": "SHORT",
"entry_time": "2025-05-29T23:52:24.064707+00:00",
"exit_time": "2025-05-29T23:52:25.116203+00:00",
"entry_price": 2630.2,
"exit_price": 2630.85,
"size": 0.00314,
"gross_pnl": -0.0020410000000002856,
"fees": 0.0082598485,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.010300848500000286,
"duration": "0:00:01.051496",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 17,
"side": "LONG",
"entry_time": "2025-05-29T23:52:25.116203+00:00",
"exit_time": "2025-05-29T23:52:38.236771+00:00",
"entry_price": 2630.85,
"exit_price": 2630.7,
"size": 0.003321,
"gross_pnl": -0.0004981500000003021,
"fees": 0.008736803775,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.0092349537750003,
"duration": "0:00:13.120568",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 18,
"side": "SHORT",
"entry_time": "2025-05-29T23:52:38.236771+00:00",
"exit_time": "2025-05-29T23:52:50.383302+00:00",
"entry_price": 2630.7,
"exit_price": 2630.4,
"size": 0.002656,
"gross_pnl": 0.0007967999999992753,
"fees": 0.0069867408,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.006189940800000725,
"duration": "0:00:12.146531",
"symbol": "ETH/USDC",
"mexc_executed": true
},
{
"trade_id": 19,
"side": "LONG",
"entry_time": "2025-05-29T23:52:50.383302+00:00",
"exit_time": "2025-05-29T23:53:19.597394+00:00",
"entry_price": 2630.4,
"exit_price": 2633.26,
"size": 0.003426,
"gross_pnl": 0.009798360000000436,
"fees": 0.009016649580000001,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": 0.000781710420000436,
"duration": "0:00:29.214092",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 20,
"side": "SHORT",
"entry_time": "2025-05-29T23:53:19.597394+00:00",
"exit_time": "2025-05-29T23:53:31.013991+00:00",
"entry_price": 2633.26,
"exit_price": 2633.45,
"size": 0.003379,
"gross_pnl": -0.0006420099999986478,
"fees": 0.008898106545,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.009540116544998648,
"duration": "0:00:11.416597",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 21,
"side": "LONG",
"entry_time": "2025-05-29T23:53:31.013991+00:00",
"exit_time": "2025-05-29T23:53:37.784028+00:00",
"entry_price": 2633.45,
"exit_price": 2632.8,
"size": 0.003589,
"gross_pnl": -0.0023328499999986946,
"fees": 0.009450285625,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.011783135624998695,
"duration": "0:00:06.770037",
"symbol": "ETH/USDC",
"mexc_executed": true
},
{
"trade_id": 22,
"side": "SHORT",
"entry_time": "2025-05-29T23:53:37.784028+00:00",
"exit_time": "2025-05-29T23:53:51.855920+00:00",
"entry_price": 2632.8,
"exit_price": 2632.46,
"size": 0.003376,
"gross_pnl": 0.0011478400000004914,
"fees": 0.00888775888,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.007739918879999509,
"duration": "0:00:14.071892",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 23,
"side": "LONG",
"entry_time": "2025-05-29T23:53:51.855920+00:00",
"exit_time": "2025-05-29T23:53:54.962582+00:00",
"entry_price": 2632.46,
"exit_price": 2632.77,
"size": 0.003609,
"gross_pnl": 0.001118789999999803,
"fees": 0.009501107534999999,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.008382317535000197,
"duration": "0:00:03.106662",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 24,
"side": "SHORT",
"entry_time": "2025-05-29T23:53:54.962582+00:00",
"exit_time": "2025-05-29T23:53:59.920067+00:00",
"entry_price": 2632.77,
"exit_price": 2632.9,
"size": 0.002894,
"gross_pnl": -0.00037622000000031585,
"fees": 0.00761942449,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.007995644490000316,
"duration": "0:00:04.957485",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 25,
"side": "LONG",
"entry_time": "2025-05-29T23:53:59.920067+00:00",
"exit_time": "2025-05-29T23:54:11.215478+00:00",
"entry_price": 2632.9,
"exit_price": 2632.8,
"size": 0.003527,
"gross_pnl": -0.00035269999999967925,
"fees": 0.00928606195,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.009638761949999679,
"duration": "0:00:11.295411",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 26,
"side": "SHORT",
"entry_time": "2025-05-29T23:54:11.215478+00:00",
"exit_time": "2025-05-29T23:54:18.047968+00:00",
"entry_price": 2632.8,
"exit_price": 2632.81,
"size": 0.003608,
"gross_pnl": -3.607999999914682e-05,
"fees": 0.009499160440000001,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.009535240439999147,
"duration": "0:00:06.832490",
"symbol": "ETH/USDC",
"mexc_executed": true
},
{
"trade_id": 27,
"side": "LONG",
"entry_time": "2025-05-29T23:54:18.047968+00:00",
"exit_time": "2025-05-29T23:54:19.049526+00:00",
"entry_price": 2632.81,
"exit_price": 2632.78,
"size": 0.003337,
"gross_pnl": -0.0001001099999991502,
"fees": 0.008785636915,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.008885746914999151,
"duration": "0:00:01.001558",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 28,
"side": "SHORT",
"entry_time": "2025-05-29T23:54:19.049526+00:00",
"exit_time": "2025-05-29T23:54:28.110776+00:00",
"entry_price": 2632.78,
"exit_price": 2632.8,
"size": 0.003219,
"gross_pnl": -6.437999999994144e-05,
"fees": 0.008474951010000002,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.008539331009999943,
"duration": "0:00:09.061250",
"symbol": "ETH/USDC",
"mexc_executed": false
},
{
"trade_id": 29,
"side": "LONG",
"entry_time": "2025-05-29T23:54:28.110776+00:00",
"exit_time": "2025-05-29T23:55:00.386077+00:00",
"entry_price": 2632.8,
"exit_price": 2632.81,
"size": 0.003368,
"gross_pnl": 3.3679999999203575e-05,
"fees": 0.00886728724,
"fee_type": "taker",
"fee_rate": 0.0005,
"net_pnl": -0.008833607240000797,
"duration": "0:00:32.275301",
"symbol": "ETH/USDC",
"mexc_executed": true
}
]
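The log obeys a consistent accounting identity: net_pnl = gross_pnl - fees, with taker fees charged on both legs (fees = size × (entry_price + exit_price) × fee_rate), which is why most of these sub-minute scalps net negative even when gross PnL is positive. A minimal sanity-check sketch, assuming the array above is saved as closed_trades.json (the filename is an assumption):

```python
import json

with open("closed_trades.json") as f:  # filename assumed
    trades = json.load(f)

for t in trades:
    expected_fees = t["size"] * (t["entry_price"] + t["exit_price"]) * t["fee_rate"]
    assert abs(t["fees"] - expected_fees) < 1e-8
    assert abs(t["net_pnl"] - (t["gross_pnl"] - t["fees"])) < 1e-8
```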

318
enhanced_rl_diagnostic.py Normal file
View File

@@ -0,0 +1,318 @@
#!/usr/bin/env python3
"""
Enhanced RL Diagnostic and Setup Script

This script:
1. Diagnoses why Enhanced RL shows as DISABLED
2. Explains model management and training progression
3. Sets up clean training environment
4. Provides solutions for the reward function issues
"""

import sys
import json
import logging
from datetime import datetime
from pathlib import Path

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

def check_enhanced_rl_availability():
    """Check what's causing Enhanced RL to be disabled"""
    logger.info("🔍 DIAGNOSING ENHANCED RL AVAILABILITY")
    logger.info("=" * 50)

    issues = []
    solutions = []

    # Test 1: Enhanced components import
    try:
        from core.enhanced_orchestrator import EnhancedTradingOrchestrator
        logger.info("✅ EnhancedTradingOrchestrator imports successfully")
    except ImportError as e:
        issues.append(f"❌ Cannot import EnhancedTradingOrchestrator: {e}")
        solutions.append("Fix: Check core/enhanced_orchestrator.py exists and is valid")

    # Test 2: Unified data stream import
    try:
        from core.unified_data_stream import UnifiedDataStream, TrainingDataPacket, UIDataPacket
        logger.info("✅ Unified data stream components import successfully")
    except ImportError as e:
        issues.append(f"❌ Cannot import unified data stream: {e}")
        solutions.append("Fix: Check core/unified_data_stream.py exists and is valid")

    # Test 3: Universal data adapter import
    try:
        from core.universal_data_adapter import UniversalDataAdapter
        logger.info("✅ UniversalDataAdapter imports successfully")
    except ImportError as e:
        issues.append(f"❌ Cannot import UniversalDataAdapter: {e}")
        solutions.append("Fix: Check core/universal_data_adapter.py exists and is valid")

    # Test 4: Dashboard initialization logic
    logger.info("🔍 Checking dashboard initialization logic...")
    # Simulate dashboard initialization
    try:
        from core.enhanced_orchestrator import EnhancedTradingOrchestrator
        from core.data_provider import DataProvider

        data_provider = DataProvider()
        enhanced_orchestrator = EnhancedTradingOrchestrator(
            data_provider=data_provider,
            symbols=['ETH/USDT'],
            enhanced_rl_training=True
        )

        # Check the isinstance condition
        if isinstance(enhanced_orchestrator, EnhancedTradingOrchestrator):
            logger.info("✅ EnhancedTradingOrchestrator isinstance check passes")
        else:
            issues.append("❌ isinstance(orchestrator, EnhancedTradingOrchestrator) fails")
            solutions.append("Fix: Ensure dashboard is initialized with EnhancedTradingOrchestrator")
    except Exception as e:
        issues.append(f"❌ Cannot create EnhancedTradingOrchestrator: {e}")
        solutions.append("Fix: Check orchestrator initialization parameters")

    # Test 5: Main startup script
    logger.info("🔍 Checking main startup configuration...")
    main_file = Path("main_clean.py")
    if main_file.exists():
        content = main_file.read_text()
        if "EnhancedTradingOrchestrator" in content:
            logger.info("✅ main_clean.py uses EnhancedTradingOrchestrator")
        else:
            issues.append("❌ main_clean.py not using EnhancedTradingOrchestrator")
            solutions.append("Fix: Update main_clean.py to use EnhancedTradingOrchestrator")

    return issues, solutions

def analyze_model_management():
    """Analyze current model management setup"""
    logger.info("📊 ANALYZING MODEL MANAGEMENT")
    logger.info("=" * 50)

    models_dir = Path("models")

    # Count different model types
    model_counts = {
        "CNN models": len(list(models_dir.glob("**/cnn*.pt*"))),
        "RL models": len(list(models_dir.glob("**/trading_agent*.pt*"))),
        "Backup models": len(list(models_dir.glob("**/*.backup"))),
        "Total model files": len(list(models_dir.glob("**/*.pt*")))
    }

    for model_type, count in model_counts.items():
        logger.info(f"  {model_type}: {count}")

    # Check for training progression system
    progress_file = models_dir / "training_progress.json"
    if progress_file.exists():
        logger.info("✅ Training progression file exists")
        try:
            with open(progress_file) as f:
                progress = json.load(f)
            logger.info(f"  Created: {progress.get('created', 'Unknown')}")
            logger.info(f"  Version: {progress.get('version', 'Unknown')}")
        except Exception as e:
            logger.warning(f"⚠️ Cannot read progression file: {e}")
    else:
        logger.info("❌ No training progression tracking found")

    # Check for conflicting models
    conflicting_models = [
        "models/cnn_final_20250331_001817.pt.pt",
        "models/cnn_best.pt.pt",
        "models/trading_agent_final.pt",
        "models/trading_agent_best_pnl.pt"
    ]

    conflicts = [model for model in conflicting_models if Path(model).exists()]
    if conflicts:
        logger.warning(f"⚠️ Found {len(conflicts)} potentially conflicting model files")
        for conflict in conflicts:
            logger.warning(f"  {conflict}")
    else:
        logger.info("✅ No obvious model conflicts detected")

def analyze_reward_function():
    """Analyze the reward function and training issues"""
    logger.info("🎯 ANALYZING REWARD FUNCTION ISSUES")
    logger.info("=" * 50)

    # Read recent dashboard logs to understand the -0.5 reward issue
    log_file = Path("dashboard.log")
    if log_file.exists():
        try:
            with open(log_file, 'r') as f:
                lines = f.readlines()

            # Look for reward patterns
            reward_lines = [line for line in lines if "Reward:" in line]
            if reward_lines:
                recent_rewards = reward_lines[-10:]  # Last 10 rewards
                negative_rewards = [line for line in recent_rewards if "-0.5" in line]
                logger.info(f"Recent rewards found: {len(recent_rewards)}")
                logger.info(f"Negative -0.5 rewards: {len(negative_rewards)}")
                if len(negative_rewards) > 5:
                    logger.warning("⚠️ High number of -0.5 rewards detected")
                    logger.info("This suggests blocked signals are being penalized with fees")
                    logger.info("Solution: Update _queue_signal_for_training to handle blocked signals better")

            # Look for blocked signal patterns
            blocked_signals = [line for line in lines if "NOT_EXECUTED" in line]
            if blocked_signals:
                logger.info(f"Blocked signals found: {len(blocked_signals)}")
                recent_blocked = blocked_signals[-5:]
                for line in recent_blocked:
                    logger.info(f"  {line.strip()}")
        except Exception as e:
            logger.warning(f"Cannot analyze log file: {e}")
    else:
        logger.info("No dashboard.log found for analysis")

def provide_solutions():
    """Provide comprehensive solutions"""
    logger.info("💡 COMPREHENSIVE SOLUTIONS")
    logger.info("=" * 50)

    solutions = {
        "Enhanced RL DISABLED Issue": [
            "1. Update main_clean.py to use EnhancedTradingOrchestrator (already done)",
            "2. Restart the dashboard with: python main_clean.py web",
            "3. Verify Enhanced RL: ENABLED appears in logs"
        ],
        "Williams Repeated Initialization": [
            "1. Dashboard reuses Williams instance now (already fixed)",
            "2. Default strengths changed from [2,3,5,8,13] to [2,3,5] (already done)",
            "3. No more repeated 'Williams Market Structure initialized' logs"
        ],
        "Model Management": [
            "1. Run: python cleanup_and_setup_models.py",
            "2. This will backup old models and create clean structure",
            "3. Set up training progression tracking",
            "4. Initialize fresh training environment"
        ],
        "Reward Function (-0.5 Issue)": [
            "1. Blocked signals now get small negative reward (-0.1) instead of fee penalty",
            "2. Synthetic signals handled separately from real trades",
            "3. Reward calculation improved for better learning"
        ],
        "CNN Training Sessions": [
            "1. CNN training is disabled by default (no TensorFlow)",
            "2. Williams pivot detection works without CNN",
            "3. Enable CNN when TensorFlow available for enhanced predictions"
        ]
    }

    for category, steps in solutions.items():
        logger.info(f"\n{category}:")
        for step in steps:
            logger.info(f"  {step}")

def create_startup_script():
    """Create an optimal startup script"""
    startup_script = """#!/usr/bin/env python3
# Enhanced RL Trading Dashboard Startup Script

import logging
logging.basicConfig(level=logging.INFO)

def main():
    try:
        # Import enhanced components
        from core.data_provider import DataProvider
        from core.enhanced_orchestrator import EnhancedTradingOrchestrator
        from core.trading_executor import TradingExecutor
        from web.dashboard import TradingDashboard
        from config import get_config

        config = get_config()

        # Initialize with enhanced RL support
        data_provider = DataProvider()
        enhanced_orchestrator = EnhancedTradingOrchestrator(
            data_provider=data_provider,
            symbols=config.get('symbols', ['ETH/USDT']),
            enhanced_rl_training=True
        )
        trading_executor = TradingExecutor()

        # Create dashboard with enhanced components
        dashboard = TradingDashboard(
            data_provider=data_provider,
            orchestrator=enhanced_orchestrator,  # Enhanced RL enabled
            trading_executor=trading_executor
        )

        print("Enhanced RL Trading Dashboard Starting...")
        print("Enhanced RL: ENABLED")
        print("Williams Pivot Detection: ENABLED")
        print("Real Market Data: ENABLED")
        print("Access at: http://127.0.0.1:8050")

        dashboard.run(host='127.0.0.1', port=8050, debug=False)
    except Exception as e:
        print(f"Startup failed: {e}")
        import traceback
        traceback.print_exc()

if __name__ == "__main__":
    main()
"""

    with open("start_enhanced_dashboard.py", "w", encoding='utf-8') as f:
        f.write(startup_script)

    logger.info("Created start_enhanced_dashboard.py for optimal startup")

def main():
    """Main diagnostic function"""
    print("🔬 ENHANCED RL DIAGNOSTIC AND SETUP")
    print("=" * 60)
    print("Analyzing Enhanced RL issues and providing solutions...")
    print("=" * 60)

    # Run diagnostics
    issues, solutions = check_enhanced_rl_availability()
    analyze_model_management()
    analyze_reward_function()
    provide_solutions()
    create_startup_script()

    # Summary
    print("\n" + "=" * 60)
    print("📋 SUMMARY")
    print("=" * 60)
    if issues:
        print("❌ Issues found:")
        for issue in issues:
            print(f"  {issue}")
        print("\n💡 Solutions:")
        for solution in solutions:
            print(f"  {solution}")
    else:
        print("✅ No critical issues detected!")

    print("\n🚀 NEXT STEPS:")
    print("1. Run model cleanup: python cleanup_and_setup_models.py")
    print("2. Start enhanced dashboard: python start_enhanced_dashboard.py")
    print("3. Verify 'Enhanced RL: ENABLED' in dashboard")
    print("4. Check Williams pivot detection on chart")
    print("5. Monitor training episodes (should not all be -0.5 reward)")

if __name__ == "__main__":
    main()

View File

@@ -278,52 +278,90 @@ def run_live_trading():
raise
def run_web_dashboard():
"""Run web dashboard with enhanced real-time data - NO SYNTHETIC DATA"""
"""Run the web dashboard with real live data"""
try:
logger.info("Starting Web Dashboard Mode with REAL LIVE DATA...")
# Initialize with real data provider
# Initialize core components with enhanced RL support
from core.tick_aggregator import RealTimeTickAggregator
from core.data_provider import DataProvider
from core.enhanced_orchestrator import EnhancedTradingOrchestrator # Use enhanced version
from core.trading_executor import TradingExecutor
# Create tick aggregator for real-time data - fix parameter name
tick_aggregator = RealTimeTickAggregator(
symbols=['ETHUSDC', 'BTCUSDT', 'MXUSDT'],
tick_buffer_size=10000 # Changed from buffer_size to tick_buffer_size
)
# Create data provider
data_provider = DataProvider()
# Verify we have real data connection
# Verify data connection with real data
logger.info("[DATA] Verifying REAL data connection...")
test_data = data_provider.get_historical_data('ETH/USDT', '1m', limit=10, refresh=True)
if test_data is None or test_data.empty:
logger.warning("⚠️ No fresh data available - trying cached data...")
test_data = data_provider.get_historical_data('ETH/USDT', '1m', limit=10, refresh=False)
if test_data is None or test_data.empty:
logger.warning("⚠️ No data available - starting dashboard with demo mode...")
else:
symbol = config.get('symbols', ['ETH/USDT'])[0]
test_df = data_provider.get_historical_data(symbol, '1m', limit=10)
if test_df is not None and len(test_df) > 0:
logger.info("[SUCCESS] Data connection verified")
logger.info(f"[SUCCESS] Fetched {len(test_data)} candles for validation")
logger.info(f"[SUCCESS] Fetched {len(test_df)} candles for validation")
else:
logger.error("[ERROR] Data connection failed - no real data available")
return
# Initialize orchestrator with real data only
orchestrator = TradingOrchestrator(data_provider)
# Load model registry
model_registry = get_model_registry()
# Start dashboard - use the correct import
# Create ENHANCED trading orchestrator for RL training
orchestrator = EnhancedTradingOrchestrator(
data_provider=data_provider,
symbols=config.get('symbols', ['ETH/USDT']),
enhanced_rl_training=True, # Enable enhanced RL
model_registry=model_registry
)
logger.info("Enhanced RL Trading Orchestrator initialized")
# Create trading executor (handles MEXC integration)
trading_executor = TradingExecutor()
# Import and create enhanced dashboard
from web.dashboard import TradingDashboard
dashboard = TradingDashboard(data_provider, orchestrator)
dashboard = TradingDashboard(
data_provider=data_provider,
orchestrator=orchestrator, # Enhanced orchestrator
trading_executor=trading_executor
)
logger.info("[LAUNCH] LAUNCHING DASHBOARD")
logger.info(f"[ACCESS] Access at: http://127.0.0.1:8050")
# Start the dashboard
port = config.get('web', {}).get('port', 8050)
host = config.get('web', {}).get('host', '127.0.0.1')
# Run the dashboard
dashboard.run(host='127.0.0.1', port=8050, debug=False)
logger.info(f"TRADING: Starting Live Scalping Dashboard at http://{host}:{port}")
logger.info("Enhanced RL Training: ENABLED")
logger.info("Real Market Data: ENABLED")
logger.info("MEXC Integration: ENABLED")
dashboard.run(host=host, port=port, debug=False)
except Exception as e:
logger.error(f"Error in web dashboard: {e}")
logger.error("Dashboard stopped - trying fallback mode")
# Try a simpler fallback
try:
# Fallback to basic dashboard function - use working import
from web.dashboard import TradingDashboard
from core.data_provider import DataProvider
# Create minimal dashboard
data_provider = DataProvider()
orchestrator = TradingOrchestrator(data_provider)
dashboard = TradingDashboard(data_provider, orchestrator)
dashboard = TradingDashboard(data_provider)
logger.info("Using fallback dashboard")
dashboard.run(host='127.0.0.1', port=8050, debug=False)
except Exception as fallback_error:
logger.error(f"Fallback dashboard also failed: {fallback_error}")
raise
logger.error(f"Fatal error: {e}")
import traceback
logger.error("Traceback (most recent call last):")
logger.error(traceback.format_exc())
async def main():
"""Main entry point with clean mode selection"""

Binary file not shown.

View File

@@ -1 +0,0 @@
{"epsilon": 1.0, "state_size": 64, "action_size": 4, "hidden_size": 384, "lstm_layers": 2, "attention_heads": 4}

Binary file not shown.

View File

@@ -1 +0,0 @@
{"epsilon": 1.0, "state_size": 64, "action_size": 4, "hidden_size": 384, "lstm_layers": 2, "attention_heads": 4}

Binary file not shown.

View File

@@ -1 +0,0 @@
{"epsilon": 0.843345, "state_size": 64, "action_size": 4, "hidden_size": 384, "lstm_layers": 2, "attention_heads": 4}

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

View File

@@ -3,17 +3,29 @@ Williams Market Structure Implementation for RL Training
This module implements Larry Williams market structure analysis methodology for
RL training enhancement with:
- Swing high/low detection with configurable strength
- 5 levels of recursive pivot point calculation
- Trend analysis (higher highs/lows vs lower highs/lows)
- Market bias determination across multiple timeframes
- Feature extraction for RL training (250 features)
**SINGLE TIMEFRAME RECURSIVE APPROACH:**
- Level 0: 1s OHLCV data → swing points using configurable strength [2, 3, 5]
- Level 1: Level 0 swing points treated as "price bars" → higher-level swing points
- Level 2: Level 1 swing points treated as "price bars" → even higher-level swing points
- Level 3: Level 2 swing points treated as "price bars" → top-level swing points
- Level 4: Level 3 swing points treated as "price bars" → highest-level swing points
**RECURSIVE METHODOLOGY:**
Each level uses the previous level's swing points as input "price data", where:
- Each swing point becomes a "price bar" with OHLC = swing point price
- Swing strength detection applied to find patterns in swing point sequences
- This creates fractal market structure analysis across 5 recursive levels
**NOT MULTI-TIMEFRAME:**
Williams structure uses ONLY 1s data and builds recursively.
Multi-timeframe data (1m, 1h) is used separately for CNN feature enhancement.
Based on Larry Williams' teachings on market structure:
- Markets move in swings between support and resistance
- Higher timeframe structure determines lower timeframe bias
- Recursive analysis reveals fractal patterns
- Trend direction determined by swing point relationships
- Higher recursive levels reveal longer-term structure patterns
- Recursive analysis reveals fractal patterns within market movements
- Trend direction determined by swing point relationships across levels
"""
import numpy as np
@@ -116,7 +128,7 @@ class WilliamsMarketStructure:
enable_cnn_feature: If True, enables CNN prediction and training at pivots.
training_data_provider: Provider/stream for accessing TrainingDataPacket
"""
self.swing_strengths = swing_strengths or [2, 3, 5, 8, 13] # Fibonacci-based strengths
self.swing_strengths = swing_strengths or [2, 3, 5] # Simplified strengths for better performance
self.max_levels = 5
self.min_swings_for_trend = 3
@@ -154,35 +166,47 @@
def calculate_recursive_pivot_points(self, ohlcv_data: np.ndarray) -> Dict[str, MarketStructureLevel]:
"""
Calculate 5 levels of recursive pivot points using TRUE recursion
Calculate 5 levels of recursive pivot points using SINGLE TIMEFRAME (1s) data
Level 1: Calculated from 1s OHLCV data
Level 2: Calculated from Level 1 pivot points treated as individual price bars
Level 3: Calculated from Level 2 pivot points treated as individual price bars
etc.
**RECURSIVE STRUCTURE:**
- Level 0: Raw 1s OHLCV data → swing points (strength 2, 3, 5)
- Level 1: Level 0 swing points treated as "price bars" → higher-level swing points
- Level 2: Level 1 swing points treated as "price bars" → even higher-level swing points
- Level 3: Level 2 swing points treated as "price bars" → top-level swing points
- Level 4: Level 3 swing points treated as "price bars" → highest-level swing points
**HOW RECURSION WORKS:**
1. Start with 1s OHLCV data (timestamp, open, high, low, close, volume)
2. Find Level 0 swing points using configurable strength [2, 3, 5]
3. Convert Level 0 swing points to "price bar" format where OHLC = swing price
4. Apply swing detection to these "price bars" to find Level 1 swing points
5. Repeat the process: Level N swing points → "price bars" → Level N+1 swing points
This creates a fractal analysis where each level reveals longer-term structure patterns
within the same 1s timeframe data, NOT across different timeframes.
Args:
ohlcv_data: OHLCV data array with columns [timestamp, open, high, low, close, volume]
ohlcv_data: 1s OHLCV data array [timestamp, open, high, low, close, volume]
Returns:
Dict of market structure levels with swing points and trend analysis
Dict of 5 market structure levels with recursive swing points and analysis
"""
if len(ohlcv_data) < 20:
logger.warning("Insufficient data for Williams structure analysis")
return self._create_empty_structure()
levels = {}
current_price_points = ohlcv_data.copy() # Start with raw price data
current_price_points = ohlcv_data.copy() # Start with raw 1s OHLCV data
for level in range(self.max_levels):
logger.debug(f"Analyzing level {level} with {len(current_price_points)} data points")
if level == 0:
# Level 0 (Level 1): Calculate from raw OHLCV data
# Level 0: Calculate swing points from raw 1s OHLCV data
swing_points = self._find_swing_points_multi_strength(current_price_points)
else:
# Level 1+ (Level 2+): Calculate from previous level's pivot points
# Treat pivot points as individual price bars
# Level 1-4: Calculate swing points from previous level's swing points
# Previous level's swing points are treated as "price bars"
swing_points = self._find_pivot_points_from_pivot_points(current_price_points, level)
if len(swing_points) < self.min_swings_for_trend:
@@ -557,14 +581,24 @@
def _find_pivot_points_from_pivot_points(self, pivot_array: np.ndarray, level: int) -> List[SwingPoint]:
"""
Find pivot points from previous level's pivot points
Find swing points from previous level's swing points (RECURSIVE APPROACH)
For Level 2+: A Level N low pivot is when a Level N-1 pivot low is surrounded
by higher Level N-1 pivot lows (and vice versa for highs)
**RECURSIVE SWING DETECTION:**
For Level N (where N > 0): A Level N swing high occurs when a Level N-1 swing point
is higher than surrounding Level N-1 swing points (and vice versa for lows).
This is NOT multi-timeframe analysis - it's recursive fractal analysis where:
- Level 1 finds patterns in Level 0 swing sequences (from 1s data)
- Level 2 finds patterns in Level 1 swing sequences
- Level 3 finds patterns in Level 2 swing sequences
- Level 4 finds patterns in Level 3 swing sequences
All based on the original 1s timeframe data, recursively analyzed.
Args:
pivot_array: Array of pivot points as [timestamp, price, price, price, price, 0] format
level: Current level being calculated
pivot_array: Array of Level N-1 swing points formatted as "price bars"
[timestamp, price, price, price, price, 0] format
level: Current recursive level being calculated (1, 2, 3, or 4)
"""
identified_swings_in_this_call = [] # Temporary list
@@ -620,10 +654,22 @@
def _convert_pivots_to_price_points(self, swing_points: List[SwingPoint]) -> np.ndarray:
"""
Convert swing points to price point array for next level calculation
Convert swing points to "price bar" format for next recursive level calculation
Each swing point becomes a "price bar" where OHLC = pivot price
This allows the next level to treat pivot points as individual price data
**RECURSIVE CONVERSION PROCESS:**
Each swing point from Level N becomes a "price bar" for Level N+1 calculation:
- Timestamp = swing point timestamp
- Open = High = Low = Close = swing point price (since it's a single point)
- Volume = 0 (not applicable for swing points)
This allows Level N+1 to treat Level N swing points as if they were regular
OHLCV price bars, enabling the same swing detection algorithm to find
higher-level patterns in the swing point sequences.
Example:
- Level 0: 1000 x 1s bars → 50 swing points
- Level 1: 50 "price bars" (from Level 0 swings) → 10 swing points
- Level 2: 10 "price bars" (from Level 1 swings) → 3 swing points
"""
if len(swing_points) < 2:
return np.array([])