9 Commits

SHA1 Message Date
64371678ca setup aider 2025-07-23 10:27:32 +03:00
0cc104f1ef wip cob 2025-07-23 00:48:14 +03:00
8898f71832 dark mode. new COB style 2025-07-22 22:00:27 +03:00
55803c4fb9 cleanup new COB ladder 2025-07-22 21:39:36 +03:00
153ebe6ec2 stability 2025-07-22 21:18:31 +03:00
6c91bf0b93 fix sim and wip fix live 2025-07-08 02:47:10 +03:00
64678bd8d3 more live trades fix 2025-07-08 02:03:32 +03:00
4ab7bc1846 tweaks, try live trading 2025-07-08 01:33:22 +03:00
9cd2d5d8a4 fixes 2025-07-07 23:39:12 +03:00
49 changed files with 1072 additions and 90487 deletions

New file: .aider.conf.yml (18 lines)

@@ -0,0 +1,18 @@
# Aider configuration file
# For more information, see: https://aider.chat/docs/config/aider_conf.html
# To use the custom OpenAI-compatible endpoint from hyperbolic.xyz
# Set the model and the API base URL.
model: Qwen/Qwen3-Coder-480B-A35B-Instruct
openai-api-base: https://api.hyperbolic.xyz/v1
openai-api-key: "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"
model-metadata-file: .aider.model.metadata.json
# The API key above is set directly in this file.
#
# Alternatively, for better security, you can remove the openai-api-key line
# from this file and set it as an environment variable. To do so on Windows,
# run the following command in PowerShell and then RESTART YOUR SHELL:
#
# setx OPENAI_API_KEY "your-api-key-from-the-curl-command"

New file: .aider.model.metadata.json (7 lines)

@@ -0,0 +1,7 @@
{
  "Qwen/Qwen3-Coder-480B-A35B-Instruct": {
    "context_window": 262144,
    "input_cost_per_token": 0.000002,
    "output_cost_per_token": 0.000002
  }
}
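For a rough sense of scale, these rates can be turned into a per-request cost. A minimal sketch, where the token counts are invented example values and only the prices come from the file above:

```python
# Hypothetical request: 50k input tokens, 2k output tokens (example values)
input_tokens, output_tokens = 50_000, 2_000
cost = input_tokens * 0.000002 + output_tokens * 0.000002
print(f"~${cost:.3f} per request")  # ~$0.104
```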

Modified file: .gitignore (vendored)

@@ -42,3 +42,8 @@ data/cnn_training/cnn_training_data*
 testcases/*
 testcases/negative/case_index.json
 chrome_user_data/*
+.aider*
+!.aider.conf.yml
+!.aider.model.metadata.json
+.env

Deleted file: Model Cleanup Summary Report

@@ -1,137 +0,0 @@
# Model Cleanup Summary Report
*Completed: 2024-12-19*
## 🎯 Objective
Clean up redundant and unused model implementations while preserving valuable architectural concepts and maintaining the production system integrity.
## 📋 Analysis Completed
- **Comprehensive Analysis**: Created detailed report of all model implementations
- **Good Ideas Documented**: Identified and recorded 50+ valuable architectural concepts
- **Production Models Identified**: Confirmed which models are actively used
- **Cleanup Plan Executed**: Removed redundant implementations systematically
## 🗑️ Files Removed
### CNN Model Implementations (4 files removed)
- `NN/models/cnn_model_pytorch.py` - Superseded by enhanced version
- `NN/models/enhanced_cnn_with_orderbook.py` - Functionality integrated elsewhere
- `NN/models/transformer_model_pytorch.py` - Basic implementation superseded
- `training/williams_market_structure.py` - Fallback no longer needed
### Enhanced Training System (5 files removed)
- `enhanced_rl_diagnostic.py` - Diagnostic script no longer needed
- `enhanced_realtime_training.py` - Functionality integrated into orchestrator
- `enhanced_rl_training_integration.py` - Superseded by orchestrator integration
- `test_enhanced_training.py` - Test for removed functionality
- `run_enhanced_cob_training.py` - Runner integrated into main system
### Test Files (3 files removed)
- `tests/test_enhanced_rl_status.py` - Testing removed enhanced RL system
- `tests/test_enhanced_dashboard_training.py` - Testing removed training system
- `tests/test_enhanced_system.py` - Testing removed enhanced system
## ✅ Files Preserved (Production Models)
### Core Production Models
- 🔒 `NN/models/cnn_model.py` - Main production CNN (Enhanced, 256+ channels)
- 🔒 `NN/models/dqn_agent.py` - Main production DQN (Enhanced CNN backbone)
- 🔒 `NN/models/cob_rl_model.py` - COB-specific RL (400M+ parameters)
- 🔒 `core/nn_decision_fusion.py` - Neural decision fusion
### Advanced Architectures (Archived for Future Use)
- 📦 `NN/models/advanced_transformer_trading.py` - 46M parameter transformer
- 📦 `NN/models/enhanced_cnn.py` - Alternative CNN architecture
- 📦 `NN/models/transformer_model.py` - MoE and transformer concepts
### Management Systems
- 🔒 `model_manager.py` - Model lifecycle management
- 🔒 `utils/checkpoint_manager.py` - Checkpoint management
## 🔄 Updates Made
### Import Updates
- ✅ Updated `NN/models/__init__.py` to reflect removed files
- ✅ Fixed imports to use correct remaining implementations
- ✅ Added proper exports for production models
### Architecture Compliance
- ✅ Maintained single source of truth for each model type
- ✅ Preserved all good architectural ideas in documentation
- ✅ Kept production system fully functional
## 💡 Good Ideas Preserved in Documentation
### Architecture Patterns
1. **Multi-Scale Processing** - Multiple kernel sizes and attention scales
2. **Attention Mechanisms** - Multi-head, self-attention, spatial attention
3. **Residual Connections** - Pre-activation, enhanced residual blocks
4. **Adaptive Architecture** - Dynamic network rebuilding
5. **Normalization Strategies** - GroupNorm, LayerNorm for different scenarios
### Training Innovations
1. **Experience Replay Variants** - Priority replay, example sifting
2. **Mixed Precision Training** - GPU optimization and memory efficiency
3. **Checkpoint Management** - Performance-based saving
4. **Model Fusion** - Neural decision fusion, MoE architectures
### Market-Specific Features
1. **Order Book Integration** - COB-specific preprocessing
2. **Market Regime Detection** - Regime-aware models
3. **Uncertainty Quantification** - Confidence estimation
4. **Position Awareness** - Position-aware action selection
## 📊 Cleanup Statistics
| Category | Files Analyzed | Files Removed | Files Preserved | Good Ideas Documented |
|----------|----------------|---------------|-----------------|----------------------|
| CNN Models | 5 | 4 | 1 | 12 |
| Transformer Models | 3 | 1 | 2 | 8 |
| RL Models | 2 | 0 | 2 | 6 |
| Training Systems | 5 | 5 | 0 | 10 |
| Test Files | 50+ | 3 | 47+ | - |
| **Total** | **65+** | **13** | **52+** | **36** |
## 🎯 Results
### Space Saved
- **Removed Files**: 13 files (~150KB of code)
- **Reduced Complexity**: Eliminated 4 redundant CNN implementations
- **Cleaner Architecture**: Single source of truth for each model type
### Knowledge Preserved
- **Comprehensive Documentation**: All good ideas documented in detail
- **Implementation Roadmap**: Clear path for future integrations
- **Architecture Patterns**: Reusable patterns identified and documented
### Production System
- **Zero Downtime**: All production models preserved and functional
- **Enhanced Imports**: Cleaner import structure
- **Future Ready**: Clear path for integrating documented innovations
## 🚀 Next Steps
### High Priority Integrations
1. Multi-scale attention mechanisms → Main CNN
2. Market regime detection → Orchestrator
3. Uncertainty quantification → Decision fusion
4. Enhanced experience replay → Main DQN
### Medium Priority
1. Relative positional encoding → Future transformer
2. Advanced normalization strategies → All models
3. Adaptive architecture features → Main models
### Future Considerations
1. MoE architecture for ensemble learning
2. Ultra-massive model variants for specialized tasks
3. Advanced transformer integration when needed
## ✅ Conclusion
Successfully cleaned up the project while:
- **Preserving** all production functionality
- **Documenting** valuable architectural innovations
- **Reducing** code complexity and redundancy
- **Maintaining** clear upgrade paths for future enhancements
The project is now cleaner, more maintainable, and ready for focused development on the core production models while having a clear roadmap for integrating the best ideas from the removed implementations.

Deleted file: Model Implementations Analysis Report

@@ -1,303 +0,0 @@
# Model Implementations Analysis Report
*Generated: 2024-12-19*
## Executive Summary
This report analyzes all model implementations in the gogo2 trading system to identify valuable concepts and architectures before cleanup. The project contains multiple implementations of similar models, some unused, some experimental, and some production-ready.
## Current Model Ecosystem
### 🧠 CNN Models (5 Implementations)
#### 1. **`NN/models/cnn_model.py`** - Production Enhanced CNN
- **Status**: Currently used
- **Architecture**: Ultra-massive 256+ channel architecture with 12+ residual blocks
- **Key Features**:
- Multi-head attention mechanisms (16 heads)
- Multi-scale convolutional paths (3, 5, 7, 9 kernels)
- Spatial attention blocks
- GroupNorm for batch_size=1 compatibility
- Memory barriers to prevent in-place operations
- 2-action system optimized (BUY/SELL)
- **Good Ideas**:
- ✅ Attention mechanisms for temporal relationships
- ✅ Multi-scale feature extraction
- ✅ Robust normalization for single-sample inference
- ✅ Memory management for gradient computation
- ✅ Modular residual architecture
#### 2. **`NN/models/enhanced_cnn.py`** - Alternative Enhanced CNN
- **Status**: Alternative implementation
- **Architecture**: Ultra-massive with 3072+ channels, deep residual blocks
- **Key Features**:
- Self-attention mechanisms
- Pre-activation residual blocks
- Ultra-massive fully connected layers (3072 → 2560 → 2048 → 1536 → 1024)
- Adaptive network rebuilding based on input
- Example sifting dataset for experience replay
- **Good Ideas**:
- ✅ Pre-activation residual design
- ✅ Adaptive architecture based on input shape
- ✅ Experience replay integration in CNN training
- ✅ Ultra-wide hidden layers for complex pattern learning
#### 3. **`NN/models/cnn_model_pytorch.py`** - Standard PyTorch CNN
- **Status**: Standard implementation
- **Architecture**: Standard CNN with basic features
- **Good Ideas**:
- ✅ Clean PyTorch implementation patterns
- ✅ Standard training loops
#### 4. **`NN/models/enhanced_cnn_with_orderbook.py`** - COB-Specific CNN
- **Status**: Specialized for order book data
- **Good Ideas**:
- ✅ Order book specific preprocessing
- ✅ Market microstructure awareness
#### 5. **`training/williams_market_structure.py`** - Fallback CNN
- **Status**: Fallback implementation
- **Good Ideas**:
- ✅ Graceful fallback mechanism
- ✅ Simple architecture for testing
### 🤖 Transformer Models (3 Implementations)
#### 1. **`NN/models/transformer_model.py`** - TensorFlow Transformer
- **Status**: TensorFlow-based (outdated)
- **Architecture**: Classic transformer with positional encoding
- **Key Features**:
- Multi-head attention
- Positional encoding
- Mixture of Experts (MoE) model
- Time series + feature input combination
- **Good Ideas**:
- ✅ Positional encoding for temporal data
- ✅ MoE architecture for ensemble learning
- ✅ Multi-input design (time series + features)
- ✅ Configurable attention heads and layers
#### 2. **`NN/models/transformer_model_pytorch.py`** - PyTorch Transformer
- **Status**: PyTorch migration
- **Good Ideas**:
- ✅ PyTorch implementation patterns
- ✅ Modern transformer architecture
#### 3. **`NN/models/advanced_transformer_trading.py`** - Advanced Trading Transformer
- **Status**: Highly specialized
- **Architecture**: 46M parameter transformer with advanced features
- **Key Features**:
- Relative positional encoding
- Deep multi-scale attention (scales: 1,3,5,7,11,15)
- Market regime detection
- Uncertainty estimation
- Enhanced residual connections
- Layer norm variants
- **Good Ideas**:
- ✅ Relative positional encoding for temporal relationships
- ✅ Multi-scale attention for different time horizons
- ✅ Market regime detection integration
- ✅ Uncertainty quantification
- ✅ Deep attention mechanisms
- ✅ Cross-scale attention
- ✅ Market-specific configuration dataclass
### 🎯 RL Models (2 Implementations)
#### 1. **`NN/models/dqn_agent.py`** - Enhanced DQN Agent
- **Status**: Production system
- **Architecture**: Enhanced CNN backbone with DQN
- **Key Features**:
- Priority experience replay
- Checkpoint management integration
- Mixed precision training
- Position management awareness
- Extrema detection integration
- GPU optimization
- **Good Ideas**:
- ✅ Enhanced CNN as function approximator
- ✅ Priority experience replay
- ✅ Checkpoint management
- ✅ Mixed precision for performance
- ✅ Market context awareness
- ✅ Position-aware action selection
#### 2. **`NN/models/cob_rl_model.py`** - COB-Specific RL
- **Status**: Specialized for order book
- **Architecture**: Massive RL network (400M+ parameters)
- **Key Features**:
- Ultra-massive architecture for complex patterns
- COB-specific preprocessing
- Mixed precision training
- Model interface for easy integration
- **Good Ideas**:
- ✅ Massive capacity for complex market patterns
- ✅ COB-specific design
- ✅ Interface pattern for model management
- ✅ Mixed precision optimization
### 🔗 Decision Fusion Models
#### 1. **`core/nn_decision_fusion.py`** - Neural Decision Fusion
- **Status**: Production system
- **Key Features**:
- Multi-model prediction fusion
- Neural network for weight learning
- Dynamic model registration
- **Good Ideas**:
- ✅ Learnable model weights
- ✅ Dynamic model registration
- ✅ Neural fusion vs simple averaging
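The fused model itself is not shown in this commit; as a rough illustration of "learnable weights vs simple averaging", a minimal fusion head could look like the sketch below. All names and sizes are assumptions, not the project's actual `NeuralDecisionFusion` API:

```python
import torch
import torch.nn as nn

class TinyFusionHead(nn.Module):
    """Learn to combine per-model action probabilities instead of averaging them."""

    def __init__(self, n_models: int, n_actions: int = 2):
        super().__init__()
        # Input: concatenated action probabilities from each registered model
        self.net = nn.Sequential(
            nn.Linear(n_models * n_actions, 32),
            nn.ReLU(),
            nn.Linear(32, n_actions),
        )

    def forward(self, model_probs: torch.Tensor) -> torch.Tensor:
        # model_probs: (batch, n_models * n_actions)
        return torch.softmax(self.net(model_probs), dim=-1)

# Two models, 2-action system (BUY/SELL): CNN says BUY, DQN leans SELL
fusion = TinyFusionHead(n_models=2)
print(fusion(torch.tensor([[0.7, 0.3, 0.4, 0.6]])))  # fused action distribution
```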
### 📊 Model Management Systems
#### 1. **`model_manager.py`** - Comprehensive Model Manager
- **Key Features**:
- Model registry with metadata
- Performance-based cleanup
- Storage management
- Model leaderboard
- 2-action system migration support
- **Good Ideas**:
- ✅ Automated model lifecycle management
- ✅ Performance-based retention
- ✅ Storage monitoring
- ✅ Model versioning
- ✅ Metadata tracking
#### 2. **`utils/checkpoint_manager.py`** - Checkpoint Management
- **Good Ideas**:
- ✅ Legacy model detection
- ✅ Performance-based checkpoint saving
- ✅ Metadata preservation
## Architectural Patterns & Good Ideas
### 🏗️ Architecture Patterns
1. **Multi-Scale Processing**
- Multiple kernel sizes (3,5,7,9,11,15)
- Different attention scales
- Temporal and spatial multi-scale
2. **Attention Mechanisms**
- Multi-head attention
- Self-attention
- Spatial attention
- Cross-scale attention
- Relative positional encoding
3. **Residual Connections**
- Pre-activation residual blocks
- Enhanced residual connections
- Memory barriers for gradient flow
4. **Adaptive Architecture**
- Dynamic network rebuilding
- Input-shape aware models
- Configurable model sizes
5. **Normalization Strategies**
- GroupNorm for batch_size=1
- LayerNorm for transformers
- BatchNorm for standard training
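The GroupNorm point is worth a concrete demonstration: in training mode BatchNorm cannot estimate batch statistics from a single flattened sample, while GroupNorm normalizes within the sample. A minimal repro (shapes are illustrative only):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64)  # a single flattened feature vector, batch_size=1

bn = nn.BatchNorm1d(64)
bn.train()
try:
    bn(x)  # needs more than 1 value per channel to estimate batch statistics
except ValueError as e:
    print("BatchNorm failed:", e)

gn = nn.GroupNorm(num_groups=8, num_channels=64)
print(gn(x).shape)  # works with batch_size=1: torch.Size([1, 64])
```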
### 🔧 Training Innovations
1. **Experience Replay Variants** (a minimal sketch follows this list)
- Priority experience replay
- Example sifting datasets
- Positive experience memory
2. **Mixed Precision Training**
- GPU optimization
- Memory efficiency
- Training speed improvements
3. **Checkpoint Management**
- Performance-based saving
- Legacy model support
- Metadata preservation
4. **Model Fusion**
- Neural decision fusion
- Mixture of Experts
- Dynamic weight learning
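As promised above, a minimal proportional-priority replay buffer. This is a simplified sketch (linear-weighted sampling instead of a sum-tree), not the project's DQN implementation:

```python
import random
from collections import deque

class TinyPrioritizedReplay:
    """Sample transitions proportionally to |TD error| + eps (simplified sketch)."""

    def __init__(self, capacity: int = 10_000, eps: float = 1e-3):
        self.buffer = deque(maxlen=capacity)
        self.priorities = deque(maxlen=capacity)
        self.eps = eps

    def push(self, transition, td_error: float):
        self.buffer.append(transition)
        self.priorities.append(abs(td_error) + self.eps)

    def sample(self, k: int):
        # random.choices performs proportional sampling with replacement
        return random.choices(self.buffer, weights=self.priorities, k=k)
```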
### 💡 Market-Specific Features
1. **Order Book Integration**
- COB-specific preprocessing
- Market microstructure awareness
- Imbalance calculations
2. **Market Regime Detection**
- Regime-aware models
- Adaptive behavior
- Context switching
3. **Uncertainty Quantification**
- Confidence estimation
- Risk-aware decisions
- Uncertainty propagation
4. **Position Awareness**
- Position-aware action selection
- Risk management integration
- Context-dependent decisions
## Recommendations for Cleanup
### ✅ Keep (Production Ready)
- `NN/models/cnn_model.py` - Main production CNN
- `NN/models/dqn_agent.py` - Main production DQN
- `NN/models/cob_rl_model.py` - COB-specific RL
- `core/nn_decision_fusion.py` - Decision fusion
- `model_manager.py` - Model management
- `utils/checkpoint_manager.py` - Checkpoint management
### 📦 Archive (Good Ideas, Not Currently Used)
- `NN/models/advanced_transformer_trading.py` - Advanced transformer concepts
- `NN/models/enhanced_cnn.py` - Alternative CNN architecture
- `NN/models/transformer_model.py` - MoE and transformer concepts
### 🗑️ Remove (Redundant/Outdated)
- `NN/models/cnn_model_pytorch.py` - Superseded by enhanced version
- `NN/models/enhanced_cnn_with_orderbook.py` - Functionality integrated elsewhere
- `NN/models/transformer_model_pytorch.py` - Basic implementation
- `training/williams_market_structure.py` - Fallback no longer needed
### 🔄 Consolidate Ideas
1. **Multi-scale attention** from advanced transformer → integrate into main CNN
2. **Market regime detection** → integrate into orchestrator
3. **Uncertainty estimation** → integrate into decision fusion
4. **Relative positional encoding** → future transformer implementation
5. **Experience replay variants** → integrate into main DQN
## Implementation Priority
### High Priority Integrations
1. Multi-scale attention mechanisms
2. Market regime detection
3. Uncertainty quantification
4. Enhanced experience replay
### Medium Priority
1. Relative positional encoding
2. Advanced normalization strategies
3. Adaptive architecture features
### Low Priority
1. MoE architecture
2. Ultra-massive model variants
3. TensorFlow migration features
## Conclusion
The project contains many innovative ideas spread across multiple implementations. The cleanup should focus on:
1. **Consolidating** the best features into production models
2. **Archiving** implementations with unique concepts
3. **Removing** redundant or superseded code
4. **Documenting** architectural patterns for future reference
The main production models (`cnn_model.py`, `dqn_agent.py`, `cob_rl_model.py`) should be enhanced with the best ideas from alternative implementations before cleanup.

Binary file not shown.

Modified file: MEXC exchange interface (class MEXCInterface)

@@ -5,6 +5,7 @@ import requests
 import hmac
 import hashlib
 from urllib.parse import urlencode, quote_plus
+import json  # Added for json.dumps
 from .exchange_interface import ExchangeInterface
@@ -85,37 +86,40 @@ class MEXCInterface(ExchangeInterface):
         return symbol.replace('/', '_').upper()
 
     def _generate_signature(self, timestamp: str, method: str, endpoint: str, params: Dict[str, Any]) -> str:
-        """Generate signature for private API calls using MEXC's expected parameter order"""
-        # MEXC requires specific parameter ordering, not alphabetical
-        # Based on successful test: symbol, side, type, quantity, timestamp, then other params
-        mexc_param_order = ['symbol', 'side', 'type', 'quantity', 'timestamp', 'recvWindow']
-
-        # Build ordered parameter list
-        ordered_params = []
-
-        # Add parameters in MEXC's expected order
-        for param_name in mexc_param_order:
-            if param_name in params and param_name != 'signature':
-                ordered_params.append(f"{param_name}={params[param_name]}")
-
-        # Add any remaining parameters not in the standard order (alphabetically)
-        remaining_params = {k: v for k, v in params.items() if k not in mexc_param_order and k != 'signature'}
-        for key in sorted(remaining_params.keys()):
-            ordered_params.append(f"{key}={remaining_params[key]}")
-
-        # Create query string (MEXC doesn't use the api_key + timestamp prefix)
-        query_string = '&'.join(ordered_params)
-        logger.debug(f"MEXC signature query string: {query_string}")
+        """Generate signature for private API calls using MEXC's official method"""
+        # MEXC signature format varies by method:
+        # For GET/DELETE: URL-encoded query string of alphabetically sorted parameters.
+        # For POST: JSON string of parameters (no sorting needed).
+        # The API-Secret is used as the HMAC SHA256 key.
+
+        # Remove signature from params to avoid circular inclusion
+        clean_params = {k: v for k, v in params.items() if k != 'signature'}
+
+        parameter_string: str
+
+        if method.upper() == "POST":
+            # For POST requests, the signature parameter is a JSON string.
+            # Sorting keys gives consistent JSON string generation across runs;
+            # even though MEXC says sorting is not required for POST params, it's good practice.
+            parameter_string = json.dumps(clean_params, sort_keys=True, separators=(',', ':'))
+        else:
+            # For GET/DELETE requests, parameters are spliced in dictionary order with & interval
+            sorted_params = sorted(clean_params.items())
+            parameter_string = '&'.join(f"{key}={str(value)}" for key, value in sorted_params)
+
+        # The string to be signed is: accessKey + timestamp + obtained parameter string.
+        string_to_sign = f"{self.api_key}{timestamp}{parameter_string}"
+        logger.debug(f"MEXC string to sign (method {method}): {string_to_sign}")
 
         # Generate HMAC SHA256 signature
         signature = hmac.new(
             self.api_secret.encode('utf-8'),
-            query_string.encode('utf-8'),
+            string_to_sign.encode('utf-8'),
             hashlib.sha256
         ).hexdigest()
 
-        logger.debug(f"MEXC signature: {signature}")
+        logger.debug(f"MEXC generated signature: {signature}")
         return signature
 
     def _send_public_request(self, method: str, endpoint: str, params: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
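The new signing scheme is easy to sanity-check outside the class. A standalone sketch of the GET-style signature with dummy credentials (parameter names are assumed to match MEXC's spot API):

```python
import hashlib
import hmac
import time

api_key, api_secret = "mx0_dummy_key", "dummy_secret"  # placeholder credentials

timestamp = int(time.time() * 1000)
params = {"symbol": "ETHUSDC", "timestamp": timestamp}

# GET/DELETE: alphabetically sorted, URL-style query string
parameter_string = "&".join(f"{k}={v}" for k, v in sorted(params.items()))

# String to sign = accessKey + timestamp + parameter string
string_to_sign = f"{api_key}{timestamp}{parameter_string}"
signature = hmac.new(api_secret.encode("utf-8"), string_to_sign.encode("utf-8"),
                     hashlib.sha256).hexdigest()
print(signature)
```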
@@ -145,7 +149,7 @@ class MEXCInterface(ExchangeInterface):
             logger.error(f"Error in public request to {endpoint}: {e}")
             return {}
 
-    def _send_private_request(self, method: str, endpoint: str, params: Dict[str, Any] = None) -> Optional[Dict[str, Any]]:
+    def _send_private_request(self, method: str, endpoint: str, params: Optional[Dict[str, Any]] = None) -> Optional[Dict[str, Any]]:
         """Send a private request to the exchange with proper signature"""
         if params is None:
             params = {}
@@ -170,8 +174,11 @@ class MEXCInterface(ExchangeInterface):
             if method.upper() == "GET":
                 response = self.session.get(url, headers=headers, params=params, timeout=10)
             elif method.upper() == "POST":
-                # MEXC expects POST parameters as query string, not in body
-                response = self.session.post(url, headers=headers, params=params, timeout=10)
+                # MEXC expects POST parameters as JSON in the request body, not as query string.
+                # The signature is generated from the JSON string of parameters.
+                # We need to exclude 'signature' from the JSON body sent, as it's for the header.
+                params_for_body = {k: v for k, v in params.items() if k != 'signature'}
+                response = self.session.post(url, headers=headers, json=params_for_body, timeout=10)
             else:
                 logger.error(f"Unsupported method: {method}")
                 return None
@@ -217,48 +224,46 @@ class MEXCInterface(ExchangeInterface):
         response = self._send_public_request('GET', endpoint, params)
 
-        if response:
-            # MEXC ticker returns a dictionary if single symbol, list if all symbols
-            if isinstance(response, dict):
-                ticker_data = response
-            elif isinstance(response, list) and len(response) > 0:
-                # If the response is a list, try to find the specific symbol
-                found_ticker = next((item for item in response if item.get('symbol') == formatted_symbol), None)
-                if found_ticker:
-                    ticker_data = found_ticker
-                else:
-                    logger.error(f"Ticker data for {formatted_symbol} not found in response list.")
-                    return None
-            else:
-                logger.error(f"Unexpected ticker response format: {response}")
-                return None
+        if isinstance(response, dict):
+            ticker_data: Dict[str, Any] = response
+        elif isinstance(response, list) and len(response) > 0:
+            found_ticker = next((item for item in response if item.get('symbol') == formatted_symbol), None)
+            if found_ticker:
+                ticker_data = found_ticker
+            else:
+                logger.error(f"Ticker data for {formatted_symbol} not found in response list.")
+                return None
+        else:
+            logger.error(f"Unexpected ticker response format: {response}")
+            return None
 
-            # Extract relevant info and format for universal use
-            last_price = float(ticker_data.get('lastPrice', 0))
-            bid_price = float(ticker_data.get('bidPrice', 0))
-            ask_price = float(ticker_data.get('askPrice', 0))
-            volume = float(ticker_data.get('volume', 0))  # Base asset volume
-
-            # Determine price change and percent change
-            price_change = float(ticker_data.get('priceChange', 0))
-            price_change_percent = float(ticker_data.get('priceChangePercent', 0))
-
-            logger.info(f"MEXC: Got ticker from {endpoint} for {symbol}: ${last_price:.2f}")
-
-            return {
-                'symbol': formatted_symbol,
-                'last': last_price,
-                'bid': bid_price,
-                'ask': ask_price,
-                'volume': volume,
-                'high': float(ticker_data.get('highPrice', 0)),
-                'low': float(ticker_data.get('lowPrice', 0)),
-                'change': price_change_percent,  # This is usually priceChangePercent
-                'exchange': 'MEXC',
-                'raw_data': ticker_data
-            }
-        logger.error(f"Failed to get ticker for {symbol}")
-        return None
+        # At this point, ticker_data is guaranteed to be a Dict[str, Any] due to the above logic.
+        # If it was None, we would have returned early.
+
+        # Extract relevant info and format for universal use
+        last_price = float(ticker_data.get('lastPrice', 0))
+        bid_price = float(ticker_data.get('bidPrice', 0))
+        ask_price = float(ticker_data.get('askPrice', 0))
+        volume = float(ticker_data.get('volume', 0))  # Base asset volume
+
+        # Determine price change and percent change
+        price_change = float(ticker_data.get('priceChange', 0))
+        price_change_percent = float(ticker_data.get('priceChangePercent', 0))
+
+        logger.info(f"MEXC: Got ticker from {endpoint} for {symbol}: ${last_price:.2f}")
+
+        return {
+            'symbol': formatted_symbol,
+            'last': last_price,
+            'bid': bid_price,
+            'ask': ask_price,
+            'volume': volume,
+            'high': float(ticker_data.get('highPrice', 0)),
+            'low': float(ticker_data.get('lowPrice', 0)),
+            'change': price_change_percent,  # This is usually priceChangePercent
+            'exchange': 'MEXC',
+            'raw_data': ticker_data
+        }
 
     def get_api_symbols(self) -> List[str]:
         """Get list of symbols supported for API trading"""
@@ -293,39 +298,89 @@ class MEXCInterface(ExchangeInterface):
             logger.info(f"Supported symbols include: {supported_symbols[:10]}...")  # Show first 10
             return {}
 
+        # Format quantity according to symbol precision requirements
+        formatted_quantity = self._format_quantity_for_symbol(formatted_symbol, quantity)
+        if formatted_quantity is None:
+            logger.error(f"MEXC: Failed to format quantity {quantity} for {formatted_symbol}")
+            return {}
+
+        # Handle order type restrictions for specific symbols
+        final_order_type = self._adjust_order_type_for_symbol(formatted_symbol, order_type.upper())
+
+        # Get price for limit orders
+        final_price = price
+        if final_order_type == 'LIMIT' and price is None:
+            # Get current market price
+            ticker = self.get_ticker(symbol)
+            if ticker and 'last' in ticker:
+                final_price = ticker['last']
+                logger.info(f"MEXC: Using market price ${final_price:.2f} for LIMIT order")
+            else:
+                logger.error(f"MEXC: Could not get market price for LIMIT order on {formatted_symbol}")
+                return {}
+
         endpoint = "order"
         params: Dict[str, Any] = {
             'symbol': formatted_symbol,
             'side': side.upper(),
-            'type': order_type.upper(),
-            'quantity': str(quantity)  # Quantity must be a string
+            'type': final_order_type,
+            'quantity': str(formatted_quantity)  # Quantity must be a string
         }
-        if price is not None:
-            params['price'] = str(price)  # Price must be a string for limit orders
+        if final_price is not None:
+            params['price'] = str(final_price)  # Price must be a string for limit orders
 
-        logger.info(f"MEXC: Placing {side.upper()} {order_type.upper()} order for {quantity} {formatted_symbol} at price {price}")
-
-        # For market orders, some parameters might be optional or handled differently.
-        # Check MEXC API docs for market order specifics (e.g., quoteOrderQty for buy market orders)
-        if order_type.upper() == 'MARKET' and side.upper() == 'BUY':
-            # If it's a market buy order, MEXC often expects quoteOrderQty instead of quantity
-            # Assuming quantity here refers to the base asset, if quoteOrderQty is needed, adjust.
-            # For now, we will stick to quantity and let MEXC handle the conversion if possible
-            pass  # No specific change needed based on the current params structure
+        logger.info(f"MEXC: Placing {side.upper()} {final_order_type} order for {formatted_quantity} {formatted_symbol} at price {final_price}")
 
         try:
             # MEXC API endpoint for placing orders is /api/v3/order (POST)
             order_result = self._send_private_request('POST', endpoint, params)
-            if order_result:
+            if order_result is not None:
                 logger.info(f"MEXC: Order placed successfully: {order_result}")
                 return order_result
             else:
-                logger.error(f"MEXC: Error placing order: {order_result}")
+                logger.error(f"MEXC: Error placing order: request returned None")
                 return {}
         except Exception as e:
             logger.error(f"MEXC: Exception placing order: {e}")
             return {}
 
+    def _format_quantity_for_symbol(self, formatted_symbol: str, quantity: float) -> Optional[float]:
+        """Format quantity according to symbol precision requirements"""
+        try:
+            # Symbol-specific precision rules
+            if formatted_symbol == 'ETHUSDC':
+                # ETHUSDC requires max 5 decimal places, step size 0.000001
+                formatted_qty = round(quantity, 5)
+                # Ensure it meets minimum step size
+                step_size = 0.000001
+                formatted_qty = round(formatted_qty / step_size) * step_size
+                # Round again to remove floating point errors
+                formatted_qty = round(formatted_qty, 6)
+                logger.info(f"MEXC: Formatted ETHUSDC quantity {quantity} -> {formatted_qty}")
+                return formatted_qty
+            elif formatted_symbol == 'BTCUSDC':
+                # Assume similar precision for BTC
+                formatted_qty = round(quantity, 6)
+                step_size = 0.000001
+                formatted_qty = round(formatted_qty / step_size) * step_size
+                formatted_qty = round(formatted_qty, 6)
+                return formatted_qty
+            else:
+                # Default formatting - 6 decimal places
+                return round(quantity, 6)
+        except Exception as e:
+            logger.error(f"Error formatting quantity for {formatted_symbol}: {e}")
+            return None
+
+    def _adjust_order_type_for_symbol(self, formatted_symbol: str, order_type: str) -> str:
+        """Adjust order type based on symbol restrictions"""
+        if formatted_symbol == 'ETHUSDC':
+            # ETHUSDC only supports LIMIT and LIMIT_MAKER orders
+            if order_type == 'MARKET':
+                logger.info(f"MEXC: Converting MARKET order to LIMIT for {formatted_symbol} (MARKET not supported)")
+                return 'LIMIT'
+        return order_type
+
     def cancel_order(self, symbol: str, order_id: str) -> Dict[str, Any]:
         """Cancel an existing order on MEXC."""

Modified file: COB RL model interface (class COBRLModelInterface)

@@ -229,8 +229,8 @@
     Interface for the COB RL model that handles model management, training, and inference
     """
 
-    def __init__(self, model_checkpoint_dir: str = "models/realtime_rl_cob", device: str = None):
-        super().__init__(name="cob_rl_model")  # Initialize ModelInterface with a name
+    def __init__(self, model_checkpoint_dir: str = "models/realtime_rl_cob", device: str = None, name=None, **kwargs):
+        super().__init__(name=name)  # Initialize ModelInterface with a name
         self.model_checkpoint_dir = model_checkpoint_dir
         self.device = torch.device(device if device else ('cuda' if torch.cuda.is_available() else 'cpu'))

Modified file: agent module imports (typo fix)

@@ -5,7 +5,7 @@ import numpy as np
 from collections import deque
 import random
 from typing import Tuple, List
-import osvu
+import os
 import sys
 import logging
 import torch.nn.functional as F

Modified file: EnhancedRealtimeTrainingSystem

@@ -114,6 +114,34 @@ class EnhancedRealtimeTrainingSystem:
         logger.info("Enhanced Real-time Training System initialized")
 
+    def _get_dqn_state_features(self, symbol: str) -> Optional[np.ndarray]:
+        """Get DQN state features from orchestrator"""
+        try:
+            if not self.orchestrator:
+                return None
+
+            # Get DQN state from orchestrator if available
+            if hasattr(self.orchestrator, 'build_comprehensive_rl_state'):
+                rl_state = self.orchestrator.build_comprehensive_rl_state(symbol)
+                if rl_state and 'state_vector' in rl_state:
+                    return np.array(rl_state['state_vector'], dtype=np.float32)
+
+            # Fallback: create basic state from available data
+            if len(self.real_time_data['ohlcv_1m']) > 0:
+                latest_bar = self.real_time_data['ohlcv_1m'][-1]
+                basic_state = [
+                    latest_bar.get('close', 0) / 10000.0,     # Normalized price
+                    latest_bar.get('volume', 0) / 1000000.0,  # Normalized volume
+                    0.0, 0.0, 0.0                             # Placeholder features
+                ]
+                return np.array(basic_state, dtype=np.float32)
+
+            return None
+        except Exception as e:
+            logger.debug(f"Error getting DQN state features for {symbol}: {e}")
+            return None
+
     def start_training(self):
         """Start the enhanced real-time training system"""
         if self.is_training:
@@ -1454,9 +1482,10 @@
             model.train()
             optimizer.zero_grad()
 
-            # Convert numpy arrays to PyTorch tensors
-            features_tensor = torch.from_numpy(features).float()
-            targets_tensor = torch.from_numpy(targets).long()
+            # Convert numpy arrays to PyTorch tensors and move to device
+            device = next(model.parameters()).device
+            features_tensor = torch.from_numpy(features).float().to(device)
+            targets_tensor = torch.from_numpy(targets).long().to(device)
 
             # Ensure features_tensor has the correct shape for CNN (batch_size, channels, height, width)
             # Assuming features are flattened (batch_size, 15*20) and need to be reshaped to (batch_size, 1, 15, 20)
@@ -1471,10 +1500,37 @@
             # If the CNN expects (batch_size, channels, sequence_length)
             # features_tensor = features_tensor.view(features_tensor.shape[0], 1, 15 * 20)  # Example for 1D CNN
 
-            # Let's assume the CNN expects 2D input (batch_size, flattened_features)
+            # Ensure proper shape for CNN input
+            if len(features_tensor.shape) == 2:
+                # If it's (batch_size, features), keep as is for 1D CNN
+                pass
+            elif len(features_tensor.shape) == 1:
+                # If it's (features), add batch dimension
+                features_tensor = features_tensor.unsqueeze(0)
+            else:
+                # Reshape to (batch_size, features) if needed
+                features_tensor = features_tensor.view(features_tensor.shape[0], -1)
+
+            # Limit input size to prevent shape mismatches
+            if features_tensor.shape[1] > 1000:  # Limit to 1000 features
+                features_tensor = features_tensor[:, :1000]
 
             outputs = model(features_tensor)
-            loss = criterion(outputs, targets_tensor)
+
+            # Extract logits from model output (model returns a dictionary)
+            if isinstance(outputs, dict):
+                logits = outputs['logits']
+            elif isinstance(outputs, tuple):
+                logits = outputs[0]  # First element is usually logits
+            else:
+                logits = outputs
+
+            # Ensure logits is a tensor
+            if not isinstance(logits, torch.Tensor):
+                logger.error(f"CNN output is not a tensor: {type(logits)}")
+                return 0.0
+
+            loss = criterion(logits, targets_tensor)
 
             loss.backward()
             optimizer.step()
@@ -1856,17 +1912,23 @@
             if (self.orchestrator and hasattr(self.orchestrator, 'rl_agent')
                 and self.orchestrator.rl_agent):
 
-                # Get Q-values from model
-                q_values = self.orchestrator.rl_agent.act(current_state, return_q_values=True)
-                if isinstance(q_values, tuple):
-                    action, q_vals = q_values
-                    q_values = q_vals.tolist() if hasattr(q_vals, 'tolist') else [0, 0, 0]
+                # Use RL agent to make prediction
+                current_state = self._get_dqn_state_features(symbol)
+                if current_state is None:
+                    return
+                action = self.orchestrator.rl_agent.act(current_state, explore=False)
+
+                # Get Q-values separately if available
+                if hasattr(self.orchestrator.rl_agent, 'policy_net'):
+                    with torch.no_grad():
+                        state_tensor = torch.FloatTensor(current_state).unsqueeze(0).to(self.orchestrator.rl_agent.device)
+                        q_values_tensor = self.orchestrator.rl_agent.policy_net(state_tensor)
+                        if isinstance(q_values_tensor, tuple):
+                            q_values = q_values_tensor[0].cpu().numpy()[0].tolist()
                 else:
-                    action = q_values
                     q_values = [0.33, 0.33, 0.34]  # Default uniform distribution
 
                 confidence = max(q_values) / sum(q_values) if sum(q_values) > 0 else 0.33
             else:
                 # Fallback to technical analysis-based prediction
                 action, q_values, confidence = self._technical_analysis_prediction(symbol)

Deleted file: Orchestrator Architecture Streamlining Plan

@@ -1,229 +0,0 @@
# Orchestrator Architecture Streamlining Plan
## Current State Analysis
### Basic TradingOrchestrator (`core/orchestrator.py`)
- **Size**: 880 lines
- **Purpose**: Core trading decisions, model coordination
- **Features**:
- Model registry and weight management
- CNN and RL prediction combination
- Decision callbacks
- Performance tracking
- Basic RL state building
### Enhanced TradingOrchestrator (`core/enhanced_orchestrator.py`)
- **Size**: 5,743 lines (6.5x larger!)
- **Inherits from**: TradingOrchestrator
- **Additional Features**:
- Universal Data Adapter (5 timeseries)
- COB Integration
- Neural Decision Fusion
- Multi-timeframe analysis
- Market regime detection
- Sensitivity learning
- Pivot point analysis
- Extrema detection
- Context data management
- Williams market structure
- Microstructure analysis
- Order flow analysis
- Cross-asset correlation
- PnL-aware features
- Trade flow features
- Market impact estimation
- Retrospective CNN training
- Cold start predictions
## Problems Identified
### 1. **Massive Feature Bloat**
- Enhanced orchestrator has become a "god object" with too many responsibilities
- Single class doing: trading, analysis, training, data processing, market structure, etc.
- Violates Single Responsibility Principle
### 2. **Code Duplication**
- Many features reimplemented instead of extending base functionality
- Similar RL state building in both classes
- Overlapping market analysis
### 3. **Maintenance Nightmare**
- 5,743 lines in single file is unmaintainable
- Complex interdependencies
- Hard to test individual components
- Performance issues due to size
### 4. **Resource Inefficiency**
- Loading entire enhanced orchestrator even if only basic features needed
- Memory overhead from unused features
- Slower initialization
## Proposed Solution: Modular Architecture
### 1. **Keep Streamlined Base Orchestrator**
```
TradingOrchestrator (core/orchestrator.py)
├── Basic decision making
├── Model coordination
├── Performance tracking
└── Core RL state building
```
### 2. **Create Modular Extensions**
```
core/
├── orchestrator.py (Basic - 880 lines)
├── modules/
│   ├── cob_module.py              # COB integration
│   ├── market_analysis_module.py  # Market regime, volatility
│   ├── multi_timeframe_module.py  # Multi-TF analysis
│   ├── neural_fusion_module.py    # Neural decision fusion
│   ├── pivot_analysis_module.py   # Williams/pivot points
│   ├── extrema_module.py          # Extrema detection
│   ├── microstructure_module.py   # Order flow analysis
│   ├── correlation_module.py      # Cross-asset correlation
│   └── training_module.py         # Advanced training features
```
### 3. **Configurable Enhanced Orchestrator**
```python
class ConfigurableOrchestrator(TradingOrchestrator):
    def __init__(self, data_provider, modules=None):
        super().__init__(data_provider)
        self.modules = {}
        # Load only requested modules
        if modules:
            for module_name in modules:
                self.load_module(module_name)

    def load_module(self, module_name):
        # Dynamically load and initialize module
        pass
```
### 4. **Module Interface**
```python
class OrchestratorModule:
    def __init__(self, orchestrator):
        self.orchestrator = orchestrator

    def initialize(self):
        pass

    def get_features(self, symbol):
        pass

    def get_predictions(self, symbol):
        pass
```
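For illustration, a module written against this interface might look like the sketch below. Everything here is hypothetical: the module, the `data_provider.get_ohlcv` call, and the return format are assumptions, not existing code.

```python
class ExtremaModule(OrchestratorModule):
    """Hypothetical extrema-detection module built on the interface above."""

    def initialize(self):
        self.lookback = 50  # bars of history to scan

    def get_features(self, symbol):
        # Assumed data-provider API; the real one may differ
        candles = self.orchestrator.data_provider.get_ohlcv(symbol, limit=self.lookback)
        closes = [c["close"] for c in candles]
        return {"recent_high": max(closes), "recent_low": min(closes), "last": closes[-1]}

    def get_predictions(self, symbol):
        f = self.get_features(symbol)
        # Toy mean-reversion logic: fade a close at the window extreme
        if f["last"] >= f["recent_high"]:
            return {"action": "SELL", "confidence": 0.6}
        if f["last"] <= f["recent_low"]:
            return {"action": "BUY", "confidence": 0.6}
        return {"action": "HOLD", "confidence": 0.3}
```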
## Implementation Plan
### Phase 1: Extract Core Modules (Week 1)
1. Extract COB integration to `cob_module.py`
2. Extract market analysis to `market_analysis_module.py`
3. Extract neural fusion to `neural_fusion_module.py`
4. Test basic functionality
### Phase 2: Refactor Enhanced Features (Week 2)
1. Move pivot analysis to `pivot_analysis_module.py`
2. Move extrema detection to `extrema_module.py`
3. Move microstructure analysis to `microstructure_module.py`
4. Update imports and dependencies
### Phase 3: Create Configurable System (Week 3)
1. Implement `ConfigurableOrchestrator`
2. Create module loading system
3. Add configuration file support
4. Test different module combinations
### Phase 4: Clean Dashboard Integration (Week 4)
1. Update dashboard to work with both Basic and Configurable
2. Add module status display
3. Dynamic feature enabling/disabling
4. Performance optimization
## Benefits
### 1. **Maintainability**
- Each module ~200-400 lines (manageable)
- Clear separation of concerns
- Individual module testing
- Easier debugging
### 2. **Performance**
- Load only needed features
- Reduced memory footprint
- Faster initialization
- Better resource utilization
### 3. **Flexibility**
- Mix and match features
- Easy to add new modules
- Configuration-driven setup
- Development environment vs production
### 4. **Development**
- Teams can work on individual modules
- Clear interfaces reduce conflicts
- Easier to add new features
- Better code reuse
## Configuration Examples
### Minimal Setup (Basic Trading)
```yaml
orchestrator:
  type: basic
  modules: []
```
### Full Enhanced Setup
```yaml
orchestrator:
  type: configurable
  modules:
    - cob_module
    - neural_fusion_module
    - market_analysis_module
    - pivot_analysis_module
```
### Custom Setup (Research)
```yaml
orchestrator:
  type: configurable
  modules:
    - market_analysis_module
    - extrema_module
    - training_module
```
## Migration Strategy
### 1. **Backward Compatibility**
- Keep current Enhanced orchestrator as deprecated
- Gradually migrate features to modules
- Provide compatibility layer
### 2. **Gradual Migration**
- Start with dashboard using Basic orchestrator
- Add modules one by one
- Test each integration
### 3. **Performance Testing**
- Compare Basic vs Enhanced vs Modular
- Memory usage analysis
- Initialization time comparison
- Decision-making speed tests
## Success Metrics
1. **Code Size**: Enhanced orchestrator < 1,000 lines
2. **Memory**: 50% reduction in memory usage for basic setup
3. **Speed**: 3x faster initialization for basic setup
4. **Maintainability**: Each module < 500 lines
5. **Testing**: 90%+ test coverage per module
This plan will transform the current monolithic enhanced orchestrator into a clean, modular, maintainable system while preserving all functionality and improving performance.

Deleted file: Enhanced CNN Model for Short-Term High-Leverage Trading (documentation)

@@ -1,154 +0,0 @@
# Enhanced CNN Model for Short-Term High-Leverage Trading
This document provides an overview of the enhanced neural network trading system optimized for short-term high-leverage cryptocurrency trading.
## Key Components
The system consists of several integrated components, each optimized for high-frequency trading opportunities:
1. **CNN Model Architecture**: A specialized convolutional neural network designed to detect micro-patterns in price movements.
2. **Custom Loss Function**: Trading-focused loss that prioritizes profitable trades and signal diversity.
3. **Signal Interpreter**: Advanced signal processing with multiple filters to reduce false signals.
4. **Performance Visualization**: Comprehensive analytics for model evaluation and optimization.
## Architecture Improvements
### CNN Model Enhancements
The CNN model has been significantly improved for short-term trading:
- **Micro-Movement Detection**: Dedicated convolutional layers to identify small price patterns that precede larger movements
- **Adaptive Pooling**: Fixed-size output tensors regardless of input window size for consistent prediction
- **Multi-Timeframe Integration**: Ability to process data from multiple timeframes simultaneously
- **Attention Mechanism**: Focus on the most relevant features in price data
- **Dual Prediction Heads**: Separate pathways for action signals and price predictions
### Loss Function Specialization
The custom loss function has been designed specifically for trading:
```python
def compute_trading_loss(self, action_probs, price_pred, targets, future_prices=None):
    # Base classification loss
    action_loss = self.criterion(action_probs, targets)

    # Diversity loss to ensure balanced trading signals
    diversity_loss = ...  # Encourage balanced trading signals

    # Profitability-based loss components
    price_loss = ...   # Penalize incorrect price direction predictions
    profit_loss = ...  # Penalize unprofitable trades heavily

    # Dynamic weighting based on training progress
    total_loss = (action_weight * action_loss +
                  price_weight * price_loss +
                  profit_weight * profit_loss +
                  diversity_weight * diversity_loss)

    return total_loss, action_loss, price_loss
```
Key features:
- Adaptive training phases with progressive focus on profitability
- Punishes wrong price direction predictions more than amplitude errors
- Exponential penalties for unprofitable trades
- Promotes signal diversity to avoid single-class domination
- Win-rate component to encourage strategies that win more often than lose
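The elided diversity term could take several forms; one plausible reading, offered as a sketch rather than the file's actual implementation, penalizes the batch's average action distribution for drifting away from uniform:

```python
import torch

def diversity_loss(action_probs: torch.Tensor) -> torch.Tensor:
    """Penalize batches whose average action distribution collapses onto one class.

    action_probs: (batch, n_actions) softmax outputs from the model.
    """
    mean_probs = action_probs.mean(dim=0)  # average distribution over the batch
    uniform = torch.full_like(mean_probs, 1.0 / mean_probs.numel())
    # KL(mean || uniform): zero when signals are balanced,
    # grows as one action starts to dominate.
    return torch.sum(mean_probs * (torch.log(mean_probs + 1e-8) - torch.log(uniform)))
```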
### Signal Interpreter
The signal interpreter provides robust filtering of model predictions:
- **Confidence Multiplier**: Amplifies high-confidence signals
- **Trend Alignment**: Ensures signals align with the overall market trend
- **Volume Filtering**: Validates signals against volume patterns
- **Oscillation Prevention**: Reduces excessive trading during uncertain periods
- **Performance Tracking**: Built-in metrics for win rate and profit per trade
## Performance Metrics
The model is evaluated on several key metrics:
- **Win Rate**: Percentage of profitable trades
- **PnL**: Overall profit and loss
- **Signal Distribution**: Balance between BUY, SELL, and HOLD signals
- **Confidence Scores**: Certainty level of predictions
## Usage Example
```python
# Initialize the model
model = CNNModelPyTorch(
    window_size=24,
    num_features=10,
    output_size=3,
    timeframes=["1m", "5m", "15m"]
)

# Make predictions
action_probs, price_pred = model.predict(market_data)

# Interpret signals with advanced filtering
interpreter = SignalInterpreter(config={
    'buy_threshold': 0.65,
    'sell_threshold': 0.65,
    'trend_filter_enabled': True
})

signal = interpreter.interpret_signal(
    action_probs,
    price_pred,
    market_data={'trend': current_trend, 'volume': volume_data}
)

# Take action based on the signal
if signal['action'] == 'BUY':
    pass  # Execute buy order
elif signal['action'] == 'SELL':
    pass  # Execute sell order
else:
    pass  # Hold position
```
## Optimization Results
The optimized model has demonstrated:
- Better signal diversity with appropriate balance between actions and holds
- Improved profitability with higher win rates
- Enhanced stability during volatile market conditions
- Faster adaptation to changing market regimes
## Future Improvements
Potential areas for further enhancement:
1. **Reinforcement Learning Integration**: Optimize directly for PnL through RL techniques
2. **Market Regime Detection**: Automatic identification of market states for adaptivity
3. **Multi-Asset Correlation**: Include correlations between different assets
4. **Advanced Risk Management**: Dynamic position sizing based on signal confidence
5. **Ensemble Approach**: Combine multiple model variants for more robust predictions
## Testing Framework
The system includes a comprehensive testing framework:
- **Unit Tests**: For individual components
- **Integration Tests**: For component interactions
- **Performance Backtesting**: For overall strategy evaluation
- **Visualization Tools**: For easier analysis of model behavior
## Performance Tracking
The included visualization module provides comprehensive performance dashboards:
- Loss and accuracy trends
- PnL and win rate metrics
- Signal distribution over time
- Correlation matrix of performance indicators
## Conclusion
This enhanced CNN model provides a robust foundation for short-term high-leverage trading, with specialized components optimized for rapid market movements and signal quality. The custom loss function and advanced signal interpreter work together to maximize profitability while maintaining risk control.
For best results, the model should be regularly retrained with recent market data to adapt to changing market conditions.

Deleted file: Tensor Operation Fixes Report

@@ -1,105 +0,0 @@
# Tensor Operation Fixes Report
*Generated: 2024-12-19*
## 🎯 Issue Summary
The orchestrator was experiencing critical tensor operation errors that prevented model predictions:
1. **Softmax Error**: `softmax() received an invalid combination of arguments - got (tuple, dim=int)`
2. **View Error**: `view size is not compatible with input tensor's size and stride`
3. **Unpacking Error**: `cannot unpack non-iterable NoneType object`
## 🔧 Fixes Applied
### 1. DQN Agent Softmax Fix (`NN/models/dqn_agent.py`)
**Problem**: Q-values tensor had incorrect dimensions for softmax operation.
**Solution**: Added dimension checking and reshaping before softmax:
```python
# Before
sell_confidence = torch.softmax(q_values, dim=1)[0, 0].item()

# After
if q_values.dim() == 1:
    q_values = q_values.unsqueeze(0)
sell_confidence = torch.softmax(q_values, dim=1)[0, 0].item()
```
**Impact**: Prevents tensor dimension mismatch errors in confidence calculations.
### 2. CNN Model View Operations Fix (`NN/models/cnn_model.py`)
**Problem**: `.view()` operations failed due to non-contiguous tensor memory layout.
**Solution**: Replaced `.view()` with `.reshape()` for automatic contiguity handling:
```python
# Before
x = x.view(x.shape[0], -1, x.shape[-1])
embedded = embedded.view(batch_size, seq_len, -1).transpose(1, 2).contiguous()
# After
x = x.reshape(x.shape[0], -1, x.shape[-1])
embedded = embedded.reshape(batch_size, seq_len, -1).transpose(1, 2).contiguous()
```
**Impact**: Eliminates tensor stride incompatibility errors during CNN forward pass.
### 3. Generic Prediction Unpacking Fix (`core/orchestrator.py`)
**Problem**: Model prediction methods returned different formats, causing unpacking errors.
**Solution**: Added robust return value handling:
```python
# Before
action_probs, confidence = model.predict(feature_matrix)

# After
prediction_result = model.predict(feature_matrix)
if isinstance(prediction_result, tuple) and len(prediction_result) == 2:
    action_probs, confidence = prediction_result
elif isinstance(prediction_result, dict):
    action_probs = prediction_result.get('probabilities', None)
    confidence = prediction_result.get('confidence', 0.7)
else:
    action_probs = prediction_result
    confidence = 0.7
```
**Impact**: Prevents unpacking errors when models return different formats.
## 📊 Technical Details
### Root Causes
1. **Tensor Dimension Mismatch**: DQN models sometimes output 1D tensors when 2D expected
2. **Memory Layout Issues**: `.view()` requires contiguous memory, `.reshape()` handles non-contiguous
3. **API Inconsistency**: Different models return predictions in different formats
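The second root cause is easy to reproduce in isolation: a transpose makes a tensor non-contiguous, after which `.view()` raises while `.reshape()` silently copies:

```python
import torch

x = torch.randn(4, 8).t()  # transpose makes the layout non-contiguous
print(x.is_contiguous())   # False

try:
    x.view(-1)  # .view() requires a compatible stride; this raises
except RuntimeError as e:
    print("view failed:", e)

print(x.reshape(-1).shape)  # .reshape() copies when needed: torch.Size([32])
```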
### Best Practices Applied
- **Defensive Programming**: Check tensor dimensions before operations
- **Memory Safety**: Use `.reshape()` instead of `.view()` for flexibility
- **API Robustness**: Handle multiple return formats gracefully
## 🎯 Expected Results
After these fixes:
- ✅ DQN predictions should work without softmax errors
- ✅ CNN predictions should work without view/stride errors
- ✅ Generic model predictions should work without unpacking errors
- ✅ Orchestrator should generate proper trading decisions
## 🔄 Testing Recommendations
1. **Run Dashboard**: Test that predictions are generated successfully
2. **Monitor Logs**: Check for reduction in tensor operation errors
3. **Verify Trading Signals**: Ensure BUY/SELL/HOLD decisions are made
4. **Performance Check**: Confirm no significant performance degradation
## 📝 Notes
- Some linter errors remain but are related to missing attributes, not tensor operations
- The core tensor operation issues have been resolved
- Models should now make predictions without crashing the orchestrator

Modified file: trading configuration (mexc_trading section)

@@ -162,11 +162,11 @@ mexc_trading:
   trading_mode: simulation  # simulation, testnet, live
 
   # Position sizing as percentage of account balance
-  base_position_percent: 5.0     # 5% base position of account
-  max_position_percent: 20.0     # 20% max position of account
-  min_position_percent: 2.0      # 2% min position of account
-  leverage: 50.0                 # 50x leverage (adjustable in UI)
-  simulation_account_usd: 100.0  # $100 simulation account balance
+  base_position_percent: 1      # 1% base position of account (MUCH SAFER)
+  max_position_percent: 5.0     # 5% max position of account (REDUCED)
+  min_position_percent: 0.5     # 0.5% min position of account (REDUCED)
+  leverage: 1.0                 # 1x leverage (NO LEVERAGE FOR TESTING)
+  simulation_account_usd: 99.9  # ~$100 simulation account balance
 
   # Risk management
   max_daily_loss_usd: 200.0
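For orientation, the new numbers translate into order sizes roughly as follows (a sketch; the actual sizing logic lives elsewhere in the codebase):

```python
account_usd = 99.9            # simulation_account_usd
base_pct, max_pct = 1.0, 5.0  # base/max position percent
leverage = 1.0

base_position = account_usd * base_pct / 100 * leverage  # ~$1.00
max_position = account_usd * max_pct / 100 * leverage    # ~$5.00
print(f"base ${base_position:.2f}, max ${max_position:.2f}")
```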

Deleted file: Bookmap Order Book Data Provider (Python module)

@@ -1,952 +0,0 @@
"""
Bookmap Order Book Data Provider
This module integrates with Bookmap to gather:
- Current Order Book (COB) data
- Session Volume Profile (SVP) data
- Order book sweeps and momentum trades detection
- Real-time order size heatmap matrix (last 10 minutes)
- Level 2 market depth analysis
The data is processed and fed to CNN and DQN networks for enhanced trading decisions.
"""
import asyncio
import json
import logging
import time
import websockets
import numpy as np
import pandas as pd
from datetime import datetime, timedelta
from typing import Dict, List, Optional, Tuple, Any, Callable
from collections import deque, defaultdict
from dataclasses import dataclass
from threading import Thread, Lock
import requests
logger = logging.getLogger(__name__)
@dataclass
class OrderBookLevel:
"""Represents a single order book level"""
price: float
size: float
orders: int
side: str # 'bid' or 'ask'
timestamp: datetime
@dataclass
class OrderBookSnapshot:
"""Complete order book snapshot"""
symbol: str
timestamp: datetime
bids: List[OrderBookLevel]
asks: List[OrderBookLevel]
spread: float
mid_price: float
@dataclass
class VolumeProfileLevel:
"""Volume profile level data"""
price: float
volume: float
buy_volume: float
sell_volume: float
trades_count: int
vwap: float
@dataclass
class OrderFlowSignal:
"""Order flow signal detection"""
timestamp: datetime
signal_type: str # 'sweep', 'absorption', 'iceberg', 'momentum'
price: float
volume: float
confidence: float
description: str
class BookmapDataProvider:
"""
Real-time order book data provider using Bookmap-style analysis
Features:
- Level 2 order book monitoring
- Order flow detection (sweeps, absorptions)
- Volume profile analysis
- Order size heatmap generation
- Market microstructure analysis
"""
def __init__(self, symbols: List[str] = None, depth_levels: int = 20):
"""
Initialize Bookmap data provider
Args:
symbols: List of symbols to monitor
depth_levels: Number of order book levels to track
"""
self.symbols = symbols or ['ETHUSDT', 'BTCUSDT']
self.depth_levels = depth_levels
self.is_streaming = False
# Order book data storage
self.order_books: Dict[str, OrderBookSnapshot] = {}
self.order_book_history: Dict[str, deque] = {}
self.volume_profiles: Dict[str, List[VolumeProfileLevel]] = {}
# Heatmap data (10-minute rolling window)
self.heatmap_window = timedelta(minutes=10)
self.order_heatmaps: Dict[str, deque] = {}
self.price_levels: Dict[str, List[float]] = {}
# Order flow detection
self.flow_signals: Dict[str, deque] = {}
self.sweep_threshold = 0.8 # Minimum confidence for sweep detection
self.absorption_threshold = 0.7 # Minimum confidence for absorption
# Market microstructure metrics
self.bid_ask_spreads: Dict[str, deque] = {}
self.order_book_imbalances: Dict[str, deque] = {}
self.liquidity_metrics: Dict[str, Dict] = {}
# WebSocket connections
self.websocket_tasks: Dict[str, asyncio.Task] = {}
self.data_lock = Lock()
# Callbacks for CNN/DQN integration
self.cnn_callbacks: List[Callable] = []
self.dqn_callbacks: List[Callable] = []
# Performance tracking
self.update_counts = defaultdict(int)
self.last_update_times = {}
# Initialize data structures
for symbol in self.symbols:
self.order_book_history[symbol] = deque(maxlen=1000)
self.order_heatmaps[symbol] = deque(maxlen=600) # 10 min at 1s intervals
self.flow_signals[symbol] = deque(maxlen=500)
self.bid_ask_spreads[symbol] = deque(maxlen=1000)
self.order_book_imbalances[symbol] = deque(maxlen=1000)
self.liquidity_metrics[symbol] = {
'total_bid_size': 0.0,
'total_ask_size': 0.0,
'weighted_mid': 0.0,
'liquidity_ratio': 1.0
}
logger.info(f"BookmapDataProvider initialized for {len(self.symbols)} symbols")
logger.info(f"Tracking {depth_levels} order book levels per side")
def add_cnn_callback(self, callback: Callable[[str, Dict], None]):
"""Add callback for CNN model updates"""
self.cnn_callbacks.append(callback)
logger.info(f"Added CNN callback: {len(self.cnn_callbacks)} total")
def add_dqn_callback(self, callback: Callable[[str, Dict], None]):
"""Add callback for DQN model updates"""
self.dqn_callbacks.append(callback)
logger.info(f"Added DQN callback: {len(self.dqn_callbacks)} total")
async def start_streaming(self):
"""Start real-time order book streaming"""
if self.is_streaming:
logger.warning("Bookmap streaming already active")
return
self.is_streaming = True
logger.info("Starting Bookmap order book streaming")
# Start order book streams for each symbol
for symbol in self.symbols:
# Order book depth stream
depth_task = asyncio.create_task(self._stream_order_book_depth(symbol))
self.websocket_tasks[f"{symbol}_depth"] = depth_task
# Trade stream for order flow analysis
trade_task = asyncio.create_task(self._stream_trades(symbol))
self.websocket_tasks[f"{symbol}_trades"] = trade_task
# Start analysis threads
analysis_task = asyncio.create_task(self._continuous_analysis())
self.websocket_tasks["analysis"] = analysis_task
logger.info(f"Started streaming for {len(self.symbols)} symbols")
async def stop_streaming(self):
"""Stop order book streaming"""
if not self.is_streaming:
return
logger.info("Stopping Bookmap streaming")
self.is_streaming = False
# Cancel all tasks
for name, task in self.websocket_tasks.items():
if not task.done():
task.cancel()
try:
await task
except asyncio.CancelledError:
pass
self.websocket_tasks.clear()
logger.info("Bookmap streaming stopped")
async def _stream_order_book_depth(self, symbol: str):
"""Stream order book depth data"""
binance_symbol = symbol.lower()
url = f"wss://stream.binance.com:9443/ws/{binance_symbol}@depth20@100ms"
while self.is_streaming:
try:
async with websockets.connect(url) as websocket:
logger.info(f"Order book depth WebSocket connected for {symbol}")
async for message in websocket:
if not self.is_streaming:
break
try:
data = json.loads(message)
await self._process_depth_update(symbol, data)
except Exception as e:
logger.warning(f"Error processing depth for {symbol}: {e}")
except Exception as e:
logger.error(f"Depth WebSocket error for {symbol}: {e}")
if self.is_streaming:
await asyncio.sleep(2)
async def _stream_trades(self, symbol: str):
"""Stream trade data for order flow analysis"""
binance_symbol = symbol.lower()
url = f"wss://stream.binance.com:9443/ws/{binance_symbol}@trade"
while self.is_streaming:
try:
async with websockets.connect(url) as websocket:
logger.info(f"Trade WebSocket connected for {symbol}")
async for message in websocket:
if not self.is_streaming:
break
try:
data = json.loads(message)
await self._process_trade_update(symbol, data)
except Exception as e:
logger.warning(f"Error processing trade for {symbol}: {e}")
except Exception as e:
logger.error(f"Trade WebSocket error for {symbol}: {e}")
if self.is_streaming:
await asyncio.sleep(2)
async def _process_depth_update(self, symbol: str, data: Dict):
"""Process order book depth update"""
try:
timestamp = datetime.now()
# Parse bids and asks
bids = []
asks = []
for bid_data in data.get('bids', []):
price = float(bid_data[0])
size = float(bid_data[1])
bids.append(OrderBookLevel(
price=price,
size=size,
orders=1, # Binance doesn't provide order count
side='bid',
timestamp=timestamp
))
for ask_data in data.get('asks', []):
price = float(ask_data[0])
size = float(ask_data[1])
asks.append(OrderBookLevel(
price=price,
size=size,
orders=1,
side='ask',
timestamp=timestamp
))
# Sort order book levels
bids.sort(key=lambda x: x.price, reverse=True)
asks.sort(key=lambda x: x.price)
# Calculate spread and mid price
if bids and asks:
best_bid = bids[0].price
best_ask = asks[0].price
spread = best_ask - best_bid
mid_price = (best_bid + best_ask) / 2
else:
spread = 0.0
mid_price = 0.0
# Create order book snapshot
snapshot = OrderBookSnapshot(
symbol=symbol,
timestamp=timestamp,
bids=bids,
asks=asks,
spread=spread,
mid_price=mid_price
)
with self.data_lock:
self.order_books[symbol] = snapshot
self.order_book_history[symbol].append(snapshot)
# Update liquidity metrics
self._update_liquidity_metrics(symbol, snapshot)
# Update order book imbalance
self._calculate_order_book_imbalance(symbol, snapshot)
# Update heatmap data
self._update_order_heatmap(symbol, snapshot)
# Update counters
self.update_counts[f"{symbol}_depth"] += 1
self.last_update_times[f"{symbol}_depth"] = timestamp
except Exception as e:
logger.error(f"Error processing depth update for {symbol}: {e}")
async def _process_trade_update(self, symbol: str, data: Dict):
"""Process trade data for order flow analysis"""
try:
timestamp = datetime.fromtimestamp(int(data['T']) / 1000)
price = float(data['p'])
quantity = float(data['q'])
is_buyer_maker = data['m']
# Analyze for order flow signals
await self._analyze_order_flow(symbol, timestamp, price, quantity, is_buyer_maker)
# Update volume profile
self._update_volume_profile(symbol, price, quantity, is_buyer_maker)
self.update_counts[f"{symbol}_trades"] += 1
except Exception as e:
logger.error(f"Error processing trade for {symbol}: {e}")
def _update_liquidity_metrics(self, symbol: str, snapshot: OrderBookSnapshot):
"""Update liquidity metrics from order book snapshot"""
try:
total_bid_size = sum(level.size for level in snapshot.bids)
total_ask_size = sum(level.size for level in snapshot.asks)
# Calculate weighted mid price
if snapshot.bids and snapshot.asks:
bid_weight = total_bid_size / (total_bid_size + total_ask_size)
ask_weight = total_ask_size / (total_bid_size + total_ask_size)
weighted_mid = (snapshot.bids[0].price * ask_weight +
snapshot.asks[0].price * bid_weight)
else:
weighted_mid = snapshot.mid_price
# Liquidity ratio (bid/ask balance)
if total_ask_size > 0:
liquidity_ratio = total_bid_size / total_ask_size
else:
liquidity_ratio = 1.0
self.liquidity_metrics[symbol] = {
'total_bid_size': total_bid_size,
'total_ask_size': total_ask_size,
'weighted_mid': weighted_mid,
'liquidity_ratio': liquidity_ratio,
'spread_bps': (snapshot.spread / snapshot.mid_price) * 10000 if snapshot.mid_price > 0 else 0
}
except Exception as e:
logger.error(f"Error updating liquidity metrics for {symbol}: {e}")
def _calculate_order_book_imbalance(self, symbol: str, snapshot: OrderBookSnapshot):
"""Calculate order book imbalance ratio"""
try:
if not snapshot.bids or not snapshot.asks:
return
# Calculate imbalance for top N levels
n_levels = min(5, len(snapshot.bids), len(snapshot.asks))
total_bid_size = sum(snapshot.bids[i].size for i in range(n_levels))
total_ask_size = sum(snapshot.asks[i].size for i in range(n_levels))
if total_bid_size + total_ask_size > 0:
imbalance = (total_bid_size - total_ask_size) / (total_bid_size + total_ask_size)
else:
imbalance = 0.0
self.order_book_imbalances[symbol].append({
'timestamp': snapshot.timestamp,
'imbalance': imbalance,
'bid_size': total_bid_size,
'ask_size': total_ask_size
})
except Exception as e:
logger.error(f"Error calculating imbalance for {symbol}: {e}")
def _update_order_heatmap(self, symbol: str, snapshot: OrderBookSnapshot):
"""Update order size heatmap matrix"""
try:
# Create heatmap entry
heatmap_entry = {
'timestamp': snapshot.timestamp,
'mid_price': snapshot.mid_price,
'levels': {}
}
# Add bid levels
for level in snapshot.bids:
price_offset = level.price - snapshot.mid_price
heatmap_entry['levels'][price_offset] = {
'side': 'bid',
'size': level.size,
'price': level.price
}
# Add ask levels
for level in snapshot.asks:
price_offset = level.price - snapshot.mid_price
heatmap_entry['levels'][price_offset] = {
'side': 'ask',
'size': level.size,
'price': level.price
}
self.order_heatmaps[symbol].append(heatmap_entry)
# Clean old entries (keep 10 minutes)
cutoff_time = snapshot.timestamp - self.heatmap_window
while (self.order_heatmaps[symbol] and
self.order_heatmaps[symbol][0]['timestamp'] < cutoff_time):
self.order_heatmaps[symbol].popleft()
except Exception as e:
logger.error(f"Error updating heatmap for {symbol}: {e}")
def _update_volume_profile(self, symbol: str, price: float, quantity: float, is_buyer_maker: bool):
"""Update volume profile with new trade"""
try:
# Initialize if not exists
if symbol not in self.volume_profiles:
self.volume_profiles[symbol] = []
# Find or create price level
price_level = None
for level in self.volume_profiles[symbol]:
if abs(level.price - price) < 0.01: # Price tolerance
price_level = level
break
if not price_level:
price_level = VolumeProfileLevel(
price=price,
volume=0.0,
buy_volume=0.0,
sell_volume=0.0,
trades_count=0,
vwap=price
)
self.volume_profiles[symbol].append(price_level)
# Update volume profile
volume = price * quantity
old_total = price_level.volume
price_level.volume += volume
price_level.trades_count += 1
if is_buyer_maker:
price_level.sell_volume += volume
else:
price_level.buy_volume += volume
# Update VWAP
if price_level.volume > 0:
price_level.vwap = ((price_level.vwap * old_total) + (price * volume)) / price_level.volume
except Exception as e:
logger.error(f"Error updating volume profile for {symbol}: {e}")
async def _analyze_order_flow(self, symbol: str, timestamp: datetime, price: float,
quantity: float, is_buyer_maker: bool):
"""Analyze order flow for sweep and absorption patterns"""
try:
# Get recent order book data
if symbol not in self.order_book_history or not self.order_book_history[symbol]:
return
recent_snapshots = list(self.order_book_history[symbol])[-10:] # Last 10 snapshots
# Check for order book sweeps
sweep_signal = self._detect_order_sweep(symbol, recent_snapshots, price, quantity, is_buyer_maker)
if sweep_signal:
self.flow_signals[symbol].append(sweep_signal)
await self._notify_flow_signal(symbol, sweep_signal)
# Check for absorption patterns
absorption_signal = self._detect_absorption(symbol, recent_snapshots, price, quantity)
if absorption_signal:
self.flow_signals[symbol].append(absorption_signal)
await self._notify_flow_signal(symbol, absorption_signal)
# Check for momentum trades
momentum_signal = self._detect_momentum_trade(symbol, price, quantity, is_buyer_maker)
if momentum_signal:
self.flow_signals[symbol].append(momentum_signal)
await self._notify_flow_signal(symbol, momentum_signal)
except Exception as e:
logger.error(f"Error analyzing order flow for {symbol}: {e}")
def _detect_order_sweep(self, symbol: str, snapshots: List[OrderBookSnapshot],
price: float, quantity: float, is_buyer_maker: bool) -> Optional[OrderFlowSignal]:
"""Detect order book sweep patterns"""
try:
if len(snapshots) < 2:
return None
before_snapshot = snapshots[-2]
after_snapshot = snapshots[-1]
# Check if multiple levels were consumed
if is_buyer_maker: # Sell order, check ask side
levels_consumed = 0
total_consumed_size = 0
for level in before_snapshot.asks[:5]: # Check top 5 levels
if level.price <= price:
levels_consumed += 1
total_consumed_size += level.size
if levels_consumed >= 2 and total_consumed_size > quantity * 1.5:
confidence = min(0.9, levels_consumed / 5.0 + 0.3)
return OrderFlowSignal(
timestamp=datetime.now(),
signal_type='sweep',
price=price,
volume=quantity * price,
confidence=confidence,
description=f"Sell sweep: {levels_consumed} levels, {total_consumed_size:.2f} size"
)
else: # Buy order, check bid side
levels_consumed = 0
total_consumed_size = 0
for level in before_snapshot.bids[:5]:
if level.price >= price:
levels_consumed += 1
total_consumed_size += level.size
if levels_consumed >= 2 and total_consumed_size > quantity * 1.5:
confidence = min(0.9, levels_consumed / 5.0 + 0.3)
return OrderFlowSignal(
timestamp=datetime.now(),
signal_type='sweep',
price=price,
volume=quantity * price,
confidence=confidence,
description=f"Buy sweep: {levels_consumed} levels, {total_consumed_size:.2f} size"
)
return None
except Exception as e:
logger.error(f"Error detecting sweep for {symbol}: {e}")
return None
def _detect_absorption(self, symbol: str, snapshots: List[OrderBookSnapshot],
price: float, quantity: float) -> Optional[OrderFlowSignal]:
"""Detect absorption patterns where large orders are absorbed without price movement"""
try:
if len(snapshots) < 3:
return None
# Check if large order was absorbed with minimal price impact
volume_threshold = 10000 # $10K minimum for absorption
price_impact_threshold = 0.001 # 0.1% max price impact
trade_value = price * quantity
if trade_value < volume_threshold:
return None
# Calculate price impact
price_before = snapshots[-3].mid_price
price_after = snapshots[-1].mid_price
price_impact = abs(price_after - price_before) / price_before
if price_impact < price_impact_threshold:
confidence = min(0.8, (trade_value / 50000) * 0.5 + 0.3) # Scale with size
return OrderFlowSignal(
timestamp=datetime.now(),
signal_type='absorption',
price=price,
volume=trade_value,
confidence=confidence,
description=f"Absorption: ${trade_value:.0f} with {price_impact*100:.3f}% impact"
)
return None
except Exception as e:
logger.error(f"Error detecting absorption for {symbol}: {e}")
return None
def _detect_momentum_trade(self, symbol: str, price: float, quantity: float,
is_buyer_maker: bool) -> Optional[OrderFlowSignal]:
"""Detect momentum trades based on size and direction"""
try:
trade_value = price * quantity
momentum_threshold = 25000 # $25K minimum for momentum classification
if trade_value < momentum_threshold:
return None
# Calculate confidence based on trade size
confidence = min(0.9, trade_value / 100000 * 0.6 + 0.3)
direction = "sell" if is_buyer_maker else "buy"
return OrderFlowSignal(
timestamp=datetime.now(),
signal_type='momentum',
price=price,
volume=trade_value,
confidence=confidence,
description=f"Large {direction}: ${trade_value:.0f}"
)
except Exception as e:
logger.error(f"Error detecting momentum for {symbol}: {e}")
return None
async def _notify_flow_signal(self, symbol: str, signal: OrderFlowSignal):
"""Notify CNN and DQN models of order flow signals"""
try:
signal_data = {
'signal_type': signal.signal_type,
'price': signal.price,
'volume': signal.volume,
'confidence': signal.confidence,
'timestamp': signal.timestamp,
'description': signal.description
}
# Notify CNN callbacks
for callback in self.cnn_callbacks:
try:
callback(symbol, signal_data)
except Exception as e:
logger.warning(f"Error in CNN callback: {e}")
# Notify DQN callbacks
for callback in self.dqn_callbacks:
try:
callback(symbol, signal_data)
except Exception as e:
logger.warning(f"Error in DQN callback: {e}")
except Exception as e:
logger.error(f"Error notifying flow signal: {e}")
async def _continuous_analysis(self):
"""Continuous analysis of market microstructure"""
while self.is_streaming:
try:
await asyncio.sleep(1) # Analyze every second
for symbol in self.symbols:
# Generate CNN features
cnn_features = self.get_cnn_features(symbol)
if cnn_features is not None:
for callback in self.cnn_callbacks:
try:
callback(symbol, {'features': cnn_features, 'type': 'orderbook'})
except Exception as e:
logger.warning(f"Error in CNN feature callback: {e}")
# Generate DQN state features
dqn_features = self.get_dqn_state_features(symbol)
if dqn_features is not None:
for callback in self.dqn_callbacks:
try:
callback(symbol, {'state': dqn_features, 'type': 'orderbook'})
except Exception as e:
logger.warning(f"Error in DQN state callback: {e}")
except Exception as e:
logger.error(f"Error in continuous analysis: {e}")
await asyncio.sleep(5)
def get_cnn_features(self, symbol: str) -> Optional[np.ndarray]:
"""Generate CNN input features from order book data"""
try:
if symbol not in self.order_books:
return None
snapshot = self.order_books[symbol]
features = []
# Order book features (40 features: 20 levels x 2 sides)
for i in range(min(20, len(snapshot.bids))):
bid = snapshot.bids[i]
features.append(bid.size)
features.append(bid.price - snapshot.mid_price) # Price offset
# Pad if not enough bid levels
while len(features) < 40:
features.extend([0.0, 0.0])
for i in range(min(20, len(snapshot.asks))):
ask = snapshot.asks[i]
features.append(ask.size)
features.append(ask.price - snapshot.mid_price) # Price offset
# Pad if not enough ask levels
while len(features) < 80:
features.extend([0.0, 0.0])
# Liquidity metrics (10 features)
metrics = self.liquidity_metrics.get(symbol, {})
features.extend([
metrics.get('total_bid_size', 0.0),
metrics.get('total_ask_size', 0.0),
metrics.get('liquidity_ratio', 1.0),
metrics.get('spread_bps', 0.0),
snapshot.spread,
metrics.get('weighted_mid', snapshot.mid_price) - snapshot.mid_price,
len(snapshot.bids),
len(snapshot.asks),
snapshot.mid_price,
time.time() % 86400 # Time of day
])
# Order book imbalance features (5 features)
if self.order_book_imbalances[symbol]:
latest_imbalance = self.order_book_imbalances[symbol][-1]
features.extend([
latest_imbalance['imbalance'],
latest_imbalance['bid_size'],
latest_imbalance['ask_size'],
latest_imbalance['bid_size'] + latest_imbalance['ask_size'],
abs(latest_imbalance['imbalance'])
])
else:
features.extend([0.0, 0.0, 0.0, 0.0, 0.0])
# Flow signal features (5 features)
recent_signals = [s for s in self.flow_signals[symbol]
if (datetime.now() - s.timestamp).seconds < 60]
sweep_count = sum(1 for s in recent_signals if s.signal_type == 'sweep')
absorption_count = sum(1 for s in recent_signals if s.signal_type == 'absorption')
momentum_count = sum(1 for s in recent_signals if s.signal_type == 'momentum')
max_confidence = max([s.confidence for s in recent_signals], default=0.0)
total_flow_volume = sum(s.volume for s in recent_signals)
features.extend([
sweep_count,
absorption_count,
momentum_count,
max_confidence,
total_flow_volume
])
return np.array(features, dtype=np.float32)
except Exception as e:
logger.error(f"Error generating CNN features for {symbol}: {e}")
return None
def get_dqn_state_features(self, symbol: str) -> Optional[np.ndarray]:
"""Generate DQN state features from order book data"""
try:
if symbol not in self.order_books:
return None
snapshot = self.order_books[symbol]
state_features = []
# Normalized order book state (20 features)
total_bid_size = sum(level.size for level in snapshot.bids[:10])
total_ask_size = sum(level.size for level in snapshot.asks[:10])
total_size = total_bid_size + total_ask_size
if total_size > 0:
for i in range(min(10, len(snapshot.bids))):
state_features.append(snapshot.bids[i].size / total_size)
# Pad bids
while len(state_features) < 10:
state_features.append(0.0)
for i in range(min(10, len(snapshot.asks))):
state_features.append(snapshot.asks[i].size / total_size)
# Pad asks
while len(state_features) < 20:
state_features.append(0.0)
else:
state_features.extend([0.0] * 20)
# Market state indicators (10 features)
metrics = self.liquidity_metrics.get(symbol, {})
# Normalize spread as percentage
spread_pct = (snapshot.spread / snapshot.mid_price) if snapshot.mid_price > 0 else 0
# Liquidity imbalance
liquidity_ratio = metrics.get('liquidity_ratio', 1.0)
liquidity_imbalance = (liquidity_ratio - 1) / (liquidity_ratio + 1)
# Recent flow signals strength
recent_signals = [s for s in self.flow_signals[symbol]
if (datetime.now() - s.timestamp).seconds < 30]
flow_strength = sum(s.confidence for s in recent_signals) / max(len(recent_signals), 1)
# Price volatility (from recent snapshots)
if len(self.order_book_history[symbol]) >= 10:
recent_prices = [s.mid_price for s in list(self.order_book_history[symbol])[-10:]]
price_volatility = np.std(recent_prices) / np.mean(recent_prices) if recent_prices else 0
else:
price_volatility = 0
state_features.extend([
spread_pct * 10000, # Spread in basis points
liquidity_imbalance,
flow_strength,
price_volatility * 100, # Volatility as percentage
min(len(snapshot.bids), 20) / 20, # Book depth ratio
min(len(snapshot.asks), 20) / 20,
sweep_count / 10 if 'sweep_count' in locals() else 0, # From CNN features
absorption_count / 5 if 'absorption_count' in locals() else 0,
momentum_count / 5 if 'momentum_count' in locals() else 0,
(datetime.now().hour * 60 + datetime.now().minute) / 1440 # Time of day normalized
])
return np.array(state_features, dtype=np.float32)
except Exception as e:
logger.error(f"Error generating DQN features for {symbol}: {e}")
return None
def get_order_heatmap_matrix(self, symbol: str, levels: int = 40) -> Optional[np.ndarray]:
"""Generate order size heatmap matrix for dashboard visualization"""
try:
if symbol not in self.order_heatmaps or not self.order_heatmaps[symbol]:
return None
# Create price levels around current mid price
current_snapshot = self.order_books.get(symbol)
if not current_snapshot:
return None
mid_price = current_snapshot.mid_price
price_step = mid_price * 0.0001 # 1 basis point steps
# Create matrix: time x price levels
time_window = min(600, len(self.order_heatmaps[symbol])) # 10 minutes max
heatmap_matrix = np.zeros((time_window, levels))
# Fill matrix with order sizes
for t, entry in enumerate(list(self.order_heatmaps[symbol])[-time_window:]):
for price_offset, level_data in entry['levels'].items():
# Convert price offset to matrix index
level_idx = int((price_offset + (levels/2) * price_step) / price_step)
if 0 <= level_idx < levels:
size_weight = 1.0 if level_data['side'] == 'bid' else -1.0
heatmap_matrix[t, level_idx] = level_data['size'] * size_weight
return heatmap_matrix
except Exception as e:
logger.error(f"Error generating heatmap matrix for {symbol}: {e}")
return None
def get_volume_profile_data(self, symbol: str) -> Optional[List[Dict]]:
"""Get session volume profile data"""
try:
if symbol not in self.volume_profiles:
return None
profile_data = []
for level in sorted(self.volume_profiles[symbol], key=lambda x: x.price):
profile_data.append({
'price': level.price,
'volume': level.volume,
'buy_volume': level.buy_volume,
'sell_volume': level.sell_volume,
'trades_count': level.trades_count,
'vwap': level.vwap,
'net_volume': level.buy_volume - level.sell_volume
})
return profile_data
except Exception as e:
logger.error(f"Error getting volume profile for {symbol}: {e}")
return None
def get_current_order_book(self, symbol: str) -> Optional[Dict]:
"""Get current order book snapshot"""
try:
if symbol not in self.order_books:
return None
snapshot = self.order_books[symbol]
return {
'timestamp': snapshot.timestamp.isoformat(),
'symbol': symbol,
'mid_price': snapshot.mid_price,
'spread': snapshot.spread,
'bids': [{'price': l.price, 'size': l.size} for l in snapshot.bids[:20]],
'asks': [{'price': l.price, 'size': l.size} for l in snapshot.asks[:20]],
'liquidity_metrics': self.liquidity_metrics.get(symbol, {}),
'recent_signals': [
{
'type': s.signal_type,
'price': s.price,
'volume': s.volume,
'confidence': s.confidence,
'timestamp': s.timestamp.isoformat()
}
for s in list(self.flow_signals[symbol])[-5:] # Last 5 signals
]
}
except Exception as e:
logger.error(f"Error getting order book for {symbol}: {e}")
return None
def get_statistics(self) -> Dict[str, Any]:
"""Get provider statistics"""
return {
'symbols': self.symbols,
'is_streaming': self.is_streaming,
'update_counts': dict(self.update_counts),
'last_update_times': {k: v.isoformat() if isinstance(v, datetime) else v
for k, v in self.last_update_times.items()},
'order_books_active': len(self.order_books),
'flow_signals_total': sum(len(signals) for signals in self.flow_signals.values()),
'cnn_callbacks': len(self.cnn_callbacks),
'dqn_callbacks': len(self.dqn_callbacks),
'websocket_tasks': len(self.websocket_tasks)
}

File diff suppressed because it is too large

View File

@ -34,7 +34,7 @@ class COBIntegration:
Integration layer for Multi-Exchange COB data with gogo2 trading system
"""
-def __init__(self, data_provider: Optional[DataProvider] = None, symbols: Optional[List[str]] = None):
+def __init__(self, data_provider: Optional[DataProvider] = None, symbols: Optional[List[str]] = None, initial_data_limit=None, **kwargs):
"""
Initialize COB Integration
@ -88,7 +88,7 @@
# Start COB provider streaming
try:
logger.info("Starting COB provider streaming...")
await self.cob_provider.start_streaming()
except Exception as e:
logger.error(f"Error starting COB provider streaming: {e}")
# Start a background task instead
@ -112,7 +112,7 @@
"""Stop COB integration"""
logger.info("Stopping COB Integration")
if self.cob_provider:
await self.cob_provider.stop_streaming()
logger.info("COB Integration stopped")
def add_cnn_callback(self, callback: Callable[[str, Dict], None]):
@ -313,7 +313,7 @@
# Get fixed bucket size for the symbol
bucket_size = 1.0 # Default bucket size
if self.cob_provider:
bucket_size = self.cob_provider.fixed_usd_buckets.get(symbol, 1.0)
# Calculate price range for buckets
mid_price = cob_snapshot.volume_weighted_mid
@ -359,15 +359,15 @@
# Get actual Session Volume Profile (SVP) from trade data
svp_data = []
if self.cob_provider:
try:
svp_result = self.cob_provider.get_session_volume_profile(symbol, bucket_size)
if svp_result and 'data' in svp_result:
svp_data = svp_result['data']
logger.debug(f"Retrieved SVP data for {symbol}: {len(svp_data)} price levels")
else:
logger.warning(f"No SVP data available for {symbol}")
except Exception as e:
logger.error(f"Error getting SVP data for {symbol}: {e}")
# Generate market stats
stats = {
@ -405,18 +405,18 @@
# Get additional real-time stats
realtime_stats = {}
if self.cob_provider:
try:
realtime_stats = self.cob_provider.get_realtime_stats(symbol)
if realtime_stats:
stats['realtime_1s'] = realtime_stats.get('1s_stats', {})
stats['realtime_5s'] = realtime_stats.get('5s_stats', {})
else:
stats['realtime_1s'] = {}
stats['realtime_5s'] = {}
except Exception as e:
logger.error(f"Error getting real-time stats for {symbol}: {e}")
stats['realtime_1s'] = {}
stats['realtime_5s'] = {}
return {
'type': 'cob_update',
@ -487,9 +487,9 @@
try:
for symbol in self.symbols:
if self.cob_provider:
cob_snapshot = self.cob_provider.get_consolidated_orderbook(symbol)
if cob_snapshot:
await self._analyze_cob_patterns(symbol, cob_snapshot)
await asyncio.sleep(1)
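
A minimal lifecycle sketch for the integration as diffed here; `COBIntegration.start()` is inferred from the orchestrator change further down, and the stop method name follows the docstring above but is an assumption:

import asyncio

async def run_cob(cob_integration):
    await cob_integration.start()      # wraps cob_provider.start_streaming() per the diff
    try:
        await asyncio.sleep(3600)      # stream for an hour
    finally:
        await cob_integration.stop()   # stops the underlying provider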

View File

@ -46,12 +46,17 @@ import aiohttp.resolver
logger = logging.getLogger(__name__)
# goal: use top 10 exchanges
# https://www.coingecko.com/en/exchanges
class ExchangeType(Enum):
BINANCE = "binance"
COINBASE = "coinbase"
KRAKEN = "kraken"
HUOBI = "huobi"
BITFINEX = "bitfinex"
BYBIT = "bybit"
BITGET = "bitget"
@dataclass
class ExchangeOrderBookLevel:
@ -126,8 +131,8 @@ class MultiExchangeCOBProvider:
self.consolidation_frequency = 100 # ms
# REST API configuration for deep order book
-self.rest_api_frequency = 1000 # ms - full snapshot every 1 second
+self.rest_api_frequency = 2000 # ms - full snapshot every 2 seconds (reduced frequency for deeper data)
-self.rest_depth_limit = 500 # Increased from 100 to 500 levels via REST for maximum depth
+self.rest_depth_limit = 1000 # Increased to 1000 levels via REST for maximum depth
# Exchange configurations
self.exchange_configs = self._initialize_exchange_configs()
@ -288,6 +293,24 @@ class MultiExchangeCOBProvider:
rate_limits={'requests_per_minute': 1000}
)
# Bybit configuration
configs[ExchangeType.BYBIT.value] = ExchangeConfig(
exchange_type=ExchangeType.BYBIT,
weight=0.18,
websocket_url="wss://stream.bybit.com/v5/public/spot",
rest_api_url="https://api.bybit.com",
symbols_mapping={'BTC/USDT': 'BTCUSDT', 'ETH/USDT': 'ETHUSDT'},
rate_limits={'requests_per_minute': 1200}
)
# Bitget configuration
configs[ExchangeType.BITGET.value] = ExchangeConfig(
exchange_type=ExchangeType.BITGET,
weight=0.12,
websocket_url="wss://ws.bitget.com/spot/v1/stream",
rest_api_url="https://api.bitget.com",
symbols_mapping={'BTC/USDT': 'BTCUSDT_SPBL', 'ETH/USDT': 'ETHUSDT_SPBL'},
rate_limits={'requests_per_minute': 1200}
)
return configs
async def start_streaming(self):
@ -459,6 +482,10 @@ class MultiExchangeCOBProvider:
await self._stream_huobi_orderbook(symbol, config)
elif exchange_name == ExchangeType.BITFINEX.value:
await self._stream_bitfinex_orderbook(symbol, config)
elif exchange_name == ExchangeType.BYBIT.value:
await self._stream_bybit_orderbook(symbol, config)
elif exchange_name == ExchangeType.BITGET.value:
await self._stream_bitget_orderbook(symbol, config)
except Exception as e:
logger.error(f"Error streaming {exchange_name} for {symbol}: {e}")
@ -467,6 +494,8 @@ class MultiExchangeCOBProvider:
async def _stream_binance_orderbook(self, symbol: str, config: ExchangeConfig):
"""Stream order book data from Binance"""
try:
# Use partial book depth stream with maximum levels - Binance format
# @depth20@100ms gives us 20 levels at 100ms, but we also have REST API for full depth
ws_url = f"{config.websocket_url}{config.symbols_mapping[symbol].lower()}@depth20@100ms"
logger.info(f"Connecting to Binance WebSocket: {ws_url}")

View File

@ -230,7 +230,8 @@ class TradingOrchestrator:
self.model_states['dqn']['checkpoint_loaded'] = True
self.model_states['dqn']['checkpoint_filename'] = metadata.checkpoint_id
checkpoint_loaded = True
-logger.info(f"DQN checkpoint loaded: {metadata.checkpoint_id} (loss={metadata.loss:.4f})")
+loss_str = f"{metadata.loss:.4f}" if metadata.loss is not None else "N/A"
+logger.info(f"DQN checkpoint loaded: {metadata.checkpoint_id} (loss={loss_str})")
except Exception as e:
logger.warning(f"Error loading DQN checkpoint: {e}")
@ -269,7 +270,8 @@ class TradingOrchestrator:
self.model_states['cnn']['checkpoint_loaded'] = True
self.model_states['cnn']['checkpoint_filename'] = metadata.checkpoint_id
checkpoint_loaded = True
-logger.info(f"CNN checkpoint loaded: {metadata.checkpoint_id} (loss={metadata.loss:.4f})")
+loss_str = f"{metadata.loss:.4f}" if metadata.loss is not None else "N/A"
+logger.info(f"CNN checkpoint loaded: {metadata.checkpoint_id} (loss={loss_str})")
except Exception as e:
logger.warning(f"Error loading CNN checkpoint: {e}")
@ -356,7 +358,8 @@ class TradingOrchestrator:
self.model_states['cob_rl']['checkpoint_loaded'] = True
self.model_states['cob_rl']['checkpoint_filename'] = metadata.checkpoint_id
checkpoint_loaded = True
-logger.info(f"COB RL checkpoint loaded: {metadata.checkpoint_id} (loss={metadata.loss:.4f})")
+loss_str = f"{metadata.loss:.4f}" if metadata.loss is not None else "N/A"
+logger.info(f"COB RL checkpoint loaded: {metadata.checkpoint_id} (loss={loss_str})")
except Exception as e:
logger.warning(f"Error loading COB RL checkpoint: {e}")
@ -547,7 +550,7 @@ class TradingOrchestrator:
if self.cob_integration:
try:
logger.info("Attempting to start COB integration...")
-await self.cob_integration.start_streaming()
+await self.cob_integration.start()
logger.info("COB Integration streaming started successfully.")
except Exception as e:
logger.error(f"Failed to start COB integration streaming: {e}")
@ -945,6 +948,12 @@ class TradingOrchestrator:
rl_prediction = await self._get_rl_prediction(model, symbol)
if rl_prediction:
predictions.append(rl_prediction)
elif isinstance(model, COBRLModelInterface):
# Get COB RL prediction
cob_prediction = await self._get_cob_rl_prediction(model, symbol)
if cob_prediction:
predictions.append(cob_prediction)
else:
# Generic model interface
@ -1004,9 +1013,33 @@ class TradingOrchestrator:
logger.debug(f"Could not enhance CNN features with COB data: {cob_error}") logger.debug(f"Could not enhance CNN features with COB data: {cob_error}")
enhanced_features = feature_matrix enhanced_features = feature_matrix
# Add extrema features if available
if self.extrema_trainer:
try:
extrema_features = self.extrema_trainer.get_context_features_for_model(symbol)
if extrema_features is not None:
# Reshape and tile to match the enhanced_features shape
extrema_features = extrema_features.flatten()
tiled_extrema = np.tile(extrema_features, (enhanced_features.shape[0], enhanced_features.shape[1], 1))
enhanced_features = np.concatenate([enhanced_features, tiled_extrema], axis=2)
logger.debug(f"Enhanced CNN features with Extrema data for {symbol}")
except Exception as extrema_error:
logger.debug(f"Could not enhance CNN features with Extrema data: {extrema_error}")
if enhanced_features is not None:
# Get CNN prediction - use the actual underlying model
try:
# Ensure features are properly shaped and limited
if isinstance(enhanced_features, np.ndarray):
# Flatten and limit features to prevent shape mismatches
enhanced_features = enhanced_features.flatten()
if len(enhanced_features) > 100: # Limit to 100 features
enhanced_features = enhanced_features[:100]
elif len(enhanced_features) < 100: # Pad with zeros
padded = np.zeros(100)
padded[:len(enhanced_features)] = enhanced_features
enhanced_features = padded
if hasattr(model.model, 'act'):
# Use the CNN's act method
action_result = model.model.act(enhanced_features, explore=False)
@ -1138,6 +1171,17 @@ class TradingOrchestrator:
)
if feature_matrix is not None:
# Ensure feature_matrix is properly shaped and limited
if isinstance(feature_matrix, np.ndarray):
# Flatten and limit features to prevent shape mismatches
feature_matrix = feature_matrix.flatten()
if len(feature_matrix) > 2000: # Limit to 2000 features for generic models
feature_matrix = feature_matrix[:2000]
elif len(feature_matrix) < 2000: # Pad with zeros
padded = np.zeros(2000)
padded[:len(feature_matrix)] = feature_matrix
feature_matrix = padded
prediction_result = model.predict(feature_matrix)
# Handle different return formats from model.predict()
@ -1194,9 +1238,35 @@ class TradingOrchestrator:
# Shape: (n_timeframes, window_size, n_features) -> (n_timeframes * window_size * n_features,)
state = feature_matrix.flatten()
-# Add additional state information (position, balance, etc.)
-# This would come from a portfolio manager in a real implementation
-additional_state = np.array([0.0, 1.0, 0.0]) # [position, balance, unrealized_pnl]
+# Add extrema features if available
+if self.extrema_trainer:
+try:
extrema_features = self.extrema_trainer.get_context_features_for_model(symbol)
if extrema_features is not None:
state = np.concatenate([state, extrema_features.flatten()])
logger.debug(f"Enhanced RL state with Extrema data for {symbol}")
except Exception as extrema_error:
logger.debug(f"Could not enhance RL state with Extrema data: {extrema_error}")
# Get real-time portfolio information from the trading executor
position_size = 0.0
balance = 1.0 # Default to a normalized value if not available
unrealized_pnl = 0.0
if self.trading_executor:
position = self.trading_executor.get_current_position(symbol)
if position:
position_size = position.get('quantity', 0.0)
# Normalize balance or use a realistic value
current_balance = self.trading_executor.get_balance()
if current_balance and current_balance.get('total', 0) > 0:
# Simple normalization - can be improved
balance = min(1.0, current_balance.get('free', 0) / current_balance.get('total', 1))
unrealized_pnl = self._get_current_position_pnl(symbol, self.data_provider.get_current_price(symbol))
additional_state = np.array([position_size, balance, unrealized_pnl])
return np.concatenate([state, additional_state])
@ -1833,4 +1903,132 @@ class TradingOrchestrator:
def set_trading_executor(self, trading_executor):
"""Set the trading executor for position tracking"""
self.trading_executor = trading_executor
logger.info("Trading executor set for position tracking and P&L feedback")
def _get_current_price(self, symbol: str) -> float:
"""Get current price for symbol"""
try:
# Try to get from data provider
if self.data_provider:
try:
# Try different methods to get current price
if hasattr(self.data_provider, 'get_latest_data'):
latest_data = self.data_provider.get_latest_data(symbol)
if latest_data and 'price' in latest_data:
return float(latest_data['price'])
elif latest_data and 'close' in latest_data:
return float(latest_data['close'])
elif hasattr(self.data_provider, 'get_current_price'):
return float(self.data_provider.get_current_price(symbol))
elif hasattr(self.data_provider, 'get_latest_candle'):
latest_candle = self.data_provider.get_latest_candle(symbol, '1m')
if latest_candle and 'close' in latest_candle:
return float(latest_candle['close'])
except Exception as e:
logger.debug(f"Could not get price from data provider: {e}")
# Try to get from universal adapter
if self.universal_adapter:
try:
data_stream = self.universal_adapter.get_latest_data(symbol)
if data_stream and hasattr(data_stream, 'current_price'):
return float(data_stream.current_price)
except Exception as e:
logger.debug(f"Could not get price from universal adapter: {e}")
# Fallback to default prices
default_prices = {
'ETH/USDT': 2500.0,
'BTC/USDT': 108000.0
}
return default_prices.get(symbol, 1000.0)
except Exception as e:
logger.error(f"Error getting current price for {symbol}: {e}")
# Return default price based on symbol
if 'ETH' in symbol:
return 2500.0
elif 'BTC' in symbol:
return 108000.0
else:
return 1000.0
def _generate_fallback_prediction(self, symbol: str) -> Dict[str, Any]:
"""Generate fallback prediction when models fail"""
try:
return {
'action': 'HOLD',
'confidence': 0.5,
'price': self._get_current_price(symbol) or 2500.0,
'timestamp': datetime.now(),
'model': 'fallback'
}
except Exception as e:
logger.debug(f"Error generating fallback prediction: {e}")
return {
'action': 'HOLD',
'confidence': 0.5,
'price': 2500.0,
'timestamp': datetime.now(),
'model': 'fallback'
}
def capture_dqn_prediction(self, symbol: str, action_idx: int, confidence: float, price: float, q_values: List[float] = None):
"""Capture DQN prediction for dashboard visualization"""
try:
if symbol not in self.recent_dqn_predictions:
self.recent_dqn_predictions[symbol] = deque(maxlen=100)
prediction_data = {
'timestamp': datetime.now(),
'action': ['SELL', 'HOLD', 'BUY'][action_idx],
'confidence': confidence,
'price': price,
'q_values': q_values or [0.33, 0.33, 0.34]
}
self.recent_dqn_predictions[symbol].append(prediction_data)
except Exception as e:
logger.debug(f"Error capturing DQN prediction: {e}")
def capture_cnn_prediction(self, symbol: str, direction: int, confidence: float, current_price: float, predicted_price: float):
"""Capture CNN prediction for dashboard visualization"""
try:
if symbol not in self.recent_cnn_predictions:
self.recent_cnn_predictions[symbol] = deque(maxlen=50)
prediction_data = {
'timestamp': datetime.now(),
'direction': ['DOWN', 'SAME', 'UP'][direction],
'confidence': confidence,
'current_price': current_price,
'predicted_price': predicted_price
}
self.recent_cnn_predictions[symbol].append(prediction_data)
except Exception as e:
logger.debug(f"Error capturing CNN prediction: {e}")
async def _get_cob_rl_prediction(self, model: COBRLModelInterface, symbol: str) -> Optional[Prediction]:
"""Get prediction from COB RL model"""
try:
cob_feature_matrix = self.get_cob_feature_matrix(symbol, sequence_length=1)
if cob_feature_matrix is None:
return None
# The model expects a 1D array of features
cob_features = cob_feature_matrix.flatten()
prediction_result = model.predict(cob_features)
if prediction_result:
direction_map = {0: 'SELL', 1: 'HOLD', 2: 'BUY'}
action = direction_map.get(prediction_result['predicted_direction'], 'HOLD')
prediction = Prediction(
action=action,
confidence=float(prediction_result['confidence']),
probabilities={direction_map.get(i, 'HOLD'): float(prob) for i, prob in enumerate(prediction_result['probabilities'])},
timeframe='cob',
timestamp=datetime.now(),
model_name=model.name,
metadata={'value': prediction_result['value']}
)
return prediction
return None
except Exception as e:
logger.error(f"Error getting COB RL prediction: {e}")
return None
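
The flatten/truncate/pad pattern above appears twice (100 features for the CNN path, 2000 for generic models); a small helper could fold both into one place. A sketch, with the function name being an assumption:

import numpy as np

def fit_feature_vector(features: np.ndarray, size: int) -> np.ndarray:
    """Flatten, then truncate or zero-pad to exactly `size` elements."""
    flat = np.asarray(features, dtype=np.float32).flatten()
    if len(flat) >= size:
        return flat[:size]
    padded = np.zeros(size, dtype=np.float32)
    padded[:len(flat)] = flat
    return padded

# enhanced_features = fit_feature_vector(enhanced_features, 100)   # CNN path
# feature_matrix = fit_feature_vector(feature_matrix, 2000)        # generic models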

View File

@ -59,6 +59,7 @@ class TradeRecord:
fees: float
confidence: float
hold_time_seconds: float = 0.0 # Hold time in seconds
leverage: float = 1.0 # Leverage applied to this trade
class TradingExecutor:
"""Handles trade execution through MEXC API with risk management"""
@ -114,12 +115,17 @@ class TradingExecutor:
# Thread safety
self.lock = Lock()
-# Connect to exchange
+# Connect to exchange - skip connection check in simulation mode
if self.trading_enabled:
-logger.info("TRADING EXECUTOR: Attempting to connect to exchange...")
-if not self._connect_exchange():
-logger.error("TRADING EXECUTOR: Failed initial exchange connection. Trading will be disabled.")
-self.trading_enabled = False
+if self.simulation_mode:
+logger.info("TRADING EXECUTOR: Simulation mode - skipping exchange connection check")
+# In simulation mode, we don't need a real exchange connection
+# Trading should remain enabled for simulation trades
+else:
+logger.info("TRADING EXECUTOR: Attempting to connect to exchange...")
+if not self._connect_exchange():
+logger.error("TRADING EXECUTOR: Failed initial exchange connection. Trading will be disabled.")
+self.trading_enabled = False
else:
logger.info("TRADING EXECUTOR: Trading is explicitly disabled in config.")
@ -230,15 +236,25 @@ class TradingExecutor:
required_capital = self._calculate_position_size(confidence, current_price)
-# Get available balance for the quote asset
-available_balance = self.exchange.get_balance(quote_asset)
-# If USDC balance is insufficient, check USDT as fallback (for MEXC compatibility)
-if available_balance < required_capital and quote_asset == 'USDC':
-usdt_balance = self.exchange.get_balance('USDT')
-if usdt_balance >= required_capital:
-available_balance = usdt_balance
-quote_asset = 'USDT' # Use USDT instead
-logger.info(f"BALANCE CHECK: Using USDT fallback balance for {symbol}")
+# For MEXC, prioritize USDT over USDC since most accounts have USDT
+if quote_asset == 'USDC':
+# Check USDT first (most common balance)
+usdt_balance = self.exchange.get_balance('USDT')
+usdc_balance = self.exchange.get_balance('USDC')
+if usdt_balance >= required_capital:
+available_balance = usdt_balance
+quote_asset = 'USDT' # Use USDT for trading
+logger.info(f"BALANCE CHECK: Using USDT balance for {symbol} (preferred)")
+elif usdc_balance >= required_capital:
+available_balance = usdc_balance
+logger.info(f"BALANCE CHECK: Using USDC balance for {symbol}")
+else:
+# Use the larger balance for reporting
+available_balance = max(usdt_balance, usdc_balance)
+quote_asset = 'USDT' if usdt_balance > usdc_balance else 'USDC'
+else:
+available_balance = self.exchange.get_balance(quote_asset)
logger.info(f"BALANCE CHECK: Symbol: {symbol}, Action: {action}, Required: ${required_capital:.2f} {quote_asset}, Available: ${available_balance:.2f} {quote_asset}")
@ -329,7 +345,8 @@ class TradingExecutor:
logger.info(f"SIMULATION MODE ({self.trading_mode.upper()}) - Trade logged but not executed") logger.info(f"SIMULATION MODE ({self.trading_mode.upper()}) - Trade logged but not executed")
# Calculate simulated fees in simulation mode # Calculate simulated fees in simulation mode
taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006) taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006)
simulated_fees = quantity * current_price * taker_fee_rate current_leverage = self.get_leverage()
simulated_fees = quantity * current_price * taker_fee_rate * current_leverage
# Create mock position for tracking # Create mock position for tracking
self.positions[symbol] = Position( self.positions[symbol] = Position(
@ -376,7 +393,8 @@ class TradingExecutor:
if order:
# Calculate simulated fees in simulation mode
taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006)
-simulated_fees = quantity * current_price * taker_fee_rate
+current_leverage = self.get_leverage()
+simulated_fees = quantity * current_price * taker_fee_rate * current_leverage
# Create position record
self.positions[symbol] = Position(
@ -409,6 +427,7 @@ class TradingExecutor:
return self._execute_short(symbol, confidence, current_price)
position = self.positions[symbol]
current_leverage = self.get_leverage()
logger.info(f"Executing SELL: {position.quantity:.6f} {symbol} at ${current_price:.2f} "
f"(confidence: {confidence:.2f}) [{'SIMULATION' if self.simulation_mode else 'LIVE'}]")
@ -416,13 +435,13 @@ class TradingExecutor:
if self.simulation_mode:
logger.info(f"SIMULATION MODE ({self.trading_mode.upper()}) - Trade logged but not executed")
# Calculate P&L and hold time
-pnl = position.calculate_pnl(current_price)
+pnl = position.calculate_pnl(current_price) * current_leverage # Apply leverage to PnL
exit_time = datetime.now()
hold_time_seconds = (exit_time - position.entry_time).total_seconds()
# Calculate simulated fees in simulation mode
taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006)
-simulated_fees = position.quantity * current_price * taker_fee_rate
+simulated_fees = position.quantity * current_price * taker_fee_rate * current_leverage # Apply leverage to fees
# Create trade record
trade_record = TradeRecord(
@ -433,14 +452,15 @@ class TradingExecutor:
exit_price=current_price,
entry_time=position.entry_time,
exit_time=exit_time,
-pnl=pnl,
+pnl=pnl - simulated_fees,
fees=simulated_fees,
confidence=confidence,
-hold_time_seconds=hold_time_seconds
+hold_time_seconds=hold_time_seconds,
+leverage=current_leverage # Store leverage
)
self.trade_history.append(trade_record)
-self.daily_loss += max(0, -pnl) # Add to daily loss if negative
+self.daily_loss += max(0, -(pnl - simulated_fees)) # Add to daily loss if negative
# Update consecutive losses
if pnl < -0.001: # A losing trade
@ -455,7 +475,7 @@ class TradingExecutor:
self.last_trade_time[symbol] = datetime.now()
self.daily_trades += 1
-logger.info(f"Position closed - P&L: ${pnl:.2f}")
+logger.info(f"Position closed - P&L: ${pnl - simulated_fees:.2f}")
return True
try:
@ -490,10 +510,10 @@ class TradingExecutor:
if order:
# Calculate simulated fees in simulation mode
taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006)
-simulated_fees = position.quantity * current_price * taker_fee_rate
+simulated_fees = position.quantity * current_price * taker_fee_rate * current_leverage # Apply leverage
# Calculate P&L, fees, and hold time
-pnl = position.calculate_pnl(current_price)
+pnl = position.calculate_pnl(current_price) * current_leverage # Apply leverage to PnL
fees = simulated_fees
exit_time = datetime.now()
hold_time_seconds = (exit_time - position.entry_time).total_seconds()
@ -510,7 +530,8 @@ class TradingExecutor:
pnl=pnl - fees,
fees=fees,
confidence=confidence,
-hold_time_seconds=hold_time_seconds
+hold_time_seconds=hold_time_seconds,
+leverage=current_leverage # Store leverage
)
self.trade_history.append(trade_record)
@ -559,7 +580,8 @@ class TradingExecutor:
logger.info(f"SIMULATION MODE ({self.trading_mode.upper()}) - Short position logged but not executed") logger.info(f"SIMULATION MODE ({self.trading_mode.upper()}) - Short position logged but not executed")
# Calculate simulated fees in simulation mode # Calculate simulated fees in simulation mode
taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006) taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006)
simulated_fees = quantity * current_price * taker_fee_rate current_leverage = self.get_leverage()
simulated_fees = quantity * current_price * taker_fee_rate * current_leverage
# Create mock short position for tracking # Create mock short position for tracking
self.positions[symbol] = Position( self.positions[symbol] = Position(
@ -606,7 +628,8 @@ class TradingExecutor:
if order:
# Calculate simulated fees in simulation mode
taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006)
-simulated_fees = quantity * current_price * taker_fee_rate
+current_leverage = self.get_leverage()
+simulated_fees = quantity * current_price * taker_fee_rate * current_leverage
# Create short position record
self.positions[symbol] = Position(
@ -638,6 +661,8 @@ class TradingExecutor:
return False
position = self.positions[symbol]
current_leverage = self.get_leverage() # Get current leverage
if position.side != 'SHORT':
logger.warning(f"Position in {symbol} is not SHORT, cannot close with BUY")
return False
@ -649,10 +674,10 @@ class TradingExecutor:
logger.info(f"SIMULATION MODE ({self.trading_mode.upper()}) - Short close logged but not executed") logger.info(f"SIMULATION MODE ({self.trading_mode.upper()}) - Short close logged but not executed")
# Calculate simulated fees in simulation mode # Calculate simulated fees in simulation mode
taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006) taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006)
simulated_fees = position.quantity * current_price * taker_fee_rate simulated_fees = position.quantity * current_price * taker_fee_rate * current_leverage
# Calculate P&L for short position and hold time # Calculate P&L for short position and hold time
pnl = position.calculate_pnl(current_price) pnl = position.calculate_pnl(current_price) * current_leverage # Apply leverage to PnL
exit_time = datetime.now() exit_time = datetime.now()
hold_time_seconds = (exit_time - position.entry_time).total_seconds() hold_time_seconds = (exit_time - position.entry_time).total_seconds()
@ -665,21 +690,22 @@ class TradingExecutor:
exit_price=current_price, exit_price=current_price,
entry_time=position.entry_time, entry_time=position.entry_time,
exit_time=exit_time, exit_time=exit_time,
pnl=pnl, pnl=pnl - simulated_fees,
fees=simulated_fees, fees=simulated_fees,
confidence=confidence, confidence=confidence,
hold_time_seconds=hold_time_seconds hold_time_seconds=hold_time_seconds,
leverage=current_leverage # Store leverage
) )
self.trade_history.append(trade_record) self.trade_history.append(trade_record)
self.daily_loss += max(0, -pnl) # Add to daily loss if negative self.daily_loss += max(0, -(pnl - simulated_fees)) # Add to daily loss if negative
# Remove position # Remove position
del self.positions[symbol] del self.positions[symbol]
self.last_trade_time[symbol] = datetime.now() self.last_trade_time[symbol] = datetime.now()
self.daily_trades += 1 self.daily_trades += 1
logger.info(f"SHORT position closed - P&L: ${pnl:.2f}") logger.info(f"SHORT position closed - P&L: ${pnl - simulated_fees:.2f}")
return True return True
try: try:
@@ -714,10 +740,10 @@ class TradingExecutor:
 if order:
 # Calculate simulated fees in simulation mode
 taker_fee_rate = self.mexc_config.get('trading_fees', {}).get('taker_fee', 0.0006)
-simulated_fees = position.quantity * current_price * taker_fee_rate
+simulated_fees = position.quantity * current_price * taker_fee_rate * current_leverage
 # Calculate P&L, fees, and hold time
-pnl = position.calculate_pnl(current_price)
+pnl = position.calculate_pnl(current_price) * current_leverage  # Apply leverage to PnL
 fees = simulated_fees
 exit_time = datetime.now()
 hold_time_seconds = (exit_time - position.entry_time).total_seconds()
@@ -734,7 +760,8 @@ class TradingExecutor:
 pnl=pnl - fees,
 fees=fees,
 confidence=confidence,
-hold_time_seconds=hold_time_seconds
+hold_time_seconds=hold_time_seconds,
+leverage=current_leverage  # Store leverage
 )
 self.trade_history.append(trade_record)
@@ -860,7 +887,7 @@ class TradingExecutor:
 'losing_trades': losing_trades,
 'breakeven_trades': breakeven_trades,
 'total_trades': total_trades,
-'win_rate': winning_trades / max(1, total_trades),
+'win_rate': winning_trades / max(1, winning_trades + losing_trades) if (winning_trades + losing_trades) > 0 else 0.0,
 'avg_trade_pnl': avg_trade_pnl,
 'avg_trade_fee': avg_trade_fee,
 'avg_winning_trade': avg_winning_trade,
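The corrected formula counts only decided trades, so breakeven trades no longer drag the win rate down. The same calculation as a standalone function (the example counts are hypothetical; note the diff's `max(1, ...)` guard is redundant once the conditional is present, but harmless):

def win_rate(winning_trades: int, losing_trades: int) -> float:
    """Win rate over decided trades; breakeven trades are excluded."""
    decided = winning_trades + losing_trades
    return winning_trades / decided if decided > 0 else 0.0

# 6 wins, 3 losses, 1 breakeven out of 10 trades:
# old formula: 6 / 10 = 0.60; new formula: 6 / 9 ~= 0.67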

View File

@@ -229,9 +229,12 @@ class TrainingIntegration:
 # Truncate
 features = features[:50]
+# Get the model's device to ensure tensors are on the same device
+model_device = next(cnn_model.parameters()).device
 # Create tensors
-features_tensor = torch.FloatTensor(features).unsqueeze(0).to(device)
-target_tensor = torch.LongTensor([target]).to(device)
+features_tensor = torch.FloatTensor(features).unsqueeze(0).to(model_device)
+target_tensor = torch.LongTensor([target]).to(model_device)
 # Training step
 cnn_model.train()
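Reading the device off the model's own parameters avoids CPU/GPU mismatch errors when the model was moved after a global `device` variable was chosen. A minimal self-contained illustration of the same pattern (works for any `nn.Module` with at least one parameter):

import torch
import torch.nn as nn

model = nn.Linear(50, 3).to('cuda' if torch.cuda.is_available() else 'cpu')

# Derive the device from the model rather than from a global:
model_device = next(model.parameters()).device
features_tensor = torch.randn(1, 50).to(model_device)  # always matches the model
logits = model(features_tensor)  # no "expected device cuda:0 but got cpu" error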

View File

@ -1 +0,0 @@

View File

@ -1,148 +0,0 @@
#!/usr/bin/env python3
"""
Example: Using the Checkpoint Management System
"""
import logging
import torch
import torch.nn as nn
import numpy as np
from datetime import datetime
from utils.checkpoint_manager import save_checkpoint, load_best_checkpoint, get_checkpoint_manager
from utils.training_integration import get_training_integration
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
class ExampleCNN(nn.Module):
def __init__(self, input_channels=5, num_classes=3):
super().__init__()
self.conv1 = nn.Conv2d(input_channels, 32, 3, padding=1)
self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
self.pool = nn.AdaptiveAvgPool2d((1, 1))
self.fc = nn.Linear(64, num_classes)
def forward(self, x):
x = torch.relu(self.conv1(x))
x = torch.relu(self.conv2(x))
x = self.pool(x)
x = x.view(x.size(0), -1)
return self.fc(x)
def example_cnn_training():
logger.info("=== CNN Training Example ===")
model = ExampleCNN()
training_integration = get_training_integration()
for epoch in range(5): # Simulate 5 epochs
# Simulate training metrics
train_loss = 2.0 - (epoch * 0.15) + np.random.normal(0, 0.1)
train_acc = 0.3 + (epoch * 0.06) + np.random.normal(0, 0.02)
val_loss = train_loss + np.random.normal(0, 0.05)
val_acc = train_acc - 0.05 + np.random.normal(0, 0.02)
# Clamp values to realistic ranges
train_acc = max(0.0, min(1.0, train_acc))
val_acc = max(0.0, min(1.0, val_acc))
train_loss = max(0.1, train_loss)
val_loss = max(0.1, val_loss)
logger.info(f"Epoch {epoch+1}: train_acc={train_acc:.3f}, val_acc={val_acc:.3f}")
# Save checkpoint
saved = training_integration.save_cnn_checkpoint(
cnn_model=model,
model_name="example_cnn",
epoch=epoch + 1,
train_accuracy=train_acc,
val_accuracy=val_acc,
train_loss=train_loss,
val_loss=val_loss,
training_time_hours=0.1 * (epoch + 1)
)
if saved:
logger.info(f" Checkpoint saved for epoch {epoch+1}")
else:
logger.info(f" Checkpoint not saved (performance not improved)")
# Load the best checkpoint
logger.info("\\nLoading best checkpoint...")
best_result = load_best_checkpoint("example_cnn")
if best_result:
file_path, metadata = best_result
logger.info(f"Best checkpoint: {metadata.checkpoint_id}")
logger.info(f"Performance score: {metadata.performance_score:.4f}")
def example_manual_checkpoint():
logger.info("\\n=== Manual Checkpoint Example ===")
model = nn.Linear(10, 3)
performance_metrics = {
'accuracy': 0.85,
'val_accuracy': 0.82,
'loss': 0.45,
'val_loss': 0.48
}
training_metadata = {
'epoch': 25,
'training_time_hours': 2.5,
'total_parameters': sum(p.numel() for p in model.parameters())
}
logger.info("Saving checkpoint manually...")
metadata = save_checkpoint(
model=model,
model_name="example_manual",
model_type="cnn",
performance_metrics=performance_metrics,
training_metadata=training_metadata,
force_save=True
)
if metadata:
logger.info(f" Manual checkpoint saved: {metadata.checkpoint_id}")
logger.info(f" Performance score: {metadata.performance_score:.4f}")
def show_checkpoint_stats():
logger.info("\\n=== Checkpoint Statistics ===")
checkpoint_manager = get_checkpoint_manager()
stats = checkpoint_manager.get_checkpoint_stats()
logger.info(f"Total models: {stats['total_models']}")
logger.info(f"Total checkpoints: {stats['total_checkpoints']}")
logger.info(f"Total size: {stats['total_size_mb']:.2f} MB")
for model_name, model_stats in stats['models'].items():
logger.info(f"\\n{model_name}:")
logger.info(f" Checkpoints: {model_stats['checkpoint_count']}")
logger.info(f" Size: {model_stats['total_size_mb']:.2f} MB")
logger.info(f" Best performance: {model_stats['best_performance']:.4f}")
def main():
logger.info(" Checkpoint Management System Examples")
logger.info("=" * 50)
try:
example_cnn_training()
example_manual_checkpoint()
show_checkpoint_stats()
logger.info("\\n All examples completed successfully!")
logger.info("\\nTo use in your training:")
logger.info("1. Import: from utils.checkpoint_manager import save_checkpoint, load_best_checkpoint")
logger.info("2. Or use: from utils.training_integration import get_training_integration")
logger.info("3. Save checkpoints during training with performance metrics")
logger.info("4. Load best checkpoints for inference or continued training")
except Exception as e:
logger.error(f"Error in examples: {e}")
raise
if __name__ == "__main__":
main()

View File

@ -1,283 +0,0 @@
#!/usr/bin/env python3
"""
Fix RL Training Issues - Comprehensive Solution
This script addresses the critical RL training audit issues:
1. MASSIVE INPUT DATA GAP (99.25% Missing) - Implements full 13,400 feature state
2. Disconnected Training Pipeline - Fixes data flow between components
3. Missing Enhanced State Builder - Connects orchestrator to dashboard
4. Reward Calculation Issues - Ensures enhanced pivot-based rewards
5. Williams Market Structure Integration - Proper feature extraction
6. Real-time Data Integration - Live market data to RL
Usage:
python fix_rl_training_issues.py
"""
import os
import sys
import logging
from pathlib import Path
# Add project root to path
project_root = Path(__file__).parent
sys.path.insert(0, str(project_root))
logger = logging.getLogger(__name__)
def fix_orchestrator_missing_methods():
"""Fix missing methods in enhanced orchestrator"""
try:
logger.info("Checking enhanced orchestrator...")
from core.enhanced_orchestrator import EnhancedTradingOrchestrator
# Test if methods exist
test_orchestrator = EnhancedTradingOrchestrator()
methods_to_check = [
'_get_symbol_correlation',
'build_comprehensive_rl_state',
'calculate_enhanced_pivot_reward'
]
missing_methods = []
for method in methods_to_check:
if not hasattr(test_orchestrator, method):
missing_methods.append(method)
if missing_methods:
logger.error(f"Missing methods in enhanced orchestrator: {missing_methods}")
return False
else:
logger.info("✅ All required methods present in enhanced orchestrator")
return True
except Exception as e:
logger.error(f"Error checking orchestrator: {e}")
return False
def test_comprehensive_state_building():
"""Test comprehensive RL state building"""
try:
logger.info("Testing comprehensive state building...")
from core.enhanced_orchestrator import EnhancedTradingOrchestrator
from core.data_provider import DataProvider
# Create test instances
data_provider = DataProvider()
orchestrator = EnhancedTradingOrchestrator(data_provider=data_provider)
# Test comprehensive state building
state = orchestrator.build_comprehensive_rl_state('ETH/USDT')
if state is not None:
logger.info(f"✅ Comprehensive state built: {len(state)} features")
if len(state) == 13400:
logger.info("✅ PERFECT: Exactly 13,400 features as required!")
else:
logger.warning(f"⚠️ Expected 13,400 features, got {len(state)}")
# Check feature distribution
import numpy as np
non_zero = np.count_nonzero(state)
logger.info(f"Non-zero features: {non_zero} ({non_zero/len(state)*100:.1f}%)")
return True
else:
logger.error("❌ Comprehensive state building failed")
return False
except Exception as e:
logger.error(f"Error testing state building: {e}")
return False
def test_enhanced_reward_calculation():
"""Test enhanced reward calculation"""
try:
logger.info("Testing enhanced reward calculation...")
from core.enhanced_orchestrator import EnhancedTradingOrchestrator
from datetime import datetime, timedelta
orchestrator = EnhancedTradingOrchestrator()
# Test data
trade_decision = {
'action': 'BUY',
'confidence': 0.75,
'price': 2500.0,
'timestamp': datetime.now()
}
trade_outcome = {
'net_pnl': 50.0,
'exit_price': 2550.0,
'duration': timedelta(minutes=15)
}
market_data = {
'volatility': 0.03,
'order_flow_direction': 'bullish',
'order_flow_strength': 0.8
}
# Test enhanced reward
enhanced_reward = orchestrator.calculate_enhanced_pivot_reward(
trade_decision, market_data, trade_outcome
)
logger.info(f"✅ Enhanced reward calculated: {enhanced_reward:.3f}")
return True
except Exception as e:
logger.error(f"Error testing reward calculation: {e}")
return False
def test_williams_integration():
"""Test Williams market structure integration"""
try:
logger.info("Testing Williams market structure integration...")
from training.williams_market_structure import extract_pivot_features, analyze_pivot_context
from core.data_provider import DataProvider
import pandas as pd
import numpy as np
# Create test data
test_data = {
'open': np.random.uniform(2400, 2600, 100),
'high': np.random.uniform(2500, 2700, 100),
'low': np.random.uniform(2300, 2500, 100),
'close': np.random.uniform(2400, 2600, 100),
'volume': np.random.uniform(1000, 5000, 100)
}
df = pd.DataFrame(test_data)
# Test pivot features
pivot_features = extract_pivot_features(df)
if pivot_features is not None:
logger.info(f"✅ Williams pivot features extracted: {len(pivot_features)} features")
# Test pivot context analysis
market_data = {'ohlcv_data': df}
context = analyze_pivot_context(market_data, datetime.now(), 'BUY')
if context is not None:
logger.info("✅ Williams pivot context analysis working")
return True
else:
logger.warning("⚠️ Pivot context analysis returned None")
return False
else:
logger.error("❌ Williams pivot feature extraction failed")
return False
except Exception as e:
logger.error(f"Error testing Williams integration: {e}")
return False
def test_dashboard_integration():
"""Test dashboard integration with enhanced features"""
try:
logger.info("Testing dashboard integration...")
from web.clean_dashboard import CleanTradingDashboard as TradingDashboard
from core.enhanced_orchestrator import EnhancedTradingOrchestrator
from core.data_provider import DataProvider
from core.trading_executor import TradingExecutor
# Create components
data_provider = DataProvider()
orchestrator = EnhancedTradingOrchestrator(data_provider=data_provider)
executor = TradingExecutor()
# Create dashboard
dashboard = TradingDashboard(
data_provider=data_provider,
orchestrator=orchestrator,
trading_executor=executor
)
# Check if dashboard has access to enhanced features
has_comprehensive_builder = hasattr(dashboard, '_build_comprehensive_rl_state')
has_enhanced_orchestrator = hasattr(dashboard.orchestrator, 'build_comprehensive_rl_state')
if has_comprehensive_builder and has_enhanced_orchestrator:
logger.info("✅ Dashboard properly integrated with enhanced features")
return True
else:
logger.warning("⚠️ Dashboard missing some enhanced features")
logger.info(f"Comprehensive builder: {has_comprehensive_builder}")
logger.info(f"Enhanced orchestrator: {has_enhanced_orchestrator}")
return False
except Exception as e:
logger.error(f"Error testing dashboard integration: {e}")
return False
def main():
"""Main function to run all fixes and tests"""
# Setup logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s'
)
logger.info("=" * 70)
logger.info("COMPREHENSIVE RL TRAINING FIX - AUDIT ISSUE RESOLUTION")
logger.info("=" * 70)
# Track results
test_results = {}
# Run all tests
tests = [
("Enhanced Orchestrator Methods", fix_orchestrator_missing_methods),
("Comprehensive State Building", test_comprehensive_state_building),
("Enhanced Reward Calculation", test_enhanced_reward_calculation),
("Williams Market Structure", test_williams_integration),
("Dashboard Integration", test_dashboard_integration)
]
for test_name, test_func in tests:
logger.info(f"\n🔧 {test_name}...")
try:
result = test_func()
test_results[test_name] = result
except Exception as e:
logger.error(f"{test_name} failed: {e}")
test_results[test_name] = False
# Summary
logger.info("\n" + "=" * 70)
logger.info("COMPREHENSIVE RL TRAINING FIX RESULTS")
logger.info("=" * 70)
passed = sum(test_results.values())
total = len(test_results)
for test_name, result in test_results.items():
status = "✅ PASS" if result else "❌ FAIL"
logger.info(f"{test_name}: {status}")
logger.info(f"\nOverall: {passed}/{total} tests passed")
if passed == total:
logger.info("🎉 ALL RL TRAINING ISSUES FIXED!")
logger.info("The system now supports:")
logger.info(" - 13,400 comprehensive RL features")
logger.info(" - Enhanced pivot-based rewards")
logger.info(" - Williams market structure integration")
logger.info(" - Proper data flow between components")
logger.info(" - Real-time data integration")
else:
logger.warning("⚠️ Some issues remain - check logs above")
return 0 if passed == total else 1
if __name__ == "__main__":
sys.exit(main())

View File

@ -1,12 +0,0 @@
[
{
"token": "geetest eyJsb3ROdW1iZXIiOiI4NWFhM2Q3YjJkYmE0Mjk3YTQwODY0YmFhODZiMzA5NyIsImNhcHRjaGFPdXRwdXQiOiJaVkwzS3FWaWxnbEZjQWdXOENIQVgxMUVBLVVPUnE1aURQSldzcmlubDFqelBhRTNiUGlEc0VrVTJUR0xuUzRHV2k0N2JDa1hyREMwSktPWmwxX1dERkQwNWdSN1NkbFJ1Z2NDY0JmTGdLVlNBTEI0OUNrR200enZZcnZ3MUlkdnQ5RThRZURYQ2E0empLczdZMHByS3JEWV9SQW93S0d4OXltS0MxMlY0SHRzNFNYMUV1YnI1ZV9yUXZCcTZJZTZsNFVJMS1DTnc5RUhBaXRXOGU2TVZ6OFFqaGlUMndRM1F3eGxEWkpmZnF6M3VucUl5RTZXUnFSUEx1T0RQQUZkVlB3S3AzcWJTQ3JXcG5CTUFKOXFuXzV2UDlXNm1pR3FaRHZvSTY2cWRzcHlDWUMyWTV1RzJ0ZjZfRHRJaXhTTnhLWUU3cTlfcU1WR2ZJUzlHUXh6ZWg2Mkp2eG02SHZLdjFmXzJMa3FlcVkwRk94S2RxaVpyN2NkNjAxMHE5UlFJVDZLdmNZdU1Hcm04M2d4SnY1bXp4VkZCZWZFWXZfRjZGWFpnWXRMMmhWSDlQME42bHFXQkpCTUVicE1nRm0zbm1iZVBkaDYxeW12T0FUb2wyNlQ0Z2ZET2dFTVFhZTkxQlFNR2FVSFRSa2c3RGJIX2xMYXlBTHQ0TTdyYnpHSCIsInBhc3NUb2tlbiI6IjA0NmFkMGQ5ZjNiZGFmYzJhNDgwYzFiMjcyMmIzZDUzOTk5NTRmYWVlNTM1MTI1ZTQ1MjkzNzJjYWZjOGI5N2EiLCJnZW5UaW1lIjoiMTc1MTQ5ODY4NCJ9",
"url": "https://www.mexc.com/ucgateway/captcha_api/captcha/robot/robot.future.openlong.ETH_USDT.300X",
"timestamp": "2025-07-03T02:24:51.150716"
},
{
"token": "geetest eyJsb3ROdW1iZXIiOiI5ZWVlMDQ2YTg1MmQ0MTU3YTNiYjdhM2M5MzJiNzJiYSIsImNhcHRjaGFPdXRwdXQiOiJaVkwzS3FWaWxnbEZjQWdXOENIQVgxMUVBLVVPUnE1aURQSldzcmlubDFqelBhRTNiUGlEc0VrVTJUR0xuUzRHZk9hVUhKRW1ZOS1FN0h3Q3NNV3hvbVZsNnIwZXRYZzIyWHBGdUVUdDdNS19Ud1J6NnotX2pCXzRkVDJqTnJRN0J3cExjQ25DNGZQUXQ5V040TWxrZ0NMU3p6MERNd09SeHJCZVRkVE5pSU5BdmdFRDZOMkU4a19XRmJ6SFZsYUtieElnM3dLSGVTMG9URU5DLUNaNElnMDJlS2x3UWFZY3liRnhKU2ZrWG1vekZNMDVJSHVDYUpwT0d2WXhhYS1YTWlDeGE0TnZlcVFqN2JwNk04Q09PSnNxNFlfa0pkX0Ruc2w0UW1memZCUTZseF9tenFCMnFweThxd3hKTFVYX0g3TGUyMXZ2bGtubG1KS0RSUEJtTWpUcGFiZ2F4M3Q1YzJmbHJhRjk2elhHQzVBdVVQY1FrbDIyOW0xSmlnMV83cXNfTjdpZFozd0hRcWZFZGxSYVRKQTR2U18yYnFlcGdLblJ3Y3oxaWtOOW1RaWNOSnpSNFNhdm1Pdi1BSzhwSEF0V2lkVjhrTkVYc3dGbUdSazFKQXBEX1hVUjlEdl9sNWJJNEFnbVJhcVlGdjhfRUNvN1g2cmt2UGZuOElTcCIsInBhc3NUb2tlbiI6IjRmZDFhZmU5NzI3MTk0ZGI3MDNlMDg2NWQ0ZDZjZTIyYzMwMzUyNzQ5NzVjMDIwNDFiNTY3Y2Y3MDdhYjM1OTMiLCJnZW5UaW1lIjoiMTc1MTQ5ODY5MiJ9",
"url": "https://www.mexc.com/ucgateway/captcha_api/captcha/robot/robot.future.closelong.ETH_USDT.300X",
"timestamp": "2025-07-03T02:24:57.885947"
}
]

View File

@ -1,29 +0,0 @@
{
"bm_sv": "D92603BBC020E9C2CD11B2EBC8F22050~YAAQJKVf1NW5K7CXAQAAwtMVzRzHARcY60jrPVzy9G79fN3SY4z988SWHHxQlbPpyZHOj76c20AjCnS0QwveqzB08zcRoauoIe/sP3svlaIso9PIdWay0KIIVUe1XsiTJRfTm/DmS+QdrOuJb09rbfWLcEJF4/0QK7VY0UTzPTI2V3CMtxnmYjd1+tjfYsvt1R6O+Mw9mYjb7SjhRmiP/exY2UgZdLTJiqd+iWkc5Wejy5m6g5duOfRGtiA9mfs=~1",
"bm_sz": "98D80FE4B23FE6352AE5194DA699FDDB~YAAQJKVf1GK4K7CXAQAAeQ0UzRw+aXiY5/Ujp+sZm0a4j+XAJFn6fKT4oph8YqIKF6uHSgXkFY3mBt8WWY98Y2w1QzOEFRkje8HTUYQgJsV59y5DIOTZKC6wutPD/bKdVi9ZKtk4CWbHIIRuCrnU1Nw2jqj5E0hsorhKGh8GeVsAeoao8FWovgdYD6u8Qpbr9aL5YZgVEIqJx6WmWLmcIg+wA8UFj8751Fl0B3/AGxY2pACUPjonPKNuX/UDYA5e98plOYUnYLyQMEGIapSrWKo1VXhKBDPLNedJ/Q2gOCGEGlj/u1Fs407QxxXwCvRSegL91y6modtL5JGoFucV1pYc4pgTwEAEdJfcLCEBaButTbaHI9T3SneqgCoGeatMMaqz0GHbvMD7fBQofARBqzN1L6aGlmmAISMzI3wx/SnsfXBl~3228228~3294529",
"_abck": "0288E759712AF333A6EE15F66BC2A662~-1~YAAQJKVf1GC4K7CXAQAAeQ0UzQ77TfyX5SOWTgdW3DVqNFrTLz2fhLo2OC4I6ZHnW9qB0vwTjFDfOB65BwLSeFZoyVypVCGTtY/uL6f4zX0AxEGAU8tLg/jeO0acO4JpGrjYZSW1F56vEd9JbPU2HQPNERorgCDLQMSubMeLCfpqMp3VCW4w0Ssnk6Y4pBSs4mh0PH95v56XXDvat9k20/JPoK3Ip5kK2oKh5Vpk5rtNTVea66P0NBjVUw/EddRUuDDJpc8T4DtTLDXnD5SNDxEq8WDkrYd5kP4dNe0PtKcSOPYs2QLUbvAzfBuMvnhoSBaCjsqD15EZ3eDAoioli/LzsWSxaxetYfm0pA/s5HBXMdOEDi4V0E9b79N28rXcC8IJEHXtfdZdhJjwh1FW14lqF9iuOwER81wDEnIVtgwTwpd3ffrc35aNjb+kGiQ8W0FArFhUI/ZY2NDvPVngRjNrmRm0CsCm+6mdxxVNsGNMPKYG29mcGDi2P9HGDk45iOm0vzoaYUl1PlOh4VGq/V3QGbPYpkBsBtQUjrf/SQJe5IAbjCICTYlgxTo+/FAEjec+QdUsagTgV8YNycQfTK64A2bs1L1n+RO5tapLThU6NkxnUbqHOm6168RnT8ZRoAUpkJ5m3QpqSsuslnPRUPyxUr73v514jTBIUGsq4pUeRpXXd9FAh8Xkn4VZ9Bh3q4jP7eZ9Sv58mgnEVltNBFkeG3zsuIp5Hu69MSBU+8FD4gVlncbBinrTLNWRB8F00Gyvc03unrAznsTEyLiDq9guQf9tQNcGjxfggfnGq/Z1Gy/A7WMjiYw7pwGRVzAYnRgtcZoww9gQ/FdGkbp2Xl+oVZpaqFsHVvafWyOFr4pqQsmd353ddgKLjsEnpy/jcdUsIR/Ph3pYv++XlypXehXj0/GHL+WsosujJrYk4TuEsPKUcyHNr+r844mYUIhCYsI6XVKrq3fimdfdhmlkW8J1kZSTmFwP8QcwGlTK/mZDTJPyf8K5ugXcqOU8oIQzt5B2zfRwRYKHdhb8IUw=~-1~-1~-1",
"RT": "\"z=1&dm=www.mexc.com&si=f5d53b58-7845-4db4-99f1-444e43d35199&ss=mcmh857q&sl=3&tt=90n&bcn=%2F%2F684dd311.akstat.io%2F&ld=1c9o\"",
"mexc_fingerprint_visitorId": "tv1xchuZQbx9N0aBztUG",
"_ga_L6XJCQTK75": "GS2.1.s1751492192$o1$g1$t1751492248$j4$l0$h0",
"uc_token": "WEB66f893ede865e5d927efdea4a82e655ad5190239c247997d744ef9cd075f6f1e",
"u_id": "WEB66f893ede865e5d927efdea4a82e655ad5190239c247997d744ef9cd075f6f1e",
"_fbp": "fb.1.1751492193579.314807866777158389",
"mxc_exchange_layout": "BA",
"sensorsdata2015jssdkcross": "%7B%22distinct_id%22%3A%2221a8728990b84f4fa3ae64c8004b4aaa%22%2C%22first_id%22%3A%22197cd11dc751be-0dd66c04c69e96-26011f51-3686400-197cd11dc76189d%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_landing_page%22%3A%22https%3A%2F%2Fwww.mexc.com%2Fen-GB%2Flogin%3Fprevious%3D%252Ffutures%252FETH_USDT%253Ftype%253Dlinear_swap%22%7D%2C%22identities%22%3A%22eyIkaWRlbnRpdHlfY29va2llX2lkIjoiMTk3Y2QxMWRjNzUxYmUtMGRkNjZjMDRjNjllOTYtMjYwMTFmNTEtMzY4NjQwMC0xOTdjZDExZGM3NjE4OWQiLCIkaWRlbnRpdHlfbG9naW5faWQiOiIyMWE4NzI4OTkwYjg0ZjRmYTNhZTY0YzgwMDRiNGFhYSJ9%22%2C%22history_login_id%22%3A%7B%22name%22%3A%22%24identity_login_id%22%2C%22value%22%3A%2221a8728990b84f4fa3ae64c8004b4aaa%22%7D%2C%22%24device_id%22%3A%22197cd11dc751be-0dd66c04c69e96-26011f51-3686400-197cd11dc76189d%22%7D",
"mxc_theme_main": "dark",
"mexc_fingerprint_requestId": "1751492199306.WMvKJd",
"_ym_visorc": "b",
"mexc_clearance_modal_show_date": "2025-07-03-undefined",
"ak_bmsc": "35C21AA65F819E0BF9BEBDD10DCF7B70~000000000000000000000000000000~YAAQJKVf1BK2K7CXAQAAPAISzRwQdUOUs1H3HPAdl4COMFQAl+aEPzppLbdgrwA7wXbP/LZpxsYCFflUHDppYKUjzXyTZ9tIojSF3/6CW3OCiPhQo/qhf6XPbC4oQHpCNWaC9GJWEs/CGesQdfeBbhkXdfh+JpgmgCF788+x8IveDE9+9qaL/3QZRy+E7zlKjjvmMxBpahRy+ktY9/KMrCY2etyvtm91KUclr4k8HjkhtNJOlthWgUyiANXJtfbNUMgt+Hqgqa7QzSUfAEpxIXQ1CuROoY9LbU292LRN5TbtBy/uNv6qORT38rKsnpi7TGmyFSB9pj3YsoSzIuAUxYXSh4hXRgAoUQm3Yh5WdLp4ONeyZC1LIb8VCY5xXRy/VbfaHH1w7FodY1HpfHGKSiGHSNwqoiUmMPx13Rgjsgki4mE7bwFmG2H5WAilRIOZA5OkndEqGrOuiNTON7l6+g6mH0MzZ+/+3AjnfF2sXxFuV9itcs9x",
"mxc_theme_upcolor": "upgreen",
"_vid_t": "mQUFl49q1yLZhrL4tvOtFF38e+hGW5QoMS+eXKVD9Q4vQau6icnyipsdyGLW/FBukiO2ItK7EtzPIPMFrE5SbIeLSm1NKc/j+ZmobhX063QAlskf1x1J",
"_ym_isad": "2",
"_ym_d": "1751492196",
"_ym_uid": "1751492196843266888",
"bm_mi": "02862693F007017AEFD6639269A60D08~YAAQJKVf1Am2K7CXAQAAIf4RzRzNGqZ7Q3BC0kAAp/0sCOhHxxvEWTb7mBl8p7LUz0W6RZbw5Etz03Tvqu3H6+sb+yu1o0duU+bDflt7WLVSOfG5cA3im8Jeo6wZhqmxTu6gGXuBgxhrHw/RGCgcknxuZQiRM9cbM6LlZIAYiugFm2xzmO/1QcpjDhs4S8d880rv6TkMedlkYGwdgccAmvbaRVSmX9d5Yukm+hY+5GWuyKMeOjpatAhcgjShjpSDwYSpyQE7vVZLBp7TECIjI9uoWzR8A87YHScKYEuE08tb8YtGdG3O6g70NzasSX0JF3XTCjrVZA==~1",
"_ga": "GA1.1.626437359.1751492192",
"NEXT_LOCALE": "en-GB",
"x-mxc-fingerprint": "tv1xchuZQbx9N0aBztUG",
"CLIENT_LANG": "en-GB",
"sajssdk_2015_cross_new_user": "1"
}

View File

@ -1,28 +0,0 @@
{
"bm_sv": "5C10B638DC36B596422995FAFA8535C5~YAAQJKVf1MfUK7CXAQAA8NktzRwthLouCzg1Sqsm2yBQhAdvw8KbTCYRe0bzUrYEsQEahTebrBcYQoRF3+HyIAggj7MIsbFBANUqLcKJ66lD3QbuA3iU3MhUts/ZhA2dLaSoH5IbgdwiAd98s4bjsb3MSaNwI3nCEzWkLH2CZDyGJK6mhwHlA5VU6OXRLTVz+dfeh2n2fD0SbtcppFL2j9jqopWyKLaxQxYAg+Rs5g3xAo2BTa6/zmQ2YoxZR/w=~1",
"bm_sz": "11FB853E475F9672ADEDFBC783F7487B~YAAQJKVf1G7UK7CXAQAAcY8tzRy3rXBghQVq4e094ZpjhvYRjSatbOxmR/iHhc0aV6NMJkhTwCOnCDsKjeU6sgcdpYgxkpgfhbvTgm5dQ7fEQ5cgmJtfNPmEisDQxZQIOXlI4yhgq7cks4jek9T9pxBx+iLtsZYy5LqIl7mqXc7R7MxMaWvDBfSVU1T0hY9DD0U3P4fxstSIVbGdRzcX2mvGNMcdTj3JMB1y9mXzKB44Prglw0zWa7BZT4imuh5OTQTY4OLNQM7gg5ERUHI7RTcxz+CAltGtBeMHTmWa+Jat/Cw9/DOP7Rud8fESZ7pmhmRE4Fe3Vp2/C+CW3qRnoptViXYOWr/sfKIKSlxIx+QF4Tw58tE5r2XbUVzAF0rQ2mLz9ASi5FnAgJi/DBRULeKhUMVPxsPhMWX5R25J3Gj5QnIED7PjttEt~3294770~3491121",
"_abck": "F5684DE447CDB1B381EABA9AB94E79B7~-1~YAAQJKVf1GzUK7CXAQAAcY8tzQ60GFr2A1gYL72t6F06CTbh+67guEB40t7OXrDJpLYousPo1UKwE9/z804ie8unZxI7iZhwZO/AJfavIw2JHsMnYOhg8S8U/P+hTMOu0KvFYhMfmbSVSHEMInpzJlFPnFHcbYX1GtPn0US/FI8NeDxamlefbV4vHAYxQCWXp1RUVflOukD/ix7BGIvVqNdTQJDMfDY3UmNyu9JC88T8gFDUBxpTJvHNAzafWV7HTpSzLUmYzkFMp0Py39ZVOkVKgEwI9M15xseSNIzVBm6hm6DHwN9Z6ogDuaNsMkY3iJhL9+h75OTq2If9wNMiehwa5XeLHGfSYizXzUFJhuHdcEI1EZAowl2JKq4iGynNIom1/0v3focwlDFi93wxzpCXhCZBKnIRiIYGgS47zjS6kCZpYvuoBRnNvFx7tdJHMMkQQvx6+pk5UzmT4n3jUjS2WUTRoDuwiEvs5NDiO/Z2r4zHlpZnskDdpsDXT2SxvtMo1J451PCPSzt0merJ8vHZD5eLYE0tDBJaLMPzpW9MPHgW/OqrRc5QjcsdhHxNBnMGfhV2U0aHxVsuSuguZRPz7hGDRQJJXepAU8UzDM/d9KSYdMxUvSfcIk+48e3HHyodrKrfXh/0yIaeamsLeYE2na321B0DUoWe28DKbAIY3WdeYfH3WsGJ/LNrM43HeAe8Ng5Bw+5M0rO8m6MqGbaROvdt4JwBheY8g1jMcyXmXJWBAN0in+5F/sXph1sFdPxiiCc2uKQbyuBA34glvFz1JsbPGATEbicRvW0w88JlY3Ki8yNkEYxyFDv3n2C6R3I7Z/ZjdSJLVmS47sWnow1K6YAa31a3A8eVVFItran2v7S2QJBVmS7zb89yVO7oUq16z9a7o+0K5setv8d/jPkPIn9jgWcFOfVh7osl2g0vB/ZTmLoMvES5VxkWZPP3Uo9oIEyIaFzGq7ppYJ24SLj9I6wo9m5Xq9pup33F0Cpn2GyRzoxLpMm7bV/2EJ5eLBjJ3YFQRZxYf2NU1k2CJifFCfSQYOlhu7qCBxNWryWjQQgz9uvGqoKs~-1~-1~-1",
"RT": "\"z=1&dm=www.mexc.com&si=5943fd2a-6403-43d4-87aa-b4ac4403c94f&ss=mcmi7gg2&sl=3&tt=6d5&bcn=%2F%2F02179916.akstat.io%2F&ld=2fhr\"",
"mexc_fingerprint_visitorId": "tv1xchuZQbx9N0aBztUG",
"_ga_L6XJCQTK75": "GS2.1.s1751493837$o1$g1$t1751493945$j59$l0$h0",
"uc_token": "WEB3756d4bd507f4dc9e5c6732b16d40aa668a2e3aea55107801a42f40389c39b9c",
"u_id": "WEB3756d4bd507f4dc9e5c6732b16d40aa668a2e3aea55107801a42f40389c39b9c",
"_fbp": "fb.1.1751493843684.307329583674408195",
"mxc_exchange_layout": "BA",
"sensorsdata2015jssdkcross": "%7B%22distinct_id%22%3A%2221a8728990b84f4fa3ae64c8004b4aaa%22%2C%22first_id%22%3A%22197cd2b02f56f6-08b72b0d8e14ee-26011f51-3686400-197cd2b02f6b59%22%2C%22props%22%3A%7B%22%24latest_traffic_source_type%22%3A%22%E7%9B%B4%E6%8E%A5%E6%B5%81%E9%87%8F%22%2C%22%24latest_search_keyword%22%3A%22%E6%9C%AA%E5%8F%96%E5%88%B0%E5%80%BC_%E7%9B%B4%E6%8E%A5%E6%89%93%E5%BC%80%22%2C%22%24latest_referrer%22%3A%22%22%2C%22%24latest_landing_page%22%3A%22https%3A%2F%2Fwww.mexc.com%2Fen-GB%2Flogin%3Fprevious%3D%252Ffutures%252FETH_USDT%253Ftype%253Dlinear_swap%22%7D%2C%22identities%22%3A%22eyIkaWRlbnRpdHlfY29va2llX2lkIjoiMTk3Y2QyYjAyZjU2ZjYtMDhiNzJiMGQ4ZTE0ZWUtMjYwMTFmNTEtMzY4NjQwMC0xOTdjZDJiMDJmNmI1OSIsIiRpZGVudGl0eV9sb2dpbl9pZCI6IjIxYTg3Mjg5OTBiODRmNGZhM2FlNjRjODAwNGI0YWFhIn0%3D%22%2C%22history_login_id%22%3A%7B%22name%22%3A%22%24identity_login_id%22%2C%22value%22%3A%2221a8728990b84f4fa3ae64c8004b4aaa%22%7D%2C%22%24device_id%22%3A%22197cd2b02f56f6-08b72b0d8e14ee-26011f51-3686400-197cd2b02f6b59%22%7D",
"mxc_theme_main": "dark",
"mexc_fingerprint_requestId": "1751493848491.aXJWxX",
"ak_bmsc": "10B7B90E8C6CA0B2242A59C6BE9D5D09~000000000000000000000000000000~YAAQJKVf1BnQK7CXAQAAJwsrzRyGc8OCIHU9sjkSsoX2E9ZroYaoxZCEToLh8uS5k28z0rzxl4Oi8eXg1oKxdWZslNQCj4/PExgD4O1++Wfi2KNovx4cUehcmbtiR3a28w+gNaiVpWAUPjPnUTaHLAr7cgVU/IOdoOC0cdvxaHThWtwIbVu+YsGazlnHiND1w3u7V0Yc1irC6ZONXqD2rIIZlntEOFiJGPTs8egY3xMLeSpI0tZYp8CASAKzxp/v96ugcPBMehwZ03ue6s6bi8qGYgF1IuOgVTFW9lPVzxCYjvH+ASlmppbLm/vrCUSPjtzJcTz/ySfvtMYaai8cv3CwCf/Ke51plRXJo0wIzGOpBzzJG5/GMA924kx1EQiBTgJptG0i7ZrgrfhqtBjjB2sU0ZBofFqmVu/VXLV6iOCQBHFtpZeI60oFARGoZFP2mYbfxeIKG8ERrQ==",
"mexc_clearance_modal_show_date": "2025-07-03-undefined",
"_ym_isad": "2",
"_vid_t": "hRsGoNygvD+rX1A4eY/XZLO5cGWlpbA3XIXKtYTjDPFdunb5ACYp5eKitX9KQSQj/YXpG2PcnbPZDIpAVQ0AGjaUpR058ahvxYptRHKSGwPghgfLZQ==",
"_ym_visorc": "b",
"_ym_d": "1751493846",
"_ym_uid": "1751493846425437427",
"mxc_theme_upcolor": "upgreen",
"NEXT_LOCALE": "en-GB",
"x-mxc-fingerprint": "tv1xchuZQbx9N0aBztUG",
"CLIENT_LANG": "en-GB",
"_ga": "GA1.1.1034661072.1751493838",
"sajssdk_2015_cross_new_user": "1"
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,124 +0,0 @@
#!/usr/bin/env python
"""
Log Reader Utility
This script provides a convenient way to read and filter log files during
development.
"""
import os
import sys
import time
import argparse
from datetime import datetime
def parse_args():
"""Parse command line arguments"""
parser = argparse.ArgumentParser(description='Read and filter log files')
parser.add_argument('--file', type=str, help='Log file to read (defaults to most recent .log file)')
parser.add_argument('--tail', type=int, default=50, help='Number of lines to show from the end')
parser.add_argument('--follow', '-f', action='store_true', help='Follow the file as it grows')
parser.add_argument('--filter', type=str, help='Only show lines containing this string')
parser.add_argument('--list', action='store_true', help='List all log files sorted by modification time')
return parser.parse_args()
def get_most_recent_log():
"""Find the most recently modified log file"""
log_files = [f for f in os.listdir('.') if f.endswith('.log')]
if not log_files:
print("No log files found in current directory.")
sys.exit(1)
# Sort by modification time (newest first)
log_files.sort(key=lambda x: os.path.getmtime(x), reverse=True)
return log_files[0]
def list_log_files():
"""List all log files sorted by modification time"""
log_files = [f for f in os.listdir('.') if f.endswith('.log')]
if not log_files:
print("No log files found in current directory.")
sys.exit(1)
# Sort by modification time (newest first)
log_files.sort(key=lambda x: os.path.getmtime(x), reverse=True)
print(f"{'LAST MODIFIED':<20} {'SIZE':<10} FILENAME")
print("-" * 60)
for log_file in log_files:
mtime = datetime.fromtimestamp(os.path.getmtime(log_file))
size = os.path.getsize(log_file)
size_str = f"{size / 1024:.1f} KB" if size > 1024 else f"{size} B"
print(f"{mtime.strftime('%Y-%m-%d %H:%M:%S'):<20} {size_str:<10} {log_file}")
def read_log_tail(file_path, num_lines, filter_text=None):
"""Read the last N lines of a file"""
try:
with open(file_path, 'r', encoding='utf-8') as f:
# Read all lines (inefficient but simple)
lines = f.readlines()
# Filter if needed
if filter_text:
lines = [line for line in lines if filter_text in line]
# Get the last N lines
last_lines = lines[-num_lines:] if len(lines) > num_lines else lines
return last_lines
except Exception as e:
print(f"Error reading file: {str(e)}")
sys.exit(1)
def follow_log(file_path, filter_text=None):
"""Follow the log file as it grows (like tail -f)"""
try:
with open(file_path, 'r', encoding='utf-8') as f:
# Go to the end of the file
f.seek(0, 2)
while True:
line = f.readline()
if line:
if not filter_text or filter_text in line:
# Remove newlines at the end to avoid double spacing
print(line.rstrip())
else:
time.sleep(0.1) # Sleep briefly to avoid consuming CPU
except KeyboardInterrupt:
print("\nLog reading stopped.")
except Exception as e:
print(f"Error following file: {str(e)}")
sys.exit(1)
def main():
"""Main function"""
args = parse_args()
# List all log files if requested
if args.list:
list_log_files()
return
# Determine which file to read
file_path = args.file
if not file_path:
file_path = get_most_recent_log()
print(f"Reading most recent log file: {file_path}")
# Follow mode (like tail -f)
if args.follow:
print(f"Following {file_path} (Press Ctrl+C to stop)...")
# First print the tail
for line in read_log_tail(file_path, args.tail, args.filter):
print(line.rstrip())
print("-" * 80)
print("Waiting for new content...")
# Then follow
follow_log(file_path, args.filter)
else:
# Just print the tail
for line in read_log_tail(file_path, args.tail, args.filter):
print(line.rstrip())
if __name__ == "__main__":
main()

View File

@@ -1,201 +1,121 @@
 #!/usr/bin/env python3
 """
-Run Clean Trading Dashboard with Full Training Pipeline
-Integrated system with both training loop and clean web dashboard
+Clean Trading Dashboard Runner with Enhanced Stability and Error Handling
 """
-import os
-# Fix OpenMP library conflicts before importing other modules
-os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'
-os.environ['OMP_NUM_THREADS'] = '4'
-import asyncio
-import logging
 import sys
-import threading
+import logging
+import traceback
+import gc
 import time
+import psutil
+import torch
 from pathlib import Path
-# Add project root to path
-project_root = Path(__file__).parent
-sys.path.insert(0, str(project_root))
-from core.config import get_config, setup_logging
-from core.data_provider import DataProvider
-# Import checkpoint management
-from utils.checkpoint_manager import get_checkpoint_manager
-from utils.training_integration import get_training_integration
 # Setup logging
-setup_logging()
+logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
 logger = logging.getLogger(__name__)
-async def start_training_pipeline(orchestrator, trading_executor):
-"""Start the training pipeline in the background"""
-logger.info("=" * 70)
-logger.info("STARTING TRAINING PIPELINE WITH CLEAN DASHBOARD")
-logger.info("=" * 70)
-# Initialize checkpoint management
-checkpoint_manager = get_checkpoint_manager()
-training_integration = get_training_integration()
-# Training statistics
-training_stats = {
-'iteration_count': 0,
-'total_decisions': 0,
-'successful_trades': 0,
-'best_performance': 0.0,
-'last_checkpoint_iteration': 0
-}
-try:
-# Start real-time processing (available in Enhanced orchestrator)
-if hasattr(orchestrator, 'start_realtime_processing'):
-await orchestrator.start_realtime_processing()
-logger.info("Real-time processing started")
-# Start COB integration (available in Enhanced orchestrator)
-if hasattr(orchestrator, 'start_cob_integration'):
-await orchestrator.start_cob_integration()
-logger.info("COB integration started - 5-minute data matrix active")
-else:
-logger.info("COB integration not available")
-# Main training loop
-iteration = 0
-last_checkpoint_time = time.time()
-while True:
-try:
-iteration += 1
-training_stats['iteration_count'] = iteration
-# Get symbols to process
-symbols = orchestrator.symbols if hasattr(orchestrator, 'symbols') else ['ETH/USDT']
-# Process each symbol
-for symbol in symbols:
-try:
-# Make trading decision (this triggers model training)
-decision = await orchestrator.make_trading_decision(symbol)
-if decision:
-training_stats['total_decisions'] += 1
-logger.debug(f"[{symbol}] Decision: {decision.action} @ {decision.confidence:.1%}")
-except Exception as e:
-logger.warning(f"Error processing {symbol}: {e}")
-# Status logging every 100 iterations
-if iteration % 100 == 0:
-current_time = time.time()
-elapsed = current_time - last_checkpoint_time
-logger.info(f"[TRAINING] Iteration {iteration}, Decisions: {training_stats['total_decisions']}, Time: {elapsed:.1f}s")
-# Models will save their own checkpoints when performance improves
-training_stats['last_checkpoint_iteration'] = iteration
-last_checkpoint_time = current_time
-# Brief pause to prevent overwhelming the system
-await asyncio.sleep(0.1)  # 100ms between iterations
-except Exception as e:
-logger.error(f"Training loop error: {e}")
-await asyncio.sleep(5)  # Wait longer on error
-except Exception as e:
-logger.error(f"Training pipeline error: {e}")
-import traceback
-logger.error(traceback.format_exc())
+def clear_gpu_memory():
+"""Clear GPU memory cache"""
+if torch.cuda.is_available():
+torch.cuda.empty_cache()
+torch.cuda.synchronize()
-def start_clean_dashboard_with_training():
-"""Start clean dashboard with full training pipeline"""
-try:
-logger.info("=" * 80)
-logger.info("CLEAN TRADING DASHBOARD + FULL TRAINING PIPELINE")
-logger.info("=" * 80)
-logger.info("Features: Real-time Training, COB Integration, Clean UI")
-logger.info("Universal Data Stream: ENABLED")
-logger.info("Neural Decision Fusion: ENABLED")
-logger.info("COB Integration: ENABLED")
-logger.info("GPU Training: ENABLED")
-logger.info("Multi-symbol: ETH/USDT, BTC/USDT")
-# Get port from environment or use default
-dashboard_port = int(os.environ.get('DASHBOARD_PORT', '8051'))
-logger.info(f"Dashboard: http://127.0.0.1:{dashboard_port}")
-logger.info("=" * 80)
-# Check environment variables
-enable_universal_stream = os.environ.get('ENABLE_UNIVERSAL_DATA_STREAM', '1') == '1'
-enable_nn_fusion = os.environ.get('ENABLE_NN_DECISION_FUSION', '1') == '1'
-enable_cob = os.environ.get('ENABLE_COB_INTEGRATION', '1') == '1'
-logger.info(f"Universal Data Stream: {'ENABLED' if enable_universal_stream else 'DISABLED'}")
-logger.info(f"Neural Decision Fusion: {'ENABLED' if enable_nn_fusion else 'DISABLED'}")
-logger.info(f"COB Integration: {'ENABLED' if enable_cob else 'DISABLED'}")
-# Get configuration
-config = get_config()
-# Initialize core components
-from core.data_provider import DataProvider
-from core.orchestrator import TradingOrchestrator
-from core.trading_executor import TradingExecutor
-# Create data provider
-data_provider = DataProvider()
-# Create enhanced orchestrator with COB integration - stable and efficient
-orchestrator = TradingOrchestrator(data_provider, enhanced_rl_training=True)
-logger.info("Enhanced Trading Orchestrator created with COB integration")
-# Create trading executor
-trading_executor = TradingExecutor()
-# Import clean dashboard
-from web.clean_dashboard import create_clean_dashboard
-# Create clean dashboard
-dashboard = create_clean_dashboard(
-data_provider=data_provider,
-orchestrator=orchestrator,
-trading_executor=trading_executor
-)
-logger.info("Clean Trading Dashboard created")
-# Start training pipeline in background thread
-def training_worker():
-"""Run training pipeline in background"""
-try:
-asyncio.run(start_training_pipeline(orchestrator, trading_executor))
-except Exception as e:
-logger.error(f"Training worker error: {e}")
-training_thread = threading.Thread(target=training_worker, daemon=True)
-training_thread.start()
-logger.info("Training pipeline started in background")
-# Wait a moment for training to initialize
-time.sleep(3)
-# Start dashboard server (this blocks)
-logger.info(" Starting Clean Dashboard Server...")
-dashboard.run_server(host='127.0.0.1', port=dashboard_port, debug=False)
-except KeyboardInterrupt:
-logger.info("System stopped by user")
-except Exception as e:
-logger.error(f"Error running clean dashboard with training: {e}")
-import traceback
-traceback.print_exc()
-sys.exit(1)
+def check_system_resources():
+"""Check if system has enough resources"""
+available_ram = psutil.virtual_memory().available / 1024**3
+if available_ram < 2.0:  # Less than 2GB available
+logger.warning(f"Low RAM: {available_ram:.1f} GB available")
+gc.collect()
+clear_gpu_memory()
+return False
+return True
-def main():
-"""Main function"""
-start_clean_dashboard_with_training()
+def run_dashboard_with_recovery():
+"""Run dashboard with automatic error recovery"""
+max_retries = 3
+retry_count = 0
+while retry_count < max_retries:
+try:
+logger.info(f"Starting Clean Trading Dashboard (attempt {retry_count + 1}/{max_retries})")
+# Check system resources
+if not check_system_resources():
+logger.warning("System resources low, waiting 30 seconds...")
+time.sleep(30)
+continue
+# Import here to avoid memory issues on restart
+from core.data_provider import DataProvider
+from core.orchestrator import TradingOrchestrator
+from core.trading_executor import TradingExecutor
+from web.clean_dashboard import create_clean_dashboard
+logger.info("Creating data provider...")
+data_provider = DataProvider()
+logger.info("Creating trading orchestrator...")
+orchestrator = TradingOrchestrator(
+data_provider=data_provider,
+enhanced_rl_training=True
+)
+logger.info("Creating trading executor...")
+trading_executor = TradingExecutor()
+logger.info("Creating clean dashboard...")
+dashboard = create_clean_dashboard(data_provider, orchestrator, trading_executor)
+logger.info("Dashboard created successfully")
+logger.info("=== Clean Trading Dashboard Status ===")
+logger.info("- Data Provider: Active")
+logger.info("- Trading Orchestrator: Active")
+logger.info("- Trading Executor: Active")
+logger.info("- Enhanced Training: Active")
+logger.info("- Dashboard: Ready")
+logger.info("=======================================")
+# Start the dashboard server with error handling
+try:
+logger.info("Starting dashboard server on http://127.0.0.1:8050")
+dashboard.run_server(host='127.0.0.1', port=8050, debug=False)
+except KeyboardInterrupt:
+logger.info("Dashboard stopped by user")
+break
+except Exception as e:
+logger.error(f"Dashboard server error: {e}")
+logger.error(traceback.format_exc())
+raise
+except Exception as e:
+logger.error(f"Critical error in dashboard: {e}")
+logger.error(traceback.format_exc())
+retry_count += 1
+if retry_count < max_retries:
+logger.info(f"Attempting recovery... ({retry_count}/{max_retries})")
+# Cleanup
+gc.collect()
+clear_gpu_memory()
+# Wait before retry
+wait_time = 30 * retry_count  # Exponential backoff
+logger.info(f"Waiting {wait_time} seconds before retry...")
+time.sleep(wait_time)
+else:
+logger.error("Max retries reached. Exiting.")
+sys.exit(1)
 if __name__ == "__main__":
-main()
+try:
+run_dashboard_with_recovery()
+except KeyboardInterrupt:
+logger.info("Application stopped by user")
+sys.exit(0)
+except Exception as e:
+logger.error(f"Fatal error: {e}")
+logger.error(traceback.format_exc())
+sys.exit(1)
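The recovery loop above waits 30 seconds per attempt number, which is a linear step despite the "Exponential backoff" comment. For comparison, a generic sketch of the same pattern with a genuinely exponential delay (the wrapper and names are illustrative, not the repo's code):

import time
import logging

logger = logging.getLogger(__name__)

def run_with_recovery(target, max_retries: int = 3, base_delay: float = 30.0):
    """Call target(); on failure wait base_delay * 2**attempt and retry."""
    for attempt in range(max_retries):
        try:
            return target()
        except KeyboardInterrupt:
            raise  # let the user stop the loop
        except Exception as e:
            logger.error(f"Attempt {attempt + 1}/{max_retries} failed: {e}")
            if attempt + 1 == max_retries:
                raise
            delay = base_delay * (2 ** attempt)  # 30s, 60s, 120s, ...
            logger.info(f"Retrying in {delay:.0f}s...")
            time.sleep(delay)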

View File

@ -1,99 +0,0 @@
#!/usr/bin/env python3
import time
from web.clean_dashboard import CleanTradingDashboard
from core.data_provider import DataProvider
from core.orchestrator import TradingOrchestrator
from core.trading_executor import TradingExecutor
print('Testing signal preservation improvements...')
# Create dashboard instance
data_provider = DataProvider()
orchestrator = TradingOrchestrator(data_provider)
trading_executor = TradingExecutor()
dashboard = CleanTradingDashboard(
data_provider=data_provider,
orchestrator=orchestrator,
trading_executor=trading_executor
)
print(f'Initial recent_decisions count: {len(dashboard.recent_decisions)}')
# Add test signals similar to the user's example
test_signals = [
{'timestamp': '20:39:32', 'action': 'HOLD', 'confidence': 0.01, 'price': 2420.07},
{'timestamp': '20:39:02', 'action': 'HOLD', 'confidence': 0.01, 'price': 2416.89},
{'timestamp': '20:38:45', 'action': 'BUY', 'confidence': 0.65, 'price': 2415.23},
{'timestamp': '20:38:12', 'action': 'SELL', 'confidence': 0.72, 'price': 2413.45},
{'timestamp': '20:37:58', 'action': 'HOLD', 'confidence': 0.02, 'price': 2412.89}
]
# Add signals to dashboard
for signal_data in test_signals:
test_signal = {
'timestamp': signal_data['timestamp'],
'action': signal_data['action'],
'confidence': signal_data['confidence'],
'price': signal_data['price'],
'symbol': 'ETH/USDT',
'executed': False,
'blocked': True,
'manual': False,
'model': 'TEST'
}
dashboard._process_dashboard_signal(test_signal)
print(f'After adding {len(test_signals)} signals: {len(dashboard.recent_decisions)}')
# Test with larger batch to verify new limits
print('\nAdding 50 more signals to test preservation...')
for i in range(50):
test_signal = {
'timestamp': f'20:3{i//10}:{i%60:02d}',
'action': 'HOLD' if i % 3 == 0 else ('BUY' if i % 2 == 0 else 'SELL'),
'confidence': 0.01 + (i * 0.01),
'price': 2420.0 + i,
'symbol': 'ETH/USDT',
'executed': False,
'blocked': True,
'manual': False,
'model': 'BATCH_TEST'
}
dashboard._process_dashboard_signal(test_signal)
print(f'After adding 50 more signals: {len(dashboard.recent_decisions)}')
# Display recent signals
print('\nRecent signals (last 10):')
for signal in dashboard.recent_decisions[-10:]:
timestamp = dashboard._get_signal_attribute(signal, 'timestamp', 'Unknown')
action = dashboard._get_signal_attribute(signal, 'action', 'UNKNOWN')
confidence = dashboard._get_signal_attribute(signal, 'confidence', 0)
price = dashboard._get_signal_attribute(signal, 'price', 0)
print(f' {timestamp} {action}({confidence*100:.1f}%) ${price:.2f}')
# Test cleanup behavior with tick cache
print('\nTesting tick cache cleanup behavior...')
dashboard.tick_cache = [
{'datetime': time.time() - 3600, 'symbol': 'ETHUSDT', 'price': 2400.0}, # 1 hour ago
{'datetime': time.time() - 1800, 'symbol': 'ETHUSDT', 'price': 2410.0}, # 30 min ago
{'datetime': time.time() - 900, 'symbol': 'ETHUSDT', 'price': 2420.0}, # 15 min ago
]
# This should NOT clear signals aggressively anymore
signals_before = len(dashboard.recent_decisions)
dashboard._clear_old_signals_for_tick_range()
signals_after = len(dashboard.recent_decisions)
print(f'Signals before cleanup: {signals_before}')
print(f'Signals after cleanup: {signals_after}')
print(f'Signals preserved: {signals_after}/{signals_before} ({(signals_after/signals_before)*100:.1f}%)')
print('\n✅ Signal preservation test completed!')
print('Changes made:')
print('- Increased recent_decisions limit from 20/50 to 200')
print('- Made tick cache cleanup much more conservative')
print('- Only clears when >500 signals and removes >20% of old data')
print('- Extended time range for signal preservation')

View File

View File

@@ -174,12 +174,64 @@ class CleanTradingDashboard:
 timezone_name = self.config.get('system', {}).get('timezone', 'Europe/Sofia')
 self.timezone = pytz.timezone(timezone_name)
-# Create Dash app
+# Create Dash app with dark theme
 self.app = Dash(__name__, external_stylesheets=[
 'https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/css/bootstrap.min.css',
 'https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css'
 ])
+# Add custom dark theme CSS
+self.app.index_string = '''
+<!DOCTYPE html>
+<html>
+<head>
+{%metas%}
+<title>{%title%}</title>
+{%favicon%}
+{%css%}
+<style>
+body {
+background-color: #111827 !important;
+color: #f8f9fa !important;
+}
+.card {
+background-color: #1f2937 !important;
+border: 1px solid #374151 !important;
+color: #f8f9fa !important;
+}
+.card-header {
+background-color: #374151 !important;
+border-bottom: 1px solid #4b5563 !important;
+color: #f8f9fa !important;
+}
+.table {
+color: #f8f9fa !important;
+}
+.table-dark {
+background-color: #1f2937 !important;
+}
+.bg-light {
+background-color: #374151 !important;
+}
+.text-muted {
+color: #9ca3af !important;
+}
+.border {
+border-color: #4b5563 !important;
+}
+</style>
+</head>
+<body>
+{%app_entry%}
+<footer>
+{%config%}
+{%scripts%}
+{%renderer%}
+</footer>
+</body>
+</html>
+'''
 # Suppress Dash development mode logging
 self.app.enable_dev_tools(debug=False, dev_tools_silence_routes_logging=True)
@@ -205,6 +257,9 @@ class CleanTradingDashboard:
 # Start signal generation loop to ensure continuous trading signals
 self._start_signal_generation_loop()
+# Start live balance sync for trading
+self._start_live_balance_sync()
 # Start training sessions if models are showing FRESH status
 threading.Thread(target=self._delayed_training_check, daemon=True).start()
@@ -318,6 +373,66 @@ class CleanTradingDashboard:
 except Exception as e:
 logger.warning(f"Error getting balance: {e}")
 return 100.0  # Default balance
+def _get_live_balance(self) -> float:
+"""Get real-time balance from exchange when in live trading mode"""
+try:
+if self.trading_executor:
+# Check if we're in live trading mode
+is_live = (hasattr(self.trading_executor, 'trading_enabled') and
+self.trading_executor.trading_enabled and
+hasattr(self.trading_executor, 'simulation_mode') and
+not self.trading_executor.simulation_mode)
+if is_live and hasattr(self.trading_executor, 'exchange'):
+# Get real balance from exchange (throttled to avoid API spam)
+import time
+current_time = time.time()
+# Cache balance for 5 seconds for more frequent updates in live trading
+if not hasattr(self, '_last_balance_check') or current_time - self._last_balance_check > 5:
+exchange = self.trading_executor.exchange
+if hasattr(exchange, 'get_balance'):
+live_balance = exchange.get_balance('USDC')
+if live_balance is not None and live_balance > 0:
+self._cached_live_balance = live_balance
+self._last_balance_check = current_time
+logger.info(f"LIVE BALANCE: Retrieved ${live_balance:.2f} USDC from MEXC")
+return live_balance
+else:
+logger.warning(f"LIVE BALANCE: Retrieved ${live_balance:.2f} USDC - checking USDT as fallback")
+# Also try USDT as fallback since user might have USDT
+usdt_balance = exchange.get_balance('USDT')
+if usdt_balance is not None and usdt_balance > 0:
+self._cached_live_balance = usdt_balance
+self._last_balance_check = current_time
+logger.info(f"LIVE BALANCE: Using USDT balance ${usdt_balance:.2f}")
+return usdt_balance
+else:
+logger.warning("LIVE BALANCE: Exchange does not have get_balance method")
+else:
+# Return cached balance if within 10 second window
+if hasattr(self, '_cached_live_balance'):
+return self._cached_live_balance
+elif hasattr(self.trading_executor, 'simulation_mode') and self.trading_executor.simulation_mode:
+# In simulation mode, show dynamic balance based on P&L
+initial_balance = self._get_initial_balance()
+realized_pnl = sum(trade.get('pnl', 0) for trade in self.closed_trades)
+simulation_balance = initial_balance + realized_pnl
+logger.debug(f"SIMULATION BALANCE: ${simulation_balance:.2f} (Initial: ${initial_balance:.2f} + P&L: ${realized_pnl:.2f})")
+return simulation_balance
+else:
+logger.debug("LIVE BALANCE: Not in live trading mode, using initial balance")
+# Fallback to initial balance for simulation mode
+return self._get_initial_balance()
+except Exception as e:
+logger.error(f"Error getting live balance: {e}")
+# Return cached balance if available, otherwise fallback
+if hasattr(self, '_cached_live_balance'):
+return self._cached_live_balance
+return self._get_initial_balance()
 def _setup_layout(self):
 """Setup the dashboard layout using layout manager"""
@@ -411,17 +526,48 @@ class CleanTradingDashboard:
 trade_count = len(self.closed_trades)
 trade_str = f"{trade_count} Trades"
-# Portfolio value
-initial_balance = self._get_initial_balance()
-portfolio_value = initial_balance + total_session_pnl  # Use total P&L including unrealized
-portfolio_str = f"${portfolio_value:.2f}"
-# MEXC status
+# Portfolio value - use live balance for live trading
+current_balance = self._get_live_balance()
+portfolio_value = current_balance + total_session_pnl  # Use total P&L including unrealized
+# Show live balance indicator for live trading
+balance_indicator = ""
+if self.trading_executor:
+is_live = (hasattr(self.trading_executor, 'trading_enabled') and
+self.trading_executor.trading_enabled and
+hasattr(self.trading_executor, 'simulation_mode') and
+not self.trading_executor.simulation_mode)
+if is_live:
+balance_indicator = " (LIVE)"
+portfolio_str = f"${portfolio_value:.2f}{balance_indicator}"
+# MEXC status with balance info
 mexc_status = "SIM"
 if self.trading_executor:
 if hasattr(self.trading_executor, 'trading_enabled') and self.trading_executor.trading_enabled:
-if hasattr(self.trading_executor, 'simulation_mode') and not self.trading_executor.simulation_mode:
-mexc_status = "LIVE"
+if hasattr(self.trading_executor, 'simulation_mode') and self.trading_executor.simulation_mode:
+# Show simulation mode status with simulated balance
+mexc_status = f"SIM - ${current_balance:.2f}"
+elif hasattr(self.trading_executor, 'simulation_mode') and not self.trading_executor.simulation_mode:
+# Show live balance in MEXC status - detect currency
+try:
+exchange = self.trading_executor.exchange
+usdc_balance = exchange.get_balance('USDC') if hasattr(exchange, 'get_balance') else 0
+usdt_balance = exchange.get_balance('USDT') if hasattr(exchange, 'get_balance') else 0
+if usdc_balance > 0:
+mexc_status = f"LIVE - ${usdc_balance:.2f} USDC"
+elif usdt_balance > 0:
+mexc_status = f"LIVE - ${usdt_balance:.2f} USDT"
+else:
+mexc_status = f"LIVE - ${current_balance:.2f}"
+except:
+mexc_status = f"LIVE - ${current_balance:.2f}"
+else:
+mexc_status = "SIM"
+else:
+mexc_status = "DISABLED"
 return price_str, session_pnl_str, position_str, trade_str, portfolio_str, mexc_status
@@ -2876,6 +3022,39 @@ class CleanTradingDashboard:
 except Exception as e:
 logger.error(f"Error starting signal generation loop: {e}")
+def _start_live_balance_sync(self):
+"""Start continuous live balance synchronization for trading"""
+def balance_sync_worker():
+while True:
+try:
+if self.trading_executor:
+is_live = (hasattr(self.trading_executor, 'trading_enabled') and
+self.trading_executor.trading_enabled and
+hasattr(self.trading_executor, 'simulation_mode') and
+not self.trading_executor.simulation_mode)
+if is_live and hasattr(self.trading_executor, 'exchange'):
+# Force balance refresh every 15 seconds in live mode
+if hasattr(self, '_last_balance_check'):
+del self._last_balance_check  # Force refresh
+balance = self._get_live_balance()
+if balance > 0:
+logger.debug(f"BALANCE SYNC: Live balance: ${balance:.2f}")
+else:
+logger.warning("BALANCE SYNC: Could not retrieve live balance")
+# Sync balance every 15 seconds for live trading
+time.sleep(15)
+except Exception as e:
+logger.debug(f"Error in balance sync loop: {e}")
+time.sleep(30)  # Wait longer on error
+# Start balance sync thread only if we have trading enabled
+if self.trading_executor:
+threading.Thread(target=balance_sync_worker, daemon=True).start()
+logger.info("BALANCE SYNC: Background balance synchronization started")
 def _generate_dqn_signal(self, symbol: str, current_price: float) -> Optional[Dict]:
 """Generate trading signal using DQN agent - NOT AVAILABLE IN BASIC ORCHESTRATOR"""
@@ -4389,9 +4568,9 @@ class CleanTradingDashboard:
 import requests
 import time
-# Use Binance REST API for order book data
+# Use Binance REST API for order book data with maximum depth
 binance_symbol = symbol.replace('/', '')
-url = f"https://api.binance.com/api/v3/depth?symbol={binance_symbol}&limit=500"
+url = f"https://api.binance.com/api/v3/depth?symbol={binance_symbol}&limit=1000"
 response = requests.get(url, timeout=5)
 if response.status_code == 200:
@@ -4401,8 +4580,8 @@ class CleanTradingDashboard:
 bids = []
 asks = []
-# Process bids (buy orders)
-for bid in data['bids'][:100]:  # Top 100 levels
+# Process bids (buy orders) - increased to 500 levels for better bucket filling
+for bid in data['bids'][:500]:  # Top 500 levels
 price = float(bid[0])
 size = float(bid[1])
 bids.append({
@@ -4411,8 +4590,8 @@ class CleanTradingDashboard:
 'total': price * size
 })
-# Process asks (sell orders)
-for ask in data['asks'][:100]:  # Top 100 levels
+# Process asks (sell orders) - increased to 500 levels for better bucket filling
+for ask in data['asks'][:500]:  # Top 500 levels
 price = float(ask[0])
 size = float(ask[1])
 asks.append({
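Both loops above consume the same public `/api/v3/depth` payload, which returns `bids` and `asks` as `[price, quantity]` string pairs. A self-contained sketch of that fetch and conversion (no API key needed; names are illustrative):

import requests

def fetch_order_book(symbol: str = "ETHUSDT", limit: int = 1000, levels: int = 500):
    """Fetch Binance depth and convert the top levels to float dicts."""
    url = f"https://api.binance.com/api/v3/depth?symbol={symbol}&limit={limit}"
    data = requests.get(url, timeout=5).json()
    def to_levels(side):
        return [
            {'price': float(p), 'size': float(q), 'total': float(p) * float(q)}
            for p, q in side[:levels]
        ]
    return to_levels(data['bids']), to_levels(data['asks'])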
@@ -4519,28 +4698,35 @@ class CleanTradingDashboard:
             imbalance = cob_snapshot['stats']['imbalance']
             abs_imbalance = abs(imbalance)
-            # Dynamic threshold based on imbalance strength
+            # Dynamic threshold based on imbalance strength with realistic confidence
             if abs_imbalance > 0.8:  # Very strong imbalance (>80%)
                 threshold = 0.05  # 5% threshold for very strong signals
-                confidence_multiplier = 3.0
+                base_confidence = 0.85  # High but not perfect confidence
+                confidence_boost = (abs_imbalance - 0.8) * 0.75  # Scale remaining 15%
             elif abs_imbalance > 0.5:  # Strong imbalance (>50%)
                 threshold = 0.1  # 10% threshold for strong signals
-                confidence_multiplier = 2.5
+                base_confidence = 0.70  # Good confidence
+                confidence_boost = (abs_imbalance - 0.5) * 0.50  # Scale up to 85%
             elif abs_imbalance > 0.3:  # Moderate imbalance (>30%)
                 threshold = 0.15  # 15% threshold for moderate signals
-                confidence_multiplier = 2.0
+                base_confidence = 0.55  # Moderate confidence
+                confidence_boost = (abs_imbalance - 0.3) * 0.75  # Scale up to 70%
             else:  # Weak imbalance
                 threshold = 0.2  # 20% threshold for weak signals
-                confidence_multiplier = 1.5
+                base_confidence = 0.35  # Low confidence
+                confidence_boost = abs_imbalance * 0.67  # Scale up to 55%
             # Generate signal if imbalance exceeds threshold
             if abs_imbalance > threshold:
+                # Calculate more realistic confidence (never exactly 1.0)
+                final_confidence = min(0.95, base_confidence + confidence_boost)
                 signal = {
                     'timestamp': datetime.now(),
                     'type': 'cob_liquidity_imbalance',
                     'action': 'BUY' if imbalance > 0 else 'SELL',
                     'symbol': symbol,
-                    'confidence': min(1.0, abs_imbalance * confidence_multiplier),
+                    'confidence': final_confidence,
                     'strength': abs_imbalance,
                     'threshold_used': threshold,
                     'signal_strength': 'very_strong' if abs_imbalance > 0.8 else 'strong' if abs_imbalance > 0.5 else 'moderate' if abs_imbalance > 0.3 else 'weak',
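Worked through, the new tiers behave sensibly: an imbalance of 0.9 falls in the top tier, giving 0.85 + (0.9 - 0.8) * 0.75 = 0.925; an imbalance of 0.6 gives 0.70 + (0.6 - 0.5) * 0.50 = 0.75; and the min(0.95, ...) cap means confidence can never read as certainty. The ladder distilled into a pure function for quick testing:

    def cob_confidence(imbalance: float) -> float:
        # Distilled from the hunk above: tiered base + linear boost, capped below 1.0.
        x = abs(imbalance)
        if x > 0.8:
            base, boost = 0.85, (x - 0.8) * 0.75
        elif x > 0.5:
            base, boost = 0.70, (x - 0.5) * 0.50
        elif x > 0.3:
            base, boost = 0.55, (x - 0.3) * 0.75
        else:
            base, boost = 0.35, x * 0.67
        return min(0.95, base + boost)

    assert abs(cob_confidence(0.9) - 0.925) < 1e-9
    assert abs(cob_confidence(0.6) - 0.75) < 1e-9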
@@ -5071,9 +5257,7 @@ class CleanTradingDashboard:
         if self.orchestrator and hasattr(self.orchestrator, 'add_decision_callback'):
             def connect_worker():
                 try:
-                    loop = asyncio.new_event_loop()
-                    asyncio.set_event_loop(loop)
-                    loop.run_until_complete(self.orchestrator.add_decision_callback(self._on_trading_decision))
+                    self.orchestrator.add_decision_callback(self._on_trading_decision)
                     logger.info("Successfully connected to orchestrator for trading signals.")
                 except Exception as e:
                     logger.error(f"Orchestrator connection worker failed: {e}")
@@ -5478,15 +5662,18 @@ class CleanTradingDashboard:
             import torch
             device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+            # Get the model's device to ensure tensors are on the same device
+            model_device = next(model.parameters()).device
             # Handle different input shapes for different CNN models
             if hasattr(model, 'input_shape'):
                 # EnhancedCNN model
-                features_tensor = torch.FloatTensor(features).unsqueeze(0).to(device)
+                features_tensor = torch.FloatTensor(features).unsqueeze(0).to(model_device)
             else:
                 # Basic CNN model - reshape appropriately
-                features_tensor = torch.FloatTensor(features).unsqueeze(0).unsqueeze(0).to(device)
-            target_tensor = torch.LongTensor([target]).to(device)
+                features_tensor = torch.FloatTensor(features).unsqueeze(0).unsqueeze(0).to(model_device)
+            target_tensor = torch.LongTensor([target]).to(model_device)
             # Set model to training mode and zero gradients
             model.train()
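The pattern introduced here is the standard PyTorch idiom for device-safe training: ask the model where it lives instead of assuming the global default, so a CPU-resident model on a CUDA machine no longer triggers a device-mismatch error. In isolation, with a toy model standing in for the dashboard's CNN:

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 3)  # stand-in for the real CNN
    model_device = next(model.parameters()).device  # cpu or cuda:N, wherever the model lives

    features = torch.randn(8)
    features_tensor = features.unsqueeze(0).to(model_device)  # add batch dim, match device
    target_tensor = torch.tensor([1], dtype=torch.long, device=model_device)

    model.train()
    loss = nn.functional.cross_entropy(model(features_tensor), target_tensor)
    loss.backward()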
@@ -5605,10 +5792,11 @@ class CleanTradingDashboard:
             if hasattr(network, 'forward'):
                 import torch
                 import torch.nn as nn
-                device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-                features_tensor = torch.FloatTensor(features).unsqueeze(0).to(device)
-                action_target_tensor = torch.LongTensor([action_target]).to(device)
-                confidence_target_tensor = torch.FloatTensor([confidence_target]).to(device)
+                # Get the model's device to ensure tensors are on the same device
+                model_device = next(network.parameters()).device
+                features_tensor = torch.FloatTensor(features).unsqueeze(0).to(model_device)
+                action_target_tensor = torch.LongTensor([action_target]).to(model_device)
+                confidence_target_tensor = torch.FloatTensor([confidence_target]).to(model_device)
                 network.train()
                 network_output = network(features_tensor)

View File

@@ -286,11 +286,11 @@ class DashboardComponentManager:
         if hasattr(cob_snapshot, 'stats'):
             # Old format with stats attribute
             stats = cob_snapshot.stats
             mid_price = stats.get('mid_price', 0)
             spread_bps = stats.get('spread_bps', 0)
             imbalance = stats.get('imbalance', 0)
             bids = getattr(cob_snapshot, 'consolidated_bids', [])
             asks = getattr(cob_snapshot, 'consolidated_asks', [])
         else:
             # New COBSnapshot format with direct attributes
             mid_price = getattr(cob_snapshot, 'volume_weighted_mid', 0)
@@ -405,26 +405,23 @@ class DashboardComponentManager:
         ], className="text-center")

     def _create_cob_ladder_panel(self, bids, asks, mid_price, symbol=""):
-        """Creates the right panel with the compact COB ladder."""
+        """Creates Bookmap-style COB display with horizontal bars extending from center price."""
         # Use symbol-specific bucket sizes: ETH = $1, BTC = $10
         bucket_size = 1.0 if "ETH" in symbol else 10.0
-        num_levels = 5
+        num_levels = 20  # Show 20 levels each side
         def aggregate_buckets(orders):
             buckets = {}
             for order in orders:
                 # Handle both dictionary format and ConsolidatedOrderBookLevel objects
                 if hasattr(order, 'price'):
-                    # ConsolidatedOrderBookLevel object
                     price = order.price
                     size = order.total_size
                     volume_usd = order.total_volume_usd
                 else:
-                    # Dictionary format (legacy)
-                    price = order.get('price', 0)
-                    # Handle both old format (size) and new format (total_size)
-                    size = order.get('total_size', order.get('size', 0))
-                    volume_usd = order.get('total_volume_usd', size * price)
+                    price = order.get('price', 0)
+                    size = order.get('total_size', order.get('size', 0))
+                    volume_usd = order.get('total_volume_usd', size * price)
                 if price > 0:
                     bucket_key = round(price / bucket_size) * bucket_size
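Bucketing rounds each price to the nearest multiple of the bucket size, so with the $1 ETH bucket, 3421.37 lands in bucket 3421.0 while 3421.62 lands in 3422.0, and the $10 BTC bucket maps 117654 to 117650. The aggregation step isolated (dict-format orders only, for brevity):

    from collections import defaultdict

    def aggregate_buckets(orders, bucket_size):
        # Sum USD and crypto volume per rounded price bucket.
        buckets = defaultdict(lambda: {'usd_volume': 0.0, 'crypto_volume': 0.0})
        for order in orders:
            price = order.get('price', 0)
            size = order.get('total_size', order.get('size', 0))
            if price > 0:
                key = round(price / bucket_size) * bucket_size
                buckets[key]['usd_volume'] += order.get('total_volume_usd', size * price)
                buckets[key]['crypto_volume'] += size
        return dict(buckets)

    bids = [{'price': 3421.37, 'size': 2.0}, {'price': 3421.62, 'size': 1.5}]
    print(aggregate_buckets(bids, 1.0))  # volume split across buckets 3421.0 and 3422.0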
@@ -437,68 +434,168 @@ class DashboardComponentManager:
         bid_buckets = aggregate_buckets(bids)
         ask_buckets = aggregate_buckets(asks)
+        # Calculate max volume for scaling
         all_usd_volumes = [b['usd_volume'] for b in bid_buckets.values()] + [a['usd_volume'] for a in ask_buckets.values()]
         max_volume = max(all_usd_volumes) if all_usd_volumes else 1
+        # Create price levels around mid price - expanded range for more bars
         center_bucket = round(mid_price / bucket_size) * bucket_size
         ask_levels = [center_bucket + i * bucket_size for i in range(1, num_levels + 1)]
         bid_levels = [center_bucket - i * bucket_size for i in range(num_levels)]
+        # Debug: Log how many orders we have to work with
+        print(f"DEBUG COB: {symbol} - Processing {len(bids)} bids, {len(asks)} asks")
+        print(f"DEBUG COB: Mid price: ${mid_price:.2f}, Bucket size: ${bucket_size}")
+        print(f"DEBUG COB: Bid buckets: {len(bid_buckets)}, Ask buckets: {len(ask_buckets)}")
+        if bid_buckets:
+            print(f"DEBUG COB: Bid price range: ${min(bid_buckets.keys()):.2f} - ${max(bid_buckets.keys()):.2f}")
+        if ask_buckets:
+            print(f"DEBUG COB: Ask price range: ${min(ask_buckets.keys()):.2f} - ${max(ask_buckets.keys()):.2f}")
-        def create_ladder_row(price, bucket_data, max_vol, row_type):
-            usd_volume = bucket_data.get('usd_volume', 0)
-            crypto_volume = bucket_data.get('crypto_volume', 0)
-            progress = (usd_volume / max_vol) * 100 if max_vol > 0 else 0
-            color = "danger" if row_type == 'ask' else "success"
-            text_color = "text-danger" if row_type == 'ask' else "text-success"
-            # Format USD volume (no $ symbol)
-            if usd_volume > 1e6:
-                usd_str = f"{usd_volume/1e6:.1f}M"
-            elif usd_volume > 1e3:
-                usd_str = f"{usd_volume/1e3:.0f}K"
-            else:
-                usd_str = f"{usd_volume:,.0f}"
-            # Format crypto volume (no unit symbol)
-            if crypto_volume > 1000:
-                crypto_str = f"{crypto_volume/1000:.1f}K"
-            elif crypto_volume > 1:
-                crypto_str = f"{crypto_volume:.1f}"
-            else:
-                crypto_str = f"{crypto_volume:.3f}"
-            return html.Tr([
-                html.Td(f"${price:,.0f}", className=f"{text_color} price-level small"),
-                html.Td(
-                    dbc.Progress(value=progress, color=color, className="vh-25 compact-progress"),
-                    className="progress-cell p-0"
-                ),
-                html.Td(usd_str, className="volume-level text-end fw-bold small p-0 pe-1"),
-                html.Td(crypto_str, className="volume-level text-start small text-muted p-0 ps-1")
-            ], className="compact-ladder-row p-0")
-        def get_bucket_data(buckets, price):
-            return buckets.get(price, {'usd_volume': 0, 'crypto_volume': 0})
-        ask_rows = [create_ladder_row(p, get_bucket_data(ask_buckets, p), max_volume, 'ask') for p in sorted(ask_levels, reverse=True)]
-        bid_rows = [create_ladder_row(p, get_bucket_data(bid_buckets, p), max_volume, 'bid') for p in sorted(bid_levels, reverse=True)]
-        mid_row = html.Tr([
-            html.Td(f"${mid_price:,.0f}", colSpan=4, className="text-center fw-bold small mid-price-row p-0")
-        ])
-        ladder_table = html.Table([
-            html.Thead(html.Tr([
-                html.Th("Price", className="small p-0"),
-                html.Th("Volume", className="small p-0"),
-                html.Th("USD", className="small text-end p-0 pe-1"),
-                html.Th("Crypto", className="small text-start p-0 ps-1")
-            ])),
-            html.Tbody(ask_rows + [mid_row] + bid_rows)
-        ], className="table table-sm table-borderless cob-ladder-table-compact m-0 p-0")  # Compact classes
-        return ladder_table
+        def create_bookmap_row(price, bid_data, ask_data, max_vol):
+            """Create a Bookmap-style row with horizontal bars extending from center"""
+            bid_volume = bid_data.get('usd_volume', 0)
+            ask_volume = ask_data.get('usd_volume', 0)
+            # Calculate bar widths (0-100%)
+            bid_width = (bid_volume / max_vol) * 100 if max_vol > 0 else 0
+            ask_width = (ask_volume / max_vol) * 100 if max_vol > 0 else 0
+            # Format volumes
+            def format_volume(vol):
+                if vol > 1e6:
+                    return f"{vol/1e6:.1f}M"
+                elif vol > 1e3:
+                    return f"{vol/1e3:.0f}K"
+                elif vol > 0:
+                    return f"{vol:,.0f}"
+                return ""
+            bid_vol_str = format_volume(bid_volume)
+            ask_vol_str = format_volume(ask_volume)
+            return html.Div([
+                # Price level row
+                html.Div([
+                    # Bid side (left) - green bar extending right
+                    html.Div([
+                        html.Div(
+                            bid_vol_str,
+                            className="text-end small fw-bold px-2",
+                            style={
+                                "background": "linear-gradient(90deg, rgba(34, 197, 94, 0.3), rgba(34, 197, 94, 0.9))" if bid_volume > 0 else "transparent",
+                                "color": "#ffffff" if bid_volume > 0 else "transparent",
+                                "width": f"{bid_width}%",
+                                "minHeight": "22px",
+                                "display": "flex",
+                                "alignItems": "center",
+                                "justifyContent": "flex-end",
+                                "marginLeft": "auto",
+                                "border": "1px solid rgba(34, 197, 94, 0.5)" if bid_volume > 0 else "none",
+                                "borderRadius": "2px",
+                                "textShadow": "1px 1px 2px rgba(0,0,0,0.8)",
+                                "fontWeight": "600"
+                            }
+                        )
+                    ], style={"width": "40%", "display": "flex", "justifyContent": "flex-end", "padding": "1px"}),
+                    # Price in center
+                    html.Div(
+                        f"{price:,.0f}",
+                        className="text-center small fw-bold px-2",
+                        style={
+                            "width": "20%",
+                            "minHeight": "22px",
+                            "display": "flex",
+                            "alignItems": "center",
+                            "justifyContent": "center",
+                            "background": "linear-gradient(180deg, rgba(75, 85, 99, 0.9), rgba(55, 65, 81, 0.9))",
+                            "color": "#f8f9fa",
+                            "borderLeft": "1px solid rgba(156, 163, 175, 0.3)",
+                            "borderRight": "1px solid rgba(156, 163, 175, 0.3)",
+                            "textShadow": "1px 1px 2px rgba(0,0,0,0.8)",
+                            "fontWeight": "600"
+                        }
+                    ),
+                    # Ask side (right) - red bar extending left
+                    html.Div([
+                        html.Div(
+                            ask_vol_str,
+                            className="text-start small fw-bold px-2",
+                            style={
+                                "background": "linear-gradient(270deg, rgba(239, 68, 68, 0.3), rgba(239, 68, 68, 0.9))" if ask_volume > 0 else "transparent",
+                                "color": "#ffffff" if ask_volume > 0 else "transparent",
+                                "width": f"{ask_width}%",
+                                "minHeight": "22px",
+                                "display": "flex",
+                                "alignItems": "center",
+                                "justifyContent": "flex-start",
+                                "border": "1px solid rgba(239, 68, 68, 0.5)" if ask_volume > 0 else "none",
+                                "borderRadius": "2px",
+                                "textShadow": "1px 1px 2px rgba(0,0,0,0.8)",
+                                "fontWeight": "600"
+                            }
+                        )
+                    ], style={"width": "40%", "display": "flex", "justifyContent": "flex-start", "padding": "1px"})
+                ], style={
+                    "display": "flex",
+                    "alignItems": "center",
+                    "marginBottom": "2px",
+                    "background": "rgba(17, 24, 39, 0.95)",
+                    "border": "1px solid rgba(75, 85, 99, 0.3)",
+                    "borderRadius": "3px"
+                })
+            ])
+        # Create all price levels
+        all_levels = sorted(set(ask_levels + bid_levels + [center_bucket]), reverse=True)
+        rows = []
+        for price in all_levels:
+            bid_data = bid_buckets.get(price, {'usd_volume': 0})
+            ask_data = ask_buckets.get(price, {'usd_volume': 0})
+            # Only show rows with some volume or near mid price
+            if bid_data['usd_volume'] > 0 or ask_data['usd_volume'] > 0 or abs(price - mid_price) <= bucket_size * 5:
+                rows.append(create_bookmap_row(price, bid_data, ask_data, max_volume))
+        # Add header with improved dark theme styling
+        header = html.Div([
+            html.Div("BIDS", className="text-center fw-bold small",
+                     style={"width": "40%", "color": "#10b981", "textShadow": "1px 1px 2px rgba(0,0,0,0.8)"}),
+            html.Div("PRICE", className="text-center fw-bold small",
+                     style={"width": "20%", "color": "#f8f9fa", "textShadow": "1px 1px 2px rgba(0,0,0,0.8)"}),
+            html.Div("ASKS", className="text-center fw-bold small",
+                     style={"width": "40%", "color": "#ef4444", "textShadow": "1px 1px 2px rgba(0,0,0,0.8)"})
+        ], style={
+            "display": "flex",
+            "marginBottom": "8px",
+            "padding": "8px",
+            "background": "linear-gradient(180deg, rgba(31, 41, 55, 0.95), rgba(17, 24, 39, 0.95))",
+            "border": "1px solid rgba(75, 85, 99, 0.4)",
+            "borderRadius": "6px",
+            "boxShadow": "0 2px 4px rgba(0,0,0,0.3)"
+        })
+        return html.Div([
+            header,
+            html.Div(rows, style={
+                "maxHeight": "500px",
+                "overflowY": "auto",
+                "background": "linear-gradient(180deg, rgba(17, 24, 39, 0.98), rgba(31, 41, 55, 0.98))",
+                "border": "2px solid rgba(75, 85, 99, 0.4)",
+                "borderRadius": "8px",
+                "boxShadow": "inset 0 2px 4px rgba(0,0,0,0.3)"
+            })
+        ], style={
+            "fontFamily": "monospace",
+            "background": "rgba(17, 24, 39, 0.9)",
+            "padding": "8px",
+            "borderRadius": "8px",
+            "border": "1px solid rgba(75, 85, 99, 0.3)"
+        })
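Two properties of the new renderer worth noting: bar widths are linear in USD volume relative to the single largest bucket on either side, so one whale-sized level will visually flatten everything else (a log or percentile scale would be the usual Bookmap-style refinement), and both sides share one volume formatter. A quick sanity check of that formatter as written above:

    def format_volume(vol):
        if vol > 1e6:
            return f"{vol/1e6:.1f}M"
        elif vol > 1e3:
            return f"{vol/1e3:.0f}K"
        elif vol > 0:
            return f"{vol:,.0f}"
        return ""

    assert format_volume(1_500_000) == "1.5M"
    assert format_volume(12_000) == "12K"
    assert format_volume(850) == "850"
    assert format_volume(0) == ""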
    def format_cob_data_with_buckets(self, cob_snapshot, symbol, price_buckets, memory_stats, bucket_size=1.0):
        """Format COB data with price buckets for high-frequency display"""

View File

@@ -15,12 +15,16 @@ class DashboardLayoutManager:
         self.trading_executor = trading_executor
     def create_main_layout(self):
-        """Create the main dashboard layout"""
+        """Create the main dashboard layout with dark theme"""
         return html.Div([
             self._create_header(),
             self._create_interval_component(),
             self._create_main_content()
-        ], className="container-fluid")
+        ], className="container-fluid", style={
+            "backgroundColor": "#111827",
+            "minHeight": "100vh",
+            "color": "#f8f9fa"
+        })
     def _create_header(self):
         """Create the dashboard header"""
@@ -84,7 +88,12 @@ class DashboardLayoutManager:
                     html.H5(id=card_id, className=f"{text_class} mb-0 small"),
                     html.P(label, className="text-muted mb-0 tiny")
                 ], className="card-body text-center p-2")
-            ], className="card bg-light", style={"height": "60px"})
+            ], className="card", style={
+                "height": "60px",
+                "backgroundColor": "#1f2937",
+                "border": "1px solid #374151",
+                "color": "#f8f9fa"
+            })
             cards.append(card)
         return html.Div(