more aggressive trading actions. audit

This commit is contained in:
Dobromir Popov
2025-07-02 00:52:50 +03:00
parent c267657456
commit 0f155b319c
8 changed files with 292 additions and 61 deletions


@@ -0,0 +1,65 @@
# Aggressive Trading Thresholds Summary
## Overview
Lowered confidence thresholds across the entire trading system to execute trades more aggressively, generating more training data for the checkpoint-enabled models.
## Changes Made
### 1. Clean Dashboard (`web/clean_dashboard.py`)
- **CLOSE_POSITION_THRESHOLD**: `0.25` → `0.15` (40% reduction)
- **OPEN_POSITION_THRESHOLD**: `0.60` → `0.35` (42% reduction)
### 2. DQN Agent (`NN/models/dqn_agent.py`)
- **entry_confidence_threshold**: `0.7` → `0.35` (50% reduction)
- **exit_confidence_threshold**: `0.3` → `0.15` (50% reduction)
### 3. Trading Orchestrator (`core/orchestrator.py`)
- **confidence_threshold**: `0.20` → `0.15` (25% reduction)
- **confidence_threshold_close**: `0.10` → `0.08` (20% reduction)
### 4. Realtime RL COB Trader (`core/realtime_rl_cob_trader.py`)
- **min_confidence_threshold**: `0.7` → `0.35` (50% reduction)
### 5. Training Integration (`core/training_integration.py`)
- **min_confidence_threshold**: `0.3` → `0.15` (50% reduction)
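Taken together, these changes loosen the execution gate considerably. A minimal sketch of the gating logic using the new dashboard values (the `decide_action` helper and its signature are illustrative, not the project's actual API):
```python
# Sketch of threshold gating with the new dashboard values.
OPEN_POSITION_THRESHOLD = 0.35   # was 0.60
CLOSE_POSITION_THRESHOLD = 0.15  # was 0.25
POSITION_SIZE = 0.005            # small fixed size keeps per-trade risk low

def decide_action(confidence: float, has_position: bool) -> str:
    """Map a model confidence score in [0, 1] to a trading action."""
    if has_position:
        # Closing now fires at 15% confidence instead of 25%.
        return "CLOSE" if confidence >= CLOSE_POSITION_THRESHOLD else "HOLD"
    # Opening now fires at 35% confidence instead of 60%.
    return "OPEN" if confidence >= OPEN_POSITION_THRESHOLD else "HOLD"
```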
## Expected Impact
### More Aggressive Trading
- **Entry Thresholds**: Now require only 35% confidence to open new positions (vs 60-70% previously)
- **Exit Thresholds**: Now require only 8-15% confidence to close positions (vs 25-30% previously)
- **Overall**: The system is expected to execute roughly 2-3x more trades than before
### Better Training Data Generation
- **More Executed Actions**: Since training progress is now stored, more executed trades means more training data
- **Faster Learning**: Models will learn from real trading outcomes more frequently
- **Split-Second Decisions**: With 100ms training intervals, models can adapt quickly to market changes
### Risk Management
- **Position Sizing**: Small position sizes (0.005) limit risk per trade
- **Profit Incentives**: System still has profit-based incentives for closing positions
- **Leverage Control**: User-controlled leverage settings provide additional risk management
## Training Frequency
- **Decision Fusion**: Every 100ms
- **COB RL**: Every 100ms
- **DQN**: Every 30 seconds
- **CNN**: Every 30 seconds
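A minimal sketch of an interval scheduler that would produce these cadences; the loop shape and the `trainers` callbacks are assumptions, not the project's actual training loop:
```python
import time

# Training cadences in seconds, mirroring the list above.
SCHEDULE = {
    "decision_fusion": 0.1,  # every 100ms
    "cob_rl": 0.1,           # every 100ms
    "dqn": 30.0,             # every 30 seconds
    "cnn": 30.0,             # every 30 seconds
}

def run_training_loop(trainers: dict, run_for: float = 60.0) -> None:
    """Call each trainer whenever its interval has elapsed."""
    last_run = {name: 0.0 for name in SCHEDULE}
    start = time.monotonic()
    while time.monotonic() - start < run_for:
        now = time.monotonic()
        for name, interval in SCHEDULE.items():
            if now - last_run[name] >= interval:
                trainers[name]()   # one training step for this model
                last_run[name] = now
        time.sleep(0.01)           # 10ms tick keeps the loop cheap
```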
## Monitoring
- Training performance metrics are tracked and displayed
- Average, min, max training times are logged
- Training frequency and total calls are monitored
- Real-time performance feedback available in dashboard
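A sketch of the kind of timing tracker these metrics imply (the class and method names are assumptions, not existing dashboard code):
```python
import time
from statistics import mean

class TrainingMetrics:
    """Record per-call training durations for avg/min/max reporting."""

    def __init__(self) -> None:
        self.durations: list[float] = []

    def timed_call(self, train_fn) -> None:
        """Run one training call and record how long it took."""
        start = time.perf_counter()
        train_fn()
        self.durations.append(time.perf_counter() - start)

    def summary(self) -> dict:
        d = self.durations
        return {
            "total_calls": len(d),
            "avg_s": mean(d) if d else 0.0,
            "min_s": min(d, default=0.0),
            "max_s": max(d, default=0.0),
        }
```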
## Next Steps
1. Monitor trade execution frequency
2. Track training data generation rate
3. Observe model learning progress
4. Adjust thresholds further if needed based on performance
## Notes
- All changes maintain the existing profit incentive system
- Position management logic remains intact
- Risk controls through position sizing and leverage are preserved
- Training checkpoint system ensures progress is not lost


@@ -0,0 +1,166 @@
# Placeholder Functions Audit Report
## Overview
This audit identifies functions that appear to be implemented but are actually just placeholders or mock implementations, similar to the COB training issue that caused debugging problems.
## Critical Placeholder Functions
### 1. **COB RL Training Functions** (HIGH PRIORITY)
#### `core/training_integration.py` - Line 178
```python
def _train_cob_rl_on_trade_outcome(self, trade_record: Dict[str, Any], reward: float) -> bool:
    """Train COB RL on trade outcome (placeholder)"""
    # COB RL training would go here - requires more specific implementation
    # For now, just log that we could train COB RL
    logger.debug(f"COB RL training opportunity: features={len(cob_features)}")
    return True
```
**Issue**: Returns `True` but does no actual training. This was the original COB training issue.
#### `web/clean_dashboard.py` - Line 4438
```python
def _perform_real_cob_rl_training(self, market_data: List[Dict]):
    """Perform actual COB RL training with real market microstructure data"""
    # For now, create a simple checkpoint for COB RL to prevent recreation
    checkpoint_data = {
        'model_state_dict': {},  # Placeholder
        'training_samples': len(market_data),
        'cob_features_processed': True
    }
```
**Issue**: Only creates placeholder checkpoints, no actual training.
### 2. **CNN Training Functions** (HIGH PRIORITY)
#### `core/training_integration.py` - Line 148
```python
def _train_cnn_on_trade_outcome(self, trade_record: Dict[str, Any], reward: float) -> bool:
    """Train CNN on trade outcome (placeholder)"""
    # CNN training would go here - requires more specific implementation
    # For now, just log that we could train CNN
    logger.debug(f"CNN training opportunity: features={len(cnn_features)}, predictions={len(cnn_predictions)}")
    return True
```
**Issue**: Returns `True` but does no actual training.
#### `web/clean_dashboard.py` - Line 4239
```python
def _perform_real_cnn_training(self, market_data: List[Dict]):
    # Multiple issues with CNN model access and training
    model.train()                     # CNNModel doesn't have train() method
    outputs = model(features_tensor)  # CNNModel is not callable
    model.losses.append(loss_value)   # CNNModel doesn't have losses attribute
```
**Issue**: Tries to access non-existent CNN model methods and attributes.
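For contrast, a minimal sketch of a working training step, assuming the wrapper exposed its underlying `torch.nn.Module`; the `net` handle, optimizer, and classification targets are assumptions about the interface, not the current `CNNModel` API:
```python
import torch
import torch.nn as nn

def cnn_train_step(net: nn.Module,
                   optimizer: torch.optim.Optimizer,
                   features_tensor: torch.Tensor,
                   targets: torch.Tensor) -> float:
    """One supervised training step on the underlying torch module.

    Sketch only: assumes CNNModel exposes its nn.Module and that
    targets are class indices for a CrossEntropyLoss objective.
    """
    net.train()                         # torch's real train-mode toggle
    optimizer.zero_grad()
    outputs = net(features_tensor)      # nn.Module instances are callable
    loss = nn.CrossEntropyLoss()(outputs, targets)
    loss.backward()
    optimizer.step()
    return loss.item()                  # track losses outside the model
```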
### 3. **Dynamic Model Loading** (MEDIUM PRIORITY)
#### `web/clean_dashboard.py` - Lines 234, 239
```python
def load_model_dynamically(self, model_name: str, model_type: str, model_path: Optional[str] = None) -> bool:
    """Dynamically load a model at runtime - Not implemented in orchestrator"""
    logger.warning("Dynamic model loading not implemented in orchestrator")
    return False

def unload_model_dynamically(self, model_name: str) -> bool:
    """Dynamically unload a model at runtime - Not implemented in orchestrator"""
    logger.warning("Dynamic model unloading not implemented in orchestrator")
    return False
```
**Issue**: Always returns `False`, no actual implementation.
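A minimal sketch of a registry-based implementation on the orchestrator side; `model_registry` and the `_load_*` helpers are hypothetical, not existing orchestrator internals:
```python
class Orchestrator:
    """Sketch only: the registry and loader helpers are hypothetical."""

    def __init__(self) -> None:
        self.model_registry: dict = {}

    def _load_dqn(self, path):  # stand-in for real checkpoint loading
        ...

    def _load_cnn(self, path):  # stand-in for real checkpoint loading
        ...

    def load_model_dynamically(self, model_name: str, model_type: str,
                               model_path=None) -> bool:
        """Load a model into the registry instead of always returning False."""
        loaders = {"dqn": self._load_dqn, "cnn": self._load_cnn}
        if model_type not in loaders:
            return False
        self.model_registry[model_name] = loaders[model_type](model_path)
        return True

    def unload_model_dynamically(self, model_name: str) -> bool:
        """Drop the model from the registry so it can be garbage-collected."""
        return self.model_registry.pop(model_name, None) is not None
```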
### 4. **Universal Data Stream** (LOW PRIORITY)
#### `web/clean_dashboard.py` - Lines 76-221
```python
class UnifiedDataStream:
    """Placeholder for disabled Universal Data Stream"""

    def __init__(self, *args, **kwargs):
        pass

    def register_consumer(self, *args, **kwargs):
        pass

    def _handle_unified_stream_data(self, data):
        """Placeholder for unified stream data handling."""
        pass
```
**Issue**: Complete placeholder implementation.
### 5. **Enhanced Training System** (MEDIUM PRIORITY)
#### `web/clean_dashboard.py` - Line 3447
```python
logger.warning("Enhanced training system not available - using mock predictions")
```
**Issue**: Falls back to mock predictions when enhanced training is not available.
## Mock Data Generation (Found in Tests)
### Test Files with Mock Data
- `tests/test_tick_processor_simple.py` - Lines 51-84: Mock tick data generation
- `tests/test_tick_processor_final.py` - Lines 228-240: Mock tick features
- `tests/test_realtime_tick_processor.py` - Lines 234-243: Mock tick features
- `tests/test_realtime_rl_cob_trader.py` - Lines 161-169: Mock COB data
- `tests/test_nn_driven_trading.py` - Lines 39-65: Mock predictions
- `tests/test_model_persistence.py` - Lines 24-54: Mock agent class
## Impact Analysis
### High Impact Issues
1. **COB RL Training**: No actual training occurs, models don't learn from COB data
2. **CNN Training**: No actual training occurs, models don't learn from CNN features
3. **Model Loading**: Dynamic model management doesn't work
### Medium Impact Issues
1. **Enhanced Training**: Falls back to mock predictions
2. **Universal Data Stream**: Disabled functionality
### Low Impact Issues
1. **Test Mock Data**: Only affects tests, not production
## Recommendations
### Immediate Actions (High Priority)
1. **Implement real COB RL training** in `_perform_real_cob_rl_training()` (see the sketch after this list)
2. **Fix CNN training** by implementing proper CNN model interface
3. **Implement dynamic model loading** in orchestrator
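For item 1, a minimal sketch of a real update step, assuming the COB RL model is a PyTorch policy network and using a REINFORCE-style loss; the network handle, optimizer, and feature shapes are all assumptions:
```python
import torch
import torch.nn as nn

def train_cob_rl_on_trade_outcome(net: nn.Module,
                                  optimizer: torch.optim.Optimizer,
                                  cob_features: torch.Tensor,
                                  action: int,
                                  reward: float) -> float:
    """One policy update from a single trade outcome.

    Sketch only: a real implementation would batch experiences and
    apply the project's actual reward shaping.
    """
    net.train()
    optimizer.zero_grad()
    logits = net(cob_features.unsqueeze(0))            # shape [1, n_actions]
    log_prob = torch.log_softmax(logits, dim=-1)[0, action]
    loss = -log_prob * reward                          # REINFORCE-style loss
    loss.backward()
    optimizer.step()
    return loss.item()
```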
### Medium Priority
1. **Implement enhanced training system** to avoid mock predictions
2. **Enable Universal Data Stream** if needed
### Low Priority
1. **Replace test mock data** with real data generation where possible
## Detection Methods
### Code Patterns to Watch For
1. Functions that return `True` but do nothing
2. Functions with "placeholder" or "mock" in comments
3. Functions that only log debug messages
4. Functions that access non-existent attributes/methods
5. Functions that create empty dictionaries as placeholders
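Patterns 1-3 can be automated with a small AST scan; the following is a sketch, not an existing project tool:
```python
import ast
import sys

def _is_log_or_pass(stmt: ast.stmt) -> bool:
    """True for `pass` or a bare logger.<level>(...) call."""
    if isinstance(stmt, ast.Pass):
        return True
    if isinstance(stmt, ast.Expr) and isinstance(stmt.value, ast.Call):
        func = stmt.value.func
        return isinstance(func, ast.Attribute) and getattr(func.value, "id", "") == "logger"
    return False

def _is_return_true(stmt: ast.stmt) -> bool:
    return (isinstance(stmt, ast.Return)
            and isinstance(stmt.value, ast.Constant)
            and stmt.value.value is True)

def find_placeholder_functions(source: str) -> list:
    """Flag functions whose body is only a docstring, logging, and `return True`."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        body = node.body
        if body and isinstance(body[0], ast.Expr) and isinstance(body[0].value, ast.Constant):
            body = body[1:]  # skip a leading docstring
        if body and all(_is_log_or_pass(s) for s in body[:-1]) and _is_return_true(body[-1]):
            hits.append((node.name, node.lineno))
    return hits

if __name__ == "__main__":
    for name, line in find_placeholder_functions(open(sys.argv[1]).read()):
        print(f"{sys.argv[1]}:{line}: possible placeholder: {name}")
```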
### Testing Strategies
1. **Unit tests** that verify actual functionality, not just return values
2. **Integration tests** that verify training actually occurs
3. **Monitoring** of model performance to detect when training isn't working
4. **Log analysis** to identify placeholder function calls
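For strategies 1 and 2, a test can assert that parameters actually change after a training call rather than trusting the return value; `make_model`, `train_step`, and `sample_batch` are hypothetical stand-ins for the project's real interfaces:
```python
import copy
import torch

def test_training_actually_updates_weights():
    """A train call that returns True but changes nothing should fail here."""
    model = make_model()                        # hypothetical model factory
    before = copy.deepcopy(model.state_dict())

    result = train_step(model, sample_batch())  # hypothetical training call
    assert result is True

    after = model.state_dict()
    changed = any(not torch.equal(before[k], after[k]) for k in before)
    assert changed, "train_step returned True but no parameters changed"
```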
## Prevention
### Development Guidelines
1. **Never return `True`** from training functions without actual training
2. **Always implement** core functionality before marking as complete
3. **Use proper interfaces** for model training
4. **Add TODO comments** for incomplete implementations
5. **Test with real data** instead of mock data in production code
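For guideline 1, an unimplemented training path should fail loudly instead of reporting success; a sketch of the pattern:
```python
def _train_cob_rl_on_trade_outcome(self, trade_record, reward) -> bool:
    """Train COB RL on a trade outcome."""
    # Failing loudly beats silently returning True: callers and tests
    # immediately see that no training happened.
    raise NotImplementedError(
        "COB RL training not implemented yet - see audit report"
    )
```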
### Code Review Checklist
- [ ] Training functions actually perform training
- [ ] Model interfaces are properly implemented
- [ ] No placeholder return values in critical functions
- [ ] Mock data only used in tests, not production
- [ ] All TODO/FIXME items are tracked and prioritized