t perd viz - wip

ANNOTATE/LIVE_UPDATES_IMPLEMENTATION.md (new file, 195 lines)

@@ -0,0 +1,195 @@

# Live Updates Implementation Summary

## Overview

Added real-time chart updates and model prediction visualization to the ANNOTATE dashboard, matching the functionality of the clean_dashboard.

## What Was Implemented

### 1. Live Chart Updates (1s and 1m charts)

**Backend: `/api/live-updates` endpoint** (`ANNOTATE/web/app.py`)
- Polls for latest candle data from orchestrator's data provider
- Returns latest OHLCV data for requested symbol/timeframe
- Provides model predictions (DQN, CNN, Transformer)

**Frontend: Polling System** (`ANNOTATE/web/static/js/live_updates_polling.js`)
- Polls `/api/live-updates` every 2 seconds
- Subscribes to active timeframes (1s, 1m, etc.)
- Automatically updates charts with new candles

**Chart Updates** (`ANNOTATE/web/static/js/chart_manager.js`)
- `updateLatestCandle()` - Updates or extends chart with new candle
- Efficiently uses Plotly's `extendTraces` for new candles
- Uses `restyle` for updating the current candle

### 2. Model Prediction Visualization

**Orchestrator Storage** (`core/orchestrator.py`)
- Added `recent_transformer_predictions` tracking
- Added `store_transformer_prediction()` method
- Tracks last 50 predictions per symbol
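
A minimal sketch of how this storage can be structured is shown below. The `PredictionStore` class name and exact layout are illustrative; the real `TradingOrchestrator` keeps these deques alongside its other state.

```python
# Simplified sketch of the per-symbol prediction storage (assumed layout).
from collections import deque
from typing import Dict

class PredictionStore:
    def __init__(self):
        # One bounded deque per symbol; the oldest predictions expire automatically.
        self.recent_transformer_predictions: Dict[str, deque] = {}

    def store_transformer_prediction(self, symbol: str, prediction: dict) -> None:
        """Keep only the most recent 50 predictions for each symbol."""
        if symbol not in self.recent_transformer_predictions:
            self.recent_transformer_predictions[symbol] = deque(maxlen=50)
        self.recent_transformer_predictions[symbol].append(prediction)
```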

**Dashboard Visualization** (`web/clean_dashboard.py`)
- Added `_add_transformer_predictions_to_chart()` method
- Added `_get_recent_transformer_predictions()` helper
- Transformer predictions shown as:
  - Cyan lines/stars for UP predictions
  - Orange lines/stars for DOWN predictions
  - Light blue lines/stars for STABLE predictions

**ANNOTATE Visualization** (`ANNOTATE/web/static/js/chart_manager.js`)
- Added `updatePredictions()` method
- Added `_addDQNPrediction()` - Shows arrows (▲/▼)
- Added `_addCNNPrediction()` - Shows trend lines with diamonds (◆)
- Added `_addTransformerPrediction()` - Shows trend lines with stars (★)

## How It Works

### Live Chart Updates Flow

```
1. JavaScript polls /api/live-updates every 2s
2. Backend fetches latest candle from data_provider
3. Backend returns candle + predictions
4. Frontend updates chart with new/updated candle
5. Frontend visualizes model predictions
```
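
For reference, here is a sketch of one polling round as seen from a simple Python client. The request body and the response keys (`success`, `chart_update`, `prediction`, and the OHLCV fields of `candle`) match the Flask handler; the client code itself and the port number are illustrative assumptions.

```python
# Illustrative client-side polling round (assumes the ANNOTATE server runs on localhost:8051).
import requests

resp = requests.post(
    "http://localhost:8051/api/live-updates",
    json={"symbol": "ETH/USDT", "timeframe": "1m"},
    timeout=5,
)
data = resp.json()
if data.get("success"):
    if data.get("chart_update"):
        # candle keys: timestamp, open, high, low, close, volume
        candle = data["chart_update"]["candle"]
        print("latest close:", candle["close"])
    if data.get("prediction"):
        # At most one latest prediction per model: 'dqn', 'cnn', 'transformer'
        for model, pred in data["prediction"].items():
            print(model, pred.get("action"), pred.get("confidence"))
```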

### Prediction Visualization Flow

```
1. Model makes prediction
2. Call orchestrator.store_transformer_prediction(symbol, prediction_data)
3. Prediction stored in recent_transformer_predictions deque
4. Dashboard/ANNOTATE polls and retrieves predictions
5. Predictions rendered as shapes/annotations on charts
```

## Usage

### Storing Transformer Predictions

```python
orchestrator.store_transformer_prediction('ETH/USDT', {
    'timestamp': datetime.now(),
    'confidence': 0.75,
    'current_price': 3500.0,
    'predicted_price': 3550.0,
    'price_change': 1.43,  # percentage
    'horizon_minutes': 10
})
```
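
Reading a stored prediction back follows the same pattern the `/api/live-updates` handler uses; a short illustrative snippet:

```python
# Fetch the most recent transformer prediction for a symbol, if one exists.
preds = getattr(orchestrator, 'recent_transformer_predictions', {}).get('ETH/USDT')
if preds:
    latest = list(preds)[-1]
    print(latest['confidence'], latest['predicted_price'])
```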

### Enabling Live Updates

Live updates start automatically when:
- ANNOTATE dashboard loads
- Charts are initialized
- appState.currentTimeframes is set

### Prediction Display

Predictions appear on the 1m chart as:
- **DQN**: Green/red arrows for BUY/SELL
- **CNN**: Dotted trend lines with diamond markers
- **Transformer**: Dash-dot lines with star markers
- Opacity/size scales with confidence
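
The confidence scaling is a simple linear ramp; in Python terms, mirroring the formulas used for transformer markers in `chart_manager.js`:

```python
def transformer_marker_style(confidence: float) -> dict:
    """Mirror of the confidence scaling applied to transformer markers in chart_manager.js."""
    return {
        'line_width': 2 + confidence * 2,    # thicker trend line at higher confidence
        'star_size': 14 + confidence * 6,    # larger star marker at higher confidence
        'opacity': 0.6 + confidence * 0.4,   # more opaque at higher confidence
    }
```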

## Backtest Prediction Visualization

### Implemented

1. **Automatic prediction clearing** - Predictions are cleared when backtest starts
2. **Prediction storage during backtest** - All model predictions stored in orchestrator
3. **Model type detection** - Automatically detects Transformer/CNN/DQN models
4. **Real-time visualization** - Predictions appear on charts as backtest runs

## Training Prediction Visualization

### Implemented

1. **Automatic prediction clearing** - Predictions cleared when training starts
2. **Prediction storage during training** - Model predictions stored every 10 batches on first epoch
3. **Metadata tracking** - Current price and timestamp stored with each batch
4. **Real-time visualization** - Training predictions appear on charts via polling

### How It Works

**Backtest:**

```python
# When backtest starts:
orchestrator.clear_predictions(symbol)

# During backtest, for each prediction:
if 'transformer' in model_type:
    orchestrator.store_transformer_prediction(symbol, prediction_data)
elif 'cnn' in model_type:
    orchestrator.recent_cnn_predictions[symbol].append(prediction_data)
elif 'dqn' in model_type:
    orchestrator.recent_dqn_predictions[symbol].append(prediction_data)

# Predictions automatically appear via /api/live-updates polling
```

**Training:**

```python
# When training starts:
orchestrator.clear_predictions(symbol)

# During training (every 10 batches on first epoch):
with torch.no_grad():
    trainer.model.eval()
    outputs = trainer.model(**batch)
    # Extract prediction and store
    orchestrator.store_transformer_prediction(symbol, {
        'timestamp': batch['metadata']['timestamp'],
        'current_price': batch['metadata']['current_price'],
        'action': predicted_action,
        'confidence': confidence,
        'source': 'training'
    })

# Predictions automatically appear via /api/live-updates polling
```

## What's Still Missing

### Not Yet Implemented

1. **Historical prediction replay** - Can't replay predictions from past sessions
2. **Prediction accuracy overlay** - No visual feedback on prediction outcomes
3. **Multi-symbol predictions** - Only primary symbol predictions are tracked
4. **Prediction persistence** - Predictions are lost when app restarts

## Files Modified

### Backend

- `core/orchestrator.py` - Added transformer prediction tracking
- `web/clean_dashboard.py` - Added transformer visualization
- `ANNOTATE/web/app.py` - Added `/api/live-updates` endpoint

### Frontend

- `ANNOTATE/web/static/js/chart_manager.js` - Added prediction visualization methods
- `ANNOTATE/web/static/js/live_updates_polling.js` - Updated to call prediction visualization

## Testing

### Verify Live Updates

1. Open ANNOTATE dashboard
2. Check browser console for "Started polling for live updates"
3. Watch 1s/1m charts update every 2 seconds
4. Check network tab for `/api/live-updates` requests

### Verify Predictions

1. Ensure orchestrator is running
2. Make predictions using models
3. Call `orchestrator.store_transformer_prediction()`
4. Check charts for prediction markers
5. Verify predictions appear in `/api/live-updates` response
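
The prediction check can also be scripted end to end. This is illustrative only: it assumes direct access to the same orchestrator instance the server uses, and the port number is a placeholder.

```python
# Store a synthetic prediction, then confirm the polling endpoint returns it.
from datetime import datetime
import requests

orchestrator.store_transformer_prediction('ETH/USDT', {
    'timestamp': datetime.now(),
    'confidence': 0.9,
    'current_price': 3500.0,
    'predicted_price': 3540.0,
    'price_change': 1.14,      # percentage
    'horizon_minutes': 10
})

resp = requests.post('http://localhost:8051/api/live-updates',   # port is a placeholder
                     json={'symbol': 'ETH/USDT', 'timeframe': '1m'},
                     timeout=5).json()
assert resp['success'] and (resp.get('prediction') or {}).get('transformer')
```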

## Performance

- **Polling frequency**: 2 seconds (configurable)
- **Prediction storage**: 50 predictions per symbol (deque with maxlen)
- **Chart updates**: Efficient Plotly operations (extendTraces/restyle)
- **Memory usage**: Minimal (predictions auto-expire from deque)
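
The auto-expiry comes for free from Python's bounded `collections.deque`; a tiny example of the behavior:

```python
from collections import deque

preds = deque(maxlen=50)
for i in range(75):
    preds.append({'id': i})

print(len(preds))       # 50 -- only the newest 50 entries are kept
print(preds[0]['id'])   # 25 -- the oldest 25 were dropped automatically
```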

## Next Steps

1. **Add backtest prediction storage** - Store predictions during backtest runs
2. **Add prediction outcome tracking** - Track if predictions were accurate
3. **Add prediction accuracy visualization** - Show success/failure markers
4. **Add prediction filtering** - Filter by model, confidence, timeframe
5. **Add prediction export** - Export predictions for analysis

@@ -113,6 +113,8 @@ class TrainingSession:
     final_loss: Optional[float] = None
     accuracy: Optional[float] = None
     error: Optional[str] = None
+    gpu_utilization: Optional[float] = None  # GPU utilization percentage
+    cpu_utilization: Optional[float] = None  # CPU utilization percentage
 
 
 class RealTrainingAdapter:

@@ -240,6 +242,13 @@ class RealTrainingAdapter:
         logger.info(f"  Training ID: {training_id}")
         logger.info(f"  Test cases: {len(test_cases)}")
 
+        # Clear previous predictions for clean visualization
+        # Get symbol from first test case
+        symbol = test_cases[0].get('symbol', 'ETH/USDT') if test_cases else 'ETH/USDT'
+        if self.orchestrator and hasattr(self.orchestrator, 'clear_predictions'):
+            self.orchestrator.clear_predictions(symbol)
+            logger.info(f"  Cleared previous predictions for {symbol}")
+
         # Prepare training data from test cases
         training_data = self._prepare_training_data(test_cases)

@@ -1301,6 +1310,7 @@ class RealTrainingAdapter:
         """
         import torch
         import numpy as np
+        from datetime import datetime
 
         try:
             market_state = training_sample.get('market_state', {})

@@ -1522,7 +1532,6 @@ class RealTrainingAdapter:
             time_in_position_minutes = 0.0
             if in_position:
                 try:
-                    from datetime import datetime
                     entry_timestamp = training_sample.get('timestamp')
                     current_timestamp = training_sample.get('timestamp')
 

@@ -1645,7 +1654,14 @@ class RealTrainingAdapter:
             'norm_params': norm_params_dict,  # Dict with keys: '1s', '1m', '1h', '1d', 'btc'
 
             # Legacy support (use 1m as default)
-            'price_data': price_data_1m if price_data_1m is not None else ref_data
+            'price_data': price_data_1m if price_data_1m is not None else ref_data,
+
+            # Metadata for prediction visualization
+            'metadata': {
+                'current_price': float(current_price),
+                'timestamp': training_sample.get('timestamp', datetime.now()),
+                'symbol': training_sample.get('symbol', 'ETH/USDT')
+            }
         }
 
         return batch

@@ -1962,6 +1978,13 @@ class RealTrainingAdapter:
             # Generate batches fresh for each epoch
             for i, batch in enumerate(batch_generator()):
                 try:
+                    # Store prediction before training (for visualization)
+                    # Only store predictions on first epoch and every 10th batch to avoid clutter
+                    if epoch == 0 and i % 10 == 0 and self.orchestrator:
+                        # Get symbol from batch metadata or use default
+                        symbol = batch.get('metadata', {}).get('symbol', 'ETH/USDT')
+                        self._store_training_prediction(batch, trainer, symbol)
+
                     # Call the trainer's train_step method with mini-batch
                     # Batch is already on GPU and contains multiple samples
                     result = trainer.train_step(batch, accumulate_gradients=False)

@@ -2252,6 +2275,28 @@ class RealTrainingAdapter:
 
         session = self.training_sessions[training_id]
 
+        # Get current GPU/CPU utilization
+        gpu_util = None
+        cpu_util = None
+
+        try:
+            from utils.gpu_monitor import get_gpu_monitor
+            gpu_monitor = get_gpu_monitor()
+            gpu_metrics = gpu_monitor.get_gpu_utilization()
+            if gpu_metrics:
+                gpu_util = gpu_metrics.get('gpu_utilization_percent')
+                if gpu_util is None and gpu_metrics.get('memory_usage_percent'):
+                    # Fallback to memory usage as proxy
+                    gpu_util = gpu_metrics.get('memory_usage_percent')
+        except Exception as e:
+            logger.debug(f"Could not get GPU metrics: {e}")
+
+        try:
+            import psutil
+            cpu_util = psutil.cpu_percent(interval=0.1)
+        except Exception as e:
+            logger.debug(f"Could not get CPU metrics: {e}")
+
         return {
             'status': session.status,
             'model_name': session.model_name,

@@ -2262,7 +2307,9 @@ class RealTrainingAdapter:
             'final_loss': session.final_loss,
             'accuracy': session.accuracy,
             'duration_seconds': session.duration_seconds,
-            'error': session.error
+            'error': session.error,
+            'gpu_utilization': gpu_util,
+            'cpu_utilization': cpu_util
         }
 
     def get_active_training_session(self) -> Optional[Dict]:

@@ -2415,6 +2462,59 @@ class RealTrainingAdapter:
         all_signals.sort(key=lambda x: x.get('timestamp', ''), reverse=True)
         return all_signals[:limit]
 
+    def _store_training_prediction(self, batch: Dict, trainer, symbol: str):
+        """Store a prediction from training batch for visualization"""
+        try:
+            import torch
+
+            # Make prediction on the batch (without training)
+            with torch.no_grad():
+                trainer.model.eval()
+
+                # Get prediction from model
+                outputs = trainer.model(
+                    price_data_1s=batch.get('price_data_1s'),
+                    price_data_1m=batch.get('price_data_1m'),
+                    price_data_1h=batch.get('price_data_1h'),
+                    price_data_1d=batch.get('price_data_1d'),
+                    tech_data=batch.get('tech_data'),
+                    market_data=batch.get('market_data')
+                )
+
+                trainer.model.train()
+
+                # Extract action prediction
+                action_probs = outputs.get('action_probs')
+                if action_probs is not None:
+                    action_idx = torch.argmax(action_probs, dim=-1).item()
+                    confidence = action_probs[0, action_idx].item()
+
+                    # Map to BUY/SELL/HOLD
+                    actions = ['BUY', 'SELL', 'HOLD']
+                    action = actions[action_idx] if action_idx < len(actions) else 'HOLD'
+
+                    # Get current price from batch metadata
+                    current_price = batch.get('metadata', {}).get('current_price', 0)
+                    timestamp = batch.get('metadata', {}).get('timestamp', datetime.now())
+
+                    if current_price > 0:
+                        # Store in orchestrator
+                        if hasattr(self.orchestrator, 'store_transformer_prediction'):
+                            self.orchestrator.store_transformer_prediction(symbol, {
+                                'timestamp': timestamp,
+                                'current_price': current_price,
+                                'predicted_price': current_price * (1.01 if action == 'BUY' else 0.99),
+                                'price_change': 1.0 if action == 'BUY' else -1.0,
+                                'confidence': confidence,
+                                'action': action,
+                                'horizon_minutes': 10,
+                                'source': 'training'
+                            })
+                            logger.debug(f"Stored training prediction: {action} @ {current_price} (conf: {confidence:.2f})")
+
+        except Exception as e:
+            logger.debug(f"Error storing training prediction: {e}")
+
     def _realtime_inference_loop(self, inference_id: str, model_name: str, symbol: str, data_provider):
         """
         Real-time inference loop using orchestrator's REAL prediction methods

@@ -107,7 +107,7 @@ class BacktestRunner:
         self.lock = threading.Lock()
 
     def start_backtest(self, backtest_id: str, model, data_provider, symbol: str, timeframe: str,
-                       start_time: Optional[str] = None, end_time: Optional[str] = None):
+                       orchestrator=None, start_time: Optional[str] = None, end_time: Optional[str] = None):
         """Start backtest in background thread"""
 
         # Initialize backtest state

@@ -122,22 +122,34 @@ class BacktestRunner:
             'new_predictions': [],
             'position': None,  # {'type': 'long/short', 'entry_price': float, 'entry_time': str}
             'error': None,
-            'stop_requested': False
+            'stop_requested': False,
+            'orchestrator': orchestrator,
+            'symbol': symbol
         }
 
+        # Clear previous predictions from orchestrator
+        if orchestrator and hasattr(orchestrator, 'recent_transformer_predictions'):
+            if symbol in orchestrator.recent_transformer_predictions:
+                orchestrator.recent_transformer_predictions[symbol].clear()
+            if symbol in orchestrator.recent_cnn_predictions:
+                orchestrator.recent_cnn_predictions[symbol].clear()
+            if symbol in orchestrator.recent_dqn_predictions:
+                orchestrator.recent_dqn_predictions[symbol].clear()
+            logger.info(f"Cleared previous predictions for backtest on {symbol}")
+
         with self.lock:
             self.active_backtests[backtest_id] = state
 
         # Run backtest in background thread
         thread = threading.Thread(
             target=self._run_backtest,
-            args=(backtest_id, model, data_provider, symbol, timeframe, start_time, end_time)
+            args=(backtest_id, model, data_provider, symbol, timeframe, orchestrator, start_time, end_time)
         )
         thread.daemon = True
         thread.start()
 
     def _run_backtest(self, backtest_id: str, model, data_provider, symbol: str, timeframe: str,
-                      start_time: Optional[str] = None, end_time: Optional[str] = None):
+                      orchestrator=None, start_time: Optional[str] = None, end_time: Optional[str] = None):
         """Execute backtest candle-by-candle"""
         try:
             state = self.active_backtests[backtest_id]

@@ -203,10 +215,51 @@ class BacktestRunner:
                         'price': current_price,
                         'action': prediction['action'],
                         'confidence': prediction['confidence'],
-                        'timeframe': timeframe
+                        'timeframe': timeframe,
+                        'current_price': current_price
                     }
                     state['new_predictions'].append(pred_data)
 
+                    # Store in orchestrator for visualization
+                    if orchestrator and hasattr(orchestrator, 'store_transformer_prediction'):
+                        # Determine model type from model class name
+                        model_type = model.__class__.__name__.lower()
+
+                        # Store in appropriate prediction collection
+                        if 'transformer' in model_type:
+                            orchestrator.store_transformer_prediction(symbol, {
+                                'timestamp': current_time,
+                                'current_price': current_price,
+                                'predicted_price': current_price * (1.01 if prediction['action'] == 'BUY' else 0.99),
+                                'price_change': 1.0 if prediction['action'] == 'BUY' else -1.0,
+                                'confidence': prediction['confidence'],
+                                'action': prediction['action'],
+                                'horizon_minutes': 10
+                            })
+                        elif 'cnn' in model_type:
+                            if hasattr(orchestrator, 'recent_cnn_predictions'):
+                                if symbol not in orchestrator.recent_cnn_predictions:
+                                    from collections import deque
+                                    orchestrator.recent_cnn_predictions[symbol] = deque(maxlen=50)
+                                orchestrator.recent_cnn_predictions[symbol].append({
+                                    'timestamp': current_time,
+                                    'current_price': current_price,
+                                    'predicted_price': current_price * (1.01 if prediction['action'] == 'BUY' else 0.99),
+                                    'confidence': prediction['confidence'],
+                                    'direction': 2 if prediction['action'] == 'BUY' else 0
+                                })
+                        elif 'dqn' in model_type or 'rl' in model_type:
+                            if hasattr(orchestrator, 'recent_dqn_predictions'):
+                                if symbol not in orchestrator.recent_dqn_predictions:
+                                    from collections import deque
+                                    orchestrator.recent_dqn_predictions[symbol] = deque(maxlen=100)
+                                orchestrator.recent_dqn_predictions[symbol].append({
+                                    'timestamp': current_time,
+                                    'current_price': current_price,
+                                    'action': prediction['action'],
+                                    'confidence': prediction['confidence']
+                                })
+
                     # Execute trade logic
                     self._execute_trade_logic(state, prediction, current_price, current_time)
 

@@ -1678,6 +1731,7 @@ class AnnotationDashboard:
                 data_provider=self.data_provider,
                 symbol=symbol,
                 timeframe=timeframe,
+                orchestrator=self.orchestrator,
                 start_time=start_time,
                 end_time=end_time
             )

@@ -2024,6 +2078,81 @@ class AnnotationDashboard:
                 }
             })
 
+        @self.server.route('/api/live-updates', methods=['POST'])
+        def get_live_updates():
+            """Get live chart and prediction updates (polling endpoint)"""
+            try:
+                data = request.get_json()
+                symbol = data.get('symbol', 'ETH/USDT')
+                timeframe = data.get('timeframe', '1m')
+
+                response = {
+                    'success': True,
+                    'chart_update': None,
+                    'prediction': None
+                }
+
+                # Get latest candle for the requested timeframe
+                if self.orchestrator and self.orchestrator.data_provider:
+                    try:
+                        # Get latest candle
+                        ohlcv_data = self.orchestrator.data_provider.get_ohlcv_data(symbol, timeframe, limit=1)
+                        if ohlcv_data and len(ohlcv_data) > 0:
+                            latest_candle = ohlcv_data[-1]
+                            response['chart_update'] = {
+                                'symbol': symbol,
+                                'timeframe': timeframe,
+                                'candle': {
+                                    'timestamp': latest_candle[0],
+                                    'open': float(latest_candle[1]),
+                                    'high': float(latest_candle[2]),
+                                    'low': float(latest_candle[3]),
+                                    'close': float(latest_candle[4]),
+                                    'volume': float(latest_candle[5])
+                                }
+                            }
+                    except Exception as e:
+                        logger.debug(f"Error getting latest candle: {e}")
+
+                # Get latest model predictions
+                if self.orchestrator:
+                    try:
+                        # Get latest predictions from orchestrator
+                        predictions = {}
+
+                        # DQN predictions
+                        if hasattr(self.orchestrator, 'recent_dqn_predictions') and symbol in self.orchestrator.recent_dqn_predictions:
+                            dqn_preds = list(self.orchestrator.recent_dqn_predictions[symbol])
+                            if dqn_preds:
+                                predictions['dqn'] = dqn_preds[-1]
+
+                        # CNN predictions
+                        if hasattr(self.orchestrator, 'recent_cnn_predictions') and symbol in self.orchestrator.recent_cnn_predictions:
+                            cnn_preds = list(self.orchestrator.recent_cnn_predictions[symbol])
+                            if cnn_preds:
+                                predictions['cnn'] = cnn_preds[-1]
+
+                        # Transformer predictions
+                        if hasattr(self.orchestrator, 'recent_transformer_predictions') and symbol in self.orchestrator.recent_transformer_predictions:
+                            transformer_preds = list(self.orchestrator.recent_transformer_predictions[symbol])
+                            if transformer_preds:
+                                predictions['transformer'] = transformer_preds[-1]
+
+                        if predictions:
+                            response['prediction'] = predictions
+
+                    except Exception as e:
+                        logger.debug(f"Error getting predictions: {e}")
+
+                return jsonify(response)
+
+            except Exception as e:
+                logger.error(f"Error in live updates: {e}")
+                return jsonify({
+                    'success': False,
+                    'error': str(e)
+                })
+
         @self.server.route('/api/realtime-inference/signals', methods=['GET'])
         def get_realtime_signals():
             """Get latest real-time inference signals"""

@@ -111,6 +111,80 @@ class ChartManager {
         }
     }
 
+    /**
+     * Update latest candle on chart (for live updates)
+     * Efficiently updates only the last candle or adds a new one
+     */
+    updateLatestCandle(symbol, timeframe, candle) {
+        try {
+            const plotId = `plot-${timeframe}`;
+            const plotElement = document.getElementById(plotId);
+
+            if (!plotElement) {
+                console.debug(`Chart ${plotId} not found for live update`);
+                return;
+            }
+
+            // Get current chart data
+            const chartData = Plotly.Plots.data(plotId);
+            if (!chartData || chartData.length < 2) {
+                console.debug(`Chart ${plotId} not initialized yet`);
+                return;
+            }
+
+            const candlestickTrace = chartData[0];
+            const volumeTrace = chartData[1];
+
+            // Parse timestamp
+            const candleTimestamp = new Date(candle.timestamp);
+
+            // Check if this is updating the last candle or adding a new one
+            const lastTimestamp = candlestickTrace.x[candlestickTrace.x.length - 1];
+            const isNewCandle = !lastTimestamp || new Date(lastTimestamp).getTime() < candleTimestamp.getTime();
+
+            if (isNewCandle) {
+                // Add new candle using extendTraces (most efficient)
+                Plotly.extendTraces(plotId, {
+                    x: [[candleTimestamp]],
+                    open: [[candle.open]],
+                    high: [[candle.high]],
+                    low: [[candle.low]],
+                    close: [[candle.close]]
+                }, [0]);
+
+                // Update volume color based on price direction
+                const volumeColor = candle.close >= candle.open ? '#10b981' : '#ef4444';
+                Plotly.extendTraces(plotId, {
+                    x: [[candleTimestamp]],
+                    y: [[candle.volume]],
+                    marker: { color: [[volumeColor]] }
+                }, [1]);
+            } else {
+                // Update last candle using restyle
+                const lastIndex = candlestickTrace.x.length - 1;
+                Plotly.restyle(plotId, {
+                    'x': [[...candlestickTrace.x.slice(0, lastIndex), candleTimestamp]],
+                    'open': [[...candlestickTrace.open.slice(0, lastIndex), candle.open]],
+                    'high': [[...candlestickTrace.high.slice(0, lastIndex), candle.high]],
+                    'low': [[...candlestickTrace.low.slice(0, lastIndex), candle.low]],
+                    'close': [[...candlestickTrace.close.slice(0, lastIndex), candle.close]]
+                }, [0]);
+
+                // Update volume
+                const volumeColor = candle.close >= candle.open ? '#10b981' : '#ef4444';
+                Plotly.restyle(plotId, {
+                    'x': [[...volumeTrace.x.slice(0, lastIndex), candleTimestamp]],
+                    'y': [[...volumeTrace.y.slice(0, lastIndex), candle.volume]],
+                    'marker.color': [[...volumeTrace.marker.color.slice(0, lastIndex), volumeColor]]
+                }, [1]);
+            }
+
+            console.debug(`Updated ${timeframe} chart with new candle at ${candleTimestamp.toISOString()}`);
+        } catch (error) {
+            console.error(`Error updating latest candle for ${timeframe}:`, error);
+        }
+    }
+
     /**
      * Initialize charts for all timeframes with pivot bounds
      */

@@ -1640,4 +1714,172 @@ class ChartManager {
             loadingDiv.remove();
         }
     }
 
+    /**
+     * Update model predictions on charts
+     */
+    updatePredictions(predictions) {
+        if (!predictions) return;
+
+        try {
+            // Update predictions on 1m chart (primary timeframe for predictions)
+            const timeframe = '1m';
+            const chart = this.charts[timeframe];
+            if (!chart) return;
+
+            const plotId = chart.plotId;
+            const plotElement = document.getElementById(plotId);
+            if (!plotElement) return;
+
+            // Get current chart data
+            const chartData = plotElement.data;
+            if (!chartData || chartData.length < 2) return;
+
+            // Prepare prediction markers
+            const predictionShapes = [];
+            const predictionAnnotations = [];
+
+            // Add DQN predictions (arrows)
+            if (predictions.dqn) {
+                this._addDQNPrediction(predictions.dqn, predictionShapes, predictionAnnotations);
+            }
+
+            // Add CNN predictions (trend lines)
+            if (predictions.cnn) {
+                this._addCNNPrediction(predictions.cnn, predictionShapes, predictionAnnotations);
+            }
+
+            // Add Transformer predictions (star markers with trend lines)
+            if (predictions.transformer) {
+                this._addTransformerPrediction(predictions.transformer, predictionShapes, predictionAnnotations);
+            }
+
+            // Update chart layout with predictions
+            if (predictionShapes.length > 0 || predictionAnnotations.length > 0) {
+                Plotly.relayout(plotId, {
+                    shapes: [...(chart.layout.shapes || []), ...predictionShapes],
+                    annotations: [...(chart.layout.annotations || []), ...predictionAnnotations]
+                });
+            }
+        } catch (error) {
+            console.debug('Error updating predictions:', error);
+        }
+    }
+
+    _addDQNPrediction(prediction, shapes, annotations) {
+        const timestamp = new Date(prediction.timestamp || Date.now());
+        const price = prediction.current_price || 0;
+        const action = prediction.action || 'HOLD';
+        const confidence = prediction.confidence || 0;
+
+        if (action === 'HOLD' || confidence < 0.4) return;
+
+        // Add arrow annotation
+        annotations.push({
+            x: timestamp,
+            y: price,
+            text: action === 'BUY' ? '▲' : '▼',
+            showarrow: false,
+            font: {
+                size: 16,
+                color: action === 'BUY' ? '#10b981' : '#ef4444'
+            },
+            opacity: 0.5 + confidence * 0.5
+        });
+    }
+
+    _addCNNPrediction(prediction, shapes, annotations) {
+        const timestamp = new Date(prediction.timestamp || Date.now());
+        const currentPrice = prediction.current_price || 0;
+        const predictedPrice = prediction.predicted_price || currentPrice;
+        const confidence = prediction.confidence || 0;
+
+        if (confidence < 0.4 || currentPrice === 0) return;
+
+        // Calculate end time (5 minutes ahead)
+        const endTime = new Date(timestamp.getTime() + 5 * 60 * 1000);
+
+        // Determine color based on direction
+        const isUp = predictedPrice > currentPrice;
+        const color = isUp ? 'rgba(0, 255, 0, 0.5)' : 'rgba(255, 0, 0, 0.5)';
+
+        // Add trend line
+        shapes.push({
+            type: 'line',
+            x0: timestamp,
+            y0: currentPrice,
+            x1: endTime,
+            y1: predictedPrice,
+            line: {
+                color: color,
+                width: 2,
+                dash: 'dot'
+            }
+        });
+
+        // Add target marker
+        annotations.push({
+            x: endTime,
+            y: predictedPrice,
+            text: '◆',
+            showarrow: false,
+            font: {
+                size: 12,
+                color: isUp ? '#10b981' : '#ef4444'
+            },
+            opacity: 0.5 + confidence * 0.5
+        });
+    }
+
+    _addTransformerPrediction(prediction, shapes, annotations) {
+        const timestamp = new Date(prediction.timestamp || Date.now());
+        const currentPrice = prediction.current_price || 0;
+        const predictedPrice = prediction.predicted_price || currentPrice;
+        const confidence = prediction.confidence || 0;
+        const priceChange = prediction.price_change || 0;
+        const horizonMinutes = prediction.horizon_minutes || 10;
+
+        if (confidence < 0.3 || currentPrice === 0) return;
+
+        // Calculate end time
+        const endTime = new Date(timestamp.getTime() + horizonMinutes * 60 * 1000);
+
+        // Determine color based on price change
+        let color;
+        if (priceChange > 0.5) {
+            color = 'rgba(0, 200, 255, 0.6)';   // Cyan for UP
+        } else if (priceChange < -0.5) {
+            color = 'rgba(255, 100, 0, 0.6)';   // Orange for DOWN
+        } else {
+            color = 'rgba(150, 150, 255, 0.5)'; // Light blue for STABLE
+        }
+
+        // Add trend line
+        shapes.push({
+            type: 'line',
+            x0: timestamp,
+            y0: currentPrice,
+            x1: endTime,
+            y1: predictedPrice,
+            line: {
+                color: color,
+                width: 2 + confidence * 2,
+                dash: 'dashdot'
+            }
+        });
+
+        // Add star marker at target
+        annotations.push({
+            x: endTime,
+            y: predictedPrice,
+            text: '★',
+            showarrow: false,
+            font: {
+                size: 14 + confidence * 6,
+                color: color
+            },
+            opacity: 0.6 + confidence * 0.4
+        });
+    }
 }

ANNOTATE/web/static/js/live_updates_polling.js (new file, 222 lines)

@@ -0,0 +1,222 @@
/**
 * Polling-based Live Updates for ANNOTATE
 * Replaces WebSocket with simple polling (like clean_dashboard)
 */

class LiveUpdatesPolling {
    constructor() {
        this.pollInterval = null;
        this.pollDelay = 2000; // Poll every 2 seconds (like clean_dashboard)
        this.subscriptions = new Set();
        this.isPolling = false;

        // Callbacks
        this.onChartUpdate = null;
        this.onPredictionUpdate = null;
        this.onConnectionChange = null;

        console.log('LiveUpdatesPolling initialized');
    }

    start() {
        if (this.isPolling) {
            console.log('Already polling');
            return;
        }

        this.isPolling = true;
        this._startPolling();

        if (this.onConnectionChange) {
            this.onConnectionChange(true);
        }

        console.log('Started polling for live updates');
    }

    stop() {
        if (this.pollInterval) {
            clearInterval(this.pollInterval);
            this.pollInterval = null;
        }
        this.isPolling = false;

        if (this.onConnectionChange) {
            this.onConnectionChange(false);
        }

        console.log('Stopped polling for live updates');
    }

    _startPolling() {
        // Poll immediately, then set interval
        this._poll();
        this.pollInterval = setInterval(() => {
            this._poll();
        }, this.pollDelay);
    }

    _poll() {
        // Poll each subscription
        this.subscriptions.forEach(sub => {
            fetch('/api/live-updates', {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
                body: JSON.stringify({
                    symbol: sub.symbol,
                    timeframe: sub.timeframe
                })
            })
            .then(response => response.json())
            .then(data => {
                if (data.success) {
                    // Handle chart update
                    if (data.chart_update && this.onChartUpdate) {
                        this.onChartUpdate(data.chart_update);
                    }

                    // Handle prediction update
                    if (data.prediction && this.onPredictionUpdate) {
                        this.onPredictionUpdate(data.prediction);
                    }
                }
            })
            .catch(error => {
                console.debug('Polling error:', error);
            });
        });
    }

    subscribe(symbol, timeframe) {
        const key = `${symbol}_${timeframe}`;
        this.subscriptions.add({ symbol, timeframe, key });

        // Auto-start polling if not already started
        if (!this.isPolling) {
            this.start();
        }

        console.log(`Subscribed to live updates: ${symbol} ${timeframe}`);
    }

    unsubscribe(symbol, timeframe) {
        const key = `${symbol}_${timeframe}`;
        this.subscriptions.forEach(sub => {
            if (sub.key === key) {
                this.subscriptions.delete(sub);
            }
        });

        // Stop polling if no subscriptions
        if (this.subscriptions.size === 0) {
            this.stop();
        }

        console.log(`Unsubscribed from live updates: ${symbol} ${timeframe}`);
    }

    isConnected() {
        return this.isPolling;
    }
}

// Global instance
window.liveUpdatesPolling = null;

// Initialize on page load
document.addEventListener('DOMContentLoaded', function() {
    // Initialize polling
    window.liveUpdatesPolling = new LiveUpdatesPolling();

    // Setup callbacks (same interface as WebSocket version)
    window.liveUpdatesPolling.onConnectionChange = function(connected) {
        const statusElement = document.getElementById('ws-connection-status');
        if (statusElement) {
            if (connected) {
                statusElement.innerHTML = '<span class="badge bg-success">Live</span>';
            } else {
                statusElement.innerHTML = '<span class="badge bg-secondary">Offline</span>';
            }
        }
    };

    window.liveUpdatesPolling.onChartUpdate = function(data) {
        // Update chart with new candle
        if (window.appState && window.appState.chartManager) {
            window.appState.chartManager.updateLatestCandle(data.symbol, data.timeframe, data.candle);
        }
    };

    window.liveUpdatesPolling.onPredictionUpdate = function(data) {
        // Update prediction visualization on charts
        if (window.appState && window.appState.chartManager) {
            window.appState.chartManager.updatePredictions(data);
        }

        // Update prediction display
        if (typeof updatePredictionDisplay === 'function') {
            updatePredictionDisplay(data);
        }

        // Add to prediction history
        if (typeof predictionHistory !== 'undefined') {
            predictionHistory.unshift(data);
            if (predictionHistory.length > 5) {
                predictionHistory = predictionHistory.slice(0, 5);
            }
            if (typeof updatePredictionHistory === 'function') {
                updatePredictionHistory();
            }
        }
    };

    // Function to subscribe to all active timeframes
    function subscribeToActiveTimeframes() {
        if (window.appState && window.appState.currentSymbol && window.appState.currentTimeframes) {
            const symbol = window.appState.currentSymbol;
            window.appState.currentTimeframes.forEach(timeframe => {
                window.liveUpdatesPolling.subscribe(symbol, timeframe);
            });
            console.log(`Subscribed to live updates for ${symbol}: ${window.appState.currentTimeframes.join(', ')}`);
        }
    }

    // Auto-start polling
    console.log('Auto-starting polling for live updates...');
    window.liveUpdatesPolling.start();

    // Wait for DOM and appState to be ready, then subscribe
    function initializeSubscriptions() {
        // Wait a bit for charts to initialize
        setTimeout(() => {
            subscribeToActiveTimeframes();
        }, 2000);
    }

    // Subscribe when DOM is ready
    if (document.readyState === 'loading') {
        document.addEventListener('DOMContentLoaded', initializeSubscriptions);
    } else {
        initializeSubscriptions();
    }

    // Also monitor for appState changes (fallback)
    let lastTimeframes = null;
    setInterval(() => {
        if (window.appState && window.appState.currentTimeframes && window.appState.chartManager) {
            const currentTimeframes = window.appState.currentTimeframes.join(',');
            if (currentTimeframes !== lastTimeframes) {
                lastTimeframes = currentTimeframes;
                subscribeToActiveTimeframes();
            }
        }
    }, 3000);
});

// Cleanup on page unload
window.addEventListener('beforeunload', function() {
    if (window.liveUpdatesPolling) {
        window.liveUpdatesPolling.stop();
    }
});

@@ -85,7 +85,7 @@
     <script src="{{ url_for('static', filename='js/annotation_manager.js') }}?v={{ range(1, 10000) | random }}"></script>
     <script src="{{ url_for('static', filename='js/time_navigator.js') }}?v={{ range(1, 10000) | random }}"></script>
     <script src="{{ url_for('static', filename='js/training_controller.js') }}?v={{ range(1, 10000) | random }}"></script>
-    <script src="{{ url_for('static', filename='js/live_updates_ws.js') }}?v={{ range(1, 10000) | random }}"></script>
+    <script src="{{ url_for('static', filename='js/live_updates_polling.js') }}?v={{ range(1, 10000) | random }}"></script>
 
     {% block extra_js %}{% endblock %}

@@ -42,6 +42,7 @@
     <div class="small">
         <div>Epoch: <span id="training-epoch">0</span>/<span id="training-total-epochs">0</span></div>
         <div>Loss: <span id="training-loss">--</span></div>
+        <div>GPU: <span id="training-gpu-util">--</span>% | CPU: <span id="training-cpu-util">--</span>%</div>
     </div>
 </div>
 </div>

@@ -446,6 +447,14 @@
     document.getElementById('training-epoch').textContent = progress.current_epoch;
     document.getElementById('training-total-epochs').textContent = progress.total_epochs;
     document.getElementById('training-loss').textContent = progress.current_loss.toFixed(4);
+
+    // Update GPU/CPU utilization
+    const gpuUtil = progress.gpu_utilization !== null && progress.gpu_utilization !== undefined
+        ? progress.gpu_utilization.toFixed(1) : '--';
+    const cpuUtil = progress.cpu_utilization !== null && progress.cpu_utilization !== undefined
+        ? progress.cpu_utilization.toFixed(1) : '--';
+    document.getElementById('training-gpu-util').textContent = gpuUtil;
+    document.getElementById('training-cpu-util').textContent = cpuUtil;
 
     // Check if complete
     if (progress.status === 'completed') {
|||||||
@@ -2816,4 +2816,17 @@ class TradingOrchestrator:
|
|||||||
logger.debug(f"Stored transformer prediction for {symbol}: {prediction.get('action', 'N/A')}")
|
logger.debug(f"Stored transformer prediction for {symbol}: {prediction.get('action', 'N/A')}")
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
logger.error(f"Error storing transformer prediction: {e}")
|
logger.error(f"Error storing transformer prediction: {e}")
|
||||||
|
|
||||||
|
def clear_predictions(self, symbol: str):
|
||||||
|
"""Clear all stored predictions for a symbol (useful for backtests)"""
|
||||||
|
try:
|
||||||
|
if symbol in self.recent_transformer_predictions:
|
||||||
|
self.recent_transformer_predictions[symbol].clear()
|
||||||
|
if symbol in self.recent_cnn_predictions:
|
||||||
|
self.recent_cnn_predictions[symbol].clear()
|
||||||
|
if symbol in self.recent_dqn_predictions:
|
||||||
|
self.recent_dqn_predictions[symbol].clear()
|
||||||
|
logger.info(f"Cleared all predictions for {symbol}")
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Error clearing predictions: {e}")
|
||||||
|
|
||||||
|