# Live Updates Implementation Summary

## Overview
Added real-time chart updates and model prediction visualization to the ANNOTATE dashboard, matching the functionality of the clean_dashboard.
## What Was Implemented

### 1. Live Chart Updates (1s and 1m charts)

**Backend: `/api/live-updates` endpoint** (`ANNOTATE/web/app.py`)
- Polls the orchestrator's data provider for the latest candle data
- Returns the latest OHLCV data for the requested symbol/timeframe
- Provides model predictions (DQN, CNN, Transformer); see the sketch below
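A minimal sketch of what such an endpoint might look like, assuming Flask and a hypothetical `get_latest_candle()` helper on the data provider; names and response shape are illustrative, not the actual implementation:

```python
# Illustrative sketch only -- not the actual ANNOTATE/web/app.py implementation.
# Assumes an `orchestrator` whose data provider exposes a get_latest_candle()
# helper (hypothetical name) and the per-model prediction deques described
# later in this summary.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/api/live-updates')
def live_updates():
    symbol = request.args.get('symbol', 'ETH/USDT')
    timeframe = request.args.get('timeframe', '1m')

    # Latest OHLCV candle from the orchestrator's data provider
    candle = orchestrator.data_provider.get_latest_candle(symbol, timeframe)

    # Recent model predictions tracked by the orchestrator
    predictions = {
        'transformer': list(orchestrator.recent_transformer_predictions.get(symbol, [])),
        'cnn': list(orchestrator.recent_cnn_predictions.get(symbol, [])),
        'dqn': list(orchestrator.recent_dqn_predictions.get(symbol, [])),
    }

    return jsonify({'symbol': symbol, 'timeframe': timeframe,
                    'candle': candle, 'predictions': predictions})
```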
**Frontend: Polling System** (`ANNOTATE/web/static/js/live_updates_polling.js`)

- Polls `/api/live-updates` every 2 seconds
- Subscribes to active timeframes (1s, 1m, etc.)
- Automatically updates charts with new candles
**Chart Updates** (`ANNOTATE/web/static/js/chart_manager.js`)

- `updateLatestCandle()` - Updates or extends the chart with a new candle
- Efficiently uses Plotly's `extendTraces` for new candles
- Uses `restyle` for updating the current candle
### 2. Model Prediction Visualization

**Orchestrator Storage** (`core/orchestrator.py`)

- Added `recent_transformer_predictions` tracking
- Added `store_transformer_prediction()` method
- Tracks the last 50 predictions per symbol (see the sketch below)
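A rough sketch of how that tracking might be wired inside the orchestrator; the attribute and method names follow the bullets above, the bodies are illustrative:

```python
from collections import defaultdict, deque

class Orchestrator:
    def __init__(self):
        # Keep only the last 50 transformer predictions per symbol
        self.recent_transformer_predictions = defaultdict(lambda: deque(maxlen=50))

    def store_transformer_prediction(self, symbol: str, prediction: dict) -> None:
        """Append a prediction dict; older entries expire automatically via maxlen."""
        self.recent_transformer_predictions[symbol].append(prediction)
```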
**Dashboard Visualization** (`web/clean_dashboard.py`)

- Added `_add_transformer_predictions_to_chart()` method
- Added `_get_recent_transformer_predictions()` helper
- Transformer predictions are shown as follows (see the sketch after this list):
  - Cyan lines/stars for UP predictions
  - Orange lines/stars for DOWN predictions
  - Light blue lines/stars for STABLE predictions
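For illustration, the color mapping above could be applied per prediction roughly like this (a sketch using `plotly.graph_objects`; the direction/threshold logic is assumed, not taken from the actual code):

```python
import plotly.graph_objects as go

def _add_transformer_predictions_to_chart(fig: go.Figure, predictions: list) -> None:
    """Sketch: draw one star marker per transformer prediction, colored by direction."""
    colors = {'UP': 'cyan', 'DOWN': 'orange', 'STABLE': 'lightblue'}
    for pred in predictions:
        # Assumed classification rule: a small price change counts as STABLE
        if abs(pred.get('price_change', 0.0)) < 0.1:
            direction = 'STABLE'
        elif pred['predicted_price'] > pred['current_price']:
            direction = 'UP'
        else:
            direction = 'DOWN'
        fig.add_trace(go.Scatter(
            x=[pred['timestamp']], y=[pred['predicted_price']],
            mode='markers',
            marker=dict(symbol='star', size=10, color=colors[direction],
                        opacity=pred.get('confidence', 0.5)),
            showlegend=False,
        ))
```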
**ANNOTATE Visualization** (`ANNOTATE/web/static/js/chart_manager.js`)

- Added `updatePredictions()` method
- Added `_addDQNPrediction()` - shows arrows (▲/▼)
- Added `_addCNNPrediction()` - shows trend lines with diamonds (◆)
- Added `_addTransformerPrediction()` - shows trend lines with stars (★)
## How It Works

### Live Chart Updates Flow

1. JavaScript polls `/api/live-updates` every 2 seconds
2. The backend fetches the latest candle from the data provider
3. The backend returns the candle plus predictions
4. The frontend updates the chart with the new/updated candle
5. The frontend visualizes model predictions
### Prediction Visualization Flow

1. A model makes a prediction
2. Call `orchestrator.store_transformer_prediction(symbol, prediction_data)`
3. The prediction is stored in the `recent_transformer_predictions` deque
4. The dashboard/ANNOTATE UI polls and retrieves the predictions (see the sketch below)
5. The predictions are rendered as shapes/annotations on the charts
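Step 4 can be as simple as snapshotting the deque before rendering; a minimal sketch of the helper, assuming the orchestrator attributes described above:

```python
def _get_recent_transformer_predictions(orchestrator, symbol: str) -> list:
    """Copy the deque so chart code can iterate safely while new predictions arrive."""
    return list(orchestrator.recent_transformer_predictions.get(symbol, []))
```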
## Usage

### Storing Transformer Predictions

```python
from datetime import datetime

orchestrator.store_transformer_prediction('ETH/USDT', {
    'timestamp': datetime.now(),
    'confidence': 0.75,
    'current_price': 3500.0,
    'predicted_price': 3550.0,
    'price_change': 1.43,  # percentage
    'horizon_minutes': 10
})
```
### Enabling Live Updates

Live updates start automatically when:
- The ANNOTATE dashboard loads
- Charts are initialized
- `appState.currentTimeframes` is set
### Prediction Display
Predictions appear on the 1m chart as:
- DQN: Green/red arrows for BUY/SELL
- CNN: Dotted trend lines with diamond markers
- Transformer: Dash-dot lines with star markers
- Opacity/size scales with confidence
## Backtest Prediction Visualization

### Implemented

- **Automatic prediction clearing** - Predictions are cleared when a backtest starts
- **Prediction storage during backtest** - All model predictions are stored in the orchestrator
- **Model type detection** - Automatically detects Transformer/CNN/DQN models
- **Real-time visualization** - Predictions appear on the charts as the backtest runs
## Training Prediction Visualization

### Implemented

- **Automatic prediction clearing** - Predictions are cleared when training starts
- **Prediction storage during training** - Model predictions are stored every 10 batches on the first epoch
- **Metadata tracking** - Current price and timestamp are stored with each batch
- **Real-time visualization** - Training predictions appear on the charts via polling
### How It Works

**Backtest:**

```python
# When the backtest starts:
orchestrator.clear_predictions(symbol)

# During the backtest, for each prediction:
if 'transformer' in model_type:
    orchestrator.store_transformer_prediction(symbol, prediction_data)
elif 'cnn' in model_type:
    orchestrator.recent_cnn_predictions[symbol].append(prediction_data)
elif 'dqn' in model_type:
    orchestrator.recent_dqn_predictions[symbol].append(prediction_data)

# Predictions automatically appear via /api/live-updates polling
```
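`clear_predictions()` itself is not shown in this summary; a plausible sketch, assuming one deque per model type on the orchestrator as described above:

```python
def clear_predictions(self, symbol: str) -> None:
    """Drop stored predictions for a symbol so a new backtest/training run starts clean."""
    for store in (self.recent_transformer_predictions,
                  self.recent_cnn_predictions,
                  self.recent_dqn_predictions):
        if symbol in store:
            store[symbol].clear()
```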
**Training:**

```python
# When training starts:
orchestrator.clear_predictions(symbol)

# During training (every 10 batches on the first epoch):
with torch.no_grad():
    trainer.model.eval()
    outputs = trainer.model(**batch)

    # Extract the prediction and store it
    orchestrator.store_transformer_prediction(symbol, {
        'timestamp': batch['metadata']['timestamp'],
        'current_price': batch['metadata']['current_price'],
        'action': predicted_action,
        'confidence': confidence,
        'source': 'training'
    })

# Predictions automatically appear via /api/live-updates polling
```
## What's Still Missing

### Not Yet Implemented

- **Historical prediction replay** - Predictions from past sessions cannot be replayed
- **Prediction accuracy overlay** - No visual feedback on prediction outcomes
- **Multi-symbol predictions** - Only the primary symbol's predictions are tracked
- **Prediction persistence** - Predictions are lost when the app restarts
## Files Modified

### Backend

- `core/orchestrator.py` - Added transformer prediction tracking
- `web/clean_dashboard.py` - Added transformer visualization
- `ANNOTATE/web/app.py` - Added the `/api/live-updates` endpoint

### Frontend

- `ANNOTATE/web/static/js/chart_manager.js` - Added prediction visualization methods
- `ANNOTATE/web/static/js/live_updates_polling.js` - Updated to call prediction visualization
## Testing

### Verify Live Updates

- Open the ANNOTATE dashboard
- Check the browser console for "Started polling for live updates"
- Watch the 1s/1m charts update every 2 seconds
- Check the network tab for `/api/live-updates` requests
### Verify Predictions

- Ensure the orchestrator is running
- Make predictions using the models
- Call `orchestrator.store_transformer_prediction()`
- Check the charts for prediction markers
- Verify that predictions appear in the `/api/live-updates` response (see the check below)
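As a quick scripted check (the host, port, and query parameters here are assumptions; adjust them to your local setup):

```python
import requests

resp = requests.get(
    'http://localhost:8051/api/live-updates',          # assumed host/port
    params={'symbol': 'ETH/USDT', 'timeframe': '1m'},  # assumed query params
    timeout=5,
)
data = resp.json()
print(data.get('candle'))        # latest OHLCV candle
print(data.get('predictions'))   # per-model prediction lists
```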
## Performance

- **Polling frequency**: 2 seconds (configurable)
- **Prediction storage**: 50 predictions per symbol (deque with `maxlen`)
- **Chart updates**: Efficient Plotly operations (`extendTraces`/`restyle`)
- **Memory usage**: Minimal (old predictions auto-expire from the deque)
## Next Steps

- **Add backtest prediction storage** - Store predictions during backtest runs
- **Add prediction outcome tracking** - Track whether predictions were accurate
- **Add prediction accuracy visualization** - Show success/failure markers
- **Add prediction filtering** - Filter by model, confidence, timeframe
- **Add prediction export** - Export predictions for analysis