# Universal Data Stream Architecture Audit & Optimization Plan

## 📊 UNIVERSAL DATA FORMAT SPECIFICATION

Our trading system is built around **5 core timeseries streams** that provide a standardized data format to all models:

### Core Timeseries (The Sacred 5)

1. **ETH/USDT Ticks (1s)** - Primary trading pair real-time data
2. **ETH/USDT 1m** - Short-term price action and patterns
3. **ETH/USDT 1h** - Medium-term trends and momentum
4. **ETH/USDT 1d** - Long-term market structure
5. **BTC/USDT Ticks (1s)** - Reference asset for correlation analysis

### Data Format Structure

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Dict

import numpy as np

@dataclass
class UniversalDataStream:
    eth_ticks: np.ndarray   # [timestamp, open, high, low, close, volume]
    eth_1m: np.ndarray      # [timestamp, open, high, low, close, volume]
    eth_1h: np.ndarray      # [timestamp, open, high, low, close, volume]
    eth_1d: np.ndarray      # [timestamp, open, high, low, close, volume]
    btc_ticks: np.ndarray   # [timestamp, open, high, low, close, volume]
    timestamp: datetime
    metadata: Dict[str, Any]
```
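
To make the contract concrete, here is a minimal construction example. The 300-row window size and the `metadata` keys are illustrative assumptions, not part of the specification:

```python
import numpy as np
from datetime import datetime, timezone

def empty_window(rows: int = 300) -> np.ndarray:
    """Placeholder OHLCV window: [timestamp, open, high, low, close, volume]."""
    return np.zeros((rows, 6), dtype=np.float32)

stream = UniversalDataStream(
    eth_ticks=empty_window(),
    eth_1m=empty_window(),
    eth_1h=empty_window(),
    eth_1d=empty_window(),
    btc_ticks=empty_window(),
    timestamp=datetime.now(timezone.utc),
    metadata={"source": "binance", "quality": 1.0},  # illustrative keys
)
```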

## 🏗️ CURRENT ARCHITECTURE COMPONENTS

### 1. Universal Data Adapter (`core/universal_data_adapter.py`)

- **Status**: ✅ Implemented
- **Purpose**: Converts any data source into the universal 5-timeseries format
- **Key Features**:
  - Format validation
  - Data quality assessment
  - Model-specific formatting (CNN, RL, Transformer) - see the sketch below
  - Window size management
  - Missing data handling
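
As a rough illustration of model-specific formatting (the real method names and window sizes live in `core/universal_data_adapter.py`; `format_for_cnn` and the 60-bar window here are assumptions), a CNN view might stack the five streams into one feature matrix:

```python
import numpy as np

def format_for_cnn(stream: UniversalDataStream, window: int = 60) -> np.ndarray:
    """Stack the five timeseries into one (window, 5 * 6) feature matrix.

    Assumes each array already holds at least `window` rows of
    [timestamp, open, high, low, close, volume].
    """
    parts = [
        stream.eth_ticks[-window:],
        stream.eth_1m[-window:],
        stream.eth_1h[-window:],
        stream.eth_1d[-window:],
        stream.btc_ticks[-window:],
    ]
    return np.concatenate(parts, axis=1).astype(np.float32)
```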

### 2. Unified Data Stream (`core/unified_data_stream.py`)

- **Status**: ✅ Implemented with subscriber architecture
- **Purpose**: Central data distribution hub
- **Key Features**:
  - Publisher-subscriber pattern (sketched below)
  - Consumer registration system
  - Multi-consumer data distribution
  - Performance tracking
  - Data caching and buffering
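
A minimal sketch of the publisher half of this pattern. This is not the actual `core/unified_data_stream.py` API; the class and method names are assumptions chosen to match the terminology above:

```python
import threading
from typing import Any, Callable, Dict, List

class UnifiedDataStreamSketch:
    """Consumers register a callback plus the data types they care about,
    and publish() fans each packet out to the matching callbacks."""

    def __init__(self):
        self._lock = threading.Lock()
        self._consumers: Dict[str, Dict[str, Any]] = {}

    def register_consumer(self, consumer_id: str,
                          callback: Callable[[Dict[str, Any]], None],
                          data_types: List[str]) -> None:
        with self._lock:
            self._consumers[consumer_id] = {
                "callback": callback,
                "data_types": set(data_types),
            }

    def publish(self, data_type: str, packet: Dict[str, Any]) -> None:
        with self._lock:
            targets = [c["callback"] for c in self._consumers.values()
                       if data_type in c["data_types"]]
        for callback in targets:
            callback(packet)  # sequential here; Phase 2 below parallelizes this
```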

### 3. Enhanced Orchestrator Integration

- **Status**: ✅ Implemented
- **Purpose**: Neural Decision Fusion using universal data
- **Key Features**:
  - NN-driven decision making
  - Model prediction fusion (illustrated below)
  - Market context preparation
  - Cross-asset correlation analysis
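
For intuition only, one plausible fusion rule is a confidence-weighted average of per-model direction scores. This is a hedged sketch, not the actual Neural Decision Fusion logic:

```python
from typing import Dict, Tuple

def fuse_predictions(predictions: Dict[str, Tuple[float, float]]) -> float:
    """Confidence-weighted average of per-model direction scores.

    `predictions` maps model name -> (direction in [-1, 1], confidence in [0, 1]).
    """
    weighted = sum(direction * confidence
                   for direction, confidence in predictions.values())
    total = sum(confidence for _, confidence in predictions.values())
    return weighted / total if total > 0 else 0.0

# A confident bullish CNN outweighs a hesitant bearish RL agent:
signal = fuse_predictions({"cnn": (0.4, 0.9), "rl": (-0.2, 0.5)})
```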

## 📈 DATA FLOW MAPPING

### Current Data Flow

```
Data Provider (Binance API)
            ↓
  Universal Data Adapter
            ↓
Unified Data Stream (Publisher)
            ↓
┌─────────────────┬─────────────────┬─────────────────┐
│    Dashboard    │  Orchestrator   │     Models      │
│   Subscriber    │   Subscriber    │   Subscriber    │
└─────────────────┴─────────────────┴─────────────────┘
```

### Registered Consumers

1. **Trading Dashboard** - UI data updates (`ticks`, `ohlcv`, `ui_data`)
2. **Enhanced Orchestrator** - NN decision making (`training_data`, `ohlcv`)
3. **CNN Models** - Pattern recognition (formatted CNN data)
4. **RL Models** - Action learning (state vectors)
5. **COB Integration** - Order book analysis (microstructure data)
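
Reusing the `UnifiedDataStreamSketch` from above, the dashboard registration might look like this. The packet keys and exact call shape are illustrative assumptions; only the consumer name and data-type strings come from the table:

```python
stream_hub = UnifiedDataStreamSketch()

def on_dashboard_update(packet):
    # Push the packet into the UI layer (details omitted).
    print(f"dashboard got {packet['type']} at {packet['timestamp']}")

stream_hub.register_consumer(
    consumer_id="trading_dashboard",
    callback=on_dashboard_update,
    data_types=["ticks", "ohlcv", "ui_data"],
)
```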

## 🔍 ARCHITECTURE AUDIT FINDINGS

### ✅ STRENGTHS

1. **Standardized Format**: All models receive a consistent data structure
2. **Publisher-Subscriber**: Efficient one-to-many data distribution
3. **Performance Tracking**: Built-in metrics and monitoring
4. **Multi-Timeframe**: Comprehensive temporal view
5. **Real-time Processing**: Live data with proper buffering

### ⚠️ OPTIMIZATION OPPORTUNITIES

#### 1. **Memory Efficiency**

- **Issue**: Multiple data copies across consumers
- **Impact**: High memory usage with many subscribers
- **Solution**: Implement shared memory buffers with copy-on-write (see Phase 1 below)

#### 2. **Processing Latency**

- **Issue**: Sequential processing in some callbacks
- **Impact**: Delays in real-time decision making
- **Solution**: Parallel consumer notification with thread pools (see Phase 2 below)

#### 3. **Data Staleness**

- **Issue**: No real-time freshness validation
- **Impact**: Models might use outdated data
- **Solution**: Timestamp-based data validity checks, as sketched below
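
A minimal sketch of such a check. The staleness budgets per timeframe are illustrative tuning choices, not requirements from the spec:

```python
import time

# Illustrative budgets: how old the newest bar may be before we reject it.
MAX_AGE_SECONDS = {"1s": 2.0, "1m": 90.0, "1h": 3900.0, "1d": 90000.0}

def is_fresh(last_bar_timestamp: float, timeframe: str) -> bool:
    """Return False if the newest bar is older than the timeframe allows."""
    return (time.time() - last_bar_timestamp) <= MAX_AGE_SECONDS[timeframe]
```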

#### 4. **Network Optimization**

- **Issue**: Individual API calls for each timeframe
- **Impact**: Rate limiting and bandwidth waste
- **Solution**: Batch requests and intelligent caching (see the sketch below and Phase 3)
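
Binance's kline endpoint serves one interval per request, so "batching" here realistically means issuing the per-timeframe requests concurrently and letting the cache absorb repeats. A sketch under that assumption; `fetch_klines` is a hypothetical stand-in for the real async REST call:

```python
import asyncio

async def fetch_klines(symbol: str, timeframe: str) -> list:
    """Placeholder for a real async REST call; the body is an assumption."""
    await asyncio.sleep(0)  # stand-in for the network round trip
    return []

async def fetch_all_timeframes(symbol: str = "ETHUSDT") -> dict:
    # Issue all per-timeframe requests concurrently instead of sequentially.
    timeframes = ["1s", "1m", "1h", "1d"]
    results = await asyncio.gather(*(fetch_klines(symbol, tf) for tf in timeframes))
    return dict(zip(timeframes, results))

# asyncio.run(fetch_all_timeframes())
```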

## 🚀 OPTIMIZATION IMPLEMENTATION PLAN

### Phase 1: Memory Optimization

```python
# Implement shared memory data structures: one ring buffer, read in place
# by all consumers instead of each holding a private copy.
import threading

import numpy as np

class SharedDataBuffer:
    def __init__(self, max_size: int):
        self.data = np.zeros((max_size, 6), dtype=np.float32)  # timestamp + OHLCV
        self.write_index = 0
        self.readers = {}  # consumer ID -> last read index
        self._lock = threading.Lock()  # writes are atomic only under a lock

    def write(self, new_data: np.ndarray) -> None:
        # Atomic write operation (guarded by the lock)
        with self._lock:
            self.data[self.write_index] = new_data
            self.write_index = (self.write_index + 1) % len(self.data)

    def read(self, consumer_id: str, count: int) -> np.ndarray:
        # Return data written since this consumer last read
        with self._lock:
            last_read = self.readers.get(consumer_id, 0)
            data_slice = self._get_data_slice(last_read, count)
            self.readers[consumer_id] = self.write_index
            return data_slice

    def _get_data_slice(self, start: int, count: int) -> np.ndarray:
        # Unwrap the ring: `count` rows starting at `start`, modulo buffer size
        indices = (start + np.arange(count)) % len(self.data)
        return self.data[indices]
```
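
The key design choice in this sketch is a single writer with per-consumer read cursors: consumers never hold a copy of the buffer, they only advance their own index, which is what removes the per-subscriber duplication flagged in the audit. True copy-on-write semantics would additionally require snapshotting on concurrent writes (for example via `multiprocessing.shared_memory` across processes), which the sketch does not attempt.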

### Phase 2: Parallel Processing

```python
# Implement concurrent consumer notification
import logging
from concurrent.futures import (ThreadPoolExecutor, as_completed,
                                TimeoutError as FuturesTimeout)
from typing import Any, Dict

logger = logging.getLogger(__name__)

class ParallelDataDistributor:
    def __init__(self, max_workers: int = 4):
        self.executor = ThreadPoolExecutor(max_workers=max_workers)
        self.active_consumers = []  # populated by the registration path

    def _notify_consumer(self, consumer, data_packet: Dict[str, Any]) -> None:
        consumer.on_data(data_packet)  # consumer interface is an assumption

    def distribute_to_consumers(self, data_packet: Dict[str, Any]) -> None:
        futures = [
            self.executor.submit(self._notify_consumer, consumer, data_packet)
            for consumer in self.active_consumers
        ]

        # Wait briefly for all notifications; a slow consumer must not stall
        # the stream, so timeouts are logged rather than raised.
        try:
            for future in as_completed(futures, timeout=0.1):
                try:
                    future.result()
                except Exception as e:
                    logger.warning(f"Consumer notification failed: {e}")
        except FuturesTimeout:
            logger.warning("Some consumers did not acknowledge within 100 ms")
```

### Phase 3: Intelligent Caching

```python
# Implement smart data caching with expiration
import time

import numpy as np

class SmartDataCache:
    # TTL scales with the timeframe: a 1d candle changes far less often
    # than a 1s tick. The exact durations are tunable assumptions.
    _CACHE_DURATIONS = {"1s": 1.0, "1m": 30.0, "1h": 300.0, "1d": 3600.0}

    def __init__(self):
        self.cache = {}
        self.expiry_times = {}
        self.hit_count = 0
        self.miss_count = 0

    def _get_cache_duration(self, timeframe: str) -> float:
        return self._CACHE_DURATIONS.get(timeframe, 60.0)

    def _fetch_fresh_data(self, symbol: str, timeframe: str) -> np.ndarray:
        # Placeholder: delegate to the data provider in the real system.
        raise NotImplementedError("wire this to the data provider")

    def get_data(self, symbol: str, timeframe: str,
                 force_refresh: bool = False) -> np.ndarray:
        cache_key = f"{symbol}_{timeframe}"
        current_time = time.time()

        if not force_refresh and cache_key in self.cache:
            if current_time < self.expiry_times[cache_key]:
                self.hit_count += 1
                return self.cache[cache_key]

        # Cache miss - fetch fresh data
        self.miss_count += 1
        fresh_data = self._fetch_fresh_data(symbol, timeframe)

        # Cache with an expiration appropriate to the timeframe
        self.cache[cache_key] = fresh_data
        self.expiry_times[cache_key] = current_time + self._get_cache_duration(timeframe)

        return fresh_data
```

## 📋 INTEGRATION CHECKLIST

### Dashboard Integration

- [ ] Verify `web/clean_dashboard.py` uses UnifiedDataStream
- [ ] Ensure proper subscriber registration
- [ ] Check data type requirements (`ui_data`, `ohlcv`)
- [ ] Validate real-time updates

### Model Integration

- [ ] CNN models receive formatted universal data
- [ ] RL models get proper state vectors
- [ ] Neural Decision Fusion uses all 5 timeseries
- [ ] COB integration processes microstructure data

### Performance Monitoring

- [ ] Stream statistics tracking
- [ ] Consumer performance metrics
- [ ] Data quality monitoring
- [ ] Memory usage optimization

## 🎯 IMMEDIATE ACTION ITEMS

### High Priority

1. **Audit Dashboard Subscriber** - Ensure `clean_dashboard.py` subscribes properly
2. **Verify Model Data Flow** - Check that all models receive the universal format
3. **Monitor Memory Usage** - Track memory consumption across consumers
4. **Performance Profiling** - Measure data distribution latency

### Medium Priority

1. **Implement Shared Buffers** - Reduce memory duplication
2. **Add Data Freshness Checks** - Prevent stale data usage
3. **Optimize Network Calls** - Batch API requests where possible
4. **Enhanced Error Handling** - Graceful degradation on data issues

### Low Priority

1. **Advanced Caching** - Predictive data pre-loading
2. **Compression** - Reduce data transfer overhead
3. **Distributed Processing** - Scale across multiple processes
4. **Real-time Analytics** - Live data quality metrics

## 🔧 IMPLEMENTATION STATUS

### ✅ Completed

- Universal Data Adapter with 5 timeseries
- Unified Data Stream with subscriber pattern
- Enhanced Orchestrator integration
- Neural Decision Fusion using universal data

### 🚧 In Progress

- Dashboard subscriber optimization
- Memory usage profiling
- Performance monitoring

### 📅 Planned

- Shared memory implementation
- Parallel consumer notification
- Advanced caching strategies
- Real-time quality monitoring

## 📊 SUCCESS METRICS

### Performance Targets

- **Data Latency**: < 10 ms from source to consumer
- **Memory Efficiency**: < 500 MB total for all consumers
- **Cache Hit Rate**: > 80% for historical data requests
- **Consumer Throughput**: > 100 updates/second per consumer
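
To make the latency target measurable, a sketch of consumer-side tracking. It assumes each packet carries a `source_time` stamp set when the data left the provider; that field name is an assumption:

```python
import time
from typing import List

class LatencyTracker:
    """Accumulate source-to-consumer latencies and report p99 in ms."""

    def __init__(self):
        self.samples: List[float] = []

    def record(self, packet: dict) -> None:
        # Latency = wall-clock now minus when the packet left the source.
        self.samples.append(time.time() - packet["source_time"])

    def p99_ms(self) -> float:
        if not self.samples:
            return 0.0
        ordered = sorted(self.samples)
        return ordered[int(0.99 * (len(ordered) - 1))] * 1000.0
```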

### Quality Targets

- **Data Completeness**: > 99.9% for all 5 timeseries
- **Timestamp Accuracy**: < 1 ms deviation from source
- **Format Compliance**: 100% validation success
- **Error Rate**: < 0.1% failed distributions

---

## 🎯 CONCLUSION

The Universal Data Stream architecture is the **backbone** of our trading system. The 5-timeseries format ensures all models receive consistent, high-quality data. The subscriber architecture enables efficient distribution, but there are clear optimization opportunities in memory usage, processing latency, and caching.

**Next Steps**: Focus on implementing shared memory buffers and parallel consumer notification to improve performance while maintaining the integrity of our universal data format.

**Critical**: All optimization work must preserve the 5-timeseries structure, as it is fundamental to our model training and decision-making processes.