remove ws, fix predictions
This commit is contained in:
176
ANNOTATE/TIMEZONE_FIX_COMPLETE.md
Normal file
176
ANNOTATE/TIMEZONE_FIX_COMPLETE.md
Normal file
@@ -0,0 +1,176 @@
|
||||
# Timezone Fix - Complete Implementation
|
||||
|
||||
## Summary
|
||||
|
||||
All datetime values are now stored and processed in UTC. Display timezone is configurable and only used for UI display.
|
||||
|
||||
## Changes Made
|
||||
|
||||
### 1. `utils/timezone_utils.py` - Core Timezone Utilities
|
||||
|
||||
**Changed:**
|
||||
- All internal processing now uses UTC (not Sofia timezone)
|
||||
- `now_system()` → `now_utc()` (returns UTC)
|
||||
- `normalize_timestamp()` → Returns UTC (not Sofia)
|
||||
- `normalize_dataframe_timestamps()` → Returns UTC
|
||||
- `normalize_dataframe_index()` → Returns UTC
|
||||
- Added `now_display()` for UI display timezone
|
||||
- Added `to_display_timezone()` for converting UTC to display timezone
|
||||
- Deprecated `to_sofia()`, `now_sofia()`, `to_system_timezone()`
|
||||
|
||||
**Key Functions:**
|
||||
- `now_utc()` - Use for all internal processing
|
||||
- `to_utc()` - Convert to UTC
|
||||
- `now_display()` - Get current time in display timezone (UI only)
|
||||
- `to_display_timezone()` - Convert UTC to display timezone (UI only)
|
||||
- `format_timestamp_for_display()` - Format UTC timestamp for display
|
||||
|
||||
### 2. `core/config.py` - Configuration
|
||||
|
||||
**Added:**
|
||||
- `display_timezone` in default config (default: 'Europe/Sofia')
|
||||
|
||||
### 3. `config.yaml` - Config File
|
||||
|
||||
**Changed:**
|
||||
- `system.timezone` → `system.display_timezone`
|
||||
- All internal processing uses UTC regardless of this setting
|
||||
|
||||
### 4. `ANNOTATE/core/inference_training_system.py`
|
||||
|
||||
**Fixed:**
|
||||
- All `datetime.now(timezone.utc)` calls already correct
|
||||
- Added comments clarifying UTC usage
|
||||
|
||||
### 5. `ANNOTATE/core/real_training_adapter.py`
|
||||
|
||||
**Fixed:**
|
||||
- All `datetime.now()` → `datetime.now(timezone.utc)`
|
||||
- Lines: 2498, 3057, 4258, 4456
|
||||
|
||||
### 6. `ANNOTATE/web/app.py`
|
||||
|
||||
**Fixed:**
|
||||
- `datetime.now()` → `datetime.now(timezone.utc)`
|
||||
- Lines: 3057, 3438
|
||||
|
||||
### 7. `ANNOTATE/web/static/js/chart_manager.js`
|
||||
|
||||
**Already Correct:**
|
||||
- Uses `toISOString()` for UTC consistency
|
||||
- `normalizeTimestamp()` helper ensures UTC
|
||||
|
||||
## Configuration
|
||||
|
||||
### Display Timezone
|
||||
|
||||
Set in `config.yaml`:
|
||||
```yaml
|
||||
system:
|
||||
display_timezone: "Europe/Sofia" # Change this to your preferred display timezone
|
||||
```
|
||||
|
||||
Or in code:
|
||||
```python
|
||||
from utils.timezone_utils import get_display_timezone
|
||||
display_tz = get_display_timezone() # Returns configured display timezone
|
||||
```
|
||||
|
||||
## Usage Guidelines
|
||||
|
||||
### For Internal Processing (Backend)
|
||||
|
||||
**ALWAYS use UTC:**
|
||||
```python
|
||||
from datetime import datetime, timezone
|
||||
from utils.timezone_utils import now_utc, to_utc
|
||||
|
||||
# Get current time
|
||||
current_time = datetime.now(timezone.utc) # or now_utc()
|
||||
|
||||
# Convert to UTC
|
||||
utc_time = to_utc(some_datetime)
|
||||
|
||||
# Store in database
|
||||
timestamp = datetime.now(timezone.utc).isoformat()
|
||||
```
|
||||
|
||||
### For UI Display (Frontend/Backend)
|
||||
|
||||
**Convert UTC to display timezone only for display:**
|
||||
```python
|
||||
from utils.timezone_utils import to_display_timezone, format_timestamp_for_display
|
||||
|
||||
# Convert UTC to display timezone
|
||||
display_time = to_display_timezone(utc_datetime)
|
||||
|
||||
# Format for display
|
||||
formatted = format_timestamp_for_display(utc_datetime, '%Y-%m-%d %H:%M:%S')
|
||||
```
|
||||
|
||||
### JavaScript (Frontend)
|
||||
|
||||
**Already handles UTC correctly:**
|
||||
```javascript
|
||||
// All timestamps should be in UTC ISO format
|
||||
const timestamp = new Date(utcIsoString).toISOString();
|
||||
|
||||
// For display, convert to local timezone (browser handles this)
|
||||
const displayTime = new Date(utcIsoString).toLocaleString();
|
||||
```
|
||||
|
||||
## Migration Notes
|
||||
|
||||
### Old Code (DEPRECATED)
|
||||
```python
|
||||
from utils.timezone_utils import now_system, to_sofia, normalize_timestamp
|
||||
|
||||
# OLD - Don't use
|
||||
time = now_system() # Returns Sofia timezone
|
||||
time = to_sofia(dt) # Converts to Sofia
|
||||
time = normalize_timestamp(ts) # Returns Sofia timezone
|
||||
```
|
||||
|
||||
### New Code (CORRECT)
|
||||
```python
|
||||
from datetime import datetime, timezone
|
||||
from utils.timezone_utils import now_utc, to_utc, to_display_timezone
|
||||
|
||||
# NEW - Use this
|
||||
time = datetime.now(timezone.utc) # or now_utc()
|
||||
time = to_utc(dt) # Converts to UTC
|
||||
display_time = to_display_timezone(utc_time) # For UI only
|
||||
```
|
||||
|
||||
## Benefits
|
||||
|
||||
1. **No More Timezone Misalignment**: All predictions align with candles
|
||||
2. **Consistent Storage**: All database timestamps in UTC
|
||||
3. **Configurable Display**: Users can set their preferred display timezone
|
||||
4. **Clean Implementation**: No more timezone patches
|
||||
5. **International Support**: Easy to support multiple timezones
|
||||
|
||||
## Testing
|
||||
|
||||
1. **Verify Predictions Align with Candles**
|
||||
- Start inference
|
||||
- Check that predictions appear at correct candle times
|
||||
- No 1-2 hour offset
|
||||
|
||||
2. **Verify Display Timezone**
|
||||
- Change `display_timezone` in config
|
||||
- Restart application
|
||||
- Verify UI shows times in configured timezone
|
||||
|
||||
3. **Verify UTC Storage**
|
||||
- Check database timestamps are in UTC
|
||||
- Check all API responses use UTC
|
||||
- Check logs use UTC
|
||||
|
||||
## Removed Code
|
||||
|
||||
All old timezone patches have been removed:
|
||||
- No more `to_sofia()` conversions in processing
|
||||
- No more `normalize_timestamp()` converting to Sofia
|
||||
- No more `SYSTEM_TIMEZONE` usage in processing
|
||||
- Clean, unified UTC implementation
|
||||
@@ -89,14 +89,32 @@ class HistoricalDataLoader:
|
||||
try:
|
||||
# FORCE refresh for 1s/1m if requesting latest data OR incremental update
|
||||
force_refresh = (timeframe in ['1s', '1m'] and (bypass_cache or (not start_time and not end_time)))
|
||||
|
||||
# Try to get data from DataProvider's cached data first (most efficient)
|
||||
if hasattr(self.data_provider, 'cached_data'):
|
||||
with self.data_provider.data_lock:
|
||||
cached_df = self.data_provider.cached_data.get(symbol, {}).get(timeframe)
|
||||
|
||||
if cached_df is not None and not cached_df.empty:
|
||||
# Use cached data if we have enough candles
|
||||
if len(cached_df) >= min(limit, 100): # Use cached if we have at least 100 candles
|
||||
# If time range is specified, check if cached data covers it
|
||||
use_cached_data = True
|
||||
if start_time or end_time:
|
||||
if isinstance(cached_df.index, pd.DatetimeIndex):
|
||||
cache_start = cached_df.index.min()
|
||||
cache_end = cached_df.index.max()
|
||||
|
||||
# Check if requested range is within cached range
|
||||
if start_time and start_time < cache_start:
|
||||
use_cached_data = False
|
||||
elif end_time and end_time > cache_end:
|
||||
use_cached_data = False
|
||||
elif start_time and end_time:
|
||||
# Both specified - check if range overlaps
|
||||
if end_time < cache_start or start_time > cache_end:
|
||||
use_cached_data = False
|
||||
|
||||
# Use cached data if we have enough candles and it covers the range
|
||||
if use_cached_data and len(cached_df) >= min(limit, 100): # Use cached if we have at least 100 candles
|
||||
elapsed_ms = (time.time() - start_time_ms) * 1000
|
||||
logger.debug(f" DataProvider cache hit for {symbol} {timeframe} ({len(cached_df)} candles, {elapsed_ms:.1f}ms)")
|
||||
|
||||
@@ -109,9 +127,12 @@ class HistoricalDataLoader:
|
||||
limit
|
||||
)
|
||||
|
||||
# Cache in memory
|
||||
self.memory_cache[cache_key] = (filtered_df, datetime.now())
|
||||
return filtered_df
|
||||
# Only return cached data if filter produced results
|
||||
if filtered_df is not None and not filtered_df.empty:
|
||||
# Cache in memory
|
||||
self.memory_cache[cache_key] = (filtered_df, datetime.now())
|
||||
return filtered_df
|
||||
# If filter returned empty, fall through to fetch from DuckDB/API
|
||||
|
||||
# Try unified storage first if available
|
||||
if hasattr(self.data_provider, 'is_unified_storage_enabled') and \
|
||||
@@ -156,28 +177,47 @@ class HistoricalDataLoader:
|
||||
except Exception as e:
|
||||
logger.debug(f"Unified storage not available, falling back to cached data: {e}")
|
||||
|
||||
# Fallback to existing cached data method
|
||||
# Use DataProvider's cached data if available
|
||||
# Fallback to existing cached data method (duplicate check - should not reach here if first check worked)
|
||||
# This is kept for backward compatibility but should rarely execute
|
||||
if hasattr(self.data_provider, 'cached_data'):
|
||||
if symbol in self.data_provider.cached_data:
|
||||
if timeframe in self.data_provider.cached_data[symbol]:
|
||||
df = self.data_provider.cached_data[symbol][timeframe]
|
||||
|
||||
if df is not None and not df.empty:
|
||||
# Filter by time range with direction support
|
||||
df = self._filter_by_time_range(
|
||||
df.copy(),
|
||||
start_time,
|
||||
end_time,
|
||||
direction,
|
||||
limit
|
||||
)
|
||||
# Check if cached data covers the requested time range
|
||||
use_cached_data = True
|
||||
if start_time or end_time:
|
||||
if isinstance(df.index, pd.DatetimeIndex):
|
||||
cache_start = df.index.min()
|
||||
cache_end = df.index.max()
|
||||
|
||||
if start_time and start_time < cache_start:
|
||||
use_cached_data = False
|
||||
elif end_time and end_time > cache_end:
|
||||
use_cached_data = False
|
||||
elif start_time and end_time:
|
||||
if end_time < cache_start or start_time > cache_end:
|
||||
use_cached_data = False
|
||||
|
||||
# Cache in memory
|
||||
self.memory_cache[cache_key] = (df.copy(), datetime.now())
|
||||
|
||||
logger.info(f"Loaded {len(df)} candles for {symbol} {timeframe}")
|
||||
return df
|
||||
if use_cached_data:
|
||||
# Filter by time range with direction support
|
||||
df = self._filter_by_time_range(
|
||||
df.copy(),
|
||||
start_time,
|
||||
end_time,
|
||||
direction,
|
||||
limit
|
||||
)
|
||||
|
||||
# Only return if filter produced results
|
||||
if df is not None and not df.empty:
|
||||
# Cache in memory
|
||||
self.memory_cache[cache_key] = (df.copy(), datetime.now())
|
||||
|
||||
logger.info(f"Loaded {len(df)} candles for {symbol} {timeframe}")
|
||||
return df
|
||||
# If filter returned empty or range not covered, fall through to fetch from DuckDB/API
|
||||
|
||||
# Check DuckDB first for historical data (always check for infinite scroll)
|
||||
if self.data_provider.duckdb_storage and (start_time or end_time):
|
||||
@@ -198,7 +238,7 @@ class HistoricalDataLoader:
|
||||
self.memory_cache[cache_key] = (df.copy(), datetime.now())
|
||||
return df
|
||||
else:
|
||||
logger.info(f"📡 No data in DuckDB, fetching from exchange API for {symbol} {timeframe}")
|
||||
logger.info(f"No data in DuckDB, fetching from exchange API for {symbol} {timeframe}")
|
||||
|
||||
# Fetch from exchange API with time range
|
||||
df = self._fetch_from_exchange_api(
|
||||
@@ -212,7 +252,7 @@ class HistoricalDataLoader:
|
||||
|
||||
if df is not None and not df.empty:
|
||||
elapsed_ms = (time.time() - start_time_ms) * 1000
|
||||
logger.info(f"🌐 Exchange API hit for {symbol} {timeframe} ({len(df)} candles, {elapsed_ms:.1f}ms)")
|
||||
logger.info(f"Exchange API hit for {symbol} {timeframe} ({len(df)} candles, {elapsed_ms:.1f}ms)")
|
||||
|
||||
# Store in DuckDB for future use
|
||||
if self.data_provider.duckdb_storage:
|
||||
|
||||
@@ -3589,8 +3589,7 @@ class RealTrainingAdapter:
|
||||
if model_name == 'Transformer' and self.orchestrator:
|
||||
trainer = getattr(self.orchestrator, 'primary_transformer_trainer', None)
|
||||
if trainer and trainer.model:
|
||||
# Get recent market data
|
||||
market_data, norm_params = self._get_realtime_market_data(symbol, data_provider)
|
||||
# Use provided market_data and norm_params (already fetched by caller)
|
||||
if not market_data:
|
||||
return None
|
||||
|
||||
@@ -4493,15 +4492,22 @@ class RealTrainingAdapter:
|
||||
time.sleep(1)
|
||||
continue
|
||||
|
||||
# Make prediction using the model
|
||||
prediction = self._make_realtime_prediction(model_name, symbol, data_provider)
|
||||
# Make prediction using the model - returns tuple (prediction_dict, market_data_dict)
|
||||
prediction_result = self._make_realtime_prediction(model_name, symbol, data_provider)
|
||||
|
||||
# Unpack tuple: prediction is the dict, market_data_info contains norm_params
|
||||
if prediction_result is None:
|
||||
time.sleep(1)
|
||||
continue
|
||||
|
||||
prediction, market_data_info = prediction_result
|
||||
|
||||
# Register inference frame reference for later training when actual candle arrives
|
||||
# This stores a reference (timestamp range) instead of copying 600 candles
|
||||
# The reference allows us to retrieve the exact data from DuckDB when training
|
||||
if prediction and self.training_coordinator:
|
||||
# Get norm_params for storage in reference
|
||||
_, norm_params = self._get_realtime_market_data(symbol, data_provider)
|
||||
if prediction and self.training_coordinator and market_data_info:
|
||||
# Get norm_params from market_data_info
|
||||
norm_params = market_data_info.get('norm_params', {})
|
||||
self._register_inference_frame(session, symbol, timeframe, prediction, data_provider, norm_params)
|
||||
|
||||
if prediction:
|
||||
@@ -4554,10 +4560,41 @@ class RealTrainingAdapter:
|
||||
|
||||
# Store prediction for visualization (INCLUDE predicted_candle for ghost candles!)
|
||||
if self.orchestrator and hasattr(self.orchestrator, 'store_transformer_prediction'):
|
||||
# Get denormalized predicted_price (should already be denormalized from _make_realtime_prediction_internal)
|
||||
predicted_price = prediction.get('predicted_price')
|
||||
|
||||
# Always get actual current_price from latest candle to ensure it's denormalized
|
||||
# This is more reliable than trusting get_current_price which might return normalized values
|
||||
actual_current_price = current_price
|
||||
try:
|
||||
df_latest = data_provider.get_historical_data(symbol, timeframe, limit=1, refresh=False)
|
||||
if df_latest is not None and not df_latest.empty:
|
||||
actual_current_price = float(df_latest['close'].iloc[-1])
|
||||
else:
|
||||
# Try other timeframes
|
||||
for tf in ['1m', '1h', '1d']:
|
||||
if tf != timeframe:
|
||||
df_tf = data_provider.get_historical_data(symbol, tf, limit=1, refresh=False)
|
||||
if df_tf is not None and not df_tf.empty:
|
||||
actual_current_price = float(df_tf['close'].iloc[-1])
|
||||
break
|
||||
except Exception as e:
|
||||
logger.debug(f"Error getting actual price from candle: {e}")
|
||||
# Fallback: if current_price looks normalized (< 1000 for ETH/USDT), try to denormalize
|
||||
if current_price < 1000 and symbol == 'ETH/USDT': # ETH should be > 1000, normalized would be < 1
|
||||
if market_data_info and 'norm_params' in market_data_info:
|
||||
norm_params = market_data_info['norm_params']
|
||||
if '1m' in norm_params:
|
||||
params = norm_params['1m']
|
||||
price_min = params['price_min']
|
||||
price_max = params['price_max']
|
||||
# Denormalize: price = normalized * (max - min) + min
|
||||
actual_current_price = float(current_price * (price_max - price_min) + price_min)
|
||||
|
||||
prediction_data = {
|
||||
'timestamp': datetime.now(timezone.utc).isoformat(),
|
||||
'current_price': current_price,
|
||||
'predicted_price': prediction.get('predicted_price', current_price),
|
||||
'current_price': actual_current_price, # Use denormalized price
|
||||
'predicted_price': predicted_price if predicted_price is not None else actual_current_price,
|
||||
'price_change': 1.0 if prediction['action'] == 'BUY' else -1.0,
|
||||
'confidence': prediction['confidence'],
|
||||
'action': prediction['action'],
|
||||
@@ -4596,45 +4633,101 @@ class RealTrainingAdapter:
|
||||
|
||||
if predicted_price_val is not None:
|
||||
prediction_data['predicted_price'] = predicted_price_val
|
||||
prediction_data['price_change'] = ((predicted_price_val - current_price) / current_price) * 100
|
||||
# Calculate price_change using denormalized prices
|
||||
prediction_data['price_change'] = ((predicted_price_val - actual_current_price) / actual_current_price) * 100
|
||||
else:
|
||||
prediction_data['predicted_price'] = prediction.get('predicted_price', current_price)
|
||||
prediction_data['price_change'] = 1.0 if prediction['action'] == 'BUY' else -1.0
|
||||
# Fallback: use predicted_price from prediction dict (should be denormalized)
|
||||
fallback_predicted = prediction.get('predicted_price')
|
||||
if fallback_predicted is not None:
|
||||
prediction_data['predicted_price'] = fallback_predicted
|
||||
prediction_data['price_change'] = ((fallback_predicted - actual_current_price) / actual_current_price) * 100
|
||||
else:
|
||||
prediction_data['predicted_price'] = actual_current_price
|
||||
prediction_data['price_change'] = 1.0 if prediction['action'] == 'BUY' else -1.0
|
||||
else:
|
||||
# Fallback to estimated price if no candle prediction
|
||||
logger.warning(f"!!! No predicted_candle in prediction object - ghost candles will not appear!")
|
||||
prediction_data['predicted_price'] = prediction.get('predicted_price', current_price * (1.01 if prediction['action'] == 'BUY' else 0.99))
|
||||
prediction_data['price_change'] = 1.0 if prediction['action'] == 'BUY' else -1.0
|
||||
|
||||
# Include trend_vector if available (convert tensors to Python types)
|
||||
# Include trend_vector if available (convert tensors to Python types and denormalize)
|
||||
if 'trend_vector' in prediction:
|
||||
trend_vec = prediction['trend_vector']
|
||||
# Convert any tensors to Python native types
|
||||
# Get normalization params for denormalization
|
||||
norm_params_for_denorm = {}
|
||||
if market_data_info and 'norm_params' in market_data_info:
|
||||
norm_params_for_denorm = market_data_info['norm_params']
|
||||
|
||||
# Convert any tensors to Python native types and denormalize price values
|
||||
if isinstance(trend_vec, dict):
|
||||
serialized_trend = {}
|
||||
for key, value in trend_vec.items():
|
||||
if hasattr(value, 'numel'): # Tensor
|
||||
if value.numel() == 1: # Scalar tensor
|
||||
serialized_trend[key] = value.item()
|
||||
val = value.item()
|
||||
# Denormalize price_delta if it's a price-related value
|
||||
if key == 'price_delta' and norm_params_for_denorm:
|
||||
val = self._denormalize_price_value(val, norm_params_for_denorm, '1m')
|
||||
serialized_trend[key] = val
|
||||
else: # Multi-element tensor
|
||||
serialized_trend[key] = value.detach().cpu().tolist()
|
||||
val_list = value.detach().cpu().tolist()
|
||||
# Denormalize pivot_prices if it's a price array (can be nested)
|
||||
if key == 'pivot_prices' and norm_params_for_denorm:
|
||||
val_list = self._denormalize_nested_price_array(val_list, norm_params_for_denorm, '1m')
|
||||
serialized_trend[key] = val_list
|
||||
elif hasattr(value, 'tolist'): # Other array-like
|
||||
serialized_trend[key] = value.tolist()
|
||||
val_list = value.tolist()
|
||||
if key == 'pivot_prices' and norm_params_for_denorm:
|
||||
val_list = self._denormalize_nested_price_array(val_list, norm_params_for_denorm, '1m')
|
||||
serialized_trend[key] = val_list
|
||||
elif isinstance(value, (list, tuple)):
|
||||
# Recursively convert list/tuple of tensors
|
||||
serialized_trend[key] = []
|
||||
serialized_list = []
|
||||
for v in value:
|
||||
if hasattr(v, 'numel'):
|
||||
if v.numel() == 1:
|
||||
serialized_trend[key].append(v.item())
|
||||
val = v.item()
|
||||
if key == 'pivot_prices' and norm_params_for_denorm:
|
||||
val = self._denormalize_price_value(val, norm_params_for_denorm, '1m')
|
||||
serialized_list.append(val)
|
||||
else:
|
||||
serialized_trend[key].append(v.detach().cpu().tolist())
|
||||
val_list = v.detach().cpu().tolist()
|
||||
if key == 'pivot_prices' and norm_params_for_denorm:
|
||||
# Handle nested arrays (pivot_prices is [[p1, p2, p3, ...]])
|
||||
val_list = self._denormalize_nested_price_array(val_list, norm_params_for_denorm, '1m')
|
||||
serialized_list.append(val_list)
|
||||
elif hasattr(v, 'tolist'):
|
||||
serialized_trend[key].append(v.tolist())
|
||||
val_list = v.tolist()
|
||||
if key == 'pivot_prices' and norm_params_for_denorm:
|
||||
# Handle nested arrays
|
||||
val_list = self._denormalize_nested_price_array(val_list, norm_params_for_denorm, '1m')
|
||||
serialized_list.append(val_list)
|
||||
elif isinstance(v, (list, tuple)):
|
||||
# Nested list - handle pivot_prices structure
|
||||
if key == 'pivot_prices' and norm_params_for_denorm:
|
||||
nested_denorm = self._denormalize_nested_price_array(list(v), norm_params_for_denorm, '1m')
|
||||
serialized_list.append(nested_denorm)
|
||||
else:
|
||||
serialized_list.append(list(v))
|
||||
else:
|
||||
serialized_trend[key].append(v)
|
||||
serialized_list.append(v)
|
||||
serialized_trend[key] = serialized_list
|
||||
else:
|
||||
serialized_trend[key] = value
|
||||
# Denormalize price_delta if it's a scalar
|
||||
if key == 'price_delta' and isinstance(value, (int, float)) and norm_params_for_denorm:
|
||||
serialized_trend[key] = self._denormalize_price_value(value, norm_params_for_denorm, '1m')
|
||||
else:
|
||||
serialized_trend[key] = value
|
||||
|
||||
# Denormalize vector array if it contains price deltas
|
||||
if 'vector' in serialized_trend and isinstance(serialized_trend['vector'], list) and norm_params_for_denorm:
|
||||
vector = serialized_trend['vector']
|
||||
if len(vector) > 0 and isinstance(vector[0], list) and len(vector[0]) > 0:
|
||||
# vector is [[price_delta, time_delta]]
|
||||
price_delta_norm = vector[0][0]
|
||||
price_delta_denorm = self._denormalize_price_value(price_delta_norm, norm_params_for_denorm, '1m')
|
||||
serialized_trend['vector'] = [[price_delta_denorm, vector[0][1]]]
|
||||
|
||||
prediction_data['trend_vector'] = serialized_trend
|
||||
else:
|
||||
prediction_data['trend_vector'] = trend_vec
|
||||
@@ -4870,3 +4963,82 @@ class RealTrainingAdapter:
|
||||
return ((current_price - entry_price) / entry_price) * 100 # Percentage
|
||||
else: # short
|
||||
return ((entry_price - current_price) / entry_price) * 100 # Percentage
|
||||
|
||||
def _denormalize_price_value(self, normalized_value: float, norm_params: Dict, timeframe: str = '1m') -> float:
|
||||
"""
|
||||
Denormalize a single price value using normalization parameters
|
||||
|
||||
Args:
|
||||
normalized_value: Normalized price value (0-1 range)
|
||||
norm_params: Dictionary of normalization parameters by timeframe
|
||||
timeframe: Timeframe to use for denormalization (default: '1m')
|
||||
|
||||
Returns:
|
||||
Denormalized price value
|
||||
"""
|
||||
try:
|
||||
if timeframe in norm_params:
|
||||
params = norm_params[timeframe]
|
||||
price_min = params.get('price_min', 0.0)
|
||||
price_max = params.get('price_max', 1.0)
|
||||
if price_max > price_min:
|
||||
# Denormalize: price = normalized * (max - min) + min
|
||||
return float(normalized_value * (price_max - price_min) + price_min)
|
||||
# Fallback: return as-is if no params available
|
||||
return float(normalized_value)
|
||||
except Exception as e:
|
||||
logger.debug(f"Error denormalizing price value: {e}")
|
||||
return float(normalized_value)
|
||||
|
||||
def _denormalize_price_array(self, normalized_array: list, norm_params: Dict, timeframe: str = '1m') -> list:
|
||||
"""
|
||||
Denormalize an array of price values using normalization parameters
|
||||
|
||||
Args:
|
||||
normalized_array: List of normalized price values (0-1 range)
|
||||
norm_params: Dictionary of normalization parameters by timeframe
|
||||
timeframe: Timeframe to use for denormalization (default: '1m')
|
||||
|
||||
Returns:
|
||||
List of denormalized price values
|
||||
"""
|
||||
try:
|
||||
if timeframe in norm_params:
|
||||
params = norm_params[timeframe]
|
||||
price_min = params.get('price_min', 0.0)
|
||||
price_max = params.get('price_max', 1.0)
|
||||
if price_max > price_min:
|
||||
# Denormalize each value: price = normalized * (max - min) + min
|
||||
return [float(v * (price_max - price_min) + price_min) if isinstance(v, (int, float)) else v
|
||||
for v in normalized_array]
|
||||
# Fallback: return as-is if no params available
|
||||
return [float(v) if isinstance(v, (int, float)) else v for v in normalized_array]
|
||||
except Exception as e:
|
||||
logger.debug(f"Error denormalizing price array: {e}")
|
||||
return [float(v) if isinstance(v, (int, float)) else v for v in normalized_array]
|
||||
|
||||
def _denormalize_nested_price_array(self, normalized_array: list, norm_params: Dict, timeframe: str = '1m') -> list:
|
||||
"""
|
||||
Denormalize a nested array of price values (e.g., [[p1, p2, p3], [p4, p5, p6]])
|
||||
|
||||
Args:
|
||||
normalized_array: Nested list of normalized price values
|
||||
norm_params: Dictionary of normalization parameters by timeframe
|
||||
timeframe: Timeframe to use for denormalization (default: '1m')
|
||||
|
||||
Returns:
|
||||
Nested list of denormalized price values
|
||||
"""
|
||||
try:
|
||||
result = []
|
||||
for item in normalized_array:
|
||||
if isinstance(item, (list, tuple)):
|
||||
# Recursively denormalize nested arrays
|
||||
result.append(self._denormalize_price_array(list(item), norm_params, timeframe))
|
||||
else:
|
||||
# Single value - denormalize it
|
||||
result.append(self._denormalize_price_value(item, norm_params, timeframe) if isinstance(item, (int, float)) else item)
|
||||
return result
|
||||
except Exception as e:
|
||||
logger.debug(f"Error denormalizing nested price array: {e}")
|
||||
return normalized_array
|
||||
|
||||
5
ANNOTATE/core/we need to fully move the Inference Trai
Normal file
5
ANNOTATE/core/we need to fully move the Inference Trai
Normal file
@@ -0,0 +1,5 @@
|
||||
we need to fully move the Inference Training Coordinator functions in Orchestrator - both classes have overlaping responsibilities and only one should exist.
|
||||
|
||||
InferenceFrameReference also should be in core/data_models.py.
|
||||
|
||||
we do not need a core folder in ANNOTATE app. we should refactor and move the classes in the main /core folder. this is a design flaw. we should have only one "core" naturally. the purpose of ANNOTATE app is to provide UI for creating test cases and anotating data and also running inference and training. all implementations should be in the main system and only referenced and used in the ANNOTATE app
|
||||
@@ -715,22 +715,9 @@ class AnnotationDashboard:
|
||||
static_folder='static'
|
||||
)
|
||||
|
||||
# Initialize SocketIO for WebSocket support
|
||||
try:
|
||||
from flask_socketio import SocketIO, emit
|
||||
self.socketio = SocketIO(
|
||||
self.server,
|
||||
cors_allowed_origins="*",
|
||||
async_mode='threading',
|
||||
logger=False,
|
||||
engineio_logger=False
|
||||
)
|
||||
self.has_socketio = True
|
||||
logger.info("SocketIO initialized for real-time updates")
|
||||
except ImportError:
|
||||
self.socketio = None
|
||||
self.has_socketio = False
|
||||
logger.warning("flask-socketio not installed - live updates will use polling")
|
||||
# WebSocket support removed - using HTTP polling only
|
||||
self.socketio = None
|
||||
self.has_socketio = False
|
||||
|
||||
# Suppress werkzeug request logs (reduce noise from polling endpoints)
|
||||
werkzeug_logger = logging.getLogger('werkzeug')
|
||||
@@ -777,9 +764,7 @@ class AnnotationDashboard:
|
||||
# Initialize training strategy manager (controls training decisions)
|
||||
self.training_strategy = TrainingStrategyManager(self.data_provider, self.training_adapter)
|
||||
self.training_strategy.dashboard = self
|
||||
# Pass socketio to training adapter for live trade updates
|
||||
if self.has_socketio and self.socketio:
|
||||
self.training_adapter.socketio = self.socketio
|
||||
# WebSocket removed - using HTTP polling only
|
||||
# Backtest runner for replaying visible chart with predictions
|
||||
self.backtest_runner = BacktestRunner()
|
||||
|
||||
@@ -2546,27 +2531,46 @@ class AnnotationDashboard:
|
||||
'prediction': None
|
||||
}
|
||||
|
||||
# Get latest candle for the requested timeframe
|
||||
if self.orchestrator and self.orchestrator.data_provider:
|
||||
# Get latest candle for the requested timeframe using data_loader
|
||||
if self.data_loader:
|
||||
try:
|
||||
# Get latest candle
|
||||
ohlcv_data = self.orchestrator.data_provider.get_ohlcv_data(symbol, timeframe, limit=1)
|
||||
if ohlcv_data and len(ohlcv_data) > 0:
|
||||
latest_candle = ohlcv_data[-1]
|
||||
# Get latest candle from data_loader
|
||||
df = self.data_loader.get_data(symbol, timeframe, limit=2, direction='latest')
|
||||
if df is not None and not df.empty:
|
||||
latest_candle = df.iloc[-1]
|
||||
|
||||
# Format timestamp as ISO string (ensure UTC format for frontend)
|
||||
timestamp = latest_candle.name
|
||||
if hasattr(timestamp, 'isoformat'):
|
||||
# If timezone-aware, convert to UTC ISO string
|
||||
if timestamp.tzinfo is not None:
|
||||
timestamp_str = timestamp.astimezone(timezone.utc).isoformat()
|
||||
else:
|
||||
# Assume UTC if no timezone info
|
||||
timestamp_str = timestamp.isoformat() + 'Z'
|
||||
else:
|
||||
timestamp_str = str(timestamp)
|
||||
|
||||
# Determine if candle is confirmed (we have 2 candles, so previous is confirmed)
|
||||
is_confirmed = len(df) >= 2
|
||||
|
||||
response['chart_update'] = {
|
||||
'symbol': symbol,
|
||||
'timeframe': timeframe,
|
||||
'candle': {
|
||||
'timestamp': latest_candle[0],
|
||||
'open': float(latest_candle[1]),
|
||||
'high': float(latest_candle[2]),
|
||||
'low': float(latest_candle[3]),
|
||||
'close': float(latest_candle[4]),
|
||||
'volume': float(latest_candle[5])
|
||||
}
|
||||
'timestamp': timestamp_str,
|
||||
'open': float(latest_candle['open']),
|
||||
'high': float(latest_candle['high']),
|
||||
'low': float(latest_candle['low']),
|
||||
'close': float(latest_candle['close']),
|
||||
'volume': float(latest_candle['volume'])
|
||||
},
|
||||
'is_confirmed': is_confirmed
|
||||
}
|
||||
except Exception as e:
|
||||
logger.debug(f"Error getting latest candle: {e}")
|
||||
logger.debug(f"Error getting latest candle from data_loader: {e}", exc_info=True)
|
||||
else:
|
||||
logger.debug("Data loader not available for live updates")
|
||||
|
||||
# Get latest model predictions
|
||||
if self.orchestrator:
|
||||
@@ -2762,9 +2766,7 @@ class AnnotationDashboard:
|
||||
'error': str(e)
|
||||
})
|
||||
|
||||
# WebSocket event handlers (if SocketIO is available)
|
||||
if self.has_socketio:
|
||||
self._setup_websocket_handlers()
|
||||
# WebSocket removed - using HTTP polling only
|
||||
|
||||
def _serialize_prediction(self, prediction: Dict) -> Dict:
|
||||
"""Convert PyTorch tensors in prediction dict to JSON-serializable Python types"""
|
||||
@@ -2793,184 +2795,7 @@ class AnnotationDashboard:
|
||||
# Fallback: return as-is (might fail JSON serialization but won't crash)
|
||||
return prediction
|
||||
|
||||
def _setup_websocket_handlers(self):
|
||||
"""Setup WebSocket event handlers for real-time updates"""
|
||||
if not self.has_socketio:
|
||||
return
|
||||
|
||||
@self.socketio.on('connect')
|
||||
def handle_connect():
|
||||
"""Handle client connection"""
|
||||
logger.info(f"WebSocket client connected")
|
||||
from flask_socketio import emit
|
||||
emit('connection_response', {'status': 'connected', 'message': 'Connected to ANNOTATE live updates'})
|
||||
|
||||
@self.socketio.on('disconnect')
|
||||
def handle_disconnect():
|
||||
"""Handle client disconnection"""
|
||||
logger.info(f"WebSocket client disconnected")
|
||||
|
||||
@self.socketio.on('subscribe_live_updates')
|
||||
def handle_subscribe(data):
|
||||
"""Subscribe to live chart and prediction updates"""
|
||||
from flask_socketio import emit, join_room
|
||||
symbol = data.get('symbol', 'ETH/USDT')
|
||||
timeframe = data.get('timeframe', '1s')
|
||||
room = f"{symbol}_{timeframe}"
|
||||
|
||||
join_room(room)
|
||||
logger.info(f"Client subscribed to live updates: {room}")
|
||||
emit('subscription_confirmed', {'room': room, 'symbol': symbol, 'timeframe': timeframe})
|
||||
|
||||
# Start live update thread if not already running
|
||||
if not hasattr(self, '_live_update_thread') or not self._live_update_thread.is_alive():
|
||||
self._start_live_update_thread()
|
||||
|
||||
@self.socketio.on('request_prediction')
|
||||
def handle_prediction_request(data):
|
||||
"""Handle manual prediction request"""
|
||||
from flask_socketio import emit
|
||||
try:
|
||||
symbol = data.get('symbol', 'ETH/USDT')
|
||||
timeframe = data.get('timeframe', '1s')
|
||||
prediction_steps = data.get('prediction_steps', 1)
|
||||
|
||||
# Get prediction from model
|
||||
prediction = self._get_live_prediction(symbol, timeframe, prediction_steps)
|
||||
|
||||
emit('prediction_update', prediction)
|
||||
except Exception as e:
|
||||
logger.error(f"Error handling prediction request: {e}")
|
||||
emit('prediction_error', {'error': str(e)})
|
||||
|
||||
@self.socketio.on('prediction_accuracy')
|
||||
def handle_prediction_accuracy(data):
|
||||
"""
|
||||
Handle validated prediction accuracy - trigger incremental training
|
||||
|
||||
This is called when frontend validates a prediction against actual candle.
|
||||
We use this data to incrementally train the model for continuous improvement.
|
||||
"""
|
||||
from flask_socketio import emit
|
||||
try:
|
||||
timeframe = data.get('timeframe')
|
||||
timestamp = data.get('timestamp')
|
||||
predicted = data.get('predicted') # [O, H, L, C, V]
|
||||
actual = data.get('actual') # [O, H, L, C]
|
||||
errors = data.get('errors') # {open, high, low, close}
|
||||
pct_errors = data.get('pctErrors')
|
||||
direction_correct = data.get('directionCorrect')
|
||||
accuracy = data.get('accuracy')
|
||||
|
||||
if not all([timeframe, timestamp, predicted, actual]):
|
||||
logger.warning("Incomplete prediction accuracy data received")
|
||||
return
|
||||
|
||||
logger.info(f"[{timeframe}] Prediction validated: {accuracy:.1f}% accuracy, direction: {direction_correct}")
|
||||
logger.debug(f" Errors: O={pct_errors['open']:.2f}% H={pct_errors['high']:.2f}% L={pct_errors['low']:.2f}% C={pct_errors['close']:.2f}%")
|
||||
|
||||
# Trigger incremental training on this validated prediction
|
||||
self._train_on_validated_prediction(
|
||||
timeframe=timeframe,
|
||||
timestamp=timestamp,
|
||||
predicted=predicted,
|
||||
actual=actual,
|
||||
errors=errors,
|
||||
direction_correct=direction_correct,
|
||||
accuracy=accuracy
|
||||
)
|
||||
|
||||
# Send confirmation back to frontend
|
||||
emit('training_update', {
|
||||
'status': 'training_triggered',
|
||||
'timestamp': timestamp,
|
||||
'accuracy': accuracy,
|
||||
'message': f'Incremental training triggered on validated prediction'
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error handling prediction accuracy: {e}", exc_info=True)
|
||||
emit('training_error', {'error': str(e)})
|
||||
|
||||
def _start_live_update_thread(self):
|
||||
"""Start background thread for live updates"""
|
||||
import threading
|
||||
|
||||
def live_update_worker():
|
||||
"""Background worker for live updates"""
|
||||
import time
|
||||
from flask_socketio import emit
|
||||
|
||||
logger.info("Live update thread started")
|
||||
|
||||
while True:
|
||||
try:
|
||||
# Get active rooms (symbol_timeframe combinations)
|
||||
# For now, update all subscribed clients every second
|
||||
|
||||
# Get latest chart data
|
||||
if self.data_provider:
|
||||
for symbol in ['ETH/USDT', 'BTC/USDT']: # TODO: Get from active subscriptions
|
||||
for timeframe in ['1s', '1m']:
|
||||
room = f"{symbol}_{timeframe}"
|
||||
|
||||
# Get latest candles (need last 2 to determine confirmation status)
|
||||
try:
|
||||
candles = self.data_provider.get_ohlcv(symbol, timeframe, limit=2)
|
||||
if candles and len(candles) > 0:
|
||||
latest_candle = candles[-1]
|
||||
|
||||
# Determine if candle is confirmed (closed)
|
||||
# For 1s: candle is confirmed when next candle starts (2s delay)
|
||||
# For others: candle is confirmed when next candle starts
|
||||
is_confirmed = len(candles) >= 2 # If we have 2 candles, the first is confirmed
|
||||
|
||||
# Format timestamp consistently
|
||||
timestamp = latest_candle.get('timestamp')
|
||||
if isinstance(timestamp, str):
|
||||
# Already formatted
|
||||
formatted_timestamp = timestamp
|
||||
else:
|
||||
# Convert to ISO string then format
|
||||
from datetime import datetime
|
||||
if isinstance(timestamp, datetime):
|
||||
formatted_timestamp = timestamp.strftime('%Y-%m-%d %H:%M:%S')
|
||||
else:
|
||||
formatted_timestamp = str(timestamp)
|
||||
|
||||
# Emit chart update with full candle data
|
||||
self.socketio.emit('chart_update', {
|
||||
'symbol': symbol,
|
||||
'timeframe': timeframe,
|
||||
'candle': {
|
||||
'timestamp': formatted_timestamp,
|
||||
'open': float(latest_candle.get('open', 0)),
|
||||
'high': float(latest_candle.get('high', 0)),
|
||||
'low': float(latest_candle.get('low', 0)),
|
||||
'close': float(latest_candle.get('close', 0)),
|
||||
'volume': float(latest_candle.get('volume', 0))
|
||||
},
|
||||
'is_confirmed': is_confirmed, # True if this candle is closed/confirmed
|
||||
'has_previous': len(candles) >= 2 # True if we have previous candle for validation
|
||||
}, room=room)
|
||||
|
||||
# Get prediction if model is loaded
|
||||
if self.orchestrator and hasattr(self.orchestrator, 'primary_transformer'):
|
||||
prediction = self._get_live_prediction(symbol, timeframe, 1)
|
||||
if prediction:
|
||||
self.socketio.emit('prediction_update', prediction, room=room)
|
||||
|
||||
except Exception as e:
|
||||
logger.debug(f"Error getting data for {symbol} {timeframe}: {e}")
|
||||
|
||||
time.sleep(1) # Update every second
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in live update thread: {e}")
|
||||
time.sleep(5) # Wait longer on error
|
||||
|
||||
self._live_update_thread = threading.Thread(target=live_update_worker, daemon=True)
|
||||
self._live_update_thread.start()
|
||||
# WebSocket code removed - using HTTP polling only
|
||||
|
||||
def _get_live_transformer_prediction(self, symbol: str = 'ETH/USDT'):
|
||||
"""
|
||||
@@ -3423,12 +3248,10 @@ class AnnotationDashboard:
|
||||
logger.info(f"Access locally at: http://localhost:{port}")
|
||||
logger.info(f"Access from network at: http://<your-ip>:{port}")
|
||||
|
||||
if self.has_socketio:
|
||||
logger.info("Running with WebSocket support (SocketIO)")
|
||||
self.socketio.run(self.server, host=host, port=port, debug=debug, allow_unsafe_werkzeug=True)
|
||||
else:
|
||||
logger.warning("Running without WebSocket support - install flask-socketio for live updates")
|
||||
self.server.run(host=host, port=port, debug=debug)
|
||||
# WebSocket removed - using HTTP polling only
|
||||
# Start Flask server
|
||||
self.server.run(host=host, port=port, debug=debug, use_reloader=False)
|
||||
|
||||
|
||||
|
||||
def main():
|
||||
|
||||
@@ -554,7 +554,12 @@ class ChartManager {
|
||||
};
|
||||
|
||||
const layout = {
|
||||
title: '',
|
||||
title: {
|
||||
text: `${timeframe} (Europe/Sofia Time)`,
|
||||
font: { size: 12, color: '#9ca3af' },
|
||||
xanchor: 'left',
|
||||
x: 0.01
|
||||
},
|
||||
showlegend: false,
|
||||
xaxis: {
|
||||
rangeslider: { visible: false },
|
||||
@@ -562,7 +567,13 @@ class ChartManager {
|
||||
color: '#9ca3af',
|
||||
showgrid: true,
|
||||
zeroline: false,
|
||||
fixedrange: false
|
||||
fixedrange: false,
|
||||
type: 'date',
|
||||
// NOTE: Plotly.js always displays times in browser's local timezone
|
||||
// Timestamps are stored as UTC but displayed in local time
|
||||
// This is expected behavior - users see times in their timezone
|
||||
// tickformat: '%Y-%m-%d %H:%M:%S',
|
||||
// hoverformat: '%Y-%m-%d %H:%M:%S'
|
||||
},
|
||||
yaxis: {
|
||||
title: {
|
||||
@@ -3031,38 +3042,53 @@ class ChartManager {
|
||||
if (!chart || !chart.data) return;
|
||||
|
||||
const lastIdx = chart.data.timestamps.length - 1;
|
||||
const lastTimestamp = new Date(chart.data.timestamps[lastIdx]);
|
||||
const lastTimestamp = chart.data.timestamps[lastIdx]; // Keep as ISO string
|
||||
const currentPrice = chart.data.close[lastIdx];
|
||||
|
||||
// Calculate target point
|
||||
// steepness is [0, 1], angle is in degrees
|
||||
// Project ahead based on timeframe to avoid zoom issues
|
||||
// Project ahead based on timeframe
|
||||
// For 1s: 30s ahead, 1m: 2min ahead, 1h: 30min ahead
|
||||
const projectionSeconds = timeframe === '1s' ? 30 :
|
||||
timeframe === '1m' ? 120 :
|
||||
timeframe === '1h' ? 1800 : 300;
|
||||
const targetTime = new Date(lastTimestamp.getTime() + projectionSeconds * 1000);
|
||||
|
||||
// CRITICAL FIX: Format targetTime as ISO string with 'Z' to match chart data format
|
||||
// This prevents the 2-hour timezone offset issue
|
||||
const targetTimeMs = new Date(lastTimestamp).getTime() + projectionSeconds * 1000;
|
||||
const targetTime = new Date(targetTimeMs).toISOString();
|
||||
|
||||
let targetPrice = currentPrice;
|
||||
|
||||
if (trendVector.price_delta) {
|
||||
// If model provided explicit price delta (denormalized ideally)
|
||||
// Note: backend sends price_delta as normalized value usually?
|
||||
// But trend_vector dict constructed in model usually has raw value if we didn't normalize?
|
||||
// Actually, checking model code, it returns raw tensor value.
|
||||
// If normalized, it's small. If real price, it's big.
|
||||
// Heuristic: if delta is < 1.0 and price is > 100, it's likely normalized or percentage.
|
||||
// CRITICAL FIX: Check if price_delta is normalized (< 1.0) or real price change
|
||||
if (trendVector.price_delta !== undefined && trendVector.price_delta !== null) {
|
||||
const priceDelta = parseFloat(trendVector.price_delta);
|
||||
|
||||
// Safer to use angle/steepness if delta is ambiguous, but let's try to interpret direction
|
||||
const direction = trendVector.direction === 'up' ? 1 : (trendVector.direction === 'down' ? -1 : 0);
|
||||
const steepness = trendVector.steepness || 0; // 0 to 1
|
||||
// If price_delta is very small (< 1.0), it's likely normalized - scale it
|
||||
if (Math.abs(priceDelta) < 1.0) {
|
||||
// Normalized value - treat as percentage of current price
|
||||
targetPrice = currentPrice * (1 + priceDelta);
|
||||
} else {
|
||||
// Real price delta - add directly
|
||||
targetPrice = currentPrice + priceDelta;
|
||||
}
|
||||
} else {
|
||||
// Fallback: Use direction and steepness
|
||||
const direction = trendVector.direction === 'up' ? 1 :
|
||||
(trendVector.direction === 'down' ? -1 : 0);
|
||||
const steepness = parseFloat(trendVector.steepness) || 0; // 0 to 1
|
||||
|
||||
// Estimate price change based on steepness (max 2% move in 5 mins)
|
||||
const maxChange = 0.02 * currentPrice;
|
||||
// Estimate price change based on steepness (max 1% move per projection period)
|
||||
const maxChange = 0.01 * currentPrice;
|
||||
const projectedChange = maxChange * steepness * direction;
|
||||
targetPrice = currentPrice + projectedChange;
|
||||
}
|
||||
|
||||
// Sanity check: Don't let target price go to 0 or negative
|
||||
if (targetPrice <= 0 || !isFinite(targetPrice)) {
|
||||
console.warn('Invalid target price calculated:', targetPrice, 'using current price instead');
|
||||
targetPrice = currentPrice;
|
||||
}
|
||||
|
||||
// Draw trend ray
|
||||
shapes.push({
|
||||
type: 'line',
|
||||
@@ -3081,7 +3107,7 @@ class ChartManager {
|
||||
annotations.push({
|
||||
x: targetTime,
|
||||
y: targetPrice,
|
||||
text: `Target<br>${targetPrice.toFixed(2)}`,
|
||||
text: `Target<br>$${targetPrice.toFixed(2)}`,
|
||||
showarrow: true,
|
||||
arrowhead: 2,
|
||||
ax: 0,
|
||||
|
||||
Reference in New Issue
Block a user