Technology Stack
Core Technologies
Python Ecosystem
- Python 3.x: Primary language
- PyTorch: Deep learning framework (CPU/CUDA/DirectML support)
- NumPy/Pandas: Data manipulation and analysis
- scikit-learn: ML utilities and preprocessing
Web & API
- Dash/Plotly: Interactive web dashboard
- Flask: ANNOTATE web UI
- FastAPI: COBY REST API
- WebSockets: Real-time data streaming
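A minimal sketch of the Dash side of this stack, assuming nothing about the real dashboard beyond the port used in the commands below; the layout contents are purely illustrative:
# Minimal Dash sketch -- layout is illustrative, not the project's actual dashboard code
from dash import Dash, html

app = Dash(__name__)
app.layout = html.Div([html.H1("Trading Dashboard (example)")])

if __name__ == "__main__":
    app.run(port=8051, debug=False)   # 8051 matches the main dashboard command below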
Data Storage
- DuckDB: Primary data storage (time-series optimized)
- SQLite: Metadata and predictions database
- Redis: High-performance caching (COBY)
- TimescaleDB: Optional time-series storage (COBY)
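A hedged sketch of a DuckDB time-series read; the database path, table, and column names are assumptions rather than the project's actual schema:
# DuckDB query sketch -- path/table/column names are illustrative assumptions
import duckdb

con = duckdb.connect("data/market_data.duckdb")
rows = con.execute(
    """
    SELECT timestamp, open, high, low, close, volume
    FROM ohlcv
    WHERE symbol = ? AND timeframe = ?
    ORDER BY timestamp DESC
    LIMIT 1000
    """,
    ["ETH/USDT", "1m"],
).fetchall()
con.close()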
Exchange Integration
- ccxt: Multi-exchange API library
- websocket-client: Real-time market data
- pybit: Bybit-specific integration
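For reference, a minimal ccxt usage sketch; the exchange choice here is illustrative and not tied to the project's configured exchanges:
# ccxt sketch: pull recent OHLCV from one exchange
import ccxt

exchange = ccxt.binance()
candles = exchange.fetch_ohlcv("ETH/USDT", timeframe="1m", limit=1000)
# Each row: [timestamp_ms, open, high, low, close, volume]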
Monitoring & Logging
- TensorBoard: Training visualization
- wandb: Experiment tracking
- structlog: Structured logging (COBY)
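A small sketch of how the two logging paths are typically used (metric names and fields are illustrative, not the project's conventions):
# Metrics/logging sketch: TensorBoard scalar + structlog event
from torch.utils.tensorboard import SummaryWriter
import structlog

writer = SummaryWriter(log_dir="runs/example")
writer.add_scalar("train/loss", 0.42, global_step=1)
writer.close()

log = structlog.get_logger()
log.info("training_step", step=1, loss=0.42)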
Hardware Acceleration
GPU Support
- NVIDIA CUDA (via PyTorch CUDA builds)
- AMD DirectML (via onnxruntime-directml)
- CPU fallback (default PyTorch CPU build)
Note: PyTorch is intentionally NOT listed in requirements.txt so that AMD machines do not pull in NVIDIA CUDA dependencies. Install it manually based on your hardware (see Dependencies Management below).
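A device-selection sketch under these assumptions: CUDA goes through PyTorch, DirectML goes through ONNX Runtime, and CPU is the default fallback:
# Device-selection sketch: prefer CUDA, detect DirectML, fall back to CPU
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")   # NVIDIA GPU via PyTorch CUDA build
else:
    device = torch.device("cpu")    # default PyTorch CPU build

# The DirectML path runs through ONNX Runtime rather than PyTorch:
try:
    import onnxruntime as ort
    use_directml = "DmlExecutionProvider" in ort.get_available_providers()
except ImportError:
    use_directml = False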
Project Structure
gogo2/
├── core/ # Core trading system components
├── models/ # Trained model checkpoints
├── NN/ # Neural network models and training
├── COBY/ # Multi-exchange data aggregation
├── ANNOTATE/ # Manual annotation UI
├── web/ # Main dashboard
├── utils/ # Shared utilities
├── cache/ # Data caching
├── data/ # Databases and exports
├── logs/ # System logs
└── @checkpoints/ # Model checkpoints archive
Configuration
- config.yaml: Main system configuration (exchanges, symbols, timeframes, trading params)
- models.yml: Model-specific settings (CNN, RL, training)
- .env: Sensitive credentials (API keys, database passwords)
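A config-loading sketch; the key and environment-variable names below are hypothetical placeholders, not the real schema:
# Config-loading sketch -- key names are assumptions
import os
import yaml
from dotenv import load_dotenv

load_dotenv()                                   # reads .env into the environment
with open("config.yaml") as f:
    config = yaml.safe_load(f)

symbols = config.get("symbols", ["ETH/USDT"])   # hypothetical key
api_key = os.getenv("EXCHANGE_API_KEY")         # hypothetical variable name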
Common Commands
Running the System
# Main dashboard with live training
python main_dashboard.py --port 8051
# Dashboard without training
python main_dashboard.py --port 8051 --no-training
# Clean dashboard (alternative)
python run_clean_dashboard.py
Training
# Unified training runner - realtime mode
python training_runner.py --mode realtime --duration 4
# Backtest training
python training_runner.py --mode backtest --start-date 2024-01-01 --end-date 2024-12-31
# CNN training with TensorBoard
python main_clean.py --mode cnn --symbol ETH/USDT
tensorboard --logdir=runs
# RL training
python main_clean.py --mode rl --symbol ETH/USDT
Backtesting
# 30-day backtest
python main_backtest.py --start 2024-01-01 --end 2024-01-31
# Custom symbol and window
python main_backtest.py --start 2024-01-01 --end 2024-12-31 --symbol BTC/USDT --window 48
COBY System
# Start COBY data aggregation
python COBY/main.py --debug
# Access COBY dashboard: http://localhost:8080
# COBY API: http://localhost:8080/api/...
# COBY WebSocket: ws://localhost:8081/dashboard
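A minimal client sketch for the COBY WebSocket endpoint above, using websocket-client; the payload format depends on the COBY server and is not assumed here:
# WebSocket client sketch against the COBY dashboard endpoint
from websocket import create_connection

ws = create_connection("ws://localhost:8081/dashboard")
message = ws.recv()   # raw message; format depends on the COBY server
print(message)
ws.close()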
ANNOTATE System
# Start annotation UI
python ANNOTATE/web/app.py
# Access at: http://127.0.0.1:8051
Testing
# Run tests
python -m pytest tests/
# Test specific components
python test_cnn_only.py
python test_training.py
python test_duckdb_storage.py
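As a shape for new tests, a hypothetical pytest example exercising a DuckDB round trip (the table schema is an assumption, unrelated to the real storage layer):
# pytest sketch: hypothetical DuckDB round-trip test
import duckdb

def test_duckdb_roundtrip(tmp_path):
    con = duckdb.connect(str(tmp_path / "test.duckdb"))
    con.execute("CREATE TABLE ohlcv (ts BIGINT, close DOUBLE)")
    con.execute("INSERT INTO ohlcv VALUES (1, 100.0)")
    assert con.execute("SELECT count(*) FROM ohlcv").fetchone()[0] == 1
    con.close()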
Monitoring
# TensorBoard for training metrics
tensorboard --logdir=runs
# Access at: http://localhost:6006
# Check data stream status
python check_stream.py status
python check_stream.py ohlcv
python check_stream.py cob
Development Tools
- TensorBoard: Training visualization (runs/ directory)
- wandb: Experiment tracking
- pytest: Testing framework
- Git: Version control
Dependencies Management
# Install dependencies
pip install -r requirements.txt
# Install PyTorch (choose based on hardware)
# CPU-only:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
# NVIDIA GPU (CUDA 12.1):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# AMD GPU/NPU (DirectML):
pip install onnxruntime-directml onnx transformers optimum
Performance Targets
- Memory Usage: <2GB per model, <28GB total system
- Training Speed: ~20 seconds for 50 epochs
- Inference Latency: <200ms per prediction
- Real Data Processing: 1000+ candles per timeframe
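A quick way to sanity-check the inference-latency target; the model here is a stand-in, not the project's network:
# Latency-check sketch: time a single forward pass against the <200ms target
import time
import torch

model = torch.nn.Linear(64, 3)   # stand-in model for illustration
x = torch.randn(1, 64)

start = time.perf_counter()
with torch.no_grad():
    _ = model(x)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"inference latency: {elapsed_ms:.1f} ms (target: <200 ms)")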