# Quick Start Guide

## 🚀 Fastest Way to Start

### First Time Setup

```bash
cd /mnt/shared/DEV/repos/d-popov.com/gogo2
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Auto-detect and install correct PyTorch (NVIDIA/AMD/CPU)
./scripts/setup-pytorch.sh
```

### Daily Use (After Setup)

Your system is **ready to go** with GPU support!

```bash
cd /mnt/shared/DEV/repos/d-popov.com/gogo2
source venv/bin/activate
python kill_dashboard.py  # Kill any stale processes
python ANNOTATE/web/app.py
```

**Access:** http://localhost:8051

**GPU Status:**

- ✅ AMD Radeon Graphics (Strix Halo 8050S/8060S)
- ✅ ROCm 6.2 PyTorch installed
- ✅ 47GB shared memory
- ✅ 2-3x faster training vs CPU
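
Before launching, you can confirm which device training will pick up. A minimal sketch, assuming only that PyTorch may or may not be installed; note that ROCm builds of PyTorch report the GPU through the `torch.cuda` API:

```python
# Device-selection sketch: prefer the GPU when PyTorch can see one,
# otherwise fall back to CPU. ROCm builds surface the GPU via torch.cuda.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:  # PyTorch not installed yet
    device = "cpu"

print(f"Training device: {device}")
```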

## Alternative: Use Existing Docker Container

The `amd-strix-halo-llama-rocm` container is already running with ROCm support:

### Setup Container (One-Time)

```bash
# 1. Install Python in the container (Fedora-based)
docker exec amd-strix-halo-llama-rocm dnf install -y python3.12 python3-pip python3-devel git

# 2. Create symlinks
docker exec amd-strix-halo-llama-rocm bash -c "ln -sf /usr/bin/python3.12 /usr/bin/python3 && ln -sf /usr/bin/python3.12 /usr/bin/python"

# 3. Copy the project into the container
docker exec amd-strix-halo-llama-rocm mkdir -p /workspace
docker cp /mnt/shared/DEV/repos/d-popov.com/gogo2 amd-strix-halo-llama-rocm:/workspace/

# 4. Install dependencies
docker exec amd-strix-halo-llama-rocm bash -c "cd /workspace/gogo2 && pip3 install -r requirements.txt && pip3 install torch --index-url https://download.pytorch.org/whl/rocm6.2"
```

### Start ANNOTATE in Container

```bash
# Enter the container
docker exec -it amd-strix-halo-llama-rocm bash

# Inside the container:
cd /workspace/gogo2
python3 ANNOTATE/web/app.py --port 8051
```

**Access:** http://localhost:8051 (if the port is exposed)

**Helper script:** `./scripts/attach-to-rocm-container.sh` (guides you through setup)

## Development Workflows

### 1. ANNOTATE Dashboard (Manual Trading)

```bash
source venv/bin/activate
python ANNOTATE/web/app.py
```

- Create trade annotations
- Train models on annotations
- Test inference

### 2. Main Dashboard (Live Trading)

```bash
source venv/bin/activate
python main_dashboard.py --port 8050
```

- Real-time market data
- Live predictions
- Performance monitoring

### 3. Training Runner

```bash
source venv/bin/activate

# Real-time training (4 hours)
python training_runner.py --mode realtime --duration 4 --symbol ETH/USDT

# Backtest training
python training_runner.py --mode backtest --start-date 2024-01-01 --end-date 2024-12-31
```

### 4. COB Dashboard

```bash
source venv/bin/activate
python web/cob_realtime_dashboard.py
```

- Order book analysis
- Market microstructure
- Liquidity monitoring

## Troubleshooting

### Port Already in Use

```bash
# Kill stale processes
python kill_dashboard.py

# Or manually
lsof -i :8051
kill -9 <PID>
```
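
To check whether the port is free before starting a dashboard, a small stdlib sketch (port 8051 taken from the commands above):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

if port_in_use(8051):
    print("Port 8051 busy - run kill_dashboard.py first")
else:
    print("Port 8051 free")
```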

### GPU Not Working

```bash
# Check GPU (ROCm builds of PyTorch report through the CUDA API)
python -c "import torch; ok = torch.cuda.is_available(); print(f'CUDA: {ok}'); print(f'Device: {torch.cuda.get_device_name(0) if ok else None}')"

# Should show:
# CUDA: True
# Device: AMD Radeon Graphics
```

### Missing Dependencies

```bash
# Reinstall
pip install -r requirements.txt
pip install torch --index-url https://download.pytorch.org/whl/rocm6.2
```

## Documentation

- **📖 Full Setup:** [readme.md](readme.md)
- **🐳 Docker Guide:** [docs/AMD_STRIX_HALO_DOCKER.md](docs/AMD_STRIX_HALO_DOCKER.md)
- **🔌 Container Usage:** [docs/USING_EXISTING_ROCM_CONTAINER.md](docs/USING_EXISTING_ROCM_CONTAINER.md)
- **🎓 Training Guide:** [ANNOTATE/TRAINING_GUIDE.md](ANNOTATE/TRAINING_GUIDE.md)
- **🔧 Kill Processes:** [kill_dashboard.py](kill_dashboard.py)

## Common Commands

```bash
# Activate environment
source venv/bin/activate

# Check Python/GPU
python --version
python -c "import torch; print(torch.cuda.is_available())"

# Kill stale processes
python kill_dashboard.py

# List Docker containers
docker ps -a

# Attach to container
docker exec -it amd-strix-halo-llama-rocm bash

# View logs
tail -f logs/*.log
```
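
A stdlib equivalent of the last-lines part of `tail` above, in case no shell is handy — the demo file here is a throwaway; real logs live under `logs/`:

```python
import tempfile
from pathlib import Path

def tail_lines(path, n=10):
    """Return the last n lines of a text file (no follow mode)."""
    return Path(path).read_text().splitlines()[-n:]

# Demo on a throwaway file; in practice point this at logs/*.log
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
    f.write("line1\nline2\nline3\n")
print(tail_lines(f.name, 2))  # → ['line2', 'line3']
```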

## Next Steps

1. ✅ **Start ANNOTATE** - Create trading annotations
2. 📊 **Train Models** - Use your annotations to train
3. 🔴 **Live Inference** - Test predictions in real-time
4. 📈 **Monitor Performance** - Track accuracy and profits

---

**System:** AMD Strix Halo (Radeon 8050S/8060S)
**Status:** ✅ Ready for GPU-accelerated training
**Last Updated:** 2025-11-12