# COBY Portainer Deployment Guide

This guide explains how to deploy the COBY Multi-Exchange Data Aggregation System using Portainer with Git repository integration.

## Prerequisites

- Portainer CE/EE installed and running
- Docker Swarm or Docker Compose environment
- Access to the Git repository containing the COBY project
- Minimum system requirements:
  - 4GB RAM
  - 2 CPU cores
  - 20GB disk space
## Deployment Steps

### 1. Access Portainer

1. Open your Portainer web interface
2. Navigate to your environment (local Docker or Docker Swarm)

### 2. Create Stack from Git Repository

1. Go to **Stacks** in the left sidebar
2. Click **Add stack**
3. Choose **Repository** as the build method
4. Configure the repository settings:

**Repository Configuration:**

- **Repository URL**: `https://github.com/your-username/your-repo.git`
- **Repository reference**: `main` (or your preferred branch)
- **Compose path**: `COBY/docker-compose.portainer.yml`
- **Additional files**: Leave empty (all configs are embedded)
### 3. Configure Environment Variables

In the **Environment variables** section, add any of the following variables to customize the deployment (all are optional):

```bash
# Database Configuration
DB_PASSWORD=your_secure_database_password
REDIS_PASSWORD=your_secure_redis_password

# API Configuration
API_PORT=8080
WS_PORT=8081

# Monitoring (if using the monitoring profile)
PROMETHEUS_PORT=9090
GRAFANA_PORT=3001
GRAFANA_PASSWORD=your_grafana_password

# Performance Tuning
MAX_CONNECTIONS_PER_EXCHANGE=5
DATA_BUFFER_SIZE=10000
BATCH_WRITE_SIZE=1000
```
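Strong random values for the password variables can be generated before filling in the Portainer form. A minimal sketch using `openssl` on your workstation (any secure random generator works just as well):

```bash
# Generate random secrets to paste into the stack's environment variables
openssl rand -hex 24   # use as DB_PASSWORD
openssl rand -hex 24   # use as REDIS_PASSWORD
openssl rand -hex 16   # use as GRAFANA_PASSWORD
```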
### 4. Deploy the Stack

1. **Stack name**: Enter `coby-system` (or your preferred name)
2. **Environment variables**: Configure as needed (see above)
3. **Access control**: Set appropriate permissions
4. Click **Deploy the stack**

### 5. Monitor Deployment

1. Watch the deployment logs in Portainer
2. Check that all services start successfully:
   - `coby-timescaledb` (Database)
   - `coby-redis` (Cache)
   - `coby-app` (Main application)
   - `coby-dashboard` (Web interface)
### 6. Verify Installation

Once deployed, verify the installation:

1. **Health Checks**: All services should show as "healthy" in Portainer
2. **Web Dashboard**: Access `http://your-server:8080/` (or the equivalent URL behind your reverse proxy)
3. **API Endpoint**: Check `http://your-server:8080/health`
4. **Logs**: Review the logs for any errors

**Reverse Proxy Configuration**: Configure your reverse proxy to forward requests to the COBY app on port 8080. The application serves both the API and the web dashboard from the same port.
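A quick way to confirm the dashboard and the health endpoint respond is a couple of `curl` calls from the Docker host; a minimal sketch, assuming the default `API_PORT=8080`:

```bash
# Dashboard should return HTTP 200 with HTML
curl -sS -o /dev/null -w "dashboard: %{http_code}\n" http://localhost:8080/

# Health endpoint should return HTTP 200 with a status payload
curl -sS http://localhost:8080/health
```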
## Service Ports

The following ports are exposed:

- **8080**: REST API + web dashboard (served by FastAPI)
- **8081**: WebSocket API
- **5432**: TimescaleDB (optional external access)
- **6379**: Redis (optional external access)

**Note**: The web dashboard is served directly by the FastAPI application on port 8080, so no separate nginx container is needed when a reverse proxy is already in place.
## Optional Monitoring Stack

To enable Prometheus and Grafana monitoring:

1. In the stack configuration, add the profile `monitoring` (see the sketch after this list)
2. Additional ports will be exposed:
   - **9090**: Prometheus
   - **3001**: Grafana
   - **9100**: Node Exporter
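How the profile is activated depends on your setup; with plain Docker Compose on the host, the standard `--profile` flag or the `COMPOSE_PROFILES` variable does it. A sketch, assuming the compose file at `COBY/docker-compose.portainer.yml` defines the `monitoring` profile:

```bash
# Option 1: pass the profile explicitly
docker compose -f COBY/docker-compose.portainer.yml --profile monitoring up -d

# Option 2: set it via the environment (can also be added as a stack variable)
COMPOSE_PROFILES=monitoring docker compose -f COBY/docker-compose.portainer.yml up -d
```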
## Configuration Options

### Resource Limits

The stack includes resource limits for each service:

- **COBY App**: 2GB RAM, 2 CPU cores (includes web dashboard)
- **TimescaleDB**: 1GB RAM, 1 CPU core
- **Redis**: 512MB RAM, 0.5 CPU cores
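To see how close each container runs to these limits, `docker stats` on the host gives a quick view (container names assume the defaults used above):

```bash
# One-shot snapshot of CPU and memory usage per COBY container
docker stats --no-stream coby-app coby-timescaledb coby-redis
```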
### Persistent Data

The following volumes are created for persistent data:

- `timescale_data`: Database storage
- `redis_data`: Redis persistence
- `coby_logs`: Application logs
- `coby_data`: Application data
- `prometheus_data`: Metrics storage (if monitoring enabled)
- `grafana_data`: Grafana dashboards (if monitoring enabled)
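The volumes can be listed and located on disk from the host. Note that Portainer/Compose usually prefixes volume names with the stack name, so the exact names may differ from the list above:

```bash
# List the stack's volumes (names are typically prefixed, e.g. coby-system_timescale_data)
docker volume ls --filter name=coby

# Show where a volume's data lives on the host (adjust the name to what the listing shows)
docker volume inspect --format '{{ .Mountpoint }}' coby-system_timescale_data
```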
### Network Configuration

- **Network**: `coby-network` (172.20.0.0/16)
- **Internal communication**: All services communicate over the Docker network
- **External access**: Only the specified ports are exposed
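The network and its subnet can be confirmed from the host; depending on how the compose file declares it, the network may also carry the stack-name prefix:

```bash
# Verify the network exists and uses the expected subnet
docker network inspect coby-network --format '{{ range .IPAM.Config }}{{ .Subnet }}{{ end }}'
```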
## Troubleshooting

### Common Issues

1. **Services not starting**:
   - Check resource availability
   - Review service logs in Portainer
   - Verify environment variables

2. **Database connection issues**:
   - Ensure TimescaleDB is healthy
   - Check database credentials
   - Verify network connectivity

3. **Web dashboard not accessible**:
   - Confirm port 8080 is accessible through your reverse proxy
   - Check that `coby-app` is running and healthy
   - Verify static files are being served at the root path
### Log Access

Access logs through Portainer:

1. Go to **Containers**
2. Click on the container name
3. Select the **Logs** tab
4. Use filters to find specific issues
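The same logs are available from the Docker CLI on the host, which can be handy when the Portainer UI is slow or unavailable; container names assume the defaults above:

```bash
# Follow the application logs and keep only error lines
docker logs -f coby-app 2>&1 | grep -i error

# Show the last 15 minutes of database logs
docker logs --since 15m coby-timescaledb
```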
### Health Checks

Monitor service health:

1. **Portainer Dashboard**: Shows health status
2. **API Health**: `GET /health` endpoint
3. **Database**: `pg_isready` command
4. **Redis**: `redis-cli ping` command
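These checks can be run directly against the containers from the host. A minimal sketch; the `postgres` user below is a placeholder, so adjust it to your stack's settings:

```bash
# API: expect HTTP 200
curl -fsS http://localhost:8080/health

# TimescaleDB: expect "accepting connections"
docker exec coby-timescaledb pg_isready -U postgres

# Redis: expect "PONG" (add -a "$REDIS_PASSWORD" if a password is set)
docker exec coby-redis redis-cli ping
```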
## Scaling and Updates

### Horizontal Scaling

To scale the main application:

1. Go to the stack in Portainer
2. Edit the stack
3. Modify the `coby-app` service replicas
4. Redeploy the stack
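On a Docker Swarm environment the same thing can be done from the CLI; a sketch, assuming the stack was deployed as `coby-system` (Swarm prefixes service names with the stack name):

```bash
# Check the current replica count
docker service ls --filter name=coby-system_coby-app

# Scale the main application to three replicas
docker service scale coby-system_coby-app=3
```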
### Updates

To update the system:

1. **Git-based updates**: Redeploy the stack and Portainer pulls the latest changes from the repository
2. **Manual updates**: Edit the stack configuration in Portainer
3. **Rolling updates**: Use Docker Swarm mode for zero-downtime updates
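Outside Portainer, an equivalent manual update with plain Docker Compose looks roughly like this; a sketch, assuming the repository is checked out on the host:

```bash
# Pull the latest compose file and configs from Git
git pull origin main

# Pull newer images and recreate only the containers that changed
docker compose -f COBY/docker-compose.portainer.yml pull
docker compose -f COBY/docker-compose.portainer.yml up -d
```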
### Backup

Regular backups should include:

- **Database**: TimescaleDB data volume
- **Configuration**: Stack configuration in Portainer
- **Logs**: Application logs for troubleshooting
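A simple host-side backup can combine a logical database dump with a volume archive. A sketch, assuming the default container name and placeholder database user/name (`postgres`/`coby`) and volume name, all of which you should replace with your actual values:

```bash
# Logical dump of the TimescaleDB database (placeholder user/database names)
docker exec coby-timescaledb pg_dump -U postgres -d coby \
  | gzip > coby_db_$(date +%F).sql.gz

# Archive the raw data volume (volume name may carry the stack-name prefix)
docker run --rm -v coby-system_timescale_data:/data:ro -v "$PWD":/backup alpine \
  tar czf /backup/timescale_data_$(date +%F).tar.gz -C /data .
```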
## Security Considerations

1. **Change default passwords** for the database and Redis
2. **Use environment variables** for sensitive data
3. **Limit network exposure** to required ports only
4. **Update base images** regularly
5. **Monitor logs** for security events
## Performance Tuning

### Database Optimization

- Adjust `shared_buffers` in TimescaleDB
- Configure connection pooling
- Monitor query performance
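For example, `shared_buffers` can be inspected and raised through `ALTER SYSTEM`. A sketch with a placeholder value and the default `postgres` user; adjust both to your sizing and credentials, and note that the container must be restarted for the change to take effect:

```bash
# Check the current setting
docker exec coby-timescaledb psql -U postgres -c "SHOW shared_buffers;"

# Raise it (placeholder value; roughly 25% of the container's memory is a common rule of thumb)
docker exec coby-timescaledb psql -U postgres -c "ALTER SYSTEM SET shared_buffers = '256MB';"

# Restart so the new value is picked up
docker restart coby-timescaledb
```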
### Application Tuning

- Adjust `DATA_BUFFER_SIZE` for throughput
- Configure `BATCH_WRITE_SIZE` for database writes
- Monitor memory usage and adjust limits

### Network Optimization

- Use Docker overlay networks for multi-host deployments
- Configure load balancing for high availability
- Monitor network latency between services
## Support

For issues and support:

1. Check the application logs
2. Review Portainer container status
3. Consult the main project documentation
4. Submit issues to the project repository
## Example Stack Configuration

Here's a complete example of environment variables for production:

```bash
# Production Configuration
ENVIRONMENT=production
DEBUG=false
LOG_LEVEL=INFO

# Security
DB_PASSWORD=prod_secure_db_pass_2024
REDIS_PASSWORD=prod_secure_redis_pass_2024

# Performance
MAX_CONNECTIONS_PER_EXCHANGE=10
DATA_BUFFER_SIZE=20000
BATCH_WRITE_SIZE=2000

# Monitoring
PROMETHEUS_PORT=9090
GRAFANA_PORT=3001
GRAFANA_PASSWORD=secure_grafana_pass

# Exchange Configuration
EXCHANGES=binance,coinbase,kraken,bybit,okx,huobi,kucoin,gateio,bitfinex,mexc
SYMBOLS=BTCUSDT,ETHUSDT,ADAUSDT,DOTUSDT
```

This configuration provides a robust production deployment suitable for high-throughput cryptocurrency data aggregation.