# COBY Portainer Deployment Guide
This guide explains how to deploy the COBY Multi-Exchange Data Aggregation System using Portainer with Git repository integration.
## Prerequisites

- Portainer CE/EE installed and running
- Docker Swarm or Docker Compose environment
- Access to the Git repository containing the COBY project
- Minimum system requirements:
  - 4 GB RAM
  - 2 CPU cores
  - 20 GB disk space
## Deployment Steps

### 1. Access Portainer

- Open your Portainer web interface
- Navigate to your environment (local Docker or Docker Swarm)
### 2. Create Stack from Git Repository

- Go to **Stacks** in the left sidebar
- Click **Add stack**
- Choose **Repository** as the build method
- Configure the repository settings:
  - **Repository URL**: `https://github.com/your-username/your-repo.git`
  - **Repository reference**: `main` (or your preferred branch)
  - **Compose path**: `COBY/docker-compose.portainer.yml`
  - **Additional files**: leave empty (all configs are embedded)
### 3. Configure Environment Variables

In the **Environment variables** section, add the following variables (optional customizations):

```env
# Database Configuration
DB_PASSWORD=your_secure_database_password
REDIS_PASSWORD=your_secure_redis_password

# API Configuration
API_PORT=8080
WS_PORT=8081

# Monitoring (if using the monitoring profile)
PROMETHEUS_PORT=9090
GRAFANA_PORT=3001
GRAFANA_PASSWORD=your_grafana_password

# Performance Tuning
MAX_CONNECTIONS_PER_EXCHANGE=5
DATA_BUFFER_SIZE=10000
BATCH_WRITE_SIZE=1000
```
### 4. Deploy the Stack

- **Stack name**: enter `coby-system` (or your preferred name)
- **Environment variables**: configure as needed (see above)
- **Access control**: set appropriate permissions
- Click **Deploy the stack**
### 5. Monitor Deployment

- Watch the deployment logs in Portainer
- Check that all services start successfully:
  - `coby-timescaledb` (database)
  - `coby-redis` (cache)
  - `coby-app` (main application)
  - `coby-dashboard` (web interface)
### 6. Verify Installation

Once deployed, verify the installation:

- **Health checks**: all services should show as "healthy" in Portainer
- **Web dashboard**: access `http://your-server:8080/` (served by your reverse proxy)
- **API endpoint**: check `http://your-server:8080/health`
- **Logs**: review the logs for any errors

**Reverse proxy configuration**: configure your reverse proxy to forward requests to the COBY app on port 8080. The application serves both the API and the web dashboard from the same port.
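As an illustrative sketch, assuming nginx is the reverse proxy — the server name, upstream addresses, and the `/ws` path are placeholders, not part of COBY's documented configuration:

```nginx
server {
    listen 80;
    server_name coby.example.com;  # placeholder hostname

    # REST API and web dashboard, both served by the COBY app on 8080
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # WebSocket API on 8081; the /ws path is an assumption
    location /ws {
        proxy_pass http://127.0.0.1:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Adjust the paths and upstream addresses to match your actual deployment topology.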
## Service Ports

The following ports are exposed:

- **8080**: REST API + web dashboard (served by FastAPI)
- **8081**: WebSocket API
- **5432**: TimescaleDB (optional external access)
- **6379**: Redis (optional external access)

**Note**: The web dashboard is served directly by the FastAPI application on port 8080, eliminating the need for a separate nginx container since you already have a reverse proxy.
## Optional Monitoring Stack

To enable Prometheus and Grafana monitoring:

1. In the stack configuration, add the `monitoring` profile
2. Additional ports will be exposed:
   - **9090**: Prometheus
   - **3001**: Grafana
   - **9100**: Node Exporter
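In Compose files, optional services are typically gated behind profiles. A sketch of how the monitoring services might be declared (the service names and images are assumptions, not taken from COBY's actual stack file):

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    profiles: ["monitoring"]   # only started when the monitoring profile is active

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"
    profiles: ["monitoring"]
```

Services without a `profiles` key always start; profiled services start only when their profile is explicitly enabled.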
## Configuration Options

### Resource Limits

The stack includes resource limits for each service:

- **COBY App**: 2 GB RAM, 2 CPU cores (includes the web dashboard)
- **TimescaleDB**: 1 GB RAM, 1 CPU core
- **Redis**: 512 MB RAM, 0.5 CPU cores
### Persistent Data

The following volumes are created for persistent data:

- `timescale_data`: database storage
- `redis_data`: Redis persistence
- `coby_logs`: application logs
- `coby_data`: application data
- `prometheus_data`: metrics storage (if monitoring is enabled)
- `grafana_data`: Grafana dashboards (if monitoring is enabled)
### Network Configuration

- **Network**: `coby-network` (172.20.0.0/16)
- **Internal communication**: all services communicate via the Docker network
- **External access**: only the specified ports are exposed
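For reference, a network with that subnet could be declared in a Compose file as follows (a sketch; the actual stack file may differ):

```yaml
networks:
  coby-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16   # address range for inter-service traffic
```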
## Troubleshooting

### Common Issues

1. **Services not starting**:
   - Check resource availability
   - Review service logs in Portainer
   - Verify environment variables
2. **Database connection issues**:
   - Ensure TimescaleDB is healthy
   - Check the database credentials
   - Verify network connectivity
3. **Web dashboard not accessible**:
   - Confirm port 8080 is reachable through your reverse proxy
   - Check that `coby-app` is running and healthy
   - Verify static files are being served at the root path
### Log Access

Access logs through Portainer:

- Go to **Containers**
- Click on the container name
- Select the **Logs** tab
- Use filters to find specific issues
### Health Checks

Monitor service health:

- **Portainer dashboard**: shows health status
- **API health**: `GET /health` endpoint
- **Database**: `pg_isready` command
- **Redis**: `redis-cli ping` command
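A sketch of how the database and Redis checks could be wired up as Docker healthchecks in a Compose file — the intervals, retry counts, and the `postgres` user are illustrative assumptions:

```yaml
services:
  timescaledb:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]  # exits non-zero until the DB accepts connections
      interval: 30s
      timeout: 5s
      retries: 3

  redis:
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]  # healthy once the server replies PONG
      interval: 30s
      timeout: 5s
      retries: 3
```

Portainer reads these healthcheck results to display the "healthy" status mentioned above.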
## Scaling and Updates

### Horizontal Scaling

To scale the main application:

- Go to the stack in Portainer
- Edit the stack
- Modify the `coby-app` service replicas
- Redeploy the stack
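In Swarm mode, the replica count lives under the service's `deploy` key; a minimal sketch:

```yaml
services:
  coby-app:
    deploy:
      replicas: 3   # number of coby-app instances to run
```

Note that `deploy.replicas` only takes effect in Docker Swarm mode; plain Compose deployments scale differently.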
### Updates

To update the system:

- **Git-based updates**: Portainer pulls the latest changes from the repository
- **Manual updates**: edit the stack configuration
- **Rolling updates**: use Docker Swarm mode for zero-downtime updates
### Backup

Regular backups should include:

- **Database**: the TimescaleDB data volume
- **Configuration**: the stack configuration in Portainer
- **Logs**: application logs for troubleshooting
## Security Considerations

- Change the default passwords for the database and Redis
- Use environment variables for sensitive data
- Limit network exposure to the required ports only
- Update base images regularly
- Monitor logs for security events
## Performance Tuning

### Database Optimization

- Adjust `shared_buffers` in TimescaleDB
- Configure connection pooling
- Monitor query performance

### Application Tuning

- Adjust `DATA_BUFFER_SIZE` for throughput
- Configure `BATCH_WRITE_SIZE` for database writes
- Monitor memory usage and adjust limits
### Network Optimization

- Use Docker overlay networks for multi-host deployments
- Configure load balancing for high availability
- Monitor network latency between services
## Support

For issues and support:

- Check the application logs
- Review the Portainer container status
- Consult the main project documentation
- Submit issues to the project repository
## Example Stack Configuration

Here's a complete example of environment variables for production:

```env
# Production Configuration
ENVIRONMENT=production
DEBUG=false
LOG_LEVEL=INFO

# Security
DB_PASSWORD=prod_secure_db_pass_2024
REDIS_PASSWORD=prod_secure_redis_pass_2024

# Performance
MAX_CONNECTIONS_PER_EXCHANGE=10
DATA_BUFFER_SIZE=20000
BATCH_WRITE_SIZE=2000

# Monitoring
PROMETHEUS_PORT=9090
GRAFANA_PORT=3001
GRAFANA_PASSWORD=secure_grafana_pass

# Exchange Configuration
EXCHANGES=binance,coinbase,kraken,bybit,okx,huobi,kucoin,gateio,bitfinex,mexc
SYMBOLS=BTCUSDT,ETHUSDT,ADAUSDT,DOTUSDT
```
This configuration provides a robust production deployment suitable for high-throughput cryptocurrency data aggregation.