COBY : specs + task 1
This commit is contained in:
273 COBY/docker/README.md Normal file
@@ -0,0 +1,273 @@
# Market Data Infrastructure Docker Setup

This directory contains Docker Compose configurations and scripts for deploying TimescaleDB and Redis infrastructure for the multi-exchange data aggregation system.

## 🏗️ Architecture

- **TimescaleDB**: Time-series database optimized for high-frequency market data
- **Redis**: High-performance caching layer for real-time data
- **Network**: Isolated Docker network for secure communication

## 📋 Prerequisites

- Docker Engine 20.10+
- Docker Compose 2.0+
- At least 4GB RAM available for containers
- 50GB+ disk space for data storage

## 🚀 Quick Start

1. **Copy environment file**:
   ```bash
   cp .env.example .env
   ```

2. **Edit configuration** (update passwords and settings):
   ```bash
   nano .env
   ```

3. **Deploy infrastructure**:
   ```bash
   chmod +x deploy.sh
   ./deploy.sh
   ```

4. **Verify deployment**:
   ```bash
   docker-compose -f timescaledb-compose.yml ps
   ```

## 📁 File Structure

```
docker/
├── timescaledb-compose.yml      # Main Docker Compose configuration
├── init-scripts/                # Database initialization scripts
│   └── 01-init-timescaledb.sql
├── redis.conf                   # Redis configuration
├── .env                         # Environment variables
├── deploy.sh                    # Deployment script
├── backup.sh                    # Backup script
├── restore.sh                   # Restore script
└── README.md                    # This file
```

## ⚙️ Configuration

### Environment Variables

Key variables in `.env`:

```bash
# Database credentials
POSTGRES_PASSWORD=your_secure_password
POSTGRES_USER=market_user
POSTGRES_DB=market_data

# Redis settings
REDIS_PASSWORD=your_redis_password

# Performance tuning
POSTGRES_SHARED_BUFFERS=256MB
POSTGRES_EFFECTIVE_CACHE_SIZE=1GB
REDIS_MAXMEMORY=2gb
```
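The variables above are plain `KEY=value` pairs, so the deployment and backup scripts can load them by sourcing the file. A minimal sketch of that pattern, using a throwaway file and hypothetical values rather than your real `.env`:

```shell
# Write a throwaway .env-style file (hypothetical values, not real secrets)
cat > /tmp/demo.env <<'EOF'
POSTGRES_USER=market_user
REDIS_MAXMEMORY=2gb
EOF

set -a                 # auto-export every variable assigned while sourcing
source /tmp/demo.env
set +a

echo "$POSTGRES_USER $REDIS_MAXMEMORY"
```

With `set -a`, the loaded variables are exported and therefore visible to child processes such as `docker compose`.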

### TimescaleDB Configuration

The database is pre-configured with:

- Optimized PostgreSQL settings for time-series data
- TimescaleDB extension enabled
- Hypertables for automatic partitioning
- Retention policies (90 days for raw data)
- Continuous aggregates for common queries
- Proper indexes for query performance

### Redis Configuration

Redis is configured for:

- High-frequency data caching
- Memory optimization (2GB limit)
- Persistence with AOF and RDB
- Optimized for order book data structures

## 🔌 Connection Details

After deployment, connect using:

### TimescaleDB
```
Host: 192.168.0.10
Port: 5432
Database: market_data
Username: market_user
Password: (from .env file)
```

### Redis
```
Host: 192.168.0.10
Port: 6379
Password: (from .env file)
```
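For quick sanity checks, the TimescaleDB details above can be assembled into a standard PostgreSQL connection URI. A small sketch (host, port, user, and database are the values listed above; adjust to your environment):

```shell
DB_HOST=192.168.0.10
DB_PORT=5432
DB_USER=market_user
DB_NAME=market_data

# psql accepts this URI directly, e.g.: psql "$PG_URL"
PG_URL="postgresql://${DB_USER}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$PG_URL"
```

The password is deliberately omitted from the URI; supply it via `PGPASSWORD` or a `.pgpass` file rather than embedding it in commands.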

## 🗄️ Database Schema

The system creates the following tables:

- `order_book_snapshots`: Real-time order book data
- `trade_events`: Individual trade events
- `heatmap_data`: Aggregated price bucket data
- `ohlcv_data`: OHLCV candlestick data
- `exchange_status`: Exchange connection monitoring
- `system_metrics`: System performance metrics

## 💾 Backup & Restore

### Create Backup
```bash
chmod +x backup.sh
./backup.sh
```

Backups are stored in `./backups/` with a timestamp in the filename.

### Restore from Backup
```bash
chmod +x restore.sh
./restore.sh market_data_backup_YYYYMMDD_HHMMSS.tar.gz
```

### Automated Backups

Set up a cron job for regular backups:
```bash
# Daily backup at 2 AM
0 2 * * * /path/to/docker/backup.sh
```

## 📊 Monitoring

### Health Checks

Check service health:
```bash
# TimescaleDB
docker exec market_data_timescaledb pg_isready -U market_user -d market_data

# Redis
docker exec market_data_redis redis-cli -a your_password ping
```

### View Logs
```bash
# All services
docker-compose -f timescaledb-compose.yml logs -f

# Specific service
docker-compose -f timescaledb-compose.yml logs -f timescaledb
```

### Database Queries

Connect to TimescaleDB:
```bash
docker exec -it market_data_timescaledb psql -U market_user -d market_data
```

Example queries:
```sql
-- Check table sizes
SELECT
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) AS size
FROM pg_tables
WHERE schemaname = 'market_data';

-- Recent order book data
SELECT * FROM market_data.order_book_snapshots
ORDER BY timestamp DESC LIMIT 10;

-- Exchange status
SELECT * FROM market_data.exchange_status
ORDER BY timestamp DESC LIMIT 10;
```

## 🔧 Maintenance

### Update Images
```bash
docker-compose -f timescaledb-compose.yml pull
docker-compose -f timescaledb-compose.yml up -d
```

### Clean Up Old Data
```bash
# TimescaleDB applies retention policies automatically.
# Manual cleanup if needed:
docker exec market_data_timescaledb psql -U market_user -d market_data -c "
SELECT drop_chunks('market_data.order_book_snapshots', INTERVAL '30 days');
"
```

### Scale Resources

Edit `timescaledb-compose.yml` to adjust:

- Memory limits
- CPU limits
- Shared buffers
- Connection limits

## 🚨 Troubleshooting

### Common Issues

1. **Port conflicts**: Change the ports in the compose file if 5432/6379 are already in use
2. **Memory issues**: Reduce `shared_buffers` and Redis `maxmemory`
3. **Disk space**: Monitor `/var/lib/docker/volumes/` usage
4. **Connection refused**: Check firewall settings and container status

### Performance Tuning

1. **TimescaleDB**:
   - Adjust `shared_buffers` based on available RAM
   - Tune `effective_cache_size` to about 75% of system RAM
   - Monitor query performance with `pg_stat_statements`

2. **Redis**:
   - Adjust `maxmemory` based on data volume
   - Monitor memory usage with `INFO memory`
   - Use an appropriate eviction policy

### Recovery Procedures

1. **Container failure**: `docker-compose restart <service>`
2. **Data corruption**: Restore from the latest backup
3. **Network issues**: Check the Docker network configuration
4. **Performance degradation**: Review logs and system metrics

## 🔐 Security

- Change the default passwords in `.env`
- Use strong passwords (20+ characters)
- Restrict network access to trusted IPs
- Apply security updates regularly
- Monitor access logs
- Enable SSL/TLS for production

## 📞 Support

For issues related to:

- TimescaleDB: Check [TimescaleDB docs](https://docs.timescale.com/)
- Redis: Check [Redis docs](https://redis.io/documentation)
- Docker: Check [Docker docs](https://docs.docker.com/)

## 🔄 Updates

This infrastructure supports:

- Rolling updates with zero downtime
- Blue-green deployments
- Automated failover
- Data migration scripts
108 COBY/docker/backup.sh Normal file
@@ -0,0 +1,108 @@
#!/bin/bash

# Backup script for market data infrastructure
# Run this script regularly to backup your data

set -e

# Configuration
BACKUP_DIR="./backups"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
RETENTION_DAYS=30

# Load environment variables
if [ -f .env ]; then
    source .env
fi

echo "🗄️ Starting backup process..."

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Backup TimescaleDB
# (set -e would abort before a "$?" check, so test the command directly)
echo "📊 Backing up TimescaleDB..."
if docker exec market_data_timescaledb pg_dump \
    -U market_user \
    -d market_data \
    --verbose \
    --no-password \
    --format=custom \
    --compress=9 \
    > "$BACKUP_DIR/timescaledb_backup_$TIMESTAMP.dump"; then
    echo "✅ TimescaleDB backup completed: timescaledb_backup_$TIMESTAMP.dump"
else
    echo "❌ TimescaleDB backup failed"
    exit 1
fi

# Backup Redis: trigger a background save, then copy the resulting RDB file
echo "📦 Backing up Redis..."
docker exec market_data_redis redis-cli -a "$REDIS_PASSWORD" BGSAVE

# Wait for the background save to complete
sleep 5

# Copy the Redis dump from the container (BGSAVE writes to dump.rdb)
if docker cp market_data_redis:/data/dump.rdb "$BACKUP_DIR/redis_backup_$TIMESTAMP.rdb"; then
    echo "✅ Redis backup completed: redis_backup_$TIMESTAMP.rdb"
else
    echo "❌ Redis backup failed"
    exit 1
fi

# Create backup metadata
cat > "$BACKUP_DIR/backup_$TIMESTAMP.info" << EOF
Backup Information
==================
Timestamp: $TIMESTAMP
Date: $(date)
TimescaleDB Backup: timescaledb_backup_$TIMESTAMP.dump
Redis Backup: redis_backup_$TIMESTAMP.rdb

Container Versions:
TimescaleDB: $(docker exec market_data_timescaledb psql -U market_user -d market_data -t -c "SELECT version();")
Redis: $(docker exec market_data_redis redis-cli -a "$REDIS_PASSWORD" INFO server | grep redis_version)

Database Size:
$(docker exec market_data_timescaledb psql -U market_user -d market_data -c "\l+")
EOF

# Compress backups
echo "🗜️ Compressing backups..."
tar -czf "$BACKUP_DIR/market_data_backup_$TIMESTAMP.tar.gz" \
    -C "$BACKUP_DIR" \
    "timescaledb_backup_$TIMESTAMP.dump" \
    "redis_backup_$TIMESTAMP.rdb" \
    "backup_$TIMESTAMP.info"

# Remove individual files after compression
rm "$BACKUP_DIR/timescaledb_backup_$TIMESTAMP.dump"
rm "$BACKUP_DIR/redis_backup_$TIMESTAMP.rdb"
rm "$BACKUP_DIR/backup_$TIMESTAMP.info"

echo "✅ Compressed backup created: market_data_backup_$TIMESTAMP.tar.gz"

# Clean up old backups
echo "🧹 Cleaning up old backups (older than $RETENTION_DAYS days)..."
find "$BACKUP_DIR" -name "market_data_backup_*.tar.gz" -mtime +$RETENTION_DAYS -delete

# Display backup information
BACKUP_SIZE=$(du -h "$BACKUP_DIR/market_data_backup_$TIMESTAMP.tar.gz" | cut -f1)
echo ""
echo "📋 Backup Summary:"
echo "   File: market_data_backup_$TIMESTAMP.tar.gz"
echo "   Size: $BACKUP_SIZE"
echo "   Location: $BACKUP_DIR"
echo ""
echo "🔄 To restore from this backup:"
echo "   ./restore.sh market_data_backup_$TIMESTAMP.tar.gz"
echo ""
echo "✅ Backup process completed successfully!"
112 COBY/docker/deploy.sh Normal file
@@ -0,0 +1,112 @@
#!/bin/bash

# Deployment script for market data infrastructure
# Run this on your Docker host at 192.168.0.10

set -e

echo "🚀 Deploying Market Data Infrastructure..."

# Check if Docker and Docker Compose are available
if ! command -v docker &> /dev/null; then
    echo "❌ Docker is not installed or not in PATH"
    exit 1
fi

if ! command -v docker-compose &> /dev/null && ! docker compose version &> /dev/null; then
    echo "❌ Docker Compose is not installed or not in PATH"
    exit 1
fi

# Set Docker Compose command
if docker compose version &> /dev/null; then
    DOCKER_COMPOSE="docker compose"
else
    DOCKER_COMPOSE="docker-compose"
fi

# Create necessary directories
echo "📁 Creating directories..."
mkdir -p ./data/timescale
mkdir -p ./data/redis
mkdir -p ./logs
mkdir -p ./backups

# Set proper permissions
echo "🔐 Setting permissions..."
chmod 755 ./data/timescale
chmod 755 ./data/redis
chmod 755 ./logs
chmod 755 ./backups

# Copy environment file if it doesn't exist
if [ ! -f .env ]; then
    echo "📋 Creating .env file..."
    cp .env.example .env
    echo "⚠️ Please edit .env file with your specific configuration"
    echo "⚠️ Default passwords are set - change them for production!"
fi

# Load environment variables (needed for the Redis health check below)
source .env

# Pull latest images
echo "📥 Pulling Docker images..."
$DOCKER_COMPOSE -f timescaledb-compose.yml pull

# Stop existing containers if running
echo "🛑 Stopping existing containers..."
$DOCKER_COMPOSE -f timescaledb-compose.yml down

# Start the services
echo "🏃 Starting services..."
$DOCKER_COMPOSE -f timescaledb-compose.yml up -d

# Wait for services to be ready
echo "⏳ Waiting for services to be ready..."
sleep 30

# Check service health
echo "🏥 Checking service health..."

# Check TimescaleDB
if docker exec market_data_timescaledb pg_isready -U market_user -d market_data; then
    echo "✅ TimescaleDB is ready"
else
    echo "❌ TimescaleDB is not ready"
    exit 1
fi

# Check Redis (password comes from .env)
if docker exec market_data_redis redis-cli -a "$REDIS_PASSWORD" ping | grep -q PONG; then
    echo "✅ Redis is ready"
else
    echo "❌ Redis is not ready"
    exit 1
fi

# Display connection information
echo ""
echo "🎉 Deployment completed successfully!"
echo ""
echo "📊 Connection Information:"
echo "   TimescaleDB:"
echo "     Host: 192.168.0.10"
echo "     Port: 5432"
echo "     Database: market_data"
echo "     Username: market_user"
echo "     Password: (check .env file)"
echo ""
echo "   Redis:"
echo "     Host: 192.168.0.10"
echo "     Port: 6379"
echo "     Password: (check .env file)"
echo ""
echo "📝 Next steps:"
echo "   1. Update your application configuration to use these connection details"
echo "   2. Test the connection from your application"
echo "   3. Set up monitoring and alerting"
echo "   4. Configure backup schedules"
echo ""
echo "🔍 To view logs:"
echo "   docker-compose -f timescaledb-compose.yml logs -f"
echo ""
echo "🛑 To stop services:"
echo "   docker-compose -f timescaledb-compose.yml down"
214 COBY/docker/init-scripts/01-init-timescaledb.sql Normal file
@@ -0,0 +1,214 @@
-- Initialize TimescaleDB extension and create market data schema
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Create database schema for market data
CREATE SCHEMA IF NOT EXISTS market_data;

-- Set search path
SET search_path TO market_data, public;

-- Order book snapshots table
CREATE TABLE IF NOT EXISTS order_book_snapshots (
    id BIGSERIAL,
    symbol VARCHAR(20) NOT NULL,
    exchange VARCHAR(20) NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL,
    bids JSONB NOT NULL,
    asks JSONB NOT NULL,
    sequence_id BIGINT,
    mid_price DECIMAL(20,8),
    spread DECIMAL(20,8),
    bid_volume DECIMAL(30,8),
    ask_volume DECIMAL(30,8),
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (timestamp, symbol, exchange)
);

-- Convert to hypertable
SELECT create_hypertable('order_book_snapshots', 'timestamp', if_not_exists => TRUE);

-- Create indexes for better query performance
CREATE INDEX IF NOT EXISTS idx_order_book_symbol_exchange ON order_book_snapshots (symbol, exchange, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_order_book_timestamp ON order_book_snapshots (timestamp DESC);

-- Trade events table
CREATE TABLE IF NOT EXISTS trade_events (
    id BIGSERIAL,
    symbol VARCHAR(20) NOT NULL,
    exchange VARCHAR(20) NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL,
    price DECIMAL(20,8) NOT NULL,
    size DECIMAL(30,8) NOT NULL,
    side VARCHAR(4) NOT NULL,
    trade_id VARCHAR(100) NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (timestamp, symbol, exchange, trade_id)
);

-- Convert to hypertable
SELECT create_hypertable('trade_events', 'timestamp', if_not_exists => TRUE);

-- Create indexes for trade events
CREATE INDEX IF NOT EXISTS idx_trade_events_symbol_exchange ON trade_events (symbol, exchange, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_trade_events_timestamp ON trade_events (timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_trade_events_price ON trade_events (symbol, price, timestamp DESC);

-- Aggregated heatmap data table
CREATE TABLE IF NOT EXISTS heatmap_data (
    symbol VARCHAR(20) NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL,
    bucket_size DECIMAL(10,2) NOT NULL,
    price_bucket DECIMAL(20,8) NOT NULL,
    volume DECIMAL(30,8) NOT NULL,
    side VARCHAR(3) NOT NULL,
    exchange_count INTEGER NOT NULL,
    exchanges JSONB,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (timestamp, symbol, bucket_size, price_bucket, side)
);

-- Convert to hypertable
SELECT create_hypertable('heatmap_data', 'timestamp', if_not_exists => TRUE);

-- Create indexes for heatmap data
CREATE INDEX IF NOT EXISTS idx_heatmap_symbol_bucket ON heatmap_data (symbol, bucket_size, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_heatmap_timestamp ON heatmap_data (timestamp DESC);

-- OHLCV data table
CREATE TABLE IF NOT EXISTS ohlcv_data (
    symbol VARCHAR(20) NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL,
    timeframe VARCHAR(10) NOT NULL,
    open_price DECIMAL(20,8) NOT NULL,
    high_price DECIMAL(20,8) NOT NULL,
    low_price DECIMAL(20,8) NOT NULL,
    close_price DECIMAL(20,8) NOT NULL,
    volume DECIMAL(30,8) NOT NULL,
    trade_count INTEGER,
    vwap DECIMAL(20,8),
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (timestamp, symbol, timeframe)
);

-- Convert to hypertable
SELECT create_hypertable('ohlcv_data', 'timestamp', if_not_exists => TRUE);

-- Create indexes for OHLCV data
CREATE INDEX IF NOT EXISTS idx_ohlcv_symbol_timeframe ON ohlcv_data (symbol, timeframe, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_ohlcv_timestamp ON ohlcv_data (timestamp DESC);

-- Exchange status tracking table
CREATE TABLE IF NOT EXISTS exchange_status (
    exchange VARCHAR(20) NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL,
    status VARCHAR(20) NOT NULL, -- 'connected', 'disconnected', 'error'
    last_message_time TIMESTAMPTZ,
    error_message TEXT,
    connection_count INTEGER DEFAULT 0,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (timestamp, exchange)
);

-- Convert to hypertable
SELECT create_hypertable('exchange_status', 'timestamp', if_not_exists => TRUE);

-- Create indexes for exchange status
CREATE INDEX IF NOT EXISTS idx_exchange_status_exchange ON exchange_status (exchange, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_exchange_status_timestamp ON exchange_status (timestamp DESC);

-- System metrics table for monitoring
CREATE TABLE IF NOT EXISTS system_metrics (
    metric_name VARCHAR(50) NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL,
    value DECIMAL(20,8) NOT NULL,
    labels JSONB,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (timestamp, metric_name)
);

-- Convert to hypertable
SELECT create_hypertable('system_metrics', 'timestamp', if_not_exists => TRUE);

-- Create indexes for system metrics
CREATE INDEX IF NOT EXISTS idx_system_metrics_name ON system_metrics (metric_name, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_system_metrics_timestamp ON system_metrics (timestamp DESC);

-- Create retention policies (keep raw data for 90 days by default)
SELECT add_retention_policy('order_book_snapshots', INTERVAL '90 days', if_not_exists => TRUE);
SELECT add_retention_policy('trade_events', INTERVAL '90 days', if_not_exists => TRUE);
SELECT add_retention_policy('heatmap_data', INTERVAL '90 days', if_not_exists => TRUE);
SELECT add_retention_policy('ohlcv_data', INTERVAL '365 days', if_not_exists => TRUE);
SELECT add_retention_policy('exchange_status', INTERVAL '30 days', if_not_exists => TRUE);
SELECT add_retention_policy('system_metrics', INTERVAL '30 days', if_not_exists => TRUE);

-- Create continuous aggregates for common queries
CREATE MATERIALIZED VIEW IF NOT EXISTS hourly_ohlcv
WITH (timescaledb.continuous) AS
SELECT
    symbol,
    exchange,
    time_bucket('1 hour', timestamp) AS hour,
    first(price, timestamp) AS open_price,
    max(price) AS high_price,
    min(price) AS low_price,
    last(price, timestamp) AS close_price,
    sum(size) AS volume,
    count(*) AS trade_count,
    sum(price * size) / NULLIF(sum(size), 0) AS vwap  -- volume-weighted average price
FROM trade_events
GROUP BY symbol, exchange, hour
WITH NO DATA;

-- Add refresh policy for continuous aggregate
SELECT add_continuous_aggregate_policy('hourly_ohlcv',
    start_offset => INTERVAL '3 hours',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour',
    if_not_exists => TRUE);

-- Create view for latest order book data
CREATE OR REPLACE VIEW latest_order_books AS
SELECT DISTINCT ON (symbol, exchange)
    symbol,
    exchange,
    timestamp,
    bids,
    asks,
    mid_price,
    spread,
    bid_volume,
    ask_volume
FROM order_book_snapshots
ORDER BY symbol, exchange, timestamp DESC;

-- Create view for latest heatmap data
CREATE OR REPLACE VIEW latest_heatmaps AS
SELECT DISTINCT ON (symbol, bucket_size, price_bucket, side)
    symbol,
    bucket_size,
    price_bucket,
    side,
    timestamp,
    volume,
    exchange_count,
    exchanges
FROM heatmap_data
ORDER BY symbol, bucket_size, price_bucket, side, timestamp DESC;

-- Grant permissions to market_user
GRANT ALL PRIVILEGES ON SCHEMA market_data TO market_user;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA market_data TO market_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA market_data TO market_user;
GRANT ALL PRIVILEGES ON ALL FUNCTIONS IN SCHEMA market_data TO market_user;

-- Set default privileges for future objects
ALTER DEFAULT PRIVILEGES IN SCHEMA market_data GRANT ALL ON TABLES TO market_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA market_data GRANT ALL ON SEQUENCES TO market_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA market_data GRANT ALL ON FUNCTIONS TO market_user;

-- Create a read-only database user for dashboards
-- (PostgreSQL has no CREATE USER IF NOT EXISTS, so guard with a DO block)
DO $$
BEGIN
    IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = 'dashboard_user') THEN
        CREATE ROLE dashboard_user WITH LOGIN PASSWORD 'dashboard_read_2024';
    END IF;
END
$$;
GRANT CONNECT ON DATABASE market_data TO dashboard_user;
GRANT USAGE ON SCHEMA market_data TO dashboard_user;
GRANT SELECT ON ALL TABLES IN SCHEMA market_data TO dashboard_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA market_data GRANT SELECT ON TABLES TO dashboard_user;
37 COBY/docker/manual-init.sh Normal file
@@ -0,0 +1,37 @@
#!/bin/bash

# Manual database initialization script
# Run this to initialize the TimescaleDB schema

echo "🔧 Initializing TimescaleDB schema..."

# Check if we can connect to the database
echo "📡 Testing connection to TimescaleDB..."

# You can run this command on your Docker host (192.168.0.10)
# Replace with your actual password from the .env file
PGPASSWORD="market_data_secure_pass_2024" psql -h 192.168.0.10 -p 5432 -U market_user -d market_data -c "SELECT version();"

if [ $? -eq 0 ]; then
    echo "✅ Connection successful!"

    echo "🏗️ Creating database schema..."

    # Execute the initialization script
    PGPASSWORD="market_data_secure_pass_2024" psql -h 192.168.0.10 -p 5432 -U market_user -d market_data -f ../docker/init-scripts/01-init-timescaledb.sql

    if [ $? -eq 0 ]; then
        echo "✅ Database schema initialized successfully!"

        echo "📊 Verifying tables..."
        PGPASSWORD="market_data_secure_pass_2024" psql -h 192.168.0.10 -p 5432 -U market_user -d market_data -c "\dt market_data.*"
    else
        echo "❌ Schema initialization failed"
        exit 1
    fi
else
    echo "❌ Cannot connect to database"
    exit 1
fi
131 COBY/docker/redis.conf Normal file
@@ -0,0 +1,131 @@
# Redis configuration for market data caching
# Optimized for high-frequency trading data

# Network settings
bind 0.0.0.0
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300

# General settings
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16

# Snapshotting (persistence)
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /data

# Replication
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-ping-replica-period 10
repl-timeout 60
repl-disable-tcp-nodelay no
repl-backlog-size 1mb
repl-backlog-ttl 3600

# Security
requirepass market_data_redis_2024

# Memory management
maxmemory 2gb
maxmemory-policy allkeys-lru
maxmemory-samples 5

# Lazy freeing
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no

# Threaded I/O
io-threads 4
io-threads-do-reads yes

# Append only file (AOF)
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes

# Lua scripting
lua-time-limit 5000

# Slow log
slowlog-log-slower-than 10000
slowlog-max-len 128

# Latency monitor
latency-monitor-threshold 100

# Event notification
notify-keyspace-events ""

# Hash settings (optimized for order book data)
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# List settings
list-max-ziplist-size -2
list-compress-depth 0

# Set settings
set-max-intset-entries 512

# Sorted set settings
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog settings
hll-sparse-max-bytes 3000

# Streams settings
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing
activerehashing yes

# Client settings
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
client-query-buffer-limit 1gb

# Protocol settings
proto-max-bulk-len 512mb

# Frequency settings
hz 10

# Dynamic HZ
dynamic-hz yes

# AOF rewrite settings
aof-rewrite-incremental-fsync yes

# RDB settings
rdb-save-incremental-fsync yes

# Jemalloc settings
jemalloc-bg-thread yes

# TLS settings (disabled for internal network)
tls-port 0
188 COBY/docker/restore.sh Normal file
@@ -0,0 +1,188 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Restore script for market data infrastructure
|
||||
# Usage: ./restore.sh <backup_file.tar.gz>
|
||||
|
||||
set -e
|
||||
|
||||
# Check if backup file is provided
|
||||
if [ $# -eq 0 ]; then
|
||||
echo "❌ Usage: $0 <backup_file.tar.gz>"
|
||||
echo "Available backups:"
|
||||
ls -la ./backups/market_data_backup_*.tar.gz 2>/dev/null || echo "No backups found"
|
||||
exit 1
fi

BACKUP_FILE="$1"
RESTORE_DIR="./restore_temp"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")

# Load environment variables
if [ -f .env ]; then
    source .env
fi

echo "🔄 Starting restore process..."
echo "📁 Backup file: $BACKUP_FILE"

# Check if backup file exists
if [ ! -f "$BACKUP_FILE" ]; then
    echo "❌ Backup file not found: $BACKUP_FILE"
    exit 1
fi

# Create temporary restore directory
mkdir -p "$RESTORE_DIR"

# Extract backup
echo "📦 Extracting backup..."
tar -xzf "$BACKUP_FILE" -C "$RESTORE_DIR"

# Find extracted files
TIMESCALE_BACKUP=$(find "$RESTORE_DIR" -name "timescaledb_backup_*.dump" | head -1)
REDIS_BACKUP=$(find "$RESTORE_DIR" -name "redis_backup_*.rdb" | head -1)
BACKUP_INFO=$(find "$RESTORE_DIR" -name "backup_*.info" | head -1)

if [ -z "$TIMESCALE_BACKUP" ] || [ -z "$REDIS_BACKUP" ]; then
    echo "❌ Invalid backup file structure"
    rm -rf "$RESTORE_DIR"
    exit 1
fi

# Display backup information
if [ -f "$BACKUP_INFO" ]; then
    echo "📋 Backup Information:"
    cat "$BACKUP_INFO"
    echo ""
fi

# Confirm restore
read -p "⚠️  This will replace all existing data. Continue? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    echo "❌ Restore cancelled"
    rm -rf "$RESTORE_DIR"
    exit 1
fi

# Stop services
echo "🛑 Stopping services..."
docker-compose -f timescaledb-compose.yml down

# Back up current data (just in case)
echo "💾 Creating safety backup of current data..."
mkdir -p "./backups/pre_restore_$TIMESTAMP"
docker run --rm -v market_data_timescale_data:/data -v "$(pwd)/backups/pre_restore_$TIMESTAMP":/backup alpine tar czf /backup/current_timescale.tar.gz -C /data .
docker run --rm -v market_data_redis_data:/data -v "$(pwd)/backups/pre_restore_$TIMESTAMP":/backup alpine tar czf /backup/current_redis.tar.gz -C /data .

# Start only TimescaleDB for restore
echo "🏃 Starting TimescaleDB for restore..."
docker-compose -f timescaledb-compose.yml up -d timescaledb

# Wait for TimescaleDB to be ready
echo "⏳ Waiting for TimescaleDB to be ready..."
sleep 30

# Check if TimescaleDB is ready
if ! docker exec market_data_timescaledb pg_isready -U market_user -d market_data; then
    echo "❌ TimescaleDB is not ready"
    exit 1
fi

# Drop existing database and recreate
echo "🗑️  Dropping existing database..."
docker exec market_data_timescaledb psql -U postgres -c "DROP DATABASE IF EXISTS market_data;"
docker exec market_data_timescaledb psql -U postgres -c "CREATE DATABASE market_data OWNER market_user;"

# Restore TimescaleDB
echo "📊 Restoring TimescaleDB..."
docker cp "$TIMESCALE_BACKUP" market_data_timescaledb:/tmp/restore.dump
# TimescaleDB needs the extension installed and restore mode enabled before
# pg_restore runs, otherwise hypertable catalog internals fail to load
docker exec market_data_timescaledb psql -U market_user -d market_data -c "CREATE EXTENSION IF NOT EXISTS timescaledb;"
docker exec market_data_timescaledb psql -U market_user -d market_data -c "SELECT timescaledb_pre_restore();"
docker exec market_data_timescaledb pg_restore \
    -U market_user \
    -d market_data \
    --verbose \
    --no-password \
    /tmp/restore.dump

if [ $? -eq 0 ]; then
    # Leave restore mode so normal operations resume
    docker exec market_data_timescaledb psql -U market_user -d market_data -c "SELECT timescaledb_post_restore();"
    echo "✅ TimescaleDB restore completed"
else
    echo "❌ TimescaleDB restore failed"
    exit 1
fi

# Stop TimescaleDB
docker-compose -f timescaledb-compose.yml stop timescaledb

# Restore Redis data
echo "📦 Restoring Redis data..."
# Remove existing Redis data
docker volume rm market_data_redis_data 2>/dev/null || true
docker volume create market_data_redis_data

# Copy Redis backup into the volume
docker run --rm -v market_data_redis_data:/data -v "$(pwd)/$RESTORE_DIR":/backup alpine cp "/backup/$(basename "$REDIS_BACKUP")" /data/dump.rdb

# Start all services
echo "🏃 Starting all services..."
docker-compose -f timescaledb-compose.yml up -d

# Wait for services to be ready
echo "⏳ Waiting for services to be ready..."
sleep 30

# Verify restore
echo "🔍 Verifying restore..."

# Check TimescaleDB
if docker exec market_data_timescaledb pg_isready -U market_user -d market_data; then
    echo "✅ TimescaleDB is ready"

    # Show table row counts
    echo "📊 Database table counts:"
    docker exec market_data_timescaledb psql -U market_user -d market_data -c "
        SELECT
            schemaname,
            tablename,
            n_tup_ins AS row_count
        FROM pg_stat_user_tables
        WHERE schemaname = 'market_data'
        ORDER BY tablename;
    "
else
    echo "❌ TimescaleDB verification failed"
    exit 1
fi

# Check Redis
if docker exec market_data_redis redis-cli -a "$REDIS_PASSWORD" ping | grep -q PONG; then
    echo "✅ Redis is ready"

    # Show Redis keyspace info
    echo "📦 Redis database info:"
    docker exec market_data_redis redis-cli -a "$REDIS_PASSWORD" INFO keyspace
else
    echo "❌ Redis verification failed"
    exit 1
fi

# Clean up
echo "🧹 Cleaning up temporary files..."
rm -rf "$RESTORE_DIR"

echo ""
echo "🎉 Restore completed successfully!"
echo ""
echo "📋 Restore Summary:"
echo "   Source: $BACKUP_FILE"
echo "   Timestamp: $TIMESTAMP"
echo "   Safety backup: ./backups/pre_restore_$TIMESTAMP/"
echo ""
echo "⚠️  If you encounter any issues, you can restore the safety backup:"
echo "   docker-compose -f timescaledb-compose.yml down"
echo "   docker volume rm market_data_timescale_data market_data_redis_data"
echo "   docker volume create market_data_timescale_data"
echo "   docker volume create market_data_redis_data"
echo "   docker run --rm -v market_data_timescale_data:/data -v $(pwd)/backups/pre_restore_$TIMESTAMP:/backup alpine tar xzf /backup/current_timescale.tar.gz -C /data"
echo "   docker run --rm -v market_data_redis_data:/data -v $(pwd)/backups/pre_restore_$TIMESTAMP:/backup alpine tar xzf /backup/current_redis.tar.gz -C /data"
echo "   docker-compose -f timescaledb-compose.yml up -d"
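`backup.sh` is not shown in this section, but the `find` patterns above define the archive layout `restore.sh` expects: a `.tar.gz` containing `timescaledb_backup_*.dump`, `redis_backup_*.rdb`, and optionally `backup_*.info`. A minimal sketch of that layout and the lookup logic (the outer archive name `full_backup_*.tar.gz` is hypothetical; the script accepts any path as `$1`):

```shell
set -eu
# Build a throwaway archive with the component names restore.sh looks for.
WORK=$(mktemp -d)
TS=20240101_000000
touch "$WORK/timescaledb_backup_$TS.dump" "$WORK/redis_backup_$TS.rdb" "$WORK/backup_$TS.info"
tar -czf "$WORK/full_backup_$TS.tar.gz" -C "$WORK" \
    "timescaledb_backup_$TS.dump" "redis_backup_$TS.rdb" "backup_$TS.info"

# Locate components the same way restore.sh does after extraction.
RESTORE_DIR="$WORK/restore_temp"
mkdir -p "$RESTORE_DIR"
tar -xzf "$WORK/full_backup_$TS.tar.gz" -C "$RESTORE_DIR"
TIMESCALE_BACKUP=$(find "$RESTORE_DIR" -name "timescaledb_backup_*.dump" | head -1)
REDIS_BACKUP=$(find "$RESTORE_DIR" -name "redis_backup_*.rdb" | head -1)
echo "timescale: $(basename "$TIMESCALE_BACKUP")"
echo "redis: $(basename "$REDIS_BACKUP")"
rm -rf "$WORK"
```

An archive missing either component trips the "Invalid backup file structure" check, so keeping these names stable between backup and restore scripts matters.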
78
COBY/docker/timescaledb-compose.yml
Normal file
@ -0,0 +1,78 @@
version: '3.8'

services:
  timescaledb:
    image: timescale/timescaledb:latest-pg15
    container_name: market_data_timescaledb
    restart: unless-stopped
    environment:
      POSTGRES_DB: market_data
      POSTGRES_USER: market_user
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-market_data_secure_pass_2024}
      POSTGRES_INITDB_ARGS: "--encoding=UTF-8 --lc-collate=C --lc-ctype=C"
      # TimescaleDB specific settings
      TIMESCALEDB_TELEMETRY: 'off'
    ports:
      - "5432:5432"
    volumes:
      - timescale_data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d
    command: >
      postgres
      -c shared_preload_libraries=timescaledb
      -c max_connections=200
      -c shared_buffers=256MB
      -c effective_cache_size=1GB
      -c maintenance_work_mem=64MB
      -c checkpoint_completion_target=0.9
      -c wal_buffers=16MB
      -c default_statistics_target=100
      -c random_page_cost=1.1
      -c effective_io_concurrency=200
      -c work_mem=4MB
      -c min_wal_size=1GB
      -c max_wal_size=4GB
      -c max_worker_processes=8
      -c max_parallel_workers_per_gather=4
      -c max_parallel_workers=8
      -c max_parallel_maintenance_workers=4
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U market_user -d market_data"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    networks:
      - market_data_network

  redis:
    image: redis:7-alpine
    container_name: market_data_redis
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
      - ./redis.conf:/usr/local/etc/redis/redis.conf
    command: redis-server /usr/local/etc/redis/redis.conf
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    networks:
      - market_data_network

volumes:
  timescale_data:
    driver: local
    # Explicit names so the backup/restore scripts can reference the volumes
    # regardless of the Compose project name prefix
    name: market_data_timescale_data
  redis_data:
    driver: local
    name: market_data_redis_data

networks:
  market_data_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
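The `${POSTGRES_PASSWORD:-...}` fallback above means clients can derive their connection settings from the same default when no `.env` override is present. A minimal sketch of how that substitution resolves (the URL shape is standard libpq form, not anything this repo defines):

```shell
# Resolve the password exactly as the compose file's ${VAR:-default} does.
unset POSTGRES_PASSWORD   # simulate: no .env override present
PG_PASS="${POSTGRES_PASSWORD:-market_data_secure_pass_2024}"
PG_URL="postgresql://market_user:${PG_PASS}@localhost:5432/market_data"
echo "$PG_URL"
```

Setting `POSTGRES_PASSWORD` in `.env` changes both the container credential and this derived URL, which is why the deploy script loads `.env` before starting anything.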