Dobromir Popov c8b0f77d32 suggestions
2025-02-12 01:38:05 +02:00

To run this code:

Install Dependencies: pip install -r requirements.txt

Set up .env: Create a .env file in your project root and add your MEXC API keys:

MEXC_API_KEY=your_api_key
MEXC_API_SECRET=your_api_secret

Run: python main.py

Important Considerations and Next Steps:

Hyperparameter Tuning: The provided hyperparameters are a starting point. You'll need to experiment with d_model, num_heads, num_layers, d_ff, learning rate, weight decay, and dropout to optimize performance. Consider using a hyperparameter optimization library like Optuna.
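As a dependency-free illustration of the idea (Optuna provides smarter samplers and pruning), here is a random-search sketch over the hyperparameters named above; the search-space values and the `evaluate` callback are assumptions for the example:

```python
import random

# Hypothetical search space over the hyperparameters mentioned above.
SEARCH_SPACE = {
    "d_model": [64, 128, 256],
    "num_heads": [2, 4, 8],
    "num_layers": [2, 4, 6],
    "d_ff": [256, 512, 1024],
    "lr": [1e-4, 3e-4, 1e-3],
    "weight_decay": [0.0, 1e-5, 1e-4],
    "dropout": [0.0, 0.1, 0.2],
}

def sample_config(rng):
    """Draw one random configuration from the search space."""
    return {name: rng.choice(choices) for name, choices in SEARCH_SPACE.items()}

def random_search(evaluate, n_trials=20, seed=0):
    """Return the sampled config with the lowest validation loss.

    `evaluate(cfg)` is assumed to train briefly with `cfg` and return a
    validation loss; here it is whatever callable you pass in.
    """
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        loss = evaluate(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```

Optuna replaces the loop with `study.optimize(...)` and can prune bad trials early, which matters when each trial is a full training run.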

Loss Function Choices: MSE is used as a placeholder. For predicting price movements, you might consider using a loss function that focuses on the direction of the change (up or down) rather than just the magnitude. For volume, you might need a different loss function altogether.
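One way to focus on direction rather than magnitude is to treat the model output as a logit for "price goes up" and apply binary cross-entropy; a minimal sketch (the function name and interface are illustrative, not part of the codebase):

```python
import math

def directional_bce(pred_change, true_change):
    """Binary cross-entropy on the *direction* of the price move.

    `pred_change` is treated as a logit for "price goes up"; the label is
    1 when the true change is positive, else 0. Magnitude is ignored.
    """
    label = 1.0 if true_change > 0 else 0.0
    p_up = 1.0 / (1.0 + math.exp(-pred_change))  # sigmoid
    eps = 1e-12  # avoid log(0)
    return -(label * math.log(p_up + eps) + (1.0 - label) * math.log(1.0 - p_up + eps))
```

In practice you might blend this with MSE (e.g. `alpha * mse + (1 - alpha) * directional_bce`) so the model still learns magnitudes, and use a plain regression loss such as MSE on log-volume for the volume head.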

Trading Strategy: The included "trading logic" is purely for demonstration. You'll need to develop a robust trading strategy with proper risk management, entry/exit criteria, and position sizing.
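As one concrete piece of the risk-management puzzle, fixed-fractional position sizing caps the loss per trade at a set fraction of equity; this sketch is a generic illustration, not logic from the repository:

```python
def position_size(equity, risk_frac, entry_price, stop_price):
    """Units to buy so that hitting the stop loses at most
    `risk_frac` of current equity (fixed-fractional sizing)."""
    risk_per_unit = abs(entry_price - stop_price)
    if risk_per_unit == 0:
        return 0.0  # no defined stop distance -> no position
    return (equity * risk_frac) / risk_per_unit
```

For example, with $10,000 equity, 1% risk, entry at 100 and stop at 98, the size is 50 units: if the stop is hit you lose 50 x 2 = $100, i.e. 1% of equity.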

Data Normalization/Scaling: Normalize or scale your input features (candles and ticks) to improve training stability and performance. Common techniques include min-max scaling or standardization. This should be added to data_utils.py.
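Both techniques are a few lines each; a minimal sketch of what such helpers in data_utils.py could look like (function names are suggestions):

```python
def minmax_scale(values):
    """Map values linearly into [0, 1]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against constant input
    return [(v - lo) / span for v in values]

def standardize(values):
    """Shift to zero mean and scale to unit (population) std."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = (var ** 0.5) or 1.0  # guard against constant input
    return [(v - mean) / std for v in values]
```

One caveat: fit the scaling parameters (min/max or mean/std) on the training split only, then apply them unchanged to validation and live data, otherwise you leak future information into training.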

Evaluation Metrics: Track relevant metrics beyond just loss, such as Sharpe ratio, maximum drawdown, and win rate. This should be added to train.py and possibly a separate evaluation.py module.
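The three metrics mentioned are straightforward to compute from a return series, an equity curve, and per-trade PnLs; a sketch of what an evaluation.py module could contain:

```python
def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of per-period returns (risk-free rate assumed 0)."""
    n = len(returns)
    mean = sum(returns) / n
    std = (sum((r - mean) ** 2 for r in returns) / n) ** 0.5
    if std == 0:
        return 0.0
    return (mean / std) * periods_per_year ** 0.5

def max_drawdown(equity_curve):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, worst = equity_curve[0], 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

def win_rate(trade_pnls):
    """Fraction of trades with positive PnL."""
    return sum(1 for p in trade_pnls if p > 0) / len(trade_pnls)
```

`periods_per_year` should match your bar frequency (252 for daily bars; much larger for intraday data).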

Backtesting: Before deploying live, thoroughly backtest your model and strategy on historical data to assess performance and identify weaknesses. The current code mixes training and backtesting; ideally, separate them into distinct scripts.
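At its core a backtest is a single walk over historical prices, applying the strategy's signals and tracking equity; this skeleton is a generic illustration (the `signal` callback and fee model are assumptions), not the repository's implementation:

```python
def backtest(prices, signal, starting_cash=1000.0, fee=0.001):
    """Walk the price series once and return the equity curve.

    `signal(i)` returns the target position for bar i: +1 (long) or 0 (flat).
    `fee` is a proportional transaction cost applied on each fill.
    """
    cash, units = starting_cash, 0.0
    equity_curve = []
    for i, price in enumerate(prices):
        target = signal(i)
        if target > 0 and units == 0:        # enter long
            units = cash * (1 - fee) / price
            cash = 0.0
        elif target <= 0 and units > 0:      # exit to flat
            cash = units * price * (1 - fee)
            units = 0.0
        equity_curve.append(cash + units * price)  # mark to market
    return equity_curve
```

Even a simple loop like this forces you to model fees and avoid lookahead (the signal at bar i must use only data up to bar i); slippage and partial fills are further refinements.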

Overfitting: Monitor for overfitting (the model performing well on training data but poorly on new data). Techniques like dropout, weight decay, and early stopping can help mitigate overfitting.
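Dropout and weight decay are already hyperparameters above; early stopping is a small utility you can add to the training loop (the class below is a common generic pattern, not code from this repo):

```python
class EarlyStopping:
    """Stop training when validation loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=5, min_delta=1e-4):
        self.patience = patience
        self.min_delta = min_delta   # minimum improvement to count as progress
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

Typical usage in train.py: `if stopper.step(val_loss): break`, ideally restoring the checkpoint saved at the best validation loss.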

Memory usage: The code stores incoming data in a deque with a fixed maximum length, so only the most recent N samples are kept and out-of-memory errors are avoided.
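The bounded-buffer behavior comes from `collections.deque` with `maxlen` set: appends past the limit silently evict the oldest entries. A minimal illustration (the constant name is arbitrary):

```python
from collections import deque

MAX_SAMPLES = 10_000  # tune to available RAM

recent_ticks = deque(maxlen=MAX_SAMPLES)

for i in range(15_000):
    recent_ticks.append(i)  # once full, the oldest entry is evicted per append

# Only the newest MAX_SAMPLES items remain; indices 0..4999 were dropped.
```

Appends and evictions are O(1), which makes this well suited to a live tick stream.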

Learned indicators: This is the most complex piece. You can train a separate NN to predict the next candle from OHLCV data alone; that network's learned representations (its hidden activations, rather than the raw weights themselves) can then serve as new indicators, concatenated with the others.
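A minimal PyTorch sketch of the idea: an auxiliary network trained to predict the next OHLCV row, whose hidden layer is then concatenated onto the raw features. The class name, sizes, and training objective (MSE against the next candle, loop omitted) are all assumptions for illustration:

```python
import torch
import torch.nn as nn

class CandlePredictor(nn.Module):
    """Hypothetical auxiliary net: predict the next candle from the current one.

    After training (e.g. MSE against the next OHLCV row), the encoder's
    hidden activations act as learned indicators for the main model.
    """

    def __init__(self, n_features=5, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_features)

    def forward(self, ohlcv):
        return self.head(self.encoder(ohlcv))

    def indicators(self, ohlcv):
        """Hidden representation, used as extra input features downstream."""
        with torch.no_grad():
            return self.encoder(ohlcv)

# Usage: concatenate the learned indicators onto the raw feature vector.
net = CandlePredictor()
batch = torch.randn(8, 5)  # 8 candles, OHLCV columns
features = torch.cat([batch, net.indicators(batch)], dim=1)  # (8, 5 + 16)
```

The auxiliary net can be pretrained and frozen, or trained jointly with the main model as an extra prediction head; either way it gives the main model features distilled from raw OHLCV.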