2025-03-17 02:49:17,843 - INFO - Starting live trading demo for ETH/USDT on 1m timeframe
2025-03-17 02:49:17,844 - INFO - Using model: models/trading_agent_best_pnl.pt
2025-03-17 02:49:17,847 - INFO - Exchange initialized with standard CCXT: mexc
2025-03-17 02:49:17,848 - INFO - Fetching initial data for ETH/USDT
2025-03-17 02:49:18,537 - ERROR - Error fetching OHLCV data: mexc {"code":700002,"msg":"Signature for this request is not valid."}
2025-03-17 02:49:18,537 - WARNING - No initial data received
2025-03-17 02:49:18,537 - ERROR - Failed to fetch initial data. Exiting.
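Note on the 700002 error above: MEXC rejects the request signature, which usually means private calls are being signed with missing or wrong API credentials (or a skewed local clock). A minimal pre-flight sketch with ccxt; the MEXC_API_KEY / MEXC_SECRET_KEY environment variable names are assumptions, not taken from the demo code:

    import os
    import ccxt

    def make_exchange():
        """Build a ccxt MEXC client; fall back to public-only access if no keys are set."""
        api_key = os.environ.get("MEXC_API_KEY")    # assumed variable name
        secret = os.environ.get("MEXC_SECRET_KEY")  # assumed variable name
        config = {"enableRateLimit": True}
        if api_key and secret:
            config.update({"apiKey": api_key, "secret": secret})
        return ccxt.mexc(config)

    exchange = make_exchange()
    # fetch_ohlcv is a public endpoint on most exchanges, so it should work even without credentials
    candles = exchange.fetch_ohlcv("ETH/USDT", timeframe="1m", limit=500)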
2025-03-17 02:50:45,182 - INFO - Starting live trading demo for ETH/USDT on 1m timeframe
2025-03-17 02:50:45,182 - INFO - Using model: models/trading_agent_best_pnl.pt
2025-03-17 02:50:45,182 - INFO - Using mock data for demo mode (no API keys required)
2025-03-17 02:50:45,182 - INFO - Generating mock data for ETH/USDT (1m)
2025-03-17 02:50:45,189 - INFO - Generated 1000 mock candles
2025-03-17 02:50:45,217 - INFO - Using GPU: NVIDIA GeForce RTX 4060 Laptop GPU
2025-03-17 02:50:46,501 - WARNING - Failed to load with weights_only=True: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy._core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
2025-03-17 02:50:46,566 - WARNING - Failed with safe_globals: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy.dtype was not an allowed global by default. Please use `torch.serialization.add_safe_globals([dtype])` or the `torch.serialization.safe_globals([dtype])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
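Note on the two warnings above: the unpickler names the exact globals it refused (numpy._core.multiarray.scalar, then numpy.dtype). For a checkpoint from a trusted source they can be allowlisted before retrying with weights_only=True, as the message itself suggests. A minimal sketch; the objects themselves must be passed, not their names as strings, and further globals may need adding if new "Unsupported global" errors appear:

    import numpy as np
    import torch

    # Only do this for checkpoints you trust.
    # np._core is the NumPy 2.x path shown in the error; older NumPy exposes it as np.core.
    torch.serialization.add_safe_globals([np._core.multiarray.scalar, np.dtype])

    checkpoint = torch.load("models/trading_agent_best_pnl.pt",
                            map_location="cpu", weights_only=True)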
2025-03-17 02:50:46,623 - ERROR - Error in live trading: Error(s) in loading state_dict for DQN:
size mismatch for fc1.weight: copying a param with shape torch.Size([384, 40]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for fc1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ln1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ln1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for lstm.weight_ih_l0: copying a param with shape torch.Size([1536, 384]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for lstm.weight_hh_l0: copying a param with shape torch.Size([1536, 384]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for lstm.bias_ih_l0: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for lstm.bias_hh_l0: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for lstm.weight_ih_l1: copying a param with shape torch.Size([1536, 384]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for lstm.weight_hh_l1: copying a param with shape torch.Size([1536, 384]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for lstm.bias_ih_l1: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for lstm.bias_hh_l1: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for attention.in_proj_weight: copying a param with shape torch.Size([1152, 384]) from checkpoint, the shape in current model is torch.Size([768, 256]).
size mismatch for attention.in_proj_bias: copying a param with shape torch.Size([1152]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for attention.out_proj.weight: copying a param with shape torch.Size([384, 384]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for attention.out_proj.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for fc2.weight: copying a param with shape torch.Size([384, 384]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for fc2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ln2.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ln2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for fc3.weight: copying a param with shape torch.Size([192, 384]) from checkpoint, the shape in current model is torch.Size([128, 256]).
size mismatch for fc3.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for value_stream.weight: copying a param with shape torch.Size([1, 192]) from checkpoint, the shape in current model is torch.Size([1, 128]).
size mismatch for advantage_stream.weight: copying a param with shape torch.Size([4, 192]) from checkpoint, the shape in current model is torch.Size([3, 128]).
size mismatch for advantage_stream.bias: copying a param with shape torch.Size([4]) from checkpoint, the shape in current model is torch.Size([3]).
size mismatch for transformer_encoder.layers.0.self_attn.in_proj_weight: copying a param with shape torch.Size([1152, 384]) from checkpoint, the shape in current model is torch.Size([768, 256]).
size mismatch for transformer_encoder.layers.0.self_attn.in_proj_bias: copying a param with shape torch.Size([1152]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for transformer_encoder.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([384, 384]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for transformer_encoder.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.linear1.weight: copying a param with shape torch.Size([2048, 384]) from checkpoint, the shape in current model is torch.Size([2048, 256]).
size mismatch for transformer_encoder.layers.0.linear2.weight: copying a param with shape torch.Size([384, 2048]) from checkpoint, the shape in current model is torch.Size([256, 2048]).
size mismatch for transformer_encoder.layers.0.linear2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.norm1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.norm1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.norm2.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.norm2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.self_attn.in_proj_weight: copying a param with shape torch.Size([1152, 384]) from checkpoint, the shape in current model is torch.Size([768, 256]).
size mismatch for transformer_encoder.layers.1.self_attn.in_proj_bias: copying a param with shape torch.Size([1152]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for transformer_encoder.layers.1.self_attn.out_proj.weight: copying a param with shape torch.Size([384, 384]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for transformer_encoder.layers.1.self_attn.out_proj.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.linear1.weight: copying a param with shape torch.Size([2048, 384]) from checkpoint, the shape in current model is torch.Size([2048, 256]).
size mismatch for transformer_encoder.layers.1.linear2.weight: copying a param with shape torch.Size([384, 2048]) from checkpoint, the shape in current model is torch.Size([256, 2048]).
size mismatch for transformer_encoder.layers.1.linear2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.norm1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.norm1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.norm2.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.norm2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
2025-03-17 02:50:46,625 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 236, in run_live_demo
agent.load(args.model)
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\main.py", line 1776, in load
self.policy_net.load_state_dict(checkpoint['policy_net'])
File "C:\Users\popov\miniforge3\Lib\site-packages\torch\nn\modules\module.py", line 2581, in load_state_dict
raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for DQN:
size mismatch for fc1.weight: copying a param with shape torch.Size([384, 40]) from checkpoint, the shape in current model is torch.Size([256, 64]).
size mismatch for fc1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ln1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ln1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for lstm.weight_ih_l0: copying a param with shape torch.Size([1536, 384]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for lstm.weight_hh_l0: copying a param with shape torch.Size([1536, 384]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for lstm.bias_ih_l0: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for lstm.bias_hh_l0: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for lstm.weight_ih_l1: copying a param with shape torch.Size([1536, 384]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for lstm.weight_hh_l1: copying a param with shape torch.Size([1536, 384]) from checkpoint, the shape in current model is torch.Size([1024, 256]).
size mismatch for lstm.bias_ih_l1: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for lstm.bias_hh_l1: copying a param with shape torch.Size([1536]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for attention.in_proj_weight: copying a param with shape torch.Size([1152, 384]) from checkpoint, the shape in current model is torch.Size([768, 256]).
size mismatch for attention.in_proj_bias: copying a param with shape torch.Size([1152]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for attention.out_proj.weight: copying a param with shape torch.Size([384, 384]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for attention.out_proj.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for fc2.weight: copying a param with shape torch.Size([384, 384]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for fc2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ln2.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ln2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for fc3.weight: copying a param with shape torch.Size([192, 384]) from checkpoint, the shape in current model is torch.Size([128, 256]).
size mismatch for fc3.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for value_stream.weight: copying a param with shape torch.Size([1, 192]) from checkpoint, the shape in current model is torch.Size([1, 128]).
size mismatch for advantage_stream.weight: copying a param with shape torch.Size([4, 192]) from checkpoint, the shape in current model is torch.Size([3, 128]).
size mismatch for advantage_stream.bias: copying a param with shape torch.Size([4]) from checkpoint, the shape in current model is torch.Size([3]).
size mismatch for transformer_encoder.layers.0.self_attn.in_proj_weight: copying a param with shape torch.Size([1152, 384]) from checkpoint, the shape in current model is torch.Size([768, 256]).
size mismatch for transformer_encoder.layers.0.self_attn.in_proj_bias: copying a param with shape torch.Size([1152]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for transformer_encoder.layers.0.self_attn.out_proj.weight: copying a param with shape torch.Size([384, 384]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for transformer_encoder.layers.0.self_attn.out_proj.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.linear1.weight: copying a param with shape torch.Size([2048, 384]) from checkpoint, the shape in current model is torch.Size([2048, 256]).
size mismatch for transformer_encoder.layers.0.linear2.weight: copying a param with shape torch.Size([384, 2048]) from checkpoint, the shape in current model is torch.Size([256, 2048]).
size mismatch for transformer_encoder.layers.0.linear2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.norm1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.norm1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.norm2.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.0.norm2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.self_attn.in_proj_weight: copying a param with shape torch.Size([1152, 384]) from checkpoint, the shape in current model is torch.Size([768, 256]).
size mismatch for transformer_encoder.layers.1.self_attn.in_proj_bias: copying a param with shape torch.Size([1152]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for transformer_encoder.layers.1.self_attn.out_proj.weight: copying a param with shape torch.Size([384, 384]) from checkpoint, the shape in current model is torch.Size([256, 256]).
size mismatch for transformer_encoder.layers.1.self_attn.out_proj.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.linear1.weight: copying a param with shape torch.Size([2048, 384]) from checkpoint, the shape in current model is torch.Size([2048, 256]).
size mismatch for transformer_encoder.layers.1.linear2.weight: copying a param with shape torch.Size([384, 2048]) from checkpoint, the shape in current model is torch.Size([256, 2048]).
size mismatch for transformer_encoder.layers.1.linear2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.norm1.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.norm1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.norm2.weight: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for transformer_encoder.layers.1.norm2.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([256]).
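Note on the size mismatches above: the checkpoint was trained with hidden_size=384, state_size=40 and 4 actions, while the freshly constructed DQN used 256/64/3, so every layer disagrees. The "Extracted model architecture" line in the next run suggests the demo was changed to read these sizes out of the checkpoint before building the network. A rough sketch of that idea, assuming the checkpoint stores the weights under the 'policy_net' key as the traceback shows; the Agent constructor call is hypothetical:

    import torch

    def infer_dqn_sizes(path):
        """Recover state/action/hidden sizes and LSTM depth from the saved policy_net tensors."""
        checkpoint = torch.load(path, map_location="cpu", weights_only=False)  # trusted file only
        state_dict = checkpoint["policy_net"]
        hidden_size, state_size = state_dict["fc1.weight"].shape       # e.g. (384, 40)
        action_size = state_dict["advantage_stream.weight"].shape[0]   # e.g. 4
        lstm_layers = sum(1 for k in state_dict if k.startswith("lstm.weight_ih_l"))
        return state_size, action_size, hidden_size, lstm_layers

    state_size, action_size, hidden_size, lstm_layers = infer_dqn_sizes("models/trading_agent_best_pnl.pt")
    # agent = Agent(state_size, action_size, hidden_size=hidden_size, lstm_layers=lstm_layers)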
2025-03-17 02:52:12,557 - INFO - Starting live trading demo for ETH/USDT on 1m timeframe
2025-03-17 02:52:12,558 - INFO - Using model: models/trading_agent_best_pnl.pt
2025-03-17 02:52:12,558 - INFO - Using mock data for demo mode (no API keys required)
2025-03-17 02:52:12,558 - INFO - Generating mock data for ETH/USDT (1m)
2025-03-17 02:52:12,565 - INFO - Generated 1000 mock candles
2025-03-17 02:52:12,607 - INFO - Extracted model architecture: state_size=40, action_size=4, hidden_size=384, lstm_layers=2, attention_heads=4
2025-03-17 02:52:12,636 - INFO - Using GPU: NVIDIA GeForce RTX 4060 Laptop GPU
2025-03-17 02:52:13,909 - WARNING - Failed to load with weights_only=True: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy._core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
2025-03-17 02:52:13,973 - WARNING - Failed with safe_globals: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy.dtype was not an allowed global by default. Please use `torch.serialization.add_safe_globals([dtype])` or the `torch.serialization.safe_globals([dtype])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
2025-03-17 02:52:14,032 - INFO - Model loaded from models/trading_agent_best_pnl.pt
2025-03-17 02:52:14,032 - INFO - Model loaded successfully
2025-03-17 02:52:14,035 - INFO - Starting live trading simulation...
2025-03-17 02:52:19,117 - ERROR - Error in live trading loop: not enough values to unpack (expected 4, got 3)
2025-03-17 02:52:19,118 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 367, in run_live_demo
next_state, reward, done, info = env.step(action)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)

2025-03-17 02:52:19,118 - INFO - Continuing after error...
2025-03-17 02:52:29,139 - ERROR - Error in live trading loop: not enough values to unpack (expected 4, got 3)
2025-03-17 02:52:29,140 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 367, in run_live_demo
next_state, reward, done, info = env.step(action)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)

2025-03-17 02:52:29,140 - INFO - Continuing after error...
2025-03-17 02:52:39,157 - ERROR - Error in live trading loop: not enough values to unpack (expected 4, got 3)
2025-03-17 02:52:39,157 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 367, in run_live_demo
next_state, reward, done, info = env.step(action)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)

2025-03-17 02:52:39,158 - INFO - Continuing after error...
2025-03-17 02:52:49,176 - ERROR - Error in live trading loop: not enough values to unpack (expected 4, got 3)
2025-03-17 02:52:49,177 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 367, in run_live_demo
next_state, reward, done, info = env.step(action)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)

2025-03-17 02:52:49,177 - INFO - Continuing after error...
2025-03-17 02:52:59,196 - ERROR - Error in live trading loop: not enough values to unpack (expected 4, got 3)
2025-03-17 02:52:59,196 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 367, in run_live_demo
next_state, reward, done, info = env.step(action)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)

2025-03-17 02:52:59,196 - INFO - Continuing after error...
2025-03-17 02:53:09,220 - ERROR - Error in live trading loop: not enough values to unpack (expected 4, got 3)
2025-03-17 02:53:09,220 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 367, in run_live_demo
next_state, reward, done, info = env.step(action)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)

2025-03-17 02:53:09,220 - INFO - Continuing after error...
2025-03-17 02:53:19,244 - ERROR - Error in live trading loop: not enough values to unpack (expected 4, got 3)
2025-03-17 02:53:19,245 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 367, in run_live_demo
next_state, reward, done, info = env.step(action)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 4, got 3)

2025-03-17 02:53:19,245 - INFO - Continuing after error...
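Note on the repeated unpack errors above: the loop at run_live_demo.py line 367 expects the classic Gym 4-tuple (obs, reward, done, info) while this environment's step() returns 3 values, and the 03:40 run below hits the mirror problem once it expects 3. A tolerant wrapper along these lines would absorb the common shapes; the 5-element case is the Gymnasium (obs, reward, terminated, truncated, info) convention:

    def step_env(env, action):
        """Normalize env.step() to (next_state, reward, done, info) across API variants."""
        result = env.step(action)
        if len(result) == 5:   # Gymnasium-style: obs, reward, terminated, truncated, info
            obs, reward, terminated, truncated, info = result
            return obs, reward, terminated or truncated, info
        if len(result) == 4:   # classic Gym: obs, reward, done, info
            return result
        if len(result) == 3:   # custom env returning obs, reward, done
            obs, reward, done = result
            return obs, reward, done, {}
        raise ValueError(f"Unexpected step() return of length {len(result)}")

    next_state, reward, done, info = step_env(env, action)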
2025-03-17 02:53:53,471 - INFO - Starting live trading demo for ETH/USDT on 1m timeframe
2025-03-17 02:53:53,472 - INFO - Using model: models/trading_agent_best_pnl.pt
2025-03-17 02:53:53,472 - INFO - Using mock data for demo mode (no API keys required)
2025-03-17 02:53:53,472 - INFO - Generating mock data for ETH/USDT (1m)
2025-03-17 02:53:53,479 - INFO - Generated 1000 mock candles
2025-03-17 02:53:53,520 - INFO - Extracted model architecture: state_size=40, action_size=4, hidden_size=384, lstm_layers=2, attention_heads=4
2025-03-17 02:53:53,552 - INFO - Using GPU: NVIDIA GeForce RTX 4060 Laptop GPU
2025-03-17 02:53:54,887 - WARNING - Failed to load with weights_only=True: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy._core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
2025-03-17 02:53:54,958 - WARNING - Failed with safe_globals: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy.dtype was not an allowed global by default. Please use `torch.serialization.add_safe_globals([dtype])` or the `torch.serialization.safe_globals([dtype])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
2025-03-17 02:53:55,016 - INFO - Model loaded from models/trading_agent_best_pnl.pt
2025-03-17 02:53:55,017 - INFO - Model loaded successfully
2025-03-17 02:53:55,019 - INFO - Starting live trading simulation...
2025-03-17 02:54:24,295 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:54:54,484 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:55:24,631 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:55:54,809 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:56:24,987 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:56:55,157 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:57:25,288 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:57:55,450 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:58:25,571 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:58:55,733 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 02:59:25,898 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
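Note on the chart errors above: Line2D.set() rejecting a 'type' keyword usually means a raw record (for example a trade dict carrying a 'type': 'buy'/'sell' field) is being splatted into ax.plot() as keyword arguments, and Matplotlib only accepts Line2D properties there. A minimal sketch under that assumption; the trade-dict fields are hypothetical:

    def plot_trade_markers(ax, trades):
        """Draw buy/sell markers without forwarding non-Line2D keys such as 'type'."""
        for trade in trades:  # e.g. {"time": t, "price": p, "type": "buy"}
            is_buy = trade.get("type") == "buy"
            ax.plot(trade["time"], trade["price"],
                    marker="^" if is_buy else "v",
                    color="green" if is_buy else "red",
                    linestyle="none")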
2025-03-17 02:59:55,196 - INFO - Starting live trading demo for ETH/USDT on 1m timeframe
2025-03-17 02:59:55,196 - INFO - Using model: models/trading_agent_best_pnl.pt
2025-03-17 02:59:55,200 - INFO - Exchange initialized with standard CCXT: mexc
2025-03-17 02:59:55,200 - INFO - Fetching initial data for ETH/USDT
2025-03-17 02:59:55,844 - ERROR - Error fetching OHLCV data: mexc {"code":700002,"msg":"Signature for this request is not valid."}
2025-03-17 02:59:55,844 - WARNING - No initial data received
2025-03-17 02:59:55,844 - ERROR - Failed to fetch initial data. Exiting.
2025-03-17 02:59:56,090 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:00:26,253 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:00:56,413 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:01:26,591 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:01:56,732 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:02:26,890 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:02:57,233 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:03:27,392 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:03:57,555 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:04:27,713 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:04:57,867 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:05:28,019 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:05:58,171 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:06:28,323 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:06:58,510 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:07:28,695 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:07:58,884 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:08:29,079 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:08:59,295 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:09:29,452 - ERROR - Error creating chart: Line2D.set() got an unexpected keyword argument 'type'
2025-03-17 03:40:19,333 - INFO - Starting live trading demo for ETH/USDT on 1m timeframe
2025-03-17 03:40:19,333 - INFO - Using model: models/trading_agent_best_pnl.pt
2025-03-17 03:40:19,336 - INFO - Exchange initialized with standard CCXT: mexc
2025-03-17 03:40:19,336 - INFO - Fetching initial data for ETH/USDT
2025-03-17 03:40:24,689 - INFO - Fetched 500 candles for ETH/USDT (1m)
2025-03-17 03:40:24,693 - INFO - Initialized environment with 500 candles
2025-03-17 03:40:24,728 - INFO - Extracted model architecture: state_size=64, action_size=4, hidden_size=384, lstm_layers=2, attention_heads=4
2025-03-17 03:40:24,748 - INFO - Using GPU: NVIDIA GeForce RTX 4060 Laptop GPU
2025-03-17 03:40:25,795 - ERROR - Error loading model: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy._core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
2025-03-17 03:40:25,797 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\main.py", line 1819, in load
checkpoint = torch.load(path, map_location=self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\popov\miniforge3\Lib\site-packages\torch\serialization.py", line 1470, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy._core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
2025-03-17 03:40:25,797 - WARNING - Failed to load model with weights_only=True: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy._core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
2025-03-17 03:40:25,860 - ERROR - Error loading model: 'str' object has no attribute '__module__'
2025-03-17 03:40:25,863 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 280, in run_live_demo
agent.load(args.model)
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\main.py", line 1819, in load
checkpoint = torch.load(path, map_location=self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\popov\miniforge3\Lib\site-packages\torch\serialization.py", line 1470, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy._core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([scalar])` or the `torch.serialization.safe_globals([scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\main.py", line 1819, in load
checkpoint = torch.load(path, map_location=self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\popov\miniforge3\Lib\site-packages\torch\serialization.py", line 1462, in load
return _load(
^^^^^^
File "C:\Users\popov\miniforge3\Lib\site-packages\torch\serialization.py", line 1964, in _load
result = unpickler.load()
^^^^^^^^^^^^^^^^
File "C:\Users\popov\miniforge3\Lib\site-packages\torch\_weights_only_unpickler.py", line 334, in load
elif full_path in _get_user_allowed_globals():
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\popov\miniforge3\Lib\site-packages\torch\_weights_only_unpickler.py", line 144, in _get_user_allowed_globals
module, name = f.__module__, f.__name__
^^^^^^^^^^^^
AttributeError: 'str' object has no attribute '__module__'. Did you mean: '__mod__'?

2025-03-17 03:40:25,863 - WARNING - Failed with safe_globals: 'str' object has no attribute '__module__'
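Note on the AttributeError above: it is raised inside _get_user_allowed_globals while it walks the allowlist, and a plain str has no __module__, which indicates the allowlist was populated with global names as strings rather than with the imported objects. torch.serialization.add_safe_globals needs the objects themselves, as in the sketch after the 02:50 run.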
2025-03-17 03:40:25,919 - INFO - Model loaded from models/trading_agent_best_pnl.pt
2025-03-17 03:40:25,920 - INFO - Model loaded successfully
2025-03-17 03:40:25,925 - INFO - Starting live trading simulation...
2025-03-17 03:40:26,348 - INFO - Fetched 1 candles for ETH/USDT (1m)
2025-03-17 03:40:26,406 - ERROR - Error in live trading loop: too many values to unpack (expected 3)
2025-03-17 03:40:26,406 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 370, in run_live_demo
next_state, reward, done = env.step(action)
^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: too many values to unpack (expected 3)

2025-03-17 03:40:26,406 - INFO - Continuing after error...
2025-03-17 03:40:31,926 - INFO - Fetched 1 candles for ETH/USDT (1m)
2025-03-17 03:40:31,933 - ERROR - Error in live trading loop: too many values to unpack (expected 3)
2025-03-17 03:40:31,933 - ERROR - Traceback (most recent call last):
File "D:\DEV\workspace\REPOS\git.d-popov.com\ai-kevin\crypto\gogo2\run_live_demo.py", line 370, in run_live_demo
next_state, reward, done = env.step(action)
^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: too many values to unpack (expected 3)

2025-03-17 03:40:31,933 - INFO - Continuing after error...
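Note on this last run: it fails in the opposite direction from the 02:52 errors; the unpack at run_live_demo.py line 370 now expects 3 values while step() on the live-data environment returns more, so a tolerant unpacking such as the step_env sketch shown earlier would cover both cases.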