Description
Hi, thank you for sharing your code. Could you provide more example models? The performance of A2C is not very good, so I am trying my own data (including newly added signal features) and experimenting with different models, but they are not performing well either, especially DQN. This may be a silly question; please forgive my unfamiliarity with reinforcement learning. Additionally, I encountered errors when using the DDPG model. I have included my modifications and the error below. I would be grateful for your help.
```python
from typing import List

import numpy as np


def _process_data(
    self,
    keys: List[str] = ['Stochastic_K_1', 'Stochastic_D_1',
                       'Stochastic_K_2', 'Stochastic_D_2',
                       'MACD_DIF', 'MACD_DEA', 'MACD_Histogram',
                       'Moving_Average'],
) -> np.ndarray:
    signal_features = {}
    for symbol in self.trading_symbols:
        # Look up the selected indicator columns for one symbol at one time point.
        get_signal_at = lambda time: \
            self.original_simulator.price_at(symbol, time)[keys]

        if self.multiprocessing_pool is None:
            p = list(map(get_signal_at, self.time_points))
        else:
            # Note: Pool.map cannot pickle a lambda, so this branch will
            # fail unless get_signal_at is replaced by a module-level function.
            p = self.multiprocessing_pool.map(get_signal_at, self.time_points)
        signal_features[symbol] = np.array(p)

    # Stack the per-symbol feature arrays side by side into one 2-D array.
    signal_features = np.column_stack(list(signal_features.values()))
    return signal_features
```

(The original annotated the return type as `Dict[str, np.ndarray]`, but after `np.column_stack` the function actually returns a single `np.ndarray`.)
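For reference, here is a minimal standalone sketch (with made-up symbol names and dummy data) of what the final `np.column_stack` step produces: one row per time point, with each symbol's indicator columns placed side by side.

```python
import numpy as np

# Hypothetical per-symbol feature arrays: 4 time points x 2 indicators each.
signal_features = {
    'SYMBOL_A': np.arange(8, dtype=float).reshape(4, 2),
    'SYMBOL_B': np.arange(8, 16, dtype=float).reshape(4, 2),
}

# column_stack joins the per-symbol blocks horizontally, so the result has
# shape (n_time_points, n_symbols * n_indicators).
stacked = np.column_stack(list(signal_features.values()))
print(stacked.shape)  # (4, 4)
```

This is why the observation width grows with both the number of symbols and the number of `keys` you select.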
