These are the core files of the environment created to simulate the behavior of the neutrino module, a Python API used to implement algorithms to trade in the Brazilian financial market. To this end, this folder implements an environment that can be composed of multiple limit order books (LOBs) and that replays historical high-frequency order book data. It can be used to develop new strategies that are already compatible with neutrino and to backtest them against real data.
Neutrino Gym is implemented in a reinforcement learning fashion, both because that framing suits algorithmic trading development and because it keeps the environment "machine learning" friendly. This is the core NeutrinoGym interface. The methods you should be aware of are listed below; a usage sketch follows the list:
- setParameters(init, end, datafolder, starttime, endtime, logfolder, instruments): Configure the simulation: the initial and final dates to replay, the folder holding the historical data, the daily start and end times, the folder where logs are written, and the instruments to trade.
- resetAgent(agent, hold_pos=False): Reset the agent's state and, if required, set the initial position to the last one. Return an observation object.
- callBack(agent, observation): Return an action object, which should be passed to the step function. This function also updates the agent's state.
- step(actions): Render one frame of the environment, taking into account the messages generated by the agent (stored in the actions object). Return observation, reward, done, info.
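A minimal sketch of how these methods fit together, assuming a hypothetical import path and parameter formats (the actual module layout and value conventions should be checked in the source):

```python
# Hypothetical import path and agent class -- check the real code for both.
from neutrinogym import Environment

from my_agents import MyAgent  # hypothetical user-written agent

env = Environment()
env.setParameters(
    init="2019-01-02",          # first session to replay (assumed date format)
    end="2019-01-31",           # last session to replay
    datafolder="data/",         # folder with the historical order book data
    starttime="10:00",          # start of the trading window (assumed format)
    endtime="16:55",            # end of the trading window
    logfolder="logs/",          # where the environment writes its logs
    instruments=["PETR4"],      # instruments to subscribe to (example ticker)
)

agent = MyAgent()
observation = env.resetAgent(agent, hold_pos=False)
done = False
while not done:
    # The agent decides on its messages; its state is updated here.
    actions = env.callBack(agent, observation)
    # The environment replays one frame, applying the agent's messages.
    observation, reward, done, info = env.step(actions)
```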
Both the observation and actions objects are classes that link the dynamics between the environment and the agent. Check the code for more details.
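Purely as an illustration of what such linking classes might look like, here is a hypothetical sketch; none of these field names are confirmed by the NeutrinoGym source:

```python
from dataclasses import dataclass, field


@dataclass
class Observation:
    """Hypothetical: what the environment hands to the agent each frame."""
    book: dict = field(default_factory=dict)  # hypothetical: LOB snapshot per instrument
    position: float = 0.0                     # hypothetical: agent's current position
    time: str = ""                            # hypothetical: simulation timestamp


@dataclass
class Actions:
    """Hypothetical: what the agent hands back to step()."""
    orders: list = field(default_factory=list)  # hypothetical: messages to send

# The environment fills an Observation, the agent fills an Actions object,
# and step() consumes it -- this is the "link" between the two sides.
```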
Wrappers are used to transform the environment and the agent in a modular way, so you can change them without affecting the main classes. This is especially important for the agent class, which cannot use some libraries in the production environment (such as Matplotlib). So, you can write your algorithm normally and wrap it to generate all the information you need to validate it. Once validated, the strategy you wrote will be almost the same as the one you put into production (you will just need to change some imports). A sketch of this wrapper pattern follows.
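This sketch shows the general wrapper idea, not the library's actual wrapper API: the plotting concern lives entirely in the wrapper class (which is a hypothetical name), so the wrapped agent stays free of Matplotlib and can move to production unchanged.

```python
import matplotlib.pyplot as plt


class PlottingAgentWrapper:
    """Hypothetical wrapper: records data for offline validation plots."""

    def __init__(self, agent):
        self.agent = agent
        self.rewards = []  # hypothetical: reward history collected for plotting

    def __getattr__(self, name):
        # Delegate every other attribute/method to the wrapped agent,
        # so the wrapper is transparent to the environment.
        return getattr(self.agent, name)

    def record(self, reward):
        # Called from the backtest loop to accumulate validation data.
        self.rewards.append(reward)

    def plot(self):
        # Matplotlib stays inside the wrapper, never inside the agent itself.
        plt.plot(self.rewards)
        plt.xlabel("step")
        plt.ylabel("reward")
        plt.show()
```

In a backtest you would wrap the agent (`agent = PlottingAgentWrapper(MyAgent())`), call `agent.record(reward)` inside the loop, and call `agent.plot()` afterwards; for production you drop the wrapper and use the bare agent.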