A Node.js web service querying the postgres database produced by `cardano-db-sync`, and interfacing with `cardano-node` via `cardano-cli`.
Requirements:

- `node`
- `yarn`
- `docker`
- `docker-compose`
There must be a `.env` or `.env.test` file in the root directory with the following configs:
```
# App port
PORT=8080
# Application log level [ trace | debug | info | warn | error | fatal ]
LOGGER_LEVEL="debug"
# App address to bind to
BIND_ADDRESS=0.0.0.0
# PostgresDB connection string
DB_CONNECTION_STRING="postgresql://postgres:[email protected]:5432/cardano-test"
# cardano-node topology absolute file path
TOPOLOGY_FILE_PATH="/etc/node/topology-test.json"
# Absolute path to the cardano-cli binary https://github.com/input-output-hk/cardano-node/tree/master/cardano-cli
CARDANO_CLI_PATH="/etc/node/cardano-cli"
# Absolute path to the cardano-node binary https://github.com/input-output-hk/cardano-node/tree/master/cardano-node
CARDANO_NODE_PATH="/etc/node/cardano-node"
# cardano-node genesis absolute file path
GENESIS_SHELLEY_PATH="/etc/node/testnet-genesis.json"
# cardano-node socket
CARDANO_NODE_SOCKET_PATH=/tmp/node.socket
# Max number of transactions to be returned by the `/block` endpoint; if a block has more, only hashes will be sent
PAGE_SIZE=25
# Relative ttl to be used if none is sent with the request
DEFAULT_RELATIVE_TTL=1000
# Exemption types file path
EXEMPTION_TYPES_PATH="/etc/node/exemptions.json"
# Request payload size limit, in bytes
BODY_LIMIT=1048576
# Disables the Search API if set to true
DISABLE_SEARCH_API=false
```
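For illustration, these variables could be read at startup along the following lines. This is a minimal sketch assuming the standard `dotenv` package; the `environment` object, its defaults, and the validation are illustrative, not the service's actual loader:

```typescript
// Illustrative sketch of loading the config above; assumes `dotenv` is installed.
// The service's real configuration code may differ.
import dotenv from 'dotenv';

// Loads .env by default; pass { path: '.env.test' } for the test config
dotenv.config();

const environment = {
  port: Number(process.env.PORT) || 8080,
  loggerLevel: process.env.LOGGER_LEVEL ?? 'info',
  bindAddress: process.env.BIND_ADDRESS ?? '0.0.0.0',
  dbConnectionString: process.env.DB_CONNECTION_STRING,
  pageSize: Number(process.env.PAGE_SIZE) || 25,
  defaultRelativeTtl: Number(process.env.DEFAULT_RELATIVE_TTL) || 1000,
  bodyLimit: Number(process.env.BODY_LIMIT) || 1048576,
  disableSearchApi: process.env.DISABLE_SEARCH_API === 'true'
};

// Fail fast if a required variable is missing
if (!environment.dbConnectionString) {
  throw new Error('DB_CONNECTION_STRING is required');
}
```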
The offline cache enables reproducible builds. To install packages from it:

```
yarn --offline
```

Run tests:

```
yarn test
```

Start and stop the testnet services:

```
yarn testnet:services:up
yarn testnet:services:down
```

Run the server in development mode:

```
yarn dev
```
The Rosetta OpenAPI spec file is used to:

- Generate TypeScript types to be used in the app
- Generate validation schemas that are used by Fastify to improve JSON serialization. For further details see the Fastify documentation (a sketch of this usage follows the steps below).

To do so, the following steps are required:

- Download the specs from here
- Place them in `src/server/openApi.json`
- Introduce as many changes as you need (`metadata` fields need to be populated manually to allow Fastify to return the fields)
- Execute `yarn generate-rosetta-types`
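As a rough illustration of why the schemas matter: Fastify can use a response schema to drive its fast JSON serializer. The route, schema shape, and payload below are a hypothetical sketch, not taken from this codebase:

```typescript
// Hypothetical sketch: attaching a response schema (as would be generated
// from openApi.json) so Fastify can use its fast JSON serializer.
import fastify from 'fastify';

const server = fastify();

// Illustrative schema; the real ones are generated from the Rosetta spec
const networkListResponseSchema = {
  type: 'object',
  properties: {
    network_identifiers: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          blockchain: { type: 'string' },
          network: { type: 'string' }
        }
      }
    }
  }
};

server.post('/network/list', {
  schema: { response: { 200: networkListResponseSchema } },
  handler: async () => ({
    network_identifiers: [{ blockchain: 'cardano', network: 'mainnet' }]
  })
});
```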
In order to execute the e2e tests, a postgres docker container is started and a DB is populated with real mainnet data. This way we can count on real data that would require too much effort to populate manually (lots of inserts and linked data; generating interesting scenarios would take too much time).

There are two ways to import it, and both are described below. It's worth mentioning that we rely on a naming convention, so please pay attention to it.
⚠ IMPORTANT NOTE: If `cardano-db-sync` is updated and introduces a new schema, tests will still work, as they were created with the previous one. A manual update following the steps described below is required.
In order to populate a large dataset, `pg_dump` can be used. Our current test cases require the [Genesis, ~65k] block range. Not all blocks are used, although several checks find the full range very useful, for example when checking an account balance up to a certain block.
Steps to generate and use a new snapshot are:

- Run a cardano node (it could be Cardano Rosetta)
- When the node has reached the block you wish to export, execute `pg_dump $DB > network.bak`. For example, if you are using the default Cardano Rosetta db running a mainnet node, you can run `pg_dump -U postgres cexplorer > mainnet.bak`.
- Generate a `tar` file for `mainnet.bak` and place it in `/cardano-rosetta-server/test/e2e/jest-setup/mainnet-db-snapshot.tar` (this step is sketched below).
- Before running, Jest will pick up the file and load it into the container, choosing the snapshot based on the network name.
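For illustration, the dump-and-tar step could be scripted from Node along these lines. This is a hedged sketch, not the project's tooling; it assumes `pg_dump` and `tar` are on the PATH and uses the default credentials mentioned above:

```typescript
// Illustrative sketch of the dump-and-tar step for a mainnet snapshot.
import { execSync } from 'child_process';

const network = 'mainnet';
const dump = `${network}.bak`;

// Same pg_dump invocation as in the steps above
execSync(`pg_dump -U postgres cexplorer > ${dump}`, { stdio: 'inherit' });

// Package it under the name Jest expects: <network>-db-snapshot.tar
execSync(
  `tar -cf cardano-rosetta-server/test/e2e/jest-setup/${network}-db-snapshot.tar ${dump}`,
  { stdio: 'inherit' }
);
```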
After introducing staking, it was realized that it would be impossible to import a mainnet dump, as staking operations were introduced after block 4M, leading to a `db-snapshot.tar` of over 3.5GB. Since that is not practical, a new way to import data has been introduced that allows importing specific blocks.
Steps to do so:

- Run a mainnet node
- Edit the dump block script and specify the block ids to be exported
- Change the credentials if needed; the current default ones don't require any password and connect to a db named `cexplorer`
- A file will be stored at `/tmp/fixture_data.sql` containing some sql statements (basically the same ones as `pg_dump` creates)
- Rename `/tmp/fixture_data.sql` to `/tmp/network-fixture-data.sql`; for example, if running for launchpad it will end up being `/tmp/launchpad-fixture-data.sql`
- Generate a `tar` file for `network-fixture-data.sql` and place it in `/cardano-rosetta-server/test/e2e/jest-setup/network-fixture-data.tar`, for example `launchpad-fixture-data.tar`
- Jest will pick it up as with the full snapshot db (a sketch of the naming convention follows this list).
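To make the naming convention concrete, the setup code presumably resolves the fixture by network name along these lines. This is a hypothetical sketch; the actual jest-setup implementation may differ:

```typescript
// Hypothetical sketch of the naming convention: resolve a fixture tar by network name.
import fs from 'fs';
import path from 'path';

const fixtureFor = (network: string): string => {
  // e.g. 'launchpad' -> .../jest-setup/launchpad-fixture-data.tar
  const file = path.join(__dirname, 'jest-setup', `${network}-fixture-data.tar`);
  if (!fs.existsSync(file)) {
    throw new Error(`No fixture found for network "${network}" at ${file}`);
  }
  return file;
};
```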
Two things are worth mentioning:

- Data to be exported needs to be defined in the `dump_blocks.sh` script. If a new table is needed, the script needs to be updated.
- In order to be able to selectively import data, some constraints had to be disabled.
A new db will be created for each snapshot (see this file for details). You can use any of them when running e2e tests. The default is `mainnet` and can be used like:
```typescript
beforeAll(async () => {
  database = setupDatabase();
  server = setupServer(database);
});

afterAll(async () => {
  await database.end();
});
```
If a different snapshot needs to be used (`mainnet` being the default), a db connection based on the snapshot name can be created:
```typescript
test('Launchpad', async () => {
  const launchpad = setupDatabase(process.env.DB_CONNECTION_STRING, 'launchpad');
  const launchpadServer = setupServer(launchpad);
  const response = await launchpadServer.inject({
    method: 'post',
    url: '/block',
    payload: generatePayload(1)
  });
  expect(response.statusCode).toEqual(StatusCodes.OK);
  expect(response.json().block.block_identifier.hash).toEqual(
    'da5cfaff39fde97c797cf5a6c3657d10b603cea1daa830f9a83b3a5cc62e4e8a'
  );
});
```
Pending improvements:

- Introduce a way to check the current schema in order to throw an error if `cardano-db-sync` is updated but doesn't match the snapshot (a sketch of one approach follows).
- Automate snapshot generation, as it is currently a semi-manual process.
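The first item could, for instance, compare the db's recorded schema version against a pinned value. This is a hypothetical sketch assuming cardano-db-sync's `schema_version` table; the pinned numbers and function name are placeholders, not part of this codebase:

```typescript
// Hypothetical sketch for the first pending improvement: fail fast if the
// db-sync schema doesn't match the one the snapshot was generated with.
import { Pool } from 'pg';

// Placeholder values; a real check would pin the version the snapshot was built with
const EXPECTED = { stage_one: 9, stage_two: 1, stage_three: 6 };

export const assertSchemaVersion = async (pool: Pool): Promise<void> => {
  // cardano-db-sync stores its migration state in the schema_version table
  const { rows } = await pool.query(
    'SELECT stage_one, stage_two, stage_three FROM schema_version LIMIT 1'
  );
  const actual = rows[0];
  if (
    !actual ||
    Number(actual.stage_one) !== EXPECTED.stage_one ||
    Number(actual.stage_two) !== EXPECTED.stage_two ||
    Number(actual.stage_three) !== EXPECTED.stage_three
  ) {
    throw new Error(`Unexpected cardano-db-sync schema version: ${JSON.stringify(actual)}`);
  }
};
```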