[spellcheck] Part 6: Spell check directory pytest (#12791)
shreyan-gupta authored Jan 24, 2025
1 parent fda9120 commit 02494d7
Showing 62 changed files with 153 additions and 104 deletions.
1 change: 0 additions & 1 deletion cspell.json
Original file line number Diff line number Diff line change
@@ -37,7 +37,6 @@
"bitmask",
"bitvec",
"BLOCKLIST",
"bootnodes",
"borsh",
"bufbuild",
"bytesize",
10 changes: 4 additions & 6 deletions pytest/README.md
@@ -8,7 +8,6 @@ a local test cluster using neard binary at `../target/debug/neard`.
There is also some capacity of starting the cluster on remote
machines.


## Running tests

### Running tests locally
@@ -43,7 +42,7 @@ used. The `../nightly/README.md` file describes this in more detail.

The test library has code for executing tests while running the nodes
on remote Google Cloud machines. Presumably that code worked in the
past but I, mina86, haven’t tried it and am a bit sceptical as to
past but I, mina86, haven’t tried it and am a bit skeptical as to
whether it is still functional. Regardless, for anyone who wants to
try it out, the instructions are as follows:

@@ -54,14 +53,13 @@ Prerequisites:

Steps:

1. Choose or upload a near binary here: https://console.cloud.google.com/storage/browser/nearprotocol_nearcore_release?project=near-core
1. Choose or upload a near binary here: <https://console.cloud.google.com/storage/browser/nearprotocol_nearcore_release?project=near-core>
2. Fill the binary filename in remote.json. Modify zones as needed,
they’ll be used in round-robin manner.
3. `NEAR_PYTEST_CONFIG=remote.json python tests/...`
4. Run `python tests/delete_remote_nodes.py` to make sure the remote
nodes are shut down properly (especially if tests failed early).


## Creating new tests

To add a test simply create a Python script inside of the `tests`
@@ -178,7 +176,7 @@ located in `../runtime/near-test-contracts/res` directory.
The `NAYDUCK=1`, `NIGHTLY_RUNNER=1` and `NAYDUCK_TIMEOUT=<timeout>`
environment variables are set when tests are run on NayDuck. If
necessary and no other option exists, the first two can be used to
change test’s behaviour to accommodate it running on the testing
change test’s behavior to accommodate it running on the testing
infrastructure as opposed to local machine. Meanwhile,
`NAYDUCK_TIMEOUT` specifies how much time in seconds test has to run
before NayDuck decides the test failed.
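The environment-variable handling described above can be sketched as follows. The variable names come from the README; the 180-second fallback timeout is an assumption for illustration, not something NayDuck guarantees:

```python
import os

def nayduck_settings():
    """Read the NayDuck-related environment variables described above."""
    on_nayduck = os.environ.get('NAYDUCK') == '1'
    nightly = os.environ.get('NIGHTLY_RUNNER') == '1'
    # Assumed fallback; NayDuck sets NAYDUCK_TIMEOUT itself when it runs tests.
    timeout = int(os.environ.get('NAYDUCK_TIMEOUT', '180'))
    return on_nayduck, nightly, timeout

on_nayduck, nightly, timeout = nayduck_settings()
```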
@@ -214,9 +212,9 @@ located in pytest/lib and are imported using the following statement:
`sys.path.append(str(pathlib.Path(__file__).resolve().parents[2] / 'lib'))`
In order to make VSCode see that import you can add this path to the python
extra paths config. In order to do so add the following into the settings file:

```
"python.analysis.extraPaths": [
"pytest/lib"
]
```

1 change: 1 addition & 0 deletions pytest/endtoend/endtoend.py
@@ -6,6 +6,7 @@
Account balances get exported to prometheus and can be used to detect when transactions stop affecting the world, aka the chain of testnet or mainnet.
We observe effects on the chain using public RPC endpoints to ensure canaries are running a fork.
cspell:words endtoend
python3 endtoend/endtoend.py
--ips <ip_node1,ip_node2>
--accounts <account_node1,account_node2>
2 changes: 2 additions & 0 deletions pytest/lib/branches.py
@@ -9,6 +9,7 @@
import semver
from configured_logger import logger

# cspell:words BASEHREF
_UNAME = os.uname()[0]
_IS_DARWIN = _UNAME == 'Darwin'
_BASEHREF = 'https://s3-us-west-1.amazonaws.com/build.nearprotocol.com'
@@ -131,6 +132,7 @@ def patch_binary(binary: pathlib.Path) -> None:
Currently only supports NixOS.
"""
# cspell:words patchelf nixpkgs nixos rpath
# Are we running on NixOS and require patching…?
try:
with open('/etc/os-release', 'r') as f:
9 changes: 5 additions & 4 deletions pytest/lib/cluster.py
@@ -26,6 +26,7 @@
from proxy import NodesProxy
import state_sync_lib

# cspell:ignore nretry pmap preemptible proxify uefi useragent
os.environ["ADVERSARY_CONSENT"] = "1"

remote_nodes = []
@@ -117,7 +118,7 @@ def make_boot_nodes_arg(boot_node: BootNode) -> typing.Tuple[str]:
Apart from `None` as described above, `boot_node` can be a [`BaseNode`]
object, or an iterable (think list) of [`BaseNode`] objects. The boot node
address of a BaseNode object is contstructed using [`BaseNode.addr_with_pk`]
address of a BaseNode object is constructed using [`BaseNode.addr_with_pk`]
method.
If iterable of nodes is given, the `neard` is going to be configured with
@@ -596,9 +597,9 @@ def kill(self, *, gentle=False):
self._process.wait(5)
self._process = None

def reload_updateable_config(self):
logger.info(f"Reloading updateable config for node {self.ordinal}.")
"""Sends SIGHUP signal to the process in order to trigger updateable config reload."""
def reload_updatable_config(self):
logger.info(f"Reloading updatable config for node {self.ordinal}.")
"""Sends SIGHUP signal to the process in order to trigger updatable config reload."""
self._process.send_signal(signal.SIGHUP)

def reset_data(self):
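As the docstring in the hunk above says, `reload_updatable_config` just delivers SIGHUP to the neard process. A standalone sketch of the same mechanism, using a throw-away `sleep` child instead of neard (the stand-in has no SIGHUP handler, so the default action terminates it):

```python
import signal
import subprocess

# Stand-in for a running neard; in cluster.py the process is self._process.
proc = subprocess.Popen(['sleep', '30'])

# Same call the method makes: deliver SIGHUP. neard installs a handler and
# reloads its dynamic config; our stand-in simply dies with that signal.
proc.send_signal(signal.SIGHUP)
proc.wait()
print(proc.returncode == -signal.SIGHUP)  # → True
```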
1 change: 1 addition & 0 deletions pytest/lib/configured_logger.py
@@ -23,6 +23,7 @@ def new_logger(
:param stderr: Optional to set. If outfile is not set, and stderr is set to True, then will log to stderr instead of stdout.
:return: The configured logger.
"""
# cspell:ignore levelname
# If name is not specified, create one so that this can be a separate logger.
if name is None:
name = f"logger_{uuid.uuid1()}"
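The `new_logger` helper in this hunk picks a stream (stderr vs. stdout vs. a file) and names the logger uniquely when no name is given. A rough standalone sketch of that behavior — the signature and format string are assumptions, not the real helper's API:

```python
import io
import logging
import sys
import uuid

def new_logger_sketch(name=None, level=logging.INFO, stream=None):
    """Sketch: a uniquely named logger writing to the given stream
    (stderr here stands in for the stderr=True case described above)."""
    if name is None:
        # If name is not specified, create one so this is a separate logger.
        name = f"logger_{uuid.uuid1()}"
    logger = logging.getLogger(name)
    logger.setLevel(level)
    handler = logging.StreamHandler(stream or sys.stderr)
    handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))
    logger.addHandler(handler)
    return logger

buf = io.StringIO()
new_logger_sketch(stream=buf).info("hello")
print(buf.getvalue().strip())  # → INFO hello
```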
1 change: 1 addition & 0 deletions pytest/lib/key.py
@@ -6,6 +6,7 @@
from nacl.signing import SigningKey


# cspell:ignore urandom
class Key:
account_id: str
pk: str
30 changes: 17 additions & 13 deletions pytest/lib/mocknet.py
@@ -20,6 +20,7 @@
from metrics import Metrics
from transaction import sign_payment_tx_and_get_hash, sign_staking_tx_and_get_hash

# cspell:ignore tmpl zxcv gmtime loadtester pkeys
DEFAULT_KEY_TARGET = '/tmp/mocknet'
KEY_TARGET_ENV_VAR = 'NEAR_PYTEST_KEY_TARGET'
# NODE_SSH_KEY_PATH = '~/.ssh/near_ops'
@@ -197,7 +198,7 @@ def start_load_test_helper_script(
contract_deploy_time=shlex.quote(str(contract_deploy_time)),
)
logger.info(
f'Starting load test helper. Node accound id: {node_account_id}.')
f'Starting load test helper. Node account id: {node_account_id}.')
logger.debug(f'The load test helper script is:{s}')
return s

@@ -367,7 +368,7 @@ def send_transaction(node, tx, tx_hash, account_id, timeout=120):
error_data = response['error']['data']
if 'timeout' in error_data.lower():
logger.warning(
f'transaction {tx_hash} returned Timout, checking status again.'
f'transaction {tx_hash} returned Timeout, checking status again.'
)
time.sleep(5)
response = node.get_tx(tx_hash, account_id)
@@ -452,7 +453,7 @@ def accounts_from_nodes(nodes):
return pmap(get_validator_account, nodes)


def kill_proccess_script(pid):
def kill_process_script(pid):
return f'''
sudo kill {pid}
while kill -0 {pid}; do
@@ -482,7 +483,7 @@ def stop_node(node):
pids = get_near_pid(m).split()

for pid in pids:
m.run('bash', input=kill_proccess_script(pid))
m.run('bash', input=kill_process_script(pid))
m.run('sudo -u ubuntu -i', input=TMUX_STOP_SCRIPT)


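The renamed `kill_process_script` is a kill-then-poll shell loop (`sudo kill $pid`, then `while kill -0 $pid`). The same pattern has a straightforward Python analogue, sketched here on a local child process without the `sudo` the remote machines require:

```python
import signal
import subprocess
import time

def kill_and_wait(proc: subprocess.Popen, poll_interval: float = 0.1) -> None:
    """Terminate, then poll until the process is gone — the analogue of
    kill_process_script's `while kill -0 $pid` loop."""
    proc.send_signal(signal.SIGTERM)
    while proc.poll() is None:  # poll() plays the role of `kill -0`
        time.sleep(poll_interval)

# Demo on a throw-away process (the real script targets a remote neard PID).
proc = subprocess.Popen(['sleep', '30'])
kill_and_wait(proc)
print(proc.returncode)  # → -15 (killed by SIGTERM)
```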
@@ -501,6 +502,7 @@ def compress_and_upload(nodes, src_filename, dst_filename):
nodes)


# cspell:ignore redownload
def redownload_neard(nodes, binary_url):
pmap(
lambda node: node.machine.
@@ -1030,14 +1032,16 @@ def update_config_file(
json.dump(config_json, f, indent=2)


def upload_config(node, config_json, overrider):
def upload_config(node, config_json, override_fn):
copied_config = json.loads(json.dumps(config_json))
if overrider:
overrider(node, copied_config)
if override_fn:
override_fn(node, copied_config)
upload_json(node, '/home/ubuntu/.near/config.json', copied_config)


def create_and_upload_config_file_from_default(nodes, chain_id, overrider=None):
def create_and_upload_config_file_from_default(nodes,
chain_id,
override_fn=None):
nodes[0].machine.run(
'rm -rf /home/ubuntu/.near-tmp && mkdir /home/ubuntu/.near-tmp && /home/ubuntu/neard --home /home/ubuntu/.near-tmp init --chain-id {}'
.format(chain_id))
@@ -1056,21 +1060,21 @@ def create_and_upload_config_file_from_default(nodes, chain_id, overrider=None):
if 'telemetry' in config_json:
config_json['telemetry']['endpoints'] = []

pmap(lambda node: upload_config(node, config_json, overrider), nodes)
pmap(lambda node: upload_config(node, config_json, override_fn), nodes)


def update_existing_config_file(node, overrider=None):
def update_existing_config_file(node, override_fn=None):
config_json = download_and_read_json(
node,
'/home/ubuntu/.near/config.json',
)
overrider(node, config_json)
override_fn(node, config_json)
upload_json(node, '/home/ubuntu/.near/config.json', config_json)


def update_existing_config_files(nodes, overrider=None):
def update_existing_config_files(nodes, override_fn=None):
pmap(
lambda node: update_existing_config_file(node, overrider=overrider),
lambda node: update_existing_config_file(node, override_fn=override_fn),
nodes,
)

1 change: 1 addition & 0 deletions pytest/lib/network.py
@@ -15,6 +15,7 @@ def _run_process(cmd):

def init_network_pillager():
_run_process(["mkdir", "-p", "/sys/fs/cgroup/net_cls/block"])
# cspell:ignore classid
try:
with open("/sys/fs/cgroup/net_cls/block/net_cls.classid", 'w') as f:
f.write("42")
1 change: 1 addition & 0 deletions pytest/lib/proxy.py
@@ -24,6 +24,7 @@
# all the nodes if the parameter is not None
#
# See `tests/sanity/nodes_proxy.py` for an example usage
# cspell:ignore proxified proxifies proxify

import asyncio
import atexit
4 changes: 3 additions & 1 deletion pytest/lib/utils.py
@@ -78,7 +78,7 @@ class LogTracker:
"""

def __init__(self, node: cluster.BaseNode) -> None:
"""Initialises the tracker for given local node.
"""Initializes the tracker for given local node.
Args:
node: Node to create tracker for.
@@ -237,6 +237,7 @@ def get_near_tempdir(subdir=None, *, clean=False):


def load_binary_file(filepath):
# cspell:ignore binaryfile
with open(filepath, "rb") as binaryfile:
return bytearray(binaryfile.read())

@@ -255,6 +256,7 @@ def load_test_contract(


def user_name():
# cspell:ignore getlogin
username = os.getlogin()
if username == 'root': # digitalocean
username = gcloud.list()[0].username.replace('_nearprotocol_com', '')
3 changes: 2 additions & 1 deletion pytest/tests/contracts/gibberish.py
@@ -1,7 +1,7 @@
#!/usr/bin/env python3
# Experiments with deploying gibberish contracts. Specifically,
# 1. Deploys completely gibberish contracts
# 2. Gets an existing wasm contract, and tries to arbitrarily pertrurb bytes in it
# 2. Gets an existing wasm contract, and tries to arbitrarily perturb bytes in it

import sys, time, random
import base58
@@ -42,6 +42,7 @@
hash_ = nodes[0].get_latest_block().hash_bytes
logger.info("Deploying perturbed contract #%s" % iter_)

# cspell:words mething
new_name = '%s_mething' % iter_
new_output = '%s_llo' % iter_

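The byte-perturbation idea in gibberish.py — taking a valid wasm contract and arbitrarily flipping bytes in it — can be sketched as below. The helper name and RNG handling are illustrative; the actual test has its own perturbation code:

```python
import random

def perturb(blob: bytes, num_flips: int, seed: int = 0) -> bytearray:
    """Flip a few distinct bytes of an otherwise valid blob, the way the
    test perturbs a wasm contract before redeploying it."""
    rng = random.Random(seed)
    out = bytearray(blob)
    for i in rng.sample(range(len(out)), num_flips):
        out[i] ^= rng.randrange(1, 256)  # non-zero XOR always changes the byte
    return out

original = bytes(range(64))
mutated = perturb(original, 4)
print(bytes(mutated) != original, len(mutated) == len(original))  # → True True
```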
13 changes: 9 additions & 4 deletions pytest/tests/loadtest/locust/README.md
@@ -7,11 +7,13 @@ to set up the network under test is outside the scope of this document. This is
only about generating the load.

## Install

```sh
# Run in nearcore directory. Locust is installed as a part of these dependencies.
pip3 install -r pytest/requirements.txt
```

<!-- cspell:ignore pyopenssl -->
*Note: You will need a working python3 / pip3 environment. While the code is
written in a backwards compatible way, modern OSs with modern python are
preferred. Completely independent of locust, you may run into problems with
@@ -22,17 +24,20 @@ error messages involving `X509_V_FLAG_CB_ISSUER_CHECK`.*

The load generator needs access to an account key with plenty of tokens.
For a local test setup, this works just fine.

```sh
# This assumes you are running against localnet
KEY=~/.near/localnet/node0/validator_key.json
```

For a quick demo, you can also run a localnet using [nearup](https://github.com/near/nearup).

```sh
nearup run localnet --binary-path ../nearcore/target/release/ --num-nodes 4 --num-shards 4 --override
```

Then to actually run it, this is the command. (Update ports and IP according to your localnet, nearup will print it.)

```sh
cd pytest/tests/loadtest/locust/
locust -H 127.0.0.1:3030 \
@@ -56,6 +61,7 @@ approaches anything close to 100%, you should use more workers.
Luckily, Locust has the ability to swarm the load generation across many processes.

The simplest way to do this on a single machine is to use `--processes` argument:

```sh
locust -H 127.0.0.1:3030 \
-f locustfiles/ft.py \
@@ -65,11 +71,11 @@ locust -H 127.0.0.1:3030 \

This will spawn 8 Locust Python processes, each capable of fully utilizing one CPU core.
According to the current measurements, Locust on a single CPU core can send 500 transactions per
second, and this number linearly scales with the number of processes.
second, and this number linearly scales with the number of processes.

To scale further to multiple machines, start one process with the `--master` argument and as many as
you like with `--worker`. (If they run on different machines, you also need to provide
`--master-host` and `--master-port`, if running on the same machine it will work automagically.)
`--master-host` and `--master-port`, if running on the same machine it will work automatically.)

Start the master:

@@ -141,7 +147,7 @@ Currently supported load types:
| Sweat (normal load) | sweat.py | (`--sweat-wasm $WASM_PATH`) | Creates a single instance of the SWEAT contract. A mix of FT transfers and batch minting with batch sizes comparable to mainnet observations in summer 2023. |
| Sweat (storage stress test) | sweat.py | `--tags=storage-stress-test` <br> (`--sweat-wasm $WASM_PATH`) | Creates a single instance of the SWEAT contract. Sends maximally large batches to mint more tokens, thereby touching many storage nodes per receipt. This load will take a while to initialize enough Sweat users on chain. |
| Sweat (claim) | sweat.py | `--tags=claim-test` <br> (`--sweat-wasm $WASM_PATH`) <br> (`--sweat-claim-wasm $WASM_PATH`) | Creates a single instance of the SWEAT and SWEAT.CLAIM contract. Sends deferred batches to mint more tokens, thereby touching many storage nodes per receipt. Then calls balance checks that iterate through populated state. |
| Minting inscriptions | inscription.py | (`--inscription-wasm $WASM_PATH`) | Creates a single insctance of the inscription contract and spawns multiple users who mint inscriptions using this contract. |
| Minting inscriptions | inscription.py | (`--inscription-wasm $WASM_PATH`) | Creates a single instance of the inscription contract and spawns multiple users who mint inscriptions using this contract. |

## Notes on Storage Stress Test

@@ -169,7 +175,6 @@ avoid this, you can stop and restart tests from within the UI. This way, they
will remember the account list and start the next test immediately, without long
setup.


### Master Key Requirements

The `--funding-key` provided must always have enough balance to fund many users.
5 changes: 3 additions & 2 deletions pytest/tests/loadtest/locust/common/base.py
@@ -32,7 +32,7 @@
logger = new_logger(level=logging.WARN)

# This is used to make the specific tests wait for the do_on_locust_init function
# to initialize the funding account, before initializating the users.
# to initialize the funding account, before initializing the users.
INIT_DONE = threading.Event()


@@ -960,6 +961,7 @@ def _(parser):

class TestEvaluateRpcResult(unittest.TestCase):

# cspell:disable
def test_smart_contract_panic(self):
input = """{
"result": {
@@ -981,7 +982,7 @@ def test_smart_contract_panic(self):
"index": 0,
"kind": {
"FunctionCallError": {
"ExecutionError": "Smart contract panicked: The account doesnt have enough balance"
"ExecutionError": "Smart contract panicked: The account doesn't have enough balance"
}
}
}