[spellcheck] Part 1: Spell check directories benchmarks and chain (#1…
shreyan-gupta authored Jan 24, 2025
1 parent 5318b06 commit 5b32984
Showing 122 changed files with 527 additions and 413 deletions.
2 changes: 2 additions & 0 deletions benchmarks/continuous/db/Justfile
@@ -9,6 +9,7 @@ db_host := "34.90.190.128"
db_name := "benchmarks"
db_port := "5432"

# cspell:words sqlfluff
# Check for SQL errors.
lint_sql path=".":
sqlfluff lint {{path}} --dialect {{sql_dialect}}
@@ -18,6 +19,7 @@ lint_sql path=".":
fix_sql path=".":
sqlfluff fix {{path}} --dialect {{sql_dialect}}

# cspell:ignore psql pgpass PGPASSFILE PGSSLMODE dbname
# Connect to Cloud SQL with psql.
#
# Expects the password to be stored locally in your `~/.pgpass` file.
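
For context on the directives added above: cspell reads per-file directives out of ordinary comments, so they work the same way in a Justfile, a Markdown file, or Rust source. A minimal sketch of the two forms used in this commit, shown here in a Rust file and assuming a standard cspell setup (the function is just a placeholder):

    // cspell:words sqlfluff
    // The directive above registers "sqlfluff" as a known word for this file.
    // cspell:ignore pgpass PGSSLMODE
    // The directive above tells cspell not to flag these exact tokens in this file.
    fn main() {
        // At compile time these are ordinary comments; only the spell checker reads them.
        println!("cspell directives are parsed from comments");
    }
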
2 changes: 1 addition & 1 deletion benchmarks/continuous/db/README.md
@@ -40,5 +40,5 @@ select setting from pg_settings where name = 'max_connections';
```

## Remote connection

<!-- cspell:words psql -->
To connect to the database remotely, you can execute the `psql` recipe in the [`Justfile`](./Justfile).
4 changes: 3 additions & 1 deletion benchmarks/continuous/db/tool/README.md
@@ -1,8 +1,10 @@
# Requirements

<!-- cspell:ignore libpq pgpass -->

- An installation of [`libpq`](https://www.postgresql.org/docs/15/libpq.html).
- The name of the package providing `libpq` differs across operating systems and package managers. For example on Ubuntu you can install `libpq-dev`.
- A `~/.pggass` file with an entry matching the db URL (see [dbprofile](./dbprofile)). Wrong password file setup may lead to unintuitive error messages, therefore it is recommended to read [the docs](https://www.postgresql.org/docs/15/libpq-pgpass.html) to get the following two points right:
- A `~/.pgpass` file with an entry matching the db URL (see [dbprofile](./dbprofile)). Wrong password file setup may lead to unintuitive error messages, therefore it is recommended to read [the docs](https://www.postgresql.org/docs/15/libpq-pgpass.html) to get the following two points right:
- Format of password entries.
- `.pgpass` file permissions.

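
To make the two `.pgpass` points above concrete: each entry uses the format `hostname:port:database:username:password`, and libpq ignores the file unless only the owner can read and write it (mode 0600). A minimal, Unix-only Rust sketch under those assumptions; the host, database, user, and password below are placeholders, not the real benchmark credentials:

    use std::fs::{self, OpenOptions};
    use std::io::Write;
    use std::os::unix::fs::PermissionsExt;

    fn main() -> std::io::Result<()> {
        // One entry per line: hostname:port:database:username:password (placeholders only).
        let entry = "db.example.org:5432:benchmarks:bench_user:REPLACE_ME\n";
        let path = std::env::var("HOME").expect("HOME not set") + "/.pgpass";

        let mut file = OpenOptions::new().create(true).append(true).open(&path)?;
        file.write_all(entry.as_bytes())?;

        // libpq refuses to use ~/.pgpass unless its permissions are 0600.
        fs::set_permissions(&path, fs::Permissions::from_mode(0o600))?;
        Ok(())
    }
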
1 change: 1 addition & 0 deletions benchmarks/continuous/db/tool/orm/src/models.rs
@@ -4,6 +4,7 @@ use serde::Deserialize;

use crate::schema::ft_transfers;

// cspell:ignore Insertable
#[derive(Insertable, Deserialize)]
#[diesel(table_name = ft_transfers)]
#[diesel(check_for_backend(diesel::pg::Pg))]
1 change: 1 addition & 0 deletions benchmarks/continuous/db/tool/orm/src/schema.rs
@@ -1,4 +1,5 @@
// @generated automatically by Diesel CLI.
// cspell:ignore Timestamptz

diesel::table! {
ft_transfers (id) {
2 changes: 1 addition & 1 deletion benchmarks/synth-bm/src/rpc.rs
@@ -97,7 +97,7 @@ impl RpcResponseHandler {
Some(res) => res,
None => {
warn!(
"Expectet {} responses but channel closed after {num_received}",
"Expected {} responses but channel closed after {num_received}",
self.num_expected_responses
);
break;
2 changes: 1 addition & 1 deletion chain/chain-primitives/src/error.rs
@@ -418,7 +418,7 @@ impl Error {
Error::NoParentShardId(_) => "no_parent_shard_id",
Error::InvalidStateRequest(_) => "invalid_state_request",
Error::InvalidRandomnessBeaconOutput => "invalid_randomness_beacon_output",
Error::InvalidBlockMerkleRoot => "invalid_block_merkele_root",
Error::InvalidBlockMerkleRoot => "invalid_block_merkle_root",
Error::InvalidProtocolVersion => "invalid_protocol_version",
Error::NotAValidator(_) => "not_a_validator",
Error::NotAChunkValidator => "not_a_chunk_validator",
8 changes: 4 additions & 4 deletions chain/chain/src/doomslug.rs
@@ -33,7 +33,7 @@ const MAX_HISTORY_SIZE: usize = 1000;
/// `TwoThirds` means the block can only be produced if at least 2/3 of the stake is approving it,
/// and is what should be used in production (and what guarantees finality)
/// `NoApprovals` means the block production is not blocked on approvals. This is used
/// in many tests (e.g. `cross_shard_tx`) to create lots of forkfulness.
/// in many tests (e.g. `cross_shard_tx`) to create lots of forks.
#[derive(PartialEq, Eq, Debug, Clone, Copy)]
pub enum DoomslugThresholdMode {
NoApprovals,
@@ -138,7 +138,7 @@ pub struct Doomslug {
/// Information to track the timer (see `start_timer` routine in the paper)
timer: DoomslugTimer,
/// How many approvals to have before producing a block. In production should be always `HalfStake`,
/// but for many tests we use `NoApprovals` to invoke more forkfulness
/// but for many tests we use `NoApprovals` to invoke more forks
threshold_mode: DoomslugThresholdMode,

/// Approvals that were created by this doomslug instance (for debugging only).
@@ -405,7 +405,7 @@ impl Doomslug {
}

/// Returns the largest height for which we have enough approvals to be theoretically able to
/// produce a block (in practice a blocks might not be produceable yet if not enough time
/// produce a block (in practice a blocks might not be producible yet if not enough time
/// passed since it accumulated enough approvals)
pub fn get_largest_height_crossing_threshold(&self) -> BlockHeight {
self.largest_threshold_height.get()
@@ -824,7 +824,7 @@ mod tests {
}
}

// Not processing a block at height 2 should not produce an appoval
// Not processing a block at height 2 should not produce an approval
ds.set_tip(hash(&[2]), 2, 0);
clock.advance(Duration::milliseconds(400));
assert_eq!(ds.process_timer(&signer), vec![]);
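
To make the `TwoThirds` vs `NoApprovals` distinction described in this file's doc comments concrete, here is a simplified sketch (not the actual Doomslug implementation) of how a threshold mode can gate block production on approved stake:

    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    enum ThresholdMode {
        // Block production is not blocked on approvals; used in tests to create forks.
        NoApprovals,
        // Requires at least 2/3 of the stake to approve; the production setting.
        TwoThirds,
    }

    // Returns true if a block may be produced given the currently approving stake.
    fn can_produce_block(mode: ThresholdMode, approved_stake: u128, total_stake: u128) -> bool {
        match mode {
            ThresholdMode::NoApprovals => true,
            // Integer form of approved/total >= 2/3, avoiding floating point.
            ThresholdMode::TwoThirds => approved_stake * 3 >= total_stake * 2,
        }
    }

    fn main() {
        assert!(can_produce_block(ThresholdMode::NoApprovals, 0, 90));
        assert!(can_produce_block(ThresholdMode::TwoThirds, 60, 90));
        assert!(!can_produce_block(ThresholdMode::TwoThirds, 59, 90));
    }
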
4 changes: 2 additions & 2 deletions chain/chain/src/flat_storage_resharder.rs
@@ -79,7 +79,7 @@ pub struct FlatStorageResharder {
resharding_config: MutableConfigValue<ReshardingConfig>,
#[cfg(feature = "test_features")]
/// TEST ONLY.
/// If non zero, the start of schedulable tasks (such as split parent) will be postponed by
/// If non zero, the start of scheduled tasks (such as split parent) will be postponed by
/// the specified number of blocks.
pub adv_task_delay_by_blocks: BlockHeightDelta,
}
@@ -1204,7 +1204,7 @@ pub enum TaskExecutionStatus {
NotStarted,
}

/// Result of a schedulable flat storage resharding task.
/// Result of a scheduled flat storage resharding task.
#[derive(Clone, Debug, Copy, Eq, PartialEq)]
pub enum FlatStorageReshardingTaskResult {
Successful { num_batches_done: usize },
2 changes: 1 addition & 1 deletion chain/chain/src/garbage_collection.rs
@@ -984,7 +984,7 @@ impl<'a> ChainStoreUpdate<'a> {
store_update.delete(col, key);
}
DBCol::BlockPerHeight => {
panic!("Must use gc_col_glock_per_height method to gc DBCol::BlockPerHeight");
panic!("Must use gc_col_block_per_height method to gc DBCol::BlockPerHeight");
}
DBCol::TransactionResultForBlock => {
store_update.delete(col, key);
2 changes: 1 addition & 1 deletion chain/chain/src/orphan.rs
@@ -373,7 +373,7 @@ impl Chain {
num_orphans = self.orphans.len())
.entered();
// Check if there are orphans we can process.
// check within the descendents of `prev_hash` to see if there are orphans there that
// check within the descendants of `prev_hash` to see if there are orphans there that
// are ready to request missing chunks for
let orphans_to_check =
self.orphans.get_orphans_within_depth(prev_hash, NUM_ORPHAN_ANCESTORS_CHECK);
1 change: 1 addition & 0 deletions chain/chain/src/runtime/mod.rs
@@ -1347,6 +1347,7 @@ fn calculate_transactions_size_limit(
.try_into()
.expect("Can't convert usize to u64!")
} else {
// cspell:words roundtripping
// In general, we limit the number of transactions via send_fees.
// However, as a second line of defense, we want to limit the byte size
// of transaction as well. Rather than introducing a separate config for
6 changes: 3 additions & 3 deletions chain/chain/src/runtime/tests.rs
@@ -65,7 +65,7 @@ struct TestEnvConfig {
create_flat_storage: bool,
}

/// Environment to test runtime behaviour separate from Chain.
/// Environment to test runtime behavior separate from Chain.
/// Runtime operates in a mock chain where i-th block is attached to (i-1)-th one, has height `i` and hash
/// `hash([i])`.
struct TestEnv {
@@ -153,7 +153,7 @@ impl TestEnv {
let genesis_hash = hash(&[0]);

if config.create_flat_storage {
// Create flat storage. Naturally it happens on Chain creation, but here we test only Runtime behaviour
// Create flat storage. Naturally it happens on Chain creation, but here we test only Runtime behavior
// and use a mock chain, so we need to initialize flat storage manually.
let flat_storage_manager = runtime.get_flat_storage_manager();
for shard_uid in
@@ -1555,7 +1555,7 @@ fn test_genesis_hash() {
}

/// Creates a signed transaction between each pair of `signers`,
/// where transactions outcoming from a single signer differ by nonce.
/// where transaction outcomes from a single signer differ by nonce.
/// The transactions are then shuffled and used to fill a transaction pool.
fn generate_transaction_pool(signers: &Vec<Signer>, block_hash: CryptoHash) -> TransactionPool {
const TEST_SEED: RngSeed = [3; 32];
4 changes: 2 additions & 2 deletions chain/chain/src/stateless_validation/metrics.rs
@@ -120,7 +120,7 @@ pub static CHUNK_STATE_WITNESS_DECODE_TIME: LazyLock<HistogramVec> = LazyLock::n
.unwrap()
});

pub(crate) static CHUNK_STATE_WITNESS_MAIN_STATE_TRANSISTION_SIZE: LazyLock<HistogramVec> =
pub(crate) static CHUNK_STATE_WITNESS_MAIN_STATE_TRANSITION_SIZE: LazyLock<HistogramVec> =
LazyLock::new(|| {
try_create_histogram_vec(
"near_chunk_state_witness_main_state_transition_size",
@@ -186,7 +186,7 @@ fn record_witness_size_metrics_fallible(
CHUNK_STATE_WITNESS_TOTAL_SIZE
.with_label_values(&[&shard_id.as_str()])
.observe(encoded_size as f64);
CHUNK_STATE_WITNESS_MAIN_STATE_TRANSISTION_SIZE
CHUNK_STATE_WITNESS_MAIN_STATE_TRANSITION_SIZE
.with_label_values(&[shard_id.as_str()])
.observe(borsh::to_vec(&witness.main_state_transition)?.len() as f64);
CHUNK_STATE_WITNESS_NEW_TRANSACTIONS_SIZE
1 change: 1 addition & 0 deletions chain/chain/src/store/latest_witnesses.rs
@@ -159,6 +159,7 @@ impl ChainStore {
),
)
})?;
// cspell:words deser
let key_deser = LatestWitnessesKey::deserialize(&key_to_delete)?;

store_update.delete(DBCol::LatestChunkStateWitnesses, &key_to_delete);
1 change: 1 addition & 0 deletions chain/chain/src/store/merkle_proof.rs
@@ -222,6 +222,7 @@ mod tests {
}

fn verify_proof(&self, index: u64, against: u64, proof: &MerklePath) {
// cspell:words provee
let provee = self.block_hashes[index as usize];
let root = self.block_merkle_roots[against as usize];
assert!(verify_hash(root, proof, provee));
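
As background for the `verify_hash(root, proof, provee)` check above, this is the usual Merkle-path verification shape: rehash the item together with each sibling along the path and compare the result to the expected root. A simplified sketch with std's `DefaultHasher` standing in for the real cryptographic hash (nearcore's actual `CryptoHash` and `MerklePath` types are not used here):

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    #[derive(Clone, Copy)]
    enum Side {
        Left,
        Right,
    }

    // One step of the proof: the sibling hash and which side it sits on.
    struct PathItem {
        sibling: u64,
        side: Side,
    }

    fn combine(left: u64, right: u64) -> u64 {
        let mut h = DefaultHasher::new();
        (left, right).hash(&mut h);
        h.finish()
    }

    // Recompute the root from the leaf and the proof path, then compare with the expected root.
    fn verify_path(root: u64, path: &[PathItem], leaf: u64) -> bool {
        let mut acc = leaf;
        for item in path {
            acc = match item.side {
                Side::Left => combine(item.sibling, acc),
                Side::Right => combine(acc, item.sibling),
            };
        }
        acc == root
    }

    fn main() {
        // Two-leaf tree: root = combine(leaf_a, leaf_b); the proof for leaf_a is its right sibling.
        let (leaf_a, leaf_b) = (1u64, 2u64);
        let root = combine(leaf_a, leaf_b);
        let proof_for_a = [PathItem { sibling: leaf_b, side: Side::Right }];
        assert!(verify_path(root, &proof_for_a, leaf_a));
    }
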
6 changes: 3 additions & 3 deletions chain/chain/src/test_utils/kv_runtime.rs
@@ -171,13 +171,13 @@ impl MockEpochManager {
.collect();

let validators_per_shard = block_producers.len() / vs.validator_groups as usize;
let coef = block_producers.len() / vs.num_shards as usize;
let coefficient = block_producers.len() / vs.num_shards as usize;

let chunk_producers: Vec<Vec<ValidatorStake>> = (0..vs.num_shards)
.map(|shard_index| {
let shard_index = shard_index as usize;
let offset =
shard_index * coef / validators_per_shard * validators_per_shard;
shard_index * coefficient / validators_per_shard * validators_per_shard;
block_producers[offset..offset + validators_per_shard].to_vec()
})
.collect();
@@ -361,7 +361,7 @@ impl KeyValueRuntime {
shard_layout.shard_ids().map(|_| Trie::EMPTY_ROOT).collect();
set_genesis_state_roots(&mut store_update, &genesis_roots);
set_genesis_hash(&mut store_update, &CryptoHash::default());
store_update.commit().expect("Store failed on genesis intialization");
store_update.commit().expect("Store failed on genesis initialization");

Arc::new(KeyValueRuntime {
store,
2 changes: 1 addition & 1 deletion chain/chain/src/tests/simple_chain.rs
@@ -21,7 +21,7 @@ fn build_chain() {
// The hashes here will have to be modified after changes to the protocol.
// In particular if you update protocol version or add new protocol
// features. If this assert is failing without you adding any new or
// stabilising any existing protocol features, this indicates bug in your
// stabilizing any existing protocol features, this indicates bug in your
// code which unexpectedly changes the protocol.
//
// To update the hashes you can use cargo-insta. Note that you’ll need to
2 changes: 1 addition & 1 deletion chain/chain/src/types.rs
@@ -603,7 +603,7 @@ mod tests {
}

#[test]
fn test_execution_outcome_merklization() {
fn test_execution_outcome_merkelization() {
let outcome1 = ExecutionOutcomeWithId {
id: Default::default(),
outcome: ExecutionOutcome {
8 changes: 4 additions & 4 deletions chain/chunks/src/shards_manager_actor.rs
@@ -487,7 +487,7 @@ impl ShardsManagerActor {
}

// Note: If request_from_archival is true, we potentially call
// get_part_owner unnecessarily. It’s probably not worth optimising
// get_part_owner unnecessarily. It’s probably not worth optimizing
// though unless you can think of a concise way to do it.
let part_owner = self.epoch_manager.get_part_owner(&epoch_id, part_ord)?;
let we_own_part = Some(&part_owner) == me;
@@ -833,7 +833,7 @@ impl ShardsManagerActor {
}
}

/// Resends chunk requests if haven't received it within expected time.
/// Resend chunk requests if haven't received it within expected time.
pub fn resend_chunk_requests(&mut self) {
let _span = tracing::debug_span!(
target: "client",
@@ -1516,13 +1516,13 @@ impl ShardsManagerActor {
if !self.encoded_chunks.height_within_horizon(header.height_created()) {
return Err(Error::ChainError(near_chain::Error::InvalidChunkHeight));
}
// We shouldn't process unrequested chunk if we have seen one with same (height_created + shard_id) but different chunk_hash
// We shouldn't process un-requested chunk if we have seen one with same (height_created + shard_id) but different chunk_hash
if let Some(hash) = self
.encoded_chunks
.get_chunk_hash_by_height_and_shard(header.height_created(), header.shard_id())
{
if hash != &chunk_hash {
warn!(target: "client", "Rejecting unrequested chunk {:?}, height {}, shard_id {}, because of having {:?}", chunk_hash, header.height_created(), header.shard_id(), hash);
warn!(target: "client", "Rejecting un-requested chunk {:?}, height {}, shard_id {}, because of having {:?}", chunk_hash, header.height_created(), header.shard_id(), hash);
return Err(Error::DuplicateChunkHeight);
}
}
2 changes: 1 addition & 1 deletion chain/client-primitives/src/debug.rs
@@ -162,7 +162,7 @@ pub struct ValidatorStatus {
pub shards: u64,
// Current height.
pub head_height: u64,
// Current validators with their stake (stake is in NEAR - not yoctonear).
// Current validators with their stake (stake is in NEAR - not yocto near).
pub validators: Option<Vec<(AccountId, u64)>>,
// All approvals that we've sent.
pub approval_history: Vec<ApprovalHistoryEntry>,
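
For reference on the unit note above: on-chain balances are denominated in yoctoNEAR, where 1 NEAR = 10^24 yoctoNEAR, so a debug view that reports stake in whole NEAR divides accordingly. A minimal conversion sketch (rounding down; illustration only):

    // 1 NEAR = 10^24 yoctoNEAR; on-chain amounts are u128 values in yoctoNEAR.
    const YOCTO_PER_NEAR: u128 = 1_000_000_000_000_000_000_000_000;

    // Convert a yoctoNEAR balance to whole NEAR, rounding down.
    fn yocto_to_near(yocto: u128) -> u64 {
        (yocto / YOCTO_PER_NEAR) as u64
    }

    fn main() {
        assert_eq!(yocto_to_near(2_500_000_000_000_000_000_000_000), 2);
    }
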
23 changes: 10 additions & 13 deletions chain/client/src/client.rs
@@ -33,7 +33,7 @@ use near_chain::{
BlockProcessingArtifact, BlockStatus, Chain, ChainGenesis, ChainStoreAccess, Doomslug,
DoomslugThresholdMode, Provenance,
};
use near_chain_configs::{ClientConfig, MutableValidatorSigner, UpdateableClientConfig};
use near_chain_configs::{ClientConfig, MutableValidatorSigner, UpdatableClientConfig};
use near_chunks::adapter::ShardsManagerRequestFromClient;
use near_chunks::client::ShardedTransactionPool;
use near_chunks::logic::{decode_encoded_chunk, persist_chunk};
@@ -118,7 +118,7 @@ pub struct CatchupState {

pub struct Client {
/// Adversarial controls - should be enabled only to test disruptive
/// behaviour on chain.
/// behavior on chain.
#[cfg(feature = "test_features")]
pub adv_produce_blocks: Option<AdvProduceBlocksMode>,
#[cfg(feature = "test_features")]
@@ -210,10 +210,7 @@ impl AsRef<Client> for Client {
}

impl Client {
pub(crate) fn update_client_config(
&self,
update_client_config: UpdateableClientConfig,
) -> bool {
pub(crate) fn update_client_config(&self, update_client_config: UpdatableClientConfig) -> bool {
let mut is_updated = false;
is_updated |= self.config.expected_shutdown.update(update_client_config.expected_shutdown);
is_updated |= self.config.resharding_config.update(update_client_config.resharding_config);
@@ -1100,7 +1097,7 @@ impl Client {
);
if let Some(limit) = prepared_transactions.limited_by {
// When some transactions from the pool didn't fit into the chunk due to a limit, it's reported in a metric.
metrics::PRODUCED_CHUNKS_SOME_POOL_TRANSACTIONS_DIDNT_FIT
metrics::PRODUCED_CHUNKS_SOME_POOL_TRANSACTIONS_DID_NOT_FIT
.with_label_values(&[&shard_id.to_string(), limit.as_ref()])
.inc();
}
@@ -1114,8 +1111,8 @@
}

/// Calculates the root of receipt proofs.
/// All receipts are groupped by receiver_id and hash is calculated
/// for each such group. Then we merkalize these hashes to calculate
/// All receipts are grouped by receiver_id and hash is calculated
/// for each such group. Then we merklize these hashes to calculate
/// the receipts root.
///
/// Receipts root is used in the following ways:
@@ -1838,7 +1835,7 @@ impl Client {
}
}

// Run shadown chunk validation on the new block, unless it's coming from sync.
// Run shadow chunk validation on the new block, unless it's coming from sync.
// Syncing has to be fast to catch up with the rest of the chain,
// applying the chunks would make the sync unworkably slow.
if provenance != Provenance::SYNC {
@@ -1870,7 +1867,7 @@ impl Client {
match status {
BlockStatus::Next => {
// If this block immediately follows the current tip, remove
// transactions from the txpool.
// transactions from the tx pool.
self.remove_transactions_for_block(validator_id, block).unwrap_or_default();
}
BlockStatus::Fork => {
@@ -2839,14 +2836,14 @@ impl Client {
// With the current implementation we just fetch chunk producers and block producers
// of this and the next epoch (which covers what we need, as described above), but may
// require some tuning in the future. In particular, if we decide that connecting to
// block & chunk producers of the next expoch is too expensive, we can postpone it
// block & chunk producers of the next epoch is too expensive, we can postpone it
// till almost the end of this epoch.
let mut account_keys = AccountKeys::new();
for epoch_id in [&tip.epoch_id, &tip.next_epoch_id] {
// We assume here that calls to get_epoch_chunk_producers and get_epoch_block_producers_ordered
// are cheaper than block processing (and that they will work with both this and
// the next epoch). The caching on top of that (in tier1_accounts_cache field) is just
// a defence in depth, based on the previous experience with expensive
// a defense in depth, based on the previous experience with expensive
// EpochManagerAdapter::get_validators_info call.
for cp in self.epoch_manager.get_epoch_chunk_producers(epoch_id)? {
account_keys
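
Relating to the receipts-root doc comment earlier in this file's diff (receipts are grouped by receiver, each group is hashed, and the group hashes are merklized): a simplified sketch of that shape, again with std's `DefaultHasher` standing in for the real cryptographic hash and merklization used by nearcore:

    use std::collections::hash_map::DefaultHasher;
    use std::collections::BTreeMap;
    use std::hash::{Hash, Hasher};

    struct Receipt {
        receiver_id: String,
        payload: String,
    }

    fn hash_of<T: Hash>(value: &T) -> u64 {
        let mut h = DefaultHasher::new();
        value.hash(&mut h);
        h.finish()
    }

    // Reduce a list of hashes to a single root by pairwise combining.
    fn merkle_root(mut hashes: Vec<u64>) -> u64 {
        if hashes.is_empty() {
            return 0;
        }
        while hashes.len() > 1 {
            let next: Vec<u64> = hashes
                .chunks(2)
                .map(|pair| if pair.len() == 2 { hash_of(&(pair[0], pair[1])) } else { pair[0] })
                .collect();
            hashes = next;
        }
        hashes[0]
    }

    fn receipts_root(receipts: &[Receipt]) -> u64 {
        // Group receipt hashes by receiver; BTreeMap keeps the grouping deterministic.
        let mut groups: BTreeMap<&str, Vec<u64>> = BTreeMap::new();
        for r in receipts {
            groups.entry(&r.receiver_id).or_default().push(hash_of(&r.payload));
        }
        // One hash per receiver group, then merklize the group hashes into the root.
        let group_hashes: Vec<u64> = groups.values().map(|group| hash_of(group)).collect();
        merkle_root(group_hashes)
    }

    fn main() {
        let receipts = [
            Receipt { receiver_id: "alice.near".into(), payload: "transfer 1".into() },
            Receipt { receiver_id: "bob.near".into(), payload: "transfer 2".into() },
        ];
        println!("illustrative receipts root: {:x}", receipts_root(&receipts));
    }
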