fix broken links, and add lint to auto-check them (#10217)
Ekleog-NEAR authored Nov 20, 2023
1 parent 88c3efc commit efad34a
Showing 16 changed files with 37 additions and 49 deletions.
9 changes: 9 additions & 0 deletions .github/workflows/ci.yml
@@ -216,3 +216,12 @@ jobs:
           save-if: "false" # use the cache from nextest, but don’t double-save
       - run: ./chain/jsonrpc/build_errors_schema.sh
       - run: git diff --quiet ./chain/jsonrpc/res/rpc_errors_schema.json || exit 1
+
+  lychee_checks:
+    name: "Lychee Lints"
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - uses: lycheeverse/lychee-action@2ac9f030ccdea0033e2510a23a67da2a2da98492
+        with:
+          fail: true
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -158,6 +158,6 @@ On [betanet](https://docs.near.org/docs/concepts/networks#betanet) we run
nightly build from master with all the nightly protocol features enabled. Every
five weeks, we stabilize some protocol features and make a release candidate for
testnet. The process for feature stabilization can be found in [this
-document](docs/protocol_upgrade.md). After the release candidate has been
+document](docs/practices/protocol_upgrade.md). After the release candidate has been
running on testnet for four weeks and no issues are observed, we stabilize and
publish the release for mainnet.
2 changes: 1 addition & 1 deletion chain/chunks/README.md
@@ -2,7 +2,7 @@

This crate contains functions to handle chunks. In NEAR, a block consists of multiple chunks - at most one per shard.

-When a chunk is created, the creator encodes its contents using Reed Solomon encoding (ErasureCoding) and adds cross-shard receipts - creating PartialEncodedChunks that are later sent to all the validators (each validator gets a subset of them). This is done for data availability reasons (so that we need only a part of the validators to reconstruct the whole chunk). You can read more about it in the Nightshade paper (https://near.org/nightshade/)
+When a chunk is created, the creator encodes its contents using Reed Solomon encoding (ErasureCoding) and adds cross-shard receipts - creating PartialEncodedChunks that are later sent to all the validators (each validator gets a subset of them). This is done for data availability reasons (so that we need only a part of the validators to reconstruct the whole chunk). You can read more about it in [the Nightshade paper](https://near.org/papers/nightshade).


An honest validator will only approve a block if it receives its assigned parts for all chunks in the block - which means that for each chunk, it has `has_all_parts()` returning true.
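The availability mechanism above can be illustrated with a toy single-parity scheme (a deliberately simplified Python sketch; the real `ErasureCoding` is Reed-Solomon, which tolerates the loss of many parts, not just one):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def encode(parts):
    # Toy "erasure code": append one XOR parity part, so that any
    # len(parts) of the len(parts) + 1 encoded parts are enough to
    # reconstruct the original data.
    parity = parts[0]
    for p in parts[1:]:
        parity = xor_bytes(parity, p)
    return parts + [parity]

def reconstruct_missing(encoded, missing_index):
    # Recover a single lost part by XOR-ing all remaining parts.
    rest = [p for i, p in enumerate(encoded) if i != missing_index]
    out = rest[0]
    for p in rest[1:]:
        out = xor_bytes(out, p)
    return out
```

This captures only the general idea: each validator holds some parts, and a sufficient subset of parts reconstructs the whole chunk.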
2 changes: 1 addition & 1 deletion chain/epoch-manager/README.md
@@ -21,7 +21,7 @@ EpochManager also has a lot of methods that allow you to fetch information fro

## RewardCalculator
RewardCalculator is responsible for computing rewards for the validators at the end of the epoch, based on their block/chunk productions.
-You can see more details on the https://nomicon.io/Economics/README.html#validator-rewards-calculation
+You can see more details on [the Nomicon documentation](https://nomicon.io/Economics/Economic#validator-rewards-calculation).

## Validator Selection / proposals / proposals_to_epoch_info
These files/functions are responsible for selecting the validators for the next epoch (and internally - also deciding which validator will produce which block and which chunk).
4 changes: 2 additions & 2 deletions chain/rosetta-rpc/README.md
@@ -110,11 +110,11 @@ in production and will be located at `./target/release/neard`.

Alternatively, during development and testing it may be better to
follow the method recommended when [contributing to
-nearcore](https://docs.near.org/docs/community/contribute/contribute-nearcore)
+nearcore](https://github.com/near/nearcore/blob/master/CONTRIBUTING.md)
which creates a slightly less optimised executable but does it faster:

```bash
-cargo build --release --package neard --bin neard
+cargo build --profile dev-release --package neard --bin neard
```

## How to Configure
2 changes: 1 addition & 1 deletion docs/architecture/README.md
@@ -255,7 +255,7 @@ starting point when reading the code base.

The [tracing](https://tracing.rs) crate is used for structured, hierarchical
event output and logging. We also integrate [Prometheus](https://prometheus.io)
-for light-weight metric output. See the [style](./style.md) documentation for
+for light-weight metric output. See the [style](../practices/style.md) documentation for
more information on the usage.

### Testing
2 changes: 1 addition & 1 deletion docs/architecture/gas/estimator.md
@@ -76,7 +76,7 @@ cloud-hosted VMs) and manual labor to set it up.

The other supported metric `icount` is much more stable. It uses
[qemu](https://www.qemu.org/) to emulate an x86 CPU. We then insert a custom
-[TCG plugin](https://qemu.readthedocs.io/en/latest/devel/tcg-plugins.html)
+[TCG plugin](https://www.qemu.org/docs/master/devel/tcg-plugins.html)
([counter.c](https://github.com/near/nearcore/blob/08c4a1bd4b16847eb1c2fccee36bf16f6efb71fd/runtime/runtime-params-estimator/emu-cost/counter_plugin/counter.c))
that counts the number of executed x86 instructions. It also intercepts system
calls and counts the number of bytes seen in `sys_read`, `sys_write` and their
2 changes: 1 addition & 1 deletion docs/architecture/how/README.md
@@ -116,7 +116,7 @@ peer. Once `ClientActor` realizes that it is more than `sync_height_threshold`
to sync. The synchronization process is done in three steps:

1. Header sync. The node first identifies the headers it needs to sync through a
-[`get_locator`](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/chain/*client/src/sync.rs#L332)
+[`get_locator`](https://github.com/near/nearcore/blob/279044f09a7e6e5e3f26db4898af3655dae6eda6/chain/client/src/sync.rs#L332)
calculation. This is essentially an exponential backoff computation that
tries to identify commonly known headers between the node and its peers. Then
it would request headers from different peers, at most
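The exponential backoff idea behind the locator can be sketched like this (an illustrative Python model only; the real `get_locator` works over header hashes known to the chain, and its exact spacing may differ):

```python
def locator_heights(tip: int):
    # Heights spaced exponentially back from the tip: dense near the
    # tip (where a recent common header is likely) and sparse further
    # back, keeping the locator O(log tip) in size.
    heights = []
    step = 1
    h = tip
    while h > 0:
        heights.append(h)
        h -= step
        step *= 2
    heights.append(0)  # genesis is always a common header
    return heights
```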
3 changes: 1 addition & 2 deletions docs/architecture/how/epoch.md
@@ -38,8 +38,7 @@ Epoch length is set in the genesis config. Currently in mainnet it is set to 432
"epoch_length": 43200
```

-<!-- TODO: Where is this supposed to point to? -->
-See http://go/mainnet-genesis for more details.
+See [the mainnet genesis](https://s3-us-west-1.amazonaws.com/build.nearprotocol.com/nearcore-deploy/mainnet/genesis.json) for more details.

This means that each epoch lasts around 15 hours.
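As a back-of-the-envelope check, assuming an average block time of about 1.25 seconds (an approximation, not a protocol constant):

```python
epoch_length = 43_200    # blocks, from the genesis config above
block_time_s = 1.25      # assumed average block time in seconds
epoch_hours = epoch_length * block_time_s / 3600
print(epoch_hours)  # 15.0
```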

3 changes: 0 additions & 3 deletions docs/architecture/how/gc.md
@@ -69,6 +69,3 @@ Same as before, we’d remove up to 2 blocks in each run:
![](https://user-images.githubusercontent.com/1711539/195650127-b30865e1-d9c1-4950-8607-67d82a185b76.png)

Until we catch up to the `gc_stop`.
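The catch-up loop described above can be modelled with a short sketch (illustrative only; the function and parameter names here are hypothetical, not nearcore's actual API):

```python
def gc_run(tail: int, gc_stop: int, limit: int = 2) -> int:
    # One garbage-collection run: delete at most `limit` blocks,
    # advancing the chain tail toward gc_stop.
    removed = 0
    while tail < gc_stop and removed < limit:
        tail += 1  # drop stored data for the block at the old tail
        removed += 1
    return tail
```

Run after run, the tail advances by at most two blocks until it reaches `gc_stop`.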

-(the original drawings for this document are
-[here](https://docs.google.com/document/d/1BiEuJqm4phwQbi-fjzHMZPzDL-94z9Dqkc3XPNnxKJM/edit?usp=sharing))
4 changes: 2 additions & 2 deletions docs/architecture/network.md
@@ -441,7 +441,7 @@ This section describes different protocols of sending messages currently used in
## 10.1 Messages between Actors.

`Near` is built on `Actix`'s `actor`
-[framework](https://actix.rs/book/actix/sec-2-actor.html). Usually each actor
+[framework](https://actix.rs/docs/actix/actor). Usually each actor
runs on its own dedicated thread. Some, like `PeerActor`, have one thread per
instance. Only messages implementing `actix::Message` can be sent
between threads. Each actor has its own queue; processing of messages
@@ -474,7 +474,7 @@ or hash (which seems to be used only for route back...). If target is the
account - it will be converted using `routing_table.account_owner` to the peer.

Upon receiving the message, the `PeerManagerActor`
-[will sign it](https://github.com/near/nearcore/blob/master/chain/network/src/peer_manager.rs#L1285)
+[will sign it](https://github.com/near/nearcore/blob/cadf11d5851be7611011b4e89542e11f41f3d827/chain/network/src/peer_manager/peer_manager_actor.rs)
and convert it into a RoutedMessage (which also has things like TTL etc.).

Then it will use the `routing_table` to find the route to the target peer (add
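The signing-and-forwarding flow in this section can be sketched as follows (a hypothetical simplification; a real `RoutedMessage` carries an actual cryptographic signature, a routing target, and more):

```python
def sign(body, key):
    # Hypothetical stand-in for a real cryptographic signature.
    return "signed({}:{})".format(key, body)

def to_routed_message(body, key, ttl=100):
    # Wrap a message with a signature and a TTL, roughly as
    # PeerManagerActor does before consulting the routing table.
    return {"body": body, "signature": sign(body, key), "ttl": ttl}

def forward(routed):
    # Each hop decrements the TTL; a message with no TTL left is dropped.
    if routed["ttl"] <= 0:
        return None
    hopped = dict(routed)
    hopped["ttl"] -= 1
    return hopped
```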
2 changes: 1 addition & 1 deletion docs/misc/README.md
@@ -6,7 +6,7 @@ something, but don't know where to put it, put it here!
## Crate Versioning and Publishing

While all the crates in the workspace are directly unversioned (`v0.0.0`), they
-all share a unified variable version in the [workspace manifest](Cargo.toml).
+all share a unified variable version in the [workspace manifest](../../Cargo.toml).
This keeps versions consistent across the workspace and informs their versions
at the moment of publishing.

15 changes: 15 additions & 0 deletions lychee.toml
@@ -0,0 +1,15 @@
+exclude = [
+    # these are private repositories, presumably any link to it are in internal documentation only.
+    "^https://github.com/near/near-ops",
+    "^https://github.com/near/nearcore-private",
+
+    # localhost is linked to refer to services supposed to be running on the host.
+    "^http://localhost",
+
+    # jsonrpc internal links
+    "^file://.*/chain/jsonrpc/res/debug/",
+
+    # used as placeholders for a template
+    "^https://github.com/near/nearcore/pull/XXXX$",
+    "^https://github.com/near/NEPs/blob/master/neps/nep-XXXX.md$",
+]
16 changes: 0 additions & 16 deletions runtime/near-vm/engine/README.md
@@ -10,17 +10,6 @@ Wasmer Engines are mainly responsible for two things:
* **Load** an `Artifact` so it can be used by the user (normally,
pushing the code into executable memory and so on).

-It currently has three implementations:
-
-1. Universal with [`wasmer-engine-universal`],
-2. Native with [`wasmer-engine-dylib`],
-3. Object with [`wasmer-engine-staticlib`].
-
-## Example Implementation
-
-Please check [`wasmer-engine-dummy`] for an example implementation for
-an `Engine`.

### Acknowledgments

This project borrowed some of the code of the trap implementation from
@@ -29,10 +18,5 @@ the [`wasmtime-api`], the code since then has evolved significantly.
Please check [Wasmer `ATTRIBUTIONS`] to further see licenses and other
attributions of the project.


-[`wasmer-engine-universal`]: https://github.com/wasmerio/wasmer/tree/master/lib/engine-universal
-[`wasmer-engine-dylib`]: https://github.com/wasmerio/wasmer/tree/master/lib/engine-dylib
-[`wasmer-engine-staticlib`]: https://github.com/wasmerio/wasmer/tree/master/lib/engine-staticlib
-[`wasmer-engine-dummy`]: https://github.com/wasmerio/wasmer/tree/master/tests/lib/engine-dummy
[`wasmtime-api`]: https://crates.io/crates/wasmtime
[Wasmer `ATTRIBUTIONS`]: https://github.com/wasmerio/wasmer/blob/master/ATTRIBUTIONS.md
16 changes: 0 additions & 16 deletions runtime/near-vm/test-api/README.md
@@ -48,17 +48,6 @@ fn main() -> anyhow::Result<()> {

Wasmer is not only fast, but also designed to be *highly customizable*:

-* **Pluggable engines** — An engine is responsible to drive the
-  compilation process and to store the generated executable code
-  somewhere, either:
-  * in-memory (with [`wasmer-engine-universal`]),
-  * in a native shared object file (with [`wasmer-engine-dylib`],
-    `.dylib`, `.so`, `.dll`), then load it with `dlopen`,
-  * in a native static object file (with [`wasmer-engine-staticlib`]),
-    in addition to emitting a C header file, which both can be linked
-    against a sandboxed WebAssembly runtime environment for the
-    compiled module with no need for runtime compilation.

* **Pluggable compilers** — A compiler is used by an engine to
transform WebAssembly into executable code:
* [`wasmer-compiler-singlepass`] provides a fast compilation-time
@@ -98,8 +87,3 @@ more](https://wasmerio.github.io/wasmer/crates/doc/wasmer/).
---

Made with ❤️ by the Wasmer team, for the community

-[`wasmer-engine-universal`]: https://github.com/wasmerio/wasmer/tree/master/lib/engine-universal
-[`wasmer-engine-dylib`]: https://github.com/wasmerio/wasmer/tree/master/lib/engine-dylib
-[`wasmer-engine-staticlib`]: https://github.com/wasmerio/wasmer/tree/master/lib/engine-staticlib
-[`wasmer-compiler-singlepass`]: https://github.com/wasmerio/wasmer/tree/master/lib/compiler-singlepass
2 changes: 1 addition & 1 deletion runtime/runtime-params-estimator/README.md
@@ -27,7 +27,7 @@ Use this tool to measure the running time of elementary runtime operations that
Rather, the costs are hard-coded in the `Default` impl for `RuntimeConfig`.
You can run `cargo run --package runtime-params-estimator --bin runtime-params-estimator -- --costs-file costs.txt` to convert cost table into `RuntimeConfig`.

-3. **Continuous Estimation**: Take a look at [`continuous-estimation/README.md`](./continuous-estimation/README.md) to learn about the automated setup around the parameter estimator.
+3. **Continuous Estimation**: Take a look at [`estimator-warehouse/README.md`](./estimator-warehouse/README.md) to learn about the automated setup around the parameter estimator.

Note, if you use the plotting functionality you would need to install [gnuplot](http://gnuplot.info/) to see the graphs.

