
Commit

Merge branch 'eclesio/wazero-fork-upgrade' of github.com:ChainSafe/gossamer into eclesio/wazero-fork-upgrade
EclesioMeloJunior committed Mar 18, 2024
2 parents cc16ed5 + 1f480f1 commit 32a6573
Showing 34 changed files with 398 additions and 179 deletions.
6 changes: 3 additions & 3 deletions .github/dependabot.yml
@@ -5,16 +5,16 @@ updates:
schedule:
interval: "weekly"
labels:
- "dependencies"
- "S-dependencies"
- package-ecosystem: docker
directory: /
schedule:
interval: "weekly"
labels:
- "dependencies"
- "S-dependencies"
- package-ecosystem: gomod
directory: /
schedule:
interval: "weekly"
labels:
- "dependencies"
- "S-dependencies"
12 changes: 11 additions & 1 deletion .github/labels.yml
@@ -236,4 +236,14 @@
- name: S-dependencies
color: "#1D76DB"
aliases: []
description: issues related to polkadot host disputes subsystem functionality.
description: issues related to polkadot host disputes subsystem functionality.

- name: S-infrastructure
color: "#1D76DB"
aliases: []
description: issues related to infrastructure and DevOps.

- name: S-dependencies
color: "#1D76DB"
aliases: []
description: issues related to dependencies changes. Used by dependabot.
2 changes: 1 addition & 1 deletion .github/workflows/build.yml
@@ -103,7 +103,7 @@ jobs:
- name: Generate coverage report
run: |
go test ./... -coverprofile=coverage.out -covermode=atomic -timeout=20m
- uses: codecov/codecov-action@v4.0.2
- uses: codecov/codecov-action@v4.1.0
with:
files: ./coverage.out
flags: unit-tests
38 changes: 38 additions & 0 deletions .github/workflows/docker-network.yml
@@ -0,0 +1,38 @@
on:
pull_request:
# Commented paths to avoid skipping required workflow
# See https://github.community/t/feature-request-conditional-required-checks/16761
# paths:
# - .github/workflows/docker-grandpa.yml
# - "**/*.go"
# - "chain/**"
# - "cmd/**"
# - "dot/**"
# - "internal/**"
# - "lib/**"
# - "pkg/**"
# - "tests/stress/**"
# - go.mod
# - go.sum
name: docker-network

jobs:
docker-network-tests:
runs-on: ubuntu-latest
env:
DOCKER_BUILDKIT: "1"
steps:
- name: Cancel Previous Runs
uses: styfle/[email protected]
with:
all_but_latest: true

- uses: docker/build-push-action@v5
with:
load: true
target: builder
tags: chainsafe/gossamer:test

- name: Run grandpa
run: |
docker run chainsafe/gossamer:test sh -c "make it-network"
2 changes: 1 addition & 1 deletion .github/workflows/unit-tests.yml
@@ -89,7 +89,7 @@ jobs:
- name: Test - Race
run: make test-using-race-detector

- uses: codecov/codecov-action@v4.0.2
- uses: codecov/codecov-action@v4.1.0
with:
if_ci_failed: success
informational: true
3 changes: 2 additions & 1 deletion .releaserc
@@ -25,7 +25,8 @@
{
"path": "dist/**"
}
]
],
"successComment": false
}
]
]
26 changes: 26 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,31 @@
# Semantic Versioning Changelog

# [0.9.0](https://github.com/ChainSafe/gossamer/compare/v0.8.0...v0.9.0) (2024-3-1)


### Bug Fixes

* add a limit of number of bytes while scale decoding a slice ([#3733](https://github.com/ChainSafe/gossamer/issues/3733)) ([5edbf89](https://github.com/ChainSafe/gossamer/commit/5edbf89541908cd73493d8c08de32cd48945aeb3))
* **docs:** Fixing link to polkadot runtime fundamentals to the right one ([#3763](https://github.com/ChainSafe/gossamer/issues/3763)) ([a785d32](https://github.com/ChainSafe/gossamer/commit/a785d3222b9800a38f83d7d5a6fe9c453f816658))
* don't panic if we fail to convert hex to bytes ([#3734](https://github.com/ChainSafe/gossamer/issues/3734)) ([12234de](https://github.com/ChainSafe/gossamer/commit/12234de2308644da1a326965a012106dc7c98f64))
* **dot/sync:** execute p2p handshake when there is no target ([#3695](https://github.com/ChainSafe/gossamer/issues/3695)) ([a9db0ec](https://github.com/ChainSafe/gossamer/commit/a9db0ec78205905be4b92ddb6a177a6d5fbfdace))
* fix index out of range undeterministic error in rpc test ([#3718](https://github.com/ChainSafe/gossamer/issues/3718)) ([d099384](https://github.com/ChainSafe/gossamer/commit/d0993843235eecac400c1a3a09a0236564137638))
* fix non deterministic panic during TestStableNetworkRPC integration test ([#3756](https://github.com/ChainSafe/gossamer/issues/3756)) ([ee3d243](https://github.com/ChainSafe/gossamer/commit/ee3d243debc4370dc8ac8799923bc5390f28d274))
* **lib/trie:** use `MustBeHashed` for V1 trie nodes with larger storage values ([#3739](https://github.com/ChainSafe/gossamer/issues/3739)) ([f5e48a9](https://github.com/ChainSafe/gossamer/commit/f5e48a97b9e9edf53d27a86875d834865e415f55))
* **mocks:** Set fixed version for uber mockgen in CI ([#3656](https://github.com/ChainSafe/gossamer/issues/3656)) ([ea9877e](https://github.com/ChainSafe/gossamer/commit/ea9877e4af0a6e453f3e9c5bf27069ef5c9fa797))
* **runtime/storage:** support nested storage transactions ([#3670](https://github.com/ChainSafe/gossamer/issues/3670)) ([3e99f6d](https://github.com/ChainSafe/gossamer/commit/3e99f6de37d61d1db4d0e367d91fc59e568888b4))
* segfault on node restart ([#3736](https://github.com/ChainSafe/gossamer/issues/3736)) ([d1ca7aa](https://github.com/ChainSafe/gossamer/commit/d1ca7aa6a013ba3b8190d0a953789c01d3620c36))
* **state-version:** should be uint8 instead of uint32 ([#3779](https://github.com/ChainSafe/gossamer/issues/3779)) ([c8fdb14](https://github.com/ChainSafe/gossamer/commit/c8fdb144cbdc31690c1d69900cf3d6f13fcddad5))
* update paseo chain spec ([#3770](https://github.com/ChainSafe/gossamer/issues/3770)) ([6a54f28](https://github.com/ChainSafe/gossamer/commit/6a54f28b61b315324a118557260f2cd6584fd200))
* use last finalized block on startup ([#3737](https://github.com/ChainSafe/gossamer/issues/3737)) ([c262642](https://github.com/ChainSafe/gossamer/commit/c262642784c3879400bdb2da7531aa4de8acc5a2))


### Features

* **config:** dynamically set version based on environment ([#3693](https://github.com/ChainSafe/gossamer/issues/3693)) ([5c534c9](https://github.com/ChainSafe/gossamer/commit/5c534c94cefd9e528f18b6c60e3c7d1385ea523d))
* **staging:** Expose RPC on Westend Staging Node ([#3687](https://github.com/ChainSafe/gossamer/issues/3687)) ([c374eaa](https://github.com/ChainSafe/gossamer/commit/c374eaa1487236e7f6a606ccda3c96d512d3ba46))
* **tests/scripts:** create script to retrieve trie state via rpc ([#3714](https://github.com/ChainSafe/gossamer/issues/3714)) ([5ccea40](https://github.com/ChainSafe/gossamer/commit/5ccea40bc20b2eece2fa00d41f312e9c5f2ffc5f))

# [0.8.0](https://github.com/ChainSafe/gossamer/compare/v0.7.0...v0.8.0) (2023-12-11)


4 changes: 4 additions & 0 deletions Makefile
@@ -51,6 +51,10 @@ it-rpc: build
@echo " > \033[32mRunning Integration Tests RPC Specs mode...\033[0m "
MODE=rpc go test ./tests/rpc/... -timeout=10m -v

it-network: build
@echo " > \033[32mRunning Integration Tests Kademlia...\033[0m "
MODE=network go test ./tests/network/... -timeout=10m -v

it-sync: build
@echo " > \033[32mRunning Integration Tests sync mode...\033[0m "
MODE=sync go test ./tests/sync/... -timeout=5m -v
6 changes: 3 additions & 3 deletions cmd/gossamer/commands/import_state.go
@@ -15,8 +15,8 @@ import (
func init() {
ImportStateCmd.Flags().String("chain", "", "Chain id used to load default configuration for specified chain")
ImportStateCmd.Flags().String("state-file", "", "Path to JSON file consisting of key-value pairs")
ImportStateCmd.Flags().Uint32("state-version",
uint32(trie.DefaultStateVersion),
ImportStateCmd.Flags().Uint8("state-version",
uint8(trie.DefaultStateVersion),
"State version to use when importing state",
)
ImportStateCmd.Flags().String("header-file", "", "Path to JSON file of block header corresponding to the given state")
@@ -60,7 +60,7 @@ func execImportState(cmd *cobra.Command) error {
return fmt.Errorf("state-file must be specified")
}

stateVersion, err := cmd.Flags().GetUint32("state-version")
stateVersion, err := cmd.Flags().GetUint8("state-version")
if err != nil {
return fmt.Errorf("failed to get state-version: %s", err)
}
2 changes: 2 additions & 0 deletions docs/docs/repo/labels.md
@@ -50,3 +50,5 @@ Below is the list of labels and their descriptions used in Gossamer repository.
- **S-subsystems-backing** - issues related to polkadot host backing subsystem functionality.
- **S-subsystems-availability** - issues related to polkadot host availability subsystem functionality.
- **S-subsystems-disputes** - issues related to polkadot host disputes subsystem functionality.
- **S-infrastructure** - issues related to infrastructure and DevOps.
- **S-dependencies** - issues related to dependencies changes. Used by dependabot.
7 changes: 7 additions & 0 deletions docs/docs/testing-and-debugging/test-suite.md
@@ -44,6 +44,13 @@ To run Gossamer **RPC** integration tests run the following command:
make it-rpc
```


To run Gossamer **Network** integration tests run the following command:

```
make it-network
```

To run Gossamer **Sync** integration tests run the following command:

```
2 changes: 1 addition & 1 deletion dot/core/service_test.go
@@ -135,7 +135,7 @@ func Test_Service_StorageRoot(t *testing.T) {
retErr error
expErr error
expErrMsg string
stateVersion uint32
stateVersion uint8
}{
{
name: "storage trie state error",
27 changes: 12 additions & 15 deletions dot/network/discovery.go
@@ -94,13 +94,14 @@ func (d *discovery) waitForPeers() (peers []peer.AddrInfo, err error) {
func (d *discovery) start() error {
// this basically only works with enabled mDNS which is used only for local test setups. Without bootnodes kademlia
// would not be able to connect to any peers and mDNS is used to find peers in the local network.
// TODO: should be refactored because this if is basically used for local integration test purpose
// TODO: should be refactored because this if is basically used for local integration test purpose.
// Instead of waiting for peers to connect to start kad we can upgrade the kad routing table on every connection,
// I think that using d.dht.{LAN/WAN}.RoutingTable().UsefulNewPeer(peerID) should be a good option
if len(d.bootnodes) == 0 {
peers, err := d.waitForPeers()
if err != nil {
return fmt.Errorf("failed while waiting for peers: %w", err)
}

d.bootnodes = peers
}
logger.Debugf("starting DHT with bootnodes %v...", d.bootnodes)
@@ -133,17 +134,6 @@ func (d *discovery) start() error {
return d.discoverAndAdvertise()
}

func (d *discovery) stop() error {
if d.dht == nil {
return nil
}

ethmetrics.Unregister(checkPeerCountMetrics)
ethmetrics.Unregister(peersStoreMetrics)

return d.dht.Close()
}

func (d *discovery) discoverAndAdvertise() error {
d.rd = routing.NewRoutingDiscovery(d.dht)

@@ -233,6 +223,13 @@ func (d *discovery) findPeers() {
}
}

func (d *discovery) findPeer(peerID peer.ID) (peer.AddrInfo, error) {
return d.dht.FindPeer(d.ctx, peerID)
func (d *discovery) stop() error {
if d.dht == nil {
return nil
}

ethmetrics.Unregister(checkPeerCountMetrics)
ethmetrics.Unregister(peersStoreMetrics)

return d.dht.Close()
}
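
The TODO in the hunk above sketches a follow-up: instead of blocking on `waitForPeers` before starting the DHT, upgrade the Kademlia routing table on every new connection. A rough, hypothetical sketch of that idea, assuming gossamer's `d.dht` is the dual DHT exposing WAN/LAN routing tables and that `UsefulNewPeer`/`TryAddPeer` have roughly these shapes in the pinned go-libp2p-kbucket version (the helper name is made up):

```go
package network

import (
	dual "github.com/libp2p/go-libp2p-kad-dht/dual"
	kbucket "github.com/libp2p/go-libp2p-kbucket"
	"github.com/libp2p/go-libp2p/core/peer"
)

// upgradeRoutingTables feeds a newly connected peer into both routing tables,
// so the DHT could start without first waiting for peers to be discovered.
func upgradeRoutingTables(dht *dual.DHT, peerID peer.ID) {
	for _, rt := range []*kbucket.RoutingTable{
		dht.WAN.RoutingTable(),
		dht.LAN.RoutingTable(),
	} {
		// Only attempt the insert when the peer would actually improve the table.
		if !rt.UsefulNewPeer(peerID) {
			continue
		}
		// queryPeer=false: we only observed a connection, not a successful query;
		// isReplaceable=true: the entry may later be evicted for better candidates.
		if _, err := rt.TryAddPeer(peerID, false, true); err != nil {
			continue // not fatal; the peer simply is not tracked in this table
		}
	}
}
```
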
10 changes: 6 additions & 4 deletions dot/network/service.go
@@ -36,7 +36,8 @@ const (
blockAnnounceID = "/block-announces/1"
transactionsID = "/transactions/1"

maxMessageSize = 1024 * 64 // 64kb for now
maxMessageSize = 1024 * 64 // 64kb for now
findPeerQueryTimeout = 10 * time.Second
)

var (
@@ -292,12 +293,11 @@ func (s *Service) Start() error {
// this handles all new connections (incoming and outgoing)
// it creates a per-protocol mutex for sending outbound handshakes to the peer
// connectHandler is a part of libp2p.Notifiee interface implementation and getting called in the very end
//after or Incoming or Outgoing node is connected
// after or Incoming or Outgoing node is connected.
s.host.cm.connectHandler = func(peerID peer.ID) {
for _, prtl := range s.notificationsProtocols {
prtl.peersData.setMutex(peerID)
}
// TODO: currently we only have one set so setID is 0, change this once we have more set in peerSet
const setID = 0
s.host.cm.peerSetHandler.Incoming(setID, peerID)
}
@@ -711,7 +711,9 @@ func (s *Service) processMessage(msg peerset.Message) {
addrInfo := s.host.p2pHost.Peerstore().PeerInfo(peerID)
if len(addrInfo.Addrs) == 0 {
var err error
addrInfo, err = s.host.discovery.findPeer(peerID)
ctx, cancel := context.WithTimeout(s.host.discovery.ctx, findPeerQueryTimeout)
defer cancel()
addrInfo, err = s.host.discovery.dht.FindPeer(ctx, peerID)
if err != nil {
logger.Warnf("failed to find peer id %s: %s", peerID, err)
return
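
The `processMessage` change above replaces the removed `discovery.findPeer` helper with a direct DHT lookup bounded by the new `findPeerQueryTimeout`. A minimal standalone sketch of that pattern (the function name is hypothetical; `FindPeer` is the standard libp2p routing call):

```go
package network

import (
	"context"
	"time"

	dual "github.com/libp2p/go-libp2p-kad-dht/dual"
	"github.com/libp2p/go-libp2p/core/peer"
)

// findPeerWithTimeout bounds the DHT query so a lookup for an unknown peer
// cannot stall message processing indefinitely.
func findPeerWithTimeout(parent context.Context, dht *dual.DHT, id peer.ID) (peer.AddrInfo, error) {
	ctx, cancel := context.WithTimeout(parent, 10*time.Second) // mirrors findPeerQueryTimeout
	defer cancel()
	return dht.FindPeer(ctx, id)
}
```
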
2 changes: 2 additions & 0 deletions dot/state/block.go
@@ -58,6 +58,8 @@ type BlockState struct {
lock sync.RWMutex
genesisHash common.Hash
lastFinalised common.Hash
lastRound uint64
lastSetID uint64
unfinalisedBlocks *hashToBlockMap
tries *Tries

10 changes: 10 additions & 0 deletions dot/state/block_finalisation.go
@@ -54,6 +54,14 @@ func (bs *BlockState) GetFinalisedHeader(round, setID uint64) (*types.Header, er
return header, nil
}

// GetRoundAndSetID returns the finalised round and setID
func (bs *BlockState) GetRoundAndSetID() (uint64, uint64) {
bs.lock.Lock()
defer bs.lock.Unlock()

return bs.lastRound, bs.lastSetID
}

// GetFinalisedHash gets the finalised block header by round and setID
func (bs *BlockState) GetFinalisedHash(round, setID uint64) (common.Hash, error) {
h, err := bs.db.Get(finalisedHashKey(round, setID))
@@ -182,6 +190,8 @@ func (bs *BlockState) SetFinalisedHash(hash common.Hash, round, setID uint64) er
}

bs.lastFinalised = hash
bs.lastRound = round
bs.lastSetID = setID

logger.Infof(
"🔨 finalised block #%d (%s), round %d, set id %d", header.Number, hash, round, setID)
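
The new `GetRoundAndSetID` accessor pairs naturally with the existing `GetFinalisedHeader` lookup. A hypothetical caller fetching the most recently finalised header might look like this (illustrative helper only, not part of this commit):

```go
package example

import (
	"github.com/ChainSafe/gossamer/dot/state"
	"github.com/ChainSafe/gossamer/dot/types"
)

// latestFinalisedHeader reads the last finalised round/setID tracked by
// BlockState and resolves them to the corresponding finalised header.
func latestFinalisedHeader(bs *state.BlockState) (*types.Header, error) {
	round, setID := bs.GetRoundAndSetID()
	return bs.GetFinalisedHeader(round, setID)
}
```
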
36 changes: 4 additions & 32 deletions dot/state/epoch.go
@@ -22,7 +22,6 @@ var (
errHashNotInMemory = errors.New("hash not found in memory map")
errEpochNotInDatabase = errors.New("epoch data not found in the database")
errHashNotPersisted = errors.New("hash with next epoch not found in database")
errNoPreRuntimeDigest = errors.New("header does not contain pre-runtime digest")
)

var (
@@ -197,39 +196,12 @@ func (s *EpochState) GetEpochForBlock(header *types.Header) (uint64, error) {
return 0, err
}

for _, d := range header.Digest {
digestValue, err := d.Value()
if err != nil {
continue
}
predigest, ok := digestValue.(types.PreRuntimeDigest)
if !ok {
continue
}

digest, err := types.DecodeBabePreDigest(predigest.Data)
if err != nil {
return 0, fmt.Errorf("failed to decode babe header: %w", err)
}

var slotNumber uint64
switch d := digest.(type) {
case types.BabePrimaryPreDigest:
slotNumber = d.SlotNumber
case types.BabeSecondaryVRFPreDigest:
slotNumber = d.SlotNumber
case types.BabeSecondaryPlainPreDigest:
slotNumber = d.SlotNumber
}

if slotNumber < firstSlot {
return 0, nil
}

return (slotNumber - firstSlot) / s.epochLength, nil
slotNumber, err := header.SlotNumber()
if err != nil {
return 0, fmt.Errorf("getting slot number: %w", err)
}

return 0, errNoPreRuntimeDigest
return (slotNumber - firstSlot) / s.epochLength, nil
}

// SetEpochDataRaw sets the epoch data raw for a given epoch
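
The simplified `GetEpochForBlock` now derives the epoch with a single integer division over the header's slot. With made-up numbers, firstSlot = 1000 and epochLength = 600, a header at slot 2250 falls in epoch (2250 - 1000) / 600 = 2. A small sketch of that arithmetic:

```go
package example

// epochForSlot mirrors the arithmetic used by the simplified GetEpochForBlock.
// The guard for slots before firstSlot reproduces the behaviour of the removed
// code path, which returned epoch 0 in that case.
func epochForSlot(slotNumber, firstSlot, epochLength uint64) uint64 {
	if slotNumber < firstSlot {
		return 0
	}
	return (slotNumber - firstSlot) / epochLength
}
```
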
10 changes: 5 additions & 5 deletions dot/sync/chain_sync.go
@@ -402,11 +402,11 @@ func (cs *chainSync) requestChainBlocks(announcedHeader, bestBlockHeader *types.
startAtBlock = announcedHeader.Number - uint(*request.Max) + 1
totalBlocks = *request.Max

logger.Infof("requesting %d blocks, descending request from #%d (%s)",
peerWhoAnnounced, gapLength, announcedHeader.Number, announcedHeader.Hash().Short())
logger.Infof("requesting %d blocks from peer: %v, descending request from #%d (%s)",
gapLength, peerWhoAnnounced, announcedHeader.Number, announcedHeader.Hash().Short())
} else {
request = network.NewBlockRequest(startingBlock, 1, network.BootstrapRequestData, network.Descending)
logger.Infof("requesting a single block #%d (%s)",
logger.Infof("requesting a single block from peer: %v with Number: #%d and Hash: (%s)",
peerWhoAnnounced, announcedHeader.Number, announcedHeader.Hash().Short())
}

@@ -445,8 +445,8 @@ func (cs *chainSync) requestForkBlocks(bestBlockHeader, highestFinalizedHeader,
request = network.NewBlockRequest(startingBlock, gapLength, network.BootstrapRequestData, network.Descending)
}

logger.Infof("requesting %d fork blocks, starting at #%d (%s)",
peerWhoAnnounced, gapLength, announcedHeader.Number, announcedHash.Short())
logger.Infof("requesting %d fork blocks from peer: %v starting at #%d (%s)",
gapLength, peerWhoAnnounced, announcedHeader.Number, announcedHash.Short())

resultsQueue := make(chan *syncTaskResult)
cs.workerPool.submitRequest(request, &peerWhoAnnounced, resultsQueue)