This repository has been archived by the owner on Sep 11, 2023. It is now read-only.

Setting up the INTEROP phase1 ethereum2 test network

Blaise Krzakala edited this page Jan 21, 2021 · 5 revisions

DISCLAIMER: This was written on 20.01.2020. The state may change drastically.

RELEASE TAG

Special thanks to:

  1. Fabian Vogelsteller https://github.com/frozeman for supervising and leading the cooperation
  2. Atif Anowar https://github.com/atif-konasl for gathering knowledge about current state of Eth2.0 migration
  3. Mikhail Kalinin https://github.com/mkalinin for implementation and guidance
  4. Guillaume Ballet https://github.com/gballet for implementation and guidance
  5. Matt Marciniak https://github.com/mxmar for making this repo work like a charm
  6. Patrick Krakos A.K.A. patred20 https://github.com/patred20 for making this repo work like a charm
  7. Everyone else involved in making Ethereum 2.0 happen!

Great work! Thanks, B.

To read more about phase1:

  1. Execution Model
  2. Set Up execution locally
  3. Teku java execution simulation
  4. Phase1 specification
  5. Dive into eth20
  6. Overall concept of phase1

Clients used within this repository:

  1. Teku phase1 commit: d7fec811ab1843fcb900d622389d64dcf414f3b3 from repo ssh://[email protected]/txrx-research/teku.git
  2. Catalyst (go-ethereum) phase1 commit: 67100e4ac7778c573e3ae57099426787e3e1bea5 from repo [email protected]:mkalinin/go-ethereum.git

What does it really solve on the eth2.0 roadmap?

See the Execution Model document linked above.

Roadmap of ethereum 2.0 via Vitalik Buterin

Full tweet thread from @VitalikButerin

What does it look like?

Infrastructure

How to setup

You need a Google Cloud account with billing enabled to perform this operation. It is good to know that Google offers some free credit to burn during the trial period.

You need to have kubectl and Helm installed in your environment.

May be helpful:

  1. Kubernetes install and config
  2. Helm install

Let's get started

Please check README.md, as it may be updated in the future. For the current state of development, the contents of README.md are reproduced below. Here are the steps you need to take to bring it up:

  1. Create an NFS volume (see below).
  2. Set up your multinet-cluster/values.yaml. For the current deployment we run only Teku and Catalyst as the eth1 <-> eth2 layer.
  3. Set up your ethstats (not implemented within this repo; we currently set it up on a separate instance via puppeth) [optional, for monitoring].
  4. Install the helm chart: `helm install eth20`. See `helm --help` for all possible options.
  5. Upgrade it via helm: `helm upgrade -f multinet-cluster/values.yaml eth20 ./multinet-cluster/`
  6. Expose your pods, dashboards and other resources (see below).
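
A hypothetical sketch of what step 2's multinet-cluster/values.yaml might look like. The key names below are illustrative assumptions, not the chart's actual schema; consult the repository's own values.yaml for the real keys.

```yaml
# Illustrative only — key names are assumed, not taken from the chart.
teku:
  enabled: true      # eth2 layer client
catalyst:
  enabled: true      # eth1 layer client (go-ethereum fork)
eth2stats:
  enabled: false     # we run ethstats on a separate instance via puppeth
```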

It will take a while for the infrastructure to spin up. Please be patient and run `kubectl get pods -w` to watch it live.
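
Instead of watching by hand, the wait can be scripted. A minimal sketch, assuming the default namespace and a configured kubectl; column 3 of `kubectl get pods --no-headers` is the STATUS column:

```shell
# Poll until every pod reports Running.
until [ -z "$(kubectl get pods --no-headers | awk '$3 != "Running"')" ]; do
  sleep 10
done
echo "all pods Running"
```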

Google Cloud Engine (GKE) Kubernetes

In order to use ReadWriteMany on common-data.yaml and deposits-storage.yml you need to create an NFS within the region. To do this you must create two compute disks: one for common-data and one for deposits-storage. Let's assume that you are using europe-west4-a for your GKE. Example using the gcloud SDK:

```
gcloud compute disks create --size=200GB --zone=europe-west4-a nfs-data
gcloud compute disks create --size=200GB --zone=europe-west4-a nfs-deposit
```

Notice that the NFS disk names must match the names in the helm charts, so if you want to experiment with renaming you must also rename them there.

After creating the disks you must deploy the NFS using kubectl. Example from the root of this repo: `./nfs/reapply.sh`

Files should be mounted to /data/$DIR, where $DIR is, respectively: `data` for common-data.yml and `deposits` for deposits-storage.yml.

To check what's going on on the disk itself, run: `kubectl exec -it $NFS_SERVER_POD_NAME bash`

To find $NFS_SERVER_POD_NAME, run `kubectl get pods`.
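
Finding the pod name can be scripted too. A sketch assuming the NFS server pod's name contains "nfs-server" (adjust the pattern to whatever your manifests actually name it):

```shell
# Take the first pod whose name matches "nfs-server".
NFS_SERVER_POD_NAME=$(kubectl get pods --no-headers | awk '/nfs-server/ {print $1; exit}')
kubectl exec -it "$NFS_SERVER_POD_NAME" bash
```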

To apply changes, use: `helm upgrade -f $PWD/multinet-cluster/values.yaml eth20 $PWD/multinet-cluster`

When you want to expose eth2stats you need to expose external IPs: https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/

Example of exposure:

```
kubectl expose deployment eth2stats-server --type=LoadBalancer --name=svrbalancer
kubectl expose deployment eth2stats-dashboard --type=LoadBalancer --name=dashbalancer
kubectl expose deployment launchpad-dashboard --type=LoadBalancer --name=launchpadbalancer
```
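
After exposing a service it takes a moment for the LoadBalancer to be provisioned. A sketch for reading the external IP of one of the services created above (EXTERNAL-IP is column 4 of `kubectl get service` output; it will show `<pending>` until the balancer is ready):

```shell
# Print the external IP of the dashboard service.
kubectl get service dashbalancer --no-headers | awk '{print $4}'
```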

One problematic area is the configuration of ethstats: it does not show full data.

Copying logs out of kubernetes pods

Example for teku: `kubectl cp teku-catalyst-4:/root/.local/share/teku/logs/teku.log ./teku-log-4 -c teku`
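
If you want the logs from every teku-catalyst pod at once, the copy can be looped. A sketch assuming pods numbered 0 through 4 (as suggested by the example above; adjust the range to your replica count):

```shell
# Copy the teku log out of each teku-catalyst pod into the current directory.
for i in 0 1 2 3 4; do
  kubectl cp "teku-catalyst-$i:/root/.local/share/teku/logs/teku.log" "./teku-log-$i" -c teku
done
```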