
nomad vagrant

introduction

This project attempts to create a working nomad cluster with nomad + consul + containerd + cni + flanneld (or calico) + etcd + nomad-driver-containerd, using Vagrant.

Thanks to nekione's nekione/calico-nomad.

start vms

vagrant up vm4 vm5 vm6 vm7
vm     ip              installed
vm4    192.168.33.4    containerd, consul (server+client), nomad (server+client), etcd
vm5    192.168.33.5    containerd, consul (server+client), nomad (server+client)
vm6    192.168.33.6    containerd, consul (server+client), nomad (server+client)
vm7    192.168.33.7    containerd, consul (client), nomad (client)
vm14   192.168.33.14   containerd, zookeeper, kafka
vm15   192.168.33.15   containerd, clickhouse

Take notice: etcd is not HA here; this setup is for testing and demonstration only.
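
Once the VMs are up, it is worth confirming the cluster actually formed. A minimal check, assuming the box names above and default agent ports:

vagrant ssh vm4 -c "consul members"        ## all agents should report alive
vagrant ssh vm4 -c "nomad server members"  ## expect three servers with an elected leader
vagrant ssh vm4 -c "nomad node status"     ## expect four client nodes in the ready state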

cni options

flannel

install flanneld

bash /vagrant/scripts/install-flanneld.sh --ver "v0.25.4"
bash /vagrant/scripts/install-cni-configs.sh

init flanneld network config

etcdctl put /coreos.com/network/config '{ "Network": "10.5.0.0/16", "Backend": {"Type": "vxlan"} }'
etcdctl get --from-key /coreos.com -w simple

/coreos.com/network/config
{ "Network": "10.5.0.0/16", "Backend": {"Type": "vxlan"} }
/coreos.com/network/subnets/10.5.37.0-24
{"PublicIP":"192.168.33.6","PublicIPv6":null,"BackendType":"vxlan","BackendData":{"VNI":1,"VtepMAC":"02:52:fb:99:b1:f2"}}
/coreos.com/network/subnets/10.5.42.0-24
{"PublicIP":"192.168.33.4","PublicIPv6":null,"BackendType":"vxlan","BackendData":{"VNI":1,"VtepMAC":"de:52:46:32:23:c7"}}
/coreos.com/network/subnets/10.5.53.0-24
{"PublicIP":"192.168.33.5","PublicIPv6":null,"BackendType":"vxlan","BackendData":{"VNI":1,"VtepMAC":"c6:c0:13:b9:16:35"}}
/coreos.com/network/subnets/10.5.93.0-24
{"PublicIP":"192.168.33.7","PublicIPv6":null,"BackendType":"vxlan","BackendData":{"VNI":1,"VtepMAC":"4e:99:34:be:6d:4d"}}

test flannel network

nerdctl run --net flannel -it --rm dyrnq/nettools bash -c "ip a show dev eth0 && sleep 2s && ping -c 5 192.168.33.1"

run nomad job with cni flannel

First, use nomad run to deploy the job, just like kubectl apply -f foo.yaml.

nomad stop -purge netshoot-2 || true
nomad run -detach /vagrant/nomad-jobs/example-job-cni-flannel.hcl

nomad job status netshoot-2
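
The actual job file ships with this repo; purely as an illustration, a CNI-attached job for nomad-driver-containerd might look roughly like the sketch below (image and counts are assumptions, not the repo's exact spec):

cat > /tmp/example-job-cni-flannel.hcl <<'EOF'
## hypothetical sketch; the real file is /vagrant/nomad-jobs/example-job-cni-flannel.hcl
job "netshoot-2" {
  datacenters = ["dc1"]
  group "netshoot-group" {
    count = 1
    network {
      mode = "cni/flannel"               # must match the installed CNI conf name
      port "http" { to = 8080 }
    }
    service {
      name = "netshoot-2-netshoot-group" # the name queried via consul dns later on
      port = "http"
    }
    task "netshoot" {
      driver = "containerd-driver"       # nomad-driver-containerd plugin
      config {
        image = "dyrnq/nettools"         # image borrowed from the nerdctl test above
      }
    }
  }
}
EOF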

calico

install calico-node

bash /vagrant/scripts/install-calico.sh --ver "v3.28.0"

## or with calico bpf

bash /vagrant/scripts/install-calico.sh --ver "v3.28.0" --bpf

bash /vagrant/scripts/install-cni-configs.sh

reinit calico default-ipv4-ippool

## optional: this reinit step is not required
calicoctl delete ippools default-ipv4-ippool
calicoctl create -f -<<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  allowedUses:
  - Workload
  - Tunnel
  blockSize: 24
  cidr: 10.244.0.0/16
  ipipMode: Never
  natOutgoing: true
  nodeSelector: all()
  vxlanMode: Always
EOF
## end of optional step
calicoctl get ippools default-ipv4-ippool

calicoctl ipam check

calicoctl ipam show --show-blocks

calicoctl get nodes -o wide

calicoctl node status

test calico network

nerdctl run --net calico -it --rm dyrnq/nettools bash -c "ip a show dev eth0 && sleep 2s && ping -c 5 192.168.33.1"

run nomad job with cni calico

First, use nomad run to deploy the job, just like kubectl apply -f foo.yaml.

nomad stop -purge netshoot-1 || true
nomad run -detach /vagrant/nomad-jobs/example-job-cni-calico.hcl

nomad job status netshoot-1
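
calico also creates a workload endpoint per allocation, which can be listed with stock calicoctl:

calicoctl get workloadendpoints -o wide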

Then, once the job is running, use nerdctl ps to find the running containers.

nerdctl -n nomad ps -a

## Unless we are operating in cgroups.v2 mode, in which case we use the
## name "nomad.slice", which ends up being the cgroup parent.
## <https://github.com/Roblox/nomad-driver-containerd/blob/v0.9.4/containerd/driver.go#L267>

## or 

nerdctl -n nomad.slice ps -a
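
A small sketch that picks the namespace automatically, using the common cgroup v2 detection idiom:

## cgroup2fs on /sys/fs/cgroup means the host runs pure cgroups v2
if [ "$(stat -fc %T /sys/fs/cgroup)" = "cgroup2fs" ]; then
  nerdctl -n nomad.slice ps -a
else
  nerdctl -n nomad ps -a
fi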

remove job

nomad job stop -purge netshoot-1

quick rerun

nomad job stop -purge netshoot-1 && nomad run -detach /vagrant/nomad-jobs/example-job-cni-calico.hcl

apache apisix with consul

basic knowledge

How to use Consul as Registration Center in Apache APISIX? (xref)

Integration service discovery registry (xref)

install apisix and apisix-dashboard

bash /vagrant/scripts/install-apisix.sh
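
Before adding routes, the admin API can be probed to confirm apisix is answering (assuming the default admin key used by the examples below):

curl -i http://127.0.0.1:9180/apisix/admin/routes -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1'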

service discovery via consul dns

dig @127.0.0.1 -p 8600 nomad.service.dc1.consul. ANY
dig @127.0.0.1 -p 8600 nomad-client.service.dc1.consul. ANY

dig @127.0.0.1 -p 8600 netshoot-2-netshoot-group.service.dc1.consul. ANY
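
The ANY queries return A records, which carry addresses but no ports; asking consul for SRV records also shows the port each instance registered, which is presumably why the DNS route below appends :8080 to service_name:

dig @127.0.0.1 -p 8600 netshoot-2-netshoot-group.service.dc1.consul. SRV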

put route with apisix admin api

option: discovery via DNS

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
{
  "uri": "/",
  "name": "consul-netshoot-2-netshoot-group",
  "upstream": {
    "timeout": {
      "connect": 6,
      "send": 6,
      "read": 6
    },
    "type": "roundrobin",
    "scheme": "http",
    "discovery_type": "dns",
    "pass_host": "pass",
    "service_name": "netshoot-2-netshoot-group.service.dc1.consul:8080",
    "keepalive_pool": {
      "idle_timeout": 60,
      "requests": 1000,
      "size": 320
    }
  }
}'

option: discovery via Consul

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
{
  "uri": "/",
  "name": "consul-netshoot-2-netshoot-group",
  "upstream": {
    "timeout": {
      "connect": 6,
      "send": 6,
      "read": 6
    },
    "type": "roundrobin",
    "scheme": "http",
    "discovery_type": "consul",
    "pass_host": "pass",
    "service_name": "netshoot-2-netshoot-group",
    "keepalive_pool": {
      "idle_timeout": 60,
      "requests": 1000,
      "size": 320
    }
  }
}'
curl -fsSL http://127.0.0.1:8500/v1/catalog/service/netshoot-2-netshoot-group | jq -r '.[] | "\(.ServiceAddress):\(.ServicePort)"'
curl -fsL http://127.0.0.1:9090/v1/discovery/consul/dump | jq -r '.services."netshoot-2-netshoot-group"[] | "\(.host):\(.port)"'

test watch

while true ; do curl http://127.0.0.1:9080; sleep 2s; echo "------------>";  done

scale job

# first increase
nomad job scale -detach netshoot-2 20

# then decrease
nomad job scale -detach netshoot-2 1
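
While scaling, the instance count can be watched from consul's catalog in a second terminal (a sketch reusing the catalog endpoint shown earlier):

while true; do
  curl -fsSL http://127.0.0.1:8500/v1/catalog/service/netshoot-2-netshoot-group | jq length
  sleep 2s
done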

conclusion

Both flanneld and calico work fine with nomad; nomad can use cni and nomad-driver-containerd to create containers successfully.

clickhouse

CREATE DATABASE IF NOT EXISTS LOG;

CREATE TABLE LOG.log_queue
(
    `log` String
)
ENGINE = Kafka
SETTINGS 
  kafka_broker_list = '192.168.33.14:9092',
  kafka_topic_list = 'log_demo',
  kafka_group_name = 'ck-log',
  kafka_format = 'JSONAsString';



CREATE TABLE LOG.rawlog
(
    `message` String CODEC(ZSTD(1)),
    `hostname` String,
    `logfile_path` String,
    `log_time` DateTime DEFAULT now(),
     INDEX message message TYPE tokenbf_v1(30720, 2, 0) GRANULARITY 1
)
ENGINE = MergeTree
PARTITION BY (toDate(log_time))
ORDER BY (log_time)
TTL log_time + toIntervalDay(30)
SETTINGS index_granularity = 8192;

CREATE MATERIALIZED VIEW LOG.mv_rawlog TO LOG.rawlog
(
    `message` String,
    `hostname` String,
    `logfile_path` String,
    `log_time` DateTime
) AS
SELECT
    JSONExtractString(log, 'message') AS message,
    JSONExtractString(JSONExtractString(log, 'host'), 'name') AS hostname,
    JSONExtractString(JSONExtractString(JSONExtractString(log, 'log'), 'file'), 'path') AS logfile_path,
    now() AS log_time
FROM LOG.log_queue;
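
To exercise the pipeline end to end, push one JSON line into the kafka topic and read it back from the MergeTree table. A sketch assuming the stock kafka CLI is available on vm14 and clickhouse-client on vm15:

## produce one test record (field layout matches the materialized view)
echo '{"message":"hello","host":{"name":"vm14"},"log":{"file":{"path":"/var/log/demo.log"}}}' | \
  kafka-console-producer.sh --bootstrap-server 192.168.33.14:9092 --topic log_demo

## query it back from clickhouse
clickhouse-client --host 192.168.33.15 \
  --query "SELECT hostname, logfile_path, message FROM LOG.rawlog ORDER BY log_time DESC LIMIT 5"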
