Install Peer on Debian Server
This page describes how to install a Subutai peer on a freshly installed Debian Stretch (stable at the time of writing) server.
- Debian Stretch
- Make sure the server is running Debian Stretch. Check the file /etc/apt/sources.list; it should look something like:

```
# main sources
deb http://debian.intergenia.de/debian/ stretch main contrib non-free
deb-src http://debian.intergenia.de/debian/ stretch main contrib non-free
deb http://debian.intergenia.de/debian stretch-updates main
deb-src http://debian.intergenia.de/debian stretch-updates main
```
- Spare block device for container storage
- In this example /dev/sdb is a roughly 2 TB disk (1.82 TiB) with a single partition, which we will use for this purpose (a quick verification sketch follows this list).
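Before proceeding, both prerequisites can be sanity-checked from the shell. This is a minimal sketch assuming standard Debian paths and that the spare disk really is /dev/sdb:

```
# Confirm the Debian release (Stretch is the 9.x series)
cat /etc/debian_version

# Confirm the spare disk and its partition layout;
# adjust the device name if yours differs
lsblk /dev/sdb
```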
Subutai is distributed as a snap package, so first install snapd:

```
apt-get install snapd
```
Expected output:
```
triton418:~# apt-get install snapd
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  snapd
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/5,380 kB of archives.
After this operation, 30.0 MB of additional disk space will be used.
Selecting previously unselected package snapd.
(Reading database ... 30083 files and directories currently installed.)
Preparing to unpack .../snapd_2.21-2+b1_amd64.deb ...
Unpacking snapd (2.21-2+b1) ...
Setting up snapd (2.21-2+b1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/snapd.autoimport.service → /lib/systemd/system/snapd.autoimport.service.
Created symlink /etc/systemd/system/timers.target.wants/snapd.refresh.timer → /lib/systemd/system/snapd.refresh.timer.
Created symlink /etc/systemd/system/sockets.target.wants/snapd.socket → /lib/systemd/system/snapd.socket.
Created symlink /etc/systemd/system/multi-user.target.wants/snapd.service → /lib/systemd/system/snapd.service.
Processing triggers for man-db (2.7.6.1-2) ...
```
Quickly verify that snapd is indeed installed and running:

```
triton418:~# snap list
No snaps are installed yet. Try "snap install hello-world".
```
Now install the subutai snap:

```
triton418:~# snap install subutai --beta --devmode
2017-11-16T10:18:33Z INFO snap "core" has bad plugs or slots: core-support-plug (unknown interface)
subutai (beta) 6.1.6 from 'ssf' installed
triton418:~# snap list
Name     Version    Rev   Developer  Notes
core     16-2.29.3  3440  canonical  core
subutai  6.1.6      51    ssf        devmode
```
The `--beta --devmode` flags are needed because the snap currently runs outside the default snap confinement (security) rules. An open GitHub issue tracks the work to remove the need for devmode.
Snap uses systemd to run services. These can be listed with:

```
triton418:~# systemctl -a | grep subutai
run-snapd-ns-subutai.mnt.mount                loaded active   mounted /run/snapd/ns/subutai.mnt
snap-subutai-51.mount                         loaded active   mounted Mount unit for subutai
snap.subutai.agent-service.service            loaded active   running Service for snap application subutai.agent-service
snap.subutai.cgmanager-service.service        loaded active   running Service for snap application subutai.cgmanager-service
snap.subutai.dhcp-service.service             loaded active   running Service for snap application subutai.dhcp-service
snap.subutai.dns-service.service              loaded active   running Service for snap application subutai.dns-service
snap.subutai.net-post-conf.service            loaded inactive dead    Service for snap application subutai.net-post-conf
snap.subutai.nginx-service.service            loaded active   running Service for snap application subutai.nginx-service
snap.subutai.ovs-init-db-service.service      loaded active   running Service for snap application subutai.ovs-init-db-service
snap.subutai.ovs-init-switch-service.service  loaded active   running Service for snap application subutai.ovs-init-switch-service
snap.subutai.p2p-service.service              loaded active   running Service for snap application subutai.p2p-service
snap.subutai.rngd-service.service             loaded active   running Service for snap application subutai.rngd-service
snap.subutai.roaming-service.service          loaded inactive dead    Service for snap application subutai.roaming-service
```
All Subutai-related units have names prefixed with "snap.subutai". In the list above, two services are inactive. The roaming service handles network migration for edge installations in VirtualBox, so it is not required on a physical host or in the cloud. The other, net-post-conf, is a one-shot service used mostly by the autobuild.sh script when starting a Resource Host (RH) on VirtualBox.
The other services are:

- Agent is the Subutai agent service that interacts with the Management Console and executes commands sent from it (details here).
- cgmanager is a tool for manipulating cgroups (https://linuxcontainers.org/cgmanager/introduction/). It is used to move containers out of the agent's cgroup, so that they keep running when the agent is restarted or dies.
- The DHCP and DNS services serve only the containers on this RH. A container's network can be configured statically or by the DHCP server.
- Nginx is used as a reverse proxy for accessing containers via HTTP/HTTPS or TCP/UDP ports.
- The two OVS services are required by Open vSwitch, which connects and isolates the network layer of container environments.
- The p2p service creates tunnels between peers to connect environments (https://github.com/subutai-io/p2p).
- Finally, rngd (https://github.com/cernekee/rng-tools) is used to speed up GPG key generation for the agent and containers.
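Since these are ordinary systemd units, the usual systemctl and journalctl commands apply to them. For example (unit name taken from the listing above):

```
# Check the status and recent logs of the agent service
systemctl status snap.subutai.agent-service.service
journalctl -u snap.subutai.agent-service.service --since today

# Restart a service if needed
systemctl restart snap.subutai.agent-service.service
```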
On this particular server, there is an entire disk available for extra storage (/dev/sdb). We initialize its partition as Btrfs container storage using the btrfsinit helper shipped inside the snap:
```
triton418:~# /snap/subutai/current/bin/btrfsinit /dev/sdb1 -f
umount: /dev/sdb1: not mounted
btrfs-progs v4.10.1-10-gb2cf43e
See http://btrfs.wiki.kernel.org for more information.

Label:              (null)
UUID:               ea2f1d1c-1d61-4e32-9a47-fd7ed82c42d7
Node size:          16384
Sector size:        4096
Filesystem size:    1.82TiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         DUP               1.00GiB
  System:           DUP               8.00MiB
SSD detected:       no
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1     1.82TiB  /dev/sdb1

Created symlink /etc/systemd/system/multi-user.target.wants/var-snap-subutai-common-lxc.mount → /etc/systemd/system/var-snap-subutai-common-lxc.mount.
triton418:~# df
Filesystem     1K-blocks       Used  Available Use% Mounted on
udev            16399408          0   16399408   0% /dev
tmpfs            3288700     184040    3104660   6% /run
/dev/sda4     1913434900    1339256 1814828652   1% /
tmpfs           16443480          0   16443480   0% /dev/shm
tmpfs               5120          0       5120   0% /run/lock
tmpfs           16443480          0   16443480   0% /sys/fs/cgroup
/dev/sda2         483946      36146     422815   8% /boot
/dev/loop0         85760      85760          0 100% /snap/core/3440
/dev/loop1         20992      20992          0 100% /snap/subutai/51
/dev/sdb1     1953514548      16928 1951399680   1% /var/snap/subutai/common/lxc
```
The script also creates a systemd unit file so that the filesystem is mounted again on every reboot:
```
triton418:~# systemctl -a | grep subutai
run-snapd-ns-subutai.mnt.mount                loaded active   mounted /run/snapd/ns/subutai.mnt
snap-subutai-51.mount                         loaded active   mounted Mount unit for subutai
var-snap-subutai-common-lxc.mount             loaded active   mounted BTRFS mount
snap.subutai.agent-service.service            loaded active   running Service for snap application subutai.agent-service
snap.subutai.cgmanager-service.service        loaded active   running Service for snap application subutai.cgmanager-service
snap.subutai.dhcp-service.service             loaded active   running Service for snap application subutai.dhcp-service
snap.subutai.dns-service.service              loaded active   running Service for snap application subutai.dns-service
snap.subutai.net-post-conf.service            loaded inactive dead    Service for snap application subutai.net-post-conf
snap.subutai.nginx-service.service            loaded active   running Service for snap application subutai.nginx-service
snap.subutai.ovs-init-db-service.service      loaded active   running Service for snap application subutai.ovs-init-db-service
snap.subutai.ovs-init-switch-service.service  loaded active   running Service for snap application subutai.ovs-init-switch-service
snap.subutai.p2p-service.service              loaded active   running Service for snap application subutai.p2p-service
snap.subutai.rngd-service.service             loaded active   running Service for snap application subutai.rngd-service
snap.subutai.roaming-service.service          loaded inactive dead    Service for snap application subutai.roaming-service
```
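The generated mount unit can be inspected with systemctl cat (unit name taken from the listing above):

```
# Show the unit file created by btrfsinit
systemctl cat var-snap-subutai-common-lxc.mount
```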
Subutai containers can be managed and inspected with the subutai command:
```
triton418:~# subutai --help
NAME:
   Subutai - daemon and command line interface binary

USAGE:
   subutai [global options] command [command options] [arguments...]

VERSION:
   6.1.5

COMMANDS:
     attach      attach to Subutai container
     backup      backup Subutai container
     batch       batch commands execution
     checkpoint  chekpoint/restore in user space
     clone       clone Subutai container
     cleanup     clean Subutai environment
     config      edit container config
     daemon      start Subutai agent
     demote      demote Subutai container
     destroy     destroy Subutai container
     export      export Subutai container
     import      import Subutai template
     info        information about host system
     hostname    Set hostname of container or host
     list        list Subutai container
     log         print application logs
     map         Subutai port mapping
     metrics     list Subutai container
     migrate     migrate Subutai container
     p2p         P2P network operations
     promote     promote Subutai container
     proxy       Subutai reverse proxy
     quota       set quotas for Subutai container
     rename      rename Subutai container
     restore     restore Subutai container
     stats       statistics from host
     start       start Subutai container
     stop        stop Subutai container
     tunnel      SSH tunnel management
     update      update Subutai management, container or Resource host
     vxlan       VXLAN tunnels operation
     help, h     Shows a list of commands or help for one command

GLOBAL OPTIONS:
   -d             debug mode
   --help, -h     show help
   --version, -v  print the version
```
We can now try importing a Subutai template:
```
triton418:~# subutai import ubuntu16
INFO[2017-11-17 05:12:24] Importing ubuntu16
INFO[2017-11-17 05:12:24] Version: 4.0.0
INFO[2017-11-17 05:12:24] Template's owner signature verified
INFO[2017-11-17 05:12:24] Downloading ubuntu16
73.09 MiB / 73.09 MiB [========================================] 100.00% 6s
INFO[2017-11-17 05:12:31] File integrity verified
INFO[2017-11-17 05:12:31] Unpacking template ubuntu16
INFO[2017-11-17 05:12:33] Installing template ubuntu16
```
And list the templates and containers with the subutai list command:
```
triton418:~# subutai list
CONT/TEMP
---------
ubuntu16
triton418:~# subutai list --help
NAME:
   subutai list - list Subutai container

USAGE:
   subutai list [command options] [arguments...]

OPTIONS:
   --container, -c  containers only
   --template, -t   templates only
   --info, -i       detailed container info
   --ancestor, -a   with ancestors
   --parent, -p     with parent
triton418:~# subutai list -t
TEMPLATE
--------
ubuntu16
triton418:~# subutai list -c
CONTAINER
```
We can now create a new container based on the imported template.
```
triton418:~# subutai clone ubuntu16 container1
INFO[2017-11-20 05:53:34] container1 started
INFO[2017-11-20 05:53:34] container1 with ID 3833098B4A64E700B8BF7B0EE9EDE186F945FD7B successfully cloned
```
The clone command creates a container based on a template and starts it. With container1 running, we can now attach to its console:
```
triton418:~# subutai attach container1
root@container1:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
```
For scripting purposes, the attach command also supports inline commands:
```
triton418:~# subutai attach container1 ls -l
total 0
drwxr-xr-x   1 root   root    1658 Apr 29  2016 bin
drwxr-xr-x   1 root   root       0 Apr 12  2016 boot
drwxr-xr-x   6 root   root     400 Nov 20 05:53 dev
drwxr-xr-x   1 root   root    2258 Nov 20 05:53 etc
drwxr-xr-x   1 root   root      12 Nov 17 05:12 home
drwxr-xr-x   1 root   root     212 Apr 29  2016 lib
drwxr-xr-x   1 root   root      40 Apr 29  2016 lib64
drwxr-xr-x   1 root   root       0 Apr 29  2016 media
drwxr-xr-x   1 root   root       0 Apr 29  2016 mnt
drwxr-xr-x   1 root   root       0 Dec 16  2014 opt
dr-xr-xr-x 215 nobody nogroup    0 Nov 20 05:53 proc
drwx------   1 root   root      56 Nov 20 05:54 root
drwxr-xr-x  12 root   root     400 Nov 20 05:53 run
drwxr-xr-x   1 root   root    1764 Apr 29  2016 sbin
drwxr-xr-x   1 root   root       0 Apr 29  2016 srv
dr-xr-xr-x  13 nobody nogroup    0 Nov 20 05:53 sys
drwxrwxrwt   1 root   root      94 Nov 20 05:53 tmp
drwxr-xr-x   1 root   root      70 Apr 29  2016 usr
drwxr-xr-x   1 root   root      90 Nov 17 05:12 var
```
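This makes it possible to drive containers from plain shell scripts. A minimal sketch, assuming the container has outbound network access and apt works inside it; the package name is purely illustrative:

```
#!/bin/sh
# Provision a freshly cloned container non-interactively
# by running commands inside it via subutai attach.
subutai attach container1 apt-get update
subutai attach container1 apt-get install -y curl
```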
We can destroy the running container using the destroy command:
```
triton418:~# subutai destroy container1
INFO[2017-11-20 05:56:40] container1 is destroyed
```
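To confirm the container is gone, list the remaining containers (using the -c flag shown in the list --help output above):

```
subutai list -c   # container1 should no longer appear
```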
What we have done so far is set up a Resource Host that can manage and run Subutai containers. To turn that RH into a proper peer, we need to install the management template:
```
triton418:~# subutai import management
INFO[2017-11-20 10:21:32] Importing management
INFO[2017-11-20 10:21:33] Version: 6.1.3
INFO[2017-11-20 10:21:33] Template's owner signature verified
INFO[2017-11-20 10:21:33] Unpacking template management
INFO[2017-11-20 10:21:35] Installing template management
INFO[2017-11-20 10:21:43] ********************
INFO[2017-11-20 10:21:43] Subutai Management UI will be shortly available at https://85.25.117.194:8443
INFO[2017-11-20 10:21:43] login: admin
INFO[2017-11-20 10:21:43] password: secret
INFO[2017-11-20 10:21:43] ********************
```
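Once the template is up, a quick way to check that the Management UI is answering (address and port taken from the output above; -k skips certificate verification, assuming the console is still using its default self-signed certificate):

```
# Expect an HTTP response once the console has finished starting
curl -k -I https://85.25.117.194:8443
```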