From 48f4ea4ce3409d00bd1cdeffea759a12f31dfeb9 Mon Sep 17 00:00:00 2001 From: Kevin Putnam Date: Thu, 4 Jun 2020 14:53:37 -0700 Subject: [PATCH] Hard coded release version for sphinx-copybutton and updated code blocks to use console and ShellSession where required. Signed-off-by: Kevin Putnam --- docs/DEVELOPMENT.md | 12 +-- docs/autotest.md | 6 +- docs/install.md | 184 +++++++++++++++++++------------------ docs/js/copybutton.js | 7 ++ docs/requirements.txt | 2 +- examples/gce.md | 64 ++++++------- examples/memcached.md | 32 +++---- examples/redis-operator.md | 22 ++--- 8 files changed, 169 insertions(+), 160 deletions(-) diff --git a/docs/DEVELOPMENT.md b/docs/DEVELOPMENT.md index 41c579cf4c..4402cfc7bc 100644 --- a/docs/DEVELOPMENT.md +++ b/docs/DEVELOPMENT.md @@ -341,15 +341,15 @@ pkg/pmem-registry/pmem-registry.pb.go is generated from pkg/pmem-registry/pmem-r protoc comes from package _protobuf-compiler_ on Ubuntu 18.04 - get protobuf for Go: -```sh +``` console $ git clone https://github.com/golang/protobuf.git && cd protobuf $ make # installs needed binary in $GOPATH/bin/protoc-gen-go ``` - generate by running in \~/go/src/github.com/intel/pmem-csi/pkg/pmem-registry: -```sh -protoc --plugin=protoc-gen-go=$GOPATH/bin/protoc-gen-go --go_out=plugins=grpc:./ pmem-registry.proto +``` console +$ protoc --plugin=protoc-gen-go=$GOPATH/bin/protoc-gen-go --go_out=plugins=grpc:./ pmem-registry.proto ``` ### Table of Contents in README and DEVELOPMENT @@ -357,7 +357,7 @@ protoc --plugin=protoc-gen-go=$GOPATH/bin/protoc-gen-go --go_out=plugins=grpc:./ Table of Contents can be generated using multiple methods. - One possibility is to use [pandoc](https://pandoc.org/) -```sh +``` console $ pandoc -s -t markdown_github --toc README.md -o /tmp/temp.md ``` @@ -379,8 +379,8 @@ theme. Building the documentation requires Python 3.x and venv. -```bash -make vhtml +``` console +$ make vhtml ``` ### Edit diff --git a/docs/autotest.md b/docs/autotest.md index e75097c562..02370ef770 100644 --- a/docs/autotest.md +++ b/docs/autotest.md @@ -109,7 +109,7 @@ the default `pmem-govm` cluster name via the `CLUSTER` env variable. For example, this invocation sets up a cluster using the non-default Fedora distro: -``` sh +``` TEST_DISTRO=fedora CLUSTER=fedora-govm make start ``` @@ -128,7 +128,7 @@ can be used to run individual tests and to control additional aspects of the test run. For example, to run just the E2E provisioning test (create PVC, write data in one pod, read it in another) in verbose mode: -``` sh +``` console $ KUBECONFIG=$(pwd)/_work/pmem-govm/kube.config REPO_ROOT=$(pwd) ginkgo -v -focus=pmem-csi.*should.provision.storage.with.defaults ./test/e2e/ Nov 26 11:21:28.805: INFO: The --provider flag is not set. Treating as a conformance test. Some tests may not be run. Running Suite: PMEM E2E suite @@ -150,7 +150,7 @@ Test Suite Passed It is also possible to run just the sanity tests until one of them fails: -``` sh +``` console $ REPO_ROOT=`pwd` ginkgo '-focus=sanity' -failFast ./test/e2e/ ... ``` \ No newline at end of file diff --git a/docs/install.md b/docs/install.md index abe8a75fac..417c915f72 100644 --- a/docs/install.md +++ b/docs/install.md @@ -46,8 +46,8 @@ One region per each NVDIMM is created in non-interleaved configuration. In such a configuration, a PMEM-CSI volume cannot be larger than one NVDIMM. 
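+Before changing the goal, it can help to review the installed NVDIMMs and any
+existing regions first. A minimal check with `ipmctl` (a sketch only; the exact
+output columns vary between `ipmctl` releases):
+``` console
+$ ipmctl show -dimm
+$ ipmctl show -region
+```
+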
Example of creating regions without interleaving, using all NVDIMMs: -```sh -# ipmctl create -goal PersistentMemoryType=AppDirectNotInterleaved +``` console +$ ipmctl create -goal PersistentMemoryType=AppDirectNotInterleaved ``` Alternatively, multiple NVDIMMs can be combined to form an interleaved set. @@ -56,8 +56,8 @@ for improved read/write performance and allowing one region (also, PMEM-CSI volu to be larger than single NVDIMM. Example of creating regions in interleaved mode, using all NVDIMMs: -```sh -# ipmctl create -goal PersistentMemoryType=AppDirect +``` console +$ ipmctl create -goal PersistentMemoryType=AppDirect ``` When running inside virtual machines, each virtual machine typically @@ -66,8 +66,8 @@ the virtual machine. Instead, that region must be made available for use with PMEM-CSI because when the virtual machine comes up for the first time, the entire region is already allocated for use as a single block device: -``` sh -# ndctl list -RN +``` console +$ ndctl list -RN { "regions":[ { @@ -89,20 +89,20 @@ block device: } ] } -# ls -l /dev/pmem* +$ ls -l /dev/pmem* brw-rw---- 1 root disk 259, 0 Jun 4 16:41 /dev/pmem0 ``` Labels must be initialized in such a region, which must be performed once after the first boot: -``` sh -# ndctl disable-region region0 +``` console +$ ndctl disable-region region0 disabled 1 region -# ndctl init-labels nmem0 +$ ndctl init-labels nmem0 initialized 1 nmem -# ndctl enable-region region0 +$ ndctl enable-region region0 enabled 1 region -# ndctl list -RN +$ ndctl list -RN [ { "dev":"region0", @@ -114,7 +114,7 @@ enabled 1 region "persistence_domain":"unknown" } ] -# ls -l /dev/pmem* +$ ls -l /dev/pmem* ls: cannot access '/dev/pmem*': No such file or directory ``` @@ -127,8 +127,8 @@ built anywhere in the filesystem. Pre-built container images are available and t users don't need to build from source, but they will still need some additional files. To get the source code, use: -``` -git clone https://github.com/intel/pmem-csi +``` console +$ git clone https://github.com/intel/pmem-csi ``` ### Run PMEM-CSI on Kubernetes @@ -146,8 +146,8 @@ on the Kubernetes version. 
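+Which directory under `deploy/` matches the cluster can be determined from the
+server version reported by `kubectl` (a quick, generic check, nothing
+PMEM-CSI specific):
+``` console
+$ kubectl version
+```
+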
- **Label the cluster nodes that provide persistent memory device(s)** -```sh - $ kubectl label node storage=pmem +``` console +$ kubectl label node storage=pmem ``` - **Set up certificates** @@ -162,16 +162,16 @@ These are the steps for manual set-up of certificates: - Download cfssl tools -```sh - $ curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o _work/bin/cfssl --create-dirs - $ curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o _work/bin/cfssljson --create-dirs - $ chmod a+x _work/bin/cfssl _work/bin/cfssljson +``` console +$ curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o _work/bin/cfssl --create-dirs +$ curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o _work/bin/cfssljson --create-dirs +$ chmod a+x _work/bin/cfssl _work/bin/cfssljson ``` - Run certificates set-up script -```sh - $ KUBCONFIG="<> PATH="$PATH:$PWD/_work/bin" ./test/setup-ca-kubernetes.sh +``` console +$ KUBCONFIG="<> PATH="$PATH:$PWD/_work/bin" ./test/setup-ca-kubernetes.sh ``` - **Deploy the driver to Kubernetes** @@ -191,8 +191,8 @@ For each Kubernetes version, four different deployment variants are provided: For example, to deploy for production with LVM device mode onto Kubernetes 1.17, use: -```sh - $ kubectl create -f deploy/kubernetes-1.17/pmem-csi-lvm.yaml +``` console +$ kubectl create -f deploy/kubernetes-1.17/pmem-csi-lvm.yaml ``` The PMEM-CSI [scheduler extender](design.md#scheduler-extender) and @@ -244,23 +244,23 @@ for `kubectl kustomize`. For example: - **Wait until all pods reach 'Running' status** -```sh - $ kubectl get pods - NAME READY STATUS RESTARTS AGE - pmem-csi-node-8kmxf 2/2 Running 0 3m15s - pmem-csi-node-bvx7m 2/2 Running 0 3m15s - pmem-csi-controller-0 2/2 Running 0 3m15s - pmem-csi-node-fbmpg 2/2 Running 0 3m15s +``` console +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +pmem-csi-node-8kmxf 2/2 Running 0 3m15s +pmem-csi-node-bvx7m 2/2 Running 0 3m15s +pmem-csi-controller-0 2/2 Running 0 3m15s +pmem-csi-node-fbmpg 2/2 Running 0 3m15s ``` - **Verify that the node labels have been configured correctly** -```sh - $ kubectl get nodes --show-labels +``` console +$ kubectl get nodes --show-labels ``` The command output must indicate that every node with PMEM has these two labels: -``` +``` console pmem-csi.intel.com/node=,storage=pmem ``` @@ -271,30 +271,30 @@ and that the driver's log output doesn't contain errors. 
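+For example, the controller pod listed above can be inspected directly; the
+`--all-containers` flag avoids having to know the individual container names:
+``` console
+$ kubectl describe pod pmem-csi-controller-0 | grep Image:
+$ kubectl logs pmem-csi-controller-0 --all-containers
+```
+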
- **Define two storage classes using the driver** -```sh - $ kubectl create -f deploy/kubernetes-/pmem-storageclass-ext4.yaml - $ kubectl create -f deploy/kubernetes-/pmem-storageclass-xfs.yaml +``` console +$ kubectl create -f deploy/kubernetes-/pmem-storageclass-ext4.yaml +$ kubectl create -f deploy/kubernetes-/pmem-storageclass-xfs.yaml ``` - **Provision two pmem-csi volumes** -```sh - $ kubectl create -f deploy/kubernetes-/pmem-pvc.yaml +``` console +$ kubectl create -f deploy/kubernetes-/pmem-pvc.yaml ``` - **Verify two Persistent Volume Claims have 'Bound' status** -```sh - $ kubectl get pvc - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - pmem-csi-pvc-ext4 Bound pvc-f70f7b36-6b36-11e9-bf09-deadbeef0100 4Gi RWO pmem-csi-sc-ext4 16s - pmem-csi-pvc-xfs Bound pvc-f7101fd2-6b36-11e9-bf09-deadbeef0100 4Gi RWO pmem-csi-sc-xfs 16s +``` console +$ kubectl get pvc +NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE +pmem-csi-pvc-ext4 Bound pvc-f70f7b36-6b36-11e9-bf09-deadbeef0100 4Gi RWO pmem-csi-sc-ext4 16s +pmem-csi-pvc-xfs Bound pvc-f7101fd2-6b36-11e9-bf09-deadbeef0100 4Gi RWO pmem-csi-sc-xfs 16s ``` - **Start two applications requesting one provisioned volume each** -```sh - $ kubectl create -f deploy/kubernetes-/pmem-app.yaml +``` console +$ kubectl create -f deploy/kubernetes-/pmem-app.yaml ``` These applications use **storage: pmem** in the nodeSelector @@ -303,30 +303,32 @@ one with ext4-format and another with xfs-format file system. - **Verify two application pods reach 'Running' status** -```sh - $ kubectl get po my-csi-app-1 my-csi-app-2 - NAME READY STATUS RESTARTS AGE - my-csi-app-1 1/1 Running 0 6m5s - NAME READY STATUS RESTARTS AGE - my-csi-app-2 1/1 Running 0 6m1s +``` console +$ kubectl get po my-csi-app-1 my-csi-app-2 +NAME READY STATUS RESTARTS AGE +my-csi-app-1 1/1 Running 0 6m5s +NAME READY STATUS RESTARTS AGE +my-csi-app-2 1/1 Running 0 6m1s ``` - **Check that applications have a pmem volume mounted with added dax option** -```sh - $ kubectl exec my-csi-app-1 -- df /data - Filesystem 1K-blocks Used Available Use% Mounted on - /dev/ndbus0region0fsdax/5ccaa889-551d-11e9-a584-928299ac4b17 - 4062912 16376 3820440 0% /data - $ kubectl exec my-csi-app-2 -- df /data - Filesystem 1K-blocks Used Available Use% Mounted on - /dev/ndbus0region0fsdax/5cc9b19e-551d-11e9-a584-928299ac4b17 - 4184064 37264 4146800 1% /data - - $ kubectl exec my-csi-app-1 -- mount |grep /data - /dev/ndbus0region0fsdax/5ccaa889-551d-11e9-a584-928299ac4b17 on /data type ext4 (rw,relatime,dax) - $ kubectl exec my-csi-app-2 -- mount |grep /data - /dev/ndbus0region0fsdax/5cc9b19e-551d-11e9-a584-928299ac4b17 on /data type xfs (rw,relatime,attr2,dax,inode64,noquota) +``` console +$ kubectl exec my-csi-app-1 -- df /data +Filesystem 1K-blocks Used Available Use% Mounted on +/dev/ndbus0region0fsdax/5ccaa889-551d-11e9-a584-928299ac4b17 + 4062912 16376 3820440 0% /data + +$ kubectl exec my-csi-app-2 -- df /data +Filesystem 1K-blocks Used Available Use% Mounted on +/dev/ndbus0region0fsdax/5cc9b19e-551d-11e9-a584-928299ac4b17 + 4184064 37264 4146800 1% /data + +$ kubectl exec my-csi-app-1 -- mount |grep /data +/dev/ndbus0region0fsdax/5ccaa889-551d-11e9-a584-928299ac4b17 on /data type ext4 (rw,relatime,dax) + +$ kubectl exec my-csi-app-2 -- mount |grep /data +/dev/ndbus0region0fsdax/5cc9b19e-551d-11e9-a584-928299ac4b17 on /data type xfs (rw,relatime,attr2,dax,inode64,noquota) ``` #### Expose persistent and cache volumes to applications @@ -421,7 +423,7 @@ in the pod meta data. 
In both cases, the value has to be large enough for all PMEM volumes used by the pod, otherwise pod creation will fail with an error similar to this: -``` +``` console Error: container create failed: QMP command failed: not enough space, currently 0x8000000 in use of total space for memory devices 0x3c100000 ``` @@ -493,10 +495,10 @@ current directory contains the `deploy` directory from the PMEM-CSI repository. It is also possible to reference the base via a [URL](https://github.com/kubernetes-sigs/kustomize/blob/master/examples/remoteBuild.md). -``` sh -mkdir my-pmem-csi-deployment +``` ShellSession +$ mkdir my-pmem-csi-deployment -cat >my-pmem-csi-deployment/kustomization.yaml <my-pmem-csi-deployment/kustomization.yaml <my-pmem-csi-deployment/scheduler-patch.yaml <my-pmem-csi-deployment/scheduler-patch.yaml <my-scheduler/kustomization.yaml <my-scheduler/kustomization.yaml <my-scheduler/node-port-patch.yaml <my-scheduler/node-port-patch.yaml </var/lib/scheduler/scheduler-policy.cfg' </var/lib/scheduler/scheduler-policy.cfg' </var/lib/scheduler/scheduler-policy.cfg' <kubeadm.config <kubeadm.config <= adding that label. The CA gets configured explicitly, which is supported for webhooks. -``` sh -mkdir my-webhook +``` ShellSession +$ mkdir my-webhook -cat >my-webhook/kustomization.yaml <my-webhook/kustomization.yaml <my-webhook/webhook-patch.yaml <my-webhook/webhook-patch.yaml < { var onlyCopyPromptLines = true; // Inserted from config var removePrompts = true; // Inserted from config + //MODIFICATION: Special Handling of ShellSession which ignores setting of "true" for onlyCopyPromptLines grandParent = target.parentElement.parentElement; blockType = grandParent.classList; if (blockType[0].includes("ShellSession")) { onlyCopyPromptLines = false; } + //END MODIFICATION // Text content line filtering based on prompts (if a prompt text is given) if (copybuttonPromptText.length > 0) { diff --git a/docs/requirements.txt b/docs/requirements.txt index bdc8d57374..bca591b9d0 100644 --- a/docs/requirements.txt +++ b/docs/requirements.txt @@ -2,4 +2,4 @@ sphinx sphinx_rtd_theme recommonmark sphinx-markdown-tables -sphinx-copybutton +sphinx-copybutton == 0.2.11 diff --git a/examples/gce.md b/examples/gce.md index 66622f2bcd..d15a3eb21e 100644 --- a/examples/gce.md +++ b/examples/gce.md @@ -22,8 +22,8 @@ Of the existing machine images, `debian-9` is known to support PMEM. When booting up that image on a suitable machine configuration, a `/dev/pmem0` device is created: -```sh -gcloud alpha compute instances create pmem-debian-9 --machine-type n1-highmem-96-aep --local-nvdimm size=1600 --zone us-central1-f +``` console +$ gcloud alpha compute instances create pmem-debian-9 --machine-type n1-highmem-96-aep --local-nvdimm size=1600 --zone us-central1-f ``` ### Preparing the machine image @@ -39,7 +39,7 @@ and check out the source code with `repo sync`. Before proceeding, apply the following patch: -```ShellSession +``` ShellSession $ cd src/overlays $ patch -p1 <server/kubernetes-manifests.tar.gz +``` ShellSession +$ cd kubernetes +$ mkdir server +$ curl -L https://dl.k8s.io/v1.15.1/kubernetes.tar.gz | tar -zxf - -O kubernetes/server/kubernetes-manifests.tar.gz >server/kubernetes-manifests.tar.gz ``` Then create a cluster with one master and three worker nodes: @@ -218,13 +218,13 @@ for i in $( seq 0 $(($NUM_NODES - 1)) ); do kubectl label node kubernetes-minion Then certificates need to be created. 
This currently works best with scripts from the pmem-csi repo: -```sh -git clone --branch release-0.5 https://github.com/intel/pmem-csi -cd pmem-csi -curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o _work/bin/cfssl --create-dirs -curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o _work/bin/cfssljson --create-dirs -chmod a+x _work/bin/cfssl _work/bin/cfssljson -PATH="$PATH:$PWD/_work/bin" ./test/setup-ca-kubernetes.sh +``` console +$ git clone --branch release-0.5 https://github.com/intel/pmem-csi +$ cd pmem-csi +$ curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o _work/bin/cfssl --create-dirs +$ curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o _work/bin/cfssljson --create-dirs +$ chmod a+x _work/bin/cfssl _work/bin/cfssljson +$ PATH="$PATH:$PWD/_work/bin" ./test/setup-ca-kubernetes.sh ``` As in QEMU, the GCE VMs come up with the entire PMEM already set up @@ -252,24 +252,24 @@ done With the nodes set up like that, we can proceed to deploy PMEM-CSI: -```sh -kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/kubernetes-1.14/pmem-csi-lvm.yaml -kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/common/pmem-storageclass-ext4.yaml -kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/common/pmem-storageclass-xfs.yaml +``` console +$ kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/kubernetes-1.14/pmem-csi-lvm.yaml +$ kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/common/pmem-storageclass-ext4.yaml +$ kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/common/pmem-storageclass-xfs.yaml ``` ### Testing PMEM-CSI This brings up the example apps, one using `ext4`, the other `xfs`: -```sh -kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/common/pmem-pvc.yaml -kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/common/pmem-app.yaml +``` console +$ kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/common/pmem-pvc.yaml +$ kubectl create -f https://raw.githubusercontent.com/intel/pmem-csi/v0.5.0/deploy/common/pmem-app.yaml ``` It is expected that `my-csi-app-2` will never start because the COS kernel lacks support for xfs. But `my-csi-app-1` comes up: -```console +``` console $ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-0c2ebc68-cd77-4c08-9fbb-f8a5d33440b9 4Gi RWO Delete Bound default/pmem-csi-pvc-ext4 pmem-csi-sc-ext4 2m52s @@ -299,13 +299,13 @@ longer necessary. It is possible to define an instance template that uses the alpha machines: -```sh -gcloud alpha compute instance-templates create kubernetes-minion-template --machine-type n1-highmem-96-aep --local-nvdimm size=1600 --region us-central1 +``` console +$ gcloud alpha compute instance-templates create kubernetes-minion-template --machine-type n1-highmem-96-aep --local-nvdimm size=1600 --region us-central1 ``` But then using that template fails (regardless whether `alpha` is used or not): -```sh +``` console $ gcloud alpha compute instance-groups managed create kubernetes-minion-group --zone us-central1-b --base-instance-name kubernetes-minion-group --size 3 --template kubernetes-minion-template ERROR: (gcloud.alpha.compute.instance-groups.managed.create) Could not fetch resource: - Internal error. Please try again or contact Google Support. 
(Code: '58F5F51C192A0.A2E8610.8D038E81') @@ -353,8 +353,8 @@ The COS image does not have `ndctl`. The following `DaemonSet` instead runs commands inside the `pmem-csi-driver` image. This is an alternative to running inside Docker. -```sh -kubectl create -f - < diff --git a/examples/memcached.md b/examples/memcached.md index ed091bb2a7..1f38aee65e 100644 --- a/examples/memcached.md +++ b/examples/memcached.md @@ -48,7 +48,7 @@ name and other parameters in the deployment can be modified with how one can change the namespace, volume size or add additional command line parameters: -```console +``` ShellSession $ mkdir -p my-memcached-deployment $ cat >my-memcached-deployment/kustomization.yaml < 11211 Forwarding from [::1]:11211 -> 11211 ``` In another shell we can now use `telnet` to connect to memcached: -```console +``` console $ telnet localhost 11211 Trying ::1... Connected to localhost. @@ -149,13 +149,13 @@ Connection closed by foreign host. The following command verifies the data was stored in a persistent memory data volume: -```console +``` console $ kubectl exec -n demo $(kubectl get -n demo pods -l app.kubernetes.io/name=pmem-memcached -o jsonpath={..metadata.name}) grep 'I am PMEM.' /data/memcached-memory-file Binary file /data/memcached-memory-file matches ``` To clean up, terminate the `kubectl port-forward` command and delete the memcached deployment with: -```console +``` console $ kubectl delete --kustomize my-memcached-deployment service "pmem-memcached" deleted deployment.apps "pmem-memcached" deleted @@ -192,7 +192,7 @@ This example can also be kustomized. It uses the [`pmem-csi-sc-ext4` storage class](/deploy/common/pmem-storageclass-ext4.yaml). Here we just use the defaults, in particular the default namespace: -```console +``` console $ kubectl apply --kustomize github.com/intel/pmem-csi/deploy/kustomize/memcached/persistent service/pmem-memcached created statefulset.apps/pmem-memcached created @@ -201,7 +201,7 @@ statefulset.apps/pmem-memcached created We can verify that memcached really does a warm restart by storing some data, removing the instance and then starting it again. -```console +``` console $ kubectl wait --for=condition=Ready pods -l app.kubernetes.io/name=pmem-memcached pod/pmem-memcached-0 condition met ``` @@ -209,20 +209,20 @@ pod/pmem-memcached-0 condition met Because we use a stateful set, the pod name is deterministic. There is also a corresponding persistent volume: -```console +``` console $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE memcached-data-volume-pmem-memcached-0 Bound pvc-bb2cde11-6aa2-46da-8521-9bc35c08426d 200Mi RWO pmem-csi-sc-ext4 5m45s ``` First set a key using the same approach as before: -```console +``` console $ kubectl port-forward service/pmem-memcached 11211 Forwarding from 127.0.0.1:11211 -> 11211 Forwarding from [::1]:11211 -> 11211 ``` -```console +``` console $ telnet localhost 11211 Trying ::1... Connected to localhost. @@ -246,7 +246,7 @@ Then scale down the number of memcached instances down to zero, then restart it. To avoid race conditions, it is important to wait for Kubernetes to catch up: -```console +``` console $ kubectl scale --replicas=0 statefulset/pmem-memcached statefulset.apps/pmem-memcached scaled @@ -265,7 +265,7 @@ Restart the port forwarding now because it is tied to the previous pod. Without the persistent volume and the restartable cache, the memcached cache would be empty now. 
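+Before talking to memcached again, it can be reassuring to confirm that the new
+pod is running and re-attached the same volume (pod label and volume name as
+shown earlier):
+``` console
+$ kubectl get pods -l app.kubernetes.io/name=pmem-memcached
+$ kubectl get pvc memcached-data-volume-pmem-memcached-0
+```
+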
With `telnet` we can verify that this is not the case and that the key is still known: -```console +``` console $ telnet 127.0.0.1 11211 Trying 127.0.0.1... Connected to 127.0.0.1. @@ -291,7 +291,7 @@ Connection closed by foreign host. ``` To clean up, terminate the `kubectl port-forward` command and delete the memcached deployment with: -```console +``` console $ kubectl delete --kustomize github.com/intel/pmem-csi/deploy/kustomize/memcached/persistent service "pmem-memcached" deleted statefulset.apps "pmem-memcached" deleted @@ -301,7 +301,7 @@ Beware that at the moment, the volumes need to be removed manually after removing the stateful set. A [request to automate that](https://github.com/kubernetes/kubernetes/issues/55045) is open. -```console +``` console $ kubectl delete pvc -l app.kubernetes.io/name=pmem-memcached persistentvolumeclaim "memcached-data-volume-pmem-memcached-0" deleted ``` diff --git a/examples/redis-operator.md b/examples/redis-operator.md index 87b5554cd8..599443b3b4 100644 --- a/examples/redis-operator.md +++ b/examples/redis-operator.md @@ -10,24 +10,24 @@ This readme describes a complete example to deploy a Redis cluster through the [ The steps below describe how to install PMEM-CSI within a Kubernetes cluster that contains one master and three worker nodes. We use a specific version of [Clear Linux OS](https://clearlinux.org/) for the nodes, so the following steps can be reproduced. 1. Clone this project into `$GOPATH/src/github.com/intel`. 2. Build PMEM-CSI: - ```sh + ``` console $ cd $GOPATH/src/github.com/intel/pmem-csi $ make push-images ``` **Note**: By default, the build images are pushed into a local [Docker registry](https://docs.docker.com/registry/deploying/). 3. Start the Kubernetes cluster: - ```sh + ``` console $ TEST_CLEAR_LINUX_VERSION=29820 make start ``` **WARNING**: You may run into an SSL problem when executing this command. You can avoid the SSL error by disabling test checks for signed files, however, be aware that this is not a secure step. Use this command to disable checking for signed files: `TEST_CLEAR_LINUX_VERSION=29820 TEST_CHECK_SIGNED_FILES=false make start`. 4. Setup `KUBECONFIG` env variable to use `kubectl` binary: - ```sh + ``` console $ export KUBECONFIG=$(pwd)/_work/clear-govm/kube.config ``` 5. Verify that all pods reach `Running` status: - ```sh + ``` console $ kubectl get po NAME READY STATUS RESTARTS AGE pmem-csi-controller-0 3/3 Running 0 125ms @@ -39,7 +39,7 @@ The steps below describe how to install PMEM-CSI within a Kubernetes cluster tha ## Redis operator installation and Redis cluster deployment The steps to install the Redis operator by Spotahome are listed below: 1. Install the Redis operator and validate that the `CRD` was properly defined: - ```sh + ``` console $ kubectl create -f https://raw.githubusercontent.com/spotahome/redis-operator/master/example/operator/all-redis-operator-resources.yaml $ kubectl get po @@ -57,7 +57,7 @@ The steps to install the Redis operator by Spotahome are listed below: redisfailovers.databases.spotahome.com 2019-06-21T19:03:36Z ``` 2. 
Deploy a Redis cluster that uses PMEM-CSI: - ```sh + ``` console $ kubectl create -f https://raw.githubusercontent.com/spotahome/redis-operator/master/example/redisfailover/pmem.yaml $ kubectl get po @@ -79,7 +79,7 @@ The steps to install the Redis operator by Spotahome are listed below: ## Verify that the Redis instances are using the volumes provided First, match the volumes claim name used by each Redis instance to the current `pvc` which is provisioned by `pmem-csi-sc-ext4`. Both items must be properly bound. Next, check the block devices in the worker nodes of your Kubernetes cluster and match the ones under the pmem `block` device, as shown below. -```sh +``` console $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE redisfailover-pmem-data-rfr-redisfailover-pmem-0 Bound pvc-82ee55fc-9458-11e9-bec2-0242ac110004 100Mi RWO pmem-csi-sc-ext4 19m @@ -97,18 +97,18 @@ $ kubectl exec rfr-redisfailover-pmem-0 -- mount | grep /data ## Playing around with Redis operator The steps to start playing around with Redis cluster *through sentinel instances* managed by the operator, are listed below: 1. Get the network information of the Redis node working as a master. You can use any sentinel instance, i.e.: - ```sh + ``` console $ kubectl exec -it rfs-redisfailover-pmem-7595857d4c-56mmz -- redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster 1) "10.244.3.4" 2) "6379" ``` 2. Set a `key:value` pair into Redis master: - ```sh + ``` console $ kubectl exec -it rfs-redisfailover-pmem-7595857d4c-c69xg -- redis-cli -h 10.244.3.4 -p 6379 SET hello world! OK ``` 3. Get `value` from `key`: - ```sh + ``` console $ kubectl exec -it rfs-redisfailover-pmem-7595857d4c-c69xg -- redis-cli -h 10.244.3.4 -p 6379 GET hello "world!" ``` @@ -143,7 +143,7 @@ spec: ``` The deployment can be done in the same way as mentioned before, i.e.: -``` +``` console $ kubectl create -f /path/to/pmem-lb.yaml redisfailover.databases.spotahome.com/redisfailover-pmem-lb created