Ceph cluster installation (jewel on CentOS)
This tutorial has been prepared for the course "Cloud Storage Solutions" organized by INFN-BARI. Participants are divided into groups of 2 people.
Each group can access 4 nodes:
- 1 admin node: deploy, mon, mds, rgw
- 3 osd nodes
The hostnames of the nodes include the group number, e.g. participants of group 1
will use the following nodes:
- ceph-adm-1
- ceph-node1-1
- ceph-node2-1
- ceph-node3-1
In the following sections, the variable GN refers to the group number.
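For example, participants of group 1 can export the variable in their shell so that the commands below can be copy-pasted as they are:
export GN=1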
On each node:
- (skip this step if the nodes are registered in DNS) edit the file /etc/hosts:
x.x.x.x ceph-adm-$GN
y.y.y.y ceph-node1-$GN
z.z.z.z ceph-node2-$GN
w.w.w.w ceph-node3-$GN
- create the user ceph-deploy and set the password:
sudo useradd -d /home/ceph-deploy -m ceph-deploy
sudo passwd ceph-deploy
- provide full privileges to the user by adding the following lines to /etc/sudoers.d/ceph:
echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
echo "Defaults:ceph-deploy !requiretty" | sudo tee -a /etc/sudoers.d/ceph
Then change the permissions of the file:
sudo chmod 0440 /etc/sudoers.d/ceph
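A quick sanity check (a sketch; run it as a user that already has sudo rights): the inner sudo should print root without asking for the ceph-deploy password.
sudo -u ceph-deploy sudo whoami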
On the admin node:
- configure your admin node with password-less SSH access to each node running Ceph daemons (leave the passphrase empty). On your admin node ceph-adm-$GN, become the ceph-deploy user and generate the SSH key:
# su - ceph-deploy
$ ssh-keygen -t rsa
You will see output like this:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph-deploy/.ssh/id_rsa):
Created directory '/home/ceph-deploy/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph-deploy/.ssh/id_rsa.
Your public key has been saved in /home/ceph-deploy/.ssh/id_rsa.pub.
The key fingerprint is:
6a:c9:f5:26:c5:6d:35:70:b7:76:07:70:ac:97:78:a6 ceph-deploy@node-0
The key's randomart image is:
+--[ RSA 2048]----+
| oo+ .|
| +.o.|
| oo+o|
| . .o.*.o|
| S o o= |
| . + o .E |
| = . o |
| . o |
| |
+-----------------+
Copy the key to each cluster node and test the password-less access:
ssh-copy-id ceph-deploy@ceph-node1-$GN
ssh-copy-id ceph-deploy@ceph-node2-$GN
ssh-copy-id ceph-deploy@ceph-node3-$GN
If remote login via password is disabled, copy the content of the file /home/ceph-deploy/.ssh/id_rsa.pub on the admin node and paste it into the file /home/ceph-deploy/.ssh/authorized_keys on each of the other nodes (creating the .ssh directory if needed), as in the sketch below.
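A minimal sketch of this manual copy, to be run as ceph-deploy on each target node (the ssh-rsa string is a placeholder for the actual content of id_rsa.pub):
mkdir -p ~/.ssh
chmod 700 ~/.ssh
echo "ssh-rsa AAAA... ceph-deploy@ceph-adm-$GN" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
In either case, test the password-less access from the admin node, for example:
ssh ceph-deploy@ceph-node1-$GN hostname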
Set SELinux to Permissive:
sudo setenforce 0
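To keep SELinux in permissive mode after a reboot as well (optional; a sketch assuming the stock /etc/selinux/config layout):
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config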
On the admin node:
sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
Create the file /etc/yum.repos.d/ceph.repo:
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
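One way to create the file in a single step (a sketch using a heredoc; any text editor works just as well):
cat <<'EOF' | sudo tee /etc/yum.repos.d/ceph.repo
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
EOF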
Update your repository and install ceph-deploy:
sudo yum update && sudo yum install ceph-deploy
Create a directory on your admin node for maintaining the configuration files and keys:
mkdir cluster-ceph
cd cluster-ceph
Create the cluster:
ceph-deploy new ceph-adm-$GN
Add the following line to the [global] section of ceph.conf:
osd pool default size = 3
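After the edit, the [global] section should look roughly like this (a sketch: fsid, mon_initial_members and mon_host are generated by ceph-deploy new and will differ in your environment; the values below are taken from the group 0 examples later in this page):
[global]
fsid = 72766f71-c301-4352-8389-2fdfdee8333a
mon_initial_members = ceph-adm-0
mon_host = 90.147.102.33
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3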
Install ceph:
ceph-deploy install --release jewel ceph-adm-$GN ceph-node1-$GN ceph-node2-$GN ceph-node3-$GN
Add the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial
Once the monitor is up and the keys have been gathered, the working directory will contain the generated keyrings:
ceph-deploy@ceph-adm-0:~/cluster-ceph$ ll
total 204
drwxrwxr-x 2 ceph-deploy ceph-deploy 4096 Sep 28 09:44 ./
drwxr-xr-x 4 ceph-deploy ceph-deploy 4096 Sep 28 09:20 ../
-rw------- 1 ceph-deploy ceph-deploy 71 Sep 28 09:44 ceph.bootstrap-mds.keyring
-rw------- 1 ceph-deploy ceph-deploy 71 Sep 28 09:44 ceph.bootstrap-osd.keyring
-rw------- 1 ceph-deploy ceph-deploy 71 Sep 28 09:44 ceph.bootstrap-rgw.keyring
-rw------- 1 ceph-deploy ceph-deploy 63 Sep 28 09:44 ceph.client.admin.keyring
-rw-rw-r-- 1 ceph-deploy ceph-deploy 227 Sep 28 09:20 ceph.conf
-rw-rw-r-- 1 ceph-deploy ceph-deploy 164928 Sep 28 09:44 ceph-deploy-ceph.log
-rw------- 1 ceph-deploy ceph-deploy 73 Sep 28 09:19 ceph.mon.keyring
-rw-r--r-- 1 root root 1645 Oct 15 2015 release.asc
Check the disks that will be used for the OSDs:
ceph-deploy disk list ceph-node1-$GN
ceph-deploy disk list ceph-node2-$GN
ceph-deploy disk list ceph-node3-$GN
Create the OSDs on ceph-node1, ceph-node2 and ceph-node3 (vdb device):
ceph-deploy osd create ceph-node1-$GN:vdb ceph-node2-$GN:vdb ceph-node3-$GN:vdb
Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command:
ceph-deploy admin ceph-adm-$GN ceph-node1-$GN ceph-node2-$GN ceph-node3-$GN
Ensure that you have the correct permissions for the ceph.client.admin.keyring.
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Now you can check the status of your cluster using the following command:
ceph -s
The output should look like this:
ceph-deploy@ceph-adm-0:~/cluster-ceph$ ceph -s
cluster 72766f71-c301-4352-8389-2fdfdee8333a
health HEALTH_OK
monmap e1: 1 mons at {ceph-adm-0=90.147.102.33:6789/0}
election epoch 3, quorum 0 ceph-adm-0
osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v46: 64 pgs, 1 pools, 0 bytes data, 0 objects
100 MB used, 76658 MB / 76759 MB avail
64 active+clean
Check the cluster's hierarchy:
ceph osd tree
Example output:
ceph-deploy@ceph-adm-0:~/cluster-ceph$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.07320 root default
-2 0.02440 host ceph-node1-0
0 0.02440 osd.0 up 1.00000 1.00000
-3 0.02440 host ceph-node2-0
1 0.02440 osd.1 up 1.00000 1.00000
-4 0.02440 host ceph-node3-0
2 0.02440 osd.2 up 1.00000 1.00000
Get information about the pools:
ceph osd dump |grep pool
Example output:
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
Add the metadata server:
ceph-deploy mds create ceph-adm-$GN
Check the mds status:
ceph mds stat
You should see something like this:
ceph-deploy@ceph-adm-0:~/cluster-ceph$ ceph mds stat
e2:, 1 up:standby
A Ceph filesystem requires at least two RADOS pools, one for data and one for metadata. Create them as follows:
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64
Once the pools are created, you may enable the filesystem using the fs new command:
ceph fs new cephfs cephfs_metadata cephfs_data
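Optionally, list the filesystems to confirm that cephfs has been created with the expected pools:
ceph fs ls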
Once a filesystem has been created, the MDS will be able to enter an active state. Check its status again:
ceph mds stat
You will see the mds in active state like this:
ceph-deploy@ceph-adm-0:~/cluster-ceph$ ceph mds stat
e5: 1/1/1 up {0=ceph-adm-0=up:active}
Add the RADOS GW:
ceph-deploy rgw create ceph-adm-$GN
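To check that the gateway answers (a sketch, assuming the default civetweb port 7480 used by jewel), an anonymous request should return a short XML ListAllMyBucketsResult document:
curl http://ceph-adm-$GN:7480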
Try to store a file in the "data" pool (create the pool first if it does not exist) using a command like this:
rados put {object-name} {file-path} --pool=data
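For example, with hypothetical names (test-object and testfile.txt are arbitrary; the pool is created with 64 placement groups, as for the CephFS pools above):
echo "hello ceph" > testfile.txt
ceph osd pool create data 64
rados put test-object testfile.txt --pool=data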
Run this command to check whether the file has been stored in the pool "data":
rados ls -p data
You can identify the object location with:
ceph osd map {pool-name} {object-name}
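With the hypothetical names used above the command becomes the following; the output reports the placement group and the set of OSDs that hold the object:
ceph osd map data test-object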
We will use the rbd pool, which was created during the cluster installation.
Create a new image of size 1 GB:
rbd create --size 1024 testimg
List the rbd images:
rbd ls
Show detailed information on a specific image:
rbd info testimg
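A couple of additional operations you may want to try on the image (a sketch; testimg is the image created above):
rbd resize --size 2048 testimg    # grow the image to 2 GB
rbd rm testimg                    # delete the image when you are done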
To mount CephFS with FUSE, first install ceph-fuse with the command
sudo yum install ceph-fuse
and then mount it, running:
sudo ceph-fuse -m {monitor_hostname}:6789 {mount_point_path}
Note 1: you can get the IP of one of the Ceph monitors with:
ceph mon stat
Note 2: the mount_point_path must exist before you can mount the Ceph filesystem.
In our case the mount point is the directory /ceph-fs, which we create with:
sudo mkdir /ceph-fs
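Putting it together for this tutorial (a sketch, assuming the monitor runs on the admin node and that /etc/ceph already contains the configuration and admin keyring pushed earlier with ceph-deploy admin):
sudo ceph-fuse -m ceph-adm-$GN:6789 /ceph-fs
df -h /ceph-fs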