For this {productname} configuration, we create a three-node Ceph cluster, with several other supporting nodes, as follows:
- ceph01, ceph02, and ceph03 - Ceph Monitor, Ceph Manager, and Ceph OSD nodes
- ceph04 - Ceph RGW node
- ceph05 - Ceph Ansible administration node
For details on installing Ceph nodes, see Installing Red Hat Ceph Storage on Red Hat Enterprise Linux.
Once you have set up the Ceph storage cluster, create a Ceph Object Gateway (also referred to as a RADOS gateway). See Installing the Ceph Object Gateway for details.
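The host names used throughout this procedure must resolve from every node, either through DNS or through each node's /etc/hosts file. As a minimal sketch, an /etc/hosts entry set might look like the following (the IP addresses shown are placeholders consistent with the example public network used later; substitute the addresses of your own nodes):
192.168.122.81   ceph01
192.168.122.82   ceph02
192.168.122.83   ceph03
192.168.122.84   ceph04
192.168.122.85   ceph05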
On ceph01, ceph02, ceph03, ceph04, and ceph05, do the following:
- Review prerequisites for setting up Ceph nodes in Requirements for Installing Red Hat Ceph Storage. In particular:
  - Decide if you want to use RAID controllers on OSD nodes.
  - Decide if you want a separate cluster network for your Ceph Network Configuration (see the example following this list).
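If you decide to use a separate cluster network, it is declared later in the group_vars/all.yml file alongside the public network. A minimal sketch, assuming a second NIC on a 192.168.123.0/24 subnet (both the subnet and the decision to use a cluster network are examples, not requirements of this procedure):
public_network: 192.168.122.0/24
cluster_network: 192.168.123.0/24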
- Prepare OSD storage (ceph01, ceph02, and ceph03 only). Set up the OSD storage on the three OSD nodes (ceph01, ceph02, and ceph03). See OSD Ansible Settings in Table 3.2 for details on supported storage types that you will enter into your Ansible configuration later. For this example, a single, unformatted block device (/dev/sdb), separate from the operating system, is configured on each of the OSD nodes. If you are installing on metal, you might want to add an extra hard drive to the machine for this purpose.
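If you want to confirm that the device is present and carries no existing file system or partition signatures before handing it to Ceph, you can check it on each OSD node (/dev/sdb is the example device used throughout this procedure; verify you have the right disk before erasing anything):
# lsblk /dev/sdb
# wipefs /dev/sdb          # lists any existing signatures; wipefs -a /dev/sdb would erase them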
- Install Red Hat Enterprise Linux Server edition, as described in the RHEL 7 Installation Guide.
- Register and subscribe each Ceph node as described in Registering Red Hat Ceph Storage Nodes. Here is how to subscribe to the necessary repositories:
# subscription-manager repos --disable=*
# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-3-mon-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-3-osd-rpms
# subscription-manager repos --enable=rhel-7-server-rhceph-3-tools-rpms
- Create an Ansible user with root privilege on each node. Choose any name you like. For example:
# USER_NAME=ansibleadmin
# useradd $USER_NAME -c "Ansible administrator"
# passwd $USER_NAME
New password: *********
Retype new password: *********
# cat << EOF > /etc/sudoers.d/$USER_NAME
$USER_NAME ALL = (root) NOPASSWD:ALL
EOF
# chmod 0440 /etc/sudoers.d/$USER_NAME
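To confirm that passwordless sudo works for the new user, you can run the following quick check on each node; it should print root without prompting for a password:
# su - $USER_NAME -c 'sudo whoami'
root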
Log into the Ceph Ansible node (ceph05) and configure it as follows. You will need the ceph01, ceph02, and ceph03 nodes to be running to complete these steps.
- In the Ansible user's home directory, create a directory to store temporary values created by the ceph-ansible playbook:
# USER_NAME=ansibleadmin
# sudo su - $USER_NAME
[ansibleadmin@ceph05 ~]$ mkdir ~/ceph-ansible-keys
- Enable password-less ssh for the Ansible user. Run ssh-keygen on ceph05 (leave the passphrase empty), then run ssh-copy-id for each of the ceph01, ceph02, and ceph03 systems to copy the public key to the Ansible user on those nodes:
# USER_NAME=ansibleadmin
# sudo su - $USER_NAME
[ansibleadmin@ceph05 ~]$ ssh-keygen
[ansibleadmin@ceph05 ~]$ ssh-copy-id $USER_NAME@ceph01
[ansibleadmin@ceph05 ~]$ ssh-copy-id $USER_NAME@ceph02
[ansibleadmin@ceph05 ~]$ ssh-copy-id $USER_NAME@ceph03
[ansibleadmin@ceph05 ~]$ exit
#
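If you want to confirm key-based login before running Ansible, you can try a non-interactive command against one of the nodes as the Ansible user; it should return the remote host name without prompting for a password:
[ansibleadmin@ceph05 ~]$ ssh ceph01 hostname
ceph01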
- Install the ceph-ansible package:
# yum install ceph-ansible
- Create a symbolic link between these two directories:
# ln -s /usr/share/ceph-ansible/group_vars \
    /etc/ansible/group_vars
- Create copies of Ceph sample yml files to modify:
# cd /usr/share/ceph-ansible
# cp group_vars/all.yml.sample group_vars/all.yml
# cp group_vars/osds.yml.sample group_vars/osds.yml
# cp site.yml.sample site.yml
- Edit the copied group_vars/all.yml file. See General Ansible Settings in Table 3.1 for details. For example:
ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_version: 3
monitor_interface: eth0
public_network: 192.168.122.0/24
Note that your network device and address range may differ.
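If you are not sure which interface name and subnet to use for monitor_interface and public_network, one way to check is to inspect the addressing on one of the monitor nodes first:
# ip -4 addr show          # note the interface name (for example, eth0) and its network prefix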
- Edit the copied group_vars/osds.yml file. See the OSD Ansible Settings in Table 3.2 for details. In this example, the second disk device (/dev/sdb) on each OSD node is used for both data and journal storage:
osd_scenario: collocated
devices:
  - /dev/sdb
dmcrypt: true
osd_auto_discovery: false
- Edit the /etc/ansible/hosts inventory file to identify the Ceph nodes as Ceph monitor, OSD, and manager nodes. In this example, the storage devices are identified on each node as well:
[mons]
ceph01
ceph02
ceph03

[osds]
ceph01 devices="[ '/dev/sdb' ]"
ceph02 devices="[ '/dev/sdb' ]"
ceph03 devices="[ '/dev/sdb' ]"

[mgrs]
ceph01 devices="[ '/dev/sdb' ]"
ceph02 devices="[ '/dev/sdb' ]"
ceph03 devices="[ '/dev/sdb' ]"
- Add this line to the /etc/ansible/ansible.cfg file, to save the output from each Ansible playbook run into your Ansible user's home directory:
retry_files_save_path = ~/
- As your Ansible user, check that Ansible can reach all the Ceph nodes you configured:
# USER_NAME=ansibleadmin
# sudo su - $USER_NAME
[ansibleadmin@ceph05 ~]$ ansible all -m ping
ceph01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
ceph02 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
ceph03 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
[ansibleadmin@ceph05 ~]$
- Run the ceph-ansible playbook (as your Ansible user):
[ansibleadmin@ceph05 ~]$ cd /usr/share/ceph-ansible/
[ansibleadmin@ceph05 ~]$ ansible-playbook site.yml
At this point, the Ansible playbook will check your Ceph nodes and configure them for the services you requested. If anything fails, make needed corrections and rerun the command.
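Because retry_files_save_path was set earlier, a failed run should leave a retry file (typically site.retry) in the Ansible user's home directory. Assuming that file exists, one way to rerun the playbook against only the hosts that failed is:
[ansibleadmin@ceph05 ~]$ cd /usr/share/ceph-ansible/
[ansibleadmin@ceph05 ~]$ ansible-playbook site.yml --limit @~/site.retry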
- Log into one of the three Ceph nodes (ceph01, ceph02, or ceph03) and check the health of the Ceph cluster:
# ceph health
HEALTH_OK
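For a more detailed view than the one-line health summary, you can also display the overall cluster status and the OSD layout; for this configuration the output should show three monitors, an active manager, and three OSDs up and in:
# ceph -s
# ceph osd tree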
- On the same node, verify that you can write and read data in the Ceph cluster by using rados to store and retrieve an object in a test pool:
# ceph osd pool create test 8
# echo 'Hello World!' > hello-world.txt
# rados --pool test put hello-world hello-world.txt
# rados --pool test get hello-world fetch.txt
# cat fetch.txt
Hello World!
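If you want to remove the test pool after verifying the cluster, you can delete it. Depending on your monitor configuration, you may first need to allow pool deletion; both commands are optional cleanup and are not required for the rest of this procedure:
# ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
# ceph osd pool delete test test --yes-i-really-really-mean-it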
On the Ansible system (ceph05), configure a Ceph Object Gateway for your Ceph Storage cluster (the gateway itself will ultimately run on ceph04). See Installing the Ceph Object Gateway for details.
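As a rough sketch of what the linked procedure covers with ceph-ansible, this amounts to adding ceph04 to an [rgws] group in the inventory, setting the gateway interface in group_vars/all.yml, and rerunning the playbook; the interface name below is an example, and the linked guide remains the authoritative reference:
# addition to /etc/ansible/hosts
[rgws]
ceph04

# addition to group_vars/all.yml
radosgw_interface: eth0

# then, as the Ansible user:
[ansibleadmin@ceph05 ~]$ cd /usr/share/ceph-ansible/
[ansibleadmin@ceph05 ~]$ ansible-playbook site.yml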