For some time now I've wanted to set up Ceph, and I finally got the time to do it. This setup was done on Ubuntu 16.04.
What is Ceph
Ceph is a storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object, block and file-level storage.
- Object Storage:
Ceph provides seamless access to objects via native language bindings or via the REST interface (RadosGW), which is also compatible with applications written for S3 and Swift.
- Block Storage:
Ceph’s Rados Block Device (RBD) provides access to block device images that are replicated and striped across the storage cluster.
- File System:
Ceph provides a network file system (CephFS) that aims for high performance.
Our Setup
We will have 4 nodes: 1 admin node from which we will deploy our cluster, and 3 nodes that will hold the data:
- ceph-admin (10.0.8.2)
- ceph-node1 (10.0.8.3)
- ceph-node2 (10.0.8.4)
- ceph-node3 (10.0.8.5)
Host Entries
If you don't have DNS for your servers, set up the /etc/hosts file so that the names resolve to the IP addresses:
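The entries would look something like this, using the names and addresses listed above:

```
10.0.8.2    ceph-admin
10.0.8.3    ceph-node1
10.0.8.4    ceph-node2
10.0.8.5    ceph-node3
```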
User Accounts and Passwordless SSH
Set up the ceph-system user accounts on all the servers:
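On each server, something along these lines:

```bash
sudo useradd -m -s /bin/bash ceph-system
sudo passwd ceph-system
```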
Add the created user to the sudoers configuration so that it can issue sudo commands without a password:
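For example, via a drop-in file under /etc/sudoers.d:

```bash
echo "ceph-system ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-system
sudo chmod 0440 /etc/sudoers.d/ceph-system
```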
Switch to the ceph-system user, generate SSH keys, and copy the keys from the ceph-admin server to the ceph nodes:
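Roughly, on ceph-admin:

```bash
su - ceph-system
ssh-keygen -t rsa
ssh-copy-id ceph-system@ceph-node1
ssh-copy-id ceph-system@ceph-node2
ssh-copy-id ceph-system@ceph-node3
```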
Prerequisite Software:
Install Python and Ceph Deploy on each node:
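Something like:

```bash
sudo apt-get update
sudo apt-get install -y python ceph-deploy
```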
Note: Please skip this section if you have additional disks on your servers.
The instances that I'm using to test this setup only have one disk, so I will be creating loop block devices backed by allocated files. This is not recommended, as when the disk fails, all the files/block device images will be gone with it. But since I'm only demonstrating, I will create the block devices from a file:
I will be creating a 12GB file on each node:
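The path below is illustrative; any filesystem with 12GB of free space will do:

```bash
sudo dd if=/dev/zero of=/ceph-disk.img bs=1M count=12288
```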
Then use losetup to create the loop0 block device:
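For example:

```bash
sudo losetup /dev/loop0 /ceph-disk.img
```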
As you can see, the loop device shows up when listing the block devices:
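For instance:

```bash
lsblk
# loop0 should now appear as a 12G block device
```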
Install Ceph
Now let's install Ceph on all our nodes using ceph-deploy:
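Run as the ceph-system user from a working directory on ceph-admin (the directory name is illustrative):

```bash
mkdir ~/ceph-cluster && cd ~/ceph-cluster
ceph-deploy install ceph-admin ceph-node1 ceph-node2 ceph-node3
```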
The version I was running at the time can be checked with:
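```bash
ceph --version
# prints the installed Ceph release
```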
Initialize Ceph
Initialize the Cluster with 3 Monitors:
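Still from the working directory on ceph-admin:

```bash
ceph-deploy new ceph-node1 ceph-node2 ceph-node3
```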
Add the initial monitors and gather the keys from the previous command:
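For example:

```bash
ceph-deploy mon create-initial
```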
At this point, we should be able to scan the block devices on our nodes:
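Something like:

```bash
ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
```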
Prepare the Disks:
First we will zap the block devices and then prepare them, which creates the partitions:
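Roughly, using the host:device syntax of the Jewel-era ceph-deploy:

```bash
ceph-deploy disk zap ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
ceph-deploy osd prepare ceph-node1:/dev/loop0 ceph-node2:/dev/loop0 ceph-node3:/dev/loop0
```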
When you scan the nodes for their disks, you will notice that the partitions have been created:
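For instance:

```bash
ceph-deploy disk list ceph-node1 ceph-node2 ceph-node3
# each loop0 device should now carry a data partition (loop0p1) and a journal partition (loop0p2)
```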
Now let's activate the OSDs using the data partitions:
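Along these lines:

```bash
ceph-deploy osd activate ceph-node1:/dev/loop0p1 ceph-node2:/dev/loop0p1 ceph-node3:/dev/loop0p1
```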
Redistribute Keys:
Copy the configuration files and admin key to your admin node and ceph data nodes:
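For example:

```bash
ceph-deploy admin ceph-admin ceph-node1 ceph-node2 ceph-node3
```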
If you would like to add more OSDs (not tested):
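A sketch of how that could look, assuming a second backing file and loop device on the node in question:

```bash
sudo dd if=/dev/zero of=/ceph-disk2.img bs=1M count=12288   # on the data node
sudo losetup /dev/loop1 /ceph-disk2.img                     # on the data node
ceph-deploy disk zap ceph-node1:/dev/loop1                  # from ceph-admin
ceph-deploy osd create ceph-node1:/dev/loop1                # prepare + activate in one step
```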
Ceph Status:
Have a look at your cluster status:
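For example:

```bash
sudo ceph -s
# with all three OSDs up and in, the health should settle to HEALTH_OK
```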
Everything looks good. Also change the permissions on the admin keyring on all the nodes, so that you can run the ceph and rados commands:
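For instance:

```bash
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
```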
Storage Pools:
List the pools in your Ceph cluster:
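For example:

```bash
rados lspools
# a Jewel-era cluster comes with a default pool named rbd
```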
Let's create a new storage pool called mypool:
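The placement group count below is just a reasonable value for a small test cluster:

```bash
ceph osd pool create mypool 32 32
```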
Let's list the storage pools again:
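For instance:

```bash
rados lspools
```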
You can also use the ceph command to list the pools:
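For example:

```bash
ceph osd lspools
```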
Create a Block Device Image:
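The image name below is illustrative; a 1GB initial size matches the resize step further down, and limiting the features to layering keeps the image mappable by the stock Ubuntu 16.04 kernel client:

```bash
rbd create mypool/disk1 --size 1024 --image-feature layering
```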
List the Block Device Images under your Pool:
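Something like:

```bash
rbd list mypool
```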
Retrieve information from your image:
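For example:

```bash
rbd info mypool/disk1
```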
Create a local mapping of the image to a block device:
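Roughly:

```bash
sudo rbd map mypool/disk1
sudo rbd showmapped
```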
Now we have a block device available at /dev/rbd0. Go ahead and mount it to /mnt:
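Assuming an ext4 filesystem (the resize2fs step later on implies ext4):

```bash
sudo mkfs.ext4 /dev/rbd0
sudo mount /dev/rbd0 /mnt
```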
We can then see it when we list our mounted disk partitions:
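For instance:

```bash
df -h /mnt
```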
We can also resize the disk on the fly, let’s resize it from 1GB to 2GB:
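Something like:

```bash
rbd resize mypool/disk1 --size 2048
```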
To grow the space we can use resize2fs for ext4 partitions and xfs_growfs for xfs partitions:
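In our ext4 case:

```bash
sudo resize2fs /dev/rbd0
# for an XFS filesystem the equivalent would be: sudo xfs_growfs /mnt
```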
When we look at our mounted partitions again, you will notice that the size of our mounted partition has increased:
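For example:

```bash
df -h /mnt
```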
Object Storage RadosGW
Let’s create a new pool where we will store our objects:
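The pool name and placement group count below are illustrative:

```bash
ceph osd pool create storage-pool 32 32
```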
We will now create a local file, push the file to our object storage service, then delete the local file, download it again under a different name, and read the contents:
Create the local file:
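The filename and contents are just examples:

```bash
echo "hello ceph object storage" > /tmp/testfile.txt
```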
Push the local file to our pool in our object storage:
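Using the illustrative names from above:

```bash
rados -p storage-pool put testfile /tmp/testfile.txt
```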
List the pool (note that this can be executed from any node):
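For example:

```bash
rados -p storage-pool ls
```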
Delete the local file, download the file from our object storage and read the contents:
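Along these lines:

```bash
rm /tmp/testfile.txt
rados -p storage-pool get testfile /tmp/testfile-copy.txt
cat /tmp/testfile-copy.txt
```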
View the disk space from our storage-pool:
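For instance:

```bash
rados df
```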