I’ve recently begun playing with ceph’s rbd pool as a way to provide network block devices for libvirt guests managed through Pacemaker, having had success with drbd and iscsi. This post should be considered notes of my ongoing experiments, and not a hard-and-fast ‘howto’ for this concept. Nevertheless, it might be useful to someone!
Concepts
We’re using a ceph rbd (RADOS Block Device) pool to provide the storage for the VMs. If you’ve used iscsi before, this is a similar concept, but with a replicated, distributed backend for the data.
We’re using pacemaker with the VirtualDomain RA to manage libvirt (kvm) instances.
Hardware
You’ll need a minimum of 2 nodes to run the osd daemons (the object store - i.e. the data) and 3 or more nodes (always an odd number, to maintain quorum) for the mon daemons (ceph cluster monitoring). The ceph documentation gives suggested hardware requirements for these.
You’ll also need 2 (or more) nodes for VMs to allow live migration. CPU and memory are important here.
I started using 4 nodes - 2 disk servers, and 2 vm servers. 3 (or more) nodes run pacemaker (to allow quorum) and one (or more) vm server hosts the extra mon daemon. This was mostly due to the hardware I had to hand falling into 2 categories of ‘fast disk’ and ‘fast cpu/mem’ - but the pool can expand later as needed.
Ceph Configuration
Follow the ceph documentation on setting up a pool - using mkcephfs (or ceph-deploy) to get things going.
Make sure that the path you point your osds at is the mountpoint of an xfs or btrfs filesystem.
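With ceph.conf (below) in place on all nodes, bringing the cluster up for the first time is roughly a matter of running mkcephfs from one node - a sketch, assuming passwordless ssh between the nodes and the mkcephfs-era tooling:

```
# run once, from a node that has the finished /etc/ceph/ceph.conf
mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
```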
The /etc/ceph/ceph.conf for a cluster like this looks something like the following sketch (hostnames, addresses and osd data paths are placeholders - adapt them to your own nodes):
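```
[global]
        auth supported = cephx

[mon.a]
        host = disk1
        mon addr = 192.168.0.1:6789

[mon.b]
        host = disk2
        mon addr = 192.168.0.2:6789

[mon.c]
        host = vm1
        mon addr = 192.168.0.3:6789

[osd]
        osd journal size = 1000

[osd.0]
        host = disk1
        osd data = /srv/ceph/osd0

[osd.1]
        host = disk2
        osd data = /srv/ceph/osd1
```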
When you start ceph, run ceph -w on one of the mons and wait for things to settle - look out for HEALTH_OK in the status output, and check that the appropriate number of mons and osds are reported as running.
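For example, from any of the mon nodes:

```
ceph -w         # watch cluster activity until it settles
ceph health     # should report HEALTH_OK
ceph -s         # summary: mon quorum and the number of osds up/in
```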
Guest disk creation
You now need to create a disk in the pool for the guest to use. This is done from any of the mon nodes with the following command - replacing <size> with the disk size in megabytes, and <poolname> and <guestname> with the name of your rbd pool and guest vm. If --pool is omitted, it will default to the rbd pool:
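```
rbd create --size <size> --pool <poolname> <guestname>
```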
Note that rbd images are thinly provisioned - that is, no space is used until data is written to the image, and the size is only an upper limit. You can later resize a disk with something like:
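```
rbd resize --size <new size> --pool <poolname> <guestname>
```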
Full documentation on the rbd commands is available on the Ceph wiki.
By default, ceph pools have replication set to 2 - i.e. 2 copies of all data. If you are paranoid, you can increase this number, but be aware that this requires a corresponding increase in the number of osds, and will also incur a performance hit.
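If you do want a higher replication level, it can be set per pool - a sketch, using <poolname> as above:

```
ceph osd pool set <poolname> size 3
```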
Libvirt configuration
RBD support in qemu has been around for a while - definitely in the 0.15.1 releases. Some distributions don’t compile with it enabled - in which case you need to compile it yourself with the --enable-rbd configure option.
To see if your version has rbd support, try the following command (after having created the rbd image above). If rbd support is compiled in, qemu-img will report the image’s format and virtual size; if not, you’ll get an error about an unknown protocol:
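```
qemu-img info rbd:<poolname>/<guestname>
```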
You need to ensure that your various libvirt daemons can communicate for migration. You can use TLS (recommended, using certificates for auth), ssh, or tcp with no auth (good for testing, but insecure). See the libvirt Remote documentation for information. Note that many distros also need you to tweak the libvirtd startup options to include --listen (or -l) in /etc/default/libvirt-bin or /etc/sysconfig/libvirtd.
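A sketch of what that looks like (variable names are those used by the stock Debian and Red Hat packaging - check your distro’s defaults file):

```
# Debian/Ubuntu - /etc/default/libvirt-bin
libvirtd_opts="-d -l"

# RHEL/CentOS - /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
```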
Once communication is established, you need to create a libvirt XML configuration for each guest, and deploy it to all of the VM hosts. The important addition is a disk of type ‘network’ with source protocol ‘rbd’ and the name set to [poolname]/[guestname] from the rbd commands above.
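A sketch of the relevant disk element (the mon hostnames and the virtio target are assumptions - adjust to your pool, and add cephx auth details if you use them):

```
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='[poolname]/[guestname]'>
    <host name='mon1' port='6789'/>
    <host name='mon2' port='6789'/>
    <host name='mon3' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```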
Pacemaker
I am assuming here that you are familiar with the workings of pacemaker. At present, this guide only covers using pacemaker to manage libvirt - though there are resource agents available for monitoring ceph’s init daemons and for ‘mounting’ rbd images, written by the excellent people at Hastexo. You should also ensure that your OS automatically starts libvirtd, but does not automatically start any guests (e.g. via the libvirt-guests init.d script, libvirt autostart etc).
Be aware that if you are running pacemaker to monitor VirtualDomain guests AND ceph, you may need to put in place location rules to prevent ceph running on hosts with ‘VM hardware’ and libvirt running on hosts with ‘disk hardware’.
The relevant crm configuration snippet for each guest will look something like this (a sketch - the guest name, config path and timeouts are placeholders to adapt):
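```
primitive vm-guest1 ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/guest1.xml" \
               hypervisor="qemu:///system" \
               migration_transport="ssh" \
        meta allow-migrate="true" \
        op start timeout="120s" \
        op stop timeout="120s" \
        op monitor interval="10s" timeout="30s" \
        op migrate_to timeout="120s" \
        op migrate_from timeout="120s"
```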
If you are used to running with DRBD and iscsi, this might seem quite short - however since libvirt is handling all of the rbd access, much of the complexity disappears.