There’s a new release of Ceph. I hope they put out a stable release soon so we can do further evaluations of the Ceph storage system. A few of my work colleagues are going to the Ceph workshop next week.
I’m wondering if anyone has taken the CRUSH algorithm and used it in other domains.
The latest development branch of Ceph is out with some rather nice-looking features; probably the most useful are the RPM builds for those that run RHEL6-like systems.
Still no real sign of backported kernel modules :P Also, some of the guys in work here just deployed a ~200TB Ceph installation, and I’ve access to a 10TB RBD on it for doing backups.
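For the curious, putting a filesystem on an RBD for backups is fairly painless once the kernel rbd module is available; here’s a rough sketch (the image name and mount point are made up for illustration, and the device path depends on your udev rules):

    # create a 10TB image in the default rbd pool (size is in MB),
    # map it with the kernel driver, then treat it as a normal block device
    rbd create backups --size 10485760
    rbd map backups
    mkfs.ext4 /dev/rbd/rbd/backups      # may appear as /dev/rbd0 instead
    mount /dev/rbd/rbd/backups /mnt/backups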
Given that I have a number of old 64-bit-capable desktop machines and a collection of hard drives at home, I could have run Tahoe-LAFS like I do in work for backup purposes. In fact, Tahoe works quite well for the technically capable user.
Recently I’ve decided that I need a more central location at home to store my photo collection (I love to take photos with my Canon DSLR and Panasonic LX5).
There’s a new stable release of Ceph, Argonaut, though I seem to be having better luck playing with the development releases of Ceph.
Oh, how I wish there was a backport of the kernel ceph and rbd drivers for RHEL6. I have a dodgy repo with some reverted commits that one of the guys in work told me about; it seems to run, but it isn’t great. It can be found at https://github.
I’ve learnt how to remove and add monitors, metadata servers and data servers (mons, MDSs and OSDs) on my small two-node Ceph cluster. I want to say that it wasn’t too hard to do; the Ceph website does have documentation for this.
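As a rough sketch of what’s involved, removing an OSD goes something like this (osd.2 is just an example, and it’s worth checking ceph health between steps):

    # take the OSD out so its data gets re-replicated elsewhere,
    # then stop the daemon and remove all traces of it from the cluster
    ceph osd out 2
    /etc/init.d/ceph stop osd.2
    ceph osd crush remove osd.2
    ceph auth del osd.2
    ceph osd rm 2
    # monitors are similar: stop the daemon, then "ceph mon remove <name>"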
As the default CRUSH map replicates across OSDs, I wanted to try replicating data across hosts just to see what would happen. In a real-world scenario I would probably treat individual hosts in a rack as the failure unit, and if I had more than one rack of storage I would want to treat each rack as the minimum unit.
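The way to do that, as far as I can tell, is to pull out the CRUSH map, change the replication rule to choose leaves of type host rather than osd, and push it back in; something like the following (file names are arbitrary):

    # dump the current CRUSH map and decompile it to text
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit the data rule so it chooses leaves of type host, e.g.
    #   step chooseleaf firstn 0 type host
    # rather than type osd, then recompile and load it back
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new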
After my last failed attempt at Installing Ceph on SL6 (or rather, after my attempt at configuring Ceph for a test failed miserably), I haven’t been deterred from testing further.
As a result I set up a number of Vagrant virtual machines and put together a few Puppet scripts to provision the machines.
Here’s a sample manifest for Puppet to automate the deployment of a machine to build Ceph. It requires that your SL6 environment has at least the EPEL repository enabled.
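By hand, what the manifest has to arrange boils down to roughly the following; the package list is a guess at the usual build dependencies rather than an exact set.

    # enable EPEL first (grab the epel-release RPM for EL6 from a mirror
    # of your choice), then install general build tooling and the source
    yum install -y git rpm-build gcc-c++ make autoconf automake libtool \
        libuuid-devel libedit-devel fuse-devel keyutils-libs-devel
    git clone https://github.com/ceph/ceph.git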
As a Friday-afternoon exercise in not starting anything big before heading off to a conference, I’ve decided to spend an hour or two seeing how Ceph is installed and configured on an SL6-based machine (an RHEL6 clone).
The install target is just an old desktop running a 64-bit install of SL6.x, so it’s nothing too fancy.
Following the instructions at http://ceph.com/wiki/Installing_on_RedHat_or_CentOS, I ended up doing this
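Roughly speaking, that amounts to adding a yum repository for the upstream Ceph RPMs and installing the packages; the snippet below is a sketch rather than my exact commands, and the baseurl is a guess rather than gospel.

    # /etc/yum.repos.d/ceph.repo -- point yum at the upstream Ceph RPMs
    # (check ceph.com for the current location of the el6 repository;
    # gpg checking is left off for brevity in this sketch)
    [ceph]
    name=Ceph upstream packages
    baseurl=http://ceph.com/rpms/el6/x86_64/
    enabled=1
    gpgcheck=0

    # then install the packages
    yum install -y ceph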
I’ve been eyeing the Ceph storage system for a while for a number of potential uses, and it looks like it’s finally ready for production usage.
It is a pity that the cephfs component isn’t recommended for production usage yet. Still, Ceph is really going to shake up the distributed storage industry. Having used GPFS, Lustre and a few other systems in production in the past, I find Ceph very attractive.
I’ve been keeping an eye on Ceph for quite some time now, and it looks like it is almost ready for production usage. There is now a support infrastructure and a commercial company backing the product. So far the OSD, MDS and MON components all run on Linux; since most of the implementation appears to be in userland, I wonder if it will be ported to non-Linux platforms. It’d be quite nifty if one could have a heterogeneous network of servers providing storage.