I’ve been eyeing the Ceph storage system for a while now for a number of potential uses, and it looks like it’s finally ready for production use.
It is a pity that the CephFS component isn’t recommended for production use yet. Still, Ceph is really going to shake up the distributed storage industry. Having used GPFS, Lustre and a few other systems in production in the past, I find Ceph certainly looks attractive.
I’ve been happily using tahoe-lafs for my backup needs at work for a while now. It’s not a huge cluster, just a small grid cobbled together from workstations dotted around the lab.
The project recently did a release that fixes a number of bugs and issues, which has made the software much more pleasant to use. I’ve also been happily using git-annex to manage the files that reside in tahoe-lafs.
Having happily run both git-annex and tahoe-lafs for the past year or so to manage my files and backups, I’ve been thinking about plugging in tahoe-lafs as a backend driver for iRODS. I never quite got around to doing it properly; I had only gotten the universal mass storage driver to talk to tahoe-lafs. I was planning on writing an MSO driver for iRODS to talk to tahoe-lafs’s web-API, but alas I never got around to it.
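For the curious, the web-API in question is plain HTTP, so a driver mostly boils down to a few requests. A minimal sketch of an upload, assuming a local Tahoe-LAFS gateway listening on its default web-API port (3456) and a hypothetical file name:

```shell
# Upload a file through a local Tahoe-LAFS gateway (assumed to be
# listening on the default web-API port, 3456); Tahoe replies with
# the capability string for the stored file.
curl -X PUT --data-binary @backup.tar.gz http://127.0.0.1:3456/uri

# The returned cap is then all you need to read the file back later:
#   curl http://127.0.0.1:3456/uri/<the-returned-cap>
```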
One of the things that I don’t like about git is that the interface can at times be too geared towards power users. One command that I particularly like in mercurial is hg serve, which is incredibly easy to use.
To do the equivalent of hg serve in git, you can do the following:

user$ cd my-repo
user$ git daemon --export-all --base-path=$(pwd)

The above assumes you are on a trusted network, that you want to share your repo with people on your LAN, and that you are in the top-level directory of the repository that you want to temporarily share.
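Anyone else on the LAN can then clone over the git protocol; the hostname below is just a placeholder for whatever your machine is called:

```shell
# Clone the temporarily shared repository from a colleague's machine
# ("my-workstation.local" is a placeholder hostname):
git clone git://my-workstation.local/my-repo
```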
I’ve been following the new git-annex assistant that Joey Hess has been working on over the past few weeks. Even in this early state of development, it is slowly becoming more usable and accessible for less technical users.
As soon as the issues with the limitations of kqueue and OSX’s silly limits are resolved (without the need for a user to run sysctl -w WHATEVER), it will be pretty cool.
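For reference, these are the kind of kernel knobs involved; a sketch assuming a stock OSX install (the exact keys and values worth tuning are the part I’m hand-waving over):

```shell
# Inspect OSX's kernel file-descriptor limits, which kqueue-based
# file watching runs into on large repositories (key names from a
# stock OSX install; appropriate values vary by release):
sysctl kern.maxfiles kern.maxfilesperproc

# Raising a limit temporarily needs root, e.g.:
#   sudo sysctl -w kern.maxfilesperproc=65536
```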
Feeling lucky and like living on the edge, I’ve recently decided to try a preview of the 2.1 branch of Octopress. So far Octopress has been pretty good; it’s been a refreshing change from the usual ikiwiki setup that I prefer. This experiment has worked out not too badly for me, given how much I like ikiwiki for most, if not all, of my other little websites and documentation sites.
The Top 500 List is out! It’s no surprise that LLNL and IBM have reclaimed the top spot. There is some pretty cool tech going to come out of these systems, which the HPC community should be rubbing their hands over.
Slurm development will be moving along for these big systems, as will Lustre and ZFS on Linux.
I was waiting for my backups to finish, hence this post. I was using git-annex to manage my files, and I decided I needed to have git-annex on an SL5-based machine. SL5 is just an open-source clone/recompile of RHEL5.
I haven’t tried to install the newer versions of the Haskell Platform and GHC on SL5 in a while to install git-annex. But the last time I checked, when GHC 7 was out, it was a pain to install GHC on SL5.
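For what it’s worth, once a working GHC and cabal-install are in place (which is exactly the painful part on SL5), git-annex itself is just a Hackage package away. A sketch, assuming those prerequisites:

```shell
# Build git-annex from Hackage; this assumes GHC and cabal-install
# are already working, which is the hard part on an SL5 box.
cabal update
cabal install git-annex
```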
I’ve just uploaded the work that Matthias and Quirijn did over the past few months to the npm registry:
http://search.npmjs.org/#/dri-api
http://search.npmjs.org/#/dris-workflows
http://search.npmjs.org/#/dri
http://search.npmjs.org/#/fedora

I haven’t tried installing them from the npm repository yet, but I assume that they will work. There’s still some cosmetic work to be done (they need to be branded correctly), as well as lots of error checking, validation and testing. It’s not a bad place to start looking for ideas if you are interested in building your own system to interface with Fedora Commons.
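Installing one of them should, in theory, just be the usual npm incantation; as I said, I haven’t verified this myself yet:

```shell
# Fetch one of the packages from the npm registry
# (untested by me as of this post):
npm install dri-api
```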
The event was actually on 2012-06-06 and was invite-only. It was mostly about the infrastructure of Facebook, and it was done quite tastefully, with a main speaker giving an overview of the whole system. Pleasantly, they did not turn it into a recruitment event.
Unfortunately, the one poster that I had actually wanted to see, and whose engineers I had wanted to chat to, wasn’t there. Still, there were enough engineers around for me to harass with questions.