This post is a continuation of the previous post.
This time I will write up what I've done to use Ceph storage-backed volumes with DC/OS and Rexray.
I considered using the rexray/rbd plugin, but I find it more flexible to talk to Ceph via its S3 interface. If you would like to go the RBD way, consider this blog post instead. If you don't have Ceph, give Minio a go; it's easy to set up Minio in DC/OS.
I wanted to use the rexray/s3fs Docker managed module / plugin, the same way I did for EFS, but at the moment it doesn't support setting a custom endpoint (it only allows AWS S3, so no Minio for example). So I am using the rexray binary / service instead.
I have followed this gist and tuned the setup to match my needs.
Here are the steps:
1. Upgrade rexray to 0.10 or newer (install it to the default location and replace the one shipped with DC/OS):
curl -sSL https://dl.bintray.com/emccode/rexray/install | sh -s -- stable 0.10.2
service dcos-rexray stop
cp $(which rexray) $(readlink /opt/mesosphere/bin/rexray)
service dcos-rexray start
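To confirm the swap took effect, a quick sanity check (the exact version output will of course differ per install):

```
rexray version
service dcos-rexray status
```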
2. Install s3fs, which is a dependency here:
apt install s3fs
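rexray shells out to the s3fs binary for the actual FUSE mounts, so it's worth checking that the binary is visible to the user running the service:

```
which s3fs
s3fs --version
```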
3. Configure rexray
My Chef template for that is this:
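The rendered /etc/rexray/config.yml ends up looking roughly like the sketch below. The endpoint URL and credentials are placeholders for whatever your Ceph RADOS Gateway (or Minio) exposes, and use_path_request_style is the usual s3fs option for non-AWS endpoints:

```yaml
rexray:
  logLevel: warn
libstorage:
  service: s3fs
  integration:
    volume:
      operations:
        mount:
          rootPath: /     # see the second gotcha below
        remove:
          force: true     # see the third gotcha below
s3fs:
  # placeholder credentials - substitute your RADOS Gateway / Minio keys
  accessKey: CHANGEME
  secretKey: CHANGEME
  # the endpoint has to be given both here and as url= in options (first gotcha)
  endpoint: http://rgw.example.local:7480
  options:
  - url=http://rgw.example.local:7480
  - use_path_request_style
```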
Notes / gotchas:
* the S3 endpoint needs to be provided both in s3fs.endpoint and in s3fs.options.url
* setting libstorage.integration.volume.operations.mount.rootPath to "/", because the default "/data" doesn't exist in a freshly created volume and fails to be created (at least for me; perhaps solvable in a different way). This may be related to this issue in rexray
* setting libstorage.integration.volume.operations.remove.force to true, because of this issue in rexray
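With the config in place, restart the service and smoke-test the driver. The volume name below is arbitrary, and with the s3fs driver a volume corresponds to a bucket:

```
service dcos-rexray restart
rexray volume create ceph-test
rexray volume ls
rexray volume mount ceph-test
rexray volume unmount ceph-test
```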
Note: Marathon doesn't allow mounting the same volume across different applications, and using the rexray service instead of the Docker plugin also restricts the mount to a single instance. See the ticket here
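For completeness, a Marathon app using such an external volume would look roughly like this (the app id, volume name and container path are made up; the dvdi provider part is the standard DC/OS external volume syntax):

```json
{
  "id": "/ceph-volume-test",
  "cmd": "date >> /data/test/date.txt; sleep 3600",
  "instances": 1,
  "cpus": 0.1,
  "mem": 32,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "alpine:3.6",
      "network": "HOST"
    },
    "volumes": [
      {
        "containerPath": "/data/test",
        "mode": "RW",
        "external": {
          "name": "ceph-test",
          "provider": "dvdi",
          "options": { "dvdi/driver": "rexray" }
        }
      }
    ]
  }
}
```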