Rails Development On Docker – Part 3

Last time, in the second part of this three-part series, we talked about using docker-compose. Now we’ll show you how to move the (metaphorical) furniture so your team can define the perfect environment.


Most developers want to mirror their source code to the guest container. They’d like to be able to edit and commit on the host while running and testing on the guest. There are three primary ways to do this and none of them are particularly pleasant.


You may have noticed the COPY instruction, which takes a SOURCE path (host) and copies it to a DESTINATION path (guest). This seems ideal right up until you mutate a file on the host and notice that no change appears in the guest’s source. That’s because COPY is a one-time, build-time operation, and changing a copied file busts the build cache at that layer. If you want the changes you’ve made to your host source, you need to rebuild from that line onward. This is terrible and really should be avoided.
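As a sketch (the ruby base image and paths here are assumptions, not from the original), here is what that cache-busting behavior looks like in a Dockerfile:

```dockerfile
FROM ruby:2.2
WORKDIR /usr/src/application

# A change to ANY file in the build context invalidates this layer,
# and the copy itself only happens at build time -- later host edits
# are invisible inside the container until you rebuild:
COPY . /usr/src/application
```

Every instruction below that COPY line is re-run on the next build, no matter how trivial the host-side edit was.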

People also use COPY for pre-installing dependencies like this:

COPY Gemfile /usr/src/application/Gemfile
COPY Gemfile.lock /usr/src/application/Gemfile.lock
RUN bundle install

This is great because it caches the dependencies in the build, but terrible because it fails hard for tools like npm, where you have two options:

  1. Global installation, breaking require()
  2. Local installation, which gets wiped when you finally do use VOLUME/volumes:/-v

Basically it’s a show-stopper for any tool that generates filesystem data local to the source (npm, bower, jspm, component, vulcanize, …).
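For completeness, one partial workaround people reach for (my addition, not something the original toolchain blesses) is layering an anonymous volume over the generated directory so the bind mount can’t wipe it. Service name and paths below are assumptions:

```yaml
# docker-compose.yml sketch
web:
  build: .
  volumes:
    - .:/usr/src/application
    # The anonymous volume below shadows node_modules inside the bind
    # mount, preserving whatever `npm install` produced in the image:
    - /usr/src/application/node_modules
```

This keeps the dependencies alive, but they now live only inside the volume, so the host never sees them, which is its own flavor of the generated-files problem.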


VOLUME is the next suggestion from many people doing docker-based development, so by now you’ve probably already seen it, and already been hit by the TERRIBLE lag on file requests. Not only is the Dockerfile VOLUME instruction internally different from volumes:, -v, and --volume, but none of them was ever intended as a sync mechanism. On a moderately sized Rails application with ~100 assets (uncompressed, because development mode) a single uncomplicated request can take 8-12 seconds. If you, like me, investigate further, it shows up as roughly 600ms of Time-To-First-Byte latency per asset. Think about the journey each file request takes from the browser all the way to your host machine’s filesystem.

Really the issue is vboxfs, a filesystem layer notorious for slow reads (though fast for writes). Like me you’ll look for alternatives and come across NFS, but you’ll quickly find that the problem has merely shifted: now writes are slow and reads are fast. Either way you get an elongated, very painful development cycle.
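You can measure that Time-To-First-Byte yourself with curl’s write-out timers. Your real target would be the Rails app’s URL; the sketch below just times a local file:// fetch to demonstrate the flags:

```shell
# Create a throwaway file to request:
echo "body { color: red }" > /tmp/ttfb-demo.css

# -w '%{time_starttransfer}' prints seconds elapsed until the first byte arrived:
curl -s -o /dev/null -w 'TTFB: %{time_starttransfer}s\n' file:///tmp/ttfb-demo.css
```

Against a vboxfs-backed Rails server you’d point this at the app (e.g. http://$(docker-machine ip laurelandwolf):3000/assets/...) and watch the number climb.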

Having gone through the same issues, I discovered docker-rsync. docker-rsync is an extra layer that isn’t yet part of the official toolkit, but it’s definitely a valuable solution. Its job is simple: it (one-way) rsyncs your files to the intermediary machine. The volumes:/-v/--volume mount then picks up the changes we’ve made and exposes them to the guest container. That means we get fast reads and fast writes. Here’s how we use it:

docker-rsync -watch=false -dst="/usr/src/" laurelandwolf

Note: This might change in the future, as we noticed an issue with paths.

This salvation isn’t without problems, since as described the sync is only one-way. Rails developers will hit an interesting situation where (due to some hosting circumstances) they have to commit a generated file like Gemfile.lock. This means that (for one single file) you need a way to bring the generated content back to the host if your plan is to run entirely on docker’s development environment. A few of my friends and I have come up with some absurd ideas for solving this (the latest of which is to use cron, cp, and an “exploit”).

Note: This information might be helpful, but mostly it was just me flexing my Google Drawings skills.

It’s hard to explain so I’ve made some helper images:

First we have the host machine:

Phase 1

Then we have the intermediary boot2docker virtual machine on virtualbox (or whatever you use):

Phase 2

Now by default docker-machine vboxfs mounts the source to the boot2docker virtual machine:

Phase 3

Finally we have the guest container created and the volume shared directory on all 3 environments:

Phase 4a

However, as stated previously, this creates some issues, so let’s back up a step.

We’ll use docker-rsync to sync the files one way to the intermediary machine:

Phase 4b

Now we just have to periodically cp certain files from the rsync directory to the two-way-synced mount (seen later):

Phase 5b

Finally we volume mount from the intermediary machine to the guest machine:

Phase 6b

Now we have a fully working, fast, cross-platform development environment that requires zero mutation of your computer, junks version managers (rvm, rbenv, nvm, etc.), and is always isolated.

Concerns & The Future

I like docker, and I also like what I’m dubbing docker-development.
Still, there are a lot of drawbacks to this setup:

  1. It gets significantly more complex if you need to bring files back
  2. It’s a large system, things can break easily
  3. It requires a tool that isn’t a part of the official ecosystem
  4. The cogs need serious uniformity
  5. I don’t think docker-rsync is Windows compatible

It’s very apparent that while docker is another great contender for devops, it’s also a huge asset for local development and teaching. Most of my issues will blow over in time. My company has managed to turn its entire development process into a single button, and that button is a shell script only five lines long.
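For flavor, here’s roughly what such a five-line button could look like. Every name below (the machine name, the rsync flags) is an assumption pieced together from commands earlier in the post, not our actual script:

```shell
#!/bin/sh
# Hypothetical one-button bootstrap: create/point at the VM, sync, build, run.
docker-machine create --driver virtualbox laurelandwolf 2>/dev/null || true
eval "$(docker-machine env laurelandwolf)"
docker-rsync -watch=false -dst="/usr/src/" laurelandwolf
docker-compose build
docker-compose up
```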

Here’s what I’d like to see out of the future:

AUTHOR "Kurtis Rainbolt-Greene <me@kurtisrainboltgreene.name>"

MUTATE docker/machine
MUTATE docker/compose
MUTATE docker/sync

ISO boot2docker
MACHINE laurelandwolf

FROM ubuntu:latest

APPLY ruby:latest
APPLY nodejs:latest
APPLY postgres-client:latest

LINK postgres:database
LINK memcached:cache

ENVFILE .env.web

RUN apt-get update && apt-get install -y imagemagick

SYNC .:/usr/src/application

RUN bundle install
RUN npm install
RUN bower install


CMD ["bin/rails", "server"]

CONTAINER postgres:latest

CONTAINER memcached:latest

If it’s not terribly clear, the MUTATE instructions allow third parties to add new instructions. This opens the door for SYNC, ENVFILE, CONTAINER, LINK, and APPLY (which is just FROM, except the APPLY’d image’s own FROM is ignored).


The cron job described above to bring back generated assets looks like this:

docker-machine ssh laurelandwolf "(crontab -l; echo '0 * * * * cp -f /rsync/usr/src/application/Gemfile.lock $(pwd)') | sort | uniq | crontab -"


This is the third post in this series and we plan on having many more so thanks for reading! If this sort of thing is something you would enjoy working on we are looking for awesome engineers to join our team.

Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or other countries. Docker, Inc. and other parties may also have trademark rights in other terms used herein.