In the previous posts we looked at creating a build container, and after that we created a blog container, serving our generated static website.
It's quite surprising to me how simple the current setup is — admittedly, it's a simple application too. It takes about 50 lines of configuration to get everything up and running.
The idea of the blog container, which has nginx as its main process, is to deploy it to a production server whenever we feel like it, in just "one click". There should be no need to configure a server to host our website, nor to build the application on the server itself. This is in fact the promise, and the true power, of Docker.
Running containers on a remote server requires two things:
- The server should be able to retrieve the container's image.
- The Docker engine should be running on the server.
Pushing the container image to Docker Hub
The first step is quite easy. You can create an account at the (default) image registry Docker Hub. There are alternatives, but this seems like the usual place to start. You need to provide the full image name in docker-compose.yml (as we did in the previous post):
services:
    blog:
        ...
        image: matthiasnoback/php-and-symfony-blog
You can now build the image on your machine using docker compose build blog, and then push that image to Docker Hub by running docker compose push blog. On the production server, it will later be possible (see below) to pull the container image from the registry by running docker compose pull blog.
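For reference, the same can be done without Compose, using plain docker commands. This is only a sketch: it assumes the image name from the docker-compose.yml above and a Dockerfile in the project root, which may not match how the image is actually built in this series.

```shell
# Build and tag the image directly (roughly equivalent to `docker compose build blog`);
# assumes a Dockerfile in the current directory
docker build -t matthiasnoback/php-and-symfony-blog .

# Authenticate once with your Docker Hub credentials
docker login

# Push the tagged image to the registry
docker push matthiasnoback/php-and-symfony-blog
```

Compose simply reads the image name from docker-compose.yml, so the two approaches push to the same repository.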
Deployment to Digital Ocean using Docker Machine
Now that the container image has been pushed to Docker Hub, we can continue with the next step: installing the Docker engine on the server. You can do it manually, which I did at first. However, I thought it would be a nice occasion to learn about another tool called Docker Machine that performs this task in an automated fashion: it remotely provisions a server, making it ready to run Docker containers.
I already had an account at Digital Ocean, so I just followed the steps described in the Digital Ocean example documentation page. Basically, you let docker-machine create a new "droplet" for you, which is a nice name for a virtual private server (VPS). Once you have done this, you can run docker (and consequently docker-compose) commands on the remote server, from your own laptop. It wasn't entirely clear to me at first, but it works by populating some specific environment variables, which influence the behavior of docker commands.
First I provisioned my server by running:
docker-machine create --driver digitalocean --digitalocean-access-token secret-api-token php-and-symfony-blog
After some time I could run docker-machine env php-and-symfony-blog, which showed something like:
docker-machine env php-and-symfony-blog
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://x.x.x.x:2376"
export DOCKER_CERT_PATH="..."
export DOCKER_MACHINE_NAME="php-and-symfony-blog"
# Run this command to configure your shell:
# eval $(docker-machine env php-and-symfony-blog)
So I followed the instructions and ran eval $(docker-machine env php-and-symfony-blog). From that moment on I could run any docker command and it would be executed against the Docker engine running on the remote server, but (and this is why it's so awesome) based on the configuration files available on the host machine.
This means that I can simply run the following commands from my project root directory:
eval $(docker-machine env php-and-symfony-blog)
docker compose -f docker-compose.yml pull blog
docker compose -f docker-compose.yml up -d --no-deps --force-recreate --no-build blog
This pulls the previously pushed blog image from Docker Hub, then starts the blog container. The extra flags make sure that Compose doesn't try to build the image on the server (--no-build), doesn't start any linked services (--no-deps), and recreates the container even if its configuration hasn't changed (--force-recreate). Running docker compose ps reveals that the blog is indeed up and running, serving the website at port 80 as it should.
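A few quick commands help to double-check the deployment from your laptop (still pointing at the remote engine); the service name blog comes from the Compose file shown earlier:

```shell
# List the containers Compose manages on the remote engine
docker compose ps

# Show the last log lines of the blog container (nginx access/error log)
docker compose logs --tail=20 blog

# Fetch the homepage headers via the droplet's public IP
curl -I "http://$(docker-machine ip php-and-symfony-blog)"
```

docker-machine ip resolves the machine name to the droplet's IP address, so you don't have to look it up in the Digital Ocean control panel.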
Since the environment variables produced by docker-machine env cause docker commands to be transparently run against the remote server from now on, you should not forget to unset them when you want to communicate with your locally installed Docker engine. Florian Klein pointed out an easy way to accomplish this in the comment section: eval $(docker-machine env -u).
Some last suggestions:
- It may be a good idea to write another Makefile containing recipes for the above actions (e.g. create and provision a server — if you want that to be a reproducible thing; build, push and run a container image, etc.).
- Read more about Docker, Docker Compose, Docker Hub (and possibly Docker Machine) by browsing through their documentation pages. Digital Ocean also provides lots of useful documentation, tutorials and guides.
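As an example of the first suggestion, a minimal Makefile might look like this. The target names are my own invention; the machine name matches the one used above, and in Make each $ must be written as $$.

```makefile
# Build the blog image and push it to Docker Hub
push:
	docker compose build blog
	docker compose push blog

# Point docker at the droplet, then pull and (re)start the blog container
deploy:
	eval $$(docker-machine env php-and-symfony-blog); \
	docker compose -f docker-compose.yml pull blog; \
	docker compose -f docker-compose.yml up -d --no-deps --force-recreate --no-build blog
```

The deploy recipe chains its commands in a single shell (with ; \), because every recipe line normally runs in its own shell and the eval'ed environment variables would otherwise be lost between lines.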
Conclusion
Again: it's all pretty simple, very cool and highly rewarding. I like the fact that:
- I'm in full control of every software dependency of my application.
- I don't have to manually install anything on the production server.
- I won't be afraid to destroy my VPS, since it's very easy to bring a new one up again.
Of course, we have to be very honest about our achievements: once we start down the road of containerizing larger applications, or more inter-connected applications, we may soon run into trouble. I'm personally setting out on a journey to learn much more about this, so expect more on the topic soon.