Automated container deployment without a registry

Learn how to automate deployment of a containerized Django app without a registry

Published: 23 Oct, 2024


Even though I’m primarily a developer rather than a system administrator, I’ve always rolled my own servers for deploying web applications. With the introduction of tools like Kamal, many developers are starting to realize that servers aren’t as scary as PaaS providers claim they are.

As for me, I find tools like Kamal a little too omakase: while I enjoy simplicity, I also need the flexibility to deploy any kind of application. That’s why I rely on Ansible for configuring my servers and then use my own custom scripts for deployment.

One type of flexibility I require is being able to deploy containers without any registry. Instead, I want to be able to export a tar archive of my image and deploy it on my server. All of this must be automated, of course.

In this post, I’ll show you how I accomplish this kind of setup. To make this post more concrete, I open sourced the code for this in my Django boilerplate repository.

This whole setup can be automated using this Ansible playbook.

What needs to be done locally

When I want to deploy my app, I want to take the following steps:

  1. Build the image
  2. Save the image as a tar archive
  3. Upload this archive to my servers

I use this Python script to automate this process. Notice that I’m using subprocess instead of the Docker SDK for Python. That’s because the SDK isn’t kept up to date with the Docker CLI; for example, it doesn’t support BuildKit yet.
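The three steps above can be sketched by shelling out to the `docker` and `rsync` CLIs with subprocess. This is a simplified illustration, not the actual script from the repository; the image name, tag, and filename convention here are assumptions for the example:

```python
import subprocess


def archive_name(image: str, tag: str) -> str:
    # Filename convention the server side will parse: image__tag__.tar
    return f"{image}__{tag}__.tar"


def build_commands(image, tag, servers, remote_user, remote_directory):
    archive = archive_name(image, tag)
    cmds = [
        ["docker", "build", "-t", f"{image}:{tag}", "."],      # 1. build the image
        ["docker", "save", "-o", archive, f"{image}:{tag}"],   # 2. save it as a tar archive
    ]
    for server in servers:
        # 3. upload the archive to each server with rsync over SSH
        cmds.append(["rsync", "-av", archive,
                     f"{remote_user}@{server}:{remote_directory}/"])
    return cmds


def deploy(image, tag, servers, remote_user, remote_directory):
    for cmd in build_commands(image, tag, servers, remote_user, remote_directory):
        subprocess.run(cmd, check=True)
```

Keeping the command construction in a separate function makes the process easy to inspect (or dry-run) before anything touches Docker or the network.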

To configure this tool, you create a deploy.toml file in the root of the repository containing something like the following:

servers = ["192.168.1.1"]
remote_user = "alice"
remote_directory = "/var/docker-exports"

See an example of such a file here.

The tool will upload the archive to the specified directory on every server you list. It uses rsync, so you’ll need SSH access to those servers. The directory must already exist and be writable by the specified user.

Workflow on the server

On the server(s) where the app will be running, I want to take the following steps:

  1. Watch the directory I specified above for any changes.
  2. If a file is present, use Docker to load it as an image.
  3. Tag the loaded image using the tag I set during development.
  4. Stop any containers using a previous version of this image.
  5. Run a container or multiple ones using this image.

As you can see, this process will fire whenever I upload an archive of my Docker image to the directory I specified above (/var/docker-exports).
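Steps 2 through 5 above could be sketched like this. Note that this is an illustration of the idea, not the actual deploy-app.py: the image, tag, and container names are placeholders, and I assume here that the loaded image is retagged to latest before the container is replaced:

```python
import subprocess
from pathlib import Path


def server_commands(archive: Path, image: str, tag: str, container: str):
    # Commands to run once a new archive appears in the watched directory
    return [
        ["docker", "load", "-i", str(archive)],                  # 2. load archive as an image
        ["docker", "tag", f"{image}:{tag}", f"{image}:latest"],  # 3. retag the loaded image
        ["docker", "rm", "-f", container],                       # 4. stop the old container
        ["docker", "run", "-d", "--name", container,             # 5. run the new one
         f"{image}:latest"],
    ]


def handle_archive(archive, image, tag, container):
    for cmd in server_commands(Path(archive), image, tag, container):
        subprocess.run(cmd, check=True)
```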

Automated deployment

To trigger this process upon the appearance of new archives, I will use systemd path units.

I will create these two files in the /etc/systemd/system directory:

  1. deploy-app.path
  2. deploy-app.service
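For reference, the two units could look roughly like this. The descriptions and options here are a sketch of the idea, not the repository’s exact files:

```ini
# /etc/systemd/system/deploy-app.path
[Unit]
Description=Watch /var/docker-exports for new image archives

[Path]
PathChanged=/var/docker-exports
Unit=deploy-app.service

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/deploy-app.service
[Unit]
Description=Load and run the newly uploaded image

[Service]
Type=oneshot
# The script needs a shebang line and the executable bit set
ExecStart=/usr/local/bin/deploy-app.py
```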

Then you must run the following commands to activate the path unit:

systemctl daemon-reload
systemctl enable deploy-app.path
systemctl start deploy-app.path

The deploy-app.service unit will call a script called deploy-app.py, which you must place in /usr/local/bin and make executable with chmod u+x.

Once you upload your archive from your local development machine, you’ll see that it will automatically be deployed on the servers that you specified.

The image tag will be derived from the filename of the archive. See line 19 of deploy-app.py.

A filename will be something like simple_django__sometag__.tar. The tool will extract sometag from this string.
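Extracting the tag from that naming convention can be done with a small regular expression. A sketch of the idea (the actual logic in deploy-app.py may differ):

```python
import re


def extract_tag(filename: str) -> str:
    # Pull "sometag" out of a name like "simple_django__sometag__.tar"
    match = re.search(r"__(.+?)__\.tar$", filename)
    if match is None:
        raise ValueError(f"no tag found in {filename!r}")
    return match.group(1)
```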

Conclusion

As you saw, setting up a sort of CI/CD pipeline without relying on external providers is easy. Of course, there is some initial setup, but it only needs to be done once per project: you set it and forget it. After that, you simply work on your app and deploy by running a single script on your local machine.


Email me if you have any questions about this post.

Subscribe to the RSS feed to keep updated.