Using Make with Django

By anders pearson 03 Mar 2016

I would like to share how I use a venerable old technology, GNU Make, to manage the common tasks associated with a modern Django project.

As you’ll see, I actually use Make for much more than just Django and that’s one of the big advantages that keeps pulling me back to it. Having the same tools available on Go, node.js, Erlang, or C projects is handy for someone like me who frequently switches between them. My examples here will be Django related, but shouldn’t be hard to adapt to other kinds of codebases. I’m going to assume basic unix commandline familiarity, but nothing too fancy.

Make has kind of a reputation for being complex and obscure. In fairness, it certainly can be. If you start down the rabbit hole of writing complex Makefiles, there really doesn’t seem to be a bottom. Its reputation isn’t helped by its association with autoconf, which does deserve its reputation for complexity, but we’ll just avoid going there.

But Make can be simple and incredibly useful. There are plenty of tutorials online and I don’t want to repeat them here. To understand nearly everything I do with Make, this sample stanza is pretty much all you need:

ruleA: ruleB ruleC
    command1
    command2

A Makefile mainly consists of a set of rules, each of which has a list of zero or more prerequisites, and then zero or more commands. At a very high level, when a rule is invoked, Make first invokes any prerequisite rules if needed (which may have their own stanzas defined elsewhere) and then runs the designated commands. There’s some nonsense having to do with requiring actual TAB characters instead of spaces for indentation, plus more syntax for comments (lines starting with #) and for defining and using variables, but that’s the gist of it right there.
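
For completeness, comments and variables look something like this (a purely illustrative snippet; the PROJECT name is made up):

# this is a comment; PROJECT is a variable, referenced below as $(PROJECT)
PROJECT = myproject

say_project:
    echo $(PROJECT)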

So you can put something extremely simple (and pointless) like

say_hello:
    echo make says hello

in a file called Makefile, then go into the same directory and do:

$ make say_hello
echo make says hello
make says hello

Not exciting, but if you have some complex shell commands that you have to run on a regular basis, it can be convenient to just package them up in a Makefile so you don’t have to remember them or type them out every time.
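
For example, here’s the kind of rule I mean (purely illustrative; the command is just a find incantation I’d rather not retype):

clean:
    find . -name '*.pyc' -delete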

Things get interesting when you add in the fact that rules are also interpreted as filenames and Make is smart about looking at timestamps and keeping track of dependencies to avoid doing more work than necessary. I won’t give trivial examples to explain that (again, there are other tutorials out there), but the general interpretation to keep in mind is that the first stanza above specifies how to generate or update a file called ruleA. Specifically, two other files, ruleB and ruleC, need to have been generated first (or already be up to date); then command1 and command2 are run. If ruleA has been updated more recently than ruleB and ruleC, we know it’s up to date and nothing needs to be done. On the other hand, if the ruleB or ruleC files have newer timestamps than ruleA, ruleA needs to be regenerated. There will be meatier examples later, which I think will clarify this.

The original use-case for Make was handling complex dependencies in languages with compilers and linkers. You wanted to avoid re-compiling the entire project when only one source file changed. (If you’ve worked with a large C++ project, you probably understand why that would suck.) Make (with a well-written Makefile) is very good at avoiding unnecessary recompilation while you’re developing.

So back to Django, which is written in Python, which is not a compiled language and shouldn’t suffer from problems related to unnecessary and slow recompilation. Why would Make be helpful here?

While Python is not compiled, a real world Django project has enough complexity involved in day to day tasks that Make turns out to be incredibly useful.

A Django project requires that you’ve installed at least the Django Python library. If you’re actually going to access a database, you’ll also need a database driver library like psycopg2. Then Django has a whole ecosystem of pluggable applications that you can use for common functionality, each of which often requires a few other miscellaneous Python libraries. On many projects that I work on, the list of Python libraries that gets pulled in quickly runs up into the 50 to 100 range, and I don’t believe that’s uncommon.

I’m a believer in repeatable builds to keep deployment sane and reduce the “but it works on my machine” factor. So each Django project has a requirements.txt that specifies exact version numbers for every python library that the project uses. Running pip install -r requirements.txt should produce a very predictable environment (modulo entirely different OSes, etc.).
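
For reference, the pinned entries in requirements.txt just look like this (the version numbers here are placeholders, not recommendations):

Django==1.9.2
psycopg2==2.6.1
flake8==2.5.4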

Working on multiple projects (between work and personal projects, I help maintain at least 50 different Django projects), it’s not wise to have those libraries all installed globally. The Python community’s standard solution is virtualenv, which does a nice job of letting you keep everything separate. But now you’ve got a bunch of virtualenvs that you need to manage.

I have a separate little rant here about how the approach of “activating” virtualenvs in a particular shell environment (even via virtualenvwrapper or similar tools) is an antipattern and should be avoided. I’ll spare you most of that, except to say that what I recommend instead is sticking with a standard location for your project’s virtualenv and then just calling the python, pip, etc. commands in the virtualenv via their paths. Eg, on my projects, I always do

$ virtualenv ve

at the top level of the project. Then I can just do:

$ ./ve/bin/python ...

and know that I’m using the correct virtualenv for the project without thinking about whether I’ve activated it yet in this terminal (I’m a person who tends to have lots and lots of terminals open at a given time and often runs commands from an ephemeral shell within emacs). For Django, where most of what you do is done via manage.py commands, I actually just change the shebang line in manage.py to #!ve/bin/python so I can always just run ./manage.py do_whatever.

Let’s get back to Make though. If my project has a virtualenv that was created by running something along the lines of:

$ virtualenv ve
$ ./ve/bin/pip install -r requirements.txt

and requirements.txt gets updated, the virtualenv needs to be updated as well. This is starting to get into Make’s territory. Personally, I’ve encountered enough issues with pip messing up in-place updates of libraries that I prefer to just nuke the whole virtualenv from orbit and do a clean install anytime something changes.

Let’s add a rule to a Makefile that looks like this:

ve/bin/python: requirements.txt
    rm -rf ve
    virtualenv ve
    ./ve/bin/pip install -r requirements.txt

If that’s the only rule in the Makefile (or just the first), typing

$ make

at the terminal will ensure that the virtualenv is all set up and ready to go. On a fresh checkout, ve/bin/python won’t exist, so Make will run the three commands, setting everything up. If it’s run at any point after that, it will see that ve/bin/python was updated more recently than requirements.txt and that nothing needs to be done. If requirements.txt changes at some point, running make will trigger a wipe and reinstall of the virtualenv.

Already, that’s actually getting useful. It’s even better when you consider that in a real project, the commands involved quickly get more complicated: specifying a custom --index-url, setting things up so pip installs from wheels, and so on. I also like to pin exact versions of virtualenv and setuptools in my projects so I don’t have to think about what might happen on systems with different versions of those installed. The actual commands are complicated enough that I’m quite happy to have them written down in the Makefile so that I only need to remember how to type make.
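
To give a flavor, here’s a sketch of what that stanza starts to look like with just a custom index and a local wheel directory added (the URL and directory are placeholders, and the virtualenv/setuptools pinning is more involved, so I’ve left it out):

ve/bin/python: requirements.txt
    rm -rf ve
    virtualenv ve
    ./ve/bin/pip install --index-url https://pypi.example.com/simple --find-links wheelhouse/ -r requirements.txt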

It all gets even better when you realize that you can use ve/bin/python as a prerequisite for other rules.

Remember that if a rule’s target doesn’t correspond to an actual file, Make will just always run the commands associated with it. Eg, on a Django project, to run a development server, I might run:

$ ./manage.py runserver 0.0.0.0:8001

Instead, I can add a stanza like this to my Makefile:

runserver: ve/bin/python
    ./manage.py runserver 0.0.0.0:8001

Then I can just type make runserver and it will run that command for me. Even better, since ve/bin/python is a prerequisite for runserver, if the virtualenv for the project hasn’t been created yet (eg, if I just cloned the repo and forgot that I need to install libraries and stuff), it just does that automatically. And if I’ve done a git pull that updated my requirements.txt without noticing, it will automatically update my virtualenv for me. This sort of thing has been incredibly useful when working with designers who don’t necessarily know the ins and outs of pip and virtualenv or want to pay close attention to the requirements.txt file. They just know they can run make runserver and it works (though sometimes it spends a few minutes downloading and installing stuff first).

I typically have a bunch more rules for common tasks set up in a similar fashion:

check: ve/bin/python
    ./manage.py check

migrate: check
    ./manage.py migrate

flake8: ve/bin/python
    ./ve/bin/flake8 project

test: check flake8
    ./manage.py test

That demonstrates a bit of how rules chain together. If I run make test, check and flake8 are both prerequisites, so they each get run first. They, in turn, both depend on the virtualenv being created, so that will happen before anything else.

Perhaps you’ve noticed that there’s also a little bug in the ve/bin/python stanza up above. ve/bin/python is created by the virtualenv ve step, but it’s used as the target for the whole stanza. If the pip install step fails (because of a temporary issue with PyPI, or just a typo in requirements.txt, or something), the rule will still appear to have “succeeded” in that ve/bin/python has a fresher timestamp than requirements.txt. So the virtualenv won’t really have the complete set of libraries installed, but subsequent runs of Make will consider everything fine (based on timestamp comparisons) and not do anything. Other rules that depend on the virtualenv being set up are going to have problems when they run.

I get around that by introducing the concept of a sentinel file. So my stanza actually becomes something like:

ve/sentinel: requirements.txt
    rm -rf ve
    virtualenv ve
    ./ve/bin/pip install -r requirements.txt
    touch ve/sentinel

Ie, now there’s a zero-byte file named ve/sentinel that exists just to signal to Make that the rule completed successfully. If the pip install step fails for some reason, it never gets created, and Make won’t try to keep going until that gets fixed.
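
With the sentinel in place, the other rules then just list it as their prerequisite instead of ve/bin/python, along these lines:

runserver: ve/sentinel
    ./manage.py runserver 0.0.0.0:8001

check: ve/sentinel
    ./manage.py check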

My actual Makefile setup on real projects has grown more flexible and more complex, but if you’ve followed most of what’s here, it should be relatively straightforward. In particular, I’ve taken to splitting functionality out into individual, reusable .mk files that are heavily parameterized with variables, which then just get included into the main Makefile, where the project-specific variables are set.

Eg, here is a typical one. It sets a few variables specific to the project, then just does include *.mk.
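
Stripped down, that top-level Makefile amounts to something like this (the variable names are made up for illustration):

APP = myproject
JS_FILES = media/js/src

include *.mk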

The actual Django-related rules are in django.mk, which is a file that I use across all my Django projects and which should look similar to what’s been covered here (just with a lot more rules and variables). Other .mk files in the project handle tasks for things like JavaScript (jshint and jscs checks, plus npm and webpack operations) or Docker. They all take default values from config.mk and are set up so those can be overridden in the main Makefile or from the commandline. The rules in these files are a bit more sophisticated than the examples here, but not by much. [I should also point out that I am by no means a Make expert, so you may not want to view my stuff as best practices; it’s merely a glimpse of what’s possible.]

I typically arrange things so the default rule in the Makefile runs the full suite of tests and linters. Running the tests after every change is the most common task for me, so having that available with a plain make command is nice. As an emacs user, I find it even more convenient, since emacs’ default setting for the compile command is to just run make, so it’s always a quick keyboard shortcut away. (I use projectile, which is smart enough to run the compile command in the project root.)
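
Concretely, since Make treats the first rule in the file as the default, that just means something like this sits at the top of the Makefile (a sketch):

all: flake8 test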

Make was originally created in 1976, but I hope you can see that it remains relevant and useful forty years later.

Tags: django python make unix