If you have ever tried to write a non-trivial program consisting of several compilation units, you know how tedious it becomes to do the compilation and linking by hand. This is why people have written software to take care of it.
Most of these build systems also take care of closely related and often tightly coupled aspects, like checking for libraries the software depends on, handling linking, and working around toolchain limitations on different architectures. This results in considerable complexity, which in turn results in my limited brain having problems using these systems effectively. More than once I have spent many hours setting up a working build system configuration (usually Makefiles) for my projects.
Somehow I never really got a proper overview of the concepts and structure of Makefiles, or of the configurations for Automake, QMake, CMake, SCons, waf, or the other more or less frequently encountered species of build systems. Basically, I was very happy when code written by other people just worked and I did not have to mess with any of that.
The situation reminds me a bit of what someone (I cannot remember who) once said about cold medicine: none of the many available remedies really helps, otherwise there would not be so many different ones.
Recently I discovered tup, which is yet another build system. However, I find it very usable; it gets some things right:
- It only manages your compilation, and does not try to figure out the configuration and platform business. I think you can use Kconfig to take care of that, but I have never tried it. So it might be difficult to set up a portable, platform-independent build configuration, but that is rarely my use case; I mostly program for a very small audience (usually me).
- It is very easy to get the dependencies right. tup instruments the calls to the shell and automatically detects which files are read during compilation, so it gets the dependencies right by itself. When you have many compilation units with dependencies between them, this makes your build very robust. You can achieve the same with gcc and Makefile magic, but the tup way is much more painless.
- The Tupfiles (the equivalent of Makefiles) are only concerned with the local directory, so they are usually very short and simple. Unlike make, where this per-directory approach leads to an incorrect dependency graph (see "Recursive Make Considered Harmful"), tup still gets the global dependencies right; a minimal example follows after this list.
- Because the dependencies are right, parallel builds work like a charm without side effects.
- Because the dependencies are right, there is no need for things like make clean, which is often abused to work around dependencies the build system does not know about. This allows for a very clean way to use tup: there is just one command that gives you an up-to-date and correct build.
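To give an idea of what this looks like in practice, here is a minimal sketch of a Tupfile for a hypothetical C project (the file and program names are made up for illustration):

```
# Tupfile in the project directory.
# Compile each .c file into an object file. The #include
# dependencies are not listed here; tup detects them by
# watching which files gcc actually reads.
: foreach *.c |> gcc -c %f -o %o |> %B.o

# Link all object files into the final program.
: *.o |> gcc %f -o %o |> hello
```

After a one-time `tup init` in the project root, a single `tup upd` brings everything up to date: touch a header that some of the .c files include, run `tup upd` again, and exactly the affected files are recompiled and relinked.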
So I really like the design of tup and prefer it to everything else I have tried so far. It is probably not suitable for everybody, but it works robustly, which is something I never managed to achieve with any of the other systems I tried.
When a problem is robustly under control and the solution scales well, you can apply it to much larger problems than were possible before. The author of tup tried to demonstrate this by building a whole Linux system: if you change something in a kernel header or in a library, tup figures out what to recompile system-wide. He set all that up with git, but it does not seem to be actively maintained; I consider it more a proof of concept than a usable Linux distribution.
But anyway, I am quite happy with tup as a build system and will try it in a few more contexts (e.g. compiling LaTeX documents).
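For the LaTeX case I have not worked out the details yet, but for a simple single-pass document a rule along these lines might work (the file name is hypothetical; note that tup insists that every generated file, including the .aux and .log, is declared as an output, and documents that need several pdflatex runs for cross-references would need more care):

```
# Tupfile: a single pdflatex pass; tup requires all
# generated files to be listed as outputs.
: notes.tex |> pdflatex -interaction=nonstopmode %f |> notes.pdf notes.aux notes.log
```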