We've now been working on our rewrite of Nix, Tvix, for over a year.
As you can imagine, this past year has been turbulent, to say the least, given the regions where many of us live. As a result we haven't had as much time to work on fun things (like open-source software projects!) as we'd like.
We've all been fortunate enough to continue making progress, but we just haven't had the bandwidth to communicate with you and keep you up to speed on what's going on. That's what this blog post is for.
Nix language evaluator
The most significant progress in the past six months has been on our Nix language evaluator. To answer the most important question: yes, you can play with it right now – in Tvixbolt!
We got the evaluator into its current state by first listing all the problems we were likely to encounter, then solving them independently, and finally assembling all those small-scale solutions into a coherent whole. As a result, we briefly had an impractically large private source tree, which we have since integrated into our monorepo.
This process was much slower than we would have liked, due to code review bandwidth... which is to say, we're all volunteers. People have lives, bottlenecks happen.
Most of this code was either written or reviewed by grfn, sterni and tazjin (that's me!).
How much of eval is working?
Most of it! You can enter most (though not yet all, sorry!) Nix language expressions in Tvixbolt and observe how they are evaluated.
There's a lot of interesting stuff going on under the hood, such as:
- The Tvix compiler can emit warnings and errors without failing early, and retains as much source information as possible. This will enable you to use Tvix as the basis for developer tooling, such as language servers.
- The Tvix compiler performs in-depth scope analysis, so it can both generate efficient bytecode for accessing identifiers and alert you about problems in your code before runtime.
- The runtime supports tail-call optimisation in many (but, again, not yet all) cases, so you can evaluate recursive expressions in constant stack space.
- The runtime can give you different backing representations for the same Nix type. For example, an attribute set is represented differently depending on whether you've constructed an empty one, a name/value pair, or a larger set. This lets us optimise frequent, well-known use-cases without impacting the general case much (see the sketch after this list).
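To make that last point more concrete, here is a hedged sketch in Rust of what such specialised backing representations could look like. It is a simplification for illustration, not Tvix's actual value types; every name in it is invented.

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for a Nix value; the real value type is much richer.
#[derive(Clone, Debug)]
enum Value {
    String(String),
    Integer(i64),
}

// An attribute set with multiple backing representations:
// an empty set, a single name/value pair, and a general sorted map.
#[derive(Clone, Debug)]
enum Attrs {
    Empty,
    KeyValue(String, Value),
    Map(BTreeMap<String, Value>),
}

impl Attrs {
    // Lookups dispatch on the representation, so the common small cases
    // never pay for searching a full map.
    fn select(&self, name: &str) -> Option<&Value> {
        match self {
            Attrs::Empty => None,
            Attrs::KeyValue(k, v) if k.as_str() == name => Some(v),
            Attrs::KeyValue(_, _) => None,
            Attrs::Map(map) => map.get(name),
        }
    }
}

fn main() {
    // A single name/value pair uses the compact representation ...
    let small = Attrs::KeyValue("name".into(), Value::String("tvix".into()));

    // ... while larger sets fall back to a general map.
    let mut map = BTreeMap::new();
    map.insert("x".to_string(), Value::Integer(40));
    map.insert("y".to_string(), Value::Integer(2));
    let large = Attrs::Map(map);

    println!("{:?} {:?}", small.select("name"), large.select("y"));
}
```

The idea is simply that the very common small cases (empty sets, single name/value pairs) avoid the machinery of the general case entirely.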
We've run some initial benchmarks against C++ Nix (using the features that are ready), and in most cases Tvix evaluation is an order of magnitude faster. To be fair, though, these benchmarks are in no way indicative of real-life performance for things like nixpkgs. More information is coming... eventually.
How does it all work?
Tvix's evaluator uses a custom abstract machine with a Nix-specific instruction set, and a compiler that traverses a parsed Nix AST to emit this bytecode and perform a set of optimisations and other analyses. The most important benefit of this is that we can plan and lay out the execution of a program in a way that is better suited to an efficient runtime than directly traversing the AST.
TIP: You can see the generated bytecode in Tvixbolt!
This is all written in about 4000 lines of Rust (naturally), some of which – especially around scope-handling – are deceptively simple.
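For a rough feel of the general approach (a compiler emitting instructions that a small virtual machine then executes), here is a hedged sketch of a toy stack machine in Rust. It is emphatically not Tvix's real, Nix-specific instruction set; the opcodes below are invented for illustration.

```rust
// Invented opcodes for a toy stack machine; the real instruction set
// is Nix-specific and considerably larger.
#[derive(Clone, Copy, Debug)]
enum OpCode {
    Constant(i64), // push a constant onto the stack
    Add,           // pop two values, push their sum
}

// A minimal dispatch loop: a compiler would emit a sequence of opcodes
// after traversing the AST, and the runtime simply walks that sequence.
fn run(code: &[OpCode]) -> Option<i64> {
    let mut stack: Vec<i64> = Vec::new();

    for op in code {
        match op {
            OpCode::Constant(c) => stack.push(*c),
            OpCode::Add => {
                let b = stack.pop()?;
                let a = stack.pop()?;
                stack.push(a + b);
            }
        }
    }

    stack.pop()
}

fn main() {
    // Roughly what a compiler might emit for the expression `1 + 2`.
    let code = [OpCode::Constant(1), OpCode::Constant(2), OpCode::Add];
    println!("{:?}", run(&code)); // Some(3)
}
```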
As part of our CI suite, we run the evaluator against some tests we wrote ourselves, as well as against the upstream Nix test suite (which we don't quite pass yet. We're working on it!).
What's next for tvix-eval?
Despite all our progress, there are still some unfinished feature areas, and some of them are pretty important:
- The majority of Nix's builtins – including fundamental ones like import and derivation – aren't implemented yet.
- Neither are recursive attribute sets (rec). This isn't because of a problem with the recursion itself, but because of the handling of nested keys (such as a.b). We have a lackluster solution already, but are designing a more efficient one.
In both cases, we've mostly figured out what to do; now it's just a matter of finding the time to do it. Our progress is steady, and can be tracked in the source (viewer without Javascript here).
Apart from that, the next steps are:
- Comprehensive benchmarking. We're standing up infrastructure for continuous benchmarking to measure the impact of changes. It'll also let us identify and optimise hotspots (see the sketch after this list).
- Implementing known optimisations. There are some areas of the code that have the potential for significant speed gains, but we're holding off implementing those until the evaluator is feature complete and passes the Nix test suite.
- Finishing our language specification. Based on what we've learned, we're writing a specification of the Nix language that captures its various behaviours in all their tricky subtlety and subtle trickery.
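For a flavour of what an individual micro-benchmark in such a setup might look like, here is a hedged sketch using the criterion crate. The evaluate function below is a placeholder, not the evaluator's real entry point.

```rust
use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical stand-in for the evaluator's real API.
fn evaluate(expr: &str) -> String {
    // ... imagine this parses, compiles and runs the expression ...
    expr.len().to_string()
}

// Measures how long a small expression takes to evaluate.
fn bench_literal_list(c: &mut Criterion) {
    c.bench_function("literal list", |b| {
        b.iter(|| evaluate("[ 1 2 3 4 5 ]"))
    });
}

criterion_group!(benches, bench_literal_list);
criterion_main!(benches);
```

In a real setup this would live in a benches/ directory with criterion configured as the harness; criterion handles warm-up, iteration counts and statistical reporting, which is what makes it useful for tracking regressions over time.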
Once we can evaluate nixpkgs, we're likely to shift our focus towards the other areas of Tvix.
The Other Areas of Tvix
Speaking of these other areas (most importantly, the builder and store implementation), we've made some nice progress there also.
While we've yet to start assembling the actual pieces, flokli and adisbladis have been hard at work on go-nix, which aims to implement many of the low-level primitives required for the Nix store and builder (hashing and encoding schemes, archive formats, reference scanning ...).
We're looking forward to telling you more in the next Tvix status update!
Outro ...
We'd be delighted to onboard new contributors to Tvix! Please take a look at the main TVL page to find out how to get in touch with us if you'd like to join!
Thanks also, of course, to NLNet for sponsoring some of this work!
And finally, we would like to thank and pay our respects to jD91mZM2 – the original author of rnix-parser – who has sadly passed away. Please, tell people how important they are to you.
We use rnix-parser in our compiler, and its well-designed internals (also thanks to its new maintainers!) have saved us a lot of time.
That's it for this update. Go play with Tvixbolt, have fun figuring out weird ways to break it – and if you do, let us know.
We'll see you around!