| author    | Vincent Ambo <tazjin@tvl.su>             | 2024-08-10T20·59+0300 |
|-----------|------------------------------------------|-----------------------|
| committer | tazjin <tazjin@tvl.su>                   | 2024-08-19T11·02+0000 |
| commit    | d6c57eb957abc9c9101779600e04b34209d5c436 |                       |
| tree      | e6ec5d95d200912c87e7343fba1e4aca086b4515 |                       |
| parent    | ddca074886196ba45c43646d04bd84618009159d |                       |
refactor(tvix/eval): ensure VM operations fit in a single byte r/8519
This replaces the OpCode enum with a new Op enum which is guaranteed to
fit in a single byte. Instead of carrying enum variants with data, every
variant that has runtime data encodes it into the `Vec<u8>` that a
`Chunk` now carries.

This has several advantages:

* Less stack space is required at runtime, and fewer allocations are
  required while compiling.
* The OpCode doesn't need to carry "weird" special-cased data variants
  anymore.
* It is faster (albeit, not by much). On my laptop, results consistently
  look approximately like this:

  Benchmark 1: ./before -E '(import <nixpkgs> {}).firefox.outPath' --log-level ERROR --no-warnings
    Time (mean ± σ):      8.224 s ±  0.272 s    [User: 7.149 s, System: 0.688 s]
    Range (min … max):    7.759 s …  8.583 s    10 runs

  Benchmark 2: ./after -E '(import <nixpkgs> {}).firefox.outPath' --log-level ERROR --no-warnings
    Time (mean ± σ):      8.000 s ±  0.198 s    [User: 7.036 s, System: 0.633 s]
    Range (min … max):    7.718 s …  8.334 s    10 runs

  See notes below for why the performance impact might be less than
  expected.

* It is faster while at the same time dropping some optimisations we
  previously performed.

This has several disadvantages:

* The code is closer to how one would write it in C or Go.
* Bit shifting!
* There is (for now) slightly more code than before.

On performance I have the following thoughts at the moment: In order to
prepare for adding GC, there's a couple of places in Tvix where I'd like
to fence off certain kinds of complexity (such as mutating bytecode,
which, for various reasons, also has to be part of data that is subject
to GC). With this change, we can drop optimisations like retroactively
modifying existing bytecode and *still* achieve better performance than
before. I believe that this is currently worth it to pave the way for
changes that are more significant for performance.

In general this also opens other avenues of optimisation: For example,
we can profile which argument sizes actually exist and remove the copy
overhead of varint decoding (which does show up in profiles) by using
more adequately sized types for, e.g., constant indices.

Known regressions:

* Op::Constant is no longer printing its values in disassembly (this can
  be fixed, I just didn't get around to it, will do separately).

Change-Id: Id9b3a4254623a45de03069dbdb70b8349e976743
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12191
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
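To make the encoding described above concrete, here is a minimal sketch of a single-byte opcode enum whose operands are varint-encoded into the chunk's flat byte vector. The names (`Op`, `Chunk`, `push_op_with_arg`, `read_uvarint`) and the LEB128-style varint are illustrative assumptions, not the actual tvix/eval API:

```rust
// Sketch only: a fieldless, single-byte opcode enum. Operands live in the
// same Vec<u8> as the opcodes instead of inside enum variants.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
enum Op {
    Constant = 0,
    Add = 1,
    // ... further single-byte operations
}

/// Illustrative chunk: opcodes and their operands share one byte vector.
#[derive(Default)]
struct Chunk {
    code: Vec<u8>,
}

impl Chunk {
    /// Emit an operation without operands.
    fn push_op(&mut self, op: Op) {
        self.code.push(op as u8);
    }

    /// Emit an operation followed by a LEB128-style varint operand.
    fn push_op_with_arg(&mut self, op: Op, mut arg: u64) {
        self.code.push(op as u8);
        loop {
            let mut byte = (arg & 0x7f) as u8;
            arg >>= 7;
            if arg != 0 {
                byte |= 0x80; // continuation bit
            }
            self.code.push(byte);
            if arg == 0 {
                break;
            }
        }
    }

    /// Decode a varint operand starting at `offset`, returning the value
    /// and the offset of the next instruction.
    fn read_uvarint(&self, mut offset: usize) -> (u64, usize) {
        let mut result = 0u64;
        let mut shift = 0;
        loop {
            let byte = self.code[offset];
            offset += 1;
            result |= u64::from(byte & 0x7f) << shift;
            if byte & 0x80 == 0 {
                break;
            }
            shift += 7;
        }
        (result, offset)
    }
}

fn main() {
    let mut chunk = Chunk::default();
    chunk.push_op_with_arg(Op::Constant, 300); // hypothetical constant index
    chunk.push_op(Op::Add);

    // A dispatch loop reads one byte per op, then decodes operands inline.
    let (idx, next) = chunk.read_uvarint(1);
    assert_eq!(idx, 300);
    assert_eq!(chunk.code[next], Op::Add as u8);
    println!("constant index: {idx}");
}
```

Under these assumptions, dispatch stays a one-byte read, and the copy/shift work of varint decoding is exactly the overhead the message mentions as showing up in profiles, which is why fixed, adequately sized operand types are listed as a possible follow-up optimisation.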