author     Vincent Ambo <tazjin@tvl.su>  2024-08-10T20·59+0300
committer  tazjin <tazjin@tvl.su>        2024-08-19T11·02+0000
commit     d6c57eb957abc9c9101779600e04b34209d5c436 (patch)
tree       e6ec5d95d200912c87e7343fba1e4aca086b4515 /tvix/eval/src/value
parent     ddca074886196ba45c43646d04bd84618009159d (diff)
refactor(tvix/eval): ensure VM operations fit in a single byte r/8519
This replaces the OpCode enum with a new Op enum which is guaranteed to fit in a
single byte. Instead of enum variants carrying their data inline, every
operation that has runtime data now encodes it into the `Vec<u8>` that a
`Chunk` carries.
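
To illustrate the general idea, here is a simplified sketch (these are not the
actual tvix `Op`/`Chunk` definitions: the real `push_op` also takes a span, the
real `Op` has many more variants, and the encoding details may differ):

  // Every operation is a single byte; any operand data follows it in the
  // same byte stream instead of living inside the enum variant.
  #[repr(u8)]
  #[derive(Debug, Clone, Copy)]
  enum Op {
      Constant = 0x01,
      Force = 0x02,
      Call = 0x03,
      Return = 0x04,
  }

  #[derive(Default)]
  struct Chunk {
      code: Vec<u8>,
  }

  impl Chunk {
      fn push_op(&mut self, op: Op) {
          self.code.push(op as u8);
      }

      // Operands (e.g. constant indices) are appended as LEB128-style
      // unsigned varints directly after their opcode.
      fn push_uvarint(&mut self, mut n: u64) {
          loop {
              let mut byte = (n & 0x7f) as u8;
              n >>= 7;
              if n != 0 {
                  byte |= 0x80;
              }
              self.code.push(byte);
              if n == 0 {
                  break;
              }
          }
      }
  }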

This has several advantages:

* Less stack space is required at runtime, and fewer allocations are required
  while compiling.
* The OpCode doesn't need to carry "weird" special-cased data variants anymore.
* It is faster (albeit, not by much). On my laptop, results consistently look
  approximately like this:

  Benchmark 1: ./before -E '(import <nixpkgs> {}).firefox.outPath' --log-level ERROR --no-warnings
  Time (mean ± σ):      8.224 s ±  0.272 s    [User: 7.149 s, System: 0.688 s]
  Range (min … max):    7.759 s …  8.583 s    10 runs

  Benchmark 2: ./after -E '(import <nixpkgs> {}).firefox.outPath' --log-level ERROR --no-warnings
  Time (mean ± σ):      8.000 s ±  0.198 s    [User: 7.036 s, System: 0.633 s]
  Range (min … max):    7.718 s …  8.334 s    10 runs

  See notes below for why the performance impact might be less than expected.
* It is faster despite dropping some optimisations we previously performed.

This has several disadvantages:

* The code is closer to how one would write it in C or Go.
* Bit shifting! (see the `CoercionKind` roundtrip sketch after this list)
* There is (for now) slightly more code than before.
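
The bit shifting in question is the packing of `CoercionKind` into a single
byte (see the `From` impls in the mod.rs hunk below). Roughly, a roundtrip
looks like this (illustrative only; it assumes `strong` and `import_paths` are
the only fields, as the `From<u8>` impl suggests):

  // bit 0 = strong, bit 1 = import_paths
  let kind = CoercionKind { strong: true, import_paths: false };
  let byte: u8 = kind.into();              // 0b01
  let decoded = CoercionKind::from(byte);
  assert!(decoded.strong && !decoded.import_paths);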

On performance I have the following thoughts at the moment:

In order to prepare for adding GC, there are a couple of places in Tvix where
I'd like to fence off certain kinds of complexity (such as mutating bytecode,
which, for various reasons, also has to be part of data that is subject to GC).
With this change, we can drop optimisations like retroactively modifying
existing bytecode and *still* achieve better performance than before.

I believe this is currently worth it, as it paves the way for changes that are
more significant for performance.

In general this also opens up other avenues of optimisation: for example, we
can profile which argument sizes actually occur and remove the copy overhead of
varint decoding (which does show up in profiles) by using more adequately sized
types for, e.g., constant indices.
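
As a rough illustration of that overhead, a minimal LEB128-style uvarint
decoder (not the actual tvix implementation) looks like this; the byte-by-byte
loop is what shows up in profiles, and fixed-width operands would replace it
with a single read:

  // Decodes an unsigned varint starting at `idx`, returning the value and
  // the index of the byte following it.
  fn read_uvarint(code: &[u8], mut idx: usize) -> (u64, usize) {
      let mut result: u64 = 0;
      let mut shift = 0;
      loop {
          let byte = code[idx];
          idx += 1;
          result |= ((byte & 0x7f) as u64) << shift;
          if byte & 0x80 == 0 {
              return (result, idx);
          }
          shift += 7;
      }
  }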

Known regressions:

* Op::Constant no longer prints its value in the disassembly (this can be
  fixed; I just didn't get around to it and will do so separately).

Change-Id: Id9b3a4254623a45de03069dbdb70b8349e976743
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12191
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
Diffstat (limited to 'tvix/eval/src/value')
-rw-r--r--  tvix/eval/src/value/mod.rs    15
-rw-r--r--  tvix/eval/src/value/thunk.rs  14
2 files changed, 23 insertions, 6 deletions
diff --git a/tvix/eval/src/value/mod.rs b/tvix/eval/src/value/mod.rs
index 27c198ca9483..6109a0da2385 100644
--- a/tvix/eval/src/value/mod.rs
+++ b/tvix/eval/src/value/mod.rs
@@ -186,6 +186,21 @@ pub struct CoercionKind {
     pub import_paths: bool,
 }
 
+impl From<CoercionKind> for u8 {
+    fn from(k: CoercionKind) -> u8 {
+        k.strong as u8 | (k.import_paths as u8) << 1
+    }
+}
+
+impl From<u8> for CoercionKind {
+    fn from(byte: u8) -> Self {
+        CoercionKind {
+            strong: byte & 0x01 != 0,
+            import_paths: byte & 0x02 != 0,
+        }
+    }
+}
+
 impl<T> From<T> for Value
 where
     T: Into<NixString>,
diff --git a/tvix/eval/src/value/thunk.rs b/tvix/eval/src/value/thunk.rs
index 2019e8ab3239..4b915019d47c 100644
--- a/tvix/eval/src/value/thunk.rs
+++ b/tvix/eval/src/value/thunk.rs
@@ -27,7 +27,7 @@ use std::{
 
 use crate::{
     errors::ErrorKind,
-    opcode::OpCode,
+    opcode::Op,
     upvalues::Upvalues,
     value::Closure,
     vm::generators::{self, GenCo},
@@ -170,13 +170,15 @@ impl Thunk {
         // This is basically a recreation of compile_apply():
         // We need to push the argument onto the stack and then the function.
         // The function (not the argument) needs to be forced before calling.
-        lambda.chunk.push_op(OpCode::OpConstant(arg_idx), span);
-        lambda.chunk().push_op(OpCode::OpConstant(f_idx), span);
-        lambda.chunk.push_op(OpCode::OpForce, span);
-        lambda.chunk.push_op(OpCode::OpCall, span);
+        lambda.chunk.push_op(Op::Constant, span);
+        lambda.chunk.push_uvarint(arg_idx.0 as u64);
+        lambda.chunk.push_op(Op::Constant, span);
+        lambda.chunk.push_uvarint(f_idx.0 as u64);
+        lambda.chunk.push_op(Op::Force, span);
+        lambda.chunk.push_op(Op::Call, span);
 
         // Inform the VM that the chunk has ended
-        lambda.chunk.push_op(OpCode::OpReturn, span);
+        lambda.chunk.push_op(Op::Return, span);
 
         Thunk(Rc::new(RefCell::new(ThunkRepr::Suspended {
             upvalues: Rc::new(Upvalues::with_capacity(0)),