This interface is the only way to construct lazy builtins outside of eval, which
is actually a useful feature to have.
Change-Id: I386323af20aa3134bb4f669fa66fbb21e9b05fd4
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12386
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Autosubmit: tazjin <tazjin@tvl.su>
Reviewed-by: Yury Shvedov <yury.shvedov@kaspersky.com>
|
|
There's no need to duplicate this.
Change-Id: If3d930211a1d625d6c7ef129b05034e7a915da83
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12385
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Autosubmit: tazjin <tazjin@tvl.su>
Reviewed-by: Yury Shvedov <yury.shvedov@kaspersky.com>
|
|
This replaces the OpCode enum with a new Op enum which is guaranteed to fit in a
single byte. Instead of carrying enum variants with data, every variant that has
runtime data encodes it into the `Vec<u8>` that a `Chunk` now carries.
This has several advantages:
* Less stack space is required at runtime, and fewer allocations are required
while compiling.
* The OpCode doesn't need to carry "weird" special-cased data variants anymore.
* It is faster (albeit, not by much). On my laptop, results consistently look
approximately like this:
Benchmark 1: ./before -E '(import <nixpkgs> {}).firefox.outPath' --log-level ERROR --no-warnings
Time (mean ± σ): 8.224 s ± 0.272 s [User: 7.149 s, System: 0.688 s]
Range (min … max): 7.759 s … 8.583 s 10 runs
Benchmark 2: ./after -E '(import <nixpkgs> {}).firefox.outPath' --log-level ERROR --no-warnings
Time (mean ± σ): 8.000 s ± 0.198 s [User: 7.036 s, System: 0.633 s]
Range (min … max): 7.718 s … 8.334 s 10 runs
See notes below for why the performance impact might be less than expected.
* It is faster while at the same time dropping some optimisations we previously
performed.
This has several disadvantages:
* The code is closer to how one would write it in C or Go.
* Bit shifting!
* There is (for now) slightly more code than before.
On performance I have the following thoughts at the moment:
In order to prepare for adding GC, there's a couple of places in Tvix where I'd
like to fence off certain kinds of complexity (such as mutating bytecode, which,
for various reasons, also has to be part of data that is subject to GC). With
this change, we can drop optimisations like retroactively modifying existing
bytecode and *still* achieve better performance than before.
I believe that this is currently worth it to pave the way for changes that are
more significant for performance.
In general this also opens other avenues of optimisation: For example, we can
profile which argument sizes actually exist and remove the copy overhead of
varint decoding (which does show up in profiles) by using more adequately sized
types for, e.g., constant indices.
Known regressions:
* Op::Constant is no longer printing its values in disassembly (this can be
fixed, I just didn't get around to it, will do separately).
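For illustration, the byte-encoding scheme described above can be sketched
roughly like this (all names, opcode values, and the LEB128-style varint are
assumptions made for the sketch, not Tvix's actual definitions):

  // All names here are illustrative, not Tvix's real types.
  #[repr(u8)]
  #[derive(Clone, Copy)]
  enum Op {
      Constant = 0x01,
      Force = 0x02,
  }

  struct Chunk {
      code: Vec<u8>,
  }

  impl Chunk {
      // Push a one-byte opcode followed by its operand as an LEB128 varint.
      fn push_op_with_arg(&mut self, op: Op, mut arg: u64) {
          self.code.push(op as u8);
          loop {
              let byte = (arg & 0x7f) as u8;
              arg >>= 7;
              if arg == 0 {
                  self.code.push(byte);
                  break;
              }
              // High bit set means "more bytes follow".
              self.code.push(byte | 0x80);
          }
      }

      // Decode the varint operand starting at `idx`, returning the value
      // and the index of the next instruction.
      fn read_arg(&self, mut idx: usize) -> (u64, usize) {
          let mut result = 0u64;
          let mut shift = 0;
          loop {
              let byte = self.code[idx];
              idx += 1;
              result |= u64::from(byte & 0x7f) << shift;
              if byte & 0x80 == 0 {
                  break;
              }
              shift += 7;
          }
          (result, idx)
      }
  }

With a scheme like this, small operands (the common case) cost a single byte,
which is where the profiling note about more adequately sized types comes in.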
Change-Id: Id9b3a4254623a45de03069dbdb70b8349e976743
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12191
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
|
|
the answer is at https://cl.tvl.fyi/c/depot/+/10798
Change-Id: I5f0ed51a3954c7241ef15a8268e0e51695e994c6
Reviewed-on: https://cl.tvl.fyi/c/depot/+/12175
Autosubmit: aspen <root@gws.fyi>
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
|
|
This was made unnecessary in c92d06271 (feat(tvix/eval): drop
LightSpan::Delayed, 2023-12-08) because it didn't improve benchmarks as
much as expected and has been vestigial since; this continues the
cleanup by just removing it altogether.
Change-Id: I21ec7ae9b52a5cccd2092696a5a87f658194d672
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11949
Autosubmit: aspen <root@gws.fyi>
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
|
|
In places where we want to extend context with that from another
NixString, use take_context() to split it off, then call .extend(),
making use of IntoIterator to avoid a bunch of clones.
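A toy sketch of that pattern (the types and method shapes here are simplified
stand-ins, not the real NixString API):

  use std::collections::BTreeSet;

  // Simplified stand-ins for the real types.
  #[derive(Default)]
  struct NixContext(BTreeSet<String>);

  impl IntoIterator for NixContext {
      type Item = String;
      type IntoIter = std::collections::btree_set::IntoIter<String>;
      fn into_iter(self) -> Self::IntoIter {
          self.0.into_iter()
      }
  }

  struct NixString {
      text: String,
      context: Option<NixContext>,
  }

  impl NixString {
      // Split the context off, leaving the string without one.
      fn take_context(&mut self) -> Option<NixContext> {
          self.context.take()
      }
  }

  fn merge_context_from(target: &mut NixContext, other: &mut NixString) {
      if let Some(ctx) = other.take_context() {
          // extend() consumes owned elements via IntoIterator: no clones.
          target.0.extend(ctx);
      }
  }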
Change-Id: I2460141a3ed776c64c36132b2203b6a1d710b922
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11705
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: edef <edef@edef.eu>
|
|
`toJSON` transforms a Nix structure into a JSON string.
The resulting JSON string must carry the context of every value in that
Nix structure. Thus, it is necessary to take the union of all contexts
and attach it to the resulting string.
Unfortunately, the return type of `into_json` is a serde's JSON object,
not a string. Therefore, it is not possible to reuse `NixString`
machinery.
Context tests are reinforced as Nix does not test those behaviors.
Fixes b/393.
Change-Id: I5afdbc4e18dd70469192c1aa657d1049ba330149
Signed-off-by: Ryan Lahfa <tvl@lahfa.xyz>
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11266
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
|
|
Previously, Nix strings were represented as a Box (within Value)
pointing to a tuple of an optional context, and another Box pointing to
the actual string allocation itself. This is pretty inefficient, both in
terms of memory usage (we use 48 whole bytes for a None context!) and in
terms of the extra indirection required to get at the actual data. It
was necessary, however, because with native Rust DSTs if we had
something like `struct NixString(Option<NixContext>, BStr)` we could
only pass around *fat* pointers to that value (with the length in the
pointer) and that'd make Value need to be bigger (which is a waste of
both memory and cache space, since that memory would be unused for all
other Values).
Instead, this commit implements *manual* allocation of a packed string
representation, with the length *in the allocation* as a field past the
context. This requires a big old pile of unsafe Rust, but the payoff is
clear:
hello outpath time: [882.18 ms 897.16 ms 911.23 ms]
change: [-15.143% -13.819% -12.500%] (p = 0.00 < 0.05)
Performance has improved.
Fortunately this change can be localized entirely within
value/string.rs, since we were abstracting things out nicely.
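A heavily simplified sketch of the idea (it assumes nothing about the real
layout in value/string.rs, which also has to fit the context in): the length is
stored at the front of a single allocation, so the handle itself stays a thin
pointer.

  use std::alloc::{alloc, dealloc, Layout};
  use std::ptr::{self, NonNull};
  use std::slice;

  const HEADER: usize = std::mem::size_of::<usize>();

  // A thin-pointer string: [len: usize][bytes ...] in one allocation.
  struct PackedStr(NonNull<u8>);

  impl PackedStr {
      fn new(bytes: &[u8]) -> Self {
          let layout = Layout::array::<u8>(HEADER + bytes.len()).unwrap();
          unsafe {
              let p = alloc(layout);
              assert!(!p.is_null(), "allocation failed");
              // Write the length header, then the payload right after it.
              (p as *mut usize).write_unaligned(bytes.len());
              ptr::copy_nonoverlapping(bytes.as_ptr(), p.add(HEADER), bytes.len());
              PackedStr(NonNull::new_unchecked(p))
          }
      }

      fn len(&self) -> usize {
          unsafe { (self.0.as_ptr() as *const usize).read_unaligned() }
      }

      fn as_bytes(&self) -> &[u8] {
          unsafe { slice::from_raw_parts(self.0.as_ptr().add(HEADER), self.len()) }
      }
  }

  impl Drop for PackedStr {
      fn drop(&mut self) {
          let layout = Layout::array::<u8>(HEADER + self.len()).unwrap();
          unsafe { dealloc(self.0.as_ptr(), layout) };
      }
  }

Because the handle is a single thin pointer, size_of::<PackedStr>() is one
machine word, which is what keeps the enclosing Value small.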
Change-Id: Ibf56dd16c9c503884f64facbb7f0ac596463efb6
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10852
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Autosubmit: aspen <root@gws.fyi>
|
|
(Re-)Adds an error variant that wraps a catchable error kind, which is
used for returning the result of an evaluation.
Previously this would return the internal catchable value, which would
lead to panics if users tried to use these. Somehow this was missed; I
think we need error output tests.
Change-Id: Id6e24aa2ce4ea4358a29b2e1cf4a6749986baf8c
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10991
Tested-by: BuildkiteCI
Autosubmit: tazjin <tazjin@tvl.su>
Reviewed-by: flokli <flokli@flokli.de>
|
|
Now that I've done a ton of things to make sure Value is small on the
stack (16 bytes, which is a perfectly reasonable size for a programming
language Value enum), add a test asserting it stays that way.
These size improvements have a measurable impact, too - here's the
`hello outpath` benchmark compared between canon (as of r/7495) and this
commit:
hello outpath time: [990.56 ms 995.83 ms 1.0070 s]
change: [-7.1397% -6.1302% -5.1651%] (p = 0.00 < 0.05)
Performance has improved.
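A minimal sketch of such a guard (it assumes a `Value` type in scope; the
16-byte figure applies to 64-bit targets):

  #[cfg(test)]
  mod size_tests {
      use super::Value;

      #[test]
      fn value_stays_at_16_bytes() {
          // If a new variant accidentally grows the enum, this fails loudly.
          assert_eq!(std::mem::size_of::<Value>(), 16);
      }
  }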
Change-Id: If99a0976eab28eb5e516fcd2f4a0e068145af23e
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10799
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
Autosubmit: aspen <root@gws.fyi>
Reviewed-by: sterni <sternenseemann@systemli.org>
|
|
This is now the only enum variant for Value that is larger than 8
bytes (it's 16 bytes), so boxing it (especially since it's not
perf-critical) allows us to get the Value size down to only 16 bytes!
Change-Id: I98598e2b762944448bef982e8ff7da6d6683c4aa
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10798
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Autosubmit: aspen <root@gws.fyi>
|
|
This reverts commit d3d41552cf1f6485f8ebc597a2128a0d15b030a5.
This was well-intentioned, but now the boxed Path values are actually
the *largest* Value enum variants, at 16 bytes (because they're
fat-pointers, with a len) instead of 8 bytes like all the other values.
Having the double reference is a reasonable price to pay (it seems; more
benchmarks may end up disagreeing) for a smaller Value repr.
Change-Id: I0d3e84f646c8f5ffd0b7259c4e456637eea360f7
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10797
Tested-by: BuildkiteCI
Autosubmit: aspen <root@gws.fyi>
Reviewed-by: sterni <sternenseemann@systemli.org>
|
|
serde_json::Value is pretty large, and is contributing (albeit not
exclusively) to the large size of the Value repr. Putting it in a box
is *especially* cheap (since it's rarely used) and allows us
to (eventually) cut down on the size of Value.
Change-Id: I005a802d8527b639beb4e938e3320b11ffa1ef23
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10795
Reviewed-by: sterni <sternenseemann@systemli.org>
Autosubmit: aspen <root@gws.fyi>
Tested-by: BuildkiteCI
|
|
NixString is *quite* large - like 80 bytes - because of the extra
capacity value for BString and because of the context. We want to keep
Value small since we're passing it around a lot, so let's box the
NixString inside Value::String to save on some memory, and make cloning
ostensibly a little cheaper.
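The effect of boxing a large variant can be shown on a toy enum (sizes hold on
a typical 64-bit target; the names are purely illustrative):

  enum Fat {
      Big([u64; 3]), // stands in for a large inline payload like NixString
      Int(i64),
  }

  enum Thin {
      Big(Box<[u64; 3]>), // one pointer instead of the payload itself
      Int(i64),
  }

  fn main() {
      assert_eq!(std::mem::size_of::<Fat>(), 32);  // 24-byte payload + tag
      assert_eq!(std::mem::size_of::<Thin>(), 16); // 8-byte pointer + tag
  }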
Change-Id: I343c8b4e7f61dc3dcbbaba4382efb3b3e5bbabb2
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10729
Tested-by: BuildkiteCI
Reviewed-by: sterni <sternenseemann@systemli.org>
|
|
In the compiler, skip emitting an OpForce if the last op was an
OpConstant for a non-thunk constant. This gives a small (~1% on my
machine) perf boost, eg when evaluating hello.outPath:
❯ hyperfine \
"./before --no-warnings -E '(import <nixpkgs> {}).hello.outPath'" \
"./after --no-warnings -E '(import <nixpkgs> {}).hello.outPath'"
Benchmark 1: ./before --no-warnings -E '(import <nixpkgs> {}).hello.outPath'
Time (mean ± σ): 1.151 s ± 0.022 s [User: 1.003 s, System: 0.151 s]
Range (min … max): 1.123 s … 1.184 s 10 runs
Benchmark 2: ./after --no-warnings -E '(import <nixpkgs> {}).hello.outPath'
Time (mean ± σ): 1.140 s ± 0.022 s [User: 0.989 s, System: 0.152 s]
Range (min … max): 1.115 s … 1.175 s 10 runs
Summary
./after --no-warnings -E '(import <nixpkgs> {}).hello.outPath' ran
1.01 ± 0.03 times faster than ./before --no-warnings -E '(import <nixpkgs> {}).hello.outPath'
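A toy model of the peephole (all names are made up for illustration; the real
compiler works on its own OpCode and constants table):

  #[derive(Clone, Copy)]
  enum Op {
      Constant(usize),
      Force,
  }

  enum Const {
      Int(i64),
      Thunk, // placeholder for "something that still needs forcing"
  }

  struct Chunk {
      ops: Vec<Op>,
  }

  impl Chunk {
      fn emit_force(&mut self, constants: &[Const]) {
          // If the previous op loads a constant that is already in normal
          // form, forcing it would be a no-op, so don't emit OpForce at all.
          if let Some(Op::Constant(idx)) = self.ops.last().copied() {
              if !matches!(constants[idx], Const::Thunk) {
                  return;
              }
          }
          self.ops.push(Op::Force);
      }
  }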
Change-Id: I2105fd431d4bad699087907e16c789418e9a4062
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10714
Reviewed-by: sterni <sternenseemann@systemli.org>
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
|
|
PathBuf internally contains a heap pointer (an OsString), so we were in
effect double-boxing here. Removing the extra layer by representing the
path variant of Tvix's Value as a Box<Path> rather than a Box<PathBuf> saves
us an indirection, while still avoiding the extra memory overhead of the
capacity which was the reason we were boxing PathBuf in the first place.
Change-Id: I8c185b9d4646161d1921917f83e87421496a3e24
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10725
Reviewed-by: sterni <sternenseemann@systemli.org>
Autosubmit: aspen <root@gws.fyi>
Tested-by: BuildkiteCI
|
|
C++ nix uses C-style zero-terminated char pointers to represent strings
internally - however, up to this point, tvix has used Rust `String` and
`str` for string values. Since those are required to be valid utf-8, we
haven't been able to properly represent all the string values that Nix
supports.
To fix that, this change converts the internal representation of the
NixString struct from `Box<str>` to `BString`, from the `bstr` crate -
this is a wrapper around a `Vec<u8>` with extra functions for treating
that byte vector as a "morally string-like" value, which is basically
exactly what we need.
Since this changes a pretty fundamental assumption about a pretty core
type, there are a *lot* of changes in a lot of places to make this work,
but I've tried to keep the general philosophy and intent of most of the
code in most places intact. Most notably, there's nothing that's been
done to make the derivation stuff in //tvix/glue work with non-utf8
strings everywhere, instead opting to just convert to String/str when
passing things into that - there *might* be something to be done there,
but I don't know what the rules should be and I don't want to figure
them out in this change.
To deal with OS-native paths in a way that also works in WASM for
tvixbolt, this also adds a dependency on the "os_str_bytes" crate.
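A minimal illustration of what `BString` buys us (using the bstr crate):

  use bstr::{BString, ByteSlice};

  fn main() {
      // Arbitrary bytes that are not valid UTF-8: a Rust `String` could not
      // hold them, but a Nix string value legitimately can.
      let s = BString::from(&b"foo\xffbar"[..]);
      assert!(s.to_str().is_err()); // not UTF-8, so no &str view exists
      println!("{:?}", s);          // prints the bytes, escaping the bad one
  }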
Fixes: b/189
Fixes: b/337
Change-Id: I5e6eb29c62f47dd91af954f5e12bfc3d186f5526
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10200
Reviewed-by: tazjin <tazjin@tvl.su>
Reviewed-by: flokli <flokli@flokli.de>
Reviewed-by: sterni <sternenseemann@systemli.org>
Autosubmit: aspen <root@gws.fyi>
Tested-by: BuildkiteCI
|
|
A bunch of operations in Tvix are not aware of catchable values
and do not propagate them.
In the meantime, as we wait for a better solution, we just offer this
commit for moving the needle.
Change-Id: Ic3f0e1550126b0847b597dfc1402c35e0eeef469
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10473
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
|
|
In the previous reference-tracking system, the `tvix-io` glue appended
plain paths to the known-paths state.
Now we make up for this by making the coercion of file imports
contextful.
Change-Id: Ieb9b04dd83302c77909252d5f7733857ac3cf8fd
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10443
Tested-by: BuildkiteCI
Autosubmit: raitobezarius <tvl@lahfa.xyz>
Reviewed-by: tazjin <tazjin@tvl.su>
|
|
Otherwise, you just fail because they are not... contextless strings!
Change-Id: I0b8f63a18cd89c3841b613d41c12ec4ee336f953
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10442
Reviewed-by: tazjin <tazjin@tvl.su>
Autosubmit: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
|
|
I am still undecided whether we need a CoercionKind to control
the coerced context; here's a simple attempt.
Change-Id: Ibe59d09ef26c519a6acfdfe392014446646dd6d8
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10426
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: raitobezarius <tvl@lahfa.xyz>
|
|
By default we almost always want contextless strings; contextful strings are the explicit exception.
To this end, taking a contextful string is made a very explicit operation under `to_contextful_str`,
and the `to_str` cast is implemented manually, since it requires an `if !s.has_context()` guard that
the macro cannot cover.
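A toy sketch of the distinction (the types and signatures here are simplified
stand-ins, not the actual tvix API):

  // Simplified stand-ins for the real types.
  struct NixContext(Vec<String>);

  struct NixString {
      text: String,
      context: Option<NixContext>,
  }

  impl NixString {
      fn has_context(&self) -> bool {
          self.context.is_some()
      }

      // The default accessor: only contextless strings pass.
      fn to_str(&self) -> Result<&str, &'static str> {
          if self.has_context() {
              return Err("unexpected string context");
          }
          Ok(&self.text)
      }

      // The explicit opt-in for the rare places that want the context too.
      fn to_contextful_str(&self) -> (&str, Option<&NixContext>) {
          (&self.text, self.context.as_ref())
      }
  }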
Change-Id: I7aae8e57a7d73e547e62b1edb0b1cc7e8c0c69b6
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10425
Autosubmit: raitobezarius <tvl@lahfa.xyz>
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
|
|
This prepares the data structures to implement string contexts
in Nix.
Change-Id: Idd913c9c881daeb8d446907f4b940e462e730978
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10420
Autosubmit: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
|
|
Change-Id: I165ff77764e272cc94d18cb03ad6cbc9a8ebefde
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10348
Autosubmit: Adam Joseph <adam@westernsemico.com>
Reviewed-by: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
|
|
Change-Id: I92d58ef216d7e0766af70f019b3dcd445284a95d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10344
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
|
|
r/7176 introduced an incorrect assumption underpinning the nonrecursive
coercion algorithm, namely that a coercion operation always returns a
non-empty string. This supposedly allowed us to detect whether we are
coercing a list by checking whether the intermediate result is empty.
Unfortunately, coercing null and false yields an empty string,
so we need to explicitly track whether we are coercing a list.
Updated the test case to hopefully catch similar bugs in the future. I'm
not a hundred percent certain I have not introduced a new edge case with
this, so it may be interesting to add a prop test case for this to
nix_oracle down the line. At least lists are the only nested data
structures that can be coerced as nested data structures, so the
problem is fairly limited.
Change-Id: Ia41e904356f1c41a9d35e4e65ec02f2fe5a4100e
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10418
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Autosubmit: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
|
|
The default behavior of string coercion in C++ Nix is to weakly coerce
and import to store if necessary. There is a flag to make it strongly
coerce (coerceMore) and a flag that controls whether path values have
the corresponding file/directory imported into the store before
returning the (store) path as a string (copyToStore). We need to
implement our equivalent to the copyToStore (import_paths) flag for the
benefit of weak coercions that don't import into the store (dirOf,
baseNameOf, readFile, ...) and strong coercions that don't import into
the store (toString).
This makes coerce_to_string as well as CoercionKind weirder and more
versatile, but prevents us from reimplementing parts of the coercion
logic constantly as can be seen in the case of baseNameOf.
Note that it is not possible to test this properly in //tvix/eval tests
due to the lack of an appropriate EvalIO implementation being available.
Tests should be added to //tvix/glue down the line.
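One way to carry both knobs is a small struct along these lines (a sketch; the
field names are my assumption rather than the exact ones used):

  #[derive(Clone, Copy)]
  struct CoercionKind {
      // Coerce "more" types (lists, integers, ...), like C++ Nix's coerceMore.
      strong: bool,
      // Import path values into the store first, like C++ Nix's copyToStore.
      import_paths: bool,
  }

  // `toString` coerces strongly but must not import paths into the store,
  const TO_STRING: CoercionKind = CoercionKind { strong: true, import_paths: false };
  // while e.g. string interpolation coerces weakly and does import.
  const INTERPOLATION: CoercionKind = CoercionKind { strong: false, import_paths: true };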
Change-Id: I8fb8ab99c7fe08e311d2ba1c36960746bf22f566
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10361
Autosubmit: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
Reviewed-by: Adam Joseph <adam@westernsemico.com>
|
|
This commit adds Opcode::OpJumpIfCatchable, which can be inserted
ahead of most VM operations which expect a boolean on the stack, in
order to handle catchables in branching position properly.
Other than remembering to patch the jump, no other changes should be
required.
This commit also fixes b/343 by emitting this new opcode when
compiling if-then-else. There are probably other places where we
need to do the same thing.
Change-Id: I48de3010014c0bbeba15d34fc0d4800e0bb5a1ef
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10288
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
Autosubmit: Adam Joseph <adam@westernsemico.com>
|
|
After this commit, the only non-builtin uses of generators are:
- coerce_to_string() uses generators::request_enter_lambda()
- Thunk::force() uses generators::request_enter_lambda()
That's it! Once those two are taken care of, GenCo can become an
implementation detail of `builtins::BuiltinGen`. No more crazy
nonlocal flow control within the interpreter: if you've got a GenCo
floating around in your code it's because you're writing a builtin,
which isn't part of the core interpreter. The interpreter won't
need GenCos to talk to itself anymore.
Technically generators::request_path_import() is also used by
coerce_to_string(), but that's just because the io_handle happens to
be part of the VM. There's no recursion-depth issue there, so the
call doesn't need to go through the generator mechanism
(request_path_import() doesn't call back to the interpreter!).
Change-Id: I83ce5774d49b88fdafdd61160975b4937a435bb0
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10256
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: Adam Joseph <adam@westernsemico.com>
|
|
This commit implements deep_force() nonrecursively, by maintaining
an explicit stack rather than using the call stack for recursion.
As an added bonus, we don't need to pass around the SharedThunkSet
anymore, and can in fact completely eliminate SharedThunkSet.
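The underlying pattern, shown on toy types (not the real Value/Thunk
machinery): call-stack recursion is replaced by a worklist we push children
onto.

  // Toy types standing in for the real Value/Thunk machinery.
  enum Val {
      Int(i64),
      List(Vec<Val>),
  }

  // "Deep-visit" every value without recursing on the Rust call stack.
  fn deep_visit(root: &Val) -> usize {
      let mut stack = vec![root];
      let mut visited = 0;
      while let Some(v) = stack.pop() {
          visited += 1;
          if let Val::List(items) = v {
              // Instead of calling ourselves per item, queue the items.
              stack.extend(items.iter());
          }
      }
      visited
  }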
Change-Id: I7c4f59f37834d451a28bf6be317eb0a90eac4ee6
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10252
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
Autosubmit: Adam Joseph <adam@westernsemico.com>
|
|
This commit fixes b/338 by properly propagating catchables through
comparison operations.
Change-Id: I6b0283a40f228ecf9a6398d24c060bdacb1077cf
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10221
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: Adam Joseph <adam@westernsemico.com>
|
|
This commit rewrites Value::nix_cmp_ordering() into an equivalent
nonrecursive form. Except for calls to Thunk::force(), the new form
no longer uses generators, and is async only because it calls
Thunk::force().
I originally believed that this commit would make evaluation faster.
In fact it is slightly slower. I believe this is due to the added
vec![] allocation. I am investigating.
Prev-Nixpkgs-Benchmark: {"attrpath":"pkgsCross.aarch64-multiplatform.hello.outPath","peak-kbytes":"460048","system-seconds":"0.68","user-seconds":"5.73"}
This-Nixpkgs-Benchmark: {"attrpath":"pkgsCross.aarch64-multiplatform.hello.outPath","peak-kbytes":"460224","system-seconds":"0.67","user-seconds":"5.84"}
Change-Id: Ic627bc220d9c5aa3c5e68b9b8bf199837cd55af5
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10212
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: Adam Joseph <adam@westernsemico.com>
|
|
This is part of a fix for b/338.
We should never use PartialOrd::partial_cmp().
All Nix types except floats are obviously totally ordered. In
addition, it turns out that because Nix treats division by zero as an
error rather than producing a NaN, and because it does not support
"negative zero", even floats are in fact totally ordered in Nix.
Therefore, every call to PartialOrd::partial_cmp() in tvix is an
error. We have to *implement* this function, but we should never
call it on built-in types.
Moreover, nix_cmp_ordering() currently returns an Option<Ordering>.
I'm not sure what was going on there, since it's impossible for it
to return None. This commit fixes it to return simply Ordering
rather than Option<Ordering>.
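A sketch of the consequence for the comparison helper (names illustrative):
given the reasoning above, unwrapping partial_cmp is safe and the function can
return a plain Ordering.

  use std::cmp::Ordering;

  // Nix has no NaN (division by zero is an error) and no negative zero,
  // so float comparison is total and the Option can be unwrapped safely.
  fn nix_cmp_float(a: f64, b: f64) -> Ordering {
      a.partial_cmp(&b)
          .expect("NaN cannot appear in a Nix value")
  }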
Change-Id: If5c084164cf19cfb38c5a15554c0422faa5f895d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10218
Autosubmit: Adam Joseph <adam@westernsemico.com>
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
|
|
This commit rewrites Value::nix_eq() into an equivalent nonrecursive
form. Except for calls to Thunk::force(), the new form no longer uses
generators, and is async only because it calls Thunk::force().
I believed that the nonrecursive form would be faster. It is, in
fact, slightly slower. I believe this is due to the vec![]
allocation; I am investigating.
Prev-Nixpkgs-Benchmark: {"attrpath":"pkgsCross.aarch64-multiplatform.hello.outPath","peak-kbytes":"459068","system-seconds":"0.71","user-seconds":"5.39"}
This-Nixpkgs-Benchmark: {"attrpath":"pkgsCross.aarch64-multiplatform.hello.outPath","peak-kbytes":"460048","system-seconds":"0.68","user-seconds":"5.73"}
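The shape of the nonrecursive comparison, on toy types: a work stack of pairs
replaces recursive calls (the up-front vec![] is the allocation mentioned
above).

  // Toy stand-ins for the real Value type.
  enum Val {
      Int(i64),
      List(Vec<Val>),
  }

  fn deep_eq(a: &Val, b: &Val) -> bool {
      let mut work = vec![(a, b)];
      while let Some((x, y)) = work.pop() {
          match (x, y) {
              (Val::Int(i), Val::Int(j)) => {
                  if i != j {
                      return false;
                  }
              }
              (Val::List(xs), Val::List(ys)) => {
                  if xs.len() != ys.len() {
                      return false;
                  }
                  // Compare element pairs later instead of recursing now.
                  work.extend(xs.iter().zip(ys.iter()));
              }
              _ => return false,
          }
      }
      true
  }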
Change-Id: I10f4868891e4b7475df13f0cbc41ec78dd985dd8
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10118
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: Adam Joseph <adam@westernsemico.com>
|
|
Change-Id: Ic2bd4e8291b30ceac9fa0e88a4f56e61ae99b603
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10227
Reviewed-by: tazjin <tazjin@tvl.su>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
|
|
This commit rewrites Thunk::force() so that it is not (directly)
self-recursive. It maintains a Vec of all the
previously-encountered thunks which point to the one it is currently
forcing, rather than recursively calling itself.
Benefits:
- Short term:
This commit saves the cost of a round-trip through the generator
machinery for the generators::request_force() which is removed by
this commit.
- Medium term:
Once a similar transformation has been applied to nix_cmp(),
nix_add(), nix_eq(), and coerce_to_string(), those four functions,
along with Thunk::force(), will make non-tail calls only to each
other. They can then be merged into a single tail-recursive
function which does not use the generator machinery at all:
enum Task { Cmp, Add, Eq, CoerceToString, Force }
impl Value {
    fn walk(task: Task, v1: Value, v2: Value) { /* ... */ }
}
- Long term:
The long-term goal here is to use generators **only for builtins**
and [Marionette]-style remote control of the VM. In other words:
use `async` for things that actually involve concurrency. Calls
from the VM to builtins can then be blocking calls, because even
cppnix will overflow the stack if you make a MAX_STACK_DEPTH-deep
recursive call which passes through a builtin at every stack frame
(e.g. `{ func = builtins.sort (a: b: ... func ...) ...}`).
This way the inner "tight loop" of the interpreter doesn't pay the
costs of `async` and generators. These costs manifest in terms
of: performance, complex nonlocal control flow, and language
impediments (async Rust is a restricted subset of real Rust, and
is missing things like traits).
[Marionette]: https://firefox-source-docs.mozilla.org/testing/marionette/Intro.html
Change-Id: I6179b8abb2ea0492180fcb347f37595a14665777
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10039
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
|
|
For cases where clippy lints don't apply to us, or something is
misfiring, add appropriate configuration.
Relates to b/321.
Change-Id: I0af453910b4a4112bf685b2a8e9a73de10ec87ea
Reviewed-on: https://cl.tvl.fyi/c/depot/+/9965
Reviewed-by: flokli <flokli@flokli.de>
Autosubmit: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
|
|
Relates to b/321.
Change-Id: I37284f89b186e469eb432e2bbedb37aa125a6ad4
Reviewed-on: https://cl.tvl.fyi/c/depot/+/9961
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
Autosubmit: tazjin <tazjin@tvl.su>
|
|
Delays allocation (through cloning) of the values to be compared
until *after* the keys have been compared.
Change-Id: I7d68c27d7a0fbcdcc387db7c092bce50ca4b94ea
Reviewed-on: https://cl.tvl.fyi/c/depot/+/9900
Autosubmit: tazjin <tazjin@tvl.su>
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
|
|
This commit makes catchable errors a variant of Value.
The main downside of this approach is that we lose the ability to
use Rust's `?` syntax for propagating catchable errors.
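A toy illustration of the trade-off (types are hypothetical): because a
catchable is an ordinary Value variant rather than an Err, every operation has
to forward it by hand.

  // Hypothetical, simplified types.
  enum Catchable {
      Throw(String),
      AssertionFailed,
  }

  enum Value {
      Int(i64),
      Catchable(Catchable),
  }

  fn add_one(v: Value) -> Value {
      match v {
          // Not an Err, so `?` cannot forward it; propagate explicitly.
          Value::Catchable(c) => Value::Catchable(c),
          Value::Int(i) => Value::Int(i + 1),
      }
  }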
Change-Id: Ibe89438d8a70dcec29e016df692b5bf88a5cad13
Reviewed-on: https://cl.tvl.fyi/c/depot/+/9289
Reviewed-by: tazjin <tazjin@tvl.su>
Autosubmit: Adam Joseph <adam@westernsemico.com>
Tested-by: BuildkiteCI
|
|
Change-Id: I0b2fdb418c2e36280d5c551a81634e1742193903
Reviewed-on: https://cl.tvl.fyi/c/depot/+/9105
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
|
|
When dealing with a formal argument in a function argument pattern that
has a default expression, there are two different things that can happen
at runtime: Either we select its value from the passed attribute
successfully or we need to use the default expression. Both of these may
be thunks and both of these may need finalisers. However, in the former
case this is taken care of elsewhere, the value will always be finalised
already if necessary. In the latter case we may need to finalise the
thunk resulting from the default expression. However, the thunk
corresponding to the expression may never end up in the local's stack
slot. Since finalisation goes by stack slot (and not constants), we need
to prevent a case where we don't fall back to the default expression,
but finalise anyways.
Previously, we worked around this by making `OpFinalise` ignore
non-thunks. Since finalisation of already evaluated thunks still
crashed, the faulty compilation of function pattern arguments could
still cause a crash.
As a new approach, we reinstate the old behavior of `OpFinalise` to
crash whenever encountering something that is either not a thunk or
doesn't need finalisation. This can also help catching (similar)
miscompilations in the future. To then prevent the crash, we need to
track whether we have fallen back or not at runtime. This is done using
an additional phantom on the stack that holds a new `FinaliseRequest`
value. When it comes to finalisation we check this value and
conditionally execute `OpFinalise` based on its value.
Resolves b/261 and b/265 (partially).
Change-Id: Ic04fb80ec671a2ba11fa645090769c335fb7f58b
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8705
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: sterni <sternenseemann@systemli.org>
|
|
In order for our current test suite to be comparable to C++ Nix, we need
to display values in the same way. This was largely already true,
except for a few corner cases.
* <CODE> for thunks and <CYCLE> for repeated thunks (?) are already in
use. <CODE> formatting is tested by the oracle test suite already.
* Instead of lambda, we need to use <LAMBDA>
* <<primop>> and <<primop-app>> (a formatting C++ Nix uses nowhere)
now are <PRIMOP> and <PRIMOP-APP>.
We'll probably want to have a fancier display of values (in a separate
trait) down the line. This could be used for interactive usage, e.g. the
REPL or a potential debugger.
There is a peculiarity with C++ Nix 2.3 formatting primops: import is
considered a <<PRIMOP-APP>>, since it is internally implemented by means
of scopedImport. This implementation detail no longer leaks in C++ Nix
2.13 nor in Tvix.
<CYCLE> display is untested at the moment, since we exhibit a
discrepancy to C++ Nix 2.3. Our current detection is more similar to C++
Nix 2.13—luckily it is also the more consistent of the two. See also
b/245.
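For illustration, the display strings listed above boil down to a Display impl
along these lines (toy enum, not the real Value type):

  use std::fmt;

  enum Val {
      Int(i64),
      Thunk,  // unevaluated thunk
      Lambda,
      Primop,
      PrimopApp,
  }

  impl fmt::Display for Val {
      fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
          match self {
              Val::Int(i) => write!(f, "{}", i),
              Val::Thunk => write!(f, "<CODE>"),
              Val::Lambda => write!(f, "<LAMBDA>"),
              Val::Primop => write!(f, "<PRIMOP>"),
              Val::PrimopApp => write!(f, "<PRIMOP-APP>"),
          }
      }
  }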
Change-Id: I1d534434b02e470bf5475b3758920ea81e3420dc
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8760
Reviewed-by: tazjin <tazjin@tvl.su>
Autosubmit: sterni <sternenseemann@systemli.org>
Tested-by: BuildkiteCI
|
|
C++ Nix forces and typechecks the passed argument even if it is not
necessary in order to compute the return value of the function. I
discovered this when I thought our formals miscompilation might be that
we are too strict, but it doesn't look like that's the case here.
Change-Id: Ifb3c92592293052c489d1e3ae8c7c54e4b6b4dc6
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8701
Tested-by: BuildkiteCI
Autosubmit: sterni <sternenseemann@systemli.org>
Reviewed-by: tazjin <tazjin@tvl.su>
|
|
This is step 1 towards being able to use all 4 spans that we know when
dealing with infinite recursion. It tracks the span at which the
force of a thunk was first requested when constructing a blackhole, so
that we can highlight the spans of the first and second forces.
These are actually the least relevant spans, but the easiest to put in
place, more coming soon.
Change-Id: I4c7e82f6211b98756439d4148a4191457cc46807
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8269
Autosubmit: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
|
|
Given Rust's current lack of support for tail calls, we cannot avoid
using `async` for builtins. This is the only way to avoid
overflowing the cpu stack when we have arbitrarily deep
builtin/interpreted/builtin/interpreted/... "sandwiches".
There are only five `async fn` functions which are not builtins
(some come in multiple "flavors"):
- add_values
- resolve_with
- force, final_deep_force
- nix_eq, nix_cmp_eq
- coerce_to_string
These can be written iteratively rather than recursively (and in
fact nix_eq used to be written that way!). I volunteer to rewrite
them. If written iteratively they would no longer need to be
`async`.
There are two motivations for limiting our reliance on `async` to
only the situation (builtins) where we have no other choice:
1. Performance.
We don't really have any good measurement of the performance hit
that the Box<dyn Future>s impose on us. Right now all of our
large (nixpkgs-eval) tests are swamped by the cost of other
things (e.g. fork()ing `nix-store`) so we can't really measure
it. Builtins tend to be expensive operations anyways
(regexp-matching, sorting, etc) that are likely to already cost
more than the `async` overhead.
2. Preserving the ability to switch to `musttail` calls.
Clang/LLVM recently got `musttail` (mandatory-elimination tail
calls). Rust has refused to add this mainly because WASM doesn't
support it, but WASM `tail_call` has been implemented and was
recently moved to phase 4 (standardization). It is very likely
that Rust will get tail calls sometime in the next year; if it
does, we won't need async anymore. In the meantime, I'd like to
avoid adding any further reliance on `async` in places where it
wouldn't be straightforward to replace it with a tail call.
https://reviews.llvm.org/D99517
https://github.com/WebAssembly/proposals/pull/157
https://github.com/rust-lang/rfcs/issues/2691#issuecomment-1462152908
Change-Id: Id15945d5a92bf52c16d93456e3437f91d93bdc57
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8290
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: Adam Joseph <adam@westernsemico.com>
|
|
This rewrites nix_cmp_ordering as an iterative loop, which
eliminates the extra pinned-boxing helper function.
Change-Id: I33d0ecc913e02affd8fd4c7bc1c9ecfdf4c7deb9
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8288
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
Autosubmit: Adam Joseph <adam@westernsemico.com>
|
|
This drops the usage of serde::Serialize, as the trait can not be used
to implement the correct semantics (function colouring!).
Instead, a manual JSON serialisation function is written which
correctly handles toString, outPath and other similar weirdnesses.
Unexpectedly, the eval-okay-tojson test from the C++ Nix test suite
now passes, too.
This fixes an issue where serialising data structures containing
derivations to JSON would fail.
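The kind of special-casing involved, sketched on toy types (the real
implementation also has to force thunks between steps, which is exactly why
serde::Serialize's synchronous trait methods don't fit):

  use std::collections::BTreeMap;

  // Toy value type; the real one also has thunks, contexts, etc.
  enum Val {
      Str(String),
      Attrs(BTreeMap<String, Val>),
  }

  fn to_json(v: &Val) -> serde_json::Value {
      match v {
          Val::Str(s) => serde_json::Value::String(s.clone()),
          // An attribute set with an outPath serialises as that path,
          // which is how derivations end up as store path strings.
          Val::Attrs(attrs) => match attrs.get("outPath") {
              Some(out) => to_json(out),
              None => serde_json::Value::Object(
                  attrs.iter().map(|(k, v)| (k.clone(), to_json(v))).collect(),
              ),
          },
      }
  }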
Change-Id: I5c39e3d8356ee93a07eda481410f88610f6dd9f8
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8209
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
|
|
It turns out that this is used not just in coerceToString, but also in
toJSON.
Change-Id: I1c324b115a0b8bb6d83446d5bf70453c9b90685e
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8203
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
|
|
This shaves another 8 bytes off Value. How did that type get so big?!
Change-Id: I65e9b59a1636bd57e3cc4aec5fea16887070b832
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8153
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
|