Have ingest_entries return an Error type with only three kinds:
- Error while uploading a specific Directory
- Error while finalizing the directory upload
- Error from the producer
Move all ingestion method-specific errors to the individual
implementations.
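Roughly, the resulting error type could look something like this (a hedged sketch; variant names and payload types are illustrative, not copied from the actual change):

  use std::path::PathBuf;
  use thiserror::Error;

  // Sketch only: the real type lives in castore/import and differs in
  // names and payloads.
  #[derive(Debug, Error)]
  pub enum IngestionError<E: std::error::Error> {
      #[error("failed to upload directory at {0}: {1}")]
      UploadDirectoryError(PathBuf, String),

      #[error("failed to finalize the directory upload: {0}")]
      FinalizeDirectoryUpload(String),

      #[error("error from the producer: {0}")]
      Producer(E),
  }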
Change-Id: I2a015cb7ebc96d084cbe2b809f40d1b53a15daf3
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11557
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
|
|
Rather than carrying around a Future in IngestionEntry::Regular,
simply carry the plain B3Digest.
Code reading through a non-seekable data stream has no choice but to
read and upload blobs immediately, while code walking something
seekable (like a filesystem) knows better than the consuming side what
concurrency to pick when ingesting.
The only such seekable source implementation we have now does exactly
that: we produce a stream of futures, and then use
[StreamExt::buffered] to process more than one of them concurrently.
We still keep the same order, so entries in the stream don't get
shuffled.
This also cleans up walk_path_for_ingestion in castore/import, as well
as ingest_dir_entries in glue/tvix_store_io.
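For illustration, the buffered pattern looks roughly like this (a sketch under assumed names; upload_blob and CONCURRENCY are stand-ins, not the real identifiers):

  use futures::stream::{self, StreamExt};
  use std::path::PathBuf;

  const CONCURRENCY: usize = 50; // illustrative value

  // Stand-in for "read the blob and upload it to the blob service",
  // returning its BLAKE3 digest.
  async fn upload_blob(path: PathBuf) -> std::io::Result<[u8; 32]> {
      let bytes = tokio::fs::read(&path).await?;
      Ok(*blake3::hash(&bytes).as_bytes())
  }

  async fn ingest(paths: Vec<PathBuf>) -> std::io::Result<Vec<[u8; 32]>> {
      stream::iter(paths)
          .map(upload_blob)        // a stream of futures
          .buffered(CONCURRENCY)   // run several at once, results stay in input order
          .collect::<Vec<_>>()
          .await
          .into_iter()
          .collect()               // Vec<Result<_, _>> -> Result<Vec<_>, _>
  }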
Change-Id: I5eb70f3e1e372c74bcbfcf6b6e2653eba36e151d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11491
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
|
|
Implement a first pass at the fetchTarball builtin.
This uses much of the same machinery as fetchUrl, but has the extra
complexity that tarballs have to be extracted and imported as store
paths (into the directory- and blob-services) before hashing. That's
reasonably involved due to the structure of those two services.
This is (unfortunately) not easy to test in an automated way, but I've
tested it manually for now and it seems to work:
tvix-repl> (import ../. {}).third_party.nixpkgs.hello.outPath
=> "/nix/store/dbghhbq1x39yxgkv3vkgfwbxrmw9nfzi-hello-2.12.1" :: string
Co-authored-by: Connor Brewster <cbrewster@hey.com>
Change-Id: I57afc6b91bad617a608a35bb357861e782a864c8
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11020
Autosubmit: aspen <root@gws.fyi>
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
|
|
Previously the store ingestion code was coupled to `walkdir::DirEntry`s
produced by the `walkdir` crate which made it impossible to reuse
ingesting from other sources like tarballs or NARs.
This introduces an `IngestionEntry` which carries enough information
for store ingestion and a future for computing the Blake3 digest of
files. This allows the producer to perform file uploads in a way that
makes sense for the source, i.e. the filesystem ingestor can upload
multiple files concurrently, while the NAR ingestor needs to ingest
the entire blob before yielding the next blob in the stream.
In the future we can buffer small blobs and upload them concurrently,
but the full blob still needs to be read from the NAR before advancing.
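As a hedged sketch, such an entry could be shaped roughly like this (names and fields are illustrative, not the actual definition):

  use futures::future::BoxFuture;
  use std::path::PathBuf;

  // Sketch only; the real IngestionEntry differs.
  pub enum IngestionEntry<'a> {
      Regular {
          path: PathBuf,
          size: u64,
          executable: bool,
          // The producer decides how and when the blob gets uploaded;
          // the consumer only awaits the resulting BLAKE3 digest.
          digest: BoxFuture<'a, std::io::Result<[u8; 32]>>,
      },
      Symlink { path: PathBuf, target: PathBuf },
      Dir { path: PathBuf },
  }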
Change-Id: I6d144063e2ba5b05e765bac1f27d41b3c8e7b283
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11462
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
|
|
Align these names and comments with the two users, to make it more
obvious that we're doing the same thing in both places, just using a
different method to come up with entries_per_depths.
Change-Id: I42058e397588b6b57a6299e87183bef27588b228
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11415
Tested-by: BuildkiteCI
Autosubmit: flokli <flokli@flokli.de>
Reviewed-by: Connor Brewster <cbrewster@hey.com>
|
|
Right now `builtins.hashFile` always reads the entire file into memory
before hashing, which is not ideal for large files. This replaces
`read_to_string` with `open_file` which allows calculating the hash of
the file without buffering it entirely into memory. Other callers can
continue to buffer into memory if they choose, but they still use the
`open_file` VM request and then call `read_to_string` or `read_to_end`
on the returned reader (`std::io::Read`).
Fixes b/380
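Outside the VM request machinery, the streaming idea boils down to something like this (a plain-std sketch, not the tvix code; hash_file is a stand-in name):

  use sha2::{Digest, Sha256};
  use std::fs::File;
  use std::io::{self, BufReader};
  use std::path::Path;

  // Hash a file without reading it into memory first: copy it through
  // the hasher in fixed-size chunks.
  fn hash_file(path: &Path) -> io::Result<Vec<u8>> {
      let file = File::open(path)?;        // the "open_file" step
      let mut reader = BufReader::new(file);
      let mut hasher = Sha256::new();
      io::copy(&mut reader, &mut hasher)?; // streams chunk by chunk
      Ok(hasher.finalize().to_vec())
  }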
Change-Id: Ifa1c8324bcee8f751604b0b449feab875c632fda
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11236
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
|
|
Fixes b/392.
Output paths were created that depend on a plain store path, but no
context string was attached to track that dependency.
The context string propagation tests are strengthened to prevent any
regression on this.
Change-Id: Ifd6671aeba6949324b0bb9f0f766b87db728d484
Signed-off-by: Ryan Lahfa <tvl@lahfa.xyz>
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11351
Reviewed-by: Alyssa Ross <hi@alyssa.is>
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
|
|
It now supports almost everything except `recursive = false;`, i.e.
flat ingestion, because no knob for that is exposed on the tvix-store
import side yet.
This has been tested to work.
Change-Id: I2e9da10ceccdfbf45b43c532077ed45d6306aa98
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10597
Tested-by: BuildkiteCI
Autosubmit: raitobezarius <tvl@lahfa.xyz>
Reviewed-by: flokli <flokli@flokli.de>
|
|
Instead of enforcing a NAR SHA-256 hash all the time, we generalize
the `PathInfo` constructor to take a `CAHash` argument, which drives
whether the flat, NAR or text scheme is used.
With this, it is now possible to implement flat schemes in our
evaluation builtins, e.g. `builtins.path`.
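Schematically (a sketch with stand-in names and payload types, not the actual tvix-store definitions):

  // Which content-addressing scheme a path uses, and the corresponding hash.
  pub enum CAHash {
      Flat(Vec<u8>), // hash over the flat file contents
      Nar(Vec<u8>),  // hash over the NAR serialization
      Text(Vec<u8>), // hash over text contents
  }

  pub struct Node; // stand-in for a castore root node

  pub struct PathInfo {
      pub node: Node,
      pub ca: CAHash,
  }

  impl PathInfo {
      // Callers pass the CAHash and pick the scheme themselves, instead
      // of a NAR SHA-256 being assumed.
      pub fn new(node: Node, ca: CAHash) -> Self {
          PathInfo { node, ca }
      }
  }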
Change-Id: I15bfee0ef4f0f428bfbd2f30c57c012cdcf6a976
Signed-off-by: Ryan Lahfa <tvl@lahfa.xyz>
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11286
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
|
|
Make this function async, and do the block_on at the (single) callsite.
Change-Id: Ib8b0b54ab5370fe02ef95f38a45d8866868a9d60
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11285
Reviewed-by: Connor Brewster <cbrewster@hey.com>
Tested-by: BuildkiteCI
|
|
Replace the (single) callsite with code that interacts with the tokio
runtime to block on the async version.
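One common shape for such a bridge with tokio (illustrative only; do_it_async and do_it_sync are stand-ins for the actual functions):

  use tokio::runtime::Handle;

  async fn do_it_async() -> std::io::Result<()> {
      Ok(())
  }

  fn do_it_sync(handle: &Handle) -> std::io::Result<()> {
      // block_in_place keeps a multi-threaded runtime responsive while
      // this worker thread blocks on the async call.
      tokio::task::block_in_place(|| handle.block_on(do_it_async()))
  }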
Change-Id: I3976496ae77b2bb8734603f303655834265e3f0a
Reviewed-on: https://cl.tvl.fyi/c/depot/+/11284
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
|
|
Previously, Nix strings were represented as a Box (within Value)
pointing to a tuple of an optional context, and another Box pointing to
the actual string allocation itself. This is pretty inefficient, both in
terms of memory usage (we use 48 whole bytes for a None context!) and in
terms of the extra indirection required to get at the actual data. It
was necessary, however, because with native Rust DSTs if we had
something like `struct NixString(Option<NixContext>, BStr)` we could
only pass around *fat* pointers to that value (with the length in the
pointer) and that'd make Value need to be bigger (which is a waste of
both memory and cache space, since that memory would be unused for all
other Values).
Instead, this commit implements *manual* allocation of a packed string
representation, with the length *in the allocation* as a field past the
context. This requires a big old pile of unsafe Rust, but the payoff is
clear:
hello outpath time: [882.18 ms 897.16 ms 911.23 ms]
change: [-15.143% -13.819% -12.500%] (p = 0.00 < 0.05)
Performance has improved.
Fortunately this change can be localized entirely within
value/string.rs, since we were abstracting things out nicely.
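For readers who want the gist without digging into value/string.rs, a greatly simplified sketch of the idea (one allocation, header followed by the bytes, addressed through a thin pointer; names and details here are illustrative, not the real NixString):

  use std::alloc::{alloc, dealloc, Layout};
  use std::mem::size_of;
  use std::ptr::NonNull;

  #[repr(C)]
  struct Header {
      context: Option<Box<Vec<String>>>, // stand-in for NixContext
      len: usize,                        // length of the bytes that follow
  }

  struct PackedString(NonNull<Header>); // thin pointer: one machine word

  impl PackedString {
      fn layout(len: usize) -> Layout {
          // Header, then `len` payload bytes (align 1, so no padding).
          Layout::new::<Header>()
              .extend(Layout::array::<u8>(len).unwrap())
              .unwrap()
              .0
              .pad_to_align()
      }

      fn new(context: Option<Box<Vec<String>>>, data: &[u8]) -> Self {
          unsafe {
              let ptr = alloc(Self::layout(data.len())) as *mut Header;
              let ptr = NonNull::new(ptr).expect("allocation failed");
              ptr.as_ptr().write(Header { context, len: data.len() });
              // The payload starts right after the header.
              let payload = (ptr.as_ptr() as *mut u8).add(size_of::<Header>());
              std::ptr::copy_nonoverlapping(data.as_ptr(), payload, data.len());
              PackedString(ptr)
          }
      }

      fn as_bytes(&self) -> &[u8] {
          unsafe {
              let payload = (self.0.as_ptr() as *const u8).add(size_of::<Header>());
              std::slice::from_raw_parts(payload, self.0.as_ref().len)
          }
      }
  }

  impl Drop for PackedString {
      fn drop(&mut self) {
          unsafe {
              let len = self.0.as_ref().len;
              std::ptr::drop_in_place(self.0.as_ptr()); // drops the context
              dealloc(self.0.as_ptr() as *mut u8, Self::layout(len));
          }
      }
  }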
Change-Id: Ibf56dd16c9c503884f64facbb7f0ac596463efb6
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10852
Tested-by: BuildkiteCI
Reviewed-by: raitobezarius <tvl@lahfa.xyz>
Autosubmit: aspen <root@gws.fyi>
|
|
This reverts commit d9565a4d0af3bffd735a77aa6f1fd0ec0e03b14a.
Reason for revert: this was intentional. Writing Rc::clone instead of
.clone() is a common Rust idiom, and it makes explicit that we're
cloning a shared reference, not the underlying resource.
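For context, the idiom in question (a tiny standalone example):

  use std::rc::Rc;

  fn main() {
      let s = Rc::new(String::from("shared"));

      // Both lines do the same thing, but the first makes it explicit at
      // the callsite that we only bump the reference count; the String
      // itself is not cloned.
      let _a = Rc::clone(&s);
      let _b = s.clone();
  }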
Change-Id: I41a5f323ee35d7025dc7bb02f7d5d05d0051798d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10995
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
|
|
We add a new set of builtins called `import_builtins`, which
will contain import-related builtins, such as `builtins.path` and
`builtins.filterSource`. Both can import paths into the store, with
various knobs to alter the result, e.g. filtering, renaming, expected
hashes.
We introduce `filtered_ingest`, which drives the filtered ingestion,
calling back into the Nix filter function via the generator machinery;
we then register the root node with the path info service inside the
store.
`builtins.filterSource` is very simple; `builtins.path` needs the same
logic but is more sophisticated, adding name customization, the file
ingestion method and an expected SHA-256 hash.
Change-Id: I1083f37808b35f7b37818c8ffb9543d9682b2de2
Reviewed-on: https://cl.tvl.fyi/c/depot/+/10654
Autosubmit: raitobezarius <tvl@lahfa.xyz>
Tested-by: BuildkiteCI
Reviewed-by: flokli <flokli@flokli.de>
|