This splits the pure content-addressed layers from tvix-store into a
`castore` crate, leaving only PathInfo-related things, as well as the
CLI entrypoint, in the tvix-store crate.
Notable changes:
- `fixtures` and `utils` had to be moved out of the `test` cfg, so they
can be imported from tvix-store.
- Some ad-hoc fixtures in the tests were moved to proper fixtures in
the same step.
- The protos are now created by a (more static) recipe in the protos/
directory.
The (now two) golang targets are commented out, as it's not possible to
update them properly in the same CL. This will be done in a follow-up
CL once this is merged (and whitby is deployed).
Bug: https://b.tvl.fyi/issues/301
Change-Id: I8d675d4bf1fb697eb7d479747c1b1e3635718107
Reviewed-on: https://cl.tvl.fyi/c/depot/+/9370
Reviewed-by: tazjin <tazjin@tvl.su>
Reviewed-by: flokli <flokli@flokli.de>
Autosubmit: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: Connor Brewster <cbrewster@hey.com>
---
This wasn't removed yet, and no code is using or populating it so far.
It's confusing; let's update it to the current state of things, and
re-introduce it once we get there.
Change-Id: I68f5ba17a8eee604d8ccd82749da7c8be094cb99
Reviewed-on: https://cl.tvl.fyi/c/depot/+/9351
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
---
Whether chunking is involved or not is an implementation detail of each
BlobStore. Consumers of a whole blob shouldn't need to worry about that.
It currently isn't visible in the gRPC interface either, and it
shouldn't bleed into everything.
Let the BlobService trait provide `open_read` and `open_write` methods,
which return handles providing io::Read or io::Write, and leave the
details up to the implementation.
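As a rough sketch (trait shape, digest type and names here are
assumptions for illustration, not the exact tvix-store API), this looks
something like:

    use std::io;

    // Sketch of a BlobService handing out reader/writer handles. How
    // (and whether) data is chunked internally stays invisible here.
    pub trait BlobService {
        type BlobReader: io::Read + Send;
        type BlobWriter: io::Write + Send;

        // Open the blob with the given digest for reading; None if it
        // doesn't exist.
        fn open_read(&self, digest: &[u8; 32]) -> io::Result<Option<Self::BlobReader>>;

        // Open a fresh blob for writing; the digest is only known once
        // the writer is finalized by the implementation.
        fn open_write(&self) -> io::Result<Self::BlobWriter>;
    }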
This means our custom BlobReader module can go away, along with all the
chunking bits in there.
In the future, we might still want to add more chunking-aware syncing,
but as a syncing strategy that some stores can expose, not as a
fundamental protocol component.
This currently needs "SyncReadIntoAsyncRead", taken and vendored in from
https://github.com/tokio-rs/tokio/pull/5669.
It provides an AsyncRead for a sync Read, which is necessary to connect
our (sync) BlobReader interface to a gRPC server implementation.
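To illustrate the bridging problem (a hand-rolled sketch of the same
pattern, not the vendored code; BlobChunk stands in for the
proto-generated type):

    use std::io::Read;
    use tokio::sync::mpsc;
    use tokio_stream::wrappers::ReceiverStream;

    // Stand-in for the proto-generated message type.
    struct BlobChunk {
        data: Vec<u8>,
    }

    // Drain a sync reader on a blocking task and forward the bytes as
    // chunks through a channel; the gRPC handler can return the
    // receiving end as its response stream.
    fn stream_blob<R: Read + Send + 'static>(
        mut r: R,
    ) -> ReceiverStream<std::io::Result<BlobChunk>> {
        let (tx, rx) = mpsc::channel(16);
        tokio::task::spawn_blocking(move || {
            let mut buf = [0u8; 64 * 1024];
            loop {
                match r.read(&mut buf) {
                    Ok(0) => break, // EOF
                    Ok(n) => {
                        let chunk = BlobChunk { data: buf[..n].to_vec() };
                        // Stop early if the client went away.
                        if tx.blocking_send(Ok(chunk)).is_err() {
                            break;
                        }
                    }
                    Err(e) => {
                        let _ = tx.blocking_send(Err(e));
                        break;
                    }
                }
            }
        });
        ReceiverStream::new(rx)
    }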
As an alternative, we could also make the BlobReader itself async, and
let consumers of the trait (EvalIO) deal with the async-ness, but this
is less of a change for now.
In terms of vendoring, I initially tried to move our tokio crate to
these commits, but ran into version incompatibilities, so let's vendor
it in for now.
Change-Id: I5969ebbc4c0e1ceece47981be3b9e7cfb3f59ad0
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8551
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
---
Further emphasize that Read() can be used to ask for blobs OR chunks,
and that clients usually want to stat and then request (smaller)
chunks, rather than reading whole blobs.
Also clarify that the chunking used to send BlobChunks over the wire
has nothing to do with the chunk sizes communicated in the response to
a Stat() call.
Change-Id: Ia615d190aae570611de2655b11342a14d0b75976
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8028
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
---
Stat exposes metadata about a given blob, such as more granular
chunking, or baos.
It implicitly allows checking for existence, too, as asking this for a
non-existing Blob will return a Status::not_found gRPC error.
The previous version returned a Status::not_found error on the Get
request too, but there was no way to prevent the server from starting
to stream (except sending an immediate cancellation).
Being able to check whether something already exists in a BlobStore
helps to avoid uploading it in the first place.
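The existence check then is just a Stat call whose NotFound outcome is
interpreted as "missing" (a sketch with a stubbed-out RPC; the real
request/response types differ):

    use tonic::{Code, Status};

    // Stub standing in for the generated client's Stat call.
    async fn stat(_digest: &[u8; 32]) -> Result<(), Status> {
        // ... issue the actual Stat RPC here ...
        Err(Status::not_found("no such blob"))
    }

    // NotFound means the blob is missing and still needs to be
    // uploaded; any other error is propagated.
    async fn blob_exists(digest: &[u8; 32]) -> Result<bool, Status> {
        match stat(digest).await {
            Ok(()) => Ok(true),
            Err(s) if s.code() == Code::NotFound => Ok(false),
            Err(s) => Err(s),
        }
    }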
The granular chunking bits are an optional optimization: if the
BlobStore doesn't implement more granular chunking, the Stat response
can simply contain a single chunk.
Read returns a stream of BlobChunk messages, which is effectively just
a stream of bytes, not necessarily using the chunking that's returned
in the reply to a Stat() call. It can be used to read blobs or chunks.
Change-Id: I4b6030ef184ace5484c84ca273b49d710433731d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/7652
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
---
This changes the RPC methods to return/consume a stream of chunks,
instead of one very big message containing the whole blob, to keep
messages at a manageable size (less than 4MiB).
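The framing itself is straightforward; roughly (BlobChunk stands in
for the generated type, and the 1MiB size is illustrative):

    // Stand-in for the proto-generated message type.
    struct BlobChunk {
        data: Vec<u8>,
    }

    // 1MiB per message stays well below the common 4MiB gRPC message
    // size limit.
    const CHUNK_SIZE: usize = 1024 * 1024;

    // Split a blob into BlobChunk messages for streaming.
    fn to_chunks(blob: &[u8]) -> impl Iterator<Item = BlobChunk> + '_ {
        blob.chunks(CHUNK_SIZE).map(|c| BlobChunk { data: c.to_vec() })
    }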
Change-Id: I2a3a50f07b059d8a2f5196860254adff98c8a352
Reviewed-on: https://cl.tvl.fyi/c/depot/+/7651
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
---
Change-Id: I0898b8a0a78e704219da38e5acaabef1e640d4e4
Reviewed-on: https://cl.tvl.fyi/c/depot/+/7321
Reviewed-by: Adam Joseph <adam@westernsemico.com>
Tested-by: BuildkiteCI
---
This defines a service that can be used to get and put content-addressed
chunks of data.
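As a mental model (a toy in-memory sketch, assuming BLAKE3 as the
content-addressing hash; names are illustrative, not the actual
implementation):

    use std::collections::HashMap;

    // Chunks are stored and retrieved by the digest of their contents.
    #[derive(Default)]
    struct MemoryChunkStore {
        chunks: HashMap<[u8; 32], Vec<u8>>,
    }

    impl MemoryChunkStore {
        // Get a chunk by its digest, if present.
        fn get(&self, digest: &[u8; 32]) -> Option<&[u8]> {
            self.chunks.get(digest).map(|v| v.as_slice())
        }

        // Put a chunk; its digest is derived from the data itself.
        fn put(&mut self, data: Vec<u8>) -> [u8; 32] {
            let digest = *blake3::hash(&data).as_bytes();
            self.chunks.insert(digest, data);
            digest
        }
    }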
Change-Id: I36cf2278ed1daf71848c04fdfd14450b2268c5de
Reviewed-on: https://cl.tvl.fyi/c/depot/+/7135
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>