This wasn't removed yet, and no code is using or populating it so far.
It's confusing, so let's update it to reflect the current state of
things, and re-introduce it once we get there.
Change-Id: I68f5ba17a8eee604d8ccd82749da7c8be094cb99
Reviewed-on: https://cl.tvl.fyi/c/depot/+/9351
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
|
|
Whether or not chunking is involved is an implementation detail of each
Blobstore. Consumers of a whole blob shouldn't need to worry about that.
It currently isn't visible in the gRPC interface either, and it
shouldn't bleed into everything.
Let the BlobService trait provide `open_read` and `open_write` methods,
which return handles providing io::Read or io::Write, and leave the
details up to the implementation.
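A minimal sketch of that trait shape (the digest type, error handling
and associated types here are assumptions for illustration, not the
actual tvix definitions):

    use std::io::{Read, Write};

    pub trait BlobService {
        type BlobReader: Read;
        type BlobWriter: Write;

        /// Open a blob, addressed by its digest, for reading.
        /// Returns Ok(None) if the blob doesn't exist.
        fn open_read(&self, digest: &[u8; 32])
            -> std::io::Result<Option<Self::BlobReader>>;

        /// Open a new blob for writing. Whether and how the written data
        /// is chunked is entirely up to the implementation.
        fn open_write(&self) -> std::io::Result<Self::BlobWriter>;
    }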
This means our custom BlobReader module can go away, along with all the
chunking bits in there.
In the future, we might still want to add more chunking-aware syncing,
but as a syncing strategy that some stores can expose, not as a
fundamental protocol component.
This currently needs "SyncReadIntoAsyncRead", taken and vendored in from
https://github.com/tokio-rs/tokio/pull/5669.
It provides an AsyncRead for a sync Read, which is necessary to connect
our (sync) BlobReader interface to a gRPC server implementation.
As an alternative, we could also make the BlobReader itself async, and
let consumers of the trait (EvalIO) deal with the async-ness, but this
is the smaller change for now.
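For illustration, the same sync-to-async bridging can be sketched with
released tokio/tokio_util APIs - this is not the vendored
SyncReadIntoAsyncRead, just the general idea of pumping a blocking
reader on a blocking task:

    use std::io::Read;
    use bytes::Bytes;
    use tokio_stream::wrappers::ReceiverStream;
    use tokio_util::io::StreamReader;

    /// Adapt a blocking Read into an AsyncRead by reading it on a
    /// blocking task and exposing the produced chunks as a byte stream.
    /// (Requires a running tokio runtime.)
    fn into_async_read<R: Read + Send + 'static>(
        mut reader: R,
    ) -> impl tokio::io::AsyncRead {
        let (tx, rx) = tokio::sync::mpsc::channel::<std::io::Result<Bytes>>(8);
        tokio::task::spawn_blocking(move || {
            let mut buf = [0u8; 64 * 1024];
            loop {
                match reader.read(&mut buf) {
                    Ok(0) => break, // EOF
                    Ok(n) => {
                        let chunk = Bytes::copy_from_slice(&buf[..n]);
                        if tx.blocking_send(Ok(chunk)).is_err() {
                            break; // receiver went away
                        }
                    }
                    Err(e) => {
                        let _ = tx.blocking_send(Err(e));
                        break;
                    }
                }
            }
        });
        StreamReader::new(ReceiverStream::new(rx))
    }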
In terms of vendoring, I initially tried to move our tokio crate to
these commits, but ran into version incompatibilities, so let's vendor
it in for now.
Change-Id: I5969ebbc4c0e1ceece47981be3b9e7cfb3f59ad0
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8551
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
|
|
* //users/wpcarro/emacs: use top level (ELPA) version of eglot, as it
was removed from MELPA:
https://github.com/melpa/melpa/commit/dc2ead17a8e5d5df59fc3729c8f0435cfcbf55ef
* //3p/nixpkgs:tdlib: 1.8.11 -> 1.8.12
* //tvix/store: regenerate protos after buf update
Change-Id: I782a8d91fda5ed461788055a3721104e8c032207
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8327
Autosubmit: sterni <sternenseemann@systemli.org>
Reviewed-by: flokli <flokli@flokli.de>
Tested-by: BuildkiteCI
Reviewed-by: wpcarro <wpcarro@gmail.com>
Reviewed-by: tazjin <tazjin@tvl.su>
Reviewed-by: sterni <sternenseemann@systemli.org>
|
|
Further emphasize that Read() can be used to ask for blobs OR chunks,
and that clients usually want to stat and then request (smaller)
chunks, rather than reading whole blobs.
Also clarify that the chunking used to send BlobChunks over the wire
has nothing to do with the chunk sizes communicated in a Stat() request.
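As a sketch of that stat-then-read-chunks access pattern - the client
trait and message shapes below are hypothetical stand-ins for the
generated stubs, not the real proto types:

    /// Hypothetical stand-ins, for illustration only.
    struct ChunkMeta {
        digest: Vec<u8>, // content address of this chunk
        size: u64,       // size of this chunk in bytes
    }

    trait BlobClient {
        /// Stat(): chunk list of a blob, None if the blob doesn't exist.
        fn stat(&self, blob_digest: &[u8])
            -> std::io::Result<Option<Vec<ChunkMeta>>>;
        /// Read(): fetch a whole blob OR a single chunk by its digest.
        fn read(&self, digest: &[u8]) -> std::io::Result<Vec<u8>>;
    }

    /// Read only the bytes in [offset, offset+len) by fetching just the
    /// chunks covering that range, instead of downloading the whole blob.
    fn read_range(
        client: &impl BlobClient,
        blob_digest: &[u8],
        offset: u64,
        len: u64,
    ) -> std::io::Result<Vec<u8>> {
        let chunks = client.stat(blob_digest)?.ok_or_else(|| {
            std::io::Error::new(std::io::ErrorKind::NotFound, "blob not found")
        })?;
        let (mut pos, end) = (0u64, offset + len);
        let mut out = Vec::new();
        for chunk in chunks {
            let chunk_end = pos + chunk.size;
            if chunk_end > offset && pos < end {
                let data = client.read(&chunk.digest)?; // the (smaller) chunk
                let from = offset.saturating_sub(pos) as usize;
                let to = (end.min(chunk_end) - pos) as usize;
                out.extend_from_slice(&data[from..to]);
            }
            pos = chunk_end;
        }
        Ok(out)
    }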
Change-Id: Ia615d190aae570611de2655b11342a14d0b75976
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8028
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
|
|
Stat exposes metadata about a given blob,
such as more granular chunking and baos.
It implicitly allows checking for existence too, as asking for a
non-existing blob will return a Status::not_found gRPC error.
The previous version returned a Status::not_found error on the Get
request too, but there was no way to prevent the server from starting
to stream (other than sending an immediate cancellation).
Being able to check whether something exists in a BlobStore helps to
avoid uploading it in the first place.
The granular chunking bits are an optional optimization - if the
BlobStore doesn't implement more granular chunking, the Stat response
can simply contain a single chunk.
Read returns a stream of BlobChunk, which is just a stream of bytes -
not necessarily using the chunking that's returned in the reply of a
Stat() call. It can be used to read blobs or chunks.
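A sketch of that single-chunk fallback, with illustrative stand-in
types rather than the generated proto structs:

    struct ChunkMeta {
        digest: Vec<u8>,
        size: u64,
    }

    struct StatResponse {
        chunks: Vec<ChunkMeta>,
    }

    /// A store without more granular chunking can answer Stat() with a
    /// single chunk spanning the whole blob.
    fn stat_unchunked(blob_digest: Vec<u8>, blob_size: u64) -> StatResponse {
        StatResponse {
            chunks: vec![ChunkMeta { digest: blob_digest, size: blob_size }],
        }
    }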
Change-Id: I4b6030ef184ace5484c84ca273b49d710433731d
Reviewed-on: https://cl.tvl.fyi/c/depot/+/7652
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
|
|
This changes the RPC methods to return/consume a stream of chunks,
instead of one very big message containing the whole blob, to keep
messages at a manageable size (less than 4MiB).
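For illustration, splitting a blob into bounded-size pieces before
streaming might look like this (the 1MiB chunk size is an arbitrary
example value, not the one tvix uses):

    // Keep each streamed BlobChunk message well below the 4MiB limit.
    const MAX_CHUNK_SIZE: usize = 1024 * 1024; // 1MiB, arbitrary example

    /// Split a blob into pieces of at most MAX_CHUNK_SIZE bytes,
    /// one per streamed BlobChunk message.
    fn into_blob_chunks(data: &[u8]) -> impl Iterator<Item = Vec<u8>> + '_ {
        data.chunks(MAX_CHUNK_SIZE).map(|piece| piece.to_vec())
    }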
Change-Id: I2a3a50f07b059d8a2f5196860254adff98c8a352
Reviewed-on: https://cl.tvl.fyi/c/depot/+/7651
Reviewed-by: tazjin <tazjin@tvl.su>
Tested-by: BuildkiteCI
|
|
This allows importing the generated .pb.go files into other Go projects.
I initially looked at buildGo.protos, but it doesn't work with multiple
.proto files, and having LSP support for the generated structs is nice,
too.
Change-Id: Idbd448008010790a10a0ea42e4059dbb609eaf1a
Reviewed-on: https://cl.tvl.fyi/c/depot/+/7322
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
|