author    Florian Klink <flokli@flokli.de>  2023-05-11 12:49 +0300
committer flokli <flokli@flokli.de>         2023-05-11 14:27 +0000
commit    616fa4476f93e1782e68dc713e9e8cb77a426c7d (patch)
tree      f76a43e95c75d848d079706fbccfd442210ebebc /tvix/store/src/blobservice/mod.rs
parent    b22b685f0b2524c088deacbf4e80e7b7c73b5afc (diff)
refactor(tvix/store): remove ChunkService r/6133
Whether chunking is involved or not is an implementation detail of each
blob store. Consumers of a whole blob shouldn't need to worry about it.
It currently isn't visible in the gRPC interface either, and it shouldn't
bleed into everything.

Let the BlobService trait provide `open_read` and `open_write` methods,
which return handles providing io::Read or io::Write, and leave the
details up to the implementation.
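
A rough usage sketch of the new interface (unwraps for brevity; `svc` is
any BlobService implementation):

    use std::io::{Read, Write};

    fn roundtrip(svc: &impl BlobService) {
        // Write a new blob; close() seals it and returns its digest.
        let mut writer = svc.open_write().unwrap();
        writer.write_all(b"hello tvix").unwrap();
        let digest = writer.close().unwrap(); // consumes the writer

        // Read it back; open_read returns None for unknown digests.
        assert!(svc.has(&digest).unwrap());
        let mut reader = svc.open_read(&digest).unwrap().unwrap();
        let mut contents = Vec::new();
        reader.read_to_end(&mut contents).unwrap();
        assert_eq!(contents, b"hello tvix");
    }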

This means our custom BlobReader module can go away, along with all the
chunking bits in there.

In the future, we might still want to add chunking-aware syncing, but as
a syncing strategy that some stores can expose, not as a fundamental
protocol component.

This currently needs "SyncReadIntoAsyncRead", taken and vendored in from
https://github.com/tokio-rs/tokio/pull/5669.
It provides an AsyncRead for a sync Read, which is necessary to connect
our (sync) BlobReader interface to a gRPC server implementation.
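
For illustration, the underlying idea (not the vendored adapter's actual
API) can be sketched with stock tokio ecosystem crates: pump the sync
Read on a blocking task, and expose the receiving side as an AsyncRead:

    use std::io::Read;

    use bytes::Bytes;
    use tokio_stream::wrappers::ReceiverStream;
    use tokio_util::io::StreamReader;

    /// Bridge a blocking Read into an AsyncRead. Must be called from
    /// within a tokio runtime, since it spawns a blocking task.
    fn bridge<R: Read + Send + 'static>(mut r: R) -> impl tokio::io::AsyncRead {
        let (tx, rx) = tokio::sync::mpsc::channel::<Result<Bytes, std::io::Error>>(1);
        tokio::task::spawn_blocking(move || {
            let mut buf = [0u8; 8192];
            loop {
                match r.read(&mut buf) {
                    Ok(0) => break, // EOF
                    Ok(n) => {
                        // Stop pumping if the async side hung up.
                        if tx.blocking_send(Ok(Bytes::copy_from_slice(&buf[..n]))).is_err() {
                            break;
                        }
                    }
                    Err(e) => {
                        let _ = tx.blocking_send(Err(e));
                        break;
                    }
                }
            }
        });
        StreamReader::new(ReceiverStream::new(rx))
    }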

As an alternative, we could also make the BlobReader itself async, and
let consumers of the trait (EvalIO) deal with the async-ness, but this
is less of a change for now.

In terms of vendoring, I initially tried to pin our tokio crate to these
commits, but ran into version incompatibilities, so let's vendor the code
in for now.

Change-Id: I5969ebbc4c0e1ceece47981be3b9e7cfb3f59ad0
Reviewed-on: https://cl.tvl.fyi/c/depot/+/8551
Tested-by: BuildkiteCI
Reviewed-by: tazjin <tazjin@tvl.su>
Diffstat (limited to 'tvix/store/src/blobservice/mod.rs')
-rw-r--r-- tvix/store/src/blobservice/mod.rs | 37
1 file changed, 28 insertions(+), 9 deletions(-)
diff --git a/tvix/store/src/blobservice/mod.rs b/tvix/store/src/blobservice/mod.rs
index 53e941795e7e..e97ccdf335a0 100644
--- a/tvix/store/src/blobservice/mod.rs
+++ b/tvix/store/src/blobservice/mod.rs
@@ -1,4 +1,6 @@
-use crate::{proto, Error};
+use std::io;
+
+use crate::Error;
 
 mod memory;
 mod sled;
@@ -7,14 +9,31 @@ pub use self::memory::MemoryBlobService;
 pub use self::sled::SledBlobService;
 
 /// The base trait all BlobService services need to implement.
-/// It provides information about how a blob is chunked,
-/// and allows creating new blobs by creating a BlobMeta (referring to chunks
-/// in a [crate::chunkservice::ChunkService]).
+/// It provides functions to check whether a given blob exists,
+/// a way to get an [io::Read] to a blob, and a method to initiate writing a
+/// new Blob, which returns a [BlobWriter] that can be used to write its contents.
 pub trait BlobService {
-    /// Retrieve chunking information for a given blob
-    fn stat(&self, req: &proto::StatBlobRequest) -> Result<Option<proto::BlobMeta>, Error>;
+    type BlobReader: io::Read + Send + std::marker::Unpin;
+    type BlobWriter: BlobWriter + Send;
+
+    /// Check if the service has the blob, by its content hash.
+    fn has(&self, digest: &[u8; 32]) -> Result<bool, Error>;
+
+    /// Request a blob from the store, by its content hash. Returns an Option<BlobReader>.
+    fn open_read(&self, digest: &[u8; 32]) -> Result<Option<Self::BlobReader>, Error>;
+
+    /// Insert a new blob into the store. Returns a [BlobWriter], which
+    /// implements [io::Write] and provides a [BlobWriter::close] method.
+    /// TODO: is there any reason we want this to be a Result<>, and not just T?
+    fn open_write(&self) -> Result<Self::BlobWriter, Error>;
+}
 
-    /// Insert chunking information for a given blob.
-    /// Implementations SHOULD make sure chunks referred do exist.
-    fn put(&self, blob_digest: &[u8], blob_meta: proto::BlobMeta) -> Result<(), Error>;
+/// An [io::Write] that you need to close() afterwards, to get back the
+/// digest of the written blob.
+pub trait BlobWriter: io::Write {
+    /// Signal there's no more data to be written, and return the digest of the
+    /// contents written.
+    ///
+    /// This consumes self, so it's not possible to close twice.
+    fn close(self) -> Result<[u8; 32], Error>;
 }
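
For reference, a minimal (hypothetical) implementor of the new BlobWriter
trait, assuming BLAKE3 digests as the existing services use, and ignoring
where the bytes are actually persisted:

    /// Buffers all written bytes in memory; close() hashes them.
    struct VecBlobWriter {
        buf: Vec<u8>,
    }

    impl std::io::Write for VecBlobWriter {
        fn write(&mut self, b: &[u8]) -> std::io::Result<usize> {
            self.buf.extend_from_slice(b);
            Ok(b.len())
        }
        fn flush(&mut self) -> std::io::Result<()> {
            Ok(())
        }
    }

    impl BlobWriter for VecBlobWriter {
        fn close(self) -> Result<[u8; 32], Error> {
            // A real implementation would also store self.buf somewhere
            // retrievable by this digest.
            Ok(blake3::hash(&self.buf).into())
        }
    }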