This adds support for s3:// URIs in all places where Nix allows URIs,
e.g. in builtins.fetchurl, builtins.fetchTarball, <nix/fetchurl.nix>
and NIX_PATH. It allows fetching resources from private S3 buckets,
using credentials obtained from the standard places (i.e. AWS_*
environment variables, ~/.aws/credentials and the EC2 metadata
server). This may not be super-useful in general, but since we already
depend on aws-sdk-cpp, it's a cheap feature to add.
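
As an illustration of the credential handling involved, here is a
minimal, self-contained aws-sdk-cpp sketch that fetches an object from
a private bucket via the SDK's default credential provider chain. The
bucket and key are hypothetical placeholders, and this is not Nix's
actual download code.

    #include <aws/core/Aws.h>
    #include <aws/s3/S3Client.h>
    #include <aws/s3/model/GetObjectRequest.h>
    #include <iostream>
    #include <sstream>

    int main()
    {
        /* InitAPI must bracket all SDK use. The default credential
           provider chain consults the AWS_* environment variables,
           ~/.aws/credentials and the EC2 metadata server. */
        Aws::SDKOptions options;
        Aws::InitAPI(options);
        {
            Aws::S3::S3Client client; /* default credentials and region */

            Aws::S3::Model::GetObjectRequest request;
            request.SetBucket("example-private-bucket"); /* hypothetical */
            request.SetKey("example/resource.tar.gz");   /* hypothetical */

            auto outcome = client.GetObject(request);
            if (outcome.IsSuccess()) {
                std::stringstream ss;
                ss << outcome.GetResult().GetBody().rdbuf();
                std::cout << "fetched " << ss.str().size() << " bytes\n";
            } else
                std::cerr << outcome.GetError().GetMessage() << "\n";
        }
        Aws::ShutdownAPI(options);
    }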
This reverts commit f78126bfd6b6c8477fcdbc09b2f98772dbe9a1e7. There
really is no need for such a massive change...
The fact that queryPathInfo() is synchronous meant that we needed a
thread for every concurrent binary cache lookup, even though they end
up being handled by the same download thread. Requiring hundreds of
threads is not a good idea. So now there is an asynchronous version of
queryPathInfo() that takes a callback function to process the
result. Similarly, enqueueDownload() now takes a callback rather than
returning a future.
Thus, a command like
nix path-info --store https://cache.nixos.org/ -r /nix/store/slljrzwmpygy1daay14kjszsr9xix063-nixos-16.09beta231.dccf8c5
that returns 4941 paths now takes 1.87s using only 2 threads (the main
thread and the downloader thread). (This is with a prewarmed
CloudFront.)
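
For illustration, here is a rough sketch of the callback-style
interface; the names and signatures are approximate, not the exact
Nix declarations.

    #include <functional>
    #include <memory>
    #include <string>

    struct ValidPathInfo { std::string path; /* ... */ };

    struct Store
    {
        /* Synchronous lookup: blocks the calling thread until the
           .narinfo request completes. */
        virtual std::shared_ptr<ValidPathInfo> queryPathInfo(
            const std::string & path) = 0;

        /* Asynchronous lookup: returns immediately; the download
           thread invokes one of the callbacks once the result (or an
           error) is available, so no extra thread has to block. */
        virtual void queryPathInfo(
            const std::string & path,
            std::function<void(std::shared_ptr<ValidPathInfo>)> success,
            std::function<void(std::exception_ptr)> failure) = 0;
    };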
The binary cache store can now use HTTP/2 to do lookups. This is much
more efficient than HTTP/1.1 due to multiplexing: we can issue many
requests in parallel over a single TCP connection. Thus it's no longer
necessary to use a bunch of concurrent TCP connections (25 by
default).
For example, downloading 802 .narinfo files from
https://cache.nixos.org/, using a single TCP connection, takes 11.8s
with HTTP/1.1, but only 0.61s with HTTP/2.
This did require a fairly substantial rewrite of the Downloader class
to use the curl multi interface, because otherwise curl wouldn't be
able to do multiplexing for us. As a bonus, we get connection reuse
even with HTTP/1.1. All downloads are now handled by a single worker
thread. Clients call Downloader::enqueueDownload() to tell the worker
thread to start the download, getting a std::future to the result.
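
For reference, a minimal stand-alone sketch of HTTP/2 multiplexing
with the curl multi interface. The URLs are placeholders, and error
handling and per-handle cleanup are elided.

    #include <curl/curl.h>
    #include <string>
    #include <vector>

    int main()
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        CURLM * multi = curl_multi_init();
        /* Let curl multiplex concurrent transfers over a single
           HTTP/2 connection instead of opening one TCP connection
           per request. */
        curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

        std::vector<std::string> urls = {
            "https://cache.nixos.org/nix-cache-info", /* placeholder */
            "https://cache.nixos.org/nix-cache-info", /* placeholder */
        };

        for (auto & url : urls) {
            CURL * easy = curl_easy_init();
            curl_easy_setopt(easy, CURLOPT_URL, url.c_str());
            curl_easy_setopt(easy, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
            curl_multi_add_handle(multi, easy);
        }

        int running = 0;
        do {
            curl_multi_perform(multi, &running);
            curl_multi_wait(multi, nullptr, 0, 1000, nullptr);
        } while (running > 0);

        /* (Easy-handle cleanup via curl_multi_info_read elided.) */
        curl_multi_cleanup(multi);
        curl_global_cleanup();
    }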
This makes us more robust against 500 errors from CloudFront or S3
(assuming the 500 error isn't cached by CloudFront...).
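
A generic sketch of the retry idea, with illustrative constants
rather than Nix's actual retry policy:

    #include <chrono>
    #include <cmath>
    #include <exception>
    #include <random>
    #include <thread>

    /* Retry a transient failure (e.g. an HTTP 500) with exponential
       backoff and jitter; rethrow once the attempts are exhausted. */
    template<typename F>
    auto withRetries(F f, unsigned maxTries = 5)
    {
        std::default_random_engine rng(std::random_device{}());
        std::uniform_real_distribution<double> jitter(0.0, 1.0);
        for (unsigned attempt = 1; ; ++attempt) {
            try {
                return f();
            } catch (const std::exception &) {
                if (attempt >= maxTries) throw;
                auto ms = (long) (250 * std::pow(2.0, attempt - 1)
                    * (1.0 + jitter(rng)));
                std::this_thread::sleep_for(std::chrono::milliseconds(ms));
            }
        }
    }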
Also, allow builtins.{fetchurl,fetchTarball} in restricted mode if a
hash is specified.
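
Conceptually the gate amounts to something like this (hypothetical
names; the real setting and error type differ):

    #include <stdexcept>
    #include <string>

    /* In restricted mode, fetching is only allowed when the caller
       pins the expected hash of the result. */
    void checkAllowedToFetch(bool restrictedMode, const std::string & expectedHash)
    {
        if (restrictedMode && expectedHash.empty())
            throw std::runtime_error(
                "fetching is not allowed in restricted mode unless a hash is specified");
    }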
This allows readFile() to indicate that a file doesn't exist, and
might eliminate some large string copying.
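
One way to picture the change: a streaming readFile() that reports a
missing file through its return value and feeds data to a sink
instead of building one big string. The names are illustrative, not
the actual Nix API.

    #include <fstream>
    #include <functional>
    #include <string>

    using Sink = std::function<void(const char *, size_t)>;

    /* Returns false if the file doesn't exist (or can't be opened);
       otherwise streams the contents to 'sink' in fixed-size chunks,
       avoiding one large intermediate string. */
    bool readFile(const std::string & path, Sink sink)
    {
        std::ifstream in(path, std::ios::binary);
        if (!in) return false;
        char buf[65536];
        while (in.read(buf, sizeof buf) || in.gcount() > 0)
            sink(buf, (size_t) in.gcount());
        return true;
    }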
Allowing stuff like
NIX_REMOTE=https://cache.nixos.org nix-store -qR /nix/store/x1p1gl3a4kkz5ci0nfbayjqlqmczp1kq-geeqie-1.1
or
NIX_REMOTE=https://cache.nixos.org nix-store --export /nix/store/x1p1gl3a4kkz5ci0nfbayjqlqmczp1kq-geeqie-1.1 | nix-store --import
Calling a class an API is a bit redundant...
Also, move a few free-standing functions into StoreAPI and Derivation,
and introduce a non-nullable smart pointer, ref<T>, which is just a
wrapper around std::shared_ptr ensuring that the pointer is never
null. (For reference-counted values, this is better than passing a
"T&", because the latter doesn't maintain the refcount. Usually, the
caller will have a shared_ptr keeping the value alive, but that's not
always the case, e.g., when passing a reference to a std::thread via
std::bind.)
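
A simplified sketch of such a non-nullable wrapper; the real ref<T>
in Nix has more conversions and its own exception type.

    #include <memory>
    #include <stdexcept>
    #include <utility>

    template<typename T>
    class ref
    {
        std::shared_ptr<T> p;
    public:
        /* Construction from a shared_ptr checks for null exactly
           once; after that the pointer can be dereferenced freely. */
        explicit ref(const std::shared_ptr<T> & sp) : p(sp)
        {
            if (!sp) throw std::invalid_argument("null pointer cast to ref");
        }

        T * operator ->() const { return p.get(); }
        T & operator *() const { return *p; }

        /* Implicit conversion back to shared_ptr, e.g. for passing a
           ref<T> to a std::thread via std::bind while keeping the
           refcount alive. */
        operator std::shared_ptr<T>() const { return p; }
    };

    template<typename T, typename... Args>
    ref<T> make_ref(Args &&... args)
    {
        return ref<T>(std::make_shared<T>(std::forward<Args>(args)...));
    }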
This makes it consistent with the Nixpkgs fetchurl and makes it work
in chroots. We don't need verification because the hash of the result
is checked anyway.
This ensures that 1) the derivation doesn't change when Nix changes;
2) the derivation closure doesn't contain Nix and its dependencies; 3)
we don't have to rely on ugly chroot hacks.