Fixes #1339.
|
This makes all config options self-documenting.
Unknown or unparseable config settings and --option flags now cause a
warning.
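As an illustration (the misspelled setting is hypothetical), a
nix.conf line like
  build-coers = 4
(a typo for "build-cores") was previously ignored without complaint;
it now produces a warning.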
|
We've observed it fail downloads in the wild, and retrying the same
URL a few moments later seemed to fix it.
|
For example, if we call brotli with an empty input, it shouldn't read
from the caller's stdin.
|
You can now set the store parameter "text-compression=br" to compress
textual files in the binary cache (i.e. narinfo and logs) using
Brotli. This sets the Content-Encoding header; the extension of
compressed files is unchanged.
You can separately specify the compression of log files using
"log-compression=br". This is useful when you don't want to compress
narinfo files for backward compatibility.
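A sketch of how this might be enabled when populating a cache (the
file:// URI and store path are placeholders, and this assumes the
experimental "nix copy" command):
$ nix copy --to 'file:///tmp/cache?text-compression=br' /nix/store/<hash>-hello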
|
Build logs on cache.nixos.org are compressed using Brotli (since this
allows them to be decompressed automatically by Chrome and Firefox),
so it's handy if "nix log" can decompress them.
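For example (the store path is a placeholder, and this assumes the
new-style "nix log" command accepts a store path):
$ nix log /nix/store/<hash>-hello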
|
http://hydra.nixos.org/build/49490928
|
Issue #1254.
|
Previously, the Settings class allowed other code to query for string
properties, which led to a proliferation of code all over the place making
up new options without any sort of central registry of valid options. This
commit pulls all those options back into the central Settings class and
removes the public get() methods, to discourage future abuses like that.
Furthermore, because we know the full set of options ahead of time, we
now fail loudly if someone enters an unrecognized option, thus preventing
subtle typos. With some template fun, we could probably also dump the full
set of options (with documentation, defaults, etc.) to the command line,
but I'm not doing that here yet.
|
... and use this in Downloader::downloadCached(). This fixes
$ nix-build https://nixos.org/channels/nixos-16.09-small/nixexprs.tar.xz -A hello
error: cannot import path ‘/nix/store/csfbp1s60dkgmk9f8g0zk0mwb7hzgabd-nixexprs.tar.xz’ because it lacks a valid signature
|
This fixes
unable to download ‘https://cache.nixos.org/nar/077h8ji74y9b0qx7rjk71xd80vjqp6q5gy137r553jlvdlxdcdlk.nar.xz’: HTTP error 200 (curl error: Failure when receiving data from the peer)
|
http://hydra.nixos.org/build/49031196/nixlog/2/raw
|
Also get rid of Settings::processEnvironment(); it appears to be
useless.
|
Some sites (e.g. BitBucket) return a helpful 401 error when you try
to download a private archive with a User-Agent containing "curl",
but redirect to a login page otherwise (so, for instance,
"nix-prefetch-url" will succeed but produce useless output).
|
Add netrc-file support
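A minimal sketch (file locations and credentials are made up). In
nix.conf:
  netrc-file = /etc/nix/netrc
And in /etc/nix/netrc, standard netrc syntax:
  machine bitbucket.org
  login alice
  password s3kr1t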
|
This adds support for s3:// URIs in all places where Nix allows URIs,
e.g. in builtins.fetchurl, builtins.fetchTarball, <nix/fetchurl.nix>
and NIX_PATH. It allows fetching resources from private S3 buckets,
using credentials obtained from the standard places (i.e. AWS_*
environment variables, ~/.aws/credentials and the EC2 metadata
server). This may not be super-useful in general, but since we already
depend on aws-sdk-cpp, it's a cheap feature to add.
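For instance, in a Nix expression (the bucket and key are
hypothetical):
  builtins.fetchurl "s3://my-private-bucket/path/to/source.tar.gz"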
|
This is a hopefully temporary measure to diagnose the intermittent
"HTTP error 200" failures.
|
Closes #1182.
|
This allows other threads to install callbacks that run in a regular,
non-signal context. In particular, we can use this to signal the
downloader thread to quit.
Closes #1183.
|
This reverts commit f78126bfd6b6c8477fcdbc09b2f98772dbe9a1e7. There
really is no need for such a massive change...
|
The store parameter "write-nar-listing=1" will cause BinaryCacheStore
to write a file ‘<store-hash>.ls.xz’ for each ‘<store-hash>.narinfo’
added to the binary cache. This file contains an XZ-compressed JSON
document describing the contents of the NAR, excluding the contents
of regular files.
E.g.
{
  "version": 1,
  "root": {
    "type": "directory",
    "entries": {
      "lib": {
        "type": "directory",
        "entries": {
          "Mcrt1.o": {
            "type": "regular",
            "size": 1288
          },
          "Scrt1.o": {
            "type": "regular",
            "size": 3920
          },
          ...
        }
      }
    }
  }
}
(The actual file has no indentation.)
This is intended to speed up the NixOS channels programs index
generator [1], since fetching gazillions of large NARs from
cache.nixos.org is currently a bottleneck for updating the regular
(non-small) channel.
[1] https://github.com/NixOS/nixos-channel-scripts/blob/master/generate-programs-index.cc
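A sketch of enabling this on a local cache (the URI and store path
are placeholders; this assumes the experimental "nix copy" command):
$ nix copy --to 'file:///tmp/cache?write-nar-listing=1' /nix/store/<hash>-glibc
This should write ‘/tmp/cache/<store-hash>.ls.xz’ next to the
corresponding ‘<store-hash>.narinfo’.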
|
http://hydra.nixos.org/build/42025230
|
Commit 86e8c67efc33cf756500a1dec7fd6313658f2664 broke it, because
CURL_* are not actually #defines.
|
This prevents collisions with the "native" OpenSSL, in particular on
OS X.
Fixes #921.
|
The fact that queryPathInfo() was synchronous meant that we needed a
thread for every concurrent binary cache lookup, even though they all
end up being handled by the same download thread. Requiring hundreds
of threads is not a good idea. So there is now an asynchronous version
of
queryPathInfo() that takes a callback function to process the
result. Similarly, enqueueDownload() now takes a callback rather than
returning a future.
Thus, a command like
nix path-info --store https://cache.nixos.org/ -r /nix/store/slljrzwmpygy1daay14kjszsr9xix063-nixos-16.09beta231.dccf8c5
that returns 4941 paths now takes 1.87s using only 2 threads (the main
thread and the downloader thread). (This is with a prewarmed
CloudFront.)
|
It's a slight misnomer now because it actually limits *all* downloads,
not just binary cache lookups.
Also add an "enable-http2" option to allow disabling the use of HTTP/2
(enabled by default).
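So, presumably, HTTP/2 can be turned off again with a nix.conf line
like:
  enable-http2 = false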
|
The binary cache store can now use HTTP/2 to do lookups. This is much
more efficient than HTTP/1.1 due to multiplexing: we can issue many
requests in parallel over a single TCP connection. Thus it's no longer
necessary to use a bunch of concurrent TCP connections (25 by
default).
For example, downloading 802 .narinfo files from
https://cache.nixos.org/, using a single TCP connection, takes 11.8s
with HTTP/1.1, but only 0.61s with HTTP/2.
This did require a fairly substantial rewrite of the Downloader class
to use the curl multi interface, because otherwise curl wouldn't be
able to do multiplexing for us. As a bonus, we get connection reuse
even with HTTP/1.1. All downloads are now handled by a single worker
thread. Clients call Downloader::enqueueDownload() to tell the worker
thread to start the download, getting a std::future to the result.
|