Note that this will only work if the client has a very recent Nix
version (post 15e1b2c223494ecb5efefc3ea0e3b926a6b1d7dc); otherwise the
--option flag will just be ignored.
Fixes #50.
|
Older versions of WWW::Curl don't support scalar references for
CURLOPT_WRITEDATA directly.
http://hydra.nixos.org/build/3017188
|
This handles the chroot and build hook cases, which are easy.
Supporting the non-chroot-build case will require more work (hash
rewriting!).
Issue #21.
|
Output names are now appended to the GC symlinks created e.g. by
nix-build. For backwards compatibility, nothing is appended if the
output is named "out". E.g. doing "nix-build -A foo" on a derivation
that produces the outputs "out", "bin" and "dev" will create the
symlinks "./result", "./result-bin" and "./result-dev", respectively.
|
Channels can now advertise a binary cache by creating a file
<channel-url>/binary-cache-url. The channel unpacker puts these in
its "binary-caches" subdirectory. Thus, the URLS of the binary caches
for the channels added by root appear in
/nix/var/nix/profiles/per-user/eelco/channels/binary-caches/*. The
binary cache substituter reads these and adds them to the list of
binary caches.
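E.g. a channel can advertise its cache with a one-line file (cache URL
hypothetical):

    $ curl <channel-url>/binary-cache-url
    http://cache.example.org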
|
For security reasons, daemon users can only specify caches that appear
in the ‘binary-caches’ and ‘trusted-binary-caches’ options in
nix.conf.
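For example, with a nix.conf along these lines (URLs hypothetical),
daemon users can pass either URL via ‘--option binary-caches’, but no
others:

    binary-caches = http://cache.example.org
    trusted-binary-caches = http://mirror.example.org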
|
You can use ‘--option binary-caches URLs’ instead.
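E.g. (cache URL hypothetical):

    $ nix-env -i hello --option binary-caches http://cache.example.org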
|
The .nixpkg file format is extended to optionally include the URL of a
binary cache, which will be used in preference to the manifest URL
(which can be set to a non-existent value).
|
Querying all substitutable paths via "nix-env -qas" is potentially
hard on a server, since it involves sending thousands of HEAD
requests. So a binary cache must now have a meta-info file named
"nix-cache-info" that specifies whether the server wants this. It
also specifies the store prefix so that we don't send useless queries
to a binary cache for a different store prefix.
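E.g. a nix-cache-info file might look like this (a sketch; field names
as used by the binary cache substituter):

    StoreDir: /nix/store
    WantMassQuery: 1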
|
Since SubstitutionGoal::finished() in build.cc computes the hash
anyway, we can avoid the inefficiency of computing the hash twice by
letting the substituter tell Nix the expected hash, which Nix can then
verify.
|
Instead call curl directly and pipe it into ‘nix-store --restore’.
This saves I/O and prevents creating garbage in the Nix store.
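Schematically (cache URL and store path hypothetical):

    $ curl -s http://cache.example.org/nar/<hash>.nar.xz \
        | xz -d | nix-store --restore /nix/store/<hash>-foo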
|
This makes all the tests succeed. Woohoo!
|
The file:// URI scheme requires checking for errors in a more general
way. Also, don't cache file:// lookups.
|
Commit 6a214f3e06fa1c5f0a4d40e555f14d87691af297 reused the NixOS
environment initialisation for nix-profile.sh, but this is
inappropriate on systems that don't have multi-user support enabled.
|
In "nix-env -qas", we don't need the substitute info, we just need to
know if it exists. This can be done using a HTTP HEAD request, which
saves bandwidth.
Note however that curl currently has a bug that prevents it from
reusing HTTP connections if HEAD requests return a 404:
https://sourceforge.net/tracker/?func=detail&aid=3542731&group_id=976&atid=100976
Without the patch attached to the issue, using HEAD is actually quite
a bit slower than GET.
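With curl, the existence check amounts to something like this (cache
URL hypothetical; -I makes curl issue a HEAD request):

    $ curl -sI http://cache.example.org/<hash>.narinfo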
|
Getting substitute information using the binary cache substituter has
non-trivial latency overhead. A package or NixOS system configuration
can have hundreds of dependencies, and in the worst case (when the
local info cache is empty) we have to do a separate HTTP request for
each of these. If the ping time to the server is t, getting N info
files will take tN seconds; e.g., with a ping time of 0.1s to
nixos.org, sequentially downloading 1000 info files (a typical NixOS
config) will take at least 100 seconds.
To fix this problem, the binary cache substituter can now perform
requests in parallel. This required changing the substituter
interface to support a function querySubstitutablePathInfos() that
queries multiple paths at the same time, and rewriting queryMissing()
to take advantage of parallelism. (Due to local caching,
parallelising queryMissing() is sufficient for most use cases, since
it's almost always called before building a derivation and thus fills
the local info cache.)
For example, parallelism speeds up querying all 1056 paths in a
particular NixOS system configuration from 116s to 2.6s. It works so
well because the eccentricity of the top-level derivation in the
dependency graph is only 9. So we only need 10 round-trips (when
using an unlimited number of parallel connections) to get everything.
Currently we do a maximum of 150 parallel connections to the server.
Thus it's important that the binary cache server (e.g. nixos.org) has
a high connection limit. Alternatively we could use HTTP pipelining,
but WWW::Curl doesn't support it and libcurl has a hard-coded limit of
5 requests per pipeline.
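Roughly, the batched interface looks like this (a sketch:
querySubstitutablePathInfos() is the name introduced by this commit,
but the surrounding types and exact signature shown here are
assumptions):

    #include <map>
    #include <set>
    #include <string>

    typedef std::string Path;
    typedef std::set<Path> PathSet;

    struct SubstitutablePathInfo {
        Path deriver;                    // deriver store path, may be empty
        PathSet references;              // references of the substituted path
        unsigned long long downloadSize; // size of the compressed download
        unsigned long long narSize;      // size of the uncompressed NAR
    };

    typedef std::map<Path, SubstitutablePathInfo> SubstitutablePathInfos;

    class StoreAPI {
    public:
        virtual ~StoreAPI() { }

        // Batched query: fill 'infos' for each path in 'paths' that has
        // a substitute, letting the backend issue its HTTP requests in
        // parallel instead of one round-trip per path.
        virtual void querySubstitutablePathInfos(const PathSet & paths,
            SubstitutablePathInfos & infos) = 0;
    };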
|
Using WWW::Curl rather than running an external curl process for every
NAR info file halves the time it takes to get info thanks to libcurl's
support for persistent HTTP connections. (We save a roundtrip per
file.) But the real gain will come from using parallel and/or
pipelined requests.
|
I.e. if a NAR info file does *not* exist, we record it in the cache DB
so that we don't retry it later.
|
Use the hash part of the store path as a key rather than a hash of the
store path. This is enough to get the desired privacy property.
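E.g. (store hash made up), the info for

    /nix/store/7rkz3yv4xgqvb0svkvx3sfnr4cnyrb03-hello-2.8

is keyed on

    7rkz3yv4xgqvb0svkvx3sfnr4cnyrb03

rather than on a hash computed over the whole store path string.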
|
XZ compresses significantly better than bzip2. Here are the
compression ratios and execution times (using 4 cores in parallel) on
my /var/run/current-system (3.1 GiB):
bzip2: total compressed size 849.56 MiB, 30.8% [2m08]
xz -6: total compressed size 641.84 MiB, 23.4% [6m53]
xz -7: total compressed size 621.82 MiB, 22.6% [7m19]
xz -8: total compressed size 599.33 MiB, 21.8% [7m18]
xz -9: total compressed size 588.18 MiB, 21.4% [7m40]
Note that compression takes much longer. More importantly, however,
decompression is much faster:
bzip2: 1m47.274s
xz -6: 0m55.446s
xz -7: 0m54.119s
xz -8: 0m52.388s
xz -9: 0m51.842s
The only downside to using -9 is that decompression takes a fair
amount (~65 MB) of memory.
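For reference, the numbers above correspond to pipelines along these
lines (store path hypothetical):

    $ nix-store --dump /nix/store/<hash>-foo | xz -9 > foo.nar.xz
    $ time xz -d < foo.nar.xz > /dev/null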
|