This prevents unnecessary and slow rebuilds of NARs that already exist
in the binary cache.
|
I.e. do what git does. I'm too lazy to keep the builtin help text up
to date :-)
Also add ‘--help’ to various commands that lacked it
(e.g. nix-collect-garbage).
|
This operation allows fixing corrupted or accidentally deleted store
paths by redownloading them using substituters, if available.
Since the corrupted path cannot be replaced atomically, there is a
very small time window (one system call) during which neither the old
(corrupted) nor the new (repaired) contents are available. So
repairing should be used with some care on critical packages like
Glibc.
|
Commit 6a214f3e06fa1c5f0a4d40e555f14d87691af297 copied most of the Nix
shell initialisation code from NixOS to nix-profile.sh; however, that
code assumes a multi-user install and is Linux-specific (e.g. it calls
the "stat" command). So go back to the simple single-user version.
Fixes #49.
|
Negative lookups are purged from the DB after a day, at most once per day. However, for non-"have" lookups (i.e. everything except "nix-env -qas"), negative lookups are ignored after one hour. This ensures that you don't have to wait a day for an operation like "nix-env -i" to start using new binaries in the cache.
This should probably be made configurable.
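For illustration, a minimal C++ sketch of the two-TTL rule described above; the constants and the function name are hypothetical, not the actual substituter code:

    #include <ctime>

    // Hypothetical sketch: trust a cached negative result for a day for
    // "have" lookups, but only for an hour for full info lookups.
    const time_t ttlNegativeHave = 24 * 60 * 60; // e.g. "nix-env -qas"
    const time_t ttlNegativeInfo = 60 * 60;      // everything else

    bool negativeStillValid(time_t cachedAt, bool isHaveQuery)
    {
        time_t ttl = isHaveQuery ? ttlNegativeHave : ttlNegativeInfo;
        return time(nullptr) - cachedAt < ttl;
    }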
|
Note that this will only work if the client has a very recent Nix version (post 15e1b2c223494ecb5efefc3ea0e3b926a6b1d7dc); otherwise the --option flag will just be ignored.
Fixes #50.
|
Older versions of WWW::Curl don't support scalar references for
CURLOPT_WRITEDATA directly.
http://hydra.nixos.org/build/3017188
|
This handles the chroot and build hook cases, which are easy.
Supporting the non-chroot-build case will require more work (hash
rewriting!).
Issue #21.
|
Output names are now appended to resulting GC symlinks, e.g. by
nix-build. For backwards compatibility, if the output is named "out",
nothing is appended. E.g. doing "nix-build -A foo" on a derivation
that produces outputs "out", "bin" and "dev" will produce symlinks
"./result", "./result-bin" and "./result-dev", respectively.
|
Channels can now advertise a binary cache by creating a file
<channel-url>/binary-cache-url. The channel unpacker puts these in
its "binary-caches" subdirectory. Thus, the URLS of the binary caches
for the channels added by root appear in
/nix/var/nix/profiles/per-user/eelco/channels/binary-caches/*. The
binary cache substituter reads these and adds them to the list of
binary caches.
|
For security reasons, daemon users can only specify caches that appear
in the ‘binary-caches’ and ‘trusted-binary-caches’ options in
nix.conf.
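Conceptually the check is just a membership test against the two configured lists; a hedged C++ sketch (names illustrative, not the actual Nix internals):

    #include <set>
    #include <string>

    // A cache URL passed by a daemon user is only accepted if the
    // administrator already listed it in nix.conf.
    bool cacheAllowedForUser(const std::string & url,
        const std::set<std::string> & binaryCaches,         // 'binary-caches'
        const std::set<std::string> & trustedBinaryCaches)  // 'trusted-binary-caches'
    {
        return binaryCaches.count(url) > 0 || trustedBinaryCaches.count(url) > 0;
    }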
|
You can use ‘--option binary-caches URLs’ instead.
|
The .nixpkg file format is extended to optionally include the URL of a
binary cache, which will be used in preference to the manifest URL
(which can be set to a non-existent value).
|
Querying all substitutable paths via "nix-env -qas" is potentially hard on a server, since it involves sending thousands of HEAD requests. So a binary cache must now have a meta-info file named "nix-cache-info" that specifies whether the server wants to receive such mass queries. It also specifies the store prefix, so that we don't send useless queries to a binary cache for a different store prefix.
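The file is a simple list of "Key: value" lines; a minimal C++ parsing sketch (the exact field set, e.g. StoreDir and WantMassQuery, should be treated as an assumption here):

    #include <map>
    #include <sstream>
    #include <string>

    // Parse e.g.:
    //   StoreDir: /nix/store
    //   WantMassQuery: 1
    std::map<std::string, std::string> parseCacheInfo(const std::string & contents)
    {
        std::map<std::string, std::string> fields;
        std::istringstream in(contents);
        std::string line;
        while (std::getline(in, line)) {
            std::string::size_type colon = line.find(": ");
            if (colon == std::string::npos) continue;
            fields[line.substr(0, colon)] = line.substr(colon + 2);
        }
        return fields;
    }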
|
Since SubstitutionGoal::finished() in build.cc computes the hash anyway, we can avoid computing the hash twice by letting the substituter tell Nix the expected hash, which Nix can then verify.
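In essence the substituter's reported hash is simply compared against the hash Nix computes anyway when the substitution finishes; a trivial sketch (names illustrative):

    #include <stdexcept>
    #include <string>

    // expectedHash comes from the substituter; computedHash is what
    // SubstitutionGoal::finished() computes anyway. An empty expected
    // hash means the substituter didn't provide one.
    void checkSubstitutedPath(const std::string & storePath,
        const std::string & expectedHash, const std::string & computedHash)
    {
        if (!expectedHash.empty() && expectedHash != computedHash)
            throw std::runtime_error("hash mismatch in substituted path '" + storePath + "'");
    }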
|
Instead call curl directly and pipe it into ‘nix-store --restore’.
This saves I/O and prevents creating garbage in the Nix store.
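A rough sketch of that pipeline (the real substituter is a Perl script, and the actual command also decompresses the NAR; this is only an illustration):

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Stream the NAR straight from curl into 'nix-store --restore'
    // instead of saving it to a temporary file first.
    void fetchAndRestore(const std::string & narUrl, const std::string & destPath)
    {
        std::string cmd = "curl --fail --silent --location '" + narUrl + "'"
            " | nix-store --restore '" + destPath + "'";
        FILE * p = popen(cmd.c_str(), "r"); // a real implementation would escape arguments
        if (!p || pclose(p) != 0)
            throw std::runtime_error("failed to fetch or unpack " + narUrl);
    }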
|
This makes all the tests succeed. Woohoo!
|
The file:// URI scheme requires checking for errors in a more general way. Also, don't cache file:// lookups.
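The "more general" check boils down to looking at the curl result code as well as the HTTP status, since file:// transfers have no status code; a libcurl sketch (illustrative only):

    #include <curl/curl.h>
    #include <string>

    // file:// failures only show up in the CURLcode; HTTP(S) additionally
    // reports a status code that has to be checked.
    bool transferSucceeded(CURL * curl, CURLcode res, const std::string & url)
    {
        if (res != CURLE_OK) return false;
        if (url.compare(0, 7, "file://") == 0) return true;
        long status = 0;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
        return status >= 200 && status < 300;
    }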
|
Commit 6a214f3e06fa1c5f0a4d40e555f14d87691af297 reused the NixOS
environment initialisation for nix-profile.sh, but this is
inappropriate on systems that don't have multi-user support enabled.
|
In "nix-env -qas", we don't need the substitute info, we just need to
know if it exists. This can be done using a HTTP HEAD request, which
saves bandwidth.
Note however that curl currently has a bug that prevents it from
reusing HTTP connections if HEAD requests return a 404:
https://sourceforge.net/tracker/?func=detail&aid=3542731&group_id=976&atid=100976
Without the patch attached to the issue, using HEAD is actually quite
a bit slower than GET.
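With libcurl the existence check looks roughly like this (the actual substituter uses WWW::Curl; this is only a sketch):

    #include <curl/curl.h>
    #include <string>

    // CURLOPT_NOBODY turns the request into a HEAD: a 200 means the
    // .narinfo exists, a 404 means it doesn't, and no body is transferred.
    bool narInfoExists(const std::string & url)
    {
        CURL * curl = curl_easy_init();
        if (!curl) return false;
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);
        CURLcode res = curl_easy_perform(curl);
        long status = 0;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
        curl_easy_cleanup(curl);
        return res == CURLE_OK && status == 200;
    }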
|
Getting substitute information using the binary cache substituter has non-trivial latency overhead. A package or NixOS system configuration can have hundreds of dependencies, and in the worst case (when the local info cache is empty) we have to do a separate HTTP request for each of these. If the ping time to the server is t, getting N info files will take tN seconds; e.g., with a ping time of 0.1s to nixos.org, sequentially downloading 1000 info files (a typical NixOS config) will take at least 100 seconds.

To fix this problem, the binary cache substituter can now perform requests in parallel. This required changing the substituter interface to support a function querySubstitutablePathInfos() that queries multiple paths at the same time, and rewriting queryMissing() to take advantage of parallelism. (Due to local caching, parallelising queryMissing() is sufficient for most use cases, since it's almost always called before building a derivation and thus fills the local info cache.)

For example, parallelism speeds up querying all 1056 paths in a particular NixOS system configuration from 116s to 2.6s. It works so well because the eccentricity of the top-level derivation in the dependency graph is only 9. So we only need 10 round-trips (when using an unlimited number of parallel connections) to get everything.

Currently we do a maximum of 150 parallel connections to the server. Thus it's important that the binary cache server (e.g. nixos.org) has a high connection limit. Alternatively we could use HTTP pipelining, but WWW::Curl doesn't support it and libcurl has a hard-coded limit of 5 requests per pipeline.
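For comparison, a self-contained C++ sketch of the same idea using libcurl's multi interface (the real implementation is Perl with WWW::Curl; the 150-connection cap comes from the text above, everything else is illustrative):

    #include <curl/curl.h>
    #include <string>
    #include <vector>

    static size_t collect(char * data, size_t size, size_t nmemb, void * userp)
    {
        static_cast<std::string *>(userp)->append(data, size * nmemb);
        return size * nmemb;
    }

    // Fetch many .narinfo URLs concurrently instead of one HTTP request
    // at a time, in the spirit of querySubstitutablePathInfos().
    std::vector<std::string> fetchAll(const std::vector<std::string> & urls)
    {
        std::vector<std::string> bodies(urls.size());
        CURLM * multi = curl_multi_init();
        curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 150L); // parallelism cap

        std::vector<CURL *> handles;
        for (size_t i = 0; i < urls.size(); ++i) {
            CURL * h = curl_easy_init();
            curl_easy_setopt(h, CURLOPT_URL, urls[i].c_str());
            curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, collect);
            curl_easy_setopt(h, CURLOPT_WRITEDATA, &bodies[i]);
            curl_multi_add_handle(multi, h);
            handles.push_back(h);
        }

        int running = 0;
        do {
            curl_multi_perform(multi, &running);
            curl_multi_wait(multi, nullptr, 0, 1000, nullptr); // wait for socket activity
        } while (running > 0);

        for (CURL * h : handles) {
            curl_multi_remove_handle(multi, h);
            curl_easy_cleanup(h);
        }
        curl_multi_cleanup(multi);
        return bodies;
    }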
|
Using WWW::Curl rather than running an external curl process for every
NAR info file halves the time it takes to get info thanks to libcurl's
support for persistent HTTP connections. (We save a roundtrip per
file.) But the real gain will come from using parallel and/or
pipelined requests.
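The win comes simply from keeping one handle, and hence one open connection, across requests; a minimal libcurl sketch (body handling omitted, names illustrative):

    #include <curl/curl.h>
    #include <string>

    // Reusing a single easy handle lets libcurl keep the connection to
    // the binary cache open, so each further request skips the
    // connection setup round trip.
    struct CacheClient
    {
        CURL * curl = curl_easy_init();
        ~CacheClient() { if (curl) curl_easy_cleanup(curl); }

        long fetch(const std::string & url) // returns the HTTP status
        {
            curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
            long status = 0;
            if (curl_easy_perform(curl) == CURLE_OK)
                curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
            return status;
        }
    };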
|