For example, you can now set

  build-sandbox-paths = /dev/nvidiactl?

to specify that /dev/nvidiactl should only be mounted in the sandbox
if it exists in the host filesystem. This is useful e.g. for EC2
images that should support both CUDA and non-CUDA instances.
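A minimal sketch of how a trailing '?' can be interpreted when setting up the sandbox (names and structure are illustrative, not Nix's actual code):

  #include <stdexcept>
  #include <string>
  #include <sys/stat.h>

  // Illustrative only: a trailing '?' marks a sandbox path as optional,
  // so a missing host path is skipped instead of being an error.
  bool wantMount(std::string path)
  {
      bool optional = false;
      if (!path.empty() && path.back() == '?') {
          optional = true;
          path.pop_back();
      }
      struct stat st;
      if (stat(path.c_str(), &st) == -1) {
          if (optional) return false; // silently skip the bind mount
          throw std::runtime_error("sandbox path '" + path + "' does not exist");
      }
      return true;
  }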
Fixes #1084
On some architectures (like x86_64 or i686, but not ARM for example)
overflow during integer division causes a crash due to SIGFPE.
Reproduces on a 64-bit system with:

  nix-instantiate --eval -E '(-9223372036854775807 - 1) / -1'

The only way this can happen is when the smallest possible integer is
divided by -1, so just special-case that.
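A minimal sketch of the guard (illustrative, not the evaluator's actual code):

  #include <cstdint>
  #include <stdexcept>

  int64_t checkedDiv(int64_t a, int64_t b)
  {
      if (b == 0) throw std::runtime_error("division by zero");
      // INT64_MIN / -1 would be INT64_MAX + 1, which overflows and
      // raises SIGFPE on x86_64/i686, so special-case it.
      if (a == INT64_MIN && b == -1)
          throw std::runtime_error("overflow in integer division");
      return a / b;
  }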
The removal of CachedFailure caused the value of TimedOut to change,
which broke timed-out handling in Hydra (so timed-out builds would
show up as "aborted" and would be retried, e.g. at
http://hydra.nixos.org/build/42537427).
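The underlying pitfall, sketched with illustrative member names (not the actual enum):

  enum BuildStatus {
      Built,          // 0
      CachedFailure,  // 1 -- removing this silently renumbers the rest
      TimedOut,       // 2 -> would become 1, confusing consumers like Hydra
  };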
The store parameter "write-nar-listing=1" will cause BinaryCacheStore
to write a file ‘<store-hash>.ls.xz’ for each ‘<store-hash>.narinfo’
added to the binary cache. This file contains XZ-compressed JSON
describing the contents of the NAR, excluding the contents of
regular files.
E.g.

  {
    "version": 1,
    "root": {
      "type": "directory",
      "entries": {
        "lib": {
          "type": "directory",
          "entries": {
            "Mcrt1.o": {
              "type": "regular",
              "size": 1288
            },
            "Scrt1.o": {
              "type": "regular",
              "size": 3920
            },
            ...
          }
        }
      }
    }
  }
(The actual file has no indentation.)
This is intended to speed up the NixOS channels programs index
generator [1], since fetching gazillions of large NARs from
cache.nixos.org is currently a bottleneck for updating the regular
(non-small) channel.
[1] https://github.com/NixOS/nixos-channel-scripts/blob/master/generate-programs-index.cc
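As an illustration of consuming such a listing, here is a sketch that prints every path recorded in the JSON, assuming the .ls.xz file has already been decompressed, and using nlohmann/json purely for convenience (an assumption; Nix has its own JSON code):

  #include <iostream>
  #include <string>
  #include <nlohmann/json.hpp>

  using json = nlohmann::json;

  void walk(const json & node, const std::string & path)
  {
      std::string type = node.at("type");
      if (type == "directory")
          for (auto & entry : node.at("entries").items())
              walk(entry.value(), path + "/" + entry.key());
      else if (type == "regular")
          std::cout << path << " (" << node.at("size").get<uint64_t>() << " bytes)\n";
      else
          std::cout << path << " (" << type << ")\n";
  }

  int main()
  {
      json listing;
      std::cin >> listing;  // e.g. xz -d < <hash>.ls.xz | ./walk
      walk(listing.at("root"), "");
  }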
Done slightly differently from https://github.com/NixOS/nix/pull/1093.
This was broken since ff0c0b645cc1448959126185bb2fafe41cf0bddf. Since
I can't figure out how to mount a devpts instance in the sandbox,
let's just bind-mount the host devpts.
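For illustration, the bind mount boils down to something like this (Linux-specific; 'chrootRootDir' is a hypothetical name and error handling is omitted):

  #include <string>
  #include <sys/mount.h>

  // Reuse the host's /dev/pts instead of mounting a fresh devpts
  // instance inside the sandbox.
  void bindMountDevPts(const std::string & chrootRootDir)
  {
      mount("/dev/pts", (chrootRootDir + "/dev/pts").c_str(),
            nullptr, MS_BIND, nullptr);
  }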
http://hydra.nixos.org/build/42025230
Commit 86e8c67efc33cf756500a1dec7fd6313658f2664 broke it, because the
CURL_* constants are not actually #defines (they are enum values,
which the preprocessor cannot test for).
This prevents collisions with the "native" OpenSSL, in particular on
OS X.
Fixes #921.
substituter
This allows the binary cache substituter to pipeline requests.
override rx directory permissions in deletePath()
Fixes #1069.
fails
We can now write

  throw Error("file '%s' not found", path);

instead of

  throw Error(format("file '%s' not found") % path);

and similarly

  printError("file '%s' not found", path);

instead of

  printMsg(lvlError, format("file '%s' not found") % path);
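One way such printf-style overloads can forward to the existing format machinery, sketched with boost::format (illustrative; the actual implementation may differ in detail):

  #include <boost/format.hpp>
  #include <stdexcept>
  #include <string>

  // Feed the arguments one by one into a boost::format object.
  inline void formatHelper(boost::format & f) { }

  template<typename T, typename... Args>
  void formatHelper(boost::format & f, const T & x, const Args & ... args)
  {
      formatHelper(f % x, args...);
  }

  // Hypothetical factory standing in for the Error constructor.
  template<typename... Args>
  std::runtime_error makeError(const std::string & fs, const Args & ... args)
  {
      boost::format f(fs);
      formatHelper(f, args...);
      return std::runtime_error(f.str());
  }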
We were passing "p=$PATH" rather than "p=$PATH;", resulting in some
invalid shell code.
Also, construct a separate environment for the child rather than
overwriting the parent's.
Otherwise the shell and its children will be bound to one CPU core...
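What the fix amounts to, roughly (sched_setaffinity is the real Linux API; the surrounding code is illustrative):

  #define _GNU_SOURCE
  #include <sched.h>

  // Allow the child to run on every CPU again before exec'ing the
  // shell, undoing the single-core mask Nix set for itself.
  void resetAffinity()
  {
      cpu_set_t all;
      CPU_ZERO(&all);
      for (int i = 0; i < CPU_SETSIZE; ++i)
          CPU_SET(i, &all);
      sched_setaffinity(0, sizeof(all), &all);
  }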
The fact that queryPathInfo() is synchronous meant that we needed a
thread for every concurrent binary cache lookup, even though they end
up being handled by the same download thread. Requiring hundreds of
threads is not a good idea. So now there is an asynchronous version of
queryPathInfo() that takes a callback function to process the
result. Similarly, enqueueDownload() now takes a callback rather than
returning a future.
Thus, a command like

  nix path-info --store https://cache.nixos.org/ -r /nix/store/slljrzwmpygy1daay14kjszsr9xix063-nixos-16.09beta231.dccf8c5

that returns 4941 paths now takes 1.87s using only 2 threads (the main
thread and the downloader thread). (This is with a prewarmed
CloudFront.)
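The shape of the callback-style interface, with illustrative signatures (not Nix's exact API):

  #include <exception>
  #include <functional>
  #include <memory>
  #include <string>

  struct ValidPathInfo { std::string path; };

  struct Store
  {
      // Instead of blocking, the caller registers continuations that
      // the downloader thread invokes when the lookup completes.
      virtual void queryPathInfo(
          const std::string & storePath,
          std::function<void(std::shared_ptr<ValidPathInfo>)> success,
          std::function<void(std::exception_ptr)> failure) = 0;
      virtual ~Store() { }
  };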
Having the logger function potentially throw exceptions is
Heisenbuggy.
It's a slight misnomer now because it actually limits *all* downloads,
not just binary cache lookups.
Also add an "enable-http2" option to allow disabling the use of
HTTP/2 (enabled by default).
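For example (assuming the same nix.conf syntax as the other options shown in this log):

  enable-http2 = false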
The binary cache store can now use HTTP/2 to do lookups. This is much
more efficient than HTTP/1.1 due to multiplexing: we can issue many
requests in parallel over a single TCP connection. Thus it's no longer
necessary to use a bunch of concurrent TCP connections (25 by
default).
For example, downloading 802 .narinfo files from
https://cache.nixos.org/, using a single TCP connection, takes 11.8s
with HTTP/1.1, but only 0.61s with HTTP/2.
This did require a fairly substantial rewrite of the Downloader class
to use the curl multi interface, because otherwise curl wouldn't be
able to do multiplexing for us. As a bonus, we get connection reuse
even with HTTP/1.1. All downloads are now handled by a single worker
thread. Clients call Downloader::enqueueDownload() to tell the worker
thread to start the download, getting a std::future to the result.
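A toy model of the enqueue-and-future pattern (curl is stubbed out; in the real Downloader the single worker thread drives a curl multi handle instead):

  #include <condition_variable>
  #include <future>
  #include <iostream>
  #include <mutex>
  #include <queue>
  #include <string>
  #include <thread>
  #include <utility>

  struct Request {
      std::string uri;
      std::promise<std::string> result;
  };

  std::mutex mtx;
  std::condition_variable cv;
  std::queue<Request> requests;

  std::future<std::string> enqueueDownload(const std::string & uri)
  {
      Request req{uri, {}};
      auto fut = req.result.get_future();
      {
          std::lock_guard<std::mutex> lock(mtx);
          requests.push(std::move(req));
      }
      cv.notify_one();
      return fut;
  }

  void workerLoop()  // stand-in for the curl multi event loop
  {
      for (;;) {
          std::unique_lock<std::mutex> lock(mtx);
          cv.wait(lock, [] { return !requests.empty(); });
          Request req = std::move(requests.front());
          requests.pop();
          lock.unlock();
          req.result.set_value("contents of " + req.uri); // fake transfer
      }
  }

  int main()
  {
      std::thread worker(workerLoop);
      std::cout << enqueueDownload("https://cache.nixos.org/nix-cache-info").get() << "\n";
      worker.detach(); // toy program: let the OS reap the thread on exit
  }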