|
|
|
*headdesk*
*headdesk*
*headdesk*
So since commit 22144afa8d9f8968da351618a1347072a93bd8aa, Nix hasn't
actually checked whether the content of a downloaded NAR matches the
hash specified in the manifest / NAR info file. Urghhh...
|
|
|
|
|
|
AFAIK, nobody uses it, it's not maintained, and it has no tests.
|
|
|
|
This is mostly useful for Hydra to deal with builders that get stuck
in an infinite loop writing data to stdout/stderr.
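For illustration, a minimal sketch (not Nix's actual implementation) of the mechanism: count the bytes the builder writes to its log and abort the build once a configurable limit is exceeded. The names LogLimiter and maxLogSize are made up for the example.

    #include <cstddef>
    #include <stdexcept>

    struct LogLimiter
    {
        size_t maxLogSize;       // configured limit; 0 means "no limit"
        size_t bytesLogged = 0;

        // Call for every chunk read from the builder's stdout/stderr.
        void add(size_t len)
        {
            bytesLogged += len;
            if (maxLogSize != 0 && bytesLogged > maxLogSize)
                throw std::runtime_error("build log exceeded maximum size; killing the builder");
        }
    };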
|
|
On Linux, Nix can build i686 packages even on x86_64 systems. It's not
enough to recognize this situation from settings.thisSystem; we also have
to consult uname(). E.g. we can be running on an i686 Debian with an
amd64 kernel. In that situation settings.thisSystem is i686-linux, but
we still need to change the personality to i686 to make builds consistent.
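As a sketch of the mechanism (assumed for illustration, not copied from the Nix source), the build process can switch to a 32-bit personality before exec'ing the builder, so that uname() inside the build reports an i686 machine:

    #include <sys/personality.h>
    #include <cstdio>

    static void switchTo32Bit()
    {
        // After this, uname() reports an i686 machine type, even on an
        // x86_64 kernel (Linux-specific).
        if (personality(PER_LINUX32) == -1)
            perror("personality");
    }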
|
|
On a system with multiple CPUs, running Nix operations through the
daemon is significantly slower than "direct" mode:
$ NIX_REMOTE= nix-instantiate '<nixos>' -A system
real 0m0.974s
user 0m0.875s
sys 0m0.088s
$ NIX_REMOTE=daemon nix-instantiate '<nixos>' -A system
real 0m2.118s
user 0m1.463s
sys 0m0.218s
The main reason seems to be that the client and the worker get moved
to a different CPU after every call to the worker. This patch adds a
hack to lock them to the same CPU. With this, the overhead of going
through the daemon is very small:
$ NIX_REMOTE=daemon nix-instantiate '<nixos>' -A system
real 0m1.074s
user 0m0.809s
sys 0m0.098s
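A minimal sketch of this kind of hack (illustrative only; the actual patch coordinates the affinity between client and daemon): pin the calling process to a single CPU with sched_setaffinity(). The CPU number passed in is arbitrary here.

    #include <sched.h>   // Linux-specific; needs _GNU_SOURCE (defined by default with g++)
    #include <cstdio>

    static void lockToCpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        // pid 0 = the calling process.
        if (sched_setaffinity(0, sizeof(set), &set) == -1)
            perror("sched_setaffinity");
    }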
|
|
This reverts commit 69b8f9980f39c14a59365a188b300a34d625a2cd.
The timeout should be enforced remotely. Otherwise, if the garbage
collector is running either locally or remotely, it will block the
build or closure copying for some time. If the garbage collector
takes too long, the build may time out, which is not what we want.
Also, on heavily loaded systems, copying large paths to and from the
remote machine can take a long time, also potentially resulting in a
timeout.
|
|
mount(2) with MS_BIND allows mounting a regular file on top of a regular
file, so there's no reason to only bind directories. This allows finer
control over just which files are and aren't included in the chroot
without having to build symlink trees or the like.
Signed-off-by: Shea Levy <shea@shealevy.com>
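For example (paths made up; the destination file must already exist), a single file can be bind-mounted into the chroot like this:

    #include <sys/mount.h>
    #include <cstdio>

    int main()
    {
        // Bind-mount a host file on top of a (pre-created, possibly empty)
        // file inside the chroot; no symlink tree needed.
        if (mount("/etc/resolv.conf", "/chroot/etc/resolv.conf",
                  nullptr, MS_BIND, nullptr) == -1)
            perror("mount");
        return 0;
    }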
|
|
With C++ std::map, doing a comparison like ‘map["foo"] == ...’ has the
side-effect of adding a mapping from "foo" to the empty string if
"foo" doesn't exist in the map. So we ended up setting some
environment variables by accident.
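A small self-contained example of that side effect, and the side-effect-free alternative using find():

    #include <cassert>
    #include <map>
    #include <string>

    int main()
    {
        std::map<std::string, std::string> env;

        // Looks like a pure comparison, but inserts env["foo"] = "".
        if (env["foo"] == "1") { /* ... */ }
        assert(env.size() == 1);

        // No insertion happens here.
        auto i = env.find("bar");
        if (i != env.end() && i->second == "1") { /* ... */ }
        assert(env.count("bar") == 0);

        return 0;
    }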
|
|
In particular this means that "trivial" derivations such as writeText
are not substituted, reducing the number of GET requests to the binary
cache by about 200 on a typical NixOS configuration.
|
|
|
|
Before calling dumpPath(), we have to make sure the files are owned by
the build user. Otherwise, the build could contain a hard link to
(say) /etc/shadow, which would then be read by the daemon and
rewritten as a world-readable file.
This only affects systems that don't have hard link restrictions
enabled.
|
|
The assertion in canonicalisePathMetaData() failed because the
ownership of the path had already changed due to the hash rewriting. The
solution is not to check the ownership of rewritten paths.
Issue #122.
|
|
Issue #122.
|
|
Otherwise subsequent invocations of "--repair" will keep rebuilding
the path. This only happens if the path content differs between
builds (e.g. due to timestamps).
|
|
This doesn't work if there is no output named "out". Hydra didn't use
it anyway.
|
|
|
|
Don't pass --timeout / --max-silent-time to the remote builder.
Instead, let the local Nix process terminate the build if it exceeds a
timeout. The remote builder will be killed as a side-effect. This
gives better error reporting (since the timeout message from the
remote side wasn't properly propagated) and handles non-Nix problems
like SSH hangs.
|
|
I'm not sure if it has ever worked correctly. The line "lastWait =
after;" seems to mean that the timer was reset every time a build
produced log output.
Note that the timeout is now per build, as documented ("the maximum
number of seconds that a builder can run").
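To make the distinction concrete, an illustrative sketch (not Nix's code): the silence timer is reset whenever the builder produces output, while the overall timeout is measured from the start of the build and never reset.

    #include <ctime>

    struct BuildTimers
    {
        time_t started;        // set once, when the build starts
        time_t lastOutput;     // updated whenever the builder writes output
        time_t maxSilentTime;  // 0 = no limit
        time_t timeout;        // 0 = no limit

        bool expired(time_t now) const
        {
            if (maxSilentTime && now - lastOutput >= maxSilentTime) return true;
            if (timeout && now - started >= timeout) return true;  // never reset
            return false;
        }
    };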
|
|
|
|
|
|
This reverts commit 28bba8c44f484eae38e8a15dcec73cfa999156f6.
|
|
|
|
It turns out that in multi-user Nix, a builder may be able to do
ln /etc/shadow $out/foo
Afterwards, canonicalisePathMetaData() will be applied to $out/foo,
causing /etc/shadow's mode to be set to 444 (readable by everybody but
writable by nobody). That's obviously Very Bad.
Fortunately, this fails in NixOS's default configuration because
/nix/store is a bind mount, so "ln" will fail with "Invalid
cross-device link". It also fails if hard-link restrictions are
enabled, so a workaround is:
echo 1 > /proc/sys/fs/protected_hardlinks
The solution is to check that all files in $out are owned by the build
user. This means that innocuous operations like "ln
${pkgs.foo}/some-file $out/" are now rejected, but that already failed
in chroot builds anyway.
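A minimal sketch of such a check (function name and parameters are made up): refuse any file in $out whose owner is not the build user, which is exactly what a hard link to a foreign file like /etc/shadow would look like.

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <stdexcept>
    #include <string>

    static void checkOwnership(const std::string & path, uid_t buildUser)
    {
        struct stat st;
        if (lstat(path.c_str(), &st) == -1)
            throw std::runtime_error("cannot stat '" + path + "'");
        if (st.st_uid != buildUser)
            throw std::runtime_error(
                "output file '" + path + "' is not owned by the build user; "
                "it may be a hard link to a file outside the build");
    }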
|
|
|
|
...where <XX> is the first two characters of the derivation.
Otherwise /nix/var/log/nix/drvs may become so large that we run into
all sorts of weird filesystem limits/inefficiencies. For instance,
ext3/ext4 filesystems will barf with "ext4_dx_add_entry:1551:
Directory index full!" once you hit a few million files.
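An illustrative sketch of the resulting layout (helper name and exact path format are assumptions, not the real code):

    #include <string>

    // E.g. "x1y2...-hello-2.8.drv" -> "/nix/var/log/nix/drvs/x1/y2...-hello-2.8.drv"
    static std::string logPathFor(const std::string & drvName)
    {
        const std::string logDir = "/nix/var/log/nix/drvs";
        return logDir + "/" + drvName.substr(0, 2) + "/" + drvName.substr(2);
    }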
|
|
Doing this once makes subsequent operations like garbage collecting
more efficient since we don't have to call makeMutable() first.
|
|
substituter
Issue #77.
|
|
Fixes #77.
|
|
Fixes #24.
|
|
Waiting for the hook to shut down cleanly sometimes seems to lead to
hangs.
|
|
This reverts commit cc511fd65b7b6de9e87e72fb4bed16fc7efeb8b7.
|
|
|
|
If a derivation has multiple outputs, then we only want to download
those outputs that are actually needed. So if we do "nix-build -A
openssl.man", then only the "man" output should be downloaded.
Likewise if another package depends on ${openssl.man}.
The tricky part is that different derivations can depend on different
outputs of a given derivation, so we may need to restart the
corresponding derivation goal if that happens.
|
|
For example, given a derivation with outputs "out", "man" and "bin":
$ nix-build -A pkg
produces ./result pointing to the "out" output;
$ nix-build -A pkg.man
produces ./result-man pointing to the "man" output;
$ nix-build -A pkg.all
produces ./result, ./result-man and ./result-bin;
$ nix-build -A pkg.all -A pkg2
produces ./result, ./result-man, ./result-bin and ./result-2.
|
|
vfork() is just too weird. For instance, in this build:
http://hydra.nixos.org/build/3330487
the value of fromHook.writeSide becomes corrupted in the parent, even
though the child only reads from it. At -O0 the problem goes away.
Probably the child is overwriting some spilled temporary variable.
If I get bored I may implement this using posix_spawn() instead.
|
|
Slightly scared of using std::cerr in a vforked process...
|
|
Hopefully this reduces the chance of hitting ‘unable to fork: Cannot
allocate memory’ errors. vfork() is used for everything except
starting builders.
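The only safe way to use vfork() is to exec (or _exit) immediately in the child; a generic sketch of that pattern (not Nix's actual spawning code):

    #include <unistd.h>

    static pid_t spawn(const char * path, char * const argv[], char * const envp[])
    {
        pid_t pid = vfork();
        if (pid == 0) {
            execve(path, argv, envp);
            _exit(127); // only reached if execve() fails
        }
        return pid; // -1 if vfork() failed, otherwise the child's pid
    }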
|
|
They are unnecessary because we set the close-on-exec flag.
|
|
|
|
Fixes #57.
|
|
|
|
|
|
|
|
If we find a corrupted path in the output closure, we rebuild the
derivation that produced that particular path.
|
|
With this flag, if any valid derivation output is missing or corrupt,
it will be recreated by using a substitute if available, or by
rebuilding the derivation. The latter may use hash rewriting if
chroots are not available.
|
|
This operation allows fixing corrupted or accidentally deleted store
paths by redownloading them using substituters, if available.
Since the corrupted path cannot be replaced atomically, there is a
very small time window (one system call) during which neither the old
(corrupted) nor the new (repaired) contents are available. So
repairing should be used with some care on critical packages like
Glibc.
|