Age | Commit message | Author | Files | Lines
|
I've seen operations like "nix-store --import" take much longer on one
system. So default to off until I've investigated this a bit further.
|
Put all Nix configuration flags in a Settings object.
|
|
Previously substituters could read nix.conf themselves, but this
didn't take --option flags into account.
|
Getting substitute information using the binary cache substituter has
non-trivial latency overhead. A package or NixOS system configuration
can have hundreds of dependencies, and in the worst case (when the
local info cache is empty) we have to do a separate HTTP request for
each of these. If the ping time to the server is t, getting N info
files will take tN seconds; e.g., with a ping time of 0.1s to
nixos.org, sequentially downloading 1000 info files (a typical NixOS
config) will take at least 100 seconds.
To fix this problem, the binary cache substituter can now perform
requests in parallel. This required changing the substituter
interface to support a function querySubstitutablePathInfos() that
queries multiple paths at the same time, and rewriting queryMissing()
to take advantage of parallelism. (Due to local caching,
parallelising queryMissing() is sufficient for most use cases, since
it's almost always called before building a derivation and thus fills
the local info cache.)
For example, parallelism speeds up querying all 1056 paths in a
particular NixOS system configuration from 116s to 2.6s. It works so
well because the eccentricity of the top-level derivation in the
dependency graph is only 9. So we only need 10 round-trips (when
using an unlimited number of parallel connections) to get everything.
Currently we do a maximum of 150 parallel connections to the server.
Thus it's important that the binary cache server (e.g. nixos.org) has
a high connection limit. Alternatively we could use HTTP pipelining,
but WWW::Curl doesn't support it and libcurl has a hard-coded limit of
5 requests per pipeline.
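As a rough illustration of the interface change (the type and member names below are illustrative, not necessarily the exact ones in libstore), the substituter gains a batched query next to the old per-path one:

    #include <map>
    #include <set>
    #include <string>

    typedef std::string Path;
    typedef std::set<Path> PathSet;

    // What a substituter can tell us about a path it is able to supply.
    struct SubstitutablePathInfo
    {
        Path deriver;                    // empty if unknown
        PathSet references;
        unsigned long long downloadSize;
    };

    typedef std::map<Path, SubstitutablePathInfo> SubstitutablePathInfos;

    struct StoreAPI
    {
        // Old interface: one query (and thus one potential HTTP round-trip)
        // per path.
        virtual bool querySubstitutablePathInfo(const Path & path,
            SubstitutablePathInfo & info) = 0;

        // New interface: query many paths at once, so the substituter can
        // issue its requests in parallel; paths it knows nothing about are
        // simply left out of `infos'.
        virtual void querySubstitutablePathInfos(const PathSet & paths,
            SubstitutablePathInfos & infos) = 0;

        virtual ~StoreAPI() { }
    };

queryMissing() can then collect the whole set of candidate paths and hand it to the substituter in one call instead of looping over the paths one by one.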
|
|
libstore so that the Perl bindings can use it as well. It's vital
that the Perl bindings use the configuration file, because otherwise
nix-copy-closure will fail with a ‘database locked’ message if the
value of ‘use-sqlite-wal’ is changed from the default.
|
hook script proper, and the stdout/stderr of the builder. Only the
latter should be saved in /nix/var/log/nix/drvs.
* Allow the verbosity to be set through an option.
* Added a flag --quiet to lower the verbosity level.
|
|
expressions.
This patch adds the configuration file variable "build-cores" and the
command line argument "--cores". These settings specify the number of
CPU cores to utilize for parallel building within a job, i.e. by passing
an appropriate "-j" flag to GNU Make. The default value is 1, which
means that parallel building is *disabled*. If the number of build cores
is specified as 0 (synonymously: "guess" or "auto"), then the actual
value is supposed to be auto-detected by builders at run-time, i.e. by
calling the nproc(1) utility from coreutils.
The environment variable $NIX_BUILD_CORES is available to builders, but
the contents of that variable do *not* influence the hash that goes
into the $out store path, i.e. the number of build cores to be utilized
can be changed at will without requiring any re-builds.
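For illustration only (this helper is hypothetical, not code from this change), a builder could interpret $NIX_BUILD_CORES along these lines, treating 0 as "detect at run time":

    #include <cstdlib>
    #include <thread>

    // Hypothetical builder-side helper: how many jobs to pass to "make -j".
    // "0" means auto-detect, mirroring the "guess"/"auto" semantics above.
    unsigned int buildCores()
    {
        const char * s = std::getenv("NIX_BUILD_CORES");
        unsigned long n = s ? std::strtoul(s, 0, 10) : 1;
        if (n == 0) {
            n = std::thread::hardware_concurrency();   // what nproc(1) reports
            if (n == 0) n = 1;                         // detection failed: build serially
        }
        return (unsigned int) n;
    }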
|
|
poll for it (i.e. if we can't acquire the lock, then let the main
select() loop wait for at most a few seconds and then try again).
This improves parallelism: if two nix-store processes are both
trying to build a path at the same time, the second one shouldn't
block; it should first see if it can build other goals. Also, it
prevents the deadlocks that have been occurring in Hydra lately,
where a process waits for a lock held by another process that's
waiting for a lock held by the first.
The downside is that polling isn't really elegant, but POSIX doesn't
provide a way to wait for locks in a select() loop. The only
solution would be to spawn a thread for each lock to do a blocking
fcntl() and then signal the main thread, but that would require
pthreads.
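The non-blocking attempt is just fcntl() with F_SETLK instead of F_SETLKW; roughly (a sketch, not the exact libutil code):

    #include <fcntl.h>
    #include <unistd.h>
    #include <cerrno>

    // Try to take an exclusive lock on `fd' without blocking.  Returns false
    // if another process holds the lock, in which case the caller goes back
    // to its select() loop and retries a few seconds later.
    bool tryLockFile(int fd)
    {
        struct flock fl;
        fl.l_type = F_WRLCK;     // exclusive lock
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;            // the whole file

        while (fcntl(fd, F_SETLK, &fl) == -1) {      // F_SETLK never blocks
            if (errno == EACCES || errno == EAGAIN)  // somebody else has it
                return false;
            if (errno != EINTR)                      // real error; treat as "not acquired"
                return false;
        }
        return true;
    }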
|
|
command line (e.g. "--option build-use-chroot true").
|
|
don't have to put the chroot in /nix/var/nix/chroots anymore.
They're back in /tmp now.
|
build progress.
|
|
disasters involving `rm -rf' on bind mounts. Will try the
definitive fix (per-process mounts, apparently possible via the
CLONE_NEWNS flag in clone()) some other time.
|
|
* queryDeriver in daemon mode: don't barf if the other side returns an
empty string (which means there is no deriver).
|
|
need any info on substitutable paths, we just call the substituters
(such as download-using-manifests.pl) directly. This means that
it's no longer necessary for nix-pull to register substitutes or for
nix-channel to clear them, which makes those operations much faster
(NIX-95). Also, we don't have to worry about keeping nix-pull
manifests (in /nix/var/nix/manifests) and the database in sync with
each other.
The downside is that there is some overhead in calling an external
program to get the substitutes info. For instance, "nix-env -qas"
takes a bit longer.
Abolishing the substitutes table also makes the logic in
local-store.cc simpler, as we don't need to store info for invalid
paths. On the downside, you cannot do things like "nix-store -qR"
on a substitutable but invalid path (but nobody did that anyway).
* Never catch interrupts (the Interrupted exception).
|
|
seconds without producing output on stdout or stderr (NIX-65). This
timeout can be specified using the `--max-silent-time' option or the
`build-max-silent-time' configuration setting. The default is
infinity (0).
* Fix a tricky race condition: if we kill the build user before the
child has done its setuid() to the build user uid, then it won't be
killed, and we'll potentially lock up in pid.wait(). So also send a
conventional kill to the child.
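The silence check itself is cheap; conceptually (a sketch, with lastOutput standing for the time the builder last wrote to stdout or stderr):

    #include <ctime>

    // Returns how many seconds select() may still sleep before the build
    // has to be killed for being silent too long, or -1 if that point has
    // already passed.  maxSilentTime == 0 means "no timeout".
    long silenceBudget(time_t now, time_t lastOutput, unsigned int maxSilentTime)
    {
        if (maxSilentTime == 0) return 3600;         // infinity: any long nap will do
        long remaining = (long) (lastOutput + (time_t) maxSilentTime - now);
        return remaining > 0 ? remaining : -1;
    }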
|
|
* Allow the worker path to be overridden through the NIX_WORKER
environment variable.
|
|
* Optimise header file usage a bit.
* Compile the parser as C++.
|
Nix config file.
|
|
64 running 64-bit SUSE). A patched ATerm library is required to run Nix
successfully.
|
|
through the new `gc-reserved-space' option.
|
|
root (or setuid root), then builds will be performed under one of
the users listed in the `build-users' configuration variable. This
is to make it impossible to influence build results externally,
allowing locally built derivations to be shared safely between
users (see ASE-2005 paper).
To do: only one builder should be active per build user.
|
to derivations in user environments. Nice for developers (since it
prevents build-time-only dependencies from being GC'ed, in
conjunction with `gc-keep-outputs'). Turned off by default.
|
|
derivations should be kept.
|
|
permission to the Nix store or database. E.g., `nix-env -qa' will
work, but `nix-env -qas' won't (the latter needs DB access). The
option `--readonly-mode' forces this mode; otherwise, it's only
activated when the database cannot be opened.
|
|
* Builder output is written to standard error by default.
* The option `-B' is gone.
* The option `-Q' suppresses builder output.
The result of this is that most Nix invocations shouldn't need any
flags w.r.t. logging.
|
|
Whenever Nix attempts to realise a derivation for which a closure is
already known, but this closure cannot be realised, fall back on
normalising the derivation.
The most common scenario in which this is useful is when we have
registered substitutes in order to perform binary distribution from,
say, a network repository. If the repository is down, the
realisation of the derivation will fail. When this option is
specified, Nix will build the derivation instead. Thus, binary
installation falls back on a source installation. This option is
not the default since it is generally not desirable for a transient
failure in obtaining the substitutes to lead to a full build from
source (with the related consumption of resources).
|
|
much as possible. (This is similar to GNU Make's `-k' flag.)
* Refactoring to implement this: previously we just bombed out when
a build failed, but now we have to clean up. In particular this
means that goals must be freed quickly --- they shouldn't hang
around until the worker exits. So the worker now maintains weak
pointers in order not to prevent garbage collection.
* Documented the `-k' and `-j' flags.
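The weak-pointer bookkeeping mentioned above looks roughly like this (a sketch; std::weak_ptr is used here purely for illustration):

    #include <map>
    #include <memory>
    #include <string>

    struct Goal { std::string name; /* ... */ };
    typedef std::shared_ptr<Goal> GoalPtr;
    typedef std::weak_ptr<Goal> WeakGoalPtr;

    // The worker only keeps weak references to goals, keyed by store path.
    // Once nobody is waiting on a goal any more, the last strong reference
    // disappears and the goal is freed, even though the map still has an
    // (expired) entry for it.
    std::map<std::string, WeakGoalPtr> derivationGoals;

    GoalPtr makeDerivationGoal(const std::string & drvPath)
    {
        WeakGoalPtr & weak = derivationGoals[drvPath];
        GoalPtr goal = weak.lock();          // still alive?  then reuse it
        if (!goal) {
            goal = std::make_shared<Goal>();
            goal->name = drvPath;
            weak = goal;                     // store only the weak reference
        }
        return goal;
    }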
|
|
no limit).
* Add missing file to distribution.
|
|
verbosity level.
|
|
* Replace all directory reading code by a generic readDirectory()
function.
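Such a helper might look roughly like this (a sketch; the real function's signature may differ):

    #include <dirent.h>
    #include <cerrno>
    #include <cstring>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Return the names of all entries in `path', excluding "." and "..".
    std::vector<std::string> readDirectory(const std::string & path)
    {
        std::vector<std::string> names;

        DIR * dir = opendir(path.c_str());
        if (!dir)
            throw std::runtime_error("cannot open directory `" + path + "': "
                + std::strerror(errno));

        struct dirent * entry;
        errno = 0;
        while ((entry = readdir(dir))) {
            std::string name = entry->d_name;
            if (name != "." && name != "..") names.push_back(name);
            errno = 0;
        }
        int err = errno;              // readdir() reports errors via errno
        closedir(dir);
        if (err)
            throw std::runtime_error("cannot read directory `" + path + "'");

        return names;
    }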
|