Fixes #118.
If a path is not garbage solely because it's reachable from a root
due to the gc-keep-outputs or gc-keep-derivations settings, ‘nix-store
-q --roots’ now shows that root.
This allows repairing corrupted derivations and other source files.
With this flag, if any valid derivation output is missing or corrupt,
it will be recreated by using a substitute if available, or by
rebuilding the derivation. The latter may use hash rewriting if
chroots are not available.
That is, delete almost nothing (it will still remove unused links from
/nix/store/.links).
To implement binary caches efficiently, Hydra needs to be able to map
the hash part of a store path (e.g. "gbg...zr7") to the full store
path (e.g. "/nix/store/gbg...kzr7-subversion-1.7.5"). (The binary
cache mechanism uses hash parts as a key for looking up store paths to
ensure privacy.) However, doing a search in the Nix store for
/nix/store/<hash>* is expensive since it requires reading the entire
directory. queryPathFromHashPart() prevents this by doing a cheap
database lookup.
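
A minimal sketch of such a lookup, assuming an SQLite ValidPaths
table with an indexed path column (schema and names are
approximations, not the exact Nix code):

  #include <sqlite3.h>
  #include <string>

  // Find the full store path for a hash part with one indexed lookup
  // instead of scanning the /nix/store directory.
  std::string queryPathFromHashPart(sqlite3 * db, const std::string & hashPart)
  {
      std::string prefix = "/nix/store/" + hashPart;
      std::string result;
      sqlite3_stmt * stmt;
      if (sqlite3_prepare_v2(db,
              "select path from ValidPaths where path >= ? limit 1;",
              -1, &stmt, nullptr) != SQLITE_OK)
          return result;
      sqlite3_bind_text(stmt, 1, prefix.c_str(), -1, SQLITE_TRANSIENT);
      if (sqlite3_step(stmt) == SQLITE_ROW) {
          std::string path = (const char *) sqlite3_column_text(stmt, 0);
          // The first path >= the prefix is a match only if it
          // actually starts with the prefix.
          if (path.compare(0, prefix.size(), prefix) == 0) result = path;
      }
      sqlite3_finalize(stmt);
      return result;
  }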
querySubstitutablePaths() takes a set of paths, so this greatly
reduces daemon <-> client latency.
queryValidPaths() combines multiple calls to isValidPath() in one.
This matters when using the Nix daemon because it reduces latency.
For instance, on "nix-env -qas \*" it reduces execution time from 5.7s
to 4.7s (which is indistinguishable from the non-daemon case).
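
A sketch of the shape of these batched calls (a hedged illustration;
the real libstore signatures may differ in detail):

  #include <set>
  #include <string>

  typedef std::set<std::string> PathSet;

  struct StoreAPI
  {
      // Old style: one daemon round-trip per path.
      virtual bool isValidPath(const std::string & path) = 0;

      // Batched style: one round-trip for the whole set. Both calls
      // return the subset of `paths` that satisfies the query.
      virtual PathSet queryValidPaths(const PathSet & paths) = 0;
      virtual PathSet querySubstitutablePaths(const PathSet & paths) = 0;

      virtual ~StoreAPI() { }
  };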
Also removed querySubstitutablePathInfo().
Getting substitute information using the binary cache substituter has
non-trivial latency overhead. A package or NixOS system configuration
can have hundreds of dependencies, and in the worst case (when the
local info cache is empty) we have to do a separate HTTP request for
each of these. If the ping time to the server is t, getting N info
files will take tN seconds; e.g., with a ping time of 0.1s to
nixos.org, sequentially downloading 1000 info files (a typical NixOS
config) will take at least 100 seconds.

To fix this problem, the binary cache substituter can now perform
requests in parallel. This required changing the substituter
interface to support a function querySubstitutablePathInfos() that
queries multiple paths at the same time, and rewriting queryMissing()
to take advantage of parallelism. (Due to local caching,
parallelising queryMissing() is sufficient for most use cases, since
it's almost always called before building a derivation and thus fills
the local info cache.)

For example, parallelism speeds up querying all 1056 paths in a
particular NixOS system configuration from 116s to 2.6s. It works so
well because the eccentricity of the top-level derivation in the
dependency graph is only 9. So we only need 10 round-trips (when
using an unlimited number of parallel connections) to get everything.

Currently we do a maximum of 150 parallel connections to the server.
Thus it's important that the binary cache server (e.g. nixos.org) has
a high connection limit. Alternatively we could use HTTP pipelining,
but WWW::Curl doesn't support it and libcurl has a hard-coded limit of
5 requests per pipeline.
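
The batched call might look roughly like this (a sketch; type and
field names approximate the libstore ones):

  #include <map>
  #include <set>
  #include <string>

  typedef std::string Path;
  typedef std::set<Path> PathSet;

  struct SubstitutablePathInfo
  {
      Path deriver;
      PathSet references;
      unsigned long long downloadSize; // estimated download size
      unsigned long long narSize;      // size of the NAR serialisation
  };

  typedef std::map<Path, SubstitutablePathInfo> SubstitutablePathInfos;

  struct Substituter
  {
      // Query many paths in one call, so the implementation can issue
      // its HTTP requests in parallel; paths for which no substitute
      // exists are simply absent from `infos`.
      virtual void querySubstitutablePathInfos(const PathSet & paths,
          SubstitutablePathInfos & infos) = 0;

      virtual ~Substituter() { }
  };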
I.e. when multiple non-derivation arguments are passed to ‘nix-store
-r’ to be substituted, do them in parallel.
We can't open a SQLite database if the disk is full. Since this
prevents the garbage collector from running when it's most needed, we
reserve some dummy space that we can free just before doing a garbage
collection. This actually revives some old code from the Berkeley DB
days.
Fixes #27.
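
A minimal sketch of the trick, with an illustrative file location and
size (not necessarily the ones Nix uses):

  #include <cstddef>
  #include <cstdio>
  #include <fstream>
  #include <string>

  const std::string reservedPath = "/nix/var/nix/db/reserved"; // illustrative
  const std::size_t reservedSize = 1024 * 1024;                // illustrative: 1 MiB

  // Create a dummy file so that this much disk space is always claimed.
  void createReservedSpace()
  {
      std::ofstream f(reservedPath, std::ios::binary);
      std::string dummy(reservedSize, 'X');
      f.write(dummy.data(), dummy.size());
  }

  // Called just before garbage collection: deleting the dummy file
  // frees enough space for SQLite to open the database even on an
  // otherwise full disk.
  void freeReservedSpace()
  {
      std::remove(reservedPath.c_str());
  }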
We don't need this anymore now that current filesystems support more
than 32,000 files in a directory.
necessary because existing code assumes that the references graph is
acyclic.
stream it's now necessary for the daemon to process the entire
sequence of exported paths, rather than letting the client do it.
daemon (which is an error), print a nicer error message than
"Connection reset by peer" or "broken pipe".
* In the daemon, log errors that occur during request parameter
processing.
‘nix-store --export’.
* Add a Perl module that provides the functionality of
‘nix-copy-closure --to’. This is used by build-remote.pl so it no
longer needs to start a separate nix-copy-closure process. Also, it
uses the Perl API to do the export, so it doesn't need to start a
separate nix-store process either. As a result, nix-copy-closure
and build-remote.pl should no longer fail on very large closures due
to an "Argument list too long" error. (Note that having very many
dependencies in a single derivation can still fail because the
environment can become too large. Can't be helped though.)
derivation paths
This required adding a queryOutputDerivationNames function in the
store API.
This should also fix:

  nix-instantiate: ./../boost/shared_ptr.hpp:254: T* boost::shared_ptr<T>::operator->() const [with T = nix::StoreAPI]: Assertion `px != 0' failed.

which was caused by hashDerivationModulo() using the ‘store’ object
(during store upgrades) before openStore() assigned it.
derivations added to the store by clients have "correct" output
paths (meaning that the output paths are computed by hashing the
derivation according to a certain algorithm). This means that a
malicious user could craft a special .drv file to build *any*
desired path in the store with any desired contents (so long as the
path doesn't already exist). Then the attacker just needs to wait
for a victim to come along and install the compromised path.

For instance, if Alice (the attacker) knows that the latest Firefox
derivation in Nixpkgs produces the path

  /nix/store/1a5nyfd4ajxbyy97r1fslhgrv70gj8a7-firefox-5.0.1

then (provided this path doesn't already exist) she can craft a .drv
file that creates that path (i.e., has it as one of its outputs),
add it to the store using "nix-store --add", and build it with
"nix-store -r". So the fake .drv could write a Trojan to the
Firefox path. Then, if user Bob (the victim) comes along and does

  $ nix-env -i firefox
  $ firefox

he executes the Trojan injected by Alice.

The fix is to have the Nix daemon verify that derivation outputs are
correct (in addValidPath()). This required some refactoring to move
the hash computation code to libstore.
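
A sketch of what the check amounts to (helper names and signatures
are approximate stand-ins for the libstore ones):

  #include <map>
  #include <stdexcept>
  #include <string>

  typedef std::string Path;
  struct Hash { std::string base16; };
  struct DerivationOutput { Path path; };
  struct Derivation { std::map<std::string, DerivationOutput> outputs; };

  // Stand-ins for the real libstore helpers.
  Hash hashDerivationModulo(const Derivation & drv);
  Path makeOutputPath(const std::string & outputName,
      const Hash & h, const std::string & drvName);
  std::string storePathToName(const Path & drvPath);

  // Recompute what the output paths of `drv` should be and refuse to
  // register the derivation as valid if the client lied about them.
  void checkDerivationOutputs(const Path & drvPath, const Derivation & drv)
  {
      std::string drvName = storePathToName(drvPath);
      Hash h = hashDerivationModulo(drv);
      for (auto & out : drv.outputs) {
          Path expected = makeOutputPath(out.first, h, drvName);
          if (out.second.path != expected)
              throw std::runtime_error(
                  "derivation '" + drvPath + "' has incorrect output path");
      }
  }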
will approximately require.
size of the NAR serialisation of the path, i.e., `nix-store --dump
PATH'). This is useful for Hydra.
because it defines _FILE_OFFSET_BITS. Without this, on
OpenSolaris the system headers define it to be 32, and then
the 32-bit stat() ends up being called with a 64-bit "struct
stat", or vice versa.
This also ensures that we get 64-bit file sizes everywhere.
* Remove the redundant call to stat() in parseExprFromFile().
The file cannot be a symlink because that's the exit condition
of the loop before.
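
An illustrative example of the rule: define the macro before any
system header so that every translation unit sees the same "struct
stat":

  #define _FILE_OFFSET_BITS 64 /* must precede all system headers */

  #include <sys/stat.h>
  #include <cstdio>

  int main(int argc, char * * argv)
  {
      struct stat st;
      if (argc > 1 && stat(argv[1], &st) == 0)
          /* st_size is 64-bit even on 32-bit platforms, so files
             over 2 GiB report correct sizes. */
          std::printf("%lld bytes\n", (long long) st.st_size);
      return 0;
  }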
`nix-store --query-failed-paths'.
which requires more I/O.
with the same name as the output) and instead use the
DerivationOutputs table in the database, which is the correct way
to do things.
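
A sketch of resolving outputs from that table (table and column names
from memory of the Nix schema; treat them as approximate):

  #include <sqlite3.h>
  #include <map>
  #include <string>

  // Map output id (e.g. "out") to store path for one derivation,
  // identified here by its ValidPaths row id.
  std::map<std::string, std::string>
  queryDerivationOutputs(sqlite3 * db, sqlite3_int64 drvId)
  {
      std::map<std::string, std::string> outputs;
      sqlite3_stmt * stmt;
      if (sqlite3_prepare_v2(db,
              "select id, path from DerivationOutputs where drv = ?;",
              -1, &stmt, nullptr) != SQLITE_OK)
          return outputs;
      sqlite3_bind_int64(stmt, 1, drvId);
      while (sqlite3_step(stmt) == SQLITE_ROW)
          outputs[(const char *) sqlite3_column_text(stmt, 0)] =
              (const char *) sqlite3_column_text(stmt, 1);
      sqlite3_finalize(stmt);
      return outputs;
  }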
is enabled by not depending on the deriver.
root symlink, not just its target. E.g.:

  /nix/var/nix/profiles/system-99-link -> /nix/store/76kwf88657nq7wgk1hx3l1z5q91zb9wd-system
(Linux) machines no longer maintain the atime because it's too
expensive, and on the machines where --use-atime is useful (like the
buildfarm), reading the atimes on the entire Nix store takes way too
much time to make it practical.
18446744073709551615ULL breaks on GCC 3.3.6 (`integer constant is
too large for "long" type').
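
One portable workaround (illustrative; not necessarily the exact
change made here) is to avoid the literal altogether:

  #include <limits>

  // Equals 18446744073709551615, without a literal that old
  // compilers reject.
  const unsigned long long maxULL =
      std::numeric_limits<unsigned long long>::max();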
SHA-256 outputs of fixed-output derivations. I.e. they now produce
the same store path:

  $ nix-store --add x
  /nix/store/j2fq9qxvvxgqymvpszhs773ncci45xsj-x
  $ nix-store --add-fixed --recursive sha256 x
  /nix/store/j2fq9qxvvxgqymvpszhs773ncci45xsj-x

the latter being the same as the path that a derivation

  derivation {
    name = "x";
    outputHashAlgo = "sha256";
    outputHashMode = "recursive";
    outputHash = "...";
    ...
  };

produces.

This does change the output path for such fixed-output derivations.
Fortunately they are quite rare. The most common use is fetchsvn
calls with SHA-256 hashes. (There are a handful of those in
Nixpkgs, mostly unstable development packages.)

* Documented the computation of store paths (in store-api.cc).
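
A sketch of the path computation implied by the above (helpers shown
as declarations; an approximation of the scheme documented in
store-api.cc, not a verbatim copy):

  #include <string>

  typedef std::string Path;
  struct Hash { std::string base16; };

  // Stand-ins for the real libstore helpers.
  Hash hashString(const std::string & s);   // SHA-256 of a string
  std::string printHash(const Hash & h);
  Path makeStorePath(const std::string & type,
      const Hash & hash, const std::string & name);

  Path makeFixedOutputPath(bool recursive, const std::string & hashAlgo,
      const Hash & hash, const std::string & name)
  {
      if (recursive && hashAlgo == "sha256")
          // Same computation as `nix-store --add', which is why the
          // two commands above print the same path.
          return makeStorePath("source", hash, name);
      return makeStorePath("output:out",
          hashString("fixed:out:" + std::string(recursive ? "r:" : "")
              + hashAlgo + ":" + printHash(hash) + ":"),
          name);
  }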
accessed time of paths that may be deleted. Anything more recently
used won't be deleted. The time is specified in time_t, i.e.,
seconds since 1970-01-01 00:00:00 UTC; use `date +%s' to convert
to time_t from the command line.

Example: to delete everything that hasn't been used in the last two
months:

  $ nix-store --gc -v --max-atime $(date +%s -d "2 months ago")
order of ascending last access time. This is useful in conjunction
with --max-freed or --max-links to prefer deleting non-recently used
garbage, which is good (especially in the build farm) since garbage
may become live again.
The code could easily be modified to accept other criteria for
ordering garbage by changing the comparison operator used by the
priority queue in collectGarbage().
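
An illustrative sketch of that ordering (not the actual collector
code): a priority queue whose comparator brings the oldest access
time to the top, so swapping the comparator swaps the deletion
policy:

  #include <ctime>
  #include <queue>
  #include <string>
  #include <vector>

  struct GarbagePath
  {
      std::string path;
      time_t atime; // last access time
  };

  // Comparator: the entry with the smallest (oldest) atime comes out
  // of the queue first.
  struct OlderFirst
  {
      bool operator()(const GarbagePath & a, const GarbagePath & b) const
      {
          return a.atime > b.atime;
      }
  };

  typedef std::priority_queue<GarbagePath,
      std::vector<GarbagePath>, OlderFirst> GarbageQueue;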
again. (After the previous substituter mechanism refactoring I
didn't update the code that obtains the references of substitutable
paths.) This required some refactoring: the substituter programs
are now kept running and receive/respond to info requests via
stdin/stdout.
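
A hypothetical sketch of such a long-running agent loop (the command
name and framing are illustrative, not the exact protocol):

  #include <iostream>
  #include <string>

  int main()
  {
      std::string cmd;
      // Stay running: read one request per iteration from stdin and
      // answer on stdout, instead of being spawned per query.
      while (std::getline(std::cin, cmd)) {
          if (cmd == "info") { // illustrative command name
              std::string path;
              // The requested paths follow, terminated by an empty line.
              while (std::getline(std::cin, path) && !path.empty()) {
                  // ... look up references etc. and print them here ...
              }
              std::cout << "\n"; // empty line terminates the reply
          }
          std::cout.flush();
      }
      return 0;
  }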
bytes have been freed, `--max-links' to stop when the Nix store
directory has fewer than N hard links (the latter being important
for very large Nix stores on filesystems with a 32000 subdirectories
limit).
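
Usage sketch:

  $ nix-store --gc --max-freed 1073741824   # stop after freeing ~1 GiB
  $ nix-store --gc --max-links 31000        # stop once below 31,000 links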
* The garbage collector now also prints the number of blocks freed.
https://svn.nixos.org/repos/nix/nix/branches/no-bdb).