Since SubstitutionGoal::finished() in build.cc computes the hash
anyway, we can avoid computing the hash twice by letting the
substituter tell Nix the expected hash, which Nix can then verify.
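
A minimal sketch of that check, with hypothetical names (the real
logic lives in SubstitutionGoal::finished(); expectedHash would come
from the substituter, computedHash from the single hash computation
already done there):

    #include <stdexcept>
    #include <string>

    // Verify a substituted path against the hash announced by the
    // substituter. The hash is computed once and used both for
    // registration and for this check.
    void verifySubstitutedHash(const std::string & storePath,
        const std::string & expectedHash,
        const std::string & computedHash)
    {
        if (computedHash != expectedHash)
            throw std::runtime_error("hash mismatch in substituted path '"
                + storePath + "': expected " + expectedHash
                + ", got " + computedHash);
    }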
Instead call curl directly and pipe it into ‘nix-store --restore’.
This saves I/O and prevents creating garbage in the Nix store.
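
A sketch of that pipeline, assuming a POSIX shell and using
std::system() for brevity (quoting and error handling are simplified;
the function name is illustrative):

    #include <cstdlib>
    #include <stdexcept>
    #include <string>

    // Stream the NAR straight from curl into `nix-store --restore`,
    // which unpacks from stdin, so no temporary file is created in
    // the store.
    void fetchAndRestore(const std::string & url, const std::string & dstPath)
    {
        std::string cmd = "curl --fail --silent '" + url + "'"
            " | nix-store --restore '" + dstPath + "'";
        if (std::system(cmd.c_str()) != 0)
            throw std::runtime_error("fetching '" + url + "' failed");
    }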
This makes all the tests succeed. Woohoo!

The file:// URI scheme requires checking for errors in a more general
way. Also, don't cache file:// lookups.

Fixes #39.

Commit 6a214f3e06fa1c5f0a4d40e555f14d87691af297 reused the NixOS
environment initialisation for nix-profile.sh, but this is
inappropriate on systems that don't have multi-user support enabled.

heap just in case it escapes the stack frame.

attrset.
The generated attrset has drvPath and outPath with the right string
context, type 'derivation', outputName with the right name, an 'all'
attribute with the list of outputs, and an attribute for each output.
I see three uses for this (though certainly there may be more):
* Using derivations generated by something besides nix-instantiate
  (e.g. guix).
* Allowing packages provided by channels to be used in nix
  expressions. If a channel installed a valid deriver for each
  package it provides into the store, then those could be imported
  and used as dependencies or installed in
  environment.systemPackages, for example.
* Enabling hydra to be consistent in how it treats inputs that are
  outputs of another build. Right now, if an input is passed as an
  argument to the job, it is passed as a derivation, but if it is
  accessed via NIX_PATH (i.e. through the <> syntax), then it is a
  path that can be imported. This is problematic because the build
  being depended upon may have been built with non-obvious arguments
  passed to its jobset file. With this feature, hydra can just set
  the name of that input to the path to its drv file in NIX_PATH.

E.g. Darwin doesn't allow this.

Incremental optimisation requires creating links in /nix/store/.links
to all files in the store. However, this means that if we delete a
store path, no files are actually deleted, because the links in
/nix/store/.links still exist. So we need to check /nix/store/.links
for files with a link count of 1 and delete them.
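
A sketch of that pass over /nix/store/.links, using POSIX directory
and stat calls (the real garbage collector does more bookkeeping and
error reporting):

    #include <dirent.h>
    #include <string>
    #include <sys/stat.h>
    #include <unistd.h>

    // Delete .links entries whose link count is 1: the only remaining
    // reference is the .links entry itself, so no store path uses the
    // file any more.
    void removeUnusedLinks(const std::string & linksDir)
    {
        DIR * dir = opendir(linksDir.c_str());
        if (!dir) return;
        while (struct dirent * ent = readdir(dir)) {
            std::string name = ent->d_name;
            if (name == "." || name == "..") continue;
            std::string path = linksDir + "/" + name;
            struct stat st;
            if (lstat(path.c_str(), &st) == 0 && st.st_nlink == 1)
                unlink(path.c_str());
        }
        closedir(dir);
    }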
Auto-optimisation is enabled by default. It can be turned off by
setting auto-optimise-store to false in nix.conf.
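
The corresponding nix.conf line (option name as stated above):

    auto-optimise-store = false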
optimiseStore() now creates persistent, content-addressed hard links
in /nix/store/.links. For instance, if it encounters a file P with
hash H, it will create a hard link
P' = /nix/store/.links/<H>
to P if P' doesn't already exist; if P' exists, then P is replaced by
a hard link to P'. This is better than the previous in-memory map,
because that map tended to unnecessarily replace hard links with a
hard link to whatever happened to be the first file with a given hash
it encountered. It also allows on-the-fly, incremental optimisation.
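
A condensed sketch of the scheme, using POSIX link() and rename();
hashFile() here is a stand-in for Nix's cryptographic file hash, and
the real optimiseStore() additionally handles permissions, races, and
error reporting:

    #include <cerrno>
    #include <cstdio>
    #include <fstream>
    #include <functional>
    #include <sstream>
    #include <string>
    #include <unistd.h>

    // Stand-in content hash (assumption: the real code uses a
    // cryptographic hash of the file's contents).
    std::string hashFile(const std::string & path)
    {
        std::ifstream in(path, std::ios::binary);
        std::ostringstream ss;
        ss << in.rdbuf();
        return std::to_string(std::hash<std::string>{}(ss.str()));
    }

    void optimisePath(const std::string & path)
    {
        std::string linkPath = "/nix/store/.links/" + hashFile(path);

        // First file with this content: the .links entry becomes the
        // canonical copy.
        if (link(path.c_str(), linkPath.c_str()) == 0) return;
        if (errno != EEXIST) return; // sketch: real code reports errors

        // Content seen before: replace `path` by a hard link to the
        // canonical copy, going through a temporary name so the
        // replacement is atomic.
        std::string tmp = path + ".tmp-optimise";
        if (link(linkPath.c_str(), tmp.c_str()) == 0)
            rename(tmp.c_str(), path.c_str());
    }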
Also use utimes() instead of utime() if lutimes() is not available.
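
The fallback chain might look like this, assuming autoconf-style
HAVE_LUTIMES/HAVE_UTIMES macros; lutimes() is preferred because it
can set timestamps on a symlink itself rather than on its target:

    #include <ctime>
    #include <sys/time.h>   // lutimes(), utimes()
    #include <utime.h>      // utime()

    // Set atime/mtime on `path` to `t` using the best available
    // interface.
    int setTimes(const char * path, time_t t)
    {
        struct timeval times[2];
        times[0].tv_sec = t; times[0].tv_usec = 0;
        times[1].tv_sec = t; times[1].tv_usec = 0;
    #if HAVE_LUTIMES
        return lutimes(path, times);   // doesn't follow symlinks
    #elif HAVE_UTIMES
        return utimes(path, times);    // follows symlinks
    #else
        struct utimbuf ub;
        ub.actime = ub.modtime = t;
        return utime(path, &ub);       // oldest interface
    #endif
    }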
To implement binary caches efficiently, Hydra needs to be able to map
the hash part of a store path (e.g. "gbg...zr7") to the full store
path (e.g. "/nix/store/gbg...zr7-subversion-1.7.5"). (The binary
cache mechanism uses hash parts as a key for looking up store paths to
ensure privacy.) However, doing a search in the Nix store for
/nix/store/<hash>* is expensive, since it requires reading the entire
directory. queryPathFromHashPart() avoids this by doing a cheap
database lookup.
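
A sketch of such a lookup with the SQLite C API, assuming a
ValidPaths table with an indexed path column as in Nix's schema;
seeking to the first row at or after the prefix avoids reading the
whole store directory:

    #include <sqlite3.h>
    #include <string>

    // Map a hash part like "gbg...zr7" to the full store path.
    std::string queryPathFromHashPart(sqlite3 * db,
        const std::string & hashPart)
    {
        std::string prefix = "/nix/store/" + hashPart;

        sqlite3_stmt * stmt;
        if (sqlite3_prepare_v2(db,
                "select path from ValidPaths where path >= ? limit 1;",
                -1, &stmt, nullptr) != SQLITE_OK)
            return "";
        sqlite3_bind_text(stmt, 1, prefix.c_str(), -1, SQLITE_TRANSIENT);

        std::string result;
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            auto path = reinterpret_cast<const char *>(
                sqlite3_column_text(stmt, 0));
            // The seek may land past all matches; accept the row only
            // if it really starts with the prefix.
            if (path && std::string(path).compare(0, prefix.size(),
                    prefix) == 0)
                result = path;
        }
        sqlite3_finalize(stmt);
        return result;
    }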
Cherry-picked from the no-manifests branch.
Exit code 100 should be returned for all permanent failures,
including cached failures.
Fixes #34.
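
A sketch of the convention (the enum and names are illustrative, not
Nix's actual types):

    // Map a build outcome to an exit code: 100 for anything that
    // will fail again if retried, including failures answered from
    // the failed-build cache.
    enum class BuildOutcome { Success, PermanentFailure,
                              CachedFailure, TransientFailure };

    int exitCodeFor(BuildOutcome outcome)
    {
        switch (outcome) {
        case BuildOutcome::Success: return 0;
        case BuildOutcome::PermanentFailure:
        case BuildOutcome::CachedFailure: return 100; // permanent
        default: return 1;  // other errors keep the generic code
        }
    }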
Needed for Charon/Hydra interaction.

In "nix-env -qas", we don't need the substitute info; we just need to
know whether it exists. This can be done using an HTTP HEAD request,
which saves bandwidth.
Note however that curl currently has a bug that prevents it from
reusing HTTP connections if HEAD requests return a 404:
https://sourceforge.net/tracker/?func=detail&aid=3542731&group_id=976&atid=100976
Without the patch attached to the issue, using HEAD is actually quite
a bit slower than GET.
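
A sketch of the existence check with libcurl; CURLOPT_NOBODY is what
turns the request into a HEAD:

    #include <curl/curl.h>

    // Return true if `url` exists, without downloading the body.
    bool urlExists(const char * url)
    {
        CURL * curl = curl_easy_init();
        if (!curl) return false;
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);      // HEAD request
        curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L); // 4xx/5xx -> failure
        bool exists = curl_easy_perform(curl) == CURLE_OK;
        curl_easy_cleanup(curl);
        return exists;
    }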