This makes things more efficient (we don't need to use an SSH master
connection, and we only start a single remote process) and gets rid of
locking issues (the remote nix-store process will keep inputs and
outputs locked as long as they're needed).
It also makes it more or less secure to connect directly to the root
account on the build machine, using a forced command
(e.g. ‘command="nix-store --serve --write"’). This bypasses the Nix
daemon and is therefore more efficient.
Also, don't call nix-store to import the output paths.
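For illustration, the forced command goes in the build machine's
~root/.ssh/authorized_keys (the key material and comment here are
hypothetical):
command="nix-store --serve --write" ssh-rsa AAAA... builder@client
A client authenticating with that key can then only run ‘nix-store
--serve --write’, no matter what command it requests.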
|
|
This means we no longer need an SSH master connection, since we only
execute a single command on the remote host.
|
|
There is a long-standing race condition when copying a closure to a
remote machine, particularly affecting build-remote.pl: the client
first asks the remote machine which paths it already has, then copies
over the missing paths. If the garbage collector kicks in on the
remote machine between the first and second step, the already-present
paths may be deleted. The missing paths may then refer to deleted
paths, causing nix-copy-closure to fail. The client now performs both
steps using a single remote Nix call (using ‘nix-store --serve’),
locking all paths in the closure while querying.
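Schematically (a sketch of the two flows, not the literal protocol
messages):
before: query valid paths → (GC may run) → copy missing paths
after: one ‘nix-store --serve’ session: query and lock the closure, then import the missing paths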
I changed the --serve protocol a bit (getting rid of QueryCommand), so
this breaks the SSH substituter from older versions. But it was marked
experimental anyway.
Fixes #141.
|
|
Conflicts:
src/libexpr/eval.cc
|
|
NAR info files in binary caches can now have a cryptographic signature
that Nix will verify before using the corresponding NAR file.
To create a private/public key pair for signing and verifying a binary
cache, do:
$ openssl genrsa -out ./cache-key.sec 2048
$ openssl rsa -in ./cache-key.sec -pubout > ./cache-key.pub
You should also come up with a symbolic name for the key, such as
"cache.example.org-1". This will be used by clients to look up the
public key. (It's a good idea to number keys, in case you ever need
to revoke/replace one.)
To create a binary cache signed with the private key:
$ nix-push --dest /path/to/binary-cache --key ./cache-key.sec --key-name cache.example.org-1
The public key (cache-key.pub) should be distributed to the clients.
Their nix.conf should contain something like:
signed-binary-caches = *
binary-cache-public-key-cache.example.org-1 = /path/to/cache-key.pub
If all works well, then if Nix fetches something from the signed
binary cache, you will see a message like:
*** Downloading ‘http://cache.example.org/nar/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’ (signed by ‘cache.example.org-1’) to ‘/nix/store/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’...
On the other hand, if the signature is wrong, you get a message like
NAR info file `http://cache.example.org/7dppcj5sc1nda7l54rjc0g5l1hamj09j.narinfo' has an invalid signature; ignoring
Signatures are implemented as a single line appended to the NAR info
file, which looks like this:
Signature: 1;cache.example.org-1;HQ9Xzyanq9iV...muQ==
Thus the signature has 3 fields: a version (currently "1"), the ID of
the key, and the base64-encoded signature of the SHA-256 hash of the
contents of the NAR info file up to but not including the Signature
line.
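For illustration, such a signature can be checked by hand, assuming it
is a standard PKCS#1 RSA signature over the SHA-256 digest (consistent
with the RSA key generation above):
$ sed '$d' 7dppcj5sc1nda7l54rjc0g5l1hamj09j.narinfo > body
$ echo 'HQ9Xzyanq9iV...muQ==' | base64 -d > sig.bin
$ openssl dgst -sha256 -verify cache-key.pub -signature sig.bin body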
Issue #75.
|
|
If the database is opened through the Perl bindings, it is automatically
converted into WAL mode, even though nix.conf has use-sqlite-wal set to
false. The next Nix process to access the database then converts it back
to "truncate" mode. If the Perl program still has the database open in
WAL mode at that point, the conversion fails and crashes the Nix process
performing it.
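A minimal sketch of the kind of fix this implies, keeping the Perl side
in the journal mode that nix.conf asks for (the actual change may look
different):
use DBI;
# Open the Nix database and explicitly select the non-WAL journal
# mode, matching use-sqlite-wal = false.
my $dbh = DBI->connect("dbi:SQLite:dbname=/nix/var/nix/db/db.sqlite", "", "", { RaiseError => 1 });
$dbh->do("pragma journal_mode = truncate;");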
|
|
Ever since SQLite in Nixpkgs was updated to 3.8.0.2, Nix has randomly
segfaulted on Darwin:
http://hydra.nixos.org/build/6175515
http://hydra.nixos.org/build/6611038
It turns out that this is because the binary cache substituter somehow
ends up loading two versions of SQLite: the one in Nixpkgs and the
other from /usr/lib/libsqlite3.dylib. It's not exactly clear why the
latter is loaded, but it appears to be because WWW::Curl indirectly loads
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation,
which in turn seems to load /usr/lib/libsqlite3.dylib. This leads to
a segfault when Perl exits:
#0 0x00000001010375f4 in sqlite3_finalize ()
#1 0x000000010125806e in sqlite_st_destroy ()
#2 0x000000010124bc30 in XS_DBD__SQLite__st_DESTROY ()
#3 0x00000001001c8155 in XS_DBI_dispatch ()
...
#14 0x0000000100023224 in perl_destruct ()
#15 0x0000000100000d6a in main ()
...
The workaround is to explicitly load DBD::SQLite before WWW::Curl.
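In code the workaround is just an import-order constraint in the
Perl-based substituter (assuming the usual WWW::Curl::Easy entry
point; the comments reflect the analysis above):
use DBD::SQLite;     # bind the sqlite3_* symbols to the Nixpkgs SQLite first
use WWW::Curl::Easy; # may pull in /usr/lib/libsqlite3.dylib via CoreFoundation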
|
|
As discovered by Todd Veldhuizen, the shell started by nix-shell has
its affinity set to a single CPU. This is because nix-shell connects
to the Nix daemon, which causes the affinity hack to be applied. So
we turn this off for Perl programs.
|
|
For instance, it's pointless to keep copy-from-other-stores running if
there are no other stores, or download-using-manifests if there are no
manifests. This also speeds things up because we don't send queries
to those substituters.
|
|
Problem noticed by niksnut.
|
|
Based on https://github.com/NixOS/nix/pull/6 from shlevy
|
|
This reverts commit 28bba8c44f484eae38e8a15dcec73cfa999156f6.
|
|
I.e.
Subroutine Nix::Store::isValidPath redefined at /nix/store/clfzsf6gi7qh5i9c0vks1ifjam47rijn-perl-5.16.2/lib/perl5/5.16.2/XSLoader.pm line 92.
and so on.
|
|
This prevents unnecessary and slow rebuilds of NARs that already exist
in the binary cache.
|
|
Put all Nix configuration flags in a Settings object.
|
|
Previously substituters could read nix.conf themselves, but this
didn't take --option flags into account.
|
|
To implement binary caches efficiently, Hydra needs to be able to map
the hash part of a store path (e.g. "gbg...zr7") to the full store
path (e.g. "/nix/store/gbg...zr7-subversion-1.7.5"). (The binary
cache mechanism uses hash parts as a key for looking up store paths to
ensure privacy.) However, doing a search in the Nix store for
/nix/store/<hash>* is expensive since it requires reading the entire
directory. queryPathFromHashPart() prevents this by doing a cheap
database lookup.
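For instance, via the Perl bindings (assuming the new function is
exposed there as Nix::Store::queryPathFromHashPart, which is how Hydra
would call it):
$ perl -MNix::Store -e 'print queryPathFromHashPart("gbg...zr7"), "\n"'
/nix/store/gbg...zr7-subversion-1.7.5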
|
|
Cherry-picked from the no-manifests branch.
|
|
XZ compresses significantly better than bzip2. Here are the
compression ratios and execution times (using 4 cores in parallel) on
my /var/run/current-system (3.1 GiB):
bzip2: total compressed size 849.56 MiB, 30.8% [2m08]
xz -6: total compressed size 641.84 MiB, 23.4% [6m53]
xz -7: total compressed size 621.82 MiB, 22.6% [7m19]
xz -8: total compressed size 599.33 MiB, 21.8% [7m18]
xz -9: total compressed size 588.18 MiB, 21.4% [7m40]
Note that compression takes much longer. More importantly, however,
decompression is much faster:
bzip2: 1m47.274s
xz -6: 0m55.446s
xz -7: 0m54.119s
xz -8: 0m52.388s
xz -9: 0m51.842s
The only downside to using -9 is that decompression takes a fair
amount (~65 MB) of memory.
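This is roughly the operation being measured, shown as a manual
pipeline (the store path is a placeholder):
$ nix-store --dump /nix/store/<hash>-foo | xz -9 > foo.nar.xz
$ xz -d < foo.nar.xz | nix-store --restore ./foo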
|
|
Since the Perl bindings require shared libraries, this is required on
platforms such as Cygwin where we do a static build.
|
|
so that network progress is what's measured
|
|
This command builds or fetches all dependencies of the given
derivation, then starts a shell with the environment variables from
the derivation. This shell also sources $stdenv/setup to initialise
the environment further.
The current directory is not changed. Thus this is a convenient way
to reproduce a build environment in an existing working tree.
Existing environment variables are left untouched (unless the
derivation overrides them). As a special hack, the original value of
$PATH is appended to the $PATH produced by $stdenv/setup.
Example session:
$ nix-build --run-env '<nixpkgs>' -A xterm
(the dependencies of xterm are built/fetched...)
$ tar xf $src
$ ./configure
$ make
$ emacs
(... hack source ...)
$ make
$ ./xterm
|
|
We're already printing progress on stderr, so printing it again on
stdout afterwards is kind of useless.
|