E.g. "local?store=/tmp/store&state=/tmp/var".
As a side effect, this ensures that signatures are propagated when
copying paths between stores.
Also refactored import/export to make use of this.
This is to allow store-specific configuration,
e.g. s3://my-cache?compression=bzip2&secret-key=/path/to/key.
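As a rough illustration of this kind of store URI with key=value parameters (the function name and parsing code below are purely illustrative, not Nix's actual parser):

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    // Illustrative only: split a store URI such as
    // "s3://my-cache?compression=bzip2&secret-key=/path/to/key" into the
    // part before '?' and a map of key=value parameters.
    std::pair<std::string, std::map<std::string, std::string>>
    parseStoreUri(const std::string & uri)
    {
        std::map<std::string, std::string> params;
        auto q = uri.find('?');
        if (q == std::string::npos) return {uri, params};
        std::string rest = uri.substr(q + 1);
        size_t pos = 0;
        while (pos < rest.size()) {
            auto amp = rest.find('&', pos);
            std::string kv = rest.substr(pos,
                amp == std::string::npos ? std::string::npos : amp - pos);
            auto eq = kv.find('=');
            if (eq != std::string::npos)
                params[kv.substr(0, eq)] = kv.substr(eq + 1);
            if (amp == std::string::npos) break;
            pos = amp + 1;
        }
        return {uri.substr(0, q), params};
    }

    int main()
    {
        auto [base, params] =
            parseStoreUri("s3://my-cache?compression=bzip2&secret-key=/path/to/key");
        std::cout << base << "\n";
        for (auto & [k, v] : params) std::cout << k << " = " << v << "\n";
    }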
Substitution is now simply a Store -> Store copy operation, most
typically from BinaryCacheStore to LocalStore.
This allows commands like "nix verify --all" or "nix path-info --all"
to work on S3 caches.
Unfortunately, this requires some ugly hackery: when querying the
contents of the bucket, we don't want to have to read every .narinfo
file. But the S3 bucket keys only include the hash part of each store
path, not the name part. So as a special exception
queryAllValidPaths() can now return store paths *without* the name
part, and queryPathInfo() accepts such store paths (returning a
ValidPathInfo object containing the full name).
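For illustration only (a hypothetical helper, not the actual Nix function): a store path of the form /nix/store/<hash>-<name> splits into a hash part and a name part, and the S3 bucket keys carry only the former.

    #include <iostream>
    #include <string>
    #include <utility>

    // Illustration only: split "/nix/store/<hash>-<name>" into its hash part
    // and name part. The S3 bucket keys contain only the hash part, so a
    // listing of the bucket can only reconstruct "nameless" store paths.
    std::pair<std::string, std::string> splitStorePath(const std::string & path)
    {
        auto slash = path.rfind('/');
        std::string base = slash == std::string::npos ? path : path.substr(slash + 1);
        auto dash = base.find('-');
        if (dash == std::string::npos)
            return {base, ""};   // hash-only ("nameless") store path
        return {base.substr(0, dash), base.substr(dash + 1)};
    }

    int main()
    {
        auto [hash, name] =
            splitStorePath("/nix/store/1a5nyfd4ajxbyy97r1fslhgrv70gj8a7-firefox-5.0.1");
        std::cout << hash << " / " << name << "\n";
    }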
This re-implements the binary cache database in C++, allowing it to be
used by other Store backends, in particular the S3 backend.
Caching path info is generally useful. For instance, it speeds up "nix
path-info -rS /run/current-system" (i.e. showing the closure sizes of
all paths in the closure of the current system) from 5.6s to 0.15s.
This also eliminates some APIs like Store::queryDeriver() and
Store::queryReferences().
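A rough sketch of this kind of in-memory caching (toy types, not the actual Store implementation): look the path's metadata up in a mutex-guarded map before falling back to the backend.

    #include <functional>
    #include <map>
    #include <memory>
    #include <mutex>
    #include <string>

    // Toy stand-in for the real ValidPathInfo.
    struct PathInfo { unsigned long long narSize = 0; };

    // Illustration only: cache path info so that repeated queries (e.g. when
    // computing closure sizes with "nix path-info -rS") don't have to hit the
    // backend every time.
    class PathInfoCache {
        std::mutex lock;
        std::map<std::string, std::shared_ptr<PathInfo>> cache;
    public:
        std::shared_ptr<PathInfo> get(
            const std::string & path,
            const std::function<std::shared_ptr<PathInfo>(const std::string &)> & fetch)
        {
            {
                std::lock_guard<std::mutex> guard(lock);
                auto i = cache.find(path);
                if (i != cache.end()) return i->second;   // cache hit
            }
            auto info = fetch(path);                      // slow path: ask the backend
            std::lock_guard<std::mutex> guard(lock);
            cache[path] = info;
            return info;
        }
    };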
This specifies the number of distinct signatures required to consider
each path "trusted".
Also renamed ‘--no-sigs’ to ‘--no-trust’ for the flag that disables
verifying whether a path is trusted (since a path can also be trusted
if it has no signatures, but was built locally).
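A simplified illustration of the trust rule described above (hypothetical types): a path counts as trusted if it was built locally, or if it carries at least the required number of distinct valid signatures.

    #include <set>
    #include <string>

    // Hypothetical stand-in for per-path metadata.
    struct PathInfoSigs {
        bool ultimate = false;           // built locally, trusted without signatures
        std::set<std::string> validSigs; // signatures that verified against trusted keys
    };

    // A path is "trusted" if it was built locally or has at least sigsNeeded
    // distinct valid signatures.
    bool isTrusted(const PathInfoSigs & info, size_t sigsNeeded)
    {
        return info.ultimate || info.validSigs.size() >= sigsNeeded;
    }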
E.g.
$ nix sign-paths -k ./secret -r $(type -p geeqie)
signs geeqie and all its dependencies using the key in ./secret.
This allows, for instance, hydra-queue-runner to add the S3 backend
at runtime.
This is primarily to allow hydra-queue-runner to extract files like
"nix-support/hydra-build-products" from NARs in binary caches.
So you can now do:
$ NIX_REMOTE=file:///tmp/binary-cache nix-store -qR /nix/store/...
Calling a class an API is a bit redundant...
Also, move a few free-standing functions into StoreAPI and Derivation.
Also, introduce a non-nullable smart pointer, ref<T>, which is just a
wrapper around std::shared_ptr ensuring that the pointer is never
null. (For reference-counted values, this is better than passing a
"T&", because the latter doesn't maintain the refcount. Usually, the
caller will have a shared_ptr keeping the value alive, but that's not
always the case, e.g., when passing a reference to a std::thread via
std::bind.)
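A minimal sketch of such a non-nullable wrapper (not necessarily the exact ref<T> in the tree): it holds a std::shared_ptr and refuses to be constructed from null, so callers get the refcount without having to check for null.

    #include <memory>
    #include <stdexcept>
    #include <utility>

    // Minimal sketch of a non-nullable reference-counted pointer: like
    // std::shared_ptr<T>, but construction from a null pointer throws, so a
    // ref<T> always points at a live object and keeps it alive.
    template<typename T>
    class ref {
        std::shared_ptr<T> p;
    public:
        explicit ref(const std::shared_ptr<T> & p) : p(p)
        {
            if (!p) throw std::invalid_argument("null pointer cast to ref");
        }
        T & operator *() const { return *p; }
        T * operator ->() const { return p.get(); }
        operator std::shared_ptr<T>() const { return p; }  // decay back when needed
    };

    // Convenience constructor, analogous to std::make_shared.
    template<typename T, typename... Args>
    ref<T> make_ref(Args &&... args)
    {
        return ref<T>(std::make_shared<T>(std::forward<Args>(args)...));
    }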
Fixes #609.
Only indirect roots (symlinks to symlinks to the Nix store) are now
supported.
That is, delete almost nothing (it will still remove unused links from
/nix/store/.links).
Put all Nix configuration flags in a Settings object.
We can't open a SQLite database if the disk is full. Since this
prevents the garbage collector from running when it's most needed, we
reserve some dummy space that we can free just before doing a garbage
collection. This actually revives some old code from the Berkeley DB
days.
Fixes #27.
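A rough sketch of the reserved-space trick (the path and size below are illustrative): keep a dummy file of a few megabytes next to the database, and delete it just before garbage collection so SQLite has room to work even on a full disk.

    #include <cstdio>
    #include <fstream>
    #include <string>
    #include <vector>

    // Illustration only: reserve some disk space next to the database so
    // that, when the disk fills up, the space can be freed right before
    // running the garbage collector (which needs to open the SQLite
    // database to do its work).
    const std::string reservedPath = "/nix/var/nix/db/reserved";  // example location
    const size_t reservedSize = 8 * 1024 * 1024;                  // example size: 8 MiB

    void createReservedSpace()
    {
        std::ofstream f(reservedPath, std::ios::binary);
        std::vector<char> block(reservedSize, '\0');
        f.write(block.data(), static_cast<std::streamsize>(block.size()));
    }

    void freeReservedSpace()
    {
        std::remove(reservedPath.c_str());  // free the space before collecting garbage
    }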
We don't need this anymore now that current filesystems support more
than 32,000 files in a directory.
‘nix-store --export’.
* Add a Perl module that provides the functionality of
‘nix-copy-closure --to’. This is used by build-remote.pl so it no
longer needs to start a separate nix-copy-closure process. Also, it
uses the Perl API to do the export, so it doesn't need to start a
separate nix-store process either. As a result, nix-copy-closure
and build-remote.pl should no longer fail on very large closures due
to an "Argument list too long" error. (Note that having very many
dependencies in a single derivation can still fail because the
environment can become too large. Can't be helped though.)
derivations added to the store by clients have "correct" output
paths (meaning that the output paths are computed by hashing the
derivation according to a certain algorithm). This means that a
malicious user could craft a special .drv file to build *any*
desired path in the store with any desired contents (so long as the
path doesn't already exist). Then the attacker just needs to wait
for a victim to come along and install the compromised path.
For instance, if Alice (the attacker) knows that the latest Firefox
derivation in Nixpkgs produces the path
/nix/store/1a5nyfd4ajxbyy97r1fslhgrv70gj8a7-firefox-5.0.1
then (provided this path doesn't already exist) she can craft a .drv
file that creates that path (i.e., has it as one of its outputs),
add it to the store using "nix-store --add", and build it with
"nix-store -r". So the fake .drv could write a Trojan to the
Firefox path. Then, if user Bob (the victim) comes along and does
$ nix-env -i firefox
$ firefox
he executes the Trojan injected by Alice.
The fix is to have the Nix daemon verify that derivation outputs are
correct (in addValidPath()). This required some refactoring to move
the hash computation code to libstore.
size of the NAR serialisation of the path, i.e., `nix-store --dump
PATH'). This is useful for Hydra.
with the same name as the output) and instead use the
DerivationOutputs table in the database, which is the correct way to
do things.
is enabled by not depending on the deriver.
(Linux) machines no longer maintain the atime because it's too
expensive, and on the machines where --use-atime is useful (like the
buildfarm), reading the atimes on the entire Nix store takes way too
much time to make it practical.
18446744073709551615ULL breaks on GCC 3.3.6 (`integer constant is
too large for "long" type').