Age | Commit message | Author | Files | Lines
|
stream it's now necessary for the daemon to process the entire
sequence of exported paths, rather than letting the client do it.
|
|
(way fewer roundtrips) by allowing the client to send data in bigger
chunks.
* Some refactoring.
|
|
* Buffer the HashSink. This speeds up hashing a bit because it
prevents lots of calls to the hash update functions (e.g. nix-hash
went from 9.3s to 8.7s of user time on the closure of my
/var/run/current-system).
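As a rough sketch of the buffering idea (hypothetical names; the real HashSink differs, and 'update' stands for whatever digest update function is in use):

  #include <algorithm>
  #include <cstddef>
  #include <vector>

  // Collect incoming bytes in a fixed-size buffer and hand them to the
  // hash update function in large blocks, instead of calling it once per
  // tiny write.
  struct BufferedHashSink {
      std::vector<unsigned char> buf;
      size_t bufSize = 64 * 1024;
      void (*update)(const unsigned char *, size_t) = nullptr; // digest update fn

      void operator()(const unsigned char * data, size_t len) {
          while (len > 0) {
              size_t n = std::min(len, bufSize - buf.size());
              buf.insert(buf.end(), data, data + n);
              data += n; len -= n;
              if (buf.size() == bufSize) flush();
          }
      }

      void flush() {
          if (!buf.empty()) { update(buf.data(), buf.size()); buf.clear(); }
      }
  };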
|
|
|
|
rather than complain about them.
|
|
‘nix-store --export’.
* Add a Perl module that provides the functionality of
‘nix-copy-closure --to’. This is used by build-remote.pl so it no
longer needs to start a separate nix-copy-closure process. Also, it
uses the Perl API to do the export, so it doesn't need to start a
separate nix-store process either. As a result, nix-copy-closure
and build-remote.pl should no longer fail on very large closures due
to an "Argument list too long" error. (Note that having very many
dependencies in a single derivation can still fail because the
environment can become too large. Can't be helped though.)
|
|
intermittent problems are gone by now. WAL mode is preferable
because it does way fewer fsyncs.
|
|
causing a deadlock.
|
|
This should also fix:
nix-instantiate: ./../boost/shared_ptr.hpp:254: T* boost::shared_ptr<T>::operator->() const [with T = nix::StoreAPI]: Assertion `px != 0' failed.
which was caused by hashDerivationModulo() calling the ‘store’
object (during store upgrades) before openStore() assigned it.
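Reduced to a toy example, the failure mode is simply use-before-assignment of the global store pointer (names here are illustrative, not the actual code):

  #include <memory>

  struct StoreAPI { void upgradeStore() {} };

  std::shared_ptr<StoreAPI> store;            // global; starts out null

  void hashDerivationModulo_like() {
      store->upgradeStore();                  // operator-> on a null pointer:
                                              // boost::shared_ptr asserts px != 0,
  }                                           // std::shared_ptr is plain UB

  void openStore_like() {
      store = std::make_shared<StoreAPI>();   // 'store' only becomes valid here
  }

  int main() {
      hashDerivationModulo_like();            // called before openStore_like(): crash
      openStore_like();
  }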
|
|
derivations added to the store by clients have "correct" output
paths (meaning that the output paths are computed by hashing the
derivation according to a certain algorithm). This means that a
malicious user could craft a special .drv file to build *any*
desired path in the store with any desired contents (so long as the
path doesn't already exist). Then the attacker just needs to wait
for a victim to come along and install the compromised path.
For instance, if Alice (the attacker) knows that the latest Firefox
derivation in Nixpkgs produces the path
/nix/store/1a5nyfd4ajxbyy97r1fslhgrv70gj8a7-firefox-5.0.1
then (provided this path doesn't already exist) she can craft a .drv
file that creates that path (i.e., has it as one of its outputs),
add it to the store using "nix-store --add", and build it with
"nix-store -r". So the fake .drv could write a Trojan to the
Firefox path. Then, if user Bob (the victim) comes along and does
$ nix-env -i firefox
$ firefox
he executes the Trojan injected by Alice.
The fix is to have the Nix daemon verify that derivation outputs are
correct (in addValidPath()). This required some refactoring to move
the hash computation code to libstore.
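As a rough sketch of the kind of check addValidPath() needs to make (simplified, hypothetical types and helpers; the real libstore code differs):

  #include <map>
  #include <stdexcept>
  #include <string>

  // Simplified stand-ins for the real derivation structures.
  struct DerivationOutput { std::string path; };
  struct Derivation {
      std::string name;
      std::map<std::string, DerivationOutput> outputs;
  };

  // Assumed helpers: the hash of the derivation with its output paths
  // cleared, and the function mapping that hash to a store path.
  std::string hashDerivationModulo(const Derivation & drv);
  std::string makeOutputPath(const std::string & outName,
                             const std::string & drvHash,
                             const std::string & drvName);

  // Every output path declared in the .drv must equal the path recomputed
  // from the derivation itself, so a client cannot claim an arbitrary path.
  void checkDerivationOutputs(const std::string & drvPath, const Derivation & drv)
  {
      std::string h = hashDerivationModulo(drv);
      for (const auto & i : drv.outputs) {
          std::string expected = makeOutputPath(i.first, h, drv.name);
          if (i.second.path != expected)
              throw std::runtime_error("derivation '" + drvPath +
                  "' declares illegal output path '" + i.second.path + "'");
      }
  }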
|
|
|
|
registerValidPaths() now handles busy errors and registerValidPath()
is simply a wrapper around it.
|
|
|
|
|
|
while checking the contents, since this operation can take a very
long time to finish. Also, fill in missing narSize fields in the DB
while doing this.
|
|
even with a very long busy timeout, because SQLITE_BUSY is also
returned to resolve deadlocks. This should get rid of random
"database is locked" errors. This is kind of hard to test though.
* Fix a horrible bug in deleteFromStore(): deletePathWrapped() should
be called after committing the transaction, not before, because the
commit might not succeed.
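A sketch of the busy handling for the first point, with raw sqlite3 calls (illustrative only; error handling inside the body is omitted and the real wrapper differs):

  #include <sqlite3.h>
  #include <functional>
  #include <stdexcept>
  #include <unistd.h>

  // SQLITE_BUSY can be returned to break a deadlock even with a long busy
  // timeout, so the safe reaction is to roll back and retry the whole
  // transaction, not just the failing statement.
  void withRetriedTransaction(sqlite3 * db, const std::function<void()> & body)
  {
      while (true) {
          sqlite3_exec(db, "begin immediate;", nullptr, nullptr, nullptr);
          body();                                       // the statements of the transaction
          int r = sqlite3_exec(db, "commit;", nullptr, nullptr, nullptr);
          if (r == SQLITE_OK) return;
          sqlite3_exec(db, "rollback;", nullptr, nullptr, nullptr);
          if (r != SQLITE_BUSY)
              throw std::runtime_error("SQLite error while committing");
          usleep(100 * 1000);                           // back off briefly, then retry
      }
  }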
|
|
will approximately require.
|
|
|
|
size of the NAR serialisation of the path, i.e., `nix-store --dump
PATH'). This is useful for Hydra.
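One way to obtain that size without keeping the dump around, as a sketch (dumpPath stands in for the NAR serialiser and is assumed here, not the actual API):

  #include <cstddef>
  #include <functional>
  #include <string>

  // Assumed: the NAR serialiser, streaming the dump of a path into a sink.
  void dumpPath(const std::string & path,
                const std::function<void(const unsigned char *, size_t)> & sink);

  // Feed the dump into a sink that only counts bytes; the result is what
  // `nix-store --dump PATH | wc -c` would print.
  size_t computeNarSize(const std::string & path)
  {
      size_t narSize = 0;
      dumpPath(path, [&](const unsigned char *, size_t len) { narSize += len; });
      return narSize;
  }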
|
|
race with other processes that add new referrers to a path,
resulting in the garbage collector crashing with "foreign key
constraint failed". (Nix/4)
* Make --gc --print-dead etc. interruptible.
|
|
differs from the desired mode. There is an open SQLite ticket
`Executing "PRAGMA journal_mode" may delete journal file while it is
in use.'
|
|
* If a path has disappeared, check its referrers first, and don't try
to invalidate paths that have valid referrers. Otherwise we get a
foreign key constraint violation.
* Read the whole Nix store directory instead of statting each valid
  path individually, which is slower.
* Acquire the global GC lock.
|
|
|
|
faster than the old mode when fsyncs are enabled, because it only
performs an fsync() when doing a checkpoint, rather than at every
commit. Some timings for doing a "nix-instantiate /etc/nixos/nixos
-A system" after modifying the stdenv setup script:
42.5s - SQLite 3.6.23 with truncate mode and fsync
3.4s - SQLite 3.6.23 with truncate mode and no fsync
32.1s - SQLite 3.7.0 with truncate mode and fsync
16.8s - SQLite 3.7.0 with WAL mode and fsync, auto-checkpoint
every 1000 pages
8.3s - SQLite 3.7.0 with WAL mode and fsync, auto-checkpoint
every 8192 pages
1.7s - SQLite 3.7.0 with WAL mode and no fsync
The default is now to use WAL mode with fsyncs. Because WAL doesn't
work on remote filesystems such as NFS (as it uses shared memory),
truncate mode can be re-enabled by setting the "use-sqlite-wal"
option to false.
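In terms of plain sqlite3 calls, the switch amounts to something like the following sketch (only the use-sqlite-wal setting comes from the message above; the rest is illustrative):

  #include <sqlite3.h>

  // Choose the journal mode at store startup; 'useWAL' corresponds to the
  // use-sqlite-wal setting.
  void setJournalMode(sqlite3 * db, bool useWAL)
  {
      sqlite3_exec(db,
          useWAL ? "pragma journal_mode = wal;"       // fsync only at checkpoints
                 : "pragma journal_mode = truncate;", // fallback, e.g. for NFS
          nullptr, nullptr, nullptr);
  }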
|
|
doesn't work because the garbage collector doesn't actually look at
locks. So r22253 was stupid. Use addTempRoot() instead. Also,
locking the temporary directory in exportPath() was silly because it
isn't even in the store.
|
|
|
|
prevent it from being deleted by the garbage collector.
|
|
violation of the Refs table. So don't do that.
|
|
|
|
the "failed" status of the given store paths. The special value `*'
clears all failed paths.
|
|
failed paths (when using the `build-cache-failure' option).
|
|
|
|
|
|
changed. This prevents corrupt paths from spreading to other
machines. Note that checking the hash is cheap because we're
hashing anyway (because of the --sign feature).
|
|
|
|
|
|
|
|
|
|
* Don't refer to config.h in util.hh, because config.h is not
installed (http://hydra.nixos.org/build/303053).
|
|
|
|
|
|
false.
* Change the default for `fsync-metadata' to true.
* Disable `fsync-metadata' in `make check'.
|
|
E.g. it cuts the runtime of the referrers test from 50s to 23s.
|
|
the description at http://www.sqlite.org/atomiccommit.html should be
safe enough.
|
|
|
|
prevent a foreign key constraint violation on the Refs table when
deleting a path.
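A sketch of an ordering that avoids the violation, with illustrative statement text (the real schema and wrapper code differ): remove the Refs rows that mention the path before deleting its ValidPaths row.

  #include <sqlite3.h>

  // Error handling omitted for brevity.
  void invalidatePath(sqlite3 * db, sqlite3_int64 pathId)
  {
      sqlite3_stmt * stmt;

      // First drop the references originating from this path ...
      sqlite3_prepare_v2(db, "delete from Refs where referrer = ?;", -1, &stmt, nullptr);
      sqlite3_bind_int64(stmt, 1, pathId);
      sqlite3_step(stmt);
      sqlite3_finalize(stmt);

      // ... then delete the path itself.
      sqlite3_prepare_v2(db, "delete from ValidPaths where id = ?;", -1, &stmt, nullptr);
      sqlite3_bind_int64(stmt, 1, pathId);
      sqlite3_step(stmt);
      sqlite3_finalize(stmt);
  }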
|
|
which requires more I/O.
|
|
with the same name as the output) and instead use the
DerivationOutputs table in the database, which is the correct way
to do things.
|
|
garbage collector.
|
|
it at startup.
* Implement negative caching. Now `make check' passes.
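Negative caching here means remembering misses as well as hits; a minimal sketch, with illustrative names:

  #include <string>
  #include <unordered_map>

  // Remember not just "path X is valid" but also "path X is known not to
  // be valid", so repeated queries for a missing path don't hit the
  // database every time.
  class PathInfoCache {
      // true = known valid, false = known invalid (the negative entry)
      std::unordered_map<std::string, bool> cache;
  public:
      // 1 = valid, 0 = invalid, -1 = unknown (must query the database)
      int lookup(const std::string & path) const {
          auto i = cache.find(path);
          if (i == cache.end()) return -1;
          return i->second ? 1 : 0;
      }
      void recordValid(const std::string & path)   { cache[path] = true; }
      void recordInvalid(const std::string & path) { cache[path] = false; }
  };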
|