delete obstructing invalid paths.
only caused a crash if the program was *not* invoked with a high
verbosity level.
improve throughput.
* Don't build the `substitute-rev' table for now, since building it
made adding N substitutes take Theta(N^2) time and Theta(N^2)
log-file space.  Maybe we can do without it.
flag doesn't seem to work as advertised.
* A better substitute mechanism.
Instead of generating a store expression for each store path for
which we have a substitute, we can have a single store expression
that builds a generic program that is invoked to build the desired
store path, which is passed as an argument.
This means that operations like `nix-pull' only produce O(1) files
instead of O(N) files in the store when registering N substitutes.
(It consumes O(N) database storage, of course, but that's not a
performance problem).
* Added a test for the substitute mechanism.
* `nix-store --substitute' reads the substitutes from standard input,
instead of from the command line. This prevents us from running
into the kernel's limit on command line length.
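
A minimal sketch of what reading the registrations from standard
input can look like; the one-registration-per-line format and the
registerSubstitute() stub are invented here for illustration:

    #include <iostream>
    #include <sstream>
    #include <string>

    // Stand-in for the real registration logic, which the commit
    // message does not show.
    static void registerSubstitute(const std::string & storePath,
                                   const std::string & substitute)
    {
        std::cout << "register: " << storePath
                  << " -> " << substitute << std::endl;
    }

    int main()
    {
        // One registration per line on stdin; this avoids the
        // kernel's limit on total command-line length (ARG_MAX).
        std::string line;
        while (std::getline(std::cin, line)) {
            std::istringstream ss(line);
            std::string storePath, substitute;
            if (ss >> storePath >> substitute)
                registerSubstitute(storePath, substitute);
        }
    }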
approach. This makes it much easier to add extra complexity in the
normaliser / realiser (e.g., build hooks, substitutes).
hack.
Linux), so use setpgid().
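
For illustration only (the commit includes no code): the usual
setpgid() pattern puts the child build process into its own process
group, so the whole group, including anything the builder spawns,
can be killed with a single signal.

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main()
    {
        pid_t pid = fork();
        if (pid == -1) return 1;
        if (pid == 0) {
            // Child: become leader of a new process group so that the
            // parent can signal the builder and all its descendants.
            setpgid(0, 0);
            execlp("sh", "sh", "-c", "sleep 60", (char *) 0);
            _exit(127);
        }
        setpgid(pid, pid);     // also set it here to avoid a race
        kill(-pid, SIGTERM);   // negative pid = whole process group
        int status;
        waitpid(pid, &status, 0);
        return 0;
    }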
* When a fast build wakes up a goal, try to start that goal in the
same iteration of the startBuild() loop of run(). Otherwise no job
might be started until the next job terminates.
(Though the `build-remote.pl' script has a gigantic race condition).
per-machine maximum number of parallel jobs on a remote machine.
in parallel. Hooks are more efficient: locks on output paths are
only acquired when the hook says that it is willing to accept a
build job. Hooks now work in two phases. First, they tell Nix
whether they are willing to accept a job. Nix guarantees that no
two hooks will ever be in the first phase at the same time (this
simplifies the implementation of hooks, since they don't have to
perform locking (?)). Second, if they accept a job, they are then
responsible for building it (on the remote system) and copying the
result back. These phase-two builds can run in parallel with other
hooks and with locally executed jobs; a sketch of the parent side
of this protocol follows below.
The implementation is a bit messy right now, though.
* The directory `distributed' shows a (hacky) example of a hook that
distributes build jobs over a set of machines listed in a
configuration file.
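
The message does not spell out the hook interface; purely as an
illustration, the parent side of such a two-phase exchange could
look like this, with the hook printing "accept" or "decline" in the
first phase (the hook path, the reply words, and the hand-over are
all assumptions):

    #include <stdio.h>
    #include <string>

    int main()
    {
        // Phase 1: ask the hook whether it will take the job.  Only
        // one hook at a time runs in this phase, so the hook itself
        // does not need to do any locking.
        FILE * hook = popen("./build-hook example.drv", "r");
        if (!hook) return 1;

        char reply[64] = "";
        if (!fgets(reply, sizeof reply, hook)) { pclose(hook); return 1; }

        if (std::string(reply).find("accept") == std::string::npos) {
            // Declined: fall back to building locally (not shown).
            pclose(hook);
            return 0;
        }

        // Phase 2: only now acquire the locks on the output paths;
        // the hook builds on the remote machine and copies the result
        // back, possibly in parallel with other hooks and with local
        // builds.  Here we just wait for it to finish.
        int status = pclose(hook);
        return status == 0 ? 0 : 1;
    }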
no limit).
* Add missing file to distribution.
distributing a build action to another machine. In particular, the
paths in the input closures, the output paths, and successor mapping
for sub-derivations.
parallel as possible (similar to GNU Make's `-j' switch). This is
useful on SMP systems, but it is especially useful for doing builds
on multiple machines. The idea is that a large derivation is
initiated on one master machine, which then distributes
sub-derivations to any number of slave machines. This should not
happen synchronously or in lock-step, so the master must be capable
of dealing with multiple parallel build jobs. We now have the
infrastructure to support this.
TODO: substitutes are currently broken.
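
As a rough illustration of the `-j'-style behaviour (not taken from
the commit): keep up to a fixed number of child builds running and
start another one whenever any child terminates.  A toy version
with fork()/wait():

    #include <cstdio>
    #include <sys/wait.h>
    #include <unistd.h>

    int main()
    {
        const int maxJobs = 2;   // analogous to `make -j2'
        const int nJobs = 5;     // pretend there are 5 independent jobs
        int started = 0, running = 0;

        while (started < nJobs || running > 0) {
            // Start jobs until the parallelism limit is reached.
            while (started < nJobs && running < maxJobs) {
                pid_t pid = fork();
                if (pid == -1) return 1;
                if (pid == 0) {
                    std::printf("job %d in pid %d\n", started, (int) getpid());
                    sleep(1);    // stand-in for the actual build
                    _exit(0);
                }
                started++; running++;
            }
            // Block until some child finishes, freeing a build slot.
            if (wait(nullptr) > 0) running--;
        }
        return 0;
    }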
builders to point to the store and the temporary build directory,
respectively. Useful for purity checking.
* Also set TEMPDIR, TMPDIR, TEMP, and TMP to NIX_BUILD_TOP to make
sure that tools in the builder store temporary files in the right
location.
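
A sketch of the kind of environment setup this describes (not the
actual Nix code; the paths are placeholders):

    #include <cstdlib>
    #include <unistd.h>

    // Called in the child just before exec'ing the builder.  The
    // arguments are placeholders for the real store location and the
    // per-build temporary directory.
    static void setBuilderEnv(const char * store, const char * tmpDir)
    {
        setenv("NIX_STORE", store, 1);       // where outputs must end up
        setenv("NIX_BUILD_TOP", tmpDir, 1);  // the build's scratch space
        // Point the usual temp-dir variables at the build directory so
        // tools don't scatter temporary files elsewhere.
        const char * tmpVars[] = {"TEMPDIR", "TMPDIR", "TEMP", "TMP"};
        for (const char * var : tmpVars)
            setenv(var, tmpDir, 1);
    }

    int main()
    {
        setBuilderEnv("/nix/store", "/tmp"); // "/tmp" stands in for the build dir
        execlp("env", "env", (char *) 0);    // a builder would be exec'ed here
        return 1;
    }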
chroot() environment.
* An operation `--validpath' to register path validity. Useful for
bootstrapping in a pure Nix environment.
* Safety checks: ensure that files involved in store operations are in
the store.
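
One common shape for such a safety check (a guess for illustration,
not the commit's code): reject any path that does not lie inside
the store directory.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    const std::string nixStore = "/nix/store";   // placeholder location

    // Throw unless `path' is the store directory itself or something
    // inside it.  The real check may canonicalise the path first.
    static void assertStorePath(const std::string & path)
    {
        if (path == nixStore) return;
        if (path.compare(0, nixStore.size() + 1, nixStore + "/") == 0) return;
        throw std::runtime_error("path `" + path + "' is not in the store");
    }

    int main()
    {
        assertStorePath("/nix/store/abc-hello");   // accepted
        try {
            assertStorePath("/etc/passwd");        // rejected
        } catch (std::exception & e) {
            std::printf("%s\n", e.what());
        }
        return 0;
    }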
Nix. This is to prevent Berkeley DB from becoming wedged.
Unfortunately it is not possible to throw C++ exceptions from a
signal handler. In fact, you can't do much of anything except
change variables of type `volatile sig_atomic_t'. So we set an
interrupt flag in the signal handler and check it at various
strategic locations in the code (by calling checkInterrupt()).
Since this is unlikely to cover all cases (e.g., (semi-)infinite
loops), sometimes SIGTERM may now be required to kill Nix.
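
The pattern described above, reduced to a standalone sketch
(checkInterrupt() is a real Nix function, but this is not its
actual source):

    #include <csignal>
    #include <cstdio>
    #include <stdexcept>
    #include <unistd.h>

    // A signal handler can do little more than set a flag of type
    // `volatile sig_atomic_t'.
    static volatile sig_atomic_t interrupted = 0;

    static void sigintHandler(int) { interrupted = 1; }

    // Called at strategic points in the code; converts the flag into
    // a C++ exception in normal (non-signal) context.
    static void checkInterrupt()
    {
        if (interrupted) throw std::runtime_error("interrupted by the user");
    }

    int main()
    {
        std::signal(SIGINT, sigintHandler);
        try {
            for (;;) {
                checkInterrupt();   // sprinkled through long-running code
                sleep(1);
            }
        } catch (std::exception & e) {
            std::printf("%s\n", e.what());
        }
        return 0;
    }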
verbosity level.
recovery fails.
(following the Usenix paper).
it automatically removes log files when they are no longer needed.
*** IMPORTANT ***
If you have an existing Nix installation, you must checkpoint the
Nix database to prevent recent transactions from being undone. Do
the following:
- optional: make a backup of $prefix/var/nix/db.
- run `db_checkpoint' from Berkeley DB 4.1:
$ db_checkpoint -h $prefix/var/nix/db -1
- optional (?): run `db_recover' from Berkeley DB 4.1:
$ db_recover -h $prefix/var/nix/db
- remove $prefix/var/nix/db/log* and $prefix/var/nix/db/__db*
path of the Nix expression to be used with the import, upgrade, and
query commands. For instance,
$ nix-env -I ~/nixpkgs/pkgs/system/i686-linux.nix
$ nix-env --query --available [aka -qa]
sylpheed-0.9.7
bison-1.875
pango-1.2.5
subversion-0.35.1
...
$ nix-env -i sylpheed
$ nix-env -u subversion
There can be only one default at a time.
* If the path to a Nix expression is a symlink, follow the symlink
prior to resolving relative path references in the expression.
removal.
* Integrity: check in successor / substitute registration whether
the target path exists or has a substitute.
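
Illustrating the kind of check this refers to; the helper names
below are invented stand-ins for whatever database queries the real
code uses:

    #include <set>
    #include <stdexcept>
    #include <string>

    // Stand-ins for the real database lookups.
    static std::set<std::string> validPaths, substitutablePaths;

    static bool isValidPath(const std::string & p) { return validPaths.count(p) > 0; }
    static bool hasSubstitute(const std::string & p) { return substitutablePaths.count(p) > 0; }

    // Refuse to register a successor whose target can never be realised.
    static void registerSuccessor(const std::string & srcExpr,
                                  const std::string & successor)
    {
        if (!isValidPath(successor) && !hasSubstitute(successor))
            throw std::runtime_error("successor `" + successor +
                "' is neither valid nor substitutable");
        (void) srcExpr;   // actual registration omitted
    }

    int main()
    {
        validPaths.insert("/nix/store/abc-out");
        registerSuccessor("/nix/store/def.store", "/nix/store/abc-out");  // accepted
        return 0;
    }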
"i686-linux" instead of "i686-suse-linux").
* More consistency checks.
deleting a path in the store.
* Allow absolute paths in Nix expressions.
* Get nix-prefetch-url to work again.
* Various other fixes.
* Replace all directory reading code by a generic readDirectory()
function.
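
Not the actual function, but a typical shape for such a helper
using opendir()/readdir(), skipping the `.' and `..' entries:

    #include <dirent.h>
    #include <cstdio>
    #include <cstring>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Return the names of the entries in `path', excluding "." and "..".
    static std::vector<std::string> readDirectory(const std::string & path)
    {
        std::vector<std::string> names;
        DIR * dir = opendir(path.c_str());
        if (!dir) throw std::runtime_error("cannot open directory `" + path + "'");
        struct dirent * entry;
        while ((entry = readdir(dir))) {
            if (!std::strcmp(entry->d_name, ".") ||
                !std::strcmp(entry->d_name, "..")) continue;
            names.push_back(entry->d_name);
        }
        closedir(dir);
        return names;
    }

    int main()
    {
        for (const std::string & name : readDirectory("."))
            std::printf("%s\n", name.c_str());
        return 0;
    }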