Imports the package set twice in the builder expression: once
configured for the target system, and once configured for the native
system.
This makes it possible to fetch the actual image contents for the
required architecture, but use local tools to assemble the symlink
layer and metadata.
|
|
Specifying this meta-package toggles support for ARM64 images, for
example:
# Pull a default x86_64 image
docker pull nixery.dev/hello
# Pull an ARM64 image
docker pull nixery.dev/arm64/hello
|
|
Adds the CPU architecture to the image configuration. This makes it
possible for users to select an architecture via meta-packages.
Relates to #13
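For reference, the Docker/OCI image configuration carries the target
platform in its `architecture` and `os` fields. A minimal Go sketch of
the relevant part (struct and field names illustrative, not
necessarily Nixery's):

type imageConfig struct {
    // Architecture is what the arm64 meta-package will toggle,
    // e.g. "amd64" (the default) or "arm64".
    Architecture string `json:"architecture"`
    OS           string `json:"os"` // e.g. "linux"
}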
|
|
Adds an implementation of popcount that, instead of realising
derivations locally, just queries the cache's narinfo files.
The downside of this is that calculating popularity for arbitrary Nix
package sets is not possible with this implementation. The upside is
that calculating the popularity for an entire Nix channel can now be
done in ~10 seconds[0].
This fixes #65.
[0]: Assuming a /fast/ internet connection.
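A rough sketch of the approach in Go (the narinfo URL scheme and the
`References:` field are what cache.nixos.org actually serves; the
helper itself is hypothetical):

import (
    "io"
    "net/http"
    "strings"
)

// fetchReferences returns the store paths referenced by the store
// path with the given hash, as listed in its narinfo file.
func fetchReferences(hash string) ([]string, error) {
    resp, err := http.Get("https://cache.nixos.org/" + hash + ".narinfo")
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, err
    }

    // narinfo files are plain "Key: value" lines; the references
    // appear space-separated on a single line.
    for _, line := range strings.Split(string(body), "\n") {
        if refs, ok := strings.CutPrefix(line, "References: "); ok {
            return strings.Fields(refs), nil
        }
    }
    return nil, nil
}

Counting how often each path shows up across these reference lists
then yields the popularity figures.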
|
|
Real-life experience has shown that the weighting of the metric
produced here is appropriate.
|
|
This case should not be possible unless something manually constructs
a logrus entry with a non-error value in the log.ErrorKey field, but
it's better to be safe than sorry.
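A sketch of the defensive check (logrus.ErrorKey is the real constant
logrus uses for this field; the helper is illustrative):

import (
    "fmt"

    "github.com/sirupsen/logrus"
)

// errorField extracts a printable message from the entry's error
// field, tolerating non-error values instead of panicking.
func errorField(entry *logrus.Entry) (string, bool) {
    v, ok := entry.Data[logrus.ErrorKey]
    if !ok {
        return "", false
    }
    if err, ok := v.(error); ok {
        return err.Error(), true
    }
    // Unexpected, but better safe than sorry.
    return fmt.Sprint(v), true
}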
|
|
Both of these no longer matched what is actually going on in Nixery.
|
|
|
Previously background contexts were created where necessary (e.g. in
GCS interactions). Should I begin to use request timeouts or other
context-dependent things in the future, it's useful to have the actual
HTTP request context around.
This threads the request context through the application to all places
that need it.
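In Go terms the change amounts to passing `r.Context()` down the call
chain instead of minting fresh background contexts; handler and helper
names below are hypothetical:

import (
    "context"
    "net/http"
)

func manifestHandler(w http.ResponseWriter, r *http.Request) {
    // The request context carries cancellation and any future
    // deadlines; thread it through instead of context.Background().
    ctx := r.Context()

    manifest, err := buildManifest(ctx, r.URL.Path)
    if err != nil {
        http.Error(w, "manifest build failed", http.StatusInternalServerError)
        return
    }
    w.Write(manifest)
}

func buildManifest(ctx context.Context, path string) ([]byte, error) {
    // GCS interactions and other context-aware calls receive ctx
    // here rather than a fresh background context.
    _ = ctx
    return []byte("{}"), nil
}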
|
|
This is very annoying otherwise.
|
|
The point at which files are moved happens to also (initially) be the
point where the `layers` directory is created. For this reason
renaming must ensure that all path components exist, which this commit
takes care of.
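The fix boils down to something like this sketch:

import (
    "os"
    "path/filepath"
)

// renameWithParents ensures all parent directories of the target
// exist before renaming, since the rename may be the first write
// into the `layers` directory.
func renameWithParents(src, dst string) error {
    if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
        return err
    }
    return os.Rename(src, dst)
}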
|
|
I assumed (incorrectly) that logrus would already take care of
surfacing error messages in human-readable form.
|
|
The filesystem storage backend can be enabled by setting
`NIXERY_STORAGE_BACKEND` to `filesystem` and `STORAGE_PATH` to a disk
location from which Nixery can serve files.
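A sketch of how these variables might drive backend selection; the
constructor names are hypothetical, and the storage.Backend interface
itself is introduced further down in this log:

// newBackendFromEnv picks a storage backend based on the
// environment, mirroring the configuration described above.
func newBackendFromEnv() (storage.Backend, error) {
    switch os.Getenv("NIXERY_STORAGE_BACKEND") {
    case "filesystem":
        return storage.NewFilesystemBackend(os.Getenv("STORAGE_PATH"))
    case "gcs":
        return storage.NewGCSBackend()
    default:
        return nil, fmt.Errorf("no valid storage backend configured")
    }
}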
|
|
The request object is required for some serving methods (e.g. the
filesystem one).
|
|
This allows users to store and serve layers from a local filesystem
path.
|
|
The logical implementation is mostly identical to the previous one, but
adhering to the new storage.Backend interface.
|
|
This abstracts over the functionality of Google Cloud Storage and
other potential underlying storage backends to make it possible to
replace these in Nixery.
The GCS backend is not yet reimplemented.
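A minimal sketch of what such an interface could look like; the method
names and signatures here are assumptions, not necessarily the actual
storage.Backend:

import (
    "context"
    "io"
    "net/http"
)

// Persister writes an object and reports its SHA256 hash and size.
type Persister func(io.Writer) (string, int64, error)

type Backend interface {
    // Persist stores an object under the given key.
    Persist(ctx context.Context, key string, f Persister) (string, int64, error)

    // Fetch retrieves a previously persisted object.
    Fetch(ctx context.Context, key string) (io.ReadCloser, error)

    // Serve answers a registry HTTP request for an object, e.g. via
    // a signed-URL redirect (GCS) or directly from disk (filesystem);
    // hence serving methods need the request object (see above).
    Serve(digest string, r *http.Request, w http.ResponseWriter) error
}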
|
|
In most cases this is not useful without the wrapper script, so users
should always build nixery-bin anyway.
|
|
This key is now taken straight from the configured service account
key.
|
|
The JSON file generated for service account keys already contains the
required information for signing URLs in GCS, thus the environment
variables for toggling signing behaviour have been removed.
Signing is now enabled automatically in the presence of service
account credentials (i.e. `GOOGLE_APPLICATION_CREDENTIALS`).
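The signing itself can use real APIs from golang.org/x/oauth2/google
and cloud.google.com/go/storage; the plumbing around them below is a
sketch:

import (
    "os"
    "time"

    "cloud.google.com/go/storage"
    "golang.org/x/oauth2/google"
)

// signedURL derives signing material from the service account key
// file and signs a GET URL for the given object.
func signedURL(bucket, object string) (string, error) {
    key, err := os.ReadFile(os.Getenv("GOOGLE_APPLICATION_CREDENTIALS"))
    if err != nil {
        return "", err
    }

    // The key file contains client_email and private_key, which is
    // all that URL signing requires.
    cfg, err := google.JWTConfigFromJSON(key)
    if err != nil {
        return "", err
    }

    return storage.SignedURL(bucket, object, &storage.SignedURLOptions{
        GoogleAccessID: cfg.Email,
        PrivateKey:     cfg.PrivateKey,
        Method:         "GET",
        Expires:        time.Now().Add(5 * time.Minute),
    })
}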
|
|
Some Nix download mechanisms add a second hash to the store path,
which ended up in the source hash output and broke argument
interpolation.
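One way to handle this is to anchor hash extraction to the component
directly after /nix/store/ rather than matching hash-like strings
anywhere in the path; the regex and helper below are illustrative:

import "regexp"

// Store paths look like /nix/store/<32-char-hash>-<name>; some
// fetchers embed a second hash inside <name>, which must be ignored.
var storePathHash = regexp.MustCompile(`^/nix/store/([0-9a-z]{32})-`)

func hashFromPath(p string) (string, bool) {
    m := storePathHash.FindStringSubmatch(p)
    if m == nil {
        return "", false
    }
    return m[1], true
}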
|
|
Instead of compressing & decompressing again to get the underlying tar
hash, use the same mechanism as for store path layers and compress the
symlink layer only once, while uploading.
|
|
Docker expects hashes of compressed tarballs in the manifest (as these
are used to fetch from the content-addressable layer store), but for
some reason it expects hashes in the configuration layer to be of
uncompressed tarballs.
To achieve this an additional SHA256 hash is calculated while creating
the layer tarballs, but before passing them to the gzip writer.
At the moment the symlink layer is first compressed and then
decompressed again to calculate its hash. This can be refactored in a
future change.
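In Go this falls out of io.MultiWriter: the tar stream is hashed
before it enters the gzip writer, and the gzip output is hashed as it
is written out. A sketch with a hypothetical function shape:

import (
    "archive/tar"
    "compress/gzip"
    "crypto/sha256"
    "fmt"
    "io"
)

// packLayer writes a gzipped layer tarball to dest and returns the
// uncompressed (config) and compressed (manifest) digests.
func packLayer(dest io.Writer, write func(*tar.Writer) error) (string, string, error) {
    tarHash := sha256.New()
    gzHash := sha256.New()

    gz := gzip.NewWriter(io.MultiWriter(dest, gzHash))
    tw := tar.NewWriter(io.MultiWriter(gz, tarHash))

    if err := write(tw); err != nil {
        return "", "", err
    }
    if err := tw.Close(); err != nil {
        return "", "", err
    }
    if err := gz.Close(); err != nil {
        return "", "", err
    }

    return fmt.Sprintf("sha256:%x", tarHash.Sum(nil)),
        fmt.Sprintf("sha256:%x", gzHash.Sum(nil)), nil
}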
|
|
This fixes #62.
|
|
This has become an issue recently with changes such as GZIP
compression, where CI runs no longer work because they conflict with
the production bucket for the public instance.
|
|
Makes use of the `.WithError` and `.WithField` convenience functions
in logrus to simplify log statement construction.
This has the added benefit of making it easier to correctly log
errors.
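For illustration, a typical statement before and after (with logrus
imported as log; field names hypothetical):

// Before: error formatting mixed into the message string.
log.Printf("failed to build image %s: %v", image, err)

// After: the error and its context travel as structured fields.
log.WithError(err).WithField("image", image).Error("failed to build image")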
|
|
This rewrites all existing log statements into the structured logrus
format. For consistency, all errors are always logged separately from
the primary message in a field called `error`.
Only the "info", "error" and "warn" severities are used.
|
|
The output format now writes a `severity` field that follows the
format recognised by Stackdriver Logging.
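A minimal formatter along these lines (the severity names match
Stackdriver's; everything else is a sketch):

import (
    "encoding/json"
    "strings"

    "github.com/sirupsen/logrus"
)

type stackdriverFormatter struct{}

func (stackdriverFormatter) Format(e *logrus.Entry) ([]byte, error) {
    msg := map[string]interface{}{
        "message": e.Message,
        // Stackdriver recognises severities such as INFO,
        // WARNING and ERROR.
        "severity": strings.ToUpper(e.Level.String()),
    }
    for k, v := range e.Data {
        msg[k] = v
    }

    out, err := json.Marshal(msg)
    if err != nil {
        return nil, err
    }
    return append(out, '\n'), nil
}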
|
|
Uses a hash of Nixery's sources as the version displayed when Nixery
launches or logs an error. This makes it possible to distinguish
between errors logged from different versions.
The source hashes should be reproducible between different checkouts
of the same source tree.
|
|
This formatter has basic support for the Stackdriver Error Reporting
format, but several things are still lacking:
* the service version (preferably git commit?) needs to be included
  in the server somehow
* log streams should be split between stdout/stderr, as that is
  seemingly how AppEngine (and several other GCP services?)
  differentiates between info/error logs
|
|
With these changes it is possible to keep Nixery in $GOPATH and build
the server in there, while still having things work correctly via Nix.
|
|
This introduces a structured logging library that can be used (next
step) to attach additional metadata to log entries.
|
|
These two packages almost always end up being required by programs,
but people don't necessarily consider them.
They will now always be added and their popularity is artificially
inflated to ensure they end up at the top of the layer list.
|
|
Cache writes might not be flushed without this call.
|
|
Image layers in manifests are now sorted in a stable (descending)
order based on their merge rating, meaning that layers more likely to
be shared between images come first.
The reason for this change is Docker's handling of image layers on
overlay2: images are condensed into a single representation on disk
after downloading.
Because of this, Docker will constantly re-download layers that are
applied in a different order in different images (layer order matters
in imperatively created images), based on something it calls the
'ChainID'.
Sorting the layers this way raises the likelihood of a long chain of
matching layers at the beginning of an image.
This relates to #39.
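The sort itself is a stable descending comparison on the merge rating,
along these lines (type and field names hypothetical):

import "sort"

type layer struct {
    Digest      string
    MergeRating uint64
}

// Stable sort keeps equally-rated layers in a deterministic order.
func sortByMergeRating(layers []layer) {
    sort.SliceStable(layers, func(i, j int) bool {
        return layers[i].MergeRating > layers[j].MergeRating
    })
}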
|
|
This functionality has been rolled into the server component and is no
longer required.
|
|
This will create, upload and hash the layer tarballs in one disk read.
|
|
This previously invoked a Nix derivation that spent a few seconds
producing an empty JSON object ...
|
|
The last missing puzzle piece for #50!
|
|
Implements a local manifest cache that uses the temporary directory to
cache manifest builds.
This is necessary due to the size of manifests: keeping them entirely
in memory would quickly balloon Nixery's memory usage, unless some
mechanism for cache eviction is implemented.
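A sketch of such a cache, keyed by a content hash of the requested
packages (paths and helper names illustrative):

import (
    "os"
    "path/filepath"
)

func manifestPath(key string) string {
    return filepath.Join(os.TempDir(), "nixery-manifests", key)
}

// cachedManifest returns a previously built manifest, if present.
func cachedManifest(key string) ([]byte, bool) {
    m, err := os.ReadFile(manifestPath(key))
    if err != nil {
        return nil, false
    }
    return m, true
}

// cacheManifest persists a built manifest to the temp directory.
func cacheManifest(key string, manifest []byte) error {
    if err := os.MkdirAll(filepath.Dir(manifestPath(key)), 0755); err != nil {
        return err
    }
    return os.WriteFile(manifestPath(key), manifest, 0644)
}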
|