From 30424447574a0bc0ac8a7c9862b4000c70da846f Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 23 Jul 2019 19:24:51 +0000
Subject: chore: Import Nixery from experimental
Moves the existing Nixery code base to a git repository and switches
to public equivalents of libraries used.
---
tools/nixery/README.md | 99 +++++++++++
tools/nixery/app.yaml | 14 ++
tools/nixery/build-registry-image.nix | 167 ++++++++++++++++++
tools/nixery/index.html | 90 ++++++++++
tools/nixery/main.go | 309 ++++++++++++++++++++++++++++++++++
5 files changed, 679 insertions(+)
create mode 100644 tools/nixery/README.md
create mode 100644 tools/nixery/app.yaml
create mode 100644 tools/nixery/build-registry-image.nix
create mode 100644 tools/nixery/index.html
create mode 100644 tools/nixery/main.go
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
new file mode 100644
index 000000000000..6b1db469648a
--- /dev/null
+++ b/tools/nixery/README.md
@@ -0,0 +1,99 @@
+# Nixery
+
+This package implements a Docker-compatible container registry that is capable
+of transparently building and serving container images using [Nix][].
+
+The project started out with the intention of becoming a Kubernetes controller
+that can serve declarative image specifications defined in CRDs as container
+images. The design for this is outlined in [a public gist][gist].
+
+Currently it focuses on the ad-hoc creation of container images as outlined
+below with an example instance available at
+[nixery.appspot.com](https://nixery.appspot.com).
+
+This is not an officially supported Google project.
+
+## Ad-hoc container images
+
+Nixery supports building images on-demand based on the *image name*. Every
+package that the user intends to include in the image is specified as a path
+component of the image name.
+
+The path components refer to top-level keys in `nixpkgs` and are used to build a
+container image using Nix's [buildLayeredImage][] functionality.
+
+The special meta-package `shell` provides an image base with many core
+components (such as `bash` and `coreutils`) that users commonly expect in
+interactive images.
+
+## Usage example
+
+Using the publicly available Nixery instance at `nixery.appspot.com`, one could
+retrieve a container image containing `curl` and an interactive shell like this:
+
+```shell
+tazjin@tazbox:~$ sudo docker run -ti nixery.appspot.com/shell/curl bash
+Unable to find image 'nixery.appspot.com/shell/curl:latest' locally
+latest: Pulling from shell/curl
+7734b79e1ba1: Already exists
+b0d2008d18cd: Pull complete
+< ... some layers omitted ...>
+Digest: sha256:178270bfe84f74548b6a43347d73524e5c2636875b673675db1547ec427cf302
+Status: Downloaded newer image for nixery.appspot.com/shell/curl:latest
+bash-4.4# curl --version
+curl 7.64.0 (x86_64-pc-linux-gnu) libcurl/7.64.0 OpenSSL/1.0.2q zlib/1.2.11 libssh2/1.8.0 nghttp2/1.35.1
+```
+
+## Known issues
+
+* Initial build times for an image can be somewhat slow while Nixery retrieves
+  the required derivations from the Nix cache under the hood.
+
+ Due to how the Docker Registry API works, there is no way to provide
+ feedback to the user during this period - hence the UX (in interactive mode)
+ is currently that "nothing is happening" for a while after the `Unable to
+ find image` message is printed.
+
+* For some reason these images do not currently work in GKE clusters.
+ Launching a Kubernetes pod that uses a Nixery image results in an error
+ stating `unable to convert a nil pointer to a runtime API image:
+ ImageInspectError`.
+
+ This error comes from
+ [here](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/dockershim/convert.go#L35)
+ and it occurs *after* the Kubernetes node has retrieved the image from
+ Nixery (as per the Nixery logs).
+
+## Kubernetes integration (in the future)
+
+**Note**: The Kubernetes integration is not yet implemented.
+
+The basic idea of the Kubernetes integration is to provide a way for users to
+specify the contents of a container image as an API object in Kubernetes which
+will be transparently built by Nix when the container is started up.
+
+For example, given a resource that looks like this:
+
+```yaml
+---
+apiVersion: k8s.nixos.org/v1alpha
+kind: NixImage
+metadata:
+ name: curl-and-jq
+data:
+ tag: v1
+ contents:
+ - curl
+ - jq
+ - bash
+```
+
+One could create a container that references the `curl-and-jq` image, which will
+then be created by Nix when the container image is pulled.
+
+The controller itself runs as a DaemonSet on every node in the cluster,
+providing a host-mounted `/nix/store` folder for caching purposes.
+
+[Nix]: https://nixos.org/
+[gist]: https://gist.github.com/tazjin/08f3d37073b3590aacac424303e6f745
+[buildLayeredImage]: https://grahamc.com/blog/nix-and-layered-docker-images
diff --git a/tools/nixery/app.yaml b/tools/nixery/app.yaml
new file mode 100644
index 000000000000..223fa75829fa
--- /dev/null
+++ b/tools/nixery/app.yaml
@@ -0,0 +1,14 @@
+env: flex
+runtime: custom
+
+resources:
+ cpu: 2
+ memory_gb: 4
+ disk_size_gb: 50
+
+automatic_scaling:
+ max_num_instances: 3
+ cool_down_period_sec: 60
+
+env_variables:
+ BUCKET: "nixery-layers"
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
new file mode 100644
index 000000000000..11030d38a553
--- /dev/null
+++ b/tools/nixery/build-registry-image.nix
@@ -0,0 +1,167 @@
+# This file contains a modified version of dockerTools.buildImage that, instead
+# of outputting a single tarball which can be imported into a running Docker
+# daemon, builds a manifest file that can be used for serving the image over a
+# registry API.
+
+{
+ # Image Name
+ name,
+  # Image tag; the Nix output hash will be used if null
+ tag ? null,
+ # Files to put on the image (a nix store path or list of paths).
+ contents ? [],
+ # Packages to install by name (which must refer to top-level attributes of
+ # nixpkgs). This is passed in as a JSON-array in string form.
+ packages ? "[]",
+ # Optional bash script to run on the files prior to fixturizing the layer.
+ extraCommands ? "", uid ? 0, gid ? 0,
+ # Docker's lowest maximum layer limit is 42-layers for an old
+ # version of the AUFS graph driver. We pick 24 to ensure there is
+ # plenty of room for extension. I believe the actual maximum is
+ # 128.
+ maxLayers ? 24,
+ # Nix package set to use
+  pkgs ? (import <nixpkgs> {})
+}:
+
+# Since this is essentially a re-wrapping of some of the functionality that is
+# implemented in the dockerTools, we need all of its components in our top-level
+# namespace.
+with pkgs;
+with dockerTools;
+
+let
+ tarLayer = "application/vnd.docker.image.rootfs.diff.tar";
+ baseName = baseNameOf name;
+
+ # deepFetch traverses the top-level Nix package set to retrieve an item via a
+ # path specified in string form.
+ #
+ # For top-level items, the name of the key yields the result directly. Nested
+ # items are fetched by using dot-syntax, as in Nix itself.
+ #
+ # For example, `deepFetch pkgs "xorg.xev"` retrieves `pkgs.xorg.xev`.
+ deepFetch = s: n:
+ let path = lib.strings.splitString "." n;
+ err = builtins.throw "Could not find '${n}' in package set";
+ in lib.attrsets.attrByPath path err s;
+
+ # allContents is the combination of all derivations and store paths passed in
+ # directly, as well as packages referred to by name.
+ allContents = contents ++ (map (deepFetch pkgs) (builtins.fromJSON packages));
+
+ contentsEnv = symlinkJoin {
+ name = "bulk-layers";
+ paths = allContents;
+ };
+
+ # The image build infrastructure expects to be outputting a slightly different
+ # format than the one we serve over the registry protocol. To work around its
+ # expectations we need to provide an empty JSON file that it can write some
+ # fun data into.
+ emptyJson = writeText "empty.json" "{}";
+
+ bulkLayers = mkManyPureLayers {
+ name = baseName;
+ configJson = emptyJson;
+ closure = writeText "closure" "${contentsEnv} ${emptyJson}";
+ # One layer will be taken up by the customisationLayer, so
+ # take up one less.
+ maxLayers = maxLayers - 1;
+ };
+
+ customisationLayer = mkCustomisationLayer {
+ name = baseName;
+ contents = contentsEnv;
+ baseJson = emptyJson;
+ inherit uid gid extraCommands;
+ };
+
+ # Inspect the returned bulk layers to determine which layers belong to the
+ # image and how to serve them.
+ #
+ # This computes both an MD5 and a SHA256 hash of each layer, which are used
+ # for different purposes. See the registry server implementation for details.
+ #
+ # Some of this logic is copied straight from `buildLayeredImage`.
+ allLayersJson = runCommand "fs-layer-list.json" {
+ buildInputs = [ coreutils findutils jq openssl ];
+ } ''
+ find ${bulkLayers} -mindepth 1 -maxdepth 1 | sort -t/ -k5 -n > layer-list
+ echo ${customisationLayer} >> layer-list
+
+ for layer in $(cat layer-list); do
+ layerPath="$layer/layer.tar"
+ layerSha256=$(sha256sum $layerPath | cut -d ' ' -f1)
+ # The server application compares binary MD5 hashes and expects base64
+ # encoding instead of hex.
+ layerMd5=$(openssl dgst -md5 -binary $layerPath | openssl enc -base64)
+ layerSize=$(wc -c $layerPath | cut -d ' ' -f1)
+
+ jq -n -c --arg sha256 $layerSha256 --arg md5 $layerMd5 --arg size $layerSize --arg path $layerPath \
+ '{ size: ($size | tonumber), sha256: $sha256, md5: $md5, path: $path }' >> fs-layers
+ done
+
+ cat fs-layers | jq -s -c '.' > $out
+ '';
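+  # Each entry of the list written above has the shape (values illustrative):
+  #   { size = 123; sha256 = "<hex digest>"; md5 = "<base64 digest>"; path = "<store path>/layer.tar"; }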
+ allLayers = builtins.fromJSON (builtins.readFile allLayersJson);
+
+ # Image configuration corresponding to the OCI specification for the file type
+ # 'application/vnd.oci.image.config.v1+json'
+ config = {
+ architecture = "amd64";
+ os = "linux";
+ rootfs.type = "layers";
+ rootfs.diff_ids = map (layer: "sha256:${layer.sha256}") allLayers;
+ };
+ configJson = writeText "${baseName}-config.json" (builtins.toJSON config);
+ configMetadata = with builtins; fromJSON (readFile (runCommand "config-meta" {
+ buildInputs = [ jq openssl ];
+ } ''
+ size=$(wc -c ${configJson} | cut -d ' ' -f1)
+ sha256=$(sha256sum ${configJson} | cut -d ' ' -f1)
+    md5=$(openssl dgst -md5 -binary ${configJson} | openssl enc -base64)
+ jq -n -c --arg size $size --arg sha256 $sha256 --arg md5 $md5 \
+ '{ size: ($size | tonumber), sha256: $sha256, md5: $md5 }' \
+ >> $out
+ ''));
+
+ # Corresponds to the manifest JSON expected by the Registry API.
+ #
+ # This is Docker's "Image Manifest V2, Schema 2":
+ # https://docs.docker.com/registry/spec/manifest-v2-2/
+ manifest = {
+ schemaVersion = 2;
+ mediaType = "application/vnd.docker.distribution.manifest.v2+json";
+
+ config = {
+ mediaType = "application/vnd.docker.container.image.v1+json";
+ size = configMetadata.size;
+ digest = "sha256:${configMetadata.sha256}";
+ };
+
+ layers = map (layer: {
+ mediaType = tarLayer;
+ digest = "sha256:${layer.sha256}";
+ size = layer.size;
+ }) allLayers;
+ };
+
+ # This structure maps each layer digest to the actual tarball that will need
+ # to be served. It is used by the controller to cache the paths during a pull.
+ layerLocations = {
+ "${configMetadata.sha256}" = {
+ path = configJson;
+ md5 = configMetadata.md5;
+ };
+ } // (builtins.listToAttrs (map (layer: {
+ name = "${layer.sha256}";
+ value = {
+ path = layer.path;
+ md5 = layer.md5;
+ };
+ }) allLayers));
+
+in writeText "manifest-output.json" (builtins.toJSON {
+ inherit manifest layerLocations;
+})
diff --git a/tools/nixery/index.html b/tools/nixery/index.html
new file mode 100644
index 000000000000..ebec9968c0c9
--- /dev/null
+++ b/tools/nixery/index.html
@@ -0,0 +1,90 @@
+
+
+
+
+
+ Nixery
+
+
+
+
+
+ Nixery
+
+
+
+ What is this?
+
+ Nixery provides the ability to pull ad-hoc container images from a Docker-compatible registry
+ server. The image names specify the contents the image should contain, which are then
+ retrieved and built by the Nix package manager.
+
+
+ Nix is also responsible for the creation of the container images themselves. To do this it
+ uses an interesting layering strategy described in
+ this blog post.
+
+
+ How does it work?
+
+ Simply point your local Docker installation (or other compatible registry client) at Nixery
+ and ask for an image with the contents you desire. Image contents are path separated in the
+ name, so for example if you needed an image that contains a shell and emacs you
+ could pull it as such:
+
+
+ nixery.appspot.com/shell/emacs25-nox
+
+
+ Image tags are currently ignored. Every package name needs to correspond to a key in the
+ nixpkgs package set.
+
+
+ There are some special meta-packages which you must specify as the
+ first package in an image. These are:
+
+
+
+ shell: Provides default packages you would expect in an interactive environment
+
+ builder: Provides the above as well as Nix's standard build environment
+
+
+ Hence if you needed an interactive image with, for example, htop installed you
+ could run docker run -ti nixery.appspot.com/shell/htop bash.
+
+
+ FAQ
+
+ Technically speaking none of these are frequently-asked questions (because no questions have
+ been asked so far), but I'm going to take a guess at a few anyways:
+
+
+
+ Where is the source code for this?
+
+ Not yet public, sorry. Check back later(tm).
+
+
+ Which revision of nixpkgs is used?
+
+ Currently whatever was HEAD at the time I deployed this. One idea I've had is
+ to let users specify tags on images that correspond to commits in nixpkgs, however there is
+ some potential for abuse there (e.g. by triggering lots of builds on commits that have
+ broken Hydra builds) and I don't want to deal with that yet.
+
+
+
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
new file mode 100644
index 000000000000..29b22f301823
--- /dev/null
+++ b/tools/nixery/main.go
@@ -0,0 +1,309 @@
+// Package main provides the implementation of a container registry that
+// transparently builds container images based on Nix derivations.
+//
+// The Nix derivation used for image creation is responsible for creating
+// objects that are compatible with the registry API. The targeted registry
+// protocol is currently Docker's.
+//
+// When an image is requested, the required contents are parsed out of the
+// request and a Nix-build is initiated that eventually responds with the
+// manifest as well as information linking each layer digest to a local
+// filesystem path.
+//
+// Nixery caches the filesystem paths and returns the manifest to the client.
+// Subsequent requests for layer content per digest are then fulfilled by
+// serving the files from disk.
+package main
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "log"
+ "net/http"
+ "os"
+ "os/exec"
+ "regexp"
+ "strings"
+
+ "cloud.google.com/go/storage"
+)
+
+// ManifestMediaType stores the Content-Type used for the manifest itself. This
+// corresponds to the "Image Manifest V2, Schema 2" described on this page:
+//
+// https://docs.docker.com/registry/spec/manifest-v2-2/
+const ManifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
+
+// Image represents the information necessary for building a container image. This can
+// be either a list of package names (corresponding to keys in the nixpkgs set) or a
+// Nix expression that results in a *list* of derivations.
+type image struct {
+ // Name of the container image.
+ name string
+
+ // Names of packages to include in the image. These must correspond directly to
+ // top-level names of Nix packages in the nixpkgs tree.
+ packages []string
+}
+
+// BuildResult represents the output of calling the Nix derivation responsible for building
+// registry images.
+//
+// The `layerLocations` field contains the local filesystem paths to each individual image layer
+// that will need to be served, while the `manifest` field contains the JSON-representation of
+// the manifest that needs to be served to the client.
+//
+// The latter field is simply treated as opaque JSON and passed through.
+type BuildResult struct {
+ Manifest json.RawMessage `json:"manifest"`
+ LayerLocations map[string]struct {
+ Path string `json:"path"`
+ Md5 []byte `json:"md5"`
+ } `json:"layerLocations"`
+}
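+
+// For illustration, the build output handed back by Nix has roughly this shape
+// (digests and store paths are placeholders):
+//
+//	{
+//	  "manifest": { ... },
+//	  "layerLocations": {
+//	    "<sha256 hex digest>": { "path": "<store path>/layer.tar", "md5": "<base64 MD5>" }
+//	  }
+//	}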
+
+// imageFromName parses an image name into the corresponding structure which can
+// be used to invoke Nix.
+//
+// It will expand convenience names under the hood (see the `convenienceNames` function below).
+func imageFromName(name string) image {
+ packages := strings.Split(name, "/")
+ return image{
+ name: name,
+ packages: convenienceNames(packages),
+ }
+}
+
+// convenienceNames expands convenience package names defined by Nixery which let users
+// include commonly required sets of tools in a container quickly.
+//
+// Convenience names must be specified as the first package in an image.
+//
+// Currently defined convenience names are:
+//
+// * `shell`: Includes bash, coreutils and other common command-line tools
+// * `builder`: Includes the standard build environment, as well as everything from `shell`
+func convenienceNames(packages []string) []string {
+ shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
+ builderPackages := append(shellPackages, "stdenv")
+
+ if packages[0] == "shell" {
+ return append(packages[1:], shellPackages...)
+ } else if packages[0] == "builder" {
+ return append(packages[1:], builderPackages...)
+ } else {
+ return packages
+ }
+}
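+
+// For example, imageFromName("shell/curl") yields an image named "shell/curl"
+// whose package list expands to
+// ["curl", "bashInteractive", "coreutils", "moreutils", "nano"].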
+
+// Call out to Nix and request that an image be built. Nix will, upon success, return
+// a manifest for the container image.
+func buildImage(image *image, ctx *context.Context, bucket *storage.BucketHandle) ([]byte, error) {
+ // This file is made available at runtime via Blaze. See the `data` declaration in `BUILD`
+ nixPath := "experimental/users/tazjin/nixery/build-registry-image.nix"
+
+ packages, err := json.Marshal(image.packages)
+ if err != nil {
+ return nil, err
+ }
+
+ cmd := exec.Command("nix-build", "--no-out-link", "--show-trace", "--argstr", "name", image.name, "--argstr", "packages", string(packages), nixPath)
+
+ outpipe, err := cmd.StdoutPipe()
+ if err != nil {
+ return nil, err
+ }
+
+ errpipe, err := cmd.StderrPipe()
+ if err != nil {
+ return nil, err
+ }
+
+ if err = cmd.Start(); err != nil {
+ log.Println("Error starting nix-build:", err)
+ return nil, err
+ }
+ log.Printf("Started Nix image build for ''%s'", image.name)
+
+ stdout, _ := ioutil.ReadAll(outpipe)
+ stderr, _ := ioutil.ReadAll(errpipe)
+
+ if err = cmd.Wait(); err != nil {
+ // TODO(tazjin): Propagate errors upwards in a usable format.
+ log.Printf("nix-build execution error: %s\nstdout: %s\nstderr: %s\n", err, stdout, stderr)
+ return nil, err
+ }
+
+ log.Println("Finished Nix image build")
+
+ buildOutput, err := ioutil.ReadFile(strings.TrimSpace(string(stdout)))
+ if err != nil {
+ return nil, err
+ }
+
+ // The build output returned by Nix is deserialised to add all contained layers to the
+ // bucket. Only the manifest itself is re-serialised to JSON and returned.
+ var result BuildResult
+ err = json.Unmarshal(buildOutput, &result)
+ if err != nil {
+ return nil, err
+ }
+
+ for layer, meta := range result.LayerLocations {
+ err = uploadLayer(ctx, bucket, layer, meta.Path, meta.Md5)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ return json.Marshal(result.Manifest)
+}
+
+// uploadLayer uploads a single layer to Cloud Storage bucket. Before writing any data
+// the bucket is probed to see if the file already exists.
+//
+// If the file does exist, its MD5 hash is verified to ensure that the stored file is
+// not - for example - a fragment of a previous, incomplete upload.
+func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer string, path string, md5 []byte) error {
+ layerKey := fmt.Sprintf("layers/%s", layer)
+ obj := bucket.Object(layerKey)
+
+ // Before uploading a layer to the bucket, probe whether it already exists.
+ //
+ // If it does and the MD5 checksum matches the expected one, the layer upload
+ // can be skipped.
+ attrs, err := obj.Attrs(*ctx)
+
+ if err == nil && bytes.Equal(attrs.MD5, md5) {
+ log.Printf("Layer sha256:%s already exists in bucket, skipping upload", layer)
+ } else {
+ writer := obj.NewWriter(*ctx)
+ file, err := os.Open(path)
+
+ if err != nil {
+ return fmt.Errorf("failed to open layer %s from path %s: %v", layer, path, err)
+ }
+
+ size, err := io.Copy(writer, file)
+ if err != nil {
+ return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
+ }
+
+ if err = writer.Close(); err != nil {
+ return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
+ }
+
+ log.Printf("Uploaded layer sha256:%s (%v bytes written)\n", layer, size)
+ }
+
+ return nil
+}
+
+// layerRedirect constructs the public URL of the layer object in the Cloud Storage bucket
+// and redirects the client there.
+//
+// The Docker client is known to follow redirects, but this might not be true for all other
+// registry clients.
+func layerRedirect(w http.ResponseWriter, bucket string, digest string) {
+ log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, bucket)
+ url := fmt.Sprintf("https://storage.googleapis.com/%s/layers/%s", bucket, digest)
+ w.Header().Set("Location", url)
+ w.WriteHeader(303)
+}
+
+// prepareBucket configures the handle to a Cloud Storage bucket in which individual layers will be
+// stored after Nix builds. Nixery does not directly serve layers to registry clients, instead it
+// redirects them to the public URLs of the Cloud Storage bucket.
+//
+// The bucket is required for Nixery to function correctly, hence fatal errors are generated in case
+// it fails to be set up correctly.
+func prepareBucket(ctx *context.Context, bucket string) *storage.BucketHandle {
+ client, err := storage.NewClient(*ctx)
+ if err != nil {
+ log.Fatalln("Failed to set up Cloud Storage client:", err)
+ }
+
+ bkt := client.Bucket(bucket)
+
+ if _, err := bkt.Attrs(*ctx); err != nil {
+ log.Fatalln("Could not access configured bucket", err)
+ }
+
+ return bkt
+}
+
+var manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/(\w+)$`)
+var layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
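+
+// For illustration, incoming registry requests look like:
+//
+//	/v2/shell/curl/manifests/latest           -> matched by manifestRegex
+//	/v2/shell/curl/blobs/sha256:<hex digest>  -> matched by layerRegex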
+
+func main() {
+ bucketName := os.Getenv("BUCKET")
+ if bucketName == "" {
+ log.Fatalln("GCS bucket for layer storage must be specified")
+ }
+
+ port := os.Getenv("PORT")
+ if port == "" {
+ port = "5726"
+ }
+
+ ctx := context.Background()
+ bucket := prepareBucket(&ctx, bucketName)
+
+ log.Printf("Starting Kubernetes Nix controller on port %s\n", port)
+
+ log.Fatal(http.ListenAndServe(":"+port, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ // When running on AppEngine, HTTP traffic should be redirected to HTTPS.
+ //
+ // This is achieved here by enforcing HSTS (with a one week duration) on responses.
+ if r.Header.Get("X-Forwarded-Proto") == "http" && strings.Contains(r.Host, "appspot.com") {
+ w.Header().Add("Strict-Transport-Security", "max-age=604800")
+ }
+
+ // Serve an index page to anyone who visits the registry's base URL:
+ if r.RequestURI == "/" {
+ index, _ := ioutil.ReadFile("experimental/users/tazjin/nixery/index.html")
+ w.Header().Add("Content-Type", "text/html")
+ w.Write(index)
+ return
+ }
+
+ // Acknowledge that we speak V2
+ if r.RequestURI == "/v2/" {
+ fmt.Fprintln(w)
+ return
+ }
+
+ // Serve the manifest (straight from Nix)
+ manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
+ if len(manifestMatches) == 3 {
+ imageName := manifestMatches[1]
+ log.Printf("Requesting manifest for image '%s'", imageName)
+ image := imageFromName(manifestMatches[1])
+ manifest, err := buildImage(&image, &ctx, bucket)
+
+ if err != nil {
+ log.Println("Failed to build image manifest", err)
+ return
+ }
+
+ w.Header().Add("Content-Type", ManifestMediaType)
+ w.Write(manifest)
+ return
+ }
+
+ // Serve an image layer. For this we need to first ask Nix for the
+ // manifest, then proceed to extract the correct layer from it.
+ layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
+ if len(layerMatches) == 3 {
+ digest := layerMatches[2]
+ layerRedirect(w, bucketName, digest)
+ return
+ }
+
+ w.WriteHeader(404)
+ })))
+}
--
cgit 1.4.1
From 5d9b32977ddd332f47f89aa30202b80906d3e719 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 23 Jul 2019 21:48:27 +0100
Subject: feat(build): Introduce build configuration using Nix
Rather than migrating to Bazel, it seems more appropriate to use Nix
for this project.
The project is split into several different components (for data
dependencies and binaries). A derivation for building an image for
Nixery itself will be added.
---
tools/nixery/default.nix | 43 ++++++++++++++++
tools/nixery/go-deps.nix | 111 +++++++++++++++++++++++++++++++++++++++++
tools/nixery/index.html | 90 ---------------------------------
tools/nixery/static/index.html | 90 +++++++++++++++++++++++++++++++++
4 files changed, 244 insertions(+), 90 deletions(-)
create mode 100644 tools/nixery/default.nix
create mode 100644 tools/nixery/go-deps.nix
delete mode 100644 tools/nixery/index.html
create mode 100644 tools/nixery/static/index.html
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
new file mode 100644
index 000000000000..19e7df963a50
--- /dev/null
+++ b/tools/nixery/default.nix
@@ -0,0 +1,43 @@
+{ pkgs ? import <nixpkgs> {} }:
+
+with pkgs;
+
+rec {
+ # Go implementation of the Nixery server which implements the
+ # container registry interface.
+ #
+ # Users will usually not want to use this directly, instead see the
+ # 'nixery' derivation below, which automatically includes runtime
+ # data dependencies.
+ nixery-server = buildGoPackage {
+ name = "nixery-server";
+
+ # Technically people should not be building Nixery through 'go get'
+ # or similar (as other required files will not be included), but
+ # buildGoPackage requires a package path.
+ goPackagePath = "github.com/google/nixery";
+
+ goDeps = ./go-deps.nix;
+ src = ./.;
+
+ meta = {
+      description = "Container image builder serving Nix-backed images";
+ homepage = "https://github.com/google/nixery";
+      license = lib.licenses.asl20;
+ maintainers = [ lib.maintainers.tazjin ];
+ };
+ };
+
+ # Nix expression (unimported!) which is used by Nixery to build
+ # container images.
+ nixery-builder = runCommand "build-registry-image.nix" {} ''
+ cat ${./build-registry-image.nix} > $out
+ '';
+
+ # Static files to serve on the Nixery index. This is used primarily
+ # for the demo instance running at nixery.appspot.com and provides
+ # some background information for what Nixery is.
+ nixery-static = runCommand "nixery-static" {} ''
+ cp -r ${./static} $out
+ '';
+}
diff --git a/tools/nixery/go-deps.nix b/tools/nixery/go-deps.nix
new file mode 100644
index 000000000000..57d9ecd64931
--- /dev/null
+++ b/tools/nixery/go-deps.nix
@@ -0,0 +1,111 @@
+# This file was generated by https://github.com/kamilchm/go2nix v1.3.0
+[
+ {
+ goPackagePath = "cloud.google.com/go";
+ fetch = {
+ type = "git";
+ url = "https://code.googlesource.com/gocloud";
+ rev = "edd0968ab5054ee810843a77774d81069989494b";
+ sha256 = "1mh8i72h6a1z9lp4cy9bwa2j87bm905zcsvmqwskdqi8z58cif4a";
+ };
+ }
+ {
+ goPackagePath = "github.com/golang/protobuf";
+ fetch = {
+ type = "git";
+ url = "https://github.com/golang/protobuf";
+ rev = "6c65a5562fc06764971b7c5d05c76c75e84bdbf7";
+ sha256 = "1k1wb4zr0qbwgpvz9q5ws9zhlal8hq7dmq62pwxxriksayl6hzym";
+ };
+ }
+ {
+ goPackagePath = "github.com/googleapis/gax-go";
+ fetch = {
+ type = "git";
+ url = "https://github.com/googleapis/gax-go";
+ rev = "bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2";
+ sha256 = "1lxawwngv6miaqd25s3ba0didfzylbwisd2nz7r4gmbmin6jsjrx";
+ };
+ }
+ {
+ goPackagePath = "github.com/hashicorp/golang-lru";
+ fetch = {
+ type = "git";
+ url = "https://github.com/hashicorp/golang-lru";
+ rev = "59383c442f7d7b190497e9bb8fc17a48d06cd03f";
+ sha256 = "0yzwl592aa32vfy73pl7wdc21855w17zssrp85ckw2nisky8rg9c";
+ };
+ }
+ {
+ goPackagePath = "go.opencensus.io";
+ fetch = {
+ type = "git";
+ url = "https://github.com/census-instrumentation/opencensus-go";
+ rev = "b4a14686f0a98096416fe1b4cb848e384fb2b22b";
+ sha256 = "1aidyp301v5ngwsnnc8v1s09vvbsnch1jc4vd615f7qv77r9s7dn";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/net";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/net";
+ rev = "da137c7871d730100384dbcf36e6f8fa493aef5b";
+ sha256 = "1qsiyr3irmb6ii06hivm9p2c7wqyxczms1a9v1ss5698yjr3fg47";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/oauth2";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/oauth2";
+ rev = "0f29369cfe4552d0e4bcddc57cc75f4d7e672a33";
+ sha256 = "06jwpvx0x2gjn2y959drbcir5kd7vg87k0r1216abk6rrdzzrzi2";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/sys";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/sys";
+ rev = "fae7ac547cb717d141c433a2a173315e216b64c4";
+ sha256 = "11pl0dycm5d8ar7g1l1w5q2cx0lms8i15n8mxhilhkdd2xpmh8f0";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/text";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/text";
+ rev = "342b2e1fbaa52c93f31447ad2c6abc048c63e475";
+ sha256 = "0flv9idw0jm5nm8lx25xqanbkqgfiym6619w575p7nrdh0riqwqh";
+ };
+ }
+ {
+ goPackagePath = "google.golang.org/api";
+ fetch = {
+ type = "git";
+ url = "https://code.googlesource.com/google-api-go-client";
+ rev = "069bea57b1be6ad0671a49ea7a1128025a22b73f";
+ sha256 = "19q2b610lkf3z3y9hn6rf11dd78xr9q4340mdyri7kbijlj2r44q";
+ };
+ }
+ {
+ goPackagePath = "google.golang.org/genproto";
+ fetch = {
+ type = "git";
+ url = "https://github.com/google/go-genproto";
+ rev = "c506a9f9061087022822e8da603a52fc387115a8";
+ sha256 = "03hh80aqi58dqi5ykj4shk3chwkzrgq2f3k6qs5qhgvmcy79y2py";
+ };
+ }
+ {
+ goPackagePath = "google.golang.org/grpc";
+ fetch = {
+ type = "git";
+ url = "https://github.com/grpc/grpc-go";
+ rev = "977142214c45640483838b8672a43c46f89f90cb";
+ sha256 = "05wig23l2sil3bfdv19gq62sya7hsabqj9l8pzr1sm57qsvj218d";
+ };
+ }
+]
diff --git a/tools/nixery/index.html b/tools/nixery/index.html
deleted file mode 100644
index ebec9968c0c9..000000000000
--- a/tools/nixery/index.html
+++ /dev/null
@@ -1,90 +0,0 @@
-
-
-
-
-
- Nixery
-
-
-
-
-
- Nixery
-
-
-
- What is this?
-
- Nixery provides the ability to pull ad-hoc container images from a Docker-compatible registry
- server. The image names specify the contents the image should contain, which are then
- retrieved and built by the Nix package manager.
-
-
- Nix is also responsible for the creation of the container images themselves. To do this it
- uses an interesting layering strategy described in
- this blog post.
-
-
- How does it work?
-
- Simply point your local Docker installation (or other compatible registry client) at Nixery
- and ask for an image with the contents you desire. Image contents are path separated in the
- name, so for example if you needed an image that contains a shell and emacs you
- could pull it as such:
-
-
- nixery.appspot.com/shell/emacs25-nox
-
-
- Image tags are currently ignored. Every package name needs to correspond to a key in the
- nixpkgs package set.
-
-
- There are some special meta-packages which you must specify as the
- first package in an image. These are:
-
-
-
- shell: Provides default packages you would expect in an interactive environment
-
- builder: Provides the above as well as Nix's standard build environment
-
-
- Hence if you needed an interactive image with, for example, htop installed you
- could run docker run -ti nixery.appspot.com/shell/htop bash.
-
-
- FAQ
-
- Technically speaking none of these are frequently-asked questions (because no questions have
- been asked so far), but I'm going to take a guess at a few anyways:
-
-
-
- Where is the source code for this?
-
- Not yet public, sorry. Check back later(tm).
-
-
- Which revision of nixpkgs is used?
-
- Currently whatever was HEAD at the time I deployed this. One idea I've had is
- to let users specify tags on images that correspond to commits in nixpkgs, however there is
- some potential for abuse there (e.g. by triggering lots of builds on commits that have
- broken Hydra builds) and I don't want to deal with that yet.
-
diff --git a/tools/nixery/static/index.html b/tools/nixery/static/index.html
new file mode 100644
--- /dev/null
+++ b/tools/nixery/static/index.html
@@ -0,0 +1,90 @@
+ Nixery provides the ability to pull ad-hoc container images from a Docker-compatible registry
+ server. The image names specify the contents the image should contain, which are then
+ retrieved and built by the Nix package manager.
+
+
+ Nix is also responsible for the creation of the container images themselves. To do this it
+ uses an interesting layering strategy described in
+ this blog post.
+
+
+ How does it work?
+
+ Simply point your local Docker installation (or other compatible registry client) at Nixery
+ and ask for an image with the contents you desire. Image contents are path separated in the
+ name, so for example if you needed an image that contains a shell and emacs you
+ could pull it as such:
+
+
+ nixery.appspot.com/shell/emacs25-nox
+
+
+ Image tags are currently ignored. Every package name needs to correspond to a key in the
+ nixpkgs package set.
+
+
+ There are some special meta-packages which you must specify as the
+ first package in an image. These are:
+
+
+
+ shell: Provides default packages you would expect in an interactive environment
+
+ builder: Provides the above as well as Nix's standard build environment
+
+
+ Hence if you needed an interactive image with, for example, htop installed you
+ could run docker run -ti nixery.appspot.com/shell/htop bash.
+
+
+ FAQ
+
+ Technically speaking none of these are frequently-asked questions (because no questions have
+ been asked so far), but I'm going to take a guess at a few anyways:
+
+
+
+ Where is the source code for this?
+
+ Not yet public, sorry. Check back later(tm).
+
+
+ Which revision of nixpkgs is used?
+
+ Currently whatever was HEAD at the time I deployed this. One idea I've had is
+ to let users specify tags on images that correspond to commits in nixpkgs, however there is
+ some potential for abuse there (e.g. by triggering lots of builds on commits that have
+ broken Hydra builds) and I don't want to deal with that yet.
+
+
+
--
cgit 1.4.1
From db1086a5bbb273d50c7a8645daa9c79801040e58 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 23 Jul 2019 22:48:16 +0100
Subject: feat(main): Add additional envvars to configure Nixery
Previously the code had hardcoded paths to runtime data (the Nix
builder & web files), which have now been moved into configuration
options.
Additionally configuration for the application is now centralised in a
single config struct, an instance of which is passed around the
application.
This makes it possible to implement a wrapper in Nix that will
configure the runtime data locations automatically.
---
tools/nixery/main.go | 81 +++++++++++++++++++++++++++++++---------------------
1 file changed, 49 insertions(+), 32 deletions(-)
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 29b22f301823..5db200fd32a4 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -32,11 +32,20 @@ import (
"cloud.google.com/go/storage"
)
-// ManifestMediaType stores the Content-Type used for the manifest itself. This
-// corresponds to the "Image Manifest V2, Schema 2" described on this page:
+// config holds the Nixery configuration options.
+type config struct {
+ bucket string // GCS bucket to cache & serve layers
+ builder string // Nix derivation for building images
+ web string // Static files to serve over HTTP
+ port string // Port on which to launch HTTP server
+}
+
+// ManifestMediaType is the Content-Type used for the manifest itself.
+// This corresponds to the "Image Manifest V2, Schema 2" described on
+// this page:
//
// https://docs.docker.com/registry/spec/manifest-v2-2/
-const ManifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
+const manifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
// Image represents the information necessary for building a container image. This can
// be either a list of package names (corresponding to keys in the nixpkgs set) or a
@@ -102,16 +111,19 @@ func convenienceNames(packages []string) []string {
// Call out to Nix and request that an image be built. Nix will, upon success, return
// a manifest for the container image.
-func buildImage(image *image, ctx *context.Context, bucket *storage.BucketHandle) ([]byte, error) {
- // This file is made available at runtime via Blaze. See the `data` declaration in `BUILD`
- nixPath := "experimental/users/tazjin/nixery/build-registry-image.nix"
-
+func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage.BucketHandle) ([]byte, error) {
packages, err := json.Marshal(image.packages)
if err != nil {
return nil, err
}
- cmd := exec.Command("nix-build", "--no-out-link", "--show-trace", "--argstr", "name", image.name, "--argstr", "packages", string(packages), nixPath)
+ cmd := exec.Command(
+ "nix-build",
+ "--no-out-link",
+ "--show-trace",
+ "--argstr", "name", image.name,
+ "--argstr", "packages", string(packages), cfg.builder,
+ )
outpipe, err := cmd.StdoutPipe()
if err != nil {
@@ -206,11 +218,11 @@ func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer strin
// layerRedirect constructs the public URL of the layer object in the Cloud Storage bucket
// and redirects the client there.
//
-// The Docker client is known to follow redirects, but this might not be true for all other
-// registry clients.
-func layerRedirect(w http.ResponseWriter, bucket string, digest string) {
- log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, bucket)
- url := fmt.Sprintf("https://storage.googleapis.com/%s/layers/%s", bucket, digest)
+// The Docker client is known to follow redirects, but this might not
+// be true for all other registry clients.
+func layerRedirect(w http.ResponseWriter, cfg *config, digest string) {
+ log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, cfg.bucket)
+ url := fmt.Sprintf("https://storage.googleapis.com/%s/layers/%s", cfg.bucket, digest)
w.Header().Set("Location", url)
w.WriteHeader(303)
}
@@ -219,15 +231,15 @@ func layerRedirect(w http.ResponseWriter, bucket string, digest string) {
// stored after Nix builds. Nixery does not directly serve layers to registry clients, instead it
// redirects them to the public URLs of the Cloud Storage bucket.
//
-// The bucket is required for Nixery to function correctly, hence fatal errors are generated in case
-// it fails to be set up correctly.
-func prepareBucket(ctx *context.Context, bucket string) *storage.BucketHandle {
+// The bucket is required for Nixery to function correctly, hence
+// fatal errors are generated in case it fails to be set up correctly.
+func prepareBucket(ctx *context.Context, cfg *config) *storage.BucketHandle {
client, err := storage.NewClient(*ctx)
if err != nil {
log.Fatalln("Failed to set up Cloud Storage client:", err)
}
- bkt := client.Bucket(bucket)
+ bkt := client.Bucket(cfg.bucket)
if _, err := bkt.Attrs(*ctx); err != nil {
log.Fatalln("Could not access configured bucket", err)
@@ -239,24 +251,29 @@ func prepareBucket(ctx *context.Context, bucket string) *storage.BucketHandle {
var manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/(\w+)$`)
var layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
-func main() {
- bucketName := os.Getenv("BUCKET")
- if bucketName == "" {
- log.Fatalln("GCS bucket for layer storage must be specified")
+func getConfig(key, desc string) string {
+ value := os.Getenv(key)
+ if value == "" {
+ log.Fatalln(desc + " must be specified")
}
- port := os.Getenv("PORT")
- if port == "" {
- port = "5726"
+ return value
+}
+
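+// As an illustrative invocation (paths and port are examples only), the
+// server could be started with:
+//
+//	BUCKET=nixery-layers NIX_BUILDER=/path/to/build-registry-image.nix \
+//	  WEB_DIR=/path/to/static PORT=8080 ./nixery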
+func main() {
+ cfg := &config{
+ bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
+ builder: getConfig("NIX_BUILDER", "Nix image builder code"),
+ web: getConfig("WEB_DIR", "Static web file dir"),
+ port: getConfig("PORT", "HTTP port"),
}
ctx := context.Background()
- bucket := prepareBucket(&ctx, bucketName)
+ bucket := prepareBucket(&ctx, cfg)
- log.Printf("Starting Kubernetes Nix controller on port %s\n", port)
+ log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.port)
- log.Fatal(http.ListenAndServe(":"+port, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- // When running on AppEngine, HTTP traffic should be redirected to HTTPS.
+ log.Fatal(http.ListenAndServe(":"+cfg.port, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
//
// This is achieved here by enforcing HSTS (with a one week duration) on responses.
if r.Header.Get("X-Forwarded-Proto") == "http" && strings.Contains(r.Host, "appspot.com") {
@@ -265,7 +282,7 @@ func main() {
// Serve an index page to anyone who visits the registry's base URL:
if r.RequestURI == "/" {
- index, _ := ioutil.ReadFile("experimental/users/tazjin/nixery/index.html")
+ index, _ := ioutil.ReadFile(cfg.web + "/index.html")
w.Header().Add("Content-Type", "text/html")
w.Write(index)
return
@@ -283,14 +300,14 @@ func main() {
imageName := manifestMatches[1]
log.Printf("Requesting manifest for image '%s'", imageName)
image := imageFromName(manifestMatches[1])
- manifest, err := buildImage(&image, &ctx, bucket)
+ manifest, err := buildImage(&ctx, cfg, &image, bucket)
if err != nil {
log.Println("Failed to build image manifest", err)
return
}
- w.Header().Add("Content-Type", ManifestMediaType)
+ w.Header().Add("Content-Type", manifestMediaType)
w.Write(manifest)
return
}
@@ -300,7 +317,7 @@ func main() {
layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
if len(layerMatches) == 3 {
digest := layerMatches[2]
- layerRedirect(w, bucketName, digest)
+ layerRedirect(w, cfg, digest)
return
}
--
cgit 1.4.1
From 83145681997d549461e9f3ff15daf819fb1b3e3b Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 23 Jul 2019 22:56:18 +0100
Subject: style(main): Reflow comments to 80 characters maximum
---
tools/nixery/main.go | 93 ++++++++++++++++++++++++++++------------------------
1 file changed, 51 insertions(+), 42 deletions(-)
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 5db200fd32a4..044d791517bc 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -40,31 +40,31 @@ type config struct {
port string // Port on which to launch HTTP server
}
-// ManifestMediaType is the Content-Type used for the manifest itself.
-// This corresponds to the "Image Manifest V2, Schema 2" described on
-// this page:
+// ManifestMediaType is the Content-Type used for the manifest itself. This
+// corresponds to the "Image Manifest V2, Schema 2" described on this page:
//
// https://docs.docker.com/registry/spec/manifest-v2-2/
const manifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
-// Image represents the information necessary for building a container image. This can
-// be either a list of package names (corresponding to keys in the nixpkgs set) or a
-// Nix expression that results in a *list* of derivations.
+// Image represents the information necessary for building a container image.
+// This can be either a list of package names (corresponding to keys in the
+// nixpkgs set) or a Nix expression that results in a *list* of derivations.
type image struct {
// Name of the container image.
name string
- // Names of packages to include in the image. These must correspond directly to
- // top-level names of Nix packages in the nixpkgs tree.
+ // Names of packages to include in the image. These must correspond
+ // directly to top-level names of Nix packages in the nixpkgs tree.
packages []string
}
-// BuildResult represents the output of calling the Nix derivation responsible for building
-// registry images.
+// BuildResult represents the output of calling the Nix derivation responsible
+// for building registry images.
//
-// The `layerLocations` field contains the local filesystem paths to each individual image layer
-// that will need to be served, while the `manifest` field contains the JSON-representation of
-// the manifest that needs to be served to the client.
+// The `layerLocations` field contains the local filesystem paths to each
+// individual image layer that will need to be served, while the `manifest`
+// field contains the JSON-representation of the manifest that needs to be
+// served to the client.
//
// The latter field is simply treated as opaque JSON and passed through.
type BuildResult struct {
@@ -78,7 +78,8 @@ type BuildResult struct {
// imageFromName parses an image name into the corresponding structure which can
// be used to invoke Nix.
//
-// It will expand convenience names under the hood (see the `convenienceNames` function below).
+// It will expand convenience names under the hood (see the `convenienceNames`
+// function below).
func imageFromName(name string) image {
packages := strings.Split(name, "/")
return image{
@@ -87,15 +88,15 @@ func imageFromName(name string) image {
}
}
-// convenienceNames expands convenience package names defined by Nixery which let users
-// include commonly required sets of tools in a container quickly.
+// convenienceNames expands convenience package names defined by Nixery which
+// let users include commonly required sets of tools in a container quickly.
//
// Convenience names must be specified as the first package in an image.
//
// Currently defined convenience names are:
//
// * `shell`: Includes bash, coreutils and other common command-line tools
-// * `builder`: Includes the standard build environment, as well as everything from `shell`
+// * `builder`: All of the above and the standard build environment
func convenienceNames(packages []string) []string {
shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
builderPackages := append(shellPackages, "stdenv")
@@ -109,8 +110,8 @@ func convenienceNames(packages []string) []string {
}
}
-// Call out to Nix and request that an image be built. Nix will, upon success, return
-// a manifest for the container image.
+// Call out to Nix and request that an image be built. Nix will, upon success,
+// return a manifest for the container image.
func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage.BucketHandle) ([]byte, error) {
packages, err := json.Marshal(image.packages)
if err != nil {
@@ -139,7 +140,7 @@ func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage
log.Println("Error starting nix-build:", err)
return nil, err
}
- log.Printf("Started Nix image build for ''%s'", image.name)
+ log.Printf("Started Nix image build for '%s'", image.name)
stdout, _ := ioutil.ReadAll(outpipe)
stderr, _ := ioutil.ReadAll(errpipe)
@@ -157,8 +158,9 @@ func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage
return nil, err
}
- // The build output returned by Nix is deserialised to add all contained layers to the
- // bucket. Only the manifest itself is re-serialised to JSON and returned.
+ // The build output returned by Nix is deserialised to add all
+ // contained layers to the bucket. Only the manifest itself is
+ // re-serialised to JSON and returned.
var result BuildResult
err = json.Unmarshal(buildOutput, &result)
if err != nil {
@@ -175,19 +177,20 @@ func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage
return json.Marshal(result.Manifest)
}
-// uploadLayer uploads a single layer to Cloud Storage bucket. Before writing any data
-// the bucket is probed to see if the file already exists.
+// uploadLayer uploads a single layer to Cloud Storage bucket. Before writing
+// any data the bucket is probed to see if the file already exists.
//
-// If the file does exist, its MD5 hash is verified to ensure that the stored file is
-// not - for example - a fragment of a previous, incomplete upload.
+// If the file does exist, its MD5 hash is verified to ensure that the stored
+// file is not - for example - a fragment of a previous, incomplete upload.
func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer string, path string, md5 []byte) error {
layerKey := fmt.Sprintf("layers/%s", layer)
obj := bucket.Object(layerKey)
- // Before uploading a layer to the bucket, probe whether it already exists.
+ // Before uploading a layer to the bucket, probe whether it already
+ // exists.
//
- // If it does and the MD5 checksum matches the expected one, the layer upload
- // can be skipped.
+ // If it does and the MD5 checksum matches the expected one, the layer
+ // upload can be skipped.
attrs, err := obj.Attrs(*ctx)
if err == nil && bytes.Equal(attrs.MD5, md5) {
@@ -215,11 +218,11 @@ func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer strin
return nil
}
-// layerRedirect constructs the public URL of the layer object in the Cloud Storage bucket
-// and redirects the client there.
+// layerRedirect constructs the public URL of the layer object in the Cloud
+// Storage bucket and redirects the client there.
//
-// The Docker client is known to follow redirects, but this might not
-// be true for all other registry clients.
+// The Docker client is known to follow redirects, but this might not be true
+// for all other registry clients.
func layerRedirect(w http.ResponseWriter, cfg *config, digest string) {
log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, cfg.bucket)
url := fmt.Sprintf("https://storage.googleapis.com/%s/layers/%s", cfg.bucket, digest)
@@ -227,12 +230,13 @@ func layerRedirect(w http.ResponseWriter, cfg *config, digest string) {
w.WriteHeader(303)
}
-// prepareBucket configures the handle to a Cloud Storage bucket in which individual layers will be
-// stored after Nix builds. Nixery does not directly serve layers to registry clients, instead it
-// redirects them to the public URLs of the Cloud Storage bucket.
+// prepareBucket configures the handle to a Cloud Storage bucket in which
+// individual layers will be stored after Nix builds. Nixery does not directly
+// serve layers to registry clients, instead it redirects them to the public
+// URLs of the Cloud Storage bucket.
//
-// The bucket is required for Nixery to function correctly, hence
-// fatal errors are generated in case it fails to be set up correctly.
+// The bucket is required for Nixery to function correctly, hence fatal errors
+// are generated in case it fails to be set up correctly.
func prepareBucket(ctx *context.Context, cfg *config) *storage.BucketHandle {
client, err := storage.NewClient(*ctx)
if err != nil {
@@ -274,13 +278,17 @@ func main() {
log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.port)
log.Fatal(http.ListenAndServe(":"+cfg.port, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ // When running on AppEngine, HTTP traffic should be redirected
+ // to HTTPS.
//
- // This is achieved here by enforcing HSTS (with a one week duration) on responses.
+ // This is achieved here by enforcing HSTS (with a one week
+ // duration) on responses.
if r.Header.Get("X-Forwarded-Proto") == "http" && strings.Contains(r.Host, "appspot.com") {
w.Header().Add("Strict-Transport-Security", "max-age=604800")
}
- // Serve an index page to anyone who visits the registry's base URL:
+ // Serve an index page to anyone who visits the registry's base
+ // URL:
if r.RequestURI == "/" {
index, _ := ioutil.ReadFile(cfg.web + "/index.html")
w.Header().Add("Content-Type", "text/html")
@@ -312,8 +320,9 @@ func main() {
return
}
- // Serve an image layer. For this we need to first ask Nix for the
- // manifest, then proceed to extract the correct layer from it.
+ // Serve an image layer. For this we need to first ask Nix for
+ // the manifest, then proceed to extract the correct layer from
+ // it.
layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
if len(layerMatches) == 3 {
digest := layerMatches[2]
--
cgit 1.4.1
From 5f471392cf074156857ce913002071ed0ac0fce2 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 23 Jul 2019 23:22:18 +0100
Subject: feat(build): Add wrapper script & container image setup
Introduces a wrapper script which automatically sets the paths to the
required runtime data dependencies.
Additionally configures a container image derivation which outputs an
image containing Nixery, Nix and other dependencies.
---
tools/nixery/default.nix | 27 ++++++++++++++++++++++++++-
1 file changed, 26 insertions(+), 1 deletion(-)
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 19e7df963a50..ebfb41bf6b0f 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -38,6 +38,31 @@ rec {
# for the demo instance running at nixery.appspot.com and provides
# some background information for what Nixery is.
nixery-static = runCommand "nixery-static" {} ''
- cp -r ${./static} $out
+ mkdir $out
+ cp ${./static}/* $out
'';
+
+ # Wrapper script running the Nixery server with the above two data
+ # dependencies configured.
+ #
+ # In most cases, this will be the derivation a user wants if they
+ # are installing Nixery directly.
+ nixery-bin = writeShellScriptBin "nixery" ''
+ export NIX_BUILDER="${nixery-builder}"
+ export WEB_DIR="${nixery-static}"
+ exec ${nixery-server}/bin/nixery
+ '';
+
+ # Container image containing Nixery and Nix itself. This image can
+ # be run on Kubernetes, published on AppEngine or whatever else is
+ # desired.
+ nixery-image = dockerTools.buildLayeredImage {
+ name = "nixery";
+ contents = [
+ bashInteractive
+ coreutils
+ nix
+ nixery-bin
+ ];
+ };
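+
+  # A usage sketch (not prescriptive): the image tarball produced by
+  # `nix-build -A nixery-image` can be loaded into a local Docker
+  # daemon with `docker load < result`.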
}
--
cgit 1.4.1
From 23260e59d9ca52cb695d14a4722d6a6ee23b163c Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 23 Jul 2019 23:32:56 +0100
Subject: chore: Add license scaffolding & contribution guidelines
---
tools/nixery/CONTRIBUTING.md | 28 +++++
tools/nixery/LICENSE | 202 ++++++++++++++++++++++++++++++++++
tools/nixery/build-registry-image.nix | 14 +++
tools/nixery/default.nix | 13 +++
tools/nixery/main.go | 14 +++
5 files changed, 271 insertions(+)
create mode 100644 tools/nixery/CONTRIBUTING.md
create mode 100644 tools/nixery/LICENSE
diff --git a/tools/nixery/CONTRIBUTING.md b/tools/nixery/CONTRIBUTING.md
new file mode 100644
index 000000000000..939e5341e74d
--- /dev/null
+++ b/tools/nixery/CONTRIBUTING.md
@@ -0,0 +1,28 @@
+# How to Contribute
+
+We'd love to accept your patches and contributions to this project. There are
+just a few small guidelines you need to follow.
+
+## Contributor License Agreement
+
+Contributions to this project must be accompanied by a Contributor License
+Agreement. You (or your employer) retain the copyright to your contribution;
+this simply gives us permission to use and redistribute your contributions as
+part of the project. Head over to <https://cla.developers.google.com/> to see
+your current agreements on file or to sign a new one.
+
+You generally only need to submit a CLA once, so if you've already submitted one
+(even if it was for a different project), you probably don't need to do it
+again.
+
+## Code reviews
+
+All submissions, including submissions by project members, require review. We
+use GitHub pull requests for this purpose. Consult
+[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more
+information on using pull requests.
+
+## Community Guidelines
+
+This project follows [Google's Open Source Community
+Guidelines](https://opensource.google.com/conduct/).
diff --git a/tools/nixery/LICENSE b/tools/nixery/LICENSE
new file mode 100644
index 000000000000..d64569567334
--- /dev/null
+++ b/tools/nixery/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
index 11030d38a553..5c5579fa2e0b 100644
--- a/tools/nixery/build-registry-image.nix
+++ b/tools/nixery/build-registry-image.nix
@@ -1,3 +1,17 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
# This file contains a modified version of dockerTools.buildImage that, instead
# of outputting a single tarball which can be imported into a running Docker
# daemon, builds a manifest file that can be used for serving the image over a
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index ebfb41bf6b0f..c7f284f033d1 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -1,3 +1,16 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
{ pkgs ? import {} }:
with pkgs;
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 044d791517bc..c0a57bc6d022 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -1,3 +1,17 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
// Package main provides the implementation of a container registry that
// transparently builds container images based on Nix derivations.
//
--
cgit 1.4.1
From 28bb3924ff35b1adba8e058d7b17bd1e58bacef1 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 23 Jul 2019 23:33:22 +0100
Subject: chore: Add gitignore to ignore Nix build results
---
tools/nixery/.gitignore | 2 ++
1 file changed, 2 insertions(+)
create mode 100644 tools/nixery/.gitignore
(limited to 'tools')
diff --git a/tools/nixery/.gitignore b/tools/nixery/.gitignore
new file mode 100644
index 000000000000..b2dec4547703
--- /dev/null
+++ b/tools/nixery/.gitignore
@@ -0,0 +1,2 @@
+result
+result-bin
--
cgit 1.4.1
From 18b4ae9f28ae4b56df1529eaca8039e326df64e1 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 23 Jul 2019 22:37:40 +0000
Subject: chore: Remove AppEngine configuration file
---
tools/nixery/app.yaml | 14 --------------
1 file changed, 14 deletions(-)
delete mode 100644 tools/nixery/app.yaml
(limited to 'tools')
diff --git a/tools/nixery/app.yaml b/tools/nixery/app.yaml
deleted file mode 100644
index 223fa75829fa..000000000000
--- a/tools/nixery/app.yaml
+++ /dev/null
@@ -1,14 +0,0 @@
-env: flex
-runtime: custom
-
-resources:
- cpu: 2
- memory_gb: 4
- disk_size_gb: 50
-
-automatic_scaling:
- max_num_instances: 3
- cool_down_period_sec: 60
-
-env_variables:
- BUCKET: "nixery-layers"
--
cgit 1.4.1
From 948f308025e5d1a3a4575b41d4b20d97f363c5c2 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 24 Jul 2019 17:46:39 +0000
Subject: feat(build): Configure Nixery image builder to set up env correctly
When running Nix inside of a container image, there are several
environment-specific details that need to be configured appropriately.
Most importantly, since one of the recent Nix 2.x releases, sandboxing
during builds is enabled by default. This, however, requires kernel
privileges which commonly aren't available to containers.
Nixery's demo instance (for instance, hehe) is deployed on AppEngine
where this type of container configuration is difficult, hence this
change.
Specifically, the following were changed:
* additional tools (such as tar/gzip) were introduced into the image
because the builtins-toolset in Nix does not reference these tools
via their store paths, which leads to them not being included
automatically
* Nix sandboxing was disabled in the container image
* the users/groups required by Nix were added to the container setup.
Note that these are being configured manually instead of via the
tools from the 'shadow' package, because the latter requires some
user information (such as root) to be present already, which is not
the case inside of the container. A minimal Go sketch of this
bootstrap follows below.
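For illustration, here is a minimal standalone Go sketch of the same container
bootstrap (the actual patch uses a shell wrapper, shown in the diff below); the
`/bin/nixery` path and the UID/GID values are taken from that wrapper and are
assumptions of this example, not an alternative implementation.

```go
package main

import (
	"log"
	"os"
	"syscall"
)

// appendLine appends a single line to a file, creating the file if needed.
func appendLine(path, line string) {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if _, err := f.WriteString(line + "\n"); err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Certificates are required so that Nix can fetch over HTTPS.
	os.Setenv("NIX_SSL_CERT_FILE", "/etc/ssl/certs/ca-bundle.crt")
	os.MkdirAll("/tmp", 0777)

	// Create the build user/group required by Nix.
	appendLine("/etc/group", "nixbld:x:30000:nixbld")
	appendLine("/etc/passwd", "nixbld:x:30000:30000:nixbld:/tmp:/bin/bash")

	// Disable sandboxing to avoid running into privilege issues.
	os.MkdirAll("/etc/nix", 0755)
	appendLine("/etc/nix/nix.conf", "sandbox = false")

	// Hand over to the Nixery server binary (path assumed for the example).
	log.Fatal(syscall.Exec("/bin/nixery", []string{"nixery"}, os.Environ()))
}
```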
---
tools/nixery/default.nix | 30 +++++++++++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index c7f284f033d1..64cb061c308c 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -36,7 +36,7 @@ rec {
meta = {
description = "Container image build serving Nix-backed images";
homepage = "https://github.com/google/nixery";
- license = lib.licenses.ascl20;
+ license = lib.licenses.asl20;
maintainers = [ lib.maintainers.tazjin ];
};
};
@@ -69,13 +69,37 @@ rec {
# Container image containing Nixery and Nix itself. This image can
# be run on Kubernetes, published on AppEngine or whatever else is
# desired.
- nixery-image = dockerTools.buildLayeredImage {
+ nixery-image = let
+ # Wrapper script for the wrapper script (meta!) which configures
+ # the container environment appropriately.
+ #
+ # Most importantly, sandboxing is disabled to avoid privilege
+ # issues in containers.
+ nixery-launch-script = writeShellScriptBin "nixery" ''
+ set -e
+ export NIX_SSL_CERT_FILE=/etc/ssl/certs/ca-bundle.crt
+ mkdir /tmp
+
+ # Create the build user/group required by Nix
+ echo 'nixbld:x:30000:nixbld' >> /etc/group
+ echo 'nixbld:x:30000:30000:nixbld:/tmp:/bin/bash' >> /etc/passwd
+
+ # Disable sandboxing to avoid running into privilege issues
+ mkdir -p /etc/nix
+ echo 'sandbox = false' >> /etc/nix/nix.conf
+
+ exec ${nixery-bin}/bin/nixery
+ '';
+ in dockerTools.buildLayeredImage {
name = "nixery";
contents = [
bashInteractive
+ cacert
coreutils
nix
- nixery-bin
+ nixery-launch-script
+ gnutar
+ gzip
];
};
}
--
cgit 1.4.1
From 6dd0ac3189559fa4934fabe3bf56850f4865e77f Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 24 Jul 2019 17:53:08 +0000
Subject: feat(nix): Import nixpkgs from a configured Nix channel
Instead of using whatever the current system default is, import a Nix
channel when building an image.
This will use Nix's internal caching behaviour for tarballs fetched
without a SHA hash.
For now the downloaded channel is pinned to nixos-19.03.
---
tools/nixery/build-registry-image.nix | 10 ++++++++--
tools/nixery/static/index.html | 13 +++++++++----
2 files changed, 17 insertions(+), 6 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
index 5c5579fa2e0b..25d1f59e714e 100644
--- a/tools/nixery/build-registry-image.nix
+++ b/tools/nixery/build-registry-image.nix
@@ -34,10 +34,16 @@
# plenty of room for extension. I believe the actual maximum is
# 128.
maxLayers ? 24,
- # Nix package set to use
- pkgs ? (import {})
+ # Nix channel to use
+ channel ? "nixos-19.03"
}:
+# Import the specified channel directly from Github.
+let
+ channelUrl = "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
+ pkgs = import (builtins.fetchTarball channelUrl) {};
+in
+
# Since this is essentially a re-wrapping of some of the functionality that is
# implemented in the dockerTools, we need all of its components in our top-level
# namespace.
diff --git a/tools/nixery/static/index.html b/tools/nixery/static/index.html
index ebec9968c0c9..908fb3821a69 100644
--- a/tools/nixery/static/index.html
+++ b/tools/nixery/static/index.html
@@ -75,10 +75,15 @@
Which revision of nixpkgs is used?
- Currently whatever was HEAD at the time I deployed this. One idea I've had is
- to let users specify tags on images that correspond to commits in nixpkgs, however there is
- some potential for abuse there (e.g. by triggering lots of builds on commits that have
- broken Hydra builds) and I don't want to deal with that yet.
+ Nixery imports a Nix channel
+ via builtins.fetchTarball. Currently the channel
+ to which this instance is pinned is NixOS 19.03.
+
+ One idea I've had is to let users specify tags on images that
+ correspond to commits in nixpkgs, however there is some
+ potential for abuse there (e.g. by triggering lots of builds
+ on commits that have broken Hydra builds) and I don't want to
+ deal with that yet.
Who made this?
--
cgit 1.4.1
From 620778243488df5ab28d7af6ce61c30ed3de5510 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 26 Jul 2019 10:22:31 +0000
Subject: fix(build): Specify default command for Nixery's own image
When running on AppEngine, the image is expected to be configured with
a default entry point / command.
This sets the command to the wrapper script, so that the image can
actually run properly when deployed.
---
tools/nixery/default.nix | 1 +
1 file changed, 1 insertion(+)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 64cb061c308c..23c776e117ff 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -92,6 +92,7 @@ rec {
'';
in dockerTools.buildLayeredImage {
name = "nixery";
+ config.Cmd = ["${nixery-launch-script}/bin/nixery"];
contents = [
bashInteractive
cacert
--
cgit 1.4.1
From 93a3985298c31168836a2b3f0585a47889db65ea Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 29 Jul 2019 21:02:46 +0100
Subject: docs(README): Remove known issues from README
These issues have been moved to the issue tracker.
---
tools/nixery/README.md | 20 --------------------
1 file changed, 20 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 6b1db469648a..5a4d766b0314 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -44,26 +44,6 @@ bash-4.4# curl --version
curl 7.64.0 (x86_64-pc-linux-gnu) libcurl/7.64.0 OpenSSL/1.0.2q zlib/1.2.11 libssh2/1.8.0 nghttp2/1.35.1
```
-## Known issues
-
-* Initial build times for an image can be somewhat slow while Nixery retrieves
- the required derivations from the Nix cache under-the-hood.
-
- Due to how the Docker Registry API works, there is no way to provide
- feedback to the user during this period - hence the UX (in interactive mode)
- is currently that "nothing is happening" for a while after the `Unable to
- find image` message is printed.
-
-* For some reason these images do not currently work in GKE clusters.
- Launching a Kubernetes pod that uses a Nixery image results in an error
- stating `unable to convert a nil pointer to a runtime API image:
- ImageInspectError`.
-
- This error comes from
- [here](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/dockershim/convert.go#L35)
- and it occurs *after* the Kubernetes node has retrieved the image from
- Nixery (as per the Nixery logs).
-
## Kubernetes integration (in the future)
**Note**: The Kubernetes integration is not yet implemented.
--
cgit 1.4.1
From 33d876fda87e98477de487cff3b553941eca30db Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 29 Jul 2019 21:03:04 +0100
Subject: docs(README): Update roadmap information
Adds information about Kubernetes integration & custom repository
support as well as links to the relevant tracking issues.
---
tools/nixery/README.md | 44 +++++++++++++++++---------------------------
1 file changed, 17 insertions(+), 27 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 5a4d766b0314..72185047e6d0 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -44,35 +44,25 @@ bash-4.4# curl --version
curl 7.64.0 (x86_64-pc-linux-gnu) libcurl/7.64.0 OpenSSL/1.0.2q zlib/1.2.11 libssh2/1.8.0 nghttp2/1.35.1
```
-## Kubernetes integration (in the future)
-
-**Note**: The Kubernetes integration is not yet implemented.
-
-The basic idea of the Kubernetes integration is to provide a way for users to
-specify the contents of a container image as an API object in Kubernetes which
-will be transparently built by Nix when the container is started up.
-
-For example, given a resource that looks like this:
-
-```yaml
----
-apiVersion: k8s.nixos.org/v1alpha
-kind: NixImage
-metadata:
- name: curl-and-jq
-data:
- tag: v1
- contents:
- - curl
- - jq
- - bash
-```
+## Roadmap
+
+### Custom Nix repository support
+
+One part of the Nixery vision is support for a custom Nix repository that
+provides, for example, the internal packages of an organisation.
+
+It should be possible to configure Nixery to build images from such a repository
+and serve them in order to make container images themselves close to invisible
+to the user.
+
+See [issue #3](https://github.com/google/nixery/issues/3).
+
+### Kubernetes integration (in the future)
-One could create a container that references the `curl-and-jq` image, which will
-then be created by Nix when the container image is pulled.
+It should be trivial to deploy Nixery inside of a Kubernetes cluster with
+correct caching behaviour, addressing and so on.
-The controller itself runs as a daemonset on every node in the cluster,
-providing a host-mounted `/nix/store`-folder for caching purposes.
+See [issue #4](https://github.com/google/nixery/issues/4).
[Nix]: https://nixos.org/
[gist]: https://gist.github.com/tazjin/08f3d37073b3590aacac424303e6f745
--
cgit 1.4.1
From 90948a48e1cbc4b43970ce0a6d55064c0a5eb3e4 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 29 Jul 2019 21:06:47 +0100
Subject: docs(CONTRIBUTING): Mention commit message format
---
tools/nixery/CONTRIBUTING.md | 7 +++++++
1 file changed, 7 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/CONTRIBUTING.md b/tools/nixery/CONTRIBUTING.md
index 939e5341e74d..ecad21b04508 100644
--- a/tools/nixery/CONTRIBUTING.md
+++ b/tools/nixery/CONTRIBUTING.md
@@ -15,6 +15,11 @@ You generally only need to submit a CLA once, so if you've already submitted one
(even if it was for a different project), you probably don't need to do it
again.
+## Commit messages
+
+Commits in this repository follow the [Angular commit message
+guidelines][commits].
+
## Code reviews
All submissions, including submissions by project members, require review. We
@@ -26,3 +31,5 @@ information on using pull requests.
This project follows [Google's Open Source Community
Guidelines](https://opensource.google.com/conduct/).
+
+[commits]: https://github.com/angular/angular/blob/master/CONTRIBUTING.md#commit
--
cgit 1.4.1
From fc2e508ab882b10b324ddbf430fcf6365e79b264 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 30 Jul 2019 12:39:29 +0100
Subject: feat(build): Add Travis configuration to build everything
The default Travis build command for Nix is `nix-build`, which will
build all derivations specified in the default.nix.
---
tools/nixery/.gitignore | 2 +-
tools/nixery/.travis.yml | 1 +
2 files changed, 2 insertions(+), 1 deletion(-)
create mode 100644 tools/nixery/.travis.yml
(limited to 'tools')
diff --git a/tools/nixery/.gitignore b/tools/nixery/.gitignore
index b2dec4547703..750baebf4151 100644
--- a/tools/nixery/.gitignore
+++ b/tools/nixery/.gitignore
@@ -1,2 +1,2 @@
result
-result-bin
+result-*
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
new file mode 100644
index 000000000000..d8cc8efa432a
--- /dev/null
+++ b/tools/nixery/.travis.yml
@@ -0,0 +1 @@
+language: nix
--
cgit 1.4.1
From 02eba0336e30714c859f6344ffc7fd641148d2c1 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 30 Jul 2019 13:15:44 +0100
Subject: refactor(main): Introduce more flexible request routing
Instead of just dispatching on URL regexes, use handlers to split the
routes into registry-related handlers and otherwise(tm).
For now the otherwise(tm) consists of a file server serving the static
directory, rather than just a plain match on the index route.
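To make the new dispatch concrete, here is a small standalone Go sketch
(illustration only, not part of this patch) that runs the registry regexes from
the diff below against a few made-up request URIs:

```go
package main

import (
	"fmt"
	"regexp"
)

// The two regexes used by the registry handler: image names may contain
// slashes, so the manifest tag and the layer digest are captured as
// separate groups.
var (
	manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/(\w+)$`)
	layerRegex    = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
)

func main() {
	// Hypothetical request URIs, similar to what a registry client sends.
	uris := []string{
		"/v2/shell/curl/manifests/latest",
		"/v2/shell/curl/blobs/sha256:0123456789abcdef", // digest shortened for the example
		"/v2/",
	}

	for _, uri := range uris {
		if m := manifestRegex.FindStringSubmatch(uri); len(m) == 3 {
			fmt.Printf("manifest request: image=%q tag=%q\n", m[1], m[2])
			continue
		}

		if m := layerRegex.FindStringSubmatch(uri); len(m) == 3 {
			fmt.Printf("layer request: image=%q digest=%q\n", m[1], m[2])
			continue
		}

		fmt.Printf("not a registry route: %q\n", uri)
	}
}
```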
---
tools/nixery/main.go | 119 +++++++++++++++++++++++++++------------------------
1 file changed, 62 insertions(+), 57 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index c0a57bc6d022..63004c0cea43 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -266,8 +266,13 @@ func prepareBucket(ctx *context.Context, cfg *config) *storage.BucketHandle {
return bkt
}
-var manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/(\w+)$`)
-var layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
+// Regexes matching the V2 Registry API routes. This only includes the
+// routes required for serving images, since pushing and other such
+// functionality is not available.
+var (
+ manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/(\w+)$`)
+ layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
+)
func getConfig(key, desc string) string {
value := os.Getenv(key)
@@ -278,12 +283,51 @@ func getConfig(key, desc string) string {
return value
}
+type registryHandler struct {
+ cfg *config
+ ctx *context.Context
+ bucket *storage.BucketHandle
+}
+
+func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+ // Serve the manifest (straight from Nix)
+ manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
+ if len(manifestMatches) == 3 {
+ imageName := manifestMatches[1]
+ log.Printf("Requesting manifest for image '%s'", imageName)
+ image := imageFromName(manifestMatches[1])
+ manifest, err := buildImage(h.ctx, h.cfg, &image, h.bucket)
+
+ if err != nil {
+ log.Println("Failed to build image manifest", err)
+ return
+ }
+
+ w.Header().Add("Content-Type", manifestMediaType)
+ w.Write(manifest)
+ return
+ }
+
+ // Serve an image layer. For this we need to first ask Nix for
+ // the manifest, then proceed to extract the correct layer from
+ // it.
+ layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
+ if len(layerMatches) == 3 {
+ digest := layerMatches[2]
+ layerRedirect(w, h.cfg, digest)
+ return
+ }
+
+ log.Printf("Unsupported registry route: %s\n", r.RequestURI)
+ w.WriteHeader(404)
+}
+
func main() {
cfg := &config{
- bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
+ bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
builder: getConfig("NIX_BUILDER", "Nix image builder code"),
- web: getConfig("WEB_DIR", "Static web file dir"),
- port: getConfig("PORT", "HTTP port"),
+ web: getConfig("WEB_DIR", "Static web file dir"),
+ port: getConfig("PORT", "HTTP port"),
}
ctx := context.Background()
@@ -291,59 +335,20 @@ func main() {
log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.port)
- log.Fatal(http.ListenAndServe(":"+cfg.port, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- // When running on AppEngine, HTTP traffic should be redirected
- // to HTTPS.
- //
- // This is achieved here by enforcing HSTS (with a one week
- // duration) on responses.
- if r.Header.Get("X-Forwarded-Proto") == "http" && strings.Contains(r.Host, "appspot.com") {
- w.Header().Add("Strict-Transport-Security", "max-age=604800")
- }
-
- // Serve an index page to anyone who visits the registry's base
- // URL:
- if r.RequestURI == "/" {
- index, _ := ioutil.ReadFile(cfg.web + "/index.html")
- w.Header().Add("Content-Type", "text/html")
- w.Write(index)
- return
- }
-
- // Acknowledge that we speak V2
- if r.RequestURI == "/v2/" {
- fmt.Fprintln(w)
- return
- }
+ // Acknowledge that we speak V2
+ http.HandleFunc("/v2", func(w http.ResponseWriter, r *http.Request) {
+ fmt.Fprintln(w)
+ })
- // Serve the manifest (straight from Nix)
- manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
- if len(manifestMatches) == 3 {
- imageName := manifestMatches[1]
- log.Printf("Requesting manifest for image '%s'", imageName)
- image := imageFromName(manifestMatches[1])
- manifest, err := buildImage(&ctx, cfg, &image, bucket)
-
- if err != nil {
- log.Println("Failed to build image manifest", err)
- return
- }
-
- w.Header().Add("Content-Type", manifestMediaType)
- w.Write(manifest)
- return
- }
+ // All other /v2/ requests belong to the registry handler.
+ http.Handle("/v2/", ®istryHandler{
+ cfg: cfg,
+ ctx: &ctx,
+ bucket: bucket,
+ })
- // Serve an image layer. For this we need to first ask Nix for
- // the manifest, then proceed to extract the correct layer from
- // it.
- layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
- if len(layerMatches) == 3 {
- digest := layerMatches[2]
- layerRedirect(w, cfg, digest)
- return
- }
+ // All other routes are served by the static file server.
+ http.Handle("/", http.FileServer(http.Dir(cfg.web)))
- w.WriteHeader(404)
- })))
+ log.Fatal(http.ListenAndServe(":"+cfg.port, nil))
}
--
cgit 1.4.1
From 1b37d8ecf62e1f192352b1fc2cc1aafa31325d53 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 30 Jul 2019 13:23:31 +0100
Subject: feat(static): Add logo & favicon resources
---
tools/nixery/.gitignore | 1 +
tools/nixery/static/favicon.ico | Bin 0 -> 157995 bytes
tools/nixery/static/nixery-logo.png | Bin 0 -> 79360 bytes
3 files changed, 1 insertion(+)
create mode 100644 tools/nixery/static/favicon.ico
create mode 100644 tools/nixery/static/nixery-logo.png
(limited to 'tools')
diff --git a/tools/nixery/.gitignore b/tools/nixery/.gitignore
index 750baebf4151..1e5c28c012f5 100644
--- a/tools/nixery/.gitignore
+++ b/tools/nixery/.gitignore
@@ -1,2 +1,3 @@
result
result-*
+.envrc
diff --git a/tools/nixery/static/favicon.ico b/tools/nixery/static/favicon.ico
new file mode 100644
index 000000000000..405dbc175b4e
Binary files /dev/null and b/tools/nixery/static/favicon.ico differ
diff --git a/tools/nixery/static/nixery-logo.png b/tools/nixery/static/nixery-logo.png
new file mode 100644
index 000000000000..5835cd50e1bb
Binary files /dev/null and b/tools/nixery/static/nixery-logo.png differ
--
cgit 1.4.1
From 9753df9255552b8e37aace321717262919696da3 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 30 Jul 2019 13:25:04 +0100
Subject: docs(README): Add logo & build status
---
tools/nixery/README.md | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 72185047e6d0..8d4f84598cef 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -1,7 +1,13 @@
-# Nixery
+
+
+
-This package implements a Docker-compatible container registry that is capable
-of transparently building and serving container images using [Nix][].
+-----------------
+
+[![Build Status](https://travis-ci.org/google/nixery.svg?branch=master)](https://travis-ci.org/google/nixery)
+
+**Nixery** is a Docker-compatible container registry that is capable of
+transparently building and serving container images using [Nix][].
The project started out with the intention of becoming a Kubernetes controller
that can serve declarative image specifications specified in CRDs as container
--
cgit 1.4.1
From 4802727408b7afeb8b5245e698053a700ebf6775 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 30 Jul 2019 13:35:30 +0100
Subject: docs(static): Update index page with post-launch information
Points people at the repository and removes some outdated information.
---
tools/nixery/static/index.html | 69 ++++++++++++++++++++++--------------------
1 file changed, 36 insertions(+), 33 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/static/index.html b/tools/nixery/static/index.html
index 908fb3821a69..8cbda7360ef9 100644
--- a/tools/nixery/static/index.html
+++ b/tools/nixery/static/index.html
@@ -14,6 +14,10 @@
padding: 010px
}
+ .logo {
+ max-width: 650px;
+ }
+
h1, h2, h3 {
line-height: 1.2
}
@@ -21,56 +25,55 @@
-
Nixery
+
+
+
+
-
What is this?
+
- Nixery provides the ability to pull ad-hoc container images from a Docker-compatible registry
- server. The image names specify the contents the image should contain, which are then
- retrieved and built by the Nix package manager.
+ This is an instance
+ of Nixery, which
+ provides the ability to pull ad-hoc container images from a
+ Docker-compatible registry server. The image names specify the
+ contents the image should contain, which are then retrieved and
+ built by the Nix package manager.
- Nix is also responsible for the creation of the container images themselves. To do this it
- uses an interesting layering strategy described in
+ Nix is also responsible for the creation of the container images
+ themselves. To do this it uses an interesting layering strategy
+ described in
this blog post.
How does it work?
- Simply point your local Docker installation (or other compatible registry client) at Nixery
- and ask for an image with the contents you desire. Image contents are path separated in the
- name, so for example if you needed an image that contains a shell and emacs you
- could pull it as such:
+ Simply point your local Docker installation (or other compatible
+ registry client) at Nixery and ask for an image with the
+ contents you desire. Image contents are path separated in the
+ name, so for example if you needed an image that contains a
+ shell and emacs you could pull it as such:
nixery.appspot.com/shell/emacs25-nox
- Image tags are currently ignored. Every package name needs to correspond to a key in the
+ Image tags are currently ignored. Every package name needs to
+ correspond to a key in the
nixpkgs package set.
- There are some special meta-packages which you must specify as the
- first package in an image. These are:
-
-
-
shell: Provides default packages you would expect in an interactive environment
-
builder: Provides the above as well as Nix's standard build environment
-
-
- Hence if you needed an interactive image with, for example, htop installed you
- could run docker run -ti nixery.appspot.com/shell/htop bash.
+ The special meta-package shell provides default packages
+ you would expect in an interactive environment (such as an
+ interactively configured bash). If you use this package
+ you must specify it as the first package in an image.
FAQ
-
- Technically speaking none of these are frequently-asked questions (because no questions have
- been asked so far), but I'm going to take a guess at a few anyways:
-
Where is the source code for this?
- Not yet public, sorry. Check back later(tm).
+ Over on Github.
Which revision of nixpkgs is used?
@@ -78,17 +81,17 @@
Nixery imports a Nix channel
via builtins.fetchTarball. Currently the channel
to which this instance is pinned is NixOS 19.03.
+
+
+ Is this an official Google project?
- One idea I've had is to let users specify tags on images that
- correspond to commits in nixpkgs, however there is some
- potential for abuse there (e.g. by triggering lots of builds
- on commits that have broken Hydra builds) and I don't want to
- deal with that yet.
+ No. Nixery is not officially supported by
+ Google.
--
cgit 1.4.1
From 2e4b1f85eef04c0abd83f5f47e1aa93c8ea84cb9 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 30 Jul 2019 23:55:28 +0100
Subject: fix(nix): Add empty image config to allow k8s usage
Introduce an empty runtime configuration object in each built image.
This is required because Kubernetes expects the configuration to be
present (even if it's just empty values).
Providing an empty configuration will make Docker's API return a full
configuration struct with default (i.e. empty) values rather than
`null`, which works for Kubernetes.
This fixes issue #1. See the issue for additional details.
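To make the `null`-versus-empty distinction concrete, here is a standalone Go
sketch (not part of this patch; the struct is only a heavily trimmed stand-in
for the real OCI image configuration) showing how an omitted configuration
serialises to `null` while an explicitly empty object serialises to `{}`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageConfig is a heavily trimmed stand-in for the OCI image configuration.
type imageConfig struct {
	OS     string           `json:"os"`
	Config *json.RawMessage `json:"config"`
}

func main() {
	empty := json.RawMessage(`{}`)

	// Without a config object the field serialises to null, which clients
	// such as Kubernetes cannot work with.
	withoutConfig := imageConfig{OS: "linux"}

	// With an empty config object clients receive `"config": {}` and fill
	// in default values instead.
	withConfig := imageConfig{OS: "linux", Config: &empty}

	a, _ := json.Marshal(withoutConfig)
	b, _ := json.Marshal(withConfig)
	fmt.Println(string(a)) // {"os":"linux","config":null}
	fmt.Println(string(b)) // {"os":"linux","config":{}}
}
```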
---
tools/nixery/build-registry-image.nix | 2 ++
1 file changed, 2 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
index 25d1f59e714e..0d61f3b713f5 100644
--- a/tools/nixery/build-registry-image.nix
+++ b/tools/nixery/build-registry-image.nix
@@ -133,6 +133,8 @@ let
os = "linux";
rootfs.type = "layers";
rootfs.diff_ids = map (layer: "sha256:${layer.sha256}") allLayers;
+ # Required to let Kubernetes import Nixery images
+ config = {};
};
configJson = writeText "${baseName}-config.json" (builtins.toJSON config);
configMetadata = with builtins; fromJSON (readFile (runCommand "config-meta" {
--
cgit 1.4.1
From a83701a14b4b620d9b24656e638d31e746f3a06e Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 31 Jul 2019 00:39:31 +0100
Subject: feat(build): Add dependencies for custom repo clones
Adds git & SSH as part of the Nixery image, which are required to use
Nix's builtins.fetchGit.
The dependency on interactive tools is dropped, as it was only
required during development when debugging the image building process
itself.
---
tools/nixery/default.nix | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 23c776e117ff..0f28911668ce 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -94,13 +94,13 @@ rec {
name = "nixery";
config.Cmd = ["${nixery-launch-script}/bin/nixery"];
contents = [
- bashInteractive
cacert
- coreutils
- nix
- nixery-launch-script
+ git
gnutar
gzip
+ nix
+ nixery-launch-script
+ openssh
];
};
}
--
cgit 1.4.1
From 2db92243e748bba726e9d829f89f65dd5bf16afd Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 31 Jul 2019 01:45:09 +0100
Subject: feat(nix): Support package set imports from different sources
This extends the package set import mechanism in
build-registry-image.nix with several different options:
1. Importing a nixpkgs channel from Github (the default, pinned to
nixos-19.03)
2. Importing a custom Nix git repository. This uses builtins.fetchGit
and can thus rely on git/SSH configuration in the environment (such
as keys)
3. Importing a local filesystem path
As long as the repository pointed at is either a checkout of nixpkgs or
nixpkgs overlaid with custom packages, this will work.
A special syntax has been defined for how these three options are
passed in, but users should not need to concern themselves with it as
it will be taken care of by the server component.
This relates to #3.
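For illustration, a standalone Go sketch (not part of this patch) that splits
such a specification on `'!'`, using the example values from the comment in the
diff below. Note that the Nix side uses `builtins.split`, which interleaves
separator matches with the results, hence the `elemAt 0/2/4` indexing seen
there.

```go
package main

import (
	"fmt"
	"strings"
)

// parseSource splits a package set specification of the form "type!args...".
func parseSource(spec string) (srcType string, args []string) {
	parts := strings.Split(spec, "!")
	return parts[0], parts[1:]
}

func main() {
	// The three supported source types, using the example values from the
	// builder's documentation comment.
	specs := []string{
		"nixpkgs!nixos-19.03",
		"git!git@github.com:NixOS/nixpkgs.git!master",
		"path!/var/local/nixpkgs",
	}

	for _, spec := range specs {
		srcType, args := parseSource(spec)
		switch srcType {
		case "nixpkgs":
			fmt.Printf("channel import: %s\n", args[0])
		case "git":
			fmt.Printf("git import: repo=%s rev=%s\n", args[0], args[1])
		case "path":
			fmt.Printf("path import: %s\n", args[0])
		default:
			fmt.Printf("invalid package set source specification: %q\n", spec)
		}
	}
}
```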
---
tools/nixery/build-registry-image.nix | 62 ++++++++++++++++++++++++++++++++---
1 file changed, 57 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
index 0d61f3b713f5..9315071135dc 100644
--- a/tools/nixery/build-registry-image.nix
+++ b/tools/nixery/build-registry-image.nix
@@ -34,14 +34,66 @@
# plenty of room for extension. I believe the actual maximum is
# 128.
maxLayers ? 24,
- # Nix channel to use
- channel ? "nixos-19.03"
+
+ # Configuration for which package set to use when building.
+ #
+ # Both channels of the public nixpkgs repository as well as imports
+ # from private repositories are supported.
+ #
+ # This setting can be invoked with three different formats:
+ #
+ # 1. nixpkgs!$channel (e.g. nixpkgs!nixos-19.03)
+ # 2. git!$repo!$rev (e.g. git!git@github.com:NixOS/nixpkgs.git!master)
+ # 3. path!$path (e.g. path!/var/local/nixpkgs)
+ #
+ # '!' was chosen as the separator because `builtins.split` does not
+ # support regex escapes and there are few other candidates. It
+ # doesn't matter much because this is invoked by the server.
+ pkgSource ? "nixpkgs!nixos-19.03"
}:
-# Import the specified channel directly from Github.
let
- channelUrl = "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
- pkgs = import (builtins.fetchTarball channelUrl) {};
+ # If a nixpkgs channel is requested, it is retrieved from Github (as
+ # a tarball) and imported.
+ fetchImportChannel = channel:
+ let url = "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
+ in import (builtins.fetchTarball url) {};
+
+ # If a git repository is requested, it is retrieved via
+ # builtins.fetchGit which defaults to the git configuration of the
+ # outside environment. This means that user-configured SSH
+ # credentials etc. are going to work as expected.
+ fetchImportGit = url: rev:
+ let
+ # builtins.fetchGit needs to know whether 'rev' is a reference
+ # (e.g. a branch/tag) or a revision (i.e. a commit hash)
+ #
+ # Since this data is being extrapolated from the supplied image
+ # tag, we have to guess if we want to avoid specifying a format.
+ #
+ # There are some additional caveats around whether the default
+ # branch contains the specified revision, which need to be
+ # explained to users.
+ spec = if (builtins.stringLength rev) == 40 then {
+ inherit url rev;
+ } else {
+ inherit url;
+ ref = rev;
+ };
+ in import (builtins.fetchGit spec) {};
+
+ importPath = path: import (builtins.toPath path) {};
+
+ source = builtins.split "!" pkgSource;
+ sourceType = builtins.elemAt source 0;
+ pkgs = with builtins;
+ if sourceType == "nixpkgs"
+ then fetchImportChannel (elemAt source 2)
+ else if sourceType == "git"
+ then fetchImportGit (elemAt source 2) (elemAt source 4)
+ else if sourceType == "path"
+ then importPath (elemAt source 2)
+ else builtins.throw("Invalid package set source specification: ${pkgSource}");
in
# Since this is essentially a re-wrapping of some of the functionality that is
--
cgit 1.4.1
From 3bc04530a7fc27a0196ed3640611554c197843cb Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 31 Jul 2019 14:17:11 +0100
Subject: feat(go): Add environment configuration for package set sources
Adds environment variables with which users can configure the package
set source to use. Not setting a source lets Nix default to a recent
NixOS channel (currently nixos-19.03).
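As a rough sketch of the behaviour this adds (the actual implementation is in
the Go diff below), the following standalone program shows which `pkgSource`
argument each environment variable translates into, including the forwarding of
image tags as git references; the repository URL in `main` is made up for the
example.

```go
package main

import (
	"fmt"
	"os"
)

// renderPkgSource mirrors the behaviour described above: one of three
// environment variables selects the package set source, and for git sources
// the requested image tag is forwarded as a git reference.
func renderPkgSource(tag string) string {
	if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
		return "nixpkgs!" + channel
	}

	if repo := os.Getenv("NIXERY_PKGS_REPO"); repo != "" {
		if tag == "" || tag == "latest" {
			tag = "master"
		}
		return fmt.Sprintf("git!%s!%s", repo, tag)
	}

	if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
		return "path!" + path
	}

	// No source configured: the Nix builder falls back to its default
	// channel (currently nixos-19.03).
	return ""
}

func main() {
	os.Setenv("NIXERY_PKGS_REPO", "git@github.com:NixOS/nixpkgs.git")
	fmt.Println(renderPkgSource("latest"))    // git!git@github.com:NixOS/nixpkgs.git!master
	fmt.Println(renderPkgSource("release-1")) // git!git@github.com:NixOS/nixpkgs.git!release-1
}
```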
---
tools/nixery/main.go | 85 +++++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 74 insertions(+), 11 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 63004c0cea43..faee731b4e75 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -46,12 +46,65 @@ import (
"cloud.google.com/go/storage"
)
+// pkgSource represents the source from which the Nix package set used
+// by Nixery is imported. Users configure the source by setting one of
+// the supported environment variables.
+type pkgSource struct {
+ srcType string
+ args string
+}
+
+// Convert the package source into the representation required by Nix.
+func (p *pkgSource) renderSource(tag string) string {
+ // The 'git' source requires a tag to be present.
+ if p.srcType == "git" {
+ if tag == "latest" || tag == "" {
+ tag = "master"
+ }
+
+ return fmt.Sprintf("git!%s!%s", p.args, tag)
+ }
+
+ return fmt.Sprintf("%s!%s", p.srcType, p.args)
+}
+
+// Retrieve a package source from the environment. If no source is
+// specified, the Nix code will default to a recent NixOS channel.
+func pkgSourceFromEnv() *pkgSource {
+ if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
+ log.Printf("Using Nix package set from Nix channel %q\n", channel)
+ return &pkgSource{
+ srcType: "nixpkgs",
+ args: channel,
+ }
+ }
+
+ if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
+ log.Printf("Using Nix package set from git repository at %q\n", git)
+ return &pkgSource{
+ srcType: "git",
+ args: git,
+ }
+ }
+
+ if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
+ log.Printf("Using Nix package set from path %q\n", path)
+ return &pkgSource{
+ srcType: "path",
+ args: path,
+ }
+ }
+
+ return nil
+}
+
// config holds the Nixery configuration options.
type config struct {
- bucket string // GCS bucket to cache & serve layers
- builder string // Nix derivation for building images
- web string // Static files to serve over HTTP
- port string // Port on which to launch HTTP server
+ bucket string // GCS bucket to cache & serve layers
+ builder string // Nix derivation for building images
+ web string // Static files to serve over HTTP
+ port string // Port on which to launch HTTP server
+ pkgs *pkgSource // Source for Nix package set
}
// ManifestMediaType is the Content-Type used for the manifest itself. This
@@ -67,6 +120,9 @@ type image struct {
// Name of the container image.
name string
+ // Tag requested (only relevant for package sets from git repositories)
+ tag string
+
// Names of packages to include in the image. These must correspond
// directly to top-level names of Nix packages in the nixpkgs tree.
packages []string
@@ -94,10 +150,11 @@ type BuildResult struct {
//
// It will expand convenience names under the hood (see the `convenienceNames`
// function below).
-func imageFromName(name string) image {
+func imageFromName(name string, tag string) image {
packages := strings.Split(name, "/")
return image{
name: name,
+ tag: tag,
packages: convenienceNames(packages),
}
}
@@ -132,13 +189,17 @@ func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage
return nil, err
}
- cmd := exec.Command(
- "nix-build",
+ args := []string{
"--no-out-link",
"--show-trace",
"--argstr", "name", image.name,
"--argstr", "packages", string(packages), cfg.builder,
- )
+ }
+
+ if cfg.pkgs != nil {
+ args = append(args, "--argstr", "pkgSource", cfg.pkgs.renderSource(image.tag))
+ }
+ cmd := exec.Command("nix-build", args...)
outpipe, err := cmd.StdoutPipe()
if err != nil {
@@ -270,7 +331,7 @@ func prepareBucket(ctx *context.Context, cfg *config) *storage.BucketHandle {
// routes required for serving images, since pushing and other such
// functionality is not available.
var (
- manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/(\w+)$`)
+ manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
)
@@ -294,8 +355,9 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
if len(manifestMatches) == 3 {
imageName := manifestMatches[1]
- log.Printf("Requesting manifest for image '%s'", imageName)
- image := imageFromName(manifestMatches[1])
+ imageTag := manifestMatches[2]
+ log.Printf("Requesting manifest for image %q at tag %q", imageName, imageTag)
+ image := imageFromName(imageName, imageTag)
manifest, err := buildImage(h.ctx, h.cfg, &image, h.bucket)
if err != nil {
@@ -328,6 +390,7 @@ func main() {
builder: getConfig("NIX_BUILDER", "Nix image builder code"),
web: getConfig("WEB_DIR", "Static web file dir"),
port: getConfig("PORT", "HTTP port"),
+ pkgs: pkgSourceFromEnv(),
}
ctx := context.Background()
--
cgit 1.4.1
From ec8e9eed5db5dc76d161257fb26b463326ec9c81 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 31 Jul 2019 14:36:32 +0100
Subject: docs(README): Revamp with updated information on package sources
Adds documentation for configuration options and supported features.
---
tools/nixery/README.md | 77 ++++++++++++++++++++++++++++++++++----------------
1 file changed, 53 insertions(+), 24 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 8d4f84598cef..f100cb1b65c4 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -9,29 +9,21 @@
**Nixery** is a Docker-compatible container registry that is capable of
transparently building and serving container images using [Nix][].
+Images are built on-demand based on the *image name*. Every package that the
+user intends to include in the image is specified as a path component of the
+image name.
+
+The path components refer to top-level keys in `nixpkgs` and are used to build a
+container image using Nix's [buildLayeredImage][] functionality.
+
The project started out with the intention of becoming a Kubernetes controller
that can serve declarative image specifications specified in CRDs as container
images. The design for this is outlined in [a public gist][gist].
-Currently it focuses on the ad-hoc creation of container images as outlined
-below with an example instance available at
-[nixery.appspot.com](https://nixery.appspot.com).
+An example instance is available at [nixery.appspot.com][demo].
This is not an officially supported Google project.
-## Ad-hoc container images
-
-Nixery supports building images on-demand based on the *image name*. Every
-package that the user intends to include in the image is specified as a path
-component of the image name.
-
-The path components refer to top-level keys in `nixpkgs` and are used to build a
-container image using Nix's [buildLayeredImage][] functionality.
-
-The special meta-package `shell` provides an image base with many core
-components (such as `bash` and `coreutils`) that users commonly expect in
-interactive images.
-
## Usage example
Using the publicly available Nixery instance at `nixery.appspot.com`, one could
@@ -50,18 +42,53 @@ bash-4.4# curl --version
curl 7.64.0 (x86_64-pc-linux-gnu) libcurl/7.64.0 OpenSSL/1.0.2q zlib/1.2.11 libssh2/1.8.0 nghttp2/1.35.1
```
-## Roadmap
+The special meta-package `shell` provides an image base with many core
+components (such as `bash` and `coreutils`) that users commonly expect in
+interactive images.
+
+## Feature overview
+
+* Serve container images on-demand using image names as content specifications
+
+ Specify package names as path components and Nixery will create images, using
+ the most efficient caching strategy it can to share data between different
+ images.
+
+* Use private package sets from various sources
-### Custom Nix repository support
+ In addition to building images from the publicly available Nix/NixOS channels,
+ a private Nixery instance can be configured to serve images built from a
+ package set hosted in a custom git repository or filesystem path.
-One part of the Nixery vision is support for a custom Nix repository that
-provides, for example, the internal packages of an organisation.
+ When using this feature with custom git repositories, Nixery will forward the
+ specified image tags as git references.
-It should be possible to configure Nixery to build images from such a repository
-and serve them in order to make container images themselves close to invisible
-to the user.
+ For example, if a company used a custom repository overlaying their packages
+ on the Nix package set, images could be built from a git tag `release-v2`:
-See [issue #3](https://github.com/google/nixery/issues/3).
+ `docker pull nixery.thecompany.website/custom-service:release-v2`
+
+* Efficient serving of image layers from Google Cloud Storage
+
+ After building an image, Nixery stores all of its layers in a GCS bucket and
+ forwards requests to retrieve layers to the bucket. This enables efficient
+ serving of layers, as well as sharing of image layers between redundant
+ instances.
+
+## Configuration
+
+Nixery supports the following configuration options, provided via environment
+variables:
+
+* `BUCKET`: [Google Cloud Storage][gcs] bucket to store & serve image layers
+* `PORT`: HTTP port on which Nixery should listen
+* `NIXERY_CHANNEL`: The name of a Nix/NixOS channel to use for building
+* `NIXERY_PKGS_REPO`: URL of a git repository containing a package set (uses
+ locally configured SSH/git credentials)
+* `NIXERY_PKGS_PATH`: A local filesystem path containing a Nix package set to use
+ for building
+
+## Roadmap
### Kubernetes integration (in the future)
@@ -73,3 +100,5 @@ See [issue #4](https://github.com/google/nixery/issues/4).
[Nix]: https://nixos.org/
[gist]: https://gist.github.com/tazjin/08f3d37073b3590aacac424303e6f745
[buildLayeredImage]: https://grahamc.com/blog/nix-and-layered-docker-images
+[demo]: https://nixery.appspot.com
+[gcs]: https://cloud.google.com/storage/
--
cgit 1.4.1
From 3070d88051674bc5562412b8a98dbd90e92959f0 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 31 Jul 2019 21:23:44 +0100
Subject: feat(nix): Return structured errors if packages are not found
Changes the return format of Nixery's build procedure to return a JSON
structure that can indicate which errors have occurred.
The server can use this information to send appropriate status codes
back to clients.
---
tools/nixery/build-registry-image.nix | 47 +++++++++++++++++++++++++++--------
1 file changed, 37 insertions(+), 10 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
index 9315071135dc..d06c9f3bfae3 100644
--- a/tools/nixery/build-registry-image.nix
+++ b/tools/nixery/build-registry-image.nix
@@ -99,6 +99,7 @@ in
# Since this is essentially a re-wrapping of some of the functionality that is
# implemented in the dockerTools, we need all of its components in our top-level
# namespace.
+with builtins;
with pkgs;
with dockerTools;
@@ -115,16 +116,29 @@ let
# For example, `deepFetch pkgs "xorg.xev"` retrieves `pkgs.xorg.xev`.
deepFetch = s: n:
let path = lib.strings.splitString "." n;
- err = builtins.throw "Could not find '${n}' in package set";
+ err = { error = "not_found"; pkg = n; };
in lib.attrsets.attrByPath path err s;
# allContents is the combination of all derivations and store paths passed in
# directly, as well as packages referred to by name.
- allContents = contents ++ (map (deepFetch pkgs) (builtins.fromJSON packages));
+ #
+ # It accumulates potential errors about packages that could not be found to
+ # return this information back to the server.
+ allContents =
+ # Folds over the results of 'deepFetch' on all requested packages to
+ # separate them into errors and content. This allows the program to
+ # terminate early and return only the errors if any are encountered.
+ let splitter = attrs: res:
+ if hasAttr "error" res
+ then attrs // { errors = attrs.errors ++ [ res ]; }
+ else attrs // { contents = attrs.contents ++ [ res ]; };
+ init = { inherit contents; errors = []; };
+ fetched = (map (deepFetch pkgs) (fromJSON packages));
+ in foldl' splitter init fetched;
contentsEnv = symlinkJoin {
name = "bulk-layers";
- paths = allContents;
+ paths = allContents.contents;
};
# The image build infrastructure expects to be outputting a slightly different
@@ -176,7 +190,7 @@ let
cat fs-layers | jq -s -c '.' > $out
'';
- allLayers = builtins.fromJSON (builtins.readFile allLayersJson);
+ allLayers = fromJSON (readFile allLayersJson);
# Image configuration corresponding to the OCI specification for the file type
# 'application/vnd.oci.image.config.v1+json'
@@ -188,8 +202,8 @@ let
# Required to let Kubernetes import Nixery images
config = {};
};
- configJson = writeText "${baseName}-config.json" (builtins.toJSON config);
- configMetadata = with builtins; fromJSON (readFile (runCommand "config-meta" {
+ configJson = writeText "${baseName}-config.json" (toJSON config);
+ configMetadata = fromJSON (readFile (runCommand "config-meta" {
buildInputs = [ jq openssl ];
} ''
size=$(wc -c ${configJson} | cut -d ' ' -f1)
@@ -228,7 +242,7 @@ let
path = configJson;
md5 = configMetadata.md5;
};
- } // (builtins.listToAttrs (map (layer: {
+ } // (listToAttrs (map (layer: {
name = "${layer.sha256}";
value = {
path = layer.path;
@@ -236,6 +250,19 @@ let
};
}) allLayers));
-in writeText "manifest-output.json" (builtins.toJSON {
- inherit manifest layerLocations;
-})
+ # Final output structure returned to the controller in the case of a
+ # successful build.
+ manifestOutput = {
+ inherit manifest layerLocations;
+ };
+
+ # Output structure returned if errors occurred during the build. Currently the
+ # only error type that is returned in a structured way is 'not_found'.
+ errorOutput = {
+ error = "not_found";
+ pkgs = map (err: err.pkg) allContents.errors;
+ };
+in writeText "manifest-output.json" (if (length allContents.errors) == 0
+ then toJSON (trace manifestOutput manifestOutput)
+ else toJSON (trace errorOutput errorOutput)
+)
--
cgit 1.4.1
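To make the new output format concrete, here is a minimal Go sketch of how a
consumer could tell the two JSON shapes apart. The field names mirror the Nix
attributes above (`error`, `pkgs`, `manifest`, `layerLocations`); the sample
payload and the struct itself are invented for illustration only.

```go
// Sketch: decoding the builder's JSON output into a success or error case.
// Field names follow the Nix expression above; the payload is illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

type buildOutput struct {
	// Set only when the build failed with a structured error.
	Error string   `json:"error"`
	Pkgs  []string `json:"pkgs"`

	// Set on success and treated as opaque JSON.
	Manifest       json.RawMessage `json:"manifest"`
	LayerLocations json.RawMessage `json:"layerLocations"`
}

func main() {
	raw := []byte(`{"error": "not_found", "pkgs": ["notarealpackage"]}`)

	var out buildOutput
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}

	if out.Error == "not_found" {
		fmt.Printf("unknown packages: %v\n", out.Pkgs)
		return
	}
	fmt.Println("build succeeded, manifest is ready")
}
```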
From 2f1bc55597c382496930f4ba8f4ef25914fb9748 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 31 Jul 2019 21:25:36 +0100
Subject: fix(go): Return response code 500 if Nix builds fail
---
tools/nixery/main.go | 1 +
1 file changed, 1 insertion(+)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index faee731b4e75..a4e0b9a80a16 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -362,6 +362,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if err != nil {
log.Println("Failed to build image manifest", err)
+ w.WriteHeader(500)
return
}
--
cgit 1.4.1
From 119af77b43775999e39b6e88911d46b5cc60b93d Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 31 Jul 2019 21:36:25 +0100
Subject: feat(go): Return errors with correct status codes to clients
Uses the structured errors feature introduced in the Nix code to
return more sensible errors to clients. For now this is quite limited,
but already a lot better than before:
* packages that could not be found result in 404s
* all other errors result in 500s
This way the registry clients will not attempt to interpret the
returned garbage data/empty response as something useful.
---
tools/nixery/main.go | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index a4e0b9a80a16..91a9e6d4943f 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -138,6 +138,9 @@ type image struct {
//
// The latter field is simply treated as opaque JSON and passed through.
type BuildResult struct {
+ Error string `json:"error`
+ Pkgs []string `json:"pkgs"`
+
Manifest json.RawMessage `json:"manifest"`
LayerLocations map[string]struct {
Path string `json:"path"`
@@ -183,7 +186,7 @@ func convenienceNames(packages []string) []string {
// Call out to Nix and request that an image be built. Nix will, upon success,
// return a manifest for the container image.
-func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage.BucketHandle) ([]byte, error) {
+func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage.BucketHandle) (*BuildResult, error) {
packages, err := json.Marshal(image.packages)
if err != nil {
return nil, err
@@ -249,7 +252,7 @@ func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage
}
}
- return json.Marshal(result.Manifest)
+ return &result, nil
}
// uploadLayer uploads a single layer to Cloud Storage bucket. Before writing
@@ -358,7 +361,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
imageTag := manifestMatches[2]
log.Printf("Requesting manifest for image %q at tag %q", imageName, imageTag)
image := imageFromName(imageName, imageTag)
- manifest, err := buildImage(h.ctx, h.cfg, &image, h.bucket)
+ buildResult, err := buildImage(h.ctx, h.cfg, &image, h.bucket)
if err != nil {
log.Println("Failed to build image manifest", err)
@@ -366,6 +369,17 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
return
}
+ // Some error types have special handling, which is applied
+ // here.
+ if buildResult.Error == "not_found" {
+ log.Printf("Could not find packages: %v\n", buildResult.Pkgs)
+ w.WriteHeader(404)
+ return
+ }
+
+ // This marshaling error is ignored because we know that this
+ // field represents valid JSON data.
+ manifest, _ := json.Marshal(buildResult.Manifest)
w.Header().Add("Content-Type", manifestMediaType)
w.Write(manifest)
return
--
cgit 1.4.1
From 3d0596596ac03957db6646836e6b4ec0d222e23a Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 2 Aug 2019 00:45:22 +0100
Subject: feat(go): Return error responses in registry format
The registry specifies a format for how errors should be returned and
this commit implements it:
https://docs.docker.com/registry/spec/api/#errors
---
tools/nixery/main.go | 44 +++++++++++++++++++++++++++++++++++---------
1 file changed, 35 insertions(+), 9 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 91a9e6d4943f..9574a3a68c65 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -138,7 +138,7 @@ type image struct {
//
// The latter field is simply treated as opaque JSON and passed through.
type BuildResult struct {
- Error string `json:"error`
+ Error string `json:"error"`
Pkgs []string `json:"pkgs"`
Manifest json.RawMessage `json:"manifest"`
@@ -338,13 +338,29 @@ var (
layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
)
-func getConfig(key, desc string) string {
- value := os.Getenv(key)
- if value == "" {
- log.Fatalln(desc + " must be specified")
+// Error format corresponding to the registry protocol V2 specification. This
+// allows feeding back errors to clients in a way that can be presented to
+// users.
+type registryError struct {
+ Code string `json:"code"`
+ Message string `json:"message"`
+}
+
+type registryErrors struct {
+ Errors []registryError `json:"errors"`
+}
+
+func writeError(w http.ResponseWriter, status int, code, message string) {
+ err := registryErrors{
+ Errors: []registryError{
+ {code, message},
+ },
}
+ json, _ := json.Marshal(err)
- return value
+ w.WriteHeader(status)
+ w.Header().Add("Content-Type", "application/json")
+ w.Write(json)
}
type registryHandler struct {
@@ -364,16 +380,17 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
buildResult, err := buildImage(h.ctx, h.cfg, &image, h.bucket)
if err != nil {
+ writeError(w, 500, "UNKNOWN", "image build failure")
log.Println("Failed to build image manifest", err)
- w.WriteHeader(500)
return
}
// Some error types have special handling, which is applied
// here.
if buildResult.Error == "not_found" {
- log.Printf("Could not find packages: %v\n", buildResult.Pkgs)
- w.WriteHeader(404)
+ s := fmt.Sprintf("Could not find Nix packages: %v", buildResult.Pkgs)
+ writeError(w, 404, "MANIFEST_UNKNOWN", s)
+ log.Println(s)
return
}
@@ -399,6 +416,15 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(404)
}
+func getConfig(key, desc string) string {
+ value := os.Getenv(key)
+ if value == "" {
+ log.Fatalln(desc + " must be specified")
+ }
+
+ return value
+}
+
func main() {
cfg := &config{
bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
--
cgit 1.4.1
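For reference, the sketch below shows roughly the body that `writeError` above
produces for a missing package, serialised with the same struct definitions.
The code and message values mirror the `not_found` handling in the diff but are
reproduced here only as an example.

```go
// Sketch: the registry V2 error body emitted for an unknown package.
package main

import (
	"encoding/json"
	"fmt"
)

type registryError struct {
	Code    string `json:"code"`
	Message string `json:"message"`
}

type registryErrors struct {
	Errors []registryError `json:"errors"`
}

func main() {
	body, _ := json.Marshal(registryErrors{
		Errors: []registryError{
			{"MANIFEST_UNKNOWN", "Could not find Nix packages: [notarealpackage]"},
		},
	})

	fmt.Println(string(body))
	// {"errors":[{"code":"MANIFEST_UNKNOWN","message":"Could not find Nix packages: [notarealpackage]"}]}
}
```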
From bf34bb327ccdd4a8f1ff5dd10a9197e5114b7379 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 2 Aug 2019 01:02:40 +0100
Subject: fix(nix): Calculate MD5 sum of config layer correctly
The MD5 sum is used for verifying contents in the layer cache to avoid
accidental re-uploads, but the syntax of the hash invocation was
incorrect, leading to a cache-bust on the manifest layer on every
single build (even for identical images).
---
tools/nixery/build-registry-image.nix | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
index d06c9f3bfae3..20eb6d9e98d4 100644
--- a/tools/nixery/build-registry-image.nix
+++ b/tools/nixery/build-registry-image.nix
@@ -208,7 +208,7 @@ let
} ''
size=$(wc -c ${configJson} | cut -d ' ' -f1)
sha256=$(sha256sum ${configJson} | cut -d ' ' -f1)
- md5=$(openssl dgst -md5 -binary $layerPath | openssl enc -base64)
+ md5=$(openssl dgst -md5 -binary ${configJson} | openssl enc -base64)
jq -n -c --arg size $size --arg sha256 $sha256 --arg md5 $md5 \
'{ size: ($size | tonumber), sha256: $sha256, md5: $md5 }' \
>> $out
--
cgit 1.4.1
From 92f1758014f49ecc2edcf883e6294bec636e69aa Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 2 Aug 2019 01:11:26 +0100
Subject: docs(static): Note that the demo instance is just a demo
People should not start depending on the demo instance. There have
been discussions around making a NixOS-official instance, but the
project needs to mature a little bit first.
---
tools/nixery/static/index.html | 10 ++++++++++
1 file changed, 10 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/static/index.html b/tools/nixery/static/index.html
index 8cbda7360ef9..dc37eebc2106 100644
--- a/tools/nixery/static/index.html
+++ b/tools/nixery/static/index.html
@@ -88,6 +88,16 @@
No. Nixery is not officially supported by
Google.
+
+ Can I depend on the demo instance in production?
+
+ No. The demo instance is just a demo. It
+ might go down, move, or disappear entirely at any point.
+
+ To make use of Nixery in your project, please deploy a private
+ instance. Stay tuned for instructions for how to do this on
+ GKE.
+
Who made this?
--
cgit 1.4.1
From 02dfff393a2afa3b0be3fe134d12aebce9d47c27 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 2 Aug 2019 01:28:55 +0100
Subject: fix(build): coreutils are still required by launch script
Mea culpa!
---
tools/nixery/default.nix | 1 +
1 file changed, 1 insertion(+)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 0f28911668ce..2b3ec0ef4333 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -95,6 +95,7 @@ rec {
config.Cmd = ["${nixery-launch-script}/bin/nixery"];
contents = [
cacert
+ coreutils
git
gnutar
gzip
--
cgit 1.4.1
From 1f885e43b67dd26ec0b1bcfff9411bddc2674bea Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 2 Aug 2019 17:00:32 +0100
Subject: style(static): Update Nixery logo to a healthier version
This might not yet be the final version, but it's going in the right
direction.
Additionally, the favicon has been reduced to just the coloured Nix
logo, because details are pretty much invisible at that size anyway.
---
tools/nixery/static/favicon.ico | Bin 157995 -> 140163 bytes
tools/nixery/static/nixery-logo.png | Bin 79360 -> 194153 bytes
2 files changed, 0 insertions(+), 0 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/static/favicon.ico b/tools/nixery/static/favicon.ico
index 405dbc175b4e..fbaad005001f 100644
Binary files a/tools/nixery/static/favicon.ico and b/tools/nixery/static/favicon.ico differ
diff --git a/tools/nixery/static/nixery-logo.png b/tools/nixery/static/nixery-logo.png
index 5835cd50e1bb..7ef2f25ee68a 100644
Binary files a/tools/nixery/static/nixery-logo.png and b/tools/nixery/static/nixery-logo.png differ
--
cgit 1.4.1
From da5df525c8e7ca9e3907a4f90412883dfc1b28a7 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 2 Aug 2019 17:05:33 +0100
Subject: docs: Update all nixery.appspot.com references to nixery.dev
The shiny new domain is much better and eliminates the TLS redirect issue
because there is an HSTS preload for the entire .dev TLD (which, by the
way, is awesome!)
---
tools/nixery/README.md | 12 ++++++------
tools/nixery/default.nix | 4 ++--
tools/nixery/static/index.html | 2 +-
3 files changed, 9 insertions(+), 9 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index f100cb1b65c4..e0e634c3cb18 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -20,24 +20,24 @@ The project started out with the intention of becoming a Kubernetes controller
that can serve declarative image specifications specified in CRDs as container
images. The design for this is outlined in [a public gist][gist].
-An example instance is available at [nixery.appspot.com][demo].
+An example instance is available at [nixery.dev][demo].
This is not an officially supported Google project.
## Usage example
-Using the publicly available Nixery instance at `nixery.appspot.com`, one could
+Using the publicly available Nixery instance at `nixery.dev`, one could
retrieve a container image containing `curl` and an interactive shell like this:
```shell
-tazjin@tazbox:~$ sudo docker run -ti nixery.appspot.com/shell/curl bash
-Unable to find image 'nixery.appspot.com/shell/curl:latest' locally
+tazjin@tazbox:~$ sudo docker run -ti nixery.dev/shell/curl bash
+Unable to find image 'nixery.dev/shell/curl:latest' locally
latest: Pulling from shell/curl
7734b79e1ba1: Already exists
b0d2008d18cd: Pull complete
< ... some layers omitted ...>
Digest: sha256:178270bfe84f74548b6a43347d73524e5c2636875b673675db1547ec427cf302
-Status: Downloaded newer image for nixery.appspot.com/shell/curl:latest
+Status: Downloaded newer image for nixery.dev/shell/curl:latest
bash-4.4# curl --version
curl 7.64.0 (x86_64-pc-linux-gnu) libcurl/7.64.0 OpenSSL/1.0.2q zlib/1.2.11 libssh2/1.8.0 nghttp2/1.35.1
```
@@ -100,5 +100,5 @@ See [issue #4](https://github.com/google/nixery/issues/4).
[Nix]: https://nixos.org/
[gist]: https://gist.github.com/tazjin/08f3d37073b3590aacac424303e6f745
[buildLayeredImage]: https://grahamc.com/blog/nix-and-layered-docker-images
-[demo]: https://nixery.appspot.com
+[demo]: https://nixery.dev
[gcs]: https://cloud.google.com/storage/
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 2b3ec0ef4333..8a7ce8b34453 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -48,8 +48,8 @@ rec {
'';
# Static files to serve on the Nixery index. This is used primarily
- # for the demo instance running at nixery.appspot.com and provides
- # some background information for what Nixery is.
+ # for the demo instance running at nixery.dev and provides some
+ # background information for what Nixery is.
nixery-static = runCommand "nixery-static" {} ''
mkdir $out
cp ${./static}/* $out
diff --git a/tools/nixery/static/index.html b/tools/nixery/static/index.html
index dc37eebc2106..24aa879a535d 100644
--- a/tools/nixery/static/index.html
+++ b/tools/nixery/static/index.html
@@ -55,7 +55,7 @@
shell and emacs you could pull it as such:
Image tags are currently ignored. Every package name needs to
--
cgit 1.4.1
From a4c0d3e8d30b69d3e1d5e3a2665475c6a6a72634 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 3 Aug 2019 00:38:32 +0100
Subject: chore(go): Remove 'builder' metapackage
This metapackage isn't actually particularly useful (stdenv is rarely
what users want).
---
tools/nixery/main.go | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 9574a3a68c65..54dd8ab4d136 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -173,15 +173,12 @@ func imageFromName(name string, tag string) image {
// * `builder`: All of the above and the standard build environment
func convenienceNames(packages []string) []string {
shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
- builderPackages := append(shellPackages, "stdenv")
if packages[0] == "shell" {
return append(packages[1:], shellPackages...)
- } else if packages[0] == "builder" {
- return append(packages[1:], builderPackages...)
- } else {
- return packages
}
+
+ return packages
}
// Call out to Nix and request that an image be built. Nix will, upon success,
--
cgit 1.4.1
From c84543a9b5be48c55924a0f6f948cecb6e138808 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 3 Aug 2019 00:44:32 +0100
Subject: style(static): Fix favicon background colour
---
tools/nixery/static/favicon.ico | Bin 140163 -> 167584 bytes
1 file changed, 0 insertions(+), 0 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/static/favicon.ico b/tools/nixery/static/favicon.ico
index fbaad005001f..7523e8513950 100644
Binary files a/tools/nixery/static/favicon.ico and b/tools/nixery/static/favicon.ico differ
--
cgit 1.4.1
From 62ade18b7b3b6f30caf316762f5669c56cabeb74 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 3 Aug 2019 01:10:04 +0100
Subject: fix(static): Fix logo nitpick (smoothened λ edges)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
tools/nixery/static/nixery-logo.png | Bin 194153 -> 194098 bytes
1 file changed, 0 insertions(+), 0 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/static/nixery-logo.png b/tools/nixery/static/nixery-logo.png
index 7ef2f25ee68a..fcf77df3d6a9 100644
Binary files a/tools/nixery/static/nixery-logo.png and b/tools/nixery/static/nixery-logo.png differ
--
cgit 1.4.1
From ecee1ec1b888bb2ee745e4255158c829b78c3d3d Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 3 Aug 2019 01:11:05 +0100
Subject: chore: Prevent accidental key leaks via gitignore
---
tools/nixery/.gitignore | 6 ++++++
1 file changed, 6 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/.gitignore b/tools/nixery/.gitignore
index 1e5c28c012f5..4203fee19569 100644
--- a/tools/nixery/.gitignore
+++ b/tools/nixery/.gitignore
@@ -1,3 +1,9 @@
result
result-*
.envrc
+debug/
+
+# Just to be sure, since we're occasionally handling test keys:
+*.pem
+*.p12
+*.json
--
cgit 1.4.1
From 3347c38ba7b79f0251ca16c332867d4488976ac5 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 3 Aug 2019 01:18:14 +0100
Subject: fix(go): Registry API acknowledgement URI has a trailing slash
Previously the acknowledgement calls from Docker were receiving a
404 (which apparently doesn't bother it?!). This corrects the URL,
which meant that acknowledgement had to move inside of the
registryHandler.
---
tools/nixery/main.go | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 54dd8ab4d136..a78250d4c4f8 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -367,6 +367,11 @@ type registryHandler struct {
}
func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+ // Acknowledge that we speak V2 with an empty response
+ if r.RequestURI == "/v2/" {
+ return
+ }
+
// Serve the manifest (straight from Nix)
manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
if len(manifestMatches) == 3 {
@@ -436,12 +441,7 @@ func main() {
log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.port)
- // Acknowledge that we speak V2
- http.HandleFunc("/v2", func(w http.ResponseWriter, r *http.Request) {
- fmt.Fprintln(w)
- })
-
- // All other /v2/ requests belong to the registry handler.
+ // All /v2/ requests belong to the registry handler.
http.Handle("/v2/", ®istryHandler{
cfg: cfg,
ctx: &ctx,
--
cgit 1.4.1
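A quick way to sanity-check the corrected acknowledgement path is an `httptest`
round trip against a handler with the same dispatch logic; the handler below
only mimics the `/v2/` branch from the diff and is not Nixery's real
`registryHandler`.

```go
// Sketch: the V2 acknowledgement now answers on "/v2/" (with trailing slash)
// with an empty 200 response. The handler is a stand-in for illustration.
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Acknowledge that we speak V2 with an empty response.
		if r.RequestURI == "/v2/" {
			return
		}
		w.WriteHeader(404)
	})

	srv := httptest.NewServer(handler)
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/v2/")
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.StatusCode) // 200
}
```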
From 07ef06dcfadef108cfb43df7610db710568a1c45 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 3 Aug 2019 01:21:21 +0100
Subject: feat(go): Support signed GCS URLs with static keys
Google Cloud Storage supports granting access to protected objects via
time-restricted URLs that are cryptographically signed.
This makes it possible to store private data in buckets and to
distribute it to eligible clients without having to make those clients
aware of GCS authentication methods.
Nixery now uses this feature to sign URLs for GCS buckets when
returning layer URLs to clients on image pulls. This means that a
private Nixery instance can run a bucket with restricted access just
fine.
Under the hood, Nixery uses a key provided via environment
variables to sign the URL with a 5-minute expiration time.
This can be set up by adding the following two environment variables:
* GCS_SIGNING_KEY: Path to the PEM file containing the signing key.
* GCS_SIGNING_ACCOUNT: Account ("e-mail" address) to use for signing.
If the variables are not set, the previous behaviour is not modified.
---
tools/nixery/main.go | 77 ++++++++++++++++++++++++++++++++++++++--------------
1 file changed, 57 insertions(+), 20 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index a78250d4c4f8..291cdf52d7e6 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -23,10 +23,6 @@
// request and a Nix-build is initiated that eventually responds with the
// manifest as well as information linking each layer digest to a local
// filesystem path.
-//
-// Nixery caches the filesystem paths and returns the manifest to the client.
-// Subsequent requests for layer content per digest are then fulfilled by
-// serving the files from disk.
package main
import (
@@ -42,6 +38,7 @@ import (
"os/exec"
"regexp"
"strings"
+ "time"
"cloud.google.com/go/storage"
)
@@ -98,13 +95,37 @@ func pkgSourceFromEnv() *pkgSource {
return nil
}
+// Load (optional) GCS bucket signing data from the GCS_SIGNING_KEY and
+// GCS_SIGNING_ACCOUNT envvars.
+func signingOptsFromEnv() *storage.SignedURLOptions {
+ path := os.Getenv("GCS_SIGNING_KEY")
+ id := os.Getenv("GCS_SIGNING_ACCOUNT")
+
+ if path == "" || id == "" {
+ log.Println("GCS URL signing disabled")
+ return nil
+ }
+
+ log.Printf("GCS URL signing enabled with account %q\n", id)
+ k, err := ioutil.ReadFile(path)
+ if err != nil {
+ log.Fatalf("Failed to read GCS signing key: %s\n", err)
+ }
+
+ return &storage.SignedURLOptions{
+ GoogleAccessID: id,
+ PrivateKey: k,
+ Method: "GET",
+ }
+}
+
// config holds the Nixery configuration options.
type config struct {
- bucket string // GCS bucket to cache & serve layers
- builder string // Nix derivation for building images
- web string // Static files to serve over HTTP
- port string // Port on which to launch HTTP server
- pkgs *pkgSource // Source for Nix package set
+ bucket string // GCS bucket to cache & serve layers
+ signing *storage.SignedURLOptions // Signing options to use for GCS URLs
+ builder string // Nix derivation for building images
+ port string // Port on which to launch HTTP server
+ pkgs *pkgSource // Source for Nix package set
}
// ManifestMediaType is the Content-Type used for the manifest itself. This
@@ -117,10 +138,7 @@ const manifestMediaType string = "application/vnd.docker.distribution.manifest.v
// This can be either a list of package names (corresponding to keys in the
// nixpkgs set) or a Nix expression that results in a *list* of derivations.
type image struct {
- // Name of the container image.
name string
-
- // Tag requested (only relevant for package sets from git repositories)
tag string
// Names of packages to include in the image. These must correspond
@@ -294,15 +312,24 @@ func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer strin
}
// layerRedirect constructs the public URL of the layer object in the Cloud
-// Storage bucket and redirects the client there.
+// Storage bucket, signs it and redirects the user there.
+//
+// Signing the URL allows unauthenticated clients to retrieve objects from the
+// bucket.
//
// The Docker client is known to follow redirects, but this might not be true
// for all other registry clients.
-func layerRedirect(w http.ResponseWriter, cfg *config, digest string) {
+func constructLayerUrl(cfg *config, digest string) (string, error) {
log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, cfg.bucket)
- url := fmt.Sprintf("https://storage.googleapis.com/%s/layers/%s", cfg.bucket, digest)
- w.Header().Set("Location", url)
- w.WriteHeader(303)
+ object := "layers/" + digest
+
+ if cfg.signing != nil {
+ opts := *cfg.signing
+ opts.Expires = time.Now().Add(5 * time.Minute)
+ return storage.SignedURL(cfg.bucket, object, &opts)
+ } else {
+ return ("https://storage.googleapis.com" + cfg.bucket + "/" + object), nil
+ }
}
// prepareBucket configures the handle to a Cloud Storage bucket in which
@@ -410,7 +437,16 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
if len(layerMatches) == 3 {
digest := layerMatches[2]
- layerRedirect(w, h.cfg, digest)
+ url, err := constructLayerUrl(h.cfg, digest)
+
+ if err != nil {
+ log.Printf("Failed to sign GCS URL: %s\n", err)
+ writeError(w, 500, "UNKNOWN", "could not serve layer")
+ return
+ }
+
+ w.Header().Set("Location", url)
+ w.WriteHeader(303)
return
}
@@ -431,9 +467,9 @@ func main() {
cfg := &config{
bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
builder: getConfig("NIX_BUILDER", "Nix image builder code"),
- web: getConfig("WEB_DIR", "Static web file dir"),
port: getConfig("PORT", "HTTP port"),
pkgs: pkgSourceFromEnv(),
+ signing: signingOptsFromEnv(),
}
ctx := context.Background()
@@ -449,7 +485,8 @@ func main() {
})
// All other roots are served by the static file server.
- http.Handle("/", http.FileServer(http.Dir(cfg.web)))
+ webDir := http.Dir(getConfig("WEB_DIR", "Static web file dir"))
+ http.Handle("/", http.FileServer(webDir))
log.Fatal(http.ListenAndServe(":"+cfg.port, nil))
}
--
cgit 1.4.1
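The signing call itself is the package-level `storage.SignedURL` helper from
`cloud.google.com/go/storage`, used exactly as in the diff above. A standalone
sketch follows; the bucket, account and key path are placeholders, not values
used by any real instance.

```go
// Sketch: signing a layer URL with a 5-minute expiry, mirroring the options
// Nixery assembles from GCS_SIGNING_KEY and GCS_SIGNING_ACCOUNT.
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	// Placeholder paths and identifiers for illustration only.
	key, err := ioutil.ReadFile("/path/to/signing-key.pem")
	if err != nil {
		log.Fatalf("Failed to read GCS signing key: %s\n", err)
	}

	opts := &storage.SignedURLOptions{
		GoogleAccessID: "nixery@example-project.iam.gserviceaccount.com",
		PrivateKey:     key,
		Method:         "GET",
		Expires:        time.Now().Add(5 * time.Minute),
	}

	// Object name as constructed by Nixery: "layers/" plus the digest.
	url, err := storage.SignedURL("my-nixery-bucket", "layers/sha256-digest", opts)
	if err != nil {
		log.Fatalf("Failed to sign GCS URL: %s\n", err)
	}

	fmt.Println(url)
}
```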
From aa260af1fff8a07a34c776e5b4be2b5c63b96826 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 3 Aug 2019 01:25:21 +0100
Subject: docs: Add GCS signing envvars to README
---
tools/nixery/README.md | 4 ++++
1 file changed, 4 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index e0e634c3cb18..bcb8e40ec794 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -87,6 +87,10 @@ variables:
locally configured SSH/git credentials)
* `NIXERY_PKGS_PATH`: A local filesystem path containing a Nix package set to use
for building
+* `GCS_SIGNING_KEY`: A Google service account key (in PEM format) that can be
+ used to sign Cloud Storage URLs
+* `GCS_SIGNING_ACCOUNT`: Google service account ID that the signing key belongs
+ to
## Roadmap
--
cgit 1.4.1
From 20103640fa6ab0678c72d3ba8a3ddc29ac264973 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 3 Aug 2019 18:27:26 +0100
Subject: fix(nix): Support retrieving differently cased top-level attributes
As described in issue #14, the registry API does not allow image names
with uppercase characters in them.
However, the Nix package set has several top-level keys with uppercase
characters in them, which previously could not be retrieved using
Nixery.
This change implements a method for retrieving those keys, but it
explicitly only works for the top-level package set, as nested
sets (such as `haskellPackages`) often contain packages that differ
only in case.
---
tools/nixery/build-registry-image.nix | 33 +++++++++++++++++++++++++++++----
1 file changed, 29 insertions(+), 4 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
index 20eb6d9e98d4..6a5312b444d0 100644
--- a/tools/nixery/build-registry-image.nix
+++ b/tools/nixery/build-registry-image.nix
@@ -113,11 +113,36 @@ let
# For top-level items, the name of the key yields the result directly. Nested
# items are fetched by using dot-syntax, as in Nix itself.
#
- # For example, `deepFetch pkgs "xorg.xev"` retrieves `pkgs.xorg.xev`.
- deepFetch = s: n:
- let path = lib.strings.splitString "." n;
+ # Due to a restriction of the registry API specification it is not possible to
+ # pass uppercase characters in an image name; however, the Nix package set
+ # makes use of camelCasing repeatedly (for example for `haskellPackages`).
+ #
+ # To work around this, if no value is found on the top-level a second lookup
+ # is done on the package set using lowercase-names. This is not done for
+ # nested sets, as they often have keys that only differ in case.
+ #
+ # For example, `deepFetch pkgs "xorg.xev"` retrieves `pkgs.xorg.xev` and
+ # `deepFetch pkgs "haskellpackages.stylish-haskell"` retrieves
+ # `pkgs.haskellPackages.stylish-haskell`.
+ deepFetch = with lib; s: n:
+ let path = splitString "." n;
err = { error = "not_found"; pkg = n; };
- in lib.attrsets.attrByPath path err s;
+ # The most efficient way I've found to do a lookup against
+ # case-differing versions of an attribute is to first construct a
+ # mapping of all lowercased attribute names to their differently cased
+ # equivalents.
+ #
+ # This map is then used for a second lookup if the top-level
+ # (case-sensitive) one does not yield a result.
+ hasUpper = str: (match ".*[A-Z].*" str) != null;
+ allUpperKeys = filter hasUpper (attrNames s);
+ lowercased = listToAttrs (map (k: {
+ name = toLower k;
+ value = k;
+ }) allUpperKeys);
+ caseAmendedPath = map (v: if hasAttr v lowercased then lowercased."${v}" else v) path;
+ fetchLower = attrByPath caseAmendedPath err s;
+ in attrByPath path fetchLower s;
# allContents is the combination of all derivations and store paths passed in
# directly, as well as packages referred to by name.
--
cgit 1.4.1
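The same fallback idea, sketched in Go for readers who do not read Nix; the
`pkgs` map below is a stand-in for the top-level package set and not how Nixery
actually stores packages.

```go
// Sketch: case-amended lookup analogous to the Nix code above. An exact,
// case-sensitive lookup is tried first; if it misses, a map from lowercased
// names to their original casing provides a second chance.
package main

import (
	"fmt"
	"strings"
)

func deepFetch(pkgs map[string]string, name string) (string, bool) {
	if v, ok := pkgs[name]; ok {
		return v, true
	}

	lowercased := make(map[string]string, len(pkgs))
	for k := range pkgs {
		lowercased[strings.ToLower(k)] = k
	}

	// Image names are always lowercase (a registry restriction), so the
	// request itself does not need to be lowercased again.
	if original, ok := lowercased[name]; ok {
		return pkgs[original], true
	}

	return "", false
}

func main() {
	pkgs := map[string]string{"haskellPackages": "<package set>"}

	v, ok := deepFetch(pkgs, "haskellpackages")
	fmt.Println(v, ok) // <package set> true
}
```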
From a0d7d693d373569b61597440537bedb1f1384450 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 4 Aug 2019 00:48:52 +0100
Subject: feat(build): Support additional pre-launch commands in image
This makes it possible for users to hook basically arbitrary things
into the Nixery container image.
---
tools/nixery/default.nix | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 8a7ce8b34453..7c7ad0b6c0eb 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -11,7 +11,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-{ pkgs ? import <nixpkgs> {} }:
+{ pkgs ? import <nixpkgs> {}
+, preLaunch ? "" }:
with pkgs;
@@ -88,6 +89,8 @@ rec {
mkdir -p /etc/nix
echo 'sandbox = false' >> /etc/nix/nix.conf
+ ${preLaunch}
+
exec ${nixery-bin}/bin/nixery
'';
in dockerTools.buildLayeredImage {
--
cgit 1.4.1
From 099c99b7ad4fa93d1b5c8b7bfef0416f79edad59 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 4 Aug 2019 01:30:24 +0100
Subject: feat(build): Configure Cachix for build caching in CI
The CI setup is configured with an appropriate key to enable pushes to
the nixery.cachix.org binary cache.
---
tools/nixery/.travis.yml | 5 +++++
1 file changed, 5 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index d8cc8efa432a..96f1ec0fc15b 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -1 +1,6 @@
language: nix
+before_script:
+ - nix-env -iA nixpkgs.cachix
+ - cachix use nixery
+script:
+ - nix-build | cachix push nixery
--
cgit 1.4.1
From 7c41a7a8723c8bd6606c0c2f3fed3a546f1efb24 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 4 Aug 2019 22:38:51 +0100
Subject: docs: Replace static page with mdBook site
Uses mdBook[1] to generate a documentation overview page instead of
the previous HTML site.
This makes it possible to add more elaborate documentation without
having to deal with finicky markup.
[1]: https://github.com/rust-lang-nursery/mdBook
---
tools/nixery/.gitattributes | 2 +
tools/nixery/docs/.gitignore | 1 +
tools/nixery/docs/book.toml | 8 +++
tools/nixery/docs/src/SUMMARY.md | 4 ++
tools/nixery/docs/src/nix-1p.md | 2 +
tools/nixery/docs/src/nixery-logo.png | Bin 0 -> 194098 bytes
tools/nixery/docs/src/nixery.md | 77 ++++++++++++++++++++++++
tools/nixery/docs/theme/favicon.png | Bin 0 -> 16053 bytes
tools/nixery/docs/theme/nixery.css | 3 +
tools/nixery/static/favicon.ico | Bin 167584 -> 0 bytes
tools/nixery/static/index.html | 108 ----------------------------------
tools/nixery/static/nixery-logo.png | Bin 194098 -> 0 bytes
12 files changed, 97 insertions(+), 108 deletions(-)
create mode 100644 tools/nixery/.gitattributes
create mode 100644 tools/nixery/docs/.gitignore
create mode 100644 tools/nixery/docs/book.toml
create mode 100644 tools/nixery/docs/src/SUMMARY.md
create mode 100644 tools/nixery/docs/src/nix-1p.md
create mode 100644 tools/nixery/docs/src/nixery-logo.png
create mode 100644 tools/nixery/docs/src/nixery.md
create mode 100644 tools/nixery/docs/theme/favicon.png
create mode 100644 tools/nixery/docs/theme/nixery.css
delete mode 100644 tools/nixery/static/favicon.ico
delete mode 100644 tools/nixery/static/index.html
delete mode 100644 tools/nixery/static/nixery-logo.png
(limited to 'tools')
diff --git a/tools/nixery/.gitattributes b/tools/nixery/.gitattributes
new file mode 100644
index 000000000000..74464db942e9
--- /dev/null
+++ b/tools/nixery/.gitattributes
@@ -0,0 +1,2 @@
+# Ignore stylesheet modifications for the book in Linguist stats
+*.css linguist-detectable=false
diff --git a/tools/nixery/docs/.gitignore b/tools/nixery/docs/.gitignore
new file mode 100644
index 000000000000..7585238efedf
--- /dev/null
+++ b/tools/nixery/docs/.gitignore
@@ -0,0 +1 @@
+book
diff --git a/tools/nixery/docs/book.toml b/tools/nixery/docs/book.toml
new file mode 100644
index 000000000000..bf6ccbb27f35
--- /dev/null
+++ b/tools/nixery/docs/book.toml
@@ -0,0 +1,8 @@
+[book]
+authors = ["Vincent Ambo "]
+language = "en"
+multilingual = false
+src = "src"
+
+[output.html]
+additional-css = ["theme/nixery.css"]
diff --git a/tools/nixery/docs/src/SUMMARY.md b/tools/nixery/docs/src/SUMMARY.md
new file mode 100644
index 000000000000..5d680b82e8d9
--- /dev/null
+++ b/tools/nixery/docs/src/SUMMARY.md
@@ -0,0 +1,4 @@
+# Summary
+
+- [Nixery](./nixery.md)
+- [Nix, the language](./nix-1p.md)
diff --git a/tools/nixery/docs/src/nix-1p.md b/tools/nixery/docs/src/nix-1p.md
new file mode 100644
index 000000000000..a21234150fc7
--- /dev/null
+++ b/tools/nixery/docs/src/nix-1p.md
@@ -0,0 +1,2 @@
+This page is a placeholder. During the build process, it is replaced by the
+actual `nix-1p` guide from https://github.com/tazjin/nix-1p
diff --git a/tools/nixery/docs/src/nixery-logo.png b/tools/nixery/docs/src/nixery-logo.png
new file mode 100644
index 000000000000..fcf77df3d6a9
Binary files /dev/null and b/tools/nixery/docs/src/nixery-logo.png differ
diff --git a/tools/nixery/docs/src/nixery.md b/tools/nixery/docs/src/nixery.md
new file mode 100644
index 000000000000..d3d1911d2880
--- /dev/null
+++ b/tools/nixery/docs/src/nixery.md
@@ -0,0 +1,77 @@
+![Nixery](./nixery-logo.png)
+
+------------
+
+Welcome to this instance of [Nixery][]. It provides ad-hoc container images that
+contain packages from the [Nix][] package manager. Images with arbitrary
+packages can be requested via the image name.
+
+Nix not only provides the packages to include in the images, but also builds the
+images themselves by using an interesting layering strategy described in [this
+blog post][layers].
+
+## Quick start
+
+Simply pull an image from this registry, separating each package you want
+included by a slash:
+
+ docker pull nixery.dev/shell/git/htop
+
+This gives you an image with `git`, `htop` and an interactively configured
+shell. You could run it like this:
+
+ docker run -ti nixery.dev/shell/git/htop bash
+
+Each path segment corresponds either to a key in the Nix package set, or a
+meta-package that automatically expands to several other packages.
+
+Meta-packages **must** be the first path component if they are used. Currently
+the only meta-package is `shell`, which provides a `bash`-shell with interactive
+configuration and standard tools like `coreutils`.
+
+**Tip:** When pulling from a private Nixery instance, replace `nixery.dev` in
+the above examples with your registry address.
+
+## FAQ
+
+If you have a question that is not answered here, feel free to file an issue on
+Github so that we can get it included in this section. The volume of questions
+is quite low, thus by definition your question is already frequently asked.
+
+### Where is the source code for this?
+
+Over [on Github][Nixery]. It is licensed under the Apache 2.0 license. Consult
+the documentation entries in the sidebar for information on how to set up your
+own instance of Nixery.
+
+### Which revision of `nixpkgs` is used for the builds?
+
+The instance at `nixery.dev` tracks a recent NixOS channel, currently NixOS
+19.03. The channel is updated several times a day.
+
+Private registries might be configured to track a different channel (such as
+`nixos-unstable`) or even track a git repository with custom packages.
+
+### Is this an official Google project?
+
+**No.** Nixery is not officially supported by Google.
+
+### Should I depend on `nixery.dev` in production?
+
+While we appreciate the enthusiasm, if you would like to use Nixery in your
+production project we recommend setting up a private instance. The public Nixery
+at `nixery.dev` is run on a best-effort basis and we make no guarantees about
+availability.
+
+### Who made this?
+
+Nixery was written mostly by [tazjin][].
+
+[grahamc][] authored the image layering strategy. Many people have contributed
+to Nix over time, maybe you could become one of them?
+
+[Nixery]: https://github.com/google/nixery
+[Nix]: https://nixos.org/nix
+[layers]: https://grahamc.com/blog/nix-and-layered-docker-images
+[tazjin]: https://github.com/tazjin
+[grahamc]: https://github.com/grahamc
diff --git a/tools/nixery/docs/theme/favicon.png b/tools/nixery/docs/theme/favicon.png
new file mode 100644
index 000000000000..f510bde197ac
Binary files /dev/null and b/tools/nixery/docs/theme/favicon.png differ
diff --git a/tools/nixery/docs/theme/nixery.css b/tools/nixery/docs/theme/nixery.css
new file mode 100644
index 000000000000..c240e693d550
--- /dev/null
+++ b/tools/nixery/docs/theme/nixery.css
@@ -0,0 +1,3 @@
+h2, h3 {
+ margin-top: 1em;
+}
diff --git a/tools/nixery/static/favicon.ico b/tools/nixery/static/favicon.ico
deleted file mode 100644
index 7523e8513950..000000000000
Binary files a/tools/nixery/static/favicon.ico and /dev/null differ
diff --git a/tools/nixery/static/index.html b/tools/nixery/static/index.html
deleted file mode 100644
index 24aa879a535d..000000000000
--- a/tools/nixery/static/index.html
+++ /dev/null
@@ -1,108 +0,0 @@
-
-
-
-
-
- Nixery
-
-
-
-
-
-
-
-
-
-
-
-
- This is an instance
- of Nixery, which
- provides the ability to pull ad-hoc container images from a
- Docker-compatible registry server. The image names specify the
- contents the image should contain, which are then retrieved and
- built by the Nix package manager.
-
-
- Nix is also responsible for the creation of the container images
- themselves. To do this it uses an interesting layering strategy
- described in
- this blog post.
-
-
How does it work?
-
- Simply point your local Docker installation (or other compatible
- registry client) at Nixery and ask for an image with the
- contents you desire. Image contents are path separated in the
- name, so for example if you needed an image that contains a
- shell and emacs you could pull it as such:
-
-
- nixery.dev/shell/emacs25-nox
-
-
- Image tags are currently ignored. Every package name needs to
- correspond to a key in the
- nixpkgs package set.
-
-
- The special meta-package shell provides default packages
- you would expect in an interactive environment (such as an
- interactively configured bash). If you use this package
- you must specify it as the first package in an image.
-
-
FAQ
-
-
- Where is the source code for this?
-
- Over on Github.
-
-
- Which revision of nixpkgs is used?
-
- Nixery imports a Nix channel
- via builtins.fetchTarball. Currently the channel
- to which this instance is pinned is NixOS 19.03.
-
-
- Is this an official Google project?
-
- No. Nixery is not officially supported by
- Google.
-
-
- Can I depend on the demo instance in production?
-
- No. The demo instance is just a demo. It
- might go down, move, or disappear entirely at any point.
-
- To make use of Nixery in your project, please deploy a private
- instance. Stay tuned for instructions for how to do this on
- GKE.
-
-
-
diff --git a/tools/nixery/static/nixery-logo.png b/tools/nixery/static/nixery-logo.png
deleted file mode 100644
index fcf77df3d6a9..000000000000
Binary files a/tools/nixery/static/nixery-logo.png and /dev/null differ
--
cgit 1.4.1
From 85e8d760fcf16e50ba8e055561ac418f4f5bce58 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 4 Aug 2019 22:43:22 +0100
Subject: feat(build): Add mdBook 0.3.1 to build environment
Upstream nixpkgs currently only has an older version of mdBook. Until
that changes, we keep a different version in here.
---
tools/nixery/default.nix | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 7c7ad0b6c0eb..28b94af5bd59 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -55,6 +55,24 @@ rec {
mkdir $out
cp ${./static}/* $out
'';
+ # nixpkgs currently has an old version of mdBook. A new version is
+ # built here, but eventually the update will be upstreamed
+ # (nixpkgs#65890)
+ mdbook = rustPlatform.buildRustPackage rec {
+ name = "mdbook-${version}";
+ version = "0.3.1";
+ doCheck = false;
+
+ src = fetchFromGitHub {
+ owner = "rust-lang-nursery";
+ repo = "mdBook";
+ rev = "v${version}";
+ sha256 = "0py69267jbs6b7zw191hcs011cm1v58jz8mglqx3ajkffdfl3ghw";
+ };
+
+ cargoSha256 = "0qwhc42a86jpvjcaysmfcw8kmwa150lmz01flmlg74g6qnimff5m";
+ };
+
# Wrapper script running the Nixery server with the above two data
# dependencies configured.
--
cgit 1.4.1
From 2bef0ba2403f20826fbd615b0abb91cb2aff0350 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 4 Aug 2019 22:45:23 +0100
Subject: feat(build): Build Nixery book and embed it into Nixery image
Executes the previously added mdBook on the previously added book
source to yield a directory that can be served by Nixery on its index
page.
This is one of those 'I <3 Nix' things due to how easy it is to do.
---
tools/nixery/default.nix | 18 ++++++++++--------
tools/nixery/docs/default.nix | 36 ++++++++++++++++++++++++++++++++++++
2 files changed, 46 insertions(+), 8 deletions(-)
create mode 100644 tools/nixery/docs/default.nix
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 28b94af5bd59..4f0b14c90394 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -48,13 +48,6 @@ rec {
cat ${./build-registry-image.nix} > $out
'';
- # Static files to serve on the Nixery index. This is used primarily
- # for the demo instance running at nixery.dev and provides some
- # background information for what Nixery is.
- nixery-static = runCommand "nixery-static" {} ''
- mkdir $out
- cp ${./static}/* $out
- '';
# nixpkgs currently has an old version of mdBook. A new version is
# built here, but eventually the update will be upstreamed
# (nixpkgs#65890)
@@ -73,6 +66,10 @@ rec {
cargoSha256 = "0qwhc42a86jpvjcaysmfcw8kmwa150lmz01flmlg74g6qnimff5m";
};
+ # Use mdBook to build a static asset page which Nixery can then
+ # serve. This is primarily used for the public instance at
+ # nixery.dev.
+ nixery-book = callPackage ./docs { inherit mdbook; };
# Wrapper script running the Nixery server with the above two data
# dependencies configured.
@@ -81,7 +78,7 @@ rec {
# are installing Nixery directly.
nixery-bin = writeShellScriptBin "nixery" ''
export NIX_BUILDER="${nixery-builder}"
- export WEB_DIR="${nixery-static}"
+ export WEB_DIR="${nixery-book}"
exec ${nixery-server}/bin/nixery
'';
@@ -107,6 +104,11 @@ rec {
mkdir -p /etc/nix
echo 'sandbox = false' >> /etc/nix/nix.conf
+ # In some cases users building their own image might want to
+ # customise something on the inside (e.g. set up an environment
+ # for keys or whatever).
+ #
+ # This can be achieved by setting a 'preLaunch' script.
${preLaunch}
exec ${nixery-bin}/bin/nixery
diff --git a/tools/nixery/docs/default.nix b/tools/nixery/docs/default.nix
new file mode 100644
index 000000000000..ba652cef9c26
--- /dev/null
+++ b/tools/nixery/docs/default.nix
@@ -0,0 +1,36 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Builds the documentation page using the Rust project's 'mdBook'
+# tool.
+#
+# Some of the documentation is pulled in and included from other
+# sources.
+
+{ fetchFromGitHub, mdbook, runCommand }:
+
+let
+ nix-1p = fetchFromGitHub {
+ owner = "tazjin";
+ repo = "nix-1p";
+ rev = "aab846cd3d79fcd092b1bfea1346c587b2a56095";
+ sha256 = "12dl0xrwgg2d4wyv9zxgdn0hzqnanczjg23vqn3356rywxlzzwak";
+ };
+in runCommand "nixery-book" {} ''
+ mkdir -p $out
+ cp -r ${./.}/* .
+ chmod -R a+w src
+ cp ${nix-1p}/README.md src/nix-1p.md
+ ${mdbook}/bin/mdbook build -d $out
+''
--
cgit 1.4.1
From a3f6278913d756c74d4f5c636c88b91a1b3748f6 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 4 Aug 2019 23:42:57 +0100
Subject: docs: Add an "under-the-hood" page explaining the build process
This page describes the various steps that Nixery goes through when
"procuring" an image.
The intention is to give users some more visibility into what is going
on and to make it clear that this is not just an image storage
service.
---
tools/nixery/docs/src/SUMMARY.md | 1 +
tools/nixery/docs/src/nixery.md | 8 +--
tools/nixery/docs/src/under-the-hood.md | 105 ++++++++++++++++++++++++++++++++
3 files changed, 110 insertions(+), 4 deletions(-)
create mode 100644 tools/nixery/docs/src/under-the-hood.md
(limited to 'tools')
diff --git a/tools/nixery/docs/src/SUMMARY.md b/tools/nixery/docs/src/SUMMARY.md
index 5d680b82e8d9..f5ba3e9b084a 100644
--- a/tools/nixery/docs/src/SUMMARY.md
+++ b/tools/nixery/docs/src/SUMMARY.md
@@ -1,4 +1,5 @@
# Summary
- [Nixery](./nixery.md)
+- [Under the hood](./under-the-hood.md)
- [Nix, the language](./nix-1p.md)
diff --git a/tools/nixery/docs/src/nixery.md b/tools/nixery/docs/src/nixery.md
index d3d1911d2880..83e1aac52bdf 100644
--- a/tools/nixery/docs/src/nixery.md
+++ b/tools/nixery/docs/src/nixery.md
@@ -52,10 +52,6 @@ The instance at `nixery.dev` tracks a recent NixOS channel, currently NixOS
Private registries might be configured to track a different channel (such as
`nixos-unstable`) or even track a git repository with custom packages.
-### Is this an official Google project?
-
-**No.** Nixery is not officially supported by Google.
-
### Should I depend on `nixery.dev` in production?
While we appreciate the enthusiasm, if you would like to use Nixery in your
@@ -63,6 +59,10 @@ production project we recommend setting up a private instance. The public Nixery
at `nixery.dev` is run on a best-effort basis and we make no guarantees about
availability.
+### Is this an official Google project?
+
+**No.** Nixery is not officially supported by Google.
+
### Who made this?
Nixery was written mostly by [tazjin][].
diff --git a/tools/nixery/docs/src/under-the-hood.md b/tools/nixery/docs/src/under-the-hood.md
new file mode 100644
index 000000000000..3791707b1cd2
--- /dev/null
+++ b/tools/nixery/docs/src/under-the-hood.md
@@ -0,0 +1,105 @@
+# Under the hood
+
+This page serves as a quick explanation of what happens under the hood when an
+image is requested from Nixery.
+
+
+
+- [1. The image manifest is requested](#1-the-image-manifest-is-requested)
+- [2. Nix builds the image](#2-nix-builds-the-image)
+- [3. Layers are uploaded to Nixery's storage](#3-layers-are-uploaded-to-nixerys-storage)
+- [4. The image manifest is sent back](#4-the-image-manifest-is-sent-back)
+- [5. Image layers are requested](#5-image-layers-are-requested)
+
+
+
+--------
+
+## 1. The image manifest is requested
+
+When container registry clients such as Docker pull an image, the first thing
+they do is ask for the image manifest. This is a JSON document describing which
+layers are contained in an image, as well as some additional auxiliary
+information.
+
+This request is of the form `GET /v2/$imageName/manifests/$imageTag`.
+
+Nixery receives this request and begins by splitting the image name into its
+path components and substituting meta-packages (such as `shell`) for their
+contents.
+
+For example, requesting `shell/htop/git` results in Nixery expanding the image
+name to `["bashInteractive", "coreutils", "htop", "git"]`.
+
+If Nixery is configured with a private Nix repository, it also looks at the
+image tag and substitutes `latest` with `master`.
+
+It then invokes Nix with three parameters:
+
+1. image contents (as above)
+2. image tag
+3. configured package set source
+
+## 2. Nix builds the image
+
+Using the parameters above, Nix imports the package set and begins by mapping
+the image names to attributes in the package set.
+
+A special case during this process is packages with uppercase characters in
+their name, for example anything under `haskellPackages`. The registry protocol
+does not allow uppercase characters, so the Nix code will translate something
+like `haskellpackages` (lowercased) to the correct attribute name.
+
+After identifying all contents, Nix determines the contents of each layer while
+optimising for the best possible cache efficiency.
+
+Finally, it builds each layer, assembles the image manifest as a JSON structure,
+and yields this manifest back to the web server.
+
+*Note:* While this step is running (which can take some time in the case of
+large first-time image builds), the registry client is left hanging waiting for
+an HTTP response. Unfortunately the registry protocol does not allow for any
+feedback to the user at this point, so from the user's perspective things
+just ... hang, for a moment.
+
+## 3. Layers are uploaded to Nixery's storage
+
+Nixery inspects the returned manifest and uploads each layer to the configured
+[Google Cloud Storage][gcs] bucket. To avoid unnecessary uploading, it will
+first check whether layers are already present in the bucket and - just to be
+safe - compare their MD5-hashes against what was built.
+
+## 4. The image manifest is sent back
+
+If everything went well at this point, Nixery responds to the registry client
+with the image manifest.
+
+The client now inspects the manifest and basically sees a list of SHA256-hashes,
+each corresponding to one layer of the image. Most clients will now consult
+their local layer storage and determine which layers they are missing.
+
+Each of the missing layers is then requested from Nixery.
+
+## 5. Image layers are requested
+
+For each image layer that it needs to retrieve, the registry client assembles a
+request that looks like this:
+
+`GET /v2/${imageName}/blobs/sha256:${layerHash}`
+
+Nixery receives these requests and *rewrites* them to Google Cloud Storage URLs,
+responding with an `HTTP 303 See Other` status code and the actual download URL
+of the layer.
+
+Nixery supports using private buckets which are not generally world-readable, in
+which case [signed URLs][] are constructed using a private key. These allow the
+registry client to download each layer without needing to care about how the
+underlying authentication works.
+
+---------
+
+That's it. After these five steps the registry client has retrieved all it needs
+to run the image produced by Nixery.
+
+[gcs]: https://cloud.google.com/storage/
+[signed URLs]: https://cloud.google.com/storage/docs/access-control/signed-urls
--
cgit 1.4.1
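The page above describes the flow from the registry's side; as a rough
illustration of the client's side, the sketch below pulls a manifest and then
requests each layer blob with plain HTTP. The host, image name and manifest
fields are assumptions for the sketch (a real registry client negotiates media
types and verifies digests).

```go
// Sketch: the client half of the five steps above, reduced to plain HTTP.
// Error handling, media-type negotiation and digest verification are omitted.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Minimal slice of a Docker image manifest: just the layer digests.
type manifest struct {
	Layers []struct {
		Digest string `json:"digest"`
	} `json:"layers"`
}

func main() {
	host := "https://nixery.dev"
	image := "shell/htop/git"

	// Step 1: request the image manifest.
	resp, err := http.Get(host + "/v2/" + image + "/manifests/latest")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var m manifest
	if err := json.NewDecoder(resp.Body).Decode(&m); err != nil {
		log.Fatal(err)
	}

	// Step 5: request each layer; the default HTTP client follows the
	// 303 redirect to the (possibly signed) storage URL automatically.
	for _, layer := range m.Layers {
		blob, err := http.Get(host + "/v2/" + image + "/blobs/" + layer.Digest)
		if err != nil {
			log.Fatal(err)
		}
		blob.Body.Close()
		fmt.Println(layer.Digest, blob.StatusCode)
	}
}
```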
From 6293d69fd94f709d029c5c121956900cad3d24c1 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 5 Aug 2019 00:27:39 +0100
Subject: docs: Add a section on running your own Nixery
---
tools/nixery/docs/src/SUMMARY.md | 6 +-
tools/nixery/docs/src/run-your-own.md | 141 ++++++++++++++++++++++++++++++++++
2 files changed, 145 insertions(+), 2 deletions(-)
create mode 100644 tools/nixery/docs/src/run-your-own.md
(limited to 'tools')
diff --git a/tools/nixery/docs/src/SUMMARY.md b/tools/nixery/docs/src/SUMMARY.md
index f5ba3e9b084a..677f328972bb 100644
--- a/tools/nixery/docs/src/SUMMARY.md
+++ b/tools/nixery/docs/src/SUMMARY.md
@@ -1,5 +1,7 @@
# Summary
- [Nixery](./nixery.md)
-- [Under the hood](./under-the-hood.md)
-- [Nix, the language](./nix-1p.md)
+ - [Under the hood](./under-the-hood.md)
+ - [Run your own Nixery](./run-your-own.md)
+- [Nix](./nix.md)
+ - [Nix, the language](./nix-1p.md)
diff --git a/tools/nixery/docs/src/run-your-own.md b/tools/nixery/docs/src/run-your-own.md
new file mode 100644
index 000000000000..0539ab02fcff
--- /dev/null
+++ b/tools/nixery/docs/src/run-your-own.md
@@ -0,0 +1,141 @@
+## Run your own Nixery
+
+
+
+- [0. Prerequisites](#0-prerequisites)
+- [1. Choose a package set](#1-choose-a-package-set)
+- [2. Build Nixery itself](#2-build-nixery-itself)
+- [3. Prepare configuration](#3-prepare-configuration)
+- [4. Deploy Nixery](#4-deploy-nixery)
+- [5. Productionise](#5-productionise)
+
+
+
+
+---------
+
+⚠ This page is still under construction! ⚠
+
+--------
+
+Running your own Nixery is not difficult, but requires some setup. Follow the
+steps below to get up & running.
+
+*Note:* Nixery can be run inside a [GKE][] cluster, providing a local service
+from which images can be requested. Documentation for how to set this up is
+forthcoming; please see [nixery#4][].
+
+## 0. Prerequisites
+
+To run Nixery, you must have:
+
+* [Nix][] (to build Nixery itself)
+* Somewhere to run it (your own server, Google AppEngine, a Kubernetes cluster,
+ whatever!)
+* A [Google Cloud Storage][gcs] bucket in which to store & serve layers
+
+## 1. Choose a package set
+
+When running your own Nixery you need to decide which package set you want to
+serve. By default, Nixery builds packages from a recent NixOS channel which
+ensures that most packages are cached upstream and no expensive builds need to
+be performed for trivial things.
+
+However if you are running a private Nixery, chances are high that you intend to
+use it with your own packages. There are three options available:
+
+1. Specify an upstream Nix/NixOS channel[^1], such as `nixos-19.03` or
+ `nixos-unstable`.
+2. Specify your own git-repository with a custom package set[^2]. This makes it
+ possible to pull different tags, branches or commits by modifying the image
+ tag.
+3. Specify a local file path containing a Nix package set. Where this comes from
+ or what it contains is up to you.
+
+## 2. Build Nixery itself
+
+Building Nixery creates a container image. This section assumes that the
+container runtime used is Docker; please modify the instructions accordingly if
+you are using something else.
+
+With a working Nix installation, building Nixery is done by invoking `nix-build
+-A nixery-image` from a checkout of the [Nixery repository][repo].
+
+This will create a `result`-symlink which points to a tarball containing the
+image. In Docker, this tarball can be loaded by using `docker load -i result`.
+
+## 3. Prepare configuration
+
+Nixery is configured via environment variables.
+
+You must set *all* of these:
+
+* `BUCKET`: [Google Cloud Storage][gcs] bucket to store & serve image layers
+* `PORT`: HTTP port on which Nixery should listen
+
+You may set *one* of these; if unset, Nixery defaults to `nixos-19.03`:
+
+* `NIXERY_CHANNEL`: The name of a Nix/NixOS channel to use for building
+* `NIXERY_PKGS_REPO`: URL of a git repository containing a package set (uses
+ locally configured SSH/git credentials)
+* `NIXERY_PKGS_PATH`: A local filesystem path containing a Nix package set to use
+ for building
+
+You may set *both* of these:
+
+* `GCS_SIGNING_KEY`: A Google service account key (in PEM format) that can be
+ used to [sign Cloud Storage URLs][signed-urls]
+* `GCS_SIGNING_ACCOUNT`: Google service account ID that the signing key belongs
+ to
+
+To authenticate to the configured GCS bucket, Nixery uses Google's [Application
+Default Credentials][ADC]. Depending on your environment this may require
+additional configuration.
+
+## 4. Deploy Nixery
+
+With the above environment variables configured, you can run the image that was
+built in step 2.
+
+How this works depends on the environment you are using and is, for now, outside
+of the scope of this tutorial.
+
+Once Nixery is running you can immediately start requesting images from it.
+
+## 5. Productionise
+
+(⚠ Here be dragons! ⚠)
+
+Nixery is still an early project and has not yet been deployed in any production
+environments, so some caveats apply.
+
+Notably, Nixery currently does not support any authentication methods, so anyone
+with network access to the registry can retrieve images.
+
+Running a Nixery inside of a fenced-off environment (such as internal to a
+Kubernetes cluster) should be fine, but you should consider doing all of the
+following:
+
+* Issue a TLS certificate for the hostname you are assigning to Nixery. In fact,
+ Docker will refuse to pull images from registries that do not use TLS (with
+ the exception of `.local` domains).
+* Configure signed GCS URLs to avoid having to make your bucket world-readable.
+* Configure request timeouts for Nixery if you have your own web server in front
+ of it. This will be natively supported by Nixery in the future.
+
+-------
+
+[^1]: Nixery will not work with Nix channels older than `nixos-19.03`.
+
+[^2]: This documentation will be updated with instructions on how to best set up
+ a custom Nix repository. Nixery expects custom package sets to be a superset
+ of `nixpkgs`, as it uses `lib` and other features from `nixpkgs`
+ extensively.
+
+[GKE]: https://cloud.google.com/kubernetes-engine/
+[nixery#4]: https://github.com/google/nixery/issues/4
+[Nix]: https://nixos.org/nix
+[gcs]: https://cloud.google.com/storage/
+[repo]: https://github.com/google/nixery
+[signed-urls]: under-the-hood.html#5-image-layers-are-requested
+[ADC]: https://cloud.google.com/docs/authentication/production#finding_credentials_automatically
--
cgit 1.4.1
From d87662b7b562d68561775adeefd29fffb065465c Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 5 Aug 2019 00:27:47 +0100
Subject: docs: Add a section on Nix itself
---
tools/nixery/docs/src/nix.md | 31 +++++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)
create mode 100644 tools/nixery/docs/src/nix.md
(limited to 'tools')
diff --git a/tools/nixery/docs/src/nix.md b/tools/nixery/docs/src/nix.md
new file mode 100644
index 000000000000..2bfd75a6925c
--- /dev/null
+++ b/tools/nixery/docs/src/nix.md
@@ -0,0 +1,31 @@
+# Nix
+
+These sections are designed to give some background information on what Nix is.
+If you've never heard of Nix before looking at Nixery, this might just be the
+page for you!
+
+[Nix][] is a functional package manager that comes with a number of advantages
+over traditional package managers, such as side-by-side installs of different
+package versions, atomic updates, easy customisability, simple binary caching
+and much more. Feel free to explore the [Nix website][Nix] for an overview of
+Nix itself.
+
+Nix uses a custom programming language, also called Nix, which is explained
+[on its own page][nix-1p].
+
+In addition to the package manager and language, the Nix project also maintains
+[NixOS][] - a Linux distribution built entirely on Nix. On NixOS, users can
+declaratively describe the *entire* configuration of their system and perform
+updates/rollbacks to other system configurations with ease.
+
+Most Nix packages are tracked in the [Nix package set][nixpkgs], usually simply
+referred to as `nixpkgs`. It contains tens of thousands of packages already!
+
+Nixery (which you are looking at!) provides a simple way to get started with
+Nix; in fact, you don't even need to know that you're using Nix to make use of
+Nixery.
+
+[Nix]: https://nixos.org/nix/
+[nix-1p]: nix-1p.html
+[NixOS]: https://nixos.org/
+[nixpkgs]: https://github.com/nixos/nixpkgs
--
cgit 1.4.1
From 12a853fab74de9d146580891d8b70a7c216d2c83 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 5 Aug 2019 01:20:52 +0100
Subject: docs: Minor fixes to README after new website release
---
tools/nixery/README.md | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index bcb8e40ec794..9c323a57fa12 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -1,5 +1,5 @@
-
+
-----------------
@@ -16,12 +16,13 @@ image name.
The path components refer to top-level keys in `nixpkgs` and are used to build a
container image using Nix's [buildLayeredImage][] functionality.
+A public instance as well as additional documentation is available at
+[nixery.dev][public].
+
The project started out with the intention of becoming a Kubernetes controller
that can serve declarative image specifications specified in CRDs as container
images. The design for this is outlined in [a public gist][gist].
-An example instance is available at [nixery.dev][demo].
-
This is not an officially supported Google project.
## Usage example
@@ -94,7 +95,7 @@ variables:
## Roadmap
-### Kubernetes integration (in the future)
+### Kubernetes integration
It should be trivial to deploy Nixery inside of a Kubernetes cluster with
correct caching behaviour, addressing and so on.
@@ -104,5 +105,5 @@ See [issue #4](https://github.com/google/nixery/issues/4).
[Nix]: https://nixos.org/
[gist]: https://gist.github.com/tazjin/08f3d37073b3590aacac424303e6f745
[buildLayeredImage]: https://grahamc.com/blog/nix-and-layered-docker-images
-[demo]: https://nixery.dev
+[public]: https://nixery.dev
[gcs]: https://cloud.google.com/storage/
--
cgit 1.4.1
From 993fda337755e02e4505b0a85f14581baa8ed1d0 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 8 Aug 2019 17:47:46 +0100
Subject: fix(go): Fix breakage in unsigned URLs
This affected the public instance which is still running without URL
signing. Should add some monitoring!
---
tools/nixery/main.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 291cdf52d7e6..d20ede2eb587 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -328,7 +328,7 @@ func constructLayerUrl(cfg *config, digest string) (string, error) {
opts.Expires = time.Now().Add(5 * time.Minute)
return storage.SignedURL(cfg.bucket, object, &opts)
} else {
- return ("https://storage.googleapis.com" + cfg.bucket + "/" + object), nil
+ return ("https://storage.googleapis.com/" + cfg.bucket + "/" + object), nil
}
}
--
cgit 1.4.1
From c727b3ca9ea44e5dbc2f4c7b280af26f8c53a526 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 8 Aug 2019 20:57:11 +0100
Subject: chore(nix): Increase maximum number of layers to 96
This uses a significantly larger percentage of the total available
layers (125) than before, which means that cache hits for layers
become more likely between images.
---
tools/nixery/build-registry-image.nix | 9 ++++-----
tools/nixery/default.nix | 2 +-
2 files changed, 5 insertions(+), 6 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
index 6a5312b444d0..255f1ca9b1d0 100644
--- a/tools/nixery/build-registry-image.nix
+++ b/tools/nixery/build-registry-image.nix
@@ -29,11 +29,10 @@
packages ? "[]",
# Optional bash script to run on the files prior to fixturizing the layer.
extraCommands ? "", uid ? 0, gid ? 0,
- # Docker's lowest maximum layer limit is 42-layers for an old
- # version of the AUFS graph driver. We pick 24 to ensure there is
- # plenty of room for extension. I believe the actual maximum is
- # 128.
- maxLayers ? 24,
+ # Docker's modern image storage mechanisms have a maximum of 125
+ # layers. To allow for some extensibility (via additional layers),
+ # the default here is set to something a little less than that.
+ maxLayers ? 96,
# Configuration for which package set to use when building.
#
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 4f0b14c90394..092c76e9c5b9 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -30,7 +30,6 @@ rec {
# or similar (as other required files will not be included), but
# buildGoPackage requires a package path.
goPackagePath = "github.com/google/nixery";
-
goDeps = ./go-deps.nix;
src = ./.;
@@ -116,6 +115,7 @@ rec {
in dockerTools.buildLayeredImage {
name = "nixery";
config.Cmd = ["${nixery-launch-script}/bin/nixery"];
+ maxLayers = 96;
contents = [
cacert
coreutils
--
cgit 1.4.1
From 3e385dc379de4a1e8dd619a7a47714925e99490a Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 8 Aug 2019 21:02:08 +0100
Subject: docs: Update embedded nix-1p
The new commit has an operator table, which is nice to have!
---
tools/nixery/docs/default.nix | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/default.nix b/tools/nixery/docs/default.nix
index ba652cef9c26..deebdffd7fd9 100644
--- a/tools/nixery/docs/default.nix
+++ b/tools/nixery/docs/default.nix
@@ -24,8 +24,8 @@ let
nix-1p = fetchFromGitHub {
owner = "tazjin";
repo = "nix-1p";
- rev = "aab846cd3d79fcd092b1bfea1346c587b2a56095";
- sha256 = "12dl0xrwgg2d4wyv9zxgdn0hzqnanczjg23vqn3356rywxlzzwak";
+ rev = "3cd0f7d7b4f487d04a3f1e3ca8f2eb1ab958c49b";
+ sha256 = "02lpda03q580gyspkbmlgnb2cbm66rrcgqsv99aihpbwyjym81af";
};
in runCommand "nixery-book" {} ''
mkdir -p $out
--
cgit 1.4.1
From ce31598f4264a3c8af749b3fd533a905dcd69edc Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 11 Aug 2019 16:44:05 +0100
Subject: feat(group-layers): Implement first half of new layering strategy
The strategy is described in-depth in the comment at the top of the
implementation file, as well as in the design document:
https://storage.googleapis.com/nixdoc/nixery-layers.html
---
tools/nixery/group-layers/group-layers.go | 267 ++++++++++++++++++++++++++++++
1 file changed, 267 insertions(+)
create mode 100644 tools/nixery/group-layers/group-layers.go
(limited to 'tools')
diff --git a/tools/nixery/group-layers/group-layers.go b/tools/nixery/group-layers/group-layers.go
new file mode 100644
index 000000000000..1ee67433deeb
--- /dev/null
+++ b/tools/nixery/group-layers/group-layers.go
@@ -0,0 +1,267 @@
+// This program reads an export reference graph (i.e. a graph representing the
+// runtime dependencies of a set of derivations) created by Nix and groups the
+// derivations in a way that is likely to match the grouping for other
+// derivation sets with overlapping dependencies.
+//
+// This is used to determine which derivations to include in which layers of a
+// container image.
+//
+// # Inputs
+//
+// * a graph of Nix runtime dependencies, generated via exportReferenceGraph
+// * a file containing absolute popularity values of packages in the
+// Nix package set (in the form of a direct reference count)
+// * a maximum number of layers to allocate for the image (the "layer budget")
+//
+// # Algorithm
+//
+// It works by first creating a (directed) dependency tree:
+//
+// img (root node)
+// │
+// ├───> A ─────┐
+// │ v
+// ├───> B ───> E
+// │ ^
+// ├───> C ─────┘
+// │ │
+// │ v
+// └───> D ───> F
+// │
+// └────> G
+//
+// Each node (i.e. package) is then visited to determine how important
+// it is to separate this node into its own layer, specifically:
+//
+// 1. Is the node within a certain threshold percentile of absolute
+// popularity within all of nixpkgs? (e.g. `glibc`, `openssl`)
+//
+// 2. Is the node's runtime closure above a threshold size? (e.g. 100MB)
+//
+// In either case, a bit is flipped for this node representing each
+// condition and an edge to it is inserted directly from the image
+// root, if it does not already exist.
+//
+// For the rest of the example we assume 'G' is above the threshold
+// size and 'E' is popular.
+//
+// This tree is then transformed into a dominator tree:
+//
+// img
+// │
+// ├───> A
+// ├───> B
+// ├───> C
+// ├───> E
+// ├───> D ───> F
+// └───> G
+//
+// Specifically this means that the paths to A, B, C, E, G, and D
+// always pass through the root (i.e. are dominated by it), whilst F
+// is dominated by D (all paths go through it).
+//
+// The top-level subtrees are considered as the initially selected
+// layers.
+//
+// If the list of layers fits within the layer budget, it is returned.
+//
+// Otherwise layers are merged together in this order:
+//
+// * layers whose root meets neither condition above
+// * layers whose root is popular
+// * layers whose root is big
+// * layers whose root meets both conditions
+//
+// # Threshold values
+//
+// Threshold values for the partitioning conditions mentioned above
+// have not yet been determined, but we will make a good first guess
+// based on gut feeling and proceed to measure their impact on cache
+// hits/misses.
+//
+// # Example
+//
+// Using the logic described above as well as the example presented in
+// the introduction, this program would create the following layer
+// groupings (assuming no additional partitioning):
+//
+// Layer budget: 1
+// Layers: { A, B, C, D, E, F, G }
+//
+// Layer budget: 2
+// Layers: { G }, { A, B, C, D, E, F }
+//
+// Layer budget: 3
+// Layers: { G }, { E }, { A, B, C, D, F }
+//
+// Layer budget: 4
+// Layers: { G }, { E }, { D, F }, { A, B, C }
+//
+// ...
+//
+// Layer budget: 10
+// Layers: { E }, { D, F }, { A }, { B }, { C }
+package main
+
+import (
+ "encoding/json"
+ "flag"
+ "io/ioutil"
+ "log"
+ "fmt"
+ "regexp"
+ "os"
+
+ "gonum.org/v1/gonum/graph/simple"
+ "gonum.org/v1/gonum/graph/flow"
+ "gonum.org/v1/gonum/graph/encoding/dot"
+)
+
+// closureGraph represents the structured attributes Nix outputs when asking it
+// for the exportReferencesGraph of a list of derivations.
+type exportReferences struct {
+ References struct {
+ Graph []string `json:"graph"`
+ } `json:"exportReferencesGraph"`
+
+ Graph []struct {
+ Size uint64 `json:"closureSize`
+ Path string `json:"path"`
+ Refs []string `json:"references"`
+ } `json:"graph"`
+}
+
+// closure as pointed to by the graph nodes.
+type closure struct {
+ GraphID int64
+ Path string
+ Size uint64
+ Refs []string
+ // TODO(tazjin): popularity and other funny business
+}
+
+func (c *closure) ID() int64 {
+ return c.GraphID
+}
+
+var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)
+func (c *closure) DOTID() string {
+ return nixRegexp.ReplaceAllString(c.Path, "")
+}
+
+func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *closure) {
+ for _, c := range node.Refs {
+ // Nix adds a self reference to each node, which
+ // should not be inserted.
+ if c != node.Path {
+ edge := graph.NewEdge(node, (*cmap)[c])
+ graph.SetEdge(edge)
+ }
+ }
+}
+
+// Create a graph structure from the references supplied by Nix.
+func buildGraph(refs *exportReferences) *simple.DirectedGraph {
+ cmap := make(map[string]*closure)
+ graph := simple.NewDirectedGraph()
+
+ // Insert all closures into the graph, as well as a fake root
+ // closure which serves as the top of the tree.
+ //
+ // A map from store paths to IDs is kept to actually insert
+ // edges below.
+ root := &closure {
+ GraphID: 0,
+ Path: "image_root",
+ }
+ graph.AddNode(root)
+
+ for idx, c := range refs.Graph {
+ node := &closure {
+ GraphID: int64(idx + 1), // inc because of root node
+ Path: c.Path,
+ Size: c.Size,
+ Refs: c.Refs,
+ }
+
+ graph.AddNode(node)
+ cmap[c.Path] = node
+ }
+
+ // Insert the top-level closures with edges from the root
+ // node, then insert all edges for each closure.
+ for _, p := range refs.References.Graph {
+ edge := graph.NewEdge(root, cmap[p])
+ graph.SetEdge(edge)
+ }
+
+ for _, c := range cmap {
+ insertEdges(graph, &cmap, c)
+ }
+
+ gv, err := dot.Marshal(graph, "deps", "", "")
+ if err != nil {
+ log.Fatalf("Could not encode graph: %s\n", err)
+ }
+ fmt.Print(string(gv))
+ os.Exit(0)
+
+ return graph
+}
+
+// Calculate the dominator tree of the entire package set and group
+// each top-level subtree into a layer.
+func dominate(graph *simple.DirectedGraph) {
+ dt := flow.Dominators(graph.Node(0), graph)
+
+ // convert dominator tree back into encodable graph
+ dg := simple.NewDirectedGraph()
+
+ for nodes := graph.Nodes(); nodes.Next(); {
+ dg.AddNode(nodes.Node())
+ }
+
+ for nodes := dg.Nodes(); nodes.Next(); {
+ node := nodes.Node()
+ for _, child := range dt.DominatedBy(node.ID()) {
+ edge := dg.NewEdge(node, child)
+ dg.SetEdge(edge)
+ }
+ }
+
+ gv, err := dot.Marshal(dg, "deps", "", "")
+ if err != nil {
+ log.Fatalf("Could not encode graph: %s\n", err)
+ }
+ fmt.Print(string(gv))
+
+ // fmt.Printf("%v edges in the graph\n", graph.Edges().Len())
+ // top := 0
+ // for _, n := range dt.DominatedBy(0) {
+ // fmt.Printf("%q is top-level\n", n.(*closure).Path)
+ // top++
+ // }
+ // fmt.Printf("%v total top-level nodes\n", top)
+ // root := dt.Root().(*closure)
+ // fmt.Printf("dominator tree root is %q\n", root.Path)
+ // fmt.Printf("%v nodes can reach to 1\n", nodes.Len())
+}
+
+func main() {
+ inputFile := flag.String("input", ".attrs.json", "Input file containing graph")
+ flag.Parse()
+
+ file, err := ioutil.ReadFile(*inputFile)
+ if err != nil {
+ log.Fatalf("Failed to load input: %s\n", err)
+ }
+
+ var refs exportReferences
+ err = json.Unmarshal(file, &refs)
+ if err != nil {
+ log.Fatalf("Failed to deserialise input: %s\n", err)
+ }
+
+ graph := buildGraph(&refs)
+ dominate(graph)
+}
--
cgit 1.4.1
From 92078527dbe8f853d984a4038e718c662c016741 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 11 Aug 2019 20:05:12 +0100
Subject: feat(group-layers): Add preliminary size & popularity considerations
As described in the design document, this adds considerations for
closure size and popularity. All closures meeting a certain threshold
for either value will have an extra edge from the image root to
themselves inserted in the graph, which will cause them to be
considered for inclusion in a separate layer.
This is preliminary because popularity is considered as a boolean
toggle (the input I generated only contains the top ~200 most popular
packages), but it should be using either absolute popularity values or
percentiles (needs some experimentation).
---
tools/nixery/group-layers/group-layers.go | 92 ++++++++++++++++++++++---------
1 file changed, 66 insertions(+), 26 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/group-layers/group-layers.go b/tools/nixery/group-layers/group-layers.go
index 1ee67433deeb..7be89e4e6761 100644
--- a/tools/nixery/group-layers/group-layers.go
+++ b/tools/nixery/group-layers/group-layers.go
@@ -108,9 +108,7 @@ import (
"flag"
"io/ioutil"
"log"
- "fmt"
"regexp"
- "os"
"gonum.org/v1/gonum/graph/simple"
"gonum.org/v1/gonum/graph/flow"
@@ -131,12 +129,20 @@ type exportReferences struct {
} `json:"graph"`
}
+// Popularity data for each Nix package that was calculated in advance.
+//
+// Popularity is a number from 1-100 that represents the
+// popularity percentile in which this package resides inside
+// of the nixpkgs tree.
+type pkgsMetadata = map[string]int
+
// closure as pointed to by the graph nodes.
type closure struct {
GraphID int64
Path string
Size uint64
Refs []string
+ Popularity int
// TODO(tazjin): popularity and other funny business
}
@@ -149,7 +155,36 @@ func (c *closure) DOTID() string {
return nixRegexp.ReplaceAllString(c.Path, "")
}
-func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *closure) {
+// bigOrPopular checks whether this closure should be considered for
+// separation into its own layer, even if it would otherwise only
+// appear in a subtree of the dominator tree.
+func (c *closure) bigOrPopular(pkgs *pkgsMetadata) bool {
+ const sizeThreshold = 100 * 1000000 // 100MB
+
+ if c.Size > sizeThreshold {
+ return true
+ }
+
+ // TODO(tazjin): After generating the full data, this should
+ // be changed to something other than a simple inclusion
+ // (currently the test-data only contains the top 200
+ // packages).
+ pop, ok := (*pkgs)[c.DOTID()]
+ if ok {
+ log.Printf("%q is popular!\n", c.DOTID())
+ }
+ c.Popularity = pop
+ return ok
+}
+
+func insertEdges(graph *simple.DirectedGraph, pop *pkgsMetadata, cmap *map[string]*closure, node *closure) {
+ // Big or popular nodes get a separate edge from the top to
+ // flag them for their own layer.
+ if node.bigOrPopular(pop) && !graph.HasEdgeFromTo(0, node.ID()) {
+ edge := graph.NewEdge(graph.Node(0), node)
+ graph.SetEdge(edge)
+ }
+
for _, c := range node.Refs {
// Nix adds a self reference to each node, which
// should not be inserted.
@@ -161,7 +196,7 @@ func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *c
}
// Create a graph structure from the references supplied by Nix.
-func buildGraph(refs *exportReferences) *simple.DirectedGraph {
+func buildGraph(refs *exportReferences, pop *pkgsMetadata) *simple.DirectedGraph {
cmap := make(map[string]*closure)
graph := simple.NewDirectedGraph()
@@ -196,15 +231,15 @@ func buildGraph(refs *exportReferences) *simple.DirectedGraph {
}
for _, c := range cmap {
- insertEdges(graph, &cmap, c)
+ insertEdges(graph, pop, &cmap, c)
}
- gv, err := dot.Marshal(graph, "deps", "", "")
- if err != nil {
- log.Fatalf("Could not encode graph: %s\n", err)
- }
- fmt.Print(string(gv))
- os.Exit(0)
+ // gv, err := dot.Marshal(graph, "deps", "", "")
+ // if err != nil {
+ // log.Fatalf("Could not encode graph: %s\n", err)
+ // }
+ // fmt.Print(string(gv))
+ // os.Exit(0)
return graph
}
@@ -233,25 +268,16 @@ func dominate(graph *simple.DirectedGraph) {
if err != nil {
log.Fatalf("Could not encode graph: %s\n", err)
}
- fmt.Print(string(gv))
-
- // fmt.Printf("%v edges in the graph\n", graph.Edges().Len())
- // top := 0
- // for _, n := range dt.DominatedBy(0) {
- // fmt.Printf("%q is top-level\n", n.(*closure).Path)
- // top++
- // }
- // fmt.Printf("%v total top-level nodes\n", top)
- // root := dt.Root().(*closure)
- // fmt.Printf("dominator tree root is %q\n", root.Path)
- // fmt.Printf("%v nodes can reach to 1\n", nodes.Len())
+ ioutil.WriteFile("graph.dot", gv, 0644)
}
func main() {
- inputFile := flag.String("input", ".attrs.json", "Input file containing graph")
+ graphFile := flag.String("graph", ".attrs.json", "Input file containing graph")
+ popFile := flag.String("pop", "popularity.json", "Package popularity data")
flag.Parse()
- file, err := ioutil.ReadFile(*inputFile)
+ // Parse graph data
+ file, err := ioutil.ReadFile(*graphFile)
if err != nil {
log.Fatalf("Failed to load input: %s\n", err)
}
@@ -262,6 +288,20 @@ func main() {
log.Fatalf("Failed to deserialise input: %s\n", err)
}
- graph := buildGraph(&refs)
+ // Parse popularity data
+ popBytes, err := ioutil.ReadFile(*popFile)
+ if err != nil {
+ log.Fatalf("Failed to load input: %s\n", err)
+ }
+
+ var pop pkgsMetadata
+ err = json.Unmarshal(popBytes, &pop)
+ if err != nil {
+ log.Fatalf("Failed to deserialise input: %s\n", err)
+ }
+
+ log.Printf("%v\n", pop)
+
+ graph := buildGraph(&refs, &pop)
dominate(graph)
}
--
cgit 1.4.1
From 590ce994bb18b15c9b654c2aa426866750c3c494 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 11 Aug 2019 20:10:03 +0100
Subject: feat(group-layers): Add initial popcount scripts
This script generates an entry in a text file for each time a
derivation is referred to by another in nixpkgs.
For initial testing, this output can be turned into group-layers
compatible JSON with this *trivial* invocation:
cat output | awk '{ print "{\"" $2 "\":" $1 "}"}' | jq -s '. | add | with_entries(.key |= sub("/nix/store/[a-z0-9]+-";""))' > test-data.json
---
tools/nixery/group-layers/popcount | 13 +++++++++
tools/nixery/group-layers/popcount.nix | 51 ++++++++++++++++++++++++++++++++++
2 files changed, 64 insertions(+)
create mode 100755 tools/nixery/group-layers/popcount
create mode 100644 tools/nixery/group-layers/popcount.nix
(limited to 'tools')
diff --git a/tools/nixery/group-layers/popcount b/tools/nixery/group-layers/popcount
new file mode 100755
index 000000000000..83baf3045da7
--- /dev/null
+++ b/tools/nixery/group-layers/popcount
@@ -0,0 +1,13 @@
+#!/bin/bash
+set -ueo pipefail
+
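+# graphsFor builds the export-references graph for a single top-level package
+# (via popcount.nix) and prints each of its runtime references on its own
+# line. Failed or timed-out builds fall back to reading 'empty.json'.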
+function graphsFor() {
+ local pkg="${1}"
+ local graphs=$(nix-build --timeout 2 --argstr target "${pkg}" popcount.nix || echo -n 'empty.json')
+ cat $graphs | jq -r -cM '.[] | .references[]'
+}
+
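+# all-top-level.json is expected to contain a JSON list of all top-level
+# attribute names in nixpkgs; progress is reported on stderr.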
+for pkg in $(cat all-top-level.json | jq -r '.[]'); do
+ graphsFor "${pkg}" 2>/dev/null
+ echo "Printed refs for ${pkg}" >&2
+done
diff --git a/tools/nixery/group-layers/popcount.nix b/tools/nixery/group-layers/popcount.nix
new file mode 100644
index 000000000000..e21d7367724b
--- /dev/null
+++ b/tools/nixery/group-layers/popcount.nix
@@ -0,0 +1,51 @@
+{ pkgs ? import <nixpkgs> { config.allowUnfree = false; }
+, target }:
+
+let
+ inherit (pkgs) coreutils runCommand writeText;
+ inherit (builtins) replaceStrings readFile toFile fromJSON toJSON foldl' listToAttrs;
+
+ path = [ pkgs."${target}" ];
+
+  # graphJSON abuses a feature in Nix that makes structured runtime
+ # closure information available to builders. This data is imported
+ # back via IFD to process it for layering data.
+ graphJSON =
+ path:
+ runCommand "build-graph" {
+ __structuredAttrs = true;
+ exportReferencesGraph.graph = path;
+ PATH = "${coreutils}/bin";
+ builder = toFile "builder" ''
+ . .attrs.sh
+ cat .attrs.json > ''${outputs[out]}
+ '';
+ } "";
+
+ buildClosures = paths: (fromJSON (readFile (graphJSON paths)));
+
+ buildGraph = paths: listToAttrs (map (c: {
+ name = c.path;
+ value = {
+ inherit (c) closureSize references;
+ };
+ }) (buildClosures paths));
+
+  # Nix does not allow attribute set keys to refer to store paths, but
+ # we need them to for the purpose of the calculation. To work around
+ # it, the store path prefix is replaced with the string 'closure/'
+ # and later replaced again.
+ fromStorePath = replaceStrings [ "/nix/store" ] [ "closure/" ];
+ toStorePath = replaceStrings [ "closure/" ] [ "/nix/store/" ];
+
+ buildTree = paths:
+ let
+ graph = buildGraph paths;
+ top = listToAttrs (map (p: {
+ name = fromStorePath (toString p);
+ value = {};
+ }) paths);
+ in top;
+
+ outputJson = thing: writeText "the-thing.json" (builtins.toJSON thing);
+in outputJson (buildClosures path).graph
--
cgit 1.4.1
From 56a426952c5ce4e20b4673a0087e6a2c5604fdf5 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 12 Aug 2019 01:41:17 +0100
Subject: feat(group-layers): Finish layering algorithm implementation
This commit adds the actual logic for extracting layer groupings and
merging them until the layer budget is satisfied.
The implementation conforms to the design doc as of the time of this
commit.
---
tools/nixery/group-layers/group-layers.go | 161 +++++++++++++++++++-----------
1 file changed, 103 insertions(+), 58 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/group-layers/group-layers.go b/tools/nixery/group-layers/group-layers.go
index 7be89e4e6761..93f2e520ace9 100644
--- a/tools/nixery/group-layers/group-layers.go
+++ b/tools/nixery/group-layers/group-layers.go
@@ -65,12 +65,11 @@
//
// If the list of layers fits within the layer budget, it is returned.
//
-// Otherwise layers are merged together in this order:
+// Otherwise, a merge rating is calculated for each layer. This is the
+// product of the layer's total size and its root node's popularity.
//
-// * layers whose root meets neither condition above
-// * layers whose root is popular
-// * layers whose root is big
-// * layers whose root meets both conditions
+// Layers are then merged in ascending order of merge ratings until
+// they fit into the layer budget.
//
// # Threshold values
//
@@ -109,10 +108,10 @@ import (
"io/ioutil"
"log"
"regexp"
+ "sort"
- "gonum.org/v1/gonum/graph/simple"
"gonum.org/v1/gonum/graph/flow"
- "gonum.org/v1/gonum/graph/encoding/dot"
+ "gonum.org/v1/gonum/graph/simple"
)
// closureGraph represents the structured attributes Nix outputs when asking it
@@ -123,7 +122,7 @@ type exportReferences struct {
} `json:"exportReferencesGraph"`
Graph []struct {
- Size uint64 `json:"closureSize`
+ Size uint64 `json:"closureSize"`
Path string `json:"path"`
Refs []string `json:"references"`
} `json:"graph"`
@@ -136,14 +135,26 @@ type exportReferences struct {
// of the nixpkgs tree.
type pkgsMetadata = map[string]int
+// layer represents the data returned for each layer that Nix should
+// build for the container image.
+type layer struct {
+ Contents []string `json:"contents"`
+ mergeRating uint64
+}
+
+func (a layer) merge(b layer) layer {
+ a.Contents = append(a.Contents, b.Contents...)
+ a.mergeRating += b.mergeRating
+ return a
+}
+
// closure as pointed to by the graph nodes.
type closure struct {
- GraphID int64
- Path string
- Size uint64
- Refs []string
+ GraphID int64
+ Path string
+ Size uint64
+ Refs []string
Popularity int
- // TODO(tazjin): popularity and other funny business
}
func (c *closure) ID() int64 {
@@ -151,6 +162,7 @@ func (c *closure) ID() int64 {
}
var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)
+
func (c *closure) DOTID() string {
return nixRegexp.ReplaceAllString(c.Path, "")
}
@@ -158,29 +170,30 @@ func (c *closure) DOTID() string {
// bigOrPopular checks whether this closure should be considered for
// separation into its own layer, even if it would otherwise only
// appear in a subtree of the dominator tree.
-func (c *closure) bigOrPopular(pkgs *pkgsMetadata) bool {
+func (c *closure) bigOrPopular() bool {
const sizeThreshold = 100 * 1000000 // 100MB
if c.Size > sizeThreshold {
return true
}
- // TODO(tazjin): After generating the full data, this should
- // be changed to something other than a simple inclusion
- // (currently the test-data only contains the top 200
- // packages).
- pop, ok := (*pkgs)[c.DOTID()]
- if ok {
- log.Printf("%q is popular!\n", c.DOTID())
+ // The threshold value used here is currently roughly the
+ // minimum number of references that only 1% of packages in
+ // the entire package set have.
+ //
+ // TODO(tazjin): Do this more elegantly by calculating
+ // percentiles for each package and using those instead.
+ if c.Popularity >= 1000 {
+ return true
}
- c.Popularity = pop
- return ok
+
+ return false
}
-func insertEdges(graph *simple.DirectedGraph, pop *pkgsMetadata, cmap *map[string]*closure, node *closure) {
+func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *closure) {
// Big or popular nodes get a separate edge from the top to
// flag them for their own layer.
- if node.bigOrPopular(pop) && !graph.HasEdgeFromTo(0, node.ID()) {
+ if node.bigOrPopular() && !graph.HasEdgeFromTo(0, node.ID()) {
edge := graph.NewEdge(graph.Node(0), node)
graph.SetEdge(edge)
}
@@ -205,18 +218,24 @@ func buildGraph(refs *exportReferences, pop *pkgsMetadata) *simple.DirectedGraph
//
// A map from store paths to IDs is kept to actually insert
// edges below.
- root := &closure {
+ root := &closure{
GraphID: 0,
- Path: "image_root",
+ Path: "image_root",
}
graph.AddNode(root)
for idx, c := range refs.Graph {
- node := &closure {
+ node := &closure{
GraphID: int64(idx + 1), // inc because of root node
- Path: c.Path,
- Size: c.Size,
- Refs: c.Refs,
+ Path: c.Path,
+ Size: c.Size,
+ Refs: c.Refs,
+ }
+
+ if p, ok := (*pop)[node.DOTID()]; ok {
+ node.Popularity = p
+ } else {
+ node.Popularity = 1
}
graph.AddNode(node)
@@ -231,49 +250,74 @@ func buildGraph(refs *exportReferences, pop *pkgsMetadata) *simple.DirectedGraph
}
for _, c := range cmap {
- insertEdges(graph, pop, &cmap, c)
+ insertEdges(graph, &cmap, c)
}
- // gv, err := dot.Marshal(graph, "deps", "", "")
- // if err != nil {
- // log.Fatalf("Could not encode graph: %s\n", err)
- // }
- // fmt.Print(string(gv))
- // os.Exit(0)
-
return graph
}
+// Extracts a subgraph starting at the specified root from the
+// dominator tree. The subgraph is converted into a flat list of
+// layers, each containing the store paths and merge rating.
+func groupLayer(dt *flow.DominatorTree, root *closure) layer {
+ size := root.Size
+ contents := []string{root.Path}
+ children := dt.DominatedBy(root.ID())
+
+ // This iteration does not use 'range' because the list being
+ // iterated is modified during the iteration (yes, I'm sorry).
+ for i := 0; i < len(children); i++ {
+ child := children[i].(*closure)
+ size += child.Size
+ contents = append(contents, child.Path)
+ children = append(children, dt.DominatedBy(child.ID())...)
+ }
+
+ return layer{
+ Contents: contents,
+ // TODO(tazjin): The point of this is to factor in
+ // both the size and the popularity when making merge
+ // decisions, but there might be a smarter way to do
+ // it than a plain multiplication.
+ mergeRating: uint64(root.Popularity) * size,
+ }
+}
+
// Calculate the dominator tree of the entire package set and group
// each top-level subtree into a layer.
-func dominate(graph *simple.DirectedGraph) {
+//
+// Layers are merged together until they fit into the layer budget,
+// based on their merge rating.
+func dominate(budget int, graph *simple.DirectedGraph) []layer {
dt := flow.Dominators(graph.Node(0), graph)
- // convert dominator tree back into encodable graph
- dg := simple.NewDirectedGraph()
-
- for nodes := graph.Nodes(); nodes.Next(); {
- dg.AddNode(nodes.Node())
+ var layers []layer
+ for _, n := range dt.DominatedBy(dt.Root().ID()) {
+ layers = append(layers, groupLayer(&dt, n.(*closure)))
}
- for nodes := dg.Nodes(); nodes.Next(); {
- node := nodes.Node()
- for _, child := range dt.DominatedBy(node.ID()) {
- edge := dg.NewEdge(node, child)
- dg.SetEdge(edge)
- }
+ sort.Slice(layers, func(i, j int) bool {
+ return layers[i].mergeRating < layers[j].mergeRating
+ })
+
+ if len(layers) > budget {
+ log.Printf("Ideal image has %v layers, but budget is %v\n", len(layers), budget)
}
- gv, err := dot.Marshal(dg, "deps", "", "")
- if err != nil {
- log.Fatalf("Could not encode graph: %s\n", err)
+ for len(layers) > budget {
+ merged := layers[0].merge(layers[1])
+ layers[1] = merged
+ layers = layers[1:]
}
- ioutil.WriteFile("graph.dot", gv, 0644)
+
+ return layers
}
func main() {
graphFile := flag.String("graph", ".attrs.json", "Input file containing graph")
popFile := flag.String("pop", "popularity.json", "Package popularity data")
+ outFile := flag.String("out", "layers.json", "File to write layers to")
+ layerBudget := flag.Int("budget", 94, "Total layer budget available")
flag.Parse()
// Parse graph data
@@ -300,8 +344,9 @@ func main() {
log.Fatalf("Failed to deserialise input: %s\n", err)
}
- log.Printf("%v\n", pop)
-
graph := buildGraph(&refs, &pop)
- dominate(graph)
+ layers := dominate(*layerBudget, graph)
+
+ j, _ := json.Marshal(layers)
+ ioutil.WriteFile(*outFile, j, 0644)
}
--
cgit 1.4.1
From d699f7f91cb0b90f15199f6d6884442594cb6c56 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 12 Aug 2019 14:59:41 +0100
Subject: chore(build): Update Go dependencies & add gonum
---
tools/nixery/go-deps.nix | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/go-deps.nix b/tools/nixery/go-deps.nix
index 57d9ecd64931..ebd1576db5dd 100644
--- a/tools/nixery/go-deps.nix
+++ b/tools/nixery/go-deps.nix
@@ -5,8 +5,8 @@
fetch = {
type = "git";
url = "https://code.googlesource.com/gocloud";
- rev = "edd0968ab5054ee810843a77774d81069989494b";
- sha256 = "1mh8i72h6a1z9lp4cy9bwa2j87bm905zcsvmqwskdqi8z58cif4a";
+ rev = "77f6a3a292a7dbf66a5329de0d06326f1906b450";
+ sha256 = "1c9pkx782nbcp8jnl5lprcbzf97van789ky5qsncjgywjyymhigi";
};
}
{
@@ -68,8 +68,8 @@
fetch = {
type = "git";
url = "https://go.googlesource.com/sys";
- rev = "fae7ac547cb717d141c433a2a173315e216b64c4";
- sha256 = "11pl0dycm5d8ar7g1l1w5q2cx0lms8i15n8mxhilhkdd2xpmh8f0";
+ rev = "51ab0e2deafac1f46c46ad59cf0921be2f180c3d";
+ sha256 = "0xdhpckbql3bsqkpc2k5b1cpnq3q1qjqjjq2j3p707rfwb8nm91a";
};
}
{
@@ -81,6 +81,15 @@
sha256 = "0flv9idw0jm5nm8lx25xqanbkqgfiym6619w575p7nrdh0riqwqh";
};
}
+ {
+ goPackagePath = "gonum.org/v1/gonum";
+ fetch = {
+ type = "git";
+ url = "https://github.com/gonum/gonum";
+ rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
+ sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
+ };
+ }
{
goPackagePath = "google.golang.org/api";
fetch = {
--
cgit 1.4.1
From 1fa93fe6f640bfdbb5e9ecedb2dbf2cacc5e8945 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 12 Aug 2019 15:14:24 +0100
Subject: refactor: Move registry server to a subfolder
---
tools/nixery/default.nix | 22 +-
tools/nixery/go-deps.nix | 120 ----------
tools/nixery/main.go | 492 ----------------------------------------
tools/nixery/server/default.nix | 16 ++
tools/nixery/server/go-deps.nix | 111 +++++++++
tools/nixery/server/main.go | 492 ++++++++++++++++++++++++++++++++++++++++
6 files changed, 621 insertions(+), 632 deletions(-)
delete mode 100644 tools/nixery/go-deps.nix
delete mode 100644 tools/nixery/main.go
create mode 100644 tools/nixery/server/default.nix
create mode 100644 tools/nixery/server/go-deps.nix
create mode 100644 tools/nixery/server/main.go
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 092c76e9c5b9..dee5713c64af 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -23,29 +23,11 @@ rec {
# Users will usually not want to use this directly, instead see the
# 'nixery' derivation below, which automatically includes runtime
# data dependencies.
- nixery-server = buildGoPackage {
- name = "nixery-server";
-
- # Technically people should not be building Nixery through 'go get'
- # or similar (as other required files will not be included), but
- # buildGoPackage requires a package path.
- goPackagePath = "github.com/google/nixery";
- goDeps = ./go-deps.nix;
- src = ./.;
-
- meta = {
- description = "Container image build serving Nix-backed images";
- homepage = "https://github.com/google/nixery";
- license = lib.licenses.asl20;
- maintainers = [ lib.maintainers.tazjin ];
- };
- };
+ nixery-server = callPackage ./server {};
# Nix expression (unimported!) which is used by Nixery to build
# container images.
- nixery-builder = runCommand "build-registry-image.nix" {} ''
- cat ${./build-registry-image.nix} > $out
- '';
+ nixery-builder = ./build-registry-image.nix;
# nixpkgs currently has an old version of mdBook. A new version is
# built here, but eventually the update will be upstreamed
diff --git a/tools/nixery/go-deps.nix b/tools/nixery/go-deps.nix
deleted file mode 100644
index ebd1576db5dd..000000000000
--- a/tools/nixery/go-deps.nix
+++ /dev/null
@@ -1,120 +0,0 @@
-# This file was generated by https://github.com/kamilchm/go2nix v1.3.0
-[
- {
- goPackagePath = "cloud.google.com/go";
- fetch = {
- type = "git";
- url = "https://code.googlesource.com/gocloud";
- rev = "77f6a3a292a7dbf66a5329de0d06326f1906b450";
- sha256 = "1c9pkx782nbcp8jnl5lprcbzf97van789ky5qsncjgywjyymhigi";
- };
- }
- {
- goPackagePath = "github.com/golang/protobuf";
- fetch = {
- type = "git";
- url = "https://github.com/golang/protobuf";
- rev = "6c65a5562fc06764971b7c5d05c76c75e84bdbf7";
- sha256 = "1k1wb4zr0qbwgpvz9q5ws9zhlal8hq7dmq62pwxxriksayl6hzym";
- };
- }
- {
- goPackagePath = "github.com/googleapis/gax-go";
- fetch = {
- type = "git";
- url = "https://github.com/googleapis/gax-go";
- rev = "bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2";
- sha256 = "1lxawwngv6miaqd25s3ba0didfzylbwisd2nz7r4gmbmin6jsjrx";
- };
- }
- {
- goPackagePath = "github.com/hashicorp/golang-lru";
- fetch = {
- type = "git";
- url = "https://github.com/hashicorp/golang-lru";
- rev = "59383c442f7d7b190497e9bb8fc17a48d06cd03f";
- sha256 = "0yzwl592aa32vfy73pl7wdc21855w17zssrp85ckw2nisky8rg9c";
- };
- }
- {
- goPackagePath = "go.opencensus.io";
- fetch = {
- type = "git";
- url = "https://github.com/census-instrumentation/opencensus-go";
- rev = "b4a14686f0a98096416fe1b4cb848e384fb2b22b";
- sha256 = "1aidyp301v5ngwsnnc8v1s09vvbsnch1jc4vd615f7qv77r9s7dn";
- };
- }
- {
- goPackagePath = "golang.org/x/net";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/net";
- rev = "da137c7871d730100384dbcf36e6f8fa493aef5b";
- sha256 = "1qsiyr3irmb6ii06hivm9p2c7wqyxczms1a9v1ss5698yjr3fg47";
- };
- }
- {
- goPackagePath = "golang.org/x/oauth2";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/oauth2";
- rev = "0f29369cfe4552d0e4bcddc57cc75f4d7e672a33";
- sha256 = "06jwpvx0x2gjn2y959drbcir5kd7vg87k0r1216abk6rrdzzrzi2";
- };
- }
- {
- goPackagePath = "golang.org/x/sys";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/sys";
- rev = "51ab0e2deafac1f46c46ad59cf0921be2f180c3d";
- sha256 = "0xdhpckbql3bsqkpc2k5b1cpnq3q1qjqjjq2j3p707rfwb8nm91a";
- };
- }
- {
- goPackagePath = "golang.org/x/text";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/text";
- rev = "342b2e1fbaa52c93f31447ad2c6abc048c63e475";
- sha256 = "0flv9idw0jm5nm8lx25xqanbkqgfiym6619w575p7nrdh0riqwqh";
- };
- }
- {
- goPackagePath = "gonum.org/v1/gonum";
- fetch = {
- type = "git";
- url = "https://github.com/gonum/gonum";
- rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
- sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
- };
- }
- {
- goPackagePath = "google.golang.org/api";
- fetch = {
- type = "git";
- url = "https://code.googlesource.com/google-api-go-client";
- rev = "069bea57b1be6ad0671a49ea7a1128025a22b73f";
- sha256 = "19q2b610lkf3z3y9hn6rf11dd78xr9q4340mdyri7kbijlj2r44q";
- };
- }
- {
- goPackagePath = "google.golang.org/genproto";
- fetch = {
- type = "git";
- url = "https://github.com/google/go-genproto";
- rev = "c506a9f9061087022822e8da603a52fc387115a8";
- sha256 = "03hh80aqi58dqi5ykj4shk3chwkzrgq2f3k6qs5qhgvmcy79y2py";
- };
- }
- {
- goPackagePath = "google.golang.org/grpc";
- fetch = {
- type = "git";
- url = "https://github.com/grpc/grpc-go";
- rev = "977142214c45640483838b8672a43c46f89f90cb";
- sha256 = "05wig23l2sil3bfdv19gq62sya7hsabqj9l8pzr1sm57qsvj218d";
- };
- }
-]
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
deleted file mode 100644
index d20ede2eb587..000000000000
--- a/tools/nixery/main.go
+++ /dev/null
@@ -1,492 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-
-// Package main provides the implementation of a container registry that
-// transparently builds container images based on Nix derivations.
-//
-// The Nix derivation used for image creation is responsible for creating
-// objects that are compatible with the registry API. The targeted registry
-// protocol is currently Docker's.
-//
-// When an image is requested, the required contents are parsed out of the
-// request and a Nix-build is initiated that eventually responds with the
-// manifest as well as information linking each layer digest to a local
-// filesystem path.
-package main
-
-import (
- "bytes"
- "context"
- "encoding/json"
- "fmt"
- "io"
- "io/ioutil"
- "log"
- "net/http"
- "os"
- "os/exec"
- "regexp"
- "strings"
- "time"
-
- "cloud.google.com/go/storage"
-)
-
-// pkgSource represents the source from which the Nix package set used
-// by Nixery is imported. Users configure the source by setting one of
-// the supported environment variables.
-type pkgSource struct {
- srcType string
- args string
-}
-
-// Convert the package source into the representation required by Nix.
-func (p *pkgSource) renderSource(tag string) string {
- // The 'git' source requires a tag to be present.
- if p.srcType == "git" {
- if tag == "latest" || tag == "" {
- tag = "master"
- }
-
- return fmt.Sprintf("git!%s!%s", p.args, tag)
- }
-
- return fmt.Sprintf("%s!%s", p.srcType, p.args)
-}
-
-// Retrieve a package source from the environment. If no source is
-// specified, the Nix code will default to a recent NixOS channel.
-func pkgSourceFromEnv() *pkgSource {
- if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
- log.Printf("Using Nix package set from Nix channel %q\n", channel)
- return &pkgSource{
- srcType: "nixpkgs",
- args: channel,
- }
- }
-
- if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
- log.Printf("Using Nix package set from git repository at %q\n", git)
- return &pkgSource{
- srcType: "git",
- args: git,
- }
- }
-
- if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
- log.Printf("Using Nix package set from path %q\n", path)
- return &pkgSource{
- srcType: "path",
- args: path,
- }
- }
-
- return nil
-}
-
-// Load (optional) GCS bucket signing data from the GCS_SIGNING_KEY and
-// GCS_SIGNING_ACCOUNT envvars.
-func signingOptsFromEnv() *storage.SignedURLOptions {
- path := os.Getenv("GCS_SIGNING_KEY")
- id := os.Getenv("GCS_SIGNING_ACCOUNT")
-
- if path == "" || id == "" {
- log.Println("GCS URL signing disabled")
- return nil
- }
-
- log.Printf("GCS URL signing enabled with account %q\n", id)
- k, err := ioutil.ReadFile(path)
- if err != nil {
- log.Fatalf("Failed to read GCS signing key: %s\n", err)
- }
-
- return &storage.SignedURLOptions{
- GoogleAccessID: id,
- PrivateKey: k,
- Method: "GET",
- }
-}
-
-// config holds the Nixery configuration options.
-type config struct {
- bucket string // GCS bucket to cache & serve layers
- signing *storage.SignedURLOptions // Signing options to use for GCS URLs
- builder string // Nix derivation for building images
- port string // Port on which to launch HTTP server
- pkgs *pkgSource // Source for Nix package set
-}
-
-// ManifestMediaType is the Content-Type used for the manifest itself. This
-// corresponds to the "Image Manifest V2, Schema 2" described on this page:
-//
-// https://docs.docker.com/registry/spec/manifest-v2-2/
-const manifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
-
-// Image represents the information necessary for building a container image.
-// This can be either a list of package names (corresponding to keys in the
-// nixpkgs set) or a Nix expression that results in a *list* of derivations.
-type image struct {
- name string
- tag string
-
- // Names of packages to include in the image. These must correspond
- // directly to top-level names of Nix packages in the nixpkgs tree.
- packages []string
-}
-
-// BuildResult represents the output of calling the Nix derivation responsible
-// for building registry images.
-//
-// The `layerLocations` field contains the local filesystem paths to each
-// individual image layer that will need to be served, while the `manifest`
-// field contains the JSON-representation of the manifest that needs to be
-// served to the client.
-//
-// The later field is simply treated as opaque JSON and passed through.
-type BuildResult struct {
- Error string `json:"error"`
- Pkgs []string `json:"pkgs"`
-
- Manifest json.RawMessage `json:"manifest"`
- LayerLocations map[string]struct {
- Path string `json:"path"`
- Md5 []byte `json:"md5"`
- } `json:"layerLocations"`
-}
-
-// imageFromName parses an image name into the corresponding structure which can
-// be used to invoke Nix.
-//
-// It will expand convenience names under the hood (see the `convenienceNames`
-// function below).
-func imageFromName(name string, tag string) image {
- packages := strings.Split(name, "/")
- return image{
- name: name,
- tag: tag,
- packages: convenienceNames(packages),
- }
-}
-
-// convenienceNames expands convenience package names defined by Nixery which
-// let users include commonly required sets of tools in a container quickly.
-//
-// Convenience names must be specified as the first package in an image.
-//
-// Currently defined convenience names are:
-//
-// * `shell`: Includes bash, coreutils and other common command-line tools
-// * `builder`: All of the above and the standard build environment
-func convenienceNames(packages []string) []string {
- shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
-
- if packages[0] == "shell" {
- return append(packages[1:], shellPackages...)
- }
-
- return packages
-}
-
-// Call out to Nix and request that an image be built. Nix will, upon success,
-// return a manifest for the container image.
-func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage.BucketHandle) (*BuildResult, error) {
- packages, err := json.Marshal(image.packages)
- if err != nil {
- return nil, err
- }
-
- args := []string{
- "--no-out-link",
- "--show-trace",
- "--argstr", "name", image.name,
- "--argstr", "packages", string(packages), cfg.builder,
- }
-
- if cfg.pkgs != nil {
- args = append(args, "--argstr", "pkgSource", cfg.pkgs.renderSource(image.tag))
- }
- cmd := exec.Command("nix-build", args...)
-
- outpipe, err := cmd.StdoutPipe()
- if err != nil {
- return nil, err
- }
-
- errpipe, err := cmd.StderrPipe()
- if err != nil {
- return nil, err
- }
-
- if err = cmd.Start(); err != nil {
- log.Println("Error starting nix-build:", err)
- return nil, err
- }
- log.Printf("Started Nix image build for '%s'", image.name)
-
- stdout, _ := ioutil.ReadAll(outpipe)
- stderr, _ := ioutil.ReadAll(errpipe)
-
- if err = cmd.Wait(); err != nil {
- // TODO(tazjin): Propagate errors upwards in a usable format.
- log.Printf("nix-build execution error: %s\nstdout: %s\nstderr: %s\n", err, stdout, stderr)
- return nil, err
- }
-
- log.Println("Finished Nix image build")
-
- buildOutput, err := ioutil.ReadFile(strings.TrimSpace(string(stdout)))
- if err != nil {
- return nil, err
- }
-
- // The build output returned by Nix is deserialised to add all
- // contained layers to the bucket. Only the manifest itself is
- // re-serialised to JSON and returned.
- var result BuildResult
- err = json.Unmarshal(buildOutput, &result)
- if err != nil {
- return nil, err
- }
-
- for layer, meta := range result.LayerLocations {
- err = uploadLayer(ctx, bucket, layer, meta.Path, meta.Md5)
- if err != nil {
- return nil, err
- }
- }
-
- return &result, nil
-}
-
-// uploadLayer uploads a single layer to Cloud Storage bucket. Before writing
-// any data the bucket is probed to see if the file already exists.
-//
-// If the file does exist, its MD5 hash is verified to ensure that the stored
-// file is not - for example - a fragment of a previous, incomplete upload.
-func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer string, path string, md5 []byte) error {
- layerKey := fmt.Sprintf("layers/%s", layer)
- obj := bucket.Object(layerKey)
-
- // Before uploading a layer to the bucket, probe whether it already
- // exists.
- //
- // If it does and the MD5 checksum matches the expected one, the layer
- // upload can be skipped.
- attrs, err := obj.Attrs(*ctx)
-
- if err == nil && bytes.Equal(attrs.MD5, md5) {
- log.Printf("Layer sha256:%s already exists in bucket, skipping upload", layer)
- } else {
- writer := obj.NewWriter(*ctx)
- file, err := os.Open(path)
-
- if err != nil {
- return fmt.Errorf("failed to open layer %s from path %s: %v", layer, path, err)
- }
-
- size, err := io.Copy(writer, file)
- if err != nil {
- return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
- }
-
- if err = writer.Close(); err != nil {
- return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
- }
-
- log.Printf("Uploaded layer sha256:%s (%v bytes written)\n", layer, size)
- }
-
- return nil
-}
-
-// layerRedirect constructs the public URL of the layer object in the Cloud
-// Storage bucket, signs it and redirects the user there.
-//
-// Signing the URL allows unauthenticated clients to retrieve objects from the
-// bucket.
-//
-// The Docker client is known to follow redirects, but this might not be true
-// for all other registry clients.
-func constructLayerUrl(cfg *config, digest string) (string, error) {
- log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, cfg.bucket)
- object := "layers/" + digest
-
- if cfg.signing != nil {
- opts := *cfg.signing
- opts.Expires = time.Now().Add(5 * time.Minute)
- return storage.SignedURL(cfg.bucket, object, &opts)
- } else {
- return ("https://storage.googleapis.com/" + cfg.bucket + "/" + object), nil
- }
-}
-
-// prepareBucket configures the handle to a Cloud Storage bucket in which
-// individual layers will be stored after Nix builds. Nixery does not directly
-// serve layers to registry clients, instead it redirects them to the public
-// URLs of the Cloud Storage bucket.
-//
-// The bucket is required for Nixery to function correctly, hence fatal errors
-// are generated in case it fails to be set up correctly.
-func prepareBucket(ctx *context.Context, cfg *config) *storage.BucketHandle {
- client, err := storage.NewClient(*ctx)
- if err != nil {
- log.Fatalln("Failed to set up Cloud Storage client:", err)
- }
-
- bkt := client.Bucket(cfg.bucket)
-
- if _, err := bkt.Attrs(*ctx); err != nil {
- log.Fatalln("Could not access configured bucket", err)
- }
-
- return bkt
-}
-
-// Regexes matching the V2 Registry API routes. This only includes the
-// routes required for serving images, since pushing and other such
-// functionality is not available.
-var (
- manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
- layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
-)
-
-// Error format corresponding to the registry protocol V2 specification. This
-// allows feeding back errors to clients in a way that can be presented to
-// users.
-type registryError struct {
- Code string `json:"code"`
- Message string `json:"message"`
-}
-
-type registryErrors struct {
- Errors []registryError `json:"errors"`
-}
-
-func writeError(w http.ResponseWriter, status int, code, message string) {
- err := registryErrors{
- Errors: []registryError{
- {code, message},
- },
- }
- json, _ := json.Marshal(err)
-
- w.WriteHeader(status)
- w.Header().Add("Content-Type", "application/json")
- w.Write(json)
-}
-
-type registryHandler struct {
- cfg *config
- ctx *context.Context
- bucket *storage.BucketHandle
-}
-
-func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- // Acknowledge that we speak V2 with an empty response
- if r.RequestURI == "/v2/" {
- return
- }
-
- // Serve the manifest (straight from Nix)
- manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
- if len(manifestMatches) == 3 {
- imageName := manifestMatches[1]
- imageTag := manifestMatches[2]
- log.Printf("Requesting manifest for image %q at tag %q", imageName, imageTag)
- image := imageFromName(imageName, imageTag)
- buildResult, err := buildImage(h.ctx, h.cfg, &image, h.bucket)
-
- if err != nil {
- writeError(w, 500, "UNKNOWN", "image build failure")
- log.Println("Failed to build image manifest", err)
- return
- }
-
- // Some error types have special handling, which is applied
- // here.
- if buildResult.Error == "not_found" {
- s := fmt.Sprintf("Could not find Nix packages: %v", buildResult.Pkgs)
- writeError(w, 404, "MANIFEST_UNKNOWN", s)
- log.Println(s)
- return
- }
-
- // This marshaling error is ignored because we know that this
- // field represents valid JSON data.
- manifest, _ := json.Marshal(buildResult.Manifest)
- w.Header().Add("Content-Type", manifestMediaType)
- w.Write(manifest)
- return
- }
-
- // Serve an image layer. For this we need to first ask Nix for
- // the manifest, then proceed to extract the correct layer from
- // it.
- layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
- if len(layerMatches) == 3 {
- digest := layerMatches[2]
- url, err := constructLayerUrl(h.cfg, digest)
-
- if err != nil {
- log.Printf("Failed to sign GCS URL: %s\n", err)
- writeError(w, 500, "UNKNOWN", "could not serve layer")
- return
- }
-
- w.Header().Set("Location", url)
- w.WriteHeader(303)
- return
- }
-
- log.Printf("Unsupported registry route: %s\n", r.RequestURI)
- w.WriteHeader(404)
-}
-
-func getConfig(key, desc string) string {
- value := os.Getenv(key)
- if value == "" {
- log.Fatalln(desc + " must be specified")
- }
-
- return value
-}
-
-func main() {
- cfg := &config{
- bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
- builder: getConfig("NIX_BUILDER", "Nix image builder code"),
- port: getConfig("PORT", "HTTP port"),
- pkgs: pkgSourceFromEnv(),
- signing: signingOptsFromEnv(),
- }
-
- ctx := context.Background()
- bucket := prepareBucket(&ctx, cfg)
-
- log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.port)
-
- // All /v2/ requests belong to the registry handler.
- http.Handle("/v2/", &registryHandler{
- cfg: cfg,
- ctx: &ctx,
- bucket: bucket,
- })
-
- // All other roots are served by the static file server.
- webDir := http.Dir(getConfig("WEB_DIR", "Static web file dir"))
- http.Handle("/", http.FileServer(webDir))
-
- log.Fatal(http.ListenAndServe(":"+cfg.port, nil))
-}
diff --git a/tools/nixery/server/default.nix b/tools/nixery/server/default.nix
new file mode 100644
index 000000000000..394d2b27b442
--- /dev/null
+++ b/tools/nixery/server/default.nix
@@ -0,0 +1,16 @@
+{ buildGoPackage, lib }:
+
+buildGoPackage {
+ name = "nixery-server";
+ goDeps = ./go-deps.nix;
+ src = ./.;
+
+ goPackagePath = "github.com/google/nixery";
+
+ meta = {
+ description = "Container image builder serving Nix-backed images";
+ homepage = "https://github.com/google/nixery";
+ license = lib.licenses.asl20;
+ maintainers = [ lib.maintainers.tazjin ];
+ };
+}
diff --git a/tools/nixery/server/go-deps.nix b/tools/nixery/server/go-deps.nix
new file mode 100644
index 000000000000..a223ef0a7021
--- /dev/null
+++ b/tools/nixery/server/go-deps.nix
@@ -0,0 +1,111 @@
+# This file was generated by https://github.com/kamilchm/go2nix v1.3.0
+[
+ {
+ goPackagePath = "cloud.google.com/go";
+ fetch = {
+ type = "git";
+ url = "https://code.googlesource.com/gocloud";
+ rev = "77f6a3a292a7dbf66a5329de0d06326f1906b450";
+ sha256 = "1c9pkx782nbcp8jnl5lprcbzf97van789ky5qsncjgywjyymhigi";
+ };
+ }
+ {
+ goPackagePath = "github.com/golang/protobuf";
+ fetch = {
+ type = "git";
+ url = "https://github.com/golang/protobuf";
+ rev = "6c65a5562fc06764971b7c5d05c76c75e84bdbf7";
+ sha256 = "1k1wb4zr0qbwgpvz9q5ws9zhlal8hq7dmq62pwxxriksayl6hzym";
+ };
+ }
+ {
+ goPackagePath = "github.com/googleapis/gax-go";
+ fetch = {
+ type = "git";
+ url = "https://github.com/googleapis/gax-go";
+ rev = "bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2";
+ sha256 = "1lxawwngv6miaqd25s3ba0didfzylbwisd2nz7r4gmbmin6jsjrx";
+ };
+ }
+ {
+ goPackagePath = "github.com/hashicorp/golang-lru";
+ fetch = {
+ type = "git";
+ url = "https://github.com/hashicorp/golang-lru";
+ rev = "59383c442f7d7b190497e9bb8fc17a48d06cd03f";
+ sha256 = "0yzwl592aa32vfy73pl7wdc21855w17zssrp85ckw2nisky8rg9c";
+ };
+ }
+ {
+ goPackagePath = "go.opencensus.io";
+ fetch = {
+ type = "git";
+ url = "https://github.com/census-instrumentation/opencensus-go";
+ rev = "b4a14686f0a98096416fe1b4cb848e384fb2b22b";
+ sha256 = "1aidyp301v5ngwsnnc8v1s09vvbsnch1jc4vd615f7qv77r9s7dn";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/net";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/net";
+ rev = "da137c7871d730100384dbcf36e6f8fa493aef5b";
+ sha256 = "1qsiyr3irmb6ii06hivm9p2c7wqyxczms1a9v1ss5698yjr3fg47";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/oauth2";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/oauth2";
+ rev = "0f29369cfe4552d0e4bcddc57cc75f4d7e672a33";
+ sha256 = "06jwpvx0x2gjn2y959drbcir5kd7vg87k0r1216abk6rrdzzrzi2";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/sys";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/sys";
+ rev = "51ab0e2deafac1f46c46ad59cf0921be2f180c3d";
+ sha256 = "0xdhpckbql3bsqkpc2k5b1cpnq3q1qjqjjq2j3p707rfwb8nm91a";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/text";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/text";
+ rev = "342b2e1fbaa52c93f31447ad2c6abc048c63e475";
+ sha256 = "0flv9idw0jm5nm8lx25xqanbkqgfiym6619w575p7nrdh0riqwqh";
+ };
+ }
+ {
+ goPackagePath = "google.golang.org/api";
+ fetch = {
+ type = "git";
+ url = "https://code.googlesource.com/google-api-go-client";
+ rev = "069bea57b1be6ad0671a49ea7a1128025a22b73f";
+ sha256 = "19q2b610lkf3z3y9hn6rf11dd78xr9q4340mdyri7kbijlj2r44q";
+ };
+ }
+ {
+ goPackagePath = "google.golang.org/genproto";
+ fetch = {
+ type = "git";
+ url = "https://github.com/google/go-genproto";
+ rev = "c506a9f9061087022822e8da603a52fc387115a8";
+ sha256 = "03hh80aqi58dqi5ykj4shk3chwkzrgq2f3k6qs5qhgvmcy79y2py";
+ };
+ }
+ {
+ goPackagePath = "google.golang.org/grpc";
+ fetch = {
+ type = "git";
+ url = "https://github.com/grpc/grpc-go";
+ rev = "977142214c45640483838b8672a43c46f89f90cb";
+ sha256 = "05wig23l2sil3bfdv19gq62sya7hsabqj9l8pzr1sm57qsvj218d";
+ };
+ }
+]
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
new file mode 100644
index 000000000000..d20ede2eb587
--- /dev/null
+++ b/tools/nixery/server/main.go
@@ -0,0 +1,492 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// Package main provides the implementation of a container registry that
+// transparently builds container images based on Nix derivations.
+//
+// The Nix derivation used for image creation is responsible for creating
+// objects that are compatible with the registry API. The targeted registry
+// protocol is currently Docker's.
+//
+// When an image is requested, the required contents are parsed out of the
+// request and a Nix-build is initiated that eventually responds with the
+// manifest as well as information linking each layer digest to a local
+// filesystem path.
+package main
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "log"
+ "net/http"
+ "os"
+ "os/exec"
+ "regexp"
+ "strings"
+ "time"
+
+ "cloud.google.com/go/storage"
+)
+
+// pkgSource represents the source from which the Nix package set used
+// by Nixery is imported. Users configure the source by setting one of
+// the supported environment variables.
+type pkgSource struct {
+ srcType string
+ args string
+}
+
+// Convert the package source into the representation required by Nix.
+func (p *pkgSource) renderSource(tag string) string {
+ // The 'git' source requires a tag to be present.
+ if p.srcType == "git" {
+ if tag == "latest" || tag == "" {
+ tag = "master"
+ }
+
+ return fmt.Sprintf("git!%s!%s", p.args, tag)
+ }
+
+ return fmt.Sprintf("%s!%s", p.srcType, p.args)
+}
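+
+// As an illustration (added annotation, not part of the original source),
+// renderSource produces strings such as:
+//
+//   (&pkgSource{srcType: "nixpkgs", args: "nixos-19.03"}).renderSource("latest")
+//     => "nixpkgs!nixos-19.03"
+//   (&pkgSource{srcType: "git", args: "https://github.com/NixOS/nixpkgs"}).renderSource("latest")
+//     => "git!https://github.com/NixOS/nixpkgs!master"
+//   (&pkgSource{srcType: "path", args: "/var/local/nixpkgs"}).renderSource("v1")
+//     => "path!/var/local/nixpkgs"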
+
+// Retrieve a package source from the environment. If no source is
+// specified, the Nix code will default to a recent NixOS channel.
+func pkgSourceFromEnv() *pkgSource {
+ if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
+ log.Printf("Using Nix package set from Nix channel %q\n", channel)
+ return &pkgSource{
+ srcType: "nixpkgs",
+ args: channel,
+ }
+ }
+
+ if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
+ log.Printf("Using Nix package set from git repository at %q\n", git)
+ return &pkgSource{
+ srcType: "git",
+ args: git,
+ }
+ }
+
+ if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
+ log.Printf("Using Nix package set from path %q\n", path)
+ return &pkgSource{
+ srcType: "path",
+ args: path,
+ }
+ }
+
+ return nil
+}
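+
+// For illustration (added annotation, not part of the original source):
+// setting NIXERY_CHANNEL=nixos-19.03 selects the channel source, while
+// NIXERY_PKGS_REPO=https://github.com/NixOS/nixpkgs selects the git source
+// and NIXERY_PKGS_PATH=/var/local/nixpkgs selects a local checkout. If none
+// of these are set, the Nix builder falls back to its default channel.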
+
+// Load (optional) GCS bucket signing data from the GCS_SIGNING_KEY and
+// GCS_SIGNING_ACCOUNT envvars.
+func signingOptsFromEnv() *storage.SignedURLOptions {
+ path := os.Getenv("GCS_SIGNING_KEY")
+ id := os.Getenv("GCS_SIGNING_ACCOUNT")
+
+ if path == "" || id == "" {
+ log.Println("GCS URL signing disabled")
+ return nil
+ }
+
+ log.Printf("GCS URL signing enabled with account %q\n", id)
+ k, err := ioutil.ReadFile(path)
+ if err != nil {
+ log.Fatalf("Failed to read GCS signing key: %s\n", err)
+ }
+
+ return &storage.SignedURLOptions{
+ GoogleAccessID: id,
+ PrivateKey: k,
+ Method: "GET",
+ }
+}
+
+// config holds the Nixery configuration options.
+type config struct {
+ bucket string // GCS bucket to cache & serve layers
+ signing *storage.SignedURLOptions // Signing options to use for GCS URLs
+ builder string // Nix derivation for building images
+ port string // Port on which to launch HTTP server
+ pkgs *pkgSource // Source for Nix package set
+}
+
+// manifestMediaType is the Content-Type used for the manifest itself. This
+// corresponds to the "Image Manifest V2, Schema 2" described on this page:
+//
+// https://docs.docker.com/registry/spec/manifest-v2-2/
+const manifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
+
+// image represents the information necessary for building a container image.
+// This can be either a list of package names (corresponding to keys in the
+// nixpkgs set) or a Nix expression that results in a *list* of derivations.
+type image struct {
+ name string
+ tag string
+
+ // Names of packages to include in the image. These must correspond
+ // directly to top-level names of Nix packages in the nixpkgs tree.
+ packages []string
+}
+
+// BuildResult represents the output of calling the Nix derivation responsible
+// for building registry images.
+//
+// The `layerLocations` field contains the local filesystem paths to each
+// individual image layer that will need to be served, while the `manifest`
+// field contains the JSON-representation of the manifest that needs to be
+// served to the client.
+//
+// The latter field is simply treated as opaque JSON and passed through.
+type BuildResult struct {
+ Error string `json:"error"`
+ Pkgs []string `json:"pkgs"`
+
+ Manifest json.RawMessage `json:"manifest"`
+ LayerLocations map[string]struct {
+ Path string `json:"path"`
+ Md5 []byte `json:"md5"`
+ } `json:"layerLocations"`
+}
+
+// imageFromName parses an image name into the corresponding structure which can
+// be used to invoke Nix.
+//
+// It will expand convenience names under the hood (see the `convenienceNames`
+// function below).
+func imageFromName(name string, tag string) image {
+ packages := strings.Split(name, "/")
+ return image{
+ name: name,
+ tag: tag,
+ packages: convenienceNames(packages),
+ }
+}
+
+// convenienceNames expands convenience package names defined by Nixery which
+// let users include commonly required sets of tools in a container quickly.
+//
+// Convenience names must be specified as the first package in an image.
+//
+// Currently defined convenience names are:
+//
+// * `shell`: Includes bash, coreutils and other common command-line tools
+// * `builder`: All of the above and the standard build environment
+func convenienceNames(packages []string) []string {
+ shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
+
+ if packages[0] == "shell" {
+ return append(packages[1:], shellPackages...)
+ }
+
+ return packages
+}
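+
+// As a worked example (added annotation, not part of the original source),
+// requesting the image name "shell/curl" with the tag "latest" yields:
+//
+//   imageFromName("shell/curl", "latest")
+//     => image{
+//          name:     "shell/curl",
+//          tag:      "latest",
+//          packages: []string{"curl", "bashInteractive", "coreutils", "moreutils", "nano"},
+//        }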
+
+// buildImage calls out to Nix and requests that an image be built. Upon
+// success, Nix returns a manifest for the container image.
+func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage.BucketHandle) (*BuildResult, error) {
+ packages, err := json.Marshal(image.packages)
+ if err != nil {
+ return nil, err
+ }
+
+ args := []string{
+ "--no-out-link",
+ "--show-trace",
+ "--argstr", "name", image.name,
+ "--argstr", "packages", string(packages), cfg.builder,
+ }
+
+ if cfg.pkgs != nil {
+ args = append(args, "--argstr", "pkgSource", cfg.pkgs.renderSource(image.tag))
+ }
+ cmd := exec.Command("nix-build", args...)
+
+ outpipe, err := cmd.StdoutPipe()
+ if err != nil {
+ return nil, err
+ }
+
+ errpipe, err := cmd.StderrPipe()
+ if err != nil {
+ return nil, err
+ }
+
+ if err = cmd.Start(); err != nil {
+ log.Println("Error starting nix-build:", err)
+ return nil, err
+ }
+ log.Printf("Started Nix image build for '%s'", image.name)
+
+ stdout, _ := ioutil.ReadAll(outpipe)
+ stderr, _ := ioutil.ReadAll(errpipe)
+
+ if err = cmd.Wait(); err != nil {
+ // TODO(tazjin): Propagate errors upwards in a usable format.
+ log.Printf("nix-build execution error: %s\nstdout: %s\nstderr: %s\n", err, stdout, stderr)
+ return nil, err
+ }
+
+ log.Println("Finished Nix image build")
+
+ buildOutput, err := ioutil.ReadFile(strings.TrimSpace(string(stdout)))
+ if err != nil {
+ return nil, err
+ }
+
+ // The build output returned by Nix is deserialised to add all
+ // contained layers to the bucket. Only the manifest itself is
+ // re-serialised to JSON and returned.
+ var result BuildResult
+ err = json.Unmarshal(buildOutput, &result)
+ if err != nil {
+ return nil, err
+ }
+
+ for layer, meta := range result.LayerLocations {
+ err = uploadLayer(ctx, bucket, layer, meta.Path, meta.Md5)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ return &result, nil
+}
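+
+// For illustration (added annotation, not part of the original source),
+// building the image "shell/curl" with NIXERY_CHANNEL=nixos-19.03 results in
+// an invocation roughly equivalent to:
+//
+//   nix-build --no-out-link --show-trace \
+//     --argstr name shell/curl \
+//     --argstr packages '["curl","bashInteractive","coreutils","moreutils","nano"]' \
+//     $NIX_BUILDER \
+//     --argstr pkgSource nixpkgs!nixos-19.03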
+
+// uploadLayer uploads a single layer to the Cloud Storage bucket. Before writing
+// any data the bucket is probed to see if the file already exists.
+//
+// If the file does exist, its MD5 hash is verified to ensure that the stored
+// file is not - for example - a fragment of a previous, incomplete upload.
+func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer string, path string, md5 []byte) error {
+ layerKey := fmt.Sprintf("layers/%s", layer)
+ obj := bucket.Object(layerKey)
+
+ // Before uploading a layer to the bucket, probe whether it already
+ // exists.
+ //
+ // If it does and the MD5 checksum matches the expected one, the layer
+ // upload can be skipped.
+ attrs, err := obj.Attrs(*ctx)
+
+ if err == nil && bytes.Equal(attrs.MD5, md5) {
+ log.Printf("Layer sha256:%s already exists in bucket, skipping upload", layer)
+ } else {
+ writer := obj.NewWriter(*ctx)
+ file, err := os.Open(path)
+
+ if err != nil {
+ return fmt.Errorf("failed to open layer %s from path %s: %v", layer, path, err)
+ }
+
+ size, err := io.Copy(writer, file)
+ if err != nil {
+ return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
+ }
+
+ if err = writer.Close(); err != nil {
+ return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
+ }
+
+ log.Printf("Uploaded layer sha256:%s (%v bytes written)\n", layer, size)
+ }
+
+ return nil
+}
+
+// constructLayerUrl constructs the public URL of the layer object in the
+// Cloud Storage bucket and, if signing is configured, signs it. The registry
+// handler redirects clients to this URL.
+//
+// Signing the URL allows unauthenticated clients to retrieve objects from the
+// bucket.
+//
+// The Docker client is known to follow redirects, but this might not be true
+// for all other registry clients.
+func constructLayerUrl(cfg *config, digest string) (string, error) {
+ log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, cfg.bucket)
+ object := "layers/" + digest
+
+ if cfg.signing != nil {
+ opts := *cfg.signing
+ opts.Expires = time.Now().Add(5 * time.Minute)
+ return storage.SignedURL(cfg.bucket, object, &opts)
+ } else {
+ return ("https://storage.googleapis.com/" + cfg.bucket + "/" + object), nil
+ }
+}
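+
+// For illustration (added annotation, hypothetical bucket name): without URL
+// signing configured, a request for the digest "abc123" on the bucket
+// "nixery-layers" resolves to
+// "https://storage.googleapis.com/nixery-layers/layers/abc123", while with
+// signing enabled a signed URL for the same object, valid for five minutes,
+// is returned instead.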
+
+// prepareBucket configures the handle to a Cloud Storage bucket in which
+// individual layers will be stored after Nix builds. Nixery does not directly
+// serve layers to registry clients, instead it redirects them to the public
+// URLs of the Cloud Storage bucket.
+//
+// The bucket is required for Nixery to function correctly, hence fatal errors
+// are generated in case it fails to be set up correctly.
+func prepareBucket(ctx *context.Context, cfg *config) *storage.BucketHandle {
+ client, err := storage.NewClient(*ctx)
+ if err != nil {
+ log.Fatalln("Failed to set up Cloud Storage client:", err)
+ }
+
+ bkt := client.Bucket(cfg.bucket)
+
+ if _, err := bkt.Attrs(*ctx); err != nil {
+ log.Fatalln("Could not access configured bucket", err)
+ }
+
+ return bkt
+}
+
+// Regexes matching the V2 Registry API routes. This only includes the
+// routes required for serving images, since pushing and other such
+// functionality is not available.
+var (
+ manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
+ layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
+)
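+
+// For illustration (added annotation, not part of the original source), the
+// routes above dispatch requests such as:
+//
+//   GET /v2/shell/curl/manifests/latest     -> manifestRegex (name "shell/curl", tag "latest")
+//   GET /v2/shell/curl/blobs/sha256:abc123  -> layerRegex (digest "abc123")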
+
+// Error format corresponding to the registry protocol V2 specification. This
+// allows feeding back errors to clients in a way that can be presented to
+// users.
+type registryError struct {
+ Code string `json:"code"`
+ Message string `json:"message"`
+}
+
+type registryErrors struct {
+ Errors []registryError `json:"errors"`
+}
+
+func writeError(w http.ResponseWriter, status int, code, message string) {
+ err := registryErrors{
+ Errors: []registryError{
+ {code, message},
+ },
+ }
+ json, _ := json.Marshal(err)
+
+	// Headers must be set before WriteHeader is called, otherwise they
+	// are silently dropped.
+	w.Header().Add("Content-Type", "application/json")
+	w.WriteHeader(status)
+ w.Write(json)
+}
+
+type registryHandler struct {
+ cfg *config
+ ctx *context.Context
+ bucket *storage.BucketHandle
+}
+
+func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+ // Acknowledge that we speak V2 with an empty response
+ if r.RequestURI == "/v2/" {
+ return
+ }
+
+ // Serve the manifest (straight from Nix)
+ manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
+ if len(manifestMatches) == 3 {
+ imageName := manifestMatches[1]
+ imageTag := manifestMatches[2]
+ log.Printf("Requesting manifest for image %q at tag %q", imageName, imageTag)
+ image := imageFromName(imageName, imageTag)
+ buildResult, err := buildImage(h.ctx, h.cfg, &image, h.bucket)
+
+ if err != nil {
+ writeError(w, 500, "UNKNOWN", "image build failure")
+ log.Println("Failed to build image manifest", err)
+ return
+ }
+
+ // Some error types have special handling, which is applied
+ // here.
+ if buildResult.Error == "not_found" {
+ s := fmt.Sprintf("Could not find Nix packages: %v", buildResult.Pkgs)
+ writeError(w, 404, "MANIFEST_UNKNOWN", s)
+ log.Println(s)
+ return
+ }
+
+ // This marshaling error is ignored because we know that this
+ // field represents valid JSON data.
+ manifest, _ := json.Marshal(buildResult.Manifest)
+ w.Header().Add("Content-Type", manifestMediaType)
+ w.Write(manifest)
+ return
+ }
+
+	// Serve an image layer. Layers are not served directly; instead
+	// the client is redirected to the corresponding object in the
+	// Cloud Storage bucket.
+ layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
+ if len(layerMatches) == 3 {
+ digest := layerMatches[2]
+ url, err := constructLayerUrl(h.cfg, digest)
+
+ if err != nil {
+ log.Printf("Failed to sign GCS URL: %s\n", err)
+ writeError(w, 500, "UNKNOWN", "could not serve layer")
+ return
+ }
+
+ w.Header().Set("Location", url)
+ w.WriteHeader(303)
+ return
+ }
+
+ log.Printf("Unsupported registry route: %s\n", r.RequestURI)
+ w.WriteHeader(404)
+}
+
+func getConfig(key, desc string) string {
+ value := os.Getenv(key)
+ if value == "" {
+ log.Fatalln(desc + " must be specified")
+ }
+
+ return value
+}
+
+func main() {
+ cfg := &config{
+ bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
+ builder: getConfig("NIX_BUILDER", "Nix image builder code"),
+ port: getConfig("PORT", "HTTP port"),
+ pkgs: pkgSourceFromEnv(),
+ signing: signingOptsFromEnv(),
+ }
+
+ ctx := context.Background()
+ bucket := prepareBucket(&ctx, cfg)
+
+ log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.port)
+
+ // All /v2/ requests belong to the registry handler.
+ http.Handle("/v2/", ®istryHandler{
+ cfg: cfg,
+ ctx: &ctx,
+ bucket: bucket,
+ })
+
+	// All other routes are served by the static file server.
+ webDir := http.Dir(getConfig("WEB_DIR", "Static web file dir"))
+ http.Handle("/", http.FileServer(webDir))
+
+ log.Fatal(http.ListenAndServe(":"+cfg.port, nil))
+}
--
cgit 1.4.1
From 819b4602788195cacde48cf8bb36ab242d240512 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 12 Aug 2019 17:10:34 +0100
Subject: chore(docs): Move mdBook derivation to docs/default.nix
---
tools/nixery/default.nix | 23 +----------------------
tools/nixery/docs/default.nix | 20 +++++++++++++++++++-
2 files changed, 20 insertions(+), 23 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index dee5713c64af..fe5afdb8ed8b 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -25,32 +25,11 @@ rec {
# data dependencies.
nixery-server = callPackage ./server {};
- # Nix expression (unimported!) which is used by Nixery to build
- # container images.
- nixery-builder = ./build-registry-image.nix;
-
- # nixpkgs currently has an old version of mdBook. A new version is
- # built here, but eventually the update will be upstreamed
- # (nixpkgs#65890)
- mdbook = rustPlatform.buildRustPackage rec {
- name = "mdbook-${version}";
- version = "0.3.1";
- doCheck = false;
-
- src = fetchFromGitHub {
- owner = "rust-lang-nursery";
- repo = "mdBook";
- rev = "v${version}";
- sha256 = "0py69267jbs6b7zw191hcs011cm1v58jz8mglqx3ajkffdfl3ghw";
- };
-
- cargoSha256 = "0qwhc42a86jpvjcaysmfcw8kmwa150lmz01flmlg74g6qnimff5m";
- };
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
# nixery.dev.
- nixery-book = callPackage ./docs { inherit mdbook; };
+ nixery-book = callPackage ./docs {};
# Wrapper script running the Nixery server with the above two data
# dependencies configured.
diff --git a/tools/nixery/docs/default.nix b/tools/nixery/docs/default.nix
index deebdffd7fd9..6a31be4fd4e0 100644
--- a/tools/nixery/docs/default.nix
+++ b/tools/nixery/docs/default.nix
@@ -18,9 +18,27 @@
# Some of the documentation is pulled in and included from other
# sources.
-{ fetchFromGitHub, mdbook, runCommand }:
+{ fetchFromGitHub, mdbook, runCommand, rustPlatform }:
let
+ # nixpkgs currently has an old version of mdBook. A new version is
+ # built here, but eventually the update will be upstreamed
+ # (nixpkgs#65890)
+ mdbook = rustPlatform.buildRustPackage rec {
+ name = "mdbook-${version}";
+ version = "0.3.1";
+ doCheck = false;
+
+ src = fetchFromGitHub {
+ owner = "rust-lang-nursery";
+ repo = "mdBook";
+ rev = "v${version}";
+ sha256 = "0py69267jbs6b7zw191hcs011cm1v58jz8mglqx3ajkffdfl3ghw";
+ };
+
+ cargoSha256 = "0qwhc42a86jpvjcaysmfcw8kmwa150lmz01flmlg74g6qnimff5m";
+ };
+
nix-1p = fetchFromGitHub {
owner = "tazjin";
repo = "nix-1p";
--
cgit 1.4.1
From 6d718bf2713a7e2209197247976390b878f51313 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 12 Aug 2019 17:14:00 +0100
Subject: refactor(server): Use wrapper script to avoid path dependency
Instead of requiring the server component to be made aware of the
location of the Nix builder via environment variables, this commit
introduces a wrapper script for the builder that simply needs to be
available on the server's $PATH.
This is one step towards a slightly nicer out-of-the-box experience
when using `nix-build -A nixery-bin`.
---
tools/nixery/build-image/build-image.nix | 292 +++++++++++++++++++++++++
tools/nixery/build-image/default.nix | 40 ++++
tools/nixery/build-image/go-deps.nix | 12 +
tools/nixery/build-image/group-layers.go | 352 ++++++++++++++++++++++++++++++
tools/nixery/build-registry-image.nix | 292 -------------------------
tools/nixery/default.nix | 4 +-
tools/nixery/group-layers/group-layers.go | 352 ------------------------------
tools/nixery/server/default.nix | 14 ++
tools/nixery/server/main.go | 8 +-
9 files changed, 715 insertions(+), 651 deletions(-)
create mode 100644 tools/nixery/build-image/build-image.nix
create mode 100644 tools/nixery/build-image/default.nix
create mode 100644 tools/nixery/build-image/go-deps.nix
create mode 100644 tools/nixery/build-image/group-layers.go
delete mode 100644 tools/nixery/build-registry-image.nix
delete mode 100644 tools/nixery/group-layers/group-layers.go
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
new file mode 100644
index 000000000000..37156905fa38
--- /dev/null
+++ b/tools/nixery/build-image/build-image.nix
@@ -0,0 +1,292 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This file contains a modified version of dockerTools.buildImage that, instead
+# of outputting a single tarball which can be imported into a running Docker
+# daemon, builds a manifest file that can be used for serving the image over a
+# registry API.
+
+{
+ # Image Name
+ name,
+  # Image tag; the Nix output hash will be used if null.
+ tag ? null,
+ # Files to put on the image (a nix store path or list of paths).
+ contents ? [],
+ # Packages to install by name (which must refer to top-level attributes of
+ # nixpkgs). This is passed in as a JSON-array in string form.
+ packages ? "[]",
+ # Optional bash script to run on the files prior to fixturizing the layer.
+ extraCommands ? "", uid ? 0, gid ? 0,
+ # Docker's modern image storage mechanisms have a maximum of 125
+ # layers. To allow for some extensibility (via additional layers),
+ # the default here is set to something a little less than that.
+ maxLayers ? 96,
+
+ # Configuration for which package set to use when building.
+ #
+ # Both channels of the public nixpkgs repository as well as imports
+ # from private repositories are supported.
+ #
+ # This setting can be invoked with three different formats:
+ #
+ # 1. nixpkgs!$channel (e.g. nixpkgs!nixos-19.03)
+ # 2. git!$repo!$rev (e.g. git!git@github.com:NixOS/nixpkgs.git!master)
+ # 3. path!$path (e.g. path!/var/local/nixpkgs)
+ #
+ # '!' was chosen as the separator because `builtins.split` does not
+ # support regex escapes and there are few other candidates. It
+ # doesn't matter much because this is invoked by the server.
+ pkgSource ? "nixpkgs!nixos-19.03"
+}:
+
+let
+  # If a nixpkgs channel is requested, it is retrieved from GitHub (as
+ # a tarball) and imported.
+ fetchImportChannel = channel:
+ let url = "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
+ in import (builtins.fetchTarball url) {};
+
+ # If a git repository is requested, it is retrieved via
+ # builtins.fetchGit which defaults to the git configuration of the
+ # outside environment. This means that user-configured SSH
+ # credentials etc. are going to work as expected.
+ fetchImportGit = url: rev:
+ let
+ # builtins.fetchGit needs to know whether 'rev' is a reference
+ # (e.g. a branch/tag) or a revision (i.e. a commit hash)
+ #
+ # Since this data is being extrapolated from the supplied image
+ # tag, we have to guess if we want to avoid specifying a format.
+ #
+ # There are some additional caveats around whether the default
+ # branch contains the specified revision, which need to be
+ # explained to users.
+ spec = if (builtins.stringLength rev) == 40 then {
+ inherit url rev;
+ } else {
+ inherit url;
+ ref = rev;
+ };
+ in import (builtins.fetchGit spec) {};
+
+ importPath = path: import (builtins.toPath path) {};
+
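+  # Added annotation (not part of the original source): builtins.split
+  # interleaves the separator matches into its result, e.g.
+  # builtins.split "!" "git!url!rev" yields [ "git" [] "url" [] "rev" ],
+  # which is why the components below are read at indices 0, 2 and 4.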
+ source = builtins.split "!" pkgSource;
+ sourceType = builtins.elemAt source 0;
+ pkgs = with builtins;
+ if sourceType == "nixpkgs"
+ then fetchImportChannel (elemAt source 2)
+ else if sourceType == "git"
+ then fetchImportGit (elemAt source 2) (elemAt source 4)
+ else if sourceType == "path"
+ then importPath (elemAt source 2)
+ else builtins.throw("Invalid package set source specification: ${pkgSource}");
+in
+
+# Since this is essentially a re-wrapping of some of the functionality that is
+# implemented in the dockerTools, we need all of its components in our top-level
+# namespace.
+with builtins;
+with pkgs;
+with dockerTools;
+
+let
+ tarLayer = "application/vnd.docker.image.rootfs.diff.tar";
+ baseName = baseNameOf name;
+
+ # deepFetch traverses the top-level Nix package set to retrieve an item via a
+ # path specified in string form.
+ #
+ # For top-level items, the name of the key yields the result directly. Nested
+ # items are fetched by using dot-syntax, as in Nix itself.
+ #
+ # Due to a restriction of the registry API specification it is not possible to
+ # pass uppercase characters in an image name, however the Nix package set
+ # makes use of camelCasing repeatedly (for example for `haskellPackages`).
+ #
+ # To work around this, if no value is found on the top-level a second lookup
+ # is done on the package set using lowercase-names. This is not done for
+ # nested sets, as they often have keys that only differ in case.
+ #
+ # For example, `deepFetch pkgs "xorg.xev"` retrieves `pkgs.xorg.xev` and
+  # `deepFetch pkgs "haskellpackages.stylish-haskell"` retrieves
+  # `haskellPackages.stylish-haskell`.
+ deepFetch = with lib; s: n:
+ let path = splitString "." n;
+ err = { error = "not_found"; pkg = n; };
+ # The most efficient way I've found to do a lookup against
+ # case-differing versions of an attribute is to first construct a
+ # mapping of all lowercased attribute names to their differently cased
+ # equivalents.
+ #
+ # This map is then used for a second lookup if the top-level
+ # (case-sensitive) one does not yield a result.
+ hasUpper = str: (match ".*[A-Z].*" str) != null;
+ allUpperKeys = filter hasUpper (attrNames s);
+ lowercased = listToAttrs (map (k: {
+ name = toLower k;
+ value = k;
+ }) allUpperKeys);
+ caseAmendedPath = map (v: if hasAttr v lowercased then lowercased."${v}" else v) path;
+ fetchLower = attrByPath caseAmendedPath err s;
+ in attrByPath path fetchLower s;
+
+ # allContents is the combination of all derivations and store paths passed in
+ # directly, as well as packages referred to by name.
+ #
+ # It accumulates potential errors about packages that could not be found to
+ # return this information back to the server.
+ allContents =
+ # Folds over the results of 'deepFetch' on all requested packages to
+ # separate them into errors and content. This allows the program to
+ # terminate early and return only the errors if any are encountered.
+ let splitter = attrs: res:
+ if hasAttr "error" res
+ then attrs // { errors = attrs.errors ++ [ res ]; }
+ else attrs // { contents = attrs.contents ++ [ res ]; };
+ init = { inherit contents; errors = []; };
+ fetched = (map (deepFetch pkgs) (fromJSON packages));
+ in foldl' splitter init fetched;
+
+ contentsEnv = symlinkJoin {
+ name = "bulk-layers";
+ paths = allContents.contents;
+ };
+
+ # The image build infrastructure expects to be outputting a slightly different
+ # format than the one we serve over the registry protocol. To work around its
+ # expectations we need to provide an empty JSON file that it can write some
+ # fun data into.
+ emptyJson = writeText "empty.json" "{}";
+
+ bulkLayers = mkManyPureLayers {
+ name = baseName;
+ configJson = emptyJson;
+ closure = writeText "closure" "${contentsEnv} ${emptyJson}";
+ # One layer will be taken up by the customisationLayer, so
+ # take up one less.
+ maxLayers = maxLayers - 1;
+ };
+
+ customisationLayer = mkCustomisationLayer {
+ name = baseName;
+ contents = contentsEnv;
+ baseJson = emptyJson;
+ inherit uid gid extraCommands;
+ };
+
+ # Inspect the returned bulk layers to determine which layers belong to the
+ # image and how to serve them.
+ #
+ # This computes both an MD5 and a SHA256 hash of each layer, which are used
+ # for different purposes. See the registry server implementation for details.
+ #
+ # Some of this logic is copied straight from `buildLayeredImage`.
+ allLayersJson = runCommand "fs-layer-list.json" {
+ buildInputs = [ coreutils findutils jq openssl ];
+ } ''
+ find ${bulkLayers} -mindepth 1 -maxdepth 1 | sort -t/ -k5 -n > layer-list
+ echo ${customisationLayer} >> layer-list
+
+ for layer in $(cat layer-list); do
+ layerPath="$layer/layer.tar"
+ layerSha256=$(sha256sum $layerPath | cut -d ' ' -f1)
+ # The server application compares binary MD5 hashes and expects base64
+ # encoding instead of hex.
+ layerMd5=$(openssl dgst -md5 -binary $layerPath | openssl enc -base64)
+ layerSize=$(wc -c $layerPath | cut -d ' ' -f1)
+
+ jq -n -c --arg sha256 $layerSha256 --arg md5 $layerMd5 --arg size $layerSize --arg path $layerPath \
+ '{ size: ($size | tonumber), sha256: $sha256, md5: $md5, path: $path }' >> fs-layers
+ done
+
+ cat fs-layers | jq -s -c '.' > $out
+ '';
+ allLayers = fromJSON (readFile allLayersJson);
+
+ # Image configuration corresponding to the OCI specification for the file type
+ # 'application/vnd.oci.image.config.v1+json'
+ config = {
+ architecture = "amd64";
+ os = "linux";
+ rootfs.type = "layers";
+ rootfs.diff_ids = map (layer: "sha256:${layer.sha256}") allLayers;
+ # Required to let Kubernetes import Nixery images
+ config = {};
+ };
+ configJson = writeText "${baseName}-config.json" (toJSON config);
+ configMetadata = fromJSON (readFile (runCommand "config-meta" {
+ buildInputs = [ jq openssl ];
+ } ''
+ size=$(wc -c ${configJson} | cut -d ' ' -f1)
+ sha256=$(sha256sum ${configJson} | cut -d ' ' -f1)
+ md5=$(openssl dgst -md5 -binary ${configJson} | openssl enc -base64)
+ jq -n -c --arg size $size --arg sha256 $sha256 --arg md5 $md5 \
+ '{ size: ($size | tonumber), sha256: $sha256, md5: $md5 }' \
+ >> $out
+ ''));
+
+ # Corresponds to the manifest JSON expected by the Registry API.
+ #
+ # This is Docker's "Image Manifest V2, Schema 2":
+ # https://docs.docker.com/registry/spec/manifest-v2-2/
+ manifest = {
+ schemaVersion = 2;
+ mediaType = "application/vnd.docker.distribution.manifest.v2+json";
+
+ config = {
+ mediaType = "application/vnd.docker.container.image.v1+json";
+ size = configMetadata.size;
+ digest = "sha256:${configMetadata.sha256}";
+ };
+
+ layers = map (layer: {
+ mediaType = tarLayer;
+ digest = "sha256:${layer.sha256}";
+ size = layer.size;
+ }) allLayers;
+ };
+
+ # This structure maps each layer digest to the actual tarball that will need
+ # to be served. It is used by the controller to cache the paths during a pull.
+ layerLocations = {
+ "${configMetadata.sha256}" = {
+ path = configJson;
+ md5 = configMetadata.md5;
+ };
+ } // (listToAttrs (map (layer: {
+ name = "${layer.sha256}";
+ value = {
+ path = layer.path;
+ md5 = layer.md5;
+ };
+ }) allLayers));
+
+ # Final output structure returned to the controller in the case of a
+ # successful build.
+ manifestOutput = {
+ inherit manifest layerLocations;
+ };
+
+  # Output structure returned if errors occurred during the build. Currently the
+ # only error type that is returned in a structured way is 'not_found'.
+ errorOutput = {
+ error = "not_found";
+ pkgs = map (err: err.pkg) allContents.errors;
+ };
+in writeText "manifest-output.json" (if (length allContents.errors) == 0
+ then toJSON manifestOutput
+ else toJSON errorOutput
+)
diff --git a/tools/nixery/build-image/default.nix b/tools/nixery/build-image/default.nix
new file mode 100644
index 000000000000..4962e07deee9
--- /dev/null
+++ b/tools/nixery/build-image/default.nix
@@ -0,0 +1,40 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This file builds the tool used to calculate layer distribution and
+# moves the files needed to call the Nix builds at runtime in the
+# correct locations.
+
+{ buildGoPackage, lib, nix, writeShellScriptBin }:
+
+let
+ group-layers = buildGoPackage {
+ name = "group-layers";
+ goDeps = ./go-deps.nix;
+ src = ./.;
+
+ goPackagePath = "github.com/google/nixery/group-layers";
+
+ meta = {
+ description = "Tool to group a set of packages into container image layers";
+ license = lib.licenses.asl20;
+ maintainers = [ lib.maintainers.tazjin ];
+ };
+ };
+
+ # Wrapper script which is called by the Nixery server to trigger an
+ # actual image build.
+in writeShellScriptBin "nixery-build-image" ''
+ exec ${nix}/bin/nix-build --show-trace --no-out-link "$@" ${./build-image.nix}
+''
diff --git a/tools/nixery/build-image/go-deps.nix b/tools/nixery/build-image/go-deps.nix
new file mode 100644
index 000000000000..235c3c4c6dbe
--- /dev/null
+++ b/tools/nixery/build-image/go-deps.nix
@@ -0,0 +1,12 @@
+# This file was generated by https://github.com/kamilchm/go2nix v1.3.0
+[
+ {
+ goPackagePath = "gonum.org/v1/gonum";
+ fetch = {
+ type = "git";
+ url = "https://github.com/gonum/gonum";
+ rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
+ sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
+ };
+ }
+]
diff --git a/tools/nixery/build-image/group-layers.go b/tools/nixery/build-image/group-layers.go
new file mode 100644
index 000000000000..93f2e520ace9
--- /dev/null
+++ b/tools/nixery/build-image/group-layers.go
@@ -0,0 +1,352 @@
+// This program reads an export reference graph (i.e. a graph representing the
+// runtime dependencies of a set of derivations) created by Nix and groups the
+// derivations in a way that is likely to match the grouping for other
+// derivation sets with overlapping dependencies.
+//
+// This is used to determine which derivations to include in which layers of a
+// container image.
+//
+// # Inputs
+//
+// * a graph of Nix runtime dependencies, generated via exportReferenceGraph
+// * a file containing absolute popularity values of packages in the
+// Nix package set (in the form of a direct reference count)
+// * a maximum number of layers to allocate for the image (the "layer budget")
+//
+// # Algorithm
+//
+// It works by first creating a (directed) dependency tree:
+//
+// img (root node)
+// │
+// ├───> A ─────┐
+// │ v
+// ├───> B ───> E
+// │ ^
+// ├───> C ─────┘
+// │ │
+// │ v
+// └───> D ───> F
+// │
+// └────> G
+//
+// Each node (i.e. package) is then visited to determine how important
+// it is to separate this node into its own layer, specifically:
+//
+// 1. Is the node within a certain threshold percentile of absolute
+// popularity within all of nixpkgs? (e.g. `glibc`, `openssl`)
+//
+// 2. Is the node's runtime closure above a threshold size? (e.g. 100MB)
+//
+// In either case, a bit is flipped for this node representing each
+// condition and an edge to it is inserted directly from the image
+// root, if it does not already exist.
+//
+// For the rest of the example we assume 'G' is above the threshold
+// size and 'E' is popular.
+//
+// This tree is then transformed into a dominator tree:
+//
+// img
+// │
+// ├───> A
+// ├───> B
+// ├───> C
+// ├───> E
+// ├───> D ───> F
+// └───> G
+//
+// Specifically this means that the paths to A, B, C, E, G, and D
+// always pass through the root (i.e. are dominated by it), whilst F
+// is dominated by D (all paths go through it).
+//
+// The top-level subtrees are considered as the initially selected
+// layers.
+//
+// If the list of layers fits within the layer budget, it is returned.
+//
+// Otherwise, a merge rating is calculated for each layer. This is the
+// product of the layer's total size and its root node's popularity.
+//
+// Layers are then merged in ascending order of merge ratings until
+// they fit into the layer budget.
+//
+// # Threshold values
+//
+// Threshold values for the partitioning conditions mentioned above
+// have not yet been determined, but we will make a good first guess
+// based on gut feeling and proceed to measure their impact on cache
+// hits/misses.
+//
+// # Example
+//
+// Using the logic described above as well as the example presented in
+// the introduction, this program would create the following layer
+// groupings (assuming no additional partitioning):
+//
+// Layer budget: 1
+// Layers: { A, B, C, D, E, F, G }
+//
+// Layer budget: 2
+// Layers: { G }, { A, B, C, D, E, F }
+//
+// Layer budget: 3
+// Layers: { G }, { E }, { A, B, C, D, F }
+//
+// Layer budget: 4
+// Layers: { G }, { E }, { D, F }, { A, B, C }
+//
+// ...
+//
+// Layer budget: 10
+// Layers: { E }, { D, F }, { A }, { B }, { C }
+package main
+
+import (
+ "encoding/json"
+ "flag"
+ "io/ioutil"
+ "log"
+ "regexp"
+ "sort"
+
+ "gonum.org/v1/gonum/graph/flow"
+ "gonum.org/v1/gonum/graph/simple"
+)
+
+// exportReferences represents the structured attributes Nix outputs when
+// asked for the exportReferencesGraph of a list of derivations.
+type exportReferences struct {
+ References struct {
+ Graph []string `json:"graph"`
+ } `json:"exportReferencesGraph"`
+
+ Graph []struct {
+ Size uint64 `json:"closureSize"`
+ Path string `json:"path"`
+ Refs []string `json:"references"`
+ } `json:"graph"`
+}
+
+// pkgsMetadata contains the popularity data for each Nix package, calculated
+// in advance.
+//
+// Popularity is currently expressed as an absolute reference count (see the
+// program documentation above); the TODO in bigOrPopular describes the
+// planned switch to percentiles.
+type pkgsMetadata = map[string]int
+
+// layer represents the data returned for each layer that Nix should
+// build for the container image.
+type layer struct {
+ Contents []string `json:"contents"`
+ mergeRating uint64
+}
+
+func (a layer) merge(b layer) layer {
+ a.Contents = append(a.Contents, b.Contents...)
+ a.mergeRating += b.mergeRating
+ return a
+}
+
+// closure as pointed to by the graph nodes.
+type closure struct {
+ GraphID int64
+ Path string
+ Size uint64
+ Refs []string
+ Popularity int
+}
+
+func (c *closure) ID() int64 {
+ return c.GraphID
+}
+
+var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)
+
+func (c *closure) DOTID() string {
+ return nixRegexp.ReplaceAllString(c.Path, "")
+}
+
+// bigOrPopular checks whether this closure should be considered for
+// separation into its own layer, even if it would otherwise only
+// appear in a subtree of the dominator tree.
+func (c *closure) bigOrPopular() bool {
+ const sizeThreshold = 100 * 1000000 // 100MB
+
+ if c.Size > sizeThreshold {
+ return true
+ }
+
+ // The threshold value used here is currently roughly the
+ // minimum number of references that only 1% of packages in
+ // the entire package set have.
+ //
+ // TODO(tazjin): Do this more elegantly by calculating
+ // percentiles for each package and using those instead.
+ if c.Popularity >= 1000 {
+ return true
+ }
+
+ return false
+}
+
+func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *closure) {
+ // Big or popular nodes get a separate edge from the top to
+ // flag them for their own layer.
+ if node.bigOrPopular() && !graph.HasEdgeFromTo(0, node.ID()) {
+ edge := graph.NewEdge(graph.Node(0), node)
+ graph.SetEdge(edge)
+ }
+
+ for _, c := range node.Refs {
+ // Nix adds a self reference to each node, which
+ // should not be inserted.
+ if c != node.Path {
+ edge := graph.NewEdge(node, (*cmap)[c])
+ graph.SetEdge(edge)
+ }
+ }
+}
+
+// Create a graph structure from the references supplied by Nix.
+func buildGraph(refs *exportReferences, pop *pkgsMetadata) *simple.DirectedGraph {
+ cmap := make(map[string]*closure)
+ graph := simple.NewDirectedGraph()
+
+ // Insert all closures into the graph, as well as a fake root
+ // closure which serves as the top of the tree.
+ //
+ // A map from store paths to IDs is kept to actually insert
+ // edges below.
+ root := &closure{
+ GraphID: 0,
+ Path: "image_root",
+ }
+ graph.AddNode(root)
+
+ for idx, c := range refs.Graph {
+ node := &closure{
+ GraphID: int64(idx + 1), // inc because of root node
+ Path: c.Path,
+ Size: c.Size,
+ Refs: c.Refs,
+ }
+
+ if p, ok := (*pop)[node.DOTID()]; ok {
+ node.Popularity = p
+ } else {
+ node.Popularity = 1
+ }
+
+ graph.AddNode(node)
+ cmap[c.Path] = node
+ }
+
+ // Insert the top-level closures with edges from the root
+ // node, then insert all edges for each closure.
+ for _, p := range refs.References.Graph {
+ edge := graph.NewEdge(root, cmap[p])
+ graph.SetEdge(edge)
+ }
+
+ for _, c := range cmap {
+ insertEdges(graph, &cmap, c)
+ }
+
+ return graph
+}
+
+// Extracts a subgraph starting at the specified root from the
+// dominator tree. The subgraph is converted into a flat list of
+// layers, each containing the store paths and merge rating.
+func groupLayer(dt *flow.DominatorTree, root *closure) layer {
+ size := root.Size
+ contents := []string{root.Path}
+ children := dt.DominatedBy(root.ID())
+
+ // This iteration does not use 'range' because the list being
+ // iterated is modified during the iteration (yes, I'm sorry).
+ for i := 0; i < len(children); i++ {
+ child := children[i].(*closure)
+ size += child.Size
+ contents = append(contents, child.Path)
+ children = append(children, dt.DominatedBy(child.ID())...)
+ }
+
+ return layer{
+ Contents: contents,
+ // TODO(tazjin): The point of this is to factor in
+ // both the size and the popularity when making merge
+ // decisions, but there might be a smarter way to do
+ // it than a plain multiplication.
+ mergeRating: uint64(root.Popularity) * size,
+ }
+}
+
+// Calculate the dominator tree of the entire package set and group
+// each top-level subtree into a layer.
+//
+// Layers are merged together until they fit into the layer budget,
+// based on their merge rating.
+func dominate(budget int, graph *simple.DirectedGraph) []layer {
+ dt := flow.Dominators(graph.Node(0), graph)
+
+ var layers []layer
+ for _, n := range dt.DominatedBy(dt.Root().ID()) {
+ layers = append(layers, groupLayer(&dt, n.(*closure)))
+ }
+
+ sort.Slice(layers, func(i, j int) bool {
+ return layers[i].mergeRating < layers[j].mergeRating
+ })
+
+ if len(layers) > budget {
+ log.Printf("Ideal image has %v layers, but budget is %v\n", len(layers), budget)
+ }
+
+ for len(layers) > budget {
+ merged := layers[0].merge(layers[1])
+ layers[1] = merged
+ layers = layers[1:]
+ }
+
+ return layers
+}
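+
+// Worked illustration (added annotation, not part of the original source):
+// given four layers with merge ratings 10, 20, 30 and 40 and a budget of
+// two, the loop above first merges the layers rated 10 and 20 (combined
+// rating 30), then merges that result with the layer rated 30, leaving the
+// required two layers.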
+
+func main() {
+ graphFile := flag.String("graph", ".attrs.json", "Input file containing graph")
+ popFile := flag.String("pop", "popularity.json", "Package popularity data")
+ outFile := flag.String("out", "layers.json", "File to write layers to")
+ layerBudget := flag.Int("budget", 94, "Total layer budget available")
+ flag.Parse()
+
+ // Parse graph data
+ file, err := ioutil.ReadFile(*graphFile)
+ if err != nil {
+ log.Fatalf("Failed to load input: %s\n", err)
+ }
+
+ var refs exportReferences
+ err = json.Unmarshal(file, &refs)
+ if err != nil {
+ log.Fatalf("Failed to deserialise input: %s\n", err)
+ }
+
+ // Parse popularity data
+ popBytes, err := ioutil.ReadFile(*popFile)
+ if err != nil {
+ log.Fatalf("Failed to load input: %s\n", err)
+ }
+
+ var pop pkgsMetadata
+ err = json.Unmarshal(popBytes, &pop)
+ if err != nil {
+ log.Fatalf("Failed to deserialise input: %s\n", err)
+ }
+
+ graph := buildGraph(&refs, &pop)
+ layers := dominate(*layerBudget, graph)
+
+ j, _ := json.Marshal(layers)
+ ioutil.WriteFile(*outFile, j, 0644)
+}
diff --git a/tools/nixery/build-registry-image.nix b/tools/nixery/build-registry-image.nix
deleted file mode 100644
index 255f1ca9b1d0..000000000000
--- a/tools/nixery/build-registry-image.nix
+++ /dev/null
@@ -1,292 +0,0 @@
-# Copyright 2019 Google LLC
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# https://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# This file contains a modified version of dockerTools.buildImage that, instead
-# of outputting a single tarball which can be imported into a running Docker
-# daemon, builds a manifest file that can be used for serving the image over a
-# registry API.
-
-{
- # Image Name
- name,
- # Image tag, the Nix's output hash will be used if null
- tag ? null,
- # Files to put on the image (a nix store path or list of paths).
- contents ? [],
- # Packages to install by name (which must refer to top-level attributes of
- # nixpkgs). This is passed in as a JSON-array in string form.
- packages ? "[]",
- # Optional bash script to run on the files prior to fixturizing the layer.
- extraCommands ? "", uid ? 0, gid ? 0,
- # Docker's modern image storage mechanisms have a maximum of 125
- # layers. To allow for some extensibility (via additional layers),
- # the default here is set to something a little less than that.
- maxLayers ? 96,
-
- # Configuration for which package set to use when building.
- #
- # Both channels of the public nixpkgs repository as well as imports
- # from private repositories are supported.
- #
- # This setting can be invoked with three different formats:
- #
- # 1. nixpkgs!$channel (e.g. nixpkgs!nixos-19.03)
- # 2. git!$repo!$rev (e.g. git!git@github.com:NixOS/nixpkgs.git!master)
- # 3. path!$path (e.g. path!/var/local/nixpkgs)
- #
- # '!' was chosen as the separator because `builtins.split` does not
- # support regex escapes and there are few other candidates. It
- # doesn't matter much because this is invoked by the server.
- pkgSource ? "nixpkgs!nixos-19.03"
-}:
-
-let
- # If a nixpkgs channel is requested, it is retrieved from Github (as
- # a tarball) and imported.
- fetchImportChannel = channel:
- let url = "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
- in import (builtins.fetchTarball url) {};
-
- # If a git repository is requested, it is retrieved via
- # builtins.fetchGit which defaults to the git configuration of the
- # outside environment. This means that user-configured SSH
- # credentials etc. are going to work as expected.
- fetchImportGit = url: rev:
- let
- # builtins.fetchGit needs to know whether 'rev' is a reference
- # (e.g. a branch/tag) or a revision (i.e. a commit hash)
- #
- # Since this data is being extrapolated from the supplied image
- # tag, we have to guess if we want to avoid specifying a format.
- #
- # There are some additional caveats around whether the default
- # branch contains the specified revision, which need to be
- # explained to users.
- spec = if (builtins.stringLength rev) == 40 then {
- inherit url rev;
- } else {
- inherit url;
- ref = rev;
- };
- in import (builtins.fetchGit spec) {};
-
- importPath = path: import (builtins.toPath path) {};
-
- source = builtins.split "!" pkgSource;
- sourceType = builtins.elemAt source 0;
- pkgs = with builtins;
- if sourceType == "nixpkgs"
- then fetchImportChannel (elemAt source 2)
- else if sourceType == "git"
- then fetchImportGit (elemAt source 2) (elemAt source 4)
- else if sourceType == "path"
- then importPath (elemAt source 2)
- else builtins.throw("Invalid package set source specification: ${pkgSource}");
-in
-
-# Since this is essentially a re-wrapping of some of the functionality that is
-# implemented in the dockerTools, we need all of its components in our top-level
-# namespace.
-with builtins;
-with pkgs;
-with dockerTools;
-
-let
- tarLayer = "application/vnd.docker.image.rootfs.diff.tar";
- baseName = baseNameOf name;
-
- # deepFetch traverses the top-level Nix package set to retrieve an item via a
- # path specified in string form.
- #
- # For top-level items, the name of the key yields the result directly. Nested
- # items are fetched by using dot-syntax, as in Nix itself.
- #
- # Due to a restriction of the registry API specification it is not possible to
- # pass uppercase characters in an image name, however the Nix package set
- # makes use of camelCasing repeatedly (for example for `haskellPackages`).
- #
- # To work around this, if no value is found on the top-level a second lookup
- # is done on the package set using lowercase-names. This is not done for
- # nested sets, as they often have keys that only differ in case.
- #
- # For example, `deepFetch pkgs "xorg.xev"` retrieves `pkgs.xorg.xev` and
- # `deepFetch haskellpackages.stylish-haskell` retrieves
- # `haskellPackages.stylish-haskell`.
- deepFetch = with lib; s: n:
- let path = splitString "." n;
- err = { error = "not_found"; pkg = n; };
- # The most efficient way I've found to do a lookup against
- # case-differing versions of an attribute is to first construct a
- # mapping of all lowercased attribute names to their differently cased
- # equivalents.
- #
- # This map is then used for a second lookup if the top-level
- # (case-sensitive) one does not yield a result.
- hasUpper = str: (match ".*[A-Z].*" str) != null;
- allUpperKeys = filter hasUpper (attrNames s);
- lowercased = listToAttrs (map (k: {
- name = toLower k;
- value = k;
- }) allUpperKeys);
- caseAmendedPath = map (v: if hasAttr v lowercased then lowercased."${v}" else v) path;
- fetchLower = attrByPath caseAmendedPath err s;
- in attrByPath path fetchLower s;
-
- # allContents is the combination of all derivations and store paths passed in
- # directly, as well as packages referred to by name.
- #
- # It accumulates potential errors about packages that could not be found to
- # return this information back to the server.
- allContents =
- # Folds over the results of 'deepFetch' on all requested packages to
- # separate them into errors and content. This allows the program to
- # terminate early and return only the errors if any are encountered.
- let splitter = attrs: res:
- if hasAttr "error" res
- then attrs // { errors = attrs.errors ++ [ res ]; }
- else attrs // { contents = attrs.contents ++ [ res ]; };
- init = { inherit contents; errors = []; };
- fetched = (map (deepFetch pkgs) (fromJSON packages));
- in foldl' splitter init fetched;
-
- contentsEnv = symlinkJoin {
- name = "bulk-layers";
- paths = allContents.contents;
- };
-
- # The image build infrastructure expects to be outputting a slightly different
- # format than the one we serve over the registry protocol. To work around its
- # expectations we need to provide an empty JSON file that it can write some
- # fun data into.
- emptyJson = writeText "empty.json" "{}";
-
- bulkLayers = mkManyPureLayers {
- name = baseName;
- configJson = emptyJson;
- closure = writeText "closure" "${contentsEnv} ${emptyJson}";
- # One layer will be taken up by the customisationLayer, so
- # take up one less.
- maxLayers = maxLayers - 1;
- };
-
- customisationLayer = mkCustomisationLayer {
- name = baseName;
- contents = contentsEnv;
- baseJson = emptyJson;
- inherit uid gid extraCommands;
- };
-
- # Inspect the returned bulk layers to determine which layers belong to the
- # image and how to serve them.
- #
- # This computes both an MD5 and a SHA256 hash of each layer, which are used
- # for different purposes. See the registry server implementation for details.
- #
- # Some of this logic is copied straight from `buildLayeredImage`.
- allLayersJson = runCommand "fs-layer-list.json" {
- buildInputs = [ coreutils findutils jq openssl ];
- } ''
- find ${bulkLayers} -mindepth 1 -maxdepth 1 | sort -t/ -k5 -n > layer-list
- echo ${customisationLayer} >> layer-list
-
- for layer in $(cat layer-list); do
- layerPath="$layer/layer.tar"
- layerSha256=$(sha256sum $layerPath | cut -d ' ' -f1)
- # The server application compares binary MD5 hashes and expects base64
- # encoding instead of hex.
- layerMd5=$(openssl dgst -md5 -binary $layerPath | openssl enc -base64)
- layerSize=$(wc -c $layerPath | cut -d ' ' -f1)
-
- jq -n -c --arg sha256 $layerSha256 --arg md5 $layerMd5 --arg size $layerSize --arg path $layerPath \
- '{ size: ($size | tonumber), sha256: $sha256, md5: $md5, path: $path }' >> fs-layers
- done
-
- cat fs-layers | jq -s -c '.' > $out
- '';
- allLayers = fromJSON (readFile allLayersJson);
-
- # Image configuration corresponding to the OCI specification for the file type
- # 'application/vnd.oci.image.config.v1+json'
- config = {
- architecture = "amd64";
- os = "linux";
- rootfs.type = "layers";
- rootfs.diff_ids = map (layer: "sha256:${layer.sha256}") allLayers;
- # Required to let Kubernetes import Nixery images
- config = {};
- };
- configJson = writeText "${baseName}-config.json" (toJSON config);
- configMetadata = fromJSON (readFile (runCommand "config-meta" {
- buildInputs = [ jq openssl ];
- } ''
- size=$(wc -c ${configJson} | cut -d ' ' -f1)
- sha256=$(sha256sum ${configJson} | cut -d ' ' -f1)
- md5=$(openssl dgst -md5 -binary ${configJson} | openssl enc -base64)
- jq -n -c --arg size $size --arg sha256 $sha256 --arg md5 $md5 \
- '{ size: ($size | tonumber), sha256: $sha256, md5: $md5 }' \
- >> $out
- ''));
-
- # Corresponds to the manifest JSON expected by the Registry API.
- #
- # This is Docker's "Image Manifest V2, Schema 2":
- # https://docs.docker.com/registry/spec/manifest-v2-2/
- manifest = {
- schemaVersion = 2;
- mediaType = "application/vnd.docker.distribution.manifest.v2+json";
-
- config = {
- mediaType = "application/vnd.docker.container.image.v1+json";
- size = configMetadata.size;
- digest = "sha256:${configMetadata.sha256}";
- };
-
- layers = map (layer: {
- mediaType = tarLayer;
- digest = "sha256:${layer.sha256}";
- size = layer.size;
- }) allLayers;
- };
-
- # This structure maps each layer digest to the actual tarball that will need
- # to be served. It is used by the controller to cache the paths during a pull.
- layerLocations = {
- "${configMetadata.sha256}" = {
- path = configJson;
- md5 = configMetadata.md5;
- };
- } // (listToAttrs (map (layer: {
- name = "${layer.sha256}";
- value = {
- path = layer.path;
- md5 = layer.md5;
- };
- }) allLayers));
-
- # Final output structure returned to the controller in the case of a
- # successful build.
- manifestOutput = {
- inherit manifest layerLocations;
- };
-
- # Output structure returned if errors occured during the build. Currently the
- # only error type that is returned in a structured way is 'not_found'.
- errorOutput = {
- error = "not_found";
- pkgs = map (err: err.pkg) allContents.errors;
- };
-in writeText "manifest-output.json" (if (length allContents.errors) == 0
- then toJSON (trace manifestOutput manifestOutput)
- else toJSON (trace errorOutput errorOutput)
-)
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index fe5afdb8ed8b..7d201869dc90 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -25,6 +25,8 @@ rec {
# data dependencies.
nixery-server = callPackage ./server {};
+ # Implementation of the image building & layering logic
+ nixery-build-image = callPackage ./build-image {};
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
@@ -37,7 +39,6 @@ rec {
# In most cases, this will be the derivation a user wants if they
# are installing Nixery directly.
nixery-bin = writeShellScriptBin "nixery" ''
- export NIX_BUILDER="${nixery-builder}"
export WEB_DIR="${nixery-book}"
exec ${nixery-server}/bin/nixery
'';
@@ -84,6 +85,7 @@ rec {
gnutar
gzip
nix
+ nixery-build-image
nixery-launch-script
openssh
];
diff --git a/tools/nixery/group-layers/group-layers.go b/tools/nixery/group-layers/group-layers.go
deleted file mode 100644
index 93f2e520ace9..000000000000
--- a/tools/nixery/group-layers/group-layers.go
+++ /dev/null
@@ -1,352 +0,0 @@
-// This program reads an export reference graph (i.e. a graph representing the
-// runtime dependencies of a set of derivations) created by Nix and groups them
-// in a way that is likely to match the grouping for other derivation sets with
-// overlapping dependencies.
-//
-// This is used to determine which derivations to include in which layers of a
-// container image.
-//
-// # Inputs
-//
-// * a graph of Nix runtime dependencies, generated via exportReferenceGraph
-// * a file containing absolute popularity values of packages in the
-// Nix package set (in the form of a direct reference count)
-// * a maximum number of layers to allocate for the image (the "layer budget")
-//
-// # Algorithm
-//
-// It works by first creating a (directed) dependency tree:
-//
-// img (root node)
-// │
-// ├───> A ─────┐
-// │ v
-// ├───> B ───> E
-// │ ^
-// ├───> C ─────┘
-// │ │
-// │ v
-// └───> D ───> F
-// │
-// └────> G
-//
-// Each node (i.e. package) is then visited to determine how important
-// it is to separate this node into its own layer, specifically:
-//
-// 1. Is the node within a certain threshold percentile of absolute
-// popularity within all of nixpkgs? (e.g. `glibc`, `openssl`)
-//
-// 2. Is the node's runtime closure above a threshold size? (e.g. 100MB)
-//
-// In either case, a bit is flipped for this node representing each
-// condition and an edge to it is inserted directly from the image
-// root, if it does not already exist.
-//
-// For the rest of the example we assume 'G' is above the threshold
-// size and 'E' is popular.
-//
-// This tree is then transformed into a dominator tree:
-//
-// img
-// │
-// ├───> A
-// ├───> B
-// ├───> C
-// ├───> E
-// ├───> D ───> F
-// └───> G
-//
-// Specifically this means that the paths to A, B, C, E, G, and D
-// always pass through the root (i.e. are dominated by it), whilst F
-// is dominated by D (all paths go through it).
-//
-// The top-level subtrees are considered as the initially selected
-// layers.
-//
-// If the list of layers fits within the layer budget, it is returned.
-//
-// Otherwise, a merge rating is calculated for each layer. This is the
-// product of the layer's total size and its root node's popularity.
-//
-// Layers are then merged in ascending order of merge ratings until
-// they fit into the layer budget.
-//
-// # Threshold values
-//
-// Threshold values for the partitioning conditions mentioned above
-// have not yet been determined, but we will make a good first guess
-// based on gut feeling and proceed to measure their impact on cache
-// hits/misses.
-//
-// # Example
-//
-// Using the logic described above as well as the example presented in
-// the introduction, this program would create the following layer
-// groupings (assuming no additional partitioning):
-//
-// Layer budget: 1
-// Layers: { A, B, C, D, E, F, G }
-//
-// Layer budget: 2
-// Layers: { G }, { A, B, C, D, E, F }
-//
-// Layer budget: 3
-// Layers: { G }, { E }, { A, B, C, D, F }
-//
-// Layer budget: 4
-// Layers: { G }, { E }, { D, F }, { A, B, C }
-//
-// ...
-//
-// Layer budget: 10
-// Layers: { E }, { D, F }, { A }, { B }, { C }
-package main
-
-import (
- "encoding/json"
- "flag"
- "io/ioutil"
- "log"
- "regexp"
- "sort"
-
- "gonum.org/v1/gonum/graph/flow"
- "gonum.org/v1/gonum/graph/simple"
-)
-
-// closureGraph represents the structured attributes Nix outputs when asking it
-// for the exportReferencesGraph of a list of derivations.
-type exportReferences struct {
- References struct {
- Graph []string `json:"graph"`
- } `json:"exportReferencesGraph"`
-
- Graph []struct {
- Size uint64 `json:"closureSize"`
- Path string `json:"path"`
- Refs []string `json:"references"`
- } `json:"graph"`
-}
-
-// Popularity data for each Nix package that was calculated in advance.
-//
-// Popularity is a number from 1-100 that represents the
-// popularity percentile in which this package resides inside
-// of the nixpkgs tree.
-type pkgsMetadata = map[string]int
-
-// layer represents the data returned for each layer that Nix should
-// build for the container image.
-type layer struct {
- Contents []string `json:"contents"`
- mergeRating uint64
-}
-
-func (a layer) merge(b layer) layer {
- a.Contents = append(a.Contents, b.Contents...)
- a.mergeRating += b.mergeRating
- return a
-}
-
-// closure as pointed to by the graph nodes.
-type closure struct {
- GraphID int64
- Path string
- Size uint64
- Refs []string
- Popularity int
-}
-
-func (c *closure) ID() int64 {
- return c.GraphID
-}
-
-var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)
-
-func (c *closure) DOTID() string {
- return nixRegexp.ReplaceAllString(c.Path, "")
-}
-
-// bigOrPopular checks whether this closure should be considered for
-// separation into its own layer, even if it would otherwise only
-// appear in a subtree of the dominator tree.
-func (c *closure) bigOrPopular() bool {
- const sizeThreshold = 100 * 1000000 // 100MB
-
- if c.Size > sizeThreshold {
- return true
- }
-
- // The threshold value used here is currently roughly the
- // minimum number of references that only 1% of packages in
- // the entire package set have.
- //
- // TODO(tazjin): Do this more elegantly by calculating
- // percentiles for each package and using those instead.
- if c.Popularity >= 1000 {
- return true
- }
-
- return false
-}
-
-func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *closure) {
- // Big or popular nodes get a separate edge from the top to
- // flag them for their own layer.
- if node.bigOrPopular() && !graph.HasEdgeFromTo(0, node.ID()) {
- edge := graph.NewEdge(graph.Node(0), node)
- graph.SetEdge(edge)
- }
-
- for _, c := range node.Refs {
- // Nix adds a self reference to each node, which
- // should not be inserted.
- if c != node.Path {
- edge := graph.NewEdge(node, (*cmap)[c])
- graph.SetEdge(edge)
- }
- }
-}
-
-// Create a graph structure from the references supplied by Nix.
-func buildGraph(refs *exportReferences, pop *pkgsMetadata) *simple.DirectedGraph {
- cmap := make(map[string]*closure)
- graph := simple.NewDirectedGraph()
-
- // Insert all closures into the graph, as well as a fake root
- // closure which serves as the top of the tree.
- //
- // A map from store paths to IDs is kept to actually insert
- // edges below.
- root := &closure{
- GraphID: 0,
- Path: "image_root",
- }
- graph.AddNode(root)
-
- for idx, c := range refs.Graph {
- node := &closure{
- GraphID: int64(idx + 1), // inc because of root node
- Path: c.Path,
- Size: c.Size,
- Refs: c.Refs,
- }
-
- if p, ok := (*pop)[node.DOTID()]; ok {
- node.Popularity = p
- } else {
- node.Popularity = 1
- }
-
- graph.AddNode(node)
- cmap[c.Path] = node
- }
-
- // Insert the top-level closures with edges from the root
- // node, then insert all edges for each closure.
- for _, p := range refs.References.Graph {
- edge := graph.NewEdge(root, cmap[p])
- graph.SetEdge(edge)
- }
-
- for _, c := range cmap {
- insertEdges(graph, &cmap, c)
- }
-
- return graph
-}
-
-// Extracts a subgraph starting at the specified root from the
-// dominator tree. The subgraph is converted into a flat list of
-// layers, each containing the store paths and merge rating.
-func groupLayer(dt *flow.DominatorTree, root *closure) layer {
- size := root.Size
- contents := []string{root.Path}
- children := dt.DominatedBy(root.ID())
-
- // This iteration does not use 'range' because the list being
- // iterated is modified during the iteration (yes, I'm sorry).
- for i := 0; i < len(children); i++ {
- child := children[i].(*closure)
- size += child.Size
- contents = append(contents, child.Path)
- children = append(children, dt.DominatedBy(child.ID())...)
- }
-
- return layer{
- Contents: contents,
- // TODO(tazjin): The point of this is to factor in
- // both the size and the popularity when making merge
- // decisions, but there might be a smarter way to do
- // it than a plain multiplication.
- mergeRating: uint64(root.Popularity) * size,
- }
-}
-
-// Calculate the dominator tree of the entire package set and group
-// each top-level subtree into a layer.
-//
-// Layers are merged together until they fit into the layer budget,
-// based on their merge rating.
-func dominate(budget int, graph *simple.DirectedGraph) []layer {
- dt := flow.Dominators(graph.Node(0), graph)
-
- var layers []layer
- for _, n := range dt.DominatedBy(dt.Root().ID()) {
- layers = append(layers, groupLayer(&dt, n.(*closure)))
- }
-
- sort.Slice(layers, func(i, j int) bool {
- return layers[i].mergeRating < layers[j].mergeRating
- })
-
- if len(layers) > budget {
- log.Printf("Ideal image has %v layers, but budget is %v\n", len(layers), budget)
- }
-
- for len(layers) > budget {
- merged := layers[0].merge(layers[1])
- layers[1] = merged
- layers = layers[1:]
- }
-
- return layers
-}
-
-func main() {
- graphFile := flag.String("graph", ".attrs.json", "Input file containing graph")
- popFile := flag.String("pop", "popularity.json", "Package popularity data")
- outFile := flag.String("out", "layers.json", "File to write layers to")
- layerBudget := flag.Int("budget", 94, "Total layer budget available")
- flag.Parse()
-
- // Parse graph data
- file, err := ioutil.ReadFile(*graphFile)
- if err != nil {
- log.Fatalf("Failed to load input: %s\n", err)
- }
-
- var refs exportReferences
- err = json.Unmarshal(file, &refs)
- if err != nil {
- log.Fatalf("Failed to deserialise input: %s\n", err)
- }
-
- // Parse popularity data
- popBytes, err := ioutil.ReadFile(*popFile)
- if err != nil {
- log.Fatalf("Failed to load input: %s\n", err)
- }
-
- var pop pkgsMetadata
- err = json.Unmarshal(popBytes, &pop)
- if err != nil {
- log.Fatalf("Failed to deserialise input: %s\n", err)
- }
-
- graph := buildGraph(&refs, &pop)
- layers := dominate(*layerBudget, graph)
-
- j, _ := json.Marshal(layers)
- ioutil.WriteFile(*outFile, j, 0644)
-}
diff --git a/tools/nixery/server/default.nix b/tools/nixery/server/default.nix
index 394d2b27b442..0d0c056a56f4 100644
--- a/tools/nixery/server/default.nix
+++ b/tools/nixery/server/default.nix
@@ -1,3 +1,17 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
{ buildGoPackage, lib }:
buildGoPackage {
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index d20ede2eb587..3e015e8587fc 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -123,7 +123,6 @@ func signingOptsFromEnv() *storage.SignedURLOptions {
type config struct {
bucket string // GCS bucket to cache & serve layers
signing *storage.SignedURLOptions // Signing options to use for GCS URLs
- builder string // Nix derivation for building images
port string // Port on which to launch HTTP server
pkgs *pkgSource // Source for Nix package set
}
@@ -208,16 +207,14 @@ func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage
}
args := []string{
- "--no-out-link",
- "--show-trace",
"--argstr", "name", image.name,
- "--argstr", "packages", string(packages), cfg.builder,
+ "--argstr", "packages", string(packages),
}
if cfg.pkgs != nil {
args = append(args, "--argstr", "pkgSource", cfg.pkgs.renderSource(image.tag))
}
- cmd := exec.Command("nix-build", args...)
+ cmd := exec.Command("nixery-build-image", args...)
outpipe, err := cmd.StdoutPipe()
if err != nil {
@@ -466,7 +463,6 @@ func getConfig(key, desc string) string {
func main() {
cfg := &config{
bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
- builder: getConfig("NIX_BUILDER", "Nix image builder code"),
port: getConfig("PORT", "HTTP port"),
pkgs: pkgSourceFromEnv(),
signing: signingOptsFromEnv(),
--
cgit 1.4.1
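
For reference, the `manifest-output.json` structure produced by the image build expression is either an error object (`error` plus the offending `pkgs`) or a registry manifest together with a `layerLocations` map keyed by SHA256 digest. The Go sketch below shows one way such an output could be decoded; the struct, its field tags and the file name mirror the attribute names visible in the Nix code above, but the program itself is illustrative and not part of this patch.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
)

// buildOutput mirrors the attribute set serialised by the image build
// expression: either an error ("not_found" plus missing packages) or a
// manifest with the local filesystem locations of all layer tarballs.
type buildOutput struct {
	Error string   `json:"error"`
	Pkgs  []string `json:"pkgs"`

	Manifest       json.RawMessage `json:"manifest"`
	LayerLocations map[string]struct {
		Path string `json:"path"`
		MD5  []byte `json:"md5"`
	} `json:"layerLocations"`
}

func main() {
	data, err := ioutil.ReadFile("manifest-output.json")
	if err != nil {
		log.Fatalf("failed to read build output: %s", err)
	}

	var out buildOutput
	if err := json.Unmarshal(data, &out); err != nil {
		log.Fatalf("failed to decode build output: %s", err)
	}

	if out.Error != "" {
		log.Fatalf("build failed (%s), missing packages: %v", out.Error, out.Pkgs)
	}

	for digest, layer := range out.LayerLocations {
		fmt.Printf("layer sha256:%s -> %s\n", digest, layer.Path)
	}
}
```

Because the Nix code emits the MD5 digests base64-encoded, they decode naturally into `[]byte` here, which is also the form in which the storage bucket reports object checksums.
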
From 6035bf36eb93bc30db6ac40739913358e71d1121 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 12 Aug 2019 17:47:27 +0100
Subject: feat(popcount): Clean up popularity counting script
Adds the script used to generate the popularity information for all of
nixpkgs.
The README lists the (currently somewhat rough) usage instructions.
---
tools/nixery/group-layers/popcount | 13 ---------
tools/nixery/group-layers/popcount.nix | 51 --------------------------------
tools/nixery/popcount/README.md | 39 +++++++++++++++++++++++++
tools/nixery/popcount/empty.json | 1 +
tools/nixery/popcount/popcount | 13 +++++++++
tools/nixery/popcount/popcount.nix | 53 ++++++++++++++++++++++++++++++++++
6 files changed, 106 insertions(+), 64 deletions(-)
delete mode 100755 tools/nixery/group-layers/popcount
delete mode 100644 tools/nixery/group-layers/popcount.nix
create mode 100644 tools/nixery/popcount/README.md
create mode 100644 tools/nixery/popcount/empty.json
create mode 100755 tools/nixery/popcount/popcount
create mode 100644 tools/nixery/popcount/popcount.nix
(limited to 'tools')
diff --git a/tools/nixery/group-layers/popcount b/tools/nixery/group-layers/popcount
deleted file mode 100755
index 83baf3045da7..000000000000
--- a/tools/nixery/group-layers/popcount
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/bash
-set -ueo pipefail
-
-function graphsFor() {
- local pkg="${1}"
- local graphs=$(nix-build --timeout 2 --argstr target "${pkg}" popcount.nix || echo -n 'empty.json')
- cat $graphs | jq -r -cM '.[] | .references[]'
-}
-
-for pkg in $(cat all-top-level.json | jq -r '.[]'); do
- graphsFor "${pkg}" 2>/dev/null
- echo "Printed refs for ${pkg}" >&2
-done
diff --git a/tools/nixery/group-layers/popcount.nix b/tools/nixery/group-layers/popcount.nix
deleted file mode 100644
index e21d7367724b..000000000000
--- a/tools/nixery/group-layers/popcount.nix
+++ /dev/null
@@ -1,51 +0,0 @@
-{ pkgs ? import <nixpkgs> { config.allowUnfree = false; }
-, target }:
-
-let
- inherit (pkgs) coreutils runCommand writeText;
- inherit (builtins) replaceStrings readFile toFile fromJSON toJSON foldl' listToAttrs;
-
- path = [ pkgs."${target}" ];
-
- # graphJSON abuses feature in Nix that makes structured runtime
- # closure information available to builders. This data is imported
- # back via IFD to process it for layering data.
- graphJSON =
- path:
- runCommand "build-graph" {
- __structuredAttrs = true;
- exportReferencesGraph.graph = path;
- PATH = "${coreutils}/bin";
- builder = toFile "builder" ''
- . .attrs.sh
- cat .attrs.json > ''${outputs[out]}
- '';
- } "";
-
- buildClosures = paths: (fromJSON (readFile (graphJSON paths)));
-
- buildGraph = paths: listToAttrs (map (c: {
- name = c.path;
- value = {
- inherit (c) closureSize references;
- };
- }) (buildClosures paths));
-
- # Nix does not allow attrbute set keys to refer to store paths, but
- # we need them to for the purpose of the calculation. To work around
- # it, the store path prefix is replaced with the string 'closure/'
- # and later replaced again.
- fromStorePath = replaceStrings [ "/nix/store" ] [ "closure/" ];
- toStorePath = replaceStrings [ "closure/" ] [ "/nix/store/" ];
-
- buildTree = paths:
- let
- graph = buildGraph paths;
- top = listToAttrs (map (p: {
- name = fromStorePath (toString p);
- value = {};
- }) paths);
- in top;
-
- outputJson = thing: writeText "the-thing.json" (builtins.toJSON thing);
-in outputJson (buildClosures path).graph
diff --git a/tools/nixery/popcount/README.md b/tools/nixery/popcount/README.md
new file mode 100644
index 000000000000..8485a4d30e9c
--- /dev/null
+++ b/tools/nixery/popcount/README.md
@@ -0,0 +1,39 @@
+popcount
+========
+
+This script is used to count the popularity of each package in `nixpkgs` by
+determining how many other packages depend on it.
+
+It skips over all packages that fail to build, are not cached or are unfree -
+but these omissions do not meaningfully affect the statistics.
+
+It currently does not evaluate nested attribute sets (such as
+`haskellPackages`).
+
+## Usage
+
+1. Generate a list of all top-level attributes in `nixpkgs`:
+
+ ```shell
+ nix eval '(with builtins; toJSON (attrNames (import <nixpkgs> {})))' | jq -r | jq > all-top-level.json
+ ```
+
+2. Run `./popcount > all-runtime-deps.txt`
+
+3. Collect and count the results with the following magic incantation:
+
+ ```shell
+ cat all-runtime-deps.txt \
+ | sed -r 's|/nix/store/[a-z0-9]+-||g' \
+ | sort \
+ | uniq -c \
+ | sort -n -r \
+ | awk '{ print "{\"" $2 "\":" $1 "}"}' \
+ | jq -c -s '. | add | with_entries(select(.value > 1))' \
+ > your-output-file
+ ```
+
+ In essence, this will trim Nix's store paths and hashes from the output,
+ count the occurrences of each package and return the output as JSON. All
+ packages that have no references other than themselves are removed from the
+ output.
diff --git a/tools/nixery/popcount/empty.json b/tools/nixery/popcount/empty.json
new file mode 100644
index 000000000000..fe51488c7066
--- /dev/null
+++ b/tools/nixery/popcount/empty.json
@@ -0,0 +1 @@
+[]
diff --git a/tools/nixery/popcount/popcount b/tools/nixery/popcount/popcount
new file mode 100755
index 000000000000..83baf3045da7
--- /dev/null
+++ b/tools/nixery/popcount/popcount
@@ -0,0 +1,13 @@
+#!/bin/bash
+set -ueo pipefail
+
+function graphsFor() {
+ local pkg="${1}"
+ local graphs=$(nix-build --timeout 2 --argstr target "${pkg}" popcount.nix || echo -n 'empty.json')
+ cat $graphs | jq -r -cM '.[] | .references[]'
+}
+
+for pkg in $(cat all-top-level.json | jq -r '.[]'); do
+ graphsFor "${pkg}" 2>/dev/null
+ echo "Printed refs for ${pkg}" >&2
+done
diff --git a/tools/nixery/popcount/popcount.nix b/tools/nixery/popcount/popcount.nix
new file mode 100644
index 000000000000..54fd2ad589ee
--- /dev/null
+++ b/tools/nixery/popcount/popcount.nix
@@ -0,0 +1,53 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This script, given a target attribute in `nixpkgs`, builds the
+# target derivation's runtime closure and returns its reference graph.
+#
+# This is invoked by popcount.sh for each package in nixpkgs to
+# collect all package references, so that package popularity can be
+# tracked.
+#
+# Check out build-image/group-layers.go for an in-depth explanation of
+# what the popularity counts are used for.
+
+{ pkgs ? import <nixpkgs> { config.allowUnfree = false; }, target }:
+
+let
+ inherit (pkgs) coreutils runCommand writeText;
+ inherit (builtins) readFile toFile fromJSON toJSON listToAttrs;
+
+  # graphJSON abuses a feature in Nix that makes structured runtime
+ # closure information available to builders. This data is imported
+ # back via IFD to process it for layering data.
+ graphJSON = path:
+ runCommand "build-graph" {
+ __structuredAttrs = true;
+ exportReferencesGraph.graph = path;
+ PATH = "${coreutils}/bin";
+ builder = toFile "builder" ''
+ . .attrs.sh
+ cat .attrs.json > ''${outputs[out]}
+ '';
+ } "";
+
+ buildClosures = paths: (fromJSON (readFile (graphJSON paths)));
+
+ buildGraph = paths:
+ listToAttrs (map (c: {
+ name = c.path;
+ value = { inherit (c) closureSize references; };
+ }) (buildClosures paths));
+in writeText "${target}-graph"
+(toJSON (buildClosures [ pkgs."${target}" ]).graph)
--
cgit 1.4.1
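
The collection pipeline in the README is plain text processing, so the same aggregation can be expressed in any language. The Go program below is a purely illustrative equivalent of the `sed`/`sort`/`uniq`/`jq` incantation: it reads raw store paths (one per line, as printed by `./popcount`) from stdin, strips the store prefix and hash, counts occurrences and prints the popularity JSON. The file name used in the usage line afterwards is hypothetical.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"regexp"
)

// Strips the '/nix/store/<hash>-' prefix so that only package names remain.
var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)

func main() {
	counts := make(map[string]int)

	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		pkg := nixRegexp.ReplaceAllString(scanner.Text(), "")
		if pkg != "" {
			counts[pkg]++
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatalf("failed to read input: %s", err)
	}

	// Drop packages that are only referenced once (i.e. only by
	// themselves), mirroring 'select(.value > 1)' in the jq pipeline.
	for pkg, n := range counts {
		if n <= 1 {
			delete(counts, pkg)
		}
	}

	j, err := json.Marshal(counts)
	if err != nil {
		log.Fatalf("failed to serialise counts: %s", err)
	}
	fmt.Println(string(j))
}
```

Used as, for example, `go run popcount-merge.go < all-runtime-deps.txt > popularity.json`, the output has the same shape as the popularity file consumed by `group-layers --pop`.
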
From f60f702274191f87b23dab5393420b27a50952fc Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 12 Aug 2019 17:53:46 +0100
Subject: feat: Add shell.nix for running a local Nixery
---
tools/nixery/shell.nix | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
create mode 100644 tools/nixery/shell.nix
(limited to 'tools')
diff --git a/tools/nixery/shell.nix b/tools/nixery/shell.nix
new file mode 100644
index 000000000000..49d0e5581971
--- /dev/null
+++ b/tools/nixery/shell.nix
@@ -0,0 +1,27 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Configures a shell environment that builds required local packages to
+# run Nixery.
+{pkgs ? import <nixpkgs> {} }:
+
+let nixery = import ./default.nix { inherit pkgs; };
+in pkgs.stdenv.mkDerivation {
+ name = "nixery-dev-shell";
+
+ buildInputs = with pkgs;[
+ jq
+ nixery.nixery-build-image
+ ];
+}
--
cgit 1.4.1
From 7214d0aa4f05d1de25911ec6b99d3feb3dcbd1b5 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 12 Aug 2019 19:02:13 +0100
Subject: feat(build-image): Introduce a terrifying hack to build group-layers
The issue is described in detail in a comment in
`build-image/default.nix`; please read it.
---
tools/nixery/build-image/build-image.nix | 26 +++++++++++-
tools/nixery/build-image/default.nix | 71 +++++++++++++++++++++++++++-----
tools/nixery/default.nix | 2 +-
3 files changed, 86 insertions(+), 13 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index 37156905fa38..e5b195a63d11 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -22,6 +22,8 @@
name,
# Image tag, the Nix's output hash will be used if null
tag ? null,
+ # Tool used to determine layer grouping
+ groupLayers,
# Files to put on the image (a nix store path or list of paths).
contents ? [],
# Packages to install by name (which must refer to top-level attributes of
@@ -48,7 +50,8 @@
# '!' was chosen as the separator because `builtins.split` does not
# support regex escapes and there are few other candidates. It
# doesn't matter much because this is invoked by the server.
- pkgSource ? "nixpkgs!nixos-19.03"
+ pkgSource ? "nixpkgs!nixos-19.03",
+ ...
}:
let
@@ -165,6 +168,25 @@ let
paths = allContents.contents;
};
+ # Before actually creating any image layers, the store paths that need to be
+ # included in the image must be sorted into the layers that they should go
+ # into.
+ #
+ # How contents are allocated to each layer is decided by the `group-layers.go`
+ # program. The mechanism used is described at the top of the program's source
+ # code, or alternatively in the layering design document:
+ #
+ # https://storage.googleapis.com/nixdoc/nixery-layers.html
+ #
+ # To invoke the program, a graph of all runtime references is created via
+ # Nix's exportReferencesGraph feature - the resulting layers are read back
+ # into Nix using import-from-derivation.
+ groupedLayers = runCommand "grouped-layers.json" {
+ buildInputs = [ groupLayers ];
+ } ''
+ group-layers --fnorg
+ '';
+
# The image build infrastructure expects to be outputting a slightly different
# format than the one we serve over the registry protocol. To work around its
# expectations we need to provide an empty JSON file that it can write some
@@ -287,6 +309,6 @@ let
pkgs = map (err: err.pkg) allContents.errors;
};
in writeText "manifest-output.json" (if (length allContents.errors) == 0
- then toJSON manifestOutput
+ then toJSON groupedLayers # manifestOutput
else toJSON errorOutput
)
diff --git a/tools/nixery/build-image/default.nix b/tools/nixery/build-image/default.nix
index 4962e07deee9..cff403995884 100644
--- a/tools/nixery/build-image/default.nix
+++ b/tools/nixery/build-image/default.nix
@@ -16,25 +16,76 @@
# moves the files needed to call the Nix builds at runtime in the
# correct locations.
-{ buildGoPackage, lib, nix, writeShellScriptBin }:
+{ pkgs ? import <nixpkgs> { }, self ? ./.
-let
- group-layers = buildGoPackage {
+  # Because of the insanity occurring below, this function must mirror
+ # all arguments of build-image.nix.
+, tag ? null, name ? null, packages ? null, maxLayers ? null, pkgSource ? null
+}@args:
+
+with pkgs; rec {
+ groupLayers = buildGoPackage {
name = "group-layers";
goDeps = ./go-deps.nix;
- src = ./.;
-
goPackagePath = "github.com/google/nixery/group-layers";
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ # WARNING: HERE BE DRAGONS! #
+ # #
+ # The hack below is used to break evaluation purity. The issue is #
+ # that Nixery's build instructions (the default.nix in the folder #
+ # above this one) must build a program that can invoke Nix at #
+ # runtime, with a derivation that needs a program tracked in this #
+ # source tree (`group-layers`). #
+ # #
+ # Simply installing that program in the $PATH of Nixery does not #
+ # work, because the runtime Nix builds use their own isolated #
+ # environment. #
+ # #
+ # I first attempted to naively copy the sources into the Nix #
+ # store, so that Nixery could build `group-layers` when it starts #
+ # up - however those sources are not available to a nested Nix #
+ # build because they're not part of the context of the nested #
+ # invocation. #
+ # #
+ # Nix has several primitives under `builtins.` that can break #
+ # evaluation purity, these (namely readDir and readFile) are used #
+ # below to break out of the isolated environment and reconstruct #
+ # the source tree for `group-layers`. #
+ # #
+ # There might be a better way to do this, but I don't know what #
+ # it is. #
+ # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
+ src = runCommand "group-layers-srcs" { } ''
+ mkdir -p $out
+ ${with builtins;
+ let
+ files =
+ (attrNames (lib.filterAttrs (_: t: t != "symlink") (readDir self)));
+ commands =
+ map (f: "cp ${toFile f (readFile "${self}/${f}")} $out/${f}") files;
+ in lib.concatStringsSep "\n" commands}
+ '';
+
meta = {
- description = "Tool to group a set of packages into container image layers";
+ description =
+ "Tool to group a set of packages into container image layers";
license = lib.licenses.asl20;
maintainers = [ lib.maintainers.tazjin ];
};
};
+ buildImage = import ./build-image.nix
+ ({ inherit groupLayers; } // (lib.filterAttrs (_: v: v != null) args));
+
# Wrapper script which is called by the Nixery server to trigger an
- # actual image build.
-in writeShellScriptBin "nixery-build-image" ''
- exec ${nix}/bin/nix-build --show-trace --no-out-link "$@" ${./build-image.nix}
-''
+ # actual image build. This exists to avoid having to specify the
+ # location of build-image.nix at runtime.
+ wrapper = writeShellScriptBin "nixery-build-image" ''
+ exec ${nix}/bin/nix-build \
+ --show-trace \
+ --no-out-link "$@" \
+ --argstr self "${./.}" \
+ -A buildImage ${./.}
+ '';
+}
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 7d201869dc90..926ab0d19f6b 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -26,7 +26,7 @@ rec {
nixery-server = callPackage ./server {};
# Implementation of the image building & layering logic
- nixery-build-image = callPackage ./build-image {};
+ nixery-build-image = (import ./build-image { inherit pkgs; }).wrapper;
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
--
cgit 1.4.1
From 6285cd8dbfb77c287fc5b30263bbfd5770b47413 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 13 Aug 2019 00:29:08 +0100
Subject: feat(build-image): Use new image layering algorithm for images
Removes usage of the old layering algorithm and replaces it with the
new one.
Apart from the new layer layout this means that each layer is now
built in a separate derivation, which hopefully leads to better
cacheability.
---
tools/nixery/build-image/build-image.nix | 85 ++++++++++++++++++--------------
1 file changed, 47 insertions(+), 38 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index e5b195a63d11..d68ed6d37844 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -54,12 +54,12 @@
...
}:
-let
+with builtins; let
# If a nixpkgs channel is requested, it is retrieved from Github (as
# a tarball) and imported.
fetchImportChannel = channel:
let url = "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
- in import (builtins.fetchTarball url) {};
+ in import (fetchTarball url) {};
# If a git repository is requested, it is retrieved via
# builtins.fetchGit which defaults to the git configuration of the
@@ -76,32 +76,31 @@ let
# There are some additional caveats around whether the default
# branch contains the specified revision, which need to be
# explained to users.
- spec = if (builtins.stringLength rev) == 40 then {
+ spec = if (stringLength rev) == 40 then {
inherit url rev;
} else {
inherit url;
ref = rev;
};
- in import (builtins.fetchGit spec) {};
+ in import (fetchGit spec) {};
- importPath = path: import (builtins.toPath path) {};
+ importPath = path: import (toPath path) {};
- source = builtins.split "!" pkgSource;
- sourceType = builtins.elemAt source 0;
- pkgs = with builtins;
+ source = split "!" pkgSource;
+ sourceType = elemAt source 0;
+ pkgs =
if sourceType == "nixpkgs"
then fetchImportChannel (elemAt source 2)
else if sourceType == "git"
then fetchImportGit (elemAt source 2) (elemAt source 4)
else if sourceType == "path"
then importPath (elemAt source 2)
- else builtins.throw("Invalid package set source specification: ${pkgSource}");
+ else throw("Invalid package set source specification: ${pkgSource}");
in
# Since this is essentially a re-wrapping of some of the functionality that is
# implemented in the dockerTools, we need all of its components in our top-level
# namespace.
-with builtins;
with pkgs;
with dockerTools;
@@ -168,6 +167,11 @@ let
paths = allContents.contents;
};
+ popularity = builtins.fetchurl {
+ url = "https://storage.googleapis.com/nixery-layers/popularity/nixos-19.03-20190812.json";
+ sha256 = "16sxd49vqqg2nrhwynm36ba6bc2yff5cd5hf83wi0hanw5sx3svk";
+ };
+
# Before actually creating any image layers, the store paths that need to be
# included in the image must be sorted into the layers that they should go
# into.
@@ -181,31 +185,37 @@ let
# To invoke the program, a graph of all runtime references is created via
# Nix's exportReferencesGraph feature - the resulting layers are read back
# into Nix using import-from-derivation.
- groupedLayers = runCommand "grouped-layers.json" {
- buildInputs = [ groupLayers ];
+ groupedLayers = fromJSON (readFile (runCommand "grouped-layers.json" {
+ __structuredAttrs = true;
+ exportReferencesGraph.graph = allContents.contents;
+ PATH = "${groupLayers}/bin";
+ builder = toFile "builder" ''
+ . .attrs.sh
+ group-layers --budget ${toString (maxLayers - 1)} --pop ${popularity} --out ''${outputs[out]}
+ '';
+ } ""));
+
+ # Given a list of store paths, create an image layer tarball with
+ # their contents.
+ pathsToLayer = paths: runCommand "layer.tar" {
} ''
- group-layers --fnorg
- '';
+ tar --no-recursion -rf "$out" \
+ --mtime="@$SOURCE_DATE_EPOCH" \
+ --owner=0 --group=0 /nix /nix/store
- # The image build infrastructure expects to be outputting a slightly different
- # format than the one we serve over the registry protocol. To work around its
- # expectations we need to provide an empty JSON file that it can write some
- # fun data into.
- emptyJson = writeText "empty.json" "{}";
+ tar -rpf "$out" --hard-dereference --sort=name \
+ --mtime="@$SOURCE_DATE_EPOCH" \
+ --owner=0 --group=0 ${lib.concatStringsSep " " paths}
+ '';
- bulkLayers = mkManyPureLayers {
- name = baseName;
- configJson = emptyJson;
- closure = writeText "closure" "${contentsEnv} ${emptyJson}";
- # One layer will be taken up by the customisationLayer, so
- # take up one less.
- maxLayers = maxLayers - 1;
- };
+ bulkLayers = writeText "bulk-layers"
+ (lib.concatStringsSep "\n" (map (layer: pathsToLayer layer.contents)
+ groupedLayers));
customisationLayer = mkCustomisationLayer {
name = baseName;
contents = contentsEnv;
- baseJson = emptyJson;
+ baseJson = writeText "empty.json" "{}";
inherit uid gid extraCommands;
};
@@ -214,23 +224,22 @@ let
#
# This computes both an MD5 and a SHA256 hash of each layer, which are used
# for different purposes. See the registry server implementation for details.
- #
- # Some of this logic is copied straight from `buildLayeredImage`.
allLayersJson = runCommand "fs-layer-list.json" {
buildInputs = [ coreutils findutils jq openssl ];
} ''
- find ${bulkLayers} -mindepth 1 -maxdepth 1 | sort -t/ -k5 -n > layer-list
- echo ${customisationLayer} >> layer-list
+ cat ${bulkLayers} | sort -t/ -k5 -n > layer-list
+ echo -n layer-list:
+ cat layer-list
+ echo ${customisationLayer}/layer.tar >> layer-list
for layer in $(cat layer-list); do
- layerPath="$layer/layer.tar"
- layerSha256=$(sha256sum $layerPath | cut -d ' ' -f1)
+ layerSha256=$(sha256sum $layer | cut -d ' ' -f1)
# The server application compares binary MD5 hashes and expects base64
# encoding instead of hex.
- layerMd5=$(openssl dgst -md5 -binary $layerPath | openssl enc -base64)
- layerSize=$(wc -c $layerPath | cut -d ' ' -f1)
+ layerMd5=$(openssl dgst -md5 -binary $layer | openssl enc -base64)
+ layerSize=$(wc -c $layer | cut -d ' ' -f1)
- jq -n -c --arg sha256 $layerSha256 --arg md5 $layerMd5 --arg size $layerSize --arg path $layerPath \
+ jq -n -c --arg sha256 $layerSha256 --arg md5 $layerMd5 --arg size $layerSize --arg path $layer \
'{ size: ($size | tonumber), sha256: $sha256, md5: $md5, path: $path }' >> fs-layers
done
@@ -309,6 +318,6 @@ let
pkgs = map (err: err.pkg) allContents.errors;
};
in writeText "manifest-output.json" (if (length allContents.errors) == 0
- then toJSON groupedLayers # manifestOutput
+ then toJSON manifestOutput
else toJSON errorOutput
)
--
cgit 1.4.1
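
To make the hashing step above explicit: every layer tarball needs a hex-encoded SHA256 (used as its digest in the image manifest) and a base64-encoded binary MD5 (used later to compare against the checksum reported by the storage bucket). The Go function below computes both in a single pass over the file and is an illustrative stand-in for the `sha256sum` and `openssl` invocations in the builder, not code from this patch.

```go
package main

import (
	"crypto/md5"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// layerDigests returns the hex SHA256, the base64-encoded binary MD5 and the
// size in bytes of a layer tarball, matching what the shell snippet computes
// with 'sha256sum', 'openssl dgst -md5 -binary | openssl enc -base64' and 'wc -c'.
func layerDigests(path string) (shaHex, md5B64 string, size int64, err error) {
	f, err := os.Open(path)
	if err != nil {
		return "", "", 0, err
	}
	defer f.Close()

	s := sha256.New()
	m := md5.New()

	// Feed the file through both hash functions at once.
	size, err = io.Copy(io.MultiWriter(s, m), f)
	if err != nil {
		return "", "", 0, err
	}

	return hex.EncodeToString(s.Sum(nil)),
		base64.StdEncoding.EncodeToString(m.Sum(nil)),
		size, nil
}

func main() {
	if len(os.Args) != 2 {
		log.Fatalln("usage: layer-digests <layer.tar>")
	}

	sha, md5sum, size, err := layerDigests(os.Args[1])
	if err != nil {
		log.Fatalf("failed to hash layer: %s", err)
	}

	fmt.Printf("sha256:%s md5:%s size:%d\n", sha, md5sum, size)
}
```

Computing both digests in one read avoids scanning each tarball several times, a minor difference from the shell version that does not change the results.
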
From 3939722063f3d08a547fa98e17aac609f7f765ac Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 13 Aug 2019 00:35:42 +0100
Subject: style: Apply nixfmt to trivial Nix files
All the ones except for build-image.nix are considered trivial. On the
latter, nixfmt makes some useful changes, but by and large it is not
ready for that code yet.
---
tools/nixery/build-image/go-deps.nix | 20 +++++++++-----------
tools/nixery/default.nix | 9 ++++-----
tools/nixery/docs/default.nix | 8 ++++----
tools/nixery/server/default.nix | 4 ++--
tools/nixery/shell.nix | 7 ++-----
5 files changed, 21 insertions(+), 27 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/go-deps.nix b/tools/nixery/build-image/go-deps.nix
index 235c3c4c6dbe..0f22a7088f52 100644
--- a/tools/nixery/build-image/go-deps.nix
+++ b/tools/nixery/build-image/go-deps.nix
@@ -1,12 +1,10 @@
# This file was generated by https://github.com/kamilchm/go2nix v1.3.0
-[
- {
- goPackagePath = "gonum.org/v1/gonum";
- fetch = {
- type = "git";
- url = "https://github.com/gonum/gonum";
- rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
- sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
- };
- }
-]
+[{
+ goPackagePath = "gonum.org/v1/gonum";
+ fetch = {
+ type = "git";
+ url = "https://github.com/gonum/gonum";
+ rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
+ sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
+ };
+}]
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 926ab0d19f6b..686c230553f0 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -11,8 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-{ pkgs ? import <nixpkgs> {}
-, preLaunch ? "" }:
+{ pkgs ? import <nixpkgs> { }, preLaunch ? "" }:
with pkgs;
@@ -23,7 +22,7 @@ rec {
# Users will usually not want to use this directly, instead see the
# 'nixery' derivation below, which automatically includes runtime
# data dependencies.
- nixery-server = callPackage ./server {};
+ nixery-server = callPackage ./server { };
# Implementation of the image building & layering logic
nixery-build-image = (import ./build-image { inherit pkgs; }).wrapper;
@@ -31,7 +30,7 @@ rec {
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
# nixery.dev.
- nixery-book = callPackage ./docs {};
+ nixery-book = callPackage ./docs { };
# Wrapper script running the Nixery server with the above two data
# dependencies configured.
@@ -76,7 +75,7 @@ rec {
'';
in dockerTools.buildLayeredImage {
name = "nixery";
- config.Cmd = ["${nixery-launch-script}/bin/nixery"];
+ config.Cmd = [ "${nixery-launch-script}/bin/nixery" ];
maxLayers = 96;
contents = [
cacert
diff --git a/tools/nixery/docs/default.nix b/tools/nixery/docs/default.nix
index 6a31be4fd4e0..aae2fdde42a5 100644
--- a/tools/nixery/docs/default.nix
+++ b/tools/nixery/docs/default.nix
@@ -40,12 +40,12 @@ let
};
nix-1p = fetchFromGitHub {
- owner = "tazjin";
- repo = "nix-1p";
- rev = "3cd0f7d7b4f487d04a3f1e3ca8f2eb1ab958c49b";
+ owner = "tazjin";
+ repo = "nix-1p";
+ rev = "3cd0f7d7b4f487d04a3f1e3ca8f2eb1ab958c49b";
sha256 = "02lpda03q580gyspkbmlgnb2cbm66rrcgqsv99aihpbwyjym81af";
};
-in runCommand "nixery-book" {} ''
+in runCommand "nixery-book" { } ''
mkdir -p $out
cp -r ${./.}/* .
chmod -R a+w src
diff --git a/tools/nixery/server/default.nix b/tools/nixery/server/default.nix
index 0d0c056a56f4..05ad64261fa5 100644
--- a/tools/nixery/server/default.nix
+++ b/tools/nixery/server/default.nix
@@ -15,9 +15,9 @@
{ buildGoPackage, lib }:
buildGoPackage {
- name = "nixery-server";
+ name = "nixery-server";
goDeps = ./go-deps.nix;
- src = ./.;
+ src = ./.;
goPackagePath = "github.com/google/nixery";
diff --git a/tools/nixery/shell.nix b/tools/nixery/shell.nix
index 49d0e5581971..93cd1f4cec62 100644
--- a/tools/nixery/shell.nix
+++ b/tools/nixery/shell.nix
@@ -14,14 +14,11 @@
# Configures a shell environment that builds required local packages to
# run Nixery.
-{pkgs ? import <nixpkgs> {} }:
+{ pkgs ? import <nixpkgs> { } }:
let nixery = import ./default.nix { inherit pkgs; };
in pkgs.stdenv.mkDerivation {
name = "nixery-dev-shell";
- buildInputs = with pkgs;[
- jq
- nixery.nixery-build-image
- ];
+ buildInputs = with pkgs; [ jq nixery.nixery-build-image ];
}
--
cgit 1.4.1
From d9168e3e4d8ee0be01cbe994d171d933af215f2c Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 13 Aug 2019 23:03:16 +0100
Subject: refactor(build-image): Extract package set loading into helper
Some upcoming changes might require the Nix build to be split into
multiple separate nix-build invocations of different expressions, so
splitting this out is useful.
It also fixes an issue where `build-image/default.nix` might be called
in an environment where no Nix channels are configured.
---
tools/nixery/build-image/build-image.nix | 64 ++--------------------------
tools/nixery/build-image/default.nix | 11 +++--
tools/nixery/build-image/load-pkgs.nix | 73 ++++++++++++++++++++++++++++++++
tools/nixery/default.nix | 4 +-
4 files changed, 87 insertions(+), 65 deletions(-)
create mode 100644 tools/nixery/build-image/load-pkgs.nix
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index d68ed6d37844..b67fef6ceb88 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -18,9 +18,11 @@
# registry API.
{
+  # Package set to use (this will usually be loaded by load-pkgs.nix)
+ pkgs,
# Image Name
name,
- # Image tag, the Nix's output hash will be used if null
+ # Image tag, the Nix output's hash will be used if null
tag ? null,
# Tool used to determine layer grouping
groupLayers,
@@ -36,71 +38,13 @@
# the default here is set to something a little less than that.
maxLayers ? 96,
- # Configuration for which package set to use when building.
- #
- # Both channels of the public nixpkgs repository as well as imports
- # from private repositories are supported.
- #
- # This setting can be invoked with three different formats:
- #
- # 1. nixpkgs!$channel (e.g. nixpkgs!nixos-19.03)
- # 2. git!$repo!$rev (e.g. git!git@github.com:NixOS/nixpkgs.git!master)
- # 3. path!$path (e.g. path!/var/local/nixpkgs)
- #
- # '!' was chosen as the separator because `builtins.split` does not
- # support regex escapes and there are few other candidates. It
- # doesn't matter much because this is invoked by the server.
- pkgSource ? "nixpkgs!nixos-19.03",
...
}:
-with builtins; let
- # If a nixpkgs channel is requested, it is retrieved from Github (as
- # a tarball) and imported.
- fetchImportChannel = channel:
- let url = "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
- in import (fetchTarball url) {};
-
- # If a git repository is requested, it is retrieved via
- # builtins.fetchGit which defaults to the git configuration of the
- # outside environment. This means that user-configured SSH
- # credentials etc. are going to work as expected.
- fetchImportGit = url: rev:
- let
- # builtins.fetchGit needs to know whether 'rev' is a reference
- # (e.g. a branch/tag) or a revision (i.e. a commit hash)
- #
- # Since this data is being extrapolated from the supplied image
- # tag, we have to guess if we want to avoid specifying a format.
- #
- # There are some additional caveats around whether the default
- # branch contains the specified revision, which need to be
- # explained to users.
- spec = if (stringLength rev) == 40 then {
- inherit url rev;
- } else {
- inherit url;
- ref = rev;
- };
- in import (fetchGit spec) {};
-
- importPath = path: import (toPath path) {};
-
- source = split "!" pkgSource;
- sourceType = elemAt source 0;
- pkgs =
- if sourceType == "nixpkgs"
- then fetchImportChannel (elemAt source 2)
- else if sourceType == "git"
- then fetchImportGit (elemAt source 2) (elemAt source 4)
- else if sourceType == "path"
- then importPath (elemAt source 2)
- else throw("Invalid package set source specification: ${pkgSource}");
-in
-
# Since this is essentially a re-wrapping of some of the functionality that is
# implemented in the dockerTools, we need all of its components in our top-level
# namespace.
+with builtins;
with pkgs;
with dockerTools;
diff --git a/tools/nixery/build-image/default.nix b/tools/nixery/build-image/default.nix
index cff403995884..0d3002cb404e 100644
--- a/tools/nixery/build-image/default.nix
+++ b/tools/nixery/build-image/default.nix
@@ -16,14 +16,17 @@
# moves the files needed to call the Nix builds at runtime in the
# correct locations.
-{ pkgs ? import <nixpkgs> { }, self ? ./.
+{ pkgs ? null, self ? ./.
   # Because of the insanity occurring below, this function must mirror
# all arguments of build-image.nix.
-, tag ? null, name ? null, packages ? null, maxLayers ? null, pkgSource ? null
+, pkgSource ? "nixpkgs!nixos-19.03"
+, tag ? null, name ? null, packages ? null, maxLayers ? null
}@args:
-with pkgs; rec {
+let pkgs = import ./load-pkgs.nix { inherit pkgSource; };
+in with pkgs; rec {
+
groupLayers = buildGoPackage {
name = "group-layers";
goDeps = ./go-deps.nix;
@@ -76,7 +79,7 @@ with pkgs; rec {
};
buildImage = import ./build-image.nix
- ({ inherit groupLayers; } // (lib.filterAttrs (_: v: v != null) args));
+ ({ inherit pkgs groupLayers; } // (lib.filterAttrs (_: v: v != null) args));
# Wrapper script which is called by the Nixery server to trigger an
# actual image build. This exists to avoid having to specify the
diff --git a/tools/nixery/build-image/load-pkgs.nix b/tools/nixery/build-image/load-pkgs.nix
new file mode 100644
index 000000000000..3e8b450c45d2
--- /dev/null
+++ b/tools/nixery/build-image/load-pkgs.nix
@@ -0,0 +1,73 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Load a Nix package set from a source specified in one of the following
+# formats:
+#
+# 1. nixpkgs!$channel (e.g. nixpkgs!nixos-19.03)
+# 2. git!$repo!$rev (e.g. git!git@github.com:NixOS/nixpkgs.git!master)
+# 3. path!$path (e.g. path!/var/local/nixpkgs)
+#
+# '!' was chosen as the separator because `builtins.split` does not
+# support regex escapes and there are few other candidates. It
+# doesn't matter much because this is invoked by the server.
+{ pkgSource, args ? { } }:
+
+with builtins;
+let
+ # If a nixpkgs channel is requested, it is retrieved from Github (as
+ # a tarball) and imported.
+ fetchImportChannel = channel:
+ let
+ url =
+ "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
+ in import (fetchTarball url) args;
+
+ # If a git repository is requested, it is retrieved via
+ # builtins.fetchGit which defaults to the git configuration of the
+ # outside environment. This means that user-configured SSH
+ # credentials etc. are going to work as expected.
+ fetchImportGit = url: rev:
+ let
+ # builtins.fetchGit needs to know whether 'rev' is a reference
+ # (e.g. a branch/tag) or a revision (i.e. a commit hash)
+ #
+ # Since this data is being extrapolated from the supplied image
+ # tag, we have to guess if we want to avoid specifying a format.
+ #
+ # There are some additional caveats around whether the default
+ # branch contains the specified revision, which need to be
+ # explained to users.
+ spec = if (stringLength rev) == 40 then {
+ inherit url rev;
+ } else {
+ inherit url;
+ ref = rev;
+ };
+ in import (fetchGit spec) args;
+
+ # No special handling is used for paths, so users are expected to pass one
+ # that will work natively with Nix.
+ importPath = path: import (toPath path) args;
+
+ source = split "!" pkgSource;
+ sourceType = elemAt source 0;
+in if sourceType == "nixpkgs" then
+ fetchImportChannel (elemAt source 2)
+else if sourceType == "git" then
+ fetchImportGit (elemAt source 2) (elemAt source 4)
+else if sourceType == "path" then
+ importPath (elemAt source 2)
+else
+ throw ("Invalid package set source specification: ${pkgSource}")
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 686c230553f0..734a72d57e0b 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -25,7 +25,9 @@ rec {
nixery-server = callPackage ./server { };
# Implementation of the image building & layering logic
- nixery-build-image = (import ./build-image { inherit pkgs; }).wrapper;
+ nixery-build-image = (import ./build-image {
+    pkgSource = "path!${<nixpkgs>}";
+ }).wrapper;
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
--
cgit 1.4.1
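
`load-pkgs.nix` dispatches on a specification with `!` as the field separator: `nixpkgs!<channel>`, `git!<repo>!<rev>` or `path!<path>`. For illustration, the same dispatch written as a small Go program; the `pkgSource` struct and `parsePkgSource` function are hypothetical names and not part of Nixery's code.

```go
package main

import (
	"fmt"
	"log"
	"strings"
)

// pkgSource describes the three package set sources understood by
// load-pkgs.nix: a nixpkgs channel, a git repository plus revision, or a
// local path.
type pkgSource struct {
	srcType string   // "nixpkgs", "git" or "path"
	args    []string // channel, repo/rev or path
}

// parsePkgSource splits a specification such as "git!<repo>!<rev>" on '!',
// analogous to the `builtins.split "!"` call in load-pkgs.nix.
func parsePkgSource(spec string) (*pkgSource, error) {
	parts := strings.Split(spec, "!")

	switch {
	case parts[0] == "nixpkgs" && len(parts) == 2,
		parts[0] == "path" && len(parts) == 2,
		parts[0] == "git" && len(parts) == 3:
		return &pkgSource{srcType: parts[0], args: parts[1:]}, nil
	default:
		return nil, fmt.Errorf("invalid package set source specification: %q", spec)
	}
}

func main() {
	for _, spec := range []string{
		"nixpkgs!nixos-19.03",
		"git!git@github.com:NixOS/nixpkgs.git!master",
		"path!/var/local/nixpkgs",
	} {
		src, err := parsePkgSource(spec)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s -> type=%s args=%v\n", spec, src.srcType, src.args)
	}
}
```

Like the Nix code, this treats `!` purely as a separator and leaves validating the channel, repository or path to whatever consumes the result.
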
From 58380e331340d5fb19726531e1a5b50999b260dc Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 14 Aug 2019 17:20:01 +0100
Subject: refactor(server): Extract build logic into separate module
This module is going to get more complex as the implementation of #32
progresses.
---
tools/nixery/server/builder/builder.go | 208 +++++++++++++++++++++
tools/nixery/server/config/config.go | 131 +++++++++++++
tools/nixery/server/main.go | 330 +++------------------------------
3 files changed, 365 insertions(+), 304 deletions(-)
create mode 100644 tools/nixery/server/builder/builder.go
create mode 100644 tools/nixery/server/config/config.go
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
new file mode 100644
index 000000000000..c53b702e0537
--- /dev/null
+++ b/tools/nixery/server/builder/builder.go
@@ -0,0 +1,208 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// Package builder implements the code required to build images via Nix. Image
+// build data is cached for up to 24 hours to avoid duplicated calls to Nix
+// (which are costly even if no building is performed).
+package builder
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "log"
+ "os"
+ "os/exec"
+ "strings"
+
+ "cloud.google.com/go/storage"
+ "github.com/google/nixery/config"
+)
+
+// Image represents the information necessary for building a container image.
+// This can be either a list of package names (corresponding to keys in the
+// nixpkgs set) or a Nix expression that results in a *list* of derivations.
+type Image struct {
+ Name string
+ Tag string
+
+ // Names of packages to include in the image. These must correspond
+ // directly to top-level names of Nix packages in the nixpkgs tree.
+ Packages []string
+}
+
+// ImageFromName parses an image name into the corresponding structure which can
+// be used to invoke Nix.
+//
+// It will expand convenience names under the hood (see the `convenienceNames`
+// function below).
+func ImageFromName(name string, tag string) Image {
+ packages := strings.Split(name, "/")
+ return Image{
+ Name: name,
+ Tag: tag,
+ Packages: convenienceNames(packages),
+ }
+}
+
+// BuildResult represents the output of calling the Nix derivation responsible
+// for building registry images.
+//
+// The `layerLocations` field contains the local filesystem paths to each
+// individual image layer that will need to be served, while the `manifest`
+// field contains the JSON-representation of the manifest that needs to be
+// served to the client.
+//
+// The latter field is simply treated as opaque JSON and passed through.
+type BuildResult struct {
+ Error string `json:"error"`
+ Pkgs []string `json:"pkgs"`
+
+ Manifest json.RawMessage `json:"manifest"`
+ LayerLocations map[string]struct {
+ Path string `json:"path"`
+ Md5 []byte `json:"md5"`
+ } `json:"layerLocations"`
+}
+
+// convenienceNames expands convenience package names defined by Nixery which
+// let users include commonly required sets of tools in a container quickly.
+//
+// Convenience names must be specified as the first package in an image.
+//
+// Currently defined convenience names are:
+//
+// * `shell`: Includes bash, coreutils and other common command-line tools
+func convenienceNames(packages []string) []string {
+ shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
+
+ if packages[0] == "shell" {
+ return append(packages[1:], shellPackages...)
+ }
+
+ return packages
+}
+
+// Call out to Nix and request that an image be built. Nix will, upon success,
+// return a manifest for the container image.
+func BuildImage(ctx *context.Context, cfg *config.Config, image *Image, bucket *storage.BucketHandle) (*BuildResult, error) {
+ packages, err := json.Marshal(image.Packages)
+ if err != nil {
+ return nil, err
+ }
+
+ args := []string{
+ "--argstr", "name", image.Name,
+ "--argstr", "packages", string(packages),
+ }
+
+ if cfg.Pkgs != nil {
+ args = append(args, "--argstr", "pkgSource", cfg.Pkgs.Render(image.Tag))
+ }
+ cmd := exec.Command("nixery-build-image", args...)
+
+ outpipe, err := cmd.StdoutPipe()
+ if err != nil {
+ return nil, err
+ }
+
+ errpipe, err := cmd.StderrPipe()
+ if err != nil {
+ return nil, err
+ }
+
+ if err = cmd.Start(); err != nil {
+ log.Println("Error starting nix-build:", err)
+ return nil, err
+ }
+ log.Printf("Started Nix image build for '%s'", image.Name)
+
+ stdout, _ := ioutil.ReadAll(outpipe)
+ stderr, _ := ioutil.ReadAll(errpipe)
+
+ if err = cmd.Wait(); err != nil {
+ // TODO(tazjin): Propagate errors upwards in a usable format.
+ log.Printf("nix-build execution error: %s\nstdout: %s\nstderr: %s\n", err, stdout, stderr)
+ return nil, err
+ }
+
+ log.Println("Finished Nix image build")
+
+ buildOutput, err := ioutil.ReadFile(strings.TrimSpace(string(stdout)))
+ if err != nil {
+ return nil, err
+ }
+
+ // The build output returned by Nix is deserialised to add all
+ // contained layers to the bucket. Only the manifest itself is
+ // re-serialised to JSON and returned.
+ var result BuildResult
+ err = json.Unmarshal(buildOutput, &result)
+ if err != nil {
+ return nil, err
+ }
+
+ for layer, meta := range result.LayerLocations {
+ err = uploadLayer(ctx, bucket, layer, meta.Path, meta.Md5)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ return &result, nil
+}
+
+// uploadLayer uploads a single layer to Cloud Storage bucket. Before writing
+// any data the bucket is probed to see if the file already exists.
+//
+// If the file does exist, its MD5 hash is verified to ensure that the stored
+// file is not - for example - a fragment of a previous, incomplete upload.
+func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer string, path string, md5 []byte) error {
+ layerKey := fmt.Sprintf("layers/%s", layer)
+ obj := bucket.Object(layerKey)
+
+ // Before uploading a layer to the bucket, probe whether it already
+ // exists.
+ //
+ // If it does and the MD5 checksum matches the expected one, the layer
+ // upload can be skipped.
+ attrs, err := obj.Attrs(*ctx)
+
+ if err == nil && bytes.Equal(attrs.MD5, md5) {
+ log.Printf("Layer sha256:%s already exists in bucket, skipping upload", layer)
+ } else {
+ writer := obj.NewWriter(*ctx)
+ file, err := os.Open(path)
+
+ if err != nil {
+ return fmt.Errorf("failed to open layer %s from path %s: %v", layer, path, err)
+ }
+
+ size, err := io.Copy(writer, file)
+ if err != nil {
+ return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
+ }
+
+ if err = writer.Close(); err != nil {
+ return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
+ }
+
+ log.Printf("Uploaded layer sha256:%s (%v bytes written)\n", layer, size)
+ }
+
+ return nil
+}
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
new file mode 100644
index 000000000000..4e3b70dcdc22
--- /dev/null
+++ b/tools/nixery/server/config/config.go
@@ -0,0 +1,131 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// Package config implements structures to store Nixery's configuration at
+// runtime as well as the logic for instantiating this configuration from the
+// environment.
+package config
+
+import (
+ "fmt"
+ "io/ioutil"
+ "log"
+ "os"
+
+ "cloud.google.com/go/storage"
+)
+
+// PkgSource represents the source from which the Nix package set used
+// by Nixery is imported. Users configure the source by setting one of
+// the supported environment variables.
+type PkgSource struct {
+ srcType string
+ args string
+}
+
+// Convert the package source into the representation required by Nix.
+func (p *PkgSource) Render(tag string) string {
+ // The 'git' source requires a tag to be present.
+ if p.srcType == "git" {
+ if tag == "latest" || tag == "" {
+ tag = "master"
+ }
+
+ return fmt.Sprintf("git!%s!%s", p.args, tag)
+ }
+
+ return fmt.Sprintf("%s!%s", p.srcType, p.args)
+}
+
+// Retrieve a package source from the environment. If no source is
+// specified, the Nix code will default to a recent NixOS channel.
+func pkgSourceFromEnv() *PkgSource {
+ if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
+ log.Printf("Using Nix package set from Nix channel %q\n", channel)
+ return &PkgSource{
+ srcType: "nixpkgs",
+ args: channel,
+ }
+ }
+
+ if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
+ log.Printf("Using Nix package set from git repository at %q\n", git)
+ return &PkgSource{
+ srcType: "git",
+ args: git,
+ }
+ }
+
+ if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
+ log.Printf("Using Nix package set from path %q\n", path)
+ return &PkgSource{
+ srcType: "path",
+ args: path,
+ }
+ }
+
+ return nil
+}
+
+// Load (optional) GCS bucket signing data from the GCS_SIGNING_KEY and
+// GCS_SIGNING_ACCOUNT envvars.
+func signingOptsFromEnv() *storage.SignedURLOptions {
+ path := os.Getenv("GCS_SIGNING_KEY")
+ id := os.Getenv("GCS_SIGNING_ACCOUNT")
+
+ if path == "" || id == "" {
+ log.Println("GCS URL signing disabled")
+ return nil
+ }
+
+ log.Printf("GCS URL signing enabled with account %q\n", id)
+ k, err := ioutil.ReadFile(path)
+ if err != nil {
+ log.Fatalf("Failed to read GCS signing key: %s\n", err)
+ }
+
+ return &storage.SignedURLOptions{
+ GoogleAccessID: id,
+ PrivateKey: k,
+ Method: "GET",
+ }
+}
+
+func getConfig(key, desc string) string {
+ value := os.Getenv(key)
+ if value == "" {
+ log.Fatalln(desc + " must be specified")
+ }
+
+ return value
+}
+
+// Config holds the Nixery configuration options.
+type Config struct {
+ Bucket string // GCS bucket to cache & serve layers
+ Signing *storage.SignedURLOptions // Signing options to use for GCS URLs
+ Port string // Port on which to launch HTTP server
+ Pkgs *PkgSource // Source for Nix package set
+ WebDir string
+}
+
+func FromEnv() *Config {
+ return &Config{
+ Bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
+ Port: getConfig("PORT", "HTTP port"),
+ Pkgs: pkgSourceFromEnv(),
+ Signing: signingOptsFromEnv(),
+ WebDir: getConfig("WEB_DIR", "Static web file dir"),
+ }
+}
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index 3e015e8587fc..5d7dcd2adfc2 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -12,8 +12,8 @@
// License for the specific language governing permissions and limitations under
// the License.
-// Package main provides the implementation of a container registry that
-// transparently builds container images based on Nix derivations.
+// The nixery server implements a container registry that transparently builds
+// container images based on Nix derivations.
//
// The Nix derivation used for image creation is responsible for creating
// objects that are compatible with the registry API. The targeted registry
@@ -26,287 +26,32 @@
package main
import (
- "bytes"
"context"
"encoding/json"
"fmt"
- "io"
- "io/ioutil"
"log"
"net/http"
- "os"
- "os/exec"
"regexp"
- "strings"
"time"
"cloud.google.com/go/storage"
+ "github.com/google/nixery/builder"
+ "github.com/google/nixery/config"
)
-// pkgSource represents the source from which the Nix package set used
-// by Nixery is imported. Users configure the source by setting one of
-// the supported environment variables.
-type pkgSource struct {
- srcType string
- args string
-}
-
-// Convert the package source into the representation required by Nix.
-func (p *pkgSource) renderSource(tag string) string {
- // The 'git' source requires a tag to be present.
- if p.srcType == "git" {
- if tag == "latest" || tag == "" {
- tag = "master"
- }
-
- return fmt.Sprintf("git!%s!%s", p.args, tag)
- }
-
- return fmt.Sprintf("%s!%s", p.srcType, p.args)
-}
-
-// Retrieve a package source from the environment. If no source is
-// specified, the Nix code will default to a recent NixOS channel.
-func pkgSourceFromEnv() *pkgSource {
- if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
- log.Printf("Using Nix package set from Nix channel %q\n", channel)
- return &pkgSource{
- srcType: "nixpkgs",
- args: channel,
- }
- }
-
- if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
- log.Printf("Using Nix package set from git repository at %q\n", git)
- return &pkgSource{
- srcType: "git",
- args: git,
- }
- }
-
- if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
- log.Printf("Using Nix package set from path %q\n", path)
- return &pkgSource{
- srcType: "path",
- args: path,
- }
- }
-
- return nil
-}
-
-// Load (optional) GCS bucket signing data from the GCS_SIGNING_KEY and
-// GCS_SIGNING_ACCOUNT envvars.
-func signingOptsFromEnv() *storage.SignedURLOptions {
- path := os.Getenv("GCS_SIGNING_KEY")
- id := os.Getenv("GCS_SIGNING_ACCOUNT")
-
- if path == "" || id == "" {
- log.Println("GCS URL signing disabled")
- return nil
- }
-
- log.Printf("GCS URL signing enabled with account %q\n", id)
- k, err := ioutil.ReadFile(path)
- if err != nil {
- log.Fatalf("Failed to read GCS signing key: %s\n", err)
- }
-
- return &storage.SignedURLOptions{
- GoogleAccessID: id,
- PrivateKey: k,
- Method: "GET",
- }
-}
-
-// config holds the Nixery configuration options.
-type config struct {
- bucket string // GCS bucket to cache & serve layers
- signing *storage.SignedURLOptions // Signing options to use for GCS URLs
- port string // Port on which to launch HTTP server
- pkgs *pkgSource // Source for Nix package set
-}
-
// ManifestMediaType is the Content-Type used for the manifest itself. This
// corresponds to the "Image Manifest V2, Schema 2" described on this page:
//
// https://docs.docker.com/registry/spec/manifest-v2-2/
const manifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
-// Image represents the information necessary for building a container image.
-// This can be either a list of package names (corresponding to keys in the
-// nixpkgs set) or a Nix expression that results in a *list* of derivations.
-type image struct {
- name string
- tag string
-
- // Names of packages to include in the image. These must correspond
- // directly to top-level names of Nix packages in the nixpkgs tree.
- packages []string
-}
-
-// BuildResult represents the output of calling the Nix derivation responsible
-// for building registry images.
-//
-// The `layerLocations` field contains the local filesystem paths to each
-// individual image layer that will need to be served, while the `manifest`
-// field contains the JSON-representation of the manifest that needs to be
-// served to the client.
-//
-// The later field is simply treated as opaque JSON and passed through.
-type BuildResult struct {
- Error string `json:"error"`
- Pkgs []string `json:"pkgs"`
-
- Manifest json.RawMessage `json:"manifest"`
- LayerLocations map[string]struct {
- Path string `json:"path"`
- Md5 []byte `json:"md5"`
- } `json:"layerLocations"`
-}
-
-// imageFromName parses an image name into the corresponding structure which can
-// be used to invoke Nix.
-//
-// It will expand convenience names under the hood (see the `convenienceNames`
-// function below).
-func imageFromName(name string, tag string) image {
- packages := strings.Split(name, "/")
- return image{
- name: name,
- tag: tag,
- packages: convenienceNames(packages),
- }
-}
-
-// convenienceNames expands convenience package names defined by Nixery which
-// let users include commonly required sets of tools in a container quickly.
-//
-// Convenience names must be specified as the first package in an image.
-//
-// Currently defined convenience names are:
-//
-// * `shell`: Includes bash, coreutils and other common command-line tools
-// * `builder`: All of the above and the standard build environment
-func convenienceNames(packages []string) []string {
- shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
-
- if packages[0] == "shell" {
- return append(packages[1:], shellPackages...)
- }
-
- return packages
-}
-
-// Call out to Nix and request that an image be built. Nix will, upon success,
-// return a manifest for the container image.
-func buildImage(ctx *context.Context, cfg *config, image *image, bucket *storage.BucketHandle) (*BuildResult, error) {
- packages, err := json.Marshal(image.packages)
- if err != nil {
- return nil, err
- }
-
- args := []string{
- "--argstr", "name", image.name,
- "--argstr", "packages", string(packages),
- }
-
- if cfg.pkgs != nil {
- args = append(args, "--argstr", "pkgSource", cfg.pkgs.renderSource(image.tag))
- }
- cmd := exec.Command("nixery-build-image", args...)
-
- outpipe, err := cmd.StdoutPipe()
- if err != nil {
- return nil, err
- }
-
- errpipe, err := cmd.StderrPipe()
- if err != nil {
- return nil, err
- }
-
- if err = cmd.Start(); err != nil {
- log.Println("Error starting nix-build:", err)
- return nil, err
- }
- log.Printf("Started Nix image build for '%s'", image.name)
-
- stdout, _ := ioutil.ReadAll(outpipe)
- stderr, _ := ioutil.ReadAll(errpipe)
-
- if err = cmd.Wait(); err != nil {
- // TODO(tazjin): Propagate errors upwards in a usable format.
- log.Printf("nix-build execution error: %s\nstdout: %s\nstderr: %s\n", err, stdout, stderr)
- return nil, err
- }
-
- log.Println("Finished Nix image build")
-
- buildOutput, err := ioutil.ReadFile(strings.TrimSpace(string(stdout)))
- if err != nil {
- return nil, err
- }
-
- // The build output returned by Nix is deserialised to add all
- // contained layers to the bucket. Only the manifest itself is
- // re-serialised to JSON and returned.
- var result BuildResult
- err = json.Unmarshal(buildOutput, &result)
- if err != nil {
- return nil, err
- }
-
- for layer, meta := range result.LayerLocations {
- err = uploadLayer(ctx, bucket, layer, meta.Path, meta.Md5)
- if err != nil {
- return nil, err
- }
- }
-
- return &result, nil
-}
-
-// uploadLayer uploads a single layer to Cloud Storage bucket. Before writing
-// any data the bucket is probed to see if the file already exists.
-//
-// If the file does exist, its MD5 hash is verified to ensure that the stored
-// file is not - for example - a fragment of a previous, incomplete upload.
-func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer string, path string, md5 []byte) error {
- layerKey := fmt.Sprintf("layers/%s", layer)
- obj := bucket.Object(layerKey)
-
- // Before uploading a layer to the bucket, probe whether it already
- // exists.
- //
- // If it does and the MD5 checksum matches the expected one, the layer
- // upload can be skipped.
- attrs, err := obj.Attrs(*ctx)
-
- if err == nil && bytes.Equal(attrs.MD5, md5) {
- log.Printf("Layer sha256:%s already exists in bucket, skipping upload", layer)
- } else {
- writer := obj.NewWriter(*ctx)
- file, err := os.Open(path)
-
- if err != nil {
- return fmt.Errorf("failed to open layer %s from path %s: %v", layer, path, err)
- }
-
- size, err := io.Copy(writer, file)
- if err != nil {
- return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
- }
-
- if err = writer.Close(); err != nil {
- return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
- }
-
- log.Printf("Uploaded layer sha256:%s (%v bytes written)\n", layer, size)
- }
-
- return nil
-}
+// Regexes matching the V2 Registry API routes. This only includes the
+// routes required for serving images, since pushing and other such
+// functionality is not available.
+var (
+ manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
+ layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
+)
// layerRedirect constructs the public URL of the layer object in the Cloud
// Storage bucket, signs it and redirects the user there.
@@ -316,16 +61,16 @@ func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer strin
//
// The Docker client is known to follow redirects, but this might not be true
// for all other registry clients.
-func constructLayerUrl(cfg *config, digest string) (string, error) {
- log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, cfg.bucket)
+func constructLayerUrl(cfg *config.Config, digest string) (string, error) {
+ log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, cfg.Bucket)
object := "layers/" + digest
- if cfg.signing != nil {
- opts := *cfg.signing
+ if cfg.Signing != nil {
+ opts := *cfg.Signing
opts.Expires = time.Now().Add(5 * time.Minute)
- return storage.SignedURL(cfg.bucket, object, &opts)
+ return storage.SignedURL(cfg.Bucket, object, &opts)
} else {
- return ("https://storage.googleapis.com/" + cfg.bucket + "/" + object), nil
+ return ("https://storage.googleapis.com/" + cfg.Bucket + "/" + object), nil
}
}
@@ -336,13 +81,13 @@ func constructLayerUrl(cfg *config, digest string) (string, error) {
//
// The bucket is required for Nixery to function correctly, hence fatal errors
// are generated in case it fails to be set up correctly.
-func prepareBucket(ctx *context.Context, cfg *config) *storage.BucketHandle {
+func prepareBucket(ctx *context.Context, cfg *config.Config) *storage.BucketHandle {
client, err := storage.NewClient(*ctx)
if err != nil {
log.Fatalln("Failed to set up Cloud Storage client:", err)
}
- bkt := client.Bucket(cfg.bucket)
+ bkt := client.Bucket(cfg.Bucket)
if _, err := bkt.Attrs(*ctx); err != nil {
log.Fatalln("Could not access configured bucket", err)
@@ -351,14 +96,6 @@ func prepareBucket(ctx *context.Context, cfg *config) *storage.BucketHandle {
return bkt
}
-// Regexes matching the V2 Registry API routes. This only includes the
-// routes required for serving images, since pushing and other such
-// functionality is not available.
-var (
- manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
- layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
-)
-
// Error format corresponding to the registry protocol V2 specification. This
// allows feeding back errors to clients in a way that can be presented to
// users.
@@ -385,7 +122,7 @@ func writeError(w http.ResponseWriter, status int, code, message string) {
}
type registryHandler struct {
- cfg *config
+ cfg *config.Config
ctx *context.Context
bucket *storage.BucketHandle
}
@@ -402,8 +139,8 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
imageName := manifestMatches[1]
imageTag := manifestMatches[2]
log.Printf("Requesting manifest for image %q at tag %q", imageName, imageTag)
- image := imageFromName(imageName, imageTag)
- buildResult, err := buildImage(h.ctx, h.cfg, &image, h.bucket)
+ image := builder.ImageFromName(imageName, imageTag)
+ buildResult, err := builder.BuildImage(h.ctx, h.cfg, &image, h.bucket)
if err != nil {
writeError(w, 500, "UNKNOWN", "image build failure")
@@ -451,27 +188,12 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(404)
}
-func getConfig(key, desc string) string {
- value := os.Getenv(key)
- if value == "" {
- log.Fatalln(desc + " must be specified")
- }
-
- return value
-}
-
func main() {
- cfg := &config{
- bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
- port: getConfig("PORT", "HTTP port"),
- pkgs: pkgSourceFromEnv(),
- signing: signingOptsFromEnv(),
- }
-
+ cfg := config.FromEnv()
ctx := context.Background()
bucket := prepareBucket(&ctx, cfg)
- log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.port)
+ log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.Port)
// All /v2/ requests belong to the registry handler.
http.Handle("/v2/", ®istryHandler{
@@ -481,8 +203,8 @@ func main() {
})
// All other roots are served by the static file server.
- webDir := http.Dir(getConfig("WEB_DIR", "Static web file dir"))
+ webDir := http.Dir(cfg.WebDir)
http.Handle("/", http.FileServer(webDir))
- log.Fatal(http.ListenAndServe(":"+cfg.port, nil))
+ log.Fatal(http.ListenAndServe(":"+cfg.Port, nil))
}
--
cgit 1.4.1
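The extracted `config` package above is driven purely by environment variables. As a quick illustration (a hypothetical sketch, not part of the patch; the environment values are placeholders), this is how the exported `FromEnv` and `Render` behave for a git package source, using the `github.com/google/nixery/config` import path seen in `main.go`:

```go
package main

import (
	"fmt"
	"os"

	"github.com/google/nixery/config"
)

func main() {
	// Placeholder values for the settings that FromEnv requires.
	os.Setenv("BUCKET", "nixery-layers")
	os.Setenv("PORT", "8080")
	os.Setenv("WEB_DIR", "/var/nixery/web")

	// Select a git repository as the package source.
	os.Setenv("NIXERY_PKGS_REPO", "https://github.com/NixOS/nixpkgs.git")

	cfg := config.FromEnv()

	// For git sources, 'latest' (or an empty tag) is rewritten to 'master' ...
	fmt.Println(cfg.Pkgs.Render("latest")) // git!https://github.com/NixOS/nixpkgs.git!master

	// ... while any other tag is passed through verbatim.
	fmt.Println(cfg.Pkgs.Render("nixos-19.03")) // git!https://github.com/NixOS/nixpkgs.git!nixos-19.03
}
```

The resulting `type!args[!tag]` strings are what the Nix side parses in `load-pkgs.nix`.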
From cf227c153f0eb55b2d931c780d3f7b020e8844bb Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 14 Aug 2019 20:02:52 +0100
Subject: feat(builder): Implement build cache for manifests & layers
Implements a cache that keeps track of:
a) Manifests that have already been built (for up to 6 hours)
b) Layers that have already been seen (and uploaded to GCS)
This significantly speeds up response times for images that are full
or partial matches with previous images served by an instance.
---
tools/nixery/server/builder/builder.go | 94 ++++++++++++++++++---------------
tools/nixery/server/builder/cache.go | 95 ++++++++++++++++++++++++++++++++++
tools/nixery/server/main.go | 5 +-
3 files changed, 152 insertions(+), 42 deletions(-)
create mode 100644 tools/nixery/server/builder/cache.go
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index c53b702e0537..e241d3d0b004 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -69,10 +69,10 @@ func ImageFromName(name string, tag string) Image {
//
// The latter field is simply treated as opaque JSON and passed through.
type BuildResult struct {
- Error string `json:"error"`
- Pkgs []string `json:"pkgs"`
+ Error string `json:"error"`
+ Pkgs []string `json:"pkgs"`
+ Manifest json.RawMessage `json:"manifest"`
- Manifest json.RawMessage `json:"manifest"`
LayerLocations map[string]struct {
Path string `json:"path"`
Md5 []byte `json:"md5"`
@@ -99,50 +99,57 @@ func convenienceNames(packages []string) []string {
// Call out to Nix and request that an image be built. Nix will, upon success,
// return a manifest for the container image.
-func BuildImage(ctx *context.Context, cfg *config.Config, image *Image, bucket *storage.BucketHandle) (*BuildResult, error) {
- packages, err := json.Marshal(image.Packages)
- if err != nil {
- return nil, err
- }
+func BuildImage(ctx *context.Context, cfg *config.Config, cache *BuildCache, image *Image, bucket *storage.BucketHandle) (*BuildResult, error) {
+ resultFile, cached := cache.manifestFromCache(image)
- args := []string{
- "--argstr", "name", image.Name,
- "--argstr", "packages", string(packages),
- }
+ if !cached {
+ packages, err := json.Marshal(image.Packages)
+ if err != nil {
+ return nil, err
+ }
- if cfg.Pkgs != nil {
- args = append(args, "--argstr", "pkgSource", cfg.Pkgs.Render(image.Tag))
- }
- cmd := exec.Command("nixery-build-image", args...)
+ args := []string{
+ "--argstr", "name", image.Name,
+ "--argstr", "packages", string(packages),
+ }
- outpipe, err := cmd.StdoutPipe()
- if err != nil {
- return nil, err
- }
+ if cfg.Pkgs != nil {
+ args = append(args, "--argstr", "pkgSource", cfg.Pkgs.Render(image.Tag))
+ }
+ cmd := exec.Command("nixery-build-image", args...)
- errpipe, err := cmd.StderrPipe()
- if err != nil {
- return nil, err
- }
+ outpipe, err := cmd.StdoutPipe()
+ if err != nil {
+ return nil, err
+ }
- if err = cmd.Start(); err != nil {
- log.Println("Error starting nix-build:", err)
- return nil, err
- }
- log.Printf("Started Nix image build for '%s'", image.Name)
+ errpipe, err := cmd.StderrPipe()
+ if err != nil {
+ return nil, err
+ }
- stdout, _ := ioutil.ReadAll(outpipe)
- stderr, _ := ioutil.ReadAll(errpipe)
+ if err = cmd.Start(); err != nil {
+ log.Println("Error starting nix-build:", err)
+ return nil, err
+ }
+ log.Printf("Started Nix image build for '%s'", image.Name)
- if err = cmd.Wait(); err != nil {
- // TODO(tazjin): Propagate errors upwards in a usable format.
- log.Printf("nix-build execution error: %s\nstdout: %s\nstderr: %s\n", err, stdout, stderr)
- return nil, err
- }
+ stdout, _ := ioutil.ReadAll(outpipe)
+ stderr, _ := ioutil.ReadAll(errpipe)
- log.Println("Finished Nix image build")
+ if err = cmd.Wait(); err != nil {
+ // TODO(tazjin): Propagate errors upwards in a usable format.
+ log.Printf("nix-build execution error: %s\nstdout: %s\nstderr: %s\n", err, stdout, stderr)
+ return nil, err
+ }
+
+ log.Println("Finished Nix image build")
+
+ resultFile = strings.TrimSpace(string(stdout))
+ cache.cacheManifest(image, resultFile)
+ }
- buildOutput, err := ioutil.ReadFile(strings.TrimSpace(string(stdout)))
+ buildOutput, err := ioutil.ReadFile(resultFile)
if err != nil {
return nil, err
}
@@ -151,15 +158,20 @@ func BuildImage(ctx *context.Context, cfg *config.Config, image *Image, bucket *
// contained layers to the bucket. Only the manifest itself is
// re-serialised to JSON and returned.
var result BuildResult
+
err = json.Unmarshal(buildOutput, &result)
if err != nil {
return nil, err
}
for layer, meta := range result.LayerLocations {
- err = uploadLayer(ctx, bucket, layer, meta.Path, meta.Md5)
- if err != nil {
- return nil, err
+ if !cache.hasSeenLayer(layer) {
+ err = uploadLayer(ctx, bucket, layer, meta.Path, meta.Md5)
+ if err != nil {
+ return nil, err
+ }
+
+ cache.sawLayer(layer)
}
}
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
new file mode 100644
index 000000000000..0014789afff5
--- /dev/null
+++ b/tools/nixery/server/builder/cache.go
@@ -0,0 +1,95 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+package builder
+
+import (
+ "sync"
+ "time"
+)
+
+// recencyThreshold is the amount of time that a manifest build will be cached
+// for. When using the channel mechanism for retrieving nixpkgs, Nix will
+// occasionally re-fetch the channel so things can in fact change while the
+// instance is running.
+const recencyThreshold = time.Duration(6) * time.Hour
+
+type manifestEntry struct {
+ built time.Time
+ path string
+}
+
+type void struct{}
+
+type BuildCache struct {
+ mmtx sync.RWMutex
+ mcache map[string]manifestEntry
+
+ lmtx sync.RWMutex
+ lcache map[string]void
+}
+
+func NewCache() BuildCache {
+ return BuildCache{
+ mcache: make(map[string]manifestEntry),
+ lcache: make(map[string]void),
+ }
+}
+
+// Has this layer hash already been seen by this Nixery instance? If
+// yes, we can skip upload checking and such because it has already
+// been done.
+func (c *BuildCache) hasSeenLayer(hash string) bool {
+ c.lmtx.RLock()
+ defer c.lmtx.RUnlock()
+ _, seen := c.lcache[hash]
+ return seen
+}
+
+// Layer has now been seen and should be stored.
+func (c *BuildCache) sawLayer(hash string) {
+ c.lmtx.Lock()
+ defer c.lmtx.Unlock()
+ c.lcache[hash] = void{}
+}
+
+// Has this manifest been built already? If yes, we can reuse the
+// result given that the build happened recently enough.
+func (c *BuildCache) manifestFromCache(image *Image) (string, bool) {
+ c.mmtx.RLock()
+
+ entry, ok := c.mcache[image.Name+image.Tag]
+ c.mmtx.RUnlock()
+
+ if !ok {
+ return "", false
+ }
+
+ if time.Since(entry.built) > recencyThreshold {
+ return "", false
+ }
+
+ return entry.path, true
+}
+
+// Adds the result of a manifest build to the cache.
+func (c *BuildCache) cacheManifest(image *Image, path string) {
+ entry := manifestEntry{
+ built: time.Now(),
+ path: path,
+ }
+
+ c.mmtx.Lock()
+ c.mcache[image.Name+image.Tag] = entry
+ c.mmtx.Unlock()
+}
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index 5d7dcd2adfc2..fd307f79d4b4 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -125,6 +125,7 @@ type registryHandler struct {
cfg *config.Config
ctx *context.Context
bucket *storage.BucketHandle
+ cache *builder.BuildCache
}
func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
@@ -140,7 +141,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
imageTag := manifestMatches[2]
log.Printf("Requesting manifest for image %q at tag %q", imageName, imageTag)
image := builder.ImageFromName(imageName, imageTag)
- buildResult, err := builder.BuildImage(h.ctx, h.cfg, &image, h.bucket)
+ buildResult, err := builder.BuildImage(h.ctx, h.cfg, h.cache, &image, h.bucket)
if err != nil {
writeError(w, 500, "UNKNOWN", "image build failure")
@@ -192,6 +193,7 @@ func main() {
cfg := config.FromEnv()
ctx := context.Background()
bucket := prepareBucket(&ctx, cfg)
+ cache := builder.NewCache()
log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.Port)
@@ -200,6 +202,7 @@ func main() {
cfg: cfg,
ctx: &ctx,
bucket: bucket,
+ cache: &cache,
})
// All other roots are served by the static file server.
--
cgit 1.4.1
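To make the read-through behaviour of the new cache concrete, here is a minimal sketch against the builder API from the patch above. The surrounding setup (context, configuration, bucket) is assumed to be done as in `main.go`; the function itself is illustrative and not part of the patch:

```go
// Package sketch is illustrative only and not part of the Nixery tree.
package sketch

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
	"github.com/google/nixery/builder"
	"github.com/google/nixery/config"
)

// serveTwice builds the same image twice: the first call shells out to Nix and
// caches the resulting manifest path, the second call (within the six-hour
// recency threshold) reuses it and skips layers that have already been seen.
func serveTwice(ctx *context.Context, cfg *config.Config, bucket *storage.BucketHandle) {
	cache := builder.NewCache()
	image := builder.ImageFromName("shell/git", "latest")

	if _, err := builder.BuildImage(ctx, cfg, &cache, &image, bucket); err != nil {
		log.Fatalln("first build failed:", err)
	}

	// Served from the manifest cache: no new Nix build is started and
	// previously uploaded layers are not probed against GCS again.
	if _, err := builder.BuildImage(ctx, cfg, &cache, &image, bucket); err != nil {
		log.Fatalln("second build failed:", err)
	}
}
```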
From 36d50d1f19f8c55e1eb707639fe73906d8dd30e8 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 14 Aug 2019 20:08:40 +0100
Subject: fix(server): Print correct project name during startup
They grow up so fast :')
---
tools/nixery/server/main.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index fd307f79d4b4..da181249b285 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -195,7 +195,7 @@ func main() {
bucket := prepareBucket(&ctx, cfg)
cache := builder.NewCache()
- log.Printf("Starting Kubernetes Nix controller on port %s\n", cfg.Port)
+ log.Printf("Starting Nixery on port %s\n", cfg.Port)
// All /v2/ requests belong to the registry handler.
http.Handle("/v2/", ®istryHandler{
--
cgit 1.4.1
From 85b9c30749efb734c02e85a0460b563d9292344f Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 14 Aug 2019 20:13:35 +0100
Subject: chore(server): Add 'go vet' to build process
---
tools/nixery/server/default.nix | 8 ++++++++
1 file changed, 8 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/server/default.nix b/tools/nixery/server/default.nix
index 05ad64261fa5..9df52721857c 100644
--- a/tools/nixery/server/default.nix
+++ b/tools/nixery/server/default.nix
@@ -21,6 +21,14 @@ buildGoPackage {
goPackagePath = "github.com/google/nixery";
+ # Enable checks and configure check-phase to include vet:
+ doCheck = true;
+ preCheck = ''
+ for pkg in $(getGoDirs ""); do
+ buildGoDir vet "$pkg"
+ done
+ '';
+
meta = {
description = "Container image builder serving Nix-backed images";
homepage = "https://github.com/google/nixery";
--
cgit 1.4.1
From ca1ffb397d979239b0246c1c72c73b74bc96635e Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 14 Aug 2019 23:46:21 +0100
Subject: feat(build): Add an integration test that runs on Travis
This test, after performing the usual Nixery build, loads the built
image into Docker, runs it, pulls an image from Nixery and runs that
image.
To make this work, there is some configuration on the Travis side.
Most importantly, the following environment variables have special
values:
* `GOOGLE_KEY`: This is set to a base64-encoded service account key to
be used in the test.
* `GCS_SIGNING_PEM`: This is set to a base64-encoded signing key (in
PEM) that is used for signing URLs.
Both of these are available to all branches in the Nixery repository.
---
tools/nixery/.travis.yml | 49 ++++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 47 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 96f1ec0fc15b..950de21f70c6 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -1,6 +1,51 @@
language: nix
+services:
+ - docker
before_script:
- - nix-env -iA nixpkgs.cachix
- - cachix use nixery
+ - |
+ mkdir test-files
+ echo ${GOOGLE_KEY} | base64 -d > test-files/key.json
+ echo ${GCS_SIGNING_PEM} | base64 -d > test-files/gcs.pem
+ nix-env -iA nixpkgs.cachix -A nixpkgs.go
+ cachix use nixery
script:
+ - test -z $(gofmt -l server/ build-image/)
- nix-build | cachix push nixery
+
+ # This integration test makes sure that the container image built
+ # for Nixery itself runs fine in Docker, and that images pulled
+ # from it work in Docker.
+ #
+ # Output from the Nixery container is printed at the end of the
+ # test regardless of test status.
+ - IMG=$(docker load -q -i $(nix-build -A nixery-image) | awk '{ print $3 }')
+ - echo "Loaded Nixery image as ${IMG}"
+
+ - |
+ docker run -d -p 8080:8080 --name nixery \
+ -v ${PWD}/test-files:/var/nixery \
+ -e PORT=8080 \
+ -e BUCKET=nixery-layers \
+ -e GOOGLE_CLOUD_PROJECT=nixery \
+ -e GOOGLE_APPLICATION_CREDENTIALS=/var/nixery/key.json \
+ ${IMG}
+
+ # print all of the container's logs regardless of success
+ - |
+ function print_logs {
+ echo "Nixery container logs:"
+ docker logs nixery
+ }
+ trap print_logs EXIT
+
+ # Give the container ~30 seconds to come up
+ - |
+ attempts=0
+ echo -n "Waiting for Nixery to start ..."
+ until $(curl --fail --silent "http://localhost:8080/v2/"); do
+ [[ attempts -eq 30 ]] && echo "Nixery container failed to start!" && exit 1
+ ((attempts++))
+ echo -n "."
+ sleep 1
+ done
+ - docker run --rm localhost:8080/hello hello
--
cgit 1.4.1
From 0ec369d76c2b151fa82839230e6eb4d58015b1dc Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 15 Aug 2019 11:50:23 +0100
Subject: docs(book): Update information on new layering strategy
---
tools/nixery/docs/src/nixery.md | 15 ++++++++-------
tools/nixery/docs/src/under-the-hood.md | 4 +++-
2 files changed, 11 insertions(+), 8 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/src/nixery.md b/tools/nixery/docs/src/nixery.md
index 83e1aac52bdf..3eaeb6be4e97 100644
--- a/tools/nixery/docs/src/nixery.md
+++ b/tools/nixery/docs/src/nixery.md
@@ -7,8 +7,11 @@ contain packages from the [Nix][] package manager. Images with arbitrary
packages can be requested via the image name.
Nix not only provides the packages to include in the images, but also builds the
-images themselves by using an interesting layering strategy described in [this
-blog post][layers].
+images themselves by using a special [layering strategy][] that optimises for
+cache efficiency.
+
+For general information on why using Nix makes sense for container images, check
+out [this blog post][layers].
## Quick start
@@ -65,13 +68,11 @@ availability.
### Who made this?
-Nixery was written mostly by [tazjin][].
-
-[grahamc][] authored the image layering strategy. Many people have contributed
-to Nix over time, maybe you could become one of them?
+Nixery was written by [tazjin][], but many people have contributed to Nix over
+time, maybe you could become one of them?
[Nixery]: https://github.com/google/nixery
[Nix]: https://nixos.org/nix
+[layering-strategy]: https://storage.googleapis.com/nixdoc/nixery-layers.html
[layers]: https://grahamc.com/blog/nix-and-layered-docker-images
[tazjin]: https://github.com/tazjin
-[grahamc]: https://github.com/grahamc
diff --git a/tools/nixery/docs/src/under-the-hood.md b/tools/nixery/docs/src/under-the-hood.md
index 3791707b1cd2..6b5e5e9bbf21 100644
--- a/tools/nixery/docs/src/under-the-hood.md
+++ b/tools/nixery/docs/src/under-the-hood.md
@@ -51,7 +51,8 @@ does not allow uppercase characters, so the Nix code will translate something
like `haskellpackages` (lowercased) to the correct attribute name.
After identifying all contents, Nix determines the contents of each layer while
-optimising for the best possible cache efficiency.
+optimising for the best possible cache efficiency (see the [layering design
+doc][] for details).
Finally it builds each layer, assembles the image manifest as JSON structure,
and yields this manifest back to the web server.
@@ -103,3 +104,4 @@ to run the image produced by Nixery.
[gcs]: https://cloud.google.com/storage/
[signed URLs]: https://cloud.google.com/storage/docs/access-control/signed-urls
+[layering design doc]: https://storage.googleapis.com/nixdoc/nixery-layers.html
--
cgit 1.4.1
From 3f232e017075f5b45c7d11cdf27225a51f47c085 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 15 Aug 2019 12:00:36 +0100
Subject: docs: Add asciinema demo to README & book
---
tools/nixery/README.md | 25 ++++++++-----------------
tools/nixery/docs/src/nixery.md | 4 ++++
2 files changed, 12 insertions(+), 17 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 9c323a57fa12..8225e430b36e 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -25,23 +25,14 @@ images. The design for this is outlined in [a public gist][gist].
This is not an officially supported Google project.
-## Usage example
-
-Using the publicly available Nixery instance at `nixery.dev`, one could
-retrieve a container image containing `curl` and an interactive shell like this:
-
-```shell
-tazjin@tazbox:~$ sudo docker run -ti nixery.dev/shell/curl bash
-Unable to find image 'nixery.dev/shell/curl:latest' locally
-latest: Pulling from shell/curl
-7734b79e1ba1: Already exists
-b0d2008d18cd: Pull complete
-< ... some layers omitted ...>
-Digest: sha256:178270bfe84f74548b6a43347d73524e5c2636875b673675db1547ec427cf302
-Status: Downloaded newer image for nixery.dev/shell/curl:latest
-bash-4.4# curl --version
-curl 7.64.0 (x86_64-pc-linux-gnu) libcurl/7.64.0 OpenSSL/1.0.2q zlib/1.2.11 libssh2/1.8.0 nghttp2/1.35.1
-```
+## Demo
+
+Click the image to see an example in which an image containing an interactive
+shell and GNU `hello` is downloaded.
+
+[![asciicast](https://asciinema.org/a/262583.png)](https://asciinema.org/a/262583?autoplay=1)
+
+To try it yourself, head to [nixery.dev][public]!
The special meta-package `shell` provides an image base with many core
components (such as `bash` and `coreutils`) that users commonly expect in
diff --git a/tools/nixery/docs/src/nixery.md b/tools/nixery/docs/src/nixery.md
index 3eaeb6be4e97..edea53bab62e 100644
--- a/tools/nixery/docs/src/nixery.md
+++ b/tools/nixery/docs/src/nixery.md
@@ -13,6 +13,10 @@ cache efficiency.
For general information on why using Nix makes sense for container images, check
out [this blog post][layers].
+## Demo
+
+
+
## Quick start
Simply pull an image from this registry, separating each package you want
--
cgit 1.4.1
From 501e6ded5f80215879ad7467cd7ca250f902422d Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 15 Aug 2019 12:02:32 +0100
Subject: fix(build): Ensure GCS signing is used in CI
---
tools/nixery/.travis.yml | 2 ++
1 file changed, 2 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 950de21f70c6..46c2a9778cb6 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -28,6 +28,8 @@ script:
-e BUCKET=nixery-layers \
-e GOOGLE_CLOUD_PROJECT=nixery \
-e GOOGLE_APPLICATION_CREDENTIALS=/var/nixery/key.json \
+ -e GCS_SIGNING_ACCOUNT="${GCS_SIGNING_ACCOUNT}" \
+ -e GCS_SIGNING_KEY=/var/nixery/gcs.pem \
${IMG}
# print all of the container's logs regardless of success
--
cgit 1.4.1
From 3b65fc8c726a91eb155ff2e5bf116e5aeee488aa Mon Sep 17 00:00:00 2001
From: Florian Klink
Date: Fri, 16 Aug 2019 21:18:43 +0200
Subject: feat(server): add iana-etc and cacert to the shell convenience
package
These should probably be part of every container image by default, but
adding them to the "shell" convenience name is our best bet for now.
---
tools/nixery/server/builder/builder.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index e241d3d0b004..46301d6c6a0c 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -88,7 +88,7 @@ type BuildResult struct {
//
// * `shell`: Includes bash, coreutils and other common command-line tools
func convenienceNames(packages []string) []string {
- shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
+ shellPackages := []string{"bashInteractive", "cacert", "coreutils", "iana-etc", "moreutils", "nano"}
if packages[0] == "shell" {
return append(packages[1:], shellPackages...)
--
cgit 1.4.1
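For illustration, at this point in the series the expanded package list for a `shell` image looks as follows. This is a hypothetical snippet using only the exported builder API; the image name is just an example:

```go
package main

import (
	"fmt"

	"github.com/google/nixery/builder"
)

func main() {
	// A leading 'shell' component is expanded into the convenience package
	// set, which now also includes cacert and iana-etc.
	img := builder.ImageFromName("shell/curl", "latest")

	fmt.Println(img.Packages)
	// [curl bashInteractive cacert coreutils iana-etc moreutils nano]
}
```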
From 0ee239874b76bea75a3d3a90201118b3c4294576 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 17 Aug 2019 10:07:46 +0100
Subject: docs(README): Update links to layering strategy
---
tools/nixery/README.md | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 8225e430b36e..cdae23bc4b72 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -14,14 +14,16 @@ user intends to include in the image is specified as a path component of the
image name.
The path components refer to top-level keys in `nixpkgs` and are used to build a
-container image using Nix's [buildLayeredImage][] functionality.
+container image using a [layering strategy][] that optimises for caching popular
+and/or large dependencies.
A public instance as well as additional documentation is available at
[nixery.dev][public].
-The project started out with the intention of becoming a Kubernetes controller
-that can serve declarative image specifications specified in CRDs as container
-images. The design for this is outlined in [a public gist][gist].
+The project started out inspired by the [buildLayeredImage][] blog post with the
+intention of becoming a Kubernetes controller that can serve declarative image
+specifications specified in CRDs as container images. The design for this was
+outlined in [a public gist][gist].
This is not an officially supported Google project.
@@ -94,6 +96,7 @@ correct caching behaviour, addressing and so on.
See [issue #4](https://github.com/google/nixery/issues/4).
[Nix]: https://nixos.org/
+[layering strategy]: https://storage.googleapis.com/nixdoc/nixery-layers.html
[gist]: https://gist.github.com/tazjin/08f3d37073b3590aacac424303e6f745
[buildLayeredImage]: https://grahamc.com/blog/nix-and-layered-docker-images
[public]: https://nixery.dev
--
cgit 1.4.1
From 9a95c4124f911dbc07d4eabc0c4b8fc2d44c74d6 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 17 Aug 2019 10:18:15 +0100
Subject: fix(server): Sort requested packages in image name & spec
Before this change, Nixery would pass the image name on to Nix
unmodified, which would cache-bust the manifest and configuration
layers for images that are content-identical but differ only in
package ordering.
This fixes #38.
---
tools/nixery/server/builder/builder.go | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 46301d6c6a0c..a249384d9fef 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -27,6 +27,7 @@ import (
"log"
"os"
"os/exec"
+ "sort"
"strings"
"cloud.google.com/go/storage"
@@ -50,12 +51,21 @@ type Image struct {
//
// It will expand convenience names under the hood (see the `convenienceNames`
// function below).
+//
+// Once assembled, the image structure uses a sorted representation of
+// the name. This is to avoid unnecessarily cache-busting images if
+// only the order of requested packages has changed.
func ImageFromName(name string, tag string) Image {
- packages := strings.Split(name, "/")
+ pkgs := strings.Split(name, "/")
+ expanded := convenienceNames(pkgs)
+
+ sort.Strings(pkgs)
+ sort.Strings(expanded)
+
return Image{
- Name: name,
+ Name: strings.Join(pkgs, "/"),
Tag: tag,
- Packages: convenienceNames(packages),
+ Packages: expanded,
}
}
--
cgit 1.4.1
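A small sketch of the effect of this patch (the package names are examples only): two requests that differ solely in package order now normalise to the same image name, so Nix receives identical inputs and the manifest and configuration layers can be shared.

```go
package main

import (
	"fmt"

	"github.com/google/nixery/builder"
)

func main() {
	a := builder.ImageFromName("git/curl", "latest")
	b := builder.ImageFromName("curl/git", "latest")

	// Both normalise to the sorted name "curl/git".
	fmt.Println(a.Name, b.Name) // curl/git curl/git
}
```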
From 745b7ce0b821b1d46b7259c8ba704bf767ad31d6 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 17 Aug 2019 09:29:56 +0000
Subject: fix(build): Ensure root user is known inside of container
This is required by git in cases where Nixery is configured with a
custom git repository.
I've also added a shell back into the image to make debugging a
running Nixery easier. It turns out some of the dependencies already
pull in bash anyway, so this is just surfacing it on $PATH.
---
tools/nixery/default.nix | 4 ++++
1 file changed, 4 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 734a72d57e0b..194cf54608e2 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -61,6 +61,8 @@ rec {
# Create the build user/group required by Nix
echo 'nixbld:x:30000:nixbld' >> /etc/group
echo 'nixbld:x:30000:30000:nixbld:/tmp:/bin/bash' >> /etc/passwd
+ echo 'root:x:0:0:root:/root:/bin/bash' >> /etc/passwd
+ echo 'root:x:0:' >> /etc/group
# Disable sandboxing to avoid running into privilege issues
mkdir -p /etc/nix
@@ -80,6 +82,7 @@ rec {
config.Cmd = [ "${nixery-launch-script}/bin/nixery" ];
maxLayers = 96;
contents = [
+ bashInteractive
cacert
coreutils
git
@@ -89,6 +92,7 @@ rec {
nixery-build-image
nixery-launch-script
openssh
+ zlib
];
};
}
--
cgit 1.4.1
From ffae282eac86f4df48a5c39f61d82dd88ccca1ec Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 18 Aug 2019 02:44:34 +0100
Subject: fix(docs): Correct link to layering strategy
---
tools/nixery/docs/src/nixery.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/src/nixery.md b/tools/nixery/docs/src/nixery.md
index edea53bab62e..6cc16431c9e8 100644
--- a/tools/nixery/docs/src/nixery.md
+++ b/tools/nixery/docs/src/nixery.md
@@ -77,6 +77,6 @@ time, maybe you could become one of them?
[Nixery]: https://github.com/google/nixery
[Nix]: https://nixos.org/nix
-[layering-strategy]: https://storage.googleapis.com/nixdoc/nixery-layers.html
+[layering strategy]: https://storage.googleapis.com/nixdoc/nixery-layers.html
[layers]: https://grahamc.com/blog/nix-and-layered-docker-images
[tazjin]: https://github.com/tazjin
--
cgit 1.4.1
From e7d7f73f7d191c4149409a0be83c924127972b2a Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 19 Aug 2019 01:10:21 +0100
Subject: feat(build): Add 'extraPackages' parameter
This makes it possible to inject additional programs (e.g. Cachix)
into a Nixery container.
---
tools/nixery/default.nix | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 194cf54608e2..31b15396a5fd 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -11,7 +11,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-{ pkgs ? import <nixpkgs> { }, preLaunch ? "" }:
+{ pkgs ? import <nixpkgs> { }
+, preLaunch ? ""
+, extraPackages ? [] }:
with pkgs;
@@ -93,6 +95,6 @@ rec {
nixery-launch-script
openssh
zlib
- ];
+ ] ++ extraPackages;
};
}
--
cgit 1.4.1
From ccf6a95f94b4abad2fed94e613888a4407f3ec93 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 19 Aug 2019 01:36:32 +0100
Subject: chore(build): Pin nixpkgs to a specific commit
This is the same commit for which Nixery has popularity data, but that
isn't particularly relevant.
---
tools/nixery/.travis.yml | 2 ++
1 file changed, 2 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 46c2a9778cb6..864eeaa9021e 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -1,6 +1,8 @@
language: nix
services:
- docker
+env:
+ - NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/88d9f776091896cfe57dc6fbdf246e7d27d5f105.tar.gz
before_script:
- |
mkdir test-files
--
cgit 1.4.1
From daa6196c2a5575a9394cf6e389aee8e050df08ec Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 19 Aug 2019 01:34:20 +0100
Subject: fix(build): Force nix-env to use NIX_PATH
Thanks to clever!
---
tools/nixery/.travis.yml | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 864eeaa9021e..471037f36474 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -4,12 +4,11 @@ services:
env:
- NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/88d9f776091896cfe57dc6fbdf246e7d27d5f105.tar.gz
before_script:
- - |
- mkdir test-files
- echo ${GOOGLE_KEY} | base64 -d > test-files/key.json
- echo ${GCS_SIGNING_PEM} | base64 -d > test-files/gcs.pem
- nix-env -iA nixpkgs.cachix -A nixpkgs.go
- cachix use nixery
+ - mkdir test-files
+ - echo ${GOOGLE_KEY} | base64 -d > test-files/key.json
+ - echo ${GCS_SIGNING_PEM} | base64 -d > test-files/gcs.pem
+ - nix-env -f '<nixpkgs>' -iA cachix -A go
+ - cachix use nixery
script:
- test -z $(gofmt -l server/ build-image/)
- nix-build | cachix push nixery
--
cgit 1.4.1
From bb5427a47a29161f854bd9b57e388849ea26e818 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 21 Aug 2019 10:21:45 +0100
Subject: chore(docs): Update embedded nix-1p version
The new version of the document contains syntax fixes so that pipes
inside code blocks in tables render correctly across Markdown dialects.
Fixes #44
---
tools/nixery/docs/default.nix | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/default.nix b/tools/nixery/docs/default.nix
index aae2fdde42a5..11eda3ff7052 100644
--- a/tools/nixery/docs/default.nix
+++ b/tools/nixery/docs/default.nix
@@ -42,8 +42,8 @@ let
nix-1p = fetchFromGitHub {
owner = "tazjin";
repo = "nix-1p";
- rev = "3cd0f7d7b4f487d04a3f1e3ca8f2eb1ab958c49b";
- sha256 = "02lpda03q580gyspkbmlgnb2cbm66rrcgqsv99aihpbwyjym81af";
+ rev = "e0a051a016b9118bea90ec293d6cd346b9707e77";
+ sha256 = "0d1lfkxg03lki8dc3229g1cgqiq3nfrqgrknw99p6w0zk1pjd4dj";
};
in runCommand "nixery-book" { } ''
mkdir -p $out
--
cgit 1.4.1
From 306e12787a9977334d44f215eece8f4ae89fe03f Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 21 Aug 2019 10:22:44 +0100
Subject: chore(build): Add iana-etc to Nixery's own image
This package is used by a variety of programs that users may want to
additionally embed into Nixery (for example cachix), but those packages
don't refer to it explicitly.
---
tools/nixery/default.nix | 1 +
1 file changed, 1 insertion(+)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 31b15396a5fd..1f908b609897 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -90,6 +90,7 @@ rec {
git
gnutar
gzip
+ iana-etc
nix
nixery-build-image
nixery-launch-script
--
cgit 1.4.1
From 92270fcbe472c2cef3cbd8f3f92b950aa78bc777 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 1 Sep 2019 23:28:57 +0100
Subject: refactor(build-image): Simplify customisation layer builder
Moves the relevant parts of the customisation layer construction from
dockerTools.mkCustomisationLayer into the Nixery code base.
The version in dockerTools builds additional files (including via
hashing of potentially large files) which are not required when
serving an image over the registry protocol.
---
tools/nixery/build-image/build-image.nix | 31 ++++++++++++++++---------------
1 file changed, 16 insertions(+), 15 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index b67fef6ceb88..7b0f2cac9739 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -31,8 +31,6 @@
# Packages to install by name (which must refer to top-level attributes of
# nixpkgs). This is passed in as a JSON-array in string form.
packages ? "[]",
- # Optional bash script to run on the files prior to fixturizing the layer.
- extraCommands ? "", uid ? 0, gid ? 0,
# Docker's modern image storage mechanisms have a maximum of 125
# layers. To allow for some extensibility (via additional layers),
# the default here is set to something a little less than that.
@@ -106,11 +104,6 @@ let
fetched = (map (deepFetch pkgs) (fromJSON packages));
in foldl' splitter init fetched;
- contentsEnv = symlinkJoin {
- name = "bulk-layers";
- paths = allContents.contents;
- };
-
popularity = builtins.fetchurl {
url = "https://storage.googleapis.com/nixery-layers/popularity/nixos-19.03-20190812.json";
sha256 = "16sxd49vqqg2nrhwynm36ba6bc2yff5cd5hf83wi0hanw5sx3svk";
@@ -156,13 +149,23 @@ let
(lib.concatStringsSep "\n" (map (layer: pathsToLayer layer.contents)
groupedLayers));
- customisationLayer = mkCustomisationLayer {
- name = baseName;
- contents = contentsEnv;
- baseJson = writeText "empty.json" "{}";
- inherit uid gid extraCommands;
+ # Create a symlink forest into all top-level store paths.
+ contentsEnv = symlinkJoin {
+ name = "bulk-layers";
+ paths = allContents.contents;
};
+ # This customisation layer, which contains the symlink forest
+ # required at container runtime, is assembled with a simplified
+ # version of dockerTools.mkCustomisationLayer.
+ #
+ # No metadata creation (such as layer hashing) is required when
+ # serving images over the API.
+ customisationLayer = runCommand "customisation-layer.tar" {} ''
+ cp -r ${contentsEnv}/ ./layer
+ tar --transform='s|^\./||' -C layer --sort=name --mtime="@$SOURCE_DATE_EPOCH" --owner=0 --group=0 -cf $out .
+ '';
+
# Inspect the returned bulk layers to determine which layers belong to the
# image and how to serve them.
#
@@ -172,9 +175,7 @@ let
buildInputs = [ coreutils findutils jq openssl ];
} ''
cat ${bulkLayers} | sort -t/ -k5 -n > layer-list
- echo -n layer-list:
- cat layer-list
- echo ${customisationLayer}/layer.tar >> layer-list
+ echo ${customisationLayer} >> layer-list
for layer in $(cat layer-list); do
layerSha256=$(sha256sum $layer | cut -d ' ' -f1)
--
cgit 1.4.1
From ce8635833b226df8497be34a8009d0e47cb0399e Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 1 Sep 2019 23:55:34 +0100
Subject: refactor(build-image): Remove implicit import of entire package set
Explicitly refer to where things come from, and also don't import
dockerTools as it is no longer used for anything.
---
tools/nixery/build-image/build-image.nix | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index 7b0f2cac9739..cd0ef91b3135 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -39,14 +39,11 @@
...
}:
-# Since this is essentially a re-wrapping of some of the functionality that is
-# implemented in the dockerTools, we need all of its components in our top-level
-# namespace.
with builtins;
-with pkgs;
-with dockerTools;
let
+ inherit (pkgs) lib runCommand writeText;
+
tarLayer = "application/vnd.docker.image.rootfs.diff.tar";
baseName = baseNameOf name;
@@ -150,7 +147,7 @@ let
groupedLayers));
# Create a symlink forest into all top-level store paths.
- contentsEnv = symlinkJoin {
+ contentsEnv = pkgs.symlinkJoin {
name = "bulk-layers";
paths = allContents.contents;
};
@@ -172,7 +169,7 @@ let
# This computes both an MD5 and a SHA256 hash of each layer, which are used
# for different purposes. See the registry server implementation for details.
allLayersJson = runCommand "fs-layer-list.json" {
- buildInputs = [ coreutils findutils jq openssl ];
+ buildInputs = with pkgs; [ coreutils jq openssl ];
} ''
cat ${bulkLayers} | sort -t/ -k5 -n > layer-list
echo ${customisationLayer} >> layer-list
@@ -204,7 +201,7 @@ let
};
configJson = writeText "${baseName}-config.json" (toJSON config);
configMetadata = fromJSON (readFile (runCommand "config-meta" {
- buildInputs = [ jq openssl ];
+ buildInputs = with pkgs; [ jq openssl ];
} ''
size=$(wc -c ${configJson} | cut -d ' ' -f1)
sha256=$(sha256sum ${configJson} | cut -d ' ' -f1)
--
cgit 1.4.1
From 32b9b5099eaeabc5672b6236e26743736b85cc41 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 2 Sep 2019 23:32:36 +0100
Subject: feat(server): Add configuration option for Nix build timeouts
Adds a NIX_TIMEOUT environment variable that sets the maximum number
of seconds each Nix builder is allowed to run. By default this is set
to 60 seconds, which should be plenty, as Nixery is not expected to
build uncached binaries in most production setups.
Currently the errors Nix throws on a build timeout are not separated
from other types of errors, meaning that users will see a generic 500
server error in case of a timeout.
This fixes #47
---
tools/nixery/server/builder/builder.go | 1 +
tools/nixery/server/config/config.go | 16 ++++++++++------
2 files changed, 11 insertions(+), 6 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index a249384d9fef..35a2c2f71283 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -119,6 +119,7 @@ func BuildImage(ctx *context.Context, cfg *config.Config, cache *BuildCache, ima
}
args := []string{
+ "--timeout", cfg.Timeout,
"--argstr", "name", image.Name,
"--argstr", "packages", string(packages),
}
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index 4e3b70dcdc22..5fba0e658ae0 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -102,10 +102,12 @@ func signingOptsFromEnv() *storage.SignedURLOptions {
}
}
-func getConfig(key, desc string) string {
+func getConfig(key, desc, def string) string {
value := os.Getenv(key)
- if value == "" {
+ if value == "" && def == "" {
log.Fatalln(desc + " must be specified")
+ } else if value == "" {
+ return def
}
return value
@@ -117,15 +119,17 @@ type Config struct {
Signing *storage.SignedURLOptions // Signing options to use for GCS URLs
Port string // Port on which to launch HTTP server
Pkgs *PkgSource // Source for Nix package set
- WebDir string
+ Timeout string // Timeout for a single Nix builder (seconds)
+ WebDir string // Directory with static web assets
}
func FromEnv() *Config {
return &Config{
- Bucket: getConfig("BUCKET", "GCS bucket for layer storage"),
- Port: getConfig("PORT", "HTTP port"),
+ Bucket: getConfig("BUCKET", "GCS bucket for layer storage", ""),
+ Port: getConfig("PORT", "HTTP port", ""),
Pkgs: pkgSourceFromEnv(),
Signing: signingOptsFromEnv(),
- WebDir: getConfig("WEB_DIR", "Static web file dir"),
+ Timeout: getConfig("NIX_TIMEOUT", "Nix builder timeout", "60"),
+ WebDir: getConfig("WEB_DIR", "Static web file dir", ""),
}
}
--
cgit 1.4.1
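As a brief check of the new setting, the timeout resolves as sketched below (a hypothetical snippet; the other variables carry placeholder values and are only needed because `config.FromEnv` requires them):

```go
package main

import (
	"fmt"
	"os"

	"github.com/google/nixery/config"
)

func main() {
	// Required settings with placeholder values.
	os.Setenv("BUCKET", "nixery-layers")
	os.Setenv("PORT", "8080")
	os.Setenv("WEB_DIR", "/var/nixery/web")
	os.Setenv("NIXERY_CHANNEL", "nixos-19.03")

	// Unset: the 60 second default applies.
	fmt.Println(config.FromEnv().Timeout) // 60

	// Set: the value is handed to Nix as '--timeout 120'.
	os.Setenv("NIX_TIMEOUT", "120")
	fmt.Println(config.FromEnv().Timeout) // 120
}
```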
From 496a4ab84742279fb7bbd1c8ac9d4ebd1a44b148 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 2 Sep 2019 23:35:50 +0100
Subject: docs: Add information about NIX_TIMEOUT variable
---
tools/nixery/README.md | 2 ++
tools/nixery/docs/src/run-your-own.md | 4 +++-
2 files changed, 5 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index cdae23bc4b72..3bc6df845a7f 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -81,6 +81,8 @@ variables:
locally configured SSH/git credentials)
* `NIXERY_PKGS_PATH`: A local filesystem path containing a Nix package set to use
for building
+* `NIX_TIMEOUT`: Number of seconds that any Nix builder is allowed to run
+ (defaults to 60)
* `GCS_SIGNING_KEY`: A Google service account key (in PEM format) that can be
used to sign Cloud Storage URLs
* `GCS_SIGNING_ACCOUNT`: Google service account ID that the signing key belongs
diff --git a/tools/nixery/docs/src/run-your-own.md b/tools/nixery/docs/src/run-your-own.md
index 0539ab02fcff..7a294f56055e 100644
--- a/tools/nixery/docs/src/run-your-own.md
+++ b/tools/nixery/docs/src/run-your-own.md
@@ -81,8 +81,10 @@ You may set *one* of these, if unset Nixery defaults to `nixos-19.03`:
* `NIXERY_PKGS_PATH`: A local filesystem path containing a Nix package set to use
for building
-You may set *both* of these:
+You may set *all* of these:
+* `NIX_TIMEOUT`: Number of seconds that any Nix builder is allowed to run
+ (defaults to 60)
* `GCS_SIGNING_KEY`: A Google service account key (in PEM format) that can be
used to [sign Cloud Storage URLs][signed-urls]
* `GCS_SIGNING_ACCOUNT`: Google service account ID that the signing key belongs
--
cgit 1.4.1
From 980f5e218761fa340b746e6336db62abf63c953a Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 8 Sep 2019 21:53:22 +0100
Subject: refactor(server): Move package source management logic to server
Introduces three new types representing each of the possible package
sources and moves the logic for specifying the package source to the
server.
Concrete changes:
* Determining whether a specified git reference is a commit vs. a
branch/tag is now done in the server, and is done more precisely by
using a regular expression (see the sketch after this list).
* Package sources now have a new `CacheKey` function which can be used
to retrieve a key under which a build manifest can be cached *if*
the package source is not a moving target (i.e. a full git commit
hash of either nixpkgs or a private repository).
This function is not yet used.
* Users *must* now specify a package source; Nixery no longer defaults
to anything and will fail to launch if no source is configured.
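
To make the cacheability rule above concrete, here is a minimal, standalone Go sketch of the heuristic: a reference counts as pinned only if it looks like a full 40-character commit hash, and only then is a SHA-1 key derived from the package list and the tag. The `cacheKey` helper and the `main` wiring are illustrative, not the server's actual API.

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"regexp"
	"strings"
)

// A full git commit hash is 40 lowercase hex characters; anything else
// (a branch or tag name) is treated as a moving target.
var commitRegex = regexp.MustCompile(`^[0-9a-f]{40}$`)

// cacheKey returns a stable key for a package list pinned to a commit,
// or "" when the reference could move and the result must not be cached.
func cacheKey(pkgs []string, ref string) string {
	if !commitRegex.MatchString(ref) {
		return ""
	}
	unhashed := strings.Join(pkgs, "") + ref
	return fmt.Sprintf("%x", sha1.Sum([]byte(unhashed)))
}

func main() {
	pkgs := []string{"bash", "curl"}

	// A branch name is not cacheable.
	fmt.Printf("master -> %q\n", cacheKey(pkgs, "master"))

	// A pinned commit produces a stable key.
	fmt.Printf("pinned -> %q\n", cacheKey(pkgs, "5271f8dddc0f2e54f55bd2fc1868c09ff72ac980"))
}
```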
---
tools/nixery/.travis.yml | 1 +
tools/nixery/build-image/default.nix | 5 +-
tools/nixery/build-image/load-pkgs.nix | 54 +++--------
tools/nixery/default.nix | 3 +-
tools/nixery/server/builder/builder.go | 7 +-
tools/nixery/server/config/config.go | 66 ++------------
tools/nixery/server/config/pkgsource.go | 155 ++++++++++++++++++++++++++++++++
tools/nixery/server/main.go | 6 +-
8 files changed, 192 insertions(+), 105 deletions(-)
create mode 100644 tools/nixery/server/config/pkgsource.go
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 471037f36474..f670ab0e2cf5 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -31,6 +31,7 @@ script:
-e GOOGLE_APPLICATION_CREDENTIALS=/var/nixery/key.json \
-e GCS_SIGNING_ACCOUNT="${GCS_SIGNING_ACCOUNT}" \
-e GCS_SIGNING_KEY=/var/nixery/gcs.pem \
+ -e NIXERY_CHANNEL=nixos-unstable \
${IMG}
# print all of the container's logs regardless of success
diff --git a/tools/nixery/build-image/default.nix b/tools/nixery/build-image/default.nix
index 0d3002cb404e..6b1cea6f0ca2 100644
--- a/tools/nixery/build-image/default.nix
+++ b/tools/nixery/build-image/default.nix
@@ -20,11 +20,12 @@
# Because of the insanity occuring below, this function must mirror
# all arguments of build-image.nix.
-, pkgSource ? "nixpkgs!nixos-19.03"
+, srcType ? "nixpkgs"
+, srcArgs ? "nixos-19.03"
, tag ? null, name ? null, packages ? null, maxLayers ? null
}@args:
-let pkgs = import ./load-pkgs.nix { inherit pkgSource; };
+let pkgs = import ./load-pkgs.nix { inherit srcType srcArgs; };
in with pkgs; rec {
groupLayers = buildGoPackage {
diff --git a/tools/nixery/build-image/load-pkgs.nix b/tools/nixery/build-image/load-pkgs.nix
index 3e8b450c45d2..cceebfc14dae 100644
--- a/tools/nixery/build-image/load-pkgs.nix
+++ b/tools/nixery/build-image/load-pkgs.nix
@@ -12,17 +12,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-# Load a Nix package set from a source specified in one of the following
-# formats:
-#
-# 1. nixpkgs!$channel (e.g. nixpkgs!nixos-19.03)
-# 2. git!$repo!$rev (e.g. git!git@github.com:NixOS/nixpkgs.git!master)
-# 3. path!$path (e.g. path!/var/local/nixpkgs)
-#
-# '!' was chosen as the separator because `builtins.split` does not
-# support regex escapes and there are few other candidates. It
-# doesn't matter much because this is invoked by the server.
-{ pkgSource, args ? { } }:
+# Load a Nix package set from one of the supported source types
+# (nixpkgs, git, path).
+{ srcType, srcArgs, importArgs ? { } }:
with builtins;
let
@@ -32,42 +24,22 @@ let
let
url =
"https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
- in import (fetchTarball url) args;
+ in import (fetchTarball url) importArgs;
# If a git repository is requested, it is retrieved via
# builtins.fetchGit which defaults to the git configuration of the
# outside environment. This means that user-configured SSH
# credentials etc. are going to work as expected.
- fetchImportGit = url: rev:
- let
- # builtins.fetchGit needs to know whether 'rev' is a reference
- # (e.g. a branch/tag) or a revision (i.e. a commit hash)
- #
- # Since this data is being extrapolated from the supplied image
- # tag, we have to guess if we want to avoid specifying a format.
- #
- # There are some additional caveats around whether the default
- # branch contains the specified revision, which need to be
- # explained to users.
- spec = if (stringLength rev) == 40 then {
- inherit url rev;
- } else {
- inherit url;
- ref = rev;
- };
- in import (fetchGit spec) args;
+ fetchImportGit = spec: import (fetchGit spec) importArgs;
# No special handling is used for paths, so users are expected to pass one
# that will work natively with Nix.
- importPath = path: import (toPath path) args;
-
- source = split "!" pkgSource;
- sourceType = elemAt source 0;
-in if sourceType == "nixpkgs" then
- fetchImportChannel (elemAt source 2)
-else if sourceType == "git" then
- fetchImportGit (elemAt source 2) (elemAt source 4)
-else if sourceType == "path" then
- importPath (elemAt source 2)
+ importPath = path: import (toPath path) importArgs;
+in if srcType == "nixpkgs" then
+ fetchImportChannel srcArgs
+else if srcType == "git" then
+ fetchImportGit (fromJSON srcArgs)
+else if srcType == "path" then
+ importPath srcArgs
else
- throw ("Invalid package set source specification: ${pkgSource}")
+ throw ("Invalid package set source specification: ${srcType} (${srcArgs})")
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 1f908b609897..f7a2a1712bfb 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -28,7 +28,8 @@ rec {
# Implementation of the image building & layering logic
nixery-build-image = (import ./build-image {
- pkgSource = "path!${}";
+ srcType = "path";
+ srcArgs = ;
}).wrapper;
# Use mdBook to build a static asset page which Nixery can then
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 35a2c2f71283..ce88d2dd894f 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -118,15 +118,16 @@ func BuildImage(ctx *context.Context, cfg *config.Config, cache *BuildCache, ima
return nil, err
}
+ srcType, srcArgs := cfg.Pkgs.Render(image.Tag)
+
args := []string{
"--timeout", cfg.Timeout,
"--argstr", "name", image.Name,
"--argstr", "packages", string(packages),
+ "--argstr", "srcType", srcType,
+ "--argstr", "srcArgs", srcArgs,
}
- if cfg.Pkgs != nil {
- args = append(args, "--argstr", "pkgSource", cfg.Pkgs.Render(image.Tag))
- }
cmd := exec.Command("nixery-build-image", args...)
outpipe, err := cmd.StdoutPipe()
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index 5fba0e658ae0..ea1bb1ab4532 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -18,7 +18,6 @@
package config
import (
- "fmt"
"io/ioutil"
"log"
"os"
@@ -26,58 +25,6 @@ import (
"cloud.google.com/go/storage"
)
-// pkgSource represents the source from which the Nix package set used
-// by Nixery is imported. Users configure the source by setting one of
-// the supported environment variables.
-type PkgSource struct {
- srcType string
- args string
-}
-
-// Convert the package source into the representation required by Nix.
-func (p *PkgSource) Render(tag string) string {
- // The 'git' source requires a tag to be present.
- if p.srcType == "git" {
- if tag == "latest" || tag == "" {
- tag = "master"
- }
-
- return fmt.Sprintf("git!%s!%s", p.args, tag)
- }
-
- return fmt.Sprintf("%s!%s", p.srcType, p.args)
-}
-
-// Retrieve a package source from the environment. If no source is
-// specified, the Nix code will default to a recent NixOS channel.
-func pkgSourceFromEnv() *PkgSource {
- if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
- log.Printf("Using Nix package set from Nix channel %q\n", channel)
- return &PkgSource{
- srcType: "nixpkgs",
- args: channel,
- }
- }
-
- if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
- log.Printf("Using Nix package set from git repository at %q\n", git)
- return &PkgSource{
- srcType: "git",
- args: git,
- }
- }
-
- if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
- log.Printf("Using Nix package set from path %q\n", path)
- return &PkgSource{
- srcType: "path",
- args: path,
- }
- }
-
- return nil
-}
-
// Load (optional) GCS bucket signing data from the GCS_SIGNING_KEY and
// GCS_SIGNING_ACCOUNT envvars.
func signingOptsFromEnv() *storage.SignedURLOptions {
@@ -118,18 +65,23 @@ type Config struct {
Bucket string // GCS bucket to cache & serve layers
Signing *storage.SignedURLOptions // Signing options to use for GCS URLs
Port string // Port on which to launch HTTP server
- Pkgs *PkgSource // Source for Nix package set
+ Pkgs PkgSource // Source for Nix package set
Timeout string // Timeout for a single Nix builder (seconds)
WebDir string // Directory with static web assets
}
-func FromEnv() *Config {
+func FromEnv() (*Config, error) {
+ pkgs, err := pkgSourceFromEnv()
+ if err != nil {
+ return nil, err
+ }
+
return &Config{
Bucket: getConfig("BUCKET", "GCS bucket for layer storage", ""),
Port: getConfig("PORT", "HTTP port", ""),
- Pkgs: pkgSourceFromEnv(),
+ Pkgs: pkgs,
Signing: signingOptsFromEnv(),
Timeout: getConfig("NIX_TIMEOUT", "Nix builder timeout", "60"),
WebDir: getConfig("WEB_DIR", "Static web file dir", ""),
- }
+ }, nil
}
diff --git a/tools/nixery/server/config/pkgsource.go b/tools/nixery/server/config/pkgsource.go
new file mode 100644
index 000000000000..61bea33dfe62
--- /dev/null
+++ b/tools/nixery/server/config/pkgsource.go
@@ -0,0 +1,155 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+package config
+
+import (
+ "crypto/sha1"
+ "encoding/json"
+ "fmt"
+ "log"
+ "os"
+ "regexp"
+ "strings"
+)
+
+// PkgSource represents the source from which the Nix package set used
+// by Nixery is imported. Users configure the source by setting one of
+// the supported environment variables.
+type PkgSource interface {
+ // Convert the package source into the representation required
+ // for calling Nix.
+ Render(tag string) (string, string)
+
+ // Create a key by which builds for this source and image
+ // combination can be cached.
+ //
+ // The empty string means that this value is not cacheable due
+ // to the package source being a moving target (such as a
+ // channel).
+ CacheKey(pkgs []string, tag string) string
+}
+
+type GitSource struct {
+ repository string
+}
+
+// Regex to determine whether a git reference is a commit hash or
+// something else (branch/tag).
+//
+// Used to check whether a git reference is cacheable, and to pass the
+// correct git structure to Nix.
+//
+// Note: If a user creates a branch or tag with the name of a commit
+// and references it intentionally, this heuristic will fail.
+var commitRegex = regexp.MustCompile(`^[0-9a-f]{40}$`)
+
+func (g *GitSource) Render(tag string) (string, string) {
+ args := map[string]string{
+ "url": g.repository,
+ }
+
+ // The 'git' source requires a tag to be present. If the user
+ // has not specified one, it is assumed that the default
+ // 'master' branch should be used.
+ if tag == "latest" || tag == "" {
+ tag = "master"
+ }
+
+ if commitRegex.MatchString(tag) {
+ args["rev"] = tag
+ } else {
+ args["ref"] = tag
+ }
+
+ j, _ := json.Marshal(args)
+
+ return "git", string(j)
+}
+
+func (g *GitSource) CacheKey(pkgs []string, tag string) string {
+ // Only full commit hashes can be used for caching, as
+ // everything else is potentially a moving target.
+ if !commitRegex.MatchString(tag) {
+ return ""
+ }
+
+ unhashed := strings.Join(pkgs, "") + tag
+ hashed := fmt.Sprintf("%x", sha1.Sum([]byte(unhashed)))
+
+ return hashed
+}
+
+type NixChannel struct {
+ channel string
+}
+
+func (n *NixChannel) Render(tag string) (string, string) {
+ return "nixpkgs", n.channel
+}
+
+func (n *NixChannel) CacheKey(pkgs []string, tag string) string {
+ // Since Nix channels are downloaded from the nixpkgs-channels
+ // Github, users can specify full commit hashes as the
+ // "channel", in which case builds are cacheable.
+ if !commitRegex.MatchString(n.channel) {
+ return ""
+ }
+
+ unhashed := strings.Join(pkgs, "") + n.channel
+ hashed := fmt.Sprintf("%x", sha1.Sum([]byte(unhashed)))
+
+ return hashed
+}
+
+type PkgsPath struct {
+ path string
+}
+
+func (p *PkgsPath) Render(tag string) (string, string) {
+ return "path", p.path
+}
+
+func (p *PkgsPath) CacheKey(pkgs []string, tag string) string {
+ // Path-based builds are not currently cacheable because we
+ // have no local hash of the package folder's state easily
+ // available.
+ return ""
+}
+
+// Retrieve a package source from the environment. If no source is
+// specified, the Nix code will default to a recent NixOS channel.
+func pkgSourceFromEnv() (PkgSource, error) {
+ if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
+ log.Printf("Using Nix package set from Nix channel %q\n", channel)
+ return &NixChannel{
+ channel: channel,
+ }, nil
+ }
+
+ if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
+ log.Printf("Using Nix package set from git repository at %q\n", git)
+ return &GitSource{
+ repository: git,
+ }, nil
+ }
+
+ if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
+ log.Printf("Using Nix package set from path %q\n", path)
+ return &PkgsPath{
+ path: path,
+ }, nil
+ }
+
+ return nil, fmt.Errorf("no valid package source has been specified")
+}
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index da181249b285..49db1cdf8a5f 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -190,7 +190,11 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
func main() {
- cfg := config.FromEnv()
+ cfg, err := config.FromEnv()
+ if err != nil {
+ log.Fatalln("Failed to load configuration", err)
+ }
+
ctx := context.Background()
bucket := prepareBucket(&ctx, cfg)
cache := builder.NewCache()
--
cgit 1.4.1
From 051eb77b3de81d9393e5c5443c06b62b6abf1535 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 8 Sep 2019 22:21:14 +0100
Subject: refactor(server): Use package source specific cache keys
Use the PackageSource.CacheKey function introduced in the previous
commit to determine the key at which a manifest should be cached in
the local cache.
Due to this change, manifests for moving target sources are no longer
cached and the recency threshold logic has been removed.
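
As a rough sketch of the resulting cache discipline, the snippet below short-circuits both reads and writes on an empty cache key, so moving-target sources are never served from or written to the manifest cache. The `manifestCache` type and its methods are stand-ins, not the real builder cache.

```go
package main

import (
	"fmt"
	"sync"
)

// manifestCache is a stand-in for the builder's local cache: it maps a
// package-source cache key to the path of a built manifest.
type manifestCache struct {
	mu    sync.RWMutex
	store map[string]string
}

// lookup never hits the cache for an empty key, i.e. for sources that
// are moving targets.
func (c *manifestCache) lookup(key string) (string, bool) {
	if key == "" {
		return "", false
	}
	c.mu.RLock()
	defer c.mu.RUnlock()
	path, ok := c.store[key]
	return path, ok
}

// insert silently drops entries for non-cacheable builds.
func (c *manifestCache) insert(key, path string) {
	if key == "" {
		return
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.store[key] = path
}

func main() {
	c := &manifestCache{store: make(map[string]string)}

	c.insert("", "/tmp/manifest-moving-target") // dropped
	c.insert("abc123", "/tmp/manifest-pinned")  // cached

	fmt.Println(c.lookup(""))       // miss
	fmt.Println(c.lookup("abc123")) // hit
}
```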
---
tools/nixery/server/builder/builder.go | 4 +--
tools/nixery/server/builder/cache.go | 49 +++++++++++++---------------------
2 files changed, 21 insertions(+), 32 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index ce88d2dd894f..cfe03511f68e 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -110,7 +110,7 @@ func convenienceNames(packages []string) []string {
// Call out to Nix and request that an image be built. Nix will, upon success,
// return a manifest for the container image.
func BuildImage(ctx *context.Context, cfg *config.Config, cache *BuildCache, image *Image, bucket *storage.BucketHandle) (*BuildResult, error) {
- resultFile, cached := cache.manifestFromCache(image)
+ resultFile, cached := cache.manifestFromCache(cfg.Pkgs, image)
if !cached {
packages, err := json.Marshal(image.Packages)
@@ -158,7 +158,7 @@ func BuildImage(ctx *context.Context, cfg *config.Config, cache *BuildCache, ima
log.Println("Finished Nix image build")
resultFile = strings.TrimSpace(string(stdout))
- cache.cacheManifest(image, resultFile)
+ cache.cacheManifest(cfg.Pkgs, image, resultFile)
}
buildOutput, err := ioutil.ReadFile(resultFile)
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 0014789afff5..8ade88037265 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -14,26 +14,15 @@
package builder
import (
+ "github.com/google/nixery/config"
"sync"
- "time"
)
-// recencyThreshold is the amount of time that a manifest build will be cached
-// for. When using the channel mechanism for retrieving nixpkgs, Nix will
-// occasionally re-fetch the channel so things can in fact change while the
-// instance is running.
-const recencyThreshold = time.Duration(6) * time.Hour
-
-type manifestEntry struct {
- built time.Time
- path string
-}
-
type void struct{}
type BuildCache struct {
mmtx sync.RWMutex
- mcache map[string]manifestEntry
+ mcache map[string]string
lmtx sync.RWMutex
lcache map[string]void
@@ -41,7 +30,7 @@ type BuildCache struct {
func NewCache() BuildCache {
return BuildCache{
- mcache: make(map[string]manifestEntry),
+ mcache: make(map[string]string),
lcache: make(map[string]void),
}
}
@@ -63,33 +52,33 @@ func (c *BuildCache) sawLayer(hash string) {
c.lcache[hash] = void{}
}
-// Has this manifest been built already? If yes, we can reuse the
-// result given that the build happened recently enough.
-func (c *BuildCache) manifestFromCache(image *Image) (string, bool) {
- c.mmtx.RLock()
+// Retrieve a cached manifest if the build is cacheable and it exists.
+func (c *BuildCache) manifestFromCache(src config.PkgSource, image *Image) (string, bool) {
+ key := src.CacheKey(image.Packages, image.Tag)
+ if key == "" {
+ return "", false
+ }
- entry, ok := c.mcache[image.Name+image.Tag]
+ c.mmtx.RLock()
+ path, ok := c.mcache[key]
c.mmtx.RUnlock()
if !ok {
return "", false
}
- if time.Since(entry.built) > recencyThreshold {
- return "", false
- }
-
- return entry.path, true
+ return path, true
}
-// Adds the result of a manifest build to the cache.
-func (c *BuildCache) cacheManifest(image *Image, path string) {
- entry := manifestEntry{
- built: time.Now(),
- path: path,
+// Adds the result of a manifest build to the cache, if the manifest
+// is considered cacheable.
+func (c *BuildCache) cacheManifest(src config.PkgSource, image *Image, path string) {
+ key := src.CacheKey(image.Packages, image.Tag)
+ if key == "" {
+ return
}
c.mmtx.Lock()
- c.mcache[image.Name+image.Tag] = entry
+ c.mcache[key] = path
c.mmtx.Unlock()
}
--
cgit 1.4.1
From 4a58b0ab4d21473723834dec651c876da2dec220 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 9 Sep 2019 16:41:52 +0100
Subject: feat(server): Cache built manifests to the GCS bucket
Caches manifests under `manifests/$cacheKey` in the GCS bucket and
introduces two-tiered retrieval of manifests from the caches (local
first, bucket second).
There is some cleanup to be done in this code, but the initial version
works.
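
A condensed sketch of the two-tier lookup described above, assuming a hypothetical `localLookup` callback for the in-memory tier and using the `cloud.google.com/go/storage` client for the bucket tier; the real code in the patch below additionally probes object attributes first and logs more detail.

```go
package main

import (
	"context"
	"io"
	"log"
	"os"

	"cloud.google.com/go/storage"
)

// fetchManifest checks the in-memory tier first and falls back to the
// "manifests/<key>" object in the GCS bucket. localLookup is a
// hypothetical stand-in for the local cache lookup.
func fetchManifest(ctx context.Context, bucket *storage.BucketHandle,
	localLookup func(string) (string, bool), key string) (string, bool) {
	if path, ok := localLookup(key); ok {
		return path, true
	}

	r, err := bucket.Object("manifests/" + key).NewReader(ctx)
	if err != nil {
		// Covers storage.ErrObjectNotExist as well as transient errors;
		// either way the build proceeds as a cache miss.
		return "", false
	}
	defer r.Close()

	path := os.TempDir() + "/" + key
	f, err := os.Create(path)
	if err != nil {
		log.Printf("failed to create local copy of manifest %s: %s", key, err)
		return "", false
	}
	defer f.Close()

	if _, err := io.Copy(f, r); err != nil {
		log.Printf("failed to download cached manifest %s: %s", key, err)
		return "", false
	}

	return path, true
}

func main() {
	// Wiring up a real client requires GCS credentials; this sketch only
	// demonstrates the lookup order.
	_ = fetchManifest
}
```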
---
tools/nixery/server/builder/builder.go | 6 +-
tools/nixery/server/builder/cache.go | 111 +++++++++++++++++++++++++++------
tools/nixery/server/main.go | 2 +-
3 files changed, 96 insertions(+), 23 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index cfe03511f68e..dd26ccc310aa 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -109,8 +109,8 @@ func convenienceNames(packages []string) []string {
// Call out to Nix and request that an image be built. Nix will, upon success,
// return a manifest for the container image.
-func BuildImage(ctx *context.Context, cfg *config.Config, cache *BuildCache, image *Image, bucket *storage.BucketHandle) (*BuildResult, error) {
- resultFile, cached := cache.manifestFromCache(cfg.Pkgs, image)
+func BuildImage(ctx *context.Context, cfg *config.Config, cache *LocalCache, image *Image, bucket *storage.BucketHandle) (*BuildResult, error) {
+ resultFile, cached := manifestFromCache(ctx, bucket, cfg.Pkgs, cache, image)
if !cached {
packages, err := json.Marshal(image.Packages)
@@ -158,7 +158,7 @@ func BuildImage(ctx *context.Context, cfg *config.Config, cache *BuildCache, ima
log.Println("Finished Nix image build")
resultFile = strings.TrimSpace(string(stdout))
- cache.cacheManifest(cfg.Pkgs, image, resultFile)
+ cacheManifest(ctx, bucket, cfg.Pkgs, cache, image, resultFile)
}
buildOutput, err := ioutil.ReadFile(resultFile)
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 8ade88037265..32f55e3a681c 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -14,13 +14,21 @@
package builder
import (
- "github.com/google/nixery/config"
+ "context"
+ "io"
+ "log"
+ "os"
"sync"
+
+ "cloud.google.com/go/storage"
+ "github.com/google/nixery/config"
)
type void struct{}
-type BuildCache struct {
+// LocalCache implements the structure used for local caching of
+// manifests and layer uploads.
+type LocalCache struct {
mmtx sync.RWMutex
mcache map[string]string
@@ -28,8 +36,8 @@ type BuildCache struct {
lcache map[string]void
}
-func NewCache() BuildCache {
- return BuildCache{
+func NewCache() LocalCache {
+ return LocalCache{
mcache: make(map[string]string),
lcache: make(map[string]void),
}
@@ -38,7 +46,7 @@ func NewCache() BuildCache {
// Has this layer hash already been seen by this Nixery instance? If
// yes, we can skip upload checking and such because it has already
// been done.
-func (c *BuildCache) hasSeenLayer(hash string) bool {
+func (c *LocalCache) hasSeenLayer(hash string) bool {
c.lmtx.RLock()
defer c.lmtx.RUnlock()
_, seen := c.lcache[hash]
@@ -46,19 +54,14 @@ func (c *BuildCache) hasSeenLayer(hash string) bool {
}
// Layer has now been seen and should be stored.
-func (c *BuildCache) sawLayer(hash string) {
+func (c *LocalCache) sawLayer(hash string) {
c.lmtx.Lock()
defer c.lmtx.Unlock()
c.lcache[hash] = void{}
}
// Retrieve a cached manifest if the build is cacheable and it exists.
-func (c *BuildCache) manifestFromCache(src config.PkgSource, image *Image) (string, bool) {
- key := src.CacheKey(image.Packages, image.Tag)
- if key == "" {
- return "", false
- }
-
+func (c *LocalCache) manifestFromLocalCache(key string) (string, bool) {
c.mmtx.RLock()
path, ok := c.mcache[key]
c.mmtx.RUnlock()
@@ -70,15 +73,85 @@ func (c *BuildCache) manifestFromCache(src config.PkgSource, image *Image) (stri
return path, true
}
-// Adds the result of a manifest build to the cache, if the manifest
-// is considered cacheable.
-func (c *BuildCache) cacheManifest(src config.PkgSource, image *Image, path string) {
- key := src.CacheKey(image.Packages, image.Tag)
+// Adds the result of a manifest build to the local cache, if the
+// manifest is considered cacheable.
+func (c *LocalCache) localCacheManifest(key, path string) {
+ c.mmtx.Lock()
+ c.mcache[key] = path
+ c.mmtx.Unlock()
+}
+
+// Retrieve a manifest from the cache(s). First the local cache is
+// checked, then the GCS-bucket cache.
+func manifestFromCache(ctx *context.Context, bucket *storage.BucketHandle, pkgs config.PkgSource, cache *LocalCache, image *Image) (string, bool) {
+ key := pkgs.CacheKey(image.Packages, image.Tag)
+ if key == "" {
+ return "", false
+ }
+
+ path, cached := cache.manifestFromLocalCache(key)
+ if cached {
+ return path, true
+ }
+
+ obj := bucket.Object("manifests/" + key)
+
+ // Probe whether the file exists before trying to fetch it.
+ _, err := obj.Attrs(*ctx)
+ if err != nil {
+ return "", false
+ }
+
+ r, err := obj.NewReader(*ctx)
+ if err != nil {
+ log.Printf("Failed to retrieve manifest '%s' from cache: %s\n", key, err)
+ return "", false
+ }
+ defer r.Close()
+
+ path = os.TempDir() + "/" + key
+ f, _ := os.Create(path)
+ defer f.Close()
+
+ _, err = io.Copy(f, r)
+ if err != nil {
+ log.Printf("Failed to read cached manifest for '%s': %s\n", key, err)
+ }
+
+ log.Printf("Retrieved manifest for '%s' (%s) from GCS\n", image.Name, key)
+ cache.localCacheManifest(key, path)
+
+ return path, true
+}
+
+func cacheManifest(ctx *context.Context, bucket *storage.BucketHandle, pkgs config.PkgSource, cache *LocalCache, image *Image, path string) {
+ key := pkgs.CacheKey(image.Packages, image.Tag)
if key == "" {
return
}
- c.mmtx.Lock()
- c.mcache[key] = path
- c.mmtx.Unlock()
+ cache.localCacheManifest(key, path)
+
+ obj := bucket.Object("manifests/" + key)
+ w := obj.NewWriter(*ctx)
+
+ f, err := os.Open(path)
+ if err != nil {
+ log.Printf("failed to open '%s' manifest for cache upload: %s\n", image.Name, err)
+ return
+ }
+ defer f.Close()
+
+ size, err := io.Copy(w, f)
+ if err != nil {
+ log.Printf("failed to cache manifest sha1:%s: %s\n", key, err)
+ return
+ }
+
+ if err = w.Close(); err != nil {
+ log.Printf("failed to cache manifest sha1:%s: %s\n", key, err)
+ return
+ }
+
+ log.Printf("Cached manifest sha1:%s (%v bytes written)\n", key, size)
}
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index 49db1cdf8a5f..9242a3731af0 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -125,7 +125,7 @@ type registryHandler struct {
cfg *config.Config
ctx *context.Context
bucket *storage.BucketHandle
- cache *builder.BuildCache
+ cache *builder.LocalCache
}
func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
--
cgit 1.4.1
From 5a002fe067e52d503062307515179670b5e3de13 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 10 Sep 2019 11:13:10 +0100
Subject: refactor(builder): Calculate image cache key only once
---
tools/nixery/server/builder/builder.go | 13 +++++++++++--
tools/nixery/server/builder/cache.go | 19 ++++---------------
2 files changed, 15 insertions(+), 17 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index dd26ccc310aa..0ded94dfad01 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -110,7 +110,13 @@ func convenienceNames(packages []string) []string {
// Call out to Nix and request that an image be built. Nix will, upon success,
// return a manifest for the container image.
func BuildImage(ctx *context.Context, cfg *config.Config, cache *LocalCache, image *Image, bucket *storage.BucketHandle) (*BuildResult, error) {
- resultFile, cached := manifestFromCache(ctx, bucket, cfg.Pkgs, cache, image)
+ var resultFile string
+ cached := false
+
+ key := cfg.Pkgs.CacheKey(image.Packages, image.Tag)
+ if key != "" {
+ resultFile, cached = manifestFromCache(ctx, cache, bucket, key)
+ }
if !cached {
packages, err := json.Marshal(image.Packages)
@@ -158,7 +164,10 @@ func BuildImage(ctx *context.Context, cfg *config.Config, cache *LocalCache, ima
log.Println("Finished Nix image build")
resultFile = strings.TrimSpace(string(stdout))
- cacheManifest(ctx, bucket, cfg.Pkgs, cache, image, resultFile)
+
+ if key != "" {
+ cacheManifest(ctx, cache, bucket, key, resultFile)
+ }
}
buildOutput, err := ioutil.ReadFile(resultFile)
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 32f55e3a681c..52765293f3e4 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -21,7 +21,6 @@ import (
"sync"
"cloud.google.com/go/storage"
- "github.com/google/nixery/config"
)
type void struct{}
@@ -83,12 +82,7 @@ func (c *LocalCache) localCacheManifest(key, path string) {
// Retrieve a manifest from the cache(s). First the local cache is
// checked, then the GCS-bucket cache.
-func manifestFromCache(ctx *context.Context, bucket *storage.BucketHandle, pkgs config.PkgSource, cache *LocalCache, image *Image) (string, bool) {
- key := pkgs.CacheKey(image.Packages, image.Tag)
- if key == "" {
- return "", false
- }
-
+func manifestFromCache(ctx *context.Context, cache *LocalCache, bucket *storage.BucketHandle, key string) (string, bool) {
path, cached := cache.manifestFromLocalCache(key)
if cached {
return path, true
@@ -118,18 +112,13 @@ func manifestFromCache(ctx *context.Context, bucket *storage.BucketHandle, pkgs
log.Printf("Failed to read cached manifest for '%s': %s\n", key, err)
}
- log.Printf("Retrieved manifest for '%s' (%s) from GCS\n", image.Name, key)
+ log.Printf("Retrieved manifest for sha1:%s from GCS\n", key)
cache.localCacheManifest(key, path)
return path, true
}
-func cacheManifest(ctx *context.Context, bucket *storage.BucketHandle, pkgs config.PkgSource, cache *LocalCache, image *Image, path string) {
- key := pkgs.CacheKey(image.Packages, image.Tag)
- if key == "" {
- return
- }
-
+func cacheManifest(ctx *context.Context, cache *LocalCache, bucket *storage.BucketHandle, key, path string) {
cache.localCacheManifest(key, path)
obj := bucket.Object("manifests/" + key)
@@ -137,7 +126,7 @@ func cacheManifest(ctx *context.Context, bucket *storage.BucketHandle, pkgs conf
f, err := os.Open(path)
if err != nil {
- log.Printf("failed to open '%s' manifest for cache upload: %s\n", image.Name, err)
+ log.Printf("failed to open manifest sha1:%s for cache upload: %s\n", key, err)
return
}
defer f.Close()
--
cgit 1.4.1
From e4d03fdb17041ead530aa3e115f84988148a3b21 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 21 Sep 2019 12:04:04 +0100
Subject: chore(docs): Remove mdbook override
The change has been upstreamed in Nixpkgs.
---
tools/nixery/docs/default.nix | 18 ------------------
1 file changed, 18 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/default.nix b/tools/nixery/docs/default.nix
index 11eda3ff7052..fdf0e6ff9ea5 100644
--- a/tools/nixery/docs/default.nix
+++ b/tools/nixery/docs/default.nix
@@ -21,24 +21,6 @@
{ fetchFromGitHub, mdbook, runCommand, rustPlatform }:
let
- # nixpkgs currently has an old version of mdBook. A new version is
- # built here, but eventually the update will be upstreamed
- # (nixpkgs#65890)
- mdbook = rustPlatform.buildRustPackage rec {
- name = "mdbook-${version}";
- version = "0.3.1";
- doCheck = false;
-
- src = fetchFromGitHub {
- owner = "rust-lang-nursery";
- repo = "mdBook";
- rev = "v${version}";
- sha256 = "0py69267jbs6b7zw191hcs011cm1v58jz8mglqx3ajkffdfl3ghw";
- };
-
- cargoSha256 = "0qwhc42a86jpvjcaysmfcw8kmwa150lmz01flmlg74g6qnimff5m";
- };
-
nix-1p = fetchFromGitHub {
owner = "tazjin";
repo = "nix-1p";
--
cgit 1.4.1
From 64f74abc4df6676e3cd4c7f34210fd2aea433f16 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 21 Sep 2019 12:15:38 +0100
Subject: feat: Add configuration option for popularity data URL
---
tools/nixery/README.md | 1 +
tools/nixery/build-image/build-image.nix | 8 ++++----
tools/nixery/build-image/default.nix | 2 +-
tools/nixery/server/builder/builder.go | 4 ++++
tools/nixery/server/config/config.go | 2 ++
5 files changed, 12 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 3bc6df845a7f..3026451c74e0 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -83,6 +83,7 @@ variables:
for building
* `NIX_TIMEOUT`: Number of seconds that any Nix builder is allowed to run
(defaults to 60)
+* `NIX_POPULARITY_URL`: URL to a file containing popularity data for the package set (see `popcount/`)
* `GCS_SIGNING_KEY`: A Google service account key (in PEM format) that can be
used to sign Cloud Storage URLs
* `GCS_SIGNING_ACCOUNT`: Google service account ID that the signing key belongs
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index cd0ef91b3135..8f54f53261b2 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -35,6 +35,9 @@
# layers. To allow for some extensibility (via additional layers),
# the default here is set to something a little less than that.
maxLayers ? 96,
+ # Popularity data for layer solving is fetched from the URL passed
+ # in here.
+ popularityUrl ? "https://storage.googleapis.com/nixery-layers/popularity/popularity-19.03.173490.5271f8dddc0.json",
...
}:
@@ -101,10 +104,7 @@ let
fetched = (map (deepFetch pkgs) (fromJSON packages));
in foldl' splitter init fetched;
- popularity = builtins.fetchurl {
- url = "https://storage.googleapis.com/nixery-layers/popularity/nixos-19.03-20190812.json";
- sha256 = "16sxd49vqqg2nrhwynm36ba6bc2yff5cd5hf83wi0hanw5sx3svk";
- };
+ popularity = builtins.fetchurl popularityUrl;
# Before actually creating any image layers, the store paths that need to be
# included in the image must be sorted into the layers that they should go
diff --git a/tools/nixery/build-image/default.nix b/tools/nixery/build-image/default.nix
index 6b1cea6f0ca2..3bb5d62fb0d5 100644
--- a/tools/nixery/build-image/default.nix
+++ b/tools/nixery/build-image/default.nix
@@ -22,7 +22,7 @@
# all arguments of build-image.nix.
, srcType ? "nixpkgs"
, srcArgs ? "nixos-19.03"
-, tag ? null, name ? null, packages ? null, maxLayers ? null
+, tag ? null, name ? null, packages ? null, maxLayers ? null, popularityUrl ? null
}@args:
let pkgs = import ./load-pkgs.nix { inherit srcType srcArgs; };
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 0ded94dfad01..b2e183b5a2b7 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -134,6 +134,10 @@ func BuildImage(ctx *context.Context, cfg *config.Config, cache *LocalCache, ima
"--argstr", "srcArgs", srcArgs,
}
+ if cfg.PopUrl != "" {
+ args = append(args, "--argstr", "popularityUrl", cfg.PopUrl)
+ }
+
cmd := exec.Command("nixery-build-image", args...)
outpipe, err := cmd.StdoutPipe()
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index ea1bb1ab4532..ac8820f23116 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -68,6 +68,7 @@ type Config struct {
Pkgs PkgSource // Source for Nix package set
Timeout string // Timeout for a single Nix builder (seconds)
WebDir string // Directory with static web assets
+ PopUrl string // URL to the Nix package popularity count
}
func FromEnv() (*Config, error) {
@@ -83,5 +84,6 @@ func FromEnv() (*Config, error) {
Signing: signingOptsFromEnv(),
Timeout: getConfig("NIX_TIMEOUT", "Nix builder timeout", "60"),
WebDir: getConfig("WEB_DIR", "Static web file dir", ""),
+ PopUrl: os.Getenv("NIX_POPULARITY_URL"),
}, nil
}
--
cgit 1.4.1
From f0b69638e1c2354c91d2457998c17ee388665bd1 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 21 Sep 2019 12:18:04 +0100
Subject: chore(build): Bump nixpkgs version used in Travis
This version matches the updated popularity URL.
---
tools/nixery/.travis.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index f670ab0e2cf5..5f0bbfd3b40f 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -2,7 +2,7 @@ language: nix
services:
- docker
env:
- - NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/88d9f776091896cfe57dc6fbdf246e7d27d5f105.tar.gz
+ - NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/5271f8dddc0f2e54f55bd2fc1868c09ff72ac980.tar.gz
before_script:
- mkdir test-files
- echo ${GOOGLE_KEY} | base64 -d > test-files/key.json
--
cgit 1.4.1
From da6fd1d79e03795e0432553c1875774a61267826 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 21 Sep 2019 12:35:18 +0100
Subject: fix(build): Ensure nixery-build-image is on Nixery's PATH
This is useful when running Nixery locally.
---
tools/nixery/default.nix | 1 +
1 file changed, 1 insertion(+)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index f7a2a1712bfb..dd07d34936e4 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -44,6 +44,7 @@ rec {
# are installing Nixery directly.
nixery-bin = writeShellScriptBin "nixery" ''
export WEB_DIR="${nixery-book}"
+ export PATH="${nixery-build-image}/bin:$PATH"
exec ${nixery-server}/bin/nixery
'';
--
cgit 1.4.1
From c391a7b7f81f30de59b3bf186bf0e51ad0e308ce Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 21 Sep 2019 15:03:46 +0100
Subject: fix(build-image): Use absolute paths in tarballs
---
tools/nixery/build-image/build-image.nix | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index 8f54f53261b2..9cfb806e47b1 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -133,11 +133,11 @@ let
# their contents.
pathsToLayer = paths: runCommand "layer.tar" {
} ''
- tar --no-recursion -rf "$out" \
+ tar --no-recursion -Prf "$out" \
--mtime="@$SOURCE_DATE_EPOCH" \
--owner=0 --group=0 /nix /nix/store
- tar -rpf "$out" --hard-dereference --sort=name \
+ tar -Prpf "$out" --hard-dereference --sort=name \
--mtime="@$SOURCE_DATE_EPOCH" \
--owner=0 --group=0 ${lib.concatStringsSep " " paths}
'';
--
cgit 1.4.1
From 0000b956bb2333cc09fb52fb063d383ec1fd90e3 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 21 Sep 2019 15:04:13 +0100
Subject: feat(server): Log Nix output live during the builds
Instead of dumping all Nix output in one go at the end of the build
process, stream it live as the lines come in.
This is a lot more useful for debugging stuff like where manifest
retrievals get stuck.
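
The pattern the patch uses is a `bufio.Scanner` running in its own goroutine over the command's stderr pipe. Below is a self-contained sketch of that approach with a placeholder `sh` command standing in for `nixery-build-image`; the `streamLines` name is illustrative.

```go
package main

import (
	"bufio"
	"io"
	"log"
	"os/exec"
	"sync"
)

// streamLines logs every line from r as soon as it is produced instead
// of collecting the whole output after the process has exited.
func streamLines(prefix string, r io.ReadCloser) {
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		log.Printf("[%s] %s", prefix, scanner.Text())
	}
}

func main() {
	// Placeholder command standing in for the nixery-build-image wrapper.
	cmd := exec.Command("sh", "-c", "echo building >&2; sleep 1; echo done >&2")

	errpipe, err := cmd.StderrPipe()
	if err != nil {
		log.Fatalln("failed to attach stderr pipe:", err)
	}

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		streamLines("nix", errpipe)
	}()

	if err := cmd.Start(); err != nil {
		log.Fatalln("failed to start command:", err)
	}

	// Drain the pipe fully before calling Wait, which closes it.
	wg.Wait()
	if err := cmd.Wait(); err != nil {
		log.Fatalln("command failed:", err)
	}
}
```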
---
tools/nixery/server/builder/builder.go | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index b2e183b5a2b7..a5744a85348a 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -18,6 +18,7 @@
package builder
import (
+ "bufio"
"bytes"
"context"
"encoding/json"
@@ -107,6 +108,15 @@ func convenienceNames(packages []string) []string {
return packages
}
+// logNix logs each output line from Nix. It runs in a goroutine per
+// output channel that should be live-logged.
+func logNix(name string, r io.ReadCloser) {
+ scanner := bufio.NewScanner(r)
+ for scanner.Scan() {
+ log.Printf("\x1b[31m[nix - %s]\x1b[39m %s\n", name, scanner.Text())
+ }
+}
+
// Call out to Nix and request that an image be built. Nix will, upon success,
// return a manifest for the container image.
func BuildImage(ctx *context.Context, cfg *config.Config, cache *LocalCache, image *Image, bucket *storage.BucketHandle) (*BuildResult, error) {
@@ -149,6 +159,7 @@ func BuildImage(ctx *context.Context, cfg *config.Config, cache *LocalCache, ima
if err != nil {
return nil, err
}
+ go logNix(image.Name, errpipe)
if err = cmd.Start(); err != nil {
log.Println("Error starting nix-build:", err)
@@ -157,11 +168,9 @@ func BuildImage(ctx *context.Context, cfg *config.Config, cache *LocalCache, ima
log.Printf("Started Nix image build for '%s'", image.Name)
stdout, _ := ioutil.ReadAll(outpipe)
- stderr, _ := ioutil.ReadAll(errpipe)
if err = cmd.Wait(); err != nil {
- // TODO(tazjin): Propagate errors upwards in a usable format.
- log.Printf("nix-build execution error: %s\nstdout: %s\nstderr: %s\n", err, stdout, stderr)
+ log.Printf("nix-build execution error: %s\nstdout: %s\n", err, stdout)
return nil, err
}
--
cgit 1.4.1
From 21a17b33f49f3f016e35e535af0b81fdb53f0846 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 21 Sep 2019 15:15:14 +0100
Subject: fix(build): Ensure launch script compatibility with other runtimes
Fixes two launch script compatibility issues with other container
runtimes (such as gvisor):
* don't fail if /tmp already exists
* don't fail if the environment becomes unset
---
tools/nixery/default.nix | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index dd07d34936e4..66155eefa060 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -59,8 +59,9 @@ rec {
# issues in containers.
nixery-launch-script = writeShellScriptBin "nixery" ''
set -e
+ export PATH=${coreutils}/bin:$PATH
export NIX_SSL_CERT_FILE=/etc/ssl/certs/ca-bundle.crt
- mkdir /tmp
+ mkdir -p /tmp
# Create the build user/group required by Nix
echo 'nixbld:x:30000:nixbld' >> /etc/group
--
cgit 1.4.1
From 7b987530d1e1d25c74b6f8a85a9a25ef61a32c15 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 22 Sep 2019 17:45:44 +0100
Subject: refactor(build-image): Minor tweak to layer construction script
---
tools/nixery/build-image/build-image.nix | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index 9cfb806e47b1..012cc7f36d26 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -179,7 +179,7 @@ let
# The server application compares binary MD5 hashes and expects base64
# encoding instead of hex.
layerMd5=$(openssl dgst -md5 -binary $layer | openssl enc -base64)
- layerSize=$(wc -c $layer | cut -d ' ' -f1)
+ layerSize=$(stat --printf '%s' $layer)
jq -n -c --arg sha256 $layerSha256 --arg md5 $layerMd5 --arg size $layerSize --arg path $layer \
'{ size: ($size | tonumber), sha256: $sha256, md5: $md5, path: $path }' >> fs-layers
@@ -203,7 +203,7 @@ let
configMetadata = fromJSON (readFile (runCommand "config-meta" {
buildInputs = with pkgs; [ jq openssl ];
} ''
- size=$(wc -c ${configJson} | cut -d ' ' -f1)
+ size=$(stat --printf '%s' ${configJson})
sha256=$(sha256sum ${configJson} | cut -d ' ' -f1)
md5=$(openssl dgst -md5 -binary ${configJson} | openssl enc -base64)
jq -n -c --arg size $size --arg sha256 $sha256 --arg md5 $md5 \
--
cgit 1.4.1
From ad9b3eb2629b0716d93ce07411831d859237edff Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 22 Sep 2019 17:51:39 +0100
Subject: refactor(build): Add group-layers to top-level Nix derivations
This makes CI build the group-layers tool (and cache it to Cachix!)
---
tools/nixery/default.nix | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 66155eefa060..95540e11daf2 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -17,7 +17,11 @@
with pkgs;
-rec {
+let buildImage = import ./build-image {
+ srcType = "path";
+ srcArgs = ;
+};
+in rec {
# Go implementation of the Nixery server which implements the
# container registry interface.
#
@@ -27,10 +31,8 @@ rec {
nixery-server = callPackage ./server { };
# Implementation of the image building & layering logic
- nixery-build-image = (import ./build-image {
- srcType = "path";
- srcArgs = ;
- }).wrapper;
+ nixery-build-image = buildImage.wrapper;
+ nixery-group-layers = buildImage.groupLayers;
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
--
cgit 1.4.1
From 712b38cbbcb5671135dbf04492de3c4e97fa1b87 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 29 Sep 2019 22:49:50 +0100
Subject: refactor(build-image): Do not assemble image layers in Nix
This is the first step towards a more granular build process where
some of the build responsibility moves into the server component.
Rather than assembling all layers inside of Nix, it will only create
the symlink forest and return information about the runtime paths
required by the image.
The server is then responsible for grouping these paths into layers,
and assembling the layers themselves.
Relates to #50.
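
Since the derivation now returns structured JSON instead of a finished manifest, the server side has to decode it. The sketch below shows Go structures one might use, with field names taken from the `buildOutput` and `symlinkLayerMeta` attributes in the diff; the shape of the reference-graph entries (`path`, `references`) is an assumption about Nix's structured-attrs output, not something specified in this patch.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// layerMeta mirrors the symlink-layer metadata emitted by the Nix code:
// the tarball's size, two hashes and its store path.
type layerMeta struct {
	Size   int64  `json:"size"`
	SHA256 string `json:"sha256"`
	MD5    string `json:"md5"`
	Path   string `json:"path"`
}

// refGraphEntry is an assumed shape for one entry of the exported
// reference graph; only fields a layering step would plausibly need.
type refGraphEntry struct {
	Path       string   `json:"path"`
	References []string `json:"references"`
}

// buildOutput corresponds to the attribute set returned on success.
type buildOutput struct {
	RuntimeGraph struct {
		Graph []refGraphEntry `json:"graph"`
	} `json:"runtimeGraph"`
	SymlinkLayer layerMeta `json:"symlinkLayer"`
}

func main() {
	raw, err := os.ReadFile("build-output.json")
	if err != nil {
		log.Fatalln("reading build output:", err)
	}

	var out buildOutput
	if err := json.Unmarshal(raw, &out); err != nil {
		log.Fatalln("decoding build output:", err)
	}

	fmt.Printf("symlink layer: %s (%d bytes)\n", out.SymlinkLayer.Path, out.SymlinkLayer.Size)
	fmt.Printf("runtime graph entries: %d\n", len(out.RuntimeGraph.Graph))
}
```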
---
tools/nixery/build-image/build-image.nix | 219 ++++++++-----------------------
1 file changed, 57 insertions(+), 162 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index 012cc7f36d26..1d97ba59b97d 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -12,43 +12,38 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-# This file contains a modified version of dockerTools.buildImage that, instead
-# of outputting a single tarball which can be imported into a running Docker
-# daemon, builds a manifest file that can be used for serving the image over a
-# registry API.
+# This file contains a derivation that outputs structured information
+# about the runtime dependencies of an image with a given set of
+# packages. This is used by Nixery to determine the layer grouping and
+# assemble each layer.
+#
+# In addition it creates and outputs a meta-layer with the symlink
+# structure required for using the image together with the individual
+# package layers.
{
- # Package set to used (this will usually be loaded by load-pkgs.nix)
- pkgs,
- # Image Name
- name,
- # Image tag, the Nix output's hash will be used if null
- tag ? null,
- # Tool used to determine layer grouping
- groupLayers,
- # Files to put on the image (a nix store path or list of paths).
- contents ? [],
+ # Description of the package set to be used (will be loaded by load-pkgs.nix)
+ srcType ? "nixpkgs",
+ srcArgs ? "nixos-19.03",
+ importArgs ? { },
# Packages to install by name (which must refer to top-level attributes of
# nixpkgs). This is passed in as a JSON-array in string form.
- packages ? "[]",
- # Docker's modern image storage mechanisms have a maximum of 125
- # layers. To allow for some extensibility (via additional layers),
- # the default here is set to something a little less than that.
- maxLayers ? 96,
- # Popularity data for layer solving is fetched from the URL passed
- # in here.
- popularityUrl ? "https://storage.googleapis.com/nixery-layers/popularity/popularity-19.03.173490.5271f8dddc0.json",
-
- ...
+ packages ? "[]"
}:
-with builtins;
-
let
+ inherit (builtins)
+ foldl'
+ fromJSON
+ hasAttr
+ length
+ readFile
+ toFile
+ toJSON;
+
inherit (pkgs) lib runCommand writeText;
- tarLayer = "application/vnd.docker.image.rootfs.diff.tar";
- baseName = baseNameOf name;
+ pkgs = import ./load-pkgs.nix { inherit srcType srcArgs importArgs; };
# deepFetch traverses the top-level Nix package set to retrieve an item via a
# path specified in string form.
@@ -87,11 +82,11 @@ let
fetchLower = attrByPath caseAmendedPath err s;
in attrByPath path fetchLower s;
- # allContents is the combination of all derivations and store paths passed in
- # directly, as well as packages referred to by name.
+ # allContents contains all packages successfully retrieved by name
+ # from the package set, as well as any errors encountered while
+ # attempting to fetch a package.
#
- # It accumulates potential errors about packages that could not be found to
- # return this information back to the server.
+ # Accumulated error information is returned back to the server.
allContents =
# Folds over the results of 'deepFetch' on all requested packages to
# separate them into errors and content. This allows the program to
@@ -100,157 +95,57 @@ let
if hasAttr "error" res
then attrs // { errors = attrs.errors ++ [ res ]; }
else attrs // { contents = attrs.contents ++ [ res ]; };
- init = { inherit contents; errors = []; };
+ init = { contents = []; errors = []; };
fetched = (map (deepFetch pkgs) (fromJSON packages));
in foldl' splitter init fetched;
- popularity = builtins.fetchurl popularityUrl;
-
- # Before actually creating any image layers, the store paths that need to be
- # included in the image must be sorted into the layers that they should go
- # into.
- #
- # How contents are allocated to each layer is decided by the `group-layers.go`
- # program. The mechanism used is described at the top of the program's source
- # code, or alternatively in the layering design document:
+ # Contains the export references graph of all retrieved packages,
+ # which has information about all runtime dependencies of the image.
#
- # https://storage.googleapis.com/nixdoc/nixery-layers.html
- #
- # To invoke the program, a graph of all runtime references is created via
- # Nix's exportReferencesGraph feature - the resulting layers are read back
- # into Nix using import-from-derivation.
- groupedLayers = fromJSON (readFile (runCommand "grouped-layers.json" {
+ # This is used by Nixery to group closures into image layers.
+ runtimeGraph = runCommand "runtime-graph.json" {
__structuredAttrs = true;
exportReferencesGraph.graph = allContents.contents;
- PATH = "${groupLayers}/bin";
+ PATH = "${pkgs.coreutils}/bin";
builder = toFile "builder" ''
. .attrs.sh
- group-layers --budget ${toString (maxLayers - 1)} --pop ${popularity} --out ''${outputs[out]}
+ cp .attrs.json ''${outputs[out]}
'';
- } ""));
-
- # Given a list of store paths, create an image layer tarball with
- # their contents.
- pathsToLayer = paths: runCommand "layer.tar" {
- } ''
- tar --no-recursion -Prf "$out" \
- --mtime="@$SOURCE_DATE_EPOCH" \
- --owner=0 --group=0 /nix /nix/store
-
- tar -Prpf "$out" --hard-dereference --sort=name \
- --mtime="@$SOURCE_DATE_EPOCH" \
- --owner=0 --group=0 ${lib.concatStringsSep " " paths}
- '';
-
- bulkLayers = writeText "bulk-layers"
- (lib.concatStringsSep "\n" (map (layer: pathsToLayer layer.contents)
- groupedLayers));
+ } "";
- # Create a symlink forest into all top-level store paths.
+ # Create a symlink forest into all top-level store paths of the
+ # image contents.
contentsEnv = pkgs.symlinkJoin {
name = "bulk-layers";
paths = allContents.contents;
};
- # This customisation layer which contains the symlink forest
- # required at container runtime is assembled with a simplified
- # version of dockerTools.mkCustomisationLayer.
- #
- # No metadata creation (such as layer hashing) is required when
- # serving images over the API.
- customisationLayer = runCommand "customisation-layer.tar" {} ''
+ # Image layer that contains the symlink forest created above. This
+ # must be included in the image to ensure that the filesystem has a
+ # useful layout at runtime.
+ symlinkLayer = runCommand "symlink-layer.tar" {} ''
cp -r ${contentsEnv}/ ./layer
tar --transform='s|^\./||' -C layer --sort=name --mtime="@$SOURCE_DATE_EPOCH" --owner=0 --group=0 -cf $out .
'';
- # Inspect the returned bulk layers to determine which layers belong to the
- # image and how to serve them.
- #
- # This computes both an MD5 and a SHA256 hash of each layer, which are used
- # for different purposes. See the registry server implementation for details.
- allLayersJson = runCommand "fs-layer-list.json" {
+ # Metadata about the symlink layer which is required for serving it.
+ # Two different hashes are computed for different usages (inclusion
+ # in manifest vs. content-checking in the layer cache).
+ symlinkLayerMeta = fromJSON (readFile (runCommand "symlink-layer-meta.json" {
buildInputs = with pkgs; [ coreutils jq openssl ];
- } ''
- cat ${bulkLayers} | sort -t/ -k5 -n > layer-list
- echo ${customisationLayer} >> layer-list
-
- for layer in $(cat layer-list); do
- layerSha256=$(sha256sum $layer | cut -d ' ' -f1)
- # The server application compares binary MD5 hashes and expects base64
- # encoding instead of hex.
- layerMd5=$(openssl dgst -md5 -binary $layer | openssl enc -base64)
- layerSize=$(stat --printf '%s' $layer)
-
- jq -n -c --arg sha256 $layerSha256 --arg md5 $layerMd5 --arg size $layerSize --arg path $layer \
- '{ size: ($size | tonumber), sha256: $sha256, md5: $md5, path: $path }' >> fs-layers
- done
+ }''
+ layerSha256=$(sha256sum ${symlinkLayer} | cut -d ' ' -f1)
+ layerMd5=$(openssl dgst -md5 -binary ${symlinkLayer} | openssl enc -base64)
+ layerSize=$(stat --printf '%s' ${symlinkLayer})
- cat fs-layers | jq -s -c '.' > $out
- '';
- allLayers = fromJSON (readFile allLayersJson);
-
- # Image configuration corresponding to the OCI specification for the file type
- # 'application/vnd.oci.image.config.v1+json'
- config = {
- architecture = "amd64";
- os = "linux";
- rootfs.type = "layers";
- rootfs.diff_ids = map (layer: "sha256:${layer.sha256}") allLayers;
- # Required to let Kubernetes import Nixery images
- config = {};
- };
- configJson = writeText "${baseName}-config.json" (toJSON config);
- configMetadata = fromJSON (readFile (runCommand "config-meta" {
- buildInputs = with pkgs; [ jq openssl ];
- } ''
- size=$(stat --printf '%s' ${configJson})
- sha256=$(sha256sum ${configJson} | cut -d ' ' -f1)
- md5=$(openssl dgst -md5 -binary ${configJson} | openssl enc -base64)
- jq -n -c --arg size $size --arg sha256 $sha256 --arg md5 $md5 \
- '{ size: ($size | tonumber), sha256: $sha256, md5: $md5 }' \
- >> $out
+ jq -n -c --arg sha256 $layerSha256 --arg md5 $layerMd5 --arg size $layerSize --arg path ${symlinkLayer} \
+ '{ size: ($size | tonumber), sha256: $sha256, md5: $md5, path: $path }' >> $out
''));
- # Corresponds to the manifest JSON expected by the Registry API.
- #
- # This is Docker's "Image Manifest V2, Schema 2":
- # https://docs.docker.com/registry/spec/manifest-v2-2/
- manifest = {
- schemaVersion = 2;
- mediaType = "application/vnd.docker.distribution.manifest.v2+json";
-
- config = {
- mediaType = "application/vnd.docker.container.image.v1+json";
- size = configMetadata.size;
- digest = "sha256:${configMetadata.sha256}";
- };
-
- layers = map (layer: {
- mediaType = tarLayer;
- digest = "sha256:${layer.sha256}";
- size = layer.size;
- }) allLayers;
- };
-
- # This structure maps each layer digest to the actual tarball that will need
- # to be served. It is used by the controller to cache the paths during a pull.
- layerLocations = {
- "${configMetadata.sha256}" = {
- path = configJson;
- md5 = configMetadata.md5;
- };
- } // (listToAttrs (map (layer: {
- name = "${layer.sha256}";
- value = {
- path = layer.path;
- md5 = layer.md5;
- };
- }) allLayers));
-
- # Final output structure returned to the controller in the case of a
- # successful build.
- manifestOutput = {
- inherit manifest layerLocations;
+ # Final output structure returned to Nixery if the build succeeded
+ buildOutput = {
+ runtimeGraph = fromJSON (readFile runtimeGraph);
+ symlinkLayer = symlinkLayerMeta;
};
# Output structure returned if errors occured during the build. Currently the
@@ -259,7 +154,7 @@ let
error = "not_found";
pkgs = map (err: err.pkg) allContents.errors;
};
-in writeText "manifest-output.json" (if (length allContents.errors) == 0
- then toJSON manifestOutput
+in writeText "build-output.json" (if (length allContents.errors) == 0
+ then toJSON buildOutput
else toJSON errorOutput
)
--
cgit 1.4.1
From 0898d8a96175958b82d1713d13304423f8b19f77 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 29 Sep 2019 22:58:52 +0100
Subject: chore(build-image): Simplify wrapper build & remove layer grouping
Simplifies the wrapper script used to invoke Nix builds from Nixery to
just contain the essentials, since the layer grouping logic is moving
into the server itself.
---
tools/nixery/build-image/build-image.nix | 4 +-
tools/nixery/build-image/default.nix | 96 ++-------
tools/nixery/build-image/go-deps.nix | 10 -
tools/nixery/build-image/group-layers.go | 352 -------------------------------
tools/nixery/default.nix | 11 +-
5 files changed, 21 insertions(+), 452 deletions(-)
delete mode 100644 tools/nixery/build-image/go-deps.nix
delete mode 100644 tools/nixery/build-image/group-layers.go
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index 1d97ba59b97d..33500dbb9e80 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -26,6 +26,8 @@
srcType ? "nixpkgs",
srcArgs ? "nixos-19.03",
importArgs ? { },
+ # Path to load-pkgs.nix
+ loadPkgs ? ./load-pkgs.nix,
# Packages to install by name (which must refer to top-level attributes of
# nixpkgs). This is passed in as a JSON-array in string form.
packages ? "[]"
@@ -43,7 +45,7 @@ let
inherit (pkgs) lib runCommand writeText;
- pkgs = import ./load-pkgs.nix { inherit srcType srcArgs importArgs; };
+ pkgs = import loadPkgs { inherit srcType srcArgs importArgs; };
# deepFetch traverses the top-level Nix package set to retrieve an item via a
# path specified in string form.
diff --git a/tools/nixery/build-image/default.nix b/tools/nixery/build-image/default.nix
index 3bb5d62fb0d5..a61ac06bdd92 100644
--- a/tools/nixery/build-image/default.nix
+++ b/tools/nixery/build-image/default.nix
@@ -12,84 +12,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-# This file builds the tool used to calculate layer distribution and
-# moves the files needed to call the Nix builds at runtime in the
-# correct locations.
-
-{ pkgs ? null, self ? ./.
-
- # Because of the insanity occuring below, this function must mirror
- # all arguments of build-image.nix.
-, srcType ? "nixpkgs"
-, srcArgs ? "nixos-19.03"
-, tag ? null, name ? null, packages ? null, maxLayers ? null, popularityUrl ? null
-}@args:
-
-let pkgs = import ./load-pkgs.nix { inherit srcType srcArgs; };
-in with pkgs; rec {
-
- groupLayers = buildGoPackage {
- name = "group-layers";
- goDeps = ./go-deps.nix;
- goPackagePath = "github.com/google/nixery/group-layers";
-
- # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
- # WARNING: HERE BE DRAGONS! #
- # #
- # The hack below is used to break evaluation purity. The issue is #
- # that Nixery's build instructions (the default.nix in the folder #
- # above this one) must build a program that can invoke Nix at #
- # runtime, with a derivation that needs a program tracked in this #
- # source tree (`group-layers`). #
- # #
- # Simply installing that program in the $PATH of Nixery does not #
- # work, because the runtime Nix builds use their own isolated #
- # environment. #
- # #
- # I first attempted to naively copy the sources into the Nix #
- # store, so that Nixery could build `group-layers` when it starts #
- # up - however those sources are not available to a nested Nix #
- # build because they're not part of the context of the nested #
- # invocation. #
- # #
- # Nix has several primitives under `builtins.` that can break #
- # evaluation purity, these (namely readDir and readFile) are used #
- # below to break out of the isolated environment and reconstruct #
- # the source tree for `group-layers`. #
- # #
- # There might be a better way to do this, but I don't know what #
- # it is. #
- # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
- src = runCommand "group-layers-srcs" { } ''
- mkdir -p $out
- ${with builtins;
- let
- files =
- (attrNames (lib.filterAttrs (_: t: t != "symlink") (readDir self)));
- commands =
- map (f: "cp ${toFile f (readFile "${self}/${f}")} $out/${f}") files;
- in lib.concatStringsSep "\n" commands}
- '';
-
- meta = {
- description =
- "Tool to group a set of packages into container image layers";
- license = lib.licenses.asl20;
- maintainers = [ lib.maintainers.tazjin ];
- };
- };
-
- buildImage = import ./build-image.nix
- ({ inherit pkgs groupLayers; } // (lib.filterAttrs (_: v: v != null) args));
-
- # Wrapper script which is called by the Nixery server to trigger an
- # actual image build. This exists to avoid having to specify the
- # location of build-image.nix at runtime.
- wrapper = writeShellScriptBin "nixery-build-image" ''
- exec ${nix}/bin/nix-build \
- --show-trace \
- --no-out-link "$@" \
- --argstr self "${./.}" \
- -A buildImage ${./.}
- '';
-}
+# This file builds a wrapper script called by Nixery to ask for the
+# content information for a given image.
+#
+# The purpose of using a wrapper script is to ensure that the paths to
+# all required Nix files are set correctly at runtime.
+
+{ pkgs ? import <nixpkgs> {} }:
+
+pkgs.writeShellScriptBin "nixery-build-image" ''
+ exec ${pkgs.nix}/bin/nix-build \
+ --show-trace \
+ --no-out-link "$@" \
+ --argstr loadPkgs ${./load-pkgs.nix} \
+ ${./build-image.nix}
+''
diff --git a/tools/nixery/build-image/go-deps.nix b/tools/nixery/build-image/go-deps.nix
deleted file mode 100644
index 0f22a7088f52..000000000000
--- a/tools/nixery/build-image/go-deps.nix
+++ /dev/null
@@ -1,10 +0,0 @@
-# This file was generated by https://github.com/kamilchm/go2nix v1.3.0
-[{
- goPackagePath = "gonum.org/v1/gonum";
- fetch = {
- type = "git";
- url = "https://github.com/gonum/gonum";
- rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
- sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
- };
-}]
diff --git a/tools/nixery/build-image/group-layers.go b/tools/nixery/build-image/group-layers.go
deleted file mode 100644
index 93f2e520ace9..000000000000
--- a/tools/nixery/build-image/group-layers.go
+++ /dev/null
@@ -1,352 +0,0 @@
-// This program reads an export reference graph (i.e. a graph representing the
-// runtime dependencies of a set of derivations) created by Nix and groups them
-// in a way that is likely to match the grouping for other derivation sets with
-// overlapping dependencies.
-//
-// This is used to determine which derivations to include in which layers of a
-// container image.
-//
-// # Inputs
-//
-// * a graph of Nix runtime dependencies, generated via exportReferenceGraph
-// * a file containing absolute popularity values of packages in the
-// Nix package set (in the form of a direct reference count)
-// * a maximum number of layers to allocate for the image (the "layer budget")
-//
-// # Algorithm
-//
-// It works by first creating a (directed) dependency tree:
-//
-// img (root node)
-// │
-// ├───> A ─────┐
-// │ v
-// ├───> B ───> E
-// │ ^
-// ├───> C ─────┘
-// │ │
-// │ v
-// └───> D ───> F
-// │
-// └────> G
-//
-// Each node (i.e. package) is then visited to determine how important
-// it is to separate this node into its own layer, specifically:
-//
-// 1. Is the node within a certain threshold percentile of absolute
-// popularity within all of nixpkgs? (e.g. `glibc`, `openssl`)
-//
-// 2. Is the node's runtime closure above a threshold size? (e.g. 100MB)
-//
-// In either case, a bit is flipped for this node representing each
-// condition and an edge to it is inserted directly from the image
-// root, if it does not already exist.
-//
-// For the rest of the example we assume 'G' is above the threshold
-// size and 'E' is popular.
-//
-// This tree is then transformed into a dominator tree:
-//
-// img
-// │
-// ├───> A
-// ├───> B
-// ├───> C
-// ├───> E
-// ├───> D ───> F
-// └───> G
-//
-// Specifically this means that the paths to A, B, C, E, G, and D
-// always pass through the root (i.e. are dominated by it), whilst F
-// is dominated by D (all paths go through it).
-//
-// The top-level subtrees are considered as the initially selected
-// layers.
-//
-// If the list of layers fits within the layer budget, it is returned.
-//
-// Otherwise, a merge rating is calculated for each layer. This is the
-// product of the layer's total size and its root node's popularity.
-//
-// Layers are then merged in ascending order of merge ratings until
-// they fit into the layer budget.
-//
-// # Threshold values
-//
-// Threshold values for the partitioning conditions mentioned above
-// have not yet been determined, but we will make a good first guess
-// based on gut feeling and proceed to measure their impact on cache
-// hits/misses.
-//
-// # Example
-//
-// Using the logic described above as well as the example presented in
-// the introduction, this program would create the following layer
-// groupings (assuming no additional partitioning):
-//
-// Layer budget: 1
-// Layers: { A, B, C, D, E, F, G }
-//
-// Layer budget: 2
-// Layers: { G }, { A, B, C, D, E, F }
-//
-// Layer budget: 3
-// Layers: { G }, { E }, { A, B, C, D, F }
-//
-// Layer budget: 4
-// Layers: { G }, { E }, { D, F }, { A, B, C }
-//
-// ...
-//
-// Layer budget: 10
-// Layers: { E }, { D, F }, { A }, { B }, { C }
-package main
-
-import (
- "encoding/json"
- "flag"
- "io/ioutil"
- "log"
- "regexp"
- "sort"
-
- "gonum.org/v1/gonum/graph/flow"
- "gonum.org/v1/gonum/graph/simple"
-)
-
-// closureGraph represents the structured attributes Nix outputs when asking it
-// for the exportReferencesGraph of a list of derivations.
-type exportReferences struct {
- References struct {
- Graph []string `json:"graph"`
- } `json:"exportReferencesGraph"`
-
- Graph []struct {
- Size uint64 `json:"closureSize"`
- Path string `json:"path"`
- Refs []string `json:"references"`
- } `json:"graph"`
-}
-
-// Popularity data for each Nix package that was calculated in advance.
-//
-// Popularity is a number from 1-100 that represents the
-// popularity percentile in which this package resides inside
-// of the nixpkgs tree.
-type pkgsMetadata = map[string]int
-
-// layer represents the data returned for each layer that Nix should
-// build for the container image.
-type layer struct {
- Contents []string `json:"contents"`
- mergeRating uint64
-}
-
-func (a layer) merge(b layer) layer {
- a.Contents = append(a.Contents, b.Contents...)
- a.mergeRating += b.mergeRating
- return a
-}
-
-// closure as pointed to by the graph nodes.
-type closure struct {
- GraphID int64
- Path string
- Size uint64
- Refs []string
- Popularity int
-}
-
-func (c *closure) ID() int64 {
- return c.GraphID
-}
-
-var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)
-
-func (c *closure) DOTID() string {
- return nixRegexp.ReplaceAllString(c.Path, "")
-}
-
-// bigOrPopular checks whether this closure should be considered for
-// separation into its own layer, even if it would otherwise only
-// appear in a subtree of the dominator tree.
-func (c *closure) bigOrPopular() bool {
- const sizeThreshold = 100 * 1000000 // 100MB
-
- if c.Size > sizeThreshold {
- return true
- }
-
- // The threshold value used here is currently roughly the
- // minimum number of references that only 1% of packages in
- // the entire package set have.
- //
- // TODO(tazjin): Do this more elegantly by calculating
- // percentiles for each package and using those instead.
- if c.Popularity >= 1000 {
- return true
- }
-
- return false
-}
-
-func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *closure) {
- // Big or popular nodes get a separate edge from the top to
- // flag them for their own layer.
- if node.bigOrPopular() && !graph.HasEdgeFromTo(0, node.ID()) {
- edge := graph.NewEdge(graph.Node(0), node)
- graph.SetEdge(edge)
- }
-
- for _, c := range node.Refs {
- // Nix adds a self reference to each node, which
- // should not be inserted.
- if c != node.Path {
- edge := graph.NewEdge(node, (*cmap)[c])
- graph.SetEdge(edge)
- }
- }
-}
-
-// Create a graph structure from the references supplied by Nix.
-func buildGraph(refs *exportReferences, pop *pkgsMetadata) *simple.DirectedGraph {
- cmap := make(map[string]*closure)
- graph := simple.NewDirectedGraph()
-
- // Insert all closures into the graph, as well as a fake root
- // closure which serves as the top of the tree.
- //
- // A map from store paths to IDs is kept to actually insert
- // edges below.
- root := &closure{
- GraphID: 0,
- Path: "image_root",
- }
- graph.AddNode(root)
-
- for idx, c := range refs.Graph {
- node := &closure{
- GraphID: int64(idx + 1), // inc because of root node
- Path: c.Path,
- Size: c.Size,
- Refs: c.Refs,
- }
-
- if p, ok := (*pop)[node.DOTID()]; ok {
- node.Popularity = p
- } else {
- node.Popularity = 1
- }
-
- graph.AddNode(node)
- cmap[c.Path] = node
- }
-
- // Insert the top-level closures with edges from the root
- // node, then insert all edges for each closure.
- for _, p := range refs.References.Graph {
- edge := graph.NewEdge(root, cmap[p])
- graph.SetEdge(edge)
- }
-
- for _, c := range cmap {
- insertEdges(graph, &cmap, c)
- }
-
- return graph
-}
-
-// Extracts a subgraph starting at the specified root from the
-// dominator tree. The subgraph is converted into a flat list of
-// layers, each containing the store paths and merge rating.
-func groupLayer(dt *flow.DominatorTree, root *closure) layer {
- size := root.Size
- contents := []string{root.Path}
- children := dt.DominatedBy(root.ID())
-
- // This iteration does not use 'range' because the list being
- // iterated is modified during the iteration (yes, I'm sorry).
- for i := 0; i < len(children); i++ {
- child := children[i].(*closure)
- size += child.Size
- contents = append(contents, child.Path)
- children = append(children, dt.DominatedBy(child.ID())...)
- }
-
- return layer{
- Contents: contents,
- // TODO(tazjin): The point of this is to factor in
- // both the size and the popularity when making merge
- // decisions, but there might be a smarter way to do
- // it than a plain multiplication.
- mergeRating: uint64(root.Popularity) * size,
- }
-}
-
-// Calculate the dominator tree of the entire package set and group
-// each top-level subtree into a layer.
-//
-// Layers are merged together until they fit into the layer budget,
-// based on their merge rating.
-func dominate(budget int, graph *simple.DirectedGraph) []layer {
- dt := flow.Dominators(graph.Node(0), graph)
-
- var layers []layer
- for _, n := range dt.DominatedBy(dt.Root().ID()) {
- layers = append(layers, groupLayer(&dt, n.(*closure)))
- }
-
- sort.Slice(layers, func(i, j int) bool {
- return layers[i].mergeRating < layers[j].mergeRating
- })
-
- if len(layers) > budget {
- log.Printf("Ideal image has %v layers, but budget is %v\n", len(layers), budget)
- }
-
- for len(layers) > budget {
- merged := layers[0].merge(layers[1])
- layers[1] = merged
- layers = layers[1:]
- }
-
- return layers
-}
-
-func main() {
- graphFile := flag.String("graph", ".attrs.json", "Input file containing graph")
- popFile := flag.String("pop", "popularity.json", "Package popularity data")
- outFile := flag.String("out", "layers.json", "File to write layers to")
- layerBudget := flag.Int("budget", 94, "Total layer budget available")
- flag.Parse()
-
- // Parse graph data
- file, err := ioutil.ReadFile(*graphFile)
- if err != nil {
- log.Fatalf("Failed to load input: %s\n", err)
- }
-
- var refs exportReferences
- err = json.Unmarshal(file, &refs)
- if err != nil {
- log.Fatalf("Failed to deserialise input: %s\n", err)
- }
-
- // Parse popularity data
- popBytes, err := ioutil.ReadFile(*popFile)
- if err != nil {
- log.Fatalf("Failed to load input: %s\n", err)
- }
-
- var pop pkgsMetadata
- err = json.Unmarshal(popBytes, &pop)
- if err != nil {
- log.Fatalf("Failed to deserialise input: %s\n", err)
- }
-
- graph := buildGraph(&refs, &pop)
- layers := dominate(*layerBudget, graph)
-
- j, _ := json.Marshal(layers)
- ioutil.WriteFile(*outFile, j, 0644)
-}
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 95540e11daf2..f321b07a9c7a 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -17,11 +17,7 @@
with pkgs;
-let buildImage = import ./build-image {
- srcType = "path";
- srcArgs = ;
-};
-in rec {
+rec {
# Go implementation of the Nixery server which implements the
# container registry interface.
#
@@ -30,9 +26,8 @@ in rec {
# data dependencies.
nixery-server = callPackage ./server { };
- # Implementation of the image building & layering logic
- nixery-build-image = buildImage.wrapper;
- nixery-group-layers = buildImage.groupLayers;
+ # Implementation of the Nix image building logic
+ nixery-build-image = import ./build-image { inherit pkgs; };
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
--
cgit 1.4.1
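Roughly, the simplified wrapper is consumed like this from Go: `nix-build` prints the store path of the build result on stdout, and that path contains the JSON produced by `build-image.nix`. The sketch below is illustrative only (package list and source arguments are placeholders); the server's real invocation is introduced in a later commit in this series.

```go
package main

import (
	"io/ioutil"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Arguments mirror the --argstr parameters accepted by build-image.nix;
	// the package list and source description here are placeholders.
	cmd := exec.Command("nixery-build-image",
		"--argstr", "packages", `["bash","curl"]`,
		"--argstr", "srcType", "nixpkgs",
		"--argstr", "srcArgs", "nixos-19.03",
	)

	stdout, err := cmd.Output()
	if err != nil {
		log.Fatalf("nix-build failed: %s", err)
	}

	// nix-build prints the out-path of the derivation it built.
	resultFile := strings.TrimSpace(string(stdout))
	output, err := ioutil.ReadFile(resultFile)
	if err != nil {
		log.Fatalf("failed to read build result: %s", err)
	}

	log.Printf("read %d bytes of build output from %s", len(output), resultFile)
}
```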
From 8c79d085ae7596b81f3e19e7a49443075ebccbb6 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 29 Sep 2019 23:00:20 +0100
Subject: chore(server): Import layer grouping logic into server component
---
tools/nixery/server/go-deps.nix | 9 +
tools/nixery/server/layers/grouping.go | 352 +++++++++++++++++++++++++++++++++
2 files changed, 361 insertions(+)
create mode 100644 tools/nixery/server/layers/grouping.go
(limited to 'tools')
diff --git a/tools/nixery/server/go-deps.nix b/tools/nixery/server/go-deps.nix
index a223ef0a7021..7c40c6dde0f1 100644
--- a/tools/nixery/server/go-deps.nix
+++ b/tools/nixery/server/go-deps.nix
@@ -108,4 +108,13 @@
sha256 = "05wig23l2sil3bfdv19gq62sya7hsabqj9l8pzr1sm57qsvj218d";
};
}
+ {
+ goPackagePath = "gonum.org/v1/gonum";
+ fetch = {
+ type = "git";
+ url = "https://github.com/gonum/gonum";
+ rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
+ sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
+ };
+ }
]
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
new file mode 100644
index 000000000000..a75046e0fa3e
--- /dev/null
+++ b/tools/nixery/server/layers/grouping.go
@@ -0,0 +1,352 @@
+// This program reads an export reference graph (i.e. a graph representing the
+// runtime dependencies of a set of derivations) created by Nix and groups them
+// in a way that is likely to match the grouping for other derivation sets with
+// overlapping dependencies.
+//
+// This is used to determine which derivations to include in which layers of a
+// container image.
+//
+// # Inputs
+//
+// * a graph of Nix runtime dependencies, generated via exportReferenceGraph
+// * a file containing absolute popularity values of packages in the
+// Nix package set (in the form of a direct reference count)
+// * a maximum number of layers to allocate for the image (the "layer budget")
+//
+// # Algorithm
+//
+// It works by first creating a (directed) dependency tree:
+//
+// img (root node)
+// │
+// ├───> A ─────┐
+// │ v
+// ├───> B ───> E
+// │ ^
+// ├───> C ─────┘
+// │ │
+// │ v
+// └───> D ───> F
+// │
+// └────> G
+//
+// Each node (i.e. package) is then visited to determine how important
+// it is to separate this node into its own layer, specifically:
+//
+// 1. Is the node within a certain threshold percentile of absolute
+// popularity within all of nixpkgs? (e.g. `glibc`, `openssl`)
+//
+// 2. Is the node's runtime closure above a threshold size? (e.g. 100MB)
+//
+// In either case, a bit is flipped for this node representing each
+// condition and an edge to it is inserted directly from the image
+// root, if it does not already exist.
+//
+// For the rest of the example we assume 'G' is above the threshold
+// size and 'E' is popular.
+//
+// This tree is then transformed into a dominator tree:
+//
+// img
+// │
+// ├───> A
+// ├───> B
+// ├───> C
+// ├───> E
+// ├───> D ───> F
+// └───> G
+//
+// Specifically this means that the paths to A, B, C, E, G, and D
+// always pass through the root (i.e. are dominated by it), whilst F
+// is dominated by D (all paths go through it).
+//
+// The top-level subtrees are considered as the initially selected
+// layers.
+//
+// If the list of layers fits within the layer budget, it is returned.
+//
+// Otherwise, a merge rating is calculated for each layer. This is the
+// product of the layer's total size and its root node's popularity.
+//
+// Layers are then merged in ascending order of merge ratings until
+// they fit into the layer budget.
+//
+// # Threshold values
+//
+// Threshold values for the partitioning conditions mentioned above
+// have not yet been determined, but we will make a good first guess
+// based on gut feeling and proceed to measure their impact on cache
+// hits/misses.
+//
+// # Example
+//
+// Using the logic described above as well as the example presented in
+// the introduction, this program would create the following layer
+// groupings (assuming no additional partitioning):
+//
+// Layer budget: 1
+// Layers: { A, B, C, D, E, F, G }
+//
+// Layer budget: 2
+// Layers: { G }, { A, B, C, D, E, F }
+//
+// Layer budget: 3
+// Layers: { G }, { E }, { A, B, C, D, F }
+//
+// Layer budget: 4
+// Layers: { G }, { E }, { D, F }, { A, B, C }
+//
+// ...
+//
+// Layer budget: 10
+// Layers: { E }, { D, F }, { A }, { B }, { C }
+package layers
+
+import (
+ "encoding/json"
+ "flag"
+ "io/ioutil"
+ "log"
+ "regexp"
+ "sort"
+
+ "gonum.org/v1/gonum/graph/flow"
+ "gonum.org/v1/gonum/graph/simple"
+)
+
+// closureGraph represents the structured attributes Nix outputs when asking it
+// for the exportReferencesGraph of a list of derivations.
+type exportReferences struct {
+ References struct {
+ Graph []string `json:"graph"`
+ } `json:"exportReferencesGraph"`
+
+ Graph []struct {
+ Size uint64 `json:"closureSize"`
+ Path string `json:"path"`
+ Refs []string `json:"references"`
+ } `json:"graph"`
+}
+
+// Popularity data for each Nix package that was calculated in advance.
+//
+// Popularity is a number from 1-100 that represents the
+// popularity percentile in which this package resides inside
+// of the nixpkgs tree.
+type Popularity = map[string]int
+
+// layer represents the data returned for each layer that Nix should
+// build for the container image.
+type layer struct {
+ Contents []string `json:"contents"`
+ mergeRating uint64
+}
+
+func (a layer) merge(b layer) layer {
+ a.Contents = append(a.Contents, b.Contents...)
+ a.mergeRating += b.mergeRating
+ return a
+}
+
+// closure as pointed to by the graph nodes.
+type closure struct {
+ GraphID int64
+ Path string
+ Size uint64
+ Refs []string
+ Popularity int
+}
+
+func (c *closure) ID() int64 {
+ return c.GraphID
+}
+
+var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)
+
+func (c *closure) DOTID() string {
+ return nixRegexp.ReplaceAllString(c.Path, "")
+}
+
+// bigOrPopular checks whether this closure should be considered for
+// separation into its own layer, even if it would otherwise only
+// appear in a subtree of the dominator tree.
+func (c *closure) bigOrPopular() bool {
+ const sizeThreshold = 100 * 1000000 // 100MB
+
+ if c.Size > sizeThreshold {
+ return true
+ }
+
+ // The threshold value used here is currently roughly the
+ // minimum number of references that only 1% of packages in
+ // the entire package set have.
+ //
+ // TODO(tazjin): Do this more elegantly by calculating
+ // percentiles for each package and using those instead.
+ if c.Popularity >= 1000 {
+ return true
+ }
+
+ return false
+}
+
+func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *closure) {
+ // Big or popular nodes get a separate edge from the top to
+ // flag them for their own layer.
+ if node.bigOrPopular() && !graph.HasEdgeFromTo(0, node.ID()) {
+ edge := graph.NewEdge(graph.Node(0), node)
+ graph.SetEdge(edge)
+ }
+
+ for _, c := range node.Refs {
+ // Nix adds a self reference to each node, which
+ // should not be inserted.
+ if c != node.Path {
+ edge := graph.NewEdge(node, (*cmap)[c])
+ graph.SetEdge(edge)
+ }
+ }
+}
+
+// Create a graph structure from the references supplied by Nix.
+func buildGraph(refs *exportReferences, pop *Popularity) *simple.DirectedGraph {
+ cmap := make(map[string]*closure)
+ graph := simple.NewDirectedGraph()
+
+ // Insert all closures into the graph, as well as a fake root
+ // closure which serves as the top of the tree.
+ //
+ // A map from store paths to IDs is kept to actually insert
+ // edges below.
+ root := &closure{
+ GraphID: 0,
+ Path: "image_root",
+ }
+ graph.AddNode(root)
+
+ for idx, c := range refs.Graph {
+ node := &closure{
+ GraphID: int64(idx + 1), // inc because of root node
+ Path: c.Path,
+ Size: c.Size,
+ Refs: c.Refs,
+ }
+
+ if p, ok := (*pop)[node.DOTID()]; ok {
+ node.Popularity = p
+ } else {
+ node.Popularity = 1
+ }
+
+ graph.AddNode(node)
+ cmap[c.Path] = node
+ }
+
+ // Insert the top-level closures with edges from the root
+ // node, then insert all edges for each closure.
+ for _, p := range refs.References.Graph {
+ edge := graph.NewEdge(root, cmap[p])
+ graph.SetEdge(edge)
+ }
+
+ for _, c := range cmap {
+ insertEdges(graph, &cmap, c)
+ }
+
+ return graph
+}
+
+// Extracts a subgraph starting at the specified root from the
+// dominator tree. The subgraph is converted into a flat list of
+// layers, each containing the store paths and merge rating.
+func groupLayer(dt *flow.DominatorTree, root *closure) layer {
+ size := root.Size
+ contents := []string{root.Path}
+ children := dt.DominatedBy(root.ID())
+
+ // This iteration does not use 'range' because the list being
+ // iterated is modified during the iteration (yes, I'm sorry).
+ for i := 0; i < len(children); i++ {
+ child := children[i].(*closure)
+ size += child.Size
+ contents = append(contents, child.Path)
+ children = append(children, dt.DominatedBy(child.ID())...)
+ }
+
+ return layer{
+ Contents: contents,
+ // TODO(tazjin): The point of this is to factor in
+ // both the size and the popularity when making merge
+ // decisions, but there might be a smarter way to do
+ // it than a plain multiplication.
+ mergeRating: uint64(root.Popularity) * size,
+ }
+}
+
+// Calculate the dominator tree of the entire package set and group
+// each top-level subtree into a layer.
+//
+// Layers are merged together until they fit into the layer budget,
+// based on their merge rating.
+func dominate(budget int, graph *simple.DirectedGraph) []layer {
+ dt := flow.Dominators(graph.Node(0), graph)
+
+ var layers []layer
+ for _, n := range dt.DominatedBy(dt.Root().ID()) {
+ layers = append(layers, groupLayer(&dt, n.(*closure)))
+ }
+
+ sort.Slice(layers, func(i, j int) bool {
+ return layers[i].mergeRating < layers[j].mergeRating
+ })
+
+ if len(layers) > budget {
+ log.Printf("Ideal image has %v layers, but budget is %v\n", len(layers), budget)
+ }
+
+ for len(layers) > budget {
+ merged := layers[0].merge(layers[1])
+ layers[1] = merged
+ layers = layers[1:]
+ }
+
+ return layers
+}
+
+func main() {
+ graphFile := flag.String("graph", ".attrs.json", "Input file containing graph")
+ popFile := flag.String("pop", "popularity.json", "Package popularity data")
+ outFile := flag.String("out", "layers.json", "File to write layers to")
+ layerBudget := flag.Int("budget", 94, "Total layer budget available")
+ flag.Parse()
+
+ // Parse graph data
+ file, err := ioutil.ReadFile(*graphFile)
+ if err != nil {
+ log.Fatalf("Failed to load input: %s\n", err)
+ }
+
+ var refs exportReferences
+ err = json.Unmarshal(file, &refs)
+ if err != nil {
+ log.Fatalf("Failed to deserialise input: %s\n", err)
+ }
+
+ // Parse popularity data
+ popBytes, err := ioutil.ReadFile(*popFile)
+ if err != nil {
+ log.Fatalf("Failed to load input: %s\n", err)
+ }
+
+ var pop Popularity
+ err = json.Unmarshal(popBytes, &pop)
+ if err != nil {
+ log.Fatalf("Failed to deserialise input: %s\n", err)
+ }
+
+ graph := buildGraph(&refs, &pop)
+ layers := dominate(*layerBudget, graph)
+
+ j, _ := json.Marshal(layers)
+ ioutil.WriteFile(*outFile, j, 0644)
+}
--
cgit 1.4.1
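The merge step at the end of `dominate` can be seen in isolation: layers are sorted by ascending merge rating and the two cheapest entries are folded together until the budget is met. The toy sketch below uses a stand-in `layer` type and invented ratings, chosen so that a budget of 3 reproduces the `{ G }, { E }, { A, B, C, D, F }` grouping from the example in the package comment.

```go
package main

import (
	"fmt"
	"sort"
)

// Stand-in for the unexported layer type: contents plus a merge rating
// (closure size multiplied by root popularity in the real code).
type layer struct {
	contents    []string
	mergeRating uint64
}

func merge(a, b layer) layer {
	a.contents = append(a.contents, b.contents...)
	a.mergeRating += b.mergeRating
	return a
}

func main() {
	// Invented ratings; G is huge, E is popular, the rest are cheap.
	layers := []layer{
		{[]string{"A"}, 10},
		{[]string{"B"}, 20},
		{[]string{"C"}, 30},
		{[]string{"D", "F"}, 40},
		{[]string{"E"}, 500},
		{[]string{"G"}, 900},
	}
	budget := 3

	// As in dominate(): sort ascending by merge rating, then fold the two
	// cheapest layers together until the budget is met.
	sort.Slice(layers, func(i, j int) bool {
		return layers[i].mergeRating < layers[j].mergeRating
	})
	for len(layers) > budget {
		layers[1] = merge(layers[0], layers[1])
		layers = layers[1:]
	}

	for _, l := range layers {
		fmt.Println(l.contents, "rating:", l.mergeRating)
	}
	// Prints: [A B C D F], [E], [G] -- matching the budget-3 example above.
}
```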
From 9c3c622403cec6e1bf7e4b9aecfbc8bea1e1f6fd Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 29 Sep 2019 23:57:56 +0100
Subject: refactor(server): Expose layer grouping logic via a function
Refactors the layer grouping package (which previously compiled to a
separate binary) to expose the layer grouping logic via a function
instead.
This is the next step towards creating layers inside of the server
component instead of in Nix.
Relates to #50.
---
tools/nixery/server/layers/grouping.go | 79 +++++++++++-----------------------
1 file changed, 24 insertions(+), 55 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
index a75046e0fa3e..e8666bf4dd0e 100644
--- a/tools/nixery/server/layers/grouping.go
+++ b/tools/nixery/server/layers/grouping.go
@@ -1,6 +1,6 @@
-// This program reads an export reference graph (i.e. a graph representing the
-// runtime dependencies of a set of derivations) created by Nix and groups them
-// in a way that is likely to match the grouping for other derivation sets with
+// This package reads an export reference graph (i.e. a graph representing the
+// runtime dependencies of a set of derivations) created by Nix and groups it in
+// a way that is likely to match the grouping for other derivation sets with
// overlapping dependencies.
//
// This is used to determine which derivations to include in which layers of a
@@ -9,8 +9,8 @@
// # Inputs
//
// * a graph of Nix runtime dependencies, generated via exportReferenceGraph
-// * a file containing absolute popularity values of packages in the
-// Nix package set (in the form of a direct reference count)
+// * popularity values of each package in the Nix package set (in the form of a
+// direct reference count)
// * a maximum number of layers to allocate for the image (the "layer budget")
//
// # Algorithm
@@ -103,9 +103,6 @@
package layers
import (
- "encoding/json"
- "flag"
- "io/ioutil"
"log"
"regexp"
"sort"
@@ -114,9 +111,11 @@ import (
"gonum.org/v1/gonum/graph/simple"
)
-// closureGraph represents the structured attributes Nix outputs when asking it
-// for the exportReferencesGraph of a list of derivations.
-type exportReferences struct {
+// RuntimeGraph represents structured information from Nix about the runtime
+// dependencies of a derivation.
+//
+// This is generated in Nix by using the exportReferencesGraph feature.
+type RuntimeGraph struct {
References struct {
Graph []string `json:"graph"`
} `json:"exportReferencesGraph"`
@@ -135,14 +134,14 @@ type exportReferences struct {
// of the nixpkgs tree.
type Popularity = map[string]int
-// layer represents the data returned for each layer that Nix should
+// Layer represents the data returned for each layer that Nix should
// build for the container image.
-type layer struct {
+type Layer struct {
Contents []string `json:"contents"`
mergeRating uint64
}
-func (a layer) merge(b layer) layer {
+func (a Layer) merge(b Layer) Layer {
a.Contents = append(a.Contents, b.Contents...)
a.mergeRating += b.mergeRating
return a
@@ -209,7 +208,7 @@ func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *c
}
// Create a graph structure from the references supplied by Nix.
-func buildGraph(refs *exportReferences, pop *Popularity) *simple.DirectedGraph {
+func buildGraph(refs *RuntimeGraph, pop *Popularity) *simple.DirectedGraph {
cmap := make(map[string]*closure)
graph := simple.NewDirectedGraph()
@@ -259,7 +258,7 @@ func buildGraph(refs *exportReferences, pop *Popularity) *simple.DirectedGraph {
// Extracts a subgraph starting at the specified root from the
// dominator tree. The subgraph is converted into a flat list of
// layers, each containing the store paths and merge rating.
-func groupLayer(dt *flow.DominatorTree, root *closure) layer {
+func groupLayer(dt *flow.DominatorTree, root *closure) Layer {
size := root.Size
contents := []string{root.Path}
children := dt.DominatedBy(root.ID())
@@ -273,7 +272,7 @@ func groupLayer(dt *flow.DominatorTree, root *closure) layer {
children = append(children, dt.DominatedBy(child.ID())...)
}
- return layer{
+ return Layer{
Contents: contents,
// TODO(tazjin): The point of this is to factor in
// both the size and the popularity when making merge
@@ -288,10 +287,10 @@ func groupLayer(dt *flow.DominatorTree, root *closure) layer {
//
// Layers are merged together until they fit into the layer budget,
// based on their merge rating.
-func dominate(budget int, graph *simple.DirectedGraph) []layer {
+func dominate(budget int, graph *simple.DirectedGraph) []Layer {
dt := flow.Dominators(graph.Node(0), graph)
- var layers []layer
+ var layers []Layer
for _, n := range dt.DominatedBy(dt.Root().ID()) {
layers = append(layers, groupLayer(&dt, n.(*closure)))
}
@@ -313,40 +312,10 @@ func dominate(budget int, graph *simple.DirectedGraph) []layer {
return layers
}
-func main() {
- graphFile := flag.String("graph", ".attrs.json", "Input file containing graph")
- popFile := flag.String("pop", "popularity.json", "Package popularity data")
- outFile := flag.String("out", "layers.json", "File to write layers to")
- layerBudget := flag.Int("budget", 94, "Total layer budget available")
- flag.Parse()
-
- // Parse graph data
- file, err := ioutil.ReadFile(*graphFile)
- if err != nil {
- log.Fatalf("Failed to load input: %s\n", err)
- }
-
- var refs exportReferences
- err = json.Unmarshal(file, &refs)
- if err != nil {
- log.Fatalf("Failed to deserialise input: %s\n", err)
- }
-
- // Parse popularity data
- popBytes, err := ioutil.ReadFile(*popFile)
- if err != nil {
- log.Fatalf("Failed to load input: %s\n", err)
- }
-
- var pop Popularity
- err = json.Unmarshal(popBytes, &pop)
- if err != nil {
- log.Fatalf("Failed to deserialise input: %s\n", err)
- }
-
- graph := buildGraph(&refs, &pop)
- layers := dominate(*layerBudget, graph)
-
- j, _ := json.Marshal(layers)
- ioutil.WriteFile(*outFile, j, 0644)
+// GroupLayers applies the algorithm described above to its input and returns a
+// list of layers, each consisting of a list of Nix store paths that it should
+// contain.
+func GroupLayers(refs *RuntimeGraph, pop *Popularity, budget int) []Layer {
+ graph := buildGraph(refs, pop)
+ return dominate(budget, graph)
}
--
cgit 1.4.1
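A hedged usage sketch for the newly exported entry point: decode a runtime graph and a popularity table from JSON (the file names are placeholders) and group them within a layer budget of 94, the value the server later adopts. Only `RuntimeGraph`, `Popularity` and `GroupLayers` from this patch are assumed.

```go
package main

import (
	"encoding/json"
	"io/ioutil"
	"log"

	"github.com/google/nixery/layers"
)

func main() {
	rawGraph, err := ioutil.ReadFile("runtime-graph.json") // placeholder path
	if err != nil {
		log.Fatalf("failed to read runtime graph: %s", err)
	}

	var refs layers.RuntimeGraph
	if err := json.Unmarshal(rawGraph, &refs); err != nil {
		log.Fatalf("failed to decode runtime graph: %s", err)
	}

	rawPop, err := ioutil.ReadFile("popularity.json") // placeholder path
	if err != nil {
		log.Fatalf("failed to read popularity data: %s", err)
	}

	var pop layers.Popularity
	if err := json.Unmarshal(rawPop, &pop); err != nil {
		log.Fatalf("failed to decode popularity data: %s", err)
	}

	grouped := layers.GroupLayers(&refs, &pop, 94)
	for _, l := range grouped {
		log.Printf("layer with %d store paths", len(l.Contents))
	}
}
```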
From e60805c9b213a4054afe84d29c16058277014d0b Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 30 Sep 2019 11:34:17 +0100
Subject: feat(server): Introduce function to hash contents of a layer
This creates a cache key which can be used to check if a layer has
already been built.
---
tools/nixery/server/layers/grouping.go | 13 +++++++++++++
1 file changed, 13 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
index e8666bf4dd0e..f8259ab989ff 100644
--- a/tools/nixery/server/layers/grouping.go
+++ b/tools/nixery/server/layers/grouping.go
@@ -103,9 +103,12 @@
package layers
import (
+ "crypto/sha1"
+ "fmt"
"log"
"regexp"
"sort"
+ "strings"
"gonum.org/v1/gonum/graph/flow"
"gonum.org/v1/gonum/graph/simple"
@@ -141,6 +144,13 @@ type Layer struct {
mergeRating uint64
}
+// Hash the contents of a layer to create a deterministic identifier that can be
+// used for caching.
+func (l *Layer) Hash() string {
+ sum := sha1.Sum([]byte(strings.Join(l.Contents, ":")))
+ return fmt.Sprintf("%x", sum)
+}
+
func (a Layer) merge(b Layer) Layer {
a.Contents = append(a.Contents, b.Contents...)
a.mergeRating += b.mergeRating
@@ -272,6 +282,9 @@ func groupLayer(dt *flow.DominatorTree, root *closure) Layer {
children = append(children, dt.DominatedBy(child.ID())...)
}
+ // Contents are sorted to ensure that hashing is consistent
+ sort.Strings(contents)
+
return Layer{
Contents: contents,
// TODO(tazjin): The point of this is to factor in
--
cgit 1.4.1
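The cache key introduced here is simply the SHA1 of the layer's sorted store paths joined with `:`. An equivalent standalone computation, with made-up store paths:

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"sort"
	"strings"
)

func main() {
	// Made-up store paths standing in for a layer's contents.
	contents := []string{
		"/nix/store/aaaa-bash-4.4",
		"/nix/store/bbbb-coreutils-8.30",
	}

	// groupLayer sorts the contents, so the resulting key is deterministic.
	sort.Strings(contents)

	sum := sha1.Sum([]byte(strings.Join(contents, ":")))
	fmt.Printf("layer cache key: %x\n", sum)
}
```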
From 2c8ef634f67f91c8efc1cf6a58a271f4c44544dd Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 30 Sep 2019 11:51:36 +0100
Subject: docs(caching): Add information about Nixery's caching strategies
---
tools/nixery/docs/src/SUMMARY.md | 1 +
tools/nixery/docs/src/caching.md | 70 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 71 insertions(+)
create mode 100644 tools/nixery/docs/src/caching.md
(limited to 'tools')
diff --git a/tools/nixery/docs/src/SUMMARY.md b/tools/nixery/docs/src/SUMMARY.md
index 677f328972bb..f1d68a3ac451 100644
--- a/tools/nixery/docs/src/SUMMARY.md
+++ b/tools/nixery/docs/src/SUMMARY.md
@@ -2,6 +2,7 @@
- [Nixery](./nixery.md)
- [Under the hood](./under-the-hood.md)
+ - [Caching](./caching.md)
- [Run your own Nixery](./run-your-own.md)
- [Nix](./nix.md)
- [Nix, the language](./nix-1p.md)
diff --git a/tools/nixery/docs/src/caching.md b/tools/nixery/docs/src/caching.md
new file mode 100644
index 000000000000..175fe04d7084
--- /dev/null
+++ b/tools/nixery/docs/src/caching.md
@@ -0,0 +1,70 @@
+# Caching in Nixery
+
+This page gives a quick overview of the caching done by Nixery. All cache data
+is written to Nixery's storage bucket and is based on deterministic identifiers
+or content-addressing, meaning that cache entries under the same key *never
+change*.
+
+## Manifests
+
+Manifests of builds are cached at `$BUCKET/manifests/$KEY`. The effect of this
+cache is that multiple instances of Nixery do not need to rebuild the same
+manifest from scratch.
+
+Since the manifest cache is populated only *after* layers are uploaded, Nixery
+can immediately return the manifest to its clients without needing to check
+whether layers have been uploaded already.
+
+`$KEY` is generated by creating a SHA1 hash of the requested content of a
+manifest plus the package source specification.
+
+Manifests are *only* cached if the package source specification is *not* a
+moving target.
+
+Manifest caching *only* applies in the following cases:
+
+* package source specification is a specific git commit
+* package source specification is a specific NixOS/nixpkgs commit
+
+Manifest caching *never* applies in the following cases:
+
+* package source specification is a local file path (i.e. `NIXERY_PKGS_PATH`)
+* package source specification is a NixOS channel (e.g. `NIXERY_CHANNEL=nixos-19.03`)
+* package source specification is a git branch or tag (e.g. `staging`, `master` or `latest`)
+
+It is thus always preferable to request images from a fully-pinned package
+source.
+
+Manifests can be removed from the manifest cache without negative consequences.
+
+## Layer tarballs
+
+Layer tarballs are the files that Nixery clients retrieve from the storage
+bucket to download an image.
+
+They are stored content-addressably at `$BUCKET/layers/$SHA256HASH` and layer
+requests sent to Nixery will redirect directly to this storage location.
+
+The effect of this cache is that Nixery does not need to upload identical layers
+repeatedly. When Nixery notices that a layer already exists in GCS, it will use
+the object metadata to compare its MD5-hash with the locally computed one and
+skip uploading.
+
+Removing layers from the cache is *potentially problematic* if there are cached
+manifests or layer builds referencing those layers.
+
+To clean up layers, a user must ensure that no other cached resources still
+reference these layers.
+
+## Layer builds
+
+Layer builds are cached at `$BUCKET/builds/$HASH`, where `$HASH` is a SHA1 of
+the Nix store paths included in the layer.
+
+The content of the cached entries is a JSON-object that contains the MD5 and
+SHA256 hashes of the built layer.
+
+The effect of this cache is that different instances of Nixery will not build,
+hash and upload layers that have identical contents across different instances.
+
+Layer builds can be removed from the cache without negative consequences.
--
cgit 1.4.1
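In short, the layout documented above amounts to three object prefixes plus one small JSON document per layer build. A sketch of the shapes involved; only the prefixes and the `md5`/`sha256` fields come from this page, the keys and hash values are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cachedBuild mirrors the JSON document stored under builds/$HASH.
type cachedBuild struct {
	MD5    string `json:"md5"`
	SHA256 string `json:"sha256"`
}

func main() {
	// Object name layout (keys are placeholders).
	fmt.Println("manifests/" + "<sha1 of manifest request + package source>")
	fmt.Println("layers/" + "<sha256 of layer tarball>")
	fmt.Println("builds/" + "<sha1 of layer contents>")

	// A layer build entry records both hashes of the built tarball.
	entry := cachedBuild{MD5: "<md5 hex>", SHA256: "<sha256 hex>"}
	j, _ := json.Marshal(entry)
	fmt.Println(string(j))
}
```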
From 6262dec8aacf25ae6004de739353089cd635cea5 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 30 Sep 2019 14:19:11 +0100
Subject: feat(nix): Add derivation to create layer tars from a store path set
This introduces a new Nix derivation that, given an attribute set of
layer hashes mapped to store paths, will create a layer tarball for
each of the store paths.
This is going to be used by the builder to create layers that are not
present in the cache.
Relates to #50.
---
tools/nixery/build-image/build-layers.nix | 47 +++++++++++++++++++++++++++++++
tools/nixery/build-image/default.nix | 24 +++++++++++-----
tools/nixery/default.nix | 7 +++--
3 files changed, 69 insertions(+), 9 deletions(-)
create mode 100644 tools/nixery/build-image/build-layers.nix
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-layers.nix b/tools/nixery/build-image/build-layers.nix
new file mode 100644
index 000000000000..8a8bfbe9edf1
--- /dev/null
+++ b/tools/nixery/build-image/build-layers.nix
@@ -0,0 +1,47 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+{
+ # Description of the package set to be used (will be loaded by load-pkgs.nix)
+ srcType ? "nixpkgs",
+ srcArgs ? "nixos-19.03",
+ importArgs ? { },
+ # Path to load-pkgs.nix
+ loadPkgs ? ./load-pkgs.nix,
+ # Layers to assemble into tarballs
+ layers ? "{}"
+}:
+
+let
+ inherit (builtins) fromJSON mapAttrs toJSON;
+ inherit (pkgs) lib runCommand writeText;
+
+ pkgs = import loadPkgs { inherit srcType srcArgs importArgs; };
+
+ # Given a list of store paths, create an image layer tarball with
+ # their contents.
+ pathsToLayer = paths: runCommand "layer.tar" {
+ } ''
+ tar --no-recursion -Prf "$out" \
+ --mtime="@$SOURCE_DATE_EPOCH" \
+ --owner=0 --group=0 /nix /nix/store
+
+ tar -Prpf "$out" --hard-dereference --sort=name \
+ --mtime="@$SOURCE_DATE_EPOCH" \
+ --owner=0 --group=0 ${lib.concatStringsSep " " paths}
+ '';
+
+
+ layerTarballs = mapAttrs (_: pathsToLayer ) (fromJSON layers);
+in writeText "layer-tarballs.json" (toJSON layerTarballs)
diff --git a/tools/nixery/build-image/default.nix b/tools/nixery/build-image/default.nix
index a61ac06bdd92..0800eb95987f 100644
--- a/tools/nixery/build-image/default.nix
+++ b/tools/nixery/build-image/default.nix
@@ -20,10 +20,20 @@
{ pkgs ? import <nixpkgs> {} }:
-pkgs.writeShellScriptBin "nixery-build-image" ''
- exec ${pkgs.nix}/bin/nix-build \
- --show-trace \
- --no-out-link "$@" \
- --argstr loadPkgs ${./load-pkgs.nix} \
- ${./build-image.nix}
-''
+{
+ build-image = pkgs.writeShellScriptBin "nixery-build-image" ''
+ exec ${pkgs.nix}/bin/nix-build \
+ --show-trace \
+ --no-out-link "$@" \
+ --argstr loadPkgs ${./load-pkgs.nix} \
+ ${./build-image.nix}
+ '';
+
+ build-layers = pkgs.writeShellScriptBin "nixery-build-layers" ''
+ exec ${pkgs.nix}/bin/nix-build \
+ --show-trace \
+ --no-out-link "$@" \
+ --argstr loadPkgs ${./load-pkgs.nix} \
+ ${./build-layers.nix}
+ '';
+}
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index f321b07a9c7a..925edbf6dc84 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -11,13 +11,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+
{ pkgs ? import <nixpkgs> { }
, preLaunch ? ""
, extraPackages ? [] }:
with pkgs;
-rec {
+let builders = import ./build-image { inherit pkgs; };
+in rec {
# Go implementation of the Nixery server which implements the
# container registry interface.
#
@@ -27,7 +29,8 @@ rec {
nixery-server = callPackage ./server { };
# Implementation of the Nix image building logic
- nixery-build-image = import ./build-image { inherit pkgs; };
+ nixery-build-image = builders.build-image;
+ nixery-build-layers = builders.build-layers;
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
--
cgit 1.4.1
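The `layers` argument expected by `build-layers.nix` is a JSON object mapping a layer's content hash to the store paths it should contain. A small sketch of assembling such an argument on the Go side (hash and paths are invented; the actual wiring into the server follows in later commits):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical layer: content hash -> store paths it should contain.
	layerInput := map[string][]string{
		"3f29546453678b855931c174a97d6c0894b8f546": {
			"/nix/store/aaaa-bash-4.4",
			"/nix/store/bbbb-readline-7.0",
		},
	}

	j, err := json.Marshal(layerInput)
	if err != nil {
		panic(err)
	}

	// Passed to the wrapper roughly as:
	//   nixery-build-layers --argstr layers '<json>'
	fmt.Println(string(j))
}
```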
From 6e2b84f475fb302bf6d8a43c2f7497040ad82cac Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 30 Sep 2019 14:25:55 +0100
Subject: feat(server): Add cache for layer builds in GCS & local cache
This cache is going to be used for looking up whether a layer build
has taken place already (based on a hash of the layer contents).
See the caching section in the updated documentation for details.
Relates to #50.
---
tools/nixery/server/builder/cache.go | 86 ++++++++++++++++++++++++++++++++++++
1 file changed, 86 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 52765293f3e4..edb1b711d853 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -14,7 +14,9 @@
package builder
import (
+ "bytes"
"context"
+ "encoding/json"
"io"
"log"
"os"
@@ -25,14 +27,25 @@ import (
type void struct{}
+type Build struct {
+ SHA256 string `json:"sha256"`
+ MD5 string `json:"md5"`
+}
+
// LocalCache implements the structure used for local caching of
// manifests and layer uploads.
type LocalCache struct {
+ // Manifest cache
mmtx sync.RWMutex
mcache map[string]string
+ // Layer (tarball) cache
lmtx sync.RWMutex
lcache map[string]void
+
+ // Layer (build) cache
+ bmtx sync.RWMutex
+ bcache map[string]Build
}
func NewCache() LocalCache {
@@ -80,6 +93,22 @@ func (c *LocalCache) localCacheManifest(key, path string) {
c.mmtx.Unlock()
}
+// Retrieve a cached build from the local cache.
+func (c *LocalCache) buildFromLocalCache(key string) (*Build, bool) {
+ c.bmtx.RLock()
+ b, ok := c.bcache[key]
+ c.bmtx.RUnlock()
+
+ return &b, ok
+}
+
+// Add a build result to the local cache.
+func (c *LocalCache) localCacheBuild(key string, b Build) {
+ c.bmtx.Lock()
+ c.bcache[key] = b
+ c.bmtx.Unlock()
+}
+
// Retrieve a manifest from the cache(s). First the local cache is
// checked, then the GCS-bucket cache.
func manifestFromCache(ctx *context.Context, cache *LocalCache, bucket *storage.BucketHandle, key string) (string, bool) {
@@ -118,6 +147,7 @@ func manifestFromCache(ctx *context.Context, cache *LocalCache, bucket *storage.
return path, true
}
+// Add a manifest to the bucket & local caches
func cacheManifest(ctx *context.Context, cache *LocalCache, bucket *storage.BucketHandle, key, path string) {
cache.localCacheManifest(key, path)
@@ -144,3 +174,59 @@ func cacheManifest(ctx *context.Context, cache *LocalCache, bucket *storage.Buck
log.Printf("Cached manifest sha1:%s (%v bytes written)\n", key, size)
}
+
+// Retrieve a build from the cache, first checking the local cache
+// followed by the bucket cache.
+func buildFromCache(ctx *context.Context, cache *LocalCache, bucket *storage.BucketHandle, key string) (*Build, bool) {
+ build, cached := cache.buildFromLocalCache(key)
+ if cached {
+ return build, true
+ }
+
+ obj := bucket.Object("builds/" + key)
+ _, err := obj.Attrs(*ctx)
+ if err != nil {
+ return nil, false
+ }
+
+ r, err := obj.NewReader(*ctx)
+ if err != nil {
+ log.Printf("Failed to retrieve build '%s' from cache: %s\n", key, err)
+ return nil, false
+ }
+ defer r.Close()
+
+ jb := bytes.NewBuffer([]byte{})
+ _, err = io.Copy(jb, r)
+ if err != nil {
+ log.Printf("Failed to read build '%s' from cache: %s\n", key, err)
+ return nil, false
+ }
+
+ var b Build
+ err = json.Unmarshal(jb.Bytes(), &b)
+ if err != nil {
+ log.Printf("Failed to unmarshal build '%s' from cache: %s\n", key, err)
+ return nil, false
+ }
+
+ return &b, true
+}
+
+func cacheBuild(ctx context.Context, cache *LocalCache, bucket *storage.BucketHandle, key string, build Build) {
+ cache.localCacheBuild(key, build)
+
+ obj := bucket.Object("builds/" + key)
+
+ j, _ := json.Marshal(&build)
+
+ w := obj.NewWriter(ctx)
+
+ size, err := io.Copy(w, bytes.NewReader(j))
+ if err != nil {
+ log.Printf("failed to cache build '%s': %s\n", key, err)
+ return
+ }
+
+ if err = w.Close(); err != nil {
+ log.Printf("failed to cache build '%s': %s\n", key, err)
+ return
+ }
+
+ log.Printf("cached build '%s' (%v bytes written)\n", key, size)
+}
--
cgit 1.4.1
From 61269175c046681711cf88370d220eb97cd621cf Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 30 Sep 2019 17:38:41 +0100
Subject: refactor(server): Introduce a state type to carry runtime state
The state type contains things such as the bucket handle and Nixery's
configuration which need to be passed around in the builder.
This is only added for convenience.
---
tools/nixery/server/builder/state.go | 24 ++++++++++++++++++++++++
tools/nixery/server/main.go | 18 +++++++-----------
2 files changed, 31 insertions(+), 11 deletions(-)
create mode 100644 tools/nixery/server/builder/state.go
(limited to 'tools')
diff --git a/tools/nixery/server/builder/state.go b/tools/nixery/server/builder/state.go
new file mode 100644
index 000000000000..1c7f58821b6b
--- /dev/null
+++ b/tools/nixery/server/builder/state.go
@@ -0,0 +1,24 @@
+package builder
+
+import (
+ "cloud.google.com/go/storage"
+ "github.com/google/nixery/config"
+ "github.com/google/nixery/layers"
+)
+
+// State holds the runtime state that is carried around in Nixery and
+// passed to builder functions.
+type State struct {
+ Bucket *storage.BucketHandle
+ Cache LocalCache
+ Cfg config.Config
+ Pop layers.Popularity
+}
+
+func NewState(bucket *storage.BucketHandle, cfg config.Config) State {
+ return State{
+ Bucket: bucket,
+ Cfg: cfg,
+ Cache: NewCache(),
+ }
+}
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index 9242a3731af0..ae8dd3ab2dbf 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -122,10 +122,8 @@ func writeError(w http.ResponseWriter, status int, code, message string) {
}
type registryHandler struct {
- cfg *config.Config
- ctx *context.Context
- bucket *storage.BucketHandle
- cache *builder.LocalCache
+ ctx *context.Context
+ state *builder.State
}
func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
@@ -141,7 +139,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
imageTag := manifestMatches[2]
log.Printf("Requesting manifest for image %q at tag %q", imageName, imageTag)
image := builder.ImageFromName(imageName, imageTag)
- buildResult, err := builder.BuildImage(h.ctx, h.cfg, h.cache, &image, h.bucket)
+ buildResult, err := builder.BuildImage(h.ctx, h.state, &image)
if err != nil {
writeError(w, 500, "UNKNOWN", "image build failure")
@@ -172,7 +170,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
if len(layerMatches) == 3 {
digest := layerMatches[2]
- url, err := constructLayerUrl(h.cfg, digest)
+ url, err := constructLayerUrl(&h.state.Cfg, digest)
if err != nil {
log.Printf("Failed to sign GCS URL: %s\n", err)
@@ -197,16 +195,14 @@ func main() {
ctx := context.Background()
bucket := prepareBucket(&ctx, cfg)
- cache := builder.NewCache()
+ state := builder.NewState(bucket, *cfg)
log.Printf("Starting Nixery on port %s\n", cfg.Port)
// All /v2/ requests belong to the registry handler.
http.Handle("/v2/", ®istryHandler{
- cfg: cfg,
- ctx: &ctx,
- bucket: bucket,
- cache: &cache,
+ ctx: &ctx,
+ state: &state,
})
// All other roots are served by the static file server.
--
cgit 1.4.1
From 87e196757b4e0cb22ec9c9f5ee8475573597382a Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 30 Sep 2019 17:40:01 +0100
Subject: feat(server): Reimplement creation & uploading of layers
The new build process can now call out to Nix to create layers and
upload them to the bucket if necessary.
The layer cache is populated, but not yet used.
---
tools/nixery/server/builder/builder.go | 333 ++++++++++++++++++++++++---------
tools/nixery/server/config/config.go | 2 +-
tools/nixery/server/layers/grouping.go | 2 +-
3 files changed, 250 insertions(+), 87 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index a5744a85348a..303d796df6bf 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -21,20 +21,33 @@ import (
"bufio"
"bytes"
"context"
+ "crypto/md5"
+ "crypto/sha256"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"log"
+ "net/http"
+ "net/url"
"os"
"os/exec"
"sort"
"strings"
"cloud.google.com/go/storage"
- "github.com/google/nixery/config"
+ "github.com/google/nixery/layers"
+ "golang.org/x/oauth2/google"
)
+// The maximum number of layers in an image is 125. To allow for
+// extensibility, the actual number of layers Nixery is "allowed" to
+// use up is set at a lower point.
+const LayerBudget int = 94
+
+// HTTP client to use for direct calls to APIs that are not part of the SDK
+var client = &http.Client{}
+
// Image represents the information necessary for building a container image.
// This can be either a list of package names (corresponding to keys in the
// nixpkgs set) or a Nix expression that results in a *list* of derivations.
@@ -47,6 +60,14 @@ type Image struct {
Packages []string
}
+// TODO(tazjin): docstring
+type BuildResult struct {
+ Error string
+ Pkgs []string
+
+ Manifest struct{} // TODO(tazjin): OCIv1 manifest
+}
+
// ImageFromName parses an image name into the corresponding structure which can
// be used to invoke Nix.
//
@@ -70,24 +91,21 @@ func ImageFromName(name string, tag string) Image {
}
}
-// BuildResult represents the output of calling the Nix derivation responsible
-// for building registry images.
-//
-// The `layerLocations` field contains the local filesystem paths to each
-// individual image layer that will need to be served, while the `manifest`
-// field contains the JSON-representation of the manifest that needs to be
-// served to the client.
-//
-// The later field is simply treated as opaque JSON and passed through.
-type BuildResult struct {
- Error string `json:"error"`
- Pkgs []string `json:"pkgs"`
- Manifest json.RawMessage `json:"manifest"`
-
- LayerLocations map[string]struct {
- Path string `json:"path"`
- Md5 []byte `json:"md5"`
- } `json:"layerLocations"`
+// ImageResult represents the output of calling the Nix derivation
+// responsible for preparing an image.
+type ImageResult struct {
+ // These fields are populated in case of an error
+ Error string `json:"error"`
+ Pkgs []string `json:"pkgs"`
+
+ // These fields are populated in case of success
+ Graph layers.RuntimeGraph `json:"runtimeGraph"`
+ SymlinkLayer struct {
+ Size int `json:"size"`
+ SHA256 string `json:"sha256"`
+ MD5 string `json:"md5"`
+ Path string `json:"path"`
+ } `json:"symlinkLayer"`
}
// convenienceNames expands convenience package names defined by Nixery which
@@ -117,99 +135,244 @@ func logNix(name string, r io.ReadCloser) {
}
}
-// Call out to Nix and request that an image be built. Nix will, upon success,
-// return a manifest for the container image.
-func BuildImage(ctx *context.Context, cfg *config.Config, cache *LocalCache, image *Image, bucket *storage.BucketHandle) (*BuildResult, error) {
- var resultFile string
- cached := false
+func callNix(program string, name string, args []string) ([]byte, error) {
+ cmd := exec.Command(program, args...)
- key := cfg.Pkgs.CacheKey(image.Packages, image.Tag)
- if key != "" {
- resultFile, cached = manifestFromCache(ctx, cache, bucket, key)
+ outpipe, err := cmd.StdoutPipe()
+ if err != nil {
+ return nil, err
}
- if !cached {
- packages, err := json.Marshal(image.Packages)
- if err != nil {
- return nil, err
- }
+ errpipe, err := cmd.StderrPipe()
+ if err != nil {
+ return nil, err
+ }
+ go logNix(name, errpipe)
- srcType, srcArgs := cfg.Pkgs.Render(image.Tag)
+ if err = cmd.Start(); err != nil {
+ log.Printf("error starting %s: %s\n", program, err)
+ return nil, err
+ }
+
+ stdout, _ := ioutil.ReadAll(outpipe)
- args := []string{
- "--timeout", cfg.Timeout,
- "--argstr", "name", image.Name,
- "--argstr", "packages", string(packages),
- "--argstr", "srcType", srcType,
- "--argstr", "srcArgs", srcArgs,
- }
+ if err = cmd.Wait(); err != nil {
+ log.Printf("%s execution error: %s\nstdout: %s\n", program, err, stdout)
+ return nil, err
+ }
- if cfg.PopUrl != "" {
- args = append(args, "--argstr", "popularityUrl", cfg.PopUrl)
- }
+ resultFile := strings.TrimSpace(string(stdout))
+ buildOutput, err := ioutil.ReadFile(resultFile)
+ if err != nil {
+ return nil, err
+ }
- cmd := exec.Command("nixery-build-image", args...)
+ return buildOutput, nil
+}
- outpipe, err := cmd.StdoutPipe()
- if err != nil {
- return nil, err
- }
+// Call out to Nix and request metadata for the image to be built. All
+// required store paths for the image will be realised, but layers
+// will not yet be created from them.
+//
+// This function is only invoked if the manifest is not found in any
+// cache.
+func prepareImage(s *State, image *Image) (*ImageResult, error) {
+ packages, err := json.Marshal(image.Packages)
+ if err != nil {
+ return nil, err
+ }
- errpipe, err := cmd.StderrPipe()
- if err != nil {
- return nil, err
- }
- go logNix(image.Name, errpipe)
+ srcType, srcArgs := s.Cfg.Pkgs.Render(image.Tag)
- if err = cmd.Start(); err != nil {
- log.Println("Error starting nix-build:", err)
- return nil, err
- }
- log.Printf("Started Nix image build for '%s'", image.Name)
+ args := []string{
+ "--timeout", s.Cfg.Timeout,
+ "--argstr", "packages", string(packages),
+ "--argstr", "srcType", srcType,
+ "--argstr", "srcArgs", srcArgs,
+ }
- stdout, _ := ioutil.ReadAll(outpipe)
+ output, err := callNix("nixery-build-image", image.Name, args)
+ if err != nil {
+ log.Printf("failed to call nixery-build-image: %s\n", err)
+ return nil, err
+ }
+ log.Printf("Finished image preparation for '%s' via Nix\n", image.Name)
- if err = cmd.Wait(); err != nil {
- log.Printf("nix-build execution error: %s\nstdout: %s\n", err, stdout)
- return nil, err
- }
+ var result ImageResult
+ err = json.Unmarshal(output, &result)
+ if err != nil {
+ return nil, err
+ }
- log.Println("Finished Nix image build")
+ return &result, nil
+}
- resultFile = strings.TrimSpace(string(stdout))
+// Groups layers and checks whether they are present in the cache
+// already, otherwise calls out to Nix to assemble layers.
+//
+// Returns information about all data layers that need to be included
+// in the manifest, as well as information about which layers need to
+// be uploaded (and from where).
+func prepareLayers(ctx *context.Context, s *State, image *Image, graph *layers.RuntimeGraph) (map[string]string, error) {
+ grouped := layers.GroupLayers(graph, &s.Pop, LayerBudget)
+
+ // TODO(tazjin): Introduce caching strategy, for now this will
+ // build all layers.
+ srcType, srcArgs := s.Cfg.Pkgs.Render(image.Tag)
+ args := []string{
+ "--argstr", "srcType", srcType,
+ "--argstr", "srcArgs", srcArgs,
+ }
- if key != "" {
- cacheManifest(ctx, cache, bucket, key, resultFile)
+ layerInput := make(map[string][]string)
+ for _, l := range grouped {
+ layerInput[l.Hash()] = l.Contents
+
+ // The derivation responsible for building layers does not
+ // have the derivations that resulted in the required store
+ // paths in its context, which means that its sandbox will not
+ // contain the necessary paths if sandboxing is enabled.
+ //
+ // To work around this, all required store paths are added as
+ // 'extra-sandbox-paths' parameters.
+ for _, p := range l.Contents {
+ args = append(args, "--option", "extra-sandbox-paths", p)
}
}
- buildOutput, err := ioutil.ReadFile(resultFile)
+ j, _ := json.Marshal(layerInput)
+ args = append(args, "--argstr", "layers", string(j))
+
+ output, err := callNix("nixery-build-layers", image.Name, args)
if err != nil {
+ log.Printf("failed to call nixery-build-layers: %s\n", err)
return nil, err
}
- // The build output returned by Nix is deserialised to add all
- // contained layers to the bucket. Only the manifest itself is
- // re-serialised to JSON and returned.
- var result BuildResult
+ result := make(map[string]string)
+ err = json.Unmarshal(output, &result)
+ if err != nil {
+ return nil, err
+ }
+
+ return result, nil
+}
+
+// renameObject renames an object in the specified Cloud Storage
+// bucket.
+//
+// The Go API for Cloud Storage does not support renaming objects, but
+// the HTTP API does. The code below makes the relevant call manually.
+func renameObject(ctx context.Context, s *State, old, new string) error {
+ bucket := s.Cfg.Bucket
+
+ creds, err := google.FindDefaultCredentials(ctx)
+ if err != nil {
+ return err
+ }
+
+ token, err := creds.TokenSource.Token()
+ if err != nil {
+ return err
+ }
+
+ // as per https://cloud.google.com/storage/docs/renaming-copying-moving-objects#rename
+ url := fmt.Sprintf(
+ "https://www.googleapis.com/storage/v1/b/%s/o/%s/rewriteTo/b/%s/o/%s",
+ url.PathEscape(bucket), url.PathEscape(old),
+ url.PathEscape(bucket), url.PathEscape(new),
+ )
- err = json.Unmarshal(buildOutput, &result)
+ req, err := http.NewRequest("POST", url, nil)
+ req.Header.Add("Authorization", "Bearer "+token.AccessToken)
+ _, err = client.Do(req)
+ if err != nil {
+ return err
+ }
+
+ // It seems that 'rewriteTo' copies objects instead of
+ // renaming/moving them, hence a deletion call afterwards is
+ // required.
+ if err = s.Bucket.Object(old).Delete(ctx); err != nil {
+ log.Printf("failed to delete renamed object '%s': %s\n", old, err)
+ // this error should not break renaming and is not returned
+ }
+
+ return nil
+}
+
+// Upload a file to the storage bucket, while hashing it at the same time.
+//
+// The initial upload is performed in a 'staging' folder, as the
+// SHA256-hash is not yet available when the upload is initiated.
+//
+// After a successful upload, the file is moved to its final location
+// in the bucket and the build cache is populated.
+//
+// The return value is the layer's SHA256 hash, which is used in the
+// image manifest.
+func uploadHashLayer(ctx context.Context, s *State, key, path string) (string, error) {
+ staging := s.Bucket.Object("staging/" + key)
+
+ // Set up a writer that simultaneously runs both hash
+ // algorithms and uploads to the bucket
+ sw := staging.NewWriter(ctx)
+ shasum := sha256.New()
+ md5sum := md5.New()
+ multi := io.MultiWriter(sw, shasum, md5sum)
+
+ f, err := os.Open(path)
+ if err != nil {
+ log.Printf("failed to open layer at '%s' for reading: %s\n", path, err)
+ return "", err
+ }
+ defer f.Close()
+
+ size, err := io.Copy(multi, f)
+ if err != nil {
+ log.Printf("failed to upload layer '%s' to staging: %s\n", key, err)
+ return "", err
+ }
+
+ if err = sw.Close(); err != nil {
+ log.Printf("failed to upload layer '%s' to staging: %s\n", key, err)
+ return "", err
+ }
+
+ build := Build{
+ SHA256: fmt.Sprintf("%x", shasum.Sum([]byte{})),
+ MD5: fmt.Sprintf("%x", md5sum.Sum([]byte{})),
+ }
+
+ // Hashes are now known and the object is in the bucket, what
+ // remains is to move it to the correct location and cache it.
+ err = renameObject(ctx, s, "staging/"+key, "layers/"+build.SHA256)
+ if err != nil {
+ log.Printf("failed to move layer '%s' from staging: %s\n", key, err)
+ return "", err
+ }
+
+ cacheBuild(ctx, &s.Cache, s.Bucket, key, build)
+
+ log.Printf("Uploaded layer sha256:%s (%v bytes written)", build.SHA256, size)
+
+ return build.SHA256, nil
+}
+
+func BuildImage(ctx *context.Context, s *State, image *Image) (*BuildResult, error) {
+ imageResult, err := prepareImage(s, image)
if err != nil {
return nil, err
}
- for layer, meta := range result.LayerLocations {
- if !cache.hasSeenLayer(layer) {
- err = uploadLayer(ctx, bucket, layer, meta.Path, meta.Md5)
- if err != nil {
- return nil, err
- }
+ if imageResult.Error != "" {
+ return &BuildResult{
+ Error: imageResult.Error,
+ Pkgs: imageResult.Pkgs,
+ }, nil
+ }
- cache.sawLayer(layer)
- }
+ _, err = prepareLayers(ctx, s, image, &imageResult.Graph)
+ if err != nil {
+ return nil, err
}
- return &result, nil
+ return nil, nil
}
// uploadLayer uploads a single layer to Cloud Storage bucket. Before writing
@@ -217,7 +380,7 @@ func BuildImage(ctx *context.Context, cfg *config.Config, cache *LocalCache, ima
//
// If the file does exist, its MD5 hash is verified to ensure that the stored
// file is not - for example - a fragment of a previous, incomplete upload.
-func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer string, path string, md5 []byte) error {
+func uploadLayer(ctx context.Context, bucket *storage.BucketHandle, layer string, path string, md5 []byte) error {
layerKey := fmt.Sprintf("layers/%s", layer)
obj := bucket.Object(layerKey)
@@ -226,12 +389,12 @@ func uploadLayer(ctx *context.Context, bucket *storage.BucketHandle, layer strin
//
// If it does and the MD5 checksum matches the expected one, the layer
// upload can be skipped.
- attrs, err := obj.Attrs(*ctx)
+ attrs, err := obj.Attrs(ctx)
if err == nil && bytes.Equal(attrs.MD5, md5) {
log.Printf("Layer sha256:%s already exists in bucket, skipping upload", layer)
} else {
- writer := obj.NewWriter(*ctx)
+ writer := obj.NewWriter(ctx)
file, err := os.Open(path)
if err != nil {
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index ac8820f23116..30f727db1112 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -60,7 +60,7 @@ func getConfig(key, desc, def string) string {
return value
}
-// config holds the Nixery configuration options.
+// Config holds the Nixery configuration options.
type Config struct {
Bucket string // GCS bucket to cache & serve layers
Signing *storage.SignedURLOptions // Signing options to use for GCS URLs
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
index f8259ab989ff..07a9e0e230a5 100644
--- a/tools/nixery/server/layers/grouping.go
+++ b/tools/nixery/server/layers/grouping.go
@@ -328,7 +328,7 @@ func dominate(budget int, graph *simple.DirectedGraph) []Layer {
// GroupLayers applies the algorithm described above to its input and returns a
// list of layers, each consisting of a list of Nix store paths that it should
// contain.
-func GroupLayers(refs *RuntimeGraph, pop *Popularity, budget int) []Layer {
+func Group(refs *RuntimeGraph, pop *Popularity, budget int) []Layer {
graph := buildGraph(refs, pop)
return dominate(budget, graph)
}
--
cgit 1.4.1
From 3f40c0a2d2e9ca03253da13ec53bc57e9c524f00 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 1 Oct 2019 23:24:55 +0100
Subject: feat(server): Implement package for creating image manifests
The new manifest package creates image manifests and their
configuration. This previously happened in Nix, but is now part of the
server's workload.
This relates to #50.
---
tools/nixery/server/manifest/manifest.go | 114 +++++++++++++++++++++++++++++++
1 file changed, 114 insertions(+)
create mode 100644 tools/nixery/server/manifest/manifest.go
diff --git a/tools/nixery/server/manifest/manifest.go b/tools/nixery/server/manifest/manifest.go
new file mode 100644
index 000000000000..dd447796cc78
--- /dev/null
+++ b/tools/nixery/server/manifest/manifest.go
@@ -0,0 +1,114 @@
+// Package manifest implements logic for creating the image metadata
+// (such as the image manifest and configuration).
+package manifest
+
+import (
+ "crypto/md5"
+ "crypto/sha256"
+ "encoding/json"
+ "fmt"
+)
+
+const (
+ // manifest constants
+ schemaVersion = 2
+
+ // media types
+ manifestType = "application/vnd.docker.distribution.manifest.v2+json"
+ layerType = "application/vnd.docker.image.rootfs.diff.tar"
+ configType = "application/vnd.docker.container.image.v1+json"
+
+ // image config constants
+ arch = "amd64"
+ os = "linux"
+ fsType = "layers"
+)
+
+type Entry struct {
+ MediaType string `json:"mediaType"`
+ Size int64 `json:"size"`
+ Digest string `json:"digest"`
+}
+
+type manifest struct {
+ SchemaVersion int `json:"schemaVersion"`
+ MediaType string `json:"mediaType"`
+ Config Entry `json:"config"`
+ Layers []Entry `json:"layers"`
+}
+
+type imageConfig struct {
+ Architecture string `json:"architecture"`
+ OS string `json:"os"`
+
+ RootFS struct {
+ FSType string `json:"type"`
+ DiffIDs []string `json:"diff_ids"`
+ } `json:"rootfs"`
+
+ // sic! empty struct (rather than `null`) is required by the
+ // image metadata deserialiser in Kubernetes
+ Config struct{} `json:"config"`
+}
+
+// ConfigLayer represents the configuration layer to be included in
+// the manifest, containing its JSON-serialised content and the SHA256
+// & MD5 hashes of its input.
+type ConfigLayer struct {
+ Config []byte
+ SHA256 string
+ MD5 string
+}
+
+// configLayer creates the image configuration with the values set to
+// the constant defaults.
+//
+// Outside of this module the image configuration is treated as an
+// opaque blob and it is thus returned as an already serialised byte
+// array and its SHA256-hash.
+func configLayer(hashes []string) ConfigLayer {
+ c := imageConfig{}
+ c.Architecture = arch
+ c.OS = os
+ c.RootFS.FSType = fsType
+ c.RootFS.DiffIDs = hashes
+
+ j, _ := json.Marshal(c)
+
+ return ConfigLayer{
+ Config: j,
+ SHA256: fmt.Sprintf("%x", sha256.Sum256(j)),
+ MD5: fmt.Sprintf("%x", md5.Sum(j)),
+ }
+}
+
+// Manifest creates an image manifest from the specified layer entries
+// and returns its JSON-serialised form as well as the configuration
+// layer.
+//
+// Callers do not need to set the media type for the layer entries.
+func Manifest(layers []Entry) (json.RawMessage, ConfigLayer) {
+ hashes := make([]string, len(layers))
+ for i, l := range layers {
+ l.MediaType = "application/vnd.docker.image.rootfs.diff.tar"
+ layers[i] = l
+ hashes[i] = l.Digest
+ }
+
+ c := configLayer(hashes)
+
+ m := manifest{
+ SchemaVersion: schemaVersion,
+ MediaType: manifestType,
+ Config: Entry{
+ MediaType: configType,
+ Size: int64(len(c.Config)),
+ Digest: "sha256:" + c.SHA256,
+ },
+ Layers: layers,
+ }
+
+ j, _ := json.Marshal(m)
+
+ return json.RawMessage(j), c
+}
--
cgit 1.4.1
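
As a usage illustration of the new package, the `Manifest` function can be driven roughly as follows. This is a hedged sketch: the digests and sizes are invented, and the import path (`github.com/google/nixery/manifest`, as used by later patches) is assumed.

```go
package main

import (
	"fmt"

	"github.com/google/nixery/manifest"
)

func main() {
	// Layer digests and sizes are invented for this example.
	layers := []manifest.Entry{
		{Digest: "sha256:aaaa", Size: 1024},
		{Digest: "sha256:bbbb", Size: 2048},
	}

	// Manifest fills in the media type of each layer entry itself.
	m, cfg := manifest.Manifest(layers)

	// cfg.Config still has to be uploaded (content-addressed by its
	// SHA256 hash) before the manifest can be served to a client.
	fmt.Printf("manifest: %s\n", string(m))
	fmt.Printf("config: sha256:%s (%d bytes)\n", cfg.SHA256, len(cfg.Config))
}
```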
From ef2623d168fc3f91020fdb574f5287b01d7534e6 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 1 Oct 2019 23:25:51 +0100
Subject: fix(nix): Minor fixes to derivations for new build process
---
tools/nixery/build-image/build-layers.nix | 2 +-
tools/nixery/default.nix | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/tools/nixery/build-image/build-layers.nix b/tools/nixery/build-image/build-layers.nix
index 8a8bfbe9edf1..9a8742f13f73 100644
--- a/tools/nixery/build-image/build-layers.nix
+++ b/tools/nixery/build-image/build-layers.nix
@@ -25,7 +25,7 @@
let
inherit (builtins) fromJSON mapAttrs toJSON;
- inherit (pkgs) lib runCommand;
+ inherit (pkgs) lib runCommand writeText;
pkgs = import loadPkgs { inherit srcType srcArgs importArgs; };
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 925edbf6dc84..af506eea32d2 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -44,7 +44,7 @@ in rec {
# are installing Nixery directly.
nixery-bin = writeShellScriptBin "nixery" ''
export WEB_DIR="${nixery-book}"
- export PATH="${nixery-build-image}/bin:$PATH"
+ export PATH="${nixery-build-layers}/bin:${nixery-build-image}/bin:$PATH"
exec ${nixery-server}/bin/nixery
'';
@@ -96,6 +96,7 @@ in rec {
iana-etc
nix
nixery-build-image
+ nixery-build-layers
nixery-launch-script
openssh
zlib
--
cgit 1.4.1
From 17adda03552d44b6354c3a801614559975c82144 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 1 Oct 2019 23:26:09 +0100
Subject: fix(server): Minor fixes to updated new builder code
---
tools/nixery/server/builder/cache.go | 11 +++++------
tools/nixery/server/main.go | 4 ++--
2 files changed, 7 insertions(+), 8 deletions(-)
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index edb1b711d853..254f32d8306d 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -52,6 +52,7 @@ func NewCache() LocalCache {
return LocalCache{
mcache: make(map[string]string),
lcache: make(map[string]void),
+ bcache: make(map[string]Build),
}
}
@@ -111,7 +112,7 @@ func (c *LocalCache) localCacheBuild(key string, b Build) {
// Retrieve a manifest from the cache(s). First the local cache is
// checked, then the GCS-bucket cache.
-func manifestFromCache(ctx *context.Context, cache *LocalCache, bucket *storage.BucketHandle, key string) (string, bool) {
+func manifestFromCache(ctx context.Context, cache *LocalCache, bucket *storage.BucketHandle, key string) (string, bool) {
path, cached := cache.manifestFromLocalCache(key)
if cached {
return path, true
@@ -120,12 +121,12 @@ func manifestFromCache(ctx *context.Context, cache *LocalCache, bucket *storage.
obj := bucket.Object("manifests/" + key)
// Probe whether the file exists before trying to fetch it.
- _, err := obj.Attrs(*ctx)
+ _, err := obj.Attrs(ctx)
if err != nil {
return "", false
}
- r, err := obj.NewReader(*ctx)
+ r, err := obj.NewReader(ctx)
if err != nil {
log.Printf("Failed to retrieve manifest '%s' from cache: %s\n", key, err)
return "", false
@@ -222,11 +223,9 @@ func cacheBuild(ctx context.Context, cache *LocalCache, bucket *storage.BucketHa
w := obj.NewWriter(ctx)
- size, err := io.Copy(w, bytes.NewReader(j))
+ _, err := io.Copy(w, bytes.NewReader(j))
if err != nil {
log.Printf("failed to cache build '%s': %s\n", key, err)
return
}
-
- log.Printf("cached build '%s' (%v bytes written)\n", key, size)
}
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index ae8dd3ab2dbf..c3a0b6460a39 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -122,7 +122,7 @@ func writeError(w http.ResponseWriter, status int, code, message string) {
}
type registryHandler struct {
- ctx *context.Context
+ ctx context.Context
state *builder.State
}
@@ -201,7 +201,7 @@ func main() {
// All /v2/ requests belong to the registry handler.
http.Handle("/v2/", ®istryHandler{
- ctx: &ctx,
+ ctx: ctx,
state: &state,
})
--
cgit 1.4.1
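
One of the fixes above replaces `*context.Context` with a plain `context.Context` throughout the handler and cache code. Since `context.Context` is an interface, it is already a cheap value that is meant to be passed directly; a minimal, self-contained sketch of the idiom (hypothetical function, not Nixery code):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// lookupManifest is a made-up helper. It receives the context by value,
// as the patched handlers and cache functions now do, and honours
// cancellation without any pointer indirection.
func lookupManifest(ctx context.Context, key string) (string, error) {
	select {
	case <-time.After(10 * time.Millisecond):
		return "manifest-for-" + key, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	m, err := lookupManifest(ctx, "shell/curl")
	fmt.Println(m, err)
}
```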
From aa02ae142166af23c1b6d8533b8eea5d6fa3e9a1 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 1 Oct 2019 23:27:26 +0100
Subject: feat(server): Implement new build process core
Implements the new build process to the point where it can actually
construct and serve image manifests.
It is worth noting that this build process works even if the Nix
sandbox is enabled!
It is also worth noting that none of the caching functionality that
the new build process enables (such as per-layer build caching) is
actually in use yet, hence running Nixery at this commit is prone to
doing more work than previously.
This relates to #50.
---
tools/nixery/server/builder/builder.go | 110 ++++++++++++++++-----------------
1 file changed, 52 insertions(+), 58 deletions(-)
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 303d796df6bf..1bdd9212c770 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -35,8 +35,8 @@ import (
"sort"
"strings"
- "cloud.google.com/go/storage"
"github.com/google/nixery/layers"
+ "github.com/google/nixery/manifest"
"golang.org/x/oauth2/google"
)
@@ -62,10 +62,9 @@ type Image struct {
// TODO(tazjin): docstring
type BuildResult struct {
- Error string
- Pkgs []string
-
- Manifest struct{} // TODO(tazjin): OCIv1 manifest
+ Error string `json:"error"`
+ Pkgs []string `json:"pkgs"`
+ Manifest json.RawMessage `json:"manifest"`
}
// ImageFromName parses an image name into the corresponding structure which can
@@ -149,6 +148,12 @@ func callNix(program string, name string, args []string) ([]byte, error) {
}
go logNix(name, errpipe)
+ if err = cmd.Start(); err != nil {
+ log.Printf("Error starting %s: %s\n", program, err)
+ return nil, err
+ }
+ log.Printf("Invoked Nix build (%s) for '%s'\n", program, name)
+
stdout, _ := ioutil.ReadAll(outpipe)
if err = cmd.Wait(); err != nil {
@@ -208,7 +213,7 @@ func prepareImage(s *State, image *Image) (*ImageResult, error) {
// Returns information about all data layers that need to be included
// in the manifest, as well as information about which layers need to
// be uploaded (and from where).
-func prepareLayers(ctx *context.Context, s *State, image *Image, graph *layers.RuntimeGraph) (map[string]string, error) {
+func prepareLayers(ctx context.Context, s *State, image *Image, graph *layers.RuntimeGraph) (map[string]string, error) {
grouped := layers.Group(graph, &s.Pop, LayerBudget)
// TODO(tazjin): Introduce caching strategy, for now this will
@@ -219,7 +224,8 @@ func prepareLayers(ctx *context.Context, s *State, image *Image, graph *layers.R
"--argstr", "srcArgs", srcArgs,
}
- var layerInput map[string][]string
+ layerInput := make(map[string][]string)
+ allPaths := []string{}
for _, l := range grouped {
layerInput[l.Hash()] = l.Contents
@@ -231,10 +237,12 @@ func prepareLayers(ctx *context.Context, s *State, image *Image, graph *layers.R
// To work around this, all required store paths are added as
// 'extra-sandbox-paths' parameters.
for _, p := range l.Contents {
- args = append(args, "--option", "extra-sandbox-paths", p)
+ allPaths = append(allPaths, p)
}
}
+ args = append(args, "--option", "extra-sandbox-paths", strings.Join(allPaths, " "))
+
j, _ := json.Marshal(layerInput)
args = append(args, "--argstr", "layers", string(j))
@@ -243,6 +251,7 @@ func prepareLayers(ctx *context.Context, s *State, image *Image, graph *layers.R
log.Printf("failed to call nixery-build-layers: %s\n", err)
return nil, err
}
+ log.Printf("Finished layer preparation for '%s' via Nix\n", image.Name)
result := make(map[string]string)
err = json.Unmarshal(output, &result)
@@ -306,32 +315,25 @@ func renameObject(ctx context.Context, s *State, old, new string) error {
//
// The return value is the layer's SHA256 hash, which is used in the
// image manifest.
-func uploadHashLayer(ctx context.Context, s *State, key, path string) (string, error) {
+func uploadHashLayer(ctx context.Context, s *State, key string, data io.Reader) (*manifest.Entry, error) {
staging := s.Bucket.Object("staging/" + key)
- // Set up a writer that simultaneously runs both hash
+ // Sets up a "multiwriter" that simultaneously runs both hash
// algorithms and uploads to the bucket
sw := staging.NewWriter(ctx)
shasum := sha256.New()
md5sum := md5.New()
multi := io.MultiWriter(sw, shasum, md5sum)
- f, err := os.Open(path)
- if err != nil {
- log.Printf("failed to open layer at '%s' for reading: %s\n", path, err)
- return "", err
- }
- defer f.Close()
-
- size, err := io.Copy(multi, f)
+ size, err := io.Copy(multi, data)
if err != nil {
log.Printf("failed to upload layer '%s' to staging: %s\n", key, err)
- return "", err
+ return nil, err
}
if err = sw.Close(); err != nil {
log.Printf("failed to upload layer '%s' to staging: %s\n", key, err)
- return "", err
+ return nil, err
}
build := Build{
@@ -344,20 +346,25 @@ func uploadHashLayer(ctx context.Context, s *State, key, path string) (string, e
err = renameObject(ctx, s, "staging/"+key, "layers/"+build.SHA256)
if err != nil {
log.Printf("failed to move layer '%s' from staging: %s\n", key, err)
- return "", err
+ return nil, err
}
cacheBuild(ctx, &s.Cache, s.Bucket, key, build)
log.Printf("Uploaded layer sha256:%s (%v bytes written)", build.SHA256, size)
- return build.SHA256, nil
+ return &manifest.Entry{
+ Digest: "sha256:" + build.SHA256,
+ Size: size,
+ }, nil
}
-func BuildImage(ctx *context.Context, s *State, image *Image) (*BuildResult, error) {
+func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, error) {
+ // TODO(tazjin): Use the build cache
+
imageResult, err := prepareImage(s, image)
if err != nil {
- return nil, err
+ return nil, fmt.Errorf("failed to prepare image '%s': %s", image.Name, err)
}
if imageResult.Error != "" {
@@ -367,51 +374,38 @@ func BuildImage(ctx *context.Context, s *State, image *Image) (*BuildResult, err
}, nil
}
- _, err = prepareLayers(ctx, s, image, &imageResult.Graph)
+ layerResult, err := prepareLayers(ctx, s, image, &imageResult.Graph)
if err != nil {
return nil, err
}
- return nil, nil
-}
-
-// uploadLayer uploads a single layer to Cloud Storage bucket. Before writing
-// any data the bucket is probed to see if the file already exists.
-//
-// If the file does exist, its MD5 hash is verified to ensure that the stored
-// file is not - for example - a fragment of a previous, incomplete upload.
-func uploadLayer(ctx context.Context, bucket *storage.BucketHandle, layer string, path string, md5 []byte) error {
- layerKey := fmt.Sprintf("layers/%s", layer)
- obj := bucket.Object(layerKey)
-
- // Before uploading a layer to the bucket, probe whether it already
- // exists.
- //
- // If it does and the MD5 checksum matches the expected one, the layer
- // upload can be skipped.
- attrs, err := obj.Attrs(ctx)
-
- if err == nil && bytes.Equal(attrs.MD5, md5) {
- log.Printf("Layer sha256:%s already exists in bucket, skipping upload", layer)
- } else {
- writer := obj.NewWriter(ctx)
- file, err := os.Open(path)
-
+ layers := []manifest.Entry{}
+ for key, path := range layerResult {
+ f, err := os.Open(path)
if err != nil {
- return fmt.Errorf("failed to open layer %s from path %s: %v", layer, path, err)
+ log.Printf("failed to open layer at '%s': %s\n", path, err)
+ return nil, err
}
- size, err := io.Copy(writer, file)
+ entry, err := uploadHashLayer(ctx, s, key, f)
+ f.Close()
if err != nil {
- return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
+ return nil, err
}
- if err = writer.Close(); err != nil {
- return fmt.Errorf("failed to write layer %s to Cloud Storage: %v", layer, err)
- }
+ layers = append(layers, *entry)
+ }
- log.Printf("Uploaded layer sha256:%s (%v bytes written)\n", layer, size)
+ m, c := manifest.Manifest(layers)
+ if _, err = uploadHashLayer(ctx, s, c.SHA256, bytes.NewReader(c.Config)); err != nil {
+ log.Printf("failed to upload config for %s: %s\n", image.Name, err)
+ return nil, err
}
- return nil
+ result := BuildResult{
+ Manifest: m,
+ }
+ // TODO: cache manifest
+
+ return &result, nil
}
--
cgit 1.4.1
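
From the perspective of an HTTP handler, the new core could be driven roughly as shown below. This is a hedged sketch rather than code from the patch: the import path for the builder package and the exact `ImageFromName` signature are assumptions, and error handling is simplified.

```go
package registry

import (
	"context"
	"net/http"

	"github.com/google/nixery/builder"
)

// serveManifest shows one plausible way a registry handler could drive the
// new build core: build the manifest and return it with the Docker manifest
// media type.
func serveManifest(ctx context.Context, s *builder.State, w http.ResponseWriter, name, tag string) {
	image := builder.ImageFromName(name, tag)

	res, err := builder.BuildImage(ctx, s, &image)
	if err != nil {
		http.Error(w, "image build failed", http.StatusInternalServerError)
		return
	}

	// Build errors reported by Nix (such as unknown packages) come back
	// as part of the result rather than as a Go error.
	if res.Error != "" {
		http.Error(w, res.Error, http.StatusNotFound)
		return
	}

	w.Header().Set("Content-Type", "application/vnd.docker.distribution.manifest.v2+json")
	w.Write(res.Manifest)
}
```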
From f4f290957305a5a81292edef717a18a7c36be4bf Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 2 Oct 2019 15:19:28 +0100
Subject: fix(server): Specify correct authentication scope for GCS
When retrieving tokens for service accounts, some methods of
retrieval require a scope to be specified.
---
tools/nixery/server/builder/builder.go | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 1bdd9212c770..ddfd4a078229 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -45,6 +45,9 @@ import (
// use up is set at a lower point.
const LayerBudget int = 94
+// API scope needed for renaming objects in GCS
+const gcsScope = "https://www.googleapis.com/auth/devstorage.read_write"
+
// HTTP client to use for direct calls to APIs that are not part of the SDK
var client = &http.Client{}
@@ -270,7 +273,7 @@ func prepareLayers(ctx context.Context, s *State, image *Image, graph *layers.Ru
func renameObject(ctx context.Context, s *State, old, new string) error {
bucket := s.Cfg.Bucket
- creds, err := google.FindDefaultCredentials(ctx)
+ creds, err := google.FindDefaultCredentials(ctx, gcsScope)
if err != nil {
return err
}
--
cgit 1.4.1
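
For context, a standalone sketch of the token retrieval this patch fixes: default credentials are requested with an explicit scope and the resulting token is attached to a raw JSON-API call. The bucket name is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	"golang.org/x/oauth2/google"
)

const scope = "https://www.googleapis.com/auth/devstorage.read_write"

func main() {
	ctx := context.Background()

	// Without the scope argument, some credential sources hand back
	// tokens that the storage JSON API rejects.
	creds, err := google.FindDefaultCredentials(ctx, scope)
	if err != nil {
		log.Fatalln("no default credentials:", err)
	}

	token, err := creds.TokenSource.Token()
	if err != nil {
		log.Fatalln("failed to mint token:", err)
	}

	req, err := http.NewRequest("GET", "https://www.googleapis.com/storage/v1/b/example-bucket/o", nil)
	if err != nil {
		log.Fatalln("failed to build request:", err)
	}
	req.Header.Add("Authorization", "Bearer "+token.AccessToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatalln("request failed:", err)
	}
	defer resp.Body.Close()

	fmt.Println("status:", resp.Status)
}
```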
From 64fca61ea1d898c01893f56f0e03913f36468f5d Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 2 Oct 2019 15:31:57 +0100
Subject: fix(server): Upload symlink layer created by first Nix build
This layer is needed in addition to those that are built in the second
Nix build.
---
tools/nixery/server/builder/builder.go | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index ddfd4a078229..87776735f9e7 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -63,7 +63,10 @@ type Image struct {
Packages []string
}
-// TODO(tazjin): docstring
+// BuildResult represents the data returned from the server to the
+// HTTP handlers. Error information is propagated straight from Nix
+// for errors inside of the build that should be fed back to the
+// client (such as missing packages).
type BuildResult struct {
Error string `json:"error"`
Pkgs []string `json:"pkgs"`
@@ -382,6 +385,8 @@ func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, erro
return nil, err
}
+ layerResult[imageResult.SymlinkLayer.SHA256] = imageResult.SymlinkLayer.Path
+
layers := []manifest.Entry{}
for key, path := range layerResult {
f, err := os.Open(path)
--
cgit 1.4.1
From 0698d7f2aafc62c1ef6fca172668310727fdaef2 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 2 Oct 2019 17:48:58 +0100
Subject: chore(server): Remove "layer seen" cache
This cache is no longer required: the layer cache (mapping store path
hashes to layer hashes) already implies that a layer has been seen.
---
tools/nixery/server/builder/cache.go | 34 +++++-----------------------------
1 file changed, 5 insertions(+), 29 deletions(-)
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 254f32d8306d..1ed87b40a130 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -25,8 +25,6 @@ import (
"cloud.google.com/go/storage"
)
-type void struct{}
-
type Build struct {
SHA256 string `json:"sha256"`
MD5 string `json:"md5"`
@@ -39,40 +37,18 @@ type LocalCache struct {
mmtx sync.RWMutex
mcache map[string]string
- // Layer (tarball) cache
+ // Layer cache
lmtx sync.RWMutex
- lcache map[string]void
-
- // Layer (build) cache
- bmtx sync.RWMutex
- bcache map[string]Build
+ lcache map[string]Build
}
func NewCache() LocalCache {
return LocalCache{
mcache: make(map[string]string),
- lcache: make(map[string]void),
- bcache: make(map[string]Build),
+ lcache: make(map[string]Build),
}
}
-// Has this layer hash already been seen by this Nixery instance? If
-// yes, we can skip upload checking and such because it has already
-// been done.
-func (c *LocalCache) hasSeenLayer(hash string) bool {
- c.lmtx.RLock()
- defer c.lmtx.RUnlock()
- _, seen := c.lcache[hash]
- return seen
-}
-
-// Layer has now been seen and should be stored.
-func (c *LocalCache) sawLayer(hash string) {
- c.lmtx.Lock()
- defer c.lmtx.Unlock()
- c.lcache[hash] = void{}
-}
-
// Retrieve a cached manifest if the build is cacheable and it exists.
func (c *LocalCache) manifestFromLocalCache(key string) (string, bool) {
c.mmtx.RLock()
@@ -97,7 +73,7 @@ func (c *LocalCache) localCacheManifest(key, path string) {
// Retrieve a cached build from the local cache.
func (c *LocalCache) buildFromLocalCache(key string) (*Build, bool) {
c.bmtx.RLock()
- b, ok := c.bcache[key]
+ b, ok := c.lcache[key]
c.bmtx.RUnlock()
return &b, ok
@@ -106,7 +82,7 @@ func (c *LocalCache) buildFromLocalCache(key string) (*Build, bool) {
// Add a build result to the local cache.
func (c *LocalCache) localCacheBuild(key string, b Build) {
c.bmtx.Lock()
- c.bcache[key] = b
+ c.lcache[key] = b
c.bmtx.Unlock()
}
--
cgit 1.4.1
From 1308a6e1fd8e5f9cbd0d6b5d872628ec234114d5 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 2 Oct 2019 18:00:35 +0100
Subject: refactor(server): Clean up cache implementation
A couple of minor fixes and improvements to the cache implementation.
---
tools/nixery/server/builder/builder.go | 2 +-
tools/nixery/server/builder/cache.go | 43 +++++++++++++++++-----------------
tools/nixery/server/main.go | 8 +++----
3 files changed, 26 insertions(+), 27 deletions(-)
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 87776735f9e7..d0650648fbcb 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -355,7 +355,7 @@ func uploadHashLayer(ctx context.Context, s *State, key string, data io.Reader)
return nil, err
}
- cacheBuild(ctx, &s.Cache, s.Bucket, key, build)
+ cacheBuild(ctx, s, key, build)
log.Printf("Uploaded layer sha256:%s (%v bytes written)", build.SHA256, size)
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 1ed87b40a130..582e28d81abc 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -21,8 +21,6 @@ import (
"log"
"os"
"sync"
-
- "cloud.google.com/go/storage"
)
type Build struct {
@@ -72,29 +70,29 @@ func (c *LocalCache) localCacheManifest(key, path string) {
// Retrieve a cached build from the local cache.
func (c *LocalCache) buildFromLocalCache(key string) (*Build, bool) {
- c.bmtx.RLock()
+ c.lmtx.RLock()
b, ok := c.lcache[key]
- c.bmtx.RUnlock()
+ c.lmtx.RUnlock()
return &b, ok
}
// Add a build result to the local cache.
func (c *LocalCache) localCacheBuild(key string, b Build) {
- c.bmtx.Lock()
+ c.lmtx.Lock()
c.lcache[key] = b
- c.bmtx.Unlock()
+ c.lmtx.Unlock()
}
// Retrieve a manifest from the cache(s). First the local cache is
// checked, then the GCS-bucket cache.
-func manifestFromCache(ctx context.Context, cache *LocalCache, bucket *storage.BucketHandle, key string) (string, bool) {
- path, cached := cache.manifestFromLocalCache(key)
+func manifestFromCache(ctx context.Context, s *State, key string) (string, bool) {
+ path, cached := s.Cache.manifestFromLocalCache(key)
if cached {
return path, true
}
- obj := bucket.Object("manifests/" + key)
+ obj := s.Bucket.Object("manifests/" + key)
// Probe whether the file exists before trying to fetch it.
_, err := obj.Attrs(ctx)
@@ -119,17 +117,17 @@ func manifestFromCache(ctx context.Context, cache *LocalCache, bucket *storage.B
}
log.Printf("Retrieved manifest for sha1:%s from GCS\n", key)
- cache.localCacheManifest(key, path)
+ go s.Cache.localCacheManifest(key, path)
return path, true
}
// Add a manifest to the bucket & local caches
-func cacheManifest(ctx *context.Context, cache *LocalCache, bucket *storage.BucketHandle, key, path string) {
- cache.localCacheManifest(key, path)
+func cacheManifest(ctx context.Context, s *State, key, path string) {
+ go s.Cache.localCacheManifest(key, path)
- obj := bucket.Object("manifests/" + key)
- w := obj.NewWriter(*ctx)
+ obj := s.Bucket.Object("manifests/" + key)
+ w := obj.NewWriter(ctx)
f, err := os.Open(path)
if err != nil {
@@ -154,19 +152,19 @@ func cacheManifest(ctx *context.Context, cache *LocalCache, bucket *storage.Buck
// Retrieve a build from the cache, first checking the local cache
// followed by the bucket cache.
-func buildFromCache(ctx *context.Context, cache *LocalCache, bucket *storage.BucketHandle, key string) (*Build, bool) {
- build, cached := cache.buildFromLocalCache(key)
+func buildFromCache(ctx context.Context, s *State, key string) (*Build, bool) {
+ build, cached := s.Cache.buildFromLocalCache(key)
if cached {
return build, true
}
- obj := bucket.Object("builds/" + key)
- _, err := obj.Attrs(*ctx)
+ obj := s.Bucket.Object("builds/" + key)
+ _, err := obj.Attrs(ctx)
if err != nil {
return nil, false
}
- r, err := obj.NewReader(*ctx)
+ r, err := obj.NewReader(ctx)
if err != nil {
log.Printf("Failed to retrieve build '%s' from cache: %s\n", key, err)
return nil, false
@@ -187,13 +185,14 @@ func buildFromCache(ctx *context.Context, cache *LocalCache, bucket *storage.Buc
return nil, false
}
+ go s.Cache.localCacheBuild(key, b)
return &b, true
}
-func cacheBuild(ctx context.Context, cache *LocalCache, bucket *storage.BucketHandle, key string, build Build) {
- cache.localCacheBuild(key, build)
+func cacheBuild(ctx context.Context, s *State, key string, build Build) {
+ go s.Cache.localCacheBuild(key, build)
- obj := bucket.Object("builds/" + key)
+ obj := s.Bucket.Object("builds/" + key)
j, _ := json.Marshal(&build)
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index c3a0b6460a39..1ff3a471f94f 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -81,15 +81,15 @@ func constructLayerUrl(cfg *config.Config, digest string) (string, error) {
//
// The bucket is required for Nixery to function correctly, hence fatal errors
// are generated in case it fails to be set up correctly.
-func prepareBucket(ctx *context.Context, cfg *config.Config) *storage.BucketHandle {
- client, err := storage.NewClient(*ctx)
+func prepareBucket(ctx context.Context, cfg *config.Config) *storage.BucketHandle {
+ client, err := storage.NewClient(ctx)
if err != nil {
log.Fatalln("Failed to set up Cloud Storage client:", err)
}
bkt := client.Bucket(cfg.Bucket)
- if _, err := bkt.Attrs(*ctx); err != nil {
+ if _, err := bkt.Attrs(ctx); err != nil {
log.Fatalln("Could not access configured bucket", err)
}
@@ -194,7 +194,7 @@ func main() {
}
ctx := context.Background()
- bucket := prepareBucket(&ctx, cfg)
+ bucket := prepareBucket(ctx, cfg)
state := builder.NewState(bucket, *cfg)
log.Printf("Starting Nixery on port %s\n", cfg.Port)
--
cgit 1.4.1
From 355fe3f5ec05c3c698ea3ba21a5d57454daeceef Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 11:23:04 +0100
Subject: feat(server): Reintroduce manifest caching to GCS
The new builder now caches and reads cached manifests to/from GCS. The
in-memory cache is disabled, as manifests are no longer written to a
local file and the caching of file paths does not work (unless we
reintroduce reading/writing from temp files as part of the local
cache).
---
tools/nixery/server/builder/builder.go | 15 +++++++++---
tools/nixery/server/builder/cache.go | 43 ++++++++++++++--------------------
2 files changed, 29 insertions(+), 29 deletions(-)
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index d0650648fbcb..f3342f9918f8 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -366,7 +366,14 @@ func uploadHashLayer(ctx context.Context, s *State, key string, data io.Reader)
}
func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, error) {
- // TODO(tazjin): Use the build cache
+ key := s.Cfg.Pkgs.CacheKey(image.Packages, image.Tag)
+ if key != "" {
+ if m, c := manifestFromCache(ctx, s, key); c {
+ return &BuildResult{
+ Manifest: m,
+ }, nil
+ }
+ }
imageResult, err := prepareImage(s, image)
if err != nil {
@@ -410,10 +417,12 @@ func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, erro
return nil, err
}
+ if key != "" {
+ go cacheManifest(ctx, s, key, m)
+ }
+
result := BuildResult{
Manifest: m,
}
- // TODO: cache manifest
-
return &result, nil
}
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 582e28d81abc..a5cbbf6ce469 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -18,8 +18,8 @@ import (
"context"
"encoding/json"
"io"
+ "io/ioutil"
"log"
- "os"
"sync"
)
@@ -86,57 +86,48 @@ func (c *LocalCache) localCacheBuild(key string, b Build) {
// Retrieve a manifest from the cache(s). First the local cache is
// checked, then the GCS-bucket cache.
-func manifestFromCache(ctx context.Context, s *State, key string) (string, bool) {
- path, cached := s.Cache.manifestFromLocalCache(key)
- if cached {
- return path, true
- }
+func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessage, bool) {
+ // path, cached := s.Cache.manifestFromLocalCache(key)
+ // if cached {
+ // return path, true
+ // }
+ // TODO: local cache?
obj := s.Bucket.Object("manifests/" + key)
// Probe whether the file exists before trying to fetch it.
_, err := obj.Attrs(ctx)
if err != nil {
- return "", false
+ return nil, false
}
r, err := obj.NewReader(ctx)
if err != nil {
log.Printf("Failed to retrieve manifest '%s' from cache: %s\n", key, err)
- return "", false
+ return nil, false
}
defer r.Close()
- path = os.TempDir() + "/" + key
- f, _ := os.Create(path)
- defer f.Close()
-
- _, err = io.Copy(f, r)
+ m, err := ioutil.ReadAll(r)
if err != nil {
log.Printf("Failed to read cached manifest for '%s': %s\n", key, err)
}
+ // TODO: locally cache manifest, but the cache needs to be changed
log.Printf("Retrieved manifest for sha1:%s from GCS\n", key)
- go s.Cache.localCacheManifest(key, path)
-
- return path, true
+ return json.RawMessage(m), true
}
// Add a manifest to the bucket & local caches
-func cacheManifest(ctx context.Context, s *State, key, path string) {
- go s.Cache.localCacheManifest(key, path)
+func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage) {
+ // go s.Cache.localCacheManifest(key, path)
+ // TODO local cache
obj := s.Bucket.Object("manifests/" + key)
w := obj.NewWriter(ctx)
+ r := bytes.NewReader([]byte(m))
- f, err := os.Open(path)
- if err != nil {
- log.Printf("failed to open manifest sha1:%s for cache upload: %s\n", key, err)
- return
- }
- defer f.Close()
-
- size, err := io.Copy(w, f)
+ size, err := io.Copy(w, r)
if err != nil {
log.Printf("failed to cache manifest sha1:%s: %s\n", key, err)
return
--
cgit 1.4.1
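
The caching reintroduced here follows a two-level read-through shape: local cache first, then the GCS bucket, with a bucket hit feeding the local cache again. A self-contained sketch of that pattern (not Nixery's actual types), with a plain function standing in for the GCS object reader:

```go
package main

import (
	"fmt"
	"sync"
)

type memCache struct {
	mtx   sync.RWMutex
	items map[string][]byte
}

func (c *memCache) get(key string) ([]byte, bool) {
	c.mtx.RLock()
	defer c.mtx.RUnlock()
	m, ok := c.items[key]
	return m, ok
}

func (c *memCache) put(key string, m []byte) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.items[key] = m
}

// lookup checks the in-memory cache first and falls back to the bucket
// (represented here by fetch); a bucket hit repopulates the local cache.
func lookup(c *memCache, fetch func(string) ([]byte, bool), key string) ([]byte, bool) {
	if m, ok := c.get(key); ok {
		return m, true
	}
	m, ok := fetch(key)
	if !ok {
		return nil, false
	}
	c.put(key, m)
	return m, true
}

func main() {
	c := &memCache{items: make(map[string][]byte)}
	bucket := map[string][]byte{"abc": []byte(`{"schemaVersion":2}`)}
	fetch := func(k string) ([]byte, bool) { m, ok := bucket[k]; return m, ok }

	m, _ := lookup(c, fetch, "abc") // bucket hit, now cached locally
	fmt.Println(string(m))
}
```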
From f6b40ed6c78a69dd417bd9e0f64a207904755af4 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 12:09:24 +0100
Subject: refactor(server): Cache manifest entries for layer builds
MD5 hash checking is no longer performed by Nixery (it does not seem
to be necessary), hence the layer cache now only keeps the SHA256 hash
and size in the form of the manifest entry.
This makes it possible to restructure the builder code to perform
cache-fetching and cache-populating for layers in the same place.
---
tools/nixery/server/builder/cache.go | 56 +++++++++++++++-----------------
tools/nixery/server/manifest/manifest.go | 2 +-
2 files changed, 27 insertions(+), 31 deletions(-)
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index a5cbbf6ce469..ab0021f6d2ff 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -21,12 +21,9 @@ import (
"io/ioutil"
"log"
"sync"
-)
-type Build struct {
- SHA256 string `json:"sha256"`
- MD5 string `json:"md5"`
-}
+ "github.com/google/nixery/manifest"
+)
// LocalCache implements the structure used for local caching of
// manifests and layer uploads.
@@ -37,13 +34,13 @@ type LocalCache struct {
// Layer cache
lmtx sync.RWMutex
- lcache map[string]Build
+ lcache map[string]manifest.Entry
}
func NewCache() LocalCache {
return LocalCache{
mcache: make(map[string]string),
- lcache: make(map[string]Build),
+ lcache: make(map[string]manifest.Entry),
}
}
@@ -68,19 +65,19 @@ func (c *LocalCache) localCacheManifest(key, path string) {
c.mmtx.Unlock()
}
-// Retrieve a cached build from the local cache.
-func (c *LocalCache) buildFromLocalCache(key string) (*Build, bool) {
+// Retrieve a layer build from the local cache.
+func (c *LocalCache) layerFromLocalCache(key string) (*manifest.Entry, bool) {
c.lmtx.RLock()
- b, ok := c.lcache[key]
+ e, ok := c.lcache[key]
c.lmtx.RUnlock()
- return &b, ok
+ return &e, ok
}
-// Add a build result to the local cache.
-func (c *LocalCache) localCacheBuild(key string, b Build) {
+// Add a layer build result to the local cache.
+func (c *LocalCache) localCacheLayer(key string, e manifest.Entry) {
c.lmtx.Lock()
- c.lcache[key] = b
+ c.lcache[key] = e
c.lmtx.Unlock()
}
@@ -141,12 +138,11 @@ func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage)
log.Printf("Cached manifest sha1:%s (%v bytes written)\n", key, size)
}
-// Retrieve a build from the cache, first checking the local cache
-// followed by the bucket cache.
-func buildFromCache(ctx context.Context, s *State, key string) (*Build, bool) {
- build, cached := s.Cache.buildFromLocalCache(key)
- if cached {
- return build, true
+// Retrieve a layer build from the cache, first checking the local
+// cache followed by the bucket cache.
+func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry, bool) {
+ if entry, cached := s.Cache.layerFromLocalCache(key); cached {
+ return entry, true
}
obj := s.Bucket.Object("builds/" + key)
@@ -157,7 +153,7 @@ func buildFromCache(ctx context.Context, s *State, key string) (*Build, bool) {
r, err := obj.NewReader(ctx)
if err != nil {
- log.Printf("Failed to retrieve build '%s' from cache: %s\n", key, err)
+ log.Printf("Failed to retrieve layer build '%s' from cache: %s\n", key, err)
return nil, false
}
defer r.Close()
@@ -165,27 +161,27 @@ func buildFromCache(ctx context.Context, s *State, key string) (*Build, bool) {
jb := bytes.NewBuffer([]byte{})
_, err = io.Copy(jb, r)
if err != nil {
- log.Printf("Failed to read build '%s' from cache: %s\n", key, err)
+ log.Printf("Failed to read layer build '%s' from cache: %s\n", key, err)
return nil, false
}
- var b Build
- err = json.Unmarshal(jb.Bytes(), &build)
+ var entry manifest.Entry
+ err = json.Unmarshal(jb.Bytes(), &entry)
if err != nil {
- log.Printf("Failed to unmarshal build '%s' from cache: %s\n", key, err)
+ log.Printf("Failed to unmarshal layer build '%s' from cache: %s\n", key, err)
return nil, false
}
- go s.Cache.localCacheBuild(key, b)
- return &b, true
+ go s.Cache.localCacheLayer(key, entry)
+ return &entry, true
}
-func cacheBuild(ctx context.Context, s *State, key string, build Build) {
- go s.Cache.localCacheBuild(key, build)
+func cacheLayer(ctx context.Context, s *State, key string, entry manifest.Entry) {
+ s.Cache.localCacheLayer(key, entry)
obj := s.Bucket.Object("builds/" + key)
- j, _ := json.Marshal(&build)
+ j, _ := json.Marshal(&entry)
w := obj.NewWriter(ctx)
diff --git a/tools/nixery/server/manifest/manifest.go b/tools/nixery/server/manifest/manifest.go
index dd447796cc78..61d280a7fbab 100644
--- a/tools/nixery/server/manifest/manifest.go
+++ b/tools/nixery/server/manifest/manifest.go
@@ -25,7 +25,7 @@ const (
)
type Entry struct {
- MediaType string `json:"mediaType"`
+ MediaType string `json:"mediaType,omitempty"`
Size int64 `json:"size"`
Digest string `json:"digest"`
}
--
cgit 1.4.1
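
Since layer builds are now cached as serialised `manifest.Entry` values, the objects stored under `builds/` in the bucket take roughly the JSON shape shown below (the digest and size are invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/google/nixery/manifest"
)

func main() {
	entry := manifest.Entry{
		// MediaType is left empty; with `omitempty` it is dropped from
		// the cached JSON and only filled in when the manifest is built.
		Digest: "sha256:3f1d",
		Size:   4096,
	}

	j, _ := json.Marshal(&entry)
	fmt.Println(string(j)) // {"size":4096,"digest":"sha256:3f1d"}
}
```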
From 53906024ff0612b6946cff4122dc28e85a414b6b Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 12:11:46 +0100
Subject: refactor: Remove remaining MD5-hash mentions and computations
---
tools/nixery/build-image/build-image.nix | 5 ++---
tools/nixery/docs/src/caching.md | 9 ++++-----
tools/nixery/docs/src/under-the-hood.md | 3 +--
tools/nixery/server/builder/builder.go | 24 +++++++++---------------
tools/nixery/server/manifest/manifest.go | 7 ++-----
5 files changed, 18 insertions(+), 30 deletions(-)
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index 33500dbb9e80..70049885ab1c 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -137,11 +137,10 @@ let
buildInputs = with pkgs; [ coreutils jq openssl ];
}''
layerSha256=$(sha256sum ${symlinkLayer} | cut -d ' ' -f1)
- layerMd5=$(openssl dgst -md5 -binary ${symlinkLayer} | openssl enc -base64)
layerSize=$(stat --printf '%s' ${symlinkLayer})
- jq -n -c --arg sha256 $layerSha256 --arg md5 $layerMd5 --arg size $layerSize --arg path ${symlinkLayer} \
- '{ size: ($size | tonumber), sha256: $sha256, md5: $md5, path: $path }' >> $out
+ jq -n -c --arg sha256 $layerSha256 --arg size $layerSize --arg path ${symlinkLayer} \
+ '{ size: ($size | tonumber), sha256: $sha256, path: $path }' >> $out
''));
# Final output structure returned to Nixery if the build succeeded
diff --git a/tools/nixery/docs/src/caching.md b/tools/nixery/docs/src/caching.md
index 175fe04d7084..b07d9e22f046 100644
--- a/tools/nixery/docs/src/caching.md
+++ b/tools/nixery/docs/src/caching.md
@@ -46,9 +46,8 @@ They are stored content-addressably at `$BUCKET/layers/$SHA256HASH` and layer
requests sent to Nixery will redirect directly to this storage location.
The effect of this cache is that Nixery does not need to upload identical layers
-repeatedly. When Nixery notices that a layer already exists in GCS, it will use
-the object metadata to compare its MD5-hash with the locally computed one and
-skip uploading.
+repeatedly. When Nixery notices that a layer already exists in GCS it will skip
+uploading this layer.
Removing layers from the cache is *potentially problematic* if there are cached
manifests or layer builds referencing those layers.
@@ -61,8 +60,8 @@ reference these layers.
Layer builds are cached at `$BUCKET/builds/$HASH`, where `$HASH` is a SHA1 of
the Nix store paths included in the layer.
-The content of the cached entries is a JSON-object that contains the MD5 and
-SHA256 hashes of the built layer.
+The content of the cached entries is a JSON object that contains the SHA256
+hash and size of the built layer.
The effect of this cache is that different instances of Nixery will not build,
hash and upload layers that have identical contents across different instances.
diff --git a/tools/nixery/docs/src/under-the-hood.md b/tools/nixery/docs/src/under-the-hood.md
index 6b5e5e9bbf21..b58a21d0d4ec 100644
--- a/tools/nixery/docs/src/under-the-hood.md
+++ b/tools/nixery/docs/src/under-the-hood.md
@@ -67,8 +67,7 @@ just ... hang, for a moment.
Nixery inspects the returned manifest and uploads each layer to the configured
[Google Cloud Storage][gcs] bucket. To avoid unnecessary uploading, it will
-first check whether layers are already present in the bucket and - just to be
-safe - compare their MD5-hashes against what was built.
+check whether layers are already present in the bucket.
## 4. The image manifest is sent back
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index f3342f9918f8..64cfed14399b 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -21,7 +21,6 @@ import (
"bufio"
"bytes"
"context"
- "crypto/md5"
"crypto/sha256"
"encoding/json"
"fmt"
@@ -108,7 +107,6 @@ type ImageResult struct {
SymlinkLayer struct {
Size int `json:"size"`
SHA256 string `json:"sha256"`
- MD5 string `json:"md5"`
Path string `json:"path"`
} `json:"symlinkLayer"`
}
@@ -328,8 +326,7 @@ func uploadHashLayer(ctx context.Context, s *State, key string, data io.Reader)
// algorithms and uploads to the bucket
sw := staging.NewWriter(ctx)
shasum := sha256.New()
- md5sum := md5.New()
- multi := io.MultiWriter(sw, shasum, md5sum)
+ multi := io.MultiWriter(sw, shasum)
size, err := io.Copy(multi, data)
if err != nil {
@@ -342,27 +339,24 @@ func uploadHashLayer(ctx context.Context, s *State, key string, data io.Reader)
return nil, err
}
- build := Build{
- SHA256: fmt.Sprintf("%x", shasum.Sum([]byte{})),
- MD5: fmt.Sprintf("%x", md5sum.Sum([]byte{})),
- }
+ sha256sum := fmt.Sprintf("%x", shasum.Sum([]byte{}))
// Hashes are now known and the object is in the bucket, what
// remains is to move it to the correct location and cache it.
- err = renameObject(ctx, s, "staging/"+key, "layers/"+build.SHA256)
+ err = renameObject(ctx, s, "staging/"+key, "layers/"+sha256sum)
if err != nil {
log.Printf("failed to move layer '%s' from staging: %s\n", key, err)
return nil, err
}
- cacheBuild(ctx, s, key, build)
-
- log.Printf("Uploaded layer sha256:%s (%v bytes written)", build.SHA256, size)
+ log.Printf("Uploaded layer sha256:%s (%v bytes written)", sha256sum, size)
- return &manifest.Entry{
- Digest: "sha256:" + build.SHA256,
+ entry := manifest.Entry{
+ Digest: "sha256:" + sha256sum,
Size: size,
- }, nil
+ }
+
+ return &entry, nil
}
func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, error) {
diff --git a/tools/nixery/server/manifest/manifest.go b/tools/nixery/server/manifest/manifest.go
index 61d280a7fbab..f777e3f585df 100644
--- a/tools/nixery/server/manifest/manifest.go
+++ b/tools/nixery/server/manifest/manifest.go
@@ -3,7 +3,6 @@
package manifest
import (
- "crypto/md5"
"crypto/sha256"
"encoding/json"
"fmt"
@@ -52,12 +51,11 @@ type imageConfig struct {
}
// ConfigLayer represents the configuration layer to be included in
-// the manifest, containing its JSON-serialised content and the SHA256
-// & MD5 hashes of its input.
+// the manifest, containing its JSON-serialised content and SHA256
+// hash.
type ConfigLayer struct {
Config []byte
SHA256 string
- MD5 string
}
// imageConfig creates an image configuration with the values set to
@@ -78,7 +76,6 @@ func configLayer(hashes []string) ConfigLayer {
return ConfigLayer{
Config: j,
SHA256: fmt.Sprintf("%x", sha256.Sum256(j)),
- MD5: fmt.Sprintf("%x", md5.Sum(j)),
}
}
--
cgit 1.4.1
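
With MD5 removed, `uploadHashLayer` boils down to streaming the layer through an `io.MultiWriter` so that the SHA256 digest is computed during the upload rather than in a second pass over the data. A self-contained sketch of the technique, with a `bytes.Buffer` standing in for the bucket writer:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"io"
	"strings"
)

func main() {
	// data stands in for a layer tarball, upload for the GCS object writer.
	data := strings.NewReader("pretend this is a layer tarball")
	var upload bytes.Buffer

	shasum := sha256.New()
	multi := io.MultiWriter(&upload, shasum)

	size, err := io.Copy(multi, data)
	if err != nil {
		panic(err)
	}

	fmt.Printf("uploaded %d bytes, sha256:%x\n", size, shasum.Sum(nil))
}
```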
From 313e5d08f12f4c8573e9347f7f04493ab99b5abf Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 12:12:06 +0100
Subject: refactor(builder): Streamline layer creation & reintroduce caching
The functions used for layer creation are now easier to follow and
have clear points at which the layer cache is checked and populated.
This relates to #50.
---
tools/nixery/server/builder/builder.go | 81 ++++++++++++++++++++++------------
1 file changed, 53 insertions(+), 28 deletions(-)
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 64cfed14399b..9f529b226918 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -214,14 +214,58 @@ func prepareImage(s *State, image *Image) (*ImageResult, error) {
// Groups layers and checks whether they are present in the cache
// already, otherwise calls out to Nix to assemble layers.
//
-// Returns information about all data layers that need to be included
-// in the manifest, as well as information about which layers need to
-// be uploaded (and from where).
-func prepareLayers(ctx context.Context, s *State, image *Image, graph *layers.RuntimeGraph) (map[string]string, error) {
- grouped := layers.Group(graph, &s.Pop, LayerBudget)
-
- // TODO(tazjin): Introduce caching strategy, for now this will
- // build all layers.
+// Newly built layers are uploaded to the bucket. Cache entries are
+// added only after successful uploads, which guarantees that entries
+// retrieved from the cache are present in the bucket.
+func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageResult) ([]manifest.Entry, error) {
+ grouped := layers.Group(&result.Graph, &s.Pop, LayerBudget)
+
+ var entries []manifest.Entry
+ var missing []layers.Layer
+
+ // Splits the layers into those which are already present in
+ // the cache, and those that are missing (i.e. need to be
+ // built by Nix).
+ for _, l := range grouped {
+ if entry, cached := layerFromCache(ctx, s, l.Hash()); cached {
+ entries = append(entries, *entry)
+ } else {
+ missing = append(missing, l)
+ }
+ }
+
+ built, err := buildLayers(s, image, missing)
+ if err != nil {
+ log.Printf("Failed to build missing layers: %s\n", err)
+ return nil, err
+ }
+
+ // Symlink layer (built in the first Nix build) needs to be
+ // included when hashing & uploading
+ built[result.SymlinkLayer.SHA256] = result.SymlinkLayer.Path
+
+ for key, path := range built {
+ f, err := os.Open(path)
+ if err != nil {
+ log.Printf("failed to open layer at '%s': %s\n", path, err)
+ return nil, err
+ }
+
+ entry, err := uploadHashLayer(ctx, s, key, f)
+ f.Close()
+ if err != nil {
+ return nil, err
+ }
+
+ entries = append(entries, *entry)
+ go cacheLayer(ctx, s, key, *entry)
+ }
+
+ return entries, nil
+}
+
+// Builds remaining layers (those not already cached) via Nix.
+func buildLayers(s *State, image *Image, grouped []layers.Layer) (map[string]string, error) {
srcType, srcArgs := s.Cfg.Pkgs.Render(image.Tag)
args := []string{
"--argstr", "srcType", srcType,
@@ -381,30 +425,11 @@ func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, erro
}, nil
}
- layerResult, err := prepareLayers(ctx, s, image, &imageResult.Graph)
+ layers, err := prepareLayers(ctx, s, image, imageResult)
if err != nil {
return nil, err
}
- layerResult[imageResult.SymlinkLayer.SHA256] = imageResult.SymlinkLayer.Path
-
- layers := []manifest.Entry{}
- for key, path := range layerResult {
- f, err := os.Open(path)
- if err != nil {
- log.Printf("failed to open layer at '%s': %s\n", path, err)
- return nil, err
- }
-
- entry, err := uploadHashLayer(ctx, s, key, f)
- f.Close()
- if err != nil {
- return nil, err
- }
-
- layers = append(layers, *entry)
- }
-
m, c := manifest.Manifest(layers)
if _, err = uploadHashLayer(ctx, s, c.SHA256, bytes.NewReader(c.Config)); err != nil {
log.Printf("failed to upload config for %s: %s\n", image.Name, err)
--
cgit 1.4.1
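
Stripped of the Nix and GCS specifics, the control flow established by this refactor looks like the sketch below (generic types and hypothetical helpers, not Nixery code). The important property is that the cache is written only after an upload succeeds, so cached entries always point at objects that exist in the bucket.

```go
package main

import "fmt"

type layer struct{ hash string }
type entry struct{ digest string }

// prepare splits grouped layers into cached and missing ones, builds and
// uploads the missing ones, and only then populates the cache.
func prepare(
	grouped []layer,
	fromCache func(string) (entry, bool),
	buildAndUpload func([]layer) (map[string]entry, error),
	cache func(string, entry),
) ([]entry, error) {
	var entries []entry
	var missing []layer

	for _, l := range grouped {
		if e, ok := fromCache(l.hash); ok {
			entries = append(entries, e)
		} else {
			missing = append(missing, l)
		}
	}

	built, err := buildAndUpload(missing)
	if err != nil {
		return nil, err
	}

	// Only successfully uploaded layers make it into the cache.
	for hash, e := range built {
		entries = append(entries, e)
		cache(hash, e)
	}

	return entries, nil
}

func main() {
	fromCache := func(string) (entry, bool) { return entry{}, false }
	buildAndUpload := func(ls []layer) (map[string]entry, error) {
		out := make(map[string]entry)
		for _, l := range ls {
			out[l.hash] = entry{digest: "sha256:" + l.hash}
		}
		return out, nil
	}
	cache := func(string, entry) {}

	entries, _ := prepare([]layer{{hash: "aaaa"}, {hash: "bbbb"}}, fromCache, buildAndUpload, cache)
	fmt.Println(entries)
}
```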
From 43a642435b653d04d730de50735d310f1f1083eb Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 12:49:26 +0100
Subject: feat(server): Reimplement local manifest cache backed by files
Implements a local manifest cache that uses the temporary directory to
cache manifest builds.
This is necessary due to the size of manifests: Keeping them entirely
in-memory would quickly balloon the memory usage of Nixery, unless
some mechanism for cache eviction is implemented.
---
tools/nixery/server/builder/builder.go | 11 ++++++
tools/nixery/server/builder/cache.go | 67 +++++++++++++++++++++++-----------
tools/nixery/server/builder/state.go | 24 ------------
tools/nixery/server/config/config.go | 6 +--
tools/nixery/server/main.go | 13 ++++++-
5 files changed, 70 insertions(+), 51 deletions(-)
delete mode 100644 tools/nixery/server/builder/state.go
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 9f529b226918..e622f815a423 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -34,6 +34,8 @@ import (
"sort"
"strings"
+ "cloud.google.com/go/storage"
+ "github.com/google/nixery/config"
"github.com/google/nixery/layers"
"github.com/google/nixery/manifest"
"golang.org/x/oauth2/google"
@@ -50,6 +52,15 @@ const gcsScope = "https://www.googleapis.com/auth/devstorage.read_write"
// HTTP client to use for direct calls to APIs that are not part of the SDK
var client = &http.Client{}
+// State holds the runtime state that is carried around in Nixery and
+// passed to builder functions.
+type State struct {
+ Bucket *storage.BucketHandle
+ Cache *LocalCache
+ Cfg config.Config
+ Pop layers.Popularity
+}
+
// Image represents the information necessary for building a container image.
// This can be either a list of package names (corresponding to keys in the
// nixpkgs set) or a Nix expression that results in a *list* of derivations.
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index ab0021f6d2ff..060ed9a84b13 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -20,6 +20,7 @@ import (
"io"
"io/ioutil"
"log"
+ "os"
"sync"
"github.com/google/nixery/manifest"
@@ -29,40 +30,64 @@ import (
// manifests and layer uploads.
type LocalCache struct {
// Manifest cache
- mmtx sync.RWMutex
- mcache map[string]string
+ mmtx sync.RWMutex
+ mdir string
// Layer cache
lmtx sync.RWMutex
lcache map[string]manifest.Entry
}
-func NewCache() LocalCache {
+// Creates an in-memory cache and ensures that the local file path for
+// manifest caching exists.
+func NewCache() (LocalCache, error) {
+ path := os.TempDir() + "/nixery"
+ err := os.MkdirAll(path, 0755)
+ if err != nil {
+ return LocalCache{}, err
+ }
+
return LocalCache{
- mcache: make(map[string]string),
+ mdir: path + "/",
lcache: make(map[string]manifest.Entry),
- }
+ }, nil
}
// Retrieve a cached manifest if the build is cacheable and it exists.
-func (c *LocalCache) manifestFromLocalCache(key string) (string, bool) {
+func (c *LocalCache) manifestFromLocalCache(key string) (json.RawMessage, bool) {
c.mmtx.RLock()
- path, ok := c.mcache[key]
- c.mmtx.RUnlock()
+ defer c.mmtx.RUnlock()
- if !ok {
- return "", false
+ f, err := os.Open(c.mdir + key)
+ if err != nil {
+ // TODO(tazjin): Once log levels are available, this
+ // might warrant a debug log.
+ return nil, false
}
+ defer f.Close()
- return path, true
+ m, err := ioutil.ReadAll(f)
+ if err != nil {
+ log.Printf("Failed to read manifest '%s' from local cache: %s\n", key, err)
+ return nil, false
+ }
+
+ return json.RawMessage(m), true
}
// Adds the result of a manifest build to the local cache, if the
// manifest is considered cacheable.
-func (c *LocalCache) localCacheManifest(key, path string) {
+//
+// Manifests can be quite large and are cached on disk instead of in
+// memory.
+func (c *LocalCache) localCacheManifest(key string, m json.RawMessage) {
c.mmtx.Lock()
- c.mcache[key] = path
- c.mmtx.Unlock()
+ defer c.mmtx.Unlock()
+
+ err := ioutil.WriteFile(c.mdir+key, []byte(m), 0644)
+ if err != nil {
+ log.Printf("Failed to locally cache manifest for '%s': %s\n", key, err)
+ }
}
// Retrieve a layer build from the local cache.
@@ -84,11 +109,9 @@ func (c *LocalCache) localCacheLayer(key string, e manifest.Entry) {
// Retrieve a manifest from the cache(s). First the local cache is
// checked, then the GCS-bucket cache.
func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessage, bool) {
- // path, cached := s.Cache.manifestFromLocalCache(key)
- // if cached {
- // return path, true
- // }
- // TODO: local cache?
+ if m, cached := s.Cache.manifestFromLocalCache(key); cached {
+ return m, true
+ }
obj := s.Bucket.Object("manifests/" + key)
@@ -110,15 +133,15 @@ func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessa
log.Printf("Failed to read cached manifest for '%s': %s\n", key, err)
}
- // TODO: locally cache manifest, but the cache needs to be changed
+ go s.Cache.localCacheManifest(key, m)
log.Printf("Retrieved manifest for sha1:%s from GCS\n", key)
+
return json.RawMessage(m), true
}
// Add a manifest to the bucket & local caches
func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage) {
- // go s.Cache.localCacheManifest(key, path)
- // TODO local cache
+ go s.Cache.localCacheManifest(key, m)
obj := s.Bucket.Object("manifests/" + key)
w := obj.NewWriter(ctx)
diff --git a/tools/nixery/server/builder/state.go b/tools/nixery/server/builder/state.go
deleted file mode 100644
index 1c7f58821b6b..000000000000
--- a/tools/nixery/server/builder/state.go
+++ /dev/null
@@ -1,24 +0,0 @@
-package builder
-
-import (
- "cloud.google.com/go/storage"
- "github.com/google/nixery/config"
- "github.com/google/nixery/layers"
-)
-
-// State holds the runtime state that is carried around in Nixery and
-// passed to builder functions.
-type State struct {
- Bucket *storage.BucketHandle
- Cache LocalCache
- Cfg config.Config
- Pop layers.Popularity
-}
-
-func NewState(bucket *storage.BucketHandle, cfg config.Config) State {
- return State{
- Bucket: bucket,
- Cfg: cfg,
- Cache: NewCache(),
- }
-}
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index 30f727db1112..84c2b89d13d0 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -71,13 +71,13 @@ type Config struct {
PopUrl string // URL to the Nix package popularity count
}
-func FromEnv() (*Config, error) {
+func FromEnv() (Config, error) {
pkgs, err := pkgSourceFromEnv()
if err != nil {
- return nil, err
+ return Config{}, err
}
- return &Config{
+ return Config{
Bucket: getConfig("BUCKET", "GCS bucket for layer storage", ""),
Port: getConfig("PORT", "HTTP port", ""),
Pkgs: pkgs,
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index 1ff3a471f94f..aeb70163da6a 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -194,8 +194,17 @@ func main() {
}
ctx := context.Background()
- bucket := prepareBucket(ctx, cfg)
- state := builder.NewState(bucket, *cfg)
+ bucket := prepareBucket(ctx, &cfg)
+ cache, err := builder.NewCache()
+ if err != nil {
+ log.Fatalln("Failed to instantiate build cache", err)
+ }
+
+ state := builder.State{
+ Bucket: bucket,
+ Cache: &cache,
+ Cfg: cfg,
+ }
log.Printf("Starting Nixery on port %s\n", cfg.Port)
--
cgit 1.4.1
From feba42e40933fe932b1ca330d2c919ae018a9a7f Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 13:02:48 +0100
Subject: feat(server): Fetch popularity data on launch
The last missing puzzle piece for #50!
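The popularity data is expected to be a flat JSON object mapping
package names to reference counts, matching the `layers.Popularity`
type (`map[string]int`). An illustrative payload (names and counts
made up):

    {
      "glibc-2.27": 279059,
      "openssl-1.0.2q": 6195,
      "curl-7.64.0": 820
    }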
---
tools/nixery/server/main.go | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index aeb70163da6a..b87af650650a 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -29,6 +29,7 @@ import (
"context"
"encoding/json"
"fmt"
+ "io/ioutil"
"log"
"net/http"
"regexp"
@@ -37,6 +38,7 @@ import (
"cloud.google.com/go/storage"
"github.com/google/nixery/builder"
"github.com/google/nixery/config"
+ "github.com/google/nixery/layers"
)
// ManifestMediaType is the Content-Type used for the manifest itself. This
@@ -96,6 +98,32 @@ func prepareBucket(ctx context.Context, cfg *config.Config) *storage.BucketHandl
return bkt
}
+// Downloads the popularity information for the package set from the
+// URL specified in Nixery's configuration.
+func downloadPopularity(url string) (layers.Popularity, error) {
+ resp, err := http.Get(url)
+ if err != nil {
+ return nil, err
+ }
+
+ if resp.StatusCode != 200 {
+ return nil, fmt.Errorf("popularity download from '%s' returned status: %s\n", url, resp.Status)
+ }
+
+ j, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return nil, err
+ }
+
+ var pop layers.Popularity
+ err = json.Unmarshal(j, &pop)
+ if err != nil {
+ return nil, err
+ }
+
+ return pop, nil
+}
+
// Error format corresponding to the registry protocol V2 specification. This
// allows feeding back errors to clients in a way that can be presented to
// users.
@@ -200,10 +228,19 @@ func main() {
log.Fatalln("Failed to instantiate build cache", err)
}
+ var pop layers.Popularity
+ if cfg.PopUrl != "" {
+ pop, err = downloadPopularity(cfg.PopUrl)
+ if err != nil {
+ log.Fatalln("Failed to fetch popularity information", err)
+ }
+ }
+
state := builder.State{
Bucket: bucket,
Cache: &cache,
Cfg: cfg,
+ Pop: pop,
}
log.Printf("Starting Nixery on port %s\n", cfg.Port)
--
cgit 1.4.1
From 1124b8c236a2f01a1eec2420131627d29f678c9d Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 13:08:23 +0100
Subject: fix(server): Do not invoke layer build if no layers are missing
Previously this invoked a Nix derivation that spent a few seconds
just producing an empty JSON object ...
---
tools/nixery/server/builder/builder.go | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index e622f815a423..81dbd26db188 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -277,6 +277,11 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
// Builds remaining layers (those not already cached) via Nix.
func buildLayers(s *State, image *Image, grouped []layers.Layer) (map[string]string, error) {
+ result := make(map[string]string)
+ if len(grouped) == 0 {
+ return result, nil
+ }
+
srcType, srcArgs := s.Cfg.Pkgs.Render(image.Tag)
args := []string{
"--argstr", "srcType", srcType,
@@ -312,7 +317,6 @@ func buildLayers(s *State, image *Image, grouped []layers.Layer) (map[string]str
}
log.Printf("Finished layer preparation for '%s' via Nix\n", image.Name)
- result := make(map[string]string)
err = json.Unmarshal(output, &result)
if err != nil {
return nil, err
--
cgit 1.4.1
From 6b06fe27be2dcf0741ff7981838fbb3a022181b7 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 18:18:52 +0100
Subject: feat(server): Implement creation of layer tarballs in the server
This will create, upload and hash the layer tarballs in one disk read.
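At the core of this is an io.MultiWriter that feeds the tarball
stream into the SHA256 hash, a byte counter and the GCS upload writer
in a single pass. A minimal, self-contained sketch of the pattern
(this is not the exact Nixery code):

    package main

    // Illustrative sketch of the hash-while-uploading pattern; not
    // the actual Nixery code.

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "io/ioutil"
    )

    // byteCounter records how many bytes pass through it, since the
    // streaming approach leaves no other place to read the final size.
    type byteCounter struct{ n int64 }

    func (b *byteCounter) Write(p []byte) (int, error) {
        b.n += int64(len(p))
        return len(p), nil
    }

    // hashWhileWriting streams the output of writeLayer to dest while
    // simultaneously hashing and counting the bytes.
    func hashWhileWriting(dest io.Writer, writeLayer func(io.Writer) error) (string, int64, error) {
        hash := sha256.New()
        counter := &byteCounter{}
        multi := io.MultiWriter(dest, hash, counter)

        if err := writeLayer(multi); err != nil {
            return "", 0, err
        }

        return fmt.Sprintf("%x", hash.Sum(nil)), counter.n, nil
    }

    func main() {
        // In Nixery 'dest' would be the GCS object writer and
        // 'writeLayer' the tarball serialisation.
        sum, size, err := hashWhileWriting(ioutil.Discard, func(w io.Writer) error {
            _, err := w.Write([]byte("layer contents"))
            return err
        })
        if err != nil {
            fmt.Println(err)
            return
        }

        fmt.Printf("sha256:%s (%d bytes)\n", sum, size)
    }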
---
tools/nixery/server/builder/archive.go | 92 +++++++++++++++++++++
tools/nixery/server/builder/builder.go | 145 +++++++++++++++------------------
2 files changed, 158 insertions(+), 79 deletions(-)
create mode 100644 tools/nixery/server/builder/archive.go
(limited to 'tools')
diff --git a/tools/nixery/server/builder/archive.go b/tools/nixery/server/builder/archive.go
new file mode 100644
index 000000000000..43fd197083e1
--- /dev/null
+++ b/tools/nixery/server/builder/archive.go
@@ -0,0 +1,92 @@
+package builder
+
+// This file implements logic for walking through a directory and creating a
+// tarball of it.
+//
+// The tarball is written straight to the supplied reader, which makes it
+// possible to create an image layer from the specified store paths, hash it and
+// upload it in one reading pass.
+
+import (
+ "archive/tar"
+ "io"
+ "log"
+ "os"
+ "path/filepath"
+
+ "github.com/google/nixery/layers"
+)
+
+// Create a new tarball from each of the paths in the list and write the tarball
+// to the supplied writer.
+func tarStorePaths(l *layers.Layer, w io.Writer) error {
+ t := tar.NewWriter(w)
+
+ for _, path := range l.Contents {
+ err := filepath.Walk(path, tarStorePath(t))
+ if err != nil {
+ return err
+ }
+ }
+
+ if err := t.Close(); err != nil {
+ return err
+ }
+
+ log.Printf("Created layer for '%s'\n", l.Hash())
+ return nil
+}
+
+func tarStorePath(w *tar.Writer) filepath.WalkFunc {
+ return func(path string, info os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+
+ // If the entry is not a symlink or regular file, skip it.
+ if info.Mode()&os.ModeSymlink == 0 && !info.Mode().IsRegular() {
+ return nil
+ }
+
+ // the symlink target is read if this entry is a symlink, as it
+ // is required when creating the file header
+ var link string
+ if info.Mode()&os.ModeSymlink != 0 {
+ link, err = os.Readlink(path)
+ if err != nil {
+ return err
+ }
+ }
+
+ header, err := tar.FileInfoHeader(info, link)
+ if err != nil {
+ return err
+ }
+
+ // The name retrieved from os.FileInfo only contains the file's
+ // basename, but the full path is required within the layer
+ // tarball.
+ header.Name = path
+ if err = w.WriteHeader(header); err != nil {
+ return err
+ }
+
+ // At this point, return if no file content needs to be written
+ if !info.Mode().IsRegular() {
+ return nil
+ }
+
+ f, err := os.Open(path)
+ if err != nil {
+ return err
+ }
+
+ if _, err := io.Copy(w, f); err != nil {
+ return err
+ }
+
+ f.Close()
+
+ return nil
+ }
+}
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 81dbd26db188..d8cbbb8f21c9 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -232,97 +232,53 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
grouped := layers.Group(&result.Graph, &s.Pop, LayerBudget)
var entries []manifest.Entry
- var missing []layers.Layer
// Splits the layers into those which are already present in
- // the cache, and those that are missing (i.e. need to be
- // built by Nix).
+ // the cache, and those that are missing.
+ //
+ // Missing layers are built and uploaded to the storage
+ // bucket.
for _, l := range grouped {
if entry, cached := layerFromCache(ctx, s, l.Hash()); cached {
entries = append(entries, *entry)
} else {
- missing = append(missing, l)
- }
- }
-
- built, err := buildLayers(s, image, missing)
- if err != nil {
- log.Printf("Failed to build missing layers: %s\n", err)
- return nil, err
- }
+ lw := func(w io.Writer) error {
+ return tarStorePaths(&l, w)
+ }
- // Symlink layer (built in the first Nix build) needs to be
- // included when hashing & uploading
- built[result.SymlinkLayer.SHA256] = result.SymlinkLayer.Path
-
- for key, path := range built {
- f, err := os.Open(path)
- if err != nil {
- log.Printf("failed to open layer at '%s': %s\n", path, err)
- return nil, err
- }
+ entry, err := uploadHashLayer(ctx, s, l.Hash(), lw)
+ if err != nil {
+ return nil, err
+ }
- entry, err := uploadHashLayer(ctx, s, key, f)
- f.Close()
- if err != nil {
- return nil, err
+ go cacheLayer(ctx, s, l.Hash(), *entry)
+ entries = append(entries, *entry)
}
-
- entries = append(entries, *entry)
- go cacheLayer(ctx, s, key, *entry)
- }
-
- return entries, nil
-}
-
-// Builds remaining layers (those not already cached) via Nix.
-func buildLayers(s *State, image *Image, grouped []layers.Layer) (map[string]string, error) {
- result := make(map[string]string)
- if len(grouped) == 0 {
- return result, nil
- }
-
- srcType, srcArgs := s.Cfg.Pkgs.Render(image.Tag)
- args := []string{
- "--argstr", "srcType", srcType,
- "--argstr", "srcArgs", srcArgs,
}
- layerInput := make(map[string][]string)
- allPaths := []string{}
- for _, l := range grouped {
- layerInput[l.Hash()] = l.Contents
-
- // The derivation responsible for building layers does not
- // have the derivations that resulted in the required store
- // paths in its context, which means that its sandbox will not
- // contain the necessary paths if sandboxing is enabled.
- //
- // To work around this, all required store paths are added as
- // 'extra-sandbox-paths' parameters.
- for _, p := range l.Contents {
- allPaths = append(allPaths, p)
+ // Symlink layer (built in the first Nix build) needs to be
+ // included here manually:
+ slkey := result.SymlinkLayer.SHA256
+ entry, err := uploadHashLayer(ctx, s, slkey, func(w io.Writer) error {
+ f, err := os.Open(result.SymlinkLayer.Path)
+ if err != nil {
+ log.Printf("failed to upload symlink layer '%s': %s\n", slkey, err)
+ return err
}
- }
-
- args = append(args, "--option", "extra-sandbox-paths", strings.Join(allPaths, " "))
+ defer f.Close()
- j, _ := json.Marshal(layerInput)
- args = append(args, "--argstr", "layers", string(j))
+ _, err = io.Copy(w, f)
+ return err
+ })
- output, err := callNix("nixery-build-layers", image.Name, args)
if err != nil {
- log.Printf("failed to call nixery-build-layers: %s\n", err)
return nil, err
}
- log.Printf("Finished layer preparation for '%s' via Nix\n", image.Name)
- err = json.Unmarshal(output, &result)
- if err != nil {
- return nil, err
- }
+ go cacheLayer(ctx, s, slkey, *entry)
+ entries = append(entries, *entry)
- return result, nil
+ return entries, nil
}
// renameObject renames an object in the specified Cloud Storage
@@ -368,7 +324,30 @@ func renameObject(ctx context.Context, s *State, old, new string) error {
return nil
}
-// Upload a to the storage bucket, while hashing it at the same time.
+// layerWriter is the type for functions that can write a layer to the
+// multiwriter used for uploading & hashing.
+//
+// This type exists to avoid duplication between the handling of
+// symlink layers and store path layers.
+type layerWriter func(w io.Writer) error
+
+// byteCounter is a special io.Writer that counts all bytes written to
+// it and does nothing else.
+//
+// This is required because the ad-hoc writing of tarballs leaves no
+// single place to count the final tarball size otherwise.
+type byteCounter struct {
+ count int64
+}
+
+func (b *byteCounter) Write(p []byte) (n int, err error) {
+ b.count += int64(len(p))
+ return len(p), nil
+}
+
+// Upload a layer tarball to the storage bucket, while hashing it at
+// the same time. The supplied function is expected to provide the
+// layer data to the writer.
//
// The initial upload is performed in a 'staging' folder, as the
// SHA256-hash is not yet available when the upload is initiated.
@@ -378,24 +357,24 @@ func renameObject(ctx context.Context, s *State, old, new string) error {
//
// The return value is the layer's SHA256 hash, which is used in the
// image manifest.
-func uploadHashLayer(ctx context.Context, s *State, key string, data io.Reader) (*manifest.Entry, error) {
+func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter) (*manifest.Entry, error) {
staging := s.Bucket.Object("staging/" + key)
// Sets up a "multiwriter" that simultaneously runs both hash
// algorithms and uploads to the bucket
sw := staging.NewWriter(ctx)
shasum := sha256.New()
- multi := io.MultiWriter(sw, shasum)
+ counter := &byteCounter{}
+ multi := io.MultiWriter(sw, shasum, counter)
- size, err := io.Copy(multi, data)
+ err := lw(multi)
if err != nil {
- log.Printf("failed to upload layer '%s' to staging: %s\n", key, err)
+ log.Printf("failed to create and upload layer '%s': %s\n", key, err)
return nil, err
}
if err = sw.Close(); err != nil {
log.Printf("failed to upload layer '%s' to staging: %s\n", key, err)
- return nil, err
}
sha256sum := fmt.Sprintf("%x", shasum.Sum([]byte{}))
@@ -408,6 +387,7 @@ func uploadHashLayer(ctx context.Context, s *State, key string, data io.Reader)
return nil, err
}
+ size := counter.count
log.Printf("Uploaded layer sha256:%s (%v bytes written)", sha256sum, size)
entry := manifest.Entry{
@@ -446,7 +426,14 @@ func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, erro
}
m, c := manifest.Manifest(layers)
- if _, err = uploadHashLayer(ctx, s, c.SHA256, bytes.NewReader(c.Config)); err != nil {
+
+ lw := func(w io.Writer) error {
+ r := bytes.NewReader(c.Config)
+ _, err := io.Copy(w, r)
+ return err
+ }
+
+ if _, err = uploadHashLayer(ctx, s, c.SHA256, lw); err != nil {
log.Printf("failed to upload config for %s: %s\n", image.Name, err)
return nil, err
}
--
cgit 1.4.1
From 0d820423e973727ddfc4b461a1063f719873743c Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 22:16:37 +0100
Subject: chore(build-image): Remove nixery-build-layers
This functionality has been rolled into the server component and is no
longer required.
---
tools/nixery/build-image/build-layers.nix | 47 -------------------------------
tools/nixery/build-image/default.nix | 24 +++++-----------
tools/nixery/default.nix | 9 ++----
3 files changed, 10 insertions(+), 70 deletions(-)
delete mode 100644 tools/nixery/build-image/build-layers.nix
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-layers.nix b/tools/nixery/build-image/build-layers.nix
deleted file mode 100644
index 9a8742f13f73..000000000000
--- a/tools/nixery/build-image/build-layers.nix
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright 2019 Google LLC
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# https://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-{
- # Description of the package set to be used (will be loaded by load-pkgs.nix)
- srcType ? "nixpkgs",
- srcArgs ? "nixos-19.03",
- importArgs ? { },
- # Path to load-pkgs.nix
- loadPkgs ? ./load-pkgs.nix,
- # Layers to assemble into tarballs
- layers ? "{}"
-}:
-
-let
- inherit (builtins) fromJSON mapAttrs toJSON;
- inherit (pkgs) lib runCommand writeText;
-
- pkgs = import loadPkgs { inherit srcType srcArgs importArgs; };
-
- # Given a list of store paths, create an image layer tarball with
- # their contents.
- pathsToLayer = paths: runCommand "layer.tar" {
- } ''
- tar --no-recursion -Prf "$out" \
- --mtime="@$SOURCE_DATE_EPOCH" \
- --owner=0 --group=0 /nix /nix/store
-
- tar -Prpf "$out" --hard-dereference --sort=name \
- --mtime="@$SOURCE_DATE_EPOCH" \
- --owner=0 --group=0 ${lib.concatStringsSep " " paths}
- '';
-
-
- layerTarballs = mapAttrs (_: pathsToLayer ) (fromJSON layers);
-in writeText "layer-tarballs.json" (toJSON layerTarballs)
diff --git a/tools/nixery/build-image/default.nix b/tools/nixery/build-image/default.nix
index 0800eb95987f..a61ac06bdd92 100644
--- a/tools/nixery/build-image/default.nix
+++ b/tools/nixery/build-image/default.nix
@@ -20,20 +20,10 @@
{ pkgs ? import <nixpkgs> {} }:
-{
- build-image = pkgs.writeShellScriptBin "nixery-build-image" ''
- exec ${pkgs.nix}/bin/nix-build \
- --show-trace \
- --no-out-link "$@" \
- --argstr loadPkgs ${./load-pkgs.nix} \
- ${./build-image.nix}
- '';
-
- build-layers = pkgs.writeShellScriptBin "nixery-build-layers" ''
- exec ${pkgs.nix}/bin/nix-build \
- --show-trace \
- --no-out-link "$@" \
- --argstr loadPkgs ${./load-pkgs.nix} \
- ${./build-layers.nix}
- '';
-}
+pkgs.writeShellScriptBin "nixery-build-image" ''
+ exec ${pkgs.nix}/bin/nix-build \
+ --show-trace \
+ --no-out-link "$@" \
+ --argstr loadPkgs ${./load-pkgs.nix} \
+ ${./build-image.nix}
+''
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index af506eea32d2..b194079b9a29 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -18,8 +18,7 @@
with pkgs;
-let builders = import ./build-image { inherit pkgs; };
-in rec {
+rec {
# Go implementation of the Nixery server which implements the
# container registry interface.
#
@@ -29,8 +28,7 @@ in rec {
nixery-server = callPackage ./server { };
# Implementation of the Nix image building logic
- nixery-build-image = builders.build-image;
- nixery-build-layers = builders.build-layers;
+ nixery-build-image = import ./build-image { inherit pkgs; };
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
@@ -44,7 +42,7 @@ in rec {
# are installing Nixery directly.
nixery-bin = writeShellScriptBin "nixery" ''
export WEB_DIR="${nixery-book}"
- export PATH="${nixery-build-layers}/bin:${nixery-build-image}/bin:$PATH"
+ export PATH="${nixery-build-image}/bin:$PATH"
exec ${nixery-server}/bin/nixery
'';
@@ -96,7 +94,6 @@ in rec {
iana-etc
nix
nixery-build-image
- nixery-build-layers
nixery-launch-script
openssh
zlib
--
cgit 1.4.1
From 48a5ecda97e4b2ea9faa2d3031376078ccc301be Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 20:18:40 +0100
Subject: feat(server): Order layers in image manifest based on merge rating
Image layers in manifests are now sorted in a stable (descending)
order based on their merge rating, meaning that layers more likely to
be shared between images come first.
The reason for this change is Docker's handling of image layers on
overlay2: images are condensed into a single representation on disk
after downloading.
Because of this, Docker identifies layers by what it calls the
'ChainID', which depends on the order of all preceding layers, and
will constantly re-download layers that are applied in a different
order in different images (layer order matters in imperatively
created images).
Sorting the layers this way raises the likelihood of a long chain of
matching layers at the beginning of an image.
This relates to #39.
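To illustrate: if two images end up with layer orders [A, B, C] and
[B, A, C], none of their ChainIDs match and Docker treats every layer
as new, even though the layers themselves are identical. Sorting both
images by the same global merge rating instead yields e.g. [A, B, C]
and [A, B, D], which share the ChainIDs for the [A, B] prefix.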
---
tools/nixery/server/builder/builder.go | 1 +
tools/nixery/server/layers/grouping.go | 8 ++++----
tools/nixery/server/manifest/manifest.go | 15 +++++++++++++++
3 files changed, 20 insertions(+), 4 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index d8cbbb8f21c9..614291e660c5 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -250,6 +250,7 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
if err != nil {
return nil, err
}
+ entry.MergeRating = l.MergeRating
go cacheLayer(ctx, s, l.Hash(), *entry)
entries = append(entries, *entry)
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
index 07a9e0e230a5..9992cd3c13d6 100644
--- a/tools/nixery/server/layers/grouping.go
+++ b/tools/nixery/server/layers/grouping.go
@@ -141,7 +141,7 @@ type Popularity = map[string]int
// build for the container image.
type Layer struct {
Contents []string `json:"contents"`
- mergeRating uint64
+ MergeRating uint64
}
// Hash the contents of a layer to create a deterministic identifier that can be
@@ -153,7 +153,7 @@ func (l *Layer) Hash() string {
func (a Layer) merge(b Layer) Layer {
a.Contents = append(a.Contents, b.Contents...)
- a.mergeRating += b.mergeRating
+ a.MergeRating += b.MergeRating
return a
}
@@ -291,7 +291,7 @@ func groupLayer(dt *flow.DominatorTree, root *closure) Layer {
// both the size and the popularity when making merge
// decisions, but there might be a smarter way to do
// it than a plain multiplication.
- mergeRating: uint64(root.Popularity) * size,
+ MergeRating: uint64(root.Popularity) * size,
}
}
@@ -309,7 +309,7 @@ func dominate(budget int, graph *simple.DirectedGraph) []Layer {
}
sort.Slice(layers, func(i, j int) bool {
- return layers[i].mergeRating < layers[j].mergeRating
+ return layers[i].MergeRating < layers[j].MergeRating
})
if len(layers) > budget {
diff --git a/tools/nixery/server/manifest/manifest.go b/tools/nixery/server/manifest/manifest.go
index f777e3f585df..2f236178b65f 100644
--- a/tools/nixery/server/manifest/manifest.go
+++ b/tools/nixery/server/manifest/manifest.go
@@ -6,6 +6,7 @@ import (
"crypto/sha256"
"encoding/json"
"fmt"
+ "sort"
)
const (
@@ -27,6 +28,10 @@ type Entry struct {
MediaType string `json:"mediaType,omitempty"`
Size int64 `json:"size"`
Digest string `json:"digest"`
+
+ // This field is internal to Nixery and not part of the
+ // serialised entry.
+ MergeRating uint64 `json:"-"`
}
type manifest struct {
@@ -85,6 +90,16 @@ func configLayer(hashes []string) ConfigLayer {
//
// Callers do not need to set the media type for the layer entries.
func Manifest(layers []Entry) (json.RawMessage, ConfigLayer) {
+ // Sort layers by their merge rating, from highest to lowest.
+ // This makes it likely for a contiguous chain of shared image
+ // layers to appear at the beginning of a layer.
+ //
+ // Due to moby/moby#38446 Docker considers the order of layers
+ // when deciding which layers to download again.
+ sort.Slice(layers, func(i, j int) bool {
+ return layers[i].MergeRating > layers[j].MergeRating
+ })
+
hashes := make([]string, len(layers))
for i, l := range layers {
l.MediaType = "application/vnd.docker.image.rootfs.diff.tar"
--
cgit 1.4.1
From 9bb6d0ae255c1340fe16687d740fad948e6a9335 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 22:13:13 +0100
Subject: fix(server): Ensure build cache objects are written to GCS
Cache writes might not be flushed without this call.
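The GCS client buffers writes and only finalises the object upload
when the writer is closed, which is also where errors surface. A
minimal sketch of the pattern (bucket and object names are made up,
this is not the Nixery code):

    package main

    import (
        "context"
        "fmt"

        "cloud.google.com/go/storage"
    )

    // Illustrative sketch only; bucket and object names are made up.
    func writeObject(ctx context.Context, bucket *storage.BucketHandle, name string, data []byte) error {
        w := bucket.Object(name).NewWriter(ctx)

        if _, err := w.Write(data); err != nil {
            return err
        }

        // The upload is only committed once Close returns
        // successfully; dropping this check can silently lose the
        // write.
        return w.Close()
    }

    func main() {
        ctx := context.Background()

        client, err := storage.NewClient(ctx)
        if err != nil {
            fmt.Println(err)
            return
        }

        bucket := client.Bucket("example-bucket")
        if err := writeObject(ctx, bucket, "manifests/example", []byte("{}")); err != nil {
            fmt.Println(err)
        }
    }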
---
tools/nixery/server/builder/cache.go | 5 +++++
1 file changed, 5 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 060ed9a84b13..b3b9dffab7d3 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -213,4 +213,9 @@ func cacheLayer(ctx context.Context, s *State, key string, entry manifest.Entry)
log.Printf("failed to cache build '%s': %s\n", key, err)
return
}
+
+ if err = w.Close(); err != nil {
+ log.Printf("failed to cache build '%s': %s\n", key, err)
+ return
+ }
}
--
cgit 1.4.1
From d9b329ef59e35ae6070eae867cf06a5230ae3d51 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 3 Oct 2019 22:13:40 +0100
Subject: refactor(server): Always include 'cacert' & 'iana-etc'
These two packages almost always end up being required by programs,
but users don't necessarily think to include them.
They will now always be added and their popularity is artificially
inflated to ensure they end up at the top of the layer list.
---
tools/nixery/server/builder/builder.go | 5 +++--
tools/nixery/server/layers/grouping.go | 24 ++++++++++++++++--------
2 files changed, 19 insertions(+), 10 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 614291e660c5..7f391838f604 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -87,7 +87,7 @@ type BuildResult struct {
// be used to invoke Nix.
//
// It will expand convenience names under the hood (see the `convenienceNames`
-// function below).
+// function below) and append packages that are always included (cacert, iana-etc).
//
// Once assembled the image structure uses a sorted representation of
// the name. This is to avoid unnecessarily cache-busting images if
@@ -95,6 +95,7 @@ type BuildResult struct {
func ImageFromName(name string, tag string) Image {
pkgs := strings.Split(name, "/")
expanded := convenienceNames(pkgs)
+ expanded = append(expanded, "cacert", "iana-etc")
sort.Strings(pkgs)
sort.Strings(expanded)
@@ -131,7 +132,7 @@ type ImageResult struct {
//
// * `shell`: Includes bash, coreutils and other common command-line tools
func convenienceNames(packages []string) []string {
- shellPackages := []string{"bashInteractive", "cacert", "coreutils", "iana-etc", "moreutils", "nano"}
+ shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
if packages[0] == "shell" {
return append(packages[1:], shellPackages...)
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
index 9992cd3c13d6..9dbd5e88ce56 100644
--- a/tools/nixery/server/layers/grouping.go
+++ b/tools/nixery/server/layers/grouping.go
@@ -186,13 +186,11 @@ func (c *closure) bigOrPopular() bool {
return true
}
- // The threshold value used here is currently roughly the
- // minimum number of references that only 1% of packages in
- // the entire package set have.
- //
- // TODO(tazjin): Do this more elegantly by calculating
- // percentiles for each package and using those instead.
- if c.Popularity >= 1000 {
+ // Threshold value is picked arbitrarily right now. The reason
+ // for this is that some packages (such as `cacert`) have very
+ // few direct dependencies, but are required by pretty much
+ // everything.
+ if c.Popularity >= 100 {
return true
}
@@ -241,7 +239,17 @@ func buildGraph(refs *RuntimeGraph, pop *Popularity) *simple.DirectedGraph {
Refs: c.Refs,
}
- if p, ok := (*pop)[node.DOTID()]; ok {
+ // The packages `nss-cacert` and `iana-etc` are added
+ // by Nixery to *every single image* and should have a
+ // very high popularity.
+ //
+ // Other popularity values are populated from the data
+ // set assembled by Nixery's popcount.
+ id := node.DOTID()
+ if strings.HasPrefix(id, "nss-cacert") || strings.HasPrefix(id, "iana-etc") {
+ // glibc has ~300k references, these packages need *more*
+ node.Popularity = 500000
+ } else if p, ok := (*pop)[id]; ok {
node.Popularity = p
} else {
node.Popularity = 1
--
cgit 1.4.1
From f4bf3518f63501ddff42592b255fd5feaf846863 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 4 Oct 2019 22:17:11 +0100
Subject: refactor(server): Replace log calls with logrus
This introduces a structured logging library that can be used (next
step) to attach additional metadata to log entries.
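For reference, logrus keeps the familiar printf-style calls working
while also allowing structured fields to be attached to entries; a
minimal sketch (field values are illustrative):

    package main

    import (
        log "github.com/sirupsen/logrus"
    )

    func main() {
        // Plain printf-style calls keep working unchanged.
        log.Printf("Starting Nixery on port %s\n", "8080")

        // Structured fields attach machine-readable metadata to an
        // entry (illustrative values).
        log.WithFields(log.Fields{
            "image": "example/image",
            "tag":   "latest",
        }).Info("invoked Nix build")
    }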
---
tools/nixery/server/builder/archive.go | 2 +-
tools/nixery/server/builder/builder.go | 2 +-
tools/nixery/server/builder/cache.go | 2 +-
tools/nixery/server/config/config.go | 2 +-
tools/nixery/server/config/pkgsource.go | 3 ++-
tools/nixery/server/go-deps.nix | 9 +++++++++
tools/nixery/server/layers/grouping.go | 2 +-
tools/nixery/server/main.go | 2 +-
8 files changed, 17 insertions(+), 7 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/archive.go b/tools/nixery/server/builder/archive.go
index 43fd197083e1..6d5033f13182 100644
--- a/tools/nixery/server/builder/archive.go
+++ b/tools/nixery/server/builder/archive.go
@@ -10,11 +10,11 @@ package builder
import (
"archive/tar"
"io"
- "log"
"os"
"path/filepath"
"github.com/google/nixery/layers"
+ log "github.com/sirupsen/logrus"
)
// Create a new tarball from each of the paths in the list and write the tarball
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 7f391838f604..3f085c565554 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -26,7 +26,6 @@ import (
"fmt"
"io"
"io/ioutil"
- "log"
"net/http"
"net/url"
"os"
@@ -38,6 +37,7 @@ import (
"github.com/google/nixery/config"
"github.com/google/nixery/layers"
"github.com/google/nixery/manifest"
+ log "github.com/sirupsen/logrus"
"golang.org/x/oauth2/google"
)
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index b3b9dffab7d3..4a060ba5ea17 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -19,11 +19,11 @@ import (
"encoding/json"
"io"
"io/ioutil"
- "log"
"os"
"sync"
"github.com/google/nixery/manifest"
+ log "github.com/sirupsen/logrus"
)
// LocalCache implements the structure used for local caching of
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index 84c2b89d13d0..abc067855a88 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -19,10 +19,10 @@ package config
import (
"io/ioutil"
- "log"
"os"
"cloud.google.com/go/storage"
+ log "github.com/sirupsen/logrus"
)
// Load (optional) GCS bucket signing data from the GCS_SIGNING_KEY and
diff --git a/tools/nixery/server/config/pkgsource.go b/tools/nixery/server/config/pkgsource.go
index 61bea33dfe62..98719ecceabb 100644
--- a/tools/nixery/server/config/pkgsource.go
+++ b/tools/nixery/server/config/pkgsource.go
@@ -17,10 +17,11 @@ import (
"crypto/sha1"
"encoding/json"
"fmt"
- "log"
"os"
"regexp"
"strings"
+
+ log "github.com/sirupsen/logrus"
)
// PkgSource represents the source from which the Nix package set used
diff --git a/tools/nixery/server/go-deps.nix b/tools/nixery/server/go-deps.nix
index 7c40c6dde0f1..847b44dce63c 100644
--- a/tools/nixery/server/go-deps.nix
+++ b/tools/nixery/server/go-deps.nix
@@ -117,4 +117,13 @@
sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
};
}
+ {
+ goPackagePath = "github.com/sirupsen/logrus";
+ fetch = {
+ type = "git";
+ url = "https://github.com/sirupsen/logrus";
+ rev = "de736cf91b921d56253b4010270681d33fdf7cb5";
+ sha256 = "1qixss8m5xy7pzbf0qz2k3shjw0asklm9sj6zyczp7mryrari0aj";
+ };
+ }
]
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
index 9dbd5e88ce56..95198c90d4b3 100644
--- a/tools/nixery/server/layers/grouping.go
+++ b/tools/nixery/server/layers/grouping.go
@@ -105,11 +105,11 @@ package layers
import (
"crypto/sha1"
"fmt"
- "log"
"regexp"
"sort"
"strings"
+ log "github.com/sirupsen/logrus"
"gonum.org/v1/gonum/graph/flow"
"gonum.org/v1/gonum/graph/simple"
)
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index b87af650650a..bad4c190c680 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -30,7 +30,6 @@ import (
"encoding/json"
"fmt"
"io/ioutil"
- "log"
"net/http"
"regexp"
"time"
@@ -39,6 +38,7 @@ import (
"github.com/google/nixery/builder"
"github.com/google/nixery/config"
"github.com/google/nixery/layers"
+ log "github.com/sirupsen/logrus"
)
// ManifestMediaType is the Content-Type used for the manifest itself. This
--
cgit 1.4.1
From 0642f7044dea2127b1c7dab1d88d90638536183a Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 5 Oct 2019 14:54:49 +0100
Subject: fix(server): Amend package path for Go tooling compatibility
With these changes it is possible to keep Nixery inside $GOPATH and
build the server there with the standard Go tooling, while the Nix
build continues to work correctly.
---
tools/nixery/default.nix | 2 +-
tools/nixery/server/builder/archive.go | 2 +-
tools/nixery/server/builder/builder.go | 6 +++---
tools/nixery/server/builder/cache.go | 2 +-
tools/nixery/server/default.nix | 2 +-
tools/nixery/server/main.go | 6 +++---
6 files changed, 10 insertions(+), 10 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index b194079b9a29..c1a3c9f7dca5 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -43,7 +43,7 @@ rec {
nixery-bin = writeShellScriptBin "nixery" ''
export WEB_DIR="${nixery-book}"
export PATH="${nixery-build-image}/bin:$PATH"
- exec ${nixery-server}/bin/nixery
+ exec ${nixery-server}/bin/server
'';
# Container image containing Nixery and Nix itself. This image can
diff --git a/tools/nixery/server/builder/archive.go b/tools/nixery/server/builder/archive.go
index 6d5033f13182..6a2bd8e4b0f0 100644
--- a/tools/nixery/server/builder/archive.go
+++ b/tools/nixery/server/builder/archive.go
@@ -13,7 +13,7 @@ import (
"os"
"path/filepath"
- "github.com/google/nixery/layers"
+ "github.com/google/nixery/server/layers"
log "github.com/sirupsen/logrus"
)
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 3f085c565554..b675da0f7763 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -34,9 +34,9 @@ import (
"strings"
"cloud.google.com/go/storage"
- "github.com/google/nixery/config"
- "github.com/google/nixery/layers"
- "github.com/google/nixery/manifest"
+ "github.com/google/nixery/server/config"
+ "github.com/google/nixery/server/layers"
+ "github.com/google/nixery/server/manifest"
log "github.com/sirupsen/logrus"
"golang.org/x/oauth2/google"
)
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 4a060ba5ea17..5b6bf078b228 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -22,7 +22,7 @@ import (
"os"
"sync"
- "github.com/google/nixery/manifest"
+ "github.com/google/nixery/server/manifest"
log "github.com/sirupsen/logrus"
)
diff --git a/tools/nixery/server/default.nix b/tools/nixery/server/default.nix
index 9df52721857c..573447a6c3df 100644
--- a/tools/nixery/server/default.nix
+++ b/tools/nixery/server/default.nix
@@ -19,7 +19,7 @@ buildGoPackage {
goDeps = ./go-deps.nix;
src = ./.;
- goPackagePath = "github.com/google/nixery";
+ goPackagePath = "github.com/google/nixery/server";
# Enable checks and configure check-phase to include vet:
doCheck = true;
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index bad4c190c680..878e59ff6d50 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -35,9 +35,9 @@ import (
"time"
"cloud.google.com/go/storage"
- "github.com/google/nixery/builder"
- "github.com/google/nixery/config"
- "github.com/google/nixery/layers"
+ "github.com/google/nixery/server/builder"
+ "github.com/google/nixery/server/config"
+ "github.com/google/nixery/server/layers"
log "github.com/sirupsen/logrus"
)
--
cgit 1.4.1
From 95abb1bcde75253aa35669eed26f734d02c6a870 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 5 Oct 2019 14:55:40 +0100
Subject: feat(server): Initial Stackdriver-compatible log formatter
This formatter has basic support for the Stackdriver Error Reporting
format, but several things are still lacking:
* the service version (preferably git commit?) needs to be included in
the server somehow
* log streams should be split between stdout/stderr, as that seems to
  be how AppEngine (and several other GCP services?) differentiates
  between info and error logs
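For an error-level entry, the formatter currently emits a single JSON
object roughly along these lines, plus any structured fields attached
to the entry (all values below are illustrative only):

    {
      "serviceContext": {"service": "nixery", "version": "..."},
      "message": "failed to upload layer to staging",
      "eventTime": "2019-10-05T22:33:41+01:00",
      "context": {
        "filePath": "builder/builder.go",
        "lineNumber": 431,
        "functionName": "uploadHashLayer"
      }
    }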
---
tools/nixery/server/logs.go | 68 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 tools/nixery/server/logs.go
(limited to 'tools')
diff --git a/tools/nixery/server/logs.go b/tools/nixery/server/logs.go
new file mode 100644
index 000000000000..55e0a13a03ea
--- /dev/null
+++ b/tools/nixery/server/logs.go
@@ -0,0 +1,68 @@
+package main
+
+// This file configures different log formatters via logrus. The
+// standard formatter uses a structured JSON format that is compatible
+// with Stackdriver Error Reporting.
+//
+// https://cloud.google.com/error-reporting/docs/formatting-error-messages
+
+import (
+ "bytes"
+ "encoding/json"
+ log "github.com/sirupsen/logrus"
+)
+
+type stackdriverFormatter struct{}
+
+type serviceContext struct {
+ Service string `json:"service"`
+ Version string `json:"version"`
+}
+
+type reportLocation struct {
+ FilePath string `json:"filePath"`
+ LineNumber int `json:"lineNumber"`
+ FunctionName string `json:"functionName"`
+}
+
+var nixeryContext = serviceContext{
+ Service: "nixery",
+ Version: "TODO(tazjin)", // angry?
+}
+
+// isError determines whether an entry should be logged as an error
+// (i.e. with attached `context`).
+//
+// This requires the caller information to be present on the log
+// entry, as stacktraces are not available currently.
+func isError(e *log.Entry) bool {
+ l := e.Level
+ return (l == log.ErrorLevel || l == log.FatalLevel || l == log.PanicLevel) &&
+ e.HasCaller()
+}
+
+func (f stackdriverFormatter) Format(e *log.Entry) ([]byte, error) {
+ msg := e.Data
+ msg["serviceContext"] = &nixeryContext
+ msg["message"] = &e.Message
+ msg["eventTime"] = &e.Time
+
+ if isError(e) {
+ loc := reportLocation{
+ FilePath: e.Caller.File,
+ LineNumber: e.Caller.Line,
+ FunctionName: e.Caller.Function,
+ }
+ msg["context"] = &loc
+ }
+
+ b := new(bytes.Buffer)
+ err := json.NewEncoder(b).Encode(&msg)
+
+ return b.Bytes(), err
+}
+
+func init() {
+ log.SetReportCaller(true)
+ log.SetFormatter(stackdriverFormatter{})
+}
--
cgit 1.4.1
From 6912658c72291caefd7c7ea6312a35c3a686cf61 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 5 Oct 2019 22:33:41 +0100
Subject: feat(server): Use hash of Nixery source as version
Uses a hash of Nixery's sources as the version displayed when Nixery
launches or logs an error. This makes it possible to distinguish
between errors logged from different versions.
The source hashes should be reproducible between different checkouts
of the same source tree.
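The hash is injected at build time through the Go linker's `-X` flag,
which overrides the default value of the `version` variable declared
in package main; conceptually (hash value made up):

    go install -ldflags "-X main.version=abc123" github.com/google/nixery/server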
---
tools/nixery/default.nix | 11 ++++++++++-
tools/nixery/server/default.nix | 40 ++++++++++++++++++++++++++++++++--------
tools/nixery/server/logs.go | 2 +-
tools/nixery/server/main.go | 6 +++++-
4 files changed, 48 insertions(+), 11 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index c1a3c9f7dca5..20a5b50220e1 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -19,13 +19,22 @@
with pkgs;
rec {
+ # Hash of all Nixery sources - this is used as the Nixery version in
+ # builds to distinguish errors between deployed versions, see
+ # server/logs.go for details.
+ nixery-src-hash = pkgs.runCommand "nixery-src-hash" {} ''
+ echo ${./.} | grep -Eo '[a-z0-9]{32}' > $out
+ '';
+
# Go implementation of the Nixery server which implements the
# container registry interface.
#
# Users will usually not want to use this directly, instead see the
# 'nixery' derivation below, which automatically includes runtime
# data dependencies.
- nixery-server = callPackage ./server { };
+ nixery-server = callPackage ./server {
+ srcHash = nixery-src-hash;
+ };
# Implementation of the Nix image building logic
nixery-build-image = import ./build-image { inherit pkgs; };
diff --git a/tools/nixery/server/default.nix b/tools/nixery/server/default.nix
index 573447a6c3df..d497f106b02e 100644
--- a/tools/nixery/server/default.nix
+++ b/tools/nixery/server/default.nix
@@ -12,21 +12,45 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-{ buildGoPackage, lib }:
+{ buildGoPackage, go, lib, srcHash }:
-buildGoPackage {
+buildGoPackage rec {
name = "nixery-server";
goDeps = ./go-deps.nix;
src = ./.;
goPackagePath = "github.com/google/nixery/server";
-
- # Enable checks and configure check-phase to include vet:
doCheck = true;
- preCheck = ''
- for pkg in $(getGoDirs ""); do
- buildGoDir vet "$pkg"
- done
+
+ # The following phase configurations work around the overengineered
+ # Nix build configuration for Go.
+ #
+ # All I want this to do is produce a binary in the standard Nix
+ # output path, so pretty much all the phases except for the initial
+ # configuration of the "dependency forest" in $GOPATH have been
+ # overridden.
+ #
+ # This is necessary because the upstream builder does wonky things
+ # with the build arguments to the compiler, but I need to set some
+ # complex flags myself
+
+ outputs = [ "out" ];
+ preConfigure = "bin=$out";
+ buildPhase = ''
+ runHook preBuild
+ runHook renameImport
+
+ export GOBIN="$out/bin"
+ go install -ldflags "-X main.version=$(cat ${srcHash})" ${goPackagePath}
+ '';
+
+ fixupPhase = ''
+ remove-references-to -t ${go} $out/bin/server
+ '';
+
+ checkPhase = ''
+ go vet ${goPackagePath}
+ go test ${goPackagePath}
'';
meta = {
diff --git a/tools/nixery/server/logs.go b/tools/nixery/server/logs.go
index 55e0a13a03ea..9d1f17aed5cf 100644
--- a/tools/nixery/server/logs.go
+++ b/tools/nixery/server/logs.go
@@ -27,7 +27,6 @@ type reportLocation struct {
var nixeryContext = serviceContext{
Service: "nixery",
- Version: "TODO(tazjin)", // angry?
}
// isError determines whether an entry should be logged as an error
@@ -63,6 +62,7 @@ func (f stackdriverFormatter) Format(e *log.Entry) ([]byte, error) {
}
func init() {
+ nixeryContext.Version = version
log.SetReportCaller(true)
log.SetFormatter(stackdriverFormatter{})
}
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index 878e59ff6d50..ca1f3c69f2d8 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -47,6 +47,10 @@ import (
// https://docs.docker.com/registry/spec/manifest-v2-2/
const manifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
+// This variable will be initialised during the build process and set
+// to the hash of the entire Nixery source tree.
+var version string = "devel"
+
// Regexes matching the V2 Registry API routes. This only includes the
// routes required for serving images, since pushing and other such
// functionality is not available.
@@ -243,7 +247,7 @@ func main() {
Pop: pop,
}
- log.Printf("Starting Nixery on port %s\n", cfg.Port)
+ log.Printf("Starting Nixery (version %s) on port %s\n", version, cfg.Port)
// All /v2/ requests belong to the registry handler.
http.Handle("/v2/", ®istryHandler{
--
cgit 1.4.1
From f77c93b6aeb3e847fe00099ea5c52dc98cf74b4d Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 5 Oct 2019 22:48:19 +0100
Subject: feat(server): Add log level to severity mapping
The output format now writes a `severity` field that follows the
format recognised by Stackdriver Logging.
---
tools/nixery/server/logs.go | 34 ++++++++++++++++++++++++++++++++--
tools/nixery/server/main.go | 5 ++++-
2 files changed, 36 insertions(+), 3 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/logs.go b/tools/nixery/server/logs.go
index 9d1f17aed5cf..dec4a410fb06 100644
--- a/tools/nixery/server/logs.go
+++ b/tools/nixery/server/logs.go
@@ -40,16 +40,46 @@ func isError(e *log.Entry) bool {
e.HasCaller()
}
+// logSeverity formats the entry's severity into a format compatible
+// with Stackdriver Logging.
+//
+// The two formats that are being mapped do not have an equivalent set
+// of severities/levels, so the mapping is somewhat arbitrary for a
+// handful of them.
+//
+// https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#LogSeverity
+func logSeverity(l log.Level) string {
+ switch l {
+ case log.TraceLevel:
+ return "DEBUG"
+ case log.DebugLevel:
+ return "DEBUG"
+ case log.InfoLevel:
+ return "INFO"
+ case log.WarnLevel:
+ return "WARNING"
+ case log.ErrorLevel:
+ return "ERROR"
+ case log.FatalLevel:
+ return "CRITICAL"
+ case log.PanicLevel:
+ return "EMERGENCY"
+ default:
+ return "DEFAULT"
+ }
+}
+
func (f stackdriverFormatter) Format(e *log.Entry) ([]byte, error) {
msg := e.Data
msg["serviceContext"] = &nixeryContext
msg["message"] = &e.Message
msg["eventTime"] = &e.Time
+ msg["severity"] = logSeverity(e.Level)
if isError(e) {
loc := reportLocation{
- FilePath: e.Caller.File,
- LineNumber: e.Caller.Line,
+ FilePath: e.Caller.File,
+ LineNumber: e.Caller.Line,
FunctionName: e.Caller.Function,
}
msg["context"] = &loc
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index ca1f3c69f2d8..06de6a96a143 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -247,7 +247,10 @@ func main() {
Pop: pop,
}
- log.Printf("Starting Nixery (version %s) on port %s\n", version, cfg.Port)
+ log.WithFields(log.Fields{
+ "version": version,
+ "port": cfg.Port,
+ }).Info("Starting Nixery")
// All /v2/ requests belong to the registry handler.
http.Handle("/v2/", ®istryHandler{
--
cgit 1.4.1
From 6f148f789f43bf753b345b039d01d8a429f194e9 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 6 Oct 2019 03:18:38 +0100
Subject: refactor(server): Convert existing log entries to structured format
This rewrites all existing log statements into the structured logrus
format. For consistency, all errors are always logged separately from
the primary message in a field called `error`.
Only the "info", "error" and "warn" severities are used.
---
tools/nixery/server/builder/archive.go | 2 -
tools/nixery/server/builder/builder.go | 105 ++++++++++++++++++++++++++------
tools/nixery/server/builder/cache.go | 85 +++++++++++++++++++++-----
tools/nixery/server/config/config.go | 17 ++++--
tools/nixery/server/config/pkgsource.go | 15 ++++-
tools/nixery/server/layers/grouping.go | 13 +++-
tools/nixery/server/main.go | 60 ++++++++++++++----
7 files changed, 243 insertions(+), 54 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/archive.go b/tools/nixery/server/builder/archive.go
index 6a2bd8e4b0f0..63ea9c73814d 100644
--- a/tools/nixery/server/builder/archive.go
+++ b/tools/nixery/server/builder/archive.go
@@ -14,7 +14,6 @@ import (
"path/filepath"
"github.com/google/nixery/server/layers"
- log "github.com/sirupsen/logrus"
)
// Create a new tarball from each of the paths in the list and write the tarball
@@ -33,7 +32,6 @@ func tarStorePaths(l *layers.Layer, w io.Writer) error {
return err
}
- log.Printf("Created layer for '%s'\n", l.Hash())
return nil
}
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index b675da0f7763..14696f58a80f 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -143,14 +143,17 @@ func convenienceNames(packages []string) []string {
// logNix logs each output line from Nix. It runs in a goroutine per
// output channel that should be live-logged.
-func logNix(name string, r io.ReadCloser) {
+func logNix(image, cmd string, r io.ReadCloser) {
scanner := bufio.NewScanner(r)
for scanner.Scan() {
- log.Printf("\x1b[31m[nix - %s]\x1b[39m %s\n", name, scanner.Text())
+ log.WithFields(log.Fields{
+ "image": image,
+ "cmd": cmd,
+ }).Info("[nix] " + scanner.Text())
}
}
-func callNix(program string, name string, args []string) ([]byte, error) {
+func callNix(program, image string, args []string) ([]byte, error) {
cmd := exec.Command(program, args...)
outpipe, err := cmd.StdoutPipe()
@@ -162,24 +165,45 @@ func callNix(program string, name string, args []string) ([]byte, error) {
if err != nil {
return nil, err
}
- go logNix(name, errpipe)
+ go logNix(program, image, errpipe)
if err = cmd.Start(); err != nil {
- log.Printf("Error starting %s: %s\n", program, err)
+ log.WithFields(log.Fields{
+ "image": image,
+ "cmd": program,
+ "error": err,
+ }).Error("error starting command")
+
return nil, err
}
- log.Printf("Invoked Nix build (%s) for '%s'\n", program, name)
+
+ log.WithFields(log.Fields{
+ "cmd": program,
+ "image": image,
+ }).Info("invoked Nix build")
stdout, _ := ioutil.ReadAll(outpipe)
if err = cmd.Wait(); err != nil {
- log.Printf("%s execution error: %s\nstdout: %s\n", program, err, stdout)
+ log.WithFields(log.Fields{
+ "image": image,
+ "cmd": program,
+ "error": err,
+ "stdout": stdout,
+ }).Info("Nix execution failed")
+
return nil, err
}
resultFile := strings.TrimSpace(string(stdout))
buildOutput, err := ioutil.ReadFile(resultFile)
if err != nil {
+ log.WithFields(log.Fields{
+ "image": image,
+ "file": resultFile,
+ "error": err,
+ }).Info("failed to read Nix result file")
+
return nil, err
}
@@ -209,10 +233,14 @@ func prepareImage(s *State, image *Image) (*ImageResult, error) {
output, err := callNix("nixery-build-image", image.Name, args)
if err != nil {
- log.Printf("failed to call nixery-build-image: %s\n", err)
+ // granular error logging is performed in callNix already
return nil, err
}
- log.Printf("Finished image preparation for '%s' via Nix\n", image.Name)
+
+ log.WithFields(log.Fields{
+ "image": image.Name,
+ "tag": image.Tag,
+ }).Info("finished image preparation via Nix")
var result ImageResult
err = json.Unmarshal(output, &result)
@@ -243,16 +271,27 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
if entry, cached := layerFromCache(ctx, s, l.Hash()); cached {
entries = append(entries, *entry)
} else {
+ lh := l.Hash()
lw := func(w io.Writer) error {
return tarStorePaths(&l, w)
}
- entry, err := uploadHashLayer(ctx, s, l.Hash(), lw)
+ entry, err := uploadHashLayer(ctx, s, lh, lw)
if err != nil {
return nil, err
}
entry.MergeRating = l.MergeRating
+ var pkgs []string
+ for _, p := range l.Contents {
+ pkgs = append(pkgs, layers.PackageFromPath(p))
+ }
+
+ log.WithFields(log.Fields{
+ "layer": lh,
+ "packages": pkgs,
+ }).Info("created image layer")
+
go cacheLayer(ctx, s, l.Hash(), *entry)
entries = append(entries, *entry)
}
@@ -264,7 +303,13 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
entry, err := uploadHashLayer(ctx, s, slkey, func(w io.Writer) error {
f, err := os.Open(result.SymlinkLayer.Path)
if err != nil {
- log.Printf("failed to upload symlink layer '%s': %s\n", slkey, err)
+ log.WithFields(log.Fields{
+ "image": image.Name,
+ "tag": image.Tag,
+ "error": err,
+ "layer": slkey,
+ }).Error("failed to upload symlink layer")
+
return err
}
defer f.Close()
@@ -319,7 +364,12 @@ func renameObject(ctx context.Context, s *State, old, new string) error {
// renaming/moving them, hence a deletion call afterwards is
// required.
if err = s.Bucket.Object(old).Delete(ctx); err != nil {
- log.Printf("failed to delete renamed object '%s': %s\n", old, err)
+ log.WithFields(log.Fields{
+ "new": new,
+ "old": old,
+ "error": err,
+ }).Warn("failed to delete renamed object")
+
// this error should not break renaming and is not returned
}
@@ -371,12 +421,19 @@ func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter)
err := lw(multi)
if err != nil {
- log.Printf("failed to create and upload layer '%s': %s\n", key, err)
+ log.WithFields(log.Fields{
+ "layer": key,
+ "error": err,
+ }).Error("failed to create and upload layer")
+
return nil, err
}
if err = sw.Close(); err != nil {
- log.Printf("failed to upload layer '%s' to staging: %s\n", key, err)
+ log.WithFields(log.Fields{
+ "layer": key,
+ "error": err,
+ }).Error("failed to upload layer to staging")
}
sha256sum := fmt.Sprintf("%x", shasum.Sum([]byte{}))
@@ -385,12 +442,21 @@ func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter)
// remains is to move it to the correct location and cache it.
err = renameObject(ctx, s, "staging/"+key, "layers/"+sha256sum)
if err != nil {
- log.Printf("failed to move layer '%s' from staging: %s\n", key, err)
+ log.WithFields(log.Fields{
+ "layer": key,
+ "error": err,
+ }).Error("failed to move layer from staging")
+
return nil, err
}
size := counter.count
- log.Printf("Uploaded layer sha256:%s (%v bytes written)", sha256sum, size)
+
+ log.WithFields(log.Fields{
+ "layer": key,
+ "sha256": sha256sum,
+ "size": size,
+ }).Info("uploaded layer")
entry := manifest.Entry{
Digest: "sha256:" + sha256sum,
@@ -436,7 +502,12 @@ func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, erro
}
if _, err = uploadHashLayer(ctx, s, c.SHA256, lw); err != nil {
- log.Printf("failed to upload config for %s: %s\n", image.Name, err)
+ log.WithFields(log.Fields{
+ "image": image.Name,
+ "tag": image.Tag,
+ "error": err,
+ }).Error("failed to upload config")
+
return nil, err
}
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 5b6bf078b228..916de3af168c 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -60,15 +60,25 @@ func (c *LocalCache) manifestFromLocalCache(key string) (json.RawMessage, bool)
f, err := os.Open(c.mdir + key)
if err != nil {
- // TODO(tazjin): Once log levels are available, this
- // might warrant a debug log.
+ // This is a debug log statement because failure to
+ // read the manifest key is currently expected if it
+ // is not cached.
+ log.WithFields(log.Fields{
+ "manifest": key,
+ "error": err,
+ }).Debug("failed to read manifest from local cache")
+
return nil, false
}
defer f.Close()
m, err := ioutil.ReadAll(f)
if err != nil {
- log.Printf("Failed to read manifest '%s' from local cache: %s\n", key, err)
+ log.WithFields(log.Fields{
+ "manifest": key,
+ "error": err,
+ }).Error("failed to read manifest from local cache")
+
return nil, false
}
@@ -86,7 +96,10 @@ func (c *LocalCache) localCacheManifest(key string, m json.RawMessage) {
err := ioutil.WriteFile(c.mdir+key, []byte(m), 0644)
if err != nil {
- log.Printf("Failed to locally cache manifest for '%s': %s\n", key, err)
+ log.WithFields(log.Fields{
+ "manifest": key,
+ "error": err,
+ }).Error("failed to locally cache manifest")
}
}
@@ -123,18 +136,29 @@ func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessa
r, err := obj.NewReader(ctx)
if err != nil {
- log.Printf("Failed to retrieve manifest '%s' from cache: %s\n", key, err)
+ log.WithFields(log.Fields{
+ "manifest": key,
+ "error": err,
+ }).Error("failed to retrieve manifest from bucket cache")
+
return nil, false
}
defer r.Close()
m, err := ioutil.ReadAll(r)
if err != nil {
- log.Printf("Failed to read cached manifest for '%s': %s\n", key, err)
+ log.WithFields(log.Fields{
+ "manifest": key,
+ "error": err,
+ }).Error("failed to read cached manifest from bucket")
+
+ return nil, false
}
go s.Cache.localCacheManifest(key, m)
- log.Printf("Retrieved manifest for sha1:%s from GCS\n", key)
+ log.WithFields(log.Fields{
+ "manifest": key,
+ }).Info("retrieved manifest from GCS")
return json.RawMessage(m), true
}
@@ -149,16 +173,27 @@ func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage)
size, err := io.Copy(w, r)
if err != nil {
- log.Printf("failed to cache manifest sha1:%s: %s\n", key, err)
+ log.WithFields(log.Fields{
+ "manifest": key,
+ "error": err,
+ }).Error("failed to cache manifest to GCS")
+
return
}
if err = w.Close(); err != nil {
- log.Printf("failed to cache manifest sha1:%s: %s\n", key, err)
+ log.WithFields(log.Fields{
+ "manifest": key,
+ "error": err,
+ }).Error("failed to cache manifest to GCS")
+
return
}
- log.Printf("Cached manifest sha1:%s (%v bytes written)\n", key, size)
+ log.WithFields(log.Fields{
+ "manifest": key,
+ "size": size,
+ }).Info("cached manifest to GCS")
}
// Retrieve a layer build from the cache, first checking the local
@@ -176,7 +211,11 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
r, err := obj.NewReader(ctx)
if err != nil {
- log.Printf("Failed to retrieve layer build '%s' from cache: %s\n", key, err)
+ log.WithFields(log.Fields{
+ "layer": key,
+ "error": err,
+ }).Error("failed to retrieve cached layer from GCS")
+
return nil, false
}
defer r.Close()
@@ -184,14 +223,22 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
jb := bytes.NewBuffer([]byte{})
_, err = io.Copy(jb, r)
if err != nil {
- log.Printf("Failed to read layer build '%s' from cache: %s\n", key, err)
+ log.WithFields(log.Fields{
+ "layer": key,
+ "error": err,
+ }).Error("failed to read cached layer from GCS")
+
return nil, false
}
var entry manifest.Entry
err = json.Unmarshal(jb.Bytes(), &entry)
if err != nil {
- log.Printf("Failed to unmarshal layer build '%s' from cache: %s\n", key, err)
+ log.WithFields(log.Fields{
+ "layer": key,
+ "error": err,
+ }).Error("failed to unmarshal cached layer")
+
return nil, false
}
@@ -210,12 +257,20 @@ func cacheLayer(ctx context.Context, s *State, key string, entry manifest.Entry)
_, err := io.Copy(w, bytes.NewReader(j))
if err != nil {
- log.Printf("failed to cache build '%s': %s\n", key, err)
+ log.WithFields(log.Fields{
+ "layer": key,
+ "error": err,
+ }).Error("failed to cache layer")
+
return
}
if err = w.Close(); err != nil {
- log.Printf("failed to cache build '%s': %s\n", key, err)
+ log.WithFields(log.Fields{
+ "layer": key,
+ "error": err,
+ }).Error("failed to cache layer")
+
return
}
}
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index abc067855a88..9447975aa9a2 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -32,14 +32,20 @@ func signingOptsFromEnv() *storage.SignedURLOptions {
id := os.Getenv("GCS_SIGNING_ACCOUNT")
if path == "" || id == "" {
- log.Println("GCS URL signing disabled")
+ log.Info("GCS URL signing disabled")
return nil
}
- log.Printf("GCS URL signing enabled with account %q\n", id)
+ log.WithFields(log.Fields{
+ "account": id,
+ }).Info("GCS URL signing enabled")
+
k, err := ioutil.ReadFile(path)
if err != nil {
- log.Fatalf("Failed to read GCS signing key: %s\n", err)
+ log.WithFields(log.Fields{
+ "file": path,
+ "error": err,
+ }).Fatal("failed to read GCS signing key")
}
return &storage.SignedURLOptions{
@@ -52,7 +58,10 @@ func signingOptsFromEnv() *storage.SignedURLOptions {
func getConfig(key, desc, def string) string {
value := os.Getenv(key)
if value == "" && def == "" {
- log.Fatalln(desc + " must be specified")
+ log.WithFields(log.Fields{
+ "option": key,
+ "description": desc,
+ }).Fatal("missing required configuration envvar")
} else if value == "" {
return def
}
diff --git a/tools/nixery/server/config/pkgsource.go b/tools/nixery/server/config/pkgsource.go
index 98719ecceabb..3a817f3d76d3 100644
--- a/tools/nixery/server/config/pkgsource.go
+++ b/tools/nixery/server/config/pkgsource.go
@@ -132,21 +132,30 @@ func (p *PkgsPath) CacheKey(pkgs []string, tag string) string {
// specified, the Nix code will default to a recent NixOS channel.
func pkgSourceFromEnv() (PkgSource, error) {
if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
- log.Printf("Using Nix package set from Nix channel %q\n", channel)
+ log.WithFields(log.Fields{
+ "channel": channel,
+ }).Info("using Nix package set from Nix channel or commit")
+
return &NixChannel{
channel: channel,
}, nil
}
if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
- log.Printf("Using Nix package set from git repository at %q\n", git)
+ log.WithFields(log.Fields{
+ "repo": git,
+ }).Info("using NIx package set from git repository")
+
return &GitSource{
repository: git,
}, nil
}
if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
- log.Printf("Using Nix package set from path %q\n", path)
+ log.WithFields(log.Fields{
+ "path": path,
+ }).Info("using Nix package set at local path")
+
return &PkgsPath{
path: path,
}, nil
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
index 95198c90d4b3..1fbbf33db3d7 100644
--- a/tools/nixery/server/layers/grouping.go
+++ b/tools/nixery/server/layers/grouping.go
@@ -172,8 +172,14 @@ func (c *closure) ID() int64 {
var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)
+// PackageFromPath returns the name of a Nix package based on its
+// output store path.
+func PackageFromPath(path string) string {
+ return nixRegexp.ReplaceAllString(path, "")
+}
+
func (c *closure) DOTID() string {
- return nixRegexp.ReplaceAllString(c.Path, "")
+ return PackageFromPath(c.Path)
}
// bigOrPopular checks whether this closure should be considered for
@@ -321,7 +327,10 @@ func dominate(budget int, graph *simple.DirectedGraph) []Layer {
})
if len(layers) > budget {
- log.Printf("Ideal image has %v layers, but budget is %v\n", len(layers), budget)
+ log.WithFields(log.Fields{
+ "layers": len(layers),
+ "budget": budget,
+ }).Info("ideal image exceeds layer budget")
}
for len(layers) > budget {
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index 06de6a96a143..babf50790f64 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -68,7 +68,9 @@ var (
// The Docker client is known to follow redirects, but this might not be true
// for all other registry clients.
func constructLayerUrl(cfg *config.Config, digest string) (string, error) {
- log.Printf("Redirecting layer '%s' request to bucket '%s'\n", digest, cfg.Bucket)
+ log.WithFields(log.Fields{
+ "layer": digest,
+ }).Info("redirecting layer request to bucket")
object := "layers/" + digest
if cfg.Signing != nil {
@@ -90,13 +92,18 @@ func constructLayerUrl(cfg *config.Config, digest string) (string, error) {
func prepareBucket(ctx context.Context, cfg *config.Config) *storage.BucketHandle {
client, err := storage.NewClient(ctx)
if err != nil {
- log.Fatalln("Failed to set up Cloud Storage client:", err)
+ log.WithFields(log.Fields{
+ "error": err,
+ }).Fatal("failed to set up Cloud Storage client")
}
bkt := client.Bucket(cfg.Bucket)
if _, err := bkt.Attrs(ctx); err != nil {
- log.Fatalln("Could not access configured bucket", err)
+ log.WithFields(log.Fields{
+ "error": err,
+ "bucket": cfg.Bucket,
+ }).Fatal("could not access configured bucket")
}
return bkt
@@ -169,13 +176,24 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if len(manifestMatches) == 3 {
imageName := manifestMatches[1]
imageTag := manifestMatches[2]
- log.Printf("Requesting manifest for image %q at tag %q", imageName, imageTag)
+
+ log.WithFields(log.Fields{
+ "image": imageName,
+ "tag": imageTag,
+ }).Info("requesting image manifest")
+
image := builder.ImageFromName(imageName, imageTag)
buildResult, err := builder.BuildImage(h.ctx, h.state, &image)
if err != nil {
writeError(w, 500, "UNKNOWN", "image build failure")
- log.Println("Failed to build image manifest", err)
+
+ log.WithFields(log.Fields{
+ "image": imageName,
+ "tag": imageTag,
+ "error": err,
+ }).Error("failed to build image manifest")
+
return
}
@@ -184,7 +202,13 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if buildResult.Error == "not_found" {
s := fmt.Sprintf("Could not find Nix packages: %v", buildResult.Pkgs)
writeError(w, 404, "MANIFEST_UNKNOWN", s)
- log.Println(s)
+
+ log.WithFields(log.Fields{
+ "image": imageName,
+ "tag": imageTag,
+ "packages": buildResult.Pkgs,
+ }).Error("could not find Nix packages")
+
return
}
@@ -205,7 +229,11 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
url, err := constructLayerUrl(&h.state.Cfg, digest)
if err != nil {
- log.Printf("Failed to sign GCS URL: %s\n", err)
+ log.WithFields(log.Fields{
+ "layer": digest,
+ "error": err,
+ }).Error("failed to sign GCS URL")
+
writeError(w, 500, "UNKNOWN", "could not serve layer")
return
}
@@ -215,28 +243,38 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
return
}
- log.Printf("Unsupported registry route: %s\n", r.RequestURI)
+ log.WithFields(log.Fields{
+ "uri": r.RequestURI,
+ }).Info("unsupported registry route")
+
w.WriteHeader(404)
}
func main() {
cfg, err := config.FromEnv()
if err != nil {
- log.Fatalln("Failed to load configuration", err)
+ log.WithFields(log.Fields{
+ "error": err,
+ }).Fatal("failed to load configuration")
}
ctx := context.Background()
bucket := prepareBucket(ctx, &cfg)
cache, err := builder.NewCache()
if err != nil {
- log.Fatalln("Failed to instantiate build cache", err)
+ log.WithFields(log.Fields{
+ "error": err,
+ }).Fatal("failed to instantiate build cache")
}
var pop layers.Popularity
if cfg.PopUrl != "" {
pop, err = downloadPopularity(cfg.PopUrl)
if err != nil {
- log.Fatalln("Failed to fetch popularity information", err)
+ log.WithFields(log.Fields{
+ "error": err,
+ "popURL": cfg.PopUrl,
+ }).Fatal("failed to fetch popularity information")
}
}
--
cgit 1.4.1
From c1020754a2a2a4058fd70d4b0a8276ccadd9545f Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 6 Oct 2019 03:57:35 +0100
Subject: fix(build-image): Import 'match' from builtins
---
tools/nixery/build-image/build-image.nix | 1 +
1 file changed, 1 insertion(+)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index 70049885ab1c..68a061290ed5 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -39,6 +39,7 @@ let
fromJSON
hasAttr
length
+ match
readFile
toFile
toJSON;
--
cgit 1.4.1
From d7ffbbdea47738acac24593f7e4448dd9c1df8ff Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 6 Oct 2019 14:48:24 +0100
Subject: refactor(server): Use logrus convenience functions for logs
Makes use of the `.WithError` and `.WithField` convenience functions
in logrus to simplify log statement construction.
This has the added benefit of making it easier to correctly log
errors.
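As a point of reference, here is a minimal sketch contrasting the two logging styles this commit switches between; the field values and messages are illustrative placeholders rather than real Nixery log statements.
```go
package main

import (
	"errors"

	log "github.com/sirupsen/logrus"
)

func main() {
	err := errors.New("example failure")

	// Verbose form: the error is passed as an ordinary field.
	log.WithFields(log.Fields{
		"layer": "abc123",
		"error": err,
	}).Error("failed to upload layer")

	// Convenience form: WithError attaches the error under logrus'
	// standard "error" key and chains with WithField.
	log.WithError(err).WithField("layer", "abc123").
		Error("failed to upload layer")
}
```
Both forms produce structured entries with the error under the `error` key; the chained form simply removes the `log.Fields` boilerplate.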
---
tools/nixery/server/builder/builder.go | 46 ++++++++------------
tools/nixery/server/builder/cache.go | 76 +++++++++++----------------------
tools/nixery/server/config/config.go | 9 +---
tools/nixery/server/config/pkgsource.go | 12 ++----
tools/nixery/server/main.go | 44 ++++++-------------
5 files changed, 59 insertions(+), 128 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 14696f58a80f..78d09b55b1ec 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -168,11 +168,10 @@ func callNix(program, image string, args []string) ([]byte, error) {
go logNix(program, image, errpipe)
if err = cmd.Start(); err != nil {
- log.WithFields(log.Fields{
+ log.WithError(err).WithFields(log.Fields{
"image": image,
"cmd": program,
- "error": err,
- }).Error("error starting command")
+ }).Error("error invoking Nix")
return nil, err
}
@@ -185,12 +184,11 @@ func callNix(program, image string, args []string) ([]byte, error) {
stdout, _ := ioutil.ReadAll(outpipe)
if err = cmd.Wait(); err != nil {
- log.WithFields(log.Fields{
+ log.WithError(err).WithFields(log.Fields{
"image": image,
"cmd": program,
- "error": err,
"stdout": stdout,
- }).Info("Nix execution failed")
+ }).Info("failed to invoke Nix")
return nil, err
}
@@ -198,10 +196,9 @@ func callNix(program, image string, args []string) ([]byte, error) {
resultFile := strings.TrimSpace(string(stdout))
buildOutput, err := ioutil.ReadFile(resultFile)
if err != nil {
- log.WithFields(log.Fields{
+ log.WithError(err).WithFields(log.Fields{
"image": image,
"file": resultFile,
- "error": err,
}).Info("failed to read Nix result file")
return nil, err
@@ -303,10 +300,9 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
entry, err := uploadHashLayer(ctx, s, slkey, func(w io.Writer) error {
f, err := os.Open(result.SymlinkLayer.Path)
if err != nil {
- log.WithFields(log.Fields{
+ log.WithError(err).WithFields(log.Fields{
"image": image.Name,
"tag": image.Tag,
- "error": err,
"layer": slkey,
}).Error("failed to upload symlink layer")
@@ -364,10 +360,9 @@ func renameObject(ctx context.Context, s *State, old, new string) error {
// renaming/moving them, hence a deletion call afterwards is
// required.
if err = s.Bucket.Object(old).Delete(ctx); err != nil {
- log.WithFields(log.Fields{
- "new": new,
- "old": old,
- "error": err,
+ log.WithError(err).WithFields(log.Fields{
+ "new": new,
+ "old": old,
}).Warn("failed to delete renamed object")
// this error should not break renaming and is not returned
@@ -421,19 +416,15 @@ func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter)
err := lw(multi)
if err != nil {
- log.WithFields(log.Fields{
- "layer": key,
- "error": err,
- }).Error("failed to create and upload layer")
+ log.WithError(err).WithField("layer", key).
+ Error("failed to create and upload layer")
return nil, err
}
if err = sw.Close(); err != nil {
- log.WithFields(log.Fields{
- "layer": key,
- "error": err,
- }).Error("failed to upload layer to staging")
+ log.WithError(err).WithField("layer", key).
+ Error("failed to upload layer to staging")
}
sha256sum := fmt.Sprintf("%x", shasum.Sum([]byte{}))
@@ -442,10 +433,8 @@ func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter)
// remains is to move it to the correct location and cache it.
err = renameObject(ctx, s, "staging/"+key, "layers/"+sha256sum)
if err != nil {
- log.WithFields(log.Fields{
- "layer": key,
- "error": err,
- }).Error("failed to move layer from staging")
+ log.WithError(err).WithField("layer", key).
+ Error("failed to move layer from staging")
return nil, err
}
@@ -478,7 +467,7 @@ func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, erro
imageResult, err := prepareImage(s, image)
if err != nil {
- return nil, fmt.Errorf("failed to prepare image '%s': %s", image.Name, err)
+ return nil, err
}
if imageResult.Error != "" {
@@ -502,10 +491,9 @@ func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, erro
}
if _, err = uploadHashLayer(ctx, s, c.SHA256, lw); err != nil {
- log.WithFields(log.Fields{
+ log.WithError(err).WithFields(log.Fields{
"image": image.Name,
"tag": image.Tag,
- "error": err,
}).Error("failed to upload config")
return nil, err
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 916de3af168c..88bf30de4812 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -63,10 +63,8 @@ func (c *LocalCache) manifestFromLocalCache(key string) (json.RawMessage, bool)
// This is a debug log statement because failure to
// read the manifest key is currently expected if it
// is not cached.
- log.WithFields(log.Fields{
- "manifest": key,
- "error": err,
- }).Debug("failed to read manifest from local cache")
+ log.WithError(err).WithField("manifest", key).
+ Debug("failed to read manifest from local cache")
return nil, false
}
@@ -74,10 +72,8 @@ func (c *LocalCache) manifestFromLocalCache(key string) (json.RawMessage, bool)
m, err := ioutil.ReadAll(f)
if err != nil {
- log.WithFields(log.Fields{
- "manifest": key,
- "error": err,
- }).Error("failed to read manifest from local cache")
+ log.WithError(err).WithField("manifest", key).
+ Error("failed to read manifest from local cache")
return nil, false
}
@@ -96,10 +92,8 @@ func (c *LocalCache) localCacheManifest(key string, m json.RawMessage) {
err := ioutil.WriteFile(c.mdir+key, []byte(m), 0644)
if err != nil {
- log.WithFields(log.Fields{
- "manifest": key,
- "error": err,
- }).Error("failed to locally cache manifest")
+ log.WithError(err).WithField("manifest", key).
+ Error("failed to locally cache manifest")
}
}
@@ -136,10 +130,8 @@ func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessa
r, err := obj.NewReader(ctx)
if err != nil {
- log.WithFields(log.Fields{
- "manifest": key,
- "error": err,
- }).Error("failed to retrieve manifest from bucket cache")
+ log.WithError(err).WithField("manifest", key).
+ Error("failed to retrieve manifest from bucket cache")
return nil, false
}
@@ -147,18 +139,14 @@ func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessa
m, err := ioutil.ReadAll(r)
if err != nil {
- log.WithFields(log.Fields{
- "manifest": key,
- "error": err,
- }).Error("failed to read cached manifest from bucket")
+ log.WithError(err).WithField("manifest", key).
+ Error("failed to read cached manifest from bucket")
return nil, false
}
go s.Cache.localCacheManifest(key, m)
- log.WithFields(log.Fields{
- "manifest": key,
- }).Info("retrieved manifest from GCS")
+ log.WithField("manifest", key).Info("retrieved manifest from GCS")
return json.RawMessage(m), true
}
@@ -173,19 +161,15 @@ func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage)
size, err := io.Copy(w, r)
if err != nil {
- log.WithFields(log.Fields{
- "manifest": key,
- "error": err,
- }).Error("failed to cache manifest to GCS")
+ log.WithError(err).WithField("manifest", key).
+ Error("failed to cache manifest to GCS")
return
}
if err = w.Close(); err != nil {
- log.WithFields(log.Fields{
- "manifest": key,
- "error": err,
- }).Error("failed to cache manifest to GCS")
+ log.WithError(err).WithField("manifest", key).
+ Error("failed to cache manifest to GCS")
return
}
@@ -211,10 +195,8 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
r, err := obj.NewReader(ctx)
if err != nil {
- log.WithFields(log.Fields{
- "layer": key,
- "error": err,
- }).Error("failed to retrieve cached layer from GCS")
+ log.WithError(err).WithField("layer", key).
+ Error("failed to retrieve cached layer from GCS")
return nil, false
}
@@ -223,10 +205,8 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
jb := bytes.NewBuffer([]byte{})
_, err = io.Copy(jb, r)
if err != nil {
- log.WithFields(log.Fields{
- "layer": key,
- "error": err,
- }).Error("failed to read cached layer from GCS")
+ log.WithError(err).WithField("layer", key).
+ Error("failed to read cached layer from GCS")
return nil, false
}
@@ -234,10 +214,8 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
var entry manifest.Entry
err = json.Unmarshal(jb.Bytes(), &entry)
if err != nil {
- log.WithFields(log.Fields{
- "layer": key,
- "error": err,
- }).Error("failed to unmarshal cached layer")
+ log.WithError(err).WithField("layer", key).
+ Error("failed to unmarshal cached layer")
return nil, false
}
@@ -257,19 +235,15 @@ func cacheLayer(ctx context.Context, s *State, key string, entry manifest.Entry)
_, err := io.Copy(w, bytes.NewReader(j))
if err != nil {
- log.WithFields(log.Fields{
- "layer": key,
- "error": err,
- }).Error("failed to cache layer")
+ log.WithError(err).WithField("layer", key).
+ Error("failed to cache layer")
return
}
if err = w.Close(); err != nil {
- log.WithFields(log.Fields{
- "layer": key,
- "error": err,
- }).Error("failed to cache layer")
+ log.WithError(err).WithField("layer", key).
+ Error("failed to cache layer")
return
}
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index 9447975aa9a2..fe05734ee6ac 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -36,16 +36,11 @@ func signingOptsFromEnv() *storage.SignedURLOptions {
return nil
}
- log.WithFields(log.Fields{
- "account": id,
- }).Info("GCS URL signing enabled")
+ log.WithField("account", id).Info("GCS URL signing enabled")
k, err := ioutil.ReadFile(path)
if err != nil {
- log.WithFields(log.Fields{
- "file": path,
- "error": err,
- }).Fatal("failed to read GCS signing key")
+ log.WithError(err).WithField("file", path).Fatal("failed to read GCS signing key")
}
return &storage.SignedURLOptions{
diff --git a/tools/nixery/server/config/pkgsource.go b/tools/nixery/server/config/pkgsource.go
index 3a817f3d76d3..95236c4b0d15 100644
--- a/tools/nixery/server/config/pkgsource.go
+++ b/tools/nixery/server/config/pkgsource.go
@@ -132,9 +132,7 @@ func (p *PkgsPath) CacheKey(pkgs []string, tag string) string {
// specified, the Nix code will default to a recent NixOS channel.
func pkgSourceFromEnv() (PkgSource, error) {
if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
- log.WithFields(log.Fields{
- "channel": channel,
- }).Info("using Nix package set from Nix channel or commit")
+ log.WithField("channel", channel).Info("using Nix package set from Nix channel or commit")
return &NixChannel{
channel: channel,
@@ -142,9 +140,7 @@ func pkgSourceFromEnv() (PkgSource, error) {
}
if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
- log.WithFields(log.Fields{
- "repo": git,
- }).Info("using NIx package set from git repository")
+ log.WithField("repo", git).Info("using NIx package set from git repository")
return &GitSource{
repository: git,
@@ -152,9 +148,7 @@ func pkgSourceFromEnv() (PkgSource, error) {
}
if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
- log.WithFields(log.Fields{
- "path": path,
- }).Info("using Nix package set at local path")
+ log.WithField("path", path).Info("using Nix package set at local path")
return &PkgsPath{
path: path,
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index babf50790f64..f38fab2f2abd 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -68,9 +68,7 @@ var (
// The Docker client is known to follow redirects, but this might not be true
// for all other registry clients.
func constructLayerUrl(cfg *config.Config, digest string) (string, error) {
- log.WithFields(log.Fields{
- "layer": digest,
- }).Info("redirecting layer request to bucket")
+ log.WithField("layer", digest).Info("redirecting layer request to bucket")
object := "layers/" + digest
if cfg.Signing != nil {
@@ -92,18 +90,13 @@ func constructLayerUrl(cfg *config.Config, digest string) (string, error) {
func prepareBucket(ctx context.Context, cfg *config.Config) *storage.BucketHandle {
client, err := storage.NewClient(ctx)
if err != nil {
- log.WithFields(log.Fields{
- "error": err,
- }).Fatal("failed to set up Cloud Storage client")
+ log.WithError(err).Fatal("failed to set up Cloud Storage client")
}
bkt := client.Bucket(cfg.Bucket)
if _, err := bkt.Attrs(ctx); err != nil {
- log.WithFields(log.Fields{
- "error": err,
- "bucket": cfg.Bucket,
- }).Fatal("could not access configured bucket")
+ log.WithError(err).WithField("bucket", cfg.Bucket).Fatal("could not access configured bucket")
}
return bkt
@@ -188,10 +181,9 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if err != nil {
writeError(w, 500, "UNKNOWN", "image build failure")
- log.WithFields(log.Fields{
+ log.WithError(err).WithFields(log.Fields{
"image": imageName,
"tag": imageTag,
- "error": err,
}).Error("failed to build image manifest")
return
@@ -207,7 +199,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
"image": imageName,
"tag": imageTag,
"packages": buildResult.Pkgs,
- }).Error("could not find Nix packages")
+ }).Warn("could not find Nix packages")
return
}
@@ -229,11 +221,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
url, err := constructLayerUrl(&h.state.Cfg, digest)
if err != nil {
- log.WithFields(log.Fields{
- "layer": digest,
- "error": err,
- }).Error("failed to sign GCS URL")
-
+ log.WithError(err).WithField("layer", digest).Error("failed to sign GCS URL")
writeError(w, 500, "UNKNOWN", "could not serve layer")
return
}
@@ -243,9 +231,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
return
}
- log.WithFields(log.Fields{
- "uri": r.RequestURI,
- }).Info("unsupported registry route")
+ log.WithField("uri", r.RequestURI).Info("unsupported registry route")
w.WriteHeader(404)
}
@@ -253,28 +239,22 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
func main() {
cfg, err := config.FromEnv()
if err != nil {
- log.WithFields(log.Fields{
- "error": err,
- }).Fatal("failed to load configuration")
+ log.WithError(err).Fatal("failed to load configuration")
}
ctx := context.Background()
bucket := prepareBucket(ctx, &cfg)
cache, err := builder.NewCache()
if err != nil {
- log.WithFields(log.Fields{
- "error": err,
- }).Fatal("failed to instantiate build cache")
+ log.WithError(err).Fatal("failed to instantiate build cache")
}
var pop layers.Popularity
if cfg.PopUrl != "" {
pop, err = downloadPopularity(cfg.PopUrl)
if err != nil {
- log.WithFields(log.Fields{
- "error": err,
- "popURL": cfg.PopUrl,
- }).Fatal("failed to fetch popularity information")
+ log.WithError(err).WithField("popURL", cfg.PopUrl).
+ Fatal("failed to fetch popularity information")
}
}
@@ -288,7 +268,7 @@ func main() {
log.WithFields(log.Fields{
"version": version,
"port": cfg.Port,
- }).Info("Starting Nixery")
+ }).Info("starting Nixery")
// All /v2/ requests belong to the registry handler.
http.Handle("/v2/", ®istryHandler{
--
cgit 1.4.1
From bf2718cebbd1c7af15c54c6da5685ed6d933cab4 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 9 Oct 2019 13:35:39 +0100
Subject: chore(build): Use separate GCS bucket for CI runs
This has become an issue recently with changes such as GZIP
compression, where CI runs no longer work because they conflict with
the production bucket for the public instance.
---
tools/nixery/.travis.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 5f0bbfd3b40f..31d485d7284c 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -26,7 +26,7 @@ script:
docker run -d -p 8080:8080 --name nixery \
-v ${PWD}/test-files:/var/nixery \
-e PORT=8080 \
- -e BUCKET=nixery-layers \
+ -e BUCKET=nixery-ci-tests \
-e GOOGLE_CLOUD_PROJECT=nixery \
-e GOOGLE_APPLICATION_CREDENTIALS=/var/nixery/key.json \
-e GCS_SIGNING_ACCOUNT="${GCS_SIGNING_ACCOUNT}" \
--
cgit 1.4.1
From 0693e371d66bfe3de2d97ab80e9c9684ec8abc34 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 11 Oct 2019 01:28:38 +0100
Subject: feat(server): Apply GZIP compression to all image layers
This fixes #62
---
tools/nixery/build-image/build-image.nix | 4 ++--
tools/nixery/server/builder/archive.go | 14 ++++++++++----
tools/nixery/server/builder/builder.go | 2 +-
tools/nixery/server/manifest/manifest.go | 4 ++--
4 files changed, 15 insertions(+), 9 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index 68a061290ed5..b78ee6626464 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -126,9 +126,9 @@ let
# Image layer that contains the symlink forest created above. This
# must be included in the image to ensure that the filesystem has a
# useful layout at runtime.
- symlinkLayer = runCommand "symlink-layer.tar" {} ''
+ symlinkLayer = runCommand "symlink-layer.tar.gz" {} ''
cp -r ${contentsEnv}/ ./layer
- tar --transform='s|^\./||' -C layer --sort=name --mtime="@$SOURCE_DATE_EPOCH" --owner=0 --group=0 -cf $out .
+ tar --transform='s|^\./||' -C layer --sort=name --mtime="@$SOURCE_DATE_EPOCH" --owner=0 --group=0 -czf $out .
'';
# Metadata about the symlink layer which is required for serving it.
diff --git a/tools/nixery/server/builder/archive.go b/tools/nixery/server/builder/archive.go
index 63ea9c73814d..55b3c2a8b79c 100644
--- a/tools/nixery/server/builder/archive.go
+++ b/tools/nixery/server/builder/archive.go
@@ -9,6 +9,7 @@ package builder
import (
"archive/tar"
+ "compress/gzip"
"io"
"os"
"path/filepath"
@@ -16,10 +17,11 @@ import (
"github.com/google/nixery/server/layers"
)
-// Create a new tarball from each of the paths in the list and write the tarball
-// to the supplied writer.
-func tarStorePaths(l *layers.Layer, w io.Writer) error {
- t := tar.NewWriter(w)
+// Create a new compressed tarball from each of the paths in the list
+// and write it to the supplied writer.
+func packStorePaths(l *layers.Layer, w io.Writer) error {
+ gz := gzip.NewWriter(w)
+ t := tar.NewWriter(gz)
for _, path := range l.Contents {
err := filepath.Walk(path, tarStorePath(t))
@@ -32,6 +34,10 @@ func tarStorePaths(l *layers.Layer, w io.Writer) error {
return err
}
+ if err := gz.Close(); err != nil {
+ return err
+ }
+
return nil
}
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 78d09b55b1ec..748ff5f67d7b 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -270,7 +270,7 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
} else {
lh := l.Hash()
lw := func(w io.Writer) error {
- return tarStorePaths(&l, w)
+ return packStorePaths(&l, w)
}
entry, err := uploadHashLayer(ctx, s, lh, lw)
diff --git a/tools/nixery/server/manifest/manifest.go b/tools/nixery/server/manifest/manifest.go
index 2f236178b65f..8e65fa223b18 100644
--- a/tools/nixery/server/manifest/manifest.go
+++ b/tools/nixery/server/manifest/manifest.go
@@ -15,7 +15,7 @@ const (
// media types
manifestType = "application/vnd.docker.distribution.manifest.v2+json"
- layerType = "application/vnd.docker.image.rootfs.diff.tar"
+ layerType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
configType = "application/vnd.docker.container.image.v1+json"
// image config constants
@@ -102,7 +102,7 @@ func Manifest(layers []Entry) (json.RawMessage, ConfigLayer) {
hashes := make([]string, len(layers))
for i, l := range layers {
- l.MediaType = "application/vnd.docker.image.rootfs.diff.tar"
+ l.MediaType = layerType
layers[i] = l
hashes[i] = l.Digest
}
--
cgit 1.4.1
From e22ff5d176ba53deecf59737d402cac5d0cb9433 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 11 Oct 2019 11:57:14 +0100
Subject: fix(server): Use uncompressed tarball hashes in image config
Docker expects hashes of compressed tarballs in the manifest (as these
are used to fetch from the content-addressable layer store), but for
some reason it expects hashes in the configuration layer to be of
uncompressed tarballs.
To achieve this, an additional SHA256 hash is calculated while
creating the layer tarballs, but before passing them to the gzip
writer.
In the current setup the symlink layer is first compressed and
then decompressed again to calculate its hash. This can be refactored
in a future change.
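A condensed sketch of this arrangement, assuming a caller that fills in the tar archive: the tar writer targets an `io.MultiWriter`, so the SHA256 of the uncompressed tarball is computed before the bytes reach the gzip writer. The helper name and signature are illustrative; the actual implementation is the `packStorePaths` change in the diff below.
```go
package archive

import (
	"archive/tar"
	"compress/gzip"
	"crypto/sha256"
	"fmt"
	"io"
)

// packAndHash writes a gzip-compressed tarball to w while returning the
// SHA256 hash of the *uncompressed* tar stream, which Docker expects in
// the image configuration.
func packAndHash(w io.Writer, fill func(*tar.Writer) error) (string, error) {
	shasum := sha256.New()
	gz := gzip.NewWriter(w)

	// Everything written by the tar writer goes both to the hash and
	// to the gzip writer (and from there, compressed, to w).
	t := tar.NewWriter(io.MultiWriter(shasum, gz))

	if err := fill(t); err != nil {
		return "", err
	}
	if err := t.Close(); err != nil {
		return "", err
	}
	if err := gz.Close(); err != nil {
		return "", err
	}

	return fmt.Sprintf("sha256:%x", shasum.Sum(nil)), nil
}
```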
---
tools/nixery/build-image/build-image.nix | 9 ++++++---
tools/nixery/server/builder/archive.go | 19 +++++++++++++------
tools/nixery/server/builder/builder.go | 24 +++++++++++++++++++-----
tools/nixery/server/manifest/manifest.go | 6 ++++--
4 files changed, 42 insertions(+), 16 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index b78ee6626464..ab785006a9af 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -137,11 +137,14 @@ let
symlinkLayerMeta = fromJSON (readFile (runCommand "symlink-layer-meta.json" {
buildInputs = with pkgs; [ coreutils jq openssl ];
}''
- layerSha256=$(sha256sum ${symlinkLayer} | cut -d ' ' -f1)
+ gzipHash=$(sha256sum ${symlinkLayer} | cut -d ' ' -f1)
+ tarHash=$(cat ${symlinkLayer} | gzip -d | sha256sum | cut -d ' ' -f1)
layerSize=$(stat --printf '%s' ${symlinkLayer})
- jq -n -c --arg sha256 $layerSha256 --arg size $layerSize --arg path ${symlinkLayer} \
- '{ size: ($size | tonumber), sha256: $sha256, path: $path }' >> $out
+ jq -n -c --arg gzipHash $gzipHash --arg tarHash $tarHash --arg size $layerSize \
+ --arg path ${symlinkLayer} \
+ '{ size: ($size | tonumber), tarHash: $tarHash, gzipHash: $gzipHash, path: $path }' \
+ >> $out
''));
# Final output structure returned to Nixery if the build succeeded
diff --git a/tools/nixery/server/builder/archive.go b/tools/nixery/server/builder/archive.go
index 55b3c2a8b79c..a3fb99882fd6 100644
--- a/tools/nixery/server/builder/archive.go
+++ b/tools/nixery/server/builder/archive.go
@@ -10,6 +10,8 @@ package builder
import (
"archive/tar"
"compress/gzip"
+ "crypto/sha256"
+ "fmt"
"io"
"os"
"path/filepath"
@@ -19,26 +21,31 @@ import (
// Create a new compressed tarball from each of the paths in the list
// and write it to the supplied writer.
-func packStorePaths(l *layers.Layer, w io.Writer) error {
+//
+// The uncompressed tarball is hashed because image manifests must
+// contain both the hashes of compressed and uncompressed layers.
+func packStorePaths(l *layers.Layer, w io.Writer) (string, error) {
+ shasum := sha256.New()
gz := gzip.NewWriter(w)
- t := tar.NewWriter(gz)
+ multi := io.MultiWriter(shasum, gz)
+ t := tar.NewWriter(multi)
for _, path := range l.Contents {
err := filepath.Walk(path, tarStorePath(t))
if err != nil {
- return err
+ return "", err
}
}
if err := t.Close(); err != nil {
- return err
+ return "", err
}
if err := gz.Close(); err != nil {
- return err
+ return "", err
}
- return nil
+ return fmt.Sprintf("sha256:%x", shasum.Sum([]byte{})), nil
}
func tarStorePath(w *tar.Writer) filepath.WalkFunc {
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 748ff5f67d7b..39befd0fb8f3 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -117,9 +117,10 @@ type ImageResult struct {
// These fields are populated in case of success
Graph layers.RuntimeGraph `json:"runtimeGraph"`
SymlinkLayer struct {
- Size int `json:"size"`
- SHA256 string `json:"sha256"`
- Path string `json:"path"`
+ Size int `json:"size"`
+ TarHash string `json:"tarHash"`
+ GzipHash string `json:"gzipHash"`
+ Path string `json:"path"`
} `json:"symlinkLayer"`
}
@@ -269,8 +270,18 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
entries = append(entries, *entry)
} else {
lh := l.Hash()
+
+ // While packing store paths, the SHA sum of
+ // the uncompressed layer is computed and
+ // written to `tarhash`.
+ //
+ // TODO(tazjin): Refactor this to make the
+ // flow of data cleaner.
+ var tarhash string
lw := func(w io.Writer) error {
- return packStorePaths(&l, w)
+ var err error
+ tarhash, err = packStorePaths(&l, w)
+ return err
}
entry, err := uploadHashLayer(ctx, s, lh, lw)
@@ -278,6 +289,7 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
return nil, err
}
entry.MergeRating = l.MergeRating
+ entry.TarHash = tarhash
var pkgs []string
for _, p := range l.Contents {
@@ -287,6 +299,7 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
log.WithFields(log.Fields{
"layer": lh,
"packages": pkgs,
+ "tarhash": tarhash,
}).Info("created image layer")
go cacheLayer(ctx, s, l.Hash(), *entry)
@@ -296,7 +309,7 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
// Symlink layer (built in the first Nix build) needs to be
// included here manually:
- slkey := result.SymlinkLayer.SHA256
+ slkey := result.SymlinkLayer.GzipHash
entry, err := uploadHashLayer(ctx, s, slkey, func(w io.Writer) error {
f, err := os.Open(result.SymlinkLayer.Path)
if err != nil {
@@ -318,6 +331,7 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
return nil, err
}
+ entry.TarHash = "sha256:" + result.SymlinkLayer.TarHash
go cacheLayer(ctx, s, slkey, *entry)
entries = append(entries, *entry)
diff --git a/tools/nixery/server/manifest/manifest.go b/tools/nixery/server/manifest/manifest.go
index 8e65fa223b18..8ad828239591 100644
--- a/tools/nixery/server/manifest/manifest.go
+++ b/tools/nixery/server/manifest/manifest.go
@@ -29,9 +29,10 @@ type Entry struct {
Size int64 `json:"size"`
Digest string `json:"digest"`
- // This field is internal to Nixery and not part of the
+ // These fields are internal to Nixery and not part of the
// serialised entry.
MergeRating uint64 `json:"-"`
+ TarHash string `json:",omitempty"`
}
type manifest struct {
@@ -102,9 +103,10 @@ func Manifest(layers []Entry) (json.RawMessage, ConfigLayer) {
hashes := make([]string, len(layers))
for i, l := range layers {
+ hashes[i] = l.TarHash
l.MediaType = layerType
+ l.TarHash = ""
layers[i] = l
- hashes[i] = l.Digest
}
c := configLayer(hashes)
--
cgit 1.4.1
From 1853c74998953048478b8526f408f8c57b129958 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 11 Oct 2019 12:26:18 +0100
Subject: refactor(server): Only compress symlink forest layer once
Instead of compressing and then decompressing again to get the underlying
tar hash, use a mechanism similar to the one for store path layers and
only compress the symlink layer once, while uploading it.
---
tools/nixery/build-image/build-image.nix | 13 +++++--------
tools/nixery/server/builder/builder.go | 27 +++++++++++++++++++--------
2 files changed, 24 insertions(+), 16 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index ab785006a9af..d4df707a8a3a 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -126,9 +126,9 @@ let
# Image layer that contains the symlink forest created above. This
# must be included in the image to ensure that the filesystem has a
# useful layout at runtime.
- symlinkLayer = runCommand "symlink-layer.tar.gz" {} ''
+ symlinkLayer = runCommand "symlink-layer.tar" {} ''
cp -r ${contentsEnv}/ ./layer
- tar --transform='s|^\./||' -C layer --sort=name --mtime="@$SOURCE_DATE_EPOCH" --owner=0 --group=0 -czf $out .
+ tar --transform='s|^\./||' -C layer --sort=name --mtime="@$SOURCE_DATE_EPOCH" --owner=0 --group=0 -cf $out .
'';
# Metadata about the symlink layer which is required for serving it.
@@ -137,14 +137,11 @@ let
symlinkLayerMeta = fromJSON (readFile (runCommand "symlink-layer-meta.json" {
buildInputs = with pkgs; [ coreutils jq openssl ];
}''
- gzipHash=$(sha256sum ${symlinkLayer} | cut -d ' ' -f1)
- tarHash=$(cat ${symlinkLayer} | gzip -d | sha256sum | cut -d ' ' -f1)
+ tarHash=$(sha256sum ${symlinkLayer} | cut -d ' ' -f1)
layerSize=$(stat --printf '%s' ${symlinkLayer})
- jq -n -c --arg gzipHash $gzipHash --arg tarHash $tarHash --arg size $layerSize \
- --arg path ${symlinkLayer} \
- '{ size: ($size | tonumber), tarHash: $tarHash, gzipHash: $gzipHash, path: $path }' \
- >> $out
+ jq -n -c --arg tarHash $tarHash --arg size $layerSize --arg path ${symlinkLayer} \
+ '{ size: ($size | tonumber), tarHash: $tarHash, path: $path }' >> $out
''));
# Final output structure returned to Nixery if the build succeeded
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 39befd0fb8f3..2a3aa182ac14 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -20,6 +20,7 @@ package builder
import (
"bufio"
"bytes"
+ "compress/gzip"
"context"
"crypto/sha256"
"encoding/json"
@@ -117,10 +118,9 @@ type ImageResult struct {
// These fields are populated in case of success
Graph layers.RuntimeGraph `json:"runtimeGraph"`
SymlinkLayer struct {
- Size int `json:"size"`
- TarHash string `json:"tarHash"`
- GzipHash string `json:"gzipHash"`
- Path string `json:"path"`
+ Size int `json:"size"`
+ TarHash string `json:"tarHash"`
+ Path string `json:"path"`
} `json:"symlinkLayer"`
}
@@ -309,7 +309,7 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
// Symlink layer (built in the first Nix build) needs to be
// included here manually:
- slkey := result.SymlinkLayer.GzipHash
+ slkey := result.SymlinkLayer.TarHash
entry, err := uploadHashLayer(ctx, s, slkey, func(w io.Writer) error {
f, err := os.Open(result.SymlinkLayer.Path)
if err != nil {
@@ -317,14 +317,25 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
"image": image.Name,
"tag": image.Tag,
"layer": slkey,
- }).Error("failed to upload symlink layer")
+ }).Error("failed to open symlink layer")
return err
}
defer f.Close()
- _, err = io.Copy(w, f)
- return err
+ gz := gzip.NewWriter(w)
+ _, err = io.Copy(gz, f)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "image": image.Name,
+ "tag": image.Tag,
+ "layer": slkey,
+ }).Error("failed to upload symlink layer")
+
+ return err
+ }
+
+ return gz.Close()
})
if err != nil {
--
cgit 1.4.1
From cca835ae37cc35f3cae80afe5af8049009a6aa89 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 11 Oct 2019 13:55:10 +0100
Subject: fix(build): Only take the first matching hash for source hashing
Some Nix download mechanisms add a second hash to the store path, which
previously ended up in the source hash output (breaking argument
interpolation).
---
tools/nixery/default.nix | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 20a5b50220e1..422148d62615 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -23,7 +23,7 @@ rec {
# builds to distinguish errors between deployed versions, see
# server/logs.go for details.
nixery-src-hash = pkgs.runCommand "nixery-src-hash" {} ''
- echo ${./.} | grep -Eo '[a-z0-9]{32}' > $out
+ echo ${./.} | grep -Eo '[a-z0-9]{32}' | head -c 32 > $out
'';
# Go implementation of the Nixery server which implements the
--
cgit 1.4.1
From 3a5db4f9f184d38799cda1ca83039d11ff457c04 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 27 Oct 2019 13:36:53 +0100
Subject: refactor(server): Load GCS signing key from service account key
The JSON file generated for service account keys already contains the
required information for signing URLs in GCS, so the environment
variables for toggling signing behaviour have been removed.
Signing is now enabled automatically in the presence of service
account credentials (i.e. `GOOGLE_APPLICATION_CREDENTIALS`).
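For illustration, a rough sketch of the flow described above, under the assumption that signing only needs the account email and private key from the key file: the JSON is parsed with `google.JWTConfigFromJSON` and the result feeds `storage.SignedURLOptions`. The bucket name, object name and expiry are placeholders, not values taken from Nixery.
```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"time"

	"cloud.google.com/go/storage"
	"golang.org/x/oauth2/google"
)

func main() {
	// The service account key JSON already contains both the account
	// email and the private key required for URL signing.
	key, err := ioutil.ReadFile(os.Getenv("GOOGLE_APPLICATION_CREDENTIALS"))
	if err != nil {
		log.Fatal(err)
	}

	conf, err := google.JWTConfigFromJSON(key)
	if err != nil {
		log.Fatal(err)
	}

	opts := &storage.SignedURLOptions{
		Scheme:         storage.SigningSchemeV4,
		GoogleAccessID: conf.Email,
		PrivateKey:     conf.PrivateKey,
		Method:         "GET",
		Expires:        time.Now().Add(5 * time.Minute),
	}

	// Placeholder bucket and object names.
	url, err := storage.SignedURL("example-bucket", "layers/example", opts)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(url)
}
```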
---
tools/nixery/server/config/config.go | 28 ++++++++++++++++------------
1 file changed, 16 insertions(+), 12 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index fe05734ee6ac..6c1baafce8c1 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -23,29 +23,33 @@ import (
"cloud.google.com/go/storage"
log "github.com/sirupsen/logrus"
+ "golang.org/x/oauth2/google"
)
-// Load (optional) GCS bucket signing data from the GCS_SIGNING_KEY and
-// GCS_SIGNING_ACCOUNT envvars.
+// Configure GCS URL signing in the presence of a service account key
+// (toggled if the user has set GOOGLE_APPLICATION_CREDENTIALS).
func signingOptsFromEnv() *storage.SignedURLOptions {
- path := os.Getenv("GCS_SIGNING_KEY")
- id := os.Getenv("GCS_SIGNING_ACCOUNT")
-
- if path == "" || id == "" {
- log.Info("GCS URL signing disabled")
+ path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
+ if path == "" {
return nil
}
- log.WithField("account", id).Info("GCS URL signing enabled")
+ key, err := ioutil.ReadFile(path)
+ if err != nil {
+ log.WithError(err).WithField("file", path).Fatal("failed to read service account key")
+ }
- k, err := ioutil.ReadFile(path)
+ conf, err := google.JWTConfigFromJSON(key)
if err != nil {
- log.WithError(err).WithField("file", path).Fatal("failed to read GCS signing key")
+ log.WithError(err).WithField("file", path).Fatal("failed to parse service account key")
}
+ log.WithField("account", conf.Email).Info("GCS URL signing enabled")
+
return &storage.SignedURLOptions{
- GoogleAccessID: id,
- PrivateKey: k,
+ Scheme: storage.SigningSchemeV4,
+ GoogleAccessID: conf.Email,
+ PrivateKey: conf.PrivateKey,
Method: "GET",
}
}
--
cgit 1.4.1
From 7b7d21205fb5288f1772d6ea4baff080565ebd9e Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 27 Oct 2019 13:42:24 +0100
Subject: docs: Update GCS signing key documentation
This key is now taken straight from the configured service account
key.
---
tools/nixery/README.md | 18 ++++++++++--------
tools/nixery/docs/src/run-your-own.md | 8 ++++----
2 files changed, 14 insertions(+), 12 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 3026451c74e0..1574d5950a22 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -79,15 +79,17 @@ variables:
* `NIXERY_CHANNEL`: The name of a Nix/NixOS channel to use for building
* `NIXERY_PKGS_REPO`: URL of a git repository containing a package set (uses
locally configured SSH/git credentials)
-* `NIXERY_PKGS_PATH`: A local filesystem path containing a Nix package set to use
- for building
+* `NIXERY_PKGS_PATH`: A local filesystem path containing a Nix package set to
+ use for building
* `NIX_TIMEOUT`: Number of seconds that any Nix builder is allowed to run
- (defaults to 60
-* `NIX_POPULARITY_URL`: URL to a file containing popularity data for the package set (see `popcount/`)
-* `GCS_SIGNING_KEY`: A Google service account key (in PEM format) that can be
- used to sign Cloud Storage URLs
-* `GCS_SIGNING_ACCOUNT`: Google service account ID that the signing key belongs
- to
+ (defaults to 60)
+* `NIX_POPULARITY_URL`: URL to a file containing popularity data for
+ the package set (see `popcount/`)
+
+If the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is set to a service
+account key, Nixery will also use this key to create [signed URLs][] for layers
+in the storage bucket. This makes it possible to serve layers from a bucket
+without having to make them publicly available.
## Roadmap
diff --git a/tools/nixery/docs/src/run-your-own.md b/tools/nixery/docs/src/run-your-own.md
index 7a294f56055e..ffddec32db5f 100644
--- a/tools/nixery/docs/src/run-your-own.md
+++ b/tools/nixery/docs/src/run-your-own.md
@@ -85,15 +85,15 @@ You may set *all* of these:
* `NIX_TIMEOUT`: Number of seconds that any Nix builder is allowed to run
(defaults to 60)
-* `GCS_SIGNING_KEY`: A Google service account key (in PEM format) that can be
- used to [sign Cloud Storage URLs][signed-urls]
-* `GCS_SIGNING_ACCOUNT`: Google service account ID that the signing key belongs
- to
To authenticate to the configured GCS bucket, Nixery uses Google's [Application
Default Credentials][ADC]. Depending on your environment this may require
additional configuration.
+If the `GOOGLE_APPLICATION_CREDENTIALS` environment is configured, the service
+account's private key will be used to create [signed URLs for
+layers][signed-urls].
+
## 4. Deploy Nixery
With the above environment variables configured, you can run the image that was
--
cgit 1.4.1
From ffe58d6cb510d0274ea1afed3e0e2b44c69e32c2 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 27 Oct 2019 15:33:14 +0100
Subject: refactor(build): Do not expose nixery-server attribute
In most cases this is not useful for users without the wrapper script,
so users should always build nixery-bin anyway.
---
tools/nixery/default.nix | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 422148d62615..3541037965e9 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -18,7 +18,7 @@
with pkgs;
-rec {
+let
# Hash of all Nixery sources - this is used as the Nixery version in
# builds to distinguish errors between deployed versions, see
# server/logs.go for details.
@@ -29,13 +29,11 @@ rec {
# Go implementation of the Nixery server which implements the
# container registry interface.
#
- # Users will usually not want to use this directly, instead see the
- # 'nixery' derivation below, which automatically includes runtime
- # data dependencies.
+ # Users should use the nixery-bin derivation below instead.
nixery-server = callPackage ./server {
srcHash = nixery-src-hash;
};
-
+in rec {
# Implementation of the Nix image building logic
nixery-build-image = import ./build-image { inherit pkgs; };
--
cgit 1.4.1
From f7d16c5d454ea2aee65e7180e19a9bb891178bbb Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 27 Oct 2019 16:49:54 +0100
Subject: refactor(server): Introduce pluggable interface for storage backends
This abstracts over the functionality of Google Cloud Storage and
other potential underlying storage backends to make it possible to
replace these in Nixery.
The GCS backend is not yet reimplemented.
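Based purely on the call sites visible in this diff (`Name`, `Persist`, `Fetch` and `Move`), the new backend abstraction plausibly looks something like the sketch below; the actual definition lives in `server/storage/storage.go` and may include further methods, for example for serving layers directly.
```go
package storage

import "io"

// Backend abstracts over the operations Nixery needs from a storage
// backend, as inferred from its usage in builder.go and cache.go.
type Backend interface {
	// Name returns a human-readable backend name for log fields.
	Name() string

	// Persist streams the data written by the callback to the given
	// path and returns the SHA256 hash and size that the callback
	// reports for the written data.
	Persist(path string, f func(io.Writer) (string, int64, error)) (string, int64, error)

	// Fetch returns a reader for the object stored at the given path.
	Fetch(path string) (io.ReadCloser, error)

	// Move renames an object within the storage backend.
	Move(old, new string) error
}
```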
---
tools/nixery/server/builder/builder.go | 101 ++++++++-------------------------
tools/nixery/server/builder/cache.go | 94 +++++++++++++-----------------
tools/nixery/server/config/config.go | 45 ++-------------
tools/nixery/server/main.go | 66 +++------------------
tools/nixery/server/storage/storage.go | 34 +++++++++++
5 files changed, 111 insertions(+), 229 deletions(-)
create mode 100644 tools/nixery/server/storage/storage.go
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 2a3aa182ac14..e2982b993dac 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -28,18 +28,16 @@ import (
"io"
"io/ioutil"
"net/http"
- "net/url"
"os"
"os/exec"
"sort"
"strings"
- "cloud.google.com/go/storage"
"github.com/google/nixery/server/config"
"github.com/google/nixery/server/layers"
"github.com/google/nixery/server/manifest"
+ "github.com/google/nixery/server/storage"
log "github.com/sirupsen/logrus"
- "golang.org/x/oauth2/google"
)
// The maximum number of layers in an image is 125. To allow for
@@ -47,19 +45,16 @@ import (
// use up is set at a lower point.
const LayerBudget int = 94
-// API scope needed for renaming objects in GCS
-const gcsScope = "https://www.googleapis.com/auth/devstorage.read_write"
-
// HTTP client to use for direct calls to APIs that are not part of the SDK
var client = &http.Client{}
// State holds the runtime state that is carried around in Nixery and
// passed to builder functions.
type State struct {
- Bucket *storage.BucketHandle
- Cache *LocalCache
- Cfg config.Config
- Pop layers.Popularity
+ Storage storage.Backend
+ Cache *LocalCache
+ Cfg config.Config
+ Pop layers.Popularity
}
// Image represents the information necessary for building a container image.
@@ -349,53 +344,6 @@ func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageRes
return entries, nil
}
-// renameObject renames an object in the specified Cloud Storage
-// bucket.
-//
-// The Go API for Cloud Storage does not support renaming objects, but
-// the HTTP API does. The code below makes the relevant call manually.
-func renameObject(ctx context.Context, s *State, old, new string) error {
- bucket := s.Cfg.Bucket
-
- creds, err := google.FindDefaultCredentials(ctx, gcsScope)
- if err != nil {
- return err
- }
-
- token, err := creds.TokenSource.Token()
- if err != nil {
- return err
- }
-
- // as per https://cloud.google.com/storage/docs/renaming-copying-moving-objects#rename
- url := fmt.Sprintf(
- "https://www.googleapis.com/storage/v1/b/%s/o/%s/rewriteTo/b/%s/o/%s",
- url.PathEscape(bucket), url.PathEscape(old),
- url.PathEscape(bucket), url.PathEscape(new),
- )
-
- req, err := http.NewRequest("POST", url, nil)
- req.Header.Add("Authorization", "Bearer "+token.AccessToken)
- _, err = client.Do(req)
- if err != nil {
- return err
- }
-
- // It seems that 'rewriteTo' copies objects instead of
- // renaming/moving them, hence a deletion call afterwards is
- // required.
- if err = s.Bucket.Object(old).Delete(ctx); err != nil {
- log.WithError(err).WithFields(log.Fields{
- "new": new,
- "old": old,
- }).Warn("failed to delete renamed object")
-
- // this error should not break renaming and is not returned
- }
-
- return nil
-}
-
// layerWriter is the type for functions that can write a layer to the
// multiwriter used for uploading & hashing.
//
@@ -430,33 +378,32 @@ func (b *byteCounter) Write(p []byte) (n int, err error) {
// The return value is the layer's SHA256 hash, which is used in the
// image manifest.
func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter) (*manifest.Entry, error) {
- staging := s.Bucket.Object("staging/" + key)
-
- // Sets up a "multiwriter" that simultaneously runs both hash
- // algorithms and uploads to the bucket
- sw := staging.NewWriter(ctx)
- shasum := sha256.New()
- counter := &byteCounter{}
- multi := io.MultiWriter(sw, shasum, counter)
+ path := "staging/" + key
+ sha256sum, size, err := s.Storage.Persist(path, func(sw io.Writer) (string, int64, error) {
+ // Sets up a "multiwriter" that simultaneously runs both hash
+ // algorithms and uploads to the storage backend.
+ shasum := sha256.New()
+ counter := &byteCounter{}
+ multi := io.MultiWriter(sw, shasum, counter)
+
+ err := lw(multi)
+ sha256sum := fmt.Sprintf("%x", shasum.Sum([]byte{}))
+
+ return sha256sum, counter.count, err
+ })
- err := lw(multi)
if err != nil {
- log.WithError(err).WithField("layer", key).
- Error("failed to create and upload layer")
+ log.WithError(err).WithFields(log.Fields{
+ "layer": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to create and store layer")
return nil, err
}
- if err = sw.Close(); err != nil {
- log.WithError(err).WithField("layer", key).
- Error("failed to upload layer to staging")
- }
-
- sha256sum := fmt.Sprintf("%x", shasum.Sum([]byte{}))
-
// Hashes are now known and the object is in the bucket, what
// remains is to move it to the correct location and cache it.
- err = renameObject(ctx, s, "staging/"+key, "layers/"+sha256sum)
+ err = s.Storage.Move("staging/"+key, "layers/"+sha256sum)
if err != nil {
log.WithError(err).WithField("layer", key).
Error("failed to move layer from staging")
@@ -464,8 +411,6 @@ func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter)
return nil, err
}
- size := counter.count
-
log.WithFields(log.Fields{
"layer": key,
"sha256": sha256sum,
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 88bf30de4812..2af214cd9188 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -114,24 +114,18 @@ func (c *LocalCache) localCacheLayer(key string, e manifest.Entry) {
}
// Retrieve a manifest from the cache(s). First the local cache is
-// checked, then the GCS-bucket cache.
+// checked, then the storage backend.
func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessage, bool) {
if m, cached := s.Cache.manifestFromLocalCache(key); cached {
return m, true
}
- obj := s.Bucket.Object("manifests/" + key)
-
- // Probe whether the file exists before trying to fetch it.
- _, err := obj.Attrs(ctx)
+ r, err := s.Storage.Fetch("manifests/" + key)
if err != nil {
- return nil, false
- }
-
- r, err := obj.NewReader(ctx)
- if err != nil {
- log.WithError(err).WithField("manifest", key).
- Error("failed to retrieve manifest from bucket cache")
+ log.WithError(err).WithFields(log.Fields{
+ "manifest": key,
+ "backend": s.Storage.Name(),
+ }).Warn("failed to fetch manifest from storage backend")
return nil, false
}
@@ -139,8 +133,10 @@ func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessa
m, err := ioutil.ReadAll(r)
if err != nil {
- log.WithError(err).WithField("manifest", key).
- Error("failed to read cached manifest from bucket")
+ log.WithError(err).WithFields(log.Fields{
+ "manifest": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to read cached manifest from storage backend")
return nil, false
}
@@ -155,21 +151,17 @@ func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessa
func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage) {
go s.Cache.localCacheManifest(key, m)
- obj := s.Bucket.Object("manifests/" + key)
- w := obj.NewWriter(ctx)
- r := bytes.NewReader([]byte(m))
+ path := "manifests/" + key
+ _, size, err := s.Storage.Persist(path, func(w io.Writer) (string, int64, error) {
+ size, err := io.Copy(w, bytes.NewReader([]byte(m)))
+ return "", size, err
+ })
- size, err := io.Copy(w, r)
if err != nil {
- log.WithError(err).WithField("manifest", key).
- Error("failed to cache manifest to GCS")
-
- return
- }
-
- if err = w.Close(); err != nil {
- log.WithError(err).WithField("manifest", key).
- Error("failed to cache manifest to GCS")
+ log.WithError(err).WithFields(log.Fields{
+ "manifest": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to cache manifest to storage backend")
return
}
@@ -177,7 +169,8 @@ func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage)
log.WithFields(log.Fields{
"manifest": key,
"size": size,
- }).Info("cached manifest to GCS")
+ "backend": s.Storage.Name(),
+ }).Info("cached manifest to storage backend")
}
// Retrieve a layer build from the cache, first checking the local
@@ -187,16 +180,12 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
return entry, true
}
- obj := s.Bucket.Object("builds/" + key)
- _, err := obj.Attrs(ctx)
- if err != nil {
- return nil, false
- }
-
- r, err := obj.NewReader(ctx)
+ r, err := s.Storage.Fetch("builds/" + key)
if err != nil {
- log.WithError(err).WithField("layer", key).
- Error("failed to retrieve cached layer from GCS")
+ log.WithError(err).WithFields(log.Fields{
+ "layer": key,
+ "backend": s.Storage.Name(),
+ }).Warn("failed to retrieve cached layer from storage backend")
return nil, false
}
@@ -205,8 +194,10 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
jb := bytes.NewBuffer([]byte{})
_, err = io.Copy(jb, r)
if err != nil {
- log.WithError(err).WithField("layer", key).
- Error("failed to read cached layer from GCS")
+ log.WithError(err).WithFields(log.Fields{
+ "layer": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to read cached layer from storage backend")
return nil, false
}
@@ -227,24 +218,19 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
func cacheLayer(ctx context.Context, s *State, key string, entry manifest.Entry) {
s.Cache.localCacheLayer(key, entry)
- obj := s.Bucket.Object("builds/" + key)
-
j, _ := json.Marshal(&entry)
+ path := "builds/" + key
+ _, _, err := s.Storage.Persist(path, func(w io.Writer) (string, int64, error) {
+ size, err := io.Copy(w, bytes.NewReader(j))
+ return "", size, err
+ })
- w := obj.NewWriter(ctx)
-
- _, err := io.Copy(w, bytes.NewReader(j))
if err != nil {
- log.WithError(err).WithField("layer", key).
- Error("failed to cache layer")
-
- return
+ log.WithError(err).WithFields(log.Fields{
+ "layer": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to cache layer")
}
- if err = w.Close(); err != nil {
- log.WithError(err).WithField("layer", key).
- Error("failed to cache layer")
-
- return
- }
+ return
}
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index 6c1baafce8c1..ad6dff40431f 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -18,42 +18,11 @@
package config
import (
- "io/ioutil"
"os"
- "cloud.google.com/go/storage"
log "github.com/sirupsen/logrus"
- "golang.org/x/oauth2/google"
)
-// Configure GCS URL signing in the presence of a service account key
-// (toggled if the user has set GOOGLE_APPLICATION_CREDENTIALS).
-func signingOptsFromEnv() *storage.SignedURLOptions {
- path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
- if path == "" {
- return nil
- }
-
- key, err := ioutil.ReadFile(path)
- if err != nil {
- log.WithError(err).WithField("file", path).Fatal("failed to read service account key")
- }
-
- conf, err := google.JWTConfigFromJSON(key)
- if err != nil {
- log.WithError(err).WithField("file", path).Fatal("failed to parse service account key")
- }
-
- log.WithField("account", conf.Email).Info("GCS URL signing enabled")
-
- return &storage.SignedURLOptions{
- Scheme: storage.SigningSchemeV4,
- GoogleAccessID: conf.Email,
- PrivateKey: conf.PrivateKey,
- Method: "GET",
- }
-}
-
func getConfig(key, desc, def string) string {
value := os.Getenv(key)
if value == "" && def == "" {
@@ -70,13 +39,11 @@ func getConfig(key, desc, def string) string {
// Config holds the Nixery configuration options.
type Config struct {
- Bucket string // GCS bucket to cache & serve layers
- Signing *storage.SignedURLOptions // Signing options to use for GCS URLs
- Port string // Port on which to launch HTTP server
- Pkgs PkgSource // Source for Nix package set
- Timeout string // Timeout for a single Nix builder (seconds)
- WebDir string // Directory with static web assets
- PopUrl string // URL to the Nix package popularity count
+ Port string // Port on which to launch HTTP server
+ Pkgs PkgSource // Source for Nix package set
+ Timeout string // Timeout for a single Nix builder (seconds)
+ WebDir string // Directory with static web assets
+ PopUrl string // URL to the Nix package popularity count
}
func FromEnv() (Config, error) {
@@ -86,10 +53,8 @@ func FromEnv() (Config, error) {
}
return Config{
- Bucket: getConfig("BUCKET", "GCS bucket for layer storage", ""),
Port: getConfig("PORT", "HTTP port", ""),
Pkgs: pkgs,
- Signing: signingOptsFromEnv(),
Timeout: getConfig("NIX_TIMEOUT", "Nix builder timeout", "60"),
WebDir: getConfig("WEB_DIR", "Static web file dir", ""),
PopUrl: os.Getenv("NIX_POPULARITY_URL"),
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index f38fab2f2abd..22ed6f1a5e2c 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -32,9 +32,7 @@ import (
"io/ioutil"
"net/http"
"regexp"
- "time"
- "cloud.google.com/go/storage"
"github.com/google/nixery/server/builder"
"github.com/google/nixery/server/config"
"github.com/google/nixery/server/layers"
@@ -59,49 +57,6 @@ var (
layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
)
-// layerRedirect constructs the public URL of the layer object in the Cloud
-// Storage bucket, signs it and redirects the user there.
-//
-// Signing the URL allows unauthenticated clients to retrieve objects from the
-// bucket.
-//
-// The Docker client is known to follow redirects, but this might not be true
-// for all other registry clients.
-func constructLayerUrl(cfg *config.Config, digest string) (string, error) {
- log.WithField("layer", digest).Info("redirecting layer request to bucket")
- object := "layers/" + digest
-
- if cfg.Signing != nil {
- opts := *cfg.Signing
- opts.Expires = time.Now().Add(5 * time.Minute)
- return storage.SignedURL(cfg.Bucket, object, &opts)
- } else {
- return ("https://storage.googleapis.com/" + cfg.Bucket + "/" + object), nil
- }
-}
-
-// prepareBucket configures the handle to a Cloud Storage bucket in which
-// individual layers will be stored after Nix builds. Nixery does not directly
-// serve layers to registry clients, instead it redirects them to the public
-// URLs of the Cloud Storage bucket.
-//
-// The bucket is required for Nixery to function correctly, hence fatal errors
-// are generated in case it fails to be set up correctly.
-func prepareBucket(ctx context.Context, cfg *config.Config) *storage.BucketHandle {
- client, err := storage.NewClient(ctx)
- if err != nil {
- log.WithError(err).Fatal("failed to set up Cloud Storage client")
- }
-
- bkt := client.Bucket(cfg.Bucket)
-
- if _, err := bkt.Attrs(ctx); err != nil {
- log.WithError(err).WithField("bucket", cfg.Bucket).Fatal("could not access configured bucket")
- }
-
- return bkt
-}
-
// Downloads the popularity information for the package set from the
// URL specified in Nixery's configuration.
func downloadPopularity(url string) (layers.Popularity, error) {
@@ -218,16 +173,15 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
if len(layerMatches) == 3 {
digest := layerMatches[2]
- url, err := constructLayerUrl(&h.state.Cfg, digest)
-
+ storage := h.state.Storage
+ err := storage.ServeLayer(digest, w)
if err != nil {
- log.WithError(err).WithField("layer", digest).Error("failed to sign GCS URL")
- writeError(w, 500, "UNKNOWN", "could not serve layer")
- return
+ log.WithError(err).WithFields(log.Fields{
+ "layer": digest,
+ "backend": storage.Name(),
+ }).Error("failed to serve layer from storage backend")
}
- w.Header().Set("Location", url)
- w.WriteHeader(303)
return
}
@@ -243,7 +197,6 @@ func main() {
}
ctx := context.Background()
- bucket := prepareBucket(ctx, &cfg)
cache, err := builder.NewCache()
if err != nil {
log.WithError(err).Fatal("failed to instantiate build cache")
@@ -259,10 +212,9 @@ func main() {
}
state := builder.State{
- Bucket: bucket,
- Cache: &cache,
- Cfg: cfg,
- Pop: pop,
+ Cache: &cache,
+ Cfg: cfg,
+ Pop: pop,
}
log.WithFields(log.Fields{
diff --git a/tools/nixery/server/storage/storage.go b/tools/nixery/server/storage/storage.go
new file mode 100644
index 000000000000..15b8355e6ef5
--- /dev/null
+++ b/tools/nixery/server/storage/storage.go
@@ -0,0 +1,34 @@
+// Package storage implements an interface that can be implemented by
+// storage backends, such as Google Cloud Storage or the local
+// filesystem.
+package storage
+
+import (
+ "io"
+ "net/http"
+)
+
+type Backend interface {
+ // Name returns the name of the storage backend, for use in
+ // log messages and such.
+ Name() string
+
+ // Persist provides a user-supplied function with a writer
+ // that stores data in the storage backend.
+ //
+ // It needs to return the SHA256 hash of the data written as
+ // well as the total number of bytes, as those are required
+ // for the image manifest.
+ Persist(string, func(io.Writer) (string, int64, error)) (string, int64, error)
+
+ // Fetch retrieves data from the storage backend.
+ Fetch(path string) (io.ReadCloser, error)
+
+ // Move renames a path inside the storage backend. This is
+ // used for staging uploads while calculating their hashes.
+ Move(old, new string) error
+
+ // Serve provides a handler function to serve HTTP requests
+ // for layers in the storage backend.
+ ServeLayer(digest string, w http.ResponseWriter) error
+}
--
cgit 1.4.1
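The refactored uploadHashLayer above hinges on the Persist contract: the callback must report the SHA256 hash and byte count of whatever it wrote, because both are needed for the image manifest. Below is a minimal, self-contained sketch of that hash-while-writing pattern; it is not part of the patch, and the fake backend and payload are made up for illustration.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

// persistFn mirrors the shape of storage.Backend.Persist as defined above.
type persistFn func(path string, f func(io.Writer) (string, int64, error)) (string, int64, error)

func main() {
	// Fake backend that discards writes; a real backend would stream them
	// to GCS or to a file instead.
	var persist persistFn = func(path string, f func(io.Writer) (string, int64, error)) (string, int64, error) {
		return f(ioutil.Discard)
	}

	payload := strings.NewReader("example layer contents")

	sha256sum, size, err := persist("staging/example", func(w io.Writer) (string, int64, error) {
		// Hash and count the bytes while they are written to the backend,
		// in a single pass, without buffering the whole layer in memory.
		shasum := sha256.New()
		n, err := io.Copy(io.MultiWriter(w, shasum), payload)
		return fmt.Sprintf("%x", shasum.Sum(nil)), n, err
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(sha256sum, size)
}
```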
From 20e0ca53cba796a67311360d7a21c0f7c8baf78a Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 27 Oct 2019 17:59:30 +0100
Subject: feat(server): Implement GCS storage backend with new interface
The logic is mostly identical to the previous implementation, but it now
adheres to the new storage.Backend interface.
---
tools/nixery/server/storage/gcs.go | 206 +++++++++++++++++++++++++++++++++++++
1 file changed, 206 insertions(+)
create mode 100644 tools/nixery/server/storage/gcs.go
(limited to 'tools')
diff --git a/tools/nixery/server/storage/gcs.go b/tools/nixery/server/storage/gcs.go
new file mode 100644
index 000000000000..1b75722bbb77
--- /dev/null
+++ b/tools/nixery/server/storage/gcs.go
@@ -0,0 +1,206 @@
+// Google Cloud Storage backend for Nixery.
+package storage
+
+import (
+ "context"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net/http"
+ "net/url"
+ "os"
+ "time"
+
+ "cloud.google.com/go/storage"
+ log "github.com/sirupsen/logrus"
+ "golang.org/x/oauth2/google"
+)
+
+// HTTP client to use for direct calls to APIs that are not part of the SDK
+var client = &http.Client{}
+
+// API scope needed for renaming objects in GCS
+const gcsScope = "https://www.googleapis.com/auth/devstorage"
+
+type GCSBackend struct {
+ bucket string
+ handle *storage.BucketHandle
+ signing *storage.SignedURLOptions
+}
+
+// Constructs a new GCS bucket backend based on the configured
+// environment variables.
+func New() (GCSBackend, error) {
+ bucket := os.Getenv("GCS_BUCKET")
+ if bucket == "" {
+ return GCSBackend{}, fmt.Errorf("GCS_BUCKET must be configured for GCS usage")
+ }
+
+ ctx := context.Background()
+ client, err := storage.NewClient(ctx)
+ if err != nil {
+ log.WithError(err).Fatal("failed to set up Cloud Storage client")
+ }
+
+ handle := client.Bucket(bucket)
+
+ if _, err := handle.Attrs(ctx); err != nil {
+ log.WithError(err).WithField("bucket", bucket).Error("could not access configured bucket")
+ return GCSBackend{}, err
+ }
+
+ signing, err := signingOptsFromEnv()
+ if err != nil {
+ log.WithError(err).Error("failed to configure GCS bucket signing")
+ return GCSBackend{}, err
+ }
+
+ return GCSBackend{
+ bucket: bucket,
+ handle: handle,
+ signing: signing,
+ }, nil
+}
+
+func (b *GCSBackend) Name() string {
+ return "Google Cloud Storage (" + b.bucket + ")"
+}
+
+func (b *GCSBackend) Persist(path string, f func(io.Writer) (string, int, error)) (string, int, error) {
+ ctx := context.Background()
+ obj := b.handle.Object(path)
+ w := obj.NewWriter(ctx)
+
+ hash, size, err := f(w)
+ if err != nil {
+ log.WithError(err).WithField("path", path).Error("failed to upload to GCS")
+ return hash, size, err
+ }
+
+ return hash, size, w.Close()
+}
+
+func (b *GCSBackend) Fetch(path string) (io.ReadCloser, error) {
+ ctx := context.Background()
+ obj := b.handle.Object(path)
+
+ // Probe whether the file exists before trying to fetch it
+ _, err := obj.Attrs(ctx)
+ if err != nil {
+ return nil, err
+ }
+
+ return obj.NewReader(ctx)
+}
+
+// renameObject renames an object in the specified Cloud Storage
+// bucket.
+//
+// The Go API for Cloud Storage does not support renaming objects, but
+// the HTTP API does. The code below makes the relevant call manually.
+func (b *GCSBackend) Move(old, new string) error {
+ ctx := context.Background()
+ creds, err := google.FindDefaultCredentials(ctx, gcsScope)
+ if err != nil {
+ return err
+ }
+
+ token, err := creds.TokenSource.Token()
+ if err != nil {
+ return err
+ }
+
+ // as per https://cloud.google.com/storage/docs/renaming-copying-moving-objects#rename
+ url := fmt.Sprintf(
+ "https://www.googleapis.com/storage/v1/b/%s/o/%s/rewriteTo/b/%s/o/%s",
+ url.PathEscape(b.bucket), url.PathEscape(old),
+ url.PathEscape(b.bucket), url.PathEscape(new),
+ )
+
+ req, err := http.NewRequest("POST", url, nil)
+ req.Header.Add("Authorization", "Bearer "+token.AccessToken)
+ _, err = client.Do(req)
+ if err != nil {
+ return err
+ }
+
+ // It seems that 'rewriteTo' copies objects instead of
+ // renaming/moving them, hence a deletion call afterwards is
+ // required.
+ if err = b.handle.Object(old).Delete(ctx); err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "new": new,
+ "old": old,
+ }).Warn("failed to delete renamed object")
+
+ // this error should not break renaming and is not returned
+ }
+
+ return nil
+}
+
+func (b *GCSBackend) Serve(digest string, w http.ResponseWriter) error {
+ url, err := b.constructLayerUrl(digest)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "layer": digest,
+ "bucket": b.bucket,
+ }).Error("failed to sign GCS URL")
+
+ return err
+ }
+
+ w.Header().Set("Location", url)
+ w.WriteHeader(303)
+ return nil
+}
+
+// Configure GCS URL signing in the presence of a service account key
+// (toggled if the user has set GOOGLE_APPLICATION_CREDENTIALS).
+func signingOptsFromEnv() (*storage.SignedURLOptions, error) {
+ path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
+ if path == "" {
+ // No credentials configured -> no URL signing
+ return nil, nil
+ }
+
+ key, err := ioutil.ReadFile(path)
+ if err != nil {
+ return nil, fmt.Errorf("failed to read service account key: %s", err)
+ }
+
+ conf, err := google.JWTConfigFromJSON(key)
+ if err != nil {
+ return nil, fmt.Errorf("failed to parse service account key: %s", err)
+ }
+
+ log.WithField("account", conf.Email).Info("GCS URL signing enabled")
+
+ return &storage.SignedURLOptions{
+ Scheme: storage.SigningSchemeV4,
+ GoogleAccessID: conf.Email,
+ PrivateKey: conf.PrivateKey,
+ Method: "GET",
+ }, nil
+}
+
+// layerRedirect constructs the public URL of the layer object in the Cloud
+// Storage bucket, signs it and redirects the user there.
+//
+// Signing the URL allows unauthenticated clients to retrieve objects from the
+// bucket.
+//
+// The Docker client is known to follow redirects, but this might not be true
+// for all other registry clients.
+func (b *GCSBackend) constructLayerUrl(digest string) (string, error) {
+ log.WithField("layer", digest).Info("redirecting layer request to bucket")
+ object := "layers/" + digest
+
+ if b.signing != nil {
+ opts := *b.signing
+ opts.Expires = time.Now().Add(5 * time.Minute)
+ return storage.SignedURL(b.bucket, object, &opts)
+ } else {
+ return ("https://storage.googleapis.com/" + b.bucket + "/" + object), nil
+ }
+}
--
cgit 1.4.1
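The Move implementation above relies on the JSON API's rewriteTo call followed by a delete, but it discards the HTTP response of the rewrite. Below is a small sketch of the same call with status checking added; the check and the test server are additions for illustration and not part of the patch.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// gcsRewrite sketches the manual 'rewriteTo' call used by Move above,
// extended so that failed rewrites are surfaced instead of silently ignored.
// The rewrite URL and token are assumed to be built exactly as in the patch.
func gcsRewrite(client *http.Client, rewriteURL, accessToken string) error {
	req, err := http.NewRequest("POST", rewriteURL, nil)
	if err != nil {
		return err
	}
	req.Header.Add("Authorization", "Bearer "+accessToken)

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GCS rewrite failed with status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Stand-in for the GCS endpoint so the sketch can run without credentials.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()

	fmt.Println(gcsRewrite(srv.Client(), srv.URL, "fake-token"))
}
```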
From e8fd6b67348610ff7bbd4585036567b9e56945b7 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 27 Oct 2019 18:33:57 +0100
Subject: refactor(server): Change setup to create new storage backends
---
tools/nixery/server/builder/builder.go | 4 ----
tools/nixery/server/builder/cache.go | 2 +-
tools/nixery/server/config/config.go | 19 +++++++++++++++++++
tools/nixery/server/main.go | 20 +++++++++++++++++---
tools/nixery/server/storage/gcs.go | 14 +++++++-------
5 files changed, 44 insertions(+), 15 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index e2982b993dac..17ea1b8e6209 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -27,7 +27,6 @@ import (
"fmt"
"io"
"io/ioutil"
- "net/http"
"os"
"os/exec"
"sort"
@@ -45,9 +44,6 @@ import (
// use up is set at a lower point.
const LayerBudget int = 94
-// HTTP client to use for direct calls to APIs that are not part of the SDK
-var client = &http.Client{}
-
// State holds the runtime state that is carried around in Nixery and
// passed to builder functions.
type State struct {
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 2af214cd9188..4aed4b53463f 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -125,7 +125,7 @@ func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessa
log.WithError(err).WithFields(log.Fields{
"manifest": key,
"backend": s.Storage.Name(),
- })
+ }).Error("failed to fetch manifest from cache")
return nil, false
}
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index ad6dff40431f..6cc69fa1f599 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -37,6 +37,13 @@ func getConfig(key, desc, def string) string {
return value
}
+// Backend represents the possible storage backend types
+type Backend int
+
+const (
+ GCS = iota
+)
+
// Config holds the Nixery configuration options.
type Config struct {
Port string // Port on which to launch HTTP server
@@ -44,6 +51,7 @@ type Config struct {
Timeout string // Timeout for a single Nix builder (seconds)
WebDir string // Directory with static web assets
PopUrl string // URL to the Nix package popularity count
+ Backend Backend // Storage backend to use for Nixery
}
func FromEnv() (Config, error) {
@@ -52,11 +60,22 @@ func FromEnv() (Config, error) {
return Config{}, err
}
+ var b Backend
+ switch os.Getenv("NIXERY_STORAGE_BACKEND") {
+ case "gcs":
+ b = GCS
+ default:
+ log.WithField("values", []string{
+ "gcs",
+ }).Fatal("NIXERY_STORAGE_BACKEND must be set to a supported value")
+ }
+
return Config{
Port: getConfig("PORT", "HTTP port", ""),
Pkgs: pkgs,
Timeout: getConfig("NIX_TIMEOUT", "Nix builder timeout", "60"),
WebDir: getConfig("WEB_DIR", "Static web file dir", ""),
PopUrl: os.Getenv("NIX_POPULARITY_URL"),
+ Backend: b,
}, nil
}
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index 22ed6f1a5e2c..b5d7091ed002 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -36,6 +36,7 @@ import (
"github.com/google/nixery/server/builder"
"github.com/google/nixery/server/config"
"github.com/google/nixery/server/layers"
+ "github.com/google/nixery/server/storage"
log "github.com/sirupsen/logrus"
)
@@ -196,6 +197,18 @@ func main() {
log.WithError(err).Fatal("failed to load configuration")
}
+ var s storage.Backend
+
+ switch cfg.Backend {
+ case config.GCS:
+ s, err = storage.NewGCSBackend()
+ }
+ if err != nil {
+ log.WithError(err).Fatal("failed to initialise storage backend")
+ }
+
+ log.WithField("backend", s.Name()).Info("initialised storage backend")
+
ctx := context.Background()
cache, err := builder.NewCache()
if err != nil {
@@ -212,9 +225,10 @@ func main() {
}
state := builder.State{
- Cache: &cache,
- Cfg: cfg,
- Pop: pop,
+ Cache: &cache,
+ Cfg: cfg,
+ Pop: pop,
+ Storage: s,
}
log.WithFields(log.Fields{
diff --git a/tools/nixery/server/storage/gcs.go b/tools/nixery/server/storage/gcs.go
index 1b75722bbb77..feb6d30d681e 100644
--- a/tools/nixery/server/storage/gcs.go
+++ b/tools/nixery/server/storage/gcs.go
@@ -30,10 +30,10 @@ type GCSBackend struct {
// Constructs a new GCS bucket backend based on the configured
// environment variables.
-func New() (GCSBackend, error) {
+func NewGCSBackend() (*GCSBackend, error) {
bucket := os.Getenv("GCS_BUCKET")
if bucket == "" {
- return GCSBackend{}, fmt.Errorf("GCS_BUCKET must be configured for GCS usage")
+ return nil, fmt.Errorf("GCS_BUCKET must be configured for GCS usage")
}
ctx := context.Background()
@@ -46,16 +46,16 @@ func New() (GCSBackend, error) {
if _, err := handle.Attrs(ctx); err != nil {
log.WithError(err).WithField("bucket", bucket).Error("could not access configured bucket")
- return GCSBackend{}, err
+ return nil, err
}
signing, err := signingOptsFromEnv()
if err != nil {
log.WithError(err).Error("failed to configure GCS bucket signing")
- return GCSBackend{}, err
+ return nil, err
}
- return GCSBackend{
+ return &GCSBackend{
bucket: bucket,
handle: handle,
signing: signing,
@@ -66,7 +66,7 @@ func (b *GCSBackend) Name() string {
return "Google Cloud Storage (" + b.bucket + ")"
}
-func (b *GCSBackend) Persist(path string, f func(io.Writer) (string, int, error)) (string, int, error) {
+func (b *GCSBackend) Persist(path string, f func(io.Writer) (string, int64, error)) (string, int64, error) {
ctx := context.Background()
obj := b.handle.Object(path)
w := obj.NewWriter(ctx)
@@ -139,7 +139,7 @@ func (b *GCSBackend) Move(old, new string) error {
return nil
}
-func (b *GCSBackend) Serve(digest string, w http.ResponseWriter) error {
+func (b *GCSBackend) ServeLayer(digest string, w http.ResponseWriter) error {
url, err := b.constructLayerUrl(digest)
if err != nil {
log.WithError(err).WithFields(log.Fields{
--
cgit 1.4.1
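The point of this refactor is that the builder only sees the storage.Backend interface, so further backends can be slotted in behind the configuration switch. The hypothetical in-memory backend below exists purely to illustrate the interface as it stands at this commit; none of it is part of Nixery.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
)

// memBackend is a hypothetical in-memory implementation of the
// storage.Backend interface shape used at this point in the series.
type memBackend struct {
	blobs map[string][]byte
}

func (m *memBackend) Name() string { return "In-memory (sketch)" }

func (m *memBackend) Persist(path string, f func(io.Writer) (string, int64, error)) (string, int64, error) {
	var buf bytes.Buffer
	hash, size, err := f(&buf)
	if err == nil {
		m.blobs[path] = buf.Bytes()
	}
	return hash, size, err
}

func (m *memBackend) Fetch(path string) (io.ReadCloser, error) {
	b, ok := m.blobs[path]
	if !ok {
		return nil, fmt.Errorf("not found: %s", path)
	}
	return ioutil.NopCloser(bytes.NewReader(b)), nil
}

func (m *memBackend) Move(old, new string) error {
	m.blobs[new] = m.blobs[old]
	delete(m.blobs, old)
	return nil
}

func (m *memBackend) ServeLayer(digest string, w http.ResponseWriter) error {
	_, err := w.Write(m.blobs["layers/"+digest])
	return err
}

func main() {
	b := &memBackend{blobs: map[string][]byte{}}

	// Same lifecycle as the builder: persist to staging, move, fetch.
	b.Persist("staging/x", func(w io.Writer) (string, int64, error) {
		n, err := io.Copy(w, bytes.NewReader([]byte("hello")))
		return "deadbeef", n, err
	})
	b.Move("staging/x", "layers/deadbeef")

	r, _ := b.Fetch("layers/deadbeef")
	data, _ := ioutil.ReadAll(r)
	fmt.Println(string(data))
}
```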
From e5bb2fc887f8d4e7216a1dfcbfa81baac2b09cfd Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 17:52:41 +0100
Subject: feat(server): Implement initial filesystem storage backend
This allows users to store and serve layers from a local filesystem
path.
---
tools/nixery/server/storage/filesystem.go | 68 +++++++++++++++++++++++++++++++
1 file changed, 68 insertions(+)
create mode 100644 tools/nixery/server/storage/filesystem.go
(limited to 'tools')
diff --git a/tools/nixery/server/storage/filesystem.go b/tools/nixery/server/storage/filesystem.go
new file mode 100644
index 000000000000..ef763da67f14
--- /dev/null
+++ b/tools/nixery/server/storage/filesystem.go
@@ -0,0 +1,68 @@
+// Filesystem storage backend for Nixery.
+package storage
+
+import (
+ "fmt"
+ "io"
+ "net/http"
+ "os"
+ "path"
+
+ log "github.com/sirupsen/logrus"
+)
+
+type FSBackend struct {
+ path string
+}
+
+func NewFSBackend(p string) (*FSBackend, error) {
+ p = path.Clean(p)
+ err := os.MkdirAll(p, 0755)
+ if err != nil {
+ return nil, fmt.Errorf("failed to create storage dir: %s", err)
+ }
+
+ return &FSBackend{p}, nil
+}
+
+func (b *FSBackend) Name() string {
+ return fmt.Sprintf("Filesystem (%s)", b.path)
+}
+
+func (b *FSBackend) Persist(key string, f func(io.Writer) (string, int64, error)) (string, int64, error) {
+ full := path.Join(b.path, key)
+ dir := path.Dir(full)
+ err := os.MkdirAll(dir, 0755)
+ if err != nil {
+ log.WithError(err).WithField("path", dir).Error("failed to create storage directory")
+ return "", 0, err
+ }
+
+ file, err := os.OpenFile(full, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
+ if err != nil {
+ log.WithError(err).WithField("file", full).Error("failed to write file")
+ return "", 0, err
+ }
+ defer file.Close()
+
+ return f(file)
+}
+
+func (b *FSBackend) Fetch(key string) (io.ReadCloser, error) {
+ full := path.Join(b.path, key)
+ return os.Open(full)
+}
+
+func (b *FSBackend) Move(old, new string) error {
+ return os.Rename(path.Join(b.path, old), path.Join(b.path, new))
+}
+
+func (b *FSBackend) ServeLayer(digest string, w http.ResponseWriter) error {
+ // http.Serve* functions attempt to be a lot more clever than
+ // I want, but I also would prefer to avoid implementing error
+ // translation myself - thus a fake request is created here.
+ req := http.Request{Method: "GET"}
+ http.ServeFile(w, &req, path.Join(b.path, "sha256:"+digest))
+
+ return nil
+}
--
cgit 1.4.1
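Taken together, the filesystem backend's lifecycle is: persist to a staging path, move the file to a content-addressed location once the hash is known, and fetch straight from disk. The runnable sketch below mimics that round trip against a throwaway directory; the key and digest are made up.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path"
)

func main() {
	// Throwaway directory standing in for the backend's storage path.
	root, err := ioutil.TempDir("", "nixery-fs-sketch")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(root)

	// "Persist" to a staging path, as the builder does before hashes are known.
	staging := path.Join(root, "staging", "somekey")
	if err := os.MkdirAll(path.Dir(staging), 0755); err != nil {
		panic(err)
	}
	if err := ioutil.WriteFile(staging, []byte("layer bytes"), 0644); err != nil {
		panic(err)
	}

	// "Move" it to its content-addressed location once the hash is known.
	final := path.Join(root, "layers", "sha256:abc123")
	if err := os.MkdirAll(path.Dir(final), 0755); err != nil {
		panic(err)
	}
	if err := os.Rename(staging, final); err != nil {
		panic(err)
	}

	// "Fetch" reads it straight back from disk.
	data, err := ioutil.ReadFile(final)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}
```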
From 167a0b32630ed86b3a053e56fa499957872d7b38 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 18:18:17 +0100
Subject: refactor(server): Pass HTTP request to storage.ServeLayer
The request object is required for some serving methods (e.g. the
filesystem one).
---
tools/nixery/server/builder/builder.go | 2 +-
tools/nixery/server/main.go | 2 +-
tools/nixery/server/storage/gcs.go | 4 +++-
tools/nixery/server/storage/storage.go | 2 +-
4 files changed, 6 insertions(+), 4 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 17ea1b8e6209..021cc662c56d 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -411,7 +411,7 @@ func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter)
"layer": key,
"sha256": sha256sum,
"size": size,
- }).Info("uploaded layer")
+ }).Info("created and persisted layer")
entry := manifest.Entry{
Digest: "sha256:" + sha256sum,
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index b5d7091ed002..282fe9773a73 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -175,7 +175,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
if len(layerMatches) == 3 {
digest := layerMatches[2]
storage := h.state.Storage
- err := storage.ServeLayer(digest, w)
+ err := storage.ServeLayer(digest, r, w)
if err != nil {
log.WithError(err).WithFields(log.Fields{
"layer": digest,
diff --git a/tools/nixery/server/storage/gcs.go b/tools/nixery/server/storage/gcs.go
index feb6d30d681e..749c7ba150e5 100644
--- a/tools/nixery/server/storage/gcs.go
+++ b/tools/nixery/server/storage/gcs.go
@@ -139,7 +139,7 @@ func (b *GCSBackend) Move(old, new string) error {
return nil
}
-func (b *GCSBackend) ServeLayer(digest string, w http.ResponseWriter) error {
+func (b *GCSBackend) ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error {
url, err := b.constructLayerUrl(digest)
if err != nil {
log.WithError(err).WithFields(log.Fields{
@@ -150,6 +150,8 @@ func (b *GCSBackend) ServeLayer(digest string, w http.ResponseWriter) error {
return err
}
+ log.WithField("layer", digest).Info("redirecting layer request to GCS bucket")
+
w.Header().Set("Location", url)
w.WriteHeader(303)
return nil
diff --git a/tools/nixery/server/storage/storage.go b/tools/nixery/server/storage/storage.go
index 15b8355e6ef5..ad10d682e93a 100644
--- a/tools/nixery/server/storage/storage.go
+++ b/tools/nixery/server/storage/storage.go
@@ -30,5 +30,5 @@ type Backend interface {
// Serve provides a handler function to serve HTTP requests
// for layers in the storage backend.
- ServeLayer(digest string, w http.ResponseWriter) error
+ ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error
}
--
cgit 1.4.1
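The reason ServeLayer now receives the request is that http.ServeFile consults it for Range and conditional request headers. The small handler below illustrates this; the storage path, the query-parameter lookup, and the digest check are assumptions for the sketch, not Nixery's actual routing.

```go
package main

import (
	"net/http"
	"path"
	"regexp"
)

// layerDir stands in for the backend's storage path; illustrative only.
const layerDir = "/var/lib/nixery/layers"

// Only hex digests are accepted, loosely mirroring the restriction that the
// registry URL regex imposes, so no path traversal is possible.
var digestRe = regexp.MustCompile(`^[a-f0-9]{64}$`)

// serveLayer hands the real request to http.ServeFile, which uses it to
// honour Range and conditional request headers.
func serveLayer(w http.ResponseWriter, r *http.Request) {
	digest := r.URL.Query().Get("digest") // illustrative; Nixery parses the /v2/ blob path instead
	if !digestRe.MatchString(digest) {
		http.Error(w, "invalid digest", http.StatusBadRequest)
		return
	}

	http.ServeFile(w, r, path.Join(layerDir, digest))
}

func main() {
	http.HandleFunc("/layer", serveLayer)
	http.ListenAndServe(":8080", nil)
}
```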
From 790bce219cf9acf01de3257fcf137a0a2833529e Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 18:19:06 +0100
Subject: feat(server): Add filesystem storage backend config options
The filesystem storage backend can be enabled by setting
`NIXERY_STORAGE_BACKEND` to `filesystem` and `STORAGE_PATH` to a disk
location from which Nixery can serve files.
---
tools/nixery/server/config/config.go | 3 +++
tools/nixery/server/main.go | 2 ++
tools/nixery/server/storage/filesystem.go | 7 ++++++-
3 files changed, 11 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
index 6cc69fa1f599..7ec102bd6cee 100644
--- a/tools/nixery/server/config/config.go
+++ b/tools/nixery/server/config/config.go
@@ -42,6 +42,7 @@ type Backend int
const (
GCS = iota
+ FileSystem
)
// Config holds the Nixery configuration options.
@@ -64,6 +65,8 @@ func FromEnv() (Config, error) {
switch os.Getenv("NIXERY_STORAGE_BACKEND") {
case "gcs":
b = GCS
+ case "filesystem":
+ b = FileSystem
default:
log.WithField("values", []string{
"gcs",
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index 282fe9773a73..f4f707313f22 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -202,6 +202,8 @@ func main() {
switch cfg.Backend {
case config.GCS:
s, err = storage.NewGCSBackend()
+ case config.FileSystem:
+ s, err = storage.NewFSBackend()
}
if err != nil {
log.WithError(err).Fatal("failed to initialise storage backend")
diff --git a/tools/nixery/server/storage/filesystem.go b/tools/nixery/server/storage/filesystem.go
index ef763da67f14..f343d67b65f8 100644
--- a/tools/nixery/server/storage/filesystem.go
+++ b/tools/nixery/server/storage/filesystem.go
@@ -15,7 +15,12 @@ type FSBackend struct {
path string
}
-func NewFSBackend(p string) (*FSBackend, error) {
+func NewFSBackend() (*FSBackend, error) {
+ p := os.Getenv("STORAGE_PATH")
+ if p == "" {
+ return nil, fmt.Errorf("STORAGE_PATH must be set for filesystem storage")
+ }
+
p = path.Clean(p)
err := os.MkdirAll(p, 0755)
if err != nil {
--
cgit 1.4.1
From c08aa525587add05af2ff8e7f9ddc697858c9d0c Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 18:20:04 +0100
Subject: fix(server): Ensure error messages are correctly printed in logs
I assumed (incorrectly) that logrus would already take care of
surfacing error messages in human-readable form.
---
tools/nixery/server/logs.go | 7 +++++++
1 file changed, 7 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/server/logs.go b/tools/nixery/server/logs.go
index dec4a410fb06..cc218c69d265 100644
--- a/tools/nixery/server/logs.go
+++ b/tools/nixery/server/logs.go
@@ -76,6 +76,13 @@ func (f stackdriverFormatter) Format(e *log.Entry) ([]byte, error) {
msg["eventTime"] = &e.Time
msg["severity"] = logSeverity(e.Level)
+ if err, ok := msg[log.ErrorKey]; ok {
+ // TODO(tazjin): Cast safely - for now there should be
+ // no calls to `.WithError` with a nil error, but who
+ // knows.
+ msg[log.ErrorKey] = (err.(error)).Error()
+ }
+
if isError(e) {
loc := reportLocation{
FilePath: e.Caller.File,
--
cgit 1.4.1
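The underlying problem is that an error value keeps its message in an unexported field, so serialising it directly loses the text. A tiny demonstration of the effect (not taken from the patch):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

func main() {
	err := errors.New("bucket unavailable")

	// Marshalling the error value directly loses the message ...
	raw, _ := json.Marshal(map[string]interface{}{"error": err})
	fmt.Println(string(raw)) // {"error":{}}

	// ... which is why the formatter replaces it with err.Error().
	readable, _ := json.Marshal(map[string]interface{}{"error": err.Error()})
	fmt.Println(string(readable)) // {"error":"bucket unavailable"}
}
```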
From b60a8d007b47a5570715e7a693e1aa186032f29c Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 18:21:58 +0100
Subject: fix(server): Ensure paths exist when renaming in filesystem storage
The point at which files are moved happens to also (initially) be the
point where the `layers` directory is created. For this reason
renaming must ensure that all path components exist, which this commit
takes care of.
---
tools/nixery/server/storage/filesystem.go | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/server/storage/filesystem.go b/tools/nixery/server/storage/filesystem.go
index f343d67b65f8..60c48e932ee2 100644
--- a/tools/nixery/server/storage/filesystem.go
+++ b/tools/nixery/server/storage/filesystem.go
@@ -59,7 +59,13 @@ func (b *FSBackend) Fetch(key string) (io.ReadCloser, error) {
}
func (b *FSBackend) Move(old, new string) error {
- return os.Rename(path.Join(b.path, old), path.Join(b.path, new))
+ newpath := path.Join(b.path, new)
+ err := os.MkdirAll(path.Dir(newpath), 0755)
+ if err != nil {
+ return err
+ }
+
+ return os.Rename(path.Join(b.path, old), newpath)
}
func (b *FSBackend) ServeLayer(digest string, w http.ResponseWriter) error {
--
cgit 1.4.1
From 4332d38f4f1250aebc6dc3e2bf05c67559fa57e7 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 18:22:02 +0100
Subject: fix(server): Correctly construct filesystem paths for layer serving
---
tools/nixery/server/storage/filesystem.go | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/storage/filesystem.go b/tools/nixery/server/storage/filesystem.go
index 60c48e932ee2..c390a4d65cfe 100644
--- a/tools/nixery/server/storage/filesystem.go
+++ b/tools/nixery/server/storage/filesystem.go
@@ -68,12 +68,14 @@ func (b *FSBackend) Move(old, new string) error {
return os.Rename(path.Join(b.path, old), newpath)
}
-func (b *FSBackend) ServeLayer(digest string, w http.ResponseWriter) error {
- // http.Serve* functions attempt to be a lot more clever than
- // I want, but I also would prefer to avoid implementing error
- // translation myself - thus a fake request is created here.
- req := http.Request{Method: "GET"}
- http.ServeFile(w, &req, path.Join(b.path, "sha256:"+digest))
+func (b *FSBackend) ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error {
+ p := path.Join(b.path, "layers", digest)
+ log.WithFields(log.Fields{
+ "layer": digest,
+ "path": p,
+ }).Info("serving layer from filesystem")
+
+ http.ServeFile(w, r, p)
return nil
}
--
cgit 1.4.1
From 30e618b65bdd330ea5904b2be00cbac46d5b03e3 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 18:23:56 +0100
Subject: chore(server): Move cache miss log statement to debug level
This is very annoying otherwise.
---
tools/nixery/server/builder/cache.go | 2 +-
tools/nixery/server/storage/filesystem.go | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 4aed4b53463f..07ac9746d5f0 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -185,7 +185,7 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
log.WithError(err).WithFields(log.Fields{
"layer": key,
"backend": s.Storage.Name(),
- }).Warn("failed to retrieve cached layer from storage backend")
+ }).Debug("failed to retrieve cached layer from storage backend")
return nil, false
}
diff --git a/tools/nixery/server/storage/filesystem.go b/tools/nixery/server/storage/filesystem.go
index c390a4d65cfe..8aca20aac2d0 100644
--- a/tools/nixery/server/storage/filesystem.go
+++ b/tools/nixery/server/storage/filesystem.go
@@ -73,7 +73,7 @@ func (b *FSBackend) ServeLayer(digest string, r *http.Request, w http.ResponseWr
log.WithFields(log.Fields{
"layer": digest,
- "path": p,
+ "path": p,
}).Info("serving layer from filesystem")
http.ServeFile(w, r, p)
--
cgit 1.4.1
From d8fba233655e127d54926394eacd4b9391ec8b8b Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 18:32:02 +0100
Subject: fix(server): Thread request context to all relevant places
Previously, background contexts were created where necessary (e.g. in
GCS interactions). Should I begin to use request timeouts or other
context-dependent things in the future, it's useful to have the actual
HTTP request context around.
This threads the request context through the application to all places
that need it.
---
tools/nixery/server/builder/builder.go | 4 ++--
tools/nixery/server/builder/cache.go | 8 ++++----
tools/nixery/server/main.go | 6 +-----
tools/nixery/server/storage/filesystem.go | 7 ++++---
tools/nixery/server/storage/gcs.go | 9 +++------
tools/nixery/server/storage/storage.go | 9 ++++++---
6 files changed, 20 insertions(+), 23 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 021cc662c56d..59158037e443 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -375,7 +375,7 @@ func (b *byteCounter) Write(p []byte) (n int, err error) {
// image manifest.
func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter) (*manifest.Entry, error) {
path := "staging/" + key
- sha256sum, size, err := s.Storage.Persist(path, func(sw io.Writer) (string, int64, error) {
+ sha256sum, size, err := s.Storage.Persist(ctx, path, func(sw io.Writer) (string, int64, error) {
// Sets up a "multiwriter" that simultaneously runs both hash
// algorithms and uploads to the storage backend.
shasum := sha256.New()
@@ -399,7 +399,7 @@ func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter)
// Hashes are now known and the object is in the bucket, what
// remains is to move it to the correct location and cache it.
- err = s.Storage.Move("staging/"+key, "layers/"+sha256sum)
+ err = s.Storage.Move(ctx, "staging/"+key, "layers/"+sha256sum)
if err != nil {
log.WithError(err).WithField("layer", key).
Error("failed to move layer from staging")
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
index 07ac9746d5f0..82bd90927cd0 100644
--- a/tools/nixery/server/builder/cache.go
+++ b/tools/nixery/server/builder/cache.go
@@ -120,7 +120,7 @@ func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessa
return m, true
}
- r, err := s.Storage.Fetch("manifests/" + key)
+ r, err := s.Storage.Fetch(ctx, "manifests/"+key)
if err != nil {
log.WithError(err).WithFields(log.Fields{
"manifest": key,
@@ -152,7 +152,7 @@ func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage)
go s.Cache.localCacheManifest(key, m)
path := "manifests/" + key
- _, size, err := s.Storage.Persist(path, func(w io.Writer) (string, int64, error) {
+ _, size, err := s.Storage.Persist(ctx, path, func(w io.Writer) (string, int64, error) {
size, err := io.Copy(w, bytes.NewReader([]byte(m)))
return "", size, err
})
@@ -180,7 +180,7 @@ func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry,
return entry, true
}
- r, err := s.Storage.Fetch("builds/" + key)
+ r, err := s.Storage.Fetch(ctx, "builds/"+key)
if err != nil {
log.WithError(err).WithFields(log.Fields{
"layer": key,
@@ -220,7 +220,7 @@ func cacheLayer(ctx context.Context, s *State, key string, entry manifest.Entry)
j, _ := json.Marshal(&entry)
path := "builds/" + key
- _, _, err := s.Storage.Persist(path, func(w io.Writer) (string, int64, error) {
+ _, _, err := s.Storage.Persist(ctx, path, func(w io.Writer) (string, int64, error) {
size, err := io.Copy(w, bytes.NewReader(j))
return "", size, err
})
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
index f4f707313f22..6ae0730906dc 100644
--- a/tools/nixery/server/main.go
+++ b/tools/nixery/server/main.go
@@ -26,7 +26,6 @@
package main
import (
- "context"
"encoding/json"
"fmt"
"io/ioutil"
@@ -110,7 +109,6 @@ func writeError(w http.ResponseWriter, status int, code, message string) {
}
type registryHandler struct {
- ctx context.Context
state *builder.State
}
@@ -132,7 +130,7 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}).Info("requesting image manifest")
image := builder.ImageFromName(imageName, imageTag)
- buildResult, err := builder.BuildImage(h.ctx, h.state, &image)
+ buildResult, err := builder.BuildImage(r.Context(), h.state, &image)
if err != nil {
writeError(w, 500, "UNKNOWN", "image build failure")
@@ -211,7 +209,6 @@ func main() {
log.WithField("backend", s.Name()).Info("initialised storage backend")
- ctx := context.Background()
cache, err := builder.NewCache()
if err != nil {
log.WithError(err).Fatal("failed to instantiate build cache")
@@ -240,7 +237,6 @@ func main() {
// All /v2/ requests belong to the registry handler.
http.Handle("/v2/", ®istryHandler{
- ctx: ctx,
state: &state,
})
diff --git a/tools/nixery/server/storage/filesystem.go b/tools/nixery/server/storage/filesystem.go
index 8aca20aac2d0..3fb91bc5e134 100644
--- a/tools/nixery/server/storage/filesystem.go
+++ b/tools/nixery/server/storage/filesystem.go
@@ -2,6 +2,7 @@
package storage
import (
+ "context"
"fmt"
"io"
"net/http"
@@ -34,7 +35,7 @@ func (b *FSBackend) Name() string {
return fmt.Sprintf("Filesystem (%s)", b.path)
}
-func (b *FSBackend) Persist(key string, f func(io.Writer) (string, int64, error)) (string, int64, error) {
+func (b *FSBackend) Persist(ctx context.Context, key string, f Persister) (string, int64, error) {
full := path.Join(b.path, key)
dir := path.Dir(full)
err := os.MkdirAll(dir, 0755)
@@ -53,12 +54,12 @@ func (b *FSBackend) Persist(key string, f func(io.Writer) (string, int64, error)
return f(file)
}
-func (b *FSBackend) Fetch(key string) (io.ReadCloser, error) {
+func (b *FSBackend) Fetch(ctx context.Context, key string) (io.ReadCloser, error) {
full := path.Join(b.path, key)
return os.Open(full)
}
-func (b *FSBackend) Move(old, new string) error {
+func (b *FSBackend) Move(ctx context.Context, old, new string) error {
newpath := path.Join(b.path, new)
err := os.MkdirAll(path.Dir(newpath), 0755)
if err != nil {
diff --git a/tools/nixery/server/storage/gcs.go b/tools/nixery/server/storage/gcs.go
index 749c7ba150e5..b9d70ef20488 100644
--- a/tools/nixery/server/storage/gcs.go
+++ b/tools/nixery/server/storage/gcs.go
@@ -66,8 +66,7 @@ func (b *GCSBackend) Name() string {
return "Google Cloud Storage (" + b.bucket + ")"
}
-func (b *GCSBackend) Persist(path string, f func(io.Writer) (string, int64, error)) (string, int64, error) {
- ctx := context.Background()
+func (b *GCSBackend) Persist(ctx context.Context, path string, f Persister) (string, int64, error) {
obj := b.handle.Object(path)
w := obj.NewWriter(ctx)
@@ -80,8 +79,7 @@ func (b *GCSBackend) Persist(path string, f func(io.Writer) (string, int64, erro
return hash, size, w.Close()
}
-func (b *GCSBackend) Fetch(path string) (io.ReadCloser, error) {
- ctx := context.Background()
+func (b *GCSBackend) Fetch(ctx context.Context, path string) (io.ReadCloser, error) {
obj := b.handle.Object(path)
// Probe whether the file exists before trying to fetch it
@@ -98,8 +96,7 @@ func (b *GCSBackend) Fetch(path string) (io.ReadCloser, error) {
//
// The Go API for Cloud Storage does not support renaming objects, but
// the HTTP API does. The code below makes the relevant call manually.
-func (b *GCSBackend) Move(old, new string) error {
- ctx := context.Background()
+func (b *GCSBackend) Move(ctx context.Context, old, new string) error {
creds, err := google.FindDefaultCredentials(ctx, gcsScope)
if err != nil {
return err
diff --git a/tools/nixery/server/storage/storage.go b/tools/nixery/server/storage/storage.go
index ad10d682e93a..70095cba4334 100644
--- a/tools/nixery/server/storage/storage.go
+++ b/tools/nixery/server/storage/storage.go
@@ -4,10 +4,13 @@
package storage
import (
+ "context"
"io"
"net/http"
)
+type Persister = func(io.Writer) (string, int64, error)
+
type Backend interface {
// Name returns the name of the storage backend, for use in
// log messages and such.
@@ -19,14 +22,14 @@ type Backend interface {
// It needs to return the SHA256 hash of the data written as
// well as the total number of bytes, as those are required
// for the image manifest.
- Persist(string, func(io.Writer) (string, int64, error)) (string, int64, error)
+ Persist(context.Context, string, Persister) (string, int64, error)
// Fetch retrieves data from the storage backend.
- Fetch(path string) (io.ReadCloser, error)
+ Fetch(ctx context.Context, path string) (io.ReadCloser, error)
// Move renames a path inside the storage backend. This is
// used for staging uploads while calculating their hashes.
- Move(old, new string) error
+ Move(ctx context.Context, old, new string) error
// Serve provides a handler function to serve HTTP requests
// for layers in the storage backend.
--
cgit 1.4.1
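Threading the request context through means a storage call can stop early when the client disconnects or a deadline fires. The handler and context-aware fetch below are stand-ins to illustrate the pattern, not Nixery's code.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// fetch is a stand-in for a context-aware storage call like Backend.Fetch.
func fetch(ctx context.Context, key string) (string, error) {
	select {
	case <-time.After(50 * time.Millisecond): // pretend I/O
		return "contents of " + key, nil
	case <-ctx.Done():
		// The client went away or a deadline fired; stop doing work.
		return "", ctx.Err()
	}
}

func handler(w http.ResponseWriter, r *http.Request) {
	// The request context is cancelled when the client disconnects, which
	// is the point of threading it through the application.
	data, err := fetch(r.Context(), "manifests/example")
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprintln(w, data)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```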
From b736f5580de878a6fb095317058ea11190e53a4e Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 18:37:31 +0100
Subject: docs: Add storage configuration options to README
---
tools/nixery/README.md | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 1574d5950a22..32e5921fa475 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -74,13 +74,17 @@ interactive images.
Nixery supports the following configuration options, provided via environment
variables:
-* `BUCKET`: [Google Cloud Storage][gcs] bucket to store & serve image layers
* `PORT`: HTTP port on which Nixery should listen
* `NIXERY_CHANNEL`: The name of a Nix/NixOS channel to use for building
* `NIXERY_PKGS_REPO`: URL of a git repository containing a package set (uses
locally configured SSH/git credentials)
* `NIXERY_PKGS_PATH`: A local filesystem path containing a Nix package set to
use for building
+* `NIXERY_STORAGE_BACKEND`: The type of backend storage to use, currently
+ supported values are `gcs` (Google Cloud Storage) and `filesystem`.
+
+ For each of these, additional backend configuration is necessary; see the
+ [storage section](#storage) for details.
* `NIX_TIMEOUT`: Number of seconds that any Nix builder is allowed to run
(defaults to 60)
* `NIX_POPULARITY_URL`: URL to a file containing popularity data for
@@ -91,6 +95,26 @@ account key, Nixery will also use this key to create [signed URLs][] for layers
in the storage bucket. This makes it possible to serve layers from a bucket
without having to make them publicly available.
+### Storage
+
+Nixery supports multiple storage backends in which its build cache and
+image layers are kept, and from which they are served.
+
+Currently the available storage backends are Google Cloud Storage and the local
+file system.
+
+In the GCS case, images are served by redirecting clients to the storage bucket.
+Layers stored on the filesystem are served straight from the local disk.
+
+These extra configuration variables must be set to configure storage backends:
+
+* `GCS_BUCKET`: Name of the Google Cloud Storage bucket to use (**required** for
+ `gcs`)
+* `GOOGLE_APPLICATION_CREDENTIALS`: Path to a GCP service account JSON key
+ (**optional** for `gcs`)
+* `STORAGE_PATH`: Path to a folder in which to store and from which to serve
+ data (**required** for `filesystem`)
+
## Roadmap
### Kubernetes integration
--
cgit 1.4.1
From 3611baf040f19e234b81309822ac63723690a51d Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 18:49:01 +0100
Subject: docs(under-the-hood): Update builder & storage backend information
Both of these no longer matched the reality of what was actually going
on in Nixery.
---
tools/nixery/docs/src/under-the-hood.md | 79 +++++++++++++++++++++------------
1 file changed, 51 insertions(+), 28 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/src/under-the-hood.md b/tools/nixery/docs/src/under-the-hood.md
index b58a21d0d4ec..4b798300100b 100644
--- a/tools/nixery/docs/src/under-the-hood.md
+++ b/tools/nixery/docs/src/under-the-hood.md
@@ -6,9 +6,9 @@ image is requested from Nixery.
- [1. The image manifest is requested](#1-the-image-manifest-is-requested)
-- [2. Nix builds the image](#2-nix-builds-the-image)
-- [3. Layers are uploaded to Nixery's storage](#3-layers-are-uploaded-to-nixerys-storage)
-- [4. The image manifest is sent back](#4-the-image-manifest-is-sent-back)
+- [2. Nix fetches and prepares image content](#2-nix-fetches-and-prepares-image-content)
+- [3. Layers are grouped, created, hashed, and persisted](#3-layers-are-grouped-created-hashed-and-persisted)
+- [4. The manifest is assembled and returned to the client](#4-the-manifest-is-assembled-and-returned-to-the-client)
- [5. Image layers are requested](#5-image-layers-are-requested)
@@ -40,7 +40,7 @@ It then invokes Nix with three parameters:
2. image tag
3. configured package set source
-## 2. Nix builds the image
+## 2. Nix fetches and prepares image content
Using the parameters above, Nix imports the package set and begins by mapping
the image names to attributes in the package set.
@@ -50,12 +50,31 @@ their name, for example anything under `haskellPackages`. The registry protocol
does not allow uppercase characters, so the Nix code will translate something
like `haskellpackages` (lowercased) to the correct attribute name.
-After identifying all contents, Nix determines the contents of each layer while
-optimising for the best possible cache efficiency (see the [layering design
-doc][] for details).
+After identifying all contents, Nix uses the `symlinkJoin` function to
+create a special layer with the "symlink farm" required to let the
+image function like a normal disk image.
-Finally it builds each layer, assembles the image manifest as JSON structure,
-and yields this manifest back to the web server.
+Nix then returns information about the image contents as well as the
+location of the special layer to Nixery.
+
+## 3. Layers are grouped, created, hashed, and persisted
+
+With the information received from Nix, Nixery determines the contents
+of each layer while optimising for the best possible cache efficiency
+(see the [layering design doc][] for details).
+
+With the grouped layers, Nixery then begins to create compressed
+tarballs with all required contents for each layer. As these tarballs
+are being created, they are simultaneously being hashed (as the image
+manifest must contain the content-hashes of all layers) and persisted
+to storage.
+
+Storage can be either a remote [Google Cloud Storage][gcs] bucket, or
+a local filesystem path.
+
+During this step, Nixery checks its build cache (see [Caching][]) to
+determine whether a layer needs to be built or is already cached from
+a previous build.
*Note:* While this step is running (which can take some time in the case of
large first-time image builds), the registry client is left hanging waiting for
@@ -63,39 +82,43 @@ an HTTP response. Unfortunately the registry protocol does not allow for any
feedback back to the user at this point, so from the user's perspective things
just ... hang, for a moment.
-## 3. Layers are uploaded to Nixery's storage
-
-Nixery inspects the returned manifest and uploads each layer to the configured
-[Google Cloud Storage][gcs] bucket. To avoid unnecessary uploading, it will
-check whether layers are already present in the bucket.
+## 4. The manifest is assembled and returned to the client
-## 4. The image manifest is sent back
+Once armed with the hashes of all required layers, Nixery assembles
+the OCI Container Image manifest which describes the structure of the
+built image and names all of its layers by their content hash.
-If everything went well at this point, Nixery responds to the registry client
-with the image manifest.
-
-The client now inspects the manifest and basically sees a list of SHA256-hashes,
-each corresponding to one layer of the image. Most clients will now consult
-their local layer storage and determine which layers they are missing.
-
-Each of the missing layers is then requested from Nixery.
+This manifest is returned to the client.
## 5. Image layers are requested
-For each image layer that it needs to retrieve, the registry client assembles a
-request that looks like this:
+The client now inspects the manifest and determines which of the
+layers it is currently missing based on their content hashes. Note
+that different container runtimes will handle this differently, and in
+the case of certain engine and storage driver combinations (e.g.
+Docker with OverlayFS) layers might be downloaded again even if they
+are already present.
+
+For each of the missing layers, the client now issues a request to
+Nixery that looks like this:
`GET /v2/${imageName}/blobs/sha256:${layerHash}`
-Nixery receives these requests and *rewrites* them to Google Cloud Storage URLs,
-responding with an `HTTP 303 See Other` status code and the actual download URL
-of the layer.
+Nixery receives these requests and handles them based on the
+configured storage backend.
+
+If the storage backend is GCS, it *redirects* them to Google Cloud
+Storage URLs, responding with an `HTTP 303 See Other` status code and
+the actual download URL of the layer.
Nixery supports using private buckets which are not generally world-readable, in
which case [signed URLs][] are constructed using a private key. These allow the
registry client to download each layer without needing to care about how the
underlying authentication works.
+If the storage backend is the local filesystem, Nixery will attempt to
+serve the layer back to the client from disk.
+
---------
That's it. After these five steps the registry client has retrieved all it needs
--
cgit 1.4.1
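Step 4 of the flow above turns the (hash, size) pairs collected while persisting into the manifest's layer entries. The rough sketch below shows that shape with placeholder digests; the field and media type names follow the common Docker/OCI manifest layout and are not code copied from Nixery.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// entry mirrors the shape of a layer reference in an image manifest: a layer
// is identified purely by its content hash and size, which is why Nixery must
// know both before the manifest can be assembled.
type entry struct {
	MediaType string `json:"mediaType"`
	Size      int64  `json:"size"`
	Digest    string `json:"digest"`
}

func main() {
	// Placeholder digests and sizes, standing in for the values returned by
	// the hash-while-persist step described above.
	layers := []entry{
		{"application/vnd.docker.image.rootfs.diff.tar.gzip", 4096, "sha256:" + strings.Repeat("0", 64)},
		{"application/vnd.docker.image.rootfs.diff.tar.gzip", 8192, "sha256:" + strings.Repeat("f", 64)},
	}

	j, _ := json.MarshalIndent(layers, "", "  ")
	fmt.Println(string(j))
}
```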
From ab190256ab3118af146a65787965e04e06ccfaa1 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 20:04:41 +0100
Subject: fix(server): Use correct scope for GCS tokens
---
tools/nixery/server/storage/gcs.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/server/storage/gcs.go b/tools/nixery/server/storage/gcs.go
index b9d70ef20488..5cb673c15162 100644
--- a/tools/nixery/server/storage/gcs.go
+++ b/tools/nixery/server/storage/gcs.go
@@ -20,7 +20,7 @@ import (
var client = &http.Client{}
// API scope needed for renaming objects in GCS
-const gcsScope = "https://www.googleapis.com/auth/devstorage"
+const gcsScope = "https://www.googleapis.com/auth/devstorage.read_write"
type GCSBackend struct {
bucket string
--
cgit 1.4.1
From 3a7c964a22ac0d7805cbe2d0a5f42f030ee68c19 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 20:04:59 +0100
Subject: chore(build): Configure build to use new GCS configuration options
---
tools/nixery/.travis.yml | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 31d485d7284c..89963840cc12 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -26,12 +26,11 @@ script:
docker run -d -p 8080:8080 --name nixery \
-v ${PWD}/test-files:/var/nixery \
-e PORT=8080 \
- -e BUCKET=nixery-ci-tests \
+ -e GCS_BUCKET=nixery-ci-tests \
-e GOOGLE_CLOUD_PROJECT=nixery \
-e GOOGLE_APPLICATION_CREDENTIALS=/var/nixery/key.json \
- -e GCS_SIGNING_ACCOUNT="${GCS_SIGNING_ACCOUNT}" \
- -e GCS_SIGNING_KEY=/var/nixery/gcs.pem \
-e NIXERY_CHANNEL=nixos-unstable \
+ -e NIXERY_STORAGE_BACKEND=gcs \
${IMG}
# print all of the container's logs regardless of success
--
cgit 1.4.1
From 904c3dade00881c6872dc633a135e12bdfe7e76b Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 22:38:27 +0100
Subject: refactor(server): Handle non-error errors safely
This case should not be possible unless something manually constructs
a logrus entry with a non-error value in the log.ErrorKey field, but
it's better to be safe than sorry.
---
tools/nixery/server/logs.go | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/logs.go b/tools/nixery/server/logs.go
index cc218c69d265..7077b2b208fe 100644
--- a/tools/nixery/server/logs.go
+++ b/tools/nixery/server/logs.go
@@ -76,11 +76,12 @@ func (f stackdriverFormatter) Format(e *log.Entry) ([]byte, error) {
msg["eventTime"] = &e.Time
msg["severity"] = logSeverity(e.Level)
- if err, ok := msg[log.ErrorKey]; ok {
- // TODO(tazjin): Cast safely - for now there should be
- // no calls to `.WithError` with a nil error, but who
- // knows.
- msg[log.ErrorKey] = (err.(error)).Error()
+ if e, ok := msg[log.ErrorKey]; ok {
+ if err, isError := e.(error); isError {
+ msg[log.ErrorKey] = err.Error()
+ } else {
+ delete(msg, log.ErrorKey)
+ }
}
if isError(e) {
--
cgit 1.4.1
From 2d4a3ea307350e1f7495a4fb6c6f5a37d10d3912 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 28 Oct 2019 22:39:12 +0100
Subject: chore(server): Remove outdated TODO
Real-life experience has shown that the weighting of the metric
produced here is appropriate.
---
tools/nixery/server/layers/grouping.go | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
index 1fbbf33db3d7..74952a137835 100644
--- a/tools/nixery/server/layers/grouping.go
+++ b/tools/nixery/server/layers/grouping.go
@@ -300,11 +300,7 @@ func groupLayer(dt *flow.DominatorTree, root *closure) Layer {
sort.Strings(contents)
return Layer{
- Contents: contents,
- // TODO(tazjin): The point of this is to factor in
- // both the size and the popularity when making merge
- // decisions, but there might be a smarter way to do
- // it than a plain multiplication.
+ Contents: contents,
MergeRating: uint64(root.Popularity) * size,
}
}
--
cgit 1.4.1
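For a feel of how the metric behaves, here is a toy example with made-up numbers: a hugely popular, large closure still outranks a rarely-referenced small one, which is the ordering the layering algorithm wants when deciding what to merge.

```go
package main

import "fmt"

func main() {
	// MergeRating as used above: package popularity times closure size.
	// The numbers are invented purely to illustrate the ordering effect.
	type closure struct {
		name       string
		popularity uint64 // how many other packages depend on it
		size       uint64 // bytes in its store path closure
	}

	for _, c := range []closure{
		{"glibc", 12000, 30000000},
		{"some-leaf-tool", 2, 1500000},
	} {
		fmt.Printf("%s: MergeRating=%d\n", c.name, c.popularity*c.size)
	}
}
```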
From b03f7a1b4dff4780a8eaeb5c261598d422551220 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 31 Oct 2019 17:29:59 +0000
Subject: feat(popcount): Add new narinfo-based popcount implementation
Adds an implementation of popcount that, instead of realising
derivations locally, just queries the cache's narinfo files.
The downside of this is that calculating popularity for arbitrary Nix
package sets is not possible with this implementation. The upside is
that calculating the popularity for an entire Nix channel can now be
done in ~10 seconds[0].
This fixes #65.
[0]: Assuming a /fast/ internet connection.
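For illustration, the standalone sketch below shows the kind of extraction
this performs on a narinfo file. It is simplified: the store hashes are
invented and the parsing is reduced to the References line (the real
implementation in popcount.go uses a second regexp per reference).

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // Abbreviated, made-up narinfo in the format served by cache.nixos.org.
    const narinfo = "StorePath: /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-hello-2.10\n" +
        "References: b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-hello-2.10 ag4h3xxjpnzqr5jvqhhd1cpc3bvrqqhl-glibc-2.27\n" +
        "NarSize: 197648\n"

    var refsexp = regexp.MustCompile("(?m:^References: (.*)$)")

    func main() {
        m := refsexp.FindStringSubmatch(narinfo)
        for _, ref := range strings.Split(m[1], " ") {
            if len(ref) > 33 {
                // Strip the 32-character store hash and the dash.
                fmt.Println(ref[33:])
            }
        }
    }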
---
tools/nixery/popcount/empty.json | 1 -
tools/nixery/popcount/popcount | 13 --
tools/nixery/popcount/popcount.go | 256 +++++++++++++++++++++++++++++++++++++
tools/nixery/popcount/popcount.nix | 53 --------
4 files changed, 256 insertions(+), 67 deletions(-)
delete mode 100644 tools/nixery/popcount/empty.json
delete mode 100755 tools/nixery/popcount/popcount
create mode 100644 tools/nixery/popcount/popcount.go
delete mode 100644 tools/nixery/popcount/popcount.nix
(limited to 'tools')
diff --git a/tools/nixery/popcount/empty.json b/tools/nixery/popcount/empty.json
deleted file mode 100644
index fe51488c7066..000000000000
--- a/tools/nixery/popcount/empty.json
+++ /dev/null
@@ -1 +0,0 @@
-[]
diff --git a/tools/nixery/popcount/popcount b/tools/nixery/popcount/popcount
deleted file mode 100755
index 83baf3045da7..000000000000
--- a/tools/nixery/popcount/popcount
+++ /dev/null
@@ -1,13 +0,0 @@
-#!/bin/bash
-set -ueo pipefail
-
-function graphsFor() {
- local pkg="${1}"
- local graphs=$(nix-build --timeout 2 --argstr target "${pkg}" popcount.nix || echo -n 'empty.json')
- cat $graphs | jq -r -cM '.[] | .references[]'
-}
-
-for pkg in $(cat all-top-level.json | jq -r '.[]'); do
- graphsFor "${pkg}" 2>/dev/null
- echo "Printed refs for ${pkg}" >&2
-done
diff --git a/tools/nixery/popcount/popcount.go b/tools/nixery/popcount/popcount.go
new file mode 100644
index 000000000000..a37408e37f2c
--- /dev/null
+++ b/tools/nixery/popcount/popcount.go
@@ -0,0 +1,256 @@
+// Popcount fetches popularity information for each store path in a
+// given Nix channel from the upstream binary cache.
+//
+// It does this simply by inspecting the narinfo files, rather than
+// attempting to deal with instantiation of the binary cache.
+//
+// This is *significantly* faster than attempting to realise the whole
+// channel and then calling `nix path-info` on it.
+//
+// TODO(tazjin): Persist intermediate results (references for each
+// store path) to speed up subsequent runs.
+package main
+
+import (
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "log"
+ "net/http"
+ "os"
+ "os/exec"
+ "regexp"
+ "strings"
+)
+
+var client http.Client
+var pathexp = regexp.MustCompile("/nix/store/([a-z0-9]{32})-(.*)$")
+var refsexp = regexp.MustCompile("(?m:^References: (.*)$)")
+var refexp = regexp.MustCompile("^([a-z0-9]{32})-(.*)$")
+
+type meta struct {
+ name string
+ url string
+ commit string
+}
+
+type item struct {
+ name string
+ hash string
+}
+
+func failOn(err error, msg string) {
+ if err != nil {
+ log.Fatalf("%s: %s", msg, err)
+ }
+}
+
+func channelMetadata(channel string) meta {
+ // This needs an HTTP client that does not follow redirects
+ // because the channel URL is used explicitly for other
+ // downloads.
+ c := http.Client{
+ CheckRedirect: func(req *http.Request, via []*http.Request) error {
+ return http.ErrUseLastResponse
+ },
+ }
+
+ resp, err := c.Get(fmt.Sprintf("https://nixos.org/channels/%s", channel))
+ failOn(err, "failed to retrieve channel metadata")
+
+ loc, err := resp.Location()
+ failOn(err, "no redirect location given for channel")
+ if resp.StatusCode != 302 {
+ log.Fatalf("Expected redirect for channel, but received '%s'\n", resp.Status)
+ }
+
+ commitResp, err := c.Get(fmt.Sprintf("%s/git-revision", loc.String()))
+ failOn(err, "failed to retrieve commit for channel")
+
+ defer commitResp.Body.Close()
+ commit, err := ioutil.ReadAll(commitResp.Body)
+ failOn(err, "failed to read commit from response")
+
+ return meta{
+ name: channel,
+ url: loc.String(),
+ commit: string(commit),
+ }
+}
+
+func downloadStorePaths(c *meta) []string {
+ resp, err := client.Get(fmt.Sprintf("%s/store-paths.xz", c.url))
+ failOn(err, "failed to download store-paths.xz")
+ defer resp.Body.Close()
+
+ cmd := exec.Command("xzcat")
+ stdin, err := cmd.StdinPipe()
+ failOn(err, "failed to open xzcat stdin")
+ stdout, err := cmd.StdoutPipe()
+ failOn(err, "failed to open xzcat stdout")
+ defer stdout.Close()
+
+ go func() {
+ defer stdin.Close()
+ io.Copy(stdin, resp.Body)
+ }()
+
+ err = cmd.Start()
+ failOn(err, "failed to start xzcat")
+
+ paths, err := ioutil.ReadAll(stdout)
+ failOn(err, "failed to read uncompressed store paths")
+
+ err = cmd.Wait()
+ failOn(err, "xzcat failed to decompress")
+
+ return strings.Split(string(paths), "\n")
+}
+
+func storePathToItem(path string) *item {
+ res := pathexp.FindStringSubmatch(path)
+ if len(res) != 3 {
+ return nil
+ }
+
+ return &item{
+ hash: res[1],
+ name: res[2],
+ }
+}
+
+func narInfoToRefs(narinfo string) []string {
+ all := refsexp.FindAllStringSubmatch(narinfo, 1)
+
+ if len(all) != 1 {
+ log.Fatalf("failed to parse narinfo:\n%s\nfound: %v\n", narinfo, all[0])
+ }
+
+ if len(all[0]) != 2 {
+ // no references found
+ return []string{}
+ }
+
+ refs := strings.Split(all[0][1], " ")
+ for i, s := range refs {
+ if s == "" {
+ continue
+ }
+
+ res := refexp.FindStringSubmatch(s)
+ refs[i] = res[2]
+ }
+
+ return refs
+}
+
+func fetchNarInfo(i *item) (string, error) {
+ resp, err := client.Get(fmt.Sprintf("https://cache.nixos.org/%s.narinfo", i.hash))
+ if err != nil {
+ return "", err
+ }
+
+ defer resp.Body.Close()
+
+ narinfo, err := ioutil.ReadAll(resp.Body)
+ return string(narinfo), err
+}
+
+// downloader starts a worker that takes care of downloading narinfos
+// for all paths received from the queue.
+//
+// If there is no data remaining in the queue, the downloader exits
+// and informs the finaliser queue about having exited.
+func downloader(queue chan *item, narinfos chan string, downloaders chan struct{}) {
+ for i := range queue {
+ ni, err := fetchNarInfo(i)
+ if err != nil {
+ log.Printf("couldn't fetch narinfo for %s: %s\n", i.name, err)
+ continue
+
+ }
+ narinfos <- ni
+ }
+ downloaders <- struct{}{}
+}
+
+// finaliser counts the number of downloaders that have exited and
+// closes the narinfos queue to signal to the counters that no more
+// elements will arrive.
+func finaliser(count int, downloaders chan struct{}, narinfos chan string) {
+ for range downloaders {
+ count--
+ if count == 0 {
+ close(downloaders)
+ close(narinfos)
+ break
+ }
+ }
+}
+
+func main() {
+ if len(os.Args) == 1 {
+ log.Fatalf("Nix channel must be specified as first argument")
+ }
+
+ count := 42 // concurrent downloader count
+ channel := os.Args[1]
+ log.Printf("Fetching metadata for channel '%s'\n", channel)
+
+ meta := channelMetadata(channel)
+ log.Printf("Pinned channel '%s' to commit '%s'\n", meta.name, meta.commit)
+
+ paths := downloadStorePaths(&meta)
+ log.Printf("Fetching references for %d store paths\n", len(paths))
+
+ // Download paths concurrently and receive their narinfos into
+ // a channel. Data is collated centrally into a map and
+ // serialised at the /very/ end.
+ downloadQueue := make(chan *item, len(paths))
+ for _, p := range paths {
+ if i := storePathToItem(p); i != nil {
+ downloadQueue <- i
+ }
+ }
+ close(downloadQueue)
+
+ // Set up a task tracking channel for parsing & counting
+ // narinfos, as well as a coordination channel for signaling
+ // that all downloads have finished
+ narinfos := make(chan string, 50)
+ downloaders := make(chan struct{}, count)
+ for i := 0; i < count; i++ {
+ go downloader(downloadQueue, narinfos, downloaders)
+ }
+
+ go finaliser(count, downloaders, narinfos)
+
+ counts := make(map[string]int)
+ for ni := range narinfos {
+ refs := narInfoToRefs(ni)
+ for _, ref := range refs {
+ if ref == "" {
+ continue
+ }
+
+ counts[ref] += 1
+ }
+ }
+
+ // Remove all self-references (i.e. packages not referenced by anyone else)
+ for k, v := range counts {
+ if v == 1 {
+ delete(counts, k)
+ }
+ }
+
+ bytes, _ := json.Marshal(counts)
+ outfile := fmt.Sprintf("popularity-%s-%s.json", meta.name, meta.commit)
+ err = ioutil.WriteFile(outfile, bytes, 0644)
+ if err != nil {
+ log.Fatalf("Failed to write output to '%s': %s\n", outfile, err)
+ }
+
+ log.Printf("Wrote output to '%s'\n", outfile)
+}
diff --git a/tools/nixery/popcount/popcount.nix b/tools/nixery/popcount/popcount.nix
deleted file mode 100644
index 54fd2ad589ee..000000000000
--- a/tools/nixery/popcount/popcount.nix
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright 2019 Google LLC
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# https://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# This script, given a target attribute in `nixpkgs`, builds the
-# target derivations' runtime closure and returns its reference graph.
-#
-# This is invoked by popcount.sh for each package in nixpkgs to
-# collect all package references, so that package popularity can be
-# tracked.
-#
-# Check out build-image/group-layers.go for an in-depth explanation of
-# what the popularity counts are used for.
-
-{ pkgs ? import { config.allowUnfree = false; }, target }:
-
-let
- inherit (pkgs) coreutils runCommand writeText;
- inherit (builtins) readFile toFile fromJSON toJSON listToAttrs;
-
- # graphJSON abuses feature in Nix that makes structured runtime
- # closure information available to builders. This data is imported
- # back via IFD to process it for layering data.
- graphJSON = path:
- runCommand "build-graph" {
- __structuredAttrs = true;
- exportReferencesGraph.graph = path;
- PATH = "${coreutils}/bin";
- builder = toFile "builder" ''
- . .attrs.sh
- cat .attrs.json > ''${outputs[out]}
- '';
- } "";
-
- buildClosures = paths: (fromJSON (readFile (graphJSON paths)));
-
- buildGraph = paths:
- listToAttrs (map (c: {
- name = c.path;
- value = { inherit (c) closureSize references; };
- }) (buildClosures paths));
-in writeText "${target}-graph"
-(toJSON (buildClosures [ pkgs."${target}" ]).graph)
--
cgit 1.4.1
From 6a2fb092a72be70c173b756e5cb2276a542a09df Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 31 Oct 2019 17:35:15 +0000
Subject: chore: Add missing copyright headers to source files
---
tools/nixery/popcount/popcount.go | 14 ++++++++++++++
tools/nixery/server/builder/archive.go | 13 +++++++++++++
tools/nixery/server/layers/grouping.go | 14 ++++++++++++++
tools/nixery/server/logs.go | 13 +++++++++++++
tools/nixery/server/manifest/manifest.go | 14 ++++++++++++++
tools/nixery/server/storage/filesystem.go | 14 ++++++++++++++
tools/nixery/server/storage/gcs.go | 14 ++++++++++++++
tools/nixery/server/storage/storage.go | 14 ++++++++++++++
8 files changed, 110 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/popcount/popcount.go b/tools/nixery/popcount/popcount.go
index a37408e37f2c..bc5f42af8a85 100644
--- a/tools/nixery/popcount/popcount.go
+++ b/tools/nixery/popcount/popcount.go
@@ -1,3 +1,17 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
// Popcount fetches popularity information for each store path in a
// given Nix channel from the upstream binary cache.
//
diff --git a/tools/nixery/server/builder/archive.go b/tools/nixery/server/builder/archive.go
index a3fb99882fd6..e0fb76d44bee 100644
--- a/tools/nixery/server/builder/archive.go
+++ b/tools/nixery/server/builder/archive.go
@@ -1,3 +1,16 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
package builder
// This file implements logic for walking through a directory and creating a
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
index 74952a137835..3902c8a4ef26 100644
--- a/tools/nixery/server/layers/grouping.go
+++ b/tools/nixery/server/layers/grouping.go
@@ -1,3 +1,17 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
// This package reads an export reference graph (i.e. a graph representing the
// runtime dependencies of a set of derivations) created by Nix and groups it in
// a way that is likely to match the grouping for other derivation sets with
diff --git a/tools/nixery/server/logs.go b/tools/nixery/server/logs.go
index 7077b2b208fe..3179402e2e1f 100644
--- a/tools/nixery/server/logs.go
+++ b/tools/nixery/server/logs.go
@@ -1,3 +1,16 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
package main
// This file configures different log formatters via logrus. The
diff --git a/tools/nixery/server/manifest/manifest.go b/tools/nixery/server/manifest/manifest.go
index 8ad828239591..11fef3ff0d0c 100644
--- a/tools/nixery/server/manifest/manifest.go
+++ b/tools/nixery/server/manifest/manifest.go
@@ -1,3 +1,17 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
// Package image implements logic for creating the image metadata
// (such as the image manifest and configuration).
package manifest
diff --git a/tools/nixery/server/storage/filesystem.go b/tools/nixery/server/storage/filesystem.go
index 3fb91bc5e134..cdbc31c5e046 100644
--- a/tools/nixery/server/storage/filesystem.go
+++ b/tools/nixery/server/storage/filesystem.go
@@ -1,3 +1,17 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
// Filesystem storage backend for Nixery.
package storage
diff --git a/tools/nixery/server/storage/gcs.go b/tools/nixery/server/storage/gcs.go
index 5cb673c15162..c247cca62140 100644
--- a/tools/nixery/server/storage/gcs.go
+++ b/tools/nixery/server/storage/gcs.go
@@ -1,3 +1,17 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
// Google Cloud Storage backend for Nixery.
package storage
diff --git a/tools/nixery/server/storage/storage.go b/tools/nixery/server/storage/storage.go
index 70095cba4334..c97b5e4facc6 100644
--- a/tools/nixery/server/storage/storage.go
+++ b/tools/nixery/server/storage/storage.go
@@ -1,3 +1,17 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
// Package storage implements an interface that can be implemented by
// storage backends, such as Google Cloud Storage or the local
// filesystem.
--
cgit 1.4.1
From 05b5b1718a4b9f251d51767a189905649ad42282 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 31 Oct 2019 17:48:56 +0000
Subject: feat(popcount): Cache seen narinfos on disk
---
tools/nixery/popcount/popcount.go | 14 ++++++++++++++
1 file changed, 14 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/popcount/popcount.go b/tools/nixery/popcount/popcount.go
index bc5f42af8a85..b21cee2e0e7d 100644
--- a/tools/nixery/popcount/popcount.go
+++ b/tools/nixery/popcount/popcount.go
@@ -160,6 +160,11 @@ func narInfoToRefs(narinfo string) []string {
}
func fetchNarInfo(i *item) (string, error) {
+ file, err := ioutil.ReadFile("popcache/" + i.hash)
+ if err == nil {
+ return string(file), nil
+ }
+
resp, err := client.Get(fmt.Sprintf("https://cache.nixos.org/%s.narinfo", i.hash))
if err != nil {
return "", err
@@ -168,6 +173,10 @@ func fetchNarInfo(i *item) (string, error) {
defer resp.Body.Close()
narinfo, err := ioutil.ReadAll(resp.Body)
+
+ // best-effort write the file to the cache
+ ioutil.WriteFile("popcache/" + i.hash, narinfo, 0644)
+
return string(narinfo), err
}
@@ -208,6 +217,11 @@ func main() {
log.Fatalf("Nix channel must be specified as first argument")
}
+ err := os.MkdirAll("popcache", 0755)
+ if err != nil {
+ log.Fatalf("Failed to create 'popcache' directory in current folder: %s\n", err)
+ }
+
count := 42 // concurrent downloader count
channel := os.Args[1]
log.Printf("Fetching metadata for channel '%s'\n", channel)
--
cgit 1.4.1
From 7afbc912ceff01044b291388d1e0f567ac24bdef Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 31 Oct 2019 17:54:31 +0000
Subject: chore(build): Add nixery-popcount to top-level package set
---
tools/nixery/default.nix | 2 ++
tools/nixery/popcount/default.nix | 26 ++++++++++++++++++++++++++
2 files changed, 28 insertions(+)
create mode 100644 tools/nixery/popcount/default.nix
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 3541037965e9..af1ec904bf25 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -53,6 +53,8 @@ in rec {
exec ${nixery-server}/bin/server
'';
+ nixery-popcount = callPackage ./popcount { };
+
# Container image containing Nixery and Nix itself. This image can
# be run on Kubernetes, published on AppEngine or whatever else is
# desired.
diff --git a/tools/nixery/popcount/default.nix b/tools/nixery/popcount/default.nix
new file mode 100644
index 000000000000..4a3c8faf9c36
--- /dev/null
+++ b/tools/nixery/popcount/default.nix
@@ -0,0 +1,26 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+{ go, stdenv }:
+
+stdenv.mkDerivation {
+ name = "nixery-popcount";
+
+ buildInputs = [ go ];
+ phases = [ "buildPhase" ];
+ buildPhase = ''
+ mkdir -p $out/bin
+ go build -o $out/bin/popcount ${./popcount.go}
+ '';
+}
--
cgit 1.4.1
From 3c2de4c037b52d20476a9c47d53a0e456229ae55 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 5 Nov 2019 12:57:10 +0000
Subject: refactor(builder): Parameterise CPU architecture to use for images
Adds the CPU architecture to the image configuration. This will make it
possible for users to toggle the architecture via meta-packages.
Relates to #13
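The essence of the change is a pairing of the Nix system tuple (passed to the
builder via --argstr) with the architecture name expected in the OCI image
configuration. Roughly, as a standalone sketch (names simplified, not part of
the patch):

    package main

    import "fmt"

    // architecture pairs a Nix system tuple with the corresponding
    // architecture string used in image manifests.
    type architecture struct {
        nixSystem string
        imageArch string
    }

    func main() {
        archs := []architecture{
            {"x86_64-linux", "amd64"},
            {"aarch64-linux", "arm64"},
        }

        for _, a := range archs {
            fmt.Printf("nix-build --argstr system %s  =>  \"architecture\": %q\n",
                a.nixSystem, a.imageArch)
        }
    }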
---
tools/nixery/build-image/build-image.nix | 8 +++++++-
tools/nixery/server/builder/builder.go | 24 +++++++++++++++++++++++-
tools/nixery/server/manifest/manifest.go | 7 +++----
3 files changed, 33 insertions(+), 6 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index d4df707a8a3a..eb14d52424bc 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -25,6 +25,7 @@
# Description of the package set to be used (will be loaded by load-pkgs.nix)
srcType ? "nixpkgs",
srcArgs ? "nixos-19.03",
+ system ? "x86_64-linux",
importArgs ? { },
# Path to load-pkgs.nix
loadPkgs ? ./load-pkgs.nix,
@@ -46,7 +47,12 @@ let
inherit (pkgs) lib runCommand writeText;
- pkgs = import loadPkgs { inherit srcType srcArgs importArgs; };
+ pkgs = import loadPkgs {
+ inherit srcType srcArgs;
+ importArgs = importArgs // {
+ inherit system;
+ };
+ };
# deepFetch traverses the top-level Nix package set to retrieve an item via a
# path specified in string form.
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 59158037e443..c726b137fc33 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -53,6 +53,22 @@ type State struct {
Pop layers.Popularity
}
+// Architecture represents the possible CPU architectures for which
+// container images can be built.
+//
+// The default architecture is amd64, but support for ARM platforms is
+// available within nixpkgs and can be toggled via meta-packages.
+type Architecture struct {
+ // Name of the system tuple to pass to Nix
+ nixSystem string
+
+ // Name of the architecture as used in the OCI manifests
+ imageArch string
+}
+
+var amd64 = Architecture{"x86_64-linux", "amd64"}
+var arm = Architecture{"aarch64-linux", "arm64"}
+
// Image represents the information necessary for building a container image.
// This can be either a list of package names (corresponding to keys in the
// nixpkgs set) or a Nix expression that results in a *list* of derivations.
@@ -63,6 +79,10 @@ type Image struct {
// Names of packages to include in the image. These must correspond
// directly to top-level names of Nix packages in the nixpkgs tree.
Packages []string
+
+ // Architecture for which to build the image. Nixery defaults
+ // this to amd64 if not specified via meta-packages.
+ Arch *Architecture
}
// BuildResult represents the data returned from the server to the
@@ -96,6 +116,7 @@ func ImageFromName(name string, tag string) Image {
Name: strings.Join(pkgs, "/"),
Tag: tag,
Packages: expanded,
+ Arch: &amd64,
}
}
@@ -218,6 +239,7 @@ func prepareImage(s *State, image *Image) (*ImageResult, error) {
"--argstr", "packages", string(packages),
"--argstr", "srcType", srcType,
"--argstr", "srcArgs", srcArgs,
+ "--argstr", "system", image.Arch.nixSystem,
}
output, err := callNix("nixery-build-image", image.Name, args)
@@ -448,7 +470,7 @@ func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, erro
return nil, err
}
- m, c := manifest.Manifest(layers)
+ m, c := manifest.Manifest(image.Arch.imageArch, layers)
lw := func(w io.Writer) error {
r := bytes.NewReader(c.Config)
diff --git a/tools/nixery/server/manifest/manifest.go b/tools/nixery/server/manifest/manifest.go
index 11fef3ff0d0c..0d36826fb7e5 100644
--- a/tools/nixery/server/manifest/manifest.go
+++ b/tools/nixery/server/manifest/manifest.go
@@ -33,7 +33,6 @@ const (
configType = "application/vnd.docker.container.image.v1+json"
// image config constants
- arch = "amd64"
os = "linux"
fsType = "layers"
)
@@ -84,7 +83,7 @@ type ConfigLayer struct {
// Outside of this module the image configuration is treated as an
// opaque blob and it is thus returned as an already serialised byte
// array and its SHA256-hash.
-func configLayer(hashes []string) ConfigLayer {
+func configLayer(arch string, hashes []string) ConfigLayer {
c := imageConfig{}
c.Architecture = arch
c.OS = os
@@ -104,7 +103,7 @@ func configLayer(hashes []string) ConfigLayer {
// layer.
//
// Callers do not need to set the media type for the layer entries.
-func Manifest(layers []Entry) (json.RawMessage, ConfigLayer) {
+func Manifest(arch string, layers []Entry) (json.RawMessage, ConfigLayer) {
// Sort layers by their merge rating, from highest to lowest.
// This makes it likely for a contiguous chain of shared image
// layers to appear at the beginning of a layer.
@@ -123,7 +122,7 @@ func Manifest(layers []Entry) (json.RawMessage, ConfigLayer) {
layers[i] = l
}
- c := configLayer(hashes)
+ c := configLayer(arch, hashes)
m := manifest{
SchemaVersion: schemaVersion,
--
cgit 1.4.1
From d7ccf351494873b7e45ba450019c5462ef860aeb Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 5 Nov 2019 14:03:48 +0000
Subject: feat(builder): Support 'arm64' meta-package
Specifying this meta-package toggles support for ARM64 images, for
example:
# Pull a default x86_64 image
docker pull nixery.dev/hello
# Pull an ARM64 image
docker pull nixery.dev/arm64/hello
---
tools/nixery/server/builder/builder.go | 41 ++++++++++++++++++++++++----------
1 file changed, 29 insertions(+), 12 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index c726b137fc33..57b94090911e 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -67,7 +67,7 @@ type Architecture struct {
}
var amd64 = Architecture{"x86_64-linux", "amd64"}
-var arm = Architecture{"aarch64-linux", "arm64"}
+var arm64 = Architecture{"aarch64-linux", "arm64"}
// Image represents the information necessary for building a container image.
// This can be either a list of package names (corresponding to keys in the
@@ -106,7 +106,7 @@ type BuildResult struct {
// only the order of requested packages has changed.
func ImageFromName(name string, tag string) Image {
pkgs := strings.Split(name, "/")
- expanded := convenienceNames(pkgs)
+ arch, expanded := metaPackages(pkgs)
expanded = append(expanded, "cacert", "iana-etc")
sort.Strings(pkgs)
@@ -116,7 +116,7 @@ func ImageFromName(name string, tag string) Image {
Name: strings.Join(pkgs, "/"),
Tag: tag,
Packages: expanded,
- Arch: &amd64,
+ Arch: arch,
}
}
@@ -136,22 +136,39 @@ type ImageResult struct {
} `json:"symlinkLayer"`
}
-// convenienceNames expands convenience package names defined by Nixery which
-// let users include commonly required sets of tools in a container quickly.
+// metaPackages expands package names defined by Nixery which either
+// include sets of packages or trigger certain image-building
+// behaviour.
//
-// Convenience names must be specified as the first package in an image.
+// Meta-packages must be specified as the first packages in an image
+// name.
//
-// Currently defined convenience names are:
+// Currently defined meta-packages are:
//
// * `shell`: Includes bash, coreutils and other common command-line tools
-func convenienceNames(packages []string) []string {
- shellPackages := []string{"bashInteractive", "coreutils", "moreutils", "nano"}
+// * `arm64`: Causes Nixery to build images for the ARM64 architecture
+func metaPackages(packages []string) (*Architecture, []string) {
+ arch := &amd64
+ var metapkgs []string
+ for idx, p := range packages {
+ if p == "shell" || p == "arm64" {
+ metapkgs = append(metapkgs, p)
+ } else {
+ packages = packages[idx:]
+ break
+ }
+ }
- if packages[0] == "shell" {
- return append(packages[1:], shellPackages...)
+ for _, p := range metapkgs {
+ switch p {
+ case "shell":
+ packages = append(packages, "bashInteractive", "coreutils", "moreutils", "nano")
+ case "arm64":
+ arch = &arm64
+ }
}
- return packages
+ return arch, packages
}
// logNix logs each output line from Nix. It runs in a goroutine per
--
cgit 1.4.1
From 145b7f4289cd6c54bbbe5d1345c8d034e6f16be7 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 7 Nov 2019 17:19:06 +0000
Subject: fix(build-image): Allow "cross-builds" of images for different arch
Imports the package set twice in the builder expression: once configured
for the target system and once configured for the native system.
This makes it possible to fetch the actual image contents for the
required architecture, but use local tools to assemble the symlink
layer and metadata.
---
tools/nixery/build-image/build-image.nix | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
index eb14d52424bc..4393f2b859a6 100644
--- a/tools/nixery/build-image/build-image.nix
+++ b/tools/nixery/build-image/build-image.nix
@@ -45,8 +45,13 @@ let
toFile
toJSON;
- inherit (pkgs) lib runCommand writeText;
+ # Package set to use for sourcing utilities
+ nativePkgs = import loadPkgs { inherit srcType srcArgs importArgs; };
+ inherit (nativePkgs) coreutils jq openssl lib runCommand writeText symlinkJoin;
+ # Package set to use for packages to be included in the image. This
+ # package set is imported with the system set to the target
+ # architecture.
pkgs = import loadPkgs {
inherit srcType srcArgs;
importArgs = importArgs // {
@@ -115,7 +120,7 @@ let
runtimeGraph = runCommand "runtime-graph.json" {
__structuredAttrs = true;
exportReferencesGraph.graph = allContents.contents;
- PATH = "${pkgs.coreutils}/bin";
+ PATH = "${coreutils}/bin";
builder = toFile "builder" ''
. .attrs.sh
cp .attrs.json ''${outputs[out]}
@@ -124,7 +129,7 @@ let
# Create a symlink forest into all top-level store paths of the
# image contents.
- contentsEnv = pkgs.symlinkJoin {
+ contentsEnv = symlinkJoin {
name = "bulk-layers";
paths = allContents.contents;
};
@@ -141,7 +146,7 @@ let
# Two different hashes are computed for different usages (inclusion
# in manifest vs. content-checking in the layer cache).
symlinkLayerMeta = fromJSON (readFile (runCommand "symlink-layer-meta.json" {
- buildInputs = with pkgs; [ coreutils jq openssl ];
+ buildInputs = [ coreutils jq openssl ];
}''
tarHash=$(sha256sum ${symlinkLayer} | cut -d ' ' -f1)
layerSize=$(stat --printf '%s' ${symlinkLayer})
--
cgit 1.4.1
From 1d6898a7cc3c4b4d46993e0076302dffad19f46f Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 9 Nov 2019 14:19:21 +0000
Subject: feat(build): Include arm64 in build matrix
---
tools/nixery/.travis.yml | 4 ++++
1 file changed, 4 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 89963840cc12..02700c319686 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -1,9 +1,13 @@
language: nix
+arch:
+ - amd64
+ - arm64
services:
- docker
env:
- NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/5271f8dddc0f2e54f55bd2fc1868c09ff72ac980.tar.gz
before_script:
+ - echo "Running Nixery CI build on $(uname -m)"
- mkdir test-files
- echo ${GOOGLE_KEY} | base64 -d > test-files/key.json
- echo ${GCS_SIGNING_PEM} | base64 -d > test-files/gcs.pem
--
cgit 1.4.1
From 9a8abeff977cfb488073c1f338df0821021d41ab Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 9 Nov 2019 14:30:01 +0000
Subject: feat(build): Integration test on both CPU architectures
---
tools/nixery/.travis.yml | 23 ++++++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 02700c319686..72b2a657b34b 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -55,4 +55,25 @@ script:
echo -n "."
sleep 1
done
- - docker run --rm localhost:8080/hello hello
+
+ # Pull and run an image of the current CPU architecture
+ - |
+ case $(uname -m) in
+ x86_64)
+ docker run --rm localhost:8080/hello hello
+ ;;
+ aarch64)
+ docker run --rm localhost:8080/arm64/hello hello
+ ;;
+ esac
+
+ # Pull an image of the opposite CPU architecture (but without running it)
+ - |
+ case $(uname -m) in
+ x86_64)
+ docker pull localhost:8080/arm64/hello
+ ;;
+ aarch64)
+ docker pull localhost:8080/hello
+ ;;
+ esac
--
cgit 1.4.1
From 104c930040ab081f6e24aad95a7df71339d8e6d4 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 9 Nov 2019 14:30:48 +0000
Subject: chore(build): Use significantly fewer layers for Nixery itself
Nixery itself is built with the buildLayeredImage system, which takes
some time to create large numbers of layers.
This adjusts the default number of image layers from 96 to 20.
Additionally, Nixery's image is often loaded with `docker load -i`, which
ignores layer cache hits anyway.
Additionally, the CI build is configured to use only a single layer, which
speeds up CI runs.
---
tools/nixery/.travis.yml | 2 +-
tools/nixery/default.nix | 6 ++++--
2 files changed, 5 insertions(+), 3 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 72b2a657b34b..b0a2b3f997f9 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -15,7 +15,7 @@ before_script:
- cachix use nixery
script:
- test -z $(gofmt -l server/ build-image/)
- - nix-build | cachix push nixery
+ - nix-build --arg maxLayers 1 | cachix push nixery
# This integration test makes sure that the container image built
# for Nixery itself runs fine in Docker, and that images pulled
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index af1ec904bf25..44ac7313adc7 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -14,7 +14,8 @@
{ pkgs ? import { }
, preLaunch ? ""
-, extraPackages ? [] }:
+, extraPackages ? []
+, maxLayers ? 20 }:
with pkgs;
@@ -92,7 +93,8 @@ in rec {
in dockerTools.buildLayeredImage {
name = "nixery";
config.Cmd = [ "${nixery-launch-script}/bin/nixery" ];
- maxLayers = 96;
+
+ inherit maxLayers;
contents = [
bashInteractive
cacert
--
cgit 1.4.1
From a924093d0932b01e03a9cc6979926dddadb67323 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 27 Nov 2019 11:43:51 +0000
Subject: test(builder): Add test coverage for name->image conversion
Adds tests to cover that image names, including meta-packages, are converted
into the correct image structures.
---
tools/nixery/server/builder/builder_test.go | 123 ++++++++++++++++++++++++++++
1 file changed, 123 insertions(+)
create mode 100644 tools/nixery/server/builder/builder_test.go
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder_test.go b/tools/nixery/server/builder/builder_test.go
new file mode 100644
index 000000000000..3fbe2ab40e23
--- /dev/null
+++ b/tools/nixery/server/builder/builder_test.go
@@ -0,0 +1,123 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+package builder
+
+import (
+ "github.com/google/go-cmp/cmp"
+ "github.com/google/go-cmp/cmp/cmpopts"
+ "testing"
+)
+
+var ignoreArch = cmpopts.IgnoreFields(Image{}, "Arch")
+
+func TestImageFromNameSimple(t *testing.T) {
+ image := ImageFromName("hello", "latest")
+ expected := Image{
+ Name: "hello",
+ Tag: "latest",
+ Packages: []string{
+ "cacert",
+ "hello",
+ "iana-etc",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"hello\", \"latest\") mismatch:\n%s", diff)
+ }
+}
+
+func TestImageFromNameMultiple(t *testing.T) {
+ image := ImageFromName("hello/git/htop", "latest")
+ expected := Image{
+ Name: "git/hello/htop",
+ Tag: "latest",
+ Packages: []string{
+ "cacert",
+ "git",
+ "hello",
+ "htop",
+ "iana-etc",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"hello/git/htop\", \"latest\") mismatch:\n%s", diff)
+ }
+}
+
+func TestImageFromNameShell(t *testing.T) {
+ image := ImageFromName("shell", "latest")
+ expected := Image{
+ Name: "shell",
+ Tag: "latest",
+ Packages: []string{
+ "bashInteractive",
+ "cacert",
+ "coreutils",
+ "iana-etc",
+ "moreutils",
+ "nano",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"shell\", \"latest\") mismatch:\n%s", diff)
+ }
+}
+
+func TestImageFromNameShellMultiple(t *testing.T) {
+ image := ImageFromName("shell/htop", "latest")
+ expected := Image{
+ Name: "htop/shell",
+ Tag: "latest",
+ Packages: []string{
+ "bashInteractive",
+ "cacert",
+ "coreutils",
+ "htop",
+ "iana-etc",
+ "moreutils",
+ "nano",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"shell/htop\", \"latest\") mismatch:\n%s", diff)
+ }
+}
+
+func TestImageFromNameShellArm64(t *testing.T) {
+ image := ImageFromName("shell/arm64", "latest")
+ expected := Image{
+ Name: "arm64/shell",
+ Tag: "latest",
+ Packages: []string{
+ "bashInteractive",
+ "cacert",
+ "coreutils",
+ "iana-etc",
+ "moreutils",
+ "nano",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"shell/arm64\", \"latest\") mismatch:\n%s", diff)
+ }
+
+ if image.Arch.imageArch != "arm64" {
+ t.Fatal("Image(\"shell/arm64\"): Expected arch arm64")
+ }
+}
--
cgit 1.4.1
From df88da126a5c0dc97aa0fadaf1baf069b80ce251 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 27 Nov 2019 11:44:26 +0000
Subject: fix(builder): Ensure "solo-metapackages" do not break builds
The previous logic failed for image names consisting only of meta-packages
(such as "nixery.dev/shell"): the meta-package itself was never removed from
the list of packages passed to Nix, causing a build failure.
This was a regression introduced in 827468a.
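As a standalone illustration of the failure mode (simplified, not part of the
patch): with the old slicing, an image name consisting only of meta-packages
never reached the branch that trimmed them off, so the meta-package name
leaked through to Nix as if it were a real package.

    package main

    import "fmt"

    func isMeta(p string) bool { return p == "shell" || p == "arm64" }

    // oldTrim reproduces the previous (buggy) behaviour: trimming only
    // happened once a non-meta package was encountered.
    func oldTrim(packages []string) []string {
        for idx, p := range packages {
            if !isMeta(p) {
                return packages[idx:]
            }
        }
        return packages // all-meta names were returned unchanged
    }

    // newTrim mirrors the fix: remember where the meta-packages end and
    // always chop them off the front.
    func newTrim(packages []string) []string {
        lastMeta := 0
        for idx, p := range packages {
            if isMeta(p) {
                lastMeta = idx + 1
            } else {
                break
            }
        }
        return packages[lastMeta:]
    }

    func main() {
        fmt.Println(oldTrim([]string{"shell"})) // [shell] -> passed to Nix, build fails
        fmt.Println(newTrim([]string{"shell"})) // []      -> only the expanded shell set remains
    }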
---
tools/nixery/server/builder/builder.go | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
index 57b94090911e..da9dede1acd7 100644
--- a/tools/nixery/server/builder/builder.go
+++ b/tools/nixery/server/builder/builder.go
@@ -149,16 +149,22 @@ type ImageResult struct {
// * `arm64`: Causes Nixery to build images for the ARM64 architecture
func metaPackages(packages []string) (*Architecture, []string) {
arch := &amd64
+
var metapkgs []string
+ lastMeta := 0
for idx, p := range packages {
if p == "shell" || p == "arm64" {
metapkgs = append(metapkgs, p)
+ lastMeta = idx + 1
} else {
- packages = packages[idx:]
break
}
}
+ // Chop off the meta-packages from the front of the package
+ // list
+ packages = packages[lastMeta:]
+
for _, p := range metapkgs {
switch p {
case "shell":
--
cgit 1.4.1
From 2b82f1b71a50b8b1473421cce0eec1a0d7ddc360 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Mon, 11 Nov 2019 21:07:16 +0000
Subject: refactor: Reshuffle file structure for better code layout
This gets rid of the package called "server" and instead moves
everything into the project root, such that Go actually builds us a
binary called `nixery`.
This is the first step towards factoring out CLI-based functionality
for Nixery.
---
tools/nixery/build-image/build-image.nix | 173 ---------
tools/nixery/build-image/default.nix | 29 --
tools/nixery/build-image/load-pkgs.nix | 45 ---
tools/nixery/builder/archive.go | 113 ++++++
tools/nixery/builder/builder.go | 521 +++++++++++++++++++++++++++
tools/nixery/builder/builder_test.go | 123 +++++++
tools/nixery/builder/cache.go | 236 ++++++++++++
tools/nixery/builder/layers.go | 364 +++++++++++++++++++
tools/nixery/config/config.go | 84 +++++
tools/nixery/config/pkgsource.go | 159 ++++++++
tools/nixery/default.nix | 44 ++-
tools/nixery/go-deps.nix | 129 +++++++
tools/nixery/logs/logs.go | 119 ++++++
tools/nixery/main.go | 249 +++++++++++++
tools/nixery/manifest/manifest.go | 141 ++++++++
tools/nixery/popcount/popcount.go | 2 +-
tools/nixery/prepare-image/default.nix | 29 ++
tools/nixery/prepare-image/load-pkgs.nix | 45 +++
tools/nixery/prepare-image/prepare-image.nix | 173 +++++++++
tools/nixery/server/builder/archive.go | 116 ------
tools/nixery/server/builder/builder.go | 521 ---------------------------
tools/nixery/server/builder/builder_test.go | 123 -------
tools/nixery/server/builder/cache.go | 236 ------------
tools/nixery/server/config/config.go | 84 -----
tools/nixery/server/config/pkgsource.go | 159 --------
tools/nixery/server/default.nix | 62 ----
tools/nixery/server/go-deps.nix | 129 -------
tools/nixery/server/layers/grouping.go | 361 -------------------
tools/nixery/server/logs.go | 119 ------
tools/nixery/server/main.go | 248 -------------
tools/nixery/server/manifest/manifest.go | 141 --------
tools/nixery/server/storage/filesystem.go | 96 -----
tools/nixery/server/storage/gcs.go | 219 -----------
tools/nixery/server/storage/storage.go | 51 ---
tools/nixery/shell.nix | 2 +-
tools/nixery/storage/filesystem.go | 96 +++++
tools/nixery/storage/gcs.go | 219 +++++++++++
tools/nixery/storage/storage.go | 51 +++
38 files changed, 2890 insertions(+), 2921 deletions(-)
delete mode 100644 tools/nixery/build-image/build-image.nix
delete mode 100644 tools/nixery/build-image/default.nix
delete mode 100644 tools/nixery/build-image/load-pkgs.nix
create mode 100644 tools/nixery/builder/archive.go
create mode 100644 tools/nixery/builder/builder.go
create mode 100644 tools/nixery/builder/builder_test.go
create mode 100644 tools/nixery/builder/cache.go
create mode 100644 tools/nixery/builder/layers.go
create mode 100644 tools/nixery/config/config.go
create mode 100644 tools/nixery/config/pkgsource.go
create mode 100644 tools/nixery/go-deps.nix
create mode 100644 tools/nixery/logs/logs.go
create mode 100644 tools/nixery/main.go
create mode 100644 tools/nixery/manifest/manifest.go
create mode 100644 tools/nixery/prepare-image/default.nix
create mode 100644 tools/nixery/prepare-image/load-pkgs.nix
create mode 100644 tools/nixery/prepare-image/prepare-image.nix
delete mode 100644 tools/nixery/server/builder/archive.go
delete mode 100644 tools/nixery/server/builder/builder.go
delete mode 100644 tools/nixery/server/builder/builder_test.go
delete mode 100644 tools/nixery/server/builder/cache.go
delete mode 100644 tools/nixery/server/config/config.go
delete mode 100644 tools/nixery/server/config/pkgsource.go
delete mode 100644 tools/nixery/server/default.nix
delete mode 100644 tools/nixery/server/go-deps.nix
delete mode 100644 tools/nixery/server/layers/grouping.go
delete mode 100644 tools/nixery/server/logs.go
delete mode 100644 tools/nixery/server/main.go
delete mode 100644 tools/nixery/server/manifest/manifest.go
delete mode 100644 tools/nixery/server/storage/filesystem.go
delete mode 100644 tools/nixery/server/storage/gcs.go
delete mode 100644 tools/nixery/server/storage/storage.go
create mode 100644 tools/nixery/storage/filesystem.go
create mode 100644 tools/nixery/storage/gcs.go
create mode 100644 tools/nixery/storage/storage.go
(limited to 'tools')
diff --git a/tools/nixery/build-image/build-image.nix b/tools/nixery/build-image/build-image.nix
deleted file mode 100644
index 4393f2b859a6..000000000000
--- a/tools/nixery/build-image/build-image.nix
+++ /dev/null
@@ -1,173 +0,0 @@
-# Copyright 2019 Google LLC
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# https://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# This file contains a derivation that outputs structured information
-# about the runtime dependencies of an image with a given set of
-# packages. This is used by Nixery to determine the layer grouping and
-# assemble each layer.
-#
-# In addition it creates and outputs a meta-layer with the symlink
-# structure required for using the image together with the individual
-# package layers.
-
-{
- # Description of the package set to be used (will be loaded by load-pkgs.nix)
- srcType ? "nixpkgs",
- srcArgs ? "nixos-19.03",
- system ? "x86_64-linux",
- importArgs ? { },
- # Path to load-pkgs.nix
- loadPkgs ? ./load-pkgs.nix,
- # Packages to install by name (which must refer to top-level attributes of
- # nixpkgs). This is passed in as a JSON-array in string form.
- packages ? "[]"
-}:
-
-let
- inherit (builtins)
- foldl'
- fromJSON
- hasAttr
- length
- match
- readFile
- toFile
- toJSON;
-
- # Package set to use for sourcing utilities
- nativePkgs = import loadPkgs { inherit srcType srcArgs importArgs; };
- inherit (nativePkgs) coreutils jq openssl lib runCommand writeText symlinkJoin;
-
- # Package set to use for packages to be included in the image. This
- # package set is imported with the system set to the target
- # architecture.
- pkgs = import loadPkgs {
- inherit srcType srcArgs;
- importArgs = importArgs // {
- inherit system;
- };
- };
-
- # deepFetch traverses the top-level Nix package set to retrieve an item via a
- # path specified in string form.
- #
- # For top-level items, the name of the key yields the result directly. Nested
- # items are fetched by using dot-syntax, as in Nix itself.
- #
- # Due to a restriction of the registry API specification it is not possible to
- # pass uppercase characters in an image name, however the Nix package set
- # makes use of camelCasing repeatedly (for example for `haskellPackages`).
- #
- # To work around this, if no value is found on the top-level a second lookup
- # is done on the package set using lowercase-names. This is not done for
- # nested sets, as they often have keys that only differ in case.
- #
- # For example, `deepFetch pkgs "xorg.xev"` retrieves `pkgs.xorg.xev` and
- # `deepFetch haskellpackages.stylish-haskell` retrieves
- # `haskellPackages.stylish-haskell`.
- deepFetch = with lib; s: n:
- let path = splitString "." n;
- err = { error = "not_found"; pkg = n; };
- # The most efficient way I've found to do a lookup against
- # case-differing versions of an attribute is to first construct a
- # mapping of all lowercased attribute names to their differently cased
- # equivalents.
- #
- # This map is then used for a second lookup if the top-level
- # (case-sensitive) one does not yield a result.
- hasUpper = str: (match ".*[A-Z].*" str) != null;
- allUpperKeys = filter hasUpper (attrNames s);
- lowercased = listToAttrs (map (k: {
- name = toLower k;
- value = k;
- }) allUpperKeys);
- caseAmendedPath = map (v: if hasAttr v lowercased then lowercased."${v}" else v) path;
- fetchLower = attrByPath caseAmendedPath err s;
- in attrByPath path fetchLower s;
-
- # allContents contains all packages successfully retrieved by name
- # from the package set, as well as any errors encountered while
- # attempting to fetch a package.
- #
- # Accumulated error information is returned back to the server.
- allContents =
- # Folds over the results of 'deepFetch' on all requested packages to
- # separate them into errors and content. This allows the program to
- # terminate early and return only the errors if any are encountered.
- let splitter = attrs: res:
- if hasAttr "error" res
- then attrs // { errors = attrs.errors ++ [ res ]; }
- else attrs // { contents = attrs.contents ++ [ res ]; };
- init = { contents = []; errors = []; };
- fetched = (map (deepFetch pkgs) (fromJSON packages));
- in foldl' splitter init fetched;
-
- # Contains the export references graph of all retrieved packages,
- # which has information about all runtime dependencies of the image.
- #
- # This is used by Nixery to group closures into image layers.
- runtimeGraph = runCommand "runtime-graph.json" {
- __structuredAttrs = true;
- exportReferencesGraph.graph = allContents.contents;
- PATH = "${coreutils}/bin";
- builder = toFile "builder" ''
- . .attrs.sh
- cp .attrs.json ''${outputs[out]}
- '';
- } "";
-
- # Create a symlink forest into all top-level store paths of the
- # image contents.
- contentsEnv = symlinkJoin {
- name = "bulk-layers";
- paths = allContents.contents;
- };
-
- # Image layer that contains the symlink forest created above. This
- # must be included in the image to ensure that the filesystem has a
- # useful layout at runtime.
- symlinkLayer = runCommand "symlink-layer.tar" {} ''
- cp -r ${contentsEnv}/ ./layer
- tar --transform='s|^\./||' -C layer --sort=name --mtime="@$SOURCE_DATE_EPOCH" --owner=0 --group=0 -cf $out .
- '';
-
- # Metadata about the symlink layer which is required for serving it.
- # Two different hashes are computed for different usages (inclusion
- # in manifest vs. content-checking in the layer cache).
- symlinkLayerMeta = fromJSON (readFile (runCommand "symlink-layer-meta.json" {
- buildInputs = [ coreutils jq openssl ];
- }''
- tarHash=$(sha256sum ${symlinkLayer} | cut -d ' ' -f1)
- layerSize=$(stat --printf '%s' ${symlinkLayer})
-
- jq -n -c --arg tarHash $tarHash --arg size $layerSize --arg path ${symlinkLayer} \
- '{ size: ($size | tonumber), tarHash: $tarHash, path: $path }' >> $out
- ''));
-
- # Final output structure returned to Nixery if the build succeeded
- buildOutput = {
- runtimeGraph = fromJSON (readFile runtimeGraph);
- symlinkLayer = symlinkLayerMeta;
- };
-
- # Output structure returned if errors occured during the build. Currently the
- # only error type that is returned in a structured way is 'not_found'.
- errorOutput = {
- error = "not_found";
- pkgs = map (err: err.pkg) allContents.errors;
- };
-in writeText "build-output.json" (if (length allContents.errors) == 0
- then toJSON buildOutput
- else toJSON errorOutput
-)
diff --git a/tools/nixery/build-image/default.nix b/tools/nixery/build-image/default.nix
deleted file mode 100644
index a61ac06bdd92..000000000000
--- a/tools/nixery/build-image/default.nix
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright 2019 Google LLC
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# https://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# This file builds a wrapper script called by Nixery to ask for the
-# content information for a given image.
-#
-# The purpose of using a wrapper script is to ensure that the paths to
-# all required Nix files are set correctly at runtime.
-
-{ pkgs ? import {} }:
-
-pkgs.writeShellScriptBin "nixery-build-image" ''
- exec ${pkgs.nix}/bin/nix-build \
- --show-trace \
- --no-out-link "$@" \
- --argstr loadPkgs ${./load-pkgs.nix} \
- ${./build-image.nix}
-''
diff --git a/tools/nixery/build-image/load-pkgs.nix b/tools/nixery/build-image/load-pkgs.nix
deleted file mode 100644
index cceebfc14dae..000000000000
--- a/tools/nixery/build-image/load-pkgs.nix
+++ /dev/null
@@ -1,45 +0,0 @@
-# Copyright 2019 Google LLC
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# https://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Load a Nix package set from one of the supported source types
-# (nixpkgs, git, path).
-{ srcType, srcArgs, importArgs ? { } }:
-
-with builtins;
-let
- # If a nixpkgs channel is requested, it is retrieved from Github (as
- # a tarball) and imported.
- fetchImportChannel = channel:
- let
- url =
- "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
- in import (fetchTarball url) importArgs;
-
- # If a git repository is requested, it is retrieved via
- # builtins.fetchGit which defaults to the git configuration of the
- # outside environment. This means that user-configured SSH
- # credentials etc. are going to work as expected.
- fetchImportGit = spec: import (fetchGit spec) importArgs;
-
- # No special handling is used for paths, so users are expected to pass one
- # that will work natively with Nix.
- importPath = path: import (toPath path) importArgs;
-in if srcType == "nixpkgs" then
- fetchImportChannel srcArgs
-else if srcType == "git" then
- fetchImportGit (fromJSON srcArgs)
-else if srcType == "path" then
- importPath srcArgs
-else
- throw ("Invalid package set source specification: ${srcType} (${srcArgs})")
diff --git a/tools/nixery/builder/archive.go b/tools/nixery/builder/archive.go
new file mode 100644
index 000000000000..ff822e389a7d
--- /dev/null
+++ b/tools/nixery/builder/archive.go
@@ -0,0 +1,113 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+package builder
+
+// This file implements logic for walking through a directory and creating a
+// tarball of it.
+//
+// The tarball is written straight to the supplied reader, which makes it
+// possible to create an image layer from the specified store paths, hash it and
+// upload it in one reading pass.
+import (
+ "archive/tar"
+ "compress/gzip"
+ "crypto/sha256"
+ "fmt"
+ "io"
+ "os"
+ "path/filepath"
+)
+
+// Create a new compressed tarball from each of the paths in the list
+// and write it to the supplied writer.
+//
+// The uncompressed tarball is hashed because image manifests must
+// contain both the hashes of compressed and uncompressed layers.
+func packStorePaths(l *layer, w io.Writer) (string, error) {
+ shasum := sha256.New()
+ gz := gzip.NewWriter(w)
+ multi := io.MultiWriter(shasum, gz)
+ t := tar.NewWriter(multi)
+
+ for _, path := range l.Contents {
+ err := filepath.Walk(path, tarStorePath(t))
+ if err != nil {
+ return "", err
+ }
+ }
+
+ if err := t.Close(); err != nil {
+ return "", err
+ }
+
+ if err := gz.Close(); err != nil {
+ return "", err
+ }
+
+ return fmt.Sprintf("sha256:%x", shasum.Sum([]byte{})), nil
+}
+
+func tarStorePath(w *tar.Writer) filepath.WalkFunc {
+ return func(path string, info os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+
+ // If the entry is not a symlink or regular file, skip it.
+ if info.Mode()&os.ModeSymlink == 0 && !info.Mode().IsRegular() {
+ return nil
+ }
+
+		// The symlink target is read if this entry is a symlink, as it
+		// is required when creating the file header.
+ var link string
+ if info.Mode()&os.ModeSymlink != 0 {
+ link, err = os.Readlink(path)
+ if err != nil {
+ return err
+ }
+ }
+
+ header, err := tar.FileInfoHeader(info, link)
+ if err != nil {
+ return err
+ }
+
+ // The name retrieved from os.FileInfo only contains the file's
+ // basename, but the full path is required within the layer
+ // tarball.
+ header.Name = path
+ if err = w.WriteHeader(header); err != nil {
+ return err
+ }
+
+ // At this point, return if no file content needs to be written
+ if !info.Mode().IsRegular() {
+ return nil
+ }
+
+		f, err := os.Open(path)
+		if err != nil {
+			return err
+		}
+		// Close the file even if the copy below fails.
+		defer f.Close()
+
+		if _, err := io.Copy(w, f); err != nil {
+			return err
+		}
+
+ return nil
+ }
+}
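As an aside, the pattern above (writing the tar stream through an `io.MultiWriter` so that the uncompressed bytes are hashed while the gzip output goes to the destination) can be illustrated with a small, self-contained sketch using only the standard library. This is illustrative only and not part of this patch:

```go
package main

import (
	"archive/tar"
	"compress/gzip"
	"crypto/sha256"
	"fmt"
	"io"
	"io/ioutil"
)

// writeTarGz writes a single in-memory file as a gzipped tarball to w and
// returns the SHA256 digest of the *uncompressed* tar stream, mirroring the
// MultiWriter setup in packStorePaths.
func writeTarGz(w io.Writer, name string, data []byte) (string, error) {
	shasum := sha256.New()
	gz := gzip.NewWriter(w)
	// Everything written to the tar writer is hashed and compressed in one pass.
	t := tar.NewWriter(io.MultiWriter(shasum, gz))

	hdr := &tar.Header{Name: name, Mode: 0644, Size: int64(len(data))}
	if err := t.WriteHeader(hdr); err != nil {
		return "", err
	}
	if _, err := t.Write(data); err != nil {
		return "", err
	}
	if err := t.Close(); err != nil {
		return "", err
	}
	if err := gz.Close(); err != nil {
		return "", err
	}

	return fmt.Sprintf("sha256:%x", shasum.Sum(nil)), nil
}

func main() {
	digest, err := writeTarGz(ioutil.Discard, "hello.txt", []byte("hello\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println(digest)
}
```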
diff --git a/tools/nixery/builder/builder.go b/tools/nixery/builder/builder.go
new file mode 100644
index 000000000000..ceb112df90cf
--- /dev/null
+++ b/tools/nixery/builder/builder.go
@@ -0,0 +1,521 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// Package builder implements the logic for assembling container
+// images. It shells out to Nix to retrieve all required Nix-packages
+// and assemble the symlink layer and then creates the required
+// tarballs in-process.
+package builder
+
+import (
+ "bufio"
+ "bytes"
+ "compress/gzip"
+ "context"
+ "crypto/sha256"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "os"
+ "os/exec"
+ "sort"
+ "strings"
+
+ "github.com/google/nixery/config"
+ "github.com/google/nixery/manifest"
+ "github.com/google/nixery/storage"
+ log "github.com/sirupsen/logrus"
+)
+
+// The maximum number of layers in an image is 125. To allow for
+// extensibility, the actual number of layers Nixery is "allowed" to
+// use up is set at a lower point.
+const LayerBudget int = 94
+
+// State holds the runtime state that is carried around in Nixery and
+// passed to builder functions.
+type State struct {
+ Storage storage.Backend
+ Cache *LocalCache
+ Cfg config.Config
+ Pop Popularity
+}
+
+// Architecture represents the possible CPU architectures for which
+// container images can be built.
+//
+// The default architecture is amd64, but support for ARM platforms is
+// available within nixpkgs and can be toggled via meta-packages.
+type Architecture struct {
+ // Name of the system tuple to pass to Nix
+ nixSystem string
+
+ // Name of the architecture as used in the OCI manifests
+ imageArch string
+}
+
+var amd64 = Architecture{"x86_64-linux", "amd64"}
+var arm64 = Architecture{"aarch64-linux", "arm64"}
+
+// Image represents the information necessary for building a container image.
+// This can be either a list of package names (corresponding to keys in the
+// nixpkgs set) or a Nix expression that results in a *list* of derivations.
+type Image struct {
+ Name string
+ Tag string
+
+ // Names of packages to include in the image. These must correspond
+ // directly to top-level names of Nix packages in the nixpkgs tree.
+ Packages []string
+
+ // Architecture for which to build the image. Nixery defaults
+ // this to amd64 if not specified via meta-packages.
+ Arch *Architecture
+}
+
+// BuildResult represents the data returned from the server to the
+// HTTP handlers. Error information is propagated straight from Nix
+// for errors inside of the build that should be fed back to the
+// client (such as missing packages).
+type BuildResult struct {
+ Error string `json:"error"`
+ Pkgs []string `json:"pkgs"`
+ Manifest json.RawMessage `json:"manifest"`
+}
+
+// ImageFromName parses an image name into the corresponding structure which can
+// be used to invoke Nix.
+//
+// It will expand convenience names under the hood (see the `metaPackages`
+// function below) and append packages that are always included (cacert, iana-etc).
+//
+// Once assembled the image structure uses a sorted representation of
+// the name. This is to avoid unnecessarily cache-busting images if
+// only the order of requested packages has changed.
+func ImageFromName(name string, tag string) Image {
+ pkgs := strings.Split(name, "/")
+ arch, expanded := metaPackages(pkgs)
+ expanded = append(expanded, "cacert", "iana-etc")
+
+ sort.Strings(pkgs)
+ sort.Strings(expanded)
+
+ return Image{
+ Name: strings.Join(pkgs, "/"),
+ Tag: tag,
+ Packages: expanded,
+ Arch: arch,
+ }
+}
+
+// ImageResult represents the output of calling the Nix derivation
+// responsible for preparing an image.
+type ImageResult struct {
+ // These fields are populated in case of an error
+ Error string `json:"error"`
+ Pkgs []string `json:"pkgs"`
+
+ // These fields are populated in case of success
+ Graph runtimeGraph `json:"runtimeGraph"`
+ SymlinkLayer struct {
+ Size int `json:"size"`
+ TarHash string `json:"tarHash"`
+ Path string `json:"path"`
+ } `json:"symlinkLayer"`
+}
+
+// metaPackages expands package names defined by Nixery which either
+// include sets of packages or trigger certain image-building
+// behaviour.
+//
+// Meta-packages must be specified as the first packages in an image
+// name.
+//
+// Currently defined meta-packages are:
+//
+// * `shell`: Includes bash, coreutils and other common command-line tools
+// * `arm64`: Causes Nixery to build images for the ARM64 architecture
+func metaPackages(packages []string) (*Architecture, []string) {
+ arch := &amd64
+
+ var metapkgs []string
+ lastMeta := 0
+ for idx, p := range packages {
+ if p == "shell" || p == "arm64" {
+ metapkgs = append(metapkgs, p)
+ lastMeta = idx + 1
+ } else {
+ break
+ }
+ }
+
+ // Chop off the meta-packages from the front of the package
+ // list
+ packages = packages[lastMeta:]
+
+ for _, p := range metapkgs {
+ switch p {
+ case "shell":
+ packages = append(packages, "bashInteractive", "coreutils", "moreutils", "nano")
+ case "arm64":
+ arch = &arm64
+ }
+ }
+
+ return arch, packages
+}
+
+// logNix logs each output line from Nix. It runs in a goroutine per
+// output channel that should be live-logged.
+func logNix(image, cmd string, r io.ReadCloser) {
+ scanner := bufio.NewScanner(r)
+ for scanner.Scan() {
+ log.WithFields(log.Fields{
+ "image": image,
+ "cmd": cmd,
+ }).Info("[nix] " + scanner.Text())
+ }
+}
+
+func callNix(program, image string, args []string) ([]byte, error) {
+ cmd := exec.Command(program, args...)
+
+ outpipe, err := cmd.StdoutPipe()
+ if err != nil {
+ return nil, err
+ }
+
+ errpipe, err := cmd.StderrPipe()
+ if err != nil {
+ return nil, err
+ }
+	go logNix(image, program, errpipe)
+
+ if err = cmd.Start(); err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "image": image,
+ "cmd": program,
+ }).Error("error invoking Nix")
+
+ return nil, err
+ }
+
+ log.WithFields(log.Fields{
+ "cmd": program,
+ "image": image,
+ }).Info("invoked Nix build")
+
+ stdout, _ := ioutil.ReadAll(outpipe)
+
+ if err = cmd.Wait(); err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "image": image,
+ "cmd": program,
+ "stdout": stdout,
+ }).Info("failed to invoke Nix")
+
+ return nil, err
+ }
+
+ resultFile := strings.TrimSpace(string(stdout))
+ buildOutput, err := ioutil.ReadFile(resultFile)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "image": image,
+ "file": resultFile,
+ }).Info("failed to read Nix result file")
+
+ return nil, err
+ }
+
+ return buildOutput, nil
+}
+
+// Call out to Nix and request metadata for the image to be built. All
+// required store paths for the image will be realised, but layers
+// will not yet be created from them.
+//
+// This function is only invoked if the manifest is not found in any
+// cache.
+func prepareImage(s *State, image *Image) (*ImageResult, error) {
+ packages, err := json.Marshal(image.Packages)
+ if err != nil {
+ return nil, err
+ }
+
+ srcType, srcArgs := s.Cfg.Pkgs.Render(image.Tag)
+
+ args := []string{
+ "--timeout", s.Cfg.Timeout,
+ "--argstr", "packages", string(packages),
+ "--argstr", "srcType", srcType,
+ "--argstr", "srcArgs", srcArgs,
+ "--argstr", "system", image.Arch.nixSystem,
+ }
+
+ output, err := callNix("nixery-prepare-image", image.Name, args)
+ if err != nil {
+ // granular error logging is performed in callNix already
+ return nil, err
+ }
+
+ log.WithFields(log.Fields{
+ "image": image.Name,
+ "tag": image.Tag,
+ }).Info("finished image preparation via Nix")
+
+ var result ImageResult
+ err = json.Unmarshal(output, &result)
+ if err != nil {
+ return nil, err
+ }
+
+ return &result, nil
+}
+
+// Groups layers and checks whether they are present in the cache
+// already, otherwise calls out to Nix to assemble layers.
+//
+// Newly built layers are uploaded to the bucket. Cache entries are
+// added only after successful uploads, which guarantees that entries
+// retrieved from the cache are present in the bucket.
+func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageResult) ([]manifest.Entry, error) {
+ grouped := groupLayers(&result.Graph, &s.Pop, LayerBudget)
+
+ var entries []manifest.Entry
+
+ // Splits the layers into those which are already present in
+ // the cache, and those that are missing.
+ //
+ // Missing layers are built and uploaded to the storage
+ // bucket.
+ for _, l := range grouped {
+ if entry, cached := layerFromCache(ctx, s, l.Hash()); cached {
+ entries = append(entries, *entry)
+ } else {
+ lh := l.Hash()
+
+ // While packing store paths, the SHA sum of
+ // the uncompressed layer is computed and
+ // written to `tarhash`.
+ //
+ // TODO(tazjin): Refactor this to make the
+ // flow of data cleaner.
+ var tarhash string
+ lw := func(w io.Writer) error {
+ var err error
+ tarhash, err = packStorePaths(&l, w)
+ return err
+ }
+
+ entry, err := uploadHashLayer(ctx, s, lh, lw)
+ if err != nil {
+ return nil, err
+ }
+ entry.MergeRating = l.MergeRating
+ entry.TarHash = tarhash
+
+ var pkgs []string
+ for _, p := range l.Contents {
+ pkgs = append(pkgs, packageFromPath(p))
+ }
+
+ log.WithFields(log.Fields{
+ "layer": lh,
+ "packages": pkgs,
+ "tarhash": tarhash,
+ }).Info("created image layer")
+
+ go cacheLayer(ctx, s, l.Hash(), *entry)
+ entries = append(entries, *entry)
+ }
+ }
+
+ // Symlink layer (built in the first Nix build) needs to be
+ // included here manually:
+ slkey := result.SymlinkLayer.TarHash
+ entry, err := uploadHashLayer(ctx, s, slkey, func(w io.Writer) error {
+ f, err := os.Open(result.SymlinkLayer.Path)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "image": image.Name,
+ "tag": image.Tag,
+ "layer": slkey,
+ }).Error("failed to open symlink layer")
+
+ return err
+ }
+ defer f.Close()
+
+ gz := gzip.NewWriter(w)
+ _, err = io.Copy(gz, f)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "image": image.Name,
+ "tag": image.Tag,
+ "layer": slkey,
+ }).Error("failed to upload symlink layer")
+
+ return err
+ }
+
+ return gz.Close()
+ })
+
+ if err != nil {
+ return nil, err
+ }
+
+ entry.TarHash = "sha256:" + result.SymlinkLayer.TarHash
+ go cacheLayer(ctx, s, slkey, *entry)
+ entries = append(entries, *entry)
+
+ return entries, nil
+}
+
+// layerWriter is the type for functions that can write a layer to the
+// multiwriter used for uploading & hashing.
+//
+// This type exists to avoid duplication between the handling of
+// symlink layers and store path layers.
+type layerWriter func(w io.Writer) error
+
+// byteCounter is a special io.Writer that counts all bytes written to
+// it and does nothing else.
+//
+// This is required because the ad-hoc writing of tarballs leaves no
+// single place to count the final tarball size otherwise.
+type byteCounter struct {
+ count int64
+}
+
+func (b *byteCounter) Write(p []byte) (n int, err error) {
+ b.count += int64(len(p))
+ return len(p), nil
+}
+
+// Upload a layer tarball to the storage bucket, while hashing it at
+// the same time. The supplied function is expected to provide the
+// layer data to the writer.
+//
+// The initial upload is performed in a 'staging' folder, as the
+// SHA256-hash is not yet available when the upload is initiated.
+//
+// After a successful upload, the file is moved to its final location
+// in the bucket and the build cache is populated.
+//
+// The return value is a manifest entry containing the layer's SHA256
+// digest and size, which is used in the image manifest.
+func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter) (*manifest.Entry, error) {
+ path := "staging/" + key
+ sha256sum, size, err := s.Storage.Persist(ctx, path, func(sw io.Writer) (string, int64, error) {
+ // Sets up a "multiwriter" that simultaneously runs both hash
+ // algorithms and uploads to the storage backend.
+ shasum := sha256.New()
+ counter := &byteCounter{}
+ multi := io.MultiWriter(sw, shasum, counter)
+
+ err := lw(multi)
+ sha256sum := fmt.Sprintf("%x", shasum.Sum([]byte{}))
+
+ return sha256sum, counter.count, err
+ })
+
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "layer": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to create and store layer")
+
+ return nil, err
+ }
+
+ // Hashes are now known and the object is in the bucket, what
+ // remains is to move it to the correct location and cache it.
+ err = s.Storage.Move(ctx, "staging/"+key, "layers/"+sha256sum)
+ if err != nil {
+ log.WithError(err).WithField("layer", key).
+ Error("failed to move layer from staging")
+
+ return nil, err
+ }
+
+ log.WithFields(log.Fields{
+ "layer": key,
+ "sha256": sha256sum,
+ "size": size,
+ }).Info("created and persisted layer")
+
+ entry := manifest.Entry{
+ Digest: "sha256:" + sha256sum,
+ Size: size,
+ }
+
+ return &entry, nil
+}
+
+func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, error) {
+ key := s.Cfg.Pkgs.CacheKey(image.Packages, image.Tag)
+ if key != "" {
+ if m, c := manifestFromCache(ctx, s, key); c {
+ return &BuildResult{
+ Manifest: m,
+ }, nil
+ }
+ }
+
+ imageResult, err := prepareImage(s, image)
+ if err != nil {
+ return nil, err
+ }
+
+ if imageResult.Error != "" {
+ return &BuildResult{
+ Error: imageResult.Error,
+ Pkgs: imageResult.Pkgs,
+ }, nil
+ }
+
+ layers, err := prepareLayers(ctx, s, image, imageResult)
+ if err != nil {
+ return nil, err
+ }
+
+ m, c := manifest.Manifest(image.Arch.imageArch, layers)
+
+ lw := func(w io.Writer) error {
+ r := bytes.NewReader(c.Config)
+ _, err := io.Copy(w, r)
+ return err
+ }
+
+ if _, err = uploadHashLayer(ctx, s, c.SHA256, lw); err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "image": image.Name,
+ "tag": image.Tag,
+ }).Error("failed to upload config")
+
+ return nil, err
+ }
+
+ if key != "" {
+ go cacheManifest(ctx, s, key, m)
+ }
+
+ result := BuildResult{
+ Manifest: m,
+ }
+ return &result, nil
+}
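For illustration, the `layerWriter`/`byteCounter` combination used by `uploadHashLayer` boils down to streaming a layer exactly once through a hash, a byte counter and the destination. A minimal standalone sketch of that pattern follows (standard library only; the `persist` helper is hypothetical and not Nixery's API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"io/ioutil"
)

// layerWriter mirrors the callback type above: the caller provides a function
// that streams the layer into whatever writer it is handed.
type layerWriter func(w io.Writer) error

// byteCounter counts bytes written through it, standing in for the size
// tracking in builder.go.
type byteCounter struct{ count int64 }

func (b *byteCounter) Write(p []byte) (int, error) {
	b.count += int64(len(p))
	return len(p), nil
}

// persist streams the layer once through the destination, a SHA256 hash and a
// byte counter, returning the digest and the size.
func persist(dst io.Writer, lw layerWriter) (string, int64, error) {
	shasum := sha256.New()
	counter := &byteCounter{}
	if err := lw(io.MultiWriter(dst, shasum, counter)); err != nil {
		return "", 0, err
	}
	return fmt.Sprintf("sha256:%x", shasum.Sum(nil)), counter.count, nil
}

func main() {
	digest, size, err := persist(ioutil.Discard, func(w io.Writer) error {
		_, err := w.Write([]byte("layer contents"))
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s (%d bytes)\n", digest, size)
}
```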
diff --git a/tools/nixery/builder/builder_test.go b/tools/nixery/builder/builder_test.go
new file mode 100644
index 000000000000..3fbe2ab40e23
--- /dev/null
+++ b/tools/nixery/builder/builder_test.go
@@ -0,0 +1,123 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+package builder
+
+import (
+ "github.com/google/go-cmp/cmp"
+ "github.com/google/go-cmp/cmp/cmpopts"
+ "testing"
+)
+
+var ignoreArch = cmpopts.IgnoreFields(Image{}, "Arch")
+
+func TestImageFromNameSimple(t *testing.T) {
+ image := ImageFromName("hello", "latest")
+ expected := Image{
+ Name: "hello",
+ Tag: "latest",
+ Packages: []string{
+ "cacert",
+ "hello",
+ "iana-etc",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"hello\", \"latest\") mismatch:\n%s", diff)
+ }
+}
+
+func TestImageFromNameMultiple(t *testing.T) {
+ image := ImageFromName("hello/git/htop", "latest")
+ expected := Image{
+ Name: "git/hello/htop",
+ Tag: "latest",
+ Packages: []string{
+ "cacert",
+ "git",
+ "hello",
+ "htop",
+ "iana-etc",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"hello/git/htop\", \"latest\") mismatch:\n%s", diff)
+ }
+}
+
+func TestImageFromNameShell(t *testing.T) {
+ image := ImageFromName("shell", "latest")
+ expected := Image{
+ Name: "shell",
+ Tag: "latest",
+ Packages: []string{
+ "bashInteractive",
+ "cacert",
+ "coreutils",
+ "iana-etc",
+ "moreutils",
+ "nano",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"shell\", \"latest\") mismatch:\n%s", diff)
+ }
+}
+
+func TestImageFromNameShellMultiple(t *testing.T) {
+ image := ImageFromName("shell/htop", "latest")
+ expected := Image{
+ Name: "htop/shell",
+ Tag: "latest",
+ Packages: []string{
+ "bashInteractive",
+ "cacert",
+ "coreutils",
+ "htop",
+ "iana-etc",
+ "moreutils",
+ "nano",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"shell/htop\", \"latest\") mismatch:\n%s", diff)
+ }
+}
+
+func TestImageFromNameShellArm64(t *testing.T) {
+ image := ImageFromName("shell/arm64", "latest")
+ expected := Image{
+ Name: "arm64/shell",
+ Tag: "latest",
+ Packages: []string{
+ "bashInteractive",
+ "cacert",
+ "coreutils",
+ "iana-etc",
+ "moreutils",
+ "nano",
+ },
+ }
+
+ if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
+ t.Fatalf("Image(\"shell/arm64\", \"latest\") mismatch:\n%s", diff)
+ }
+
+ if image.Arch.imageArch != "arm64" {
+ t.Fatal("Image(\"shell/arm64\"): Expected arch arm64")
+ }
+}
diff --git a/tools/nixery/builder/cache.go b/tools/nixery/builder/cache.go
new file mode 100644
index 000000000000..a4ebe03e1c94
--- /dev/null
+++ b/tools/nixery/builder/cache.go
@@ -0,0 +1,236 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+package builder
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "io"
+ "io/ioutil"
+ "os"
+ "sync"
+
+ "github.com/google/nixery/manifest"
+ log "github.com/sirupsen/logrus"
+)
+
+// LocalCache implements the structure used for local caching of
+// manifests and layer uploads.
+type LocalCache struct {
+ // Manifest cache
+ mmtx sync.RWMutex
+ mdir string
+
+ // Layer cache
+ lmtx sync.RWMutex
+ lcache map[string]manifest.Entry
+}
+
+// Creates an in-memory cache and ensures that the local file path for
+// manifest caching exists.
+func NewCache() (LocalCache, error) {
+ path := os.TempDir() + "/nixery"
+ err := os.MkdirAll(path, 0755)
+ if err != nil {
+ return LocalCache{}, err
+ }
+
+ return LocalCache{
+ mdir: path + "/",
+ lcache: make(map[string]manifest.Entry),
+ }, nil
+}
+
+// Retrieve a cached manifest if the build is cacheable and it exists.
+func (c *LocalCache) manifestFromLocalCache(key string) (json.RawMessage, bool) {
+ c.mmtx.RLock()
+ defer c.mmtx.RUnlock()
+
+ f, err := os.Open(c.mdir + key)
+ if err != nil {
+ // This is a debug log statement because failure to
+ // read the manifest key is currently expected if it
+ // is not cached.
+ log.WithError(err).WithField("manifest", key).
+ Debug("failed to read manifest from local cache")
+
+ return nil, false
+ }
+ defer f.Close()
+
+ m, err := ioutil.ReadAll(f)
+ if err != nil {
+ log.WithError(err).WithField("manifest", key).
+ Error("failed to read manifest from local cache")
+
+ return nil, false
+ }
+
+ return json.RawMessage(m), true
+}
+
+// Adds the result of a manifest build to the local cache, if the
+// manifest is considered cacheable.
+//
+// Manifests can be quite large and are cached on disk instead of in
+// memory.
+func (c *LocalCache) localCacheManifest(key string, m json.RawMessage) {
+ c.mmtx.Lock()
+ defer c.mmtx.Unlock()
+
+ err := ioutil.WriteFile(c.mdir+key, []byte(m), 0644)
+ if err != nil {
+ log.WithError(err).WithField("manifest", key).
+ Error("failed to locally cache manifest")
+ }
+}
+
+// Retrieve a layer build from the local cache.
+func (c *LocalCache) layerFromLocalCache(key string) (*manifest.Entry, bool) {
+ c.lmtx.RLock()
+ e, ok := c.lcache[key]
+ c.lmtx.RUnlock()
+
+ return &e, ok
+}
+
+// Add a layer build result to the local cache.
+func (c *LocalCache) localCacheLayer(key string, e manifest.Entry) {
+ c.lmtx.Lock()
+ c.lcache[key] = e
+ c.lmtx.Unlock()
+}
+
+// Retrieve a manifest from the cache(s). First the local cache is
+// checked, then the storage backend.
+func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessage, bool) {
+ if m, cached := s.Cache.manifestFromLocalCache(key); cached {
+ return m, true
+ }
+
+ r, err := s.Storage.Fetch(ctx, "manifests/"+key)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "manifest": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to fetch manifest from cache")
+
+ return nil, false
+ }
+ defer r.Close()
+
+ m, err := ioutil.ReadAll(r)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "manifest": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to read cached manifest from storage backend")
+
+ return nil, false
+ }
+
+ go s.Cache.localCacheManifest(key, m)
+	log.WithField("manifest", key).Info("retrieved manifest from storage backend")
+
+ return json.RawMessage(m), true
+}
+
+// Add a manifest to the bucket & local caches
+func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage) {
+ go s.Cache.localCacheManifest(key, m)
+
+ path := "manifests/" + key
+ _, size, err := s.Storage.Persist(ctx, path, func(w io.Writer) (string, int64, error) {
+ size, err := io.Copy(w, bytes.NewReader([]byte(m)))
+ return "", size, err
+ })
+
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "manifest": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to cache manifest to storage backend")
+
+ return
+ }
+
+ log.WithFields(log.Fields{
+ "manifest": key,
+ "size": size,
+ "backend": s.Storage.Name(),
+ }).Info("cached manifest to storage backend")
+}
+
+// Retrieve a layer build from the cache, first checking the local
+// cache followed by the bucket cache.
+func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry, bool) {
+ if entry, cached := s.Cache.layerFromLocalCache(key); cached {
+ return entry, true
+ }
+
+ r, err := s.Storage.Fetch(ctx, "builds/"+key)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "layer": key,
+ "backend": s.Storage.Name(),
+ }).Debug("failed to retrieve cached layer from storage backend")
+
+ return nil, false
+ }
+ defer r.Close()
+
+ jb := bytes.NewBuffer([]byte{})
+ _, err = io.Copy(jb, r)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "layer": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to read cached layer from storage backend")
+
+ return nil, false
+ }
+
+ var entry manifest.Entry
+ err = json.Unmarshal(jb.Bytes(), &entry)
+ if err != nil {
+ log.WithError(err).WithField("layer", key).
+ Error("failed to unmarshal cached layer")
+
+ return nil, false
+ }
+
+ go s.Cache.localCacheLayer(key, entry)
+ return &entry, true
+}
+
+func cacheLayer(ctx context.Context, s *State, key string, entry manifest.Entry) {
+ s.Cache.localCacheLayer(key, entry)
+
+ j, _ := json.Marshal(&entry)
+ path := "builds/" + key
+ _, _, err := s.Storage.Persist(ctx, path, func(w io.Writer) (string, int64, error) {
+ size, err := io.Copy(w, bytes.NewReader(j))
+ return "", size, err
+ })
+
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "layer": key,
+ "backend": s.Storage.Name(),
+ }).Error("failed to cache layer")
+ }
+
+ return
+}
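The lookup order implemented above, local cache first, then the storage backend, with the local cache backfilled on a remote hit, can be sketched generically as follows (illustrative only; the backfill here happens inline rather than in a goroutine):

```go
package main

import (
	"fmt"
	"sync"
)

// tieredCache sketches the lookup order used in cache.go: a fast local map is
// consulted first, then a slower backend; backend hits are written back into
// the local cache.
type tieredCache struct {
	mu      sync.RWMutex
	local   map[string]string
	backend func(key string) (string, bool) // stand-in for the storage bucket
}

func (c *tieredCache) get(key string) (string, bool) {
	c.mu.RLock()
	v, ok := c.local[key]
	c.mu.RUnlock()
	if ok {
		return v, true
	}

	v, ok = c.backend(key)
	if !ok {
		return "", false
	}

	// Backfill the local cache (cache.go does this in a goroutine).
	c.mu.Lock()
	c.local[key] = v
	c.mu.Unlock()

	return v, true
}

func main() {
	c := &tieredCache{
		local:   make(map[string]string),
		backend: func(k string) (string, bool) { return "from-bucket:" + k, true },
	}

	fmt.Println(c.get("layer-abc")) // misses locally, hits the backend
	fmt.Println(c.get("layer-abc")) // now served from the local map
}
```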
diff --git a/tools/nixery/builder/layers.go b/tools/nixery/builder/layers.go
new file mode 100644
index 000000000000..f769e43c5808
--- /dev/null
+++ b/tools/nixery/builder/layers.go
@@ -0,0 +1,364 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// This package reads an export reference graph (i.e. a graph representing the
+// runtime dependencies of a set of derivations) created by Nix and groups it in
+// a way that is likely to match the grouping for other derivation sets with
+// overlapping dependencies.
+//
+// This is used to determine which derivations to include in which layers of a
+// container image.
+//
+// # Inputs
+//
+// * a graph of Nix runtime dependencies, generated via exportReferenceGraph
+// * popularity values of each package in the Nix package set (in the form of a
+// direct reference count)
+// * a maximum number of layers to allocate for the image (the "layer budget")
+//
+// # Algorithm
+//
+// It works by first creating a (directed) dependency tree:
+//
+// img (root node)
+// │
+// ├───> A ─────┐
+// │ v
+// ├───> B ───> E
+// │ ^
+// ├───> C ─────┘
+// │ │
+// │ v
+// └───> D ───> F
+// │
+// └────> G
+//
+// Each node (i.e. package) is then visited to determine how important
+// it is to separate this node into its own layer, specifically:
+//
+// 1. Is the node within a certain threshold percentile of absolute
+// popularity within all of nixpkgs? (e.g. `glibc`, `openssl`)
+//
+// 2. Is the node's runtime closure above a threshold size? (e.g. 100MB)
+//
+// In either case, a bit is flipped for this node representing each
+// condition and an edge to it is inserted directly from the image
+// root, if it does not already exist.
+//
+// For the rest of the example we assume 'G' is above the threshold
+// size and 'E' is popular.
+//
+// This tree is then transformed into a dominator tree:
+//
+// img
+// │
+// ├───> A
+// ├───> B
+// ├───> C
+// ├───> E
+// ├───> D ───> F
+// └───> G
+//
+// Specifically this means that the paths to A, B, C, E, G, and D
+// always pass through the root (i.e. are dominated by it), whilst F
+// is dominated by D (all paths go through it).
+//
+// The top-level subtrees are considered as the initially selected
+// layers.
+//
+// If the list of layers fits within the layer budget, it is returned.
+//
+// Otherwise, a merge rating is calculated for each layer. This is the
+// product of the layer's total size and its root node's popularity.
+//
+// Layers are then merged in ascending order of merge ratings until
+// they fit into the layer budget.
+//
+// # Threshold values
+//
+// Threshold values for the partitioning conditions mentioned above
+// have not yet been determined, but we will make a good first guess
+// based on gut feeling and proceed to measure their impact on cache
+// hits/misses.
+//
+// # Example
+//
+// Using the logic described above as well as the example presented in
+// the introduction, this program would create the following layer
+// groupings (assuming no additional partitioning):
+//
+// Layer budget: 1
+// Layers: { A, B, C, D, E, F, G }
+//
+// Layer budget: 2
+// Layers: { G }, { A, B, C, D, E, F }
+//
+// Layer budget: 3
+// Layers: { G }, { E }, { A, B, C, D, F }
+//
+// Layer budget: 4
+// Layers: { G }, { E }, { D, F }, { A, B, C }
+//
+// ...
+//
+// Layer budget: 10
+// Layers: { E }, { D, F }, { A }, { B }, { C }
+package builder
+
+import (
+ "crypto/sha1"
+ "fmt"
+ "regexp"
+ "sort"
+ "strings"
+
+ log "github.com/sirupsen/logrus"
+ "gonum.org/v1/gonum/graph/flow"
+ "gonum.org/v1/gonum/graph/simple"
+)
+
+// runtimeGraph represents structured information from Nix about the runtime
+// dependencies of a derivation.
+//
+// This is generated in Nix by using the exportReferencesGraph feature.
+type runtimeGraph struct {
+ References struct {
+ Graph []string `json:"graph"`
+ } `json:"exportReferencesGraph"`
+
+ Graph []struct {
+ Size uint64 `json:"closureSize"`
+ Path string `json:"path"`
+ Refs []string `json:"references"`
+ } `json:"graph"`
+}
+
+// Popularity data for each Nix package that was calculated in advance.
+//
+// Popularity is a number from 1-100 that represents the
+// popularity percentile in which this package resides inside
+// of the nixpkgs tree.
+type Popularity = map[string]int
+
+// Layer represents the data returned for each layer that Nix should
+// build for the container image.
+type layer struct {
+ Contents []string `json:"contents"`
+ MergeRating uint64
+}
+
+// Hash the contents of a layer to create a deterministic identifier that can be
+// used for caching.
+func (l *layer) Hash() string {
+ sum := sha1.Sum([]byte(strings.Join(l.Contents, ":")))
+ return fmt.Sprintf("%x", sum)
+}
+
+func (a layer) merge(b layer) layer {
+ a.Contents = append(a.Contents, b.Contents...)
+ a.MergeRating += b.MergeRating
+ return a
+}
+
+// closure represents a store path closure as pointed to by the graph nodes.
+type closure struct {
+ GraphID int64
+ Path string
+ Size uint64
+ Refs []string
+ Popularity int
+}
+
+func (c *closure) ID() int64 {
+ return c.GraphID
+}
+
+var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)
+
+// packageFromPath returns the name of a Nix package based on its
+// output store path.
+func packageFromPath(path string) string {
+ return nixRegexp.ReplaceAllString(path, "")
+}
+
+// DOTID provides a human-readable package name. The name stems from
+// the dot format used by GraphViz, into which the dependency graph
+// can be rendered.
+func (c *closure) DOTID() string {
+ return packageFromPath(c.Path)
+}
+
+// bigOrPopular checks whether this closure should be considered for
+// separation into its own layer, even if it would otherwise only
+// appear in a subtree of the dominator tree.
+func (c *closure) bigOrPopular() bool {
+ const sizeThreshold = 100 * 1000000 // 100MB
+
+ if c.Size > sizeThreshold {
+ return true
+ }
+
+ // Threshold value is picked arbitrarily right now. The reason
+ // for this is that some packages (such as `cacert`) have very
+ // few direct dependencies, but are required by pretty much
+ // everything.
+ if c.Popularity >= 100 {
+ return true
+ }
+
+ return false
+}
+
+func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *closure) {
+ // Big or popular nodes get a separate edge from the top to
+ // flag them for their own layer.
+ if node.bigOrPopular() && !graph.HasEdgeFromTo(0, node.ID()) {
+ edge := graph.NewEdge(graph.Node(0), node)
+ graph.SetEdge(edge)
+ }
+
+ for _, c := range node.Refs {
+ // Nix adds a self reference to each node, which
+ // should not be inserted.
+ if c != node.Path {
+ edge := graph.NewEdge(node, (*cmap)[c])
+ graph.SetEdge(edge)
+ }
+ }
+}
+
+// Create a graph structure from the references supplied by Nix.
+func buildGraph(refs *runtimeGraph, pop *Popularity) *simple.DirectedGraph {
+ cmap := make(map[string]*closure)
+ graph := simple.NewDirectedGraph()
+
+ // Insert all closures into the graph, as well as a fake root
+ // closure which serves as the top of the tree.
+ //
+ // A map from store paths to IDs is kept to actually insert
+ // edges below.
+ root := &closure{
+ GraphID: 0,
+ Path: "image_root",
+ }
+ graph.AddNode(root)
+
+ for idx, c := range refs.Graph {
+ node := &closure{
+ GraphID: int64(idx + 1), // inc because of root node
+ Path: c.Path,
+ Size: c.Size,
+ Refs: c.Refs,
+ }
+
+ // The packages `nss-cacert` and `iana-etc` are added
+ // by Nixery to *every single image* and should have a
+ // very high popularity.
+ //
+ // Other popularity values are populated from the data
+ // set assembled by Nixery's popcount.
+ id := node.DOTID()
+ if strings.HasPrefix(id, "nss-cacert") || strings.HasPrefix(id, "iana-etc") {
+ // glibc has ~300k references, these packages need *more*
+ node.Popularity = 500000
+ } else if p, ok := (*pop)[id]; ok {
+ node.Popularity = p
+ } else {
+ node.Popularity = 1
+ }
+
+ graph.AddNode(node)
+ cmap[c.Path] = node
+ }
+
+ // Insert the top-level closures with edges from the root
+ // node, then insert all edges for each closure.
+ for _, p := range refs.References.Graph {
+ edge := graph.NewEdge(root, cmap[p])
+ graph.SetEdge(edge)
+ }
+
+ for _, c := range cmap {
+ insertEdges(graph, &cmap, c)
+ }
+
+ return graph
+}
+
+// Extracts a subgraph starting at the specified root from the
+// dominator tree. The subgraph is converted into a flat list of
+// layers, each containing the store paths and merge rating.
+func groupLayer(dt *flow.DominatorTree, root *closure) layer {
+ size := root.Size
+ contents := []string{root.Path}
+ children := dt.DominatedBy(root.ID())
+
+ // This iteration does not use 'range' because the list being
+ // iterated is modified during the iteration (yes, I'm sorry).
+ for i := 0; i < len(children); i++ {
+ child := children[i].(*closure)
+ size += child.Size
+ contents = append(contents, child.Path)
+ children = append(children, dt.DominatedBy(child.ID())...)
+ }
+
+ // Contents are sorted to ensure that hashing is consistent
+ sort.Strings(contents)
+
+ return layer{
+ Contents: contents,
+ MergeRating: uint64(root.Popularity) * size,
+ }
+}
+
+// Calculate the dominator tree of the entire package set and group
+// each top-level subtree into a layer.
+//
+// Layers are merged together until they fit into the layer budget,
+// based on their merge rating.
+func dominate(budget int, graph *simple.DirectedGraph) []layer {
+ dt := flow.Dominators(graph.Node(0), graph)
+
+ var layers []layer
+ for _, n := range dt.DominatedBy(dt.Root().ID()) {
+ layers = append(layers, groupLayer(&dt, n.(*closure)))
+ }
+
+ sort.Slice(layers, func(i, j int) bool {
+ return layers[i].MergeRating < layers[j].MergeRating
+ })
+
+ if len(layers) > budget {
+ log.WithFields(log.Fields{
+ "layers": len(layers),
+ "budget": budget,
+ }).Info("ideal image exceeds layer budget")
+ }
+
+ for len(layers) > budget {
+ merged := layers[0].merge(layers[1])
+ layers[1] = merged
+ layers = layers[1:]
+ }
+
+ return layers
+}
+
+// groupLayers applies the algorithm described above to its input and returns a
+// list of layers, each consisting of a list of Nix store paths that it should
+// contain.
+func groupLayers(refs *runtimeGraph, pop *Popularity, budget int) []layer {
+ graph := buildGraph(refs, pop)
+ return dominate(budget, graph)
+}
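To make the grouping step concrete, the following standalone sketch feeds (roughly) the example graph from the header comment into gonum's dominator-tree implementation, the same package used above, and prints the top-level subtrees that would become layers. This is illustrative only and not part of this patch:

```go
package main

import (
	"fmt"

	"gonum.org/v1/gonum/graph/flow"
	"gonum.org/v1/gonum/graph/simple"
)

// pkg is a minimal graph node standing in for a store path closure.
type pkg struct {
	id   int64
	name string
}

func (p *pkg) ID() int64 { return p.id }

func main() {
	g := simple.NewDirectedGraph()

	// An image root plus the packages A-G from the example above.
	names := []string{"img", "A", "B", "C", "D", "E", "F", "G"}
	nodes := make(map[string]*pkg, len(names))
	for i, n := range names {
		p := &pkg{id: int64(i), name: n}
		nodes[n] = p
		g.AddNode(p)
	}

	edges := [][2]string{
		{"img", "A"}, {"img", "B"}, {"img", "C"}, {"img", "D"},
		{"A", "E"}, {"B", "E"}, {"C", "E"},
		{"D", "F"}, {"D", "G"},
		// Extra edge from the root because G is "big or popular".
		{"img", "G"},
	}
	for _, e := range edges {
		g.SetEdge(g.NewEdge(nodes[e[0]], nodes[e[1]]))
	}

	dt := flow.Dominators(nodes["img"], g)

	// Each top-level subtree of the dominator tree becomes a candidate layer:
	// A, B, C, D, E and G end up directly under the root, while F stays
	// grouped with its sole dominator D.
	for _, n := range dt.DominatedBy(dt.Root().ID()) {
		fmt.Printf("layer rooted at %s\n", n.(*pkg).name)
	}
}
```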
diff --git a/tools/nixery/config/config.go b/tools/nixery/config/config.go
new file mode 100644
index 000000000000..7ec102bd6cee
--- /dev/null
+++ b/tools/nixery/config/config.go
@@ -0,0 +1,84 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// Package config implements structures to store Nixery's configuration at
+// runtime as well as the logic for instantiating this configuration from the
+// environment.
+package config
+
+import (
+ "os"
+
+ log "github.com/sirupsen/logrus"
+)
+
+func getConfig(key, desc, def string) string {
+ value := os.Getenv(key)
+ if value == "" && def == "" {
+ log.WithFields(log.Fields{
+ "option": key,
+ "description": desc,
+ }).Fatal("missing required configuration envvar")
+ } else if value == "" {
+ return def
+ }
+
+ return value
+}
+
+// Backend represents the possible storage backend types
+type Backend int
+
+const (
+ GCS = iota
+ FileSystem
+)
+
+// Config holds the Nixery configuration options.
+type Config struct {
+ Port string // Port on which to launch HTTP server
+ Pkgs PkgSource // Source for Nix package set
+ Timeout string // Timeout for a single Nix builder (seconds)
+ WebDir string // Directory with static web assets
+ PopUrl string // URL to the Nix package popularity count
+ Backend Backend // Storage backend to use for Nixery
+}
+
+func FromEnv() (Config, error) {
+ pkgs, err := pkgSourceFromEnv()
+ if err != nil {
+ return Config{}, err
+ }
+
+ var b Backend
+ switch os.Getenv("NIXERY_STORAGE_BACKEND") {
+ case "gcs":
+ b = GCS
+ case "filesystem":
+ b = FileSystem
+ default:
+		log.WithField("values", []string{
+			"gcs", "filesystem",
+		}).Fatal("NIXERY_STORAGE_BACKEND must be set to a supported value")
+ }
+
+ return Config{
+ Port: getConfig("PORT", "HTTP port", ""),
+ Pkgs: pkgs,
+ Timeout: getConfig("NIX_TIMEOUT", "Nix builder timeout", "60"),
+ WebDir: getConfig("WEB_DIR", "Static web file dir", ""),
+ PopUrl: os.Getenv("NIX_POPULARITY_URL"),
+ Backend: b,
+ }, nil
+}
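A hypothetical example of the environment this reads, using only variables visible in this file and in pkgsource.go; the channel name is arbitrary and the storage backends may require additional variables of their own that are defined elsewhere:

```go
package main

import (
	"fmt"
	"os"

	"github.com/google/nixery/config"
)

func main() {
	// A minimal environment for config.FromEnv as sketched here; the
	// filesystem backend may read further variables in the storage package.
	os.Setenv("PORT", "8080")
	os.Setenv("WEB_DIR", "/var/lib/nixery/web")
	os.Setenv("NIXERY_CHANNEL", "nixos-19.03")
	os.Setenv("NIXERY_STORAGE_BACKEND", "filesystem")

	cfg, err := config.FromEnv()
	if err != nil {
		panic(err)
	}

	fmt.Printf("port %s, Nix timeout %ss\n", cfg.Port, cfg.Timeout)
}
```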
diff --git a/tools/nixery/config/pkgsource.go b/tools/nixery/config/pkgsource.go
new file mode 100644
index 000000000000..95236c4b0d15
--- /dev/null
+++ b/tools/nixery/config/pkgsource.go
@@ -0,0 +1,159 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+package config
+
+import (
+ "crypto/sha1"
+ "encoding/json"
+ "fmt"
+ "os"
+ "regexp"
+ "strings"
+
+ log "github.com/sirupsen/logrus"
+)
+
+// PkgSource represents the source from which the Nix package set used
+// by Nixery is imported. Users configure the source by setting one of
+// the supported environment variables.
+type PkgSource interface {
+ // Convert the package source into the representation required
+ // for calling Nix.
+ Render(tag string) (string, string)
+
+	// Create a key by which builds for this source and image
+ // combination can be cached.
+ //
+ // The empty string means that this value is not cacheable due
+ // to the package source being a moving target (such as a
+ // channel).
+ CacheKey(pkgs []string, tag string) string
+}
+
+type GitSource struct {
+ repository string
+}
+
+// Regex to determine whether a git reference is a commit hash or
+// something else (branch/tag).
+//
+// Used to check whether a git reference is cacheable, and to pass the
+// correct git structure to Nix.
+//
+// Note: If a user creates a branch or tag with the name of a commit
+// and references it intentionally, this heuristic will fail.
+var commitRegex = regexp.MustCompile(`^[0-9a-f]{40}$`)
+
+func (g *GitSource) Render(tag string) (string, string) {
+ args := map[string]string{
+ "url": g.repository,
+ }
+
+ // The 'git' source requires a tag to be present. If the user
+ // has not specified one, it is assumed that the default
+ // 'master' branch should be used.
+ if tag == "latest" || tag == "" {
+ tag = "master"
+ }
+
+ if commitRegex.MatchString(tag) {
+ args["rev"] = tag
+ } else {
+ args["ref"] = tag
+ }
+
+ j, _ := json.Marshal(args)
+
+ return "git", string(j)
+}
+
+func (g *GitSource) CacheKey(pkgs []string, tag string) string {
+ // Only full commit hashes can be used for caching, as
+ // everything else is potentially a moving target.
+ if !commitRegex.MatchString(tag) {
+ return ""
+ }
+
+ unhashed := strings.Join(pkgs, "") + tag
+ hashed := fmt.Sprintf("%x", sha1.Sum([]byte(unhashed)))
+
+ return hashed
+}
+
+type NixChannel struct {
+ channel string
+}
+
+func (n *NixChannel) Render(tag string) (string, string) {
+ return "nixpkgs", n.channel
+}
+
+func (n *NixChannel) CacheKey(pkgs []string, tag string) string {
+ // Since Nix channels are downloaded from the nixpkgs-channels
+ // Github, users can specify full commit hashes as the
+ // "channel", in which case builds are cacheable.
+ if !commitRegex.MatchString(n.channel) {
+ return ""
+ }
+
+ unhashed := strings.Join(pkgs, "") + n.channel
+ hashed := fmt.Sprintf("%x", sha1.Sum([]byte(unhashed)))
+
+ return hashed
+}
+
+type PkgsPath struct {
+ path string
+}
+
+func (p *PkgsPath) Render(tag string) (string, string) {
+ return "path", p.path
+}
+
+func (p *PkgsPath) CacheKey(pkgs []string, tag string) string {
+ // Path-based builds are not currently cacheable because we
+ // have no local hash of the package folder's state easily
+ // available.
+ return ""
+}
+
+// Retrieve a package source from the environment. If no source is
+// specified, an error is returned.
+func pkgSourceFromEnv() (PkgSource, error) {
+ if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
+ log.WithField("channel", channel).Info("using Nix package set from Nix channel or commit")
+
+ return &NixChannel{
+ channel: channel,
+ }, nil
+ }
+
+ if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
+		log.WithField("repo", git).Info("using Nix package set from git repository")
+
+ return &GitSource{
+ repository: git,
+ }, nil
+ }
+
+ if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
+ log.WithField("path", path).Info("using Nix package set at local path")
+
+ return &PkgsPath{
+ path: path,
+ }, nil
+ }
+
+ return nil, fmt.Errorf("no valid package source has been specified")
+}
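For illustration, the tag handling in `GitSource.Render` boils down to the following standalone sketch (the `renderGitArgs` helper is hypothetical; the repository URL and commit hash are arbitrary examples):

```go
package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// Same heuristic as commitRegex above: a 40-character hex string is treated
// as a commit hash, anything else as a branch or tag name.
var commitRegex = regexp.MustCompile(`^[0-9a-f]{40}$`)

// renderGitArgs mirrors GitSource.Render for illustration only.
func renderGitArgs(repo, tag string) string {
	args := map[string]string{"url": repo}

	// "latest" (or an empty tag) falls back to the master branch.
	if tag == "latest" || tag == "" {
		tag = "master"
	}

	if commitRegex.MatchString(tag) {
		args["rev"] = tag
	} else {
		args["ref"] = tag
	}

	j, _ := json.Marshal(args)
	return string(j)
}

func main() {
	// A branch name is passed to Nix as "ref" and is not cacheable.
	fmt.Println(renderGitArgs("https://github.com/NixOS/nixpkgs", "nixos-19.03"))

	// A full commit hash is passed as "rev" and is therefore cacheable.
	fmt.Println(renderGitArgs("https://github.com/NixOS/nixpkgs", "0123456789abcdef0123456789abcdef01234567"))
}
```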
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 44ac7313adc7..7454c14a8567 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -20,6 +20,8 @@
with pkgs;
let
+ inherit (pkgs) buildGoPackage;
+
# Hash of all Nixery sources - this is used as the Nixery version in
# builds to distinguish errors between deployed versions, see
# server/logs.go for details.
@@ -30,13 +32,41 @@ let
# Go implementation of the Nixery server which implements the
# container registry interface.
#
- # Users should use the nixery-bin derivation below instead.
- nixery-server = callPackage ./server {
- srcHash = nixery-src-hash;
+ # Users should use the nixery-bin derivation below instead as it
+ # provides the paths of files needed at runtime.
+ nixery-server = buildGoPackage rec {
+ name = "nixery-server";
+ goDeps = ./go-deps.nix;
+ src = ./.;
+
+ goPackagePath = "github.com/google/nixery";
+ doCheck = true;
+
+ # Simplify the Nix build instructions for Go to just the basics
+ # required to get Nixery up and running with the additional linker
+ # flags required.
+ outputs = [ "out" ];
+ preConfigure = "bin=$out";
+ buildPhase = ''
+ runHook preBuild
+ runHook renameImport
+
+ export GOBIN="$out/bin"
+ go install -ldflags "-X main.version=$(cat ${nixery-src-hash})" ${goPackagePath}
+ '';
+
+ fixupPhase = ''
+ remove-references-to -t ${go} $out/bin/nixery
+ '';
+
+ checkPhase = ''
+ go vet ${goPackagePath}
+ go test ${goPackagePath}
+ '';
};
in rec {
# Implementation of the Nix image building logic
- nixery-build-image = import ./build-image { inherit pkgs; };
+ nixery-prepare-image = import ./prepare-image { inherit pkgs; };
# Use mdBook to build a static asset page which Nixery can then
# serve. This is primarily used for the public instance at
@@ -50,8 +80,8 @@ in rec {
# are installing Nixery directly.
nixery-bin = writeShellScriptBin "nixery" ''
export WEB_DIR="${nixery-book}"
- export PATH="${nixery-build-image}/bin:$PATH"
- exec ${nixery-server}/bin/server
+ export PATH="${nixery-prepare-image}/bin:$PATH"
+ exec ${nixery-server}/bin/nixery
'';
nixery-popcount = callPackage ./popcount { };
@@ -104,7 +134,7 @@ in rec {
gzip
iana-etc
nix
- nixery-build-image
+ nixery-prepare-image
nixery-launch-script
openssh
zlib
diff --git a/tools/nixery/go-deps.nix b/tools/nixery/go-deps.nix
new file mode 100644
index 000000000000..847b44dce63c
--- /dev/null
+++ b/tools/nixery/go-deps.nix
@@ -0,0 +1,129 @@
+# This file was generated by https://github.com/kamilchm/go2nix v1.3.0
+[
+ {
+ goPackagePath = "cloud.google.com/go";
+ fetch = {
+ type = "git";
+ url = "https://code.googlesource.com/gocloud";
+ rev = "77f6a3a292a7dbf66a5329de0d06326f1906b450";
+ sha256 = "1c9pkx782nbcp8jnl5lprcbzf97van789ky5qsncjgywjyymhigi";
+ };
+ }
+ {
+ goPackagePath = "github.com/golang/protobuf";
+ fetch = {
+ type = "git";
+ url = "https://github.com/golang/protobuf";
+ rev = "6c65a5562fc06764971b7c5d05c76c75e84bdbf7";
+ sha256 = "1k1wb4zr0qbwgpvz9q5ws9zhlal8hq7dmq62pwxxriksayl6hzym";
+ };
+ }
+ {
+ goPackagePath = "github.com/googleapis/gax-go";
+ fetch = {
+ type = "git";
+ url = "https://github.com/googleapis/gax-go";
+ rev = "bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2";
+ sha256 = "1lxawwngv6miaqd25s3ba0didfzylbwisd2nz7r4gmbmin6jsjrx";
+ };
+ }
+ {
+ goPackagePath = "github.com/hashicorp/golang-lru";
+ fetch = {
+ type = "git";
+ url = "https://github.com/hashicorp/golang-lru";
+ rev = "59383c442f7d7b190497e9bb8fc17a48d06cd03f";
+ sha256 = "0yzwl592aa32vfy73pl7wdc21855w17zssrp85ckw2nisky8rg9c";
+ };
+ }
+ {
+ goPackagePath = "go.opencensus.io";
+ fetch = {
+ type = "git";
+ url = "https://github.com/census-instrumentation/opencensus-go";
+ rev = "b4a14686f0a98096416fe1b4cb848e384fb2b22b";
+ sha256 = "1aidyp301v5ngwsnnc8v1s09vvbsnch1jc4vd615f7qv77r9s7dn";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/net";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/net";
+ rev = "da137c7871d730100384dbcf36e6f8fa493aef5b";
+ sha256 = "1qsiyr3irmb6ii06hivm9p2c7wqyxczms1a9v1ss5698yjr3fg47";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/oauth2";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/oauth2";
+ rev = "0f29369cfe4552d0e4bcddc57cc75f4d7e672a33";
+ sha256 = "06jwpvx0x2gjn2y959drbcir5kd7vg87k0r1216abk6rrdzzrzi2";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/sys";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/sys";
+ rev = "51ab0e2deafac1f46c46ad59cf0921be2f180c3d";
+ sha256 = "0xdhpckbql3bsqkpc2k5b1cpnq3q1qjqjjq2j3p707rfwb8nm91a";
+ };
+ }
+ {
+ goPackagePath = "golang.org/x/text";
+ fetch = {
+ type = "git";
+ url = "https://go.googlesource.com/text";
+ rev = "342b2e1fbaa52c93f31447ad2c6abc048c63e475";
+ sha256 = "0flv9idw0jm5nm8lx25xqanbkqgfiym6619w575p7nrdh0riqwqh";
+ };
+ }
+ {
+ goPackagePath = "google.golang.org/api";
+ fetch = {
+ type = "git";
+ url = "https://code.googlesource.com/google-api-go-client";
+ rev = "069bea57b1be6ad0671a49ea7a1128025a22b73f";
+ sha256 = "19q2b610lkf3z3y9hn6rf11dd78xr9q4340mdyri7kbijlj2r44q";
+ };
+ }
+ {
+ goPackagePath = "google.golang.org/genproto";
+ fetch = {
+ type = "git";
+ url = "https://github.com/google/go-genproto";
+ rev = "c506a9f9061087022822e8da603a52fc387115a8";
+ sha256 = "03hh80aqi58dqi5ykj4shk3chwkzrgq2f3k6qs5qhgvmcy79y2py";
+ };
+ }
+ {
+ goPackagePath = "google.golang.org/grpc";
+ fetch = {
+ type = "git";
+ url = "https://github.com/grpc/grpc-go";
+ rev = "977142214c45640483838b8672a43c46f89f90cb";
+ sha256 = "05wig23l2sil3bfdv19gq62sya7hsabqj9l8pzr1sm57qsvj218d";
+ };
+ }
+ {
+ goPackagePath = "gonum.org/v1/gonum";
+ fetch = {
+ type = "git";
+ url = "https://github.com/gonum/gonum";
+ rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
+ sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
+ };
+ }
+ {
+ goPackagePath = "github.com/sirupsen/logrus";
+ fetch = {
+ type = "git";
+ url = "https://github.com/sirupsen/logrus";
+ rev = "de736cf91b921d56253b4010270681d33fdf7cb5";
+ sha256 = "1qixss8m5xy7pzbf0qz2k3shjw0asklm9sj6zyczp7mryrari0aj";
+ };
+ }
+]
diff --git a/tools/nixery/logs/logs.go b/tools/nixery/logs/logs.go
new file mode 100644
index 000000000000..4c755bc8ab0c
--- /dev/null
+++ b/tools/nixery/logs/logs.go
@@ -0,0 +1,119 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+package logs
+
+// This file configures different log formatters via logrus. The
+// standard formatter uses a structured JSON format that is compatible
+// with Stackdriver Error Reporting.
+//
+// https://cloud.google.com/error-reporting/docs/formatting-error-messages
+
+import (
+ "bytes"
+ "encoding/json"
+ log "github.com/sirupsen/logrus"
+)
+
+type stackdriverFormatter struct{}
+
+type serviceContext struct {
+ Service string `json:"service"`
+ Version string `json:"version"`
+}
+
+type reportLocation struct {
+ FilePath string `json:"filePath"`
+ LineNumber int `json:"lineNumber"`
+ FunctionName string `json:"functionName"`
+}
+
+var nixeryContext = serviceContext{
+ Service: "nixery",
+}
+
+// isError determines whether an entry should be logged as an error
+// (i.e. with attached `context`).
+//
+// This requires the caller information to be present on the log
+// entry, as stacktraces are not available currently.
+func isError(e *log.Entry) bool {
+ l := e.Level
+ return (l == log.ErrorLevel || l == log.FatalLevel || l == log.PanicLevel) &&
+ e.HasCaller()
+}
+
+// logSeverity formats the entry's severity into a format compatible
+// with Stackdriver Logging.
+//
+// The two formats that are being mapped do not have an equivalent set
+// of severities/levels, so the mapping is somewhat arbitrary for a
+// handful of them.
+//
+// https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#LogSeverity
+func logSeverity(l log.Level) string {
+ switch l {
+ case log.TraceLevel:
+ return "DEBUG"
+ case log.DebugLevel:
+ return "DEBUG"
+ case log.InfoLevel:
+ return "INFO"
+ case log.WarnLevel:
+ return "WARNING"
+ case log.ErrorLevel:
+ return "ERROR"
+ case log.FatalLevel:
+ return "CRITICAL"
+ case log.PanicLevel:
+ return "EMERGENCY"
+ default:
+ return "DEFAULT"
+ }
+}
+
+func (f stackdriverFormatter) Format(e *log.Entry) ([]byte, error) {
+ msg := e.Data
+ msg["serviceContext"] = &nixeryContext
+ msg["message"] = &e.Message
+ msg["eventTime"] = &e.Time
+ msg["severity"] = logSeverity(e.Level)
+
+ if e, ok := msg[log.ErrorKey]; ok {
+ if err, isError := e.(error); isError {
+ msg[log.ErrorKey] = err.Error()
+ } else {
+ delete(msg, log.ErrorKey)
+ }
+ }
+
+ if isError(e) {
+ loc := reportLocation{
+ FilePath: e.Caller.File,
+ LineNumber: e.Caller.Line,
+ FunctionName: e.Caller.Function,
+ }
+ msg["context"] = &loc
+ }
+
+ b := new(bytes.Buffer)
+ err := json.NewEncoder(b).Encode(&msg)
+
+ return b.Bytes(), err
+}
+
+func Init(version string) {
+ nixeryContext.Version = version
+ log.SetReportCaller(true)
+ log.SetFormatter(stackdriverFormatter{})
+}
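A minimal sketch of how a caller wires this up, assuming the `logs` package as defined above; the version string is arbitrary:

```go
package main

import (
	"errors"

	"github.com/google/nixery/logs"
	log "github.com/sirupsen/logrus"
)

func main() {
	// Install the Stackdriver-style JSON formatter defined above.
	logs.Init("example-version")

	// Regular entries carry severity, message, eventTime and serviceContext.
	log.WithField("image", "shell/htop").Info("requesting image manifest")

	// Error-level entries additionally get a "context" block with the caller
	// location, which Stackdriver Error Reporting understands.
	log.WithError(errors.New("boom")).Error("failed to build image manifest")
}
```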
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
new file mode 100644
index 000000000000..6cad93740978
--- /dev/null
+++ b/tools/nixery/main.go
@@ -0,0 +1,249 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// The nixery server implements a container registry that transparently builds
+// container images based on Nix derivations.
+//
+// The Nix derivation used for image creation is responsible for creating
+// objects that are compatible with the registry API. The targeted registry
+// protocol is currently Docker's.
+//
+// When an image is requested, the required contents are parsed out of the
+// request and a Nix-build is initiated that eventually responds with the
+// manifest as well as information linking each layer digest to a local
+// filesystem path.
+package main
+
+import (
+ "encoding/json"
+ "fmt"
+ "io/ioutil"
+ "net/http"
+ "regexp"
+
+ "github.com/google/nixery/builder"
+ "github.com/google/nixery/config"
+ "github.com/google/nixery/logs"
+ "github.com/google/nixery/storage"
+ log "github.com/sirupsen/logrus"
+)
+
+// manifestMediaType is the Content-Type used for the manifest itself. This
+// corresponds to the "Image Manifest V2, Schema 2" described on this page:
+//
+// https://docs.docker.com/registry/spec/manifest-v2-2/
+const manifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
+
+// This variable will be initialised during the build process and set
+// to the hash of the entire Nixery source tree.
+var version string = "devel"
+
+// Regexes matching the V2 Registry API routes. This only includes the
+// routes required for serving images, since pushing and other such
+// functionality is not available.
+var (
+ manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
+ layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
+)
+
+// Downloads the popularity information for the package set from the
+// URL specified in Nixery's configuration.
+func downloadPopularity(url string) (builder.Popularity, error) {
+ resp, err := http.Get(url)
+ if err != nil {
+ return nil, err
+ }
+
+ if resp.StatusCode != 200 {
+ return nil, fmt.Errorf("popularity download from '%s' returned status: %s\n", url, resp.Status)
+ }
+
+ j, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return nil, err
+ }
+
+ var pop builder.Popularity
+ err = json.Unmarshal(j, &pop)
+ if err != nil {
+ return nil, err
+ }
+
+ return pop, nil
+}
+
+// Error format corresponding to the registry protocol V2 specification. This
+// allows feeding back errors to clients in a way that can be presented to
+// users.
+type registryError struct {
+ Code string `json:"code"`
+ Message string `json:"message"`
+}
+
+type registryErrors struct {
+ Errors []registryError `json:"errors"`
+}
+
+func writeError(w http.ResponseWriter, status int, code, message string) {
+ err := registryErrors{
+ Errors: []registryError{
+ {code, message},
+ },
+ }
+ json, _ := json.Marshal(err)
+
+	w.Header().Add("Content-Type", "application/json")
+	w.WriteHeader(status)
+	w.Write(json)
+}
+
+type registryHandler struct {
+ state *builder.State
+}
+
+func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+ // Acknowledge that we speak V2 with an empty response
+ if r.RequestURI == "/v2/" {
+ return
+ }
+
+ // Serve the manifest (straight from Nix)
+ manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
+ if len(manifestMatches) == 3 {
+ imageName := manifestMatches[1]
+ imageTag := manifestMatches[2]
+
+ log.WithFields(log.Fields{
+ "image": imageName,
+ "tag": imageTag,
+ }).Info("requesting image manifest")
+
+ image := builder.ImageFromName(imageName, imageTag)
+ buildResult, err := builder.BuildImage(r.Context(), h.state, &image)
+
+ if err != nil {
+ writeError(w, 500, "UNKNOWN", "image build failure")
+
+ log.WithError(err).WithFields(log.Fields{
+ "image": imageName,
+ "tag": imageTag,
+ }).Error("failed to build image manifest")
+
+ return
+ }
+
+ // Some error types have special handling, which is applied
+ // here.
+ if buildResult.Error == "not_found" {
+ s := fmt.Sprintf("Could not find Nix packages: %v", buildResult.Pkgs)
+ writeError(w, 404, "MANIFEST_UNKNOWN", s)
+
+ log.WithFields(log.Fields{
+ "image": imageName,
+ "tag": imageTag,
+ "packages": buildResult.Pkgs,
+ }).Warn("could not find Nix packages")
+
+ return
+ }
+
+ // This marshaling error is ignored because we know that this
+ // field represents valid JSON data.
+ manifest, _ := json.Marshal(buildResult.Manifest)
+ w.Header().Add("Content-Type", manifestMediaType)
+ w.Write(manifest)
+ return
+ }
+
+ // Serve an image layer. The layer is served straight from the
+ // storage backend, addressed by the digest from the request URI.
+ layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
+ if len(layerMatches) == 3 {
+ digest := layerMatches[2]
+ storage := h.state.Storage
+ err := storage.ServeLayer(digest, r, w)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "layer": digest,
+ "backend": storage.Name(),
+ }).Error("failed to serve layer from storage backend")
+ }
+
+ return
+ }
+
+ log.WithField("uri", r.RequestURI).Info("unsupported registry route")
+
+ w.WriteHeader(404)
+}
+
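+// main configures and starts the Nixery server. All configuration is read
+// from the environment; an illustrative invocation (variable names as used
+// by the configuration code in this commit) might look like:
+//
+//   PORT=8080 WEB_DIR=./web NIXERY_CHANNEL=nixos-19.03 \
+//     NIXERY_STORAGE_BACKEND=filesystem ./nixery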
+func main() {
+ logs.Init(version)
+ cfg, err := config.FromEnv()
+ if err != nil {
+ log.WithError(err).Fatal("failed to load configuration")
+ }
+
+ var s storage.Backend
+
+ switch cfg.Backend {
+ case config.GCS:
+ s, err = storage.NewGCSBackend()
+ case config.FileSystem:
+ s, err = storage.NewFSBackend()
+ }
+ if err != nil {
+ log.WithError(err).Fatal("failed to initialise storage backend")
+ }
+
+ log.WithField("backend", s.Name()).Info("initialised storage backend")
+
+ cache, err := builder.NewCache()
+ if err != nil {
+ log.WithError(err).Fatal("failed to instantiate build cache")
+ }
+
+ var pop builder.Popularity
+ if cfg.PopUrl != "" {
+ pop, err = downloadPopularity(cfg.PopUrl)
+ if err != nil {
+ log.WithError(err).WithField("popURL", cfg.PopUrl).
+ Fatal("failed to fetch popularity information")
+ }
+ }
+
+ state := builder.State{
+ Cache: &cache,
+ Cfg: cfg,
+ Pop: pop,
+ Storage: s,
+ }
+
+ log.WithFields(log.Fields{
+ "version": version,
+ "port": cfg.Port,
+ }).Info("starting Nixery")
+
+ // All /v2/ requests belong to the registry handler.
+ http.Handle("/v2/", ®istryHandler{
+ state: &state,
+ })
+
+ // All other routes are served by the static file server.
+ webDir := http.Dir(cfg.WebDir)
+ http.Handle("/", http.FileServer(webDir))
+
+ log.Fatal(http.ListenAndServe(":"+cfg.Port, nil))
+}
diff --git a/tools/nixery/manifest/manifest.go b/tools/nixery/manifest/manifest.go
new file mode 100644
index 000000000000..0d36826fb7e5
--- /dev/null
+++ b/tools/nixery/manifest/manifest.go
@@ -0,0 +1,141 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// Package manifest implements logic for creating the image metadata
+// (such as the image manifest and configuration).
+package manifest
+
+import (
+ "crypto/sha256"
+ "encoding/json"
+ "fmt"
+ "sort"
+)
+
+const (
+ // manifest constants
+ schemaVersion = 2
+
+ // media types
+ manifestType = "application/vnd.docker.distribution.manifest.v2+json"
+ layerType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
+ configType = "application/vnd.docker.container.image.v1+json"
+
+ // image config constants
+ os = "linux"
+ fsType = "layers"
+)
+
+type Entry struct {
+ MediaType string `json:"mediaType,omitempty"`
+ Size int64 `json:"size"`
+ Digest string `json:"digest"`
+
+ // MergeRating is internal to Nixery and never serialised.
+ MergeRating uint64 `json:"-"`
+
+ // TarHash (the digest of the uncompressed layer tarball) is kept when
+ // entries are cached, but cleared before the entry is embedded in an
+ // image manifest.
+ TarHash string `json:",omitempty"`
+}
+
+type manifest struct {
+ SchemaVersion int `json:"schemaVersion"`
+ MediaType string `json:"mediaType"`
+ Config Entry `json:"config"`
+ Layers []Entry `json:"layers"`
+}
+
+type imageConfig struct {
+ Architecture string `json:"architecture"`
+ OS string `json:"os"`
+
+ RootFS struct {
+ FSType string `json:"type"`
+ DiffIDs []string `json:"diff_ids"`
+ } `json:"rootfs"`
+
+ // sic! empty struct (rather than `null`) is required by the
+ // image metadata deserialiser in Kubernetes
+ Config struct{} `json:"config"`
+}
+
+// ConfigLayer represents the configuration layer to be included in
+// the manifest, containing its JSON-serialised content and SHA256
+// hash.
+type ConfigLayer struct {
+ Config []byte
+ SHA256 string
+}
+
+// configLayer creates an image configuration with the values set to
+// the constant defaults.
+//
+// Outside of this module the image configuration is treated as an
+// opaque blob and it is thus returned as an already serialised byte
+// array and its SHA256-hash.
+func configLayer(arch string, hashes []string) ConfigLayer {
+ c := imageConfig{}
+ c.Architecture = arch
+ c.OS = os
+ c.RootFS.FSType = fsType
+ c.RootFS.DiffIDs = hashes
+
+ j, _ := json.Marshal(c)
+
+ return ConfigLayer{
+ Config: j,
+ SHA256: fmt.Sprintf("%x", sha256.Sum256(j)),
+ }
+}
+
+// Manifest creates an image manifest from the specified layer entries
+// and returns its JSON-serialised form as well as the configuration
+// layer.
+//
+// Callers do not need to set the media type for the layer entries.
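+//
+// A minimal usage sketch (illustrative):
+//
+//   serialised, config := Manifest("amd64", entries)
+//
+// where serialised is the manifest JSON and config holds the serialised
+// image configuration together with its SHA256 hash.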
+func Manifest(arch string, layers []Entry) (json.RawMessage, ConfigLayer) {
+ // Sort layers by their merge rating, from highest to lowest.
+ // This makes it likely for a contiguous chain of shared image
+ // layers to appear at the beginning of the layer list.
+ //
+ // Due to moby/moby#38446 Docker considers the order of layers
+ // when deciding which layers to download again.
+ sort.Slice(layers, func(i, j int) bool {
+ return layers[i].MergeRating > layers[j].MergeRating
+ })
+
+ hashes := make([]string, len(layers))
+ for i, l := range layers {
+ hashes[i] = l.TarHash
+ l.MediaType = layerType
+ l.TarHash = ""
+ layers[i] = l
+ }
+
+ c := configLayer(arch, hashes)
+
+ m := manifest{
+ SchemaVersion: schemaVersion,
+ MediaType: manifestType,
+ Config: Entry{
+ MediaType: configType,
+ Size: int64(len(c.Config)),
+ Digest: "sha256:" + c.SHA256,
+ },
+ Layers: layers,
+ }
+
+ j, _ := json.Marshal(m)
+
+ return json.RawMessage(j), c
+}
diff --git a/tools/nixery/popcount/popcount.go b/tools/nixery/popcount/popcount.go
index b21cee2e0e7d..992a88e874d1 100644
--- a/tools/nixery/popcount/popcount.go
+++ b/tools/nixery/popcount/popcount.go
@@ -175,7 +175,7 @@ func fetchNarInfo(i *item) (string, error) {
narinfo, err := ioutil.ReadAll(resp.Body)
// best-effort write the file to the cache
- ioutil.WriteFile("popcache/" + i.hash, narinfo, 0644)
+ ioutil.WriteFile("popcache/"+i.hash, narinfo, 0644)
return string(narinfo), err
}
diff --git a/tools/nixery/prepare-image/default.nix b/tools/nixery/prepare-image/default.nix
new file mode 100644
index 000000000000..60b208f522d5
--- /dev/null
+++ b/tools/nixery/prepare-image/default.nix
@@ -0,0 +1,29 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This file builds a wrapper script called by Nixery to ask for the
+# content information for a given image.
+#
+# The purpose of using a wrapper script is to ensure that the paths to
+# all required Nix files are set correctly at runtime.
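+#
+# The server invokes the wrapper roughly like this (illustrative):
+#
+#   nixery-prepare-image --argstr packages '["bash", "git"]' \
+#     --argstr srcType nixpkgs --argstr srcArgs nixos-19.03 \
+#     --argstr system x86_64-linux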
+
+{ pkgs ? import <nixpkgs> {} }:
+
+pkgs.writeShellScriptBin "nixery-prepare-image" ''
+ exec ${pkgs.nix}/bin/nix-build \
+ --show-trace \
+ --no-out-link "$@" \
+ --argstr loadPkgs ${./load-pkgs.nix} \
+ ${./prepare-image.nix}
+''
diff --git a/tools/nixery/prepare-image/load-pkgs.nix b/tools/nixery/prepare-image/load-pkgs.nix
new file mode 100644
index 000000000000..cceebfc14dae
--- /dev/null
+++ b/tools/nixery/prepare-image/load-pkgs.nix
@@ -0,0 +1,45 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Load a Nix package set from one of the supported source types
+# (nixpkgs, git, path).
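+#
+# Example argument combinations (illustrative):
+#
+#   { srcType = "nixpkgs"; srcArgs = "nixos-19.03"; }
+#   { srcType = "git"; srcArgs = ''{"url": "https://github.com/NixOS/nixpkgs", "ref": "master"}''; }
+#   { srcType = "path"; srcArgs = "/home/user/nixpkgs"; }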
+{ srcType, srcArgs, importArgs ? { } }:
+
+with builtins;
+let
+ # If a nixpkgs channel is requested, it is retrieved from Github (as
+ # a tarball) and imported.
+ fetchImportChannel = channel:
+ let
+ url =
+ "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
+ in import (fetchTarball url) importArgs;
+
+ # If a git repository is requested, it is retrieved via
+ # builtins.fetchGit which defaults to the git configuration of the
+ # outside environment. This means that user-configured SSH
+ # credentials etc. are going to work as expected.
+ fetchImportGit = spec: import (fetchGit spec) importArgs;
+
+ # No special handling is used for paths, so users are expected to pass one
+ # that will work natively with Nix.
+ importPath = path: import (toPath path) importArgs;
+in if srcType == "nixpkgs" then
+ fetchImportChannel srcArgs
+else if srcType == "git" then
+ fetchImportGit (fromJSON srcArgs)
+else if srcType == "path" then
+ importPath srcArgs
+else
+ throw ("Invalid package set source specification: ${srcType} (${srcArgs})")
diff --git a/tools/nixery/prepare-image/prepare-image.nix b/tools/nixery/prepare-image/prepare-image.nix
new file mode 100644
index 000000000000..4393f2b859a6
--- /dev/null
+++ b/tools/nixery/prepare-image/prepare-image.nix
@@ -0,0 +1,173 @@
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This file contains a derivation that outputs structured information
+# about the runtime dependencies of an image with a given set of
+# packages. This is used by Nixery to determine the layer grouping and
+# assemble each layer.
+#
+# In addition it creates and outputs a meta-layer with the symlink
+# structure required for using the image together with the individual
+# package layers.
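+#
+# On success the resulting build-output.json roughly has this shape
+# (illustrative):
+#
+#   {
+#     "runtimeGraph": { ... closure information from exportReferencesGraph ... },
+#     "symlinkLayer": { "size": 12345, "tarHash": "<sha256>", "path": "/nix/store/..." }
+#   }
+#
+# If any requested package could not be found, the output is instead:
+#
+#   { "error": "not_found", "pkgs": [ "<missing package names>" ] }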
+
+{
+ # Description of the package set to be used (will be loaded by load-pkgs.nix)
+ srcType ? "nixpkgs",
+ srcArgs ? "nixos-19.03",
+ system ? "x86_64-linux",
+ importArgs ? { },
+ # Path to load-pkgs.nix
+ loadPkgs ? ./load-pkgs.nix,
+ # Packages to install by name (which must refer to top-level attributes of
+ # nixpkgs). This is passed in as a JSON-array in string form.
+ packages ? "[]"
+}:
+
+let
+ inherit (builtins)
+ foldl'
+ fromJSON
+ hasAttr
+ length
+ match
+ readFile
+ toFile
+ toJSON;
+
+ # Package set to use for sourcing utilities
+ nativePkgs = import loadPkgs { inherit srcType srcArgs importArgs; };
+ inherit (nativePkgs) coreutils jq openssl lib runCommand writeText symlinkJoin;
+
+ # Package set to use for packages to be included in the image. This
+ # package set is imported with the system set to the target
+ # architecture.
+ pkgs = import loadPkgs {
+ inherit srcType srcArgs;
+ importArgs = importArgs // {
+ inherit system;
+ };
+ };
+
+ # deepFetch traverses the top-level Nix package set to retrieve an item via a
+ # path specified in string form.
+ #
+ # For top-level items, the name of the key yields the result directly. Nested
+ # items are fetched by using dot-syntax, as in Nix itself.
+ #
+ # Due to a restriction of the registry API specification it is not possible to
+ # pass uppercase characters in an image name, however the Nix package set
+ # makes use of camelCasing repeatedly (for example for `haskellPackages`).
+ #
+ # To work around this, if no value is found on the top-level a second lookup
+ # is done on the package set using lowercase-names. This is not done for
+ # nested sets, as they often have keys that only differ in case.
+ #
+ # For example, `deepFetch pkgs "xorg.xev"` retrieves `pkgs.xorg.xev` and
+ # `deepFetch pkgs "haskellpackages.stylish-haskell"` retrieves
+ # `pkgs.haskellPackages.stylish-haskell`.
+ deepFetch = with lib; s: n:
+ let path = splitString "." n;
+ err = { error = "not_found"; pkg = n; };
+ # The most efficient way I've found to do a lookup against
+ # case-differing versions of an attribute is to first construct a
+ # mapping of all lowercased attribute names to their differently cased
+ # equivalents.
+ #
+ # This map is then used for a second lookup if the top-level
+ # (case-sensitive) one does not yield a result.
+ hasUpper = str: (match ".*[A-Z].*" str) != null;
+ allUpperKeys = filter hasUpper (attrNames s);
+ lowercased = listToAttrs (map (k: {
+ name = toLower k;
+ value = k;
+ }) allUpperKeys);
+ caseAmendedPath = map (v: if hasAttr v lowercased then lowercased."${v}" else v) path;
+ fetchLower = attrByPath caseAmendedPath err s;
+ in attrByPath path fetchLower s;
+
+ # allContents contains all packages successfully retrieved by name
+ # from the package set, as well as any errors encountered while
+ # attempting to fetch a package.
+ #
+ # Accumulated error information is returned back to the server.
+ allContents =
+ # Folds over the results of 'deepFetch' on all requested packages to
+ # separate them into errors and content. This allows the program to
+ # terminate early and return only the errors if any are encountered.
+ let splitter = attrs: res:
+ if hasAttr "error" res
+ then attrs // { errors = attrs.errors ++ [ res ]; }
+ else attrs // { contents = attrs.contents ++ [ res ]; };
+ init = { contents = []; errors = []; };
+ fetched = (map (deepFetch pkgs) (fromJSON packages));
+ in foldl' splitter init fetched;
+
+ # Contains the export references graph of all retrieved packages,
+ # which has information about all runtime dependencies of the image.
+ #
+ # This is used by Nixery to group closures into image layers.
+ runtimeGraph = runCommand "runtime-graph.json" {
+ __structuredAttrs = true;
+ exportReferencesGraph.graph = allContents.contents;
+ PATH = "${coreutils}/bin";
+ builder = toFile "builder" ''
+ . .attrs.sh
+ cp .attrs.json ''${outputs[out]}
+ '';
+ } "";
+
+ # Create a symlink forest into all top-level store paths of the
+ # image contents.
+ contentsEnv = symlinkJoin {
+ name = "bulk-layers";
+ paths = allContents.contents;
+ };
+
+ # Image layer that contains the symlink forest created above. This
+ # must be included in the image to ensure that the filesystem has a
+ # useful layout at runtime.
+ symlinkLayer = runCommand "symlink-layer.tar" {} ''
+ cp -r ${contentsEnv}/ ./layer
+ tar --transform='s|^\./||' -C layer --sort=name --mtime="@$SOURCE_DATE_EPOCH" --owner=0 --group=0 -cf $out .
+ '';
+
+ # Metadata about the symlink layer which is required for serving it.
+ # Two different hashes are computed for different usages (inclusion
+ # in manifest vs. content-checking in the layer cache).
+ symlinkLayerMeta = fromJSON (readFile (runCommand "symlink-layer-meta.json" {
+ buildInputs = [ coreutils jq openssl ];
+ }''
+ tarHash=$(sha256sum ${symlinkLayer} | cut -d ' ' -f1)
+ layerSize=$(stat --printf '%s' ${symlinkLayer})
+
+ jq -n -c --arg tarHash $tarHash --arg size $layerSize --arg path ${symlinkLayer} \
+ '{ size: ($size | tonumber), tarHash: $tarHash, path: $path }' >> $out
+ ''));
+
+ # Final output structure returned to Nixery if the build succeeded
+ buildOutput = {
+ runtimeGraph = fromJSON (readFile runtimeGraph);
+ symlinkLayer = symlinkLayerMeta;
+ };
+
+ # Output structure returned if errors occurred during the build. Currently the
+ # only error type that is returned in a structured way is 'not_found'.
+ errorOutput = {
+ error = "not_found";
+ pkgs = map (err: err.pkg) allContents.errors;
+ };
+in writeText "build-output.json" (if (length allContents.errors) == 0
+ then toJSON buildOutput
+ else toJSON errorOutput
+)
diff --git a/tools/nixery/server/builder/archive.go b/tools/nixery/server/builder/archive.go
deleted file mode 100644
index e0fb76d44bee..000000000000
--- a/tools/nixery/server/builder/archive.go
+++ /dev/null
@@ -1,116 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-package builder
-
-// This file implements logic for walking through a directory and creating a
-// tarball of it.
-//
-// The tarball is written straight to the supplied reader, which makes it
-// possible to create an image layer from the specified store paths, hash it and
-// upload it in one reading pass.
-
-import (
- "archive/tar"
- "compress/gzip"
- "crypto/sha256"
- "fmt"
- "io"
- "os"
- "path/filepath"
-
- "github.com/google/nixery/server/layers"
-)
-
-// Create a new compressed tarball from each of the paths in the list
-// and write it to the supplied writer.
-//
-// The uncompressed tarball is hashed because image manifests must
-// contain both the hashes of compressed and uncompressed layers.
-func packStorePaths(l *layers.Layer, w io.Writer) (string, error) {
- shasum := sha256.New()
- gz := gzip.NewWriter(w)
- multi := io.MultiWriter(shasum, gz)
- t := tar.NewWriter(multi)
-
- for _, path := range l.Contents {
- err := filepath.Walk(path, tarStorePath(t))
- if err != nil {
- return "", err
- }
- }
-
- if err := t.Close(); err != nil {
- return "", err
- }
-
- if err := gz.Close(); err != nil {
- return "", err
- }
-
- return fmt.Sprintf("sha256:%x", shasum.Sum([]byte{})), nil
-}
-
-func tarStorePath(w *tar.Writer) filepath.WalkFunc {
- return func(path string, info os.FileInfo, err error) error {
- if err != nil {
- return err
- }
-
- // If the entry is not a symlink or regular file, skip it.
- if info.Mode()&os.ModeSymlink == 0 && !info.Mode().IsRegular() {
- return nil
- }
-
- // the symlink target is read if this entry is a symlink, as it
- // is required when creating the file header
- var link string
- if info.Mode()&os.ModeSymlink != 0 {
- link, err = os.Readlink(path)
- if err != nil {
- return err
- }
- }
-
- header, err := tar.FileInfoHeader(info, link)
- if err != nil {
- return err
- }
-
- // The name retrieved from os.FileInfo only contains the file's
- // basename, but the full path is required within the layer
- // tarball.
- header.Name = path
- if err = w.WriteHeader(header); err != nil {
- return err
- }
-
- // At this point, return if no file content needs to be written
- if !info.Mode().IsRegular() {
- return nil
- }
-
- f, err := os.Open(path)
- if err != nil {
- return err
- }
-
- if _, err := io.Copy(w, f); err != nil {
- return err
- }
-
- f.Close()
-
- return nil
- }
-}
diff --git a/tools/nixery/server/builder/builder.go b/tools/nixery/server/builder/builder.go
deleted file mode 100644
index da9dede1acd7..000000000000
--- a/tools/nixery/server/builder/builder.go
+++ /dev/null
@@ -1,521 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-
-// Package builder implements the code required to build images via Nix. Image
-// build data is cached for up to 24 hours to avoid duplicated calls to Nix
-// (which are costly even if no building is performed).
-package builder
-
-import (
- "bufio"
- "bytes"
- "compress/gzip"
- "context"
- "crypto/sha256"
- "encoding/json"
- "fmt"
- "io"
- "io/ioutil"
- "os"
- "os/exec"
- "sort"
- "strings"
-
- "github.com/google/nixery/server/config"
- "github.com/google/nixery/server/layers"
- "github.com/google/nixery/server/manifest"
- "github.com/google/nixery/server/storage"
- log "github.com/sirupsen/logrus"
-)
-
-// The maximum number of layers in an image is 125. To allow for
-// extensibility, the actual number of layers Nixery is "allowed" to
-// use up is set at a lower point.
-const LayerBudget int = 94
-
-// State holds the runtime state that is carried around in Nixery and
-// passed to builder functions.
-type State struct {
- Storage storage.Backend
- Cache *LocalCache
- Cfg config.Config
- Pop layers.Popularity
-}
-
-// Architecture represents the possible CPU architectures for which
-// container images can be built.
-//
-// The default architecture is amd64, but support for ARM platforms is
-// available within nixpkgs and can be toggled via meta-packages.
-type Architecture struct {
- // Name of the system tuple to pass to Nix
- nixSystem string
-
- // Name of the architecture as used in the OCI manifests
- imageArch string
-}
-
-var amd64 = Architecture{"x86_64-linux", "amd64"}
-var arm64 = Architecture{"aarch64-linux", "arm64"}
-
-// Image represents the information necessary for building a container image.
-// This can be either a list of package names (corresponding to keys in the
-// nixpkgs set) or a Nix expression that results in a *list* of derivations.
-type Image struct {
- Name string
- Tag string
-
- // Names of packages to include in the image. These must correspond
- // directly to top-level names of Nix packages in the nixpkgs tree.
- Packages []string
-
- // Architecture for which to build the image. Nixery defaults
- // this to amd64 if not specified via meta-packages.
- Arch *Architecture
-}
-
-// BuildResult represents the data returned from the server to the
-// HTTP handlers. Error information is propagated straight from Nix
-// for errors inside of the build that should be fed back to the
-// client (such as missing packages).
-type BuildResult struct {
- Error string `json:"error"`
- Pkgs []string `json:"pkgs"`
- Manifest json.RawMessage `json:"manifest"`
-}
-
-// ImageFromName parses an image name into the corresponding structure which can
-// be used to invoke Nix.
-//
-// It will expand convenience names under the hood (see the `convenienceNames`
-// function below) and append packages that are always included (cacert, iana-etc).
-//
-// Once assembled the image structure uses a sorted representation of
-// the name. This is to avoid unnecessarily cache-busting images if
-// only the order of requested packages has changed.
-func ImageFromName(name string, tag string) Image {
- pkgs := strings.Split(name, "/")
- arch, expanded := metaPackages(pkgs)
- expanded = append(expanded, "cacert", "iana-etc")
-
- sort.Strings(pkgs)
- sort.Strings(expanded)
-
- return Image{
- Name: strings.Join(pkgs, "/"),
- Tag: tag,
- Packages: expanded,
- Arch: arch,
- }
-}
-
-// ImageResult represents the output of calling the Nix derivation
-// responsible for preparing an image.
-type ImageResult struct {
- // These fields are populated in case of an error
- Error string `json:"error"`
- Pkgs []string `json:"pkgs"`
-
- // These fields are populated in case of success
- Graph layers.RuntimeGraph `json:"runtimeGraph"`
- SymlinkLayer struct {
- Size int `json:"size"`
- TarHash string `json:"tarHash"`
- Path string `json:"path"`
- } `json:"symlinkLayer"`
-}
-
-// metaPackages expands package names defined by Nixery which either
-// include sets of packages or trigger certain image-building
-// behaviour.
-//
-// Meta-packages must be specified as the first packages in an image
-// name.
-//
-// Currently defined meta-packages are:
-//
-// * `shell`: Includes bash, coreutils and other common command-line tools
-// * `arm64`: Causes Nixery to build images for the ARM64 architecture
-func metaPackages(packages []string) (*Architecture, []string) {
- arch := &amd64
-
- var metapkgs []string
- lastMeta := 0
- for idx, p := range packages {
- if p == "shell" || p == "arm64" {
- metapkgs = append(metapkgs, p)
- lastMeta = idx + 1
- } else {
- break
- }
- }
-
- // Chop off the meta-packages from the front of the package
- // list
- packages = packages[lastMeta:]
-
- for _, p := range metapkgs {
- switch p {
- case "shell":
- packages = append(packages, "bashInteractive", "coreutils", "moreutils", "nano")
- case "arm64":
- arch = &arm64
- }
- }
-
- return arch, packages
-}
-
-// logNix logs each output line from Nix. It runs in a goroutine per
-// output channel that should be live-logged.
-func logNix(image, cmd string, r io.ReadCloser) {
- scanner := bufio.NewScanner(r)
- for scanner.Scan() {
- log.WithFields(log.Fields{
- "image": image,
- "cmd": cmd,
- }).Info("[nix] " + scanner.Text())
- }
-}
-
-func callNix(program, image string, args []string) ([]byte, error) {
- cmd := exec.Command(program, args...)
-
- outpipe, err := cmd.StdoutPipe()
- if err != nil {
- return nil, err
- }
-
- errpipe, err := cmd.StderrPipe()
- if err != nil {
- return nil, err
- }
- go logNix(program, image, errpipe)
-
- if err = cmd.Start(); err != nil {
- log.WithError(err).WithFields(log.Fields{
- "image": image,
- "cmd": program,
- }).Error("error invoking Nix")
-
- return nil, err
- }
-
- log.WithFields(log.Fields{
- "cmd": program,
- "image": image,
- }).Info("invoked Nix build")
-
- stdout, _ := ioutil.ReadAll(outpipe)
-
- if err = cmd.Wait(); err != nil {
- log.WithError(err).WithFields(log.Fields{
- "image": image,
- "cmd": program,
- "stdout": stdout,
- }).Info("failed to invoke Nix")
-
- return nil, err
- }
-
- resultFile := strings.TrimSpace(string(stdout))
- buildOutput, err := ioutil.ReadFile(resultFile)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "image": image,
- "file": resultFile,
- }).Info("failed to read Nix result file")
-
- return nil, err
- }
-
- return buildOutput, nil
-}
-
-// Call out to Nix and request metadata for the image to be built. All
-// required store paths for the image will be realised, but layers
-// will not yet be created from them.
-//
-// This function is only invoked if the manifest is not found in any
-// cache.
-func prepareImage(s *State, image *Image) (*ImageResult, error) {
- packages, err := json.Marshal(image.Packages)
- if err != nil {
- return nil, err
- }
-
- srcType, srcArgs := s.Cfg.Pkgs.Render(image.Tag)
-
- args := []string{
- "--timeout", s.Cfg.Timeout,
- "--argstr", "packages", string(packages),
- "--argstr", "srcType", srcType,
- "--argstr", "srcArgs", srcArgs,
- "--argstr", "system", image.Arch.nixSystem,
- }
-
- output, err := callNix("nixery-build-image", image.Name, args)
- if err != nil {
- // granular error logging is performed in callNix already
- return nil, err
- }
-
- log.WithFields(log.Fields{
- "image": image.Name,
- "tag": image.Tag,
- }).Info("finished image preparation via Nix")
-
- var result ImageResult
- err = json.Unmarshal(output, &result)
- if err != nil {
- return nil, err
- }
-
- return &result, nil
-}
-
-// Groups layers and checks whether they are present in the cache
-// already, otherwise calls out to Nix to assemble layers.
-//
-// Newly built layers are uploaded to the bucket. Cache entries are
-// added only after successful uploads, which guarantees that entries
-// retrieved from the cache are present in the bucket.
-func prepareLayers(ctx context.Context, s *State, image *Image, result *ImageResult) ([]manifest.Entry, error) {
- grouped := layers.Group(&result.Graph, &s.Pop, LayerBudget)
-
- var entries []manifest.Entry
-
- // Splits the layers into those which are already present in
- // the cache, and those that are missing.
- //
- // Missing layers are built and uploaded to the storage
- // bucket.
- for _, l := range grouped {
- if entry, cached := layerFromCache(ctx, s, l.Hash()); cached {
- entries = append(entries, *entry)
- } else {
- lh := l.Hash()
-
- // While packing store paths, the SHA sum of
- // the uncompressed layer is computed and
- // written to `tarhash`.
- //
- // TODO(tazjin): Refactor this to make the
- // flow of data cleaner.
- var tarhash string
- lw := func(w io.Writer) error {
- var err error
- tarhash, err = packStorePaths(&l, w)
- return err
- }
-
- entry, err := uploadHashLayer(ctx, s, lh, lw)
- if err != nil {
- return nil, err
- }
- entry.MergeRating = l.MergeRating
- entry.TarHash = tarhash
-
- var pkgs []string
- for _, p := range l.Contents {
- pkgs = append(pkgs, layers.PackageFromPath(p))
- }
-
- log.WithFields(log.Fields{
- "layer": lh,
- "packages": pkgs,
- "tarhash": tarhash,
- }).Info("created image layer")
-
- go cacheLayer(ctx, s, l.Hash(), *entry)
- entries = append(entries, *entry)
- }
- }
-
- // Symlink layer (built in the first Nix build) needs to be
- // included here manually:
- slkey := result.SymlinkLayer.TarHash
- entry, err := uploadHashLayer(ctx, s, slkey, func(w io.Writer) error {
- f, err := os.Open(result.SymlinkLayer.Path)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "image": image.Name,
- "tag": image.Tag,
- "layer": slkey,
- }).Error("failed to open symlink layer")
-
- return err
- }
- defer f.Close()
-
- gz := gzip.NewWriter(w)
- _, err = io.Copy(gz, f)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "image": image.Name,
- "tag": image.Tag,
- "layer": slkey,
- }).Error("failed to upload symlink layer")
-
- return err
- }
-
- return gz.Close()
- })
-
- if err != nil {
- return nil, err
- }
-
- entry.TarHash = "sha256:" + result.SymlinkLayer.TarHash
- go cacheLayer(ctx, s, slkey, *entry)
- entries = append(entries, *entry)
-
- return entries, nil
-}
-
-// layerWriter is the type for functions that can write a layer to the
-// multiwriter used for uploading & hashing.
-//
-// This type exists to avoid duplication between the handling of
-// symlink layers and store path layers.
-type layerWriter func(w io.Writer) error
-
-// byteCounter is a special io.Writer that counts all bytes written to
-// it and does nothing else.
-//
-// This is required because the ad-hoc writing of tarballs leaves no
-// single place to count the final tarball size otherwise.
-type byteCounter struct {
- count int64
-}
-
-func (b *byteCounter) Write(p []byte) (n int, err error) {
- b.count += int64(len(p))
- return len(p), nil
-}
-
-// Upload a layer tarball to the storage bucket, while hashing it at
-// the same time. The supplied function is expected to provide the
-// layer data to the writer.
-//
-// The initial upload is performed in a 'staging' folder, as the
-// SHA256-hash is not yet available when the upload is initiated.
-//
-// After a successful upload, the file is moved to its final location
-// in the bucket and the build cache is populated.
-//
-// The return value is the layer's SHA256 hash, which is used in the
-// image manifest.
-func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter) (*manifest.Entry, error) {
- path := "staging/" + key
- sha256sum, size, err := s.Storage.Persist(ctx, path, func(sw io.Writer) (string, int64, error) {
- // Sets up a "multiwriter" that simultaneously runs both hash
- // algorithms and uploads to the storage backend.
- shasum := sha256.New()
- counter := &byteCounter{}
- multi := io.MultiWriter(sw, shasum, counter)
-
- err := lw(multi)
- sha256sum := fmt.Sprintf("%x", shasum.Sum([]byte{}))
-
- return sha256sum, counter.count, err
- })
-
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "layer": key,
- "backend": s.Storage.Name(),
- }).Error("failed to create and store layer")
-
- return nil, err
- }
-
- // Hashes are now known and the object is in the bucket, what
- // remains is to move it to the correct location and cache it.
- err = s.Storage.Move(ctx, "staging/"+key, "layers/"+sha256sum)
- if err != nil {
- log.WithError(err).WithField("layer", key).
- Error("failed to move layer from staging")
-
- return nil, err
- }
-
- log.WithFields(log.Fields{
- "layer": key,
- "sha256": sha256sum,
- "size": size,
- }).Info("created and persisted layer")
-
- entry := manifest.Entry{
- Digest: "sha256:" + sha256sum,
- Size: size,
- }
-
- return &entry, nil
-}
-
-func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, error) {
- key := s.Cfg.Pkgs.CacheKey(image.Packages, image.Tag)
- if key != "" {
- if m, c := manifestFromCache(ctx, s, key); c {
- return &BuildResult{
- Manifest: m,
- }, nil
- }
- }
-
- imageResult, err := prepareImage(s, image)
- if err != nil {
- return nil, err
- }
-
- if imageResult.Error != "" {
- return &BuildResult{
- Error: imageResult.Error,
- Pkgs: imageResult.Pkgs,
- }, nil
- }
-
- layers, err := prepareLayers(ctx, s, image, imageResult)
- if err != nil {
- return nil, err
- }
-
- m, c := manifest.Manifest(image.Arch.imageArch, layers)
-
- lw := func(w io.Writer) error {
- r := bytes.NewReader(c.Config)
- _, err := io.Copy(w, r)
- return err
- }
-
- if _, err = uploadHashLayer(ctx, s, c.SHA256, lw); err != nil {
- log.WithError(err).WithFields(log.Fields{
- "image": image.Name,
- "tag": image.Tag,
- }).Error("failed to upload config")
-
- return nil, err
- }
-
- if key != "" {
- go cacheManifest(ctx, s, key, m)
- }
-
- result := BuildResult{
- Manifest: m,
- }
- return &result, nil
-}
diff --git a/tools/nixery/server/builder/builder_test.go b/tools/nixery/server/builder/builder_test.go
deleted file mode 100644
index 3fbe2ab40e23..000000000000
--- a/tools/nixery/server/builder/builder_test.go
+++ /dev/null
@@ -1,123 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-package builder
-
-import (
- "github.com/google/go-cmp/cmp"
- "github.com/google/go-cmp/cmp/cmpopts"
- "testing"
-)
-
-var ignoreArch = cmpopts.IgnoreFields(Image{}, "Arch")
-
-func TestImageFromNameSimple(t *testing.T) {
- image := ImageFromName("hello", "latest")
- expected := Image{
- Name: "hello",
- Tag: "latest",
- Packages: []string{
- "cacert",
- "hello",
- "iana-etc",
- },
- }
-
- if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
- t.Fatalf("Image(\"hello\", \"latest\") mismatch:\n%s", diff)
- }
-}
-
-func TestImageFromNameMultiple(t *testing.T) {
- image := ImageFromName("hello/git/htop", "latest")
- expected := Image{
- Name: "git/hello/htop",
- Tag: "latest",
- Packages: []string{
- "cacert",
- "git",
- "hello",
- "htop",
- "iana-etc",
- },
- }
-
- if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
- t.Fatalf("Image(\"hello/git/htop\", \"latest\") mismatch:\n%s", diff)
- }
-}
-
-func TestImageFromNameShell(t *testing.T) {
- image := ImageFromName("shell", "latest")
- expected := Image{
- Name: "shell",
- Tag: "latest",
- Packages: []string{
- "bashInteractive",
- "cacert",
- "coreutils",
- "iana-etc",
- "moreutils",
- "nano",
- },
- }
-
- if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
- t.Fatalf("Image(\"shell\", \"latest\") mismatch:\n%s", diff)
- }
-}
-
-func TestImageFromNameShellMultiple(t *testing.T) {
- image := ImageFromName("shell/htop", "latest")
- expected := Image{
- Name: "htop/shell",
- Tag: "latest",
- Packages: []string{
- "bashInteractive",
- "cacert",
- "coreutils",
- "htop",
- "iana-etc",
- "moreutils",
- "nano",
- },
- }
-
- if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
- t.Fatalf("Image(\"shell/htop\", \"latest\") mismatch:\n%s", diff)
- }
-}
-
-func TestImageFromNameShellArm64(t *testing.T) {
- image := ImageFromName("shell/arm64", "latest")
- expected := Image{
- Name: "arm64/shell",
- Tag: "latest",
- Packages: []string{
- "bashInteractive",
- "cacert",
- "coreutils",
- "iana-etc",
- "moreutils",
- "nano",
- },
- }
-
- if diff := cmp.Diff(expected, image, ignoreArch); diff != "" {
- t.Fatalf("Image(\"shell/arm64\", \"latest\") mismatch:\n%s", diff)
- }
-
- if image.Arch.imageArch != "arm64" {
- t.Fatal("Image(\"shell/arm64\"): Expected arch arm64")
- }
-}
diff --git a/tools/nixery/server/builder/cache.go b/tools/nixery/server/builder/cache.go
deleted file mode 100644
index 82bd90927cd0..000000000000
--- a/tools/nixery/server/builder/cache.go
+++ /dev/null
@@ -1,236 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-package builder
-
-import (
- "bytes"
- "context"
- "encoding/json"
- "io"
- "io/ioutil"
- "os"
- "sync"
-
- "github.com/google/nixery/server/manifest"
- log "github.com/sirupsen/logrus"
-)
-
-// LocalCache implements the structure used for local caching of
-// manifests and layer uploads.
-type LocalCache struct {
- // Manifest cache
- mmtx sync.RWMutex
- mdir string
-
- // Layer cache
- lmtx sync.RWMutex
- lcache map[string]manifest.Entry
-}
-
-// Creates an in-memory cache and ensures that the local file path for
-// manifest caching exists.
-func NewCache() (LocalCache, error) {
- path := os.TempDir() + "/nixery"
- err := os.MkdirAll(path, 0755)
- if err != nil {
- return LocalCache{}, err
- }
-
- return LocalCache{
- mdir: path + "/",
- lcache: make(map[string]manifest.Entry),
- }, nil
-}
-
-// Retrieve a cached manifest if the build is cacheable and it exists.
-func (c *LocalCache) manifestFromLocalCache(key string) (json.RawMessage, bool) {
- c.mmtx.RLock()
- defer c.mmtx.RUnlock()
-
- f, err := os.Open(c.mdir + key)
- if err != nil {
- // This is a debug log statement because failure to
- // read the manifest key is currently expected if it
- // is not cached.
- log.WithError(err).WithField("manifest", key).
- Debug("failed to read manifest from local cache")
-
- return nil, false
- }
- defer f.Close()
-
- m, err := ioutil.ReadAll(f)
- if err != nil {
- log.WithError(err).WithField("manifest", key).
- Error("failed to read manifest from local cache")
-
- return nil, false
- }
-
- return json.RawMessage(m), true
-}
-
-// Adds the result of a manifest build to the local cache, if the
-// manifest is considered cacheable.
-//
-// Manifests can be quite large and are cached on disk instead of in
-// memory.
-func (c *LocalCache) localCacheManifest(key string, m json.RawMessage) {
- c.mmtx.Lock()
- defer c.mmtx.Unlock()
-
- err := ioutil.WriteFile(c.mdir+key, []byte(m), 0644)
- if err != nil {
- log.WithError(err).WithField("manifest", key).
- Error("failed to locally cache manifest")
- }
-}
-
-// Retrieve a layer build from the local cache.
-func (c *LocalCache) layerFromLocalCache(key string) (*manifest.Entry, bool) {
- c.lmtx.RLock()
- e, ok := c.lcache[key]
- c.lmtx.RUnlock()
-
- return &e, ok
-}
-
-// Add a layer build result to the local cache.
-func (c *LocalCache) localCacheLayer(key string, e manifest.Entry) {
- c.lmtx.Lock()
- c.lcache[key] = e
- c.lmtx.Unlock()
-}
-
-// Retrieve a manifest from the cache(s). First the local cache is
-// checked, then the storage backend.
-func manifestFromCache(ctx context.Context, s *State, key string) (json.RawMessage, bool) {
- if m, cached := s.Cache.manifestFromLocalCache(key); cached {
- return m, true
- }
-
- r, err := s.Storage.Fetch(ctx, "manifests/"+key)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "manifest": key,
- "backend": s.Storage.Name(),
- }).Error("failed to fetch manifest from cache")
-
- return nil, false
- }
- defer r.Close()
-
- m, err := ioutil.ReadAll(r)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "manifest": key,
- "backend": s.Storage.Name(),
- }).Error("failed to read cached manifest from storage backend")
-
- return nil, false
- }
-
- go s.Cache.localCacheManifest(key, m)
- log.WithField("manifest", key).Info("retrieved manifest from GCS")
-
- return json.RawMessage(m), true
-}
-
-// Add a manifest to the bucket & local caches
-func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage) {
- go s.Cache.localCacheManifest(key, m)
-
- path := "manifests/" + key
- _, size, err := s.Storage.Persist(ctx, path, func(w io.Writer) (string, int64, error) {
- size, err := io.Copy(w, bytes.NewReader([]byte(m)))
- return "", size, err
- })
-
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "manifest": key,
- "backend": s.Storage.Name(),
- }).Error("failed to cache manifest to storage backend")
-
- return
- }
-
- log.WithFields(log.Fields{
- "manifest": key,
- "size": size,
- "backend": s.Storage.Name(),
- }).Info("cached manifest to storage backend")
-}
-
-// Retrieve a layer build from the cache, first checking the local
-// cache followed by the bucket cache.
-func layerFromCache(ctx context.Context, s *State, key string) (*manifest.Entry, bool) {
- if entry, cached := s.Cache.layerFromLocalCache(key); cached {
- return entry, true
- }
-
- r, err := s.Storage.Fetch(ctx, "builds/"+key)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "layer": key,
- "backend": s.Storage.Name(),
- }).Debug("failed to retrieve cached layer from storage backend")
-
- return nil, false
- }
- defer r.Close()
-
- jb := bytes.NewBuffer([]byte{})
- _, err = io.Copy(jb, r)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "layer": key,
- "backend": s.Storage.Name(),
- }).Error("failed to read cached layer from storage backend")
-
- return nil, false
- }
-
- var entry manifest.Entry
- err = json.Unmarshal(jb.Bytes(), &entry)
- if err != nil {
- log.WithError(err).WithField("layer", key).
- Error("failed to unmarshal cached layer")
-
- return nil, false
- }
-
- go s.Cache.localCacheLayer(key, entry)
- return &entry, true
-}
-
-func cacheLayer(ctx context.Context, s *State, key string, entry manifest.Entry) {
- s.Cache.localCacheLayer(key, entry)
-
- j, _ := json.Marshal(&entry)
- path := "builds/" + key
- _, _, err := s.Storage.Persist(ctx, path, func(w io.Writer) (string, int64, error) {
- size, err := io.Copy(w, bytes.NewReader(j))
- return "", size, err
- })
-
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "layer": key,
- "backend": s.Storage.Name(),
- }).Error("failed to cache layer")
- }
-
- return
-}
diff --git a/tools/nixery/server/config/config.go b/tools/nixery/server/config/config.go
deleted file mode 100644
index 7ec102bd6cee..000000000000
--- a/tools/nixery/server/config/config.go
+++ /dev/null
@@ -1,84 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-
-// Package config implements structures to store Nixery's configuration at
-// runtime as well as the logic for instantiating this configuration from the
-// environment.
-package config
-
-import (
- "os"
-
- log "github.com/sirupsen/logrus"
-)
-
-func getConfig(key, desc, def string) string {
- value := os.Getenv(key)
- if value == "" && def == "" {
- log.WithFields(log.Fields{
- "option": key,
- "description": desc,
- }).Fatal("missing required configuration envvar")
- } else if value == "" {
- return def
- }
-
- return value
-}
-
-// Backend represents the possible storage backend types
-type Backend int
-
-const (
- GCS = iota
- FileSystem
-)
-
-// Config holds the Nixery configuration options.
-type Config struct {
- Port string // Port on which to launch HTTP server
- Pkgs PkgSource // Source for Nix package set
- Timeout string // Timeout for a single Nix builder (seconds)
- WebDir string // Directory with static web assets
- PopUrl string // URL to the Nix package popularity count
- Backend Backend // Storage backend to use for Nixery
-}
-
-func FromEnv() (Config, error) {
- pkgs, err := pkgSourceFromEnv()
- if err != nil {
- return Config{}, err
- }
-
- var b Backend
- switch os.Getenv("NIXERY_STORAGE_BACKEND") {
- case "gcs":
- b = GCS
- case "filesystem":
- b = FileSystem
- default:
- log.WithField("values", []string{
- "gcs",
- }).Fatal("NIXERY_STORAGE_BUCKET must be set to a supported value")
- }
-
- return Config{
- Port: getConfig("PORT", "HTTP port", ""),
- Pkgs: pkgs,
- Timeout: getConfig("NIX_TIMEOUT", "Nix builder timeout", "60"),
- WebDir: getConfig("WEB_DIR", "Static web file dir", ""),
- PopUrl: os.Getenv("NIX_POPULARITY_URL"),
- Backend: b,
- }, nil
-}
diff --git a/tools/nixery/server/config/pkgsource.go b/tools/nixery/server/config/pkgsource.go
deleted file mode 100644
index 95236c4b0d15..000000000000
--- a/tools/nixery/server/config/pkgsource.go
+++ /dev/null
@@ -1,159 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-package config
-
-import (
- "crypto/sha1"
- "encoding/json"
- "fmt"
- "os"
- "regexp"
- "strings"
-
- log "github.com/sirupsen/logrus"
-)
-
-// PkgSource represents the source from which the Nix package set used
-// by Nixery is imported. Users configure the source by setting one of
-// the supported environment variables.
-type PkgSource interface {
- // Convert the package source into the representation required
- // for calling Nix.
- Render(tag string) (string, string)
-
- // Create a key by which builds for this source and image
- // combination can be cached.
- //
- // The empty string means that this value is not cacheable due
- // to the package source being a moving target (such as a
- // channel).
- CacheKey(pkgs []string, tag string) string
-}
-
-type GitSource struct {
- repository string
-}
-
-// Regex to determine whether a git reference is a commit hash or
-// something else (branch/tag).
-//
-// Used to check whether a git reference is cacheable, and to pass the
-// correct git structure to Nix.
-//
-// Note: If a user creates a branch or tag with the name of a commit
-// and references it intentionally, this heuristic will fail.
-var commitRegex = regexp.MustCompile(`^[0-9a-f]{40}$`)
-
-func (g *GitSource) Render(tag string) (string, string) {
- args := map[string]string{
- "url": g.repository,
- }
-
- // The 'git' source requires a tag to be present. If the user
- // has not specified one, it is assumed that the default
- // 'master' branch should be used.
- if tag == "latest" || tag == "" {
- tag = "master"
- }
-
- if commitRegex.MatchString(tag) {
- args["rev"] = tag
- } else {
- args["ref"] = tag
- }
-
- j, _ := json.Marshal(args)
-
- return "git", string(j)
-}
-
-func (g *GitSource) CacheKey(pkgs []string, tag string) string {
- // Only full commit hashes can be used for caching, as
- // everything else is potentially a moving target.
- if !commitRegex.MatchString(tag) {
- return ""
- }
-
- unhashed := strings.Join(pkgs, "") + tag
- hashed := fmt.Sprintf("%x", sha1.Sum([]byte(unhashed)))
-
- return hashed
-}
-
-type NixChannel struct {
- channel string
-}
-
-func (n *NixChannel) Render(tag string) (string, string) {
- return "nixpkgs", n.channel
-}
-
-func (n *NixChannel) CacheKey(pkgs []string, tag string) string {
- // Since Nix channels are downloaded from the nixpkgs-channels
- // Github, users can specify full commit hashes as the
- // "channel", in which case builds are cacheable.
- if !commitRegex.MatchString(n.channel) {
- return ""
- }
-
- unhashed := strings.Join(pkgs, "") + n.channel
- hashed := fmt.Sprintf("%x", sha1.Sum([]byte(unhashed)))
-
- return hashed
-}
-
-type PkgsPath struct {
- path string
-}
-
-func (p *PkgsPath) Render(tag string) (string, string) {
- return "path", p.path
-}
-
-func (p *PkgsPath) CacheKey(pkgs []string, tag string) string {
- // Path-based builds are not currently cacheable because we
- // have no local hash of the package folder's state easily
- // available.
- return ""
-}
-
-// Retrieve a package source from the environment. If no source is
-// specified, the Nix code will default to a recent NixOS channel.
-func pkgSourceFromEnv() (PkgSource, error) {
- if channel := os.Getenv("NIXERY_CHANNEL"); channel != "" {
- log.WithField("channel", channel).Info("using Nix package set from Nix channel or commit")
-
- return &NixChannel{
- channel: channel,
- }, nil
- }
-
- if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
- log.WithField("repo", git).Info("using NIx package set from git repository")
-
- return &GitSource{
- repository: git,
- }, nil
- }
-
- if path := os.Getenv("NIXERY_PKGS_PATH"); path != "" {
- log.WithField("path", path).Info("using Nix package set at local path")
-
- return &PkgsPath{
- path: path,
- }, nil
- }
-
- return nil, fmt.Errorf("no valid package source has been specified")
-}
diff --git a/tools/nixery/server/default.nix b/tools/nixery/server/default.nix
deleted file mode 100644
index d497f106b02e..000000000000
--- a/tools/nixery/server/default.nix
+++ /dev/null
@@ -1,62 +0,0 @@
-# Copyright 2019 Google LLC
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# https://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-{ buildGoPackage, go, lib, srcHash }:
-
-buildGoPackage rec {
- name = "nixery-server";
- goDeps = ./go-deps.nix;
- src = ./.;
-
- goPackagePath = "github.com/google/nixery/server";
- doCheck = true;
-
- # The following phase configurations work around the overengineered
- # Nix build configuration for Go.
- #
- # All I want this to do is produce a binary in the standard Nix
- # output path, so pretty much all the phases except for the initial
- # configuration of the "dependency forest" in $GOPATH have been
- # overridden.
- #
- # This is necessary because the upstream builder does wonky things
- # with the build arguments to the compiler, but I need to set some
- # complex flags myself
-
- outputs = [ "out" ];
- preConfigure = "bin=$out";
- buildPhase = ''
- runHook preBuild
- runHook renameImport
-
- export GOBIN="$out/bin"
- go install -ldflags "-X main.version=$(cat ${srcHash})" ${goPackagePath}
- '';
-
- fixupPhase = ''
- remove-references-to -t ${go} $out/bin/server
- '';
-
- checkPhase = ''
- go vet ${goPackagePath}
- go test ${goPackagePath}
- '';
-
- meta = {
- description = "Container image builder serving Nix-backed images";
- homepage = "https://github.com/google/nixery";
- license = lib.licenses.asl20;
- maintainers = [ lib.maintainers.tazjin ];
- };
-}
diff --git a/tools/nixery/server/go-deps.nix b/tools/nixery/server/go-deps.nix
deleted file mode 100644
index 847b44dce63c..000000000000
--- a/tools/nixery/server/go-deps.nix
+++ /dev/null
@@ -1,129 +0,0 @@
-# This file was generated by https://github.com/kamilchm/go2nix v1.3.0
-[
- {
- goPackagePath = "cloud.google.com/go";
- fetch = {
- type = "git";
- url = "https://code.googlesource.com/gocloud";
- rev = "77f6a3a292a7dbf66a5329de0d06326f1906b450";
- sha256 = "1c9pkx782nbcp8jnl5lprcbzf97van789ky5qsncjgywjyymhigi";
- };
- }
- {
- goPackagePath = "github.com/golang/protobuf";
- fetch = {
- type = "git";
- url = "https://github.com/golang/protobuf";
- rev = "6c65a5562fc06764971b7c5d05c76c75e84bdbf7";
- sha256 = "1k1wb4zr0qbwgpvz9q5ws9zhlal8hq7dmq62pwxxriksayl6hzym";
- };
- }
- {
- goPackagePath = "github.com/googleapis/gax-go";
- fetch = {
- type = "git";
- url = "https://github.com/googleapis/gax-go";
- rev = "bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2";
- sha256 = "1lxawwngv6miaqd25s3ba0didfzylbwisd2nz7r4gmbmin6jsjrx";
- };
- }
- {
- goPackagePath = "github.com/hashicorp/golang-lru";
- fetch = {
- type = "git";
- url = "https://github.com/hashicorp/golang-lru";
- rev = "59383c442f7d7b190497e9bb8fc17a48d06cd03f";
- sha256 = "0yzwl592aa32vfy73pl7wdc21855w17zssrp85ckw2nisky8rg9c";
- };
- }
- {
- goPackagePath = "go.opencensus.io";
- fetch = {
- type = "git";
- url = "https://github.com/census-instrumentation/opencensus-go";
- rev = "b4a14686f0a98096416fe1b4cb848e384fb2b22b";
- sha256 = "1aidyp301v5ngwsnnc8v1s09vvbsnch1jc4vd615f7qv77r9s7dn";
- };
- }
- {
- goPackagePath = "golang.org/x/net";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/net";
- rev = "da137c7871d730100384dbcf36e6f8fa493aef5b";
- sha256 = "1qsiyr3irmb6ii06hivm9p2c7wqyxczms1a9v1ss5698yjr3fg47";
- };
- }
- {
- goPackagePath = "golang.org/x/oauth2";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/oauth2";
- rev = "0f29369cfe4552d0e4bcddc57cc75f4d7e672a33";
- sha256 = "06jwpvx0x2gjn2y959drbcir5kd7vg87k0r1216abk6rrdzzrzi2";
- };
- }
- {
- goPackagePath = "golang.org/x/sys";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/sys";
- rev = "51ab0e2deafac1f46c46ad59cf0921be2f180c3d";
- sha256 = "0xdhpckbql3bsqkpc2k5b1cpnq3q1qjqjjq2j3p707rfwb8nm91a";
- };
- }
- {
- goPackagePath = "golang.org/x/text";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/text";
- rev = "342b2e1fbaa52c93f31447ad2c6abc048c63e475";
- sha256 = "0flv9idw0jm5nm8lx25xqanbkqgfiym6619w575p7nrdh0riqwqh";
- };
- }
- {
- goPackagePath = "google.golang.org/api";
- fetch = {
- type = "git";
- url = "https://code.googlesource.com/google-api-go-client";
- rev = "069bea57b1be6ad0671a49ea7a1128025a22b73f";
- sha256 = "19q2b610lkf3z3y9hn6rf11dd78xr9q4340mdyri7kbijlj2r44q";
- };
- }
- {
- goPackagePath = "google.golang.org/genproto";
- fetch = {
- type = "git";
- url = "https://github.com/google/go-genproto";
- rev = "c506a9f9061087022822e8da603a52fc387115a8";
- sha256 = "03hh80aqi58dqi5ykj4shk3chwkzrgq2f3k6qs5qhgvmcy79y2py";
- };
- }
- {
- goPackagePath = "google.golang.org/grpc";
- fetch = {
- type = "git";
- url = "https://github.com/grpc/grpc-go";
- rev = "977142214c45640483838b8672a43c46f89f90cb";
- sha256 = "05wig23l2sil3bfdv19gq62sya7hsabqj9l8pzr1sm57qsvj218d";
- };
- }
- {
- goPackagePath = "gonum.org/v1/gonum";
- fetch = {
- type = "git";
- url = "https://github.com/gonum/gonum";
- rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
- sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
- };
- }
- {
- goPackagePath = "github.com/sirupsen/logrus";
- fetch = {
- type = "git";
- url = "https://github.com/sirupsen/logrus";
- rev = "de736cf91b921d56253b4010270681d33fdf7cb5";
- sha256 = "1qixss8m5xy7pzbf0qz2k3shjw0asklm9sj6zyczp7mryrari0aj";
- };
- }
-]
diff --git a/tools/nixery/server/layers/grouping.go b/tools/nixery/server/layers/grouping.go
deleted file mode 100644
index 3902c8a4ef26..000000000000
--- a/tools/nixery/server/layers/grouping.go
+++ /dev/null
@@ -1,361 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-
-// This package reads an export reference graph (i.e. a graph representing the
-// runtime dependencies of a set of derivations) created by Nix and groups it in
-// a way that is likely to match the grouping for other derivation sets with
-// overlapping dependencies.
-//
-// This is used to determine which derivations to include in which layers of a
-// container image.
-//
-// # Inputs
-//
-// * a graph of Nix runtime dependencies, generated via exportReferencesGraph
-// * popularity values of each package in the Nix package set (in the form of a
-// direct reference count)
-// * a maximum number of layers to allocate for the image (the "layer budget")
-//
-// # Algorithm
-//
-// It works by first creating a (directed) dependency tree:
-//
-// img (root node)
-// │
-// ├───> A ─────┐
-// │ v
-// ├───> B ───> E
-// │ ^
-// ├───> C ─────┘
-// │ │
-// │ v
-// └───> D ───> F
-// │
-// └────> G
-//
-// Each node (i.e. package) is then visited to determine how important
-// it is to separate this node into its own layer, specifically:
-//
-// 1. Is the node within a certain threshold percentile of absolute
-// popularity within all of nixpkgs? (e.g. `glibc`, `openssl`)
-//
-// 2. Is the node's runtime closure above a threshold size? (e.g. 100MB)
-//
-// In either case, a bit is flipped for this node representing each
-// condition and an edge to it is inserted directly from the image
-// root, if it does not already exist.
-//
-// For the rest of the example we assume 'G' is above the threshold
-// size and 'E' is popular.
-//
-// This tree is then transformed into a dominator tree:
-//
-// img
-// │
-// ├───> A
-// ├───> B
-// ├───> C
-// ├───> E
-// ├───> D ───> F
-// └───> G
-//
-// Specifically this means that the paths to A, B, C, E, G, and D
-// always pass through the root (i.e. are dominated by it), whilst F
-// is dominated by D (all paths go through it).
-//
-// The top-level subtrees are considered as the initially selected
-// layers.
-//
-// If the list of layers fits within the layer budget, it is returned.
-//
-// Otherwise, a merge rating is calculated for each layer. This is the
-// product of the layer's total size and its root node's popularity.
-//
-// Layers are then merged in ascending order of merge ratings until
-// they fit into the layer budget.
-//
-// # Threshold values
-//
-// Threshold values for the partitioning conditions mentioned above
-// have not yet been determined, but we will make a good first guess
-// based on gut feeling and proceed to measure their impact on cache
-// hits/misses.
-//
-// # Example
-//
-// Using the logic described above as well as the example presented in
-// the introduction, this program would create the following layer
-// groupings (assuming no additional partitioning):
-//
-// Layer budget: 1
-// Layers: { A, B, C, D, E, F, G }
-//
-// Layer budget: 2
-// Layers: { G }, { A, B, C, D, E, F }
-//
-// Layer budget: 3
-// Layers: { G }, { E }, { A, B, C, D, F }
-//
-// Layer budget: 4
-// Layers: { G }, { E }, { D, F }, { A, B, C }
-//
-// ...
-//
-// Layer budget: 10
-// Layers: { E }, { D, F }, { A }, { B }, { C }
-package layers
-
-import (
- "crypto/sha1"
- "fmt"
- "regexp"
- "sort"
- "strings"
-
- log "github.com/sirupsen/logrus"
- "gonum.org/v1/gonum/graph/flow"
- "gonum.org/v1/gonum/graph/simple"
-)
-
-// RuntimeGraph represents structured information from Nix about the runtime
-// dependencies of a derivation.
-//
-// This is generated in Nix by using the exportReferencesGraph feature.
-type RuntimeGraph struct {
- References struct {
- Graph []string `json:"graph"`
- } `json:"exportReferencesGraph"`
-
- Graph []struct {
- Size uint64 `json:"closureSize"`
- Path string `json:"path"`
- Refs []string `json:"references"`
- } `json:"graph"`
-}
-
-// Popularity data for each Nix package that was calculated in advance.
-//
-// Popularity is a number from 1-100 that represents the
-// popularity percentile in which this package resides inside
-// of the nixpkgs tree.
-type Popularity = map[string]int
-
-// Layer represents the data returned for each layer that Nix should
-// build for the container image.
-type Layer struct {
- Contents []string `json:"contents"`
- MergeRating uint64
-}
-
-// Hash the contents of a layer to create a deterministic identifier that can be
-// used for caching.
-func (l *Layer) Hash() string {
- sum := sha1.Sum([]byte(strings.Join(l.Contents, ":")))
- return fmt.Sprintf("%x", sum)
-}
-
-func (a Layer) merge(b Layer) Layer {
- a.Contents = append(a.Contents, b.Contents...)
- a.MergeRating += b.MergeRating
- return a
-}
-
-// closure represents a store path closure as referenced by the graph nodes.
-type closure struct {
- GraphID int64
- Path string
- Size uint64
- Refs []string
- Popularity int
-}
-
-func (c *closure) ID() int64 {
- return c.GraphID
-}
-
-var nixRegexp = regexp.MustCompile(`^/nix/store/[a-z0-9]+-`)
-
-// PackageFromPath returns the name of a Nix package based on its
-// output store path.
-func PackageFromPath(path string) string {
- return nixRegexp.ReplaceAllString(path, "")
-}
-
-func (c *closure) DOTID() string {
- return PackageFromPath(c.Path)
-}
-
-// bigOrPopular checks whether this closure should be considered for
-// separation into its own layer, even if it would otherwise only
-// appear in a subtree of the dominator tree.
-func (c *closure) bigOrPopular() bool {
- const sizeThreshold = 100 * 1000000 // 100MB
-
- if c.Size > sizeThreshold {
- return true
- }
-
- // Threshold value is picked arbitrarily right now. The reason
- // for this is that some packages (such as `cacert`) have very
- // few direct dependencies, but are required by pretty much
- // everything.
- if c.Popularity >= 100 {
- return true
- }
-
- return false
-}
-
-func insertEdges(graph *simple.DirectedGraph, cmap *map[string]*closure, node *closure) {
- // Big or popular nodes get a separate edge from the top to
- // flag them for their own layer.
- if node.bigOrPopular() && !graph.HasEdgeFromTo(0, node.ID()) {
- edge := graph.NewEdge(graph.Node(0), node)
- graph.SetEdge(edge)
- }
-
- for _, c := range node.Refs {
- // Nix adds a self reference to each node, which
- // should not be inserted.
- if c != node.Path {
- edge := graph.NewEdge(node, (*cmap)[c])
- graph.SetEdge(edge)
- }
- }
-}
-
-// Create a graph structure from the references supplied by Nix.
-func buildGraph(refs *RuntimeGraph, pop *Popularity) *simple.DirectedGraph {
- cmap := make(map[string]*closure)
- graph := simple.NewDirectedGraph()
-
- // Insert all closures into the graph, as well as a fake root
- // closure which serves as the top of the tree.
- //
- // A map from store paths to IDs is kept to actually insert
- // edges below.
- root := &closure{
- GraphID: 0,
- Path: "image_root",
- }
- graph.AddNode(root)
-
- for idx, c := range refs.Graph {
- node := &closure{
- GraphID: int64(idx + 1), // inc because of root node
- Path: c.Path,
- Size: c.Size,
- Refs: c.Refs,
- }
-
- // The packages `nss-cacert` and `iana-etc` are added
- // by Nixery to *every single image* and should have a
- // very high popularity.
- //
- // Other popularity values are populated from the data
- // set assembled by Nixery's popcount.
- id := node.DOTID()
- if strings.HasPrefix(id, "nss-cacert") || strings.HasPrefix(id, "iana-etc") {
- // glibc has ~300k references, these packages need *more*
- node.Popularity = 500000
- } else if p, ok := (*pop)[id]; ok {
- node.Popularity = p
- } else {
- node.Popularity = 1
- }
-
- graph.AddNode(node)
- cmap[c.Path] = node
- }
-
- // Insert the top-level closures with edges from the root
- // node, then insert all edges for each closure.
- for _, p := range refs.References.Graph {
- edge := graph.NewEdge(root, cmap[p])
- graph.SetEdge(edge)
- }
-
- for _, c := range cmap {
- insertEdges(graph, &cmap, c)
- }
-
- return graph
-}
-
-// Extracts a subgraph starting at the specified root from the
-// dominator tree. The subgraph is converted into a flat list of
-// layers, each containing the store paths and merge rating.
-func groupLayer(dt *flow.DominatorTree, root *closure) Layer {
- size := root.Size
- contents := []string{root.Path}
- children := dt.DominatedBy(root.ID())
-
- // This iteration does not use 'range' because the list being
- // iterated is modified during the iteration (yes, I'm sorry).
- for i := 0; i < len(children); i++ {
- child := children[i].(*closure)
- size += child.Size
- contents = append(contents, child.Path)
- children = append(children, dt.DominatedBy(child.ID())...)
- }
-
- // Contents are sorted to ensure that hashing is consistent
- sort.Strings(contents)
-
- return Layer{
- Contents: contents,
- MergeRating: uint64(root.Popularity) * size,
- }
-}
-
-// Calculate the dominator tree of the entire package set and group
-// each top-level subtree into a layer.
-//
-// Layers are merged together until they fit into the layer budget,
-// based on their merge rating.
-func dominate(budget int, graph *simple.DirectedGraph) []Layer {
- dt := flow.Dominators(graph.Node(0), graph)
-
- var layers []Layer
- for _, n := range dt.DominatedBy(dt.Root().ID()) {
- layers = append(layers, groupLayer(&dt, n.(*closure)))
- }
-
- sort.Slice(layers, func(i, j int) bool {
- return layers[i].MergeRating < layers[j].MergeRating
- })
-
- if len(layers) > budget {
- log.WithFields(log.Fields{
- "layers": len(layers),
- "budget": budget,
- }).Info("ideal image exceeds layer budget")
- }
-
- for len(layers) > budget {
- merged := layers[0].merge(layers[1])
- layers[1] = merged
- layers = layers[1:]
- }
-
- return layers
-}
-
-// Group applies the algorithm described above to its input and returns a
-// list of layers, each consisting of a list of Nix store paths that it should
-// contain.
-func Group(refs *RuntimeGraph, pop *Popularity, budget int) []Layer {
- graph := buildGraph(refs, pop)
- return dominate(budget, graph)
-}
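
For illustration only (not part of this patch): a minimal, self-contained sketch that rebuilds the example graph from the package comment above and computes its dominator tree with the same gonum `flow.Dominators` call used in `dominate`. The node names and the extra root edges for the "big" node G and the "popular" node E are taken from the comment; everything else is assumed example scaffolding.

```go
package main

import (
	"fmt"

	"gonum.org/v1/gonum/graph/flow"
	"gonum.org/v1/gonum/graph/simple"
)

// node is a minimal graph.Node implementation carrying a name.
type node struct {
	id   int64
	name string
}

func (n node) ID() int64 { return n.id }

func main() {
	g := simple.NewDirectedGraph()

	names := []string{"img", "A", "B", "C", "D", "E", "F", "G"}
	nodes := make(map[string]node)
	for i, name := range names {
		n := node{id: int64(i), name: name}
		nodes[name] = n
		g.AddNode(n)
	}

	edges := [][2]string{
		{"img", "A"}, {"img", "B"}, {"img", "C"}, {"img", "D"},
		{"A", "E"}, {"B", "E"}, {"C", "E"}, {"C", "D"},
		{"D", "F"}, {"D", "G"},
		// Extra root edges inserted because G is "big" and E is "popular":
		{"img", "G"}, {"img", "E"},
	}
	for _, e := range edges {
		g.SetEdge(g.NewEdge(nodes[e[0]], nodes[e[1]]))
	}

	dt := flow.Dominators(nodes["img"], g)
	for _, n := range dt.DominatedBy(nodes["img"].ID()) {
		fmt.Println("top-level layer root:", n.(node).name)
	}
}
```

Running this prints A, B, C, D, E and G as top-level layer roots, matching the dominator tree drawn in the comment; F stays grouped under D.
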
diff --git a/tools/nixery/server/logs.go b/tools/nixery/server/logs.go
deleted file mode 100644
index 3179402e2e1f..000000000000
--- a/tools/nixery/server/logs.go
+++ /dev/null
@@ -1,119 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-package main
-
-// This file configures different log formatters via logrus. The
-// standard formatter uses a structured JSON format that is compatible
-// with Stackdriver Error Reporting.
-//
-// https://cloud.google.com/error-reporting/docs/formatting-error-messages
-
-import (
- "bytes"
- "encoding/json"
- log "github.com/sirupsen/logrus"
-)
-
-type stackdriverFormatter struct{}
-
-type serviceContext struct {
- Service string `json:"service"`
- Version string `json:"version"`
-}
-
-type reportLocation struct {
- FilePath string `json:"filePath"`
- LineNumber int `json:"lineNumber"`
- FunctionName string `json:"functionName"`
-}
-
-var nixeryContext = serviceContext{
- Service: "nixery",
-}
-
-// isError determines whether an entry should be logged as an error
-// (i.e. with attached `context`).
-//
-// This requires the caller information to be present on the log
-// entry, as stacktraces are not available currently.
-func isError(e *log.Entry) bool {
- l := e.Level
- return (l == log.ErrorLevel || l == log.FatalLevel || l == log.PanicLevel) &&
- e.HasCaller()
-}
-
-// logSeverity formats the entry's severity into a format compatible
-// with Stackdriver Logging.
-//
-// The two formats that are being mapped do not have an equivalent set
-// of severities/levels, so the mapping is somewhat arbitrary for a
-// handful of them.
-//
-// https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#LogSeverity
-func logSeverity(l log.Level) string {
- switch l {
- case log.TraceLevel:
- return "DEBUG"
- case log.DebugLevel:
- return "DEBUG"
- case log.InfoLevel:
- return "INFO"
- case log.WarnLevel:
- return "WARNING"
- case log.ErrorLevel:
- return "ERROR"
- case log.FatalLevel:
- return "CRITICAL"
- case log.PanicLevel:
- return "EMERGENCY"
- default:
- return "DEFAULT"
- }
-}
-
-func (f stackdriverFormatter) Format(e *log.Entry) ([]byte, error) {
- msg := e.Data
- msg["serviceContext"] = &nixeryContext
- msg["message"] = &e.Message
- msg["eventTime"] = &e.Time
- msg["severity"] = logSeverity(e.Level)
-
- if e, ok := msg[log.ErrorKey]; ok {
- if err, isError := e.(error); isError {
- msg[log.ErrorKey] = err.Error()
- } else {
- delete(msg, log.ErrorKey)
- }
- }
-
- if isError(e) {
- loc := reportLocation{
- FilePath: e.Caller.File,
- LineNumber: e.Caller.Line,
- FunctionName: e.Caller.Function,
- }
- msg["context"] = &loc
- }
-
- b := new(bytes.Buffer)
- err := json.NewEncoder(b).Encode(&msg)
-
- return b.Bytes(), err
-}
-
-func init() {
- nixeryContext.Version = version
- log.SetReportCaller(true)
- log.SetFormatter(stackdriverFormatter{})
-}
diff --git a/tools/nixery/server/main.go b/tools/nixery/server/main.go
deleted file mode 100644
index 6ae0730906dc..000000000000
--- a/tools/nixery/server/main.go
+++ /dev/null
@@ -1,248 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-
-// The nixery server implements a container registry that transparently builds
-// container images based on Nix derivations.
-//
-// The Nix derivation used for image creation is responsible for creating
-// objects that are compatible with the registry API. The targeted registry
-// protocol is currently Docker's.
-//
-// When an image is requested, the required contents are parsed out of the
-// request and a Nix-build is initiated that eventually responds with the
-// manifest as well as information linking each layer digest to a local
-// filesystem path.
-package main
-
-import (
- "encoding/json"
- "fmt"
- "io/ioutil"
- "net/http"
- "regexp"
-
- "github.com/google/nixery/server/builder"
- "github.com/google/nixery/server/config"
- "github.com/google/nixery/server/layers"
- "github.com/google/nixery/server/storage"
- log "github.com/sirupsen/logrus"
-)
-
-// manifestMediaType is the Content-Type used for the manifest itself. This
-// corresponds to the "Image Manifest V2, Schema 2" described on this page:
-//
-// https://docs.docker.com/registry/spec/manifest-v2-2/
-const manifestMediaType string = "application/vnd.docker.distribution.manifest.v2+json"
-
-// This variable will be initialised during the build process and set
-// to the hash of the entire Nixery source tree.
-var version string = "devel"
-
-// Regexes matching the V2 Registry API routes. This only includes the
-// routes required for serving images, since pushing and other such
-// functionality is not available.
-var (
- manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
- layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
-)
-
-// Downloads the popularity information for the package set from the
-// URL specified in Nixery's configuration.
-func downloadPopularity(url string) (layers.Popularity, error) {
- resp, err := http.Get(url)
- if err != nil {
- return nil, err
- }
-
- if resp.StatusCode != 200 {
- return nil, fmt.Errorf("popularity download from '%s' returned status: %s\n", url, resp.Status)
- }
-
- j, err := ioutil.ReadAll(resp.Body)
- if err != nil {
- return nil, err
- }
-
- var pop layers.Popularity
- err = json.Unmarshal(j, &pop)
- if err != nil {
- return nil, err
- }
-
- return pop, nil
-}
-
-// Error format corresponding to the registry protocol V2 specification. This
-// allows feeding back errors to clients in a way that can be presented to
-// users.
-type registryError struct {
- Code string `json:"code"`
- Message string `json:"message"`
-}
-
-type registryErrors struct {
- Errors []registryError `json:"errors"`
-}
-
-func writeError(w http.ResponseWriter, status int, code, message string) {
- err := registryErrors{
- Errors: []registryError{
- {code, message},
- },
- }
- json, _ := json.Marshal(err)
-
-	w.Header().Add("Content-Type", "application/json")
-	w.WriteHeader(status)
- w.Write(json)
-}
-
-type registryHandler struct {
- state *builder.State
-}
-
-func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- // Acknowledge that we speak V2 with an empty response
- if r.RequestURI == "/v2/" {
- return
- }
-
- // Serve the manifest (straight from Nix)
- manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
- if len(manifestMatches) == 3 {
- imageName := manifestMatches[1]
- imageTag := manifestMatches[2]
-
- log.WithFields(log.Fields{
- "image": imageName,
- "tag": imageTag,
- }).Info("requesting image manifest")
-
- image := builder.ImageFromName(imageName, imageTag)
- buildResult, err := builder.BuildImage(r.Context(), h.state, &image)
-
- if err != nil {
- writeError(w, 500, "UNKNOWN", "image build failure")
-
- log.WithError(err).WithFields(log.Fields{
- "image": imageName,
- "tag": imageTag,
- }).Error("failed to build image manifest")
-
- return
- }
-
- // Some error types have special handling, which is applied
- // here.
- if buildResult.Error == "not_found" {
- s := fmt.Sprintf("Could not find Nix packages: %v", buildResult.Pkgs)
- writeError(w, 404, "MANIFEST_UNKNOWN", s)
-
- log.WithFields(log.Fields{
- "image": imageName,
- "tag": imageTag,
- "packages": buildResult.Pkgs,
- }).Warn("could not find Nix packages")
-
- return
- }
-
- // This marshaling error is ignored because we know that this
- // field represents valid JSON data.
- manifest, _ := json.Marshal(buildResult.Manifest)
- w.Header().Add("Content-Type", manifestMediaType)
- w.Write(manifest)
- return
- }
-
- // Serve an image layer. For this we need to first ask Nix for
- // the manifest, then proceed to extract the correct layer from
- // it.
- layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
- if len(layerMatches) == 3 {
- digest := layerMatches[2]
- storage := h.state.Storage
- err := storage.ServeLayer(digest, r, w)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "layer": digest,
- "backend": storage.Name(),
- }).Error("failed to serve layer from storage backend")
- }
-
- return
- }
-
- log.WithField("uri", r.RequestURI).Info("unsupported registry route")
-
- w.WriteHeader(404)
-}
-
-func main() {
- cfg, err := config.FromEnv()
- if err != nil {
- log.WithError(err).Fatal("failed to load configuration")
- }
-
- var s storage.Backend
-
- switch cfg.Backend {
- case config.GCS:
- s, err = storage.NewGCSBackend()
- case config.FileSystem:
- s, err = storage.NewFSBackend()
- }
- if err != nil {
- log.WithError(err).Fatal("failed to initialise storage backend")
- }
-
- log.WithField("backend", s.Name()).Info("initialised storage backend")
-
- cache, err := builder.NewCache()
- if err != nil {
- log.WithError(err).Fatal("failed to instantiate build cache")
- }
-
- var pop layers.Popularity
- if cfg.PopUrl != "" {
- pop, err = downloadPopularity(cfg.PopUrl)
- if err != nil {
- log.WithError(err).WithField("popURL", cfg.PopUrl).
- Fatal("failed to fetch popularity information")
- }
- }
-
- state := builder.State{
- Cache: &cache,
- Cfg: cfg,
- Pop: pop,
- Storage: s,
- }
-
- log.WithFields(log.Fields{
- "version": version,
- "port": cfg.Port,
- }).Info("starting Nixery")
-
- // All /v2/ requests belong to the registry handler.
-	http.Handle("/v2/", &registryHandler{
- state: &state,
- })
-
- // All other roots are served by the static file server.
- webDir := http.Dir(cfg.WebDir)
- http.Handle("/", http.FileServer(webDir))
-
- log.Fatal(http.ListenAndServe(":"+cfg.Port, nil))
-}
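
For illustration only (not part of the patch): how the manifest route regex above splits a request URI into image name and tag. The multi-component image name is what lets Nixery treat each path segment as a package; the sample URI is just an example.

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as manifestRegex above.
var manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)

func main() {
	// e.g. `docker pull nixery.dev/shell/curl` requests this route:
	uri := "/v2/shell/curl/manifests/latest"

	m := manifestRegex.FindStringSubmatch(uri)
	fmt.Printf("image: %q tag: %q\n", m[1], m[2])
	// image: "shell/curl" tag: "latest"
}
```
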
diff --git a/tools/nixery/server/manifest/manifest.go b/tools/nixery/server/manifest/manifest.go
deleted file mode 100644
index 0d36826fb7e5..000000000000
--- a/tools/nixery/server/manifest/manifest.go
+++ /dev/null
@@ -1,141 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-
-// Package manifest implements logic for creating the image metadata
-// (such as the image manifest and configuration).
-package manifest
-
-import (
- "crypto/sha256"
- "encoding/json"
- "fmt"
- "sort"
-)
-
-const (
- // manifest constants
- schemaVersion = 2
-
- // media types
- manifestType = "application/vnd.docker.distribution.manifest.v2+json"
- layerType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
- configType = "application/vnd.docker.container.image.v1+json"
-
- // image config constants
- os = "linux"
- fsType = "layers"
-)
-
-type Entry struct {
- MediaType string `json:"mediaType,omitempty"`
- Size int64 `json:"size"`
- Digest string `json:"digest"`
-
- // These fields are internal to Nixery and not part of the
- // serialised entry.
- MergeRating uint64 `json:"-"`
- TarHash string `json:",omitempty"`
-}
-
-type manifest struct {
- SchemaVersion int `json:"schemaVersion"`
- MediaType string `json:"mediaType"`
- Config Entry `json:"config"`
- Layers []Entry `json:"layers"`
-}
-
-type imageConfig struct {
- Architecture string `json:"architecture"`
- OS string `json:"os"`
-
- RootFS struct {
- FSType string `json:"type"`
- DiffIDs []string `json:"diff_ids"`
- } `json:"rootfs"`
-
- // sic! empty struct (rather than `null`) is required by the
- // image metadata deserialiser in Kubernetes
- Config struct{} `json:"config"`
-}
-
-// ConfigLayer represents the configuration layer to be included in
-// the manifest, containing its JSON-serialised content and SHA256
-// hash.
-type ConfigLayer struct {
- Config []byte
- SHA256 string
-}
-
-// configLayer creates an image configuration with the values set to
-// the constant defaults.
-//
-// Outside of this module the image configuration is treated as an
-// opaque blob and it is thus returned as an already serialised byte
-// array and its SHA256-hash.
-func configLayer(arch string, hashes []string) ConfigLayer {
- c := imageConfig{}
- c.Architecture = arch
- c.OS = os
- c.RootFS.FSType = fsType
- c.RootFS.DiffIDs = hashes
-
- j, _ := json.Marshal(c)
-
- return ConfigLayer{
- Config: j,
- SHA256: fmt.Sprintf("%x", sha256.Sum256(j)),
- }
-}
-
-// Manifest creates an image manifest from the specified layer entries
-// and returns its JSON-serialised form as well as the configuration
-// layer.
-//
-// Callers do not need to set the media type for the layer entries.
-func Manifest(arch string, layers []Entry) (json.RawMessage, ConfigLayer) {
- // Sort layers by their merge rating, from highest to lowest.
- // This makes it likely for a contiguous chain of shared image
-	// layers to appear at the beginning of the layer list.
- //
- // Due to moby/moby#38446 Docker considers the order of layers
- // when deciding which layers to download again.
- sort.Slice(layers, func(i, j int) bool {
- return layers[i].MergeRating > layers[j].MergeRating
- })
-
- hashes := make([]string, len(layers))
- for i, l := range layers {
- hashes[i] = l.TarHash
- l.MediaType = layerType
- l.TarHash = ""
- layers[i] = l
- }
-
- c := configLayer(arch, hashes)
-
- m := manifest{
- SchemaVersion: schemaVersion,
- MediaType: manifestType,
- Config: Entry{
- MediaType: configType,
- Size: int64(len(c.Config)),
- Digest: "sha256:" + c.SHA256,
- },
- Layers: layers,
- }
-
- j, _ := json.Marshal(m)
-
- return json.RawMessage(j), c
-}
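
For illustration only (not part of the patch): a sketch of how `configLayer` above derives the digest that ends up in the manifest's `config` entry. The image configuration is serialised to JSON and its SHA256 hash is prefixed with `sha256:`. The struct mirrors the unexported `imageConfig`, and the diff ID below is a made-up example value.

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// Mirrors the (unexported) imageConfig structure above.
type imageConfig struct {
	Architecture string `json:"architecture"`
	OS           string `json:"os"`
	RootFS       struct {
		FSType  string   `json:"type"`
		DiffIDs []string `json:"diff_ids"`
	} `json:"rootfs"`
	Config struct{} `json:"config"`
}

func main() {
	c := imageConfig{Architecture: "amd64", OS: "linux"}
	c.RootFS.FSType = "layers"
	c.RootFS.DiffIDs = []string{
		"sha256:0000000000000000000000000000000000000000000000000000000000000000",
	}

	j, _ := json.Marshal(c)

	// This value becomes the digest of the manifest's config entry.
	fmt.Printf("sha256:%x\n", sha256.Sum256(j))
}
```
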
diff --git a/tools/nixery/server/storage/filesystem.go b/tools/nixery/server/storage/filesystem.go
deleted file mode 100644
index cdbc31c5e046..000000000000
--- a/tools/nixery/server/storage/filesystem.go
+++ /dev/null
@@ -1,96 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-
-// Filesystem storage backend for Nixery.
-package storage
-
-import (
- "context"
- "fmt"
- "io"
- "net/http"
- "os"
- "path"
-
- log "github.com/sirupsen/logrus"
-)
-
-type FSBackend struct {
- path string
-}
-
-func NewFSBackend() (*FSBackend, error) {
- p := os.Getenv("STORAGE_PATH")
- if p == "" {
- return nil, fmt.Errorf("STORAGE_PATH must be set for filesystem storage")
- }
-
- p = path.Clean(p)
- err := os.MkdirAll(p, 0755)
- if err != nil {
- return nil, fmt.Errorf("failed to create storage dir: %s", err)
- }
-
- return &FSBackend{p}, nil
-}
-
-func (b *FSBackend) Name() string {
- return fmt.Sprintf("Filesystem (%s)", b.path)
-}
-
-func (b *FSBackend) Persist(ctx context.Context, key string, f Persister) (string, int64, error) {
- full := path.Join(b.path, key)
- dir := path.Dir(full)
- err := os.MkdirAll(dir, 0755)
- if err != nil {
- log.WithError(err).WithField("path", dir).Error("failed to create storage directory")
- return "", 0, err
- }
-
- file, err := os.OpenFile(full, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
- if err != nil {
- log.WithError(err).WithField("file", full).Error("failed to write file")
- return "", 0, err
- }
- defer file.Close()
-
- return f(file)
-}
-
-func (b *FSBackend) Fetch(ctx context.Context, key string) (io.ReadCloser, error) {
- full := path.Join(b.path, key)
- return os.Open(full)
-}
-
-func (b *FSBackend) Move(ctx context.Context, old, new string) error {
- newpath := path.Join(b.path, new)
- err := os.MkdirAll(path.Dir(newpath), 0755)
- if err != nil {
- return err
- }
-
- return os.Rename(path.Join(b.path, old), newpath)
-}
-
-func (b *FSBackend) ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error {
- p := path.Join(b.path, "layers", digest)
-
- log.WithFields(log.Fields{
- "layer": digest,
- "path": p,
- }).Info("serving layer from filesystem")
-
- http.ServeFile(w, r, p)
- return nil
-}
diff --git a/tools/nixery/server/storage/gcs.go b/tools/nixery/server/storage/gcs.go
deleted file mode 100644
index c247cca62140..000000000000
--- a/tools/nixery/server/storage/gcs.go
+++ /dev/null
@@ -1,219 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-
-// Google Cloud Storage backend for Nixery.
-package storage
-
-import (
- "context"
- "fmt"
- "io"
- "io/ioutil"
- "net/http"
- "net/url"
- "os"
- "time"
-
- "cloud.google.com/go/storage"
- log "github.com/sirupsen/logrus"
- "golang.org/x/oauth2/google"
-)
-
-// HTTP client to use for direct calls to APIs that are not part of the SDK
-var client = &http.Client{}
-
-// API scope needed for renaming objects in GCS
-const gcsScope = "https://www.googleapis.com/auth/devstorage.read_write"
-
-type GCSBackend struct {
- bucket string
- handle *storage.BucketHandle
- signing *storage.SignedURLOptions
-}
-
-// Constructs a new GCS bucket backend based on the configured
-// environment variables.
-func NewGCSBackend() (*GCSBackend, error) {
- bucket := os.Getenv("GCS_BUCKET")
- if bucket == "" {
- return nil, fmt.Errorf("GCS_BUCKET must be configured for GCS usage")
- }
-
- ctx := context.Background()
- client, err := storage.NewClient(ctx)
- if err != nil {
- log.WithError(err).Fatal("failed to set up Cloud Storage client")
- }
-
- handle := client.Bucket(bucket)
-
- if _, err := handle.Attrs(ctx); err != nil {
- log.WithError(err).WithField("bucket", bucket).Error("could not access configured bucket")
- return nil, err
- }
-
- signing, err := signingOptsFromEnv()
- if err != nil {
- log.WithError(err).Error("failed to configure GCS bucket signing")
- return nil, err
- }
-
- return &GCSBackend{
- bucket: bucket,
- handle: handle,
- signing: signing,
- }, nil
-}
-
-func (b *GCSBackend) Name() string {
- return "Google Cloud Storage (" + b.bucket + ")"
-}
-
-func (b *GCSBackend) Persist(ctx context.Context, path string, f Persister) (string, int64, error) {
- obj := b.handle.Object(path)
- w := obj.NewWriter(ctx)
-
- hash, size, err := f(w)
- if err != nil {
- log.WithError(err).WithField("path", path).Error("failed to upload to GCS")
- return hash, size, err
- }
-
- return hash, size, w.Close()
-}
-
-func (b *GCSBackend) Fetch(ctx context.Context, path string) (io.ReadCloser, error) {
- obj := b.handle.Object(path)
-
- // Probe whether the file exists before trying to fetch it
- _, err := obj.Attrs(ctx)
- if err != nil {
- return nil, err
- }
-
- return obj.NewReader(ctx)
-}
-
-// Move renames an object in the configured Cloud Storage bucket.
-//
-// The Go API for Cloud Storage does not support renaming objects, but
-// the HTTP API does. The code below makes the relevant call manually.
-func (b *GCSBackend) Move(ctx context.Context, old, new string) error {
- creds, err := google.FindDefaultCredentials(ctx, gcsScope)
- if err != nil {
- return err
- }
-
- token, err := creds.TokenSource.Token()
- if err != nil {
- return err
- }
-
- // as per https://cloud.google.com/storage/docs/renaming-copying-moving-objects#rename
- url := fmt.Sprintf(
- "https://www.googleapis.com/storage/v1/b/%s/o/%s/rewriteTo/b/%s/o/%s",
- url.PathEscape(b.bucket), url.PathEscape(old),
- url.PathEscape(b.bucket), url.PathEscape(new),
- )
-
- req, err := http.NewRequest("POST", url, nil)
- req.Header.Add("Authorization", "Bearer "+token.AccessToken)
- _, err = client.Do(req)
- if err != nil {
- return err
- }
-
- // It seems that 'rewriteTo' copies objects instead of
- // renaming/moving them, hence a deletion call afterwards is
- // required.
- if err = b.handle.Object(old).Delete(ctx); err != nil {
- log.WithError(err).WithFields(log.Fields{
- "new": new,
- "old": old,
- }).Warn("failed to delete renamed object")
-
- // this error should not break renaming and is not returned
- }
-
- return nil
-}
-
-func (b *GCSBackend) ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error {
- url, err := b.constructLayerUrl(digest)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "layer": digest,
- "bucket": b.bucket,
- }).Error("failed to sign GCS URL")
-
- return err
- }
-
- log.WithField("layer", digest).Info("redirecting layer request to GCS bucket")
-
- w.Header().Set("Location", url)
- w.WriteHeader(303)
- return nil
-}
-
-// Configure GCS URL signing in the presence of a service account key
-// (toggled if the user has set GOOGLE_APPLICATION_CREDENTIALS).
-func signingOptsFromEnv() (*storage.SignedURLOptions, error) {
- path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
- if path == "" {
- // No credentials configured -> no URL signing
- return nil, nil
- }
-
- key, err := ioutil.ReadFile(path)
- if err != nil {
- return nil, fmt.Errorf("failed to read service account key: %s", err)
- }
-
- conf, err := google.JWTConfigFromJSON(key)
- if err != nil {
- return nil, fmt.Errorf("failed to parse service account key: %s", err)
- }
-
- log.WithField("account", conf.Email).Info("GCS URL signing enabled")
-
- return &storage.SignedURLOptions{
- Scheme: storage.SigningSchemeV4,
- GoogleAccessID: conf.Email,
- PrivateKey: conf.PrivateKey,
- Method: "GET",
- }, nil
-}
-
-// constructLayerUrl returns the public URL of the layer object in the
-// Cloud Storage bucket, signing it when a service account key is
-// configured.
-//
-// Signing the URL allows unauthenticated clients to retrieve objects from the
-// bucket.
-//
-// The Docker client is known to follow redirects, but this might not be true
-// for all other registry clients.
-func (b *GCSBackend) constructLayerUrl(digest string) (string, error) {
- log.WithField("layer", digest).Info("redirecting layer request to bucket")
- object := "layers/" + digest
-
- if b.signing != nil {
- opts := *b.signing
- opts.Expires = time.Now().Add(5 * time.Minute)
- return storage.SignedURL(b.bucket, object, &opts)
- } else {
- return ("https://storage.googleapis.com/" + b.bucket + "/" + object), nil
- }
-}
diff --git a/tools/nixery/server/storage/storage.go b/tools/nixery/server/storage/storage.go
deleted file mode 100644
index c97b5e4facc6..000000000000
--- a/tools/nixery/server/storage/storage.go
+++ /dev/null
@@ -1,51 +0,0 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License"); you may not
-// use this file except in compliance with the License. You may obtain a copy of
-// the License at
-//
-// https://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-// License for the specific language governing permissions and limitations under
-// the License.
-
-// Package storage implements an interface that can be implemented by
-// storage backends, such as Google Cloud Storage or the local
-// filesystem.
-package storage
-
-import (
- "context"
- "io"
- "net/http"
-)
-
-type Persister = func(io.Writer) (string, int64, error)
-
-type Backend interface {
- // Name returns the name of the storage backend, for use in
- // log messages and such.
- Name() string
-
- // Persist provides a user-supplied function with a writer
- // that stores data in the storage backend.
- //
- // It needs to return the SHA256 hash of the data written as
- // well as the total number of bytes, as those are required
- // for the image manifest.
- Persist(context.Context, string, Persister) (string, int64, error)
-
- // Fetch retrieves data from the storage backend.
- Fetch(ctx context.Context, path string) (io.ReadCloser, error)
-
- // Move renames a path inside the storage backend. This is
- // used for staging uploads while calculating their hashes.
- Move(ctx context.Context, old, new string) error
-
- // Serve provides a handler function to serve HTTP requests
- // for layers in the storage backend.
- ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error
-}
diff --git a/tools/nixery/shell.nix b/tools/nixery/shell.nix
index 93cd1f4cec62..b37caa83ade3 100644
--- a/tools/nixery/shell.nix
+++ b/tools/nixery/shell.nix
@@ -20,5 +20,5 @@ let nixery = import ./default.nix { inherit pkgs; };
in pkgs.stdenv.mkDerivation {
name = "nixery-dev-shell";
- buildInputs = with pkgs; [ jq nixery.nixery-build-image ];
+ buildInputs = with pkgs; [ jq nixery.nixery-prepare-image ];
}
diff --git a/tools/nixery/storage/filesystem.go b/tools/nixery/storage/filesystem.go
new file mode 100644
index 000000000000..cdbc31c5e046
--- /dev/null
+++ b/tools/nixery/storage/filesystem.go
@@ -0,0 +1,96 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// Filesystem storage backend for Nixery.
+package storage
+
+import (
+ "context"
+ "fmt"
+ "io"
+ "net/http"
+ "os"
+ "path"
+
+ log "github.com/sirupsen/logrus"
+)
+
+type FSBackend struct {
+ path string
+}
+
+func NewFSBackend() (*FSBackend, error) {
+ p := os.Getenv("STORAGE_PATH")
+ if p == "" {
+ return nil, fmt.Errorf("STORAGE_PATH must be set for filesystem storage")
+ }
+
+ p = path.Clean(p)
+ err := os.MkdirAll(p, 0755)
+ if err != nil {
+ return nil, fmt.Errorf("failed to create storage dir: %s", err)
+ }
+
+ return &FSBackend{p}, nil
+}
+
+func (b *FSBackend) Name() string {
+ return fmt.Sprintf("Filesystem (%s)", b.path)
+}
+
+func (b *FSBackend) Persist(ctx context.Context, key string, f Persister) (string, int64, error) {
+ full := path.Join(b.path, key)
+ dir := path.Dir(full)
+ err := os.MkdirAll(dir, 0755)
+ if err != nil {
+ log.WithError(err).WithField("path", dir).Error("failed to create storage directory")
+ return "", 0, err
+ }
+
+ file, err := os.OpenFile(full, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0644)
+ if err != nil {
+ log.WithError(err).WithField("file", full).Error("failed to write file")
+ return "", 0, err
+ }
+ defer file.Close()
+
+ return f(file)
+}
+
+func (b *FSBackend) Fetch(ctx context.Context, key string) (io.ReadCloser, error) {
+ full := path.Join(b.path, key)
+ return os.Open(full)
+}
+
+func (b *FSBackend) Move(ctx context.Context, old, new string) error {
+ newpath := path.Join(b.path, new)
+ err := os.MkdirAll(path.Dir(newpath), 0755)
+ if err != nil {
+ return err
+ }
+
+ return os.Rename(path.Join(b.path, old), newpath)
+}
+
+func (b *FSBackend) ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error {
+ p := path.Join(b.path, "layers", digest)
+
+ log.WithFields(log.Fields{
+ "layer": digest,
+ "path": p,
+ }).Info("serving layer from filesystem")
+
+ http.ServeFile(w, r, p)
+ return nil
+}
diff --git a/tools/nixery/storage/gcs.go b/tools/nixery/storage/gcs.go
new file mode 100644
index 000000000000..c247cca62140
--- /dev/null
+++ b/tools/nixery/storage/gcs.go
@@ -0,0 +1,219 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// Google Cloud Storage backend for Nixery.
+package storage
+
+import (
+ "context"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net/http"
+ "net/url"
+ "os"
+ "time"
+
+ "cloud.google.com/go/storage"
+ log "github.com/sirupsen/logrus"
+ "golang.org/x/oauth2/google"
+)
+
+// HTTP client to use for direct calls to APIs that are not part of the SDK
+var client = &http.Client{}
+
+// API scope needed for renaming objects in GCS
+const gcsScope = "https://www.googleapis.com/auth/devstorage.read_write"
+
+type GCSBackend struct {
+ bucket string
+ handle *storage.BucketHandle
+ signing *storage.SignedURLOptions
+}
+
+// Constructs a new GCS bucket backend based on the configured
+// environment variables.
+func NewGCSBackend() (*GCSBackend, error) {
+ bucket := os.Getenv("GCS_BUCKET")
+ if bucket == "" {
+ return nil, fmt.Errorf("GCS_BUCKET must be configured for GCS usage")
+ }
+
+ ctx := context.Background()
+ client, err := storage.NewClient(ctx)
+ if err != nil {
+ log.WithError(err).Fatal("failed to set up Cloud Storage client")
+ }
+
+ handle := client.Bucket(bucket)
+
+ if _, err := handle.Attrs(ctx); err != nil {
+ log.WithError(err).WithField("bucket", bucket).Error("could not access configured bucket")
+ return nil, err
+ }
+
+ signing, err := signingOptsFromEnv()
+ if err != nil {
+ log.WithError(err).Error("failed to configure GCS bucket signing")
+ return nil, err
+ }
+
+ return &GCSBackend{
+ bucket: bucket,
+ handle: handle,
+ signing: signing,
+ }, nil
+}
+
+func (b *GCSBackend) Name() string {
+ return "Google Cloud Storage (" + b.bucket + ")"
+}
+
+func (b *GCSBackend) Persist(ctx context.Context, path string, f Persister) (string, int64, error) {
+ obj := b.handle.Object(path)
+ w := obj.NewWriter(ctx)
+
+ hash, size, err := f(w)
+ if err != nil {
+ log.WithError(err).WithField("path", path).Error("failed to upload to GCS")
+ return hash, size, err
+ }
+
+ return hash, size, w.Close()
+}
+
+func (b *GCSBackend) Fetch(ctx context.Context, path string) (io.ReadCloser, error) {
+ obj := b.handle.Object(path)
+
+ // Probe whether the file exists before trying to fetch it
+ _, err := obj.Attrs(ctx)
+ if err != nil {
+ return nil, err
+ }
+
+ return obj.NewReader(ctx)
+}
+
+// Move renames an object in the configured Cloud Storage bucket.
+//
+// The Go API for Cloud Storage does not support renaming objects, but
+// the HTTP API does. The code below makes the relevant call manually.
+func (b *GCSBackend) Move(ctx context.Context, old, new string) error {
+ creds, err := google.FindDefaultCredentials(ctx, gcsScope)
+ if err != nil {
+ return err
+ }
+
+ token, err := creds.TokenSource.Token()
+ if err != nil {
+ return err
+ }
+
+ // as per https://cloud.google.com/storage/docs/renaming-copying-moving-objects#rename
+ url := fmt.Sprintf(
+ "https://www.googleapis.com/storage/v1/b/%s/o/%s/rewriteTo/b/%s/o/%s",
+ url.PathEscape(b.bucket), url.PathEscape(old),
+ url.PathEscape(b.bucket), url.PathEscape(new),
+ )
+
+ req, err := http.NewRequest("POST", url, nil)
+ req.Header.Add("Authorization", "Bearer "+token.AccessToken)
+ _, err = client.Do(req)
+ if err != nil {
+ return err
+ }
+
+ // It seems that 'rewriteTo' copies objects instead of
+ // renaming/moving them, hence a deletion call afterwards is
+ // required.
+ if err = b.handle.Object(old).Delete(ctx); err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "new": new,
+ "old": old,
+ }).Warn("failed to delete renamed object")
+
+ // this error should not break renaming and is not returned
+ }
+
+ return nil
+}
+
+func (b *GCSBackend) ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error {
+ url, err := b.constructLayerUrl(digest)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "layer": digest,
+ "bucket": b.bucket,
+ }).Error("failed to sign GCS URL")
+
+ return err
+ }
+
+ log.WithField("layer", digest).Info("redirecting layer request to GCS bucket")
+
+ w.Header().Set("Location", url)
+ w.WriteHeader(303)
+ return nil
+}
+
+// Configure GCS URL signing in the presence of a service account key
+// (toggled if the user has set GOOGLE_APPLICATION_CREDENTIALS).
+func signingOptsFromEnv() (*storage.SignedURLOptions, error) {
+ path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
+ if path == "" {
+ // No credentials configured -> no URL signing
+ return nil, nil
+ }
+
+ key, err := ioutil.ReadFile(path)
+ if err != nil {
+ return nil, fmt.Errorf("failed to read service account key: %s", err)
+ }
+
+ conf, err := google.JWTConfigFromJSON(key)
+ if err != nil {
+ return nil, fmt.Errorf("failed to parse service account key: %s", err)
+ }
+
+ log.WithField("account", conf.Email).Info("GCS URL signing enabled")
+
+ return &storage.SignedURLOptions{
+ Scheme: storage.SigningSchemeV4,
+ GoogleAccessID: conf.Email,
+ PrivateKey: conf.PrivateKey,
+ Method: "GET",
+ }, nil
+}
+
+// constructLayerUrl returns the public URL of the layer object in the
+// Cloud Storage bucket, signing it when a service account key is
+// configured.
+//
+// Signing the URL allows unauthenticated clients to retrieve objects from the
+// bucket.
+//
+// The Docker client is known to follow redirects, but this might not be true
+// for all other registry clients.
+func (b *GCSBackend) constructLayerUrl(digest string) (string, error) {
+ log.WithField("layer", digest).Info("redirecting layer request to bucket")
+ object := "layers/" + digest
+
+ if b.signing != nil {
+ opts := *b.signing
+ opts.Expires = time.Now().Add(5 * time.Minute)
+ return storage.SignedURL(b.bucket, object, &opts)
+ } else {
+ return ("https://storage.googleapis.com/" + b.bucket + "/" + object), nil
+ }
+}
diff --git a/tools/nixery/storage/storage.go b/tools/nixery/storage/storage.go
new file mode 100644
index 000000000000..c97b5e4facc6
--- /dev/null
+++ b/tools/nixery/storage/storage.go
@@ -0,0 +1,51 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not
+// use this file except in compliance with the License. You may obtain a copy of
+// the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+// License for the specific language governing permissions and limitations under
+// the License.
+
+// Package storage implements an interface that can be implemented by
+// storage backends, such as Google Cloud Storage or the local
+// filesystem.
+package storage
+
+import (
+ "context"
+ "io"
+ "net/http"
+)
+
+type Persister = func(io.Writer) (string, int64, error)
+
+type Backend interface {
+ // Name returns the name of the storage backend, for use in
+ // log messages and such.
+ Name() string
+
+ // Persist provides a user-supplied function with a writer
+ // that stores data in the storage backend.
+ //
+ // It needs to return the SHA256 hash of the data written as
+ // well as the total number of bytes, as those are required
+ // for the image manifest.
+ Persist(context.Context, string, Persister) (string, int64, error)
+
+ // Fetch retrieves data from the storage backend.
+ Fetch(ctx context.Context, path string) (io.ReadCloser, error)
+
+ // Move renames a path inside the storage backend. This is
+ // used for staging uploads while calculating their hashes.
+ Move(ctx context.Context, old, new string) error
+
+ // Serve provides a handler function to serve HTTP requests
+ // for layers in the storage backend.
+ ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error
+}
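
For illustration only (not part of the patch): a sketch of a `Persister` as consumed by `Backend.Persist` above. It streams data into the backend-supplied writer while computing the SHA256 hash and byte count that the image manifest requires. The helper name and the sample payload are made up for the example.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

// Persister mirrors the signature from the storage package above.
type Persister = func(io.Writer) (string, int64, error)

// copyPersister returns a Persister that copies src into the writer
// provided by the backend while hashing and counting the bytes on the fly.
func copyPersister(src io.Reader) Persister {
	return func(w io.Writer) (string, int64, error) {
		h := sha256.New()
		n, err := io.Copy(io.MultiWriter(w, h), src)
		if err != nil {
			return "", 0, err
		}
		return fmt.Sprintf("%x", h.Sum(nil)), n, nil
	}
}

func main() {
	// In real use the writer is supplied by Backend.Persist; io(util).Discard
	// stands in for it here.
	p := copyPersister(strings.NewReader("layer bytes"))
	hash, size, err := p(ioutil.Discard)
	fmt.Println(hash, size, err)
}
```
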
--
cgit 1.4.1
From 1031d890ec2b9a0a4d965eddc304bc5bf580442d Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sun, 19 Jan 2020 02:21:08 +0000
Subject: fix(builder): Fix minor logging switcharoo
---
tools/nixery/builder/builder.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/builder/builder.go b/tools/nixery/builder/builder.go
index ceb112df90cf..028bcc57690a 100644
--- a/tools/nixery/builder/builder.go
+++ b/tools/nixery/builder/builder.go
@@ -201,7 +201,7 @@ func callNix(program, image string, args []string) ([]byte, error) {
if err != nil {
return nil, err
}
- go logNix(program, image, errpipe)
+ go logNix(image, program, errpipe)
if err = cmd.Start(); err != nil {
log.WithError(err).WithFields(log.Fields{
--
cgit 1.4.1
From 215df37187501523bb3bf348b7bbd76a01692f19 Mon Sep 17 00:00:00 2001
From: Florian Klink
Date: Tue, 25 Feb 2020 10:40:25 -0800
Subject: fix(popcount): Fix nix-build -A nixery-popcount
Previously, this was failing as follows:
```
these derivations will be built:
/nix/store/7rbrf06phkiyz31dwpq88x920zjhnw0c-nixery-popcount.drv
building '/nix/store/7rbrf06phkiyz31dwpq88x920zjhnw0c-nixery-popcount.drv'...
building
warning: GOPATH set to GOROOT (/nix/store/4859cp1v7zqcqh43jkqsayl4wrz3g6hp-go-1.13.4/share/go) has no effect
failed to initialize build cache at /homeless-shelter/.cache/go-build: mkdir /homeless-shelter: permission denied
builder for '/nix/store/7rbrf06phkiyz31dwpq88x920zjhnw0c-nixery-popcount.drv' failed with exit code 1
error: build of '/nix/store/7rbrf06phkiyz31dwpq88x920zjhnw0c-nixery-popcount.drv' failed
```
---
tools/nixery/popcount/default.nix | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/popcount/default.nix b/tools/nixery/popcount/default.nix
index 4a3c8faf9c36..bd695380cf0b 100644
--- a/tools/nixery/popcount/default.nix
+++ b/tools/nixery/popcount/default.nix
@@ -12,15 +12,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-{ go, stdenv }:
+{ buildGoPackage }:
-stdenv.mkDerivation {
+buildGoPackage {
name = "nixery-popcount";
- buildInputs = [ go ];
- phases = [ "buildPhase" ];
- buildPhase = ''
- mkdir -p $out/bin
- go build -o $out/bin/popcount ${./popcount.go}
- '';
+ src = ./.;
+
+ goPackagePath = "github.com/google/nixery/popcount";
+ doCheck = true;
}
--
cgit 1.4.1
From bdda24a77287e09cb855eee7019148ee8bbd1cd9 Mon Sep 17 00:00:00 2001
From: Raphael Borun Das Gupta
Date: Fri, 1 May 2020 01:38:25 +0200
Subject: chore(nix): update channel 19.03 -> 20.03
Use a NixOS / NixPkgs release that's actually being supported
and regularly updated.
---
tools/nixery/docs/src/caching.md | 2 +-
tools/nixery/docs/src/nixery.md | 2 +-
tools/nixery/docs/src/run-your-own.md | 4 ++--
tools/nixery/prepare-image/prepare-image.nix | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/src/caching.md b/tools/nixery/docs/src/caching.md
index b07d9e22f046..05ea6d89b655 100644
--- a/tools/nixery/docs/src/caching.md
+++ b/tools/nixery/docs/src/caching.md
@@ -29,7 +29,7 @@ Manifest caching *only* applies in the following cases:
Manifest caching *never* applies in the following cases:
* package source specification is a local file path (i.e. `NIXERY_PKGS_PATH`)
-* package source specification is a NixOS channel (e.g. `NIXERY_CHANNEL=nixos-19.03`)
+* package source specification is a NixOS channel (e.g. `NIXERY_CHANNEL=nixos-20.03`)
* package source specification is a git branch or tag (e.g. `staging`, `master` or `latest`)
It is thus always preferable to request images from a fully-pinned package
diff --git a/tools/nixery/docs/src/nixery.md b/tools/nixery/docs/src/nixery.md
index 6cc16431c9e8..185c84630889 100644
--- a/tools/nixery/docs/src/nixery.md
+++ b/tools/nixery/docs/src/nixery.md
@@ -54,7 +54,7 @@ own instance of Nixery.
### Which revision of `nixpkgs` is used for the builds?
The instance at `nixery.dev` tracks a recent NixOS channel, currently NixOS
-19.03. The channel is updated several times a day.
+20.03. The channel is updated several times a day.
Private registries might be configured to track a different channel (such as
`nixos-unstable`) or even track a git repository with custom packages.
diff --git a/tools/nixery/docs/src/run-your-own.md b/tools/nixery/docs/src/run-your-own.md
index ffddec32db5f..9c20e3f2cde7 100644
--- a/tools/nixery/docs/src/run-your-own.md
+++ b/tools/nixery/docs/src/run-your-own.md
@@ -44,7 +44,7 @@ be performed for trivial things.
However if you are running a private Nixery, chances are high that you intend to
use it with your own packages. There are three options available:
-1. Specify an upstream Nix/NixOS channel[^1], such as `nixos-19.03` or
+1. Specify an upstream Nix/NixOS channel[^1], such as `nixos-20.03` or
`nixos-unstable`.
2. Specify your own git-repository with a custom package set[^2]. This makes it
possible to pull different tags, branches or commits by modifying the image
@@ -73,7 +73,7 @@ You must set *all* of these:
* `BUCKET`: [Google Cloud Storage][gcs] bucket to store & serve image layers
* `PORT`: HTTP port on which Nixery should listen
-You may set *one* of these, if unset Nixery defaults to `nixos-19.03`:
+You may set *one* of these, if unset Nixery defaults to `nixos-20.03`:
* `NIXERY_CHANNEL`: The name of a Nix/NixOS channel to use for building
* `NIXERY_PKGS_REPO`: URL of a git repository containing a package set (uses
diff --git a/tools/nixery/prepare-image/prepare-image.nix b/tools/nixery/prepare-image/prepare-image.nix
index 4393f2b859a6..7b73b92bfd9b 100644
--- a/tools/nixery/prepare-image/prepare-image.nix
+++ b/tools/nixery/prepare-image/prepare-image.nix
@@ -24,7 +24,7 @@
{
# Description of the package set to be used (will be loaded by load-pkgs.nix)
srcType ? "nixpkgs",
- srcArgs ? "nixos-19.03",
+ srcArgs ? "nixos-20.03",
system ? "x86_64-linux",
importArgs ? { },
# Path to load-pkgs.nix
--
cgit 1.4.1
From b4e0b55e5608ae2b394880e69912d9352fea2d9d Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 1 May 2020 12:46:38 +0100
Subject: chore(build): Change pin for default nixpkgs used to build Nixery
This moves the pin from just being in the Travis configuration to also
being set in a nixpkgs-pin.nix file, which makes it trivial to build
at the right commit when performing local builds.
---
tools/nixery/.travis.yml | 2 +-
tools/nixery/default.nix | 2 +-
tools/nixery/nixpkgs-pin.nix | 4 ++++
3 files changed, 6 insertions(+), 2 deletions(-)
create mode 100644 tools/nixery/nixpkgs-pin.nix
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index b0a2b3f997f9..72bbb90b6af2 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -5,7 +5,7 @@ arch:
services:
- docker
env:
- - NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/5271f8dddc0f2e54f55bd2fc1868c09ff72ac980.tar.gz
+ - NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/0a40a3999eb4d577418515da842a2622a64880c5.tar.gz
before_script:
- echo "Running Nixery CI build on $(uname -m)"
- mkdir test-files
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 7454c14a8567..411865a8a40b 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-{ pkgs ? import <nixpkgs> { }
+{ pkgs ? import ./nixpkgs-pin.nix
, preLaunch ? ""
, extraPackages ? []
, maxLayers ? 20 }:
diff --git a/tools/nixery/nixpkgs-pin.nix b/tools/nixery/nixpkgs-pin.nix
new file mode 100644
index 000000000000..ea1b37bfe7a0
--- /dev/null
+++ b/tools/nixery/nixpkgs-pin.nix
@@ -0,0 +1,4 @@
+import (builtins.fetchTarball {
+ url = "https://github.com/NixOS/nixpkgs-channels/archive/0a40a3999eb4d577418515da842a2622a64880c5.tar.gz";
+ sha256 = "1j8gy2d61lmrp5gzi1a2jmb2v2pbk4b9666y8pf1pjg3jiqkzf7m";
+}) {}
--
cgit 1.4.1
From 987a90510aca61e429ac659e510cbc51de9ae0bb Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 1 May 2020 12:47:31 +0100
Subject: fix(popcount): Accommodate upstream changes on nixos.org
Channel serving has moved to a new subdomain, and the redirect
semantics have changed. Instead of serving temporary redirects,
permanent redirects are now issued.
I've reported this upstream as a bug, but this workaround will fix it
in the meantime.
---
tools/nixery/popcount/popcount.go | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/popcount/popcount.go b/tools/nixery/popcount/popcount.go
index 992a88e874d1..d632090c0dc2 100644
--- a/tools/nixery/popcount/popcount.go
+++ b/tools/nixery/popcount/popcount.go
@@ -70,12 +70,16 @@ func channelMetadata(channel string) meta {
},
}
- resp, err := c.Get(fmt.Sprintf("https://nixos.org/channels/%s", channel))
+ resp, err := c.Get(fmt.Sprintf("https://channels.nixos.org/%s", channel))
failOn(err, "failed to retrieve channel metadata")
loc, err := resp.Location()
failOn(err, "no redirect location given for channel")
- if resp.StatusCode != 302 {
+
+ // TODO(tazjin): These redirects are currently served as 301s, but
+ // should (and used to) be 302s. Check if/when this is fixed and
+ // update accordingly.
+ if !(resp.StatusCode == 301 || resp.StatusCode == 302) {
log.Fatalf("Expected redirect for channel, but received '%s'\n", resp.Status)
}
@@ -85,6 +89,9 @@ func channelMetadata(channel string) meta {
defer commitResp.Body.Close()
commit, err := ioutil.ReadAll(commitResp.Body)
failOn(err, "failed to read commit from response")
+ if commitResp.StatusCode != 200 {
+ log.Fatalf("non-success status code when fetching commit: %s", string(commit), commitResp.StatusCode)
+ }
return meta{
name: channel,
--
cgit 1.4.1
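For context, a minimal sketch of the redirect handling this patch adjusts for: an HTTP client that does not follow redirects on its own, accepts either a permanent (301) or temporary (302) redirect from channels.nixos.org, and reads the release URL from the Location header. This is an illustration rather than the popcount code itself, and the channel name is hard-coded purely as an example.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Do not follow redirects automatically; we want to inspect them.
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}

	resp, err := client.Get("https://channels.nixos.org/nixos-unstable")
	if err != nil {
		log.Fatalf("failed to retrieve channel metadata: %v", err)
	}
	defer resp.Body.Close()

	// Channel serving now issues permanent redirects; accept both forms.
	if resp.StatusCode != http.StatusMovedPermanently && resp.StatusCode != http.StatusFound {
		log.Fatalf("expected redirect for channel, got %s", resp.Status)
	}

	loc, err := resp.Location()
	if err != nil {
		log.Fatalf("no redirect location given for channel: %v", err)
	}

	fmt.Println("channel release URL:", loc.String())
}
```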
From bc9742f927dcd7d68abb3aadb0d578d2c6088ff3 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 1 May 2020 13:19:55 +0100
Subject: chore(build): Update pinned Go dependencies
---
tools/nixery/go-deps.nix | 91 ++++++++++++++++++++++++++----------------------
1 file changed, 50 insertions(+), 41 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/go-deps.nix b/tools/nixery/go-deps.nix
index 847b44dce63c..7d57ef204b8e 100644
--- a/tools/nixery/go-deps.nix
+++ b/tools/nixery/go-deps.nix
@@ -4,9 +4,18 @@
goPackagePath = "cloud.google.com/go";
fetch = {
type = "git";
- url = "https://code.googlesource.com/gocloud";
- rev = "77f6a3a292a7dbf66a5329de0d06326f1906b450";
- sha256 = "1c9pkx782nbcp8jnl5lprcbzf97van789ky5qsncjgywjyymhigi";
+ url = "https://github.com/googleapis/google-cloud-go";
+ rev = "22b9552106761e34e39c0cf48b783f092c660767";
+ sha256 = "17sb3753h1m5pbjlv4r5ydbd35kfh086g2qxv2zjlqd90kcsdj7x";
+ };
+ }
+ {
+ goPackagePath = "github.com/golang/groupcache";
+ fetch = {
+ type = "git";
+ url = "https://github.com/golang/groupcache";
+ rev = "8c9f03a8e57eb486e42badaed3fb287da51807ba";
+ sha256 = "0vjjr79r32icjzlb05wn02k59av7jx0rn1jijml8r4whlg7dnkfh";
};
}
{
@@ -14,8 +23,8 @@
fetch = {
type = "git";
url = "https://github.com/golang/protobuf";
- rev = "6c65a5562fc06764971b7c5d05c76c75e84bdbf7";
- sha256 = "1k1wb4zr0qbwgpvz9q5ws9zhlal8hq7dmq62pwxxriksayl6hzym";
+ rev = "d04d7b157bb510b1e0c10132224b616ac0e26b17";
+ sha256 = "0m5z81im4nsyfgarjhppayk4hqnrwswr3nix9mj8pff8x9jvcjqw";
};
}
{
@@ -23,17 +32,17 @@
fetch = {
type = "git";
url = "https://github.com/googleapis/gax-go";
- rev = "bd5b16380fd03dc758d11cef74ba2e3bc8b0e8c2";
- sha256 = "1lxawwngv6miaqd25s3ba0didfzylbwisd2nz7r4gmbmin6jsjrx";
+ rev = "be11bb253a768098254dc71e95d1a81ced778de3";
+ sha256 = "072iv8llzr99za4pndfskgndq9rzms38r0sqy4d127ijnzmgl5nd";
};
}
{
- goPackagePath = "github.com/hashicorp/golang-lru";
+ goPackagePath = "github.com/sirupsen/logrus";
fetch = {
type = "git";
- url = "https://github.com/hashicorp/golang-lru";
- rev = "59383c442f7d7b190497e9bb8fc17a48d06cd03f";
- sha256 = "0yzwl592aa32vfy73pl7wdc21855w17zssrp85ckw2nisky8rg9c";
+ url = "https://github.com/sirupsen/logrus";
+ rev = "6699a89a232f3db797f2e280639854bbc4b89725";
+ sha256 = "1a59pw7zimvm8k423iq9l4f4qjj1ia1xc6pkmhwl2mxc46y2n442";
};
}
{
@@ -41,8 +50,8 @@
fetch = {
type = "git";
url = "https://github.com/census-instrumentation/opencensus-go";
- rev = "b4a14686f0a98096416fe1b4cb848e384fb2b22b";
- sha256 = "1aidyp301v5ngwsnnc8v1s09vvbsnch1jc4vd615f7qv77r9s7dn";
+ rev = "d7677d6af5953e0506ac4c08f349c62b917a443a";
+ sha256 = "1nphs1qjz4a99b2n4izpqw13shiy26jy0i7qgm95giyhwwdqspk0";
};
}
{
@@ -50,8 +59,8 @@
fetch = {
type = "git";
url = "https://go.googlesource.com/net";
- rev = "da137c7871d730100384dbcf36e6f8fa493aef5b";
- sha256 = "1qsiyr3irmb6ii06hivm9p2c7wqyxczms1a9v1ss5698yjr3fg47";
+ rev = "627f9648deb96c27737b83199d44bb5c1010cbcf";
+ sha256 = "0ziz7i9mhz6dy2f58dsa83flkk165w1cnazm7yksql5i9m7x099z";
};
}
{
@@ -59,8 +68,8 @@
fetch = {
type = "git";
url = "https://go.googlesource.com/oauth2";
- rev = "0f29369cfe4552d0e4bcddc57cc75f4d7e672a33";
- sha256 = "06jwpvx0x2gjn2y959drbcir5kd7vg87k0r1216abk6rrdzzrzi2";
+ rev = "bf48bf16ab8d622ce64ec6ce98d2c98f916b6303";
+ sha256 = "1sirdib60zwmh93kf9qrx51r8544k1p9rs5mk0797wibz3m4mrdg";
};
}
{
@@ -68,8 +77,8 @@
fetch = {
type = "git";
url = "https://go.googlesource.com/sys";
- rev = "51ab0e2deafac1f46c46ad59cf0921be2f180c3d";
- sha256 = "0xdhpckbql3bsqkpc2k5b1cpnq3q1qjqjjq2j3p707rfwb8nm91a";
+ rev = "226ff32320da7b90d0b5bc2365f4e359c466fb78";
+ sha256 = "137cdvmmrmx8qf72r94pn6zxp0wg9rfl0ii2fa9jk5hdkhifiqa6";
};
}
{
@@ -77,53 +86,53 @@
fetch = {
type = "git";
url = "https://go.googlesource.com/text";
- rev = "342b2e1fbaa52c93f31447ad2c6abc048c63e475";
- sha256 = "0flv9idw0jm5nm8lx25xqanbkqgfiym6619w575p7nrdh0riqwqh";
+ rev = "3a82255431918bb7c2e1c09c964a18991756910b";
+ sha256 = "1vrps79ap8dy7car64pf0j4hnb1j2hwp9wf67skv6izmi8r9425w";
};
}
{
- goPackagePath = "google.golang.org/api";
+ goPackagePath = "gonum.org/v1/gonum";
fetch = {
type = "git";
- url = "https://code.googlesource.com/google-api-go-client";
- rev = "069bea57b1be6ad0671a49ea7a1128025a22b73f";
- sha256 = "19q2b610lkf3z3y9hn6rf11dd78xr9q4340mdyri7kbijlj2r44q";
+ url = "https://github.com/gonum/gonum";
+ rev = "f69c0ac28e32bbfe5ccf3f821d85533b5f646b04";
+ sha256 = "1qpws9899qyr0m2v4v7asxgy1z0wh9f284ampc2ha5c0hp0x6r4m";
};
}
{
- goPackagePath = "google.golang.org/genproto";
+ goPackagePath = "google.golang.org/api";
fetch = {
type = "git";
- url = "https://github.com/google/go-genproto";
- rev = "c506a9f9061087022822e8da603a52fc387115a8";
- sha256 = "03hh80aqi58dqi5ykj4shk3chwkzrgq2f3k6qs5qhgvmcy79y2py";
+ url = "https://github.com/googleapis/google-api-go-client";
+ rev = "4ec5466f5645b0f7f76ecb2246e7c5f3568cf8bb";
+ sha256 = "1clrw2syb40a51zqpaw0mri8jk54cqp3scjwxq44hqr7cmqp0967";
};
}
{
- goPackagePath = "google.golang.org/grpc";
+ goPackagePath = "google.golang.org/genproto";
fetch = {
type = "git";
- url = "https://github.com/grpc/grpc-go";
- rev = "977142214c45640483838b8672a43c46f89f90cb";
- sha256 = "05wig23l2sil3bfdv19gq62sya7hsabqj9l8pzr1sm57qsvj218d";
+ url = "https://github.com/googleapis/go-genproto";
+ rev = "43cab4749ae7254af90e92cb2cd767dfc641f6dd";
+ sha256 = "0svbzzf9drd4ndxwj5rlggly4jnds9c76fkcq5f8czalbf5mlvb6";
};
}
{
- goPackagePath = "gonum.org/v1/gonum";
+ goPackagePath = "google.golang.org/grpc";
fetch = {
type = "git";
- url = "https://github.com/gonum/gonum";
- rev = "ced62fe5104b907b6c16cb7e575c17b2e62ceddd";
- sha256 = "1b7q6haabnp53igpmvr6a2414yralhbrldixx4kbxxg1apy8jdjg";
+ url = "https://github.com/grpc/grpc-go";
+ rev = "9106c3fff5236fd664a8de183f1c27682c66b823";
+ sha256 = "02m1k06p6a5fc6ahj9qgd81wdzspa9xc29gd7awygwiyk22rd3md";
};
}
{
- goPackagePath = "github.com/sirupsen/logrus";
+ goPackagePath = "google.golang.org/protobuf";
fetch = {
type = "git";
- url = "https://github.com/sirupsen/logrus";
- rev = "de736cf91b921d56253b4010270681d33fdf7cb5";
- sha256 = "1qixss8m5xy7pzbf0qz2k3shjw0asklm9sj6zyczp7mryrari0aj";
+ url = "https://go.googlesource.com/protobuf";
+ rev = "a0f95d5b14735d688bd94ca511d396115e5b86be";
+ sha256 = "0l4xsfz337vmlij7pg8s2bjwlmrk4g83p1b9n8vnfavrmnrcw6j0";
};
}
]
--
cgit 1.4.1
From c194c5662b7a64047e469c27f6a5ac45b68e8d03 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 25 Jul 2020 14:08:24 +0100
Subject: fix(build): Don't use Cachix as the binary cache during builds
Permission changes in the Travis CI Nix builders have caused this to
start failing, as the build user now has insufficient permissions to
use caches.
There may be a way to change the permissions instead, but in the
meantime we will just cause things to rebuild.
---
tools/nixery/.travis.yml | 1 -
1 file changed, 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 72bbb90b6af2..674231217f69 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -12,7 +12,6 @@ before_script:
- echo ${GOOGLE_KEY} | base64 -d > test-files/key.json
- echo ${GCS_SIGNING_PEM} | base64 -d > test-files/gcs.pem
- nix-env -f '<nixpkgs>' -iA cachix -A go
- - cachix use nixery
script:
- test -z $(gofmt -l server/ build-image/)
- nix-build --arg maxLayers 1 | cachix push nixery
--
cgit 1.4.1
From ad0541940fe61181e2653c9e63f9d7cb0911a130 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 27 Oct 2020 12:42:03 +0100
Subject: fix(build): Completely remove Cachix from build setup
Installing Cachix started failing on ARM64.
---
tools/nixery/.travis.yml | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index 674231217f69..ed715da14118 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -11,10 +11,10 @@ before_script:
- mkdir test-files
- echo ${GOOGLE_KEY} | base64 -d > test-files/key.json
- echo ${GCS_SIGNING_PEM} | base64 -d > test-files/gcs.pem
- - nix-env -f '<nixpkgs>' -iA cachix -A go
+ - nix-env -f '<nixpkgs>' -iA -A go
script:
- test -z $(gofmt -l server/ build-image/)
- - nix-build --arg maxLayers 1 | cachix push nixery
+ - nix-build --arg maxLayers 1
# This integration test makes sure that the container image built
# for Nixery itself runs fine in Docker, and that images pulled
--
cgit 1.4.1
From 4ce32adfe8aff1189e42a8a375782c9ea86a0b79 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 27 Oct 2020 12:52:05 +0100
Subject: fix(build): Work around arbitrary new maxLayers restriction
---
tools/nixery/.travis.yml | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
index ed715da14118..fcf2034d4472 100644
--- a/tools/nixery/.travis.yml
+++ b/tools/nixery/.travis.yml
@@ -14,7 +14,7 @@ before_script:
- nix-env -f '<nixpkgs>' -iA -A go
script:
- test -z $(gofmt -l server/ build-image/)
- - nix-build --arg maxLayers 1
+ - nix-build --arg maxLayers 2
# This integration test makes sure that the container image built
# for Nixery itself runs fine in Docker, and that images pulled
--
cgit 1.4.1
From 5ce745d104e82d836967e3bd9fd7af9602e76114 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 27 Oct 2020 13:30:12 +0100
Subject: refactor(main): Split HTTP handlers into separate functions
There is a new handler coming up to fix #102 and I want to avoid
falling into the classic Go trap of creating thousand-line functions.
---
tools/nixery/config/pkgsource.go | 2 +-
tools/nixery/main.go | 117 ++++++++++++++++++++-------------------
2 files changed, 62 insertions(+), 57 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/config/pkgsource.go b/tools/nixery/config/pkgsource.go
index 95236c4b0d15..380e664367e7 100644
--- a/tools/nixery/config/pkgsource.go
+++ b/tools/nixery/config/pkgsource.go
@@ -140,7 +140,7 @@ func pkgSourceFromEnv() (PkgSource, error) {
}
if git := os.Getenv("NIXERY_PKGS_REPO"); git != "" {
- log.WithField("repo", git).Info("using NIx package set from git repository")
+ log.WithField("repo", git).Info("using Nix package set from git repository")
return &GitSource{
repository: git,
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 6cad93740978..92c602b1f79f 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -53,8 +53,8 @@ var version string = "devel"
// routes required for serving images, since pushing and other such
// functionality is not available.
var (
- manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
- layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
+ manifestTagRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
+ layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
)
// Downloads the popularity information for the package set from the
@@ -112,75 +112,80 @@ type registryHandler struct {
state *builder.State
}
-func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
- // Acknowledge that we speak V2 with an empty response
- if r.RequestURI == "/v2/" {
- return
- }
+// Serve a manifest by tag, building it via Nix and populating caches
+// if necessary.
+func (h *registryHandler) serveManifestTag(w http.ResponseWriter, r *http.Request, name string, tag string) {
+ log.WithFields(log.Fields{
+ "image": name,
+ "tag": tag,
+ }).Info("requesting image manifest")
- // Serve the manifest (straight from Nix)
- manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
- if len(manifestMatches) == 3 {
- imageName := manifestMatches[1]
- imageTag := manifestMatches[2]
+ image := builder.ImageFromName(name, tag)
+ buildResult, err := builder.BuildImage(r.Context(), h.state, &image)
- log.WithFields(log.Fields{
- "image": imageName,
- "tag": imageTag,
- }).Info("requesting image manifest")
+ if err != nil {
+ writeError(w, 500, "UNKNOWN", "image build failure")
- image := builder.ImageFromName(imageName, imageTag)
- buildResult, err := builder.BuildImage(r.Context(), h.state, &image)
+ log.WithError(err).WithFields(log.Fields{
+ "image": name,
+ "tag": tag,
+ }).Error("failed to build image manifest")
- if err != nil {
- writeError(w, 500, "UNKNOWN", "image build failure")
+ return
+ }
- log.WithError(err).WithFields(log.Fields{
- "image": imageName,
- "tag": imageTag,
- }).Error("failed to build image manifest")
+ // Some error types have special handling, which is applied
+ // here.
+ if buildResult.Error == "not_found" {
+ s := fmt.Sprintf("Could not find Nix packages: %v", buildResult.Pkgs)
+ writeError(w, 404, "MANIFEST_UNKNOWN", s)
- return
- }
+ log.WithFields(log.Fields{
+ "image": name,
+ "tag": tag,
+ "packages": buildResult.Pkgs,
+ }).Warn("could not find Nix packages")
- // Some error types have special handling, which is applied
- // here.
- if buildResult.Error == "not_found" {
- s := fmt.Sprintf("Could not find Nix packages: %v", buildResult.Pkgs)
- writeError(w, 404, "MANIFEST_UNKNOWN", s)
+ return
+ }
- log.WithFields(log.Fields{
- "image": imageName,
- "tag": imageTag,
- "packages": buildResult.Pkgs,
- }).Warn("could not find Nix packages")
+ // This marshaling error is ignored because we know that this
+ // field represents valid JSON data.
+ manifest, _ := json.Marshal(buildResult.Manifest)
+ w.Header().Add("Content-Type", manifestMediaType)
+ w.Write(manifest)
+}
- return
- }
+// serveLayer serves an image layer from storage (if it exists).
+func (h *registryHandler) serveLayer(w http.ResponseWriter, r *http.Request, digest string) {
+ storage := h.state.Storage
+ err := storage.ServeLayer(digest, r, w)
+ if err != nil {
+ log.WithError(err).WithFields(log.Fields{
+ "layer": digest,
+ "backend": storage.Name(),
+ }).Error("failed to serve layer from storage backend")
+ }
+}
- // This marshaling error is ignored because we know that this
- // field represents valid JSON data.
- manifest, _ := json.Marshal(buildResult.Manifest)
- w.Header().Add("Content-Type", manifestMediaType)
- w.Write(manifest)
+// ServeHTTP dispatches HTTP requests to the matching handlers.
+func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+ // Acknowledge that we speak V2 with an empty response
+ if r.RequestURI == "/v2/" {
return
}
- // Serve an image layer. For this we need to first ask Nix for
- // the manifest, then proceed to extract the correct layer from
- // it.
+ // Build & serve a manifest by tag
+ manifestMatches := manifestTagRegex.FindStringSubmatch(r.RequestURI)
+ if len(manifestMatches) == 3 {
+ h.serveManifestTag(w, r, manifestMatches[1], manifestMatches[2])
+ return
+ }
+
+ // Serve an image layer
layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
if len(layerMatches) == 3 {
- digest := layerMatches[2]
- storage := h.state.Storage
- err := storage.ServeLayer(digest, r, w)
- if err != nil {
- log.WithError(err).WithFields(log.Fields{
- "layer": digest,
- "backend": storage.Name(),
- }).Error("failed to serve layer from storage backend")
- }
-
+ h.serveLayer(w, r, layerMatches[2])
return
}
--
cgit 1.4.1
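The refactoring above leaves ServeHTTP as a thin dispatcher over the registry routes. A stripped-down sketch of that shape, with placeholder handler bodies standing in for Nixery's builder and storage logic:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"regexp"
)

var (
	manifestRe = regexp.MustCompile(`^/v2/([\w/.-]+)/manifests/([\w.-]+)$`)
	layerRe    = regexp.MustCompile(`^/v2/([\w/.-]+)/blobs/sha256:(\w+)$`)
)

type registry struct{}

// Placeholder: the real handler builds the image via Nix and serves its manifest.
func (registry) serveManifest(w http.ResponseWriter, name, tag string) {
	fmt.Fprintf(w, "would build and serve manifest for %s:%s\n", name, tag)
}

// Placeholder: the real handler streams the layer from the storage backend.
func (registry) serveLayer(w http.ResponseWriter, digest string) {
	fmt.Fprintf(w, "would serve layer %s\n", digest)
}

// ServeHTTP only matches routes and delegates, keeping each handler small.
func (h registry) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if r.URL.Path == "/v2/" {
		return // acknowledge that we speak registry API v2
	}

	if m := manifestRe.FindStringSubmatch(r.URL.Path); len(m) == 3 {
		h.serveManifest(w, m[1], m[2])
		return
	}

	if m := layerRe.FindStringSubmatch(r.URL.Path); len(m) == 3 {
		h.serveLayer(w, m[2])
		return
	}

	http.NotFound(w, r)
}

func main() {
	log.Fatal(http.ListenAndServe(":8080", registry{}))
}
```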
From cbbf45b5cb268a605134022d91308e6c57c9d273 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 27 Oct 2020 13:47:08 +0100
Subject: refactor(storage): Rename ServeLayer -> Serve
This is going to be used for general content-addressed objects, and is
not layer specific anymore.
---
tools/nixery/main.go | 4 ++--
tools/nixery/storage/filesystem.go | 8 ++++----
tools/nixery/storage/gcs.go | 6 +++---
tools/nixery/storage/storage.go | 6 +++---
4 files changed, 12 insertions(+), 12 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 92c602b1f79f..573f42767d04 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -1,4 +1,4 @@
-// Copyright 2019 Google LLC
+// Copyright 2019-2020 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not
// use this file except in compliance with the License. You may obtain a copy of
@@ -159,7 +159,7 @@ func (h *registryHandler) serveManifestTag(w http.ResponseWriter, r *http.Reques
// serveLayer serves an image layer from storage (if it exists).
func (h *registryHandler) serveLayer(w http.ResponseWriter, r *http.Request, digest string) {
storage := h.state.Storage
- err := storage.ServeLayer(digest, r, w)
+ err := storage.Serve(digest, r, w)
if err != nil {
log.WithError(err).WithFields(log.Fields{
"layer": digest,
diff --git a/tools/nixery/storage/filesystem.go b/tools/nixery/storage/filesystem.go
index cdbc31c5e046..926e9257a1e0 100644
--- a/tools/nixery/storage/filesystem.go
+++ b/tools/nixery/storage/filesystem.go
@@ -83,13 +83,13 @@ func (b *FSBackend) Move(ctx context.Context, old, new string) error {
return os.Rename(path.Join(b.path, old), newpath)
}
-func (b *FSBackend) ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error {
+func (b *FSBackend) Serve(digest string, r *http.Request, w http.ResponseWriter) error {
p := path.Join(b.path, "layers", digest)
log.WithFields(log.Fields{
- "layer": digest,
- "path": p,
- }).Info("serving layer from filesystem")
+ "digest": digest,
+ "path": p,
+ }).Info("serving blob from filesystem")
http.ServeFile(w, r, p)
return nil
diff --git a/tools/nixery/storage/gcs.go b/tools/nixery/storage/gcs.go
index c247cca62140..61b5dea52351 100644
--- a/tools/nixery/storage/gcs.go
+++ b/tools/nixery/storage/gcs.go
@@ -150,18 +150,18 @@ func (b *GCSBackend) Move(ctx context.Context, old, new string) error {
return nil
}
-func (b *GCSBackend) ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error {
+func (b *GCSBackend) Serve(digest string, r *http.Request, w http.ResponseWriter) error {
url, err := b.constructLayerUrl(digest)
if err != nil {
log.WithError(err).WithFields(log.Fields{
- "layer": digest,
+ "digest": digest,
"bucket": b.bucket,
}).Error("failed to sign GCS URL")
return err
}
- log.WithField("layer", digest).Info("redirecting layer request to GCS bucket")
+ log.WithField("digest", digest).Info("redirecting blob request to GCS bucket")
w.Header().Set("Location", url)
w.WriteHeader(303)
diff --git a/tools/nixery/storage/storage.go b/tools/nixery/storage/storage.go
index c97b5e4facc6..4040ef08dc9c 100644
--- a/tools/nixery/storage/storage.go
+++ b/tools/nixery/storage/storage.go
@@ -1,4 +1,4 @@
-// Copyright 2019 Google LLC
+// Copyright 2019-2020 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not
// use this file except in compliance with the License. You may obtain a copy of
@@ -46,6 +46,6 @@ type Backend interface {
Move(ctx context.Context, old, new string) error
// Serve provides a handler function to serve HTTP requests
- // for layers in the storage backend.
- ServeLayer(digest string, r *http.Request, w http.ResponseWriter) error
+ // for objects in the storage backend.
+ Serve(digest string, r *http.Request, w http.ResponseWriter) error
}
--
cgit 1.4.1
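After the rename, the storage interface reads roughly as follows. This is a paraphrase of the diffs above rather than a verbatim copy of storage.go, and the Persist signature shown here predates the later content-type change:

```go
package storage

import (
	"context"
	"io"
	"net/http"
)

// Persister writes an object and reports its SHA-256 hash and size.
type Persister func(io.Writer) (string, int64, error)

// Backend is the approximate shape of the storage interface after the rename.
type Backend interface {
	// Name returns a human-readable name of the backend.
	Name() string

	// Persist stores an object under the given path.
	Persist(ctx context.Context, path string, f Persister) (string, int64, error)

	// Fetch retrieves data from the storage backend.
	Fetch(ctx context.Context, path string) (io.ReadCloser, error)

	// Move renames an object inside the backend.
	Move(ctx context.Context, old, new string) error

	// Serve handles HTTP requests for content-addressed objects,
	// not only layers - hence the rename from ServeLayer.
	Serve(digest string, r *http.Request, w http.ResponseWriter) error
}
```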
From 94570aa83f0ebb9a192ff9d62e14b811457a6284 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 27 Oct 2020 14:24:14 +0100
Subject: feat(main): Implement serving of manifests by digest
Modifies the layer serving endpoint to be a generic blob-serving
endpoint that can handle both manifest and layer object "types".
Note that this commit does not yet populate the CAS with any
manifests.
---
tools/nixery/main.go | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 573f42767d04..8a6d3c3aedaa 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -53,8 +53,8 @@ var version string = "devel"
// routes required for serving images, since pushing and other such
// functionality is not available.
var (
- manifestTagRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
- layerRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/blobs/sha256:(\w+)$`)
+ manifestRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/manifests/([\w|\-|\.|\_]+)$`)
+ blobRegex = regexp.MustCompile(`^/v2/([\w|\-|\.|\_|\/]+)/(blobs|manifests)/sha256:(\w+)$`)
)
// Downloads the popularity information for the package set from the
@@ -156,15 +156,16 @@ func (h *registryHandler) serveManifestTag(w http.ResponseWriter, r *http.Reques
w.Write(manifest)
}
-// serveLayer serves an image layer from storage (if it exists).
-func (h *registryHandler) serveLayer(w http.ResponseWriter, r *http.Request, digest string) {
+// serveBlob serves a blob from storage by digest
+func (h *registryHandler) serveBlob(w http.ResponseWriter, r *http.Request, blobType, digest string) {
storage := h.state.Storage
err := storage.Serve(digest, r, w)
if err != nil {
log.WithError(err).WithFields(log.Fields{
- "layer": digest,
+ "type": blobType,
+ "digest": digest,
"backend": storage.Name(),
- }).Error("failed to serve layer from storage backend")
+ }).Error("failed to serve blob from storage backend")
}
}
@@ -176,16 +177,16 @@ func (h *registryHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
// Build & serve a manifest by tag
- manifestMatches := manifestTagRegex.FindStringSubmatch(r.RequestURI)
+ manifestMatches := manifestRegex.FindStringSubmatch(r.RequestURI)
if len(manifestMatches) == 3 {
h.serveManifestTag(w, r, manifestMatches[1], manifestMatches[2])
return
}
- // Serve an image layer
- layerMatches := layerRegex.FindStringSubmatch(r.RequestURI)
- if len(layerMatches) == 3 {
- h.serveLayer(w, r, layerMatches[2])
+ // Serve a blob by digest
+ layerMatches := blobRegex.FindStringSubmatch(r.RequestURI)
+ if len(layerMatches) == 4 {
+ h.serveBlob(w, r, layerMatches[2], layerMatches[3])
return
}
--
cgit 1.4.1
From 9e5ebb2f4f57c310977e979166e9ab287ef65865 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 27 Oct 2020 14:50:02 +0100
Subject: feat(main): Implement caching of manifests in CAS
To ensure that registry clients which attempt to pull manifests by
their content hash can interact with Nixery, this change implements
persisting image manifests in the CAS in the same way as image layers.
In combination with the previous refactorings this means that Nixery's
serving flow is now compatible with containerd.
I have verified this locally, but CI currently only runs against
Docker and not containerd, which is something I plan to address in a
subsequent PR.
This fixes #102
---
tools/nixery/main.go | 35 +++++++++++++++++++++++++++++++++++
1 file changed, 35 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 8a6d3c3aedaa..39cfbc525528 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -26,8 +26,11 @@
package main
import (
+ "context"
+ "crypto/sha256"
"encoding/json"
"fmt"
+ "io"
"io/ioutil"
"net/http"
"regexp"
@@ -153,6 +156,38 @@ func (h *registryHandler) serveManifestTag(w http.ResponseWriter, r *http.Reques
// field represents valid JSON data.
manifest, _ := json.Marshal(buildResult.Manifest)
w.Header().Add("Content-Type", manifestMediaType)
+
+ // The manifest needs to be persisted to the blob storage (to become
+ // available for clients that fetch manifests by their hash, e.g.
+ // containerd) and served to the client.
+ //
+ // Since we have no stable key to address this manifest (it may be
+ // uncacheable, yet still addressable by blob) we need to separate
+ // out the hashing, uploading and serving phases. The latter is
+ // especially important as clients may start to fetch it by digest
+ // as soon as they see a response.
+ sha256sum := fmt.Sprintf("%x", sha256.Sum256(manifest))
+ path := "layers/" + sha256sum
+ ctx := context.TODO()
+
+ _, _, err = h.state.Storage.Persist(ctx, path, func(sw io.Writer) (string, int64, error) {
+ // We already know the hash, so no additional hash needs to be
+ // constructed here.
+ written, err := sw.Write(manifest)
+ return sha256sum, int64(written), err
+ })
+
+ if err != nil {
+ writeError(w, 500, "MANIFEST_UPLOAD", "could not upload manifest to blob store")
+
+ log.WithError(err).WithFields(log.Fields{
+ "image": name,
+ "tag": tag,
+ }).Error("could not upload manifest")
+
+ return
+ }
+
w.Write(manifest)
}
--
cgit 1.4.1
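The key step in this patch is that the serialized manifest is stored under its own SHA-256 digest, so a later digest-addressed request can be answered from the blob store. A minimal sketch of just that digest and path computation, with the storage call omitted:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

func main() {
	// Stand-in for the manifest JSON produced by the builder.
	manifest, _ := json.Marshal(map[string]interface{}{
		"schemaVersion": 2,
		"mediaType":     "application/vnd.docker.distribution.manifest.v2+json",
	})

	// Hash the exact bytes that will be served; clients verify this digest.
	sum := fmt.Sprintf("%x", sha256.Sum256(manifest))
	path := "layers/" + sum

	fmt.Println("digest:  ", "sha256:"+sum)
	fmt.Println("CAS path:", path)
	// A client can now request /v2/<name>/manifests/sha256:<digest> and be
	// served this exact blob from storage.
}
```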
From 8a5c446babbac860d6eaee6f7e0c5a5a8a6f4183 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 27 Oct 2020 22:55:20 +0100
Subject: docs: Add a note about a Nix-native builder to the roadmap
... if I don't mention this somewhere I'll probably never do it!
---
tools/nixery/README.md | 6 ++++++
1 file changed, 6 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 32e5921fa475..6731d8786582 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -124,6 +124,12 @@ correct caching behaviour, addressing and so on.
See [issue #4](https://github.com/google/nixery/issues/4).
+### Nix-native builder
+
+The image building and layering functionality of Nixery will be extracted into a
+separate Nix function, which will make it possible to build images directly in
+Nix builds.
+
[Nix]: https://nixos.org/
[layering strategy]: https://storage.googleapis.com/nixdoc/nixery-layers.html
[gist]: https://gist.github.com/tazjin/08f3d37073b3590aacac424303e6f745
--
cgit 1.4.1
From cc35bf0fc3a900dccf4f9edcc581cadb5956c439 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 29 Oct 2020 16:13:53 +0100
Subject: feat(storage): Add support for content-types (GCS only)
Extends storage.Persist to accept a Content-Type argument, which in
the GCS backend is persisted with the object to ensure that the object
is served back with this content-type.
This is not yet implemented for the filesystem backend, where the
parameter is simply ignored.
This should help in the case of clients which expect the returned
objects to have content-types set when, for example, fetching layers
by digest.
---
tools/nixery/builder/builder.go | 2 +-
tools/nixery/builder/cache.go | 4 ++--
tools/nixery/main.go | 3 ++-
tools/nixery/manifest/manifest.go | 8 ++++----
tools/nixery/storage/filesystem.go | 3 ++-
tools/nixery/storage/gcs.go | 25 ++++++++++++++++++++++---
tools/nixery/storage/storage.go | 2 +-
7 files changed, 34 insertions(+), 13 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/builder/builder.go b/tools/nixery/builder/builder.go
index 028bcc57690a..115f1e37ef32 100644
--- a/tools/nixery/builder/builder.go
+++ b/tools/nixery/builder/builder.go
@@ -420,7 +420,7 @@ func (b *byteCounter) Write(p []byte) (n int, err error) {
// image manifest.
func uploadHashLayer(ctx context.Context, s *State, key string, lw layerWriter) (*manifest.Entry, error) {
path := "staging/" + key
- sha256sum, size, err := s.Storage.Persist(ctx, path, func(sw io.Writer) (string, int64, error) {
+ sha256sum, size, err := s.Storage.Persist(ctx, path, manifest.LayerType, func(sw io.Writer) (string, int64, error) {
// Sets up a "multiwriter" that simultaneously runs both hash
// algorithms and uploads to the storage backend.
shasum := sha256.New()
diff --git a/tools/nixery/builder/cache.go b/tools/nixery/builder/cache.go
index a4ebe03e1c94..35b563e52496 100644
--- a/tools/nixery/builder/cache.go
+++ b/tools/nixery/builder/cache.go
@@ -152,7 +152,7 @@ func cacheManifest(ctx context.Context, s *State, key string, m json.RawMessage)
go s.Cache.localCacheManifest(key, m)
path := "manifests/" + key
- _, size, err := s.Storage.Persist(ctx, path, func(w io.Writer) (string, int64, error) {
+ _, size, err := s.Storage.Persist(ctx, path, manifest.ManifestType, func(w io.Writer) (string, int64, error) {
size, err := io.Copy(w, bytes.NewReader([]byte(m)))
return "", size, err
})
@@ -220,7 +220,7 @@ func cacheLayer(ctx context.Context, s *State, key string, entry manifest.Entry)
j, _ := json.Marshal(&entry)
path := "builds/" + key
- _, _, err := s.Storage.Persist(ctx, path, func(w io.Writer) (string, int64, error) {
+ _, _, err := s.Storage.Persist(ctx, path, "", func(w io.Writer) (string, int64, error) {
size, err := io.Copy(w, bytes.NewReader(j))
return "", size, err
})
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 39cfbc525528..d94d51b4681e 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -38,6 +38,7 @@ import (
"github.com/google/nixery/builder"
"github.com/google/nixery/config"
"github.com/google/nixery/logs"
+ mf "github.com/google/nixery/manifest"
"github.com/google/nixery/storage"
log "github.com/sirupsen/logrus"
)
@@ -170,7 +171,7 @@ func (h *registryHandler) serveManifestTag(w http.ResponseWriter, r *http.Reques
path := "layers/" + sha256sum
ctx := context.TODO()
- _, _, err = h.state.Storage.Persist(ctx, path, func(sw io.Writer) (string, int64, error) {
+ _, _, err = h.state.Storage.Persist(ctx, path, mf.ManifestType, func(sw io.Writer) (string, int64, error) {
// We already know the hash, so no additional hash needs to be
// constructed here.
written, err := sw.Write(manifest)
diff --git a/tools/nixery/manifest/manifest.go b/tools/nixery/manifest/manifest.go
index 0d36826fb7e5..e499920075f0 100644
--- a/tools/nixery/manifest/manifest.go
+++ b/tools/nixery/manifest/manifest.go
@@ -28,8 +28,8 @@ const (
schemaVersion = 2
// media types
- manifestType = "application/vnd.docker.distribution.manifest.v2+json"
- layerType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
+ ManifestType = "application/vnd.docker.distribution.manifest.v2+json"
+ LayerType = "application/vnd.docker.image.rootfs.diff.tar.gzip"
configType = "application/vnd.docker.container.image.v1+json"
// image config constants
@@ -117,7 +117,7 @@ func Manifest(arch string, layers []Entry) (json.RawMessage, ConfigLayer) {
hashes := make([]string, len(layers))
for i, l := range layers {
hashes[i] = l.TarHash
- l.MediaType = layerType
+ l.MediaType = LayerType
l.TarHash = ""
layers[i] = l
}
@@ -126,7 +126,7 @@ func Manifest(arch string, layers []Entry) (json.RawMessage, ConfigLayer) {
m := manifest{
SchemaVersion: schemaVersion,
- MediaType: manifestType,
+ MediaType: ManifestType,
Config: Entry{
MediaType: configType,
Size: int64(len(c.Config)),
diff --git a/tools/nixery/storage/filesystem.go b/tools/nixery/storage/filesystem.go
index 926e9257a1e0..bd757587b4db 100644
--- a/tools/nixery/storage/filesystem.go
+++ b/tools/nixery/storage/filesystem.go
@@ -49,7 +49,8 @@ func (b *FSBackend) Name() string {
return fmt.Sprintf("Filesystem (%s)", b.path)
}
-func (b *FSBackend) Persist(ctx context.Context, key string, f Persister) (string, int64, error) {
+// TODO(tazjin): Implement support for persisting content-types for the filesystem backend.
+func (b *FSBackend) Persist(ctx context.Context, key, _type string, f Persister) (string, int64, error) {
full := path.Join(b.path, key)
dir := path.Dir(full)
err := os.MkdirAll(dir, 0755)
diff --git a/tools/nixery/storage/gcs.go b/tools/nixery/storage/gcs.go
index 61b5dea52351..eac34461af76 100644
--- a/tools/nixery/storage/gcs.go
+++ b/tools/nixery/storage/gcs.go
@@ -80,17 +80,36 @@ func (b *GCSBackend) Name() string {
return "Google Cloud Storage (" + b.bucket + ")"
}
-func (b *GCSBackend) Persist(ctx context.Context, path string, f Persister) (string, int64, error) {
+func (b *GCSBackend) Persist(ctx context.Context, path, contentType string, f Persister) (string, int64, error) {
obj := b.handle.Object(path)
w := obj.NewWriter(ctx)
hash, size, err := f(w)
if err != nil {
- log.WithError(err).WithField("path", path).Error("failed to upload to GCS")
+ log.WithError(err).WithField("path", path).Error("failed to write to GCS")
return hash, size, err
}
- return hash, size, w.Close()
+ err = w.Close()
+ if err != nil {
+ log.WithError(err).WithField("path", path).Error("failed to complete GCS upload")
+ return hash, size, err
+ }
+
+ // GCS natively supports content types for objects, which will be
+ // used when serving them back.
+ if contentType != "" {
+ _, err = obj.Update(ctx, storage.ObjectAttrsToUpdate{
+ ContentType: contentType,
+ })
+
+ if err != nil {
+ log.WithError(err).WithField("path", path).Error("failed to update object attrs")
+ return hash, size, err
+ }
+ }
+
+ return hash, size, nil
}
func (b *GCSBackend) Fetch(ctx context.Context, path string) (io.ReadCloser, error) {
diff --git a/tools/nixery/storage/storage.go b/tools/nixery/storage/storage.go
index 4040ef08dc9c..fd496f440ae3 100644
--- a/tools/nixery/storage/storage.go
+++ b/tools/nixery/storage/storage.go
@@ -36,7 +36,7 @@ type Backend interface {
// It needs to return the SHA256 hash of the data written as
// well as the total number of bytes, as those are required
// for the image manifest.
- Persist(context.Context, string, Persister) (string, int64, error)
+ Persist(ctx context.Context, path, contentType string, f Persister) (string, int64, error)
// Fetch retrieves data from the storage backend.
Fetch(ctx context.Context, path string) (io.ReadCloser, error)
--
cgit 1.4.1
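From the caller's perspective, the change means Persist now takes a content type alongside the path and the Persister callback. A rough sketch of that calling pattern, using a stand-in persist function rather than a real backend, and hashing while writing as the builder does:

```go
package main

import (
	"bytes"
	"context"
	"crypto/sha256"
	"fmt"
	"io"
)

// Persister mirrors the callback type used by the storage backends.
type Persister func(io.Writer) (string, int64, error)

// persist is a stand-in for Backend.Persist with the new contentType
// parameter; a real backend writes to GCS or the filesystem and records
// the content type with the object.
func persist(ctx context.Context, path, contentType string, f Persister) (string, int64, error) {
	return f(io.Discard)
}

func main() {
	layer := []byte("example layer contents")

	hash, size, err := persist(context.Background(), "staging/example",
		"application/vnd.docker.image.rootfs.diff.tar.gzip",
		func(w io.Writer) (string, int64, error) {
			// Hash and upload in a single pass.
			h := sha256.New()
			n, err := io.Copy(io.MultiWriter(w, h), bytes.NewReader(layer))
			return fmt.Sprintf("%x", h.Sum(nil)), n, err
		})
	if err != nil {
		panic(err)
	}

	fmt.Printf("persisted %d bytes with sha256 %s\n", size, hash)
}
```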
From 8ad5c55ad281902e91e2c4d9a7f8e33d9f73c24e Mon Sep 17 00:00:00 2001
From: Dave Nicponski
Date: Thu, 3 Dec 2020 13:53:31 -0500
Subject: docs(config): Fix comment typo
---
tools/nixery/config/pkgsource.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/config/pkgsource.go b/tools/nixery/config/pkgsource.go
index 380e664367e7..55007bc80623 100644
--- a/tools/nixery/config/pkgsource.go
+++ b/tools/nixery/config/pkgsource.go
@@ -32,7 +32,7 @@ type PkgSource interface {
// for calling Nix.
Render(tag string) (string, string)
- // Create a key by which builds for this source and iamge
+ // Create a key by which builds for this source and image
// combination can be cached.
//
// The empty string means that this value is not cacheable due
--
cgit 1.4.1
From 3e1d63ccb309f5a7d677adeb630472e598783628 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 5 Dec 2020 13:23:26 +0100
Subject: docs: Update README with a link to the NixCon talk
---
tools/nixery/README.md | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 6731d8786582..c701a0e62ee1 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -20,10 +20,9 @@ and/or large dependencies.
A public instance as well as additional documentation is available at
[nixery.dev][public].
-The project started out inspired by the [buildLayeredImage][] blog post with the
-intention of becoming a Kubernetes controller that can serve declarative image
-specifications specified in CRDs as container images. The design for this was
-outlined in [a public gist][gist].
+You can watch the NixCon 2019 [talk about
+Nixery](https://www.youtube.com/watch?v=pOI9H4oeXqA) for more information about
+the project and its use-cases.
This is not an officially supported Google project.
@@ -115,6 +114,13 @@ These extra configuration variables must be set to configure storage backends:
* `STORAGE_PATH`: Path to a folder in which to store and from which to serve
data (**required** for `filesystem`)
+### Background
+
+The project started out inspired by the [buildLayeredImage][] blog post with the
+intention of becoming a Kubernetes controller that can serve declarative image
+specifications specified in CRDs as container images. The design for this was
+outlined in [a public gist][gist].
+
## Roadmap
### Kubernetes integration
--
cgit 1.4.1
From 954953d8bad1978b529df146d8ea50d91d4e257a Mon Sep 17 00:00:00 2001
From: Jerome Petazzoni
Date: Tue, 13 Apr 2021 16:20:06 +0200
Subject: chore(nix): update channel URL
It looks like NixPkgs channels have moved. Fixing this URL allows
using nixos-20.09, for instance.
---
tools/nixery/prepare-image/load-pkgs.nix | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/prepare-image/load-pkgs.nix b/tools/nixery/prepare-image/load-pkgs.nix
index cceebfc14dae..4a89dcde3a14 100644
--- a/tools/nixery/prepare-image/load-pkgs.nix
+++ b/tools/nixery/prepare-image/load-pkgs.nix
@@ -23,7 +23,7 @@ let
fetchImportChannel = channel:
let
url =
- "https://github.com/NixOS/nixpkgs-channels/archive/${channel}.tar.gz";
+ "https://github.com/NixOS/nixpkgs/archive/${channel}.tar.gz";
in import (fetchTarball url) importArgs;
# If a git repository is requested, it is retrieved via
--
cgit 1.4.1
From f172107ef1b2568bd3a1b1eafa4b9e2546e14c1d Mon Sep 17 00:00:00 2001
From: Jerome Petazzoni
Date: Tue, 13 Apr 2021 16:26:04 +0200
Subject: feat(storage): Add generic support for content-types
When serving a manifest, it is important to set the content-type
correctly (otherwise pulling an image is likely to give a cryptic
error message, "Error response from daemon: missing signature key").
This makes sure that we set the content-type properly for both
manifests and layers.
---
tools/nixery/main.go | 10 ++++++++++
1 file changed, 10 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index d94d51b4681e..6af4636e51a0 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -195,6 +195,16 @@ func (h *registryHandler) serveManifestTag(w http.ResponseWriter, r *http.Reques
// serveBlob serves a blob from storage by digest
func (h *registryHandler) serveBlob(w http.ResponseWriter, r *http.Request, blobType, digest string) {
storage := h.state.Storage
+ switch blobType {
+ case "manifests":
+ // It is necessary to set the correct content-type when serving manifests.
+ // Otherwise, you may get the following mysterious error message when pulling:
+ // "Error response from daemon: missing signature key"
+ w.Header().Add("Content-Type", mf.ManifestType)
+ case "blobs":
+ // It is not strictly necessary to set this content-type, but since we're here...
+ w.Header().Add("Content-Type", mf.LayerType)
+ }
err := storage.Serve(digest, r, w)
if err != nil {
log.WithError(err).WithFields(log.Fields{
--
cgit 1.4.1
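A quick way to observe the behaviour this enables is to fetch a manifest by digest from a running instance and inspect the returned Content-Type. The sketch below assumes a local Nixery on localhost:8080 and uses a placeholder digest:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Placeholder digest - substitute the digest of a manifest that the
	// instance has actually built and cached.
	digest := "sha256:0000000000000000000000000000000000000000000000000000000000000000"

	resp, err := http.Get("http://localhost:8080/v2/hello/manifests/" + digest)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	fmt.Println("status:      ", resp.Status)
	fmt.Println("content-type:", resp.Header.Get("Content-Type"))
	// Expected for a cached manifest:
	// application/vnd.docker.distribution.manifest.v2+json
}
```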
From d2767bbe8a7af403e4e252f3fbfe9ec8fa0c5a90 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Tue, 27 Apr 2021 16:28:49 +0200
Subject: feat(ci): Configure initial GitHub Actions setup
Travis is being deprecated, and this might be the best option for now.
---
tools/nixery/.github/workflows/build-and-test.yaml | 29 ++++++++++++++++++++++
1 file changed, 29 insertions(+)
create mode 100644 tools/nixery/.github/workflows/build-and-test.yaml
(limited to 'tools')
diff --git a/tools/nixery/.github/workflows/build-and-test.yaml b/tools/nixery/.github/workflows/build-and-test.yaml
new file mode 100644
index 000000000000..79dd54a71ea5
--- /dev/null
+++ b/tools/nixery/.github/workflows/build-and-test.yaml
@@ -0,0 +1,29 @@
+# Build Nixery, spin up an instance and pull an image from it.
+name: "Build and test Nixery"
+on:
+ push:
+ branches:
+ - master
+ pull_request: {}
+permissions: "read-all"
+env:
+ NIX_PATH: "nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/0a40a3999eb4d577418515da842a2622a64880c5.tar.gz"
+jobs:
+ build-and-test:
+ runs-on: ubuntu-latest
+ steps:
+ - name: "Do we have Docker?"
+ run: |
+ docker ps -a
+ - name: Install Nix
+ uses: cachix/install-nix-action@v13
+ - name: Checkout
+ uses: actions/checkout@v2.3.4
+ - name: Prepare environment
+ run: |
+ mkdir test-storage
+ nix-env -f '<nixpkgs>' -iA go
+ - name: Check formatting
+ run: "test -z $(gofmt -l .)"
+ - name: Build Nixery
+ run: "nix-build --arg maxLayers 2"
--
cgit 1.4.1
From ee48bd891cbe0c5f7e819787f6ddd39053507cf1 Mon Sep 17 00:00:00 2001
From: Florian Klink
Date: Thu, 29 Apr 2021 15:22:00 +0200
Subject: feat(ci): remove unneeded permissions: read-all
We don't intend to label, authenticate or whatever with the
GITHUB_TOKEN, so there's not really a reason to give any broader
permissions than the defaults.
---
tools/nixery/.github/workflows/build-and-test.yaml | 1 -
1 file changed, 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/.github/workflows/build-and-test.yaml b/tools/nixery/.github/workflows/build-and-test.yaml
index 79dd54a71ea5..1036220aaffd 100644
--- a/tools/nixery/.github/workflows/build-and-test.yaml
+++ b/tools/nixery/.github/workflows/build-and-test.yaml
@@ -5,7 +5,6 @@ on:
branches:
- master
pull_request: {}
-permissions: "read-all"
env:
NIX_PATH: "nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/0a40a3999eb4d577418515da842a2622a64880c5.tar.gz"
jobs:
--
cgit 1.4.1
From 970f49223599ec124809ead7be0b61e3e30431f9 Mon Sep 17 00:00:00 2001
From: Florian Klink
Date: Thu, 29 Apr 2021 15:10:54 +0200
Subject: feat(ci): add integration tests to GitHub Actions, remove
.travis.yaml
This copies the integration tests from `.travis.yaml` into a script,
documents the assumptions it makes, and wires it into GitHub Actions.
Contrary to the travis version, we don't use Nixery's GCS backend, as
handing out access to the bucket used, especially for PRs, needs to be
done carefully.
Adding back GCS to the integration test can be done at a later point,
either by using a mock server, or by only exposing the credentials for
master builds (and have the test script decide on whether
GOOGLE_APPLICATION_CREDENTIALS is set or not).
The previous travis version had some complicated post-mortem log
gathering - instead of doing this, we can just `docker run` nixery, but
fork it into the background with the shell - causing it to still be able
to log its output as it's running.
An additional `--rm` is appended, so the container gets cleaned up on
termination - this allows subsequent runs on non-CI infrastructure (like
developer laptops), without having to manually clean up containers.
Fixes #119.
---
tools/nixery/.github/workflows/build-and-test.yaml | 2 +
tools/nixery/.travis.yml | 78 ----------------------
tools/nixery/scripts/integration-test.sh | 51 ++++++++++++++
3 files changed, 53 insertions(+), 78 deletions(-)
delete mode 100644 tools/nixery/.travis.yml
create mode 100755 tools/nixery/scripts/integration-test.sh
(limited to 'tools')
diff --git a/tools/nixery/.github/workflows/build-and-test.yaml b/tools/nixery/.github/workflows/build-and-test.yaml
index 1036220aaffd..3ae4e639e31e 100644
--- a/tools/nixery/.github/workflows/build-and-test.yaml
+++ b/tools/nixery/.github/workflows/build-and-test.yaml
@@ -26,3 +26,5 @@ jobs:
run: "test -z $(gofmt -l .)"
- name: Build Nixery
run: "nix-build --arg maxLayers 2"
+ - name: Run integration test
+ run: scripts/integration-test.sh
diff --git a/tools/nixery/.travis.yml b/tools/nixery/.travis.yml
deleted file mode 100644
index fcf2034d4472..000000000000
--- a/tools/nixery/.travis.yml
+++ /dev/null
@@ -1,78 +0,0 @@
-language: nix
-arch:
- - amd64
- - arm64
-services:
- - docker
-env:
- - NIX_PATH=nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/0a40a3999eb4d577418515da842a2622a64880c5.tar.gz
-before_script:
- - echo "Running Nixery CI build on $(uname -m)"
- - mkdir test-files
- - echo ${GOOGLE_KEY} | base64 -d > test-files/key.json
- - echo ${GCS_SIGNING_PEM} | base64 -d > test-files/gcs.pem
- - nix-env -f '<nixpkgs>' -iA -A go
-script:
- - test -z $(gofmt -l server/ build-image/)
- - nix-build --arg maxLayers 2
-
- # This integration test makes sure that the container image built
- # for Nixery itself runs fine in Docker, and that images pulled
- # from it work in Docker.
- #
- # Output from the Nixery container is printed at the end of the
- # test regardless of test status.
- - IMG=$(docker load -q -i $(nix-build -A nixery-image) | awk '{ print $3 }')
- - echo "Loaded Nixery image as ${IMG}"
-
- - |
- docker run -d -p 8080:8080 --name nixery \
- -v ${PWD}/test-files:/var/nixery \
- -e PORT=8080 \
- -e GCS_BUCKET=nixery-ci-tests \
- -e GOOGLE_CLOUD_PROJECT=nixery \
- -e GOOGLE_APPLICATION_CREDENTIALS=/var/nixery/key.json \
- -e NIXERY_CHANNEL=nixos-unstable \
- -e NIXERY_STORAGE_BACKEND=gcs \
- ${IMG}
-
- # print all of the container's logs regardless of success
- - |
- function print_logs {
- echo "Nixery container logs:"
- docker logs nixery
- }
- trap print_logs EXIT
-
- # Give the container ~20 seconds to come up
- - |
- attempts=0
- echo -n "Waiting for Nixery to start ..."
- until $(curl --fail --silent "http://localhost:8080/v2/"); do
- [[ attempts -eq 30 ]] && echo "Nixery container failed to start!" && exit 1
- ((attempts++))
- echo -n "."
- sleep 1
- done
-
- # Pull and run an image of the current CPU architecture
- - |
- case $(uname -m) in
- x86_64)
- docker run --rm localhost:8080/hello hello
- ;;
- aarch64)
- docker run --rm localhost:8080/arm64/hello hello
- ;;
- esac
-
- # Pull an image of the opposite CPU architecture (but without running it)
- - |
- case $(uname -m) in
- x86_64)
- docker pull localhost:8080/arm64/hello
- ;;
- aarch64)
- docker pull localhost:8080/hello
- ;;
- esac
diff --git a/tools/nixery/scripts/integration-test.sh b/tools/nixery/scripts/integration-test.sh
new file mode 100755
index 000000000000..9595f37d9417
--- /dev/null
+++ b/tools/nixery/scripts/integration-test.sh
@@ -0,0 +1,51 @@
+#!/usr/bin/env bash
+set -eou pipefail
+
+# This integration test makes sure that the container image built
+# for Nixery itself runs fine in Docker, and that images pulled
+# from it work in Docker.
+
+IMG=$(docker load -q -i "$(nix-build -A nixery-image)" | awk '{ print $3 }')
+echo "Loaded Nixery image as ${IMG}"
+
+# Run the built nixery docker image in the background, but keep printing its
+# output as it occurs.
+docker run --rm -p 8080:8080 --name nixery \
+ -e PORT=8080 \
+ --mount type=tmpfs,destination=/var/cache/nixery \
+ -e NIXERY_CHANNEL=nixos-unstable \
+ -e NIXERY_STORAGE_BACKEND=filesystem \
+ -e STORAGE_PATH=/var/cache/nixery \
+ "${IMG}" &
+
+# Give the container ~20 seconds to come up
+set +e
+attempts=0
+echo -n "Waiting for Nixery to start ..."
+until curl --fail --silent "http://localhost:8080/v2/"; do
+ [[ attempts -eq 30 ]] && echo "Nixery container failed to start!" && exit 1
+ ((attempts++))
+ echo -n "."
+ sleep 1
+done
+set -e
+
+# Pull and run an image of the current CPU architecture
+case $(uname -m) in
+ x86_64)
+ docker run --rm localhost:8080/hello hello
+ ;;
+ aarch64)
+ docker run --rm localhost:8080/arm64/hello hello
+ ;;
+esac
+
+# Pull an image of the opposite CPU architecture (but without running it)
+case $(uname -m) in
+x86_64)
+ docker pull localhost:8080/arm64/hello
+ ;;
+aarch64)
+ docker pull localhost:8080/hello
+ ;;
+esac
--
cgit 1.4.1
From 7e8295189bbcd4a30ea684c65c0a3c343d4842a9 Mon Sep 17 00:00:00 2001
From: Florian Klink
Date: Thu, 29 Apr 2021 16:02:26 +0200
Subject: docs: document unset GOOGLE_APPLICATION_CREDENTIALS
In case the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is not
set, a redirect to storage.googleapis.com is issued, which means the
underlying bucket objects need to be publicly accessible.
This wasn't really obvious until now, so further clarify it.
---
tools/nixery/README.md | 4 ++++
tools/nixery/storage/gcs.go | 4 ++++
2 files changed, 8 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index c701a0e62ee1..cebf28b58492 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -94,6 +94,10 @@ account key, Nixery will also use this key to create [signed URLs][] for layers
in the storage bucket. This makes it possible to serve layers from a bucket
without having to make them publicly available.
+In case the `GOOGLE_APPLICATION_CREDENTIALS` environment variable is not set, a
+redirect to storage.googleapis.com is issued, which means the underlying bucket
+objects need to be publicly accessible.
+
### Storage
Nixery supports multiple different storage backends in which its build cache and
diff --git a/tools/nixery/storage/gcs.go b/tools/nixery/storage/gcs.go
index eac34461af76..a4bb4ba31f67 100644
--- a/tools/nixery/storage/gcs.go
+++ b/tools/nixery/storage/gcs.go
@@ -222,6 +222,10 @@ func signingOptsFromEnv() (*storage.SignedURLOptions, error) {
// Signing the URL allows unauthenticated clients to retrieve objects from the
// bucket.
//
+// In case signing is not configured, a redirect to storage.googleapis.com is
+// issued, which means the underlying bucket objects need to be publicly
+// accessible.
+//
// The Docker client is known to follow redirects, but this might not be true
// for all other registry clients.
func (b *GCSBackend) constructLayerUrl(digest string) (string, error) {
--
cgit 1.4.1
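As a rough illustration of what signing involves when the credentials are set, the GCS client library's package-level SignedURL helper (cloud.google.com/go/storage) can produce such URLs. The bucket name, object name, key path and service account below are placeholders, not Nixery's configuration:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	// Placeholder key path - any PEM-encoded service account key works here.
	pem, err := os.ReadFile("/var/nixery/gcs.pem")
	if err != nil {
		log.Fatal(err)
	}

	// Sign a time-limited GET URL for a single object in the bucket.
	url, err := storage.SignedURL("example-nixery-bucket", "layers/example-digest",
		&storage.SignedURLOptions{
			GoogleAccessID: "nixery@example-project.iam.gserviceaccount.com",
			PrivateKey:     pem,
			Method:         "GET",
			Expires:        time.Now().Add(5 * time.Minute),
		})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(url)
}
```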
From 8a1add9ef8f6899b8f110bd789d2422dbe688642 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Thu, 29 Apr 2021 23:52:42 +0200
Subject: chore(ci): Remove unnecessary commands from new CI setup
* remove a step that was not supposed to be committed ("Do we have
Docker?")
* remove setup of old temporary storage directory (now done in
integration test script instead)
* skip creation of out-link for initial Nixery build (to avoid
cache-busting on the second build)
---
tools/nixery/.github/workflows/build-and-test.yaml | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/.github/workflows/build-and-test.yaml b/tools/nixery/.github/workflows/build-and-test.yaml
index 3ae4e639e31e..86a2c43637ff 100644
--- a/tools/nixery/.github/workflows/build-and-test.yaml
+++ b/tools/nixery/.github/workflows/build-and-test.yaml
@@ -11,20 +11,15 @@ jobs:
build-and-test:
runs-on: ubuntu-latest
steps:
- - name: "Do we have Docker?"
- run: |
- docker ps -a
- name: Install Nix
uses: cachix/install-nix-action@v13
- name: Checkout
uses: actions/checkout@v2.3.4
- name: Prepare environment
- run: |
- mkdir test-storage
- nix-env -f '<nixpkgs>' -iA go
+ run: nix-env -f '<nixpkgs>' -iA go
- name: Check formatting
run: "test -z $(gofmt -l .)"
- name: Build Nixery
- run: "nix-build --arg maxLayers 2"
+ run: "nix-build --arg maxLayers 2 --no-out-link"
- name: Run integration test
run: scripts/integration-test.sh
--
cgit 1.4.1
From 7520f2cb96159ae3cbee80b4822900f9a92f7f53 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 23 Apr 2021 11:00:06 +0200
Subject: chore: Update default NixOS channel to nixos-20.09
---
tools/nixery/.github/workflows/build-and-test.yaml | 2 +-
tools/nixery/docs/src/caching.md | 2 +-
tools/nixery/docs/src/nixery.md | 2 +-
tools/nixery/docs/src/run-your-own.md | 4 ++--
tools/nixery/nixpkgs-pin.nix | 4 ++--
tools/nixery/prepare-image/prepare-image.nix | 2 +-
6 files changed, 8 insertions(+), 8 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/.github/workflows/build-and-test.yaml b/tools/nixery/.github/workflows/build-and-test.yaml
index 86a2c43637ff..4563239a852f 100644
--- a/tools/nixery/.github/workflows/build-and-test.yaml
+++ b/tools/nixery/.github/workflows/build-and-test.yaml
@@ -6,7 +6,7 @@ on:
- master
pull_request: {}
env:
- NIX_PATH: "nixpkgs=https://github.com/NixOS/nixpkgs-channels/archive/0a40a3999eb4d577418515da842a2622a64880c5.tar.gz"
+ NIX_PATH: "nixpkgs=https://github.com/NixOS/nixpkgs/archive/4263ba5e133cc3fc699c1152ab5ee46ef668e675.tar.gz"
jobs:
build-and-test:
runs-on: ubuntu-latest
diff --git a/tools/nixery/docs/src/caching.md b/tools/nixery/docs/src/caching.md
index 05ea6d89b655..05ea68ef6083 100644
--- a/tools/nixery/docs/src/caching.md
+++ b/tools/nixery/docs/src/caching.md
@@ -29,7 +29,7 @@ Manifest caching *only* applies in the following cases:
Manifest caching *never* applies in the following cases:
* package source specification is a local file path (i.e. `NIXERY_PKGS_PATH`)
-* package source specification is a NixOS channel (e.g. `NIXERY_CHANNEL=nixos-20.03`)
+* package source specification is a NixOS channel (e.g. `NIXERY_CHANNEL=nixos-20.09`)
* package source specification is a git branch or tag (e.g. `staging`, `master` or `latest`)
It is thus always preferable to request images from a fully-pinned package
diff --git a/tools/nixery/docs/src/nixery.md b/tools/nixery/docs/src/nixery.md
index 185c84630889..5f6dcb7e361c 100644
--- a/tools/nixery/docs/src/nixery.md
+++ b/tools/nixery/docs/src/nixery.md
@@ -54,7 +54,7 @@ own instance of Nixery.
### Which revision of `nixpkgs` is used for the builds?
The instance at `nixery.dev` tracks a recent NixOS channel, currently NixOS
-20.03. The channel is updated several times a day.
+20.09. The channel is updated several times a day.
Private registries might be configured to track a different channel (such as
`nixos-unstable`) or even track a git repository with custom packages.
diff --git a/tools/nixery/docs/src/run-your-own.md b/tools/nixery/docs/src/run-your-own.md
index 9c20e3f2cde7..4ffb8c4d4b6e 100644
--- a/tools/nixery/docs/src/run-your-own.md
+++ b/tools/nixery/docs/src/run-your-own.md
@@ -44,7 +44,7 @@ be performed for trivial things.
However if you are running a private Nixery, chances are high that you intend to
use it with your own packages. There are three options available:
-1. Specify an upstream Nix/NixOS channel[^1], such as `nixos-20.03` or
+1. Specify an upstream Nix/NixOS channel[^1], such as `nixos-20.09` or
`nixos-unstable`.
2. Specify your own git-repository with a custom package set[^2]. This makes it
possible to pull different tags, branches or commits by modifying the image
@@ -73,7 +73,7 @@ You must set *all* of these:
* `BUCKET`: [Google Cloud Storage][gcs] bucket to store & serve image layers
* `PORT`: HTTP port on which Nixery should listen
-You may set *one* of these, if unset Nixery defaults to `nixos-20.03`:
+You may set *one* of these, if unset Nixery defaults to `nixos-20.09`:
* `NIXERY_CHANNEL`: The name of a Nix/NixOS channel to use for building
* `NIXERY_PKGS_REPO`: URL of a git repository containing a package set (uses
diff --git a/tools/nixery/nixpkgs-pin.nix b/tools/nixery/nixpkgs-pin.nix
index ea1b37bfe7a0..ffedc92ed384 100644
--- a/tools/nixery/nixpkgs-pin.nix
+++ b/tools/nixery/nixpkgs-pin.nix
@@ -1,4 +1,4 @@
import (builtins.fetchTarball {
- url = "https://github.com/NixOS/nixpkgs-channels/archive/0a40a3999eb4d577418515da842a2622a64880c5.tar.gz";
- sha256 = "1j8gy2d61lmrp5gzi1a2jmb2v2pbk4b9666y8pf1pjg3jiqkzf7m";
+ url = "https://github.com/NixOS/nixpkgs/archive/4263ba5e133cc3fc699c1152ab5ee46ef668e675.tar.gz";
+ sha256 = "1nzqrdw0lhbldbs9r651zmgqpwhjhh9sssykhcl2155kgsfsrk7i";
}) {}
diff --git a/tools/nixery/prepare-image/prepare-image.nix b/tools/nixery/prepare-image/prepare-image.nix
index 7b73b92bfd9b..316bfbbf2712 100644
--- a/tools/nixery/prepare-image/prepare-image.nix
+++ b/tools/nixery/prepare-image/prepare-image.nix
@@ -24,7 +24,7 @@
{
# Description of the package set to be used (will be loaded by load-pkgs.nix)
srcType ? "nixpkgs",
- srcArgs ? "nixos-20.03",
+ srcArgs ? "nixos-20.09",
system ? "x86_64-linux",
importArgs ? { },
# Path to load-pkgs.nix
--
cgit 1.4.1
From 5c2db7b8ce4386bff4596eb0dfcc5d1f61dbf744 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 30 Apr 2021 12:44:16 +0200
Subject: chore(build): Use current git commit hash as build version
---
tools/nixery/default.nix | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 411865a8a40b..00ecad583c13 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -1,4 +1,4 @@
-# Copyright 2019 Google LLC
+# Copyright 2019-2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -22,12 +22,10 @@ with pkgs;
let
inherit (pkgs) buildGoPackage;
- # Hash of all Nixery sources - this is used as the Nixery version in
+ # Current Nixery commit - this is used as the Nixery version in
# builds to distinguish errors between deployed versions, see
# server/logs.go for details.
- nixery-src-hash = pkgs.runCommand "nixery-src-hash" {} ''
- echo ${./.} | grep -Eo '[a-z0-9]{32}' | head -c 32 > $out
- '';
+ nixery-commit-hash = pkgs.lib.commitIdFromGitRepo ./.git;
# Go implementation of the Nixery server which implements the
# container registry interface.
@@ -52,7 +50,7 @@ let
runHook renameImport
export GOBIN="$out/bin"
- go install -ldflags "-X main.version=$(cat ${nixery-src-hash})" ${goPackagePath}
+ go install -ldflags "-X main.version=$(cat ${nixery-commit-hash})" ${goPackagePath}
'';
fixupPhase = ''
--
From 13d97c9e51a3c6cc2abcb354bcd0519a49aeed68 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 30 Apr 2021 12:56:36 +0200
Subject: refactor(build): Pin dependencies using Go modules
Drops the go2nix configuration in favour of pkgs.buildGoModule.
Note that the go.sum file is bloated by issues with cyclic
dependencies in some Google projects, but this large number of
dependencies is not actually built.
---
tools/nixery/.github/workflows/build-and-test.yaml | 2 +-
tools/nixery/default.nix | 32 +-
tools/nixery/go-deps.nix | 138 ------
tools/nixery/go.mod | 11 +
tools/nixery/go.sum | 534 +++++++++++++++++++++
5 files changed, 553 insertions(+), 164 deletions(-)
delete mode 100644 tools/nixery/go-deps.nix
create mode 100644 tools/nixery/go.mod
create mode 100644 tools/nixery/go.sum
(limited to 'tools')
diff --git a/tools/nixery/.github/workflows/build-and-test.yaml b/tools/nixery/.github/workflows/build-and-test.yaml
index 4563239a852f..2c0aff49d218 100644
--- a/tools/nixery/.github/workflows/build-and-test.yaml
+++ b/tools/nixery/.github/workflows/build-and-test.yaml
@@ -20,6 +20,6 @@ jobs:
- name: Check formatting
run: "test -z $(gofmt -l .)"
- name: Build Nixery
- run: "nix-build --arg maxLayers 2 --no-out-link"
+ run: "nix-build --no-out-link"
- name: Run integration test
run: scripts/integration-test.sh
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 00ecad583c13..a5cc61e7e2ee 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -20,7 +20,7 @@
with pkgs;
let
- inherit (pkgs) buildGoPackage;
+ inherit (pkgs) buildGoModule;
# Current Nixery commit - this is used as the Nixery version in
# builds to distinguish errors between deployed versions, see
@@ -32,35 +32,17 @@ let
#
# Users should use the nixery-bin derivation below instead as it
# provides the paths of files needed at runtime.
- nixery-server = buildGoPackage rec {
+ nixery-server = buildGoModule rec {
name = "nixery-server";
- goDeps = ./go-deps.nix;
src = ./.;
-
- goPackagePath = "github.com/google/nixery";
doCheck = true;
- # Simplify the Nix build instructions for Go to just the basics
- # required to get Nixery up and running with the additional linker
- # flags required.
- outputs = [ "out" ];
- preConfigure = "bin=$out";
- buildPhase = ''
- runHook preBuild
- runHook renameImport
-
- export GOBIN="$out/bin"
- go install -ldflags "-X main.version=$(cat ${nixery-commit-hash})" ${goPackagePath}
- '';
+ # Needs to be updated after every modification of go.mod/go.sum
+ vendorSha256 = "1ff0kfww6fy6pnvyva7x8cc6l1d12aafps48wrkwawk2qjy9a8b9";
- fixupPhase = ''
- remove-references-to -t ${go} $out/bin/nixery
- '';
-
- checkPhase = ''
- go vet ${goPackagePath}
- go test ${goPackagePath}
- '';
+ buildFlagsArray = [
+ "-ldflags=-s -w -X main.version=${nixery-commit-hash}"
+ ];
};
in rec {
# Implementation of the Nix image building logic
diff --git a/tools/nixery/go-deps.nix b/tools/nixery/go-deps.nix
deleted file mode 100644
index 7d57ef204b8e..000000000000
--- a/tools/nixery/go-deps.nix
+++ /dev/null
@@ -1,138 +0,0 @@
-# This file was generated by https://github.com/kamilchm/go2nix v1.3.0
-[
- {
- goPackagePath = "cloud.google.com/go";
- fetch = {
- type = "git";
- url = "https://github.com/googleapis/google-cloud-go";
- rev = "22b9552106761e34e39c0cf48b783f092c660767";
- sha256 = "17sb3753h1m5pbjlv4r5ydbd35kfh086g2qxv2zjlqd90kcsdj7x";
- };
- }
- {
- goPackagePath = "github.com/golang/groupcache";
- fetch = {
- type = "git";
- url = "https://github.com/golang/groupcache";
- rev = "8c9f03a8e57eb486e42badaed3fb287da51807ba";
- sha256 = "0vjjr79r32icjzlb05wn02k59av7jx0rn1jijml8r4whlg7dnkfh";
- };
- }
- {
- goPackagePath = "github.com/golang/protobuf";
- fetch = {
- type = "git";
- url = "https://github.com/golang/protobuf";
- rev = "d04d7b157bb510b1e0c10132224b616ac0e26b17";
- sha256 = "0m5z81im4nsyfgarjhppayk4hqnrwswr3nix9mj8pff8x9jvcjqw";
- };
- }
- {
- goPackagePath = "github.com/googleapis/gax-go";
- fetch = {
- type = "git";
- url = "https://github.com/googleapis/gax-go";
- rev = "be11bb253a768098254dc71e95d1a81ced778de3";
- sha256 = "072iv8llzr99za4pndfskgndq9rzms38r0sqy4d127ijnzmgl5nd";
- };
- }
- {
- goPackagePath = "github.com/sirupsen/logrus";
- fetch = {
- type = "git";
- url = "https://github.com/sirupsen/logrus";
- rev = "6699a89a232f3db797f2e280639854bbc4b89725";
- sha256 = "1a59pw7zimvm8k423iq9l4f4qjj1ia1xc6pkmhwl2mxc46y2n442";
- };
- }
- {
- goPackagePath = "go.opencensus.io";
- fetch = {
- type = "git";
- url = "https://github.com/census-instrumentation/opencensus-go";
- rev = "d7677d6af5953e0506ac4c08f349c62b917a443a";
- sha256 = "1nphs1qjz4a99b2n4izpqw13shiy26jy0i7qgm95giyhwwdqspk0";
- };
- }
- {
- goPackagePath = "golang.org/x/net";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/net";
- rev = "627f9648deb96c27737b83199d44bb5c1010cbcf";
- sha256 = "0ziz7i9mhz6dy2f58dsa83flkk165w1cnazm7yksql5i9m7x099z";
- };
- }
- {
- goPackagePath = "golang.org/x/oauth2";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/oauth2";
- rev = "bf48bf16ab8d622ce64ec6ce98d2c98f916b6303";
- sha256 = "1sirdib60zwmh93kf9qrx51r8544k1p9rs5mk0797wibz3m4mrdg";
- };
- }
- {
- goPackagePath = "golang.org/x/sys";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/sys";
- rev = "226ff32320da7b90d0b5bc2365f4e359c466fb78";
- sha256 = "137cdvmmrmx8qf72r94pn6zxp0wg9rfl0ii2fa9jk5hdkhifiqa6";
- };
- }
- {
- goPackagePath = "golang.org/x/text";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/text";
- rev = "3a82255431918bb7c2e1c09c964a18991756910b";
- sha256 = "1vrps79ap8dy7car64pf0j4hnb1j2hwp9wf67skv6izmi8r9425w";
- };
- }
- {
- goPackagePath = "gonum.org/v1/gonum";
- fetch = {
- type = "git";
- url = "https://github.com/gonum/gonum";
- rev = "f69c0ac28e32bbfe5ccf3f821d85533b5f646b04";
- sha256 = "1qpws9899qyr0m2v4v7asxgy1z0wh9f284ampc2ha5c0hp0x6r4m";
- };
- }
- {
- goPackagePath = "google.golang.org/api";
- fetch = {
- type = "git";
- url = "https://github.com/googleapis/google-api-go-client";
- rev = "4ec5466f5645b0f7f76ecb2246e7c5f3568cf8bb";
- sha256 = "1clrw2syb40a51zqpaw0mri8jk54cqp3scjwxq44hqr7cmqp0967";
- };
- }
- {
- goPackagePath = "google.golang.org/genproto";
- fetch = {
- type = "git";
- url = "https://github.com/googleapis/go-genproto";
- rev = "43cab4749ae7254af90e92cb2cd767dfc641f6dd";
- sha256 = "0svbzzf9drd4ndxwj5rlggly4jnds9c76fkcq5f8czalbf5mlvb6";
- };
- }
- {
- goPackagePath = "google.golang.org/grpc";
- fetch = {
- type = "git";
- url = "https://github.com/grpc/grpc-go";
- rev = "9106c3fff5236fd664a8de183f1c27682c66b823";
- sha256 = "02m1k06p6a5fc6ahj9qgd81wdzspa9xc29gd7awygwiyk22rd3md";
- };
- }
- {
- goPackagePath = "google.golang.org/protobuf";
- fetch = {
- type = "git";
- url = "https://go.googlesource.com/protobuf";
- rev = "a0f95d5b14735d688bd94ca511d396115e5b86be";
- sha256 = "0l4xsfz337vmlij7pg8s2bjwlmrk4g83p1b9n8vnfavrmnrcw6j0";
- };
- }
-]
diff --git a/tools/nixery/go.mod b/tools/nixery/go.mod
new file mode 100644
index 000000000000..e24b2b6ff579
--- /dev/null
+++ b/tools/nixery/go.mod
@@ -0,0 +1,11 @@
+module github.com/google/nixery
+
+go 1.15
+
+require (
+ cloud.google.com/go/storage v1.15.0
+ github.com/google/go-cmp v0.5.5
+ github.com/sirupsen/logrus v1.8.1
+ golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c
+ gonum.org/v1/gonum v0.9.1
+)
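The `go.sum` file introduced below is derived from this `go.mod`; when the module requirements change it can be regenerated with standard Go tooling (a sketch, assuming the command is run inside `tools/nixery`):

```shell
# Regenerate go.sum from go.mod; afterwards vendorSha256 in default.nix must
# also be refreshed (see the note in that file above).
go mod tidy
```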
diff --git a/tools/nixery/go.sum b/tools/nixery/go.sum
new file mode 100644
index 000000000000..49307968374d
--- /dev/null
+++ b/tools/nixery/go.sum
@@ -0,0 +1,534 @@
+cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
+cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
+cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
+cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
+cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
+cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
+cloud.google.com/go v0.52.0/go.mod h1:pXajvRH/6o3+F9jDHZWQ5PbGhn+o8w9qiu/CffaVdO4=
+cloud.google.com/go v0.53.0/go.mod h1:fp/UouUEsRkN6ryDKNW/Upv/JBKnv6WDthjR6+vze6M=
+cloud.google.com/go v0.54.0/go.mod h1:1rq2OEkV3YMf6n/9ZvGWI3GWw0VoqH/1x2nd8Is/bPc=
+cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKVk=
+cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
+cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
+cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
+cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
+cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
+cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
+cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
+cloud.google.com/go v0.81.0 h1:at8Tk2zUz63cLPR0JPWm5vp77pEZmzxEQBEfRKn1VV8=
+cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
+cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
+cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
+cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
+cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
+cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
+cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
+cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
+cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
+cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
+cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
+cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
+cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
+cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
+cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
+cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
+cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
+cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
+cloud.google.com/go/storage v1.15.0 h1:Ljj+ZXVEhCr/1+4ZhvtteN1ND7UUsNTlduGclLh8GO0=
+cloud.google.com/go/storage v1.15.0/go.mod h1:mjjQMoxxyGH7Jr8K5qrx6N2O0AHsczI61sMNn03GIZI=
+dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
+gioui.org v0.0.0-20210308172011-57750fc8a0a6/go.mod h1:RSH6KIUZ0p2xy5zHDxgAM4zumjgTw83q2ge/PI+yyw8=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw=
+github.com/boombuler/barcode v1.0.0/go.mod h1:paBWMcWSl3LHKBqUq+rly7CNSldXjb2rDl3JlRe0mD8=
+github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
+github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
+github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
+github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
+github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
+github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
+github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
+github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
+github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
+github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
+github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
+github.com/fogleman/gg v1.3.0/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
+github.com/go-fonts/dejavu v0.1.0/go.mod h1:4Wt4I4OU2Nq9asgDCteaAaWZOV24E+0/Pwo0gppep4g=
+github.com/go-fonts/latin-modern v0.2.0/go.mod h1:rQVLdDMK+mK1xscDwsqM5J8U2jrRa3T0ecnM9pNujks=
+github.com/go-fonts/liberation v0.1.1/go.mod h1:K6qoJYypsmfVjWg8KOVDQhLc8UDgIK2HYqyqAO9z7GY=
+github.com/go-fonts/stix v0.1.0/go.mod h1:w/c1f0ldAUlJmLBvlbkvVXLAD+tAMqobIIQpmnUIzUY=
+github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
+github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-latex/latex v0.0.0-20210118124228-b3d85cf34e07/go.mod h1:CO1AlKB2CSIqUrmQPqA0gdRIlnLEY0gK5JGjh37zN5U=
+github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGwJL78qG/PmXZO1EjYhfJinVAhrmmHX6Z8B9k=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
+github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
+github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
+github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.4/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
+github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
+github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
+github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
+github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
+github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
+github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
+github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
+github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
+github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
+github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
+github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
+github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.4.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
+github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
+github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
+github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
+github.com/google/martian/v3 v3.1.0 h1:wCKgOCHuUEVfsaQLpPSJb7VdYCdTVZQAuOdYm1yc/60=
+github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
+github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200212024743-f11f1df84d12/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200229191704-1ebb73c60ed3/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
+github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
+github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
+github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
+github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
+github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
+github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
+github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
+github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o=
+github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
+github.com/jung-kurt/gofpdf v1.0.0/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
+github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/phpdave11/gofpdf v1.4.2/go.mod h1:zpO6xFn9yxo3YLyMvW8HcKWVdbNqgIfOOp2dXMnm1mY=
+github.com/phpdave11/gofpdi v1.0.12/go.mod h1:vBmVV0Do6hSBHC8uKUQ71JGW+ZGQq74llk/7bXwjDoI=
+github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+github.com/ruudk/golang-pdf417 v0.0.0-20181029194003-1af4ab5afa58/go.mod h1:6lfFZQK844Gfx8o5WFuvpxWRwnSoipWe/p622j1v06w=
+github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
+github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
+github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
+github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
+go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
+go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
+go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
+go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
+go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
+golang.org/x/exp v0.0.0-20191002040644-a1355ae1e2c3/go.mod h1:NOZ3BPKG0ec/BKJQgnvsSFpcKLM5xXVWnvZS97DWHgE=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
+golang.org/x/exp v0.0.0-20191129062945-2f5052295587/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
+golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
+golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6 h1:QE6XYQK6naiK1EPAe1g/ILLxN5RBoH5xkJk3CqlMI/Y=
+golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
+golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
+golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
+golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/image v0.0.0-20190910094157-69e4b8554b2a/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/image v0.0.0-20200119044424-58c23975cae1/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/image v0.0.0-20200430140353-33d19683fad8/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/image v0.0.0-20200618115811-c13761719519/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/image v0.0.0-20201208152932-35266b937fa6/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/image v0.0.0-20210216034530-4410531fe030/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
+golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
+golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5 h1:2M3HP5CCK1Si9FQhwnzYhXdG6DXeebvUHFpre8QvbyI=
+golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
+golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
+golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
+golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
+golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.1 h1:Kvvh58BN8Y9/lBi7hTekvtMpm07eUZ0ck5pRHpsMWrY=
+golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200222125558-5a598a2470a0/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200501053045-e0ff5e5a1de5/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200506145744-7e3656a0809f/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200513185701-a91f0712d120/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
+golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4 h1:b0LrWgu8+q7z4J+0Y3Umo5q1dL7NXBkKBWkaVkAq17E=
+golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
+golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
+golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20200902213428-5d25da1a8d43/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20201109201403-9fd604954f58/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210413134643-5e61552d6c78/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c h1:SgVl/sCtkicsS7psKkje4H9YtjdEl3xsYh7N+5TDHqY=
+golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210304124612-50617c2ba197/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210412220455-f1c623a9e750 h1:ZBu6861dZq7xBnG1bn5SRU0vA8nx42at4+kP07FMTog=
+golang.org/x/sys v0.0.0-20210412220455-f1c623a9e750/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
+golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
+golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190206041539-40960b6deb8e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190927191325-030b2cf1153e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191125144606-a911d9008d1f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191216173652-a0e659d51361/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20191227053925-7b8e75db28f4/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200117161641-43d50277825c/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200122220014-bf1340f18c4a/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200204074204-1cc6d1ef6c74/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200207183749-b753a1ba74fa/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200212150539-ea181f53ac56/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
+golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
+golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200512131952-2bc93b1c0c88/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200515010526-7d3b6ebf133d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200618134242-20370b0cb4b2/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
+golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
+golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.1.0 h1:po9/4sTYwZU9lPhi1tOrb4hCv3qrhiQ77LZfGa2OjwY=
+golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+gonum.org/v1/gonum v0.0.0-20180816165407-929014505bf4/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
+gonum.org/v1/gonum v0.8.2/go.mod h1:oe/vMfY3deqTw+1EZJhuvEW2iwGF1bW9wwu7XCu0+v0=
+gonum.org/v1/gonum v0.9.1 h1:HCWmqqNoELL0RAQeKBXWtkp04mGk8koafcB4He6+uhc=
+gonum.org/v1/gonum v0.9.1/go.mod h1:TZumC3NeyVQskjXqmyWt4S3bINhy7B4eYwW69EbyX+0=
+gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0 h1:OE9mWmgKkjJyEmDAAtGMPjXu+YNeGvK9VTSHY6+Qihc=
+gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0/go.mod h1:wa6Ws7BG/ESfp6dHfk7C6KdzKA7wR7u/rKwOGE66zvw=
+gonum.org/v1/plot v0.0.0-20190515093506-e2840ee46a6b/go.mod h1:Wt8AAjI+ypCyYX3nZBvf6cAIx93T+c/OS2HFAYskSZc=
+gonum.org/v1/plot v0.9.0/go.mod h1:3Pcqqmp6RHvJI72kgb8fThyUnav364FOsdDo2aGW5lY=
+google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
+google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
+google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.15.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.17.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.18.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.19.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.20.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.22.0/go.mod h1:BwFmGc8tA3vsd7r/7kR8DY7iEEGSU04BFxCo5jP/sfE=
+google.golang.org/api v0.24.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
+google.golang.org/api v0.28.0/go.mod h1:lIXQywCXRcnZPGlsd8NbLnOjtAoL6em04bJ9+z0MncE=
+google.golang.org/api v0.29.0/go.mod h1:Lcubydp8VUV7KeIHD9z2Bys/sm/vGKnG1UHuDBSrHWM=
+google.golang.org/api v0.30.0/go.mod h1:QGmEvQ87FHZNiUVJkT14jQNYJ4ZJjdRF23ZXz5138Fc=
+google.golang.org/api v0.35.0/go.mod h1:/XrVsuzM0rZmrsbjJutiuftIzeuTQcEeaYcSk/mQ1dg=
+google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34qYtE=
+google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
+google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
+google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
+google.golang.org/api v0.45.0 h1:pqMffJFLBVUDIoYsHcqtxgQVTsmxMDpYLOc5MT4Jrww=
+google.golang.org/api v0.45.0/go.mod h1:ISLIJCedJolbZvDfAk+Ctuq5hf+aJ33WgtUsfyFoLXA=
+google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
+google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
+google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
+google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200224152610-e50cd9704f63/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200228133532-8c2c7df3a383/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200305110556-506484158171/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
+google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
+google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20200904004341-0bd0a958aa1d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
+google.golang.org/genproto v0.0.0-20210413151531-c14fb6ef47c3/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
+google.golang.org/genproto v0.0.0-20210420162539-3c870d7478d2 h1:g2sJMUGCpeHZqTx8p3wsAWRS64nFq20i4dvJWcKGqvY=
+google.golang.org/genproto v0.0.0-20210420162539-3c870d7478d2/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
+google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
+google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.27.1/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
+google.golang.org/grpc v1.28.0/go.mod h1:rpkK4SK4GF4Ach/+MFLZUBavHOvF2JJB5uozKKal+60=
+google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3IjizoKk=
+google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
+google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
+google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
+google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
+google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
+google.golang.org/grpc v1.37.0 h1:uSZWeQJX5j11bIQ4AJoj+McDBo29cY1MCoC1wO3ts+c=
+google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
+google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
+google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
+google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
+google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
+google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
+google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
+google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
+google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
+google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
+gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
+honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
+rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
+rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
+rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
--
cgit 1.4.1
From 768f3986a9399c82fc61ddcd4865d10f3bb93351 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 30 Apr 2021 13:01:16 +0200
Subject: feat(build): Run `go vet` as a step in the GitHub Actions workflow
---
tools/nixery/.github/workflows/build-and-test.yaml | 2 ++
tools/nixery/popcount/popcount.go | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/.github/workflows/build-and-test.yaml b/tools/nixery/.github/workflows/build-and-test.yaml
index 2c0aff49d218..d3f258ffaac4 100644
--- a/tools/nixery/.github/workflows/build-and-test.yaml
+++ b/tools/nixery/.github/workflows/build-and-test.yaml
@@ -19,6 +19,8 @@ jobs:
run: nix-env -f '' -iA go
- name: Check formatting
run: "test -z $(gofmt -l .)"
+ - name: Run `go vet`
+ run: "go vet ./..."
- name: Build Nixery
run: "nix-build --no-out-link"
- name: Run integration test
diff --git a/tools/nixery/popcount/popcount.go b/tools/nixery/popcount/popcount.go
index d632090c0dc2..dab10aae64c0 100644
--- a/tools/nixery/popcount/popcount.go
+++ b/tools/nixery/popcount/popcount.go
@@ -90,7 +90,7 @@ func channelMetadata(channel string) meta {
commit, err := ioutil.ReadAll(commitResp.Body)
failOn(err, "failed to read commit from response")
if commitResp.StatusCode != 200 {
- log.Fatalf("non-success status code when fetching commit: %s", string(commit), commitResp.StatusCode)
+ log.Fatalf("non-success status code when fetching commit: %s (%v)", string(commit), commitResp.StatusCode)
}
return meta{
--
cgit 1.4.1
From 3efbbfcd4e22811dd549c6ed46374736308b0b82 Mon Sep 17 00:00:00 2001
From: Florian Klink
Date: Fri, 18 Jun 2021 21:08:05 +0200
Subject: feat(ci): don't mount /var/cache/nixery from tmpfs into docker
container
With https://github.com/google/nixery/pull/127, nixery will use extended
attributes to store metadata (when using local storage).
Right now, our integration test mounts a tmpfs to /var/cache/nixery.
However, *user* xattrs aren't supported with tmpfs [1], so setting
xattrs would fail.
To work around this, use a folder in the current working directory and
hope it's backed by something supporting user xattrs (which is the case
for GitHub Actions).
[1]: https://man7.org/linux/man-pages/man5/tmpfs.5.html#NOTES
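For reference, user-xattr support can be probed with the same
github.com/pkg/xattr library the server uses. This is only an
illustrative sketch, not part of this change; the `supportsUserXattrs`
helper and the probed directory name are hypothetical:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"

	"github.com/pkg/xattr"
)

// supportsUserXattrs reports whether the filesystem backing dir accepts
// user.* extended attributes. On tmpfs this returns false, which is why
// the integration test switches to a bind-mounted directory instead.
func supportsUserXattrs(dir string) bool {
	probe := filepath.Join(dir, ".xattr-probe")
	if err := ioutil.WriteFile(probe, []byte{}, 0644); err != nil {
		return false
	}
	defer os.Remove(probe)

	// Setting a throwaway user attribute fails (e.g. with ENOTSUP) on
	// filesystems that lack user xattr support.
	return xattr.Set(probe, "user.probe", []byte("1")) == nil
}

func main() {
	fmt.Println(supportsUserXattrs("var-cache-nixery"))
}
```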
---
tools/nixery/.gitignore | 3 +++
tools/nixery/scripts/integration-test.sh | 12 ++++++++++--
2 files changed, 13 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/.gitignore b/tools/nixery/.gitignore
index 4203fee19569..578eea392301 100644
--- a/tools/nixery/.gitignore
+++ b/tools/nixery/.gitignore
@@ -7,3 +7,6 @@ debug/
*.pem
*.p12
*.json
+
+# Created by the integration test
+var-cache-nixery
diff --git a/tools/nixery/scripts/integration-test.sh b/tools/nixery/scripts/integration-test.sh
index 9595f37d9417..9d06e96ba29c 100755
--- a/tools/nixery/scripts/integration-test.sh
+++ b/tools/nixery/scripts/integration-test.sh
@@ -10,9 +10,17 @@ echo "Loaded Nixery image as ${IMG}"
# Run the built nixery docker image in the background, but keep printing its
# output as it occurs.
-docker run --rm -p 8080:8080 --name nixery \
+# We can't just mount a tmpfs to /var/cache/nixery, as tmpfs doesn't support
+# user xattrs.
+# So create a temporary directory in the current working directory, and hope
+# it's backed by something supporting user xattrs.
+# We'll notice it isn't if nixery starts complaining about not being able to
+# set xattrs anyway.
+if [ -d var-cache-nixery ]; then rm -Rf var-cache-nixery; fi
+mkdir var-cache-nixery
+docker run --privileged --rm -p 8080:8080 --name nixery \
-e PORT=8080 \
- --mount type=tmpfs,destination=/var/cache/nixery \
+ --mount "type=bind,source=${PWD}/var-cache-nixery,target=/var/cache/nixery" \
-e NIXERY_CHANNEL=nixos-unstable \
-e NIXERY_STORAGE_BACKEND=filesystem \
-e STORAGE_PATH=/var/cache/nixery \
--
cgit 1.4.1
From 94e04a76b6d2baf0cc5060c3168a428fae6c28ab Mon Sep 17 00:00:00 2001
From: Jérôme Petazzoni
Date: Thu, 22 Apr 2021 16:52:12 +0200
Subject: feat(storage): Store blob content-type in extended attributes
After the discussion in #116, this stores the blob content types
in extended attributes when using the filesystem backend.
If the underlying filesystem doesn't support extended attributes,
storing blobs won't work; also, if extended attributes get removed,
blobs won't be served anymore. We can relax this behavior if
needed (i.e. log errors but still store or serve blobs).
However, since the Docker Engine (and possibly other container
engines) won't pull images from a registry that doesn't use correct
content types for manifest files, it could be argued that it's
better to fail hard. (Otherwise, the container
engine gives cryptic error messages like "missing signature key".)
I can change that behavior (and log errors but still store/serve
blobs to the filesystem) if you think it's better.
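The mechanism boils down to a set/get round trip on the
`user.mime_type` attribute. A minimal standalone sketch (not part of
this change; the blob path and media type value below are
illustrative only):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"

	"github.com/pkg/xattr"
)

func main() {
	// Illustrative blob location; the real backend stores blobs under
	// its configured STORAGE_PATH.
	blob := filepath.Join(os.TempDir(), "nixery-blob-example")
	if err := ioutil.WriteFile(blob, []byte("layer data"), 0644); err != nil {
		log.Fatal(err)
	}

	// Persist() stores the content type alongside the blob data ...
	mime := "application/vnd.docker.image.rootfs.diff.tar.gzip"
	if err := xattr.Set(blob, "user.mime_type", []byte(mime)); err != nil {
		// Hard failure, mirroring the behaviour described above.
		log.Fatal(err)
	}

	// ... and Serve() reads it back to populate the Content-Type header.
	ct, err := xattr.Get(blob, "user.mime_type")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(ct))
}
```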
---
tools/nixery/default.nix | 2 +-
tools/nixery/go.mod | 1 +
tools/nixery/go.sum | 3 +++
tools/nixery/storage/filesystem.go | 17 +++++++++++++++--
4 files changed, 20 insertions(+), 3 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index a5cc61e7e2ee..39003537516c 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -38,7 +38,7 @@ let
doCheck = true;
# Needs to be updated after every modification of go.mod/go.sum
- vendorSha256 = "1ff0kfww6fy6pnvyva7x8cc6l1d12aafps48wrkwawk2qjy9a8b9";
+ vendorSha256 = "1adjav0dxb97ws0w2k50rhk6r46wvfry6aj4sik3ninl525kd15s";
buildFlagsArray = [
"-ldflags=-s -w -X main.version=${nixery-commit-hash}"
diff --git a/tools/nixery/go.mod b/tools/nixery/go.mod
index e24b2b6ff579..3b819a7965c0 100644
--- a/tools/nixery/go.mod
+++ b/tools/nixery/go.mod
@@ -5,6 +5,7 @@ go 1.15
require (
cloud.google.com/go/storage v1.15.0
github.com/google/go-cmp v0.5.5
+ github.com/pkg/xattr v0.4.3
github.com/sirupsen/logrus v1.8.1
golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c
gonum.org/v1/gonum v0.9.1
diff --git a/tools/nixery/go.sum b/tools/nixery/go.sum
index 49307968374d..812babcf3e6f 100644
--- a/tools/nixery/go.sum
+++ b/tools/nixery/go.sum
@@ -158,6 +158,8 @@ github.com/phpdave11/gofpdf v1.4.2/go.mod h1:zpO6xFn9yxo3YLyMvW8HcKWVdbNqgIfOOp2
github.com/phpdave11/gofpdi v1.0.12/go.mod h1:vBmVV0Do6hSBHC8uKUQ71JGW+ZGQq74llk/7bXwjDoI=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/xattr v0.4.3 h1:5Jx4GCg5ABtqWZH8WLzeI4fOtM1HyX4RBawuCoua1es=
+github.com/pkg/xattr v0.4.3/go.mod h1:sBD3RAqlr8Q+RC3FutZcikpT8nyDrIEEBw2J744gVWs=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -322,6 +324,7 @@ golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201101102859-da207088b7d1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
diff --git a/tools/nixery/storage/filesystem.go b/tools/nixery/storage/filesystem.go
index bd757587b4db..2be3457f324a 100644
--- a/tools/nixery/storage/filesystem.go
+++ b/tools/nixery/storage/filesystem.go
@@ -23,6 +23,7 @@ import (
"os"
"path"
+ "github.com/pkg/xattr"
log "github.com/sirupsen/logrus"
)
@@ -49,8 +50,7 @@ func (b *FSBackend) Name() string {
return fmt.Sprintf("Filesystem (%s)", b.path)
}
-// TODO(tazjin): Implement support for persisting content-types for the filesystem backend.
-func (b *FSBackend) Persist(ctx context.Context, key, _type string, f Persister) (string, int64, error) {
+func (b *FSBackend) Persist(ctx context.Context, key, contentType string, f Persister) (string, int64, error) {
full := path.Join(b.path, key)
dir := path.Dir(full)
err := os.MkdirAll(dir, 0755)
@@ -66,6 +66,12 @@ func (b *FSBackend) Persist(ctx context.Context, key, _type string, f Persister)
}
defer file.Close()
+ err = xattr.Set(full, "user.mime_type", []byte(contentType))
+ if err != nil {
+ log.WithError(err).WithField("file", full).Error("failed to store file type in xattrs")
+ return "", 0, err
+ }
+
return f(file)
}
@@ -92,6 +98,13 @@ func (b *FSBackend) Serve(digest string, r *http.Request, w http.ResponseWriter)
"path": p,
}).Info("serving blob from filesystem")
+ contentType, err := xattr.Get(p, "user.mime_type")
+ if err != nil {
+ log.WithError(err).WithField("file", p).Error("failed to read file type from xattrs")
+ return err
+ }
+ w.Header().Add("Content-Type", string(contentType))
+
http.ServeFile(w, r, p)
return nil
}
--
cgit 1.4.1
From 84fb380f574aa4dac4e4cc4c9d0cded8a4c54fa3 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Sat, 26 Jun 2021 01:32:07 +0200
Subject: docs: Update build badge in README
Moves the build badge to point at GitHub Actions instead of the old (failing) Travis build
---
tools/nixery/README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index cebf28b58492..745e73cb4930 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -4,7 +4,7 @@
-----------------
-[![Build Status](https://travis-ci.org/google/nixery.svg?branch=master)](https://travis-ci.org/google/nixery)
+[![Build Status](https://github.com/google/nixery/actions/workflows/build-and-test.yaml/badge.svg)](https://github.com/google/nixery/actions/workflows/build-and-test.yaml)
**Nixery** is a Docker-compatible container registry that is capable of
transparently building and serving container images using [Nix][].
--
cgit 1.4.1
From 02455bd0fdf42c698524320c8d43b2bd7ef11c3b Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 6 Aug 2021 14:21:47 +0300
Subject: chore(build): Allow passing in a specific commit hash when building
Required for builds where the full repository isn't available (e.g.
from a tarball).
---
tools/nixery/default.nix | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 39003537516c..696420d0f048 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -15,7 +15,8 @@
{ pkgs ? import ./nixpkgs-pin.nix
, preLaunch ? ""
, extraPackages ? []
-, maxLayers ? 20 }:
+, maxLayers ? 20
+, commitHash ? null }@args:
with pkgs;
@@ -25,7 +26,7 @@ let
# Current Nixery commit - this is used as the Nixery version in
# builds to distinguish errors between deployed versions, see
# server/logs.go for details.
- nixery-commit-hash = pkgs.lib.commitIdFromGitRepo ./.git;
+ nixery-commit-hash = args.commitHash or pkgs.lib.commitIdFromGitRepo ./.git;
# Go implementation of the Nixery server which implements the
# container registry interface.
--
cgit 1.4.1
From af337010e964671e2f8568a0167a4f8f9eb08216 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Wed, 25 Aug 2021 13:53:31 +0300
Subject: feat(prepare-image): Ensure /usr/bin/env is always present
This is required by common patterns in shell scripts.
There are some caveats around this. Adding logic to check whether
coreutils is included in an image would slow down the Nix evaluation,
so the link is currently created even in cases where it doesn't point
to anything.
Fixes #109
---
tools/nixery/prepare-image/prepare-image.nix | 12 ++++++++++++
1 file changed, 12 insertions(+)
(limited to 'tools')
diff --git a/tools/nixery/prepare-image/prepare-image.nix b/tools/nixery/prepare-image/prepare-image.nix
index 316bfbbf2712..56f9e7a3bf5c 100644
--- a/tools/nixery/prepare-image/prepare-image.nix
+++ b/tools/nixery/prepare-image/prepare-image.nix
@@ -132,6 +132,18 @@ let
contentsEnv = symlinkJoin {
name = "bulk-layers";
paths = allContents.contents;
+
+ # Ensure that there is always a /usr/bin/env for shell scripts
+ # that require it.
+ #
+ # Note that images which do not actually contain `coreutils` will
+ # still have this symlink, but it will be dangling.
+ #
+ # TODO(tazjin): Don't link this if coreutils is not included.
+ postBuild = ''
+ mkdir -p $out/usr/bin
+ ln -s ${coreutils}/bin/env $out/usr/bin/env
+ '';
};
# Image layer that contains the symlink forest created above. This
--
cgit 1.4.1
From dd778e77665f41f9aa9ad75d34bdf0b0d0dc6953 Mon Sep 17 00:00:00 2001
From: Jérôme Petazzoni
Date: Mon, 4 Oct 2021 19:08:25 +0200
Subject: revert: "feat(storage): Add generic support for content-types"
This reverts commit 7db252f36a68d875429a25e06d88fbfc804d84fd.
Superseded by the implementation in #127.
---
tools/nixery/main.go | 10 ----------
1 file changed, 10 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/main.go b/tools/nixery/main.go
index 6af4636e51a0..d94d51b4681e 100644
--- a/tools/nixery/main.go
+++ b/tools/nixery/main.go
@@ -195,16 +195,6 @@ func (h *registryHandler) serveManifestTag(w http.ResponseWriter, r *http.Reques
// serveBlob serves a blob from storage by digest
func (h *registryHandler) serveBlob(w http.ResponseWriter, r *http.Request, blobType, digest string) {
storage := h.state.Storage
- switch blobType {
- case "manifests":
- // It is necessary to set the correct content-type when serving manifests.
- // Otherwise, you may get the following mysterious error message when pulling:
- // "Error response from daemon: missing signature key"
- w.Header().Add("Content-Type", mf.ManifestType)
- case "blobs":
- // It is not strictly necessary to set this content-type, but since we're here...
- w.Header().Add("Content-Type", mf.LayerType)
- }
err := storage.Serve(digest, r, w)
if err != nil {
log.WithError(err).WithFields(log.Fields{
--
cgit 1.4.1
From 9929891f563cf5d8c004d89a64867ae1533746fe Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 29 Oct 2021 17:25:58 +0200
Subject: docs: Remove note about unsupported Google projects
I no longer work at Google and the repo has moved, so this is no
longer relevant.
---
tools/nixery/README.md | 2 --
1 file changed, 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 745e73cb4930..90799fac3feb 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -24,8 +24,6 @@ You can watch the NixCon 2019 [talk about
Nixery](https://www.youtube.com/watch?v=pOI9H4oeXqA) for more information about
the project and its use-cases.
-This is not an officially supported Google project.
-
## Demo
Click the image to see an example in which an image containing an interactive
--
cgit 1.4.1
From f4daffbb50f947a49ec17e51d846f1ebb15fd41b Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 29 Oct 2021 17:29:33 +0200
Subject: chore(docs): Bump included nix-1p version
... basically never updated this, oops.
---
tools/nixery/docs/default.nix | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/default.nix b/tools/nixery/docs/default.nix
index fdf0e6ff9ea5..d27cbe5b3e9e 100644
--- a/tools/nixery/docs/default.nix
+++ b/tools/nixery/docs/default.nix
@@ -24,8 +24,8 @@ let
nix-1p = fetchFromGitHub {
owner = "tazjin";
repo = "nix-1p";
- rev = "e0a051a016b9118bea90ec293d6cd346b9707e77";
- sha256 = "0d1lfkxg03lki8dc3229g1cgqiq3nfrqgrknw99p6w0zk1pjd4dj";
+ rev = "9f0baf5e270128d9101ba4446cf6844889e399a2";
+ sha256 = "1pf9i90gn98vz67h296w5lnwhssk62dc6pij983dff42dbci7lhj";
};
in runCommand "nixery-book" { } ''
mkdir -p $out
--
cgit 1.4.1
From 485b8aa929f7ae07ea917814018f6c416556e44a Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 29 Oct 2021 17:36:05 +0200
Subject: chore: Bump nixpkgs pin to nixos-unstable 2021-10-29
---
tools/nixery/nixpkgs-pin.nix | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/nixpkgs-pin.nix b/tools/nixery/nixpkgs-pin.nix
index ffedc92ed384..d868855b5b3b 100644
--- a/tools/nixery/nixpkgs-pin.nix
+++ b/tools/nixery/nixpkgs-pin.nix
@@ -1,4 +1,4 @@
import (builtins.fetchTarball {
- url = "https://github.com/NixOS/nixpkgs/archive/4263ba5e133cc3fc699c1152ab5ee46ef668e675.tar.gz";
- sha256 = "1nzqrdw0lhbldbs9r651zmgqpwhjhh9sssykhcl2155kgsfsrk7i";
+ url = "https://github.com/NixOS/nixpkgs/archive/2deb07f3ac4eeb5de1c12c4ba2911a2eb1f6ed61.tar.gz";
+ sha256 = "0036sv1sc4ddf8mv8f8j9ifqzl3fhvsbri4z1kppn0f1zk6jv9yi";
}) {}
--
cgit 1.4.1
From fc62b905142a2e43b0c94d291e820b4b0426f258 Mon Sep 17 00:00:00 2001
From: Vincent Ambo
Date: Fri, 29 Oct 2021 19:13:19 +0200
Subject: chore: Bump all Go dependencies
Result of 'go get -u && go mod tidy'
---
tools/nixery/default.nix | 2 +-
tools/nixery/go.mod | 22 ++++--
tools/nixery/go.sum | 181 +++++++++++++++++++++++++++++++++++++++--------
3 files changed, 169 insertions(+), 36 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/default.nix b/tools/nixery/default.nix
index 696420d0f048..19143fccf6f9 100644
--- a/tools/nixery/default.nix
+++ b/tools/nixery/default.nix
@@ -39,7 +39,7 @@ let
doCheck = true;
# Needs to be updated after every modification of go.mod/go.sum
- vendorSha256 = "1adjav0dxb97ws0w2k50rhk6r46wvfry6aj4sik3ninl525kd15s";
+ vendorSha256 = "1xnmyz2a5s5sck0fzhcz51nds4s80p0jw82dhkf4v2c4yzga83yk";
buildFlagsArray = [
"-ldflags=-s -w -X main.version=${nixery-commit-hash}"
diff --git a/tools/nixery/go.mod b/tools/nixery/go.mod
index 3b819a7965c0..dfaeb7206820 100644
--- a/tools/nixery/go.mod
+++ b/tools/nixery/go.mod
@@ -3,10 +3,22 @@ module github.com/google/nixery
go 1.15
require (
- cloud.google.com/go/storage v1.15.0
- github.com/google/go-cmp v0.5.5
- github.com/pkg/xattr v0.4.3
+ cloud.google.com/go/storage v1.18.2
+ github.com/census-instrumentation/opencensus-proto v0.3.0 // indirect
+ github.com/cespare/xxhash/v2 v2.1.2 // indirect
+ github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4 // indirect
+ github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1 // indirect
+ github.com/envoyproxy/go-control-plane v0.10.0 // indirect
+ github.com/envoyproxy/protoc-gen-validate v0.6.2 // indirect
+ github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
+ github.com/google/go-cmp v0.5.6
+ github.com/pkg/xattr v0.4.4
github.com/sirupsen/logrus v1.8.1
- golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c
- gonum.org/v1/gonum v0.9.1
+ golang.org/x/net v0.0.0-20211029160332-540bb53d3b2e // indirect
+ golang.org/x/oauth2 v0.0.0-20211028175245-ba495a64dcb5
+ golang.org/x/sys v0.0.0-20211029162942-c1bf0bb051ef // indirect
+ gonum.org/v1/gonum v0.9.3
+ google.golang.org/api v0.60.0 // indirect
+ google.golang.org/genproto v0.0.0-20211029142109-e255c875f7c7 // indirect
+ google.golang.org/grpc v1.41.0 // indirect
)
diff --git a/tools/nixery/go.sum b/tools/nixery/go.sum
index 812babcf3e6f..312cbfaa2e2c 100644
--- a/tools/nixery/go.sum
+++ b/tools/nixery/go.sum
@@ -17,8 +17,15 @@ cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKP
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
-cloud.google.com/go v0.81.0 h1:at8Tk2zUz63cLPR0JPWm5vp77pEZmzxEQBEfRKn1VV8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
+cloud.google.com/go v0.83.0/go.mod h1:Z7MJUsANfY0pYPdw0lbnivPx4/vhy/e2FEkSkF7vAVY=
+cloud.google.com/go v0.84.0/go.mod h1:RazrYuxIK6Kb7YrzzhPoLmCVzl7Sup4NrbKPg8KHSUM=
+cloud.google.com/go v0.87.0/go.mod h1:TpDYlFy7vuLzZMMZ+B6iRiELaY7z/gJPaqbMx6mlWcY=
+cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aDQ=
+cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI=
+cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4=
+cloud.google.com/go v0.97.0 h1:3DXvAyifywvq64LfkKaMOmkWPS1CikIQdMe2lY9vxU8=
+cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
@@ -36,15 +43,24 @@ cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0Zeo
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
-cloud.google.com/go/storage v1.15.0 h1:Ljj+ZXVEhCr/1+4ZhvtteN1ND7UUsNTlduGclLh8GO0=
-cloud.google.com/go/storage v1.15.0/go.mod h1:mjjQMoxxyGH7Jr8K5qrx6N2O0AHsczI61sMNn03GIZI=
+cloud.google.com/go/storage v1.18.2 h1:5NQw6tOn3eMm0oE8vTkfjau18kjL79FlMjy/CHTpmoY=
+cloud.google.com/go/storage v1.18.2/go.mod h1:AiIj7BWXyhO5gGVmYJ+S8tbkCx3yb0IMjua8Aw4naVM=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
gioui.org v0.0.0-20210308172011-57750fc8a0a6/go.mod h1:RSH6KIUZ0p2xy5zHDxgAM4zumjgTw83q2ge/PI+yyw8=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/ajstarks/svgo v0.0.0-20180226025133-644b8db467af/go.mod h1:K08gAheRH3/J6wwsYMMT4xOr94bZjxIelGM0+d/wbFw=
+github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/boombuler/barcode v1.0.0/go.mod h1:paBWMcWSl3LHKBqUq+rly7CNSldXjb2rDl3JlRe0mD8=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
+github.com/census-instrumentation/opencensus-proto v0.3.0 h1:t/LhUZLVitR1Ow2YOnduCsavhwFUklBMoGVYUCqmCqk=
+github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
+github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
+github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
+github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
+github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
@@ -52,6 +68,14 @@ github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDk
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
+github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4 h1:hzAQntlaYRkVSFEfj9OTWlVV1H155FMD8BTKktLv0QI=
+github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
+github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1 h1:zH8ljVhhq7yC0MIeUL/IviMtY8hx2mK8cN9wEYb8ggw=
+github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -61,9 +85,16 @@ github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1m
github.com/envoyproxy/go-control-plane v0.9.7/go.mod h1:cwu0lG7PUMfa9snN8LXBig5ynNVH9qI8YYLbd1fK2po=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
+github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
+github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
+github.com/envoyproxy/go-control-plane v0.10.0 h1:WVt4HEPbdRbRD/PKKPbPnIVavO6gk/h673jWyIJ016k=
+github.com/envoyproxy/go-control-plane v0.10.0/go.mod h1:AY7fTTXNdv/aJ2O5jwpxAPOWUZ7hQAEvzN5Pf27BkQQ=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/envoyproxy/protoc-gen-validate v0.6.2 h1:JiO+kJTpmYGjEodY7O1Zk8oZcNz1+f30UtwtXoFUPzE=
+github.com/envoyproxy/protoc-gen-validate v0.6.2/go.mod h1:2t7qjJNvHPx8IjnBOzl9E9/baC+qXE/TeeyBRzgJDws=
github.com/fogleman/gg v1.2.1-0.20190220221249-0403632d5b90/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
github.com/fogleman/gg v1.3.0/go.mod h1:R/bRT+9gY/C5z7JzPU0zXsXHKM4/ayA+zqcVNZzPa1k=
+github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-fonts/dejavu v0.1.0/go.mod h1:4Wt4I4OU2Nq9asgDCteaAaWZOV24E+0/Pwo0gppep4g=
github.com/go-fonts/latin-modern v0.2.0/go.mod h1:rQVLdDMK+mK1xscDwsqM5J8U2jrRa3T0ecnM9pNujks=
github.com/go-fonts/liberation v0.1.1/go.mod h1:K6qoJYypsmfVjWg8KOVDQhLc8UDgIK2HYqyqAO9z7GY=
@@ -76,8 +107,9 @@ github.com/golang/freetype v0.0.0-20170609003504-e2365dfdc4a0/go.mod h1:E/TSTwGw
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e h1:1r7pUrabqp18hOBcwBwiTsbnFeTZHV9eER/QT5JVZxY=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
+github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
@@ -86,6 +118,7 @@ github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.5.0/go.mod h1:CWnOUgYIOo4TcNZ0wHX3YZCqsaM1I1Jvs6v3mP3KVu8=
+github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -104,6 +137,8 @@ github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaS
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/snappy v0.0.3 h1:fHPg5GQYlCeLIPB9BZqMVR5nR9A+IM5zcgeTdjMYmLA=
+github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
@@ -116,13 +151,15 @@ github.com/google/go-cmp v0.5.1/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
+github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
-github.com/google/martian/v3 v3.1.0 h1:wCKgOCHuUEVfsaQLpPSJb7VdYCdTVZQAuOdYm1yc/60=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
+github.com/google/martian/v3 v3.2.1 h1:d8MncMlErDFTwQGBK1xhv026j9kqhvw1Qv9IbWT1VLQ=
+github.com/google/martian/v3 v3.2.1/go.mod h1:oBOf6HBosgwRXnUGWUB05QECsc6uvmMiJ3+6W4l/CUk=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20191218002539-d4f498aebedc/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
@@ -134,49 +171,65 @@ github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLe
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
+github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
-github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
+github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
+github.com/googleapis/gax-go/v2 v2.1.1 h1:dp3bWCh+PPO1zjRRiCSczJav13sBvG4UhNyVTa1KqdU=
+github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0eJc8R6ouapiM=
+github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/iancoleman/strcase v0.2.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
-github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o=
github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jung-kurt/gofpdf v1.0.0/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
github.com/jung-kurt/gofpdf v1.0.3-0.20190309125859-24315acbbda5/go.mod h1:7Id9E/uU8ce6rXgefFLlgrJj/GYY22cpxn+r32jIOes=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/lyft/protoc-gen-star v0.5.3/go.mod h1:V0xaHgaf5oCCqmcxYcWiDfTiKsZsRc87/1qhoTACD8w=
github.com/phpdave11/gofpdf v1.4.2/go.mod h1:zpO6xFn9yxo3YLyMvW8HcKWVdbNqgIfOOp2dXMnm1mY=
github.com/phpdave11/gofpdi v1.0.12/go.mod h1:vBmVV0Do6hSBHC8uKUQ71JGW+ZGQq74llk/7bXwjDoI=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-github.com/pkg/xattr v0.4.3 h1:5Jx4GCg5ABtqWZH8WLzeI4fOtM1HyX4RBawuCoua1es=
-github.com/pkg/xattr v0.4.3/go.mod h1:sBD3RAqlr8Q+RC3FutZcikpT8nyDrIEEBw2J744gVWs=
+github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
+github.com/pkg/xattr v0.4.4 h1:FSoblPdYobYoKCItkqASqcrKCxRn9Bgurz0sCBwzO5g=
+github.com/pkg/xattr v0.4.4/go.mod h1:sBD3RAqlr8Q+RC3FutZcikpT8nyDrIEEBw2J744gVWs=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/ruudk/golang-pdf417 v0.0.0-20181029194003-1af4ab5afa58/go.mod h1:6lfFZQK844Gfx8o5WFuvpxWRwnSoipWe/p622j1v06w=
github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE=
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
+github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
+github.com/spf13/afero v1.3.3/go.mod h1:5KUK8ByomD5Ti5Artl0RtHeI5pTF7MIDuXL3yY520V4=
+github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
-github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
+github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -185,9 +238,11 @@ go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
+go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -224,8 +279,8 @@ golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHl
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
golang.org/x/lint v0.0.0-20200130185559-910be7a94367/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
-golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5 h1:2M3HP5CCK1Si9FQhwnzYhXdG6DXeebvUHFpre8QvbyI=
golang.org/x/lint v0.0.0-20201208152925-83fdc39ff7b5/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
+golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
@@ -235,8 +290,9 @@ golang.org/x/mod v0.1.1-0.20191107180719-034126e5016b/go.mod h1:QqPTAvyqsEbceGzB
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
-golang.org/x/mod v0.4.1 h1:Kvvh58BN8Y9/lBi7hTekvtMpm07eUZ0ck5pRHpsMWrY=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.5.0/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -269,8 +325,12 @@ golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwY
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
-golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4 h1:b0LrWgu8+q7z4J+0Y3Umo5q1dL7NXBkKBWkaVkAq17E=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
+golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
+golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211029160332-540bb53d3b2e h1:2lVrcCMRP9p7tfk4KUpV1ESqtf49jpihlUtYnSj67k4=
+golang.org/x/net v0.0.0-20211029160332-540bb53d3b2e/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -282,9 +342,13 @@ golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20210413134643-5e61552d6c78/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c h1:SgVl/sCtkicsS7psKkje4H9YtjdEl3xsYh7N+5TDHqY=
-golang.org/x/oauth2 v0.0.0-20210427180440-81ed05c6b58c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
+golang.org/x/oauth2 v0.0.0-20211028175245-ba495a64dcb5 h1:v79phzBz03tsVCUTbvTBmmC3CUXF5mKYt7DA4ZVldpM=
+golang.org/x/oauth2 v0.0.0-20211028175245-ba495a64dcb5/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -334,8 +398,21 @@ golang.org/x/sys v0.0.0-20210304124612-50617c2ba197/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20210412220455-f1c623a9e750 h1:ZBu6861dZq7xBnG1bn5SRU0vA8nx42at4+kP07FMTog=
-golang.org/x/sys v0.0.0-20210412220455-f1c623a9e750/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210816183151-1e6c022a8912/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210917161153-d61c044b1678/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211029162942-c1bf0bb051ef h1:1ZMK6QI8sz0q1ficx9/snrJ8E/PeRW7Oagamf+0xp10=
+golang.org/x/sys v0.0.0-20211029162942-c1bf0bb051ef/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -343,8 +420,10 @@ golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
+golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -396,8 +475,12 @@ golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4f
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
-golang.org/x/tools v0.1.0 h1:po9/4sTYwZU9lPhi1tOrb4hCv3qrhiQ77LZfGa2OjwY=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
+golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.3/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.4/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -405,8 +488,8 @@ golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1N
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.0.0-20180816165407-929014505bf4/go.mod h1:Y+Yx5eoAFn32cQvJDxZx5Dpnq+c3wtXuadVZAcxbbBo=
gonum.org/v1/gonum v0.8.2/go.mod h1:oe/vMfY3deqTw+1EZJhuvEW2iwGF1bW9wwu7XCu0+v0=
-gonum.org/v1/gonum v0.9.1 h1:HCWmqqNoELL0RAQeKBXWtkp04mGk8koafcB4He6+uhc=
-gonum.org/v1/gonum v0.9.1/go.mod h1:TZumC3NeyVQskjXqmyWt4S3bINhy7B4eYwW69EbyX+0=
+gonum.org/v1/gonum v0.9.3 h1:DnoIG+QAMaF5NvxnGe/oKsgKcAc6PcUyl8q0VetfQ8s=
+gonum.org/v1/gonum v0.9.3/go.mod h1:TZumC3NeyVQskjXqmyWt4S3bINhy7B4eYwW69EbyX+0=
gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0 h1:OE9mWmgKkjJyEmDAAtGMPjXu+YNeGvK9VTSHY6+Qihc=
gonum.org/v1/netlib v0.0.0-20190313105609-8cb42192e0e0/go.mod h1:wa6Ws7BG/ESfp6dHfk7C6KdzKA7wR7u/rKwOGE66zvw=
gonum.org/v1/plot v0.0.0-20190515093506-e2840ee46a6b/go.mod h1:Wt8AAjI+ypCyYX3nZBvf6cAIx93T+c/OS2HFAYskSZc=
@@ -432,8 +515,17 @@ google.golang.org/api v0.36.0/go.mod h1:+z5ficQTmoYpPn8LCUNVpK5I7hwkpjbcgqA7I34q
google.golang.org/api v0.40.0/go.mod h1:fYKFpnQN0DsDSKRVRcQSDQNtqWPfM9i+zNPxepjRCQ8=
google.golang.org/api v0.41.0/go.mod h1:RkxM5lITDfTzmyKFPt+wGrCJbVfniCr2ool8kTBzRTU=
google.golang.org/api v0.43.0/go.mod h1:nQsDGjRXMo4lvh5hP0TKqF244gqhGcr/YSIykhUk/94=
-google.golang.org/api v0.45.0 h1:pqMffJFLBVUDIoYsHcqtxgQVTsmxMDpYLOc5MT4Jrww=
-google.golang.org/api v0.45.0/go.mod h1:ISLIJCedJolbZvDfAk+Ctuq5hf+aJ33WgtUsfyFoLXA=
+google.golang.org/api v0.47.0/go.mod h1:Wbvgpq1HddcWVtzsVLyfLp8lDg6AA241LmgIL59tHXo=
+google.golang.org/api v0.48.0/go.mod h1:71Pr1vy+TAZRPkPs/xlCf5SsU8WjuAWv1Pfjbtukyy4=
+google.golang.org/api v0.50.0/go.mod h1:4bNT5pAuq5ji4SRZm+5QIkjny9JAyVD/3gaSihNefaw=
+google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00sOU=
+google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k=
+google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
+google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
+google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI=
+google.golang.org/api v0.58.0/go.mod h1:cAbP2FsxoGVNwtgNAmmn3y5G1TWAiVYRmg4yku3lv+E=
+google.golang.org/api v0.60.0 h1:eq/zs5WPH4J9undYM9IP1O7dSr7Yh8Y0GtSCpzGzIUk=
+google.golang.org/api v0.60.0/go.mod h1:d7rl65NZAkEQ90JFzqBjcRq1TVeG5ZoGV3sSpEnnVb4=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -465,6 +557,7 @@ google.golang.org/genproto v0.0.0-20200312145019-da6875a35672/go.mod h1:55QSHmfG
google.golang.org/genproto v0.0.0-20200331122359-1ee6d9798940/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200430143042-b979b6f78d84/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
+google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
@@ -481,9 +574,27 @@ google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210402141018-6c239bbf2bb1/go.mod h1:9lPAdzaEmUacj36I+k7YKbEc5CXzPIeORRgDAUOu28A=
-google.golang.org/genproto v0.0.0-20210413151531-c14fb6ef47c3/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
-google.golang.org/genproto v0.0.0-20210420162539-3c870d7478d2 h1:g2sJMUGCpeHZqTx8p3wsAWRS64nFq20i4dvJWcKGqvY=
-google.golang.org/genproto v0.0.0-20210420162539-3c870d7478d2/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
+google.golang.org/genproto v0.0.0-20210513213006-bf773b8c8384/go.mod h1:P3QM42oQyzQSnHPnZ/vqoCdDmzH28fzWByN9asMeM8A=
+google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210604141403-392c879c8b08/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210608205507-b6d2f5bf0d7d/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
+google.golang.org/genproto v0.0.0-20210624195500-8bfb893ecb84/go.mod h1:SzzZ/N+nwJDaO1kznhnlzqS8ocJICar6hYhVyhi++24=
+google.golang.org/genproto v0.0.0-20210713002101-d411969a0d9a/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
+google.golang.org/genproto v0.0.0-20210716133855-ce7ef5c701ea/go.mod h1:AxrInvYm1dci+enl5hChSFPOmmUF1+uAa/UsgNRWd7k=
+google.golang.org/genproto v0.0.0-20210728212813-7823e685a01f/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
+google.golang.org/genproto v0.0.0-20210805201207-89edb61ffb67/go.mod h1:ob2IJxKrgPT52GcgX759i1sleT07tiKowYBGbczaW48=
+google.golang.org/genproto v0.0.0-20210813162853-db860fec028c/go.mod h1:cFeNkxwySK631ADgubI+/XFU/xp8FD5KIVV4rj8UC5w=
+google.golang.org/genproto v0.0.0-20210821163610-241b8fcbd6c8/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210917145530-b395a37504d4/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
+google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211016002631-37fc39342514/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211021150943-2b146023228c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211029142109-e255c875f7c7 h1:aaSaYY/DIDJy3f/JLXWv6xJ1mBQSRnQ1s5JhAFTnzO4=
+google.golang.org/genproto v0.0.0-20211029142109-e255c875f7c7/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
@@ -497,13 +608,21 @@ google.golang.org/grpc v1.29.1/go.mod h1:itym6AZVZYACWQqET3MqgPpjcuV5QH3BxFS3Iji
google.golang.org/grpc v1.30.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.0/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
google.golang.org/grpc v1.31.1/go.mod h1:N36X2cJ7JwdamYAgDz+s+rVMFjt3numwzf/HckM8pak=
+google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.34.0/go.mod h1:WotjhfgOW/POjDeRt8vscBtXq+2VjORFy659qA51WJ8=
google.golang.org/grpc v1.35.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.36.1/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
-google.golang.org/grpc v1.37.0 h1:uSZWeQJX5j11bIQ4AJoj+McDBo29cY1MCoC1wO3ts+c=
google.golang.org/grpc v1.37.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/grpc v1.37.1/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
+google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
+google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE=
+google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
+google.golang.org/grpc v1.41.0 h1:f+PlOh7QV4iIJkPrx5NQ7qaNGFQ3OTse67yaDHfju4E=
+google.golang.org/grpc v1.41.0/go.mod h1:U3l9uK9J0sini8mHphKoXyaqDA/8VyGnDee1zzIUK6k=
+google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -515,13 +634,15 @@ google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpAD
google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
-google.golang.org/protobuf v1.26.0 h1:bxAC2xTBsZGibn2RTntX0oH50xLsqy1OxA9tTL3p/lk=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
+google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
--
cgit 1.4.1
From 1dd342161550adfc58f3a2ea5e6c5843dc2c5ca9 Mon Sep 17 00:00:00 2001
From: Jérôme Petazzoni
Date: Thu, 23 Dec 2021 12:10:39 +0100
Subject: docs: update installation instructions
These instructions were not up-to-date: they didn't mention
the different storage backends, and some variables were
tagged as optional even though they were mandatory. With
this update, they should (hopefully) be more accurate! :)
I also added instructions for running Nixery outside of the
container image (I found this convenient when working on
Nixery's code).
---
tools/nixery/docs/src/run-your-own.md | 72 +++++++++++++++++++++++++++++------
1 file changed, 60 insertions(+), 12 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/src/run-your-own.md b/tools/nixery/docs/src/run-your-own.md
index 4ffb8c4d4b6e..eb7d494ee3e2 100644
--- a/tools/nixery/docs/src/run-your-own.md
+++ b/tools/nixery/docs/src/run-your-own.md
@@ -32,7 +32,16 @@ To run Nixery, you must have:
* [Nix][] (to build Nixery itself)
* Somewhere to run it (your own server, Google AppEngine, a Kubernetes cluster,
whatever!)
-* A [Google Cloud Storage][gcs] bucket in which to store & serve layers
+* *Either* a [Google Cloud Storage][gcs] bucket in which to store & serve layers,
+ *or* a comfortable amount of disk space
+
+Note that while the main Nixery process is a server written in Go,
+it invokes a helper script that requires Nix to be available.
+You can compile the Nixery daemon itself without Nix, but it will
+not run without a working Nix installation.
+
+(If you are completely new to Nix and don't know how to get
+started, check the [Nix installation documentation][nixinstall].)
## 1. Choose a package set
@@ -54,8 +63,11 @@ use it with your own packages. There are three options available:
## 2. Build Nixery itself
-Building Nixery creates a container image. This section assumes that the
-container runtime used is Docker, please modify instructions correspondingly if
+### 2.1. With a container image
+
+The easiest way to run Nixery is to build a container image.
+This section assumes that the container runtime used is Docker;
+please adjust the instructions accordingly if
you are using something else.
With a working Nix installation, building Nixery is done by invoking `nix-build
@@ -64,23 +76,46 @@ With a working Nix installation, building Nixery is done by invoking `nix-build
This will create a `result`-symlink which points to a tarball containing the
image. In Docker, this tarball can be loaded by using `docker load -i result`.
+### 2.2. Without a container image
+
+*This method might be more convenient if you intend to work on
+the code of the Nixery server itself, because you won't have to
+rebuild (and reload) an image each time to test your changes.*
+
+You will need to run the following two commands at the root of the repo:
+
+* `go build` to build the `nixery` binary;
+* `nix-env --install --file prepare-image/default.nix` to build
+ the required helpers.
+
## 3. Prepare configuration
Nixery is configured via environment variables.
You must set *all* of these:
-* `BUCKET`: [Google Cloud Storage][gcs] bucket to store & serve image layers
+* `NIXERY_STORAGE_BACKEND`: the storage backend to use (`gcs` or `filesystem`)
* `PORT`: HTTP port on which Nixery should listen
+* `WEB_DIR`: directory containing static files (see below)
-You may set *one* of these, if unset Nixery defaults to `nixos-20.09`:
+You must set *one* of these:
-* `NIXERY_CHANNEL`: The name of a Nix/NixOS channel to use for building
+* `NIXERY_CHANNEL`: The name of a [Nix/NixOS channel][nixchannel] to use for building,
+ for instance `nixos-21.05`
* `NIXERY_PKGS_REPO`: URL of a git repository containing a package set (uses
locally configured SSH/git credentials)
* `NIXERY_PKGS_PATH`: A local filesystem path containing a Nix package set to use
for building
+If `NIXERY_STORAGE_BACKEND` is set to `filesystem`, then `STORAGE_PATH`
+must be set to the directory that will hold the registry blobs.
+That directory must be located on a filesystem that supports extended
+attributes (which means that on most systems, `/tmp` won't work).
+
+If `NIXERY_STORAGE_BACKEND` is set to `gcs`, then `GCS_BUCKET`
+must be set to the [Google Cloud Storage][gcs] bucket that will be
+used to store & serve image layers.
+
You may set *all* of these:
* `NIX_TIMEOUT`: Number of seconds that any Nix builder is allowed to run
@@ -94,13 +129,11 @@ If the `GOOGLE_APPLICATION_CREDENTIALS` environment is configured, the service
account's private key will be used to create [signed URLs for
layers][signed-urls].
-## 4. Deploy Nixery
-
-With the above environment variables configured, you can run the image that was
-built in step 2.
+## 4. Start Nixery
-How this works depends on the environment you are using and is, for now, outside
-of the scope of this tutorial.
+Run the image that was built in step 2.1 with all the environment variables
+mentioned above. Alternatively, set all the environment variables and run
+the Nixery server that was built in step 2.2.
Once Nixery is running you can immediately start requesting images from it.
@@ -125,6 +158,19 @@ following:
* Configure request timeouts for Nixery if you have your own web server in front
of it. This will be natively supported by Nixery in the future.
+## 6. `WEB_DIR`
+
+All the URLs accessed by Docker registry clients start with `/v2/`.
+This means that it is possible to serve a static website from Nixery
+itself (as long as you don't want to serve anything starting with `/v2`).
+This is how, for instance, https://nixery.dev serves the Nixery website,
+while it is also possible to e.g. `docker pull nixery.dev/shell`.
+
+When running Nixery, you must set the `WEB_DIR` environment variable.
+When Nixery receives requests that don't look like registry requests,
+it tries to serve them using files in the directory indicated by `WEB_DIR`.
+If the directory doesn't exist, Nixery will still run fine but will serve 404 responses for those requests.
+
-------
[^1]: Nixery will not work with Nix channels older than `nixos-19.03`.
@@ -141,3 +187,5 @@ following:
[repo]: https://github.com/google/nixery
[signed-urls]: under-the-hood.html#5-image-layers-are-requested
[ADC]: https://cloud.google.com/docs/authentication/production#finding_credentials_automatically
+[nixinstall]: https://nixos.org/manual/nix/stable/installation/installing-binary.html
+[nixchannel]: https://nixos.wiki/wiki/Nix_channels
--
cgit 1.4.1
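
A side note on the `WEB_DIR` behaviour documented in the patch above: the split between registry traffic and the static site hinges entirely on the `/v2/` path prefix. The sketch below is a minimal, hypothetical illustration of that dispatch in Go, assuming a plain `net/http` mux; the real routing lives in Nixery's own server code and may differ in detail.

```go
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	webDir := os.Getenv("WEB_DIR") // static site, as described above
	port := os.Getenv("PORT")      // HTTP port Nixery listens on

	mux := http.NewServeMux()

	// Everything under /v2/ belongs to the Docker registry API
	// (version check, manifests, blobs).
	mux.HandleFunc("/v2/", func(w http.ResponseWriter, r *http.Request) {
		// ... hand off to the registry handler here ...
		w.WriteHeader(http.StatusOK)
	})

	// Everything else falls through to the static site. If WEB_DIR does
	// not exist, this simply yields 404 responses, matching the
	// behaviour described in the documentation above.
	mux.Handle("/", http.FileServer(http.Dir(webDir)))

	log.Fatal(http.ListenAndServe(":"+port, mux))
}
```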
From aaf53703443075dc7c54127d390d8bfb6cb206ce Mon Sep 17 00:00:00 2001
From: Jérôme Petazzoni
Date: Thu, 23 Dec 2021 12:14:49 +0100
Subject: chore: fix env var name in error message
The error message shows the wrong variable name, which might
be confusing for new users.
---
tools/nixery/config/config.go | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
(limited to 'tools')
diff --git a/tools/nixery/config/config.go b/tools/nixery/config/config.go
index 7ec102bd6cee..8ea2edc28c81 100644
--- a/tools/nixery/config/config.go
+++ b/tools/nixery/config/config.go
@@ -70,7 +70,7 @@ func FromEnv() (Config, error) {
default:
log.WithField("values", []string{
"gcs",
- }).Fatal("NIXERY_STORAGE_BUCKET must be set to a supported value")
+ }).Fatal("NIXERY_STORAGE_BACKEND must be set to a supported value (gcs or filesystem)")
}
return Config{
--
cgit 1.4.1
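
For readers unfamiliar with the surrounding code: the `switch` that this message belongs to selects the storage backend from `NIXERY_STORAGE_BACKEND`. The following is a simplified, hypothetical approximation of that check (the real `FromEnv` in `config.go` uses logrus and also wires up `GCS_BUCKET`, `STORAGE_PATH`, and the other settings from the previous patch):

```go
package config

import (
	"errors"
	"os"
)

// Backend identifies which storage backend Nixery should use.
type Backend int

const (
	GCS Backend = iota
	FileSystem
)

// storageBackend reads NIXERY_STORAGE_BACKEND and rejects anything other
// than the two supported values, mirroring the error message above.
func storageBackend() (Backend, error) {
	switch os.Getenv("NIXERY_STORAGE_BACKEND") {
	case "gcs":
		return GCS, nil
	case "filesystem":
		return FileSystem, nil
	default:
		return 0, errors.New(
			"NIXERY_STORAGE_BACKEND must be set to a supported value (gcs or filesystem)")
	}
}
```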
From 15f79e1364410bc78d1d734f4af7617f2ab8e432 Mon Sep 17 00:00:00 2001
From: Ethan Davidson
Date: Thu, 9 Dec 2021 13:42:35 -0500
Subject: docs: mention arm64 metapackage
---
tools/nixery/docs/src/nixery.md | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/docs/src/nixery.md b/tools/nixery/docs/src/nixery.md
index 5f6dcb7e361c..3f68311dabb4 100644
--- a/tools/nixery/docs/src/nixery.md
+++ b/tools/nixery/docs/src/nixery.md
@@ -33,8 +33,10 @@ Each path segment corresponds either to a key in the Nix package set, or a
meta-package that automatically expands to several other packages.
Meta-packages **must** be the first path component if they are used. Currently
-the only meta-package is `shell`, which provides a `bash`-shell with interactive
-configuration and standard tools like `coreutils`.
+there are only two meta-packages:
+- `shell`, which provides a `bash`-shell with interactive configuration and
+ standard tools like `coreutils`.
+- `arm64`, which provides ARM64 binaries.
**Tip:** When pulling from a private Nixery instance, replace `nixery.dev` in
the above examples with your registry address.
--
cgit 1.4.1
From 7433d620bbe82fb2642097226a580b96487f32c0 Mon Sep 17 00:00:00 2001
From: Jérôme Petazzoni
Date: Fri, 24 Dec 2021 16:11:49 +0100
Subject: feat: add /tmp
Examples of programs that fail when /tmp doesn't exist:
- terraform
- anything using mktemp and similar helpers
---
tools/nixery/prepare-image/prepare-image.nix | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/prepare-image/prepare-image.nix b/tools/nixery/prepare-image/prepare-image.nix
index 56f9e7a3bf5c..acd1430548b2 100644
--- a/tools/nixery/prepare-image/prepare-image.nix
+++ b/tools/nixery/prepare-image/prepare-image.nix
@@ -133,14 +133,16 @@ let
name = "bulk-layers";
paths = allContents.contents;
- # Ensure that there is always a /usr/bin/env for shell scripts
- # that require it.
+ # Provide a few essentials that many programs expect:
+ # - a /tmp directory,
+ # - a /usr/bin/env for shell scripts that require it.
#
- # Note that images which do not actually contain `coreutils` will
- # still have this symlink, but it will be dangling.
+ # Note that in images that do not actually contain `coreutils`,
+ # /usr/bin/env will be a dangling symlink.
#
- # TODO(tazjin): Don't link this if coreutils is not included.
+ # TODO(tazjin): Don't link /usr/bin/env if coreutils is not included.
postBuild = ''
+ mkdir -p $out/tmp
mkdir -p $out/usr/bin
ln -s ${coreutils}/bin/env $out/usr/bin/env
'';
--
cgit 1.4.1
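
Purely as an illustration of the failure mode this patch addresses (not part of the patch itself): temporary-file helpers in most languages default to `/tmp` on Unix. In Go, for example, `os.CreateTemp` with an empty directory argument falls back to `os.TempDir()`, which is `/tmp` unless `TMPDIR` is set, and therefore fails inside an image that lacks the directory:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// With an empty dir argument, os.CreateTemp uses os.TempDir(), which
	// on Unix is /tmp unless TMPDIR is set. In an image without /tmp this
	// call fails, which is exactly the class of breakage the patch above
	// avoids.
	f, err := os.CreateTemp("", "example-*")
	if err != nil {
		fmt.Println("temp file creation failed:", err)
		return
	}
	defer os.Remove(f.Name())
	fmt.Println("created", f.Name())
}
```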
From dd7de32c36845ea40b73b85084c7d5e80027d96e Mon Sep 17 00:00:00 2001
From: Jérôme Petazzoni
Date: Thu, 23 Dec 2021 12:19:39 +0100
Subject: feat: set SSL_CERT_FILE and provide a Cmd
Two minor "quality of life" improvements:
- automatically set the SSL_CERT_FILE environment variable,
so that programs relying on OpenSSL for certificate
validation can actually validate certificates
(the certificates are included no matter what, since
we add the "cacert" package to all images)
- if the requested image includes an interactive shell
(e.g. if it includes the "shell" metapackage), set
the image Cmd to "bash", which makes it possible to run
"docker run nixery.dev/shell" and get a shell
I'm happy to split this PR in two if you'd like, but
since both features touch the Config structure and are
rather small, I thought it would make sense to bundle
them together.
---
tools/nixery/builder/builder.go | 10 +++++++++-
tools/nixery/manifest/manifest.go | 17 +++++++++++------
2 files changed, 20 insertions(+), 7 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/builder/builder.go b/tools/nixery/builder/builder.go
index 115f1e37ef32..4279cb0a1114 100644
--- a/tools/nixery/builder/builder.go
+++ b/tools/nixery/builder/builder.go
@@ -493,7 +493,15 @@ func BuildImage(ctx context.Context, s *State, image *Image) (*BuildResult, erro
return nil, err
}
- m, c := manifest.Manifest(image.Arch.imageArch, layers)
+ // If the requested packages include a shell,
+ // set cmd accordingly.
+ cmd := ""
+ for _, pkg := range image.Packages {
+ if pkg == "bashInteractive" {
+ cmd = "bash"
+ }
+ }
+ m, c := manifest.Manifest(image.Arch.imageArch, layers, cmd)
lw := func(w io.Writer) error {
r := bytes.NewReader(c.Config)
diff --git a/tools/nixery/manifest/manifest.go b/tools/nixery/manifest/manifest.go
index e499920075f0..afe84072eabf 100644
--- a/tools/nixery/manifest/manifest.go
+++ b/tools/nixery/manifest/manifest.go
@@ -64,9 +64,10 @@ type imageConfig struct {
DiffIDs []string `json:"diff_ids"`
} `json:"rootfs"`
- // sic! empty struct (rather than `null`) is required by the
- // image metadata deserialiser in Kubernetes
- Config struct{} `json:"config"`
+ Config struct {
+ Cmd []string `json:"cmd,omitempty"`
+ Env []string `json:"env,omitempty"`
+ } `json:"config"`
}
// ConfigLayer represents the configuration layer to be included in
@@ -83,12 +84,16 @@ type ConfigLayer struct {
// Outside of this module the image configuration is treated as an
// opaque blob and it is thus returned as an already serialised byte
// array and its SHA256-hash.
-func configLayer(arch string, hashes []string) ConfigLayer {
+func configLayer(arch string, hashes []string, cmd string) ConfigLayer {
c := imageConfig{}
c.Architecture = arch
c.OS = os
c.RootFS.FSType = fsType
c.RootFS.DiffIDs = hashes
+ if cmd != "" {
+ c.Config.Cmd = []string{cmd}
+ }
+ c.Config.Env = []string{"SSL_CERT_FILE=/etc/ssl/certs/ca-bundle.crt"}
j, _ := json.Marshal(c)
@@ -103,7 +108,7 @@ func configLayer(arch string, hashes []string) ConfigLayer {
// layer.
//
// Callers do not need to set the media type for the layer entries.
-func Manifest(arch string, layers []Entry) (json.RawMessage, ConfigLayer) {
+func Manifest(arch string, layers []Entry, cmd string) (json.RawMessage, ConfigLayer) {
// Sort layers by their merge rating, from highest to lowest.
// This makes it likely for a contiguous chain of shared image
// layers to appear at the beginning of a layer.
@@ -122,7 +127,7 @@ func Manifest(arch string, layers []Entry) (json.RawMessage, ConfigLayer) {
layers[i] = l
}
- c := configLayer(arch, hashes)
+ c := configLayer(arch, hashes, cmd)
m := manifest{
SchemaVersion: schemaVersion,
--
cgit 1.4.1
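
To make the effect of the manifest change concrete, here is a small, hypothetical sketch of the configuration blob those new fields produce. The struct is a trimmed-down stand-in for `imageConfig` in `manifest.go` (only the fields touched by this patch are reproduced), so the real output contains additional fields such as the rootfs section:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down stand-in for imageConfig; only the fields touched by the
// patch above are reproduced here.
type imageConfig struct {
	Architecture string `json:"architecture"`
	OS           string `json:"os"`
	Config       struct {
		Cmd []string `json:"cmd,omitempty"`
		Env []string `json:"env,omitempty"`
	} `json:"config"`
}

func main() {
	c := imageConfig{Architecture: "amd64", OS: "linux"}

	// As in the patch: Cmd is only set when the image contains an
	// interactive shell, while the SSL_CERT_FILE entry is always present
	// because cacert is added to every image.
	cmd := "bash"
	if cmd != "" {
		c.Config.Cmd = []string{cmd}
	}
	c.Config.Env = []string{"SSL_CERT_FILE=/etc/ssl/certs/ca-bundle.crt"}

	j, _ := json.MarshalIndent(c, "", "  ")
	// Prints a config blob whose "config" section contains
	// "cmd": ["bash"] and the SSL_CERT_FILE env entry.
	fmt.Println(string(j))
}
```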
From 3d26ea9e636e9cd137d9430dd36f672e83239e7b Mon Sep 17 00:00:00 2001
From: Raphael Borun Das Gupta
Date: Tue, 19 Apr 2022 21:32:46 +0200
Subject: docs: change references to repo URL
Nixery's main Git repository has moved
from https://github.com/google/nixery
to https://github.com/tazjin/nixery .
Update the references in the README and on the https://nixery.dev/ website accordingly.
---
tools/nixery/README.md | 4 ++--
tools/nixery/docs/src/nixery.md | 2 +-
tools/nixery/docs/src/run-your-own.md | 4 ++--
3 files changed, 5 insertions(+), 5 deletions(-)
(limited to 'tools')
diff --git a/tools/nixery/README.md b/tools/nixery/README.md
index 90799fac3feb..cba8ce6b14f6 100644
--- a/tools/nixery/README.md
+++ b/tools/nixery/README.md
@@ -4,7 +4,7 @@
-----------------
-[![Build Status](https://github.com/google/nixery/actions/workflows/build-and-test.yaml/badge.svg)](https://github.com/google/nixery/actions/workflows/build-and-test.yaml)
+[![Build Status](https://github.com/tazjin/nixery/actions/workflows/build-and-test.yaml/badge.svg)](https://github.com/tazjin/nixery/actions/workflows/build-and-test.yaml)
**Nixery** is a Docker-compatible container registry that is capable of
transparently building and serving container images using [Nix][].
@@ -130,7 +130,7 @@ outlined in [a public gist][gist].
It should be trivial to deploy Nixery inside of a Kubernetes cluster with
correct caching behaviour, addressing and so on.
-See [issue #4](https://github.com/google/nixery/issues/4).
+See [issue #4](https://github.com/tazjin/nixery/issues/4).
### Nix-native builder
diff --git a/tools/nixery/docs/src/nixery.md b/tools/nixery/docs/src/nixery.md
index 3f68311dabb4..7b78ddf5aaf8 100644
--- a/tools/nixery/docs/src/nixery.md
+++ b/tools/nixery/docs/src/nixery.md
@@ -77,7 +77,7 @@ availability.
Nixery was written by [tazjin][], but many people have contributed to Nix over
time, maybe you could become one of them?
-[Nixery]: https://github.com/google/nixery
+[Nixery]: https://github.com/tazjin/nixery
[Nix]: https://nixos.org/nix
[layering strategy]: https://storage.googleapis.com/nixdoc/nixery-layers.html
[layers]: https://grahamc.com/blog/nix-and-layered-docker-images
diff --git a/tools/nixery/docs/src/run-your-own.md b/tools/nixery/docs/src/run-your-own.md
index eb7d494ee3e2..cf4dc2ce6166 100644
--- a/tools/nixery/docs/src/run-your-own.md
+++ b/tools/nixery/docs/src/run-your-own.md
@@ -181,10 +181,10 @@ If the directory doesn't exist, Nixery will run fine but serve 404.
extensively.
[GKE]: https://cloud.google.com/kubernetes-engine/
-[nixery#4]: https://github.com/google/nixery/issues/4
+[nixery#4]: https://github.com/tazjin/nixery/issues/4
[Nix]: https://nixos.org/nix
[gcs]: https://cloud.google.com/storage/
-[repo]: https://github.com/google/nixery
+[repo]: https://github.com/tazjin/nixery
[signed-urls]: under-the-hood.html#5-image-layers-are-requested
[ADC]: https://cloud.google.com/docs/authentication/production#finding_credentials_automatically
[nixinstall]: https://nixos.org/manual/nix/stable/installation/installing-binary.html
--
cgit 1.4.1