Use the same output ordering and format everywhere.
Hash mismatches are such a common issue that we trade the single-line
error message for a more readable multi-line one.
Old message:
```
fixed-output derivation produced path '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com' with sha256 hash '08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm' instead of the expected hash '1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m'
```
New message:
```
hash mismatch in fixed-output derivation '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com':
wanted: sha256:1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m
got: sha256:08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm
```
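A minimal sketch of producing the new format (BuildError and the helper's name here are illustrative, not the actual Nix source):
```
#include <stdexcept>
#include <string>

struct BuildError : std::runtime_error { using std::runtime_error::runtime_error; };

// Illustrative only: report mismatches in the new multi-line layout.
void checkFixedOutputHash(const std::string & storePath,
    const std::string & wanted, const std::string & got)
{
    if (wanted != got)
        throw BuildError(
            "hash mismatch in fixed-output derivation '" + storePath + "':\n"
            "  wanted: " + wanted + "\n"
            "  got:    " + got);
}
```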
|
|
This enables using http for S3 requests, for debugging or for
implementations that don't have https configured. This is not a problem
for binary caches, since they should not contain sensitive information;
both package signatures and AWS auth already protect against tampering.
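For example, assuming the store URI accepts 'scheme' (and, for non-AWS endpoints, 'endpoint') as parameters:
  nix copy --to 's3://my-cache?scheme=http&endpoint=example.com' /nix/store/...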
|
|
download: if there are active requests, never sleep for 10s
|
|
* Don't wait forever for the client to remove data from the
buffer. This does mean that the buffer can grow without bounds
(e.g. when downloading is faster than writing to disk), but meh.
* Don't hold the state lock while calling the sink (see the sketch
  below). The sink could take any amount of time to process the data
  (in particular when it's actually a coroutine), so we don't want to
  block the download thread.
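Roughly, the locking pattern looks like this (a sketch with illustrative names, not the actual download.cc internals):
```
#include <functional>
#include <mutex>
#include <string>

using Sink = std::function<void(const std::string &)>;

struct State {
    std::mutex mutex;
    std::string pending; // bytes received but not yet passed to the sink
};

void flushToSink(State & state, Sink & sink)
{
    std::string chunk;
    {
        // Move the buffered data out while holding the lock...
        std::lock_guard<std::mutex> lock(state.mutex);
        chunk.swap(state.pending);
    }
    // ...but call the sink unlocked: it may take arbitrarily long
    // (especially when it's actually a coroutine writing to disk).
    if (!chunk.empty()) sink(chunk);
}
```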
|
|
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
```
if (ret != BZ_OK)
    CompressionError("error while compressing bzip2 file");
```
|
|
This works for uploading but not for downloading.
|
|
This fixes 'error 10 while decompressing xz file' (lzma error 10 is
LZMA_BUF_ERROR, i.e. no decoding progress was possible).
https://hydra.nixos.org/build/78308551
|
|
Fixes #2225.
|
|
In some versions/configurations libcurl doesn't handle timeouts
(especially DNS timeouts) in a way that wakes curl_multi_wait.
This doesn't appear to be a problem if using c-ares, FWIW.
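One way to cope, sketched below, is to cap how long we block in curl_multi_wait() so the loop re-checks libcurl's timers regularly (the 1 s cap is an illustrative value):
```
#include <algorithm>
#include <curl/curl.h>

void waitForActivity(CURLM * multi)
{
    long timeout = -1;
    curl_multi_timeout(multi, &timeout);  // libcurl's suggested wait
    long maxSleep = 1000;                 // cap: wake up at least once a second
    long ms = timeout < 0 ? maxSleep : std::min(timeout, maxSleep);
    int numfds = 0;
    curl_multi_wait(multi, nullptr, 0, (int) ms, &numfds);
}
```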
|
|
I'm not sure if curl ever asks for enough data at once
for truncation to occur, but better safe than sorry.
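The safe pattern for the read callback looks roughly like this (illustrative types, not the actual code): copy at most as many bytes as curl asked for and keep the rest for the next call.
```
#include <algorithm>
#include <cstring>
#include <string>

struct UploadState { std::string data; size_t pos = 0; };

size_t readCallback(char * buffer, size_t size, size_t nitems, void * userp)
{
    auto * up = static_cast<UploadState *>(userp);
    size_t n = std::min(size * nitems, up->data.size() - up->pos); // no truncation
    memcpy(buffer, up->data.data() + up->pos, n);
    up->pos += n;
    return n; // tell curl how many bytes we actually produced
}
```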
|
|
Don't say "download" when we mean "upload".
|
|
This reduces memory consumption of
nix copy --from https://cache.nixos.org --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 176 MiB to 82 MiB. (The remaining memory is probably due to xz
decompression overhead.)
Issue https://github.com/NixOS/nix/issues/1681.
Issue https://github.com/NixOS/nix/issues/1969.
|
|
fixes #1964
|
|
ignore "interrupted" exception in progress callback
|
|
Fixes #1905
|
|
The certificates won't get any better if we retry.
|
|
Actually fixes #1976.
|
|
E.g.
nix run --store ~/my-nix -f channel:nixos-17.03 hello -c hello
This problem was mentioned in #1897.
|
|
Some servers, such as Artifactory, allow uploading with PUT and Basic
auth. This allows nix copy to upload binaries to those servers.
Worked on together with @adelbertc
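The rough shape of such an upload in libcurl terms (a sketch, not the actual Nix code):
```
#include <curl/curl.h>

void uploadWithBasicAuth(const char * url, const char * userpwd, curl_off_t size)
{
    CURL * curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);             // upload via PUT
    curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, size);
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
    curl_easy_setopt(curl, CURLOPT_USERPWD, userpwd);       // "user:password"
    // ... set CURLOPT_READFUNCTION / CURLOPT_READDATA for the body ...
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
}
```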
|
|
CentOS 7.4 and RHEL 7.4 ship with libcurl-devel-7.29.0-42.el7.x86_64; this flag
was added in 7.30.0
https://curl.haxx.se/libcurl/c/CURLMOPT_MAX_TOTAL_CONNECTIONS.html
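So the option has to sit behind a compile-time version check; LIBCURL_VERSION_NUM for 7.30.0 is 0x071e00. A sketch of the usual guard:
```
#include <curl/curl.h>

void configureMulti(CURLM * multi, long maxConnections)
{
#if LIBCURL_VERSION_NUM >= 0x071e00
    curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, maxConnections);
#endif
}
```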
|
|
Context/discussion:
https://github.com/NixOS/nix/issues/1692#issuecomment-348282301
|
|
This allows specifying the AWS configuration profile to use. E.g.
nix copy --from s3://my-cache?profile=aws-dev-account /nix/store/cf3isrlqavvd5w7rpky1fa8j9lcnlggm-...
|
|
E.g.
$ nix eval '(fetchMercurial https://www.mercurial-scm.org/repo/hello)'
{ branch = "default"; outPath = "/nix/store/alvb9y1kfz42bjishqmyy3pphnrh1pfa-source"; rev = "82e55d328c8ca4ee16520036c0aaace03a5beb65"; revCount = 1; shortRev = "82e55d328c8c"; }
$ nix eval '(fetchMercurial { url = https://www.mercurial-scm.org/repo/hello; rev = "0a04b987be5ae354b710cefeba0e2d9de7ad41a9"; })'
{ branch = "default"; outPath = "/nix/store/alvb9y1kfz42bjishqmyy3pphnrh1pfa-source"; rev = "0a04b987be5ae354b710cefeba0e2d9de7ad41a9"; revCount = 0; shortRev = "0a04b987be5a"; }
$ nix eval '(fetchMercurial /tmp/unclean-hg-tree)'
{ branch = "default"; outPath = "/nix/store/cm750cdw1x8wfpm3jq7mz09r30l9r024-source"; rev = "0000000000000000000000000000000000000000"; revCount = 0; shortRev = "000000000000"; }
|
|
The computation of urlHash didn't take the name into account, so
subsequent fetchurl calls with the same URL but a different name would
resolve to the same cached store path.
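The fix is to mix the name into the cache key, e.g. (a sketch; sha256hex() stands in for whatever hash helper is used, and the key layout is assumed):
```
#include <string>

std::string sha256hex(const std::string & s); // placeholder hash helper

// Equal URLs requested under different names now get different keys;
// the NUL separator can't occur in either component.
std::string urlCacheKey(const std::string & name, const std::string & url)
{
    return sha256hex(name + std::string(1, '\0') + url);
}
```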
|
|
It was getting too much like whac-a-mole listing all the retriable error
conditions, so we now retry by default and list the cases where retrying
is almost certainly hopeless.
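In code terms, the condition flips from "retry if the error is in a whitelist" to something like the following (a sketch; the exact set of permanent failures is illustrative):
```
#include <curl/curl.h>

bool isPermanentFailure(CURLcode code, long httpStatus)
{
    switch (code) {
        case CURLE_UNSUPPORTED_PROTOCOL:
        case CURLE_URL_MALFORMAT:
        case CURLE_SSL_CACERT:        // a bad certificate won't get better on retry
            return true;
        default:
            return httpStatus == 403 || httpStatus == 404; // also hopeless
    }
}
// Everything else is treated as transient and retried.
```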
|
|
Fixes #1568.
|
|
Maybe this will fix the curl hangs on macOS. (We could also use
CURLOPT_TIMEOUT but that seems more of a sledgehammer.)
|
|
And print them (separately from the progress bar) given sufficient -v
flags.
|
|
In particular, this allows more relevant activities ("substituting X")
to supersede inferior ones ("downloading X").
|
|
E.g.
$ nix build --store local?root=/tmp/nix nixpkgs.firefox-unwrapped
[0/1 built, 1/97/98 fetched, 65.8/92.8 MiB DL, 203.2/309.2 MiB copied] downloading 'https://cache.nixos.org/nar/1czm9fk0svacy4h6a3fzkpafi4f7a9gml36kk8cq1igaghbspg3k.nar.xz'
|
|
Relevant RFC: NixOS/rfcs#4
$ ag -l | xargs sed -i -e "/\"/s/’/'/g;/\"/s/‘/'/g"
|