Make curl's low speed limit configurable via stalled-download-timeout.
Before, this limit was five minutes without receiving a single byte.
This is much too long, given that the remote end may not even have
acknowledged the HTTP request.
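libcurl expresses this as a low-speed limit: abort when fewer than N
bytes per second arrive for M seconds. A minimal sketch of wiring a
configurable timeout to those options (the function and parameter names
here are illustrative, not the actual code):
```
#include <curl/curl.h>

// Sketch: abort the transfer if we receive less than 1 byte/second
// for `stalledDownloadTimeout` seconds, instead of waiting forever.
// The names are hypothetical stand-ins for the real setting.
void setStallTimeout(CURL * req, long stalledDownloadTimeout)
{
    curl_easy_setopt(req, CURLOPT_LOW_SPEED_LIMIT, 1L);
    curl_easy_setopt(req, CURLOPT_LOW_SPEED_TIME, stalledDownloadTimeout);
}
```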
|
This is a much simpler fix to the 'error 9 while decompressing xz
file' problem than 78fa47a7f08a4cb6ee7061bf0bd86a40e1d6dc91. We just
do a ranged HTTP request starting after the data that we previously
wrote into the sink.
Fixes #2952, #379.
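A minimal sketch of the idea using libcurl's CURLOPT_RANGE; the
`bytesWritten` counter and the function name are illustrative:
```
#include <curl/curl.h>
#include <string>

// Sketch: on retry, request only the bytes we haven't written yet,
// so the sink never sees duplicate data. `bytesWritten` is whatever
// counter tracks data already handed to the sink.
void resumeFrom(CURL * req, size_t bytesWritten)
{
    if (bytesWritten > 0) {
        // An open-ended range "N-" asks for everything from offset N on.
        auto range = std::to_string(bytesWritten) + "-";
        curl_easy_setopt(req, CURLOPT_RANGE, range.c_str());
    }
}
```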
|
This reverts commit 78fa47a7f08a4cb6ee7061bf0bd86a40e1d6dc91.
|
Makes it easier to identify the failure reason in other tooling, e.g.
to differentiate between a non-deterministic --check and a failed build.
$ nix-build '<nix/fetchurl.nix>' --argstr url http://example.org --argstr sha256 0000000000000000000000000000000000000000000000000000
hash mismatch in fixed-output derivation '/nix/store/nzi9ck45rwlxzcwr25is7qlf3hs5xl83-example.org':
wanted: sha256:0000000000000000000000000000000000000000000000000000
got: sha256:08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm
$ echo $?
102
$ nix-build -E 'with import <nixpkgs> {}; runCommand "foo" {} "date +%s > $out"' --check
warning: rewriting hashes in '/nix/store/g3k47g0399fvjmbm0p0mnad74k4w8vkz-foo'; cross fingers
error: derivation '/nix/store/mggc8dz13ackb49qca6m23zq4fpq132q-foo.drv' may not be deterministic: output '/nix/store/g3k47g0399fvjmbm0p0mnad74k4w8vkz-foo' differs
$ echo $?
104
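A sketch of what such a mapping could look like; only the 102/104
values come from the transcript above, the rest (the names, the generic
100) is illustrative:
```
// Sketch: map build-failure kinds to distinct exit statuses so that
// tooling can tell a hash mismatch from a non-deterministic build.
// Only the 102/104 values are taken from the transcript above.
enum class BuildFailure { Generic, HashMismatch, NotDeterministic };

int exitStatusFor(BuildFailure f)
{
    switch (f) {
        case BuildFailure::HashMismatch:     return 102;
        case BuildFailure::NotDeterministic: return 104;
        default:                             return 100;
    }
}
```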
|
Setting `http2 = false` in nix config (e.g. /etc/nix/nix.conf)
had no effect, and `nix-env -vvvvv -i hello` still downloaded .nar
packages using HTTP/2.
In `src/libstore/download.cc`, the `CURL_HTTP_VERSION_2TLS` option was
being explicitly set when `downloadSettings.enableHttp2` was `true`,
but `CURL_HTTP_VERSION_1_1` was not being explicitly set when
`downloadSettings.enableHttp2` was `false`.
This may be because `https://curl.haxx.se/libcurl/c/libcurl-env.html` states:
"You have to set this option if you want to use libcurl's HTTP/2 support."
but the same documentation also states:
"DEFAULT
Since curl 7.62.0: CURL_HTTP_VERSION_2TLS
Before that: CURL_HTTP_VERSION_1_1"
So the default setting for libcurl >= 7.62.0 is HTTP/2.
In this commit, `CURLOPT_HTTP_VERSION` is explicitly set to
`CURL_HTTP_VERSION_1_1` when the `downloadSettings.enableHttp2` nix
config setting is `false`.
This can be tested by running `nix-env -vvvvv -i hello | grep HTTP`.
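Roughly, as a sketch (the surrounding Downloader code is simplified
away; only the two CURL constants come from the description above):
```
#include <curl/curl.h>

// Sketch: explicitly pin the HTTP version both ways, since on
// curl >= 7.62.0 leaving CURLOPT_HTTP_VERSION unset defaults to
// CURL_HTTP_VERSION_2TLS.
void setHttpVersion(CURL * req, bool enableHttp2)
{
    curl_easy_setopt(req, CURLOPT_HTTP_VERSION,
        (long) (enableHttp2 ? CURL_HTTP_VERSION_2TLS
                            : CURL_HTTP_VERSION_1_1));
}
```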
|
--no-net causes tarballTtl to be set to the largest 32-bit integer,
which makes comparisons like 'time + tarballTtl < other_time' overflow
on 32-bit systems. So cast the operands to 64-bit first.
https://hydra.nixos.org/build/95076624
(cherry picked from commit 29ccb2e9697ee2184012dd13854e487928ae4441)
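A minimal illustration of the fix; the function and its types are
illustrative, not the actual code:
```
#include <cstdint>
#include <ctime>

// Sketch: with tarballTtl == INT32_MAX, a 32-bit addition wraps and
// the freshness check gives the wrong answer on 32-bit systems.
// Widening to 64 bits before adding keeps the comparison correct.
bool isFresh(std::time_t fetched, std::time_t now, uint32_t tarballTtl)
{
    return (int64_t) fetched + (int64_t) tarballTtl >= (int64_t) now;
}
```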
|
(cherry picked from commit df3f5a78d5ab0a1f2dc9d288b271b38a9b8b33b5)
|
This flag
* Disables substituters.
* Sets the tarball-ttl to infinity (ensuring e.g. that the flake
registry and any downloaded flakes are considered current).
* Disables retrying downloads and sets the connection timeout to the
minimum. (So it doesn't completely disable downloads at the moment.)
(cherry picked from commit 8ea842260b4fd93315d35c5ba94b1ff99ab391d8)
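A sketch of what applying the flag amounts to; the setting names here
are hypothetical stand-ins for the real options:
```
#include <cstdint>
#include <limits>

// Sketch: what --no-net effectively does to the settings.
// Names are hypothetical stand-ins for the real options.
struct NetSettings {
    bool useSubstitutes;
    uint32_t tarballTtl;
    unsigned int downloadRetries;
    unsigned int connectTimeout; // seconds
};

void applyNoNet(NetSettings & s)
{
    s.useSubstitutes = false;                            // no substituters
    s.tarballTtl = std::numeric_limits<uint32_t>::max(); // cached = current
    s.downloadRetries = 0;                               // don't retry
    s.connectTimeout = 1;                                // minimal timeout
}
```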
|
Once we've started writing data to a Sink, we can't restart a download
request, because then we end up writing duplicate data to the
Sink. Therefore we shouldn't handle retries in Downloader but at a
higher level (in particular, in copyStorePath()).
Fixes #2952.
(cherry picked from commit a67cf5a3585c41dd9f219a2c7aa9cf67fa69520b)
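A sketch of the retry-at-a-higher-level pattern; the names and the
retry policy are illustrative, not the actual copyStorePath() code:
```
#include <functional>

// Sketch: since a partially consumed download can't be restarted
// without duplicating data in the sink, retry the whole operation from
// a level where a fresh sink can be created on each attempt.
void withRetries(unsigned int tries, const std::function<void()> & op)
{
    for (unsigned int attempt = 1; ; ++attempt) {
        try {
            op(); // each attempt builds a brand-new download + sink
            return;
        } catch (...) {
            if (attempt >= tries) throw;
        }
    }
}
```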
|
(cherry picked from commit 15fa70cd1b853f5e62662b99ccb9ef3da6cfadff)
|
Also, make fetchGit and fetchMercurial update allowedPaths properly.
(Maybe the evaluator, rather than the caller of the evaluator, should
apply toRealPath(), but that's a bigger change.)
(cherry picked from commit 5c34d665386f4053d666b0899ecca0639e500fbd)
|
(cherry picked from commit 529add316c5356a8060c35f987643b7bf5c796dc)
|
Use the same output ordering and format everywhere.
This is such a common issue that we trade the single-line error message
for better readability.
Old message:
```
fixed-output derivation produced path '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com' with sha256 hash '08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm' instead of the expected hash '1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m'
```
New message:
```
hash mismatch in fixed-output derivation '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com':
wanted: sha256:1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m
got: sha256:08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm
```
|
This enables using http for S3 requests, for debugging or for
implementations that don't have https configured. This is not a problem
for binary caches since they should not contain sensitive information.
Both package signatures and AWS auth already protect against tampering.
|
download: if there are active requests, never sleep for 10s
|
* Don't wait forever for the client to remove data from the
buffer. This does mean that the buffer can grow without bounds
(e.g. when downloading is faster than writing to disk), but meh.
* Don't hold the state lock while calling the sink. The sink could
take any amount of time to process the data (in particular when it's
actually a coroutine), so we don't want to block the download
thread.
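A sketch of the resulting locking pattern, with std::mutex and a plain
function pointer standing in for the actual state wrapper and Sink:
```
#include <mutex>
#include <string>

// Sketch: move the buffered data out while holding the lock, then call
// the sink unlocked, so a slow sink (e.g. a coroutine) never blocks the
// download thread that wants to append to the buffer.
struct DownloadState {
    std::mutex mutex;
    std::string buffer; // can grow without bounds if the sink is slow
};

void drainToSink(DownloadState & state, void (*sink)(const std::string &))
{
    std::string chunk;
    {
        std::lock_guard<std::mutex> lock(state.mutex);
        chunk.swap(state.buffer); // take the data; release the lock fast
    }
    if (!chunk.empty())
        sink(chunk); // may take arbitrarily long; lock not held
}
```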
|
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
    if (ret != BZ_OK)
        CompressionError("error while compressing bzip2 file");
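Per the description above, the corrected code presumably reads:
    if (ret != BZ_RUN_OK)
        throw CompressionError("error while compressing bzip2 file");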
|
Works for uploading and not downloading.
|
This fixes 'error 10 while decompressing xz file'.
https://hydra.nixos.org/build/78308551
|
Fixes #2225.
|
In some versions/configurations libcurl doesn't handle timeouts
(especially DNS timeouts) in a way that wakes curl_multi_wait.
This doesn't appear to be a problem if using c-ares, FWIW.
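One workaround, sketched here under the assumption that bounding the
sleep is what this commit does, is to cap the time handed to
curl_multi_wait so a missed timeout is noticed on the next iteration:
```
#include <curl/curl.h>
#include <algorithm>

// Sketch: never sleep longer than a small cap, so that timeouts libcurl
// fails to signal (e.g. DNS timeouts without c-ares) still get
// processed promptly on the next loop iteration.
void waitForActivity(CURLM * multi)
{
    long timeout = -1;
    curl_multi_timeout(multi, &timeout);
    int maxSleep = 1000; // ms; illustrative cap
    int ms = timeout < 0 ? maxSleep : std::min((int) timeout, maxSleep);
    int numfds = 0;
    curl_multi_wait(multi, nullptr, 0, ms, &numfds);
}
```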
|
I'm not sure if curl ever asks for enough data at once for truncation
to occur, but better safe than sorry.
|
Don't say "download" when we mean "upload".
|
This reduces memory consumption of
nix copy --from https://cache.nixos.org --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 176 MiB to 82 MiB. (The remaining memory is probably due to xz
decompression overhead.)
Issue https://github.com/NixOS/nix/issues/1681.
Issue https://github.com/NixOS/nix/issues/1969.
|
Fixes #1964.
|
ignore "interrupted" exception in progress callback
|
Fixes #1905.
|
The certificates won't get any better if we retry.
|
Actually fixes #1976.
|
E.g.
nix run --store ~/my-nix -f channel:nixos-17.03 hello -c hello
This problem was mentioned in #1897.
|
Some servers, such as Artifactory, allow uploading with PUT and Basic
auth. This allows nix copy to upload binaries to those servers.
Worked on together with @adelbertc.
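A sketch of the relevant libcurl setup for such uploads (credential
handling elided; the values are obviously illustrative):
```
#include <curl/curl.h>

// Sketch: upload via PUT with HTTP Basic auth, as servers like
// Artifactory accept.
void setupUpload(CURL * req, const char * url)
{
    curl_easy_setopt(req, CURLOPT_URL, url);
    curl_easy_setopt(req, CURLOPT_UPLOAD, 1L); // use PUT
    curl_easy_setopt(req, CURLOPT_HTTPAUTH, (long) CURLAUTH_BASIC);
    curl_easy_setopt(req, CURLOPT_USERPWD, "user:password");
    // ... CURLOPT_READFUNCTION / CURLOPT_INFILESIZE_LARGE for the body ...
}
```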
|