## Summary
The `DefaultResolverProvider` struct was not public. This PR exposes it
so we can build our own and use this as a fallback.
## Test Plan
I did not explicitly test this trivial change.
A WARN log was being emitted for a "broken cache entry" in the case
where the cache entry simply doesn't exist. But this is totally fine and
expected. So we detect the kind of error that occurred and emit a TRACE
if the file simply didn't exist.
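A minimal sketch of the distinction (names illustrative, not the actual uv internals):
```rust
use std::io;
use std::path::Path;
use tracing::{trace, warn};

/// A missing file is expected on a cold cache; anything else suggests a
/// genuinely broken entry and deserves a warning.
fn log_cache_read_error(path: &Path, err: &io::Error) {
    if err.kind() == io::ErrorKind::NotFound {
        trace!("No cache entry found at {}", path.display());
    } else {
        warn!("Broken cache entry at {}: {}", path.display(), err);
    }
}
```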
A file in a zip can set arbitrary unix permissions, but we, like pip,
want to preserve only the executable bit and otherwise use the OS
defaults.
This should be faster for wheels with many files since we now avoid the
blocking fs call to set the permissions in most cases.
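A sketch of the Unix side of that behavior (names illustrative):
```rust
#[cfg(unix)]
fn apply_zip_permissions(
    file: &std::fs::File,
    unix_mode: Option<u32>,
) -> std::io::Result<()> {
    use std::os::unix::fs::PermissionsExt;

    // Only the executable bit matters; everything else keeps the OS
    // defaults, so the common non-executable file skips the `chmod` entirely.
    if let Some(mode) = unix_mode {
        if mode & 0o111 != 0 {
            file.set_permissions(std::fs::Permissions::from_mode(0o755))?;
        }
    }
    Ok(())
}
```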
Fixes #1740.
## Summary
Don't preserve mtime to work around alexcrichton/tar-rs#349. Same as
#634 except for the streaming unzip.
Fixes #1748.
## Test Plan
Added the tomli source dist as test case.
## Summary
I am looking to instantiate a `RegistryClient`. However, when using the
`RegistryClientBuilder` a new reqwest client is always constructed. I
would like to pass in a custom `reqwest::Client` to be able to share the
http resources with other parts of my application.
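A sketch of the builder shape (type and method names are my guesses, not necessarily the exact uv API):
```rust
struct RegistryClient {
    client: reqwest::Client,
}

#[derive(Default)]
struct RegistryClientBuilder {
    client: Option<reqwest::Client>,
}

impl RegistryClientBuilder {
    /// Supply a pre-built `reqwest::Client` so that connection pools and
    /// other HTTP resources are shared with the rest of the application.
    fn client(mut self, client: reqwest::Client) -> Self {
        self.client = Some(client);
        self
    }

    fn build(self) -> RegistryClient {
        RegistryClient {
            client: self.client.unwrap_or_else(reqwest::Client::new),
        }
    }
}
```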
## Test Plan
The uv codebase does not use this addition to the builder, and all
existing tests still succeed. In my own code, I can now pass a custom
`Client`.
## Summary
Add the environment variable `UV_REQUEST_TIMEOUT` to allow control over
pip timeouts.
Closes #1549
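A sketch of how such a variable might be wired up (the fallback value is illustrative, not uv's documented default):
```rust
use std::time::Duration;

/// Read `UV_REQUEST_TIMEOUT` (in seconds), falling back to a default.
fn request_timeout() -> Duration {
    std::env::var("UV_REQUEST_TIMEOUT")
        .ok()
        .and_then(|raw| raw.parse::<u64>().ok())
        .map(Duration::from_secs)
        // Illustrative default only.
        .unwrap_or_else(|| Duration::from_secs(300))
}

fn build_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder().timeout(request_timeout()).build()
}
```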
## Test Plan
I built uv from the Dockerfile at the repository root, set the timeout
to 3 seconds, and ran `uv pip install torch`.
I measured the execution time with the time command and confirmed that
the process finished at a value close to the timeout we set.
```bash
root@037c69228cdc:~# time UV_REQUEST_TIMEOUT=3 /uv pip install torch
Resolved 22 packages in 25ms
error: Failed to download distributions
Caused by: Failed to fetch wheel: nvidia-cusolver-cu12==11.4.5.107
Caused by: Failed to extract source distribution
Caused by: request or response body error: operation timed out
Caused by: operation timed out
real 0m3.064s
user 0m0.225s
sys 0m0.240s
```
## Summary
This opens up space to add other cache-related commands. (`uv clean`
continues to work for backwards compatibility but is hidden from the
CLI.)
## Summary
We don't control these, so it seems preferable _not_ to fail on them,
but rather to just ignore them entirely. (I considered adding a long
allow-list, but questioned the point of it: we'd end up having to
extend it whenever more invalid extras were published in the future.)
Closes https://github.com/astral-sh/uv/issues/1633.
## Summary
The main change is that we need to have an explicit list of protocols we
_do_ support (like `https`), so that when we see a Windows absolute path
(`C:\...`), we don't treat the `C` as a protocol itself.
Closes https://github.com/astral-sh/uv/issues/1539.
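A sketch of the idea (the exact scheme list is an assumption):
```rust
/// Decide whether `given` starts with a supported URL scheme, treating a
/// single-character "scheme" as a Windows drive letter instead.
fn has_supported_scheme(given: &str) -> bool {
    match given.split_once(':') {
        // `C:\Users\...` would parse as scheme `C`; that's a path, not a URL.
        Some((scheme, _)) if scheme.len() == 1 => false,
        Some((scheme, _)) => matches!(scheme, "http" | "https" | "file"),
        None => false,
    }
}

fn main() {
    assert!(has_supported_scheme("https://example.com/foo.whl"));
    assert!(!has_supported_scheme(r"C:\wheels\foo.whl"));
}
```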
## Summary
When we read `--index-url` from a `requirements.txt`, we attempt to
respect the `--index-url` provided by the CLI if it exists.
Unfortunately, `--index-url` from the CLI has a default value... so we
_never_ respect the `--index-url` in the requirements file.
This PR modifies the CLI to use `None`, and moves the default into logic
in the `IndexLocations` struct.
Closes https://github.com/astral-sh/uv/issues/1692.
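A sketch of the pattern (field and method names illustrative):
```rust
use url::Url;

struct IndexLocations {
    /// `None` means "not specified anywhere", so a value from a
    /// requirements file can no longer be shadowed by a CLI default.
    index: Option<Url>,
}

impl IndexLocations {
    fn index_url(&self) -> Url {
        self.index
            .clone()
            .unwrap_or_else(|| Url::parse("https://pypi.org/simple").unwrap())
    }
}
```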
Uses `--find-links` to discover vendored scenario build dependencies and
allows us to use `--index-url` instead of `--extra-index-url` to avoid
hitting the real PyPI in scenario tests.
## Summary
This fixes https://github.com/astral-sh/uv/issues/1704 by removing the
version from the produced header.
## Test Plan
Checked with clippy, and tests are updated too.
## Summary
This PR adds the `--prompt` option to `venv` subcommand.
The default behavior for `uv venv` is to create a virtual environment in
the current directory with `.venv` name. This is different from `venv` /
`virtualenv` where a user always needs to provide the virtual
environment path. This allows us to define our own behavior in the
default scenario (`uv venv`). We've decided to use the current
directory's name in that case.
Workflows:
| Command | Virtual Environment Name | Prompt |
|--------|--------|--------|
| `uv venv` | `.venv` (default) | Current directory name |
| `uv venv project` | `project` | `project` |
| `uv venv --prompt .` | `.venv` | Current directory name |
| `uv venv --prompt foobar` | `.venv` | `foobar` |
| `uv venv project --prompt foobar` | `project` | `foobar` |
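A sketch of the selection logic in the table above (illustrative, not the actual implementation):
```rust
use std::path::Path;

fn choose_prompt(venv_path: &Path, prompt_arg: Option<&str>, cwd: &Path) -> Option<String> {
    let dir_name = |p: &Path| p.file_name().map(|n| n.to_string_lossy().into_owned());
    match prompt_arg {
        // `--prompt .` means: use the current directory's name.
        Some(".") => dir_name(cwd),
        // Any other explicit prompt is taken verbatim.
        Some(prompt) => Some(prompt.to_string()),
        // Without `--prompt`, the default `.venv` borrows the current
        // directory's name; a named venv uses its own directory name.
        None if venv_path == Path::new(".venv") => dir_name(cwd),
        None => dir_name(venv_path),
    }
}
```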
Fixes #1445
## Test Plan
This is my first Rust code and I don't know how to write tests yet.
I just checked the behavior manually:
```
$ cargo build
$ mkdir t
$ cd t
$ ../target/debug/uv venv -p 3.11
$ rg -w t .venv/bin/acti*
.venv/bin/activate.csh
13:setenv VIRTUAL_ENV '/Users/inada-n/work/uv/t/.venv'
20:if ('t' != "") then
21: setenv VIRTUAL_ENV_PROMPT 't'
23: setenv VIRTUAL_ENV_PROMPT "$VIRTUAL_ENV:t:q"
38: # in which case, $prompt is undefined and we wouldn't
.venv/bin/activate
48:VIRTUAL_ENV='/Users/inada-n/work/uv/t/.venv'
59: VIRTUAL_ENV_PROMPT="t"
.venv/bin/activate.fish
61:set -gx VIRTUAL_ENV '/Users/inada-n/work/uv/t/.venv'
73:if test -n 't'
74: set -gx VIRTUAL_ENV_PROMPT 't'
.venv/bin/activate.ps1
40:if ("t" -ne "") {
41: $env:VIRTUAL_ENV_PROMPT = "t"
.venv/bin/activate.nu
6:# but then simply `deactivate` won't work because it is just an alias to hide
35: let virtual_env = '/Users/inada-n/work/uv/t/.venv'
50: let virtual_env_prompt = (if ('t' | is-empty) {
53: 't'
```
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
This PR introduces more robust cache healing when `uv` fails to
deserialize an existing cache entry.
("Cache healing" in this context means that if `uv` fails to
deserialize a cache entry, then it will automatically invalidate that
entry and re-generate the data. Typically by sending an HTTP request.)
Prior to some optimizations I made around deserialization, we were
already doing this. After those optimizations, deserializing a cache
policy and deserializing the payload were split into two steps. While
deserializing a cache policy retained its cache-healing behavior,
deserializing the payload did not. This became an issue when #1556
landed, which changed one of our `rkyv` data types and in turn made our
internal types incompatible with existing cache entries. One could work
around this by clearing `uv`'s cache with `uv clean`, but we should
just do it automatically on an entry-by-entry basis.
This does technically introduce a new cost by pessimistically cloning
the HTTP request so that we can re-send it if necessary (see the commit
messages for the snag that pushed me toward this approach). So I re-ran
my favorite ad-hoc benchmark:
```
$ hyperfine -w10 --runs 50 "uv-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null" "uv-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null"
Benchmark 1: uv-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 114.4 ms ± 3.2 ms [User: 149.4 ms, System: 221.5 ms]
Range (min … max): 106.7 ms … 122.0 ms 50 runs
Benchmark 2: uv-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 114.0 ms ± 3.0 ms [User: 146.0 ms, System: 223.3 ms]
Range (min … max): 105.3 ms … 121.4 ms 50 runs
Summary
uv-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null ran
1.00 ± 0.04 times faster than uv-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
```
Which is about what I expected.
We should endeavor to have a better testing strategy for these kinds of
bugs, but I think it might be a little tricky to do. I created
https://github.com/astral-sh/uv/issues/1699 to track that.
Fixes #1571
## Summary
Adds cli command / flag (`generate-shell-completion <SHELL>` /
`--generate-shell-completion <SHELL>`) to generate the completion script
for the given shell. Implemented in exactly the same way as it is done
in ruff
(https://github.com/astral-sh/ruff/blob/main/crates/ruff/src/lib.rs#L197)
Closes https://github.com/astral-sh/uv/issues/1654
## Test Plan
I've only tested the generated script manually, for the bash shell on
Ubuntu 22.04.3:
```bash
$ uv --generate-shell-completion bash > /usr/share/bash-completion/completions/uv
$ uv # <TAB>
-q -h --verbose --no-cache --version clean
-v -V --no-color --cache-dir pip generate-shell-completion
-n --quiet --color --help venv help
$ uv pip # <TAB>
-q -n -V --verbose --color --cache-dir --version sync uninstall help
-v -h --quiet --no-color --no-cache --help compile install freeze
```
Resolves #1292.
## Summary
Move the yanked warnings for `uv pip sync` and `uv pip install` to the
end of the commands, as per #1292.
## Test Plan
I ran the unit tests: `cargo nextest run`
## Summary
Just as we mark virtualenvs as `gitignore`d by default, we should also
mark them with a `CACHEDIR.TAG` file, to ensure that they aren't
included in backups, etc.
Closes https://github.com/astral-sh/uv/issues/1648.
## Test Plan
Ran `cargo run venv` and:
```
❯ ls .venv
CACHEDIR.TAG bin lib pyvenv.cfg
```
## Summary
Added `uv` to the list of preserved packages when building the
installer plan. With this change, `uv` is not going to be removed when,
for example, running `python -m uv pip sync requirements.txt` in a venv
where `uv` is installed but `requirements.txt` does not contain it.
Closes #1631
## Test Plan
Went through the example attached to
https://github.com/astral-sh/uv/issues/1631 and did not see `uv` being
removed in the output:
```
$ python -m uv pip sync requirements.txt
Installed 1 package in 20ms
+ ruff==0.2.2
```
## Summary
This PR adds the `activation.bat`, `deactivation.bat` and `pyenv.bat`
files to add support for using uv from CMD.
This PR further fixes an issue with our trampoline implementation where
calling an executable like `black` failed:
```
(venv) C:\Users\Micha\astral\test>where black
C:\Users\Micha\astral\test\.venv\Scripts\black.exe
(venv) C:\Users\Micha\astral\test>black
C:\Users\Micha\AppData\Local\Programs\Python\Python312\python.exe: can't open file 'C:\\Users\\Micha\\astral\\test\\black': [Errno 2] No such file or directory
```
The issue was that CMD doesn't extend `black` to its full path before
passing it to the trampoline and our trampoline generated the command
`<python> black` instead of `<python> .venv/Scripts/black`, and Python
can't find `black` in the project directory.
This PR fixes this by using the full executable name (that we already
parsed out to discover the Python version). This adds one complication:
we need to preserve the arguments without repeating the executable name
that is the first argument.
One option is to use
[`CommandLineToArgvW`](https://learn.microsoft.com/de-de/windows/win32/api/shellapi/nf-shellapi-commandlinetoargvw)
and then serialize the arguments 1.. to a string again. I decided
against that. Win32 API calls are easy to get wrong. That's why I
implemented the parsing rules specified in
[`CommandLineToArgvW`](https://learn.microsoft.com/de-de/windows/win32/api/shellapi/nf-shellapi-commandlinetoargvw)
to skip the first argument.
Fixes https://github.com/astral-sh/uv/issues/1471
## Test Plan
https://github.com/astral-sh/uv/assets/1203881/bdb537b6-97c8-4f7e-bb4a-3a614eb5e0f6
Powershell continues to work
https://github.com/astral-sh/uv/assets/1203881/6c806477-a7c6-4047-9ffc-5ed91c6f1c84
I haven't been able to test the aarch64 binaries.
## Summary
If an editable package declares a direct URL requirement, we currently
error since it's not considered an "allowed" requirement. We need to add
those URLs to the allow-list.
Closes https://github.com/astral-sh/uv/issues/1603.
## Summary
It's incorrect to pass the resolution and dependency mode down to the
`BuildDispatch`, since it means that we'll use `--no-deps` when building
source distributions. If you set resolution to `lowest`, it also means
we end up using (e.g.) the lowest version of `wheel`, which also doesn't
make sense.
It's fine to pass `--exclude-newer`.
Closes https://github.com/astral-sh/uv/issues/1355.
Closes https://github.com/astral-sh/uv/issues/1563.
This PR fixes the bug where the `BIN_NAME` replacement field wasn't
being used in the activator scripts.
fixes: #1518
## Test Plan
As I don't have a Windows machine, I switched the `bin_name` value here
to point to `Scripts` on `unix` platform:
2a76c59084/crates/gourgeist/src/bare.rs (L99-L105)
<details><summary>Code diff</summary>
<p>
```diff
diff --git a/crates/gourgeist/src/bare.rs b/crates/gourgeist/src/bare.rs
index 4c7808d3..0e0b41cf 100644
--- a/crates/gourgeist/src/bare.rs
+++ b/crates/gourgeist/src/bare.rs
@@ -97,9 +97,9 @@ pub fn create_bare_venv(location: &Utf8Path, interpreter: &Interpreter) -> io::R
// TODO(konstin): I bet on windows we'll have to strip the prefix again
let location = location.canonicalize_utf8()?;
let bin_name = if cfg!(unix) {
- "bin"
- } else if cfg!(windows) {
"Scripts"
+ } else if cfg!(windows) {
+ "bin"
} else {
unimplemented!("Only Windows and Unix are supported")
};
```
</p>
</details>
I then created the virtual environment as usual and tested out that the path modifications were correct:
```console
$ cargo run --bin uv -- venv
Finished dev [unoptimized + debuginfo] target(s) in 0.13s
Running `target/debug/uv venv`
Using Python 3.12.1 interpreter at
/Users/dhruv/.pyenv/versions/3.12.1/bin/python3.12
Creating virtualenv at: .venv
$ source .venv/Scripts/activate
$ echo $PATH
/Users/dhruv/work/astral/uv/.venv/Scripts:[...]
$ which python
/Users/dhruv/work/astral/uv/.venv/Scripts/python
```
I'm not sure how else to test this without having access to a Windows machine.
## Summary (#1562)
It turns out that `hexdump` uses an invalid source distribution format
whereby the contents aren't nested in a top-level directory -- instead,
they're all just flattened at the top-level. In looking at pip's source
(51de88ca64/src/pip/_internal/utils/unpacking.py (L62)),
it only strips the top-level directory if all entries have the same
directory prefix (i.e., if it's the only thing in the directory). This
PR accommodates these "invalid" distributions.
I can't find any history on this method in `pip`. It looks like it dates
back over 15 years ago, to before `pip` was even called `pip`.
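A sketch of that heuristic (illustrative; pip's version handles more cases):
```rust
use std::path::{Component, PathBuf};

/// Return the shared top-level directory if, and only if, every archive
/// entry is nested under the same one; otherwise (as with `hexdump`'s
/// flattened layout), return `None` and strip nothing.
fn strippable_prefix(paths: &[PathBuf]) -> Option<String> {
    let mut prefix: Option<String> = None;
    for path in paths {
        let mut components = path.components();
        let top = match components.next()? {
            Component::Normal(name) => name.to_string_lossy().into_owned(),
            _ => return None,
        };
        // A bare top-level file (no second component) disqualifies stripping.
        components.next()?;
        match &prefix {
            None => prefix = Some(top),
            Some(existing) if *existing == top => {}
            Some(_) => return None,
        }
    }
    prefix
}
```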
Closes https://github.com/astral-sh/uv/issues/1376.
## Summary
This was just a missing line -- we have `dependencies.remove(&package);`
in the ~identical branch above, but it must've been an oversight to omit
it here.
Closes https://github.com/astral-sh/uv/issues/1467.
## Test Plan
`cargo test`
## Summary
It turns out that it's not uncommon to end up with repeated packages in
requirements files when running `pip-sync`, e.g., you might have
`anyio==4.0.0` specified multiple times. This PR relaxes our assertions
in the install plan to allow such repeated packages, as long as the
requirement markers are exactly the same (i.e., they are truly
duplicates).
Closes https://github.com/astral-sh/uv/issues/1552.
## Summary
If you're developing on a package like `attrs` locally, and it has a
recursive extra like `attrs[dev]`, it turns out that we then try to find
the `attrs` in `attrs[dev]` from the registry, rather than recognizing
that it's part of the editable.
This PR fixes the issue by making editables slightly more first-class
throughout the resolver. Instead of mocking metadata, we explicitly
check for extras in various places. Part of the problem here is that we
treated editables as URL dependencies, but when we saw an _extra_ like
`attrs[dev]`, we didn't map that back to the URL. So now, we treat them
as registry dependencies, but with the appropriate guardrails
throughout.
Closes https://github.com/astral-sh/uv/issues/1447.
## Test Plan
- Cloned `attrs`.
- Ran `cargo run venv && cargo run pip install -e ".[dev]" -v`.
## Summary
This _could_ fix https://github.com/astral-sh/uv/issues/1454, but I'm
not sure. I was able to replicate by forcing a bunch of error states.
But, in short, if we fail to hardlink on the initial copy due to a file
existing, and then fail _again_, we fallback to copying. But if we copy,
then the tempfile doesn't exist, and so the `fs_err::rename(&tempfile,
&out_path)?;` will fail with "File not found".
This PR just ensures that the cases are explicitly mutually exclusive:
we only attempt to rename if the hardlink succeeded.
This PR fixes the OS detection for Alpine Linux such that the version
of musl available is correctly determined. The issue boiled down to
a regex that required 2 digits for each version component. But a
valid musl version is 1.2.4, which only has a single digit for each
component.
It's unclear how this was working for musl before this change. My
theory is that our other methods of OS detection were somehow working.
The first commit in this PR cleans up our Linux detection logic and adds
lots of tracing calls to make debugging issues like this easier in the
future. To do so, one can run:
```
$ RUST_LOG=trace uv pip install -v whatever
```
The second commit has the actual fix.
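A sketch of the fix (not the exact regex uv uses): each version component is one-or-more digits, so musl's `Version 1.2.4` line parses.
```rust
use regex::Regex;

fn parse_musl_version(ldd_output: &str) -> Option<(u64, u64, u64)> {
    // The old pattern effectively demanded two digits per component.
    let re = Regex::new(r"Version (\d+)\.(\d+)\.(\d+)").unwrap();
    let caps = re.captures(ldd_output)?;
    Some((
        caps[1].parse().ok()?,
        caps[2].parse().ok()?,
        caps[3].parse().ok()?,
    ))
}
```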
Fixes #1427
## Summary
By using the display representation of `Version` to form a `PackageId`,
we run the risk (as seen in the linked issue) of thinking that versions
like `2021.1` and `2021.1.0` are not equivalent.
Closes https://github.com/astral-sh/uv/issues/1536
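The underlying idea, sketched (uv's `Version` type handles this for real): trailing zeros in the release segment are not significant under PEP 440, so strip them before forming a key.
```rust
fn canonical_release(release: &[u64]) -> &[u64] {
    match release.iter().rposition(|&n| n != 0) {
        Some(last_nonzero) => &release[..=last_nonzero],
        None => &release[..release.len().min(1)],
    }
}

fn main() {
    // `2021.1` and `2021.1.0` now map to the same key.
    assert_eq!(canonical_release(&[2021, 1]), canonical_release(&[2021, 1, 0]));
}
```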
This fixes a bug where `uv pip install` failed to install `polars`:
```
$ uv pip install polars==0.14.0
error: Failed to download: polars==0.14.0
Caused by: Couldn't parse metadata of polars-0.14.0-cp37-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl from 749022b096cb7c1c2cc32b7f433c4f/polars-0.14.0-cp37-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
Caused by: Operator >= cannot be used with a wildcard version specifier
pyarrow>=4.0.*; extra == 'pyarrow'
^^^^^^^
```
Since `pyarrow>=4.0.*; extra == 'pyarrow'` is invalid *and* it comes
from the metadata of a dependency (that isn't under the control of the
end user), we actually attempt to "fix" it. Namely, wildcard
dependency specifications are only allowed with `==` and `!=`, as per
the [Version Specifiers spec]. (They aren't explicitly forbidden in
these cases, but instead only have specified behavior for the `==` and
`!=` operators.)
This is all fine, but it turns out that when we fix the `>=4.0.*`
component, we also strip the quotes around `pyarrow`. (Because some
dependency specifications include stray quotes.) We fix this by making
our quote stripping a bit more selective. (We require that it appear
adjacent to a digit or a `*`.)
Note that #1477 also reports this error:
```
$ uv pip install 'requests>=2.30.*'
error: Failed to parse `requests>=2.30.*`
Caused by: Operator >= cannot be used with a wildcard version specifier
requests>=2.30.*
```
However, we specifically keep that error message since it's something
under the end user's control. And similarly for a dependency
specification in a `requirements.txt` file.
Fixes #1477
[Version Specifiers spec]:
https://packaging.python.org/en/latest/specifications/version-specifiers/
It turns out that `/bin/ls` can sometimes be a plain text file. For
example, in Rocky Linux 9:
```
$ cat /bin/ls
#!/usr/bin/coreutils --coreutils-prog-shebang=ls
```
However, `/bin/sh` is an ELF binary:
```
$ file /bin/sh
/bin/sh: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=7acbb41bf6f1b7d977f1b44675bf3ed213776835, for GNU/Linux 3.2.0, stripped
```
In a related issue (#1433), @zanieb fixed #1395 where, on NixOS,
`/bin/ls` doesn't exist but `/bin/sh` does. However, the fix attempts
`/bin/ls` first and only tries `/bin/sh` if `/bin/ls` doesn't exist. If
`/bin/ls` exists but isn't a valid ELF file, then the entire enterprise
gives up and `uv` fails to detect the version of `libc` that is
installed.
Instead of tweaking the logic to keep trying `/bin/ls` and then falling
back to `/bin/sh` even if parsing `/bin/ls` fails, we just switch over
to reading `/bin/sh` only. It seems like a more fundamental thing to
sniff and is likely less error-prone.
We can adjust this heuristic as needed if it proves to be problematic.
I tested this fix manually on Rocky Linux 9 via Docker:
```
$ cross b -r -p uv --target x86_64-unknown-linux-musl
$ cp target/x86_64-unknown-linux-musl/release/uv ~/astral/issues/uv/i1486/uv
$ docker run --rm -it --mount type=bind,src=/home/andrew/astral/issues/uv/i1486,dst=/host rockylinux:9 bash
[root@df2baa65d2f8 /]# /host/uv venv
Using Python 3.9.18 interpreter at /usr/bin/python3.9
Creating virtualenv at: .venv
[root@df2baa65d2f8 /]#
```
Fixes #1486, Ref #1433
I'm not sure if we should just switch to _always_ reading from sh
instead? I don't love that all these errors are strings, and if
`/bin/ls` exists but can't be parsed, we still won't try `/bin/sh`. We
may want to address these things in the future.
Closes https://github.com/astral-sh/uv/issues/1395
## Summary
It looks like `devpi` might add an empty fragment (`#`) at the end of
the URL. We expect it to contain the hash; this just makes
empty-fragment map to "no hash".
Closes https://github.com/astral-sh/uv/issues/1441.
## Summary
If a distribution contains a `+`, it'll be HTML-escaped; so when we try
to identify the `#`, we'll split in the wrong location.
Closes https://github.com/astral-sh/uv/issues/1338.
Closes https://github.com/astral-sh/uv/issues/1388
Fixes incorrect handling of relative paths returned by indexes without
an explicit `<base>`.
`Url.join` will drop the last segment in an url e.g. `http://foo/bar` ->
`http://foo/baz` if there is not a trailing slash but what we want is
`http://foo/bar/baz`. We don't add the trailing `/` in
`base_url_join_relative` because flat indexes are `http://foo/bar.html`
and we _want_ `bar.html` to be replaced.
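The behavior in question, demonstrated with the `url` crate:
```rust
use url::Url;

fn main() {
    // Without a trailing slash, the last segment is replaced...
    let base = Url::parse("http://foo/bar").unwrap();
    assert_eq!(base.join("baz").unwrap().as_str(), "http://foo/baz");

    // ...with one, the segment is appended, which is what page indexes need.
    let base = Url::parse("http://foo/bar/").unwrap();
    assert_eq!(base.join("baz").unwrap().as_str(), "http://foo/bar/baz");

    // For flat indexes (`.../bar.html`), replacement is the desired behavior.
    let base = Url::parse("http://foo/bar.html").unwrap();
    assert_eq!(base.join("baz").unwrap().as_str(), "http://foo/baz");
}
```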
## Summary
In a `requirements.txt` file, it turns out that the `-c` and `-r`
entries should be interpreted as relative to the file in which they're
declared, while the `-e` entries should be interpreted as relative to
the current working directory, no matter where they're defined.
Previously, we always used the current working directory; now, we use
the declaring file's directory for `-c` and `-r`.
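A sketch of the rule (illustrative):
```rust
use std::path::{Path, PathBuf};

/// `-r`/`-c` entries resolve relative to the declaring requirements file.
fn resolve_included(declaring_file: &Path, entry: &str) -> PathBuf {
    declaring_file
        .parent()
        .unwrap_or_else(|| Path::new("."))
        .join(entry)
}

/// `-e` entries resolve relative to the current working directory.
fn resolve_editable(cwd: &Path, entry: &str) -> PathBuf {
    cwd.join(entry)
}
```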
Closes https://github.com/astral-sh/uv/issues/1367.
Closes https://github.com/astral-sh/uv/issues/1416.
## Summary
Closes https://github.com/astral-sh/uv/issues/1402.
## Test Plan
Ran `cargo run pip install junos-eznc==2.6.5`, which still fails for me,
but fails identically to `pip` (and not on the `requires-python`):
```
/private/var/folders/nt/6gf2v7_s3k13zq_t3944rwz40000gn/T/.tmp7mxT9L/built-wheels-v0/pypi/ncclient/0.6.13/4vvPwmDC_CL2OUXd68Zqb/ncclient-0.6.13.tar.gz/versioneer.py:421: SyntaxWarning: invalid escape sequence '\s'
LONG_VERSION_PY['git'] = '''
Traceback (most recent call last):
File "<string>", line 10, in <module>
File "/private/var/folders/nt/6gf2v7_s3k13zq_t3944rwz40000gn/T/.tmplD5mMO/.venv/lib/python3.12/site-packages/setuptools/build_meta.py", line 366, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/nt/6gf2v7_s3k13zq_t3944rwz40000gn/T/.tmplD5mMO/.venv/lib/python3.12/site-packages/setuptools/build_meta.py", line 480, in run_setup
super().run_setup(setup_script=setup_script)
File "/private/var/folders/nt/6gf2v7_s3k13zq_t3944rwz40000gn/T/.tmplD5mMO/.venv/lib/python3.12/site-packages/setuptools/build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 45, in <module>
File "/private/var/folders/nt/6gf2v7_s3k13zq_t3944rwz40000gn/T/.tmp7mxT9L/built-wheels-v0/pypi/ncclient/0.6.13/4vvPwmDC_CL2OUXd68Zqb/ncclient-0.6.13.tar.gz/versioneer.py", line 1480, in get_version
return get_versions()["version"]
^^^^^^^^^^^^^^
File "/private/var/folders/nt/6gf2v7_s3k13zq_t3944rwz40000gn/T/.tmp7mxT9L/built-wheels-v0/pypi/ncclient/0.6.13/4vvPwmDC_CL2OUXd68Zqb/ncclient-0.6.13.tar.gz/versioneer.py", line 1412, in get_versions
cfg = get_config_from_root(root)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/nt/6gf2v7_s3k13zq_t3944rwz40000gn/T/.tmp7mxT9L/built-wheels-v0/pypi/ncclient/0.6.13/4vvPwmDC_CL2OUXd68Zqb/ncclient-0.6.13.tar.gz/versioneer.py", line 342, in get_config_from_root
parser = configparser.SafeConfigParser()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'configparser' has no attribute 'SafeConfigParser'. Did you mean: 'RawConfigParser'?
```
This PR improves the error message for the problem described in
https://github.com/astral-sh/uv/issues/1376. The original output
duplicates the actual error message and includes lots of noise
(`DirEntry { inner: DirEntry(...) }`).
```
$ uv pip install hexdump==3.3
error: Failed to download and build: hexdump==3.3
Caused by: Failed to extract source distribution: The top level of the archive must only contain a list directory, but it contains: [DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/__main__.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/hexdump.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/data") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/PKG-INFO") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/setup.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/README.txt") }]
Caused by: The top level of the archive must only contain a list directory, but it contains: [DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/__main__.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/hexdump.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/data") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/PKG-INFO") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/setup.py") }, DirEntry { inner: DirEntry("/home/robin/.cache/uv/.tmpgSvTCk/README.txt") }]
```
This PR removes the duplication and `DirEntry` internals so that the
error message is easier to grasp:
```
$ uv pip install hexdump==3.3
error: Failed to download and build: hexdump==3.3
Caused by: Failed to extract source distribution
Caused by: The top level of the archive must only contain a list directory, but it contains: ["__main__.py", "hexdump.py", "data", "PKG-INFO", "setup.py", "README.txt"]
```
It's a little picky about the value, but that seems okay.
```
❯ ./target/debug/uv pip install trio
Audited 1 package in 4ms
❯ UV_NO_CACHE=true ./target/debug/uv pip install trio
Audited 1 package in 50ms
```
Closes #1382
First, replace all usages in files in-place. I used my editor for this.
If someone wants to add a one-liner that'd be fun.
Then, update directory and file names:
```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```
Then add all the files again
```
# Add all the files again
git add crates
git add python/uv
# This one needs a force-add
git add -f crates/uv-trampoline
```
Instead of dropping versions without a compatible distribution, we track
them as incompatibilities in the solver. This implementation follows
patterns established in https://github.com/astral-sh/puffin/pull/1290.
This required some significant refactoring of how we track incompatible
distributions. Notably:
- `Option<TagPriority>` is now `WheelCompatibility` which allows us to
track the reason a wheel is incompatible instead of just `None`.
- `Candidate` now has a `CandidateDist` with `Compatible` and
`Incompatible` variants instead of just `ResolvableDist`; candidates
are not strictly compatible anymore
- `ResolvableDist` was renamed to `CompatibleDist`
- `IncompatibleWheel` was given an ordering implementation so we can
track the "most compatible" (but still incompatible) wheel. This allows
us to collapse the reason a version cannot be used to a single
incompatibility.
- The filtering in the `VersionMap` is retained; we still only store one
incompatible wheel per version. This is sufficient for error reporting.
- A `TagCompatibility` type was added for tracking which part of a wheel
tag is incompatible
- `Candidate::validate_python` moved to
`PythonRequirement::validate_dist`
I am doing more refactoring in #1298 — I think a couple passes will be
necessary to clarify the relationships of these types.
Includes improved error message snapshots for multiple incompatible
Python tag types from #1285 — we should add more scenarios for coverage
of behavior when multiple tags with different levels are present.
Mostly throwing this up here as a discussion topic. Having something
like this is primarily useful for enabling use cases similar to `rye
add` where I want to use this currently. One can accomplish something
similar with `unearth` today or by abusing regular `pip install`:
```
$ ~/.rye/self/bin/pip install --no-deps --dry-run flask --report - -q | jq '.install[0].metadata | {name, version}'
{
"name": "Flask",
"version": "3.0.2"
}
```
Another option would be to have a `puffin resolve` command or similar
that works like `pip compile` without dependencies, takes the
requirements as arguments and returns a line for each resolution. That
would be a larger change.
This rolls back the optimization in the previous commit to be more
general. That is, instead of specializing the case of a range for a
singleton version, we make iteration over the distributions in a
`VersionMap` more explicitly lazy. Iteration now provides a `Version`
(like it did previously) and a _handle_ to a distribution that can be
turned into a `ResolvableDist`.
Doing things this way permits callers to iterate over the versions and
only materialize a distribution if they actually need one. In cases like
candidate selection, one can often rule out use of a distribution
through its version alone, and thus skip construction of that
distribution entirely.
In many cases, version ranges are actually just pins to a
specific and single version. And we can detect that statically
by examining the range. If we do have a range that is just one
version, then we can ask a `VersionMap` for just that version
instead of iterating over what's in the map until we find one
that satisfies the range.
I had tried this before making `VersionMap` construction lazy,
but it didn't seem to matter much. It helps a lot more now
with a lazy `VersionMap`, because it lets us avoid creating a
lot of distributions in memory that we won't ultimately use.
That is, a `PrioritizedDistribution` for a specific version of a
package is not actually materialized in memory until a corresponding
`VersionMap::get` call is made for that version. Similarly, iteration
lazily materializes distributions as it moves through the map. It
specifically does not materialize everything first.
The main reason why this is effective is that an
`OwnedArchive<SimpleMetadata>` represents a zero-copy (other than
reading the source file) version of `SimpleMetadata` that is really just
a `Vec<u8>` internally. The problem with `VersionMap` construction
previously is that it had to eagerly materialize a `SimpleMetadata` in
memory before anything else, which defeats a large part of the purpose
of zero-copy deserialization. By making more of `VersionMap`
construction itself lazy, we permit doing some parts of resolution
without necessarily fully deserializing a `SimpleMetadata` into memory.
Indeed, with this commit, in the warm cached case, a `SimpleMetadata` is
itself never materialized fully in memory.
This does not completely and totally fully realize the benefits of
zero-copy deserialization. For example, we are likely still building
lots of distributions in memory that we don't actually need in some
cases. Perhaps in cases where no resolution exists, or when one needs to
iterate over large portions of the total versions published for a
package.
This commit adds some logging to candidate selection during
resolution. The idea with these logs is to get a signal on
how much "exploring" the resolver does in specific examples.
For example, these logs helped me realize that at least in
some cases, candidate selection was looking through a long list
of versions even when its range consisted of exactly one
version. We'll use this fact in a later commit.
This makes cloning and thus sharing across multiple threads much
cheaper. Since Tags is conceptually immutable once it is constructed,
this doesn't pose an issue and shouldn't introduce any additional
costs.
This is really annoying, but the snapshots keep changing indentation
when updated.
I could not get insta to update them. So I added a print statement to
`main` and updated the snapshots, then removed the statement and updated
the snapshots again to force them all to refresh.
We use
- An arbitrary ABI hash: `MMMMMM` (six base64 characters)
- An unlikely Jython27 Python tag
For cases that are valid but are never going to be available during
tests.
See https://github.com/zanieb/packse/pull/109
Moves yanked version filtering from `VersionMap::from_metadata` to the
resolver and tracks it as a PubGrub unavailable incompatibility so
yanked versions are reflected in error messages.
e.g. before
```
╰─▶ Because only albatross<=0.1.0 is available and you require albatross>0.1.0,
we can conclude that the requirements are unsatisfiable.
```
after
```
╰─▶ Because only the following versions of albatross are available:
albatross<=0.1.0
albatross==1.0.0
and albatross==1.0.0 is unusable because it was yanked, we can conclude that albatross>0.1.0 cannot be used.
And because you require albatross>0.1.0, we can conclude that the requirements are unsatisfiable.
```
## Summary
This PR adds an `--offline` flag to Puffin that disables network
requests (implemented as a Reqwest middleware on our registry client).
When `--offline` is provided, we also allow the HTTP cache to return
stale data.
Closes #942.
Updates our `--no-binary` option and adds a `--only-binary` option for
compatibility with `pip` which uses `:all:`, `:none:` and `<name>` for
specifying packages.
This required adding support for `--only-binary <name>` into our
resolver, previously it was only a boolean toggle.
Retains `--no-build`, which is equivalent to `--only-binary :all:`. This
is common enough for safety that I would prefer it be available without
pip's awkward `:all:` syntax.
---------
Co-authored-by: konsti <konstin@mailbox.org>
## Summary
For PEP 517 builds, the current working directory needs to be set to the
directory of the source distribution. It turns out that on Windows, if
you use a UNC path for the working directory, then relative paths are
interpreted relative to the root of the current drive
([source](https://www.fileside.app/blog/2023-03-17_windows-file-paths/#paths-relative-to-the-root-of-the-current-drive)).
So, when builds attempted to resolve relative paths, they always
errored...
This PR ensures that we remove the UNC prefix when setting the current
working directory.
Closes #1238.
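A sketch using the `dunce` crate (an assumption on my part; uv may strip the prefix differently):
```rust
use std::path::Path;

/// Strip the `\\?\` verbatim prefix where safe, so the build backend
/// resolves relative paths against the source directory rather than the
/// root of the current drive.
fn build_working_dir(source_dir: &Path) -> &Path {
    dunce::simplified(source_dir)
}
```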
## Test Plan
I tested this on my Windows machine by installing `ujson` with
`--no-binary ujson`. (I don't want to add that specific test, since it's
really slow to build.)
Contrary to our prior assumption, we can't reliably select a specific
patch version. With the deadsnakes PPA for example, `python3.12` is
installed into `PATH`, but `python3.12.1` isn't. Based on the assumption
(or rather, observation) that users have a single python patch version
per python minor version installed, generally the latest, we only check
if the installed patch version matches the selected patch version, if
any, instead of searching for one.
In the process, I deduplicated the python discovery logic.
Run `cargo test` on windows in CI, pulling the switch on tier 1 windows
support.
These changes make the bootstrap script virtually required for running
the tests. This gives us consistency between local development and CI,
but it also locks our tests to python-build-standalone and an
artificial `PATH`.
I've deleted the shell bootstrap script in favor of only the python one,
which also runs on windows. I've left the (sym)link creation of the
bootstrap in place, even though it is not used by the tests anymore.
I've reactivated the three tests that would previously stack overflow by
doubling their stack sizes. The stack overflows only happen in debug
mode, so this is neither a user-facing problem nor an actual problem
with our code, and this workaround seems better than optimizing our
code for a case that the (release) compiler can optimize much better
for.
The handling of patch versions will be fixed in a follow-up PR.
Closes #1160. Closes #1161.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
In the process of making VersionMap construction lazy, I realized this
refactoring would be useful to me. It also simplifies a fair bit of case
analysis and does fewer BTreeMap lookups during construction. With that
said, this doesn't seem to matter for perf:
```
$ hyperfine -w10 --runs 50 \
"puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null" \
"puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null"
Benchmark 1: puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 146.8 ms ± 4.1 ms [User: 350.1 ms, System: 314.2 ms]
Range (min … max): 140.7 ms … 158.0 ms 50 runs
Benchmark 2: puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 146.8 ms ± 4.5 ms [User: 359.8 ms, System: 308.3 ms]
Range (min … max): 138.2 ms … 160.1 ms 50 runs
Summary
puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null ran
1.00 ± 0.04 times faster than puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
```
But the simplification is still nice, and will decrease the delta
between what we have now and a lazy version map.
This PR reduces the stack sizes on windows a little further, using the
stack traces from stack overflows combined with looking at the type
sizes. Ultimately, it ignores the three remaining tests failing in
debug mode on windows due to stack overflows, to unblock `cargo test`
for windows on CI.
444 tests run: 444 passed (39 slow), 1 skipped
We need to use the anstream print macros instead of the std print
macros, otherwise we risk wrong color behavior
(https://github.com/astral-sh/puffin/pull/1258#discussion_r1480428236).
Luckily, the `print_stderr` and `print_stdout` lints catch usages of the
std prints.
This PR switches over to anstream consistently and removes the now
redundant clippy lints. The lints should catch missing anstream usage in
the future.
Remove windows-only dependencies from the snapshot output using regex.
We now do the filtering entirely on our own, without relying on insta
settings.
435 tests run: 430 passed (30 slow), 5 failed, 1 skipped
There are no binary installers for the latest patch versions of cpython
for windows, and building them is hard. As an alternative, we download
python-build-standalone cpythons and put them into `<project
root>/bin`. On unix, we can symlink `pythonx.y.z` into this directory
and point `PUFFIN_PYTHON_PATH` to it. On windows, all pythons are called
`python.exe` and they don't like being linked. Instead, we add the path
to each directory containing a `python.exe` to `PUFFIN_PYTHON_PATH`,
similar to the regular `PATH`. The python discovery on windows was
extended to respect `PUFFIN_PYTHON_PATH` where needed.
These changes mean that we don't need to (sym)link pythons anymore and
could drop that part of the script.
435 tests run: 389 passed (21 slow), 46 failed, 1 skipped
## Summary
Open to other opinions here. We could just continue (and warn), prompt
the user with a confirmation, etc.
(The weird thing about those two options is we might need to validate
the command-line arguments _before_ we do that -- so you could get
errors for bad arguments, and then get a warning that your subcommand is
wrong. I can probably avoid that with more work if it feels like a
better outcome, though.)
Closes https://github.com/astral-sh/puffin/issues/1256.
## Summary
These add and remove dependencies from a `pyproject.toml` -- but they're
currently hidden, and don't match the rest of the workflow. We can
re-add them when the time is right.
Since unavailable packages with `--no-index` can be confusing when the
user does not also provide `--find-links`, we add a hint for this case.
Required some plumbing to get the required information to the
`NoSolution` error.
---------
Co-authored-by: konstin <konstin@mailbox.org>
(Please review this PR commit by commit.)
This PR closes an initial loop on zero-copy deserialization. That
is, provides a way to get a `Archived<SimpleMetadata>` (spelled
`OwnedArchive<SimpleMetadata>` in the code) from a `CachedClient`. The
main benefit of zero-copy deserialization is that we can read bytes
from a file, cast those bytes to a structured representation without
cost, and then start using that type as any other Rust type. The
"catch" is that the structured representation is not the actual type
you started with, but the "archived" version of it.
In order to make all this work, we ended up needing to shave a rather
large yak: we had to re-implement HTTP cache semantics. Previously,
we were using the `http-cache-semantics` crate. While it does support
Serde, it doesn't support `rkyv`. Moreover, even simple support for
`rkyv` wouldn't be enough. What we actually want is for the HTTP cache
semantics to be implemented on the *archived* type so that we can
decide whether our cached response is stale or not without needing to
do a full deserialization into the unarchived type. This is why, in
this PR, you'll see `impl ArchivedCachePolicy { ... }` instead of
`impl CachePolicy { ... }`. (The `derive(rkyv::Archive)` macro
automatically introduces the `ArchivedCachePolicy` type into the
current namespace.)
Unfortunately, this PR does not fully realize the dream that is
zero-copy deserialization. Namely, while a `CachedClient` can now
provide an `OwnedArchive<SimpleMetadata>`, the rest of our code
doesn't really make use of it. Indeed, as soon as we go to build a
`VersionMap`, we eagerly convert our archived metadata into an owned
`SimpleMetadata` via deserialization (that *isn't* zero-copy). After
this change, a lot of the work now shifts to `rkyv` deserialization
and `VersionMap` construction. More precisely, the main thing we drop
here is `CachePolicy` deserialization (which is now truly zero-copy)
and the parsing of the MessagePack format for `SimpleMetadata`. But we
are still paying for deserialization. We're just paying for it in a
different place.
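The zero-copy access pattern, sketched with `rkyv` 0.7 and a toy type (not uv's real `SimpleMetadata`):
```rust
use rkyv::{Archive, Deserialize, Serialize};

#[derive(Archive, Serialize, Deserialize)]
#[archive(check_bytes)]
struct SimpleMetadata {
    versions: Vec<String>,
}

fn main() {
    let metadata = SimpleMetadata {
        versions: vec!["1.0.0".into(), "2.0.0".into()],
    };
    // Serialize once; this is what would be written to the cache.
    let bytes = rkyv::to_bytes::<_, 256>(&metadata).unwrap();
    // View the archived form directly over the raw bytes: validation,
    // but no allocation and no copy.
    let archived = rkyv::check_archived_root::<SimpleMetadata>(&bytes).unwrap();
    assert_eq!(archived.versions.len(), 2);
}
```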
This PR does seem to bring a speed-up, but it is somewhat underwhelming.
My measurements have been pretty noisy, but I get a 1.1x speedup fairly
often:
```
$ hyperfine -w5 "puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null" "puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null"
Benchmark 1: puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 164.4 ms ± 18.8 ms [User: 427.1 ms, System: 348.6 ms]
Range (min … max): 131.1 ms … 190.5 ms 18 runs
Benchmark 2: puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
Time (mean ± σ): 148.3 ms ± 10.2 ms [User: 357.1 ms, System: 319.4 ms]
Range (min … max): 136.8 ms … 184.4 ms 19 runs
Summary
puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null ran
1.11 ± 0.15 times faster than puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
```
One downside is that this does increase cache size (`rkyv`'s
serialization format is not as compact as MessagePack). On disk size
increases by about 1.8x for our `simple-v0` cache.
```
$ sort-filesize cache-main
4.0K cache-main/CACHEDIR.TAG
4.0K cache-main/.gitignore
8.0K cache-main/interpreter-v0
8.7M cache-main/wheels-v0
18M cache-main/archive-v0
59M cache-main/simple-v0
109M cache-main/built-wheels-v0
193M cache-main
193M total
$ sort-filesize cache-test
4.0K cache-test/CACHEDIR.TAG
4.0K cache-test/.gitignore
8.0K cache-test/interpreter-v0
8.7M cache-test/wheels-v0
18M cache-test/archive-v0
107M cache-test/simple-v0
109M cache-test/built-wheels-v0
242M cache-test
242M total
```
Also, while I initially intended to do a simplistic implementation of
HTTP cache semantics, I found that everything was somewhat
inter-connected. I could have written code that _specifically_ only worked
with the present behavior of PyPI, but then it would need to be special
cased and everything else would need to continue to use
`http-cache-semantics`. By implementing what we need based on what Puffin
actually is (which is still less than what `http-cache-semantics` does),
we can avoid special casing and use zero-copy deserialization for our
cache policy in _all_ cases.
Previously, whenever we encountered a missing package we would throw an
error without information about why the package was requested. This
meant that if a transitive dependency required a missing package, the
user would have no idea why it was even selected. Here, we track
`NotFound` and `NoIndex` errors as `NoVersions` incompatibilities with
an attached reason. Improves our test coverage for `--no-index` without
`--find-links`.
The
[snapshots](https://github.com/astral-sh/puffin/pull/1241/files#diff-3eea1658f165476252f1f061d0aa9f915aabdceafac21611cdf45019447f60ec)
show a nice improvement.
I think this will also enable backtracking to another version if some
version of transitive dependency has a missing dependency. I'll write a
scenario for that next.
Requires https://github.com/zanieb/pubgrub/pull/22
Closes #884
e.g.
```
❯ cargo run -q -- pip compile --python-version 3.12 requirements.in
× No solution found when resolving dependencies:
╰─▶ Because the requested Python version (3.12) does not satisfy Python>=3.6,<3.10 and recommenders==1.0.0 depends on Python>=3.6,<3.9, we can conclude that recommenders==1.0.0 cannot be used.
And because only the following versions of recommenders are available:
recommenders<=0.7
recommenders==1.0.0
recommenders==1.1.0
recommenders==1.1.1
we can conclude that recommenders>0.7,<1.1.0 cannot be used. (1)
Because the requested Python version (3.12) does not satisfy Python>=3.6,<3.10 and recommenders>=1.1.0 depends on Python>=3.6,<3.10, we can conclude that recommenders>=1.1.0 cannot be used.
And because we know from (1) that recommenders>0.7,<1.1.0 cannot be used, we can conclude that recommenders>0.7 cannot be used.
And because you require recommenders>0.7, we can conclude that the requirements are unsatisfiable.
```
## Summary
Previously, we were blocking operations that could run in parallel. We
would send requests through our main requests channel, but not yield,
so the receiver could only start processing requests much later than
necessary. We solve this by switching to the async
`tokio::sync::mpsc::channel`, where `send` is an async function that
yields.
Due to the increased parallelism, cache deserialization and the
conversion from a simple API response to a version map became
bottlenecks, so I moved them to `spawn_blocking`. Together, these
result in a 30-60% speedup for larger warm-cache resolutions. Small
cases such as black already resolve in 5.7 ms on my machine, so there's
no speedup to be gained; refresh and no-cache runs were too noisy to
get a signal from.
Note for the future: Revisit the bounded channel if we want to produce
requests from `process_request`, too (this would be good for
prefetching), to avoid deadlocks.
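A minimal illustration of the difference; `tokio::sync::mpsc::channel`'s `send` is async and yields, so the receiver can start work immediately:
```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<u32>(16);
    tokio::spawn(async move {
        for request in 0..100 {
            // Awaiting the send yields to the scheduler (and applies
            // backpressure when the channel is full) instead of queueing
            // everything before the receiver ever runs.
            tx.send(request).await.unwrap();
        }
    });
    while let Some(request) = rx.recv().await {
        // Heavy work (e.g., cache deserialization) would move to
        // `tokio::task::spawn_blocking` here.
        let _ = request;
    }
}
```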
## Details
We can look at the behavior change through the spans:
```
RUST_LOG=puffin=info TRACING_DURATIONS_FILE=target/traces/jupyter-warm-branch.ndjson cargo run --features tracing-durations-export --bin puffin-dev --profile profiling -- resolve jupyter 2> /dev/null
```
Below, you can see how on main, we have discrete phases: all (cached)
simple API requests in parallel, then all (cached) metadata requests in
parallel, repeated until done. The solver is mostly waiting until it has
its version map from the simple API query to be able to choose a
version. The main thread is blocked by processing requests.
In the PR branch, the simple API requests succeed much earlier,
allowing the solver to advance and also to schedule more prefetching.
Due to that, `parse_cache` and `from_metadata` became bottlenecks, so I
moved them off the main thread (green color; their spans can now
overlap because they can run on multiple threads in parallel). The main
thread isn't blocked on `process_request` anymore; instead, it has
frequent idle times. The spans are all much shorter, which indicates
that on main they could have finished much earlier, but a task didn't
yield so they weren't scheduled to finish (though I haven't dug deep
enough to understand the exact scheduling between the process request
stream and the solver here).
**main**

**PR**

## Benchmarks
```
$ hyperfine --warmup 3 "target/profiling/main-dev resolve jupyter" "target/profiling/branch-dev resolve jupyter"
Benchmark 1: target/profiling/main-dev resolve jupyter
Time (mean ± σ): 29.1 ms ± 0.7 ms [User: 22.9 ms, System: 11.1 ms]
Range (min … max): 27.7 ms … 32.2 ms 103 runs
Benchmark 2: target/profiling/branch-dev resolve jupyter
Time (mean ± σ): 18.8 ms ± 1.1 ms [User: 37.0 ms, System: 22.7 ms]
Range (min … max): 16.5 ms … 21.9 ms 154 runs
Summary
target/profiling/branch-dev resolve jupyter ran
1.55 ± 0.10 times faster than target/profiling/main-dev resolve jupyter
$ hyperfine --warmup 3 "target/profiling/main-dev resolve meine_stadt_transparent" "target/profiling/branch-dev resolve meine_stadt_transparent"
Benchmark 1: target/profiling/main-dev resolve meine_stadt_transparent
Time (mean ± σ): 37.8 ms ± 0.9 ms [User: 30.7 ms, System: 14.1 ms]
Range (min … max): 36.6 ms … 41.5 ms 79 runs
Benchmark 2: target/profiling/branch-dev resolve meine_stadt_transparent
Time (mean ± σ): 24.7 ms ± 1.5 ms [User: 47.0 ms, System: 39.3 ms]
Range (min … max): 21.5 ms … 28.7 ms 113 runs
Summary
target/profiling/branch-dev resolve meine_stadt_transparent ran
1.53 ± 0.10 times faster than target/profiling/main-dev resolve meine_stadt_transparent
$ hyperfine --warmup 3 "target/profiling/main pip compile scripts/requirements/home-assistant.in" "target/profiling/branch pip compile scripts/requirements/home-assistant.in"
Benchmark 1: target/profiling/main pip compile scripts/requirements/home-assistant.in
Time (mean ± σ): 229.0 ms ± 2.8 ms [User: 197.3 ms, System: 63.7 ms]
Range (min … max): 225.8 ms … 234.0 ms 13 runs
Benchmark 2: target/profiling/branch pip compile scripts/requirements/home-assistant.in
Time (mean ± σ): 91.4 ms ± 5.3 ms [User: 289.2 ms, System: 176.9 ms]
Range (min … max): 81.0 ms … 104.7 ms 32 runs
Summary
target/profiling/branch pip compile scripts/requirements/home-assistant.in ran
2.50 ± 0.15 times faster than target/profiling/main pip compile scripts/requirements/home-assistant.in
```
In the scenario tests, we want to make sure we're actually conforming to
the scenario's expectations, so we now have an extra assertion on
whether resolution failed or succeeded as well as that it includes the
given packages.
Closes #1112. Closes #1030.
We need more flexible filters than those `insta` offers, and `insta_cmd`
makes it impossible to plug in programmatic filters. At the same time, we
use barely any of `insta_cmd`'s features. We can replace the subset we
need in about 50 loc.
Mostly a mechanical refactor to use the `puffin_snapshot!` and
`TestContext` infrastructure in the add, remove, venv and pip uninstall
tests, in preparation for adding programmatic windows testing filters.
There is only one remaining usage of `assert_cmd_snapshot!` now in the
`puffin_snapshot!` macro.
Mostly a mechanical refactor to use the `puffin_snapshot!` and
`TestContext` infrastructure in the pip install and pip sync tests, in
preparation for adding programmatic windows testing filters.