## Summary
Implements `uv add foo --workspace`, which adds `foo` as a workspace
dependency with the corresponding `tool.uv.sources` entry.
Part of https://github.com/astral-sh/uv/issues/3959.
Specifically, these are emitted when requirements fail to satisfy
`Requires-Python` or the markers associated with the current fork in the
resolver.
Closes #4373
Closes #4380
This is the same logic as `should_stop_discovery` but I changed the log
level and duplicated it because I don't really want that method to be
public. Maybe it should be though?
As in #4360, updates the uv project CLI to respect `.python-version`
files as default Python version requests. Additionally, updates project
interpreter discovery to fetch managed toolchains as in `uv venv
--preview`.
It was becoming problematic that the virtual environment test context
diverged from the other one, i.e., we had to implement filtering twice.
This combines the contexts and tweaks the `TestContext` API and
filtering mechanisms for Python versions. Combined with my previous
changes to the test context at #4364 and
https://github.com/astral-sh/uv/pull/4368 this finally unblocks the
snapshots for test cases in #4360 and
https://github.com/astral-sh/uv/pull/4362.
## Summary
Make the git commit/date part of the version string matched in
cache_prune optional, so that the test also works correctly when uv is
built from an unpacked release tarball rather than a git repository.
Fixes #4374
## Test Plan
Ran `cargo test` inside the git repository and inside unpacked archive
for `0.2.12` release with the patch applied on top.
## Summary
Adds a `--no-clean` flag to `uv sync` that keeps extraneous
installations. This is the default in `uv run` and `uv add`, but not in
`uv sync` or `uv remove`. This means you need to run an explicit `uv
sync/remove` to clean the virtual environment.
Extends new filters for interpreter paths to apply to tests with
multiple Python versions. Adds patch version filtering for them as well,
which is needed for #4360 tests.
Extends #4364, automatically adding filters to the test context for
additional test virtual environments.
It turns out that the `pip sync` tests were quite loose with their
virtual environment creation, and it was difficult to use the new
helpers because they require mutability and the tests had immutable
borrows to extend the filters 😭. I refactored the tests to just reset
the environment.
A bare `uv toolchain install` invocation now reads default requests from
Python version files in the working directory. In order of precedence, a bare
invocation uses:
- requests from `.python-versions`
- a single request from `.python-version`
- any installed managed toolchain
- the latest managed toolchain download
This replaces all the functionality of `cargo dev fetch-python`, which
we drop in #4337.
## Summary
Closes https://github.com/astral-sh/uv/issues/4162
Changes keyring subprocess to allow display of stderr.
This aligns with pip's behavior since pip 23.1.
## Test Plan
* Tested using gnome-keyring-backend on a self-hosted private registry
as well as the keyring script described in #4162 to confirm both
existing functionality and the new stderr display.
* Existing tests using `scripts/packages/keyring_test_plugin` are now
showing its stderr output as well.
Allows installation of multiple toolchains in a single invocation
because I don't want to be limited to one! Most of the implementation
for concurrent downloads is ported from `cargo dev fetch-python`.
Adds support for toolchain keys e.g. `cpython-3.11.2-macos` allowing you
to download toolchains for specific architectures and operating systems
using the format we use to uniquely identify a toolchain.
Closes #4240
e.g.
```
❯ cargo run -q -- pip install anyio --python "/Users/zb/Library/Application Support/uv/toolchains/cpython-3.12.0-macos-aarch64-none/install/bin/python3"
error: The interpreter at /Users/zb/Library/Application Support/uv/toolchains/cpython-3.12.0-macos-aarch64-none/install is externally managed, and indicates the following:
This toolchain is managed by uv and should not be modified.
Consider creating a virtual environment with `uv venv`.
```
I found this while testing the tracking of marker expressions across
resolver forks. Namely, given
`sys_platform == 'darwin' and implementation_name == 'pypy'`
And:
`sys_platform == 'bar' or implementation_name == 'foo'`
These should be disjoint, but the disjointness checker was reporting
them as overlapping. I fixed this by giving handling of disjunctions
higher precedence than conjunctions, although I am not 100% confident
that this is correct for all cases.
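For intuition, here's a minimal, self-contained sketch of why the two expressions above are disjoint (the types and helpers are invented for illustration; this is not uv's actual `MarkerTree` machinery): distributing the conjunction over the disjunction leaves only branches that pair contradictory equality tests on the same key.
```rust
struct Eq {
    key: &'static str,
    value: &'static str,
}

/// Two equality tests on the same key with different values can never both hold.
fn contradicts(a: &Eq, b: &Eq) -> bool {
    a.key == b.key && a.value != b.value
}

/// `conjuncts` models `A and B`; `disjuncts` models `C or D`. The expressions are
/// disjoint when every disjunct contradicts some conjunct, because
/// `(A and B) and (C or D)` distributes to `(A and B and C) or (A and B and D)`.
fn disjoint(conjuncts: &[Eq], disjuncts: &[Eq]) -> bool {
    disjuncts
        .iter()
        .all(|d| conjuncts.iter().any(|c| contradicts(c, d)))
}

fn main() {
    let lhs = [
        Eq { key: "sys_platform", value: "darwin" },
        Eq { key: "implementation_name", value: "pypy" },
    ];
    let rhs = [
        Eq { key: "sys_platform", value: "bar" },
        Eq { key: "implementation_name", value: "foo" },
    ];
    // Both distributed branches contain a contradiction, so the expressions are disjoint.
    assert!(disjoint(&lhs, &rhs));
}
```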
This commit adds marker expressions to our `Fork` type, which are in
turn passed down into `PubGrubDependencies::from_requirements` to filter
our any dependencies with markers that are disjoint from the fork's
marker expression.
This is necessary to avoid visiting packages in the dependency graph
that can never actually be installed. This is because when a fork is
created in the resolver, it always happens when there are two sibling
dependency specifications on a package with the same name, but with
non-overlapping marker expressions. Each fork corresponds to one
such conflicting dependency specification, and each fork assumes
the corresponding marker expression as a pre-condition for any future
dependencies considered by it. That is, since the fork represents an
installation path that can only be taken when the corresponding
dependency specification (and its marker expression) is actually used,
it follows that the marker expression is true. Therefore,
any dependency visited in that fork with a marker expression that cannot
possibly be true when the markers of the fork are true can and ought to
be completely ignored.
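A rough sketch of that filtering step, with invented types and a pluggable disjointness check rather than uv's real `Fork`/`PubGrubDependencies` API:
```rust
struct Requirement {
    name: String,
    /// The dependency specification's own marker, if any.
    marker: Option<String>,
}

/// Keep only the requirements that can actually apply under the fork's marker:
/// anything whose marker is disjoint from the fork's marker is unreachable on
/// this fork's installation path and can be ignored entirely.
fn requirements_for_fork(
    requirements: Vec<Requirement>,
    fork_marker: &str,
    is_disjoint: impl Fn(&str, &str) -> bool,
) -> Vec<Requirement> {
    requirements
        .into_iter()
        .filter(|requirement| match &requirement.marker {
            Some(marker) => !is_disjoint(marker, fork_marker),
            // An unconditional requirement always applies.
            None => true,
        })
        .collect()
}
```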
There are some key invariants that I had to re-learn by reading the
code. This hopefully makes those invariants easier to discover by future
me (and others).
Otherwise, when testing `uv venv --preview` the default toolchain
directory will leak into the test.
I believe I've made a similar change for the standard `TestContext` in
another commit somewhere in my stack, if not I'll add it after.
Adds a command to find a toolchain on the system. Right now, it displays
the path to the first matching toolchain. We'll probably have more rich
output in the future (after implementing `toolchain show`).
The eventual plan (separate from here) is to port all of the toolchain
discovery tests to use this command. I'll add a few tests for this
command here anyway.
## Summary
This would be a lightweight solution to
https://github.com/astral-sh/uv/issues/4307 that doesn't fully engage
with all the possibilities in the design space (but would unblock
cross-platform for now).
## Summary
The fixtures here are pretty large, but it lets us test what we actually
care about (the resolved settings) rather than inferring the resolved
settings from behavior, which I think is a big improvement.
I also broke the tests down into more granular cases.
## Summary
Because the workspace member itself is part of the resolution, adding
the workspace name for the project leads to confusing errors, like:
```
❯ cargo run lock --preview
Compiling uv v0.2.11 (/Users/crmarsh/workspace/puffin/crates/uv)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 1.79s
Running `/Users/crmarsh/workspace/puffin/target/debug/uv lock --preview`
× No solution found when resolving dependencies:
╰─▶ Because only albatross==0.1.0 is available and albatross==0.1.0 depends on anyio<=3, we can conclude that all versions of albatross depend on anyio<=3.
And because bird-feeder==1.0.0 depends on anyio>=4.3.0,<5 and only bird-feeder==1.0.0 is available, we can conclude that all versions of albatross and all versions of bird-feeder are incompatible.
And because albatross depends on albatross and bird-feeder, we can conclude that the requirements are unsatisfiable.
```
(Notice "albatross depends on albatross".)
## Summary
In a workspace, we now read configuration from the workspace root.
Previously, we read configuration from the first `pyproject.toml` or
`uv.toml` file in path -- but in a workspace, that would often be the
_project_ rather than the workspace configuration.
We need to read configuration from the workspace root, rather than its
members, because we lock the workspace globally, so all configuration
applies to the workspace globally.
As part of this change, the `uv-workspace` crate has been renamed to
`uv-settings` and its purpose has been narrowed significantly (it no
longer discovers a workspace; instead, it just reads the settings from a
directory).
If a user has a `uv.toml` in their directory or in a parent directory
but is _not_ in a workspace, we will still respect that use-case as
before.
Closes #4249.
## Summary
This PR introduces top-level configuration for uv, such that you can do:
```toml
[tool.uv]
index-url = "https://test.pypi.org/simple"
```
And `uv pip compile`, `uv run`, `uv tool run`, etc., will all respect
that configuration.
The settings that were escalated to the top-level remain on
`tool.uv.pip` too, but they're only respected in `uv pip` commands. If
they're specified in both places, then the `pip` settings win out.
While making this change, I also wired up some of the global options,
like `connectivity` and `native_tls`, through to all the relevant
places.
Closes #4250.
Previously, we took the first executable on the `PATH` but if it was not
a usable interpreter we'd fail. Now, we'll continue searching the `PATH`
until we find a usable interpreter, as we do with the standard executable
names.
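A sketch of the new behavior, assuming a hypothetical `query` callback that probes an executable and may fail (not uv's actual discovery code):
```rust
use std::path::{Path, PathBuf};

/// Walk candidate executables in `PATH` order and return the first one that can
/// actually be queried, instead of failing on the first unusable entry.
fn first_usable_interpreter(
    candidates: impl IntoIterator<Item = PathBuf>,
    query: impl Fn(&Path) -> Result<String, String>,
) -> Option<(PathBuf, String)> {
    for candidate in candidates {
        match query(&candidate) {
            Ok(info) => return Some((candidate, info)),
            // Broken shims or non-interpreters are skipped rather than fatal.
            Err(_) => continue,
        }
    }
    None
}
```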
Previously, `b` in the test case would have been incorrectly locked to
the path of `a`. I've moved `relative_to` into uv-fs since it's now used
in two different places.
Previously failing lockfile when `a/pyproject.toml` and
`a/b/pyproject.toml` exist (not in a workspace) and `a` depends on
`b`:
```toml
version = 1
requires-python = ">=3.11, <3.13"
[[distribution]]
name = "b"
version = "0.1.0"
source = "directory+/home/konsti/projects/uv/a"
sdist = { path = "/home/konsti/projects/uv/a" }
[[distribution]]
name = "black"
version = "0.1.0"
source = "editable+."
sdist = { path = "." }
[[distribution.dependencies]]
name = "b"
version = "0.1.0"
source = "directory+/home/konsti/projects/uv/a"
```
Includes system interpreters in `uv toolchain list`.
This includes a refactor of `find_toolchain` to support iterating over
all toolchains that match a request rather than stopping at the first match.
## Summary
This is still a draft, but it gets some of the work started on issue
#4064, which requests `--no-binary` functionality similar to `uv pip
install` to exist on `uv pip compile`.
So far this has moved the command-line shape from install and cloned it
over to compile. I could use a hand with the actual functionality of
respecting `--no-binary <package>` and generating the resulting line in
`requirements.txt`.
My understanding is we want to create a requirements.in file such as:
```
yt
```
Then when running `cargo run -- pip compile --no-binary yt
requirements.in`
we want the file to have this line in it:
```
yt==4.3.1 --no-binary yt
```
## Test Plan
Existing unit tests continue to pass.
No new unit tests have been created yet.
The new command line options do show up when testing with `cargo run --
pip compile -h`
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Closes https://github.com/astral-sh/uv/issues/4296.
## Test Plan
Ran `cargo run lock --verbose` from
`scripts/workspaces/albatross-virtual-workspace`:
```
DEBUG uv 0.2.11 (ef3bc1612 2024-06-12)
warning: `uv lock` is experimental and may change without warning.
DEBUG Found workspace root: `/Users/crmarsh/workspace/puffin/scripts/workspaces/albatross-virtual-workspace`
DEBUG Adding discovered workspace member: /Users/crmarsh/workspace/puffin/scripts/workspaces/albatross-virtual-workspace/packages/albatross
DEBUG Adding discovered workspace member: /Users/crmarsh/workspace/puffin/scripts/workspaces/albatross-virtual-workspace/packages/bird-feeder
DEBUG Adding discovered workspace member: /Users/crmarsh/workspace/puffin/scripts/workspaces/albatross-virtual-workspace/packages/seeds
DEBUG Searching for Python >=3.12 in search path or managed toolchains
DEBUG Searching for managed toolchains at `/Users/crmarsh/Library/Application Support/uv/toolchains`
DEBUG Found managed toolchain `cpython-3.12.3-macos-aarch64-none`
DEBUG Found CPython 3.12.3 at `/Users/crmarsh/Library/Application Support/uv/toolchains/cpython-3.12.3-macos-aarch64-none/install/bin/python3` (managed toolchains)
Using Python 3.12.3 interpreter at: /Users/crmarsh/Library/Application Support/uv/toolchains/cpython-3.12.3-macos-aarch64-none/install/bin/python3
```
Before 0.2.10 we would parse `--python=python` as an executable name.
After https://github.com/astral-sh/uv/pull/4214, we started treating
this as a Python version range request (with an empty version range).
This is not entirely unreasonable, but it was an unexpected regression
and I don't think `VersionRequest` should support empty ranges in its
`from_str` implementation without more consideration.
Updates `--no-binary <package>` to take precedence over `--only-binary
:all:` and `--only-binary <package>` to take precedence over
`--no-binary :all:`.
I'm not entirely sure about this behavior, e.g. maybe I provided
`--only-binary :all:` later on the command line and really want it to
override those earlier arguments of `--no-binary <package>` for safety.
Right now we just fail to solve though since we can't satisfy the
overlapping requests.
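To make the intended precedence concrete, here's a hedged sketch (invented types; not the actual CLI plumbing) of how a per-package selection might be resolved:
```rust
#[derive(Debug, PartialEq)]
enum Preference {
    /// Must be built from source (`--no-binary`).
    Source,
    /// Must be installed from a wheel (`--only-binary`).
    Binary,
    /// No restriction was requested for this package.
    Either,
}

/// Package-specific selections take precedence over the blanket `:all:` flags,
/// per the behavior described above.
fn preference_for(
    package: &str,
    no_binary: &[&str],
    only_binary: &[&str],
    no_binary_all: bool,
    only_binary_all: bool,
) -> Preference {
    if no_binary.contains(&package) {
        return Preference::Source;
    }
    if only_binary.contains(&package) {
        return Preference::Binary;
    }
    // (What happens when both `:all:` flags are given is out of scope for this sketch.)
    if no_binary_all {
        return Preference::Source;
    }
    if only_binary_all {
        return Preference::Binary;
    }
    Preference::Either
}

fn main() {
    // `--no-binary foo --only-binary :all:` still builds `foo` from source.
    assert_eq!(
        preference_for("foo", &["foo"], &[], false, true),
        Preference::Source
    );
}
```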
Closes https://github.com/astral-sh/uv/issues/4063
## Summary
This is what I consider to be the "real" fix for #8072. We now treat
directory and path URLs as separate `ParsedUrl` types and
`RequirementSource` types. This removes a lot of `.is_dir()` forking
within the `ParsedUrl::Path` arms and makes some states impossible
(e.g., you can't have a `.whl` path that is editable). It _also_ fixes
the `direct_url.json` for direct URLs that refer to files. Previously,
we wrote these out as if they were installed as directories, which is
just wrong.
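Roughly, the shape of the split (illustrative only; the real `ParsedUrl` and `RequirementSource` variants live in uv's crates and carry more data):
```rust
use std::path::PathBuf;

/// Distinguishing local directories from local archives up front makes invalid
/// states unrepresentable: only a directory (a source tree) can be editable,
/// while a `.whl` path never can be.
enum LocalSource {
    Directory { path: PathBuf, editable: bool },
    Archive { path: PathBuf },
}
```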
We weren't using the common interface in `uv lock` because it didn't
support finding an interpreter without touching the virtual environment.
Here I refactor the project interface to support what we need and update
`uv lock` to use the shared implementation.
In the time before universal resolving, we would always pass a
`MarkerEnvironment`, and this environment would capture any relevant
`Requires-Python` specifier (including if `-p/--python` was provided on
the CLI).
But in universal resolution, we very specifically do not use a
`MarkerEnvironment` because we want to produce a resolution that is
compatible across potentially multiple environments. This in turn meant
that we lost `Requires-Python` filtering.
This PR adds it back. We do this by converting our `PythonRequirement`
into a `MarkerTree` that encodes the version specifiers in a
`Requires-Python` specifier. We then ask whether that `MarkerTree` is
disjoint with a dependency specification's `MarkerTree`. If it is, then
we know it's impossible for that dependency specification to ever be
true, and we can completely ignore it.
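As a toy illustration of the idea (with Python versions modeled as plain tuples rather than real specifiers), a dependency guarded by `python_version < '3.7'` can never apply once `Requires-Python: >=3.8` is in force:
```rust
/// Lower bound implied by `Requires-Python: >=3.8`.
const REQUIRES_PYTHON_MIN: (u64, u64) = (3, 8);

/// A dependency guarded by `python_version < upper` is unreachable if its
/// (exclusive) upper bound is at or below the `Requires-Python` lower bound,
/// i.e. the two ranges are disjoint.
fn unreachable_under_requires_python(upper_exclusive: (u64, u64)) -> bool {
    upper_exclusive <= REQUIRES_PYTHON_MIN
}

fn main() {
    // `foo ; python_version < '3.7'` can be ignored entirely.
    assert!(unreachable_under_requires_python((3, 7)));
    // `bar ; python_version < '3.10'` still matters for Python 3.8 and 3.9.
    assert!(!unreachable_under_requires_python((3, 10)));
}
```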
## Summary
Right now, we're _always_ reinstalling local wheel archives, even if the
timestamp didn't change.
I want to fix the TODO properly but I will do so in a separate PR.
## Summary
`normalize` now takes an owned value and returns `Option<MarkerTree>`,
such that if any sub-expression evaluates to `true`, we can normalize
out the entire marker.
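A hedged sketch of that shape, using an invented `Marker` enum rather than uv's real `MarkerTree`: taking the tree by value lets normalization consume it and return `None` when the whole expression turns out to be trivially true.
```rust
enum Marker {
    /// A sub-expression already known to be always true.
    AlwaysTrue,
    Expr(String),
    Or(Vec<Marker>),
    And(Vec<Marker>),
}

/// `None` means the marker normalized away entirely (it is always true).
fn normalize(marker: Marker) -> Option<Marker> {
    match marker {
        Marker::AlwaysTrue => None,
        Marker::Or(children) => {
            let mut normalized = Vec::new();
            for child in children {
                match normalize(child) {
                    // Any trivially-true disjunct makes the whole OR trivially true.
                    None => return None,
                    Some(child) => normalized.push(child),
                }
            }
            Some(Marker::Or(normalized))
        }
        Marker::And(children) => {
            // Trivially-true conjuncts can simply be dropped; if none remain,
            // the conjunction itself is trivially true.
            let normalized: Vec<Marker> = children.into_iter().filter_map(normalize).collect();
            if normalized.is_empty() {
                None
            } else {
                Some(Marker::And(normalized))
            }
        }
        expr => Some(expr),
    }
}
```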
Closes https://github.com/astral-sh/uv/issues/4267.
Closes https://github.com/astral-sh/uv/issues/3857
Instead of using custom `Arch`, `Os`, and `Libc` types I just use
`target-lexicon`'s which enumerate way more variants and implement
display and parsing. We use a wrapper type to represent a couple special
cases to support the "x86" alias for "i686" and "macos" for "darwin".
Alternatively we could try to use our `platform-tags` types but those
capture more information (like operating system versions) that we don't
have for downloads.
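The alias handling itself is simple enough to show in a few lines; this is a standalone sketch of the idea (the real wrapper delegates to `target-lexicon`'s parsers rather than hand-rolled strings):
```rust
/// Map user-facing spellings onto canonical names before parsing.
fn canonical_arch(arch: &str) -> &str {
    match arch {
        // Accept the common "x86" alias for 32-bit Intel.
        "x86" => "i686",
        other => other,
    }
}

fn canonical_os(os: &str) -> &str {
    match os {
        // Accept "macos" for the Darwin kernel name used in target triples.
        "macos" => "darwin",
        other => other,
    }
}
```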
As discussed in https://github.com/astral-sh/uv/pull/4160, this is not
sufficient for proper libc detection but that work is larger and will be
handled separately.
## Summary
Fix the docstring where `remove` was used instead of `add` in the context
of the `uv add` command.
## Test Plan
```
cargo run -- add --help
```
```
Add one or more packages to the project requirements
Usage: uv add [OPTIONS] <REQUIREMENTS>...
Arguments:
<REQUIREMENTS>...
The packages to add, as PEP 508 requirements (e.g., `flask==2.2.3`)
```
## Summary
I think this is a useful piece of connective tissue that will let us
avoid back-and-forths when folks include traces.
## Test Plan
```
❯ cargo run pip list --verbose
DEBUG uv 0.2.11 (44041bccd 2024-06-11)
DEBUG Searching for Python interpreter in virtual environments
DEBUG Found CPython 3.12.3 at `/Users/crmarsh/workspace/puffin/.venv/bin/python3` (virtual environment)
DEBUG Using Python 3.12.3 environment at .venv/bin/python3
```
## Summary
If we're ORing an OR, we should just append rather than nesting in
another OR.
In my branch, this let us simplify:
```
python_version < '3.10' or python_version > '3.12' or (python_version < '3.8' or python_version > '3.12')
```
To:
```
python_version < '3.10' or python_version > '3.12'
```
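A minimal sketch of the flattening rule with an invented `Marker` type (not the real `MarkerTree`):
```rust
enum Marker {
    Expr(String),
    Or(Vec<Marker>),
}

/// When OR-ing two markers, splice the children of any operand that is itself
/// an OR into the parent instead of nesting another OR node. Flattened children
/// sit at the same level, so later simplification (duplicate removal, range
/// merging) can collapse them, as in the example above.
fn or(lhs: Marker, rhs: Marker) -> Marker {
    let mut children = Vec::new();
    for operand in [lhs, rhs] {
        match operand {
            Marker::Or(inner) => children.extend(inner),
            expr => children.push(expr),
        }
    }
    Marker::Or(children)
}
```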
## Summary
The build tags in this case are, e.g., `202206090410`. That's
larger than a `u32`, so we're rejecting the wheel. In theory build tags
could be even larger, but we already use `u64` for version segments, so I
think it's fine to keep that constraint here.
I'm going to look into surfacing these errors separately.
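For reference, the offending build tag really does overflow a `u32` while fitting easily in a `u64`:
```rust
fn main() {
    let build_tag: u64 = 202206090410;
    // u32::MAX is 4294967295, several orders of magnitude too small.
    assert!(build_tag > u64::from(u32::MAX));
    assert!(u32::try_from(build_tag).is_err());
}
```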
Closes https://github.com/astral-sh/uv/issues/4252.
## Test Plan
`cargo run pip install monailabel`
e.g.
```
❯ uv toolchain install
Found installed toolchain 'cpython-3.9.19-macos-aarch64-none'
A toolchain is already installed. Use `uv toolchain install <request>` to install a specific toolchain
```
instead of
```
❯ uv toolchain install
Using latest Python version
Found installed toolchain 'cpython-3.9.19-macos-aarch64-none'
Already installed at /Users/zb/Library/Application Support/uv/toolchains/cpython-3.9.19-macos-aarch64-none
```
## Summary
Something that appears to have been forgotten in the replacements made in #4164.
## Test Plan
Running `cargo run toolchain install` should display the warning: ``warning:
`uv toolchain install` is experimental and may change without warning.``
By splitting `path` into a lockable, relative (or absolute) path and an
absolute installable path, and by splitting between URLs and paths by
dist type, we can store relative paths in the lockfile.
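Conceptually the split looks something like this (field names invented for illustration):
```rust
use std::path::PathBuf;

/// A local path dependency carries both forms: the path as it should be written
/// to the lockfile (relative where possible, so the lockfile stays portable) and
/// the absolute path actually used to build or install the distribution.
struct PathSource {
    lock_path: PathBuf,
    install_path: PathBuf,
}
```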
## Summary
We may choose to persist these eventually, but for now, it's useful to
have them colocated with the cache, and in their own dedicated bucket
(so, at the very least, we can keep track of the use-cases).
Closes https://github.com/astral-sh/uv/issues/4219.
A merge kerfuffle from #4222 and #4218
Now we fail because we genuinely can't find any interpreters, since test
contexts are isolated by default. I'll improve the error message and
maybe add another test case once `main` is fixed.
By setting the test search path to an empty path, we avoid accidentally
pulling interpreters from the system during a test case.
Cherry-picked from https://github.com/astral-sh/uv/pull/4214
Cherry-picked from https://github.com/astral-sh/uv/pull/4214
The first commit gets us some context on an IO error during queries:
Previously:
```
failed to canonicalize path `[VENV]/bin/python3`
Caused by: No such file or directory (os error 2)
```
Now:
```
Failed to query Python interpreter
Caused by: failed to canonicalize path `[VENV]/bin/python3`
Caused by: No such file or directory (os error 2)
```
But really, we shouldn't attempt to query a missing interpreter during
discovery anyway, so we improve handling of that too.
## Summary
There should be no behavior changes; this just cleans up a piece of technical
debt I noticed left over in the URL resolver. We already have structured paths, so we
shouldn't need to compare verbatim URLs.
## Summary
If the user requests a package as both editable and non-editable, the
editable now "wins".
Previously, `pip install -e . .` would install as editable. However,
`pip install -e . -r requirements.txt` would _not_ if `requirements.txt`
contained `.`, because we ignored `editable` when deduplicating and the
order of iteration was just dependent on internals.
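A small sketch of the deduplication rule with invented types (the real requirements carry much more data):
```rust
use std::collections::HashMap;

struct Entry {
    name: String,
    editable: bool,
}

/// When the same requirement appears both editable and non-editable, keep it
/// once and let the editable form win, rather than depending on iteration order.
fn deduplicate(entries: Vec<Entry>) -> Vec<Entry> {
    let mut seen: HashMap<String, Entry> = HashMap::new();
    for entry in entries {
        match seen.get_mut(&entry.name) {
            Some(existing) => existing.editable = existing.editable || entry.editable,
            None => {
                seen.insert(entry.name.clone(), entry);
            }
        }
    }
    seen.into_values().collect()
}
```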
Closes https://github.com/astral-sh/uv/issues/4053.
Adds a command (following #4163) to download and install specific
toolchains. While we fetch toolchains on demand, this is useful for,
e.g., pre-downloading a toolchain in a Docker image build.
~I kind of think we should call this `install` instead of `fetch`~ I
changed the name from `fetch` to `install`.
## Summary
Adds handling for a few cases to improve interoperability with Poetry:
- If the `project` schema is invalid, we now raise a hard error, rather
than treating the metadata as dynamic and then falling back to the build
backend. This could cause problems; I'm not sure. It's stricter than
before.
- If the project contains `tool.poetry` but omits
`project.dependencies`, we now treat it as dynamic. We could go even
further and treat _any_ Poetry project as dynamic, but then we'd be
ignoring user-declared dependencies, which is also confusing.
Closes https://github.com/astral-sh/uv/issues/4142.
Extends https://github.com/astral-sh/uv/pull/4121
Part of #2607
Adds support for managed toolchain fetching to `uv venv`, e.g.
```
❯ cargo run -q -- venv --python 3.9.18 --preview -v
DEBUG Searching for Python 3.9.18 in search path or managed toolchains
DEBUG Searching for managed toolchains at `/Users/zb/Library/Application Support/uv/toolchains`
DEBUG Found CPython 3.12.3 at `/opt/homebrew/bin/python3` (search path)
DEBUG Found CPython 3.9.6 at `/usr/bin/python3` (search path)
DEBUG Found CPython 3.12.3 at `/opt/homebrew/bin/python3` (search path)
DEBUG Requested Python not found, checking for available download...
DEBUG Using registry request timeout of 30s
INFO Fetching requested toolchain...
DEBUG Downloading https://github.com/indygreg/python-build-standalone/releases/download/20240224/cpython-3.9.18%2B20240224-aarch64-apple-darwin-pgo%2Blto-full.tar.zst to temporary location /Users/zb/Library/Application Support/uv/toolchains/.tmpgohKwp
DEBUG Extracting cpython-3.9.18%2B20240224-aarch64-apple-darwin-pgo%2Blto-full.tar.zst
DEBUG Moving /Users/zb/Library/Application Support/uv/toolchains/.tmpgohKwp/python to /Users/zb/Library/Application Support/uv/toolchains/cpython-3.9.18-macos-aarch64-none
Using Python 3.9.18 interpreter at: /Users/zb/Library/Application Support/uv/toolchains/cpython-3.9.18-macos-aarch64-none/install/bin/python3
Creating virtualenv at: .venv
INFO Removing existing directory
Activate with: source .venv/bin/activate
```
The preview flag is required. The fetch is performed if we can't find an
interpreter that satisfies the request. Once fetched, the toolchain will
be available for later invocations that include the `--preview` flag.
There will be follow-ups to improve toolchain management in general;
there is still outstanding work from the initial implementation.
## Summary
Similar to how we abstracted the dependencies into
`ResolutionDependencyNames`, I think it makes sense to abstract the base
packages into a `ResolutionPackage`. This also avoids leaking details
about the various `PubGrubPackage` enum variants to `ResolutionGraph`.
Remove the panic when there is an invalid wheel source and instead surface
the error. This error can only occur when manually editing the lock
file, but since it's an external file, we should error and not panic.
This change is helpful since the method needs to be able to error for
relative path support.
## Summary
If a package lacks a source distribution, and we can't find a compatible
wheel for the current platform, we need to just _assume_ that the
package will have a valid wheel on all platforms on which it's
requested; if not, we raise an error at install time.
It's possible that we can be smarter about this over time. For example,
if the package was requested _only_ for macOS, we could verify that
there's at least one macOS-compatible wheel. See the linked issue for
more details.
Closes https://github.com/astral-sh/uv/issues/4139.
## Summary
If we want more granular control over how these errors are handled, then
we need to move them out of the TOML deserialization.
No actual behavior changes here.
Part of https://github.com/astral-sh/uv/issues/4142.
## Summary
See the long comment inline. I think this is debatable but probably
right for now. The other options have their own problems, but there are
a few alternate ideas in the comment.
Closes https://github.com/astral-sh/uv/issues/4132.
The basic idea here is to make it so forking can only ever result in a
resolution that, for a particular marker environment, will only install
at most one version of a package. We can guarantee this by ensuring we
fork on conflicting dependency specifications only when their
corresponding markers are completely disjoint. If they aren't, then
resolution _must_ find a single version of the package in the
intersection of the two dependency specifications.
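A toy check of that invariant, modeling an environment as just its Python minor version (purely illustrative, unrelated to the real resolver):
```rust
fn main() {
    // Fork A handles `foo >= 2 ; python_version >= '3.9'`,
    // Fork B handles `foo < 2 ; python_version < '3.9'`.
    let fork_a = |minor: u32| minor >= 9;
    let fork_b = |minor: u32| minor < 9;

    // Because the markers are disjoint, no single environment activates both
    // forks, so no environment can end up with two versions of `foo`.
    assert!((7..=13u32).all(|minor| !(fork_a(minor) && fork_b(minor))));
}
```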
A test for this case has been added to packse here:
https://github.com/astral-sh/packse/pull/182. Previously, that test
would result in a resolution with two different unconditional versions
of the same package. With this change, resolution fails (as it should).
A commit-by-commit review should be helpful here, since the first commit
is a refactor to make the second commit a bit more digestible.
## Summary
I think we should be able to model PubGrub such that this isn't
necessary (at least for the case described in the issue), but for now,
let's just avoid attempting to build very old distributions in
prefetching.
Closes https://github.com/astral-sh/uv/issues/4136.
Drops `find_toolchain`, `find_best_toolchain`, etc. in favor of
`Toolchain::find_...`
We can change this in the future, but there should only be one "right"
way to do it, not two redundant ways in the public interface.
## Summary
Simplify and normalize marker expressions in the lockfile. Right now
this does a simple analysis by only looking at related operators at the
same level of precedence. I think anything more complex would be out of
scope.
Resolves https://github.com/astral-sh/uv/issues/4002.
## Summary
This PR adds a lowering similar to that seen in
https://github.com/astral-sh/uv/pull/3100, but this time, for markers.
Like `PubGrubPackageInner::Extra`, we now have
`PubGrubPackageInner::Marker`. The dependencies of the `Marker` are
`PubGrubPackageInner::Package` with and without the marker.
As an example of why this is useful: assume we have `urllib3>=1.22.0` as
a direct dependency. Later, we see `urllib3 ; python_version > '3.7'` as
a transitive dependency. As-is, we might (for some reason) pick a very
old version of `urllib3` to satisfy `urllib3 ; python_version > '3.7'`,
then attempt to fetch its dependencies, which could even involve
building a very old version of `urllib3 ; python_version > '3.7'`. Once
we fetch the dependencies, we would see that `urllib3` at the same
version is _also_ a dependency (because we tack it on). In the new
scheme though, as soon as we "choose" the very old version of `urllib3 ;
python_version > '3.7'`, we'd then see that `urllib3` (the base package)
is also a dependency; so we see a conflict before we even fetch the
dependencies of the old variant.
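Schematically, the lowering adds edges like the following (an illustrative toy, not uv's actual `PubGrubPackage` representation):
```rust
enum Node {
    /// The lowered marker node, e.g. `urllib3 ; python_version > '3.7'`.
    Marker { name: &'static str, marker: &'static str },
    /// A concrete package, optionally still carrying the marker.
    Package { name: &'static str, marker: Option<&'static str> },
}

/// The marker node depends on the package both with and without the marker, so
/// choosing a version for `urllib3 ; python_version > '3.7'` immediately
/// constrains plain `urllib3` as well, surfacing conflicts before the old
/// variant's dependencies are ever fetched or built.
fn dependencies_of(node: &Node) -> Vec<Node> {
    match node {
        Node::Marker { name, marker } => vec![
            Node::Package { name: *name, marker: Some(*marker) },
            Node::Package { name: *name, marker: None },
        ],
        Node::Package { .. } => Vec::new(),
    }
}
```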
With this, I can successfully resolve the case in #4099.
Closes https://github.com/astral-sh/uv/issues/4099.
Extends #4120
Part of #2607
There should be no behavior changes here. Restructures the discovery API
around a toolchain-first perspective in preparation for
exposing a `find_or_fetch` method for toolchains in
https://github.com/astral-sh/uv/pull/4138.