Picks up the work from
- #14559
- https://github.com/astral-sh/uv/pull/14896
There are some high-level changes from those pull requests:
1. We do not stash seen credentials in the keyring automatically
2. We use `auth login` and `auth logout` (for future consistency)
3. We add a `token` command for showing the credential that will be used
As well as many smaller changes to API, messaging, testing, etc.
---------
Co-authored-by: John Mumm <jtfmumm@gmail.com>
## Summary
We (and I'm sure many others) are currently doing a lot of RISC-V work
in QEMU. It is possible to significantly improve the speed of
Python-related builds by taking care of the environment setup using an
AMD64 `uv` binary (bypassing binfmt/qemu-system emulation).
Some approx numbers from local testing in riscv64 Ubuntu in QEMU:
| Resolver arch | Command | Time |
| --- | --- | --- |
| riscv64 | `pip install --upgrade --break-system-packages --index-url=https://gitlab.com/api/v4/projects/riseproject%2Fpython%2Fwheel_builder/packages/pypi/simple openai-harmony` | 15s |
| riscv64 | `uv pip install --upgrade --system --break-system-packages --index-url=https://gitlab.com/api/v4/projects/riseproject%2Fpython%2Fwheel_builder/packages/pypi/simple openai-harmony` | 5s |
| amd64 | `uv pip install --python-platform=riscv64-unknown-linux --upgrade --system --break-system-packages --index-url=https://gitlab.com/api/v4/projects/riseproject%2Fpython%2Fwheel_builder/packages/pypi/simple openai-harmony` | 4s |
The numbers from some larger internal packages with deeper dependency
trees are much more pronounced: 3m6s vs 43s vs 8s, in one example.
Manylinux 2.39 is specified, as it's the first (only?) manylinux version with RISC-V support.
## Test Plan
Locally, in QEMU.
`$ docker run --platform linux/riscv64 -it ubuntu:latest`, get the amd64
libc into `LD_LIBRARY_PATH`, then run the tests as above.
## Summary
Override `sys.base_prefix` when performing `python_module` tests, in
order to prevent `find_uv_bin()` from finding `uv` installed alongside
system Python, and therefore fix test failures on Gentoo.
Fixes #15368
## Test Plan
```
cargo test --profile=fast-build --features git --features pypi --features python --no-default-features --test it python_module
```
Signed-off-by: Michał Górny <mgorny@gentoo.org>
When migrating from the `reqwest_retry` crate, we want to ensure that
the status codes we retry stay the same. This also helps us to
intentionally migrate to a different list later, by enumerating the list
of status codes that are retried.
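For reference, the enumeration boils down to a predicate like the sketch below; the concrete status codes are an assumption here (roughly `reqwest-retry`'s defaults), not necessarily uv's final list.

```rust
// Hedged sketch: treat these HTTP status codes as transient. The concrete
// set is an assumption (roughly reqwest-retry's defaults), not uv's source.
fn is_retryable_status(status: reqwest::StatusCode) -> bool {
    matches!(status.as_u16(), 408 | 429 | 500 | 502 | 503 | 504)
}
```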
In https://github.com/astral-sh/uv/issues/11636, we're getting reports
for installation flakes that report an invalid package format for what
appears to be a network problem. Since we're cutting the error reporting
to the first error message in the chain, we're not reporting the actual
network error underneath it.
This PR displays the whole error chain for invalid package format
errors, so we can debug and eventually catch-and-retry
https://github.com/astral-sh/uv/issues/11636.
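Conceptually, rendering the whole chain is just a walk over `std::error::Error::source`; a minimal sketch (not uv's actual formatting code):

```rust
// Walk the `source()` chain so the underlying network error is shown
// instead of only the top-level "invalid package format" message.
fn format_error_chain(err: &dyn std::error::Error) -> String {
    let mut message = err.to_string();
    let mut source = err.source();
    while let Some(cause) = source {
        message.push_str(&format!("\n  Caused by: {cause}"));
        source = cause.source();
    }
    message
}
```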
## Summary
This was fixed in https://github.com/astral-sh/uv/pull/15161, then
reverted as it regressed the error handling. I've re-applied the change
here, but moved the error handling to the runtime, rather than
parse-time. I think this is slightly worse in that we no longer include
the originating source code snippet, but it at least gives us the
expected behavior :(
Closes https://github.com/astral-sh/uv/issues/15124.
When there is an error during the streaming download and unpack for
Python interpreter and bin installs, we would previously fail, causing a
lot of CI flakes on GitHub Actions.
The problem was that the error is not one of the extended IO errors we
were previously handling, but a regular reqwest error, nested below
layers of errors of other crates processing the stream, including some
IO errors. We now handle nested reqwest errors, too.
This surfaced another problem: Our manual retry loop couldn't inform the
retry middleware that it already performed the limit of retries, and
that the middleware should not retry anymore. While too many retries are
more a problem for debugging than for the user, this causes confusing
error output. To work around this, we disable the retries in the client
and handle all retry errors in our loop.
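The nested-error handling amounts to searching the error chain for a `reqwest::Error`; a rough sketch using standard `std::error::Error` downcasting (not uv's exact helper):

```rust
// Find a `reqwest::Error` buried under IO and stream-processing errors by
// walking and downcasting the source chain.
fn find_reqwest_error<'a>(
    err: &'a (dyn std::error::Error + 'static),
) -> Option<&'a reqwest::Error> {
    let mut current = Some(err);
    while let Some(e) = current {
        if let Some(reqwest_err) = e.downcast_ref::<reqwest::Error>() {
            return Some(reqwest_err);
        }
        current = e.source();
    }
    None
}
```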
Fixes https://github.com/astral-sh/uv/issues/14171
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Alternative to #15105
Instead of building a `BaseClientBuilder` from `NetworkSettings` each
time we need a client, we instead build a single `BaseClientBuilder` and
pass it around. The `RegistryClientBuilder` then uses
`BaseClientBuilder` exclusively for configuration. This removes a chunk
of copy-and-paste code, and also moves the fallible `retries_from_env`
into a single place.
Borrow vs. clone is mostly ad-hoc, we can change it in either direction
if it matters.
Closes #15105
https://github.com/astral-sh/uv/issues/11836#issuecomment-3022735011 was
caused by a missing `cache_index_credentials()` call. This call was
always preceding a registry client builder. We can improve this
situation by caching index credentials in the registry client builder.
## Summary
Adds the enhancement proposed in #15470. Each package in the dependency
tree now shows its compressed wheel file size, reading the wheel sizes
directly from the lockfile (uv.lock). Doesn't break existing tree
formatting or options. If no wheel size is available, nothing is added.
Now, developers can identify large packages in their dependency tree.
The tree still shows extras exactly as before, and then appends a size
for the package.
## Test Plan
Manually tested:
```
harsh@fcr-node:~/uv/test-uv-tree-sizes$ ../target/debug/uv tree
Using CPython 3.13.7
warning: No `requires-python` value found in the workspace. Defaulting to `>=3.13`.
Resolved 4 packages in 6ms
pure-python v0.1.0
├── click v8.2.1
└── six v1.17.0
harsh@fcr-node:~/uv/test-uv-tree-sizes$ ../target/debug/uv tree --show-sizes
Using CPython 3.13.7
warning: No `requires-python` value found in the workspace. Defaulting to `>=3.13`.
Resolved 4 packages in 6ms
pure-python v0.1.0
├── click v8.2.1 (99.8KiB)
└── six v1.17.0 (10.8KiB)
```
## Summary
`CLICOLOR_FORCE` changes the output of underlying build commands, which
messes with wrapper tools trying to parse their output.
Closes #12564, closes #15415.
Add support for `RUST_LOG` to the uv build backend. While we were
previously using logging statements in the uv build backend, they could
only be shown when using the direct build fast path through uv, as
there was no tracing subscriber to write log messages out. This meant no
debug logging when using the build backend through pip, `python -m build`,
an incompatible version of uv, or any other build frontend, and no way to
figure out why includes and excludes behave the way they do.
This PR closes this gap by adding a tracing subscriber. The only option
to enable it is `RUST_LOG`, as we don't have a CLI. The formatting style
is the same as for uv, and color is also supported in the same way, albeit
only through anstream's support for TTYs and environment variables. We
recommend only `RUST_LOG=uv=debug` and `RUST_LOG=uv=verbose` in the
docs, but this can be used to debug into crates such as `glob`, too.
<img width="1008" height="325" alt="image"
src="https://github.com/user-attachments/assets/d33df219-750b-46a2-b3b4-8895aa137ab9"
/>
**Before**
```
$ pip wheel . -v [...]
Looking in links: /home/konsti/projects/uv/target/wheels/
Processing /home/konsti/projects/uv/scripts/packages/built-by-uv
Running command pip subprocess to install build dependencies
Looking in links: /home/konsti/projects/uv/target/wheels/
Processing /home/konsti/projects/uv/target/wheels/uv_build-0.8.13-py3-none-manylinux_2_39_x86_64.whl
Installing collected packages: uv_build
Successfully installed uv_build-0.8.13
Installing build dependencies ... done
Running command Getting requirements to build wheel
Getting requirements to build wheel ... done
Running command Preparing metadata (pyproject.toml)
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: built-by-uv
Running command Building wheel for built-by-uv (pyproject.toml)
Error: Unsupported glob expression in: `tool.uv.build-backend.*-exclude`
Caused by:
Invalid character `!` at position 10 in glob: `**/build-*!$§%!½¼²¼³¬!§%$§%.h`. hint: Characters can be escaped with a backslash
error: subprocess-exited-with-error
× Building wheel for built-by-uv (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 /usr/lib/python3/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmpow1illc9
cwd: /home/konsti/projects/uv/scripts/packages/built-by-uv
Building wheel for built-by-uv (pyproject.toml) ... error
ERROR: Failed building wheel for built-by-uv
Failed to build built-by-uv
ERROR: Failed to build one or more wheels
```
**After**
```
$ RUST_LOG=uv=debug pip wheel . -v [...]
Looking in links: /home/konsti/projects/uv/target/wheels/
Processing /home/konsti/projects/uv/scripts/packages/built-by-uv
Running command pip subprocess to install build dependencies
Looking in links: /home/konsti/projects/uv/target/wheels/
Processing /home/konsti/projects/uv/target/wheels/uv_build-0.8.13-py3-none-manylinux_2_39_x86_64.whl
Installing collected packages: uv_build
Successfully installed uv_build-0.8.13
Installing build dependencies ... done
Running command Getting requirements to build wheel
Getting requirements to build wheel ... done
Running command Preparing metadata (pyproject.toml)
DEBUG Writing metadata files to /tmp/pip-modern-metadata-l_kh78cj
DEBUG Found PEP 639 license declarations, using METADATA 2.4
DEBUG License files match: `LICENSE-APACHE`
DEBUG License files match: `LICENSE-MIT`
DEBUG License files match: `third-party-licenses/PEP-401.txt`
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: built-by-uv
Running command Building wheel for built-by-uv (pyproject.toml)
DEBUG Checking metadata directory /tmp/pip-modern-metadata-l_kh78cj/built_by_uv-0.1.0.dist-info
DEBUG Found PEP 639 license declarations, using METADATA 2.4
DEBUG License files match: `LICENSE-APACHE`
DEBUG License files match: `LICENSE-MIT`
DEBUG License files match: `third-party-licenses/PEP-401.txt`
DEBUG Writing wheel at /tmp/pip-wheel-bu6to9i7/built_by_uv-0.1.0-py3-none-any.whl
DEBUG Wheel excludes: ["__pycache__", "*.pyc", "*.pyo", "build-*!$§%!½¼²¼³¬!§%$§%.h", "/src/built_by_uv/not-packaged.txt"]
Error: Unsupported glob expression in: `tool.uv.build-backend.*-exclude`
Caused by:
Invalid character `!` at position 10 in glob: `**/build-*!$§%!½¼²¼³¬!§%$§%.h`. hint: Characters can be escaped with a backslash
error: subprocess-exited-with-error
× Building wheel for built-by-uv (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 /usr/lib/python3/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py build_wheel /tmp/tmpjrxou13a
cwd: /home/konsti/projects/uv/scripts/packages/built-by-uv
Building wheel for built-by-uv (pyproject.toml) ... error
ERROR: Failed building wheel for built-by-uv
Failed to build built-by-uv
ERROR: Failed to build one or more wheels
```
(There is no color in the above uv log statements, as pip doesn't
register as a TTY)
Fixes #12723
Allows pinning the Python build version via environment variables, e.g.,
`UV_PYTHON_CPYTHON_BUILD=...`. Each variable is implementation specific,
because they use different versioning schemes.
Updates the Python download metadata to include a `build` string, so we
can filter downloads by the pin. Writes the build version to a file in
the managed install, e.g., `cpython-3.10.18-macos-aarch64-none/BUILD`,
so we can filter installed versions by the pin.
Some important follow-up here:
- Include the build version in not found errors (when pinned)
- Automatically use a remote list of Python downloads to satisfy build
versions not present in the latest embedded download metadata
Some less important follow-ups to consider:
- Allow using ranges for build version pins
## Summary
We've received several requests to validate that installed wheels match
the current Python platform. This isn't _super_ common, since it
requires that your platform changes in some meaningful way (e.g., you
switch from x86 to ARM), though in practice, it sounds like it _can_
happen in HPC environments. This seems like a good thing to do
regardless, so we now validate that the tags (as recorded in `WHEEL`) are
consistent with the current platform during installs.
Closes https://github.com/astral-sh/uv/issues/15035.
## Summary
After chatting with the PyTorch team, it looks like some number of
wheels were accidentally uploaded with
`no-cache,no-store,must-revalidate` due to
https://github.com/pytorch/pytorch/pull/149218. They're going to correct
this for the respective wheels. I've encouraged them to set an immutable
caching header for these files, and it might happen. But even if this
isn't set, by default we only allow these wheels to be cached for 600s,
since the other wheels don't include a `Cache-Control` header at all
(but do include a `Last-Modified`, so we cache based on our heuristic:
`Freshness lifetime heuristically assumed because of presence of
last-modified header: 600s`). This probably leads to tons of unnecessary
downloads for users over time. Andrey from the PyTorch team agreed that
we should do this.
Closes https://github.com/astral-sh/uv/issues/15480.
## Summary
This is causing some cyclic dependency issues for me, because these
can be used in virtually _any_ crate (like `uv-install-wheel`), which
then means that all of `uv-configuration` becomes a dependency, etc. I
think this should be a leaf crate so that we can safely depend on it
anywhere.
## Summary
Closes #14866. Adds a `no-install-local` flag to the sync and export
commands that excludes locally defined packages from being installed.
This helps if you're caching your virtual environment: you can
exclude local packages since they're more likely to change between
builds.
## Test Plan
snapshot test: `sync::no_install_local`
CI
## Notes
I made an `InstallOptions` struct to avoid a crate isolation issue I was
running into while implementing.
Thanks for maintaining this project!
## Summary
There isn't any risk here, and we have reports of at least one zip file
with more than one (but fewer than, e.g., 10) null bytes.
Closes https://github.com/astral-sh/uv/issues/15451.
## Summary
Packages like `triton` should come from the PyTorch index, but they
don't actually vary across (e.g.) the `cu128` or `cu129` indexes.
Closes https://github.com/astral-sh/uv/issues/15446.
## Test Plan
Validate that the following pins to `cu128`, rather than `cpu`:
```
echo "vllm\ntorch==2.7.1+cu128" | cargo run pip compile --torch-backend=auto --extra-index-url https://wheels.vllm.ai/b2f6c247a9b84556a8ea0e75bb4a2db765ff3315 - --python-platform linux --python-version 3.13 -v
```
## Summary
After #15395, I realized that we didn't actually need a separate struct
for this since we now pass it around as an `Option`. (The key change
from #15395 is that when combining, we treat the options as a single
unit.)
## Summary
`match-runtime` can be explicitly specified, and if it's `false` it
should behave the same way as if it's omitted.
## Test Plan
Added a snapshot test.
We should not unnecessarily leak memory. Instead, we follow the general
patterns and use `Cow` for strings that can be from either a static or a
dynamic source.
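As a tiny illustration of the pattern (not the actual call site):

```rust
use std::borrow::Cow;

// Borrow the static value and own the dynamic one; no leaking is needed to
// hand out a `&'static str`.
fn label(custom: Option<String>) -> Cow<'static, str> {
    match custom {
        Some(value) => Cow::Owned(value),
        None => Cow::Borrowed("default"),
    }
}
```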
## Summary
Right now, if you put `upgrade = false` in a `uv.toml`, then pass
`--upgrade-package numpy` on the CLI, we won't upgrade NumPy. This PR
fixes that interaction by ensuring that when we "combine", we look at
those arguments holistically (i.e., we bundle `upgrade` and
`upgrade-package` into a single struct, which then goes through the
`.combine` logic), rather than combining `upgrade` and `upgrade-package`
independently.
If approved, I then need to add the same thing for `no-build-isolation`,
`reinstall`, `no-build`, and `no-binary`.
## Summary
Add the PyTorch CUDA 12.9 backend.
---------
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
The PyTorch team publishes ARM Linux wheels for `triton` to the PyTorch
index, which aren't available on PyPI.
## Test Plan
```
echo "torch" | cargo run pip compile - --torch-backend=cu128 --python-platform aarch64-unknown-linux-gnu --python-version 3.13
```
Previously failed because it couldn't find a compatible `triton` wheel.
Adds `uv format` as a frontend to Ruff's formatter.
There are some interesting choices here, some of which may just be
temporary:
1. We pin a default version of Ruff, so `uv format` is stable for a
given uv version
2. We install Ruff from GitHub instead of PyPI, which means we don't
need a Python interpreter or environment
3. We do not read the Ruff version from the dependency tree
See https://github.com/astral-sh/ruff/pull/19665 for a prototype of the
LSP integration.
## Summary
Currently, RECORD hashes are the hex-encoded SHA-256 sum. However,
they're supposed to be urlsafe-base64-nopad:
https://packaging.python.org/en/latest/specifications/recording-installed-packages/#the-record-file

Fixes #15398
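For reference, the spec's encoding is easy to sketch; the `base64` and `sha2` crates below are illustrative, not necessarily what uv uses internally:

```rust
use base64::Engine;
use sha2::{Digest, Sha256};

// RECORD wants `sha256=<urlsafe base64, no padding>` rather than a hex digest.
fn record_hash(contents: &[u8]) -> String {
    let digest = Sha256::digest(contents);
    let encoded = base64::engine::general_purpose::URL_SAFE_NO_PAD.encode(digest);
    format!("sha256={encoded}")
}
```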
## Test Plan
Build any wheel
```
uv build --wheel
```
Unpack the wheel
```
uvx wheel unpack dist/*.whl
```
Before this change, it will fail with a hash mismatch. I could confirm
with a local build that now the wheel can be unpacked with the `wheel`
command. While I don't enable hash checking when syncing, presumably it
would also currently fail.
## Summary
I've written a reasonably-long comment to explain what's going on here.
We should fix this, but it's better to continue using a
potentially-stale distribution than to panic.
Closes https://github.com/astral-sh/uv/issues/15386.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Mark `find_uv_bin_py38` test as requiring `python-eol`. Resolves one of
the issues reported in #15368.
## Test Plan
```
cargo test --profile=dev --features git --features pypi --features python --no-default-features
```
(without Python 3.8 installed)
Signed-off-by: Michał Górny <mgorny@gentoo.org>
Venvs should not be in source distributions, and on Unix, we now reject
them for having a link outside the source directory. This PR adds a hint
for that since users were confused (#15096).
In the process, we're differentiating IO errors from format errors for
decompression generally.
Fixes #15096
## Summary
Closes#15355
This PR adds a fallback mechanism to `Shell::from_env()` that inspects
the parent process when shell environment variables are not available on
Unix-like systems.
Currently, `uv tool update-shell` fails with "the current shell could
not be determined" when environment variables like `ZSH_VERSION`,
`BASH_VERSION`, or `SHELL` are not exported. This commonly occurs in
automated environments such as GitHub Actions runners.
The fallback approach:
1. Uses `nix::unistd::getppid()` to get the parent process ID
2. Reads `/proc/<ppid>/exe` to determine the parent executable path
3. Falls back to `/proc/<ppid>/comm` if the exe symlink fails
4. Uses existing `parse_shell_from_path()` to identify the shell type
This maintains full backward compatibility: the fallback only activates
when environment variables are unavailable and an error would otherwise
occur.
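A minimal sketch of that fallback, assuming a simplified `Shell` enum and path parser (uv's real types and signatures differ):

```rust
// Simplified stand-ins for uv's existing `Shell` and `parse_shell_from_path`.
#[derive(Debug)]
enum Shell { Bash, Zsh, Fish }

fn parse_shell_from_path(path: &std::path::Path) -> Option<Shell> {
    match path.file_name()?.to_str()? {
        "bash" => Some(Shell::Bash),
        "zsh" => Some(Shell::Zsh),
        "fish" => Some(Shell::Fish),
        _ => None,
    }
}

#[cfg(unix)]
fn shell_from_parent_process() -> Option<Shell> {
    use std::path::PathBuf;

    // 1. Parent process ID via nix.
    let ppid = nix::unistd::getppid();

    // 2. Prefer the parent's resolved executable path...
    let exe = std::fs::read_link(format!("/proc/{ppid}/exe")).ok();

    // 3. ...falling back to its command name if the exe symlink is unreadable.
    let path = exe.or_else(|| {
        std::fs::read_to_string(format!("/proc/{ppid}/comm"))
            .ok()
            .map(|name| PathBuf::from(name.trim()))
    })?;

    // 4. Reuse path-based shell detection.
    parse_shell_from_path(&path)
}
```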
## Test Plan
Tested locally with:
```bash
env -u ZSH_VERSION -u SHELL PATH="/usr/bin:/bin" $(which cargo) run -- tool update-shell --verbose
```
```text
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.30s
Running `target/debug/uv tool update-shell --verbose`
DEBUG uv 0.8.11
DEBUG Ensuring that the executable directory is in PATH: /home/user/.local/bin
DEBUG Detected parent process ID: 4147396
DEBUG Parent process executable: /usr/bin/zsh
Updated configuration file: /home/user/.zshenv
Restart your shell to apply changes
```
## Summary
This PR productionizes an idea I saw in
https://github.com/astral-sh/uv/issues/15248, which was added in Pixi:
https://github.com/prefix-dev/pixi/pull/4247. The core of the idea is
that if we install all build isolation-enabled packages first, and the
build isolation-disabled packages in a second phase, the sync is more
likely to "just work", because if all the build dependencies of the
build isolation-disabled packages are included as dependencies (as is
the case for `flash-attn`, at least), they'll be present.
This isn't really a silver bullet, because it requires that all the
build dependencies are included as first-party dependencies, and if you
have packages that want build isolation to be disabled but rely on other
packages that also require build isolation disabled, that won't work
either. I think `extra-build-dependencies` will be more robust and have
much better caching behavior, but this will get more cases right than
our current behavior, and I don't see any downsides.
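The ordering itself is simple to sketch; `Plan` and its field below are stand-ins for uv's actual install-plan types:

```rust
// Partition the plan so isolation-enabled packages install first, and
// isolation-disabled packages (e.g. flash-attn) build in a second phase,
// with their declared build dependencies already present.
struct Plan {
    name: String,
    no_build_isolation: bool,
}

fn install_phases(plan: Vec<Plan>) -> (Vec<Plan>, Vec<Plan>) {
    plan.into_iter().partition(|pkg| !pkg.no_build_isolation)
}
```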
Closes https://github.com/astral-sh/uv/issues/15301.
## Summary
Fix `WindowsRunnable::from_script_path` to correctly append extensions
instead of replacing them when resolving executable paths. This resolves
https://github.com/astral-sh/uv/issues/15165#issue-3304086689.
- Add an `add_extension_to_path` helper that appends extensions properly (see the sketch below)
- Update extension resolution to use the new helper
- Add tests
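A minimal sketch of the append-vs-replace distinction; the helper name matches the PR, but its real signature is an assumption:

```rust
use std::path::{Path, PathBuf};

/// Append `ext` to the file name instead of replacing the existing extension.
fn add_extension_to_path(path: &Path, ext: &str) -> PathBuf {
    let mut name = path.file_name().map(|n| n.to_os_string()).unwrap_or_default();
    name.push(".");
    name.push(ext);
    path.with_file_name(name)
}

fn main() {
    let script = Path::new("run.script");
    // `with_extension` would replace the suffix: `run.exe`
    assert_eq!(script.with_extension("exe"), Path::new("run.exe"));
    // The helper appends instead: `run.script.exe`
    assert_eq!(add_extension_to_path(script, "exe"), Path::new("run.script.exe"));
}
```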
## Test Plan
Added unit tests for the new and existing functionality that the change
touches. Tested manually locally on Windows.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Correct typo: `uv cache clear` is not a command.
## Summary
With this PR, we track the settings that were used to build a wheel
(`--config-settings`, plus any `extra-build-dependencies` or
`extra-build-variables`) and write those to the `.dist-info` directory
upon install. This then allows us to "reject" already-installed wheels,
if the user changes the build dependencies or `--config-settings` (or,
crucially, if they use `match-runtime = true` and the resolution
changes).
Closes https://github.com/astral-sh/uv/issues/15218.
This PR is a first step toward support for storing credentials in the
system keyring. The `keyring-rs` crate is the best option for system
keyring integration, but the latest version (v4) requires either that
Linux users have `libdbus` installed or that it is built with `libdbus`
vendored in. This is because v4 depends on
[dbus-secret-service](https://github.com/open-source-cooperative/dbus-secret-service),
which was created as an alternative to
[secret-service](https://github.com/open-source-cooperative/secret-service-rs)
so that users are not required to use an async runtime. Since uv does
use an async runtime, this is not a good tradeoff for uv.
This PR:
* Vendors the `keyring-rs` crate into a new `uv-keyring` workspace crate
* Moves to the async `secret-service` crate, which does not require
clients on Linux to have `libdbus` on their machines. This includes
updating the `CredentialsAPI` trait (and implementations) to use async
methods.
* Adds `uv-keyring` tests to `cargo test` jobs. For `cargo test |
ubuntu`, this meant setting up secret service and priming gnome-keyring
as an earlier step.
* Removes iOS code paths
* Patches in @oconnor663 's changes from his [`keyring-rs`
PR](https://github.com/open-source-cooperative/keyring-rs/pull/261)
* Applies many clippy-driven updates
## Summary
If `match-runtime = true`, but we can't resolve a package's metadata
statically, then we can't _know_ what the runtime version of the package
will be -- because we can't resolve without building it. This PR makes
that footgun clearer by raising an error.
Closes https://github.com/astral-sh/uv/issues/15264.
## Summary
We are using uv as a library and would like to provide a custom
reqwest client to the `RegistryClient`/`BaseClient`. We have a central
place in our repo where we configure the reqwest client to our needs
(certs, proxy, ...), and it is safer for us to just pass the same client
to uv rather than trying to reproduce the same client config with the
APIs that uv exposes.
Are you ok with that change?
## Summary
This breaks up a cycle I'm running into in incorporating the build
configuration into our cache keys. This is actually a type that ends up
in the frontend build system, etc., so I think it makes more sense here
anyway (as opposed to `uv-configuration`, which tends to hold our own
user-facing types).
## Summary
I noticed that these paths aren't returning the cache information, so if
you install through these paths, we actually don't write `uv_cache.json`
at all. I'm not sure how a user would actually end up here, because
assuming there are no bugs, we don't really ever use this path? The
install plan indexes the cached wheels and marks the wheel as installed,
which means it's typically a mistake if we're asking the
`DistributionDatabase` for a wheel that's already available in the
cache... But I did verify that if I _skip_ the install plan's cache
lookup, we write a wheel without `uv_cache.json`, so this is definitely
more correct.
This allows `PythonDownloadRequest` which is used for parsing general
install key requests to have missing segments, which unblocks requests
like `windows-aarch64` or `cpython-linux` (whereas before those would
require `any-any-windows-aarch64` and `cpython-any-linux` respectively).
We still require strict ordering of segments.
Previously, we only allowed missing segments at the end of the key.
This uses a state machine for parsing, which is quite a bit more
complicated.
I'm a little hesitant about the possibility that this regresses error
messages and the complexity of the implementation, but `uv run -p
aarch64` seems valuable following #13724. The alternative to this would
probably be to make these explicit in various places? e.g., expose
`--python-arch`, `--python-libc`, and `--python-os`? Or make
`--python-platform` (which already exists) accept a subset of the keys?
There is a possibility of regressions here, e.g., if something matches
this parser it will not fallback to the `PythonRequest::ExecutableName`
case and we've made this parser more permissive, but I think that should
be quite rare?
## Summary
Split the cleanup fixes from https://github.com/astral-sh/uv/pull/15196
into a separate PR for easier review.
This cleans up some minor env var usage / references throughout tests
and runtime code.
## Test Plan
Existing Tests. No functional changes.
## Summary
It would be nice if this rendered as
`[tool.uv.extra-build-dependencies]` and `[extra-build-dependencies]`
(in `uv.toml`), but this is at least correct.
Closes https://github.com/astral-sh/uv/issues/15124.
## Summary
fixes https://github.com/astral-sh/uv/issues/15172
This change adds a regex filter to normalize dates in GitHub release
URLs within the `python_install_no_cache` test snapshot.
**Problem:**
The test was hardcoding the date `20250808` in the expected error
message URL:
```console
https://github.com/astral-sh/python-build-standalone/releases/download/20250808/cpython-3.12.[PATCH]-[DATE]-[PLATFORM].tar.gz
```
This creates a maintenance burden as the snapshot would need to be
updated whenever the underlying Python release date changes.
**Solution:**
Added a regex filter `r"releases/download/\d{8}/"` →
`"releases/download/[DATE]/"` to replace any 8-digit date in the GitHub
release URL path with a generic `[DATE]` placeholder.
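For illustration, this is the kind of filter insta's `with_settings!` supports (with its `filters` feature); the test shape and URL below are hypothetical, not uv's actual test helper:

```rust
#[test]
fn python_install_no_cache_snapshot() {
    // Illustrative output containing a dated release URL.
    let output = "https://github.com/astral-sh/python-build-standalone/releases/download/20250808/cpython-3.12.tar.gz";
    insta::with_settings!({
        filters => vec![
            // Normalize the 8-digit release date in the URL path.
            (r"releases/download/\d{8}/", "releases/download/[DATE]/"),
        ],
    }, {
        insta::assert_snapshot!(output);
    });
}
```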
**Result:**
The test is now resilient to new Python releases and won't require
snapshot updates when the underlying release date changes. The error
message now consistently shows:
```console
https://github.com/astral-sh/python-build-standalone/releases/download/[DATE]/cpython-3.12.[PATCH]-[DATE]-[PLATFORM].tar.gz
```
## Test Plan
`python_install` tests seem to pass ✅
```console
$ cargo test --package uv --test it -- python_install
Compiling uv-cli v0.0.1 (/home/ubaid/projects/uv/crates/uv-cli)
Compiling uv v0.8.8 (/home/ubaid/projects/uv/crates/uv)
Finished `test` profile [unoptimized + debuginfo] target(s) in 19.04s
Running tests/it/main.rs (target/debug/deps/it-14d47eb0324a8a0a)
running 30 tests
test python_install::python_install_unknown ... ok
test network::python_install_io_error ... ok
test network::python_install_http_500 ... ok
test python_install::python_install_invalid_request ... ok
test python_install::python_install_broken_link ... ok
test python_install::python_install_preview_no_bin ... ok
test python_install::regression_cpython ... ok
test python_install::uninstall_last_patch ... ok
test python_install::install_no_transparent_upgrade_with_venv_patch_specification ... ok
test python_install::install_lower_patch_automatically ... ok
test python_install::uninstall_highest_patch ... ok
test python_install::install_transparent_patch_upgrade_venv_module ... ok
test python_install::python_install_default_from_env ... ok
test python_install::python_install ... ok
test python_install::python_reinstall_patch ... ok
test python_install::python_install_force ... ok
test python_install::install_transparent_patch_upgrade_uv_venv ... ok
test python_install::install_multiple_patches ... ok
test python_install::python_install_314 ... ok
test python_install::python_install_default ... ok
test python_install::python_install_automatic ... ok
test python_install::python_install_freethreaded ... ok
test python_install::python_install_preview_upgrade ... ok
test python_install::python_install_no_cache ... ok
test python_install::python_install_default_preview ... ok
test python_install::python_install_preview ... ok
test python_install::python_install_minor ... ok
test python_install::python_reinstall ... ok
test python_install::python_install_cached ... ok
test python_install::python_install_multiple_patch ... ok
test result: ok. 30 passed; 0 failed; 0 ignored; 0 measured; 2207 filtered out; finished in 23.34s
```
As described in #15179, there are cases where it can be useful to
reinstall the latest patch on upgrade if it is already installed. Using
this flag, you don't need to know ahead of time if you have the latest
patch already.
Closes #15179.
## Summary
Uses a <3.10-compatible version of `zip` since the `strict` argument was
[added in 3.10](https://docs.python.org/3.10/library/functions.html#zip)
## Test Plan
I executed the `_matching_parents` function in a local 3.9 environment
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Automated update for Python releases.
This picks up dynamically-linked tkinter/libtcl/libtk, which fixes #6893
and a host of similar issues.
Co-authored-by: Geoffrey Thomas <geofft@ldpreload.com>
Related to https://github.com/astral-sh/uv/issues/15113
The case in the linked issue is that we perhaps should not be allowing
`uv run --with` with system interpreters at all. I think we can consider
that, but the issue highlighted that `uv run --with` for a system
interpreter is broken if the base interpreter has custom site packages.
This generalizes beyond system interpreters so we should probably fix
our overlays.
A little spicy. We could consider this breaking, but I can't think of
what workflow it'd break and it matches the spirit of `--isolated`. This
was requested by @ssbarnea
Revives https://github.com/astral-sh/uv/pull/9130
Previously, we allowed scoping conflicting extras or groups to specific
packages, e.g. ,`{ package = "foo", extra = "bar" }` for a conflict in
`foo[bar]`. Now, we allow dropping the `extra` or `group` bit and using
`{ package = "foo" }` directly which declares a conflict with `foo`'s
production dependencies.
This means you can declare conflicts between workspace members, e.g.:
```
[tool.uv]
conflicts = [[{ package = "foo" }, { package = "bar" }]]
```
would not allow `foo` and `bar` to be installed at the same time.
Similarly, a conflict can be declared between a package and a group:
```
[tool.uv]
conflicts = [[{ package = "foo" }, { group = "lint" }]]
```
which would mean, e.g., that `--only-group lint` would be required for
the invocation.
As with our existing support for conflicting extras, there are
edge-cases here where the resolver will _not_ fail even if there are
conflicts that render a particular install target unusable. There's test
coverage for some of these. We'll still error at install-time when the
conflicting groups are selected. Due to the likelihood of bugs in this
feature, I've marked it as a preview feature.
I would not recommend reading the commits as there's some slop from not
wanting to rebase Andrew's branch.
---------
Co-authored-by: Andrew Gallant <andrew@astral.sh>
We should definitely not pick up user-level installations unless we
can't find uv anywhere else. Otherwise, e.g., we would find a uv
installed with `pipx install uv` before the one matching the uv module.
We regularly get confusing bug reports where a package sometimes works
and sometimes doesn't and it's not clear to the user why. Ultimately, it
turns out that two packages contain the same module and there is a race
condition when installing the two packages. Usually, it's one of the
opencv-python distributions, but recently it's been z3, too. These errors
are completely inscrutable to users.
* https://github.com/astral-sh/uv/issues/10708
* https://github.com/astral-sh/uv/issues/11806
* https://github.com/astral-sh/uv/issues/11659
* https://github.com/astral-sh/uv/issues/13435
* https://github.com/astral-sh/uv/issues/13550
* https://github.com/astral-sh/uv/issues/14030
We now warn for top-level modules (pattern: `<identifier>/__init__.py`)
that collide in a single installation, naming the offending wheels.
Checking for `__init__.py` excludes namespace packages.
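Roughly, the check maps each top-level `__init__.py` to its owning wheel and flags duplicates; a sketch with illustrative types (not uv's actual RECORD handling):

```rust
use std::collections::HashMap;

// Map each top-level module (a `<identifier>/__init__.py` entry) to the
// wheels that provide it, then keep only modules claimed more than once.
fn find_module_collisions(
    wheels: &[(String, Vec<String>)], // (wheel name, RECORD paths)
) -> HashMap<String, Vec<String>> {
    let mut owners: HashMap<String, Vec<String>> = HashMap::new();
    for (wheel, paths) in wheels {
        for path in paths {
            // Exactly `foo/__init__.py` marks a top-level module; namespace
            // packages (no `__init__.py`) are skipped by construction.
            if let Some(module) = path.strip_suffix("/__init__.py") {
                if !module.contains('/') {
                    owners
                        .entry(module.to_string())
                        .or_default()
                        .push(wheel.clone());
                }
            }
        }
    }
    owners.retain(|_, providers| providers.len() > 1);
    owners
}
```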
Test script:
```
uv venv -q && cargo run -q --profile fast-build pip install --no-progress --link-mode clone opencv-python opencv-contrib-python --no-build --no-deps
uv venv -q && cargo run -q --profile fast-build pip install --no-progress --link-mode copy opencv-python opencv-contrib-python --no-build --no-deps
uv venv -q && cargo run -q --profile fast-build pip install --no-progress --link-mode hardlink opencv-python opencv-contrib-python --no-build --no-deps
uv venv -q && cargo run -q --profile fast-build pip install --no-progress --link-mode symlink opencv-python opencv-contrib-python --no-build --no-deps
```
We currently only catch conflicts in a single installation. Should we
prime the lock database with the site-packages contents, and would that
carry overhead?
## Summary
In some places, the virtualenv directory was manually removed instead of
using `remove_virtualenv`.
I also adjusted the error type.
#14985
Follows https://github.com/astral-sh/uv/pull/14181
Two goals here
- Remove duplicated logic and make the search order clear
- Resolve user confusion around the searched directories; we previously
only displayed the last attempt, which we rarely expect to be relevant
## Summary
uv will now reject ZIP files that meet any of the following conditions:
- Multiple local header entries exist for the same file with different
contents.
- A local header entry exists for a file that isn't included in the
end-of-central directory record.
- An entry exists in the end-of-central directory record that does not
have a corresponding local header.
- The ZIP file contains contents after the first end-of-central
directory record.
- The CRC32 doesn't match between the local file header and the
end-of-central directory record.
- The compressed size doesn't match between the local file header and
the end-of-central directory record.
- The uncompressed size doesn't match between the local file header and
the end-of-central directory record.
- The reported central directory offset (in the end-of-central-directory
header) does not match the actual offset.
- The reported ZIP64 end of central directory locator offset does not
match the actual offset.
We also validate the above for files with data descriptors, which we
previously ignored.
Wheels from the most recent releases of the top 15,000 packages on PyPI
have been confirmed to pass these checks, and PyPI will also reject ZIPs
under many of the same conditions (at upload time) in the future.
In rare cases, this validation can be disabled by setting
`UV_INSECURE_NO_ZIP_VALIDATION=1`. Any validation failures should be reported to
the uv issue tracker and to the upstream package maintainer.
Previously, publish would always use the default retries; now it
respects `UV_HTTP_RETRIES`.
Some awkward error handling to avoid pulling anyhow into uv-publish.
Specifically, support `UV_NO_EDITABLE=1 uv export`. It's now also
supported in `uv add`, though it's the default there anyway and the env var
exists only for completeness.
Fixes #15103
## Summary
1. Given the upcoming 1.89 update, this bumps uv-trampoline to "~1.87"
(closest nightly) from "~1.86" (closest nightly).
2. Adds additional CI check for arm builds now that runners are
available.
I wasn't sure whether the MSRV policy applies to uv-trampoline, so I didn't go
for higher than the ~1.87 nightly.
This PR also fixes a build issue starting after 1.87 where the `fma` and `fmaf`
symbols were missing.
Temporarily added `#[allow(clippy::ptr_eq)]` to `close_handles`, as this
lint should not trigger anymore in 1.88 and above.
## Test Plan
Existing tests and local build process. I did not commit the built
binaries for security purposes.
---------
Co-authored-by: konstin <konstin@mailbox.org>
Previously, `simplify_conflict_markers` assumed that it could remove all
conflict sets together, when we need to look at each conflict set
individually. Specifically, `(platform_machine == 'x86_64' and extra ==
'extra-5-foo-b') or extra == 'extra-5-foo-a'` can't be reduced to just
`platform_machine == 'x86_64'`, because it reduces to true when both
conflict extras are activated.
This case applied in https://github.com/astral-sh/uv/issues/14805, where
a jax 0.5.3 version was used for `platform_machine != 'aarch64' or
sys_platform != 'linux'` and the conflict extra `cu128`, but jax 0.7.0
for the conflict extra `cpu`.
Only removing the faulty inference regresses lockfiles to much more
verbose markers. To balance the much more conservative inference, I
added `unify_inference_sets` to simplify cases where all conflict
branches reduce to the same marker.
This still regresses some markers. For example `sys_platform == 'win32'`
regresses to `sys_platform == 'win32' or (extra == 'extra-3-pkg-x1' and
extra == 'extra-3-pkg-x2')` in `extra_inferences`, even though x1 and
x2 conflict and the second conjunction could be simplified away.
Fixes https://github.com/astral-sh/uv/issues/14805
"Cached request ... is not storable" doesn't make sense from a user
perspective, it's leaking our internal `CachedClient` abstraction. I
think it makes more sense to talk about this as "Response from ... is
not storable"
## Summary
Make the use of `Self` consistent. Mostly done by running `cargo clippy
--fix -- -A clippy::all -W clippy::use_self`.
## Test Plan
No need.
## Summary
This is an alternative to https://github.com/astral-sh/uv/pull/14944
that functions a little differently. Rather than adding separate
strategies, you can instead say:
```toml
[tool.uv.extra-build-dependencies]
child = [{ requirement = "anyio", match-runtime = true }]
```
Which will then enforce that `anyio` uses the same version as in the
lockfile.
This fixes a regression from 0.8.0 from
https://github.com/astral-sh/uv/pull/7934 and follows
https://github.com/astral-sh/uv/pull/15059
The regression is from [this
change](https://github.com/astral-sh/uv/pull/7934/files#diff-c7a660ac39628d5e12f388b0cacc7360affa3d7bb21191184d7ee78489675e83),
which was made because we'd otherwise (with the other changes in that
pull request) _filter out_ managed Python interpreters found in virtual
environments.
When `--system` is used we'll convert the default Python preference of
`managed` to `system` which avoids things like `uv pip install --system`
targeting a managed Python installation.
The basic test is
```
uv python install
uv pip install --system anyio
```
Prior to this change, we'd read a managed interpreter from our managed
installation directory and target that. After this change, without
#15059, we'd read a managed interpreter from the PATH and target that.
Both of those experiences are bad, because the managed interpreters are
marked as externally managed. After this change, with #15059, we
properly target the system interpreter.
Since we use `system` instead of `only-system`, if there is not a system
interpreter we'll still retain our existing behavior and use a managed
interpreter. This should limit breakage from the change. Given the
source of the regression, we could probably use `only-system` here. I
don't feel strongly. I think the main benefit of doing so would be that
we'd omit the check for managed installations in error messages when an
interpreter cannot be found?
We can't really add test coverage here because the test suite always has
externally managed interpreters :)
## Summary
I should've noticed this during review -- my bad -- but it looks like
after lowering, we're converting back to `uv_pep508::Requirement`. This
is mostly okay, but it's lossy for some lowerings. For example, we lose
index pinning. With this PR, we now preserve the lowered types
(`Requirement`).
Closes https://github.com/astral-sh/uv/issues/15037.
This is the first part of fixing a 0.8.0 regression from
https://github.com/astral-sh/uv/pull/7934
There, we added handling for skipping managed interpreters on the PATH
when `only-system` is used, but did not update the logic to prefer
system interpreters over managed ones when `system` is used. Here, we
fix that by skipping managed interpreters when `system` is used unless
_only_ managed interpreters are available. While this logic is applied
in a general discovery method, it's only relevant for the PATH
(and the Windows registry) because we already change the _order_ that we
inspect installations in when `system` is used, so the managed
installation directory is inspected last.
This behavior did not regress in 0.8; it's always been this way.
However, I need this change in order to fix a different bug.
Following a CI failure in https://github.com/astral-sh/uv/pull/15028,
ensure that all workspace crates are inheriting the MSRV and other
workspace configuration from the workspace root.
## Summary
We weren't including these in the cache key when constructing the
install plan. We likely still read them from the cache later, but we may
have reported the wrong number of prepares, etc.
Apply fixes for some `cargo check` and `cargo clippy` lints that are on
in nightly Rust.
The following command now passes; the blanket allows had too many
false-positives:
```
cargo +nightly clippy -- -A clippy::doc_markdown -A mismatched_lifetime_syntaxes -A clippy::explicit_deref_methods
```
`cargo +nightly check -- -A mismatched_lifetime_syntaxes` now passes
without warnings.
Gracefully handle entrypoint permission errors
`uv run --with` could fail with a "permission denied" error when it
tried to copy an entrypoint with restrictive permissions.
For instance:
```sh
$ stat -c '%A' /usr/bin/groupmems
-rwxr-s---
$ uv python find
/usr/bin/python
$ uv run --with dummy_test
error: failed to open file `/usr/bin/groupmems`: Permission denied (os error 13)
```
The entrypoint copying logic now catches these permission errors and
skips the file, making `uv` more resilient on systems with binaries that
have restrictive permissions.
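A minimal sketch of the skip-on-permission-denied behavior, with a hypothetical copy helper (not uv's actual entrypoint code):

```rust
use std::io;
use std::path::Path;

// Warn and continue instead of failing the whole `uv run --with` when an
// entrypoint can't be read.
fn copy_entrypoint(src: &Path, dst: &Path) -> io::Result<()> {
    match std::fs::copy(src, dst) {
        Ok(_) => Ok(()),
        Err(err) if err.kind() == io::ErrorKind::PermissionDenied => {
            eprintln!("warning: skipping `{}`: permission denied", src.display());
            Ok(())
        }
        Err(err) => Err(err),
    }
}
```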
## Summary
We often match on `ErrorKind` to figure out how to handle an error
(e.g., to treat a 404 as "Not found" rather than aborting the program).
Unfortunately, if we retry, we wrap the error in a new kind that
includes the retry count. This PR adds an unwrapping mechanism to ensure
that callers always look at the underlying error.
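Schematically, the unwrapping looks like this; the enum and methods are illustrative, not uv's actual error type:

```rust
// A retry wrapper variant delegates kind checks to the error it wraps, so a
// 404 is still recognized as "not found" after retries.
enum ClientError {
    Status(u16),
    Retried { retries: u32, source: Box<ClientError> },
}

impl ClientError {
    /// Peel off any retry wrappers before inspecting the underlying error.
    fn unwrapped(&self) -> &ClientError {
        match self {
            ClientError::Retried { source, .. } => source.unwrapped(),
            other => other,
        }
    }

    fn is_not_found(&self) -> bool {
        matches!(self.unwrapped(), ClientError::Status(404))
    }
}
```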
Closes https://github.com/astral-sh/uv/issues/14941.
Closes https://github.com/astral-sh/uv/issues/14989.
## Summary
This just looks like an oversight. We weren't including hashes from
local Simple API indexes if a package had both a wheel and a source
distribution.
Closes https://github.com/astral-sh/uv/issues/14883
## Summary
The basic problem here is that when we had multiple items in an inline
array, and that array expanded to multiple lines, we accidentally
changed the indentation part-way through due to how prefixes work in the
TOML.
Here's Claude's explanation of the root cause, which I find pretty
decent:
```
Here's what happened step by step:
1. First item ("iniconfig"): Has empty prefix "" → indentation_prefix stays None → uses default 4 spaces
2. Second item ("ruff"): Has empty prefix "" → indentation_prefix stays None → uses default 4 spaces
3. Third item ("typing-extensions"): Has prefix " " (single space from inline format) → indentation_prefix becomes
Some(" ") → uses only 1 space!
This produced:
[dependency-groups]
dev = [
"iniconfig>=2.0.0",
"ruff",
"typing-extensions", # ← Only 1 space instead of 4!
]
Why the Third Item Had a Different Prefix
In inline arrays like ["ruff", "typing-extensions"], the items are separated by commas and spaces. When parsed by
the TOML library:
- "ruff" has no prefix (it comes right after [)
- "typing-extensions" has a single space prefix (the space after the comma)
The Fix
Moving the indentation calculation outside the loop ensures it's calculated only once:
// Calculate indentation ONCE before the loop
if let Some(first_item) = deps.iter().next() {
let decor_prefix = /* get prefix from first item */
indentation_prefix = (!decor_prefix.is_empty()).then_some(decor_prefix.to_string());
}
// Now use the same indentation for ALL items
for item in deps.iter_mut() {
// Apply consistent indentation to every item
}
This ensures all items get the same indentation (4 spaces by default when converting from inline arrays), producing
the correct output:
[dependency-groups]
dev = [
"iniconfig>=2.0.0",
"ruff",
"typing-extensions", # ← Correct 4-space indentation
]
The bug only affected arrays being converted from inline to multiline format, where different items might have
different residual formatting from their inline representation.
```
Closes #14961.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
This is a bit of a weird request, but in [pixi](https://pixi.sh) we are
making use of this function to lazily instantiate a conda environment.
In actuality, we are using a shim around the `BuildDispatch` to only
create a conda prefix if some package needs to be built during the
resolution phase. Otherwise, we can resolve everything without an
environment containing a Python interpreter.
We are currently using `tokio::Handle::block_on` to run async code
inside this function, as `interpreter` is the first method called on a
`BuildContext` when running a source build.
However, this was causing a deadlock in very specific situations.
@baszalmstra, @wolfv, and I have investigated this thoroughly but have not
been able to find the root cause. It would hang in a part of the uv code
that hits the index, but that is **after** all of our initialization
*and the blocking call* had completed.
Changing this to be fully async fixes the problem, this requires this
method to be async though.
We get that this is not necessarily required, and we might find a
workaround, but I wanted to try it this way first.
Thanks!
Closes #6314
## Summary
Continuing from #7592. Created a new PR to rebase the old branch with
`main`, cleaned up test errors, and improved readability.
## Test Plan
Same test cases as in #7592.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
In https://github.com/astral-sh/uv/issues/14919 it was reported that
uv's behavior differed after the first invocation. I noticed we weren't
copying entrypoints after the first invocation. It turns out the
shebangs were written with `.../python` but on a subsequent invocation
the `sys.executable` was `.../python3` so we didn't detect these as
matching.
This is a pretty naive fix, but it seems much easier than ensuring the
entry point path exactly matches the subsequent `sys.executable` we
find.
I guess we should fix this in reverse too, but I think we might always
prefer `python3` when loading interpreters from environments.
See #14790 for more background.
Replaces https://github.com/astral-sh/uv/pull/14092
Adds `tool.uv.extra-build-dependencies = {package = [dependency, ...]}`
which extends `build-system.requires` during package builds.
These are lowered via workspace sources, are applied to transitive
dependencies, and are included in the wheel cache shard hash.
There are some features we need to follow-up on, but are out of scope
here:
- Preferring locked versions for build dependencies
- Settings for requiring locked versions for build dependencies
There are some quality of life follow-ups we should also do:
- Warn on `extra-build-dependencies` that do not apply to any packages
- Add test cases and improve error messaging when the
`extra-build-dependencies` resolve fails
-------
There ~are~ were a few open decisions to be made here
1. Should we resolve these dependencies alongside the
`build-system.requires` dependencies? Or should we resolve separately?
(I think the latter is more powerful? because you can override things?
but it opens the door to breaking your build)
2. Should we install these dependencies into the same environment? Or
should we layer it on top as we do elsewhere? (I think it's fine to
install into the same environment)
3. Should we respect sources defined in the parent project? (I think
yes, but then we need to lower the dependencies earlier — I don't think
that's a big deal, but it's not implemented)
4. Should we respect sources defined in the child project? (I think no,
this gets really complicated and seems weird to allow)
5. Should we apply this to transitive dependencies? (I think so)
---------
Co-authored-by: Aria Desires <aria.desires@gmail.com>
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
I noticed what appears to be a small typo in the documentation. In the
section describing dev versions, it says `sbpth table releases`. I
believe this was meant to be `both stable releases`, to match the
structure of the previous sentence about post versions.
We do not just "ignore" the existing lockfile here. We retain the
existing messaging for cases where we do actually throw out the
lockfile, like `--upgrade`.
Adds `exclude-newer-package = { package = timestamp, ... } ` and
`--exclude-newer-package package=timestamp`. These take precedence over
`exclude-newer` for a given package.
This does need to be serialized to the lockfile, so the revision is
bumped to 3. I tested a previous version and we can read a lockfile with
this information just fine.
Closes https://github.com/astral-sh/uv/issues/14394
Adds a cache bucket for Python installs and uses it by default during
tests, extending the opt-in cache added in
https://github.com/astral-sh/uv/pull/12175
Updates the `python_install` tests to use a shared cache for Python
installs. This reduces the `python_install` test runtime on my machine
from 23s -> 17s. The difference should be much larger on machines with
slower internet and fewer cores for test workers :) This should also
improve stability in CI by reducing reliance on the network during test
runs, see #14327
Fixes #14920
## Summary
Problem: When building wheel packages, metadata files (such as RECORD,
METADATA, WHEEL, and license files) were being created with incorrect Unix
permissions (`--w--wx---`), lacking read permissions and having unexpected
executable permissions.
Solution: The fix ensures that all metadata files in wheel packages are
created with proper 644 (`rw-r--r--`) permissions by:
- Adding an explicit `unix_permissions(0o644)` setting in the `write_bytes`
method for metadata files
- Updating permission constants to use octal notation for clarity
- Improving code comments to document the permission settings
Impact: This change ensures wheel packages created by uv have standard file
permissions consistent with other Python build tools like setuptools,
improving compatibility and following Python packaging best practices.
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [criterion](https://bheisler.github.io/criterion.rs/book/index.html) ([source](https://redirect.github.com/bheisler/criterion.rs)) | dependencies | minor | `0.6.0` -> `0.7.0` |
---
> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
---
### Release Notes
<details>
<summary>bheisler/criterion.rs (criterion)</summary>
### [`v0.7.0`](https://redirect.github.com/bheisler/criterion.rs/blob/HEAD/CHANGELOG.md#070---2025-07-25)
[Compare Source](https://redirect.github.com/bheisler/criterion.rs/compare/0.6.0...0.7.0)
- Bump version of criterion-plot to align dependencies.
</details>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
I think this would give us better hygiene than a global flag. It makes
it easier for users to opt in to overlapping features (such as Python
upgrades and Python bin installations) and to disable preview-mode
warnings without opting in to a bunch of other features. In general,
I want to reduce the burden of putting something under preview.
The `--preview` and `--no-preview` flags are retained as global
overrides. A new `--preview-features` option is added which accepts
comma separated features or can be passed multiple times, e.g.,
`--preview-features add-bounds,pylock`. There's a `UV_PREVIEW_FEATURES`
environment variable for that option (I'm not sure if we should overload
`UV_PREVIEW`, but could be convinced).
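For illustration, the two spellings look like this (the `uv sync` command is arbitrary; the feature names are the ones from the example above):
```console
$ uv sync --preview-features add-bounds --preview-features pylock
$ UV_PREVIEW_FEATURES=add-bounds,pylock uv sync
```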
`Candidate` has an optional field `prioritized`, which was mostly
redundant with `CandidateDist`. Specifically, it was `None` only if
`CandidateDist` was `Installed`. This commit removes this duplication.
## Summary
This is an alternative to #14003 that takes advantage of the fact that
we already validate that the requirements are up-to-date when validating
the lockfile, and the requirements for pinned requirements include the
index itself -- so rather than collecting all the explicit indexes
upfront, we can just add them to the available list as we iterate over
the lockfile's dependency graph.
This gets all the tests passing from that PR, but with ~no performance
impact and a much less invasive change. It also gets the "circular
dependency" test passing, which is marked with a TODO in that PR.
Closes https://github.com/astral-sh/uv/issues/11419.
It seems that non-standard entrypoints are still widely used, so this
downgrades the error to a tracing warning.
Fixes #14442
---------
Co-authored-by: Ed Morley <501702+edmorley@users.noreply.github.com>
## Summary
This fixes a regression from https://github.com/astral-sh/uv/pull/14447
that we seemingly didn't have test coverage for. Specifically, if you
have a version of a package in your project, and then install a
different version with `--with`, the environment should import the
`--with` version.
Closes #14860.
## Summary
The core problem here is that `allowed_indexes` only includes at most
one "default" index. This is problematic for tool upgrades, since the
index in the receipt will be marked as default, but credentials will be
omitted; if credentials are then defined in a `uv.toml`, we'll never
look at those, since that will _also_ be marked as default, and we only
look at the first default.
Instead, we should consider all defined indexes in priority order.
Closes https://github.com/astral-sh/uv/issues/14806.
## Summary
Right now, we write index URLs to the tool receipt with redacted
credentials (i.e., a username, and `****` in lieu of a password). This
is always wrong and unusable. Instead, this PR drops them entirely.
Part of https://github.com/astral-sh/uv/issues/14806.
## Summary
We are using UV as a library and need to set `tls_built_in_root_certs`
on the reqwest client.
This PR exposes this property in the `BaseClientBuilder` and in the
`RegistryClientBuilder`. The default is set to `false`, so this does not
change any behaviour unless you explicitly opt into it.
## Test Plan
## Summary
A little nuanced, but... When you add multiple `--index` URLs on the CLI
(e.g., in `uv pip install`), we check the first-provided index, then the
second index, etc. However, when we _write_ those URLs to the
`pyproject.toml` in `uv add`, we were adding them in reverse-order. We
now add them in a way that preserves the priority order.
Closes https://github.com/astral-sh/uv/issues/14817.
## Summary
This PR adds derivation chain for another class of resolver failures.
For example, if we encounter a transitive URL dependency, we now tell
the user which package included it, and the full derivation chain:
```
× Failed to resolve dependencies for `foo` (v0.1.0)
╰─▶ Package `flask` was included as a URL dependency. URL dependencies must be
expressed as direct requirements or constraints. Consider adding `flask @
9d4508e893f34853a30fd769c02e9d/flask-3.1.1-py3-none-any.whl`
to your dependencies or constraints file.
help: `foo` (v0.1.0) was included because `baz` (v0.1.0) depends on `foo`
```
Closes #14795.
When users run `uv version` in a directory without a `pyproject.toml`
file, they often intend to check uv's own version rather than a
project's version. This change adds a helpful hint to guide users to the
correct command.
**Before:**
```
❯ uv version
error: No `pyproject.toml` found in current directory or any parent directory
```
**After:**
```
❯ uv version
error: No `pyproject.toml` found in current directory or any parent directory
hint: If you meant to view uv's version, use `uv self version` instead
```
## Changes
- Modified `find_target()` function in
`crates/uv/src/commands/project/version.rs` to catch
`WorkspaceError::MissingPyprojectToml` specifically and enhance the
error message with a helpful hint
- Added import for `WorkspaceError` to access the specific error type
- Updated existing tests to expect the new hint message in error output
- Added new test case `version_get_missing_with_hint()` to verify
behavior
The hint appears consistently across all scenarios where `uv version`
fails to find a project:
- `uv version` (normal mode)
- `uv version --project .` (explicit project mode)
- `uv version --preview` (preview mode)
The change maintains all existing functionality - when a
`pyproject.toml` is found, `uv version` continues to work normally
without showing the hint.
Fixes #14730.
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: zanieb <2586601+zanieb@users.noreply.github.com>
## Summary
Closes #12163.
## Test Plan
Created an offending source distribution with this script:
```python
import io
import tarfile
import textwrap
import time

PKG_NAME = "badpkg"
VERSION = "0.1"
DIST_NAME = f"{PKG_NAME}-{VERSION}"
ARCHIVE = f"{DIST_NAME}.tar.gz"


def _bytes(data: str) -> io.BytesIO:
    """Helper: wrap a text blob as a BytesIO for tarfile.addfile()."""
    return io.BytesIO(data.encode())


def main(out_path: str = ARCHIVE) -> None:
    now = int(time.time())
    with tarfile.open(out_path, mode="w:gz") as tar:

        def add_file(path: str, data: str, mode: int = 0o644) -> None:
            """Add a regular file whose *content* is supplied as a string."""
            buf = _bytes(data)
            info = tarfile.TarInfo(path)
            info.size = len(buf.getbuffer())
            info.mtime = now
            info.mode = mode
            tar.addfile(info, buf)

        # ── top-level setup.py ───────────────────────────────────────────
        setup_py = textwrap.dedent(f"""\
            from setuptools import setup, find_packages
            setup(
                name="{PKG_NAME}",
                version="{VERSION}",
                packages=find_packages(),
            )
            """)
        add_file(f"{DIST_NAME}/setup.py", setup_py)

        # ── minimal package code ─────────────────────────────────────────
        add_file(f"{DIST_NAME}/{PKG_NAME}/__init__.py", "# placeholder\n")

        # ── the malicious symlink ────────────────────────────────────────
        link = tarfile.TarInfo(f"{DIST_NAME}/{PKG_NAME}/evil_link")
        link.type = tarfile.SYMTYPE
        link.mtime = now
        link.mode = 0o777
        link.linkname = "../../../outside.txt"
        tar.addfile(link)

    print(f"Created {out_path}")


if __name__ == "__main__":
    main()
```
Verified that both `pip install` and `uv pip install` rejected it.
I also changed `link.linkname = "../../../outside.txt"` to
`link.linkname = "/etc/outside"`, and verified that the absolute path
was rejected too.
This is an alternative to https://github.com/astral-sh/uv/pull/14788
which has the benefit that it addresses
https://github.com/astral-sh/uv/issues/13327 which would be an issue
even if we reverted #14447.
There are two changes here
1. We copy entry points into the ephemeral environment, and rewrite
their shebangs (or trampoline target) to ensure the ephemeral
environment is not bypassed.
2. We link `etc/jupyter` and `share/jupyter` data directories into the
ephemeral environment; this is in order to ensure the above doesn't
break Jupyter, which unfortunately cannot find the `share` directory
otherwise. I'd love not to do this, as it seems brittle and we don't
have a motivating use-case beyond Jupyter. I've opened
https://github.com/jupyterlab/jupyterlab/issues/17716 upstream for
discussion, as there is a viable patch that could be made upstream to
resolve the problem. I've limited the fix to Jupyter directories so we
can remove it without breakage.
Closes https://github.com/astral-sh/uv/issues/14729
Closes https://github.com/astral-sh/uv/issues/13327
Closes https://github.com/astral-sh/uv/issues/14749
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Fix some minor issues in comments.
## Test Plan
Signed-off-by: pingshuijie <pingshuijie@outlook.com>
## Summary
If `HF_TOKEN` is set, we'll automatically wire it up to authenticate
requests when hitting private `huggingface.co` URLs in `uv run`.
## Test Plan
An unauthenticated request:
```
> cargo run -- run https://huggingface.co/datasets/cmarsh/test/resolve/main/main.py
File "/var/folders/nt/6gf2v7_s3k13zq_t3944rwz40000gn/T/mainYadr5M.py", line 1
Invalid username or password.
^^^^^^^^
SyntaxError: invalid syntax
```
An authenticated request:
```
> HF_TOKEN=hf_... cargo run run https://huggingface.co/datasets/cmarsh/test/resolve/main/main.py
Hello from main.py!
```
We currently have two marker keys that are lists, `extras` and
`dependency_groups`, both from PEP 751. With the variants PEP, we will
add three more. This change is broken out of the wheel variants PR to
introduce generic marker list support, plus a change to use
`ContainerOperator` in more places.
## Summary
I found it confusing that the `else` case for `== "graalpy"` is still
necessary for the `== "pypy"` branch (i.e., that `pythonw.exe` is copied
for PyPy despite not being in the `== "pypy"` branch).
Instead, we now use a match for PyPy, GraalPy, and then everything else.
The `version_get_fallback_unmanaged_json` test was failing when running
tests outside of a git checkout (e.g., from a release tarball) due to
inconsistent behavior based on git availability.
The test had conditional logic that expected different outcomes
depending on whether `git_version_info_expected()` returned true or
false:
- In git checkouts: Expected failure with "The project is marked as
unmanaged" error
- Outside git checkouts: Expected success with fallback behavior showing
version info
However, the fallback behavior was removed in version 0.8.0, making this
test obsolete. All other similar tests
(`version_get_fallback_unmanaged`,
`version_get_fallback_unmanaged_short`,
`version_get_fallback_unmanaged_strict`) consistently expect failure
when a project is marked as unmanaged, regardless of git availability.
This change removes the problematic test entirely, as suggested by
@zanieb. All remaining version tests (51 total) continue to pass.
Fixes #14785.
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: zanieb <2586601+zanieb@users.noreply.github.com>
## Summary
This should give us some performance and error message improvements.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
We don't yet support writing these, but we can at least read them
(which, e.g., allows you to install PDM-exported `pylock.toml` files
with uv, since PDM _always_ writes a default group).
Closes #14740.
## Summary
Follow #14078, use GitHub generated sha256 for GraalPy releases too.
## Test Plan
```console
uv run ./crates/uv-python/fetch-download-metadata.py
```
## Summary
Rename `_parse_download_url` to `_parse_download_asset` and move the
`asset['digest']` logic into it.
## Test Plan
```console
uv run ./crates/uv-python/fetch-download-metadata.py
```
## Summary
A refactor that I'm extracting from #14755. There should be no
functional changes, but the core idea is to postpone filling in the
default `path` for a dependency group until we make the specification.
This allows us to use the groups for the `pylock.toml` in the future, if
such a `pylock.toml` is provided.
## Summary
This was just an oversight on my part in the initial implementation.
Closes https://github.com/astral-sh/uv/issues/14719.
## Test Plan
With:
```toml
[project]
name = "foo"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13.2"
dependencies = [
]
[[tool.uv.index]]
url = "https://download.pytorch.org/whl/cpu"
cache-control = { api = "max-age=600" }
```
Ran `cargo run lock -vvv` and verified that the PyTorch index response
was cached (whereas it typically returns `cache-control:
no-cache,no-store,must-revalidate`).
With the previous order of operations, there could be warnings from race
conditions between two processes, A and B, removing and installing Python
versions:
* A removes the files for CPython3.9.18
* B sees the key CPython3.9.18
* B sees that CPython3.9.18 has no files
* A removes the key for CPython3.9.18
* B tries to remove the key for CPython3.9.18, gets an error that it's
already gone, and issues a warning
We make this more resilient in two ways:
* We remove the registry key first, avoiding dangling registry keys in
the removal process
* We ignore not-found errors in registry removal operations: if we try
to remove something that's already gone, that's fine.
Fixes #14714 (hopefully)
Reviewing #14687, I noticed that we had implemented a
`Url::from_url_or_path`-like function, but it wasn't reusable. This
change introduces `Verbatim::from_url_or_path` so we can use it in other
places too.
The PEP 508 parser is an odd place for this, but that's where
`VerbatimUrl` and `Scheme` are already living.
We recently ran over the file limit and had to drop the hash files from the
releases page in favor of bulk SHA256SUMS files
(https://github.com/astral-sh/python-build-standalone/pull/691).
Conveniently, GitHub has recently started to add a SHA256 digest to the
API. GitHub did not backfill the hashes for the old releases, so we use the
API hashes for newer assets and eventually only download SHA256SUMS for
older releases.
We currently treat path sources as virtual if they do not specify a
build system, which is surprising behavior. This PR updates the behavior
to treat path sources as packages unless the path source is explicitly
marked as `package = false` or its own `tool.uv.package` is set to
`false`.
Closes #12015
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
By default, `uv venv <venv-name>` currently removes the `<venv-name>`
directory if it exists. This can be surprising behavior: not everyone
expects an existing environment to be overwritten. This PR updates the
default to fail if a non-empty `<venv-name>` directory already exists
and neither `--allow-existing` nor the new `-c/--clear` option is
provided (if a TTY is detected, it prompts first). If it's not a TTY,
then uv will only warn and not fail for now — we'll make this an error
in the future. I've also added a corresponding `UV_VENV_CLEAR` env var.
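A sketch of the new default behavior (the `.venv` name is just an example):
```console
# Prompts on a TTY if `.venv` exists and is non-empty; otherwise warns (for now)
$ uv venv .venv
# Removes the existing environment before creating a new one
$ uv venv --clear .venv
# Equivalent, via the new environment variable
$ UV_VENV_CLEAR=1 uv venv .venv
```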
I've chosen to use `--clear` instead of `--force` for this option
because it is used by the `venv` module and `virtualenv` and will be
familiar to users. I also think its meaning is clearer in this context
than `--force` (which could plausibly mean force overwrite just the
virtual environment files, which is what our current `--allow-existing`
option does).
Closes #1472.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
In the case of `uv sync`, all we really need to do is handle the
`OutdatedEnvironment` error (precisely the error we yield only on
dry runs when everything works but we determine things are outdated) in
`OperationDiagnostic::report` (the post-processor on all
`operations::install` calls), because any diagnostic handled by that gets
downgraded from status 2 to status 1 (although I don't know if that's
really intentional or a random other bug in our status handling... but I
figured it's best to highlight that other potential status code
incongruence than to not rely on it 😄).
Fixes #12603
---------
Co-authored-by: John Mumm <jtfmumm@gmail.com>
We weren't following our usual "destructure all the options" pattern in
this function, and several "this isn't actually read from uv.toml"
fields slipped through the cracks over time since folks forgot it
existed.
Fixes part of #14308, although we could still try to make the warning in
FilesystemOptions more accurate?
You could argue this is a breaking change, but I think it ultimately
isn't really, because we were already silently ignoring these fields.
Now we properly error.
If a user specifies `-e /path/to/dir` and `/path/to/dir` in a `uv pip
install` command, we want the editable to "win" (rather than erroring
due to conflicting URLs). Unfortunately, this behavior meant that when
you requested a package as editable and non-editable in conflicting
groups, the editable version was _always_ used. This PR modifies the
requisite types to use `Option<bool>` rather than `bool` for the
`editable` field, so we can determine whether a requirement was
explicitly requested as editable, explicitly requested as non-editable,
or not specified (as in the case of `/path/to/dir` in a
`requirements.txt` file). In the latter case, we allow editables to
override the "unspecified" requirement.
If a project includes a path dependency twice, once with `editable =
true` and once without any `editable` annotation, those are now
considered conflicting URLs, and lead to an error, so I've marked this
change as breaking.
Closes https://github.com/astral-sh/uv/issues/14139.
If `--workspace` is provided, we add all paths as workspace members.
If `--no-workspace` is provided, we add all paths as direct path
dependencies.
If neither is provided, then we add any paths that are under the
workspace root as workspace members, and the rest as direct path
dependencies.
Closes #14524.
While reviewing https://github.com/astral-sh/uv/pull/14107, @oconnor663
pointed out a bug where we allow `uv python pin --rm` to delete the
global pin without the `--global` flag. I think that shouldn't be
allowed? I'm not 100% certain though.
Adds environment variables for
https://github.com/astral-sh/uv/pull/14612 and
https://github.com/astral-sh/uv/pull/14614
We can't use the Clap `BoolishValueParser` here, and the reasoning is a
little hard to explain. If we used `UV_PYTHON_INSTALL_NO_BIN`, as is our
typical pattern, it'd work, but here we allow opt-in to hard errors with
`UV_PYTHON_INSTALL_BIN=1` and I don't think we should have both
`UV_PYTHON_INSTALL_BIN` and `UV_PYTHON_INSTALL_NO_BIN`.
Consequently, this pull request introduces a new `EnvironmentOptions`
abstraction which allows us to express semantics that Clap cannot —
which we probably want anyway because we have an increasing number of
environment variables we're parsing downstream, e.g., #14544 and #14369.
## Summary
When refactoring the addition PR I accidentally introduced a bug where
the debug message would not be output if the default value is used.
cc @zanieb
## Summary
When installing packages on _very_ slow/overloaded systems it's possible
to trigger bytecode compilation timeouts, which tends to happen in
environments such as QEMU (especially without KVM/virtio), but also on
systems that are simply overloaded. I've seen this in my Nix builds if I
for example am compiling a Linux kernel at the same time as a few other
concurrent builds.
By making the bytecode compilation timeout adjustable you can work
around such issues. I plan to set `UV_COMPILE_BYTECODE_TIMEOUT=0` in the
[pyproject.nix
builders](https://pyproject-nix.github.io/pyproject.nix/build.html) to
make them more reliable.
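A minimal sketch, assuming (per the intent above) that `0` disables the timeout; the `uv sync --compile-bytecode` invocation is just one way to trigger bytecode compilation:
```console
$ UV_COMPILE_BYTECODE_TIMEOUT=0 uv sync --compile-bytecode
```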
- Related issues
* https://github.com/astral-sh/uv/issues/6105
## Test Plan
Only manual testing was applied in this instance. There are no existing
automated tests for the bytecode compilation timeout, afaict.
Closes #14262
## Description
Adds `UV_LIBC` environment variable and implements check within
`Libc::from_env` as recommended here:
https://github.com/astral-sh/uv/issues/14262#issuecomment-3014600313
Gave this a few passes to make sure I follow dev practices within uv as
best I am able. Feel free to call out anything that could be improved.
## Test Plan
Planned to simply run existing test suite. Open to adding more tests
once implementation is validated due to my limited Rust experience.
## Summary
We validate the `uv.toml` when it's discovered automatically, but not
when provided via `--config-file`. The same limitations exist, though --
I think the lack of enforcement is just an oversight.
Closes https://github.com/astral-sh/uv/issues/14650.
Part of #14296
This is the same as `uv tool update-shell` but handles the case where
the Python bin directory is configured to a different path.
```
❯ UV_PYTHON_BIN_DIR=/tmp/foo cargo run -q -- python install --preview 3.13.3
Installed Python 3.13.3 in 1.75s
+ cpython-3.13.3-macos-aarch64-none
warning: `/tmp/foo` is not on your PATH. To use installed Python executables, run `export PATH="/tmp/foo:$PATH"` or `uv python update-shell`.
❯ UV_PYTHON_BIN_DIR=/tmp/foo cargo run -q -- python update-shell
Created configuration file: /Users/zb/.zshenv
Restart your shell to apply changes
❯ cat /Users/zb/.zshenv
# uv
export PATH="/tmp/foo:$PATH"
❯ UV_TOOL_BIN_DIR=/tmp/bar cargo run -q -- tool update-shell
Updated configuration file: /Users/zb/.zshenv
Restart your shell to apply changes
❯ cat /Users/zb/.zshenv
# uv
export PATH="/tmp/foo:$PATH"
# uv
export PATH="/tmp/bar:$PATH"
```
Previously, if installation of executables into the bin directory failed
we'd exit with a non-zero code. However, if we make this behavior the default
we don't want it to be fatal. There's a `--bin` opt-in to _require_
successful executable installation and a `--no-bin` opt-out to silence
the warning / opt-out of installation entirely.
Part of https://github.com/astral-sh/uv/issues/14296 — we need this
before we can stabilize the behavior.
In #14614 we do the same for writing entries to the Windows registry.
## Summary
You can now override the cache control headers for the Simple API, file
downloads, or both:
```toml
[[tool.uv.index]]
name = "example"
url = "https://example.com/simple"
cache-control = { api = "max-age=600", files = "max-age=365000000, immutable" }
```
Closes https://github.com/astral-sh/uv/issues/10444.
## Summary
There's some inconsistent behaviour in handling symlinks when
`cache-key` is a glob or a file path. This PR attempts to address that.
- When cache-key is a path,
[`Path::metadata()`](https://doc.rust-lang.org/std/path/struct.Path.html#method.metadata)
is used to check if it's a file or not. According to the docs:
> This function will traverse symbolic links to query information about
the destination file.
So, if the target file is a symlink, it will be resolved and the
metadata will be queried for the underlying file.
- When cache-key is a glob, `globwalk` is used, specifically allowing
for symlinks:
```rust
.file_type(globwalk::FileType::FILE | globwalk::FileType::SYMLINK)
```
- However, without enabling link following, `DirEntry::metadata()` will
return an equivalent of `Path::symlink_metadata()` (and not
`Path::metadata()`), which will have a file type that looks like
```rust
FileType {
is_file: false,
is_dir: false,
is_symlink: true,
..
}
```
- Then, there's a check for `metadata.is_file()` which fails and
complains that the target entry "is a directory when file was expected".
- TLDR: glob cache-keys don't work with symlinks.
## Solutions
Option 1 (current PR): follow symlinks.
Option 2 (also doable): don't follow symlinks, but resolve the resulting
target entry manually in case its file type is a symlink. However, this
would be a little weird and unobvious in that we resolve files but not
directories for some reason. Also, symlinking directories is pretty
useful if you want to symlink directories of local dependencies that are
not under the project's path.
## Test Plan
This has been tested manually:
```rust
fn main() {
for follow_links in [false, true] {
let walker = globwalk::GlobWalkerBuilder::from_patterns(".", &["a/*"])
.file_type(globwalk::FileType::FILE | globwalk::FileType::SYMLINK)
.follow_links(follow_links)
.build()
.unwrap();
let entry = walker.into_iter().next().unwrap().unwrap();
dbg!(&entry);
dbg!(entry.file_type());
dbg!(entry.path_is_symlink());
dbg!(entry.path());
let meta = entry.metadata().unwrap();
dbg!(meta.is_file());
}
let path = std::path::PathBuf::from("./a/b");
dbg!(path.metadata().unwrap().file_type());
dbg!(path.symlink_metadata().unwrap().file_type());
}
```
Current behaviour (glob cache-key, don't follow links):
```
[src/main.rs:9:9] &entry = DirEntry("./a/b")
[src/main.rs:10:9] entry.file_type() = FileType {
is_file: false,
is_dir: false,
is_symlink: true,
..
}
[src/main.rs:11:9] entry.path_is_symlink() = true
[src/main.rs:12:9] entry.path() = "./a/b"
[src/main.rs:14:9] meta.is_file() = false
```
Glob cache-key, follow links:
```
[src/main.rs:9:9] &entry = DirEntry("./a/b")
[src/main.rs:10:9] entry.file_type() = FileType {
is_file: true,
is_dir: false,
is_symlink: false,
..
}
[src/main.rs:11:9] entry.path_is_symlink() = true
[src/main.rs:12:9] entry.path() = "./a/b"
[src/main.rs:14:9] meta.is_file() = true
```
Using `path.metadata()` for a non-glob cache key:
```
[src/main.rs:18:5] path.metadata().unwrap().file_type() = FileType {
is_file: true,
is_dir: false,
is_symlink: false,
..
}
[src/main.rs:19:5] path.symlink_metadata().unwrap().file_type() = FileType {
is_file: false,
is_dir: false,
is_symlink: true,
..
}
```
This is a continuation of the work in
* #12405
I have:
* moved to an architecture where the human output is derived from the
json structs to centralize more of the printing state/logic
* cleaned up some of the names/types
* added tests
* removed the restriction that this output is --dry-run only
I have not yet added package info, which was TBD in their design.
---------
Co-authored-by: x0rw <mahdi.svt5@gmail.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
Co-authored-by: John Mumm <jtfmumm@gmail.com>
We've seen a few cases of uv.exe exiting with an exception code as its
exit status and no user-visible output (#14563 in the field, and #13812
in CI). It seems that recent versions of Windows no longer show dialog
boxes on access violations (what UNIX calls segfaults) or similar
errors. Something is probably sent to Windows Error Reporting, and we
can maybe sign up to get the crashes from Microsoft, but the user
experience of seeing uv exit with no output is poor, both for end users
and during development. While it's possible to opt out of this behavior
or set up a debugger, this isn't the default configuration. (See
https://superuser.com/q/1246626 for some pointers.)
In order to get some output on a crash, we need to install our own
default handler for unhandled exceptions (or call all our code inside a
Structured Exception Handling __try/__except block, which is complicated
in Rust). This is the moral equivalent of a segfault handler on Windows;
the kernel creates a new stack frame and passes arguments to it with
some processor state.
This commit adds a relatively simple exception handler that leans on
Rust's own backtrace implementation and also displays some minimal
information from the exception itself. This should be enough info to
communicate that something went wrong and let us collect enough
information to attempt to debug. There are also a handful of (non-Rust)
open-source libraries for this like Breakpad and Crashpad (both from
Google) and crashrpt.
The approach here, of using SetUnhandledExceptionFilter, seems to be the
standard one taken by other such libraries. Crashpad also seems to try
to use a newer mechanism for an out-of-tree DLL to report the crash:
https://issues.chromium.org/issues/42310037
If we have serious problems with memory corruption, it might be worth
adopting some third-party library that has already implemented this
approach. (In general, the docs of other crash reporting libraries are
worth skimming to understand how these things ought to work.)
Co-authored-by: samypr100 <3933065+samypr100@users.noreply.github.com>
## Summary
(Related PR: #13438 - would be nice to have it merged as well since it
touches on the same globwalker code)
There's a few issues with `cache-key` globs, which this PR attempts to
address:
- As of the current state, parent or absolute paths are not allowed,
which is not obvious and is not documented. E.g., cache-key paths of the
form `{file = "../dep/**"}` will be essentially ignored.
- Absolute glob patterns also don't work (funnily enough, there's logic
in `globwalk` itself that attempts to address it in
[`globwalk::glob_builder()`](8973fa2bc5/src/lib.rs (L415)),
which serves as inspiration to some parts of this PR).
- The reason for parent paths being ignored is the way globwalker is
currently being triggered in `uv-cache-info`: the base directory is
being walked over completely and each entry is then being matched to one
of the provided match patterns.
- This may also end up being very inefficient if you have a huge root
folder with thousands of files: if your match patterns are `a/b/*.rs`
and `a/c/*.py` then instead of walking over the root directory, you can
just walk over `a/b` and `a/c` and match the relevant patterns there.
- Why supporting parent paths may be important to the point of being a
blocker: in large codebases with python projects depending on other
local non-python projects (e.g. rust crates), cache-keys can be very
useful to track dependency on the source code of the latter (e.g.
`cache-keys = [{ file = "../../crates/some-dep/**" }]`.
- TLDR: parent/absolute cache-key globs don't work, glob walk can be
slow.
## Solution
- In this PR, user-provided glob patterns are first clustered
(LCP-style) into pattern groups with longest common path prefix; each of
these groups can then be walked over separately.
- Pattern groups do not overlap, so we would never walk over the same
directory twice (unless there's symlinks pointing to same folders).
- Paths are not canonicalized nor virtually normalized (which is
impossible on Unix without FS access), so the method is symlink-safe
(i.e. we don't treat `a/b/..` as `a`) and should work fine with #13438.
- Because of LCP logic, the minimal amount of directory space will be
traversed to cover all patterns.
- Absolute glob patterns will now work.
- Parent-relative glob patterns will now work.
- Glob walking will be more efficient in some cases.
## Possible improvements
- Efficiency can be further greatly improved if we limit max depth for
globwalk. Currently, a simple ".toml" will deep-traverse the whole
folder. Essentially, max depth can be always set to either N or
infinity. If a pattern at a pivot node contains `**`, we collect all
children nodes from the subtree into the same group and don't limit max
depth; otherwise, we set max depth to the length of the glob pattern.
This wouldn't change correctness though and can be done separately if
needed.
- If this is considered important enough, docs can be updated to
indicate that parent and absolute globs are supported (and symlinks are
resolved, if the relevant PR is also merged in).
## Test Plan
- Glob splitting and clustering tests are included in the PR.
- Relative and absolute glob cache-keys were tested in an actual
codebase.
## Summary
This is a small quality of life feature that adds a shorthand (`-w`) to
the `--with` flag for minimizing keystrokes.
Pretty minor, but I didn't see any conflicts with `-w` and thought this
could be a nice place for it.
```bash
# proposed addition (short)
uvx -w numpy ipython
# original (long)
uvx --with numpy ipython
```
## Test Plan
Added testing already in the PR; just copied over tests from the
`--with` flag.
## Summary
The `version_extras` test added in
85c0fc963b needs to connect to PyPI. This
PR conditionalizes it on the `pypi` feature so that people running the
tests offline don't have to skip that test explicitly.
## Test Plan
I already ran `cargo test` in the git checkout to confirm I didn’t
somehow introduce a syntax error. I am also applying this PR as a patch
to [the `uv` package in Fedora](https://src.fedoraproject.org/rpms/uv),
which runs tests offline with the `pypi` feature disabled.
Follow-up to https://github.com/astral-sh/uv/pull/14509 to provide the
_reason_ downloads are disabled and surface it as a hint rather than a
debug log.
e.g.,
```
❯ cargo run -q -- run --no-managed-python -p 3.13.4 python
error: No interpreter found for Python 3.13.4 in virtual environments or search path
hint: A managed Python download is available for Python 3.13.4, but the Python preference is set to 'only system'
```
This adds `alpha`, `beta`, `rc`, `stable`, `post`, and `dev` modes to
`uv version --bump`.
The components that `--bump` accepts are ordered as follows:
major > minor > patch > stable > alpha > beta > rc > post > dev
Bumping a component "clears" all lesser components (`alpha`, `beta`, and
`rc` all overwrite each other):
* `--bump minor` on `1.2.3a4.post5.dev6` => `1.3.0`
* `--bump alpha` on `1.2.3a4.post5.dev6` => `1.2.3a5`
* `--bump dev ` on `1.2.3a4.post5.dev6` => `1.2.3a4.post5.dev7`
In addition, `--bump` can now be repeated. The primary motivation of
this is "bump stable version and also enter a prerelease", but it
technically lets you express other things if you want them:
* `--bump patch --bump alpha` on `1.2.3` => `1.2.4a1` ("bump patch
version and go to alpha 1")
* `--bump minor --bump patch` on `1.2.3` => `1.3.1` ("bump minor version
and go to patch 1")
* `--bump minor --bump minor` on `1.2.3` => `1.4.0` ("bump minor version
twice")
The `--bump` flags are sorted by their priority, so that you don't need
to remember the priority yourself. This ordering is the only "useful"
one that preserves every `--bump` you passed, so there's no concern
about loss of expressiveness. For instance `--bump minor --bump major`
would just be `--bump major` if we didn't sort, as the major bump clears
the minor version. The ordering of `beta` after `alpha` means `--bump
alpha --bump beta` will just result in beta 1; this is the one case
where a bump request will effectively get overwritten.
The `stable` mode "bumps to the next stable release", clearing the pre
(`alpha`, `beta`, `rc`), `dev`, and `post` components from a version
(`1.2.3a4.post5.dev6` => `1.2.3`). The choice to clear `post` here is a
bit odd, in that `1.2.3.post4` => `1.2.3` is actually a version
decrease, but I think this gives a more intuitive model (as preserving
`post5` in the previous example is definitely wrong), and also
post-releases are extremely obscure so probably no one will notice. In
the cases where this behaviour isn't useful, you probably wanted to pass
`--bump patch` or something anyway which *should* definitely clear the
`post5` (putting it another way: the only cases where `--bump stable`
has dubious behaviour is when you wanted it to do a noop, which is a
command you could have just not written at all).
In all cases we preserve the "epoch" and "local" components of a
version, so the `7!` and `+local` in `7!1.2.3+local` will never be
modified by `--bump` (you can use the raw version set mode if you want
to touch those). The preservation of `local` is another slightly odd
choice, but it's a really obscure feature (so again it mostly won't come
up) and when it's used it seems to mostly be used for referring to
variant releases, in which case preserving it tends to be correct.
Fixes #13223
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Remove redundant words in a comment.
## Test Plan
Signed-off-by: jingchanglu <jingchanglu@outlook.com>
Support multiple root modules in namespace packages by enumerating them:
```toml
[tool.uv.build-backend]
module-name = ["foo", "bar"]
```
This allows applications with multiple root packages without migrating
to workspaces. Since those are regular module names (we iterate over
them and process each one like a single module name), it allows
combining dotted (namespace) names and regular names. It also
technically allows combining regular and stub modules, even though this
is even less recommended.
We don't recommend this structure (please use a workspace instead, or
structure everything in one root module), but it reduces the number of
cases that need `namespace = true`.
Fixes #14435
Fixes #14438
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
This PR intends to enable `--torch-backend=auto` to detect Intel GPUs
automatically:
- On Linux, detection is performed using the `lspci` command via
`Display controller` id.
- On Windows, ~~detection is done via a `powershell` query to
`Win32_VideoController`~~. Skip support for now—revisit once a better
solution is available.
Currently, Intel GPUs (XPU) do not rely on specific driver or toolkit
versions to distribute different PyTorch wheels.
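For reference, the detection is exercised through the existing flag (the package choice is illustrative):
```console
$ uv pip install torch --torch-backend=auto
```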
## Test Plan
On Linux: verified manually (screenshot omitted).
On Windows: skipped for now, as noted above.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Reverts:
- #14349
- #14346
- #14245
Retains the test cases. Includes a `find-links` test case.
Supersedes
- https://github.com/astral-sh/uv/pull/14387
- https://github.com/astral-sh/uv/pull/14503
We originally got a report at
https://github.com/astral-sh/uv/issues/13707 that inclusion of a
trailing slash on an index URL was causing lockfile churn despite having
no semantic meaning and resolved the issue by adding normalization that
stripped trailing slashes at parse time.
We then discovered that, while there are not semantic differences for
trailing slashes on Simple API index URLs, there are differences for
some flat (or find links) indexes. As reported in
https://github.com/astral-sh/uv/issues/14367, the change in
https://github.com/astral-sh/uv/pull/14245 caused a regression for at
least one user.
We attempted to fix the regression via a few approaches.
https://github.com/astral-sh/uv/pull/14387 attempted to differentiate
between Simple API and flat index URL parsing, but failed to account for
the `Deserialize` implementation, which always assumed Simple API-style
index URLs and incorrectly trimmed trailing slashes in various cases
where we deserialized the `IndexUrl` type from a file. I attempted to
resolve this by performing a larger refactor, but it ended up being
quite painful. In particular, the `Index` type was a blocker — we don't
know the `IndexUrl` variant until we've parsed the `IndexFormat` and
having a multi-stage deserializer is not appealing but adding a new
intermediate type (i.e., `RawIndex`) is painful due to the pervasiveness
of `Index`. Given that we've regressed behavior here and there's not a
straight-forward fix, we're reverting the normalization entirely.
https://github.com/astral-sh/uv/pull/14503 attempted to perform
normalization at compare-time, but that means we'd fail to invalidate
the lockfile when a trailing slash was added or removed, and given
that a trailing slash has semantic meaning for a find-links URL... we'd
have another correctness problem.
After this revert, we'll retain all index URLs verbatim. The downside to
this approach is that we'll be adding a bunch of trailing slashes back
to lockfiles that we previously normalized out, and we'll be reverting
our fix for users with inconsistent trailing slashes on their index
URLs. Users affected by the original motivating issue should use
consistent trailing slashes on their URLs, as they do frequently have
semantic meaning. We may want to revisit normalization and type aware
index URL parsing as part of a larger change.
Closes https://github.com/astral-sh/uv/issues/14367
## Summary
We are using UV as a library and `installer()` returned `"pip\n"`. The
packages got installed by the pip package manager and not by UV. pip
seems to add a new line to the `INSTALLER` file and UV does not.
## Test Plan
When [updating](https://github.com/astral-sh/uv/pull/14475) to the
latest `reqwest` version, our fragment propagation test broke. That test
was partially testing the `reqwest` behavior, so this PR moves the
fragment test to directly test our logic for constructing redirect
requests.
## Summary
In pixi we overlay the PyPI packages over the conda packages and we
sometimes need to figure out what PyPI packages are involved in the
no-solution error. We could parse the error message, but this is pretty
error-prone, so it would be good to get access to more information. A
lot of information in this module is private and should probably stay
this way, but package names are easy enough to expose. This would help
us a lot!
I collect into a HashSet to remove duplication, and did not want to
expose a rustc_hash data structure directly; that's why I've chosen to
expose it as an iterator :)
Let me know if any changes need to be done, and thanks!
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Hey, are you okay with exposing the `ErrorTree` for library consumers?
We have a use case that needs more information on conflicts. We need the
tree structure of the conflict and, in particular, to be able to traverse it.
Signed-off-by: Simon Sure <ssure@palantir.com>
The uv build backend has gone through some feedback cycles, we expect no
more major configuration changes, and we're ready to take the next step:
making the uv build backend stable.
This PR stabilizes:
* Using `uv_build` as build backend
* The documentation of the uv build backend
* The direct build fast path, where uv doesn't use PEP 517 if you're
using `uv_build` in a compatible version.
* `uv build --list`, which is limited to `uv_build`.
It does not:
* Make `uv_build` the default on `uv init`
* Make `--package` the default on `uv init`
## Summary
If we fail to acquire a lock on an environment, uv shouldn't fail; we
should just warn. In some cases, users run uv with read-only permissions
for their projects, etc.
For now, I kept any locks acquired _in the cache_ as hard failures,
since we always need write-access to the cache.
Closes https://github.com/astral-sh/uv/issues/14411.
## Summary
The idea here is that if a user runs `uv pip compile --universal`, we
should ignore the patch version on the current interpreter. I think this
makes sense... `--universal` tries to resolve for all future versions,
so it seems a bit odd that we'd start at the _current_ patch version.
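For example (the input and output file names are hypothetical):
```console
# With --universal, resolution is no longer anchored to the current interpreter's patch version
$ uv pip compile --universal requirements.in -o requirements.txt
```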
Closes https://github.com/astral-sh/uv/issues/14397.
Clap does not perform global validation, so flags that are declared as
overriding can be set at the same time:
https://github.com/clap-rs/clap/issues/6049. This would previously cause
a panic. We work around this by choosing the yes-value always and
writing a warning.
An alternative would be erroring when both are set, but it's unclear to
me if this may break things we want to support. (`UV_OFFLINE=1 cargo run
-q pip --no-offline install tqdm --no-cache` is already banned).
Fixes https://github.com/astral-sh/uv/pull/14299
**Test Plan**
```
$ cargo run -q pip --offline install --no-offline tqdm --no-cache
warning: Boolean flags on different levels are not correctly supported (https://github.com/clap-rs/clap/issues/6049)
× No solution found when resolving dependencies:
╰─▶ Because tqdm was not found in the cache and you require tqdm, we can conclude that your requirements are unsatisfiable.
hint: Packages were unavailable because the network was disabled. When the network is disabled, registry packages may only be read from the cache.
```
## Summary
The basic idea here is that we can (should) reuse a build environment
across resolution (`prepare_metadata_for_build_wheel`) and installation.
This also happens to solve the build-PyTorch-from-source problem, since
we use a consistent build environment between the invocations.
Since `SourceDistributionBuilder` is stateless, we instead store the
builds on `BuildContext`, and we key them by various properties: the
underlying interpreter, the configuration settings, etc. This just
ensures that if we build the same package twice within a process, we
don't accidentally reuse an incompatible build (virtual) environment.
(Note that we still drop build environments at the end of the command, and
don't attempt to reuse them across processes.)
Closes #14269.
If/when we see https://github.com/astral-sh/uv/issues/14171 again, this
should clarify whether our retry logic was skipped (i.e. a transient
error wasn't correctly identified as transient), or whether we exhausted
our retries. Previously, if you ran a local example fileserver as in
https://github.com/astral-sh/uv/issues/14171#issuecomment-3014580701 and
then you tried to install Python from it, you'd get:
```
$ export UV_TEST_NO_CLI_PROGRESS=1
$ uv python install 3.8.20 --mirror http://localhost:8000 2>&1 | cat
error: Failed to install cpython-3.8.20-linux-x86_64-gnu
Caused by: Failed to extract archive: cpython-3.8.20-20241002-x86_64-unknown-linux-gnu-install_only_stripped.tar.gz
Caused by: failed to unpack `/home/jacko/.local/share/uv/python/.temp/.tmpS4sHHZ/python/lib/libpython3.8.so.1.0`
Caused by: failed to unpack `python/lib/libpython3.8.so.1.0` into `/home/jacko/.local/share/uv/python/.temp/.tmpS4sHHZ/python/lib/libpython3.8.so.1.0`
Caused by: error decoding response body
Caused by: request or response body error
Caused by: error reading a body from connection
Caused by: Connection reset by peer (os error 104)
```
With this change you get:
```
error: Failed to install cpython-3.8.20-linux-x86_64-gnu
Caused by: Request failed after 3 retries
Caused by: Failed to extract archive: cpython-3.8.20-20241002-x86_64-unknown-linux-gnu-install_only_stripped.tar.gz
Caused by: failed to unpack `/home/jacko/.local/share/uv/python/.temp/.tmp4Ia24w/python/lib/libpython3.8.so.1.0`
Caused by: failed to unpack `python/lib/libpython3.8.so.1.0` into `/home/jacko/.local/share/uv/python/.temp/.tmp4Ia24w/python/lib/libpython3.8.so.1.0`
Caused by: error decoding response body
Caused by: request or response body error
Caused by: error reading a body from connection
Caused by: Connection reset by peer (os error 104)
```
At the same time, I'm updating the way we handle the retry count to
avoid nested retry loops exceeding the intended number of attempts, as I
mentioned at
https://github.com/astral-sh/uv/issues/14069#issuecomment-3020634281.
It's not clear to me whether we actually want this part of the change,
and I need feedback here.
## Summary
This PR ensures that we avoid cleaning up build directories until the
end of a resolve-and-install cycle. It's not bulletproof (since we could
still run into issues with `uv lock` followed by `uv sync` whereby a
build directory gets cleaned up that's still referenced in the `build`
artifacts), but it at least gets PyTorch building without error with `uv
pip install .`, which is a case that's been reported several times.
Closes https://github.com/astral-sh/uv/issues/14269.
The marker display code assumes that all versions are normalized, in
that all trailing zeroes are stripped. This is not the case for
tilde-equals and equals-star versions, where the trailing zeroes (before
the `.*`) are semantically relevant. This would cause path
dependent-behavior where we would get a different marker string
depending on whether a version with or without a trailing zero was added
to the cache first.
To handle both equals-star and tilde-equals when converting
`python_version` to `python_full_version` markers, we have to merge the
version normalization (i.e. trimming the trailing zeroes) and the
conversion both to `python_full_version` and to `Ranges`, while special
casing equals-star and tilde-equals.
To avoid churn in lockfiles, we only trim in the conversion to `Ranges`
for markers, but keep using untrimmed versions for requires-python.
(Note that this behavior is technically also path dependent, as versions
with and without trailing zeroes have the same Hash and Eq. E.g.,
`requires-python == ">= 3.10.0"` and `requires-python == ">= 3.10"` in
the same workspace could lead to either value in `uv.lock`, and which
one it is could change if we make unrelated (performance) changes.
Always trimming however definitely changes lockfiles, a churn I wouldn't
do outside another breaking or lockfile-changing change.) Nevertheless,
there is a change for users who have `requires-python = "~= 3.12.0"` in
their `pyproject.toml`, as this now hits the correct normalization path.
Fixes #14231
Fixes #14270
## Summary
There's a good example of the downside of using verbatim URLs here:
https://github.com/astral-sh/uv/pull/14197#discussion_r2163599625 (we
show two relative paths that point to the same directory, but it's not
clear from the error message).
The diff:
```
2 2 │ ----- stdout -----
3 3 │
4 4 │ ----- stderr -----
5 5 │ error: Requirements contain conflicting URLs for package `library` in all marker environments:
6 │-- ../../library
7 │-- ./library
6 │+- file://[TEMP_DIR]/library
7 │+- file://[TEMP_DIR]/library (editable)
```
## Summary
As explained in the [`codspeed-rust` v3 release
notes](https://github.com/CodSpeedHQ/codspeed-rust/releases/tag/v3.0.0),
the `v3` of the compatibility layers is now required to work with the
latest version(`v3`) of `cargo-codspeed`.
For now, we do not prefer native aarch64 Windows Pythons, and prefer
emulated x64 Windows in their stead.
This is preparatory work for shipping support for uv downloading and
installing aarch64 (arm64) windows Pythons. We've [had builds for this
platform ready for a
while](https://github.com/astral-sh/python-build-standalone/pull/387),
but have held back on shipping them due to a fundamental problem:
**The Python packaging ecosystem does not have strong support for
aarch64 windows**, e.g., not many projects build aarch64 wheels yet. The
net effect of this is that, if we handed you an aarch64 python
interpreter on windows, you would have to build a lot more sdists, and
there's a high chance you will simply fail to build that sdist and be
sad.
Yes unfortunately, in this case a non-native Python interpreter simply
*works better* than the native one... in terms of working at all, today.
Of course, if the native interpreter works for your project, it should
presumably have better performance and platform compatibility.
We do not want to stand in the way of progress, as ideally this
situation is a temporary state of affairs as the ecosystem grows to
support aarch64 windows. To enable progress, on aarch64 Windows builds
of uv:
* We will still use a native python interpreter, e.g., if it's at the
front of your `PATH` or the only installed version.
* If we are choosing between equally good interpreters that differ in
architecture, x64 will be preferred.
* If the aarch64 version is newer, we will prefer the aarch64 one.
* We will emit a diagnostic on installation, and show the python request
to pass to uv to force aarch64 windows to be used.
* Will be shipping [aarch64 Windows Python
downloads](https://github.com/astral-sh/python-build-standalone/pull/387)
* Will probably add some kind of global override setting/env-var to
disable this behaviour.
* Will be shipping this behaviour in
[astral-sh/setup-uv](https://github.com/astral-sh/setup-uv)
We're coordinating with Microsoft, GitHub (for the `setup-python`
action), and the CPython team (for the `python.org` installers), to
ensure we're aligned on this default and the timing of toggling to
prefer native distributions in the future.
See discussion in
- https://github.com/astral-sh/uv/issues/12906
---
This is an alternative to
* #13719
which uses sorting rather than filtering, as discussed in
* #13721
This fixes an obscure cache collision in Python interpreter queries,
which we believe to be the root cause of CI flakes we've been seeing
where a project environment is invalidated and recreated.
This work follows from the logs in [this CI
run](https://github.com/astral-sh/uv/actions/runs/15934322410/job/44950599993?pr=14326)
which captured one of the flakes with tracing enabled. There, we can see
that the project environment is invalidated because the Python
interpreter in the environment has a different version than expected:
```
DEBUG Checking for Python environment at `.venv`
TRACE Cached interpreter info for Python 3.12.9, skipping probing: .venv/bin/python3
DEBUG The interpreter in the project environment has different version (3.12.9) than it was created with (3.9.21)
```
(this message is updated to reflect #14329)
The flow is roughly:
- We create an environment with 3.12.9
- We query the environment, and cache the interpreter version for
`.venv/bin/python`
- We create an environment for 3.9.21, replacing the existing one
- We query the environment, and read the cached information
The Python cache entries are keyed by the absolute path to the
interpreter, and rely on the modification time (ctime, nsec resolution)
of the canonicalized path to determine if the cache entry should be
invalidated. The key is a hex representation of a u64 sea hasher output
— which is very unlikely to collide.
After an audit of the Python query caching logic, we determined that the
most likely cause of a collision in cache entries is that the
modification times of underlying interpreters are identical. This seems
pretty feasible, especially if the file system does not support
nanosecond precision — though it appears that the GitHub runners do
support it.
The fix here is to include the canonicalized path in the cache key,
which ensures we're looking at the modification time of the _same_
underlying interpreter.
This will "invalidate" all existing interpreter cache entries but that's
not a big deal.
This should also have the effect of reducing cache churn for
interpreters in virtual environments. Now, when you change Python
versions, we won't invalidate the previous cache entry so if you change
_back_ to the old version we can re-use our cached information.
It's a bit speculative, since we don't have a deterministic reproduction
in CI, but this is the strongest candidate given the logs and should
increase correctness regardless.
Closes https://github.com/astral-sh/uv/issues/14160
Closes https://github.com/astral-sh/uv/issues/13744
Closes https://github.com/astral-sh/uv/issues/13745
Once it's confirmed the flakes are resolved, we should revert
- https://github.com/astral-sh/uv/pull/14275
- #13817
## Summary
In #14245, we started normalizing index URLs by dropping the trailing
slash in the lockfile. We added tests to ensure that this didn't cause
existing lockfiles to be invalidated, but we missed one of the
constructors (specifically, the path that's used with
`tool.uv.sources`).
Motivated by some code duplication highlighted in
https://github.com/astral-sh/uv/pull/14201, I noticed we weren't taking
advantage of the existing implementation for casting to a str here.
Unfortunately, we do need a special case for CPython still.
Closes #7426
## Summary
Picking up on #8284, I noticed that the `requires_python` object already
has its specifiers canonicalized in the `intersection` method, meaning
`~=3.12` has already been converted to `>=3.12, <4` by the time we would
otherwise check it. To address this, the check and warning happen in
`intersection` itself.
## Test Plan
Used the same tests from #8284.
## Summary
When the user provides a requirement like `==2.4.*`, we desugar that to
`>=2.4.dev0,<2.5.dev0`. These bounds then appear in error messages, and
worse, they also trick the error message reporter into thinking that the
user asked for a pre-release.
This PR adds logic to convert to the more-concise `==2.4.*`
representation when possible. We could probably do a similar thing for
the compatible release operator (`~=`).
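As a rough sketch of that re-sugaring, using plain `(major, minor, is_dev0)`
tuples as stand-ins for uv's real version types:
```
/// Re-sugar the bounds `>=X.Y.dev0, <X.(Y+1).dev0` into `==X.Y.*` for display.
fn resugar_wildcard(lower: (u64, u64, bool), upper: (u64, u64, bool)) -> Option<String> {
    let ((lo_major, lo_minor, lo_dev0), (hi_major, hi_minor, hi_dev0)) = (lower, upper);
    // Both bounds must be `.dev0` releases, and the upper bound must sit
    // exactly one minor version above the lower bound.
    if lo_dev0 && hi_dev0 && lo_major == hi_major && hi_minor == lo_minor + 1 {
        Some(format!("=={lo_major}.{lo_minor}.*"))
    } else {
        None
    }
}

fn main() {
    // `>=2.4.dev0, <2.5.dev0` displays as `==2.4.*`.
    assert_eq!(
        resugar_wildcard((2, 4, true), (2, 5, true)),
        Some("==2.4.*".to_string())
    );
}
```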
Closes https://github.com/astral-sh/uv/issues/14177.
Co-authored-by: Zanie Blue <contact@zanie.dev>
Python `bin` installations installed with `uv python install --default
--preview` (no version specified) were not being installed as
upgradeable. Instead each link was pointed at the highest patch version
for a minor version. This change ensures that these preview default
installations are also treated as upgradeable.
The PR includes some updates to the related tests. First, it checks the
default install without specified version case. Second, since it's
adding more read link checks, it creates a new `read_link` helper method
to consolidate repeated logic and replace instances of
`#[cfg(unix/windows)]` with `if cfg!(unix/windows)`.
Fixes #14247
This PR updates `IndexUrl` parsing to normalize non-file URLs by
removing trailing slashes. It also normalizes registry source URLs when
using them to validate the lockfile.
Prior to this change, when writing an index URL to the lockfile, uv
would use a trailing slash if present in the provided URL and no
trailing slash otherwise. This can cause surprising behavior. For
example, `uv lock --locked` will fail when a package is added with an
`--index` value without a trailing slash and then `uv lock --locked` is
run with a `pyproject.toml` version of the index URL that contains a
trailing slash. This PR fixes this and adds a test for the scenario.
It might be safe to normalize file URLs in the same way, but since
slashes have a well-defined meaning in the context of files and
directories, I chose not to normalize them here.
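A minimal sketch of the normalization, assuming the `url` crate (file URLs
are deliberately left untouched):
```
use url::Url;

/// Strip the trailing slash from non-`file://` index URLs so that
/// `.../simple` and `.../simple/` compare (and lock) identically.
fn normalize_index_url(mut url: Url) -> Url {
    if url.scheme() != "file" {
        let trimmed = url.path().trim_end_matches('/').to_string();
        if !trimmed.is_empty() {
            url.set_path(&trimmed);
        }
    }
    url
}
```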
Closes #13707.
uv currently ignores URL-encoded credentials in a redirect location.
This PR adds a check for these credentials to the redirect handling
logic. If found, they are moved to the Authorization header in the
redirect request.
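For illustration, a hedged sketch of that extraction, assuming the `url` and
`base64` crates rather than the exact middleware code:
```
use base64::Engine;
use url::Url;

/// Split URL-encoded credentials out of a redirect `Location`, returning the
/// credential-free URL plus a Basic `Authorization` header value.
fn split_credentials(location: &str) -> Option<(Url, String)> {
    let mut url = Url::parse(location).ok()?;
    if url.username().is_empty() && url.password().is_none() {
        return None;
    }
    let credentials = format!("{}:{}", url.username(), url.password().unwrap_or(""));
    let header = format!(
        "Basic {}",
        base64::engine::general_purpose::STANDARD.encode(credentials)
    );
    // The redirected request should not carry credentials in its URL.
    url.set_username("").ok()?;
    url.set_password(None).ok()?;
    Some((url, header))
}
```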
Closes #11097
Previously we were using the XDG data dir to avoid symlinks, but there's no
particular guarantee that that's not going to be a symlink too. Using the
canonicalized temp dir by default is also slightly nicer for a couple reasons:
It's sometimes faster (an in-memory tmpfs on e.g. Arch), and it makes
overriding `$TMPDIR` or `%TMP%` sufficient to control where tests put temp
files, without needing to override `UV_INTERNAL__TEST_DIR` too.
There was a regression introduced in #13954 on Windows where creating a
venv behaved as if there was a minor version link even if none existed.
This PR adds a check to fix this.
Closes #14249.
We do not currently support passing index names to `--index` for
installing packages. However, we do accept relative paths that can look
like index names. This PR adds the requirement that `--index` values
must be disambiguated with a prefix (`./` or `../` on Unix and Windows,
or `.\\` or `..\\` on Windows). For now, if an ambiguous value is
provided, uv will warn that this will not be supported in the future.
Currently, if you provide an index name like `--index test` when there
is no `test` directory, uv will error with a `Directory not found...`
error. That's not very informative if you thought index names were
supported. The new warning makes the context clearer.
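A small sketch of the disambiguation rule described above; treating absolute
paths as unambiguous is my assumption, not something stated in the change:
```
use std::path::Path;

/// Only treat an `--index` value as a local directory when it is clearly
/// path-like; bare names are reserved for named indexes going forward.
fn looks_like_path(value: &str) -> bool {
    value.starts_with("./")
        || value.starts_with("../")
        || value.starts_with(".\\")
        || value.starts_with("..\\")
        // Assumption: absolute paths are unambiguous and still accepted.
        || Path::new(value).is_absolute()
}
```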
Closes #13921
In workspaces with multiple packages, you usually don't want to include
shared files such as the license repeatedly. Instead, we now read from
symlinked files. This would already have worked if we had used std's
`is_file` and read methods, but walkdir's `is_file` does not consider
symlinked files to be files.
See https://github.com/astral-sh/uv/issues/3957#issuecomment-2994675003
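To illustrate the distinction, here is a sketch that keeps symlinked files by
following the link with std's metadata instead of relying on walkdir's
`file_type`:
```
use walkdir::WalkDir;

/// Collect included files, counting a symlink as a file when its target is a
/// file. `entry.file_type().is_file()` would be false for symlinks, whereas
/// `std::fs::metadata` follows the link.
fn included_files(root: &str) -> Vec<std::path::PathBuf> {
    WalkDir::new(root)
        .into_iter()
        .filter_map(Result::ok)
        .filter(|entry| {
            std::fs::metadata(entry.path())
                .map(|metadata| metadata.is_file())
                .unwrap_or(false)
        })
        .map(|entry| entry.path().to_path_buf())
        .collect()
}
```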
## Summary
Update to [schemars
0.9.0](https://github.com/GREsau/schemars/releases/tag/v0.9.0)
There are differences in the generated JSON Schema and I will [contact
the author](https://github.com/GREsau/schemars/issues/407).
## Test Plan
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
This flakes often and we don't really need it to be monitored
continuously. We can always revive it from Git later.
Closes https://github.com/astral-sh/uv/issues/13952.
Don't log that we resolved a reference through the GitHub fast path if
we didn't use GitHub at all but used the cached revision. This avoids
stating that the fast path works when it's blocked due to unrelated
reasons (e.g. rate limits).
@oconnor663 discovered that executing Python `3.10.8` on Arch Linux ran into an
error loading `libcrypt.so.1`. This caused uv to install the latest
patch version on `uv venv` operations during upgrade tests, which
undermined their purpose (since they are checking that if you first
install `3.10.8` and then upgrade, virtual environments are
transparently upgraded). This PR updates the test to use `3.10.17`
instead to avoid this issue.
#13954 introduced an unnecessary slow-down to Python uninstall by
calling `installations.find_all()` to discover remaining installations
after an uninstall. Instead, we can filter all initial installations
against those in `uninstalled`.
As part of this change, I've updated `uninstalled` from a `Vec` to an
`IndexSet` in order to do efficient lookups in the filter. This required
a change I call out below to how we were retrieving them for messaging.
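A minimal sketch of the filtering described above, with `String` standing in
for uv's real installation key type:
```
use indexmap::IndexSet;

/// Compute the remaining installations by subtracting the uninstalled keys
/// from the installations discovered up front, instead of re-running
/// `installations.find_all()` after the uninstall.
fn remaining(installed: Vec<String>, uninstalled: &IndexSet<String>) -> Vec<String> {
    installed
        .into_iter()
        .filter(|key| !uninstalled.contains(key))
        .collect()
}
```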
We were checking whether a path was an executable in a virtual
environment or the base directory of a virtual environment in multiple
places in the codebase. This PR consolidates this logic into one place.
Closes #13947.
This PR contains the following updates:
| Package | Type | Update | Change |
|---|---|---|---|
| [mimalloc](https://redirect.github.com/purpleprotocol/mimalloc_rust) |
dependencies | patch | `0.1.46` -> `0.1.47` |
---
### Release Notes
purpleprotocol/mimalloc_rust (mimalloc)
[`v0.1.47`](https://redirect.github.com/purpleprotocol/mimalloc_rust/releases/tag/v0.1.47): Version 0.1.47
([Compare Source](https://redirect.github.com/purpleprotocol/mimalloc_rust/compare/v0.1.46...v0.1.47))
Changes:
- Mimalloc `v2.2.4`
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
## Summary
Allows `--torch-backend=auto` to detect AMD GPUs. The approach is fairly
well-documented inline, but I opted for `rocm_agent_enumerator` over
(e.g.) `rocminfo` since it seems to be the recommended approach for
scripting:
https://rocm.docs.amd.com/projects/rocminfo/en/latest/how-to/use-rocm-agent-enumerator.html.
Closes https://github.com/astral-sh/uv/issues/14086.
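As a rough sketch of the detection side (filtering out `gfx000` as a CPU
agent is an assumption about `rocm_agent_enumerator`'s typical output, not
something guaranteed here):
```
use std::process::Command;

/// Detect AMD GPUs by running `rocm_agent_enumerator`, which prints one gfx
/// target per line (e.g. `gfx942`). Mapping the targets to a torch backend is
/// a separate step and not shown.
fn detect_amd_gpus() -> Vec<String> {
    let Ok(output) = Command::new("rocm_agent_enumerator").output() else {
        return Vec::new();
    };
    String::from_utf8_lossy(&output.stdout)
        .lines()
        .map(str::trim)
        .filter(|agent| agent.starts_with("gfx") && *agent != "gfx000")
        .map(String::from)
        .collect()
}
```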
## Test Plan
```
root@rocm-jupyter-gpu-mi300x1-192gb-devcloud-atl1:~# ./uv-linux-libc-11fb582c5c046bae09766ceddd276dcc5bb41218/uv pip install torch --torch-backend=auto
Resolved 11 packages in 251ms
Prepared 2 packages in 6ms
Installed 11 packages in 257ms
+ filelock==3.18.0
+ fsspec==2025.5.1
+ jinja2==3.1.6
+ markupsafe==3.0.2
+ mpmath==1.3.0
+ networkx==3.5
+ pytorch-triton-rocm==3.3.1
+ setuptools==80.9.0
+ sympy==1.14.0
+ torch==2.7.1+rocm6.3
+ typing-extensions==4.14.0
```
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
I was looking into `uv tool` not supporting version files, and noticed
that this implementation was confusing and skipped some handling, such as
a tracing log when `--no-config` excludes selection of a version file.
I've refactored it in preparation for the next change.
> NOTE: The PRs that were merged into this feature branch have all been
independently reviewed. But it's also useful to see all of the changes
in their final form. I've added comments to significant changes
throughout the PR to aid discussion.
This PR introduces transparent Python version upgrades to uv, allowing
for a smoother experience when upgrading to new patch versions.
Previously, upgrading Python patch versions required manual updates to
each virtual environment. Now, virtual environments can transparently
upgrade to newer patch versions.
Due to significant changes in how uv installs and executes managed
Python executables, this functionality is initially available behind a
`--preview` flag. Once an installation has been made upgradeable through
`--preview`, subsequent operations (like `uv venv -p 3.10` or patch
upgrades) will work without requiring the flag again. This is
accomplished by checking for the existence of a minor version symlink
directory (or junction on Windows).
### Features
* New `uv python upgrade` command to upgrade installed Python versions
to the latest available patch release:
```
# Upgrade specific minor version
uv python upgrade 3.12 --preview
# Upgrade all installed minor versions
uv python upgrade --preview
```
* Transparent upgrades also occur when installing newer patch versions:
```
uv python install 3.10.8 --preview
# Automatically upgrades existing 3.10 environments
uv python install 3.10.18
```
* Support for transparently upgradeable Python `bin` installations via
`--preview` flag
```
uv python install 3.13 --preview
# Automatically upgrades the `bin` installation if there is a newer patch version available
uv python upgrade 3.13 --preview
```
* Virtual environments can still be tied to a patch version if desired
(ignoring patch upgrades):
```
uv venv -p 3.10.8
```
### Implementation
Transparent upgrades are implemented using:
* Minor version symlink directories (Unix) or junctions (Windows)
* On Windows, trampolines simulate paths with junctions
* Symlink directory naming follows the Python build standalone format, e.g.,
`cpython-3.10-macos-aarch64-none` (see the sketch below)
* Upgrades are scoped to the minor version key (as represented in the
naming format: implementation-minor version+variant-os-arch-libc)
* If the context does not provide a patch version request and the
interpreter is from a managed CPython installation, the `Interpreter`
used by `uv python run` will use the full symlink directory executable
path when available, enabling transparently upgradeable environments
created with the `venv` module (`uv run python -m venv`)
New types:
* `PythonMinorVersionLink`: in a sense, the core type for this PR, this
is a representation of a minor version symlink directory (or junction on
Windows) that points to the highest installed managed CPython patch
version for a minor version key.
* `PythonInstallationMinorVersionKey`: provides a view into a
`PythonInstallationKey` that excludes the patch and prerelease. This is
used for grouping installations by minor version key (e.g., to find the
highest available patch installation for that minor version key) and for
minor version directory naming.
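To make the naming and linking concrete, here is a minimal, Unix-only sketch;
the key formatting and the `refresh_link` helper are illustrative stand-ins,
not uv's real types:
```
use std::path::Path;

/// Build a minor version link name like `cpython-3.10-macos-aarch64-none`.
fn minor_version_link_name(implementation: &str, minor: (u64, u64), platform: &str) -> String {
    format!("{}-{}.{}-{}", implementation, minor.0, minor.1, platform)
}

/// Point the minor version link at the highest installed patch directory,
/// e.g. `cpython-3.10-...` -> `cpython-3.10.18-...`.
#[cfg(unix)]
fn refresh_link(installations: &Path, link: &str, highest_patch: &str) -> std::io::Result<()> {
    let link_path = installations.join(link);
    // Replace any existing link so it follows the new patch version.
    let _ = std::fs::remove_file(&link_path);
    std::os::unix::fs::symlink(installations.join(highest_patch), link_path)
}
```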
### Compatibility
* Supports virtual environments created with:
* `uv venv`
* `uv run python -m venv` (using managed Python that was installed or
upgraded with `--preview`)
* Virtual environments created within these environments
* Existing virtual environments from before these changes continue to
work but aren't transparently upgradeable without being recreated
* Supports both standard Python (`python3.10`) and freethreaded Python
(`python3.10t`)
* Support for transparent upgrades is currently only available for
managed CPython installations
Closes #7287
Closes #7325
Closes #7892
Closes #9031
Closes #12977
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
This PR is a combination of #12920 and #13754. Prior to these changes,
following a redirect when searching indexes would bypass our
authentication middleware. This PR updates uv to support propagating
credentials through our middleware on same-origin redirects and to
support netrc credentials for both same- and cross-origin redirects. It
does not handle the case described in #11097 where the redirect location
itself includes credentials (e.g.,
`https://user:pass@redirect-location.com`). That will be addressed in
follow-up work.
This includes unit tests for the new redirect logic and integration
tests for credential propagation. The automated external registries test
is also passing for AWS CodeArtifact, Azure Artifacts, GCP Artifact
Registry, JFrog Artifactory, GitLab, Cloudsmith, and Gemfury.
Closes #13922
## Summary
Add a warning if the directory given by the `--index` argument is empty.
## Test Plan
Added test case `add_index_empty_directory` in `edit.rs`
When working on support for reading global Python pins in tool
operations, I noticed that we weren't using the canonicalized Python
request in receipts — we were using the raw string provided by the user.
Since we'll need to compare these values, we should be using the
canonicalized string.
The `Tool` and `ToolReceipt` types have been updated to hold a
`PythonRequest` instead of a `String`, and `Serialize` was implemented
for `PythonRequest` so canonicalization can happen at the edge instead
of being the caller's responsibility.
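A minimal sketch of serializing through the canonical form; `CanonicalRequest`
and `to_canonical_string` are hypothetical stand-ins for the real
`PythonRequest` API:
```
use serde::{Serialize, Serializer};

/// A request type that serializes as its canonical string, so callers writing
/// receipts never have to remember to canonicalize first.
struct CanonicalRequest {
    raw: String,
}

impl CanonicalRequest {
    fn to_canonical_string(&self) -> String {
        // Illustrative only: e.g. `CPython@3.12` and `cpython@3.12` collapse
        // to the same canonical form.
        self.raw.trim().to_lowercase()
    }
}

impl Serialize for CanonicalRequest {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        serializer.serialize_str(&self.to_canonical_string())
    }
}
```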
Fix `uv run -p 3.7` by not using a walrus operator. Python 3.7 isn't
really supported anymore, but there's no reason to break interpreter
discovery for it.
When using `uv lock --upgrade-package=python` after changing
`requires-python`, it was possible to get into a state where the fork
markers produced corresponded to the empty set. This in turn resulted in
an empty lock file.
There was already some infrastructure in place that I think was perhaps
intended to handle this. In particular, `Lock::check_marker_coverage`
checks whether the fork markers have some overlap with the supported
environments (including the `requires-python`). But there were two
problems with this.
First is that in lock validation, this marker coverage check came
_after_ a path that returned `Preferable` (meaning that the fork markers
should be kept) when `--upgrade-package` was used. Second is that the
marker coverage check used the `requires-python` in the lock file and
_not_ the `requires-python` in the now updated `pyproject.toml`.
We attempt to solve this conundrum by slightly re-arranging lock file
validation and by explicitly checking whether the *new*
`requires-python` is disjoint from the fork markers in the lock file. If
it is, then we return `Versions` from lock file validation (indicating
that the fork markers should be dropped).
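A minimal sketch of that decision, shrinking markers down to half-open ranges
of Python minor versions as a stand-in for uv's real marker algebra:
```
#[derive(Debug, PartialEq)]
enum ForkMarkerDecision {
    Keep,
    Drop,
}

/// Half-open range of minor versions: lower inclusive, upper exclusive.
type MinorRange = (u32, u32);

fn overlaps(a: MinorRange, b: MinorRange) -> bool {
    a.0 < b.1 && b.0 < a.1
}

/// If the *new* `requires-python` range is disjoint from every fork marker in
/// the lock file, the markers must be dropped (and the resolution re-forked),
/// even when `--upgrade-package` would otherwise prefer to keep them.
fn revalidate(new_requires_python: MinorRange, fork_markers: &[MinorRange]) -> ForkMarkerDecision {
    if fork_markers.iter().any(|&m| overlaps(m, new_requires_python)) {
        ForkMarkerDecision::Keep
    } else {
        ForkMarkerDecision::Drop
    }
}

fn main() {
    // `requires-python` moved from `>=3.9,<3.12` to `>=3.12`; the old fork
    // markers covering 3.9..3.12 are now disjoint and must be dropped.
    assert_eq!(revalidate((12, u32::MAX), &[(9, 12)]), ForkMarkerDecision::Drop);
}
```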
Fixes #13951
We always ignore the `clippy::struct_excessive_bools` rule and formerly
annotated this at the function level. This PR specifies the allow in
`workspace.lints.clippy` in `Cargo.toml`.