These were introduced in https://github.com/astral-sh/uv/pull/587 but
are now showing up in our slow test list (#878) and we previously pared
down the `poetry_editable` test case dependencies — I think these were
just missed.
## Summary
I'm inferring that these are like... the older tag format? See, e.g.:
```
soxbindings-0.0.1-pp27-pypy_73-macosx_10_9_x86_64.whl
soxbindings-0.0.1-pp27-pypy_73-manylinux2010_x86_64.whl
soxbindings-0.0.1-pp36-pypy36_pp73-macosx_10_9_x86_64.whl
soxbindings-0.0.1-pp36-pypy36_pp73-manylinux2010_x86_64.whl
```
## Summary
Fixes #10598
## Test Plan
Looking for input here @zanieb. How/where would you include tests for
this?
More broadly: do we want a failure to perform the rename to be a hard
error? Or should it start out as a warning?
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
This log message is shown every time a script with a uv shebang is
run. After all dependencies are installed, printing this log message
every time does not add any relevant information for the user. I would
say it could even be misleading and prompt the user to debug their own
program in search of the source of this log message.
As a consequence, reduce the log level of this message to debug.
## Test Plan
`uv run` was called with default settings and the log message didn't show
up.
`cargo test` was run and I fixed the issues that came up.
## Summary
This PR modifies the lockfile to omit versions for source trees that use
`dynamic` versioning, thereby enabling projects to use dynamic
versioning with `uv.lock`.
Prior to this change, dynamic versioning was largely incompatible with
locking, especially for popular tools like `setuptools_scm` -- in that
case, every commit bumps the version, so every commit invalidates the
committed lockfile.
Closes https://github.com/astral-sh/uv/issues/7533.
## Summary
I previously made this required, but we now need to be able to create
these from a lockfile that _omits_ versions for dynamic source trees.
They should still be present in most cases, but it's best-effort.
## Summary
I use `uv` for automation on remote hosts and it would be useful to have
it be able to tell me the supported versions of Python (for the remote
machine) in a machine-readable manner so I do not need to parse `uv
python list`.
This change adds `--format (json|text)` to `uv python list` to make its
output machine-readable.
Loosely related:
- https://github.com/astral-sh/uv/issues/411
## Test Plan
Manually tested via
```
# quick inspection without pretty print
cargo run -- python list --format json
```
### Short example of output (trimmed down)
Cmd: `cargo run -- python list --format json | jq '.[:2]'`
```json
[
  {
    "key": "cpython-3.13.1+freethreaded-linux-x86_64-gnu",
    "version": "3.13.1",
    "version_parts": {
      "major": 3,
      "minor": 13,
      "patch": 1
    },
    "path": null,
    "symlink": null,
    "url": "https://github.com/astral-sh/python-build-standalone/releases/download/20241219/cpython-3.13.1%2B20241219-x86_64-unknown-linux-gnu-freethreaded%2Bpgo%2Blto-full.tar.zst",
    "os": "linux",
    "variant": "freethreaded",
    "implementation": "cpython",
    "arch": "x86_64",
    "libc": "gnu"
  },
  {
    "key": "cpython-3.13.1-linux-x86_64-gnu",
    "version": "3.13.1",
    "version_parts": {
      "major": 3,
      "minor": 13,
      "patch": 1
    },
    "path": "/usr/bin/python3.13",
    "symlink": null,
    "url": null,
    "os": "linux",
    "variant": "default",
    "implementation": "cpython",
    "arch": "x86_64",
    "libc": "gnu"
  }
]
```
---------
Co-authored-by: John Zlotek <jzlotek@gmail.com>
## Summary
I don't think this had an impact in practice, but it is "wrong" to omit
these. Confirmed that the cache (for example) now includes the build tag
(as in, `mkl_fft-1.3.8-72-cp310-cp310-manylinux2014_x86_64`).
## Summary
* Closes https://github.com/astral-sh/uv/pull/10515
* Bumps Rust Nightly to 1.85 Beta
* Removes old dev dependencies
## Test Plan
Existing tests.
Note, binaries need to be rebuilt for integrity before merging.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
After we resolve, we filter out any wheels that aren't applicable for
the target platforms. So, e.g., we remove macOS wheels if we find that
the user only asked to solve for Windows.
This PR extends the same logic to architectures, so that we filter out
ARM-only wheels when the user is only solving for x86, etc.
Closes #10571.
## Summary
This PR extends the thinking in #10525 to platform tags, and then uses
the structured tag enums everywhere, rather than passing around strings.
I think this is a big improvement! It means we're no longer doing ad hoc
tag parsing all over the place.
## Summary
The idea here is to show both (1) an example of a compatible tag and (2)
the tags that were available, whenever we fail to resolve due to an
absence of matching wheels.
Closes https://github.com/astral-sh/uv/issues/2777.
## Summary
I need to be able to do non-lexicographic comparisons between tags
(e.g., so I can sort `cp313` as greater than `cp39`). It ended up being
easiest to just create structured types for all the tags we support,
with `FromStr` and `Display` implementations.
We don't currently store these in `Tags` or in `WheelFilename`. We may
want to, since they're really small (and `Copy`), but I need to
benchmark to determine whether parsing these in `WheelFilename` is
prohibitively slow.
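For illustration, a minimal sketch of the approach under these assumptions (the type, the parsing, and the error handling are simplified stand-ins for the real tag enums):
```rust
use std::fmt;
use std::str::FromStr;

// A structured CPython tag that parses from strings like `cp39`/`cp313`
// and compares numerically, so `cp313 > cp39`.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
struct CPythonTag {
    major: u8,
    minor: u16,
}

impl FromStr for CPythonTag {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let rest = s.strip_prefix("cp").ok_or_else(|| format!("invalid tag: {s}"))?;
        if rest.len() < 2 {
            return Err(format!("invalid tag: {s}"));
        }
        let (major, minor) = rest.split_at(1);
        Ok(Self {
            major: major.parse().map_err(|_| format!("invalid tag: {s}"))?,
            minor: minor.parse().map_err(|_| format!("invalid tag: {s}"))?,
        })
    }
}

impl fmt::Display for CPythonTag {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "cp{}{}", self.major, self.minor)
    }
}

fn main() {
    let old: CPythonTag = "cp39".parse().unwrap();
    let new: CPythonTag = "cp313".parse().unwrap();
    assert!(new > old); // numeric, not lexicographic
    assert_eq!(new.to_string(), "cp313"); // round-trips through Display
}
```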
## Summary
Resolves #5952
Add a `--path` option to `uv pip freeze` for compatibility with `pip
freeze`.
## Test Plan
New snapshot tests
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Closes #3312.
This PR adds Git LFS support to the `uv-git` crate by using the
`git-lfs` CLI to fetch required LFS objects for a revision following the
call to `git fetch`.
The LFS fetch step is disabled by default and only enabled if the
environment variable `UV_GIT_LFS` is set.
When enabled, the LFS fetch step is run for all repositories regardless
of whether they have associated LFS objects. The step is skipped if the
`git-lfs` CLI tool isn't installed.
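A rough sketch of how such gating could look; the function name, the `origin` remote, and the error handling are illustrative, and the exact invocation uv uses may differ:
```rust
use std::path::Path;
use std::process::Command;

// Only attempt the LFS fetch when `UV_GIT_LFS` is set, and treat a missing
// `git-lfs` binary as a skip rather than an error.
fn maybe_fetch_lfs(repo_dir: &Path, revision: &str) -> std::io::Result<()> {
    if std::env::var_os("UV_GIT_LFS").is_none() {
        return Ok(()); // LFS support is opt-in
    }
    match Command::new("git-lfs")
        .args(["fetch", "origin", revision])
        .current_dir(repo_dir)
        .status()
    {
        Ok(status) if status.success() => Ok(()),
        Ok(status) => Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("git-lfs fetch failed: {status}"),
        )),
        // `git-lfs` is not installed: skip the step entirely.
        Err(err) if err.kind() == std::io::ErrorKind::NotFound => Ok(()),
        Err(err) => Err(err),
    }
}
```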
## Test Plan
I verified that the minimal example in the linked issue passes, i.e.
this command now succeeds:
```sh
UV_GIT_LFS=1 uv pip install git+https://github.com/grebnetiew/lfs-py.git
```
I also verified that non-LFS repositories still work, with or without
`git-lfs` installed.
### To Replicate
Attempt to use uv to install a Git dependency that contains LFS objects
(e.g. `uv pip install git+https://github.com/grebnetiew/lfs-py.git`).
This should fail with a smudge filter error.
Re-run the same command with the added environment variable
`UV_GIT_LFS=1`. The install should now succeed.
## Potential Changes / Improvements
~With this change LFS objects in a given revision will always be
downloaded if the user has Git LFS installed, which may not always be
desired behavior. It might be helpful to add a field to the `uv`
settings and/or an environment variable so that the LFS step can be
disabled if needed.~
Enabling/disabling via an environment variable has now been implemented.
---------
Co-authored-by: Sydney Duckworth <sydduckworth@users.noreply.github.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Closes https://github.com/astral-sh/uv/issues/10522.
## Test Plan
```
❯ cargo run venv
warning: Failed to parse `pyproject.toml` during environment creation:
TOML parse error at line 1, column 1
|
1 | [project]
| ^^^^^^^^^
`pyproject.toml` is using the `[project]` table, but the required `project.version` field is neither set nor present in the `project.dynamic` list
Using CPython 3.13.0
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
```
## Summary
The assumption that all tags are listed under a flat `.git/ref/tags`
structure was wrong. Git creates a hierarchy of directories for tags
containing slashes. To fix the cache key calculation, we need to
recursively traverse all files under that folder instead.
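For illustration, a minimal sketch of the recursive traversal (function and variable names are hypothetical):
```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Recursively collect every file under `.git/refs/tags`, including tags that
// contain slashes (e.g. `releases/v1.0`), instead of assuming a flat listing.
fn collect_tag_refs(dir: &Path, out: &mut Vec<PathBuf>) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            collect_tag_refs(&path, out)?;
        } else {
            out.push(path);
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut refs = Vec::new();
    collect_tag_refs(Path::new(".git/refs/tags"), &mut refs)?;
    println!("{} tag refs found", refs.len());
    Ok(())
}
```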
## Test Plan
1. Create an `uv` project with git-tag cache-keys;
2. Add any tag with slash;
3. Run `uv sync` and see uv_cache_info error in verbose log;
4. `uv sync` doesn't trigger reinstall on next tag addition or removal;
5. With fix applied, reinstall triggers on every tag update and there
are no errors in the log.
Fixes #10467
---------
Co-authored-by: Sergei Nizovtsev <sergei.nizovtsev@eqvilent.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Closes https://github.com/astral-sh/uv/issues/10493.
## Test Plan
Run `cargo test --profile fast-build --no-fail-fast -p uv
username_password_sources` from a terminal.
## Summary
If you have a dependency with a marker, and you add a constraint, it
causes us to _always_ fork, because we represent the constraint as a
second dependency with the marker repeated (and, therefore, we have two
requirements of the same name, both with markers). I don't think we
should fork here -- and in the end it's leading to this undesirable
resolution: #10481.
I tried to change constraints such that we just _reuse_ and augment the
initial requirement, but that has a fairly negative effect on error
messages: #10489. So this fix seems a bit better to me.
Closes https://github.com/astral-sh/uv/issues/10481.
## Summary
Fixes a bug when there are only comments in the dependencies section.
Basically, after one removes all dependencies, if there are remaining
comments then the value unwrapped here
c198e2233e/crates/uv-workspace/src/pyproject_mut.rs (L1309)
is never properly initialized.
It's initialized to `None`, here
c198e2233e/crates/uv-workspace/src/pyproject_mut.rs (L1256),
but doesn't get set to `Some(...)` until the first dependency here
c198e2233e/crates/uv-workspace/src/pyproject_mut.rs (L1276)
and since we remove them all... there are none.
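A minimal, self-contained sketch of the failure mode; names are illustrative and the real logic lives in `pyproject_mut.rs`:
```rust
fn main() {
    // All dependencies were removed; only comments remain in the array.
    let remaining_deps: Vec<&str> = vec![];

    // The position is only ever set inside the loop over dependencies...
    let mut last_dep_position: Option<usize> = None;
    for (index, _dep) in remaining_deps.iter().enumerate() {
        last_dep_position = Some(index);
    }

    // ...so with no dependencies left, calling `.unwrap()` here would panic,
    // which is the reported crash. Falling back to a default avoids it.
    let insert_at = last_dep_position.map_or(0, |index| index + 1);
    println!("would insert at position {insert_at}");
}
```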
## Test Plan
Manually induced bug with
```
[project]
name = "t1"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"duct>=0.6.4",
"minilog>=2.3.1",
# comment
]
```
Then running
```
$ RUST_LOG=trace RUST_BACKTRACE=full uv remove duct minilog
DEBUG uv 0.5.8
DEBUG Found project root: `/home/bnorick/dev/workspace/t1`
DEBUG No workspace root found, using project root
thread 'main' panicked at crates/uv-workspace/src/pyproject_mut.rs:1294:73:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: 0x5638d7bed6ba - <unknown>
1: 0x5638d783760b - <unknown>
2: 0x5638d7bae232 - <unknown>
3: 0x5638d7bf0f07 - <unknown>
4: 0x5638d7bf215c - <unknown>
5: 0x5638d7bf1972 - <unknown>
6: 0x5638d7bf1909 - <unknown>
7: 0x5638d7bf18f4 - <unknown>
8: 0x5638d75087d2 - <unknown>
9: 0x5638d750896b - <unknown>
10: 0x5638d7508d68 - <unknown>
11: 0x5638d8dcf1bb - <unknown>
12: 0x5638d76be271 - <unknown>
13: 0x5638d75ef1f9 - <unknown>
14: 0x5638d75fc3cd - <unknown>
15: 0x5638d772d9de - <unknown>
16: 0x5638d8476812 - <unknown>
17: 0x5638d83e1894 - <unknown>
18: 0x5638d84722d3 - <unknown>
19: 0x5638d83e1372 - <unknown>
20: 0x7f851cfc7d90 - <unknown>
21: 0x7f851cfc7e40 - __libc_start_main
22: 0x5638d758e992 - <unknown>
23: 0x0 - <unknown>
```
N.B. After fixing #10430, `ArcStr` became the fastest implementation
(and the gains were significantly reduced, down to 1-2%). See:
https://github.com/astral-sh/uv/pull/10453#issuecomment-2583344414.
## Summary
I tried out a variety of small string crates, but `Arc<str>`
outperformed them, giving a ~10% speed-up:
```console
❯ hyperfine "../arcstr lock" "../flexstr lock" "uv lock" "../arc lock" "../compact_str lock" --prepare "rm -f uv.lock" --min-runs 50 --warmup 20
Benchmark 1: ../arcstr lock
Time (mean ± σ): 304.6 ms ± 2.3 ms [User: 302.9 ms, System: 117.8 ms]
Range (min … max): 299.0 ms … 311.3 ms 50 runs
Benchmark 2: ../flexstr lock
Time (mean ± σ): 319.2 ms ± 1.7 ms [User: 317.7 ms, System: 118.2 ms]
Range (min … max): 316.8 ms … 323.3 ms 50 runs
Benchmark 3: uv lock
Time (mean ± σ): 330.6 ms ± 1.5 ms [User: 328.1 ms, System: 139.3 ms]
Range (min … max): 326.6 ms … 334.2 ms 50 runs
Benchmark 4: ../arc lock
Time (mean ± σ): 303.0 ms ± 1.2 ms [User: 301.6 ms, System: 118.4 ms]
Range (min … max): 300.3 ms … 305.3 ms 50 runs
Benchmark 5: ../compact_str lock
Time (mean ± σ): 320.4 ms ± 2.0 ms [User: 318.7 ms, System: 120.8 ms]
Range (min … max): 317.3 ms … 326.7 ms 50 runs
Summary
../arc lock ran
1.01 ± 0.01 times faster than ../arcstr lock
1.05 ± 0.01 times faster than ../flexstr lock
1.06 ± 0.01 times faster than ../compact_str lock
1.09 ± 0.01 times faster than uv lock
```
## Summary
We can read from the slice directly. I don't think this will affect
performance today, because `from_str` will then allocate, but it
_should_ be a speedup once #10475 merges, since we can then avoid
allocating a `String` and go straight from `str` to `ArcStr`.
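For illustration, a small sketch of the pattern (the function name is hypothetical): the `Arc<str>` is built directly from a borrowed `&str`, and cloning it afterwards only bumps a reference count.
```rust
use std::sync::Arc;

fn intern(name: &str) -> Arc<str> {
    // Copies the bytes once into the shared allocation; no intermediate `String`.
    Arc::from(name)
}

fn main() {
    let a = intern("requests");
    let b = Arc::clone(&a); // cheap clone: just a reference-count bump
    assert_eq!(&*a, &*b);
}
```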
#8061 incorrectly claims to change the delimiter for `UV_FIND_LINKS`
from spaces to commas. In reality, it prevents `UV_FIND_LINKS` from
being split. This commit fixes that.
## Summary
This appears to be a consistent 1% performance improvement and should
also reduce memory quite a bit. We've also decided to use these for
markers, so it's nice to use the same optimization here.
```
❯ hyperfine "./uv pip compile --universal scripts/requirements/airflow.in" "./arcstr pip compile --universal scripts/requirements/airflow.in" --min-runs 50 --warmup 20
Benchmark 1: ./uv pip compile --universal scripts/requirements/airflow.in
Time (mean ± σ): 136.3 ms ± 4.0 ms [User: 139.1 ms, System: 241.9 ms]
Range (min … max): 131.5 ms … 149.5 ms 50 runs
Benchmark 2: ./arcstr pip compile --universal scripts/requirements/airflow.in
Time (mean ± σ): 134.9 ms ± 3.2 ms [User: 137.6 ms, System: 239.0 ms]
Range (min … max): 130.1 ms … 151.8 ms 50 runs
Summary
./arcstr pip compile --universal scripts/requirements/airflow.in ran
1.01 ± 0.04 times faster than ./uv pip compile --universal scripts/requirements/airflow.in
```
It turns out that we use `UniversalMarker::pep508` quite a bit. To the
point that it makes sense to pre-compute it when constructing a
`UniversalMarker`.
This still isn't necessarily the fastest thing we can do, but this
results in a major speed-up and `without_extras` no longer shows up for
me in a profile.
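A hypothetical sketch of the caching pattern described above; the struct layout and the simplification step are placeholders, not uv's actual types:
```rust
// Compute the PEP 508 view once at construction time instead of on every call.
struct UniversalMarker {
    combined: String,
    pep508: String, // cached at construction
}

impl UniversalMarker {
    fn new(combined: String) -> Self {
        let pep508 = combined.clone(); // stand-in for the real simplification
        Self { combined, pep508 }
    }

    fn combined(&self) -> &str {
        &self.combined
    }

    fn pep508(&self) -> &str {
        &self.pep508 // no recomputation on the hot path
    }
}

fn main() {
    let marker = UniversalMarker::new("python_full_version >= '3.9'".to_string());
    assert_eq!(marker.pep508(), marker.combined());
}
```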
Motivating benchmarks. First, from #10430:
```
$ hyperfine 'rm -f uv.lock && uv lock' 'rm -f uv.lock && uv-ag-optimize-without-extras lock'
Benchmark 1: rm -f uv.lock && uv lock
Time (mean ± σ): 408.3 ms ± 276.6 ms [User: 333.6 ms, System: 111.1 ms]
Range (min … max): 316.9 ms … 1195.3 ms 10 runs
Warning: The first benchmarking run for this command was significantly slower than the rest (1.195 s). This could be caused by (filesystem) caches that were not filled until after the first run. You should consider using the '--warmup' option to fill those caches before the actual benchmark. Alternatively, use the '--prepare' option to clear the caches before each timing run.
Benchmark 2: rm -f uv.lock && uv-ag-optimize-without-extras lock
Time (mean ± σ): 209.4 ms ± 2.2 ms [User: 209.8 ms, System: 103.8 ms]
Range (min … max): 206.1 ms … 213.4 ms 14 runs
Summary
rm -f uv.lock && uv-ag-optimize-without-extras lock ran
1.95 ± 1.32 times faster than rm -f uv.lock && uv lock
```
And now from #10438:
```
$ hyperfine 'uv pip compile requirements.in -c constraints.txt --universal --no-progress --python-version 3.8 --offline > /dev/null' 'uv-ag-optimize-without-extras pip compile requirements.in -c constraints.txt --universal --no-progress --python-version 3.8 --offline > /dev/null'
Benchmark 1: uv pip compile requirements.in -c constraints.txt --universal --no-progress --python-version 3.8 --offline > /dev/null
Time (mean ± σ): 12.718 s ± 0.052 s [User: 12.818 s, System: 0.140 s]
Range (min … max): 12.650 s … 12.815 s 10 runs
Benchmark 2: uv-ag-optimize-without-extras pip compile requirements.in -c constraints.txt --universal --no-progress --python-version 3.8 --offline > /dev/null
Time (mean ± σ): 419.5 ms ± 6.7 ms [User: 434.7 ms, System: 100.6 ms]
Range (min … max): 412.7 ms … 434.3 ms 10 runs
Summary
uv-ag-optimize-without-extras pip compile requirements.in -c constraints.txt --universal --no-progress --python-version 3.8 --offline > /dev/null ran
30.32 ± 0.50 times faster than uv pip compile requirements.in -c constraints.txt --universal --no-progress --python-version 3.8 --offline > /dev/null
```
Fixes #10430, Fixes #10438
## Summary
We shouldn't consider incompatible distributions (e.g., those that don't
match the required Python version) when determining the implied markers.
For some reason this was banned when originally added (I did not see
discussion about it). I think it's fine to allow. With `uv run`, there's
a bit of nuance because we also allow the script to be read from stdin.
## Summary
If a user provides a constraint like `flask==3.0.0`, that gets expanded
to `[3.0.0, 3.0.0+[max])`. So it's not a _singleton_, but it should be
treated as such for the purposes of prioritization, since in practice it
will almost always map to a single version.
This should be essentially the exact same behaviour, but backon is a
total API redesign, so things had to be expressed slightly differently.
Overall I think the code is more readable, which is nice.
Fixes #10001
## Summary
The issue here is that we add `urllib3{python_full_version >= '3.8'}` as
a dependency, then `requests{python_full_version >= '3.8'}`, which adds
`urllib3`, but at that point, we haven't expanded
`urllib3{python_full_version >= '3.8'}`, so we "lose" the singleton
constraint. The solution is to ensure that we visit proxies eagerly, so
that we accumulate constraints as early as possible.
Closes
https://github.com/astral-sh/uv/issues/10425#issuecomment-2580324578.
## Summary
You can now run `uv tree --script main.py` to show the dependency tree
for a given script. If a lockfile doesn't exist, it will create one.
Closes https://github.com/astral-sh/uv/issues/7328.
## Summary
`uv add --script main.py anyio` will now update the lockfile, _if_ it
already exists. (If no such lockfile exists, the behavior is unchanged.)
## Summary
This PR adds `ls` alias to `uv {tool, python, pip} list` for
convenience.
Not sure if folks previously discussed this or have any opinion on
having aliases – but I have a muscle memory for `ls` for listing things
in commands I'm using (like `docker images ls`, `zellij ls`, `helm ls`
etc.) and thought having `ls` alias for `list` command would be useful.
## Test Plan
I simply compiled `uv` and manually checked `./target/release/uv {tool,
python, pip} ls`.
## Summary
You can now run `uv lock --script main.py` to lock a given script
(though as of this PR, the script itself isn't used anywhere).
Closes https://github.com/astral-sh/uv/issues/6318.
The shellcheck action we use misses some files, so they fell out of
spec for what we support. This PR first and foremost adds them to the
scanning list, and then fixes the issues found.
Fixes #7480
## Summary
This PR revives https://github.com/astral-sh/uv/pull/7827 to improve
tool resolutions such that, if the resolution fails, and the selected
interpreter doesn't match the required Python version from the solve, we
attempt to re-solve with a newly-discovered interpreter that _does_
match the required Python version.
For now, we attempt to choose a Python interpreter that's greater than
the inferred `requires-python`, but compatible with the same Python
minor. This helps avoid successive failures for cases like Posting,
where choosing Python 3.13 fails because it has a dependency that lacks
source distributions and doesn't publish any Python 3.13 wheels. We
should further improve the strategy to solve _that_ case too, but this
is at least the more conservative option...
In short, if you do `uv tool install posting`, and we find Python 3.8 on
your machine, we'll detect that `requires-python: >=3.11`, then search
for the latest Python 3.11 interpreter and re-resolve.
Closes https://github.com/astral-sh/uv/issues/6381.
Closes https://github.com/astral-sh/uv/issues/10282.
## Test Plan
The following should succeed:
```
cargo run python uninstall --all
cargo run python install 3.8
cargo run tool install posting
```
In the logs, we see:
```
...
DEBUG No compatible version found for: posting
DEBUG Refining interpreter with: Python >=3.11, <3.12
DEBUG Searching for Python >=3.11, <3.12 in managed installations or search path
DEBUG Searching for managed installations at `/Users/crmarsh/.local/share/uv/python`
DEBUG Skipping incompatible managed installation `cpython-3.8.20-macos-aarch64-none`
DEBUG Found `cpython-3.13.1-macos-aarch64-none` at `/opt/homebrew/bin/python3` (search path)
DEBUG Skipping interpreter at `/opt/homebrew/opt/python@3.13/bin/python3.13` from search path: does not satisfy request `>=3.11, <3.12`
DEBUG Found `cpython-3.11.7-macos-aarch64-none` at `/opt/homebrew/bin/python3.11` (search path)
DEBUG Re-resolving with Python 3.11.7
DEBUG Using request timeout of 30s
DEBUG Solving with installed Python version: 3.11.7
DEBUG Solving with target Python version: >=3.11.7
DEBUG Adding direct dependency: posting*
DEBUG Searching for a compatible version of posting (*)
...
```
This test started failing on main.
I don't understand why this changed (there was a new release but exclude-newer is supposed to exclude those), but the error message improved.
PowerPC seems to build without errors if we upgrade `zlib-ng`, but
upgrading `zlib-ng` causes Windows to break
(https://github.com/rust-lang/libz-sys/issues/225), and Cargo doesn't
let us include two different versions.
s390x fails because it can't find `stfle`. It's possible that we could
fix this by upgrading our manylinux version and/or by upgrading GCC
(which may necessitate upgrading our manylinux version), but I don't
know if it's fixable without one of those things? And it's not worth
bumping compatibility for that reason. \cc @konstin
This happened as a result of #10345 and #10362 being merged
independently. The latter used the old `Version::release` API, but the
former changed the `Version::release` API. This PR tweaks the new test
to use the new API (i.e., force a deref on the proxy type).
Basically, this explicitly checks that parsing a `1.2.0` into a
`Version` will roundtrip back to a `1.2.0`, and that parsing a `1.2`
will roundtrip back to a `1.2`.
I think this case is included in the other tests in this module, but
this test makes the behavior more clearly intentional.
Ref #10345
Ref https://github.com/astral-sh/uv/issues/10344
Not a performance optimization, but the function had become too large.
No logic changes, just code moving around. Looks slightly better when
ignoring whitespace changes.
It's still too complex, but I haven't found an apt simplification.
## Summary
This allows, e.g., `uv remove flask[dotenv]` to remove `flask`. Like
`pip install` and `uv pip install`, the content after the package name
has no effect.
Closes https://github.com/astral-sh/uv/issues/9764.
## Summary
https://docs.rs/serde_json/latest/serde_json/fn.from_reader.html
suggests that
> When reading from a source against which short reads are not
efficient, such as a
[File](https://doc.rust-lang.org/std/fs/struct.File.html), you will want
to apply your own buffering because serde_json will not buffer the
input. See
[std::io::BufReader](https://doc.rust-lang.org/std/io/struct.BufReader.html).
Without this buffering, we observe a sequence of single byte reads which
can be quite inefficient depending on the underlying filesystem.
This adds buffering with `std::io::BufReader` to resolve this.
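For illustration, a minimal sketch of the buffered setup (path and value type are placeholders):
```rust
use std::fs::File;
use std::io::BufReader;

// Wrapping the `File` in a `BufReader` batches serde_json's many small reads
// into larger ones instead of issuing single-byte reads against the file.
fn read_json(path: &str) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
    let file = File::open(path)?;
    let value = serde_json::from_reader(BufReader::new(file))?;
    Ok(value)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let value = read_json("metadata.json")?;
    println!("{value}");
    Ok(())
}
```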
## Test Plan
Unit tests cover this code.
## Summary
When `--upgrade` is provided, we should retain already-installed
packages _if_ they're newer than whatever is available from the
registry.
Closes https://github.com/astral-sh/uv/issues/10089.
## Summary
Sort of undecided on this. These are already stored as `dyn Reporter` in
each struct, so we're already using dynamic dispatch in that sense. But
all the methods take `impl Reporter`. This is sometimes nice (the
callsites are simpler?), but it also means that in practice, you often
_can't_ pass `None` to these methods that accept `Option<impl
Reporter>`, because Rust can't infer the generic type.
Anyway, this adds more consistency and simplifies the setup by using
`Arc<dyn Reporter>` everywhere.
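A small sketch of the trade-off, with an illustrative `Reporter` trait rather than uv's real one:
```rust
use std::sync::Arc;

trait Reporter: Send + Sync {
    fn on_progress(&self, message: &str);
}

// With a generic parameter, callers can't pass a bare `None`, because Rust
// has no way to infer the `impl Reporter` type:
//
//     fn run(reporter: Option<impl Reporter>) { ... }
//     run(None); // error: type annotations needed
//
// With dynamic dispatch, `None` needs no annotations:
fn run(reporter: Option<Arc<dyn Reporter>>) {
    if let Some(reporter) = reporter {
        reporter.on_progress("resolving dependencies");
    }
}

fn main() {
    run(None); // compiles fine
}
```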
## Summary
Follow up to #8553
Clarifies that the `exclude-newer` setting must be a full timestamp and
not a date.
## Test Plan
N/A
## Summary
This PR extends #10046 to also handle architectures, which allows us to
correctly include `2.5.1` on the `cu124` index for ARM Linux.
Closes https://github.com/astral-sh/uv/issues/9655.
## Summary
This should address the comment here:
https://github.com/astral-sh/uv/pull/10179#issuecomment-2569189265. We
don't compute implied markers if the marker is already `TRUE`, and we
set it to `TRUE` as soon as we see a source distribution. So if we visit
the source distribution before the wheels, we'll avoid computing these
for any irrelevant distributions.
The uv-performance-memory-allocator crate is currently optimized out, at
least on musl, due to the crate being otherwise unused
(https://github.com/rust-lang/rust/issues/64402), causing the musl build
to not use jemalloc and to be slow.
Command:
```
cargo build --target x86_64-unknown-linux-musl --profile profiling
hyperfine --warmup 1 --runs 10 --prepare "uv venv -p 3.12" "target/x86_64-unknown-linux-musl/profiling/uv pip compile scripts/requirements/airflow.in"
```
Before:
```
Time (mean ± σ): 1.149 s ± 0.013 s [User: 1.498 s, System: 0.433 s]
Range (min … max): 1.131 s … 1.173 s 10 runs
```
After:
```
Time (mean ± σ): 552.6 ms ± 4.7 ms [User: 771.7 ms, System: 197.5 ms]
Range (min … max): 546.4 ms … 561.6 ms 10 runs
```
The `cdylib` was used for the pyo3 bindings to uv-pep508, which don't
exist anymore. It was now creating warnings on musl due to musl
(statically linked) not supporting shared libraries.
## Summary
This follows Ruff's design exactly: you can provide a version specifier
(like `>=0.5`), and we'll enforce it at runtime.
Closes https://github.com/astral-sh/uv/issues/8605.
## Summary
Allows uv to recognize the ARMv5TE platform. This platform is currently
supported on Debian distributions. It is an older 32-bit platform mostly
used in embedded devices, currently in Rust tier 2.5, so it requires
cross-compilation.
Fixes #10157.
## Test Plan
Tested directly on device by applying a slightly different patch to tag
0.5.4 which is used by the current Home Assistant version (2024.12.5).
After the patch Home Assistant is able to recognize the Python venv and
setup its dependencies.
Patched uv was built with
```
$ CARGO_TARGET_ARMV5TE_UNKNOWN_LINUX_GNUEABI_LINKER="/usr/bin/arm-linux-gnueabi-gcc" maturin build --release --target armv5te-unknown-linux-gnueabi --manylinux off
```
The target wheel was then moved on the device and installed via pip
install.
## Summary
Closes #7913 by adding an optional `--description` argument to `uv init`
that fills the description field in the pyproject.toml with the supplied
arg value.
Updated `uv init` docs to describe this new optional argument.
## Test Plan
Added snapshot tests in `uv/crates/uv/tests/it/init.rs` to test this
functionality.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
`uv run --exact` will remove any unnecessary packages prior to running
the given command. (By default, `uv run` uses "inexact" semantics.)
Closes https://github.com/astral-sh/uv/issues/7838.
Signed-off-by: Frost Ming <me@frostming.com>
## Summary
This PR solves an issue on Windows where platform-specific paths are
written to the `RECORD` file when installing, which is inconsistent with
PEP 376, quoting:
> Each record is composed of three elements:
>
> the file’s path
> * a ‘/’-separated path, relative to the base location, if the file is
under the base location.
> * a ‘/’-separated path, relative to the base location, if the file is
under the installation prefix AND if the base location is a subpath of
the installation prefix.
> * an absolute path, using the local platform separator
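For illustration, a minimal sketch of producing such a '/'-separated relative path regardless of the platform separator (the function name is hypothetical):
```rust
use std::path::Path;

// Produce a RECORD-style entry relative to the base location, joined with '/'
// even when the platform's native separator is '\'.
fn record_entry(base: &Path, file: &Path) -> Option<String> {
    let relative = file.strip_prefix(base).ok()?;
    Some(
        relative
            .components()
            .map(|component| component.as_os_str().to_string_lossy().into_owned())
            .collect::<Vec<_>>()
            .join("/"),
    )
}

fn main() {
    let base = Path::new("/venv/lib/site-packages");
    let file = Path::new("/venv/lib/site-packages/pkg/__init__.py");
    assert_eq!(record_entry(base, file).as_deref(), Some("pkg/__init__.py"));
}
```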
## Test Plan
Test case included
---------
Signed-off-by: Frost Ming <me@frostming.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR introduces a `LockTarget`, which is peer to `InstallTarget` and
enables us to capture the common functionality necessary to support
locking.
For now, to minimize changes, only the `Workspace` target is
implemented. In a future PR, I'll add a `Script` target for both locking
and installing.
## Summary
The proximate motivation is that I want to add new variant for scripts,
but `uv-resolver` can't depend on `uv-scripts` without creating a
circular dependency. However, I think this _does_ just make more sense
-- the resolver crate shouldn't be coupled to the various kinds of
workspaces, and these details are mostly encoded in `projects/lock.rs`
and similar files.
## Summary
This is necessary for some future improvements to non-`[project]`
workspaces and PEP 723 scripts. It's not "breaking", but it will
invalidate lockfiles for non-`[project]` workspaces. I think that's
okay, since we consider those legacy right now, and they're really rare.
## Summary
We had the right logic for determining whether the list is already
sorted, but we forgot to apply the same logic when deciding where to
insert the requirement, which made the list _unsorted_ for future
operations.
Closes https://github.com/astral-sh/uv/issues/10076.
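A minimal sketch of the idea, with an illustrative comparator rather than uv's actual sorting rules:
```rust
// When the array is already sorted, pick the insertion point with the same
// ordering that was used to check sortedness, so it stays sorted for future
// edits.
fn insert_sorted(dependencies: &mut Vec<String>, requirement: String) {
    let position = dependencies
        .binary_search_by(|existing| {
            existing.to_lowercase().cmp(&requirement.to_lowercase())
        })
        .unwrap_or_else(|insertion_point| insertion_point);
    dependencies.insert(position, requirement);
}

fn main() {
    let mut deps = vec!["anyio>=4".to_string(), "requests>=2".to_string()];
    insert_sorted(&mut deps, "numpy>=2".to_string());
    assert_eq!(deps, vec!["anyio>=4", "numpy>=2", "requests>=2"]);
}
```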
## Summary
A few places where there are extra conversions to and from string that
seem unnecessary; a few places where we're using `PathBuf` instead of
`PortablePathBuf`.
## Summary
This is yet another variation on
https://github.com/astral-sh/uv/pull/9928, with a few minor changes:
1. It only applies to local versions (e.g., `2.5.1+cpu`).
2. It only _considers_ the non-local version as an alternative (e.g.,
`2.5.1`).
3. It only _considers_ the non-local alternative if it _does_ support
the unsupported platform.
4. Instead of failing, it falls back to using the local version.
So, this is far less strict, and is effectively designed to solve
PyTorch but nothing else. It's also not user-configurable, except by way
of using `environments` to exclude platforms.
## Summary
We had a bug in our handling of escape sequences that caused us to
duplicate backslashes. If you installed repeatedly, we'd keep doubling
them, leading to an exponential blowup.
Closes #10060.
uv assigns priorities to packages by package name, not by virtual package
(`PubGrubPackage`). pubgrub, on the other hand, prioritizes the virtual
packages. When the order of virtual packages changes, uv changes its
resolutions and error messages. This means uv was depending on
implementation details of pubgrub's prioritization caching.
This broke with https://github.com/pubgrub-rs/pubgrub/pull/299, which
added a tiebreaker term that made pubgrub's sorting deterministic given
a deterministic ordering of allocating the packages (which happens the
first time pubgrub sees a package).
The new custom tiebreaker decreases the difference to upstream pubgrub.
Previously, the batch prefetcher was part of the solver loop, used
across forks. This would lead to each preference in a fork being counted
as a tried version, so that after 5 forks with the identical version, we
would start batch prefetching. The inflated counts also showed up in the
reported numbers of tried versions. By tracking the batch prefetcher on
the fork, the numbers are corrected.
An alternative would be tracking the actually tried versions, but that
would mean more overhead in the top level solver loop when the current
heuristic works.
In `ecosystem/transformers`:
```
$ hyperfine --runs 10 --prepare "rm -f uv.lock" "../../target/release/uv lock --exclude-newer 2024-08-08T00:00:00Z" "uv lock --exclude-newer 2024-08-08T00:00:00Z"
Benchmark 1: ../../target/release/uv lock --exclude-newer 2024-08-08T00:00:00Z
Time (mean ± σ): 386.2 ms ± 6.1 ms [User: 396.0 ms, System: 144.5 ms]
Range (min … max): 378.5 ms … 397.9 ms 10 runs
Benchmark 2: uv lock --exclude-newer 2024-08-08T00:00:00Z
Time (mean ± σ): 422.0 ms ± 5.5 ms [User: 459.6 ms, System: 190.3 ms]
Range (min … max): 415.0 ms … 430.5 ms 10 runs
Summary
../../target/release/uv lock --exclude-newer 2024-08-08T00:00:00Z ran
1.09 ± 0.02 times faster than uv lock --exclude-newer 2024-08-08T00:00:00Z
```
Hello! 🙂
## Summary
After submitting retry mechanisms for scripts installation on Windows in
#9543, I noticed that some other functions were using the same
`persist` features of temporary files. This could lead to the same issue
spotted before (temporary lock by AV/EDR software). I validated that it
was possible.
So I updated them to go through the same function on Windows, which uses
the retry mechanisms if needed.
In order to do so, I had to add an async version of
`persist_with_retry`.
There is a little trick to make the borrow-checker happy on line 306;
curious about your opinion on it. This is just a pointer move, so it
should not induce a performance regression if I'm not mistaken.
I also updated them to use `fs_err` on Unix for better error messages.
Also, one of the error messages I introduced was badly formatted; I
fixed it. 🙂
## Test Plan
The changes should be iso functional and covered with the existing
test-suite.
## Summary
With the advent of `--fork-strategy requires-python` (the default), we
actually _want_ to solve higher lower-bound forks before lower
lower-bound forks. The former ensures we get the most compatible
versions, while the latter ensures we get fewer overall versions. These
two strategies match up with `--fork-strategy`, but need to be respected
as such.
Closes https://github.com/astral-sh/uv/issues/9998.
From PEP 517:
```python
def prepare_metadata_for_build_wheel(metadata_directory, config_settings=None):
    ...
```
> Must create a .dist-info directory containing wheel metadata inside
the specified metadata_directory (i.e., creates a directory like
{metadata_directory}/{package}-{version}.dist-info/).
```python
def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    ...
```
> If the build frontend has previously called
prepare_metadata_for_build_wheel and depends on the wheel resulting from
this call to have metadata matching this earlier call, then it should
provide the path to the created .dist-info directory as the
metadata_directory argument.
Notice that the `metadata_directory` is different for the two hooks:
for `prepare_metadata_for_build_wheel` it doesn't contain the
`.dist-info` directory as the final segment, while for `build_wheel` it
does. Previously, the code assumed that the directory didn't contain the
`.dist-info` segment in either case.
Checked with:
```
maturin build
uv init test-uv-build-backend --build-backend uv
cd test-uv-build-backend
uv build --sdist --preview
cd ..
UV_PREVIEW=1 pip install test-uv-build-backend/dist/test_uv_build_backend-0.1.0.tar.gz --no-index --find-links target/wheels/ -v --no-cache-dir
```
Fixes #9969
## Summary
Override XDG_CONFIG_DIRS in show_settings tests, in order to ensure that
they don't pick up system configuration and therefore fail due to value
mismatches. This specifically addresses test failures on Gentoo where a
default `/etc/xdg/uv/uv.toml` is installed, and users are free to modify
it.
Prior to #9914, we used to set `XDG_CONFIG_DIRS` locally before running
the test suite. However, since the test now wipes the environment, the
problem can no longer be resolved downstream.
## Test Plan
`cargo test` on a Gentoo system (with `/etc/xdg/uv/uv.toml` present).
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
A revival of an old idea (#9344) that I have slightly more confidence in
now. I abandoned this idea because (1) it couldn't capture that, e.g.,
`platform_system == 'Windows' and sys_platform == 'foo'` (or some other
unknown value) are disjoint, and (2) I thought that Android returned
`"android"` for one of `sys_platform` or `platform_system`, which
would've made this logic incorrect.
However, it looks like Android... doesn't do that? And the values here
are almost always in a small, known set. So in the end, the tradeoffs
here actually seem pretty good.
Vis-a-vis our current solution, this can (e.g.) _simplify out_
expressions like `sys_platform == 'win32' or platform_system ==
'Windows'`.
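A small sketch of the underlying observation; the value table is illustrative and intentionally incomplete:
```rust
// The common values of `sys_platform` and `platform_system` pair up, so
// mismatched pairs can be treated as disjoint and simplified away.
fn consistent(sys_platform: &str, platform_system: &str) -> bool {
    matches!(
        (sys_platform, platform_system),
        ("win32", "Windows") | ("linux", "Linux") | ("darwin", "Darwin")
    )
}

fn main() {
    // Under this table, `sys_platform == 'win32'` and
    // `platform_system == 'Windows'` imply each other, so the disjunction
    // of the two can simplify to either side.
    assert!(consistent("win32", "Windows"));
    assert!(!consistent("win32", "Linux"));
}
```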
## Summary
Since the `backoff` dependency is only *used* on Windows in practice,
this PR would ensure that it is only *compiled* on Windows, too. This is
helpful because it appears to be unmaintained upstream,
https://github.com/astral-sh/uv/issues/10001, and it would be nice to be
able to [drop it from
Fedora](https://bugzilla.redhat.com/show_bug.cgi?id=2329729).
## Test Plan
```
$ cargo run python install
$ cargo test
```
Closes https://github.com/astral-sh/uv/issues/9891
There are two changes here
1. We now exclude pre-releases (if they are not allowed) from the
available versions set when simplifying ranges; this means the
simplified range reflects the _allowed_ available versions — which is
what we want. We no longer segment ranges into arbitrary-looking
segments.
2. We improve on #9885, expanding the scope to avoid regressions where
we would now otherwise enumerate a bunch of versions
---------
Co-authored-by: konsti <konstin@mailbox.org>
Build failures are one of the most common user-facing failures that
aren't "obvious" errors (such as typos) or resolver errors. Currently,
they show more technical details rather than focusing on this being an
error in a subprocess that is either on the side of the package or -
more likely - in the build environment, e.g. the user needs to install a
dev package or their Python version is incompatible.
The new error message clearly delineates the part that's important (this
is a build backend problem) from the internals (we called this hook) and
is consistent about which part of the dist building stage failed. We
have to calibrate the exact wording of the error message some more. Most
of the implementation is working around the orphan rule, `thiserror`
rules, and trait rules, so it came out more of a refactoring than
intended.
Example:

Enable `lzma-sys/static` through the performance feature not only in uv,
but in uv-dev and uv-bench too, to avoid the system dependency on
`liblzma-dev`.
Ref #9880
In a message like
```
❯ echo "numpy>2" | uv pip compile -p 3.8 -
× No solution found when resolving dependencies:
╰─▶ Because the requested Python version (>=3.8.0) does not satisfy Python>=3.10 and the requested
Python version (>=3.8.0) does not satisfy Python>=3.9,<3.10, we can conclude that Python>=3.9 is incompatible.
And because numpy>=2.0.1,<=2.0.2 depends on Python>=3.9 and only the following versions of numpy are available:
numpy<=2.0.2
```
I'm surprised that `-p 3.8` leads to expressions like `>=3.8.0` (I
understand it, of course, but it's not intuitive) and then all the
_other_ Python versions in the message omit the trailing zero. This
updates the `PythonRequirement` parsing to drop the trailing zeros. It's
easier to do there because the version is not yet abstracted.
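For illustration, a minimal sketch of trimming trailing zero segments (the function is hypothetical):
```rust
// Drop trailing zero segments so that `-p 3.8` is displayed as `>=3.8`
// rather than `>=3.8.0`.
fn trim_release(release: &[u64]) -> &[u64] {
    let end = release
        .iter()
        .rposition(|&segment| segment != 0)
        .map_or(release.len().min(1), |last_nonzero| last_nonzero + 1);
    &release[..end]
}

fn main() {
    assert_eq!(trim_release(&[3, 8, 0]), &[3, 8]);
    assert_eq!(trim_release(&[3, 10]), &[3, 10]);
}
```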
When using a 32-bit OS on 64-bit host, almost all Python std methods
will report a 64-bit aarch64, but we must not install 64-bit executables
since Python is actually 32-bit, identifiable through
`struct.calcsize("P") == 4`.
Porting
4dc334c86d/src/packaging/tags.py (L539-L543)
to uv.
Tested on a raspberry pi 4 with a 64-bit host raspbian and `docker run
-it --rm -v arm32v7/ubuntu` as 32-bit "host".
Fixes#9842
## Summary
The `fork-strategy` default value was overlooked in #9887.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
For publishing, we want to allow all simple `[[tool.uv.index]]` entries,
whether they are explicit or not. We don't allow flat indexes here,
assuming that an index you can upload to has a simple index URL (and
generally doesn't have a flat index URL; at least I don't know of any
case that does).
The `no_index` branch isn't used atm, but I left it in case the method
gathers more users.
Fixes#9919
Background reading: https://github.com/astral-sh/uv/issues/8157
Companion PR: https://github.com/astral-sh/pubgrub/pull/36
Required for test coverage: https://github.com/astral-sh/packse/pull/230
When two packages A and B conflict, we have the option to choose a lower
version of A, or a lower version of B. Currently, we determine this by
the order we saw a package (assuming equal specificity of the
requirement): If we saw A before B, we pin A until all versions of B are
exhausted. This can lead to undesirable outcomes, from cases where it's
just slow (sentry) to other cases without lower bounds where we
backtrack to a very old version of B. This old version may fail to build
(terminating the resolution), or it's a version so old that it doesn't
depend on A (or the shared conflicting package) anymore - but also is
too old for the user's application (fastapi). #8157 collects such cases,
and the `wrong-backtracking` packse scenario contains a minimized
example.
We try to solve this by tracking which packages are "A"s, culprits, and
"B"s, affected, and manually interfering with project selection and
backtracking. Whenever a version we just chose is rejected, we give the
current package a counter for being affected, and the package it
conflicted with a counter for being a culprit. If a package accumulates
more counts than a threshold, we reprioritize: Undecided after the
culprits, after the affected, after packages that only have a single
version (URLs, `==<version>`). We then ask pubgrub to backtrack just
before the culprit. Due to the changed priorities, we now select package
B, the affected, instead of package A, the culprit.
To do this efficiently, we ask pubgrub for the incompatibility that
caused backtracking, or just the last version to be discarded (due to
its dependencies). For backtracking, we use the last incompatibility
from unit propagation as a heuristic. When a version is discarded
because one of its dependencies conflicts with the partial solution, the
incompatibility tells us the package in the partial solution that
conflicted.
We only backtrack once per package, on the first time it passes the
threshold. This prevents backtracking loops in which we make the same
decisions over and over again. But we also changed the priority, so that
we shouldn't take the same path even after the one time we backtrack (it
would defeat the purpose of this change).
There are some parameters that can be tweaked: Currently, the threshold
is set to 5, which feels not too eager with some of the conflicts that
we want to tolerate but also changes strategies quickly. The relative
order of the new priorities can also be changed, as for each (A, B) pair
the priority of B is afterwards lower than that for A. Currently,
culprits capture conflict for the whole package, but we could limit that
to a specific version. We could discard conflict counters after
backtracking instead of keeping them eternally as we do now. Note that
we're always talking about pairs (A, B), but in practice we track
individual packages, not pairs.
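For illustration, a stripped-down sketch of the counting scheme; the types and the reprioritization hook are hypothetical, and only the threshold comes from the description above:
```rust
use std::collections::HashMap;

const CONFLICT_THRESHOLD: u32 = 5;

#[derive(Default)]
struct ConflictTracker {
    affected: HashMap<String, u32>,
    culprit: HashMap<String, u32>,
    deprioritized: Vec<String>,
}

impl ConflictTracker {
    /// Record that a version of `package` was rejected because it conflicted
    /// with `conflicting_package` in the partial solution.
    fn record_conflict(&mut self, package: &str, conflicting_package: &str) {
        *self.affected.entry(package.to_string()).or_default() += 1;
        *self.culprit.entry(conflicting_package.to_string()).or_default() += 1;
        // Once a culprit passes the threshold (the first time only), the real
        // implementation reprioritizes and asks pubgrub to backtrack just
        // before it; here we only mark it.
        if self.culprit[conflicting_package] == CONFLICT_THRESHOLD {
            self.deprioritized.push(conflicting_package.to_string());
        }
    }
}

fn main() {
    let mut tracker = ConflictTracker::default();
    for _ in 0..CONFLICT_THRESHOLD {
        tracker.record_conflict("starlette", "fastapi");
    }
    assert_eq!(tracker.deprioritized, vec!["fastapi"]);
    assert_eq!(tracker.affected["starlette"], CONFLICT_THRESHOLD);
}
```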
A case that we wouldn't capture is when B is only introduced to the
dependency graph after A, but I think that would require a cyclical
dependency for A and B to conflict? There may also be cases where
looking at the last incompatibility is insufficient.
Another example that we can't repair with prioritization is
urllib3/boto3/botocore: We actually have to check all the newer versions
of boto3 and botocore to identify the version that works with the older
urllib3, no shortcuts allowed.
```
urllib3<1.25.4
boto3
```
All examples I tested were cases with two packages where we only had to
switch the order, so I've abstracted them into a single packse case.
This PR changes the resolution for certain paths, and there is a risk
of regressions.
Fixes #8157
---
All tested examples improved.
Input fastapi:
```text
starlette<=0.36.0
fastapi<=0.115.2
```
```
# BEFORE
$ uv pip --no-progress compile -p 3.11 --exclude-newer 2024-10-01 --no-annotate debug/fastapi.txt
annotated-types==0.7.0
anyio==4.6.0
fastapi==0.1.17
idna==3.10
pydantic==2.9.2
pydantic-core==2.23.4
sniffio==1.3.1
starlette==0.36.0
typing-extensions==4.12.2
# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.11 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/fastapi.txt
annotated-types==0.7.0
anyio==4.6.0
fastapi==0.109.1
idna==3.10
pydantic==2.9.2
pydantic-core==2.23.4
sniffio==1.3.1
starlette==0.35.1
typing-extensions==4.12.2
```
Input xarray:
```text
xarray[accel]
```
```
# BEFORE
$ uv pip --no-progress compile -p 3.11 --exclude-newer 2024-10-01 --no-annotate debug/xarray-accel.txt
bottleneck==1.4.0
flox==0.9.13
llvmlite==0.36.0
numba==0.53.1
numbagg==0.8.2
numpy==2.1.1
numpy-groupies==0.11.2
opt-einsum==3.4.0
packaging==24.1
pandas==2.2.3
python-dateutil==2.9.0.post0
pytz==2024.2
scipy==1.14.1
setuptools==75.1.0
six==1.16.0
toolz==0.12.1
tzdata==2024.2
xarray==2024.9.0
# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.11 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/xarray-accel.txt
bottleneck==1.4.0
flox==0.9.13
llvmlite==0.43.0
numba==0.60.0
numbagg==0.8.2
numpy==2.0.2
numpy-groupies==0.11.2
opt-einsum==3.4.0
packaging==24.1
pandas==2.2.3
python-dateutil==2.9.0.post0
pytz==2024.2
scipy==1.14.1
six==1.16.0
toolz==0.12.1
tzdata==2024.2
xarray==2024.9.0
```
Input sentry: The resolution is identical, but arrived at much faster:
main tries 69 versions (sentry-kafka-schemas: 63), PR tries 12 versions
(sentry-kafka-schemas: 6; 5 times conflicting, then once the right
version).
```text
python-rapidjson<=1.20,>=1.4
sentry-kafka-schemas<=0.1.113,>=0.1.50
```
```
# BEFORE
$ uv pip --no-progress compile -p 3.11 --exclude-newer 2024-10-01 --no-annotate debug/sentry.txt
fastjsonschema==2.20.0
msgpack==1.1.0
python-rapidjson==1.8
pyyaml==6.0.2
sentry-kafka-schemas==0.1.111
typing-extensions==4.12.2
# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.11 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/sentry.txt
fastjsonschema==2.20.0
msgpack==1.1.0
python-rapidjson==1.8
pyyaml==6.0.2
sentry-kafka-schemas==0.1.111
typing-extensions==4.12.2
```
Input apache-beam
```text
# Run on Python 3.10
dill<0.3.9,>=0.2.2
apache-beam<=2.49.0
```
```
# BEFORE
$ uv pip --no-progress compile -p 3.10 --exclude-newer 2024-10-01 --no-annotate debug/apache-beam.txt
× Failed to download and build `apache-beam==2.0.0`
╰─▶ Build backend failed to determine requirements with `build_wheel()` (exit status: 1)
# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.10 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/apache-beam.txt
apache-beam==2.49.0
certifi==2024.8.30
charset-normalizer==3.3.2
cloudpickle==2.2.1
crcmod==1.7
dill==0.3.1.1
dnspython==2.6.1
docopt==0.6.2
fastavro==1.9.7
fasteners==0.19
grpcio==1.66.2
hdfs==2.7.3
httplib2==0.22.0
idna==3.10
numpy==1.24.4
objsize==0.6.1
orjson==3.10.7
proto-plus==1.24.0
protobuf==4.23.4
pyarrow==11.0.0
pydot==1.4.2
pymongo==4.10.0
pyparsing==3.1.4
python-dateutil==2.9.0.post0
pytz==2024.2
regex==2024.9.11
requests==2.32.3
six==1.16.0
typing-extensions==4.12.2
urllib3==2.2.3
zstandard==0.23.0
```
## Summary
This now looks like:
```
error: Failed to parse: `pyproject.toml`
Caused by: TOML parse error at line 1, column 1
|
1 | [project]
| ^^^^^^^^^
`pyproject.toml` is using the `[project]` table, but the required `project.version` field is neither set nor present in the `project.dynamic` list
```
Closes https://github.com/astral-sh/uv/issues/9910.
## Summary
If the shell is currently in a directory that no longer exists, uv will
panic from any command. Panicking is a confusing behavior to those
unfamiliar with Rust and can sometimes make it hard to determine the
true issue.
Closes #9875
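For illustration, a minimal sketch of the non-panicking path (message and exit code are illustrative):
```rust
// Report the missing working directory instead of panicking.
fn current_dir_or_exit() -> std::path::PathBuf {
    match std::env::current_dir() {
        Ok(dir) => dir,
        Err(err) => {
            eprintln!("error: failed to determine the current directory: {err}");
            std::process::exit(2);
        }
    }
}

fn main() {
    let cwd = current_dir_or_exit();
    println!("running from {}", cwd.display());
}
```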
## Test Plan
The reproduction steps in the issue report were followed and uv no
longer panics. `uv version` can still successfully print the version if
the directory does exist.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR makes the behavior in https://github.com/astral-sh/uv/pull/9827
the default: we try to select the latest supported package version for
each supported Python version, but we still optimize for choosing fewer
versions when stratifying by platform.
However, you can opt out with `--fork-strategy fewest`.
Closes https://github.com/astral-sh/uv/issues/7190.
## Summary
This PR addresses a significant limitation in the resolver whereby we
avoid choosing the latest versions of packages when the user supports a
wider range.
For example, with NumPy, the latest versions only support Python 3.10
and later. If you lock a project with `requires-python = ">=3.8"`, we
pick the last NumPy version that supported Python 3.8, and use that for
_all_ Python versions. So you get `1.24.4` for all versions, rather than
`2.2.0`. And we'll never upgrade you unless you bump your
`requires-python`. (Even worse, those versions don't have wheels for
Python 3.12, etc., so you end up building from source.)
(As-is, this is intentional. We optimize for minimizing the number of
selected versions, and the current logic does that well!)
Instead, we now recognize when a version has an elevated
`requires-python` specifier and fork. This is a new fork point, since we
need to fork once we have the package metadata, as opposed to when we
see the dependencies.
In this iteration, I've made this behavior the default. I'm sort of
undecided on whether I want to push on that... Previously, I'd suggested
making it opt-in via a setting
(https://github.com/astral-sh/uv/pull/8686).
Closes https://github.com/astral-sh/uv/issues/8492.
## Summary
This PR reimplements
[`sysconfigpatcher`](https://github.com/bluss/sysconfigpatcher) in Rust
and applies it to our Python installations at install-time, ensuring
that the `sysconfig` data is more likely to be correct.
For now, we only rewrite prefixes (i.e., any path that starts with
`/install` gets rewritten to the correct absolute path for the current
machine).
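A minimal sketch of that rewrite, with an illustrative function name and paths:
```rust
// sysconfig values that begin with the build-time placeholder `/install` are
// rebased onto the actual installation directory on the current machine.
fn rewrite_prefix(value: &str, install_root: &str) -> String {
    match value.strip_prefix("/install") {
        Some(rest) => format!("{install_root}{rest}"),
        None => value.to_owned(),
    }
}

fn main() {
    let rewritten = rewrite_prefix(
        "/install/lib/python3.12",
        "/home/user/.local/share/uv/python/cpython-3.12.6-linux-x86_64-gnu",
    );
    assert!(rewritten.ends_with("/lib/python3.12"));
    assert!(!rewritten.starts_with("/install"));
}
```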
Unlike `sysconfigpatcher`, this PR does not yet do any of the following:
- Patch `pkginfo` files.
- Change `clang` references to `cc`.
A few things that we should do as follow-ups, in my opinion:
1. Rewrite
[`AR`](c1ebf8ab92/src/sysconfigpatcher.py (L61)).
2. Remove `-isysroot`, which we already do for newer builds.
## Summary
Very tricky problem whereby `workspace_root.join(path)` returns the
workspace root with a trailing slash if `path` is empty... This caused
us to accidentally _include_ excluded members during workspace
discovery, since (e.g.) `packages/seeds` doesn't match
`packages/seeds/`.
Closes
https://github.com/astral-sh/uv/issues/9832#issuecomment-2539121761.
## Summary
In CPython, it appears that `/` is not considered as a valid path in
`search_up`:
```c
static PyObject *
getpath_dirname(PyObject *Py_UNUSED(self), PyObject *args)
{
    PyObject *path;
    if (!PyArg_ParseTuple(args, "U", &path)) {
        return NULL;
    }
    Py_ssize_t end = PyUnicode_GET_LENGTH(path);
    Py_ssize_t pos = PyUnicode_FindChar(path, SEP, 0, end, -1);
    if (pos < 0) {
        return PyUnicode_FromStringAndSize(NULL, 0);
    }
    return PyUnicode_Substring(path, 0, pos);
}
```
```python
def search_up(prefix, *landmarks, test=isfile):
    while prefix:
        if any(test(joinpath(prefix, f)) for f in landmarks):
            return prefix
        prefix = dirname(prefix)
```
Closes https://github.com/astral-sh/uv/issues/9818.
Make the local packse workflow work again:
```
# In packse:
uv run --extra index --extra serve packse serve --no-hash scenarios &
# In uv:
UV_TEST_INDEX_URL="http://localhost:3141/simple/" ./scripts/scenarios/generate.py
```
Bugs fixed:
* The default scenario pattern didn't match anything.
* The snapshot update test command was wrong since the test
centralization
* Snapshot update failures would not be reported
I somehow got in a state where we'd fail to install with
```
error: Failed to install cpython-3.13.0-macos-aarch64-none
Caused by: Executable already exists at `/Users/zb/.local/bin/python3` but is not managed by uv; use `--force` to replace it
error: Failed to install cpython-3.13.0-macos-aarch64-none
Caused by: Executable already exists at `/Users/zb/.local/bin/python` but is not managed by uv; use `--force` to replace it
```
but `python` / `python3` _were_ managed by uv, they just were linked to
an installation that was deleted.
This updates the logic to replace executables that are broken
symlinks. We apply this to broken links regardless of whether or not we
think the target is managed by uv.
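For illustration, a minimal sketch of detecting such a broken link with the standard library:
```rust
use std::path::Path;

// `symlink_metadata` succeeds for a symlink even when its target is gone,
// while `exists()` follows the link, so "is a symlink but doesn't resolve"
// identifies a broken executable link that is safe to replace.
fn is_broken_symlink(path: &Path) -> bool {
    path.symlink_metadata()
        .map(|metadata| metadata.file_type().is_symlink())
        .unwrap_or(false)
        && !path.exists()
}

fn main() {
    let candidate = Path::new("/usr/local/bin/python3");
    println!("broken symlink: {}", is_broken_symlink(candidate));
}
```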
## Summary
This has been bothering me a bit: `uv pip install "foo @
https://github.com/user/foo"` fails, telling you that it doesn't end in
a supported extension. But we should be able to tell you that it looks
like a Git repo.
When publishing, we currently ask the user to set `--publish-url` to the
upload URL and `--check-url` to the simple index URL, or the equivalent
configuration keys. But that's redundant with the `[[tool.uv.index]]`
declaration. Instead, we extend `[[tool.uv.index]]` with a `publish-url`
entry and allow passing `uv publish --index <name>`.
`uv publish --index <name>` requires the `pyproject.toml` to be present
when publishing, unlike using `--publish-url ... --check-url ...` which
can be used e.g. in CI without a checkout step. `--index` also always
uses the check URL feature to aid upload consistency.
The documentation tries to explain both approaches together, which
overlap for the check URL feature.
Fixes#8864
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
This PR improves our "don't fully resolve symlinks" behavior for
`python-build-standalone` builds based on learnings from
https://github.com/indygreg/python-build-standalone/issues/380#issuecomment-2526575235.
Specifically, we can now robustly detect whether a target executable
will lead to a valid `prefix` or not, and iteratively resolve symlinks
until we find a valid target executable.
## Test Plan
### Direct symlink to `python`
Correctly resolves to the symlink target, rather than the symlink
itself.
```
❯ ln -s /Users/crmarsh/.local/share/uv/python/cpython-3.12.6-macos-aarch64-none/bin/python foo
❯ cargo run venv --python ./foo
❯ cat .venv/pyvenv.cfg
home = /Users/crmarsh/.local/share/uv/python/cpython-3.12.6-macos-aarch64-none/bin
implementation = CPython
uv = 0.5.7
version_info = 3.12.6
include-system-site-packages = false
prompt = uv
❯ .venv/bin/python -c "import sys"
```
### Symlink to the Python installation
Correctly does _not_ resolve the symlink.
```
❯ ln -s /Users/crmarsh/.local/share/uv/python/cpython-3.12.6-macos-aarch64-none bar
❯ cargo run venv --python ./bar
❯ cat .venv/pyvenv.cfg
home = /Users/crmarsh/workspace/uv/bar/bin
implementation = CPython
uv = 0.5.7
version_info = 3.12.6
include-system-site-packages = false
prompt = uv
❯ .venv/bin/python -c "import sys"
```
### Direct symlink to `python` in a symlinked Python installation
Correctly resolves the direct symlink, but not the symlink of the Python
installation.
```
❯ ln -s bar/bin/python baz
❯ cargo run venv --python ./baz
❯ cat .venv/pyvenv.cfg
home = /Users/crmarsh/workspace/uv/bar/bin
implementation = CPython
uv = 0.5.7
version_info = 3.12.6
include-system-site-packages = false
prompt = uv
❯ .venv/bin/python -c "import sys"
```
Addresses #6805
## Summary
This PR adds a `--gui-script` flag to `uv run` that allows running
Python scripts with `pythonw.exe` on Windows, regardless of file
extension. This solves the issue where users need to maintain duplicate
`.py` and `.pyw` files to run the same script with and without a console
window.
The implementation follows the pattern established by the existing
`--script` flag, but uses `pythonw.exe` instead of `python.exe` on
Windows. On non-Windows platforms, the flag is present but returns an
error indicating it's Windows-only functionality.
Changes:
- Added `--gui-script` flag (Windows-only)
- Added Windows test to verify GUI script behavior
- Added non-Windows test to verify proper error message
- Updated CLI documentation
## Test Plan
The changes are tested through:
1. New Windows-specific test that verifies:
- Script runs successfully with `pythonw.exe` when using `--gui-script`
- Console output is suppressed in GUI mode but visible in regular mode
- Same script can be run both ways without modification
2. New non-Windows test that verifies:
- Appropriate error message when `--gui-script` is used on non-Windows
platforms
3. Documentation updates to clearly indicate Windows-only functionality
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Supersedes https://github.com/astral-sh/uv/pull/8517 with an alternative
approach of making all the variants available instead of replacing the
x86_64 (v1) variant with x86_64_v2.
Doesn't add automatic inference of the supported instructions, but that
should be doable per @charliermarsh's comment there. Going to do it as a
follow-up since this has been pretty time consuming.
e.g.,
```
❯ cargo run -q -- python install cpython-3.12.8-linux-x86_64_v3-gnu
Installed Python 3.12.8 in 2.72s
+ cpython-3.12.8-linux-x86_64_v3-gnu
```
Co-authored-by: j178 <10510431+j178@users.noreply.github.com>
```
❯ uv python install foo
error: Cannot download managed Python for request: directory `foo`
❯ cargo run -q -- python install foo
error: `foo` is not a valid Python download request; see `uv python help` for supported formats and `uv python list --only-downloads` for available versions
```
## Summary
This optimization isn't quite right, because we can successfully extract
metadata without having to build from source. (The builder itself will
error if we reach the point at which we need to build, but builds are
disabled.)
Closes https://github.com/astral-sh/uv/issues/9776.
## Summary
If you look at Ed's reply
[here](https://github.com/toml-rs/toml/issues/818#issuecomment-2532626305),
it sounds like we're being too heavy-handed in applying `.fmt()`. I
think I added this to handle an issue with inline tables whereby we were
inserting a space after a trailing comma? So now I'm just applying
`.fmt()` to inline tables, which don't allow comments between elements
anyway.
Closes https://github.com/astral-sh/uv/issues/9758.
Since we don't (currently) include conflict markers with our
`resolution-markers` in the lock file, it's possible that we end up
with duplicate markers. This happens when the resolver creates more
than one fork with the same PEP 508 markers but different conflict
markers, _and_ where those PEP 508 markers don't simplify to "always
true" after accounting for `requires-python`.
This change should be a strict improvement on the status quo. We aren't
removing any information. It is possible that we should be writing
conflict markers here (like we do for dependency edges), but I haven't
been able to come up with a case or think through a scenario where they
are necessary.
Fixes#9296
## Summary
Fix#8075.
Invalid discovered environments in the working directory should be
filtered out.
## Test Plan
- Test python_find
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
This PR adds `--install-dir` argument for the following commands:
- `uv python install`
- `uv python uninstall`
The `UV_PYTHON_INSTALL_DIR` env variable can be used to set it
(previously it was also used internally).
Any more commands we would want to add this to?
## Test Plan
For now just manual test (works on my machine hehe)
```
❯ ./target/debug/uv python install --install-dir /tmp/pythons 3.8.12
Searching for Python versions matching: Python 3.8.12
Installed Python 3.8.12 in 4.31s
+ cpython-3.8.12-linux-x86_64-gnu
❯ /tmp/pythons/cpython-3.8.12-linux-x86_64-gnu/bin/python --help
usage: /tmp/pythons/cpython-3.8.12-linux-x86_64-gnu/bin/python [option] ... [-c cmd | -m mod | file | -] [arg] ...
```
Open to add some tests after the initial feedback.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
The resolver methods are already too large and complex, especially
`choose_version*`, so I wanted to shrink and simplify them a bit before
adding new methods to them.
I've split `MetadataResponse` into three variants: success, non-fatal
error (reported through pubgrub), fatal error (reported as error trace).
The resulting non-fatal `MetadataUnavailable` type is equivalent to the
`IncompletePackage` type, so they are now merged. (`UnavailableVersion`
is a bit different since, besides the extra `IncompatibleDist` variant,
it has no error source attached.) This revealed that the missing-metadata
variant was unused, so I removed it.
Tagging as error messages for the logging format changes.
This PR adds a notion of "conflict markers" to the lock file as an
attempt to address #9289. The idea is to encode a new kind of boolean
expression indicating how to choose dependencies based on which extras
are activated.
As an example of what conflict markers look like, consider one of the
cases
brought up in #9289, where `anyio` had unconditional dependencies on
two different versions of `idna`. Now, those are gated by markers, like
this:
```toml
[[package]]
name = "anyio"
version = "4.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "idna", version = "3.5", source = { registry = "https://pypi.org/simple" }, marker = "extra == 'extra-7-project-foo'" },
{ name = "idna", version = "3.6", source = { registry = "https://pypi.org/simple" }, marker = "extra == 'extra-7-project-bar' or extra != 'extra-7-project-foo'" },
{ name = "sniffio" },
]
```
The odd extra values like `extra-7-project-foo` are an encoding of not
just the conflicting extra (`foo`) but also the package it's declared
for (`project`). We need both bits of information because different
packages may have the same extra name, even if they are completely
unrelated. The `extra-` part is a prefix to distinguish it from groups
(which, in this case, would be encoded as `group-7-project-foo` if `foo`
were a dependency group). And the `7` part indicates the length of the
package name which makes it possible to parse out the package and extra
name from this encoding. (We don't actually utilize that property, but
it seems like good sense to do it in case we do need to extract
information from these markers.)
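To make the encoding concrete, here's a purely illustrative decoder (not uv's code; the function name is made up) showing how the length prefix keeps the split unambiguous:
```python
def decode_conflict_extra(value: str) -> tuple[str, str, str]:
    """Decode e.g. 'extra-7-project-foo' into (kind, package, name).

    The number is the length of the package name, which keeps the split
    unambiguous even when the package or extra name contains hyphens.
    """
    kind, rest = value.split("-", 1)      # 'extra' or 'group'
    length, rest = rest.split("-", 1)     # '7'
    package = rest[: int(length)]         # 'project'
    name = rest[int(length) + 1 :]        # skip the separating '-'
    return kind, package, name


assert decode_conflict_extra("extra-7-project-foo") == ("extra", "project", "foo")
assert decode_conflict_extra("group-7-project-foo") == ("group", "project", "foo")
```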
While this preserves PEP 508 compatibility at a surface level, it does
require utilizing this encoding scheme in order
to evaluate them when they're present (which only occurs when
conflicting extras/groups are declared).
My sense is that the most complex part of this change is not just adding
conflict markers, but their simplification. I tried to address this in
the code comments and commit messages.
Reviewers should look at this commit-by-commit.
Fixes#9289, Fixes#9546, Fixes#9640, Fixes#9622, Fixes#9498, Fixes
#9701, Fixes#9734
## Summary
So the error here is:
```rust
ExtractError("cpython-3.11.11%2B20241206-aarch64-apple-darwin-install_only_stripped.tar.gz", Io(Custom { kind: UnexpectedEof, error: TarError { desc: "failed to unpack `/Users/crmarsh/.local/share/uv/python/.cache/.tmpkqFzqE/python/lib/libpython3.11.dylib`", io: Custom { kind: UnexpectedEof, error: TarError { desc: "failed to unpack `python/lib/libpython3.11.dylib` into `/Users/crmarsh/.local/share/uv/python/.cache/.tmpkqFzqE/python/lib/libpython3.11.dylib`", io: Custom { kind: UnexpectedEof, error: "unexpected end of file" } } } } }))
```
This isn't a Reqwest error, so we miss it in
`is_extended_transient_error`.
We could add `TarError` or `ExtractError` here, but... should we? This
PR just extends it to any error that has an IO source. I don't see much
of a downside.
Closes https://github.com/astral-sh/uv/issues/9747.
## Test Plan
First, ran: `uv run ./scripts/create-python-mirror.py --name cpython
--arch aarch64 --os darwin`.
Then, dropped this into `./scripts/mirror/server.py`:
```python
import os
import random
from http.server import SimpleHTTPRequestHandler, HTTPServer


class GlitchyStaticServer(SimpleHTTPRequestHandler):
    def do_GET(self):
        """Handle GET request."""
        file_path = self.translate_path(self.path)

        if not os.path.exists(file_path):
            self.send_error(404, "File not found")
            return

        try:
            with open(file_path, 'rb') as f:
                file_content = f.read()

            # Introduce an "unexpected end of file" glitch randomly
            if random.random() < 0.75:  # 75% chance of glitch
                glitch_point = random.randint(1, len(file_content) - 1)
                file_content = file_content[:glitch_point]

            self.send_response(200)
            self.send_header("Content-type", self.guess_type(file_path))
            self.send_header("Content-Length", len(file_content))
            self.end_headers()
            self.wfile.write(file_content)
        except Exception as e:
            self.send_error(500, f"Internal Server Error: {e}")


def run(server_class=HTTPServer, handler_class=GlitchyStaticServer, port=8080):
    """Run the server."""
    server_address = ('', port)
    httpd = server_class(server_address, handler_class)
    print(f"Serving on port {port} with glitchy behavior")
    httpd.serve_forever()


if __name__ == "__main__":
    run()
```
Then ran `python server.py` from that directory.
From there, ran `UV_PYTHON_INSTALL_MIRROR="http://localhost:8080" cargo
run python install 3.11 --reinstall --verbose` to reliably test retries.
## Summary
I'm not sure why this hasn't come up before... But it looks like this
method is only looking at `python.exe` and `python3.exe`? From the user
screenshots, the `python3.12.exe` and `python3.13.exe` are also present,
though.
Closes https://github.com/astral-sh/uv/issues/9667.
I don't see any real reason to forbid executing these in a
cross-platform way
```
❯ echo "print('hello world')" > test.pyw
❯ uv run test.pyw
error: Failed to spawn: `test.pyw`
Caused by: No such file or directory (os error 2)
❯ cargo run -q -- run test.pyw
hello world
```
Closes https://github.com/astral-sh/uv/issues/9757
## Summary
On `main`, if you ask for a source but name a missing subdirectory, you
just get:
```
{source} does not appear to be a Python project, as neither `pyproject.toml` nor `setup.py` are present in the directory
```
But, in reality, the directory doesn't exist at all.
## Summary
We were reading an `.egg-info` file from the root directory that didn't
apply to the root member -- it was for another workspace member. I think
this is driven by some idiosyncrasies in the `setuptools` setup for
that workspace member, but it's still wrong to fail.
This PR adds a few measures to fix this:
1. We validate the `egg-info` filename against the package metadata.
2. We skip, rather than fail, if we see incorrect metadata in an
`egg-info` file or similar. This is an optimization anyway; worst case,
we try to build the package, then fail there.
Closes https://github.com/astral-sh/uv/issues/9743.
The `SysVersion` registry entry may or may not include the patch
version, so if we encounter a registry entry without a patch version, we
must not assume that the patch version is 0.
```
Name Property
---- --------
3.9 DisplayName : Python 3.9 (64-bit)
SupportUrl : https://www.python.org/
Version : 3.9.13
SysVersion : 3.9
SysArchitecture : 64bit
Hive: HKEY_CURRENT_USER\Software\Python\PythonCore\3.9
```
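A minimal sketch of the parsing rule (illustrative only): a missing patch component must stay unknown rather than defaulting to 0:
```python
def parse_sys_version(value: str) -> tuple[int, int, int | None]:
    parts = value.split(".")
    major, minor = int(parts[0]), int(parts[1])
    patch = int(parts[2]) if len(parts) > 2 else None  # unknown, *not* 0
    return major, minor, patch


assert parse_sys_version("3.9") == (3, 9, None)
assert parse_sys_version("3.9.13") == (3, 9, 13)
```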
Confirmed the fix manually.
Fixes#9668
## Summary
This PR allows users to specify a source both in `project.dependencies`
("production") and `tool.uv.sources` ("development"). It's not intended
as a holistic fix for "production" vs. "development" dependencies, but
in some cases this is good enough with `--no-sources`, and I don't see a
great reason for enforcing it right now.
Closes: https://github.com/astral-sh/uv/issues/9682
Ref: https://github.com/astral-sh/uv/issues/7945 (but I'll leave this
open?)
## Summary
Before:
```console
$ cargo run -- --version
uv 0.5.7 (b17902da0 2024-12-09)
```
After:
```console
$ cargo run -- --version
uv 0.5.7+14 (7cd0ab77a 2024-12-09)
```
Currently, `cargo run -- --version` does not include the number of
commits since the last tag, because `cargo-dist` creates non-annotated tags,
and
`git log -1 --date=short --abbrev=9 --format='%H %h %cd %(describe)'`
uses only annotated tags by default.
```console
$ git log -1 --date=short --abbrev=9 --format='%H %h %cd %(describe)'
7cd0ab77a97cd0ab77a 2024-12-09
```
To include these tags, use `git log -1 --date=short --abbrev=9
--format='%H %h %cd %(describe:tags)'`, which will display:
```console
$ git log -1 --date=short --abbrev=9 --format='%H %h %cd %(describe:tags)'
7cd0ab77a97cd0ab77a 2024-12-09 0.5.7-14-g7cd0ab77a
```
## Summary
Sort of ridiculous, but today this passes, when it should fail:
```toml
[project]
name = "foo"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13.0"
dependencies = []
[project.optional-dependencies]
async = [
"foo[async]==0.2.0",
]
```
## Summary
In the end, the problem is that `relative_to` has incorrect behavior if
either path is non-normalized (e.g., `foo/bar/../project`). So I've fixed
that method, but we _also_ now normalize `project` upfront, which _also_
fixes the issue.
Closes https://github.com/astral-sh/uv/issues/9692.
## Summary
Small thing I noticed while working on another change: if we error when
extracting `requires-dist`, we go through the full metadata build. We
need to distinguish between fatal errors and "the data isn't static".
## Summary
This is an alternative to #9344. If accepted, I need to audit the
codebase and call sites to apply it everywhere, but the basic idea is:
rather than encoding mutually-incompatible pairs of markers in the
representation itself, we have an additional method on `MarkerTree` that
expands the false-y definition to take into account assumptions about
which markers can be true alongside others. We then check if the
current marker implies that at least one of them is true.
So, for example, we know that `sys_platform == 'win32'` and
`platform_system == 'Darwin'` are mutually exclusive. When given a
marker expression like `python_version >= '3.7'`, we test if
`python_version >= '3.7'` and `sys_platform != 'win32' or
platform_system != 'Darwin'` are disjoint, i.e., if the following can't
be satisfied:
```
python_version >= '3.7' and (sys_platform != 'win32' or platform_system != 'Darwin')
```
Since, if this can't be satisfied, it implies that the left-hand
expression requires `sys_platform == 'win32'` and `platform_system ==
'Darwin'` to be true at the same time.
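As a purely illustrative check, a brute-force enumeration over a tiny domain shows that the conjunction above is satisfiable, so the implication does not hold in this example:
```python
from itertools import product


# Brute-force satisfiability of
#   python_version >= '3.7' and (sys_platform != 'win32' or platform_system != 'Darwin')
# over a small domain; one satisfying assignment is enough to show that the
# implication does not hold.
def satisfiable() -> bool:
    for version, sys_platform, platform_system in product(
        [(3, 6), (3, 7), (3, 12)],
        ["win32", "linux", "darwin"],
        ["Windows", "Linux", "Darwin"],
    ):
        if version >= (3, 7) and (sys_platform != "win32" or platform_system != "Darwin"):
            return True
    return False


print(satisfiable())  # True, e.g. Python 3.12 on linux
```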
I think the main downsides here are:
1. We can't _simplify_ markers based on these implications. So we'd
still write markers like `sys_platform == 'win32' and platform_system !=
'Darwin'`, even though we know the latter expression is redundant.
2. It might be expensive? I'm not sure. I don't think we test for
falseness _that_ often though.
Closes#7760.
Closes#9275.
Instead of modifying the error to replace a dummy derivation chain from
construction with the real one, build the error with the real derivation
chain directly.
This came up when trying to improve the build error reporting.
Introduces `DistErrorKind` to avoid error variants for each case that
are only different in one line of the message.
Add a preview option `uv init --build-backend uv --preview` that uses
the uv build backend when generating the project. The uv build backend
is in preview, so the option is also guarded by preview and hidden from
the help message and docs.
For https://github.com/astral-sh/uv/issues/3957#issuecomment-2518757563
In https://github.com/astral-sh/uv/issues/8155#issuecomment-2508969900,
resolution lowest was complaining about missing lower bounds for a
package, even though the package had a URL, too:
```
uv pip install dist/pymatgen-2024.10.3.tar.gz pymatgen[ci,optional] --resolution=lowest
```
The error was raised from `pymatgen[ci,optional]`, because we were
looking at it before looking at the "URL"
`dist/pymatgen-2024.10.3.tar.gz`.
I've also added constraints and overrides to the bounds lookup, since
they are missing from the dependency graph.
Fixes#8155 (again)
## Summary
Closes#9643.
I modified the `commit` fn so this applies to `uv pip compile --output-file`
too. But I can move it to the export module if we want to restrict this
to `uv export` only.
## Test Plan
`cargo test`
This is like #9556, but at the level of all other builds, including the
resolver and installer. Going through PEP 517 to build a package is
slow, so when building a package with the uv build backend, we can call
into the uv build backend directly instead: No temporary virtual env, no
temp venv sync, no python subprocess calls, no uv subprocess calls.
This fast path is gated through preview. Since the uv wheel is not
available at test time, I've manually confirmed the feature by comparing
`uv venv && cargo run pip install . -v --preview --reinstall .` and `uv
venv && cargo run pip install . -v --reinstall .`. When hacking the
preview check so that the Python uv build backend and the direct build also
work without the setting (wheel built with `maturin build --profile
profiling`), we can see the performance difference:
```
$ hyperfine --prepare "uv venv" --warmup 3 \
"UV_PREVIEW=1 target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --preview" \
"target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --find-links target/wheels/"
Benchmark 1: UV_PREVIEW=1 target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --preview
Time (mean ± σ): 33.1 ms ± 2.5 ms [User: 25.7 ms, System: 13.0 ms]
Range (min … max): 29.8 ms … 47.3 ms 73 runs
Benchmark 2: target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --find-links target/wheels/
Time (mean ± σ): 115.1 ms ± 4.3 ms [User: 54.0 ms, System: 27.0 ms]
Range (min … max): 109.2 ms … 123.8 ms 25 runs
Summary
UV_PREVIEW=1 target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --preview ran
3.48 ± 0.29 times faster than target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --find-links target/wheels/
```
Do we need a global option to disable the fast path? There is one for
`uv build` because `--force-pep517` moves `uv build` much closer to a
`pip install` from source that a user of a library would experience (See
discussion at #9610), but uv overall doesn't really make guarantees
around the build env of dependencies, so I consider the direct build a
valid option.
Best reviewed commit-by-commit, only the last commit is the actual
implementation, while the preview mode introduction is just a
refactoring touching too many files.
When building a wheel from a source distribution or both a source
distribution and a wheel, the versions in their filenames must be the
same.
By inspecting the filenames, we also assert that the filenames from the
build are valid (we don't enforce normalization though, just that uv can
parse them).
Note that we're not yet checking that the `pyproject.toml` version, if
declared, and the METADATA version match as well.
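For illustration, a minimal sketch of this kind of consistency check using the `packaging` library (uv's version is implemented in Rust; the function name here is made up):
```python
from packaging.utils import parse_sdist_filename, parse_wheel_filename


def check_build_outputs(sdist: str, wheel: str) -> None:
    # Parsing also validates that the filenames are well-formed.
    sdist_name, sdist_version = parse_sdist_filename(sdist)
    wheel_name, wheel_version, _build, _tags = parse_wheel_filename(wheel)
    if (sdist_name, sdist_version) != (wheel_name, wheel_version):
        raise ValueError(
            f"source distribution {sdist_name} {sdist_version} does not match "
            f"wheel {wheel_name} {wheel_version}"
        )


check_build_outputs("foo-1.2.3.tar.gz", "foo-1.2.3-py3-none-any.whl")  # ok
```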
When running `lock_requires_python_exact`, we would download CPython
3.12.0 each time. By instead downloading CPython 3.13.0 ahead of time
and passing it in, we speed the test up and avoid timeouts. Locally in
pycharm, the test goes from 6.5s to 500ms.
When encountering `dynamic = ["version"]` in the pyproject.toml of a
source dist, we can ignore that and treat it as a statically known
metadata distribution, since the filename tells us the version and that
version must not change on build.
This fixed locking PyGObject 3.50.0 from `pygobject-3.50.0.tar.gz`
(minimized):
```toml
[project]
name = "PyGObject"
description = "Python bindings for GObject Introspection"
requires-python = ">=3.9, <4.0"
dependencies = [
"pycairo>=1.16"
]
dynamic = ["version"]
```
Afterwards, `uv add --no-sync toga` passes on Ubuntu 24.04 without the
pygobject build deps, when previously it needed `{ name = "pygobject",
version = "3.50.0", requires-dist = [], requires-python = ">=3.9" }`.
I've added a check that source distribution versions are respected after
build.
Fixes#9548
Add the `uv build --list`, a "subcommand" to list the files that would
be included when building a distribution. It does not build the
distribution, except when a source dist is required for source dist ->
wheel. This is an important debugging tool for the include and exclude
options: Did I actually include the files I wanted, or am I shipping a
broken distribution? Are there any temporary files I still need to
exclude?
Cargo offers this as `cargo package --list`.
`--list` is preview-exclusive, since it requires the fast path, which I
also put into preview.
I'll fix the error handling in a follow-up.
Tagging as enhancement because it changes the stable output slightly
(two lines instead of one).
CC @charliermarsh for uv-wide consistency in the stdout/stderr handling.
## Summary
This change introduces the `UV_NO_INSTALLER_METADATA` environment
variable
as a way to opt out of the extra installer metadata files that `uv` is
creating.
This is important to achieve reproducible builds in distribution
packaging, allowing to replace usage of
[installer](https://pypi.org/project/installer) with `uv pip install`.
At the time of writing these files are:
- `uv_cache.json`
  Contains timestamps, which are non-reproducible.
  These hashes also leak into the `RECORD` file.
- `direct_url.json`
  Contains the path to the installed wheel.
  While not non-reproducible, it's not required for distribution packaging.
- `INSTALLER`
  Again, not non-reproducible, but of no value in distribution packaging.
## Test Plan
Automated test added.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Today, our dependency group implementation is a little awkward... For
each package `P`, we check if `P` contains dependencies for each enabled
group, then add a dependency on `P` with the group enabled. There are a
few issues here:
1. It's sort of backwards... We add a dependency from the base package
`P` to `P` with the group enabled. Then `P` with the group enabled adds
a dependency on the base package.
2. We can't, e.g., enable different groups for different packages. (We
don't have a way for users to specify this on the CLI, but there's no
reason that it should be _impossible_ in the resolver.)
3. It's inconsistent with how extras work, which leads to confusing
differences in the resolver.
Instead, our internal requirement type can now include dependency
groups, which makes dependency groups look much, much more like extras
in the resolver.
Using the directory writer trait, we can collect the files instead of
writing them to a real sink. This builds up to a `uv build --list`
similar to `cargo package --list`. It is not connected to the cli yet.
## Summary
Discovered while working on https://github.com/astral-sh/uv/issues/9516.
In the linked repo, the root uses a `../dependency` path for the
workspace member, which we weren't normalizing.
## Summary
If a Git repository uses a `path` dependency (rather than a
`workspace`), we need to expand the path to make it relative to the Git
root.
Closes https://github.com/astral-sh/uv/issues/9516.
## Summary
Include the `git_member` when fetching metadata from cache.
h/t to @PhilipVinc for the suggested fix
Resolves#8887
## Test Plan
Pending
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This pull request is best viewed with [whitespace
hidden](https://github.com/astral-sh/uv/pull/8650/files?diff=unified&w=1)
Adds a `--default` flag to `uv python install` in preview. This includes
a `python` and `python{major}` executable in addition to the
`python{major}.{minor}` executable. We will replace uv-managed
executables, but externally managed executables require the `--force`
flag to overwrite.
If you run `uv python install` (without arguments), we include the
`--default` flag implicitly to populate `python` and `python3` for the
"default" install version.
In the future, we should add a warning if the installed executable isn't
at the front of the PATH.
## Summary
Fixes#9027
Minor enhancement on top of #8531 that makes the CLI parameter
`--check-url` also available as the setting `check-url` in configuration
files.
## Test Plan
Updates existing tests to take the new setting into account.
Within publish command testing I didn't see existing tests covering
settings from toml files (instead of from CLI params), so I didn't add
anything of that sort.
## Summary
On Windows, non-virtual environments put the `python.exe` in the
top-level of the installation directory, rather than in `Scripts`. This
PR adds those paths to `PATH` in `uv run` and `uv tool run`.
Closes
https://github.com/astral-sh/uv/issues/9574#issuecomment-2512217110.
This _partially_ unwinds the optimization in #9540 by adding back the
base package dependency as a sibling to the extra package dependency
in some cases. Specifically, this occurs when _any_ of the extras are
declared as conflicting.
This is believed to be necessary (until another method is found) to
handle the forking logic based on conflicts. Namely, the forking logic
depends on the base and extra packages being sibling dependencies. If
only the extra is present, then it won't be included in the fork that
excludes all conflicting extras. And that means the base package won't
either, even though it should be included in that fork in some cases. If
the base package dependency is deferred, then it will never be reached.
This also adds another test and updates the snapshots that would have
caught the regression in #9540 if the conflict tests had been enabled.
Embarrassingly, PR #9474 moved the conflicting extras/groups tests into
their own module, but never actually included the module in
`it/main.rs`.
This adds `lock_conflict` to `main.rs` and fixes the fallout.
For listing files, we first use a directory writer for source dists,
which we will use for collecting the filenames instead of writing the
archive in the future. Splitting `lib.rs` of uv-build-backend into modules
is deferred to the next PR.
No logic changes, only restructuring.
Best reviewed commit-by-commit
Going through PEP 517 to build a package is slow, so when building a
package with the uv build backend, we can call into the uv build backend
directly. This is the basis for the `uv build --list`.
This does not enable the fast path for general source dependencies.
There is a possible difference in execution if the latest uv version is
newer than the one currently running: The PEP 517 path would use the
latest version, while the fast path uses the current version.
Please review commit-by-commit
### Benchmark
`built_with_uv`, using the fast path:
```
$ hyperfine "~/projects/uv/target/profiling/uv build"
Time (mean ± σ): 9.2 ms ± 1.1 ms [User: 4.6 ms, System: 4.6 ms]
Range (min … max): 6.4 ms … 12.7 ms 290 runs
```
`hatchling_editable`, with hatchling being optimized for fast startup
times:
```
$ hyperfine "~/projects/uv/target/profiling/uv build"
Time (mean ± σ): 270.5 ms ± 18.4 ms [User: 230.8 ms, System: 44.5 ms]
Range (min … max): 250.7 ms … 298.4 ms 10 runs
```
In the course of working on #9289, I've had to devise
some additions to our markers. While we are still staying
strictly compatible with the PEP 508 format, we will be
abusing the `extra` expression to carry a lot more
information.
Specifically, we want the following additional
operations:
* Simplify `extra != 'foo'`
* Remove all extra expressions
* Remove everything except extra expressions
My work on #9289 requires all of these (which will be
in a future PR).
## Summary
This proposes adding the command line option `uv pip uninstall --dry-run
...`, complementing the existing `uv pip install --dry-run ...` added
for #1244 in #1436.
This option does not exist in PyPA's `pip uninstall`; if adopted, it
would be unique to `uv pip`. The code should be considered a PoC; it is
baby's first Rust.
The initial motivation was while investigating
https://github.com/moreati/ansible-uv/issues/2 - to allow Ansible module
`moreati.uv.pip` to work with`state: absent` in "check_mode" (Ansible's
equivalent of a dry run), without requiring `packaging` or `setuptools`.
## Test Plan
One new unit test has been added. I pledge to add more if the feature is
desired/accepted
Example usage
```console
➜ uv git:(pip-uninstall--dry-run) rm -rf .venv
➜ uv git:(pip-uninstall--dry-run) ./target/debug/uv venv
Using CPython 3.13.0
Creating virtual environment at: .venv
Activate with: source .venv/bin/activate
➜ uv git:(pip-uninstall--dry-run) ./target/debug/uv pip install httpx
Resolved 7 packages in 178ms
Prepared 5 packages in 60ms
Installed 7 packages in 15ms
+ anyio==4.6.2.post1
+ certifi==2024.8.30
+ h11==0.14.0
+ httpcore==1.0.7
+ httpx==0.28.0
+ idna==3.10
+ sniffio==1.3.1
➜ uv git:(pip-uninstall--dry-run) ./target/debug/uv pip uninstall --dry-run httpx
Would uninstall 1 package
- httpx==0.28.0
➜ uv git:(pip-uninstall--dry-run) ./target/debug/uv pip list
Package Version
-------- -----------
anyio 4.6.2.post1
certifi 2024.8.30
h11 0.14.0
httpcore 1.0.7
httpx 0.28.0
idna 3.10
sniffio 1.3.1
```
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Fixes#9531
## Context
While working with [uv](https://github.com/astral-sh/uv), I encountered
issues with a python dependency, [httpx](https://www.python-httpx.org/)
unable to be installed because of an **os error 5 (permission denied)**.
The error occurs when we try to persist a **.exe file** from a temporary
folder into a persistent one.
I could only reproduce the issue on an enterprise **Windows** Jenkins runner.
On my virtual machines, I don't have any issues, so I think this is most
probably coming from the system configuration. This Windows runner
**contains an AV/EDR**, and the fact that the file lock occurred only
once, for an executable, makes me think that it's most probably the cause.
While doing some research and speaking with some colleagues (hi
@vmeurisse), it seems that the issue is a very recurrent one on Windows.
In the JavaScript ecosystem, there is this package, created by @isaacs,
the `npm` inventor: https://www.npmjs.com/package/graceful-fs, used
inside `npm`, allowing its package installations to be more resilient to
filesystem errors:
> The improvements are meant to normalize behavior across different
platforms and environments, and to make filesystem access more resilient
to errors.
One of its core feature is this one:
> On Windows, it retries renaming a file for up to one second if EACCESS
or EPERM error occurs, likely because antivirus software has locked the
directory.
So I tried to implement the same algorithm on `uv`, **and it fixed my
issue**! I can finally install `httpx`.
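A minimal sketch of that retry loop in Python (illustrative only; uv's version is written in Rust in `uv-fs`):
```python
import os
import time


def rename_with_retry(src: str, dst: str, timeout: float = 1.0) -> None:
    """Retry a rename for up to `timeout` seconds on Windows permission errors."""
    deadline = time.monotonic() + timeout
    delay = 0.001
    while True:
        try:
            os.replace(src, dst)
            return
        except PermissionError:  # EACCES / EPERM, e.g. an AV scanner holding a lock
            if time.monotonic() >= deadline:
                raise
            time.sleep(delay)
            delay = min(delay * 2, 0.1)  # exponential backoff, capped
```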
Then, [as I mentioned in this
issue](https://github.com/astral-sh/uv/issues/9531#issuecomment-2508981316),
I saw that you already implemented exactly the same algorithm in an
asynchronous function for renames 😄 (see `crates/uv-fs/src/lib.rs`, line 221, at commit `22fd9f7ff1`).
## Summary of changes
- I added a similar function for `persist` (it was not easy to get the
blessing of the borrow checker 😄)
- I added a `sync` variant of `rename_with_retry`
- I edited `install_script` to use the function including retries on
Windows
Let me know if I should change anything 🙂
Thanks!!
## Test Plan
This pull request should be functionally identical, so I think it should
be covered by existing tests in case of regression.
All tests are still passing on my side.
Also, I of course validated that my Windows machines (Windows 10 & Windows
11) containing AV/EDR software are now able to install the `httpx.exe`
script.
However, if you think any additional test is needed, feel free to tell
me!
When looking at the build frontend code, I noticed that we always pass
every single field of the shared state to the build dispatch:
```rust
let build_dispatch = BuildDispatch::new(
...
&state.index,
&state.git,
&state.capabilities,
&state.in_flight,
...
);
```
We can abstract this by moving `SharedState` into the build dispatch.
The `BuildDispatch` then has only immutable fields and the
`SharedState`. Since the `SharedState` is all `Arc`s, we can clone it
freely.
## Summary
After #9524, I noticed two other dependencies were misaligned.
Since the previous PR has been merged, I was thinking I could submit
those two misses.
Of course, open to any comments/decline!
Thanks!! 🙂
## Test Plan
All units tests are still passing on my side. Let's see with the
pull-request CI again 😄
## Summary
Previously, when we encountered `foo[bar]`, we'd add a dependency on
`PubGrubPackage::Package` for `foo`, and then `PubGrubPackage::Extra`
for `foo[bar]`.
Later, when we ask for the dependencies of the `PubGrubPackage::Extra`,
we add `PubGrubPackage::Package` for `foo`, and
`PubGrubPackage::Package` for `foo[bar]`. This is an intentional
strategy because it ensures that PubGrub "knows" that these have to be
solved to the same version as early as possible.
It turns out that the first part here ("add a dependency on
`PubGrubPackage::Package` for `foo`") is suboptimal, because it means
PubGrub might try to solve _just_ `foo` without realizing that it also
has to accommodate all the constraints from the extra.
Instead, we now add _just_ `PubGrubPackage::Extra` for `foo[bar]`, and
defer adding the base package. It looks like this leads to a far more
efficient solve for Airflow.
When adding excludes, we usually don't want to include python cache
files. On the contrary, I haven't seen any project in my ecosystem
research that would want any of `__pycache__`, `*.pyc`, `*.pyo` to be
included. By moving them behind a `default-excludes` toggle, they are
always active even when defining custom excludes, but can be deactivated
if the user so chooses.
With includes and excludes being this small again, we can roll back the
include-exclude anchored difference to always using anchored globs (i.e.
you would need to use `**/build-*.h` below).
A pyproject.toml with custom settings with the change applied:
```toml
[project]
name = "foo"
version = "0.1.0"
readme = "README.md"
license-files = ["LICENSE*", "third-party-licenses/*"]
[tool.uv.build-backend]
# A file we need for the source dist -> wheel step, but not in the wheel itself (currently unused)
source-include = ["data/build-script.py"]
# A temporary or generated file we want to ignore
source-exclude = ["/src/foo/not-packaged.txt"]
# Headers are build-only
wheel-exclude = ["build-*.h"]
[tool.uv.build-backend.data]
scripts = "scripts"
data = "assets"
headers = "header"
[build-system]
requires = ["uv>=0.5.5,<0.6"]
build-backend = "uv"
```
When building the source distribution, we always need to include
`pyproject.toml` and the module, when building the wheel, we always
include the module but nothing else at top level. Since we only allow a
single module per wheel, that means that there are no specific wheel
includes. This means we have source includes, source excludes, wheel
excludes, but no wheel includes: This is defined by the module root,
plus the metadata files and data directories separately.
Extra source dist includes are currently unused (they can't end up in
the wheel currently), but it makes sense to model them here, they will
be needed for any sort of procedural build step.
This results in the following fields being relevant for inclusions and
exclusion:
* `pyproject.toml` (always included in the source dist)
* project.readme: PEP 621
* project.license-files: PEP 639
* module_root: `Path`
* source_include: `Vec<Glob>`
* source_exclude: `Vec<Glob>`
* wheel_exclude: `Vec<Glob>`
* data: `Map<KnownDataName, Path>`
An opinionated choice is that wheel excludes always contain the
source excludes: Otherwise you could have a path A in the source tree
that gets included when building the wheel directly from the source
tree, but not when going through the source dist as intermediary,
because A is in source excludes, but not in the wheel excludes. This has
been a source of errors previously.
In the process, I fixed a bug where we would skip directories and only
include the files, and were missing license files due to absolute globs.
## Summary
When we serialize and deserialize the lockfile, we remove the conflict
markers. So in the linked case, the edges for the `tqdm` entries are
like:
```
complexified_marker: UniversalMarker {
pep508_marker: python_full_version >= '3.9.0',
conflict_marker: true,
},
```
However... when we evaluate in-memory, the conflict markers are still
there...
```
complexified_marker: UniversalMarker {
pep508_marker: true,
conflict_marker: extra == 't1' and extra != 't2',
},
```
So if `uv run` creates the lockfile, we evaluate this as `false`.
We should make this consistent, and I expect @BurntSushi is aware. But
for now, it's reasonable / correct to pass the extra when evaluating at
this specific point, since we know the dependency was enabled by the
marker.
Closes
https://github.com/astral-sh/uv/issues/9533#issuecomment-2508908591.
When changing something about the settings,
`invalid_pyproject_toml_option_unknown_field` would fail unexpectedly
because the exact list of possible options had changed. Since we're
already testing this list in the settings-related test
`resolve_config_file`, I'm stubbing the exact output here.
## Summary
While working on potential bug fixes with temporary files on Windows (I
think I am currently encountering the same issue as #2810),
I noticed that sub-workspaces did not all have the same `tempfile`
version, and they were not relying on the cargo root project dependency.
I don't know at all if it was done on purpose or not.
(I also wanted to override the root dependency with a local source, but
it was not possible because the sub-workspaces don't rely on the same one.)
The root lockfile already pinned `3.14.0`. Some sub-workspaces
were depending on `3.12.0`, some others on `3.14.0`. So I
updated the root `Cargo.toml` to `3.14.0`.
Feel free to decline if it was done on purpose! No worries at all
🙂
Thanks!
## Test Plan
All units tests are still passing on my side. Let's see with the
pull-request CI 😄
## Summary
A lot of good new lints, and most importantly, error stabilizations. I
tried to find a few usages of the new stabilizations, but I'm sure there
are more.
IIUC, this _does_ require bumping our MSRV.
## Summary
When you pass a system drive to `Path::join`, Rust doesn't insert a
backslash between the drive and the path itself, so our lookups for
system configuration were failing.
Closes https://github.com/astral-sh/uv/issues/9416.
When trying to upload without a password but with the keyring, check
that the keyring has a password for the upload URL and username and warn
if it doesn't.
Fixes#8781
There are already a fair number and I'm planning to add more. And
`lock.rs` is already quite big.
There aren't any new tests or other changes here. This is just moving
tests and trimming down the function names to avoid redundancy in the
names.
## Summary
With `uv pip install --target` and `--prefix`, we (1) should allow
managed Pythons, and (2) should show a different message that's focused
on the interpreter we selected, rather than the environment.
## Summary
We still only respect overrides and constraints in the workspace root --
which we may want to change -- but overrides and constraints are now
correctly lowered.
Closes https://github.com/astral-sh/uv/issues/8148.
We were previously not uploading all metadata in the formdata of an
upload request in the legacy api. Notably, we were missing the PEP 639
license-files field.
I had to switch to pdm due to https://github.com/pypa/hatch/issues/1828
When performing a noop sync, we don't need the rayon threadpool, yet we
pay for its initialization.
By making the initialization lazy, we avoid that cost.
This code runs every time before user code in `uv run`.
This means that before calling rayon, one now needs to call
`LazyLock::force(&RAYON_INITIALIZE);`.
Performance mode (CPU 0 is a perf core):
```
$ taskset -c 0 hyperfine --warmup 5 -N "/home/konsti/projects/uv/uv-main sync" "/home/konsti/projects/uv/target/profiling/uv sync"
Benchmark 1: /home/konsti/projects/uv/uv-main sync
Time (mean ± σ): 4.5 ms ± 0.1 ms [User: 2.7 ms, System: 1.8 ms]
Range (min … max): 4.4 ms … 6.4 ms 640 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: /home/konsti/projects/uv/target/profiling/uv sync
Time (mean ± σ): 4.4 ms ± 0.1 ms [User: 2.7 ms, System: 1.6 ms]
Range (min … max): 4.3 ms … 5.0 ms 679 runs
Summary
/home/konsti/projects/uv/target/profiling/uv sync ran
1.03 ± 0.04 times faster than /home/konsti/projects/uv/uv-main sync
```
Power saver mode:
```
$ hyperfine --warmup 5 -N "/home/konsti/projects/uv/uv-main sync" "/home/konsti/projects/uv/target/profiling/uv sync"
Benchmark 1: /home/konsti/projects/uv/uv-main sync
Time (mean ± σ): 28.1 ms ± 1.2 ms [User: 15.5 ms, System: 20.3 ms]
Range (min … max): 25.7 ms … 31.9 ms 102 runs
Benchmark 2: /home/konsti/projects/uv/target/profiling/uv sync
Time (mean ± σ): 24.0 ms ± 1.2 ms [User: 13.8 ms, System: 9.9 ms]
Range (min … max): 22.2 ms … 28.2 ms 122 runs
Summary
/home/konsti/projects/uv/target/profiling/uv sync ran
1.17 ± 0.08 times faster than /home/konsti/projects/uv/uv-main sync
```
## Summary
We never construct these -- they should be impossible, since we always
translate to `python_full_version`. This PR encodes that impossibility
in the types.
## Summary
This PR modifies our lowered representation such that any deprecated
aliases are treated as "the same" marker in the algebra.
So, for example, we now recognize that this is impossible, despite the
marker names being different:
```
typing-extensions ; platform.python_implementation == 'CPython' and python_implementation != 'CPython'
```
Similarly, we now recognize that this is just `sys_platform == 'win32'`,
despite the presence of both markers:
```
anyio ; sys_platform == 'win32' and sys.platform == 'win32'
```
## Summary
I want to move towards a more normalized marker representation within
the marker tree, which means that the things we warn against will
disappear by the time we get to evaluation. I think it makes more sense
to show these warnings when we create the tree, rather than when we
evaluate it.
As discussed in https://github.com/astral-sh/uv/issues/9423, it's
confusing that we do not allow `uv sync` just because the `.venv`
directory _exists_. This change matches `uv venv`.
## Summary
Resolves#9333
This pull request introduces support for the `--no-extra` command-line
flag and the corresponding `no-extra` UV setting.
### Behavior
- When `--all-extras` is supplied, the specified extras in `--no-extra`
will be excluded from the installation.
- If `--all-extras` is not supplied, `--no-extra` has no effect and is
safely ignored.
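A tiny sketch of that selection rule (names are illustrative, not uv's internal API):
```python
def selected_extras(
    all_extras: bool,
    requested: set[str],
    no_extra: set[str],
    available: set[str],
) -> set[str]:
    if all_extras:
        return available - no_extra  # --no-extra subtracts from --all-extras
    return requested  # without --all-extras, --no-extra is ignored


assert selected_extras(True, set(), {"gpu"}, {"cpu", "gpu"}) == {"cpu"}
assert selected_extras(False, {"cpu"}, {"gpu"}, {"cpu", "gpu"}) == {"cpu"}
```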
## Test Plan
Since `ExtrasSpecification::from_args` and
`ExtrasSpecification::extra_names` are the most important parts in the
implementation, I added the following tests in the
`uv-configuration/src/extras.rs` module:
- **`test_no_extra_full`**: Verifies behavior when `no_extra` includes
the entire list of extras.
- **`test_no_extra_partial`**: Tests partial exclusion, ensuring only
specified extras are excluded.
- **`test_no_extra_empty`**: Confirms that no extras are excluded if
`no_extra` is empty.
- **`test_no_extra_excessive`**: Ensures the implementation ignores
`no_extra` values that don't match any available extras.
- **`test_no_extra_without_all_extras`**: Validates that `no_extra` has
no effect when `--all-extras` is not supplied.
- **`test_no_extra_without_package_extras`**: Confirms correct behavior
when no extras are available in the package.
- **`test_no_extra_duplicates`**: Verifies that duplicate entries in
`pkg_extras` or `no_extra` do not cause errors.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This adds a `--prune` flag to the `export` command to correspond with
the `--prune` flag of the `tree` command.
The purpose is for generating a `requirements.txt` that omits a package
and all of that package's unique dependencies. This is useful for cases
where the project has a dependency on a common core package, but where
that package does not need to be installed in the target environment.
For example, a pyspark job needs spark for development, but when
installing into a cluster that already has pyspark installed, it is
desirable to omit pyspark's whole dependency tree so that only the
unique dependencies that your job needs get installed, and do not risk
breaking the pyspark dependencies with something incompatible.
Dev groups cannot always cover this case because there are other
projects where this common dependency occurs as a transitive. One
example is Airflow providers, which include Airflow itself as a
dependency, but it is unnecessary and undesirable to include Airflow's
dependency tree in the `requirements.txt` for your DAGs.
Partly related to #7214, though I'm not sure it covers the ask in that
one of having this functionality extend to the project's actual
published metadata.
## Test Plan
An integration test was added, and some manual testing. Let me know if
more would be better.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This change is correct because disjointness checks now
incorporate conflicts. In this case, there are actually
four forks. Two of them correspond to
`sys_platform == 'darwin'` and `sys_platform != 'darwin'`,
but neither of those contain `jinja2==3.1.3`. Instead,
they contain other versions of `jinja2` linked to other
extras.
If we ever add conflicts to our `resolution-markers` in
the lock file, then those forks should show up here
again. (Because, of course, some forks do contain
`jinja2==3.1.3` here.)
When we generate conflict markers for each resolution after the
resolver runs, it turns out that generating them just from exclusion
rules is not sufficient.
For example, if `foo` and `bar` are declared as conflicting extras, then
we end up with the following forks:
A: extra != 'foo'
B: extra != 'bar'
C: extra != 'foo' and extra != 'bar'
Now let's take an example where these forks don't share the same version
for all packages. Consider a case where `idna==3.9` is in forks A and C,
but where `idna==3.10` is in fork B. If we combine the markers in forks
A and C through disjunction, we get the following:
idna==3.9: extra != 'foo' or (extra != 'foo' and extra != 'bar')
idna==3.10: extra != 'bar'
Which simplifies to:
idna==3.9: extra != 'foo'
idna==3.10: extra != 'bar'
But these are clearly not disjoint. Both dependencies could be selected,
for example, when neither `foo` nor `bar` are active. We can remedy this
by keeping around the inclusion rules for each fork:
A: extra != 'foo' and extra == 'bar'
B: extra != 'bar' and extra == 'foo'
C: extra != 'foo' and extra != 'bar'
And so for `idna`, we have:
idna==3.9: (extra != 'foo' and extra == 'bar') or (extra != 'foo' and extra != 'bar')
idna==3.10: extra != 'bar' and extra == 'foo'
Which simplifies to:
idna==3.9: extra != 'foo'
idna==3.10: extra != 'bar' and extra == 'foo'
And these *are* properly disjoint. There is no way for them both to be
active. This also correctly accounts for fork C where neither `foo` nor
`bar` are active, and yet, `idna==3.9` is still enabled but `idna==3.10`
is not. (In the [motivating example], this comes from `baz` being enabled.)
That is, this captures the idea that for `idna==3.10` to be installed,
there must actually be a specific extra that is enabled. That's what
makes it disjoint from `idna==3.9`.
We aren't quite done yet, because this does add *too many* conflict
markers to dependency edges that don't need it. In the next commit,
we'll add in our world knowledge to simplify these conflict markers.
[motivating example]: https://github.com/astral-sh/uv/issues/9289
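To make the boolean reasoning above concrete, a small brute-force check (illustrative only) confirms both the simplification for `idna==3.9` and the disjointness of the two markers:
```python
from itertools import product

# Forks A + C combined vs. fork B, for the `idna` example above.
idna_39 = lambda foo, bar: (not foo and bar) or (not foo and not bar)
idna_310 = lambda foo, bar: not bar and foo

for foo, bar in product([False, True], repeat=2):
    # The disjunction simplifies to `extra != 'foo'`...
    assert idna_39(foo, bar) == (not foo)
    # ...and is disjoint from the marker for idna==3.10.
    assert not (idna_39(foo, bar) and idna_310(foo, bar))
```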
I think Ibraheem had this routine at some point in the past, but
we ended up dropping it because we didn't have a use for it. Well,
now we do!
It turns out that when we generate "conflict markers," they don't
actually take "world knowledge" into account. In particular, there
is "world knowledge" that a particular set of extras cannot be
enabled simultaneously. This in turn allows us to simplify most
conflict markers. If we didn't do this, it's likely that lock files
would become littered with conflict markers whenever any conflicts
are declared.
This is somewhat (although not completely) analogous to how we
"simplify" markers with respect to `requires-python`. That is,
`requires-python` reflects world knowledge that enables markers
to be written more simply than they otherwise would be without
world knowledge.
Previously, we had copied the behavior of `try_markers` to return
`None` in the case where the marker was always true. I believe this
was done because it somewhat implies that there is no forking
happening. But I find this somewhat strange personally, and instead
flipped this around so that it still returns a marker in that case.
The one call site that is impacted by this is the resolution
graph construction. If we left it as-is, it would end up with
a list of one marker that is always true in some cases. And this
in turn results in writing an empty `resolution-markers` to the
lock file. Probably the output logic should be tweaked instead,
but we leave it alone for now.
## Summary
If we're installing with `--target` or `--prefix`, then it's not a
mutable operation, so we should be allowed to discover system Pythons. I
suspect this was hard to special-case in the past but is now trivial
after @zanieb's various refactors.
Closes https://github.com/astral-sh/uv/issues/9356.
## Summary
Aligns the description of `UV_NO_PROGRESS` with other env vars that also
have a related flag.
`--no-progress` is a "global option" and exists in every command.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
This effectively combines a PEP 508 marker and an as-yet-specified
marker for expressing conflicts among extras and groups.
This just defines the type and threads it through most of the various
points in the code that previously used `MarkerTree` only. Some parts
do still continue to use `MarkerTree` specifically, e.g., when dealing
with non-universal resolution or exporting to `requirements.txt`.
This doesn't change any behavior.
This doesn't change any behavior. But this makes it a bit
clearer in the code that `uv pip compile` does not support
specifying conflicts. Indeed, we always pass an empty set of
conflicts to the resolver.
This is because there is no way to encode the conditional
logic of conflicts into the `requirements.txt` format. This
is unlike markers.
This doesn't change any behavior. My guess is that this code was
a casualty of refactoring. But basically, it was doing redundant
case analysis and iterating over all resolutions (even though it's
in the branch that can only occur when there is only one
resolution).
This filtering is now redundant, since forking now avoids these
degenerate cases by construction.
The main change to forking that enables skipping over "always
false" forks is that forking now starts with the parent's markers
instead of starting with MarkerTree::TRUE and trying to combine
them with the parent's markers later. This in turn leads to
skipping over anything that "can't" happen when combined with the
parents markers. So we never hit the case of generating a fork
that, when combined with the parent's markers, results in a
marker that is always false. We just avoid it in the first place.
This test demonstrates the difference between `extra != "foo"` and
`sys_platform != "foo"`.
I wrote this test down to test the extra simplification logic was
correct. And I also wanted to test whether we could somehow hackily
encode `group` (as opposed to just `extra`) logic into marker
expressions by reusing another field. But I don't think we can.
After #9005, `_get_glibc_version()` can return either (0, 0) if the glibc
string is missing or (-1, -1) if the string can't be parsed. There was
no need to change the missing-string case to (0, 0).
Also, move the indentation back to make it easier to understand.
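For orientation, a rough sketch of that distinction (purely illustrative, not uv's actual interpreter-probe code): a missing glibc string maps to `(0, 0)`, an unparsable one to `(-1, -1)`:
```python
import os


def get_glibc_version() -> tuple[int, int]:
    try:
        version_str = os.confstr("CS_GNU_LIBC_VERSION")  # e.g. "glibc 2.35"
    except (AttributeError, OSError, ValueError):
        version_str = None
    if not version_str:
        return (0, 0)  # no glibc at all (e.g. musl or Android/bionic)
    try:
        _, version = version_str.split()
        major, minor = version.split(".")[:2]
        return (int(major), int(minor))
    except ValueError:
        return (-1, -1)  # a glibc string is present but can't be parsed
```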
Currently, user display returns an empty path if the current dir is the
directory we are printing. This leads to odd messages such as
> Including project.license-files at `` with `LICENSE*`
or
> Not a license files match: ``
Instead, we display the current path as a dot.
## Summary
`--upgrade` isn't useful, since it's the default. So it's now hidden,
but continues to warn if you enable it.
`--no-upgrade` isn't useful, since it panics. So it's now removed
entirely. This isn't breaking, since it already didn't work.
`--upgrade-package` actually _is_ useful, because it turns out it allows
things like: `uv tool upgrade babel --upgrade-package "babel<0.2.14"` to
constrain the upgrade.
I left this in place but hid it... I think we should provide a better
workflow for this, like `uv tool upgrade "babel<0.2.14"`? It's strange
to specify the package twice, and that `uv tool upgrade` has an
`--upgrade-package` flag.
Closes https://github.com/astral-sh/uv/issues/9317.
## Summary
On Termux, uv currently fails to find any interpreter because it can't
find a glibc version, because there isn't one. But the Python
interpreter is still functional nonetheless.
So, when glibc cannot be found, simply return 0 for the version numbers
and mark the interpreter as being incompatible with manylinux.
I really don't know if this is the right way to address this, but I can
attest that manual testing shows uv appears to be fully functional, at
least for pip and virtualenvs.
Fixes#7373
## Test Plan
I tried running the test suite, and after some tweaks, a good portion of
the test suite passes as well. A significant number of tests fail, but
this appears to be due to minor differences in output, like warnings
about hard links not working (hard links are completely disallowed on
Android), differences in the number of files removed, etc. The test
suite seems to be very sensitive to minor variations in output.
This PR contains three smaller improvements:
* Improve the include/exclude logging. We're still showing the current
directory as empty backticks, not sure what to do about that
* Add early stopping to license file globbing, so we don't traverse the
whole directory recursively when license files can only be in few places
* Support explicit wheel excludes. These are still not entirely right,
but at least we're correctly excluding compiled python files now. The
next step is to make sure that the wheel excludes contain all pattern
from source dist excludes, to make sure source tree -> wheel can't have
more files than source tree -> source dist -> wheel.
## Summary
The issue here is fairly complex. Consider the following:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = []
[project.optional-dependencies]
cpu = [
"torch>=2.5.1",
"torchvision>=0.20.1",
]
cu124 = [
"torch>=2.5.1",
"torchvision>=0.20.1",
]
[tool.uv]
conflicts = [
[
{ extra = "cpu" },
{ extra = "cu124" },
],
]
[tool.uv.sources]
torch = [
{ index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
]
torchvision = [
{ index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
]
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```
When solving this project, we first pick a PyTorch version from PyPI, to
solve the `cu124` extra, selecting `2.5.1`.
Later, we try to solve the `cpu` extra. In solving that extra, we look
at the PyTorch CPU index. Ideally, we'd select `2.5.1+cpu`... But
`2.5.1` is already a preference. So we choose that.
Now, we only respect preferences for explicit indexes if they came from
the same index.
Closes https://github.com/astral-sh/uv/issues/9295.
## Summary
The new `--index` and `--default-index` flags are being omitted in the
`uv pip compile` header, unintentionally.
Closes https://github.com/astral-sh/uv/issues/9287.
## Summary
I find myself messing this up with `--build-constraint` vs.
`--build-constraints`, and it turns out our own CLI isn't fully
consistent here either.
When building only a single crate in the workspace to run its tests, we
often recompile a lot of other, unrelated crates. Whenever cargo has a
different set of crate features, it needs to recompile. By moving some
features (non-exhaustive for now) to the workspace level, we always
activate them and avoid recompiling.
The cargo docs don't match cargo's actual behavior around default features, so I
filed that upstream and left most `default-features` mismatches in place:
https://github.com/rust-lang/cargo/issues/14841.
Reference script:
```python
import tomllib
from collections import defaultdict
from pathlib import Path
uv = Path("/home/konsti/projects/uv")
skip_list = ["uv-trampoline", "uv-dev", "uv-performance-flate2-backend", "uv-performance-memory-allocator"]
root_feature_map = defaultdict(set)
root_default_features = defaultdict(bool)
cargo_toml = tomllib.loads(uv.joinpath("Cargo.toml").read_text())
for dep, declaration in cargo_toml["workspace"]["dependencies"].items():
    root_default_features[dep] = root_default_features[dep] or declaration.get("default-features", True)
    root_feature_map[dep].update(declaration.get("features", []))

feature_map = defaultdict(set)
default_features = defaultdict(bool)
for crate in uv.joinpath("crates").iterdir():
    if crate.name in skip_list:
        continue
    if not crate.joinpath("Cargo.toml").is_file():
        continue
    cargo_toml = tomllib.loads(crate.joinpath("Cargo.toml").read_text())
    for dep, declaration in cargo_toml.get("dependencies", {}).items():
        # If any item uses default features, they are used everywhere
        default_features[dep] = default_features[dep] or declaration.get("default-features", True)
        feature_map[dep].update(declaration.get("features", []))

for dep, features in sorted(feature_map.items()):
    features = features - root_feature_map.get(dep, set())
    if not features and default_features[dep] == root_default_features[dep]:
        continue
    print(dep, default_features[dep], sorted(features))
```
## Summary
Just trying to unify the retry handling, as in
https://github.com/astral-sh/uv/pull/9274 and elsewhere. Right now, the
publish handler doesn't use any backoff and always retries three times
regardless of settings.
## Summary
This was an oversight in the initial implementation. We shouldn't
validate sources for the `build-system.requires` field, since extras and
groups can _never_ be active.
Closes https://github.com/astral-sh/uv/issues/9259.
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
In uv-globfilter, use the workspace `fs-err` in `dev-dependencies`.
This fixes an unnecessary dev-dependency on `fs-err` 2.x even after the
workspace fs-err was updated to 3.x in
https://github.com/astral-sh/uv/pull/8625.
The `Cargo.lock` file still has `fs-err v2.11.0` after this PR, but it
is via `tracing-durations-export v0.3.0` rather than directly required
by any `uv` crate.
## Test Plan
<!-- How was it tested? -->
```
$ cd crates/uv-globfilter/
$ cargo test
```
## Summary
It turns out that `WrappedReqwestError` skips the `reqwest::Error`
itself in order to hack the display. This PR adds it to the list of
variants we check when retrying transient errors.
Closes https://github.com/astral-sh/uv/issues/9246.
## Test Plan
Patched `reqwest` locally to return an error in `bytes()`. Verified that
it was _not_ caught prior to this PR, but was caught afterwards.
- Adds a collapsible section for the project concept
- Splits the project concept document into several child documents.
- Moves the workspace and dependencies documents to under the project
section
- Adds a mkdocs plugin for redirects, so links to the moved documents
still work
I attempted to make the minimum required changes to the contents of the
documents here. There is a lot of room for improvement on the content of
each new child document. For review purposes, I want to do that work
separately. I'd prefer if the review focused on this structure and idea
rather than the content of the files.
I expect to do this to other documentation pages that would otherwise be
very nested.
The project concept landing page and nav (collapsed by default) looks
like this now:
<img width="1507" alt="Screenshot 2024-11-14 at 11 28 45 AM"
src="https://github.com/user-attachments/assets/88288b09-8463-49d4-84ba-ee27144b62a5">
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
<!-- What's the purpose of the change? What does it do, and why? -->
PR #4965 added `*-manylinux_2_31` as a target triple, and issue #4966
described the need for a more general solution.
In lieu of a general solution, this PR adds further explicit manylinux
target triples for different glibc versions, up to the one used by the
latest Ubuntu release (glibc 2.40 used in Ubuntu 24.10).
## Test Plan
<!-- How was it tested? -->
Local, manual testing with a Python wheel targeting
`x86_64-manylinux_2_35`.
Allow including data files in wheels, configured through
`pyproject.toml`. This configuration is currently only read in the build
backend. We'd only start using it in the frontend when we're adding a
fast path.
Each data entry is a directory, whose contents are copied to the
matching directory in the wheel in
`<name>-<version>.data/(purelib|platlib|headers|scripts|data)`. Upon
installation, this data is moved to its target location, as defined by
<https://docs.python.org/3.12/library/sysconfig.html#installation-paths>:
- `data`: Installed over the virtual environment root. Warning: this
may overwrite existing files!
- `scripts`: Installed to the directory for executables, `<venv>/bin` on
Unix or `<venv>\Scripts` on Windows. This directory is added to PATH
when the virtual environment is activated or when using `uv run`, so
this data type can be used to install additional binaries. Consider
using `project.scripts` instead for starting Python code.
- `headers`: Installed to the include directory, where compilers
building Python packages with this package as a build requirement will
search for header files.
- `purelib` and `platlib`: Installed to the `site-packages` directory.
It is not recommended to use these two options.
For simplicity, for now we're just defining a directory to be copied for
each data directory, while using the glob based include mechanism in the
background. We thereby introduce a third mechanism next to the main
includes and the PEP 639 mechanism, which is not what we should finalize
on.
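For illustration, a minimal sketch of how such data directories might be declared in `pyproject.toml`; the table name and keys here are assumptions for readability, not the settled configuration surface:
```toml
# Illustrative only: the table name and keys are assumptions, not the final configuration surface.
[tool.uv.build-backend.data]
scripts = "data/scripts"  # copied into <name>-<version>.data/scripts
headers = "data/headers"  # copied into <name>-<version>.data/headers
data = "data/data"        # copied into <name>-<version>.data/data
```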
## Summary
The reqwest middleware doesn't retry errors that occur "after" the
request completes -- but in some cases, these do include spurious errors
that we want to retry. See https://github.com/astral-sh/uv/issues/8144
for examples. This PR adds a second retry layer during the response
_handler_, which should help with some of the spurious failures we see
in the linked issue.
Closes https://github.com/astral-sh/uv/issues/8144.
## Summary
We missed the case in which the user has a legacy non-`[project]` root
-- we were always installing all members.
Closes https://github.com/astral-sh/uv/issues/9214.
## Summary
This PR enables something like the "final boss" of PyTorch setups --
explicit support for CPU vs. GPU-enabled variants via extras:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.13.0"
dependencies = []
[project.optional-dependencies]
cpu = [
"torch==2.5.1+cpu",
]
gpu = [
"torch==2.5.1",
]
[tool.uv.sources]
torch = [
{ index = "torch-cpu", extra = "cpu" },
{ index = "torch-gpu", extra = "gpu" },
]
[[tool.uv.index]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "torch-gpu"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
[tool.uv]
conflicts = [
[
{ extra = "cpu" },
{ extra = "gpu" },
],
]
```
It builds atop the conflicting extras work to allow sources to be marked
as specific to a dedicated extra being enabled or disabled.
As part of this work, sources now have an `extra` field. If a source has
an `extra`, it means that the source is only applied to the requirement
when defined within that optional group. For example, `{ index =
"torch-cpu", extra = "cpu" }` above only applies to
`"torch==2.5.1+cpu"`.
The `extra` field does _not_ mean that the source is "enabled" when the
extra is activated. For example, this wouldn't work:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.13.0"
dependencies = ["torch"]
[tool.uv.sources]
torch = [
{ index = "torch-cpu", extra = "cpu" },
{ index = "torch-gpu", extra = "gpu" },
]
[[tool.uv.index]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "torch-gpu"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```
In this case, the sources would effectively be ignored. Extras are
really confusing... but I think this is correct? We don't want enabling
or disabling extras to affect resolution information that's _outside_ of
the relevant optional group.
## Summary
These were moved as part of a broader refactor to create a single
integration test module. That "single integration test module" did
indeed have a big impact on compile times, which is great! But we aren't
seeing any benefit from moving these tests into their own files (despite
the claim in [this blog
post](https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html),
I see the same compilation pattern regardless of where the tests are
located). Plus, we don't have many of these, and same-file tests are such
a strong Rust convention.
## Summary
I moved this to a separate test. The packages may or may not be
downloaded already, since the previous command fails -- it just depends
on timing.
## Summary
The distributions used to be stored in a `BTreeMap`, keyed by name.
They're now stored in a graph... so iteration isn't guaranteed to
produce a deterministic hash!
This fixes a "flaky" test, though it's actually a real bug. The test was
right!
Closes #9137.
Fixes #9164
Using clap's `default_value_t` makes the `flag` function unhappy, so
just set the default when we unwrap. Tested with no flags,
`--verify-hashes`, `--no-verify-hashes`, and setting it in `uv.toml`.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
When building source distributions, we need to include the readme, so it
can become part of the METADATA body when building the wheel. We also need
to support the license files from PEP 639. When building the source
distribution, we copy those files relative to their origin; when building
the wheel, we copy them to `.dist-info/licenses`.
The test for idempotence in wheel building is merged into the file
listing test, which also covers that source tree -> source dist -> wheel
is equivalent to source tree -> wheel, though we do need stronger checks
for file inclusion here.
Best reviewed commit-by-commit
## Summary
Align uv's workspace discovery with red knot's (see
https://github.com/astral-sh/ruff/pull/14308#issuecomment-2474296308)
* Detect nested workspace inside the current workspace rather than
testing if the current workspace is a member of any outer workspace.
* Detect packages with identical names.
## Test Plan
I added two integration tests. I also backported the tests to main to
verify that both of these invalid workspaces aren't caught by uv today.
That makes this, technically, a breaking change, but I would consider the
change a bug fix.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
I was wrongly using `.name()` to detect if a package was "not root", but
in `pip compile`, the root can have a name -- so we were failing to find
the derivation chain.
## Summary
This PR adds context to our error messages to explain _why_ a given
package was included, if we fail to download or build it.
It's quite a large change, but it motivated some good refactors and
improvements along the way.
Closes https://github.com/astral-sh/uv/issues/8962.
## Summary
This PR should not contain any user-visible changes, but the goal is to
refactor the `Resolution` type to retain a dependency graph. We want to
be able to explain _why_ a given package was excluded on error (see:
https://github.com/astral-sh/uv/issues/8962), which in turn requires
that at install time, we can go back and figure out the dependency
chain. At present, `Resolution` is just a map from package name to
distribution; this PR remodels it as a graph in which each node is a
package, and the edges contain markers plus extras or dependency groups.
A first milestone: source tree -> source dist -> wheel -> install works.
This PR adds a test for this.
There's obviously a lot still missing, including basics such as the
Readme inclusion.
## Summary
As discussed in Discord... This struct has evolved to include a lot of
information apart from the `petgraph::Graph`. And I want to add a graph
to the simplified `Resolution` type. So I think this name makes more
sense.
When doing a directory traversal for source dist inclusion, we want to
offer the user include and exclude options, and we want to avoid
traversing irrelevant directories. The latter is important for
performance, especially on network file systems, but also with large
data directories, or (not-included) directories with other permissions.
To support this, we introduce `GlobDirFilter`, which uses a DFA from
regex_automata to determine whether any children of a directory can be
included and skips the directory if not.
The globs are based on PEP 639. The syntax is more restricted than glob
or globset, but it's standardized. I chose it over glob or globset
because we're already using this syntax for `project.license-files`, as
required by PEP 639, so it makes sense to use the same globs for all
includes (see e.g.
4f52a3bb62/pyproject.toml (L36-L48)
for an example with the same semantics for include and exclude).
### Semantics
Glob semantics are complex due to mixing directories and files,
expectations around simplicity and our need to exclude most of the tree
in the project from traversal. The current draft uses a syntax that
optimizes for simple default use cases to start with.
#### includes
Glob expressions for which files and directories to include in the source
distribution.
Includes are anchored, which means that `pyproject.toml` includes only
`<project root>/pyproject.toml`. Use, for example, `assets/**/sample.csv`
to include all
`sample.csv` files in `<project root>/assets` or any child directory. To
recursively include
all files under a directory, use a `/**` suffix, e.g. `src/**`. For
performance and
reproducibility, avoid unanchored matches such as `**/sample.csv`.
The glob syntax is the reduced portable glob from
[PEP 639](https://peps.python.org/pep-0639/#add-license-FILES-key).
#### excludes
Glob expressions for which files and directories to exclude from the
previous source
distribution includes.
Excludes are not anchored, which means that `__pycache__` excludes all
directories named
`__pycache__` and its children anywhere. To anchor a directory, use a
`/` prefix, e.g.,
`/dist` will exclude only `<project root>/dist`.
The glob syntax is the reduced portable glob from
[PEP 639](https://peps.python.org/pep-0639/#add-license-FILES-key).
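As a rough sketch of the semantics above (the setting names below are placeholders, not necessarily the final ones):
```toml
# Placeholder setting names; the semantics follow the anchoring rules described above.
[tool.uv.build-backend]
source-include = ["assets/**/sample.csv", "src/**"]  # includes are anchored at the project root
source-exclude = ["__pycache__", "/dist"]            # excludes are unanchored unless prefixed with `/`
```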
Surprisingly, this seems to be all that's necessary.
Previously, we were only extracting an extra from a
PubGrubPackage to test for conflicts. But now we extract
either an extra or a group. The surrounding code all
remains the same.
We do need to add some extra checking for groups
specifically, but I believe that's it.
This adds support for providing conflicting group names in addition to
extra names to `Conflicts`.
This merely makes "room" for it in the types while keeping everything
working. We'll add proper support for it in the next commit.
Note that one interesting trick we do here is depend directly on
`hashbrown` so that we can make use of its `Equivalent` trait. This in
turn lets us use things like `ConflictItemRef` as a lookup key for a
hashset that contains `ConflictItem`. This mirrors using a `&str` as a
lookup key for a hashset that contains `String`, but works for arbitrary
types. `std` doesn't support this, but `hashbrown` does. This trick in
turn lets us simplify some of our data structures.
This also rejiggers some of the serde-interaction with the conflicting
types. We now use a wire type to represent our conflicting items for
more flexibility. i.e., Support `extra` XOR `group` fields.
Since this is intended to support _both_ groups and extras, it doesn't
make sense to just name it for groups. And since there isn't really a
word that encapsulates both "extra" and "group," we just fall back to
the super general "conflicts."
We'll rename the variables and other things in the next commit.
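As a sketch of where this is headed, a group-based conflict declaration could mirror the extras-based form; the field names here are illustrative, since the renaming lands in a follow-up:
```toml
# Illustrative: mirrors the extras-based conflict declaration, with groups instead of extras.
[tool.uv]
conflicts = [
    [
        { group = "group1" },
        { group = "group2" },
    ],
]
```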
## Summary
I need this for the derivation chain work
(https://github.com/astral-sh/uv/issues/8962), but it just seems
generally useful. You can't always get a version from a `Dist` (it could
be URL-based!), but when we create a `ResolvedDist`, we _do_ know the
version (and not just the URL). This PR preserves it.
## Summary
This PR ensures that `pylint>=3.2.6` followed by
`pylint-module-boundaries>=1.3.1` is considered sorted, despite the fact
that `>` comes later in the alphabet than `-`. By purely comparing
strings, they would _not_ be sorted; but by considering the name, then
the specifiers, they are.
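For example, a dependency list like the following is now considered sorted, because names are compared before specifiers:
```toml
dependencies = [
    "pylint>=3.2.6",
    "pylint-module-boundaries>=1.3.1",
]
```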
Closes https://github.com/astral-sh/uv/issues/9076.
## Summary
Part of me wants to revert support for `--with "flask, requests"`, but
the multiple specifiers case actually isn't ambiguous, and handling it
is better than shipping a breaking change in a patch release.
Closes https://github.com/astral-sh/uv/issues/9081.
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
-->
## Summary
Adds python-install-mirror and pypy-install-mirror as keys for uv.toml,
and cli args for `uv python install`.
Could leave the cli args out if we think the env vars and configs are
sufficient.
Fixes #8186
<!-- What's the purpose of the change? What does it do, and why? -->
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
This restores behavior previously removed in
https://github.com/astral-sh/uv/pull/7649.
I thought it'd be clearer (and simpler) to have a consistent Python
executable name ordering. However, we've seen some cases where this can
be surprising and, in combination with #8481, can result in incorrect
behavior. For example, see https://github.com/astral-sh/uv/issues/9046
where we prefer `python3` over `python3.12` in the same directory even
though `python3.12` was requested. While `python3` and `python3.12` both
point to valid Python 3.12 environments there, the expectation is that
when `python3.12` is requested that the `python3.12` executable is
preferred. This expectation may be less obvious if the user requests
`python@3.12`, but uv does not distinguish between these request forms.
Similarly, this may be surprising as by default uv prefers `python` over
`python3` but when requesting `python3.12` the preference will be
swapped.
This PR adds support for conflicting extras. For example, consider
some optional dependencies like this:
```toml
[project.optional-dependencies]
project1 = ["numpy==1.26.3"]
project2 = ["numpy==1.26.4"]
```
These dependency specifications are not compatible with one another.
And if you ask uv to lock these, you'll get an unresolvable error.
With this PR, you can now add this to your `pyproject.toml` to get
around this:
```toml
[tool.uv]
conflicting-groups = [
[
{ package = "project", extra = "project1" },
{ package = "project", extra = "project2" },
],
]
```
This will make the universal resolver create additional forks
internally that keep the dependencies from the `project1` and
`project2` extras separate. And we make all of this work by reporting
an error at **install** time if one tries to install with two or more
extras that have been declared as conflicting. (If we didn't do this,
it would be possible to try and install two different versions of the
same package into the same environment.)
This PR does *not* add support for conflicting **groups**, but it is
intended to add support in a follow-up PR.
Closes #6981, Fixes #8024
Ref #6729, Ref #6830
This should also hopefully unblock
https://github.com/dagster-io/dagster/pull/23814, but in my testing, I
did run into other problems (specifically, with `pywin`). But it does
resolve the problem with incompatible dependencies in two different
extras once you declare `test-airflow-1` and `test-airflow-2` as
conflicting for `dagster-airflow`.
NOTE: This PR doesn't make `conflicting-groups` public yet. And in a
follow-up PR, I plan to switch the name to `conflicts` instead of
`conflicting-groups`, since it will be able to accept conflicting extras
_and_ conflicting groups.
`uv init` shouldn't have been using `EnvironmentPreference::Any` for
discovery of a Python interpreter, it seems like an oversight that it
was reading from virtual environments. I changed it to
`EnvironmentPreference::OnlySystem` so we'll use the first Python on the
`PATH` instead. However, I think we actually do want to respect a
virtual environment's Python version if it's in the target project
directory already, so I've implemented that as well.
Closes https://github.com/astral-sh/uv/issues/9072
Closes https://github.com/astral-sh/uv/issues/8092
## Summary
Not thrilled with this but helps for now. I feel like this
error-handling should happen at the top-level, rather than on all these
individual commands. But we don't have a unified result type at the
top-level of the CLI -- all these commands return `anyhow::Result`.
## Summary
Shows similar diagnostics for failures that happen at install time,
rather than resolve time. This will ultimately feed into
https://github.com/astral-sh/uv/issues/8962 since we'll now have
consolidated handling for these kinds of failures.
## Summary
If a `uv add` fails at the sync stage, we need to clean up the changes
to the `uv.lock`, since it might've been edited during the lock stage
(which, by necessity, succeeded). As-is, we revert the `pyproject.toml`
but not the `uv.lock`, so the two are out-of-sync.
Closes https://github.com/astral-sh/uv/issues/9028.
Closes https://github.com/astral-sh/uv/issues/7992.
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
This PR builds off of https://github.com/astral-sh/uv/pull/6738 to fix
#6724 (sorry for the new PR @charliermarsh I didn't want to push to your
branch, not even sure if I could). The reason the original PR doesn't
fix the issue described in #6724 is that fastapi is run in the
project context (as I assume a lot of use cases are). This PR adds an
extra commit to handle the signals in the project/run.rs file.
~It also addresses the comment
[here](https://github.com/astral-sh/uv/pull/6738/files#r1734757548) to
not use the tokio ctrl-c method since we are now handling SIGINT
ourselves~ Update: tokio handles SIGINT in a platform-agnostic way;
intercepting this ourselves makes the logic more complicated on
Windows, so I decided to leave the tokio ctrl-c handler.
~[This
comment](https://github.com/astral-sh/uv/pull/6738/files#r1743510140)
remains unaddressed, however, the Child process does not have any other
methods besides kill() so I don't see how we can "preserve" the
interrupt call :/ I tried looking around but no luck.~ Updated: this PR
is reduced to only handling SIGTERM propagation on Unix machines, and
the SIGTERM call to the child is preserved by making use of the nix
package, instead of relying on tokio, which only allows for `kill()` on
a child process.
## Test Plan
I tested this by building the docker container locally with these
changes and tagging it "myuv", and then using that as the base image in
uv-docker-example (and of course following the rest of the repro steps in
#6724). In my tests, I see that ctrl-c on the docker-compose up command
exits the process almost immediately 👍
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
We're inconsistent with these -- sometimes it's `Error::Fetch` and
sometimes it's `Error::Download`. The message says download, so let's
just use that?
## Summary
This got moved to `InstallTarget`! Must've been an oversight not to
delete. I verified that no code was changed here since the date that we
moved it to `InstallTarget`.
## Summary
Just as we don't enforce tag compliance, we shouldn't enforce
`--no-build` when validating the lockfile. If we end up building from
source, the distribution database will correctly error.
Closes https://github.com/astral-sh/uv/issues/9016.
## Summary
At time of writing, `markupsafe==3.0.2` exists on the PyTorch index, but
there's only a single wheel:
`MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`
Meanwhile, there are a large number of wheels on PyPI for the same
version. If the user is on Python 3.12, and we return the incompatible
PyTorch wheel without considering the PyPI wheels, PubGrub will mark
3.0.2 as an incompatible version, even though there are compatible
wheels on PyPI.
Closes https://github.com/astral-sh/uv/issues/8922.
## Summary
We were making some incorrect assumptions in the extra-merging code for
universal `pip compile`. This PR corrects those assumptions and adds a
bunch of additional tests.
Closes https://github.com/astral-sh/uv/issues/8915.
## Summary
The basic issue here is that `uv add` will compute and store a hash for
each package. But if you later run `uv pip install` _after_ `uv cache
prune --ci`, we need to re-download the source distribution. After
re-downloading, we compare the hashes before and after. But `uv pip
install` doesn't compute any hashes by default. So the hashes "differ"
and we error.
Instead, we need to compute a superset of the already-existing and
newly-requested hashes when performing this re-download. (In practice,
this will always be SHA-256.)
Closes https://github.com/astral-sh/uv/issues/8929.
## Test Plan
```shell
export UV_CACHE_DIR="$PWD/cache"
rm -rf "$UV_CACHE_DIR" .venv .venv-2 pyproject.toml uv.lock
echo $(uv --version)
uv init --name uv-cache-issue
cargo run add --python 3.13 "pycairo"
uv cache prune --ci
rm -rf .venv .venv-2
uv venv --python python3.11 .venv-2
. .venv-2/bin/activate
cargo run pip install "pycairo"
```
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
auditwheel is capable of generating riscv64 wheels for manylinux_2_31
and above. Here we modify uv-platform-tags so that those wheels can be
installed using uv.
Fixes: #8889
## Test Plan
- ran `cargo nextest run` locally on an x86 machine
- also ran `cargo nextest run` locally on a riscv64 VM but there were a
fair few failures (with and without this patch)
- built a riscv64 uv wheel, installed it on a riscv64 VM and checked
that I could use the newly built version of uv to install
manylinux_2_35_riscv64 wheels.
## Summary
As an oversight, these arguments weren't being respected from the CLI or
elsewhere -- we always hit PyPI, ignored `--exclude-newer`, etc. It has
to do with the way that the `PipOptions` are setup -- there's a global
struct that we pass around everywhere and fill in with defaults, so
there's no type safety to guarantee that we provide whatever it is we
need to use in the command. The newer APIs are much better about this.
Closes #8927.
## Summary
In the example outlined in https://github.com/astral-sh/uv/issues/8884,
this removes an unnecessary `jupyter_contrib_nbextensions-0.7.0.tar.gz`
segment (replacing it with `src`), thereby saving 39 characters and
getting that build working on my Windows machine.
This should _not_ require a version bump because we already have logic
in place to "heal" partial cache entries that lack an unzipped
distribution.
Closes https://github.com/astral-sh/uv/issues/8884.
Closes https://github.com/astral-sh/uv/issues/7376.
## Summary
See: https://github.com/astral-sh/uv/issues/8884. We build in a
directory that's deep within the cache; to help with file name length
limits, we should build at the top-level of the cache.
We don't actually want users to see this, but we should be stripping it
anyway. Without this change, we show ranges in the debug logs that look
like `>=1.0.0, <1.0.0`, which is more confusing than helpful. (We may
want to post-process those debug ranges to remove these.)
After https://github.com/astral-sh/uv/pull/8797, we have spec-compliant
handling for local version identifiers and can completely remove all the
special-casing around it.
Implement a full working version of local version semantics. The (as far as I'm aware)
major move towards this was implemented in #2430. This added support
such that the version specifier `torch==2.1.0+cpu` would install
`torch@2.1.0+cpu` and consider `torch@2.1.0+cpu` a valid way to satisfy
the requirement `torch==2.1.0` in further dependency resolution.
In this feature, we more fully support local version semantics. Namely,
we now allow `torch==2.1.0` to install `torch@2.1.0+cpu` regardless of
whether `torch@2.1.0` (no local tag) actually exists.
We do this by adding an internal-only `Max` value to local versions that
compare greater to all other local versions. Then we can translate
`torch==2.1.0` into bounds: greater than 2.1.0 with no local tag and
less than 2.1.0 with the `Max` local tag.
Depends on https://github.com/astral-sh/packse/pull/227.
Uses #6369 for test coverage.
Updates version file discovery to search up into parent directories.
Also refactors Python request determination to avoid duplicating the
user request / version file / workspace lookup logic in every command
(this supersedes the work started in
https://github.com/astral-sh/uv/pull/6372).
There is a bit of remaining work here, mostly around documentation.
There are some edge-cases where we don't use the refactored request
utility, like `uv build` — I'm not sure how I'm going to handle that yet
as it needs a separate root directory.
Not verifying the certificates of certain hosts should be supported for
all kinds of HTTPS connections, so we're making it a global option, just
like native TLS. This fixes the remaining places that use a client but
were not configuring allow-insecure-host.
Fixes #6983 (I think)
Closes #6983
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
These settings can only be defined in `pyproject.toml`, since they're
project-centric, and not _configuration_.
Closes https://github.com/astral-sh/uv/issues/8539.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Co-authored-by: konsti <konstin@mailbox.org>
closes #6640
Could you suggest how I should test it?
(already tested locally)
---------
Co-authored-by: konstin <konstin@mailbox.org>
Co-authored-by: Charles Tapley Hoyt <cthoyt@gmail.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
e.g.
```
❯ echo "anyio" | cargo run -q -- pip compile - -v
DEBUG uv 0.4.30 (107ab3d71 2024-11-07)
DEBUG Starting Python discovery for a default Python
DEBUG Looking for exact match for request a default Python
DEBUG Searching for default Python interpreter in virtual environments, managed installations, or search path
DEBUG Found `cpython-3.12.7-macos-aarch64-none` at `/Users/zb/workspace/uv/.venv/bin/python3` (virtual environment)
```
```
❯ cargo run -q -- pip install anyio -v
DEBUG uv 0.4.30 (107ab3d71 2024-11-07)
DEBUG Searching for default Python interpreter in virtual environments
DEBUG Found `cpython-3.12.7-macos-aarch64-none` at `/Users/zb/workspace/uv/.venv/bin/python3` (virtual environment)
```
vs
```
❯ uv pip install anyio -v
DEBUG uv 0.4.30 (61ed2a236 2024-11-04)
DEBUG Searching for default Python interpreter in system path
DEBUG Found `cpython-3.12.7-macos-aarch64-none` at `/Users/zb/workspace/uv/.venv/bin/python3` (virtual environment)
```
```
❯ echo "anyio" | uv pip compile - -v
DEBUG uv 0.4.30 (61ed2a236 2024-11-04)
DEBUG Starting Python discovery for a default Python
DEBUG Looking for exact match for request a default Python
DEBUG Searching for default Python interpreter in managed installations or system path
DEBUG Found `cpython-3.12.7-macos-aarch64-none` at `/Users/zb/workspace/uv/.venv/bin/python3` (virtual environment)
```
Very basic source distribution support. What's included:
- Include and exclude patterns (hard-coded): Currently, we have
globset+walkdir in one part and glob in the other. I'll migrate
everything to globset+walkdir, with some custom perf optimizations to
avoid traversing irrelevant directories on top. I'll also pick a glob
syntax (or subset); PEP 639 seems like a good candidate since it's
consistent with what we already have to support.
- Add the `PKG-INFO` file with metadata: Thanks to Core Metadata 2.2,
this metadata is reliable and can be read statically by external tools.
Example output:
```
$ tar -ztvf dist/dummy-0.1.0.tar.gz
-rw-r--r-- 0/0 154 1970-01-01 01:00 dummy-0.1.0/PKG-INFO
-rw-rw-r-- 0/0 509 1970-01-01 01:00 dummy-0.1.0/pyproject.toml
drwxrwxr-x 0/0 0 1970-01-01 01:00 dummy-0.1.0/src/dummy
drwxrwxr-x 0/0 0 1970-01-01 01:00 dummy-0.1.0/src/dummy/submodule
-rw-rw-r-- 0/0 30 1970-01-01 01:00 dummy-0.1.0/src/dummy/submodule/impl.py
-rw-rw-r-- 0/0 14 1970-01-01 01:00 dummy-0.1.0/src/dummy/submodule/__init__.py
-rw-rw-r-- 0/0 12 1970-01-01 01:00 dummy-0.1.0/src/dummy/__init__.py
```
No tests since the source distributions don't build valid wheels yet.
## Summary
Like pip, we now allow the semicolon to directly follow the URL (but
require that it's either preceded or followed by a space):
```
# OK
./test.whl; sys_platform == 'darwin'
# OK
./test.whl ;sys_platform == 'darwin'
# Error
./test.whl;sys_platform == 'darwin'
```
Closes https://github.com/astral-sh/uv/issues/8831.
This PR simplifies the VersionSmall implementation a bit by utilizing
more constants. That is, if the bit-level format changes, *most* of
those changes should be implementable by just changing the constants.
Previously, you would need to audit and tweak the code as well. (The
exception here is `push_release`. If the release segment bit format is
changed, then that function will need to be tweaked. I didn't think it
was worth over-complicating things to make its implementation more
general.)
See https://github.com/astral-sh/uv/pull/8531#issuecomment-2442698889,
we hint users coming from twine to use `--check-url` instead.
> `uv publish` does not support `--skip-existing`, use `--check-url`
with the simple index URL instead.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Previously, we'd use the `--reinstall` flag to determine if we should
replace existing Python executables in the bin directory during an
install. There are a few problems with this:
- We replace executables we don't manage
- We can replace executables from other uv Python installations during
reinstall (surprising)
- We don't do the "right" thing when installing patch versions e.g.
installing `3.12.4` then `3.12.6` would fail without the reinstall flag
In `uv tool`, we have separate `--force` and `--reinstall` concepts.
Here we separate the flags (`--force` was previously just a
`--reinstall` alias) and add inspection of the existing executables to
inform a decision on replacement.
In brief, we will:
- Replace any executables with `--force`
- Replace executables for the same installation with `--reinstall`
- Replace executables for an older patch version by default
## Summary
This PR pulls in https://github.com/astral-sh/uv/pull/8263 and
https://github.com/astral-sh/uv/pull/8463, which were originally merged
into the v0.5 tracking branch but can now be committed separately, as
we've made `.env` loading opt-in.
In summary:
- `.env` loading is now opt-in (`--env-file .env`).
- `.env` remains supported on `uv run`, so it's meant for providing
environment variables to the run command, rather than to uv itself.
---------
Co-authored-by: Eduardo González Vaquero <47718648+edugzlez@users.noreply.github.com>
Running this test manually on the latest release shows that it still
emits a `python_version < '0'` marker:
    $ uv-0.4.29 pip compile requirements.in -c constraints.txt --annotation-style line --python 3.10 --universal | rg "< '0'"
    Resolved 151 packages in 97ms
    apache-airflow-providers-common-sql==1.19.0 ; python_version < '0' # via apache-airflow-providers-sqlite
That is, even though this is a smaller test, it's still testing the same
bug.
This is about twice as fast on my machine. It's probably still worth
moving to a packse test, but this was quick to do.
This updates the surrounding code to use the new ResolverEnvironment
type. In some cases, this simplifies caller code by removing case
analysis. There *shouldn't* be any behavior changes here. Some test
snapshots were updated to account for some minor tweaks to error
messages.
I didn't split this up into separate commits because it would have been
too difficult/costly.
This type is intended to replace `ResolverMarkers`. The main difference
between them is that this type encapsulates more decision making by
un-exporting the different cases. So instead of callers needing to do
explicit case analysis depending on the type of resolver environment,
callers instead use methods that know how to do the right thing. In the
next commit, there are at least a few cases where this greatly
simplifies case analysis on the caller side.
The motivation for this type is to centralize decision making about
forking. In particular, we want to expand forking to include conflicting
groups instead of just `MarkerTree`. So to a certain extent, the
refactor here is about removing bare use of `MarkerTree` in favor of a
more purpose built type that encapsulates the forking logic.
The encapsulation is not quite perfect here. I expect to improve on it a
bit once we add support for conflicting groups.
This is split off from the subsequent commit (that makes use of
`ResolverEnvironment`) so that it's a bit easier to review the addition
in isolation.
`ExtraName` did implement `AsRef<str>`, but that should generally
only be used in a context with an `AsRef<str>` generic bound. If
you just want a `&str` from a concrete `ExtraName`, then a specific
method for that purpose should be used.
This PR is a revival of #3502, albeit in a much simpler form.
This would allow for different middlewares, like authentication and such,
useful if you want to deviate from the keychain authentication
methods when using uv as a library.
@zanieb I hope I made the changes as you noted you wanted to see them :)
Happy to add/change anything you need.
## Summary
At present, when we have a Python requirement and we see a wheel, we
verify that the Python requirement is compatible with the wheel. For
source distributions, though, we verify that both the Python requirement
_and_ the currently-installed version are compatible, because we assume
that we'll need to build the source distribution in order to get
metadata. However, we can often extract source distribution metadata
_without_ building (e.g., if there's a `pyproject.toml` with no dynamic
keys).
This PR thus modifies the source distribution handling to defer that
incompatibility ("We couldn't get metadata for this project, because it
has no static metadata and requires a higher Python version to run /
build") until we actually try to build the package. As a result, you can
now resolve source distribution-only packages using Python versions
below their `requires-python`, as long as they include static metadata.
Closes https://github.com/astral-sh/uv/issues/8767.
## Summary
* Env docs now support anchors, which allows sending someone a link
with a direct reference to an env var, or cross-referencing env vars in the
docs.
* Marked additional env vars as hidden from the docs due to their
internal use
* Updates some tests still using literals to use the static env vars
## Test Plan
<img width="1370" alt="env_var_anchors"
src="https://github.com/user-attachments/assets/52ae1caa-5199-4798-9eb5-81b8f5b57c24">
## Summary
Resolves #8417
I've just begun learning procedural macros, so this PR is more of a
proof of concept. It's still a work in progress, and I welcome any
assistance or feedback.
## Summary
This PR improves the interaction of `--frozen` such that we reduce the
dependency on the `pyproject.toml` and increase the dependency on the
`uv.lock`. Specifically, we now read the list of workspace members from
the `uv.lock` rather than the `pyproject.toml`, which means we don't
need to discover the member `pyproject.toml` files in order to perform a
`uv sync --frozen --all-packages`.
## Summary
This PR enables `uv sync --all-packages` to sync all packages in a
workspace. It removes a common use-case for the legacy non-`[project]`
packages that we're trying to move away from.
Closes https://github.com/astral-sh/uv/issues/8724.
This PR fixes a bug where it was possible for dependencies to be
included in a final resolution with markers that always evaluate to
false. Specifically, `python_version < '0'`.
While we do filter based on Python markers during forking, it turns out
that the markers for each fork are "combined" *after* this filtering
step. But the process of combination can result in a more specific
marker that is always false for the configured Python requirement. This
could result in dependencies with markers that are always false (like
`python_version < '0'`) appearing in the resolution.
The first commit in this PR adds a regression test (with an undesirable
result), and the second commit fixes the regression and updates the
test.
Fixes #8676
This still utilizes the RFC 2822 datetime formatter, but utilizes new
methods [added in jiff 0.1.14] to emit timestamps in a format strictly
compatible with RFC 9110.
It seems like most HTTP servers were pretty flexible and supported RFC
2822 datetime formats, but #8747 shows at least one case where that
isn't true. Given that the [MDN docs prescribe RFC 9110], we defer to
them.
Fixes #8747
[added in jiff 0.1.14]: https://github.com/BurntSushi/jiff/pull/154
[MDN docs prescribe RFC 9110]:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Modified-Since
## Summary
It turns out that when locking, we were only taking the groups from the
root `pyproject.toml` into account, and ignoring groups that were only
defined in a workspace member.
## Summary
By default, `uv tree` shows the full workspace, not _just_ the root. If
the root depended on a workspace member as a dev dependency, then we'd
still show it as `(group: dev)` in `uv tree` even if you passed
`--no-dev`, because we weren't filtering the edges in the right place.
This is still somewhat confusing, because if `root` depends on workspace
member `child` as a dev dependency, `uv tree --no-dev` still shows both.
Closes https://github.com/astral-sh/uv/issues/8719.
Incorporating #8637 into #8458
- Adds `python-managed` feature selection to Windows CI for `python
install` tests
- Adds trampoline sniffing utilities to `uv-trampoline-builder`
- Uses a trampoline to install Python executables into the `PATH` on
Windows
Pulling out of https://github.com/astral-sh/uv/pull/8650 for
readability.
Trying to clean this up to simplify extensions in the future. This is
not a strict refactor, there are behavioral changes here.
- Adds some structs for managing state.
- Addresses some likely inconsistent behavior for weird edge-cases.
- We fill platform information before checking if a request is
satisfied.
- We error earlier if we can't find a download for the request, i.e.,
even if you somehow have it installed.
- Only reports versions as uninstalled if a download actually replaces
them.
- Moves some of the default output to tracing messages.
- Even if an installation was already satisfied, we'll check that it is
set up properly.
When resolving workspace dependencies (from one workspace member to
another) from a workspace that's in git, we need to emit these
transitive dependencies as git dependencies, not as path dependencies like
all other workspace deps. This fixes a bug where we would treat them as
path dependencies inside the checkout directory, leading either to
clashes (between a local path and another direct git dependency) or
invalid lockfiles (referencing the checkout dir in the lockfile when we
should be referencing the git repo).
Fixes #8087, Fixes #4920, Fixes #3936, since we needed that information anyway
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Updates `uv python install` to link `python3.x` in the executable
directory (i.e., `~/.local/bin`) to the managed interpreter path.
Includes
- #8569
- #8571
Remaining work
- #8663
- #8650
- Add an opt-out setting and flag
- Update documentation
## Summary
These use really heavy test packages, like SciPy, NumPy, scikit-learn --
and packages with large dependency trees, like packse.
I removed a few redundant tests, and replaced the tests with smaller
packages.
Closes https://github.com/astral-sh/uv/issues/8674.
Previously, when doing a `uv pip` resolution, we would only return the
first entry in the map. But there should only ever be one entry, or else
we would have incompatible dependencies. So we can collapse the case
with the "one universal fork" case.
(I found this while doing some refactoring of how we handle forking, and
collapsing these cases simplifies some of that refactoring work.)
Previously, when tests were run in `~/.local/share/uv`, the behavior of
some tests could be impacted by a git repository in `~` (as on my
system). To avoid this, we set `GIT_CEILING_DIRECTORIES` to forcefully
prevent git from climbing out of its test directory to look for parent
git repositories.
When this code was written, we didn't have "proper" disjointness checks,
and so simple equality was used instead. Arguably, disjointness checks are
more correct, and this would also simplify some case analysis in an
ongoing refactor.
Currently, our trampoline is used to convert `<command> [args]` to
`python <command> [args]` for script entrypoints installed into virtual
environments. For #8458, it'd be nice to convert a shim `python3.12
[args]` to `python [args]`. Here, we modify the trampolines to support
this use-case.
The only change we really need here is to avoid injecting `<command>`
into the child process. We change the "magic number" at the end of the
trampoline executables from `UVUV` to `UVSC` and `UVPY` which define
"script" and "python" variants to the trampoline. We then omit the
`<command>` injection in the latter case. We also omit writing the zip
script payload.
To support construction of the new variant, a new
`uv-trampoline-builder` crate is introduced — this avoids requirements
on `uv-install-wheel` in future work. I also use `uv-trampoline-builder`
to consolidate some of the test setup for `uv-trampoline`.
There should be no backwards compatibility concerns, since trampolines
are fully self-referential.
I rebased to fix the commits at the end, as this took many iterations to
get working via CI. This should roughly be reviewable by commit if you
prefer.
## Summary
Unfortunately, it looks like we lost
https://github.com/astral-sh/uv/pull/8501 somewhere in a bad rebase.
This PR re-adds the change, with compatibility for those lockfiles
created in v0.4.27. I'm not certain we should actually merge this. It
might be less painful and confusing to just bite the bullet on the
change.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Refs:
- #8626
## Summary
Current documentation incorrectly suggests that the macOS cache
directory location is `$HOME/Library/Caches/uv`, but that changed in:
- #5806
Updates docs to say this instead:
> <p>Defaults to <code>$HOME/.cache/uv</code> on macOS,
<code>$XDG_CACHE_HOME/uv</code> or <code>$HOME/.cache/uv</code> on
Linux, and <code>%LOCALAPPDATA%\uv\cache</code> on Windows. The <code>uv
cache dir</code> command will show the location of the cache
directory.</p>
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
The changes in this commit introduce the `UV_NO_PROGRESS` environment
variable as an alternative way to control progress output suppression in
uv-cli, equivalent to using the `--no-progress` flag. This enhancement
simplifies configuration in CI environments and automated scripts by
eliminating the need to detect whether the script is running in a CI
environment.
Previously, disabling progress output required either passing the
`--no-progress` flag directly or implementing script logic to detect CI
environments and conditionally add the flag. With this change, users can
now simply set `UV_NO_PROGRESS=true` in their environment to achieve the
same effect.
The changes include:
- Adding the `UV_NO_PROGRESS` environment variable to the `EnvVars`
struct in `crates/uv-static/src/env_vars.rs`.
- Updating the `GlobalArgs` struct in `crates/uv-cli/src/lib.rs` to
include a new `no_progress` field that is bound to the `UV_NO_PROGRESS`
environment variable.
- Adding documentation for the new `UV_NO_PROGRESS` environment variable
in `docs/configuration/environment.md`.
## Test Plan
After creating a uv project using `uv init` in a temp directory in this
project:
```
cargo run cache clean && cargo run venv && UV_NO_PROGRESS=false cargo run sync
cargo run cache clean && cargo run venv && cargo run sync
```
produce the expected default behavior
```
cargo run cache clean && cargo run venv && UV_NO_PROGRESS=true cargo run sync
```
produces the same behavior as having the `--no-progress` flag.
## Summary
This PR removes an unnecessary `return` statement from Rust code in the
Maturin template. This makes the generated code more idiomatic and
prevents a Clippy warning.
## Test Plan
This change was tested both in unit tests and by manually creating a
Maturin project.
The shared arguments were resetting the `UV_PYTHON_INSTALL_DIR`,
breaking the intent of the test.
Added some uninstall tests too, needed later for #8571
## Summary
It turns out we were omitting empty dependency groups from the lockfile
metadata, which was then causing us to reject locks when empty groups
were defined.
We now include them (that section of the lock is meant to be a true
representation of the metadata, and an empty-but-defined group is
different from an absent group), though we can ignore them for
validation, since it doesn't affect any behavior.
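For reference, the shape in question is an empty-but-defined group (the group name here is just an example):
```toml
[dependency-groups]
docs = []
```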
Closes https://github.com/astral-sh/uv/issues/8581.
## Summary
We already support `tool.uv.dev-dependencies` in the legacy
non-`[project]` projects. This adds equivalent support for
`[dependency-groups]`, e.g.:
```toml
[tool.uv.workspace]
[dependency-groups]
lint = ["ruff"]
```
## Summary
`uv add --dev` now updates the `dependency-groups.dev` section, rather
than `tool.uv.dev-dependencies` -- unless the dependency is already
present in `tool.uv.dev-dependencies`.
`uv remove --dev` now removes from both `dependency-groups.dev` and
`tool.uv.dev-dependencies`.
`--dev` and `--group dev` are now treated equivalently in `uv add` and
`uv remove`.
This PR adds support for `tool.uv.default-groups`, which defaults to
`["dev"]` for backwards-compatibility. These represent the groups we
sync by default.
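For example (the non-`dev` group name here is illustrative):
```toml
[tool.uv]
default-groups = ["dev", "docs"]
```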
## Summary
We have to iterate over all user-defined dependencies here. We were
missing the new `[dependency-groups]` section.
Part of https://github.com/astral-sh/uv/pull/8272.
Part of #8090
Unblocks https://github.com/astral-sh/uv/pull/8274
Refactors `DevMode` and `DevSpecification` into a shared type
`DevGroupsSpecification` that allows us to track if `--dev` was
implicitly or explicitly provided.
Part of #8090
Adds the ability to read group inclusions (`include-group = <name>`) in
the `pyproject.toml`. Resolves groups into concrete dependencies for
resolution.
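For example, a group can pull in another group via `include-group` (a minimal sketch; the group names are illustrative):
```toml
[dependency-groups]
lint = ["ruff"]
dev = ["pytest", { include-group = "lint" }]
```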
See https://github.com/astral-sh/uv/pull/8110 for a bit more commentary
on deferred work.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Part of #8090
Adds the ability to include a group (`--group`) in the sync or _only_
sync a group (`--only-group`). Includes all groups in the resolution,
which will have the same limitations as extras as described in #6981.
There's a great deal of refactoring of the "development" concept into
"groups" behind the scenes that I am continuing to defer here to
minimize the diff.
Additionally, this does not yet resolve interactions with the existing
`dev` group — we'll tackle that separately as well. I probably won't
merge the stack until that design is resolved. The current proposal is
that we'll just "combine' the `dev-dependencies` contents into the `dev`
group.
Part of #8090
Adds the ability to add and remove dependencies from arbitrary groups
using `uv add` and `uv remove`. Does not include resolving with the new
dependencies — tackling that in #8110.
Additionally, this does not yet resolve interactions with the existing
`dev` group — we'll tackle that separately as well. I probably won't
merge the stack until that design is resolved.
## Summary
These two sentences in the docs for `--publish-url` seem to basically be
duplicates:
3eda248ef5/crates/uv-cli/src/lib.rs (L4616-L4618)
I found the first to be easier to read, so this commit removes the
second.
## Test Plan
No tests, change is docs-only.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
For example, in:
```toml
[tool.uv]
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
```
We can just omit `[tool.uv]`.
## Summary
We were including dependencies that were only included by a dependency
that isn't relevant on the current platform (i.e., we were enforcing the
"current environment" at one level, but not transitively).
Closes https://github.com/astral-sh/uv/issues/8516.
## Summary
Historically, we haven't enforced schema versions. This PR adds a
versioning policy such that, if a uv version writes schema v2, then...
- It will always reject lockfiles with schema v3 or later.
- It _may_ reject lockfiles with schema v1, but can also choose to read
them, if possible.
(For example, the change we proposed to rename `dev-dependencies` to
`dependency-groups` would've been backwards-compatible: newer versions
of uv could still read lockfiles that used the `dev-dependencies` field
name, but older versions should reject lockfiles that use the
`dependency-groups` field name.)
Closes https://github.com/astral-sh/uv/issues/8465.
With a change like https://github.com/astral-sh/uv/pull/8458, we really
need tests for these.
I'm just going to take the possible performance hit of these slow tests
and deal with optimizing them separately.
## Summary
Fix the flag description: `to detect and with` --> `to detect packages
with`
This PR adds support for `uv lock --dry-run`, as described in issue
#6408.
One thing to note: this functionality, as implemented, isn't limited to
`-U` (if someone adds a dependency to the project's `pyproject.toml`,
the plan will include these changes).
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Previously, `uv tree --package` had some strange behavior due to how we
were computing the root nodes. This PR refactors the entire
implementation to use `petgraph` so we can do proper operations on a
graph structure.
Closes https://github.com/astral-sh/uv/issues/8382.
## Summary
This PR delays the removal of an existing version after downloading the
new version when running `uv python install --reinstall`.
If the download fails, we can keep the existing version working.
## Test Plan
```console
$ cargo run -- python install 3.13
$ cargo run -- python install --reinstall 3.13 # when downloading, `ctrl-c` to interrupt
$ cargo run -- python list
```
## Summary
Instead of creating a new entry, we should reuse the existing entry (to
preserve decor); similarly, we should avoid overwriting fields that are
already "correct".
Closes https://github.com/astral-sh/uv/issues/8483.
## Summary
We no longer need this struct; we bumped the cache bucket version
anyway, so the `Timestamp` variant is never encountered. This means we
get real Serde error messages.
Closes https://github.com/astral-sh/uv/issues/8488.
## Summary
This is part of making
https://github.com/astral-sh/uv/issues/7299#issuecomment-2385286341
better. You can now use `tool.uv.dependency-metadata` for direct URL
requirements. Unfortunately, you _must_ include a version, since we need
one to perform resolution.
## Summary
Look for a system-level `uv.toml` config file under `/etc/uv/uv.toml` or
`C:\ProgramData`.
This PR is to address #6742 and start a conversation.
## Test Plan
This was tested manually on macOS. I am happy to contribute
tests once we settle on the approach.
cc @thatch
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
If the user has an upper-bound in a `requires-python`, we don't
correctly narrow it during resolution. We should be narrowing based on
the intersection.
Closes #8297.
## Summary
The desired behavior for `uv tree` and `uv pip list` with the `-q` / `--quiet`
flag is to still produce output (see
https://github.com/astral-sh/uv/issues/8379#issuecomment-2425093709).
This is implemented here.
Closes https://github.com/astral-sh/uv/issues/8379
## Test Plan
Use `uv tree -q` or `uv pip list -q` on any uv project setup and expect
the corresponding output.
Added tests for that as well.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Now, we use four spaces (rather than one space) for cases like:
```toml
dependencies = [ # comment 0
# comment 1
"anyio==3.7.0", # comment 2
# comment 3
]
```
## Summary
Rather than relying on the distribution and package URL being the same
(which isn't true for Git dependencies), we can just use the
intersection of the markers directly.
Closes https://github.com/astral-sh/uv/issues/8381.
Cherry-picked from #8347
Might fix https://github.com/astral-sh/uv/issues/6940 — I'm not seeing a
failure over there after this change. I think there may be some problem
with concurrent reads of junctioned files on the DevDrive? It's really
hard to say.
We might lose some important test coverage with this change. I'm not
sure what to do about that either.
Cherry-picked from https://github.com/astral-sh/uv/pull/8347
Seems generally really helpful to see the unfiltered snapshot when a
test fails. Especially when debugging filters on Windows.
## Summary
This PR addresses the problem where same-line comments in
`pyproject.toml` could end up in unpredictable positions after `uv add`
or `uv remove` reformats the `pyproject.toml` file.
Introduced the `Comment` structure in `pyproject_mut` module to
distinguish "same-line" comments and "full-line" comments while
reformatting, because logic for them differs.
Sorry, the implementation could be clumsy, I'm just learning Rust, but
it seems to work 😅
Closes https://github.com/astral-sh/uv/issues/8343
## Test Plan
Added the new test:
`add_preserves_comments_indentation_and_sameline_comments`
To test, I followed the steps from the issue ticket
https://github.com/astral-sh/uv/issues/8343
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Going forward, we're going to provide better versioning guarantees
around using the same cache across multiple uv versions, so this PR
updates the docs to reflect that. It also bumps the `sdists-` version to
fix the inconvenience demonstrated in
https://github.com/astral-sh/uv/issues/8367.
Closes https://github.com/astral-sh/uv/issues/8367.
## Summary
Resolves #7685
## Test Plan
```console
$ echo "this is an invalid netrc" > .netrc
$ NETRC=.netrc cargo run -- pip install anyio --index-url https://pypi-proxy.fly.dev/basic-auth/simple --strict -v
DEBUG uv 0.4.24 (f4d5fba61 2024-10-19)
DEBUG Searching for default Python interpreter in system path or `py` launcher
DEBUG Found `cpython-3.11.2-windows-x86_64-none` at `D:\Projects\Rust\uv\.venv\Scripts\python.exe` (virtual environment)
DEBUG Using Python 3.11.2 environment at .venv
DEBUG Acquired lock for `.venv`
DEBUG At least one requirement is not satisfied: anyio
DEBUG Using request timeout of 30s
DEBUG Solving with installed Python version: 3.11.2
DEBUG Solving with target Python version: >=3.11.2
DEBUG Adding direct dependency: anyio*
DEBUG No cache entry for: https://pypi-proxy.fly.dev/basic-auth/simple/anyio/
WARN Error reading netrc file: parsing error: bad toplevel token 'this' (line 1) in the file '.netrc'
DEBUG Searching for a compatible version of anyio (*)
DEBUG No compatible version found for: anyio
× No solution found when resolving dependencies:
╰─▶ Because anyio was not found in the package registry and you require anyio, we can conclude that your
requirements are unsatisfiable.
hint: An index URL (https://pypi-proxy.fly.dev/basic-auth/simple) could not be queried due to a lack of valid
authentication credentials (401 Unauthorized).
DEBUG Released lock at `D:\Projects\Rust\uv\.venv\.lock`
error: process didn't exit successfully: `target\debug\uv.exe pip install anyio --index-url https://pypi-proxy.fly.dev/basic-auth/simple --strict -v` (exit code: 1)
```
Part of https://github.com/astral-sh/uv/issues/2777
I noticed we're seeing "Python ABI" _a lot_ in error messages which I
did not expect. This improves a common case by being a little more
specific.
Basically, if username-only authentication came from the _cache_ instead
of being present on the _request URL_ to start, we'd end up ignoring it
during password lookups which breaks keyring.
Includes some cosmetic changes to the logging and commentary in the
middleware, because I was confused when reading the code and logs.
The docs reference `UV_INDEX_`, but the code actually uses
`UV_HTTP_BASIC_` as the prefix for environment variable credentials.
See PR #7741
Code is at
https://github.com/astral-sh/uv/blob/main/crates/uv-static/src/env_vars.rs#L163
```rust
/// Generates the environment variable key for the HTTP Basic authentication username.
pub fn http_basic_username(name: &str) -> String {
    format!("UV_HTTP_BASIC_{name}_USERNAME")
}

/// Generates the environment variable key for the HTTP Basic authentication password.
pub fn http_basic_password(name: &str) -> String {
    format!("UV_HTTP_BASIC_{name}_PASSWORD")
}
```
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
The hack in pip itself only modifies entry points called
`pip<number>.<number>` and `easy_install-<number>.<number>`; uv
previously dropped too many items, including any of the form
`foo.<number>`.
Found while trying to install `memray` which somewhat notably does not
provide an abi3 wheel, so the installed, suffixed script matches. At a
minimum, this makes the installed files match the `entry_points.txt`
more than it did previously, which makes `pickley` happy.
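For illustration, a check along those lines might look like the following (a hypothetical helper, not uv's actual code):
```rust
/// Hypothetical sketch: only treat an entry point as one of pip's suffixed
/// installer scripts (`pip<number>.<number>` / `easy_install-<number>.<number>`),
/// rather than dropping anything of the form `foo.<number>`.
fn is_versioned_installer_script(name: &str) -> bool {
    fn looks_like_version(s: &str) -> bool {
        match s.split_once('.') {
            Some((major, minor)) => {
                !major.is_empty()
                    && !minor.is_empty()
                    && major.chars().all(|c| c.is_ascii_digit())
                    && minor.chars().all(|c| c.is_ascii_digit())
            }
            None => false,
        }
    }
    name.strip_prefix("pip").is_some_and(looks_like_version)
        || name.strip_prefix("easy_install-").is_some_and(looks_like_version)
}
```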
## Test Plan
New test provided for previously-untested code.
Closes https://github.com/astral-sh/uv/issues/8228
e.g., on this branch
```
❯ uv python install 3.13t 3.13
❯ cargo build
❯ cargo run -q --bin uvx -- --from build python -c "import sys; print(sys.base_prefix)"
/Users/zb/.local/share/uv/python/cpython-3.13.0-macos-aarch64-none
❯ cargo run -q --bin uvx -- -p 3.13 --from build python -c "import sys; print(sys.base_prefix)"
/Users/zb/.local/share/uv/python/cpython-3.13.0-macos-aarch64-none
❯ cargo run -q --bin uvx -- -p 3.13t --from build python -c "import sys; print(sys.base_prefix)"
/Users/zb/.local/share/uv/python/cpython-3.13.0+freethreaded-macos-aarch64-none
```
and on main
```
❯ cargo build
❯ cargo run -q --bin uvx -- --from build python -c "import sys; print(sys.base_prefix)"
Installed 3 packages in 12ms
/Users/zb/.local/share/uv/python/cpython-3.13.0+freethreaded-macos-aarch64-none
```
I want to add more test coverage around this, but I've noticed the
free-threaded discovery tests are a bit off as-is and it'll be a bigger
task. I think the recent bugs around discovery indicate we should invest
more into that test framework.
Extends #7959
While I was looking at that message, I noticed I didn't love the
readability of the existing message and opted to follow up with a change
to both.
When a patch version isn't specified and a matching version is
referenced, uv defaults the patch to 0, which could be unclear or
confusing. This PR warns the user about that default.
## Summary
This is the first part of
https://github.com/astral-sh/uv/issues/7426. I'll tackle the second part
mentioned there (`~=`) in a separate PR once I know this is the correct
way to warn users.
## Test Plan
Unit tests were added
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
We shouldn't show these in `uv add`, especially when the thing we're
adding is about to have a lower-bound put on it. Now, we only show these
when the user runs `uv lock` or `uv sync`.
## Summary
If you pass a named index via the CLI, you can now reference it as a
named source. This required some surprisingly large refactors, since we
now need to be able to track whether a given index was provided on the
CLI vs. elsewhere (since, e.g., we don't want users to be able to
reference named indexes defined in global configuration).
Closes https://github.com/astral-sh/uv/issues/7899.
## Summary
This PR adds an index pin with `uv add` when the user provides exactly
one named index. We don't pin if the user provides an unnamed index, or
if they provide multiple indexes.
We probably _could_ pin on multiple indexes by writing the sources
_after_ resolution, if that's desirable. But we have no idea which index
the user _expects_ each package to come from.
Possible extensions:
- `uv add --no-pin` to avoid this pinning.
- Warn if they provide a single, unnamed index? I'm not sure if that's
worth a warn. Open to input.
## Summary
This was already properly handled, but the operation itself was in a
`debug_assert!`, so it wasn't running at all in production builds...
Closes https://github.com/astral-sh/uv/issues/8208.
## Summary
This PR lifts the restriction that a package must come from a single
index. For example, you can now do:
```toml
[project]
name = "project"
version = "0.1.0"
readme = "README.md"
requires-python = ">=3.12"
dependencies = ["jinja2"]
[tool.uv.sources]
jinja2 = [
{ index = "torch-cu118", marker = "sys_platform == 'darwin'"},
{ index = "torch-cu124", marker = "sys_platform != 'darwin'"},
]
[[tool.uv.index]]
name = "torch-cu118"
url = "https://download.pytorch.org/whl/cu118"
[[tool.uv.index]]
name = "torch-cu124"
url = "https://download.pytorch.org/whl/cu124"
```
The construction is very similar to the way we handle URLs today: you
can have multiple URLs for a given package, but they must appear in
disjoint forks. So most of the code is just adding that abstraction to
the resolver, following our handling of URLs.
Closes #7761.
## Summary
The behavior is as follows:
- If you provide `--index` or `--default-index` on the command-line, we
add those indexes to the `pyproject.toml` (with names, if provided, as
in `--index pytorch=https://download.pytorch.org/whl/cu121`).
- If you provide `--index-url` or `--default-index`, we warn, but don't
add the indexes to the file. (This seems wrong -- why not add them?)
- If you provide an index with a name or URL that already exists, we
remove that entry, and add the new index to the top of the list (since
it now has highest priority).
- If you provide a `--default-index`, and an index already has `default
= true`, we remove that entry, since it won't be used anymore.
We do _not_ pin packages to specific indexes yet.
## Summary
This PR enables users to provide index credentials via named environment
variables.
For example, given an index named `internal` that requires a username
(`public`) and password
(`koala`), you can define the index (without credentials) in your
`pyproject.toml`:
```toml
[[tool.uv.index]]
name = "internal"
url = "https://pypi-proxy.corp.dev/simple"
```
Then set the `UV_INDEX_INTERNAL_USERNAME` and
`UV_INDEX_INTERNAL_PASSWORD`
environment variables, where `INTERNAL` is the uppercase version of the
index name:
```sh
export UV_INDEX_INTERNAL_USERNAME=public
export UV_INDEX_INTERNAL_PASSWORD=koala
```
## Summary
This PR adds a first-class API for defining registry indexes, beyond our
existing `--index-url` and `--extra-index-url` setup.
Specifically, you now define indexes like so in a `uv.toml` or
`pyproject.toml` file:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
```
You can also provide indexes via `--index` and `UV_INDEX`, and override
the default index with `--default-index` and `UV_DEFAULT_INDEX`.
### Index priority
Indexes are prioritized in the order in which they're defined, such that
the first-defined index has highest priority.
Indexes are also inherited from parent configuration (e.g., the
user-level `uv.toml`), but are placed after any indexes in the current
project, matching our semantics for other array-based configuration
values.
You can mix `--index` and `--default-index` with the legacy
`--index-url` and `--extra-index-url` settings; the latter two are
merely treated as unnamed `[[tool.uv.index]]` entries.
### Index pinning
If an index includes a name (which is optional), it can then be
referenced via `tool.uv.sources`:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
[tool.uv.sources]
torch = { index = "pytorch" }
```
If an index is marked as `explicit = true`, it can _only_ be used via
such references, and will never be searched implicitly:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
explicit = true
[tool.uv.sources]
torch = { index = "pytorch" }
```
Indexes defined outside of the current project (e.g., in the user-level
`uv.toml`) can _not_ be explicitly selected.
(As of now, we only support using a single index for a given
`tool.uv.sources` definition.)
### Default index
By default, we include PyPI as the default index. This remains true even
if the user defines a `[[tool.uv.index]]` -- PyPI is still used as a
fallback. You can mark an index as `default = true` to (1) disable the
use of PyPI, and (2) bump it to the bottom of the prioritized list, such
that it's used only if a package does not exist on a prior index:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
default = true
```
### Name reuse
If a name is reused, the higher-priority index with that name is used,
while the lower-priority indexes are ignored entirely.
For example, given:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
[[tool.uv.index]]
name = "pytorch"
url = "https://test.pypi.org/simple"
```
The `https://test.pypi.org/simple` index would be ignored entirely,
since it's lower-priority than `https://download.pytorch.org/whl/cu121`
but shares the same name.
Closes #171.
## Future work
- Users should be able to provide authentication for named indexes via
environment variables.
- `uv add` should automatically write `--index` entries to the
`pyproject.toml` file.
- Users should be able to provide multiple indexes for a given package,
stratified by platform:
```toml
[tool.uv.sources]
torch = [
{ index = "cpu", markers = "sys_platform == 'darwin'" },
{ index = "gpu", markers = "sys_platform != 'darwin'" },
]
```
- Users should be able to specify a proxy URL for a given index, to
avoid writing user-specific URLs to a lockfile:
```toml
[[tool.uv.index]]
name = "test"
url = "https://private.org/simple"
proxy = "http://<omitted>/pypi/simple"
```
## Summary
Cache the path to the git executable in a `LazyLock` and reuse it
throughout the process. This might reduce the cost of repeatedly finding
the git executable.
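A minimal sketch of the pattern (assuming the `which` crate for lookup; uv's actual resolution logic may differ):
```rust
use std::path::PathBuf;
use std::sync::LazyLock;

/// Resolve the `git` executable once; every subsequent call reuses the
/// cached result for the lifetime of the process.
static GIT_PATH: LazyLock<Option<PathBuf>> =
    LazyLock::new(|| which::which("git").ok());

fn git_executable() -> Option<&'static PathBuf> {
    GIT_PATH.as_ref()
}
```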
When building a source distribution into a wheel, we perform the build
inside a temporary directory within the output directory. By default,
the output directory is `dist/` in the repository root. This temp dir
placement allows us to move the final wheel to the output directory
instead of copying it (a temp dir might be on another device, which
would force a copy instead of a move).
Some build backends such as hatchling traverse upwards from the current
directory (the source dist build location) looking for gitignore files
to consider. By adding a gitignore in `dist/` with `*`, we caused
hatchling to ignore all files in our temporary build directory below it,
causing empty wheels. To prevent this, we add a `.git` file as a phony
git root. We are already using this trick successfully in the cache.
Hatchling sees this `.git` file, considers it a boundary and does not
traverse up to `dist/.gitignore`.
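A minimal sketch of the trick (the helper name is illustrative):
```rust
use std::fs;
use std::io;
use std::path::Path;

/// Drop an empty `.git` file into the temporary build directory so that
/// backends like hatchling treat it as a repository boundary and stop
/// traversing upward to `dist/.gitignore`.
fn write_phony_git_root(build_dir: &Path) -> io::Result<()> {
    fs::write(build_dir.join(".git"), "")
}
```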
Fixes #8200
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Closes https://github.com/astral-sh/uv/issues/8213
I didn't mean to remove these when updating the regular expression.
Arguably, they shouldn't be used anymore, but we should make that choice
with intention.
As mentioned in https://github.com/astral-sh/uv/issues/8189, we only
checked whether an interpreter was free-threaded _when_ free-threaded
variants were requested. But we should not use free-threaded
interpreters unless explicitly requested.
## Summary
This PR declares and documents all environment variables that are used
in one way or another in `uv`, either internally, or externally, or
transitively under a common struct.
I think over time, as uv has grown, many environment variables have been
introduced. It's hard to know which ones exist, which ones are missing,
what they're used for, or where they're used across the code. The docs
only document a handful of them; for the others, you'd have to dive into
the code and inspect across crates to know which crates they're used in
or where they're relevant.
This PR is a starting attempt to unify them, make it easier to discover
which ones we have, and maybe unlock future possibilities in automating
documentation generation for them.
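A minimal sketch of the consolidation pattern (the variable names shown are real uv variables, but the struct shape here is illustrative rather than the exact code):
```rust
/// All environment variable names live in one place, so call sites
/// reference `EnvVars::UV_CACHE_DIR` instead of scattering string literals.
pub struct EnvVars;

impl EnvVars {
    /// Directory to use for uv's cache.
    pub const UV_CACHE_DIR: &'static str = "UV_CACHE_DIR";
    /// Python interpreter request, equivalent to `--python`.
    pub const UV_PYTHON: &'static str = "UV_PYTHON";
}
```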
I think we can split out into multiple structs later to better organize,
but given the high influx of PR's and possibly new environment variables
introduced/re-used, it would be hard to try to organize them all now
into their proper namespaced struct while this is all happening given
merge conflicts and/or keeping up to date.
I don't think this has any impact on performance as they all should
still be inlined, although it may affect local build times on changes to
the environment vars as more crates would likely need a rebuild. Lastly,
some of them are declared but not used in the code, for example those in
`build.rs`. I left them declared because I still think it's useful to at
least have a reference.
Did I miss any? Are their initial docs cohesive?
Note, `uv-static` is a terrible name for a new crate, thoughts? Others
considered `uv-vars`, `uv-consts`.
## Test Plan
Existing tests
## Summary
If you `uv add` a Git dependency, we resolve it twice:

While we need to avoid sharing state between `lock` and `sync` (see the
large TODO that moved in this change), we should prioritize sharing
state between different resolver operations.
Allow `'*'` as a value to match all hosts, and provide
`reqwest_blocking_get` for uv tests, so that they also respect
`UV_INSECURE_HOST` (since they respect `ALL_PROXY`).
This lets those tests pass with a forward proxy - we can think about
setting a root certificate later so that we don't need to disable
certificate verification at all.
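A hypothetical helper illustrating the wildcard semantics (uv's actual trusted-host handling is richer than this):
```rust
/// `*` marks every host as insecure; otherwise only explicitly listed
/// hosts skip certificate verification.
fn is_insecure_host(allowed: &[String], host: &str) -> bool {
    allowed.iter().any(|entry| entry == "*" || entry == host)
}
```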
---
I tested this locally by running:
```bash
GIT_SSL_NO_VERIFY=true ALL_PROXY=localhost:8080 UV_INSECURE_HOST="*" cargo nextest run sync_wheel_path_source_error
```
With my forward proxy showing:
```
2024-10-09T18:20:16.300188Z INFO fopro: Proxied GET cc2fedbd88a6546c1727ae13fa977a/cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl (headers 480.024958ms + body 92.345666ms)
2024-10-09T18:20:16.913298Z INFO fopro: Proxied GET https://pypi.org/simple/pycparser/ (headers 509.664834ms + body 269.291µs)
2024-10-09T18:20:17.383975Z INFO fopro: Proxied GET 5f610ebe4298517d912eb1c76e1a53/pycparser-2.21-py2.py3-none-any.whl.metadata (headers 443.184208ms + body 2.094792ms)
```
## Summary
When trying out standalone scripts I noticed a warning that said
`--no_readme` is a no-op when I provided `--no-readme`.
I searched for this "--\w+_" pattern in the codebase and found a similar
typo in warnings in other places, so I fixed them here.
## Test Plan
No test plan; since these commands are mainly for interactive use, I
assume nobody parses warnings about unnecessary options.
As per
https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html
Before that, there were 91 separate integration test binaries.
(As discussed on Discord — I've done the `uv` crate, there's still a few
more commits coming before this is mergeable, and I want to see how it
performs in CI and locally).
## Summary
Related issues: #8009, #7549
Although `PYTHONIOENCODING=utf-8` forces Python to use UTF-8 for
`stdout`/`stderr`, it can't prevent code like
`sys.stdout.buffer.write()` or `subprocess.call(["cl.exe", ...])` from
bypassing the encoder. This PR uses lossy UTF-8 conversion to avoid
decoding errors.
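A minimal sketch of lossy decoding (a simplified, synchronous stand-in for the actual reader logic):
```rust
use std::process::Command;

fn run_and_print(python: &str, script: &str) -> std::io::Result<()> {
    let output = Command::new(python).arg(script).output()?;
    // Lossy conversion never fails: any invalid UTF-8 bytes are replaced
    // with U+FFFD instead of aborting the build with a decode error.
    print!("{}", String::from_utf8_lossy(&output.stdout));
    eprint!("{}", String::from_utf8_lossy(&output.stderr));
    Ok(())
}
```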
## Alternative
Using `bstr` crate might be better since it can preserve original
information. Or we should follow the Windows convention, unset
`PYTHONIOENCODING` and decode with system default encoding.
## Test Plan
Running locally with non-ASCII character in `UV_CACHE_DIR` works fine,
but I have no unit test plan. Testing locale problem is hard :(
This is to address my own issue #7908
## Summary
This change uses `clap`'s `value_delimiter` parser to populate the
`with` `Vec<String>`, which previously could only be empty or contain
one value per `--with` flag. It keeps the current code structure but
allows multiple arguments to be passed with a single `--with` flag.
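Roughly, the derive-level change looks like this (the surrounding struct is illustrative, not uv's actual CLI definition):
```rust
use clap::Parser;

#[derive(Parser)]
struct ToolRunArgs {
    /// Accepts both repeated flags (`--with numpy --with polars`) and
    /// comma-separated values (`--with numpy,polars`).
    #[arg(long, value_delimiter = ',')]
    with: Vec<String>,
}
```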
## Test Plan
Can be tested with the following CLI:
```bash
target/debug/uv tool run --with numpy,polars,matplotlib ipython -c "import numpy;import polars;import matplotlib;"
```
And the former behavior of multiple `--with` flags is kept:
```bash
target/debug/uv tool run --with numpy --with polars --with matplotlib ipython -c "import numpy;import polars;import matplotlib;"
```
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Fixes #8097. One challenge is that `Pep723Script` is used for both
reading and writing the metadata, so I wasn't sure how to handle
`script.write` when reading from stdin (currently we just ignore it, but
maybe we should raise an error?).
## Test Plan
Added a test case copying the `test_stdin` with PEP 723 metadata.
First off, congratulations on the 0.3 release! The PEP 723 standalone
scripts support is awesome, and I can already imagine a long tail of
little scripts of my own that would benefit from this functionality.
## Background
I really like the Deno CLI's support for running and installing remote
scripts.
```
deno run <url>
```
```
deno install --name foo <url>
```
I can see parallels with `uv run` and `uvx`. After mentioning this on
Discord, @zanieb suggested I could take a stab at a PR to implement
similar functionality for uv.
## Summary
This PR attempts to add support for executing remote standalone scripts
directly with `uv run`. While this is already possible by downloading
the script (i.e., via curl/wget) and then using uv run, having direct
support would be convenient.
The proposed functionality is:
```sh
uv run <url>
```
Another addition/alternative could be to support running scripts via
stdin:
```sh
curl -sL <url> | uv run -
```
But that is not implemented in this PR.
## Test Plan
I noticed that GitHub and `files.pythonhosted.org` URLs are used in some
of the tests. I've created a personal [GitHub
Gist](https://gist.github.com/manzt/cb24f3066c32983672025b04b9f98d1f)
with the example from PEP 723 for now to test this functionality.
~However, I couldn't figure out how to get the `with_snapshot` config
filter to filter out the tempfile path, so the test is currently
failing. Any assistance with this would be appreciated.~
## Notes
I'm not totally pleased with the implementation of this PR. I think it
would be better to handle the case earlier (and probably reuse the
cache), and avoid mutation, but since the run command requires a local
path, this was the simplest implementation I could come up with.
I know that performance is paramount with uv so I totally understand if
this requires a different approach or something more explicit to avoid
"inferring" the path. I'm just taking this as an opportunity to learn a
little more Rust and acquaint myself with the code base. cheers!
---------
Co-authored-by: Andrew Gallant <jamslam@gmail.com>
## Summary
In the routine we use to verify whether the lockfile is up-to-date, we
sometimes have to resolve package metadata. If that resolution step
fails, the resolver is left in a bad state, as various tasks are marked
as pending despite the error. Treating that as a recoverable failure
thus leads to a deadlock.
This PR modifies the errors to be treated as fatal.
I think a more holistic fix here would be to add some kind of guard to
ensure that any tasks that fail are no longer marked as pending (or
enforce this in the type system).
Closes https://github.com/astral-sh/uv/issues/8074.
## Summary
We can't rely on reading these from the `pyproject.toml`; instead, we
resolve the project metadata (which will typically just require reading
the `pyproject.toml`, but will go through our standard metadata paths).
Closes https://github.com/astral-sh/uv/issues/8071.
This commit fixes a bug where disjointness checking didn't always
satisfy commutativity. And it *should*. The `is_disjoint_commutative`
test added here serves as a regression test; before this commit,
its second assertion failed.
That is, given `m1 = extra == "A" and extra != "B"` and
`m2 = extra == "A"`, we were saying that m1 was disjoint with
m2 (wrong) but that m2 was not disjoint with m1 (right).
It turned out that this was a "simple" matter of not using the
correct parent node when calling negation. Likely just a
transcription snafu.
This bug does not seem restricted in scope to extras, which is
how I found it, so it's not clear why we haven't noticed it until
now. I noticed it because I was formulating markers in a similar
format for resolver forking based on conflicting extras, and this
resulted in incorrectly filtering out dependencies due to `is_disjoint`
returning a false positive.
I found myself using this more verbose representation to double
check that there wasn't any other "hidden" state occurring in
markers, and that the graph debug display wasn't hiding anything
that I was missing.
## Summary
Now that `uv-install-wheel` output shows up in `--verbose`, let's leave
`debug!` to logs that users might want to see. Logging _every_ file we
install seems excessive.
Fixes: #8058
## Test Plan
Integration test (but only for Unix, because symlinks on Windows require
admin privs. Plus, they are not really all that idiomatic on Windows)
## Summary
These values can include spaces when passed on the command-line... Clap
doesn't give us a way to provide a value separator for _only_ an
environment variable (as is pip's behavior), so I think we're stuck
using comma-separated values here for now.
See, e.g., https://github.com/clap-rs/clap/discussions/3796.
Closes #8057.
## Summary
The issue here is that, if a user has a `requires-python` like `>=
3.7, != 3.8.5`, this gets expanded to the following bounds:
- `[3.7, 3.8.5)`
- `(3.8.5, ...`
We then convert this to the specifier `>= 3.7, < 3.8.5, > 3.8.5`. But the
commas in that expression are conjunctions... so it's impossible to
satisfy: no version is both `< 3.8.5` and `> 3.8.5`.
Instead, we now preserve the input `requires-python` and just
concatenate the terms, only using PubGrub to compute the _bounds_.
Closes https://github.com/astral-sh/uv/issues/7862.
## Summary
- Adds detail to the `uv tool run --help` CLI that lets users know about
the shorthand, `uvx`
## Test Plan
Building and running CLI
```
…📝✓] via 🐋 orbstack via 🎁 v0.4.16 via pyenv via ⚙️ v1.81.0on ☁️ (us-east-2) took 3s
➜ ./target/debug/uv tool run --help
Run a command provided by a Python package. Also available via the alias `uvx`.
Usage: uv tool run [OPTIONS] [COMMAND]
...
You can also use `uvx` as an alias for `uv tool run`.
Use `uv help tool run` for more details.
```
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Closes #7977. Makes `PythonDownloadRequest` account for the prerelease
part if allowed. Also stores the prerelease in `PythonInstallationKey`
directly as a `Prerelease` rather than a string.
## Test Plan
Correctly picks the relevant prerelease (rather than picking the most
recent one):
```
λ cargo run python install 3.13.0rc2
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.17s
Running `target/debug/uv python install 3.13.0rc2`
Searching for Python versions matching: Python 3.13rc2
cpython-3.13.0rc2-macos-aarch64-none ------------------------------ 457.81 KiB/14.73 MiB ^C
λ cargo run python install 3.13.0rc3
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.17s
Running `target/debug/uv python install 3.13.0rc3`
Searching for Python versions matching: Python 3.13rc3
Found existing installation for Python 3.13rc3: cpython-3.13.0rc3-macos-aarch64-none
```
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
## Summary
This PR adds the ability to list available scripts in the environment
when `uv run` is invoked without any arguments.
It somewhat mimics the behavior of the `rye run` command
(see https://rye.astral.sh/guide/commands/run).
This is an attempt to fix #4024.
## Test Plan
I added test cases. The CI pipeline should pass.
### Manual Tests
```shell
❯ uv run
Provide a command or script to invoke with `uv run <command>` or `uv run script.py`.
The following scripts are available:
normalizer
python
python3
python3.12
See `uv run --help` for more information.
```
---------
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
Improve hints when using the simple index URL instead of the upload URL
in `uv publish`. This is the most common confusion when publishing, so
we give it some extra care and put it more centrally in the CLI help.
Fixes #7860
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
https://github.com/astral-sh/uv/pull/7766 banned using `uv add` to
create self-dependencies in the `dev` section, which breaks `uv add --dev
.[extra]`, a fair use case for adding a self-dependency.
Maybe we should only allow this if the added requirement includes an
extra group? Otherwise it's a bit weird.
## Summary
Closes #7841. If there are other env vars that would also benefit from
this value parser, please let me know and I can add them to this PR.
## Test Plan
When running the same example from the linked issue, it now works:
```
UV_PYTHON= cargo run -- init x
Compiling ...
Finished `dev` profile [unoptimized + debuginfo] target(s) in 29.06s
Running `target/debug/uv init x`
Initialized project `x` at `/Users/krishnanchandra/Projects/uv/x`
```
## Summary
This PR adds support for the `UV_FIND_LINKS` environment variable as an
alternative to the `--find-links` command-line option, as requested in
#1839.
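With clap, this kind of environment-variable fallback can be expressed via the `env` attribute; a sketch under the assumption of a simplified argument type and clap's `env` feature (uv's real `--find-links` accepts multiple locations):
```rust
use clap::Parser;

#[derive(Parser)]
struct Args {
    /// Falls back to `UV_FIND_LINKS` when `--find-links` is not passed.
    #[arg(long = "find-links", env = "UV_FIND_LINKS")]
    find_links: Option<String>,
}
```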
## Test Plan
A unit test was added to validate that setting `UV_FIND_LINKS` provided
the same result as a link provided with the `--find-links` command-line
option.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Unlike `cp36-...`, which requires exactly CPython 3.6, `py36-none` is
compatible with all versions starting at Python 3.6.
Note that `py3x-none` should not be used. Instead, use `py3-none` with
`requires-python`.
Fixes #7800
## Summary
`PythonDownloadKey` (e.g., `cpython-3.13.0rc3-darwin-aarch64-none`) and
`PlatformTriple` in `fetch-download-metadata.py` have a slight
inconsistency in the ordering of `os` and `arch`. In `PythonDownloadKey`,
`os` precedes `arch`, while in `PlatformTriple`, `arch` comes before
`platform` (equivalent to `os`). This difference in ordering affects the
sorting logic, giving `arch` higher priority than `platform` in the
`download-metadata.json` file, leading to a somewhat unexpected order of
entries.
Before:
<img width="676" alt="image"
src="https://github.com/user-attachments/assets/adb24a2e-da70-4a09-a702-4b5d71600b2c">
After:
<img width="725" alt="image"
src="https://github.com/user-attachments/assets/c6c76e6a-d3fd-43dc-bfb0-b3a4a3fe2b6b">
## Summary
If a supported environment includes a Python marker, we don't simplify
it out, despite _storing_ the simplified markers. This PR modifies the
validation code to compare simplified to simplified markers.
Closes https://github.com/astral-sh/uv/issues/7876.
## Summary
When using `uv tree --package foo`, an extra empty line appears at the
beginning, which seems unnecessary since `uv tree` without the package
option doesn’t have this. It’s possible that the intention was to add
separation between packages, i.e., the correct implementation should be:
```rust
if !std::mem::take(&mut first) {
    lines.push(String::new());
}
```
Even if corrected, this extra spacing might be redundant as `uv tree`
doesn’t include these empty lines between packages by default.
```console
$ uv init project
$ cd project
$ uv init foo
$ uv tree
Using CPython 3.12.5
Resolved 2 packages in 1ms
foo v0.1.0
project v0.1.0
$ uv tree --package project
Using CPython 3.12.5
Resolved 2 packages in 1ms
project v0.1.0
```
## Summary
Closes https://github.com/astral-sh/uv/issues/4931.
## Test Plan
Tried running the following commands locally to make sure that all cases
work:
```
unset PAGER
cargo run -- help venv
```
With no pager set, `uv` correctly finds `less` on the system as it did
before and passes the help output to it.
---
```
PAGER= cargo run -- help venv
```
This correctly prints out to stdout and does not use any pager.
---
```
PAGER=most cargo run -- help venv
```
This correctly opens the `most` pager as shown below:
<img width="1917" alt="Screenshot 2024-07-27 at 5 14 42 PM"
src="https://github.com/user-attachments/assets/dfaa5a83-b47e-4f5c-9be1-b0b1e9818932">
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
We now display the "Did you mean `python-dotenv`?"-style errors on build
failure, rather than in `uv add`. This is less opinionated and couples
us less to specific content in the registry.
## Test Plan

Would it be okay to expose this struct? We currently use our own
ResolveProvider, and it would be nice to use the `FlatDistributions` for
easy `VersionMap` creation.
Thanks!
## Summary
Create a `main` function as the default for a packaged app, and
configure the default executable as:
`example-packaged-app = "example_packaged_app:main"`
This is often what you want: the executable has the same name as the
app. The purpose is to more often match what the user wants, so they
don't even have to rename the command to start developing.
## Test Plan
- existing tests are updated
## Summary
This PR enables users to provide multiple source entries in
`tool.uv.sources`, e.g.:
```toml
[tool.uv.sources]
httpx = [
{ git = "https://github.com/encode/httpx", tag = "0.27.2", marker = "sys_platform == 'darwin'" },
{ git = "https://github.com/encode/httpx", tag = "0.24.1", marker = "sys_platform == 'linux'" },
]
```
The implementation is relatively straightforward: when we lower the
requirement, we now return an iterator rather than a single requirement.
In other words, the above is transformed into two requirements:
```txt
httpx @ git+https://github.com/encode/httpx@0.27.2 ; sys_platform == 'darwin'
httpx @ git+https://github.com/encode/httpx@0.24.1 ; sys_platform == 'linux'
```
We verify (at deserialization time) that the markers are
non-overlapping.
Closes https://github.com/astral-sh/uv/issues/3397.
## Summary
This was brought up on Twitter recently. `dotenv` hasn't been updated in
years and doesn't build successfully anymore. Users almost always mean
to install `python-dotenv`. I think we can add helpful hints here to
point users in the right direction.
## Test Plan

## Summary
`click` has one dependency of `colorama` only on Windows, `uv tree
--invert` should not include `colorama` on non-Windows platforms, but
currently:
```console
$ uv init
$ uv add click
$ uv tree --invert --python-platform macos
colorama v0.4.6
```
it should:
```console
$ uv tree --invert --python-platform macos
click v8.1.7
└── project v0.1.0
```
## Summary
This PR modifies our parsing to allow spaces in URLs. I don't know if
this is a correct change... But we now parse URLs until we see:
- A newline.
- A semicolon (marker) or hash (comment), _preceded_ by a space. We
parse the URL until the last
non-whitespace character (inclusive).
- A semicolon (marker) or hash (comment) _followed_ by a space. We treat
this as an error, since the end of the URL is ambiguous (e.g.,
`https://foo.com; marker` would be a URL that ends in `;`).
Closes https://github.com/astral-sh/uv/issues/6032.
## Summary
This is a longstanding piece of technical debt. After we resolve, we
have a bunch of `ResolvedDist` entries. We then convert those to
`Requirement` (which is lossy -- we lose information like "the index
that the package was resolved to"), and then back to `Dist`.
## Summary
Historically, we've allowed the use of wheels that were downloaded from
PyPI even when the user passes `--no-binary`, if the wheel exists in the
cache. This PR modifies the cache lookup code such that we respect
`--no-build` and `--no-binary` in those paths.
Closes https://github.com/astral-sh/uv/issues/2154.
## Summary
My last change (#6616) mistakenly used `==` instead of `!=`.
😥 As a result, values were never trimmed, despite
what we wanted.
Values should now be trimmed if needed.
Also removes the trim of the header name, because if a header contains
spaces, the header will be skipped by the mailparse crate in the first
place.
## Test Plan
- A unit test has been added to validate that we correctly trim values.
- A unit test has been added to validate the header names containing
spaces are skipped.
## Summary
I have a workflow where I want to use `uv` as a dependency solver only,
and manage my environments with external tooling (Nix).
## Test Plan
Manually tested. Automated testing seems excessive for such a trivial
change.
## Problems
It's still not as useful as I'd like it to be.
`uv` unconditionally creates a virtual environment, something I would
expect `--no-sync` to disable.
This looks a bit more tricky to achieve and I'm not sure about how to
best structure it.
## Summary
This is another attempt using `module: bool` instead of `module:
Option<String>` following #7322.
The original PR can't be reopened after a force-push to the branch, so
I've created this new PR.
Resolves #6638
## Summary
This PR fixes #7733. According to the [CPython documentation on
`sys.stdout`](https://docs.python.org/3.12/library/sys.html#sys.stdout),
when `stdout`/`stderr` is a non-character device such as a pipe, the
encoding is set to the system locale on Windows. However, on the Rust
side, `stdout_reader` and `stderr_reader` expect them to be encoded in
UTF-8 and will fail when the child process writes non-ASCII characters
to stdout/stderr, e.g., a build directory name containing non-ASCII
characters.
Both
[CPython3](https://docs.python.org/3.12/using/cmdline.html#envvar-PYTHONIOENCODING)
and [PyPy](https://doc.pypy.org/en/default/man/pypy3.1.html#environment)
support the environment variable `PYTHONIOENCODING`. When it is set to
`utf-8`, Python will use UTF-8 encoding for `stdin`/`stdout`/`stderr`.
Since `stdin` is not used by the spawned python process and we expect
`stdout`/`stderr` to use UTF-8, this fix should work as expected.
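A minimal sketch of the change on the Rust side (the function is illustrative, not uv's actual spawn path):
```rust
use std::process::{Command, Output};

/// Force the spawned interpreter to emit UTF-8 on stdout/stderr so the
/// Rust readers can decode it, regardless of the Windows system code page.
fn spawn_python(python: &str, script: &str) -> std::io::Result<Output> {
    Command::new(python)
        .arg(script)
        .env("PYTHONIOENCODING", "utf-8")
        .output()
}
```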
## Test Plan
I only tested it on my computer with CPython 3.12 and 3.7. With the fix
applied I confirmed that [the case I
described](https://github.com/astral-sh/uv/issues/7733#issuecomment-2380416093)
is fixed.
I'm using Windows 11 with system locale set to code page 936.
## Summary
Small follow up to https://github.com/astral-sh/uv/pull/7724
## Test Plan
`cargo test`
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Resolves #7705
## Test Plan
`cargo test` and tested locally.
The snapshots were unstable due to the packages being built in a
non-deterministic order, so I used the quiet flag to suppress the
output.
Another question is whether we should label the build output to indicate
which package it belongs to?
## Summary
Adds a helpful context message when `uvx` is run without arguments.
To clarify, it displays the installed tools.
This addresses confusion, such as the one highlighted in issue #7348,
by making the output more user-friendly and informative.
Related #4024
## Test Plan
Updated the test snapshots to include the new output.
Running the tests locally with `cargo nextest run` confirms that the
tests pass.
The CI pipeline should also pass.
### Manual Testing
**uvx**
```shell
# Make sure you have the updated version of uv installed on your path.
# cargo install --path ./crates/uv --force
❯ uvx
Provide a command to invoke with `uvx <command>` or `uvx --from <package> <command>`.
The following tools are already installed:
black v24.8.0
- black
- blackd
ruff v0.6.7
- ruff
See `uvx --help` for more information.
```
**uv tool list**
```shell
# Make sure you have the updated version of uv installed on your path.
# cargo install --path ./crates/uv --force
❯ uv tool list
black v24.8.0
- black
- blackd
ruff v0.6.7
- ruff
```
**uv tool run**
```shell
# Make sure you have the updated version of uv installed on your path.
# cargo install --path ./crates/uv --force
❯ uv tool run
Provide a command to invoke with `uv tool run <command>` or `uv tool run --from <package> <command>`.
The following tools are already installed:
black v24.8.0
- black
- blackd
ruff v0.6.7
- ruff
See `uv tool run --help` for more information.
```
---
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
---------
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
## Summary
Similar to `cargo init --vcs <VCS>`, this PR adds the `--vcs <VCS>`
flag to `uv init`, allowing a version control system to be initialized
during project creation. By default, `uv init` will create a Git
repository if the `--vcs` flag is not provided. Use `--vcs none` to
disable this behavior.
Currently, only Git is supported. While Cargo also supports hg, pijul,
and fossil, this initial PR only includes Git. We can add more later if
there are any user requests.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This PR adds support for ```uv init --script```, as defined in issue
#7402 (started working on this before I saw jbvsmo's PR). Wanted to
highlight a few decisions I made that differ from the existing PR:
1. ```--script``` takes a path, instead of a path/name. This potentially
leads to a little ambiguity (I can certainly elaborate in the docs,
lmk!), but strictly allowing ```uv init --script path/to/script.py```
felt a little more natural than allowing for ```uv init --script path/to
--name script.py``` (which I also thought would prompt more questions
for users, such as should the name include the .py extension?)
2. The request is processed immediately in the ```init``` method,
sharing logic in resolving which python version to use with ```uv add
--script```. This made more sense to me — since scripts are meant to
operate in isolation, they shouldn't consider the context of an
encompassing package should one exist (I also think this decision makes
the relative codepaths for scripts/packages easier to follow).
3. No readme — readme felt a little excessive for a script, but I can of
course add it in!
---------
Co-authored-by: João Bernardo Oliveira <jbvsmo@gmail.com>
uv will soon support both a build frontend (`uv build`) and a build
backend (`build-system = "uv"`). To avoid the name clash, I'm renaming
the `uv-build` crate to `uv-build-frontend`. In a follow-up PR, I will
add a `uv-build-backend` crate with the build backend implementation.
This PR adds support for upgrading the build environment of tools with
the addition of a ```--python``` argument to ```uv upgrade```, as
specified in #7471.
Some things to note:
- I added support for individual packages — I didn't think there was a
good reason for ```--python``` to only apply to all packages
- Upgrading with ```--python``` also upgrades the package itself — I
think this is fair as if a user wants to _strictly_ switch the version
of Python being used to build a tool's environment they can use ```uv
install```. This behavior can of course be modified if others don't
agree!
Closes https://github.com/astral-sh/uv/issues/6297.
Closes https://github.com/astral-sh/uv/issues/7471.
#7226 modified the check to skip prefetching of source dists without
proper minimum-version bounds, and wound up flipping the boolean
expression. This change flips the some/none expression so that the
intended skip happens as expected.
Fixes #7680.
Closes #7118
This only really affects managed interpreters, as we exclude alternative
Python implementations from the search path during the
`VersionRequest::executable_names` part of discovery.
## Summary
Random, but I noticed that we can remove a ton of serialize and
deserialize derives by using `rkyv` for the flat-index caches. (We
already use `rkyv` for these same structs in the registry cache.)
This PR adds some additional sanity checking on resolution graphs to
ensure we can never install different versions of the same package into
the same environment.
I used code similar to this to provoke bugs in the resolver before the
release, but it never made it into `main`. Here, we add the error
checking to the creation of `ResolutionGraph`, since this is where it's
most convenient to access the "full" markers of each distribution.
We only report an error when `debug_assertions` are enabled to avoid
rendering `uv` *completely* unusable if a bug were to occur in a
production binary. For example, maybe a conflict is detected in a marker
environment that isn't actually used. While not ideal, `uv` is still
usable for any other marker environment.
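A minimal sketch of that gating (hypothetical function; the real check walks the resolution graph):
```rust
/// Surface the inconsistency in development builds, but keep release
/// binaries usable even if the resolver produced a conflicting graph.
fn check_duplicate_versions(conflict: Option<&str>) -> Result<(), String> {
    match conflict {
        Some(package) if cfg!(debug_assertions) => Err(format!(
            "two distinct versions of `{package}` were resolved for the same environment"
        )),
        _ => Ok(()),
    }
}
```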
Closes #5598
## Summary
Closes #4828
First iteration for an implementation. I need to add more tests but
wanted your opinion on the implementation first.
## Test Plan
Currently tested using the following command but will add tests shortly:
```console
D:\repo\uv> cargo run venv -p 3.13t && .venv\Scripts\python.exe
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.52s
Running `target\debug\uv.exe venv -p 3.13t`
Using Python 3.13.0rc1 interpreter at: C:\Users\bschoen\AppData\Local\Programs\Python\Python313\python3.13t.exe
Creating virtualenv at: .venv
Activate with: .venv\Scripts\activate
Python 3.13.0rc1 experimental free-threading build (tags/v3.13.0rc1:e4a3e78, Jul 31 2024, 21:06:58) [MSC v.1940 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
```
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
There are two parts to this.
The first is a restructuring and refactoring. We had some debt around
expected executable name generation, which we address here by
consolidating into a single function that generates a combination of
names. This includes a bit of extra code around free-threaded variants
because this was written on top of #7431 — I'll rebase that on top of
this.
The second addresses some bugs around alternative implementations.
Notably, `uv python list` does not discover executables with
alternative implementation names. Now, we properly generate all of the
executable names for `VersionRequest::Any` (originally implemented in
https://github.com/astral-sh/uv/pull/7508) to properly show all the
implementations we can find:
```
❯ cargo run -q -- python list --no-python-downloads
cpython-3.12.6-macos-aarch64-none /opt/homebrew/opt/python@3.12/bin/python3.12 -> ../Frameworks/Python.framework/Versions/3.12/bin/python3.12
cpython-3.11.10-macos-aarch64-none /opt/homebrew/opt/python@3.11/bin/python3.11 -> ../Frameworks/Python.framework/Versions/3.11/bin/python3.11
cpython-3.9.6-macos-aarch64-none /Library/Developer/CommandLineTools/usr/bin/python3 -> ../../Library/Frameworks/Python3.framework/Versions/3.9/bin/python3
pypy-3.10.14-macos-aarch64-none /opt/homebrew/bin/pypy3 -> ../Cellar/pypy3.10/7.3.17/bin/pypy3
```
While doing both of these changes, I ended up changing the priority of
interpreter discovery slightly. For example, given that the executables
are in the same directory, do we query `python` or `python3.10` first
when you request `--python 3.10`? Previously, we'd check `python3.10`
but I think that was an incorrect optimization. I think we should always
prefer the bare name (i.e. `python`) first. Similarly, this applies to
`python` and an executable for an alternative implementation like
`pypy`. If it's not compatible with the request, we'll skip it anyway.
We might have to query more interpreters with this approach but it seems
rare.
Closes https://github.com/astral-sh/uv/issues/7286 superseding
https://github.com/astral-sh/uv/pull/7508
- **Do not attempt to reflink directories on linux**
- **Refactor clone_recursive**
## Summary
On linux, reflink does not work on a directory. Currently, we first
attempt to reflink the directory, and only if that fails with
`AlreadyExists` do we attempt to reflink recursively.
This has the effect that, on linux, `uv pip install --link-mode=clone`
would always fall back to `copy`.
We resolve this by only attempting to reflink directories on macos. In
the process, we refactored `clone_recursive` in an attempt to make it
easier to reason about its logic.
## Test Plan
I tested that after this change, `uv pip install --link-mode=clone
numpy` would behave as expected in the following cases:
* linux, btrfs filesystem, venv on the same filesystem as cache
(correctly reflinked)
* linux, btrfs filesystem, venv on a different filesystem than cache
(fallback to copy)
I have not tested it on macos or windows, as I currently don't have
access to any macos or windows machines, unfortunately.
## Summary
If the `requires-python` bound expands, the space covered by
`resolution-markers` may no longer include all supported Python
versions. In such cases, we need to avoid reusing the forks (but we
_can_ reuse the preferred versions).
Closes https://github.com/astral-sh/uv/issues/7618.
## Summary
`uv run --project ./path/to/project` now uses the provided directory as
the starting point for any file discovery. However, relative paths are
still resolved relative to the current working directory.
Closes https://github.com/astral-sh/uv/issues/5613.
## Summary
`#[serde(flatten)]` has a disastrous effect on error messages: serde no
longer tells you which field errored, nor does it show it to you in the
diagnostic output.
Before:
```
warning: Failed to parse `pyproject.toml` during settings discovery:
TOML parse error at line 9, column 1
|
9 | [tool.uv]
| ^^^^^^^^^
invalid type: string "foo", expected a sequence
```
After:
```
warning: Failed to parse `pyproject.toml` during settings discovery:
TOML parse error at line 10, column 19
|
10 | extra-index-url = "foo"
| ^^^^^
invalid type: string "foo", expected a sequence
```
Closes https://github.com/astral-sh/uv/issues/7113.
## Summary
This reverts commit 3060fd22c0.
These are now _never_ shown to users, because `tracing` isn't set up at
that point. I'm going to try and improve the solution more holistically,
but this is better than the status quo.
Closes https://github.com/astral-sh/uv/issues/7573.
This enhances the hint generator in the resolver with a heuristic to
detect and warn about failures caused by version mismatches on a local
package. Those may be a symptom of a name conflict or shadowing with a
transitive dependency.
Closes: https://github.com/astral-sh/uv/issues/7329
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Because a problem was found with PowerShell when combining the generated
completion scripts for uv and uvx, let's try separating the uv and uvx
command completion scripts.
The generated PowerShell script template can be seen in the
`clap_complete` source; it starts with `using` directives, which
(apparently) makes it impossible to concatenate two of those script
outputs.
As a side effect, this is available under `uv tool run
--generate-shell-completion` too.
Fixes #7482
## Test Plan
- `eval "$(cargo run --bin uvx -- --generate-shell-completion bash)"`
- Test Powershell
## Summary
The AGPL license is confusing some analyzers. This replaces pretix with
saleor which (similarly) is a web application.
Closes https://github.com/astral-sh/uv/issues/7566.
## Summary
`uv cache prune --ci` will remove the source distribution directory. If
we then need to build a _different_ wheel (e.g., you're building a
package that has Python minor version-specific wheels), we fail, because
we expect the source to be there.
Now, if the source is missing, we re-download it. It would be slightly
easier to just _ignore_ that revision, but that would mean we'd also
lose the already-built wheels -- so if you ran against many Python
versions, we'd continuously lose the cached data.
Closes https://github.com/astral-sh/uv/issues/7543.
## Test Plan
We can add tests, but they _need_ to build non-pure Python wheels, which
tends to be expensive...
For reference:
```console
$ cargo run venv --python 3.12
$ cargo run pip install mercurial==6.8.1 --verbose
$ cargo run cache prune --ci
$ cargo run venv --python 3.11
$ cargo run pip install mercurial==6.8.1 --verbose
```
I also did this with a local `.tar.gz` that I downloaded from PyPI.
## Summary
Both of these can contain rkyv data in their HTTP cache envelopes. As
such, the entries aren't readable by earlier versions of uv, and `uv
cache prune` can break. I should make `uv cache prune` robust to this,
but this feels safest.
## Summary
I think this is just inverted. It means that when we fail in
https://github.com/astral-sh/uv/issues/7553, we show a message for
"invalid Python implementation" (since there are some wheels that don't
match), but we should be showing "invalid platform", matching the order
of operations in our compatibility check.
Closes https://github.com/astral-sh/uv/issues/7553.
Adds display of the target path of the link (since the link filename
itself is basically static) and distinguishes between broken links and
missing files.
## Summary
Improve the description of override-dependencies based on the statement
in `concepts/resolution.md`: "As with constraints, overrides do not add
a dependency on the package and only take effect if the package is
requested in a direct or transitive dependency."
I tested it locally, and `concepts/resolution.md` is correct. It would be
better to also include this in the Reference chapter of the docs.
I think it's best practice to use a placeholder when we transform
something, and #7522 is having snapshot issues because this filter
conflicts with `with_filtered_virtualenv_bin`.
As we support more complex Python discovery behaviors such as:
- #7431
- #7335
- #7300
`Any` is no longer accurate: we are actually looking for a reasonable
default Python version to use, which may exclude the first one we find.
Separately, we need the idea of `Any` to improve behavior when listing
versions (e.g., #7286), where we do actually want to match _any_ Python
version. As a first step, we'll rename `Any` to `Default`. Then, we'll
introduce a new `Any` that actually behaves as we'd expect.
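Roughly, the rename amounts to something like the following (a hypothetical
sketch; the actual type in uv may be named and structured differently):
```rust
#[derive(Debug)]
enum PythonVersionRequest {
    /// A reasonable default Python to use, which may skip the first
    /// interpreter found (e.g., a pre-release or alternative implementation).
    Default,
    /// Any Python at all, e.g., when listing installed versions.
    Any,
}

fn main() {
    // `Default` is what most commands want; `Any` is reserved for cases like
    // listing, where every discovered interpreter should match.
    println!(
        "{:?} vs {:?}",
        PythonVersionRequest::Default,
        PythonVersionRequest::Any
    );
}
```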
Recently, rkyv 0.8 was released. Its API is a fair bit simpler for
higher-level uses (like ours in `uv`) and lets us delete a fair bit of
code. It also removes our last use of `syn 1.0`, so we can drop that
dependency.
Performance (via testing on the `transformers` example) seems to remain
about the same, which is what was expected:
```
$ hyperfine -w5 -r100 'uv lock' 'uv-ag-rkyv-update lock'
Benchmark 1: uv lock
Time (mean ± σ): 55.6 ms ± 6.4 ms [User: 30.4 ms, System: 35.1 ms]
Range (min … max): 43.0 ms … 73.1 ms 100 runs
Benchmark 2: uv-ag-rkyv-update lock
Time (mean ± σ): 56.5 ms ± 7.2 ms [User: 30.5 ms, System: 36.3 ms]
Range (min … max): 39.1 ms … 71.5 ms 100 runs
Summary
uv lock ran
1.02 ± 0.18 times faster than uv-ag-rkyv-update lock
```
Closes #7415
This changes the structure of the hints generator in the resolver when
encountering solution errors, so that it reuses a single output buffer
owned by the caller.
This avoids repeated allocations of a temporary buffer within each
recursive function call.
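A minimal sketch of the pattern (illustrative only, not the actual resolver
code):
```rust
use std::fmt::Write;

// The caller owns a single `String`; every recursive call appends to it
// instead of allocating and returning its own temporary buffer.
fn append_hints(output: &mut String, depth: usize) {
    if depth == 0 {
        return;
    }
    // Writing into the existing `String` reuses its allocation.
    let _ = writeln!(output, "hint at depth {depth}");
    append_hints(output, depth - 1);
}

fn main() {
    let mut output = String::new();
    append_hints(&mut output, 3);
    print!("{output}");
}
```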
## Summary
See: https://github.com/astral-sh/uv/issues/7485. The test was using `uv
pip sync`, which doesn't require fetching metadata, but the failure was
in fetching metadata.
## Summary
Closes https://github.com/astral-sh/uv/issues/7485.
## Test Plan
```
$ cargo run cache clean
$ cargo run venv
$ cargo run pip install django-allauth==0.51.0
$ cargo run venv
$ cargo run pip install django-allauth==0.51.0
```
This changes `uv tool install` behavior with regard to reusing existing
environments.
In particular, it replaces the existing version-matching logic with a
tighter one that enforces a same-interpreter match.
This makes it possible to properly switch between system and managed
interpreters, at the cost of more eagerly invalidating existing
environments whenever the interpreter changes.
Closes: https://github.com/astral-sh/uv/issues/7320
## Summary
This PR enables users to provide pre-defined static metadata for
dependencies. It's intended for situations in which the user depends on
a package that does _not_ declare static metadata (e.g., a
`setup.py`-only sdist), and that is expensive to build or even cannot be
built on some architectures. For example, you might have a Linux-only
dependency that can't be built on ARM -- but we need to build that
package in order to generate the lockfile. By providing static metadata,
the user can instruct uv to avoid building that package at all.
For example, to override all `anyio` versions:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]
[[tool.uv.dependency-metadata]]
name = "anyio"
requires-dist = ["iniconfig"]
```
Or, to override a specific version:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]
[[tool.uv.dependency-metadata]]
name = "anyio"
version = "3.7.0"
requires-dist = ["iniconfig"]
```
The current implementation uses `Metadata23` directly, so we adhere to
the exact schema expected internally and defined by the standards. Any
entries are treated similarly to overrides, in that we won't even look
for `anyio@3.7.0` metadata in the above example. (In a way, this also
enables #4422, since you could remove a dependency for a specific
package, though it's probably too unwieldy to use in practice, since
you'd need to redefine the _rest_ of the metadata, and do that for every
package that requires the package you want to omit.)
This is under-documented, since I want to get feedback on the core ideas
and names involved.
Closes https://github.com/astral-sh/uv/issues/7393.
## Summary
Closes #6272
## Test Plan
As in https://github.com/astral-sh/uv/pull/6262
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
When syncing a lockfile, we need to respect credentials defined in the
`pyproject.toml`, even if they won't be used for resolution.
Unfortunately, this includes credentials in `tool.uv.sources`,
`tool.uv.dev-dependencies`, `project.dependencies`, and
`project.optional-dependencies`.
Closes https://github.com/astral-sh/uv/issues/7453.
## Summary
This PR adds support for including Python pre-releases when requesting
versions.
Check out the docs for commands that support the `--python` option:
```text
--python, -p python
The Python interpreter to use for the virtual environment.
```
At least the following scenarios are supported:
```bash
3.13.0a1
3.13b2
3.13rc4
313rc1
```
## Test Plan
I added a basic unit test to `uv/crates/uv-python/src/discovery.rs`. I
could have added more, but I have not discovered any relevant places.
CI passes
Note: I was unable to execute the entire test set locally. There were at
least some timeout issues (some tests took over 60 seconds).
### Output
beta version
```bash
cargo run -- venv --python 3.13.0b3
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.20s
Running `target/debug/uv venv --python 3.13.0b3`
Using Python 3.13.0b3 interpreter at: /home/mikko/.pyenv/versions/3.13.0b3/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
release candidate
```bash
cargo run -- venv --python 3.13.0rc2
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.83s
Running `target/debug/uv venv --python 3.13.0rc2`
Using Python 3.13.0rc2 interpreter at: /home/mikko/.pyenv/versions/3.13.0rc2/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
```bash
cargo run -- venv --python 313rc2
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.31s
Running `target/debug/uv venv --python 313rc2`
Using Python 3.13.0rc2 interpreter at: /home/mikko/.pyenv/versions/3.13.0rc2/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Generate shell completion for uvx.
Create a `uvx` top-level command just for completion by combining `uv
tool uvx` (a hidden alias for `uv tool run`) with the global arguments.
This explicit combination is needed because otherwise the global arguments
would be missing (and if they are missing, clap debug assertions fail when
`uv tool run` arguments refer to global arguments in directives like
`conflicts_with`).
Fixes #7258
## Test Plan
- Tested in bash using `eval "$(cargo run --bin uv
generate-shell-completion bash)"`
## Summary
All the registry wheels were getting cached under
`index/b2a7eb67d4c26b82` rather than `pypi`, because we used
`IndexUrl::Url` rather than `IndexUrl::from`.
## Summary
It's very unlikely that retaining these is beneficial, since you tend to
partition the cache by platform anyway.
Closes https://github.com/astral-sh/uv/issues/7394.
## Summary
Since https://github.com/astral-sh/uv/pull/7208, this is now _always_
firing, for every directory, because the version gets normalized (e.g.,
`1.2.3` gets normalized to `1-2-3`, which never matches the parsed
version). pip doesn't warn here, I guess we won't either, because I
can't figure out a robust way to do this... We need to get the
non-normalized remainder after stripping the normalized package name,
but we strip the normalized package name from the normalized string, so
we only have a normalized remainder.
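A small self-contained sketch of the problem (hypothetical helper, not uv's
code):
```rust
// Simplified normalization mirroring the behavior described above: `.` and
// `_` collapse to `-`, so the version portion of the directory name is only
// ever available in normalized form.
fn normalize(s: &str) -> String {
    s.to_lowercase().replace('.', "-").replace('_', "-")
}

fn main() {
    let dir = normalize("my_package-1.2.3"); // "my-package-1-2-3"
    let name = normalize("my_package");      // "my-package"
    let remainder = dir
        .strip_prefix(name.as_str())
        .and_then(|rest| rest.strip_prefix('-'))
        .unwrap_or("");
    // The remainder is normalized, so it can never equal the parsed version
    // `1.2.3` -- which is why the warning fired for every directory.
    assert_eq!(remainder, "1-2-3");
    assert_ne!(remainder, "1.2.3");
}
```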
## Summary
Running `uv lock --no-sources` should still include dev dependencies,
since dev dependencies are defined separately from sources.
Closes https://github.com/astral-sh/uv/issues/7406.