## Summary
This PR extends `[[tool.uv.index]]` to support `--find-links`-style
"flat" indexes, so that users can point to such indexes without using
`--find-links` _and_ get access to the full functionality of
`[[tool.uv.index]]` (e.g., they can now pin packages to
`--find-links`-style indexes).
Note that, at present, `--find-links` indexes actually have some quirky
behavior, in that we combine them into a single entity and then merge
the discovered distributions into each Simple API-style index. The
motivation here, IIRC, was to match pip's behavior quite closely. I'm
interested in _removing_ that behavior, but it'd be breaking (and may
also be inconvenient for some use-cases). So, the behavior for indexes
passed in via `--find-links` remains completely unchanged. However,
`[[tool.uv.index]]` entries with `format = "flat"` are now treated
identically to those defined with `format = "simple"` (the default), in
that we stop after we find the first-matching index, etc.
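For illustration, pinning a package to a flat index might look like this (the index name, URL, and package are hypothetical):
```toml
[[tool.uv.index]]
name = "local-wheels"
# A --find-links-style index: a flat list of distributions, not the Simple API.
url = "file:///opt/wheels"
format = "flat"

[tool.uv.sources]
example-package = { index = "local-wheels" }
```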
Closes https://github.com/astral-sh/uv/issues/11634.
## Summary
This PR modifies the requirement source entities to store a (new)
container struct that wraps `IndexUrl`. This will allow us to store
user-defined metadata alongside `IndexUrl`, and propagate that metadata
throughout resolution.
Specifically, I need to store the "kind" of the index (Simple API vs.
`--find-links`), but I also ran into this problem when I tried to add
support for overriding `Cache-Control` headers on a per-index basis: at
present, we have no way to pass metadata around alongside an
`IndexUrl`.
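As a sketch of the shape this might take (names and fields are illustrative, not the actual uv definitions):
```rust
/// Illustrative container: pairs an index URL with user-defined metadata,
/// so the metadata can travel through resolution alongside the URL.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct IndexMetadata {
    pub url: IndexUrl,
    /// The "kind" of the index (Simple API vs. --find-links-style flat).
    pub format: IndexFormat,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum IndexFormat {
    Simple,
    Flat,
}

/// Stand-in for uv's actual `IndexUrl` type.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct IndexUrl(pub String);

fn main() {
    let index = IndexMetadata {
        url: IndexUrl("https://example.org/simple".to_string()),
        format: IndexFormat::Simple,
    };
    println!("{index:?}");
}
```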
## Summary
In general, we merge `--find-links` entries into each index. If a
package is pinned to an index, though, it seems surprising (and wrong)
that we'd ever select a distribution from `--find-links`. This PR
modifies the provider to ignore `--find-links` for any explicitly pinned
packages.
## Summary
This crate is for standards-compliant types, but this is explicitly a
type that's custom to uv. It's also strange because we kind of want to
reference `IndexUrl` on the registry type, but that's in a crate that
_depends_ on `uv-pypi-types`, which to me is a sign that this is off.
## Summary
This is a prototype that I'm considering shipping under `--preview`,
based on [`light-the-torch`](https://github.com/pmeier/light-the-torch).
`light-the-torch` patches pip to pull PyTorch packages from the PyTorch
indexes automatically. And, in particular, `light-the-torch` will query
the installed CUDA drivers to determine which indexes are compatible
with your system.
This PR implements equivalent behavior under `--torch-backend auto`,
though you can also set `--torch-backend cpu`, etc. for convenience.
When enabled, the registry client will fetch from the appropriate
PyTorch index when it sees a package from the PyTorch ecosystem (and
ignore any other configured indexes, _unless_ the package is explicitly
pinned to a different index).
Right now, this is only implemented in the `uv pip` CLI, since it
doesn't quite fit into the lockfile APIs given that it relies on feature
detection on the currently-running machine.
## Test Plan
On macOS, you can test this with (e.g.):
```shell
UV_TORCH_BACKEND=auto UV_CUDA_DRIVER_VERSION=450.80.2 cargo run \
pip install torch --python-platform linux --python-version 3.12
```
On a GPU-enabled EC2 machine:
```shell
ubuntu@ip-172-31-47-149:~/uv$ UV_TORCH_BACKEND=auto cargo run pip install torch -v
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.31s
Running `target/debug/uv pip install torch -v`
DEBUG uv 0.6.6 (e95ca063b 2025-03-14)
DEBUG Searching for default Python interpreter in virtual environments
DEBUG Found `cpython-3.13.0-linux-x86_64-gnu` at `/home/ubuntu/uv/.venv/bin/python3` (virtual environment)
DEBUG Using Python 3.13.0 environment at: .venv
DEBUG Acquired lock for `.venv`
DEBUG At least one requirement is not satisfied: torch
warning: The `--torch-backend` setting is experimental and may change without warning. Pass `--preview` to disable this warning.
DEBUG Detected CUDA driver version from `/sys/module/nvidia/version`: 550.144.3
...
```
## Summary
The order here is slightly off... As-is, we fetch the metadata for the
dependency, _then_ insert the URLs and indexes into the fork state -- so
the fetch doesn't take the explicit index or URL into account. This has
mostly been unobserved because we re-fetch anyway in the next request,
but if we do things in the right order (add to fork state, fetch
dependencies, insert dependencies), we can cut down on the fetches.
Closes https://github.com/astral-sh/uv/issues/12056.
## Summary
* Upgrade the Rust toolchain to 1.85.0. This does not increase the MSRV.
* Update the Windows trampoline to 1.86 nightly beta (previously 1.85
nightly beta).
## Test Plan
Existing tests
Solving spent a chunk of its time just converting resolutions. In the
profile, these were the two leftmost blocks: `ResolverOutput::from_state`
with 1.3% and `ForkState::into_resolution` with 4.1% of resolver thread
runtime for apache airflow universal.
We reduce the overhead of those functions to 1.1% and 2.1% of resolver
time by:
Commit 1: Replace the hash set for the edges with a vec in
`ForkState::into_resolution`. We deduplicate edges anyway when
collecting them, and the hash-and-insert was slow.
Commit 2: Reduce the distribution cloning in
`ResolverOutput::from_state` by using an `Arc`.
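A minimal sketch of both ideas, with stand-in types rather than the uv internals:
```rust
use std::sync::Arc;

/// Stand-in for an edge in the resolution graph.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct Edge(u32, u32);

/// Commit 1: collect into a Vec and deduplicate once at the end, instead
/// of paying a hash-and-insert per edge with a HashSet.
fn collect_edges(raw: impl Iterator<Item = Edge>) -> Vec<Edge> {
    let mut edges: Vec<Edge> = raw.collect();
    edges.sort_unstable();
    edges.dedup();
    edges
}

/// Stand-in for a full distribution record.
struct Dist {
    name: String,
}

/// Commit 2: share the distribution behind an `Arc`; "cloning" is then a
/// reference-count bump instead of a deep copy of the record.
fn share(dist: Dist) -> (Arc<Dist>, Arc<Dist>) {
    let dist = Arc::new(dist);
    (Arc::clone(&dist), dist)
}

fn main() {
    let edges = collect_edges([Edge(1, 2), Edge(1, 2), Edge(2, 3)].into_iter());
    assert_eq!(edges.len(), 2);
    let (a, b) = share(Dist { name: "example".to_string() });
    assert_eq!(a.name, b.name); // both point at the same allocation
}
```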
In the same profile excerpt for the resolver on the branch, an unrelated
block now sits between the two we optimized.
Wall times are noisy, but the profiles show those changes as
improvements.
```
$ hyperfine --warmup 2 "./uv-main pip compile --no-progress scripts/requirements/airflow.in --universal" "./uv-branch pip compile --no-progress scripts/requirements/airflow.in --universal"
Benchmark 1: ./uv-main pip compile --no-progress scripts/requirements/airflow.in --universal
Time (mean ± σ): 99.1 ms ± 3.8 ms [User: 111.8 ms, System: 115.5 ms]
Range (min … max): 93.6 ms … 110.4 ms 29 runs
Benchmark 2: ./uv-branch pip compile --no-progress scripts/requirements/airflow.in --universal
Time (mean ± σ): 97.1 ms ± 4.3 ms [User: 114.8 ms, System: 112.0 ms]
Range (min … max): 90.9 ms … 112.4 ms 29 runs
Summary
./uv-branch pip compile --no-progress scripts/requirements/airflow.in --universal ran
1.02 ± 0.06 times faster than ./uv-main pip compile --no-progress scripts/requirements/airflow.in --universal
```
## Summary
This PR revives https://github.com/astral-sh/uv/pull/10017, which might
be viable now that we _don't_ enforce any platforms by default.
The basic idea here is that users can mark certain platforms as required
(empty, by default). When resolving, we ensure that the specified
platforms have wheel coverage, backtracking if not.
For example, to require that we include a version of PyTorch that
supports Intel macOS:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["torch>1.13"]
[tool.uv]
required-platforms = [
"sys_platform == 'darwin' and platform_machine == 'x86_64'"
]
```
Other than that, the forking is identical to past iterations of this PR.
This would give users a way to resolve the tail of issues in #9711, but
with manual opt-in to supporting specific platforms.
With the parallel simple index fetching, we would only acquire one
download concurrency token, meaning that, in the worst case, we could
exceed the user-requested request limit by a factor of the number of
indexes.
We fix this by passing the semaphore down to the simple API method.
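A sketch of the pattern with `tokio::sync::Semaphore` (function names and URLs are made up; assumes tokio with the `rt`, `macros`, and `sync` features):
```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// Each per-index fetch acquires its own permit, so N concurrent index
/// fetches hold N permits, rather than all sharing one permit acquired
/// up front by the caller.
async fn fetch_simple_index(url: String, semaphore: Arc<Semaphore>) -> String {
    let _permit = semaphore.acquire_owned().await.expect("semaphore closed");
    format!("response for {url}") // stand-in for the actual HTTP request
}

#[tokio::main]
async fn main() {
    let semaphore = Arc::new(Semaphore::new(8)); // the user-requested limit
    let urls = ["https://a.example/simple/", "https://b.example/simple/"];
    let handles: Vec<_> = urls
        .into_iter()
        .map(|url| tokio::spawn(fetch_simple_index(url.to_string(), Arc::clone(&semaphore))))
        .collect();
    for handle in handles {
        println!("{}", handle.await.unwrap());
    }
}
```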
Looks like the set-based prioritization tracking from
https://github.com/pubgrub-rs/pubgrub/pull/313 is a slight speedup.
I assume the changed derivation tree in the error snapshot is due to
out-of-sync virtual package priorities, while the main package priority
defining the solution remains stable.
```
$ hyperfine --warmup 2 "./uv-main pip compile --no-progress scripts/requirements/airflow.in --universal" "./uv-branch pip compile --no-progress scripts/requirements/airflow.in --universal"
Benchmark 1: ./uv-main pip compile --no-progress scripts/requirements/airflow.in --universal
Time (mean ± σ): 115.0 ms ± 4.8 ms [User: 131.0 ms, System: 113.6 ms]
Range (min … max): 108.1 ms … 125.8 ms 25 runs
Benchmark 2: ./uv-branch pip compile --no-progress scripts/requirements/airflow.in --universal
Time (mean ± σ): 105.4 ms ± 2.6 ms [User: 118.5 ms, System: 113.5 ms]
Range (min … max): 101.1 ms … 111.9 ms 28 runs
Summary
./uv-branch pip compile --no-progress scripts/requirements/airflow.in --universal ran
1.09 ± 0.05 times faster than ./uv-main pip compile --no-progress scripts/requirements/airflow.in --universal
```
This removes the error that was causing folks problems.
This does result in some snapshot updates that are arguably wrong, or at
least sub-optimal. However, this is intended: the approach we're taking
permits the creation of uninstallable lockfiles as a side effect. In the
future, we will modify this test to check that, while `uv lock` succeeds,
`uv sync` will always fail.
## Summary
This PR reverts https://github.com/astral-sh/uv/pull/10441 and applies a
different fix for https://github.com/astral-sh/uv/issues/10425.
In #10441, I changed prioritization to visit proxies eagerly. I think
this is actually wrong, since it means we prioritize proxy packages
above _everything_ else. And while a proxy only depends on itself, it
does mean we're selecting a _version_ for the proxy package earlier than
anything else. So, if you look at #10828, we end up choosing a version
for `async-timeout` before we choose a version for `langchain`, despite
the latter being a first-party dependency. (`async-timeout` has a marker
on it, so it has a proxy package, so we solve for it first.)
To fix #10425, we instead need to make sure we visit proxies in the
order we see them. I think the virtual tiebreaker for proxies is
reversed? We want to visit the package we see first, first.
So, in short: this reverts #10441, then corrects the ordering for
visiting proxies.
Closes https://github.com/astral-sh/uv/issues/10828.
## Summary
I previously made this required, but we now need to be able to create
these from a lockfile that _omits_ versions for dynamic source trees.
They should still be present in most cases, but it's best-effort.
## Summary
The idea here is to show both (1) an example of a compatible tag and (2)
the tags that were available, whenever we fail to resolve due to an
absence of matching wheels.
Closes https://github.com/astral-sh/uv/issues/2777.
## Summary
If you have a dependency with a marker, and you add a constraint, it
causes us to _always_ fork, because we represent the constraint as a
second dependency with the marker repeated (and, therefore, we have two
requirements of the same name, both with markers). I don't think we
should fork here -- and in the end it's leading to this undesirable
resolution: #10481.
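Concretely, with a hypothetical package and marker, the dependency and its constraint become two same-named, marker-bearing requirements:
```text
# dependency
idna >= 2 ; python_full_version >= '3.8'
# constraint, represented as a second requirement with the marker repeated
idna < 3 ; python_full_version >= '3.8'
```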
I tried to change constraints such that we just _reuse_ and augment the
initial requirement, but that has a fairly negative effect on error
messages: #10489. So this fix seems a bit better to me.
Closes https://github.com/astral-sh/uv/issues/10481.
N.B. After fixing #10430, `ArcStr` became the fastest implementation
(and the gains were significantly reduced, down to 1-2%). See:
https://github.com/astral-sh/uv/pull/10453#issuecomment-2583344414.
## Summary
I tried out a variety of small string crates, but `Arc<str>`
outperformed them, giving a ~10% speed-up:
```console
❯ hyperfine "../arcstr lock" "../flexstr lock" "uv lock" "../arc lock" "../compact_str lock" --prepare "rm -f uv.lock" --min-runs 50 --warmup 20
Benchmark 1: ../arcstr lock
Time (mean ± σ): 304.6 ms ± 2.3 ms [User: 302.9 ms, System: 117.8 ms]
Range (min … max): 299.0 ms … 311.3 ms 50 runs
Benchmark 2: ../flexstr lock
Time (mean ± σ): 319.2 ms ± 1.7 ms [User: 317.7 ms, System: 118.2 ms]
Range (min … max): 316.8 ms … 323.3 ms 50 runs
Benchmark 3: uv lock
Time (mean ± σ): 330.6 ms ± 1.5 ms [User: 328.1 ms, System: 139.3 ms]
Range (min … max): 326.6 ms … 334.2 ms 50 runs
Benchmark 4: ../arc lock
Time (mean ± σ): 303.0 ms ± 1.2 ms [User: 301.6 ms, System: 118.4 ms]
Range (min … max): 300.3 ms … 305.3 ms 50 runs
Benchmark 5: ../compact_str lock
Time (mean ± σ): 320.4 ms ± 2.0 ms [User: 318.7 ms, System: 120.8 ms]
Range (min … max): 317.3 ms … 326.7 ms 50 runs
Summary
../arc lock ran
1.01 ± 0.01 times faster than ../arcstr lock
1.05 ± 0.01 times faster than ../flexstr lock
1.06 ± 0.01 times faster than ../compact_str lock
1.09 ± 0.01 times faster than uv lock
```
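The intuition, as a standalone sketch (not uv code): cloning a `String` allocates and copies the bytes, while cloning an `Arc<str>` only bumps a reference count.
```rust
use std::sync::Arc;

fn main() {
    let owned = String::from("package-name");
    let shared: Arc<str> = Arc::from(owned.as_str());

    // Deep copy: allocates a new buffer and copies the bytes.
    let _deep: String = owned.clone();

    // Shallow copy: increments the refcount; both point at the same bytes.
    let _shallow: Arc<str> = Arc::clone(&shared);
    assert_eq!(Arc::strong_count(&shared), 2);
}
```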
## Summary
The issue here is that we add `urllib3{python_full_version >= '3.8'}` as
a dependency, then `requests{python_full_version >= '3.8'}`, which adds
`urllib3`, but at that point, we haven't expanded
`urllib3{python_full_version >= '3.8'}`, so we "lose" the singleton
constraint. The solution is to ensure that we visit proxies eagerly, so
that we accumulate constraints as early as possible.
Closes
https://github.com/astral-sh/uv/issues/10425#issuecomment-2580324578.
Ref https://github.com/astral-sh/uv/issues/10344
Not a performance optimization, but the function had become too large.
No logic changes, just code moving around. Looks slightly better when
ignoring whitespace changes.
It's still too complex, but I haven't found an apt simplification.
## Summary
Sort of undecided on this. These are already stored as `dyn Reporter` in
each struct, so we're already using dynamic dispatch in that sense. But
all the methods take `impl Reporter`. This is sometimes nice (the
callsites are simpler?), but it also means that in practice, you often
_can't_ pass `None` to these methods that accept `Option<impl
Reporter>`, because Rust can't infer the generic type.
Anyway, this adds more consistency and simplifies the setup by using
`Arc<dyn Reporter>` everywhere.
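A minimal reproduction of the inference problem (hypothetical trait and functions, not the actual uv signatures):
```rust
use std::sync::Arc;

trait Reporter {
    fn on_progress(&self, message: &str);
}

fn with_generic(reporter: Option<impl Reporter>) {
    if let Some(reporter) = reporter {
        reporter.on_progress("working");
    }
}

fn with_dyn(reporter: Option<Arc<dyn Reporter>>) {
    if let Some(reporter) = reporter {
        reporter.on_progress("working");
    }
}

fn main() {
    // with_generic(None);
    // ^ error: type annotations needed -- Rust can't infer the generic
    //   parameter, so callers must spell out a concrete type for `None`.
    with_dyn(None); // fine: the type is fully determined.
}
```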
## Summary
This PR extends #10046 to also handle architectures, which allows us to
correctly include `2.5.1` on the `cu124` index for ARM Linux.
Closes https://github.com/astral-sh/uv/issues/9655.
## Summary
This is yet another variation on
https://github.com/astral-sh/uv/pull/9928, with a few minor changes:
1. It only applies to local versions (e.g., `2.5.1+cpu`).
2. It only _considers_ the non-local version as an alternative (e.g.,
`2.5.1`).
3. It only _considers_ the non-local alternative if it _does_ support
the unsupported platform.
4. Instead of failing, it falls back to using the local version.
So, this is far less strict, and is effectively designed to solve
PyTorch but nothing else. It's also not user-configurable, except by way
of using `environments` to exclude platforms.
Previously, the batch prefetcher was part of the solver loop, used
across forks. This would lead to each preference in a fork being counted
as a tried version, so that after 5 forks with the identical version, we
would start batch prefetching. This also inflated the reported numbers of
tried versions. By tracking the batch prefetcher per fork, the numbers
are corrected.
An alternative would be tracking the actually tried versions, but that
would add overhead to the top-level solver loop, while the current
heuristic works.
In `ecosystem/transformers`:
```
$ hyperfine --runs 10 --prepare "rm -f uv.lock" "../../target/release/uv lock --exclude-newer 2024-08-08T00:00:00Z" "uv lock --exclude-newer 2024-08-08T00:00:00Z"
Benchmark 1: ../../target/release/uv lock --exclude-newer 2024-08-08T00:00:00Z
Time (mean ± σ): 386.2 ms ± 6.1 ms [User: 396.0 ms, System: 144.5 ms]
Range (min … max): 378.5 ms … 397.9 ms 10 runs
Benchmark 2: uv lock --exclude-newer 2024-08-08T00:00:00Z
Time (mean ± σ): 422.0 ms ± 5.5 ms [User: 459.6 ms, System: 190.3 ms]
Range (min … max): 415.0 ms … 430.5 ms 10 runs
Summary
../../target/release/uv lock --exclude-newer 2024-08-08T00:00:00Z ran
1.09 ± 0.02 times faster than uv lock --exclude-newer 2024-08-08T00:00:00Z
```
## Summary
With the advent of `--fork-strategy requires-python` (the default), we
actually _want_ to solve higher lower-bound forks before lower
lower-bound forks. The former ensures we get the most compatible
versions, while the latter ensures we get fewer overall versions. These
two strategies match up with `--fork-strategy`, but need to be respected
as such.
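Schematically (illustrative types, not the resolver's own): sort forks by their `requires-python` lower bound, descending, so the higher-bound fork is solved first.
```rust
fn main() {
    // Each fork tagged with its requires-python lower bound (illustrative).
    let mut forks = vec![("fork-a", (3, 8)), ("fork-b", (3, 12)), ("fork-c", (3, 10))];
    // Higher lower-bounds first: those forks get the most compatible
    // (newest) versions, and lower-bound forks can then reuse them.
    forks.sort_by(|a, b| b.1.cmp(&a.1));
    assert_eq!(forks[0].0, "fork-b");
}
```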
Closes https://github.com/astral-sh/uv/issues/9998.
Build failures are one of the most common user-facing failures that
aren't "obvious" errors (such as typos) or resolver errors. Currently,
they show technical details rather than focusing on the fact that this is
an error in a subprocess, one that lies either with the package or, more
likely, with the build environment, e.g., the user needs to install a dev
package or their Python version is incompatible.
The new error message clearly delineates the part that's important (this
is a build backend problem) from the internals (we called this hook) and
is consistent about which part of the dist building stage failed. We
have to calibrate the exact wording of the error message some more. Most
of the implementation is working around the orphan rule, `thiserror`
rules, and trait rules, so it came out as more of a refactoring than
intended.
Background reading: https://github.com/astral-sh/uv/issues/8157
Companion PR: https://github.com/astral-sh/pubgrub/pull/36
Required for test coverage: https://github.com/astral-sh/packse/pull/230
When two packages A and B conflict, we have the option to choose a lower
version of A, or a lower version of B. Currently, we determine this by
the order we saw a package (assuming equal specificity of the
requirement): If we saw A before B, we pin A until all versions of B are
exhausted. This can lead to undesirable outcomes, from cases where it's
just slow (sentry) to others cases without lower bounds where be
backtrack to a very old version of B. This old version may fail to build
(terminating the resolution), or it's a version so old that it doesn't
depend on A (or the shared conflicting package) anymore - but also is
too old for the user's application (fastapi). #8157 collects such cases,
and the `wrong-backtracking` packse scenario contains a minimized
example.
We try to solve this by tracking which packages are "A"s (culprits) and
"B"s (affected), and manually interfering with package selection and
backtracking. Whenever a version we just chose is rejected, we give the
current package a counter for being affected, and the package it
conflicted with a counter for being a culprit. If a package accumulates
more counts than a threshold, we reprioritize: undecided packages sort
after the culprits, which sort after the affected, which sort after
packages that only have a single version (URLs, `==<version>`). We then
ask pubgrub to backtrack just
before the culprit. Due to the changed priorities, we now select package
B, the affected, instead of package A, the culprit.
To do this efficiently, we ask pubgrub for the incompatibility that
caused backtracking, or just the last version to be discarded (due to
its dependencies). For backtracking, we use the last incompatibility
from unit propagation as a heuristic. When a version is discarded
because one of its dependencies conflicts with the partial solution, the
incompatibility tells us the package in the partial solution that
conflicted.
We only backtrack once per package, on the first time it passes the
threshold. This prevents backtracking loops in which we make the same
decisions over and over again. But we also changed the priority, so that
we shouldn't take the same path even after the one time we backtrack (it
would defeat the purpose of this change).
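A schematic of that bookkeeping (all names are illustrative; the real logic lives in uv's pubgrub integration and uses the threshold discussed below):
```rust
use std::collections::HashMap;

/// Reprioritize once a package accumulates this many conflicts.
const CONFLICT_THRESHOLD: u32 = 5;

#[derive(Default)]
struct ConflictTracker {
    /// How often a package's freshly chosen version was rejected ("B"s).
    affected: HashMap<String, u32>,
    /// How often a package caused another's rejection ("A"s).
    culprit: HashMap<String, u32>,
    /// Packages already reprioritized once, to avoid backtracking loops.
    deprioritized: Vec<String>,
}

impl ConflictTracker {
    /// Record one rejection: `package`'s version was discarded because it
    /// conflicted with `conflicting` in the partial solution.
    fn record(&mut self, package: &str, conflicting: &str) {
        *self.affected.entry(package.to_string()).or_default() += 1;
        *self.culprit.entry(conflicting.to_string()).or_default() += 1;
    }

    /// True the first time `conflicting` crosses the threshold; the caller
    /// then reprioritizes and asks pubgrub to backtrack to just before it.
    fn should_reprioritize(&mut self, conflicting: &str) -> bool {
        let count = self.culprit.get(conflicting).copied().unwrap_or(0);
        if count >= CONFLICT_THRESHOLD && !self.deprioritized.iter().any(|p| p == conflicting) {
            self.deprioritized.push(conflicting.to_string());
            return true;
        }
        false
    }
}

fn main() {
    let mut tracker = ConflictTracker::default();
    for _ in 0..5 {
        tracker.record("b-affected", "a-culprit");
    }
    assert!(tracker.should_reprioritize("a-culprit"));
    assert!(!tracker.should_reprioritize("a-culprit")); // only once
}
```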
There are some parameters that can be tweaked: Currently, the threshold
is set to 5, which feels not too eager given some of the conflicts that
we want to tolerate, but still changes strategies quickly. The relative
order of the new priorities can also be changed, as for each (A, B) pair
the priority of B is afterwards lower than that for A. Currently,
culprits capture conflict for the whole package, but we could limit that
to a specific version. We could discard conflict counters after
backtracking instead of keeping them eternally as we do now. Note that
we're always talking about pairs (A, B), but in practice we track
individual packages, not pairs.
A case that we wouldn't capture is when B is only introduced to the
dependency graph after A, but I think that would require a cyclical
dependency for A and B to conflict? There may also be cases where
looking at the last incompatibility is insufficient.
Another example that we can't repair with prioritization is
urllib3/boto3/botocore: We actually have to check all the newer versions
of boto3 and botocore to identify the version that works with the older
urllib3, no shortcuts allowed.
```
urllib3<1.25.4
boto3
```
All examples I tested were cases with two packages where we only had to
switch the order, so I've abstracted them into a single packse case.
This PR changes the resolution for certain paths, and there is a risk of
regressions.
Fixes #8157
---
All tested examples improved.
Input fastapi:
```text
starlette<=0.36.0
fastapi<=0.115.2
```
```
# BEFORE
$ uv pip --no-progress compile -p 3.11 --exclude-newer 2024-10-01 --no-annotate debug/fastapi.txt
annotated-types==0.7.0
anyio==4.6.0
fastapi==0.1.17
idna==3.10
pydantic==2.9.2
pydantic-core==2.23.4
sniffio==1.3.1
starlette==0.36.0
typing-extensions==4.12.2
# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.11 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/fastapi.txt
annotated-types==0.7.0
anyio==4.6.0
fastapi==0.109.1
idna==3.10
pydantic==2.9.2
pydantic-core==2.23.4
sniffio==1.3.1
starlette==0.35.1
typing-extensions==4.12.2
```
Input xarray:
```text
xarray[accel]
```
```
# BEFORE
$ uv pip --no-progress compile -p 3.11 --exclude-newer 2024-10-01 --no-annotate debug/xarray-accel.txt
bottleneck==1.4.0
flox==0.9.13
llvmlite==0.36.0
numba==0.53.1
numbagg==0.8.2
numpy==2.1.1
numpy-groupies==0.11.2
opt-einsum==3.4.0
packaging==24.1
pandas==2.2.3
python-dateutil==2.9.0.post0
pytz==2024.2
scipy==1.14.1
setuptools==75.1.0
six==1.16.0
toolz==0.12.1
tzdata==2024.2
xarray==2024.9.0
# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.11 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/xarray-accel.txt
bottleneck==1.4.0
flox==0.9.13
llvmlite==0.43.0
numba==0.60.0
numbagg==0.8.2
numpy==2.0.2
numpy-groupies==0.11.2
opt-einsum==3.4.0
packaging==24.1
pandas==2.2.3
python-dateutil==2.9.0.post0
pytz==2024.2
scipy==1.14.1
six==1.16.0
toolz==0.12.1
tzdata==2024.2
xarray==2024.9.0
```
Input sentry: The resolution is identical, but arrived at much faster:
main tries 69 versions (sentry-kafka-schemas: 63), while the PR tries 12
versions (sentry-kafka-schemas: 6; five conflicting tries, then the right
version).
```text
python-rapidjson<=1.20,>=1.4
sentry-kafka-schemas<=0.1.113,>=0.1.50
```
```
# BEFORE
$ uv pip --no-progress compile -p 3.11 --exclude-newer 2024-10-01 --no-annotate debug/sentry.txt
fastjsonschema==2.20.0
msgpack==1.1.0
python-rapidjson==1.8
pyyaml==6.0.2
sentry-kafka-schemas==0.1.111
typing-extensions==4.12.2
# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.11 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/sentry.txt
fastjsonschema==2.20.0
msgpack==1.1.0
python-rapidjson==1.8
pyyaml==6.0.2
sentry-kafka-schemas==0.1.111
typing-extensions==4.12.2
```
Input apache-beam:
```text
# Run on Python 3.10
dill<0.3.9,>=0.2.2
apache-beam<=2.49.0
```
```
# BEFORE
$ uv pip --no-progress compile -p 3.10 --exclude-newer 2024-10-01 --no-annotate debug/apache-beam.txt
× Failed to download and build `apache-beam==2.0.0`
╰─▶ Build backend failed to determine requirements with `build_wheel()` (exit status: 1)
# AFTER
$ cargo run --profile fast-build --no-default-features pip compile -p 3.10 --no-progress --exclude-newer 2024-10-01 --no-annotate debug/apache-beam.txt
apache-beam==2.49.0
certifi==2024.8.30
charset-normalizer==3.3.2
cloudpickle==2.2.1
crcmod==1.7
dill==0.3.1.1
dnspython==2.6.1
docopt==0.6.2
fastavro==1.9.7
fasteners==0.19
grpcio==1.66.2
hdfs==2.7.3
httplib2==0.22.0
idna==3.10
numpy==1.24.4
objsize==0.6.1
orjson==3.10.7
proto-plus==1.24.0
protobuf==4.23.4
pyarrow==11.0.0
pydot==1.4.2
pymongo==4.10.0
pyparsing==3.1.4
python-dateutil==2.9.0.post0
pytz==2024.2
regex==2024.9.11
requests==2.32.3
six==1.16.0
typing-extensions==4.12.2
urllib3==2.2.3
zstandard==0.23.0
```