This effectively combines a PEP 508 marker and an as-yet-unspecified
marker for expressing conflicts among extras and groups.
This just defines the type and threads it through most of the various
points in the code that previously used `MarkerTree` only. Some parts
do still continue to use `MarkerTree` specifically, e.g., when dealing
with non-universal resolution or exporting to `requirements.txt`.
This doesn't change any behavior.
This doesn't change any behavior. My guess is that this code was
a casualty of refactoring. But basically, it was doing redundant
case analysis and iterating over all resolutions (even though it's
in the branch that can only occur when there is only one
resolution).
This filtering is now redundant, since forking now avoids these
degenerate cases by construction.
The main change to forking that enables skipping over "always
false" forks is that forking now starts with the parent's markers
instead of starting with MarkerTree::TRUE and trying to combine
them with the parent's markers later. This in turn leads to
skipping over anything that "can't" happen when combined with the
parent's markers. So we never hit the case of generating a fork
that, when combined with the parent's markers, results in a
marker that is always false. We just avoid it in the first place.
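As a toy illustration (the types here are stand-ins, not uv's actual marker machinery): if we model a marker as the set of environments it matches, intersecting with the parent's marker up front means a disjoint branch never becomes a fork at all.
```rust
use std::collections::BTreeSet;

// Hypothetical model: a "marker" is the set of environments it matches,
// so AND is set intersection and an always-false marker is the empty set.
type Marker = BTreeSet<&'static str>;

fn fork(parent: &Marker, branches: &[Marker]) -> Vec<Marker> {
    branches
        .iter()
        // Start each fork from the parent's marker...
        .map(|branch| parent.intersection(branch).copied().collect::<Marker>())
        // ...so always-false (empty) forks are skipped by construction.
        .filter(|marker| !marker.is_empty())
        .collect()
}

fn main() {
    let parent: Marker = ["linux", "macos"].into_iter().collect();
    let branches = [
        ["linux"].into_iter().collect::<Marker>(),
        ["windows"].into_iter().collect::<Marker>(), // disjoint with parent
    ];
    // Only the Linux branch survives; the Windows fork is never generated.
    assert_eq!(fork(&parent, &branches).len(), 1);
}
```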
## Summary
The issue here is fairly complex. Consider the following:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = []
[project.optional-dependencies]
cpu = [
"torch>=2.5.1",
"torchvision>=0.20.1",
]
cu124 = [
"torch>=2.5.1",
"torchvision>=0.20.1",
]
[tool.uv]
conflicts = [
[
{ extra = "cpu" },
{ extra = "cu124" },
],
]
[tool.uv.sources]
torch = [
{ index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
]
torchvision = [
{ index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
]
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```
When solving this project, we first pick a PyTorch version from PyPI to
solve the `cu124` extra, selecting `2.5.1`.
Later, we try to solve the `cpu` extra. In solving that extra, we look
at the PyTorch CPU index. Ideally, we'd select `2.5.1+cpu`... but
`2.5.1` is already a preference, so we choose that instead.
To fix this, we now only respect preferences for explicit indexes if
they came from the same index.
Closes https://github.com/astral-sh/uv/issues/9295.
## Summary
The reqwest middleware doesn't retry errors that occur "after" the
request completes -- but in some cases, these do include spurious errors
that we want to retry. See https://github.com/astral-sh/uv/issues/8144
for examples. This PR adds a second retry layer during the response
_handler_, which should help with some of the spurious failures we see
in the linked issue.
Closes https://github.com/astral-sh/uv/issues/8144.
## Summary
This PR enables something like the "final boss" of PyTorch setups --
explicit support for CPU vs. GPU-enabled variants via extras:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.13.0"
dependencies = []
[project.optional-dependencies]
cpu = [
"torch==2.5.1+cpu",
]
gpu = [
"torch==2.5.1",
]
[tool.uv.sources]
torch = [
{ index = "torch-cpu", extra = "cpu" },
{ index = "torch-gpu", extra = "gpu" },
]
[[tool.uv.index]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "torch-gpu"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
[tool.uv]
conflicts = [
[
{ extra = "cpu" },
{ extra = "gpu" },
],
]
```
It builds atop the conflicting extras work to allow sources to be marked
as specific to a dedicated extra being enabled or disabled.
As part of this work, sources now have an `extra` field. If a source has
an `extra`, it means that the source is only applied to the requirement
when defined within that optional group. For example, `{ index =
"torch-cpu", extra = "cpu" }` above only applies to
`"torch==2.5.1+cpu"`.
The `extra` field does _not_ mean that the source is "enabled" when the
extra is activated. For example, this wouldn't work:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.13.0"
dependencies = ["torch"]
[tool.uv.sources]
torch = [
{ index = "torch-cpu", extra = "cpu" },
{ index = "torch-gpu", extra = "gpu" },
]
[[tool.uv.index]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "torch-gpu"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```
In this case, the sources would effectively be ignored. Extras are
really confusing... but I think this is correct? We don't want enabling
or disabling extras to affect resolution information that's _outside_ of
the relevant optional group.
## Summary
These were moved as part of a broader refactor to create a single
integration test module. That "single integration test module" did
indeed have a big impact on compile times, which is great! But we aren't
seeing any benefit from moving these tests into their own files (despite
the claim in [this blog
post](https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html),
I see the same compilation pattern regardless of where the tests are
located). Plus, we don't have many of these, and same-file tests are such
a strong Rust convention.
## Summary
I was wrongly using `.name()` to detect if a package was "not root", but
in `pip compile`, the root can have a name -- so we were failing to find
the derivation chain.
## Summary
This PR adds context to our error messages to explain _why_ a given
package was included, if we fail to download or build it.
It's quite a large change, but it motivated some good refactors and
improvements along the way.
Closes https://github.com/astral-sh/uv/issues/8962.
## Summary
This PR should not contain any user-visible changes, but the goal is to
refactor the `Resolution` type to retain a dependency graph. We want to
be able to explain _why_ a given package was excluded on error (see:
https://github.com/astral-sh/uv/issues/8962), which in turn requires
that at install time, we can go back and figure out the dependency
chain. At present, `Resolution` is just a map from package name to
distribution; this PR remodels it as a graph in which each node is a
package, and the edges contain markers plus extras or dependency groups.
## Summary
As discussed in Discord... This struct has evolved to include a lot of
information apart from the `petgraph::Graph`. And I want to add a graph
to the simplified `Resolution` type. So I think this name makes more
sense.
Surprisingly, this seems to be all that's necessary.
Previously, we were only extracting an extra from a
PubGrubPackage to test for conflicts. But now we extract
either an extra or a group. The surrounding code all
remains the same.
We do need to add some extra checking for groups
specifically, but I believe that's it.
This adds support for providing conflicting group names in addition to
extra names to `Conflicts`.
This merely makes "room" for it in the types while keeping everything
working. We'll add proper support for it in the next commit.
Note that one interesting trick we do here is depend directly on
`hashbrown` so that we can make use of its `Equivalent` trait. This in
turn lets us use things like `ConflictItemRef` as a lookup key for a
hashset that contains `ConflictItem`. This mirrors using a `&str` as a
lookup key for a hashset that contains `String`, but works for arbitrary
types. `std` doesn't support this, but `hashbrown` does. This trick in
turn lets us simplify some of our data structures.
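A minimal sketch of the trick, using simplified stand-ins for `ConflictItem` and `ConflictItemRef` (the real definitions in uv carry more fields):
```rust
use hashbrown::{Equivalent, HashSet};

// Owned item stored in the set (simplified stand-in).
#[derive(Hash, PartialEq, Eq)]
struct ConflictItem {
    package: String,
    extra: String,
}

// Borrowed view used as a lookup key, without allocating owned Strings.
#[derive(Hash)]
struct ConflictItemRef<'a> {
    package: &'a str,
    extra: &'a str,
}

// `Equivalent` is what lets a `ConflictItemRef` stand in for a
// `ConflictItem` during lookups, just as `&str` stands in for `String`.
impl Equivalent<ConflictItem> for ConflictItemRef<'_> {
    fn equivalent(&self, key: &ConflictItem) -> bool {
        self.package == key.package && self.extra == key.extra
    }
}

fn main() {
    let mut set = HashSet::new();
    set.insert(ConflictItem {
        package: "project".to_string(),
        extra: "cpu".to_string(),
    });
    // Look up with the borrowed form; no allocation required.
    assert!(set.contains(&ConflictItemRef { package: "project", extra: "cpu" }));
}
```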
This also rejiggers some of the serde-interaction with the conflicting
types. We now use a wire type to represent our conflicting items for
more flexibility. i.e., Support `extra` XOR `group` fields.
Since this is intended to support _both_ groups and extras, it doesn't
make sense to just name it for groups. And since there isn't really a
word that encapsulates both "extra" and "group," we just fall back to
the super general "conflicts."
We'll rename the variables and other things in the next commit.
## Summary
I need this for the derivation chain work
(https://github.com/astral-sh/uv/issues/8962), but it just seems
generally useful. You can't always get a version from a `Dist` (it could
be URL-based!), but when we create a `ResolvedDist`, we _do_ know the
version (and not just the URL). This PR preserves it.
This PR adds support for conflicting extras. For example, consider
some optional dependencies like this:
```toml
[project.optional-dependencies]
project1 = ["numpy==1.26.3"]
project2 = ["numpy==1.26.4"]
```
These dependency specifications are not compatible with one another.
And if you ask uv to lock these, you'll get an unresolvable error.
With this PR, you can now add this to your `pyproject.toml` to get
around this:
```toml
[tool.uv]
conflicting-groups = [
[
{ package = "project", extra = "project1" },
{ package = "project", extra = "project2" },
],
]
```
This will make the universal resolver create additional forks
internally that keep the dependencies from the `project1` and
`project2` extras separate. And we make all of this work by reporting
an error at **install** time if one tries to install with two or more
extras that have been declared as conflicting. (If we didn't do this,
it would be possible to try and install two different versions of the
same package into the same environment.)
This PR does *not* add support for conflicting **groups**, but it is
intended to add support in a follow-up PR.
Closes #6981. Fixes #8024.
Ref #6729, Ref #6830
This should also hopefully unblock
https://github.com/dagster-io/dagster/pull/23814, but in my testing, I
did run into other problems (specifically, with `pywin`). But it does
resolve the problem with incompatible dependencies in two different
extras once you declare `test-airflow-1` and `test-airflow-2` as
conflicting for `dagster-airflow`.
NOTE: This PR doesn't make `conflicting-groups` public yet. And in a
follow-up PR, I plan to switch the name to `conflicts` instead of
`conflicting-groups`, since it will be able to accept conflicting extras
_and_ conflicting groups.
## Summary
We're inconsistent with these -- sometimes it's `Error::Fetch` and
sometimes it's `Error::Download`. The message says download, so let's
just use that?
## Summary
This got moved to `InstallTarget`! Must've been an oversight not to
delete. I verified that no code was changed here since the date that we
moved it to `InstallTarget`.
## Summary
Just as we don't enforce tag compliance, we shouldn't enforce
`--no-build` when validating the lockfile. If we end up building from
source, the distribution database will correctly error.
Closes https://github.com/astral-sh/uv/issues/9016.
## Summary
At time of writing, `markupsafe==3.0.2` exists on the PyTorch index, but
there's only a single wheel:
`MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`
Meanwhile, there are a large number of wheels on PyPI for the same
version. If the user is on Python 3.12, and we return the incompatible
PyTorch wheel without considering the PyPI wheels, PubGrub will mark
3.0.2 as an incompatible version, even though there are compatible
wheels on PyPI.
Closes https://github.com/astral-sh/uv/issues/8922.
## Summary
We were making some incorrect assumptions in the extra-merging code for
universal `pip compile`. This PR corrects those assumptions and adds a
bunch of additional tests.
Closes https://github.com/astral-sh/uv/issues/8915.
After https://github.com/astral-sh/uv/pull/8797, we have spec-compliant
handling for local version identifiers and can completely remove all the
special-casing around it.
Implement a full working version of local version semantics. The major
move towards this (as far as I'm aware) was implemented in #2430. This added support
such that the version specifier `torch==2.1.0+cpu` would install
`torch@2.1.0+cpu` and consider `torch@2.1.0+cpu` a valid way to satisfy
the requirement `torch==2.1.0` in further dependency resolution.
In this feature, we more fully support local version semantics. Namely,
we now allow `torch==2.1.0` to install `torch@2.1.0+cpu` regardless of
whether `torch@2.1.0` (no local tag) actually exists.
We do this by adding an internal-only `Max` value to local versions that
compares greater than all other local versions. Then we can translate
`torch==2.1.0` into bounds: greater than 2.1.0 with no local tag and
less than 2.1.0 with the `Max` local tag.
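A toy sketch of the ordering trick (the enum is illustrative, not uv's actual version representation):
```rust
// Local segments ordered by derive: None < Tag(_) < Max, matching the
// idea that the internal-only `Max` sorts above every concrete tag.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum Local {
    None,        // e.g. `2.1.0`
    Tag(String), // e.g. `2.1.0+cpu`
    Max,         // internal-only sentinel
}

fn main() {
    // `torch==2.1.0` translates to the bounds:
    //   2.1.0 (no local tag) <= version <= 2.1.0+Max
    // so `2.1.0+cpu` matches even if a plain `2.1.0` doesn't exist.
    let cpu = Local::Tag("cpu".to_string());
    assert!(Local::None < cpu && cpu < Local::Max);
}
```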
Depends on https://github.com/astral-sh/packse/pull/227.
Closes #6640.
Could you suggest how I should test it?
(already tested locally)
---------
Co-authored-by: konstin <konstin@mailbox.org>
Co-authored-by: Charles Tapley Hoyt <cthoyt@gmail.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This updates the surrounding code to use the new ResolverEnvironment
type. In some cases, this simplifies caller code by removing case
analysis. There *shouldn't* be any behavior changes here. Some test
snapshots were updated to account for some minor tweaks to error
messages.
I didn't split this up into separate commits because it would have been
too difficult/costly.
This type is intended to replace `ResolverMarkers`. The main difference
between them is that this type encapsulates more decision making by
un-exporting the different cases. So instead of callers needing to do
explicit case analysis depending on the type of resolver environment,
callers instead use methods that know how to do the right thing. In the
next commit, there are at least a few cases where this greatly
simplifies case analysis on the caller side.
The motivation for this type is to centralize decision making about
forking. In particular, we want to expand forking to include conflicting
groups instead of just `MarkerTree`. So to a certain extent, the
refactor here is about removing bare use of `MarkerTree` in favor of a
more purpose built type that encapsulates the forking logic.
The encapsulation is not quite perfect here. I expect to improve on it a
bit once we add support for conflicting groups.
This is split off from the subsequent commit (that makes use of
`ResolverEnvironment`) so that it's a bit easier to review the addition
in isolation.
## Summary
At present, when we have a Python requirement and we see a wheel, we
verify that the Python requirement is compatible with the wheel. For
source distributions, though, we verify that both the Python requirement
_and_ the currently-installed version are compatible, because we assume
that we'll need to build the source distribution in order to get
metadata. However, we can often extract source distribution metadata
_without_ building (e.g., if there's a `pyproject.toml` with no dynamic
keys).
This PR thus modifies the source distribution handling to defer that
incompatibility ("We couldn't get metadata for this project, because it
has no static metadata and requires a higher Python version to run /
build") until we actually try to build the package. As a result, you can
now resolve source distribution-only packages using Python versions
below their `requires-python`, as long as they include static metadata.
Closes https://github.com/astral-sh/uv/issues/8767.
## Summary
This PR improves the interaction of `--frozen` such that we reduce the
dependency on the `pyproject.toml` and increase the dependency on the
`uv.lock`. Specifically, we now read the list of workspace members from
the `uv.lock` rather than the `pyproject.toml`, which means we don't
need to discover the member `pyproject.toml` files in order to perform a
`uv sync --frozen --all-packages`.
## Summary
This PR enables `uv sync --all-packages` to sync all packages in a
workspace. It removes a common use-case for the legacy non-`[project]`
packages that we're trying to move away from.
Closes https://github.com/astral-sh/uv/issues/8724.
This PR fixes a bug where it was possible for dependencies to be
included in a final resolution with markers that always evaluate to
false. Specifically, `python_version < '0'`.
While we do filter based on Python markers during forking, it turns out
that the markers for each fork are "combined" *after* this filtering
step. But the process of combination can result in a more specific
marker that is always false for the configured Python requirement. This
could result in dependencies with markers that are always false (like
`python_version < '0'`) appearing in the resolution.
The first commit in this PR adds a regression test (with an undesirable
result), and the second commit fixes the regression and updates the
test.
Fixes #8676.
## Summary
By default, `uv tree` shows the full workspace, not _just_ the root. If
the root depended on a workspace member as a dev dependency, then we'd
still show it as `(group: dev)` in `uv tree` even if you passed
`--no-dev`, because we weren't filtering the edges in the right place.
This is still somewhat confusing, because if `root` depends on workspace
member `child` as a dev dependency, `uv tree --no-dev` still shows both.
Closes https://github.com/astral-sh/uv/issues/8719.
Previously, when doing a `uv pip` resolution, we would only return the
first entry in the map. But there should only ever be one entry, or else
we would have incompatible dependencies. So we can collapse the case
with the "one universal fork" case.
(I found this while doing some refactoring of how we handle forking, and
collapsing these cases simplifies some of that refactoring work.)
When this code was written, we didn't have "proper" disjointness checks,
and so simple equality was used instead. Arguably disjointness checks
are more correct, and this would also simplify some case analysis in an
ongoing refactor.
## Summary
Unfortunately, it looks like we lost
https://github.com/astral-sh/uv/pull/8501 somewhere in a bad rebase.
This PR re-adds the change, with compatibility for those lockfiles
created in v0.4.27. I'm not certain we should actually merge this. It
might be less painful and confusing to just bite the bullet on the
change.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
It turns out we were omitting empty dependency groups from the lockfile
metadata, which was then causing us to reject locks when empty groups
were defined.
We now include them (that section of the lock is meant to be a true
representation of the metadata, and an empty-but-defined group is
different from an absent group), though we can ignore them for
validation, since it doesn't affect any behavior.
Closes https://github.com/astral-sh/uv/issues/8581.
## Summary
We already support `tool.uv.dev-dependencies` in the legacy
non-`[project]` projects. This adds equivalent support for
`[dependency-groups]`, e.g.:
```toml
[tool.uv.workspace]
[dependency-groups]
lint = ["ruff"]
```
This PR adds support for `tool.uv.default-groups`, which defaults to
`["dev"]` for backwards-compatibility. These represent the groups we
sync by default.
Part of #8090
Unblocks https://github.com/astral-sh/uv/pull/8274
Refactors `DevMode` and `DevSpecification` into a shared type
`DevGroupsSpecification` that allows us to track if `--dev` was
implicitly or explicitly provided.
Part of #8090
Adds the ability to add and remove dependencies from arbitrary groups
using `uv add` and `uv remove`. Does not include resolving with the new
dependencies — tackling that in #8110.
Additionally, this does not yet resolve interactions with the existing
`dev` group — we'll tackle that separately as well. I probably won't
merge the stack until that design is resolved.
## Summary
We were including dependencies that were only included by a dependency
that isn't relevant on the current platform (i.e., we were enforcing the
"current environment" at one level, but not transitively).
Closes https://github.com/astral-sh/uv/issues/8516.
## Summary
Historically, we haven't enforced schema versions. This PR adds a
versioning policy such that, if a uv version writes schema v2, then...
- It will always reject lockfiles with schema v3 or later.
- It _may_ reject lockfiles with schema v1, but can also choose to read
them, if possible.
(For example, the change we proposed to rename `dev-dependencies` to
`dependency-groups` would've been backwards-compatible: newer versions
of uv could still read lockfiles that used the `dev-dependencies` field
name, but older versions should reject lockfiles that use the
`dependency-groups` field name.)
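In pseudocode, the policy amounts to something like this (the constant and error handling are illustrative):
```rust
const WRITTEN_SCHEMA: u32 = 2; // the schema version this uv writes

fn check_schema(lock_schema: u32) -> Result<(), String> {
    if lock_schema > WRITTEN_SCHEMA {
        // Lockfiles from a future schema are always rejected.
        return Err(format!("unsupported lockfile schema v{lock_schema}"));
    }
    // Older schemas *may* be rejected, but we're free to read them when
    // the contents are still interpretable (best-effort parse).
    Ok(())
}

fn main() {
    assert!(check_schema(1).is_ok());  // older: attempt to read
    assert!(check_schema(3).is_err()); // newer: always reject
}
```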
Closes https://github.com/astral-sh/uv/issues/8465.
## Summary
Previously, `uv tree --package` had some strange behavior due to how we
were computing the root nodes. This PR refactors the entire
implementation to use `petgraph` so we can do proper operations on a
graph structure.
Closes https://github.com/astral-sh/uv/issues/8382.
## Summary
This is part of making
https://github.com/astral-sh/uv/issues/7299#issuecomment-2385286341
better. You can now use `tool.uv.dependency-metadata` for direct URL
requirements. Unfortunately, you _must_ include a version, since we need
one to perform resolution.
## Summary
If the user has an upper-bound in a `requires-python`, we don't
correctly narrow it during resolution. We should be narrowing based on
the intersection.
Closes #8297.
## Summary
Rather than relying on the distribution and package URL being the same
(which isn't true for Git dependencies), we can just use the
intersection of the markers directly.
Closes https://github.com/astral-sh/uv/issues/8381.
Part of https://github.com/astral-sh/uv/issues/2777
I noticed we're seeing "Python ABI" _a lot_ in error messages which I
did not expect. This improves a common case by being a little more
specific.
When a patch version isn't specified and a matching version is
referenced, the patch defaults to 0, which can be unclear or confusing.
This PR warns the user about that default.
## Summary
The first part of this issue
https://github.com/astral-sh/uv/issues/7426. Will tackle the second part
mentioned (`~=`) in a separate PR once I know this is the correct way to
warn users.
## Test Plan
Unit tests were added
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
If you pass a named index via the CLI, you can now reference it as a
named source. This required some surprisingly large refactors, since we
now need to be able to track whether a given index was provided on the
CLI vs. elsewhere (since, e.g., we don't want users to be able to
reference named indexes defined in global configuration).
Closes https://github.com/astral-sh/uv/issues/7899.
## Summary
This PR lifts the restriction that a package must come from a single
index. For example, you can now do:
```toml
[project]
name = "project"
version = "0.1.0"
readme = "README.md"
requires-python = ">=3.12"
dependencies = ["jinja2"]
[tool.uv.sources]
jinja2 = [
{ index = "torch-cu118", marker = "sys_platform == 'darwin'"},
{ index = "torch-cu124", marker = "sys_platform != 'darwin'"},
]
[[tool.uv.index]]
name = "torch-cu118"
url = "https://download.pytorch.org/whl/cu118"
[[tool.uv.index]]
name = "torch-cu124"
url = "https://download.pytorch.org/whl/cu124"
```
The construction is very similar to the way we handle URLs today: you
can have multiple URLs for a given package, but they must appear in
disjoint forks. So most of the code is just adding that abstraction to
the resolver, following our handling of URLs.
Closes #7761.
## Summary
This PR enables users to provide index credentials via named environment
variables.
For example, given an index named `internal` that requires a username
(`public`) and password
(`koala`), you can define the index (without credentials) in your
`pyproject.toml`:
```toml
[[tool.uv.index]]
name = "internal"
url = "https://pypi-proxy.corp.dev/simple"
```
Then set the `UV_INDEX_INTERNAL_USERNAME` and
`UV_INDEX_INTERNAL_PASSWORD`
environment variables, where `INTERNAL` is the uppercase version of the
index name:
```sh
export UV_INDEX_INTERNAL_USERNAME=public
export UV_INDEX_INTERNAL_PASSWORD=koala
```
## Summary
This PR adds a first-class API for defining registry indexes, beyond our
existing `--index-url` and `--extra-index-url` setup.
Specifically, you now define indexes like so in a `uv.toml` or
`pyproject.toml` file:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
```
You can also provide indexes via `--index` and `UV_INDEX`, and override
the default index with `--default-index` and `UV_DEFAULT_INDEX`.
### Index priority
Indexes are prioritized in the order in which they're defined, such that
the first-defined index has highest priority.
Indexes are also inherited from parent configuration (e.g., the
user-level `uv.toml`), but are placed after any indexes in the current
project, matching our semantics for other array-based configuration
values.
You can mix `--index` and `--default-index` with the legacy
`--index-url` and `--extra-index-url` settings; the latter two are
merely treated as unnamed `[[tool.uv.index]]` entries.
### Index pinning
If an index includes a name (which is optional), it can then be
referenced via `tool.uv.sources`:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
[tool.uv.sources]
torch = { index = "pytorch" }
```
If an index is marked as `explicit = true`, it can _only_ be used via
such references, and will never be searched implicitly:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
explicit = true
[tool.uv.sources]
torch = { index = "pytorch" }
```
Indexes defined outside of the current project (e.g., in the user-level
`uv.toml`) can _not_ be explicitly selected.
(As of now, we only support using a single index for a given
`tool.uv.sources` definition.)
### Default index
By default, we include PyPI as the default index. This remains true even
if the user defines a `[[tool.uv.index]]` -- PyPI is still used as a
fallback. You can mark an index as `default = true` to (1) disable the
use of PyPI, and (2) bump it to the bottom of the prioritized list, such
that it's used only if a package does not exist on a prior index:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
default = true
```
### Name reuse
If a name is reused, the higher-priority index with that name is used,
while the lower-priority indexes are ignored entirely.
For example, given:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
[[tool.uv.index]]
name = "pytorch"
url = "https://test.pypi.org/simple"
```
The `https://test.pypi.org/simple` index would be ignored entirely,
since it's lower-priority than `https://download.pytorch.org/whl/cu121`
but shares the same name.
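A small sketch of the deduplication (illustrative, not uv's code): iterate in definition order and keep only the first index for each name.
```rust
use std::collections::HashSet;

/// Keep the first-defined (highest-priority) index for each name.
fn dedupe_by_name<'a>(indexes: &[(&'a str, &'a str)]) -> Vec<(&'a str, &'a str)> {
    let mut seen = HashSet::new();
    indexes
        .iter()
        .filter(|(name, _)| seen.insert(*name))
        .copied()
        .collect()
}

fn main() {
    let indexes = [
        ("pytorch", "https://download.pytorch.org/whl/cu121"),
        ("pytorch", "https://test.pypi.org/simple"), // ignored entirely
    ];
    assert_eq!(dedupe_by_name(&indexes).len(), 1);
}
```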
Closes #171.
## Future work
- Users should be able to provide authentication for named indexes via
environment variables.
- `uv add` should automatically write `--index` entries to the
`pyproject.toml` file.
- Users should be able to provide multiple indexes for a given package,
stratified by platform:
```toml
[tool.uv.sources]
torch = [
{ index = "cpu", markers = "sys_platform == 'darwin'" },
{ index = "gpu", markers = "sys_platform != 'darwin'" },
]
```
- Users should be able to specify a proxy URL for a given index, to
avoid writing user-specific URLs to a lockfile:
```toml
[[tool.uv.index]]
name = "test"
url = "https://private.org/simple"
proxy = "http://<omitted>/pypi/simple"
```
## Summary
This PR declares and documents, under a common struct, all environment
variables that are used in one way or another in `uv`, whether
internally, externally, or transitively.
I think over time, as uv has grown, many environment variables have been
introduced. It's hard to know which ones exist, which ones are missing,
what they're used for, or where they're used across the code. The docs
only document a handful of them; for the others, you'd have to dive into
the code and inspect across crates to know where they're used or
relevant.
This PR is a starting attempt to unify them, make it easier to discover
which ones we have, and maybe unlock future possibilities in automating
the generation of their documentation.
I think we can split these out into multiple structs later for better
organization, but given the high influx of PRs and the new environment
variables being introduced or reused, it would be hard to organize them
all into properly namespaced structs right now while avoiding merge
conflicts and keeping everything up to date.
I don't think this has any impact on performance as they all should
still be inlined, although it may affect local build times on changes to
the environment vars as more crates would likely need a rebuild. Lastly,
some of them are declared but not used in the code, for example those in
`build.rs`. I left them declared because I still think it's useful to at
least have a reference.
Did I miss any? Are their initial docs cohesive?
Note: `uv-static` is a terrible name for a new crate; thoughts? Other
names considered were `uv-vars` and `uv-consts`.
## Test Plan
Existing tests
As per
https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html
Before this change, there were 91 separate integration test binaries.
(As discussed on Discord: I've done the `uv` crate, there are still a few
more commits coming before this is mergeable, and I want to see how it
performs in CI and locally.)
## Summary
In the routine we use to verify whether the lockfile is up-to-date, we
sometimes have to resolve package metadata. If that resolution step
fails, the resolver is left in a bad state, as various tasks are marked
as pending despite the error. Treating that as a recoverable failure
thus leads to a deadlock.
This PR modifies the errors to be treated as fatal.
I think a more holistic fix here would be to add some kind of guard to
ensure that any tasks that fail are no longer marked as pending (or
enforce this in the type system).
Closes https://github.com/astral-sh/uv/issues/8074.
## Summary
The issue here is that, if a user has a `requires-python` like `>=
3.7, != 3.8.5`, this gets expanded to the following bounds:
- `[3.7, 3.8.5)`
- `(3.8.5, ...`
We then convert this to the specifier `>= 3.7, < 3.8.5, > 3.8.5`. But the
commas in that expression are conjunctions... So it's impossible to
satisfy? No version is both `< 3.8.5` and `> 3.8.5`.
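A tiny demonstration of why that conjunction is unsatisfiable, modeling versions as tuples:
```rust
fn main() {
    // PEP 440 specifiers joined by commas are a conjunction: every
    // clause must hold simultaneously.
    let holds = |v: (u32, u32, u32)| v >= (3, 7, 0) && v < (3, 8, 5) && v > (3, 8, 5);
    // No version is both < 3.8.5 and > 3.8.5, so nothing satisfies it.
    assert!(!holds((3, 7, 0)));
    assert!(!holds((3, 8, 5)));
    assert!(!holds((3, 12, 0)));
}
```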
Instead, we now preserve the input `requires-python` and just
concatenate the terms, only using PubGrub to compute the _bounds_.
Closes https://github.com/astral-sh/uv/issues/7862.
Unlike `cp36-...`, which requires exactly CPython 3.6, `py36-none` is
compatible with all versions starting at Python 3.6.
Note that `py3x-none` should not be used. Instead, use `py3-none` with
`requires-python`.
Fixes #7800.
## Summary
If a supported environment includes a Python marker, we don't simplify
it out, despite _storing_ the simplified markers. This PR modifies the
validation code to compare simplified to simplified markers.
Closes https://github.com/astral-sh/uv/issues/7876.
## Summary
When using `uv tree --package foo`, an extra empty line appears at the
beginning, which seems unnecessary since `uv tree` without the package
option doesn’t have this. It’s possible that the intention was to add
separation between packages, i.e., the correct implementation should be:
```rust
if !std::mem::take(&mut first) {
lines.push(String::new());
}
```
Even if corrected, this extra spacing might be redundant as `uv tree`
doesn’t include these empty lines between packages by default.
```console
$ uv init project
$ cd project
$ uv init foo
$ uv tree
Using CPython 3.12.5
Resolved 2 packages in 1ms
foo v0.1.0
project v0.1.0
$ uv tree --package project
Using CPython 3.12.5
Resolved 2 packages in 1ms
project v0.1.0
```
Would it be okay to expose this struct? We currently use our own
`ResolveProvider`, and it would be nice to use the `FlatDistributions` for
easy `VersionMap` creation.
Thanks!
## Summary
`click` has a single dependency, `colorama`, which applies only on
Windows. `uv tree --invert` should not include `colorama` on non-Windows
platforms, but currently it does:
```console
$ uv init
$ uv add click
$ uv tree --invert --python-platform macos
colorama v0.4.6
```
It should instead output:
```console
$ uv tree --invert --python-platform macos
click v8.1.7
└── project v0.1.0
```
#7226 modified the check to skip prefetching of source dists without
proper minimum-version bounds, and wound up flipping the boolean
expression. This change flips the some/none expression so that the
intended skip happens as expected.
Fixes #7680.
This PR adds some additional sanity checking on resolution graphs to
ensure we can never install different versions of the same package into
the same environment.
I used code similar to this to provoke bugs in the resolver before the
release, but it never made it into `main`. Here, we add the error
checking to the creation of `ResolutionGraph`, since this is where it's
most convenient to access the "full" markers of each distribution.
We only report an error when `debug_assertions` are enabled to avoid
rendering `uv` *completely* unusable if a bug were to occur in a
production binary. For example, maybe a conflict is detected in a marker
environment that isn't actually used. While not ideal, `uv` is still
usable for any other marker environment.
Closes #5598.
This enhances the hints generator in the resolver with a heuristic to
detect and warn about failures due to version mismatches on a local
package. Such mismatches may be a symptom of name conflicts or shadowing
with a transitive dependency.
Closes: https://github.com/astral-sh/uv/issues/7329
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Recently, rkyv 0.8 was released. Its API is a fair bit simpler now for
higher level uses (like for us in `uv`) and results in us being able to
delete a fair bit of code. This also removes our last use of `syn 1.0`,
and thus drops that dependency from the tree.
Performance (via testing on the `transformers` example) seems to remain
about the same, which is what was expected:
```
$ hyperfine -w5 -r100 'uv lock' 'uv-ag-rkyv-update lock'
Benchmark 1: uv lock
Time (mean ± σ): 55.6 ms ± 6.4 ms [User: 30.4 ms, System: 35.1 ms]
Range (min … max): 43.0 ms … 73.1 ms 100 runs
Benchmark 2: uv-ag-rkyv-update lock
Time (mean ± σ): 56.5 ms ± 7.2 ms [User: 30.5 ms, System: 36.3 ms]
Range (min … max): 39.1 ms … 71.5 ms 100 runs
Summary
uv lock ran
1.02 ± 0.18 times faster than uv-ag-rkyv-update lock
```
Closes #7415.
This changes the structure of the hints generator in the resolver when
encountering solution errors, so that it re-uses a single output buffer
owned by the caller.
It avoids repeated allocations of a temporary buffer within each
recursive function call.
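The pattern, in minimal form (the tree type is a stand-in for the resolver's derivation tree):
```rust
struct Tree {
    label: String,
    children: Vec<Tree>,
}

/// Render into a caller-owned buffer: one allocation for the whole tree
/// instead of a temporary `String` per recursive call.
fn render(tree: &Tree, out: &mut String) {
    out.push_str(&tree.label);
    out.push('\n');
    for child in &tree.children {
        render(child, out);
    }
}

fn main() {
    let tree = Tree {
        label: "root".into(),
        children: vec![Tree { label: "leaf".into(), children: vec![] }],
    };
    let mut out = String::new();
    render(&tree, &mut out);
    assert_eq!(out, "root\nleaf\n");
}
```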
## Summary
This PR enables users to provide pre-defined static metadata for
dependencies. It's intended for situations in which the user depends on
a package that does _not_ declare static metadata (e.g., a
`setup.py`-only sdist), and that is expensive to build or even cannot be
built on some architectures. For example, you might have a Linux-only
dependency that can't be built on ARM -- but we need to build that
package in order to generate the lockfile. By providing static metadata,
the user can instruct uv to avoid building that package at all.
For example, to override all `anyio` versions:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]
[[tool.uv.dependency-metadata]]
name = "anyio"
requires-dist = ["iniconfig"]
```
Or, to override a specific version:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]
[[tool.uv.dependency-metadata]]
name = "anyio"
version = "3.7.0"
requires-dist = ["iniconfig"]
```
The current implementation uses `Metadata23` directly, so we adhere to
the exact schema expected internally and defined by the standards. Any
entries are treated similarly to overrides, in that we won't even look
for `anyio@3.7.0` metadata in the above example. (In a way, this also
enables #4422, since you could remove a dependency for a specific
package, though it's probably too unwieldy to use in practice, since
you'd need to redefine the _rest_ of the metadata, and do that for every
package that requires the package you want to omit.)
This is under-documented, since I want to get feedback on the core ideas
and names involved.
Closes https://github.com/astral-sh/uv/issues/7393.
## Summary
All the registry wheels were getting cached under
`index/b2a7eb67d4c26b82` rather than `pypi`, because we used
`IndexUrl::Url` rather than `IndexUrl::from`.
## Summary
This is arguably breaking, arguably a bug... Today, if project A depends
on project B, and you install A with dev dependencies enabled, you also
get B's dev dependencies. I think this is incorrect. Just like you
shouldn't be importing B's dependencies from A, you shouldn't be using
B's dev dependencies when developing on A.
Closes #7310.
## Summary
We have to call `to_dist` to get metadata while validating the lockfile,
but some of the distributions won't match the current platform -- and
that's fine!
## Summary
We need to apply the `--no-install` filters earlier, such that we don't
error if we only have a source distribution for a given package when
`--no-build` is provided but that package is _omitted_.
Closes #7247.
This is preparatory work for the upload functionality, which needs to
read the METADATA file and attach its parsed contents to the POST
request: We move finding the `.dist-info` from `install-wheel-rs` and
`uv-client` to a new `uv-metadata` crate, so it can be shared with the
publish crate.
I don't know for sure if it's the right place, since the upload code
isn't ready, but I'm PR-ing it now because it already had merge
conflicts.
## Summary
We now track the discovered `IndexCapabilities` for each `IndexUrl`. If
we learn that an index doesn't support range requests, we avoid doing
any batch prefetching.
Closes https://github.com/astral-sh/uv/issues/7221.
## Summary
If we have a singleton `Range`, we don't need to iterate over the map of
available ranges; instead, we can just get the singleton directly.
Closes #6131.
This finally gets rid of our hack for working around "hidden"
state. We no longer do a roundtrip marker serialization and
deserialization just to avoid the hidden state.
## Summary
I think a better tradeoff here is to skip fetching metadata, even though
we can't validate the extras.
It will help with situations like
https://github.com/astral-sh/uv/issues/5073#issuecomment-2334235588 in
which, otherwise, we have to download the wheels twice.
(This is part of #5711)
## Summary
@BurntSushi and I spotted that the `derivative` crate is only used for
one enum in the entire codebase — however, it's a proc macro, and we pay
for the cost of (re)compiling it in many different contexts.
This replaces it with a private `Inner` core which uses the regular std
derive macros — inlining and optimizations should make this equivalent
to the other implementation, and not too hard to maintain hopefully
(versus a manual impl of `PartialEq` and `Hash` which have to be kept in
sync.)
## Test Plan
Trust CI?
## Summary
Like `uv sync`, you can omit the current project (`--no-emit-project`),
a specific package (`--no-emit-package`), or the entire workspace
(`--no-emit-workspace`).
Closes https://github.com/astral-sh/uv/issues/6960.
Closes #6995.
Follow-up to #6959 and #6961: Use the reachability computation instead
of `propagate_markers` everywhere.
With `marker_reachability`, we have a function that computes, for each
node, the markers under which it is included in the installation (for
`requirements.txt`, where no markers are provided at install time) or
can be included (for `uv.lock`, depending on the markers provided at
install time). Put differently: if the marker computed by
`marker_reachability` is not fulfilled for the current platform, the
package is never required on the current platform.
We compute the markers for each package in the graph, this includes the
virtual extra packages and the base packages. Since we know that each
virtual extra package depends on its base package (`foo[bar]` implies
`foo`), we only retain the base package marker in the `requirements.txt`
graph.
In #6959/#6961 we were only using it for pruning packages in `uv.lock`,
now we're also using it for the markers in `requirements.txt`.
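As a toy model of the computation (uv's real code operates on `MarkerTree`s; here a "marker" is just the set of environments it matches), reachability is a fixed-point union over incoming edges:
```rust
use std::collections::{BTreeMap, BTreeSet};

type Marker = BTreeSet<&'static str>; // AND = intersection, OR = union

fn marker_reachability(
    root: &'static str,
    edges: &[(&'static str, &'static str, Marker)],
) -> BTreeMap<&'static str, Marker> {
    let mut reach = BTreeMap::new();
    reach.insert(root, ["linux", "macos", "windows"].into_iter().collect());
    // Fixed point: a node is reachable under (parent marker AND edge
    // marker), unioned over all incoming edges.
    let mut changed = true;
    while changed {
        changed = false;
        for (from, to, marker) in edges {
            let Some(parent) = reach.get(from).cloned() else { continue };
            let contribution: Marker = parent.intersection(marker).copied().collect();
            let entry = reach.entry(*to).or_default();
            let before = entry.len();
            entry.extend(contribution);
            changed |= entry.len() != before;
        }
    }
    reach
}

fn main() {
    let edges = [
        ("root", "foo", ["linux", "macos"].into_iter().collect::<Marker>()),
        ("foo", "bar", ["macos", "windows"].into_iter().collect::<Marker>()),
    ];
    let reach = marker_reachability("root", &edges);
    // `bar` is only ever needed on macOS.
    assert_eq!(reach["bar"], ["macos"].into_iter().collect::<Marker>());
}
```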
I think this closes #4645, CC @bluss.
## Summary
We need to prioritize hashes for the distribution over hashes for the
related packages.
I think this needs to be redone entirely though. I can see other issues
with the current approach.
Closes https://github.com/astral-sh/uv/issues/7059.
When a package is included under a platform-specific marker, we know
that wheels that mismatch this marker can never be installed, so we drop
them from the lockfile.
In transformers, we have:
* `tensorflow-text`: `tensorflow-macos; python_full_version >= '3.13'
and platform_machine == 'arm64' and platform_system == 'Darwin'`
* `tensorflow-macos`: `tensorflow-cpu-aws; (python_full_version < '3.10'
and platform_machine == 'aarch64' and platform_system == 'Linux') or
(python_full_version >= '3.13' and platform_machine == 'aarch64' and
platform_system == 'Linux') or (python_full_version >= '3.13' and
platform_machine == 'arm64' and platform_system == 'Linux')`
* `tensorflow-macos`: `tensorflow-intel; python_full_version >= '3.13'
and platform_system == 'Windows'`
This means that `tensorflow-cpu-aws` and `tensorflow-intel` can never be
installed, and we can drop them from the lockfile.
This commit refactors how we deal with `requires-python` so that instead of
simplifying markers of dependencies inside the resolver, we do it at the
edges of our system. When writing markers to output, we simplify when
there's an obvious `requires-python` context. And when reading markers
as input, we complexify markers with the relevant `requires-python`
constraint.
When I first wrote this routine, it was intended to only emit a trace
for the final "unioned" resolution. But we actually moved that semantic
operation to the construction of the resolution *graph*. So there is no
unioned `Resolution` any more.
But this is still useful to see. So I changed this to just emit a trace
of *every* resolution right before constructing the graph.
It might be nice to also emit a trace of the unioned graph too. Or
perhaps we should do that instead if this proves too noisy. (Although
this is only emitted at TRACE level.)
## Summary
Right now, we have slightly different `requires-python` semantics for
`-p 3.11` vs. `-p 3.11 --universal`, and slightly different (wrong)
semantics for how we compare against the _installed_ Python version
(which doesn't ignore upper bounds, but should).
This PR rips it all out and replaces it with consistent semantics across
`uv lock`, `uv pip compile -p 3.11`, and `uv pip compile -p 3.11
--universal`. We now always ignore upper bounds.
Closes https://github.com/astral-sh/uv/issues/6859.
Closes https://github.com/astral-sh/uv/issues/5045.
## Summary
We now respect the user-provided upper bound for `requires-python`.
So, if the user has `requires-python = "==3.11.*"`, we won't explore
forks that have `python_version >= '3.12'`, for example.
However, we continue to _only_ compare the lower bounds when assessing
whether a dependency is compatible with a given Python range.
Closes https://github.com/astral-sh/uv/issues/6150.
## Summary
The interface here is intentionally a bit more limited than `uv pip
compile`, because we don't want `requirements.txt` to be a system of
record -- it's just an export format. So, we don't write annotation
comments (i.e., which dependency is requested from which), we don't
allow writing extras, etc. It's just a flat list of requirements, with
their markers and hashes.
Closes #6007.
Closes #6668.
Closes #6670.
## Summary
Whether a package is itself virtual isn't captured in the package
metadata, so we have to compare the sources.
Closes https://github.com/astral-sh/uv/issues/6749.
## Summary
Use a dedicated source type for non-package requirements. Also enables
us to support non-package `path` dependencies _and_ removes the need to
have the member `pyproject.toml` files available when we sync _and_
makes it explicit which dependencies are virtual vs. not (as evidenced
by the snapshot changes). All good things!
## Summary
This is similar to https://github.com/astral-sh/uv/pull/6171 but more
expansive... _Anywhere_ that we test requirements for platform
compatibility, we _need_ to respect the resolver-friendly markers. In
fixing the motivating issue (#6621), I also realized that we had a bunch
of bugs here around `pip install` with `--python-platform` and
`--python-version`, because we always performed our `satisfy` and `Plan`
operations on the interpreter's markers, not the adjusted markers!
Closes https://github.com/astral-sh/uv/issues/6621.
For users who were previously using absolute paths in their
`pyproject.toml`, this is a behavior change: we now convert all absolute
paths in `path` entries to relative paths. Since I assume that no one
relies on absolute paths in their lockfiles (they are intended to be
portable), I'm tagging this as a bugfix.
Closes https://github.com/astral-sh/uv/pull/6438
Fixes https://github.com/astral-sh/uv/issues/6371
## Summary
It turns out we weren't applying the collapse logic here, so dev deps
with extras were repeated. This was generally ok... unless we ended up
_dropping_ an extra, in which case, you now have a duplicate.
Closes https://github.com/astral-sh/uv/issues/6380.
## Summary
We're gonna work on a more comprehensive review of whether we should
preserve the username here, but for now, `git@` is effectively a
convention for GitHub and GitLab etc.
Closes https://github.com/astral-sh/uv/issues/6305.
## Test Plan
I guess we don't have infrastructure for testing SSH private keys right
now, but...
```
❯ cargo run init foo
❯ cd foo
❯ cargo run add git+ssh://git@github.com/astral-sh/mkdocs-material-insiders.git
```
## Summary
For non-virtual workspaces, dev dependencies are covered by the
_members_. But for
virtual workspaces, they aren't captured anywhere else in the lock. So,
we weren't invalidating `uv.lock` when the dev dependencies changed,
which led to a panic.
Closes https://github.com/astral-sh/uv/issues/6288
This PR migrates uv's use of `chrono` to `jiff`.
I did most of this work a while back as one of my tests to ensure Jiff
could actually be used in a real-world project. I decided to revive
this because I noticed that `reqwest-retry` dropped its Chrono
dependency, which I believe is the only other thing requiring Chrono in
uv.
(Although, we use a fork of `reqwest-middleware` at present, and that
hasn't been updated to latest upstream yet. I wasn't quite sure of the
process we have for that.)
In the course of doing this, I actually made two changes to uv:
First is that the lock file now writes an RFC 3339 timestamp for
`exclude-newer`. Previously, we were using Chrono's `Display`
implementation for this which is a non-standard but "human readable"
format. I think the right thing to do here is an RFC 3339 timestamp.
Second is that, in addition to an RFC 3339 timestamp, `--exclude-newer`
used to accept a "UTC date." But this PR changes it to a "local date."
That is, a date in the user's system configured time zone. I think
this makes more sense than a UTC date, but one alternative is to drop
support for a date and just rely on an RFC 3339 timestamp. The main
motivation here is that automatically assuming UTC is often somewhat
confusing, since just writing an unqualified date like `2024-08-19` is
often assumed to be interpreted relative to the writer's "local" time.
## Summary
The strategy here is: if the user provides supported environments, we
use those as the initial forks when resolving. As a result, we never add
or explore branches that are disjoint with the supported environments.
(If the supported environments change, we ignore the lockfile entirely,
so we don't have to worry about any interactions between supported
environments and the preference forks.)
Closes https://github.com/astral-sh/uv/issues/6184.
## Summary
I probably won't land https://github.com/astral-sh/uv/pull/6210 for the
release, but I do want to make one breaking change to prep for it:
renaming `environment-markers` to `resolution-markers` (or
`solution-markers`?) so that it's delineated from the user-defined
markers in that PR.
Indented blocks in Markdown are treated as code blocks, and rustdoc
treats all unadorned code blocks as Rust doctests. Since this wasn't
intended as a doctest and isn't valid Rust, it makes `cargo test --doc`
fail. We fix this by using an explicit code block labeled as `text`.
In https://github.com/astral-sh/uv/issues/5046, we show the tautological
proof:
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20 and only the following versions of jax are available:
jax<=0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
we can conclude that colabfold[alphafold]==1.5.5 depends on jax>=0.4.20.
And because jax>=0.4.20 depends on numpy>=1.26.0, we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0.
(1)
```
This is a part of the error tree because the statement
`colabfold[alphafold]==1.5.5 depends on jax>=0.4.20` is actually a
simplification of `colabfold[alphafold]==1.5.5 depends on
jax>=0.4.20,<0.5.0` and the no versions clause is a proof of that
simplification.
Without simplification, the clause looks like:
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20,<0.5.0 and only the following versions of jax are available:
jax<=0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
we can conclude that colabfold[alphafold]==1.5.5 depends on one of:
jax==0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
And because jax>=0.4.20 depends on numpy>=1.26.0, we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0.
```
I don't think we have a great way to avoid performing the simplification
of the range conditionally and it makes the error simpler to just jump
straight to `colabfold[alphafold]==1.5.5 depends on jax>=0.4.20`.
The derivation for this clause looks like:
```
jax==0.4.20 | ==0.4.21 | ==0.4.22 | ==0.4.23 | ==0.4.24 | ==0.4.25 | ==0.4.26 | ==0.4.27 | ==0.4.28 | ==0.4.29 | ==0.4.30 | ==0.4.31 depends on numpy>=1.26.0
no versions of jax>0.4.20, <0.4.21 | >0.4.21, <0.4.22 | >0.4.22, <0.4.23 | >0.4.23, <0.4.24 | >0.4.24, <0.4.25 | >0.4.25, <0.4.26 | >0.4.26, <0.4.27 | >0.4.27, <0.4.28 | >0.4.28, <0.4.29 | >0.4.29, <0.4.30 | >0.4.30, <0.4.31 | >0.4.31, <0.5.0
colabfold[alphafold]==1.5.5 depends on jax>=0.4.20, <0.5.0
```
So it looks like we can take trees of this form and drop the "no
versions" clause _if_ the ranges are compatible[*]. See [this
comment](https://github.com/astral-sh/uv/pull/6160#discussion_r1720280922)
for a simpler explanation.
With this pull request, the clause simplifies to
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20 and jax>=0.4.20 depends on numpy>=1.26.0,
we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0. (1)
```
Unfortunately, this doesn't change any snapshots in our test suite so
I'm uncertain if the strategy generalizes. In some incorrect iterations
of this logic, the snapshots did reveal my mistakes.
[*] "if the ranges are compatible" includes a bit of hand-waving. I'm
not 100% sure if I've chosen the correct range heuristic here.
Closes https://github.com/astral-sh/uv/issues/6167
We've been seeing intermittent failures in CI, which we thought were
unexpected HTTP 401s, but which actually look like a panic while handling
an expected HTTP error. I believe the problem is that an early client error
can cause the channel to close and we crash when we unwrap the `send`.
## Summary
In the resolver, we use release-only semantics to normalize
`python_full_version`. So, if we see `python_full_version < '3.13'`, we
treat that as `(Unbounded, Exclude(3.13))`. `3.13b0` evaluates as `true`
to that range, so we were accepting pre-releases for these markers.
Instead, we need to exclude pre-release segments when performing these
evaluations.
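A toy illustration of the pitfall with a simplified version type (not uv's actual one):
```rust
// Pre-releases sort below their final release, mirroring PEP 440.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum Pre {
    Beta(u32),
    Final,
}

#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Version((u32, u32, u32), Pre);

fn main() {
    let beta = Version((3, 13, 0), Pre::Beta(0)); // 3.13b0
    let bound = Version((3, 13, 0), Pre::Final); // 3.13
    // Under release-only range semantics, `python_full_version < '3.13'`
    // becomes (Unbounded, Exclude(3.13)), and 3.13b0 slips inside it.
    assert!(beta < bound);
}
```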
Closes https://github.com/astral-sh/uv/issues/6169.
## Test Plan
Hard to write a test for this because you need a pre-release Python
locally... so:
`echo "sqlalchemy==2.0.32" | cargo run pip compile - --python 3.13 -n`
Now that these incompatibilities are collected into a single range
(https://github.com/astral-sh/uv/pull/6154), we can simplify the range
using the known available versions to reduce verbosity.
There were different `PubGrubPackage` types so they never matched the
available versions set! Luckily, the available versions are agnostic to
the markers and optional dependencies so we can just broaden to using
`PackageName` as a lookup key.
Addresses yet another complaint in
https://github.com/astral-sh/uv/issues/5046
I need this for debugging error messages.
I used an environment variable instead of a trace log so you can do
`UV_INTERNAL__SHOW_DERIVATION_TREE=1` and run a test to see the tree in
the test snapshot without further changes.
e.g.
```rust
// Resolving should fail.
uv_snapshot!(context.filters(), context.lock().arg("--preview").current_dir(&workspace), @r###"
success: false
exit_code: 1
----- stdout -----
UV_INTERNAL__SHOW_DERIVATION_TREE
root==0a0.dev0 depends on foo*
root==0a0.dev0 depends on bar[some-extra]*
foo==0.1.0 depends on anyio==4.1.0
bar[some-extra]==0.1.0 depends on anyio==4.2.0
no versions of bar[some-extra]<0.1.0 | >0.1.0
----- stderr -----
Using Python 3.12.[X] interpreter at: [PYTHON-3.12]
× No solution found when resolving dependencies:
╰─▶ Because only bar[some-extra]==0.1.0 is available and bar[some-extra] depends on anyio==4.2.0, we can conclude that all versions of bar[some-extra] depend on anyio==4.2.0.
And because foo depends on anyio==4.1.0, we can conclude that foo and all versions of bar[some-extra] are incompatible.
And because your workspace requires bar[some-extra] and foo, we can conclude that your workspace's requirements are unsatisfiable.
"###
);
```
We have bad error messages for optional (extra) dependencies and
development dependencies in workspaces:
1. We weren't showing the full package, so we'd drop `:dev` and
`[extra]` by accident
2. We didn't include derived packages, e.g., `member[extra]`, in the
tree-processing collapse operation, so we'd include extra clauses like
the ones we removed in #6092
Also:
- Reverts f0de4f71f2; it turns out it wasn't quite correct, and it
didn't seem worth using the custom incompatibility anymore.
- Fixes a bug in the display of `package:dev`, which was not showing
`:dev` for some variants (see 94d8020b58)
## Summary
Normalize all `python_version` markers to their equivalent
`python_full_version` form. This avoids false positives in forking
because we currently cannot detect any relationships between the two
forms. It also avoids subtle bugs due to the truncating semantics of
`python_version`. For example, given `requires-python = ">3.12"`, we
currently simplify the marker `python_version <= 3.12` to `false`.
However, the version `3.12.1` will be truncated to `3.12` for
`python_version` comparisons, and thus it satisfies the python
requirement and evaluates to `true`.
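A minimal sketch of the expansion (a simplification of uv's actual `python_version_to_full_version`, handling only `major.minor` comparisons):
```rust
/// Rewrite `python_version <op> 'major.minor'` into an equivalent
/// `python_full_version` range. `python_version` truncates to two
/// release segments, so e.g. 3.12.1 compares equal to 3.12.
fn normalize(op: &str, major: u64, minor: u64) -> String {
    let next = minor + 1;
    match op {
        "==" => format!(
            "python_full_version >= '{major}.{minor}' and python_full_version < '{major}.{next}'"
        ),
        "<=" => format!("python_full_version < '{major}.{next}'"),
        "<" => format!("python_full_version < '{major}.{minor}'"),
        ">=" => format!("python_full_version >= '{major}.{minor}'"),
        ">" => format!("python_full_version >= '{major}.{next}'"),
        _ => unimplemented!(),
    }
}

fn main() {
    // The example above: under `requires-python = ">3.12"`, this marker
    // is *not* false, because 3.12.1 truncates to 3.12 and satisfies it.
    assert_eq!(normalize("<=", 3, 12), "python_full_version < '3.13'");
}
```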
It is possible to simplify back to `python_version` when writing markers
to the lockfile. However, the equivalent `python_full_version` markers
are often clearer and easier to simplify, so I lean towards leaving them
as `python_full_version`.
There are *a lot* of snapshot updates from this change. I'd like more
eyes on the transformation logic in `python_version_to_full_version` to
ensure that they are all correct.
Resolves https://github.com/astral-sh/uv/issues/6125.
Includes the changes from https://github.com/astral-sh/uv/pull/6071 but
takes them way further.
When we have the set of available versions for a package, we can do a
much better job displaying an error.
For example:
```
❯ uv add 'httpx>999,<9999'
× No solution found when resolving dependencies:
╰─▶ Because only the following versions of httpx are available:
httpx<=999
httpx>=9999
and example==0.1.0 depends on httpx>999,<9999, we can conclude that example==0.1.0 cannot be used.
And because only example==0.1.0 is available and you require example, we can conclude that the requirements are unsatisfiable.
```
The resolver has demonstrated that the requested range cannot be used
because there are only versions in ranges _outside_ the requested range.
However, the display of the range of available versions is pretty bad!
We say there are versions of httpx available in ranges that definitely
have no versions available.
With this pull request, the error becomes:
```
❯ uv add 'httpx>999,<9999'
× No solution found when resolving dependencies:
╰─▶ Because only httpx<=1.0.0b0 is available and example depends on httpx>999,<9999, we can conclude that example's
requirements are unsatisfiable.
And because your workspace requires example, we can conclude that your workspace's requirements are unsatisfiable.
```
We achieve this by:
1. Dropping ranges disjoint with the range of available versions, e.g.,
this removes `httpx>=9999`
2. Replacing ranges that capture the _entire_ range of available
versions with the smaller range, e.g., this replaces `httpx<=999` with
`<=1.0.0b0`.
~Note that when we perform (2), we may include an additional bound that
is not relevant, e.g., we include the lower bound of `>=0.6.7`. This is
a bit extraneous, but I don't think it's confusing. We can consider some
advanced logic to avoid that later.~ (edit: I did this, it wasn't hard)
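As a rough sketch of the two steps over toy integer ranges (uv's real range
types are more involved):
```rust
/// Toy half-open version ranges: `(lo, hi)` means `lo <= v < hi`.
type Range = (u64, u64);

fn disjoint(a: Range, b: Range) -> bool {
    a.1 <= b.0 || b.1 <= a.0
}

fn covers(outer: Range, inner: Range) -> bool {
    outer.0 <= inner.0 && inner.1 <= outer.1
}

/// Simplify the ranges shown in an error against the range of versions that
/// actually exist.
fn simplify(shown: Vec<Range>, available: Range) -> Vec<Range> {
    shown
        .into_iter()
        // (1) Drop ranges that contain no available versions at all.
        .filter(|range| !disjoint(*range, available))
        // (2) A range that captures everything available collapses to it.
        .map(|range| if covers(range, available) { available } else { range })
        .collect()
}
```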
We also improve error messages when there is _only_ one version
available by showing that version instead of a range.
## Summary
Gives the caller control over how messages are reported back to the
user. Also merges the index-location validation into the lock, since
we're already iterating over the packages.
## Summary
This is no longer required since we no longer implement `Eq` on `Lock`.
It will also sometimes be "wrong" as of #6076, since we now apply
different `requires-python` filtering to different parts of the tree
during resolution.
## Summary
Using https://github.com/astral-sh/uv/issues/6064 as a motivating
example: at present, on main, we're not properly propagating the
`Requires-Python` simplifications. In that case, for example, we end up
solving for a branch with `python_version < 3.11`, and a branch `>=
3.11`, even though `Requires-Python` is `>=3.11`. Later, when we get to
the graph, we apply version simplification based on `Requires-Python`,
which causes us to _remove_ the `python_version < 3.11` markers
entirely, leaving us with duplicate dependencies for `pylint`.
This PR instead tries to ensure that we always apply this narrowing to
requirements and forks, so that we don't need to apply the same
simplification when constructing the graph at all.
Closes https://github.com/astral-sh/uv/issues/6064.
Closes #6059.
In particular, I added this as a hack to avoid a kind of
instability that was caused by our marker code not correctly
detecting markers that were always false. But that has since
been fixed.
Removing this code doesn't change any tests. Arguably it
should be possible to come up with a test that failed with
this hack inserted but succeeded without it. In particular,
with this hack, new forks were being prevented from being
added even when they ought to be added, e.g., when preferences
get updated.
## Summary
This PR changes the definition of `--locked` from:
> Produces the same `Lock`
To:
> Passes `Lock::satisfies`
This is a subtle but important difference. Previously, if
`Lock::satisfies` failed, we would run a resolution, then do
`existing_lock == lock`. If the two weren't equal, and `--locked` was
specified, we'd throw an error.
The equality check is hard to get right. For example, it means that we
can't ship #6076 without changing our marker representation, since the
deserialized lockfile "loses" some of the internal marker state that
gets accumulated during resolution.
The downside of this change is that there could be scenarios in which
`uv lock --locked` fails even though the lockfile would actually work
and the exact TOML would be unchanged. But... I think it's ok if
`--locked` fails after the user modifies something?
Extends https://github.com/astral-sh/uv/pull/6092 to improve resolver
error messages for workspaces that have a single member.
As before, this requires a two-step approach of
1. Traversing the derivation tree and collapsing some members. In this
case, we drop the empty root node in favor of the project.
2. Using special-case formatting for packages. In this case, the
workspace package is referred to with "your project" instead of its
name.
An extension of #6090 that replaces #6066.
In brief,
1. Workspace member names are passed to the resolver for no solution
errors
2. There is a new derivation tree pre-processing step that trims
`NoVersion` incompatibilities for workspace members from the derivation
tree. This avoids showing redundant clauses like `Because only
bird==0.1.0 is available and bird==0.1.0 depends on anyio==4.3.0, we can
conclude that all versions of bird depend on anyio==4.3.0.`. As a minor
note, we use a custom incompatibility kind to mark these
incompatibilities at resolution-time instead of afterwards.
3. Root dependencies on workspace members say `your workspace requires
bird` rather than `you require bird`
4. Workspace member package display omits the version, e.g., `bird`
instead of `bird==0.1.0`
5. Instead of reporting a workspace member as unusable we note that its
requirements cannot be solved, e.g., `bird's requirements are
unsatisfiable` instead of `bird cannot be used`.
6. Instead of saying `your requirements are unsatisfiable` we say `your
workspace's requirements are unsatisfiable` when in a workspace, since
we're not in a "provide direct requirements" paradigm.
As an annoying but minor implementation detail, `PackageRange` now
requires access to the `PubGrubReportFormatter` so it can determine if
it is formatting a workspace member or not. We could probably improve
the abstractions in the future.
As a follow-up, we should add special casing for "single project"
workspaces to avoid mentioning the workspace concept in simple projects.
However, it looks like this will require additional tree manipulations
so I'm going to keep it separate.
## Summary
Historically, in order to "resolve from a lockfile", we've taken the
lockfile, used it to pre-populate the in-memory metadata index, then run
a resolution. If the resolution didn't match our existing resolution, we
re-resolved from scratch.
This was an appealing approach because (in theory) it didn't require any
dedicated logic beyond pre-populating the index. However, it's proven to
be _really_ hard to get right, because it's a stricter requirement than
we need. We just need the current lockfile to _satisfy_ the requirements
provided by the user. We don't actually need a second resolution to
produce the exact same result. And it's not uncommon that this second
resolution differs, because we seed it with preferences, which
fundamentally changes its course. We've worked hard to minimize those
"instabilities", but they're still present.
The approach here is intended to be much simpler. Instead of resolving
from the lockfile, we just check if the current resolution satisfies the
state of the workspace. Specifically, we check if the lockfile (1)
contains all the relevant members, and (2) matches the metadata for all
dependencies, recursively. (We skip registry dependencies, assuming that
they're immutable.)
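In rough pseudocode (stand-in types, not uv's actual API), the check looks
something like:
```rust
/// Minimal stand-ins for the real types, to sketch the two checks.
struct Package {
    name: String,
    from_registry: bool,
    /// Stand-in for the recursive metadata comparison.
    metadata_current: bool,
}

struct Lock {
    packages: Vec<Package>,
}

/// The lockfile satisfies the workspace if (1) every member is present and
/// (2) every non-registry package's metadata still matches.
fn satisfies(lock: &Lock, members: &[String]) -> bool {
    let members_present = members
        .iter()
        .all(|member| lock.packages.iter().any(|pkg| &pkg.name == member));
    let metadata_matches = lock
        .packages
        .iter()
        .all(|pkg| pkg.from_registry || pkg.metadata_current);
    members_present && metadata_matches
}
```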
This may actually be too conservative, since we can have resolutions
that satisfy the requirements, even if the requirements have changed
slightly. But we want to bias towards correctness for now.
My hope is that this scheme will be more performant, simpler, and more
robust.
Closes https://github.com/astral-sh/uv/issues/6063.
## Summary
This was added in https://github.com/astral-sh/uv/pull/5405 but is now
the cause of an instability in `github_wikidata_bot`. Specifically, on
the initial run, we fork in `pydantic==2.8.2`, via:
```
Requires-Dist: typing-extensions>=4.12.2; python_version >= '3.13'
Requires-Dist: typing-extensions>=4.6.1; python_version < '3.13'
```
In the end, we resolve a single version of `typing-extensions`
(`4.12.2`)... But we don't recognize the two resolutions as the "same
graph", because we propagate the fork markers, and so the "edges" have
different markers on them...
In the second run through, when we have the forks in advance, we don't
split on Pydantic... We just try to solve from the root with the current
forks. This is fundamentally different and I fear it will be the cause
of many instabilities. But removing this graph check fixes the proximate
issue.
I don't really understand why this was added since there was no test
coverage in the PR.
## Summary
When constructing the `Resolution`, we only propagated the fork markers
to the package node, but not the extras node. This led to cases in which
an extra could be included unconditionally or otherwise diverge from the
base package version.
Closes https://github.com/astral-sh/uv/issues/6062.
## Summary
Right now, we store the environment markers in a `BTreeSet` -- so
they're sorted, but the sort doesn't really tell us anything. I think we
should instead store them in the order in which we solved. I thought
this might fix an instability (it didn't), but I think it's still good
to ensure we solve in the same order.
I also changed from `Option<Vec>` to just `Vec`, since there was no
distinction between `None` and empty.
## Summary
Now, if you resolve against a registry, then swap it out for another, we
won't reuse the lockfile. (If you don't provide any registry
configuration, then we won't enforce this, so that running `uv lock
--index-url foo` followed by `uv lock` is stable.)
Closes https://github.com/astral-sh/uv/issues/5920.
## Summary
Our current handling of `--find-links` merges the entries in each index.
As a result, we can end up with `AnnotatedDist` entries that reference
distributions across indexes.
I'd like to change `--find-links` such that each `--find-links` entry is
just treated as its own index (so, e.g., if `requests` exists in the
first `--find-links` entry, we don't even check the registry by
default), which would _also_ fix this problem automatically. But that's
a behavior change... So for now, in the lockfile, we filter
distributions that don't match the source index URL.
There are two cases to consider:
- There's a source distribution. Then, for the ID to reference the
`--find-links` registry, the source distribution _must_ have come from
the `--find-links` entry, so it's fine to discard any wheels from the
"wrong" registry without breaking any compatibility guarantees.
- There's no source distribution. Then the best wheel must come from the
`--find-links` registry. We might lose some platform coverage by
discarding the other wheels, but it shouldn't break any of the
"guarantees", since we have at least one wheel that fits in the version
range.
Closes https://github.com/astral-sh/uv/issues/6015.
Surprisingly, this is a lockfile schema change: we can't store relative
paths in URLs, so we have to store a `filename` entry instead of the
whole URL.
Fixes #4355
## Summary
We now persist the `ResolverInstallerOptions` when writing out a tool
receipt. When upgrading, we grab the saved options, and merge with the
command-line arguments and user-level filesystem settings (CLI > receipt
> filesystem).
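As a sketch, the per-field precedence is an `Option` chain (the field and type
names here are illustrative, not uv's actual options struct):
```rust
/// Hypothetical per-tool options with one representative field.
struct ToolOptions {
    resolution: Option<String>,
}

/// Merge options with CLI > receipt > filesystem precedence.
fn merge(cli: ToolOptions, receipt: ToolOptions, filesystem: ToolOptions) -> ToolOptions {
    ToolOptions {
        resolution: cli
            .resolution
            .or(receipt.resolution)
            .or(filesystem.resolution),
    }
}
```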
Warn when there are missing bounds on transitive dependencies with
`--resolution lowest`.
Implemented as a lazy resolution graph check. Dev deps are odd because
they are missing the edge from the root that extras have (they are
currently orphans in the resolution graph), but this is more complex to
solve properly because we can't put dev dep information in a
`Requirement`, so I special-cased them here.
Closes #2797
Should help with #1718
---------
Co-authored-by: Ibraheem Ahmed <ibraheem@ibraheem.ca>
## Summary
This PR rewrites the `MarkerTree` type to use algebraic decision
diagrams (ADD). This has many benefits:
- The diagram is canonical for a given marker function. It is impossible
to create two functionally equivalent marker trees that don't refer to
the same underlying ADD. This also means that any trivially true or
unsatisfiable markers are represented by the same constants.
- The diagram can handle complex operations (conjunction/disjunction) in
polynomial time, as well as constant-time negation.
- The diagram can be converted to a simplified DNF form for user-facing
output.
The new representation gives us a lot more confidence in our marker
operations and simplification, which is proving to be very important
(see https://github.com/astral-sh/uv/pull/5733 and
https://github.com/astral-sh/uv/pull/5163).
Unfortunately, it is not easy to split this PR into multiple commits
because it is a large rewrite of the `marker` module. I'd suggest
reading through the `marker/algebra.rs`, `marker/simplify.rs`, and
`marker/tree.rs` files for the new implementation, as well as the
updated snapshots to verify how the new simplification rules work in
practice. However, a few other things were changed:
- [We now use release-only comparisons for `python_full_version`, where
we previously only did for
`python_version`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/algebra.rs#L522).
I'm unsure how marker operations should work in the presence of
pre-release versions if we decide that this is incorrect.
- [Meaningless marker expressions are now
ignored](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/parse.rs#L502).
This means that a marker such as `'x' == 'x'` will always evaluate to
`true` (as if the expression did not exist), whereas we previously
treated this as always `false`. Its negation, however, remains `false`.
- [Unsatisfiable markers are written as `python_version <
'0'`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/tree.rs#L1329).
- The `PubGrubSpecifier` type has been moved to the new `uv-pubgrub`
crate, shared by `pep508-rs` and `uv-resolver`. `pep508-rs` also depends
on the `pubgrub` crate for the `Range` type; we probably want to move
`pubgrub::Range` into a separate crate to break this dependency, but I
don't think that should block this PR (cc @konstin).
There is still some remaining work here that I decided to leave for now
for the sake of unblocking some of the related work on the resolver.
- We still use `Option<MarkerTree>` throughout uv, which is unnecessary
now that `MarkerTree::TRUE` is canonical.
- The `MarkerTree` type is now interned globally and can potentially
implement `Copy`. However, it's unclear if we want to add more
information to marker trees that would make it `!Copy`. For example, we
may wish to attach extra and requires-python environment information to
avoid simplifying after construction.
- We don't currently combine `python_full_version` and `python_version`
markers.
- I also have not spent too much time investigating performance and
there is probably some low-hanging fruit. Many of the test cases I did
run actually saw large performance improvements due to the markers being
simplified internally, reducing the stress on the old `normalize`
routine, especially for the extremely large markers seen in
`transformers` and other projects.
Resolves https://github.com/astral-sh/uv/issues/5660,
https://github.com/astral-sh/uv/issues/5179.
## Summary
This PR adds a `DistExtension` field to some of our distribution types,
which requires that we validate that the file type is known and
supported when parsing (rather than when attempting to unzip). It
removes a bunch of extension parsing from the code too, in favor of
doing it once upfront.
Closes https://github.com/astral-sh/uv/issues/5858.
## Summary
This _used_ to be true, but we now require fetching metadata for all
distributions even with `--no-deps` since, e.g., we validate that any
declared extras exist.
Currently, the entry for a package+version+source table is called
`distribution`. That is incorrect: the `sdist` and `wheel` fields inside
of that table are distributions, while the table itself describes a
package. This also aligns us more closely with PEP 751.
I went through `lock.rs` and renamed all occurrences of "distribution"
that actually referred to a "package".
This change invalidates all existing lockfiles.
Bikeshedding: Do we call it `package` or `packages`? See also
https://github.com/python/peps/pull/3877
`package` is nice because it looks like a header:
```toml
[[package]]
name = "anyio"
version = "4.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "idna" },
{ name = "sniffio" },
]
sdist = { url = "3970183622d484d08e3285104333d3/anyio-4.3.0.tar.gz", hash = "sha256:f75253795a87df48568485fd18cdd2a3fa5c4f7c5be8e5e36637733fce06fed6", size = 159642 }
wheels = [
{ url = "2f20c40b45242c0b33774da0e2e34f/anyio-4.3.0-py3-none-any.whl", hash = "sha256:048e05d0f6caeed70d731f3db756d35dcc1f35747c8c403364a8332c630441b8", size = 85584 },
]
```
`packages` is nice because the field is not a single entry, but a list.
2/3 for https://github.com/astral-sh/uv/issues/4893
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
There are three options that determine resolver behavior:
* resolution mode
* prerelease mode
* exclude newer
They are different from the other top level options: If they mismatch,
we recreate the resolution. To distinguish them from the rest of the
lockfile, we group them under an `[options]` header.
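For illustration, the `[options]` table might look something like this (the
exact keys and values here are hypothetical):
```toml
[options]
resolution-mode = "lowest-direct"
prerelease-mode = "if-necessary-or-explicit"
exclude-newer = "2024-03-25T00:00:00Z"
```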
1/3 for #4893
## Summary
We were dropping the query and fragment in the wrong place, so the URLs
didn't match up after resolving from an existing lockfile.
Closes https://github.com/astral-sh/uv/issues/5851.
## Summary
Very subtle bug. The scenario is as follows:
- We resolve: `elmer-circuitbuilder = { git =
"https://github.com/ElmerCSC/elmer_circuitbuilder.git" }`
- The user then changes the request to: `elmer-circuitbuilder = { git =
"https://github.com/ElmerCSC/elmer_circuitbuilder.git", rev =
"44d2f4b19d6837ea990c16f494bdf7543d57483d" }`
- When we go to re-lock, we note two facts:
1. The "default branch" resolves to
`44d2f4b19d6837ea990c16f494bdf7543d57483d`.
2. The metadata for `44d2f4b19d6837ea990c16f494bdf7543d57483d` is
(whatever we grab from the lockfile).
- In the resolver, we then ask for the metadata for
`44d2f4b19d6837ea990c16f494bdf7543d57483d`. It's already in the cache,
so we return it; thus, we never add the
`44d2f4b19d6837ea990c16f494bdf7543d57483d` ->
`44d2f4b19d6837ea990c16f494bdf7543d57483d` mapping to the Git resolver,
because we never have to resolve it.
This would apply for any case in which a requested tag or branch was
replaced by its precise SHA. Replacing with a different commit is fine.
It only applied to `tool.uv.sources`, and not PEP 508 URLs, because the
underlying issue is that we aren't consistent about "automatically"
extracting the precise commit from a Git reference.
Closes https://github.com/astral-sh/uv/issues/5860.
## Summary
This fixes a bug introduced by
https://github.com/astral-sh/uv/pull/5232. It turns out that the
`universal_disjoint_base_or_local_requirement` test does not actually do
what it was meant to because of the incorrect python requirement. With a
valid python requirement, it fails on `main`. The problem is that we try
to exclude the original base version from the range of allowed versions
to try and prefer local versions. However, in the test, there is a
branch that depends on the non-local version, with no applicable local
in its fork. We should remove this exclusion as prioritization is
handled by the candidate resolver.
I don't think this will save any time in serialization, but it should
save us some deserialization, since we only need to parse URLs for the
packages we use...
## Summary
Okay, I tested this against...
- Our public "private" proxy
- Fury
- AWS CodeArtifact
- Azure Artifacts
It took a long time.
All of them work as expected with this approach: we omit the credentials
from the lockfile, then wire them back up when the index URL is provided
during subsequent operations.
Closes https://github.com/astral-sh/uv/issues/5119.
## Summary
`uv tree` will now filter to the current platform by default. You can
pass `--universal` to show the entire tree.
Closes https://github.com/astral-sh/uv/issues/5760.
## Summary
Right now, if you have a `requirements.txt` with a pre-release, but the
`requirements.in` does not have a pre-release marker for that dependency,
we drop the pre-release. (In the selector, we end up returning
`AllowPrerelease::IfNecessary`, the default.)
I played with a few ways of solving this... The first was to remove that
guard altogether. But if we do that,
`universal_transitive_disjoint_prerelease_requirement` fails (we use
`1.17.0rc1` in both forks, when it should only apply to one of the two).
The second was to do that, but also avoid pushing pre-releases as
preferences when we solve a fork. But then
`universal_disjoint_prereleases` fails, because we return a different
pre-release in each fork.
Finally, I settled on allowing existing pre-releases in forks if they
have no markers on them, i.e., they are "global" preferences. I believe
this is true IFF the preference came from an existing lockfile.
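A minimal sketch of that rule, assuming hypothetical `Preference` fields:
```rust
/// Hypothetical preference record carried over from a previous resolution.
struct Preference {
    is_prerelease: bool,
    /// Markers for the fork the preference came from; `None` means the
    /// preference is "global" (i.e., it came from an existing lockfile).
    fork_markers: Option<String>,
}

/// Allow a pre-release preference inside a fork only if it is global.
fn allow_prerelease_preference(preference: &Preference) -> bool {
    preference.is_prerelease && preference.fork_markers.is_none()
}
```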
Closes https://github.com/astral-sh/uv/issues/5729.
## Summary
If _both_ nodes in the derivation tree are proxies, we need to remove
the _entire_ node. So, the function now returns an `Option<Tree>`
instead of taking `&mut Tree`.
Closes https://github.com/astral-sh/uv/issues/5618.
## Summary
Partially resolves #5561. I haven't added overrides support yet, but I can
add it tomorrow if the current approach for constraints is ok.
## Test Plan
`cargo test`
Manually checked trace logs after changing the constraints.
## Summary
As-is, if you have a workspace with mixed `requires-python`
requirements, resolution will _never_ succeed, since we'll use the union
as the `requires-python` bound (i.e., take the lowest value), and fail
when we see the package that only supports some more narrow range.
This PR modifies the behavior to take the intersection (i.e., the
highest value), so if you have one package that supports Python 3.12 and
later, and another that supports Python 3.8 and later, we lock for
Python 3.12. If you try to sync or run with Python 3.8, we raise an
error, since the lockfile will be incompatible with that request.
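Since these specifiers are effectively lower bounds, the intersection is just
the maximum lower bound. A toy sketch with `(major, minor)` tuples:
```rust
/// Compute the workspace-wide lower bound from per-member `requires-python`
/// lower bounds (toy representation; uv uses real PEP 440 specifiers).
fn workspace_lower_bound(members: &[(u64, u64)]) -> Option<(u64, u64)> {
    members.iter().copied().max()
}

fn main() {
    // One member supports >=3.8, another >=3.12: lock for 3.12.
    assert_eq!(workspace_lower_bound(&[(3, 8), (3, 12)]), Some((3, 12)));
}
```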
Konsti has a write-up in https://github.com/astral-sh/uv/issues/5594
that outlines what could be a longer-term strategy.
Closes https://github.com/astral-sh/uv/issues/5578.
By resolving each fork from the lockfile individually and by using
preferences for the current fork, we solve the instability in #5180.
I've tested this locally and will add the packse test scenarios upstack.
Part of
https://github.com/astral-sh/uv/issues/5180#issuecomment-2247696198
## Summary
Given a fork like:
```
pylint < 3 ; sys_platform == 'darwin'
pylint > 2 ; sys_platform != 'darwin'
```
Solving the top branch will typically yield a solution that also
satisfies the bottom branch, due to maximum version selection (while the
inverse isn't true).
To quote an example from the docs:
```rust
// If there's no difference, prioritize forks with upper bounds. We'd prefer to solve
// `numpy <= 2` before solving `numpy >= 1`, since the resolution produced by the former
// might work for the latter, but the inverse is unlikely to be true due to maximum
// version selection. (Selecting `numpy==2.0.0` would satisfy both forks, but selecting
// the latest `numpy` would not.)
```
Closes https://github.com/astral-sh/uv/issues/4926 for now.
## Summary
First part of: https://github.com/astral-sh/uv/issues/4926. We should
solve forks that _don't_ expand the world of supported versions (e.g.,
`python_version >= '3.11'` enables us to select new packages, since we
narrow the supported version range).
This PR represents a different approach to marker propagation in an
attempt to unblock #4640. In particular, instead of propagating markers
when forks are created, we wait until resolution is complete to
propagate all markers to all dependencies in each fork. This ends up
being both more robust (we should never miss anything) and simpler to
implement because it doesn't require mutating a `PubGrubPackage` (which
was pretty annoying). I think the main downside here is that this can
sometimes add markers where they aren't needed.
This actually winds up making quite a few snapshot changes. I went
through each of them. Some of them look like legitimate bug fixes. Some
of them look like superfluous additions. And some of them look like they
would be removed if we had perfect marker normalization. But I don't
think any of the changes are _wrong_.
Consider the following packse scenario:
```toml
[root]
requires = [
"a>=1.0.0 ; python_version < '3.10'",
"a>=1.1.0 ; python_version >= '3.10'",
"a>=1.2.0 ; python_version >= '3.11'",
]
[packages.a.versions."1.0.0"]
[packages.a.versions."1.1.0"]
[packages.a.versions."1.2.0"]
```
On current `main`, this produces a dependency on `a` that looks like
this:
```toml
dependencies = [
{ name = "fork-overlapping-markers-basic-a", marker = "python_version < '3.10' or python_version >= '3.11'" },
]
```
But the marker expression is clearly wrong here, since it implies that
`a` isn't installed at all for Python 3.10. With this PR, the above
dependency becomes:
```toml
dependencies = [
{ name = "fork-overlapping-markers-basic-a" },
]
```
That is, it's unconditional, which I believe is correct here, since there
aren't any other constraints on which version to select.
The specific bug here is that when we found overlapping dependency
specifications for the same package *within* a pre-existing fork, we
intersected all of their marker expressions instead of unioning them.
That in turn resulted in incorrect marker expressions.
While this doesn't fix any known bug on the issue tracker (like #4640),
it does appear to fix a couple of our snapshot tests. And fixes a basic
test case I came up with while working on #4732.
For the packse scenario test: https://github.com/astral-sh/packse/pull/206
When a fork occurs, we divide not just the dependencies that
provoked a fork into distinct groups, but we also add the
corresponding sibling dependencies to each fork. Previously,
while we track markers on the fork itself, the individual
dependencies that had markers only corresponded to markers
written from the dependency specification.
This meant that the sibling dependencies that got added to
each fork would not themselves have markers attached to them.
This in turn meant they would not have markers associated with
them in the lock file.
In many cases, this is actually okay, because the resolver will
pick a version that is "universal" across all forks in most
cases. But in some cases, this simply isn't possible, as
the marker expressions in the fork can and do influence resolution.
In that case, it is possible for the same package to show up in the
lock file at different versions, unconditionally. Which is a
big no-no.
So in this commit, after we determine the forks, we intersect the
markers on each fork with each of its dependencies.
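A minimal sketch of that step, using string stand-ins for the real types:
```rust
/// Stand-in for uv's real `MarkerTree`, just to make the shape concrete.
struct MarkerTree(String);

impl MarkerTree {
    /// In-place AND, like the real `MarkerTree::and`.
    fn and(&mut self, other: MarkerTree) {
        self.0 = format!("({}) and ({})", self.0, other.0);
    }
}

struct Dependency {
    marker: MarkerTree,
}

fn apply_fork_markers(fork_markers: &MarkerTree, dependencies: &mut [Dependency]) {
    for dependency in dependencies {
        // Intersect the fork's markers into each sibling dependency so the
        // condition survives into the lock file.
        dependency.marker.and(MarkerTree(fork_markers.0.clone()));
    }
}
```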
This does seem to balloon the marker expressions in some cases.
I plucked one low-hanging fruit to avoid doing `x and x` in
trivial cases. (And this eliminated a portion of the snapshot
diffs.) But some pretty gnarly diffs remain.
This commit also fixes another bug: previously, when we created a fork
to capture the "remaining" universe of an incomplete set of markers, we
left out dependencies that should be included in that fork. We rectify
that here.
Fixes #5086
Partially addresses #4732
Two changes split out from the instability work:
* Break `ResolutionGraph::from_state` into methods before adding new
logic to it.
* `ResolutionGraph`: Convert `NodeKey` type to `PackageRef` struct:
Another small refactoring to make subsequent changes easier.
No functional changes.
@BurntSushi I hope this doesn't interfere with your work too much, the
`PackageRef` should at least make debugging panics here easier.
In preparation for the preferences changes with forking, change the
method structure in `CandidateSelector`. Split out into its own PR to
avoid merge conflicts with main. No functional changes.
Consider these requirements from pylint 3.2.5:
```
Requires-Dist: dill >=0.3.6 ; python_version >= "3.11"
Requires-Dist: dill >=0.3.7 ; python_version >= "3.12"
```
We will split on the python version, but then we may pick a version of
`dill` that's `>=0.3.7` in both branches and also have an otherwise
identical resolution in both forks. In this case, we merge both forks
and store only their conjoined markers.
## Summary
Normalize the order of marker expressions on construction. This removes
the distinction between expressions like `os_name == 'Linux'` vs.
`'Linux' == os_name` throughout the codebase. One caveat here is that
the `in` operator does not have a direct inverse, so we introduce
`MarkerOperator::Contains` to handle that case.
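A toy version of the flip (a simplified operator enum, not the real
`MarkerOperator`):
```rust
#[derive(Clone, Copy, Debug)]
enum Op {
    Eq,
    NotEq,
    Lt,
    LtEq,
    Gt,
    GtEq,
    In,
    NotIn,
    Contains,
    NotContains,
}

/// Rewrite `<value> <op> <key>` as `<key> <flip(op)> <value>`, so that, e.g.,
/// `'Linux' == os_name` normalizes to `os_name == 'Linux'`.
fn flip(op: Op) -> Op {
    match op {
        Op::Lt => Op::Gt,
        Op::LtEq => Op::GtEq,
        Op::Gt => Op::Lt,
        Op::GtEq => Op::LtEq,
        // `in` has no direct PEP 508 inverse, hence the new operator:
        // `'nux' in os_name` becomes `os_name contains 'nux'`.
        Op::In => Op::Contains,
        Op::Contains => Op::In,
        Op::NotIn => Op::NotContains,
        Op::NotContains => Op::NotIn,
        symmetric => symmetric, // == and != are their own inverses
    }
}
```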
I wanted to land this smaller change before some more intrusive changes
as it simplifies the existing code quite a bit.
## Summary
Prior to this change, the resolver would panic if we ran with
`--offline` and `--no-deps` and we had cached metadata for a _package_
(i.e., the versions) but no cached metadata for the _distribution_
(i.e., the specific wheel), since we weren't validating that the
returned metadata in the `--no-deps` case was actually successful. (We
need metadata, even for `--no-deps`, so that we can validate extras.)
## Test Plan
The added test panics on the previous branch.
## Summary
Prefers, in order:
- The major-minor version of an interpreter discovered via `--python`.
- The `requires-python` from the workspace.
- The major-minor version of the default interpreter.
If the `--python` request is a version or a version range, we use that
without fetching an interpreter.
Closes https://github.com/astral-sh/uv/issues/5299.
The `RequirementsTxtComparator` was written assuming there is one
distribution per package name. This changed with the universal
resolution, which allows multiple versions or urls for the same package
name. The sorting we emitted for these new entries was incidental.
With this change, we properly sort these entries by name, version and
then url in universal mode.
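The new ordering is essentially a three-level sort key (illustrative entry
type with string-valued versions, not the actual comparator):
```rust
struct Entry {
    name: String,
    version: String,
    url: Option<String>,
}

/// Sort universal-mode entries by name, then version, then URL.
fn sort_universal(entries: &mut [Entry]) {
    entries.sort_by(|a, b| {
        a.name
            .cmp(&b.name)
            .then_with(|| a.version.cmp(&b.version))
            .then_with(|| a.url.cmp(&b.url))
    });
}
```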
This is an output format change for `--universal` users.
## Summary
Excellent find from @konstin. If we have a package that's included in
two forks at the same version, but with different URLs, we need to avoid
collapsing them in the lockfile.
Closes https://github.com/astral-sh/uv/issues/5294.
Looking at how to merge identical forks, I found that the `Resolution`
can be simplified by treating it as a nodes-and-edges store (plus pins;
they are separate since they are per name-version, not per
(virtual-)package-version). This should also make #5294 more apparent,
which I didn't touch here.
I additionally added some doc comments to the `Resolution` types.
Our everything-method `solve` tends to grow large, so before adding
more logic, I'm moving some code and some logging statements around to
keep it manageable.
I made minor changes to the logging; otherwise there are no logic
changes, only refactoring.
## Summary
`dearpygui==1.9.1 has no wheels with a matching Python ABI` is way
better than `The requested Python version (>=3.12.3) does not satisfy
Python>=3.12.3`.
Closes https://github.com/astral-sh/uv/issues/5284.
This fixes resolving packages that publish an invalid stub to PyPI, such
as tensorrt-llm.
## Summary
In https://github.com/astral-sh/uv/pull/3138, we implemented
`unsafe-best-match`. However, it doesn't quite work as expected: when
multiple indexes contain the same version, it's not clear which index
the current code uses. This PR fixes that by using the first index the
package appears in.
## Test Plan
```console
$ echo 'tensorrt-llm==0.11.0' | ./target/debug/uv pip compile - --extra-index-url https://pypi.nvidia.com --python-version=3.10 --index-strategy=unsafe-best-match --annotation-style=line
```
## Summary
Given `Requires-Python: ">=3.12.3"`, we were rejecting wheels like
`dearpygui-1.11.1-cp312-cp312-win_amd64.whl`, since `3.12.0` is not
included in `>=3.12.3`. We instead need to test against the major-minor
version of `Requires-Python`.
The easiest way to do this, I think, is to use the `RequiresPython`
struct, which has a single bound that we can truncate to its major-minor
version.
This also means that we now allow
`dearpygui-1.11.1-cp312-cp312-win_amd64.whl` for specifiers like
`Requires-Python: "==3.10.*"`. This is incorrect on the surface, but it
does match our semantics for `Requires-Python` elsewhere: we treat it as
a lower bound.
Closes https://github.com/astral-sh/uv/issues/5287.
## Summary
This PR modifies the lockfile to include the impactful resolution
settings, like the resolution and pre-release mode. If any of those
values change, we want to ignore the existing lockfile. Otherwise,
`--resolution lowest-direct` will typically have no effect, which is
really unintuitive.
Closes https://github.com/astral-sh/uv/issues/5226.
## Summary
This fixes a few bugs introduced by
https://github.com/astral-sh/uv/pull/5104. I previously thought we could
track conflicting locals the same way we track conflicting URLs in
forks, but it turns out that ends up being very tricky. URL forks work
because we prioritize directly URL requirements. We can't prioritize
locals in the same way without conflicting with the URL prioritization
(this may be possible but it's not trivial), so we run into issues where
a correct resolution depends on the order in which dependencies are
traversed.
Instead, we track local versions across all forks in `Locals`. When
applying a local version, we apply all locals with markers that
intersect with the current fork. This way we end up applying some local
versions without creating a fork. For example, given:
```
// pyproject.toml
dependencies = [
"torch==2.0.0+cu118 ; platform_machine == 'x86_64'",
]
// requirements.in
torch==2.0.0
.
```
We choose `2.0.0+cu118` in all cases. However, if a disjoint fork is
created based on local versions, the resolver will choose the most
compatible local when it narrows to a specific fork. Thus we correctly
respect local versions when forking:
```
// pyproject.toml
dependencies = [
"torch==2.0.0+cu118 ; platform_machine == 'x86_64'",
"torch==2.0.0+cpu ; platform_machine != 'x86_64'"
]
// requirements.in
torch==2.0.0
.
```
We should also be able to use a similar strategy for
https://github.com/astral-sh/uv/pull/5150.
## Test Plan
This fixes https://github.com/astral-sh/uv/issues/5220 locally for me,
as well as a few other bugs that were not reported yet.
## Summary
This ensures that we process Python installs and uninstalls as soon as
they complete, rather than waiting for them all to complete, then
processing them sequentially. In practice, it shouldn't be much of a
difference (since the processing code is fairly light), but it
strikes me as more correct.
As a user, you specify a list of extras. Internally, we decompose this
into one virtual package per extra. We currently leak this abstraction
by writing one entry per extra to the lockfile:
```toml
[[distribution]]
name = "foo"
version = "4.39.0.dev0"
source = { editable = "." }
dependencies = [
{ name = "pandas" },
{ name = "pandas", extra = "excel" },
{ name = "pandas", extra = "hdf5" },
{ name = "pandas", extra = "html", marker = "os_name != 'posix'" },
{ name = "pandas", extra = "output-formatting", marker = "os_name == 'posix'" },
{ name = "pandas", extra = "plot", marker = "os_name == 'posix'" },
]
```
Instead, we should merge the extras into a list of extras, creating a
more concise lockfile:
```toml
[[distribution]]
name = "foo"
version = "4.39.0.dev0"
source = { editable = "." }
dependencies = [
{ name = "pandas", extra = ["excel", "hdf5"] },
{ name = "pandas", extra = ["html"], marker = "os_name != 'posix'" },
{ name = "pandas", extra = ["output-formatting", "plot"], marker = "os_name == 'posix'" },
]
```
The base package is now implicitly included, as it is in PEP 508.
Fixes #4888
Warn when there is a direct dependency without a lower bound and
`--resolution lowest` is set.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Hashes will be validated if present, but aren't required (since, e.g.,
some registries will omit them, as will Git dependencies and such).
Closes https://github.com/astral-sh/uv/issues/5168.
## Summary
We only need to store one hash -- it should be the "strongest" hash. In
practice, most registries (like PyPI) only serve one, and we only
compute a SHA256 hash for direct URLs.
Part of: https://github.com/astral-sh/uv/issues/4924
## Test Plan
I verified that changing:
```diff
diff --git a/crates/distribution-types/src/hash.rs b/crates/distribution-types/src/hash.rs
index 553a74f55..d36c62286 100644
--- a/crates/distribution-types/src/hash.rs
+++ b/crates/distribution-types/src/hash.rs
@@ -31,7 +31,7 @@ impl<'a> HashPolicy<'a> {
pub fn algorithms(&self) -> Vec<HashAlgorithm> {
match self {
Self::None => vec![],
- Self::Generate => vec![HashAlgorithm::Sha256],
+ Self::Generate => vec![HashAlgorithm::Sha256, HashAlgorithm::Sha512],
Self::Validate(hashes) => {
let mut algorithms = hashes.iter().map(HashDigest::algorithm).collect::<Vec<_>>();
algorithms.sort();
```
Then running `uv lock` with a URL gave me:
```toml
[[distribution]]
name = "iniconfig"
version = "2.0.0"
source = { url = "62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl" }
wheels = [
{ url = "62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl", hash = "sha512:44cc53a6c8dd7cf4d6d52bded308bcc4b4f85fff2ed081f60f7d4beaa86a7cde6d099e3976331232d4cbd472ad5d1781064725b0999c7cd3a2a4d42df687ee81" },
]
```
* Use a dedicated `ResolverMarkers` check in the fork state. This is
better than the `MarkerTree::And(Vec::new())` check.
* Report the timing with the correct name, "universal resolution",
instead of two spaces around an empty string when there are no markers.
* On resolution error, show the split that we're in. I'm not sure how to
word this, since we're doing a universal resolution until we fork, so
the trace may contain information from requirements that are not part of
this fork.
## Summary
Currently, the `Locals` type relies on there being a single local
version for a given package. With marker expressions this may not be
true; this is a similar problem to https://github.com/astral-sh/uv/pull/4435.
This changes the `Locals` type to `ForkLocals`, which tracks locals for
a given fork. Local versions are now tracked on `PubGrubRequirement`
before forking.
Resolves https://github.com/astral-sh/uv/issues/4580.
Specifically, this shows the resolution produced by the
resolver *before* constructing a resolution graph.
Unlike most trace messages, this is a multi-line message
that needs to do a small amount of work to build
itself, so we explicitly gate on the log level here
instead of just relying on the `trace!` macro itself.
## Summary
The example in the linked issue doesn't quite work, but I think it has
to do with the existing filtering logic. Will follow up separately.
Closes https://github.com/astral-sh/uv/issues/5012.
In some cases, it's possible for the marker expressions on conflicting
dependency specification to be disjoint but *incomplete*. That is, if
one unions the disjoint markers, the result is not the complete set of
marker environments possible. There may be some "gap" of marker
environments not covered by the markers.
This is a problem in practice because, before this commit, we only
created forks in the resolver for specific marker expressions. So if a
dependency happened to fall in a "gap," our resolver would never see it.
This commit fixes this by adding a new split covering the negation of
the union of all marker expressions in a set of forks for a specific
package.
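A minimal sketch of the new split, again using a string stand-in for the real
marker type:
```rust
/// Stand-in for uv's real `MarkerTree`, just to show the shape.
#[derive(Clone)]
struct MarkerTree(String);

impl MarkerTree {
    fn or(&mut self, other: &MarkerTree) {
        self.0 = format!("({}) or ({})", self.0, other.0);
    }

    fn negate(&self) -> MarkerTree {
        MarkerTree(format!("not ({})", self.0))
    }
}

/// The extra split covers the negation of the union of all fork markers,
/// i.e., any marker environments the explicit forks miss.
fn remainder_fork(forks: &[MarkerTree]) -> Option<MarkerTree> {
    let mut iter = forks.iter();
    let mut union = iter.next()?.clone();
    for marker in iter {
        union.or(marker);
    }
    Some(union.negate())
}
```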
Originally, I had planned on only creating this split when it was known
that the gap actually existed. That is, when the negation of the marker
expressions did *not* correspond to the empty set. After a lot of
thought, unfortunately, this (I believe) effectively boils down to 3SAT,
which is NP-complete.
Instead, what we do here is *always* create an extra split unless we can
definitively tell that it is empty. We look for a few cases, but
otherwise throw our hands up and potentially do wasted work.
This also updates the lock scenario tests to reflect the actual bug fix
here.
The only pubgrub error that can occur is a `NoSolutionError`, and the
only place it can occur is `unit_propagation`; all other variants of
`PubGrubError` are unreachable. By changing the return type on pubgrub's
side (https://github.com/astral-sh/pubgrub/pull/28), we can remove the
pattern matching and the `unreachable!()` asserts on `PubGrubError`.
Our pubgrub error wrapper used to have a two-phase initialization:
first mostly stubs in `solve[_tracked]()`, then adding the actual
context in `resolve()`. When constructing the error in `solve` we
already have all this context, so we can unify this to a regular
constructor and remove the special casing in `resolve()` and `hints()`.
Currently, with
```toml
[project]
name = "transformers"
version = "4.39.0.dev0"
requires-python = ">=3.10"
dependencies = [
"torch==1.10.0"
]
```
I get
```
$ uv sync --preview
Resolved 3 packages in 7ms
error: found distribution torch==1.10.0 @ registry+https://pypi.org/simple with neither wheels nor source distribution
```
This error message is wrong: there are wheels, they are just not
compatible. I initially got this error message during `uv lock` (in a
build), so I also noted that this is about installation, not about
locking.
We should reject this version immediately because, with the current
`requires-python`, it can never be installed. But even then we'd need to
change the error message, because you can be on the correct Python
version but an unsupported platform.
## Summary
Use the lockfile to prefill the `InMemoryIndex` used by the resolver.
This enables us to resolve completely from the lockfile without making
any network requests/builds if the requirements are unchanged. It also
means that if new requirements are added we can still avoid most I/O
during resolution, partially addressing
https://github.com/astral-sh/uv/issues/3925.
The main limitation of this PR is that resolution from the lockfile can
fail if new versions are requested that are not present in the lockfile,
in which case we have to perform a fresh resolution. Fixing this would
likely require lazy version/metadata requests by `VersionMap` (this is
different from the lazy parsing we do; the list of versions in a
`VersionMap` is currently immutable).
Resolves https://github.com/astral-sh/uv/issues/3892.
## Test Plan
Added a `deterministic!` macro that ensures that a resolve from the
lockfile and a clean resolve result in the same lockfile output for all
our current tests.
## Summary
We currently store wheel URLs in an unparsed state because we don't have
a stable parsed representation to use with rkyv. Unfortunately, this
means we end up reparsing unnecessarily in a lot of places, especially
when constructing a `Lock`. This PR adds a `UrlString` type that lets us
avoid reparsing without losing the validity of the `Url`.
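A minimal sketch of the idea (not uv's exact definition, and assuming the
`url` crate):
```rust
/// Store the serialized URL, which is cheap to clone, hash, and archive;
/// parse into a full `url::Url` only when one is actually needed.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct UrlString(String);

impl UrlString {
    fn as_str(&self) -> &str {
        &self.0
    }

    /// Parsing is deferred to the few call sites that need a real `Url`.
    fn to_url(&self) -> Result<url::Url, url::ParseError> {
        url::Url::parse(&self.0)
    }
}
```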
## Test Plan
Shaves off another ~10 ms from
https://github.com/astral-sh/uv/issues/4860.
```
➜ transformers hyperfine "../../uv/target/profiling/uv lock" "../../uv/target/profiling/baseline lock" --warmup 3
Benchmark 1: ../../uv/target/profiling/uv lock
Time (mean ± σ): 120.9 ms ± 2.5 ms [User: 126.0 ms, System: 80.6 ms]
Range (min … max): 116.8 ms … 125.7 ms 23 runs
Benchmark 2: ../../uv/target/profiling/baseline lock
Time (mean ± σ): 129.9 ms ± 4.2 ms [User: 127.1 ms, System: 86.1 ms]
Range (min … max): 123.4 ms … 141.2 ms 23 runs
Summary
../../uv/target/profiling/uv lock ran
1.07 ± 0.04 times faster than ../../uv/target/profiling/baseline lock
```
## Summary
Avoid serializing and writing the lockfile if a cheap comparison shows
that the contents have not changed.
## Test Plan
Shaves ~10ms off of https://github.com/astral-sh/uv/issues/4860 for me.
```
➜ transformers hyperfine "../../uv/target/profiling/uv lock" "../../uv/target/profiling/baseline lock" --warmup 3
Benchmark 1: ../../uv/target/profiling/uv lock
Time (mean ± σ): 130.5 ms ± 2.5 ms [User: 130.3 ms, System: 85.0 ms]
Range (min … max): 126.8 ms … 136.9 ms 23 runs
Benchmark 2: ../../uv/target/profiling/baseline lock
Time (mean ± σ): 140.5 ms ± 5.0 ms [User: 142.8 ms, System: 85.5 ms]
Range (min … max): 133.2 ms … 153.3 ms 21 runs
Summary
../../uv/target/profiling/uv lock ran
1.08 ± 0.04 times faster than ../../uv/target/profiling/baseline lock
```
This is an attempt to solve https://github.com/astral-sh/uv/issues/ by
applying the extra marker of the requirement to overrides and
constraints.
Say in `a` we have the requirements
```
b==1; python_version < "3.10"
c==1; extra == "feature"
```
and overrides
```
b==2; python_version < "3.10"
b==3; python_version >= "3.10"
c==2; python_version < "3.10"
c==3; python_version >= "3.10"
```
Our current strategy is to discard the markers in the original
requirements. This means that on 3.12 for `a` we install `b==3`, but it
also means that we add `c` to `a` without `a[feature]`, causing #4826.
With this PR, the new requirements become:
```
b==2; python_version < "3.10"
b==3; python_version >= "3.10"
c==2; python_version < "3.10" and extra == "feature"
c==3; python_version >= "3.10" and extra == "feature"
```
allowing markers to be overridden while preserving optional dependencies
as such.
Fixes #4826
## Summary
In marker normalization, we now remove any markers that are redundant
with the `requires-python` specifier (i.e., always true for the given
Python requirement).
For example, given `iniconfig ; python_version >= '3.7'`, we can remove
the `python_version >= '3.7'` marker when resolving with
`--python-version 3.8`.
Closes#4852.
## Summary
Given `python_version != '3.8' and python_version < '3.10'`, the first
term was expanded to `python_version < '3.8'` and `python_version >
'3.8'`. We then AND'd all three terms together. We don't seem to have a
way to differentiate between the terms to AND and the terms to OR in the
normalization code (it all gets flattened together), so instead this PR
expands the expressions at the leaf level and then flattens them at the
level above when appropriate.
Closes https://github.com/astral-sh/uv/issues/4910.
## Summary
More marker simplification:
- Filters out redundant subtrees based on outer expressions, e.g. `a and (a or
b)` simplifies to `a`.
- Flattens nested trees internally, e.g. `(a and b) and c`
Resolves https://github.com/astral-sh/uv/issues/4536.
The PR #4707 introduced the notion of "version narrowing," where a
Requires-Python constraint was _possibly_ narrowed whenever the
universal resolver created a fork. The version narrowing would occur
when the fork was a result of a marker expression on `python_version`
that is *stricter* than the configured `Requires-Python` (via, say,
`pyproject.toml`).
The crucial conceptual change made by #4707 is therefore that
`Requires-Python` is no longer an invariant configuration of resolution,
but rather a mutable constraint that can vary from fork to fork. This in
turn can result in some cases, such as in #4885, where different
versions of dependencies are selected. We aren't sure whether we can fix
those or not, with version narrowing, so for now, we do this revert to
restore the previous behavior and we'll try to address the version
narrowing some other time.
This also adds the case from #4885 as a regression test, ensuring that
we don't break that in the future. I confirmed that with version
narrowing, this test outputs duplicate distributions. Without narrowing,
there are no duplicates.
Ref #4707, Fixes #4885
## Summary
There are a few ideas at play here:
1. pip always strips versions to the release when evaluating against a
`Requires-Python`, so we now do the same. That means, e.g., using
`3.13.0b0` will be accepted by a project with `Requires-Python: >=
3.13`, which does _not_ adhere to PEP 440 semantics but is somewhat
intuitive.
2. Because we know we'll only be evaluating against release-only
versions, we can use different semantics in PubGrub that let us collapse
ranges. For example, `python_version >= '3.10' or python_version <
'3.10'` can be collapsed to the truthy marker.
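To illustrate the first point, a toy release-only comparison (uv uses real
PEP 440 versions; this is just the shape of the rule):
```rust
/// Compare only the release segment against the Requires-Python lower bound,
/// ignoring any pre-release suffix, so `3.13.0b0` is treated as `3.13.0`.
fn satisfies_lower_bound(release: (u64, u64, u64), lower: (u64, u64, u64)) -> bool {
    release >= lower
}

fn main() {
    // 3.13.0b0 strips to the release (3, 13, 0), which satisfies >= 3.13.
    assert!(satisfies_lower_bound((3, 13, 0), (3, 13, 0)));
}
```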
Closes https://github.com/astral-sh/uv/issues/4714.
Closes https://github.com/astral-sh/uv/issues/4272.
Closes https://github.com/astral-sh/uv/issues/4719.
Remove wheels from the lockfile that don't match the required python
version. For example, we remove
`charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl` when we have
`requires-python = ">=3.12"`.
Our snapshots barely show changes, since we avoid the large binaries for
which this matters. Here are three real-world `uv.lock` before/after
comparisons to show the large difference:
*
[warehouse](https://gist.github.com/konstin/9a1ed6a32b410e250fcf4c6ea8c536a5)
(5677 -> 4214)
*
[transformers](https://gist.github.com/konstin/5636281b5226f64aa44ce3244d5230cd)
(6484 -> 5816)
*
[github-wikidata-bot](https://gist.github.com/konstin/ebbd7b9474523aaa61d9a8945bc02071)
(793 -> 454)
We only remove wheels we are certain don't match the python version and
still keep those with unknown tags. We could remove even more wheels by
also considering other markers, e.g. removing linux wheels for a
windows-only dep, but we would trade complex, easy-to-get-wrong logic
for diminishing returns.
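An illustrative version of the check (not uv's implementation; real wheel-tag
parsing is more careful):
```rust
/// Keep a wheel unless its cpXY tag is certainly below the requires-python
/// lower bound; wheels with unknown tags are kept.
fn keep_wheel(filename: &str, min_minor: u64) -> bool {
    let minor = filename
        .split('-')
        .find_map(|part| part.strip_prefix("cp"))
        .and_then(|tag| tag.get(1..)) // skip the major digit: "311" -> "11"
        .and_then(|minor| minor.parse::<u64>().ok());
    match minor {
        Some(minor) => minor >= min_minor, // cp311 is removed for >= 3.12
        None => true,                      // unknown tags are kept
    }
}
```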
Whew this is a lot.
The user-facing changes are:
- `uv toolchain` to `uv python` e.g. `uv python find`, `uv python
install`, ...
- `UV_TOOLCHAIN_DIR` to `UV_PYTHON_INSTALL_DIR`
- `<UV_STATE_DIR>/toolchains` to `<UV_STATE_DIR>/python` (with
[automatic
migration](https://github.com/astral-sh/uv/pull/4735/files#r1663029330))
- User-facing messages no longer refer to toolchains, instead using
"Python", "Python versions" or "Python installations"
The internal changes are:
- `uv-toolchain` crate to `uv-python`
- `Toolchain` no longer referenced in type names
- Dropped unused `SystemPython` type (previously replaced)
- Clarified the type names for "managed Python installations"
- (more little things)
## Summary
Given:
```text
numpy >=1.26 ; python_version >= '3.9'
numpy <1.26 ; python_version < '3.9'
```
When resolving for Python 3.8, we need to narrow the `requires-python`
requirement in the top branch of the fork, because `numpy >=1.26` all
require Python 3.9 or later -- but we know (in that branch) that we only
need to _solve_ for Python 3.9 or later.
Closes https://github.com/astral-sh/uv/issues/4669.
## Summary
This is required to solve https://github.com/astral-sh/uv/issues/4669,
because the `Requires-Python` version can now vary across a resolution.
For example, within certain forks, we might have a more narrow range,
which would allow us to use distributions that would not be allowed for
the global resolution.
This should be fine because `requires-python` is part of the package
metadata, so it should be consistent between files within a package
version. As such, there shouldn't be any risk that we incorrectly
prioritize distributions by omitting this information.
(To be more specific, the risk is something like: we prioritize some
wheel over a source distribution within a package-version, so we don't
track the source distribution at all. Then, later, when we choose a
candidate, we see that the wheel doesn't meet the `Requires-Python`
requirement, even though the source distribution _would've_ met it. If
files within a distribution could have varied support, this would be a
real risk.)
## Summary
This doesn't actually change any behaviors, but it does make it a bit
easier to solve #4669, because we don't have to support "version
narrowing" for the non-`RequiresPython` variants in here. Right now, the
semantics are kind of muddied, because the `target` variant is
_sometimes_ interpreted as an exact version and sometimes as a lower
bound.
It's hard to talk about solve state and resolver state, so I'm renaming
them to fork state and resolver state, indicating the hierarchy between
them more directly.
## Summary
This should both make it faster to solve forks (since we have a guess
for a valid resolution, and will bias towards packages we've already
fetched) and improve consistency between forks.
Closes https://github.com/astral-sh/uv/issues/4617.
## Summary
`remove_edge` will invalidate the last index in the graph, so we need to
ensure that each index we look at is "earlier" than the last.
Co-authored-by: bluss <bluss@users.noreply.github.com>
## Summary
When a constraint is applied to a requirement with a marker, the marker
needs to be propagated to the constraint.
If both the constraint and the requirement have a marker, they need to
be merged together (via `and`).
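A minimal sketch of the merge, using optional strings as stand-ins for
`Option<MarkerTree>`:
```rust
/// Markers as optional strings, standing in for `Option<MarkerTree>`.
type Marker = Option<String>;

/// Propagate the requirement's marker to the constraint; if both have one,
/// merge them with `and`.
fn constraint_marker(requirement: Marker, constraint: Marker) -> Marker {
    match (requirement, constraint) {
        (Some(r), Some(c)) => Some(format!("({r}) and ({c})")),
        (r, c) => r.or(c), // whichever side has a marker carries over
    }
}
```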
Closes https://github.com/astral-sh/uv/issues/4575.
## Summary
This PR dodges some of the bigger issues raised by
https://github.com/astral-sh/uv/pull/4554 and
https://github.com/astral-sh/uv/pull/4555 by _not_ changing any of the
bigger semantics around syncing and instead merely changing virtual
workspace roots to sync all packages in the workspace (rather than
erroring due to being unable to find a project).
Closes #4541.