When a fork occurs, we not only divide the dependencies that provoked
the fork into distinct groups, but also add the corresponding sibling
dependencies to each fork. Previously, while we tracked markers on the
fork itself, the markers on individual dependencies only reflected what
was written in the dependency specification.
This meant that the sibling dependencies added to each fork did not
themselves have markers attached to them, which in turn meant they had
no markers associated with them in the lock file.
In many cases this is actually okay, because the resolver will usually
pick a version that is "universal" across all forks. But in some cases
that simply isn't possible, since the marker expressions in a fork can
and do influence resolution.
In that case, the same package can appear in the lock file at different
versions, each unconditionally, which is a big no-no.
So in this commit, after we determine the forks, we intersect the
markers on each fork with each of its dependencies.
This does seem to balloon the marker expressions in some cases.
I plucked one low hanging fruit to avoid doing `x and x` in
trivial cases. (And this eliminated a portion of the snapshot
diffs.) But some pretty gnarly diffs remain.
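To illustrate the intersection step, here is a rough sketch with a toy marker type (not uv's `MarkerTree`), including the trivial `x and x` collapse mentioned above:
```rust
/// Toy stand-in for a marker expression; uv's real type is `MarkerTree`.
#[derive(Clone, Debug, PartialEq)]
enum Marker {
    Expr(String),
    And(Vec<Marker>),
}

/// Attach a fork's markers to one of its dependencies by intersecting the
/// two expressions, collapsing the trivial `x and x` case so the markers
/// don't balloon quite as much.
fn intersect(dependency: Marker, fork: Marker) -> Marker {
    if dependency == fork {
        return dependency;
    }
    Marker::And(vec![dependency, fork])
}

fn main() {
    let dep = Marker::Expr("python_version >= \"3.11\"".to_string());
    let fork = Marker::Expr("python_version >= \"3.11\"".to_string());
    // Identical markers collapse instead of becoming `x and x`.
    assert_eq!(intersect(dep.clone(), fork), dep);
}
```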
This commit also fixes another bug: previously, when we created a fork
to capture the "remaining" universe of an incomplete set of markers, we
left out dependencies that should be included in that fork. We rectify
that here.
Fixes #5086
Partially addresses #4732
Consider these requirements from pylint 3.2.5:
```
Requires-Dist: dill >=0.3.6 ; python_version >= "3.11"
Requires-Dist: dill >=0.3.7 ; python_version >= "3.12"
```
We will split on the python version, but then we may pick a version of
`dill` that's `>=0.3.7` in both branches and also have an otherwise
identical resolution in both forks. In this case, we merge both forks
and store only their conjoined markers.
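A minimal sketch of that merge, with hypothetical types rather than uv's internals: if two forks end up with identical resolutions, keep one of them and take the union of their markers.
```rust
use std::collections::BTreeMap;

/// Hypothetical fork: the markers it was split on, plus its resolution
/// reduced to a map from package name to pinned version.
struct Fork {
    markers: String,
    resolution: BTreeMap<String, String>,
}

/// Merge forks whose resolutions are identical by or-ing their markers.
fn merge(forks: Vec<Fork>) -> Vec<Fork> {
    let mut merged: Vec<Fork> = Vec::new();
    for fork in forks {
        if let Some(i) = merged.iter().position(|f| f.resolution == fork.resolution) {
            let combined = format!("({}) or ({})", merged[i].markers, fork.markers);
            merged[i].markers = combined;
        } else {
            merged.push(fork);
        }
    }
    merged
}

fn main() {
    let resolution: BTreeMap<_, _> = [("dill".to_string(), "0.3.8".to_string())].into();
    let forks = vec![
        Fork { markers: "python_version >= '3.12'".into(), resolution: resolution.clone() },
        Fork { markers: "python_version < '3.12'".into(), resolution },
    ];
    // Identical resolutions: one fork remains, carrying the union of the markers.
    assert_eq!(merge(forks).len(), 1);
}
```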
## Summary
Prior to this change, the resolver would panic if we ran with
`--offline` and `--no-deps` and we had cached metadata for a _package_
(i.e., the versions) but no cached metadata for the _distribution_
(i.e., the specific wheel), since we weren't validating that the
returned metadata in the `--no-deps` case was actually successful. (We
need metadata, even for `--no-deps`, so that we can validate extras.)
## Test Plan
The added test panics on the previous branch.
## Summary
Excellent find from @konstin. If we have a package that's included in
two forks at the same version, but with different URLs, we need to avoid
collapsing them in the lockfile.
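A sketch of the idea with hypothetical types (not uv's lockfile code): the deduplication key has to include the source, not just the name and version.
```rust
use std::collections::BTreeSet;

/// Hypothetical lockfile entry key: name, version, and the source the
/// distribution came from (a registry or a direct URL). Two forks that pin
/// the same version from different URLs must remain distinct entries.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
struct DistKey {
    name: String,
    version: String,
    source: String,
}

fn main() {
    let mut entries = BTreeSet::new();
    entries.insert(DistKey {
        name: "iniconfig".into(),
        version: "2.0.0".into(),
        source: "https://example.com/a/iniconfig-2.0.0-py3-none-any.whl".into(),
    });
    entries.insert(DistKey {
        name: "iniconfig".into(),
        version: "2.0.0".into(),
        source: "https://example.com/b/iniconfig-2.0.0-py3-none-any.whl".into(),
    });
    // Same name and version, different URLs: both survive deduplication.
    assert_eq!(entries.len(), 2);
}
```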
Closes https://github.com/astral-sh/uv/issues/5294.
Looking at how to merge identical forks, I found that `Resolution`
can be simplified by treating it as a store of nodes and edges (plus
pins, which are kept separate since they are per name-version, not per
(virtual-)package-version). This should also make #5294 more apparent,
though I didn't touch that here.
I additionally added some doc comments to the `Resolution` types.
Our everything-method `solve` tends to grow large, so before adding
more logic, I'm moving some code and some logging statements around to
keep it manageable.
I made minor changes to the logging; otherwise there are no logic
changes, only refactoring.
## Summary
This PR modifies the lockfile to include the impactful resolution
settings, like the resolution and pre-release mode. If any of those
values change, we want to ignore the existing lockfile. Otherwise,
`--resolution lowest-direct` will typically have no effect, which is
really unintuitive.
Closes https://github.com/astral-sh/uv/issues/5226.
## Summary
This fixes a few bugs introduced by
https://github.com/astral-sh/uv/pull/5104. I previously thought we could
track conflicting locals the same way we track conflicting URLs in
forks, but it turns out that ends up being very tricky. URL forks work
because we prioritize direct URL requirements. We can't prioritize
locals in the same way without conflicting with the URL prioritization
(this may be possible but it's not trivial), so we run into issues where
a correct resolution depends on the order in which dependencies are
traversed.
Instead, we track local versions across all forks in `Locals`. When
applying a local version, we apply all locals with markers that
intersect with the current fork. This way we end up applying some local
versions without creating a fork. For example, given:
```
// pyproject.toml
dependencies = [
"torch==2.0.0+cu118 ; platform_machine == 'x86_64'",
]
// requirements.in
torch==2.0.0
.
```
We choose `2.0.0+cu118` in all cases. However, if a disjoint fork is
created based on local versions, the resolver will choose the most
compatible local when it narrows to a specific fork. Thus we correctly
respect local versions when forking:
```
// pyproject.toml
dependencies = [
"torch==2.0.0+cu118 ; platform_machine == 'x86_64'",
"torch==2.0.0+cpu ; platform_machine != 'x86_64'"
]
// requirements.in
torch==2.0.0
.
```
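Schematically, the first example works because we apply any tracked local whose markers intersect the fork's markers; a rough sketch with toy types (not uv's `Locals`):
```rust
/// Toy marker: the set of `platform_machine` values it allows.
#[derive(Debug)]
struct Marker(Vec<&'static str>);

impl Marker {
    /// Two markers intersect if some environment satisfies both,
    /// i.e. their allowed machine sets overlap.
    fn intersects(&self, other: &Marker) -> bool {
        self.0.iter().any(|m| other.0.contains(m))
    }
}

/// Locals tracked across all forks as `(marker, local version)` pairs:
/// return every local whose marker intersects the current fork.
fn applicable_locals(locals: &[(Marker, &'static str)], fork: &Marker) -> Vec<&'static str> {
    locals
        .iter()
        .filter(|(marker, _)| marker.intersects(fork))
        .map(|(_, local)| *local)
        .collect()
}

fn main() {
    // `torch==2.0.0+cu118 ; platform_machine == 'x86_64'` tracked across forks.
    let locals = [(Marker(vec!["x86_64"]), "2.0.0+cu118")];
    // A fork whose markers still cover x86_64 picks up the local version...
    assert_eq!(
        applicable_locals(&locals, &Marker(vec!["x86_64", "aarch64"])),
        vec!["2.0.0+cu118"]
    );
    // ...while a disjoint fork does not.
    assert!(applicable_locals(&locals, &Marker(vec!["aarch64"])).is_empty());
}
```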
We should also be able to use a similar strategy for
https://github.com/astral-sh/uv/pull/5150.
## Test Plan
This fixes https://github.com/astral-sh/uv/issues/5220 locally for me,
as well as a few other bugs that were not reported yet.
Warn when there is a direct dependency without a lower bound and
`--resolution lowest` is set.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
* Use a dedicated `ResolverMarkers` check in the fork state. This is
better than the `MarkerTree::And(Vec::new())` check.
* Report the timing with the correct name, "universal resolution",
instead of two spaces around an empty string when there are no markers.
* On resolution error, show the split that we're in. I'm not sure how to
word this, since we're doing a universal resolution until we fork, so
the trace may contain information from requirements that are not part of
this fork.
## Summary
Currently, the `Locals` type relies on there being a single local
version for a given package. With marker expressions this may not be
true, which is a problem similar to https://github.com/astral-sh/uv/pull/4435.
This changes the `Locals` type to `ForkLocals`, which tracks locals for
a given fork. Local versions are now tracked on `PubGrubRequirement`
before forking.
Resolves https://github.com/astral-sh/uv/issues/4580.
Specifically, this shows the resolution produced by the
resolver *before* constructing a resolution graph.
Unlike most trace messages, this is a multi-line message
that needs to do a small amount of work to build itself,
so we gate explicitly on the log level here instead of
relying on the `trace!` macro alone.
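The gating looks roughly like this sketch (which assumes the `tracing` crate's `enabled!` macro; the message uv actually builds is different):
```rust
use tracing::{enabled, trace, Level};

/// Build and emit a multi-line trace message only if TRACE is active, so we
/// don't pay for the string construction when the level is filtered out.
fn log_resolution(packages: &[(&str, &str)]) {
    if enabled!(Level::TRACE) {
        let mut message = String::from("resolution:");
        for (name, version) in packages {
            message.push_str(&format!("\n    {name} {version}"));
        }
        trace!("{}", message);
    }
}

fn main() {
    log_resolution(&[("iniconfig", "2.0.0"), ("anyio", "4.3.0")]);
}
```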
## Summary
The example in the linked issue doesn't quite work, but I think it has
to do with the existing filtering logic. Will follow-up separately.
Closes https://github.com/astral-sh/uv/issues/5012.
In some cases, it's possible for the marker expressions on conflicting
dependency specification to be disjoint but *incomplete*. That is, if
one unions the disjoint markers, the result is not the complete set of
marker environments possible. There may be some "gap" of marker
environments not covered by the markers.
This is a problem in practice because, before this commit, we only
created forks in the resolver for specific marker expressions. So if a
dependency happened to fall in a "gap," our resolver would never see it.
This commit fixes this by adding a new split covering the negation of
the union of all marker expressions in a set of forks for a specific
package.
Originally, I had planned on only creating this split when it was known
that the gap actually existed. That is, when the negation of the marker
expressions did *not* correspond to the empty set. After a lot of
thought, unfortunately, this (I believe) effectively boils down to 3SAT,
which is NP-complete.
Instead, what we do here is *always* create an extra split unless we can
definitively tell that it is empty. We look for a few cases, but
otherwise throw our hands up and potentially do wasted work.
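Schematically, with a toy marker type rather than uv's `MarkerTree`, the extra split's markers are the negation of the union of the existing forks' markers:
```rust
/// Toy marker expression; uv's real type is `MarkerTree`.
#[derive(Debug)]
enum Marker {
    Expr(String),
    Or(Vec<Marker>),
    Not(Box<Marker>),
}

/// The "remaining" universe not covered by any existing fork:
/// `not (m1 or m2 or ... or mn)`.
fn remaining(fork_markers: Vec<Marker>) -> Marker {
    Marker::Not(Box::new(Marker::Or(fork_markers)))
}

fn main() {
    // Forks for `sys_platform == 'darwin'` and `sys_platform == 'linux'`
    // leave a gap (e.g. Windows); the extra split covers it.
    let gap = remaining(vec![
        Marker::Expr("sys_platform == 'darwin'".into()),
        Marker::Expr("sys_platform == 'linux'".into()),
    ]);
    println!("{gap:?}");
}
```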
This also updates the lock scenario tests to reflect the actual bug fix
here.
The only pubgrub error that can occur is a `NoSolutionError`, and the
only place it can occur is `unit_propagation`; all other variants of
`PubGrubError` are unreachable. By changing the return type on pubgrub's
side (https://github.com/astral-sh/pubgrub/pull/28), we can remove the
pattern matching and the `unreachable!()` asserts on `PubGrubError`.
Our pubgrub error wrapper used to have a two-phase initialization:
first mostly stubs in `solve[_tracked]()`, and then adding the actual
context in `resolve()`. When constructing the error in `solve` we
already have all this context, so we can unify this to a regular
constructor and remove the special casing in `resolve()` and `hints()`.
This is an attempt to solve https://github.com/astral-sh/uv/issues/ by
applying the extra marker of the requirement to overrides and
constraints.
Say in `a` we have the requirements
```
b==1; python_version < "3.10"
c==1; extra == "feature"
```
and overrides
```
b==2; python_version < "3.10"
b==3; python_version >= "3.10"
c==2; python_version < "3.10"
c==3; python_version >= "3.10"
```
Our current strategy is to discard the markers in the original
requirements. This means that on 3.12 for `a` we install `b==3`, but it
also means that we add `c` to `a` even without `a[feature]`, causing #4826.
With this PR, the new requirements become:
```
b==2; python_version < "3.10"
b==3; python_version >= "3.10"
c==2; python_version < "3.10" and extra == "feature"
c==3; python_version >= "3.10" and extra == "feature"
```
allowing markers to be overridden while preserving optional dependencies
as such.
Fixes #4826
## Summary
In marker normalization, we now remove any markers that are redundant
with the `requires-python` specifier (i.e., always true for the given
Python requirement).
For example, given `iniconfig ; python_version >= '3.7'`, we can remove
the `python_version >= '3.7'` marker when resolving with
`--python-version 3.8`.
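As a rough sketch of the check, using toy `(major, minor)` tuples instead of uv's marker machinery: a `python_version >= 'X'` marker is redundant when `X` is at or below the Python lower bound we're resolving for.
```rust
/// A `python_version >= 'X'` marker is always true, and can be dropped, when
/// `X` is at or below the Python lower bound we're resolving for.
fn marker_is_redundant(marker_lower: (u32, u32), requires_python_lower: (u32, u32)) -> bool {
    marker_lower <= requires_python_lower
}

fn main() {
    // `iniconfig ; python_version >= '3.7'` resolved with `--python-version 3.8`:
    assert!(marker_is_redundant((3, 7), (3, 8))); // drop the marker
    // `python_version >= '3.12'` is not implied by 3.8, so that marker stays.
    assert!(!marker_is_redundant((3, 12), (3, 8)));
}
```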
Closes #4852.
## Summary
There are a few ideas at play here:
1. pip always strips versions to the release when evaluating against a
`Requires-Python`, so we now do the same (see the sketch after this
list). That means, e.g., using `3.13.0b0` will be accepted by a project
with `Requires-Python: >= 3.13`, which does _not_ adhere to PEP 440
semantics but is somewhat intuitive.
2. Because we know we'll only be evaluating against release-only
versions, we can use different semantics in PubGrub that let us collapse
ranges. For example, `python_version >= '3.10' or python_version <
'3.10'` can be collapsed to the truthy marker.
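Here is a minimal sketch of the release-stripping in (1), using a hypothetical version type rather than uv's real PEP 440 implementation:
```rust
/// Hypothetical PEP 440-ish version: release segments plus an optional
/// pre-release tag, e.g. `3.13.0b0` is `([3, 13, 0], Some(("b", 0)))`.
struct Version {
    release: Vec<u64>,
    pre: Option<(String, u64)>,
}

/// Drop the pre-release segment before evaluating against `Requires-Python`,
/// mirroring the pip behavior described in (1).
fn strip_to_release(version: &Version) -> Version {
    Version { release: version.release.clone(), pre: None }
}

fn main() {
    let beta = Version { release: vec![3, 13, 0], pre: Some(("b".to_string(), 0)) };
    let stripped = strip_to_release(&beta);
    // `3.13.0b0` is treated as `3.13.0`, so `Requires-Python: >=3.13` accepts it.
    assert_eq!(stripped.release, vec![3, 13, 0]);
    assert!(stripped.pre.is_none());
}
```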
Closes https://github.com/astral-sh/uv/issues/4714.
Closes https://github.com/astral-sh/uv/issues/4272.
Closes https://github.com/astral-sh/uv/issues/4719.
## Summary
Given:
```text
numpy >=1.26 ; python_version >= '3.9'
numpy <1.26 ; python_version < '3.9'
```
When resolving for Python 3.8, we need to narrow the `requires-python`
requirement in the top branch of the fork, because versions matching `numpy >=1.26` all
require Python 3.9 or later -- but we know (in that branch) that we only
need to _solve_ for Python 3.9 or later.
Closes https://github.com/astral-sh/uv/issues/4669.
## Summary
This is required to solve https://github.com/astral-sh/uv/issues/4669,
because the `Requires-Python` version can now vary across a resolution.
For example, within certain forks, we might have a more narrow range,
which would allow us to use distributions that would not be allowed for
the global resolution.
This should be fine because `requires-python` is part of the package
metadata, so it should be consistent between files within a package
version. As such, there shouldn't be any risk that we incorrectly
prioritize distributions by omitting this information.
(To be more specific, the risk is something like: we prioritize some
wheel over a source distribution within a package-version, so we don't
track the source distribution at all. Then, later, when we choose a
candidate, we see that the wheel doesn't meet the `Requires-Python`
requirement, even though the source distribution _would've_ met it. If
files within a distribution could have varied support, this would be a
real risk.)
## Summary
This doesn't actually change any behaviors, but it does make it a bit
easier to solve #4669, because we don't have to support "version
narrowing" for the non-`RequiresPython` variants in here. Right now, the
semantics are kind of muddied, because the `target` variant is
_sometimes_ interpreted as an exact version and sometimes as a lower
bound.
It's hard to talk about solve state and resolver state, so I'm renaming
them to fork state and resolver state, indicating the hierarchy between
them more directly.
## Summary
This should both make it faster to solve forks (since we have a guess
for a valid resolution, and will bias towards packages we've already
fetched) and improve consistency between forks.
Closes https://github.com/astral-sh/uv/issues/4617.
## Summary
When a constraint is applied to a requirement with a marker, the marker
needs to be propagated to the constraint.
If both the constraint and the requirement have a marker, they need to
be merged together (via `and`).
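For example (a hypothetical requirement and constraint, not taken from the linked issue):
```text
# requirement
flask ; python_version >= '3.9'
# constraint
flask<3 ; sys_platform == 'linux'
# constrained requirement, markers merged via `and`
flask<3 ; python_version >= '3.9' and sys_platform == 'linux'
```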
Closes https://github.com/astral-sh/uv/issues/4575.
Looks much better than #4618:
```
DEBUG Pre-fork split universal took 0.644s
DEBUG Split python_version >= '3.12' and platform_machine == 'aarch64' and platform_system == 'Darwin' and platform_system == 'Linux' took 0.659s
DEBUG Split python_version == '3.9' and platform_machine == 'arm64' and platform_system == 'Darwin' took 0.291s
```
This includes a functional change: we now skip the forked state pop/push
if we didn't fork.
From transformers:
```
DEBUG Pre-fork split universal took 0.036s
DEBUG Split python_version >= '3.10' and python_version >= '3.10' and platform_system == 'Darwin' and python_version >= '3.11' and python_version >= '3.12' and python_version >= '3.6' and platform_system == 'Linux' and platform_machine == 'aarch64' took 0.048s
DEBUG Split python_version <= '3.9' and platform_system == 'Darwin' and platform_machine == 'arm64' and python_version >= '3.7' and python_version >= '3.8' and python_version >= '3.9' took 0.038s
```
The messages could use simplification from
https://github.com/astral-sh/uv/issues/4536.
We can consider nested spans in the future, but this works nicely for
now.
## Summary
In the dependency refactor (https://github.com/astral-sh/uv/pull/4430),
the logic for requirements and constraints was combined. Specifically,
we were applying constraints _before_ filtering on markers and extras,
and then applying that same filtering to the constraints. As a result,
constraints that should only be activated when an extra is enabled were
being enabled unconditionally.
Closes https://github.com/astral-sh/uv/issues/4569.
`ResolverState::choose_version` had become huge, with an odd match due
to the url handling from #4435. This refactoring breaks it into
`choose_version`, `choose_version_registry` and `choose_version_url`. No
functional changes.
Downstack PR: #4481
## Introduction
We support forking the dependency resolution to handle conflicting
registry requirements for different platforms: say, one package range is
required for an older Python version while a newer one is required for
newer Python versions, or the dependencies differ per platform. We
need to extend this support to direct URL requirements.
```toml
dependencies = [
"iniconfig @ 62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl ; python_version >= '3.12'",
"iniconfig @ b3c12c6d70988d7baea9578f3c48f3/iniconfig-1.1.1-py2.py3-none-any.whl ; python_version < '3.12'"
]
```
This did not work because `Urls` was built on the assumption that there
is a single allowed URL per package. We collect all allowed URLs ahead of
resolution by following direct URL dependencies (including path
dependencies) transitively, i.e., a registry distribution can't require a
URL.
## The same package can have Registry and URL requirements
Consider the following two cases:
requirements.in:
```text
werkzeug==2.0.0
werkzeug @ 960bb4017c4aed12b5ed8b78e0153e/Werkzeug-2.0.0-py3-none-any.whl
```
pyproject.toml:
```toml
dependencies = [
"iniconfig == 1.1.1 ; python_version < '3.12'",
"iniconfig @ git+https://github.com/pytest-dev/iniconfig@93f5930e668c0d1ddf4597e38dd0dea4e2665e7a ; python_version >= '3.12'",
]
```
In the first case, we want the URL to override the registry dependency;
in the second case, we want to fork and have one branch use the registry
and the other the URL. We have to know about this in
`PubGrubRequirement::from_registry_requirement`, but we only fork after
the current method.
Consider the following case too:
a:
```
c==1.0.0
b @ https://b.zip
```
b:
```
c @ https://c_new.zip ; python_version >= '3.12'",
c @ https://c_old.zip ; python_version < '3.12'",
```
When we convert the requirements of `a`, we can't know the URL of `c`
yet. The solution is to remove the `Url` from `PubGrubPackage`: the
`Url` is redundant with `PackageName`, since there can be only one URL
per package name per fork. We now do the following: we track the URLs
from requirements in `PubGrubDependency`. After forking, we call
`add_package_version_dependencies`, where we apply override URLs, check
if the URL is allowed, and check if the URL is unique in this fork. When
we request a distribution, we ask the fork URLs for the real URL. Since
we prioritize URL dependencies over registry dependencies and skip
packages with `Urls` entries in pre-visiting, we know, when fetching
a package, whether it has a URL or not.
## URL conflicts
pyproject.toml (invalid):
```toml
dependencies = [
"iniconfig @ e96292c7f723f1fa332fe4ed6dfbec/iniconfig-1.1.0.tar.gz",
"iniconfig @ b3c12c6d70988d7baea9578f3c48f3/iniconfig-1.1.1-py2.py3-none-any.whl ; python_version < '3.12'",
"iniconfig @ 62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl ; python_version >= '3.12'",
]
```
On the fork state, we keep `ForkUrls` that check for conflicts after
forking, rejecting the third case because we added two packages of the
same name with different URLs.
We need to flatten out the requirements before transforming them into
pubgrub requirements, to get the full list of other requirements that
may contain a URL; this was changed in a previous PR: #4430.
## Complex Example
a:
```toml
dependencies = [
# Force a split
"anyio==4.3.0 ; python_version >= '3.12'",
"anyio==4.2.0 ; python_version < '3.12'",
# Include URLs transitively
"b"
]
```
b:
```toml
dependencies = [
# Only one is used in each split.
"b1 ; python_version < '3.12'",
"b2 ; python_version >= '3.12'",
"b3 ; python_version >= '3.12'",
]
```
b1:
```toml
dependencies = [
"iniconfig @ b3c12c6d70988d7baea9578f3c48f3/iniconfig-1.1.1-py2.py3-none-any.whl",
]
```
b2:
```toml
dependencies = [
"iniconfig @ 62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl",
]
```
b3:
```toml
dependencies = [
"iniconfig @ e96292c7f723f1fa332fe4ed6dfbec/iniconfig-1.1.0.tar.gz",
]
```
In this example, all packages are URL requirements (directory
requirements) and the root package is `a`. We first split on `a`, with
`b` being in each split. In the first fork, we reach `b1`: the fork URLs
are empty, we insert the iniconfig 1.1.1 URL, and then we skip over `b2`
and `b3` since their markers are disjoint with the fork markers. In the second
fork, we skip over `b1`, visit `b2`, insert the iniconfig 2.0.0 URL into
the again empty fork URLs, then visit `b3` and try to insert the
iniconfig 1.1.0 URL. At this point we find a conflict for the iniconfig
URL and error.
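A sketch of the conflict check with hypothetical code (not uv's actual `ForkUrls`): each fork keeps a map from package name to its URL, and inserting a different URL for an already-seen name is an error.
```rust
use std::collections::HashMap;

/// Per-fork map from package name to the single URL allowed for it.
#[derive(Default)]
struct ForkUrls(HashMap<String, String>);

impl ForkUrls {
    /// Accept the URL if it's new or identical to the one already recorded;
    /// reject it if the same package already has a different URL in this fork.
    fn insert(&mut self, package: &str, url: &str) -> Result<(), String> {
        match self.0.get(package) {
            Some(existing) if existing != url => Err(format!(
                "conflicting URLs for `{package}`: `{existing}` vs. `{url}`"
            )),
            Some(_) => Ok(()),
            None => {
                self.0.insert(package.to_string(), url.to_string());
                Ok(())
            }
        }
    }
}

fn main() {
    let mut fork_urls = ForkUrls::default();
    fork_urls
        .insert("iniconfig", "https://example.com/iniconfig-2.0.0-py3-none-any.whl")
        .unwrap();
    // A second, different URL for the same package in the same fork is a conflict.
    assert!(fork_urls
        .insert("iniconfig", "https://example.com/iniconfig-1.1.0.tar.gz")
        .is_err());
}
```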
## Closing
The git tests are slow, but they are the best example for different URL
types I could find.
Part of #3927. This PR does not handle `Locals` or pre-releases yet.
In the last PR (#4430), we flattened the requirements. In the next PR
(#4435), we want to pass `Url` around next to `PubGrubPackage` and
`Range<Version>` to keep track of which `Requirement`s added a URL
across forking. This PR is a refactoring split out from #4435 that rolls
the dependency conversion into a single iterator and introduces a new
`PubGrubDependency` struct as abstraction over `(PubGrubPackage,
Range<Version>)` (or `(PubGrubPackage, Range<Version>,
VerbatimParsedUrl)` in the next PR), and it removes the now unnecessary
`PubGrubDependencies` abstraction.
Downstack PR: #4515 Upstack PR: #4481
Consider these two cases:
A:
```
werkzeug==2.0.0
werkzeug @ 960bb4017c4aed12b5ed8b78e0153e/Werkzeug-2.0.0-py3-none-any.whl
```
B:
```toml
dependencies = [
"iniconfig == 1.1.1 ; python_version < '3.12'",
"iniconfig @ git+https://github.com/pytest-dev/iniconfig@93f5930e668c0d1ddf4597e38dd0dea4e2665e7a ; python_version >= '3.12'",
]
```
In the first case, `werkzeug==2.0.0` should be overridden by the URL. In
the second case, `iniconfig == 1.1.1` is in a different fork and must
remain a registry distribution.
That means the conversion from `Requirement` to `PubGrubPackage` is
dependent on the other requirements of the package. We can either look
into the other packages immediately, or we can move the forking before
the conversion to `PubGrubDependencies` instead of after. Either approach
requires a flat list of `Requirement`s to work with. This refactoring
gives us that list.
I'll add support for both of the above cases in the forking urls branch
before merging this PR. I also have to move constraints over to this.
## Summary
This is an intermediary change in enabling universal resolution for
`requirements.txt` files. To start, we need to be able to preserve
markers in the `requirements.txt` output _and_ propagate those markers,
such that if you have a dependency that's only included with a given
marker, the transitive dependencies respect that marker too.
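For instance (a hypothetical fragment, not actual uv output), a marker on a direct requirement has to carry over to its transitive dependencies in the exported `requirements.txt`:
```text
# requirements.in
httpx ; sys_platform == 'win32'

# requirements.txt (marker propagated to transitive dependencies)
httpx==0.27.0 ; sys_platform == 'win32'
certifi==2024.2.2 ; sys_platform == 'win32'
```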
Closes #1429.