This PR adds a notion of "conflict markers" to the lock file as an
attempt to address #9289. The idea is to encode a new kind of boolean
expression indicating how to choose dependencies based on which extras
are activated.
As an example of what conflict markers look like, consider one of the
cases
brought up in #9289, where `anyio` had unconditional dependencies on
two different versions of `idna`. Now, those are gated by markers, like
this:
```toml
[[package]]
name = "anyio"
version = "4.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "idna", version = "3.5", source = { registry = "https://pypi.org/simple" }, marker = "extra == 'extra-7-project-foo'" },
{ name = "idna", version = "3.6", source = { registry = "https://pypi.org/simple" }, marker = "extra == 'extra-7-project-bar' or extra != 'extra-7-project-foo'" },
{ name = "sniffio" },
]
```
The odd extra values like `extra-7-project-foo` are an encoding of not
just the conflicting extra (`foo`) but also the package it's declared
for (`project`). We need both bits of information because different
packages may have the same extra name, even if they are completely
unrelated. The `extra-` part is a prefix to distinguish it from groups
(which, in this case, would be encoded as `group-7-project-foo` if `foo`
were a dependency group). And the `7` indicates the length of the
package name, which makes it possible to parse the package and extra
name back out of this encoding. (We don't actually utilize that
property, but it seems like good sense to do it in case we do need to
extract information from these markers.)
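To make the length prefix concrete: a conflicting group `dev` declared on a
hypothetical package named `mylib` (five characters) would be gated like this
in the lock file:
```toml
dependencies = [
    { name = "idna", version = "3.6", source = { registry = "https://pypi.org/simple" }, marker = "extra == 'group-5-mylib-dev'" },
]
```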
While this preserves PEP 508 compatibility at a surface level, it does
require utilizing this encoding scheme in order
to evaluate them when they're present (which only occurs when
conflicting extras/groups are declared).
My sense is that the most complex part of this change is not just adding
conflict markers, but their simplification. I tried to address this in
the code comments and commit messages.
Reviewers should look at this commit-by-commit.
Fixes #9289, Fixes #9546, Fixes #9640, Fixes #9622, Fixes #9498, Fixes
#9701, Fixes #9734
## Summary
Sort of ridiculous, but today this passes, when it should fail:
```toml
[project]
name = "foo"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13.0"
dependencies = []
[project.optional-dependencies]
async = [
"foo[async]==0.2.0",
]
```
Instead of modifying the error to replace a dummy derivation chain from
construction with the real one, build the error with the real derivation
chain directly.
This came up when trying to improve the build error reporting.
Introduces `DistErrorKind` to avoid error variants for each case that
are only different in one line of the message.
In https://github.com/astral-sh/uv/issues/8155#issuecomment-2508969900,
`--resolution=lowest` was complaining about missing lower bounds for a
package, even though the package had a URL, too:
```
uv pip install dist/pymatgen-2024.10.3.tar.gz pymatgen[ci,optional] --resolution=lowest
```
The error was raised from `pymatgen[ci,optional]`, because we were
looking at it before looking at the "URL"
`dist/pymatgen-2024.10.3.tar.gz`.
I've also added constraints and overrides to the bounds lookup, since
they are missing from the dependency graph.
Fixes #8155 (again)
When encountering `dynamic = ["version"]` in the pyproject.toml of a
source dist, we can ignore that and treat it as a statically known
metadata distribution, since the filename tells us the version and that
version must not change on build.
This fixed locking PyGObject 3.50.0 from `pygobject-3.50.0.tar.gz`
(minimized):
```toml
[project]
name = "PyGObject"
description = "Python bindings for GObject Introspection"
requires-python = ">=3.9, <4.0"
dependencies = [
"pycairo>=1.16"
]
dynamic = ["version"]
```
Afterwards, `uv add --no-sync toga` passes on Ubuntu 24.04 without the
pygobject build deps, when previously it needed `{ name = "pygobject",
version = "3.50.0", requires-dist = [], requires-python = ">=3.9" }`.
I've added a check that source distribution versions are respected after
build.
Fixes #9548
## Summary
Today, our dependency group implementation is a little awkward... For
each package `P`, we check if `P` contains dependencies for each enabled
group, then add a dependency on `P` with the group enabled. There are a
few issues here:
1. It's sort of backwards... We add a dependency from the base package
`P` to `P` with the group enabled. Then `P` with the group enabled adds
a dependency on the base package.
2. We can't, e.g., enable different groups for different packages. (We
don't have a way for users to specify this on the CLI, but there's no
reason that it should be _impossible_ in the resolver.)
3. It's inconsistent with how extras work, which leads to confusing
differences in the resolver.
Instead, our internal requirement type can now include dependency
groups, which makes dependency groups look much, much more like extras
in the resolver.
## Summary
Discovered while working on https://github.com/astral-sh/uv/issues/9516.
In the linked repo, the root uses a `../dependency` path for the
workspace member, which we weren't normalizing.
This _partially_ unwinds the optimization in #9540 by adding back the
base package dependency as a sibling to the extra package dependency
in some cases. Specifically, this occurs when _any_ of the extras are
declared as conflicting.
This is believed to be necessary (until another method is found) to
handle the forking logic based on conflicts. Namely, the forking logic
depends on the base and extra packages being sibling dependencies. If
only the extra is present, then it won't be included in the fork that
excludes all conflicting extras. And that means the base package won't
either, even though it should be included in that fork in some cases. If
the base package dependency is deferred, then it will never be reached.
This also adds another test and updates the snapshots that would have
caught the regression in #9540 if the conflict tests had been enabled.
## Summary
Previously, when we encountered `foo[bar]`, we'd add a dependency on
`PubGrubPackage::Package` for `foo`, and then `PubGrubPackage::Extra`
for `foo[bar]`.
Later, when we ask for the dependencies of the `PubGrubPackage::Extra`,
we add `PubGrubPackage::Package` for `foo`, and
`PubGrubPackage::Package` for `foo[bar]`. This is an intentional
strategy because it ensures that PubGrub "knows" that these have to be
solved to the same version as early as possible.
It turns out that the first part here ("add a dependency on
`PubGrubPackage::Package` for `foo`") is suboptimal, because it means
PubGrub might try to solve _just_ `foo` without realizing that it also
has to accommodate all the constraints from the extra.
Instead, we now add _just_ `PubGrubPackage::Extra` for `foo[bar]`, and
defer adding the base package. It looks like this leads to a far more
efficient solve for Airflow.
## Summary
When we serialize and deserialize the lockfile, we remove the conflict
markers. So in the linked case, the edges for the `tqdm` entries are
like:
```
complexified_marker: UniversalMarker {
pep508_marker: python_full_version >= '3.9.0',
conflict_marker: true,
},
```
However... when we evaluate in-memory, the conflict markers are still
there...
```
complexified_marker: UniversalMarker {
pep508_marker: true,
conflict_marker: extra == 't1' and extra != 't2',
},
```
So if `uv run` creates the lockfile, we evaluate this as `false`.
We should make this consistent, and I expect @BurntSushi is aware. But
for now, it's reasonable / correct to pass the extra when evaluating at
this specific point, since we know the dependency was enabled by the
marker.
Closes
https://github.com/astral-sh/uv/issues/9533#issuecomment-2508908591.
## Summary
A lot of good new lints, and most importantly, error stabilizations. I
tried to find a few usages of the new stabilizations, but I'm sure there
are more.
IIUC, this _does_ require bumping our MSRV.
## Summary
We never construct these -- they should be impossible, since we always
translate to `python_full_version`. This PR encodes that impossibility
in the types.
## Summary
I want to move towards a more normalized marker representation within
the marker tree, which means that the things we warn against will
disappear by the time we get to evaluation. I think it makes more sense
to show these warnings when we create the tree, rather than when we
evaluate it.
## Summary
Resolves #9333
This pull request introduces support for the `--no-extra` command-line
flag and the corresponding `no-extra` uv setting.
### Behavior
- When `--all-extras` is supplied, the specified extras in `--no-extra`
will be excluded from the installation.
- If `--all-extras` is not supplied, `--no-extra` has no effect and is
safely ignored.
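A usage sketch (assuming the flag on `uv sync`, with a hypothetical extra
named `docs`):
```console
$ uv sync --all-extras --no-extra docs
```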
## Test Plan
Since `ExtrasSpecification::from_args` and
`ExtrasSpecification::extra_names` are the most important parts of the
implementation, I added the following tests in the
`uv-configuration/src/extras.rs` module:
- **`test_no_extra_full`**: Verifies behavior when `no_extra` includes
the entire list of extras.
- **`test_no_extra_partial`**: Tests partial exclusion, ensuring only
specified extras are excluded.
- **`test_no_extra_empty`**: Confirms that no extras are excluded if
`no_extra` is empty.
- **`test_no_extra_excessive`**: Ensures the implementation ignores
`no_extra` values that don't match any available extras.
- **`test_no_extra_without_all_extras`**: Validates that `no_extra` has
no effect when `--all-extras` is not supplied.
- **`test_no_extra_without_package_extras`**: Confirms correct behavior
when no extras are available in the package.
- **`test_no_extra_duplicates`**: Verifies that duplicate entries in
`pkg_extras` or `no_extra` do not cause errors.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This adds a `--prune` flag to the `export` command to correspond with
the `--prune` flag of the `tree` command.
The purpose is for generating a `requirements.txt` that omits a package
and all of that package's unique dependencies. This is useful for cases
where the project has a dependency on a common core package, but where
that package does not need to be installed in the target environment.
For example, a pyspark job needs spark for development, but when
installing into a cluster that already has pyspark installed, it's
desirable to omit pyspark's whole dependency tree, so that only the
unique dependencies your job needs get installed, without risking
breaking the pyspark dependencies with something incompatible.
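A sketch of that workflow (treating `pyspark` as the package to prune):
```console
$ uv export --prune pyspark > requirements.txt
```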
Dev groups cannot always cover this case, because there are other
projects where this common dependency occurs as a transitive dependency.
One example is Airflow providers, which include Airflow itself as a
dependency, but it's unnecessary and undesirable to include Airflow's
dependency tree in the `requirements.txt` for your DAGs.
Partly related to #7214, though I'm not sure it covers the ask there of
extending this functionality to the project's actual published metadata.
## Test Plan
An integration test was added, along with some manual testing. Let me
know if more would be better.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
When we generate conflict markers for each resolution after the
resolver runs, it turns out that generating them just from exclusion
rules is not sufficient.
For example, if `foo` and `bar` are declared as conflicting extras, then
we end up with the following forks:
A: extra != 'foo'
B: extra != 'bar'
C: extra != 'foo' and extra != 'bar'
Now let's take an example where these forks don't share the same version
for all packages. Consider a case where `idna==3.9` is in forks A and C,
but where `idna==3.10` is in fork B. If we combine the markers in forks
A and C through disjunction, we get the following:
idna==3.9: extra != 'foo' or (extra != 'foo' and extra != 'bar')
idna==3.10: extra != 'bar'
Which simplifies to:
idna==3.9: extra != 'foo'
idna==3.10: extra != 'bar'
But these are clearly not disjoint. Both dependencies could be selected,
for example, when neither `foo` nor `bar` are active. We can remedy this
by keeping around the inclusion rules for each fork:
A: extra != 'foo' and extra == 'bar'
B: extra != 'bar' and extra == 'foo'
C: extra != 'foo' and extra != 'bar'
And so for `idna`, we have:
idna==3.9: (extra != 'foo' and extra == 'bar') or (extra != 'foo' and extra != 'bar')
idna==3.10: extra != 'bar' and extra == 'foo'
Which simplifies to:
idna==3.9: extra != 'foo'
idna==3.10: extra != 'bar' and extra == 'foo'
And these *are* properly disjoint. There is no way for them both to be
active. This also correctly accounts for fork C where neither `foo` nor
`bar` are active, and yet, `idna==3.9` is still enabled but `idna==3.10`
is not. (In the [motivating example], this comes from `baz` being enabled.)
That is, this captures the idea that for `idna==3.10` to be installed,
there must actually be a specific extra that is enabled. That's what
makes it disjoint from `idna==3.9`.
We aren't quite done yet, because this does add *too many* conflict
markers to dependency edges that don't need it. In the next commit,
we'll add in our world knowledge to simplify these conflict markers.
[motivating example]: https://github.com/astral-sh/uv/issues/9289
Previously, we had copied the behavior of `try_markers` to return
`None` in the case where the marker was always true. I believe this
was done because it somewhat implies that there is no forking
happening. But I personally find this somewhat strange, and instead
flipped it around so that it still returns a marker in that case.
The one call site that is impacted by this is the resolution
graph construction. If we left it as-is, it would end up with
a list of one marker that is always true in some cases. And this
in turn results in writing an empty `resolution-markers` to the
lock file. Probably the output logic should be tweaked instead,
but we leave it alone for now.
This effectively combines a PEP 508 marker and an as-yet-unspecified
marker for expressing conflicts among extras and groups.
This just defines the type and threads it through most of the various
points in the code that previously used `MarkerTree` only. Some parts
do still continue to use `MarkerTree` specifically, e.g., when dealing
with non-universal resolution or exporting to `requirements.txt`.
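A minimal sketch of the shape (field names taken from the debug output shown
earlier; the real definition may differ):
```rust
/// Pairs a standard PEP 508 marker with a conflict marker over the
/// encoded extras/groups described above. Sketch only; `MarkerTree`
/// is uv's existing marker type.
pub struct UniversalMarker {
    pep508_marker: MarkerTree,
    conflict_marker: MarkerTree,
}
```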
This doesn't change any behavior.
This doesn't change any behavior. My guess is that this code was
a casualty of refactoring. But basically, it was doing redundant
case analysis and iterating over all resolutions (even though it's
in the branch that can only occur when there is only one
resolution).
This filtering is now redundant, since forking now avoids these
degenerate cases by construction.
The main change to forking that enables skipping over "always
false" forks is that forking now starts with the parent's markers
instead of starting with `MarkerTree::TRUE` and trying to combine
them with the parent's markers later. This in turn leads to
skipping over anything that "can't" happen when combined with the
parent's markers. So we never hit the case of generating a fork
that, when combined with the parent's markers, results in a
marker that is always false. We just avoid it in the first place.
## Summary
The issue here is fairly complex. Consider the following:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = []
[project.optional-dependencies]
cpu = [
"torch>=2.5.1",
"torchvision>=0.20.1",
]
cu124 = [
"torch>=2.5.1",
"torchvision>=0.20.1",
]
[tool.uv]
conflicts = [
[
{ extra = "cpu" },
{ extra = "cu124" },
],
]
[tool.uv.sources]
torch = [
{ index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
]
torchvision = [
{ index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
]
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```
When solving this project, we first pick a PyTorch version from PyPI, to
solve the `cu124` extra, selecting `2.5.1`.
Later, we try to solve the `cpu` extra. In solving that extra, we look
at the PyTorch CPU index. Ideally, we'd select `2.5.1+cpu`... But
`2.5.1` is already a preference. So we choose that.
Now, we only respect preferences for explicit indexes if they came from
the same index.
Closes https://github.com/astral-sh/uv/issues/9295.
## Summary
The reqwest middleware doesn't retry errors that occur "after" the
request completes -- but in some cases, these do include spurious errors
that we want to retry. See https://github.com/astral-sh/uv/issues/8144
for examples. This PR adds a second retry layer during the response
_handler_, which should help with some of the spurious failures we see
in the linked issue.
Closes https://github.com/astral-sh/uv/issues/8144.
## Summary
This PR enables something like the "final boss" of PyTorch setups --
explicit support for CPU vs. GPU-enabled variants via extras:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.13.0"
dependencies = []
[project.optional-dependencies]
cpu = [
"torch==2.5.1+cpu",
]
gpu = [
"torch==2.5.1",
]
[tool.uv.sources]
torch = [
{ index = "torch-cpu", extra = "cpu" },
{ index = "torch-gpu", extra = "gpu" },
]
[[tool.uv.index]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "torch-gpu"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
[tool.uv]
conflicts = [
[
{ extra = "cpu" },
{ extra = "gpu" },
],
]
```
It builds atop the conflicting extras work to allow sources to be marked
as specific to a dedicated extra being enabled or disabled.
As part of this work, sources now have an `extra` field. If a source has
an `extra`, it means that the source is only applied to the requirement
when defined within that optional group. For example, `{ index =
"torch-cpu", extra = "cpu" }` above only applies to
`"torch==2.5.1+cpu"`.
The `extra` field does _not_ mean that the source is "enabled" when the
extra is activated. For example, this wouldn't work:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.13.0"
dependencies = ["torch"]
[tool.uv.sources]
torch = [
{ index = "torch-cpu", extra = "cpu" },
{ index = "torch-gpu", extra = "gpu" },
]
[[tool.uv.index]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
[[tool.uv.index]]
name = "torch-gpu"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```
In this case, the sources would effectively be ignored. Extras are
really confusing... but I think this is correct? We don't want enabling
or disabling extras to affect resolution information that's _outside_ of
the relevant optional group.
## Summary
These were moved as part of a broader refactor to create a single
integration test module. That "single integration test module" did
indeed have a big impact on compile times, which is great! But we aren't
seeing any benefit from moving these tests into their own files (despite
the claim in [this blog
post](https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html),
I see the same compilation pattern regardless of where the tests are
located). Plus, we don't have many of these, and same-file tests are
such a strong Rust convention.
## Summary
I was wrongly using `.name()` to detect if a package was "not root", but
in `pip compile`, the root can have a name -- so we were failing to find
the derivation chain.
## Summary
This PR adds context to our error messages to explain _why_ a given
package was included, if we fail to download or build it.
It's quite a large change, but it motivated some good refactors and
improvements along the way.
Closes https://github.com/astral-sh/uv/issues/8962.
## Summary
This PR should not contain any user-visible changes, but the goal is to
refactor the `Resolution` type to retain a dependency graph. We want to
be able to explain _why_ a given package was excluded on error (see:
https://github.com/astral-sh/uv/issues/8962), which in turn requires
that at install time, we can go back and figure out the dependency
chain. At present, `Resolution` is just a map from package name to
distribution; this PR remodels it as a graph in which each node is a
package, and the edges contain markers plus extras or dependency groups.
## Summary
As discussed in Discord... This struct has evolved to include a lot of
information apart from the `petgraph::Graph`. And I want to add a graph
to the simplified `Resolution` type. So I think this name makes more
sense.
Surprisingly, this seems to be all that's necessary.
Previously, we were only extracting an extra from a
`PubGrubPackage` to test for conflicts. But now we extract
either an extra or a group. The surrounding code all
remains the same.
We do need to add some extra checking for groups
specifically, but I believe that's it.
This adds support for providing conflicting group names in addition to
extra names to `Conflicts`.
This merely makes "room" for it in the types while keeping everything
working. We'll add proper support for it in the next commit.
Note that one interesting trick we do here is depend directly on
`hashbrown` so that we can make use of its `Equivalent` trait. This in
turn lets us use things like `ConflictItemRef` as a lookup key for a
hashset that contains `ConflictItem`. This mirrors using a `&str` as a
lookup key for a hashset that contains `String`, but works for arbitrary
types. `std` doesn't support this, but `hashbrown` does. This trick in
turn lets us simplify some of our data structures.
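A minimal sketch of that trick (types simplified; the real
`ConflictItem`/`ConflictItemRef` carry more structure):
```rust
use hashbrown::{Equivalent, HashSet};

#[derive(Hash, PartialEq, Eq)]
struct ConflictItem {
    package: String,
    extra: String,
}

/// A borrowed view of a `ConflictItem`, analogous to `&str` vs. `String`.
#[derive(Hash)]
struct ConflictItemRef<'a> {
    package: &'a str,
    extra: &'a str,
}

impl Equivalent<ConflictItem> for ConflictItemRef<'_> {
    fn equivalent(&self, key: &ConflictItem) -> bool {
        self.package == key.package && self.extra == key.extra
    }
}

fn main() {
    let mut set: HashSet<ConflictItem> = HashSet::new();
    set.insert(ConflictItem {
        package: "project".to_string(),
        extra: "foo".to_string(),
    });
    // Look up with the borrowed form; no owned `ConflictItem` is allocated.
    assert!(set.contains(&ConflictItemRef { package: "project", extra: "foo" }));
}
```
Note that the derived `Hash` impls hash the same underlying data (`String` and
`&str` hash identically), which is the invariant that `Equivalent`-based
lookups rely on.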
This also rejiggers some of the serde interaction with the conflicting
types. We now use a wire type to represent our conflicting items for
more flexibility, i.e., supporting `extra` XOR `group` fields.
Since this is intended to support _both_ groups and extras, it doesn't
make sense to just name it for groups. And since there isn't really a
word that encapsulates both "extra" and "group," we just fall back to
the super general "conflicts."
We'll rename the variables and other things in the next commit.
## Summary
I need this for the derivation chain work
(https://github.com/astral-sh/uv/issues/8962), but it just seems
generally useful. You can't always get a version from a `Dist` (it could
be URL-based!), but when we create a `ResolvedDist`, we _do_ know the
version (and not just the URL). This PR preserves it.
This PR adds support for conflicting extras. For example, consider
some optional dependencies like this:
```toml
[project.optional-dependencies]
project1 = ["numpy==1.26.3"]
project2 = ["numpy==1.26.4"]
```
These dependency specifications are not compatible with one another.
And if you ask uv to lock these, you'll get an unresolvable error.
With this PR, you can now add this to your `pyproject.toml` to get
around this:
```toml
[tool.uv]
conflicting-groups = [
[
{ package = "project", extra = "project1" },
{ package = "project", extra = "project2" },
],
]
```
This will make the universal resolver create additional forks
internally that keep the dependencies from the `project1` and
`project2` extras separate. And we make all of this work by reporting
an error at **install** time if one tries to install with two or more
extras that have been declared as conflicting. (If we didn't do this,
it would be possible to try and install two different versions of the
same package into the same environment.)
This PR does *not* add support for conflicting **groups**, but it is
intended to add support in a follow-up PR.
Closes #6981, Fixes #8024
Ref #6729, Ref #6830
This should also hopefully unblock
https://github.com/dagster-io/dagster/pull/23814, but in my testing, I
did run into other problems (specifically, with `pywin`). But it does
resolve the problem with incompatible dependencies in two different
extras once you declare `test-airflow-1` and `test-airflow-2` as
conflicting for `dagster-airflow`.
NOTE: This PR doesn't make `conflicting-groups` public yet. And in a
follow-up PR, I plan to switch the name to `conflicts` instead of
`conflicting-groups`, since it will be able to accept conflicting extras
_and_ conflicting groups.
## Summary
We're inconsistent with these -- sometimes it's `Error::Fetch` and
sometimes it's `Error::Download`. The message says download, so let's
just use that?
## Summary
This got moved to `InstallTarget`! Must've been an oversight not to
delete it. I verified that no code was changed here since the date that we
moved it to `InstallTarget`.
## Summary
Just as we don't enforce tag compliance, we shouldn't enforce
`--no-build` when validating the lockfile. If we end up building from
source, the distribution database will correctly error.
Closes https://github.com/astral-sh/uv/issues/9016.
## Summary
At time of writing, `markupsafe==3.0.2` exists on the PyTorch index, but
there's only a single wheel:
`MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl`
Meanwhile, there are a large number of wheels on PyPI for the same
version. If the user is on Python 3.12, and we return the incompatible
PyTorch wheel without considering the PyPI wheels, PubGrub will mark
3.0.2 as an incompatible version, even though there are compatible
wheels on PyPI.
Closes https://github.com/astral-sh/uv/issues/8922.
## Summary
We were making some incorrect assumptions in the extra-merging code for
universal `pip compile`. This PR corrects those assumptions and adds a
bunch of additional tests.
Closes https://github.com/astral-sh/uv/issues/8915.
After https://github.com/astral-sh/uv/pull/8797, we have spec-compliant
handling for local version identifiers and can completely remove all the
special-casing around it.
Implement a full working version of local version semantics. The (AFAIA)
major move towards this was implemented in #2430. This added support
such that the version specifier `torch==2.1.0+cpu` would install
`torch@2.1.0+cpu` and consider `torch@2.1.0+cpu` a valid way to satisfy
the requirement `torch==2.1.0` in further dependency resolution.
In this feature, we more fully support local version semantics. Namely,
we now allow `torch==2.1.0` to install `torch@2.1.0+cpu` regardless of
whether `torch@2.1.0` (no local tag) actually exists.
We do this by adding an internal-only `Max` value to local versions that
compares greater than all other local versions. Then we can translate
`torch==2.1.0` into bounds: greater than 2.1.0 with no local tag and
less than 2.1.0 with the `Max` local tag.
Depends on https://github.com/astral-sh/packse/pull/227.
Closes #6640
Could you suggest how I should test it?
(already tested locally)
---------
Co-authored-by: konstin <konstin@mailbox.org>
Co-authored-by: Charles Tapley Hoyt <cthoyt@gmail.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
This updates the surrounding code to use the new ResolverEnvironment
type. In some cases, this simplifies caller code by removing case
analysis. There *shouldn't* be any behavior changes here. Some test
snapshots were updated to account for some minor tweaks to error
messages.
I didn't split this up into separate commits because it would have been
too difficult/costly.
This type is intended to replace `ResolverMarkers`. The main difference
between them is that this type encapsulates more decision making by
un-exporting the different cases. So instead of callers needing to do
explicit case analysis depending on the type of resolver environment,
callers instead use methods that know how to do the right thing. In the
next commit, there are at least a few cases where this greatly
simplifies case analysis on the caller side.
The motivation for this type is to centralize decision making about
forking. In particular, we want to expand forking to include conflicting
groups instead of just `MarkerTree`. So to a certain extent, the
refactor here is about removing bare use of `MarkerTree` in favor of a
more purpose built type that encapsulates the forking logic.
The encapsulation is not quite perfect here. I expect to improve on it a
bit once we add support for conflicting groups.
This is split off from the subsequent commit (that makes use of
`ResolverEnvironment`) so that it's a bit easier to review the addition
in isolation.
## Summary
At present, when we have a Python requirement and we see a wheel, we
verify that the Python requirement is compatible with the wheel. For
source distributions, though, we verify that both the Python requirement
_and_ the currently-installed version are compatible, because we assume
that we'll need to build the source distribution in order to get
metadata. However, we can often extract source distribution metadata
_without_ building (e.g., if there's a `pyproject.toml` with no dynamic
keys).
This PR thus modifies the source distribution handling to defer that
incompatibility ("We couldn't get metadata for this project, because it
has no static metadata and requires a higher Python version to run /
build") until we actually try to build the package. As a result, you can
now resolve source distribution-only packages using Python versions
below their `requires-python`, as long as they include static metadata.
Closes https://github.com/astral-sh/uv/issues/8767.
## Summary
This PR improves the interaction of `--frozen` such that we reduce the
dependency on the `pyproject.toml` and increase the dependency on the
`uv.lock`. Specifically, we now read the list of workspace members from
the `uv.lock` rather than the `pyproject.toml`, which means we don't
need to discover the member `pyproject.toml` files in order to perform a
`uv sync --frozen --all-packages`.
## Summary
This PR enables `uv sync --all-packages` to sync all packages in a
workspace. It removes a common use-case for the legacy non-`[project]`
packages that we're trying to move away from.
Closes https://github.com/astral-sh/uv/issues/8724.
This PR fixes a bug where it was possible for dependencies to be
included in a final resolution with markers that always evaluate to
false. Specifically, `python_version < '0'`.
While we do filter based on Python markers during forking, it turns out
that the markers for each fork are "combined" *after* this filtering
step. But the process of combination can result in a more specific
marker that is always false for the configured Python requirement. This
could result in dependencies with markers that are always false (like
`python_version < '0'`) appearing in the resolution.
The first commit in this PR adds a regression test (with an undesirable
result), and the second commit fixes the regression and updates the
test.
Fixes #8676
## Summary
By default, `uv tree` shows the full workspace, not _just_ the root. If
the root depended on a workspace member as a dev dependency, then we'd
still show it as `(group: dev)` in `uv tree` even if you passed
`--no-dev`, because we weren't filtering the edges in the right place.
This is still somewhat confusing, because if `root` depends on workspace
member `child` as a dev dependency, `uv tree --no-dev` still shows both.
Closes https://github.com/astral-sh/uv/issues/8719.
Previously, when doing a `uv pip` resolution, we would only return the
first entry in the map. But there should only ever be one entry, or else
we would have incompatible dependencies. So we can collapse this case
into the "one universal fork" case.
(I found this while doing some refactoring of how we handle forking, and
collapsing these cases simplifies some of that refactoring work.)
When this code was written, we didn't have "proper" disjointness checks,
and so simple equality was used instead. Arguably, disjointness checks
are more correct, and this would also simplify some case analysis in an
ongoing refactor.
## Summary
Unfortunately, it looks like we lost
https://github.com/astral-sh/uv/pull/8501 somewhere in a bad rebase.
This PR re-adds the change, with compatibility for those lockfiles
created in v0.4.27. I'm not certain we should actually merge this. It
might be less painful and confusing to just bite the bullet on the
change.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
It turns out we were omitting empty dependency groups from the lockfile
metadata, which was then causing us to reject locks when empty groups
were defined.
We now include them (that section of the lock is meant to be a true
representation of the metadata, and an empty-but-defined group is
different from an absent group), though we can ignore them for
validation, since it doesn't affect any behavior.
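For example, this defined-but-empty group (a minimal sketch) is now recorded
in the lockfile metadata rather than dropped:
```toml
[dependency-groups]
lint = []
```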
Closes https://github.com/astral-sh/uv/issues/8581.
## Summary
We already support `tool.uv.dev-dependencies` in the legacy
non-`[project]` projects. This adds equivalent support for
`[dependency-groups]`, e.g.:
```toml
[tool.uv.workspace]
[dependency-groups]
lint = ["ruff"]
```
This PR adds support for `tool.uv.default-groups`, which defaults to
`["dev"]` for backwards-compatibility. These represent the groups we
sync by default.
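A sketch of overriding that default (group names hypothetical):
```toml
[tool.uv]
default-groups = ["dev", "lint"]
```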
Part of #8090
Unblocks https://github.com/astral-sh/uv/pull/8274
Refactors `DevMode` and `DevSpecification` into a shared type
`DevGroupsSpecification` that allows us to track if `--dev` was
implicitly or explicitly provided.
Part of #8090
Adds the ability to add and remove dependencies from arbitrary groups
using `uv add` and `uv remove`. Does not include resolving with the new
dependencies — tackling that in #8110.
Additionally, this does not yet resolve interactions with the existing
`dev` group — we'll tackle that separately as well. I probably won't
merge the stack until that design is resolved.
## Summary
We were including dependencies that were only included by a dependency
that isn't relevant on the current platform (i.e., we were enforcing the
"current environment" at one level, but not transitively).
Closes https://github.com/astral-sh/uv/issues/8516.
## Summary
Historically, we haven't enforced schema versions. This PR adds a
versioning policy such that, if a uv version writes schema v2, then...
- It will always reject lockfiles with schema v3 or later.
- It _may_ reject lockfiles with schema v1, but can also choose to read
them, if possible.
(For example, the change we proposed to rename `dev-dependencies` to
`dependency-groups` would've been backwards-compatible: newer versions
of uv could still read lockfiles that used the `dev-dependencies` field
name, but older versions should reject lockfiles that use the
`dependency-groups` field name.)
Closes https://github.com/astral-sh/uv/issues/8465.
## Summary
Previously, `uv tree --package` had some strange behavior due to how we
were computing the root nodes. This PR refactors the entire
implementation to use `petgraph` so we can do proper operations on a
graph structure.
Closes https://github.com/astral-sh/uv/issues/8382.
## Summary
This is part of making
https://github.com/astral-sh/uv/issues/7299#issuecomment-2385286341
better. You can now use `tool.uv.dependency-metadata` for direct URL
requirements. Unfortunately, you _must_ include a version, since we need
one to perform resolution.
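A hypothetical sketch (package name and URL invented) pairing a direct URL
requirement with its metadata:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["foo @ https://example.com/foo-1.0.tar.gz"]

[[tool.uv.dependency-metadata]]
name = "foo"
version = "1.0"  # required: direct URL requirements must include a version
requires-dist = ["anyio"]
```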
## Summary
If the user has an upper-bound in a `requires-python`, we don't
correctly narrow it during resolution. We should be narrowing based on
the intersection.
Closes #8297.
## Summary
Rather than relying on the distribution and package URL being the same
(which isn't true for Git dependencies), we can just use the
intersection of the markers directly.
Closes https://github.com/astral-sh/uv/issues/8381.
Part of https://github.com/astral-sh/uv/issues/2777
I noticed we're seeing "Python ABI" _a lot_ in error messages, which I
did not expect. This improves a common case by being a little more
specific.
When a patch version isn't specified and a matching version is
referenced, uv defaults the patch to 0, which could be unclear or
confusing. This PR warns the user about that default.
## Summary
This addresses the first part of
https://github.com/astral-sh/uv/issues/7426. I'll tackle the second part
mentioned there (`~=`) in a separate PR once I know this is the correct
way to warn users.
## Test Plan
Unit tests were added
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
If you pass a named index via the CLI, you can now reference it as a
named source. This required some surprisingly large refactors, since we
now need to be able to track whether a given index was provided on the
CLI vs. elsewhere (since, e.g., we don't want users to be able to
reference named indexes defined in global configuration).
Closes https://github.com/astral-sh/uv/issues/7899.
## Summary
This PR lifts the restriction that a package must come from a single
index. For example, you can now do:
```toml
[project]
name = "project"
version = "0.1.0"
readme = "README.md"
requires-python = ">=3.12"
dependencies = ["jinja2"]
[tool.uv.sources]
jinja2 = [
{ index = "torch-cu118", marker = "sys_platform == 'darwin'"},
{ index = "torch-cu124", marker = "sys_platform != 'darwin'"},
]
[[tool.uv.index]]
name = "torch-cu118"
url = "https://download.pytorch.org/whl/cu118"
[[tool.uv.index]]
name = "torch-cu124"
url = "https://download.pytorch.org/whl/cu124"
```
The construction is very similar to the way we handle URLs today: you
can have multiple URLs for a given package, but they must appear in
disjoint forks. So most of the code is just adding that abstraction to
the resolver, following our handling of URLs.
Closes #7761.
## Summary
This PR enables users to provide index credentials via named environment
variables.
For example, given an index named `internal` that requires a username
(`public`) and password
(`koala`), you can define the index (without credentials) in your
`pyproject.toml`:
```toml
[[tool.uv.index]]
name = "internal"
url = "https://pypi-proxy.corp.dev/simple"
```
Then set the `UV_INDEX_INTERNAL_USERNAME` and
`UV_INDEX_INTERNAL_PASSWORD`
environment variables, where `INTERNAL` is the uppercase version of the
index name:
```sh
export UV_INDEX_INTERNAL_USERNAME=public
export UV_INDEX_INTERNAL_PASSWORD=koala
```
## Summary
This PR adds a first-class API for defining registry indexes, beyond our
existing `--index-url` and `--extra-index-url` setup.
Specifically, you now define indexes like so in a `uv.toml` or
`pyproject.toml` file:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
```
You can also provide indexes via `--index` and `UV_INDEX`, and override
the default index with `--default-index` and `UV_DEFAULT_INDEX`.
### Index priority
Indexes are prioritized in the order in which they're defined, such that
the first-defined index has highest priority.
Indexes are also inherited from parent configuration (e.g., the
user-level `uv.toml`), but are placed after any indexes in the current
project, matching our semantics for other array-based configuration
values.
You can mix `--index` and `--default-index` with the legacy
`--index-url` and `--extra-index-url` settings; the latter two are
merely treated as unnamed `[[tool.uv.index]]` entries.
### Index pinning
If an index includes a name (which is optional), it can then be
referenced via `tool.uv.sources`:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
[tool.uv.sources]
torch = { index = "pytorch" }
```
If an index is marked as `explicit = true`, it can _only_ be used via
such references, and will never be searched implicitly:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
explicit = true
[tool.uv.sources]
torch = { index = "pytorch" }
```
Indexes defined outside of the current project (e.g., in the user-level
`uv.toml`) can _not_ be explicitly selected.
(As of now, we only support using a single index for a given
`tool.uv.sources` definition.)
### Default index
By default, we include PyPI as the default index. This remains true even
if the user defines a `[[tool.uv.index]]` -- PyPI is still used as a
fallback. You can mark an index as `default = true` to (1) disable the
use of PyPI, and (2) bump it to the bottom of the prioritized list, such
that it's used only if a package does not exist on a prior index:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
default = true
```
### Name reuse
If a name is reused, the higher-priority index with that name is used,
while the lower-priority indexes are ignored entirely.
For example, given:
```toml
[[tool.uv.index]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
[[tool.uv.index]]
name = "pytorch"
url = "https://test.pypi.org/simple"
```
The `https://test.pypi.org/simple` index would be ignored entirely,
since it's lower-priority than `https://download.pytorch.org/whl/cu121`
but shares the same name.
Closes #171.
## Future work
- Users should be able to provide authentication for named indexes via
environment variables.
- `uv add` should automatically write `--index` entries to the
`pyproject.toml` file.
- Users should be able to provide multiple indexes for a given package,
stratified by platform:
```toml
[tool.uv.sources]
torch = [
{ index = "cpu", markers = "sys_platform == 'darwin'" },
{ index = "gpu", markers = "sys_platform != 'darwin'" },
]
```
- Users should be able to specify a proxy URL for a given index, to
avoid writing user-specific URLs to a lockfile:
```toml
[[tool.uv.index]]
name = "test"
url = "https://private.org/simple"
proxy = "http://<omitted>/pypi/simple"
```
## Summary
This PR declares and documents all environment variables that are used
in one way or another in `uv`, either internally, or externally, or
transitively under a common struct.
Over time, as uv has grown, many environment variables have been
introduced. It's hard to know which ones exist, which ones are missing,
what they're used for, or where they're used across the code. The docs
only document a handful of them; for the others, you'd have to dive into
the code and inspect across crates to know where they're used or
relevant.
This PR is a starting attempt to unify them, make it easier to discover
which ones we have, and maybe unlock future possibilities in
automatically generating documentation for them.
I think we can split this out into multiple structs later for better
organization, but given the high influx of PRs (and possibly new
environment variables being introduced or re-used), it would be hard to
organize them all into properly namespaced structs right now without
constant merge conflicts.
I don't think this has any impact on performance as they all should
still be inlined, although it may affect local build times on changes to
the environment vars as more crates would likely need a rebuild. Lastly,
some of them are declared but not used in the code, for example those in
`build.rs`. I left them declared because I still think it's useful to at
least have a reference.
Did I miss any? Are their initial docs cohesive?
Note: `uv-static` is a terrible name for a new crate; thoughts? Other
names considered: `uv-vars` and `uv-consts`.
## Test Plan
Existing tests
As per
https://matklad.github.io/2021/02/27/delete-cargo-integration-tests.html.
Before this change, there were 91 separate integration test binaries.
(As discussed on Discord: I've done the `uv` crate; there are still a
few more commits coming before this is mergeable, and I want to see how
it performs in CI and locally.)
## Summary
In the routine we use to verify whether the lockfile is up-to-date, we
sometimes have to resolve package metadata. If that resolution step
fails, the resolver is left in a bad state, as various tasks are marked
as pending despite the error. Treating that as a recoverable failure
thus leads to a deadlock.
This PR modifies the errors to be treated as fatal.
I think a more holistic fix here would be to add some kind of guard to
ensure that any tasks that fail are no longer marked as pending (or
enforce this in the type system).
Closes https://github.com/astral-sh/uv/issues/8074.
## Summary
The issue here is that, if a user has a `requires-python` like `>=
3.7, != 3.8.5`, this gets expanded to the following bounds:
- `[3.7, 3.8.5)`
- `(3.8.5, ...`
We then convert this to the specific `>= 3.7, < 3.8.5, > 3.8.5`. But the
commas in that expression are conjunctions... So it's impossible to
satisfy? No version is both `< 3.8.5` and `> 3.8.5`.
Instead, we now preserve the input `requires-python` and just
concatenate the terms, only using PubGrub to compute the _bounds_.
Closes https://github.com/astral-sh/uv/issues/7862.
Unlike `cp36-...`, which requires exactly CPython 3.6, `py36-none` is
compatible with all versions starting at Python 3.6.
Note that `py3x-none` should not be used. Instead, use `py3-none` with
`requires-python`.
Fixes#7800
## Summary
If a supported environment includes a Python marker, we don't simplify
it out, despite _storing_ the simplified markers. This PR modifies the
validation code to compare simplified to simplified markers.
Closes https://github.com/astral-sh/uv/issues/7876.
## Summary
When using `uv tree --package foo`, an extra empty line appears at the
beginning, which seems unnecessary since `uv tree` without the package
option doesn't have this. It's possible that the intention was to add
separation between packages, i.e., the correct implementation should be:
```rust
if !std::mem::take(&mut first) {
lines.push(String::new());
}
```
Even if corrected, this extra spacing might be redundant as `uv tree`
doesn’t include these empty lines between packages by default.
```console
$ uv init project
$ cd project
$ uv init foo
$ uv tree
Using CPython 3.12.5
Resolved 2 packages in 1ms
foo v0.1.0
project v0.1.0
$ uv tree --package project
Using CPython 3.12.5
Resolved 2 packages in 1ms
project v0.1.0
```
Would it be okay to expose this struct? We currently use our own
`ResolveProvider`, and it would be nice to use `FlatDistributions` for
easy `VersionMap` creation.
Thanks!
## Summary
`click` has a single dependency, `colorama`, which applies only on
Windows. `uv tree --invert` should not include `colorama` on non-Windows
platforms, but currently it does:
```console
$ uv init
$ uv add click
$ uv tree --invert --python-platform macos
colorama v0.4.6
```
whereas it should show:
```console
$ uv tree --invert --python-platform macos
click v8.1.7
└── project v0.1.0
```
#7226 modified the check to skip prefetching of source dists without
proper minimum-version bounds, and wound up flipping the boolean
expression. This change flips the some/none expression so that the
intended skip happens as expected.
Fixes #7680.
This PR adds some additional sanity checking on resolution graphs to
ensure we can never install different versions of the same package into
the same environment.
I used code similar to this to provoke bugs in the resolver before the
release, but it never made it into `main`. Here, we add the error
checking to the creation of `ResolutionGraph`, since this is where it's
most convenient to access the "full" markers of each distribution.
We only report an error when `debug_assertions` are enabled, to avoid
rendering `uv` *completely* unusable if a bug were to occur in a
production binary. For example, maybe a conflict is detected in a marker
environment that isn't actually used. While not ideal, `uv` is still
usable for any other marker environment.
Closes #5598
This enhances the hints generator in the resolver with a heuristic to
detect, and warn about, failures due to version mismatches on a local
package. Those may be a symptom of a name conflict (shadowing) with a
transitive dependency.
Closes: https://github.com/astral-sh/uv/issues/7329
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Recently, rkyv 0.8 was released. Its API is a fair bit simpler now for
higher-level uses (like ours in `uv`) and lets us delete a fair bit of
code. This also removes our last dependency on `syn 1.0`.
Performance (via testing on the `transformers` example) seems to remain
about the same, which is what was expected:
```
$ hyperfine -w5 -r100 'uv lock' 'uv-ag-rkyv-update lock'
Benchmark 1: uv lock
Time (mean ± σ): 55.6 ms ± 6.4 ms [User: 30.4 ms, System: 35.1 ms]
Range (min … max): 43.0 ms … 73.1 ms 100 runs
Benchmark 2: uv-ag-rkyv-update lock
Time (mean ± σ): 56.5 ms ± 7.2 ms [User: 30.5 ms, System: 36.3 ms]
Range (min … max): 39.1 ms … 71.5 ms 100 runs
Summary
uv lock ran
1.02 ± 0.18 times faster than uv-ag-rkyv-update lock
```
Closes #7415
This changes the structure of the hints generator in the resolver when
encountering solution errors, so that it re-uses a single output buffer
owned by the caller.
It avoids repeated allocations of a temporary buffer within each
recursive function call.
## Summary
This PR enables users to provide pre-defined static metadata for
dependencies. It's intended for situations in which the user depends on
a package that does _not_ declare static metadata (e.g., a
`setup.py`-only sdist), and that is expensive to build or even cannot be
built on some architectures. For example, you might have a Linux-only
dependency that can't be built on ARM -- but we need to build that
package in order to generate the lockfile. By providing static metadata,
the user can instruct uv to avoid building that package at all.
For example, to override all `anyio` versions:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]
[[tool.uv.dependency-metadata]]
name = "anyio"
requires-dist = ["iniconfig"]
```
Or, to override a specific version:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]
[[tool.uv.dependency-metadata]]
name = "anyio"
version = "3.7.0"
requires-dist = ["iniconfig"]
```
The current implementation uses `Metadata23` directly, so we adhere to
the exact schema expected internally and defined by the standards. Any
entries are treated similarly to overrides, in that we won't even look
for `anyio@3.7.0` metadata in the above example. (In a way, this also
enables #4422, since you could remove a dependency for a specific
package, though it's probably too unwieldy to use in practice, since
you'd need to redefine the _rest_ of the metadata, and do that for every
package that requires the package you want to omit.)
This is under-documented, since I want to get feedback on the core ideas
and names involved.
Closes https://github.com/astral-sh/uv/issues/7393.
## Summary
All the registry wheels were getting cached under
`index/b2a7eb67d4c26b82` rather than `pypi`, because we used
`IndexUrl::Url` rather than `IndexUrl::from`.
## Summary
This is arguably breaking, arguably a bug... Today, if project A depends
on project B, and you install A with dev dependencies enabled, you also
get B's dev dependencies. I think this is incorrect. Just like you
shouldn't be importing B's dependencies from A, you shouldn't be using
B's dev dependencies when developing on A.
Closes #7310.
## Summary
We have to call `to_dist` to get metadata while validating the lockfile,
but some of the distributions won't match the current platform -- and
that's fine!
## Summary
We need to apply the `--no-install` filters earlier, such that we don't
error if we only have a source distribution for a given package when
`--no-build` is provided but that package is _omitted_.
Closes #7247.
This is preparatory work for the upload functionality, which needs to
read the METADATA file and attach its parsed contents to the POST
request: We move finding the `.dist-info` from `install-wheel-rs` and
`uv-client` to a new `uv-metadata` crate, so it can be shared with the
publish crate.
I don't know for certain that it's the right place, since the upload
code isn't ready, but I'm PR-ing it now because it already had merge
conflicts.
## Summary
We now track the discovered `IndexCapabilities` for each `IndexUrl`. If
we learn that an index doesn't support range requests, we avoid doing
any batch prefetching.
Closes https://github.com/astral-sh/uv/issues/7221.
## Summary
If we have a singleton `Range`, we don't need to iterate over the map of
available ranges; instead, we can just get the singleton directly.
Closes #6131.
This finally gets rid of our hack for working around "hidden"
state. We no longer do a roundtrip marker serialization and
deserialization just to avoid the hidden state.
## Summary
I think a better tradeoff here is to skip fetching metadata, even though
we can't validate the extras.
It will help with situations like
https://github.com/astral-sh/uv/issues/5073#issuecomment-2334235588 in
which, otherwise, we have to download the wheels twice.
(This is part of #5711)
## Summary
@BurntSushi and I spotted that the `derivative` crate is only used for
one enum in the entire codebase — however, it's a proc macro, and we pay
for the cost of (re)compiling it in many different contexts.
This replaces it with a private `Inner` core which uses the regular std
derive macros — inlining and optimizations should make this equivalent
to the other implementation, and not too hard to maintain hopefully
(versus a manual impl of `PartialEq` and `Hash` which have to be kept in
sync.)
## Test Plan
Trust CI?
## Summary
Like `uv sync`, you can omit the current project (`--no-emit-project`),
a specific package (`--no-emit-package`), or the entire workspace
(`--no-emit-workspace`).
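A usage sketch (assuming `uv export`, with a hypothetical package name):
```console
$ uv export --no-emit-package torch > requirements.txt
```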
Closes https://github.com/astral-sh/uv/issues/6960.
Closes #6995.
Follow-up to #6959 and #6961: Use the reachability computation instead
of `propagate_markers` everywhere.
With `marker_reachability`, we have a function that computes for each
node the markers under which it is (`requirements.txt`, no markers
provided on installation) or can be (`uv.lock`, depending on the markers
provided on installation) included in the installation. Put differently:
If the marker computed by `marker_reachability` is not fulfilled for the
current platform, the package is never required on the current platform.
We compute the markers for each package in the graph, this includes the
virtual extra packages and the base packages. Since we know that each
virtual extra package depends on its base package (`foo[bar]` implies
`foo`), we only retain the base package marker in the `requirements.txt`
graph.
In #6959/#6961 we were only using it for pruning packages in `uv.lock`,
now we're also using it for the markers in `requirements.txt`.
I think this closes #4645, CC @bluss.
## Summary
We need to prioritize hashes for the distribution over hashes for the
related packages.
I think this needs to be redone entirely though. I can see other issues
with the current approach.
Closes https://github.com/astral-sh/uv/issues/7059.
When a package is included under a platform-specific marker, we know
that wheels that mismatch this marker can never be installed, so we drop
them from the lockfile.
In transformers, we have:
* `tensorflow-text`: `tensorflow-macos; python_full_version >= '3.13'
and platform_machine == 'arm64' and platform_system == 'Darwin'`
* `tensorflow-macos`: `tensorflow-cpu-aws; (python_full_version < '3.10'
and platform_machine == 'aarch64' and platform_system == 'Linux') or
(python_full_version >= '3.13' and platform_machine == 'aarch64' and
platform_system == 'Linux') or (python_full_version >= '3.13' and
platform_machine == 'arm64' and platform_system == 'Linux')`
* `tensorflow-macos`: `tensorflow-intel; python_full_version >= '3.13'
and platform_system == 'Windows'`
This means that `tensorflow-cpu-aws` and `tensorflow-intel` can never be
installed, and we can drop them from the lockfile.
This commit refactors how we deal with `requires-python` so that,
instead of simplifying the markers of dependencies inside the resolver,
we do it at the edges of our system. When writing markers to output, we
simplify them when there's an obvious `requires-python` context. And
when reading markers as input, we complexify them with the relevant
`requires-python` constraint.
When I first wrote this routine, it was intended to only emit a trace
for the final "unioned" resolution. But we actually moved that semantic
operation to the construction of the resolution *graph*. So there is no
unioned `Resolution` any more.
But this is still useful to see. So I changed this to just emit a trace
of *every* resolution right before constructing the graph.
It might be nice to also emit a trace of the unioned graph too. Or
perhaps we should do that instead if this proves too noisy. (Although
this is only emitted at TRACE level.)
## Summary
Right now, we have slightly different `requires-python` semantics for
`-p 3.11` vs. `-p 3.11 --universal`, and slightly different (wrong)
semantics for how we compare against the _installed_ Python version
(which doesn't ignore upper bounds, but should).
This PR rips it all out and replaces it with consistent semantics across
`uv lock`, `uv pip compile -p 3.11`, and `uv pip compile -p 3.11
--universal`. We now always ignore upper bounds.
Closes https://github.com/astral-sh/uv/issues/6859.
Closes https://github.com/astral-sh/uv/issues/5045.
## Summary
We now respect the user-provided upper bound for `requires-python`.
So, if the user has `requires-python = "==3.11.*"`, we won't explore
forks that have `python_version >= '3.12'`, for example.
However, we continue to _only_ compare the lower bounds when assessing
whether a dependency is compatible with a given Python range.
Closes https://github.com/astral-sh/uv/issues/6150.
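As an illustration of that lower-bound rule, with bounds as
`(major, minor)` tuples (purely illustrative, not the actual code):
```rust
/// A dependency is considered compatible as long as it doesn't require a
/// *newer* Python than the lower bound we're resolving for; its upper
/// bound is deliberately not consulted.
fn lower_bound_compatible(dependency_lower: (u64, u64), target_lower: (u64, u64)) -> bool {
    dependency_lower <= target_lower
}

fn main() {
    // Resolving for 3.11: `requires-python = ">=3.8,<3.10"` still passes,
    // because only the `>=3.8` part is compared...
    assert!(lower_bound_compatible((3, 8), (3, 11)));
    // ...but `requires-python = ">=3.12"` does not.
    assert!(!lower_bound_compatible((3, 12), (3, 11)));
}
```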
## Summary
The interface here is intentionally a bit more limited than `uv pip
compile`, because we don't want `requirements.txt` to be a system of
record -- it's just an export format. So, we don't write annotation
comments (i.e., which dependency is requested from which), we don't
allow writing extras, etc. It's just a flat list of requirements, with
their markers and hashes.
Closes #6007.
Closes #6668.
Closes #6670.
## Summary
Whether a package is itself virtual isn't captured in the package
metadata, so we have to compare the sources.
Closes https://github.com/astral-sh/uv/issues/6749.
## Summary
Use a dedicated source type for non-package requirements. Also enables
us to support non-package `path` dependencies _and_ removes the need to
have the member `pyproject.toml` files available when we sync _and_
makes it explicit which dependencies are virtual vs. not (as evidenced
by the snapshot changes). All good things!
## Summary
This is similar to https://github.com/astral-sh/uv/pull/6171 but more
expansive... _Anywhere_ that we test requirements for platform
compatibility, we _need_ to respect the resolver-friendly markers. In
fixing the motivating issue (#6621), I also realized that we had a bunch
of bugs here around `pip install` with `--python-platform` and
`--python-version`, because we always performed our `satisfy` and `Plan`
operations on the interpreter's markers, not the adjusted markers!
Closes https://github.com/astral-sh/uv/issues/6621.
For users who were using absolute paths in the `pyproject.toml`
previously, this is a behavior change: we now convert all absolute paths
in `path` entries to relative paths. Since I assume that no one relies on
absolute paths in their lockfiles (they are intended to be portable), I'm
tagging this as a bugfix.
Closes https://github.com/astral-sh/uv/pull/6438
Fixes https://github.com/astral-sh/uv/issues/6371
## Summary
It turns out we weren't applying the collapse logic here, so dev deps
with extras were repeated. This was generally ok... unless we ended up
_dropping_ an extra, in which case, you now have a duplicate.
Closes https://github.com/astral-sh/uv/issues/6380.
## Summary
We're gonna work on a more comprehensive review of whether we should
preserve the username here, but for now, `git@` is effectively a
convention for GitHub and GitLab etc.
Closes https://github.com/astral-sh/uv/issues/6305.
## Test Plan
I guess we don't have infrastructure for testing SSH private keys right
now, but...
```
❯ cargo run init foo
❯ cd foo
❯ cargo run add git+ssh://git@github.com/astral-sh/mkdocs-material-insiders.git
```
## Summary
For non-virtual workspaces, these are covered by the _members_. But for
virtual workspaces, they aren't captured anywhere else in the lock. So,
we weren't invalidating `uv.lock` when the dev dependencies changed,
which led to a panic.
Closes https://github.com/astral-sh/uv/issues/6288
This PR migrates uv's use of `chrono` to `jiff`.
I did most of this work a while back as one of my tests to ensure Jiff
could actually be used in a real world project. I decided to revive
this because I noticed that `reqwest-retry` dropped its Chrono
dependency, which I believe is the only other thing requiring Chrono in
uv.
(Although, we use a fork of `reqwest-middleware` at present, and that
hasn't been updated to latest upstream yet. I wasn't quite sure of the
process we have for that.)
In the course of doing this, I actually made two changes to uv:
First is that the lock file now writes an RFC 3339 timestamp for
`exclude-newer`. Previously, we were using Chrono's `Display`
implementation for this which is a non-standard but "human readable"
format. I think the right thing to do here is an RFC 3339 timestamp.
Second is that, in addition to an RFC 3339 timestamp, `--exclude-newer`
used to accept a "UTC date." But this PR changes it to a "local date."
That is, a date in the user's system configured time zone. I think
this makes more sense than a UTC date, but one alternative is to drop
support for a date and just rely on an RFC 3339 timestamp. The main
motivation here is that automatically assuming UTC is often somewhat
confusing, since just writing an unqualified date like `2024-08-19` is
often assumed to be interpreted relative to the writer's "local" time.
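A sketch of the parsing fallback as described (not uv's actual code,
though the jiff calls are real):
```rust
use jiff::{civil::Date, tz::TimeZone, Timestamp};

/// Accept either a full RFC 3339 timestamp, or a bare date interpreted
/// in the system's configured time zone rather than UTC.
fn parse_exclude_newer(input: &str) -> Result<Timestamp, jiff::Error> {
    // A fully qualified timestamp, e.g. `2024-08-19T00:00:00Z`.
    if let Ok(timestamp) = input.parse::<Timestamp>() {
        return Ok(timestamp);
    }
    // Otherwise a local date, e.g. `2024-08-19`, in the system time zone.
    let date: Date = input.parse()?;
    Ok(date.to_zoned(TimeZone::system())?.timestamp())
}

fn main() -> Result<(), jiff::Error> {
    println!("{}", parse_exclude_newer("2024-08-19")?);
    Ok(())
}
```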
## Summary
The strategy here is: if the user provides supported environments, we
use those as the initial forks when resolving. As a result, we never add
or explore branches that are disjoint with the supported environments.
(If the supported environments change, we ignore the lockfile entirely,
so we don't have to worry about any interactions between supported
environments and the preference forks.)
Closes https://github.com/astral-sh/uv/issues/6184.
## Summary
I probably won't land https://github.com/astral-sh/uv/pull/6210 for the
release, but I do want to make one breaking change to prep for it:
renaming `environment-markers` to `resolution-markers` (or
`solution-markers`?) so that it's delineated from the user-defined
markers in that PR.
Indented blocks in Markdown are treated as code blocks, and rustdoc
treats all unadorned code blocks as Rust doctests. Since this wasn't
intended as a doctest and isn't valid Rust, it makes `cargo test --doc`
fail. We fix this by using an explicit code block labeled as `text`.
In https://github.com/astral-sh/uv/issues/5046, we show the tautological
proof:
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20 and only the following versions of jax are available:
jax<=0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
we can conclude that colabfold[alphafold]==1.5.5 depends on jax>=0.4.20.
And because jax>=0.4.20 depends on numpy>=1.26.0, we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0.
(1)
```
This is a part of the error tree because the statement
`colabfold[alphafold]==1.5.5 depends on jax>=0.4.20` is actually a
simplification of `colabfold[alphafold]==1.5.5 depends on
jax>=0.4.20,<0.5.0` and the no versions clause is a proof of that
simplification.
Without simplification, the clause looks like:
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20,<0.5.0 and only the following versions of jax are available:
jax<=0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
we can conclude that colabfold[alphafold]==1.5.5 depends on one of:
jax==0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
And because jax>=0.4.20 depends on numpy>=1.26.0, we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0.
```
I don't think we have a great way to perform the range simplification
only conditionally, and jumping straight to `colabfold[alphafold]==1.5.5
depends on jax>=0.4.20` makes the error simpler anyway.
The derivation for this clause looks like:
```
jax==0.4.20 | ==0.4.21 | ==0.4.22 | ==0.4.23 | ==0.4.24 | ==0.4.25 | ==0.4.26 | ==0.4.27 | ==0.4.28 | ==0.4.29 | ==0.4.30 | ==0.4.31 depends on numpy>=1.26.0
no versions of jax>0.4.20, <0.4.21 | >0.4.21, <0.4.22 | >0.4.22, <0.4.23 | >0.4.23, <0.4.24 | >0.4.24, <0.4.25 | >0.4.25, <0.4.26 | >0.4.26, <0.4.27 | >0.4.27, <0.4.28 | >0.4.28, <0.4.29 | >0.4.29, <0.4.30 | >0.4.30, <0.4.31 | >0.4.31, <0.5.0
colabfold[alphafold]==1.5.5 depends on jax>=0.4.20, <0.5.0
```
So it looks like we can take trees of this form and drop the "no
versions" clause _if_ the ranges are compatible[*]. See [this
comment](https://github.com/astral-sh/uv/pull/6160#discussion_r1720280922)
for a simpler explanation.
With this pull request, the clause simplifies to
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20 and jax>=0.4.20 depends on numpy>=1.26.0,
we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0. (1)
```
Unfortunately, this doesn't change any snapshots in our test suite so
I'm uncertain if the strategy generalizes. In some incorrect iterations
of this logic, the snapshots did reveal my mistakes.
[*] "if the ranges are compatible" includes a bit of hand-waving. I'm
not 100% sure if I've chosen the correct range heuristic here.
Closes https://github.com/astral-sh/uv/issues/6167
We've been seeing intermittent failures in CI, which we thought were
unexpected HTTP 401s, but it actually looks like a panic when handling an
expected HTTP error. I believe the problem is that an early client error
can cause the channel to close, and we crash when we unwrap the `send`.
## Summary
In the resolver, we use release-only semantics to normalize
`python_full_version`. So, if we see `python_full_version < '3.13'`, we
treat that as `(Unbounded, Exclude(3.13))`. `3.13b0` evaluates as `true`
against that range, so we were accepting pre-releases for these markers.
Instead, we need to exclude pre-release segments when performing these
evaluations.
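A toy illustration of the two evaluation semantics (versions reduced to
a release tuple plus an optional pre-release segment):
```rust
struct Version {
    release: (u64, u64),
    pre: Option<u64>, // e.g. Some(0) for `b0`
}

/// The old behavior: full PEP 440 ordering, under which `3.13b0 < 3.13`,
/// so the pre-release slips into `(Unbounded, Exclude(3.13))`.
fn below_pep440(v: &Version, upper: (u64, u64)) -> bool {
    v.release < upper || (v.release == upper && v.pre.is_some())
}

/// The fix: evaluate against the release segment only, so `3.13b0` is
/// treated as `3.13` and falls outside `< 3.13`.
fn below_release_only(v: &Version, upper: (u64, u64)) -> bool {
    v.release < upper
}

fn main() {
    let v = Version { release: (3, 13), pre: Some(0) }; // 3.13b0
    assert!(below_pep440(&v, (3, 13)));        // accepted: the bug
    assert!(!below_release_only(&v, (3, 13))); // rejected: the fix
}
```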
Closes https://github.com/astral-sh/uv/issues/6169.
## Test Plan
Hard to write a test for this because you need a pre-release Python
locally... so:
`echo "sqlalchemy==2.0.32" | cargo run pip compile - --python 3.13 -n`
Now that these incompatibilities are collected into a single range
(https://github.com/astral-sh/uv/pull/6154), we can simplify the range
using the known available versions to reduce verbosity.
There were different `PubGrubPackage` types so they never matched the
available versions set! Luckily, the available versions are agnostic to
the markers and optional dependencies so we can just broaden to using
`PackageName` as a lookup key.
Addresses yet another complaint in
https://github.com/astral-sh/uv/issues/5046
I need this for debugging error messages.
I used an environment variable instead of a trace log so you can do
`UV_INTERNAL__SHOW_DERIVATION_TREE=1` and run a test to see the tree in
the test snapshot without further changes.
e.g.
```rust
// Resolving should fail.
uv_snapshot!(context.filters(), context.lock().arg("--preview").current_dir(&workspace), @r###"
success: false
exit_code: 1
----- stdout -----
UV_INTERNAL__SHOW_DERIVATION_TREE
root==0a0.dev0 depends on foo*
root==0a0.dev0 depends on bar[some-extra]*
foo==0.1.0 depends on anyio==4.1.0
bar[some-extra]==0.1.0 depends on anyio==4.2.0
no versions of bar[some-extra]<0.1.0 | >0.1.0
----- stderr -----
Using Python 3.12.[X] interpreter at: [PYTHON-3.12]
× No solution found when resolving dependencies:
╰─▶ Because only bar[some-extra]==0.1.0 is available and bar[some-extra] depends on anyio==4.2.0, we can conclude that all versions of bar[some-extra] depend on anyio==4.2.0.
And because foo depends on anyio==4.1.0, we can conclude that foo and all versions of bar[some-extra] are incompatible.
And because your workspace requires bar[some-extra] and foo, we can conclude that your workspace's requirements are unsatisfiable.
"###
);
```
We have bad error messages for optional (extra) dependencies and
development dependencies in workspaces:
1. We weren't showing the full package, so we'd drop `:dev` and
`[extra]` by accident
2. We didn't include derived packages, e.g., `member[extra]`, in the
tree-processing collapse operation, so we'd include extra clauses like
the ones we removed in #6092
Also:
- Reverts f0de4f71f2 — it turns out it wasn't quite correct and it didn't
seem worth using the custom incompatibility anymore.
- Fixes a bug in the display of `package:dev` which was not showing
`:dev` for some variants (see 94d8020b58)
## Summary
Normalize all `python_version` markers to their equivalent
`python_full_version` form. This avoids false positives in forking
because we currently cannot detect any relationships between the two
forms. It also avoids subtle bugs due to the truncating semantics of
`python_version`. For example, given `requires-python = ">3.12"`, we
currently simplify the marker `python_version <= 3.12` to `false`.
However, the version `3.12.1` will be truncated to `3.12` for
`python_version` comparisons, and thus it satisfies the python
requirement and evaluates to `true`.
It is possible to simplify back to `python_version` when writing markers
to the lockfile. However, the equivalent `python_full_version` markers
are often clearer and easier to simplify, so I lean towards leaving them
as `python_full_version`.
There are *a lot* of snapshot updates from this change. I'd like more
eyes on the transformation logic in `python_version_to_full_version` to
ensure that they are all correct.
Resolves https://github.com/astral-sh/uv/issues/6125.
Includes the changes from https://github.com/astral-sh/uv/pull/6071 but
takes them way further.
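To make the truncation semantics concrete, here's a hedged sketch of the
comparison translations (two-segment versions only; the real
`python_version_to_full_version` handles `==`, `!=`, and arbitrary
specifiers):
```rust
#[derive(Debug)]
enum FullVersionMarker {
    /// `python_full_version < 'major.minor'`
    Lt(u64, u64),
    /// `python_full_version >= 'major.minor'`
    Ge(u64, u64),
}

fn translate(op: &str, (major, minor): (u64, u64)) -> FullVersionMarker {
    match op {
        // `python_version <= '3.12'` also matches 3.12.1, so it becomes
        // `python_full_version < '3.13'`.
        "<=" => FullVersionMarker::Lt(major, minor + 1),
        "<" => FullVersionMarker::Lt(major, minor),
        ">=" => FullVersionMarker::Ge(major, minor),
        // `python_version > '3.12'` excludes all of 3.12.x, so it becomes
        // `python_full_version >= '3.13'`.
        ">" => FullVersionMarker::Ge(major, minor + 1),
        _ => unimplemented!("equality operators expand to intervals"),
    }
}

fn main() {
    // The example above: `python_version <= '3.12'` maps to
    // `python_full_version < '3.13'`, which 3.12.1 satisfies, so the
    // marker must not be simplified to `false` under `>3.12`.
    println!("{:?}", translate("<=", (3, 12)));
}
```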
When we have the set of available versions for a package, we can do a
much better job displaying an error.
For example:
```
❯ uv add 'httpx>999,<9999'
× No solution found when resolving dependencies:
╰─▶ Because only the following versions of httpx are available:
httpx<=999
httpx>=9999
and example==0.1.0 depends on httpx>999,<9999, we can conclude that example==0.1.0 cannot be used.
And because only example==0.1.0 is available and you require example, we can conclude that the requirements are unsatisfiable.
```
The resolver has demonstrated that the requested range cannot be used
because there are only versions in ranges _outside_ the requested range.
However, the display of the range of available versions is pretty bad!
We say there are versions of httpx available in ranges that definitely
have no versions available.
With this pull request, the error becomes:
```
❯ uv add 'httpx>999,<9999'
× No solution found when resolving dependencies:
╰─▶ Because only httpx<=1.0.0b0 is available and example depends on httpx>999,<9999, we can conclude that example's
requirements are unsatisfiable.
And because your workspace requires example, we can conclude that your workspace's requirements are unsatisfiable.
```
We achieve this by:
1. Dropping ranges disjoint with the range of available versions, e.g.,
this removes `httpx>=9999`
2. Replacing ranges that capture the _entire_ range of available
versions with the smaller range, e.g., this replaces `httpx<=999` with
`<=1.0.0b0`.
~~Note that when we perform (2), we may include an additional bound that
is not relevant, e.g., we include the lower bound of `>=0.6.7`. This is
a bit extraneous, but I don't think it's confusing. We can consider some
advanced logic to avoid that later.~~ (edit: I did this, it wasn't hard)
We also improve error messages when there is _only_ one version
available by showing that version instead of a range.
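A toy version of the two steps, with versions as integers and the
complaint as a union of optionally bounded ranges (illustrative only):
```rust
type Bound = Option<u64>;

fn simplify_available(
    complaint: Vec<(Bound, Bound)>, // inclusive (lo, hi); None = unbounded
    available: &[u64],              // sorted versions that actually exist
) -> Vec<(Bound, Bound)> {
    let (min, max) = (available[0], available[available.len() - 1]);
    complaint
        .into_iter()
        // Step 1: drop ranges disjoint with the available versions,
        // e.g. `httpx>=9999` when nothing above 1.0.0b0 exists.
        .filter(|&(lo, hi)| {
            lo.map_or(true, |lo| lo <= max) && hi.map_or(true, |hi| hi >= min)
        })
        // Step 2: tighten bounds that overshoot the available range,
        // e.g. `httpx<=999` becomes `httpx<=1.0.0b0`.
        .map(|(lo, hi)| (lo.map(|lo| lo.max(min)), hi.map(|hi| hi.min(max))))
        .collect()
}

fn main() {
    // Available versions top out at 10 (standing in for 1.0.0b0); the
    // complaint was `<=999 OR >=9999`.
    let simplified = simplify_available(
        vec![(None, Some(999)), (Some(9999), None)],
        &[1, 2, 10],
    );
    // Only `<=10` survives, mirroring `only httpx<=1.0.0b0 is available`.
    assert_eq!(simplified, vec![(None, Some(10))]);
}
```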
## Summary
Gives the caller control over how messages are reported back to the
user. Also merges the index-location validation into the lock, since
we're already iterating over the packages.
## Summary
This is no longer required since we no longer implement `Eq` on `Lock`.
It will also sometimes be "wrong" as of #6076, since we now apply
different `requires-python` filtering to different parts of the tree
during resolution.
## Summary
Using https://github.com/astral-sh/uv/issues/6064 as a motivating
example: at present, on main, we're not properly propagating the
`Requires-Python` simplifications. In that case, for example, we end up
solving for a branch with `python_version < 3.11`, and a branch `>=
3.11`, even though `Requires-Python` is `>=3.11`. Later, when we get to
the graph, we apply version simplification based on `Requires-Python`,
which causes us to _remove_ the `python_version < 3.11` markers
entirely, leaving us with duplicate dependencies for `pylint`.
This PR instead tries to ensure that we always apply this narrowing to
requirements and forks, so that we don't need to apply the same
simplification when constructing the graph at all.
Closes https://github.com/astral-sh/uv/issues/6064.
Closes #6059.
In particular, I added this as a hack to avoid a kind of
instability that was caused by our marker code not correctly
detecting markers that were always false. But that has since
been fixed.
Removing this code doesn't change any tests. Arguably it
should be possible to come up with a test that failed with
this hack inserted but succeeded without it. In particular,
with this hack, new forks were being prevented from being
added even when they ought to be added, e.g., when preferences
get updated.
## Summary
This PR changes the definition of `--locked` from:
> Produces the same `Lock`
To:
> Passes `Lock::satisfies`
This is a subtle but important difference. Previously, if
`Lock::satisfies` failed, we would run a resolution, then do
`existing_lock == lock`. If the two weren't equal, and `--locked` was
specified, we'd throw an error.
The equality check is hard to get right. For example, it means that we
can't ship #6076 without changing our marker representation, since the
deserialized lockfile "loses" some of the internal marker state that
gets accumulated during resolution.
The downside of this change is that there could be scenarios in which
`uv lock --locked` fails even though the lockfile would actually work
and the exact TOML would be unchanged. But... I think it's ok if
`--locked` fails after the user modifies something?
Extends https://github.com/astral-sh/uv/pull/6092 to improve resolver
error messages for workspaces that have a single member.
As before, this requires a two-step approach of
1. Traversing the derivation tree and collapsing some members. In this
case, we drop the empty root node in favor of the project.
2. Using special-case formatting for packages. In this case, the
workspace package is referred to with "your project" instead of its
name.
An extension of #6090 that replaces #6066.
In brief,
1. Workspace member names are passed to the resolver for no solution
errors
2. There is a new derivation tree pre-processing step that trims
`NoVersion` incompatibilities for workspace members from the derivation
tree. This avoids showing redundant clauses like `Because only
bird==0.1.0 is available and bird==0.1.0 depends on anyio==4.3.0, we can
conclude that all versions of bird depend on anyio==4.3.0.`. As a minor
note, we use a custom incompatibility kind to mark these
incompatibilities at resolution-time instead of afterwards.
3. Root dependencies on workspace members say `your workspace requires
bird` rather than `you require bird`
4. Workspace member package display omits the version, e.g., `bird`
instead of `bird==0.1.0`
5. Instead of reporting a workspace member as unusable we note that its
requirements cannot be solved, e.g., `bird's requirements are
unsatisfiable` instead of `bird cannot be used`.
6. Instead of saying `your requirements are unsatisfiable` we say `your
workspace's requirements are unsatisfiable` when in a workspace, since
we're not in a "provide direct requirements" paradigm.
As an annoying but minor implementation detail, `PackageRange` now
requires access to the `PubGrubReportFormatter` so it can determine if
it is formatting a workspace member or not. We could probably improve
the abstractions in the future.
As a follow-up, we should add special casing for "single project"
workspaces to avoid mentioning the workspace concept in simple projects.
However, it looks like this will require additional tree manipulations
so I'm going to keep it separate.
## Summary
Historically, in order to "resolve from a lockfile", we've taken the
lockfile, used it to pre-populate the in-memory metadata index, then run
a resolution. If the resolution didn't match our existing resolution, we
re-resolved from scratch.
This was an appealing approach because (in theory) it didn't require any
dedicated logic beyond pre-populating the index. However, it's proven to
be _really_ hard to get right, because it's a stricter requirement than
we need. We just need the current lockfile to _satisfy_ the requirements
provided by the user. We don't actually need a second resolution to
produce the exact same result. And it's not uncommon that this second
resolution differs, because we seed it with preferences, which
fundamentally changes its course. We've worked hard to minimize those
"instabilities", but they're still present.
The approach here is intended to be much simpler. Instead of resolving
from the lockfile, we just check if the current resolution satisfies the
state of the workspace. Specifically, we check if the lockfile (1)
contains all the relevant members, and (2) matches the metadata for all
dependencies, recursively. (We skip registry dependencies, assuming that
they're immutable.)
This may actually be too conservative, since we can have resolutions
that satisfy the requirements, even if the requirements have changed
slightly. But we want to bias towards correctness for now.
My hope is that this scheme will be more performant, simpler, and more
robust.
Closes https://github.com/astral-sh/uv/issues/6063.
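A condensed sketch of the check (types are hypothetical, and the real
satisfiability check covers much more, e.g. indexes and requirements):
```rust
use std::collections::{BTreeMap, BTreeSet};

struct LockedPackage {
    registry: bool,                  // registry deps are assumed immutable
    requires_dist: BTreeSet<String>, // metadata recorded at lock time
}

fn satisfies(
    lock: &BTreeMap<String, LockedPackage>,
    members: &[String],
    current_metadata: &BTreeMap<String, BTreeSet<String>>,
) -> bool {
    // (1) The lockfile must contain all the relevant workspace members.
    members.iter().all(|member| lock.contains_key(member))
        // (2) The recorded metadata must still match for every mutable
        // (non-registry) package, recursively through the graph.
        && lock.iter().all(|(name, package)| {
            package.registry
                || current_metadata.get(name) == Some(&package.requires_dist)
        })
}
```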
## Summary
This was added in https://github.com/astral-sh/uv/pull/5405 but is now
the cause of an instability in `github_wikidata_bot`. Specifically, on
the initial run, we fork in `pydantic==2.8.2`, via:
```
Requires-Dist: typing-extensions>=4.12.2; python_version >= '3.13'
Requires-Dist: typing-extensions>=4.6.1; python_version < '3.13'
```
In the end, we resolve a single version of `typing-extensions`
(`4.12.2`)... But we don't recognize the two resolutions as the "same
graph", because we propagate the fork markers, and so the "edges" have
different markers on them...
In the second run through, when we have the forks in advance, we don't
split on Pydantic... We just try to solve from the root with the current
forks. This is fundamentally different and I fear it will be the cause
of many instabilities. But removing this graph check fixes the proximate
issue.
I don't really understand why this was added since there was no test
coverage in the PR.
## Summary
When constructing the `Resolution`, we only propagated the fork markers
to the package node, but not the extras node. This led to cases in which
an extra could be included unconditionally or otherwise diverge from the
base package version.
Closes https://github.com/astral-sh/uv/issues/6062.
## Summary
Right now, we store the environment markers in a `BTreeSet` -- so
they're sorted, but the sort doesn't really tell us anything. I think we
should instead store them in the order in which we solved. I thought
this might fix an instability (it didn't), but I think it's still good
to ensure we solve in the same order.
I also changed from `Option<Vec>` to just `Vec`, since there was no
distinction between `None` and empty.
## Summary
Now, if you resolve against a registry, then swap it out for another, we
won't reuse the lockfile. (If you don't provide any registry
configuration, then we won't enforce this, so that `uv lock --index-url
foo` and `uv lock` is stable.)
Closes https://github.com/astral-sh/uv/issues/5920.
## Summary
Our current handling of `--find-links` merges the entries in each index.
As a result, we can end up with `AnnotatedDist` entries that reference
distributions across indexes.
I'd like to change `--find-links` such that each `--find-links` entry is
just treated as its own index (so, e.g., if `requests` exists in the
first `--find-links` entry, we don't even check the registry by
default), which would _also_ fix this problem automatically. But that's
a behavior change... So for now, in the lockfile, we filter
distributions that don't match the source index URL.
There are two cases to consider:
- There's a source distribution. Then, for the ID to reference the
`--find-links` registry, the source distribution _must_ have come from
the `--find-links` entry, so it's fine to discard any wheels from the
"wrong" registry without breaking any compatibility guarantees.
- There's no source distribution. Then the best wheel must come from the
`--find-links` registry. We might lose some platform coverage by
discarding the other wheels, but it shouldn't break any of the
"guarantees", since we have at least one wheel that fits in the version
range.
Closes https://github.com/astral-sh/uv/issues/6015.
Surprisingly, this is a lockfile schema change: we can't store relative
paths in URLs, so we have to store a `filename` entry instead of the
whole URL.
Fixes #4355
## Summary
We now persist the `ResolverInstallerOptions` when writing out a tool
receipt. When upgrading, we grab the saved options, and merge with the
command-line arguments and user-level filesystem settings (CLI > receipt
> filesystem).
Warn when there are missing bounds on transitive dependencies with
`--resolution lowest`.
Implemented as a lazy resolution graph check. Dev deps are odd because
they are missing the edge from the root that extras have (they are
currently orphans in the resolution graph), but this is more complex to
solve properly because we can't put dev dep information in a
`Requirement`, so I special-cased them here.
Closes #2797
Should help with #1718
---------
Co-authored-by: Ibraheem Ahmed <ibraheem@ibraheem.ca>
## Summary
This PR rewrites the `MarkerTree` type to use algebraic decision
diagrams (ADD). This has many benefits:
- The diagram is canonical for a given marker function. It is impossible
to create two functionally equivalent marker trees that don't refer to
the same underlying ADD. This also means that any trivially true or
unsatisfiable markers are represented by the same constants.
- The diagram can handle complex operations (conjunction/disjunction) in
polynomial time, as well as constant-time negation.
- The diagram can be converted to a simplified DNF form for user-facing
output.
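To give a flavor of why the diagram is canonical, here's a miniature
hash-consing sketch (boolean variables only; the real implementation
decides on marker expressions with version ranges):
```rust
use std::collections::HashMap;

type NodeId = usize;
const FALSE: NodeId = 0;
const TRUE: NodeId = 1;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Node {
    var: u32,     // the decision variable (e.g. a marker expression)
    low: NodeId,  // branch when the variable is false
    high: NodeId, // branch when the variable is true
}

#[derive(Default)]
struct Interner {
    nodes: Vec<Node>,
    dedup: HashMap<Node, NodeId>,
}

impl Interner {
    fn mk(&mut self, var: u32, low: NodeId, high: NodeId) -> NodeId {
        // Both branches identical: the decision is redundant, so the
        // node collapses (trivially true or unsatisfiable markers end up
        // as the shared TRUE/FALSE constants).
        if low == high {
            return low;
        }
        let node = Node { var, low, high };
        if let Some(&id) = self.dedup.get(&node) {
            return id; // structurally equal diagrams share one node
        }
        self.nodes.push(node);
        let id = self.nodes.len() + 1; // ids 0 and 1 are the constants
        self.dedup.insert(node, id);
        id
    }
}

fn main() {
    let mut interner = Interner::default();
    // Two independently built but equivalent markers intern to one id,
    // so functional equality is just id equality.
    let a = interner.mk(0, FALSE, TRUE);
    let b = interner.mk(0, FALSE, TRUE);
    assert_eq!(a, b);
    // `if var0 then TRUE else TRUE` collapses to the TRUE constant.
    assert_eq!(interner.mk(0, TRUE, TRUE), TRUE);
}
```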
The new representation gives us a lot more confidence in our marker
operations and simplification, which is proving to be very important
(see https://github.com/astral-sh/uv/pull/5733 and
https://github.com/astral-sh/uv/pull/5163).
Unfortunately, it is not easy to split this PR into multiple commits
because it is a large rewrite of the `marker` module. I'd suggest
reading through the `marker/algebra.rs`, `marker/simplify.rs`, and
`marker/tree.rs` files for the new implementation, as well as the
updated snapshots to verify how the new simplification rules work in
practice. However, a few other things were changed:
- [We now use release-only comparisons for `python_full_version`, where
we previously only did for
`python_version`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/algebra.rs#L522).
I'm unsure how marker operations should work in the presence of
pre-release versions if we decide that this is incorrect.
- [Meaningless marker expressions are now
ignored](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/parse.rs#L502).
This means that a marker such as `'x' == 'x'` will always evaluate to
`true` (as if the expression did not exist), whereas we previously
treated this as always `false`. Its negation, however, remains `false`.
- [Unsatisfiable markers are written as `python_version <
'0'`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/tree.rs#L1329).
- The `PubGrubSpecifier` type has been moved to the new `uv-pubgrub`
crate, shared by `pep508-rs` and `uv-resolver`. `pep508-rs` also depends
on the `pubgrub` crate for the `Range` type, we probably want to move
`pubgrub::Range` into a separate crate to break this, but I don't think
that should block this PR (cc @konstin).
There is still some remaining work here that I decided to leave for now
for the sake of unblocking some of the related work on the resolver.
- We still use `Option<MarkerTree>` throughout uv, which is unnecessary
now that `MarkerTree::TRUE` is canonical.
- The `MarkerTree` type is now interned globally and can potentially
implement `Copy`. However, it's unclear if we want to add more
information to marker trees that would make it `!Copy`. For example, we
may wish to attach extra and requires-python environment information to
avoid simplifying after construction.
- We don't currently combine `python_full_version` and `python_version`
markers.
- I also have not spent too much time investigating performance and
there is probably some low-hanging fruit. Many of the test cases I did
run actually saw large performance improvements due to the markers being
simplified internally, reducing the stress on the old `normalize`
routine, especially for the extremely large markers seen in
`transformers` and other projects.
Resolves https://github.com/astral-sh/uv/issues/5660,
https://github.com/astral-sh/uv/issues/5179.
## Summary
This PR adds a `DistExtension` field to some of our distribution types,
which requires that we validate that the file type is known and
supported when parsing (rather than when attempting to unzip). It
removes a bunch of extension parsing from the code too, in favor of
doing it once upfront.
Closes https://github.com/astral-sh/uv/issues/5858.
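A hedged sketch of the upfront validation (the enum shape is assumed,
not uv's exact definition):
```rust
#[derive(Debug, PartialEq)]
enum DistExtension {
    Wheel,
    SourceDist(&'static str), // .tar.gz, .zip, ...
}

/// Reject unknown file types when the filename is first parsed, rather
/// than surfacing the failure later as an unzip error.
fn from_filename(filename: &str) -> Result<DistExtension, String> {
    if filename.ends_with(".whl") {
        Ok(DistExtension::Wheel)
    } else if let Some(ext) = [".tar.gz", ".tar.bz2", ".tar.xz", ".zip"]
        .into_iter()
        .find(|ext| filename.ends_with(*ext))
    {
        Ok(DistExtension::SourceDist(ext))
    } else {
        Err(format!("unsupported distribution extension: `{filename}`"))
    }
}

fn main() {
    assert_eq!(
        from_filename("anyio-4.3.0-py3-none-any.whl"),
        Ok(DistExtension::Wheel)
    );
    assert!(from_filename("anyio-4.3.0.rar").is_err());
}
```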
## Summary
This _used_ to be true but we now require fetching metadata for all
distributions even with `--no-deps` since, e.g., we validate that any
declared extras exist.
Currently, the entry for a package+version+source table is called
`distribution`. That is incorrect: the `sdist` and `wheel` fields inside
that table are distributions; the table itself is for a package. We also
align ourselves more closely with PEP 751.
I went through `lock.rs` and renamed all occurrences of "distribution"
that actually referred to a "package".
This change invalidates all existing lockfiles.
Bikeshedding: Do we call it `package` or `packages`? See also
https://github.com/python/peps/pull/3877
`package` is nice because it looks like a header:
```toml
[[package]]
name = "anyio"
version = "4.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "idna" },
{ name = "sniffio" },
]
sdist = { url = "3970183622d484d08e3285104333d3/anyio-4.3.0.tar.gz", hash = "sha256:f75253795a87df48568485fd18cdd2a3fa5c4f7c5be8e5e36637733fce06fed6", size = 159642 }
wheels = [
{ url = "2f20c40b45242c0b33774da0e2e34f/anyio-4.3.0-py3-none-any.whl", hash = "sha256:048e05d0f6caeed70d731f3db756d35dcc1f35747c8c403364a8332c630441b8", size = 85584 },
]
```
`packages` is nice because the field is not a single entry, but a list.
2/3 for https://github.com/astral-sh/uv/issues/4893
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
There are three options that determine resolver behavior:
* resolution mode
* prerelease mode
* exclude newer
They are different from the other top-level options: if they mismatch,
we recreate the resolution. To distinguish them from the rest of the
lockfile, we group them under an `[options]` header.
1/3 for #4893
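As a sketch of the shape (field names inferred from the three options
named above, not checked against the actual schema):
```rust
/// The resolver-behavior options, deserialized from the `[options]`
/// table; any mismatch with the current settings invalidates the
/// existing resolution.
#[derive(serde::Deserialize, Debug, Default, PartialEq)]
#[serde(rename_all = "kebab-case", default)]
struct ResolverOptions {
    resolution_mode: Option<String>,
    prerelease_mode: Option<String>,
    exclude_newer: Option<String>,
}

fn requires_relock(locked: &ResolverOptions, current: &ResolverOptions) -> bool {
    locked != current
}
```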
## Summary
We were dropping the query and fragment in the wrong place, so the URLs
didn't match up after resolving from an existing lockfile.
Closes https://github.com/astral-sh/uv/issues/5851.
## Summary
Very subtle bug. The scenario is as follows:
- We resolve: `elmer-circuitbuilder = { git =
"https://github.com/ElmerCSC/elmer_circuitbuilder.git" }`
- The user then changes the request to: `elmer-circuitbuilder = { git =
"https://github.com/ElmerCSC/elmer_circuitbuilder.git", rev =
"44d2f4b19d6837ea990c16f494bdf7543d57483d" }`
- When we go to re-lock, we note two facts:
1. The "default branch" resolves to
`44d2f4b19d6837ea990c16f494bdf7543d57483d`.
2. The metadata for `44d2f4b19d6837ea990c16f494bdf7543d57483d` is
(whatever we grab from the lockfile).
- In the resolver, we then ask for the metadata for
`44d2f4b19d6837ea990c16f494bdf7543d57483d`. It's already in the cache,
so we return it; thus, we never add the
`44d2f4b19d6837ea990c16f494bdf7543d57483d` ->
`44d2f4b19d6837ea990c16f494bdf7543d57483d` mapping to the Git resolver,
because we never have to resolve it.
This would apply for any case in which a requested tag or branch was
replaced by its precise SHA. Replacing with a different commit is fine.
It only applied to `tool.uv.sources`, and not PEP 508 URLs, because the
underlying issue is that we aren't consistent about "automatically"
extracting the precise commit from a Git reference.
Closes https://github.com/astral-sh/uv/issues/5860.
## Summary
This fixes a bug introduced by
https://github.com/astral-sh/uv/pull/5232. It turns out that the
`universal_disjoint_base_or_local_requirement` test does not actually do
what it was meant to because of the incorrect python requirement. With a
valid python requirement, it fails on `main`. The problem is that we try
to exclude the original base version from the range of allowed versions
to try and prefer local versions. However, in the test, there is a
branch that depends on the non-local version, with no applicable local
in its fork. We should remove this exclusion as prioritization is
handled by the candidate resolver.
I don't think this will save any time in serialization, but it should
save us some deserialization, since we only need to parse URLs for the
packages we use...
## Summary
Okay, I tested this against...
- Our public "private" proxy
- Fury
- AWS CodeArtifact
- Azure Artifacts
It took a long time.
All of them work as expected with this approach: we omit the credentials
from the lockfile, then wire them back up when the index URL is provided
during subsequent operations.
Closes https://github.com/astral-sh/uv/issues/5119.
## Summary
`uv tree` will now filter to the current platform by default. You can
pass `--universal` to show the entire tree.
Closes https://github.com/astral-sh/uv/issues/5760.
## Summary
Right now, if you have a `requirements.txt` with a pre-release, but the
`requirements.in` does not have a pre-release marker for that dependency,
we drop the pre-release. (In the selector, we end up returning
`AllowPrerelease::IfNecessary`, the default.)
I played with a few ways of solving this... The first was to remove that
guard altogether. But if we do that,
`universal_transitive_disjoint_prerelease_requirement` fails (we use
`1.17.0rc1` in both forks, when it should only apply to one of the two).
The second was to do that, but also avoid pushing pre-releases as
preferences when we solve a fork. But then
`universal_disjoint_prereleases` fails, because we return a different
pre-release in each fork.
Finally, I settled on allowing existing pre-releases in forks if they
have no markers on them, i.e., they are "global" preferences. I believe
this is true IFF the preference came from an existing lockfile.
Closes https://github.com/astral-sh/uv/issues/5729.
## Summary
If _both_ nodes in the derivation tree are proxies, we need to remove
the _entire_ node. So, the function now returns an `Option<Tree>`
instead of taking `&mut Tree`.
Closes https://github.com/astral-sh/uv/issues/5618.
## Summary
Partially resolves #5561. Haven't added overrides support yet but I can
add it tomorrow if the current approach for constraints is ok.
## Test Plan
`cargo test`
Manually checked trace logs after changing the constraints.
## Summary
As-is, if you have a workspace with mixed `requires-python`
requirements, resolution will _never_ succeed, since we'll use the union
as the `requires-python` bound (i.e., take the lowest value) and fail
when we see a package that only supports some narrower range.
This PR modifies the behavior to take the intersection (i.e., the
highest value), so if you have one package that supports Python 3.12 and
later, and another that supports Python 3.8 and later, we lock for
Python 3.12. If you try to sync or run with Python 3.8, we raise an
error, since the lockfile will be incompatible with that request.
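The change itself is conceptually tiny; a toy sketch with lower bounds
as `(major, minor)` tuples:
```rust
/// The workspace's `requires-python` lower bound is now the
/// *intersection* of the members' ranges, i.e. the highest lower bound,
/// rather than the union (the lowest).
fn workspace_requires_python(member_lower_bounds: &[(u64, u64)]) -> Option<(u64, u64)> {
    member_lower_bounds.iter().copied().max()
}

fn main() {
    // One member supports >=3.12, another >=3.8: we lock for >=3.12.
    assert_eq!(workspace_requires_python(&[(3, 12), (3, 8)]), Some((3, 12)));
}
```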
Konsti has a write-up in https://github.com/astral-sh/uv/issues/5594
that outlines what could be a longer-term strategy.
Closes https://github.com/astral-sh/uv/issues/5578.