Updates the snapshot for the deployment from
https://github.com/astral-sh/pypi-proxy/pull/9 — for a while now, we've
only been failing on file requests, not registry requests, because the
proxy auth was set up wrong.
For users who were using absolute paths in `pyproject.toml`
previously, this is a behavior change: we now convert all absolute paths
in `path` entries to relative paths. Since I assume that no one relies
on absolute paths in their lockfiles (they are intended to be portable),
I'm tagging this as a bugfix.
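A hedged sketch of the effect (the package name, layout, and paths are hypothetical):
```shell
# pyproject.toml declares a path source with an absolute path, e.g.
#   [tool.uv.sources]
#   foo = { path = "/home/user/project/packages/foo" }
uv lock
# The corresponding entry in uv.lock should now reference the relative
# "packages/foo" rather than the absolute "/home/user/project/packages/foo".
grep "packages/foo" uv.lock
```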
Closes https://github.com/astral-sh/uv/pull/6438
Fixes https://github.com/astral-sh/uv/issues/6371
Previously, we excluded these and only looked at system interpreters.
However, it makes sense for this to match the typical Python discovery
experience. We could consider swapping the default... I'm not sure what
makes more sense. If we change the default (as written now) — this could
arguably be a breaking change.
If we don't do this, and a user invokes something like `uv run
--isolated uv pip install foo`, uv won't mutate the isolated environment;
it'll mutate whatever outer environment it finds.
## Summary
This is an attempt at fixing https://github.com/astral-sh/uv/issues/6486.
It reverts the changes made to `pyproject.toml` when sync fails during `uv
add`. This solution felt a little heavy-handed and could probably be
improved, but it mirrors what already happens when locking fails during
`uv add`, so I thought it would be a good start.
## Test Plan
I have added a test case for this to `tests/edit.rs`. It uses
`pytorch==1.0.2` to achieve the desired failure.
## Summary
Indicate the previous version from which uv was upgraded when running
`uv self update`. I thought it could be useful in some situations to
have a trace of the previously installed version.
## Test Plan
Did not find a way to test this, since it relies heavily on being able
to use the installation script and on the ability to publish artifacts
for a specific tag.
## Summary
Two small typo fixes: one in the documentation and one in a comment in
the source code I happened to come across.
## Summary
We accidentally changed the Windows cache directory from
`C:\Users\User\AppData\Local\uv\cache` to
`C:\Users\User\AppData\Local\uv` in v0.3.0. We're considering this a
bug, since it does _not_ match the documentation, and prior to v0.3.0,
we always used the former. This PR migrates back to the previous
location. It should be seamless for users, as we move cache items over
to the restored location on startup.
Closes https://github.com/astral-sh/uv/issues/6417.
While working on https://github.com/astral-sh/uv/pull/6389 I discovered
we never checked `cache.get_url` here, which is wrong — though I don't
think it had much effect in practice since the realm would typically
match first. The main problem is that when we call `get_url` later we
hard-code the username to `None` because we assume we checked up here
with the username if present.
## Summary
This reverts commit 7d92915f3d.
I thought this would be a net performance improvement, but we've now had
multiple reports that this made locking _extremely_ slow. I also tested
this today with a very large codebase against a registry that does not
support range requests, and the number of downloads was sort of wild to
watch. Reverting this reduced resolution time by over 50%.
Closes #6104.
## Summary
Mark the new tests requiring Python 3.12.1 specifically as requiring the
`python-patch` feature. This makes the test suite pass again on systems
that don't have this specific version (with the feature disabled).
## Test Plan
`cargo test` on Gentoo :-).
## Summary
It turns out we weren't applying the collapse logic here, so dev deps
with extras were repeated. This was generally ok... unless we ended up
_dropping_ an extra, in which case, you now have a duplicate.
Closes https://github.com/astral-sh/uv/issues/6380.
In https://github.com/astral-sh/uv/pull/6359, I accidentally made `uv
python install` prefer `.python-version` files over `.python-versions`
files -.-, kind of niche but it's a regression.
Closes https://github.com/astral-sh/uv/issues/6285
Introduces a new problem if the user says `python` but it doesn't exist
with that name:
```
❯ cargo run -q -- -v run python --version
DEBUG uv 0.3.0
DEBUG Found project root: `/Users/zb/workspace/uv`
DEBUG Project `uv` is marked as unmanaged
DEBUG No project found; searching for Python interpreter
DEBUG Reading requests from `/Users/zb/workspace/uv/.python-version`
DEBUG Searching for Python 3.11 in managed installations or system path
DEBUG Found `cpython-3.12.4-macos-aarch64-none` at `/Users/zb/workspace/uv/.venv/bin/python3` (virtual environment)
DEBUG Searching for managed installations at `/Users/zb/Library/Application Support/uv/python`
DEBUG Found `cpython-3.11.9-macos-aarch64-none` at `/opt/homebrew/bin/python3.11` (search path)
DEBUG Using Python 3.11.9 interpreter at: /opt/homebrew/opt/python@3.11/bin/python3.11
DEBUG Running `python --version`
error: Failed to spawn: `python`
Caused by: No such file or directory (os error 2)
```
I'll fix this separately.
## Summary
We're gonna work on a more comprehensive review of whether we should
preserve the username here, but for now, `git@` is effectively a
convention for GitHub and GitLab etc.
Closes https://github.com/astral-sh/uv/issues/6305.
## Test Plan
I guess we don't have infrastructure for testing SSH private keys right
now, but...
```
❯ cargo run init foo
❯ cd foo
❯ cargo run add git+ssh://git@github.com/astral-sh/mkdocs-material-insiders.git
```
For a path dep such as the root project, uv can read metadata statically
from `pyproject.toml` or dynamically from the build backend.
Python's `packaging`
[sorts](cc938f984b/src/packaging/specifiers.py (L777))
specifiers before emitting them, so all build backends built on top of
it - such as hatchling - will change the specifier order compared to
pyproject.toml. The core metadata spec does say "If a field is not
marked as Dynamic, then the value of the field in any wheel built from
the sdist MUST match the value in the sdist", but it doesn't specify if
"match" means string equivalent or semantically equivalent, so it's
arguable if that spec conformant. This change means that the specifiers
have a different ordering when coming from the build backend than when
read statically from pyproject.toml.
Previously, we tried to read path dep metadata in order:
* From the (built wheel) cache (`packaging` order)
* From pyproject.toml (verbatim specifier)
* From a fresh build (`packaging` order)
This behaviour is unstable: on the first run, the cache is cold, so we
read the verbatim specifier from `pyproject.toml`, then we build and
store the metadata in the cache. On the second run, we read the
`packaging` sorted specifier from the cache.
Reproducer:
```shell
rm -rf newproj
uv init -q --no-config newproj
cd newproj/
uv add -q "anyio>=4,<5"
cat uv.lock | grep "requires-dist"
uv sync -q
cat uv.lock | grep "requires-dist"
cd ..
```
```
requires-dist = [{ name = "anyio", specifier = ">=4,<5" }]
requires-dist = [{ name = "anyio", specifier = "<5,>=4" }]
```
A project either has static metadata, so we can read from
pyproject.toml, or it doesn't, and we always read from the build through
`packaging`. We can use this to stabilize the behavior by slightly
switching the order:
* From pyproject.toml (verbatim specifier)
* From the (built wheel) cache (`packaging` order)
* From a fresh build (`packaging` order)
Potentially, we still want to sort the specifiers we get anyway; after
all, there is no guarantee that the specifiers from a build backend are
deterministic. But our metadata-reading behavior should be independent
of the cache state, hence the change of order in this PR.
Fixes #6316
## Summary
Resolves https://github.com/astral-sh/uv/issues/5915. I'm not entirely
sure whether `manylinux_compatible` should be a separate field in the JSON
returned by the interpreter, or whether there's some way to use the
existing `platform` for it.
## Test Plan
Ran the commands below:
```
rm -rf .venv
target/debug/uv venv
# commenting out the line below triggers the change..
# target/debug/uv pip install no-manylinux
target/debug/uv pip install cryptography --no-cache
```
Is there an easy way to add this to the existing snapshot-based test
suite? I'm looking around to see if there's a way that doesn't involve
something implementation-dependent like mocks.
~Update: I think the output does differ between these two, so probably
we can use that.~ I was wrong: that "Building..." output seems to be
discarded.
## Summary
For non-virtual workspaces, these are covered by the _members_. But for
virtual workspaces, they aren't captured anywhere else in the lock. So,
we weren't invalidating `uv.lock` when the dev dependencies changed,
which led to a panic.
Closes https://github.com/astral-sh/uv/issues/6288
## Summary
This ensures that we don't stream output to the `--output-file`, since
other processes may rely on reading it.
Closes https://github.com/astral-sh/uv/issues/6239.
The ADD `MarkerTree` was including the non-deterministic, unstable
`NodeId` in its `Ord` implementation since switching to algebraic decision
diagrams. By replacing this with a correct `Ord` implementation, we fix
#6249.
Requires https://github.com/astral-sh/pubgrub/pull/31
This is a fallback mode that we supported when we decided to use PEP 517
builds by default. I can't find a single reference to it on GitHub or in
our issue tracker, so I want to drop support for it as part of v0.3.0.
Executable name requests were being treated as explicit requests to
install into system environments, but I don't think they should be, as
it's implicit which environment you'll end up in. Following #4308, we
allow multiple executables to be found, so we can filter here.
Concretely, this means `--system` is required to install into a system
environment discovered with e.g. `--python=python`. The flag is still
not required for cases where we're not mutating the environment.
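A hedged sketch of the resulting behavior (the package name is arbitrary, and the exact error surface may differ):
```shell
# If `python` resolves to a system interpreter, this should now require an explicit opt-in:
uv pip install --python=python foo
# Opting in to mutating the discovered system environment:
uv pip install --python=python --system foo
```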
These are global and non-specific to the `pip` API, so I think they
should be elevated.
- Ran `UV_CONCURRENT_DOWNLOADS=1 cargo run pip list`; verified that
`downloads` resolved to 1.
- Added `concurrent-downloads = 5` under `[tool.uv]` in
`pyproject.toml`; ran `cargo run pip list`; verified that `downloads`
resolved to 5.
- Ran `UV_CONCURRENT_DOWNLOADS=1 cargo run pip list`; verified that
`downloads` resolved to 1.
This PR moves us to the Linux strategy for our global directories on
macOS. We on the team feel, _and_ have received feedback (in issues
and polls), that the `Application Support` directories are intended more
for GUIs, and that CLI tools are correct to respect the XDG variables and
use the same directory paths on Linux and macOS.
Namely, we now use:
- `/Users/crmarsh/.local/share/uv/tools` (for tools)
- `/Users/crmarsh/.local/share/uv/python` (for Pythons)
- `/Users/crmarsh/.cache/uv` (for the cache)
The strategy is such that if the `/Users/crmarsh/Library/Application
Support/uv` already exists, we keep using it -- same goes for
`/Users/crmarsh/Library/Caches/uv`, so **it's entirely backwards
compatible**.
If you want to force a migration to the new schema, you can run:
- `uv cache clean`
- `uv tool uninstall --all`
- `uv python uninstall --all`
These will clean up the macOS-specific directories, paving the way for
the paths above. In other words, once you run those commands, subsequent
`uv` operations will automatically use the `~/.cache` and `~/.local`
variants.
Closes https://github.com/astral-sh/uv/issues/4411.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
- Removes "experimental" labels from command documentation
- Removes preview warnings
- Removes `PreviewMode` from most structs and methods — we could keep it
around but I figure we can propagate it again easily where needed in the
future
- Enables preview behavior by default everywhere, e.g., `uv venv` will
download Python versions
This PR migrates uv's use of `chrono` to `jiff`.
I did most of this work a while back as one of my tests to ensure Jiff
could actually be used in a real-world project. I decided to revive
this because I noticed that `reqwest-retry` dropped its Chrono
dependency, which I believe is the only other thing requiring Chrono in
uv. (Although we use a fork of `reqwest-middleware` at present, and that
hasn't been updated to the latest upstream yet. I wasn't quite sure of
the process we have for that.)
In the course of doing this, I actually made two changes to uv:
First is that the lock file now writes an RFC 3339 timestamp for
`exclude-newer`. Previously, we were using Chrono's `Display`
implementation for this which is a non-standard but "human readable"
format. I think the right thing to do here is an RFC 3339 timestamp.
Second is that, in addition to an RFC 3339 timestamp, `--exclude-newer`
used to accept a "UTC date." But this PR changes it to a "local date."
That is, a date in the user's system configured time zone. I think
this makes more sense than a UTC date, but one alternative is to drop
support for a date and just rely on an RFC 3339 timestamp. The main
motivation here is that automatically assuming UTC is often somewhat
confusing, since just writing an unqualified date like `2024-08-19` is
often assumed to be interpreted relative to the writer's "local" time.
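For illustration (the input file name is arbitrary), the two forms `--exclude-newer` accepts after this change:
```shell
# A full RFC 3339 timestamp is unambiguous:
uv pip compile requirements.in --exclude-newer 2024-08-19T00:00:00Z
# A bare date is now interpreted in the system's local time zone rather than UTC:
uv pip compile requirements.in --exclude-newer 2024-08-19
```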
e.g.
```
❯ uv pip install httpx==0.26.0 --reinstall-package httpx
Resolved 7 packages in 67ms
Prepared 1 package in 0.95ms
Uninstalled 1 package in 2ms
Installed 1 package in 12ms
~ httpx==0.26.0
```
```
❯ uv add httpx==0.26.0
warning: `uv add` is experimental and may change without warning
warning: `uv.sources` is experimental and may change without warning
Resolved 15 packages in 23ms
Built example @ file:///Users/zb/workspace/example
Prepared 2 packages in 187ms
Uninstalled 2 packages in 2ms
Installed 2 packages in 2ms
~ example==0.1.0 (from file:///Users/zb/workspace/example)
- httpx==0.27.0
+ httpx==0.26.0
```
Motivated by trying to reduce the diff for project updates in `uv add`.
I think it makes sense in general though. We'll also want a special
output for upgrades in the future, e.g. reducing the `httpx` changes in
the second example to a single line indicating the original and
subsequent distribution versions.
I'd like to have
> Reinstalled 1 package in 2ms
but it seems non-trivial to show the timing for it? I think we'd need to
determine what we want to call a "Reinstall" during the operations and
split doing them from the rest of the uninstalls and installs.
## Summary
The strategy here is: if the user provides supported environments, we
use those as the initial forks when resolving. As a result, we never add
or explore branches that are disjoint with the supported environments.
(If the supported environments change, we ignore the lockfile entirely,
so we don't have to worry about any interactions between supported
environments and the preference forks.)
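A minimal sketch of what declaring supported environments might look like (the marker values are illustrative):
```shell
cat >> pyproject.toml <<'EOF'
[tool.uv]
environments = [
    "sys_platform == 'darwin'",
    "sys_platform == 'linux'",
]
EOF
# Branches disjoint with these environments are never added or explored.
uv lock
```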
Closes https://github.com/astral-sh/uv/issues/6184.
## Summary
I probably won't land https://github.com/astral-sh/uv/pull/6210 for the
release, but I do want to make one breaking change to prep for it:
renaming `environment-markers` to `resolution-markers` (or
`solution-markers`?) so that it's delineated from the user-defined
markers in that PR.
Indented blocks in Markdown are treated as code blocks, and rustdoc
treats all unadorned code blocks as Rust doctests. Since this wasn't
intended as a doctest and isn't valid Rust, it makes `cargo test --doc`
fail. We fix this by using an explicit code block labeled as `text`.
In https://github.com/astral-sh/uv/issues/5046, we show the tautological
proof:
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20 and only the following versions of jax are available:
jax<=0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
we can conclude that colabfold[alphafold]==1.5.5 depends on jax>=0.4.20.
And because jax>=0.4.20 depends on numpy>=1.26.0, we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0.
(1)
```
This is a part of the error tree because the statement
`colabfold[alphafold]==1.5.5 depends on jax>=0.4.20` is actually a
simplification of `colabfold[alphafold]==1.5.5 depends on
jax>=0.4.20,<0.5.0` and the no versions clause is a proof of that
simplification.
Without simplification, the clause looks like:
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20,<0.5.0 and only the following versions of jax are available:
jax<=0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
we can conclude that colabfold[alphafold]==1.5.5 depends on one of:
jax==0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
And because jax>=0.4.20 depends on numpy>=1.26.0, we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0.
```
I don't think we have a great way to conditionally avoid simplifying
the range, and it makes the error simpler to just jump straight to
`colabfold[alphafold]==1.5.5 depends on jax>=0.4.20`.
The derivation for this clause looks like:
```
jax==0.4.20 | ==0.4.21 | ==0.4.22 | ==0.4.23 | ==0.4.24 | ==0.4.25 | ==0.4.26 | ==0.4.27 | ==0.4.28 | ==0.4.29 | ==0.4.30 | ==0.4.31 depends on numpy>=1.26.0
no versions of jax>0.4.20, <0.4.21 | >0.4.21, <0.4.22 | >0.4.22, <0.4.23 | >0.4.23, <0.4.24 | >0.4.24, <0.4.25 | >0.4.25, <0.4.26 | >0.4.26, <0.4.27 | >0.4.27, <0.4.28 | >0.4.28, <0.4.29 | >0.4.29, <0.4.30 | >0.4.30, <0.4.31 | >0.4.31, <0.5.0
colabfold[alphafold]==1.5.5 depends on jax>=0.4.20, <0.5.0
```
So it looks like we can take trees of this form and drop the "no
versions" clause _if_ the ranges are compatible[*]. See [this
comment](https://github.com/astral-sh/uv/pull/6160#discussion_r1720280922)
for a simpler explanation.
With this pull request, the clause simplifies to
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20 and jax>=0.4.20 depends on numpy>=1.26.0,
we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0. (1)
```
Unfortunately, this doesn't change any snapshots in our test suite so
I'm uncertain if the strategy generalizes. In some incorrect iterations
of this logic, the snapshots did reveal my mistakes.
[*] "if the ranges are compatible" includes a bit of hand-waving. I'm
not 100% sure if I've chosen the correct range heuristic here.
## Summary
Passing `--upgrade` to `tool run` is confusing, because it doesn't
upgrade the installed tool. It just causes us to use an isolated tool
environment, which seems wrong.
Closes #3683
Note our semantics do not exactly match the specification so we can
perform algebra on the markers. See the caveats in the documentation
(and in the discussion below).
The test output seems to depend on using Python 3.12.1 specifically.
While I'm not sure how it happens, it seems like these can get out of
sync between CI and local testing. In this case, I had a problem where
the marker expressions emitted locally were tied to Python 3.12.4, but
the tests in CI were tied to Python 3.12.1. Changing the test to require
3.12.1 specifically fixes this.
This initial set is meant to be a basic starting point where we
can test the interaction between 'uv' commands more systematically.
And specifically, with a focus on how the lock file changes.
This adds some variations on 'uv add' and 'uv remove' specifically
for testing changes to the lock file (and not anything else).
We also rejigger 'run_and_format' so that we can use it in other
contexts, particularly for error reporting.
And we add a 'diff_lock' helper for returning the changes made to
a lock file after running a command.
This was only being used in the ecosystem tests. Since we no longer
re-resolve when `uv lock` is run and the lockfile already satisfies the
`pyproject.toml`, deterministic checking is avoided by construction; it
was removed everywhere else, so we remove it here as well.
Closes https://github.com/astral-sh/uv/issues/6167
We've been seeing intermittent failures in CI, which we thought were
unexpected HTTP 401s but it actually looks like a panic when handling an
expected HTTP error. I believe the problem is that an early client error
can cause the channel to close and we crash when we unwrap the `send`.
## Summary
Fixes#6177
This ensures a `pyproject.toml` file without a `[project]` table is not
a fatal error for `uv venv`, which is just trying to discover/respect
the project's `python-requires` (#5592).
Similarly, any caught `WorkspaceError` is now also non-fatal and instead
prints a warning message (feedback welcome here; this felt less
surprising than, e.g., a malformed `pyproject.toml` breaking `uv venv`).
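A minimal repro sketch (the table name and contents are arbitrary):
```shell
# A pyproject.toml without a [project] table should no longer be fatal for
# `uv venv`; at most a warning about the discovery error is printed.
printf '[tool.something]\nkey = 1\n' > pyproject.toml
uv venv
```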
## Test Plan
I added two test cases: `cargo test -p uv --test venv`
Also, existing venv tests were failing for me since I use fish and the
printed activation script was `source .venv/bin/activate.fish` (to
repro, just run the tests with `SHELL=fish`). So I added an insta filter
to normalize that.
## Summary
PR #4533 introduced (almost) spec compliant parsing of `.egg-info`
filenames, but added the overly strict requirement that the distribution
version must be present. This causes various `uv pip` operations to fail
in environments where there are `.egg-info` files without a version
component, so this PR loosens the check by making the version component optional
and reading the version from the egg metadata when it is not present.
As an example of the issue, running `uv pip list` on my system currently
results in
```
error: Failed to read metadata from: `/usr/lib/python3.12/site-packages/PySide6.egg-info`
Caused by: The `.egg-info` filename "PySide6.egg-info" is missing a version
```
whereas regular `pip list` succeeds:
```
$ pip list | rg -S pyside
PySide6 6.7.2
```
## Test Plan
This has been tested by altering the `.egg-info` filename tests as
needed and ensuring the full test suite passes locally.
Resolves #6151
## Test Plan
Execution result of `cargo run -- help`
```bash
An extremely fast Python package manager.
Usage: uv [OPTIONS] <COMMAND>
Commands:
run Run a command or script (experimental)
init Create a new project (experimental)
add Add dependencies to the project (experimental)
remove Remove dependencies from the project (experimental)
sync Update the project's environment (experimental)
lock Update the project's lockfile (experimental)
tree Display the project's dependency tree (experimental)
tool Run and install commands provided by Python packages (experimental)
python Manage Python versions and installations (experimental)
pip Manage Python packages with a pip-compatible interface
venv Create a virtual environment
cache Manage uv's cache
version Display uv's version
generate-shell-completion Generate shell completion
help Display documentation for a command
...
```
Execution result of `cargo run -- -h` and `cargo run -- --help`
```bash
An extremely fast Python package manager.
Usage: uv [OPTIONS] <COMMAND>
Commands:
run Run a command or script (experimental)
init Create a new project (experimental)
add Add dependencies to the project (experimental)
remove Remove dependencies from the project (experimental)
sync Update the project's environment (experimental)
lock Update the project's lockfile (experimental)
tree Display the project's dependency tree (experimental)
tool Run and install commands provided by Python packages (experimental)
python Manage Python versions and installations (experimental)
pip Manage Python packages with a pip-compatible interface
venv Create a virtual environment
cache Manage uv's cache
version Display uv's version
help Display documentation for a command
...
```
## Summary
In the resolver, we use release-only semantics to normalize
`python_full_version`. So, if we see `python_full_version < '3.13'`, we
treat that as `(Unbounded, Exclude(3.13))`. `3.13b0` evaluates as `true`
to that range, so we were accepting pre-releases for these markers.
Instead, we need to exclude pre-release segments when performing these
evaluations.
Closes https://github.com/astral-sh/uv/issues/6169.
## Test Plan
Hard to write a test for this because you need a pre-release Python
locally... so:
`echo "sqlalchemy==2.0.32" | cargo run pip compile - --python 3.13 -n`
Resolves #6152
## Summary
## Test Plan
Execution result of `cargo run generate-shell-completion --help`
```bash
Generate shell completion
Usage: uv generate-shell-completion <SHELL>
Arguments:
<SHELL> The shell to generate the completion script for [possible values: bash, elvish, fish, nushell, powershell, zsh]
```
Execution result of `cargo run help generate-shell-completion`
```bash
Generate shell completion
Usage: uv generate-shell-completion <SHELL>
Arguments:
<SHELL>
The shell to generate the completion script for
[possible values: bash, elvish, fish, nushell, powershell, zsh]
```
## Summary
Resolves https://github.com/astral-sh/uv/issues/4537
- First commit avoids overwriting dependencies with different markers.
- Second commit supports adding from requirements files (see the sketch below).
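A hedged example of the second change, assuming the `-r`/`--requirements` flag and an arbitrary file name:
```shell
# Add everything from an existing requirements file to the project.
uv add -r requirements.txt
```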
## Test Plan
`cargo test`
Now that these incompatibilities are collected into a single range
(https://github.com/astral-sh/uv/pull/6154), we can simplify the range
using the known available versions to reduce verbosity.
There were different `PubGrubPackage` types so they never matched the
available versions set! Luckily, the available versions are agnostic to
the markers and optional dependencies so we can just broaden to using
`PackageName` as a lookup key.
Addresses yet another complaint in
https://github.com/astral-sh/uv/issues/5046
I need this for debugging error messages.
I used an environment variable instead of a trace log so you can do
`UV_INTERNAL__SHOW_DERIVATION_TREE=1` and run a test to see the tree in
the test snapshot without further changes.
e.g.
```rust
// Resolving should fail.
uv_snapshot!(context.filters(), context.lock().arg("--preview").current_dir(&workspace), @r###"
success: false
exit_code: 1
----- stdout -----
UV_INTERNAL__SHOW_DERIVATION_TREE
root==0a0.dev0 depends on foo*
root==0a0.dev0 depends on bar[some-extra]*
foo==0.1.0 depends on anyio==4.1.0
bar[some-extra]==0.1.0 depends on anyio==4.2.0
no versions of bar[some-extra]<0.1.0 | >0.1.0
----- stderr -----
Using Python 3.12.[X] interpreter at: [PYTHON-3.12]
× No solution found when resolving dependencies:
╰─▶ Because only bar[some-extra]==0.1.0 is available and bar[some-extra] depends on anyio==4.2.0, we can conclude that all versions of bar[some-extra] depend on anyio==4.2.0.
And because foo depends on anyio==4.1.0, we can conclude that foo and all versions of bar[some-extra] are incompatible.
And because your workspace requires bar[some-extra] and foo, we can conclude that your workspace's requirements are unsatisfiable.
"###
);
```
We have bad error messages for optional (extra) dependencies and
development dependencies in workspaces:
1. We weren't showing the full package, so we'd drop `:dev` and
`[extra]` by accident
2. We didn't include derived packages, e.g., `member[extra]`, in the
tree-processing collapse operation, so we'd include extra clauses like the
ones we removed in #6092
Also
- Reverts
f0de4f71f2
— it turns out it wasn't quite correct and it didn't seem worth using
the custom incompatibility anymore.
- Fixes a bug in the display of `package:dev` which was not showing
`:dev` for some variants (see 94d8020b58)
## Summary
Normalize all `python_version` markers to their equivalent
`python_full_version` form. This avoids false positives in forking
because we currently cannot detect any relationships between the two
forms. It also avoids subtle bugs due to the truncating semantics of
`python_version`. For example, given `requires-python = ">3.12"`, we
currently simplify the marker `python_version <= 3.12` to `false`.
However, the version `3.12.1` will be truncated to `3.12` for
`python_version` comparisons, and thus it satisfies the python
requirement and evaluates to `true`.
It is possible to simplify back to `python_version` when writing markers
to the lockfile. However, the equivalent `python_full_version` markers
are often clearer and easier to simplify, so I lean towards leaving them
as `python_full_version`.
There are *a lot* of snapshot updates from this change. I'd like more
eyes on the transformation logic in `python_version_to_full_version` to
ensure that they are all correct.
Resolves https://github.com/astral-sh/uv/issues/6125.
While it's slightly more convenient to log this where we were, it was
pretty unhelpful e.g.
```
DEBUG Interpreter meets the requested Python: `Python >=3.9`
```
What interpreter are we referring to here?
Includes the changes from https://github.com/astral-sh/uv/pull/6071 but
takes them way further.
When we have the set of available versions for a package, we can do a
much better job displaying an error.
For example:
```
❯ uv add 'httpx>999,<9999'
× No solution found when resolving dependencies:
╰─▶ Because only the following versions of httpx are available:
httpx<=999
httpx>=9999
and example==0.1.0 depends on httpx>999,<9999, we can conclude that example==0.1.0 cannot be used.
And because only example==0.1.0 is available and you require example, we can conclude that the requirements are unsatisfiable.
```
The resolver has demonstrated that the requested range cannot be used
because there are only versions in ranges _outside_ the requested range.
However, the display of the range of available versions is pretty bad!
We say there are versions of httpx available in ranges that definitely
have no versions available.
With this pull request, the error becomes:
```
❯ uv add 'httpx>999,<9999'
× No solution found when resolving dependencies:
╰─▶ Because only httpx<=1.0.0b0 is available and example depends on httpx>999,<9999, we can conclude that example's
requirements are unsatisfiable.
And because your workspace requires example, we can conclude that your workspace's requirements are unsatisfiable.
```
We achieve this by:
1. Dropping ranges disjoint with the range of available versions, e.g.,
this removes `httpx>=9999`
2. Replacing ranges that capture the _entire_ range of available
versions with the smaller range, e.g., this replaces `httpx<=999` with
`<=1.0.0b0`.
~Note that when we perform (2), we may include an additional bound that
is not relevant, e.g., we include the lower bound of `>=0.6.7`. This is
a bit extraneous, but I don't think it's confusing. We can consider some
advanced logic to avoid that later.~ (edit: I did this, it wasn't hard)
We also improve error messages when there is _only_ one version
available by showing that version instead of a range.
## Summary
Gives the caller control over how messages are reported back to the
user. Also merges the index-location validation into the lock, since
we're already iterating over the packages.
## Summary
This is no longer required since we no longer implement `Eq` on `Lock`.
It will also sometimes be "wrong" as of #6076, since we now apply
different `requires-python` filtering to different parts of the tree
during resolution.
## Summary
Using https://github.com/astral-sh/uv/issues/6064 as a motivating
example: at present, on main, we're not properly propagating the
`Requires-Python` simplifications. In that case, for example, we end up
solving for a branch with `python_version < 3.11`, and a branch `>=
3.11`, even though `Requires-Python` is `>=3.11`. Later, when we get to
the graph, we apply version simplification based on `Requires-Python`,
which causes us to _remove_ the `python_version < 3.11` markers
entirely, leaving us with duplicate dependencies for `pylint`.
This PR instead tries to ensure that we always apply this narrowing to
requirements and forks, so that we don't need to apply the same
simplification when constructing the graph at all.
Closes https://github.com/astral-sh/uv/issues/6064.
Closes #6059.
In particular, I added this as a hack to avoid a kind of
instability that was caused by our marker code not correctly
detecting markers that were always false. But that has since
been fixed.
Removing this code doesn't change any tests. Arguably it
should be possible to come up with a test that failed with
this hack inserted but succeeded without it. In particular,
with this hack, new forks were being prevented from being
added even when they ought to be added, e.g., when preferences
get updated.
## Summary
This PR changes the definition of `--locked` from:
> Produces the same `Lock`
To:
> Passes `Lock::satisfies`
This is a subtle but important difference. Previously, if
`Lock::satisfies` failed, we would run a resolution, then do
`existing_lock == lock`. If the two weren't equal, and `--locked` was
specified, we'd throw an error.
The equality check is hard to get right. For example, it means that we
can't ship #6076 without changing our marker representation, since the
deserialized lockfile "loses" some of the internal marker state that
gets accumulated during resolution.
The downside of this change is that there could be scenarios in which
`uv lock --locked` fails even though the lockfile would actually work
and the exact TOML would be unchanged. But... I think it's ok if
`--locked` fails after the user modifies something?
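In practice, a hedged sketch of the new semantics:
```shell
# Exits non-zero if uv.lock no longer satisfies the project's requirements,
# without requiring that a fresh resolution reproduce the exact same TOML.
uv lock --locked || echo "uv.lock is out of date"
```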
Extends https://github.com/astral-sh/uv/pull/6092 to improve resolver
error messages for workspaces that have a single member.
As before, this requires a two-step approach of
1. Traversing the derivation tree and collapsing some members. In this
case, we drop the empty root node in favor of the project.
2. Using special-case formatting for packages. In this case, the
workspace package is referred to with "your project" instead of its
name.
An extension of #6090 that replaces #6066.
In brief,
1. Workspace member names are passed to the resolver for no solution
errors
2. There is a new derivation tree pre-processing step that trims
`NoVersion` incompatibilities for workspace members from the derivation
tree. This avoids showing redundant clauses like `Because only
bird==0.1.0 is available and bird==0.1.0 depends on anyio==4.3.0, we can
conclude that all versions of bird depend on anyio==4.3.0.`. As a minor
note, we use a custom incompatibility kind to mark these
incompatibilities at resolution-time instead of afterwards.
3. Root dependencies on workspace members say `your workspace requires
bird` rather than `you require bird`
4. Workspace member package display omits the version, e.g., `bird`
instead of `bird==0.1.0`
5. Instead of reporting a workspace member as unusable we note that its
requirements cannot be solved, e.g., `bird's requirements are
unsatisfiable` instead of `bird cannot be used`.
6. Instead of saying `your requirements are unsatisfiable` we say `your
workspace's requirements are unsatisfiable` when in a workspace, since
we're not in a "provide direct requirements" paradigm.
As an annoying but minor implementation detail, `PackageRange` now
requires access to the `PubGrubReportFormatter` so it can determine if
it is formatting a workspace member or not. We could probably improve
the abstractions in the future.
As a follow-up, we should add additional special casing for "single project"
workspaces to avoid mention of the workspace concept in simple projects.
However, it looks like this will require additional tree manipulations
so I'm going to keep it separate.
## Summary
Historically, in order to "resolve from a lockfile", we've taken the
lockfile, used it to pre-populate the in-memory metadata index, then run
a resolution. If the resolution didn't match our existing resolution, we
re-resolved from scratch.
This was an appealing approach because (in theory) it didn't require any
dedicated logic beyond pre-populating the index. However, it's proven to
be _really_ hard to get right, because it's a stricter requirement than
we need. We just need the current lockfile to _satisfy_ the requirements
provided by the user. We don't actually need a second resolution to
produce the exact same result. And it's not uncommon that this second
resolution differs, because we seed it with preferences, which
fundamentally changes its course. We've worked hard to minimize those
"instabilities", but they're still present.
The approach here is intended to be much simpler. Instead of resolving
from the lockfile, we just check if the current resolution satisfies the
state of the workspace. Specifically, we check if the lockfile (1)
contains all the relevant members, and (2) matches the metadata for all
dependencies, recursively. (We skip registry dependencies, assuming that
they're immutable.)
This may actually be too conservative, since we can have resolutions
that satisfy the requirements, even if the requirements have changed
slightly. But we want to bias towards correctness for now.
My hope is that this scheme will be more performant, simpler, and more
robust.
Closes https://github.com/astral-sh/uv/issues/6063.
## Summary
This was added in https://github.com/astral-sh/uv/pull/5405 but is now
the cause of an instability in `github_wikidata_bot`. Specifically, on
the initial run, we fork in `pydantic==2.8.2`, via:
```
Requires-Dist: typing-extensions>=4.12.2; python_version >= '3.13'
Requires-Dist: typing-extensions>=4.6.1; python_version < '3.13'
```
In the end, we resolve a single version of `typing-extensions`
(`4.12.2`)... But we don't recognize the two resolutions as the "same
graph", because we propagate the fork markers, and so the "edges" have
different markers on them...
In the second run through, when we have the forks in advance, we don't
split on Pydantic... We just try to solve from the root with the current
forks. This is fundamentally different and I fear it will be the cause
of many instabilities. But removing this graph check fixes the proximate
issue.
I don't really understand why this was added since there was no test
coverage in the PR.
## Summary
When constructing the `Resolution`, we only propagated the fork markers
to the package node, but not the extras node. This led to cases in which
an extra could be included unconditionally or otherwise diverge from the
base package version.
Closes https://github.com/astral-sh/uv/issues/6062.
## Summary
Right now, we store the environment markers in a `BTreeSet` -- so
they're sorted, but the sort doesn't really tell us anything. I think we
should instead store them in the order in which we solved. I thought
this might fix an instability (it didn't), but I think it's still good
to ensure we solve in the same order.
I also changed from `Option<Vec>` to just `Vec`, since there was no
distinction between `None` and empty.
## Summary
We retain them if you use `--raw-sources`, but otherwise they're
removed. We still respect them in the subsequent `uv.lock` via an
in-process store.
Closes #6056.
## Summary
Now, if you resolve against a registry, then swap it out for another, we
won't reuse the lockfile. (If you don't provide any registry
configuration, then we won't enforce this, so that `uv lock --index-url
foo` and `uv lock` is stable.)
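A hedged illustration (the URLs are arbitrary):
```shell
uv lock --index-url https://example.com/simple    # lockfile resolved against this index
uv lock --index-url https://other.example/simple  # index changed: the lockfile is not reused
uv lock                                           # no registry configured: no enforcement
```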
Closes https://github.com/astral-sh/uv/issues/5920.
This example came up in discussion and it was initially unclear whether
we should try to support it. Specifically, by automatically assuming
that the `datasets < 2.19` dependency had a marker corresponding to the
negation of the conjunction of the other sibling markers for that same
package. But this was deemed, I think, a little too magical.
This in turn implies that whenever there are sibling dependencies with
overlapping marker expressions, their version constraints also need to
be overlapping. Otherwise, for any marker environment that matches both
marker expressions, it would be impossible to select a single version.
The test in this case has this comment:
```
/// If a dependency requests a prerelease version with an overlapping marker expression,
/// we should prefer the prerelease version in both forks.
```
With this setup:
```
let pyproject_toml = context.temp_dir.child("pyproject.toml");
pyproject_toml.write_str(indoc! {r#"
[project]
name = "example"
version = "0.0.0"
dependencies = [
"cffi >= 1.17.0rc1 ; os_name == 'Linux'"
]
requires-python = ">=3.11"
"#})?;
let requirements_in = context.temp_dir.child("requirements.in");
requirements_in.write_str(indoc! {"
cffi
.
"})?;
```
The change in this commit _seems_ more correct than what we had,
although it does seem to contradict the comment. Namely, in the `os_name
!= "Linux"` fork, we don't prefer the pre-release version since the
`cffi >= 1.17.0rc1` bound doesn't apply.
It's not quite clear what to do in this instance.
I believe these are all changes that aren't necessarily
expected, but also seem harmless. Like the order in which
fork markers are written to the lock file. (Although one
wonders if we should fix that once and for all by defining
a complete sort function for forks.)
At a high level, this PR adds a smattering of new tests that
effectively snapshot the output of `uv lock` for a selection of
"ecosystem" projects. That is, real Python projects for which we expect
`uv` to work well with.
The main idea with these tests is to get a better idea of how changes
in `uv` impact the lock files of real world projects. For example,
we're hoping that these tests will help give us data for how #5733
differs from #5887.
This has already revealed some bugs. Namely, re-running `uv lock` for a
second time will produce a different lock file for some projects. So to
prioritize getting the tests added, for those projects, we don't do the
deterministic checking.
## Summary
Our current handling of `--find-links` merges the entries in each index.
As a result, we can end up with `AnnotatedDist` entries that reference
distributions across indexes.
I'd like to change `--find-links` such that each `--find-links` entry is
just treated as its own index (so, e.g., if `requests` exists in the
first `--find-links` entry, we don't even check the registry by
default), which would _also_ fix this problem automatically. But that's
a behavior change... So for now, in the lockfile, we filter
distributions that don't match the source index URL.
There are two cases to consider:
- There's a source distribution. Then, for the ID to reference the
`--find-links` registry, the source distribution _must_ have come from
the `--find-links` entry, so it's fine to discard any wheels from the
"wrong" registry without breaking any compatibility guarantees.
- There's no source distribution. Then the best wheel must come from the
`--find-links` registry. We might lose some platform coverage by
discarding the other wheels, but it shouldn't break any of the
"guarantees", since we have at least one wheel that fits in the version
range.
Closes https://github.com/astral-sh/uv/issues/6015.
## Summary
Added the actual error message to the warning when uv fails to parse
`pyproject.toml`.
Resolves https://github.com/astral-sh/uv/issues/5934
## Test Plan
Took the case from the issue:
- have `pyproject.toml` which contains
```
[tool.uv]
foobar = false
```
- Run:
```
$ uv venv --preview -v
```
- Expect the message that contains the actual problem in the
`pyproject.toml` like:
```
warning: Failed to parse `pyproject.toml` during settings discovery: unknown field `foobar`; skipping...
```
## Summary
A lot of the existing tests were no-ops. For convenience, we now use the
trick of: install from Test PyPI (to get an outdated "latest"), then
upgrade from PyPI.
Right now, the URL gets out-of-sync with the install path, since the
install path is canonicalized. This leads to a subtle error on Windows
(in CI) in which we don't preserve caching across resolution and
installation.
Surprisingly, this is a lockfile schema change: we can't store relative
paths in URLs, so we have to store a `filename` entry instead of the
whole url.
Fixes #4355
## Summary
We now persist the `ResolverInstallerOptions` when writing out a tool
receipt. When upgrading, we grab the saved options, and merge with the
command-line arguments and user-level filesystem settings (CLI > receipt
> filesystem).
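A hedged sketch of the effect (the tool and options are illustrative):
```shell
# Options supplied at install time are recorded in the tool receipt...
uv tool install black --resolution lowest-direct
# ...and reused on upgrade unless overridden on the command line
# (CLI > receipt > filesystem settings).
uv tool upgrade black
```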
The loose consensus is that "fetch" doesn't have much meaning and that a
boolean flag makes more sense from the command line.
1. Adds `--allow-python-downloads` (hidden, default) and
`--no-python-downloads` to the CLI to quickly enable or disable
downloads
2. Deprecates `--python-fetch` in favor of the options from (1)
3. Removes `python-fetch` in favor of a `python-downloads` setting
4. Adds a `never` variant to the enum, allowing even explicit installs
to be disabled via the configuration file
## Test Plan
I tested this with various `pyproject.toml`-level settings and `uv venv
--preview --python 3.12.2` and `uv python install 3.12.2` with and
without the new CLI flags.
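For reference, a hedged sketch of the new knobs:
```shell
# Disable automatic downloads for a single invocation:
uv venv --no-python-downloads
# Or persistently, via the settings file (the `never` variant also disables
# explicit installs):
#   [tool.uv]
#   python-downloads = "never"
```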
Warn when there are missing bounds on transitive dependencies with
`--resolution lowest`.
Implemented as a lazy resolution graph check. Dev deps are odd because
they are missing the edge from the root that extras have (they are
currently orphans in the resolution graph), but this is more complex to
solve properly because we can put dev dep information in a `Requirement`
so I special-cased them here.
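A hedged example of when the warning should fire (the input file is arbitrary):
```shell
# With the lowest strategy, a transitive dependency without a lower bound can
# resolve to an arbitrarily old version, so uv now warns about the missing bound.
uv pip compile requirements.in --resolution lowest
```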
Closes #2797
Should help with #1718
---------
Co-authored-by: Ibraheem Ahmed <ibraheem@ibraheem.ca>
## Summary
This PR rewrites the `MarkerTree` type to use algebraic decision
diagrams (ADD). This has many benefits:
- The diagram is canonical for a given marker function. It is impossible
to create two functionally equivalent marker trees that don't refer to
the same underlying ADD. This also means that any trivially true or
unsatisfiable markers are represented by the same constants.
- The diagram can handle complex operations (conjunction/disjunction) in
polynomial time, as well as constant-time negation.
- The diagram can be converted to a simplified DNF form for user-facing
output.
The new representation gives us a lot more confidence in our marker
operations and simplification, which is proving to be very important
(see https://github.com/astral-sh/uv/pull/5733 and
https://github.com/astral-sh/uv/pull/5163).
Unfortunately, it is not easy to split this PR into multiple commits
because it is a large rewrite of the `marker` module. I'd suggest
reading through the `marker/algebra.rs`, `marker/simplify.rs`, and
`marker/tree.rs` files for the new implementation, as well as the
updated snapshots to verify how the new simplification rules work in
practice. However, a few other things were changed:
- [We now use release-only comparisons for `python_full_version`, where
we previously only did for
`python_version`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/algebra.rs#L522).
I'm unsure how marker operations should work in the presence of
pre-release versions if we decide that this is incorrect.
- [Meaningless marker expressions are now
ignored](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/parse.rs#L502).
This means that a marker such as `'x' == 'x'` will always evaluate to
`true` (as if the expression did not exist), whereas we previously
treated this as always `false`. Its negation, however, remains `false`.
- [Unsatisfiable markers are written as `python_version <
'0'`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/tree.rs#L1329).
- The `PubGrubSpecifier` type has been moved to the new `uv-pubgrub`
crate, shared by `pep508-rs` and `uv-resolver`. `pep508-rs` also depends
on the `pubgrub` crate for the `Range` type, we probably want to move
`pubgrub::Range` into a separate crate to break this, but I don't think
that should block this PR (cc @konstin).
There is still some remaining work here that I decided to leave for now
for the sake of unblocking some of the related work on the resolver.
- We still use `Option<MarkerTree>` throughout uv, which is unnecessary
now that `MarkerTree::TRUE` is canonical.
- The `MarkerTree` type is now interned globally and can potentially
implement `Copy`. However, it's unclear if we want to add more
information to marker trees that would make it `!Copy`. For example, we
may wish to attach extra and requires-python environment information to
avoid simplifying after construction.
- We don't currently combine `python_full_version` and `python_version`
markers.
- I also have not spent too much time investigating performance and
there is probably some low-hanging fruit. Many of the test cases I did
run actually saw large performance improvements due to the markers being
simplified internally, reducing the stress on the old `normalize`
routine, especially for the extremely large markers seen in
`transformers` and other projects.
Resolves https://github.com/astral-sh/uv/issues/5660,
https://github.com/astral-sh/uv/issues/5179.
## Summary
This PR adds a `DistExtension` field to some of our distribution types,
which requires that we validate that the file type is known and
supported when parsing (rather than when attempting to unzip). It
removes a bunch of extension parsing from the code too, in favor of
doing it once upfront.
Closes https://github.com/astral-sh/uv/issues/5858.
## Summary
I think this seems reasonable... Otherwise, we might not go back to PyPI
to revalidate the list of available versions despite the user passing
`--upgrade`.
## Summary
Previously, we wouldn't respect configuration files in directories
_above_ a workspace root. But this is somewhat problematic, because any
`pyproject.toml` will define a workspace root...
Instead, I think we should _start_ the search at the workspace root, but
go above it if necessary.
Closes: #5929.
See: https://github.com/astral-sh/uv/pull/4295.
## Summary
Resolves #5188. Most of the changes involve creating a new function in
`tool/common.rs` to contain the common functionality previously found in
`tool/install.rs`.
## Test Plan
`cargo test`
```console
❯ ./target/debug/uv tool upgrade black
warning: `uv tool upgrade` is experimental and may change without warning.
Resolved 6 packages in 25ms
Uninstalled 1 package in 3ms
Installed 1 package in 19ms
- black==23.1.0
+ black==24.4.2
Installed 2 executables: black, blackd
```
e.g.
```
❯ cargo run -- venv --no-system
Blocking waiting for file lock on build directory
Compiling uv v0.2.34 (/Users/zb/workspace/uv/crates/uv)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 19.85s
Running `target/debug/uv venv --no-system`
warning: The `--no-system` flag has no effect, a system Python interpreter is always used in `uv venv`
Using Python 3.12.4 interpreter at: /opt/homebrew/opt/python@3.12/bin/python3.12
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
❯ cargo run -- venv --system
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/uv venv --system`
warning: The `--system` flag has no effect, a system Python interpreter is always used in `uv venv`
Using Python 3.12.4 interpreter at: /opt/homebrew/opt/python@3.12/bin/python3.12
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
## Summary
This _used_ to be true but we now require fetching metadata for all
distributions even with `--no-deps` since, e.g., we validate that any
declared extras exist.
## Summary
Initially, we showed _all_ resolver and installer output in `uv run` and
`uv tool run`, which was way too much for these workhorse commands. Then,
we moved to showing _no_ output by default, which was way too little --
you had no idea why anything was happening, and commands appeared to
hang.
This PR adds a more nuanced middle-ground. With `--verbose`, we continue
to show everything. But by default, in `uv run` and `uv tool run`...
- During resolution, we show any "Building" and "Built" messages, if you
need to build a source distribution. But we don't show any other output.
(This _could_ be too little for expensive resolutions; we may want to
show a spinner.)
- If there are no changes to be made after resolving, we don't show any
other output.
- If we have to install, we show the progress bars for downloads (which
disappear on completion) followed by a single summary line stating the
number of packages installed.
This feels pretty good, in my limited testing. When everything is built
/ cached, you don't get _any_ additional output. When there's work to
do, you have a sense for what's happening, and we leave you with a
single summary line ("Installed X packages") at the end.
Closes https://github.com/astral-sh/uv/issues/5758.
## Test Plan
Notice that the first `tool run` ends with an install line; the second
shows no additional output:

If you run `uv run` in a package for the first time, we _do_ tell you
that we're building / built it:

But on the second run, there's no output:

If you add a `--with`, we'll show you all the installer progress bars
(which disappear once they're done), and then a single summary line:

Currently, the entry for a package+version+source table is called
`distribution`. That is incorrect: the `sdist` and `wheel` fields inside
of that table are distributions; the table itself is for a package. We
also align ourselves closer with PEP 751.
I went through `lock.rs` and renamed all occurrences of "distribution"
that actually referred to a "package".
This change invalidates all existing lockfiles.
Bikeshedding: Do we call it `package` or `packages`? See also
https://github.com/python/peps/pull/3877
`package` is nice because it looks like a header:
```toml
[[package]]
name = "anyio"
version = "4.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "idna" },
{ name = "sniffio" },
]
sdist = { url = "3970183622d484d08e3285104333d3/anyio-4.3.0.tar.gz", hash = "sha256:f75253795a87df48568485fd18cdd2a3fa5c4f7c5be8e5e36637733fce06fed6", size = 159642 }
wheels = [
{ url = "2f20c40b45242c0b33774da0e2e34f/anyio-4.3.0-py3-none-any.whl", hash = "sha256:048e05d0f6caeed70d731f3db756d35dcc1f35747c8c403364a8332c630441b8", size = 85584 },
]
```
`packages` is nice because the field is not a single entry, but a list.
2/3 for https://github.com/astral-sh/uv/issues/4893
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Whenever we call `resolve`, we immediately call `fetch` after. And in
some cases `resolve` actually calls `fetch` internally. It seems a lot
simpler to just merge these into one method that returns a `Fetch`
(which itself contains the fully-resolved URL).
Closes https://github.com/astral-sh/uv/issues/5876.
There are three options that determine resolver behavior:
* resolution mode
* prerelease mode
* exclude newer
They are different from the other top-level options: if they mismatch,
we recreate the resolution. To distinguish them from the rest of the
lockfile, we group them under an `[options]` header.
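For illustration, a hedged sketch of what this looks like in a lockfile (the key name and value are illustrative):
```shell
grep -A2 '^\[options\]' uv.lock
# expected output along the lines of:
# [options]
# exclude-newer = "2024-08-19T00:00:00Z"
```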
1/3 for #4893
Following #5869, the documentation has some less-than-helpful
suggestions to use `uv help python` for details — we should link to the
`uv python` section instead.
## Summary
We were dropping the query and fragment in the wrong place, so the URLs
didn't match up after resolving from an existing lockfile.
Closes https://github.com/astral-sh/uv/issues/5851.
## Summary
Very subtle bug. The scenario is as follows:
- We resolve: `elmer-circuitbuilder = { git =
"https://github.com/ElmerCSC/elmer_circuitbuilder.git" }`
- The user then changes the request to: `elmer-circuitbuilder = { git =
"https://github.com/ElmerCSC/elmer_circuitbuilder.git", rev =
"44d2f4b19d6837ea990c16f494bdf7543d57483d" }`
- When we go to re-lock, we note two facts:
1. The "default branch" resolves to
`44d2f4b19d6837ea990c16f494bdf7543d57483d`.
2. The metadata for `44d2f4b19d6837ea990c16f494bdf7543d57483d` is
(whatever we grab from the lockfile).
- In the resolver, we then ask for the metadata for
`44d2f4b19d6837ea990c16f494bdf7543d57483d`. It's already in the cache,
so we return it; thus, we never add the
`44d2f4b19d6837ea990c16f494bdf7543d57483d` ->
`44d2f4b19d6837ea990c16f494bdf7543d57483d` mapping to the Git resolver,
because we never have to resolve it.
This would apply for any case in which a requested tag or branch was
replaced by its precise SHA. Replacing with a different commit is fine.
It only applied to `tool.uv.sources`, and not PEP 508 URLs, because the
underlying issue is that we aren't consistent about "automatically"
extracting the precise commit from a Git reference.
Closes https://github.com/astral-sh/uv/issues/5860.
## Summary
This is an experimental PR to replace more unsafe calls with more Rust
while still trying to keep the binary size small enough. These changes
roughly increase the size of the trampolines to about 40 KB. This is an
alternate PR to https://github.com/astral-sh/uv/pull/5751.
The primary changes here include:
* Switch to Rust path components for ease of path management
* Leverage `std::process::exit` for process exit and cleanup
* Use `std::io::Error::last_os_error` for I/O errors to remove the
`FormatMessage` complexity
* Use `std::env::current_exe` to get the current executable instead of
`GetModuleFileNameA`
## Test Plan
Added one more existing test case to trampoline tests.
Still need to verify whether `dunce::canonicalize` is desired on
`find_python_exe`.
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
We need to avoid using incompatible versions for build dependencies that
are also part of the resolved
environment. This is a very subtle issue, but: when locking, we don't
enforce platform
compatibility. So, if we reuse the resolver state to install, and the
install itself has to
preform a resolution (e.g., for the build dependencies of a source
distribution), that
resolution may choose incompatible versions.
The key property here is that there's a shared package between the build
dependencies and the
project dependencies.
Closes https://github.com/astral-sh/uv/issues/5836.
This already rejects `pyproject.toml`... but because the schema
validation is relaxed (we allow unknown fields, and all fields are
optional), a `pyproject.toml` doesn't get properly rejected here.
This PR makes the schema stricter, but in a safe way (by adding the
other `tool.uv` fields, like `workspace`, as any).
Closes #5832.
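A sketch of the general technique under assumed field names (not the exact uv schema): rejecting unknown fields while accepting the known `tool.uv`-only fields as opaque values means a `pyproject.toml` no longer deserializes as an empty settings struct.
```rust
use serde::Deserialize;

/// Hedged sketch, not uv's actual schema: unknown fields are rejected, while
/// fields that are only meaningful under `[tool.uv]` (like `workspace`) are
/// accepted as opaque values so the stricter schema stays safe.
#[derive(Debug, Deserialize)]
#[serde(deny_unknown_fields, rename_all = "kebab-case")]
struct Settings {
    index_url: Option<String>,
    // Accepted but not interpreted here.
    workspace: Option<toml::Value>,
    sources: Option<toml::Value>,
}

fn main() {
    // A file with a top-level `[project]` table (i.e., a `pyproject.toml`)
    // now fails to deserialize instead of silently parsing as empty settings.
    let pyproject = r#"
        [project]
        name = "example"
    "#;
    assert!(toml::from_str::<Settings>(pyproject).is_err());
}
```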
## Summary
We allow the use of (e.g.) `.whl.metadata` files when `--no-binary` is
enabled, so it makes sense that we'd also allow wheels to be downloaded
for metadata extraction. So now, we validate `--no-binary` at install
time, rather than metadata-fetch time.
Closes https://github.com/astral-sh/uv/issues/5699.
## Summary
This fixes a bug introduced by
https://github.com/astral-sh/uv/pull/5232. It turns out that the
`universal_disjoint_base_or_local_requirement` test does not actually do
what it was meant to because of an incorrect Python requirement. With a
valid Python requirement, it fails on `main`. The problem is that we try
to exclude the original base version from the range of allowed versions
to try to prefer local versions. However, in the test, there is a branch
that depends on the non-local version, with no applicable local version
in its fork. We should remove this exclusion, as prioritization is
handled by the candidate resolver.
I don't think this will save any time in serialization, but it should
save us some deserialization, since we only need to parse URLs for the
packages we use...
## Summary
Okay, I tested this against...
- Our public "private" proxy
- Fury
- AWS CodeArtifact
- Azure Artifacts
It took a long time.
All of them work as expected with this approach: we omit the credentials
from the lockfile, then wire them back up when the index URL is provided
during subsequent operations.
Closes https://github.com/astral-sh/uv/issues/5119.
Part of #4454
e.g.
```
$ uv add --help
Add one or more packages to the project requirements
Usage: uv add [OPTIONS] <REQUIREMENTS>...
Arguments:
<REQUIREMENTS>... The packages to add, as PEP 508 requirements (e.g., `ruff==0.5.0`)
Options:
--dev Add the requirements as development dependencies
--optional <OPTIONAL> Add the requirements to the specified optional dependency group
--no-editable Don't add the requirements as editables
--raw-sources Add source requirements to `project.dependencies`, rather than `tool.uv.sources`
--rev <REV> Specific commit to use when adding from Git
--tag <TAG> Tag to use when adding from git
--branch <BRANCH> Branch to use when adding from git
--extra <EXTRA> Extras to activate for the dependency; may be provided more than once
--locked Assert that the `uv.lock` will remain unchanged
--frozen Add the requirements without updating the `uv.lock` file
--package <PACKAGE> Add the dependency to a specific package in the workspace
-p, --python <PYTHON> The Python interpreter into which packages should be installed. [env: UV_PYTHON=]
Index options:
-i, --index-url <INDEX_URL> The URL of the Python package index (by default: <https://pypi.org/simple>) [env: UV_INDEX_URL=]
--extra-index-url <EXTRA_INDEX_URL> Extra URLs of package indexes to use, in addition to `--index-url` [env: UV_EXTRA_INDEX_URL=]
-f, --find-links <FIND_LINKS> Locations to search for candidate distributions, in addition to those found in the registry indexes
--no-index Ignore the registry index (e.g., PyPI), instead relying on direct URL dependencies and those provided via `--find-links`
--index-strategy <INDEX_STRATEGY> The strategy to use when resolving against multiple index URLs [env: UV_INDEX_STRATEGY=] [possible values: first-index, unsafe-first-match, unsafe-best-match]
--keyring-provider <KEYRING_PROVIDER> Attempt to use `keyring` for authentication for index URLs [env: UV_KEYRING_PROVIDER=] [possible values: disabled, subprocess]
Resolver options:
-U, --upgrade Allow package upgrades, ignoring pinned versions in any existing output file
-P, --upgrade-package <UPGRADE_PACKAGE> Allow upgrades for a specific package, ignoring pinned versions in any existing output file
--resolution <RESOLUTION> The strategy to use when selecting between the different compatible versions for a given package requirement [env: UV_RESOLUTION=] [possible values: highest, lowest, lowest-direct]
--prerelease <PRERELEASE> The strategy to use when considering pre-release versions [env: UV_PRERELEASE=] [possible values: disallow, allow, if-necessary, explicit, if-necessary-or-explicit]
--exclude-newer <EXCLUDE_NEWER> Limit candidate packages to those that were uploaded prior to the given date [env: UV_EXCLUDE_NEWER=]
Installer options:
--reinstall Reinstall all packages, regardless of whether they're already installed. Implies `--refresh`
--reinstall-package <REINSTALL_PACKAGE> Reinstall a specific package, regardless of whether it's already installed. Implies `--refresh-package`
--link-mode <LINK_MODE> The method to use when installing packages from the global cache [env: UV_LINK_MODE=] [possible values: clone, copy, hardlink, symlink]
--compile-bytecode Compile Python files to bytecode after installation
Build options:
-C, --config-setting <CONFIG_SETTING> Settings to pass to the PEP 517 build backend, specified as `KEY=VALUE` pairs
--no-build Don't build source distributions
--no-build-package <NO_BUILD_PACKAGE> Don't build source distributions for a specific package
--no-binary Don't install pre-built wheels
--no-binary-package <NO_BINARY_PACKAGE> Don't install pre-built wheels for a specific package
Cache options:
-n, --no-cache Avoid reading from or writing to the cache, instead using a temporary directory for the duration of the operation [env: UV_NO_CACHE=]
--cache-dir <CACHE_DIR> Path to the cache directory [env: UV_CACHE_DIR=]
--refresh Refresh all cached data
--refresh-package <REFRESH_PACKAGE> Refresh cached data for a specific package
Python options:
--python-preference <PYTHON_PREFERENCE> Whether to prefer using Python installations that are already present on the system, or those that are downloaded and installed by uv [possible values: only-managed, managed, system, only-system]
--python-fetch <PYTHON_FETCH> Whether to automatically download Python when required [possible values: automatic, manual]
Global options:
-q, --quiet Do not print any output
-v, --verbose... Use verbose output
--color <COLOR_CHOICE> Control colors in output [default: auto] [possible values: auto, always, never]
--native-tls Whether to load TLS certificates from the platform's native certificate store [env: UV_NATIVE_TLS=]
--offline Disable network access, relying only on locally cached data and locally available files
--no-progress Hides all progress outputs when set
--config-file <CONFIG_FILE> The path to a `uv.toml` file to use for configuration [env: UV_CONFIG_FILE=]
--no-config Avoid discovering configuration files (`pyproject.toml`, `uv.toml`) in the current directory, parent directories, or user configuration directories [env: UV_NO_CONFIG=]
-h, --help Print help
-V, --version Print version
Use `uv help add` for more details.
```
## Summary
`uv tree` will now filter to the current platform by default. You can
pass `--universal` to show the entire tree.
Closes https://github.com/astral-sh/uv/issues/5760.
## Summary
It's fine for this to be in the cache, I think, since we don't
necessarily need to colocate it with the Python directory.
Closes https://github.com/astral-sh/uv/issues/5747.
## Summary
After referring to https://github.com/astral-sh/uv/pull/5637 and doing
additional testing, the more reasonable default seems to be
``only-system`` in the stable state and ``managed`` in preview.
```
cpython-3.11.9-windows-x86_64-none C:\Users\name\AppData\Local\Programs\Python\Python311\python.exe
cpython-3.10.14-windows-x86_64-none C:\Users\name\AppData\Roaming\uv\data\python\cpython-3.10.14-windows-x86_64-none\install\python.exe
cpython-3.10.11-windows-x86_64-none C:\Users\name\AppData\Local\Programs\Python\Python310\python.exe
cpython-3.9.19-windows-x86_64-none C:\Users\name\AppData\Roaming\uv\data\python\cpython-3.9.19-windows-x86_64-none\python.exe
```
Tested on uv 0.2.33 (built from
257007ccaf)
### Stable version
``uv venv -p 3.10`` is ``3.10.11`` (system Python)
``uv venv -p 3.9`` is ``No interpreter found`` (3.9.19 is available as
managed Python)
``uv venv -p 3.9 --python-preference only-system`` is ``No interpreter
found`` (fail)
``uv venv -p 3.9 --python-preference only-managed`` is ``3.9.19``
(success)
The stable default does not use managed Python and only uses the system
Python, so it can be determined as ``only-system``.
### Preview mode
**Note:** ``3.10.14`` is managed Python, ``3.10.11`` is system Python.
``uv venv -p 3.11 --preview`` is ``3.11.9`` (system Python)
``uv venv -p 3.10 --preview`` is ``3.10.14``
``uv venv -p 3.10 --preview --python-preference only-managed`` is
``3.10.14``
``uv venv -p 3.10 --preview --python-preference managed`` is ``3.10.14``
``uv venv -p 3.10 --preview --python-preference system`` is ``3.10.11``
``uv venv -p 3.10 --preview --python-preference only-system`` is
``3.10.11``
The preview default prioritizes managed Python and then falls back to
the system Python, so it can be determined as ``managed``.
-----
Fixes #5754
## Test Plan
Ran the website locally.

## Summary
Right now, if you have a `requirements.txt` with a pre-release, but the
`requirements.in` does not have a pre-release marker for that
dependency, we drop the pre-release. (In the selector, we end up
returning `AllowPrerelease::IfNecessary`, the default.)
I played with a few ways of solving this... The first was to remove that
guard altogether. But if we do that,
`universal_transitive_disjoint_prerelease_requirement` fails (we use
`1.17.0rc1` in both forks, when it should only apply to one of the two).
The second was to do that, but also avoid pushing pre-releases as
preferences when we solve a fork. But then
`universal_disjoint_prereleases` fails, because we return a different
pre-release in each fork.
Finally, I settled on allowing existing pre-releases in forks if they
have no markers on them, i.e., they are "global" preferences. I believe
this is true IFF the preference came from an existing lockfile.
Closes https://github.com/astral-sh/uv/issues/5729.
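A minimal sketch of that rule with invented names (not uv's actual preference type): a lockfile preference without a marker applies to every fork, so it is allowed to keep a pre-release; fork-specific preferences are not.
```rust
/// Invented stand-in for a resolver preference taken from an existing lockfile.
struct Preference {
    version: String,
    is_prerelease: bool,
    /// `None` means the preference carries no marker, i.e., it is "global".
    marker: Option<String>,
}

/// Keep an existing pre-release only when the preference applies to every
/// fork; fork-specific preferences fall back to the usual `if-necessary`
/// behavior.
fn keep_prerelease(pref: &Preference) -> bool {
    pref.is_prerelease && pref.marker.is_none()
}

fn main() {
    let global = Preference {
        version: "1.17.0rc1".to_string(),
        is_prerelease: true,
        marker: None,
    };
    let forked = Preference {
        version: "1.17.0rc1".to_string(),
        is_prerelease: true,
        marker: Some("sys_platform == 'darwin'".to_string()),
    };
    println!("{}: {}", global.version, keep_prerelease(&global)); // true
    println!("{}: {}", forked.version, keep_prerelease(&forked)); // false
}
```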
## Summary
If _both_ nodes in the derivation tree are proxies, we need to remove
the _entire_ node. So, the function now returns an `Option<Tree>`
instead of taking `&mut Tree`.
Closes https://github.com/astral-sh/uv/issues/5618.
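A simplified sketch of the shape of that change (the tree type is a stand-in, not uv's derivation tree): returning an `Option` lets the function signal that a node whose children were both proxies should disappear entirely.
```rust
/// Simplified stand-in for a derivation tree whose leaves may be proxy packages.
enum Tree {
    Leaf { proxy: bool },
    Node(Box<Tree>, Box<Tree>),
}

/// Remove proxy leaves. Returning `Option<Tree>` (rather than mutating a
/// `&mut Tree`) allows a node whose children are *both* proxies to be removed
/// entirely, instead of leaving an empty node behind.
fn strip_proxies(tree: Tree) -> Option<Tree> {
    match tree {
        Tree::Leaf { proxy: true } => None,
        leaf @ Tree::Leaf { .. } => Some(leaf),
        Tree::Node(left, right) => match (strip_proxies(*left), strip_proxies(*right)) {
            (None, None) => None,
            (Some(only), None) | (None, Some(only)) => Some(only),
            (Some(l), Some(r)) => Some(Tree::Node(Box::new(l), Box::new(r))),
        },
    }
}

fn main() {
    let tree = Tree::Node(
        Box::new(Tree::Leaf { proxy: true }),
        Box::new(Tree::Leaf { proxy: true }),
    );
    assert!(strip_proxies(tree).is_none());
}
```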
To enforce the 100 character line limit in markdown files introduced in
https://github.com/astral-sh/uv/pull/5635, and to automate the
formatting of markdown files, I've added prettier and formatted our
markdown files with it.
I've excluded the changelog and the generated references documentation
from this for having too many changes, but we can also include them.
I'm not particular about which style we use. My main motivations are
(major) not having to reflow markdown files myself anymore and (minor)
consistency across all markdown files. I've chosen prettier for similar
reasons as we chose black: it's a single good style that's automated and
shared in the community. I do prefer prettier's style of not breaking
inside of a link name, though.
This PR is in two parts: the first adds prettier to CI and documents
using it, while the second actually formats the docs. When merge
conflicts arise, we can drop the last commit and regenerate it with `npx
prettier --prose-wrap always --write BENCHMARKS.md CONTRIBUTING.md
README.md STYLE.md docs/*.md docs/concepts/**/*.md docs/guides/**/*.md
docs/pip/**/*.md`.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Partially resolves #5561. Haven't added overrides support yet, but I can
add it tomorrow if the current approach for constraints is ok.
## Test Plan
`cargo test`
Manually checked trace logs after changing the constraints.
## Summary
Gives you a nice error message if you attempt to sync with, e.g., `-p
3.8` when that version is supported by at least one workspace member,
but your project's minimum requirement is `>=3.12`
Closes https://github.com/astral-sh/uv/issues/5662.
## Summary
I think it's reasonable to only sync the affected group, e.g., `uv add`
on its own should not require syncing all extras.
Closes https://github.com/astral-sh/uv/issues/4418.
Part of #4454
e.g. for `uv help pip compile`
```
Python options:
--python <PYTHON>
The Python interpreter against which to compile the requirements.
By default, uv uses the virtual environment in the current working directory or any parent
directory, falling back to searching for a Python executable in `PATH`. The `--python`
option allows you to specify a different interpreter.
Supported formats:
- `3.10` looks for an installed Python 3.10 using `py --list-paths` on Windows, or
`python3.10` on Linux and macOS.
- `python3.10` or `python.exe` looks for a binary with the given name in `PATH`.
- `/home/ferris/.local/bin/python3.10` uses the exact Python at the given path.
-p, --python-version <PYTHON_VERSION>
The minimum Python version that should be supported by the resolved requirements (e.g., `3.8` or `3.8.17`).
If a patch version is omitted, the minimum patch version is assumed. For example, `3.8` is mapped to `3.8.0`.
--python-preference <PYTHON_PREFERENCE>
Whether to prefer using Python installations that are already present on the system, or those that are downloaded and installed by uv
Possible values:
- only-managed: Only use managed Python installations; never use system Python installations
- managed: Prefer managed Python installations over system Python installations
- system: Prefer system Python installations over managed Python installations
- only-system: Only use system Python installations; never use managed Python installations
--python-fetch <PYTHON_FETCH>
Whether to automatically download Python when required
Possible values:
- automatic: Automatically fetch managed Python installations when needed
- manual: Do not automatically fetch managed Python installations; require explicit installation
```
## Summary
In #5494, I made breaking changes to the tool receipt format. This would
break existing tools for all users. This PR makes the change
backwards-compatible by supporting deserialization for the deprecated
format.
Closes https://github.com/astral-sh/uv/issues/5680.
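One common way to get this kind of backwards compatibility with serde, sketched here with hypothetical field names rather than uv's actual receipt schema, is an untagged enum that tries the current shape first and falls back to the deprecated one:
```rust
use serde::Deserialize;

/// Hypothetical receipt shapes; the real field names differ.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum ToolReceipt {
    /// The current format introduced in #5494.
    Current { requirements: Vec<String> },
    /// The deprecated pre-#5494 format, still accepted when deserializing.
    Deprecated { packages: Vec<String> },
}

fn main() {
    let old = r#"{ "packages": ["black"] }"#;
    let new = r#"{ "requirements": ["black>=24"] }"#;
    assert!(matches!(
        serde_json::from_str::<ToolReceipt>(old).unwrap(),
        ToolReceipt::Deprecated { .. }
    ));
    assert!(matches!(
        serde_json::from_str::<ToolReceipt>(new).unwrap(),
        ToolReceipt::Current { .. }
    ));
}
```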
## Test Plan
Beyond the automated tests, you can run `cargo run tool list` on your
existing machine.
Before:
```
warning: `uv tool list` is experimental and may change without warning
warning: Ignoring malformed tool `black` (run `uv tool uninstall black` to remove)
warning: Ignoring malformed tool `poetry` (run `uv tool uninstall poetry` to remove)
warning: Ignoring malformed tool `ruff` (run `uv tool uninstall ruff` to remove)
```
After:
```
warning: `uv tool list` is experimental and may change without warning
black v0.1.0
- black
poetry v1.8.3
- poetry
ruff v0.0.60
- ruff
```
## Summary
When we add a new optional group in `uv add`, we need to update the
`pyproject.toml` before locking. Otherwise, we use the stale
`pyproject.toml` and omit the optional group.
Closes https://github.com/astral-sh/uv/issues/5687.
## Summary
Fixes a bug in #5494. The `RequirementSourceWire` representation was
ambiguous, and so the order of the fields meant that all variants were
mapped to `Registry` when deserializing. (So the snapshots were right,
but behaviors were wrong.)
## Summary
This could still be made more robust, but it's not critical, since you
can always `--force`. It's good to handle this case, though, since we
have an explicit error for it.
Closes https://github.com/astral-sh/uv/issues/5490.
It transpires that detecting the directory a script was sourced from is
non-trivial across `bash`, `ksh` and `zsh`.
The previous version was a one-liner and supported `bash` and `zsh` but
not `ksh`.
It is possible to keep the one-liner and add `ksh` support, but that is
mutually exclusive with `zsh`.
Therefore, the only way to square this circle is to add an `if` block. A
silver lining here is that although longer, the script is probably
easier to follow as there is less code-golfing going on.
## Summary
The current receipt doesn't capture quite enough information. For
example, it doesn't differentiate between editable and non-editable
requirements. This PR instead uses the full `Requirement` type. I think
we should use a custom representation like we do in the lockfile, but
I'm just using the default representation to demonstrate the idea.
## Summary
As-is, if you have a workspace with mixed `requires-python`
requirements, resolution will _never_ succeed, since we'll use the union
as the `requires-python` bound (i.e., take the lowest value), and fail
when we see the package that only supports a narrower range.
This PR modifies the behavior to take the intersection (i.e., the
highest value), so if you have one package that supports Python 3.12 and
later, and another that supports Python 3.8 and later, we lock for
Python 3.12. If you try to sync or run with Python 3.8, we raise an
error, since the lockfile will be incompatible with that request.
Konsti has a write-up in https://github.com/astral-sh/uv/issues/5594
that outlines what could be a longer-term strategy.
Closes https://github.com/astral-sh/uv/issues/5578.
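A toy sketch of the difference, with versions reduced to `(major, minor)` tuples instead of uv's real specifiers: the lockfile bound should be the intersection of the members' lower bounds (the highest minimum), not the union (the lowest).
```rust
/// Simplified (major, minor) lower bounds for each workspace member's
/// `requires-python`; real code operates on full version specifiers.
fn union_lower_bound(members: &[(u32, u32)]) -> Option<(u32, u32)> {
    // The old behavior: the loosest bound, which can never satisfy a member
    // that requires a newer Python.
    members.iter().copied().min()
}

fn intersection_lower_bound(members: &[(u32, u32)]) -> Option<(u32, u32)> {
    // The new behavior: lock for the narrowest shared range.
    members.iter().copied().max()
}

fn main() {
    let members = [(3, 8), (3, 12)];
    assert_eq!(union_lower_bound(&members), Some((3, 8)));
    assert_eq!(intersection_lower_bound(&members), Some((3, 12)));
}
```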
By resolving each fork from the lockfile individually and by using
preferences from the current fork, we solve the instability in #5180.
I've tested this locally and will add the packse test scenarios upstack.
Part of
https://github.com/astral-sh/uv/issues/5180#issuecomment-2247696198
The comment in the code explains the bulk of this:
```rust
// We previously computed this heuristic freshness lifetime by
// looking at the difference between the last modified header and
// the response's date header. We then asserted that the cached
// response ought to be "fresh" for 10% of that interval.
//
// It turns out that this can result in very long freshness
// lifetimes[1] that lead to uv caching too aggressively.
//
// Since PyPI sets a max-age of 600 seconds and since we're
// principally just interacting with Python package indices here,
// we just assume a freshness lifetime equal to what PyPI has.
//
// Note though that a better solution here is for the index to
// support proper HTTP caching headers (ideally Cache-Control, but
// Expires also works too, as above).
```
We also remove the `heuristic_percent` field on `CacheConfig`. Since
that's actually part of the cache itself, we bump the simple cache
version.
Finally, we add some more `trace!` calls that should hopefully make
diagnosing issues related to the freshness lifetime a bit easier in the
future.
Fixes #5351
## Summary
Given a fork like:
```
pylint < 3 ; sys_platform == 'darwin'
pylint > 2 ; sys_platform != 'darwin'
```
Solving the top branch will typically yield a solution that also
satisfies the bottom branch, due to maximum version selection (while the
inverse isn't true).
To quote an example from the docs:
```rust
// If there's no difference, prioritize forks with upper bounds. We'd prefer to solve
// `numpy <= 2` before solving `numpy >= 1`, since the resolution produced by the former
// might work for the latter, but the inverse is unlikely to be true due to maximum
// version selection. (Selecting `numpy==2.0.0` would satisfy both forks, but selecting
// the latest `numpy` would not.)
```
Closes https://github.com/astral-sh/uv/issues/4926 for now.
## Summary
First part of: https://github.com/astral-sh/uv/issues/4926. We should
solve forks that _don't_ expand the world of supported versions (e.g.,
`python_version >= '3.11'` enables us to select new packages, since we
narrow the supported version range).
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
<!-- What's the purpose of the change? What does it do, and why? -->
`uv venv` should support adopting the Python version specified in
`requires-python` in `pyproject.toml`. This allows customizing the venv
setup when syncing from a Python project.
Closes https://github.com/astral-sh/uv/issues/5552.
It also serves as a workaround to close
https://github.com/astral-sh/uv/issues/5258.
## Test Plan
<!-- How was it tested? -->
1. Run `uv venv` in a folder with a `pyproject.toml` specifying
`requires-python = "<3.10"`. Python 3.9 is selected for the venv.
2. Change to `requires-python = "<3.11"` and run `uv venv` again. Python
3.10 is selected now.
3. Switch to a folder without `pyproject.toml`, then run `uv venv`.
Python 3.12 is selected now.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
Collapses the previous default into "managed" and makes the "managed"
behavior match "installed". People should use "only-managed" if they
want that behavior; it seems overly complicated otherwise.
## Summary
This PR deprecates the `--isolated` flag. The treatment varies across
the APIs:
- For non-preview APIs, we warn but treat it as equivalent to
`--no-config`.
- For preview APIs, we warn and ignore it, with two exceptions...
- For `tool run` and `run` specifically, we don't even warn, because we
can't differentiate the command-specific `--isolated` from the global
`--isolated`.
## Summary
The culmination of #4730. We now have `uv run --isolated` which always
uses a fresh environment (but includes the workspace dependencies as
needed). This enables you to test with strict isolation (e.g., `uv run
--isolated -p foo` will ensure that `foo` is unable to import anything
that isn't an actual dependency).
Closes #5430.