## Summary
Improve the description of override-dependencies based on the statement
in `concepts/resolution.md`: "As with constraints, overrides do not add
a dependency on the package and only take effect if the package is
requested in a direct or transitive dependency."
I tested it locally, and `concepts/resolution.md` is correct. It would be
better to also include this in the Reference chapter of the docs.
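For reference, overrides live under `[tool.uv]` in `pyproject.toml`; a minimal sketch (the package name and version are illustrative):
```toml
[tool.uv]
# Pins werkzeug across the whole resolution, but only takes effect if some
# direct or transitive dependency actually requests werkzeug.
override-dependencies = ["werkzeug==2.3.0"]
```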
I think it's best practice to use a placeholder when we transform
something, and #7522 has snapshot issues because this filter
conflicts with `with_filtered_virtualenv_bin`.
As we support more complex Python discovery behaviors such as:
- #7431
- #7335
- #7300
`Any` is no longer accurate: we're actually looking for a reasonable
default Python version to use, which may exclude the first one we find.
Separately, we need the idea of `Any` to improve behavior when listing
versions (e.g., #7286) where we do actually want to match _any_ Python
version. As a first step, we'll rename `Any` to `Default`. Then, we'll
introduce a new `Any` that actually behaves as we'd expect.
Recently, rkyv 0.8 was released. Its API is now a fair bit simpler for
higher-level uses (like ours in `uv`), which lets us delete a fair bit of
code. This also removes our last use of `syn 1.0`, so that dependency is
dropped entirely.
Performance (via testing on the `transformers` example) seems to remain
about the same, which is what was expected:
```
$ hyperfine -w5 -r100 'uv lock' 'uv-ag-rkyv-update lock'
Benchmark 1: uv lock
Time (mean ± σ): 55.6 ms ± 6.4 ms [User: 30.4 ms, System: 35.1 ms]
Range (min … max): 43.0 ms … 73.1 ms 100 runs
Benchmark 2: uv-ag-rkyv-update lock
Time (mean ± σ): 56.5 ms ± 7.2 ms [User: 30.5 ms, System: 36.3 ms]
Range (min … max): 39.1 ms … 71.5 ms 100 runs
Summary
uv lock ran
1.02 ± 0.18 times faster than uv-ag-rkyv-update lock
```
Closes #7415
This changes the structure of the hints generator in the resolver when
encountering solution errors, so that it re-uses a single output buffer
owned by the caller.
It avoids repeated allocations of a temporary buffer within each
recursive function call.
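A standalone sketch of the pattern (not uv's actual types or function names):
```rust
use std::fmt::Write;

// The caller owns the buffer; each recursive call appends to it rather than
// allocating and returning a fresh String of its own.
fn write_hints(depth: usize, output: &mut String) {
    let _ = writeln!(output, "hint at depth {depth}");
    if depth < 3 {
        write_hints(depth + 1, output);
    }
}

fn main() {
    let mut hints = String::new();
    write_hints(0, &mut hints);
    print!("{hints}");
}
```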
## Summary
See: https://github.com/astral-sh/uv/issues/7485. The test was using `uv
pip sync` which doesn't require fetching metadata, and the failure was
in fetching metadata.
## Summary
Closes https://github.com/astral-sh/uv/issues/7485.
## Test Plan
```
$ cargo run cache clean
$ cargo run venv
$ cargo run pip install django-allauth==0.51.0
$ cargo run venv
$ cargo run pip install django-allauth==0.51.0
```
This changes `uv tool install` behavior with regard to reusing existing
environments.
In particular, it replaces the existing version-matching logic with a
tighter one that enforces a same-interpreter match.
This allows properly switching between system and managed interpreters,
at the cost of more eagerly invalidating existing environments whenever
the interpreter changes.
Closes: https://github.com/astral-sh/uv/issues/7320
## Summary
This PR enables users to provide pre-defined static metadata for
dependencies. It's intended for situations in which the user depends on
a package that does _not_ declare static metadata (e.g., a
`setup.py`-only sdist), and that is expensive to build or even cannot be
built on some architectures. For example, you might have a Linux-only
dependency that can't be built on ARM -- but we need to build that
package in order to generate the lockfile. By providing static metadata,
the user can instruct uv to avoid building that package at all.
For example, to override all `anyio` versions:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]
[[tool.uv.dependency-metadata]]
name = "anyio"
requires-dist = ["iniconfig"]
```
Or, to override a specific version:
```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = ["anyio"]
[[tool.uv.dependency-metadata]]
name = "anyio"
version = "3.7.0"
requires-dist = ["iniconfig"]
```
The current implementation uses `Metadata23` directly, so we adhere to
the exact schema expected internally and defined by the standards. Any
entries are treated similarly to overrides, in that we won't even look
for `anyio@3.7.0` metadata in the above example. (In a way, this also
enables #4422, since you could remove a dependency for a specific
package, though it's probably too unwieldy to use in practice, since
you'd need to redefine the _rest_ of the metadata, and do that for every
package that requires the package you want to omit.)
This is under-documented, since I want to get feedback on the core ideas
and names involved.
Closes https://github.com/astral-sh/uv/issues/7393.
## Summary
Closes #6272
## Test Plan
As in https://github.com/astral-sh/uv/pull/6262
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
When syncing a lockfile, we need to respect credentials defined in the
`pyproject.toml`, even if they won't be used for resolution.
Unfortunately, this includes credentials in `tool.uv.sources`,
`tool.uv.dev-dependencies`, `project.dependencies`, and
`project.optional-dependencies`.
Closes https://github.com/astral-sh/uv/issues/7453.
## Summary
This PR adds support for including Python pre-releases when requesting
versions.
Check the docs for commands that support the `--python` option:
```text
--python, -p python
The Python interpreter to use for the virtual environment.
```
At least the following scenarios are supported:
```bash
3.13.0a1
3.13b2
3.13rc4
313rc1
```
## Test Plan
I added a basic unit test to `uv/crates/uv-python/src/discovery.rs`. I
could have added more, but I did not find other relevant places.
CI passes.
Note: I was unable to execute the entire test set locally. There were at
least some timeout issues (some tests took over 60 seconds).
========== output ===========
beta version
```bash
cargo run -- venv --python 3.13.0b3
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.20s
Running `target/debug/uv venv --python 3.13.0b3`
Using Python 3.13.0b3 interpreter at: /home/mikko/.pyenv/versions/3.13.0b3/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
release candidate
```bash
cargo run -- venv --python 3.13.0rc2
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.83s
Running `target/debug/uv venv --python 3.13.0rc2`
Using Python 3.13.0rc2 interpreter at: /home/mikko/.pyenv/versions/3.13.0rc2/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
```bash
cargo run -- venv --python 313rc2
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.31s
Running `target/debug/uv venv --python 313rc2`
Using Python 3.13.0rc2 interpreter at: /home/mikko/.pyenv/versions/3.13.0rc2/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Generate shell completions for `uvx`.
This creates a `uvx` top-level command, used only for completions, by
combining `uv tool uvx` (a hidden alias for `uv tool run`) with the
global arguments. The explicit combination is needed because the global
arguments would otherwise be missing (and if they are missing, clap's
debug assertions fail when `uv tool run` arguments refer to global
arguments in directives like `conflicts_with`).
Fixes #7258
## Test Plan
- Tested in bash using `eval "$(cargo run --bin uv
generate-shell-completion bash)"`
## Summary
All the registry wheels were getting cached under
`index/b2a7eb67d4c26b82` rather than `pypi`, because we used
`IndexUrl::Url` rather than `IndexUrl::from`.
## Summary
It's very unlikely that retaining these is beneficial, since you tend to
partition the cache by platform anyway.
Closes https://github.com/astral-sh/uv/issues/7394.
## Summary
Since https://github.com/astral-sh/uv/pull/7208, this is now _always_
firing, for every directory, because the version gets normalized (e.g.,
`1.2.3` gets normalized to `1-2-3`, which never matches the parsed
version). pip doesn't warn here, so I guess we won't either, because I
can't figure out a robust way to do this... We need to get the
non-normalized remainder after stripping the normalized package name,
but we strip the normalized package name from the normalized string, so
we only have a normalized remainder.
## Summary
Running `uv lock --no-sources` should still include dev dependencies,
since dev dependencies are defined separately from sources.
Closes https://github.com/astral-sh/uv/issues/7406.
We got user reports where users were confused about why they can't use
`[project.urls]` in `pyproject.toml` (I think that comes from Poetry). This
PR adds a hint that, according to PEP 621, you need to set
`project.name` when using any `project` fields. (PEP 621 also requires
`project.version` xor `dynamic = ["version"]`, but we check that later.)
The intermediate parsing layer to tell apart syntax errors from schema
errors doesn't incur a performance penalty according to epage
(https://github.com/toml-rs/toml/issues/778#issuecomment-2310369253).
Closes #6419
Closes #6760
Closes #7365
## Summary
This pull request adds support for additional file extension aliases in
the `SourceDistExtension` and `ExtensionError` enums. The newly supported
file extensions include `.tbz`, `.tgz`, `.txz`, `.tar.lz`, and `.tar.lzma`.
These changes align the extensions supported by `SourceDistExtension` with
those used by Python packaging tools, enhancing compatibility with a
broader range of source distribution formats.
## Test Plan
Tests should be added or updated to verify that the new extensions are
correctly recognized as valid source distributions and that errors are
correctly raised when unsupported extensions are provided.
## Summary
Part of https://github.com/astral-sh/uv/issues/7007.
The settings documentation reference currently doesn't separate "project
metadata" and "configuration" options, implying that it's possible to
set things like `dev-dependencies` in `uv.toml` when it's not. This is
an attempt at better separating those options, by having two different
sections:
- `Project metadata`, which holds configuration that can only be set in
`pyproject.toml`
- `Configuration`, which holds configuration that can be set both in
`pyproject.toml` and `uv.toml`
Here are some screenshots to show what this looks like (note that I
don't have code highlighting in the right navigation, which makes the
screenshots clunky, as the first item is always bigger because of the
missing "span" -- I think that's because it's an `mkdocs-material`
insiders feature, since I have the same thing on the `main` branch):
- Right side navigation:
<img width="241" alt="Screenshot 2024-09-05 at 01 19 50"
src="https://github.com/user-attachments/assets/012f64a4-8d34-4e34-a506-8d02dc1fbf98">
<img width="223" alt="Screenshot 2024-09-05 at 01 20 01"
src="https://github.com/user-attachments/assets/0b0fb71d-c9c3-4ee3-8f6e-cf35180b1a99">
- An option from "Project metadata" section that only applies to
`pyproject.toml`:
<img width="788" alt="Screenshot 2024-09-05 at 01 20 11"
src="https://github.com/user-attachments/assets/64349fbb-8623-4b81-a475-d6ff38c658f1">
- An option from "Configuration" section that applies both to
`pyproject.toml` and `uv.toml`:
<img width="787" alt="Screenshot 2024-09-05 at 01 20 33"
src="https://github.com/user-attachments/assets/732e43d3-cc64-4f5a-8929-23a5555d4c53">
## Test Plan
Local run of the documentation.
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Updates the output of `uv export` to include the command that produced
it, similar to how `uv pip compile` does. This addresses #7159 - I had
this same itch today, figured it was a good time to dive in!
## Test Plan
All the export unit tests were updated to test the new output format.
## Summary
Fix a symbol error.
## Test Plan
Signed-off-by: liangmulu <liangmulu@outlook.com>
## Summary
Running `uv self update` often hits the GitHub API rate limit and shows
an error like `error: HTTP status client error (403 Forbidden) for url
(https://api.github.com/repos/astral-sh/uv/releases)`.
To work around this rate limit, allow users to pass a GitHub token via
`--token` or the `UV_GITHUB_TOKEN` environment variable.
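For example (the token value is a placeholder):
```bash
uv self update --token <github-token>
# or, via the environment:
UV_GITHUB_TOKEN=<github-token> uv self update
```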
## Test Plan
---------
Signed-off-by: Frost Ming <me@frostming.com>
## Summary
This pull request fixes a typo in the `--build-constraints` flag, which
should be singular (`--build-constraint`). This update ensures consistency
across the documentation and prevents potential confusion for users.
Closes #7315
## Test Plan
The change was verified by reviewing the relevant documentation files
where the flag is referenced. No functional code changes were made, so
no additional testing is required beyond confirming the documentation
update.
## Tested
The change was tested by visually inspecting the updated documentation
to confirm that the typo has been corrected.
## Summary
This is arguably breaking, arguably a bug... Today, if project A depends
on project B, and you install A with dev dependencies enabled, you also
get B's dev dependencies. I think this is incorrect. Just like you
shouldn't be importing B's dependencies from A, you shouldn't be using
B's dev dependencies when developing on A.
Closes #7310.
Similar to our semantics for packages with pre-release versions.
We will not use prerelease versions unless there are only prerelease
versions available, a specific version is requested,
or the prerelease version is found in a reasonable source (active
environment, explicit path, etc. but not `PATH`).
For example, `uv python install 3.13 && uv run python --version` will no
longer use `3.13.0rc2` unless that is the only Python version available,
`--python 3.13` is used, or that's the Python version that is present in
`.venv`.
## Summary
This has been asked for a few times. There are risks that these checks
could be slow, but they're buyer-beware.
Closes https://github.com/astral-sh/uv/issues/7246.
## Summary
We have to call `to_dist` to get metadata while validating the lockfile,
but some of the distributions won't match the current platform -- and
that's fine!
## Summary
We need to apply the `--no-install` filters earlier, such that we don't
error if we only have a source distribution for a given package when
`--no-build` is provided but that package is _omitted_.
Closes #7247.
Following #7263, the 3.13.0rc2 releases are at the top of the download
list, but we should not select them unless 3.13 is actually requested.
Prior to this change, `uv python install` would install `3.13.0rc2`.
```
❯ cargo run -- python install --no-config
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.14s
Running `target/debug/uv python install --no-config`
Searching for Python installations
Installed Python 3.12.6 in 1.33s
+ cpython-3.12.6-macos-aarch64-none
```
```
❯ cargo run -- python install --no-config 3.13
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.14s
Running `target/debug/uv python install --no-config 3.13`
Searching for Python versions matching: Python 3.13
Installed Python 3.13.0rc2 in 1.18s
+ cpython-3.13.0rc2-macos-aarch64-none
```
## Summary
Replace the unmaintained `tokio-tar` crate with the `krata-tokio-tar`
fork. The latter just merged a fix necessary for the crate to work on
PowerPC, and has better chances of future maintenance.
Fixes #3423
## Test Plan
`cargo test`
This is preparatory work for the upload functionality, which needs to
read the METADATA file and attach its parsed contents to the POST
request: We move finding the `.dist-info` from `install-wheel-rs` and
`uv-client` to a new `uv-metadata` crate, so it can be shared with the
publish crate.
I don't know for sure whether it's the right place, since the upload code
isn't ready, but I'm PR-ing it now because it already had merge conflicts.
## Summary
If `--config-settings` are provided, we cache the built wheels under one
more subdirectory.
We _don't_ invalidate the actual source (i.e., trigger a re-download) or
metadata, though -- those can be reused even when `--config-settings`
change.
Closes https://github.com/astral-sh/uv/issues/7028.
Let's promote type hints!
## Summary
The generated script now annotates the return type of the dummy function
`hello()`.
## Test Plan
All existing tests have been synced with this update.
## Summary
This PR adds a more flexible cache invalidation abstraction for uv, and
uses that new abstraction to improve support for dynamic metadata.
Specifically, instead of relying solely on a timestamp, we now pass
around a `CacheInfo` struct which (as of now) contains
`Option<Timestamp>` and `Option<Commit>`. The `CacheInfo` is saved in
`dist-info` as `uv_cache.json`, so we can test already-installed
distributions for cache validity (along with testing _cached_
distributions for cache validity).
Beyond the defaults (`pyproject.toml`, `setup.py`, and `setup.cfg`
changes), users can also specify additional cache keys, and it's easy
for us to extend support in the future. Right now, cache keys can either
be instructions to include the current commit (for `setuptools_scm` and
similar) or file paths (for `hatch-requirements-txt` and similar):
```toml
[tool.uv]
cache-keys = [{ file = "requirements.txt" }, { git = true }]
```
This change should be fully backwards compatible.
Closes https://github.com/astral-sh/uv/issues/6964.
Closes https://github.com/astral-sh/uv/issues/6255.
Closes https://github.com/astral-sh/uv/issues/6860.
## Summary
We now track the discovered `IndexCapabilities` for each `IndexUrl`. If
we learn that an index doesn't support range requests, we avoid doing
any batch prefetching.
Closes https://github.com/astral-sh/uv/issues/7221.
## Summary
We were only applying exclusions when discovering the root, apparently.
Our logic now matches the original intent, which is...
- `exclude` always post-filters `members`.
- We don't treat globs any differently than non-globs.
The one confusing setup that falls out of this is that given:
```toml
members = ["foo/bar/baz"]
exclude = ["foo/bar"]
```
`foo/bar/baz` **would** be included. To exclude it, you would need:
```toml
members = ["foo/bar/baz"]
exclude = ["foo/bar/*"]
```
Closes https://github.com/astral-sh/uv/issues/7071.
## Summary
If we have a singleton `Range`, we don't need to iterate over the map of
available ranges; instead, we can just get the singleton directly.
Closes #6131.
## Summary
Use a path file (`.pth`) instead of `sitecustomize.py` for configuring the
path in ephemeral virtualenvs, overlaying the ephemeral venv on top of
the base `.venv`.
`sitecustomize.py` is a module in the Python installation and as such a
unique resource -- Homebrew Pythons on macOS already install such a file,
so uv's `sitecustomize.py`, placed in the ephemeral env, did not have any
effect.
I couldn't find any documentation explicitly saying that `addsitedir` is
valid in `.pth` files, but from testing it seems to be -- and there is
precedent in the existing `_virtualenv.pth`/`_virtualenv.py` pair, which
performs nontrivial operations.
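For illustration, the `site` module executes `.pth` lines that start with `import`, so a single line like the following (hypothetical path) is enough to overlay the base environment:
```text
import site; site.addsitedir('/path/to/project/.venv/lib/python3.12/site-packages')
```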
## Test Plan
- Testing on ephemeral venv, resolving to base venv including editable
install in base: done (py3.7, 3.12)
- Testing on homebrew python/macos: done (py3.11)
- tests: run_editable
Fixes #7152
## Summary
Fixes #7081
Treats source distribution `.tgz` files the same as `.tar.gz`.
## Test Plan
Quick Version
```bash
cd $(mktemp -d)
uv init
uv add --dev build
.venv/bin/python -m build -s .
mv -v dist/*tar.gz dist/"$(basename dist/*.tar.gz .tar.gz)".tgz
uv pip install dist/*.tgz
```
Can add a proper test to the branch if requested
Change the registry Python sorting implementation to be easier to
follow, making it clearer what it does and that it is a total order. No
functional changes.
## Summary
Explicitly list the formats and extensions that uv supports, based on
[this
list](86ee8d2c01/crates/distribution-filename/src/extension.rs (L70-L77)).
Not a huge fan of adding the section in `concepts/resolution.md`, but I
did not find a better place. Alternatively, we could maybe add a
dedicated page that briefly explains Python package types (wheels,
sdists), where such a section could live?
## Test Plan
Local run of the documentation.
This finally gets rid of our hack for working around "hidden"
state. We no longer do a roundtrip marker serialization and
deserialization just to avoid the hidden state.
This adds new routines to `MarkerTree` for "simplifying" and
"complexifying" a tree with respect to lower and upper Python version
bounds.
In effect, "simplifying" a marker is what you do when you write it to a
lock file. Namely, since `uv.lock` includes a `requires-python` bound at
the top, one can say that it acts as a bound on the supported Python
versions. That is, it establishes a context in which one can assume that
bound is true. Therefore, the markers we write can be simplified using
this assumption.
The reverse is "complexifying" a marker, and it's what you do when you
read a marker from the lock file. Namely, once a marker is read, it can
be very difficult in code to keep the corresponding requires-python
context from the lock file. If you lose track of it and decide to
operate on the "simplified" marker, then it's trivial for that to
produce an incorrect result.
I split this change into its own commit because I'm hoping it
crystalizes what it means when we say "a `MarkerTree` has hidden state."
That is, it isn't so much that there is some explicit member of a
`MarkerTree` that is omitted, but rather, the lower and upper version
bounds on `python_full_version` are rewritten as "unbounded" when
traversing the ADD for display.
We will actually retain this functionality, but rejigger it so that it's
explicit when we do this. In particular, this simplification has been
problematic for us because it fundamentally changes the truth tables of
a marker expression *unless* you are extremely careful to interpret it
only under the original context in which it was simplified. This is
quite difficult to do generally, and in prior work in #6268, we
completed a refactor where we worked around this type of simplification
and moved it to the edges of uv.
In subsequent commits, we'll re-implement this form of simplification as
a more explicit step.
## Summary
I think a better tradeoff here is to skip fetching metadata, even though
we can't validate the extras.
It will help with situations like
https://github.com/astral-sh/uv/issues/5073#issuecomment-2334235588 in
which, otherwise, we have to download the wheels twice.
(This is part of #5711)
## Summary
@BurntSushi and I spotted that the `derivative` crate is only used for
one enum in the entire codebase — however, it's a proc macro, and we pay
for the cost of (re)compiling it in many different contexts.
This replaces it with a private `Inner` core which uses the regular std
derive macros — inlining and optimizations should make this equivalent
to the other implementation, and not too hard to maintain hopefully
(versus a manual impl of `PartialEq` and `Hash`, which would have to be
kept in sync).
## Test Plan
Trust CI?
This PR revives #6129, but is less bold:
* It doesn't rename anything. (I think the rename is probably right
though.)
* It doesn't change the _default_ `Debug` impl. Instead, it offers this
as a new `MarkerTree::debug_graph` method.
I found this pretty useful for debugging since it gives a display format
that is more faithful to the internal representation of a `MarkerTree`.
So I think it's worth having around. But making it available in `Debug`
is perhaps a bridge too far since it isn't as familiar as the typical
PEP 508 representation and isn't as succinct.
I did consider printing this when using `{:#?}` (i.e., the "alternate"
debug representation), but too many things use that (like `insta` I
think) to make it practical.
Closes #6129
## Summary
This has bothered me for a while and should be fairly impactful for
users. It requires a weird implementation: the distribution-building
crate depends on the cache crate, so the prune operation can't live in
the cache crate, since it needs access to internals of the
distribution-building crate.
Closes https://github.com/astral-sh/uv/issues/7096.
## Summary
Like `uv sync`, you can omit the current project (`--no-emit-project`),
a specific package (`--no-emit-package`), or the entire workspace
(`--no-emit-workspace`).
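For example (the package name is illustrative):
```bash
uv export --no-emit-package torch > requirements.txt
```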
Closes https://github.com/astral-sh/uv/issues/6960.
Closes #6995.
Follow-up to #6959 and #6961: Use the reachability computation instead
of `propagate_markers` everywhere.
With `marker_reachability`, we have a function that computes for each
node the markers under which it is (`requirements.txt`, no markers
provided on installation) or can be (`uv.lock`, depending on the markers
provided on installation) included in the installation. Put differently:
If the marker computed by `marker_reachability` is not fulfilled for the
current platform, the package is never required on the current platform.
We compute the markers for each package in the graph; this includes the
virtual extra packages and the base packages. Since we know that each
virtual extra package depends on its base package (`foo[bar]` implies
`foo`), we only retain the base package marker in the `requirements.txt`
graph.
In #6959/#6961 we were only using it for pruning packages in `uv.lock`,
now we're also using it for the markers in `requirements.txt`.
I think this closes #4645, CC @bluss.
## Summary
We need to prioritize hashes for the distribution over hashes for the
related packages.
I think this needs to be redone entirely though. I can see other issues
with the current approach.
Closes https://github.com/astral-sh/uv/issues/7059.
## Summary
With #6917, there are a lot more PyPy downloads in `uv python list
--all-versions`. I find it clearer to have all the CPython downloads
listed, then all the PyPy downloads, rather than interspersing them. But
this is subjective, feel free to push back!
## Summary
This PR adds `--package` support to `uv build`, such that you can use
`--package` from anywhere in a workspace to build any member.
If a source directory is provided, we use that as the workspace root.
If a file is provided, we error.
For now, `uv build` only builds the current package, making it
semantically identical to `uv sync`.
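For example, from anywhere in the workspace (the member name is illustrative):
```bash
uv build --package my-member
```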
## Summary
This PR allows users to run `uv build --wheel ./path/to/source.tar.gz`
to build a wheel from a source distribution. This is also the default
behavior if you run `uv build ./path/to/source.tar.gz`. If you pass
`--sdist`, we error.
## Summary
This PR exposes uv's PEP 517 implementation via a `uv build` frontend,
such that you can use `uv build` to build source and binary
distributions (i.e., sdists and wheels) from a given directory.
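For example, a sketch of the basic invocation (assuming the default `dist/` output directory):
```bash
# Build an sdist and a wheel for the project in the current directory.
uv build
ls dist/
```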
There are some TODOs that I'll tackle in separate PRs:
- [x] Support building a wheel from a source distribution (rather than
from source) (#6898)
- [x] Stream the build output (#6912)
Closes https://github.com/astral-sh/uv/issues/1510
Closes https://github.com/astral-sh/uv/issues/1663.
In the `lock_redact_https` test specifically, a link-mode warning from
`uv` is triggered on my system. Debugging suggests it is provoked by
attempting to hardlink between `/tmp` and `~/.local`. Since these are on
different file systems for me (with `/tmp` being a ramdisk), the warning
fires, and this in turn spoils the snapshot when running tests locally.
This PR adds a test-specific filter rule to fix this.
## Summary
The error handlers now happen one level higher, matching on _any_ `Err`
that's returned from the lock-and-sync operations.
Closes https://github.com/astral-sh/uv/issues/7011.
`_virtualenv.py` doesn't need to import `__future__.annotations`, as it
has none.
Removing the import:
* Restores the action of the VIRTUALENV_PATCH on Python 3.6
* Eliminates 24 lines of error messages displayed by Python 3.6 when it
starts in an environment created by uv:
```plaintext
Error processing line 1 of /tmp/tmp.ENwqZ0oeyb/lib/python3.6/site-packages/_virtualenv.pth:
Traceback (most recent call last):
File "~/.pyenv/versions/3.6.15/lib/python3.6/site.py", line 168, in addpackage
exec(line)
File "<string>", line 1, in <module>
File "/tmp/tmp.ENwqZ0oeyb/lib/python3.6/site-packages/_virtualenv.py", line 3
from __future__ import annotations
^
SyntaxError: future feature annotations is not defined
Remainder of file ignored
```
(Python displays the errors above twice.)
I appreciate that the Python team no longer supports Python 3.6, but
RedHat-style Linux distributions will support Python 3.6 in their
`/usr/libexec/platform-python` until [releasever 8 expires in
2029](https://access.redhat.com/support/policy/updates/errata#RHEL8_Planning_Guide).
I'm happy for the community to move on, in general, but don't see the
harm in helping those who can't.
I'm not yet sure what in the “remainder of file ignored” is necessary
for my project's build, as I haven't yet finished digging that from
under Hatch. I'll follow up on #6426 when I do, so we can concentrate on
getting to the happy cow.
## Test Plan
```sh
( set -eu
export VIRTUAL_ENV="$(mktemp -d)"
./target/release/uv venv "$VIRTUAL_ENV" --python=python3.6
./target/release/uv pip install cowsay
$VIRTUAL_ENV/bin/python -m cowsay --text 'Look, a talking cow!' )
```
Happy output:
```plaintext
Using Python 3.6.15 interpreter at: ~/.local/bin/python3.6
Creating virtualenv at: /tmp/tmp.VHl4XNi3oI
Activate with: source /tmp//tmp.VHl4XNi3oI/bin/activate
Resolved 1 package in 929ms
Installed 1 package in 17ms
+ cowsay==6.0
____________________
| Look, a talking cow! |
====================
\
\
^__^
(oo)\_______
(__)\ )\/\
||----w |
|| ||
```
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Resolves issues mentioned in comments
* https://github.com/astral-sh/uv/issues/6699#issuecomment-2322515962
* https://github.com/astral-sh/uv/issues/6866#issuecomment-2322785906
Further investigation into the comments revealed that the pointer
arithmetic being performed in `let handle_start = unsafe {
crt_magic.offset(1 + handle_count) };` from [posy
trampoline](dda22e6f90/src/trampolines/windows-trampolines/posy-trampoline/src/bounce.rs (L146))
had some slight errors. Since `crt_magic` was a `*const u32`, doing an
offset by `1 + handle_count` would offset by too much, with some
possible out-of-bounds reads or attempts to call `CloseHandle` on garbage.
We needed to offset differently since we want to offset by
`handle_count` bytes after the initial offset as seen in
[launcher.c](888c48b568/PC/launcher.c (L578)).
Similarly, we needed to skip the first 3 handles, otherwise we'd still
be attempting to close standard I/O handles of the parent (in this case
the shell from `busybox.exe sh -l`).
I also added a few extra checks available from `launcher.c` which checks
if the handle value is `-2` just to match the distlib implementation
more closely and minimize differences.
## Test Plan
Manually compiled distlib's launcher with additional logging and
replaced `Lib/site-packages/pip/_vendor/distlib/t64.exe` with the
compiled one to log pointers. As a result, I was able to verify the
retrieved handle memory addresses in this function actually match in
both uv and distlib's implementation from within busybox.exe nested
shell where this behavior can be observed and manually tested.
I was also able to confirm this fixes the issues mentioned in the
comments, at least with busybox's shell, but I assume this would fix the
case with cmake.
## Open areas
`launcher.c` also [checks the
size](888c48b568/PC/launcher.c (L573-L576))
of `cbReserved2` before retrieving `handle_start` which this function
currently doesn't do. If we wanted to, we could add the additional check
here as well, but I wasn't fully sure why it wasn't added in the first
place. Thoughts?
```rust
// Verify the buffer is large enough
if si.cbReserved2 < (size_of::<u32>() as isize + handle_count + size_of::<HANDLE>() as isize * handle_count) as u16 {
    return;
}
```
---------
Co-authored-by: konstin <konstin@mailbox.org>
When a package is included under a platform-specific marker, we know
that wheels that mismatch this marker can never be installed, so we drop
them from the lockfile.
In transformers, we have:
* `tensorflow-text`: `tensorflow-macos; python_full_version >= '3.13'
and platform_machine == 'arm64' and platform_system == 'Darwin'`
* `tensorflow-macos`: `tensorflow-cpu-aws; (python_full_version < '3.10'
and platform_machine == 'aarch64' and platform_system == 'Linux') or
(python_full_version >= '3.13' and platform_machine == 'aarch64' and
platform_system == 'Linux') or (python_full_version >= '3.13' and
platform_machine == 'arm64' and platform_system == 'Linux')`
* `tensorflow-macos`: `tensorflow-intel; python_full_version >= '3.13'
and platform_system == 'Windows'`
This means that `tensorflow-cpu-aws` and `tensorflow-intel` can never be
installed, and we can drop them from the lockfile.
## Summary
I'm not convinced that the behavior is correct as-implemented. When the
user passes `--python >=3.8` or we discover a `requires-python` from
the workspace, we're currently writing that request out to
`.python-version`. I would probably rather that we write the resolved
patch version?
Closes https://github.com/astral-sh/uv/issues/6821.
The key change here is to use raw strings so that we don't need to
double-escape things like `\d`. And in particular, we rely on the fact
that `"\n"` and `r"\n"` are precisely equivalent when fed to
`Regex::new` in the `regex` crate.
Previously we were using `[+-~]`, but this includes the full range of
characters from `+` to `~`. Incidentally, this does include `-`. We
instead rewrite this as `[-+~]`, which probably matches the intent.
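A quick standalone illustration with the `regex` crate (not uv code):
```rust
use regex::Regex;

fn main() {
    // `[+-~]` is a character *range* from '+' (0x2B) to '~' (0x7E), so it
    // also matches letters like 'a'.
    assert!(Regex::new(r"[+-~]").unwrap().is_match("a"));
    // `[-+~]` matches only the three literal characters '-', '+', and '~'.
    assert!(!Regex::new(r"[-+~]").unwrap().is_match("a"));
}
```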
Unlike the previous update, this message is specifically referring to a
fork's markers inside the resolver. We probably *could* massage the
message to be simplified with respect to requires-python, but it's not
obvious to me that that is the right thing to do.
It is not clear whether this update is correct or not. Moreover, it's
not clear whether the status quo is correct or not. The problem is that
`transformers` is so big that it's very hard to understand what the
right output is without a deeper investigation.
One thing that is interesting is if fork prioritization is removed in
this PR *and* on `main`, then the differences in this ecosystem test go
away.
We've decided for now to move forward with this update even though we're
uncertain because this PR fixes a few outstanding correctness issues.
This update changes the error message to one that is worse than the
status quo, but it is still correct because `datasets >= 2.19` doesn't
actually exist given our `EXCLUDE_NEWER` in tests at present.
The underlying cause here seems to be in how PubGrub deals with
reporting incompatibilities. Namely, when it has `foo < 1` and
`foo >= 1`, it reports an incompatibility immediately before looking for
versions. But when it has `foo < 1` and `foo >= 1 ; marker`, then
because they aren't both pubgrub "packages," it starts requesting
versions first and hits the "not available" error path instead of the
"incompatible" error path.
Since this is more of an underlying issue with how we setup
`PubGrubPackage` and our interaction with pubgrub, we ended up deciding
to move forward here with the regression since this PR is fixing a
correctness issue. In particular, if one changes the `requires-python`
to `>=3.8`, then both `main` and this PR produce similarly bad error
messages.
This commit refactors how we deal with `requires-python` so that instead of
simplifying markers of dependencies inside the resolver, we do it at the
edges of our system. When writing markers to output, we simplify when
there's an obvious `requires-python` context. And when reading markers
as input, we complexify markers with the relevant `requires-python`
constraint.
When I first wrote this routine, it was intended to only emit a trace
for the final "unioned" resolution. But we actually moved that semantic
operation to the construction of the resolution *graph*. So there is no
unioned `Resolution` any more.
But this is still useful to see. So I changed this to just emit a trace
of *every* resolution right before constructing the graph.
It might be nice to also emit a trace of the unioned graph too. Or
perhaps we should do that instead if this proves too noisy. (Although
this is only emitted at TRACE level.)
These are regression tests for #6269, #6412 and #6836. In this commit,
their test outputs are all wrong. We'll update these snapshots after
fixing the underlying bug by refactoring how `requires-python`
simplification works.
- Respect `UV_PROJECT_ENVIRONMENT` when in project root
- Add `--no-project` and `--no-workspace` to opt out of the above and of
`requires-python` detection
- Rename `[NAME]` to `[PATH]` in CLI
Allows configuration of the (currently hard-coded) path to the virtual
environment in projects using the `UV_PROJECT_ENVIRONMENT` environment
variable.
If empty, we'll ignore it. If a relative path, it will be resolved
relative to the workspace root. If an absolute path, we'll use that.
This feature targets use in Docker images and CI. The variable is
intended to be set once in an isolated system and used for all uv
operations.
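For example, in a CI job (the path is illustrative):
```bash
export UV_PROJECT_ENVIRONMENT=/opt/uv-env
uv sync
```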
We do not expose a CLI option or configuration file setting — we may
pursue those later but I see them as lower priority. I think a
system-level environment variable addresses the most pressing use-cases
here.
This doesn't special-case the system environment, which means that you
can use this to write to the system Python environment. I would
generally strongly recommend against doing so. The insightful comment
from @edmorley at
https://github.com/astral-sh/uv/issues/5229#issuecomment-2312702902
provides some context on why. More generally, `uv sync` will remove
packages from the environment by default. This means that if the system
environment contains any packages relevant to the operation of the
system (that are not dependencies of your project), `uv sync` will break
it. I'd only use this in Docker or CI, if anywhere. Virtual environments
have lots of benefits, and it's only [one line to "activate"
them](https://docs.astral.sh/uv/guides/integration/docker/#using-the-environment).
If you are considering using this feature to use Docker bind mounts for
developing in containers, I would highly recommend reading our [Docker
container development
documentation](https://docs.astral.sh/uv/guides/integration/docker/#developing-in-a-container)
first. If the solutions there do not work for you, please open an issue
describing your use-case and why.
We do not read `VIRTUAL_ENV` and do not have plans to at this time.
Reading `VIRTUAL_ENV` is high-risk, because users can easily leave an
environment active and use the uv project interface today. Reading
`VIRTUAL_ENV` would be a breaking change. Additionally, uv is
intentionally moving away from the concept of "active environments" and
I don't think syncing to an "active" environment is the right behavior
while managing projects. I plan to add a warning if `VIRTUAL_ENV` is
set, to avoid confusion in this area (see
https://github.com/astral-sh/uv/pull/6864).
This does not directly enable centrally managed virtual environments. If
you set `UV_PROJECT_ENVIRONMENT` to an absolute path and use it across
multiple projects, they will clobber each other's environments. However,
you could use this with something like `direnv` to achieve "centrally
managed" environments. I intend to build a prototype of this eventually.
See #1495 for more details on this use-case.
Lots of discussion about this feature in:
- https://github.com/astral-sh/rye/issues/371
- https://github.com/astral-sh/rye/pull/1222
- https://github.com/astral-sh/rye/issues/1211
- https://github.com/astral-sh/uv/issues/5229
- https://github.com/astral-sh/uv/issues/6669
- https://github.com/astral-sh/uv/issues/6612
Follow-ups:
- #6835
- https://github.com/astral-sh/uv/pull/6864
- Document this in the project concept documentation (can probably
re-use some of this post)
Closes https://github.com/astral-sh/uv/issues/6669
Closes https://github.com/astral-sh/uv/issues/5229
Closes https://github.com/astral-sh/uv/issues/6612
## Summary
HTTP headers are supposed to be case-insensitive (RFC 2616), but there
are some implementations that don't normalize them.
I noticed this while migrating to `uv`: calls to an internal registry
failed. A man-in-the-middle server helped me find that `pip` uses
Title-Case while `uv pip` uses lowercase.
## Test Plan
I tested `uv` with the same server and now it works fine.
## Summary
Separate exceptions for different timeouts to make it easier to debug
issues like #6105.
## Test Plan
Not tested at all.
## Summary
Closes #6319.
## Test Plan
I tested with `file:///mirror`, `file://localhost/mirror`, and
`http://mirror` to confirm that it was working as expected.
``` shell-session
/private/tmp/mirror-local 07:08:18
:) tree mirror
mirror/
└── 20240814/
└── cpython-3.12.5+20240814-aarch64-apple-darwin-install_only_stripped.tar.gz
```
<img width="626" alt="image"
src="https://github.com/user-attachments/assets/9c04224d-305c-47ee-a524-4a6abeb79da4">
## Summary
Right now, we have slightly different `requires-python` semantics for
`-p 3.11` vs. `-p 3.11 --universal`, and slightly different (wrong)
semantics for how we compare against the _installed_ Python version
(which doesn't ignore upper bounds, but should).
This PR rips it all out and replaces it with consistent semantics across
`uv lock`, `uv pip compile -p 3.11`, and `uv pip compile -p 3.11
--universal`. We now always ignore upper bounds.
Closes https://github.com/astral-sh/uv/issues/6859.
Closes https://github.com/astral-sh/uv/issues/5045.
Forward an error for missing temp directories:
```
$ env TMPDIR=.tmp uv-debug pip install httpx
error: No such file or directory (os error 2) at path "/home/konsti/projects/uv/.tmp/.tmpgIBhhh"
```
Fixes #6878
## Summary
A few of these should use `absolute` instead of `canonicalize`; and
apparently we no longer need to strip the `CANONICAL_CWD` to get tests
passing.
## Summary
This PR addresses an issue on Windows where `std::fs::canonicalize` can
fail or panic when resolving paths on mapped network drives. By
replacing it with `Simplified::simple_canonicalize`, we aim to improve
the robustness and cross-platform compatibility of path resolution.
### Changes
* Updated `CANONICAL_CWD` in `path.rs` to use
`Simplified::simple_canonicalize` instead of `std::fs::canonicalize`.
### Why
* `std::fs::canonicalize` has known issues with resolving paths on
mapped network drives on Windows, which can lead to panics or incorrect
path resolution.
* `Simplified::simple_canonicalize` internally uses
`dunce::canonicalize`, which handles these cases more gracefully,
ensuring better stability and reliability.
## Test Plan
Since `simple_canonicalize` has already been tested in a prior PR, this
change is expected to work without introducing any new issues. No
additional tests are necessary beyond ensuring existing tests pass,
which will confirm the correctness of the change.
## Summary
- The change, related to #6635, is to include compiled Python files
(`.pyc`) in the `uv run` command.
- After this change, `uv run foo.pyc` should spawn `python foo.pyc`.
## Test Plan
- There is a test that uses TestContext to compile and run a simple
python file that prints "Hello World".
- I built the project locally and tried the same with a simple python
file that I had compiled.
## Summary
Closes https://github.com/astral-sh/uv/issues/6840.
## Test Plan
```
❯ ~/workspace/uv/target/debug/uv pip list --verbose
DEBUG uv 0.4.0
DEBUG Searching for Python interpreter in system path
DEBUG Found `cpython-3.12.3-macos-aarch64-none` at `/Users/crmarsh/.local/share/rtx/installs/python/3.12.3/bin/python3` (search path)
DEBUG Using Python 3.12.3 environment at /Users/crmarsh/.local/share/rtx/installs/python/3.12.3/bin/python3
```
As suggested by @samypr100 on #6680:
https://github.com/astral-sh/uv/pull/6680#issuecomment-2313607984
## Summary
Instead of using `UV_INTERNAL__TEST_DIR`, it simply exports `TEMP` when
running Windows jobs.
## Test Plan
I'm going to run this manually under ProcMon on my Windows machine and
see where uv writes temp files, hopefully to the dev drive and not
`%(LOCAL)APPDATA%` or something.
I'm going to commit a dummy code change and look at build time changes
in CI.
## Summary
Closes https://github.com/astral-sh/uv/issues/6699
In cases like the ones described in
https://github.com/astral-sh/uv/issues/6699, `lpReserved2` somehow seems
to report multiple file descriptors that were not tied to any valid
handles. The previous implementation was faulting as it would try to
dereference these invalid handles. This change moves to using `HANDLE`
directly and checks `is_invalid` instead before attempting to close
them.
## Test Plan
Manually tested and verified using `busybox-w32` like described in the
issue.
---------
Co-authored-by: konstin <konstin@mailbox.org>
We currently normalize package and extra names and drop the whitespace
from version specifiers, but we were not normalizing the order of the
specifiers. By sorting them we match the behavior of `packaging` and
become independent of build backends reordering specifiers (#6332).
Surprisingly, the snapshot diff isn't large - most people were already
writing sorted specifiers. Still, this will lead to observable
differences in lockfiles between releases in cases where there are
entries in `requires-dist` that were not previously sorted (while the
total number of `requires-dist` is already small compared to the overall
lockfile).
Microsoft Store Pythons do not always register themselves in the
registry, so we port
<58ce131037/PC/launcher2.c (L1744)>
and look them up on the filesystem in known locations.
## Test Plan
So far I've confirmed that we find a store Python when I use `cargo run
python list`. Can we maybe make this part of one of the platform tests?
Our current strategy of parsing the output of `py --list-paths` to get
the installed Python versions on Windows is brittle (#6524, missing
`py`, etc.) and slow (10ms last time I measured).
Instead, we should follow the spec and read the Python versions from the
registry per PEP 514.
It's not fully clear which errors we should ignore and which ones we
need to raise.
We're using the official rust-for-windows crates for accessing the
registry.
Fixes #1521, Fixes #6524
Most times we compile with `cargo build`, we don't actually need
`uv-dev`. By making `uv-dev` dependent on a new `dev` feature, it
doesn't get built by default anymore, but only when passing `--features
dev`.
Hopefully a small improvement for compile times or at least system load.
## Summary
We now respect the user-provided upper bound for `requires-python`.
So, if the user has `requires-python = "==3.11.*"`, we won't explore
forks that have `python_version >= '3.12'`, for example.
However, we continue to _only_ compare the lower bounds when assessing
whether a dependency is compatible with a given Python range.
Closes https://github.com/astral-sh/uv/issues/6150.
Previously, we were always asking Cargo to rebuild `uv-cli` if
`.git/HEAD` had changed. But in a worktree, `.git` is a file, not a
directory. And the file contains the path to git's internal worktree
state, which also has its own `HEAD` file. So in the case of a worktree,
we read the file and tell Cargo to watch the worktree-specific `HEAD`
file instead of `.git/HEAD`.
The main thing this fixes is that, previously, in a worktree, `cargo
build` would *always* re-compile `uv` even if nothing changed.
This doesn't impact or fix anything in "typical" clones of uv though.
Only in worktrees.
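A minimal sketch of the idea (not uv's exact build script): resolve the real `HEAD` file even when `.git` is a worktree pointer file, and tell Cargo to watch that instead.
```rust
use std::fs;
use std::path::PathBuf;

fn head_path() -> PathBuf {
    let dot_git = PathBuf::from(".git");
    if dot_git.is_file() {
        // In a worktree, `.git` is a file like `gitdir: /repo/.git/worktrees/<name>`.
        if let Ok(contents) = fs::read_to_string(&dot_git) {
            if let Some(gitdir) = contents.trim().strip_prefix("gitdir: ") {
                return PathBuf::from(gitdir).join("HEAD");
            }
        }
    }
    dot_git.join("HEAD")
}

fn main() {
    // Only trigger a rebuild when the commit actually changes.
    println!("cargo:rerun-if-changed={}", head_path().display());
}
```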
Closes #6196, Closes #6197
## Summary
The interface here is intentionally a bit more limited than `uv pip
compile`, because we don't want `requirements.txt` to be a system of
record -- it's just an export format. So, we don't write annotation
comments (i.e., which dependency is requested from which), we don't
allow writing extras, etc. It's just a flat list of requirements, with
their markers and hashes.
Closes #6007.
Closes #6668.
Closes #6670.
## Summary
resolves https://github.com/astral-sh/uv/issues/6203
## Test Plan
added a test fixing the bug described in the issue.
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
- Resolves issue #6690
## Test Plan
```
$ cargo run python list
```
---------
Co-authored-by: leaf-soba <leaf-soba@gmail.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Whether a package is itself virtual isn't captured in the package
metadata, so we have to compare the sources.
Closes https://github.com/astral-sh/uv/issues/6749.
## Summary
If you're in a directory, and there's a workspace above it, we check if
the directory is excluded from the workspace members... But not if it's
_included_ in the first place.
Closes https://github.com/astral-sh/uv/issues/6732.
## Summary
Use a dedicated source type for non-package requirements. Also enables
us to support non-package `path` dependencies _and_ removes the need to
have the member `pyproject.toml` files available when we sync _and_
makes it explicit which dependencies are virtual vs. not (as evidenced
by the snapshot changes). All good things!
## Summary
This PR makes `cargo test | windows` faster in CI.
Before/after CI timing screenshots omitted.
## Also
This PR disables the `brotli` feature of `async-compression` since it's
not strictly needed, but this has little to do with the improvements
(it's still less code to build).
This PR introduces additional code in `uv tool uninstall` to ignore errors
(that only seem to happen on ReFS, i.e., on Dev Drives) akin to "the thing
we're trying to delete cannot be deleted because it's already being
deleted".
If `raw_os_error` were stable, we could do u32 matching instead of that
`.to_string().contains()` abomination.
## Summary
In theory this problem already existed for `PKG-INFO`, but `egg-info`
would be more common, I think, since it's built in the source tree.
Closes https://github.com/astral-sh/uv/issues/6712.
Changes the `uv init` experience with a focus on working for more
use-cases out of the box.
- Adds `--app` and `--lib` options to control the created project style
- Changes the default from a library with `src/` and a build backend
(`--lib`) to an application that is not packaged (`--app`)
- Hides the `--virtual` option and replaces it with `--package` and
`--no-package`
- `--no-package` is not allowed with `--lib` right now, but it could be
in the future once we understand a use-case
- Creates a runnable project
- Applications have a `hello.py` file which you can run with `uv run
hello.py`
- Packaged applications, e.g., `uv init --app --package` create a
package and script entrypoint, which you can run with `uv run hello`
- Libraries provide a demo API function, e.g., `uv run python -c "import
name; print(name.hello())"` — this is unchanged
Closes #6471
## Summary
We were writing empty lines between the dependencies and the
`tool.uv.sources` table, which led to the `/// script` tag being
unclosed and thus not recognized.
Closes https://github.com/astral-sh/uv/issues/6700.
Should this be user-facing by default? It seems annoying because then
it's unavoidable if you (for whatever reason) have an intentionally
unclosed tag.
Motivated by https://github.com/astral-sh/uv/issues/6700.
## Summary
The basic idea here is: any project can either be a package, or not
("virtual").
If a project is virtual, we don't build or install it.
A project is virtual if either of the following are true:
- `tool.uv.virtual = true` is set.
- `[build-system]` is absent.
The concept of "virtual projects" only applies to workspace member right
now; it doesn't apply to `path` dependencies which are treated like
arbitrary Python source trees.
TODOs that should be resolved prior to merging:
- [ ] Documentation
- [ ] How do we reconcile this with "virtual workspace roots" which are
a little different -- they omit `[project]` entirely and don't even have
a name?
- [x] `uv init --virtual` should create a virtual project rather than a
virtual workspace.
- [x] Running `uv sync` in a virtual project after `uv init --virtual`
shows `Audited 0 packages in 0.01ms`, which is awkward. (See:
https://github.com/astral-sh/uv/pull/6588.)
Closes https://github.com/astral-sh/uv/issues/6511.
## Summary
Tries to improve the following:
```
❯ cargo run sync
Compiling uv-cli v0.0.1 (/Users/crmarsh/workspace/uv/crates/uv-cli)
Compiling uv v0.3.3 (/Users/crmarsh/workspace/uv/crates/uv)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 3.81s
Running `/Users/crmarsh/workspace/uv/target/debug/uv sync`
Using Python 3.12.1
Creating virtualenv at: .venv
Resolved in 7ms
Audited environment in 0.05ms
```
In this case we don't actually have any dependencies -- should we just
omit `Resolved in...` and perhaps even the audited line?
## Summary
This PR revives https://github.com/astral-sh/uv/pull/4944, which I think
was a good start towards adding `--trusted-host`. Last night, I tried to
add `--trusted-host` with a custom verifier, but we had to vendor a lot
of `reqwest` code and I eventually hit some private APIs. I'm not
confident that I can implement it correctly with that mechanism, and
since this is security, correctness is the priority.
So, instead, we now use two clients and multiplex between them.
Closes https://github.com/astral-sh/uv/issues/1339.
## Test Plan
Created a self-signed certificate and ran `python3 -m http.server --bind
127.0.0.1 4443 --directory . --certfile cert.pem --keyfile key.pem` from
the packse index directory.
Verified that `cargo run pip install
transitive-yanked-and-unyanked-dependency-a-0abad3b6 --index-url
https://127.0.0.1:8443/simple-html` failed with:
```
error: Request failed after 3 retries
Caused by: error sending request for url (https://127.0.0.1:8443/simple-html/transitive-yanked-and-unyanked-dependency-a-0abad3b6/)
Caused by: client error (Connect)
Caused by: invalid peer certificate: Other(OtherError(CaUsedAsEndEntity))
```
Verified that `cargo run pip install
transitive-yanked-and-unyanked-dependency-a-0abad3b6 --index-url
'https://127.0.0.1:8443/simple-html' --trusted-host '127.0.0.1:8443'`
failed with the expected error (invalid resolution) and made valid
requests.
Verified that `cargo run pip install
transitive-yanked-and-unyanked-dependency-a-0abad3b6 --index-url
'https://127.0.0.1:8443/simple-html' --trusted-host '127.0.0.2' -n` also
failed.
As described in #4242, we're currently incorrectly downloading glibc
python-build-standalone on musl targets, but we also can't fix this by
using musl python-build-standalone on musl targets since the musl builds
are effectively broken.
We reintroduce the libc detection previously removed in #2381, using it
to detect which libc is the current one before we have a python
interpreter. I changed the strategy a bit to support an empty `PATH`,
which we use in the tests.
For simplicity, I've decided to just filter out the musl
python-build-standalone archives from the list of available archives,
given this is temporary. This means we show the same error message as if
we don't have a build for the platform. We could also add a dedicated
error message for musl.
Fixes #4242
## Test Plan
Tested manually.
On my ubuntu host, python downloads continue to pass:
```
target/x86_64-unknown-linux-musl/debug/uv python install
```
On alpine, we fail:
```
$ docker run -it --rm -v .:/io alpine /io/target/x86_64-unknown-linux-musl/debug/uv python install
Searching for Python installations
error: No download found for request: cpython-any-linux-x86_64-musl
```
## Summary
We now respect the `environments` field in `uv pip compile --universal`,
e.g.:
```toml
[tool.uv]
environments = ["platform_system == 'Emscripten'"]
```
Closes https://github.com/astral-sh/uv/issues/6641.
## Summary
This is similar to https://github.com/astral-sh/uv/pull/6171 but more
expansive... _Anywhere_ that we test requirements for platform
compatibility, we _need_ to respect the resolver-friendly markers. In
fixing the motivating issue (#6621), I also realized that we had a bunch
of bugs here around `pip install` with `--python-platform` and
`--python-version`, because we always performed our `satisfy` and `Plan`
operations on the interpreter's markers, not the adjusted markers!
Closes https://github.com/astral-sh/uv/issues/6621.
For various reasons, I have a preference for out-of-tree virtual
environments. Things just work if I symlink, but I don't know that this
is guaranteed, so I thought I'd add a test for it. It looks like there's
another code path that matters (`FoundInterpreter::discover ->
PythonEnvironment::from_root`) for the higher-level commands, but I
couldn't spot a good place to test that.
Related discussion:
https://github.com/astral-sh/uv/issues/1495#issuecomment-1950442191 /
https://github.com/astral-sh/uv/issues/1578#issuecomment-1949911871
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
<!-- What's the purpose of the change? What does it do, and why? -->
This changes the behavior of the per-dependency build-isolation
override a bit: if the dist name is known, it is passed into the
`SourceBuild::Setup` function. This allows the override to work for
projects without a `pyproject.toml`, like `detectron2`, using the
specified requirement name. Previously, only the `pyproject.toml` name
could be used, which these projects lack. An example of a use case is
given in the *Test Plan* section.
Additionally, `no_build_isolation_package` has been added to
`InstallerSettingsRef` and is now used in `sync` and other commands, as
this was not done yet.
This is useful if you want to **non**-isolate a single package, even
one without a proper `pyproject.toml`.
## Test Plan
<!-- How was it tested? -->
With the following `pyproject.toml`:
```toml
[project]
name = "detectron-uv"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"detectron2",
"setuptools",
"torch",
]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.uv.sources]
detectron2 = { git = "https://github.com/facebookresearch/detectron2", rev = "bcfd464d0c810f0442d91a349c0f6df945467143" }
[tool.uv]
no-build-isolation-package = ["detectron2"]
```
The package `detectron2` is now correctly **non**-isolated. Before,
because the logic depended on getting the name from the
`pyproject.toml`, which `detectron2` lacks, you would get a message
that the source could not be built. This was because it would
still be *isolated* in that case.
With these changes you can now install using (given that you are inside
a workspace with a venv):
```
uv pip install torch setuptools
uv sync
```
This would previously fail with something like:
```
error: Failed to prepare distributions
Caused by: Failed to fetch wheel: detectron2 @ git+https://github.com/facebookresearch/detectron2@bcfd464d0c810f0442d91a349c0f6df945467143
Caused by: Build backend failed to determine extra requires with `build_wheel()` with exit status: 1
--- stdout:
--- stderr:
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "/Users/tdejager/Library/Caches/uv/builds-v0/.tmptloDcZ/lib/python3.12/site-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/tdejager/Library/Caches/uv/builds-v0/.tmptloDcZ/lib/python3.12/site-packages/setuptools/build_meta.py", line 302, in _get_build_requires
self.run_setup()
File "/Users/tdejager/Library/Caches/uv/builds-v0/.tmptloDcZ/lib/python3.12/site-packages/setuptools/build_meta.py", line 502, in run_setup
super().run_setup(setup_script=setup_script)
File "/Users/tdejager/Library/Caches/uv/builds-v0/.tmptloDcZ/lib/python3.12/site-packages/setuptools/build_meta.py", line 318, in run_setup
exec(code, locals())
File "<string>", line 10, in <module>
ModuleNotFoundError: No module named 'torch'
---
Caused by: This error likely indicates that detectron2 @ git+https://github.com/facebookresearch/detectron2@bcfd464d0c810f0442d91a349c0f6df945467143 depends on torch, but doesn't declare it as a build dependency. If detectron2 @ git+https://github.com/facebookresearch/detectron2@bcfd464d0c810f0442d91a349c0f6df945467143 is a first-party package, consider adding torch to its `build-system.requires`. Otherwise, `uv pip install torch` into the environment and re-run with `--no-build-isolation`.
```
**Edit**:
Fixed some wording; I had used isolated where it should be **non**-isolated.
## Summary
Fixes: #6615
Currently, some packages are not installable with `uv`, like `ziglang`
on Linux.
Everything is described in the issue! 😄
<!-- What's the purpose of the change? What does it do, and why? -->
## Test Plan
<!-- How was it tested? -->
I added a unit test for the problematic use case.
I also checked that the previous unit tests still pass in order to
ensure backward compatibility.
## Summary
This was an oversight. The existing test was (correctly) failing, but
for the wrong reason (failing to build the package during _resolution_).
Updates the snapshot for the deployment from
https://github.com/astral-sh/pypi-proxy/pull/9 — for a while now, we've
only been failing on file requests not registry requests because the
proxy auth was setup wrong.
For users who were using absolute paths in the `pyproject.toml`
previously, this is a behavior change: We now convert all absolute paths
in `path` entries to relative paths. Since I assume that no one relies
on absolute paths in their lockfiles -- they are intended to be portable --
I'm tagging this as a bugfix.
Closes https://github.com/astral-sh/uv/pull/6438
Fixes https://github.com/astral-sh/uv/issues/6371
Previously, we excluded these and only looked at system interpreters.
However, it makes sense for this to match the typical Python discovery
experience. We could consider swapping the default... I'm not sure what
makes more sense. If we change the default (as written now), this could
arguably be a breaking change.
If we don't do this, and `uv run` invokes something like `uv run
--isolated uv pip install foo`, uv won't mutate the isolated environment;
it'll mutate whatever outer environment it finds.
## Summary
<!-- What's the purpose of the change? What does it do, and why? -->
This is an attempt at fixing https://github.com/astral-sh/uv/issues/6486.
It reverts changes made to `pyproject.toml` when sync fails during `uv
add`. This solution felt a little heavy-handed and could probably be
improved, but it is what happens when locking fails during `uv add`, so I
thought it would be a good start.
## Test Plan
<!-- How was it tested? -->
I have added a test case for this to `tests/edit.rs`. It uses
`pytorch==1.0.2` to achieve the desired failure.
## Summary
Indicate the previous version from which uv was upgraded when running
`uv self update`. I thought it could be useful in some situations to
have a trace of the previous version that was installed.
## Test Plan
Did not find a way to test this, since this heavily relies on being able
to use the installation script and the ability to publish artifacts for
a specific tag.
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
Two small typo fixes: one in the documentation and one in a comment in
the source code I happened to come across.
## Summary
We accidentally changed the Windows cache directory from
`C:\Users\User\AppData\Local\uv\cache` to
`C:\Users\User\AppData\Local\uv` in v0.3.0. We're considering this a
bug, since it does _not_ match the documentation, and prior to v0.3.0,
we always used the former. This PR migrates back to the previous
location. It should be seamless for users, as we move the cache items to
the new location on startup.
Closes https://github.com/astral-sh/uv/issues/6417.
While working on https://github.com/astral-sh/uv/pull/6389 I discovered
we never checked `cache.get_url` here, which is wrong — though I don't
think it had much effect in practice since the realm would typically
match first. The main problem is that when we call `get_url` later we
hard-code the username to `None` because we assume we checked up here
with the username if present.
## Summary
This reverts commit 7d92915f3d.
I thought this would be a net performance improvement, but we've now had
multiple reports that this made locking _extremely_ slow. I also tested
this today with a very large codebase against a registry that does not
support range requests, and the number of downloads was sort of wild to
watch. Reverting this reduced resolution time by over 50%.
Closes #6104.
## Summary
Mark the new tests requiring Python 3.12.1 specifically as requiring
the python-patch feature. This makes the test suite pass again on systems
not having this specific version (and disabling the feature).
## Test Plan
`cargo test` on Gentoo :-).
## Summary
It turns out we weren't applying the collapse logic here, so dev deps
with extras were repeated. This was generally ok... unless we ended up
_dropping_ an extra, in which case, you now have a duplicate.
Closes https://github.com/astral-sh/uv/issues/6380.
In https://github.com/astral-sh/uv/pull/6359, I accidentally made `uv
python install` prefer `.python-version` files over `.python-versions`
files -.-; kind of niche, but it's a regression.
Closes https://github.com/astral-sh/uv/issues/6285
Introduces a new problem if the user says `python` but it doesn't exist
with that name:
```
❯ cargo run -q -- -v run python --version
DEBUG uv 0.3.0
DEBUG Found project root: `/Users/zb/workspace/uv`
DEBUG Project `uv` is marked as unmanaged
DEBUG No project found; searching for Python interpreter
DEBUG Reading requests from `/Users/zb/workspace/uv/.python-version`
DEBUG Searching for Python 3.11 in managed installations or system path
DEBUG Found `cpython-3.12.4-macos-aarch64-none` at `/Users/zb/workspace/uv/.venv/bin/python3` (virtual environment)
DEBUG Searching for managed installations at `/Users/zb/Library/Application Support/uv/python`
DEBUG Found `cpython-3.11.9-macos-aarch64-none` at `/opt/homebrew/bin/python3.11` (search path)
DEBUG Using Python 3.11.9 interpreter at: /opt/homebrew/opt/python@3.11/bin/python3.11
DEBUG Running `python --version`
error: Failed to spawn: `python`
Caused by: No such file or directory (os error 2)
```
I'll fix this separately.
## Summary
We're gonna work on a more comprehensive review of whether we should
preserve the username here, but for now, `git@` is effectively a
convention for GitHub and GitLab etc.
Closes https://github.com/astral-sh/uv/issues/6305.
## Test Plan
I guess we don't have infrastructure for testing SSH private keys right
now, but...
```
❯ cargo run init foo
❯ cd foo
❯ cargo run add git+ssh://git@github.com/astral-sh/mkdocs-material-insiders.git
```
For a path dep such as the root project, uv can read metadata statically
from `pyproject.toml` or dynamically from the build backend.
Python's `packaging`
[sorts](cc938f984b/src/packaging/specifiers.py (L777))
specifiers before emitting them, so all build backends built on top of
it - such as hatchling - will change the specifier order compared to
pyproject.toml. The core metadata spec does say "If a field is not
marked as Dynamic, then the value of the field in any wheel built from
the sdist MUST match the value in the sdist", but it doesn't specify if
"match" means string equivalent or semantically equivalent, so it's
arguable whether that is spec conformant. This reordering means that the specifiers
have a different ordering when coming from the build backend than when
read statically from pyproject.toml.
Previously, we tried to read path dep metadata in order:
* From the (built wheel) cache (`packaging` order)
* From pyproject.toml (verbatim specifier)
* From a fresh build (`packaging` order)
This behaviour is unstable: On the first run, the cache is cold, so we
read the verbatim specifier from `pyproject.toml`, then we build and
store the metadata in the cache. On the second run, we read the
`packaging`-sorted specifier from the cache.
Reproducer:
```shell
rm -rf newproj
uv init -q --no-config newproj
cd newproj/
uv add -q "anyio>=4,<5"
cat uv.lock | grep "requires-dist"
uv sync -q
cat uv.lock | grep "requires-dist"
cd ..
```
```
requires-dist = [{ name = "anyio", specifier = ">=4,<5" }]
requires-dist = [{ name = "anyio", specifier = "<5,>=4" }]
```
A project either has static metadata, so we can read from
pyproject.toml, or it doesn't, and we always read from the build through
`packaging`. We can use this to stabilize the behavior by slightly
switching the order:
* From pyproject.toml (verbatim specifier)
* From the (built wheel) cache (`packaging` order)
* From a fresh build (`packaging` order)
Potentially, we still want to sort the specifiers we get anyway; after
all, there is no guarantee that the specifiers from a build backend are
deterministic. But our metadata-reading behavior should be independent
of the cache state, hence the change of order in this PR.
Fixes #6316
## Summary
Resolves https://github.com/astral-sh/uv/issues/5915. I'm not entirely sure
whether `manylinux_compatible` should be a separate field in the JSON
returned by the interpreter or whether there's some way to use the existing
`platform` for it.
## Test Plan
Ran the following:
```
rm -rf .venv
target/debug/uv venv
# commenting out the line below triggers the change..
# target/debug/uv pip install no-manylinux
target/debug/uv pip install cryptography --no-cache
```
Is there an easy way to add this to the existing snapshot-based test
suite? I'm looking around to see if there's a way that doesn't involve
something implementation-dependent like mocks.
~Update: I think the output does differ between these two, so probably
we can use that.~ I lied -- that "building..." output seems to be
discarded.
## Summary
For non-virtual workspaces, these are covered by the _members_. But for
virtual workspaces, they aren't captured anywhere else in the lock. So,
we weren't invalidating `uv.lock` when the dev dependencies changed,
which led to a panic.
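For example, editing the dev dependencies of a virtual workspace root, roughly like the following (the package is illustrative; this assumes the `tool.uv.dev-dependencies` field), should now invalidate `uv.lock` instead of panicking:
```toml
[tool.uv]
dev-dependencies = [
    "pytest>=8",
]
```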
Closes https://github.com/astral-sh/uv/issues/6288
## Summary
This ensures that we don't stream output to the `--output-file`, since
other processes may rely on reading it.
Closes https://github.com/astral-sh/uv/issues/6239.
The ADD `MarkerTree` was including the non-deterministic, unstable
`NodeId` in its `Ord` implementation since the switch to algebraic
decision diagrams. By replacing this with a correct `Ord` implementation, we fix
#6249.
Requires https://github.com/astral-sh/pubgrub/pull/31
This is a fallback mode that we supported when we decided to use PEP 517
builds by default. I can't find a single reference to it on GitHub or in
our issue tracker, so I want to drop support for it as part of v0.3.0.
Executable name requests were being treated as explicit requests to
install into system environments, but I don't think they should be, as it's
implicit which environment you'll end up in. Following #4308, we allow
multiple executables to be found so we can filter here.
Concretely, this means `--system` is required to install into a system
environment discovered with e.g. `--python=python`. The flag is still
not required for cases where we're not mutating the environment.
These are global and non-specific to the `pip` API, so I think they
should be elevated.
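Concretely, this is the top-level `[tool.uv]` form exercised in the verification steps below:
```toml
[tool.uv]
# Global setting, no longer specific to the pip interface.
concurrent-downloads = 5
```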
- Ran `UV_CONCURRENT_DOWNLOADS=1 cargo run pip list`; verified that
`downloads` resolved to 1.
- Added `concurrent-downloads = 5` under `[tool.uv]` in
`pyproject.toml`; ran `cargo run pip list`; verified that `downloads`
resolved to 5.
- Ran `UV_CONCURRENT_DOWNLOADS=1 cargo run pip list`; verified that
`downloads` resolved to 1.
This PR moves us to the Linux strategy for our global directories on
macOS. We both feel on the team _and_ have received feedback (in Issues
and Polls) that the `Application Support` directories are more intended
for GUIs, and CLI tools are correct to respect the XDG variables and use
the same directory paths on Linux and macOS.
Namely, we now use:
- `/Users/crmarsh/.local/share/uv/tools` (for tools)
- `/Users/crmarsh/.local/share/uv/python` (for Pythons)
- `/Users/crmarsh/.cache/uv` (for the cache)
The strategy is such that if the `/Users/crmarsh/Library/Application
Support/uv` already exists, we keep using it -- same goes for
`/Users/crmarsh/Library/Caches/uv`, so **it's entirely backwards
compatible**.
If you want to force a migration to the new schema, you can run:
- `uv cache clean`
- `uv tool uninstall --all`
- `uv python uninstall --all`
Which will clean up the macOS-specific directories, paving the way for
the above paths. In other words, once you run those commands, subsequent
`uv` operations will automatically use the `~/.cache` and `~/.local`
variants.
Closes https://github.com/astral-sh/uv/issues/4411.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
- Removes "experimental" labels from command documentation
- Removes preview warnings
- Removes `PreviewMode` from most structs and methods — we could keep it
around but I figure we can propagate it again easily where needed in the
future
- Enables preview behavior by default everywhere, e.g., `uv venv` will
download Python versions
This PR migrates uv's use of `chrono` to `jiff`.
I did most of this work a while back as one of my tests to ensure Jiff
could actually be used in a real-world project. I decided to revive
this because I noticed that `reqwest-retry` dropped its Chrono
dependency, which I believe is the only other thing requiring Chrono in
uv.
(Although, we use a fork of `reqwest-middleware` at present, and that
hasn't been updated to latest upstream yet. I wasn't quite sure of the
process we have for that.)
In the course of doing this, I actually made two changes to uv:
The first is that the lock file now writes an RFC 3339 timestamp for
`exclude-newer`. Previously, we were using Chrono's `Display`
implementation for this which is a non-standard but "human readable"
format. I think the right thing to do here is an RFC 3339 timestamp.
The second is that, in addition to an RFC 3339 timestamp, `--exclude-newer`
used to accept a "UTC date." But this PR changes it to a "local date."
That is, a date in the user's system configured time zone. I think
this makes more sense than a UTC date, but one alternative is to drop
support for a date and just rely on an RFC 3339 timestamp. The main
motivation here is that automatically assuming UTC is often somewhat
confusing, since just writing an unqualified date like `2024-08-19` is
often assumed to be interpreted relative to the writer's "local" time.
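As a rough sketch of the two accepted forms (the `[tool.uv]` spelling is shown for illustration; the PR itself discusses the `--exclude-newer` flag and the lockfile field):
```toml
[tool.uv]
# An RFC 3339 timestamp, which is also the format now written to the lockfile.
exclude-newer = "2024-08-19T00:00:00Z"

# Or a bare date, now interpreted in the system's configured time zone:
# exclude-newer = "2024-08-19"
```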
e.g.
```
❯ uv pip install httpx==0.26.0 --reinstall-package httpx
Resolved 7 packages in 67ms
Prepared 1 package in 0.95ms
Uninstalled 1 package in 2ms
Installed 1 package in 12ms
~ httpx==0.26.0
```
```
❯ uv add httpx==0.26.0
warning: `uv add` is experimental and may change without warning
warning: `uv.sources` is experimental and may change without warning
Resolved 15 packages in 23ms
Built example @ file:///Users/zb/workspace/example
Prepared 2 packages in 187ms
Uninstalled 2 packages in 2ms
Installed 2 packages in 2ms
~ example==0.1.0 (from file:///Users/zb/workspace/example)
- httpx==0.27.0
+ httpx==0.26.0
```
Motivated by trying to reduce the diff for project updates in `uv add`.
I think it makes sense in general though. We'll also want a special
output for upgrades in the future, e.g. reducing the `httpx` changes in
the second example to a single line indicating the original and
subsequent distribution versions.
I'd like to have
> Reinstalled 1 package in 2ms
but it seems non-trivial to show the timing for it? I think we'd need to
determine what we want to call a "Reinstall" during the operations and
split doing them from the rest of the uninstalls and installs.
## Summary
The strategy here is: if the user provides supported environments, we
use those as the initial forks when resolving. As a result, we never add
or explore branches that are disjoint with the supported environments.
(If the supported environments change, we ignore the lockfile entirely,
so we don't have to worry about any interactions between supported
environments and the preference forks.)
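For example, with supported environments like the following (markers are illustrative), the resolver starts from these as its initial forks and never explores branches disjoint with them (e.g., Windows-only branches):
```toml
[tool.uv]
environments = [
    "sys_platform == 'darwin'",
    "sys_platform == 'linux'",
]
```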
Closes https://github.com/astral-sh/uv/issues/6184.
## Summary
I probably won't land https://github.com/astral-sh/uv/pull/6210 for the
release, but I do want to make one breaking change to prep for it:
renaming `environment-markers` to `resolution-markers` (or
`solution-markers`?) so that it's delineated from the user-defined
markers in that PR.
Indented blocks in Markdown are treated as code blocks, and rustdoc
treats all unadorned code blocks as Rust doctests. Since this wasn't
intended as a doctest and isn't valid Rust, it makes `cargo test --doc`
fail. We fix this by using an explicit code block labeled as `text`.
In https://github.com/astral-sh/uv/issues/5046, we show the tautological
proof:
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20 and only the following versions of jax are available:
jax<=0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
we can conclude that colabfold[alphafold]==1.5.5 depends on jax>=0.4.20.
And because jax>=0.4.20 depends on numpy>=1.26.0, we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0.
(1)
```
This is a part of the error tree because the statement
`colabfold[alphafold]==1.5.5 depends on jax>=0.4.20` is actually a
simplification of `colabfold[alphafold]==1.5.5 depends on
jax>=0.4.20,<0.5.0` and the no versions clause is a proof of that
simplification.
Without simplification, the clause looks like:
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20,<0.5.0 and only the following versions of jax are available:
jax<=0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
we can conclude that colabfold[alphafold]==1.5.5 depends on one of:
jax==0.4.20
jax==0.4.21
jax==0.4.22
jax==0.4.23
jax==0.4.24
jax==0.4.25
jax==0.4.26
jax==0.4.27
jax==0.4.28
jax==0.4.29
jax==0.4.30
jax==0.4.31
And because jax>=0.4.20 depends on numpy>=1.26.0, we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0.
```
I don't think we have a great way to avoid performing the simplification
of the range conditionally and it makes the error simpler to just jump
straight to `colabfold[alphafold]==1.5.5 depends on jax>=0.4.20`.
The derivation for this clause looks like:
```
jax==0.4.20 | ==0.4.21 | ==0.4.22 | ==0.4.23 | ==0.4.24 | ==0.4.25 | ==0.4.26 | ==0.4.27 | ==0.4.28 | ==0.4.29 | ==0.4.30 | ==0.4.31 depends on numpy>=1.26.0
no versions of jax>0.4.20, <0.4.21 | >0.4.21, <0.4.22 | >0.4.22, <0.4.23 | >0.4.23, <0.4.24 | >0.4.24, <0.4.25 | >0.4.25, <0.4.26 | >0.4.26, <0.4.27 | >0.4.27, <0.4.28 | >0.4.28, <0.4.29 | >0.4.29, <0.4.30 | >0.4.30, <0.4.31 | >0.4.31, <0.5.0
colabfold[alphafold]==1.5.5 depends on jax>=0.4.20, <0.5.0
```
So it looks like we can take trees of this form and drop the "no
versions" clause _if_ the ranges are compatible[*]. See [this
comment](https://github.com/astral-sh/uv/pull/6160#discussion_r1720280922)
for a simpler explanation.
With this pull request, the clause simplifies to
```
╰─▶ Because colabfold[alphafold]==1.5.5 depends on jax>=0.4.20 and jax>=0.4.20 depends on numpy>=1.26.0,
we can conclude that colabfold[alphafold]==1.5.5 depends on numpy>=1.26.0. (1)
```
Unfortunately, this doesn't change any snapshots in our test suite, so
I'm uncertain whether the strategy generalizes. In some incorrect iterations
of this logic, the snapshots did reveal my mistakes.
[*] "if the ranges are compatible" includes a bit of hand-waving. I'm
not 100% sure if I've chosen the correct range heuristic here.
## Summary
Passing `--upgrade` to `tool run` is confusing, because it doesn't
upgrade the installed tool. It just causes us to use an isolated tool
environment, which seems wrong.
Closes #3683
Note that our semantics do not exactly match the specification, so that
we can perform algebra on the markers. See the caveats in the documentation
(and in the discussion below).
The test output seems to depend on using Python 3.12.1 specifically.
While I'm not sure how it happens, it seems like these can get out of
sync between CI and local testing. In this case, I had a problem where
the marker expressions emitted locally were tied to Python 3.12.4, but
the tests in CI were tied to Python 3.12.1. Changing the test to require
3.12.1 specifically fixes this.
This initial set is meant to be a basic starting point where we
can test the interaction between 'uv' commands more systematically.
And specifically, with a focus on how the lock file changes.
This adds some variations on 'uv add' and 'uv remove' specifically
for testing changes to the lock file (and not anything else).
We also rejigger 'run_and_format' so that we can use it in other
contexts, particularly for error reporting.
And we add a 'diff_lock' helper for returning the changes made to
a lock file after running a command.
This was only being used in the ecosystem tests. Since we now don't
re-resolve when `uv lock` is run and the lock file already satisfies the
`pyproject.toml`, deterministic checking is avoided by construction. It
was removed everywhere else, so we remove it here as well.
Closes https://github.com/astral-sh/uv/issues/6167
We've been seeing intermittent failures in CI, which we thought were
unexpected HTTP 401s, but it actually looks like a panic when handling an
expected HTTP error. I believe the problem is that an early client error
can cause the channel to close, and we crash when we unwrap the `send`.
## Summary
Fixes #6177
This ensures a `pyproject.toml` file without a `[project]` table is not
a fatal error for `uv venv`, which is just trying to discover/respect
the project's `requires-python` (#5592).
Similarly, any caught `WorkspaceError` is now also non-fatal and instead
prints a warning message (feedback welcome here; this felt less surprising
than, e.g., a malformed `pyproject.toml` breaking `uv venv`).
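For reference, a `pyproject.toml` along these lines (the tool table is just an illustration) is the kind of input that previously caused `uv venv` to fail and is now handled gracefully:
```toml
# No [project] table at all; only tool configuration.
[tool.black]
line-length = 100
```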
## Test Plan
I added two test cases: `cargo test -p uv --test venv`
Also, existing venv tests were failing for me since I use fish and the
printed activation script was `source .venv/bin/activate.fish` (to
repro, just run the tests with `SHELL=fish`). So I added an insta filter
to normalize that.
## Summary
PR #4533 introduced (almost) spec-compliant parsing of `.egg-info`
filenames, but added the overly strict requirement that the distribution
version must be present. This causes various `uv pip` operations to fail
in environments where there are `.egg-info` files without a version
component, so we loosen this check by making the version component optional
and reading the version from the egg metadata when it is not present.
As an example of the issue, running `uv pip list` on my system currently
results in
```
error: Failed to read metadata from: `/usr/lib/python3.12/site-packages/PySide6.egg-info`
Caused by: The `.egg-info` filename "PySide6.egg-info" is missing a version
```
whereas regular `pip list` succeeds:
```
$ pip list | rg -S pyside
PySide6 6.7.2
```
## Test Plan
This has been tested by altering the `.egg-info` filename tests as
needed and ensuring the full test suite passes locally.
Resolves #6151
## Test Plan
Execution result of `cargo run -- help`
```bash
An extremely fast Python package manager.
Usage: uv [OPTIONS] <COMMAND>
Commands:
run Run a command or script (experimental)
init Create a new project (experimental)
add Add dependencies to the project (experimental)
remove Remove dependencies from the project (experimental)
sync Update the project's environment (experimental)
lock Update the project's lockfile (experimental)
tree Display the project's dependency tree (experimental)
tool Run and install commands provided by Python packages (experimental)
python Manage Python versions and installations (experimental)
pip Manage Python packages with a pip-compatible interface
venv Create a virtual environment
cache Manage uv's cache
version Display uv's version
generate-shell-completion Generate shell completion
help Display documentation for a command
...
```
Execution result of `cargo run -- -h` and `cargo run -- --help`
```bash
An extremely fast Python package manager.
Usage: uv [OPTIONS] <COMMAND>
Commands:
run Run a command or script (experimental)
init Create a new project (experimental)
add Add dependencies to the project (experimental)
remove Remove dependencies from the project (experimental)
sync Update the project's environment (experimental)
lock Update the project's lockfile (experimental)
tree Display the project's dependency tree (experimental)
tool Run and install commands provided by Python packages (experimental)
python Manage Python versions and installations (experimental)
pip Manage Python packages with a pip-compatible interface
venv Create a virtual environment
cache Manage uv's cache
version Display uv's version
help Display documentation for a command
...
```
## Summary
In the resolver, we use release-only semantics to normalize
`python_full_version`. So, if we see `python_full_version < '3.13'`, we
treat that as `(Unbounded, Exclude(3.13))`. `3.13b0` evaluates as `true`
to that range, so we were accepting pre-releases for these markers.
Instead, we need to exclude pre-release segments when performing these
evaluations.
Closes https://github.com/astral-sh/uv/issues/6169.
## Test Plan
Hard to write a test for this because you need a pre-release Python
locally... so:
`echo "sqlalchemy==2.0.32" | cargo run pip compile - --python 3.13 -n`
Resolves #6152
## Summary
## Test Plan
Execution result of `cargo run generate-shell-completion --help`
```bash
Generate shell completion
Usage: uv generate-shell-completion <SHELL>
Arguments:
<SHELL> The shell to generate the completion script for [possible values: bash, elvish, fish, nushell, powershell, zsh]
```
Execution result of `cargo run help generate-shell-completion`
```bash
Generate shell completion
Usage: uv generate-shell-completion <SHELL>
Arguments:
<SHELL>
The shell to generate the completion script for
[possible values: bash, elvish, fish, nushell, powershell, zsh]
```
## Summary
Resolves https://github.com/astral-sh/uv/issues/4537
- First commit avoids overwriting dependencies with different markers.
- Second commit supports adding from requirements files.
## Test Plan
`cargo test`
Now that these incompatibilities are collected into a single range
(https://github.com/astral-sh/uv/pull/6154), we can simplify the range
using the known available versions to reduce verbosity.
There were different `PubGrubPackage` types, so they never matched the
available versions set! Luckily, the available versions are agnostic to
the markers and optional dependencies, so we can just broaden to using
`PackageName` as a lookup key.
Addresses yet another complaint in
https://github.com/astral-sh/uv/issues/5046
I need this for debugging error messages.
I used an environment variable instead of a trace log so you can do
`UV_INTERNAL__SHOW_DERIVATION_TREE=1` and run a test to see the tree in
the test snapshot without further changes.
e.g.
```rust
// Resolving should fail.
uv_snapshot!(context.filters(), context.lock().arg("--preview").current_dir(&workspace), @r###"
success: false
exit_code: 1
----- stdout -----
UV_INTERNAL__SHOW_DERIVATION_TREE
root==0a0.dev0 depends on foo*
root==0a0.dev0 depends on bar[some-extra]*
foo==0.1.0 depends on anyio==4.1.0
bar[some-extra]==0.1.0 depends on anyio==4.2.0
no versions of bar[some-extra]<0.1.0 | >0.1.0
----- stderr -----
Using Python 3.12.[X] interpreter at: [PYTHON-3.12]
× No solution found when resolving dependencies:
╰─▶ Because only bar[some-extra]==0.1.0 is available and bar[some-extra] depends on anyio==4.2.0, we can conclude that all versions of bar[some-extra] depend on anyio==4.2.0.
And because foo depends on anyio==4.1.0, we can conclude that foo and all versions of bar[some-extra] are incompatible.
And because your workspace requires bar[some-extra] and foo, we can conclude that your workspace's requirements are unsatisfiable.
"###
);
```
We have bad error messages for optional (extra) dependencies and
development dependencies in workspaces:
1. We weren't showing the full package, so we'd drop `:dev` and
`[extra]` by accident
2. We didn't include derived packages, e.g., `member[extra]`, in the
tree-processing collapse operation, so we'd include extra clauses like the
ones we removed in #6092
Also
- Reverts
f0de4f71f2
— it turns out it wasn't quite correct and it didn't seem worth using
the custom incompatibility anymore.
- Fixes a bug in the display of `package:dev` which was not showing
`:dev` for some variants (see 94d8020b58)
## Summary
Normalize all `python_version` markers to their equivalent
`python_full_version` form. This avoids false positives in forking
because we currently cannot detect any relationships between the two
forms. It also avoids subtle bugs due to the truncating semantics of
`python_version`. For example, given `requires-python = ">3.12"`, we
currently simplify the marker `python_version <= 3.12` to `false`.
However, the version `3.12.1` will be truncated to `3.12` for
`python_version` comparisons, and thus it satisfies the python
requirement and evaluates to `true`.
It is possible to simplify back to `python_version` when writing markers
to the lockfile. However, the equivalent `python_full_version` markers
are often clearer and easier to simplify, so I lean towards leaving them
as `python_full_version`.
There are *a lot* of snapshot updates from this change. I'd like more
eyes on the transformation logic in `python_version_to_full_version` to
ensure that they are all correct.
Resolves https://github.com/astral-sh/uv/issues/6125.
While it's slightly more convenient to log this where we were, it was
pretty unhelpful e.g.
```
DEBUG Interpreter meets the requested Python: `Python >=3.9`
```
What interpreter are we referring to here?
Includes the changes from https://github.com/astral-sh/uv/pull/6071 but
takes them way further.
When we have the set of available versions for a package, we can do a
much better job displaying an error.
For example:
```
❯ uv add 'httpx>999,<9999'
× No solution found when resolving dependencies:
╰─▶ Because only the following versions of httpx are available:
httpx<=999
httpx>=9999
and example==0.1.0 depends on httpx>999,<9999, we can conclude that example==0.1.0 cannot be used.
And because only example==0.1.0 is available and you require example, we can conclude that the requirements are unsatisfiable.
```
The resolver has demonstrated that the requested range cannot be used
because there are only versions in ranges _outside_ the requested range.
However, the display of the range of available versions is pretty bad!
We say there are versions of httpx available in ranges that definitely
have no versions available.
With this pull request, the error becomes:
```
❯ uv add 'httpx>999,<9999'
× No solution found when resolving dependencies:
╰─▶ Because only httpx<=1.0.0b0 is available and example depends on httpx>999,<9999, we can conclude that example's
requirements are unsatisfiable.
And because your workspace requires example, we can conclude that your workspace's requirements are unsatisfiable.
```
We achieve this by:
1. Dropping ranges disjoint with the range of available versions, e.g.,
this removes `httpx>=9999`
2. Replacing ranges that capture the _entire_ range of available
versions with the smaller range, e.g., this replaces `httpx<=999` with
`<=1.0.0b0`.
~Note that when we perform (2), we may include an additional bound that
is not relevant, e.g., we include the lower bound of `>=0.6.7`. This is
a bit extraneous, but I don't think it's confusing. We can consider some
advanced logic to avoid that later.~ (edit: I did this, it wasn't hard)
We also improve error messages when there is _only_ one version
available by showing that version instead of a range.
## Summary
Gives the caller control over how messages are reported back to the
user. Also merges the index-location validation into the lock, since
we're already iterating over the packages.
## Summary
This is no longer required since we no longer implement `Eq` on `Lock`.
It will also sometimes be "wrong" as of #6076, since we now apply
different `requires-python` filtering to different parts of the tree
during resolution.
## Summary
Using https://github.com/astral-sh/uv/issues/6064 as a motivating
example: at present, on main, we're not properly propagating the
`Requires-Python` simplifications. In that case, for example, we end up
solving for a branch with `python_version < 3.11`, and a branch `>=
3.11`, even though `Requires-Python` is `>=3.11`. Later, when we get to
the graph, we apply version simplification based on `Requires-Python`,
which causes us to _remove_ the `python_version < 3.11` markers
entirely, leaving us with duplicate dependencies for `pylint`.
This PR instead tries to ensure that we always apply this narrowing to
requirements and forks, so that we don't need to apply the same
simplification when constructing the graph at all.
Closes https://github.com/astral-sh/uv/issues/6064.
Closes #6059.
In particular, I added this as a hack to avoid a kind of
instability that was caused by our marker code not correctly
detecting markers that were always false. But that has since
been fixed.
Removing this code doesn't change any tests. Arguably it
should be possible to come up with a test that failed with
this hack inserted but succeeded without it. In particular,
with this hack, new forks were being prevented from being
added even when they ought to be added, e.g., when preferences
get updated.
## Summary
This PR changes the definition of `--locked` from:
> Produces the same `Lock`
To:
> Passes `Lock::satisfies`
This is a subtle but important difference. Previously, if
`Lock::satisfies` failed, we would run a resolution, then do
`existing_lock == lock`. If the two weren't equal, and `--locked` was
specified, we'd throw an error.
The equality check is hard to get right. For example, it means that we
can't ship #6076 without changing our marker representation, since the
deserialized lockfile "loses" some of the internal marker state that
gets accumulated during resolution.
The downside of this change is that there could be scenarios in which
`uv lock --locked` fails even though the lockfile would actually work
and the exact TOML would be unchanged. But... I think it's ok if
`--locked` fails after the user modifies something?
Extends https://github.com/astral-sh/uv/pull/6092 to improve resolver
error messages for workspaces that have a single member.
As before, this requires a two-step approach of
1. Traversing the derivation tree and collapsing some members. In this
case, we drop the empty root node in favor of the project.
2. Using special-case formatting for packages. In this case, the
workspace package is referred to with "your project" instead of its
name.
An extension of #6090 that replaces #6066.
In brief,
1. Workspace member names are passed to the resolver for no solution
errors
2. There is a new derivation tree pre-processing step that trims
`NoVersion` incompatibilities for workspace members from the derivation
tree. This avoids showing redundant clauses like `Because only
bird==0.1.0 is available and bird==0.1.0 depends on anyio==4.3.0, we can
conclude that all versions of bird depend on anyio==4.3.0.`. As a minor
note, we use a custom incompatibility kind to mark these
incompatibilities at resolution-time instead of afterwards.
3. Root dependencies on workspace members say `your workspace requires
bird` rather than `you require bird`
4. Workspace member package display omits the version, e.g., `bird`
instead of `bird==0.1.0`
5. Instead of reporting a workspace member as unusable we note that its
requirements cannot be solved, e.g., `bird's requirements are
unsatisfiable` instead of `bird cannot be used`.
6. Instead of saying `your requirements are unsatisfiable` we say `your
workspace's requirements are unsatisfiable` when in a workspace, since
we're not in a "provide direct requirements" paradigm.
As an annoying but minor implementation detail, `PackageRange` now
requires access to the `PubGrubReportFormatter` so it can determine if
it is formatting a workspace member or not. We could probably improve
the abstractions in the future.
As a follow-up, we should add additional special casing for "single project"
workspaces to avoid mentioning the workspace concept in simple projects.
However, it looks like this will require additional tree manipulations
so I'm going to keep it separate.
## Summary
Historically, in order to "resolve from a lockfile", we've taken the
lockfile, used it to pre-populate the in-memory metadata index, then run
a resolution. If the resolution didn't match our existing resolution, we
re-resolved from scratch.
This was an appealing approach because (in theory) it didn't require any
dedicated logic beyond pre-populating the index. However, it's proven to
be _really_ hard to get right, because it's a stricter requirement than
we need. We just need the current lockfile to _satisfy_ the requirements
provided by the user. We don't actually need a second resolution to
produce the exact same result. And it's not uncommon that this second
resolution differs, because we seed it with preferences, which
fundamentally changes its course. We've worked hard to minimize those
"instabilities", but they're still present.
The approach here is intended to be much simpler. Instead of resolving
from the lockfile, we just check if the current resolution satisfies the
state of the workspace. Specifically, we check if the lockfile (1)
contains all the relevant members, and (2) matches the metadata for all
dependencies, recursively. (We skip registry dependencies, assuming that
they're immutable.)
This may actually be too conservative, since we can have resolutions
that satisfy the requirements, even if the requirements have changed
slightly. But we want to bias towards correctness for now.
My hope is that this scheme will be more performant, simpler, and more
robust.
Closes https://github.com/astral-sh/uv/issues/6063.
## Summary
This was added in https://github.com/astral-sh/uv/pull/5405 but is now
the cause of an instability in `github_wikidata_bot`. Specifically, on
the initial run, we fork in `pydantic==2.8.2`, via:
```
Requires-Dist: typing-extensions>=4.12.2; python_version >= '3.13'
Requires-Dist: typing-extensions>=4.6.1; python_version < '3.13'
```
In the end, we resolve a single version of `typing-extensions`
(`4.12.2`)... But we don't recognize the two resolutions as the "same
graph", because we propagate the fork markers, and so the "edges" have
different markers on them...
In the second run through, when we have the forks in advance, we don't
split on Pydantic... We just try to solve from the root with the current
forks. This is fundamentally different and I fear it will be the cause
of many instabilities. But removing this graph check fixes the proximate
issue.
I don't really understand why this was added since there was no test
coverage in the PR.
## Summary
When constructing the `Resolution`, we only propagated the fork markers
to the package node, but not the extras node. This led to cases in which
an extra could be included unconditionally or otherwise diverge from the
base package version.
Closes https://github.com/astral-sh/uv/issues/6062.
## Summary
Right now, we store the environment markers in a `BTreeSet` -- so
they're sorted, but the sort doesn't really tell us anything. I think we
should instead store them in the order in which we solved. I thought
this might fix an instability (it didn't), but I think it's still good
to ensure we solve in the same order.
I also changed from `Option<Vec>` to just `Vec`, since there was no
distinction between `None` and empty.
## Summary
We retain them if you use `--raw-sources`, but otherwise they're
removed. We still respect them in the subsequent `uv.lock` via an
in-process store.
Closes #6056.
## Summary
Now, if you resolve against a registry, then swap it out for another, we
won't reuse the lockfile. (If you don't provide any registry
configuration, then we won't enforce this, so that `uv lock --index-url
foo` and `uv lock` is stable.)
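As a sketch (the URL and the flat `index-url` spelling are illustrative), changing a configured index like this to point at a different registry now causes the lockfile to be regenerated rather than reused:
```toml
[tool.uv]
index-url = "https://test.pypi.org/simple"
```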
Closes https://github.com/astral-sh/uv/issues/5920.
This example came up in discussion and it was initially unclear whether
we should try to support it. Specifically, by automatically assuming
that the `datasets < 2.19` dependency had a marker corresponding to the
negation of the conjunction of the other sibling markers for that same
package. But this was deemed, I think, a little too magical.
This in turn implies that whenever there are sibling dependencies with
overlapping marker expressions, their version constraints also need to
be overlapping. Otherwise, for any marker environment that matches both
marker expressions, it would be impossible to select a single version.
The test in this case has this comment:
```
/// If a dependency requests a prerelease version with an overlapping marker expression,
/// we should prefer the prerelease version in both forks.
```
With this setup:
```
let pyproject_toml = context.temp_dir.child("pyproject.toml");
pyproject_toml.write_str(indoc! {r#"
[project]
name = "example"
version = "0.0.0"
dependencies = [
"cffi >= 1.17.0rc1 ; os_name == 'Linux'"
]
requires-python = ">=3.11"
"#})?;
let requirements_in = context.temp_dir.child("requirements.in");
requirements_in.write_str(indoc! {"
cffi
.
"})?;
```
The change in this commit _seems_ more correct than what we had,
although it does seem to contradict the comment. Namely, in the `os_name
!= "Linux"` fork, we don't prefer the pre-release version since the
`cffi >= 1.17.0rc1` bound doesn't apply.
It's not quite clear what to do in this instance.
I believe these are all changes that aren't necessarily
expected, but also seem harmless. Like the order in which
fork markers are written to the lock file. (Although one
wonders if we should fix that once and for all by defining
a complete sort function for forks.)
At a high level, this PR adds a smattering of new tests that
effectively snapshot the output of `uv lock` for a selection of
"ecosystem" projects. That is, real Python projects for which we expect
`uv` to work well with.
The main idea with these tests is to get a better idea of how changes
in `uv` impact the lock files of real world projects. For example,
we're hoping that these tests will help give us data for how #5733
differs from #5887.
This has already revealed some bugs. Namely, re-running `uv lock` for a
second time will produce a different lock file for some projects. So to
prioritize getting the tests added, for those projects, we don't do the
deterministic checking.
## Summary
Our current handling of `--find-links` merges the entries in each index.
As a result, we can end up with `AnnotatedDist` entries that reference
distributions across indexes.
I'd like to change `--find-links` such that each `--find-links` entry is
just treated as its own index (so, e.g., if `requests` exists in the
first `--find-links` entry, we don't even check the registry by
default), which would _also_ fix this problem automatically. But that's
a behavior change... So for now, in the lockfile, we filter
distributions that don't match the source index URL.
There are two cases to consider:
- There's a source distribution. Then, for the ID to reference the
`--find-links` registry, the source distribution _must_ have come from
the `--find-links` entry, so it's fine to discard any wheels from the
"wrong" registry without breaking any compatibility guarantees.
- There's no source distribution. Then the best wheel must come from the
`--find-links` registry. We might lose some platform coverage by
discarding the other wheels, but it shouldn't break any of the
"guarantees", since we have at least one wheel that fits in the version
range.
Closes https://github.com/astral-sh/uv/issues/6015.
## Summary
Added the actual error message to the warning when uv fails to parse
`pyproject.toml`.
Resolves https://github.com/astral-sh/uv/issues/5934
## Test Plan
Took the case from the issue:
- Have a `pyproject.toml` which contains:
```
[tool.uv]
foobar = false
```
- Run:
```
$ uv venv --preview -v
```
- Expect a message that contains the actual problem in the
`pyproject.toml` like:
```
warning: Failed to parse `pyproject.toml` during settings discovery: unknown field `foobar`; skipping...
```
## Summary
A lot of the existing tests were no-ops. For convenience, we now use the
trick of: install from Test PyPI (to get an outdated "latest"), then
upgrade from PyPI.
Right now, the URL gets out-of-sync with the install path, since the
install path is canonicalized. This leads to a subtle error on Windows
(in CI) in which we don't preserve caching across resolution and
installation.
Surprisingly, this is a lockfile schema change: We can't store relative
paths in URLs, so we have to store a `filename` entry instead of the
whole URL.
Fixes #4355
## Summary
We now persist the `ResolverInstallerOptions` when writing out a tool
receipt. When upgrading, we grab the saved options, and merge with the
command-line arguments and user-level filesystem settings (CLI > receipt
> filesystem).
The loose consensus is that "fetch" doesn't have much meaning and that a
boolean flag makes more sense from the command line.
1. Adds `--allow-python-downloads` (hidden, default) and
`--no-python-downloads` to the CLI to quickly enable or disable
downloads
2. Deprecates `--python-fetch` in favor of the options from (1)
3. Removes `python-fetch` in favor of a `python-downloads` setting
4. Adds a `never` variant to the enum, allowing even explicit installs
to be disabled via the configuration file (see the sketch below)
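A minimal sketch of the new setting; only the `never` variant is named above, so the exact spelling of the value is the main assumption here:
```toml
[tool.uv]
# Disable Python downloads entirely, even for explicit install requests.
python-downloads = "never"
```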
## Test Plan
I tested this with various `pyproject.toml`-level settings and `uv venv
--preview --python 3.12.2` and `uv python install 3.12.2` with and
without the new CLI flags.
Warn when there are missing bounds on transitive dependencies with
`--resolution lowest`.
Implemented as a lazy resolution graph check. Dev deps are odd because
they are missing the edge from the root that extras have (they are
currently orphans in the resolution graph), but this is more complex to
solve properly because we can put dev dep information in a `Requirement`,
so I special-cased them here.
Closes #2797
Should help with #1718
---------
Co-authored-by: Ibraheem Ahmed <ibraheem@ibraheem.ca>
## Summary
This PR rewrites the `MarkerTree` type to use algebraic decision
diagrams (ADD). This has many benefits:
- The diagram is canonical for a given marker function. It is impossible
to create two functionally equivalent marker trees that don't refer to
the same underlying ADD. This also means that any trivially true or
unsatisfiable markers are represented by the same constants.
- The diagram can handle complex operations (conjunction/disjunction) in
polynomial time, as well as constant-time negation.
- The diagram can be converted to a simplified DNF form for user-facing
output.
The new representation gives us a lot more confidence in our marker
operations and simplification, which is proving to be very important
(see https://github.com/astral-sh/uv/pull/5733 and
https://github.com/astral-sh/uv/pull/5163).
Unfortunately, it is not easy to split this PR into multiple commits
because it is a large rewrite of the `marker` module. I'd suggest
reading through the `marker/algebra.rs`, `marker/simplify.rs`, and
`marker/tree.rs` files for the new implementation, as well as the
updated snapshots to verify how the new simplification rules work in
practice. However, a few other things were changed:
- [We now use release-only comparisons for `python_full_version`, where
we previously only did for
`python_version`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/algebra.rs#L522).
I'm unsure how marker operations should work in the presence of
pre-release versions if we decide that this is incorrect.
- [Meaningless marker expressions are now
ignored](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/parse.rs#L502).
This means that a marker such as `'x' == 'x'` will always evaluate to
`true` (as if the expression did not exist), whereas we previously
treated this as always `false`. Its negation, however, remains `false`.
- [Unsatisfiable markers are written as `python_version <
'0'`](https://github.com/astral-sh/uv/blob/ibraheem/canonical-markers/crates/pep508-rs/src/marker/tree.rs#L1329).
- The `PubGrubSpecifier` type has been moved to the new `uv-pubgrub`
crate, shared by `pep508-rs` and `uv-resolver`. `pep508-rs` also depends
on the `pubgrub` crate for the `Range` type; we probably want to move
`pubgrub::Range` into a separate crate to break this, but I don't think
that should block this PR (cc @konstin).
There is still some remaining work here that I decided to leave for now
for the sake of unblocking some of the related work on the resolver.
- We still use `Option<MarkerTree>` throughout uv, which is unnecessary
now that `MarkerTree::TRUE` is canonical.
- The `MarkerTree` type is now interned globally and can potentially
implement `Copy`. However, it's unclear if we want to add more
information to marker trees that would make it `!Copy`. For example, we
may wish to attach extra and requires-python environment information to
avoid simplifying after construction.
- We don't currently combine `python_full_version` and `python_version`
markers.
- I also have not spent too much time investigating performance and
there is probably some low-hanging fruit. Many of the test cases I did
run actually saw large performance improvements due to the markers being
simplified internally, reducing the stress on the old `normalize`
routine, especially for the extremely large markers seen in
`transformers` and other projects.
Resolves https://github.com/astral-sh/uv/issues/5660,
https://github.com/astral-sh/uv/issues/5179.
## Summary
This PR adds a `DistExtension` field to some of our distribution types,
which requires that we validate that the file type is known and
supported when parsing (rather than when attempting to unzip). It
removes a bunch of extension parsing from the code too, in favor of
doing it once upfront.
Closes https://github.com/astral-sh/uv/issues/5858.
## Summary
I think this seems reasonable... Otherwise, we might not go back to PyPI
to revalidate the list of available versions despite the user passing
`--upgrade`.
## Summary
Previously, we wouldn't respect configuration files in directories
_above_ a workspace root. But this is somewhat problematic, because any
`pyproject.toml` will define a workspace root...
Instead, I think we should _start_ the search at the workspace root, but
go above it if necessary.
Closes: #5929.
See: https://github.com/astral-sh/uv/pull/4295.
## Summary
Resolves #5188. Most of the changes involve creating a new function in
`tool/common.rs` to contain the common functionality previously found in
`tool/install.rs`.
## Test Plan
`cargo test`
```console
❯ ./target/debug/uv tool upgrade black
warning: `uv tool upgrade` is experimental and may change without warning.
Resolved 6 packages in 25ms
Uninstalled 1 package in 3ms
Installed 1 package in 19ms
- black==23.1.0
+ black==24.4.2
Installed 2 executables: black, blackd
```
e.g.
```
❯ cargo run -- venv --no-system
Blocking waiting for file lock on build directory
Compiling uv v0.2.34 (/Users/zb/workspace/uv/crates/uv)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 19.85s
Running `target/debug/uv venv --no-system`
warning: The `--no-system` flag has no effect, a system Python interpreter is always used in `uv venv`
Using Python 3.12.4 interpreter at: /opt/homebrew/opt/python@3.12/bin/python3.12
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
❯ cargo run -- venv --system
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/uv venv --system`
warning: The `--system` flag has no effect, a system Python interpreter is always used in `uv venv`
Using Python 3.12.4 interpreter at: /opt/homebrew/opt/python@3.12/bin/python3.12
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
## Summary
This _used_ to be true but we now require fetching metadata for all
distributions even with `--no-deps` since, e.g., we validate that any
declared extras exist.
## Summary
Initially, we showed _all_ resolver and installer output in `uv run` and
`uv tool run`, which was way too much for workhorse commands. Then,
we moved to showing _no_ output by default, which was way too little --
you had no idea why anything was happening, and commands appeared to
hang.
This PR adds a more nuanced middle-ground. With `--verbose`, we continue
to show everything. But by default, in `uv run` and `uv tool run`...
- During resolution, we show any "Building" and "Built" messages, if you
need to build a source distribution. But we don't show any other output.
(This _could_ be too little for expensive resolutions; we may want to
show a spinner.)
- If there are no changes to be made after resolving, we don't show any
other output.
- If we have to install, we show the progress bars for downloads (which
disappear on completion) followed by a single summary line stating the
number of packages installed.
This feels pretty good, in my limited testing. When everything is built
/ cached, you don't get _any_ additional output. When there's work to
do, you have a sense for what's happening, and we leave you with a
single summary line ("Installed X packages") at the end.
Closes https://github.com/astral-sh/uv/issues/5758.
## Test Plan
Notice that the first `tool run` ends with an install line, while the
second shows no additional output (screenshots omitted).
If you run `uv run` in a package for the first time, we _do_ tell you
that we're building / built it, but on the second run, there's no output.
If you add a `--with`, we'll show you all the installer progress bars
(which disappear once they're done), and then a single summary line.
Currently, the entry for a package+version+source table is called
`distribution`. That is incorrect: the `sdist` and `wheel` fields inside
that table are distributions; the table itself is for a package. This
also aligns us more closely with PEP 751.
I went through `lock.rs` and renamed all occurrences of "distribution"
that actually referred to a "package".
This change invalidates all existing lockfiles.
Bikeshedding: Do we call it `package` or `packages`? See also
https://github.com/python/peps/pull/3877
`package` is nice because it looks like a header:
```toml
[[package]]
name = "anyio"
version = "4.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "idna" },
{ name = "sniffio" },
]
sdist = { url = "3970183622d484d08e3285104333d3/anyio-4.3.0.tar.gz", hash = "sha256:f75253795a87df48568485fd18cdd2a3fa5c4f7c5be8e5e36637733fce06fed6", size = 159642 }
wheels = [
{ url = "2f20c40b45242c0b33774da0e2e34f/anyio-4.3.0-py3-none-any.whl", hash = "sha256:048e05d0f6caeed70d731f3db756d35dcc1f35747c8c403364a8332c630441b8", size = 85584 },
]
```
`packages` is nice because the field is not a single entry, but a list.
2/3 for https://github.com/astral-sh/uv/issues/4893
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Whenever we call `resolve`, we immediately call `fetch` after. And in
some cases `resolve` actually calls `fetch` internally. It seems a lot
simpler to just merge these into one method that returns a `Fetch`
(which itself contains the fully-resolved URL).
Closes https://github.com/astral-sh/uv/issues/5876.
There are three options that determine resolver behavior:
* resolution mode
* prerelease mode
* exclude newer
They are different from the other top level options: If they mismatch,
we recreate the resolution. To distinguish them from the rest of the
lockfile, we group them under an `[options]` header.
1/3 for #4893
Following #5869, the documentation has some less-than-helpful
suggestions to use `uv help python` for details — we should link to the
`uv python` section instead.
## Summary
We were dropping the query and fragment in the wrong place, so the URLs
didn't match up after resolving from an existing lockfile.
Closes https://github.com/astral-sh/uv/issues/5851.
## Summary
Very subtle bug. The scenario is as follows:
- We resolve: `elmer-circuitbuilder = { git =
"https://github.com/ElmerCSC/elmer_circuitbuilder.git" }`
- The user then changes the request to: `elmer-circuitbuilder = { git =
"https://github.com/ElmerCSC/elmer_circuitbuilder.git", rev =
"44d2f4b19d6837ea990c16f494bdf7543d57483d" }`
- When we go to re-lock, we note two facts:
1. The "default branch" resolves to
`44d2f4b19d6837ea990c16f494bdf7543d57483d`.
2. The metadata for `44d2f4b19d6837ea990c16f494bdf7543d57483d` is
(whatever we grab from the lockfile).
- In the resolver, we then ask for the metadata for
`44d2f4b19d6837ea990c16f494bdf7543d57483d`. It's already in the cache,
so we return it; thus, we never add the
`44d2f4b19d6837ea990c16f494bdf7543d57483d` ->
`44d2f4b19d6837ea990c16f494bdf7543d57483d` mapping to the Git resolver,
because we never have to resolve it.
This would apply for any case in which a requested tag or branch was
replaced by its precise SHA. Replacing with a different commit is fine.
It only applied to `tool.uv.sources`, and not PEP 508 URLs, because the
underlying issue is that we aren't consistent about "automatically"
extracting the precise commit from a Git reference.
Closes https://github.com/astral-sh/uv/issues/5860.
## Summary
This is an experimental PR to replace more unsafe calls with more rust
while still trying to keep the binary size small enough. These changes
roughly increase the size of the trampolines to about 40 KB. This is an
alternate PR to https://github.com/astral-sh/uv/pull/5751.
The primary changes here include
* Switch to use rust path components for ease of path management
* Leverage `std::process::exit` for process exit and cleanup
* Use `std::io::Error::last_os_error` for IO Errors to remove
`FormatMessage` complexity
* Use `std::env::current_exe` to get the current executable instead of
`GetModuleFileNameA`
## Test Plan
Added one more existing test case to trampoline tests.
Still need to verify whether `dunce::canonicalize` is desired
on `find_python_exe`.
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
We need to avoid using incompatible versions for build dependencies that
are also part of the resolved
environment. This is a very subtle issue, but: when locking, we don't
enforce platform
compatibility. So, if we reuse the resolver state to install, and the
install itself has to
perform a resolution (e.g., for the build dependencies of a source
distribution), that
resolution may choose incompatible versions.
The key property here is that there's a shared package between the build
dependencies and the
project dependencies.
Closes https://github.com/astral-sh/uv/issues/5836.
This already rejects `pyproject.toml`... but because the schema
validation is relaxed (we allow unknown fields, and all fields are
optional), a `pyproject.toml` doesn't get properly rejected here.
This PR makes the schema stricter, but in a safe way (by adding the
other `tool.uv` fields, like `workspace`, as any).
Closes #5832.
## Summary
We allow the use of (e.g.) `.whl.metadata` files when `--no-binary` is
enabled, so it makes sense that we'd also allow wheels to be
downloaded for metadata extraction. So now, we validate `--no-binary` at
install time, rather than metadata-fetch time.
Closes https://github.com/astral-sh/uv/issues/5699.
## Summary
This fixes a bug introduced by
https://github.com/astral-sh/uv/pull/5232. It turns out that the
`universal_disjoint_base_or_local_requirement` test does not actually do
what it was meant to because of the incorrect python requirement. With a
valid python requirement, it fails on `main`. The problem is that we try
to exclude the original base version from the range of allowed versions
to try and prefer local versions. However, in the test, there is a
branch that depends on the non-local version, with no applicable local
in its fork. We should remove this exclusion as prioritization is
handled by the candidate resolver.
I don't think this will save any time in serialization, but it should
save us some deserialization, since we only need to parse URLs for the
packages we use...
## Summary
Okay, I tested this against...
- Our public "private" proxy
- Fury
- AWS CodeArtifact
- Azure Artifacts
It took a long time.
All of them work as expected with this approach: we omit the credentials
from the lockfile, then wire them back up when the index URL is provided
during subsequent operations.
Closes https://github.com/astral-sh/uv/issues/5119.
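As a minimal sketch of the redaction half (using the `url` crate directly for illustration; uv has its own URL wrappers, and this is not its actual code), stripping credentials before an index URL is written to the lockfile could look like:

```rust
use url::Url;

/// Sketch: drop any userinfo from an index URL before serializing it.
fn redact_credentials(mut index_url: Url) -> Url {
    // `set_username`/`set_password` only fail for URLs that cannot carry
    // credentials, which we can ignore here.
    let _ = index_url.set_username("");
    let _ = index_url.set_password(None);
    index_url
}

fn main() {
    let url = Url::parse("https://user:token@pkgs.example.com/simple/").unwrap();
    assert_eq!(
        redact_credentials(url).as_str(),
        "https://pkgs.example.com/simple/"
    );
}
```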
Part of #4454
e.g.
```
$ uv add --help
Add one or more packages to the project requirements
Usage: uv add [OPTIONS] <REQUIREMENTS>...
Arguments:
<REQUIREMENTS>... The packages to add, as PEP 508 requirements (e.g., `ruff==0.5.0`)
Options:
--dev Add the requirements as development dependencies
--optional <OPTIONAL> Add the requirements to the specified optional dependency group
--no-editable Don't add the requirements as editables
--raw-sources Add source requirements to `project.dependencies`, rather than `tool.uv.sources`
--rev <REV> Specific commit to use when adding from Git
--tag <TAG> Tag to use when adding from git
--branch <BRANCH> Branch to use when adding from git
--extra <EXTRA> Extras to activate for the dependency; may be provided more than once
--locked Assert that the `uv.lock` will remain unchanged
--frozen Add the requirements without updating the `uv.lock` file
--package <PACKAGE> Add the dependency to a specific package in the workspace
-p, --python <PYTHON> The Python interpreter into which packages should be installed. [env: UV_PYTHON=]
Index options:
-i, --index-url <INDEX_URL> The URL of the Python package index (by default: <https://pypi.org/simple>) [env: UV_INDEX_URL=]
--extra-index-url <EXTRA_INDEX_URL> Extra URLs of package indexes to use, in addition to `--index-url` [env: UV_EXTRA_INDEX_URL=]
-f, --find-links <FIND_LINKS> Locations to search for candidate distributions, in addition to those found in the registry indexes
--no-index Ignore the registry index (e.g., PyPI), instead relying on direct URL dependencies and those provided via `--find-links`
--index-strategy <INDEX_STRATEGY> The strategy to use when resolving against multiple index URLs [env: UV_INDEX_STRATEGY=] [possible values: first-index, unsafe-first-match, unsafe-best-match]
--keyring-provider <KEYRING_PROVIDER> Attempt to use `keyring` for authentication for index URLs [env: UV_KEYRING_PROVIDER=] [possible values: disabled, subprocess]
Resolver options:
-U, --upgrade Allow package upgrades, ignoring pinned versions in any existing output file
-P, --upgrade-package <UPGRADE_PACKAGE> Allow upgrades for a specific package, ignoring pinned versions in any existing output file
--resolution <RESOLUTION> The strategy to use when selecting between the different compatible versions for a given package requirement [env: UV_RESOLUTION=] [possible values: highest, lowest, lowest-direct]
--prerelease <PRERELEASE> The strategy to use when considering pre-release versions [env: UV_PRERELEASE=] [possible values: disallow, allow, if-necessary, explicit, if-necessary-or-explicit]
--exclude-newer <EXCLUDE_NEWER> Limit candidate packages to those that were uploaded prior to the given date [env: UV_EXCLUDE_NEWER=]
Installer options:
--reinstall Reinstall all packages, regardless of whether they're already installed. Implies `--refresh`
--reinstall-package <REINSTALL_PACKAGE> Reinstall a specific package, regardless of whether it's already installed. Implies `--refresh-package`
--link-mode <LINK_MODE> The method to use when installing packages from the global cache [env: UV_LINK_MODE=] [possible values: clone, copy, hardlink, symlink]
--compile-bytecode Compile Python files to bytecode after installation
Build options:
-C, --config-setting <CONFIG_SETTING> Settings to pass to the PEP 517 build backend, specified as `KEY=VALUE` pairs
--no-build Don't build source distributions
--no-build-package <NO_BUILD_PACKAGE> Don't build source distributions for a specific package
--no-binary Don't install pre-built wheels
--no-binary-package <NO_BINARY_PACKAGE> Don't install pre-built wheels for a specific package
Cache options:
-n, --no-cache Avoid reading from or writing to the cache, instead using a temporary directory for the duration of the operation [env: UV_NO_CACHE=]
--cache-dir <CACHE_DIR> Path to the cache directory [env: UV_CACHE_DIR=]
--refresh Refresh all cached data
--refresh-package <REFRESH_PACKAGE> Refresh cached data for a specific package
Python options:
--python-preference <PYTHON_PREFERENCE> Whether to prefer using Python installations that are already present on the system, or those that are downloaded and installed by uv [possible values: only-managed, managed, system, only-system]
--python-fetch <PYTHON_FETCH> Whether to automatically download Python when required [possible values: automatic, manual]
Global options:
-q, --quiet Do not print any output
-v, --verbose... Use verbose output
--color <COLOR_CHOICE> Control colors in output [default: auto] [possible values: auto, always, never]
--native-tls Whether to load TLS certificates from the platform's native certificate store [env: UV_NATIVE_TLS=]
--offline Disable network access, relying only on locally cached data and locally available files
--no-progress Hides all progress outputs when set
--config-file <CONFIG_FILE> The path to a `uv.toml` file to use for configuration [env: UV_CONFIG_FILE=]
--no-config Avoid discovering configuration files (`pyproject.toml`, `uv.toml`) in the current directory, parent directories, or user configuration directories [env: UV_NO_CONFIG=]
-h, --help Print help
-V, --version Print version
Use `uv help add` for more details.
```
## Summary
`uv tree` will now filter to the current platform by default. You can
pass `--universal` to show the entire tree.
Closes https://github.com/astral-sh/uv/issues/5760.
## Summary
It's fine for this to be in the cache, I think, since we don't
necessarily need to colocate it with the Python directory.
Closes https://github.com/astral-sh/uv/issues/5747.
## Summary
After referring to https://github.com/astral-sh/uv/pull/5637 and doing
additional testing, the default value seems more reasonable as
``only-system`` in the stable state, and ``managed`` in preview.
```
cpython-3.11.9-windows-x86_64-none C:\Users\name\AppData\Local\Programs\Python\Python311\python.exe
cpython-3.10.14-windows-x86_64-none C:\Users\name\AppData\Roaming\uv\data\python\cpython-3.10.14-windows-x86_64-none\install\python.exe
cpython-3.10.11-windows-x86_64-none C:\Users\name\AppData\Local\Programs\Python\Python310\python.exe
cpython-3.9.19-windows-x86_64-none C:\Users\name\AppData\Roaming\uv\data\python\cpython-3.9.19-windows-x86_64-none\python.exe
```
Tested on uv 0.2.33 (built from
257007ccaf).
### Stable version
``uv venv -p 3.10`` is ``3.10.11`` (system Python)
``uv venv -p 3.9`` is ``No interpreter found`` (3.9.19 is managed Python)
``uv venv -p 3.9 --python-preference only-system`` is ``No interpreter
found`` (fail)
``uv venv -p 3.9 --python-preference only-managed`` is ``3.9.19``
(success)
The stable version does not use managed Python and only uses the system
Python, so its behavior corresponds to ``only-system``.
### Preview mode
**Note:** ``3.10.14`` is managed Python, ``3.10.11`` is system Python.
``uv venv -p 3.11 --preview`` is ``3.11.9`` (system Python)
``uv venv -p 3.10 --preview`` is ``3.10.14``
``uv venv -p 3.10 --preview --python-preference only-managed`` is
``3.10.14``
``uv venv -p 3.10 --preview --python-preference managed`` is ``3.10.14``
``uv venv -p 3.10 --preview --python-preference system`` is ``3.10.11``
``uv venv -p 3.10 --preview --python-preference only-system`` is
``3.10.11``
Preview mode prioritizes managed Python and then falls back to the
system Python, so its behavior corresponds to ``managed``.
-----
Fixes #5754
## Test Plan
Run the website locally (screenshot omitted).
## Summary
Right now, if you have a `requirements.txt` with a pre-release, but the
`requirements.in` does not have a pre-release marker for that dependency
we drop the pre-release. (In the selector, we end up returning
`AllowPrerelease::IfNecessary`, the default.)
I played with a few ways of solving this... The first was to remove that
guard altogether. But if we do that,
`universal_transitive_disjoint_prerelease_requirement` fails (we use
`1.17.0rc1` in both forks, when it should only apply to one of the two).
The second was to do that, but also avoid pushing pre-releases as
preferences when we solve a fork. But then
`universal_disjoint_prereleases` fails, because we return a different
pre-release in each fork.
Finally, I settled on allowing existing pre-releases in forks if they
have no markers on them, i.e., they are "global" preferences. I believe
this is true IFF the preference came from an existing lockfile.
Closes https://github.com/astral-sh/uv/issues/5729.
## Summary
If _both_ nodes in the derivation tree are proxies, we need to remove
the _entire_ node. So, the function now returns an `Option<Tree>`
instead of taking `&mut Tree`.
Closes https://github.com/astral-sh/uv/issues/5618.
To enforce the 100 character line limit in markdown files introduced in
https://github.com/astral-sh/uv/pull/5635, and to automate the
formatting of markdown files, I've added prettier and formatted our
markdown files with it.
I've excluded the changelog and the generated references documentation
from this for having too many changes, but we can also include them.
I'm not particular on which style we use. My main motivations are
(major) not having to reflow markdown files myself anymore and (minor)
consistency between all markdown files. I've chosen prettier for similar
reasons as we chose black: it's a single good style that's automated and
shared in the community. I do prefer prettier's style of not breaking
inside of a link name though.
This PR is in two parts: the first adds prettier to CI and documents
using it, while the second actually formats the docs. When merge
conflicts arise, we can drop the last commit and regenerate it with `npx
prettier --prose-wrap always --write BENCHMARKS.md CONTRIBUTING.md
README.md STYLE.md docs/*.md docs/concepts/**/*.md docs/guides/**/*.md
docs/pip/**/*.md`.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Partially resolves #5561. I haven't added overrides support yet, but I
can add it tomorrow if the current approach for constraints is OK.
## Test Plan
`cargo test`
Manually checked trace logs after changing the constraints.
## Summary
Gives you a nice error message if you attempt to sync with, e.g., `-p
3.8` when that version is supported by at least one workspace member,
but your project's minimum requirement is `>=3.12`
Closes https://github.com/astral-sh/uv/issues/5662.
## Summary
I think it's reasonable to only sync the affected group, e.g., `uv add`
on its own should not require syncing all extras.
Closes https://github.com/astral-sh/uv/issues/4418.
Part of #4454
e.g. for `uv help pip compile`
```
Python options:
--python <PYTHON>
The Python interpreter against which to compile the requirements.
By default, uv uses the virtual environment in the current working directory or any parent
directory, falling back to searching for a Python executable in `PATH`. The `--python`
option allows you to specify a different interpreter.
Supported formats:
- `3.10` looks for an installed Python 3.10 using `py --list-paths` on Windows, or
`python3.10` on Linux and macOS.
- `python3.10` or `python.exe` looks for a binary with the given name in `PATH`.
- `/home/ferris/.local/bin/python3.10` uses the exact Python at the given path.
-p, --python-version <PYTHON_VERSION>
The minimum Python version that should be supported by the resolved requirements (e.g., `3.8` or `3.8.17`).
If a patch version is omitted, the minimum patch version is assumed. For example, `3.8` is mapped to `3.8.0`.
--python-preference <PYTHON_PREFERENCE>
Whether to prefer using Python installations that are already present on the system, or those that are downloaded and installed by uv
Possible values:
- only-managed: Only use managed Python installations; never use system Python installations
- managed: Prefer managed Python installations over system Python installations
- system: Prefer system Python installations over managed Python installations
- only-system: Only use system Python installations; never use managed Python installations
--python-fetch <PYTHON_FETCH>
Whether to automatically download Python when required
Possible values:
- automatic: Automatically fetch managed Python installations when needed
- manual: Do not automatically fetch managed Python installations; require explicit installation
```
## Summary
In #5494, I made breaking changes to the tool receipt format. This would
break existing tools for all users. This PR makes the change
backwards-compatible by supporting deserialization for the deprecated
format.
Closes https://github.com/astral-sh/uv/issues/5680.
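One standard way to accept both the current and the deprecated format is an untagged serde enum, sketched below with hypothetical field names and shapes (not the actual receipt schema, and not necessarily the mechanism this PR uses): serde tries the current representation first and falls back to the deprecated one.

```rust
use serde::Deserialize;

/// Current receipt shape (illustrative fields only).
#[derive(Debug, Deserialize)]
struct RequirementV2 {
    name: String,
    #[serde(default)]
    extras: Vec<String>,
}

/// Accept both the current and the deprecated format when reading receipts.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum CompatRequirement {
    Current(RequirementV2),
    /// Stand-in for the deprecated format: a bare requirement string.
    Deprecated(String),
}

fn main() {
    // The current table-based format deserializes via the first variant.
    let current: CompatRequirement = toml::from_str(r#"name = "black""#).unwrap();
    println!("{current:?}");
}
```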
## Test Plan
Beyond the automated tests, you can run `cargo run tool list` on your
existing machine.
Before:
```
warning: `uv tool list` is experimental and may change without warning
warning: Ignoring malformed tool `black` (run `uv tool uninstall black` to remove)
warning: Ignoring malformed tool `poetry` (run `uv tool uninstall poetry` to remove)
warning: Ignoring malformed tool `ruff` (run `uv tool uninstall ruff` to remove)
```
After:
```
warning: `uv tool list` is experimental and may change without warning
black v0.1.0
- black
poetry v1.8.3
- poetry
ruff v0.0.60
- ruff
```
## Summary
When we add a new optional group in `uv add`, we need to update the
`pyproject.toml` before locking. Otherwise, we use the stale
`pyproject.toml` and omit the optional group.
Closes https://github.com/astral-sh/uv/issues/5687.
## Summary
Fixes a bug in #5494. The `RequirementSourceWire` representation was
ambiguous, and so the order of the fields meant that all variants were
mapped to `Registry` when deserializing. (So the snapshots were right,
but behaviors were wrong.)
## Summary
This could still be made more robust, but it's not critical, since you
can always `--force`. It's good to handle this case, though, since we
have an explicit error for it.
Closes https://github.com/astral-sh/uv/issues/5490.
It transpires that detecting the directory a script was sourced from is
non-trivial across `bash`, `ksh` and `zsh`.
The previous version was a one-liner and supported `bash` and `zsh` but
not `ksh`.
It is possible to keep the one-liner and add `ksh` support, but that is
mutually-exclusive with `zsh`.
Therefore, the only way to square this circle is to add an `if` block. A
silver lining here is that although longer, the script is probably
easier to follow as there is less code-golfing going on.
## Summary
The current receipt doesn't capture quite enough information. For
example, it doesn't differentiate between editable and non-editable
requirements. This PR instead uses the full `Requirement` type. I think
we should use a custom representation like we do in the lockfile, but
I'm just using the default representation to demonstrate the idea.
## Summary
As-is, if you have a workspace with mixed `requires-python`
requirements, resolution will _never_ succeed, since we'll use the union
as the `requires-python` bound (i.e., take the lowest value), and fail
when we see the package that only supports a narrower range.
This PR modifies the behavior to take the intersection (i.e., the
highest value), so if you have one package that supports Python 3.12 and
later, and another that supports Python 3.8 and later, we lock for
Python 3.12. If you try to sync or run with Python 3.8, we raise an
error, since the lockfile will be incompatible with that request.
Konsti has a write-up in https://github.com/astral-sh/uv/issues/5594
that outlines what could be a longer-term strategy.
Closes https://github.com/astral-sh/uv/issues/5578.
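A tiny sketch of the difference, using a hypothetical helper over `(major, minor)` lower bounds (ignoring upper bounds and specifier parsing entirely): the union keeps the lowest lower bound, while the intersection keeps the highest.

```rust
/// Sketch: combine per-member `requires-python` lower bounds.
/// Union = lowest bound (old behavior), intersection = highest bound (new).
fn union_lower_bound(bounds: &[(u32, u32)]) -> Option<(u32, u32)> {
    bounds.iter().copied().min()
}

fn intersection_lower_bound(bounds: &[(u32, u32)]) -> Option<(u32, u32)> {
    bounds.iter().copied().max()
}

fn main() {
    // One member requires >=3.8, another >=3.12.
    let bounds = [(3, 8), (3, 12)];
    assert_eq!(union_lower_bound(&bounds), Some((3, 8))); // may never resolve
    assert_eq!(intersection_lower_bound(&bounds), Some((3, 12))); // lock for 3.12+
}
```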
By resolving each fork from the lockfile individually, and by using
preferences for the current fork, we solve the instability in #5180.
I've tested this locally and will add the packse test scenarios upstack.
Part of
https://github.com/astral-sh/uv/issues/5180#issuecomment-2247696198
The comment in the code explains the bulk of this:
```rust
// We previously computed this heuristic freshness lifetime by
// looking at the difference between the last modified header and
// the response's date header. We then asserted that the cached
// response ought to be "fresh" for 10% of that interval.
//
// It turns out that this can result in very long freshness
// lifetimes[1] that lead to uv caching too aggressively.
//
// Since PyPI sets a max-age of 600 seconds and since we're
// principally just interacting with Python package indices here,
// we just assume a freshness lifetime equal to what PyPI has.
//
// Note though that a better solution here is for the index to
// support proper HTTP caching headers (ideally Cache-Control, but
// Expires also works too, as above).
```
We also remove the `heuristic_percent` field on `CacheConfig`. Since
that's actually part of the cache itself, we bump the simple cache
version.
Finally, we add some more `trace!` calls that should hopefully make
diagnosing issues related to the freshness lifetime a bit easier in the
future.
Fixes #5351
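In sketch form (the constant name and signature are illustrative, not the crate's actual API), the heuristic collapses to a fixed lifetime rather than a fraction of the last-modified interval:

```rust
use std::time::Duration;

/// PyPI serves `Cache-Control: max-age=600`, so assume the same lifetime
/// whenever the index doesn't send explicit caching headers.
const ASSUMED_FRESHNESS_LIFETIME: Duration = Duration::from_secs(600);

/// Previously: 10% of (response date - last modified), which could be weeks.
/// Now: a constant, so stale simple-index responses get revalidated promptly.
fn heuristic_freshness(_date_minus_last_modified: Duration) -> Duration {
    ASSUMED_FRESHNESS_LIFETIME
}

fn main() {
    // A package untouched for two years previously got ~73 days of freshness.
    let two_years = Duration::from_secs(2 * 365 * 24 * 60 * 60);
    assert_eq!(heuristic_freshness(two_years), Duration::from_secs(600));
}
```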
## Summary
Given a fork like:
```
pylint < 3 ; sys_platform == 'darwin'
pylint > 2 ; sys_platform != 'darwin'
```
Solving the top branch will typically yield a solution that also
satisfies the bottom branch, due to maximum version selection (while the
inverse isn't true).
To quote an example from the docs:
```rust
// If there's no difference, prioritize forks with upper bounds. We'd prefer to solve
// `numpy <= 2` before solving `numpy >= 1`, since the resolution produced by the former
// might work for the latter, but the inverse is unlikely to be true due to maximum
// version selection. (Selecting `numpy==2.0.0` would satisfy both forks, but selecting
// the latest `numpy` would not.)
```
Closes https://github.com/astral-sh/uv/issues/4926 for now.
## Summary
First part of: https://github.com/astral-sh/uv/issues/4926. We should
solve forks that _don't_ expand the world of supported versions (e.g.,
`python_version >= '3.11'` enables us to select new packages, since we
narrow the supported version range).
## Summary
`uv venv` should support adopting the Python version specified by
`requires-python` in `pyproject.toml`. This allows customizing the venv
setup when syncing from a Python project.
Closes https://github.com/astral-sh/uv/issues/5552.
It also serves as a workaround to close
https://github.com/astral-sh/uv/issues/5258.
## Test Plan
1. Run `uv venv` in a folder with a `pyproject.toml` specifying
`requires-python = "<3.10"`. Python 3.9 is selected for the venv.
2. Change to `requires-python = "<3.11"` and run `uv venv` again. Python
3.10 is selected now.
3. Switch to a folder without `pyproject.toml` then run `uv venv`.
Python 3.12 is selected now.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
Collapses the previous default into "managed" and makes the "managed"
behavior match "installed". People should use "only-managed" if they
want that behavior; it seems overly complicated otherwise.
## Summary
This PR deprecates the `--isolated` flag. The treatment varies across
the APIs:
- For non-preview APIs, we warn but treat it as equivalent to
`--no-config`.
- For preview APIs, we warn and ignore it, with two exceptions...
- For `tool run` and `run` specifically, we don't even warn, because we
can't differentiate the command-specific `--isolated` from the global
`--isolated`.
## Summary
The culmination of #4730. We now have `uv run --isolated` which always
uses a fresh environment (but includes the workspace dependencies as
needed). This enables you to test with strict isolation (e.g., `uv run
--isolated -p foo` will ensure that `foo` is unable to import anything
that isn't an actual dependency).
Closes #5430.
## Summary
This PR gets rid of the global `--isolated` flag (which serves a bunch
of independent responsibilities right now) on `uv tool run` in favor of
a dedicated `--isolated` flag, which tells uv to avoid re-using an
existing tool environment for this invocation. We'll add the same thing
to `uv run`, to avoid using the base project environment.
This will become a bit clearer in #5466, when we deprecate the
`--isolated` flag on the preview APIs.
## Summary
The idea here is that we hide all resolver output (the grayed out
resolver messages, plus the list of environment modifications) by
default in `uv run` and `uv tool run`. You can pass `--show-resolution`
to re-enable them.
Closes https://github.com/astral-sh/uv/issues/5458.
## Summary
Right now, `--isolated` is read from `uv run` and `uv init` to avoid
discovering the current workspace (or project). This PR moves that
behavior to a dedicated `--no-workspace` flag for `uv init`, and
`--no-project` for `uv run`. They could use the same flag, but
`--no-project` feels confusing for `uv init`, and `--no-workspace` seems
confusing for `uv run` (especially so once you read the documentation,
where we refer to the thing you're omitting as the project).
Closes https://github.com/astral-sh/uv/issues/5429.
This PR represents a different approach to marker propagation in an
attempt to unblock #4640. In particular, instead of propagating markers
when forks are created, we wait until resolution is complete to
propagate all markers to all dependencies in each fork. This ends up
being both more robust (we should never miss anything) and simpler to
implement because it doesn't require mutating a `PubGrubPackage` (which
was pretty annoying). I think the main downside here is that this can
sometimes add markers where they aren't needed.
This actually winds up making quite a few snapshot changes. I went
through each of them. Some of them look like legitimate bug fixes. Some
of them look like superfluous additions. And some of them look like they
would be removed if we had perfect marker normalization. But I don't
think any of the changes are _wrong_.
## Summary
If we just created an entrypoint script, we can of course set the
permissions (we just created it). However, if we're copying from the
cache, we might _not_ own the file. In that case, if we need to change
the permissions (we shouldn't, since the script is likely already
executable -- we set the permissions when we unzip, but I guess they
could _not_ be properly set in the zip itself), we have to copy it.
Closes https://github.com/astral-sh/uv/issues/5581.
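A simplified, Unix-only sketch of the fallback (paths, the mode bits, and error handling are all illustrative, not uv's actual code): try to set the executable bit in place, and if that fails because we don't own the cached file, copy it and set the bit on the copy we do own.

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

fn ensure_executable(script: &Path) -> std::io::Result<()> {
    let mut permissions = fs::metadata(script)?.permissions();
    permissions.set_mode(permissions.mode() | 0o755);
    if fs::set_permissions(script, permissions.clone()).is_ok() {
        return Ok(());
    }
    // We don't own the file (e.g., it came straight from the cache):
    // replace it with a copy that we do own, then fix the copy's mode.
    let temp = script.with_extension("tmp");
    fs::copy(script, &temp)?;
    fs::set_permissions(&temp, permissions)?;
    fs::rename(&temp, script)
}
```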
## Summary
Use the `windows-sys` bindings maintained by Microsoft developers;
`winapi` hasn't had any updates for more than 3 years.
## Test Plan
`cargo test`. It failed locally because I don't have Python 3.12 installed.
## Summary
`uv run --directory <path>` means that one doesn't have to change to a
project's directory to run programs from it. It makes it possible to use
projects as if they were tool installations.
To support this, the code reading `.python-version` was first updated so
that it can read such markers outside the current directory. Note the
minor change this causes (if I'm right), described in the commit.
## Test Plan
One test has been added.
## --directory
Not sure what the name of the argument should be, but it follows `uv
sync`'s `--directory` for now. Another alternative could be `--project`.
`uv run` and `uv tool run` should probably find common agreement on this
(relevant for project-locked tools).
I implemented this same change in Rye some time ago, and there we went
with `--pyproject <path to pyproject.toml file>`. I think using a
pyproject.toml file path rather than a directory was probably a mistake,
an overgeneralization one doesn't need.
Every packse version update is currently causing a huge diff (the size
of the `lock_scenarios.rs` diff in this PR). By redacting the version
from the snapshots, we will only have the actual change in the diff and
not the redundant version change noise.
The second commit moves all remaining packse url arg values to
`common/mod.rs`, which acts as a single source of truth for the packse
version.
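A sketch of the redaction, assuming insta's `filters` setting (which sits behind insta's `filters` cargo feature); the regex, placeholder, and helper name are illustrative rather than the test suite's exact filter list:

```rust
// Sketch: replace the packse version embedded in scenario URLs with a
// stable placeholder before asserting the snapshot.
fn assert_redacted_snapshot(output: &str) {
    insta::with_settings!({filters => vec![
        (r"packse/\d+\.\d+\.\d+", "packse/[PACKSE_VERSION]"),
    ]}, {
        insta::assert_snapshot!(output);
    });
}
```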
## Summary
The package was being installed as editable, but it wasn't marked as
such in `uv pip list`, as the `direct_url.json` was wrong.
Closes https://github.com/astral-sh/uv/issues/5543.
## Summary
The idea here is similar to what we do for wheels: we create the
`CachedEnvironment` in the `archive-v0` bucket, then symlink it to its
content-addressed location. This ensures that we can always recreate
these environments without concern for whether anyone else is accessing
them.
Part of the challenge here is that we want the virtual environments to
be relocatable, because we're now building them in one location but
persisting them in another. This requires that we write relative (rather
than absolute) paths to scripts and entrypoints. The main risk with
relocatable virtual environments is that the scripts and entrypoints
_themselves_ are not relocatable, because they use a relative shebang.
But that's fine for cached environments, which are never intended to
leave the cache.
Closes https://github.com/astral-sh/uv/issues/5503.
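A simplified, Unix-only sketch of the publish step (the paths and helper name are illustrative, not uv's actual code): the environment is built under the `archive-v0` bucket, and the content-addressed path is just a symlink that gets swapped in atomically.

```rust
use std::fs;
use std::os::unix::fs::symlink;
use std::path::Path;

/// Sketch: expose an environment built in the archive bucket at its
/// content-addressed location via a symlink, swapped in atomically so that
/// readers never observe a half-built environment.
fn publish(archive_entry: &Path, content_addressed: &Path) -> std::io::Result<()> {
    let staging = content_addressed.with_extension("tmp");
    let _ = fs::remove_file(&staging); // clean up any previous attempt
    symlink(archive_entry, &staging)?;
    fs::rename(&staging, content_addressed)
}
```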
## Summary
Adds a `--relocatable` CLI arg to `uv venv`. This flag does two things:
* ensures that the associated activation scripts do not rely on a
hardcoded absolute path to the virtual environment (to the extent
possible; `.csh` and `.nu` left as-is)
* persists a `relocatable` flag in `pyvenv.cfg`.
The flag in `pyvenv.cfg` in turn instructs the wheel `Installer` to
create script entrypoints in a relocatable way (use the `exec` trick +
`dirname $0` on POSIX; use a relative path to `python[w].exe` on
Windows).
Fixes: #3863
## Test Plan
* Relocatable console scripts covered as additional scenarios in
existing test cases.
* Integration testing of boilerplate generation in `venv`.
* Manual testing of `uv venv` with and without `--relocatable`
## Summary
This PR adds support for `uv lock` and `uv sync` in the standardized
benchmarks script.
Part of: https://github.com/astral-sh/uv/issues/5263.
## Test Plan
For example:
```sh
python scripts/bench/__main__.py --uv-project --benchmark resolve-cold ./scripts/requirements/trio.in --verbose
```
## Summary
Closes #2187.
The [xz
backdoor](https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27)
is still fairly recent, but luckily the [Rust `xz2` crate bundles
version 5.2.5 of the C `xz`
package](https://github.com/alexcrichton/xz2-rs/tree/main/lzma-sys),
which is before the backdoor was introduced.
It's worth noting that a security risk still exists if you have a
compromised version of `xz` installed on your system, but that risk is
not introduced by `uv` or the Rust packages in general.
## Test Plan
Tried installing the package mentioned in the linked issue: `python-apt
@
https://launchpad.net/ubuntu/+archive/primary/+sourcefiles/python-apt/2.7.6/python-apt_2.7.6.tar.xz`
(Note that this will only work on Ubuntu - I tried on a Mac and while
the archive was extracted properly, the package did not install because
of some missing files)
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
I think it makes no sense to allow `--editable` and `--exclude-editable`
at the same time.
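For illustration, this is roughly how such a mutual exclusion can be expressed with clap's derive API (the struct and field names are illustrative, not uv's exact CLI definition):

```rust
use clap::Parser;

#[derive(Parser)]
struct PipListArgs {
    /// Only include editable projects.
    #[arg(long)]
    editable: bool,

    /// Exclude any editable projects.
    #[arg(long, conflicts_with = "editable")]
    exclude_editable: bool,
}

fn main() {
    // Passing both flags now fails to parse, mirroring the error below.
    let result = PipListArgs::try_parse_from(["uv", "--editable", "--exclude-editable"]);
    assert!(result.is_err());
}
```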
## Test Plan
```console
$ cargo run -- pip list --editable --exclude-editable
error: the argument '--editable' cannot be used with '--exclude-editable'
Usage: uv.exe pip list --editable
For more information, try '--help'.
```
Consider the following packse scenario:
```toml
[root]
requires = [
"a>=1.0.0 ; python_version < '3.10'",
"a>=1.1.0 ; python_version >= '3.10'",
"a>=1.2.0 ; python_version >= '3.11'",
]
[packages.a.versions."1.0.0"]
[packages.a.versions."1.1.0"]
[packages.a.versions."1.2.0"]
```
On current `main`, this produces a dependency on `a` that looks like
this:
```toml
dependencies = [
{ name = "fork-overlapping-markers-basic-a", marker = "python_version < '3.10' or python_version >= '3.11'" },
]
```
But the marker expression is clearly wrong here, since it implies that
`a` isn't installed at all for Python 3.10. With this PR, the above
dependency becomes:
```toml
dependencies = [
{ name = "fork-overlapping-markers-basic-a" },
]
```
That is, it's unconditional, which I believe is correct here since there
aren't any other constraints on which version to select.
The specific bug here is that when we found overlapping dependency
specifications for the same package *within* a pre-existing fork, we
intersected all of their marker expressions instead of unioning them.
That in turn resulted in incorrect marker expressions.
While this doesn't fix any known bug on the issue tracker (like #4640),
it does appear to fix a couple of our snapshot tests. And fixes a basic
test case I came up with while working on #4732.
For the packse scenario test: https://github.com/astral-sh/packse/pull/206
When a fork occurs, we divide not just the dependencies that
provoked a fork into distinct groups, but we also add the
corresponding sibling dependencies to each fork. Previously,
while we tracked markers on the fork itself, the markers on the
individual dependencies only corresponded to markers written in the
dependency specification.
This meant that the sibling dependencies that got added to
each fork would not themselves have markers attached to them.
This in turn meant they would not have markers associated with
them in the lock file.
In many cases, this is actually okay, because the resolver will
pick a version that is "universal" across all forks in most
cases. But in some cases, this simply isn't possible, as
the marker expressions in the fork can and do influence resolution.
In which case, it is possible for the same package with different
versions to show up in the lock file unconditionally. Which is a
big no-no.
So in this commit, after we determine the forks, we intersect the
markers on each fork with each of its dependencies.
This does seem to balloon the marker expressions in some cases.
I plucked one low hanging fruit to avoid doing `x and x` in
trivial cases. (And this eliminated a portion of the snapshot
diffs.) But some pretty gnarly diffs remain.
This commit also fixes another bug: previously, when we created a fork
to capture the "remaining" universe of an incomplete set of markers, we
left out dependencies that should be included in that fork. We rectify
that here.
Fixes #5086
Partially addresses #4732
Interestingly, the empty string appears to be valid for these
types. I'm not sure if that's intended, but having a Default
impl is useful for use with `std::mem::take`.
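For context, a tiny illustration of why a `Default` impl pairs well with `std::mem::take` (the `Markers` type here is a hypothetical stand-in for the types in question):

```rust
#[derive(Debug, Default)]
struct Markers(String);

/// `mem::take` swaps in `Markers::default()` and returns the old value,
/// letting us move the contents out from behind a `&mut` without cloning.
fn take_markers(slot: &mut Markers) -> Markers {
    std::mem::take(slot)
}

fn main() {
    let mut slot = Markers("python_version >= '3.9'".to_string());
    let taken = take_markers(&mut slot);
    assert_eq!(taken.0, "python_version >= '3.9'");
    assert_eq!(slot.0, ""); // the empty string is the `Default` value
}
```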