This test was, I believe, relying on the XDG_DATA_HOME environment
variable not being set. When it is set, as is the case in my
environment, `uv tool run` will respect it and install `black` for this
particular test into my actual XDG_DATA_HOME directory. We fix this by
setting `XDG_DATA_HOME` explicitly.
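A minimal sketch of the idea, driving uv through `assert_cmd` (the crate and paths here are assumptions, not uv's actual test harness):
```rust
use assert_cmd::Command;
use tempfile::TempDir;

fn main() {
    let data_home = TempDir::new().unwrap();
    Command::new("uv")
        .args(["tool", "run", "black", "--version"])
        // Pin XDG_DATA_HOME so the test can never write into the
        // developer's real data directory.
        .env("XDG_DATA_HOME", data_home.path())
        .assert()
        .success();
}
```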
By using `Box::pin(run())` we can reduce the artificial stack size for
running tests on Windows in debug mode from 8MB to 2MB. I've checked
that 1MB (or no custom stack size) still fails tests, e.g.
`add_workspace_editable`.
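The pattern, roughly (a sketch, not the exact harness): boxing the top-level future moves its state machine to the heap, so the thread running it needs far less stack.
```rust
async fn run() {
    // Imagine a deeply nested future with a large inline state machine.
}

fn main() {
    let runtime = tokio::runtime::Runtime::new().unwrap();
    // Without `Box::pin`, the whole state machine of `run()` lives on the
    // calling thread's stack; boxing it moves that state to the heap.
    runtime.block_on(Box::pin(run()));
}
```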
This test fails most of the time for me when running nextest locally,
failing the overall test run, so I'm deactivating it for now. I'm still
not sure what the root cause is; it seems to have something to do
with Python's stdin not being ready immediately after we spawn the
process, and us being too fast.
## Summary
Resolves #4834
## Test Plan
```sh
# 3.12.3 is an `install_only` archive
$ cargo run -- python install --preview --force 3.12.3
# 3.9.4 only has a `full` archive
$ cargo run -- python install --preview --force 3.9.4
```
## Summary
Partially closes #1917
This PR picks up on some of the great work from #1864 and opts to keep
`panic_immediate_abort` (for size reasons). I split the PR into
isolated commits in case we want to separate/cherry-pick them out.
1. The first commit ports nearly all std changes from that PR into this
PR. Binary sizes stayed the same, ~16kb.
2. The second commit migrates our existing usage of windows-sys to
windows, for safer FFI calls that return `Result`s. It also narrows all
large unsafe blocks down to the actual unsafe calls, and switches
some areas to std, such as the getenv port (which seemed buggy!) from
launcher.c. In addition, this adds more error checking in order to
match some missing assertions from distlib's launcher.c. Note that, due
to the additional .text data, the binary sizes increased to ~20.5kb, but
we can cut back on some of the added error messages as needed.
3. The third commit switches to using xwin for building on all 3
supported trampoline targets for sanity, and adds a CI bloat check for
core::fmt and panic as a precaution. Sadly, this will invalidate the
xwin cache on the first run.
## Test Plan
Most changes were tested on a couple of local GUI apps and console apps;
I also tested some of the error states manually by calling SetLastError
at different points in the code and/or passing in invalid handles.
I'm not sure how far we can get with migrating some of the other calls
without increasing binary size substantially. An initial attempt at
using std::path didn't seem so bad size-wise when I tried it (~1k); in
other cases, such as std::process::exit, it added ~10k to the total
binary size.
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
If you pass `--isolated` but no `--with`, at present, we don't create
any environment (so `--python` isn't respected, and `python` will fail
entirely if it wasn't already on your `PATH`). Now, we create a base
environment with `--isolated` even if `--with` wasn't provided.
Closes https://github.com/astral-sh/uv/issues/4846.
Closes https://github.com/astral-sh/uv/issues/4776.
## Summary
Like https://github.com/astral-sh/uv/pull/4808 but with a few more
changes. I suspect this will require some bikeshedding but I find the
use of "installation" and "installed" in the same sentence to be kind of
a lot.
## Summary
There are a few ideas at play here:
1. pip always strips versions to the release when evaluating against a
`Requires-Python`, so we now do the same. That means, e.g., using
`3.13.0b0` will be accepted by a project with `Requires-Python: >=
3.13`, which does _not_ adhere to PEP 440 semantics but is somewhat
intuitive.
2. Because we know we'll only be evaluating against release-only
versions, we can use different semantics in PubGrub that let us collapse
ranges. For example, `python_version >= '3.10' or python_version <
'3.10'` can be collapsed to the truthy marker.
Closes https://github.com/astral-sh/uv/issues/4714.
Closes https://github.com/astral-sh/uv/issues/4272.
Closes https://github.com/astral-sh/uv/issues/4719.
Remove wheels from the lockfile that don't match the required python
version. For example, we remove
`charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl` when we have
`requires-python = ">=3.12"`.
Our snapshots barely show changes, since they avoid the large binaries
for which this matters most. Here are three real-world `uv.lock`
before/after comparisons that show the difference:
* [warehouse](https://gist.github.com/konstin/9a1ed6a32b410e250fcf4c6ea8c536a5) (5677 -> 4214)
* [transformers](https://gist.github.com/konstin/5636281b5226f64aa44ce3244d5230cd) (6484 -> 5816)
* [github-wikidata-bot](https://gist.github.com/konstin/ebbd7b9474523aaa61d9a8945bc02071) (793 -> 454)
We only remove wheels we are certain don't match the Python version, and
we still keep those with unknown tags. We could remove even more wheels
by also considering other markers, e.g. removing Linux wheels for a
Windows-only dependency, but we would be trading complex,
easy-to-get-wrong logic for diminishing returns.
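A deliberately rough sketch of the tag check (the parsing is simplified; the real implementation works on structured wheel tags):
```rust
/// Keep a wheel unless its CPython tag pins a minor version below the
/// `requires-python` lower bound (e.g. drop `cp311` when we require >=3.12).
fn wheel_matches_requires_python(filename: &str, lower_minor: u32) -> bool {
    match filename
        .split(|c: char| c == '-' || c == '.')
        .find_map(|part| part.strip_prefix("cp3"))
    {
        Some(minor) => minor.parse::<u32>().map_or(true, |m| m >= lower_minor),
        // Unknown or version-independent tags (e.g. `py3-none-any`) are
        // kept: we only remove wheels we're certain don't match.
        None => true,
    }
}

fn main() {
    // `requires-python = ">=3.12"` removes the cp311 wheel...
    assert!(!wheel_matches_requires_python(
        "charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl",
        12,
    ));
    // ...but keeps wheels whose tags don't pin a version.
    assert!(wheel_matches_requires_python("foo-1.0-py3-none-any.whl", 12));
}
```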
## Summary
Check the SHA256 checksum when downloading a managed Python toolchain.
## Test Plan
```sh
$ cargo run -- python install 3.12
warning: `uv python install` is experimental and may change without warning.
Looking for installation Python 3.12.3 (any-3.12.3-any-any-any)
Downloading cpython-3.12.3-windows-x86_64-none
Installed Python 3.12.3 to C:\Users\jo\AppData\Roaming\uv\data\python\cpython-3.12.3-windows-x86_64-none
Installed 1 installation in 6s
$ cargo run -- python uninstall 3.12
$ # manually change the hash in `crates/uv-python/src/downloads.inc`
$ cargo run -- python install 3.12
warning: `uv python install` is experimental and may change without warning.
Looking for installation Python 3.12 (any-3.12-any-any-any)
Downloading cpython-3.12.3-windows-x86_64-none
error: Hash mismatch for `cpython-3.12.3-windows-x86_64-none`
Expected:
xx
Computed:
776568c92c5f3b47dbf5f17c1c58578f70d75a32654419a158aa8bdc6f95b09a
```
## Summary
This seems like another good candidate for environment caching. If you
run a script repeatedly, we can just use the existing cached
environment.
## Summary
Addresses https://github.com/astral-sh/uv/issues/4330, to reduce
duplication in the client creation logic.
## Test Plan
https://github.com/astral-sh/uv/pull/4729#issuecomment-2204681655
## Summary
The resolver sometimes starts HTTP requests that end up not being
necessary. When dropping the Tokio runtime before exiting we currently
wait for those to complete. This can cause noticeable hangs in the CLI,
particularly when the runtime is blocked on slow DNS resolution.
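One way to express the fix (a sketch; `shutdown_background` is real Tokio API, but whether uv uses exactly this is an assumption):
```rust
fn main() {
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap();
    runtime.block_on(async {
        // ... resolver work that may leave speculative requests in flight ...
    });
    // Dropping the runtime would block until all spawned tasks finish;
    // `shutdown_background` returns immediately instead.
    runtime.shutdown_background();
}
```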
Resolves https://github.com/astral-sh/uv/issues/4599.
## Test Plan
This change resolves any reproducible hangs for me locally.
## Summary
The basic strategy:
- When the user does `uv tool run`, we resolve the `from` and `with`
requirements (always).
- After resolving, we generate a hash of the requirements. For now, I'm
just converting to a lockfile and hashing _that_, but that's an
implementation detail.
- Once we have a hash, we _also_ hash the interpreter.
- We then store environments in
`${CACHE_DIR}/${INTERPRETER_HASH}/${RESOLUTION_HASH}`.
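A minimal sketch of that layout, with `DefaultHasher` standing in for whatever hash uv actually uses:
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::path::{Path, PathBuf};

fn hash_of(value: &str) -> String {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    format!("{:x}", hasher.finish())
}

/// `${CACHE_DIR}/${INTERPRETER_HASH}/${RESOLUTION_HASH}`
fn environment_dir(cache_dir: &Path, interpreter_key: &str, lockfile: &str) -> PathBuf {
    cache_dir
        .join(hash_of(interpreter_key))
        .join(hash_of(lockfile))
}

fn main() {
    let dir = environment_dir(Path::new("/cache"), "cpython-3.12.3", "version = 1");
    println!("{}", dir.display());
}
```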
Some consequences:
- We cache based on the interpreter, so if you request a different
Python, we'll create a new environment (even if they're compatible).
This has the nice side-effect of ensuring that we don't use environments
for interpreters that were later deleted.
- We cache the `from` and `with` together. In practice, we may want to
cache them separately, then layer them? But this is also an
implementation detail that we could change later.
- Because we use the lockfile as the cache key, we will invalidate the
cache when the format changes. That seems ok, but we could improve it in
the future by generating a stable hash from a lockfile that's
independent of the schema.
Closes https://github.com/astral-sh/uv/issues/4752.
Closes #4653
## Summary
Adds the tool version to the list command right beside the tool name
```
$ uv tool list
black v24.2.0
```
Following the proposed format discussed in #4653
## Test Plan
`cargo test tool_list`
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
The changes in https://github.com/astral-sh/uv/pull/4708 caused an
overflow of the 1MB default Windows stack size during clap parsing, in
debug mode only. This meant that even trivial wrong-argument tests would
fail without an increased stack size. As a remedy, we box the clap
types.
This fell out of my investigation of
https://github.com/astral-sh/uv/issues/4774, but the bug was fixed by
the reporter in #4775.
- Adds support for `GH_TOKEN` authentication again — basically needed to
avoid rate limits when hacking on this.
- Clarifies some handling and logging of flavors
## Summary
This doesn't cache the tool environment; rather, it just uses the `tool
install` environment if it satisfies the request.
Closes https://github.com/astral-sh/uv/issues/4742.
Previously this displayed:
```
❯ cargo run -q -- --help
The command line interface for the uv binary.
Usage: uv [OPTIONS] <COMMAND>
```
This is... weird. Here I remove it entirely. I could see adding
`long_about` text being helpful in the future.
Fixes #4774.
## Summary
Change the Python interpreter that `uv python` installs on Linux to an
optimized build.
## Test Plan
I ran the following commands on Linux (glibc) to confirm that an
optimized (not debug) build of Python is installed.
```bash
# install python
uv python install 3.12.3
# check build type
uv run python -c "import sysconfig;print(sysconfig.get_config_var('Py_DEBUG'))"
0
```
## Summary
This used to be necessary because we purged the cache in the
`InstallPlan` if the user passed `--reinstall`. _However_, we later
changed the cache to be append-only.
## Test Plan
I ran through the test plan in https://github.com/astral-sh/uv/pull/933,
which includes an integration test and running `uv pip install
--reinstall` with:
```text
setuptools
devpi @ e334eb4dc9bb023329e4b610e4515b/devpi-2.2.0.tar.gz
```
Whew this is a lot.
The user-facing changes are:
- `uv toolchain` to `uv python` e.g. `uv python find`, `uv python
install`, ...
- `UV_TOOLCHAIN_DIR` to `UV_PYTHON_INSTALL_DIR`
- `<UV_STATE_DIR>/toolchains` to `<UV_STATE_DIR>/python` (with
[automatic
migration](https://github.com/astral-sh/uv/pull/4735/files#r1663029330))
- User-facing messages no longer refer to toolchains, instead using
"Python", "Python versions" or "Python installations"
The internal changes are:
- `uv-toolchain` crate to `uv-python`
- `Toolchain` no longer referenced in type names
- Dropped unused `SystemPython` type (previously replaced)
- Clarified the type names for "managed Python installations"
- (more little things)
Closes #4654
## Summary
The purpose of this is to show the entrypoints of each tool when running
`uv tool list` as below:
```
$ uv tool list
black
black
blackd
```
I used the formatting proposed in #4653 by @blueraft.
I had to use spaces instead of tabs to make the test pass: the test uses
a raw string, and I did not manage to make it pass when escaping the tab
in the list.rs file, so I used spaces everywhere.
I had a deeper look into #4653 as well, but it is more difficult, as we
need to get the version of the tool into the `Tool` object; I will
continue on that one later.
Please tell me if anything else is needed. I tried to follow the
contribution guidelines, but I might have forgotten something.
Have a great day!
## Test Plan
`cargo clippy`, then running the local version of uv as described in the
`README.md`:
```
my-computer :~/mypath/uv$ cargo run -- tool list
Compiling uv-cli v0.0.1 (/mypath/uv/crates/uv-cli)
Compiling uv v0.2.18 (/mypath/uv/crates/uv)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 18.69s
Running `target/debug/uv tool list`
warning: `uv tool list` is experimental and may change without warning.
black
black
blackd
isort
isort
isort-identify-imports
```
and
`cargo test tool_list`
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
For now the semantics are such that if the requested requirements from
the command line don't match the receipt (or if any `--reinstall` or
`--upgrade` is requested), we proceed with an install, passing the
`--reinstall` and `--upgrade` to the underlying Python environment.
This may lead to some unintuitive behaviors, but it's simplest for now.
For example:
- `uv tool install black<24` followed by `uv tool install black
--upgrade` will install the latest version of `black`, removing the
`<24` constraint.
- `uv tool install black --with black-plugin` followed by `uv tool
install black` will remove `black-plugin`.
Closes https://github.com/astral-sh/uv/issues/4659.
In #3514 and #2755, users had intermittent network errors, but it was
not always clear whether we had already retried these requests or not.
Building upon https://github.com/TrueLayer/reqwest-middleware/pull/159,
this PR adds the number of retries to the error message, so we can see
at first glance where we're missing retries and where we might need to
change retry settings.
Example error trace:
```
Could not connect, are you offline?
Caused by: Request failed after 3 retries
Caused by: error sending request for url (https://pypi.org/simple/uv/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
```
This code is ugly, since I'm missing a better pattern for attaching
context to reqwest middleware errors in
https://github.com/TrueLayer/reqwest-middleware/pull/159.
## Summary
Given:
```text
numpy >=1.26 ; python_version >= '3.9'
numpy <1.26 ; python_version < '3.9'
```
When resolving for Python 3.8, we need to narrow the `requires-python`
requirement in the top branch of the fork, because all `numpy >=1.26`
releases require Python 3.9 or later -- and we know (in that branch)
that we only need to _solve_ for Python 3.9 or later.
Closes https://github.com/astral-sh/uv/issues/4669.
## Summary
This is required to solve https://github.com/astral-sh/uv/issues/4669,
because the `Requires-Python` version can now vary across a resolution.
For example, within certain forks, we might have a more narrow range,
which would allow us to use distributions that would not be allowed for
the global resolution.
This should be fine because `requires-python` is part of the package
metadata, so it should be consistent between files within a package
version. As such, there shouldn't be any risk that we incorrectly
prioritize distributions by omitting this information.
(To be more specific, the risk is something like: we prioritize some
wheel over a source distribution within a package-version, so we don't
track the source distribution at all. Then, later, when we choose a
candidate, we see that the wheel doesn't meet the `Requires-Python`
requirement, even though the source distribution _would've_ met it. If
files within a distribution could have varied support, this would be a
real risk.)
I think `--toolchain-preference system` is sufficiently clear and
`--toolchain-preference prefer-system` is excessively verbose. This was
discussed in the original pull request at
https://github.com/astral-sh/uv/pull/4424 but because we had a case for
preferring "installed managed" toolchains I was hesitant to change it.
Now that I've dropped that in #4601, I think we can drop the prefix.
Adds a `toolchain-fetch` option alongside `toolchain-preference` with
`automatic` (default) and `manual` values allowing automatic toolchain
fetches to be disabled (replaces
https://github.com/astral-sh/uv/pull/4425). When `manual`, toolchains
must be installed with `uv toolchain install`.
Note this was previously implemented with `if-necessary`, `always`,
`never` variants but the interaction between this and
`toolchain-preference` was too confusing. By reducing to a binary
option, things should be clearer. The `if-necessary` behavior moved to
`toolchain-preference=installed`. See
https://github.com/astral-sh/uv/pull/4601#discussion_r1657839633 and
https://github.com/astral-sh/uv/pull/4601#discussion_r1658658755
Closes https://github.com/astral-sh/uv/issues/4476
Originally, this used the changes in #4642 to invoke `main()` from a
`uvx` binary. This had the benefit of `uvx` being entirely standalone at
the cost of doubling our artifact size. We think that's the incorrect
trade-off.
Instead, we assume `uvx` is always next to `uv` and create a tiny binary
(<1MB) that invokes `uv` in a child process. This seems preferable to a
`cargo-dist` alias because we have more control over it. This binary
should "just work" for all of our cargo-dist distributions and
installers, but we'll need to add a new entry point for our PyPI
distribution. I'll probably tackle support there separately?
```
❯ ls -lah target/release/uv target/release/uvx
-rwxr-xr-x 1 zb staff 31M Jun 28 23:23 target/release/uv
-rwxr-xr-x 1 zb staff 452K Jun 28 23:22 target/release/uvx
```
This includes some small overhead:
```
❯ hyperfine --shell=none --warmup=100 './target/release/uv tool run --help' './target/release/uvx --help' --min-runs 2000
Benchmark 1: ./target/release/uv tool run --help
Time (mean ± σ): 2.2 ms ± 0.1 ms [User: 1.3 ms, System: 0.5 ms]
Range (min … max): 2.0 ms … 4.0 ms 2000 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: ./target/release/uvx --help
Time (mean ± σ): 2.9 ms ± 0.1 ms [User: 1.7 ms, System: 0.9 ms]
Range (min … max): 2.8 ms … 4.2 ms 2000 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Summary
./target/release/uv tool run --help ran
1.35 ± 0.09 times faster than ./target/release/uvx --help
```
I presume there may be some other downsides to a child process? The
wrapper is a little awkward. We could consider `execv`, but this is
complicated across platforms. There's an example implementation of that
over in
[monotrail](433af5aed9/crates/monotrail/src/monotrail.rs (L764-L799)).
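The shim amounts to something like this (a sketch; the exact argument handling and error reporting are assumptions):
```rust
use std::env;
use std::process::{Command, ExitCode};

fn main() -> ExitCode {
    // Assume `uv` sits next to the current (`uvx`) executable.
    let uv = env::current_exe()
        .ok()
        .and_then(|path| path.parent().map(|dir| dir.join("uv")))
        .expect("could not locate `uv` next to `uvx`");
    // Forward all arguments as `uv tool run <args>`.
    let status = Command::new(uv)
        .args(["tool", "run"])
        .args(env::args_os().skip(1))
        .status()
        .expect("failed to spawn `uv`");
    ExitCode::from(status.code().unwrap_or(1).clamp(0, 255) as u8)
}
```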
## Summary
I ended up needing this for https://github.com/astral-sh/uv/issues/4664
but I think it's a good change more broadly. We should be able to share
this cached information across operations within a given invocation.
## Summary
These are changing in one of my branches but I can't tell _what's_
changing. Some tests include the lock, but others don't. This PR adds it
for all successful resolves in the suite.
## Summary
You can now add `managed = false` under `[tool.uv]` in a
`pyproject.toml` to explicitly opt out of the project and workspace
APIs.
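A minimal example:
```toml
[tool.uv]
managed = false
```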
If a project sets `managed = false`, we will (1) _not_ discover it as a
workspace root, and (2) _not_ discover it as a workspace member (similar
to using `exclude` in the workspace parent).
Closes https://github.com/astral-sh/uv/issues/4551.
## Summary
This doesn't actually change any behaviors, but it does make it a bit
easier to solve #4669, because we don't have to support "version
narrowing" for the non-`RequiresPython` variants in here. Right now, the
semantics are kind of muddied, because the `target` variant is
_sometimes_ interpreted as an exact version and sometimes as a lower
bound.
## Summary
`GitDatabase::contains` previously only parsed the commit to see if it
was a valid hash, and didn't verify that the commit existed in the
object database. This led to the database never being updated.
Resolves https://github.com/astral-sh/uv/issues/4378.
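A sketch of the distinction using the `git2` crate (uv's actual Git code may differ):
```rust
use git2::{Oid, Repository};

/// Before: merely checks that the revision parses as a hash.
fn contains_by_parse(rev: &str) -> bool {
    Oid::from_str(rev).is_ok()
}

/// After: also checks that the commit exists in the object database, so
/// a stale database is detected and updated.
fn contains_in_odb(repo: &Repository, rev: &str) -> bool {
    Oid::from_str(rev)
        .and_then(|oid| repo.find_commit(oid))
        .is_ok()
}
```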
## Test Plan
Added a test that fails without this change.
It's hard to talk about solve state and resolver state, so I'm renaming
them to fork state and resolver state, indicating the hierarchy between
them more directly.
## Summary
Closes https://github.com/astral-sh/uv/issues/4688.
## Test Plan
```
❯ cargo run tool install ruff
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.14s
Running `target/debug/uv tool install ruff`
warning: `uv tool install` is experimental and may change without warning.
Resolved 1 package in 136ms
Installed 1 package in 3ms
+ ruff==0.5.0
No entrypoints to install for tool `ruff`
```
## Summary
Packages that provide scripts that _aren't_ Python entrypoints need to
be respected in `uv tool install`. For example, Ruff ships a script in
`ruff-0.5.0.data/scripts`.
Unfortunately, the `.data` directory doesn't exist in the virtual
environment at all (it's removed, per the spec, after install). So this
PR changes the entry point detection to look at the `RECORD` file, which
is the only evidence that the scripts were installed.
Closes https://github.com/astral-sh/uv/issues/4691.
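A sketch of the detection (RECORD is CSV of `path,hash,size`; real code should use proper RECORD parsing):
```rust
/// Collect installed script paths from a wheel's RECORD file: entries
/// under `<name>-<version>.data/scripts/` are the only remaining evidence
/// that scripts were installed, since the `.data` directory is removed.
fn script_entries(record: &str) -> Vec<String> {
    record
        .lines()
        .filter_map(|line| line.split(',').next())
        .filter(|path| path.contains(".data/scripts/"))
        .map(ToOwned::to_owned)
        .collect()
}

fn main() {
    let record = "ruff/__init__.py,sha256=abc,100\nruff-0.5.0.data/scripts/ruff,sha256=def,2000";
    assert_eq!(script_entries(record), vec!["ruff-0.5.0.data/scripts/ruff"]);
}
```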
## Test Plan
`cargo run -- tool install ruff` (snapshot tests to come)
## Summary
Resolves #4483. Resolves #4484.
## Test Plan
`cargo test`
```sh
❯ cargo run -- toolchain dir
warning: `uv toolchain dir` is experimental and may change without warning.
/Users/ahmedilyas/Library/Application Support/uv/toolchains
❯ cargo run -- tool dir
warning: `uv tool dir` is experimental and may change without warning.
/Users/ahmedilyas/Library/Application Support/uv/tools
```
## Summary
I think this may have just been a typo.
Closes https://github.com/astral-sh/uv/issues/4692.
## Test Plan
Run `cargo run tool install flask --force --reinstall` repeatedly.
## Summary
I noticed that `init_environment` and `find_interpreter` were both
calling `find_environment`, which seemed like a code smell to me.
Instead, `find_interpreter` now returns either a compatible environment
or an interpreter (if no compatible environment was found).
Additionally, `interpreter_meets_requirements` now no longer validates
`requires-python` if `--python` or `.python-version` is set. Instead, we
warn, which matches the behavior we get when creating a new environment
at the bottom of `find_interpreter`.
In total, I think this makes the data flow in project interpreter
discovery less repetitive and easier to reason about.
## Summary
This should both make it faster to solve forks (since we have a guess
for a valid resolution, and will bias towards packages we've already
fetched) and improve consistency between forks.
Closes https://github.com/astral-sh/uv/issues/4617.
## Summary
Resolves https://github.com/astral-sh/uv/issues/4651.
(Pruning needs to happen at the parent level so that the number of
children used to determine the output is correct.)
## Test Plan
added a test that would've caught this bug 🌵
## Summary
`remove_edge` will invalidate the last index in the graph, so we need to
ensure that each index we look at is "earlier" than the last.
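With `petgraph`, for instance, the safe pattern is to remove edges in descending index order:
```rust
use petgraph::graph::{EdgeIndex, Graph};

/// `remove_edge` swaps the removed edge with the graph's last edge, so
/// the highest index is invalidated on every removal. Removing in
/// descending index order keeps every not-yet-visited index valid.
fn remove_edges(graph: &mut Graph<&str, ()>, mut doomed: Vec<EdgeIndex>) {
    doomed.sort();
    for edge in doomed.into_iter().rev() {
        graph.remove_edge(edge);
    }
}
```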
Co-authored-by: bluss <bluss@users.noreply.github.com>
## Summary
When a constraint is applied to a requirement with a marker, the marker
needs to be propagated to the constraint.
If both the constraint and the requirement have a marker, they need to
be merged together (via `and`).
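For example (illustrative): applying the constraint `flask<3 ; python_version < '3.12'` to the requirement `flask ; sys_platform == 'win32'` yields `flask<3 ; sys_platform == 'win32' and python_version < '3.12'`.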
Closes https://github.com/astral-sh/uv/issues/4575.
## Summary
This PR dodges some of the bigger issues raised by
https://github.com/astral-sh/uv/pull/4554 and
https://github.com/astral-sh/uv/pull/4555 by _not_ changing any of the
bigger semantics around syncing and instead merely changing virtual
workspace roots to sync all packages in the workspace (rather than
erroring due to being unable to find a project).
Closes #4541.
We need this to power uninstallations!
The latter two commits were reviewed in:
- #4637
- #4638
Note this is a breaking change for existing tool installations, but it's
in preview and very new. In the future, we'll need a clear upgrade path
for tool receipt changes.
This centralizes writing out the DistributionId as TOML. This is again
just a refactor. No behavioral changes were made. In a subsequent
commit, we will tweak how `source` is written.
Looks much better than #4618:
```
DEBUG Pre-fork split universal took 0.644s
DEBUG Split python_version >= '3.12' and platform_machine == 'aarch64' and platform_system == 'Darwin' and platform_system == 'Linux' took 0.659s
DEBUG Split python_version == '3.9' and platform_machine == 'arm64' and platform_system == 'Darwin' took 0.291s
```
The journey here can be seen in:
- #4587
- #4589
- #4594
I collapsed all the commits here because only the last one in the stack
got us to a "correct" error message.
There are a few architectural changes:
- We have a dedicated `MissingEnvironment` and `EnvironmentNotFound`
type for `PythonEnvironment::find` allowing different error messages
when searching for environments
- `ToolchainNotFound` becomes a struct with the `ToolchainRequest` which
greatly simplifies missing toolchain error formatting
- `ToolchainNotFound` tracks the `EnvironmentPreference` so it can
accurately report the locations checked
The messages look like this now, instead of the bland (and often
incorrect): "No Python interpreter found in system toolchains".
```
❯ cargo run -q -- pip sync requirements.txt
error: No virtual environment found
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- pip sync requirements.txt --system
error: No system environment found
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- pip sync requirements.txt --python 3.12
error: No virtual environment found for Python 3.12
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- pip sync requirements.txt --python 3.12 --system
error: No system environment found for Python 3.12
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- toolchain find 3.12 --preview
error: No toolchain found for Python 3.12 in system path
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- pip compile requirements.in
error: No toolchain found in virtual environments or system path
```
I'd like to follow this with hints, suggesting creating an environment
or using system in some cases.
This includes a functional change: we now skip the forked state pop/push
if we didn't fork.
From transformers:
```
DEBUG Pre-fork split universal took 0.036s
DEBUG Split python_version >= '3.10' and python_version >= '3.10' and platform_system == 'Darwin' and python_version >= '3.11' and python_version >= '3.12' and python_version >= '3.6' and platform_system == 'Linux' and platform_machine == 'aarch64' took 0.048s
DEBUG Split python_version <= '3.9' and platform_system == 'Darwin' and platform_machine == 'arm64' and python_version >= '3.7' and python_version >= '3.8' and python_version >= '3.9' took 0.038s
```
The messages could use simplification from
https://github.com/astral-sh/uv/issues/4536
We can consider nested spans in the future but this works nicely for
now.
https://github.com/astral-sh/uv/pull/2419 appears to have only applied
this retry to wheels that were already downloaded (though I would have
to look more carefully to be certain). In
https://github.com/astral-sh/uv/issues/1491, we've gotten continued
reports of spurious failures on Windows and tracing reveals that we are
not applying our retry logic during the rename. I believe we're in this
code path — switching to our backoff retry should resolve the failures.
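The shape of the fix (a sketch of backoff around the rename, not uv's exact retry policy):
```rust
use std::io;
use std::path::Path;
use std::time::Duration;

/// On Windows, a virus scanner or indexer can transiently hold the file
/// open; retry the rename with exponential backoff before giving up.
fn rename_with_retry(from: &Path, to: &Path) -> io::Result<()> {
    let mut delay = Duration::from_millis(10);
    for _ in 0..10 {
        match std::fs::rename(from, to) {
            Ok(()) => return Ok(()),
            Err(err) if err.kind() == io::ErrorKind::PermissionDenied => {
                std::thread::sleep(delay);
                delay *= 2;
            }
            Err(err) => return Err(err),
        }
    }
    std::fs::rename(from, to)
}
```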
## Summary
Resolves https://github.com/astral-sh/uv/issues/4609.
Previously, the implementation of `required_with_no_extra` was
incorrect, particularly when there are packages that do not require any
extras but have other types of markers.
## Test Plan
The existing tests did cover this (my bad... I missed it), but I added a
smaller test since the bug would've been more obvious with it.
## Summary
It turns out that `Topo` only works on graphs without cycles. If a graph
has a cycle, it seems to bail early. So we were losing markers for trees
that contain cycles (like Poetry, which depends on
`poetry-plugin-export`, which depends on Poetry).
Now, we remove cycles beforehand and re-add those edges afterwards.
It's a bit hard for me to reason about the implications of this. The way
that marker propagation works is that we do visit the nodes in-order and
propagate the markers from any incoming to any outgoing edges. We only
do this at a single depth (rather than recursively) because we visit the
nodes in-order anyway. But if you have a cycle... then in theory you
might need to propagate the markers recursively? Or maybe not?
As an example:
`A -> B -> C -> D -> B`
If `A -> B` has `sys_platform == 'darwin'`, and then `D -> B` has
`python_version >= '3.7'`... then we don't need to propagate
`python_version >= '3.7'` back to `B` or any of its dependencies,
because the condition would be `(sys_platform == 'darwin' or
python_version >= '3.7') or sys_platform == 'darwin'`, which is
equivalent to `sys_platform == 'darwin'`.
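The failure mode is easy to demonstrate (here with `petgraph::algo::toposort`, which reports the cycle explicitly rather than bailing silently):
```rust
use petgraph::algo::toposort;
use petgraph::graph::Graph;

fn main() {
    let mut graph: Graph<&str, ()> = Graph::new();
    let a = graph.add_node("A");
    let b = graph.add_node("B");
    let c = graph.add_node("C");
    let d = graph.add_node("D");
    // A -> B -> C -> D -> B: the D -> B edge creates a cycle.
    graph.extend_with_edges([(a, b), (b, c), (c, d), (d, b)]);
    // No topological order exists, so any marker propagation that relies
    // on an in-order visit must strip cycle edges first and re-add them.
    assert!(toposort(&graph, None).is_err());
}
```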
Closes #4584.
This PR contains two style changes to the lockfile:
* Always indent lists of objects, even when they are only a single
element.
* Use 4 spaces instead of tabs for indenting, to mirror what we do in
the ruff formatter.
## Summary
Open to just making this a warning but no strong opinion.
Closes https://github.com/astral-sh/uv/issues/4593.
## Test Plan
Failure:
```
❯ echo "pandas==2.2.2" | cargo run pip compile --universal -p 3.11 --no-header - --python-platform linux
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/uv pip compile --universal -p 3.11 --no-header - --python-platform linux`
error: the argument '--universal' cannot be used with '--python-platform <PYTHON_PLATFORM>'
Usage: uv pip compile --universal --python-version <PYTHON_VERSION> --no-header <SRC_FILE>...
For more information, try '--help'.
```
Use indented inline tables for `distribution.dependencies`,
`distribution.optional-dependencies` and
`distribution.dev-dependencies`.
The new style is more concise (see examples below) and it makes the
association between a distribution and its dependencies clearer
(previously, they were both individual `[[...]]` blocks separated by
newlines). The style is optimized for small, meaningful diffs by placing
each dependency on a single line, with a final trailing comma. Whenever
a dependency is added, removed, or changed, there should be a one-line
diff in `distribution.dependencies`. The final trailing comma ensures
that adding a dependency doesn't also change the preceding line.
Part of #3611
## Examples
### Simple workspace package
Before:
```toml
[[distribution]]
name = "bird-feeder"
version = "1.0.0"
source = "editable+packages/bird-feeder"
[[distribution.dependencies]]
name = "anyio"
[[distribution.dependencies]]
name = "seeds"
```
After:
```toml
[[distribution]]
name = "bird-feeder"
version = "1.0.0"
source = "editable+packages/bird-feeder"
dependencies = [
{ name = "anyio" },
{ name = "seeds" },
]
```
### Flask
Before:
```toml
[[distribution]]
name = "flask"
version = "3.0.2"
source = "registry+https://pypi.org/simple"
sdist = { url = "a89e8120fa0bbafcb2c2387c0317be/flask-3.0.2.tar.gz", hash = "sha256:822c03f4b799204250a7ee84b1eddc40665395333973dfb9deebfe425fefcb7d", size = 675248 }
wheels = [{ url = "aa98bfe0ebf27ce224fb4f766acb23/flask-3.0.2-py3-none-any.whl", hash = "sha256:3232e0e9c850d781933cf0207523d1ece087eb8d87b23777ae38456e2fbe7c6e", size = 101300 }]
[[distribution.dependencies]]
name = "blinker"
[[distribution.dependencies]]
name = "click"
[[distribution.dependencies]]
name = "itsdangerous"
[[distribution.dependencies]]
name = "jinja2"
[[distribution.dependencies]]
name = "werkzeug"
[distribution.optional-dependencies]
[[distribution.optional-dependencies.dotenv]]
name = "python-dotenv"
```
After:
```toml
[[distribution]]
name = "flask"
version = "3.0.2"
source = "registry+https://pypi.org/simple"
sdist = { url = "a89e8120fa0bbafcb2c2387c0317be/flask-3.0.2.tar.gz", hash = "sha256:822c03f4b799204250a7ee84b1eddc40665395333973dfb9deebfe425fefcb7d", size = 675248 }
dependencies = [
{ name = "blinker" },
{ name = "click" },
{ name = "itsdangerous" },
{ name = "jinja2" },
{ name = "werkzeug" },
]
wheels = [{ url = "aa98bfe0ebf27ce224fb4f766acb23/flask-3.0.2-py3-none-any.whl", hash = "sha256:3232e0e9c850d781933cf0207523d1ece087eb8d87b23777ae38456e2fbe7c6e", size = 101300 }]
[distribution.optional-dependencies]
dotenv = [
{ name = "python-dotenv" },
]
```
### Forking
Before:
```toml
[[distribution]]
name = "project"
version = "0.1.0"
source = "editable+."
[[distribution.dependencies]]
name = "package-a"
version = "4.3.0"
source = "registry+https://astral-sh.github.io/packse/0.3.29/simple-html/"
marker = "sys_platform == 'darwin'"
[[distribution.dependencies]]
name = "package-a"
version = "4.4.0"
source = "registry+https://astral-sh.github.io/packse/0.3.29/simple-html/"
marker = "sys_platform == 'linux'"
[[distribution.dependencies]]
name = "package-b"
marker = "sys_platform == 'linux'"
[[distribution.dependencies]]
name = "package-c"
marker = "sys_platform == 'darwin'"
```
After:
```toml
[[distribution]]
name = "project"
version = "0.1.0"
source = "editable+."
dependencies = [
{ name = "package-a", version = "4.3.0", source = "registry+https://astral-sh.github.io/packse/0.3.29/simple-html/", marker = "sys_platform == 'darwin'" },
{ name = "package-a", version = "4.4.0", source = "registry+https://astral-sh.github.io/packse/0.3.29/simple-html/", marker = "sys_platform == 'linux'" },
{ name = "package-b", marker = "sys_platform == 'linux'" },
{ name = "package-c", marker = "sys_platform == 'darwin'" },
]
```
Closes #1329.
## Summary
Mentions use of seed packages during `uv venv --seed`, and clarifies the
divergence in behavior when using Python 3.12+.
## Test Plan
`cargo nextest run --test venv`
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Moves `--from` to a hidden argument — we still allow it, but we validate
that it is compatible with whatever is passed to `uv tool install
<package>`. The positional package can now be a full specification,
allowing things like `uv tool install black==24.2.0`.
## Summary
In the dependency refactor (https://github.com/astral-sh/uv/pull/4430),
the logic for requirements and constraints was combined. Specifically,
we were applying constraints _before_ filtering on markers and extras,
and then applying that same filtering to the constraints. As a result,
constraints that should only be activated when an extra is enabled were
being enabled unconditionally.
Closes https://github.com/astral-sh/uv/issues/4569.
## Summary
- Adds a `--extra` flag to `uv add` that allows activating extras
without the PEP 508 syntax.
- `uv add` now errors if the update is ambiguous (e.g. the dependency is
present twice with different markers)
- `uv add` is smarter about updates. For example, `uv add flask==3.0.0`
followed by `uv add flask --extra dotenv` preserves the previous version
specifier.
Resolves https://github.com/astral-sh/uv/issues/4419.
Adds support for `--reinstall` and `--reinstall-package` to `uv tool
install`. These are already available via the installer settings; we
just respect them now.
`--reinstall` implies a recreation of the environment and reinstallation
of the entry points.
`--reinstall-package` will only update a subset of the environment. If
the target package is the one with the entry points, we'll reinstall the
entry points. Otherwise, the entry points are not changed.
Adds detection of existing entry points, avoiding clobbering entry
points that were installed by another tool. If we see any existing entry
point collisions, we'll stop instead of overwriting them. The `--force`
flag can be used to opt in to overwriting the files; we can't use `-f`
because it's taken by `--find-links`, which is silly. The `--force` flag
also implies replacing a tool previously installed by uv (the
environment is rebuilt).
Similarly, #4504 adds support for reinstalls that _will not_ clobber
entry points managed by other tools.
## Summary
If the package _isn't_ marked as `workspace = true`, locking will fail
given:
```rust
let workspace_package_declared =
    // We require that when you use a package that's part of the workspace, ...
    !workspace.packages().contains_key(&requirement.name)
    // ... it must be declared as a workspace dependency (`workspace = true`), ...
    || matches!(
        source,
        Some(Source::Workspace {
            // By using toml, we technically support `workspace = false`.
            workspace: true,
            ..
        })
    )
    // ... except for recursive self-inclusion (extras that activate other extras), e.g.
    // `framework[machine_learning]` depends on `framework[cuda]`.
    || &requirement.name == project_name;
if !workspace_package_declared {
    return Err(LoweringError::UndeclaredWorkspacePackage);
}
```
Closes https://github.com/astral-sh/uv/issues/4552.
## Summary
This PR modifies `uv run` to fall back to discovering an interpreter
(e.g., a local `.venv`) if the command is run outside of a workspace.
`uv run --isolated` continues to completely skip workspace _and_
interpreter discovery, only installing whatever's provided with
`--with`.
The next step here is adding some ergonomic controls for enabling this
behavior even if your project is technically in a workspace (i.e., you
have a `pyproject.toml` but aren't using the Project APIs and don't want
locking etc.). I could imagine a setting in `pyproject.toml` that's also
exposed on the command-line. Something like: `managed = false` or
`project = false`.
See: https://github.com/astral-sh/uv/issues/3836.
This is the minimal "working" implementation. In summary, we:
- Resolve the requested requirements
- Create an environment at `$UV_STATE_DIR/tools/$name`
- Inspect the `dist-info` for the main requirement to determine its
entry points scripts
- Link the entry points from a user-executable directory
(`$XDG_BIN_HOME`) to the environment bin
- Create an entry at `$UV_STATE_DIR/tools/tools.toml` tracking the
user's request
The idea with `tools.toml` is that it allows us to perform upgrades and
syncs, retaining the original user request (similar to declarations in a
`pyproject.toml`). I imagine using a similar schema in the
`pyproject.toml` in the future if/when we add project-level tools. I'm
also considering exposing `tools.toml` in the standard uv configuration
directory instead of the state directory, but it seems nice to tuck it
away for now while we iterate on it. Installing a tool won't perform a
sync of other tool environments, we'll probably have an explicit `uv
tool sync` command for that?
I've split out todos into follow-up pull requests:
- #4509 (failing on Windows)
- #4501
- #4504

Closes #4485
`ResolverState::choose_version` had become huge, with an odd match due
to the url handling from #4435. This refactoring breaks it into
`choose_version`, `choose_version_registry` and `choose_version_url`. No
functional changes.
In the case of a direct URL sdist, it includes a hash, and this hash is
not (and probably should not be) part of the `source`. The URL is part
of the source because it permits uniquely identifying this particular
package as distinct from any other package with the same name. But we
should still include the hash.
So in this commit, we rejigger what we did previously to make it so the
`SourceDist` value isn't even constructed at all when it isn't needed.
This also in turn lets us make the hash field required (which we will do
in a subsequent commit).
This does mean the URL is stored twice for direct URL dependencies in
the lock file. This seems non-ideal. We could make the URL for the sdist
optional, but this seems like a bridge too far? Another choice is to add
a new key to `distribution` that is just `direct-url-hash`, but that
also seems mucky.
Maybe the duplication here is okay given the relative rarity of direct
URL dependencies.
This updates all of the test snapshots where `sdist` was
strictly redundant and could be removed.
Note that there is one test failure whose snapshot I didn't
update: one where there is a direct URL dependency. In this
case, the sdist entry isn't strictly redundant, as it includes
a hash that isn't present in the source. We'll deal with that
in a subsequent commit.
This fixes an issue in the lock file where, in cases where we had a
non-registry sdist, the information in the sdist was strictly redundant
with the information in the source. This was borne out in the code
already, where the `sdist` field was only ever used to build a source
distribution type when the source was a registry. In all other cases,
the source distribution data can be materialized from the `source`
field.
This makes it clear that an actual `sdist` is only required when a
distribution is from a registry. In all other cases, a source
distribution is manufactured directly from the `source`.
Previously, we had `Lock` and `LockWire` impl blocks intermixed. This
bugs me a bit, so I've just shuffled things around so that we have
`Lock`, `impl Lock`, `LockWire`, and then `impl LockWire`.
No changes are otherwise made to the code here.
This update follows from the removal of `source` and `version` from
`distribution.dependency` entries in the lock file when the package name
unambiguously refers to a single distribution.
When there is only one distribution for a particular package name, any
dependencies (the edges in the resolution graph) that reference that
package name are completely unambiguous. Therefore, we can actually omit
their version and source information and instead derive it from the
distribution entry.
We add some tests to check the success and error cases. That is, when
`source` or `version` is omitted and there is more than one
corresponding distribution for the package name (i.e., it's ambiguous),
lock deserialization should fail.
This commit prepares to make the `source` and `version` fields optional
in a `distribution.dependency` based on whether they have an unambiguous
value, e.g., when there is exactly one distribution with a matching
package name.
This refactor effectively defines "wire" types for most of the lock data
types (repeating the `WheelWire` and `LockWire` pattern) with one key
difference: we don't use serde's `TryFrom` integration. In this
refactor, we could have, and it would have worked. But in a subsequent
commit, we're going to be adding state to the `unwire()` calls that is
impossible to thread through a `TryFrom` implementation. This state will
tell us how to populate the `source` and `version` values on a
`Dependency` when they're missing.
The duplication of types here is unfortunate, but the compiler should
catch any deviations. And the wire types are unexported, so they have a
limited blast radius on complexity.
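In miniature, the pattern looks like this (names and the threaded state are hypothetical):
```rust
use std::collections::HashMap;

struct Dependency {
    name: String,
    version: String,
}

/// Serde-facing "wire" type: `version` may be omitted in the lock file.
struct DependencyWire {
    name: String,
    version: Option<String>,
}

impl DependencyWire {
    /// Unlike a serde `TryFrom` integration, `unwire` can take extra
    /// state: here, the version of the single distribution matching each
    /// package name, when unambiguous.
    fn unwire(self, unambiguous: &HashMap<String, String>) -> Result<Dependency, String> {
        let version = match self.version {
            Some(version) => version,
            None => unambiguous
                .get(&self.name)
                .cloned()
                .ok_or_else(|| format!("ambiguous dependency `{}`", self.name))?,
        };
        Ok(Dependency { name: self.name, version })
    }
}
```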
Downstack PR: #4481
## Introduction
We support forking the dependency resolution to handle conflicting
registry requirements across platforms: say, one package range is
required for an older Python version while a newer one is required for
newer Python versions, or dependencies differ per platform. We need to
extend this support to direct URL requirements.
```toml
dependencies = [
"iniconfig @ 62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl ; python_version >= '3.12'",
"iniconfig @ b3c12c6d70988d7baea9578f3c48f3/iniconfig-1.1.1-py2.py3-none-any.whl ; python_version < '3.12'"
]
```
This did not work because `Urls` was built on the assumption that there
is a single allowed URL per package. We collect all allowed URLs ahead
of resolution by following direct URL dependencies (including path
dependencies) transitively, i.e. a registry distribution can't require a
URL.
## The same package can have Registry and URL requirements
Consider the following two cases:
requirements.in:
```text
werkzeug==2.0.0
werkzeug @ 960bb4017c4aed12b5ed8b78e0153e/Werkzeug-2.0.0-py3-none-any.whl
```
pyproject.toml:
```toml
dependencies = [
"iniconfig == 1.1.1 ; python_version < '3.12'",
"iniconfig @ git+https://github.com/pytest-dev/iniconfig@93f5930e668c0d1ddf4597e38dd0dea4e2665e7a ; python_version >= '3.12'",
]
```
In the first case, we want the URL to override the registry dependency;
in the second case, we want to fork and have one branch use the registry
and the other the URL. We have to know about this in
`PubGrubRequirement::from_registry_requirement`, but we only fork after
the current method.
Consider the following case too:
a:
```
c==1.0.0
b @ https://b.zip
```
b:
```
c @ https://c_new.zip ; python_version >= '3.12'
c @ https://c_old.zip ; python_version < '3.12'
```
When we convert the requirements of `a`, we can't know the URL of `c`
yet. The solution is to remove the `Url` from `PubGrubPackage`: the
`Url` is redundant with `PackageName`, since there can be only one URL
per package name per fork. We now do the following: we track the URLs
from requirements in `PubGrubDependency`. After forking, we call
`add_package_version_dependencies`, where we apply override URLs, check
whether the URL is allowed, and check whether the URL is unique in this
fork. When we request a distribution, we ask the fork URLs for the real
URL. Since we prioritize URL dependencies over registry dependencies and
skip packages with `Urls` entries in pre-visiting, we know, when
fetching a package, whether it has a URL or not.
## URL conflicts
pyproject.toml (invalid):
```toml
dependencies = [
"iniconfig @ e96292c7f723f1fa332fe4ed6dfbec/iniconfig-1.1.0.tar.gz",
"iniconfig @ b3c12c6d70988d7baea9578f3c48f3/iniconfig-1.1.1-py2.py3-none-any.whl ; python_version < '3.12'",
"iniconfig @ 62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl ; python_version >= '3.12'",
]
```
On the fork state, we keep `ForkUrls` that check for conflicts after
forking, rejecting the third case because we added two packages of the
same name with different URLs.
We need to flatten out the requirements before transformation into
pubgrub requirements to get the full list of other requirements which
may contain a URL, which was changed in a previous PR: #4430.
## Complex Example
a:
```toml
dependencies = [
# Force a split
"anyio==4.3.0 ; python_version >= '3.12'",
"anyio==4.2.0 ; python_version < '3.12'",
# Include URLs transitively
"b"
]
```
b:
```toml
dependencies = [
# Only one is used in each split.
"b1 ; python_version < '3.12'",
"b2 ; python_version >= '3.12'",
"b3 ; python_version >= '3.12'",
]
```
b1:
```toml
dependencies = [
"iniconfig @ b3c12c6d70988d7baea9578f3c48f3/iniconfig-1.1.1-py2.py3-none-any.whl",
]
```
b2:
```toml
dependencies = [
"iniconfig @ 62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl",
]
```
b3:
```toml
dependencies = [
"iniconfig @ e96292c7f723f1fa332fe4ed6dfbec/iniconfig-1.1.0.tar.gz",
]
```
In this example, all packages are URL requirements (directory
requirements) and the root package is `a`. We first split on `a`, with
`b` in each split. In the first fork, we reach `b1`: the fork URLs are
empty, we insert the iniconfig 1.1.1 URL, and then we skip over `b2` and
`b3` since their markers are disjoint with the fork markers. In the
second fork, we skip over `b1`, visit `b2`, insert the iniconfig 2.0.0
URL into the (again empty) fork URLs, then visit `b3` and try to insert
the iniconfig 1.1.0 URL. At this point, we find a conflict for the
iniconfig URL and error.
## Closing
The Git tests are slow, but they make the best example for different URL
types I could find.
Part of #3927. This PR does not handle `Locals` or pre-releases yet.
In the last PR (#4430), we flattened the requirements. In the next PR
(#4435), we want to pass `Url` around next to `PubGrubPackage` and
`Range<Version>` to keep track of which `Requirement`s added a URL
across forking. This PR is a refactoring split out from #4435: it rolls
the dependency conversion into a single iterator, introduces a new
`PubGrubDependency` struct as an abstraction over `(PubGrubPackage,
Range<Version>)` (or `(PubGrubPackage, Range<Version>,
VerbatimParsedUrl)` in the next PR), and removes the now-unnecessary
`PubGrubDependencies` abstraction.
This PR refactors the command creation in the test suite to remove the
duplication.
**1)** We add the same set of test stubbing args to almost any uv
invocation in the tests:
```rust
command
    .arg("--cache-dir")
    .arg(self.cache_dir.path())
    .env("VIRTUAL_ENV", self.venv.as_os_str())
    .env("UV_NO_WRAP", "1")
    .env("HOME", self.home_dir.as_os_str())
    .env("UV_TOOLCHAIN_DIR", "")
    .env("UV_TEST_PYTHON_PATH", &self.python_path())
    .current_dir(self.temp_dir.path());

if cfg!(all(windows, debug_assertions)) {
    // TODO(konstin): Reduce stack usage in debug mode enough that the tests pass with the
    // default windows stack of 1MB
    command.env("UV_STACK_SIZE", (8 * 1024 * 1024).to_string());
}
```
Centralizing these into a `TestContext::add_shared_args` method removes
them from everywhere.
**2)** Prefix all `TestContext` methods of the pip interface with
`pip_`. This is now necessary due to `uv sync` vs. `uv pip sync`.
**3)** Move command creation in the various test files into dedicated
functions or methods to avoid repeating the arguments. Except for error
message tests, there should be at most one `Command::new(get_bin())`
call per test file. `EXCLUDE_NEWER` is exclusively used in
`TestContext`.
---
I'm considering adding a `TestCommand` on top of these changes (in
another PR) that holds a reference to the `TestContext`, has
`add_shared_args` as a method, and uses `Fn(Self) -> Self` instead of
`Fn(&mut Self) -> Self` for methods, to improve chaining.
Downstack PR: #4515 Upstack PR: #4481
Consider these two cases:
A:
```
werkzeug==2.0.0
werkzeug @ 960bb4017c4aed12b5ed8b78e0153e/Werkzeug-2.0.0-py3-none-any.whl
```
B:
```toml
dependencies = [
"iniconfig == 1.1.1 ; python_version < '3.12'",
"iniconfig @ git+https://github.com/pytest-dev/iniconfig@93f5930e668c0d1ddf4597e38dd0dea4e2665e7a ; python_version >= '3.12'",
]
```
In the first case, `werkzeug==2.0.0` should be overridden by the URL. In
the second case, `iniconfig == 1.1.1` is in a different fork and must
remain a registry distribution.
That means the conversion from `Requirement` to `PubGrubPackage` depends
on the other requirements of the package. We can either look at the
other packages immediately, or we can move the forking before the
conversion to `PubGrubDependencies` instead of after. Either version
requires a flat list of `Requirement`s to work on; this refactoring
gives us that list.
I'll add support for both of the above cases in the forking-URLs branch
before merging this PR. I also have to move constraints over to this.
While working on https://github.com/astral-sh/uv/pull/4492, I noticed
that `--reinstall-package` was not actually respected by
`update_environment`: it exited early due to satisfied requirements.
Before
```
❯ cargo run -q -- tool install black -v --reinstall-package tomli
...
DEBUG All requirements satisfied: black | click>=8.0.0 | mypy-extensions>=0.4.3 | packaging>=22.0 | pathspec>=0.9.0 | platformdirs>=2 | tomli>=1.1.0 ; python_version < '3.11' | typing-extensions>=4.0.1 ; python_version < '3.11'
```
After
```
❯ cargo run -q -- tool install black -v --reinstall-package tomli
...
Uninstalled 1 package in 0.99ms
Installed 1 package in 4ms
- tomli==2.0.1
+ tomli==2.0.1
```
## Summary
This is an intermediary change in enabling universal resolution for
`requirements.txt` files. To start, we need to be able to preserve
markers in the `requirements.txt` output _and_ propagate those markers,
such that if you have a dependency that's only included with a given
marker, the transitive dependencies respect that marker too.
Closes #1429.
## Summary
If the user puts their configuration in a `pyproject.toml` that _isn't_
a valid workspace root (e.g., it's a Poetry file), we won't discover it,
because we only look at `uv.toml` files in that case. I think this is
somewhat debatable... We could choose to _require_ a `uv.toml` there,
but as a user I'd probably expect it to work?
Closes https://github.com/astral-sh/uv/issues/4521.
`junction::create` apparently will happily succeed but not create a link
to files? Since our symlink function does not indicate that it cannot
handle files, this was quite surprising.
Tested over in #4509 which previously failed on an assertion that
`black.exe` existed.
```
error: Failed to install entrypoint
Caused by: Cannot create a junction for [TEMP_DIR]/tools/black/Scripts/black.exe: is not a directory
```
We should file an issue upstream too, I think?
## Summary
We currently accept `--index-url /path/to/index` on the command line,
but confusingly, not in `requirements.txt`. This PR just brings the two
in sync.
## Test Plan
New snapshot tests.
## Summary
pip allows these with the following logic:
```python
if os.path.exists(location): # Is a local path.
url = path_to_url(location)
path = location
elif location.startswith("file:"): # A file: URL.
url = location
path = url_to_path(location)
elif is_url(location):
url = location
```
Closes https://github.com/astral-sh/uv/issues/4510.
## Test Plan
`cargo run pip install --index-url ../packse/index/simple-html/
example-a-961b4c22 --reinstall --no-cache --no-deps`
I was getting this test failure locally on my Arch Linux system:
```
-old snapshot
+new results
0 0 │ success: true
1 1 │ exit_code: 0
2 2 │ ----- stdout -----
3 │-[PYTHON-3.12]
3 │+/usr/bin/python3
4 4 │
5 5 │ ----- stderr -----
```
Where I have `/usr/bin/python3` and `/usr/bin/python3.12`.
Thanks @zanieb for the help with figuring out the fix here!
## Summary
In #4085, support was implemented for the `--prefix` option. When using
this option, however, a lock is acquired either on the virtualenv or
globally, preventing multiple installs to different `--prefix`es from
the same interpreter.
With this change, we acquire the lock on just the prefix in question.
## Test Plan
Ran a `uv pip install` with `--prefix` and `RUST_LOG=trace` and observed
that the lock was acquired in the prefix.
## Summary
It turns out that the Git fetch implementation is initializing its own
client, which can be really expensive on macOS (due to loading native
certificates) _and_ bypasses any of our middleware. This PR modifies the
Git implementation to accept a shared client.
## Summary
We unconditionally update the submodules in our Git code, but AFAICT it
shouldn't be necessary if we already have a complete, up-to-date fetch
available.
## Summary
This does require cloning the settings, but I think it's fine. A better
solution would be to have owned and unowned settings structs, so that we
could convert `ResolverInstallerSettingsRef` to `InstallerSettingsRef`
without cloning, but that requires maintaining owned and unowned
variants.
Closes https://github.com/astral-sh/uv/issues/4455.
## Summary
I might be mistaken, but I think we need to read the header from the
response, not the request. The request would only contain headers that
we set.
I verified (with extra logging) that the request header is `None` while
PyPI returns a valid length in the response header.
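For instance, with `reqwest` (a sketch of the distinction, not the uv code):
```rust
// The request only carries headers we set ourselves; the server's
// Content-Length is only available on the *response*.
async fn remote_size(url: &str) -> reqwest::Result<Option<u64>> {
    let response = reqwest::get(url).await?;
    Ok(response.content_length())
}
```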
## Summary
Ultimately, I decided to view this as part of `LockWire` normalization:
removing references to extras that don't exist. I think it would be nice
if the resolver avoided omitting these, but I don't know if it's fully
possible.
Closes https://github.com/astral-sh/uv/issues/4405.
I have to add yet another indentation level to the prerelease-available
check in `PubGrubReportFormatter::hints` for #4435, so I've broken the
code into methods and decreased the indentation in this split-out,
refactoring-only change.
## Summary
Resolves https://github.com/astral-sh/uv/issues/4439 partially.
Implements for `uv pip tree`:
- `--no-dedupe` flag, similar to `cargo tree --no-dedupe` .
- denote dependency cycles with `(#)` and add a footnote if there's a
cycle (using `(*)` would require keeping track of the cycle state, so I
opted to do this instead).
## Test Plan
The existing tests pass + added a couple of tests to validate
`--no-dedupe` behavior.
## Summary
This will make `uv lock` read `override-dependencies` from the
`[tool.uv]` section of `pyproject.toml`.
Resolves #4108
This [other](https://github.com/astral-sh/uv/pull/4446) implementation
touches more code but seems more consistent.
## Test Plan
Unit test
Releasing 0.2.15 with a few additions over 0.2.14. Motivated by the
incorrect tagging of 0.2.14 (#4474).
Generated the changelog with a small patch to Rooster allowing me to
force the previous commit to be correct.
```diff
diff --git a/src/rooster/_cli.py b/src/rooster/_cli.py
index 2a4f61b..4ec1299 100644
--- a/src/rooster/_cli.py
+++ b/src/rooster/_cli.py
@@ -38,6 +38,7 @@ def release(
without_sections: list[str] = typer.Option(
[], help="Sections to exclude from the changelog"
),
+ previous_commit: str = None,
):
"""
Create a new release.
@@ -58,7 +59,11 @@ def release(
typer.echo("It looks like there are no version tags for this project.")
# Get the commits since the last release
- changes = list(get_commits_between(config, repo, last_version))
+ changes = list(
+ get_commits_between(
+ config, repo, last_version, force_first_commit=previous_commit
+ )
+ )
since = "since last release" if last_version else "in the project"
typer.echo(f"Found {len(changes)} commits {since}.")
diff --git a/src/rooster/_git.py b/src/rooster/_git.py
index 597bb88..66bc54e 100644
--- a/src/rooster/_git.py
+++ b/src/rooster/_git.py
@@ -29,12 +29,13 @@ def get_commits_between(
target: Path,
first_version: Version | None = None,
second_version: Version | None = None,
+ force_first_commit: str | None = None,
) -> Generator[git.Commit, None, None]:
"""
Yield all commits between two tags
"""
repo = git.repository.Repository(target.absolute())
- first_commit = (
+ first_commit = force_first_commit or (
repo.lookup_reference(
TAG_PREFIX + config.version_tag_prefix + str(first_version)
)
```
## Summary
The `--index-strategy` is linked to the index locations, which we
propagate to source distribution builds; so it makes sense to pass the
`--index-strategy` too.
While I was here, I made `exclude_newer` a required argument so that we
don't forget to set it via the `with_options` builder.
Closes https://github.com/astral-sh/uv/issues/4465.
## Summary
This PR moves all the CLI code into its own crate, separate from the
`uv` crate. The `uv` crate is iterated on frequently, and the CLI code
comprises a significant portion of it but rarely changes. Removing the
CLI code reduces the `uv` crate size from 1.4MiB to 1.0MiB.
When executables were not named `python3` e.g. `python3.11` we would
construct a Python path that would only work for _some_ requests in
tests since we don't search for those names unless a specific version is
requested. To solve, we construct a test context with constant Python
executable names. For example, if a test context was created with `3.11`
and `3.12` we could end up with the search path
`/usr/local/python-3.11/bin:/usr/local/python-3.12/bin` where the
executables are named `python3.11` and `python3` respectively. A test
invocation of uv requesting any Python toolchain version would then
locate the `3.12` executable since the `3.11` executable doesn't have
the generic name, but we want `3.11` to come first.
On Windows, we just leave things as-is because executables are always
called `python.exe`.
Closes https://github.com/astral-sh/uv/issues/4376
## Summary
Closes https://github.com/astral-sh/uv/issues/2956
This changes the bootstrap launcher script to use `pythonw.exe` instead
of `python.exe` for `gui_scripts`, via a helper fn, both in the shebang
and in the Python exe path encoded before the `UVUV` magic. That way,
uv-trampoline's `find_python_exe` can use the right `pythonw` executable.
## Test Plan
New unit tests for the helper were added.
Tested on example from #2956 on Windows to make sure it works as
expected.
## Questions
I noticed the docs in `fn windows_script_launcher` say ```The launcher
will look for `python[w].exe` adjacent to it in the same directory to
start the embedded script.``` but I didn't find such functionality when
I looked in uv-trampoline.
I only saw `clear_app_starting_state` getting called when `is_gui` is
set.
Was the intention to do this in uv-trampoline at some point instead?
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
resolves https://github.com/astral-sh/uv/issues/3272
Added it as a new subcommand rather than a flag on an existing command,
since that seems more consistent with `cargo tree` and allows cleaner
code organization, but I can make changes if the other way is preferred.
Exposes the option added in #4416. Adds `--toolchain-preference` and
`tool.uv.toolchain-preference` to configure if system or managed
toolchains are preferred. Users can opt-out of managed toolchains or
system toolchains entirely as well.
Restores the `PythonEnvironment::find` API which was removed a while
back in favor of `Toolchain::find`. As mentioned in #4416, I'm
attempting to separate the case where you want an active environment
from the case where you want an installed toolchain in order to create
environments.
I wanted to drop `EnvironmentPreference` from `Toolchain::find` and just
have us consistently consider (or not consider) virtual environments
when discovering toolchains for creating environments. Unfortunately
this caused a few things to break so I reverted that change and will
explore it separately. Because I was exploring that change, there are
some minor changes to the `Toolchain` API here.
Adds support for the toolchain discovery preferences outlined in
https://github.com/astral-sh/uv/issues/4198 but we don't expose this to
users yet, I'll do that next to make it easier to review.
I've made some refactors in the toolchain discovery implementation to
enable this behavior and move us towards clearer abstractions. There's
still remaining work here, but I'd prefer to tackle things in follow-ups
instead of expanding this pull request. I plan on opening a couple
before merging this.
I'd like to shift the public toolchain API to focus on discovering
either an **environment** or a **toolchain**. The first would be used by
commands that operate on an environment, while the latter would be used
by commands that just need an interpreter to create environments. I
haven't changed this here, but some of the refactors are in preparation
for supporting this idea.
In brief:
- We now allow different ordering of installed toolchain discovery based
on a `ToolchainPreference` type. This is the type we will expose to
users.
- `SystemPython` was changed into an `EnvironmentPreference` which is
used to determine if we should prefer virtual or system Python
environments.
- We drop the whole `ToolchainSources` selection concept, it was
confusing and the error messages from it were awkward. Most of the
functionality is now captured by the preference enums, but you can't do
things like "only find a toolchain from the parent interpreter" as
easily anymore.
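A sketch of the two preference axes (the variant names are illustrative, not necessarily the ones in the codebase):
```rust
// Controls the ordering of installed toolchain discovery; this is the
// axis exposed to users.
enum ToolchainPreference {
    OnlyManaged,   // opt out of system toolchains entirely
    PreferManaged, // try managed toolchains first, fall back to system
    PreferSystem,  // try system toolchains first, fall back to managed
    OnlySystem,    // opt out of managed toolchains entirely
}

// Controls whether virtual or system Python environments are preferred.
enum EnvironmentPreference {
    OnlyVirtual,
    Any,
    OnlySystem,
}
```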
Previously, distributions created through `Source::Workspace` would have
the absolute path as their lock path. This didn't cause any problems,
since in `Urls` we would later overwrite those URLs with the correct ones
created from the workspace members' paths.
Changing the order surfaced this. This change emits the correct lock
path. I've manually checked the difference with `dbg!`; this is not
observable on main, but on the diverging-urls branch it fixes lockfile
creation.
When a fork is created from a list of dependencies, we were previously
adding all other sibling dependencies to every fork created. But this
isn't actually quite right, since the fork created is always created by
some marker expression. And while it is definitively disjoint from any
directly conflicting dependency specification, it is also possibly
disjoint with other dependencies. For example, as reported in #4414:
```toml
dependencies = [
"anyio==4.4.0 ; python_version >= '3.12'",
"anyio==4.3.0 ; python_version < '3.12'",
"b1 ; python_version >= '3.12'",
"b2 ; python_version < '3.12'",
]
```
The first two `anyio` requirements are conflicting with non-overlapping
marker expressions, and so a fork is created. Prior to this commit,
*both* `b1` and `b2` would be added to each fork. But of course, `b2` is
impossible in the `anyio==4.4.0` fork because of disjoint marker
expressions.
So in this commit, we specifically filter out any sibling dependencies
that could find their way into a fork that have disjoint markers with
that fork. We are careful to do this both when a new fork is created
from an existing set of dependencies, and when adding new dependencies
to a fork.
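A minimal sketch of the filtering rule, with hypothetical `Dependency` and `MarkerTree` types standing in for the real ones:
```rust
struct MarkerTree; // stand-in for the real marker representation

impl MarkerTree {
    // true if no environment can satisfy both marker expressions
    fn is_disjoint(&self, _other: &MarkerTree) -> bool {
        unimplemented!()
    }
}

struct Dependency {
    markers: MarkerTree,
}

// Drop any sibling dependency whose markers can never hold under the
// fork's markers; it can never be installed on the fork's path anyway.
fn filter_for_fork(deps: Vec<Dependency>, fork_markers: &MarkerTree) -> Vec<Dependency> {
    deps.into_iter()
        .filter(|dep| !dep.markers.is_disjoint(fork_markers))
        .collect()
}
```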
Fixes #4414
## Summary
After this change, `uv add` will try to use `tool.uv.sources` for all
source requirements. If a source cannot be resolved, i.e. an ambiguous
Git reference is provided, it will error. Git references can be
specified with the `--tag`, `--branch`, or `--rev` arguments. Editables
are also supported with `--editable`.
Users can opt-out of `tool.uv.sources` support with the `--raw` flag,
which will force uv to use `project.dependencies`.
Part of https://github.com/astral-sh/uv/issues/3959.
To support diverging URLs, we have to check URLs when adding
dependencies (after forking). To prepare for this, I've moved adding
dependencies for the current version to
`SolveState::add_package_version_dependencies` and removed the
duplication when checking for self-dependencies.
This change is paired with a change in pubgrub
(https://github.com/astral-sh/pubgrub/pull/27) that simplifies the same
code path.
## Summary
Implements `uv add foo --workspace`, which adds `foo` as a workspace
dependency with the corresponding `tool.uv.sources` entry.
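Concretely, `uv add foo --workspace` should produce something along these lines (illustrative):
```toml
[project]
dependencies = ["foo"]

[tool.uv.sources]
foo = { workspace = true }
```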
Part of https://github.com/astral-sh/uv/issues/3959.
Specifically, these are emitted when requirements fail to satisfy
`Requires-Python` or the markers associated with the current fork in the
resolver.
Closes #4373
Closes #4380
This is the same logic as `should_stop_discovery` but I changed the log
level and duplicated it because I don't really want that method to be
public. Maybe it should be though?
As in #4360, updates the uv project CLI to respect `.python-version`
files as default Python version requests. Additionally, updates project
interpreter discovery to fetch managed toolchains as in `uv venv
--preview`.
It was becoming problematic that the virtual environment test context
diverged from the other one, i.e., we had to implement filtering twice.
This combines the contexts and tweaks the `TestContext` API and
filtering mechanisms for Python versions. Combined with my previous
changes to the test context at #4364 and
https://github.com/astral-sh/uv/pull/4368 this finally unblocks the
snapshots for test cases in #4360 and
https://github.com/astral-sh/uv/pull/4362.
## Summary
Make the git commit/date part of the version string matched in
cache_prune optional, so that the test also works correctly when uv is
built from an unpacked release tarball rather than a git repository.
Fixes #4374
## Test Plan
Ran `cargo test` inside the git repository and inside unpacked archive
for `0.2.12` release with the patch applied on top.
## Summary
Adds a `--no-clean` flag to `uv sync` that keeps extraneous
installations. This is the default in `uv run` and `uv add`, but not in
`uv sync` or `uv remove`. This means you need to run an explicit `uv
sync/remove` to clean the virtual environment.
Extends new filters for interpreter paths to apply to tests with
multiple Python versions. Adds patch version filtering for them as well,
which is needed for #4360 tests.
Extends #4364 automatically adding filters to the test context for
additional test virtual environments.
It turns out that the `pip sync` tests were really loose with their
virtual environment creation, and it was difficult to use the new
helpers because they require mutability while the tests held immutable
borrows to extend the filters 😭. I refactored the tests to just reset
the environment.
A bare `uv toolchain install` invocation now reads default requests from
Python version files in the working directory. In order, a bare
invocation means:
- requests from `.python-versions`
- a single request from `.python-version`
- any installed managed toolchain
- the latest managed toolchain download
This replaces all the functionality of `cargo dev fetch-python`, which
we drop in #4337.
## Summary
Closes https://github.com/astral-sh/uv/issues/4162
Changes keyring subprocess to allow display of stderr.
This aligns with pip's behavior since pip 23.1.
## Test Plan
* Tested using gnome-keyring-backend on a self-hosted private registry
as well as the keyring script described in #4162 to confirm both
existing functionality and the new stderr display.
* Existing tests using `scripts/packages/keyring_test_plugin` are now
showing its stderr output as well.
Allows installation of multiple toolchains in a single invocation
because I don't want to be limited to one! Most of the implementation
for concurrent downloads ported from `cargo dev fetch-python`.
Adds support for toolchain keys e.g. `cpython-3.11.2-macos` allowing you
to download toolchains for specific architectures and operating systems
using the format we use to uniquely identify a toolchain.
Closes #4240
e.g.
```
❯ cargo run -q -- pip install anyio --python "/Users/zb/Library/Application Support/uv/toolchains/cpython-3.12.0-macos-aarch64-none/install/bin/python3"
error: The interpreter at /Users/zb/Library/Application Support/uv/toolchains/cpython-3.12.0-macos-aarch64-none/install is externally managed, and indicates the following:
This toolchain is managed by uv and should not be modified.
Consider creating a virtual environment with `uv venv`.
```
I found this while testing the tracking of marker expressions across
resolver forks. Namely, given
```
sys_platform == 'darwin' and implementation_name == 'pypy'
```
and:
```
sys_platform == 'bar' or implementation_name == 'foo'
```
These should be disjoint, but the disjointness checker was reporting
them as overlapping. I fixed this by giving handling of disjunctions
higher precedence than conjunctions, although I am not 100% confident
that this is correct for all cases.
This commit adds marker expressions to our `Fork` type, which are in
turn passed down into `PubGrubDependencies::from_requirements` to filter
out any dependencies with markers that are disjoint from the fork's
marker expression.
This is necessary to avoid visiting packages in the dependency graph
that can never actually be installed. This is because when a fork is
created in the resolver, it always happens when there are two sibling
dependency specifications on a package with the same name, but with
non-overlapping marker expressions. Each fork corresponds to each
such conflicting dependency specification, and each fork assumes
the corresponding marker expression as a pre-condition for any future
dependencies considered by it. That is, since the fork represents an
installation path that can only be taken when the corresponding
dependency specification (and its marker expression) is actually used,
it also therefore follows that the marker expression is true. Therefore,
any dependency visited in that fork with a marker expression that cannot
possibly be true when the markers of the fork are true can and ought to
be completely ignored.
There are some key invariants that I had to re-learn by reading the
code. This hopefully makes those invariants easier to discover by future
me (and others).
Otherwise, when testing `uv venv --preview` the default toolchain
directory will leak into the test.
I believe I've made a similar change for the standard `TestContext` in
another commit somewhere in my stack, if not I'll add it after.
Adds a command to find a toolchain on the system. Right now, it displays
the path to the first matching toolchain. We'll probably have more rich
output in the future (after implementing `toolchain show`).
The eventual plan (separate from here) is to port all of the toolchain
discovery tests to use this command. I'll add a few tests for this
command here anyway.
## Summary
This would be a lightweight solution to
https://github.com/astral-sh/uv/issues/4307 that doesn't fully engage
with all the possibilities in the design space (but would unblock
cross-platform for now).
## Summary
The fixtures here are pretty large, but it lets us test what we actually
care about (the resolved settings) rather than inferring the resolved
settings from behavior, which I think is a big improvement.
I also broke the tests down into more granular cases.
## Summary
Because the workspace member itself is part of the resolution, adding
the workspace name for the project leads to confusing errors, like:
```
❯ cargo run lock --preview
Compiling uv v0.2.11 (/Users/crmarsh/workspace/puffin/crates/uv)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 1.79s
Running `/Users/crmarsh/workspace/puffin/target/debug/uv lock --preview`
× No solution found when resolving dependencies:
╰─▶ Because only albatross==0.1.0 is available and albatross==0.1.0 depends on anyio<=3, we can conclude that all versions of albatross depend on anyio<=3.
And because bird-feeder==1.0.0 depends on anyio>=4.3.0,<5 and only bird-feeder==1.0.0 is available, we can conclude that all versions of albatross and all versions of bird-feeder are incompatible.
And because albatross depends on albatross and bird-feeder, we can conclude that the requirements are unsatisfiable.
```
(Notice "albatross depends on albatross".)
## Summary
In a workspace, we now read configuration from the workspace root.
Previously, we read configuration from the first `pyproject.toml` or
`uv.toml` file in path -- but in a workspace, that would often be the
_project_ rather than the workspace configuration.
We need to read configuration from the workspace root, rather than its
members, because we lock the workspace globally, so all configuration
applies to the workspace globally.
As part of this change, the `uv-workspace` crate has been renamed to
`uv-settings` and its purpose has been narrowed significantly (it no
longer discovers a workspace; instead, it just reads the settings from a
directory).
If a user has a `uv.toml` in their directory or in a parent directory
but is _not_ in a workspace, we will still respect that use-case as
before.
Closes #4249.
## Summary
This PR introduces top-level configuration for uv, such that you can do:
```toml
[tool.uv]
index-url = "https://test.pypi.org/simple"
```
And `uv pip compile`, `uv run`, `uv tool run`, etc., will all respect
that configuration.
The settings that were escalated to the top-level remain on
`tool.uv.pip` too, but they're only respected in `uv pip` commands. If
they're specified in both places, then the `pip` settings win out.
While making this change, I also wired up some of the global options,
like `connectivity` and `native_tls`, through to all the relevant
places.
Closes #4250.
Previously, we took the first executable on the `PATH` but if it was not
a usable interpreter we'd fail. Now, we'll continue searching in the
path until we find an interpreter as we do with the standard executable
names.
Previously, `b` in the test case would have been incorrectly locked to
the path of `a`. I've moved `relative_to` into uv-fs since it's now used
in two different places.
Previously failing lockfile when `a/pyproject.toml` and
`a/b/pyproject.toml` exist (not in a workspace) and `a` was depending on
`b`:
```toml
version = 1
requires-python = ">=3.11, <3.13"
[[distribution]]
name = "b"
version = "0.1.0"
source = "directory+/home/konsti/projects/uv/a"
sdist = { path = "/home/konsti/projects/uv/a" }
[[distribution]]
name = "black"
version = "0.1.0"
source = "editable+."
sdist = { path = "." }
[[distribution.dependencies]]
name = "b"
version = "0.1.0"
source = "directory+/home/konsti/projects/uv/a"
```
Includes system interpreters in `uv toolchain list`.
This includes a refactor of `find_toolchain` to support iterating over
all toolchains that match a request rather than stopping at the first
match.
## Summary
This is still a draft, but it gets some of the work started on issue
#4064, which requests `--no-binary` functionality on `uv pip compile`
similar to what exists on `uv pip install`.
So far, this has moved the command-line shape from `install` and cloned
it over to `compile`. I could use a hand with the actual functionality:
respecting `--no-binary <package>` and generating the resulting line in
`requirements.txt`.
My understanding is we want to create a requirements.in file such as:
```
yt
```
Then when running `cargo run -- pip compile --no-binary yt
requirements.in`
we want the file to have this line in it:
```
yt==4.3.1 --no-binary yt
```
## Test Plan
Existing unit tests continue to pass.
No new unit tests have been created yet.
The new command line options do show up when testing with `cargo run --
pip compile -h`
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
Closes https://github.com/astral-sh/uv/issues/4296.
## Test Plan
Ran `cargo run lock --verbose` from
`scripts/workspaces/albatross-virtual-workspace`:
```
DEBUG uv 0.2.11 (ef3bc1612 2024-06-12)
warning: `uv lock` is experimental and may change without warning.
DEBUG Found workspace root: `/Users/crmarsh/workspace/puffin/scripts/workspaces/albatross-virtual-workspace`
DEBUG Adding discovered workspace member: /Users/crmarsh/workspace/puffin/scripts/workspaces/albatross-virtual-workspace/packages/albatross
DEBUG Adding discovered workspace member: /Users/crmarsh/workspace/puffin/scripts/workspaces/albatross-virtual-workspace/packages/bird-feeder
DEBUG Adding discovered workspace member: /Users/crmarsh/workspace/puffin/scripts/workspaces/albatross-virtual-workspace/packages/seeds
DEBUG Searching for Python >=3.12 in search path or managed toolchains
DEBUG Searching for managed toolchains at `/Users/crmarsh/Library/Application Support/uv/toolchains`
DEBUG Found managed toolchain `cpython-3.12.3-macos-aarch64-none`
DEBUG Found CPython 3.12.3 at `/Users/crmarsh/Library/Application Support/uv/toolchains/cpython-3.12.3-macos-aarch64-none/install/bin/python3` (managed toolchains)
Using Python 3.12.3 interpreter at: /Users/crmarsh/Library/Application Support/uv/toolchains/cpython-3.12.3-macos-aarch64-none/install/bin/python3
```
Before 0.2.10 we would parse `--python=python` as an executable name.
After https://github.com/astral-sh/uv/pull/4214, we started treating
this as a Python version range request (with an empty version range).
This is not entirely unreasonable, but it was an unexpected regression
and I don't think `VersionRequest` should support empty ranges in its
`from_str` implementation without more consideration.
Updates `--no-binary <package>` to take precedence over `--only-binary
:all:` and `--only-binary <package>` to take precedence over
`--no-binary :all:`.
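For example (package names hypothetical), the following should now resolve, with `foo` allowed to build from source while everything else must be a wheel:
```
uv pip install --only-binary :all: --no-binary foo foo bar
```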
I'm not entirely sure about this behavior, e.g. maybe I provided
`--only-binary :all:` later on the command line and really want it to
override those earlier arguments of `--no-binary <package>` for safety.
Right now we just fail to solve though since we can't satisfy the
overlapping requests.
Closes https://github.com/astral-sh/uv/issues/4063
## Summary
This is what I consider to be the "real" fix for #8072. We now treat
directory and path URLs as separate `ParsedUrl` types and
`RequirementSource` types. This removes a lot of `.is_dir()` forking
within the `ParsedUrl::Path` arms and makes some states impossible
(e.g., you can't have a `.whl` path that is editable). It _also_ fixes
the `direct_url.json` for direct URLs that refer to files. Previously,
we wrote out to these as if they were installed as directories, which is
just wrong.
We weren't using the common interface in `uv lock` because it didn't
support finding an interpreter without touching the virtual environment.
Here I refactor the project interface to support what we need and update
`uv lock` to use the shared implementation.
In the time before universal resolving, we would always pass a
`MarkerEnvironment`, and this environment would capture any relevant
`Requires-Python` specifier (including if `-p/--python` was provided on
the CLI).
But in universal resolution, we very specifically do not use a
`MarkerEnvironment` because we want to produce a resolution that is
compatible across potentially multiple environments. This in turn meant
that we lost `Requires-Python` filtering.
This PR adds it back. We do this by converting our `PythonRequirement`
into a `MarkerTree` that encodes the version specifiers in a
`Requires-Python` specifier. We then ask whether that `MarkerTree` is
disjoint with a dependency specification's `MarkerTree`. If it is, then
we know it's impossible for that dependency specification to ever be
true, and we can completely ignore it.
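Illustratively (not the exact internal representation):
```
Requires-Python: >=3.8            # from the project metadata
python_full_version >= '3.8'      # equivalent MarkerTree

foo ; python_version < '3.7'      # disjoint with the tree above -> ignored
```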
## Summary
Right now, we're _always_ reinstalling local wheel archives, even if the
timestamp didn't change.
I want to fix the TODO properly but I will do so in a separate PR.
## Summary
`normalize` now takes an owned value and returns `Option<MarkerTree>`,
such that if any sub-expression evaluates to `true`, we can normalize
out the entire marker.
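For example, a disjunction of complementary ranges is always true, so the whole marker normalizes away (i.e., `normalize` returns `None`):
```
python_version >= '3.8' or python_version < '3.8'
```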
Closes https://github.com/astral-sh/uv/issues/4267.
Closes https://github.com/astral-sh/uv/issues/3857
Instead of using custom `Arch`, `Os`, and `Libc` types, I just use
`target-lexicon`'s, which enumerate way more variants and implement
display and parsing. We use a wrapper type to represent a couple of
special cases, supporting the "x86" alias for "i686" and "macos" for "darwin".
Alternatively we could try to use our `platform-tags` types but those
capture more information (like operating system versions) that we don't
have for downloads.
As discussed in https://github.com/astral-sh/uv/pull/4160, this is not
sufficient for proper libc detection but that work is larger and will be
handled separately.
## Summary
Fix the docstring where `remove` was used instead of `add` in the
context of the `uv add` command.
## Test Plan
```
cargo run -- add --help
```
```
Add one or more packages to the project requirements
Usage: uv add [OPTIONS] <REQUIREMENTS>...
Arguments:
<REQUIREMENTS>...
The packages to add, as PEP 508 requirements (e.g., `flask==2.2.3`)
```
## Summary
I think this is a useful piece of connective tissue that will let us
avoid back-and-forths when folks include traces.
## Test Plan
```
❯ cargo run pip list --verbose
DEBUG uv 0.2.11 (44041bccd 2024-06-11)
DEBUG Searching for Python interpreter in virtual environments
DEBUG Found CPython 3.12.3 at `/Users/crmarsh/workspace/puffin/.venv/bin/python3` (virtual environment)
DEBUG Using Python 3.12.3 environment at .venv/bin/python3
```
## Summary
If we're ORing an OR, we should just append rather than nesting in
another OR.
In my branch, this let us simplify:
```
python_version < '3.10' or python_version > '3.12' or (python_version < '3.8' or python_version > '3.12')
```
To:
```
python_version < '3.10' or python_version > '3.12'
```
## Summary
The build tags in this case are like, e.g., `202206090410`. That's
larger than a `u32`, so we're rejecting the wheel. In theory, build tags
could be even larger, but we already use `u64` for version segments, so
I think it's fine to keep that constraint here.
I'm going to look into surfacing these errors separately.
Closes https://github.com/astral-sh/uv/issues/4252.
## Test Plan
`cargo run pip install monailabel`
e.g.
```
❯ uv toolchain install
Found installed toolchain 'cpython-3.9.19-macos-aarch64-none'
A toolchain is already installed. Use `uv toolchain install <request>` to install a specific toolchain
```
instead of
```
❯ uv toolchain install
Using latest Python version
Found installed toolchain 'cpython-3.9.19-macos-aarch64-none'
Already installed at /Users/zb/Library/Application Support/uv/toolchains/cpython-3.9.19-macos-aarch64-none
```
## Summary
It looks like this was forgotten when doing the replacement in #4164.
## Test Plan
Running `cargo run toolchain install` should display the warning: ``warning:
`uv toolchain install` is experimental and may change without warning.``
By splitting `path` into a lockable path (relative or absolute) and an
absolute installable path, and by splitting between URLs and paths by
dist type, we can store relative paths in the lockfile.
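Illustratively, a lockfile entry can then record a relative lock path while installation still resolves an absolute one (the paths here are hypothetical):
```toml
[[distribution]]
name = "b"
version = "0.1.0"
source = "directory+../b"
sdist = { path = "../b" }
```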
## Summary
We may choose to persist these eventually, but for now, it's useful to
have them colocated with the cache, and in their own dedicated bucket
(so, at the very least, we can keep track of the use-cases).
Closes https://github.com/astral-sh/uv/issues/4219.
A merge kerfuffle from #4222 and #4218
Now we fail because we genuinely can't find any interpreters since tests
contexts are isolated by default. I'll improve the error message and
maybe add another test case once `main` is fixed.
By setting the test search path to an empty path, we avoid accidentally
pulling interpreters from the system during a test case.
Cherry-picked from https://github.com/astral-sh/uv/pull/4214
The first commit gets us some context on an IO error during queries:
Previously:
```
failed to canonicalize path `[VENV]/bin/python3`
Caused by: No such file or directory (os error 2)
```
Now:
```
Failed to query Python interpreter
Caused by: failed to canonicalize path `[VENV]/bin/python3`
Caused by: No such file or directory (os error 2)
```
but really we shouldn't attempt to query a missing interpreter during
discovery anyway, so we improve handling of that too.
## Summary
Should be no behavior changes, but one piece of technical debt I noticed
left over in the URL resolver. We already have structured paths, so we
shouldn't need to compare verbatim URLs.
## Summary
If the user requests a package as both editable and non-editable, the
editable now "wins".
Previously, `pip install -e . .` would install as editable. However,
`pip install -e . -r requirements.txt` would _not_ if `requirements.txt`
contained `.`, because we ignored `editable` when deduplicating and the
order of iteration was just dependent on internals.
Closes https://github.com/astral-sh/uv/issues/4053.
Adds a command (following #4163) to download and install specific
toolchains. While we fetch toolchains on demand, this is useful for,
e.g., pre-downloading a toolchain in a Docker image build.
~I kind of think we should call this `install` instead of `fetch`~ I
changed the name from `fetch` to `install`.
## Summary
Adds handling for a few cases to improve interoperability with Poetry:
- If the `project` schema is invalid, we now raise a hard error, rather
than treating the metadata as dynamic and then falling back to the build
backend. This could cause problems, I'm not sure. It's stricter than
before.
- If the project contains `tool.poetry` but omits
`project.dependencies`, we now treat it as dynamic. We could go even
further and treat _any_ Poetry project as dynamic, but then we'd be
ignoring user-declared dependencies, which is also confusing.
Closes https://github.com/astral-sh/uv/issues/4142.
Extends https://github.com/astral-sh/uv/pull/4121
Part of #2607
Adds support for managed toolchain fetching to `uv venv`, e.g.
```
❯ cargo run -q -- venv --python 3.9.18 --preview -v
DEBUG Searching for Python 3.9.18 in search path or managed toolchains
DEBUG Searching for managed toolchains at `/Users/zb/Library/Application Support/uv/toolchains`
DEBUG Found CPython 3.12.3 at `/opt/homebrew/bin/python3` (search path)
DEBUG Found CPython 3.9.6 at `/usr/bin/python3` (search path)
DEBUG Found CPython 3.12.3 at `/opt/homebrew/bin/python3` (search path)
DEBUG Requested Python not found, checking for available download...
DEBUG Using registry request timeout of 30s
INFO Fetching requested toolchain...
DEBUG Downloading https://github.com/indygreg/python-build-standalone/releases/download/20240224/cpython-3.9.18%2B20240224-aarch64-apple-darwin-pgo%2Blto-full.tar.zst to temporary location /Users/zb/Library/Application Support/uv/toolchains/.tmpgohKwp
DEBUG Extracting cpython-3.9.18%2B20240224-aarch64-apple-darwin-pgo%2Blto-full.tar.zst
DEBUG Moving /Users/zb/Library/Application Support/uv/toolchains/.tmpgohKwp/python to /Users/zb/Library/Application Support/uv/toolchains/cpython-3.9.18-macos-aarch64-none
Using Python 3.9.18 interpreter at: /Users/zb/Library/Application Support/uv/toolchains/cpython-3.9.18-macos-aarch64-none/install/bin/python3
Creating virtualenv at: .venv
INFO Removing existing directory
Activate with: source .venv/bin/activate
```
The preview flag is required. The fetch is performed if we can't find an
interpreter that satisfies the request. Once fetched, the toolchain will
be available for later invocations that include the `--preview` flag.
There will be follow-ups to improve toolchain management in general,
there is still outstanding work from the initial implementation.
## Summary
Similar to how we abstracted the dependencies into
`ResolutionDependencyNames`, I think it makes sense to abstract the base
packages into a `ResolutionPackage`. This also avoids leaking details
about the various `PubGrubPackage` enum variants to `ResolutionGraph`.
Remove the panic when there is an invalid wheel source, instead surface
the error. This error can only occur when manually editing the lock
file, but since it's an external file, we should error and not panic.
This change is helpful since the method needs to be able to error for
relative path support.
## Summary
If a package lacks a source distribution, and we can't find a compatible
wheel for the current platform, we need to just _assume_ that the
package will have a valid wheel on all platforms on which it's
requested; if not, we raise an error at install time.
It's possible that we can be smarter about this over time. For example,
if the package was requested _only_ for macOS, we could verify that
there's at least one macOS-compatible wheel. See the linked issue for
more details.
Closes https://github.com/astral-sh/uv/issues/4139.
## Summary
If we want more granular control over how these errors are handled, then
we need to move them out of the TOML deserialization.
No actual behavior changes here.
Part of https://github.com/astral-sh/uv/issues/4142.
## Summary
See the long comment inline. I think this is debatable but probably
right for now. The other options have their own problems, but there are
a few alternate ideas in the comment.
Closes https://github.com/astral-sh/uv/issues/4132.
The basic idea here is to make it so forking can only ever result in a
resolution that, for a particular marker environment, will only install
at most one version of a package. We can guarantee this by ensuring we
only fork on conflicting dependency specifications only when their
corresponding markers are completely disjoint. If they aren't, then
resolution _must_ find a single version of the package in the
intersection of the two dependency specifications.
A test for this case has been added to packse here:
https://github.com/astral-sh/packse/pull/182. Previously, that test
would result in a resolution with two different unconditional versions
of the same package. With this change, resolution fails (as it should).
A commit-by-commit review should be helpful here, since the first commit
is a refactor to make the second commit a bit more digestible.
## Summary
I think we should be able to model PubGrub such that this isn't
necessary (at least for the case described in the issue), but for now,
let's just avoid attempting to build very old distributions in
prefetching.
Closes https://github.com/astral-sh/uv/issues/4136.
Drops `find_toolchain`, `find_best_toolchain`, etc. in favor of
`Toolchain::find_...`
We can change this in the future, but there should only be one "right"
way to do it, not two redundant ways in the public interface.
## Summary
Simplify and normalize marker expressions in the lockfile. Right now
this does a simple analysis by only looking at related operators at the
same level of precedence. I think anything more complex would be out of
scope.
Resolves https://github.com/astral-sh/uv/issues/4002.
## Summary
This PR adds a lowering similar to that seen in
https://github.com/astral-sh/uv/pull/3100, but this time, for markers.
Like `PubGrubPackageInner::Extra`, we now have
`PubGrubPackageInner::Marker`. The dependencies of the `Marker` are
`PubGrubPackageInner::Package` with and without the marker.
As an example of why this is useful: assume we have `urllib3>=1.22.0` as
a direct dependency. Later, we see `urllib3 ; python_version > '3.7'` as
a transitive dependency. As-is, we might (for some reason) pick a very
old version of `urllib3` to satisfy `urllib3 ; python_version > '3.7'`,
then attempt to fetch its dependencies, which could even involve
building a very old version of `urllib3 ; python_version > '3.7'`. Once
we fetch the dependencies, we would see that `urllib3` at the same
version is _also_ a dependency (because we tack it on). In the new
scheme though, as soon as we "choose" the very old version of `urllib3 ;
python_version > '3.7'`, we'd then see that `urllib3` (the base package)
is also a dependency; so we see a conflict before we even fetch the
dependencies of the old variant.
With this, I can successfully resolve the case in #4099.
Closes https://github.com/astral-sh/uv/issues/4099.
Extends #4120
Part of #2607
There should be no behavior changes here. Restructures the discovery API
to be focused on a toolchain first perspective in preparation for
exposing a `find_or_fetch` method for toolchains in
https://github.com/astral-sh/uv/pull/4138.
Partially addresses https://github.com/astral-sh/uv/issues/4056
We were incorrectly omitting the port from requests to `keyring` when
falling back to a realm/host query, e.g. `localhost` was used instead of
`localhost:1234`. We still won't include "standard" ports like `80` for
an HTTP request.
Follow-up to #4016.
This exposes `Range` and `PubGrubSpecifier` from outside the resolver to
use pubgrub's union, creating a dependency edge we don't really want.
When creating a lockfile, lock the combined dependencies of all
packages in a workspace. This makes the lockfile independent of where
you are in the workspace.
Fixes #3983
## Summary
This PR modifies our `Requires-Python` handling to treat
`Requires-Python` as a lower bound. There's extensive discussion around
this in https://github.com/astral-sh/uv/issues/4022 and the references
linked therein. I think it's an experiment worth trying. Even in my own
small projects, I'm running into issues whereby I'm being "forced" to
add a `<4` upper bound to my `Requires-Python` due to these caps.
Separately, we should explore adding a mechanism that's distinct from
`Requires-Python` to enable users to declare a supported range for
locking.
Closes https://github.com/astral-sh/uv/issues/4022.
## Summary
The condition enforced here isn't quite right. The same dependency can
appear multiple times, as long as the extra is different.
Closes https://github.com/astral-sh/uv/issues/4101.
## Summary
The "only include if relevant for the extra" filtering should _not_ be
applied to constraints. Otherwise, we'd only constrain when the extra
was included in the constraints file itself, which is incorrect.
Closes https://github.com/astral-sh/uv/issues/4091.
## Summary
I don't think it's worth maintaining this separate test harness for ~18
tests, when they can all be tested in the `uv` package itself. Let's
drop the maintenance burden.
## Summary
Externally, development dependencies are currently structured as a flat
list of PEP 508-compatible requirements:
```toml
[tool.uv]
dev-dependencies = ["werkzeug"]
```
When locking, we lock all development dependencies; when syncing, users
can provide `--dev`.
Internally, though, we model them as dependency groups, similar to
Poetry, PDM, and [PEP 735](https://peps.python.org/pep-0735). This
enables us to change out the user-facing frontend without changing the
internal implementation, once we've decided how these should be exposed
to users.
A few important decisions encoded in the implementation (which we can
change later):
1. Groups are enabled globally, for all dependencies. This differs from
extras, which are enabled on a per-requirement basis. Note, however,
that we'll only discover groups for uv-enabled packages anyway.
2. Installing a group requires installing the base package. We rely on
this in PubGrub to ensure that we resolve to the same version (even
though we only expect groups to come from workspace dependencies anyway,
which are unique). But anyway, that's encoded in the resolver right now,
just as it is for extras.
## Summary
As with other `.egg-info` and `.egg-link` distributions, it's easy to
support _existing_ `.egg-link` files. Like pip, we refuse to uninstall
these, since there's no way to know which files are part of the
distribution.
Closes https://github.com/astral-sh/uv/issues/4059.
## Test Plan
Verify that `vtk` is included here, which is installed as a `.egg-link`
file:
```
> conda create -c conda-forge -n uv-test python h5py vtk pyside6 cftime psutil
> cargo run pip freeze --python /opt/homebrew/Caskroom/miniforge/base/envs/uv-test/bin/python
aiohttp @ file:///Users/runner/miniforge3/conda-bld/aiohttp_1713964997382/work
aiosignal @ file:///home/conda/feedstock_root/build_artifacts/aiosignal_1667935791922/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1704011227531/work
cached-property @ file:///home/conda/feedstock_root/build_artifacts/cached_property_1615209429212/work
cftime @ file:///Users/runner/miniforge3/conda-bld/cftime_1715919201099/work
frozenlist @ file:///Users/runner/miniforge3/conda-bld/frozenlist_1702645558715/work
h5py @ file:///Users/runner/miniforge3/conda-bld/h5py_1715968397721/work
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1713279365350/work
loguru @ file:///Users/runner/miniforge3/conda-bld/loguru_1695547410953/work
msgpack @ file:///Users/runner/miniforge3/conda-bld/msgpack-python_1715670632250/work
multidict @ file:///Users/runner/miniforge3/conda-bld/multidict_1707040780513/work
numpy @ file:///Users/runner/miniforge3/conda-bld/numpy_1707225421156/work/dist/numpy-1.26.4-cp312-cp312-macosx_11_0_arm64.whl
pip==24.0
psutil @ file:///Users/runner/miniforge3/conda-bld/psutil_1705722460205/work
pyside6==6.7.1
setuptools==70.0.0
shiboken6==6.7.1
vtk==9.2.6
wheel==0.43.0
wslink @ file:///home/conda/feedstock_root/build_artifacts/wslink_1716591560747/work
yarl @ file:///Users/runner/miniforge3/conda-bld/yarl_1705508643525/work
```
Closes https://github.com/astral-sh/uv/issues/4072
This was an accidental change in
https://github.com/astral-sh/uv/pull/4029 in which I had updated the
pull request to support virtual environments as requested in review and
forgot to revert it.
We also shouldn't fail if `VIRTUAL_ENV` points to an empty directory and
`SystemPython::Allowed` is used; I'll address that separately.
## Summary
If `Requires-Python` is omitted in `uv lock` or `uv run`, we now warn
and default to `>=` the current minor version.
Closes https://github.com/astral-sh/uv/issues/4050.
## Summary
This PR adds the `Requires-Python` range to the user's lockfile. This
will enable us to validate it when installing.
For now, we repeat the `Requires-Python` back to the user;
alternatively, though, we could detect the supported Python range
automatically.
See: https://github.com/astral-sh/uv/issues/4052
We had previously changed the signature of
`DependencyProvider::get_dependencies` to return an iterator instead of
a hashmap to avoid the conversion cost from our dependencies `Vec` to
the pubgrub's hashmap. These changes are difficult to make in pubgrub
since they complicate the public api. But we don't actually use
`DependencyProvider::get_dependencies`, so we rolled those
customizations back in https://github.com/pubgrub-rs/pubgrub/pull/226
and instead opted to change only the internal
`add_incompatibility_from_dependencies` method that we exposed in our
fork. This aligns us closer with upstream, removes the design questions
about `DependencyProvider` from our concerns and reduces our diff (not
counting the github action) to +36 -12.
Retry the flaky `compile_invalid_pyc_invalidation_mode` test if it
fails. I don't understand why this is happening in the first place (we
have code that should catch those cases, but also those cases shouldn't
be happening at all) and this is a terrible hack, but it fixes the test
flakes.
Fixes #2672
## Summary
This PR separates "gathering the requirements" from the rest of the
metadata (e.g., version), which isn't required when installing a
package's _dependencies_ (as opposed to installing the package itself).
It thus ensures that we don't need to build a package when a static
`pyproject.toml` is provided in `pip compile`.
Closes https://github.com/astral-sh/uv/issues/4040.
We know that `[project]` must exist for each workspace member, so we can
store it directly and avoid going through the `.and_then()` when we need
to access it. This requires cloning the struct due to lack of
self-referential structs. An alternative would be taking the `Project` from
`PyProjectToml` instead, but this could be confusing when passing the
`PyProjectToml` around.
## Summary
Thankfully this is pretty rare since `pip sync` is usually run on `pip
compile` output, and `pip compile` never outputs markers.
Closes https://github.com/astral-sh/uv/issues/4044
## Summary
Closes #3955
Adds explicit support to `NO_COLOR` and `FORCE_COLOR` via
GlobalSettings.
The order, per the specs, is now `NO_COLOR` > `FORCE_COLOR` > `color`.
This PR is a backup plan pending rust-cli/anstyle#192.
## Test Plan
Tested all cases locally for now; I didn't see existing tests for
GlobalSettings parsing.
## Summary
Given `install -e dagster`, we need to assume that the user meant
`install -e ./dagster`, even though `install dagster` should _not_ be
treated as `install ./dagster`. I suspect pip will change this in the
future (since `pip install dagster` does _not_ mean `pip install
./dagster`) but for now it's what users expect.
Closes https://github.com/astral-sh/uv/issues/3994.
This is just the result of running `./scripts/sync_scenarios.sh` from
the root of the `uv` repository.
When I initially ran this, it produced some tests with snapshots that
weren't being updated. It turned out this was because the tests weren't
running, as they were gated behind the `python-patch` feature. In this
commit, we add `python-patch` to our `cargo insta` command, which should
update all relevant snapshots.
There are still some superfluous updates as a result of a spell checker
being run on generated files.
This is a quick fix for some flaky tests where the output in the lock
file isn't stable because marker expressions can be combined in a
non-deterministic order.
I believe there is ongoing work to simplify marker expressions which
will help here, but I think some kind of normalization is still
ultimately needed to guarantee consistent output.
I first noticed the flaky test in:
https://github.com/astral-sh/uv/pull/4015
## Summary
Avoid using work-stealing Tokio workers for bytecode compilation,
favoring instead dedicated threads. Tokio's work-stealing does not
really benefit us because we're spawning Python workers and scheduling
tasks ourselves — we don't want Tokio to re-balance our workers. Because
we're doing scheduling ourselves and compilation is a primarily
compute-bound task, we can also create dedicated runtimes for each
worker and avoid some synchronization overhead.
This is part of a general desire to avoid relying on Tokio's
work-stealing scheduler and be smarter about our workload. In this case
we already had the custom scheduler in place, Tokio was just getting in
the way (though the overhead is very minor).
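A minimal sketch of the pattern, assuming the `tokio` builder API:
```rust
// One dedicated OS thread per compilation worker, each with its own
// single-threaded Tokio runtime, instead of tasks on the shared
// work-stealing pool.
fn spawn_worker() -> std::thread::JoinHandle<()> {
    std::thread::spawn(|| {
        let runtime = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("failed to build worker runtime");
        runtime.block_on(async {
            // ... drive this worker's Python subprocess and task queue here ...
        });
    })
}
```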
## Test Plan
This improves performance by ~5% on my machine.
```
$ hyperfine --warmup 1 --prepare "target/profiling/uv-dev clear-compile .venv" "target/profiling/uv-dev compile .venv" "target/profiling/uv-dev-dedicated compile .venv"
Benchmark 1: target/profiling/uv-dev compile .venv
Time (mean ± σ): 1.279 s ± 0.011 s [User: 13.803 s, System: 2.998 s]
Range (min … max): 1.261 s … 1.296 s 10 runs
Benchmark 2: target/profiling/uv-dev-dedicated compile .venv
Time (mean ± σ): 1.220 s ± 0.021 s [User: 13.997 s, System: 3.330 s]
Range (min … max): 1.198 s … 1.272 s 10 runs
Summary
target/profiling/uv-dev-dedicated compile .venv ran
1.05 ± 0.02 times faster than target/profiling/uv-dev compile .venv
$ hyperfine --warmup 1 --prepare "target/profiling/uv-dev clear-compile .venv" "target/profiling/uv-dev compile .venv" "target/profiling/uv-dev-dedicated compile .venv"
Benchmark 1: target/profiling/uv-dev compile .venv
Time (mean ± σ): 3.631 s ± 0.078 s [User: 47.205 s, System: 4.996 s]
Range (min … max): 3.564 s … 3.832 s 10 runs
Benchmark 2: target/profiling/uv-dev-dedicated compile .venv
Time (mean ± σ): 3.521 s ± 0.024 s [User: 48.201 s, System: 5.392 s]
Range (min … max): 3.484 s … 3.566 s 10 runs
Summary
target/profiling/uv-dev-dedicated compile .venv ran
1.03 ± 0.02 times faster than target/profiling/uv-dev compile .venv
```
We retry several kinds of network request failures, but it's often
unclear whether a request was retried or not
(https://github.com/astral-sh/uv/issues/3514#issuecomment-2105485773).
This PR adds a small intermediary layer that logs all transient request
failures, adding the `DEBUG Transient request failure` lines:
```
DEBUG Searching for Python interpreter in virtual environments
DEBUG Found CPython 3.12.3 at `/home/konsti/projects/uv/.venv/bin/python3` (active virtual environment)
DEBUG Using Python 3.12.3 environment at .venv/bin/python3
DEBUG Acquired lock for `.venv`
DEBUG At least one requirement is not satisfied: tqdm
DEBUG Using registry request timeout of 30s
DEBUG Solving with target Python version 3.12.3
DEBUG Adding direct dependency: tqdm*
DEBUG No cache entry for: https://pypi.org/simple/tqdm/
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
error: Could not connect, are you offline?
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
```
I opted for multi-line logging to show the complete error trace, since
`Transient request failure for https://pypi.org/simple/tqdm/,
retrying: Request error: error sending request for url
(https://pypi.org/simple/tqdm/)` alone doesn't tell you the actual
problem (a DNS error).
Note that running with `-v` will not show messages about retry backoff
timing, but running with `RUST_LOG=debug` now shows a complete picture:
```
DEBUG starting new connection: https://pypi.org/
DEBUG resolving host="pypi.org"
DEBUG Transient request failure for https://pypi.org/simple/tqdm/, retrying: Request error: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: error sending request for url (https://pypi.org/simple/tqdm/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
WARN Retry attempt #2. Sleeping 528.728192ms before the next attempt
```
Fixes #3572
## Summary
See #4013.
The `uv pip ...` commands load workspace settings from `pyproject.toml`
and `uv.toml`. Although a warning is implemented to be emitted when
parsing fails, it is not actually output.
https://github.com/astral-sh/uv/blob/main/crates/uv-workspace/src/workspace.rs#L38-L61
The reason is that the flag to display warnings is enabled only after
loading the workspace settings. This PR turns on the warning output
flag before loading the workspace.
## Test Plan
pyproject.toml for test
```toml
[project]
name = "sample"
version = "0.0.0"
dependencies = ["ruff"]
[tool.uv.pip]
# originally a string type.
index-url = true
```
command output (before modification)
```bash
uv pip compile pyproject.toml
Resolved 1 package in 383ms
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml
ruff==0.4.7
# via sample (pyproject.toml)
```
command output (after modification)
```bash
uv pip compile pyproject.toml
warning: Failed to parse `pyproject.toml`: TOML parse error at line 7, column 13
|
7 | index-url = true
| ^^^^
invalid type: boolean `true`, expected a string
Resolved 1 package in 107ms
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml
ruff==0.4.7
# via sample (pyproject.toml)
```
The workspace test directories can be used both in tests and directly
for developing/debugging. In the latter case, we shouldn't copy the venv
and the lockfile when running tests. Using the `ignore` crate instead of
manual recursion, we exclude those files.
## Summary
Instead of checking if the target and installed version are the same, we
model the data such that the target version is only present if it was
specified by the user. This also means that we correctly say "requested
version" even if the two happen to be the same.
## Summary
I believe this is no longer necessary. Part of the problem here is that
we can't _know_ the full set of available Python versions, especially
once we start resolving against a `Requires-Python` rather than a fixed
set of two versions.
## Summary
Previously, when we locked something like `flask[dotenv]`, we created
two separate distributions in the lockfile: one for `flask`, which
included the base dependencies, and one for `flask[dotenv]`, which
included the base dependencies _and_ the `dotenv` dependencies. This was
easy to implement, but it meant that we were duplicating all of the
distribution files for every extra, and duplicating all of the base
dependencies for every extra.
This PR normalizes the data such that we now have one entry per
distribution (i.e., `ExtraName` was removed from `DistributionId`), with
an optional dependencies table with an entry per extra, like:
```toml
[[distribution]]
name = "project"
version = "0.1.0"
source = "editable+file://[TEMP_DIR]/"
sdist = { url = "file://[TEMP_DIR]/" }
[[distribution.dependencies]]
name = "anyio"
version = "3.7.0"
source = "registry+https://pypi.org/simple"
[distribution.optional-dependencies]
[[distribution.optional-dependencies.test]]
name = "iniconfig"
version = "2.0.0"
source = "registry+https://pypi.org/simple"
```
This requires a bit more work upfront, because we now need to merge
multiple packages from the `petgraph` representation when creating the
lockfile.
Closes https://github.com/astral-sh/uv/issues/3916.
## Summary
Once we use a _range_ rather than a precise version, it won't actually
make sense to return a version here. It's no longer required, so I'm
removing it.
## Summary
Fixes a race condition in `OnceMap::wait_blocking` where the inserted
value could potentially be missed, leading to a deadlock. Fairly certain
this will resolve https://github.com/astral-sh/uv/issues/3724.
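For illustration, a minimal sketch of the fixed pattern (not `OnceMap`'s actual internals), assuming a `Mutex` + `Condvar` pair: the value is re-checked under the same lock the writer holds when notifying, so an insert can no longer slip between the check and the wait.
```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Condvar, Mutex};

fn wait_blocking<K: Eq + Hash, V: Clone>(
    map: &Mutex<HashMap<K, V>>,
    cond: &Condvar,
    key: &K,
) -> V {
    let mut guard = map.lock().unwrap();
    loop {
        if let Some(value) = guard.get(key) {
            return value.clone();
        }
        // Atomically releases the lock and blocks; an insert-and-notify
        // cannot occur between the check above and this wait.
        guard = cond.wait(guard).unwrap();
    }
}
```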
Otherwise the `uv lock` command wasn't respecting the index URL option.
This is a follow-up to #3984, and I believe should now allow #3970 to be
merged.
Seems like a recent pull request removed this; I couldn't directly find
out which. I'm adding it back as we rely on this API, and I do not see
another way of accessing this, or am I mistaken?
Thanks!
## Summary
See #3834.
This PR adds a new namespace, `override-dependencies`, to
pyproject.toml/uv.toml.
This namespace assumes that the dependencies you want to override are
written in the form of `requirements.txt`.
An example `pyproject.toml`:
```toml
[project]
name = "example"
version = "0.0.0"
dependencies = [
"flask==3.0.0"
]
[tool.uv]
override-dependencies = [
"werkzeug==2.3.0"
]
```
This will improve usability by allowing you to override dependencies
without having to specify the --override option when running `uv pip
compile/install`.
## Test Plan
added test to `crates/uv/tests/pip_compile.rs`.
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
Running a resolution that required forking was failing due to breaking
an invariant in PubGrub. It looks like we were adding the same
incompatibility multiple times, or something like that. The issue
appears to be that when forking, we modify the current state, then clone
it as the "next state", then push to the "forked states" -- but that
means we're cloning the _modified_ state.
This PR changes the order of operations such that we clone, then modify.
It shouldn't introduce any additional clones though.
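For illustration, a sketch of the corrected ordering with stand-in types (uv's real `SolveState` is more involved):
```rust
#[derive(Clone, Default)]
struct SolveState {
    incompatibilities: Vec<String>,
}

fn fork_states(state: &SolveState, forks: &[String]) -> Vec<SolveState> {
    forks
        .iter()
        .map(|fork| {
            let mut next = state.clone(); // clone the pristine state first...
            next.incompatibilities.push(fork.clone()); // ...then modify the clone
            next
        })
        .collect()
}
```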
Add a `--package` option that allows switching the current project in
the workspace. Wherever you are in a workspace, you should be able to
run with any other project as root. This is the uv equivalent of `cargo
run -p`.
I don't love the `--package` name, especially since `-p` is already
taken, and in general too many things start with "p" already.
Part of this change is moving the workspace discovery of
`ProjectWorkspace` to `Workspace` itself.
## Usage
In albatross-virtual-workspace:
```console
$ uv venv
$ uv run --preview --package bird-feeder python -c "import albatross"
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/bird-feeder
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/seeds
Built 2 editables in 167ms
Resolved 5 packages in 4ms
Installed 5 packages in 1ms
+ anyio==4.4.0
+ bird-feeder==1.0.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/bird-feeder)
+ idna==3.6
+ seeds==1.0.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/seeds)
+ sniffio==1.3.1
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'albatross'
$ uv venv
$ uv run --preview --package albatross python -c "import albatross"
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/albatross
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/bird-feeder
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/seeds
Built 3 editables in 173ms
Resolved 7 packages in 6ms
Installed 7 packages in 1ms
+ albatross==0.1.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/albatross)
+ anyio==4.4.0
+ bird-feeder==1.0.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/bird-feeder)
+ idna==3.6
+ seeds==1.0.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-virtual-workspace/packages/seeds)
+ sniffio==1.3.1
+ tqdm==4.66.4
```
In albatross-root-workspace:
```console
$ uv venv
$ uv run --preview --package bird-feeder python -c "import albatross"
Using Python 3.12.3 interpreter at: /home/konsti/.local/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.10s
Running `/home/konsti/projects/uv/target/debug/uv run --preview --package bird-feeder python -c 'import albatross'`
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace/packages/bird-feeder
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace/packages/seeds
Built 2 editables in 161ms
Resolved 5 packages in 4ms
Installed 5 packages in 1ms
+ anyio==4.4.0
+ bird-feeder==1.0.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace/packages/bird-feeder)
+ idna==3.6
+ seeds==1.0.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace/packages/seeds)
+ sniffio==1.3.1
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'albatross'
$ uv venv
$ cargo run run --preview --package albatross python -c "import albatross"
Using Python 3.12.3 interpreter at: /home/konsti/.local/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.13s
Running `/home/konsti/projects/uv/target/debug/uv run --preview --package albatross python -c 'import albatross'`
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace/packages/bird-feeder
Built file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace/packages/seeds
Built 3 editables in 168ms
Resolved 7 packages in 5ms
Installed 7 packages in 1ms
+ albatross==0.1.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace)
+ anyio==4.4.0
+ bird-feeder==1.0.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace/packages/bird-feeder)
+ idna==3.6
+ seeds==1.0.0 (from file:///home/konsti/projects/uv/scripts/workspaces/albatross-root-workspace/packages/seeds)
+ sniffio==1.3.1
+ tqdm==4.66.4
```
## Summary
This PR ensures that if a lockfile already contains a resolved reference
(e.g., you locked with `main` previously, and it locked to a specific
commit), and you run `uv lock`, we use the same SHA, even if it's not
the latest SHA for that tag. This avoids upgrading Git dependencies
without `--upgrade`.
Closes #3920.
## Summary
This PR removes the static resolver map:
```rust
static RESOLVED_GIT_REFS: Lazy<Mutex<FxHashMap<RepositoryReference, GitSha>>> =
Lazy::new(Mutex::default);
```
With a `GitResolver` struct that we now pass around on the
`BuildContext`. There should be no behavior changes here; it's purely an
internal refactor with an eye towards making it cleaner for us to
"pre-populate" the list of resolved SHAs.
## Summary
This will help prevent bugs like #3934 by unifying the implementations
for editables and non-editable unnamed requirements. Specifically, both
of these now go through the same parsing paths and use the same struct
representations (with the exception that the editable flag is flipped in
the first case):
```
-e ./foo/bar
./foo/bar
```
We also now support PEP 508 in editable URLs. It turns out this is just
a limitation in pip, so it's correct to support it. For example, this
now works:
```
-e black[d] @ file://${PROJECT_ROOT}/scripts/packages/black_editable
```
Closes #3941.
Closes #3942.
Move `Metadata`, `MetadataLoweringError` and `ArchiveMetadata` into
their own file `metadata.rs` in `uv-distribution`, moving it out from
`lib.rs`. No functional changes.
Fixes these two warnings on nightly:
```
warning: unexpected `cfg` condition name: `codspeed`
--> crates/bench/src/lib.rs:5:15
|
5 | #[cfg(not(codspeed))]
| ^^^^^^^^ help: found config with similar value: `feature = "codspeed"`
|
= help: expected names are: `clippy`, `debug_assertions`, `doc`, `docsrs`, `doctest`, `feature`, `miri`, `overflow_checks`, `panic`, `proc_macro`, `relocation_model`, `rustfmt`, `sanitize`, `sanitizer_cfi_generalize_pointers`, `sanitizer_cfi_normalize_integers`, `target_abi`, `target_arch`, `target_endian`, `target_env`, `target_family`, `target_feature`, `target_has_atomic`, `target_has_atomic_equal_alignment`, `target_has_atomic_load_store`, `target_os`, `target_pointer_width`, `target_thread_local`, `target_vendor`, `test`, `ub_checks`, `unix`, and `windows`
= help: consider using a Cargo feature instead
= help: or consider adding in `Cargo.toml` the `check-cfg` lint config for the lint:
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(codspeed)'] }
= help: or consider adding `println!("cargo::rustc-check-cfg=cfg(codspeed)");` to the top of the `build.rs`
= note: see <https://doc.rust-lang.org/nightly/rustc/check-cfg/cargo-specifics.html> for more information about checking conditional configuration
= note: `#[warn(unexpected_cfgs)]` on by default
warning: unexpected `cfg` condition name: `codspeed`
--> crates/bench/src/lib.rs:8:11
|
8 | #[cfg(codspeed)]
| ^^^^^^^^ help: found config with similar value: `feature = "codspeed"`
|
= help: consider using a Cargo feature instead
= help: or consider adding in `Cargo.toml` the `check-cfg` lint config for the lint:
[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(codspeed)'] }
= help: or consider adding `println!("cargo::rustc-check-cfg=cfg(codspeed)");` to the top of the `build.rs`
= note: see <https://doc.rust-lang.org/nightly/rustc/check-cfg/cargo-specifics.html> for more information about checking conditional configuration
```
```
warning: unexpected `cfg` condition value: `unix`
--> crates/uv-extract/src/tar.rs:6:16
|
6 | #[cfg_attr(not(target_os = "unix"), allow(dead_code))]
| ^^^^^^^^^^^^^^^^^^
|
= note: expected values for `target_os` are: `aix`, `android`, `cuda`, `dragonfly`, `emscripten`, `espidf`, `freebsd`, `fuchsia`, `haiku`, `hermit`, `horizon`, `hurd`, `illumos`, `ios`, `l4re`, `linux`, `macos`, `netbsd`, `none`, `nto`, `openbsd`, `psp`, `redox`, `solaris`, `solid_asp3`, `teeos`, `tvos`, `uefi`, `unknown`, `visionos`, `vita`, `vxworks`, `wasi`, `watchos`, and `windows` and 2 more
= note: see <https://doc.rust-lang.org/nightly/rustc/check-cfg/cargo-specifics.html> for more information about checking conditional configuration
= note: requested on the command line with `-W unexpected-cfgs`
```
With the change, we remove the special casing of workspace dependencies
and resolve `tool.uv` for all git and directory distributions. This
gives us support for non-editable workspace dependencies and path
dependencies in other workspaces. It removes a lot of special casing
around workspaces. These changes are the groundwork for supporting
`tool.uv` with dynamic metadata.
The basis for this change is moving `Requirement` from
`distribution-types` to `pypi-types` and the lowering logic from
`uv-requirements` to `uv-distribution`. These changes should be split
out into separate PRs.
I've included an example workspace `albatross-root-workspace2` where
`bird-feeder` depends on `a` from another workspace `ab`. There's a
bunch of failing tests and regressed error messages that still need
fixing. It does fix the audited package count for the workspace tests.
## Summary
In general, it's not quite right to filter preferences by `--reinstall`
-- we still want to respect existing versions, we just don't want to
respect _installed_ versions. But now that the installed versions and
preferences are decoupled, we can remove this (`--reinstall` is enforced
on the installed versions via the `Exclusions` struct that we pass to
the resolver).
While I was here, I also cleaned up the lockfile preference code to
better match the structure for `requirements.txt`.
## Summary
I believe that this is not necessary, as the installer packages are
already considered in `CandidateSelector::get_preferred`.
Firstly, note that we never pass both non-empty installed packages _and_
non-empty preferences (the installer routines pass site packages and no
preferences; the resolver routines pass no site packages but lockfile
preferences).
However, in general, if you look at `CandidateSelector::get_preferred`,
and consider what's changing, we now skip the `if let Some(version) =
preferences.version(package_name)` case for installed packages. But we
then check installed packages within that `if`, and in the `else`. So it
seems like we'll still return them in either case?
The only behavior change is in the case that you have multiple versions
of a package installed. Previously, we'd respect one of them, because
`Preferences` takes the last winner (it's a hash map, so we just replace
the package key with the last version we see); but in installed
packages, we always ignore distributions with multiple versions, since
it's indicative of a broken environment. That's a fine change IMO. We
could change `CandidateSelector::get_preferred` to support this if we
wanted to.
## Summary
We currently rely on libgit2 for most git-related functionality.
However, libgit2 has long-standing performance issues and lags
significantly behind git in terms of new features. For these reasons we
now use the git CLI by default for fetching repositories
(https://github.com/astral-sh/uv/pull/1781). This PR completely drops
libgit2 in favor of the git CLI for all git-related functionality, which
should allow us to use features such as partial clones and sparse
checkouts in the future for performance.
There is also a lot of technical debt in the current git code as it's
mostly taken from Cargo. Switching to the git CLI *vastly* simplifies
the `uv-git` codebase.
Eventually we might want to look into switching to
[`gitoxide`](https://github.com/Byron/gitoxide), but it's currently too
immature for our use case.
## Summary
This PR changes the lock-file format to use inline tables for wheels and
source distributions, which currently use separate tables that make the
file harder to follow.
```diff
[[distribution]]
name = "typing-extensions"
version = "4.10.0"
source = "registry+https://pypi.org/simple"
- [distribution.sdist]
- url = "0d26ce356c7c323176620b7b483e44/typing_extensions-4.10.0.tar.gz"
- hash = "sha256:b0abd7c89e8fb96f98db18d86106ff1d90ab692004eb746cf6eda2682f91b3cb"
- size = 77558
-
- [[distribution.wheel]]
- url = "dc04a3ea60b9bc8074b5d6b7ab90bb/typing_extensions-4.10.0-py3-none-any.whl"
- hash = "sha256:69b1a937c3a517342112fb4c6df7e72fc39a38e7891a5730ed4985b5214b5475"
- size = 33926
+ sdist = { url = "0d26ce356c7c323176620b7b483e44/typing_extensions-4.10.0.tar.gz", hash = "sha256:b0abd7c89e8fb96f98db18d86106ff1d90ab692004eb746cf6eda2682f91b3cb", size = 77558 }
+ wheel = [{ url = "dc04a3ea60b9bc8074b5d6b7ab90bb/typing_extensions-4.10.0-py3-none-any.whl", hash = "sha256:69b1a937c3a517342112fb4c6df7e72fc39a38e7891a5730ed4985b5214b5475", size = 33926 }]
```
The downside is that the inline tables end up quite long, and TOML
doesn't support line breaks in inline tables yet.
Part of https://github.com/astral-sh/uv/issues/3611.
We significantly regressed performance in some cases because we were
cloning the resolver state one more time than we needed to. That doesn't
sound like a lot, but in the case where there are no forks, it implies
we were cloning the state for every `get_dependencies` called when we
shouldn't have been cloning it at all.
Avoiding the clone results in somewhat tortured code. This can probably
be refactored by moving bits out to a helper routine, but that also
seemed non-trivial. So we let this suffice for now.
This addresses the lack of marker support in prior commits.
Specifically, we add them as a new field to `AnnotatedDist`, and from
there, they get added to a `Distribution` in a `Lock`.
This commit is a pretty invasive change that implements the merging
of resolutions created by each fork of the resolver.
The main idea here is that each `SolveState` is converted into a
`Resolution` (a new type) and stored on the heap after its fork
completes. When all forks complete, they are all merged into a single
`Resolution`. This `Resolution` is then used to build a `ResolutionGraph`.
Construction of `ResolutionGraph` mostly stays the same (despite the
gnarly diff due to an indent change) with one exception: the code to
extract dependency edges out of PubGrub's state has been moved to
`SolveState::into_resolution`. The idea here is that once a fork
completes, we extract what we need from the PubGrub state and then
throw it away. We store these edges in our own intermediate type which
is then converted into petgraph edges in the `ResolutionGraph`
constructor.
One interesting change we make here is that our edge
data is now a `Version` instead of a `Range<Version>`. I don't think
`Range<Version>` was actually being used anywhere, so this seems okay?
In any case, I think `Version` here is correct because a resolution
corresponds to specific dependencies of each package. Moreover, I didn't
see an easy way to make things work with `Range<Version>`. Notably,
since we no longer have the guarantee that there is only one version of
each package, we need to use `(PackageName, Version)` instead of just
`PackageName` for inverted lookups in `ResolutionGraph::from_state`.
Finally, the main resolver loop itself is changed a bit to track all
forked resolutions and then merge them at the end.
Note that we don't really have any dealings with markers in this commit.
We'll get to that in a subsequent commit.
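For illustration, a sketch of the merge step with stand-in types; note the `(PackageName, Version)` keys, since after forking a package may appear at several versions:
```rust
use std::collections::{BTreeMap, BTreeSet};

type PackageName = String;
type Version = String;

#[derive(Default)]
struct Resolution {
    packages: BTreeMap<(PackageName, Version), ()>,
    edges: BTreeSet<((PackageName, Version), (PackageName, Version))>,
}

/// Union the per-fork resolutions into the single resolution that the
/// `ResolutionGraph` constructor consumes.
fn merge(forks: Vec<Resolution>) -> Resolution {
    let mut combined = Resolution::default();
    for fork in forks {
        combined.packages.extend(fork.packages);
        combined.edges.extend(fork.edges);
    }
    combined
}
```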
This changes the constructor to just take an `InMemoryIndex`
directly instead of the constituent parts. No real reason other
than it seems a little simpler.
There are still some TODOs/FIXMEs here, but this represents a
chunk of the resolver refactoring to enable forking. We don't do any
merging of resolutions yet, so crucially, this code is broken when no
marker environment is provided. But when a marker environment is
provided, this should behave the same as a non-forking resolver. In
particular, `get_dependencies_forking` is just `get_dependencies`
whenever there's a marker environment.
This makes it so we can pass any function to determine whether an extra
is always true or not.
For example, `markers.simplify_extras_with(|_| true)` will remove all
extras in any marker expression. This wasn't possible to express
(without knowing all of the marker names) using the old API, but becomes
trivial to express with a predicate function.
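A usage sketch (the signature is assumed from the description; like `simplify_extras`, we assume `None` means the whole tree simplified away):
```rust
use pep508_rs::MarkerTree;

// Treat every extra as always-true, removing all `extra == "..."` terms
// without having to enumerate the extras' names up front.
fn without_extras(markers: MarkerTree) -> Option<MarkerTree> {
    markers.simplify_extras_with(|_| true)
}
```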
While this could be done by callers since the representation
of `MarkerTree` is public, they are just annoying enough to do
that I think it makes sense to provide them on `MarkerTree`
itself.
These could also be improved in the future to do even more
flattening of conjunctions/disjunctions (or perhaps even
more robust simplification). But for now, some basic flattening
is good enough.
These routines will be used to combine marker expressions when
merging forked resolutions.
## Summary
Ensures that we avoid upgrading packages unless `--upgrade` or similar
is passed.
For now, the resolver only respects these for registry distributions.
Closes https://github.com/astral-sh/uv/issues/3918.
## Summary
This PR adds extras to the lockfile, and enables users to selectively
sync extras in `uv sync` and `uv run`. The end result here was fairly
simple, though it required a few refactors to get here. The basic idea
is that `DistributionId` now includes `extra: Option<ExtraName>`, so we
effectively treat extras as separate packages. Generating the lockfile,
and generating the resolution from the lockfile, fall out of this
naturally with no special-casing or additional changes.
The main downside here is that it bloats the lockfile significantly.
Specifically:
- We include _all_ distribution URLs and hashes for _every_ extra
variant.
- We include all dependencies for the extra variant, even though they
are dependencies of the base package.
We could normalize this representation by changing each distribution to
have an `optional-dependencies` hash map that keys on extras, but we
actually don't have the information we need to create that right now
(specifically, we can't differentiate between dependencies that
_require_ the extra and dependencies on the base package).
Closes #3700.
## Summary
This PR just ensures that when running `uv lock` (or `uv run`), we lock
with all extras. When we later install, we'll also _install_ with all
extras, but that will be changed in a future PR.
## Summary
Today, we represent each package as a single node in the graph, and
combine all the extras. This is helpful for the `requirements.txt`-style
resolution, in which we want to show a single line for each package
with the extras combined into a single array.
This PR modifies the representation to instead use a separate node for
each (package, extra) pair. We then reduce into the previous format when
printing in the `requirements.txt`-style format, so there shouldn't be
any user-facing changes here.
## Summary
This PR addresses an issue where `tool.uv` settings are not read if
`tool.uv.sources` or `tool.uv.workspaces` are present in the TOML file.
## Test Plan
Tested locally.
## Summary
Modifies `uv run` to write and read from the lockfile, rather than
resolving the project requirements as-is on each invocation.
Closes https://github.com/astral-sh/uv/issues/3891.
## Summary
Resolves https://github.com/astral-sh/uv/issues/3896. Adding progress
bars to the `MultiProgress` after configuring them seems to not
synchronize the required state fully.
## Test Plan
The repeated output is gone when testing locally.
## Summary
There are a few behavior changes in here:
- We now enforce `--require-hashes` for editables, like pip. So if you
use `--require-hashes` with an editable requirement, we'll reject it. I
could change this if it seems off.
- We now treat source tree requirements, editable or not (e.g., both `-e
./black` and `./black`) as if `--refresh` is always enabled. This
doesn't mean that we _always_ rebuild them; but if you pass
`--reinstall`, then yes, we always rebuild them. I think this is an
improvement and is close to how editables work today.
Closes #3844.
Closes #2695.
## Summary
It removes the unused `Result` when creating the `Cache` with the
`from_path` constructor. I don't believe it does any I/O operations
anymore, at least.
Add workspace support when using `-r <path>/pyproject.toml` or `-e
<path>` in the pip interface. It is limited to all-editable
static-metadata workspaces, and tests only include a single main
workspace, ignoring path dependencies in another workspace. This can be
considered the MVP for workspace support: You can create a workspace,
you can install from it, but some options and conveniences are still
missing. I'll file follow-up tickets (support in lockfiles, support path
deps in other workspaces, #3625).
There is also support in `uv run`, but we need
https://github.com/astral-sh/uv/issues/3700 first to properly support
using different current projects in the bluejay interface; currently, the
resolution and therefore the lockfile depend on the current project.
I'd do this change first (it's big enough already), then #3700, and then
add workspace support properly to bluejay.
Fixes #3404
This bumps the versions of pep508 and pep440 to coincide with the
crates.io versions. While not strictly the same, the new types in uv use
an `Inner` struct. Practically, I've found I'm still able to use the
patched versions, as can be seen from the open PR here:
https://github.com/prefix-dev/pixi/pull/1436.
Would be great if this bump can be done so we can keep combining the
types :)
Follow-up to https://github.com/astral-sh/uv/pull/3797 to clean up the
test isolation in `uv-interpreter`.
I still want to expose a CLI at some point like `uv python <...>` for
discovery, and to test from there; hopefully this will make that
transition simpler.
Extends #3726
Moves toolchain storage out of `UV_BOOTSTRAP_DIR` (`./bin`) into the
proper user data directory as defined by #3726.
Replaces `UV_BOOTSTRAP_DIR` with `UV_TOOLCHAIN_DIR` for customization.
Installed toolchains will be discovered without opt-in, but the idea is
still that these are not yet user-facing.
## Summary
This also adds filtering for the ARM Pythons, since that needs some libc
changes; and it closes https://github.com/astral-sh/uv/issues/3854 by
way of adding an "arm" branch.
## Summary
Allows, e.g., `UV_SYSTEM_PYTHON=false uv pip install --python
.venv/bin/python`.
This was intended to work after fixing
https://github.com/astral-sh/uv/issues/3000, but I think I misdiagnosed
the scope when closing that issue, and the linked PR there only fixed
some _other_ problems around index URLs.
The only thing we really lose here is we no longer error when
`--break-system-packages` is provided without `--system`, but we can
enforce that elsewhere if we want.
Closes https://github.com/astral-sh/uv/issues/3829.
## Summary
This PR makes a variety of invalid states unrepresentable by changing
`Preference` to require a `PackageName` and `Version`, rather than
accepting a generic `Requirement`. There should be no meaningful
behavior changes.
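Schematically (field names assumed), the type is now just:
```rust
type PackageName = String; // stand-ins for uv's real types
type Version = String;

/// A pinned (name, version) pair: URL-only or unversioned requirements
/// are now unrepresentable as preferences.
pub struct Preference {
    name: PackageName,
    version: Version,
}
```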
## Summary
We actually _already_ ignore these (preferences only apply to versions,
not URLs), it just happens later on. This PR thus just avoids crashing.
The behavior is unchanged.
Closes #3822.
Closes #3784
The cache did not use an absolute path. I'm not sure this is actually a
new bug, as this code wasn't touched in #3266, but perhaps there was a
slight difference in the paths we were passing around. Note, just
canonicalizing the path as soon as we see it doesn't work because then
we jump out of the virtual environment into the system interpreter.
## Test plan
```
❯ uv venv
Using Python 3.12.3 interpreter at: /opt/homebrew/opt/python@3.12/bin/python3.12
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
❯ uv pip install anyio
Resolved 3 packages in 81ms
Installed 3 packages in 4ms
+ anyio==4.3.0
+ idna==3.7
+ sniffio==1.3.1
❯ mkdir uv-issue-3784 && cd uv-issue-3784
❯ uv venv
Using Python 3.12.3 interpreter at: /opt/homebrew/opt/python@3.12/bin/python3.12
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
❯ gcm
Switched to branch 'main'
Your branch is up to date with 'origin/main'.
❯ cargo run -q -- pip list -v -p .venv
DEBUG Checking for Python interpreter in directory `.venv`
TRACE Cached interpreter info for Python 3.12.3, skipping probing: .venv/bin/python3
DEBUG Using Python 3.12.3 environment at .venv/bin/python3
Package Version
------- -------
anyio 4.3.0
idna 3.7
sniffio 1.3.1
❯ cd uv-issue-3784
❯ cargo run -q -- pip list -v -p .venv
DEBUG Checking for Python interpreter in directory `.venv`
TRACE Cached interpreter info for Python 3.12.3, skipping probing: .venv/bin/python3
DEBUG Using Python 3.12.3 environment at /Users/zb/workspace/uv/.venv/bin/python3
Package Version
------- -------
anyio 4.3.0
idna 3.7
sniffio 1.3.1
❯ cd ..
❯ gco zb/fix-relative-venv
Switched to branch 'zb/fix-relative-venv'
❯ cargo run -q -- pip list -v -p .venv
DEBUG Checking for Python interpreter in directory `.venv`
TRACE Cached interpreter info for Python 3.12.3, skipping probing: .venv/bin/python3
DEBUG Using Python 3.12.3 environment at .venv/bin/python3
Package Version
------- -------
anyio 4.3.0
idna 3.7
sniffio 1.3.1
❯ cd uv-issue-3784
❯ cargo run -q -- pip list -v -p .venv
DEBUG Checking for Python interpreter in directory `.venv`
TRACE Cached interpreter info for Python 3.12.3, skipping probing: .venv/bin/python3
DEBUG Using Python 3.12.3 environment at .venv/bin/python3
```
## Summary
Related to https://github.com/astral-sh/uv/issues/3818. We should
_always_ include the package name if we know it's not a file path, even
if it starts with an environment variable.
## Summary
I haven't tested on Windows yet, but the idea here is that we should use
a portable representation when printing paths.
I decided to limit the scope here to paths that we write to output
files.
Closes https://github.com/astral-sh/uv/issues/3800.
## Summary
It turns out that in the
[spec](https://packaging.python.org/en/latest/specifications/binary-distribution-format/#file-name-convention),
if a wheel filename includes a build tag, then we need to use it to
break ties. This PR implements that behavior. (Previously, we dropped
the build tag entirely.)
Closes #3779.
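For illustration, a minimal sketch of the tie-breaking rule from the spec: a build tag starts with digits, and comparison is on the numeric prefix first, then the remaining string.
```rust
/// Split a build tag like "70" or "63b" into its (numeric, suffix) sort key.
fn build_tag_key(tag: &str) -> (u64, &str) {
    let digits = tag.chars().take_while(char::is_ascii_digit).count();
    (tag[..digits].parse().unwrap_or(0), &tag[digits..])
}

fn main() {
    // Two otherwise-identical wheels: build 70 now beats build 63.
    assert!(build_tag_key("70") > build_tag_key("63"));
}
```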
## Test Plan
Run: `cargo run pip install -i https://pypi.anaconda.org/intel/simple
mkl_fft==1.3.8 --python-platform linux --python-version 3.10`. This now
resolves without error. Previously, we selected build tag 63 of
`mkl_fft==1.3.8`, which led to an incompatibility with NumPy. Now, we
select build tag 70.
When parsing requirements from any source, directly parse the URL parts
(and reject unsupported URLs) instead of parsing URL parts at a later
stage. This removes a bunch of error branches and concludes the work of
parsing URL parts once and passing them around everywhere.
Many usages of the assembled `VerbatimUrl` remain, but these can be
removed incrementally.
Please review commit-by-commit.
## Summary
This seems to be one of the most consistent benchmark cases we have in
terms of standard deviation:
```
➜ hyperfine "target/profiling/main pip compile scripts/requirements/airflow.in" --runs 200
Benchmark 1: target/profiling/main pip compile scripts/requirements/airflow.in
Time (mean ± σ): 292.6 ms ± 6.6 ms [User: 414.1 ms, System: 194.2 ms]
Range (min … max): 282.7 ms … 320.1 ms 200 runs
```
For smaller benchmarks, scispacy and dtlssocket seem to be a bit more
consistent than our current jupyter benchmark, but it hasn't given us
any problems so I'll leave it for now.
e.g. in `uv pip install anyio -v` this message is just noise
```
DEBUG Requirement satisfied: anyio
DEBUG Requirement satisfied: idna>=2.8
DEBUG Requirement satisfied: sniffio>=1.1
DEBUG All editables satisfied:
```
```
DEBUG Acquired lock for `.venv`
```
instead of
```
DEBUG Trying to lock if free: .venv/.lock
```
At trace level, this includes the pre-lock message as well
```
TRACE Checking lock for `.venv`
DEBUG Acquired lock for `.venv`
```
We'll still display the lock file path when something goes wrong
The venv subcommand requires a system interpreter. The tests' Python path
discovery would previously allow a venv interpreter, failing the venv
tests that no longer have a system interpreter.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Allows requesting additional transitive dependencies when running a
tool.
e.g. `uv tool run -v --with anyio ruff check example.py` (Why would you
want anyio with ruff? Who knows 😄)
The motivation for doing this now is that I think the first
implementation of `uv tool install` might just shim into `uv tool run`
with pinned dependencies? Regardless, this is something we need in the
long run, and it's a trivial addition right now.
## Summary
We now show yanks as part of the resolution diagnostics, so they now
appear for `sync`, `install`, `compile`, and any other operations.
Further, they'll also appear for cached packages (but not packages that
are _already_ installed).
Closes https://github.com/astral-sh/uv/issues/3768.
Closes #3766.
We usually infer the package the tool is pulled from to be the same name
as the tool itself, but that's not always the case. This allows users to
provide a custom package.
## Summary
This PR removes most of the code in `project/mod.rs` in favor of the
routines exposed in `pip/operations.rs`.
I think we can do a lot more to add more abstraction here and reduce the
verbosity, but for now it deduplicates a _ton_ of logic. The remaining
logic is just instantiating settings etc.
e.g. this error message is not great
```
❯ uv venv --python 3.12.2
× No interpreter found for Python 3.12.2 in provided path, search path, managed toolchains, or parent interpreter
```
e.g.
```
❯ echo "anyio" | cargo run -q -- pip compile - --python 3.9 -v
DEBUG Searching for interpreter that fulfills Python @ 3.9
DEBUG Found a virtual environment at: /Users/zb/workspace/uv/.venv
DEBUG Using Python 3.9.18 interpreter at bin/cpython-3.9.18-macos-aarch64-none/install/bin/python3 for builds
```
e.g. instead of
```
❯ uv venv --python pypy@3.10
× No interpreter found for pypy 3.10 in search path
```
we say
```
❯ uv venv --python pypy@3.10
× No interpreter found for PyPy 3.10 in search path
```
Closes #2222
Closes https://github.com/astral-sh/uv/issues/2058
Replaces https://github.com/astral-sh/uv/pull/2338
See also https://github.com/astral-sh/uv/issues/2649
We use an environment variable (`UV_INTERNAL__PARENT_INTERPRETER`) to
track the invoking interpreter when `python -m uv` is used. The parent
interpreter is preferred over all other sources (though it will be
skipped if it does not meet a `--python` request or if `--system` is
used and it belongs to a virtual environment). We warn if `--system` is
not provided and this interpreter would mutate system packages, but
allow it.
Previously, we enforced `SystemPython` outside of the interpreter
discovery exclusively with source selection. Now, we perform additional
filtering of interpreters depending on if they are a virtual
environment. This should not change any existing behavior, but will make
it much easier to have consistent behavior in ambiguous cases like
https://github.com/astral-sh/uv/pull/3736#discussion_r1610072262 where a
source could provide either a system interpreter or virtual environment
interpreter.
## Summary
This PR takes the functions used in `pip install`, moves them into a
common module, and then replaces all the `pip sync` logic with calls
into those functions. The net effect is that `pip install` and `pip
sync` share far more code and demonstrate much more consistent behavior.
Closes https://github.com/astral-sh/uv/issues/3555.
## Summary
This PR adds editables using a new source type (`editable+...`), and
then extracts the editables from the lockfile in `uv sync`.
Closes https://github.com/astral-sh/uv/issues/3695.
Otherwise `uv venv --python 3.12` can prefer `.venv/bin/python` over the
system Python (which is always used if you don't provide a `--python`
flag). I would find this confusing as a user.
Updates our executable name searches to support implementation names
i.e. `cpython` and `pypy` and adds support for PyPy.
We might want to _not_ support searching for `cpython` because that's
non-standard?
Adds `--offline` support to `uv tool run` and `uv run` because I needed
it on the airplane today.
I think we should move `--offline` to the global settings like
`--native-tls`.
## Summary
Closes https://github.com/astral-sh/uv/issues/3715.
## Test Plan
```
❯ echo "/../test" | cargo run pip compile -
error: Couldn't parse requirement in `-` at position 0
Caused by: path could not be normalized: /../test
/../test
^^^^^^^^
❯ echo "-e /../test" | cargo run pip compile -
error: Invalid URL in `-`: `/../test`
Caused by: path could not be normalized: /../test
Caused by: cannot normalize a relative path beyond the base directory
```
This is mostly a shorter version of `uv run` that infers a requirement
name from the command. The main goal here is to do the smallest amount
of work necessary to get #3560 started.
Closes #3613
e.g.
```shell
$ uv tool run -- ruff check
warning: `uv tool run` is experimental and may change without warning.
Resolved 1 package in 34ms
Installed 1 package in 2ms
+ ruff==0.4.4
error: Failed to parse example.py:1:5: Expected an expression
example.py:1:5: E999 SyntaxError: Expected an expression
Found 1 error.
```
Updates our Python interpreter discovery to conform to the rules
described in #2386, please see that issue for a full description of the
behavior. Briefly, we now will search for interpreters that satisfy a
requested version without stopping at the first Python executable.
Additionally, if retrieving information about an interpreter fails we
will continue to search for a working interpreter. We also add the
plumbing necessary to request Python implementations other than CPython,
though we do not add support for other implementations at this time.
A major internal goal of this work is to prepare for user-facing managed
toolchains i.e. fetching a requested version during `uv run`. These APIs
are not introduced, but there is some managed toolchain handling as
required for our test suite.
Some noteworthy implementation changes:
- The `uv_interpreter::find_python` module has been removed in favor of
a `uv_interpreter::discovery` module.
- There are new types to help structure interpreter requests and track
sources
- Executable discovery is implemented as a big lazy iterator and is a
central authority for source precedence (see the sketch after this list)
- `uv_interpreter::Error` variants were split into scoped types in each
module
- There's much more unit test coverage, but not for Windows yet
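For illustration, a sketch of the lazy chain (hypothetical helper names; the real sources and ordering live in the `discovery` module):
```rust
/// Chain candidate sources in precedence order; later sources are only
/// probed when earlier ones fail to satisfy the request, and discovery
/// keeps going past executables that don't satisfy it.
fn find_interpreter(request: &str) -> Option<String> {
    let parent_interpreter = std::iter::empty::<String>();
    let search_path = ["python3.12", "python3", "python"]
        .into_iter()
        .map(String::from);
    let managed_toolchains = std::iter::empty::<String>();

    parent_interpreter
        .chain(search_path)
        .chain(managed_toolchains)
        .find(|exe| satisfies(exe, request))
}

fn satisfies(_exe: &str, _request: &str) -> bool {
    true // stand-in for querying the interpreter for its version
}
```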
Remaining work:
- [x] Write new test cases
- [x] Determine correct behavior around executables in the current
directory
- _Future_: Combine `PythonVersion` and `VersionRequest`
- _Future_: Consider splitting `ManagedToolchain` into local and remote
variants
- _Future_: Add Windows unit test coverage
- _Future_: Explore behavior around implementation precedence (i.e.
CPython over PyPy)
Refactors split into:
- #3329
- #3330
- #3331
- #3332
Closes #2386
Instead of saying
> we can conclude that you require==0a0.dev0 and
pandas-stubs==2.0.3.230814 are incompatible.
we'll say
> we can conclude that your requirements and pandas-stubs==2.0.3.230814
are incompatible.
Closes #3710
I'm not sure how to get unit test coverage for this, might look into
that. Ideally we'd skip this branch entirely?
I don't really understand why this only happens on Windows clippy and
not on Linux too, but as usual, boxing the error variant fixes it.
Fixup for #3585
Add minimal support for workspace discovery, only used for determining
paths in the bluejay commands.
We can now discover the workspace structure, namely that the
`pyproject.toml` of a package belongs to a workspace `pyproject.toml`
with members and exclusions. The globbing logic is inspired by cargo. We
don't resolve `workspace = true` metadata declarations yet.
Pubgrub stores incompatibilities as (package name, version range)
tuples, meaning it needs to clone the package name for each
incompatibility, and each non-borrowed operation on incompatibilities.
https://github.com/astral-sh/uv/pull/3673 made me realize that
`PubGrubPackage` has gotten large (expensive to copy), so like `Version`
and other structs, I've added an `Arc` wrapper around it.
It's a pity clippy forbids `.deref()`; it's less opaque than `&**` and
has IDE support (clicking on `.deref()` jumps to the right impl).
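The wrapper is roughly (a sketch, with a toy inner enum standing in for the real one):
```rust
use std::sync::Arc;

/// Cloning is now an atomic refcount bump instead of a deep copy of the
/// (large) inner enum.
#[derive(Clone, PartialEq, Eq, Hash)]
pub struct PubGrubPackage(Arc<PubGrubPackageInner>);

#[derive(PartialEq, Eq, Hash)]
pub enum PubGrubPackageInner {
    Root,
    Package { name: String },
}

impl std::ops::Deref for PubGrubPackage {
    type Target = PubGrubPackageInner;

    fn deref(&self) -> &Self::Target {
        &self.0
    }
}
```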
## Benchmarks
It looks like this matters most for complex resolutions, which, I
assume, is because they carry larger `PubGrubPackageInner::Package` and
`PubGrubPackageInner::Extra` types.
```bash
hyperfine --warmup 5 "./uv-main pip compile -q ./scripts/requirements/jupyter.in" "./uv-branch pip compile -q ./scripts/requirements/jupyter.in"
hyperfine --warmup 5 "./uv-main pip compile -q ./scripts/requirements/airflow.in" "./uv-branch pip compile -q ./scripts/requirements/airflow.in"
hyperfine --warmup 5 "./uv-main pip compile -q ./scripts/requirements/boto3.in" "./uv-branch pip compile -q ./scripts/requirements/boto3.in"
```
```
Benchmark 1: ./uv-main pip compile -q ./scripts/requirements/jupyter.in
Time (mean ± σ): 18.2 ms ± 1.6 ms [User: 14.4 ms, System: 26.0 ms]
Range (min … max): 15.8 ms … 22.5 ms 181 runs
Benchmark 2: ./uv-branch pip compile -q ./scripts/requirements/jupyter.in
Time (mean ± σ): 17.8 ms ± 1.4 ms [User: 14.4 ms, System: 25.3 ms]
Range (min … max): 15.4 ms … 23.1 ms 159 runs
Summary
./uv-branch pip compile -q ./scripts/requirements/jupyter.in ran
1.02 ± 0.12 times faster than ./uv-main pip compile -q ./scripts/requirements/jupyter.in
```
```
Benchmark 1: ./uv-main pip compile -q ./scripts/requirements/airflow.in
Time (mean ± σ): 153.7 ms ± 3.5 ms [User: 165.2 ms, System: 157.6 ms]
Range (min … max): 150.4 ms … 163.0 ms 19 runs
Benchmark 2: ./uv-branch pip compile -q ./scripts/requirements/airflow.in
Time (mean ± σ): 123.9 ms ± 4.6 ms [User: 152.4 ms, System: 133.8 ms]
Range (min … max): 118.4 ms … 138.1 ms 24 runs
Summary
./uv-branch pip compile -q ./scripts/requirements/airflow.in ran
1.24 ± 0.05 times faster than ./uv-main pip compile -q ./scripts/requirements/airflow.in
```
```
Benchmark 1: ./uv-main pip compile -q ./scripts/requirements/boto3.in
Time (mean ± σ): 327.0 ms ± 3.8 ms [User: 344.5 ms, System: 71.6 ms]
Range (min … max): 322.7 ms … 334.6 ms 10 runs
Benchmark 2: ./uv-branch pip compile -q ./scripts/requirements/boto3.in
Time (mean ± σ): 311.2 ms ± 3.1 ms [User: 339.3 ms, System: 63.1 ms]
Range (min … max): 307.8 ms … 317.0 ms 10 runs
Summary
./uv-branch pip compile -q ./scripts/requirements/boto3.in ran
1.05 ± 0.02 times faster than ./uv-main pip compile -q ./scripts/requirements/boto3.in
```
This is bare-bones support for editables in `uv sync` as basis for
workspace support, notably without lockfile integration. It leverages
the existing `ResolvedEditables` infrastructure.
## Summary
This PR falls back to writing an unnamed requirement if it appears to be
a relative URL. pip is way more flexible when providing an unnamed
requirement than when providing a PEP 508 requirement. For example,
_only_ this works:
```
black @ file:///Users/crmarsh/workspace/uv/scripts/packages/black_editable
```
Any other form will fail.
Meanwhile, _all_ of these work:
```
file:///Users/crmarsh/workspace/uv/scripts/packages/black_editable
scripts/packages/black_editable
./scripts/packages/black_editable
file:./scripts/packages/black_editable
file:scripts/packages/black_editable
```
Closes https://github.com/astral-sh/uv/issues/3180.
Since we're adding a `Option<MarkerTree>` to `PubGrubPackage`, and since
we just make `PubGrubPackage` implement `Ord`, it follows that we want
`MarkerTree` to also implement `Ord`.
This makes use of the newly added `Ord` impl on `PubGrubPackage` to make
the output of `format_terms` independent of hashmap iteration order.
This was already collecting the terms into an intermediate `Vec`, so
sorting probably isn't going to add any significant overhead here.
(Plus, this is only running when formatting an error message after a
solution could not be found, so an extra sort doesn't seem like a big
deal here.)
Note that some tests are updated in this commit as a result of this
change. As far as I can tell, the semantic meaning of the output remains
the same. But the order of the listed packages does not.
The specific thing motivating this change is that, in a subsequent
commit, I added `Option<MarkerTree>` to `PubGrubPackage::Package`, and
this caused similar changes in test output. So I backtracked and
isolated this change from the addition of `Option<MarkerTree>`.
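For illustration, a minimal sketch of the pattern: collect the map's entries and sort by the key's `Ord` impl before rendering, so the output no longer depends on hash iteration order.
```rust
use std::collections::HashMap;

fn sorted_terms<K: Ord, V>(terms: &HashMap<K, V>) -> Vec<(&K, &V)> {
    let mut entries: Vec<_> = terms.iter().collect();
    entries.sort_by(|(a, _), (b, _)| a.cmp(b));
    entries
}
```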
It turns out that we use PubGrubPackage as the key in hashmaps in a fair
few places. And when we iterate over hashmaps, the order is unspecified.
This can in turn result in changes in output as a result of changes in
the PubGrubPackage definition, purely as a function of its changing
hash. This is confusing as there should be no semantic difference.
Thus, this is a precursor to introducing some more determinism to places
I found in the error reporting whose output depending on hashmap
iteration order.
It looks like the last vestiges of `Derivative` were removed in commit
7eaed07f6c, but the `derive(Derivative)` attribute that this rendered
superfluous wasn't removed.
This is split out from workspaces support, which needs editables in the
bluejay commands. It consists mainly of refactorings:
* Move the `editable` module one level up.
* Introduce a `BuiltEditableMetadata` type for `(LocalEditable,
Metadata23, Requirements)`.
* Add editables to `InstalledPackagesProvider` so we can use
`EmptyInstalledPackages` for them.
## Summary
If you have (e.g.) `extra-index-url` in your configuration file _and_
provide `--extra-index-url` on the command-line, we now merge the
options rather than ignoring those in the configuration file. As such,
merging the CLI and the persistent configuration is now semantically
identical to how we merge (project persistent configuration) with (user
persistent configuration).
Closes https://github.com/astral-sh/uv/issues/3541.
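For illustration (the URLs here are made up): given a persistent configuration like
```toml
[tool.uv.pip]
extra-index-url = ["https://internal.example.com/simple"]
```
running `uv pip compile requirements.in --extra-index-url https://staging.example.com/simple` now consults both indexes, where previously the CLI flag would have shadowed the file's value entirely.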
## Summary
The main motivation here is that the `.filename()` method that we
implement on `Url` will do URL decoding for the last segment, which we
were missing here.
The errors are a bit awkward, because in
`crates/uv-resolver/src/lock.rs`, we wrap in `failed to extract filename
from URL: {url}`, so in theory we want the underlying errors to _omit_
the URL? But sometimes they use `#[error(transparent)]`?
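For illustration, an approximation of what `.filename()` provides over a raw last segment (using the `url` and `percent-encoding` crates):
```rust
use url::Url;

/// The final path segment, percent-decoded: e.g. `foo%2Dbar.whl`
/// becomes `foo-bar.whl`.
fn filename(url: &Url) -> Option<String> {
    let segment = url.path_segments()?.last()?;
    let decoded = percent_encoding::percent_decode_str(segment)
        .decode_utf8_lossy()
        .into_owned();
    Some(decoded)
}
```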
## Summary
Uncertain about this, but we don't actually need the full
`SourceDistFilename`, only the name and version -- and we often have
that information already (as in the lockfile routines). So by flattening
the fields onto `RegistrySourceDist`, we can avoid re-parsing for
information we already have.
Prompted by
https://github.com/astral-sh/uv/pull/3657#discussion_r1606041239
There's still some level of discomfort here, as the `tool` module needs
to import the `project` module to manage an environment. We should
probably move most of the basic operations in the `project` module root
into some sort of shared module for behind-the-scenes operations?
Regardless, this change should simplify that future move.
## Summary
Restore API-compatibility with pre-1.1.0 versions of the `zip` crate,
and pin the dependency to the 0.6 series, due to concerns discussed in
https://github.com/astral-sh/uv/issues/3642.
## Test Plan
```
cargo run -p uv-dev -- fetch-python
cargo test
```
## Summary
This PR adds lossless deserialization for `GitSourceDist` distributions
in the lockfile. Specifically, we now properly preserve the requested
revision, the subdirectory, and the precise Git commit SHA.
## Test Plan
`cargo test`
## Summary
This PR introduces parallelism to the resolver. Specifically, we can
perform PubGrub resolution on a separate thread, while keeping all I/O
on the tokio thread. We already have the infrastructure set up for this
with the channel and `OnceMap`, which makes this change relatively
simple. The big change needed to make this possible is removing the
lifetimes on some of the types that need to be shared between the
resolver and pubgrub thread.
A related PR, https://github.com/astral-sh/uv/pull/1163, found that
adding `yield_now` calls improved throughput. With optimal scheduling we
might be able to get away with everything on the same thread here.
However, in the ideal pipeline with perfect prefetching, the resolution
and prefetching can run completely in parallel without depending on one
another. While this would be very difficult to achieve, even with our
current prefetching pattern we see a consistent performance improvement
from parallelism.
This does also require reverting a few of the changes from
https://github.com/astral-sh/uv/pull/3413, but not all of them. The
sharing is isolated to the resolver task.
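For illustration, a sketch of the split (names are illustrative, not uv's real API): resolution runs on its own OS thread, while metadata fetches stay on the async runtime, connected by a channel.
```rust
use tokio::sync::mpsc;

async fn resolve() {
    let (request_tx, mut request_rx) = mpsc::unbounded_channel::<String>();

    // CPU-bound PubGrub solving on a dedicated thread; it sends the names
    // of packages whose metadata it needs.
    let solver = std::thread::spawn(move || {
        let _ = request_tx.send("anyio".to_string());
        // ... run PubGrub, blocking on the shared OnceMap for responses ...
    });

    // I/O stays on the tokio thread.
    while let Some(package) = request_rx.recv().await {
        // ... fetch metadata for `package` and publish it via the OnceMap ...
        let _ = package;
    }

    solver.join().unwrap();
}
```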
## Test Plan
On smaller tasks performance is mixed with ~2% improvements/regressions
on both sides. However, on medium-large resolution tasks we see the
benefits of parallelism, with improvements anywhere from 10-50%.
```
./scripts/requirements/jupyter.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
Time (mean ± σ): 29.2 ms ± 1.8 ms [User: 20.3 ms, System: 29.8 ms]
Range (min … max): 26.4 ms … 36.0 ms 91 runs
Benchmark 2: ./target/profiling/parallel (resolve-warm)
Time (mean ± σ): 25.5 ms ± 1.0 ms [User: 19.5 ms, System: 25.5 ms]
Range (min … max): 23.6 ms … 27.8 ms 99 runs
Summary
./target/profiling/parallel (resolve-warm) ran
1.15 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/boto3.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
Time (mean ± σ): 487.1 ms ± 6.2 ms [User: 464.6 ms, System: 61.6 ms]
Range (min … max): 480.0 ms … 497.3 ms 10 runs
Benchmark 2: ./target/profiling/parallel (resolve-warm)
Time (mean ± σ): 430.8 ms ± 9.3 ms [User: 529.0 ms, System: 77.2 ms]
Range (min … max): 417.1 ms … 442.5 ms 10 runs
Summary
./target/profiling/parallel (resolve-warm) ran
1.13 ± 0.03 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/airflow.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
Time (mean ± σ): 478.1 ms ± 18.8 ms [User: 482.6 ms, System: 205.0 ms]
Range (min … max): 454.7 ms … 508.9 ms 10 runs
Benchmark 2: ./target/profiling/parallel (resolve-warm)
Time (mean ± σ): 308.7 ms ± 11.7 ms [User: 428.5 ms, System: 209.5 ms]
Range (min … max): 287.8 ms … 323.1 ms 10 runs
Summary
./target/profiling/parallel (resolve-warm) ran
1.55 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
## Summary
Fixes a small discrepancy between the pip compile outputs for
`annotation-style=split` and `annotation-style=line` commands.
### Problem
Consider the following `pyproject.toml` file.
```sh
$ cat pyproject.toml
[project]
name = "uv_test"
dynamic = ["version"]
dependencies = ["click"]
```
Running `uv pip compile` with `annotation-style=split` on uv 0.1.44
yields the following:
```sh
❯ uv pip compile pyproject.toml --annotation-style=split
Resolved 1 package in 2ms
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml --annotation-style=split
click==8.1.7
# via uv-test (pyproject.toml)
```
However, running `uv pip compile` with `annotation-style=line` doesn't
include source info for root-level dependencies.
```sh
❯ uv pip compile pyproject.toml --annotation-style=line
Resolved 1 package in 1ms
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml --annotation-style=line
click==8.1.7
```
With this PR:
```sh
❯ ../target/debug/uv pip compile --annotation-style=line pyproject.toml
Resolved 1 package in 6ms
# This file was autogenerated by uv via the following command:
# uv pip compile --annotation-style=line pyproject.toml
click==8.1.7 # via uv-test (pyproject.toml)
```
This also now matches `pip-tools` output:
```sh
❯ pip-compile --annotation-style=line pyproject.toml
#
# This file is autogenerated by pip-compile with Python 3.12
# by the following command:
#
# pip-compile --annotation-style=line pyproject.toml
#
click==8.1.7 # via uv_test (pyproject.toml)
```
## Test Plan
`cargo test`
## Summary
If a user includes markers after an editable, we now ignore them (rather
than including them in the parsed URL). This matches pip's behavior. In
the future, we could further improve by respecting them, but that
_would_ be a deviation from pip.
For example, given:
```
-e ./scripts/packages/black_editable ; python_version >= "3.9" and python_ver
```
We now split at the first whitespace (just before the `;`), parse
everything before, and throw out everything after.
This logic also extends to extras. So given:
```
-e ./scripts/packages/black_editable[dev, colorama]
```
We'll now parse this as the URL
`./scripts/packages/black_editable[dev,`, and throw out ` colorama]`.
Instead, you need to do:
```
-e ./scripts/packages/black_editable[dev,colorama]
```
(I.e., remove the space.)
This _also_ matches pip's behavior. I could "fix" this but I'm unsure if
I should -- it means requirements files will be parseable by uv that
won't work with pip. Open to input. My gut reaction is that we _should_
properly support `-e ./scripts/packages/black_editable[dev, colorama]`
even if pip would reject it, but `requirements.txt` is
implementation-defined so it'd be a "deviation".
Closes https://github.com/astral-sh/uv/issues/3604.
Following from #3595, we'd like wheels to make their way into the lock
file even if the current environment selects an sdist. With #3595, this
didn't happen:
```console
$ cargo run -p uv -- pip compile -p3.10 <(echo psycopg2) --unstable-uv-lock-file
$ cat uv.lock
version = 1
[[distribution]]
name = "psycopg2"
version = "2.9.9"
source = "registry+https://pypi.org/simple"
[distribution.sdist]
url =
"dc6acaf46d76fce95daac5e0f0301b/psycopg2-2.9.9.tar.gz"
hash =
"sha256:d1454bde93fb1e224166811694d600e746430c006fbb031ea06ecc2ea41bf156"
The above example uses `psycopg2`, which has an sdist and wheels only on
Windows. Since I ran the above on Linux, an sdist was selected. But no
wheels appeared in the lock file.
With this PR, wheels are now correctly plumbed through:
```console
$ cargo run -p uv -- pip compile -p3.10 <(echo psycopg2) --unstable-uv-lock-file
$ cat uv.lock
version = 1
[[distribution]]
name = "psycopg2"
version = "2.9.9"
source = "registry+https://pypi.org/simple"
[distribution.sdist]
url =
"dc6acaf46d76fce95daac5e0f0301b/psycopg2-2.9.9.tar.gz"
hash =
"sha256:d1454bde93fb1e224166811694d600e746430c006fbb031ea06ecc2ea41bf156"
[[distribution.wheel]]
url =
"2767d96391f5cde90b82cb3e8c2a12/psycopg2-2.9.9-cp310-cp310-win32.whl"
hash =
"sha256:38a8dcc6856f569068b47de286b472b7c473ac7977243593a288ebce0dc89516"
[[distribution.wheel]]
url =
"6572dec6831f85491a5e4dda606a98/psycopg2-2.9.9-cp310-cp310-win_amd64.whl"
hash =
"sha256:426f9f29bde126913a20a96ff8ce7d73fd8a216cfb323b1f04da402d452853c3"
[[distribution.wheel]]
url =
"1fc5b9d33c858a602868a592cdc1b0/psycopg2-2.9.9-cp311-cp311-win32.whl"
hash =
"sha256:ade01303ccf7ae12c356a5e10911c9e1c51136003a9a1d92f7aa9d010fb98372"
[[distribution.wheel]]
url =
"5133dd3183e671b278ce248810b7f7/psycopg2-2.9.9-cp311-cp311-win_amd64.whl"
hash =
"sha256:121081ea2e76729acfb0673ff33755e8703d45e926e416cb59bae3a86c6a4981"
[[distribution.wheel]]
url =
"f74ffe6b6fe119ccb8a6546c3fb893/psycopg2-2.9.9-cp312-cp312-win32.whl"
hash =
"sha256:d735786acc7dd25815e89cc4ad529a43af779db2e25aa7c626de864127e5a024"
[[distribution.wheel]]
url =
"c4a26e1918ab7ee854fb5247f16c40/psycopg2-2.9.9-cp312-cp312-win_amd64.whl"
hash =
"sha256:a7653d00b732afb6fc597e29c50ad28087dcb4fbfb28e86092277a559ae4e693"
[[distribution.wheel]]
url =
"ffeb9ac356ce0d6c4f2f34e396dbc0/psycopg2-2.9.9-cp37-cp37m-win32.whl"
hash =
"sha256:5e0d98cade4f0e0304d7d6f25bbfbc5bd186e07b38eac65379309c4ca3193efa"
[[distribution.wheel]]
url =
"0a39176d36fd7105774e57996f63cd/psycopg2-2.9.9-cp37-cp37m-win_amd64.whl"
hash =
"sha256:7e2dacf8b009a1c1e843b5213a87f7c544b2b042476ed7755be813eaf4e8347a"
[[distribution.wheel]]
url =
"86b90d30c4420cc3c0f6da2b8f3a9a/psycopg2-2.9.9-cp38-cp38-win32.whl"
hash =
"sha256:ff432630e510709564c01dafdbe996cb552e0b9f3f065eb89bdce5bd31fabf4c"
[[distribution.wheel]]
url =
"c439b378ef79997a935f10374f3c0d/psycopg2-2.9.9-cp38-cp38-win_amd64.whl"
hash =
"sha256:bac58c024c9922c23550af2a581998624d6e02350f4ae9c5f0bc642c633a2d5e"
[[distribution.wheel]]
url =
"5080c0e61ad5f08b9503e508aac116/psycopg2-2.9.9-cp39-cp39-win32.whl"
hash =
"sha256:c92811b2d4c9b6ea0285942b2e7cac98a59e166d59c588fe5cfe1eda58e72d59"
[[distribution.wheel]]
url =
"ec73fe66d4d65f5bbe54efb191d9e6/psycopg2-2.9.9-cp39-cp39-win_amd64.whl"
hash =
"sha256:de80739447af31525feddeb8effd640782cf5998e1a4e9192ebdf829717e3913"
Ref #3351
Our current flow of data from "simple registry package" to "final
resolved distribution" goes through a number of types:
* `SimpleMetadata` is the API response from a registry that includes all
published versions for a package. Each version has an assortment of
metadata
associated with it.
* `VersionFiles` is the aforementioned metadata. It is split in two: a
group of files for source distributions and a group of files for wheels.
* `PrioritizedDist` collects a subset of the files from `VersionFiles`
to form a selection of the "best" sdist and the "best" wheel for the
current environment.
* `CompatibleDist` is created from a borrowed `PrioritizedDist` that,
perhaps among other things, encapsulates the decision of whether to pick
an sdist or a wheel. (This decision depends both on compatibility and
the action being performed. For example, when doing installation, a
`CompatibleDist` will sometimes select an sdist over a wheel.)
* `ResolvedDistRef` is like a `ResolvedDist`, but borrows a `Dist`.
* `ResolvedDist` is the almost-final-form of a distribution in a
resolution and is created from a `ResolvedDistRef`.
* `AnnotatedResolvedDist` is a new data type that is the actual final
form of a distribution that a universal lock file cares about. It
bundles a `ResolvedDist` with some metadata needed to generate a lock
file.
One of the requirements of a universal lock file is that we include all
wheels (and maybe all source distributions? but at least one if it's
present) associated with a distribution. But the above flow of data (in
the step from `VersionFiles` to `PrioritizedDist`) drops all wheels
except for the best one.
To remedy this, in this PR, we rejigger `PrioritizedDist`,
`CompatibleDist` and `ResolvedDistRef` so that all wheel data is
preserved. And when a `ResolvedDistRef` is finally turned into a
`ResolvedDist`, we copy all of the wheel data. And finally, we adjust
the `Lock` constructor to read this new data and include it in the lock
file. To make this work, we also modify `RegistryBuiltDist` so that it
can contain one or more wheels instead of just one.
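To make the shape of that change concrete, here's a minimal sketch with
hypothetical fields and a stubbed-out ranking (uv's real types carry much
more information):

```rust
struct Wheel { url: String, hash: String }
struct SourceDist { url: String, hash: String }

/// All published files for a single version, as returned by the registry.
struct VersionFiles {
    wheels: Vec<Wheel>,
    source_dists: Vec<SourceDist>,
}

/// Previously this kept only the single "best" wheel; now it retains the
/// full list so the lock file can include every wheel.
struct PrioritizedDist {
    best_wheel: Option<usize>, // index of the best wheel for this environment
    wheels: Vec<Wheel>,
    sdist: Option<SourceDist>,
}

impl PrioritizedDist {
    fn from_files(files: VersionFiles) -> Self {
        // A stand-in for the real compatibility ranking: pick the first
        // wheel as "best", but keep them all.
        let best_wheel = if files.wheels.is_empty() { None } else { Some(0) };
        Self {
            best_wheel,
            wheels: files.wheels,
            sdist: files.source_dists.into_iter().next(),
        }
    }
}
```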
One shortcoming here (called out in the code as a FIXME) is that if a
source distribution is selected as the "best" thing to use (perhaps
there are no compatible wheels), then the wheels won't end up in the
lock file. I plan to fix this in a follow-up PR.
We also aren't totally consistent on source distribution naming.
Sometimes we use `sdist`. Sometimes `source`. Sometimes `source_dist`.
I think it'd be nice to just use `sdist` everywhere, but I do prefer
the type names to be `SourceDist`. And sometimes you want function
names to match the type names (i.e., `from_source_dist`), which in turn
leads to an appearance of inconsistency. I'm open to ideas.
Closes #3351
## Summary
In `ResolutionGraph::from_state`, we have mechanisms to grab the hashes
and metadata for all distributions -- but we then throw that information
away. This PR preserves it on a new `AnnotatedDist` (yikes, open to
suggestions) that wraps `ResolvedDist` and includes (1) the hashes
(computed or from the registry) and (2) the `Metadata23`, which lets us
extract the version.
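As a rough sketch (field types here are assumptions, not the real
definitions):

```rust
struct ResolvedDist; // stand-in for the resolved distribution itself
struct Metadata23 {
    version: String, // the field we need; the real type has many more
}

struct AnnotatedDist {
    dist: ResolvedDist,
    hashes: Vec<String>,  // computed locally or reported by the registry
    metadata: Metadata23, // lets us extract the version for the lock file
}
```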
Closes https://github.com/astral-sh/uv/issues/3356.
Closes https://github.com/astral-sh/uv/issues/3357.
## Summary
Splits this into two loops that each handle independent cases, to make
the code a little easier to reason about. No behavioral or logic changes
-- just splitting the `match` across two loops.
## Summary
Closes
https://github.com/astral-sh/uv/issues/3578#issuecomment-2110675382.
## Test Plan
Verified that in the OpenSUSE test, we create both, and they're
symlinks:
```text
INFO: Creating virtual environment with `venv`...
INFO: Installing into `venv` virtual environment...
DEBUG Found a virtualenv named .venv at: /tmp/tmp4nape29h/.venv
DEBUG Cached interpreter info for Python 3.10.14, skipping probing: .venv/bin/python
DEBUG Using Python 3.10.14 environment at .venv/bin/python
DEBUG Trying to lock if free: .venv/.lock
purelib: "/tmp/tmp4nape29h/.venv/lib/python3.10/site-packages"
platlib: "/tmp/tmp4nape29h/.venv/lib64/python3.10/site-packages"
is_same_file(purelib, platlib): Ok(true)
```
## Summary
Increment the removed file counts in filters
in install_registry_source_dist_cached test, to make it work again on
Gentoo. The tested counts were updated
in 9a92a3ad37, but the filters were not.
That said, the respective count increased in Gentoo as well, so adjust
both input and output strings. I'm updating Windows as guesswork,
though I suspect that filter may not be necessary anymore, given that CI
was passing.
## Test Plan
`cargo test` on Gentoo :-).
## Summary
Fixes a typo in a comment
## Test Plan
I assume there's no need to test comment changes, other than having a
human check they make sense. That's what this PR is for 😉
## Summary
Uses the editable handling from `pip sync`, and improves the
abstractions such that we can pass those resolved editables into the
resolver.
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
It's confusing that we use `constraints` here because constraints mean
something else for us (e.g., `--constraint constraints.txt`). These are
really the dependencies of a given `PubGrubPackage` -- the type is even
called `PubGrubDependencies`.
Windows does not support cloning whole directories so clone each file
instead.
Closes #3547
## Test Plan
Ran `uv pip install setuptools --link-mode=clone` manually
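For illustration, here's a minimal sketch of the per-file fallback, with
a plain `fs::copy` standing in for the actual copy-on-write clone
primitive:

```rust
use std::{fs, io, path::Path};

// Stand-in for the platform copy-on-write clone; a plain copy keeps the
// sketch runnable.
fn clone_file(src: &Path, dst: &Path) -> io::Result<()> {
    fs::copy(src, dst).map(|_| ())
}

// Windows has no directory-level clone, so recreate the tree and clone
// each file individually.
fn clone_dir(src: &Path, dst: &Path) -> io::Result<()> {
    fs::create_dir_all(dst)?;
    for entry in fs::read_dir(src)? {
        let entry = entry?;
        let target = dst.join(entry.file_name());
        if entry.file_type()?.is_dir() {
            clone_dir(&entry.path(), &target)?;
        } else {
            clone_file(&entry.path(), &target)?;
        }
    }
    Ok(())
}
```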
## Summary
I don't love this, but it turns out that setuptools is not robust to
parallel builds: https://github.com/pypa/setuptools/issues/3119. As a
result, if you run uv from multiple processes, and they each attempt to
build the same source distribution, you can hit failures.
This PR applies an advisory lock to the source distribution directory.
We apply it unconditionally, even if we ultimately find something in the
cache and _don't_ do a build, which helps ensure that we only build the
distribution once (and wait for that build to complete) rather than
kicking off builds from each thread.
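As a sketch of the locking shape, using the `fs2` crate as an assumed
stand-in for uv's own locking helper:

```rust
use std::fs::File;
use std::io;
use std::path::Path;

use fs2::FileExt;

fn with_build_lock<T>(cache_dir: &Path, build: impl FnOnce() -> T) -> io::Result<T> {
    // Take the lock unconditionally, before checking the cache, so that
    // concurrent processes serialize on the same source distribution.
    let lock = File::create(cache_dir.join(".lock"))?;
    lock.lock_exclusive()?; // blocks until any in-flight build finishes
    let result = build(); // check the cache; build only on a miss
    lock.unlock()?;
    Ok(result)
}
```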
Closes https://github.com/astral-sh/uv/issues/3512.
## Test Plan
Ran:
```sh
#!/bin/bash
make_venv(){
target/debug/uv venv $1
source $1/bin/activate
target/debug/uv pip install opentracing --no-deps --verbose
}
for i in {1..8}
do
make_venv ./$1/$i &
done
```
## Summary
I think this is overall good change because it explicitly encodes (in
the type system) something that was previously implicit. I'm not a huge
fan of the names here, open to input.
It covers some of https://github.com/astral-sh/uv/issues/3506 but I
don't think it _closes_ it.
## Summary
Just fix typos.
While `alpha-numeric` is not really a misspelling:
- it is missing from mainstream curated dictionaries, all of them
suggest `alphanumeric`;
- it is used more than 10× less than `alphanumeric` according to
the Google [Ngram
Viewer](https://books.google.com/ngrams/graph?content=alpha-numeric%2Calphanumeric&year_start=1900&year_end=2019&corpus=en-2019);
- it is [missing from
SCOWL](http://app.aspell.net/lookup?dict=en_US-large;words=alpha-numeric).
## Test Plan
CI jobs.
## Summary
`runpy.run_path` was added in Python 2.7 and 3.2, and every Python that
is not EOL supports it.
It is arguably nicer to read and the path is only given once in the
command.
At least right now, runpy - unlike exec with S102 - is not flagged by
any bandit-derived ruff check.
(I guess because it loads from a file instead of a simple string...)
Because of the import, it is also not a one-liner anymore. (But that
could be fixed with an `__import__('runpy').run_path(...)`.)
## Test Plan
```python
import runpy
runpy.run_path('/path/to/venv/bin/activate_this.py')
```
## Summary
If you run the script included in the linked issue, then `uv cache
clean`, we hit permissions errors on certain directories created by
`setuptools`. The permissions on those directories look like:
```
❯ sudo ls -l /Users/crmarsh/Library/Caches/uv/built-wheels-v3/pypi/opentracing/2.4.0/M-fYsaHAaQQvedmPMUl9D/opentracing-2.4.0.tar.gz/build/bdist.macosx-14.2-arm64/wheel/opentracing
Password:
total 0
drwxr-xr-x 3 crmarsh staff 96 May 11 12:51 harness
```
This PR adds logic to make those directories readable by the current
user.
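A minimal, Unix-only sketch of that logic (the traversal details are
assumptions):

```rust
use std::fs;
use std::io;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

fn make_readable(dir: &Path) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            let mut perms = fs::metadata(&path)?.permissions();
            // Ensure the owner can read, write, and traverse the directory.
            perms.set_mode(perms.mode() | 0o700);
            fs::set_permissions(&path, perms)?;
            make_readable(&path)?;
        }
    }
    Ok(())
}
```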
Closes https://github.com/astral-sh/uv/issues/3515.
## Summary
pip passes these as positional arguments, and at least one build backend
relies on that. My personal opinion is that it's a spec violation, and
the build backend should be updated, but I'd prefer to favor
compatibility over strictness here.
Closes https://github.com/astral-sh/uv/issues/3509.
## Test Plan
`cargo run pip install cryptacular==1.6.2`
## Summary
This PR consolidates the concurrency limits used throughout `uv` and
exposes two limits, `UV_CONCURRENT_DOWNLOADS` and
`UV_CONCURRENT_BUILDS`, as environment variables.
Currently, `uv` has a number of concurrent streams that it buffers using
relatively arbitrary limits for backpressure. However, many of these
limits are conflated. We run a relatively small number of tasks overall
and should start most things as soon as possible. What we really want to
limit are three separate operations:
- File I/O. This is managed by tokio's blocking pool and we should not
really have to worry about it.
- Network I/O.
- Python build processes.
Because the current limits span a broad range of tasks, it's possible
that a limit meant for network I/O is occupied by tasks performing
builds, reading from the file system, or even waiting on a `OnceMap`. We
also don't limit build processes that end up being required to perform a
download. While this may not pose a performance problem because our
limits are relatively high, it does mean that the limits do not do what
we want, making it tricky to expose them to users
(https://github.com/astral-sh/uv/issues/1205,
https://github.com/astral-sh/uv/issues/3311).
After this change, the limits on network I/O and build processes are
centralized and managed by semaphores. All other tasks are unbuffered
(note that these tasks are still bounded, so backpressure should not be
a problem).
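As a minimal sketch of the centralized limits (the defaults below are
placeholders, not uv's actual values):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

struct Concurrency {
    downloads: Arc<Semaphore>,
    builds: Arc<Semaphore>,
}

impl Concurrency {
    fn from_env() -> Self {
        let var = |name: &str, default: usize| {
            std::env::var(name).ok().and_then(|v| v.parse().ok()).unwrap_or(default)
        };
        Concurrency {
            downloads: Arc::new(Semaphore::new(var("UV_CONCURRENT_DOWNLOADS", 50))),
            builds: Arc::new(Semaphore::new(var("UV_CONCURRENT_BUILDS", 4))),
        }
    }
}

async fn download(limits: &Concurrency) {
    // Hold a download permit for the duration of the network I/O.
    let _permit = limits.downloads.acquire().await.unwrap();
    // ... perform the request ...
}
```

Because each operation class has its own semaphore, a build can never
occupy a download slot, which is exactly the conflation this removes.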
This only makes hashes optional for wheels/sdists that come from
registries or direct URLs. For wheels/sdists that come from other
sources, a hash should not be present.
For path dependencies, a hash should not be present because the state of
the path dependency is not intended to be tracked in the lock file. This
is consistent with how other tools deal with path dependencies, and if
it were otherwise, the hash would, I believe, need to be updated for
every change to the path dependency.
For git dependencies (source dists only), a hash should not be present
because the lock will contain the specific commit revision hash. This is
functionally equivalent to a hash, and so a hash is redundant.
As part of this change, we validate the presence or absence of a hash
based on the dependency source. We also add our first regression tests.
## Summary
Likely necessary to resolve https://github.com/astral-sh/uv/issues/2500.
Made this a separate PR in an attempt to keep the changes as small as
possible; let me know if it's preferred to keep them as a single PR.
## Test Plan
- edited the test in `interpreter.rs`
- tested manually via `println!`
```
$ cargo run --quiet pip show test
["/Users/chankang/Library/Caches/uv/.tmpKzNEPN", "/Users/chankang/.pyenv/versions/3.12.2/lib/python312.zip", "/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12", "/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12/lib-dynload", "/Users/chankang/repos/uv/.venv/lib/python3.12/site-packages"]
warning: Package(s) not found for: test
chankang@chans-Air ~/repos/uv - (syspath)
$ git diff
diff --git a/crates/uv-interpreter/src/environment.rs b/crates/uv-interpreter/src/environment.rs
index 33b785ce..8ebf0864 100644
--- a/crates/uv-interpreter/src/environment.rs
+++ b/crates/uv-interpreter/src/environment.rs
@@ -106,6 +106,7 @@ impl PythonEnvironment {
/// Some distributions also create symbolic links from `purelib` to `platlib`; in such cases, we
/// still deduplicate the entries, returning a single path.
pub fn site_packages(&self) -> impl Iterator<Item = &Path> {
+ println!("{:?}", self.interpreter.sys_path());
if let Some(target) = self.interpreter.target() {
Either::Left(std::iter::once(target.root()))
} else {
chankang@chans-Air ~/repos/uv - (syspath)
$ python -c "import sys; print(sys.path)"
['', '/Users/chankang/.pyenv/versions/3.12.2/lib/python312.zip', '/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12', '/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12/lib-dynload', '/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12/site-packages']
chankang@chans-Air ~/repos/uv - (syspath)
```
This still keeps the resolver state on the stack, but it organizes it
into a more structured representation. This is a precursor to
implementing resolver forking, where we will ultimately put this state
on the heap. The idea is that this will let us maintain multiple
independent resolver states that will all produce their own resolution
(and potentially other forked states).
Closes #3354
## Summary
I've started to refer to this as the "project" API in various places, it
seems less duplicative than the "workspace" API which is a little
different.
Now that the type is fully encapsulated, we can pretty easily
migrate to using an Arc inside of a MarkerEnvironment.
It looks like the pyo3 macros can't deal with an Arc, so we
write out the getter methods by hand.
We now use the getters and setters everywhere.
There were some places where we wanted to build a `MarkerEnvironment`
out of whole cloth, usually in tests. To facilitate those use cases, we
add a `MarkerEnvironmentBuilder` that provides a convenient constructor.
It's basically like a `MarkerEnvironment::new`, but with named
parameters. That's useful here because there are so many fields (and
many of them have the same type).
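A minimal sketch of both pieces, with the field set heavily abbreviated:

```rust
use std::sync::Arc;

struct MarkerEnvironmentInner {
    os_name: String,
    platform_machine: String,
    python_full_version: String,
    // ... the remaining PEP 508 marker fields ...
}

#[derive(Clone)]
struct MarkerEnvironment {
    inner: Arc<MarkerEnvironmentInner>, // cloning is now a refcount bump
}

impl MarkerEnvironment {
    // Hand-written getter, since the pyo3 macros can't see through the Arc.
    fn os_name(&self) -> &str {
        &self.inner.os_name
    }
}

// Builder with named "parameters": one field per marker, so constructing
// an environment in tests stays readable despite the many same-typed fields.
struct MarkerEnvironmentBuilder<'a> {
    os_name: &'a str,
    platform_machine: &'a str,
    python_full_version: &'a str,
}
```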
This test was failing on master. I guess we don't test
this crate with the pyo3 feature enabled? I think this
regression was due to a recent change in the error reporting
of the pep440 crate.
## Summary
Ensures that we track the origins for requirements regardless of whether
they come from `pyproject.toml` or `setup.py` or `setup.cfg`.
Closes #3480.
This commit touches a lot of code, but the conceptual change here is
pretty simple: make it so we can run the resolver without providing a
`MarkerEnvironment`. This also indicates that the resolver should run in
universal mode. That is, the effect of a missing marker environment is
that all marker expressions that reference the marker environment are
evaluated to `true`. That is, they are ignored. (The only markers we
evaluate in that context are extras, which are the only markers that
aren't dependent on the environment.)
One interesting change here is that a `Resolver` no longer needs an
`Interpreter`. Previously, it had only been using it to construct a
`PythonRequirement`, by filling in the installed version from the
`Interpreter` state. But we now construct a `PythonRequirement`
explicitly since its `target` Python version should no longer be tied to
the `MarkerEnvironment`. (Currently, the marker environment is mutated
such that its `python_full_version` is derived from multiple sources,
including the CLI, which I found a touch confusing.)
The change in behavior can now be observed through the
`--unstable-uv-lock-file` flag. First, without it:
```
$ cat requirements.in
anyio>=4.3.0 ; sys_platform == "linux"
anyio<4 ; sys_platform == "darwin"
$ cargo run -qp uv -- pip compile -p3.10 requirements.in
anyio==4.3.0
exceptiongroup==1.2.1
# via anyio
idna==3.7
# via anyio
sniffio==1.3.1
# via anyio
typing-extensions==4.11.0
# via anyio
```
And now with it:
```
$ cargo run -qp uv -- pip compile -p3.10 requirements.in --unstable-uv-lock-file
  × No solution found when resolving dependencies:
  ╰─▶ Because you require anyio>=4.3.0 and anyio<4, we can conclude that the requirements are unsatisfiable.
```
This is expected at this point because the marker expressions are being
explicitly ignored, *and* there is no forking done yet to account for
the conflict.
We provide a new API on a `Requirement` that specifically
ignores the marker environment and only evaluates a requirement's
marker expression with respect to extras. Any marker expressions
that reference the marker environment automatically evaluate to
true.
Instead of duplicating the evaluation code, we just make a marker
environment optional on the lower level APIs. In theory, we could
just write a separate evaluation routine that ignores everything
except extras, but the evaluator has a fair bit of other stuff in it
(such as emitting warnings) that would be good to keep DRY IMO.
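A minimal sketch of the optional-environment evaluation, with types
simplified far beyond the real marker AST:

```rust
use std::collections::HashMap;

// A drastically simplified marker "AST": either an `extra == "..."` check
// or an environment-dependent comparison like `sys_platform == "linux"`.
enum MarkerExpr {
    Extra(String),
    Env { key: String, value: String },
}

fn evaluate(
    expr: &MarkerExpr,
    env: Option<&HashMap<String, String>>,
    extras: &[String],
) -> bool {
    match expr {
        // Extras don't depend on the environment; always evaluate them.
        MarkerExpr::Extra(name) => extras.iter().any(|e| e == name),
        MarkerExpr::Env { key, value } => match env {
            Some(env) => env.get(key).map_or(false, |v| v == value),
            // Universal mode: no marker environment, so any expression
            // that references it evaluates to true (i.e., is ignored).
            None => true,
        },
    }
}
```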
This doc test seems to fail due to the recent change making
`Requirement` generic on its URL type. While the type parameter
was given a default of `VerbatimUrl`, this default doesn't always
apply. For example, the `FromStr` impl on `Requirement` is still
generic on any URL type, and so callers must indicate the type
of the URL to return. (An alternative would be to define the
`FromStr` impl for just the default URL type.)
## Summary
If a requirement is omitted due to a marker expression, we shouldn't
include it as the "source" of a package in the output.
For example, if your constraints include `pathspec ; python_version <
'3.12'`, and you're on Python 3.12, we should _not_ include the
constraint file as a "source" in the output annotations.
## Summary
Unfortunately, the `-I` flag was added in Python 3.4. So if we query a
Python version prior to 3.4 (e.g., Python 2.7), we can't run our script
at all, and lose the ability to match against our structured error.
This PR adds an additional check against the stderr output for these
cases.
Closes https://github.com/astral-sh/uv/issues/3474.
## Test Plan
Installed Python 2.7, and verified that it was skipped (and that we
instead found my `python3`).
## Summary
Fixes https://github.com/astral-sh/uv/issues/1343. This is kinda a first
draft at the moment, but does at least mostly work locally (barring some
bits of the test suite that seem to not work for me in general).
## Test Plan
Mostly running the existing tests and checking the revised output is
sane
## Outstanding issues
Most of these come down to "AFAIK, the existing tools don't support
these patterns, but `uv` does" and so I'm not sure there's an existing
good answer here! Most of the answers so far are "whatever was easiest
to build"
- [x] ~~Is "-r pyproject.toml" correct? Should it show something else or
get skipped entirely~~ No it wasn't. Fixed in
3044fa8b86
- [ ] If the requirements file is stdin, that just gets skipped. Should
it be recorded?
- [ ] Overrides get shown as "--override<override.txt>". Correct?
- [x] ~~Some of the tests (e.g.
`dependency_excludes_non_contiguous_range_of_compatible_versions`) make
assumptions about the order of package versions being outputted, which
this PR breaks. I'm not sure if the text is fairly arbitrary and can be
replaced or whether the behaviour needs fixing?~~ - fixed by removing
the custom pubgrub PartialEq/Hash
- [ ] Are all the `TrackedFromStr` et al changes needed, or is there an
easier way? I don't think so, I think it's necessary to track these sort
of things fairly comprehensively to make this feature work, and this
sort of invasive change feels necessary, but happy to be proved wrong
there :)
- [x] ~~If you have a requirement coming in from two or more different
requirements files only one turns up. I've got a closed-source example
for this (can go into more detail if needed), mostly consisting of a
complicated set of common deps creating a larger set. It's a rarer case,
but worth considering.~~ 042432b200
- [ ] Doesn't add annotations for `setup.py` yet
- This is pretty hard, as the correct location to insert the path is
`crates/pypi-types/src/metadata.rs`'s `parse_pkg_info`, which as it's
based off a source distribution has entirely thrown away such matters as
"where did this package requirement get built from". Could add "`built
package name`" as a dep, but that's a little odd.
## Summary
This PR takes a different approach to `--with` for `uv run`. Now,
instead of merging the requirements and re-resolving, we have two
phases: (1) sync the workspace requirements to the workspace
environment; then (2) sync the ephemeral `--with` requirements to an
ephemeral environment. The two environments are then layered by setting
the `PATH` and `PYTHONPATH` variables appropriately.
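As a minimal, Unix-flavored sketch of that layering (the paths and
Python version are assumptions):

```rust
use std::process::Command;

fn run_layered(
    workspace_venv: &str,
    ephemeral_venv: &str,
    program: &str,
) -> std::io::Result<std::process::ExitStatus> {
    let path = std::env::var("PATH").unwrap_or_default();
    Command::new(program)
        // The ephemeral environment comes first, so its scripts and
        // packages shadow the workspace environment's.
        .env("PATH", format!("{ephemeral_venv}/bin:{workspace_venv}/bin:{path}"))
        .env(
            "PYTHONPATH",
            format!(
                "{ephemeral_venv}/lib/python3.12/site-packages:\
                 {workspace_venv}/lib/python3.12/site-packages"
            ),
        )
        .status()
}
```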
I think this approach simplifies a few things:
1. Once we have a lockfile, the semantics are much clearer, and we can
actually reuse it for the workspace. If we had to add arbitrary
dependencies via `--with`, then it's not really clear how the lockfile
would/should behave.
2. Caching becomes simpler, because we can just cache the ephemeral
environment based on the requirements.
The current version of this PR loses a few behaviors though that I need
to restore:
- `--python` support -- but I'm not yet sure how this is supposed to
behave within projects? It's also left unclear in `uv sync` and `uv
lock`.
- The "reuse the workspace environment if it already satisfies the
ephemeral requirements" behavior.
Closes #3411.
## Summary
This is the universal environment variable used to determine the macOS
deployment target. We now respect it in `--python-platform` -- so we
default to 12.0, but users can override it as needed.
## Summary
We already _don't_ discover a `pyproject.toml` in `~/.config/uv` -- it
must be `uv.toml`. This PR makes the same change for `--config-file` --
it _has_ to be a `uv.toml`.
I think this is reasonable and more consistent, though I'm not sure. A
`pyproject.toml` "means" something -- it defines a project itself, in
which case we should be using project configuration. But creating a
`pyproject.toml` outside the project and passing it via `--config-file`
seems like an anti-pattern.
## Summary
This PR follows Cargo's strategy for merging configuration, albeit in a
more limited way (we don't support as many configuration locations).
Specifically, we merge the user configuration with the workspace
configuration if both are present. The workspace configuration has
priority, such that we take values from the workspace configuration and
ignore those in the user configuration if both are specified for a given
setting -- with the exception of arrays and maps, which are
concatenated.
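A minimal sketch of that merge rule, over an assumed two-field subset of
the settings:

```rust
#[derive(Default)]
struct Options {
    // Scalar setting: the workspace value wins if both are set.
    index_url: Option<String>,
    // Array setting: concatenated, workspace entries first.
    extra_index_url: Vec<String>,
}

fn merge(workspace: Options, user: Options) -> Options {
    Options {
        index_url: workspace.index_url.or(user.index_url),
        extra_index_url: workspace
            .extra_index_url
            .into_iter()
            .chain(user.extra_index_url)
            .collect(),
    }
}
```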
For now, if a user provides a configuration file with `--config-file`,
we _don't_ merge in the user settings.
See:
https://doc.rust-lang.org/cargo/reference/config.html#hierarchical-structure.
Closes #3420.
## Summary
This is annoying both locally and in CI. If anyone wants to fuss with the
filters to fix it, that's fine too, but IMO it's better to disable than
leave it enabled on macOS for now.
When using `tool.uv.sources`, we now warn when requirements are missing
a bound, i.e., at least a lower version constraint.
When using a library, the symbols you import were introduced in
different versions, creating an implicit lower bound. This warning makes
that bound explicit. This is crucial to prevent backtracking resolvers
from selecting an ancient version that is not compatible (or worse,
doesn't build), and it is a performance optimization on top.
This feature is gated to `tool.uv.sources` (as it should have been to
begin with for #3263/#3443) to not unnecessarily break legacy workflows.
It is also helpful specifically when using a `tool.uv.sources` section
that contains constraints that are not published to pypi, e.g. for
workspace dependencies. We can adjust those later to e.g. not constrain
workspace dependencies with `publish = false`, but I think it's the
right setting to start with.
## Summary
These aren't intended for production use; instead, I'm just trying to
frame out the overall data flows and code-sharing for these commands. We
now have `uv sync` (sync the environment to match the lockfile, without
refreshing or resolving) and `uv lock` (generate the lockfile). Both
_require_ a virtual environment to exist (something we should change).
`uv sync`, `uv run`, and `uv lock` all share code for the underlying
subroutines (resolution and installation), so the commands themselves
are relatively small (~100 lines) and mostly consist of reading
arguments and such.
`uv lock` and `uv sync` don't actually really work yet, because we have
no way to include the project itself in the lockfile (that's a TODO in
the lockfile implementation).
Closes https://github.com/astral-sh/uv/issues/3432.
We would previously show the parsed version when erroring due to
trailing content after a valid version, which can look different from
the input. E.g., when encountering `0.1-bulbasaur`, we would display:
```
after parsing '0.1b0', found 'ulbasaur', which is not part of a valid version
```
With storing the input string instead of the input version, we now show:
```
after parsing '0.1-b', found 'ulbasaur', which is not part of a valid version
```
It turns out setuptools often uses Metadata-Version 2.1 in their
PKG-INFO:
4e766834d7/setuptools/dist.py (L64)
`Metadata23` requires Metadata-Version of at least 2.2.
This means that uv doesn't actually recognise legacy editable
installations from the most common way you'd actually get legacy
editable installations (works great for most legacy editables I make at
work though!)
Anyway, over here we only need the version and don't care about anything
else. Rather than make a `Metadata21`, I just add a version field to
`Metadata10`. The one slightly tricky thing is that only
Metadata-Version 1.2 and greater guarantee that the [version number is
PEP 440 compatible](https://peps.python.org/pep-0345/#version), so I
store the version in `Metadata10` as a `String` and only parse to
`Version` at time of use.
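In sketch form (the exact shape is an assumption):

```rust
// Metadata-Version 1.0 metadata, with the version kept as a raw string:
// only Metadata-Version >= 1.2 guarantees PEP 440 compatibility, so we
// parse to a `Version` lazily, at time of use.
struct Metadata10 {
    name: String,
    version: Option<String>,
}
```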
Also, did you know that back in 2004, paramiko had a pokemon-based
versioning system?
Pubgrub got a new feature where all unavailability is a custom type, instead
of the reasonless `UnavailableDependencies` and our custom `String` type
previously (https://github.com/pubgrub-rs/pubgrub/pull/208). This PR
introduces a `UnavailableReason` that tracks either an entire version
being unusable, or a specific version. The error messages now also track
this difference properly.
The pubgrub commit is our main rebased onto the merged
https://github.com/pubgrub-rs/pubgrub/pull/208, I'll push
`konsti/main-rebase-generic-reason` to `main` after checking for rebase
problems.
## Summary
It's not clear to me that this should exist at all, but it's causing
errors in projects that don't use `tool.uv.sources`, so we should
definitely remove it for now.
## Summary
We already have a global `--isolated`, which means "ignore any on-disk
configuration". I think we should reuse this for the "ignore the
workspace" setting in `uv run`, rather than `--no-workspace`.
I've also merged the existing `--isolated` and `--no-workspace`
behaviors in `uv run` into a single flag. We may not need separate flags
for this, since the current intent seems to be "ignore the workspace
environment"? Though we could always re-add later.
Closes https://github.com/astral-sh/uv/issues/3421.
Resolves [#3419](https://github.com/astral-sh/uv/issues/3419)
## Summary
Add compat args to the `pip install` command, and hint the user to
create a venv for the `--user` arg.
## Test Plan
Tested it locally.
```bash
cargo run pip install --user flask
Compiling uv v0.1.39 (/home/ahmedilyas/uv/crates/uv)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 8.96s
Running `target/debug/uv pip install --user flask`
error: pip install's `--user` is unsupported (use a virtual environment instead).
```
This change allows switching out the url type for requirements. The
original idea was to allow different types for different requirement
origins, so that core metadata reads can ban non-PEP 508 requirements
while we only allow them for requirements.txt. This didn't work out
because we expect `&Requirement`s from all sources to match.
I also tried to split `pep508_rs` into a PEP 508 compliant crate and
into our extensions, but they are too tightly coupled.
I think this change is an improvement still as it reduces the hardcoded
dependence on `VerbatimUrl`.
We now correctly emit relative paths in `uv pip compile` with
`tool.uv.sources` path inputs.
`tool.uv.sources` is mainly intended to be used with the uv lock file
over requirements.txt, but it's good to have basic `uv pip` support
working.
Fixes #3366
## Summary
All of the resolver code is run on the main thread, so a lot of the
`Send` bounds and uses of `DashMap` and `Arc` are unnecessary. We could
also switch to using single-threaded versions of `Mutex` and `Notify` in
some places, but there isn't really a crate that provides those I would
be comfortable with using.
The `Arc` in `OnceMap` can't easily be removed because of the uv-auth
code which uses the
[reqwest-middleware](https://docs.rs/reqwest-middleware/latest/reqwest_middleware/trait.Middleware.html)
crate, which seems to add unnecessary `Send` bounds because of
`async-trait`. We could duplicate the code and create a `OnceMapLocal`
variant, but I don't feel that's worth it.
## Summary
Users often find themselves dropped into environments that contain
`.egg-info` packages. While we won't support installing these, it's not
hard to support identifying them (e.g., in `pip freeze`) and
_uninstalling_ them.
Closes https://github.com/astral-sh/uv/issues/2841.
Closes #2928.
Closes #3341.
## Test Plan
Ran `cargo run pip freeze --python
/opt/homebrew/Caskroom/miniforge/base/envs/TEST/bin/python`, with an
environment that includes `pip` as an `.egg-info`
(`/opt/homebrew/Caskroom/miniforge/base/envs/TEST/lib/python3.12/site-packages/pip-24.0-py3.12.egg-info`):
```
cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1696001825047/work
pip==24.0
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1711811537435/work
setuptools==69.5.1
wheel==0.43.0
```
Then ran `cargo run pip uninstall`, verified that `pip` was uninstalled,
and no longer listed in `pip freeze`.
Fixes #3371
It seems like uv doesn't proactively enforce 3.8+ and in most cases just
issues a warning. This PR keeps that property, only adding the new check
when it is known to fail. I checked the imports in this file and the
other ones seem fine.
## Summary
Refreshes some of the activation scripts, and fixes some bugs in
`activate_this.py` that were likely the result of some erroneous
copy-pasting.
Closes https://github.com/astral-sh/uv/issues/3346.
## Test Plan
```
❯ python
Python 3.12.0 (main, Feb 28 2024, 09:44:16) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import httpx
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'httpx'
>>> activator = '.venv/bin/activate_this.py'
>>> with open(activator) as f:
... exec(f.read(), {'__file__': activator})
...
>>> import httpx
```
Only allow using `tool.uv.sources` with preview mode, the design isn't
finalized yet.
Not sure what to label this, do we want a preview section and label for
the release notes?
## Summary
We need to partition the editable and non-editable requirements. As-is,
`editable = true` requirements were still being installed as
non-editable.
## Summary
We were writing the build dependencies into the `--target` directory,
which both made builds fail and led to them leaking into the user's
directory.
Closes https://github.com/astral-sh/uv/issues/3349.
## Introduction
PEP 621 is limited. Specifically, it lacks
* Relative path support
* Editable support
* Workspace support
* Index pinning or any sort of index specification
The semantics of urls are a custom extension, PEP 440 does not specify
how to use git references or subdirectories, instead pip has a custom
stringly format. We need to somehow support these while still staying
compatible with PEP 621.
## `tool.uv.source`
Drawing inspiration from cargo, poetry and rye, we add `tool.uv.sources`
or (for now stub only) `tool.uv.workspace`:
```toml
[project]
name = "albatross"
version = "0.1.0"
dependencies = [
"tqdm >=4.66.2,<5",
"torch ==2.2.2",
"transformers[torch] >=4.39.3,<5",
"importlib_metadata >=7.1.0,<8; python_version < '3.10'",
"mollymawk ==0.1.0"
]
[tool.uv.sources]
tqdm = { git = "https://github.com/tqdm/tqdm", rev = "cc372d09dcd5a5eabdc6ed4cf365bdb0be004d44" }
importlib_metadata = { url = "https://github.com/python/importlib_metadata/archive/refs/tags/v7.1.0.zip" }
torch = { index = "torch-cu118" }
mollymawk = { workspace = true }
[tool.uv.workspace]
include = [
"packages/mollymawk"
]
[tool.uv.indexes]
torch-cu118 = "https://download.pytorch.org/whl/cu118"
```
See `docs/specifying_dependencies.md` for a detailed explanation of the
format. The basic gist is that `project.dependencies` is what ends up on
pypi, while `tool.uv.sources` are your non-published additions. We do
support the full range of PEP 508; we just hide it in the docs and
prefer the exploded table for easier readability and less confusion with
actual url parts.
This format should eventually be able to subsume requirements.txt's
current use cases. While we will continue to support the legacy `uv pip`
interface, this is a piece of uv's own top-level interface. Together
with `uv run` and a lockfile format, you should only need to write
`pyproject.toml` and do `uv run`, which generates/uses/updates your
lockfile behind the scenes, no more pip-style requirements involved. It
also lays the groundwork for implementing index pinning.
## Changes
This PR implements:
* Reading and lowering `project.dependencies`,
`project.optional-dependencies` and `tool.uv.sources` into a new
requirements format, including:
* Git dependencies
* Url dependencies
* Path dependencies, including relative and editable
* `pip install` integration
* Error reporting for invalid `tool.uv.sources`
* Json schema integration (works in pycharm, see below)
* Draft user-level docs (see `docs/specifying_dependencies.md`)
It does not implement:
* No `pip compile` testing, deprioritizing towards our own lockfile
* Index pinning (stub definitions only)
* Development dependencies
* Workspace support (stub definitions only)
* Overrides in pyproject.toml
* Patching/replacing dependencies
One technically breaking change is that we now require user provided
pyproject.toml to be valid wrt to PEP 621. Included files still fall
back to PEP 517. That means `pip install -r requirements.txt` requires
it to be valid while `pip install -r requirements.txt` with `-e .` as
content falls back to PEP 517 as before.
## Implementation
The `pep508` requirement is replaced by a new `UvRequirement` (name up
for bikeshedding, not particularly attached to the uv prefix). The still
existing `pep508_rs::Requirement` type is a url format copied from pip's
requirements.txt and doesn't appropriately capture all features we
want/need to support. The bulk of the diff is changing the requirement
type throughout the codebase.
We still use `VerbatimUrl` in many places, where we would expect a
parsed/decomposed url type, specifically:
* Reading core metadata except top level pyproject.toml files, we fail a
step later instead if the url isn't supported.
* Allowed `Urls`.
* `PackageId` with a custom `CanonicalUrl` comparison, instead of
canonicalizing urls eagerly.
* `PubGrubPackage`: We eventually convert the `VerbatimUrl` back to a
`Dist` (`Dist::from_url`), instead of remembering the url.
* Source dist types: We use verbatim url even though we know and require
that these are supported urls we can and have parsed.
I tried to improve the situation by replacing `VerbatimUrl`, but
these changes would require massive invasive changes (see e.g.
https://github.com/astral-sh/uv/pull/3253). A main problem is the ref
`VersionOrUrl` and applying overrides, which assume the same
requirement/url type everywhere. In its current form, this PR increases
this tech debt.
I've tried to split off PRs and commits, but the main refactoring is
still a single monolith commit to make it compile and the tests pass.
## Demo
Adding
d1ae3b85d5/pyproject.json
as json schema (v7) to pycharm for `pyproject.toml`, you can try the IDE
support already:

[dove.webm](https://github.com/astral-sh/uv/assets/6826232/c293c272-c80b-459d-8c95-8c46a8d198a1)
In *some* places in our crates, `serde` (and `rkyv`) are optional
dependencies. I believe this was done out of reasons of "good sense,"
that is, it follows a Rust ecosystem pattern where serde integration
tends to be an opt-in crate feature. (And similarly for `rkyv`.)
However, ultimately, `uv` itself requires `serde` and `rkyv` to
function. Since our crates are strictly internal, there are limited
consumers for our crates without `serde` (and `rkyv`) enabled. I think
one possibility is that optional `serde` (and `rkyv`) integration means
that someone can do this:
```
cargo test -p pep440_rs
```
And this will run tests _without_ `serde` or `rkyv` enabled. That in
turn could lead to faster iteration time by reducing compile times. But,
I'm not sure this is worth supporting. The iterative compilation times
of
individual crates are probably fast enough in debug mode, even with
`serde` and `rkyv` enabled. Namely, `serde` and `rkyv` themselves
shouldn't need to be re-compiled in most cases. On `main`:
```
from-scratch: `cargo test -p pep440_rs --lib` 0.685
incremental: `cargo test -p pep440_rs --lib` 0.278s
from-scratch: `cargo test -p pep440_rs --features serde,rkyv --lib` 3.948s
incremental: `cargo test -p pep440_rs --features serde,rkyv --lib` 0.321s
```
So while a from-scratch build does take significantly longer, an
incremental build is about the same.
The benefit of doing this change is two-fold:
1. It brings out crates into alignment with "reality." In particular,
some crates were _implicitly_ relying on `serde` being enabled
without explicitly declaring it. This technically means that our
`Cargo.toml`s were wrong in some cases, but it is hard to observe it
because of feature unification in a Cargo workspace.
2. We no longer need to deal with the cognitive burden of writing
`#[cfg_attr(feature = "serde", ...)]` everywhere.
This PR principally adds a routine for converting a `Lock` to a
`Resolution`, where a `Resolution` is a map of package names pinned to
a specific version.
I'm not sure that a `Resolution` is ultimately what we want here (we
might need more stuff), but this was the quickest route I could find to
plug a `Lock` into our existing `uv pip install` infrastructure.
This commit also does a little refactoring of the `Lock` types. The
main thing is to permit extra state on some of the types (like a
`by_id` map on `Lock` for quick lookups of distributions) that aren't
included in the serialization format of a `Lock`. We achieve this
by defining separate `Wire` types that are automatically converted
to-and-from via `serde`.
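A minimal sketch of that `Wire` pattern (assuming `serde` with the
derive feature), with the field set reduced to the bare minimum:

```rust
use serde::Deserialize;
use std::collections::HashMap;

#[derive(Deserialize)]
struct Distribution {
    name: String,
    version: String,
}

/// The plain serialization format: exactly what appears in `uv.lock`.
#[derive(Deserialize)]
struct LockWire {
    version: u32,
    distribution: Vec<Distribution>,
}

/// The in-memory type, with extra lookup state that never hits the file.
#[derive(Deserialize)]
#[serde(from = "LockWire")]
struct Lock {
    version: u32,
    by_id: HashMap<String, Distribution>, // derived at load time
}

impl From<LockWire> for Lock {
    fn from(wire: LockWire) -> Lock {
        let by_id = wire
            .distribution
            .into_iter()
            .map(|dist| (dist.name.clone(), dist))
            .collect();
        Lock { version: wire.version, by_id }
    }
}
```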
Note that like with the lock file format types themselves, we leave a
few `todo!()` expressions around. The main idea is to get something
minimally working without spending too much effort here. (A fair bit
of refactoring will be required to generate a lock file, and it's
not clear how much this code will wind up needing to change anyway.)
In particular, we only handle the case of installing wheels from a
registry.
A demonstration of the full flow:
```
$ requirements.in
anyio
$ cargo run -p uv -- pip compile -p3.10 requirements.in --unstable-uv-lock-file
$ uv venv
$ cargo run -p uv -- pip install --unstable-uv-lock-file anyio -r requirements.in
Installed 5 packages in 7ms
+ anyio==4.3.0
+ exceptiongroup==1.2.1
+ idna==3.7
+ sniffio==1.3.1
+ typing-extensions==4.11.0
```
In order to install from a lock file, we start from the root and do a
breadth first traversal over its dependencies. We aren't yet filtering
on marker expressions (since they aren't in the lock file yet), but we
should be able to add that in the future. In so doing, the traversal
should select only the subset of distributions relevant for the current
platform.
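A minimal sketch of that traversal, with hypothetical field names:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

struct Dist {
    name: String,
    dependencies: Vec<String>,
}

fn select<'a>(by_name: &'a HashMap<String, Dist>, root: &str) -> Vec<&'a Dist> {
    let mut queue = VecDeque::from([root.to_string()]);
    let mut seen = HashSet::new();
    let mut selected = Vec::new();
    while let Some(name) = queue.pop_front() {
        if !seen.insert(name.clone()) {
            continue;
        }
        if let Some(dist) = by_name.get(&name) {
            selected.push(dist);
            // Marker filtering would happen here, once marker expressions
            // are recorded in the lock file.
            queue.extend(dist.dependencies.iter().cloned());
        }
    }
    selected
}
```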
Split out of #3266
The "selector" concept doesn't seem well enough defined as-is. For
example, `PythonVersion` belongs there but isn't present. Going for
smaller modules instead.
Split out of https://github.com/astral-sh/uv/pull/3266
If `UV_BOOTSTRAP_DIR` and `CARGO_MANIFEST_DIR` are both unset, we
currently panic. This isn't good once we start to use managed toolchains
in production. We'll need to change this more later once the toolchain
directory is more user-facing.
Moves all of `uv-toolchain` into `uv-interpreter`. We may split these
out in the future, but the refactoring I want to do for interpreter
discovery is easier if I don't have to deal with entanglement. Includes
some restructuring of `uv-interpreter`.
Part of #2386
Another split out from https://github.com/astral-sh/uv/pull/3263. This
abstracts the copy&pasted check whether an installed distribution
satisfies a requirement used by both plan.rs and site_packages.rs into a
shared module. It's less useful here than with the new requirement but
helps with reducing https://github.com/astral-sh/uv/pull/3263 diff size.
Previously, a noop `uv pip install` would only show "Audited {}
package(s)" but no details, not even with `-vv`. Now it debug logs which
requirements were met and it also debug logs which requirement was
missing to trigger the full routine, allowing one to investigate caching
behaviour.
First `uv pip install -v jupyter`:
```
DEBUG At least one requirement is not satisfied: jupyter
```
Second `uv pip install -v jupyter`:
```
DEBUG Found a virtualenv named .venv at: /home/konsti/projects/uv-main/.venv
DEBUG Cached interpreter info for Python 3.12.1, skipping probing: .venv/bin/python
DEBUG Using Python 3.12.1 environment at .venv/bin/python
DEBUG Trying to lock if free: .venv/.lock
DEBUG Requirement satisfied: anyio
DEBUG Requirement satisfied: anyio>=3.1.0
DEBUG Requirement satisfied: argon2-cffi-bindings
DEBUG Requirement satisfied: argon2-cffi>=21.1
DEBUG Requirement satisfied: arrow>=0.15.0
DEBUG Requirement satisfied: asttokens>=2.1.0
DEBUG Requirement satisfied: async-lru>=1.0.0
DEBUG Requirement satisfied: attrs>=22.2.0
DEBUG Requirement satisfied: babel>=2.10
...
DEBUG Requirement satisfied: webencodings
DEBUG Requirement satisfied: webencodings>=0.4
DEBUG Requirement satisfied: websocket-client>=1.7
DEBUG Requirement satisfied: widgetsnbextension~=4.0.10
DEBUG All editables satisfied:
Audited 1 package in 12ms
```
This will clash with the `tool.uv.sources` PR, I'll rebase it on top.
## Summary
Hi! Added `UV_NO_BUILD_ISOLATION` as a boolean environment variable for
the `--no-build-isolation` command-line option.
Closes https://github.com/astral-sh/uv/issues/3309
## Test Plan
Added new test `respect_no_build_isolation_env_var` to check that the
behaviour is the same as if the `--no-build-isolation` command-line
option is set.
This is meant to be a base on which to build. There are some parts
which are implicitly incomplete and others which are explicitly
incomplete. The latter are indicated by TODO comments.
Here is a non-exhaustive list of incomplete things. In many cases, these
are incomplete simply because the data isn't present in a
`ResolutionGraph`. Future work will need to refactor our resolver so
that this data is correctly passed down.
* Not all wheels are included. Only the "selected" wheel for the current
distribution is included.
* Marker expressions are always absent.
* We don't emit hashes for certain kinds of distributions (direct
URLs, git, and path).
* We don't capture git information from a dependency specification.
Right now, we just always emit "default branch."
There are perhaps also other changes we might want to make to the format
of a more cosmetic nature. Right now, all arrays are encoded using
whatever the `toml` crate decides to do. But we might want to exert more
control over this. For example, by using inline tables or squashing more
things into strings (like I did for `Source` and `Hash`). I think the
main trade-off here is that table arrays are somewhat difficult to read
(especially without indentation), whereas squashing things down into a
more condensed format potentially makes future-compatible additions
harder.
I also went lighter on the documentation here than I normally would.
That's primarily because I think this code is going to
go through some evolution and I didn't want to spend too much time
documenting something that is likely to change.
Finally, here's an example of the lock file format in TOML for the
`anyio` dependency. I generated it with the following command:
```
cargo run -p uv -- pip compile -p3.10 ~/astral/tmp/reqs/anyio.in --unstable-uv-lock-file
```
And that writes out a `uv.lock` file:
```toml
version = 1
[[distribution]]
name = "anyio"
version = "4.3.0"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "2f20c40b45242c0b33774da0e2e34f/anyio-4.3.0-py3-none-any.whl"
hash = "sha256:048e05d0f6caeed70d731f3db756d35dcc1f35747c8c403364a8332c630441b8"
[[distribution.dependencies]]
name = "exceptiongroup"
version = "1.2.1"
source = "registry+https://pypi.org/simple"
[[distribution.dependencies]]
name = "idna"
version = "3.7"
source = "registry+https://pypi.org/simple"
[[distribution.dependencies]]
name = "sniffio"
version = "1.3.1"
source = "registry+https://pypi.org/simple"
[[distribution.dependencies]]
name = "typing-extensions"
version = "4.11.0"
source = "registry+https://pypi.org/simple"
[[distribution]]
name = "exceptiongroup"
version = "1.2.1"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "79fe92dd414cadab75055b0ae00b33/exceptiongroup-1.2.1-py3-none-any.whl"
hash = "sha256:5258b9ed329c5bbdd31a309f53cbfb0b155341807f6ff7606a1e801a891b29ad"
[[distribution]]
name = "idna"
version = "3.7"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "741d8c8280948df2ea0eda2c8b79e8/idna-3.7-py3-none-any.whl"
hash = "sha256:82fee1fc78add43492d3a1898bfa6d8a904cc97d8427f683ed8e798d07761aa0"
[[distribution]]
name = "sniffio"
version = "1.3.1"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "75a9c94214239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl"
hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2"
[[distribution]]
name = "typing-extensions"
version = "4.11.0"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "936e2092671b43269810cd589ceaf5/typing_extensions-4.11.0-py3-none-any.whl"
hash = "sha256:c1f94d72897edaf4ce775bb7558d5b79d8126906a14ea5ed1635921406c0387a"
```
Resolves https://github.com/astral-sh/uv/issues/3313
## Summary
Add a new env variable `UV_LINK_MODE` as alias for the cli argument
--link-mode.
Updated the README env variables section.
## Test Plan
Tested manually using `UV_LINK_MODE=hardlink cargo run -p uv pip install
flask`.
When running
```
set UV_CACHE_DIR=%LOCALAPPDATA%\uv\cache-foo && uv venv venv
```
in windows CMD, the error would be just
```
error: The system cannot find the path specified. (os error 3)
```
The problem is that the first action in the cache dir is adding the tag,
and the `cachedir` crate is using `std::fs` instead of `fs_err`. I've
copied the two functions we use from the crate and changed the import
from `std::fs` to `fs_err`.
The new error is
```
error: failed to open file `C:\Users\Konstantin\AppData\Local\uv\cache-foo \CACHEDIR.TAG`
Caused by: The system cannot find the path specified. (os error 3)
```
which correctly explains the problem.
Closes #3280
## Summary
It seems like Azure might return a 401 when you request a package that
doesn't exist (even with valid credentials)? But I admittedly haven't
tested this. (We already skip 403, and this seems similar?)
Closes https://github.com/astral-sh/uv/issues/3291.
## Summary
This index strategy resolves every package to the latest possible
version across indexes. If a version is in multiple indexes, the first
available index is selected.
Implements #3137
This closely matches pip.
## Test Plan
Good question. I'm hesitant to use my certifi example here, since that
would inevitably break when torch removes this package. Please comment!
## Summary
The approach taken here is to model `--target` as an install scheme in
which all the directories are just subdirectories of the `--target`.
From there, everything else... just works? Like, upgrade, uninstalls,
editables, etc. all "just work".
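A minimal sketch of the idea (the directory layout below is an
assumption, not the exact scheme mapping):

```rust
use std::path::{Path, PathBuf};

// The directories of an install scheme, all rooted under `--target`.
struct Scheme {
    purelib: PathBuf,
    platlib: PathBuf,
    scripts: PathBuf,
    data: PathBuf,
    include: PathBuf,
}

fn target_scheme(target: &Path) -> Scheme {
    Scheme {
        purelib: target.to_path_buf(),
        platlib: target.to_path_buf(),
        scripts: target.join("bin"),
        data: target.to_path_buf(),
        include: target.join("include"),
    }
}
```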
Closes #1517.
## Summary
based on PEP 508 the `platform_machine` should be same as
`platform.machine()` output:
https://peps.python.org/pep-0508/#environment-markers
From my macOS M2
```python
In [1]: import platform
In [2]: platform.machine()
Out[2]: 'arm64'
```
In napari we also use `arm64` in requirements, and it works as expected:
9fcf63e69a/pyproject.toml (L120)
## Test Plan
I rebased https://github.com/astral-sh/uv/pull/2757 then realized that
we want to implement this for more than `uv venv`.
Closes https://github.com/astral-sh/uv/issues/2587
Closes https://github.com/astral-sh/uv/issues/2757
```
❯ cargo run -q -- pip install -p /Users/mz/bin/python3.7 anyio
warning: uv is only compatible with Python 3.8+, found Python 3.7.17.
Audited 1 package in 84ms
❯ cargo run -q -- venv -p /Users/mz/bin/python3.7
warning: uv is only compatible with Python 3.8+, found Python 3.7.17.
Using Python 3.7.17 interpreter at: /Users/mz/bin/python3.7
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
---------
Co-authored-by: Stevie Gayet <stegayet@users.noreply.github.com>
## Summary
Simplifies dependency errors of the form `you require package-a and you
require package-b` to `you require package-a and package-b`. Resolves
https://github.com/astral-sh/uv/issues/1009.
The only thing a `OnceMap` really needs to be able to do with the value
is to clone it. All extant uses benefited from having this done for them
by automatically wrapping values in an `Arc`. But this isn't necessarily
true for all things. For example, a value might have an `Arc` internally
to making cloning cheap in other contexts, and it doesn't make sense to
re-wrap it in an `Arc` just to use it with a `OnceMap`. Or
alternatively, cloning might just be cheap enough on its own that an
`Arc` isn't worth it.
Previously, this would use the "system" temporary directory.
Because we rename the resulting directory to its final destination,
and because renaming is implemented via hardlinking, and because
hardlinking doesn't work across mount points, and because /tmp is
commonly on a different mount point than where `uv` is checked out,
we "fix" this by putting the temporary directory somewhere close to
the final destination of the fetched artifact.
There are alternatives we might consider pursuing. For example,
if the `rename` fails, then we should probably do a recursive
directory copy. But this is a quick fix for now, and it is also
consistent with the colocation of other temporary directories in `uv`.
The main downside of this change is that if a user does ^C while
`uv-dev fetch-python` is running, then there is no mechanism for
cleaning up temporary directories.
## Summary
I initially implemented this by allowing arbitrary
`x86_64-manylinux_x_y`, but it makes the Clap, Serde, and Schemars
definitions more complicated, _and_ makes it harder for the user.
Ultimately, manylinux itself only provides images for 2_17 and 2_28, so
this seems like it should be sufficient.
Closes https://github.com/astral-sh/uv/issues/3222.
## Summary
We now recursively expand any self-dependencies via extras, which lets
us detect conflicts sooner and avoid building unnecessary versions of
packages that are excluded via the extra.
Closes https://github.com/astral-sh/uv/issues/3135.
Previously, uv-auth would fail to compile due to a missing process
feature. I chose to make all tokio features we use top-level features,
so we can share the tokio cache between all test invocations.
## Summary
See the diff in the tests. If you have a constraint with an extra, we
should respect it, but we shouldn't _add_ the extra to the requirements.
## Summary
macOS 11 has been EOL for 7 months, and it seems like it's common to
publish ARM-only wheels at 12.0+. I think this is a better default for
ARM macs.
Closes https://github.com/astral-sh/uv/issues/3227.
## Test Plan
`cargo run pip install scikit-learn==1.3.2 --python-platform macos
--no-build`
## Summary
The CLI gives `<COLOR>` as the value placeholder, which is misleading. I
changed this to `<COLOR_CHOICE>` to make it obvious that you don't
supply a color but a `ColorChoice` value.
## Test Plan
Compiled and verified --help text was correct
## Summary
This is mainly a cleanup PR to leverage uv-version in uv-virtualenv
instead of passing it via `uv`.
In #1852 I introduced the ability to pass extra cfg to `gourgeist` for
the primary purpose of passing the uv version, but since the dawn of the
uv-version crate dynamically passing more values to pyvenv.cfg is no
longer needed.
## Test Plan
Existing `uv` tests should still verify `uv = <version>` exists in the
venv and make sure no regressions were introduced.
## Summary
The function `find_in_directory` joins `uv.toml`. The initial join in
`user` is redundant.
Also a small comment fix.
## Test Plan
Not really sure how to test this. Please suggest if any tests need to be
added.
## Summary
Avoid removing quotes from markers, e.g. `numpy (>=1.19) ;
python_version >= "3.7"` should not be rewritten. Fixes
https://github.com/astral-sh/uv/issues/2551.
This PR also makes fixups a bit more flexible internally for fixes that
aren't simple to implement with a pure regex replacement, like this one.
https://github.com/astral-sh/uv/pull/1529 fixed a similar problem but
the current regex is still not smart enough to avoid all markers
completely (like `python_version`).
## Test Plan
Added a few unit tests.
Fixes the failure to lookup credentials in
https://github.com/astral-sh/uv/issues/3205
The issue is that we seed the cache with the index URL which includes a
username but no password. We did not ensure that a password was present
in the cached credentials before attempting a request with them. Now,
the cache will not return credentials when a username is provided and
the cached credentials have no password — the cached credentials are
useless in that case.
Tested with a Google Artifact Registry and keyring
```
RUST_LOG=uv=trace cargo run -q -- pip install requests --index-url https://oauth2accesstoken@us-central1-python.pkg.dev/<project>/pypi/simple/ --no-cache --keyring-provider subprocess -v
```
## Summary
I found some of these too bare (e.g., when they _just_ show a package
name with no other information). For me, this makes it easier to
differentiate error message copy from data. But open to other opinions.
Take a look at the fixture changes and LMK!
## Summary
pip supports providing a `--platform` to `pip install`, which can be
used to seed an environment (e.g., for use in a container or otherwise).
This PR adds `--python-platform` to our commands to support a similar
workflow. It has some caveats, which are documented on the CLI.
Closes #2079.
Since we're now using read timeouts and not total timeouts, we can use a
lower threshold, a single read shouldn't take 5 min (and not even 10s).
The 10s value is somewhat arbitrary.
Like #3144, this is a breaking change in some sense.
Adds hidden `--preview` / `--no-preview` flags with `UV_PREVIEW`
environment variable support. Copies the `PreviewMode` type from Ruff.
Does a little bit of extra work to port `uv run` to the new settings
model.
Note we allow `uv run` invocations without preview and only use its
presence to toggle an experimental warning.
## Test plan
```
❯ cargo run -q -- run --no-workspace -- python --version
warning: `uv run` is experimental and may change without warning.
Python 3.12.2
❯ cargo run -q -- run --no-workspace --preview -- python --version
Python 3.12.2
❯ UV_PREVIEW=1 cargo run -q -- run --no-workspace -- python --version
Python 3.12.2
```
In #2976 I made some changes that led to regressions:
- We stopped tracking URLs that we had not seen credentials for in the
cache
- This means the cache no longer returns a value to indicate we've seen
a realm before
- We stopped seeding the cache with URLs
- Combined with the above, this means we no longer had a list of
locations that we would never attempt to fetch credentials for
- We added caching of credentials found on requests
- Previously the cache was only populated from the seed or credentials
found in the netrc or keyring
- This meant that the cache was populated for locations that we
previously did not cache, i.e. GitHub artifacts(?)
Unfortunately this unveiled problems with the granularity of our cache.
We cache credentials per realm (roughly the hostname) but some realms
have mixed authentication modes i.e. different credentials per URL or
URLs that do not require credentials. Applying credentials to a URL that
does not require it can lead to a failed request, as seen in #3123 where
GitHub throws a 401 when receiving credentials.
To resolve this, the cache is expanded to support caching at two
levels:
- URL, cached URL must be a prefix of the request URL
- Realm, exact match required
When we don't have URL-level credentials cached, we attempt the request
without authentication first. On failure, we'll search for realm-level
credentials or fetch credentials from external services. This avoids
providing credentials to new URLs unless we know we need them.
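A rough sketch of the two-level lookup, with assumed types (not uv's actual API):
```rust
use std::collections::HashMap;

struct Credentials; // stand-in for the real type

// Two cache levels: URL-level entries match by prefix; realm-level entries
// require an exact match.
struct CredentialsCache {
    urls: Vec<(String, Credentials)>,     // (URL prefix, credentials)
    realms: HashMap<String, Credentials>, // keyed by realm, e.g. "https://host"
}

impl CredentialsCache {
    /// URL-level credentials: the cached URL must be a prefix of the request URL.
    fn get_url(&self, request_url: &str) -> Option<&Credentials> {
        self.urls
            .iter()
            .find(|(prefix, _)| request_url.starts_with(prefix.as_str()))
            .map(|(_, credentials)| credentials)
    }

    /// Realm-level credentials: consulted only after an unauthenticated attempt fails.
    fn get_realm(&self, realm: &str) -> Option<&Credentials> {
        self.realms.get(realm)
    }
}
```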
Closes https://github.com/astral-sh/uv/issues/3123
## Summary
This PR avoids: (1) using the lookahead resolver when `--no-deps` is
specified (we'll never use those requirements), and (2) including any
transitive requirements when searching for allowed URLs, etc., when
`--no-deps` is specified.
Closes https://github.com/astral-sh/uv/issues/3183.
Previously, we had `pypi_types::DirectUrl` (the PyPA-spec
direct_url.json format) and `distribution_types::DirectUrl` (an enum of
all the URL types we support). This led me to confusion, so I'm
renaming the latter to the more appropriate `ParsedUrl`.
Add a dedicated error type for direct URL parsing. This change is broken
out from the new uv requirement type, which uses direct URL parsing
internally.
## Summary
This PR adds a `UV_CONSTRAINT` environment variable, analogous to
`PIP_CONSTRAINT`, to allow providing a constraints file via an
environment variable. Implementing this will simplify the adoption of uv
in the testing procedures of projects I'm involved in (testing using tox).
This was my motivation for opening #1841, which was closed in favor of
#1789, which was in turn closed without implementing this feature.
In this implementation, I have used a space as the separator, analogous
to `pip`. This introduces an obvious problem if a path contains a space.
Another option would be to use the standard path separator (`:` on
UNIX-like systems, `;` on Windows). Which one do you prefer?
## Test Plan
This is my first contribution and my first Rust coding experience. It
would be nice if someone could point out how I should implement tests for this.
I can't get this to reproduce on GitHub Actions -- maybe the builds
there differ, or maybe the builds changed since we added this fix? I'll
check locally, but regardless, this is a typo.
Closes #3158.
## Summary
No behavior changes, but the idea here is that we move the argument
normalization code (e.g., create an `Upgrade` struct from `--upgrade`
and `--upgrade-package`) into the `settings.rs` file, where we build the
common settings structs.
This reduces a lot of the logic and duplication across commands in
`main.rs`.
## Summary
pip prefers somewhat-constrained over unconstrained packages... but only
if they're at equal depths in the tree. We don't have a way to track the
latter property yet (I've added a TODO), so for now, we should remove
this constraint -- it seems to be counter-productive.
I've filed https://github.com/astral-sh/uv/issues/3149 as a follow-up.
Closes https://github.com/astral-sh/uv/issues/3143
## Test Plan
- `git clone https://github.com/drivendataorg/zamba.git`
- `echo "-e .[tests]" > req.in`
- `cargo run venv && cargo run pip compile req.in --refresh -n
--python-platform linux --python-version 3.8`
I added distributions to these projects so the commit changed.
We could pin but we want to test for resolution... so we don't. These
are pretty static so this should be rare.
## Summary
This leverages the new `read_timeout` property, which ensures that (like
pip) our timeout is not applied to the _entire_ request, but rather, to
each individual read operation.
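For illustration, here's roughly what this looks like, assuming the HTTP client is `reqwest`, which exposes a `read_timeout` builder option (the duration here is illustrative, not uv's actual value):
```rust
use std::time::Duration;

// `read_timeout` bounds each individual socket read, so a large download
// that makes steady progress succeeds, while a stalled connection still
// fails quickly. A total `timeout` would instead abort long downloads.
fn build_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .read_timeout(Duration::from_secs(30))
        .build()
}
```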
Closes: #1921.
See: #1912.
This means that a bare `uv run` invocation drops you into a REPL.
This behavior is internally controversial, and may best be served by a
dedicated `uv repl` command. I would imagine it's important to fail if
no command is given in _some_ circumstances, but those may be resolved
by _not_ doing this if we do not detect a TTY.
Regardless, I'm interested in giving this a try for a bit during this
experimental phase.
In addition to the requested requirements, we include requirements from
a `pyproject.toml` file if it exists and install the current directory.
Closes https://github.com/astral-sh/uv/issues/3104
Holy cow does installation / resolution take a ton of options. We
side-step most of them here.
If the current environment satisfies the requirements, it is used.
Otherwise, we create a new environment with the requested dependencies.
## Summary
Following up
- https://github.com/astral-sh/uv/pull/3113
- https://github.com/astral-sh/uv/pull/3115
It looks like the `uv pip compile` command's handling of
`UV_SYSTEM_PYTHON` was missed because these two PRs landed close in
time, resulting in:
```bash
$ uv --version
uv 0.1.34 (9259eceeb 2024-04-19)
$ UV_SYSTEM_PYTHON=1 uv pip compile --upgrade requirements.in -o requirements.txt
error: invalid value '1' for '--system'
[possible values: true, false]
For more information, try '--help'.
```
Signed-off-by: Jack Cherng <jfcherng@gmail.com>
## Summary
I've wanted to try this for a long time, so decided to give it a shot.
The basic idea is that you can provide a target triple (e.g.,
`--platform x86_64-pc-windows-msvc`) and resolve against that platform,
rather than the currently-running platform. It's functionally similar to
`--python-version`, though a bit simpler since there's no need to engage
with `Requires-Python`.
Our infrastructure is well set up for this and so, in the end, it's
actually pretty straightforward: for each triple, we just need to
override the markers and platform tags.
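For a sense of what "override the markers" means, a sketch with assumed names (the real marker and tag handling is richer):
```rust
// A target triple determines the environment markers and wheel tags used
// for resolution, instead of the host's.
enum TargetTriple {
    X8664PcWindowsMsvc,
    X8664UnknownLinuxGnu,
    Aarch64AppleDarwin,
}

impl TargetTriple {
    /// The `platform_machine` environment marker for this triple.
    fn platform_machine(&self) -> &'static str {
        match self {
            Self::X8664PcWindowsMsvc | Self::X8664UnknownLinuxGnu => "x86_64",
            Self::Aarch64AppleDarwin => "arm64",
        }
    }

    /// The `sys_platform` environment marker for this triple.
    fn sys_platform(&self) -> &'static str {
        match self {
            Self::X8664PcWindowsMsvc => "win32",
            Self::X8664UnknownLinuxGnu => "linux",
            Self::Aarch64AppleDarwin => "darwin",
        }
    }
}
```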
## Summary
We weren't setting a priority for editables, so they were being visited
last.
I think there's still a problem whereby we're not aggressive enough in
visiting recursive extras (and, in fact, that's making it really hard to
write a test -- I wrote a test, but the most-reduced case still fails,
and I'd need to add a layer of indirection to make it
fail-on-main-but-pass-on-this-branch), but that problem likely already
existed on main prior to #3087, so I just want to get this quick fix out
now.
Closes https://github.com/astral-sh/uv/issues/3127.
## Test Plan
- `git clone https://github.com/cda-tum/mqt-core.git`
- `cargo run venv`
- `cargo run pip install 'scikit-build-core[pyproject]>=0.8.1'
'setuptools_scm>=7' 'pybind11>=2.12' --resolution=lowest-direct`
- `cargo run pip install --no-build-isolation
'-ve.[test,qiskit,evaluation,coverage]' --resolution=lowest-direct`
Given requirements like:
```
black==23.1.0
black[colorama]
```
The resolver will (on `main`) add a dependency on Black, and then try to
use the most recent version of Black to satisfy `black[colorama]`. For
the sake of example, assume `black==24.0.0` is the most recent version. Once
the resolver selects this most recent version, it'll fetch the metadata, then
return the dependencies for `black==24.0.0` with the `colorama` extra
enabled. Finally, it will tack on `black==24.0.0` (a dependency on the
base package). The resolver will then detect a conflict between
`black==23.1.0` and `black==24.0.0`, throw out
`black[colorama]==24.0.0`, and try the next-most-recent version.
This is both wasteful and can cause problems, since we're fetching
metadata for versions that will _never_ satisfy the resolver. In the
`apache-airflow[all]` case, I also ran into an issue whereby we were
attempting to build very old versions of `apache-airflow` due to
`apache-airflow[pandas]`, which in turn led to resolution failures.
The solution proposed here is that we create a new proxy package with
exactly two dependencies: one on `black` and one on `black[colorama]`.
Both of these packages must be at the same version as the proxy package,
so the resolver knows much _earlier_ that (in the above example) the
extra variant _must_ match `23.1.0`.
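As a sketch of the shape (a hypothetical helper, not uv's actual resolver types), the proxy's dependency list is just:
```rust
// The proxy package for `{name}[{extra}]` at `version` depends on exactly
// the base package and the extra variant, both pinned to `version`, so the
// resolver learns early that they must agree.
fn proxy_dependencies(name: &str, extra: &str, version: &str) -> [String; 2] {
    [
        format!("{name}=={version}"),          // base package
        format!("{name}[{extra}]=={version}"), // extra variant
    ]
}
```
With `black==23.1.0` already pinned, the proxy can only exist at `23.1.0`, so `black[colorama]` is never even tried at `24.0.0`.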
## Summary
This PR adds a test that currently leads to an error, but should
successfully resolve as of https://github.com/astral-sh/uv/pull/3100.
The core idea is that if we have a pinned package, we shouldn't try to
build other versions of that package if we have an unconstrained variant
with an extra.
## Summary
This was unintended. We ended up reverting `Option<bool>` everywhere,
but I missed this one since it's in a separate file.
(If you use `Option<bool>`, Clap requires a value, like `--no-cache
true`.)
## Test Plan
`cargo run pip install flask --no-cache`
## Summary
I think these are useful to have for consistency, though the `--system`
variant requires some new threading.
Closes: https://github.com/astral-sh/uv/issues/2242.
## Summary
Right now, we only accept _exactly_ `UV_NATIVE_TLS=true` and
`UV_NATIVE_TLS=false`. `BoolishValueParser` accepts a wider range of
values:
```rust
/// True values are `y`, `yes`, `t`, `true`, `on`, and `1`.
pub(crate) const TRUE_LITERALS: [&str; 6] = ["y", "yes", "t", "true", "on", "1"];
/// False values are `n`, `no`, `f`, `false`, `off`, and `0`.
pub(crate) const FALSE_LITERALS: [&str; 6] = ["n", "no", "f", "false", "off", "0"];
```
I tend to use `0` and `1` personally so this surprised me.
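A sketch of what the wiring might look like with Clap's derive API (attribute details assumed):
```rust
use clap::Parser;

#[derive(Parser)]
struct Cli {
    // With `BoolishValueParser`, `UV_NATIVE_TLS=1`, `yes`, `on`, etc. parse
    // to `true`, rather than erroring on anything but `true`/`false`.
    #[arg(
        long,
        env = "UV_NATIVE_TLS",
        value_parser = clap::builder::BoolishValueParser::new(),
    )]
    native_tls: bool,
}
```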
## Summary
resolves https://github.com/astral-sh/uv/issues/3106
## Test Plan
added a simple test where the password provided in `UV_INDEX_URL` is
hidden in the output as expected.
## Summary
With an alias for backwards compatibility. It's clearer and matches the
setting in the TOML configuration (where `compile` was deemed too
vague).
## Summary
Now that we can pick up configuration values from persistent files, we
need to enable users to _disable_ those values from the CLI. For
example, if a user has `emit_index_url = true` in the configuration
file, they should be able to do `--no-emit-index-url` on the
command-line. This PR adds support for such negations, following the
same patterns we use in Ruff.
## Summary
Enables `uv` to read configuration from (e.g.)
`/Users/crmarsh/.config/uv/uv.toml` on macOS and Linux, and
`C:\Users\Charlie\AppData\Roaming\uv\uv.toml` on Windows.
This differs slightly from Ruff, which uses the `Application Support`
directory on macOS, but I've deviated here based on the preferences
expressed in https://github.com/astral-sh/ruff/issues/10739.
## Summary
This PR adds the structs and logic necessary to respect settings from
the workspace. It's a ton of code, but it's mostly mechanical. And,
believe it or not, I pulled out a few refactors in advance to trim down
the code and complexity.
The highlights are:
- All CLI arguments are now `Option`, so that we can detect whether they
were provided (i.e., we can't let Clap fill in the defaults).
- We now have a `*Settings` struct for each command, which merges the
CLI and workspace options (e.g., `PipCompileSettings`).
I've only implemented `PipCompileSettings` for now. If approved, I'll
implement the others prior to merging, but it's very mechanical and I
both didn't want to do the conversion prior to receiving feedback _and_
realized it would make the PR harder to review.
If a virtual environment does not exist, we will create one for the
duration of the invocation.
Adds an `--isolated` flag to force this behavior (ignoring an existing
virtual environment).
Adds `uv run` which executes a command in your current virtual
environment.
This is a simple first milestone, lots of remaining work and behavior.
The command is hidden.
Fixes https://github.com/astral-sh/uv/issues/3060
## Summary
Allows passing a virtual environment (the path to the directory, rather
than the path to the Python interpreter within the directory) to the
`--python` option of the `uv pip` command.
## Test Plan
Tested manually to confirm that the expected new functionality works.
The test suite still passes after this change.
I don't know how to add tests for a new feature like this. I would be
happy to do so if someone can give me some pointers on how to do it.
## Summary
Source distributions in the .tar.bz2 format are still relatively common
in existing code bases; the most common examples are the Twisted source
distributions up to version 20.3.0. Since upgrading Twisted to a more
recent version is quite often not an option for a given project, we add
support for .tar.bz2 here so that `uv` can remain a drop-in replacement
for `pip` in these projects.
## Test Plan
The feature was tested both by adding the corresponding test coverage,
and by directly installing a package of interest under a Python version
that doesn't have the corresponding wheel:
```sh
cargo run venv -p python3.8
cargo run pip install Twisted==20.3.0 --no-cache
```
The `--no-cache` argument in the example above clears any cached
information about the unsatisfiability of the requirements, which may
have been recorded during a previous attempt to install this package
with a `uv` version that didn't yet implement this feature.
## Summary
This PR adds basic struct definitions along with a "workspace" concept
for discovering settings. (The "workspace" terminology is used to match
Ruff; I did not invent it.)
A few notes:
- We discover any `pyproject.toml` or `uv.toml` file in any parent
directory of the current working directory. (We could adjust this to
look at the directories of the input files.)
- We don't actually do anything with the configuration yet; but those
PRs are large and I want this to be reviewed in isolation.
Hello! This is my first PR so do not hesitate to let me know if anything
should be done differently 🙌🏽
## Summary
This PR starts adding useful error messages and warnings when people
pass redundant or unsupported arguments to `pip list`.
For now, I've just covered `pip list --outdated`, which is currently
unsupported.
Closes https://github.com/astral-sh/uv/issues/2948
## Summary
Since the [`junction` crate](https://crates.io/crates/junction)
implements Windows-only functionality, and since the only place it is
used is guarded by `#[cfg(windows)]`,
1f626bfc73/crates/uv-fs/src/lib.rs (L65-L86)
it makes sense not to depend on this crate at all on non-Windows
platforms.
If nothing else, this makes Linux distribution packagers’ lives just a
*tiny* bit easier.
## Test Plan
On Fedora Linux 39:
```
# To avoid an error when /tmp and the working directory are on different filesystems:
$ mkdir _tmp
$ TMPDIR="${PWD}/_tmp" cargo run -p uv-dev -- fetch-python
$ cargo test
```
I don’t have access to a Windows system.
## Summary
This makes it easier to add (e.g.) JSON Schema derivations to the type.
If we have support for other dates in the future, we can generalize it
to a `UserDate` or similar.
# Avoid cache invalidation on credentials renewal
Addresses
- https://github.com/astral-sh/uv/issues/3009#issue-2239221126
## Summary
Some private package registries (e.g. AWS CodeArtifact) use short-lived
credentials. Since they are short-lived, the exact URL that is assigned
to `UV_INDEX_URL` changes frequently and with that the cache key /
hashes of these URLs. This causes the cache to be missed on token
renewal.
This PR attempts to fix this by hashing URLs for cache keys without
their user credentials.
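A minimal sketch of that normalization with the `url` crate:
```rust
use url::Url;

// Strip user credentials before hashing the URL for a cache key, so a
// renewed short-lived token maps to the same cache entry.
fn strip_credentials(url: &Url) -> Url {
    let mut url = url.clone();
    // Setting credentials only fails for URLs that cannot carry them
    // (e.g. `mailto:`), in which case there is nothing to strip.
    let _ = url.set_username("");
    let _ = url.set_password(None);
    url
}
```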
## Test Plan
I added a test that validates that:
1. Changing user credentials yields the same hash
2. Setting no user credentials yields the same hash as setting some user credentials
## Question
I'm not sure if we should also change the `hash` implementation of
`CanonicalUrl` / `RepositoryUrl`. They also call `.hash` internally.
PS. this is the first time I'm writing `rust` so if I'm wasting your
precious time, let me know and I'll try to up my skills before I ask
again. Anyway, I figured it's good to get this issue on your radar :)
## Summary
This PR adds system install tests to verify the behavior described in
#2798. It turns out this behavior _also_ affects Fedora and Amazon
Linux; we just didn't have the right conditions enabled (specifically,
you need to create the virtualenv with `python -m venv` to get these
symlinks), so the test suite was expanded to capture that.
The issue itself is also fixed by way of deduplicating the
`site-packages` entries.
Closes: https://github.com/astral-sh/uv/issues/2798
## Summary
I'm surprised we haven't hit this before, but apparently we don't allow
comments after `--index-url`, `-e` entries, etc., in the
requirements.txt parser.
Closes #3011.
## Test Plan
`cargo test`
## Summary
It turns out that if you have an environment variable set, Clap will
consider that equivalent to passing the flag, even if it's set to (e.g.)
something falsy or the default value.
So, e.g., this fails:
```shell
UV_SYSTEM_PYTHON=false uv pip install --python ./.venv/bin/python flask
```
Worse, this fails, because it thinks `--no-index` and `--index-url` are
conflicting:
```shell
export UV_INDEX_URL=https://google.com
uv pip install flask --no-index
```
This PR removes some of the conflicts, namely those related to
environment variables, such that:
- You _can_ pass mixes of `--no-index`, `--index-url`, etc. If
`--no-index` is provided, all the index URLs will be ignored (but we
won't error).
- Passing `--pre` will always enable prereleases, even if `--prerelease`
is also provided. (We could warn here, although honestly it's not
trivial because we'd need to make `--prerelease` take an optional, then
we'd lose the default argument from the `--help`.)
- You _can_ pass `--system` and `--python`. If `--python` is provided,
we use that, and ignore `--system`. (We could warn here.)
I guess the underlying problem here is that we can't differentiate
between arguments passed on the CLI and those set as environment
variables. But making bigger changes here seems out-of-scope.
Closes https://github.com/astral-sh/uv/issues/3000.
## Summary
It turns out that `normalize_path` (sourced from Cargo) has a subtle
bug. If you pass it a relative path that traverses beyond the root, it
silently drops components. So, e.g., passing `../foo/bar`, it will just
drop the leading `..` and return `foo/bar`.
This PR encodes that behavior as a `Result` and avoids using it in such
cases.
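A sketch of the `Result`-based variant (assuming a normalization loop like Cargo's):
```rust
use std::path::{Component, Path, PathBuf};

// Like Cargo's `normalize_path`, but return an error instead of silently
// dropping a `..` that traverses beyond the root of the path.
fn normalize_path(path: &Path) -> Result<PathBuf, String> {
    let mut normalized = PathBuf::new();
    for component in path.components() {
        match component {
            Component::CurDir => {}
            Component::ParentDir => {
                if !normalized.pop() {
                    return Err(format!("path `{}` escapes its root", path.display()));
                }
            }
            component => normalized.push(component),
        }
    }
    Ok(normalized)
}
```
With this, `foo/../bar` normalizes to `bar`, while `../foo/bar` errors instead of silently becoming `foo/bar`.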
Closes https://github.com/astral-sh/uv/issues/3012.
Inspired by https://github.com/astral-sh/uv/issues/2964, we now properly
log hardlink failures, e.g., when the cache is in a Docker container but
the venv is in a bind mount:
```
DEBUG Failed to hardlink `/code/venv/uv/lib/python3.12/site-packages/asgiref-3.8.1.dist-info/WHEEL` to `/root/.cache/uv/archive-v0/nnpkKgUoM3LMxcNDmEKJQ/asgiref-3.8.1.dist-info/WHEEL`, attempting to copy files as a fallback
```
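Roughly, the fallback looks like this (a sketch, not the actual code path):
```rust
use std::io;
use std::path::Path;

// Try to hardlink; on failure (e.g. a cross-device link error), log at
// debug level and copy the file instead.
fn link_or_copy(src: &Path, dst: &Path) -> io::Result<()> {
    match std::fs::hard_link(src, dst) {
        Ok(()) => Ok(()),
        Err(err) => {
            tracing::debug!(
                "Failed to hardlink `{}` to `{}`, attempting to copy files as a fallback: {err}",
                src.display(),
                dst.display(),
            );
            std::fs::copy(src, dst).map(|_| ())
        }
    }
}
```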
## Summary
I don't know if this is a good change, but `main.rs` is really large.
This just moves all the Clap arguments into their own `cli.rs` module.
Free-threaded Python reintroduces ABI flags, since it is incompatible
with regular native modules and abi3.
Tests: none yet! We're lacking CPython 3.13 no-GIL builds we can use in
CI.
My test setup:
```
PYTHON_CONFIGURE_OPTS="--enable-shared --disable-gil" pyenv install 3.13.0a5
cargo run -q -- venv -q -p python3.13 .venv3.13 --no-cache-dir && cargo run -q -- pip install -v psutil --no-cache-dir && .venv3.13/bin/python -c "import psutil"
```
Fixes #2429
## Summary
In all of these ID types, we pass values to `cache_key::digest` prior to
passing to `DistributionId` or `ResourceId`. The `DistributionId` and
`ResourceId` are then hashed later, since they're used as keys in hash
maps.
It seems wasteful to hash the value and then hash the hashed value, so
this PR modifies those structs to be enums that can take one of a fixed
set of types.
## Summary
If there are no hashes for a given package, we now return
`Validate(&[])` so that the policy is impossible to satisfy. Previously,
we returned `None`, which is always satisfied.
We don't really ever expect to hit this, because we detect this case in
the resolver and raise a different error. But if we have a bug
somewhere, it's better to fail with an error than silently let the
package through.
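To make the fail-closed behavior concrete, a sketch using the `HashPolicy` shape quoted later in this document (with a stand-in digest type):
```rust
#[derive(Debug, Clone, Copy)]
enum HashPolicy<'a> {
    None,
    Generate,
    Validate(&'a [HashDigest]),
}

#[derive(Debug, PartialEq, Eq)]
struct HashDigest(String); // stand-in for the real digest type

fn satisfied(policy: HashPolicy, computed: &[HashDigest]) -> bool {
    match policy {
        // `None` is always satisfied: the old, fail-open behavior.
        HashPolicy::None | HashPolicy::Generate => true,
        // `Validate(&[])` can never be satisfied: there is no expected
        // hash for a computed hash to match, so we fail closed.
        HashPolicy::Validate(expected) => computed.iter().any(|hash| expected.contains(hash)),
    }
}
```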
## Summary
This PR enables `--require-hashes` with unnamed requirements. The key
change is that `PackageId` becomes `VersionId` (since it refers to a
package at a specific version), and the new `PackageId` consists of
_either_ a package name _or_ a URL. The hashes are keyed by `PackageId`,
so we can generate the `RequiredHashes` before we have names for all
packages, and enforce them throughout.
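A sketch of the new key (variant names assumed):
```rust
// Hashes are keyed by package identity: a name when we have one, or the
// verbatim URL for unnamed requirements.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum PackageId {
    Name(String),
    Url(String),
}

// A package at a specific version: the old meaning of `PackageId`.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct VersionId {
    package: PackageId,
    version: String,
}
```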
Closes #2979.
When running the `uv-client` tests, I would previously get:
```
warning: field `0` is never read
--> crates/uv-configuration/src/config_settings.rs:43:27
|
43 | pub struct ConfigSettings(BTreeMap<String, ConfigSettingValue>);
| -------------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| field in this struct
|
= note: `ConfigSettings` has derived impls for the traits `Clone` and `Debug`, but these are intentionally ignored during dead code analysis
= note: `#[warn(dead_code)]` on by default
help: consider changing the field to be of unit type to suppress this warning while preserving the field numbering, or remove the field
|
43 | pub struct ConfigSettings(());
| ~~
warning: `uv-configuration` (lib) generated 1 warning
```
## Summary
Closes #2564
## Test Plan
1. Changed existing linehaul tests to leverage insta.
2. Ran tests in various linux distros (Debian, Ubuntu, Centos, Fedora,
Alpine) to ensure they also pass locally again.
---------
Co-authored-by: konstin <konstin@mailbox.org>
The sync scenarios script is broken, so I did the updates manually
```
$ ./scripts/sync_scenarios.sh
Setting up a temporary environment...
Using Python 3.12.1 interpreter at: /home/konsti/projects/uv/.venv/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
× No solution found when resolving dependencies:
╰─▶ Because docutils==0.21.post1 is unusable because the package metadata was inconsistent and you require docutils==0.21.post1, we can conclude that the requirements are unsatisfiable.
hint: Metadata for docutils==0.21.post1 was inconsistent:
Package metadata version `0.21` does not match given version `0.21.post1`
```
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Similar to `Revision`, we now store IDs in the `Archive` entries rather
than absolute paths. This makes the cache robust to moves, etc.
Closes https://github.com/astral-sh/uv/issues/2908.
## Summary
This PR formalizes some of the concepts we use in the cache for
"pointers to things".
In the wheel cache, we have files like
`annotated_types-0.6.0-py3-none-any.http`. This represents an unzipped
wheel, cached alongside an HTTP caching policy. We now have a struct for
this to encapsulate the logic: `HttpArchivePointer`.
Similarly, we have files like `annotated_types-0.6.0-py3-none-any.rev`.
This represents an unzipped local wheel, alongside a timestamp. We
now have a struct for this to encapsulate the logic:
`LocalArchivePointer`.
We have similar structs for source distributions too.
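In sketch form (fields assumed; the real structs differ):
```rust
use std::time::SystemTime;

// Both pointers resolve to an entry in the archives directory, but carry
// different freshness metadata.
struct HttpArchivePointer {
    cache_policy: Vec<u8>, // serialized HTTP caching policy
    archive: String,       // ID of the unzipped wheel in the archives directory
}

struct LocalArchivePointer {
    timestamp: SystemTime, // timestamp of the local wheel
    archive: String,
}
```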
## Summary
This PR enables hash generation for URL requirements when the user
provides `--generate-hashes` to `pip compile`. While we already include
hashes from the registry, today we omit hashes for URLs.
To power hash generation, we introduce a `HashPolicy` abstraction:
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HashPolicy<'a> {
/// No hash policy is specified.
None,
/// Hashes should be generated (specifically, a SHA-256 hash), but not validated.
Generate,
/// Hashes should be validated against a pre-defined list of hashes. If necessary, hashes should
/// be generated so as to ensure that the archive is valid.
Validate(&'a [HashDigest]),
}
```
All of the methods on the distribution database now accept this policy,
instead of accepting `&'a [HashDigest]`.
Closes #2378.
## Summary
This PR modifies the distribution database to return both the
`Metadata23` and the computed hashes when clients request metadata.
No behavior changes, but this will be necessary to power
`--generate-hashes`.
## Summary
This represents a change to `--require-hashes` in the event that we
don't find a matching hash from the registry. The behavior in this PR is
closer to pip's.
Prior to this PR, if a distribution had no reported hash, or only
mismatched hashes, we would mark it as incompatible. Now, we mark it as
compatible, but we use the hash-agreement as part of the ordering, such
that we prefer any distribution with a matching hash, then any
distribution with no hash, then any distribution with a mismatched hash.
As a result, if an index reports incorrect hashes, but the user provides
the correct one, resolution now succeeds, where it would've failed.
Similarly, if an index omits hashes altogether, but the user provides
the correct one, resolution now succeeds, where it would've failed.
If we end up picking a distribution whose hash ultimately doesn't match,
we'll reject it later, after resolution.
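Conceptually, the ordering can be captured as a sketch like the following (names assumed), where deriving `Ord` makes candidates sort by hash agreement:
```rust
// Declaration order defines the derived ordering: a matched hash beats a
// missing hash, which beats a mismatched hash.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum HashComparison {
    Matched,    // the index hash agrees with a user-provided hash
    Missing,    // the index reported no hash at all
    Mismatched, // the index hash disagrees with every user-provided hash
}
```
Sorting candidates by a key that starts with `HashComparison` then prefers matching hashes first, without rejecting the rest outright.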
## Summary
This PR adds support for hash-checking mode in `pip install` and `pip
sync`. It's a large change, both in terms of the size of the diff and
the modifications in behavior, but it's also one that's hard to merge in
pieces (at least, with any test coverage) since it needs to work
end-to-end to be useful and testable.
Here are some of the most important highlights:
- We store hashes in the cache. Where we previously stored pointers to
unzipped wheels in the `archives` directory, we now store pointers with
a set of known hashes. So every pointer to an unzipped wheel also
includes its known hashes.
- By default, we don't compute any hashes. If the user runs with
`--require-hashes`, and the cache doesn't contain those hashes, we
invalidate the cache, redownload the wheel, and compute the hashes as we
go. For users that don't run with `--require-hashes`, there will be no
change in performance. For users that _do_, the only change will be if
they don't run with `--generate-hashes` -- then they may see some
repeated work between resolution and installation, if they use `pip
compile` then `pip sync`.
- Many of the distribution types now include a `hashes` field, like
`CachedDist` and `LocalWheel`.
- Our behavior is similar to pip, in that we enforce hashes when pulling
any remote distributions, and when pulling from our own cache. Like pip,
though, we _don't_ enforce hashes if a distribution is _already_
installed.
- Hash validity is enforced in a few different places:
1. During resolution, we enforce hash validity based on the hashes
reported by the registry. If we need to access a source distribution,
though, we then enforce hash validity at that point too, prior to
running any untrusted code. (This is enforced in the distribution
database.)
2. In the install plan, we _only_ add cached distributions that have
matching hashes. If a cached distribution is missing any hashes, or the
hashes don't match, we don't return them from the install plan.
3. In the downloader, we _only_ return distributions with matching
hashes.
4. The final combination of "things we install" are: (1) the wheels from
the cache, and (2) the downloaded wheels. So this ensures that we never
install any mismatching distributions.
- Like pip, if `--require-hashes` is provided, we require that _all_
distributions are pinned with either `==` or a direct URL. We also
require that _all_ distributions have hashes.
There are a few notable TODOs:
- We don't support hash-checking mode for unnamed requirements. These
should be _somewhat_ rare, though, since `pip compile` never outputs
unnamed requirements. I can fix this; it's just some additional work.
- We don't automatically enable `--require-hashes` when a hash exists in
the requirements file. We require `--require-hashes`.
Closes #474.
## Test Plan
I'd like to add some tests for registries that report incorrect hashes,
but otherwise: `cargo test`
## Summary
This lets us remove circular dependencies (in the future, e.g., #2945)
that arise from `FlatIndex` needing a bunch of resolver-specific
abstractions (like incompatibilities, required hashes, etc.) that aren't
necessary to _fetch_ the flat index entries.
See https://github.com/astral-sh/uv/issues/2617
Note this also includes:
- #2918
- #2931 (pending)
A first step towards Python toolchain management in Rust.
First, we add a new crate to manage Python download metadata:
- Adds a new `uv-toolchain` crate
- Adds Rust structs for Python version download metadata
- Duplicates the script which downloads Python version metadata
- Adds a script to generate Rust code from the JSON metadata
- Adds a utility to download and extract the Python version
I explored some alternatives, like a build script using things like
`serde` and `uneval` to automatically construct the code from our
structs, but deemed it too heavy. Unlike Rye, I don't generate the Rust
directly from the web requests; there's an intermediate JSON layer to
speed up iteration on the Rust types.
Next, we add a `uv-dev` command `fetch-python` to download Python
versions per the bootstrapping script.
- Downloads a requested version or reads from `.python-versions`
- Extracts to `UV_BOOTSTRAP_DIR`
- Links executables for path extension
This command is not really intended to be user facing, but it's a good
PoC for the `uv-toolchain` API. Hash checking (via the sha256) isn't
implemented yet, we can do that in a follow-up.
Finally, we remove the `scripts/bootstrap` directory, update CI to use
the new command, and update the CONTRIBUTING docs.
<img width="1023" alt="Screenshot 2024-04-08 at 17 12 15"
src="https://github.com/astral-sh/uv/assets/2586601/57bd3cf1-7477-4bb8-a8e9-802a00d772cb">
## Summary
If the user runs with `--generate-hashes`, and the lockfile doesn't
contain _any_ hashes for a package (despite being pinned), we should add
new hashes. This mirrors running `uv pip compile --generate-hashes` for
the first time with an existing lockfile.
Closes #2962.
To get more insights into test performance, allow instrumenting tests
with tracing-durations-export.
Usage:
```shell
# A single test
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo test --features tracing-durations-export --test pip_install_scenarios no_binary -- --exact
# All tests
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo nextest run --features tracing-durations-export
```
Then we can e.g. look at
`target/test-traces/pip_install_scenarios::no_binary.svg` and see the
builds it performs:

## Summary
If we build a source distribution from the registry, and the version
doesn't match that of the filename, we should error, just as we do for
mismatched package names. However, we should also backtrack here, which
we didn't previously.
Closes https://github.com/astral-sh/uv/issues/2953.
## Test Plan
Verified that `cargo run pip install docutils --verbose --no-cache
--reinstall` installs `docutils==0.21` instead of the invalid
`docutils==0.21.post1`.
In the logs, I see:
```
WARN Unable to extract metadata for docutils: Package metadata version `0.21` does not match given version `0.21.post1`
```
## Summary
This updates to the version of axoupdater used in cargo-dist 0.13.0's
own selfupdate command, with all relevant fixes for platforms. It also
tentatively introduces a mildly dangerous self-runtest that runs `uv
self update` and checks that the binary is installed and executable.
I *believe* some adjustments need to be made to your CI to have this new
test run, because it requires the `self-update` feature to be enabled,
and I didn't want to just start messing with how you do feature coverage
in your CI. **As a result I haven't yet had a chance to actually fully
run this in CI**, though I've locally tested it on windows (with the
guard disabled).
## Test Plan
Most of the machinery here is provided by axoupdater itself (cargo-dist
also includes a variant of these tests in its codebase). This initial
implementation has a couple major limitations:
* This is For Reals modifying the system that runs the test (so it's off
unless it detects it's running in CI, and if you want variations on this
test they'll need to be [run in
serial](5e7826f7b0/cargo-dist/tests/cli-tests.rs (L235))).
Since many of the testing issues were surrounding precise details of
Actual Deployed Executions, this seemed worth the tradeoff.
* The actual installer *script* it's ultimately invoking is the one you
last published, and *not* the one that cargo-dist will make when you
next publish.
We're already working on implementing some logic for "get cargo-dist to
generate a fresh installer script too", which is in fact the basis of a
huge amount of cargo-dist's own testsuite. Now that we're dogfooding
this stuff, it should be quite hard for this stuff to break without
cargo-dist's own codebase noticing it first.
<!-- How was it tested? -->
## Summary
The prefetcher tallies the number of times we tried a given package, and
then once we hit a threshold, grabs the version map, assuming it's
already been fetched. For direct URL distributions, though, we don't
have a version map! And there's no need to prefetch.
Closes https://github.com/astral-sh/uv/issues/2941.
## Summary
Right now, we have a `Hashes` representation that looks like:
```rust
/// A dictionary mapping a hash name to a hex encoded digest of the file.
///
/// PEP 691 says multiple hashes can be included and the interpretation is left to the client.
#[derive(Debug, Clone, Eq, PartialEq, Default, Deserialize)]
pub struct Hashes {
pub md5: Option<Box<str>>,
pub sha256: Option<Box<str>>,
pub sha384: Option<Box<str>>,
pub sha512: Option<Box<str>>,
}
```
It stems from the PyPI API, which returns a dictionary of hashes.
We tend to pass these around as a vector of `Vec<Hashes>`. But it's a
bit strange because each entry in that vector could contain multiple
hashes. And it makes it difficult to ask questions like "Is
`sha256:ab21378ca980a8` in the set of hashes"?
This PR instead treats `Hashes` as the PyPI-internal type, and uses a
new `Vec<HashDigest>` everywhere in our own APIs.
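A sketch of the flattening, with a hypothetical `HashDigest` (one algorithm, one digest):
```rust
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct HashDigest {
    pub algorithm: &'static str,
    pub digest: Box<str>,
}

impl Hashes {
    /// Flatten the per-algorithm fields into a flat list, so that "is
    /// `sha256:ab21378ca980a8` in the set of hashes?" becomes a simple scan.
    pub fn into_digests(self) -> Vec<HashDigest> {
        [
            ("md5", self.md5),
            ("sha256", self.sha256),
            ("sha384", self.sha384),
            ("sha512", self.sha512),
        ]
        .into_iter()
        .filter_map(|(algorithm, digest)| digest.map(|digest| HashDigest { algorithm, digest }))
        .collect()
    }
}
```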
Needed to prevent circular dependencies in my toolchain work (#2931). I
think this is probably a reasonable change as we move towards persistent
configuration too?
Unfortunately `BuildIsolation` needs to be in `uv-types` to avoid
circular dependencies still. We might be able to resolve that in the
future.
Elides Python patch versions from the test suite unless the test
specifically requests a patch version.
This reduces some toil when not using our bootstrapped Python versions.
Partially addresses https://github.com/astral-sh/uv/issues/2165 though
we'll need changes to the scenario tests to really support their case.
## Summary
When you specify a source distribution via a path, it can either be a
path to an archive (like a `.tar.gz` file), or a source tree (a
directory). Right now, we handle both paths through the same methods in
the source database. This PR splits them up into separate handlers.
This will make hash generation a little easier, since we need to
generate hashes for archives, but _can't_ generate hashes for source
trees.
It also means that we can now store the unzipped source distribution in
the cache (in the case of archives), and avoid unzipping the source
distribution needlessly on every invocation; and, overall, it lets us
enforce clearer expectations between the two routes (e.g., what errors
are possible vs. not), at the cost of duplicating some code.
Closes #2760 (incidentally -- not exactly the motivation for the change,
but it did accomplish it).
## Summary
I think this is a much clearer name for this concept: the set of
"versions" of a given wheel or source distribution. We also use
"Manifest" elsewhere to refer to the set of requirements, constraints,
etc., so this was overloaded.
## Summary
We have a heuristic in `File` that attempts to detect whether a URL is
absolute or relative. However, `contains("://")` is prone to false
positives. In the linked issues, the URLs look like:
```
/packages/5a/d8/4d75d1e4287ad9d051aab793c68f902c9c55c4397636b5ee540ebd15aedf/pytz-2005k.tar.bz2?hash=597b596dc1c2c130cd0a57a043459c3bd6477c640c07ac34ca3ce8eed7e6f30c&remote=4d75d1e4287636b5ee540ebd15aedf/pytz-2005k.tar.bz2#sha256=597b596dc1c2c130cd0a57a043459c3bd6477c640c07ac34ca3ce8eed7e6f30c
```
Which is relative, but includes `://`.
Instead, we should determine whether the URL has a _scheme_ which
matches the `Url` crate internally.
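A sketch of a scheme check consistent with RFC 3986 (which is what the `Url` crate implements):
```rust
// A URL is absolute only if it starts with a valid scheme followed by `:`,
// i.e. `ALPHA *( ALPHA / DIGIT / "+" / "-" / "." )` per RFC 3986. By
// contrast, `contains("://")` matches relative URLs whose query or fragment
// happens to contain that substring.
fn has_scheme(raw: &str) -> bool {
    let Some((scheme, _rest)) = raw.split_once(':') else {
        return false;
    };
    let mut chars = scheme.chars();
    chars.next().is_some_and(|c| c.is_ascii_alphabetic())
        && chars.all(|c| c.is_ascii_alphanumeric() || matches!(c, '+' | '-' | '.'))
}
```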
Closes https://github.com/astral-sh/uv/issues/2899.
## Summary
Right now, the path-based wheel cache just looks at the symlink to the
archives directory, checks the timestamp on it, and continues with that
symlink as long as the timestamp is up-to-date.
The HTTP-based wheel cache, meanwhile, uses an intermediary `.http` file, which
includes the HTTP caching information. The `.http` file's payload is
just a path pointing to an entry in the archives directory.
This PR modifies the path-based codepaths to use a similar cache file,
which stores a timestamp along with a path to the archives directory.
The main advantage here is that we can add other data to this cache file
(namely, hashes in the future).
## Test Plan
Beyond existing tests, I also verified that this doesn't require a
version bump:
```
git checkout main
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall
git checkout charlie/manifest
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall --refresh
```
## Summary
I think this is kind of just an oversight. If a wheel is available via
`--find-links`, and the index is "local", we never find it in the cache.
## Test Plan
`cargo test`
## Summary
In all cases, we unzip these immediately after returning. By moving the
unzipping into the database, we can remove a bunch of code (coming in a
separate PR), and pave the way for hash-checking, since hash generation
will _also_ happen in the database, and splitting the caching layers
across the database and the unzipper creates complications.
Closes #2863.
With pubgrub being fast for complex ranges, we can now compute the next
n candidates without taking a performance hit. This speeds up the
cold-cache `urllib3<1.25.4` `boto3` case from maybe 40s-50s to ~2s. See
the docstrings for
details on the heuristics.
**Before**

**After**

---
We need two parts to the prefetching: first we look for compatible
versions, then we fall back to flat next versions. After we've selected
a boto3 version, there is only one compatible botocore version
remaining, so we won't find other compatible candidates for prefetching.
We see this as a pattern where we only prefetch boto3 (stacked bars),
but not botocore (sequential requests between the stacked bars).

The risk is that we're completely wrong with the guess and cause a lot
of useless network requests. I think this is acceptable since this
mechanism only triggers when we're already on the bad path, and we
should simply have fetched all versions after some seconds (assuming a
fast index like PyPI).
---
It would be even better if the pubgrub state were copy-on-write so we
could simulate more progress than we actually have; currently we're
guessing what the next version is, which could be completely wrong, but
I think this is still a valuable heuristic.
Fixes #170.