This test was, I believe, relying on the XDG_DATA_HOME environment
variable not being set. When it is set, as is the case in my
environment, `uv tool run` will respect it and install `black` for this
particular test into my actual XDG_DATA_HOME directory. We fix this by
setting `XDG_DATA_HOME` explicitly.
By using `Box::pin(run())` we can reduce the artificial stack size for
running tests on Windows in debug mode from 8MB to 2MB. I've checked that
1MB (i.e., no custom stack size) still fails tests, e.g.
`add_workspace_editable`.
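For illustration, a minimal sketch of the pattern (the runtime setup and entry-point names are stand-ins, not uv's actual `main`): boxing the future moves its large state machine to the heap, so the thread's stack can stay small.
```rust
use std::io;

async fn run() -> io::Result<()> {
    // ... the real CLI logic would live here ...
    Ok(())
}

fn main() -> io::Result<()> {
    let runtime = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()?;
    // `Box::pin` heap-allocates the future instead of reserving space for it
    // in this stack frame.
    runtime.block_on(Box::pin(run()))
}
```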
This test fails most of the time when I run nextest locally,
failing the overall test run, so I'm deactivating it for now. I'm still
not sure what the root cause is. It seems to have something to do
with Python's stdin not being ready immediately after we spawn the
process, and with us being too fast.
## Summary
Resolves #4834
## Test Plan
```sh
# 3.12.3 is an `install_only` archive
$ cargo run -- python install --preview --force 3.12.3
# 3.9.4 only has a `full` archive
$ cargo run -- python install --preview --force 3.9.4
```
## Summary
Partially closes #1917
This PR picks up on some of the great work from #1864 and opts to keep
`panic_immediate_abort` (for size reasons). I split the PR into
isolated commits in case we want to separate or cherry-pick them out.
1. The first commit ports almost all of the std changes from that PR into
this PR. Binary sizes stayed the same at ~16 KB.
2. The second commit migrates our existing usage of windows-sys to
windows for safer FFI calls that return `Result`s. It also shrinks the
large unsafe blocks down to the actual unsafe calls, and switches
some areas to use std, such as the getenv port from launcher.c (which
seemed buggy!). In addition, this also adds more error checking in order
to match some missing assertions from distlib's launcher.c. Note that, due
to the additional .text data, the binary sizes increased to ~20.5 KB, but
we can cut back on some of the added error messages as needed.
3. The third commit switches to using xwin for building on all 3
supported trampoline targets for sanity, and adds a CI bloat check for
core::fmt and panic as a precaution. Sadly, this will invalidate the
xwin cache on the first run.
3. The third commit switches to using xwin for building on all 3
supported trampoline targets for sanity, and adds a CI bloat check for
core::fmt and panic as a precaution. Sadly, this will invalidate the
xwin cache on the first run.
## Test Plan
Most changes were tested on a couple of local GUI and console apps; I
also tested some of the error states manually by using SetLastError at
different points in the code and/or passing in invalid handles.
I'm not sure how far we can get with migrating some of the other calls
without increasing binary size substantially. An initial attempt at
using std::path didn't seem so bad size-wise when I tried it (~1 KB). In
other cases, such as std::process::exit, it added ~10 KB to the total
binary size.
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
If you pass `--isolated` but no `--with`, at present, we don't create
any environment (so `--python` isn't respected, and `python` will fail
entirely if it wasn't already on your PATH). Now, we create a base
environment under `--isolated` even if `--with` wasn't provided.
Closes https://github.com/astral-sh/uv/issues/4846.
Closes https://github.com/astral-sh/uv/issues/4776.
## Summary
Like https://github.com/astral-sh/uv/pull/4808 but with a few more
changes. I suspect this will require some bikeshedding but I find the
use of "installation" and "installed" in the same sentence to be kind of
a lot.
## Summary
There are a few ideas at play here:
1. pip always strips versions to the release when evaluating against a
`Requires-Python`, so we now do the same. That means, e.g., using
`3.13.0b0` will be accepted by a project with `Requires-Python: >=
3.13`, which does _not_ adhere to PEP 440 semantics but is somewhat
intuitive.
2. Because we know we'll only be evaluating against release-only
versions, we can use different semantics in PubGrub that let us collapse
ranges. For example, `python_version >= '3.10' or python_version <
'3.10'` can be collapsed to the truthy marker.
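As a toy illustration of idea (1), string-level only (uv's real `Version` type handles this properly): stripping a version to its release segment before the `Requires-Python` comparison.
```rust
fn strip_to_release(version: &str) -> &str {
    // "3.13.0b0" -> "3.13.0": cut at the first pre/post/dev/local marker.
    let release = version
        .split(|c: char| c.is_ascii_alphabetic() || c == '+' || c == '-')
        .next()
        .unwrap_or(version);
    release.trim_end_matches('.')
}

fn main() {
    assert_eq!(strip_to_release("3.13.0b0"), "3.13.0");
    assert_eq!(strip_to_release("3.13.0.post1"), "3.13.0");
    // A release-only version passes through unchanged.
    assert_eq!(strip_to_release("3.13"), "3.13");
}
```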
Closes https://github.com/astral-sh/uv/issues/4714.
Closes https://github.com/astral-sh/uv/issues/4272.
Closes https://github.com/astral-sh/uv/issues/4719.
Remove wheels from the lockfile that don't match the required python
version. For example, we remove
`charset_normalizer-3.3.2-cp311-cp311-win_amd64.whl` when we have
`requires-python = ">=3.12"`.
Our snapshots barely show changes, since we avoid the large binaries for
which this matters. Here are 3 real-world `uv.lock` before/after
comparisons showing the large difference:
*
[warehouse](https://gist.github.com/konstin/9a1ed6a32b410e250fcf4c6ea8c536a5)
(5677 -> 4214)
*
[transformers](https://gist.github.com/konstin/5636281b5226f64aa44ce3244d5230cd)
(6484 -> 5816)
*
[github-wikidata-bot](https://gist.github.com/konstin/ebbd7b9474523aaa61d9a8945bc02071)
(793 -> 454)
We only remove wheels we are certain don't match the python version and
still keep those with unknown tags. We could remove even more wheels by
also considering other markers, e.g. removing linux wheels for a
windows-only dep, but we would trade complex, easy-to-get-wrong logic
for diminishing returns.
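A toy sketch of the conservative filter (not uv's real tag machinery; the helper and its parameters are made up for illustration), with `requires-python = ">=3.12"` expressed as a minor-version floor:
```rust
fn wheel_matches_requires_python(interpreter_tag: &str, min_minor: u32) -> bool {
    // e.g. "cp311" -> CPython 3.11; anything we can't parse is kept.
    let Some(minor) = interpreter_tag.strip_prefix("cp3") else {
        return true;
    };
    match minor.parse::<u32>() {
        Ok(minor) => minor >= min_minor,
        // Unknown tags are kept: we only drop wheels we're certain about.
        Err(_) => true,
    }
}

fn main() {
    assert!(!wheel_matches_requires_python("cp311", 12)); // pruned
    assert!(wheel_matches_requires_python("cp312", 12)); // kept
    assert!(wheel_matches_requires_python("py3", 12)); // unknown: kept
}
```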
## Summary
Check the sha256 checksum when downloading a managed python toolchain.
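A sketch of the check using the `sha2` and `hex` crates, assuming the archive is already buffered in memory (uv streams the download in practice, and the function name is illustrative):
```rust
use sha2::{Digest, Sha256};

fn verify_sha256(archive: &[u8], expected_hex: &str) -> Result<(), String> {
    let computed = hex::encode(Sha256::digest(archive));
    if computed.eq_ignore_ascii_case(expected_hex) {
        Ok(())
    } else {
        // Mirrors the error shape shown in the test plan below.
        Err(format!(
            "Hash mismatch\n  Expected:\n    {expected_hex}\n  Computed:\n    {computed}"
        ))
    }
}
```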
## Test Plan
```sh
$ cargo run -- python install 3.12
warning: `uv python install` is experimental and may change without warning.
Looking for installation Python 3.12.3 (any-3.12.3-any-any-any)
Downloading cpython-3.12.3-windows-x86_64-none
Installed Python 3.12.3 to C:\Users\jo\AppData\Roaming\uv\data\python\cpython-3.12.3-windows-x86_64-none
Installed 1 installation in 6s
$ cargo run -- python uninstall 3.12
$ # manually change the hash in `crates/uv-python/src/downloads.inc`
$ cargo run -- python install 3.12
warning: `uv python install` is experimental and may change without warning.
Looking for installation Python 3.12 (any-3.12-any-any-any)
Downloading cpython-3.12.3-windows-x86_64-none
error: Hash mismatch for `cpython-3.12.3-windows-x86_64-none`
Expected:
xx
Computed:
776568c92c5f3b47dbf5f17c1c58578f70d75a32654419a158aa8bdc6f95b09a
```
## Summary
This seems like another good candidate for environment caching. If you
run a script repeatedly, we can just use the existing cached
environment.
## Summary
Addresses https://github.com/astral-sh/uv/issues/4330, to reduce
duplication in the client creation logic.
## Test Plan
https://github.com/astral-sh/uv/pull/4729#issuecomment-2204681655
## Summary
The resolver sometimes starts HTTP requests that end up not being
necessary. When dropping the Tokio runtime before exiting we currently
wait for those to complete. This can cause noticeable hangs in the CLI,
particularly when the runtime is blocked on slow DNS resolution.
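One way to avoid the wait, as a sketch (uv's actual change may differ): Tokio's `shutdown_background` drops the runtime without blocking on outstanding tasks, so a stuck DNS lookup can't hold up process exit.
```rust
use std::process::ExitCode;

fn main() -> ExitCode {
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .expect("failed to build runtime");

    let code = runtime.block_on(async {
        // ... resolve; some speculative HTTP requests may still be in flight ...
        ExitCode::SUCCESS
    });

    // `drop(runtime)` would wait for those requests to finish; this does not.
    runtime.shutdown_background();
    code
}
```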
Resolves https://github.com/astral-sh/uv/issues/4599.
## Test Plan
This change resolves any reproducible hangs for me locally.
## Summary
The basic strategy:
- When the user does `uv tool run`, we resolve the `from` and `with`
requirements (always).
- After resolving, we generate a hash of the requirements. For now, I'm
just converting to a lockfile and hashing _that_, but that's an
implementation detail.
- Once we have a hash, we _also_ hash the interpreter.
- We then store environments in
`${CACHE_DIR}/${INTERPRETER_HASH}/${RESOLUTION_HASH}`.
Some consequences:
- We cache based on the interpreter, so if you request a different
Python, we'll create a new environment (even if they're compatible).
This has the nice side-effect of ensuring that we don't use environments
for interpreters that were later deleted.
- We cache the `from` and `with` together. In practice, we may want to
cache them separately, then layer them? But this is also an
implementation detail that we could change later.
- Because we use the lockfile as the cache key, we will invalidate the
cache when the format changes. That seems ok, but we could improve it in
the future by generating a stable hash from a lockfile that's
independent of the schema.
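A hypothetical sketch of that layout (the hashing scheme and names are stand-ins, not uv's implementation):
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::path::{Path, PathBuf};

fn environment_dir(cache_dir: &Path, interpreter_hash: &str, lockfile: &str) -> PathBuf {
    let mut hasher = DefaultHasher::new();
    lockfile.hash(&mut hasher);
    let resolution_hash = format!("{:016x}", hasher.finish());
    // ${CACHE_DIR}/${INTERPRETER_HASH}/${RESOLUTION_HASH}
    cache_dir.join(interpreter_hash).join(resolution_hash)
}
```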
Closes https://github.com/astral-sh/uv/issues/4752.
Closes #4653
## Summary
Adds the tool version to the list command right beside the tool name
```
$ uv tool list
black v24.2.0
```
Following the proposed format discussed in #4653
## Test Plan
`cargo test tool_list`
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
The changes in https://github.com/astral-sh/uv/pull/4708 caused an
overflow of the 1MB default stack size on Windows (in debug mode only)
during clap parsing. This meant that even trivial wrong-argument tests
would fail without increasing the stack size. As a remedy, we box the
clap types.
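In miniature, why boxing helps (simplified stand-ins, not uv's real clap types): an enum is as large as its largest variant, so one giant settings struct inflates every stack frame that holds the enum by value.
```rust
struct PipArgs {
    // Stands in for the many flattened settings structs built during parsing.
    _settings: [u64; 4096],
}

#[allow(dead_code)]
enum Commands {
    Pip(PipArgs), // as large as `PipArgs` itself
    Venv,
}

#[allow(dead_code)]
enum BoxedCommands {
    Pip(Box<PipArgs>), // just a pointer
    Venv,
}

fn main() {
    println!("unboxed: {} bytes", std::mem::size_of::<Commands>()); // ~32 KiB
    println!("boxed:   {} bytes", std::mem::size_of::<BoxedCommands>()); // pointer-sized
}
```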
This fell out of my investigation of
https://github.com/astral-sh/uv/issues/4774 but the bug was fixed by the
reporter in #4775
- Adds support for `GH_TOKEN` authentication again — basically needed to
avoid rate limits when hacking on this.
- Clarifies some handling and logging of flavors
## Summary
This doesn't cache the tool environment; rather, it just uses the `tool
install` environment if it satisfies the request.
Closes https://github.com/astral-sh/uv/issues/4742.
Previously this displayed:
```
❯ cargo run -q -- --help
The command line interface for the uv binary.
Usage: uv [OPTIONS] <COMMAND>
```
This is... weird. Here I remove it entirely. I could see adding
`long_about` text being helpful in the future.
Fix #4774.
## Summary
Change the Python interpreter installed on Linux with `uv python` to an
optimized one.
## Test Plan
I ran the following command on Linux (glibc) to confirm that an
optimized (not debug built) Python is installed.
```bash
# install python
uv python install 3.12.3
# check build type
uv run python -c "import sysconfig;print(sysconfig.get_config_var('Py_DEBUG'))"
0
```
## Summary
This used to be necessary because we purged the cache in the
`InstallPlan` if the user passed `--reinstall`. _However_, we later
changed the cache to be append-only.
## Test Plan
I ran through the test plan in https://github.com/astral-sh/uv/pull/933,
which includes an integration test and running `uv pip install
--reinstall` with:
```text
setuptools
devpi @ e334eb4dc9bb023329e4b610e4515b/devpi-2.2.0.tar.gz
```
Whew this is a lot.
The user-facing changes are:
- `uv toolchain` to `uv python` e.g. `uv python find`, `uv python
install`, ...
- `UV_TOOLCHAIN_DIR` to `UV_PYTHON_INSTALL_DIR`
- `<UV_STATE_DIR>/toolchains` to `<UV_STATE_DIR>/python` (with
[automatic
migration](https://github.com/astral-sh/uv/pull/4735/files#r1663029330))
- User-facing messages no longer refer to toolchains, instead using
"Python", "Python versions" or "Python installations"
The internal changes are:
- `uv-toolchain` crate to `uv-python`
- `Toolchain` no longer referenced in type names
- Dropped unused `SystemPython` type (previously replaced)
- Clarified the type names for "managed Python installations"
- (more little things)
Closes #4654
## Summary
The purpose of this is to show the entrypoints of each tool when running
`uv tool list` as below:
```
$ uv tool list
black
black
blackd
```
I used the proposed formatting as written in #4653 by @blueraft.
I had to use spaces instead of tabs to make the test pass: the test uses
a raw string, and I did not manage to make it pass when escaping the tab
in the list.rs file, so I used spaces everywhere.
I also had a deeper look into #4653, but it is more difficult, as we
need to get the version of the tool into the Tool object; I will
continue on that one later.
Please tell me if anything else is needed. I tried to follow the
contribution guidelines, but I might have forgotten something.
Have a great day!
## Test Plan
`cargo clippy`, then by using the local version of uv as described in
the README.
```
my-computer :~/mypath/uv$ cargo run -- tool list
Compiling uv-cli v0.0.1 (/mypath/uv/crates/uv-cli)
Compiling uv v0.2.18 (/mypath/uv/crates/uv)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 18.69s
Running `target/debug/uv tool list`
warning: `uv tool list` is experimental and may change without warning.
black
black
blackd
isort
isort
isort-identify-imports
```
and
`cargo test tool_list`
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
For now the semantics are such that if the requested requirements from
the command line don't match the receipt (or if any `--reinstall` or
`--upgrade` is requested), we proceed with an install, passing the
`--reinstall` and `--upgrade` to the underlying Python environment.
This may lead to some unintuitive behaviors, but it's simplest for now.
For example:
- `uv tool install black<24` followed by `uv tool install black
--upgrade` will install the latest version of `black`, removing the
`<24` constraint.
- `uv tool install black --with black-plugin` followed by `uv tool
install black` will remove `black-plugin`.
Closes https://github.com/astral-sh/uv/issues/4659.
In #3514 and #2755, users had intermittent network errors, but it was
not always clear whether we had already retried these requests or not.
Building upon https://github.com/TrueLayer/reqwest-middleware/pull/159,
this PR adds the number of retries to the error message, so we can see
at first glance where we're missing retries and where we might need to
change retry settings.
Example error trace:
```
Could not connect, are you offline?
Caused by: Request failed after 3 retries
Caused by: error sending request for url (https://pypi.org/simple/uv/)
Caused by: client error (Connect)
Caused by: dns error: failed to lookup address information: Name or service not known
Caused by: failed to lookup address information: Name or service not known
```
This code is ugly, since I'm missing a better pattern for attaching
context to reqwest middleware errors in
https://github.com/TrueLayer/reqwest-middleware/pull/159.
## Summary
Given:
```text
numpy >=1.26 ; python_version >= '3.9'
numpy <1.26 ; python_version < '3.9'
```
When resolving for Python 3.8, we need to narrow the `requires-python`
requirement in the top branch of the fork, because all `numpy >=1.26`
releases require Python 3.9 or later -- but we know (in that branch)
that we only need to _solve_ for Python 3.9 or later.
Closes https://github.com/astral-sh/uv/issues/4669.
## Summary
This is required to solve https://github.com/astral-sh/uv/issues/4669,
because the `Requires-Python` version can now vary across a resolution.
For example, within certain forks, we might have a more narrow range,
which would allow us to use distributions that would not be allowed for
the global resolution.
This should be fine because `requires-python` is part of the package
metadata, so it should be consistent between files within a package
version. As such, there shouldn't be any risk that we incorrectly
prioritize distributions by omitting this information.
(To be more specific, the risk is something like: we prioritize some
wheel over a source distribution within a package-version, so we don't
track the source distribution at all. Then, later, when we choose a
candidate, we see that the wheel doesn't meet the `Requires-Python`
requirement, even though the source distribution _would've_ met it. If
files within a distribution could have varied support, this would be a
real risk.)
I think `--toolchain-preference system` is sufficiently clear and
`--toolchain-preference prefer-system` is excessively verbose. This was
discussed in the original pull request at
https://github.com/astral-sh/uv/pull/4424 but because we had a case for
preferring "installed managed" toolchains I was hesitant to change it.
Now that I've dropped that in #4601, I think we can drop the prefix.
Adds a `toolchain-fetch` option alongside `toolchain-preference` with
`automatic` (default) and `manual` values allowing automatic toolchain
fetches to be disabled (replaces
https://github.com/astral-sh/uv/pull/4425). When `manual`, toolchains
must be installed with `uv toolchain install`.
Note this was previously implemented with `if-necessary`, `always`,
`never` variants but the interaction between this and
`toolchain-preference` was too confusing. By reducing to a binary
option, things should be clearer. The `if-necessary` behavior moved to
`toolchain-preference=installed`. See
https://github.com/astral-sh/uv/pull/4601#discussion_r1657839633 and
https://github.com/astral-sh/uv/pull/4601#discussion_r1658658755
Closes https://github.com/astral-sh/uv/issues/4476
Originally, this used the changes in #4642 to invoke `main()` from a
`uvx` binary. This had the benefit of `uvx` being entirely standalone at
the cost of doubling our artifact size. We think that's the incorrect
trade-off.
Instead, we assume `uvx` is always next to `uv` and create a tiny binary
(<1MB) that invokes `uv` in a child process. This seems preferable to a
`cargo-dist` alias because we have more control over it. This binary
should "just work" for all of our cargo-dist distributions and
installers, but we'll need to add a new entry point for our PyPI
distribution. I'll probably tackle support there separately?
```
❯ ls -lah target/release/uv target/release/uvx
-rwxr-xr-x 1 zb staff 31M Jun 28 23:23 target/release/uv
-rwxr-xr-x 1 zb staff 452K Jun 28 23:22 target/release/uvx
```
This includes some small overhead:
```
❯ hyperfine --shell=none --warmup=100 './target/release/uv tool run --help' './target/release/uvx --help' --min-runs 2000
Benchmark 1: ./target/release/uv tool run --help
Time (mean ± σ): 2.2 ms ± 0.1 ms [User: 1.3 ms, System: 0.5 ms]
Range (min … max): 2.0 ms … 4.0 ms 2000 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: ./target/release/uvx --help
Time (mean ± σ): 2.9 ms ± 0.1 ms [User: 1.7 ms, System: 0.9 ms]
Range (min … max): 2.8 ms … 4.2 ms 2000 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Summary
./target/release/uv tool run --help ran
1.35 ± 0.09 times faster than ./target/release/uvx --help
```
I presume there may be some other downsides to a child process? The
wrapper is a little awkward. We could consider `execv`, but this is
complicated across platforms. There's an example implementation of that
over in [monotrail](433af5aed9/crates/monotrail/src/monotrail.rs (L764-L799)).
## Summary
I ended up needing this for https://github.com/astral-sh/uv/issues/4664
but I think it's a good change more broadly. We should be able to share
this cached information across operations within a given invocation.
## Summary
These are changing in one of my branches but I can't tell _what's_
changing. Some tests include the lock, but others don't. This PR adds it
for all successful resolves in the suite.
## Summary
You can now add `managed = false` under `[tool.uv]` in a
`pyproject.toml` to explicitly opt out of the project and workspace
APIs.
If a project sets `managed = false`, we will (1) _not_ discover it as a
workspace root, and (2) _not_ discover it as a workspace member (similar
to using `exclude` in the workspace parent).
Closes https://github.com/astral-sh/uv/issues/4551.
## Summary
This doesn't actually change any behaviors, but it does make it a bit
easier to solve #4669, because we don't have to support "version
narrowing" for the non-`RequiresPython` variants in here. Right now, the
semantics are kind of muddied, because the `target` variant is
_sometimes_ interpreted as an exact version and sometimes as a lower
bound.
## Summary
`GitDatabase::contains` previously only parsed the commit to see if it
was a valid hash and didn't verify if the commit existed in the object
database. This led to the database never being updated.
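A sketch of the corrected check, assuming a CLI-based git wrapper (the function shape is illustrative): `git cat-file -e` succeeds only if the object actually exists in the database, whereas parsing the hash merely validates its format.
```rust
use std::path::Path;
use std::process::Command;

fn contains(repo: &Path, oid: &str) -> bool {
    Command::new("git")
        .arg("cat-file")
        .arg("-e")
        // `^{commit}` peels the object and fails if it isn't a commit.
        .arg(format!("{oid}^{{commit}}"))
        .current_dir(repo)
        .status()
        .map(|status| status.success())
        .unwrap_or(false)
}
```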
Resolves https://github.com/astral-sh/uv/issues/4378.
## Test Plan
Added a test that fails without this change.
It's hard to talk about solve state and resolver state, so I'm renaming
them to fork state and resolver state, indicating the hierarchy between
them more directly.
## Summary
Closes https://github.com/astral-sh/uv/issues/4688.
## Test Plan
```
❯ cargo run tool install ruff
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.14s
Running `target/debug/uv tool install ruff`
warning: `uv tool install` is experimental and may change without warning.
Resolved 1 package in 136ms
Installed 1 package in 3ms
+ ruff==0.5.0
No entrypoints to install for tool `ruff`
```
## Summary
Packages that provide scripts that _aren't_ Python entrypoints need to
be respected in `uv tool install`. For example, Ruff ships a script in
`ruff-0.5.0.data/scripts`.
Unfortunately, the `.data` directory doesn't exist in the virtual
environment at all (it's removed, per the spec, after install). So this
PR changes the entry point detection to look at the `RECORD` file, which
is the only evidence that the scripts were installed.
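A simplified sketch of that detection (uv's real RECORD parsing is more careful about CSV quoting and path normalization): RECORD lists one installed file per row, so scripts that came from a `.data/scripts` directory still appear there after the directory itself is gone.
```rust
use std::path::Path;

fn installed_scripts<'a>(record: &'a str, scripts_dir: &Path) -> Vec<&'a str> {
    record
        .lines()
        // The first CSV column is the installed path.
        .filter_map(|line| line.split(',').next())
        .filter(|path| Path::new(path).starts_with(scripts_dir))
        .collect()
}
```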
Closes https://github.com/astral-sh/uv/issues/4691.
## Test Plan
`cargo run tool install ruff` (snapshot tests to come)
## Summary
Resolves #4483
Resolves #4484
## Test Plan
`cargo test`
```sh
❯ cargo run -- toolchain dir
warning: `uv toolchain dir` is experimental and may change without warning.
/Users/ahmedilyas/Library/Application Support/uv/toolchains
❯ cargo run -- tool dir
warning: `uv tool dir` is experimental and may change without warning.
/Users/ahmedilyas/Library/Application Support/uv/tools
```
## Summary
I think this may have just been a typo.
Closes https://github.com/astral-sh/uv/issues/4692.
## Test Plan
Run `cargo run tool install flask --force --reinstall` repeatedly.
## Summary
I noticed that `init_environment` and `find_interpreter` were both
calling `find_environment`, which seemed like a code smell to me.
Instead, `find_interpreter` now returns either a compatible environment
or an interpreter (if no compatible environment was found).
Additionally, `interpreter_meets_requirements` now no longer validates
`requires-python` if `--python` or `.python-version` is set. Instead, we
warn, which matches the behavior we get when creating a new environment
at the bottom of `find_interpreter`.
In total, I think this makes the data flow in project interpreter
discovery less repetitive and easier to reason about.
## Summary
This should both make it faster to solve forks (since we have a guess
for a valid resolution, and will bias towards packages we've already
fetched) and improve consistency between forks.
Closes https://github.com/astral-sh/uv/issues/4617.
## Summary
Resolves https://github.com/astral-sh/uv/issues/4651.
(Pruning needs to happen at the parent level so that the number of
children used to determine the output is correct.)
## Test Plan
added a test that would've caught this bug 🌵
## Summary
`remove_edge` will invalidate the last index in the graph, so we need to
ensure that each index we look at is "earlier" than the last.
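As an illustration of the pitfall and the workaround (simplified relative to the real code): petgraph's `remove_edge` swap-removes, renumbering the edge that previously had the highest index, so removing in descending index order keeps every remaining index valid.
```rust
use petgraph::graph::{EdgeIndex, Graph};

fn remove_edges(graph: &mut Graph<&str, ()>, mut edges: Vec<EdgeIndex>) {
    edges.sort_unstable();
    // Highest index first: the swap-remove only relocates the current last
    // edge, which never collides with a smaller, still-pending index.
    for edge in edges.into_iter().rev() {
        graph.remove_edge(edge);
    }
}
```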
Co-authored-by: bluss <bluss@users.noreply.github.com>
## Summary
When a constraint is applied to a requirement with a marker, the marker
needs to be propagated to the constraint.
If both the constraint and the requirement have a marker, they need to
be merged together (via `and`).
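A toy sketch of the merge rule, with markers as plain strings (uv uses a real marker tree type):
```rust
fn constraint_marker(requirement: Option<&str>, constraint: Option<&str>) -> Option<String> {
    match (requirement, constraint) {
        // Both have markers: combine them with `and`.
        (Some(r), Some(c)) => Some(format!("({r}) and ({c})")),
        // Only one has a marker: the constraint inherits it.
        (Some(m), None) | (None, Some(m)) => Some(m.to_string()),
        (None, None) => None,
    }
}
```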
Closes https://github.com/astral-sh/uv/issues/4575.
## Summary
This PR dodges some of the bigger issues raised by
https://github.com/astral-sh/uv/pull/4554 and
https://github.com/astral-sh/uv/pull/4555 by _not_ changing any of the
bigger semantics around syncing and instead merely changing virtual
workspace roots to sync all packages in the workspace (rather than
erroring due to being unable to find a project).
Closes #4541.
We need this to power uninstallations!
The latter two commits were reviewed in:
- #4637
- #4638
Note this is a breaking change for existing tool installations, but it's
in preview and very new. In the future, we'll need a clear upgrade path
for tool receipt changes.
This centralizes writing out the DistributionId as TOML. This is again
just a refactor. No behavioral changes were made. In a subsequent
commit, we will tweak how `source` is written.
Looks much better than #4618:
```
DEBUG Pre-fork split universal took 0.644s
DEBUG Split python_version >= '3.12' and platform_machine == 'aarch64' and platform_system == 'Darwin' and platform_system == 'Linux' took 0.659s
DEBUG Split python_version == '3.9' and platform_machine == 'arm64' and platform_system == 'Darwin' took 0.291s
```
The journey here can be seen in:
- #4587
- #4589
- #4594
I collapsed all the commits here because only the last one in the stack
got us to a "correct" error message.
There are a few architectural changes:
- We have a dedicated `MissingEnvironment` and `EnvironmentNotFound`
type for `PythonEnvironment::find` allowing different error messages
when searching for environments
- `ToolchainNotFound` becomes a struct with the `ToolchainRequest` which
greatly simplifies missing toolchain error formatting
- `ToolchainNotFound` tracks the `EnvironmentPreference` so it can
accurately report the locations checked
The messages look like this now, instead of the bland (and often
incorrect): "No Python interpreter found in system toolchains".
```
❯ cargo run -q -- pip sync requirements.txt
error: No virtual environment found
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- pip sync requirements.txt --system
error: No system environment found
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- pip sync requirements.txt --python 3.12
error: No virtual environment found for Python 3.12
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- pip sync requirements.txt --python 3.12 --system
error: No system environment found for Python 3.12
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- toolchain find 3.12 --preview
error: No toolchain found for Python 3.12 in system path
❯ UV_TEST_PYTHON_PATH="" cargo run -q -- pip compile requirements.in
error: No toolchain found in virtual environments or system path
```
I'd like to follow this with hints, suggesting creating an environment
or using system in some cases.
This includes a functional change: we now skip the forked state pop/push
if we didn't fork.
From transformers:
```
DEBUG Pre-fork split universal took 0.036s
DEBUG Split python_version >= '3.10' and python_version >= '3.10' and platform_system == 'Darwin' and python_version >= '3.11' and python_version >= '3.12' and python_version >= '3.6' and platform_system == 'Linux' and platform_machine == 'aarch64' took 0.048s
DEBUG Split python_version <= '3.9' and platform_system == 'Darwin' and platform_machine == 'arm64' and python_version >= '3.7' and python_version >= '3.8' and python_version >= '3.9' took 0.038s
```
The messages could use simplification from
https://github.com/astral-sh/uv/issues/4536
We can consider nested spans in the future but this works nicely for
now.
https://github.com/astral-sh/uv/pull/2419 appears to have only applied
this retry to wheels that were already downloaded (though I would have
to look more carefully to be certain). In
https://github.com/astral-sh/uv/issues/1491, we've gotten continued
reports of spurious failures on Windows and tracing reveals that we are
not applying our retry logic during the rename. I believe we're in this
code path — switching to our backoff retry should resolve the failures.
## Summary
resolves https://github.com/astral-sh/uv/issues/4609
Previously, the implementation of `required_with_no_extra` was
incorrect, particularly when there are packages that do not require any
extras but have other types of markers.
## Test Plan
The existing tests did cover this (my bad... I missed it), but I added a
smaller test, since this bug would've been more obvious with the new
test.
## Summary
It turns out that `Topo` only works on graphs without cycles. If a graph
has a cycle, it seems to bail early. So we were losing markers for trees
that contain cycles (like Poetry, which depends on
`poetry-plugin-export`, which depends on Poetry).
Now, we remove cycles beforehand and re-add those edges afterwards.
It's a bit hard for me to reason about the implications of this. The way
that marker propagation works is that we do visit the nodes in-order and
propagate the markers from any incoming to any outgoing edges. We only
do this at a single depth (rather than recursively) because we visit the
nodes in-order anyway. But if you have a cycle... then in theory you
might need to propagate the markers recursively? Or maybe not?
As an example:
`A -> B -> C -> D -> B`
If `A -> B` has `sys_platform == 'darwin'`, and then `D -> B` has
`python_version >= '3.7'`... then we don't need to propagate
`python_version >= '3.7'` back to `B` or any of its dependencies,
because the condition would be `(sys_platform == 'darwin' and
python_version >= '3.7') or sys_platform == 'darwin'`, which is
equivalent to `sys_platform == 'darwin'`.
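A small sketch of the cycle handling with petgraph (simplified; the real change re-adds all removed edges afterwards):
```rust
use petgraph::algo::toposort;
use petgraph::graph::Graph;

fn main() {
    let mut graph: Graph<&str, ()> = Graph::new();
    let poetry = graph.add_node("poetry");
    let export = graph.add_node("poetry-plugin-export");
    graph.add_edge(poetry, export, ());
    let back_edge = graph.add_edge(export, poetry, ());

    // `Topo` (like `toposort`) bails early on cycles.
    assert!(toposort(&graph, None).is_err());

    // Drop the cycle-closing edge, visit in order, then restore it.
    let (a, b) = graph.edge_endpoints(back_edge).unwrap();
    graph.remove_edge(back_edge);
    let order = toposort(&graph, None).unwrap();
    // ... propagate markers from incoming to outgoing edges along `order` ...
    graph.add_edge(a, b, ());
    let _ = order;
}
```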
Closes #4584.
This PR contains two style changes to the lockfile:
* Always indent lists of objects, even when they are only a single
element.
* Use 4 spaces instead of tabs for indenting, to mirror what we do in
the ruff formatter.
## Summary
Open to just making this a warning but no strong opinion.
Closes https://github.com/astral-sh/uv/issues/4593.
## Test Plan
Failure:
```
❯ echo "pandas==2.2.2" | cargo run pip compile --universal -p 3.11 --no-header - --python-platform linux
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/uv pip compile --universal -p 3.11 --no-header - --python-platform linux`
error: the argument '--universal' cannot be used with '--python-platform <PYTHON_PLATFORM>'
Usage: uv pip compile --universal --python-version <PYTHON_VERSION> --no-header <SRC_FILE>...
For more information, try '--help'.
```
Use indented inline tables for `distribution.dependencies`,
`distribution.optional-dependencies` and
`distribution.dev-dependencies`.
The new style is more concise (see examples below) and it makes the
association between a distribution and its dependencies clearer
(previously, they were both individual `[[...]]` blocks separated by
newlines). The style is optimized for small, meaningful diffs by placing
each dependency on a single line with a final trailing comma. Whenever a
dependency is added, removed, or changed, there should be a one-line diff
in `distribution.dependencies`. The final trailing comma ensures that
adding a dependency doesn't change the preceding line.
Part of #3611
## Examples
### Simple workspace package
Before:
```toml
[[distribution]]
name = "bird-feeder"
version = "1.0.0"
source = "editable+packages/bird-feeder"
[[distribution.dependencies]]
name = "anyio"
[[distribution.dependencies]]
name = "seeds"
```
After:
```toml
[[distribution]]
name = "bird-feeder"
version = "1.0.0"
source = "editable+packages/bird-feeder"
dependencies = [
{ name = "anyio" },
{ name = "seeds" },
]
```
### Flask
Before:
```toml
[[distribution]]
name = "flask"
version = "3.0.2"
source = "registry+https://pypi.org/simple"
sdist = { url = "a89e8120fa0bbafcb2c2387c0317be/flask-3.0.2.tar.gz", hash = "sha256:822c03f4b799204250a7ee84b1eddc40665395333973dfb9deebfe425fefcb7d", size = 675248 }
wheels = [{ url = "aa98bfe0ebf27ce224fb4f766acb23/flask-3.0.2-py3-none-any.whl", hash = "sha256:3232e0e9c850d781933cf0207523d1ece087eb8d87b23777ae38456e2fbe7c6e", size = 101300 }]
[[distribution.dependencies]]
name = "blinker"
[[distribution.dependencies]]
name = "click"
[[distribution.dependencies]]
name = "itsdangerous"
[[distribution.dependencies]]
name = "jinja2"
[[distribution.dependencies]]
name = "werkzeug"
[distribution.optional-dependencies]
[[distribution.optional-dependencies.dotenv]]
name = "python-dotenv"
```
After:
```toml
[[distribution]]
name = "flask"
version = "3.0.2"
source = "registry+https://pypi.org/simple"
sdist = { url = "a89e8120fa0bbafcb2c2387c0317be/flask-3.0.2.tar.gz", hash = "sha256:822c03f4b799204250a7ee84b1eddc40665395333973dfb9deebfe425fefcb7d", size = 675248 }
dependencies = [
{ name = "blinker" },
{ name = "click" },
{ name = "itsdangerous" },
{ name = "jinja2" },
{ name = "werkzeug" },
]
wheels = [{ url = "aa98bfe0ebf27ce224fb4f766acb23/flask-3.0.2-py3-none-any.whl", hash = "sha256:3232e0e9c850d781933cf0207523d1ece087eb8d87b23777ae38456e2fbe7c6e", size = 101300 }]
[distribution.optional-dependencies]
dotenv = [
{ name = "python-dotenv" },
]
```
### Forking
Before:
```toml
[[distribution]]
name = "project"
version = "0.1.0"
source = "editable+."
[[distribution.dependencies]]
name = "package-a"
version = "4.3.0"
source = "registry+https://astral-sh.github.io/packse/0.3.29/simple-html/"
marker = "sys_platform == 'darwin'"
[[distribution.dependencies]]
name = "package-a"
version = "4.4.0"
source = "registry+https://astral-sh.github.io/packse/0.3.29/simple-html/"
marker = "sys_platform == 'linux'"
[[distribution.dependencies]]
name = "package-b"
marker = "sys_platform == 'linux'"
[[distribution.dependencies]]
name = "package-c"
marker = "sys_platform == 'darwin'"
```
After:
```toml
[[distribution]]
name = "project"
version = "0.1.0"
source = "editable+."
dependencies = [
{ name = "package-a", version = "4.3.0", source = "registry+https://astral-sh.github.io/packse/0.3.29/simple-html/", marker = "sys_platform == 'darwin'" },
{ name = "package-a", version = "4.4.0", source = "registry+https://astral-sh.github.io/packse/0.3.29/simple-html/", marker = "sys_platform == 'linux'" },
{ name = "package-b", marker = "sys_platform == 'linux'" },
{ name = "package-c", marker = "sys_platform == 'darwin'" },
]
```
Closes #1329.
## Summary
Mentions use of seed packages during `uv venv --seed`, and clarifies the
divergence in behavior when using Python 3.12+.
## Test Plan
`cargo nextest run --test venv`
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Moves `--from` to a hidden argument — we allow it still but we validate
that it is compatible with whatever is passed to `uv tool install
<package>`. The positional package can now be a full specification,
allowing things like `uv tool install black==24.2.0`.
## Summary
In the dependency refactor (https://github.com/astral-sh/uv/pull/4430),
the logic for requirements and constraints was combined. Specifically,
we were applying constraints _before_ filtering on markers and extras,
and then applying that same filtering to the constraints. As a result,
constraints that should only be activated when an extra is enabled were
being enabled unconditionally.
Closes https://github.com/astral-sh/uv/issues/4569.
## Summary
- Adds a `--extra` flag to `uv add` that allows activating extras
without the PEP508 syntax.
- `uv add` now errors if the update is ambiguous (e.g. the dependency is
present twice with different markers)
- `uv add` is smarter about updates. For example, `uv add flask==3.0.0`
followed by `uv add flask --extra dotenv` preserves the previous version
specifier.
Resolves https://github.com/astral-sh/uv/issues/4419.
Adds support for `--reinstall` and `--reinstall-package` to `uv tool
install`. These are already available via the installer settings, we
just respect them now.
`--reinstall` implies a recreation of the environment and reinstallation
of the entry points.
`--reinstall-package` will only update a subset of the environment. If
the target package is the one with the entry points, we'll reinstall the
entry points. Otherwise, the entry points are not changed.
Adds detection of existing entry points, avoiding clobbering entry
points that were installed by another tool. If we see any existing entry
point collisions, we'll stop instead of overwriting them. The `--force`
flag can be used to opt in to overwriting the files; we can't use `-f`
because it's taken by `--find-links`, which is silly. The `--force` flag
also implies replacing a tool previously installed by uv (the
environment is rebuilt).
Similarly, #4504 adds support for reinstalls that _will not_ clobber
entry points managed by other tools.
## Summary
If the package _isn't_ marked as `workspace = true`, locking will fail
given:
```rust
let workspace_package_declared =
    // We require that when you use a package that's part of the workspace, ...
    !workspace.packages().contains_key(&requirement.name)
        // ... it must be declared as a workspace dependency (`workspace = true`), ...
        || matches!(
            source,
            Some(Source::Workspace {
                // By using toml, we technically support `workspace = false`.
                workspace: true,
                ..
            })
        )
        // ... except for recursive self-inclusion (extras that activate other extras), e.g.
        // `framework[machine_learning]` depends on `framework[cuda]`.
        || &requirement.name == project_name;
if !workspace_package_declared {
    return Err(LoweringError::UndeclaredWorkspacePackage);
}
```
Closes https://github.com/astral-sh/uv/issues/4552.
## Summary
This PR modifies `uv run` to fall back to discovering an interpreter
(e.g., a local `.venv`) if the command is run outside of a workspace.
`uv run --isolated` continues to completely skip workspace _and_
interpreter discovery, only installing whatever's provided with
`--with`.
The next step here is adding some ergonomic controls for enabling this
behavior even if your project is technically in a workspace (i.e., you
have a `pyproject.toml` but aren't using the Project APIs and don't want
locking etc.). I could imagine a setting in `pyproject.toml` that's also
exposed on the command-line. Something like: `managed = false` or
`project = false`.
See: https://github.com/astral-sh/uv/issues/3836.
This is the minimal "working" implementation. In summary, we:
- Resolve the requested requirements
- Create an environment at `$UV_STATE_DIR/tools/$name`
- Inspect the `dist-info` for the main requirement to determine its
entry points scripts
- Link the entry points from a user-executable directory
(`$XDG_BIN_HOME`) to the environment bin
- Create an entry at `$UV_STATE_DIR/tools/tools.toml` tracking the
user's request
The idea with `tools.toml` is that it allows us to perform upgrades and
syncs, retaining the original user request (similar to declarations in a
`pyproject.toml`). I imagine using a similar schema in the
`pyproject.toml` in the future if/when we add project-level tools. I'm
also considering exposing `tools.toml` in the standard uv configuration
directory instead of the state directory, but it seems nice to tuck it
away for now while we iterate on it. Installing a tool won't perform a
sync of other tool environments; we'll probably have an explicit `uv
tool sync` command for that?
I've split out todos into follow-up pull requests:
- #4509 (failing on Windows)
- #4501
- #4504

Closes #4485
`ResolverState::choose_version` had become huge, with an odd match due
to the url handling from #4435. This refactoring breaks it into
`choose_version`, `choose_version_registry` and `choose_version_url`. No
functional changes.
In the case of a direct URL sdist, it includes a hash, and this hash is
not (and probably should not be) part of the `source`. The URL is part
of the source because it permits uniquely identifying this particular
package as distinct from any other package with the same name. But, we
should still include the hash.
So in this commit, we rejigger what we did previously to make it so the
`SourceDist` value isn't even constructed at all when it isn't needed.
This also in turn lets us make the hash field required (which we will do
in a subsequent commit).
This does mean the URL is stored twice for direct URL dependencies in
the lock file. This seems non-ideal. We could make the URL for the sdist
optional, but this seems like a bridge too far? Another choice is to add
a new key to `distribution` that is just `direct-url-hash`, but that
also seems mucky.
Maybe the duplication here is okay given the relative rarity of direct
URL dependencies.
This updates all of the test snapshots where `sdist` was
strictly redundant and could be removed.
Note that there is one test failure whose snapshot I didn't
update: one where there is a direct URL dependency. In this
case, the sdist entry isn't strictly redundant, as it includes
a hash that isn't present in the source. We'll deal with that
in a subsequent commit.
This fixes an issue in the lock file where, in cases where we had a
non-registry sdist, the information in the sdist was strictly redundant
with the information in the source. This was born out in the code
already where the `sdist` field was only ever used to build a source
distribution type when the source was a registry. In all other cases,
the source distribution data can be materialized from the `source`
field.
This makes it clear that an actual `sdist` is only required when a
distribution is from a registry. In all other cases, a source
distribution is manufactured directly from the `source`.
Previously, we had Lock and LockWire impl blocks inter-mixed. This bugs
me a bit, so I've just shuffled things around so that we have Lock, impl
Lock, LockWire and then impl LockWire.
No changes are otherwise made to the code here.
This update follows from the removal of `source` and `version` from
`distribution.dependency` entries in the lock file when the package name
unambiguously refers to a single distribution.
When there is only one distribution for a particular package name, any
dependencies (the edges in the resolution graph) that reference that
package name are completely unambiguous. Therefore, we can actually omit
their version and source information and instead derive it from the
distribution entry.
We add some tests to check the success and error cases. That is, when
`source` or `version` is omitted and there is more than one
corresponding distribution for the package name (i.e., it's ambiguous),
lock deserialization should fail.
This commit prepares to make the `source` and `version` fields optional
in a `distribution.dependency` based on whether they have an unambiguous
value. e.g., When there is exactly one distribution with a matching
package name.
This refactor effectively defines "wire" types for most of the lock data
types (repeating the `WheelWire` and `LockWire` pattern) with one key
difference: we don't use serde's `TryFrom` integration. In this
refactor, we could have, and it would have worked. But in a subsequent
commit, we're going to be adding state to the `unwire()` calls that is
impossible to thread through a `TryFrom` implementation. This state will
tell us how to populate the `source` and `version` values on a
`Dependency` when they're missing.
The duplication of types here is unfortunate, but the compiler should catch
any deviations. And the wire types are unexported, so they have a
limited blast radius on complexity.
Downstack PR: #4481
## Introduction
We support forking the dependency resolution to support conflicting
registry requirements for different platforms, say when one package
range is required for an older Python version while a newer one is
required for newer Python versions, or when dependencies differ per
platform. We need to extend this support to direct URL requirements.
```toml
dependencies = [
"iniconfig @ 62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl ; python_version >= '3.12'",
"iniconfig @ b3c12c6d70988d7baea9578f3c48f3/iniconfig-1.1.1-py2.py3-none-any.whl ; python_version < '3.12'"
]
```
This did not work because `Urls` was built on the assumption that there
is a single allowed URL per package. We collect all allowed URLs ahead of
resolution by following direct URL dependencies (including path
dependencies) transitively, i.e. a registry distribution can't require a
URL.
## The same package can have Registry and URL requirements
Consider the following two cases:
requirements.in:
```text
werkzeug==2.0.0
werkzeug @ 960bb4017c4aed12b5ed8b78e0153e/Werkzeug-2.0.0-py3-none-any.whl
```
pyproject.toml:
```toml
dependencies = [
"iniconfig == 1.1.1 ; python_version < '3.12'",
"iniconfig @ git+https://github.com/pytest-dev/iniconfig@93f5930e668c0d1ddf4597e38dd0dea4e2665e7a ; python_version >= '3.12'",
]
```
In the first case, we want the URL to override the registry dependency,
in the second case we want to fork and have one branch use the registry
and the other the URL. We have to know about this in
`PubGrubRequirement::from_registry_requirement`, but we only fork after
the current method.
Consider the following case too:
a:
```
c==1.0.0
b @ https://b.zip
```
b:
```
c @ https://c_new.zip ; python_version >= '3.12'",
c @ https://c_old.zip ; python_version < '3.12'",
```
When we convert the requirements of `a`, we can't know the url of `c`
yet. The solution is to remove the `Url` from `PubGrubPackage`: The
`Url` is redundant with `PackageName`, there can be only one url per
package name per fork. We now do the following: We track the urls from
requirements in `PubGrubDependency`. After forking, we call
`add_package_version_dependencies` where we apply override URLs, check
if the URL is allowed and check if the url is unique in this fork. When
we request a distribution, we ask the fork urls for the real URL. Since
we prioritize url dependencies over registry dependencies and skip
packages with `Urls` entries in pre-visiting, we know that when fetching
a package, we know if it has a url or not.
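A sketch of the per-fork URL bookkeeping described here (the type names are simplified stand-ins for the real implementation):
```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

#[derive(Default)]
struct ForkUrls(HashMap<String, String>);

impl ForkUrls {
    /// Track the URL for a package within this fork, erroring on conflict.
    fn insert(&mut self, package: &str, url: &str) -> Result<(), String> {
        match self.0.entry(package.to_string()) {
            Entry::Vacant(entry) => {
                entry.insert(url.to_string());
                Ok(())
            }
            Entry::Occupied(existing) if existing.get() == url => Ok(()),
            Entry::Occupied(existing) => Err(format!(
                "Conflicting URLs for `{package}`: {} vs. {url}",
                existing.get()
            )),
        }
    }
}
```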
## URL conflicts
pyproject.toml (invalid):
```toml
dependencies = [
"iniconfig @ e96292c7f723f1fa332fe4ed6dfbec/iniconfig-1.1.0.tar.gz",
"iniconfig @ b3c12c6d70988d7baea9578f3c48f3/iniconfig-1.1.1-py2.py3-none-any.whl ; python_version < '3.12'",
"iniconfig @ 62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl ; python_version >= '3.12'",
]
```
On the fork state, we keep `ForkUrls` that check for conflicts after
forking, rejecting the third case because we added two packages of the
same name with different URLs.
We need to flatten out the requirements before transforming them into
pubgrub requirements, to get the full list of other requirements that
may contain a URL; this was changed in a previous PR: #4430.
## Complex Example
a:
```toml
dependencies = [
    # Force a split
    "anyio==4.3.0 ; python_version >= '3.12'",
    "anyio==4.2.0 ; python_version < '3.12'",
    # Include URLs transitively
    "b"
]
```
b:
```toml
dependencies = [
    # Only one is used in each split.
    "b1 ; python_version < '3.12'",
    "b2 ; python_version >= '3.12'",
    "b3 ; python_version >= '3.12'",
]
```
b1:
```toml
dependencies = [
"iniconfig @ b3c12c6d70988d7baea9578f3c48f3/iniconfig-1.1.1-py2.py3-none-any.whl",
]
```
b2:
```toml
dependencies = [
"iniconfig @ 62565a6e1ceac6173dc9db836a5b46/iniconfig-2.0.0-py3-none-any.whl",
]
```
b3:
```toml
dependencies = [
"iniconfig @ e96292c7f723f1fa332fe4ed6dfbec/iniconfig-1.1.0.tar.gz",
]
```
In this example, all packages are url requirements (directory
requirements) and the root package is `a`. We first split on `a`, `b`
being in each split. In the first fork, we reach `b1`, the fork URLs are
empty, we insert the iniconfig 1.1.1 URL, and then we skip over `b2` and
`b3` since the marker is disjoint with the fork markers. In the second
fork, we skip over `b1`, visit `b2`, insert the iniconfig 2.0.0 URL into
the again empty fork URLs, then visit `b3` and try to insert the
iniconfig 1.1.0 URL. At this point we find a conflict for the iniconfig
URL and error.
## Closing
The git tests are slow, but they make the best example for different URL
types I could find.
Part of #3927. This PR does not handle `Locals` or pre-releases yet.
In the last PR (#4430), we flatten the requirements. In the next PR
(#4435), we want to pass `Url` around next to `PubGrubPackage` and
`Range<Version>` to keep track of which `Requirement`s added a url
across forking. This PR is a refactoring split out from #4435 that rolls
the dependency conversion into a single iterator and introduces a new
`PubGrubDependency` struct as abstraction over `(PubGrubPackage,
Range<Version>)` (or `(PubGrubPackage, Range<Version>,
VerbatimParsedUrl)` in the next PR), and it removes the now unnecessary
`PubGrubDependencies` abstraction.
This PR refactors the command creation in the test suite to remove the
duplication.
**1)** We add the same set of test stubbing args to almost any uv
invocation in the tests:
```rust
command
    .arg("--cache-dir")
    .arg(self.cache_dir.path())
    .env("VIRTUAL_ENV", self.venv.as_os_str())
    .env("UV_NO_WRAP", "1")
    .env("HOME", self.home_dir.as_os_str())
    .env("UV_TOOLCHAIN_DIR", "")
    .env("UV_TEST_PYTHON_PATH", &self.python_path())
    .current_dir(self.temp_dir.path());
if cfg!(all(windows, debug_assertions)) {
    // TODO(konstin): Reduce stack usage in debug mode enough that the tests pass with the
    // default windows stack of 1MB
    command.env("UV_STACK_SIZE", (8 * 1024 * 1024).to_string());
}
```
Centralizing these into a `TestContext::add_shared_args` method removes
them from everywhere.
**2)** Prefix all `TestContext` methods of the pip interface with
`pip_`. This is now necessary due to `uv sync` vs. `uv pip sync`.
**3)** Move command creation in the various test files into dedicated
functions or methods to avoid repeating the arguments. Except for error
message tests, there should be at most one `Command::new(get_bin())`
call per test file. `EXCLUDE_NEWER` is exclusively used in
`TestContext`.
---
I'm considering adding a `TestCommand` on top of these changes (in
another PR) that holds a reference to the `TestContext`, has
`add_shared_args` as a method, and uses `Fn(Self) -> Self` instead of
`Fn(&mut Self) -> Self` for methods to improve chaining.
Downstack PR: #4515 Upstack PR: #4481
Consider these two cases:
A:
```
werkzeug==2.0.0
werkzeug @ 960bb4017c4aed12b5ed8b78e0153e/Werkzeug-2.0.0-py3-none-any.whl
```
B:
```toml
dependencies = [
"iniconfig == 1.1.1 ; python_version < '3.12'",
"iniconfig @ git+https://github.com/pytest-dev/iniconfig@93f5930e668c0d1ddf4597e38dd0dea4e2665e7a ; python_version >= '3.12'",
]
```
In the first case, `werkzeug==2.0.0` should be overridden by the url. In
the second case `iniconfig == 1.1.1` is in a different fork and must
remain a registry distribution.
That means the conversion from `Requirement` to `PubGrubPackage` is
dependent on the other requirements of the package. We can either look
into the other packages immediately, or we can move the forking before
the conversion to `PubGrubDependencies` instead of after. Either version
requires a flat list of `Requirement`s to use. This refactoring gives us
this list.
I'll add support for both of the above cases in the forking urls branch
before merging this PR. I also have to move constraints over to this.
While working on https://github.com/astral-sh/uv/pull/4492 I noticed
that `--reinstall-package` was not actually respected by
`update_environment`; it exited early due to satisfied requirements.
Before
```
❯ cargo run -q -- tool install black -v --reinstall-package tomli
...
DEBUG All requirements satisfied: black | click>=8.0.0 | mypy-extensions>=0.4.3 | packaging>=22.0 | pathspec>=0.9.0 | platformdirs>=2 | tomli>=1.1.0 ; python_version < '3.11' | typing-extensions>=4.0.1 ; python_version < '3.11'
```
After
```
❯ cargo run -q -- tool install black -v --reinstall-package tomli
...
Uninstalled 1 package in 0.99ms
Installed 1 package in 4ms
- tomli==2.0.1
+ tomli==2.0.1
```
## Summary
This is an intermediary change in enabling universal resolution for
`requirements.txt` files. To start, we need to be able to preserve
markers in the `requirements.txt` output _and_ propagate those markers,
such that if you have a dependency that's only included with a given
marker, the transitive dependencies respect that marker too.
Closes#1429.
## Summary
If the user puts their configuration in a `pyproject.toml` that _isn't_
a valid workspace root (e.g., it's a Poetry file), we won't discover it,
because we only look in `uv.toml` files in that case. I think this is
somewhat debatable... We could choose to _require_ `uv.toml` there, but
as a user I'd probably expect it to work?
Closes https://github.com/astral-sh/uv/issues/4521.
`junction::create` apparently will happily succeed but not create a link
to files? Since our symlink function does not indicate that it cannot
handle files, this was quite surprising.
Tested over in #4509 which previously failed on an assertion that
`black.exe` existed.
```
error: Failed to install entrypoint
Caused by: Cannot create a junction for [TEMP_DIR]/tools/black/Scripts/black.exe: is not a directory
```
We should file an issue upstream too, I think?
## Summary
We currently accept `--index-url /path/to/index` on the command line,
but confusingly, not in `requirements.txt`. This PR just brings the two
in sync.
## Test Plan
New snapshot tests.
## Summary
pip allows these with the following logic:
```python
if os.path.exists(location):  # Is a local path.
    url = path_to_url(location)
    path = location
elif location.startswith("file:"):  # A file: URL.
    url = location
    path = url_to_path(location)
elif is_url(location):
    url = location
```
Closes https://github.com/astral-sh/uv/issues/4510.
## Test Plan
`cargo run pip install --index-url ../packse/index/simple-html/
example-a-961b4c22 --reinstall --no-cache --no-deps`
I was getting this test failure locally on my Archlinux system:
```
-old snapshot
+new results
0 0 │ success: true
1 1 │ exit_code: 0
2 2 │ ----- stdout -----
3 │-[PYTHON-3.12]
3 │+/usr/bin/python3
4 4 │
5 5 │ ----- stderr -----
```
Where I have `/usr/bin/python3` and `/usr/bin/python3.12`.
Thanks @zanieb for the help with figuring out the fix here!
## Summary
In #4085, support was implemented for the `--prefix` option. When using
this option, however, a lock is either acquired on the virtualenv or
globally, preventing multiple installs to different `--prefix`s from the
same interpreter.
In this change, we acquire the lock on just the prefix in question.
## Test Plan
Ran a `uv pip install` with `--prefix` and `RUST_LOG=trace` and observed
that the lock was acquired in the prefix.
## Summary
It turns out that the Git fetch implementation is initializing its own
client, which can be really expensive on macOS (due to loading native
certificates) _and_ bypasses any of our middleware. This PR modifies the
Git implementation to accept a shared client.
## Summary
We unconditionally update the submodules in our Git code, but AFAICT it
shouldn't be necessary if we already have a complete, up-to-date fetch
available.
## Summary
This does require cloning the settings, but I think it's fine. A better
solution would be to have owned and unowned settings structs, so that we
could convert `ResolverInstallerSettingsRef` to `InstallerSettingsRef`
without cloning, but that requires maintaining owned and unowned
variants.
Closes https://github.com/astral-sh/uv/issues/4455.
## Summary
I might be mistaken, but I think we need to read the header from the
response, not the request. The request would only contain headers that
we set.
I verified (with extra logging) that the request header is `None` while
PyPI returns a valid length in the response header.
## Summary
Ultimately decided to view this as part of `LockWire` normalization:
removing references to extras that don't exist. I think it would be nice
if the resolver avoided omitting these, but I don't know if it's fully
possible.
Closes https://github.com/astral-sh/uv/issues/4405.
I have to add yet another indentation level to the prerelease-available
check in `PubGrubReportFormatter::hints` for #4435, so I've broken the
code into methods and decreased the indentation in this split-out,
refactoring-only change.
## Summary
Partially resolves https://github.com/astral-sh/uv/issues/4439.
Implements for `uv pip tree`:
- a `--no-dedupe` flag, similar to `cargo tree --no-dedupe`.
- denoting dependency cycles with `(#)`, plus a footnote if a cycle is
present (using `(*)` would require keeping track of the cycle state, so I
opted to do this instead); see the sketch after this list.
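A minimal sketch of what that cycle marking looks like during a depth-first
print, assuming an illustrative graph type (not the real implementation):

```rust
use std::collections::HashMap;

// A package already on the current path is a cycle: annotate it with `(#)`
// and stop descending instead of recursing forever.
fn print_tree(pkg: &str, graph: &HashMap<&str, Vec<&str>>, path: &mut Vec<String>) {
    let indent = "    ".repeat(path.len());
    if path.iter().any(|seen| seen.as_str() == pkg) {
        println!("{indent}{pkg} (#)");
        return;
    }
    println!("{indent}{pkg}");
    path.push(pkg.to_string());
    for dep in graph.get(pkg).into_iter().flatten() {
        print_tree(dep, graph, path);
    }
    path.pop();
}

fn main() {
    let graph = HashMap::from([("a", vec!["b"]), ("b", vec!["a"])]);
    print_tree("a", &graph, &mut Vec::new());
}
```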
## Test Plan
The existing tests pass, plus I added a couple of tests to validate the
`--no-dedupe` behavior.
## Summary
This will make `uv lock` read `override-dependencies` from the
`[tool.uv]` section of `pyproject.toml`.
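For reference, the configuration this reads looks like (the pinned package is
illustrative):

```toml
[tool.uv]
override-dependencies = ["werkzeug==2.3.0"]
```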
Resolves #4108
This [other](https://github.com/astral-sh/uv/pull/4446) implementation
touches more code but seems more consistent.
## Test Plan
Unit test
Releasing 0.2.15 with a few additions over 0.2.14. Motivated by the
incorrect tagging of 0.2.14 (#4474).
Generated the changelog with a small patch to Rooster allowing me to
force the previous commit to be correct.
```diff
diff --git a/src/rooster/_cli.py b/src/rooster/_cli.py
index 2a4f61b..4ec1299 100644
--- a/src/rooster/_cli.py
+++ b/src/rooster/_cli.py
@@ -38,6 +38,7 @@ def release(
without_sections: list[str] = typer.Option(
[], help="Sections to exclude from the changelog"
),
+ previous_commit: str = None,
):
"""
Create a new release.
@@ -58,7 +59,11 @@ def release(
typer.echo("It looks like there are no version tags for this project.")
# Get the commits since the last release
- changes = list(get_commits_between(config, repo, last_version))
+ changes = list(
+ get_commits_between(
+ config, repo, last_version, force_first_commit=previous_commit
+ )
+ )
since = "since last release" if last_version else "in the project"
typer.echo(f"Found {len(changes)} commits {since}.")
diff --git a/src/rooster/_git.py b/src/rooster/_git.py
index 597bb88..66bc54e 100644
--- a/src/rooster/_git.py
+++ b/src/rooster/_git.py
@@ -29,12 +29,13 @@ def get_commits_between(
target: Path,
first_version: Version | None = None,
second_version: Version | None = None,
+ force_first_commit: str | None = None,
) -> Generator[git.Commit, None, None]:
"""
Yield all commits between two tags
"""
repo = git.repository.Repository(target.absolute())
- first_commit = (
+ first_commit = force_first_commit or (
repo.lookup_reference(
TAG_PREFIX + config.version_tag_prefix + str(first_version)
)
```
## Summary
The `--index-strategy` is linked to the index locations, which we
propagate to source distribution builds; so it makes sense to pass the
`--index-strategy` too.
While I was here, I made `exclude_newer` a required argument so that we
don't forget to set it via the `with_options` builder.
Closes https://github.com/astral-sh/uv/issues/4465.
## Summary
This PR moves all the CLI code into its own crate, separate from the
`uv` crate. The `uv` crate is iterated on frequently, and the CLI code
comprises a significant portion of it but rarely changes. Removing the
CLI code reduces the `uv` crate size from 1.4MiB to 1.0MiB.
When executables were not named `python3`, e.g. `python3.11`, we would
construct a Python search path that only worked for _some_ requests in
tests, since we don't search for those names unless a specific version is
requested. To solve this, we construct a test context with constant Python
executable names. For example, if a test context was created with `3.11`
and `3.12`, we could end up with the search path
`/usr/local/python-3.11/bin:/usr/local/python-3.12/bin` where the
executables are named `python3.11` and `python3` respectively. A test
invocation of uv requesting any Python toolchain version would then
locate the `3.12` executable, since the `3.11` executable doesn't have
the generic name, even though we want `3.11` to come first.
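A sketch of the failure mode, using the search path from the example above
(the lookup logic is illustrative):

```rust
// A request without a specific version only looks for generic names, so it
// skips the 3.11 directory entirely.
fn find_generic_python(search_path: &[(&str, &[&str])]) -> Option<String> {
    for (dir, executables) in search_path {
        if executables.contains(&"python3") {
            return Some(format!("{dir}/python3"));
        }
    }
    None
}

fn main() {
    let search_path = [
        ("/usr/local/python-3.11/bin", &["python3.11"][..]),
        ("/usr/local/python-3.12/bin", &["python3", "python3.12"][..]),
    ];
    // Resolves into the 3.12 directory, even though 3.11 comes first.
    println!("{:?}", find_generic_python(&search_path));
}
```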
On Windows, we just leave things as-is because executables are always
called `python.exe`.
Closes https://github.com/astral-sh/uv/issues/4376
## Summary
Closes https://github.com/astral-sh/uv/issues/2956
This changes the bootstrap launcher script to use `pythonw.exe` instead
of `python.exe` for `gui_scripts`, via a helper function applied both to
the shebang and to the Python executable path encoded before the `UVUV`
magic, so that uv-trampoline's `find_python_exe` can use the right
`pythonw` executable.
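The helper's decision, sketched (the function name is illustrative): GUI
entry points get `pythonw.exe` so no console window pops up, while console
scripts keep `python.exe`.

```rust
fn launcher_python(is_gui: bool) -> &'static str {
    if is_gui { "pythonw.exe" } else { "python.exe" }
}

fn main() {
    assert_eq!(launcher_python(true), "pythonw.exe");
    assert_eq!(launcher_python(false), "python.exe");
}
```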
## Test Plan
New unit tests were added for the helper.
Tested on example from #2956 on Windows to make sure it works as
expected.
## Questions
I noticed the docs in `fn windows_script_launcher` say "The launcher will
look for `python[w].exe` adjacent to it in the same directory to start
the embedded script", but I didn't find such functionality when I looked
in uv-trampoline.
I only saw `clear_app_starting_state` getting called when `is_gui` is
set.
Was the intention to do this in uv-trampoline at some point instead?
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
Resolves https://github.com/astral-sh/uv/issues/3272.
I added it as a new subcommand rather than a flag on an existing command,
since that seems more consistent with `cargo tree` and makes for cleaner
code organization, but I can make changes if the other way is preferred.
Exposes the option added in #4416. Adds `--toolchain-preference` and
`tool.uv.toolchain-preference` to configure whether system or managed
toolchains are preferred. Users can also opt out of managed toolchains or
system toolchains entirely.
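Configuration would look something like this (the value shown is
illustrative; see the PR for the exact set of accepted values):

```toml
[tool.uv]
toolchain-preference = "only-managed"
```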
Restores the `PythonEnvironment::find` API, which was removed a while
back in favor of `Toolchain::find`. As mentioned in #4416, I'm
attempting to separate the case where you want an active environment
from the case where you want an installed toolchain in order to create
environments.
I wanted to drop `EnvironmentPreference` from `Toolchain::find` and just
have us consistently consider (or not consider) virtual environments
when discovering toolchains for creating environments. Unfortunately,
this caused a few things to break, so I reverted that change and will
explore it separately. Because I was exploring that change, there are
some minor changes to the `Toolchain` API here.
Adds support for the toolchain discovery preferences outlined in
https://github.com/astral-sh/uv/issues/4198, but we don't expose this to
users yet; I'll do that next to make it easier to review.
I've made some refactors in the toolchain discovery implementation to
enable this behavior and move us towards clearer abstractions. There's
still work remaining here, but I'd prefer to tackle things in follow-ups
instead of expanding this pull request. I plan on opening a couple
before merging this.
I'd like to shift the public toolchain API to focus on discovering
either an **environment** or a **toolchain**. The first would be used by
commands that operate on an environment, while the latter would be used
by commands that just need an interpreter to create environments. I
haven't changed this here, but some of the refactors are in preparation
for supporting this idea.
In brief:
- We now allow different ordering of installed toolchain discovery based
on a `ToolchainPreference` type. This is the type we will expose to
users.
- `SystemPython` was changed into an `EnvironmentPreference`, which is
used to determine whether we should prefer virtual or system Python
environments.
- We drop the whole `ToolchainSources` selection concept; it was
confusing, and the error messages from it were awkward. Most of the
functionality is now captured by the preference enums, but you can't do
things like "only find a toolchain from the parent interpreter" as
easily anymore. A sketch of the resulting preference types follows below.
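Roughly, the two preference types look like this (the variant names are my
assumption for illustration, not necessarily the exact ones in the codebase):

```rust
// Controls the ordering of installed-toolchain discovery; exposed to users.
#[derive(Debug)]
enum ToolchainPreference {
    OnlyManaged,
    PreferManaged,
    PreferSystem,
    OnlySystem,
}

// Controls whether virtual or system environments are considered.
#[derive(Debug)]
enum EnvironmentPreference {
    OnlyVirtual,
    Any,
    OnlySystem,
}

fn main() {
    println!("{:?}", ToolchainPreference::PreferManaged);
    println!("{:?}", EnvironmentPreference::Any);
}
```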
Previously, distributions created through `Source::Workspace` would have
the absolute path as their lock path. This didn't cause any problems,
since in `Urls` we would later overwrite those URLs with the correct ones
derived from the workspace members' paths.
Changing the order surfaced this. This change emits the correct lock
path. I've manually checked the difference with `dbg!`; it is not
observable on main, but on the diverging-URLs branch it fixes lockfile
creation.
When a fork is created from a list of dependencies, we were previously
adding all other sibling dependencies to every fork created. But this
isn't actually quite right, since a fork is always created by some marker
expression. And while that expression is definitively disjoint from the
directly conflicting dependency specification, it may also be disjoint
with other dependencies. For example, as reported in #4414:
```toml
dependencies = [
"anyio==4.4.0 ; python_version >= '3.12'",
"anyio==4.3.0 ; python_version < '3.12'",
"b1 ; python_version >= '3.12'",
"b2 ; python_version < '3.12'",
]
```
The first two `anyio` requirements are conflicting with non-overlapping
marker expressions, and so a fork is created. Prior to this commit,
*both* `b1` and `b2` would be added to each fork. But of course, `b2` is
impossible in the `anyio==4.4.0` fork because of disjoint marker
expressions.
So in this commit, we specifically filter out any sibling dependencies
whose markers are disjoint with the fork they would land in. We are
careful to do this both when a new fork is created from an existing set
of dependencies and when adding new dependencies to a fork.
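A minimal sketch of that filtering step, with a toy two-variant stand-in for
real marker-expression reasoning (`is_disjoint` and the types are
illustrative):

```rust
#[derive(Clone, Copy, PartialEq)]
enum Marker {
    PyGe312, // python_version >= '3.12'
    PyLt312, // python_version < '3.12'
    Any,
}

// Two markers are disjoint when no environment can satisfy both.
fn is_disjoint(a: Marker, b: Marker) -> bool {
    matches!(
        (a, b),
        (Marker::PyGe312, Marker::PyLt312) | (Marker::PyLt312, Marker::PyGe312)
    )
}

// Keep only the sibling dependencies whose markers can coexist with the fork.
fn filter_for_fork<'a>(fork: Marker, deps: &[(&'a str, Marker)]) -> Vec<&'a str> {
    deps.iter()
        .filter(|(_, m)| !is_disjoint(fork, *m))
        .map(|(name, _)| *name)
        .collect()
}

fn main() {
    let deps = [("b1", Marker::PyGe312), ("b2", Marker::PyLt312)];
    // In the fork for `anyio==4.4.0` (python >= 3.12), `b2` is dropped.
    assert_eq!(filter_for_fork(Marker::PyGe312, &deps), vec!["b1"]);
}
```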
Fixes #4414
## Summary
After this change, `uv add` will try to use `tool.uv.sources` for all
source requirements. If a source cannot be resolved, e.g. if an ambiguous
Git reference is provided, it will error. Git references can be
specified with the `--tag`, `--branch`, or `--rev` arguments. Editables
are also supported with `--editable`.
Users can opt-out of `tool.uv.sources` support with the `--raw` flag,
which will force uv to use `project.dependencies`.
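For example, adding a Git requirement would produce entries along these lines
(the package name and URL are illustrative):

```toml
[project]
dependencies = ["example"]

[tool.uv.sources]
example = { git = "https://github.com/astral-sh/example", tag = "v1.0.0" }
```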
Part of https://github.com/astral-sh/uv/issues/3959.
To support diverging URLs, we have to check URLs when adding
dependencies (after forking). To prepare for this, I've moved adding
dependencies for the current version to
`SolveState::add_package_version_dependencies` and removed the
duplication when checking for self-dependencies.
This change is paired with a change in pubgrub
(https://github.com/astral-sh/pubgrub/pull/27) that simplifies the same
code path.
## Summary
Implements `uv add foo --workspace`, which adds `foo` as a workspace
dependency with the corresponding `tool.uv.sources` entry.
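For example (assuming a workspace member named `foo`), the resulting entry
would look like:

```toml
[tool.uv.sources]
foo = { workspace = true }
```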
Part of https://github.com/astral-sh/uv/issues/3959.