It'd be nice to avoid churn for contributors. This is a pretty frequent
cause of CI failures and I don't think we really need to have the
reference documentation committed.
## Summary
There is a class of outcomes whereby an index might not be included in
"allowed indexes", but could still correctly appear in a lockfile. In
the linked case, we have two `default = true` indexes, and one of them
is also named. We omit the second `default = true` index from the list
of "allowed indexes", but since it's named, a dependency can reference
it explicitly. We handle this correctly for `project.dependencies`, but
the handling was incorrectly omitting dependency groups.
Closes https://github.com/astral-sh/uv/issues/16843.
Error when a built wheel is for the wrong platform. This can happen
especially when using `--python-platform` or `--python-version` with `uv
pip install`.
Fixes #16019
Resolves https://github.com/astral-sh/uv/issues/16833
`uv self update` could produce a better error message if it knows the user
installed it with brew. This diff adds an `InstallSource` we can use to
detect where a `uv` binary comes from and augment error messages with better
context. We're only adding brew detection for now, but it could easily be
extended with other detection heuristics.
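As a rough sketch of the detection heuristic (the type and function names here are illustrative, not the actual uv code):

```rust
use std::path::Path;

/// Hypothetical sketch: classify where a `uv` binary came from by its path,
/// so `uv self update` can suggest `brew upgrade uv` for Homebrew installs.
#[derive(Debug)]
enum InstallSource {
    Homebrew,
    Unknown,
}

fn detect_install_source(executable: &Path) -> InstallSource {
    // Common Homebrew prefixes on macOS (Apple Silicon and Intel) and Linux.
    let homebrew_prefixes = [
        "/opt/homebrew",
        "/usr/local/Cellar",
        "/home/linuxbrew/.linuxbrew",
    ];
    if homebrew_prefixes
        .iter()
        .any(|prefix| executable.starts_with(prefix))
    {
        InstallSource::Homebrew
    } else {
        InstallSource::Unknown
    }
}
```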
## Summary
Given that `bump-workspace-crate-versions.py` will bump all crates, we also
need to update uv-trampoline lockfile references to those new versions
(for uv-static, uv-macros) after
https://github.com/astral-sh/uv/pull/16950.
## Test Plan
Ran the release script manually and verified that the uv-trampoline lockfile
is up to date after the changes from `bump-workspace-crate-versions.py`.
When one process is running and another calls `uv cache clean` or `uv
cache prune`, we currently deadlock, sometimes until the CI timeout
(https://github.com/astral-sh/setup-uv/issues/588). To avoid this, we
add a default 5-minute timeout when waiting for a lock. Five minutes
balances allowing in-progress builds to finish, especially with larger
native dependencies, while still giving timely errors for deadlocks on
(remote) systems.
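For illustration, here is a minimal sketch of "wait for a lock, but only up to a deadline"; uv's actual cache lock uses advisory file locks, so the lock-file mechanism and names below are placeholders:

```rust
use std::fs::{File, OpenOptions};
use std::io;
use std::path::Path;
use std::thread::sleep;
use std::time::{Duration, Instant};

/// Hypothetical sketch: retry acquiring a lock file until it succeeds or the
/// deadline passes, then fail with a clear error instead of blocking forever.
fn acquire_lock_with_timeout(path: &Path, timeout: Duration) -> io::Result<File> {
    let deadline = Instant::now() + timeout;
    loop {
        match OpenOptions::new().write(true).create_new(true).open(path) {
            Ok(file) => return Ok(file),
            Err(err) if err.kind() == io::ErrorKind::AlreadyExists => {
                if Instant::now() >= deadline {
                    return Err(io::Error::new(
                        io::ErrorKind::TimedOut,
                        format!(
                            "Could not acquire lock at `{}` within {} seconds",
                            path.display(),
                            timeout.as_secs()
                        ),
                    ));
                }
                // Another process holds the lock; back off briefly and retry.
                sleep(Duration::from_millis(500));
            }
            Err(err) => return Err(err),
        }
    }
}
```

The point is only that acquisition fails with an actionable error after the timeout rather than hanging indefinitely.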
Commit 1 is a refactoring.
This branch also fixes a problem with the logging where the acquired and
released resources don't match:
```
DEBUG Acquired lock for `https://github.com/tqdm/tqdm`
DEBUG Using existing Git source `https://github.com/tqdm/tqdm`
DEBUG Released lock at `C:\Users\Konsti\AppData\Local\uv\cache\git-v0\locks\16bb813afef8edd2`
```
This was discovered by https://github.com/astral-sh/uv/pull/16074, where
the wrong tag now fails the Pyston integration test.
I'm not sure what the exact schema of the cache tag is, but since the
project is dead, I don't expect any new non-matching versions to follow.
Clarify `--project` flag help text to indicate project discovery
Update the help text for `--project` from "Run the command within
the given project directory" to "Discover a project in the given
directory" to better reflect its actual behavior.
The `--project` flag affects file discovery (pyproject.toml, uv.toml,
etc.) but does not change the working directory. Users should use
`--directory` for changing the working directory.
Fixes #16718
## Summary
Inform users who encounter #16879 that `uv pip debug` is currently
intentionally not implemented.
## Test Plan
Added an integration test for the warning, ran the test suite.
## Summary
This change updates the PyTorch integration guide to include the CUDA 13.0
variant.
## Test Plan
I generated the documentation and verified that it looked correct.
Fixes https://github.com/astral-sh/uv/issues/16447
Passing this around explicitly uncovers some behaviors where we pass, e.g.,
the credentials store into lockfile reading. The changes in this PR should
preserve the existing behavior for now; they only make the locations we read
from more explicit.
Labeling this PR as "Enhancement" instead of "Internal" in case this
changes behavior when it shouldn't have.
## Summary
Like in `uv.lock`, we should omit artifacts that are filtered out by
`--no-binary` or by the target platform tags.
Closes https://github.com/astral-sh/uv/issues/13413.
## Summary
When the test suite panics due to a missing Python version, it now
prints some helpful information to guide users to `cargo run python
install` and `CONTRIBUTING.md`:
~~~
---- pip_sync::incompatible_build_constraint stdout ----
thread 'pip_sync::incompatible_build_constraint' (19737) panicked at
crates/uv/tests/it/common/mod.rs:1793:17:
Could not find Python 3.9 for test
Try `cargo run python install` first, or refer to CONTRIBUTING.md
~~~
## Test Plan
Uninstalled python3.9 and ran tests which depended on it.
## Summary
Fix #16906 by pruning modules or submodules that are already included
(either directly, or through a parent).
This generates warnings when it happens.
Example:
```bash session
$ uv build
Building source distribution (uv build backend)...
warning: Ignoring redundant module name(s): test_lib.bar test_lib test_lib.bar.baz test_lib.baz
Building wheel from source distribution (uv build backend)...
Successfully built dist/test-0.1.0.tar.gz
Successfully built dist/test-0.1.0-py3-none-any.whl
```
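For illustration, the pruning amounts to something like the following sketch (not the actual build-backend code; the function name and details are made up):

```rust
use std::collections::BTreeSet;

/// Hypothetical sketch: a module name is redundant if it is an exact
/// duplicate of one we already kept, or if an ancestor package is also
/// listed, since including the parent already includes the submodule.
fn prune_redundant_modules(modules: &[&str]) -> (Vec<String>, Vec<String>) {
    let all: BTreeSet<&str> = modules.iter().copied().collect();
    let mut kept: Vec<String> = Vec::new();
    let mut redundant: Vec<String> = Vec::new();
    for module in modules {
        let covered_by_ancestor = module
            .rmatch_indices('.')
            .any(|(idx, _)| all.contains(&module[..idx]));
        let duplicate = kept.iter().any(|existing| existing == module);
        if covered_by_ancestor || duplicate {
            redundant.push(module.to_string());
        } else {
            kept.push(module.to_string());
        }
    }
    (kept, redundant)
}

fn main() {
    let (kept, redundant) =
        prune_redundant_modules(&["test_lib", "test_lib", "test_lib.bar", "test_lib.bar.baz"]);
    assert_eq!(kept, vec!["test_lib".to_string()]);
    println!(
        "warning: Ignoring redundant module name(s): {}",
        redundant.join(" ")
    );
}
```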
## Test Plan
Added some unit tests for the pruning function and one for the whole
build backend. Added an integration test for the warnings. Ran the full
test suite. Manually tested.
The unit test for the function doesn't account for the fact that it doesn't
guarantee an ordering at the moment. I think this is fine.
---------
Co-authored-by: konsti <konstin@mailbox.org>
## Summary
* Updates existing references to use EnvVars where usage was missing.
* Adds missing entries to env var usages, e.g. new env var declarations
in uv-trampoline, tests, etc.
* Note: this doesn't affect trampoline sizes as the end result is the
same
* Fixes versioning of `UV_HIDE_BUILD_OUTPUT`.
## Test Plan
Existing Tests. Compiled the trampolines locally to verify zero changes
(size, binary).
## Question
Will this complicate the crates publishing release process? I'm not
certain yet if it will be an issue for uv-trampoline (non-workspace
member) to reference a uv workspace member from a bump & release
perspective wrt lock files. If so, I'll revert the uv-trampoline changes
but keep the others.
See https://github.com/astral-sh/uv/pull/16944
The `crates.io` publish succeeded and is not idempotent (i.e., it'll
fail on another publish attempt) so we will skip it for a re-run of the
release workflow.
## Summary
This broke the release and I haven't figured out why yet.
## Test Plan
Blame my past self.
Signed-off-by: William Woodruff <william@astral.sh>
## Summary
Follow up to https://github.com/astral-sh/uv/pull/15563
Closes https://github.com/astral-sh/uv/issues/13485
This is a first pass at adding conditional (per-source) Git LFS support for
Git sources; initial feedback welcome.
e.g.
```
[tool.uv.sources]
test-lfs-repo = { git = "https://github.com/zanieb/test-lfs-repo.git", lfs = true }
```
For context, previously a user had to set `UV_GIT_LFS` to have uv fetch
LFS objects for Git sources. This env var was all or nothing: you had to
always have it set to get consistent behavior, and it applied to all Git
sources. If you fetched LFS objects at a revision and then turned LFS off
(or vice versa), the Git database and the corresponding checkout's LFS
artifacts would not be updated properly. Similarly, when Git source
distributions were built, there was no distinction between sources with
and without LFS. Hence, it could corrupt the Git, sdist, and archive caches.
To support some sources being LFS-enabled and others not, this PR adds a
stateful layer, roughly similar to how `subdirectory` works but for `lfs`,
since the Git database, the checkouts, and the corresponding caching layers
needed to be LFS-aware (requested vs. installed). The caches also had to be
isolated and treated entirely separately when handling LFS sources.
Summary
* Adds `lfs = true` or `lfs = false` to Git sources in `pyproject.toml`
* Adds `lfs=true` query params / fragments to the most relevant URL structs
(not parsed as user input)
* In the case of `uv add` / `uv tool`, `--lfs` is supported instead
* `UV_GIT_LFS` environment variable support is still functional for
non-project entrypoints (e.g. `uv pip`)
* `direct-url.json` now has a custom `git_lfs` entry under VcsInfo
(note, this is not in the spec currently -- see caveats).
* The Git database and checkouts have a different cache key, as the sources
should be treated as effectively different for the same rev (see the sketch
after this list).
* The sdist cache key of a built distribution also differs if it was built
from an LFS-enabled revision, to distinguish it from the same revision
without LFS. This keeps the strong assumption for archive-v0 that an
unpacked revision "doesn't change sources" valid.
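As a rough sketch of the cache-keying idea above (illustrative only; these are not the actual uv types or cache layout):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical sketch: the LFS flag is part of the cache key alongside the
/// repository and revision, so an LFS-enabled checkout never shares a cache
/// bucket with a non-LFS checkout of the same commit.
#[derive(Hash, Clone, Copy)]
struct GitCacheKey<'a> {
    repository: &'a str,
    commit: &'a str,
    lfs: bool,
}

fn cache_bucket(key: &GitCacheKey) -> String {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    format!("{:016x}", hasher.finish())
}

fn main() {
    let plain = GitCacheKey {
        repository: "https://github.com/astral-sh/uv",
        commit: "0123abcd",
        lfs: false,
    };
    let with_lfs = GitCacheKey { lfs: true, ..plain };
    // Same repository and revision, but different cache buckets.
    assert_ne!(cache_bucket(&plain), cache_bucket(&with_lfs));
}
```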
Caveats
* `pylock.toml` import support has not been added via `git_lfs=true`; going
through the spec, it wasn't clear to me that it's something we'd support
outside of the env var (for now).
* The direct-url struct was modified by adding a non-standard `git_lfs`
field under VcsInfo, which may be undesirable, although PEP 610 does say
`Additional fields that would be necessary to support such VCS SHOULD be
prefixed with the VCS command name`, which could be interpreted as allowing
this change.
* There will be slight lockfile and cache churn for users that use
`UV_GIT_LFS`, as all Git lockfile entries will get an `lfs=true` fragment.
The cache version does not need an update, but LFS sources will get their
own namespace under git-v0 and sdist-v9/git, hence a cache miss will occur
once; this may be sufficient to label this as breaking for workflows that
always set `UV_GIT_LFS`.
## Test Plan
Some initial tests were added. More tests are likely to follow as we reach
consensus on a final approach.
For the integration tests, we may want to move to a repo under the astral
namespace in order to test LFS functionality.
Manual testing was done for common pathological cases like killing the LFS
fetch midway, uninstalling LFS after installing an sdist with it and
reinstalling, fetching LFS artifacts across different commits, etc.
PSA: Please ignore the Docker build failures, as they're related to depot
OIDC issues.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
If requirements files are included multiple times, we can avoid going back
to disk for each inclusion. This also guards against accidental repeated
reads from standard input.
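A minimal sketch of the idea, with made-up names rather than uv's actual types: memoize each requirements file's contents by canonical path, and read standard input at most once.

```rust
use std::collections::HashMap;
use std::io::{self, Read};
use std::path::{Path, PathBuf};
use std::sync::Arc;

/// Hypothetical sketch: cache requirements file contents by canonicalized
/// path so repeated inclusions reuse the first read; stdin is cached
/// separately since it can only be consumed once.
#[derive(Default)]
struct RequirementsReader {
    files: HashMap<PathBuf, Arc<String>>,
    stdin: Option<Arc<String>>,
}

impl RequirementsReader {
    fn read(&mut self, path: &Path) -> io::Result<Arc<String>> {
        let key = path.canonicalize()?;
        if let Some(contents) = self.files.get(&key) {
            return Ok(contents.clone());
        }
        let contents = Arc::new(std::fs::read_to_string(&key)?);
        self.files.insert(key, contents.clone());
        Ok(contents)
    }

    fn read_stdin(&mut self) -> io::Result<Arc<String>> {
        if let Some(contents) = &self.stdin {
            return Ok(contents.clone());
        }
        let mut buffer = String::new();
        io::stdin().read_to_string(&mut buffer)?;
        let contents = Arc::new(buffer);
        self.stdin = Some(contents.clone());
        Ok(contents)
    }
}
```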