## Summary
If we fail to deserialize cached metadata, we should just ignore it
rather than failing.
Ideally, this never happens: if it does, it means we missed a cache
version bump. But even then, the failure should be non-fatal.
Closes https://github.com/astral-sh/uv/issues/11043.
Closes https://github.com/astral-sh/uv/issues/11101.
## Test Plan
Prior to this PR, the following would fail:
- `uvx uv@0.5.25 venv --python 3.12 --cache-dir foo`
- `uvx uv@0.5.25 pip install ./scripts/packages/hatchling_dynamic --no-deps --python 3.12 --cache-dir foo`
- `uvx uv@0.5.18 venv --python 3.12 --cache-dir foo`
- `uvx uv@0.5.18 pip install ./scripts/packages/hatchling_dynamic --no-deps --python 3.12 --cache-dir foo`
We can't go back and fix 0.5.18, but this will prevent such regressions
in the future.
## Summary
The issue here boils down to: when we write metadata that came from
building the wheel itself, we weren't setting the `dynamic` field.
We now _always_ set the `dynamic` field when reading, even when we read
cached data.
Closes https://github.com/astral-sh/uv/issues/11047.
## Summary
On Windows, we have a lot of issues with atomic replacement and similar
operations. There are a bunch of different failure modes, but they
generally involve: trying to persist a file to a path at which the file
already exists, trying to replace or remove a file while someone else is
reading it, etc.
This PR adds locks to all of the relevant database paths. We already use
these advisory locks when building source distributions; now we use them
when unzipping wheels, storing metadata, etc.
Closes #11002.
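As a sketch of the pattern, here is a minimal exclusive-lock helper built
on the `fs2` crate's advisory locks; uv has its own lock type, and the
helper name here is hypothetical:

```rust
use std::fs::OpenOptions;
use std::io;
use std::path::Path;

use fs2::FileExt;

/// Run `f` while holding an exclusive advisory lock on `lock_path`.
fn with_lock<T>(lock_path: &Path, f: impl FnOnce() -> T) -> io::Result<T> {
    let lock = OpenOptions::new().create(true).write(true).open(lock_path)?;
    // Blocks until no other process holds the lock (flock-style on Unix,
    // LockFileEx on Windows).
    lock.lock_exclusive()?;
    let result = f();
    // Release the lock so other processes can proceed.
    lock.unlock()?;
    Ok(result)
}
```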
## Test Plan
I ran the following script:
```powershell
# Define the cache directory path
$cacheDir = "C:\Users\crmar\workspace\uv\cache"

# Clear the cache directory if it exists
if (Test-Path $cacheDir) {
    Remove-Item -Recurse -Force $cacheDir
}

# Create the cache directory again
New-Item -ItemType Directory -Force -Path $cacheDir

# Define the command to run with the --cache-dir flag
$command = {
    param ($venvPath)
    # Create a virtual environment at the specified path
    uv venv $venvPath
    # Run the pip install command with the --cache-dir flag
    C:\Users\crmar\workspace\uv\target\profiling\uv.exe pip install flask==1.0.4 --no-binary flask --cache-dir C:\Users\crmar\workspace\uv\cache -v --python $venvPath
}

# Define the paths for the different virtual environments
$venv1 = "C:\Users\crmar\workspace\uv\venv1"
$venv2 = "C:\Users\crmar\workspace\uv\venv2"
$venv3 = "C:\Users\crmar\workspace\uv\venv3"
$venv4 = "C:\Users\crmar\workspace\uv\venv4"
$venv5 = "C:\Users\crmar\workspace\uv\venv5"

# Start the command in parallel five times using Start-Job, each with a different venv
$job1 = Start-Job -ScriptBlock $command -ArgumentList $venv1
$job2 = Start-Job -ScriptBlock $command -ArgumentList $venv2
$job3 = Start-Job -ScriptBlock $command -ArgumentList $venv3
$job4 = Start-Job -ScriptBlock $command -ArgumentList $venv4
$job5 = Start-Job -ScriptBlock $command -ArgumentList $venv5

# Wait for all jobs to complete
$jobs = @($job1, $job2, $job3, $job4, $job5)
$jobs | ForEach-Object { Wait-Job $_ }

# Retrieve the results (optional)
$jobs | ForEach-Object { Receive-Job -Job $_ }

# Clean up the jobs
$jobs | ForEach-Object { Remove-Job -Job $_ }
```
And ensured it succeeded in five straight invocations (whereas on
`main`, it consistently fails with a variety of different traces).
## Summary
For example: in the linked issue, the user has a symlink at
`pyproject.toml`. The GitHub CDN doesn't give us any way to determine
whether a file is a symlink, so we should just log the error and move on
to the slow path.
Closes https://github.com/astral-sh/uv/issues/10857
## Summary
I noticed that we're only handling `Error::WheelMetadataNameMismatch`
here; but `Error::WheelMetadataVersionMismatch` should also be treated
as non-fatal.
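Roughly, the change amounts to widening the match to cover both variants
(the enum below is a stand-in with fields elided, not the real error type):

```rust
// Stand-in for the relevant variants of the real error type; fields elided.
enum Error {
    WheelMetadataNameMismatch,
    WheelMetadataVersionMismatch,
    Io(std::io::Error),
}

/// Both mismatch variants indicate bad metadata, not a bug in uv: the
/// caller should fall back rather than abort.
fn is_non_fatal(err: &Error) -> bool {
    matches!(
        err,
        Error::WheelMetadataNameMismatch | Error::WheelMetadataVersionMismatch
    )
}
```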
## Summary
When resolving Git metadata, we may be able to fetch the metadata from
GitHub directly in some cases. This is _way_ faster, since we don't need
to perform many Git operations and, in particular, don't need to clone
the repo.
This only works in the following cases:
- The Git repository is public. Otherwise, I believe you need an access
token, which we don't have.
- The `pyproject.toml` has static metadata.
- The `pyproject.toml` has no `tool.uv.sources`. Otherwise, we need to
lower them... And, if there are any paths or workspace sources, that
requires an install path (i.e., we need the content on-disk).
- The project is in the repo root. If it's in a subdirectory, it could
be a workspace member. And if it's a workspace member, there could be
sources defined in the workspace root. But we can't know without
fetching the workspace root -- and we need the workspace in order to
find the root...
Closes #10568.
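As a sketch of the fast path, the metadata fetch is a single HTTP GET
against the GitHub CDN. The helper is hypothetical; the real
implementation handles redirects, error mapping, and caching differently:

```rust
/// Fetch `pyproject.toml` for a pinned commit straight from the GitHub CDN,
/// with no clone. This only works for public repositories.
async fn fetch_pyproject(
    owner: &str,
    repo: &str,
    commit: &str,
) -> Result<String, reqwest::Error> {
    let url =
        format!("https://raw.githubusercontent.com/{owner}/{repo}/{commit}/pyproject.toml");
    reqwest::get(&url).await?.error_for_status()?.text().await
}
```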
## Summary
This PR modifies the lockfile to omit versions for source trees that use
`dynamic` versioning, thereby enabling projects to use dynamic
versioning with `uv.lock`.
Prior to this change, dynamic versioning was largely incompatible with
locking, especially for popular tools like `setuptools_scm` -- in that
case, every commit bumps the version, so every commit invalidates the
committed lockfile.
Closes https://github.com/astral-sh/uv/issues/7533.
## Summary
Sort of undecided on this. These are already stored as `dyn Reporter` in
each struct, so we're already using dynamic dispatch in that sense. But
all the methods take `impl Reporter`. This is sometimes nice (the
callsites are simpler?), but it also means that in practice, you often
_can't_ pass `None` to these methods that accept `Option<impl
Reporter>`, because Rust can't infer the generic type.
Anyway, this adds more consistency and simplifies the setup by using
`Arc<dyn Reporter>` everywhere.
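A minimal example of the inference problem (the trait and functions here
are hypothetical, not uv's actual `Reporter`):

```rust
use std::sync::Arc;

trait Reporter {
    fn on_progress(&self, message: &str);
}

// Generic: `None` alone doesn't pin down `R`, so callers need a turbofish.
fn run_generic<R: Reporter>(_reporter: Option<R>) {}

// Dynamic dispatch: the type of `None` is fully determined.
fn run_dyn(_reporter: Option<Arc<dyn Reporter>>) {}

fn main() {
    // run_generic(None); // error[E0282]: type annotations needed
    run_dyn(None); // compiles
}
```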
Build failures are one of the most common user-facing failures that
aren't "obvious" errors (such as typos) or resolver errors. Currently,
they show more technical detail than necessary, instead of focusing on
the fact that this is an error in a subprocess, caused either by the
package itself or, more likely, by the build environment (e.g., the user
needs to install a dev package, or their Python version is incompatible).
The new error message clearly delineates the part that's important (this
is a build backend problem) from the internals (we called this hook) and
is consistent about which part of the dist-building stage failed. We
have to calibrate the exact wording of the error message some more. Most
of the implementation is working around the orphan rule, `thiserror`
constraints, and trait rules, so it came out as more of a refactoring
than intended.
## Summary
On `main`, if you ask for a source but name a missing subdirectory, you
just get:
```
{source} does not appear to be a Python project, as neither `pyproject.toml` nor `setup.py` are present in the directory
```
But, in reality, the directory doesn't exist at all.
## Summary
We were reading an `.egg-info` file from the root directory that didn't
apply to the root member -- it was for another workspace member. I think
this is driven by some idiosyncrasies in the `setuptools` setup for
that workspace member, but it's still wrong to fail.
This PR adds a few measures to fix this:
1. We validate the `egg-info` filename against the package metadata.
2. We skip, rather than fail, if we see incorrect metadata in an
`egg-info` file or similar. This is an optimization anyway; worst case,
we try to build the package, then fail there.
Closes https://github.com/astral-sh/uv/issues/9743.
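A sketch of check (1), with a hypothetical helper and simplified name
normalization:

```rust
/// Accept an `.egg-info` entry only if its stem matches the expected package
/// name; setuptools may append a version, as in `name-1.2.3.egg-info`.
fn egg_info_matches(filename: &str, package_name: &str) -> bool {
    let Some(stem) = filename.strip_suffix(".egg-info") else {
        return false;
    };
    // Drop a trailing `-<version>` segment, if any.
    let name = stem.split('-').next().unwrap_or(stem);
    normalize(name) == normalize(package_name)
}

/// Simplified PEP 503-style normalization.
fn normalize(name: &str) -> String {
    name.to_lowercase().replace(['-', '_', '.'], "-")
}
```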
## Summary
Small thing I noticed while working on another change: if we error when
extracting `requires-dist`, we go through the full metadata build. We
need to distinguish between fatal errors and "the data isn't static".
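Conceptually, with stand-in types, the distinction looks like this:

```rust
struct Metadata; // stand-in

enum ExtractError {
    /// The relevant fields are declared `dynamic`; not an error, just not static.
    NotStatic,
    /// Genuinely broken input that should be surfaced, not masked.
    Fatal(std::io::Error),
}

fn pep517_build() -> Result<Metadata, std::io::Error> {
    // Placeholder for the full (slow) metadata build.
    Ok(Metadata)
}

fn metadata(result: Result<Metadata, ExtractError>) -> Result<Metadata, std::io::Error> {
    match result {
        Ok(metadata) => Ok(metadata),
        // Only fall back to a build when the data simply isn't static...
        Err(ExtractError::NotStatic) => pep517_build(),
        // ...and propagate everything else instead of building anyway.
        Err(ExtractError::Fatal(err)) => Err(err),
    }
}
```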
This is like #9556, but at the level of all other builds, including the
resolver and installer. Going through PEP 517 to build a package is
slow, so when building a package with the uv build backend, we can call
into the uv build backend directly instead: no temporary virtual env, no
venv sync, no Python subprocess calls, no uv subprocess calls.
This fast path is gated behind preview. Since the uv wheel is not
available at test time, I've manually confirmed the feature by comparing
`uv venv && cargo run pip install . -v --preview --reinstall .` and `uv
venv && cargo run pip install . -v --reinstall .`. When hacking the
preview gate so that the direct build is also used without the setting
(with the wheel built via `maturin build --profile profiling`), we can
see the performance difference:
```
$ hyperfine --prepare "uv venv" --warmup 3 \
    "UV_PREVIEW=1 target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --preview" \
    "target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --find-links target/wheels/"
Benchmark 1: UV_PREVIEW=1 target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --preview
  Time (mean ± σ):      33.1 ms ±   2.5 ms    [User: 25.7 ms, System: 13.0 ms]
  Range (min … max):    29.8 ms …  47.3 ms    73 runs

Benchmark 2: target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --find-links target/wheels/
  Time (mean ± σ):     115.1 ms ±   4.3 ms    [User: 54.0 ms, System: 27.0 ms]
  Range (min … max):   109.2 ms … 123.8 ms    25 runs

Summary
  UV_PREVIEW=1 target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --preview ran
    3.48 ± 0.29 times faster than target/profiling/uv pip install --no-deps --reinstall scripts/packages/built-by-uv --find-links target/wheels/
```
Do we need a global option to disable the fast path? There is one for
`uv build`, because `--force-pep517` brings `uv build` much closer to the
`pip install`-from-source behavior that a user of a library would
experience (see discussion at #9610), but uv overall doesn't really make
guarantees around the build environment of dependencies, so I consider
the direct build a valid option.
Best reviewed commit-by-commit: only the last commit is the actual
implementation, while the preview mode introduction is just a
refactoring that touches many files.
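Schematically, the gating looks like this; all helpers below are
hypothetical stand-ins, not uv's actual API:

```rust
use std::io;
use std::path::{Path, PathBuf};

fn uses_uv_build_backend(_source: &Path) -> bool {
    // Stand-in: would inspect `[build-system] build-backend` in pyproject.toml.
    true
}

fn build_in_process(_source: &Path) -> io::Result<PathBuf> {
    // Stand-in: call the uv build backend as a library, in this process.
    Ok(PathBuf::from("dist/pkg-0.1.0-py3-none-any.whl"))
}

fn build_via_pep517(_source: &Path) -> io::Result<PathBuf> {
    // Stand-in: temp venv + `build_wheel` hook in a Python subprocess.
    Ok(PathBuf::from("dist/pkg-0.1.0-py3-none-any.whl"))
}

fn build_wheel(source: &Path, preview: bool) -> io::Result<PathBuf> {
    if preview && uses_uv_build_backend(source) {
        build_in_process(source) // fast path: no venv, no subprocesses
    } else {
        build_via_pep517(source) // standard isolated build
    }
}
```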
When encountering `dynamic = ["version"]` in the pyproject.toml of a
source dist, we can ignore that and treat it as a statically known
metadata distribution, since the filename tells us the version and that
version must not change on build.
This fixed locking PyGObject 3.50.0 from `pygobject-3.50.0.tar.gz`
(minimized):
```toml
[project]
name = "PyGObject"
description = "Python bindings for GObject Introspection"
requires-python = ">=3.9, <4.0"
dependencies = [
"pycairo>=1.16"
]
dynamic = ["version"]
```
Afterwards, `uv add --no-sync toga` passes on Ubuntu 24.04 without the
pygobject build deps, whereas previously it needed `{ name = "pygobject",
version = "3.50.0", requires-dist = [], requires-python = ">=3.9" }`.
I've added a check that source distribution versions are respected after
build.
Fixes #9548
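A sketch of reading the version from the filename (illustrative; the
real code handles name normalization and more archive suffixes):

```rust
/// Extract the version from an sdist filename like `pygobject-3.50.0.tar.gz`.
fn sdist_version(filename: &str) -> Option<&str> {
    let stem = filename
        .strip_suffix(".tar.gz")
        .or_else(|| filename.strip_suffix(".zip"))?;
    // Everything after the final `-` is the version.
    let (_name, version) = stem.rsplit_once('-')?;
    Some(version)
}

fn main() {
    assert_eq!(sdist_version("pygobject-3.50.0.tar.gz"), Some("3.50.0"));
}
```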
## Summary
Include the `git_member` when fetching metadata from cache.
h/t to @PhilipVinc for the suggested fix
Resolves #8887
## Test Plan
Pending
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
A lot of good new lints, and most importantly, error stabilizations. I
tried to find a few usages of the new stabilizations, but I'm sure there
are more.
IIUC, this _does_ require bumping our MSRV.
## Summary
The reqwest middleware doesn't retry errors that occur "after" the
request completes -- but in some cases, these do include spurious errors
that we want to retry. See https://github.com/astral-sh/uv/issues/8144
for examples. This PR adds a second retry layer during the response
_handler_, which should help with some of the spurious failures we see
in the linked issue.
Closes https://github.com/astral-sh/uv/issues/8144.
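A sketch of the idea with a hypothetical helper; uv layers the retry
into its response handler rather than re-issuing whole requests like this:

```rust
use std::time::Duration;

/// Retry the body read: request-level failures are already retried by the
/// middleware, but errors while streaming the body are not.
async fn get_with_retries(url: &str, max_retries: u32) -> Result<Vec<u8>, reqwest::Error> {
    let mut attempt = 0;
    loop {
        match reqwest::get(url).await?.bytes().await {
            Ok(body) => return Ok(body.to_vec()),
            Err(_) if attempt < max_retries => {
                attempt += 1;
                // Linear backoff; real code would follow the retry policy's schedule.
                tokio::time::sleep(Duration::from_millis(100 * u64::from(attempt))).await;
            }
            Err(err) => return Err(err),
        }
    }
}
```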
## Summary
The basic issue here is that `uv add` will compute and store a hash for
each package. But if you later run `uv pip install` _after_ `uv cache
prune --ci`, we need to re-download the source distribution. After
re-downloading, we compare the hashes before and after. But `uv pip
install` doesn't compute any hashes by default. So the hashes "differ"
and we error.
Instead, we need to compute a superset of the already-existing and
newly-requested hashes when performing this re-download. (In practice,
this will always be SHA-256.)
Closes https://github.com/astral-sh/uv/issues/8929.
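The fix, schematically (stand-in enum; the real hash types differ):

```rust
use std::collections::BTreeSet;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum HashAlgorithm {
    Sha256,
    Sha384,
    Sha512,
}

/// On a forced re-download, compute the union of the hashes already stored
/// for the revision and those requested now, so the later comparison always
/// has matching algorithms on both sides.
fn algorithms_to_compute(
    existing: &[HashAlgorithm],
    requested: &[HashAlgorithm],
) -> BTreeSet<HashAlgorithm> {
    existing.iter().chain(requested).copied().collect()
}
```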
## Test Plan
```shell
export UV_CACHE_DIR="$PWD/cache"
rm -rf "$UV_CACHE_DIR" .venv .venv-2 pyproject.toml uv.lock
echo $(uv --version)
uv init --name uv-cache-issue
cargo run add --python 3.13 "pycairo"
uv cache prune --ci
rm -rf .venv .venv-2
uv venv --python python3.11 .venv-2
. .venv-2/bin/activate
cargo run pip install "pycairo"
```
## Summary
In the example outlined in https://github.com/astral-sh/uv/issues/8884,
this removes an unnecessary `jupyter_contrib_nbextensions-0.7.0.tar.gz`
segment (replacing it with `src`), thereby saving 39 characters and
getting that build working on my Windows machine.
This should _not_ require a version bump because we already have logic
in place to "heal" partial cache entries that lack an unzipped
distribution.
Closes https://github.com/astral-sh/uv/issues/8884.
Closes https://github.com/astral-sh/uv/issues/7376.
## Summary
See: https://github.com/astral-sh/uv/issues/8884. We build in a
directory that's deep within the cache; to help with file name length
limits, we should build at the top-level of the cache.
## Summary
At present, when we have a Python requirement and we see a wheel, we
verify that the Python requirement is compatible with the wheel. For
source distributions, though, we verify that both the Python requirement
_and_ the currently-installed version are compatible, because we assume
that we'll need to build the source distribution in order to get
metadata. However, we can often extract source distribution metadata
_without_ building (e.g., if there's a `pyproject.toml` with no dynamic
keys).
This PR thus modifies the source distribution handling to defer that
incompatibility ("We couldn't get metadata for this project, because it
has no static metadata and requires a higher Python version to run /
build") until we actually try to build the package. As a result, you can
now resolve source distribution-only packages using Python versions
below their `requires-python`, as long as they include static metadata.
Closes https://github.com/astral-sh/uv/issues/8767.
When resolving workspace dependencies (from one workspace member to
another) from a workspace that's in git, we need to emit these
transitive dependencies as git dependencies, not as path dependencies
like all other workspace deps. This fixes a bug where we would treat them as
path dependencies inside the checkout directory, leading either to
clashes (between a local path and another direct git dependency) or
invalid lockfiles (referencing the checkout dir in the lockfile when we
should be referencing the git repo).
Fixes #8087. Fixes #4920. Fixes #3936 (since we needed that information anyway).
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This is part of making
https://github.com/astral-sh/uv/issues/7299#issuecomment-2385286341
better. You can now use `tool.uv.dependency-metadata` for direct URL
requirements. Unfortunately, you _must_ include a version, since we need
one to perform resolution.
## Summary
We shouldn't show these in `uv add`, especially when the thing we're
adding is about to have a lower-bound put on it. Now, we only show these
when the user runs `uv lock` or `uv sync`.
## Summary
If you pass a named index via the CLI, you can now reference it as a
named source. This required some surprisingly large refactors, since we
now need to be able to track whether a given index was provided on the
CLI vs. elsewhere (since, e.g., we don't want users to be able to
reference named indexes defined in global configuration).
Closes https://github.com/astral-sh/uv/issues/7899.
## Summary
`uv cache prune --ci` will remove the source distribution directory. If
we then need to build a _different_ wheel (e.g., you're building a
package that has Python minor version-specific wheels), we fail, because
we expect the source to be there.
Now, if the source is missing, we re-download it. It would be slightly
easier to just _ignore_ that revision, but that would mean we'd also
lose the already-built wheels -- so if you ran against many Python
versions, we'd continuously lose the cached data.
Closes https://github.com/astral-sh/uv/issues/7543.
## Test Plan
We can add tests, but they _need_ to build non-pure Python wheels, which
tends to be expensive...
For reference:
```console
$ cargo run venv --python 3.12
$ cargo run pip install mercurial==6.8.1 --verbose
$ cargo run cache prune --ci
$ cargo run venv --python 3.11
$ cargo run pip install mercurial==6.8.1 --verbose
```
I also did this with a local `.tar.gz` that I downloaded from PyPI.
## Summary
Closes https://github.com/astral-sh/uv/issues/7485.
## Test Plan
```console
$ cargo run cache clean
$ cargo run venv
$ cargo run pip install django-allauth==0.51.0
$ cargo run venv
$ cargo run pip install django-allauth==0.51.0
```
This is preparatory work for the upload functionality, which needs to
read the METADATA file and attach its parsed contents to the POST
request: we move finding the `.dist-info` directory from `install-wheel-rs`
and `uv-client` to a new `uv-metadata` crate, so it can be shared with
the publish crate.
I don't know for certain that it's the right place, since the upload code
isn't ready, but I'm PR-ing it now because it already had merge conflicts.