Fixes https://github.com/astral-sh/uv/issues/3060
## Summary
Allows passing a virtual environment (the path to the directory, rather
than the path to the Python interpreter within the directory) to the
`--python` option of the `uv pip` command.
## Test Plan
Tested manually to confirm that the expected new functionality works.
The test suite still passes after this change.
I don't know how to add tests for a new feature like this. I would be
happy to do so if someone can give me some pointers on how to do it.
## Summary
Source distributions in the `.tar.bz2` format are still relatively common
in existing codebases; the most prominent examples are the Twisted source
distributions up to version 20.3.0. Since upgrading Twisted to a more
recent version is often not an option for a given project, we add support
for `.tar.bz2` here so that `uv` can remain a drop-in replacement for
`pip` in these projects.
## Test Plan
The feature was tested both by adding the corresponding test coverage,
and by directly installing a package of interest under a Python version
that doesn't have the corresponding wheel:
```sh
cargo run venv -p python3.8
cargo run pip install Twisted==20.3.0 --no-cache
```
The `--no-cache` argument in the example above clears any cached
information about the unsatisfiability of the requirements, which may
have been recorded during a previous attempt to install this package
with a `uv` version that didn't yet implement this feature.
## Summary
This PR adds basic struct definitions along with a "workspace" concept
for discovering settings. (The "workspace" terminology is used to match
Ruff; I did not invent it.)
A few notes:
- We discover any `pyproject.toml` or `uv.toml` file in any parent
directory of the current working directory; see the sketch after this
list. (We could adjust this to look at the directories of the input
files.)
- We don't actually do anything with the configuration yet; but those
PRs are large and I want this to be reviewed in isolation.
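To illustrate the discovery walk, here's a minimal sketch (the function name and exact precedence are illustrative, not uv's settled API):
```rust
use std::path::{Path, PathBuf};

/// Walk up from the current working directory and return the first
/// `uv.toml` or `pyproject.toml` found in any ancestor directory.
fn find_workspace_settings(cwd: &Path) -> Option<PathBuf> {
    for ancestor in cwd.ancestors() {
        for candidate in ["uv.toml", "pyproject.toml"] {
            let path = ancestor.join(candidate);
            if path.is_file() {
                return Some(path);
            }
        }
    }
    None
}
```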
Hello! This is my first PR so do not hesitate to let me know if anything
should be done differently 🙌🏽
## Summary
This PR starts adding useful error messages and warnings when people
pass redundant or unsupported arguments to `pip list`.
For now, I've just covered `pip list --outdated`, which is currently
unsupported.
Closes https://github.com/astral-sh/uv/issues/2948
## Summary
Since the [`junction` crate](https://crates.io/crates/junction)
implements Windows-only functionality, and since the only place it is
used is guarded by `#[cfg(windows)]`,
1f626bfc73/crates/uv-fs/src/lib.rs (L65-L86)
it makes sense not to depend on this crate at all on non-Windows
platforms.
If nothing else, this makes Linux distribution packagers’ lives just a
*tiny* bit easier.
## Test Plan
On Fedora Linux 39:
```
# To avoid an error when /tmp and the working directory are on different filesystems:
$ mkdir _tmp
$ TMPDIR="${PWD}/_tmp" cargo run -p uv-dev -- fetch-python
$ cargo test
```
I don’t have access to a Windows system.
## Summary
This makes it easier to add (e.g.) JSON Schema derivations to the type.
If we have support for other dates in the future, we can generalize it
to a `UserDate` or similar.
# Avoid cache invalidation on credentials renewal
Addresses
- https://github.com/astral-sh/uv/issues/3009#issue-2239221126
## Summary
Some private package registries (e.g. AWS CodeArtifact) use short-lived
credentials. Since they are short-lived, the exact URL that is assigned
to `UV_INDEX_URL` changes frequently and with that the cache key /
hashes of these URLs. This causes the cache to be missed on token
renewal.
This PR attempts to fix this by hashing URLs for cache keys without
their user credentials.
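As a rough sketch of the idea (using the `url` crate; not necessarily the exact uv implementation), the credentials are cleared from a copy of the URL before hashing:
```rust
use std::hash::{Hash, Hasher};
use url::Url;

/// Hash a URL for use as a cache key, ignoring any embedded credentials,
/// so that rotating a short-lived token doesn't invalidate the cache.
fn hash_url_without_credentials<H: Hasher>(url: &Url, state: &mut H) {
    let mut url = url.clone();
    // Errors are ignored: some URLs (e.g., `mailto:`) cannot carry
    // credentials, in which case there is nothing to strip.
    let _ = url.set_username("");
    let _ = url.set_password(None);
    url.as_str().hash(state);
}
```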
## Test Plan
I added a test that validates that:
1. Changing the user credentials yields the same hash
2. Setting no user credentials yields the same hash as setting some user credentials
## Question
I'm not sure if we should also change the `hash` implementation of
`CanonicalUrl` / `RepositoryUrl`; they also call `.hash` internally.
PS: this is the first time I'm writing Rust, so if I'm wasting your
precious time, let me know and I'll try to up my skills before I ask
again. Anyway, I figured it's good to get this issue on your radar :)
## Summary
This PR adds system install tests to verify the behavior described in
#2798. It turns out this behavior _also_ affects Fedora and Amazon
Linux, we just didn't have the right conditions enabled (specifically,
you need to create the virtualenv with `python -m venv` to get these
symlinks), so the test suite was expanded to capture that.
The issue itself is also fixed by way of deduplicating the
`site-packages` entries.
Closes: https://github.com/astral-sh/uv/issues/2798
## Summary
I'm surprised we haven't hit this before, but apparently we don't allow
comments after `--index-url`, `-e` entries, etc., in the
requirements.txt parser.
Closes #3011.
## Test Plan
`cargo test`
## Summary
It turns out that if you have an environment variable set, Clap will
consider that equivalent to passing the flag, even if it's set to (e.g.)
something falsy or the default value.
So, e.g., this fails:
```shell
UV_SYSTEM_PYTHON=false uv pip install --python ./.venv/bin/python flask
```
Worse, this fails, because it thinks `--no-index` and `--index-url` are
conflicting:
```shell
export UV_INDEX_URL=https://google.com
uv pip install flask --no-index
```
This PR removes some of the conflicts, namely those related to
environment variables, such that:
- You _can_ pass mixes of `--no-index`, `--index-url`, etc. If
`--no-index` is provided, all the index URLs will be ignored (but we
won't error).
- Passing `--pre` will always enable prereleases, even if `--prerelease`
is also provided. (We could warn here, although honestly it's not
trivial, because we'd need to make `--prerelease` take an optional value,
and then we'd lose the default argument from the `--help` output.)
- You _can_ pass `--system` and `--python`. If `--python` is provided,
we use that, and ignore `--system`. (We could warn here.)
I guess the underlying problem here is that we can't differentiate
between arguments passed on the CLI and those set as environment
variables. But making bigger changes here seems out-of-scope.
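A minimal sketch of the mechanism (clap 4 derive syntax; the argument names here are illustrative):
```rust
use clap::Parser;

#[derive(Parser)]
struct Args {
    /// With `env`, Clap treats a set environment variable as if the flag
    /// had been passed on the command line...
    #[arg(long, env = "UV_INDEX_URL")]
    index_url: Option<String>,

    /// ...so declaring `conflicts_with = "index_url"` here (as before
    /// this PR) would reject `--no-index` whenever UV_INDEX_URL happened
    /// to be set. Instead, both are now accepted, and `--no-index` wins.
    #[arg(long)]
    no_index: bool,
}
```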
Closes https://github.com/astral-sh/uv/issues/3000.
## Summary
It turns out that `normalize_path` (sourced from Cargo) has a subtle
bug. If you pass it a relative path that traverses beyond the root, it
silently drops components. So, e.g., passing `../foo/bar`, it will just
drop the leading `..` and return `foo/bar`.
This PR encodes that behavior as a `Result` and avoids using it in such
cases.
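A sketch of the fixed behavior (simplified relative to the actual helper):
```rust
use std::io;
use std::path::{Component, Path, PathBuf};

/// Normalize a path, erroring if `..` would traverse beyond the root
/// instead of silently dropping the component (the old behavior turned
/// `../foo/bar` into `foo/bar`).
fn normalize_path(path: &Path) -> io::Result<PathBuf> {
    let mut normalized = PathBuf::new();
    for component in path.components() {
        match component {
            Component::CurDir => {}
            Component::ParentDir => {
                if !normalized.pop() {
                    return Err(io::Error::new(
                        io::ErrorKind::InvalidInput,
                        "path traverses beyond its root",
                    ));
                }
            }
            other => normalized.push(other),
        }
    }
    Ok(normalized)
}
```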
Closes https://github.com/astral-sh/uv/issues/3012.
Inspired by https://github.com/astral-sh/uv/issues/2964, we now properly
log hardlink failures, e.g., when the cache is inside a Docker container
but the venv is in a bind mount:
```
DEBUG Failed to hardlink `/code/venv/uv/lib/python3.12/site-packages/asgiref-3.8.1.dist-info/WHEEL` to `/root/.cache/uv/archive-v0/nnpkKgUoM3LMxcNDmEKJQ/asgiref-3.8.1.dist-info/WHEEL`, attempting to copy files as a fallback
```
## Summary
I don't know if this is a good change, but `main.rs` is really large.
This just moves all the Clap arguments into their own `cli.rs` module.
Free-threaded Python reintroduces ABI flags, since it is incompatible
with regular native modules and abi3.
Tests: none yet! We're lacking CPython 3.13 no-GIL builds we can use in
CI.
My test setup:
```
PYTHON_CONFIGURE_OPTS="--enable-shared --disable-gil" pyenv install 3.13.0a5
cargo run -q -- venv -q -p python3.13 .venv3.13 --no-cache-dir && cargo run -q -- pip install -v psutil --no-cache-dir && .venv3.13/bin/python -c "import psutil"
```
Fixes #2429
## Summary
In all of these ID types, we pass values to `cache_key::digest` prior to
passing to `DistributionId` or `ResourceId`. The `DistributionId` and
`ResourceId` are then hashed later, since they're used as keys in hash
maps.
It seems wasteful to hash the value, then hash the hashed value? So this
PR modifies those structs to be enums that can take one of a fixed set
of types.
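For illustration, the shape of the change (variant names assumed, not uv's exact definition):
```rust
use std::path::PathBuf;
use url::Url;

/// Previously a wrapper around a pre-computed digest (which hash maps
/// would then hash again); now an enum over the raw key types, which
/// the map hashes only once.
#[derive(Debug, Clone, Hash, PartialEq, Eq)]
enum DistributionId {
    Url(Url),
    Path(PathBuf),
}
```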
## Summary
If there are no hashes for a given package, we now return
`Validate(&[])` so that the policy is impossible to satisfy. Previously,
we returned `None`, which is always satisfied.
We don't really ever expect to hit this, because we detect this case in
the resolver and raise a different error. But if we have a bug
somewhere, it's better to fail with an error than silently let the
package through.
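A sketch of the distinction, with simplified stand-ins for the real types:
```rust
#[derive(PartialEq)]
struct HashDigest(String); // simplified stand-in for the real digest type

enum HashPolicy<'a> {
    None,
    Generate,
    Validate(&'a [HashDigest]),
}

/// `Validate(&[])` can never be satisfied by any computed digest,
/// whereas `None` (no policy) is trivially satisfied.
fn is_satisfied(policy: &HashPolicy, computed: &HashDigest) -> bool {
    match policy {
        HashPolicy::None | HashPolicy::Generate => true,
        HashPolicy::Validate(allowed) => allowed.contains(computed),
    }
}
```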
## Summary
This PR enables `--require-hashes` with unnamed requirements. The key
change is that `PackageId` becomes `VersionId` (since it refers to a
package at a specific version), and the new `PackageId` consists of
_either_ a package name _or_ a URL. The hashes are keyed by `PackageId`,
so we can generate the `RequiredHashes` before we have names for all
packages, and enforce them throughout.
Closes #2979.
When running the `uv-client` tests, I would previously get:
```
warning: field `0` is never read
--> crates/uv-configuration/src/config_settings.rs:43:27
|
43 | pub struct ConfigSettings(BTreeMap<String, ConfigSettingValue>);
| -------------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| field in this struct
|
= note: `ConfigSettings` has derived impls for the traits `Clone` and `Debug`, but these are intentionally ignored during dead code analysis
= note: `#[warn(dead_code)]` on by default
help: consider changing the field to be of unit type to suppress this warning while preserving the field numbering, or remove the field
|
43 | pub struct ConfigSettings(());
| ~~
warning: `uv-configuration` (lib) generated 1 warning
```
## Summary
Closes #2564
## Test Plan
1. Changed existing linehaul tests to leverage insta.
2. Ran tests in various Linux distros (Debian, Ubuntu, CentOS, Fedora,
Alpine) to ensure they also pass locally again.
---------
Co-authored-by: konstin <konstin@mailbox.org>
The sync scenarios script is broken, so I did the updates manually:
```
$ ./scripts/sync_scenarios.sh
Setting up a temporary environment...
Using Python 3.12.1 interpreter at: /home/konsti/projects/uv/.venv/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
× No solution found when resolving dependencies:
╰─▶ Because docutils==0.21.post1 is unusable because the package metadata was inconsistent and you require docutils==0.21.post1, we can conclude that the requirements are unsatisfiable.
hint: Metadata for docutils==0.21.post1 was inconsistent:
Package metadata version `0.21` does not match given version `0.21.post1`
```
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Similar to `Revision`, we now store IDs in the `Archive` entries rather
than absolute paths. This makes the cache robust to moves, etc.
Closes https://github.com/astral-sh/uv/issues/2908.
## Summary
This PR formalizes some of the concepts we use in the cache for
"pointers to things".
In the wheel cache, we have files like
`annotated_types-0.6.0-py3-none-any.http`. This represents an unzipped
wheel, cached alongside an HTTP caching policy. We now have a struct for
this to encapsulate the logic: `HttpArchivePointer`.
Similarly, we have files like `annotated_types-0.6.0-py3-none-any.rev`.
This represents an unzipped local wheel, along with a timestamp. We
now have a struct for this to encapsulate the logic:
`LocalArchivePointer`.
We have similar structs for source distributions too.
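A sketch of the two pointer shapes (field names and types assumed for illustration):
```rust
use std::time::SystemTime;

struct ArchiveId(String); // an entry in the archives directory (assumed shape)
struct CachePolicy;       // stand-in for the stored HTTP caching policy

/// Pointer for a wheel fetched over HTTP: the archive entry plus the
/// HTTP caching policy used to decide freshness.
struct HttpArchivePointer {
    policy: CachePolicy,
    archive: ArchiveId,
}

/// Pointer for a local wheel: the archive entry plus the modification
/// timestamp of the wheel at the time it was unzipped.
struct LocalArchivePointer {
    timestamp: SystemTime,
    archive: ArchiveId,
}
```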
## Summary
This PR enables hash generation for URL requirements when the user
provides `--generate-hashes` to `pip compile`. While we include the
hashes from the registry already, today, we omit hashes for URLs.
To power hash generation, we introduce a `HashPolicy` abstraction:
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HashPolicy<'a> {
    /// No hash policy is specified.
    None,
    /// Hashes should be generated (specifically, a SHA-256 hash), but not validated.
    Generate,
    /// Hashes should be validated against a pre-defined list of hashes. If necessary, hashes should
    /// be generated so as to ensure that the archive is valid.
    Validate(&'a [HashDigest]),
}
```
All of the methods on the distribution database now accept this policy,
instead of accepting `&'a [HashDigest]`.
Closes #2378.
## Summary
This PR modifies the distribution database to return both the
`Metadata23` and the computed hashes when clients request metadata.
No behavior changes, but this will be necessary to power
`--generate-hashes`.
## Summary
This represents a change to `--require-hashes` in the event that we
don't find a matching hash from the registry. The behavior in this PR is
closer to pip's.
Prior to this PR, if a distribution had no reported hash, or only
mismatched hashes, we would mark it as incompatible. Now, we mark it as
compatible, but we use the hash-agreement as part of the ordering, such
that we prefer any distribution with a matching hash, then any
distribution with no hash, then any distribution with a mismatched hash.
As a result, if an index reports incorrect hashes, but the user provides
the correct one, resolution now succeeds, where it would've failed.
Similarly, if an index omits hashes altogether, but the user provides
the correct one, resolution now succeeds, where it would've failed.
If we end up picking a distribution whose hash ultimately doesn't match,
we'll reject it later, after resolution.
## Summary
This PR adds support for hash-checking mode in `pip install` and `pip
sync`. It's a large change, both in terms of the size of the diff and
the modifications in behavior, but it's also one that's hard to merge in
pieces (at least, with any test coverage) since it needs to work
end-to-end to be useful and testable.
Here are some of the most important highlights:
- We store hashes in the cache. Where we previously stored pointers to
unzipped wheels in the `archives` directory, we now store pointers with
a set of known hashes. So every pointer to an unzipped wheel also
includes its known hashes.
- By default, we don't compute any hashes. If the user runs with
`--require-hashes`, and the cache doesn't contain those hashes, we
invalidate the cache, redownload the wheel, and compute the hashes as we
go. For users that don't run with `--require-hashes`, there will be no
change in performance. For users that _do_, the only change will be if
they don't run with `--generate-hashes` -- then they may see some
repeated work between resolution and installation, if they use `pip
compile` then `pip sync`.
- Many of the distribution types now include a `hashes` field, like
`CachedDist` and `LocalWheel`.
- Our behavior is similar to pip, in that we enforce hashes when pulling
any remote distributions, and when pulling from our own cache. Like pip,
though, we _don't_ enforce hashes if a distribution is _already_
installed.
- Hash validity is enforced in a few different places:
1. During resolution, we enforce hash validity based on the hashes
reported by the registry. If we need to access a source distribution,
though, we then enforce hash validity at that point too, prior to
running any untrusted code. (This is enforced in the distribution
database.)
2. In the install plan, we _only_ add cached distributions that have
matching hashes. If a cached distribution is missing any hashes, or the
hashes don't match, we don't return them from the install plan.
3. In the downloader, we _only_ return distributions with matching
hashes.
4. The final set of "things we install" is: (1) the wheels from
the cache, and (2) the downloaded wheels. So this ensures that we never
install any mismatching distributions.
- Like pip, if `--require-hashes` is provided, we require that _all_
distributions are pinned with either `==` or a direct URL. We also
require that _all_ distributions have hashes.
There are a few notable TODOs:
- We don't support hash-checking mode for unnamed requirements. These
should be _somewhat_ rare, though? Since `pip compile` never outputs
unnamed requirements. I can fix this, it's just some additional work.
- We don't automatically enable `--require-hashes` when a hash exists in
the requirements file; `--require-hashes` must be passed explicitly.
Closes #474.
## Test Plan
I'd like to add some tests for registries that report incorrect hashes,
but otherwise: `cargo test`
## Summary
This lets us remove circular dependencies (in the future, e.g., #2945)
that arise from `FlatIndex` needing a bunch of resolver-specific
abstractions (like incompatibilities, required hashes, etc.) that aren't
necessary to _fetch_ the flat index entries.
See https://github.com/astral-sh/uv/issues/2617
Note this also includes:
- #2918
- #2931 (pending)
A first step towards Python toolchain management in Rust.
First, we add a new crate to manage Python download metadata:
- Adds a new `uv-toolchain` crate
- Adds Rust structs for Python version download metadata
- Duplicates the script which downloads Python version metadata
- Adds a script to generate Rust code from the JSON metadata
- Adds a utility to download and extract the Python version
I explored some alternatives, like a build script using things like
`serde` and `uneval` to automatically construct the code from our
structs, but deemed it too heavy. Unlike Rye, I don't generate the Rust
directly from the web requests; there's an intermediate JSON layer to
speed up iteration on the Rust types.
Next, we add a `uv-dev` command `fetch-python` to download Python
versions per the bootstrapping script.
- Downloads a requested version or reads from `.python-versions`
- Extracts to `UV_BOOTSTRAP_DIR`
- Links executables for path extension
This command is not really intended to be user facing, but it's a good
PoC for the `uv-toolchain` API. Hash checking (via the sha256) isn't
implemented yet; we can do that in a follow-up.
Finally, we remove the `scripts/bootstrap` directory, update CI to use
the new command, and update the CONTRIBUTING docs.
<img width="1023" alt="Screenshot 2024-04-08 at 17 12 15"
src="https://github.com/astral-sh/uv/assets/2586601/57bd3cf1-7477-4bb8-a8e9-802a00d772cb">
## Summary
If the user runs with `--generate-hashes`, and the lockfile doesn't
contain _any_ hashes for a package (despite being pinned), we should add
new hashes. This mirrors running `uv pip compile --generate-hashes` for
the first time with an existing lockfile.
Closes #2962.
To get more insights into test performance, allow instrumenting tests
with tracing-durations-export.
Usage:
```shell
# A single test
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo test --features tracing-durations-export --test pip_install_scenarios no_binary -- --exact
# All tests
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo nextest run --features tracing-durations-export
```
Then we can e.g. look at
`target/test-traces/pip_install_scenarios::no_binary.svg` and see the
builds it performs:

## Summary
If we build a source distribution from the registry, and the version
doesn't match that of the filename, we should error, just as we do for
mismatched package names. However, we should also backtrack here, which
we didn't previously.
Closes https://github.com/astral-sh/uv/issues/2953.
## Test Plan
Verified that `cargo run pip install docutils --verbose --no-cache
--reinstall` installs `docutils==0.21` instead of the invalid
`docutils==0.21.post1`.
In the logs, I see:
```
WARN Unable to extract metadata for docutils: Package metadata version `0.21` does not match given version `0.21.post1`
```
## Summary
This updates to the version of axoupdater used in cargo-dist 0.13.0's
own selfupdate command, with all relevant fixes for platforms. It also
tentatively introduces a mildly dangerous self-runtest that runs `uv
self update` and checks that the binary is installed and executable.
I *believe* some adjustments need to be made to your CI to have this new
test run, because it requires the `self-update` feature to be enabled,
and I didn't want to just start messing with how you do feature coverage
in your CI. **As a result I haven't yet had a chance to actually fully
run this in CI**, though I've locally tested it on Windows (with the
guard disabled).
## Test Plan
Most of the machinery here is provided by axoupdater itself (cargo-dist
also includes a variant of these tests in its codebase). This initial
implementation has a couple major limitations:
* This is For Reals modifying the system that runs the test (so it's off
unless it detects it's running in CI, and if you want variations on this
test they'll need to be [run in
serial](5e7826f7b0/cargo-dist/tests/cli-tests.rs (L235))).
Since many of the testing issues were surrounding precise details of
Actual Deployed Executions, this seemed worth the tradeoff.
* The actual installer *script* it's ultimately invoking is the one you
last published, and *not* the one that cargo-dist will make when you
next publish.
We're already working on implementing some logic for "get cargo-dist to
generate a fresh installer script too", which is in fact the basis of a
huge amount of cargo-dist's own test suite. Now that we're dogfooding
this stuff, it should be quite hard for it to break without cargo-dist's
own codebase noticing first.
<!-- How was it tested? -->
## Summary
The prefetcher tallies the number of times we tried a given package, and
then once we hit a threshold, grabs the version map, assuming it's
already been fetched. For direct URL distributions, though, we don't
have a version map! And there's no need to prefetch.
Closes https://github.com/astral-sh/uv/issues/2941.
## Summary
Right now, we have a `Hashes` representation that looks like:
```rust
/// A dictionary mapping a hash name to a hex encoded digest of the file.
///
/// PEP 691 says multiple hashes can be included and the interpretation is left to the client.
#[derive(Debug, Clone, Eq, PartialEq, Default, Deserialize)]
pub struct Hashes {
    pub md5: Option<Box<str>>,
    pub sha256: Option<Box<str>>,
    pub sha384: Option<Box<str>>,
    pub sha512: Option<Box<str>>,
}
```
It stems from the PyPI API, which returns a dictionary of hashes.
We tend to pass these around as a `Vec<Hashes>`. But it's a bit strange,
because each entry in that vector could contain multiple hashes, and it
makes it difficult to ask questions like "Is `sha256:ab21378ca980a8` in
the set of hashes?"
This PR instead treats `Hashes` as the PyPI-internal type, and uses a
new `Vec<HashDigest>` everywhere in our own APIs.
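A sketch of the flattening (the `HashDigest` shape is assumed here, not uv's exact type):
```rust
// Simplified stand-ins for the new flat digest type.
enum HashAlgorithm { Md5, Sha256, Sha384, Sha512 }
struct HashDigest {
    algorithm: HashAlgorithm,
    digest: Box<str>,
}

impl Hashes {
    /// Flatten the PyPI-style dictionary into a list of digests, which
    /// makes questions like "is `sha256:...` in the set?" a simple scan.
    fn into_digests(self) -> Vec<HashDigest> {
        [
            (HashAlgorithm::Md5, self.md5),
            (HashAlgorithm::Sha256, self.sha256),
            (HashAlgorithm::Sha384, self.sha384),
            (HashAlgorithm::Sha512, self.sha512),
        ]
        .into_iter()
        .filter_map(|(algorithm, digest)| {
            digest.map(|digest| HashDigest { algorithm, digest })
        })
        .collect()
    }
}
```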
Needed to prevent circular dependencies in my toolchain work (#2931). I
think this is probably a reasonable change as we move towards persistent
configuration too?
Unfortunately `BuildIsolation` needs to be in `uv-types` to avoid
circular dependencies still. We might be able to resolve that in the
future.
Elides Python patch versions from the test suite unless the test
specifically requests a patch version.
This reduces some toil when not using our bootstrapped Python versions.
Partially addresses https://github.com/astral-sh/uv/issues/2165 though
we'll need changes to the scenario tests to really support their case.
## Summary
When you specify a source distribution via a path, it can either be a
path to an archive (like a `.tar.gz` file), or a source tree (a
directory). Right now, we handle both paths through the same methods in
the source database. This PR splits them up into separate handlers.
This will make hash generation a little easier, since we need to
generate hashes for archives, but _can't_ generate hashes for source
trees.
It also means that we can now store the unzipped source distribution in
the cache (in the case of archives), and avoid unzipping the source
distribution needlessly on every invocation; and, overall, it lets us
enforce clearer expectations between the two routes (e.g., what errors
are possible vs. not), at the cost of duplicating some code.
Closes #2760 (incidentally -- not exactly the motivation for the change,
but it did accomplish it).
## Summary
I think this is a much clearer name for this concept: the set of
"versions" of a given wheel or source distribution. We also use
"Manifest" elsewhere to refer to the set of requirements, constraints,
etc., so this was overloaded.
## Summary
We have a heuristic in `File` that attempts to detect whether a URL is
absolute or relative. However, `contains("://")` is prone to false
positives. In the linked issues, the URLs look like:
```
/packages/5a/d8/4d75d1e4287ad9d051aab793c68f902c9c55c4397636b5ee540ebd15aedf/pytz-2005k.tar.bz2?hash=597b596dc1c2c130cd0a57a043459c3bd6477c640c07ac34ca3ce8eed7e6f30c&remote=4d75d1e4287636b5ee540ebd15aedf/pytz-2005k.tar.bz2#sha256=597b596dc1c2c130cd0a57a043459c3bd6477c640c07ac34ca3ce8eed7e6f30c
```
Which is relative, but includes `://`.
Instead, we should determine whether the URL has a _scheme_, matching
the `Url` crate's internal logic.
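A sketch of scheme detection following the RFC 3986 rules the `Url` crate uses (an ASCII letter followed by letters, digits, `+`, `-`, or `.`):
```rust
/// Returns true only if the string starts with a valid URL scheme
/// followed by `:`, unlike the old `contains("://")` heuristic, which
/// misfires on relative URLs whose query strings contain `://`.
fn has_scheme(s: &str) -> bool {
    let Some((scheme, _rest)) = s.split_once(':') else {
        return false;
    };
    let mut chars = scheme.chars();
    chars.next().is_some_and(|c| c.is_ascii_alphabetic())
        && chars.all(|c| c.is_ascii_alphanumeric() || matches!(c, '+' | '-' | '.'))
}
```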
Closes https://github.com/astral-sh/uv/issues/2899.
## Summary
Right now, the path-based wheel cache just looks at the symlink to the
archives directory, checks the timestamp on it, and continues with that
symlink as long as the timestamp is up-to-date.
The HTTP-based wheel cache, meanwhile, uses an intermediary `.http` file, which
includes the HTTP caching information. The `.http` file's payload is
just a path pointing to an entry in the archives directory.
This PR modifies the path-based codepaths to use a similar cache file,
which stores a timestamp along with a path to the archives directory.
The main advantage here is that we can add other data to this cache file
(namely, hashes in the future).
## Test Plan
Beyond existing tests, I also verified that this doesn't require a
version bump:
```
git checkout main
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall
git checkout charlie/manifest
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall --refresh
```
## Summary
I think this is kind of just an oversight. If a wheel is available via
`--find-links`, and the index is "local", we never find it in the cache.
## Test Plan
`cargo test`
## Summary
In all cases, we unzip these immediately after returning. By moving the
unzipping into the database, we can remove a bunch of code (coming in a
separate PR), and pave the way for hash-checking, since hash generation
will _also_ happen in the database, and splitting the caching layers
across the database and the unzipper creates complications.
Closes #2863.
With pubgrub being fast for complex ranges, we can now compute the next
n candidates without taking a performance hit. This speeds up cold cache
`urllib3<1.25.4` `boto3` from maybe 40s - 50s to ~2s. See docstrings for
details on the heuristics.
**Before**

**After**

---
The prefetching needs two parts: first looking for compatible versions,
and then falling back to flat next versions. After we've selected a
boto3 version, there is only one compatible botocore version remaining,
so we won't find other compatible candidates for prefetching. We see
this as a pattern where we only prefetch boto3 (stacked bars), but not
botocore (sequential requests between the stacked bars).

The risk is that we're completely wrong with the guess and cause a lot
of useless network requests. I think this is acceptable since this
mechanism only triggers when we're already on the bad path and we should
simply have fetched all versions after some seconds (assuming a fast
index like pypi).
---
It would be even better if the pubgrub state were copy-on-write, so we
could simulate more progress than we actually have; currently, we're
guessing what the next version is, which could be completely wrong, but I
think this is still a valuable heuristic.
Fixes #170.
## Summary
Is this, perhaps, not totally necessary? It doesn't show up in any
fixtures beyond those that I added recently.
Closes https://github.com/astral-sh/uv/issues/2846.
## Summary
Demonstrates some suboptimal behavior in how we handle invalid metadata,
which are fixed in https://github.com/astral-sh/uv/pull/2834.
The included wheels were modified by-hand to include invalid structures.
Batch prefetching needs more information from the candidate selector, so
I've split `select` into methods. Split out from #2452. No functional
changes.
## Summary
In working on `--require-hashes`, I noticed that we're missing some
incompatibility tracking for `--find-links` distributions. Specifically,
we don't respect `--no-build` or `--no-binary`, so if we select a wheel
due to `--find-links`, we then throw a hard error when trying to build
it later (if `--no-binary` is provided), rather than selecting the
source distribution instead.
Closes https://github.com/astral-sh/uv/issues/2827.
## Summary
If we're given a Git reference like `20240222`, we currently treat it as
a short commit hash. However... it _could_ be a branch or a tag. This PR
improves the Git reference logic to ensure that ambiguous references
like `20240222` are handled appropriately, by attempting to extract it
as a branch, then a tag, then a short commit hash.
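Illustrative sketch of the fallback order (the `Repo` trait is a hypothetical stand-in for uv's Git layer):
```rust
enum GitReference {
    Branch(String),
    Tag(String),
    ShortCommit(String),
}

trait Repo {
    fn has_branch(&self, name: &str) -> bool;
    fn has_tag(&self, name: &str) -> bool;
}

/// Resolve an ambiguous reference like `20240222`: prefer a branch,
/// then a tag, and only then fall back to a short commit hash.
fn classify(repo: &dyn Repo, reference: &str) -> GitReference {
    if repo.has_branch(reference) {
        GitReference::Branch(reference.to_owned())
    } else if repo.has_tag(reference) {
        GitReference::Tag(reference.to_owned())
    } else {
        GitReference::ShortCommit(reference.to_owned())
    }
}
```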
Closes https://github.com/astral-sh/uv/issues/2772.
## Summary
Upgrading `rs-async-zip` enables us to support data descriptors in
streaming. This both greatly improves performance for indexes that use
data descriptors _and_ ensures that we support them in a few other
places (e.g., zipped source distributions created in Finder).
Closes #2808.
## Summary
This partially revives https://github.com/astral-sh/uv/pull/2135 (with
some modifications) to enable users to opt-in to looking for packages
across multiple indexes.
The behavior is such that, in version selection, we take _any_
compatible version from a "higher-priority" index over the compatible
versions of a "lower-priority" index, even if that means we might accept
an "older" version.
Closes https://github.com/astral-sh/uv/issues/2775.
## Summary
This was an oversight in the `-r pyproject.toml` refactor. We can't
enforce unused extras if we have a source tree. We made the correct
changes to `pip compile`, but not `pip install`. This PR just mirrors
those changes to `pip install`, and adds a few tests.
Closes https://github.com/astral-sh/uv/issues/2801.
We don't know what kind of error the OS gives us on `try_lock_exclusive`
with an already locked file, so we assume all those errors are an
already locked file and call `lock_exclusive`.
For example the windows error:
```
Os {
    code: 33,
    kind: Uncategorized,
    message: "The process cannot access the file because another process has locked a portion of the file.",
}
```
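A sketch of the fallback, assuming the `fs2` crate's advisory file-locking API (not necessarily what uv uses):
```rust
use fs2::FileExt;
use std::fs::File;

/// `try_lock_exclusive` errors aren't portable across operating systems,
/// so any error is treated as "the file is already locked" and we fall
/// back to the blocking `lock_exclusive`.
fn acquire_lock(file: &File) -> std::io::Result<()> {
    if file.try_lock_exclusive().is_err() {
        file.lock_exclusive()?;
    }
    Ok(())
}
```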
Fixes #2767
## Summary
Rather than storing the `redirects` on the resolver, this PR just
re-uses the "convert this URL to precise" logic when we convert to a
`Resolution` after-the-fact. I think this is a lot simpler: it removes
state from the resolver, and simplifies a lot of the hooks around
distribution fetching (e.g., `get_or_build_wheel_metadata` no longer
returns `(Metadata23, Option<Url>)`).
## Summary
I noticed in #2769 that I was now stripping `.git` suffixes from Git
URLs after resolving to a precise commit. This PR cleans up the internal
caching to use a better canonical representation: a `RepositoryUrl`
along with a `GitReference`, instead of a `GitUrl` which can contain
non-canonical data. This gives us both better fidelity (preserving the
`.git`, along with any casing that the user provided when defining the
URL) and is overall cleaner and more robust.
## Summary
This PR leverages our lookahead direct URL resolution to significantly
improve the range of Git URLs that we can accept (e.g., if a user
provides the same requirement, once as a direct dependency, and once as
a tag). We did some of this in #2285, but the solution here is more
general and works for arbitrary transitive URLs.
Closes https://github.com/astral-sh/uv/issues/2614.
Originally a regression test for #2779 but we found out that there's
some weird behavior where different `anyio` versions were preferred
based on the platform.
Addresses panic introduced in #2596 and reported in
https://github.com/astral-sh/uv/issues/2763#issuecomment-2030674936
When there are multiple versions of a package available, we remove the
existing packages before installing the resolved version to "fix" the
environment. We must remove all of the package versions and reinstall
because removing _any_ of the package versions could break the others.
Since reinstalls require a pull from the remote, this broke a contract
between the resolver and the planner which must always agree on which
packages should come from the remote. This further demonstrates that we
should be constructing the install plan with more concrete knowledge
from the resolver (i.e. `ResolvedDist` instead of `Requirement`) to
avoid having to manually ensure logic matches.
## Test Plan
Fails on `main` with a panic; succeeds on this branch:
```
uv venv --seed
source .venv/bin/activate
pip install anyio==3.7.0 --ignore-installed
pip install anyio==4.0.0 --ignore-installed
cargo run -- pip install anyio black -v
```
## Summary
Ensures that if we resolve any distributions before the resolver, we
cache the metadata in-memory.
_Also_ ensures that we lock (important!) when resolving Git
distributions.
## Summary
This fixes a potential bug that revealed itself in
https://github.com/astral-sh/uv/pull/2761. We don't run into this now,
because we always use "allowed URLs", which store the "last" compatible
URL in the map. But the use of the "raw" URL (rather than the
"canonical" URL) means that other writers have to follow that same
assumption and iterate over dependencies in order.
## Summary
This PR would enable us to support transitive URL requirements. The key
idea is to leverage the fact that...
- URL requirements can only come from URL requirements.
- URL requirements identify a _specific_ version, and so don't require
backtracking.
Prior to running the "real" resolver, we recursively resolve any URL
requirements, and collect all the known URLs upfront, then pass those to
the resolver as "lookahead" requirements. This means the resolver knows
upfront that if a given package is included, it _must_ use the provided
URL.
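A sketch of the lookahead pass, with hypothetical stand-ins for uv's types:
```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical stand-ins, for illustration only.
struct Requirement {
    name: String,
    url: Option<String>,
}

fn url_metadata(url: &str) -> Vec<Requirement> {
    unimplemented!("fetch and parse the requirement's metadata")
}

/// Because URL requirements can only be introduced by other URL
/// requirements, a breadth-first walk over them discovers every pinned
/// URL before the "real" resolver runs.
fn collect_lookahead_urls(roots: Vec<Requirement>) -> HashMap<String, String> {
    let mut pinned = HashMap::new();
    let mut queue: VecDeque<Requirement> = roots.into();
    while let Some(requirement) = queue.pop_front() {
        let Some(url) = requirement.url else { continue };
        if pinned.insert(requirement.name, url.clone()).is_none() {
            queue.extend(url_metadata(&url));
        }
    }
    pinned
}
```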
Closes https://github.com/astral-sh/uv/issues/1808.
With this change, all usages of `EXCLUDE_NEWER` are now in command
wrappers, not in the test functions themselves.
For the venv tests, I refactored them into the same kind of test context
abstraction that the other test modules have in the second commit.
The third commit makes `INSTA_FILTERS` "private", removing the last
remaining individual usage.
Pending Windows CI 🤞
## Summary
We can access cache from `BuildContext`. This mirrors
`SourceDistCachedBuilder`, which doesn't accept `Cache` as an argument
and always accesses it through `BuildContext`.
## Summary
If the user provides `uv pip install pyproject.toml`, we now prompt them
to confirm that they meant the `pyproject-toml` package (as opposed to
`uv pip install -r pyproject.toml`).
## Summary
We iterate over the project "requirements" directly in a variety of
places. However, it's not always the case that an input "requirement" on
its own will _actually_ be part of the resolution, since we support
"overrides".
Historically, then, overrides haven't worked as expected for _direct_
dependencies (and we have some tests that demonstrate the current,
"wrong" behavior). This is just a bug, but it's not really one that
comes up in practice, since it's rare to apply an override to your _own_
dependency.
However, we're now considering expanding the lookahead concept to
include local transitive dependencies. In this case, it's more and more
important that overrides and constraints are handled consistently.
This PR modifies all the locations in which we iterate over requirements
directly, and modifies them to respect overrides (and constraints, where
necessary).
## Summary
This is a trimmed-down version of
https://github.com/astral-sh/uv/pull/2684 that only applies to local
source trees for now, which enables workspace-like workflows (whereby
local packages can depend on other local packages at arbitrary depth).
Closes #2699.
## Test Plan
Added new tests.
Also cloned this MRE that was shared with me
(https://github.com/timothyjlaurent/uv-poetry-monorepo-mre), and
verified that it was installed without error:
```
❯ cargo run pip install ./uv-poetry-monorepo-mre/app --no-cache
Finished dev [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/uv pip install ./uv-poetry-monorepo-mre/app --no-cache`
Resolved 4 packages in 1.28s
Built app @ file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/app
Built lib1 @ file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib1
Built lib2 @ file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib2
Downloaded 4 packages in 457ms
Installed 4 packages in 2ms
+ app==0.1.0 (from file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/app)
+ lib1==0.1.0 (from file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib1)
+ lib2==0.1.0 (from file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib2)
+ ruff==0.3.4
```
Previously, we did not consider installed distributions as candidates
while performing resolution. Here, we update the resolver to use
installed distributions that satisfy requirements instead of pulling new
distributions from the registry.
The implementation details are as follows:
- We now provide `SitePackages` to the `CandidateSelector`
- If an installed distribution satisfies the requirement, we prefer it
over remote distributions
- We do not want to allow installed distributions in some cases, i.e.,
upgrade and reinstall
- We address this by introducing an `Exclusions` type which tracks
installed packages to ignore during selection
- There's a new `ResolvedDist` wrapper with `Installed(InstalledDist)`
and `Installable(Dist)` variants (sketched after this list)
- This lets us pass already installed distributions throughout the
resolver
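A simplified sketch of the wrapper (the inner types are stand-ins for uv's own):
```rust
struct InstalledDist; // a distribution found in site-packages (stand-in)
struct Dist;          // a remote or local distribution to install (stand-in)

/// A distribution that satisfied resolution: either one that's already
/// installed, or one that can be fetched and installed.
enum ResolvedDist {
    Installed(InstalledDist),
    Installable(Dist),
}
```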
The user-facing behavior is thoroughly covered in the tests, but
briefly:
- Installing a package that depends on an already-installed package
prefers the local version over the index
- Installing a package with a name that matches an already-installed URL
package does not reinstall from the index
- Reinstalling (--reinstall) a package by name _will_ pull from the
index even if an already-installed URL package is present
- To reinstall the URL package, you must specify the URL in the request
Closes https://github.com/astral-sh/uv/issues/1661
Addresses:
- https://github.com/astral-sh/uv/issues/1476
- https://github.com/astral-sh/uv/issues/1856
- https://github.com/astral-sh/uv/issues/2093
- https://github.com/astral-sh/uv/issues/2282
- https://github.com/astral-sh/uv/issues/2383
- https://github.com/astral-sh/uv/issues/2560
## Test Plan
- [x] Reproduction at `charlesnicholson/uv-pep420-bug` passes
- [x] Unit test for editable package
([#1476](https://github.com/astral-sh/uv/issues/1476))
- [x] Unit test for previously installed package with empty registry
- [x] Unit test for local non-editable package
- [x] Unit test for new version available locally but not in registry
([#2093](https://github.com/astral-sh/uv/issues/2093))
- ~[ ] Unit test for wheel not available in registry but already
installed locally
([#2282](https://github.com/astral-sh/uv/issues/2282))~ (seems
complicated and not worthwhile)
- [x] Unit test for install from URL dependency then with matching
version ([#2383](https://github.com/astral-sh/uv/issues/2383))
- [x] Unit test for install of new package that depends on installed
package does not change version
([#2560](https://github.com/astral-sh/uv/issues/2560))
- [x] Unit test that `pip compile` does _not_ consider installed
packages
It turns out that #2712 did _not_ fix #2711. After I put up #2712, I
started trying to track down the specific change that caused the
failure. I had assumed at first that it was related to one of our `rkyv`
types, but it actually ended up being one of our msgpack caches. I think
the failure mode is still fundamentally the same idea: the cached data
changed in a way that is still valid msgpack, but got interpreted
differently after deserializing.
The specific change that caused this was the [removal of a field] from
our metadata type.
Ideally we would just undo the change and add the field back. But that
change has already been shipped out to users. So I believe the only
plausible choice at this point is to bump the `built-wheels` cache. This
will unfortunately mean that `uv` will need to re-build wheels.
Fixes #2711
[removal of a field]:
365c292525 (diff-e42586829f9c2cdbb909bedc5cf95691cc415247f2cbc2ebeb80d887020457bbL29)
It seems likely that we forgot to bump the version of the "simple" cache
in the 0.1.25 release. I'm still working on confirming it, but I figured
I'd get this bump up first.
The main problem here is that our "simple" cache is represented by
`rkyv`, and that in turn is tightly coupled to the representation of a
selection of data types in `uv`. Changing those data types without
bumping the cache version can result in cache deserialization errors
like this, or in the worst case, silent logic errors.
One possibility here is that the representation changed in a way that
permitted it to pass `rkyv` validation, but changed how the data itself
is interpreted. Our cache is robust with respect to `rkyv` validation
(if it fails, the cache will invalidate the entry and self-heal), but
being robust to higher level logical errors in interpretation of the
data is a much more significant challenge. Our best bet there is perhaps
some kind of checksum that we could do on top of `rkyv` validation (or
instead of it), and thus convert silent logical changes in how the data
is interpreted into failure modes that we're already robust to.
Fixes #2711
## Summary
This looks like a big change but it really isn't. Rather, I just split
`get_or_build_wheel` into separate `get_wheel` and `build_wheel` methods
internally, which made `get_or_build_wheel_metadata` capable of _not_
relying on `Tags`, which in turn makes it easier for us to use the
`DistributionDatabase` in various places without having it coupled to an
interpreter or environment (something we already did for
`SourceDistributionBuilder`).
## Summary
This PR enables the resolver to "accept" URLs, prereleases, and local
version specifiers for direct dependencies of path dependencies. As a
result, `uv pip install .` and `uv pip install -e .` now behave
identically, in that neither has a restriction on URL dependencies and
the like.
Closes https://github.com/astral-sh/uv/issues/2643.
Closes https://github.com/astral-sh/uv/issues/1853.
## Summary
This PR removes the custom `DistFinder` that we use in `pip sync`. This
originally existed because `VersionMap` wasn't lazy, and so we saved a
lot of time in `DistFinder` by reading distribution data lazily. But
now, AFAICT, there's really no benefit. Maintaining `DistFinder` means
we effectively have to maintain two resolvers, and end up fixing bugs in
`DistFinder` that don't exist in the `Resolver` (like #2688).
Closes #2694.
Closes #2443.
## Test Plan
I ran this benchmark a bunch. It's basically a wash. Sometimes one is
faster than the other.
```
❯ python -m scripts.bench \
--uv-path ./target/release/main \
--uv-path ./target/release/uv \
scripts/requirements/compiled/trio.txt --min-runs 50 --benchmark install-warm --warmup 25
Benchmark 1: ./target/release/main (install-warm)
Time (mean ± σ): 54.0 ms ± 10.6 ms [User: 8.7 ms, System: 98.1 ms]
Range (min … max): 45.5 ms … 94.3 ms 50 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: ./target/release/uv (install-warm)
Time (mean ± σ): 50.7 ms ± 9.2 ms [User: 8.7 ms, System: 98.6 ms]
Range (min … max): 44.0 ms … 98.6 ms 50 runs
Warning: The first benchmarking run for this command was significantly slower than the rest (77.6 ms). This could be caused by (filesystem) caches that were not filled until after the first run. You should consider using the '--warmup' option to fill those caches before the actual benchmark. Alternatively, use the '--prepare' option to clear the caches before each timing run.
Summary
'./target/release/uv (install-warm)' ran
1.06 ± 0.29 times faster than './target/release/main (install-warm)'
```
I stumbled across this when writing tests for
`--emit-marker-expressions`. Namely, I observed that in CI, the `tzdata`
dependency of `pendulum` wasn't included in the `requirements.txt`
output on Windows.
@konstin [suggested] that this was a bug, so I've created a test for it.
In particular, it looks like [`tzdata` is an unconditional dependency of
`pendulum`][tzdata-unconditional].
[suggested]:
https://github.com/astral-sh/uv/pull/2651#discussion_r1539722464
[tzdata-unconditional]:
e646afbd165e58327bc5c698731107/pendulum-3.0.0-cp310-none-win_amd64.whl/pendulum-3.0.0.dist-info/METADATA#line.12
## Summary
In `pip sync`, we weren't properly handling cases in which a package
_only_ existed in `--find-links` (e.g., the user passed `--offline` or
`--no-index`).
I plan to explore removing `Finder` entirely to avoid these mismatch
bugs between `pip sync` and other commands, but this is fine for now.
Closes https://github.com/astral-sh/uv/issues/2688.
## Test Plan
`cargo test`
Unfortunately these tests are all gated on specific platforms because
the marker expressions they generate are, by design, platform specific.
I think we'll eventually want to figure out a more robust testing
strategy for multi-platform locking (of which this is just the tiniest
of first steps), but I don't think we really have the infrastructure for
that in place yet. That is, we don't yet have a way of generating a
marker expression _for_ a particular environment instead of just the one
that happens to _be_ the current environment.
When enabled, the marker expression for the pinned requirements
is written as a comment at the top of the output. It is disabled
by default *and* hidden because it's not clear whether 1) this is
useful to end users and 2) is an interface we want to commit to.
However, it is useful to expose it in some way so that it can be
tested.
For $reasons, we'll want to be able to clone a `Manifest` so
that it can be re-used to generate a marker expression.
There is likely a refactoring that could be done to avoid the
cloning, but a `Manifest` is likely to be small in practice, and
we'll only need to clone it once.
These are useful for converting lower level marker values types
to their corresponding values from a marker environment.
We'll use these for generating marker expressions based on both
the dependency graph and the current marker environment.
## Summary
Now that we're resolving metadata more aggressively for local sources,
it's worth doing this. We now pull metadata from the `pyproject.toml`
directly if it's statically-defined.
Closes https://github.com/astral-sh/uv/issues/2629.
The snapshot filtering situation has gotten way out of hand, with each
test hand-rolling its own filters on top of copied cruft from previous
tests.
I've attempted to address this holistically:
- `TestContext.filters()` has everything you should need
- This was introduced a while ago, but needed a few more filters for it
to be generalized everywhere
- Using `INSTA_FILTERS` is **not recommended** unless you do not want
the context filters
- It is okay to extend these filters for things unrelated to paths
- If you have to write a custom path filter, please highlight it in
review so we can address it in the common module
- `TestContext.site_packages()` gives cross-platform access to the
site-packages directory
- Do not manually construct the path to site-packages from the venv
- Do not turn off tests on Windows because you manually constructed a
Unix path to site-packages
- `TestContext.workspace_root` gives access to uv's repository directory
- Use this for installing from `scripts/packages/`
- If you need coverage for relative paths, copy the test package into
the `temp_dir`; don't change the working directory of the test fixture
There is additional work that can be done here, such as:
- Auditing and removing additional uses of `INSTA_FILTERS`
- Updating manual construction of `Command` instances to use a utility
- The `venv` tests are particularly frightening in their lack of a test
context and could use some love
- Improving the developer experience, i.e., applying context filters to
snapshots by default
This is driving me a little crazy and is becoming a larger problem in
#2596 where I need to move more types (like `Upgrade` and `Reinstall`)
into this crate. Anything that's shared across our core resolver,
install, and build crates needs to be defined in this crate to avoid
cyclic dependencies. We've outgrown it being a single file with some
shared traits.
There are no behavioral changes here.
If you pass a `pyproject.toml` that uses Hatch's context formatting API,
we currently fail because the dependencies aren't valid under PEP 508.
This PR makes the static metadata parsing a little more relaxed, so that
we appropriately fall back to PEP 517 there.
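A sketch of the relaxed parsing (types and names assumed):
```rust
struct Requirement; // stand-in for a parsed PEP 508 requirement

impl Requirement {
    fn parse(spec: &str) -> Result<Self, String> {
        unimplemented!("PEP 508 parsing elided")
    }
}

/// Try to parse each statically-declared dependency as PEP 508; if any
/// fails (e.g., Hatch's context formatting like `{root:uri}`), return
/// `None` so the caller falls back to the PEP 517 build hooks.
fn parse_static_dependencies(dependencies: &[String]) -> Option<Vec<Requirement>> {
    dependencies
        .iter()
        .map(|spec| Requirement::parse(spec).ok())
        .collect()
}
```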
## Summary
Passing `pyproject.toml` or `setup.py` to `pip uninstall` is a bit
strange, since it will often require running a resolution to resolve the
dependencies (e.g., build the project), which means we also need to
accept `--index-url` and friends.
## Summary
Hatch allows for highly dynamic customization of metadata via hooks. In
such cases, Hatch can't uphold the PEP 517 contract, in that the metadata
Hatch would return from `prepare_metadata_for_build_wheel` isn't
guaranteed to match that of the built wheel.
Hatch disables `prepare_metadata_for_build_wheel` entirely for pip. We'll
instead disable it on our end when metadata is defined as "dynamic" in
the `pyproject.toml`, which should allow us to leverage the hook in
_most_ cases while still avoiding incorrect metadata for the remaining
cases.
Closes: https://github.com/astral-sh/uv/issues/2130.
## Summary
When a user passes a `pyproject.toml` to `pip compile` (e.g., `uv pip
compile pyproject.toml`), we extract the requirements from the
`pyproject.toml` directly. However... that isn't always possible (as
seen in the linked issues). When it's _not_, we instead need to run the
PEP 517 build hooks to identify the metadata.
Closes https://github.com/astral-sh/uv/issues/1624.
Closes https://github.com/astral-sh/uv/issues/1644.
## Test Plan
`cargo test`
## Summary
- Displays missing packages as single-line warnings.
- Adds support for `Editable project location` and `Required-by` fields
in `pip show`.
Part of #2526.
## Summary
`uv` was failing to install requirements defined like:
```
file://localhost/Users/crmarsh/Downloads/iniconfig-2.0.0-py3-none-any.whl
```
Closes https://github.com/astral-sh/uv/issues/2652.
While writing tests for a new flag (`--emit-marker-expression`) for `uv
pip compile`, I noticed that one of my test cases (`pendulum 3.0.0`)
was published in Dec 2023. I wanted to include this package in my
tests, but since it comes after our `EXCLUDE_NEWER` constant, it wasn't
visible to `uv`.
In this PR, I chose to resolve this by bumping `EXCLUDE_NEWER` to
`2024-03-25T00:00:00Z`. I also considered a couple other options:
* For a specific test, override and provide a custom `--exclude-newer`
flag. I felt like this would maybe be okay, but we could easily wind
up in a situation where we do this a lot and have a bunch of different
`--exclude-newer` flags in our tests. I'm not sure if this is a huge
problem in practice. Maybe it's fine.
* Find another package (or invent one) with a similarly interesting
configuration. It seemed easier to just bump `EXCLUDE_NEWER`.
The way I did this was to run `cargo insta test` after bumping
`EXCLUDE_NEWER`.
I then reviewed the snapshot diffs, and if they looked reasonable, I
accepted them.
There was only one case where I changed the test to preserve what I
thought it
was trying to test. That's isolated in its own commit.
With https://github.com/pubgrub-rs/pubgrub/pull/190, pubgrub attaches
all types to a dependency provider to reduce the number of generics. We
need a dummy dependency provider now to emulate this. On the plus side,
pep440_rs drops its pubgrub dependency.
This test was introduced in 42973cd9cb. It
looks like it compares some values against some platform specific code
that attempts to find the OS version. But the comparisons made some
assumptions about what kind of data is available. In this commit, we try
to make the test a little more flexible on Linux by not assuming that
`Option` values are `Some`.
## Summary
I don't see a great reason to allow this, and it adds a lot of
complexity, so `pyproject.toml` files are now limited to `pip compile`
and `pip install -r` -- they can't be passed as `-c` or `--override`.
## Summary
Closes Issue:
- https://github.com/astral-sh/uv/issues/2626
## Test Plan
```
cargo run -- pip install -r dev-requirements.txt -r requirements.txt
```
where both requirements files have same `--index-url`
We put a `.gitignore` with `*` at the top of our cache. When maturin was
building a source distribution inside the cache, it would walk up the
tree to find a `.gitignore`, see that, and ignore all Python files. We
now add an (empty) `.git` directory one directory below, in the root of
the built-wheels cache. This prevents the ignore logic from walking
further up (it marks the top level as a git repository).
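A sketch of the marker (path layout assumed):
```rust
use std::fs;
use std::io;
use std::path::Path;

/// Create an empty `.git` directory in the root of the built-wheels
/// cache, so that ignore-file discovery stops there instead of finding
/// the cache's top-level `.gitignore` (which contains `*`).
fn mark_built_wheels_root(built_wheels_root: &Path) -> io::Result<()> {
    fs::create_dir_all(built_wheels_root.join(".git"))
}
```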
Deptry (from #2490) is a mid-sized Rust package with additional Python
packages, so instead of using it in the test, I've replaced it with a
small (44KB total) reproducer that uses cffi for faster building, the
entire test taking <2s on my machine.
Fixes #2490
## Summary
Ensures that (e.g.) installs from conda-forge, Homebrew, and other
distributions don't expose `uv self update` at all.
We'll still show `uv self update` for `pip install uv`, but it will fail
with a good error. Removing the `uv self update` from `pip`-installed
`uv` is more complicated, since we'd need to build separately for the
installer vs. for PyPI.
Closes#2588.
## Summary
This PR enables the source distribution database to be used with unnamed
requirements (i.e., URLs without a package name). The (significant)
upside here is that we can now use PEP 517 hooks to resolve unnamed
requirement metadata _and_ reuse any computation in the cache.
The changes to `crates/uv-distribution/src/source/mod.rs` are quite
extensive, but mostly mechanical. The core idea is that we introduce a
new `BuildableSource` abstraction, which can either be a distribution,
or an unnamed URL:
```rust
/// A reference to a source that can be built into a built distribution.
///
/// This can either be a distribution (e.g., a package on a registry) or a direct URL.
///
/// Distributions can _also_ point to URLs in lieu of a registry; however, the primary distinction
/// here is that a distribution will always include a package name, while a URL will not.
#[derive(Debug, Clone, Copy)]
pub enum BuildableSource<'a> {
    Dist(&'a SourceDist),
    Url(SourceUrl<'a>),
}
```
All the methods on the source distribution database now accept
`BuildableSource`. `BuildableSource` has a `name()` method, but it
returns `Option<&PackageName>`, and everything is required to work with
and without a package name.
The main drawback of this approach (which isn't a terrible one) is that
we can no longer include the package name in the cache. (We do continue
to use the package name for registry-based distributions, since those
always have a name.) The package name was included in the cache route
for two reasons: (1) it's nice for debugging; and (2) we use it to power
`uv cache clean flask`, to identify the entries that are relevant for
Flask.
To solve this, I changed the `uv cache clean` code to look one level
deeper. So, when we want to determine whether to remove the cache entry
for a given URL, we now look into the directory to see if there are any
wheels that match the package name. This isn't as nice, but it does work
(and we have test coverage for it -- all passing).
I also considered removing the package name from the cache routes for
non-registry _wheels_, for consistency... But, it would require a cache
bump, and it didn't feel important enough to merit that.
## Summary
Detects unused cache entries, which can come in a few forms:
1. Directories that are outdated via our versioning scheme.
2. Old source distribution builds (i.e., we have a more recent version).
3. Old wheels (stored in `archive-v0`, but not symlinked-to from
anywhere in the cache).
Closes https://github.com/astral-sh/puffin/issues/1059.
Closes #2566
We were storing the username, e.g., `charlie@astral.sh`, as a
percent-encoded string, `charlie%40astral.sh`, which resulted in
different headers and broke JFrog's Artifactory, which apparently does
not decode usernames.
Tested with a JFrog Artifactory and AWS CodeArtifact, although it is
worth noting that AWS does _not_ have a username with an `@` — it'd be
nice to test another Artifactory with percent-encoded characters in the
username and/or password.
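A sketch of the fix, using the `url` and `percent-encoding` crates:
```rust
use url::Url;

/// `Url` stores the username percent-encoded (`charlie%40astral.sh`),
/// so it must be decoded before being used for authentication, or
/// registries that don't decode it themselves see the wrong username.
fn decoded_username(url: &Url) -> String {
    percent_encoding::percent_decode_str(url.username())
        .decode_utf8_lossy()
        .into_owned()
}
```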