See https://github.com/astral-sh/uv/issues/2617
Note this also includes:
- #2918
- #2931 (pending)
A first step towards Python toolchain management in Rust.
First, we add a new crate to manage Python download metadata:
- Adds a new `uv-toolchain` crate
- Adds Rust structs for Python version download metadata
- Duplicates the script which downloads Python version metadata
- Adds a script to generate Rust code from the JSON metadata
- Adds a utility to download and extract the Python version
I explored some alternatives, like a build script using `serde` and
`uneval` to automatically construct the code from our structs, but
deemed it too heavy. Unlike Rye, I don't generate the Rust directly from
the web requests; instead, there's an intermediate JSON layer to speed
up iteration on the Rust types.
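For illustration, a generated download entry could look roughly like this (a sketch only; the names are assumptions, not the actual `uv-toolchain` API):

```rust
/// Hypothetical sketch of a generated download entry; field names and types
/// are illustrative, not the real `uv-toolchain` structs.
#[derive(Debug, Clone)]
pub struct PythonDownload {
    pub major: u8,
    pub minor: u8,
    pub patch: u8,
    /// Target triple, e.g. `x86_64-unknown-linux-gnu`.
    pub target: &'static str,
    /// URL of the prebuilt archive to download and extract.
    pub url: &'static str,
    /// SHA-256 of the archive, for the hash checking mentioned below.
    pub sha256: Option<&'static str>,
}
```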
Next, we add a `uv-dev` command, `fetch-python`, to download Python
versions per the bootstrapping script.
- Downloads a requested version or reads from `.python-versions`
- Extracts to `UV_BOOTSTRAP_DIR`
- Links executables for path extension
This command is not really intended to be user-facing, but it's a good
PoC for the `uv-toolchain` API. Hash checking (via the sha256) isn't
implemented yet; we can do that in a follow-up.
Finally, we remove the `scripts/bootstrap` directory, update CI to use
the new command, and update the CONTRIBUTING docs.
<img width="1023" alt="Screenshot 2024-04-08 at 17 12 15"
src="https://github.com/astral-sh/uv/assets/2586601/57bd3cf1-7477-4bb8-a8e9-802a00d772cb">
## Summary
If the user runs with `--generate-hashes`, and the lockfile doesn't
contain _any_ hashes for a package (despite being pinned), we should add
new hashes. This mirrors running `uv pip compile --generate-hashes` for
the first time with an existing lockfile.
Closes #2962.
To get more insights into test performance, allow instrumenting tests
with tracing-durations-export.
Usage:
```shell
# A single test
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo test --features tracing-durations-export --test pip_install_scenarios no_binary -- --exact
# All tests
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo nextest run --features tracing-durations-export
```
Then we can, e.g., look at
`target/test-traces/pip_install_scenarios::no_binary.svg` and see the
builds it performs.

## Summary
If we build a source distribution from the registry, and the version
doesn't match that of the filename, we should error, just as we do for
mismatched package names. However, we should also backtrack here, which
we didn't previously.
Closes https://github.com/astral-sh/uv/issues/2953.
## Test Plan
Verified that `cargo run pip install docutils --verbose --no-cache
--reinstall` installs `docutils==0.21` instead of the invalid
`docutils==0.21.post1`.
In the logs, I see:
```
WARN Unable to extract metadata for docutils: Package metadata version `0.21` does not match given version `0.21.post1`
```
## Summary
This updates to the version of axoupdater used in cargo-dist 0.13.0's
own selfupdate command, with all relevant fixes for platforms. It also
tentatively introduces a mildly dangerous self-runtest that runs `uv
self update` and checks that the binary is installed and executable.
I *believe* some adjustments need to be made to your CI to have this new
test run, because it requires the `self-update` feature to be enabled,
and I didn't want to just start messing with how you do feature coverage
in your CI. **As a result I haven't yet had a chance to actually fully
run this in CI**, though I've locally tested it on windows (with the
guard disabled).
## Test Plan
Most of the machinery here is provided by axoupdater itself (cargo-dist
also includes a variant of these tests in its codebase). This initial
implementation has a couple major limitations:
* This is For Reals modifying the system that runs the test (so it's off
unless it detects it's running in CI, and if you want variations on this
test they'll need to be [run in
serial](5e7826f7b0/cargo-dist/tests/cli-tests.rs (L235))).
Since many of the testing issues were surrounding precise details of
Actual Deployed Executions, this seemed worth the tradeoff.
* The actual installer *script* it's ultimately invoking is the one you
last published, and *not* the one that cargo-dist will make when you
next publish.
We're already working on implementing some logic for "get cargo-dist to
generate a fresh installer script too", which is in fact the basis of a
huge amount of cargo-dist's own testsuite. Now that we're dogfooding
this stuff, it should be quite hard for this stuff to break without
cargo-dist's own codebase noticing it first.
## Summary
The prefetcher tallies the number of times we tried a given package, and
then once we hit a threshold, grabs the version map, assuming it's
already been fetched. For direct URL distributions, though, we don't
have a version map! And there's no need to prefetch.
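As a minimal sketch of that tallying (names and the threshold are made up for illustration):

```rust
use std::collections::HashMap;

/// Illustrative only: count how often each package has been tried, and report
/// when it crosses a prefetch threshold. The real prefetcher then reads the
/// already-fetched version map for that package; direct URL distributions are
/// skipped because they have no version map to read.
#[derive(Default)]
struct PrefetchCounter {
    tries: HashMap<String, usize>,
}

impl PrefetchCounter {
    fn on_tried(&mut self, package: &str) -> bool {
        const THRESHOLD: usize = 5; // assumed value
        let count = self.tries.entry(package.to_string()).or_insert(0);
        *count += 1;
        *count == THRESHOLD
    }
}
```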
Closes https://github.com/astral-sh/uv/issues/2941.
## Summary
Right now, we have a `Hashes` representation that looks like:
```rust
/// A dictionary mapping a hash name to a hex encoded digest of the file.
///
/// PEP 691 says multiple hashes can be included and the interpretation is left to the client.
#[derive(Debug, Clone, Eq, PartialEq, Default, Deserialize)]
pub struct Hashes {
    pub md5: Option<Box<str>>,
    pub sha256: Option<Box<str>>,
    pub sha384: Option<Box<str>>,
    pub sha512: Option<Box<str>>,
}
```
It stems from the PyPI API, which returns a dictionary of hashes.
We tend to pass these around as a `Vec<Hashes>`. But that's a bit
strange, because each entry in that vector could contain multiple
hashes. And it makes it difficult to ask questions like: "Is
`sha256:ab21378ca980a8` in the set of hashes?"
This PR instead treats `Hashes` as the PyPI-internal type, and uses a
new `Vec<HashDigest>` everywhere in our own APIs.
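For illustration, a flattened digest type along these lines (a sketch; the real `HashDigest` may differ) makes set-membership questions trivial:

```rust
/// Sketch of a flattened hash entry: one algorithm plus one hex digest, so a
/// file's hashes become a `Vec<HashDigest>` instead of a `Vec<Hashes>`.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum HashAlgorithm {
    Md5,
    Sha256,
    Sha384,
    Sha512,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct HashDigest {
    pub algorithm: HashAlgorithm,
    pub digest: Box<str>,
}

/// "Is `sha256:ab21378ca980a8` in the set of hashes?" becomes a simple scan.
pub fn contains_sha256(hashes: &[HashDigest], expected: &str) -> bool {
    hashes
        .iter()
        .any(|h| h.algorithm == HashAlgorithm::Sha256 && &*h.digest == expected)
}
```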
Needed to prevent circular dependencies in my toolchain work (#2931). I
think this is probably a reasonable change as we move towards persistent
configuration too?
Unfortunately `BuildIsolation` needs to be in `uv-types` to avoid
circular dependencies still. We might be able to resolve that in the
future.
Elides Python patch versions from the test suite unless the test
specifically requests a patch version.
This reduces some toil when not using our bootstrapped Python versions.
Partially addresses https://github.com/astral-sh/uv/issues/2165 though
we'll need changes to the scenario tests to really support their case.
## Summary
When you specify a source distribution via a path, it can either be a
path to an archive (like a `.tar.gz` file), or a source tree (a
directory). Right now, we handle both paths through the same methods in
the source database. This PR splits them up into separate handlers.
This will make hash generation a little easier, since we need to
generate hashes for archives, but _can't_ generate hashes for source
trees.
It also means that we can now store the unzipped source distribution in
the cache (in the case of archives), and avoid unzipping the source
distribution needlessly on every invocation; and, overall, it lets us
enforce clearer expectations between the two routes (e.g., what errors
are possible vs. not), at the cost of duplicating some code.
Closes #2760 (incidentally -- not exactly the motivation for the change,
but it did accomplish it).
## Summary
I think this is a much clearer name for this concept: the set of
"versions" of a given wheel or source distribution. We also use
"Manifest" elsewhere to refer to the set of requirements, constraints,
etc., so this was overloaded.
## Summary
We have a heuristic in `File` that attempts to detect whether a URL is
absolute or relative. However, `contains("://")` is prone to false
positives. In the linked issues, the URLs look like:
```
/packages/5a/d8/4d75d1e4287ad9d051aab793c68f902c9c55c4397636b5ee540ebd15aedf/pytz-2005k.tar.bz2?hash=597b596dc1c2c130cd0a57a043459c3bd6477c640c07ac34ca3ce8eed7e6f30c&remote=4d75d1e4287636b5ee540ebd15aedf/pytz-2005k.tar.bz2#sha256=597b596dc1c2c130cd0a57a043459c3bd6477c640c07ac34ca3ce8eed7e6f30c
```
Which is relative, but includes `://`.
Instead, we should determine whether the URL has a _scheme_, matching
the logic the `Url` crate uses internally.
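A sketch of that check (this just restates the RFC 3986 scheme grammar for illustration; the real code defers to the `url` crate):

```rust
/// Returns true if `s` starts with a valid URL scheme
/// (`ALPHA *( ALPHA / DIGIT / "+" / "-" / "." ) ":"`), which is a more
/// reliable test for "absolute URL" than `contains("://")`.
fn has_scheme(s: &str) -> bool {
    let Some((scheme, _rest)) = s.split_once(':') else {
        return false;
    };
    let mut chars = scheme.chars();
    matches!(chars.next(), Some(c) if c.is_ascii_alphabetic())
        && chars.all(|c| c.is_ascii_alphanumeric() || matches!(c, '+' | '-' | '.'))
}
```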
Closes https://github.com/astral-sh/uv/issues/2899.
## Summary
Right now, the path-based wheel cache just looks at the symlink to the
archives directory, checks the timestamp on it, and continues with that
symlink as long as the timestamp is up-to-date.
The HTTP-based wheel cache, meanwhile, uses an intermediary `.http` file, which
includes the HTTP caching information. The `.http` file's payload is
just a path pointing to an entry in the archives directory.
This PR modifies the path-based codepaths to use a similar cache file,
which stores a timestamp along with a path to the archives directory.
The main advantage here is that we can add other data to this cache file
(namely, hashes in the future).
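As a sketch (assuming `serde` for serialization; field names are illustrative, not uv's actual on-disk format), the pointer file could look like:

```rust
use std::path::PathBuf;

use serde::{Deserialize, Serialize};

/// Sketch of the cache pointer written for path-based wheels: a freshness
/// timestamp plus the archive entry it resolves to, with room to add hashes.
#[derive(Debug, Serialize, Deserialize)]
struct LocalArchivePointer {
    /// Last-modified time of the source file, as seconds since the Unix epoch.
    timestamp: u64,
    /// Entry in the archives directory that the wheel was unpacked into.
    archive: PathBuf,
    // Future: hashes of the archive, e.g. for `--require-hashes`.
}
```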
## Test Plan
Beyond existing tests, I also verified that this doesn't require a
version bump:
```
git checkout main
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall
git checkout charlie/manifest
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall --refresh
```
## Summary
I think this is kind of just an oversight. If a wheel is available via
`--find-links`, and the index is "local", we never find it in the cache.
## Test Plan
`cargo test`
## Summary
In all cases, we unzip these immediately after returning. By moving the
unzipping into the database, we can remove a bunch of code (coming in a
separate PR), and pave the way for hash-checking, since hash generation
will _also_ happen in the database, and splitting the caching layers
across the database and the unzipper creates complications.
Closes #2863.
With pubgrub being fast for complex ranges, we can now compute the next
n candidates without taking a performance hit. This speeds up cold cache
`urllib3<1.25.4` `boto3` from maybe 40s - 50s to ~2s. See docstrings for
details on the heuristics.
---
We need two parts of the prefetching: first looking for compatible
versions, and then falling back to flat next versions. After we select a
boto3 version, there is only one compatible botocore version remaining,
so we won't find other compatible candidates for prefetching. We see
this as a pattern where we only prefetch boto3 (stacked bars), but not
botocore (sequential requests between the stacked bars).

The risk is that we're completely wrong with the guess and cause a lot
of useless network requests. I think this is acceptable, since this
mechanism only triggers when we're already on the bad path, and we
should simply have fetched all versions within a few seconds anyway
(assuming a fast index like PyPI).
---
It would be even better if the pubgrub state were copy-on-write so we
could simulate more progress than we actually have; currently, we're
guessing what the next version is, which could be completely wrong, but
I think this is still a valuable heuristic.
Fixes #170.
## Summary
Is this, perhaps, not totally necessary? It doesn't show up in any
fixtures beyond those that I added recently.
Closes https://github.com/astral-sh/uv/issues/2846.
## Summary
Demonstrates some suboptimal behavior in how we handle invalid metadata,
which are fixed in https://github.com/astral-sh/uv/pull/2834.
The included wheels were modified by-hand to include invalid structures.
Batch prefetching needs more information from the candidate selector, so
I've split `select` into methods. Split out from #2452. No functional
changes.
## Summary
In working on `--require-hashes`, I noticed that we're missing some
incompatibility tracking for `--find-links` distributions. Specifically,
we don't respect `--no-build` or `--no-binary`, so if we select a wheel
due to `--find-links`, we then throw a hard error when trying to build
it later (if `--no-binary` is provided), rather than selecting the
source distribution instead.
Closes https://github.com/astral-sh/uv/issues/2827.
## Summary
If we're given a Git reference like `20240222`, we currently treat it as
a short commit hash. However... it _could_ be a branch or a tag. This PR
improves the Git reference logic to ensure that ambiguous references
like `20240222` are handled appropriately, by attempting to extract it
as a branch, then a tag, then a short commit hash.
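A sketch of that lookup order (helper names are illustrative, not uv's internal API):

```rust
/// Illustrative only: resolve an ambiguous reference like `20240222` by
/// preferring a branch, then a tag, and only then a short commit hash.
enum GitReference {
    Branch(String),
    Tag(String),
    ShortCommit(String),
}

fn classify(branches: &[String], tags: &[String], rev: &str) -> GitReference {
    if branches.iter().any(|b| b == rev) {
        GitReference::Branch(rev.to_string())
    } else if tags.iter().any(|t| t == rev) {
        GitReference::Tag(rev.to_string())
    } else {
        GitReference::ShortCommit(rev.to_string())
    }
}
```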
Closes https://github.com/astral-sh/uv/issues/2772.
## Summary
Upgrading `rs-async-zip` enables us to support data descriptors in
streaming. This both greatly improves performance for indexes that use
data descriptors _and_ ensures that we support them in a few other
places (e.g., zipped source distributions created in Finder).
Closes #2808.
## Summary
This partially revives https://github.com/astral-sh/uv/pull/2135 (with
some modifications) to enable users to opt-in to looking for packages
across multiple indexes.
The behavior is such that, in version selection, we take _any_
compatible version from a "higher-priority" index over the compatible
versions of a "lower-priority" index, even if that means we might accept
an "older" version.
Closes https://github.com/astral-sh/uv/issues/2775.
## Summary
This was an oversight in the `-r pyproject.toml` refactor. We can't
enforce unused extras if we have a source tree. We made the correct
changes to `pip compile`, but not `pip install`. This PR just mirrors
those changes to `pip install`, and adds a few tests.
Closes https://github.com/astral-sh/uv/issues/2801.
We don't know what kind of error the OS gives us on `try_lock_exclusive`
when the file is already locked, so we assume any such error means the
file is already locked and fall back to the blocking `lock_exclusive`.
For example, the Windows error:
```
Os {
code: 33,
kind: Uncategorized,
message: "The process cannot access the file because another process has locked a portion of the file.",
}
```
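A minimal sketch of the fallback, assuming the `fs2` crate's `FileExt` (which provides `try_lock_exclusive` and `lock_exclusive` on `std::fs::File`):

```rust
use std::fs::File;
use std::io;

use fs2::FileExt;

/// If the non-blocking lock fails for any reason, assume the file is already
/// locked by another process and fall back to the blocking lock.
fn acquire_lock(file: &File) -> io::Result<()> {
    match file.try_lock_exclusive() {
        Ok(()) => Ok(()),
        Err(err) => {
            eprintln!("Waiting for file lock: {err}");
            file.lock_exclusive()
        }
    }
}
```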
Fixes#2767
## Summary
Rather than storing the `redirects` on the resolver, this PR just
re-uses the "convert this URL to precise" logic when we convert to a
`Resolution` after-the-fact. I think this is a lot simpler: it removes
state from the resolver, and simplifies a lot of the hooks around
distribution fetching (e.g., `get_or_build_wheel_metadata` no longer
returns `(Metadata23, Option<Url>)`).
## Summary
I noticed in #2769 that I was now stripping `.git` suffixes from Git
URLs after resolving to a precise commit. This PR cleans up the internal
caching to use a better canonical representation: a `RepositoryUrl`
along with a `GitReference`, instead of a `GitUrl` which can contain
non-canonical data. This gives us both better fidelity (preserving the
`.git`, along with any casing that the user provided when defining the
URL) and is overall cleaner and more robust.
## Summary
This PR leverages our lookahead direct URL resolution to significantly
improve the range of Git URLs that we can accept (e.g., if a user
provides the same requirement, once as a direct dependency, and once as
a tag). We did some of this in #2285, but the solution here is more
general and works for arbitrary transitive URLs.
Closes https://github.com/astral-sh/uv/issues/2614.
Originally a regression test for #2779 but we found out that there's
some weird behavior where different `anyio` versions were preferred
based on the platform.
Addresses panic introduced in #2596 and reported in
https://github.com/astral-sh/uv/issues/2763#issuecomment-2030674936
When there are multiple versions of a package available, we remove the
existing packages before installing the resolved version to "fix" the
environment. We must remove all of the package versions and reinstall
because removing _any_ of the package versions could break the others.
Since reinstalls require a pull from the remote, this broke a contract
between the resolver and the planner which must always agree on which
packages should come from the remote. This further demonstrates that we
should be constructing the install plan with more concrete knowledge
from the resolver (i.e. `ResolvedDist` instead of `Requirement`) to
avoid having to manually ensure logic matches.
## Test plan
Fails on `main` with a panic; succeeds on this branch:
```
uv venv --seed
source .venv/bin/activate
pip install anyio==3.7.0 --ignore-installed
pip install anyio==4.0.0 --ignore-installed
cargo run -- pip install anyio black -v
```
## Summary
Ensures that if we resolve any distributions before the resolver, we
cache the metadata in-memory.
_Also_ ensures that we lock (important!) when resolving Git
distributions.
## Summary
This fixes a potential bug that revealed itself in
https://github.com/astral-sh/uv/pull/2761. We don't run into this now
because we always use "allowed URLs", which store the "last" compatible URL
in the map. But the use of the "raw" URL (rather than the "canonical"
URL) means that other writers have to follow that same assumption and
iterate over dependencies in-order.
## Summary
This PR would enable us to support transitive URL requirements. The key
idea is to leverage the fact that...
- URL requirements can only come from URL requirements.
- URL requirements identify a _specific_ version, and so don't require
backtracking.
Prior to running the "real" resolver, we recursively resolve any URL
requirements, and collect all the known URLs upfront, then pass those to
the resolver as "lookahead" requirements. This means the resolver knows
upfront that if a given package is included, it _must_ use the provided
URL.
Closes https://github.com/astral-sh/uv/issues/1808.
With this change, all usages of `EXCLUDE_NEWER` are now in command
wrappers, not in the test functions themselves.
For the venv tests, I refactored them (in the second commit) into the
same kind of test context abstraction that the other test modules have.
The third commit makes `INSTA_FILTERS` "private", removing the last
remaining individual usage.
Pending windows CI 🤞
## Summary
We can access cache from `BuildContext`. This mirrors
`SourceDistCachedBuilder`, which doesn't accept `Cache` as an argument
and always accesses it through `BuildContext`.
## Summary
If the user provides `uv pip install pyproject.toml`, we now prompt them
to confirm that they meant the `pyproject-toml` package (as opposed to
`uv pip install -r pyproject.toml`).
## Summary
We iterate over the project "requirements" directly in a variety of
places. However, it's not always the case that an input "requirement" on
its own will _actually_ be part of the resolution, since we support
"overrides".
Historically, then, overrides haven't worked as expected for _direct_
dependencies (and we have some tests that demonstrate the current,
"wrong" behavior). This is just a bug, but it's not really one that
comes up in practice, since it's rare to apply an override to your _own_
dependency.
However, we're now considering expanding the lookahead concept to
include local transitive dependencies. In this case, it's more and more
important that overrides and constraints are handled consistently.
This PR modifies all the locations in which we iterate over requirements
directly, and modifies them to respect overrides (and constraints, where
necessary).
## Summary
This is a trimmed-down version of
https://github.com/astral-sh/uv/pull/2684 that only applies to local
source trees for now, which enables workspace-like workflows (whereby
local packages can depend on other local packages at arbitrary depth).
Closes #2699.
## Test Plan
Added new tests.
Also cloned this MRE that was shared with me
(https://github.com/timothyjlaurent/uv-poetry-monorepo-mre), and
verified that it was installed without error:
```
❯ cargo run pip install ./uv-poetry-monorepo-mre/app --no-cache
Finished dev [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/uv pip install ./uv-poetry-monorepo-mre/app --no-cache`
Resolved 4 packages in 1.28s
Built app @ file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/app
Built lib1 @ file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib1
Built lib2 @ file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib2
Downloaded 4 packages in 457ms
Installed 4 packages in 2ms
+ app==0.1.0 (from file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/app)
+ lib1==0.1.0 (from file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib1)
+ lib2==0.1.0 (from file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib2)
+ ruff==0.3.4
```
Previously, we did not consider installed distributions as candidates
while performing resolution. Here, we update the resolver to use
installed distributions that satisfy requirements instead of pulling new
distributions from the registry.
The implementation details are as follows:
- We now provide `SitePackages` to the `CandidateSelector`
- If an installed distribution satisfies the requirement, we prefer it
over remote distributions
- We do not want to allow installed distributions in some cases, i.e.,
upgrade and reinstall
- We address this by introducing an `Exclusions` type which tracks
installed packages to ignore during selection
- There's a new `ResolvedDist` wrapper with `Installed(InstalledDist)`
and `Installable(Dist)` variants (see the sketch after this list)
- This lets us pass already installed distributions throughout the
resolver
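A rough sketch of that wrapper (heavily simplified; `InstalledDist` and `Dist` here are stand-ins for uv's real types):

```rust
/// Sketch of the wrapper that lets already-installed distributions flow
/// through the resolver alongside remote ones.
enum ResolvedDist {
    /// A distribution that is already present in the environment.
    Installed(InstalledDist),
    /// A distribution that still needs to be fetched and/or built.
    Installable(Dist),
}

// Stand-in types for illustration only.
struct InstalledDist {
    name: String,
    version: String,
}

struct Dist {
    name: String,
    version: String,
}
```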
The user-facing behavior is thoroughly covered in the tests, but
briefly:
- Installing a package that depends on an already-installed package
prefers the local version over the index
- Installing a package with a name that matches an already-installed URL
package does not reinstall from the index
- Reinstalling (--reinstall) a package by name _will_ pull from the
index even if an already-installed URL package is present
- To reinstall the URL package, you must specify the URL in the request
Closes https://github.com/astral-sh/uv/issues/1661
Addresses:
- https://github.com/astral-sh/uv/issues/1476
- https://github.com/astral-sh/uv/issues/1856
- https://github.com/astral-sh/uv/issues/2093
- https://github.com/astral-sh/uv/issues/2282
- https://github.com/astral-sh/uv/issues/2383
- https://github.com/astral-sh/uv/issues/2560
## Test plan
- [x] Reproduction at `charlesnicholson/uv-pep420-bug` passes
- [x] Unit test for editable package
([#1476](https://github.com/astral-sh/uv/issues/1476))
- [x] Unit test for previously installed package with empty registry
- [x] Unit test for local non-editable package
- [x] Unit test for new version available locally but not in registry
([#2093](https://github.com/astral-sh/uv/issues/2093))
- ~[ ] Unit test for wheel not available in registry but already
installed locally
([#2282](https://github.com/astral-sh/uv/issues/2282))~ (seems
complicated and not worthwhile)
- [x] Unit test for install from URL dependency then with matching
version ([#2383](https://github.com/astral-sh/uv/issues/2383))
- [x] Unit test for install of new package that depends on installed
package does not change version
([#2560](https://github.com/astral-sh/uv/issues/2560))
- [x] Unit test that `pip compile` does _not_ consider installed
packages
It turns out that #2712 did _not_ fix #2711. After I put up #2712, I
started trying to track down the specific change that caused the
failure. I had assumed at first that it was related to one of our `rkyv`
types, but it actually ended up being one of our msgpack caches. I think
the failure mode is still fundamentally the same idea: the cached data
changed in a way that is still valid msgpack, but got interpreted
differently after deserializing.
The specific change that caused this was the [removal of a field] from
our
metadata type.
Ideally we would just undo the change and add the field back. But that
change has already been shipped out to users. So I believe the only
plausible choice at this point is to bump the `built-wheels` cache. This
will unfortunately mean that `uv` will need to re-build wheels.
Fixes #2711
[removal of a field]:
365c292525 (diff-e42586829f9c2cdbb909bedc5cf95691cc415247f2cbc2ebeb80d887020457bbL29)
It seems likely that we forgot to bump the version of the "simple" cache
in the 0.1.25 release. I'm still working on confirming it, but I figured
I'd get this bump up first.
The main problem here is that our "simple" cache is represented by
`rkyv`, and that in turn is tightly coupled to the representation of a
selection of data types in `uv`. Changing those data types without
bumping the cache version can result in cache deserialization errors
like this, or in the worst case, silent logic errors.
One possibility here is that the representation changed in a way that
permitted it to pass `rkyv` validation, but changed how the data itself
is interpreted. Our cache is robust with respect to `rkyv` validation
(if it fails, the cache will invalidate the entry and self-heal), but
being robust to higher level logical errors in interpretation of the
data is a much more significant challenge. Our best bet there is perhaps
some kind of checksum that we could do on top of `rkyv` validation (or
instead of it), and thus convert silent logical changes in how the data
is interpreted into failure modes that we're already robust to.
Fixes #2711
## Summary
This looks like a big change but it really isn't. Rather, I just split
`get_or_build_wheel` into separate `get_wheel` and `build_wheel` methods
internally, which made `get_or_build_wheel_metadata` capable of _not_
relying on `Tags`, which in turn makes it easier for us to use the
`DistributionDatabase` in various places without having it coupled to an
interpreter or environment (something we already did for
`SourceDistributionBuilder`).
## Summary
This PR enables the resolver to "accept" URLs, prereleases, and local
version specifiers for direct dependencies of path dependencies. As a
result, `uv pip install .` and `uv pip install -e .` now behave
identically, in that neither has a restriction on URL dependencies and
the like.
Closes https://github.com/astral-sh/uv/issues/2643.
Closes https://github.com/astral-sh/uv/issues/1853.
## Summary
This PR removes the custom `DistFinder` that we use in `pip sync`. This
originally existed because `VersionMap` wasn't lazy, and so we saved a
lot of time in `DistFinder` by reading distribution data lazily. But
now, AFAICT, there's really no benefit. Maintaining `DistFinder` means
we effectively have to maintain two resolvers, and end up fixing bugs in
`DistFinder` that don't exist in the `Resolver` (like #2688).
Closes #2694.
Closes #2443.
## Test Plan
I ran this benchmark a bunch. It's basically a wash. Sometimes one is
faster than the other.
```
❯ python -m scripts.bench \
--uv-path ./target/release/main \
--uv-path ./target/release/uv \
scripts/requirements/compiled/trio.txt --min-runs 50 --benchmark install-warm --warmup 25
Benchmark 1: ./target/release/main (install-warm)
Time (mean ± σ): 54.0 ms ± 10.6 ms [User: 8.7 ms, System: 98.1 ms]
Range (min … max): 45.5 ms … 94.3 ms 50 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: ./target/release/uv (install-warm)
Time (mean ± σ): 50.7 ms ± 9.2 ms [User: 8.7 ms, System: 98.6 ms]
Range (min … max): 44.0 ms … 98.6 ms 50 runs
Warning: The first benchmarking run for this command was significantly slower than the rest (77.6 ms). This could be caused by (filesystem) caches that were not filled until after the first run. You should consider using the '--warmup' option to fill those caches before the actual benchmark. Alternatively, use the '--prepare' option to clear the caches before each timing run.
Summary
'./target/release/uv (install-warm)' ran
1.06 ± 0.29 times faster than './target/release/main (install-warm)'
```
I stumbled across this when writing tests for
`--emit-marker-expressions`. Namely, I observed that in CI, the `tzdata`
dependency of `pendulum` wasn't included in the `requirements.txt`
output on Windows.
@konstin [suggested] that this was a bug, so I've created a test for it.
In particular, it looks like [`tzdata` is an unconditional dependency of
`pendulum`][tzdata-unconditional].
[suggested]:
https://github.com/astral-sh/uv/pull/2651#discussion_r1539722464
[tzdata-unconditional]:
e646afbd165e58327bc5c698731107/pendulum-3.0.0-cp310-none-win_amd64.whl/pendulum-3.0.0.dist-info/METADATA#line.12
## Summary
In `pip sync`, we weren't properly handling cases in which a package
_only_ existed in `--find-links` (e.g., the user passed `--offline` or
`--no-index`).
I plan to explore removing `Finder` entirely to avoid these mismatch
bugs between `pip sync` and other commands, but this is fine for now.
Closes https://github.com/astral-sh/uv/issues/2688.
## Test Plan
`cargo test`
Unfortunately these tests are all gated on specific platforms because
the marker expressions they generate are, by design, platform specific.
I think we'll eventually want to figure out a more robust testing
strategy for multi-platform locking (of which this is just the tiniest
of first steps), but I don't think we really have the infrastructure for
that in place yet. That is, we don't yet have a way of generating a
marker expression _for_ a particular environment instead of just the one
that happens to _be_ the current environment.
When enabled, the marker expression for the pinned requirements
is written as a comment at the top of the output. It is disabled
by default *and* hidden because it's not clear whether 1) this is
useful to end users and 2) is an interface we want to commit to.
However, it is useful to expose it in some way so that it can be
tested.
For $reasons, we'll want to be able to clone a `Manifest` so
that it can be re-used to generate a marker expression.
There is likely a refactoring that could be done to avoid the
cloning, but a `Manifest` is likely to be small in practice, and
we'll only need to clone it once.
These are useful for converting lower level marker values types
to their corresponding values from a marker environment.
We'll use these for generating marker expressions based on both
the dependency graph and the current marker environment.
## Summary
Now that we're resolving metadata more aggressively for local sources,
it's worth doing this. We now pull metadata from the `pyproject.toml`
directly if it's statically-defined.
Closes https://github.com/astral-sh/uv/issues/2629.
The snapshot filtering situation has gotten way out of hand, with each
test hand-rolling its own filters on top of copied cruft from previous
tests.
I've attempted to address this holistically:
- `TestContext.filters()` has everything you should need
- This was introduced a while ago, but needed a few more filters for it
to be generalized everywhere
- Using `INSTA_FILTERS` is **not recommended** unless you do not want
the context filters
- It is okay to extend these filters for things unrelated to paths
- If you have to write a custom path filter, please highlight it in
review so we can address it in the common module
- `TestContext.site_packages()` gives cross-platform access to the
site-packages directory
- Do not manually construct the path to site-packages from the venv
- Do not turn off tests on Windows because you manually constructed a
Unix path to site-packages
- `TestContext.workspace_root` gives access to uv's repository directory
- Use this for installing from `scripts/packages/`
- If you need coverage for relative paths, copy the test package into
the `temp_dir`; don't change the working directory of the test fixture
There is additional work that can be done here, such as:
- Auditing and removing additional uses of `INSTA_FILTERS`
- Updating manual construction of `Command` instances to use a utility
- The `venv` tests are particularly frightening in their lack of a test
context and could use some love
- Improving the developer experience, i.e., applying context filters to
snapshots by default
This is driving me a little crazy and is becoming a larger problem in
#2596 where I need to move more types (like `Upgrade` and `Reinstall`)
into this crate. Anything that's shared across our core resolver,
install, and build crates needs to be defined in this crate to avoid
cyclic dependencies. We've outgrown it being a single file with some
shared traits.
There are no behavioral changes here.
If you pass a `pyproject.toml` that uses Hatch's context formatting API,
we currently fail because the dependencies aren't valid under PEP 508.
This PR makes the static metadata parsing a little more relaxed, so that
we appropriately fall back to PEP 517 there.
## Summary
Passing `pyproject.toml` or `setup.py` to `pip uninstall` is a bit
strange, since it will often require running a resolution to resolve the
dependencies (e.g., build the project), which means we also need to
accept `--index-url` and friends.
## Summary
Hatch allows for highly dynamic customization of metadata via hooks. In
such cases, Hatch can't uphold the PEP 517 contract, in that the
metadata Hatch would return from `prepare_metadata_for_build_wheel`
isn't guaranteed to match that of the built wheel.
Hatch disables `prepare_metadata_for_build_wheel` entirely for pip.
We'll instead disable it on our end when metadata is defined as
"dynamic" in the `pyproject.toml`, which should allow us to leverage the
hook in _most_ cases while still avoiding incorrect metadata for the
remaining cases.
Closes: https://github.com/astral-sh/uv/issues/2130.
## Summary
When a user passes a `pyproject.toml` to `pip compile` (e.g., `uv pip
compile pyproject.toml`), we extract the requirements from the
`pyproject.toml` directly. However... that isn't always possible (as
seen in the linked issues). When it's _not_, we instead need to run the
PEP 517 build hooks to identify the metadata.
Closes https://github.com/astral-sh/uv/issues/1624.
Closes https://github.com/astral-sh/uv/issues/1644.
## Test Plan
`cargo test`
## Summary
- Displays missing packages as single-line warnings.
- Adds support for `Editable project location` and `Required-by` fields
in `pip show`.
Part of #2526.
## Summary
`uv` was failing to install requirements defined like:
```
file://localhost/Users/crmarsh/Downloads/iniconfig-2.0.0-py3-none-any.whl
```
Closes https://github.com/astral-sh/uv/issues/2652.
While writing tests for a new flag (`--emit-marker-expression`) for `uv
pip compile`, I noticed that one of my test cases (`pendulum 3.0.0`)
was published in Dec 2023. I wanted to include this package in my
tests, but since it comes after our `EXCLUDE_NEWER` constant, it wasn't
visible to `uv`.
In this PR, I chose to resolve this by bumping `EXCLUDE_NEWER` to
`2024-03-25T00:00:00Z`. I also considered a couple other options:
* For a specific test, override and provide a custom `--exclude-newer`
flag. I felt like this would maybe be okay, but we could easily wind
up in a situation where we do this a lot and have a bunch of different
`--exclude-newer` flags in our tests. I'm not sure if this is a huge
problem in practice. Maybe it's fine.
* Find another package (or invent one) with a similarly interesting
configuration. It seemed easier to just bump `EXCLUDE_NEWER`.
The way I did this was to run `cargo insta test` after bumping
`EXCLUDE_NEWER`.
I then reviewed the snapshot diffs, and if they looked reasonable, I
accepted them.
There was only one case where I changed the test to preserve what I
thought it
was trying to test. That's isolated in its own commit.
With https://github.com/pubgrub-rs/pubgrub/pull/190, pubgrub attaches
all types to a dependency provider to reduce the number of generics. We
need a dummy dependency provider now to emulate this. On the plus side,
pep440_rs drops its pubgrub dependency.
This test was introduced in 42973cd9cb. It
looks like it compares some values against some platform specific code
that attempts to find the OS version. But the comparisons made some
assumptions about what kind of data is available. In this commit, we try
to make the test a little more flexible on Linux by not assuming that
`Option` values are `Some`.
## Summary
I don't see a great reason to allow this, and it adds a lot of
complexity, so `pyproject.toml` files are now limited to `pip compile`
and `pip install -r` -- they can't be passed as `-c` or `--override`.
## Summary
Closes Issue:
- https://github.com/astral-sh/uv/issues/2626
## Test Plan
```
cargo run -- pip install -r dev-requirements.txt -r requirements.txt
```
where both requirements files have the same `--index-url`.
We put a `.gitignore` with `*` at the top of our cache. When maturin was
building a source distribution inside the cache, it would walk up the
tree to find a `.gitignore`, see ours, and ignore all Python files. We
now add an (empty) `.git` directory one directory below, in the root of
the built-wheels cache. This prevents the ignore walk from going further
up (it marks that directory as the top level of a git repository).
Deptry (from #2490) is a mid-sized Rust package with additional Python
packages, so instead of using it in the test, I've replaced it with a
small (44KB total) reproducer that uses cffi for faster building; the
entire test takes <2s on my machine.
Fixes #2490
## Summary
Ensures that (e.g.) installs from conda-forge, Homebrew, and other
distributions don't expose `uv self update` at all.
We'll still show `uv self update` for `pip install uv`, but it will fail
with a good error. Removing the `uv self update` from `pip`-installed
`uv` is more complicated, since we'd need to build separately for the
installer vs. for PyPI.
Closes #2588.
## Summary
This PR enables the source distribution database to be used with unnamed
requirements (i.e., URLs without a package name). The (significant)
upside here is that we can now use PEP 517 hooks to resolve unnamed
requirement metadata _and_ reuse any computation in the cache.
The changes to `crates/uv-distribution/src/source/mod.rs` are quite
extensive, but mostly mechanical. The core idea is that we introduce a
new `BuildableSource` abstraction, which can either be a distribution,
or an unnamed URL:
```rust
/// A reference to a source that can be built into a built distribution.
///
/// This can either be a distribution (e.g., a package on a registry) or a direct URL.
///
/// Distributions can _also_ point to URLs in lieu of a registry; however, the primary distinction
/// here is that a distribution will always include a package name, while a URL will not.
#[derive(Debug, Clone, Copy)]
pub enum BuildableSource<'a> {
    Dist(&'a SourceDist),
    Url(SourceUrl<'a>),
}
```
All the methods on the source distribution database now accept
`BuildableSource`. `BuildableSource` has a `name()` method, but it
returns `Option<&PackageName>`, and everything is required to work with
and without a package name.
The main drawback of this approach (which isn't a terrible one) is that
we can no longer include the package name in the cache. (We do continue
to use the package name for registry-based distributions, since those
always have a name.) The package name was included in the cache route
for two reasons: (1) it's nice for debugging; and (2) we use it to power
`uv cache clean flask`, to identify the entries that are relevant for
Flask.
To solve this, I changed the `uv cache clean` code to look one level
deeper. So, when we want to determine whether to remove the cache entry
for a given URL, we now look into the directory to see if there are any
wheels that match the package name. This isn't as nice, but it does work
(and we have test coverage for it -- all passing).
I also considered removing the package name from the cache routes for
non-registry _wheels_, for consistency... But, it would require a cache
bump, and it didn't feel important enough to merit that.
## Summary
Detects unused cache entries, which can come in a few forms:
1. Directories that are outdated per our versioning scheme.
2. Old source distribution builds (i.e., we have a more recent version).
3. Old wheels (stored in `archive-v0`, but not symlinked-to from
anywhere in the cache).
Closes https://github.com/astral-sh/puffin/issues/1059.
Closes #2566
We were storing the username, e.g. `charlie@astral.sh`, as a
percent-encoded string, `charlie%40astral.sh`, which resulted in
different headers and broke JFrog's Artifactory, which apparently does
not decode usernames.
Tested with a JFrog Artifactory instance and AWS CodeArtifact, although
it is worth noting that AWS does _not_ have a username with an `@` —
it'd be nice to test another Artifactory with percent-encoded characters
in the username and/or password.
## Summary
In `pip uninstall`, we shouldn't need to resolve unnamed requirements,
since we already index packages in `site-packages` by their URL.
This also changes `uninstall` to ignore editables, which matches pip's
behavior.
Part of: https://github.com/astral-sh/uv/issues/313.
## Test Plan
Run `cargo run pip install ./scripts/editable-installs/black_editable`,
followed by each of the following:
- `cargo run pip uninstall ./scripts/editable-installs/black_editable`
- `cargo run pip uninstall black`
- `cargo run pip uninstall ./scripts/editable-installs/black_editable
black`
## Summary
This PR ensures that if a package is already satisfied by the current
environment, we don't bother resolving the named requirement.
Part of: https://github.com/astral-sh/uv/issues/313.
## Test Plan
- `cargo run pip install ./scripts/editable-installs/black_editable`
- `cargo run pip install black --verbose`
## Summary
For example: `cargo run pip install .`
The strategy taken here is to attempt to extract the package name from
the distribution without executing the PEP 517 build steps. We could
choose to do that in the future if this proves lacking, but it adds
complexity.
Part of: https://github.com/astral-sh/uv/issues/313.
## Summary
This PR enables `uv pip install` to accept unnamed requirements, as long
as the requirement ends with the wheel or source distribution archive
name. For example: `cargo run pip install
~/Downloads/anyio-4.3.0.tar.gz`.
In subsequent PRs, I'll expand the scope of supported archives and
patterns.
Part of: https://github.com/astral-sh/uv/issues/313.
## Summary
First piece of https://github.com/astral-sh/uv/issues/313. In order to
support unnamed requirements, we need to be able to parse them in
`requirements-txt`, which in turn means that we need to introduce a new
type that's distinct from `pep508::Requirement`, given that these
_aren't_ PEP 508-compatible requirements.
Part of: https://github.com/astral-sh/uv/issues/313.
Scott Schafer gave me the idea: we can avoid repeating the path for
workspace dependencies everywhere if we declare them once in the virtual
package and treat them as workspace dependencies from there on.
It is a common pattern to have an active conda base env (that sets
`CONDA_PREFIX`) and then create a venv on top of that (setting
`VIRTUAL_ENV`).
Previously, we would error when both `VIRTUAL_ENV` and `CONDA_PREFIX`
were set; now, `VIRTUAL_ENV` takes precedence over `CONDA_PREFIX`.
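A minimal sketch of the new precedence (illustrative, not the exact discovery code):

```rust
use std::env;
use std::path::PathBuf;

/// Prefer an activated virtualenv over an active conda base environment, so
/// having both `VIRTUAL_ENV` and `CONDA_PREFIX` set is no longer an error.
fn detect_environment() -> Option<PathBuf> {
    if let Ok(venv) = env::var("VIRTUAL_ENV") {
        return Some(PathBuf::from(venv));
    }
    if let Ok(conda) = env::var("CONDA_PREFIX") {
        return Some(PathBuf::from(conda));
    }
    None
}
```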
Fixes#2028
## Summary
We would like to be able to configure the installer name so that other
tools can co-exist with `uv`. In `pixi`, for example, we would like to
use `pixi-uv` as the installer name, to be able to distinguish those
packages from packages installed by pure `uv`.
## Summary
This PR changes our user-facing representation for paths to use relative
paths, when the path is within the current working directory. This
mirrors what we do in Ruff. (If the path is _outside_ the current
working directory, we print an absolute path.)
Before:
```shell
❯ uv venv .venv2
Using Python 3.12.2 interpreter at: /Users/crmarsh/workspace/uv/.venv/bin/python3
Creating virtualenv at: .venv2
Activate with: source .venv2/bin/activate
```
After:
```shell
❯ cargo run venv .venv2
Finished dev [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/uv venv .venv2`
Using Python 3.12.2 interpreter at: .venv/bin/python3
Creating virtualenv at: .venv2
Activate with: source .venv2/bin/activate
```
Note that we still want to use the existing `.simplified_display()`
anywhere that the path is being simplified, but _still_ intended for
machine consumption (e.g., when passing to `.current_dir()`).
## Summary
This adds support for `CUSTOM_COMPILE_COMMAND` support to change the
header comment in generated requirements files.
See Issue:
- #1527
From [pip-tools docs](https://pip-tools.readthedocs.io/en/latest/):
> You might be wrapping the pip-compile command in another script. To
avoid confusing consumers of your custom script you can override the
update command generated at the top of requirements files by setting the
CUSTOM_COMPILE_COMMAND environment variable.
## Test Plan
See unit test included
---------
Co-authored-by: konsti <konstin@mailbox.org>
## Summary
If you have a file `typing.py` in the current working directory, `python
-m` doesn't work in some Python versions:
```sh
❯ python -m foo
Could not import runpy module
Traceback (most recent call last):
File "/Users/crmarsh/.local/share/rtx/installs/python/3.9.18/lib/python3.9/runpy.py", line 15, in <module>
import importlib.util
File "/Users/crmarsh/.local/share/rtx/installs/python/3.9.18/lib/python3.9/importlib/util.py", line 2, in <module>
from . import abc
File "/Users/crmarsh/.local/share/rtx/installs/python/3.9.18/lib/python3.9/importlib/abc.py", line 17, in <module>
from typing import Protocol, runtime_checkable
ImportError: cannot import name 'Protocol' from 'typing' (/Users/crmarsh/workspace/uv/typing.py)
```
This did _not_ cause problems for us on Python 3.11 or later, because we
set `PYTHONSAFEPATH`, which avoids adding the current working directory
to `sys.path`. However, on earlier versions, we _were_ failing with the
above. (It's important that we run interpreter discovery in the current
working directory, since doing otherwise breaks pyenv shims.)
The fix implemented here uses `-I` to run Python in isolated mode, which
is even stricter. The downside of isolated mode is that we currently
rely on setting `PYTHONPATH` to find the "fake module" that we create on
disk, and `-I` means `PYTHONPATH` is totally ignored. So, instead, we
run a script directly, and that _script_ injects the path we care about
into `sys.path`.
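For illustration, the probe invocation under the new approach looks roughly like this (paths and the script name are placeholders):

```rust
use std::io;
use std::process::{Command, Output};

/// Run the interpreter-query script in isolated mode: with `-I`, the current
/// working directory and `PYTHONPATH` are ignored, and the script itself
/// appends the directory it needs to `sys.path` at runtime.
fn query_interpreter(python: &str) -> io::Result<Output> {
    Command::new(python)
        .arg("-I")
        .arg("/path/to/interpreter_query_script.py")
        .output()
}
```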
Closes https://github.com/astral-sh/uv/issues/2547.
Add a single job for fast lint tools: Rustfmt for Rust, Ruff for Python
formatting and linting, and Prettier to avoid inconsistent formatter
changes between PyCharm and VS Code.
## Summary
We strip extras by default, but there are some valid use-cases in which
they're required (see the linked issue). This PR doesn't change our
default, but it does add `--no-strip-extras`, which lets users preserve
extras in the output requirements.
Closes https://github.com/astral-sh/uv/issues/1595.
## Summary
We had the right fixup for `torchsde`, but a subsequent fixup was making
it invalid. In general, we should apply as few of these as we can, so
let's stop as soon as we succeed.
Closes https://github.com/astral-sh/uv/issues/2546.
## Test Plan
`cargo run pip install torchsde==0.2.5 --verbose --reinstall -n
--verbose`
## Summary
I tried out `cargo shear` to see if there are any unused dependencies
that `cargo udeps` isn't reporting. It turned out, there are a few. This
PR removes those dependencies.
## Test Plan
`cargo build`
## Summary
In reality, there's no such thing as the `site-packages` directory for a
given virtualenv. Rather, Python defines both `purelib` and `platlib`,
where the former is for pure-Python packages and the latter is for
packages that contain native code. These are almost always set to the
same thing... but they don't _have_ to be, and in fact on Fedora they
are not.
This PR changes the `site_packages` method to return an iterator of
directories.
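A sketch of the changed shape (simplified):

```rust
use std::collections::BTreeSet;
use std::path::PathBuf;

/// Sketch: yield the `purelib` and `platlib` directories, deduplicated, since
/// they are usually (but not always, e.g. on Fedora) the same path.
fn site_packages(purelib: PathBuf, platlib: PathBuf) -> impl Iterator<Item = PathBuf> {
    let mut dirs = BTreeSet::new();
    dirs.insert(purelib);
    dirs.insert(platlib);
    dirs.into_iter()
}
```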
Closes https://github.com/astral-sh/uv/issues/2527.
## Summary
When a user runs with `--output-file` and `--generate-hashes`, we should
_only_ update the hashes if the pinned version itself changes.
Closes https://github.com/astral-sh/uv/issues/1530.
This PR handles the fragment part of the URL path.
It achieves this by splitting the fragment from the path before
normalization and parsing. It then sets the fragment back after the URL
has been parsed.
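Roughly (a sketch, not the exact code):

```rust
/// Sketch of the approach: split off any `#fragment` before normalizing and
/// parsing the path, then reattach it once the URL has been parsed, so the
/// fragment is neither mangled by normalization nor treated as part of the
/// file path.
fn split_fragment(path: &str) -> (&str, Option<&str>) {
    match path.split_once('#') {
        Some((path, fragment)) => (path, Some(fragment)),
        None => (path, None),
    }
}
```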
Resolves #2501
Brings us in line with Python's behavior:
1. Prioritize `none` tags _after_ all of the relevant platform tags
2. Omit `none` tags for CPython versions less than the current version
3. Prioritize major (i.e. `py3-none`) version tags over minor (i.e.
`py3x-none`) version tags less than the current version
4. Add a `none-any` tag for the current CPython version
## Test plan
Tested on my Linux machine with a script to emit tags at the desired
glibc version:
```python
from packaging import tags
import re
exclude = re.compile("_(21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37|38|39)_")
for tag in tags.sys_tags():
    if exclude.search(str(tag)):
        continue
    print(tag)
```
Then performed a diff with the snapshot in `tags.rs`
## Summary
Closes #1958
This adds linehaul metadata to uv's user agent when PEP 508 markers are
provided to the `RegistryClientBuilder`. Thanks to #2381, we were able to
leverage most information from markers and avoid inconsistency.
Linehaul is the accompanying metadata pip sends in its user
agent when talking to registries. You can see this output by running
something like `python -c 'from pip._internal.network.session import
user_agent; print(user_agent())'`.
In PyPI, this metadata is processed by the
[linehaul-cloud-function](https://github.com/pypi/linehaul-cloud-function).
More info about linehaul can be found in #1958.
Below are some examples from pip:
* Linux GHA: `pip/24.0
{"ci":true,"cpu":"x86_64","distro":{"id":"jammy","libc":{"lib":"glibc","version":"2.35"},"name":"Ubuntu","version":"22.04"},"implementation":{"name":"CPython","version":"3.12.2"},"installer":{"name":"pip","version":"24.0"},"openssl_version":"OpenSSL
3.0.2 15 Mar
2022","python":"3.12.2","rustc_version":"1.76.0","system":{"name":"Linux","release":"6.5.0-1016-azure"}}`
* Windows GHA: `pip/24.0
{"ci":true,"cpu":"AMD64","implementation":{"name":"CPython","version":"3.12.2"},"installer":{"name":"pip","version":"24.0"},"openssl_version":"OpenSSL
3.0.13 30 Jan
2024","python":"3.12.2","rustc_version":"1.76.0","system":{"name":"Windows","release":"2022Server"}}`
* OSX GHA: `pip/24.0
{"ci":true,"cpu":"arm64","distro":{"name":"macOS","version":"14.2.1"},"implementation":{"name":"CPython","version":"3.12.2"},"installer":{"name":"pip","version":"24.0"},"openssl_version":"OpenSSL
3.0.13 30 Jan
2024","python":"3.12.2","rustc_version":"1.76.0","system":{"name":"Darwin","release":"23.2.0"}}`
Here's what uv's results look like (sorry for the keys not having the same
order):
* Linux GHA: `uv/0.1.21
{"installer":{"name":"uv","version":"0.1.21"},"python":"3.12.2","implementation":{"name":"CPython","version":"3.12.2"},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":"Linux","release":"6.5.0-1016-azure"},"cpu":"x86_64","openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}`
* Windows GHA: `uv/0.1.21
{"installer":{"name":"uv","version":"0.1.21"},"python":"3.12.2","implementation":{"name":"CPython","version":"3.12.2"},"distro":null,"system":{"name":"Windows","release":"2022Server"},"cpu":"AMD64","openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}`
* OSX GHA: `uv/0.1.21
{"installer":{"name":"uv","version":"0.1.21"},"python":"3.12.2","implementation":{"name":"CPython","version":"3.12.2"},"distro":{"name":"macOS","version":"14.2.1","id":null,"libc":null},"system":{"name":"Darwin","release":"23.2.0"},"cpu":"arm64","openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}`
Distro information (which pip retrieves via `from pip._vendor import
distro` rather than the `platform` module) was not retrieved from
markers. Instead, the Linux release codename/name/version uses the
`sys-info` crate, adding about 50us of extra overhead on Linux. The
distro OSX version re-used the [mac_os version
implementation](99c992e38b/crates/platform-host/src/mac_os.rs)
from #2381, which adds about 20us of overhead on OSX. I tried to use
other crates to avoid re-introducing `mac_os.rs`, but most of them didn't
yield satisfactory performance (~40ms-60ms) or had the wrong values
(e.g., Darwin version vs. OSX version).
I also didn't add libc retrieval or rustc retrieval as those seem to add
substantial overhead due to querying `ldd` or `rustc`. PyPy version
detection was also not added to avoid adding extra overhead to [support
PyPy for
linehaul](https://github.com/pypa/pip/blob/24.0/src/pip/_internal/network/session.py#L123).
All other behavior was kept 1-1 to match what pip's linehaul
implementation does (as of 24.0). This also aligns with what was
discussed in #1958.
## Test Plan
Added new integration test to uv-client.
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
By running `get_interpreter_info.py` outside of the current working
directory, we seem to have broken pyenv shims.
Closes https://github.com/astral-sh/uv/issues/2488.
## Test Plan
Without this change (resolving to the Homebrew Python, even though we
start with a shim):
```
DEBUG Starting interpreter discovery for Python @ `python3.11`
DEBUG Probing interpreter info for: /Users/crmarsh/.pyenv/shims/python3.11
DEBUG Found Python 3.11.7 for: /Users/crmarsh/.pyenv/shims/python3.11
Using Python 3.11.7 interpreter at: /opt/homebrew/opt/python@3.11/bin/python3.11
Creating virtualenv at: .venv
INFO Removing existing directory
Activate with: source .venv/bin/activate
```
With this change:
```
DEBUG Starting interpreter discovery for Python @ `python3.11`
DEBUG Probing interpreter info for: /Users/crmarsh/.pyenv/shims/python3.11
DEBUG Found Python 3.11.1 for: /Users/crmarsh/.pyenv/shims/python3.11
Using Python 3.11.1 interpreter at: /Users/crmarsh/.pyenv/versions/3.11.1/bin/python3.11
Creating virtualenv at: .venv
INFO Removing existing directory
Activate with: source .venv/bin/activate
```
## Summary
If a package uses Hatch's `root.uri` feature, we currently error:
```toml
dependencies = [
"black @ {root:uri}/../black_editable"
]
```
This happens even though we're using PEP 517 hooks to get the metadata,
which _should_ support this. The problem is that we load the full
`PyProjectToml`, which means we parse the requirements, which means we
reject what looks like a relative URL in dependencies.
Instead, we should only enforce a limited subset of `pyproject.toml`
(arguably none).
Closes https://github.com/astral-sh/uv/issues/2475.
## Summary
This PR adds limited support for PEP 440-compatible local version
testing. Our behavior is _not_ comprehensively in-line with the spec.
However, it does fix by _far_ the biggest practical limitation, and
resolves all the issues that've been raised on uv related to local
versions without introducing much complexity into the resolver, so it
feels like a good tradeoff for me.
I'll summarize the change here, but for more context, see [Andrew's
write-up](https://github.com/astral-sh/uv/issues/1855#issuecomment-1967024866)
in the linked issue.
Local version identifiers are really tricky because of asymmetry.
`==1.2.3` should allow `1.2.3+foo`, but `==1.2.3+foo` should not allow
`1.2.3`. It's very hard to map them to PubGrub, because PubGrub doesn't
think of things in terms of individual specifiers (unlike the PEP 440
spec) -- it only thinks in terms of ranges.
Right now, resolving PyTorch and friends fails, because...
- The user provides requirements like `torch==2.0.0+cu118` and
`torchvision==0.15.1+cu118`.
- We then match those exact versions.
- We then look at the requirements of `torchvision==0.15.1+cu118`, which
includes `torch==2.0.0`.
- Under PEP 440, this is fine, because `torch @ 2.0.0+cu118` should be
compatible with `torch==2.0.0`.
- In our model, though, it's not, because these are different versions.
If we change our comparison logic in various places to allow this, we
risk breaking some fundamental assumptions of PubGrub around version
continuity.
- Thus, we fail to resolve, because we can't accept both `torch @ 2.0.0`
and `torch @ 2.0.0+cu118`.
As compared to the solutions we explored in
https://github.com/astral-sh/uv/issues/1855#issuecomment-1967024866, at
a high level, this approach differs in that we lie about the
_dependencies_ of packages that rely on our local-version-using package,
rather than lying about the versions that exist, or the version we're
returning, etc.
In short:
- When users specify local versions upfront, we keep track of them. So,
above, we'd take note of `torch` and `torchvision`.
- When we convert the dependencies of a package to PubGrub ranges, we
check if the requirement matches `torch` or `torchvision`. If it's
an `==`, we check if it matches (in the above example)
`torch==2.0.0`. If so, we _change_ the requirement to
`torch==2.0.0+cu118`. (If it's `==` some other version, we return an
incompatibility.)
In other words, we selectively override the declared dependencies by
making them _more specific_ if a compatible local version was specified
upfront.
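A sketch of that narrowing step (types are simplified stand-ins for uv's real requirement and version types):

```rust
use std::collections::HashMap;

/// If the user pinned e.g. `torch==2.0.0+cu118` upfront and a dependency asks
/// for `torch==2.0.0` exactly, narrow the dependency to the local version.
/// Otherwise leave the specifier unchanged (the real code reports an
/// incompatibility for a mismatched `==`).
fn narrow_to_local(
    locals: &HashMap<String, (String, String)>, // name -> (base version, local version)
    name: &str,
    exact_version: &str,
) -> String {
    match locals.get(name) {
        Some((base, local)) if base == exact_version => local.clone(),
        _ => exact_version.to_string(),
    }
}
```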
The net effect here is that the motivating PyTorch resolutions all work.
And, in general, transitive local versions work as expected.
The thing that still _doesn't_ work is: imagine if there were _only_
local versions of `torch` available. Like, `torch @ 2.0.0` didn't exist,
but `torch @ 2.0.0+cpu` did, and `torch @ 2.0.0+gpu` did, and so on.
`pip install torch==2.0.0` would arbitrarily choose one of `2.0.0+cpu`
or `2.0.0+gpu`, and that's correct as per PEP 440 (local version
segments should be completely ignored on `torch==2.0.0`). However, uv
would fail to identify a compatible version. I'd _probably_ prefer to
fix this, although candidly I think our behavior is _ok_ in practice,
and it's never been reported as an issue.
Closes https://github.com/astral-sh/uv/issues/1855.
Closes https://github.com/astral-sh/uv/issues/2080.
Closes https://github.com/astral-sh/uv/issues/2328.
## Summary
Resolves https://github.com/astral-sh/uv/issues/2391
## Test Plan
Added a few tests to make sure that the exit code returned is 0 when
there's no conflict; 1 when there's any conflict.
## Summary
Right now, the middleware doesn't apply credentials that were
_originally_ sourced from a URL. This requires that we call
`with_url_encoded_auth` whenever we create a request to ensure that any
credentials that were passed in as part of an index URL (for example)
are respected.
This PR modifies `uv-auth` to instead apply those credentials in the
middleware itself. This seems preferable to me. As far as I can tell, we
can _only_ add in-URL credentials to the store ourselves (since in-URL
credentials are converted to headers by the time they reach the
middleware). And if we ever _didn't_ apply those credentials to new
URLs, it'd be a bug in the logic that precedes the middleware (i.e., us
forgetting to call `with_url_encoded_auth`).
## Test Plan
`cargo run pip install` with an authenticated index.