## Summary
If you have (e.g.) `extra-index-url` in your configuration file _and_
provide `--extra-index-url` on the command-line, we now merge the
options rather than ignoring those in the configuration file. As such,
merging the CLI and the persistent configuration is now semantically
identical to how we merge (project persistent configuration) with (user
persistent configuration).
Closes https://github.com/astral-sh/uv/issues/3541.
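For illustration, a minimal sketch of the merge semantics for list-valued options (the function is hypothetical, not uv's actual API):
```rust
/// Hypothetical sketch: CLI values take precedence by appearing first,
/// but configuration-file values are no longer discarded.
fn merge_lists<T>(cli: Option<Vec<T>>, file: Option<Vec<T>>) -> Option<Vec<T>> {
    match (cli, file) {
        (Some(mut cli), Some(file)) => {
            cli.extend(file);
            Some(cli)
        }
        (cli, file) => cli.or(file),
    }
}
```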
## Summary
The main motivation here is that the `.filename()` method that we
implement on `Url` will do URL decoding for the last segment, which we
were missing here.
The errors are a bit awkward, because in
`crates/uv-resolver/src/lock.rs`, we wrap errors in `failed to extract
filename from URL: {url}`, so in theory we want the underlying errors to
_omit_ the URL? But sometimes they use `#[error(transparent)]`?
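For context, a rough sketch of what a `filename`-style helper looks like, assuming the `url` and `percent-encoding` crates (not uv's actual implementation):
```rust
use url::Url;

/// Take the last path segment and percent-decode it, so that
/// `foo%2Dbar-1.0.tar.gz` becomes `foo-bar-1.0.tar.gz`.
fn filename(url: &Url) -> Option<String> {
    let segment = url.path_segments()?.last()?;
    let decoded = percent_encoding::percent_decode_str(segment)
        .decode_utf8()
        .ok()?;
    Some(decoded.into_owned())
}
```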
## Summary
Uncertain about this, but we don't actually need the full
`SourceDistFilename`, only the name and version -- and we often have
that information already (as in the lockfile routines). So by flattening
the fields onto `RegistrySourceDist`, we can avoid re-parsing for
information we already have.
Prompted by
https://github.com/astral-sh/uv/pull/3657#discussion_r1606041239
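A rough sketch of the flattening, with stub types standing in for uv's real ones:
```rust
// Stub types for illustration; uv's real types are richer.
struct PackageName(String);
struct Version(String);

/// After flattening: the name and version live directly on the struct,
/// so callers that already know them (as in the lockfile routines)
/// don't need to re-parse a full `SourceDistFilename`.
struct RegistrySourceDist {
    name: PackageName,
    version: Version,
    // ...remaining fields elided
}
```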
There's still some level of discomfort here, as the `tool` module needs
to import the `project` module to manage an environment. We should
probably move most of the basic operations in the `project` module root
into some sort of shared module for behind the scenes operations?
Regardless, this change should simplify that future move.
## Summary
Restore API-compatibility with pre-1.1.0 versions of the `zip` crate,
and pin the dependency to the 0.6 series, due to concerns discussed in
https://github.com/astral-sh/uv/issues/3642.
## Test Plan
```
cargo run -p uv-dev -- fetch-python
cargo test
```
## Summary
This PR adds lossless deserialization for `GitSourceDist` distributions
in the lockfile. Specifically, we now properly preserve the requested
revision, the subdirectory, and the precise Git commit SHA.
## Test Plan
`cargo test`
## Summary
This PR introduces parallelism to the resolver. Specifically, we can
perform PubGrub resolution on a separate thread, while keeping all I/O
on the tokio thread. We already have the infrastructure set up for this
with the channel and `OnceMap`, which makes this change relatively
simple. The big change needed to make this possible is removing the
lifetimes on some of the types that need to be shared between the
resolver and pubgrub thread.
A related PR, https://github.com/astral-sh/uv/pull/1163, found that
adding `yield_now` calls improved throughput. With optimal scheduling we
might be able to get away with everything on the same thread here.
However, in the ideal pipeline with perfect prefetching, the resolution
and prefetching can run completely in parallel without depending on one
another. While this would be very difficult to achieve, even with our
current prefetching pattern we see a consistent performance improvement
from parallelism.
This does also require reverting a few of the changes from
https://github.com/astral-sh/uv/pull/3413, but not all of them. The
sharing is isolated to the resolver task.
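A minimal sketch of the thread split, assuming tokio (names are illustrative; the real resolver is far more involved):
```rust
use tokio::sync::mpsc;

async fn resolve() {
    let (request_tx, mut request_rx) = mpsc::unbounded_channel::<String>();

    // CPU-bound PubGrub-style resolution runs on a dedicated OS thread...
    let solver = std::thread::spawn(move || {
        // ...and emits fetch requests as it explores candidate versions.
        request_tx.send("some-package".to_string()).ok();
    });

    // ...while all I/O stays on the tokio runtime, servicing requests.
    while let Some(package) = request_rx.recv().await {
        println!("fetching metadata for {package}");
    }
    solver.join().expect("solver thread panicked");
}
```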
## Test Plan
On smaller tasks performance is mixed with ~2% improvements/regressions
on both sides. However, on medium-large resolution tasks we see the
benefits of parallelism, with improvements anywhere from 10-50%.
```
./scripts/requirements/jupyter.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
Time (mean ± σ): 29.2 ms ± 1.8 ms [User: 20.3 ms, System: 29.8 ms]
Range (min … max): 26.4 ms … 36.0 ms 91 runs
Benchmark 2: ./target/profiling/parallel (resolve-warm)
Time (mean ± σ): 25.5 ms ± 1.0 ms [User: 19.5 ms, System: 25.5 ms]
Range (min … max): 23.6 ms … 27.8 ms 99 runs
Summary
./target/profiling/parallel (resolve-warm) ran
1.15 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/boto3.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
Time (mean ± σ): 487.1 ms ± 6.2 ms [User: 464.6 ms, System: 61.6 ms]
Range (min … max): 480.0 ms … 497.3 ms 10 runs
Benchmark 2: ./target/profiling/parallel (resolve-warm)
Time (mean ± σ): 430.8 ms ± 9.3 ms [User: 529.0 ms, System: 77.2 ms]
Range (min … max): 417.1 ms … 442.5 ms 10 runs
Summary
./target/profiling/parallel (resolve-warm) ran
1.13 ± 0.03 times faster than ./target/profiling/baseline (resolve-warm)
```
```
./scripts/requirements/airflow.in
Benchmark 1: ./target/profiling/baseline (resolve-warm)
Time (mean ± σ): 478.1 ms ± 18.8 ms [User: 482.6 ms, System: 205.0 ms]
Range (min … max): 454.7 ms … 508.9 ms 10 runs
Benchmark 2: ./target/profiling/parallel (resolve-warm)
Time (mean ± σ): 308.7 ms ± 11.7 ms [User: 428.5 ms, System: 209.5 ms]
Range (min … max): 287.8 ms … 323.1 ms 10 runs
Summary
./target/profiling/parallel (resolve-warm) ran
1.55 ± 0.08 times faster than ./target/profiling/baseline (resolve-warm)
```
## Summary
Fixes a small discrepancy between the `pip compile` outputs for the
`--annotation-style=split` and `--annotation-style=line` options.
### Problem
Consider the following `pyproject.toml` file.
```sh
$ cat pyproject.toml
[project]
name = "uv_test"
dynamic = ["version"]
dependencies = ["click"]
```
Running `uv pip compile` with `--annotation-style=split` on uv 0.1.44
yields the following:
```sh
❯ uv pip compile pyproject.toml --annotation-style=split
Resolved 1 package in 2ms
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml --annotation-style=split
click==8.1.7
# via uv-test (pyproject.toml)
```
However, running `uv pip compile` with `--annotation-style=line` doesn't
include source info for root-level dependencies.
```sh
❯ uv pip compile pyproject.toml --annotation-style=line
Resolved 1 package in 1ms
# This file was autogenerated by uv via the following command:
# uv pip compile pyproject.toml --annotation-style=line
click==8.1.7
```
With this PR:
```sh
❯ ../target/debug/uv pip compile --annotation-style=line pyproject.toml
Resolved 1 package in 6ms
# This file was autogenerated by uv via the following command:
# uv pip compile --annotation-style=line pyproject.toml
click==8.1.7 # via uv-test (pyproject.toml)
```
This also now matches `pip-tools` output:
```sh
❯ pip-compile --annotation-style=line pyproject.toml
#
# This file is autogenerated by pip-compile with Python 3.12
# by the following command:
#
# pip-compile --annotation-style=line pyproject.toml
#
click==8.1.7 # via uv_test (pyproject.toml)
```
## Test Plan
`cargo test`
## Summary
If a user includes markers after an editable, we now ignore them (rather
than including them in the parsed URL). This matches pip's behavior. In
the future, we could further improve by respecting them, but that
_would_ be a deviation from pip.
For example, given:
```
-e ./scripts/packages/black_editable ; python_version >= "3.9" and python_ver
```
We now split at the first whitespace (just before the `;`), parse
everything before, and throw out everything after.
This logic also extends to extras. So given:
```
-e ./scripts/packages/black_editable[dev, colorama]
```
We'll now parse this as the URL
`./scripts/packages/black_editable[dev,`, and throw out ` colorama]`.
Instead, you need to do:
```
-e ./scripts/packages/black_editable[dev,colorama]
```
(I.e., remove the space.)
This _also_ matches pip's behavior. I could "fix" this but I'm unsure if
I should -- it means uv would accept requirements files that won't work
with pip. Open to input. My gut reaction is that we _should_ properly
support `-e ./scripts/packages/black_editable[dev, colorama]` even if
pip would reject it, but `requirements.txt` is implementation-defined so
it'd be a "deviation".
Closes https://github.com/astral-sh/uv/issues/3604.
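A tiny sketch of the splitting rule (illustrative, not the actual parser):
```rust
/// Everything after the first whitespace on an editable line is
/// discarded, so markers (and anything after a space inside extras)
/// are thrown away.
fn editable_url(line: &str) -> &str {
    line.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(
        editable_url(r#"./pkg ; python_version >= "3.9""#),
        "./pkg"
    );
    // The space inside the extras list truncates the URL, as described.
    assert_eq!(editable_url("./pkg[dev, colorama]"), "./pkg[dev,");
}
```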
Following from #3595, we'd like wheels to make their way into the lock
file even if the current environment selects an sdist. With #3595, this
didn't happen:
```
$ cargo run -p uv -- pip compile -p3.10 <(echo psycopg2) --unstable-uv-lock-file
$ cat uv.lock
version = 1

[[distribution]]
name = "psycopg2"
version = "2.9.9"
source = "registry+https://pypi.org/simple"

[distribution.sdist]
url = "dc6acaf46d76fce95daac5e0f0301b/psycopg2-2.9.9.tar.gz"
hash = "sha256:d1454bde93fb1e224166811694d600e746430c006fbb031ea06ecc2ea41bf156"
```
The above example uses `psycopg2`, which has an sdist and wheels only
for Windows. Since I ran the above on Linux, an sdist was selected. But no
wheels appeared in the lock file.
With this PR, wheels are now correctly plumbed through:
```
$ cargo run -p uv -- pip compile -p3.10 <(echo psycopg2) --unstable-uv-lock-file
$ cat uv.lock
version = 1

[[distribution]]
name = "psycopg2"
version = "2.9.9"
source = "registry+https://pypi.org/simple"

[distribution.sdist]
url = "dc6acaf46d76fce95daac5e0f0301b/psycopg2-2.9.9.tar.gz"
hash = "sha256:d1454bde93fb1e224166811694d600e746430c006fbb031ea06ecc2ea41bf156"

[[distribution.wheel]]
url = "2767d96391f5cde90b82cb3e8c2a12/psycopg2-2.9.9-cp310-cp310-win32.whl"
hash = "sha256:38a8dcc6856f569068b47de286b472b7c473ac7977243593a288ebce0dc89516"

[[distribution.wheel]]
url = "6572dec6831f85491a5e4dda606a98/psycopg2-2.9.9-cp310-cp310-win_amd64.whl"
hash = "sha256:426f9f29bde126913a20a96ff8ce7d73fd8a216cfb323b1f04da402d452853c3"

[[distribution.wheel]]
url = "1fc5b9d33c858a602868a592cdc1b0/psycopg2-2.9.9-cp311-cp311-win32.whl"
hash = "sha256:ade01303ccf7ae12c356a5e10911c9e1c51136003a9a1d92f7aa9d010fb98372"

[[distribution.wheel]]
url = "5133dd3183e671b278ce248810b7f7/psycopg2-2.9.9-cp311-cp311-win_amd64.whl"
hash = "sha256:121081ea2e76729acfb0673ff33755e8703d45e926e416cb59bae3a86c6a4981"

[[distribution.wheel]]
url = "f74ffe6b6fe119ccb8a6546c3fb893/psycopg2-2.9.9-cp312-cp312-win32.whl"
hash = "sha256:d735786acc7dd25815e89cc4ad529a43af779db2e25aa7c626de864127e5a024"

[[distribution.wheel]]
url = "c4a26e1918ab7ee854fb5247f16c40/psycopg2-2.9.9-cp312-cp312-win_amd64.whl"
hash = "sha256:a7653d00b732afb6fc597e29c50ad28087dcb4fbfb28e86092277a559ae4e693"

[[distribution.wheel]]
url = "ffeb9ac356ce0d6c4f2f34e396dbc0/psycopg2-2.9.9-cp37-cp37m-win32.whl"
hash = "sha256:5e0d98cade4f0e0304d7d6f25bbfbc5bd186e07b38eac65379309c4ca3193efa"

[[distribution.wheel]]
url = "0a39176d36fd7105774e57996f63cd/psycopg2-2.9.9-cp37-cp37m-win_amd64.whl"
hash = "sha256:7e2dacf8b009a1c1e843b5213a87f7c544b2b042476ed7755be813eaf4e8347a"

[[distribution.wheel]]
url = "86b90d30c4420cc3c0f6da2b8f3a9a/psycopg2-2.9.9-cp38-cp38-win32.whl"
hash = "sha256:ff432630e510709564c01dafdbe996cb552e0b9f3f065eb89bdce5bd31fabf4c"

[[distribution.wheel]]
url = "c439b378ef79997a935f10374f3c0d/psycopg2-2.9.9-cp38-cp38-win_amd64.whl"
hash = "sha256:bac58c024c9922c23550af2a581998624d6e02350f4ae9c5f0bc642c633a2d5e"

[[distribution.wheel]]
url = "5080c0e61ad5f08b9503e508aac116/psycopg2-2.9.9-cp39-cp39-win32.whl"
hash = "sha256:c92811b2d4c9b6ea0285942b2e7cac98a59e166d59c588fe5cfe1eda58e72d59"

[[distribution.wheel]]
url = "ec73fe66d4d65f5bbe54efb191d9e6/psycopg2-2.9.9-cp39-cp39-win_amd64.whl"
hash = "sha256:de80739447af31525feddeb8effd640782cf5998e1a4e9192ebdf829717e3913"
```
Ref #3351
Our current flow of data from "simple registry package" to "final
resolved distribution" goes through a number of types:
* `SimpleMetadata` is the API response from a registry that includes all
published versions for a package. Each version has an assortment of
metadata
associated with it.
* `VersionFiles` is the aforementioned metadata. It is split in two: a
group of files for source distributions and a group of files for wheels.
* `PrioritizedDist` collects a subset of the files from `VersionFiles`
to form a selection of the "best" sdist and the "best" wheel for the
current environment.
* `CompatibleDist` is created from a borrowed `PrioritizedDist` that,
perhaps among other things, encapsulates the decision of whether to pick
an sdist or a wheel. (This decision depends both on compatibility and
the action being performed. e.g., When doing installation, a
`CompatibleDist` will sometimes select an sdist over a wheel.)
* `ResolvedDistRef` is like a `ResolvedDist`, but borrows a `Dist`.
* `ResolvedDist` is the almost-final-form of a distribution in a
resolution and is created from a `ResolvedDistRef`.
* `AnnotatedResolvedDist` is a new data type that is the actual final
form of a distribution that a universal lock file cares about. It
bundles a `ResolvedDist` with some metadata needed to generate a lock
file.
One of the requirements of a universal lock file is that we include all
wheels (and maybe all source distributions? but at least one if it's
present) associated with a distribution. But the above flow of data (in
the step from `VersionFiles` to `PrioritizedDist`) drops all wheels
except for the best one.
To remedy this, in this PR, we rejigger `PrioritizedDist`,
`CompatibleDist` and `ResolvedDistRef` so that all wheel data is
preserved. And when a `ResolvedDistRef` is finally turned into a
`ResolvedDist`, we copy all of the wheel data. And finally, we adjust
the `Lock` constructor to read this new data and include it in the lock
file. To make this work, we also modify `RegistryBuiltDist` so that it
can contain one or more wheels instead of just one.
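A rough sketch of the shape change to `RegistryBuiltDist` (field names are illustrative):
```rust
// Stub wheel type for illustration.
struct Wheel {
    filename: String,
}

/// Instead of holding the single "best" wheel, the distribution keeps
/// every compatible wheel, plus an index recording which one was
/// selected for the current environment.
struct RegistryBuiltDist {
    wheels: Vec<Wheel>,
    best_wheel_index: usize,
}
```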
One shortcoming here (called out in the code as a FIXME) is that if a
source distribution is selected as the "best" thing to use (perhaps
there are no compatible wheels), then the wheels won't end up in the
lock file. I plan to fix this in a follow-up PR.
We also aren't totally consistent on source distribution naming.
Sometimes we use `sdist`. Sometimes `source`. Sometimes `source_dist`.
I think it'd be nice to just use `sdist` everywhere, but I do prefer
the type names to be `SourceDist`. And sometimes you want function
names to match the type names (i.e., `from_source_dist`), which in turn
leads to an appearance of inconsistency. I'm open to ideas.
Closes #3351
## Summary
In `ResolutionGraph::from_state`, we have mechanisms to grab the hashes
and metadata for all distributions -- but we then throw that information
away. This PR preserves it on a new `AnnotatedDist` (yikes, open to
suggestions) that wraps `ResolvedDist` and includes (1) the hashes
(computed or from the registry) and (2) the `Metadata23`, which lets us
extract the version.
Closes https://github.com/astral-sh/uv/issues/3356.
Closes https://github.com/astral-sh/uv/issues/3357.
## Summary
Splits this into two loops that each handle independent cases, to make
the code a little easier to reason about. No behavioral or logic changes
-- just splitting the `match` across two loops.
## Summary
Closes
https://github.com/astral-sh/uv/issues/3578#issuecomment-2110675382.
## Test Plan
Verified that in the OpenSUSE test, we create both, and they're
symlinks:
```text
INFO: Creating virtual environment with `venv`...
INFO: Installing into `venv` virtual environment...
DEBUG Found a virtualenv named .venv at: /tmp/tmp4nape29h/.venv
DEBUG Cached interpreter info for Python 3.10.14, skipping probing: .venv/bin/python
DEBUG Using Python 3.10.14 environment at .venv/bin/python
DEBUG Trying to lock if free: .venv/.lock
purelib: "/tmp/tmp4nape29h/.venv/lib/python3.10/site-packages"
platlib: "/tmp/tmp4nape29h/.venv/lib64/python3.10/site-packages"
is_same_file(purelib, platlib): Ok(true)
```
## Summary
Increment the removed file counts in filters
in install_registry_source_dist_cached test, to make it work again on
Gentoo. The tested counts were updated
in 9a92a3ad37, but the filters were not.
That said, the respective count increased in Gentoo as well, so adjust
both input and output strings. I'm updating Windows as guesswork,
though I suspect that filter may not be necessary anymore, given that CI
was passing.
## Test Plan
`cargo test` on Gentoo :-).
## Summary
Fixes a typo in a comment
## Test Plan
I assume there's no need to test comment changes, other than having a
human check they make sense. That's what this PR is for 😉
## Summary
Uses the editable handling from `pip sync`, and improves the
abstractions such that we can pass those resolved editables into the
resolver.
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
It's confusing that we use `constraints` here because constraints mean
something else for us (e.g., `--constraint constraints.txt`). These are
really the dependencies of a given `PubGrubPackage` -- the type is even
called `PubGrubDependencies`.
Windows does not support cloning whole directories, so we clone each
file instead.
Closes #3547
## Test Plan
Ran ` uv pip install setuptools --link-mode=clone` manually
## Summary
I don't love this, but it turns out that setuptools is not robust to
parallel builds: https://github.com/pypa/setuptools/issues/3119. As a
result, if you run uv from multiple processes, and they each attempt to
build the same source distribution, you can hit failures.
This PR applies an advisory lock to the source distribution directory.
We apply it unconditionally, even if we ultimately find something in the
cache and _don't_ do a build, which helps ensure that we only build the
distribution once (and wait for that build to complete) rather than
kicking off builds from each thread.
Closes https://github.com/astral-sh/uv/issues/3512.
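A hedged sketch of the locking approach, using the `fs2` crate for advisory file locks (not necessarily uv's exact mechanism):
```rust
use std::fs::File;
use std::path::Path;

use fs2::FileExt;

/// Take an exclusive advisory lock on a file inside the source
/// distribution's directory for the duration of the build, so that
/// concurrent uv processes serialize on the same build.
fn with_build_lock<T>(dir: &Path, build: impl FnOnce() -> T) -> std::io::Result<T> {
    let lock = File::create(dir.join(".lock"))?;
    lock.lock_exclusive()?; // blocks until any other holder releases
    let result = build();
    lock.unlock()?;
    Ok(result)
}
```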
## Test Plan
Ran:
```sh
#!/bin/bash
make_venv() {
  target/debug/uv venv $1
  source $1/bin/activate
  target/debug/uv pip install opentracing --no-deps --verbose
}

for i in {1..8}
do
  make_venv ./$1/$i &
done
```
## Summary
I think this is overall good change because it explicitly encodes (in
the type system) something that was previously implicit. I'm not a huge
fan of the names here, open to input.
It covers some of https://github.com/astral-sh/uv/issues/3506 but I
don't think it _closes_ it.
## Summary
Just fix typos.
While `alpha-numeric` is not really a misspelling:
- it is missing from mainstream curated dictionaries, all of them
suggest `alphanumeric`;
- it is used less than `alphanumeric` (by more than 10×) according to
the Google [Ngram
Viewer](https://books.google.com/ngrams/graph?content=alpha-numeric%2Calphanumeric&year_start=1900&year_end=2019&corpus=en-2019);
- it is [missing from
SCOWL](http://app.aspell.net/lookup?dict=en_US-large;words=alpha-numeric).
## Test Plan
CI jobs.
## Summary
`runpy.run_path` was added in Python 2.7 and 3.2 -- and every Python that
is not EOL supports it.
It is arguably nicer to read, and the path is only given once in the
command.
At least right now, runpy -- unlike `exec`, which trips S102 -- is not
flagged by any bandit-derived Ruff check.
(I guess because it loads from a file instead of a simple string...)
Because of the import, it is also not a one-liner anymore. (But that
could be fixed with `__import__('runpy').run_path(...)`.)
## Test Plan
```
import runpy
runpy.run_path('/path/to/venv/bin/activate_this.py')
```
## Summary
If you run the script included in the linked issue, then `uv cache
clean`, we hit permissions errors on certain directories created by
`setuptools`. The permissions on those directories look like:
```
❯ sudo ls -l /Users/crmarsh/Library/Caches/uv/built-wheels-v3/pypi/opentracing/2.4.0/M-fYsaHAaQQvedmPMUl9D/opentracing-2.4.0.tar.gz/build/bdist.macosx-14.2-arm64/wheel/opentracing
Password:
total 0
drwxr-xr-x 3 crmarsh staff 96 May 11 12:51 harness
```
This PR adds logic to make those directories readable by the current
user.
Closes https://github.com/astral-sh/uv/issues/3515.
## Summary
pip passes these as positional arguments, and at least one build backend
relies on that. My personal opinion is that it's a spec violation, and
the build backend should be updated, but I'd prefer to favor
compatibility over strictness here.
Closes https://github.com/astral-sh/uv/issues/3509.
## Test Plan
`cargo run pip install cryptacular==1.6.2`
## Summary
This PR consolidates the concurrency limits used throughout `uv` and
exposes two limits, `UV_CONCURRENT_DOWNLOADS` and
`UV_CONCURRENT_BUILDS`, as environment variables.
Currently, `uv` has a number of concurrent streams that it buffers using
relatively arbitrary limits for backpressure. However, many of these
limits are conflated. We run a relatively small number of tasks overall
and should start most things as soon as possible. What we really want to
limit are three separate operations:
- File I/O. This is managed by tokio's blocking pool and we should not
really have to worry about it.
- Network I/O.
- Python build processes.
Because the current limits span a broad range of tasks, it's possible
that a limit meant for network I/O is occupied by tasks performing
builds, reading from the file system, or even waiting on a `OnceMap`. We
also don't limit build processes that end up being required to perform a
download. While this may not pose a performance problem because our
limits are relatively high, it does mean that the limits do not do what
we want, making it tricky to expose them to users
(https://github.com/astral-sh/uv/issues/1205,
https://github.com/astral-sh/uv/issues/3311).
After this change, the limits on network I/O and build processes are
centralized and managed by semaphores. All other tasks are unbuffered
(note that these tasks are still bounded, so backpressure should not be
a problem).
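A minimal sketch of the semaphore pattern with tokio (identifiers are illustrative, not uv's):
```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// A semaphore caps in-flight downloads; everything else runs
/// unbuffered.
async fn fetch_all(urls: Vec<String>, concurrent_downloads: usize) {
    let semaphore = Arc::new(Semaphore::new(concurrent_downloads));
    let mut handles = Vec::new();
    for url in urls {
        let semaphore = Arc::clone(&semaphore);
        handles.push(tokio::spawn(async move {
            // Hold a permit only for the duration of the download.
            let _permit = semaphore.acquire().await.expect("semaphore closed");
            download(&url).await;
        }));
    }
    for handle in handles {
        handle.await.expect("task panicked");
    }
}

async fn download(_url: &str) { /* network I/O elided */ }
```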
This only makes hashes optional for wheels/sdists that come from
registries or direct URLs. For wheels/sdists that come from other
sources, a hash should not be present.
For path dependencies, a hash should not be present because the state of
the path dependency is not intended to be tracked in the lock file. This
is consistent with how other tools deal with path dependencies, and if
it were otherwise, the hash would I believe need to be updated for every
change to the path dependency.
For git dependencies (source dists only), a hash should not be present
because the lock will contain the specific commit revision hash. This is
functionally equivalent to a hash, and so a hash is redundant.
As part of this change, we validate the presence or absence of a hash
based on the dependency source. We also add our first regression tests.
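A rough sketch of the validation rule, with stub source kinds standing in for uv's actual types:
```rust
enum Source {
    Registry,
    DirectUrl,
    Git,
    Path,
}

/// Hashes are optional for registry and direct-URL distributions, and
/// must be absent for Git and path dependencies.
fn validate_hash(source: &Source, hash: Option<&str>) -> Result<(), String> {
    match (source, hash) {
        (Source::Registry | Source::DirectUrl, _) => Ok(()),
        (Source::Git | Source::Path, None) => Ok(()),
        (Source::Git | Source::Path, Some(_)) => {
            Err("hash must not be present for this source".to_string())
        }
    }
}
```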
## Summary
Likely necessary to resolve https://github.com/astral-sh/uv/issues/2500.
I made this a separate PR in an attempt to keep the changes as small as
possible; let me know if it's preferred to keep them as a single PR.
## Test Plan
- edited the test in `interpreter.rs`
- tested manually via `println!`
```
$ cargo run --quiet pip show test
["/Users/chankang/Library/Caches/uv/.tmpKzNEPN", "/Users/chankang/.pyenv/versions/3.12.2/lib/python312.zip", "/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12", "/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12/lib-dynload", "/Users/chankang/repos/uv/.venv/lib/python3.12/site-packages"]
warning: Package(s) not found for: test
chankang@chans-Air ~/repos/uv - (syspath)
$ git diff
diff --git a/crates/uv-interpreter/src/environment.rs b/crates/uv-interpreter/src/environment.rs
index 33b785ce..8ebf0864 100644
--- a/crates/uv-interpreter/src/environment.rs
+++ b/crates/uv-interpreter/src/environment.rs
@@ -106,6 +106,7 @@ impl PythonEnvironment {
/// Some distributions also create symbolic links from `purelib` to `platlib`; in such cases, we
/// still deduplicate the entries, returning a single path.
pub fn site_packages(&self) -> impl Iterator<Item = &Path> {
+ println!("{:?}", self.interpreter.sys_path());
if let Some(target) = self.interpreter.target() {
Either::Left(std::iter::once(target.root()))
} else {
chankang@chans-Air ~/repos/uv - (syspath)
$ python -c "import sys; print(sys.path)"
['', '/Users/chankang/.pyenv/versions/3.12.2/lib/python312.zip', '/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12', '/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12/lib-dynload', '/Users/chankang/.pyenv/versions/3.12.2/lib/python3.12/site-packages']
chankang@chans-Air ~/repos/uv - (syspath)
```
This still keeps the resolver state on the stack, but it organizes it
into a more structured representation. This is a precursor to
implementing resolver forking, where we will ultimately put this state
on the heap. The idea is that this will let us maintain multiple
independent resolver states that will all produce their own resolution
(and potentially other forked states).
Closes #3354
## Summary
I've started to refer to this as the "project" API in various places, it
seems less duplicative than the "workspace" API which is a little
different.
Now that the type is fully encapsulated, we can pretty easily
migrate to using an `Arc` inside of a `MarkerEnvironment`.
It looks like the pyo3 macros can't deal with an `Arc`, so we
write out the getter methods by hand.
We now use the getters and setters everywhere.
There were some places where we wanted to build a `MarkerEnvironment`
out of whole cloth, usually in tests. To facilitate those use cases, we
add a `MarkerEnvironmentBuilder` that provides a convenient constructor.
It's basically like a `MarkerEnvironment::new`, but with named
parameters. That's useful here because there are so many fields (and
many of them have the same type).
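A hedged sketch of the builder idea -- a struct literal with named fields standing in for a long positional constructor (fields abbreviated):
```rust
// Abbreviated sketch; the real builder has a field per PEP 508 marker.
struct MarkerEnvironmentBuilder<'a> {
    implementation_name: &'a str,
    platform_machine: &'a str,
    python_full_version: &'a str,
    sys_platform: &'a str,
}

fn main() {
    // Named fields make it obvious which `&str` is which.
    let env = MarkerEnvironmentBuilder {
        implementation_name: "cpython",
        platform_machine: "x86_64",
        python_full_version: "3.12.0",
        sys_platform: "linux",
    };
    println!("{} on {}", env.python_full_version, env.sys_platform);
}
```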
This test was failing on master. I guess we don't test
this crate with the pyo3 feature enabled? I think this
regression was due to a recent change in the error reporting
of the pep440 crate.
## Summary
Ensures that we track the origins for requirements regardless of whether
they come from `pyproject.toml` or `setup.py` or `setup.cfg`.
Closes #3480.
This commit touches a lot of code, but the conceptual change here is
pretty simple: make it so we can run the resolver without providing a
`MarkerEnvironment`. This also indicates that the resolver should run in
universal mode. That is, the effect of a missing marker environment is
that all marker expressions that reference the marker environment are
evaluated to `true`. That is, they are ignored. (The only markers we
evaluate in that context are extras, which are the only markers that
aren't dependent on the environment.)
One interesting change here is that a `Resolver` no longer needs an
`Interpreter`. Previously, it had only been using it to construct a
`PythonRequirement`, by filling in the installed version from the
`Interpreter` state. But we now construct a `PythonRequirement`
explicitly since its `target` Python version should no longer be tied to
the `MarkerEnvironment`. (Currently, the marker environment is mutated
such that its `python_full_version` is derived from multiple sources,
including the CLI, which I found a touch confusing.)
The change in behavior can now be observed through the
`--unstable-uv-lock-file` flag. First, without it:
```
$ cat requirements.in
anyio>=4.3.0 ; sys_platform == "linux"
anyio<4 ; sys_platform == "darwin"
$ cargo run -qp uv -- pip compile -p3.10 requirements.in
anyio==4.3.0
exceptiongroup==1.2.1
# via anyio
idna==3.7
# via anyio
sniffio==1.3.1
# via anyio
typing-extensions==4.11.0
# via anyio
```
And now with it:
```
$ cargo run -qp uv -- pip compile -p3.10 requirements.in --unstable-uv-lock-file
  × No solution found when resolving dependencies:
╰─▶ Because you require anyio>=4.3.0 and anyio<4, we can conclude that the requirements are unsatisfiable.
```
This is expected at this point because the marker expressions are being
explicitly ignored, *and* there is no forking done yet to account for
the conflict.
We provide a new API on a `Requirement` that specifically
ignores the marker environment and only evaluates a requirement's
marker expression with respect to extras. Any marker expressions
that reference the marker environment automatically evaluate to
true.
Instead of duplicating the evaluation code, we just make a marker
environment optional on the lower level APIs. In theory, we could
just write a separate evaluation routine that ignores everything
except extras, but the evaluator has a fair bit of other stuff in it
(such as emitting warnings) that would be good to keep DRY IMO.
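A toy sketch of the optional-environment evaluation, with a stub marker AST (not uv's):
```rust
// Stub marker AST for illustration.
enum Marker {
    Extra(String),
    SysPlatform(String),
}

/// With no environment, any expression that would need one evaluates to
/// `true`, so only `extra` markers can exclude a requirement in
/// universal mode.
fn evaluate(marker: &Marker, sys_platform: Option<&str>, extras: &[String]) -> bool {
    match marker {
        Marker::Extra(name) => extras.iter().any(|extra| extra == name),
        Marker::SysPlatform(expected) => match sys_platform {
            Some(actual) => actual == expected,
            None => true, // universal mode: environment markers are ignored
        },
    }
}
```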
This doc test seems to fail due to the recent change making
`Requirement` generic on its URL type. While the type parameter
was given a default of `VerbatimUrl`, this default doesn't always
apply. For example, the `FromStr` impl on `Requirement` is still
generic on any URL type, and so callers must indicate the type
of the URL to return. (An alternative would be to define the
`FromStr` impl for just the default URL type.)
## Summary
If a requirement is omitted due to a marker expression, we shouldn't
include it as the "source" of a package in the output.
For example, if your constraints include `pathspec ; python_version <
'3.12'`, and you're on Python 3.12, we should _not_ include the
constraint file as a "source" in the output annotations.
## Summary
Unfortunately, the `-I` flag was added in Python 3.4. So if we query a
Python version prior to 3.4 (e.g., Python 2.7), we can't run our script
at all, and lose the ability to match against our structured error.
This PR adds an additional check against the stderr output for these
cases.
Closes https://github.com/astral-sh/uv/issues/3474.
## Test Plan
Installed Python 2.7, and verified that it was skipped (and that we
instead found my `python3`).
## Summary
Fixes https://github.com/astral-sh/uv/issues/1343. This is kind of a
first draft at the moment, but it does at least mostly work locally
(barring some bits of the test suite that seem to not work for me in
general).
## Test Plan
Mostly running the existing tests and checking the revised output is
sane
## Outstanding issues
Most of these come down to "AFAIK, the existing tools don't support
these patterns, but `uv` does" and so I'm not sure there's an existing
good answer here! Most of the answers so far are "whatever was easiest
to build"
- [x] ~~Is "-r pyproject.toml" correct? Should it show something else or
get skipped entirely~~ No it wasn't. Fixed in
3044fa8b86
- [ ] If the requirements file is stdin, that just gets skipped. Should
it be recorded?
- [ ] Overrides get shown as "--override<override.txt>". Correct?
- [x] ~~Some of the tests (e.g.
`dependency_excludes_non_contiguous_range_of_compatible_versions`) make
assumptions about the order of package versions being outputted, which
this PR breaks. I'm not sure if the text is fairly arbitrary and can be
replaced or whether the behaviour needs fixing?~~ - fixed by removing
the custom pubgrub PartialEq/Hash
- [ ] Are all the `TrackedFromStr` et al changes needed, or is there an
easier way? I don't think so, I think it's necessary to track these sort
of things fairly comprehensively to make this feature work, and this
sort of invasive change feels necessary, but happy to be proved wrong
there :)
- [x] ~~If you have a requirement coming in from two or more different
requirements files only one turns up. I've got a closed-source example
for this (can go into more detail if needed), mostly consisting of a
complicated set of common deps creating a larger set. It's a rarer case,
but worth considering.~~ 042432b200
- [ ] Doesn't add annotations for `setup.py` yet
- This is pretty hard, as the correct location to insert the path is
`crates/pypi-types/src/metadata.rs`'s `parse_pkg_info`, which as it's
based off a source distribution has entirely thrown away such matters as
"where did this package requirement get built from". Could add "`built
package name`" as a dep, but that's a little odd.
## Summary
This PR takes a different approach to `--with` for `uv run`. Now,
instead of merging the requirements and re-resolving, we have two
phases: (1) sync the workspace requirements to the workspace
environment; then (2) sync the ephemeral `--with` requirements to an
ephemeral environment. The two environments are then layered by setting
the `PATH` and `PYTHONPATH` variables appropriately.
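A rough sketch of the layering (paths and names are illustrative):
```rust
use std::process::Command;

/// Prepend the ephemeral environment so its packages and scripts shadow
/// the workspace environment, which in turn shadows the system.
fn layered_command(program: &str) -> Command {
    let mut command = Command::new(program);
    command.env(
        "PATH",
        "/tmp/ephemeral-env/bin:/project/.venv/bin:/usr/bin",
    );
    command.env(
        "PYTHONPATH",
        "/tmp/ephemeral-env/lib/python3.12/site-packages",
    );
    command
}
```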
I think this approach simplifies a few things:
1. Once we have a lockfile, the semantics are much clearer, and we can
actually reuse it for the workspace. If we had to add arbitrary
dependencies via `--with`, then it's not really clear how the lockfile
would/should behave.
2. Caching becomes simpler, because we can just cache the ephemeral
environment based on the requirements.
The current version of this PR loses a few behaviors though that I need
to restore:
- `--python` support -- but I'm not yet sure how this is supposed to
behave within projects? It's also left unclear in `uv sync` and `uv
lock`.
- The "reuse the workspace environment if it already satisfies the
ephemeral requirements" behavior.
Closes #3411.
## Summary
This is a universal environment variable used to determine the macOS
deployment target. We now respect it in `--python-platform` -- so we
default to 12.0, but users can override it as needed.
## Summary
We already _don't_ discover a `pyproject.toml` in `~/.config/uv` -- it
must be `uv.toml`. This PR makes the same change for `--config-file` --
it _has_ to be a `uv.toml`.
I think this is reasonable and more consistent, though I'm not sure. A
`pyproject.toml` "means" something -- it defines a project itself, in
which case we should be using project configuration. But creating a
`pyproject.toml` outside the project and passing it via `--config-file`
seems like an anti-pattern.
## Summary
This PR follows Cargo's strategy for merging configuration, albeit in a
more limited way (we don't support as many configuration locations).
Specifically, we merge the user configuration with the workspace
configuration if both are present. The workspace configuration has
priority, such that we take values from the workspace configuration and
ignore those in the user configuration if both are specified for a given
setting -- with the exception of arrays and maps, which are
concatenated.
For now, if a user provides a configuration file with `--config-file`,
we _don't_ merge in the user settings.
See:
https://doc.rust-lang.org/cargo/reference/config.html#hierarchical-structure.
Closes #3420.
## Summary
This is annoying both locally and in CI. If anyone wants to fuss with the
filters to fix it, that's fine too, but IMO it's better to disable than
leave it enabled on macOS for now.
When using `tool.uv.sources`, we now warn when requirements lack a bound,
i.e. at least a lower version constraint.
When using a library, the symbols you import were introduced in
different versions, creating an implicit lower bound. This warning makes
that explicit. This is crucial to prevent backtracking resolvers from
selecting an ancient version that is not compatible (or worse, doesn't
build), and it is a performance optimization on top.
This feature is gated to `tool.uv.sources` (as it should have been to
begin with for #3263/#3443) to not unnecessarily break legacy workflows.
It is also helpful specifically when using a `tool.uv.sources` section
that contains constraints that are not published to pypi, e.g. for
workspace dependencies. We can adjust those later to e.g. not constrain
workspace dependencies with `publish = false`, but I think it's the
right setting to start with.
## Summary
These aren't intended for production use; instead, I'm just trying to
frame out the overall data flows and code-sharing for these commands. We
now have `uv sync` (sync the environment to match the lockfile, without
refreshing or resolving) and `uv lock` (generate the lockfile). Both
_require_ a virtual environment to exist (something we should change).
`uv sync`, `uv run`, and `uv lock` all share code for the underlying
subroutines (resolution and installation), so the commands themselves
are relatively small (~100 lines) and mostly consist of reading
arguments and such.
`uv lock` and `uv sync` don't actually really work yet, because we have
no way to include the project itself in the lockfile (that's a TODO in
the lockfile implementation).
Closes https://github.com/astral-sh/uv/issues/3432.
We would previously show the parsed version when erroring due to
trailing content after a valid version, which can look different from
the input. E.g. when encountering `0.1-bulbasaur`, we would display:
```
after parsing '0.1b0', found 'ulbasaur', which is not part of a valid version
```
By storing the input string instead of the parsed version, we now show:
```
after parsing '0.1-b', found 'ulbasaur', which is not part of a valid version
```
It turns out setuptools often uses Metadata-Version 2.1 in their
PKG-INFO:
4e766834d7/setuptools/dist.py (L64)
`Metadata23` requires Metadata-Version of at least 2.2.
This means that uv doesn't actually recognise legacy editable
installations from the most common way you'd actually get legacy
editable installations (works great for most legacy editables I make at
work though!)
Anyway, over here we only need the version and don't care about anything
else. Rather than make a `Metadata21`, I just add a version field to
`Metadata10`. The one slightly tricky thing is that only
Metadata-Version 1.2 and greater guarantee that the [version number is
PEP 440 compatible](https://peps.python.org/pep-0345/#version), so I
store the version in `Metadata10` as a `String` and only parse to
`Version` at time of use.
Also, did you know that back in 2004, paramiko had a Pokémon-based
versioning system?
PubGrub got a new feature where all unavailability is a custom type,
instead of the reasonless `UnavailableDependencies` and our custom
`String` type previously (https://github.com/pubgrub-rs/pubgrub/pull/208).
This PR introduces an `UnavailableReason` that tracks whether an entire
package is unusable or just a specific version. The error messages now
also track this difference properly.
The pubgrub commit is our main rebased onto the merged
https://github.com/pubgrub-rs/pubgrub/pull/208; I'll push
`konsti/main-rebase-generic-reason` to `main` after checking for rebase
problems.
## Summary
It's not clear to me that this should exist at all, but it's causing
errors in projects that don't use `tool.uv.sources`, so we should
definitely remove it for now.
## Summary
We already have a global `--isolated`, which means "ignore any on-disk
configuration". I think we should reuse this for the "ignore the
workspace" setting in `uv run`, rather than `--no-workspace`.
I've also merged the existing `--isolated` and `--no-workspace`
behaviors in `uv run` into a single flag. We may not need separate flags
for this, since the current intent seems to be "ignore the workspace
environment"? Though we could always re-add later.
Closes https://github.com/astral-sh/uv/issues/3421.
Resolves [#3419](https://github.com/astral-sh/uv/issues/3419)
## Summary
Add compatibility arguments to the `pip install` command, and hint the
user to create a virtual environment when the `--user` flag is passed.
## Test Plan
Tested it locally.
```bash
cargo run pip install --user flask
Compiling uv v0.1.39 (/home/ahmedilyas/uv/crates/uv)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 8.96s
Running `target/debug/uv pip install --user flask`
error: pip install's `--user` is unsupported (use a virtual environment instead).
```
This change allows switching out the url type for requirements. The
original idea was to allow different types for different requirement
origins, so that core metadata reads can ban non-pep 508 requirements
while we only allow them for requirements.txt. This didn't work out
because we expect `&Requirement`s from all sources to match.
I also tried to split `pep508_rs` into a PEP 508 compliant crate and
into our extensions, but they are too tightly coupled.
I still think this change is an improvement, as it reduces the
hardcoded dependence on `VerbatimUrl`.
We now correctly emit relative paths in `uv pip compile` with
`tool.uv.sources` path inputs.
`tool.uv.sources` is mainly intended to be used with the uv lock file
over requirements.txt, but it's good to have basic `uv pip` support
working.
Fixes #3366
## Summary
All of the resolver code is run on the main thread, so a lot of the
`Send` bounds and uses of `DashMap` and `Arc` are unnecessary. We could
also switch to using single-threaded versions of `Mutex` and `Notify` in
some places, but there isn't really a crate that provides those I would
be comfortable with using.
The `Arc` in `OnceMap` can't easily be removed because of the uv-auth
code which uses the
[reqwest-middleware](https://docs.rs/reqwest-middleware/latest/reqwest_middleware/trait.Middleware.html)
crate, which seems to add unnecessary `Send` bounds because of
`async-trait`. We could duplicate the code and create a `OnceMapLocal`
variant, but I don't feel that's worth it.
## Summary
Users often find themselves dropped into environments that contain
`.egg-info` packages. While we won't support installing these, it's not
hard to support identifying them (e.g., in `pip freeze`) and
_uninstalling_ them.
Closes https://github.com/astral-sh/uv/issues/2841.
Closes #2928.
Closes #3341.
## Test Plan
Ran `cargo run pip freeze --python
/opt/homebrew/Caskroom/miniforge/base/envs/TEST/bin/python`, with an
environment that includes `pip` as an `.egg-info`
(`/opt/homebrew/Caskroom/miniforge/base/envs/TEST/lib/python3.12/site-packages/pip-24.0-py3.12.egg-info`):
```
cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1696001825047/work
pip==24.0
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1711811537435/work
setuptools==69.5.1
wheel==0.43.0
```
Then ran `cargo run pip uninstall`, verified that `pip` was uninstalled,
and no longer listed in `pip freeze`.
Fixes #3371
It seems like uv doesn't proactively enforce 3.8+ and in most cases just
issues a warning. This PR keeps that property, only adding the new check
when it is known to fail. I checked the imports in this file and the
other ones seem fine.
## Summary
Refreshes some of the activation scripts, and fixes some bugs in
`activate_this.py` that were likely the result of some erroneous
copy-pasting.
Closes https://github.com/astral-sh/uv/issues/3346.
## Test Plan
```
❯ python
Python 3.12.0 (main, Feb 28 2024, 09:44:16) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import httpx
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'httpx'
>>> activator = '.venv/bin/activate_this.py'
>>> with open(activator) as f:
... exec(f.read(), {'__file__': activator})
...
>>> import httpx
```
Only allow using `tool.uv.sources` in preview mode; the design isn't
finalized yet.
Not sure what to label this, do we want a preview section and label for
the release notes?
## Summary
We need to partition the editable and non-editable requirements. As-is,
`editable = true` requirements were still being installed as
non-editable.
## Summary
We were writing the build dependencies into the `--target` directory,
which both made builds fail and led to them leaking into the user's
directory.
Closes https://github.com/astral-sh/uv/issues/3349.
## Introduction
PEP 621 is limited. Specifically, it lacks
* Relative path support
* Editable support
* Workspace support
* Index pinning or any sort of index specification
The semantics of URLs are a custom extension; PEP 440 does not specify
how to use git references or subdirectories, instead pip has a custom
stringly-typed format. We need to somehow support these while staying
compatible with PEP 621.
## `tool.uv.sources`
Drawing inspiration from cargo, poetry and rye, we add `tool.uv.sources`
or (for now stub only) `tool.uv.workspace`:
```toml
[project]
name = "albatross"
version = "0.1.0"
dependencies = [
"tqdm >=4.66.2,<5",
"torch ==2.2.2",
"transformers[torch] >=4.39.3,<5",
"importlib_metadata >=7.1.0,<8; python_version < '3.10'",
"mollymawk ==0.1.0"
]
[tool.uv.sources]
tqdm = { git = "https://github.com/tqdm/tqdm", rev = "cc372d09dcd5a5eabdc6ed4cf365bdb0be004d44" }
importlib_metadata = { url = "https://github.com/python/importlib_metadata/archive/refs/tags/v7.1.0.zip" }
torch = { index = "torch-cu118" }
mollymawk = { workspace = true }
[tool.uv.workspace]
include = [
"packages/mollymawk"
]
[tool.uv.indexes]
torch-cu118 = "https://download.pytorch.org/whl/cu118"
```
See `docs/specifying_dependencies.md` for a detailed explanation of the
format. The basic gist is that `project.dependencies` is what ends up on
PyPI, while `tool.uv.sources` contains your non-published additions. We do
support the full range of PEP 508; we just hide it in the docs and
prefer the exploded table for easier readability and less confusion with
actual URL parts.
This format should eventually be able to subsume requirements.txt's
current use cases. While we will continue to support the legacy `uv pip`
interface, this is a piece of uv's own top-level interface. Together
with `uv run` and a lockfile format, you should only need to write
`pyproject.toml` and do `uv run`, which generates/uses/updates your
lockfile behind the scenes, no more pip-style requirements involved. It
also lays the groundwork for implementing index pinning.
## Changes
This PR implements:
* Reading and lowering `project.dependencies`,
`project.optional-dependencies` and `tool.uv.sources` into a new
requirements format, including:
* Git dependencies
* Url dependencies
* Path dependencies, including relative and editable
* `pip install` integration
* Error reporting for invalid `tool.uv.sources`
* Json schema integration (works in pycharm, see below)
* Draft user-level docs (see `docs/specifying_dependencies.md`)
It does not implement:
* No `pip compile` testing, deprioritizing towards our own lockfile
* Index pinning (stub definitions only)
* Development dependencies
* Workspace support (stub definitions only)
* Overrides in pyproject.toml
* Patching/replacing dependencies
One technically breaking change is that we now require user-provided
`pyproject.toml` files to be valid with respect to PEP 621. Included
files still fall back to PEP 517. That means `pip install -r
requirements.txt` requires
it to be valid while `pip install -r requirements.txt` with `-e .` as
content falls back to PEP 517 as before.
## Implementation
The `pep508` requirement is replaced by a new `UvRequirement` (name up
for bikeshedding, not particularly attached to the uv prefix). The still
existing `pep508_rs::Requirement` type is a url format copied from pip's
requirements.txt and doesn't appropriately capture all features we
want/need to support. The bulk of the diff is changing the requirement
type throughout the codebase.
We still use `VerbatimUrl` in many places, where we would expect a
parsed/decomposed url type, specifically:
* Reading core metadata except top level pyproject.toml files, we fail a
step later instead if the url isn't supported.
* Allowed `Urls`.
* `PackageId` with a custom `CanonicalUrl` comparison, instead of
canonicalizing urls eagerly.
* `PubGrubPackage`: We eventually convert the `VerbatimUrl` back to a
`Dist` (`Dist::from_url`), instead of remembering the url.
* Source dist types: We use verbatim url even though we know and require
that these are supported urls we can and have parsed.
I tried to improve the situation by replacing `VerbatimUrl`, but
these changes would require massive invasive changes (see e.g.
https://github.com/astral-sh/uv/pull/3253). A main problem is the ref
`VersionOrUrl` and applying overrides, which assume the same
requirement/url type everywhere. In its current form, this PR increases
this tech debt.
I've tried to split off PRs and commits, but the main refactoring is
still a single monolith commit to make it compile and the tests pass.
## Demo
Adding
d1ae3b85d5/pyproject.json
as a JSON schema (v7) to PyCharm for `pyproject.toml`, you can try the IDE
support already:

[dove.webm](https://github.com/astral-sh/uv/assets/6826232/c293c272-c80b-459d-8c95-8c46a8d198a1)
In *some* places in our crates, `serde` (and `rkyv`) are optional
dependencies. I believe this was done out of reasons of "good sense,"
that is, it follows a Rust ecosystem pattern where serde integration
tends to be an opt-in crate feature. (And similarly for `rkyv`.)
However, ultimately, `uv` itself requires `serde` and `rkyv` to
function. Since our crates are strictly internal, there are limited
consumers for our crates without `serde` (and `rkyv`) enabled. I think
one possibility is that optional `serde` (and `rkyv`) integration means
that someone can do this:
```
cargo test -p pep440_rs
```
And this will run tests _without_ `serde` or `rkyv` enabled. That in
turn could lead to faster iteration time by reducing compile times. But,
I'm not sure this is worth supporting. The iterative compilation times
of
individual crates are probably fast enough in debug mode, even with
`serde` and `rkyv` enabled. Namely, `serde` and `rkyv` themselves
shouldn't need to be re-compiled in most cases. On `main`:
```
from-scratch: `cargo test -p pep440_rs --lib` 0.685
incremental: `cargo test -p pep440_rs --lib` 0.278s
from-scratch: `cargo test -p pep440_rs --features serde,rkyv --lib` 3.948s
incremental: `cargo test -p pep440_rs --features serde,rkyv --lib` 0.321s
```
So while a from-scratch build does take significantly longer, an
incremental build is about the same.
The benefit of doing this change is two-fold:
1. It brings out crates into alignment with "reality." In particular,
some crates were _implicitly_ relying on `serde` being enabled
without explicitly declaring it. This technically means that our
`Cargo.toml`s were wrong in some cases, but it is hard to observe it
because of feature unification in a Cargo workspace.
2. We no longer need to deal with the cognitive burden of writing
`#[cfg_attr(feature = "serde", ...)]` everywhere.
This PR principally adds a routine for converting a `Lock` to a
`Resolution`, where a `Resolution` is a map of package names pinned to
a specific version.
I'm not sure that a `Resolution` is ultimately what we want here (we
might need more stuff), but this was the quickest route I could find to
plug a `Lock` into our existing `uv pip install` infrastructure.
This commit also does a little refactoring of the `Lock` types. The
main thing is to permit extra state on some of the types (like a
`by_id` map on `Lock` for quick lookups of distributions) that aren't
included in the serialization format of a `Lock`. We achieve this
by defining separate `Wire` types that are automatically converted
to-and-from via `serde`.
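A small sketch of the `Wire` pattern using serde's `from` attribute (field set abbreviated):
```rust
use std::collections::HashMap;

use serde::Deserialize;

/// The wire form: exactly what appears in the TOML.
#[derive(Deserialize)]
struct LockWire {
    version: u32,
    distribution: Vec<String>,
}

/// The in-memory form, with extra lookup state that is never serialized.
#[derive(Deserialize)]
#[serde(from = "LockWire")]
struct Lock {
    version: u32,
    distribution: Vec<String>,
    by_id: HashMap<String, usize>,
}

impl From<LockWire> for Lock {
    fn from(wire: LockWire) -> Lock {
        let by_id = wire
            .distribution
            .iter()
            .enumerate()
            .map(|(index, name)| (name.clone(), index))
            .collect();
        Lock {
            version: wire.version,
            distribution: wire.distribution,
            by_id,
        }
    }
}
```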
Note that like with the lock file format types themselves, we leave a
few `todo!()` expressions around. The main idea is to get something
minimally working without spending too much effort here. (A fair bit
of refactoring will be required to generate a lock file, and it's
not clear how much this code will wind up needing to change anyway.)
In particular, we only handle the case of installing wheels from a
registry.
A demonstration of the full flow:
```
$ cat requirements.in
anyio
$ cargo run -p uv -- pip compile -p3.10 requirements.in --unstable-uv-lock-file
$ uv venv
$ cargo run -p uv -- pip install --unstable-uv-lock-file anyio -r requirements.in
Installed 5 packages in 7ms
+ anyio==4.3.0
+ exceptiongroup==1.2.1
+ idna==3.7
+ sniffio==1.3.1
+ typing-extensions==4.11.0
```
In order to install from a lock file, we start from the root and do a
breadth first traversal over its dependencies. We aren't yet filtering
on marker expressions (since they aren't in the lock file yet), but we
should be able to add that in the future. In so doing, the traversal
should select only the subset of distributions relevant for the current
platform.
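A compact sketch of that traversal (illustrative types):
```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Breadth-first walk over the dependency edges recorded in the lock,
/// starting from the roots, collecting the reachable packages.
fn reachable(
    roots: &[String],
    deps: &HashMap<String, Vec<String>>,
) -> HashSet<String> {
    let mut seen: HashSet<String> = roots.iter().cloned().collect();
    let mut queue: VecDeque<String> = roots.iter().cloned().collect();
    while let Some(package) = queue.pop_front() {
        for dep in deps.get(&package).into_iter().flatten() {
            if seen.insert(dep.clone()) {
                queue.push_back(dep.clone());
            }
        }
    }
    seen
}
```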
Split out of #3266
The "selector" concept doesn't seem well enough defined as-is. For
example, `PythonVersion` belongs there but isn't present. Going for
smaller modules instead.
Split out of https://github.com/astral-sh/uv/pull/3266
If `UV_BOOTSTRAP_DIR` and `CARGO_MANIFEST_DIR` are both unset, we
currently panic. This isn't good once we start to use managed toolchains
in production. We'll need to change this more later once the toolchain
directory is more user-facing.
Moves all of `uv-toolchain` into `uv-interpreter`. We may split these
out in the future, but the refactoring I want to do for interpreter
discovery is easier if I don't have to deal with entanglement. Includes
some restructuring of `uv-interpreter`.
Part of #2386
Another split out from https://github.com/astral-sh/uv/pull/3263. This
abstracts the copy&pasted check whether an installed distribution
satisfies a requirement used by both plan.rs and site_packages.rs into a
shared module. It's less useful here than with the new requirement but
helps with reducing https://github.com/astral-sh/uv/pull/3263 diff size.
Previously, a noop `uv pip install` would only show "Audited {}
package(s)" but no details, not even with `-vv`. Now it debug logs which
requirements were met, and it also debug logs which requirement was
missing and triggered the full routine, allowing us to investigate
caching behaviour.
First `uv pip install -v jupyter`:
```
DEBUG At least one requirement is not satisfied: jupyter
```
Second `uv pip install -v jupyter`:
```
DEBUG Found a virtualenv named .venv at: /home/konsti/projects/uv-main/.venv
DEBUG Cached interpreter info for Python 3.12.1, skipping probing: .venv/bin/python
DEBUG Using Python 3.12.1 environment at .venv/bin/python
DEBUG Trying to lock if free: .venv/.lock
DEBUG Requirement satisfied: anyio
DEBUG Requirement satisfied: anyio>=3.1.0
DEBUG Requirement satisfied: argon2-cffi-bindings
DEBUG Requirement satisfied: argon2-cffi>=21.1
DEBUG Requirement satisfied: arrow>=0.15.0
DEBUG Requirement satisfied: asttokens>=2.1.0
DEBUG Requirement satisfied: async-lru>=1.0.0
DEBUG Requirement satisfied: attrs>=22.2.0
DEBUG Requirement satisfied: babel>=2.10
...
DEBUG Requirement satisfied: webencodings
DEBUG Requirement satisfied: webencodings>=0.4
DEBUG Requirement satisfied: websocket-client>=1.7
DEBUG Requirement satisfied: widgetsnbextension~=4.0.10
DEBUG All editables satisfied:
Audited 1 package in 12ms
```
This will clash with the `tool.uv.sources` PR; I'll rebase it on top.
## Summary
Hi! Added `UV_NO_BUILD_ISOLATION` as a boolean environment variable for
the `--no-build-isolation` command-line option.
Closes https://github.com/astral-sh/uv/issues/3309
## Test Plan
Added a new test, `respect_no_build_isolation_env_var`, to check that
the behaviour is the same as if the `--no-build-isolation` command-line
option is set.
This is meant to be a base on which to build. There are some parts
which are implicitly incomplete and others which are explicitly
incomplete. The latter are indicated by TODO comments.
Here is a non-exhaustive list of incomplete things. In many cases, these
are incomplete simply because the data isn't present in a
`ResolutionGraph`. Future work will need to refactor our resolver so
that this data is correctly passed down.
* Not all wheels are included. Only the "selected" wheel for the current
distribution is included.
* Marker expressions are always absent.
* We don't emit hashes for certain kinds of distributions (direct
URLs, git, and path).
* We don't capture git information from a dependency specification.
Right now, we just always emit "default branch."
There are perhaps also other changes we might want to make to the format
of a more cosmetic nature. Right now, all arrays are encoded using
whatever the `toml` crate decides to do. But we might want to exert more
control over this. For example, by using inline tables or squashing more
things into strings (like I did for `Source` and `Hash`). I think the
main trade-off here is that table arrays are somewhat difficult to read
(especially without indentation), where as squashing things down into a
more condensed format potentially makes future compatible additions
harder.
I also went pretty light on the documentation here than what I would
normally do. That's primarily because I think this code is going to
go through some evolution and I didn't want to spend too much time
documenting something that is likely to change.
Finally, here's an example of the lock file format in TOML for the
`anyio` dependency. I generated it with the following command:
```
cargo run -p uv -- pip compile -p3.10 ~/astral/tmp/reqs/anyio.in --unstable-uv-lock-file
```
And that writes out a `uv.lock` file:
```toml
version = 1
[[distribution]]
name = "anyio"
version = "4.3.0"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "2f20c40b45242c0b33774da0e2e34f/anyio-4.3.0-py3-none-any.whl"
hash = "sha256:048e05d0f6caeed70d731f3db756d35dcc1f35747c8c403364a8332c630441b8"
[[distribution.dependencies]]
name = "exceptiongroup"
version = "1.2.1"
source = "registry+https://pypi.org/simple"
[[distribution.dependencies]]
name = "idna"
version = "3.7"
source = "registry+https://pypi.org/simple"
[[distribution.dependencies]]
name = "sniffio"
version = "1.3.1"
source = "registry+https://pypi.org/simple"
[[distribution.dependencies]]
name = "typing-extensions"
version = "4.11.0"
source = "registry+https://pypi.org/simple"
[[distribution]]
name = "exceptiongroup"
version = "1.2.1"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "79fe92dd414cadab75055b0ae00b33/exceptiongroup-1.2.1-py3-none-any.whl"
hash = "sha256:5258b9ed329c5bbdd31a309f53cbfb0b155341807f6ff7606a1e801a891b29ad"
[[distribution]]
name = "idna"
version = "3.7"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "741d8c8280948df2ea0eda2c8b79e8/idna-3.7-py3-none-any.whl"
hash = "sha256:82fee1fc78add43492d3a1898bfa6d8a904cc97d8427f683ed8e798d07761aa0"
[[distribution]]
name = "sniffio"
version = "1.3.1"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "75a9c94214239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl"
hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2"
[[distribution]]
name = "typing-extensions"
version = "4.11.0"
source = "registry+https://pypi.org/simple"
[[distribution.wheel]]
url = "936e2092671b43269810cd589ceaf5/typing_extensions-4.11.0-py3-none-any.whl"
hash = "sha256:c1f94d72897edaf4ce775bb7558d5b79d8126906a14ea5ed1635921406c0387a"
```
Resolves https://github.com/astral-sh/uv/issues/3313
## Summary
Add a new environment variable, `UV_LINK_MODE`, as an alias for the CLI
argument `--link-mode`.
Updated the README environment variables section.
## Test Plan
Tested manually using `UV_LINK_MODE=hardlink cargo run -p uv pip install
flask`.
When running
```
set UV_CACHE_DIR=%LOCALAPPDATA%\uv\cache-foo && uv venv venv
```
in windows CMD, the error would be just
```
error: The system cannot find the path specified. (os error 3)
```
The problem is that the first action in the cache dir is adding the tag,
and the `cachedir` crate is using `std::fs` instead of `fs_err`. I've
copied the two functions we use from the crate and changed the import
from `std::fs` to `fs_err`.
The new error is
```
error: failed to open file `C:\Users\Konstantin\AppData\Local\uv\cache-foo \CACHEDIR.TAG`
Caused by: The system cannot find the path specified. (os error 3)
```
which correctly explains the problem.
Closes #3280
## Summary
It seems like Azure might return a 401 when you request a package that
doesn't exist (even with valid credentials)? But I admittedly haven't
tested this. (We already skip 403, and this seems similar?)
Closes https://github.com/astral-sh/uv/issues/3291.
## Summary
This index strategy resolves every package to the latest possible
version across indexes. If a version is in multiple indexes, the first
available index is selected.
Implements #3137
This closely matches pip.
## Test Plan
Good question. I'm hesitant to use my certifi example here, since that
would inevitably break when torch removes this package. Please comment!
## Summary
The approach taken here is to model `--target` as an install scheme in
which all the directories are just subdirectories of the `--target`.
From there, everything else... just works? Like, upgrade, uninstalls,
editables, etc. all "just work".
Closes #1517.
## Summary
Based on PEP 508, `platform_machine` should be the same as the
`platform.machine()` output:
https://peps.python.org/pep-0508/#environment-markers
From my macOS M2
```python
In [1]: import platform
In [2]: platform.machine()
Out[2]: 'arm64'
```
In napari we also use `arm64` in requirements and it works as expected:
9fcf63e69a/pyproject.toml (L120)
## Test Plan
I rebased https://github.com/astral-sh/uv/pull/2757 then realized that
we want to implement this for more than `uv venv`.
Closes https://github.com/astral-sh/uv/issues/2587
Closes https://github.com/astral-sh/uv/issues/2757
```
❯ cargo run -q -- pip install -p /Users/mz/bin/python3.7 anyio
warning: uv is only compatible with Python 3.8+, found Python 3.7.17.
Audited 1 package in 84ms
❯ cargo run -q -- venv -p /Users/mz/bin/python3.7
warning: uv is only compatible with Python 3.8+, found Python 3.7.17.
Using Python 3.7.17 interpreter at: /Users/mz/bin/python3.7
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
```
---------
Co-authored-by: Stevie Gayet <stegayet@users.noreply.github.com>
## Summary
Simplifies dependency errors of the form `you require package-a and you
require package-b` to `you require package-a and package-b`. Resolves
https://github.com/astral-sh/uv/issues/1009.
The only thing a `OnceMap` really needs to be able to do with the value
is to clone it. All extant uses benefited from having this done for them
by automatically wrapping values in an `Arc`. But this isn't necessarily
true for all things. For example, a value might have an `Arc` internally
to make cloning cheap in other contexts, and it doesn't make sense to
re-wrap it in an `Arc` just to use it with a `OnceMap`. Or
alternatively, cloning might just be cheap enough on its own that an
`Arc` isn't worth it.
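As a reduced sketch of the shape (ignoring the real `OnceMap`'s wait-for-completion machinery, with a plain `Mutex` standing in for its concurrent map):
```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::Mutex;

/// The map only ever clones values out, so `V: Clone` is the only bound it
/// needs; callers decide whether an `Arc` wrapper is worth it for their `V`.
struct OnceMap<K, V> {
    inner: Mutex<HashMap<K, V>>,
}

impl<K: Hash + Eq, V: Clone> OnceMap<K, V> {
    fn insert(&self, key: K, value: V) {
        self.inner.lock().unwrap().entry(key).or_insert(value);
    }

    fn get(&self, key: &K) -> Option<V> {
        self.inner.lock().unwrap().get(key).cloned()
    }
}

fn main() {
    let map = OnceMap { inner: Mutex::new(HashMap::new()) };
    map.insert("anyio", "4.3.0".to_string());
    assert_eq!(map.get(&"anyio"), Some("4.3.0".to_string()));
}
```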
Previously, this would use the "system" temporary directory.
Because we rename the resulting directory to its final destination,
and because renaming is implemented via hardlinking, and because
hardlinking doesn't work across mount points, and because /tmp is
commonly on a different mount point than where `uv` is checked out,
we "fix" this by putting the temporary directory somewhere close to
the final destination of the fetched artifact.
There are alternatives we might consider pursuing. For example,
if the `rename` fails, then we should probably do a recursive
directory copy. But this is a quick fix for now, and it is also
consistent with the colocation of other temporary directories in `uv`.
The main downside of this change is that if a user does ^C while
`uv-dev fetch-python` is running, then there is no mechanism for
cleaning up temporary directories.
## Summary
I initially implemented this by allowing arbitrary
`x86_64-manylinux_x_y`, but it makes the Clap, Serde, and Schemars
definitions more complicated, _and_ makes it harder for the user.
Ultimately, manylinux itself only provides images for 2_17 and 2_28, so
this seems like it should be sufficient.
Closes https://github.com/astral-sh/uv/issues/3222.
## Summary
We now recursively expand any self-dependencies via extras, which lets
us detect conflicts sooner and avoid building unnecessary versions of
packages that are excluded via the extra.
Closes https://github.com/astral-sh/uv/issues/3135.
Previously, uv-auth would fail to compile due to a missing process
feature. I chose to make all tokio features we use top-level features,
so we can share the tokio cache between all test invocations.
## Summary
See the diff in the tests. If you have a constraint with an extra, we
should respect it, but we shouldn't _add_ the extra to the requirements.
## Summary
macOS 11 has been EOL for 7 months, and it seems like it's common to
publish ARM-only wheels at 12.0+. I think this is a better default for
ARM macs.
Closes https://github.com/astral-sh/uv/issues/3227.
## Test Plan
`cargo run pip install scikit-learn==1.3.2 --python-platform macos
--no-build`
## Summary
The CLI gives `<COLOR>` as the value placeholder, which is misleading. I
changed this to `<COLOR_CHOICE>` to make it obvious that you don't supply
a color but a `ColorChoice` value.
## Test Plan
Compiled and verified that the `--help` text was correct.
## Summary
This is mainly a cleanup PR to leverage uv-version in uv-virtualenv
instead of passing it via `uv`.
In #1852 I introduced the ability to pass extra cfg to `gourgeist` for
the primary purpose of passing the uv version, but since the dawn of the
uv-version crate, dynamically passing more values to `pyvenv.cfg` is no
longer needed.
## Test Plan
Existing `uv` tests should still verify `uv = <version>` exists in the
venv and make sure no regressions were introduced.
## Summary
The function `find_in_directory` already joins `uv.toml`, so the initial
join in `user` is redundant.
Also a small comment fix.
## Test Plan
Not really sure how to test this. Please suggest if any tests need to be
added.
## Summary
Avoid removing quotes from markers, e.g. `numpy (>=1.19) ;
python_version >= "3.7"` should not be rewritten. Fixes
https://github.com/astral-sh/uv/issues/2551.
This PR also makes fixups a bit more flexible internally for fixes that
aren't simple to implement with a pure regex replacement, like this one.
https://github.com/astral-sh/uv/pull/1529 fixed a similar problem but
the current regex is still not smart enough to avoid all markers
completely (like `python_version`).
## Test Plan
Added a few unit tests.
Fixes the failure to lookup credentials in
https://github.com/astral-sh/uv/issues/3205
The issue is that we seed the cache with the index URL which includes a
username but no password. We did not ensure that a password was present
in the cached credentials before attempting a request with them. Now,
the cache will not return credentials when a username is provided and
the cached credentials have no password — the cached credentials are
useless in that case.
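A minimal sketch of the new check (the `Credentials` type here is hypothetical, not uv's):
```rust
struct Credentials {
    username: Option<String>,
    password: Option<String>,
}

/// If the request URL already names a username (e.g.
/// `https://oauth2accesstoken@.../simple/`), a cached entry without a
/// password adds nothing: skip it, so we fall through to netrc/keyring.
fn usable_from_cache(cached: &Credentials, request_username: Option<&str>) -> bool {
    match request_username {
        Some(_) => cached.password.is_some(),
        None => true,
    }
}

fn main() {
    let cached = Credentials {
        username: Some("oauth2accesstoken".into()),
        password: None,
    };
    assert!(!usable_from_cache(&cached, Some("oauth2accesstoken")));
}
```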
Tested with a Google Artifact Registry and keyring
```
RUST_LOG=uv=trace cargo run -q -- pip install requests --index-url https://oauth2accesstoken@us-central1-python.pkg.dev/<project>/pypi/simple/ --no-cache --keyring-provider subprocess -v
```
## Summary
I found some of these too bare (e.g., when they _just_ show a package
name with no other information). For me, this makes it easier to
differentiate error message copy from data. But open to other opinions.
Take a look at the fixture changes and LMK!
## Summary
pip supports providing a `--platform` to `pip install`, which can be
used to seed an environment (e.g., for use in a container or otherwise).
This PR adds `--python-platform` to our commands to support a similar
workflow. It has some caveats, which are documented on the CLI.
Closes #2079.
Since we're now using read timeouts rather than total timeouts, we can use
a lower threshold: a single read shouldn't take 5 minutes (it shouldn't
even take 10s). The 10s value is somewhat arbitrary.
Like #3144, this is a breaking change in some sense.
Adds hidden `--preview` / `--no-preview` flags with `UV_PREVIEW`
environment variable support. Copies the `PreviewMode` type from Ruff.
Does a little bit of extra work to port `uv run` to the new settings
model.
Note we allow `uv run` invocations without preview and only use its
presence to toggle an experimental warning.
## Test Plan
```
❯ cargo run -q -- run --no-workspace -- python --version
warning: `uv run` is experimental and may change without warning.
Python 3.12.2
❯ cargo run -q -- run --no-workspace --preview -- python --version
Python 3.12.2
❯ UV_PREVIEW=1 cargo run -q -- run --no-workspace -- python --version
Python 3.12.2
```
In #2976 I made some changes that led to regressions:
- We stopped tracking URLs that we had not seen credentials for in the
cache
- This means the cache no longer returns a value to indicate we've seen
a realm before
- We stopped seeding the cache with URLs
- Combined with the above, this means we no longer had a list of
locations that we would never attempt to fetch credentials for
- We added caching of credentials found on requests
- Previously the cache was only populated from the seed or credentials
found in the netrc or keyring
- This meant that the cache was populated for locations that we
previously did not cache, i.e. GitHub artifacts(?)
Unfortunately this unveiled problems with the granularity of our cache.
We cache credentials per realm (roughly the hostname) but some realms
have mixed authentication modes i.e. different credentials per URL or
URLs that do not require credentials. Applying credentials to a URL that
does not require it can lead to a failed request, as seen in #3123 where
GitHub throws a 401 when receiving credentials.
To resolve this, the cache is expanded to support caching at two
levels:
- URL, cached URL must be a prefix of the request URL
- Realm, exact match required
When we don't have URL-level credentials cached, we attempt the request
without authentication first. On failure, we'll search for realm-level
credentials or fetch credentials from external services. This avoids
providing credentials to new URLs unless we know we need them.
Closes https://github.com/astral-sh/uv/issues/3123
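In sketch form, the lookup order might look like this (types and helper names are assumed, not uv's actual API):
```rust
use url::Url;

struct Credentials;

enum Attempt {
    Authenticated(Credentials),
    Unauthenticated,
}

struct Cache;

impl Cache {
    /// Hypothetical helper: the cached URL must be a prefix of `url`.
    fn get_url_prefix(&self, _url: &Url) -> Option<Credentials> {
        None
    }
}

fn first_attempt(cache: &Cache, url: &Url) -> Attempt {
    // URL-level hit: we know these exact credentials apply to this URL.
    if let Some(credentials) = cache.get_url_prefix(url) {
        return Attempt::Authenticated(credentials);
    }
    // No URL-level entry: go out unauthenticated first. Only on failure do
    // we consult realm-level entries (exact match) or external services.
    Attempt::Unauthenticated
}
```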
## Summary
This PR avoids: (1) using the lookahead resolver when `--no-deps` is
specified (we'll never use those requirements), and (2) including any
transitive requirements when searching for allowed URLs, etc., when
`--no-deps` is specified.
Closes https://github.com/astral-sh/uv/issues/3183.
Previously, we had `pypi_types::DirectUrl` (the PyPA-spec
`direct_url.json` format) and `distribution_types::DirectUrl` (an enum of
all the URL types we support). This led to confusion, so I'm renaming the
latter to the more appropriate `ParsedUrl`.
Add a dedicated error type for direct url parsing. This change is broken
out from the new uv requirement type, which uses direct url parsing
internally.
## Summary
This PR adds the `UV_CONSTRAINT` environment variable, analogous to
`PIP_CONSTRAINT`, to allow providing a constraints file via an environment
variable. Implementing this will simplify adoption of uv in the testing
procedures of projects I'm involved in (testing using tox).
This was my motivation for opening #1841 that is closed in favor of
#1789 which was closed without implementing this feature.
In this implementation, I have used a space as the separator, analogous to
`pip`. This introduces an obvious problem if the path contains a space.
Another option could be to use the standard separator (`:` on UNIX-like
systems, `;` on Windows). Which one would you prefer?
## Test Plan
This is my first contribution and my first Rust coding experience. It
would be nice if someone could point out how I should implement tests for
this.
I can't get this to reproduce on GitHub Actions -- maybe the builds
there differ, or maybe the builds changed since we added this fix? I'll
check locally, but regardless, this is a typo.
Closes #3158.
## Summary
No behavior changes, but the idea here is that we move the argument
normalization code (e.g., create an `Upgrade` struct from `--upgrade`
and `--upgrade-package`) into the `settings.rs` file, where we build the
common settings structs.
This reduces a lot of the logic and duplication across commands in
`main.rs`.
## Summary
pip prefers somewhat-constrained over unconstrained packages... but only
if they're at equal depths in the tree. We don't have a way to track the
latter property yet (I've added a TODO), so for now, we should remove
this constraint -- it seems to be counter-productive.
I've filed https://github.com/astral-sh/uv/issues/3149 as a follow-up.
Closes https://github.com/astral-sh/uv/issues/3143
## Test Plan
- `git clone https://github.com/drivendataorg/zamba.git`
- `echo "-e .[tests]" > req.in`
- `cargo run venv && cargo run pip compile req.in --refresh -n
--python-platform linux --python-version 3.8`
I added distributions to these projects, so the commit changed.
We could pin, but we want to test resolution... so we don't. These
are pretty static, so this should be rare.
## Summary
This leverages the new `read_timeout` property, which ensures that (like
pip) our timeout is not applied to the _entire_ request, but rather, to
each individual read operation.
Closes: #1921.
See: #1912.
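As an illustration of the distinction with a reqwest-style builder (assuming its `read_timeout` option; not necessarily how uv wires this up internally):
```rust
use std::time::Duration;

fn build_client() -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        // The clock resets on every successful read, so a slow-but-steady
        // download of a large wheel survives, while a stalled socket fails
        // after 10s instead of holding the whole request open for minutes.
        .read_timeout(Duration::from_secs(10))
        .build()
}
```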
This means that a bare `uv run` invocation drops you into a REPL.
This behavior is internally controversial, and may best be served by a
dedicated `uv repl` command. I would imagine it's important to fail if
no command is given in _some_ circumstances, but those may be resolved
by _not_ doing this if we do not detect a TTY.
Regardless, I'm interested in giving this a try for a bit during this
experimental phase.
In addition to the requested requirements, we include requirements from
a `pyproject.toml` file if it exists and install the current directory.
Closes https://github.com/astral-sh/uv/issues/3104
Holy cow does installation / resolution take a ton of options. We
side-step most of them here.
If the current environment satisfies the requirements, it is used.
Otherwise, we create a new environment with the requested dependencies.
## Summary
Following up
- https://github.com/astral-sh/uv/pull/3113
- https://github.com/astral-sh/uv/pull/3115
It looks like the `uv pip compile` command's handling of
`UV_SYSTEM_PYTHON` was missed because these two PRs landed close in time,
resulting in:
```bash
$ uv --version
uv 0.1.34 (9259eceeb 2024-04-19)
$ UV_SYSTEM_PYTHON=1 uv pip compile --upgrade requirements.in -o requirements.txt
error: invalid value '1' for '--system'
[possible values: true, false]
For more information, try '--help'.
```
Signed-off-by: Jack Cherng <jfcherng@gmail.com>
## Summary
I've wanted to try this for a long time, so decided to give it a shot.
The basic idea is that you can provide a target triple (e.g.,
`--platform x86_64-pc-windows-msvc`) and resolve against that platform,
rather than the currently-running platform. It's functionally similar to
`--python-version`, though a bit simpler since there's no need to engage
with `Requires-Python`.
Our infrastructure is well set up for this and so, in the end, it's
actually pretty straightforward: for each triple, we just need to
override the markers and platform tags.
## Summary
We weren't setting a priority for editables, so they were being visited
last.
I think there's still a problem whereby we're not aggressive enough in
visiting recursive extras (and, in fact, that's making it really hard to
write a test -- I wrote a test, but the most-reduced case still fails,
and I'd need to add a layer of indirection to make it
fail-on-main-but-pass-on-this-branch), but that problem likely already
existed on main prior to #3087, so I just want to get this quick fix out
now.
Closes https://github.com/astral-sh/uv/issues/3127.
## Test Plan
- `git clone https://github.com/cda-tum/mqt-core.git`
- `cargo run venv`
- `cargo run pip install 'scikit-build-core[pyproject]>=0.8.1'
'setuptools_scm>=7' 'pybind11>=2.12' --resolution=lowest-direct`
- `cargo run pip install --no-build-isolation
'-ve.[test,qiskit,evaluation,coverage]' --resolution=lowest-direct`
Given requirements like:
```
black==23.1.0
black[colorama]
```
The resolver will (on `main`) add a dependency on Black, and then try to
use the most recent version of Black to satisfy `black[colorama]`. For
sake of example, assume `black==24.0.0` is the most recent version. Once
the resolver selects this most recent version, it'll fetch the metadata,
then return the dependencies for `black==24.0.0` with the `colorama` extra
enabled. Finally, it will tack on `black==24.0.0` (a dependency on the
base package). The resolver will then detect a conflict between
`black==23.1.0` and `black==24.0.0`, throw out `black[colorama]==24.0.0`,
and try the next-most-recent version.
This is both wasteful and can cause problems, since we're fetching
metadata for versions that will _never_ satisfy the resolver. In the
`apache-airflow[all]` case, I also ran into an issue whereby we were
attempting to build very old versions of `apache-airflow` due to
`apache-airflow[pandas]`, which in turn led to resolution failures.
The solution proposed here is that we create a new proxy package with
exactly two dependencies: one on `black` and one on `black[colorama]`.
Both of these packages must be at the same version as the proxy package,
so the resolver knows much _earlier_ that (in the above example) the
extra variant _must_ match `23.1.0`.
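A tiny sketch of the proxy's dependency set (the function and formatting are illustrative only):
```rust
/// The proxy package depends on the base package and the extra variant at
/// exactly its own version, so the resolver learns early that they must agree.
fn proxy_dependencies(name: &str, extra: &str, version: &str) -> Vec<String> {
    vec![
        format!("{name}=={version}"),
        format!("{name}[{extra}]=={version}"),
    ]
}

fn main() {
    assert_eq!(
        proxy_dependencies("black", "colorama", "23.1.0"),
        vec!["black==23.1.0".to_string(), "black[colorama]==23.1.0".to_string()],
    );
}
```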
## Summary
This PR adds a test that currently leads to an error, but should
successfully resolve as of https://github.com/astral-sh/uv/pull/3100.
The core idea is that if we have a pinned package, we shouldn't try to
build other versions of that package if we have an unconstrained variant
with an extra.
## Summary
This was unintended. We ended up reverting `Option<bool>` everywhere,
but I missed this one since it's in a separate file.
(If you use `Option<bool>`, Clap requires a value, like `--no-cache
true`.)
## Test Plan
`cargo run pip install flask --no-cache`
## Summary
I think these are useful to have for consistency, though the `--system`
variant requires some new threading.
Closes: https://github.com/astral-sh/uv/issues/2242.
## Summary
Right now, we only accept _exactly_ `UV_NATIVE_TLS=true` and
`UV_NATIVE_TLS=false`. `BoolishValueParser` accepts a wider range of
values:
```rust
/// True values are `y`, `yes`, `t`, `true`, `on`, and `1`.
pub(crate) const TRUE_LITERALS: [&str; 6] = ["y", "yes", "t", "true", "on", "1"];
/// False values are `n`, `no`, `f`, `false`, `off`, and `0`.
pub(crate) const FALSE_LITERALS: [&str; 6] = ["n", "no", "f", "false", "off", "0"];
```
I tend to use `0` and `1` personally so this surprised me.
## Summary
resolves https://github.com/astral-sh/uv/issues/3106
## Test Plan
added a simple test where the password provided in `UV_INDEX_URL` is
hidden in the output as expected.
## Summary
With an alias for backwards compatibility. It's clearer and matches the
setting in the TOML configuration (where `compile` was deemed too
vague).
## Summary
Now that we can pick up configuration values from persistent files, we
need to enable users to _disable_ those values from the CLI. For
example, if a user has `emit_index_url = true` in the configuration
file, they should be able to do `--no-emit-index-url` on the
command-line. This PR adds support for such negations, following the
same patterns we use in Ruff.
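As a sketch of that pattern in Clap (field names hypothetical; the real definitions live in uv's CLI):
```rust
use clap::Parser;

#[derive(Parser)]
struct Args {
    /// Emit the index URL(s) in the generated output.
    #[arg(long, overrides_with = "no_emit_index_url")]
    emit_index_url: bool,

    /// Negation, e.g. to override `emit_index_url = true` from a config file.
    #[arg(long, overrides_with = "emit_index_url")]
    no_emit_index_url: bool,
}

/// Collapse the pair into an Option: `None` means "unspecified on the CLI",
/// which lets the value from the persistent configuration apply.
fn flag(yes: bool, no: bool) -> Option<bool> {
    match (yes, no) {
        (true, false) => Some(true),
        (false, true) => Some(false),
        _ => None,
    }
}

fn main() {
    let args = Args::parse();
    println!("{:?}", flag(args.emit_index_url, args.no_emit_index_url));
}
```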
## Summary
Enables `uv` to read configuration from (e.g.)
`/Users/crmarsh/.config/uv/uv.toml` on macOS and Linux, and
`C:\Users\Charlie\AppData\Roaming\uv\uv.toml` on Windows.
This differs slightly from Ruff, which uses the `Application Support`
directory on macOS. But I've deviated here based on the preferences
expressed in https://github.com/astral-sh/ruff/issues/10739.
## Summary
This PR adds the structs and logic necessary to respect settings from
the workspace. It's a ton of code, but it's mostly mechanical. And,
believe it or not, I pulled out a few refactors in advance to trim down
the code and complexity.
The highlights are:
- All CLI arguments are now `Option`, so that we can detect whether they
were provided (i.e., we can't let Clap fill in the defaults).
- We now have a `*Settings` struct for each command, which merges the
CLI and workspace options (e.g., `PipCompileSettings`).
I've only implemented `PipCompileSettings` for now. If approved, I'll
implement the others prior to merging, but it's very mechanical and I
both didn't want to do the conversion prior to receiving feedback _and_
realized it would make the PR harder to review.
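As a sketch of the merge (field names hypothetical):
```rust
struct CliArgs {
    // `Option` so "not provided" is distinguishable from a default.
    emit_index_url: Option<bool>,
}

struct WorkspaceOptions {
    emit_index_url: Option<bool>,
}

struct PipCompileSettings {
    emit_index_url: bool,
}

impl PipCompileSettings {
    fn resolve(cli: CliArgs, workspace: WorkspaceOptions) -> Self {
        Self {
            // CLI beats the workspace; fall back to the built-in default.
            emit_index_url: cli
                .emit_index_url
                .or(workspace.emit_index_url)
                .unwrap_or(false),
        }
    }
}
```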
If a virtual environment does not exist, we will create one for the
duration of the invocation.
Adds an `--isolated` flag to force this behavior (ignoring an existing
virtual environment).
Adds `uv run` which executes a command in your current virtual
environment.
This is a simple first milestone; lots of work and behavior remain. The
command is hidden.
Fixes https://github.com/astral-sh/uv/issues/3060
## Summary
Allows passing a virtual environment (the path to the directory, rather
than the path to the Python interpreter within the directory) to the
`--python` option of the `uv pip` command.
## Test Plan
Tested manually to confirm that the expected new functionality works.
The test suite still passes after this change.
I don't know how to add tests for a new feature like this. I would be
happy to do so if someone can give me some pointers on how to do it.
## Summary
Source distributions in the `.tar.bz2` format are still relatively common
in existing code bases; the most common examples are the Twisted source
distributions up to version 20.3.0. As the ability to upgrade Twisted to a
more recent version is quite often unavailable for a given project, we add
support for `.tar.bz2` here to still allow `uv` to be a drop-in
replacement for `pip` in these projects.
## Test Plan
The feature was tested both by adding the corresponding test coverage,
and by directly installing a package of interest under a Python version
that doesn't have the corresponding wheel:
```sh
cargo run venv -p python3.8
cargo run pip install Twisted==20.3.0 --no-cache
```
The `--no-cache` argument in the example above clears any cached record of
the requirements being unsatisfiable, which may have been stored during a
previous attempt to install this package with a `uv` version that didn't
yet implement this feature.
## Summary
This PR adds basic struct definitions along with a "workspace" concept
for discovering settings. (The "workspace" terminology is used to match
Ruff; I did not invent it.)
A few notes:
- We discover any `pyproject.toml` or `uv.toml` file in any parent
directory of the current working directory. (We could adjust this to
look at the directories of the input files.)
- We don't actually do anything with the configuration yet; but those
PRs are large and I want this to be reviewed in isolation.
Hello! This is my first PR so do not hesitate to let me know if anything
should be done differently 🙌🏽
## Summary
This PR starts adding useful error messages and warnings when people
pass redundant or unsupported arguments to `pip list`.
For now, I've just covered `pip list --outdated`, which is currently
unsupported.
Closes https://github.com/astral-sh/uv/issues/2948
## Summary
Since the [`junction` crate](https://crates.io/crates/junction)
implements Windows-only functionality, and since the only place it is
used is guarded by `#[cfg(windows)]`,
1f626bfc73/crates/uv-fs/src/lib.rs (L65-L86)
it makes sense not to depend on this crate at all on non-Windows
platforms.
If nothing else, this makes Linux distribution packagers’ lives just a
*tiny* bit easier.
## Test Plan
On Fedora Linux 39:
```
# To avoid an error when /tmp and the working directory are on different filesystems:
$ mkdir _tmp
$ TMPDIR="${PWD}/_tmp" cargo run -p uv-dev -- fetch-python
$ cargo test
```
I don’t have access to a Windows system.
## Summary
This makes it easier to add (e.g.) JSON Schema derivations to the type.
If we have support for other dates in the future, we can generalize it
to a `UserDate` or similar.
# Avoid cache invalidation on credentials renewal
Addresses
- https://github.com/astral-sh/uv/issues/3009#issue-2239221126
## Summary
Some private package registries (e.g. AWS CodeArtifact) use short-lived
credentials. Since they are short-lived, the exact URL that is assigned
to `UV_INDEX_URL` changes frequently and with that the cache key /
hashes of these URLs. This causes the cache to be missed on token
renewal.
This PR attempts to fix this by hashing URLs for cache keys without
their user credentials.
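A minimal sketch of the idea with the `url` crate (the real cache-key plumbing differs):
```rust
use url::Url;

/// Strip user credentials before hashing, so renewing a short-lived token
/// doesn't change the cache key.
fn credential_free(url: &Url) -> Url {
    let mut url = url.clone();
    let _ = url.set_username("");
    let _ = url.set_password(None);
    url
}

fn main() {
    let a = Url::parse("https://token-1@registry.example/simple/").unwrap();
    let b = Url::parse("https://token-2@registry.example/simple/").unwrap();
    assert_eq!(credential_free(&a), credential_free(&b));
}
```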
## Test Plan
I added a test that validates that:
1. Changing the user credentials returns the same hash
2. Setting no user credentials yields the same hash as setting some user
credentials
## Question
I'm not sure if we should also change the `hash` implementation of
`CanonicalUrl` / `RepositoryUrl`. They also call `.hash` internally.
PS: this is the first time I'm writing Rust, so if I'm wasting your
precious time, let me know and I'll try to up my skills before I ask
again. Anyway, I figured it's good to get this issue on your radar :)
## Summary
This PR adds system install tests to verify the behavior described in
#2798. It turns out this behavior _also_ affects Fedora and Amazon
Linux, we just didn't have the right conditions enabled (specifically,
you need to create the virtualenv with `python -m venv` to get these
symlinks), so the test suite was expanded to capture that.
The issue itself is also fixed by way of deduplicating the
`site-packages` entries.
Closes: https://github.com/astral-sh/uv/issues/2798
## Summary
I'm surprised we haven't hit this before, but apparently we don't allow
comments after `--index-url`, `-e` entries, etc., in the
requirements.txt parser.
Closes #3011.
## Test Plan
`cargo test`
## Summary
It turns out that if you have an environment variable set, Clap will
consider that equivalent to passing the flag, even if it's set to (e.g.)
something falsy or the default value.
So, e.g., this fails:
```shell
UV_SYSTEM_PYTHON=false uv pip install --python ./.venv/bin/python flask
```
Worse, this fails, because it thinks `--no-index` and `--index-url` are
conflicting:
```shell
export UV_INDEX_URL=https://google.com
uv pip install flask --no-index
```
This PR removes some of the conflicts, namely those related to
environment variables, such that:
- You _can_ pass mixes of `--no-index`, `--index-url`, etc. If
`--no-index` is provided, all the index URLs will be ignored (but we
won't error).
- Passing `--pre` will always enable prereleases, even if `--prerelease`
is also provided. (We could warn here, although honestly it's not
trivial because we'd need to make `--prerelease` take an optional, then
we'd lose the default argument from the `--help`.)
- You _can_ pass `--system` and `--python`. If `--python` is provided,
we use that, and ignore `--system`. (We could warn here.)
I guess the underlying problem here is that we can't differentiate
between arguments passed on the CLI and those set as environment
variables. But making bigger changes here seems out-of-scope.
Closes https://github.com/astral-sh/uv/issues/3000.
## Summary
It turns out that `normalize_path` (sourced from Cargo) has a subtle
bug. If you pass it a relative path that traverses beyond the root, it
silently drops components. So, e.g., passing `../foo/bar`, it will just
drop the leading `..` and return `foo/bar`.
This PR encodes that behavior as a `Result` and avoids using it in such
cases.
Closes https://github.com/astral-sh/uv/issues/3012.
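A sketch of the fixed helper (simplified; the real version also has to care about path prefixes on Windows):
```rust
use std::io;
use std::path::{Component, Path, PathBuf};

/// Like the Cargo-derived `normalize_path`, but surfaces traversal beyond
/// the root as an error instead of silently dropping `..` components.
fn normalize_path(path: &Path) -> io::Result<PathBuf> {
    let mut normalized = PathBuf::new();
    for component in path.components() {
        match component {
            Component::CurDir => {}
            Component::ParentDir => {
                if !normalized.pop() {
                    return Err(io::Error::new(
                        io::ErrorKind::InvalidInput,
                        format!("path traverses beyond its root: {}", path.display()),
                    ));
                }
            }
            component => normalized.push(component),
        }
    }
    Ok(normalized)
}

fn main() {
    assert_eq!(normalize_path(Path::new("foo/../bar")).unwrap(), PathBuf::from("bar"));
    assert!(normalize_path(Path::new("../foo/bar")).is_err());
}
```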
Inspired by https://github.com/astral-sh/uv/issues/2964, we now properly
log hardlink failures, e.g., when the cache is inside a Docker container
but the venv is in a bind mount:
```
DEBUG Failed to hardlink `/code/venv/uv/lib/python3.12/site-packages/asgiref-3.8.1.dist-info/WHEEL` to `/root/.cache/uv/archive-v0/nnpkKgUoM3LMxcNDmEKJQ/asgiref-3.8.1.dist-info/WHEEL`, attempting to copy files as a fallback
```
## Summary
I don't know if this is a good change, but `main.rs` is really large.
This just moves all the Clap arguments into their own `cli.rs` module.
Free-threaded Python reintroduces ABI flags, since it is incompatible with
regular native modules and abi3.
Tests: none yet! We're lacking CPython 3.13 no-GIL builds we can use in
CI.
My test setup:
```
PYTHON_CONFIGURE_OPTS="--enable-shared --disable-gil" pyenv install 3.13.0a5
cargo run -q -- venv -q -p python3.13 .venv3.13 --no-cache-dir && cargo run -q -- pip install -v psutil --no-cache-dir && .venv3.13/bin/python -c "import psutil"
```
Fixes #2429
## Summary
In all of these ID types, we pass values to `cache_key::digest` prior to
passing to `DistributionId` or `ResourceId`. The `DistributionId` and
`ResourceId` are then hashed later, since they're used as keys in hash
maps.
It seems wasteful to hash the value, then hash the hashed value? So this
PR modifies those structs to be enums that can take one of a fixed set
of types.
## Summary
If there are no hashes for a given package, we now return
`Validate(&[])` so that the policy is impossible to satisfy. Previously,
we returned `None`, which is always satisfied.
We don't really ever expect to hit this, because we detect this case in
the resolver and raise a different error. But if we have a bug
somewhere, it's better to fail with an error than silently let the
package through.
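A simplified sketch of the fail-closed behavior (string digests stand in for the real `HashDigest` type):
```rust
#[derive(Clone, Copy)]
enum HashPolicy<'a> {
    /// No policy: everything passes.
    None,
    /// Allow-list validation: `Validate(&[])` can never be satisfied.
    Validate(&'a [&'a str]),
}

fn satisfied_by(policy: HashPolicy, digest: &str) -> bool {
    match policy {
        HashPolicy::None => true,
        HashPolicy::Validate(allowed) => allowed.contains(&digest),
    }
}

fn main() {
    assert!(satisfied_by(HashPolicy::None, "sha256:abc123"));
    // The empty allow-list fails closed, unlike the old `None` fallback.
    assert!(!satisfied_by(HashPolicy::Validate(&[]), "sha256:abc123"));
}
```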
## Summary
This PR enables `--require-hashes` with unnamed requirements. The key
change is that `PackageId` becomes `VersionId` (since it refers to a
package at a specific version), and the new `PackageId` consists of
_either_ a package name _or_ a URL. The hashes are keyed by `PackageId`,
so we can generate the `RequiredHashes` before we have names for all
packages, and enforce them throughout.
Closes #2979.
When running the `uv-client` tests, I would previously get:
```
warning: field `0` is never read
--> crates/uv-configuration/src/config_settings.rs:43:27
|
43 | pub struct ConfigSettings(BTreeMap<String, ConfigSettingValue>);
| -------------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| field in this struct
|
= note: `ConfigSettings` has derived impls for the traits `Clone` and `Debug`, but these are intentionally ignored during dead code analysis
= note: `#[warn(dead_code)]` on by default
help: consider changing the field to be of unit type to suppress this warning while preserving the field numbering, or remove the field
|
43 | pub struct ConfigSettings(());
| ~~
warning: `uv-configuration` (lib) generated 1 warning
```
## Summary
Closes #2564
## Test Plan
1. Changed existing linehaul tests to leverage insta.
2. Ran tests in various linux distros (Debian, Ubuntu, Centos, Fedora,
Alpine) to ensure they also pass locally again.
---------
Co-authored-by: konstin <konstin@mailbox.org>
The sync scenarios script is broken, so I did the updates manually:
```
$ ./scripts/sync_scenarios.sh
Setting up a temporary environment...
Using Python 3.12.1 interpreter at: /home/konsti/projects/uv/.venv/bin/python3
Creating virtualenv at: .venv
Activate with: source .venv/bin/activate
× No solution found when resolving dependencies:
╰─▶ Because docutils==0.21.post1 is unusable because the package metadata was inconsistent and you require docutils==0.21.post1, we can conclude that the requirements are unsatisfiable.
hint: Metadata for docutils==0.21.post1 was inconsistent:
Package metadata version `0.21` does not match given version `0.21.post1`
```
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
## Summary
Similar to `Revision`, we now store IDs in the `Archive` entries rather
than absolute paths. This makes the cache robust to moves, etc.
Closes https://github.com/astral-sh/uv/issues/2908.
## Summary
This PR formalizes some of the concepts we use in the cache for
"pointers to things".
In the wheel cache, we have files like
`annotated_types-0.6.0-py3-none-any.http`. This represents an unzipped
wheel, cached alongside an HTTP caching policy. We now have a struct for
this to encapsulate the logic: `HttpArchivePointer`.
Similarly, we have files like `annotated_types-0.6.0-py3-none-any.rev`.
This represents an unzipped local wheel, alongside with a timestamp. We
now have a struct for this to encapsulate the logic:
`LocalArchivePointer`.
We have similar structs for source distributions too.
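A guessed shape for the two pointer types (the real fields and serialization surely differ; this is just to make the parallel concrete):
```rust
use std::path::PathBuf;
use std::time::SystemTime;

/// Pointer stored as `*.http`: an unzipped wheel plus HTTP caching policy.
struct HttpArchivePointer {
    /// Serialized HTTP cache policy (e.g. ETag, max-age) for revalidation.
    cache_policy: Vec<u8>,
    /// Entry in the archives directory holding the unzipped wheel.
    archive: PathBuf,
}

/// Pointer stored as `*.rev`: an unzipped local wheel plus a timestamp.
struct LocalArchivePointer {
    /// Modification time of the local wheel when it was unzipped.
    timestamp: SystemTime,
    /// Entry in the archives directory holding the unzipped wheel.
    archive: PathBuf,
}
```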
## Summary
This PR enables hash generation for URL requirements when the user
provides `--generate-hashes` to `pip compile`. While we already include
the hashes from the registry today, we omit hashes for URLs.
To power hash generation, we introduce a `HashPolicy` abstraction:
```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HashPolicy<'a> {
/// No hash policy is specified.
None,
/// Hashes should be generated (specifically, a SHA-256 hash), but not validated.
Generate,
/// Hashes should be validated against a pre-defined list of hashes. If necessary, hashes should
/// be generated so as to ensure that the archive is valid.
Validate(&'a [HashDigest]),
}
```
All of the methods on the distribution database now accept this policy,
instead of accepting `&'a [HashDigest]`.
Closes #2378.
## Summary
This PR modifies the distribution database to return both the
`Metadata23` and the computed hashes when clients request metadata.
No behavior changes, but this will be necessary to power
`--generate-hashes`.
## Summary
This represents a change to `--require-hashes` in the event that we
don't find a matching hash from the registry. The behavior in this PR is
closer to pip's.
Prior to this PR, if a distribution had no reported hash, or only
mismatched hashes, we would mark it as incompatible. Now, we mark it as
compatible, but we use the hash-agreement as part of the ordering, such
that we prefer any distribution with a matching hash, then any
distribution with no hash, then any distribution with a mismatched hash.
As a result, if an index reports incorrect hashes, but the user provides
the correct one, resolution now succeeds, where it would've failed.
Similarly, if an index omits hashes altogether, but the user provides
the correct one, resolution now succeeds, where it would've failed.
If we end up picking a distribution whose hash ultimately doesn't match,
we'll reject it later, after resolution.
## Summary
This PR adds support for hash-checking mode in `pip install` and `pip
sync`. It's a large change, both in terms of the size of the diff and
the modifications in behavior, but it's also one that's hard to merge in
pieces (at least, with any test coverage) since it needs to work
end-to-end to be useful and testable.
Here are some of the most important highlights:
- We store hashes in the cache. Where we previously stored pointers to
unzipped wheels in the `archives` directory, we now store pointers with
a set of known hashes. So every pointer to an unzipped wheel also
includes its known hashes.
- By default, we don't compute any hashes. If the user runs with
`--require-hashes`, and the cache doesn't contain those hashes, we
invalidate the cache, redownload the wheel, and compute the hashes as we
go. For users that don't run with `--require-hashes`, there will be no
change in performance. For users that _do_, the only change will be if
they don't run with `--generate-hashes` -- then they may see some
repeated work between resolution and installation, if they use `pip
compile` then `pip sync`.
- Many of the distribution types now include a `hashes` field, like
`CachedDist` and `LocalWheel`.
- Our behavior is similar to pip, in that we enforce hashes when pulling
any remote distributions, and when pulling from our own cache. Like pip,
though, we _don't_ enforce hashes if a distribution is _already_
installed.
- Hash validity is enforced in a few different places:
1. During resolution, we enforce hash validity based on the hashes
reported by the registry. If we need to access a source distribution,
though, we then enforce hash validity at that point too, prior to
running any untrusted code. (This is enforced in the distribution
database.)
2. In the install plan, we _only_ add cached distributions that have
matching hashes. If a cached distribution is missing any hashes, or the
hashes don't match, we don't return them from the install plan.
3. In the downloader, we _only_ return distributions with matching
hashes.
4. The final combination of "things we install" are: (1) the wheels from
the cache, and (2) the downloaded wheels. So this ensures that we never
install any mismatching distributions.
- Like pip, if `--require-hashes` is provided, we require that _all_
distributions are pinned with either `==` or a direct URL. We also
require that _all_ distributions have hashes.
There are a few notable TODOs:
- We don't support hash-checking mode for unnamed requirements. These
should be _somewhat_ rare, though? Since `pip compile` never outputs
unnamed requirements. I can fix this, it's just some additional work.
- We don't automatically enable `--require-hashes` when a hash exists in
the requirements file. We require `--require-hashes`.
Closes #474.
## Test Plan
I'd like to add some tests for registries that report incorrect hashes,
but otherwise: `cargo test`
## Summary
This lets us remove circular dependencies (in the future, e.g., #2945)
that arise from `FlatIndex` needing a bunch of resolver-specific
abstractions (like incompatibilities, required hashes, etc.) that aren't
necessary to _fetch_ the flat index entries.
See https://github.com/astral-sh/uv/issues/2617
Note this also includes:
- #2918
- #2931 (pending)
A first step towards Python toolchain management in Rust.
First, we add a new crate to manage Python download metadata:
- Adds a new `uv-toolchain` crate
- Adds Rust structs for Python version download metadata
- Duplicates the script which downloads Python version metadata
- Adds a script to generate Rust code from the JSON metadata
- Adds a utility to download and extract the Python version
I explored some alternatives, like a build script using things like
`serde` and `uneval` to automatically construct the code from our
structs, but deemed it too heavy. Unlike Rye, I don't generate the Rust
directly from the web requests; there's an intermediate JSON layer to
speed up iteration on the Rust types.
Next, we add a `uv-dev` command, `fetch-python`, to download Python
versions per the bootstrapping script.
- Downloads a requested version or reads from `.python-versions`
- Extracts to `UV_BOOTSTRAP_DIR`
- Links executables for path extension
This command is not really intended to be user facing, but it's a good
PoC for the `uv-toolchain` API. Hash checking (via the sha256) isn't
implemented yet, we can do that in a follow-up.
Finally, we remove the `scripts/bootstrap` directory, update CI to use
the new command, and update the CONTRIBUTING docs.
<img width="1023" alt="Screenshot 2024-04-08 at 17 12 15"
src="https://github.com/astral-sh/uv/assets/2586601/57bd3cf1-7477-4bb8-a8e9-802a00d772cb">
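Returning to the download metadata, a hypothetical shape for the generated Rust (the actual structs live in the new `uv-toolchain` crate and may differ):
```rust
/// One downloadable CPython build, generated from the intermediate JSON.
pub struct PythonDownload {
    pub major: u8,
    pub minor: u8,
    pub patch: u8,
    pub os: &'static str,   // e.g. "linux", "darwin", "windows"
    pub arch: &'static str, // e.g. "x86_64", "aarch64"
    pub url: &'static str,
    /// Not checked yet; hash checking is left for a follow-up.
    pub sha256: Option<&'static str>,
}
```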
## Summary
If the user runs with `--generate-hashes`, and the lockfile doesn't
contain _any_ hashes for a package (despite being pinned), we should add
new hashes. This mirrors running `uv pip compile --generate-hashes` for
the first time with an existing lockfile.
Closes #2962.
To get more insights into test performance, allow instrumenting tests
with tracing-durations-export.
Usage:
```shell
# A single test
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo test --features tracing-durations-export --test pip_install_scenarios no_binary -- --exact
# All tests
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo nextest run --features tracing-durations-export
```
Then we can e.g. look at
`target/test-traces/pip_install_scenarios::no_binary.svg` and see the
builds it performs:

## Summary
If we build a source distribution from the registry, and the version
doesn't match that of the filename, we should error, just as we do for
mismatched package names. However, we should also backtrack here, which
we didn't previously.
Closes https://github.com/astral-sh/uv/issues/2953.
## Test Plan
Verified that `cargo run pip install docutils --verbose --no-cache
--reinstall` installs `docutils==0.21` instead of the invalid
`docutils==0.21.post1`.
In the logs, I see:
```
WARN Unable to extract metadata for docutils: Package metadata version `0.21` does not match given version `0.21.post1`
```
## Summary
This updates to the version of axoupdater used in cargo-dist 0.13.0's
own selfupdate command, with all relevant fixes for platforms. It also
tentatively introduces a mildly dangerous self-runtest that runs `uv
self update` and checks that the binary is installed and executable.
I *believe* some adjustments need to be made to your CI to have this new
test run, because it requires the `self-update` feature to be enabled,
and I didn't want to just start messing with how you do feature coverage
in your CI. **As a result I haven't yet had a chance to actually fully
run this in CI**, though I've locally tested it on windows (with the
guard disabled).
## Test Plan
Most of the machinery here is provided by axoupdater itself (cargo-dist
also includes a variant of these tests in its codebase). This initial
implementation has a couple major limitations:
* This is For Reals modifying the system that runs the test (so it's off
unless it detects it's running in CI, and if you want variations on this
test they'll need to be [run in
serial](5e7826f7b0/cargo-dist/tests/cli-tests.rs (L235))).
Since many of the testing issues were surrounding precise details of
Actual Deployed Executions, this seemed worth the tradeoff.
* The actual installer *script* it's ultimately invoking is the one you
last published, and *not* the one that cargo-dist will make when you
next publish.
We're already working on implementing some logic for "get cargo-dist to
generate a fresh installer script too", which is in fact the basis of a
huge amount of cargo-dist's own testsuite. Now that we're dogfooding
this stuff, it should be quite hard for this stuff to break without
cargo-dist's own codebase noticing it first.
<!-- How was it tested? -->
## Summary
The prefetcher tallies the number of times we tried a given package, and
then once we hit a threshold, grabs the version map, assuming it's
already been fetched. For direct URL distributions, though, we don't
have a version map! And there's no need to prefetch.
Closes https://github.com/astral-sh/uv/issues/2941.
## Summary
Right now, we have a `Hashes` representation that looks like:
```rust
/// A dictionary mapping a hash name to a hex encoded digest of the file.
///
/// PEP 691 says multiple hashes can be included and the interpretation is left to the client.
#[derive(Debug, Clone, Eq, PartialEq, Default, Deserialize)]
pub struct Hashes {
pub md5: Option<Box<str>>,
pub sha256: Option<Box<str>>,
pub sha384: Option<Box<str>>,
pub sha512: Option<Box<str>>,
}
```
It stems from the PyPI API, which returns a dictionary of hashes.
We tend to pass these around as a vector of `Vec<Hashes>`. But it's a
bit strange because each entry in that vector could contain multiple
hashes. And it makes it difficult to ask questions like "Is
`sha256:ab21378ca980a8` in the set of hashes"?
This PR instead treats `Hashes` as the PyPI-internal type, and uses a
new `Vec<HashDigest>` everywhere in our own APIs.
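Given the `Hashes` struct above, the flattening might look like this (the enum is a hypothetical stand-in for the new `HashDigest`):
```rust
/// One algorithm, one hex digest: membership checks become a simple
/// `contains` over a `Vec<HashDigest>`.
#[derive(Debug, Clone, PartialEq, Eq)]
enum HashDigest {
    Md5(Box<str>),
    Sha256(Box<str>),
    Sha384(Box<str>),
    Sha512(Box<str>),
}

impl Hashes {
    /// Flatten the PyPI-style dictionary into individual digests.
    fn into_digests(self) -> Vec<HashDigest> {
        let mut digests = Vec::new();
        if let Some(md5) = self.md5 {
            digests.push(HashDigest::Md5(md5));
        }
        if let Some(sha256) = self.sha256 {
            digests.push(HashDigest::Sha256(sha256));
        }
        if let Some(sha384) = self.sha384 {
            digests.push(HashDigest::Sha384(sha384));
        }
        if let Some(sha512) = self.sha512 {
            digests.push(HashDigest::Sha512(sha512));
        }
        digests
    }
}
```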
Needed to prevent circular dependencies in my toolchain work (#2931). I
think this is probably a reasonable change as we move towards persistent
configuration too?
Unfortunately, `BuildIsolation` still needs to be in `uv-types` to avoid
circular dependencies. We might be able to resolve that in the
future.
Elides Python patch versions from the test suite unless the test
specifically requests a patch version.
This reduces some toil when not using our bootstrapped Python versions.
Partially addresses https://github.com/astral-sh/uv/issues/2165 though
we'll need changes to the scenario tests to really support their case.
## Summary
When you specify a source distribution via a path, it can either be a
path to an archive (like a `.tar.gz` file), or a source tree (a
directory). Right now, we handle both paths through the same methods in
the source database. This PR splits them up into separate handlers.
This will make hash generation a little easier, since we need to
generate hashes for archives, but _can't_ generate hashes for source
trees.
It also means that we can now store the unzipped source distribution in
the cache (in the case of archives), and avoid unzipping the source
distribution needlessly on every invocation; and, overall, lets us
enforce clearer expectations between the two routes (e.g., what errors
are possible vs. not), at the cost of duplicating some code.
Closes #2760 (incidentally -- not exactly the motivation for the change,
but it did accomplish it).
## Summary
I think this is a much clearer name for this concept: the set of
"versions" of a given wheel or source distribution. We also use
"Manifest" elsewhere to refer to the set of requirements, constraints,
etc., so this was overloaded.
## Summary
We have a heuristic in `File` that attempts to detect whether a URL is
absolute or relative. However, `contains("://")` is prone to false
positive. In the linked issues, the URLs look like:
```
/packages/5a/d8/4d75d1e4287ad9d051aab793c68f902c9c55c4397636b5ee540ebd15aedf/pytz-2005k.tar.bz2?hash=597b596dc1c2c130cd0a57a043459c3bd6477c640c07ac34ca3ce8eed7e6f30c&remote=4d75d1e4287636b5ee540ebd15aedf/pytz-2005k.tar.bz2#sha256=597b596dc1c2c130cd0a57a043459c3bd6477c640c07ac34ca3ce8eed7e6f30c
```
Which is relative, but includes `://`.
Instead, we should determine whether the URL has a _scheme_ which
matches the `Url` crate internally.
Closes https://github.com/astral-sh/uv/issues/2899.
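A sketch of scheme detection per RFC 3986 (the actual fix defers to the `url` crate's own rules):
```rust
/// An absolute URL starts with `scheme:`, where the scheme is an ASCII
/// letter followed by letters, digits, `+`, `-`, or `.`. This avoids the
/// false positives of `contains("://")` on relative paths.
fn has_scheme(s: &str) -> bool {
    match s.split_once(':') {
        Some((scheme, _)) => {
            let mut chars = scheme.chars();
            matches!(chars.next(), Some(c) if c.is_ascii_alphabetic())
                && chars.all(|c| c.is_ascii_alphanumeric() || matches!(c, '+' | '-' | '.'))
        }
        None => false,
    }
}

fn main() {
    assert!(has_scheme("https://files.pythonhosted.org/packages/"));
    assert!(!has_scheme("/packages/5a/d8/pytz-2005k.tar.bz2"));
}
```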
## Summary
Right now, the path-based wheel cache just looks at the symlink to the
archives directory, checks the timestamp on it, and continues with that
symlink as long as the timestamp is up-to-date.
The HTTP-based wheel meanwhile, uses an intermediary `.http` file, which
includes the HTTP caching information. The `.http` file's payload is
just a path pointing to an entry in the archives directory.
This PR modifies the path-based codepaths to use a similar cache file,
which stores a timestamp along with a path to the archives directory.
The main advantage here is that we can add other data to this cache file
(namely, hashes in the future).
## Test Plan
Beyond existing tests, I also verified that this doesn't require a
version bump:
```
git checkout main
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall
git checkout charlie/manifest
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall
cargo run pip install ~/Downloads/zeal-0.0.1-py3-none-any.whl --cache-dir baz --reinstall --refresh
```
## Summary
I think this is kind of just an oversight. If a wheel is available via
`--find-links`, and the index is "local", we never find it in the cache.
## Test Plan
`cargo test`
## Summary
In all cases, we unzip these immediately after returning. By moving the
unzipping into the database, we can remove a bunch of code (coming in a
separate PR), and pave the way for hash-checking, since hash generation
will _also_ happen in the database, and splitting the caching layers
across the database and the unzipper creates complications.
Closes #2863.
With pubgrub being fast for complex ranges, we can now compute the next
n candidates without taking a performance hit. This speeds up cold cache
`urllib3<1.25.4` `boto3` from maybe 40s - 50s to ~2s. See docstrings for
details on the heuristics.
**Before**

**After**

---
We need two parts to the prefetching: first looking for a compatible
version, and then falling back to flat next versions. After we select a
boto3 version, there is only one compatible botocore version remaining,
so we won't find other compatible candidates for prefetching. We see
this as a pattern where we only prefetch boto3 (stacked bars), but not
botocore (sequential requests between the stacked bars).

The risk is that we're completely wrong with the guess and cause a lot
of useless network requests. I think this is acceptable since this
mechanism only triggers when we're already on the bad path and we should
simply have fetched all versions after some seconds (assuming a fast
index like PyPI).
---
It would be even better if the pubgrub state was copy-on-write so we
could simulate more progress than we actually have; currently we're
guessing what the next version is, which could be completely wrong, but I
think this is still a valuable heuristic.
Fixes #170.
## Summary
Is this, perhaps, not totally necessary? It doesn't show up in any
fixtures beyond those that I added recently.
Closes https://github.com/astral-sh/uv/issues/2846.
## Summary
Demonstrates some suboptimal behavior in how we handle invalid metadata,
which are fixed in https://github.com/astral-sh/uv/pull/2834.
The included wheels were modified by-hand to include invalid structures.
Batch prefetching needs more information from the candidate selector, so
I've split `select` into methods. Split out from #2452. No functional
changes.
## Summary
In working on `--require-hashes`, I noticed that we're missing some
incompatibility tracking for `--find-links` distributions. Specifically,
we don't respect `--no-build` or `--no-binary`, so if we select a wheel
due to `--find-links`, we then throw a hard error when trying to build
it later (if `--no-binary` is provided), rather than selecting the
source distribution instead.
Closes https://github.com/astral-sh/uv/issues/2827.
## Summary
If we're given a Git reference like `20240222`, we currently treat it as
a short commit hash. However... it _could_ be a branch or a tag. This PR
improves the Git reference logic to ensure that ambiguous references
like `20240222` are handled appropriately, by attempting to extract it
as a branch, then a tag, then a short commit hash.
Closes https://github.com/astral-sh/uv/issues/2772.
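A sketch of the disambiguation order (the trait and helper are hypothetical, standing in for queries against the fetched repository):
```rust
enum GitReference {
    Branch(String),
    Tag(String),
    ShortCommit(String),
}

/// Assumed lookup capabilities against an already-fetched repository.
trait RepoLookup {
    fn has_branch(&self, name: &str) -> bool;
    fn has_tag(&self, name: &str) -> bool;
}

fn looks_like_short_sha(s: &str) -> bool {
    (4..=40).contains(&s.len()) && s.chars().all(|c| c.is_ascii_hexdigit())
}

/// `20240222` is all hex digits, but a branch or tag of that name wins.
fn disambiguate<R: RepoLookup>(repo: &R, name: &str) -> Option<GitReference> {
    if repo.has_branch(name) {
        Some(GitReference::Branch(name.to_string()))
    } else if repo.has_tag(name) {
        Some(GitReference::Tag(name.to_string()))
    } else if looks_like_short_sha(name) {
        Some(GitReference::ShortCommit(name.to_string()))
    } else {
        None
    }
}
```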
## Summary
Upgrading `rs-async-zip` enables us to support data descriptors in
streaming. This both greatly improves performance for indexes that use
data descriptors _and_ ensures that we support them in a few other
places (e.g., zipped source distributions created in Finder).
Closes #2808.
## Summary
This partially revives https://github.com/astral-sh/uv/pull/2135 (with
some modifications) to enable users to opt-in to looking for packages
across multiple indexes.
The behavior is such that, in version selection, we take _any_
compatible version from a "higher-priority" index over the compatible
versions of a "lower-priority" index, even if that means we might accept
an "older" version.
Closes https://github.com/astral-sh/uv/issues/2775.
## Summary
This was an oversight in the `-r pyproject.toml` refactor. We can't
enforce unused extras if we have a source tree. We made the correct
changes to `pip compile`, but not `pip install`. This PR just mirrors
those changes to `pip install`, and adds a few tests.
Closes https://github.com/astral-sh/uv/issues/2801.
We don't know what kind of error the OS gives us on `try_lock_exclusive`
with an already-locked file, so we assume any such error means the file is
already locked and fall back to the blocking `lock_exclusive`.
For example the windows error:
```
Os {
code: 33,
kind: Uncategorized,
message: "The process cannot access the file because another process has locked a portion of the file.",
}
```
Fixes #2767
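In sketch form, assuming an fs2-style advisory locking API:
```rust
use std::fs::File;
use std::io;

use fs2::FileExt;

fn lock_file(file: &File) -> io::Result<()> {
    match file.try_lock_exclusive() {
        Ok(()) => Ok(()),
        // We can't portably tell "already locked" apart from other errors
        // (e.g. the Windows os error 33 above), so treat any failure as
        // contention and fall back to the blocking call.
        Err(_) => file.lock_exclusive(),
    }
}
```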
## Summary
Rather than storing the `redirects` on the resolver, this PR just
re-uses the "convert this URL to precise" logic when we convert to a
`Resolution` after-the-fact. I think this is a lot simpler: it removes
state from the resolver, and simplifies a lot of the hooks around
distribution fetching (e.g., `get_or_build_wheel_metadata` no longer
returns `(Metadata23, Option<Url>)`).
## Summary
I noticed in #2769 that I was now stripping `.git` suffixes from Git
URLs after resolving to a precise commit. This PR cleans up the internal
caching to use a better canonical representation: a `RepositoryUrl`
along with a `GitReference`, instead of a `GitUrl` which can contain
non-canonical data. This gives us both better fidelity (preserving the
`.git`, along with any casing that the user provided when defining the
URL) and is overall cleaner and more robust.
## Summary
This PR leverages our lookahead direct URL resolution to significantly
improve the range of Git URLs that we can accept (e.g., if a user
provides the same requirement, once as a direct dependency, and once as
a tag). We did some of this in #2285, but the solution here is more
general and works for arbitrary transitive URLs.
Closes https://github.com/astral-sh/uv/issues/2614.
Originally a regression test for #2779 but we found out that there's
some weird behavior where different `anyio` versions were preferred
based on the platform.
Addresses panic introduced in #2596 and reported in
https://github.com/astral-sh/uv/issues/2763#issuecomment-2030674936
When there are multiple versions of a package available, we remove the
existing packages before installing the resolved version to "fix" the
environment. We must remove all of the package versions and reinstall
because removing _any_ of the package versions could break the others.
Since reinstalls require a pull from the remote, this broke a contract
between the resolver and the planner which must always agree on which
packages should come from the remote. This further demonstrates that we
should be constructing the install plan with more concrete knowledge
from the resolver (i.e. `ResolvedDist` instead of `Requirement`) to
avoid having to manually ensure logic matches.
## Test Plan
Fails on `main` with panic succeeds on branch
```
uv venv --seed
source .venv/bin/activate
pip install anyio==3.7.0 --ignore-installed
pip install anyio==4.0.0 --ignore-installed
cargo run -- pip install anyio black -v
```
## Summary
Ensures that if we resolve any distributions before the resolver, we
cache the metadata in-memory.
_Also_ ensures that we lock (important!) when resolving Git
distributions.
## Summary
This fixes a potential bug that revealed itself in
https://github.com/astral-sh/uv/pull/2761. We don't run into this now
because we always use "allowed URLs", stores the "last" compatible URL
in the map. But the use of the "raw" URL (rather than the "canonical"
URL) means that other writers have to follow that same assumption and
iterate over dependencies in-order.
## Summary
This PR would enable us to support transitive URL requirements. The key
idea is to leverage the fact that...
- URL requirements can only come from URL requirements.
- URL requirements identify a _specific_ version, and so don't require
backtracking.
Prior to running the "real" resolver, we recursively resolve any URL
requirements, and collect all the known URLs upfront, then pass those to
the resolver as "lookahead" requirements. This means the resolver knows
upfront that if a given package is included, it _must_ use the provided
URL.
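A toy sketch of that lookahead pass, with hypothetical maps in place of
the real resolver plumbing:
```rust
use std::collections::HashMap;

// `direct_urls`: (package name, URL) pairs from the input requirements.
// `url_deps`: for each known URL, the URL requirements it declares.
fn collect_lookahead(
    direct_urls: &[(String, String)],
    url_deps: &HashMap<String, Vec<(String, String)>>,
) -> HashMap<String, String> {
    let mut known = HashMap::new();
    let mut queue: Vec<(String, String)> = direct_urls.to_vec();
    while let Some((name, url)) = queue.pop() {
        // URL requirements pin an exact version, so the first URL seen for a
        // package is the one the resolver must use; no backtracking needed.
        if known.insert(name, url.clone()).is_none() {
            if let Some(deps) = url_deps.get(&url) {
                queue.extend(deps.iter().cloned());
            }
        }
    }
    known
}
```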
Closes https://github.com/astral-sh/uv/issues/1808.
With this change, all usages of `EXCLUDE_NEWER` are now in command
wrappers, not in the test functions themselves.
For the venv tests, the second commit refactors them into the same kind
of test context abstraction that the other test modules have.
The third commit makes `INSTA_FILTERS` "private", removing the last
remaining individual usage.
Pending Windows CI 🤞
## Summary
We can access cache from `BuildContext`. This mirrors
`SourceDistCachedBuilder`, which doesn't accept `Cache` as an argument
and always accesses it through `BuildContext`.
## Summary
If the user provides `uv pip install pyproject.toml`, we now prompt them
to confirm that they meant the `pyproject-toml` package (as opposed to
`uv pip install -r pyproject.toml`).
## Summary
We iterate over the project "requirements" directly in a variety of
places. However, it's not always the case that an input "requirement" on
its own will _actually_ be part of the resolution, since we support
"overrides".
Historically, then, overrides haven't worked as expected for _direct_
dependencies (and we have some tests that demonstrate the current,
"wrong" behavior). This is just a bug, but it's not really one that
comes up in practice, since it's rare to apply an override to your _own_
dependency.
However, we're now considering expanding the lookahead concept to
include local transitive dependencies. In this case, it's more and more
important that overrides and constraints are handled consistently.
This PR modifies all the locations in which we iterate over requirements
directly, and modifies them to respect overrides (and constraints, where
necessary).
## Summary
This is a trimmed-down version of
https://github.com/astral-sh/uv/pull/2684 that only applies to local
source trees for now, which enables workspace-like workflows (whereby
local packages can depend on other local packages at arbitrary depth).
Closes #2699.
## Test Plan
Added new tests.
Also cloned this MRE that was shared with me
(https://github.com/timothyjlaurent/uv-poetry-monorepo-mre), and
verified that it was installed without error:
```
❯ cargo run pip install ./uv-poetry-monorepo-mre/app --no-cache
Finished dev [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/uv pip install ./uv-poetry-monorepo-mre/app --no-cache`
Resolved 4 packages in 1.28s
Built app @ file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/app
Built lib1 @ file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib1
Built lib2 @ file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib2
Downloaded 4 packages in 457ms
Installed 4 packages in 2ms
+ app==0.1.0 (from file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/app)
+ lib1==0.1.0 (from file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib1)
+ lib2==0.1.0 (from file:///Users/crmarsh/workspace/uv/uv-poetry-monorepo-mre/lib2)
+ ruff==0.3.4
```
Previously, we did not consider installed distributions as candidates
while performing resolution. Here, we update the resolver to use
installed distributions that satisfy requirements instead of pulling new
distributions from the registry.
The implementation details are as follows:
- We now provide `SitePackages` to the `CandidateSelector`
- If an installed distribution satisfies the requirement, we prefer it
over remote distributions
- We do not want to allow installed distributions in some cases, i.e.,
upgrade and reinstall
- We address this by introducing an `Exclusions` type which tracks
installed packages to ignore during selection
- There's a new `ResolvedDist` wrapper with `Installed(InstalledDist)`
and `Installable(Dist)` variants
- This lets us pass already installed distributions throughout the
resolver
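A sketch of that wrapper (stand-in types; the `requires_download` helper
is hypothetical, for illustration only):
```rust
// Stand-in types; the real `InstalledDist` and `Dist` are defined in uv.
pub struct InstalledDist;
pub struct Dist;

/// The wrapper described above: a resolution can point at a distribution
/// that's already installed, or one that still needs to be fetched.
pub enum ResolvedDist {
    Installed(InstalledDist),
    Installable(Dist),
}

impl ResolvedDist {
    /// Whether satisfying this resolution requires pulling from the remote.
    pub fn requires_download(&self) -> bool {
        matches!(self, ResolvedDist::Installable(_))
    }
}
```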
The user-facing behavior is thoroughly covered in the tests, but
briefly:
- Installing a package that depends on an already-installed package
prefers the local version over the index
- Installing a package with a name that matches an already-installed URL
package does not reinstall from the index
- Reinstalling (--reinstall) a package by name _will_ pull from the
index even if an already-installed URL package is present
- To reinstall the URL package, you must specify the URL in the request
Closes https://github.com/astral-sh/uv/issues/1661
Addresses:
- https://github.com/astral-sh/uv/issues/1476
- https://github.com/astral-sh/uv/issues/1856
- https://github.com/astral-sh/uv/issues/2093
- https://github.com/astral-sh/uv/issues/2282
- https://github.com/astral-sh/uv/issues/2383
- https://github.com/astral-sh/uv/issues/2560
## Test Plan
- [x] Reproduction at `charlesnicholson/uv-pep420-bug` passes
- [x] Unit test for editable package
([#1476](https://github.com/astral-sh/uv/issues/1476))
- [x] Unit test for previously installed package with empty registry
- [x] Unit test for local non-editable package
- [x] Unit test for new version available locally but not in registry
([#2093](https://github.com/astral-sh/uv/issues/2093))
- ~[ ] Unit test for wheel not available in registry but already
installed locally
([#2282](https://github.com/astral-sh/uv/issues/2282))~ (seems
complicated and not worthwhile)
- [x] Unit test for install from URL dependency then with matching
version ([#2383](https://github.com/astral-sh/uv/issues/2383))
- [x] Unit test for install of new package that depends on installed
package does not change version
([#2560](https://github.com/astral-sh/uv/issues/2560))
- [x] Unit test that `pip compile` does _not_ consider installed
packages
It turns out that #2712 did _not_ fix #2711. After I put up #2712, I
started trying to track down the specific change that caused the
failure. I had assumed at first that it was related to one of our `rkyv`
types, but it actually ended up being one of our msgpack caches. I think
the failure mode is still fundamentally the same idea: the cached data
changed in a way that is still valid msgpack, but got interpreted
differently after deserializing.
The specific change that caused this was the [removal of a field] from
our metadata type.
Ideally we would just undo the change and add the field back. But that
change has already been shipped out to users. So I believe the only
plausible choice at this point is to bump the `built-wheels` cache. This
will unfortunately mean that `uv` will need to re-build wheels.
Fixes #2711
[removal of a field]:
365c292525 (diff-e42586829f9c2cdbb909bedc5cf95691cc415247f2cbc2ebeb80d887020457bbL29)
It seems likely that we forgot to bump the version of the "simple" cache
in the 0.1.25 release. I'm still working on confirming it, but I figured
I'd get this bump up first.
The main problem here is that our "simple" cache is represented by
`rkyv`, and that in turn is tightly coupled to the representation of a
selection of data types in `uv`. Changing those data types without
bumping the cache version can result in cache deserialization errors
like this, or in the worst case, silent logic errors.
One possibility here is that the representation changed in a way that
permitted it to pass `rkyv` validation, but changed how the data itself
is interpreted. Our cache is robust with respect to `rkyv` validation
(if it fails, the cache will invalidate the entry and self-heal), but
being robust to higher level logical errors in interpretation of the
data is a much more significant challenge. Our best bet there is perhaps
some kind of checksum that we could do on top of `rkyv` validation (or
instead of it), and thus convert silent logical changes in how the data
is interpreted into failure modes that we're already robust to.
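A rough sketch of the checksum idea, with an entirely hypothetical cache
entry format (not uv's):
```rust
// Bump this constant whenever the cached data types change; a mismatch is
// treated exactly like failed validation (invalidate and self-heal).
const SCHEMA_FINGERPRINT: u64 = 0x0001_0000_0000_0001;

fn write_entry(payload: &[u8]) -> Vec<u8> {
    let mut out = SCHEMA_FINGERPRINT.to_le_bytes().to_vec();
    out.extend_from_slice(payload);
    out
}

fn read_entry(bytes: &[u8]) -> Option<&[u8]> {
    // A fingerprint mismatch is a cache miss, never a payload that happens
    // to deserialize "successfully" but means something different.
    if bytes.len() < 8 || bytes[..8] != SCHEMA_FINGERPRINT.to_le_bytes() {
        return None;
    }
    Some(&bytes[8..])
}
```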
Fixes#2711
## Summary
This looks like a big change but it really isn't. Rather, I just split
`get_or_build_wheel` into separate `get_wheel` and `build_wheel` methods
internally, which made `get_or_build_wheel_metadata` capable of _not_
relying on `Tags`, which in turn makes it easier for us to use the
`DistributionDatabase` in various places without having it coupled to an
interpreter or environment (something we already did for
`SourceDistributionBuilder`).
## Summary
This PR enables the resolver to "accept" URLs, prereleases, and local
version specifiers for direct dependencies of path dependencies. As a
result, `uv pip install .` and `uv pip install -e .` now behave
identically, in that neither has a restriction on URL dependencies and
the like.
Closes https://github.com/astral-sh/uv/issues/2643.
Closes https://github.com/astral-sh/uv/issues/1853.
## Summary
This PR removes the custom `DistFinder` that we use in `pip sync`. This
originally existed because `VersionMap` wasn't lazy, and so we saved a
lot of time in `DistFinder` by reading distribution data lazily. But
now, AFAICT, there's really no benefit. Maintaining `DistFinder` means
we effectively have to maintain two resolvers, and end up fixing bugs in
`DistFinder` that don't exist in the `Resolver` (like #2688).
Closes #2694.
Closes #2443.
## Test Plan
I ran this benchmark a bunch. It's basically a wash. Sometimes one is
faster than the other.
```
❯ python -m scripts.bench \
--uv-path ./target/release/main \
--uv-path ./target/release/uv \
scripts/requirements/compiled/trio.txt --min-runs 50 --benchmark install-warm --warmup 25
Benchmark 1: ./target/release/main (install-warm)
Time (mean ± σ): 54.0 ms ± 10.6 ms [User: 8.7 ms, System: 98.1 ms]
Range (min … max): 45.5 ms … 94.3 ms 50 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: ./target/release/uv (install-warm)
Time (mean ± σ): 50.7 ms ± 9.2 ms [User: 8.7 ms, System: 98.6 ms]
Range (min … max): 44.0 ms … 98.6 ms 50 runs
Warning: The first benchmarking run for this command was significantly slower than the rest (77.6 ms). This could be caused by (filesystem) caches that were not filled until after the first run. You should consider using the '--warmup' option to fill those caches before the actual benchmark. Alternatively, use the '--prepare' option to clear the caches before each timing run.
Summary
'./target/release/uv (install-warm)' ran
1.06 ± 0.29 times faster than './target/release/main (install-warm)'
```
I stumbled across this when writing tests for
`--emit-marker-expressions`. Namely, I observed that in CI, the `tzdata`
dependency of `pendulum` wasn't included in the `requirements.txt`
output on Windows.
@konstin [suggested] that this was a bug, so I've created a test for it.
In particular, it looks like [`tzdata` is an unconditional dependency of
`pendulum`][tzdata-unconditional].
[suggested]:
https://github.com/astral-sh/uv/pull/2651#discussion_r1539722464
[tzdata-unconditional]:
e646afbd165e58327bc5c698731107/pendulum-3.0.0-cp310-none-win_amd64.whl/pendulum-3.0.0.dist-info/METADATA#line.12
## Summary
In `pip sync`, we weren't properly handling cases in which a package
_only_ existed in `--find-links` (e.g., the user passed `--offline` or
`--no-index`).
I plan to explore removing `Finder` entirely to avoid these mismatch
bugs between `pip sync` and other commands, but this is fine for now.
Closes https://github.com/astral-sh/uv/issues/2688.
## Test Plan
`cargo test`
Unfortunately these tests are all gated on specific platforms because
the marker expressions they generate are, by design, platform specific.
I think we'll eventually want to figure out a more robust testing
strategy for multi-platform locking (of which this is just the tiniest
of first steps), but I don't think we really have the infrastructure for
that in place yet. That is, we don't yet have a way of generating a
marker expression _for_ a particular environment instead of just the one
that happens to _be_ the current environment.
When enabled, the marker expression for the pinned requirements
is written as a comment at the top of the output. It is disabled
by default *and* hidden because it's not clear whether (1) this is
useful to end users and (2) it's an interface we want to commit to.
However, it is useful to expose it in some way so that it can be
tested.
For $reasons, we'll want to be able to clone a `Manifest` so
that it can be re-used to generate a marker expression.
There is likely a refactoring that could be done to avoid the
cloning, but a `Manifest` is likely to be small in practice, and
we'll only need to clone it once.
These are useful for converting lower-level marker value types to their
corresponding values from a marker environment.
We'll use these for generating marker expressions based on both
the dependency graph and the current marker environment.
## Summary
Now that we're resolving metadata more aggressively for local sources,
it's worth doing this. We now pull metadata from the `pyproject.toml`
directly if it's statically-defined.
Closes https://github.com/astral-sh/uv/issues/2629.
The snapshot filtering situation has gotten way out of hand, with each
test hand-rolling its own filters on top of cruft copied from previous
tests.
I've attempted to address this holistically:
- `TestContext.filters()` has everything you should need
- This was introduced a while ago, but needed a few more filters for it
to be generalized everywhere
- Using `INSTA_FILTERS` is **not recommended** unless you do not want
the context filters
- It is okay to extend these filters for things unrelated to paths
- If you have to write a custom path filter, please highlight it in
review so we can address it in the common module
- `TestContext.site_packages()` gives cross-platform access to the
site-packages directory
- Do not manually construct the path to site-packages from the venv
- Do not turn off tests on Windows because you manually constructed a
Unix path to site-packages
- `TestContext.workspace_root` gives access to uv's repository directory
- Use this for installing from `scripts/packages/`
- If you need coverage for relative paths, copy the test package into
the `temp_dir`; don't change the working directory of the test fixture
There is additional work that can be done here, such as:
- Auditing and removing additional uses of `INSTA_FILTERS`
- Updating manual construction of `Command` instances to use a utility
- The `venv` tests are particularly frightening in their lack of a test
context and could use some love
- Improving the developer experience, i.e., applying context filters to
snapshots by default
This is driving me a little crazy and is becoming a larger problem in
#2596 where I need to move more types (like `Upgrade` and `Reinstall`)
into this crate. Anything that's shared across our core resolver,
install, and build crates needs to be defined in this crate to avoid
cyclic dependencies. We've outgrown it being a single file with some
shared traits.
There are no behavioral changes here.
If you pass a `pyproject.toml` that uses Hatch's context formatting API,
we currently fail because the dependencies aren't valid under PEP 508.
This PR makes the static metadata parsing a little more relaxed, so that
we appropriately fall back to PEP 517 there.
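A sketch of the relaxed parsing, with a hypothetical `parse_pep508`
standing in for the real parser:
```rust
// Stand-in for a parsed PEP 508 requirement.
struct Requirement(String);

enum Metadata {
    /// All dependencies parsed statically.
    Static(Vec<Requirement>),
    /// Something didn't parse (e.g., Hatch's `{root:uri}` placeholders);
    /// fall back to the PEP 517 build hooks.
    RequiresBuild,
}

// Hypothetical parser: rejects anything with Hatch-style placeholders.
fn parse_pep508(s: &str) -> Result<Requirement, String> {
    if s.contains('{') {
        Err(format!("not a valid PEP 508 requirement: {s}"))
    } else {
        Ok(Requirement(s.to_string()))
    }
}

fn extract_metadata(dependencies: &[&str]) -> Metadata {
    match dependencies
        .iter()
        .map(|dep| parse_pep508(dep))
        .collect::<Result<Vec<_>, _>>()
    {
        Ok(requirements) => Metadata::Static(requirements),
        Err(_) => Metadata::RequiresBuild,
    }
}
```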
## Summary
Passing `pyproject.toml` or `setup.py` to `pip uninstall` is a bit
strange, since it will often require running a resolution to resolve the
dependencies (e.g., build the project), which means we also need to
accept `--index-url` and friends.
## Summary
Hatch allows for highly dynamic customization of metadata via hooks. In
such cases, Hatch can't uphold the PEP 517 contract, in that the
metadata Hatch would return from `prepare_metadata_for_build_wheel`
isn't guaranteed to match that of the built wheel.
Hatch disables `prepare_metadata_for_build_wheel` entirely for pip.
We'll instead disable it on our end when metadata is defined as
"dynamic" in the `pyproject.toml`, which should allow us to leverage the
hook in _most_ cases while still avoiding incorrect metadata for the
remaining cases.
Closes: https://github.com/astral-sh/uv/issues/2130.
## Summary
When a user passes a `pyproject.toml` to `pip compile` (e.g., `uv pip
compile pyproject.toml`), we extract the requirements from the
`pyproject.toml` directly. However... that isn't always possible (as
seen in the linked issues). When it's _not_, we instead need to run the
PEP 517 build hooks to identify the metadata.
Closes https://github.com/astral-sh/uv/issues/1624.
Closes https://github.com/astral-sh/uv/issues/1644.
## Test Plan
`cargo test`
## Summary
- Displays missing packages as single-line warnings.
- Adds support for `Editable project location` and `Required-by` fields
in `pip show`.
Part of #2526.
## Summary
`uv` was failing to install requirements defined like:
```
file://localhost/Users/crmarsh/Downloads/iniconfig-2.0.0-py3-none-any.whl
```
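For reference, the `url` crate already treats a `localhost` host in a
`file://` URL as local; a sketch of the expected behavior:
```rust
use url::Url;

fn main() {
    let url = Url::parse(
        "file://localhost/Users/crmarsh/Downloads/iniconfig-2.0.0-py3-none-any.whl",
    )
    .unwrap();
    // `to_file_path` succeeds when the host is empty *or* `localhost`.
    let path = url.to_file_path().expect("a local file path");
    assert!(path.ends_with("iniconfig-2.0.0-py3-none-any.whl"));
}
```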
Closes https://github.com/astral-sh/uv/issues/2652.
While writing tests for a new flag (`--emit-marker-expression`) for `uv
pip compile`, I noticed that one of my test cases (`pendulum 3.0.0`)
was published in Dec 2023. I wanted to include this package in my
tests, but since it comes after our `EXCLUDE_NEWER` constant, it wasn't
visible to `uv`.
In this PR, I chose to resolve this by bumping `EXCLUDE_NEWER` to
`2024-03-25T00:00:00Z`. I also considered a couple other options:
* For a specific test, override and provide a custom `--exclude-newer`
flag. I felt like this would maybe be okay, but we could easily wind
up in a situation where we do this a lot and have a bunch of different
`--exclude-newer` flags in our tests. I'm not sure if this is a huge
problem in practice. Maybe it's fine.
* Find another package (or invent one) with a similarly interesting
configuration. It seemed easier to just bump `EXCLUDE_NEWER`.
The way I did this was to run `cargo insta test` after bumping
`EXCLUDE_NEWER`. I then reviewed the snapshot diffs, and if they looked
reasonable, I accepted them. There was only one case where I changed the
test to preserve what I thought it was trying to test. That's isolated
in its own commit.
With https://github.com/pubgrub-rs/pubgrub/pull/190, pubgrub attaches
all types to a dependency provider to reduce the number of generics. We
need a dummy dependency provider now to emulate this. On the plus side,
pep440_rs drops its pubgrub dependency.
This test was introduced in 42973cd9cb. It
looks like it compares some values against some platform-specific code
that attempts to find the OS version. But the comparisons made some
assumptions about what kind of data is available. In this commit, we try
to make the test a little more flexible on Linux by not assuming that
`Option` values are `Some`.
## Summary
I don't see a great reason to allow this, and it adds a lot of
complexity, so `pyproject.toml` files are now limited to `pip compile`
and `pip install -r` -- they can't be passed as `-c` or `--override`.
## Summary
Closes Issue:
- https://github.com/astral-sh/uv/issues/2626
## Test Plan
```
cargo run -- pip install -r dev-requirements.txt -r requirements.txt
```
where both requirements files have the same `--index-url`
We put a `.gitignore` with `*` at the top of our cache. When maturin
was building a source distribution inside the cache, it would walk up
the tree to find a gitignore, see ours, and ignore all Python files. We
now add an (empty) `.git` directory one level below, in the root of the
built-wheels cache. This stops the ignore-file discovery from walking
any further up (it marks that directory as a git repository).
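A sketch of the resulting cache layout (the `built-wheels-v0` directory
name here is illustrative, not uv's exact name):
```rust
use std::fs;
use std::io;
use std::path::Path;

fn init_cache(cache: &Path) -> io::Result<()> {
    let built_wheels = cache.join("built-wheels-v0");
    fs::create_dir_all(&built_wheels)?;
    // Ignore everything under the cache root...
    fs::write(cache.join(".gitignore"), "*\n")?;
    // ...but mark the built-wheels root as a "repository", so that tools
    // like maturin stop their upward `.gitignore` discovery here instead of
    // finding the ignore-all file above.
    fs::create_dir_all(built_wheels.join(".git"))?;
    Ok(())
}
```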
Deptry (from #2490) is a mid-sized Rust package with additional Python
packages, so instead of using it in the test, I've replaced it with a
small (44KB total) reproducer that uses cffi for faster building; the
entire test takes <2s on my machine.
Fixes #2490
## Summary
Ensures that (e.g.) installs from conda-forge, Homebrew, and other
distributions don't expose `uv self update` at all.
We'll still show `uv self update` for `pip install uv`, but it will fail
with a good error. Removing the `uv self update` from `pip`-installed
`uv` is more complicated, since we'd need to build separately for the
installer vs. for PyPI.
Closes#2588.
## Summary
This PR enables the source distribution database to be used with unnamed
requirements (i.e., URLs without a package name). The (significant)
upside here is that we can now use PEP 517 hooks to resolve unnamed
requirement metadata _and_ reuse any computation in the cache.
The changes to `crates/uv-distribution/src/source/mod.rs` are quite
extensive, but mostly mechanical. The core idea is that we introduce a
new `BuildableSource` abstraction, which can either be a distribution,
or an unnamed URL:
```rust
/// A reference to a source that can be built into a built distribution.
///
/// This can either be a distribution (e.g., a package on a registry) or a direct URL.
///
/// Distributions can _also_ point to URLs in lieu of a registry; however, the primary distinction
/// here is that a distribution will always include a package name, while a URL will not.
#[derive(Debug, Clone, Copy)]
pub enum BuildableSource<'a> {
Dist(&'a SourceDist),
Url(SourceUrl<'a>),
}
```
All the methods on the source distribution database now accept
`BuildableSource`. `BuildableSource` has a `name()` method, but it
returns `Option<&PackageName>`, and everything is required to work with
and without a package name.
The main drawback of this approach (which isn't a terrible one) is that
we can no longer include the package name in the cache. (We do continue
to use the package name for registry-based distributions, since those
always have a name.) The package name was included in the cache route
for two reasons: (1) it's nice for debugging; and (2) we use it to power
`uv cache clean flask`, to identify the entries that are relevant for
Flask.
To solve this, I changed the `uv cache clean` code to look one level
deeper. So, when we want to determine whether to remove the cache entry
for a given URL, we now look into the directory to see if there are any
wheels that match the package name. This isn't as nice, but it does work
(and we have test coverage for it -- all passing).
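A sketch of that deeper lookup, with hypothetical helpers rather than
uv's actual cache code:
```rust
use std::io;
use std::path::Path;

// Given a per-URL cache directory (whose path no longer embeds the package
// name), check whether any wheel built inside it belongs to `name`.
fn matches_package(url_dir: &Path, name: &str) -> io::Result<bool> {
    for entry in std::fs::read_dir(url_dir)? {
        let file_name = entry?.file_name();
        let file_name = file_name.to_string_lossy();
        // Wheel filenames lead with the (normalized) package name, e.g.
        // `flask-3.0.2-py3-none-any.whl`.
        if file_name.ends_with(".whl")
            && file_name
                .split('-')
                .next()
                .is_some_and(|prefix| prefix.eq_ignore_ascii_case(name))
        {
            return Ok(true);
        }
    }
    Ok(false)
}
```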
I also considered removing the package name from the cache routes for
non-registry _wheels_, for consistency... But, it would require a cache
bump, and it didn't feel important enough to merit that.
## Summary
Detects unused cache entries, which can come in a few forms:
1. Directories that are outdated via our versioning scheme.
2. Old source distribution builds (i.e., we have a more recent version).
3. Old wheels (stored in `archive-v0`, but not symlinked-to from
anywhere in the cache).
Closes https://github.com/astral-sh/puffin/issues/1059.
Closes #2566
We were storing the username, e.g., `charlie@astral.sh`, as a
percent-encoded string, `charlie%40astral.sh`, which resulted in
different headers and broke JFrog's Artifactory, which apparently does
not decode usernames.
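A sketch of the fix using the `percent-encoding` crate (the actual
header construction is elided):
```rust
use percent_encoding::percent_decode_str;

// Decode the userinfo taken from the URL before constructing credentials,
// so `charlie%40astral.sh` is sent as `charlie@astral.sh`. (Base64-encoding
// the `username:password` pair into the `Authorization` header is elided.)
fn credentials(encoded_username: &str, encoded_password: &str) -> String {
    let username = percent_decode_str(encoded_username).decode_utf8_lossy();
    let password = percent_decode_str(encoded_password).decode_utf8_lossy();
    format!("{username}:{password}")
}
```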
Tested with a JFrog Artifactory instance and AWS CodeArtifact, although
it is worth noting that AWS does _not_ have a username with an `@` in
it; it'd be nice to test another Artifactory with percent-encoded
characters in the username and/or password.
## Summary
In `pip uninstall`, we shouldn't need to resolve unnamed requirements,
since we already index packages in `site-packages` by their URL.
This also changes `uninstall` to ignore editables, which matches pip's
behavior.
Part of: https://github.com/astral-sh/uv/issues/313.
## Test Plan
Run `cargo run pip install ./scripts/editable-installs/black_editable`,
followed by each of the following:
- `cargo run pip uninstall ./scripts/editable-installs/black_editable`
- `cargo run pip uninstall black`
- `cargo run pip uninstall ./scripts/editable-installs/black_editable
black`
## Summary
This PR ensures that if a package is already satisfied by the current
environment, we don't bother resolving the named requirement.
Part of: https://github.com/astral-sh/uv/issues/313.
## Test Plan
- `cargo run pip install ./scripts/editable-installs/black_editable`
- `cargo run pip install black --verbose`
## Summary
For example: `cargo run pip install .`
The strategy taken here is to attempt to extract the package name from
the distribution without executing the PEP 517 build steps. We could
choose to do that in the future if this proves lacking, but it adds
complexity.
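A sketch of the cheap path, using a naive scan for `[project] name` (a
real implementation would use a proper TOML parser, and handle
`setup.py` heuristics separately):
```rust
use std::path::Path;

fn package_name(source_tree: &Path) -> Option<String> {
    let contents = std::fs::read_to_string(source_tree.join("pyproject.toml")).ok()?;
    let mut in_project = false;
    for line in contents.lines() {
        let line = line.trim();
        if line.starts_with('[') {
            // Track whether we're inside the `[project]` table.
            in_project = line == "[project]";
        } else if in_project {
            if let Some(rest) = line.strip_prefix("name") {
                if let Some(value) = rest.trim_start().strip_prefix('=') {
                    let name = value.trim().trim_matches(|c| c == '"' || c == '\'');
                    return Some(name.to_string());
                }
            }
        }
    }
    None
}
```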
Part of: https://github.com/astral-sh/uv/issues/313.
## Summary
This PR enables `uv pip install` to accept unnamed requirements, as long
as the requirement ends with the wheel or source distribution archive
name. For example: `cargo run pip install
~/Downloads/anyio-4.3.0.tar.gz`.
In subsequent PRs, I'll expand the scope of supported archives and
patterns.
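A sketch of the filename heuristic for source archives (real parsing
handles more formats and PEP 503 name normalization):
```rust
// Split `anyio-4.3.0.tar.gz` into ("anyio", "4.3.0").
fn parse_sdist_filename(filename: &str) -> Option<(&str, &str)> {
    let stem = filename
        .strip_suffix(".tar.gz")
        .or_else(|| filename.strip_suffix(".zip"))?;
    // The version follows the *last* hyphen; package names may contain hyphens.
    stem.rsplit_once('-')
}

fn main() {
    assert_eq!(
        parse_sdist_filename("anyio-4.3.0.tar.gz"),
        Some(("anyio", "4.3.0"))
    );
}
```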
Part of: https://github.com/astral-sh/uv/issues/313.
## Summary
First piece of https://github.com/astral-sh/uv/issues/313. In order to
support unnamed requirements, we need to be able to parse them in
`requirements-txt`, which in turn means that we need to introduce a new
type that's distinct from `pep508::Requirement`, given that these
_aren't_ PEP 508-compatible requirements.
Part of: https://github.com/astral-sh/uv/issues/313.