## Summary
This looks like a big change but it really isn't. Rather, I just split
`get_or_build_wheel` into separate `get_wheel` and `build_wheel` methods
internally, which made `get_or_build_wheel_metadata` capable of _not_
relying on `Tags`, which in turn makes it easier for us to use the
`DistributionDatabase` in various places without having it coupled to an
interpreter or environment (something we already did for
`SourceDistributionBuilder`).
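To illustrate the shape of the split, here is a schematic sketch with stand-in types; the signatures and semantics are assumptions for illustration, not uv's real API:

```rust
// Illustrative stubs, not uv's actual types.
struct Tags;
struct Dist;
struct Wheel;
struct Metadata;

trait WheelAccess {
    /// Fetch an existing wheel; this is the half that still needs `Tags`.
    fn get_wheel(&self, dist: &Dist, tags: &Tags) -> Wheel;
    /// Build a wheel via PEP 517; no `Tags` required.
    fn build_wheel(&self, dist: &Dist) -> Wheel;
    /// With the split, metadata resolution no longer depends on `Tags`.
    fn get_or_build_wheel_metadata(&self, dist: &Dist) -> Metadata;
}
```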
This is driving me a little crazy and is becoming a larger problem in
#2596 where I need to move more types (like `Upgrade` and `Reinstall`)
into this crate. Anything that's shared across our core resolver,
install, and build crates needs to be defined in this crate to avoid
cyclic dependencies. We've outgrown it being a single file with some
shared traits.
There are no behavioral changes here.
## Summary
This PR enables the source distribution database to be used with unnamed
requirements (i.e., URLs without a package name). The (significant)
upside here is that we can now use PEP 517 hooks to resolve unnamed
requirement metadata _and_ reuse any computation in the cache.
The changes to `crates/uv-distribution/src/source/mod.rs` are quite
extensive, but mostly mechanical. The core idea is that we introduce a
new `BuildableSource` abstraction, which can either be a distribution,
or an unnamed URL:
```rust
/// A reference to a source that can be built into a built distribution.
///
/// This can either be a distribution (e.g., a package on a registry) or a direct URL.
///
/// Distributions can _also_ point to URLs in lieu of a registry; however, the primary distinction
/// here is that a distribution will always include a package name, while a URL will not.
#[derive(Debug, Clone, Copy)]
pub enum BuildableSource<'a> {
    Dist(&'a SourceDist),
    Url(SourceUrl<'a>),
}
```
All the methods on the source distribution database now accept
`BuildableSource`. `BuildableSource` has a `name()` method, but it
returns `Option<&PackageName>`, and everything is required to work with
and without a package name.
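As a minimal sketch of what that looks like in practice (with simplified stand-ins for `PackageName`, `SourceDist`, and `SourceUrl`, not the real definitions), `name()` is total over both variants but only the distribution case can return a name:

```rust
// Simplified stand-ins for the surrounding types.
struct PackageName(String);

struct SourceDist {
    name: PackageName,
}

struct SourceUrl<'a> {
    url: &'a str,
}

enum BuildableSource<'a> {
    Dist(&'a SourceDist),
    Url(SourceUrl<'a>),
}

impl<'a> BuildableSource<'a> {
    /// The package name, if known: distributions always have one, URLs do not.
    fn name(&self) -> Option<&PackageName> {
        match self {
            BuildableSource::Dist(dist) => Some(&dist.name),
            BuildableSource::Url(_) => None,
        }
    }
}

fn main() {
    let dist = SourceDist { name: PackageName("flask".to_string()) };
    let from_dist = BuildableSource::Dist(&dist);
    let from_url = BuildableSource::Url(SourceUrl { url: "https://example.com/pkg.tar.gz" });
    assert!(from_dist.name().is_some());
    assert!(from_url.name().is_none());
}
```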
The main drawback of this approach (which isn't a terrible one) is that
we can no longer include the package name in the cache. (We do continue
to use the package name for registry-based distributions, since those
always have a name.) The package name was included in the cache route
for two reasons: (1) it's nice for debugging; and (2) we use it to power
`uv cache clean flask`, to identify the entries that are relevant for
Flask.
To solve this, I changed the `uv cache clean` code to look one level
deeper. So, when we want to determine whether to remove the cache entry
for a given URL, we now look into the directory to see if there are any
wheels that match the package name. This isn't as nice, but it does work
(and we have test coverage for it -- all passing).
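A rough sketch of that deeper lookup (a hypothetical helper, not uv's actual cache-clean code): with no package name in the URL-keyed cache route, peek one level into the directory and check whether any cached wheel belongs to the package.

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Does this URL-keyed cache entry contain any wheels for `package`?
fn contains_wheels_for(url_entry: &Path, package: &str) -> io::Result<bool> {
    // Wheel filenames start with the normalized package name, e.g. `flask-3.0.2-...whl`.
    let prefix = format!("{}-", package.to_lowercase().replace('-', "_"));
    for entry in fs::read_dir(url_entry)? {
        let name = entry?.file_name();
        let name = name.to_string_lossy().to_lowercase();
        if name.ends_with(".whl") && name.starts_with(&prefix) {
            return Ok(true);
        }
    }
    Ok(false)
}
```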
I also considered removing the package name from the cache routes for
non-registry _wheels_, for consistency... But, it would require a cache
bump, and it didn't feel important enough to merit that.
## Summary
This PR ensures that if a package is already satisfied by the current
environment, we don't bother resolving the named requirement.
Part of: https://github.com/astral-sh/uv/issues/313.
## Test Plan
- `cargo run pip install ./scripts/editable-installs/black_editable`
- `cargo run pip install black --verbose`
## Summary
We would like to be able to configure the installer-name so that other
tools can co-exist with `uv`. In `pixi` we would like to use `pixi-uv`
as the installer name, for example, to be able to distinguish them from
packages installed by pure `uv`.
## Summary
This PR changes our user-facing representation for paths to use relative
paths, when the path is within the current working directory. This
mirrors what we do in Ruff. (If the path is _outside_ the current
working directory, we print an absolute path.)
Before:
```shell
❯ uv venv .venv2
Using Python 3.12.2 interpreter at: /Users/crmarsh/workspace/uv/.venv/bin/python3
Creating virtualenv at: .venv2
Activate with: source .venv2/bin/activate
```
After:
```shell
❯ cargo run venv .venv2
Finished dev [unoptimized + debuginfo] target(s) in 0.15s
Running `target/debug/uv venv .venv2`
Using Python 3.12.2 interpreter at: .venv/bin/python3
Creating virtualenv at: .venv2
Activate with: source .venv2/bin/activate
```
Note that we still want to use the existing `.simplified_display()`
anywhere that the path is being simplified, but _still_ intended for
machine consumption (e.g., when passing to `.current_dir()`).
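For illustration, a simplified sketch of the user-facing idea only (not the actual display helpers): show a path relative to the current directory when it is inside it, otherwise leave it absolute.

```rust
use std::env;
use std::io;
use std::path::{Path, PathBuf};

/// Prefer a path relative to the current directory for display; fall back to the path as-is.
fn user_display(path: &Path) -> io::Result<PathBuf> {
    let cwd = env::current_dir()?;
    Ok(match path.strip_prefix(&cwd) {
        Ok(relative) => relative.to_path_buf(),
        Err(_) => path.to_path_buf(),
    })
}

fn main() -> io::Result<()> {
    let cwd = env::current_dir()?;
    // Inside the working directory: printed relative.
    println!("{}", user_display(&cwd.join(".venv/bin/python3"))?.display());
    // Outside the working directory: printed as given (absolute).
    println!("{}", user_display(Path::new("/usr/bin/python3"))?.display());
    Ok(())
}
```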
## Summary
In reality, there's no such thing as the `site-packages` directory for a
given virtualenv. Rather, Python defines both `purelib` and `platlib`,
where the former is for pure-Python packages and the latter is for
packages that contain native code. These are almost always set to the
same thing... but they don't _have_ to be, and in fact on Fedora they
are not.
This PR changes the `site_packages` method to return an iterator of
directories.
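A minimal sketch of the idea with simplified, assumed types (not uv's real scheme struct): yield `purelib` and `platlib`, deduplicating when they point at the same directory, which is the common case.

```rust
use std::path::PathBuf;

struct Scheme {
    purelib: PathBuf,
    platlib: PathBuf,
}

impl Scheme {
    /// Iterate over the site-packages directories, without repeating a shared one.
    fn site_packages(&self) -> impl Iterator<Item = &PathBuf> {
        let shared = self.purelib == self.platlib;
        std::iter::once(&self.purelib).chain((!shared).then_some(&self.platlib))
    }
}

fn main() {
    // On Fedora-like layouts, purelib and platlib differ, so both are yielded.
    let fedora_like = Scheme {
        purelib: PathBuf::from("/usr/lib/python3.12/site-packages"),
        platlib: PathBuf::from("/usr/lib64/python3.12/site-packages"),
    };
    assert_eq!(fedora_like.site_packages().count(), 2);
}
```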
Closes https://github.com/astral-sh/uv/issues/2527.
## Summary
This may be required elsewhere, but all the traces in that issue are
related to persisting the temporary directory to our persistent cache,
so let's start there.
See: https://github.com/astral-sh/uv/issues/1491.
Since Python 3.7, deterministic `.pyc` files are possible (see
[PEP 552](https://peps.python.org/pep-0552/)).
To select the bytecode invalidation mode explicitly via an environment variable:
`PYC_INVALIDATION_MODE=UNCHECKED_HASH uv pip install --compile ...`
Valid values are `TIMESTAMP` (default), `CHECKED_HASH`, and `UNCHECKED_HASH`. The
latter options are useful for reproducible builds.
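As a sketch of how the selection could be parsed (the enum and function names here are assumptions; only the variable name and values come from the description above):

```rust
/// The three PEP 552 invalidation modes; names are illustrative.
#[derive(Debug, Clone, Copy, Default)]
enum PycInvalidationMode {
    #[default]
    Timestamp,
    CheckedHash,
    UncheckedHash,
}

/// Read the mode from the environment, returning `None` if unset or unrecognized.
fn invalidation_mode_from_env() -> Option<PycInvalidationMode> {
    match std::env::var("PYC_INVALIDATION_MODE").ok()?.as_str() {
        "TIMESTAMP" => Some(PycInvalidationMode::Timestamp),
        "CHECKED_HASH" => Some(PycInvalidationMode::CheckedHash),
        "UNCHECKED_HASH" => Some(PycInvalidationMode::UncheckedHash),
        _ => None,
    }
}
```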
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
PyPI now supports Metadata 2.2, which means distributions with Metadata
2.2-compliant metadata will start to appear. The upside is that if a
source distribution includes a `PKG-INFO` file with (1) a metadata
version of 2.2 or greater, and (2) no dynamic fields (at least, of the
fields we rely on), we can read the metadata from the `PKG-INFO` file
directly rather than running _any_ of the PEP 517 build hooks.
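A rough sketch of that gating logic (a hypothetical helper, not uv's actual metadata parser): trust `PKG-INFO` only when the metadata version is 2.2 or newer and nothing is declared `Dynamic`.

```rust
/// Can we use the static `PKG-INFO` instead of invoking PEP 517 hooks?
fn can_skip_build_hooks(metadata_version: &str, dynamic_fields: &[&str]) -> bool {
    let mut parts = metadata_version.splitn(2, '.');
    let major: u32 = parts.next().and_then(|s| s.parse().ok()).unwrap_or(0);
    let minor: u32 = parts.next().and_then(|s| s.parse().ok()).unwrap_or(0);
    // Any `Dynamic:` field means the static PKG-INFO may not match the built metadata.
    (major, minor) >= (2, 2) && dynamic_fields.is_empty()
}

fn main() {
    assert!(can_skip_build_hooks("2.2", &[]));
    assert!(!can_skip_build_hooks("2.1", &[]));
    assert!(!can_skip_build_hooks("2.3", &["Requires-Dist"]));
}
```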
Closes https://github.com/astral-sh/uv/issues/2009.
Sometimes, the first time we read from the stdout of the bytecode
compiler python subprocess, we get an empty string back (no newline). If
we try to write to stdin, it will often be a broken pipe (#2245). Once
we've gotten an empty string the first time, we will get the same empty string
if we read a line again.
The details of this behavior are mysterious to me, but it seems that the broken
state can be identified by that first empty string. We check for it by having
the Python side send a `Ready` message on startup. When we encounter the broken
state, we discard the interpreter and try again.
We have to introduce a third timeout check for the interpreter launch
itself.
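A simplified sketch of the handshake (assumed framing, not the actual worker code): the Python side prints `Ready` once it has started, so an empty first read identifies the broken state, in which case the caller discards the interpreter and spawns a fresh one.

```rust
use std::io::{self, BufRead, BufReader};
use std::process::ChildStdout;

/// Returns `true` if the worker announced itself; an empty read signals the broken state.
fn worker_is_ready(stdout: &mut BufReader<ChildStdout>) -> io::Result<bool> {
    let mut line = String::new();
    stdout.read_line(&mut line)?;
    // An empty read (not even a newline) is the signature of the broken subprocess.
    Ok(line.trim_end() == "Ready")
}
```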
Minimized test script:
```bash
#!/usr/bin/env bash
set -euo pipefail
while true; do
date --iso-8601=seconds # Progress indicator
rm -rf testenv
target/profiling/uv venv testenv -q --python 3.12
VIRTUAL_ENV=$PWD/testenv target/profiling/uv pip install -q --compile wheel==0.42.0
done
```
Run as
```
cargo build --profile profiling && bash compile_bug.sh
```
Fixes #2245
Follow-up to #2086: don't apply timeouts to the entire workers, but only
to the section that communicates with the (potentially broken)
`python` subprocess. I've also raised the timeout to 60s.
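A sketch of what the narrower scope looks like, assuming a tokio-based worker and `anyhow` for errors (the function and constant names are illustrative): only the round-trip with the subprocess sits inside the timeout.

```rust
use std::time::Duration;
use tokio::io::{AsyncBufReadExt, AsyncWriteExt};
use tokio::time::timeout;

const COMPILE_TIMEOUT: Duration = Duration::from_secs(60);

/// Send one path to the worker and wait (bounded) for its completion line.
async fn compile_one<W, R>(stdin: &mut W, stdout: &mut R, path: &str) -> anyhow::Result<()>
where
    W: AsyncWriteExt + Unpin,
    R: AsyncBufReadExt + Unpin,
{
    timeout(COMPILE_TIMEOUT, async {
        // Only this exchange with the subprocess is covered by the timeout.
        stdin.write_all(format!("{path}\n").as_bytes()).await?;
        let mut line = String::new();
        stdout.read_line(&mut line).await?;
        anyhow::Ok(())
    })
    .await??;
    Ok(())
}
```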
Add a `--compile` option to `pip install` and `pip sync`.
I chose to implement this as a separate pass over the entire venv. If we
wanted to compile during installation, we'd have to make sure that
writing is exclusive, to avoid concurrent processes writing broken
`.pyc` files. Additionally, this ensures that the entire site-packages
directory is bytecode-compiled, even if there are packages that aren't from this
`uv` invocation. The disadvantage is that we do not update RECORD and
rely on this comment from [PEP 491](https://peps.python.org/pep-0491/):
> Uninstallers should be smart enough to remove .pyc even if it is not
mentioned in RECORD.
If this is a problem we can change it to run during installation and
write RECORD entries.
Internally, this is implemented as an async work-stealing subprocess
worker pool. The producer is a directory traversal over site-packages,
sending each `.py` file to a bounded async FIFO queue/channel. Each
worker has a long-running Python process. It pops the queue to get a
single path (or exits if the channel is closed), then sends it to
stdin, waits until it's informed that the compilation is done through a
line on stdout, and repeats. This is fast: e.g., when installing `jupyter
plotly` on Python 3.12, it processes 15,876 files in 319ms with 32 threads
(vs. 3.8s with a single core). The Python processes internally call
`compileall.compile_file`, the same as pip.
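A structural sketch of that pool, assuming `tokio`, `async-channel`, and `anyhow` (the real workers additionally talk to a long-lived Python subprocess): a bounded channel of `.py` paths feeds N workers that pull until the channel is closed.

```rust
use std::path::PathBuf;

async fn compile_tree(paths: Vec<PathBuf>, workers: usize) -> anyhow::Result<()> {
    let (tx, rx) = async_channel::bounded::<PathBuf>(64);

    // Each worker would own one long-running `python` process and pop paths from
    // the shared queue until the sender is dropped.
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let rx = rx.clone();
            tokio::spawn(async move {
                while let Ok(path) = rx.recv().await {
                    // The real code writes `path` to the worker's stdin and waits
                    // for a completion line on stdout before taking the next path.
                    let _ = path;
                }
            })
        })
        .collect();

    // Producer: the traversal over site-packages, simplified here to a `Vec`.
    for path in paths {
        tx.send(path)
            .await
            .map_err(|_| anyhow::anyhow!("all workers exited early"))?;
    }
    drop(tx); // Closing the channel lets every worker exit its loop.

    for handle in handles {
        handle.await?;
    }
    Ok(())
}
```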
Like pip, we ignore and silence all compilation errors
(https://github.com/astral-sh/uv/issues/1559). There is a 10s timeout to
handle the case where the workers get stuck. For the reviewers: please
check whether I missed any spots where we could deadlock; this is the hardest
part of this PR.
I've added `uv-dev compile <dir>` and `uv-dev clear-compile <dir>`
commands, mainly for my own benchmarking. I don't want to expose them in
`uv`; they're almost certainly not the correct workflow, and we don't want
to support them.
Fixes #1788. Closes #1559. Closes #1928.
## Summary
Ensures that local dependencies function similarly to editables, in that
if they're `uv pip install`ed, we invalidate them.
Closes https://github.com/astral-sh/uv/issues/1651.
## Summary
Internal-only refactor to consolidate multiple codepaths we have for
checking whether a cached or installed entry is up-to-date with a local
requirement.
## Summary
`PythonPlatform` only exists to format paths to directories within
virtual environments based on a root and an OS, so it's now
`VirtualenvLayout`.
`Virtualenv` is now also used for non-virtual-environment Pythons, so it's
now `PythonEnvironment`.
## Summary
This PR adds a `--python` flag that allows users to provide a specific
Python interpreter into which `uv` should install packages. This would
replace the `VIRTUAL_ENV=` workaround that folks have been using to
install into arbitrary, system environments, while _also_ actually being
correct for installing into non-virtual environments, where the bin and
site-packages paths can differ.
The approach taken here is to use `sysconfig.get_paths()` to get the
correct paths from the interpreter, and then use those for determining
the `bin` and `site-packages` directories, rather than constructing them
based on hard-coded expectations for each platform.
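A sketch of that querying approach, assuming we shell out to the target interpreter (a hypothetical helper, not uv's actual probe script): ask `sysconfig.get_paths()` for the scheme rather than hard-coding per-platform layouts.

```rust
use std::path::Path;
use std::process::Command;

/// Ask the given interpreter for its install scheme as a JSON blob.
fn query_paths(python: &Path) -> std::io::Result<String> {
    let output = Command::new(python)
        .arg("-c")
        .arg("import json, sysconfig; print(json.dumps(sysconfig.get_paths()))")
        .output()?;
    // The JSON contains, among others, the `purelib`, `platlib`, and `scripts`
    // entries that determine the site-packages and bin directories.
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}
```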
Closes https://github.com/astral-sh/uv/issues/1396.
Closes https://github.com/astral-sh/uv/issues/1779.
Closes https://github.com/astral-sh/uv/issues/1988.
## Test Plan
- Verified that, on my Windows machine, I was able to install `requests`
into a global environment with: `cargo run pip install requests --python
'C:\\Users\\crmarsh\\AppData\\Local\\Programs\\Python\\Python3.12\\python.exe'`,
then `python` and `import requests`.
- Verified that, on macOS, I was able to install `requests` into a
global environment installed via Homebrew with: `cargo run pip install
requests --python $(which python3.8)`.
Address a few pedantic lints
The lints are separated into separate commits so they can be reviewed
individually.
I've not added enforcement for any of these lints, but that could be
added if desirable.
## Summary
We were applying every constraint to every dependency. This is
"harmless" in practice since this is just an optimization, but we thus
had false negatives ~every time which could lead to wasted work.
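A minimal sketch of the fix with simplified types (not uv's internal representation): only apply the constraints whose package name matches the dependency under consideration, rather than every constraint.

```rust
struct Requirement {
    name: String,
    specifier: String,
}

/// Yield only the constraints that apply to this dependency.
fn constraints_for<'a>(
    dependency: &'a Requirement,
    constraints: &'a [Requirement],
) -> impl Iterator<Item = &'a Requirement> {
    constraints
        .iter()
        .filter(move |constraint| constraint.name == dependency.name)
}

fn main() {
    let constraints = vec![
        Requirement { name: "anyio".into(), specifier: "<4".into() },
        Requirement { name: "idna".into(), specifier: "==3.6".into() },
    ];
    let dep = Requirement { name: "anyio".into(), specifier: String::new() };
    // Previously both constraints would have been considered; now only `anyio`'s.
    assert_eq!(constraints_for(&dep, &constraints).count(), 1);
}
```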
## Summary
If a `pyproject.toml` or similar is changed within an editable, we
should avoid passing our audit check (and thus re-install the package).
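A rough sketch of one way such a freshness check could look (mtime-based; this is an assumed approach, not necessarily uv's exact mechanism): the installed editable only counts as up-to-date if its recorded install time is at least as new as the metadata files in the source tree.

```rust
use std::path::Path;
use std::time::SystemTime;

/// Is the editable at `source_tree` still fresh relative to when it was installed?
fn is_fresh(source_tree: &Path, installed_at: SystemTime) -> std::io::Result<bool> {
    for name in ["pyproject.toml", "setup.py", "setup.cfg"] {
        let path = source_tree.join(name);
        if let Ok(metadata) = path.metadata() {
            // A metadata file newer than the install means the audit check should fail.
            if metadata.modified()? > installed_at {
                return Ok(false);
            }
        }
    }
    Ok(true)
}
```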
Closes https://github.com/astral-sh/uv/issues/1913.
## Summary
When a virtual environment contains multiple packages with the same
name, we no longer throw a hard error. Instead:
- In `uv pip freeze`, we list all versions.
- In `uv pip uninstall`, we uninstall all versions.
- In `uv pip install`, we uninstall all versions prior to installing a
new version.
Closes https://github.com/astral-sh/uv/issues/1848.
Read the key for uv from `pyvenv.cfg` as `uv` instead of `gourgeist`. I
missed that we're also reading `pyvenv.cfg` when reviewing #1852.
We could check for `gourgeist` for backwards compatibility, but I think
it's fine this way.
## Summary
Added `uv` to the list of preserved packages when building the
installer plan. With this change, `uv` is not removed when, for
example, running `python -m uv pip sync requirements.txt` where
`requirements.txt` does not contain `uv` but `uv` is installed in that
venv.
Closes #1631
## Test Plan
Went through the example attached to
https://github.com/astral-sh/uv/issues/1631 and did not see the `uv` deletion
in the output:
```
$ python -m uv pip sync requirements.txt
Installed 1 package in 20ms
+ ruff==0.2.2
```
## Summary
It turns out that it's not uncommon to end up with repeated packages in
requirements files when running `pip-sync`, e.g., you might have
`anyio==4.0.0` specified multiple times. This PR relaxes our assertions
in the install plan to allow such repeated packages, as long as the
requirement markers are exactly the same (i.e., they are truly
duplicates).
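A sketch of the relaxed check with a simplified representation (not uv's actual plan types): a repeated package is tolerated only if the later entry is identical to the one already seen, i.e., a true duplicate.

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq, Eq)]
struct Requirement {
    name: String,
    specifier: String,
    markers: Option<String>,
}

fn check_duplicates(requirements: &[Requirement]) -> Result<(), String> {
    let mut seen: HashMap<&str, &Requirement> = HashMap::new();
    for requirement in requirements {
        match seen.get(requirement.name.as_str()) {
            // An identical repetition (same specifier and markers) is fine.
            Some(existing) if *existing == requirement => {}
            Some(_) => return Err(format!("conflicting entries for `{}`", requirement.name)),
            None => {
                seen.insert(requirement.name.as_str(), requirement);
            }
        }
    }
    Ok(())
}
```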
Closes https://github.com/astral-sh/uv/issues/1552.
First, replace all usages in files in-place. I used my editor for this.
If someone wants to add a one-liner that'd be fun.
Then, update directory and file names:
```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```
Then add all the files again
```
# Add all the files again
git add crates
git add python/uv
# This one needs a force-add
git add -f crates/uv-trampoline
```