## Summary
This PR adds a more flexible cache invalidation abstraction for uv, and
uses that new abstraction to improve support for dynamic metadata.
Specifically, instead of relying solely on a timestamp, we now pass
around a `CacheInfo` struct which (as of now) contains
`Option<Timestamp>` and `Option<Commit>`. The `CacheInfo` is saved in
`dist-info` as `uv_cache.json`, so we can test already-installed
distributions for cache validity (along with testing _cached_
distributions for cache validity).
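As a rough illustration (field and type names here are stand-ins, not uv's actual definitions), the shape is something like:
```rust
use serde::{Deserialize, Serialize};

// Illustrative stand-ins for uv's internal types.
type Timestamp = u128; // e.g., an mtime in nanoseconds
type Commit = String; // e.g., a Git commit SHA

/// Rough shape of the struct described above; serialized to `uv_cache.json`
/// inside the installed distribution's `dist-info`.
#[derive(Debug, Default, Serialize, Deserialize)]
struct CacheInfo {
    timestamp: Option<Timestamp>,
    commit: Option<Commit>,
}
```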
Beyond the defaults (`pyproject.toml`, `setup.py`, and `setup.cfg`
changes), users can also specify additional cache keys, and it's easy
for us to extend support in the future. Right now, cache keys can either
be instructions to include the current commit (for `setuptools_scm` and
similar) or file paths (for `hatch-requirements-txt` and similar):
```toml
[tool.uv]
cache-keys = [{ file = "requirements.txt" }, { git = true }]
```
This change should be fully backwards compatible.
Closes https://github.com/astral-sh/uv/issues/6964.
Closes https://github.com/astral-sh/uv/issues/6255.
Closes https://github.com/astral-sh/uv/issues/6860.
## Summary
We now persist the `ResolverInstallerOptions` when writing out a tool
receipt. When upgrading, we grab the saved options and merge them with
the command-line arguments and user-level filesystem settings (CLI >
receipt > filesystem).
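A minimal sketch of that precedence, assuming an `Options` struct with optional fields (names here are illustrative):
```rust
#[derive(Clone, Default)]
struct Options {
    index_url: Option<String>,
    prerelease: Option<bool>,
}

impl Options {
    /// Prefer fields from `self`, falling back to `other` where unset.
    fn or(self, other: Options) -> Options {
        Options {
            index_url: self.index_url.or(other.index_url),
            prerelease: self.prerelease.or(other.prerelease),
        }
    }
}

/// CLI > receipt > filesystem.
fn merge(cli: Options, receipt: Options, filesystem: Options) -> Options {
    cli.or(receipt).or(filesystem)
}
```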
## Summary
If we just created an entrypoint script, we can of course set the
permissions (we just created it). However, if we're copying from the
cache, we might _not_ own the file. In that case, if we need to change
the permissions (we shouldn't need to, since the script is likely already
executable -- we set the permissions when we unzip, but I guess they
could fail to be set properly in the zip itself), we have to copy it
instead.
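A rough sketch of that fallback, assuming a Unix target (the helper name and temp-file naming are made up):
```rust
use std::fs;
use std::io;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

/// Make a linked script executable; if we can't chmod it (because we don't
/// own the cached file), replace the link with a copy we do own.
fn ensure_executable(path: &Path) -> io::Result<()> {
    let mut perms = fs::metadata(path)?.permissions();
    if perms.mode() & 0o111 != 0 {
        return Ok(()); // already executable, nothing to do
    }
    perms.set_mode(perms.mode() | 0o755);
    if fs::set_permissions(path, perms.clone()).is_err() {
        let tmp = path.with_extension("tmp"); // illustrative temp name
        fs::copy(path, &tmp)?;
        fs::set_permissions(&tmp, perms)?;
        fs::rename(&tmp, path)?;
    }
    Ok(())
}
```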
Closes https://github.com/astral-sh/uv/issues/5581.
## Summary
Adds a `--relocatable` CLI arg to `uv venv`. This flag does two things:
* ensures that the associated activation scripts do not rely on a hardcoded
absolute path to the virtual environment (to the extent possible; `.csh` and
`.nu` are left as-is)
* persists a `relocatable` flag in `pyvenv.cfg`.
The flag in `pyvenv.cfg` in turn instructs the wheel `Installer` to create
script entrypoints in a relocatable way (use the `exec` trick + `dirname $0`
on POSIX; use a relative path to `python[w].exe` on Windows).
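For reference, a relocatable POSIX launcher header looks roughly like the following (shown as a Rust constant for illustration; the exact text uv emits may differ):
```rust
/// A `sh`/Python polyglot header: `sh` executes the `exec` line, re-invoking
/// the script with the `python` that sits next to it, while Python parses
/// the same lines as a string literal and skips over them.
const RELOCATABLE_SHEBANG: &str = r#"#!/bin/sh
'''exec' "$(dirname -- "$(realpath -- "$0")")"/python "$0" "$@"
' '''
"#;
```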
Fixes: #3863
## Test Plan
* Relocatable console scripts covered as additional scenarios in
existing test cases.
* Integration testing of boilerplate generation in `venv`.
* Manual testing of `uv venv` with and without `--relocatable`
## Summary
Addresses this [issue](https://github.com/astral-sh/uv/issues/5147) by
adding support for symbolic links as a link mode when installing or
syncing dependencies.
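A minimal sketch of what a symlink link mode might look like on Unix (the function name is illustrative; Windows would need a different strategy):
```rust
use std::io;
use std::path::Path;

/// Point the installed file at its cache entry via a symlink, rather than
/// copying or hardlinking, so the environment shares storage with the cache.
#[cfg(unix)]
fn symlink_file(cache_entry: &Path, target: &Path) -> io::Result<()> {
    std::os::unix::fs::symlink(cache_entry, target)
}
```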
## Summary
Packages that provide scripts that _aren't_ Python entrypoints need to
be respected in `uv tool install`. For example, Ruff ships a script in
`ruff-0.5.0.data/scripts`.
Unfortunately, the `.data` directory doesn't exist in the virtual
environment at all (it's removed, per the spec, after install). So this
PR changes the entry point detection to look at the `RECORD` file, which
is the only evidence that the scripts were installed.
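A rough sketch of that detection, assuming RECORD's standard `path,hash,size` rows (the helper name is invented):
```rust
/// Collect RECORD entries (`path,hash,size` rows) that resolve into the
/// environment's scripts directory; after installation, these rows are the
/// only trace of files that came from `ruff-0.5.0.data/scripts` and similar.
fn installed_scripts<'a>(record: &'a str, scripts_dir: &str) -> Vec<&'a str> {
    record
        .lines()
        .filter_map(|line| line.split(',').next())
        .filter(|path| path.starts_with(scripts_dir))
        .collect()
}
```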
Closes https://github.com/astral-sh/uv/issues/4691.
## Test Plan
`cargo run uv tool install ruff` (snapshot tests to come)
This is the minimal "working" implementation. In summary, we:
- Resolve the requested requirements
- Create an environment at `$UV_STATE_DIR/tools/$name`
- Inspect the `dist-info` for the main requirement to determine its
entry point scripts
- Link the entry points from a user-executable directory
(`$XDG_BIN_HOME`) to the environment bin
- Create an entry at `$UV_STATE_DIR/tools/tools.toml` tracking the
user's request
The idea with `tools.toml` is that it allows us to perform upgrades and
syncs, retaining the original user request (similar to declarations in a
`pyproject.toml`). I imagine using a similar schema in the
`pyproject.toml` in the future if/when we add project-level tools. I'm
also considering exposing `tools.toml` in the standard uv configuration
directory instead of the state directory, but it seems nice to tuck it
away for now while we iterate on it. Installing a tool won't perform a
sync of other tool environments; we'll probably have an explicit `uv
tool sync` command for that?
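As a sketch, the receipt might deserialize into something like this (schema and field names are illustrative, not final):
```rust
use serde::{Deserialize, Serialize};

/// One entry in `tools.toml`, retaining the user's original request so that
/// upgrades and syncs can re-resolve from it.
#[derive(Debug, Serialize, Deserialize)]
struct ToolReceipt {
    /// Requirements exactly as the user requested them.
    requirements: Vec<String>,
    /// Entry points linked into the user-executable directory.
    entrypoints: Vec<String>,
}
```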
I've split out todos into follow-up pull requests:
- #4509 (failing on Windows)
- #4501
- #4504

Closes #4485
Windows does not support cloning whole directories, so we clone each
file instead.
Closes #3547
## Test Plan
Ran `uv pip install setuptools --link-mode=clone` manually
## Summary
This PR adds basic struct definitions along with a "workspace" concept
for discovering settings. (The "workspace" terminology is used to match
Ruff; I did not invent it.)
A few notes:
- We discover any `pyproject.toml` or `uv.toml` file in any parent
directory of the current working directory. (We could adjust this to
look at the directories of the input files.)
- We don't actually do anything with the configuration yet; but those
PRs are large and I want this to be reviewed in isolation.
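A minimal sketch of that discovery walk (names illustrative):
```rust
use std::path::{Path, PathBuf};

/// Walk from `start` up through its ancestors, returning the first
/// `uv.toml` or `pyproject.toml` encountered.
fn find_settings(start: &Path) -> Option<PathBuf> {
    start.ancestors().find_map(|dir| {
        ["uv.toml", "pyproject.toml"]
            .iter()
            .map(|name| dir.join(name))
            .find(|candidate| candidate.is_file())
    })
}
```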
Inspired by https://github.com/astral-sh/uv/issues/2964, we now properly
log hardlink failures, e.g., when the cache is in a Docker container but
the venv is in a bind mount:
```
DEBUG Failed to hardlink `/code/venv/uv/lib/python3.12/site-packages/asgiref-3.8.1.dist-info/WHEEL` to `/root/.cache/uv/archive-v0/nnpkKgUoM3LMxcNDmEKJQ/asgiref-3.8.1.dist-info/WHEEL`, attempting to copy files as a fallback
```
## Summary
It turns out that pip does _not_ validate the normalization of the
version specifier in the `.dist-info` directory. In particular, it seems
that some tools replace the `+` in a local version segment with a `_`.
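One way to implement the relaxed check (a sketch; uv's actual comparison may differ):
```rust
/// Treat `_` in a `.dist-info` directory's version segment as `+`, since
/// some tools write `1.0.1_local` where the version is `1.0.1+local`.
fn dist_info_version_matches(dir_version: &str, version: &str) -> bool {
    dir_version.replace('_', "+") == version
}
```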
Closes https://github.com/astral-sh/uv/issues/2424.
## Summary
This will make it easier to use the paths returned by `distutils.py`
(for some cases). No code or behavior changes; just removing some fields
we don't need.
## Summary
This was a missed find-and-replace. We shouldn't assume `layout.platlib`
here, since `RECORD` will be written to `site_packages` (which could be
`layout.purelib`).
This is hard to reproduce. You need a _fresh_ environment where
`purelib` and `platlib` differ (which isn't the case for virtualenvs, at
least typically), and you need to be installing a new package that is a
purelib. I tested it by manually changing `platlib` to point to a
different path.
Closes https://github.com/astral-sh/uv/issues/2064.
## Summary
This PR adds a `--python` flag that allows users to provide a specific
Python interpreter into which `uv` should install packages. This would
replace the `VIRTUAL_ENV=` workaround that folks have been using to
install into arbitrary, system environments, while _also_ actually being
correct for installing into non-virtual environments, where the bin and
site-packages paths can differ.
The approach taken here is to use `sysconfig.get_paths()` to get the
correct paths from the interpreter, and then use those for determining
the `bin` and `site-packages` directories, rather than constructing them
based on hard-coded expectations for each platform.
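A sketch of that probe, assuming we shell out to the provided interpreter (uv's real query script gathers more fields):
```rust
use std::io;
use std::process::Command;

/// Ask the target interpreter for its install paths (`purelib`, `platlib`,
/// `scripts`, ...) as JSON, rather than guessing them per platform.
fn query_paths(python: &str) -> io::Result<String> {
    let output = Command::new(python)
        .args([
            "-c",
            "import json, sysconfig; print(json.dumps(sysconfig.get_paths()))",
        ])
        .output()?;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}
```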
Closes https://github.com/astral-sh/uv/issues/1396.
Closes https://github.com/astral-sh/uv/issues/1779.
Closes https://github.com/astral-sh/uv/issues/1988.
## Test Plan
- Verified that, on my Windows machine, I was able to install `requests`
into a global environment with: `cargo run pip install requests --python
'C:\\Users\\crmarsh\\AppData\\Local\\Programs\\Python\\Python3.12\\python.exe'`,
then `python` and `import requests`.
- Verified that, on macOS, I was able to install `requests` into a
global environment installed via Homebrew with: `cargo run pip install
requests --python $(which python3.8)`.
Users expect pip to have `pip`, `pip3`, and `pip3.x` entrypoints. But pip
ships as a universal wheel, so it contains the `pip3.x` entrypoint for the
Python version it was built with. To fix this, pip special-cases itself
when installing
(3898741e29/src/pip/_internal/operations/install/wheel.py (L283)),
replacing the wheel's entrypoint with one for the current version. We now
do the same.
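A sketch of the special case (function and parameter names invented):
```rust
/// When installing pip itself, rewrite a versioned `pip3.x` console script
/// to match the minor version of the interpreter we're installing into.
fn fix_pip_entrypoint(name: &str, python_minor: u32) -> String {
    if name.starts_with("pip3.") {
        format!("pip3.{python_minor}")
    } else {
        name.to_owned()
    }
}
```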
Fixes #1593
## Summary
Fixes #1444.
In situations where the installer fails to perform a reflink, a regular
file copy is also attempted, as a fallback. This circumvents issues with
linking files across filesystems or volumes.
## Test Plan
N/A
## Summary
This _could_ fix https://github.com/astral-sh/uv/issues/1454, but I'm
not sure. I was able to replicate by forcing a bunch of error states.
But, in short, if we fail to hardlink on the initial copy due to a file
existing, and then fail _again_, we fall back to copying. But if we copy,
then the tempfile doesn't exist, and so the `fs_err::rename(&tempfile,
&out_path)?;` will fail with "File not found".
This PR just ensures that the cases are explicitly mutually exclusive:
we only attempt to rename if the hardlink succeeded.
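A sketch of the mutually exclusive paths (simplified; the real code also handles the initial failure modes):
```rust
use std::fs;
use std::io;
use std::path::Path;

/// Only rename the tempfile if the hardlink into it succeeded; the copy
/// fallback writes the target directly, so there is no tempfile to rename.
fn install_file(src: &Path, tmp: &Path, dst: &Path) -> io::Result<()> {
    match fs::hard_link(src, tmp) {
        Ok(()) => fs::rename(tmp, dst),
        Err(_) => fs::copy(src, dst).map(|_| ()),
    }
}
```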
First, replace all usages in files in-place. I used my editor for this.
If someone wants to add a one-liner that'd be fun.
Then, update directory and file names:
```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```
Then add all the files again
```
# Add all the files again
git add crates
git add python/uv
# This one needs a force-add
git add -f crates/uv-trampoline
```
In Rust, `fs::copy` automatically preserves permissions (see:
https://doc.rust-lang.org/std/fs/fn.copy.html).
Elsewhere, when copying from the zip archive out to the cache, we can
set permissions during file creation, rather than as a separate call.
Both of these should be slightly more efficient.
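For the second point, a sketch on Unix (the flags shown are illustrative):
```rust
use std::fs::{File, OpenOptions};
use std::io;
use std::os::unix::fs::OpenOptionsExt;
use std::path::Path;

/// Create the output file with its final mode up front, avoiding a separate
/// `set_permissions` call after writing.
fn create_with_mode(path: &Path, mode: u32) -> io::Result<File> {
    OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true)
        .mode(mode) // applied at creation time
        .open(path)
}
```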
## Background
In virtual environments, we want to install Python programs as console
commands, e.g. `black .` over `python -m black .`. They may be called
[entrypoints](https://packaging.python.org/en/latest/specifications/entry-points/)
or scripts. For entrypoints, we're given a module name and a function to
call in that module.
On Unix, we generate a minimal Python script launcher. Text files are
runnable on Unix by adding a shebang at the top, e.g.
```python
#!/usr/bin/env python
```
will make the operating system run the file with the current Python
interpreter. A venv launcher for black in `/home/ferris/colorize/.venv`
(module name: `black`, function to call: `patched_main`) would look like
this:
```python
#!/home/ferris/colorize/.venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from black import patched_main
if __name__ == "__main__":
    sys.argv[0] = re.sub(r"(-script\.pyw|\.exe)?$", "", sys.argv[0])
    sys.exit(patched_main())
```
On Windows, this doesn't work; we can only rely on launching `.exe`
files.
## Summary
We use posy's Rust implementation of a trampoline, which is based on
distlib's C++ implementation. We pre-build a minimal exe and append the
launcher script as a stored zip archive behind it. The exe will look for
the venv's Python interpreter next to it and use it to execute the
appended script.
The changes in this PR make the `black` entrypoint work:
```powershell
cargo run -- venv .venv
cargo run -q -- pip install black
.\.venv\Scripts\black --version
```
Integration with our existing tests will be done in follow-up PRs.
## Implementation and Details
I've vendored the posy trampoline crate. It is a formatted, renamed, and
slightly modified (for embedding) version of
https://github.com/njsmith/posy/pull/28.
The posy launchers are smaller than the distlib launchers, 16K vs 106K
for black. Currently only `x86_64-pc-windows-msvc` is supported. The
crate requires a nightly compiler for its no-std binary size tricks.
On Windows, an application can be launched with a console or without
(to create windows instead), which requires two different launchers. The
GUI launcher will use `pythonw.exe`, while the console launcher uses
`python.exe`.
On Ubuntu with Python 3.10,
```
cargo run -q -- pip-install --find-links https://storage.googleapis.com/jax-releases/jax_cuda_releases.html "jax[cuda12_pip]==0.4.23"
```
non-deterministically (but, for me, consistently) fails to install with
messages such as
```
error: Failed to install: nvidia_nccl_cu12-2.19.3-py3-none-manylinux1_x86_64.whl (nvidia-nccl-cu12==2.19.3)
Caused by: failed to remove file `/home/konsti/projects/puffin/.venv/lib/python3.10/site-packages/nvidia/__init__.py`
Caused by: No such file or directory (os error 2)
```
```
error: Failed to install: nvidia_cublas_cu12-12.3.4.1-py3-none-manylinux1_x86_64.whl (nvidia-cublas-cu12==12.3.4.1)
Caused by: Replacing an existing file or directory failed
```
```
error: Failed to install: nvidia_cuda_nvcc_cu12-12.3.107-py3-none-manylinux1_x86_64.whl (nvidia-cuda-nvcc-cu12==12.3.107)
Caused by: failed to hardlink file from /home/konsti/.cache/puffin/wheels-v0/pypi/nvidia-cuda-nvcc-cu12/nvidia_cuda_nvcc_cu12-12.3.107-py3-none-manylinux1_x86_64/nvidia/__init__.py to /home/konsti/projects/puffin/.venv/lib/python3.10/site-packages/nvidia/__init__.py
Caused by: File exists (os error 17)
```
We install a lot of nvidia packages that all contain
`nvidia/__init__.py`, since they all install themselves into the
`nvidia` module:
```
nvidia-cublas-cu12==12.3.4.1
nvidia-cuda-cupti-cu12==12.3.101
nvidia-cuda-nvcc-cu12==12.3.107
nvidia-cuda-nvrtc-cu12==12.3.107
nvidia-cuda-runtime-cu12==12.3.101
nvidia-cudnn-cu12==8.9.7.29
nvidia-cufft-cu12==11.0.12.1
nvidia-cusolver-cu12==11.5.4.101
nvidia-cusparse-cu12==12.2.0.103
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.3.101
```
```
$ tree -L 1 .venv/lib/python3.10/site-packages/nvidia
.venv/lib/python3.10/site-packages/nvidia
├── cublas
├── cuda_cupti
├── cuda_nvcc
├── cuda_nvrtc
├── cuda_runtime
├── cudnn
├── cufft
├── cusolver
├── cusparse
├── __init__.py
├── nccl
└── nvjitlink
```
When installing, we get a race condition, since each package installation
runs on its own thread:
* Installer Thread 1 creates `nvidia/__init__.py`
* Installer Thread 2 sees an existing `nvidia/__init__.py`
* Installer Thread 3 sees an existing `nvidia/__init__.py`
* Installer Thread 2 removes `nvidia/__init__.py`
* Installer Thread 3 tries to remove `nvidia/__init__.py`, it doesn't
exist anymore -> failure.
We switch to a new strategy: when the target file exists, we don't
remove it, but instead hardlink the source file to a tempfile first,
then rename the tempfile to the target file. Renaming is considered an
atomic operation.
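A sketch of the strategy (simplified; the real code picks a properly unique temp name):
```rust
use std::fs;
use std::io;
use std::path::Path;

/// Hardlink `src` to a tempfile next to `dst`, then atomically rename it
/// over `dst`, so concurrent installers never observe a missing file.
fn hardlink_atomic(src: &Path, tmp: &Path, dst: &Path) -> io::Result<()> {
    fs::hard_link(src, tmp)?;
    fs::rename(tmp, dst) // atomic, even if `dst` already exists
}
```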
I've put the logging at debug level because these cases indicate a
conflict between two packages, while being rare.
Closes #925
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Noticed these when working on something unrelated. Generally:
- Prefer `entry.file_type()` over `entry.path().is_file()` or similar,
as the former is almost always free on Unix.
- Call `entry.path()` once, since it allocates internally (returns a
`PathBuf`).
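Illustratively:
```rust
use std::fs;
use std::io;
use std::path::Path;

fn visit(dir: &Path) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        // `file_type()` comes from the directory entry itself on most Unix
        // filesystems, avoiding an extra `stat`.
        let file_type = entry.file_type()?;
        // `path()` allocates a fresh `PathBuf` on every call; call it once.
        let path = entry.path();
        if file_type.is_file() {
            println!("{}", path.display());
        }
    }
    Ok(())
}
```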
I've tried to investigate puffin's performance with respect to builds
and parallelism in general, but found the previous instrumentation too
granular. I've tried to add spans to every function that needs
noticeable IO or CPU resources, without creating duplication. This also
fixes some incorrect tracing usage on async functions
(https://docs.rs/tracing/latest/tracing/struct.Span.html#in-asynchronous-code)
and some spans that weren't actually entered.
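For async functions, that means preferring `#[instrument]` (or instrumenting the future) over manually entering a span and holding the guard across an `.await`; a minimal sketch (function name invented):
```rust
use tracing::instrument;

/// The attribute creates and enters the span correctly on every poll of the
/// future, unlike a manually held `span.enter()` guard across an `.await`.
#[instrument(skip(source))]
async fn build_source_distribution(name: &str, source: Vec<u8>) -> usize {
    // ... IO/CPU-heavy work would go here ...
    source.len()
}
```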
Similar to #516, but for individual files.
## Test Plan
Ran:
```sh
cargo run -p puffin-cli -- pip-uninstall plaid-python
mkdir -p /Users/crmarsh/workspace/puffin/.venv/lib/python3.10/site-packages/tests
echo "x=1" > /Users/crmarsh/workspace/puffin/.venv/lib/python3.10/site-packages/tests/__init__.py
cargo run -p puffin-cli -- pip-sync requirements.txt --no-cache --verbose
```
Previously, when installing a package we would delete the target
directory before copying (or linking) the contents of the package.
However, this means that we did not properly support namespace packages,
which can share a target directory. Instead, the last package to be
installed would override existing packages. Since we install packages
in parallel, this could result in a race condition where the target
directory already exists, which is not allowed when using `clonefile`.
See example error in #515.
c7e63d2dce
provides a regression test for this — it fails on `main`.
Here, we implement a recursive merge when the target directory already
exists. Both packages will be installed into the same directory. We no
longer delete the target directory, which seems okay since we uninstall
packages before installing now.
When files conflict, we will likely still throw an error. The correct
behavior to implement in this case is unclear: if we just take "first
write wins" or "last write wins", we could end up with some files from
one package and some from another, resulting in two broken packages. A
possible solution here is to lock the target directories while copying.
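A sketch of the recursive merge (simplified to plain copies; names invented):
```rust
use std::fs;
use std::io;
use std::path::Path;

/// Merge `src` into `dst` without deleting anything that already exists, so
/// two wheels can share a namespace directory like `nvidia/`.
fn merge_dirs(src: &Path, dst: &Path) -> io::Result<()> {
    fs::create_dir_all(dst)?; // fine if `dst` already exists
    for entry in fs::read_dir(src)? {
        let entry = entry?;
        let target = dst.join(entry.file_name());
        if entry.file_type()?.is_dir() {
            merge_dirs(&entry.path(), &target)?;
        } else {
            fs::copy(entry.path(), &target)?;
        }
    }
    Ok(())
}
```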
Ensure that we consistently show a path for all IO errors in
install-wheel-rs, either through `fs_err` (preferred) or, as a fallback,
via a custom error type. For zip reading errors, we rely on the caller
to add the name and/or location of the wheel.
## Summary
Closes https://github.com/astral-sh/puffin/issues/390.
## Test Plan
Installed `jupyter_core==5.5.0`, then removed the `jupyter_core` and
`jupyter_core-5.5.0.dist-info` directories from my virtualenv manually,
but left `jupyter.py`. I then re-ran `puffin pip-compile`, and verified
that it errored on `main` but succeeded here.
One of the most common errors I observed is build failures due to
missing header files. On Ubuntu, this generally means that you need to
install some `<...>-dev` package that the documentation tells you about,
e.g. [mysqlclient](https://github.com/PyMySQL/mysqlclient#linux) needs
`default-libmysqlclient-dev`, [some psycopg
versions](https://www.psycopg.org/psycopg3/docs/basic/install.html#local-installation)
(I remember that this was always required at some earlier point) require
`libpq-dev`, and pygraphviz wants `graphviz-dev`. This is quite common
for many scientific packages (where conda has an advantage, because it
can provide those packages as dependencies).
The error message can be completely inscrutable if you're just a Python
programmer (or user) and not a C programmer (example: pygraphviz):
```
warning: no files found matching '*.png' under directory 'doc'
warning: no files found matching '*.txt' under directory 'doc'
warning: no files found matching '*.css' under directory 'doc'
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '.svn' found anywhere in distribution
no previously-included directories found matching 'doc/build'
pygraphviz/graphviz_wrap.c:3020:10: fatal error: graphviz/cgraph.h: No such file or directory
3020 | #include "graphviz/cgraph.h"
| ^~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
```
The only relevant part is `fatal error: graphviz/cgraph.h: No such file
or directory`. Why is this file not there, and how do I get it to be
there?
This is even harder to spot in pip's output, where it's 11 lines above
the last line:
[screenshot]
I've special-cased missing headers and made sure that the last line
tells you the important information: we're missing some header, please
check the documentation of {package} {version} for what to install:
[screenshot]
Scrolling up:
[screenshot]
The difference gets even clearer with a default Ubuntu terminal with its
80 columns:
[screenshot]
---
Note that the situation is better for a missing compiler; there I get:
```
[...]
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '.svn' found anywhere in distribution
no previously-included directories found matching 'doc/build'
error: command 'gcc' failed: No such file or directory
```
---
Putting the last line into Google, the first two results tell me to
`sudo apt-get install gcc`; the third even tells me about `sudo apt
install build-essential`.