## Summary
I think Camino is nice but it makes it much harder to work in
`uv-virtualenv`, since it's the _only_ crate that uses it. If we want to
use Camino, we should use it everywhere IMO.
## Summary
With this PR I've added the option to pass environment variables to the wheel
building process, through the `BuildDispatch`. While integrating uv with our
project pixi (https://github.com/prefix-dev/pixi/pull/863), we ran into this
missing capability. I've made a rough version here that could maybe use some
refinement.
### Why do we need this?
pixi allows the user to use a conda-activated prefix for wheel building, which
comes with a number of environment variables, like `PATH` but also
`CONDA_PREFIX`, amongst others. This lets the user rely on system dependencies
from conda-forge during an sdist build.
Because we use `uv` as a library, we need to pass in these options
programmatically. Additionally, in general there is nothing holding a Python
sdist back from depending on an environment variable; see e.g. the test
package: https://pypi.org/project/env-test-package/
### What about `ConfigSettings`
I think `ConfigSettings` does not suffice because e.g. CMake could function
differently when `CONDA_PREFIX` is set. Also, we do not know whether the
user-supplied backend actually supports these settings.
### Path handling
Because the user can now also supply a `PATH` in the environment map, I format
the path so that it has the following precedence (see the sketch after this
list):
1. venv scripts dir.
2. user-supplied path.
3. system path.
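A minimal sketch of that ordering, using `std::env::{split_paths, join_paths}` (the helper name and signature here are hypothetical, not the PR's actual code):

```rust
use std::env;
use std::ffi::OsString;
use std::path::PathBuf;

// Hypothetical helper: build a PATH value with the precedence listed above:
// the venv scripts dir first, then any user-supplied entries, then the
// current system PATH.
fn build_path(venv_scripts: PathBuf, user_path: Option<&str>) -> OsString {
    let mut entries = vec![venv_scripts];
    if let Some(user_path) = user_path {
        entries.extend(env::split_paths(user_path));
    }
    if let Some(system_path) = env::var_os("PATH") {
        entries.extend(env::split_paths(&system_path));
    }
    env::join_paths(entries).expect("PATH entries should not contain separators")
}
```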
### Improvements
There is some path modification and copying happening every time we use the
`run_python_script` function. I think we could improve this, but I would like
some pointers on where best to put a split-out and cached version; we might
also want to use some types to split these things up.
### Finally
I did not add any of these options to the uv executables; I would first like
to know if this is a direction we would want to go in. I'm happy to do this or
make any changes that you feel would benefit this project.
Also tagging @wolfv to keep track of this as well.
## Test Plan
---------
Co-authored-by: konsti <konstin@mailbox.org>
## Summary
This PR adds a `--python` flag that allows users to provide a specific
Python interpreter into which `uv` should install packages. This would
replace the `VIRTUAL_ENV=` workaround that folks have been using to
install into arbitrary, system environments, while _also_ actually being
correct for installing into non-virtual environments, where the bin and
site-packages paths can differ.
The approach taken here is to use `sysconfig.get_paths()` to get the
correct paths from the interpreter, and then use those for determining
the `bin` and `site-packages` directories, rather than constructing them
based on hard-coded expectations for each platform.
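A rough sketch of that approach (illustration only, assuming `python3` is on `PATH`; uv's actual interpreter query differs): ask the target interpreter for its install scheme and read the `scripts` and `purelib` entries.

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Query the target interpreter for its install-scheme paths. The
    // `scripts` and `purelib` keys correspond to the `bin` and
    // `site-packages` directories, without hard-coding per-platform layouts.
    let output = Command::new("python3")
        .args([
            "-c",
            "import json, sysconfig; print(json.dumps(sysconfig.get_paths()))",
        ])
        .output()?;
    print!("{}", String::from_utf8_lossy(&output.stdout));
    Ok(())
}
```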
Closes https://github.com/astral-sh/uv/issues/1396.
Closes https://github.com/astral-sh/uv/issues/1779.
Closes https://github.com/astral-sh/uv/issues/1988.
## Test Plan
- Verified that, on my Windows machine, I was able to install `requests`
into a global environment with: `cargo run pip install requests --python
'C:\\Users\\crmarsh\\AppData\\Local\\Programs\\Python\\Python3.12\\python.exe'`,
then `python` and `import requests`.
- Verified that, on macOS, I was able to install `requests` into a
global environment installed via Homebrew with: `cargo run pip install
requests --python $(which python3.8)`.
## Summary
When a `pyproject.toml` is provided directly to `uv pip compile`, we
were failing to resolve recursive extras. The solution I settled on here
is to flatten them recursively when determining the requirements
upfront.
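A rough sketch of that flattening (hypothetical types, not the actual resolver code), where an extra's requirements can reference the same package with further extras enabled:

```rust
use std::collections::{BTreeMap, BTreeSet};

// Hypothetical illustration: `extras` maps each extra to its requirements,
// where `Requirement::SelfExtra` models "this package with another extra
// enabled" (the recursive case). Flatten by expanding those references.
enum Requirement {
    Package(String),
    SelfExtra(String),
}

fn flatten(
    extras: &BTreeMap<String, Vec<Requirement>>,
    extra: &str,
    seen: &mut BTreeSet<String>,
    out: &mut Vec<String>,
) {
    if !seen.insert(extra.to_string()) {
        return; // already expanded (also guards against cycles)
    }
    for requirement in extras.get(extra).into_iter().flatten() {
        match requirement {
            Requirement::Package(name) => out.push(name.clone()),
            Requirement::SelfExtra(inner) => flatten(extras, inner, seen, out),
        }
    }
}
```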
Closes https://github.com/astral-sh/uv/issues/1987.
## Test Plan
`cargo test`
Hi, love your work on `uv` 👋!
Opening a draft PR early to check whether there are any existing Rust
table-formatting libs that I am unaware of (either already in `uv`/`ruff`, or
in the Rust ecosystem) before spending much time reinventing the wheel myself
and cleaning it up. Any other pointers are also welcome (e.g. on the editable
filtering).
Editable project locations in `uv pip list` include the file scheme
(`file://`), whereas they are omitted in `pip list`. Is this desired, or
should it replicate pip?
## Summary
Implementation for #1401
`--editable` flag is implemented.
`--outdated` and `--uptodate` are out of scope for this PR (they require
latest-version information and the wheel/sdist type).
## Test Plan
Not yet implemented, as I couldn't locate the tests for `uv pip freeze`.
Could we compare to `pip` in
`scripts/compare_with_pip/compare_with_pip.py`?
## Summary
For context, we have three extraction paths:
- untar (async) - used for any `.tar.gz`, local or remote.
- unzip (async) - used to unzip remote wheels, or local or remote source
distributions.
- unzip (sync) - used to unzip locally-available wheels into the cache.
We use three different crates for these:
- [`tokio-tar`](https://github.com/vorot93/tokio-tar)
- [`async-zip`](https://github.com/Majored/rs-async-zip)
- [`zip-rs`](https://github.com/zip-rs/zip)
These all seem to have different support for symlinks:
- `tokio-tar` tries to create a symlink (which works fine on Unix but
errors on Windows, since we typically don't have elevated permissions).
- `async-zip` _seems_ to write the target contents directly to the file
(which is what we want).
- `zip-rs` _apparently_ writes the _name_ of the target to the file
(which isn't what we want).
Thankfully, symlinks are not allowed in wheels
(https://github.com/pypa/pip/issues/5919,
https://discuss.python.org/t/symbolic-links-in-wheels/1945), so we can
ignore `zip-rs`.
For `tokio-tar`, we now _skip_ (and warn) if we see a symlink on
Windows. We could do what pip does, and recursively copy, but it's
difficult because we don't have `Seek` on the file. (Alternatively, we
could use hard links and junctions, though those also might need to
exist already.) Let's see how far this gets us.
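A minimal sketch of the skip-and-warn behavior (shown with the synchronous `tar` crate for brevity; the actual change is in the async `tokio-tar` path):

```rust
use std::fs::File;
use std::path::Path;
use tar::Archive;

fn unpack_skipping_symlinks(path: &Path, dst: &Path) -> std::io::Result<()> {
    let mut archive = Archive::new(File::open(path)?);
    for entry in archive.entries()? {
        let mut entry = entry?;
        // On Windows, creating symlinks typically requires elevated
        // permissions, so skip the entry and warn instead of failing.
        if cfg!(windows) && entry.header().entry_type().is_symlink() {
            eprintln!(
                "warning: skipping symlink in archive: {}",
                entry.path()?.display()
            );
            continue;
        }
        entry.unpack_in(dst)?;
    }
    Ok(())
}
```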
(We also no longer attempt to set permissions on symlinks on Unix, which
caused another failure.)
Closes https://github.com/astral-sh/uv/issues/1858.
Similar to https://github.com/astral-sh/ruff/pull/8034
Adds more version information so it's clear what revision the user is on:
```
❯ cargo run -q -- --version
uv 0.1.10 (daa8565a7 2024-02-23)
❯ cargo run -q -- -V
uv 0.1.10
❯ cargo run -q -- version
uv 0.1.10 (daa8565a7 2024-02-23)
❯ cargo run -q -- version --output-format json
{
  "version": "0.1.10",
  "commit_info": {
    "short_commit_hash": "daa8565a7",
    "commit_hash": "daa8565a75249305821fdc34ace085060c082ba3",
    "commit_date": "2024-02-23",
    "last_tag": null,
    "commits_since_last_tag": 0
  }
}
```
Closes https://github.com/astral-sh/uv/issues/1709
Closes https://github.com/astral-sh/uv/issues/1371
Tested with the reproduction provided in #1709 which gets past the HTTP
401.
Reuses the same copying logic we introduced in
https://github.com/astral-sh/uv/pull/1874 to ensure authentication is
attached to file URLs with a realm that matches that of the index. I had
to move the authentication logic into a new crate so it could be used in
`distribution-types`.
In the future, we will want to do something more robust, like tracking all
realms with authentication in a central store and performing lookups there.
That's what `pip` does, and it allows consolidating logic like netrc lookups.
That refactor feels significant though, and I'd like to get this fixed ASAP,
so this is a minimal fix.
## Summary
This revives a PR from long ago
(https://github.com/astral-sh/uv/pull/383 and
https://github.com/zanieb/pubgrub/pull/24) that modifies how we deal
with dependencies that are declared multiple times within a single
package.
To quote from the originating PR:
> Uses an experimental pubgrub branch (#370) that allows us to handle
multiple version ranges for a single dependency to the solver which
results in better error messages because the derivation tree contains
all of the relevant versions. Previously, the version ranges were merged
(by us) in the resolver before handing them to pubgrub since only one
range could be provided per package. Since we don't merge the versions
anymore, we no longer give the solver an empty range for conflicting
requirements; instead the solver comes to that conclusion from the
provided versions. You can see the improved error message for direct
dependencies in [this
snapshot](https://github.com/astral-sh/puffin/pull/383/files#diff-a0437f2c20cde5e2f15199a3bf81a102b92580063268417847ec9c793a115bd0).
The main issue with that PR was around its handling of URL dependencies,
so this PR _also_ refactors how we handle those. Previously, we stored
URL dependencies on `PubGrubPackage`, but they were omitted from the
hash and equality implementations of `PubGrubPackage`. This led to some
really careful codepaths wherein we had to ensure that we always visited
URLs before non-URL packages, so that the URL-inclusive versions were
included in any hashmaps, etc. I considered preserving this approach,
but it would require us to rely on lots of internal details of PubGrub
(since we'd now be relying on PubGrub to merge those packages in the
"right" order).
So, instead, we now _always_ set the URL on a given package, whenever
that package was _given_ a URL upfront. I think this is easier to reason
about: if the user provided a URL for `flask`, then we should just
always add the URL for `flask`. If we see some other URL for `flask`, we
error, like before. If we see some unknown URL for `flask`, we error,
like before.
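To sketch the shape of that change (hypothetical types, not uv's actual `PubGrubPackage` definition): the URL becomes part of the package identity, participating in `Hash`/`Eq` rather than being excluded from them.

```rust
use url::Url;

// Hypothetical stand-in for the real type: deriving Hash/Eq over all fields
// means two `flask` packages with different URLs are distinct packages to
// the solver, instead of colliding in hashmaps.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum Package {
    Root,
    Named {
        name: String,
        /// Set whenever the user supplied a URL for this package upfront.
        url: Option<Url>,
    },
}
```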
Closes https://github.com/astral-sh/uv/issues/1522.
Closes https://github.com/astral-sh/uv/issues/1821.
Closes https://github.com/astral-sh/uv/issues/1615.
## Summary
We currently maintain separate untar methods for sync and async, but we
only use the sync version when the user provides a local source
distribution. (Otherwise, we untar as we download the distribution.) In
my testing, this is actually slower anyway:
```
❯ python -m scripts.bench \
    --uv-path ./target/release/main \
    --uv-path ./target/release/uv \
    ./requirements.in --benchmark resolve-cold --min-runs 50

Benchmark 1: ./target/release/main (resolve-cold)
  Time (mean ± σ):     835.2 ms ± 107.4 ms    [User: 346.0 ms, System: 151.3 ms]
  Range (min … max):   639.2 ms … 1051.0 ms    50 runs

Benchmark 2: ./target/release/uv (resolve-cold)
  Time (mean ± σ):     750.7 ms ±  91.9 ms    [User: 345.7 ms, System: 149.4 ms]
  Range (min … max):   637.9 ms …  905.7 ms    50 runs

Summary
  './target/release/uv (resolve-cold)' ran
    1.11 ± 0.20 times faster than './target/release/main (resolve-cold)'
```
## Summary
Allows the corresponding `pypi_types` struct to use any URL, since other
installers can put those into the environment, and Poetry seems to write
invalid URLs.
If we see a distribution with an invalid URL, we just treat it as a
registry distribution, which isn't ideal, but is better than (1)
erroring, and (2) changing `Url` to `String` everywhere internally. (I'm
torn on this second option.)
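A minimal sketch of that fallback (hypothetical names, not the actual `pypi_types` code): if the recorded URL doesn't parse, classify the installed distribution as a registry distribution rather than erroring.

```rust
use url::Url;

enum InstalledDist {
    Url(Url),
    Registry,
}

fn classify(direct_url: Option<&str>) -> InstalledDist {
    // Any missing or unparseable URL falls back to the registry case.
    match direct_url.and_then(|raw| Url::parse(raw).ok()) {
        Some(url) => InstalledDist::Url(url),
        None => InstalledDist::Registry,
    }
}
```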
Closes https://github.com/astral-sh/uv/issues/1744.
## Test Plan
- Added `flask = { git = "git@github.com:pallets/flask.git", rev =
"b90a4f1f4a370e92054b9cc9db0efcb864f87ebe" }` to
`scripts/editable-installs/poetry_editable/pyproject.toml`.
- Ran `poetry install`.
- Ran `cargo run pip freeze`. Verified that it errored on `main`, but passed
here.
- Ran `cargo run pip install "flask==3.0.0"`. Verified that it
uninstalled the existing Flask, and installed a new version from the
registry.
Fixes handling of GitHub PATs in HTTPS URLs, which were otherwise dropped. We
now support the following authentication schemes:
```
git+https://<user>:<token>/...
git+https://<token>/...
```
On Windows, the username is required. We can consider adding a
special-case for this in the future, but this just matches libgit2's
behavior.
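For illustration (not the PR's code), credentials embedded in such URLs can be read back with the `url` crate; the repository and credentials below are placeholders. In the token-only form, the token occupies the username position.

```rust
use url::Url;

fn main() {
    let url = Url::parse("https://user:token@github.com/org/repo.git").unwrap();
    println!("username: {}", url.username()); // "user"
    println!("password: {:?}", url.password()); // Some("token")
}
```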
I tested with fine-grained tokens, OAuth tokens, and "classic" tokens.
There's test coverage for fine-grained tokens in CI where we use a real
private repository and PAT. Yes, the PAT is committed to make this test
usable by anyone. It has read-only permissions to the single repository,
expires Feb 1 2025, and is in an isolated organization and GitHub
account.
Does not yet address SSH authentication.
Related:
- https://github.com/astral-sh/uv/issues/1514
- https://github.com/astral-sh/uv/issues/1452
<!--
Thank you for contributing to uv! To help us out with reviewing, please
consider the following:
- Does this pull request include a summary of the change? (See below.)
- Does this pull request include a descriptive title?
- Does this pull request include references to any relevant issues?
-->
## Summary
Add the environment variable `UV_REQUEST_TIMEOUT` to allow control over HTTP
request timeouts.
Closes #1549
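A sketch of the wiring under assumptions (the default value and client setup here are illustrative, not uv's actual code):

```rust
use std::time::Duration;

fn request_timeout() -> Duration {
    // Read the timeout (in seconds) from UV_REQUEST_TIMEOUT, falling back to
    // an assumed default when unset or unparseable.
    let secs = std::env::var("UV_REQUEST_TIMEOUT")
        .ok()
        .and_then(|value| value.parse::<u64>().ok())
        .unwrap_or(300);
    Duration::from_secs(secs)
}

fn main() {
    // The duration would then be passed to the HTTP client builder,
    // e.g. reqwest's `ClientBuilder::timeout`.
    println!("timeout: {:?}", request_timeout());
}
```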
## Test Plan
I built uv using the repository's top-level Dockerfile, set the timeout to 3
seconds, and ran `uv pip install torch`.
I measured the execution time with the `time` command and confirmed that the
process finished in a time close to the timeout we set.
```bash
root@037c69228cdc:~# time UV_REQUEST_TIMEOUT=3 /uv pip install torch
Resolved 22 packages in 25ms
error: Failed to download distributions
  Caused by: Failed to fetch wheel: nvidia-cusolver-cu12==11.4.5.107
  Caused by: Failed to extract source distribution
  Caused by: request or response body error: operation timed out
  Caused by: operation timed out

real    0m3.064s
user    0m0.225s
sys     0m0.240s
```
## Summary
When we read `--index-url` from a `requirements.txt`, we attempt to
respect the `--index-url` provided by the CLI if it exists.
Unfortunately, `--index-url` from the CLI has a default value... so we
_never_ respect the `--index-url` in the requirements file.
This PR modifies the CLI to use `None` and moves the default into logic in the
`IndexLocations` struct.
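A minimal sketch of the resulting precedence (names and default here are illustrative): an explicit CLI value wins, then the value from `requirements.txt`, then the default, which now lives in one place rather than in the CLI definition.

```rust
const DEFAULT_INDEX_URL: &str = "https://pypi.org/simple";

fn resolve_index_url<'a>(cli: Option<&'a str>, requirements_txt: Option<&'a str>) -> &'a str {
    cli.or(requirements_txt).unwrap_or(DEFAULT_INDEX_URL)
}

fn main() {
    // With no CLI flag, the requirements.txt value is respected.
    assert_eq!(
        resolve_index_url(None, Some("https://example.org/simple")),
        "https://example.org/simple"
    );
    // With neither, fall back to the default.
    assert_eq!(resolve_index_url(None, None), DEFAULT_INDEX_URL);
}
```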
Closes https://github.com/astral-sh/uv/issues/1692.
## Summary
Adds a CLI command / flag (`generate-shell-completion <SHELL>` /
`--generate-shell-completion <SHELL>`) to generate the completion script for
the given shell. Implemented in exactly the same way as it is done in ruff
(https://github.com/astral-sh/ruff/blob/main/crates/ruff/src/lib.rs#L197).
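For reference, a sketch of that pattern with `clap_complete` (`Cli` here is a stand-in, not uv's actual parser):

```rust
use clap::{CommandFactory, Parser};
use clap_complete::{generate, Shell};
use std::io;

#[derive(Parser)]
struct Cli {
    /// Generate a completion script for the given shell and exit.
    #[arg(long, value_enum)]
    generate_shell_completion: Option<Shell>,
}

fn main() {
    let cli = Cli::parse();
    if let Some(shell) = cli.generate_shell_completion {
        // Write the script to stdout, e.g. `uv --generate-shell-completion bash`.
        generate(shell, &mut Cli::command(), "uv", &mut io::stdout());
    }
}
```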
Closes https://github.com/astral-sh/uv/issues/1654
## Test Plan
So far I've only tested the generated script manually, for the bash shell on
Ubuntu 22.04.3:
```bash
$ uv --generate-shell-completion bash > /usr/share/bash-completion/completions/uv
$ uv # <TAB>
-q -h --verbose --no-cache --version clean
-v -V --no-color --cache-dir pip generate-shell-completion
-n --quiet --color --help venv help
$ uv pip # <TAB>
-q -n -V --verbose --color --cache-dir --version sync uninstall help
-v -h --quiet --no-color --no-cache --help compile install freeze
```
## Summary
Just as we mark virtualenvs as `gitignore`d by default, we should also tag
them with a `CACHEDIR.TAG` file, to ensure that they aren't included in
backups, etc.
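A sketch of the idea (not uv's exact code): write a `CACHEDIR.TAG` file with the signature from the Cache Directory Tagging Specification into the new environment.

```rust
use std::fs;
use std::io;
use std::path::Path;

fn write_cachedir_tag(venv: &Path) -> io::Result<()> {
    // Tools that honor the Cache Directory Tagging Specification recognize
    // this signature line and exclude the directory from backups.
    fs::write(
        venv.join("CACHEDIR.TAG"),
        "Signature: 8a477f597d28d172789f06886806bc55\n# This file is a cache directory tag created by uv.\n",
    )
}

fn main() -> io::Result<()> {
    write_cachedir_tag(Path::new(".venv"))
}
```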
Closes https://github.com/astral-sh/uv/issues/1648.
## Test Plan
Ran `cargo run venv` and:
```
❯ ls .venv
CACHEDIR.TAG bin lib pyvenv.cfg
```