Commit Graph

42 Commits

konsti 0cee76417f
Bump version to 0.9.18 (#17141)
It's been a week.

---------

Co-authored-by: Zanie Blue <contact@zanie.dev>
2025-12-16 13:32:35 +00:00
Zanie Blue 2b5d65e61d
Bump version to 0.9.17 (#17058)
2025-12-09 16:36:00 -06:00
Charlie Marsh 4af2d2b922
Add `torch-tensorrt` and `torchao` to the PyTorch list (#17053)
## Summary

Closes https://github.com/astral-sh/uv/issues/17050.
2025-12-09 20:29:32 +00:00
Charlie Marsh 2502577c9d
Sort packages in GPU lists (#17054)
## Summary

No semantic changes; this is just bothering me.
2025-12-09 15:10:06 -05:00
Zanie Blue a63e5b62e3
Bump version to 0.9.16 (#17008)
2025-12-06 07:52:06 -06:00
Zanie Blue f6ad3dcd57
Regenerate the crates.io readmes on release (#16992)
Otherwise, they're stale!
2025-12-04 19:19:36 -06:00
Zanie Blue e7af5838bb
Bump version to 0.9.15 (#16942)
2025-12-02 17:48:28 -06:00
Zanie Blue 99c40f74c5
Link to the uv version in crates.io member READMEs (#16939)
Closes https://github.com/astral-sh/uv/issues/16931
2025-12-02 20:02:22 +00:00
Charlie Marsh 2cdbf9e547
Add ROCm 6.4 to `--torch-backend=auto` (#16919)
## Summary

Closes https://github.com/astral-sh/uv/issues/16917.
2025-12-01 20:27:20 -05:00
Zsolt Dollenstein 05814f9cd5
Bump version to 0.9.14 (#16909)
2025-12-01 11:52:15 -05:00
Zanie Blue 735b87004c
Bump version to 0.9.13 (#16862)
2025-11-26 15:12:54 +00:00
Zanie Blue 17c1061676
Fix the links to uv in crates.io member READMEs (#16848)
2025-11-25 18:47:32 +00:00
Zanie Blue 0fb1233363
Bump version to 0.9.12 (#16840)
2025-11-24 23:22:12 +00:00
Zanie Blue 7b8240dca9
Generate a README for crate members too (#16812)
We skip members with existing READMEs for now.

Follows #16809 and #16811
2025-11-21 15:44:05 -06:00
Zanie Blue 1de0cbea94
Use the word "internal" in crate descriptions (#16810)
Ref:
https://github.com/astral-sh/uv/pull/16809#pullrequestreview-3494007588
2025-11-21 13:22:47 -06:00
Zanie Blue 563438f13d
Fix documentation links for crates (#16801)
Part of https://github.com/astral-sh/uv/issues/4392

We shouldn't link to PyPI, and dropping the workspace-level
documentation link should mean that we get the auto-generated `docs.rs`
links.
2025-11-21 10:44:58 -06:00
Zanie Blue dfe89047bb
Publish to `crates.io` (#16770)
2025-11-20 21:26:44 +00:00
Charlie Marsh ce4a47a2e0
Remove `torch-model-archiver` and `torch-tb-profiler` from PyTorch backend (#16655)
## Summary

These are present on the PyTorch index, but only at very old versions.
The PyPI versions are newer, and seemingly these don't need to be built
against CUDA, etc.

Closes https://github.com/astral-sh/uv/issues/16651.
2025-11-10 10:26:12 -05:00
Charlie Marsh 6da135a66a
Respect multi-GPU outputs in nvidia-smi (#15460)
## Summary

This initially included `NVIDIA_VISIBLE_DEVICES` masking, though it's
now omitted for simplicity.

Closes https://github.com/astral-sh/uv/issues/14647.
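
As a rough illustration of the approach (not uv's actual code), a sketch that keeps one entry per device line instead of assuming a single line of output; `driver_version` and `compute_cap` are real `nvidia-smi` query keys, but everything else here is an assumption:

```rust
use std::process::Command;

/// Query `nvidia-smi` for one (driver_version, compute_cap) pair per GPU.
/// On multi-GPU machines, nvidia-smi prints one CSV line per device, so we
/// must not assume a single line of output.
fn detect_nvidia_gpus() -> Option<Vec<(String, String)>> {
    let output = Command::new("nvidia-smi")
        .args([
            "--query-gpu=driver_version,compute_cap",
            "--format=csv,noheader",
        ])
        .output()
        .ok()?;
    if !output.status.success() {
        return None;
    }
    let stdout = String::from_utf8(output.stdout).ok()?;
    let gpus: Vec<(String, String)> = stdout
        .lines()
        .filter_map(|line| {
            let (driver, cap) = line.split_once(',')?;
            Some((driver.trim().to_string(), cap.trim().to_string()))
        })
        .collect();
    Some(gpus)
}
```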
2025-11-02 21:21:44 +00:00
Yu, Guangye de9f299b80
Add auto-detection for Intel GPU on Windows (#16280)
This PR enables `--torch-backend=auto` to automatically detect Intel
GPUs. It follows up on
[#14386](https://github.com/astral-sh/uv/pull/14386).
On Windows, detection is implemented by querying the
`Win32_VideoController` class via the [WMI
crate](https://github.com/ohadravid/wmi-rs/tree/v0.16.0).

Currently, selecting a PyTorch wheel for Intel GPUs (XPU) does not
depend on specific driver or toolkit versions.
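
A minimal sketch of that WMI query using the `wmi` crate; the `VideoController` struct and the `Intel` substring match are assumptions for illustration, not uv's real detection logic:

```rust
use serde::Deserialize;
use wmi::{COMLibrary, WMIConnection};

// Maps to `SELECT Name FROM Win32_VideoController`.
#[derive(Deserialize)]
#[serde(rename = "Win32_VideoController")]
#[serde(rename_all = "PascalCase")]
struct VideoController {
    name: Option<String>,
}

fn has_intel_gpu_windows() -> wmi::WMIResult<bool> {
    let com = COMLibrary::new()?;
    let wmi_con = WMIConnection::new(com)?;
    let controllers: Vec<VideoController> = wmi_con.query()?;
    // Assumption: an Intel GPU shows up with "Intel" in the adapter name.
    Ok(controllers
        .iter()
        .any(|c| c.name.as_deref().is_some_and(|n| n.contains("Intel"))))
}
```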
2025-10-16 16:56:07 -04:00
Charlie Marsh bf81a5bf0c
Add CUDA 13.0 support (#16321)
## Summary

Closes https://github.com/astral-sh/uv/issues/16319.
2025-10-15 15:10:08 -04:00
Charlie Marsh 48f507680c
Add PyG packages to torch backend (#15911)
## Summary

These are now supported on pyx.
2025-09-17 14:18:30 +00:00
Charlie Marsh d4806ee921
Re-add `triton` as a torch backend package (#15910)
## Summary

This accidentally regressed in
https://github.com/astral-sh/uv/pull/15769/files#diff-fcd4a516243929cdb086b7b79af9865a6ed432a0386765b0436392edc17a5a4eL260.
2025-09-17 14:04:50 +00:00
Zanie Blue cbb713f705
Review for #15769 (#15775)
Addresses my review comments from
https://github.com/astral-sh/uv/pull/15769
2025-09-11 13:20:13 +00:00
Charlie Marsh b195d523d5
Add pyx as a supported PyTorch index URL (#15769)
## Summary

If the user explicitly authenticated to pyx, then we attempt to use the
pyx PyTorch URLs; otherwise, we stick to `download.pytorch.org` as the
default.
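
A trivial sketch of that selection rule; the enum and function are hypothetical, not uv's types:

```rust
/// Which host to use for PyTorch packages, per the rule described above.
#[derive(Debug, PartialEq)]
enum TorchIndexHost {
    Pyx,
    DownloadPytorchOrg,
}

fn choose_torch_index(authenticated_to_pyx: bool) -> TorchIndexHost {
    if authenticated_to_pyx {
        TorchIndexHost::Pyx
    } else {
        // Default: the public index at download.pytorch.org.
        TorchIndexHost::DownloadPytorchOrg
    }
}
```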
2025-09-10 14:38:00 -05:00
timrid 330e56e778
Support iOS platform tags (#15640)
## Summary
This implements the iOS part of
https://github.com/astral-sh/uv/issues/8029

FYI: @freakboy3742
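
For context, PEP 730 specifies iOS platform tags of the form `ios_<major>_<minor>_<arch>_<sdk>`; a minimal sketch with illustrative version and architecture values:

```rust
/// Builds an iOS platform tag per PEP 730, e.g. `ios_13_0_arm64_iphoneos`.
fn ios_platform_tag(major: u32, minor: u32, arch: &str, sdk: &str) -> String {
    format!("ios_{major}_{minor}_{arch}_{sdk}")
}

fn main() {
    assert_eq!(
        ios_platform_tag(13, 0, "arm64", "iphoneos"),
        "ios_13_0_arm64_iphoneos"
    );
    assert_eq!(
        ios_platform_tag(13, 0, "x86_64", "iphonesimulator"),
        "ios_13_0_x86_64_iphonesimulator"
    );
}
```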

## Test Plan
Create a venv with uv and run `cargo run pip install --python-platform arm64-apple-ios pillow`. Then the iOS binary of `pillow` should be installed inside the venv.
2025-09-03 18:24:48 -04:00
Charlie Marsh 3e34aee63e
Avoid introducing unnecessary system dependency on CUDA (#15449)
## Summary

Packages like `triton` should come from the PyTorch index, but they
don't actually vary across (e.g.) the `cu128` or `cu129` indexes.

Closes https://github.com/astral-sh/uv/issues/15446.
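
A hypothetical illustration of the idea; the list and function are made up for this sketch:

```rust
/// Some packages live on the PyTorch index but are identical across CUDA
/// variants, so their presence alone shouldn't force a particular backend
/// (and thus a system dependency on CUDA).
const BACKEND_AGNOSTIC: &[&str] = &["triton"];

fn constrains_backend(package: &str) -> bool {
    !BACKEND_AGNOSTIC.contains(&package)
}

fn main() {
    assert!(constrains_backend("torch"));
    assert!(!constrains_backend("triton"));
}
```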

## Test Plan

Validate that the following pins to `cu128`, rather than `cpu`:

```
echo "vllm\ntorch==2.7.1+cu128" | cargo run pip compile --torch-backend=auto --extra-index-url https://wheels.vllm.ai/b2f6c247a9b84556a8ea0e75bb4a2db765ff3315 - --python-platform linux --python-version 3.13 -v
```
2025-08-22 12:06:23 +01:00
youkaichao b950453891
Add CUDA 12.9 backend (#15416)
## Summary

Add the torch CUDA 12.9 backend.

---------

Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2025-08-21 16:01:02 +01:00
Charlie Marsh 61e2308806
Add `triton` to `torch-backend` manifest (#15405)
## Summary

The PyTorch team publishes ARM Linux wheels for `triton` to the PyTorch
index, which aren't available on PyPI.

## Test Plan

```
echo "torch" | cargo run pip compile - --torch-backend=cu128 --python-platform aarch64-unknown-linux-gnu --python-version 3.13
```

This previously failed because it couldn't find a compatible `triton` wheel.
2025-08-21 13:23:12 +00:00
adamnemecek 3f83390e34
Make the use of `Self` consistent. (#15074)
## Summary

Make the use of `Self` consistent. Mostly done by running
`cargo clippy --fix -- -A clippy::all -W clippy::use_self`.
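
A small example of what `clippy::use_self` rewrites:

```rust
struct Point {
    x: i32,
    y: i32,
}

impl Point {
    // Before the lint fix this read:
    // `fn origin() -> Point { Point { x: 0, y: 0 } }`
    // Inside an impl block, the concrete type name becomes `Self`.
    fn origin() -> Self {
        Self { x: 0, y: 0 }
    }
}

fn main() {
    let p = Point::origin();
    assert_eq!((p.x, p.y), (0, 0));
}
```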

## Test Plan

No need.
2025-08-05 20:17:12 +01:00
konsti 2ad924d4cf
Use consistent workspace inheritance (#15031)
Following a CI failure in https://github.com/astral-sh/uv/pull/15028,
this ensures that all workspace crates inherit the MSRV and other
configuration from the workspace root.
2025-08-02 22:03:51 +02:00
Yu, Guangye b1dc2b71a3
Add auto-detection for Intel GPUs (#14386)
## Summary

This PR intends to enable `--torch-backend=auto` to detect Intel GPUs
automatically:
- On Linux, detection is performed by looking for a `Display controller`
entry in `lspci` output (see the sketch below).
- On Windows, ~~detection is done via a `powershell` query to
`Win32_VideoController`~~. Support is skipped for now; revisit once a
better solution is available.

Currently, selecting a PyTorch wheel for Intel GPUs (XPU) does not
depend on specific driver or toolkit versions.
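
A minimal sketch of the Linux path under those assumptions; the match strings are illustrative, not uv's exact matching logic:

```rust
use std::process::Command;

/// Scan `lspci` output for an Intel display controller.
fn has_intel_gpu_linux() -> bool {
    let Ok(output) = Command::new("lspci").output() else {
        return false;
    };
    String::from_utf8_lossy(&output.stdout).lines().any(|line| {
        // One device per line, e.g.
        // "00:02.0 Display controller: Intel Corporation ..."
        line.contains("Display controller") && line.contains("Intel")
    })
}
```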

## Test Plan

On Linux:

![image](https://github.com/user-attachments/assets/f7f238e3-a797-42ea-b8fa-9b028dfd4db5)
~~On Windows:

![image](https://github.com/user-attachments/assets/a10d774e-1cb9-431b-bb85-e3e8225df98f)~~

---------

Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
2025-07-09 13:31:08 +00:00
Charlie Marsh a82c210cab
Add auto-detection for AMD GPUs (#14176)
## Summary

Allows `--torch-backend=auto` to detect AMD GPUs. The approach is fairly
well-documented inline, but I opted for `rocm_agent_enumerator` over
(e.g.) `rocminfo` since it seems to be the recommended approach for
scripting:
https://rocm.docs.amd.com/projects/rocminfo/en/latest/how-to/use-rocm-agent-enumerator.html.

Closes https://github.com/astral-sh/uv/issues/14086.
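
A rough sketch of that approach: `rocm_agent_enumerator` prints one gfx target per line, including a `gfx000` CPU agent that should be ignored; everything beyond that is an assumption:

```rust
use std::process::Command;

/// Each line of `rocm_agent_enumerator` output is a gfx target (e.g.
/// `gfx942`); `gfx000` denotes the CPU agent and is filtered out.
fn detect_amd_gfx_targets() -> Vec<String> {
    let Ok(output) = Command::new("rocm_agent_enumerator").output() else {
        return Vec::new();
    };
    String::from_utf8_lossy(&output.stdout)
        .lines()
        .map(str::trim)
        .filter(|target| target.starts_with("gfx") && *target != "gfx000")
        .map(String::from)
        .collect()
}
```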

## Test Plan

```
root@rocm-jupyter-gpu-mi300x1-192gb-devcloud-atl1:~# ./uv-linux-libc-11fb582c5c046bae09766ceddd276dcc5bb41218/uv pip install torch --torch-backend=auto
Resolved 11 packages in 251ms
Prepared 2 packages in 6ms
Installed 11 packages in 257ms
 + filelock==3.18.0
 + fsspec==2025.5.1
 + jinja2==3.1.6
 + markupsafe==3.0.2
 + mpmath==1.3.0
 + networkx==3.5
 + pytorch-triton-rocm==3.3.1
 + setuptools==80.9.0
 + sympy==1.14.0
 + torch==2.7.1+rocm6.3
 + typing-extensions==4.14.0
```

---------

Co-authored-by: Zanie Blue <contact@zanie.dev>
2025-06-21 15:21:06 +00:00
Charlie Marsh e59835d50c
Add XPU to `--torch-backend` (#14172)
## Summary

Like ROCm, no auto-detection for now.
2025-06-20 20:33:20 -04:00
John Mumm 611a13c841
Fix benchmark compilation failure: `cannot find attribute clap in this scope` (#14128)
[Two benchmark
jobs](https://github.com/astral-sh/uv/actions/runs/15732775460/job/44337710992?pr=14126)
were failing with `error: cannot find attribute clap in this scope`
after #14120. This updates the recently added `#[clap(name = rocm...`
lines to use `cfg_attr(feature = "clap",`.
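
A sketch of the resulting pattern, assuming a hypothetical enum and an optional `clap` feature on the crate:

```rust
// Assumes `clap` is an optional dependency gated behind a `clap` feature.
#[derive(Clone, Debug)]
#[cfg_attr(feature = "clap", derive(clap::ValueEnum))]
enum TorchBackend {
    Cpu,
    // The bare `#[clap(...)]` attribute only exists when the derive does,
    // so it must be wrapped in the same `cfg_attr`.
    #[cfg_attr(feature = "clap", clap(name = "rocm6.3"))]
    Rocm63,
}
```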
2025-06-18 16:30:12 +02:00
Charlie Marsh 4d9c9a1e76
Add ROCm backends to `--torch-backend` (#14120)
We don't yet support automatic detection, but this at least allows
explicit selection (e.g., `uv pip install --torch-backend rocm5.3`).

Closes #14087.
2025-06-18 07:35:05 -04:00
Hood Chatham f9d3f24728
Add Pyodide support (#12731)
This includes some initial work on adding Pyodide support (issue
#12729). It is enough to get
```
uv pip compile -p /path/to/pyodide --extra-index-url file:/path/to/simple-index
```
to work, which should already be quite useful.

## Test Plan

* added a unit test for `pyodide_platform`
* integration tested manually with:
```
cargo run pip install \
-p /home/rchatham/Documents/programming/tmp/pyodide-venv-test/.pyodide-xbuildenv-0.29.3/0.27.4/xbuildenv/pyodide-root/dist/python \
--extra-index-url file:/home/rchatham/Documents/programming/tmp/pyodide-venv-test/.pyodide-xbuildenv-0.29.3/0.27.4/xbuildenv/pyodide-root/package_index \
--index-strategy unsafe-best-match --target blah --no-build \
numpy pydantic
```

---------

Co-authored-by: konsti <konstin@mailbox.org>
Co-authored-by: Zanie Blue <contact@zanie.dev>
2025-06-03 12:01:26 -05:00
konsti 5d37c7ecc5
Apply first set of Rustfmt edition 2024 changes (#13478)
Rustfmt introduces a lot of formatting changes in the 2024 edition. To
not break everything all at once, we split out the set of formatting
changes compatible with both the 2021 and 2024 edition by first
formatting with the 2024 style, and then again with the currently used
2021 style.

Notable changes include the formatting of derive macro attributes and of
lines with overly long strings, and consistently adding trailing
semicolons after statements.
2025-05-16 20:19:02 -04:00
Charlie Marsh a3dae2512c
Disallow mixing requirements across PyTorch indexes (#13179)
## Summary

If you use `--torch-backend=auto`, we want to avoid selecting (e.g.) a
`+cu124` build of `torch` alongside a `+cu126` build of `torchvision`.
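
An illustrative consistency check (not uv's resolver logic), keyed on the local version suffix:

```rust
/// Every PyTorch package selected from the backend indexes should carry
/// the same local version tag, e.g. all `+cu126`.
fn consistent_backend(versions: &[&str]) -> bool {
    let mut tags = versions
        .iter()
        .filter_map(|v| v.split_once('+').map(|(_, tag)| tag));
    match tags.next() {
        Some(first) => tags.all(|tag| tag == first),
        None => true,
    }
}

fn main() {
    assert!(consistent_backend(&["2.7.0+cu126", "0.22.0+cu126"]));
    assert!(!consistent_backend(&["2.7.0+cu124", "0.22.0+cu126"]));
}
```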
2025-04-28 20:06:18 +00:00
Charlie Marsh 4bef9fadbb
Add PyTorch v2.7.0 to GPU backend (#13072)
## Summary

The first version to support CUDA 12.8.
2025-04-23 16:59:41 -04:00
Charlie Marsh 4215d0e16b
Check all compatible torch indexes when `--torch-backend` is enabled (#12385)
## Summary

It's possible that the PyTorch version the user depends on isn't in the
latest index. These indexes are equally trusted, so we should override
the usual first-index policy and check all of them.

Closes #12357.
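
A toy illustration of checking every compatible index rather than stopping at the first; the types are hypothetical:

```rust
/// Return the position of the first index that actually contains the
/// requested version, scanning all of them instead of only the newest.
fn find_index_with_version(indexes: &[&[&str]], wanted: &str) -> Option<usize> {
    indexes
        .iter()
        .position(|versions| versions.contains(&wanted))
}

fn main() {
    // cu126 lacks the pinned version, but the older cu124 index has it.
    let cu126: &[&str] = &["2.6.0", "2.7.0"];
    let cu124: &[&str] = &["2.5.1", "2.6.0"];
    assert_eq!(find_index_with_version(&[cu126, cu124], "2.5.1"), Some(1));
}
```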
2025-03-22 11:53:23 -04:00
Charlie Marsh 5173b59b50
Automatically infer the PyTorch index via `--torch-backend=auto` (#12070)
## Summary

This is a prototype that I'm considering shipping under `--preview`,
based on [`light-the-torch`](https://github.com/pmeier/light-the-torch).

`light-the-torch` patches pip to pull PyTorch packages from the PyTorch
indexes automatically. And, in particular, `light-the-torch` will query
the installed CUDA drivers to determine which indexes are compatible
with your system.

This PR implements equivalent behavior under `--torch-backend auto`,
though you can also set `--torch-backend cpu`, etc. for convenience.
When enabled, the registry client will fetch from the appropriate
PyTorch index when it sees a package from the PyTorch ecosystem (and
ignore any other configured indexes, _unless_ the package is explicitly
pinned to a different index).

Right now, this is only implemented in the `uv pip` CLI, since it
doesn't quite fit into the lockfile APIs given that it relies on feature
detection on the currently-running machine.
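
A minimal sketch of one detection path visible in the debug log below, reading the NVIDIA kernel module's version file on Linux; treating this file as the sole source is an assumption:

```rust
use std::fs;

/// On Linux, the NVIDIA kernel module exposes the driver version as plain
/// text (e.g. "550.144.3", as in the log below).
fn cuda_driver_version() -> Option<String> {
    let raw = fs::read_to_string("/sys/module/nvidia/version").ok()?;
    let version = raw.trim();
    (!version.is_empty()).then(|| version.to_string())
}
```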

## Test Plan

On macOS, you can test this with (e.g.):

```shell
UV_TORCH_BACKEND=auto UV_CUDA_DRIVER_VERSION=450.80.2 cargo run \
  pip install torch --python-platform linux --python-version 3.12
```

On a GPU-enabled EC2 machine:

```shell
ubuntu@ip-172-31-47-149:~/uv$ UV_TORCH_BACKEND=auto cargo run pip install torch -v
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.31s
     Running `target/debug/uv pip install torch -v`
DEBUG uv 0.6.6 (e95ca063b 2025-03-14)
DEBUG Searching for default Python interpreter in virtual environments
DEBUG Found `cpython-3.13.0-linux-x86_64-gnu` at `/home/ubuntu/uv/.venv/bin/python3` (virtual environment)
DEBUG Using Python 3.13.0 environment at: .venv
DEBUG Acquired lock for `.venv`
DEBUG At least one requirement is not satisfied: torch
warning: The `--torch-backend` setting is experimental and may change without warning. Pass `--preview` to disable this warning.
DEBUG Detected CUDA driver version from `/sys/module/nvidia/version`: 550.144.3
...
```
2025-03-19 14:37:08 +00:00