Upgrade PyTorch documentation to latest versions (#16970)

## Summary

Point to PyTorch 2.9, Python 3.14, CUDA 12.8, etc.
Charlie Marsh 2025-12-03 07:01:49 -08:00 committed by GitHub
parent 20ab80ad8f
commit 99660a8574
1 changed file with 28 additions and 34 deletions


@@ -34,28 +34,22 @@ As such, the necessary packaging configuration will vary depending on both the p
 support and the accelerators you want to enable.
 
 To start, consider the following (default) configuration, which would be generated by running
-`uv init --python 3.12` followed by `uv add torch torchvision`.
+`uv init --python 3.14` followed by `uv add torch torchvision`.
 
 In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and
-macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.6):
+macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.8, as of PyTorch 2.9.1):
 
 ```toml
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12"
+requires-python = ">=3.14"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 ```
 
-!!! tip "Supported Python versions"
-
-    At time of writing, PyTorch does not yet publish wheels for Python 3.14; as such projects with
-    `requires-python = ">=3.14"` may fail to resolve. See the
-    [compatibility matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix).
-
 This is a valid configuration for projects that want to use CPU builds on Windows and macOS, and
 CUDA-enabled builds on Linux. However, if you need to support different platforms or accelerators,
 you'll need to configure the project accordingly.
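Reassembling the `+` side of the hunk above, the default `pyproject.toml` the prose describes would read in full roughly as follows (a sketch; `name` and `version` are the placeholders `uv init` generates):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.14"
dependencies = [
    "torch>=2.9.1",
    "torchvision>=0.24.1",
]
```

With this file in place, a plain `uv sync` would pull CPU-only wheels on Windows and macOS and CUDA-enabled wheels on Linux, per the surrounding text.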
@@ -117,7 +111,7 @@ In such cases, the first step is to add the relevant PyTorch index to your `pypr
 ```toml
 [[tool.uv.index]]
 name = "pytorch-rocm"
-url = "https://download.pytorch.org/whl/rocm6.3"
+url = "https://download.pytorch.org/whl/rocm6.4"
 explicit = true
 ```
@@ -254,10 +248,10 @@ As a complete example, the following project would use PyTorch's CPU-only builds
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 
 [tool.uv.sources]
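The hunk above is truncated at `[tool.uv.sources]`. For context, the CPU-only example in this guide continues by pinning both packages to an explicit CPU index; a sketch of that continuation (the `pytorch-cpu` index name is assumed from the guide's conventions, not shown in this diff):

```toml
[tool.uv.sources]
torch = [{ index = "pytorch-cpu" }]
torchvision = [{ index = "pytorch-cpu" }]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```

Because the index is marked `explicit = true`, only packages pinned to it via `tool.uv.sources` will ever resolve against it.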
@@ -287,10 +281,10 @@ and CPU-only builds on all other platforms (e.g., macOS and Windows):
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 
 [tool.uv.sources]
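Again the hunk cuts off at `[tool.uv.sources]`. Splitting builds by platform is done with environment markers on the source entries; a sketch of how that section typically looks (the `pytorch-cpu` and `pytorch-cu128` index names are assumptions, not part of the diff shown here):

```toml
[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
    { index = "pytorch-cu128", marker = "sys_platform == 'linux'" },
]
torchvision = [
    { index = "pytorch-cpu", marker = "sys_platform != 'linux'" },
    { index = "pytorch-cu128", marker = "sys_platform == 'linux'" },
]
```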
@@ -321,11 +315,11 @@ builds on Windows and macOS (by way of falling back to PyPI):
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
-    "pytorch-triton-rocm>=3.3.0 ; sys_platform == 'linux'",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
+    "pytorch-triton-rocm>=3.5.1 ; sys_platform == 'linux'",
 ]
 
 [tool.uv.sources]
@@ -341,7 +335,7 @@ pytorch-triton-rocm = [
 [[tool.uv.index]]
 name = "pytorch-rocm"
-url = "https://download.pytorch.org/whl/rocm6.3"
+url = "https://download.pytorch.org/whl/rocm6.4"
 explicit = true
 ```
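For reference, the `[tool.uv.sources]` section that falls between the two ROCm hunks maps each package to the ROCm index on Linux only; a sketch of that elided section (structure and markers assumed from the pattern in this guide, not shown in the diff):

```toml
[tool.uv.sources]
torch = [{ index = "pytorch-rocm", marker = "sys_platform == 'linux'" }]
torchvision = [{ index = "pytorch-rocm", marker = "sys_platform == 'linux'" }]
pytorch-triton-rocm = [{ index = "pytorch-rocm", marker = "sys_platform == 'linux'" }]
```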
@@ -351,11 +345,11 @@ Or, for Intel GPU builds:
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
-    "pytorch-triton-xpu>=3.3.0 ; sys_platform == 'win32' or sys_platform == 'linux'",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
+    "pytorch-triton-xpu>=3.5.0 ; sys_platform == 'win32' or sys_platform == 'linux'",
 ]
 
 [tool.uv.sources]
@@ -389,17 +383,17 @@ extra. For example, the following configuration would use PyTorch's CPU-only for
 [project]
 name = "project"
 version = "0.1.0"
-requires-python = ">=3.12.0"
+requires-python = ">=3.14.0"
 dependencies = []
 
 [project.optional-dependencies]
 cpu = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 cu128 = [
-    "torch>=2.7.0",
-    "torchvision>=0.22.0",
+    "torch>=2.9.1",
+    "torchvision>=0.24.1",
 ]
 
 [tool.uv]
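The hunk above ends at the `[tool.uv]` table. In the extras-based example, that table declares the two extras as mutually exclusive so a single lockfile can carry both variants; a sketch of the declaration (assumed from uv's conflicting-extras feature, not part of the diff shown):

```toml
[tool.uv]
conflicts = [
    [
        { extra = "cpu" },
        { extra = "cu128" },
    ],
]
```

Users then select a variant at install time, e.g. `uv sync --extra cpu` or `uv sync --extra cu128`.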
@@ -473,7 +467,7 @@ then use the most-compatible PyTorch index for all relevant packages (e.g., `tor
 etc.). If no such GPU is found, uv will fall back to the CPU-only index. uv will continue to respect
 existing index configuration for any packages outside the PyTorch ecosystem.
 
-You can also select a specific backend (e.g., CUDA 12.6) with `--torch-backend=cu126` (or
+You can also select a specific backend (e.g., CUDA 12.8) with `--torch-backend=cu126` (or
 `UV_TORCH_BACKEND=cu126`):
 
 ```shell
@@ -481,7 +475,7 @@ $ # With a command-line argument.
 $ uv pip install torch torchvision --torch-backend=cu126
 
 $ # With an environment variable.
-$ UV_TORCH_BACKEND=cu126 uv pip install torch torchvision
+$ UV_TORCH_BACKEND=cu128 uv pip install torch torchvision
 ```
 
 At present, `--torch-backend` is only available in the `uv pip` interface.