Compare commits


16 Commits
0.9.18...main

Author SHA1 Message Date
Zanie Blue 9949f0801f
Respect `UV_PYTHON_DOWNLOAD_MIRROR` in `uv python list` (#16673)
Closes https://github.com/astral-sh/uv/issues/16671

Mostly authored by Claude
2025-12-18 11:29:48 -06:00
Charlie Marsh c1d3c9bdb2
Cache NVIDIA-hosted wheels by default (#17164)
## Summary

Matches our behavior for PyTorch.

Closes https://github.com/astral-sh/uv/issues/16959.
2025-12-18 08:33:13 -05:00
github-actions[bot] 1ddb646a74
Add CPython 3.15.0a3 (#17165)
Automated update for Python releases.

---------

Co-authored-by: zanieb <2586601+zanieb@users.noreply.github.com>
Co-authored-by: Zanie Blue <contact@zanie.dev>
2025-12-18 13:15:04 +00:00
Zanie Blue 994b108af2
Link to the contributing tab instead of the document (#17146)
I'm not sure if this is strictly better, but it seemed worth a try.


[Rendered](https://github.com/astral-sh/uv/tree/zb/contributing?tab=readme-ov-file#contributing)
2025-12-18 06:56:51 -06:00
konsti e2a775d727
Use the same retry logic across uv (#17105)
We were using slightly different retry code in multiple places; this PR
unifies it.

It also fixes retry undercounting in publish when the retry middleware was
involved.

---------

Co-authored-by: Tomasz Kramkowski <tom@astral.sh>
2025-12-18 12:44:37 +00:00
konsti 44d1a302c8
Update retry-policies to 0.5.1 (#17170)
Fixes #17169
2025-12-18 10:59:15 +00:00
konsti a25d4f953f
Fix retry counts in cached client (#17104)
Previously, we dropped the counts from the middleware layer, potentially
doing too many retries and/or reporting too few.

Not pretty, but it fixes the bug.
2025-12-18 10:51:00 +00:00
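The counting bug these retry commits address can be illustrated with a dependency-free sketch (the `RetryBudget` type and its fields are invented for illustration, not uv's actual API): if the outer retry loop forgets to add the retries an inner middleware layer already performed, it both over-retries and under-reports.

```rust
/// Illustrative retry budget: retries made at any layer count against one
/// shared maximum, with exponential backoff (base delay doubling per retry).
struct RetryBudget {
    max_retries: u32,
    total_retries: u32,
    base_delay_ms: u64,
}

impl RetryBudget {
    fn new(max_retries: u32, base_delay_ms: u64) -> Self {
        Self {
            max_retries,
            total_retries: 0,
            base_delay_ms,
        }
    }

    /// Record retries an inner (middleware) layer already performed, then
    /// decide whether the outer layer may retry. Returns the backoff delay
    /// in milliseconds if another retry is allowed.
    fn should_retry(&mut self, inner_retries: u32, transient: bool) -> Option<u64> {
        // Count the middleware's retries against the shared budget. Dropping
        // this line reproduces the undercounting bug.
        self.total_retries += inner_retries;
        if !transient || self.total_retries >= self.max_retries {
            return None;
        }
        let delay = self.base_delay_ms << self.total_retries; // exponential backoff
        self.total_retries += 1;
        Some(delay)
    }
}

fn main() {
    let mut budget = RetryBudget::new(3, 100);
    // The middleware already retried twice before surfacing the error,
    // so only one retry remains in the shared budget.
    assert_eq!(budget.should_retry(2, true), Some(400)); // 100ms << 2
    assert_eq!(budget.should_retry(0, true), None); // budget exhausted
    assert_eq!(budget.total_retries, 3);
    println!("total retries: {}", budget.total_retries);
}
```

The design point mirrored here is that the retry count reported to the user is the same counter consulted by the backoff policy, so the two can no longer drift apart.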
stringscut 9f422e7515
Fix comment typos and improve wording (#17166)
## Summary

Improve the clarity of code comments.

Signed-off-by: stringscut <stringscut@outlook.jp>
2025-12-18 11:47:43 +01:00
konsti 9360ca7778
Refactor uv retryable strategy to use a single code path (#17099)
A refactoring that allows uv's retryable strategy to return
`Some(Retryable::Fatal)`; also helpful for
https://github.com/astral-sh/uv/pull/16245
2025-12-18 11:10:47 +01:00
Charlie Marsh 6fa8204efe
Avoid enforcing incorrect hash in mixed-hash settings (#17157)
## Summary

Right now, when we return a `Dist` from a lockfile, we concatenate all
hashes for all distributions for a given package. In the case of
https://github.com/astral-sh/uv/issues/17143, I think that means we'll
return the SHA256 from the sdist, plus the SHA512 from the wheel. If the
wheel was previously installed (i.e., it's in the cache), and we
computed the SHA256 at that point in time, then `Hashed::has_digests`
would return `true` because we have _at least_ one SHA256. We now limit
the hashes to the distribution that we expect to install.

Closes https://github.com/astral-sh/uv/issues/17143.
2025-12-17 16:01:59 +00:00
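A minimal model of the failure mode described above (the types and the `has_digests` check here are invented for illustration; uv's real `Hashed::has_digests` is more involved): when the expected hashes are concatenated across the sdist and the wheel, a cached wheel that only ever had a SHA256 computed satisfies the check via the sdist's SHA256 entry, so its SHA512 pin is never enforced.

```rust
/// Hash algorithms, standing in for uv's digest types (illustrative only).
#[derive(Clone, Copy, PartialEq, Debug)]
enum Algorithm {
    Sha256,
    Sha512,
}

/// Simplified model of a "do we already have a usable digest?" check: it asks
/// whether *any* expected algorithm is covered by the computed digests.
fn has_digests(expected: &[Algorithm], computed: &[Algorithm]) -> bool {
    expected.iter().any(|algorithm| computed.contains(algorithm))
}

fn main() {
    use Algorithm::*;

    // Hashes concatenated across the whole package:
    // SHA256 from the sdist plus SHA512 from the wheel.
    let all_dists = [Sha256, Sha512];
    // The cached wheel only had its SHA256 computed at install time.
    let cached_wheel = [Sha256];

    // The sdist's SHA256 entry satisfies the check, so the wheel's SHA512
    // requirement is silently skipped.
    assert!(has_digests(&all_dists, &cached_wheel));

    // Limiting the expected hashes to the distribution actually being
    // installed: the check fails, forcing the SHA512 to be computed.
    let wheel_only = [Sha512];
    assert!(!has_digests(&wheel_only, &cached_wheel));
}
```

The fix in the commit corresponds to passing `wheel_only` rather than `all_dists` as the expectation for the wheel.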
Charlie Marsh 6578e0521b
Avoid creating file contents with `uv init --bare --script` (#17162)
## Summary

As suggested in Discord.
2025-12-17 15:26:17 +00:00
Charlie Marsh 0a83bf7dd5
Respect `--torch-backend` in `uv tool` commands (#17117)
## Summary

Like `uv pip`, these don't require a universal resolution, so
`--torch-backend` is easy to support.
2025-12-16 19:23:50 -05:00
Charlie Marsh e603761862
Support remote `pylock.toml` files (#17119)
## Summary

Closes https://github.com/astral-sh/uv/issues/17112.
2025-12-16 19:16:23 -05:00
Zanie Blue 4f6f56b070
Add ty to the README (#17139) 2025-12-16 09:39:59 -06:00
Zanie Blue 66f7093ad2
Move a couple README items into the FAQ (#17148) 2025-12-16 08:36:06 -06:00
Zanie Blue 60df92f9aa
Copy the Code of Conduct from Ruff (#17145) 2025-12-16 14:12:15 +00:00
49 changed files with 3247 additions and 1186 deletions

CODE_OF_CONDUCT.md (new file, 125 lines)

@@ -0,0 +1,125 @@
# Contributor Covenant Code of Conduct
- [Our Pledge](#our-pledge)
- [Our Standards](#our-standards)
- [Enforcement Responsibilities](#enforcement-responsibilities)
- [Scope](#scope)
- [Enforcement](#enforcement)
- [Enforcement Guidelines](#enforcement-guidelines)
- [1. Correction](#1-correction)
- [2. Warning](#2-warning)
- [3. Temporary Ban](#3-temporary-ban)
- [4. Permanent Ban](#4-permanent-ban)
- [Attribution](#attribution)
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our community a
harassment-free experience for everyone, regardless of age, body size, visible or invisible
disability, ethnicity, sex characteristics, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and
healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the
experience
- Focusing on what is best not just for us as individuals, but for the overall community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address, without their
explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior
and will take appropriate and fair corrective action in response to any behavior that they deem
inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit, or reject comments, commits,
code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and
will communicate reasons for moderation decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when an individual is
officially representing the community in public spaces. Examples of representing our community
include using an official e-mail address, posting via an official social media account, or acting as
an appointed representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community
leaders responsible for enforcement at <hey@astral.sh>. All complaints will be reviewed and
investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the reporter of any
incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining the consequences for
any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or
unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing clarity around the
nature of the violation and an explanation of why the behavior was inappropriate. A public apology
may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of actions.
**Consequence**: A warning with consequences for continued behavior. No interaction with the people
involved, including unsolicited interaction with those enforcing the Code of Conduct, for a
specified period of time. This includes avoiding interactions in community spaces as well as
external channels like social media. Violating these terms may lead to a temporary or permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including sustained inappropriate
behavior.
**Consequence**: A temporary ban from any sort of interaction or public communication with the
community for a specified period of time. No public or private interaction with the people involved,
including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this
period. Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community standards, including
sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement
of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available
[here](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).
For answers to common questions about this code of conduct, see the
[FAQ](https://www.contributor-covenant.org/faq). Translations are available
[here](https://www.contributor-covenant.org/translations).
[homepage]: https://www.contributor-covenant.org

Cargo.lock (generated)

```diff
@@ -227,9 +227,9 @@ dependencies = [
 [[package]]
 name = "astral-reqwest-retry"
-version = "0.7.0"
+version = "0.8.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "cb7549bd00f62f73f2e7e76f3f77ccdabb31873f4f02f758ed88ad739d522867"
+checksum = "78ab210f6cdf8fd3254d47e5ee27ce60ed34a428ff71b4ae9477b1c84b49498c"
 dependencies = [
  "anyhow",
  "astral-reqwest-middleware",
@@ -3753,11 +3753,11 @@ dependencies = [
 [[package]]
 name = "retry-policies"
-version = "0.4.0"
+version = "0.5.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5875471e6cab2871bc150ecb8c727db5113c9338cc3354dc5ee3425b6aa40a1c"
+checksum = "46a4bd6027df676bcb752d3724db0ea3c0c5fc1dd0376fec51ac7dcaf9cc69be"
 dependencies = [
- "rand 0.8.5",
+ "rand 0.9.2",
 ]
@@ -5606,7 +5606,6 @@ dependencies = [
  "thiserror 2.0.17",
  "tokio",
  "tokio-util",
- "tracing",
  "url",
  "uv-cache",
  "uv-client",
```

```diff
@@ -153,7 +153,7 @@ regex-automata = { version = "0.4.8", default-features = false, features = ["dfa
 reqsign = { version = "0.18.0", features = ["aws", "default-context"], default-features = false }
 reqwest = { version = "0.12.22", default-features = false, features = ["json", "gzip", "deflate", "zstd", "stream", "system-proxy", "rustls-tls", "rustls-tls-native-roots", "socks", "multipart", "http2", "blocking"] }
 reqwest-middleware = { version = "0.4.2", package = "astral-reqwest-middleware", features = ["multipart"] }
-reqwest-retry = { version = "0.7.0", package = "astral-reqwest-retry" }
+reqwest-retry = { version = "0.8.0", package = "astral-reqwest-retry" }
 rkyv = { version = "0.8.8", features = ["bytecheck"] }
 rmp-serde = { version = "1.3.0" }
 rust-netrc = { version = "0.1.2" }
```

```diff
@@ -42,7 +42,7 @@ An extremely fast Python package and project manager, written in Rust.
 - 🖥️ Supports macOS, Linux, and Windows.
 
 uv is backed by [Astral](https://astral.sh), the creators of
-[Ruff](https://github.com/astral-sh/ruff).
+[Ruff](https://github.com/astral-sh/ruff) and [ty](https://github.com/astral-sh/ty).
 
 ## Installation
@@ -268,19 +268,12 @@ Installed 43 packages in 208ms
 
 See the [pip interface documentation](https://docs.astral.sh/uv/pip/index/) to get started.
 
-## Platform support
-
-See uv's [platform support](https://docs.astral.sh/uv/reference/platforms/) document.
-
-## Versioning policy
-
-See uv's [versioning policy](https://docs.astral.sh/uv/reference/versioning/) document.
-
 ## Contributing
 
 We are passionate about supporting contributors of all levels of experience and would love to see
 you get involved in the project. See the
-[contributing guide](https://github.com/astral-sh/uv/blob/main/CONTRIBUTING.md) to get started.
+[contributing guide](https://github.com/astral-sh/uv?tab=contributing-ov-file#contributing) to get
+started.
 
 ## FAQ
@@ -292,6 +285,15 @@ It's pronounced as "you - vee" ([`/juː viː/`](https://en.wikipedia.org/wiki/He
 
 Just "uv", please. See the [style guide](./STYLE.md#styling-uv) for details.
 
+#### What platforms does uv support?
+
+See uv's [platform support](https://docs.astral.sh/uv/reference/platforms/) document.
+
+#### Is uv ready for production?
+
+Yes, uv is stable and widely used in production. See uv's
+[versioning policy](https://docs.astral.sh/uv/reference/versioning/) document for details.
+
 ## Acknowledgements
 
 uv's dependency resolver uses [PubGrub](https://github.com/pubgrub-rs/pubgrub) under the hood. We're
```

```diff
@@ -33,5 +33,4 @@ tempfile = { workspace = true }
 thiserror = { workspace = true }
 tokio = { workspace = true }
 tokio-util = { workspace = true }
-tracing = { workspace = true }
 url = { workspace = true }
```

```diff
@@ -6,21 +6,18 @@
 use std::path::PathBuf;
 use std::pin::Pin;
 use std::task::{Context, Poll};
-use std::time::{Duration, SystemTime};
 
 use futures::TryStreamExt;
-use reqwest_retry::RetryPolicy;
 use reqwest_retry::policies::ExponentialBackoff;
 use std::fmt;
 use thiserror::Error;
 use tokio::io::{AsyncRead, ReadBuf};
 use tokio_util::compat::FuturesAsyncReadCompatExt;
-use tracing::debug;
 use url::Url;
 use uv_distribution_filename::SourceDistExtension;
 
 use uv_cache::{Cache, CacheBucket, CacheEntry, Error as CacheError};
-use uv_client::{BaseClient, is_transient_network_error};
+use uv_client::{BaseClient, RetryState};
 use uv_extract::{Error as ExtractError, stream};
 use uv_pep440::Version;
 use uv_platform::Platform;
@@ -141,7 +138,7 @@ pub enum Error {
     #[error("Failed to detect platform")]
     Platform(#[from] uv_platform::Error),
 
-    #[error("Attempt failed after {retries} {subject}", subject = if *retries > 1 { "retries" } else { "retry" })]
+    #[error("Request failed after {retries} {subject}", subject = if *retries > 1 { "retries" } else { "retry" })]
     RetriedError {
         #[source]
         err: Box<Error>,
@@ -150,13 +147,15 @@
 }
 
 impl Error {
-    /// Return the number of attempts that were made to complete this request before this error was
-    /// returned. Note that e.g. 3 retries equates to 4 attempts.
-    fn attempts(&self) -> u32 {
+    /// Return the number of retries that were made to complete this request before this error was
+    /// returned.
+    ///
+    /// Note that e.g. 3 retries equates to 4 attempts.
+    fn retries(&self) -> u32 {
         if let Self::RetriedError { retries, .. } = self {
-            return retries + 1;
+            return *retries;
         }
-        1
+        0
     }
 }
@@ -242,9 +241,7 @@ async fn download_and_unpack_with_retry(
     download_url: &Url,
     cache_entry: &CacheEntry,
 ) -> Result<PathBuf, Error> {
-    let mut total_attempts = 0;
-    let mut retried_here = false;
-    let start_time = SystemTime::now();
+    let mut retry_state = RetryState::start(*retry_policy, download_url.clone());
 
     loop {
         let result = download_and_unpack(
@@ -260,40 +257,23 @@
         )
         .await;
 
-        let result = match result {
-            Ok(path) => Ok(path),
+        match result {
+            Ok(path) => return Ok(path),
             Err(err) => {
-                total_attempts += err.attempts();
-                let past_retries = total_attempts - 1;
-                if is_transient_network_error(&err) {
-                    let retry_decision = retry_policy.should_retry(start_time, past_retries);
-                    if let reqwest_retry::RetryDecision::Retry { execute_after } = retry_decision {
-                        debug!(
-                            "Transient failure while installing {} {}; retrying...",
-                            binary.name(),
-                            version
-                        );
-                        let duration = execute_after
-                            .duration_since(SystemTime::now())
-                            .unwrap_or_else(|_| Duration::default());
-                        tokio::time::sleep(duration).await;
-                        retried_here = true;
-                        continue;
-                    }
+                if let Some(backoff) = retry_state.should_retry(&err, err.retries()) {
+                    retry_state.sleep_backoff(backoff).await;
+                    continue;
                 }
-                if retried_here {
+                return if retry_state.total_retries() > 0 {
                     Err(Error::RetriedError {
                         err: Box::new(err),
-                        retries: past_retries,
+                        retries: retry_state.total_retries(),
                     })
                 } else {
                     Err(err)
-                }
-            }
-        };
-        return result;
+                };
+            }
+        }
     }
 }
```

```diff
@@ -3309,7 +3309,9 @@ pub struct InitArgs {
     ///
     /// Disables creating extra files like `README.md`, the `src/` tree, `.python-version` files,
     /// etc.
-    #[arg(long, conflicts_with = "script")]
+    ///
+    /// When combined with `--script`, the script will only contain the inline metadata header.
+    #[arg(long)]
     pub bare: bool,
 
     /// Create a virtual project, rather than a package.
@@ -5371,6 +5373,21 @@ pub struct ToolRunArgs {
     #[arg(long)]
     pub python_platform: Option<TargetTriple>,
 
+    /// The backend to use when fetching packages in the PyTorch ecosystem (e.g., `cpu`, `cu126`, or `auto`)
+    ///
+    /// When set, uv will ignore the configured index URLs for packages in the PyTorch ecosystem,
+    /// and will instead use the defined backend.
+    ///
+    /// For example, when set to `cpu`, uv will use the CPU-only PyTorch index; when set to `cu126`,
+    /// uv will use the PyTorch index for CUDA 12.6.
+    ///
+    /// The `auto` mode will attempt to detect the appropriate PyTorch index based on the currently
+    /// installed CUDA drivers.
+    ///
+    /// This option is in preview and may change in any future release.
+    #[arg(long, value_enum, env = EnvVars::UV_TORCH_BACKEND)]
+    pub torch_backend: Option<TorchMode>,
+
     #[arg(long, hide = true)]
     pub generate_shell_completion: Option<clap_complete_command::Shell>,
 }
@@ -5547,6 +5564,21 @@ pub struct ToolInstallArgs {
     /// `--python-platform` option is intended for advanced use cases.
     #[arg(long)]
     pub python_platform: Option<TargetTriple>,
+
+    /// The backend to use when fetching packages in the PyTorch ecosystem (e.g., `cpu`, `cu126`, or `auto`)
+    ///
+    /// When set, uv will ignore the configured index URLs for packages in the PyTorch ecosystem,
+    /// and will instead use the defined backend.
+    ///
+    /// For example, when set to `cpu`, uv will use the CPU-only PyTorch index; when set to `cu126`,
+    /// uv will use the PyTorch index for CUDA 12.6.
+    ///
+    /// The `auto` mode will attempt to detect the appropriate PyTorch index based on the currently
+    /// installed CUDA drivers.
+    ///
+    /// This option is in preview and may change in any future release.
+    #[arg(long, value_enum, env = EnvVars::UV_TORCH_BACKEND)]
+    pub torch_backend: Option<TorchMode>,
 }
 
 #[derive(Args)]
```

```diff
@@ -366,6 +366,7 @@ pub fn resolver_options(
         exclude_newer_package.unwrap_or_default(),
     ),
     link_mode,
+    torch_backend: None,
     no_build: flag(no_build, build, "build"),
     no_build_package: Some(no_build_package),
     no_binary: flag(no_binary, binary, "binary"),
```

View File

@ -4,7 +4,7 @@ use std::fmt::Write;
use std::num::ParseIntError; use std::num::ParseIntError;
use std::path::Path; use std::path::Path;
use std::sync::Arc; use std::sync::Arc;
use std::time::Duration; use std::time::{Duration, SystemTime};
use std::{env, io, iter}; use std::{env, io, iter};
use anyhow::anyhow; use anyhow::anyhow;
@ -20,8 +20,8 @@ use reqwest::{Client, ClientBuilder, IntoUrl, Proxy, Request, Response, multipar
use reqwest_middleware::{ClientWithMiddleware, Middleware}; use reqwest_middleware::{ClientWithMiddleware, Middleware};
use reqwest_retry::policies::ExponentialBackoff; use reqwest_retry::policies::ExponentialBackoff;
use reqwest_retry::{ use reqwest_retry::{
DefaultRetryableStrategy, RetryTransientMiddleware, Retryable, RetryableStrategy, RetryPolicy, RetryTransientMiddleware, Retryable, RetryableStrategy, default_on_request_error,
default_on_request_error, default_on_request_success,
}; };
use thiserror::Error; use thiserror::Error;
use tracing::{debug, trace}; use tracing::{debug, trace};
@ -1019,21 +1019,15 @@ impl<'a> RequestBuilder<'a> {
} }
} }
/// Extends [`DefaultRetryableStrategy`], to log transient request failures and additional retry cases. /// An extension over [`DefaultRetryableStrategy`] that logs transient request failures and
/// adds additional retry cases.
pub struct UvRetryableStrategy; pub struct UvRetryableStrategy;
impl RetryableStrategy for UvRetryableStrategy { impl RetryableStrategy for UvRetryableStrategy {
fn handle(&self, res: &Result<Response, reqwest_middleware::Error>) -> Option<Retryable> { fn handle(&self, res: &Result<Response, reqwest_middleware::Error>) -> Option<Retryable> {
// Use the default strategy and check for additional transient error cases. let retryable = match res {
let retryable = match DefaultRetryableStrategy.handle(res) { Ok(success) => default_on_request_success(success),
None | Some(Retryable::Fatal) Err(err) => retryable_on_request_failure(err),
if res
.as_ref()
.is_err_and(|err| is_transient_network_error(err)) =>
{
Some(Retryable::Transient)
}
default => default,
}; };
// Log on transient errors // Log on transient errors
@ -1059,11 +1053,13 @@ impl RetryableStrategy for UvRetryableStrategy {
/// Whether the error looks like a network error that should be retried. /// Whether the error looks like a network error that should be retried.
/// ///
/// There are two cases that the default retry strategy is missing: /// This is an extension over [`reqwest_middleware::default_on_request_failure`], which is missing
/// a number of cases:
/// * Inside the reqwest or reqwest-middleware error is an `io::Error` such as a broken pipe /// * Inside the reqwest or reqwest-middleware error is an `io::Error` such as a broken pipe
/// * When streaming a response, a reqwest error may be hidden several layers behind errors /// * When streaming a response, a reqwest error may be hidden several layers behind errors
/// of different crates processing the stream, including `io::Error` layers. /// of different crates processing the stream, including `io::Error` layers
pub fn is_transient_network_error(err: &(dyn Error + 'static)) -> bool { /// * Any `h2` error
fn retryable_on_request_failure(err: &(dyn Error + 'static)) -> Option<Retryable> {
// First, try to show a nice trace log // First, try to show a nice trace log
if let Some((Some(status), Some(url))) = find_source::<WrappedReqwestError>(&err) if let Some((Some(status), Some(url))) = find_source::<WrappedReqwestError>(&err)
.map(|request_err| (request_err.status(), request_err.url())) .map(|request_err| (request_err.status(), request_err.url()))
@ -1078,29 +1074,32 @@ pub fn is_transient_network_error(err: &(dyn Error + 'static)) -> bool {
// crates // crates
let mut current_source = Some(err); let mut current_source = Some(err);
while let Some(source) = current_source { while let Some(source) = current_source {
if let Some(reqwest_err) = source.downcast_ref::<WrappedReqwestError>() { // Handle different kinds of reqwest error nesting not accessible by downcast.
has_known_error = true; let reqwest_err = if let Some(reqwest_err) = source.downcast_ref::<reqwest::Error>() {
if let reqwest_middleware::Error::Reqwest(reqwest_err) = &**reqwest_err { Some(reqwest_err)
if default_on_request_error(reqwest_err) == Some(Retryable::Transient) { } else if let Some(reqwest_err) = source
trace!("Retrying nested reqwest middleware error"); .downcast_ref::<WrappedReqwestError>()
return true; .and_then(|err| err.inner())
} {
if is_retryable_status_error(reqwest_err) { Some(reqwest_err)
trace!("Retrying nested reqwest middleware status code error"); } else if let Some(reqwest_middleware::Error::Reqwest(reqwest_err)) =
return true; source.downcast_ref::<reqwest_middleware::Error>()
} {
} Some(reqwest_err)
} else {
None
};
trace!("Cannot retry nested reqwest middleware error"); if let Some(reqwest_err) = reqwest_err {
} else if let Some(reqwest_err) = source.downcast_ref::<reqwest::Error>() {
has_known_error = true; has_known_error = true;
// Ignore the default retry strategy returning fatal.
if default_on_request_error(reqwest_err) == Some(Retryable::Transient) { if default_on_request_error(reqwest_err) == Some(Retryable::Transient) {
trace!("Retrying nested reqwest error"); trace!("Retrying nested reqwest error");
return true; return Some(Retryable::Transient);
} }
if is_retryable_status_error(reqwest_err) { if is_retryable_status_error(reqwest_err) {
trace!("Retrying nested reqwest status code error"); trace!("Retrying nested reqwest status code error");
return true; return Some(Retryable::Transient);
} }
trace!("Cannot retry nested reqwest error"); trace!("Cannot retry nested reqwest error");
@ -1108,7 +1107,7 @@ pub fn is_transient_network_error(err: &(dyn Error + 'static)) -> bool {
// All h2 errors look like errors that should be retried // All h2 errors look like errors that should be retried
// https://github.com/astral-sh/uv/issues/15916 // https://github.com/astral-sh/uv/issues/15916
trace!("Retrying nested h2 error"); trace!("Retrying nested h2 error");
return true; return Some(Retryable::Transient);
} else if let Some(io_err) = source.downcast_ref::<io::Error>() { } else if let Some(io_err) = source.downcast_ref::<io::Error>() {
has_known_error = true; has_known_error = true;
let retryable_io_err_kinds = [ let retryable_io_err_kinds = [
@ -1125,7 +1124,7 @@ pub fn is_transient_network_error(err: &(dyn Error + 'static)) -> bool {
]; ];
if retryable_io_err_kinds.contains(&io_err.kind()) { if retryable_io_err_kinds.contains(&io_err.kind()) {
trace!("Retrying error: `{}`", io_err.kind()); trace!("Retrying error: `{}`", io_err.kind());
return true; return Some(Retryable::Transient);
} }
trace!( trace!(
@ -1140,7 +1139,81 @@ pub fn is_transient_network_error(err: &(dyn Error + 'static)) -> bool {
if !has_known_error { if !has_known_error {
trace!("Cannot retry error: Neither an IO error nor a reqwest error"); trace!("Cannot retry error: Neither an IO error nor a reqwest error");
} }
false
None
}
/// Per-request retry state and policy.
pub struct RetryState {
retry_policy: ExponentialBackoff,
start_time: SystemTime,
total_retries: u32,
url: DisplaySafeUrl,
}
impl RetryState {
/// Initialize the [`RetryState`] and start the backoff timer.
pub fn start(retry_policy: ExponentialBackoff, url: impl Into<DisplaySafeUrl>) -> Self {
Self {
retry_policy,
start_time: SystemTime::now(),
total_retries: 0,
url: url.into(),
}
}
/// The number of retries across all requests.
///
/// After a failed retryable request, this equals the maximum number of retries.
pub fn total_retries(&self) -> u32 {
self.total_retries
}
/// Determines whether request should be retried.
///
/// Takes the number of retries from nested layers associated with the specific `err` type as
/// `error_retries`.
///
/// Returns the backoff duration if the request should be retried.
#[must_use]
pub fn should_retry(
&mut self,
err: &(dyn Error + 'static),
error_retries: u32,
) -> Option<Duration> {
// If the middleware performed any retries, consider them in our budget.
self.total_retries += error_retries;
match retryable_on_request_failure(err) {
Some(Retryable::Transient) => {
let retry_decision = self
.retry_policy
.should_retry(self.start_time, self.total_retries);
if let reqwest_retry::RetryDecision::Retry { execute_after } = retry_decision {
let duration = execute_after
.duration_since(SystemTime::now())
.unwrap_or_else(|_| Duration::default());
self.total_retries += 1;
return Some(duration);
}
None
}
Some(Retryable::Fatal) | None => None,
}
}
/// Wait before retrying the request.
pub async fn sleep_backoff(&self, duration: Duration) {
debug!(
"Transient failure while handling response from {}; retrying after {:.1}s...",
self.url,
duration.as_secs_f32(),
);
// TODO(konsti): Should we show a spinner plus a message in the CLI while
// waiting?
tokio::time::sleep(duration).await;
}
}

/// Whether the error is a status code error that is retryable.

@@ -1385,7 +1458,7 @@ mod tests {
     let middleware_client = ClientWithMiddleware::default();
     let mut retried = Vec::new();
     for status in 100..599 {
-        // Test all standard status codes and and example for a non-RFC code used in the wild.
+        // Test all standard status codes and an example for a non-RFC code used in the wild.
         if StatusCode::from_u16(status)?.canonical_reason().is_none() && status != 420 {
             continue;
         }
@@ -1401,7 +1474,7 @@ mod tests {
             .await;
         let middleware_retry =
-            DefaultRetryableStrategy.handle(&response) == Some(Retryable::Transient);
+            UvRetryableStrategy.handle(&response) == Some(Retryable::Transient);

         let response = client
             .get(format!("{}/{}", server.uri(), status))
@@ -1410,7 +1483,7 @@ mod tests {
         let uv_retry = match response.error_for_status() {
             Ok(_) => false,
-            Err(err) => is_transient_network_error(&err),
+            Err(err) => retryable_on_request_failure(&err) == Some(Retryable::Transient),
         };
         // Ensure we're retrying the same status code as the reqwest_retry crate. We may choose
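The retry loops unified in this PR all share one shape: fold retries already performed by an inner layer into a shared budget, ask whether another attempt is allowed, and back off before continuing. A minimal, self-contained sketch of that pattern is below; this `RetryState` is a hypothetical stand-in (fixed budget, simple exponential backoff), not uv's actual type, which wraps a `reqwest_retry` policy and a URL.

```rust
use std::time::Duration;

/// Illustrative stand-in for uv's `RetryState`: a shared retry budget plus
/// an exponential backoff schedule. Names and constants are assumptions.
struct RetryState {
    total_retries: u32,
    max_retries: u32,
}

impl RetryState {
    fn start(max_retries: u32) -> Self {
        Self { total_retries: 0, max_retries }
    }

    /// Fold retries already performed by an inner layer (e.g. middleware)
    /// into the budget, then decide whether to retry once more.
    fn should_retry(&mut self, inner_retries: u32) -> Option<Duration> {
        self.total_retries += inner_retries;
        if self.total_retries >= self.max_retries {
            return None;
        }
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        let backoff = Duration::from_millis(100 * (1 << self.total_retries));
        self.total_retries += 1;
        Some(backoff)
    }

    fn total_retries(&self) -> u32 {
        self.total_retries
    }
}

fn main() {
    let mut state = RetryState::start(3);
    // First failure, no inner retries: retry after the first backoff step.
    assert_eq!(state.should_retry(0), Some(Duration::from_millis(100)));
    // Second failure, but the middleware already retried twice: the shared
    // budget (1 + 2 = 3) is exhausted, so give up.
    assert_eq!(state.should_retry(2), None);
    println!("retries used: {}", state.total_retries());
}
```

Counting middleware retries against the same budget is the bug fix described in the commit messages: without it, each layer retries independently and the total attempt count is both undercounted in errors and overshot in practice.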


@@ -1,9 +1,7 @@
-use std::time::{Duration, SystemTime};
 use std::{borrow::Cow, path::Path};

 use futures::FutureExt;
 use reqwest::{Request, Response};
-use reqwest_retry::RetryPolicy;
 use rkyv::util::AlignedVec;
 use serde::de::DeserializeOwned;
 use serde::{Deserialize, Serialize};
@@ -14,7 +12,7 @@ use uv_fs::write_atomic;
 use uv_redacted::DisplaySafeUrl;

 use crate::BaseClient;
-use crate::base_client::is_transient_network_error;
+use crate::base_client::RetryState;
 use crate::error::ProblemDetails;
 use crate::{
     Error, ErrorKind,
@@ -119,51 +117,35 @@ where
     }
 }

-/// Dispatch type: Either a cached client error or a (user specified) error from the callback
+/// Dispatch type: Either a cached client error or a (user specified) error from the callback.
 #[derive(Debug)]
 pub enum CachedClientError<CallbackError: std::error::Error + 'static> {
-    Client {
-        retries: Option<u32>,
-        err: Error,
-    },
-    Callback {
-        retries: Option<u32>,
-        err: CallbackError,
-    },
+    /// The client tracks retries internally.
+    Client(Error),
+    /// Track retries before a callback explicitly, as we can't attach them to the callback error
+    /// type.
+    Callback { retries: u32, err: CallbackError },
 }

 impl<CallbackError: std::error::Error + 'static> CachedClientError<CallbackError> {
-    /// Attach the number of retries to the error context.
-    ///
-    /// Adds to existing errors if any, in case different layers retried.
+    /// Attach the combined number of retries to the error context, discarding the previous count.
     fn with_retries(self, retries: u32) -> Self {
         match self {
-            Self::Client {
-                retries: existing_retries,
-                err,
-            } => Self::Client {
-                retries: Some(existing_retries.unwrap_or_default() + retries),
-                err,
-            },
-            Self::Callback {
-                retries: existing_retries,
-                err,
-            } => Self::Callback {
-                retries: Some(existing_retries.unwrap_or_default() + retries),
-                err,
-            },
+            Self::Client(err) => Self::Client(err.with_retries(retries)),
+            Self::Callback { retries: _, err } => Self::Callback { retries, err },
         }
     }

-    fn retries(&self) -> Option<u32> {
+    fn retries(&self) -> u32 {
         match self {
-            Self::Client { retries, .. } => *retries,
+            Self::Client(err) => err.retries(),
             Self::Callback { retries, .. } => *retries,
         }
     }

     fn error(&self) -> &(dyn std::error::Error + 'static) {
         match self {
-            Self::Client { err, .. } => err,
+            Self::Client(err) => err,
             Self::Callback { err, .. } => err,
         }
     }
@@ -171,10 +153,7 @@ impl<CallbackError: std::error::Error + 'static> CachedClientError<CallbackError
 impl<CallbackError: std::error::Error + 'static> From<Error> for CachedClientError<CallbackError> {
     fn from(error: Error) -> Self {
-        Self::Client {
-            retries: None,
-            err: error,
-        }
+        Self::Client(error)
     }
 }
@@ -182,10 +161,7 @@ impl<CallbackError: std::error::Error + 'static> From<ErrorKind>
     for CachedClientError<CallbackError>
 {
     fn from(error: ErrorKind) -> Self {
-        Self::Client {
-            retries: None,
-            err: error.into(),
-        }
+        Self::Client(error.into())
     }
 }
@@ -193,16 +169,10 @@ impl<E: Into<Self> + std::error::Error + 'static> From<CachedClientError<E>> for
     /// Attach retry error context, if there were retries.
     fn from(error: CachedClientError<E>) -> Self {
         match error {
-            CachedClientError::Client {
-                retries: Some(retries),
-                err,
-            } => Self::new(err.into_kind(), retries),
-            CachedClientError::Client { retries: None, err } => err,
-            CachedClientError::Callback {
-                retries: Some(retries),
-                err,
-            } => Self::new(err.into().into_kind(), retries),
-            CachedClientError::Callback { retries: None, err } => err.into(),
+            CachedClientError::Client(error) => error,
+            CachedClientError::Callback { retries, err } => {
+                Self::new(err.into().into_kind(), retries)
+            }
         }
     }
 }
@@ -450,10 +420,16 @@ impl CachedClient {
         response_callback: Callback,
     ) -> Result<Payload::Target, CachedClientError<CallBackError>> {
         let new_cache = info_span!("new_cache", file = %cache_entry.path().display());
+        // Capture retries from the retry middleware
+        let retries = response
+            .extensions()
+            .get::<reqwest_retry::RetryCount>()
+            .map(|retries| retries.value())
+            .unwrap_or_default();
         let data = response_callback(response)
             .boxed_local()
             .await
-            .map_err(|err| CachedClientError::Callback { retries: None, err })?;
+            .map_err(|err| CachedClientError::Callback { retries, err })?;
         let Some(cache_policy) = cache_policy else {
             return Ok(data.into_target());
         };
@@ -662,16 +638,10 @@ impl CachedClient {
             } else {
                 None
             };
-            return Err(CachedClientError::<Error>::Client {
-                retries: retry_count,
-                err: ErrorKind::from_reqwest_with_problem_details(
-                    url,
-                    status_error,
-                    problem_details,
-                )
-                .into(),
-            }
-            .into());
+            return Err(Error::new(
+                ErrorKind::from_reqwest_with_problem_details(url, status_error, problem_details),
+                retry_count.unwrap_or_default(),
+            ));
         }

         let cache_policy = cache_policy_builder.build(&response);
@@ -720,49 +690,23 @@ impl CachedClient {
         cache_control: CacheControl<'_>,
         response_callback: Callback,
     ) -> Result<Payload::Target, CachedClientError<CallBackError>> {
-        let mut past_retries = 0;
-        let start_time = SystemTime::now();
-        let retry_policy = self.uncached().retry_policy();
+        let mut retry_state = RetryState::start(self.uncached().retry_policy(), req.url().clone());
         loop {
             let fresh_req = req.try_clone().expect("HTTP request must be cloneable");
             let result = self
                 .get_cacheable(fresh_req, cache_entry, cache_control, &response_callback)
                 .await;
-            // Check if the middleware already performed retries
-            let middleware_retries = match &result {
-                Err(err) => err.retries().unwrap_or_default(),
-                Ok(_) => 0,
-            };
-            if result
-                .as_ref()
-                .is_err_and(|err| is_transient_network_error(err.error()))
-            {
-                // If middleware already retried, consider that in our retry budget
-                let total_retries = past_retries + middleware_retries;
-                let retry_decision = retry_policy.should_retry(start_time, total_retries);
-                if let reqwest_retry::RetryDecision::Retry { execute_after } = retry_decision {
-                    let duration = execute_after
-                        .duration_since(SystemTime::now())
-                        .unwrap_or_else(|_| Duration::default());
-                    debug!(
-                        "Transient failure while handling response from {}; retrying after {:.1}s...",
-                        req.url(),
-                        duration.as_secs_f32(),
-                    );
-                    tokio::time::sleep(duration).await;
-                    past_retries += 1;
-                    continue;
-                }
-            }
-            if past_retries > 0 {
-                return result.map_err(|err| err.with_retries(past_retries));
-            }
-            return result;
+            match result {
+                Ok(ok) => return Ok(ok),
+                Err(err) => {
+                    if let Some(backoff) = retry_state.should_retry(err.error(), err.retries()) {
+                        retry_state.sleep_backoff(backoff).await;
+                        continue;
+                    }
+                    return Err(err.with_retries(retry_state.total_retries()));
+                }
+            }
         }
     }
@@ -780,48 +724,23 @@ impl CachedClient {
         cache_control: CacheControl<'_>,
         response_callback: Callback,
     ) -> Result<Payload, CachedClientError<CallBackError>> {
-        let mut past_retries = 0;
-        let start_time = SystemTime::now();
-        let retry_policy = self.uncached().retry_policy();
+        let mut retry_state = RetryState::start(self.uncached().retry_policy(), req.url().clone());
        loop {
             let fresh_req = req.try_clone().expect("HTTP request must be cloneable");
             let result = self
                 .skip_cache(fresh_req, cache_entry, cache_control, &response_callback)
                 .await;
-            // Check if the middleware already performed retries
-            let middleware_retries = match &result {
-                Err(err) => err.retries().unwrap_or_default(),
-                _ => 0,
-            };
-            if result
-                .as_ref()
-                .err()
-                .is_some_and(|err| is_transient_network_error(err.error()))
-            {
-                let total_retries = past_retries + middleware_retries;
-                let retry_decision = retry_policy.should_retry(start_time, total_retries);
-                if let reqwest_retry::RetryDecision::Retry { execute_after } = retry_decision {
-                    let duration = execute_after
-                        .duration_since(SystemTime::now())
-                        .unwrap_or_else(|_| Duration::default());
-                    debug!(
-                        "Transient failure while handling response from {}; retrying after {}s...",
-                        req.url(),
-                        duration.as_secs(),
-                    );
-                    tokio::time::sleep(duration).await;
-                    past_retries += 1;
-                    continue;
-                }
-            }
-            if past_retries > 0 {
-                return result.map_err(|err| err.with_retries(past_retries));
-            }
-            return result;
+            match result {
+                Ok(ok) => return Ok(ok),
+                Err(err) => {
+                    if let Some(backoff) = retry_state.should_retry(err.error(), err.retries()) {
+                        retry_state.sleep_backoff(backoff).await;
+                        continue;
+                    }
+                    return Err(err.with_retries(retry_state.total_retries()));
+                }
+            }
         }
     }
 }


@@ -123,6 +123,11 @@ impl Error {
         &self.kind
     }

+    pub(crate) fn with_retries(mut self, retries: u32) -> Self {
+        self.retries = retries;
+        self
+    }
+
     /// Create a new error from a JSON parsing error.
     pub(crate) fn from_json_err(err: serde_json::Error, url: DisplaySafeUrl) -> Self {
         ErrorKind::BadJson { source: err, url }.into()
@@ -425,13 +430,33 @@ impl WrappedReqwestError {
         problem_details: Option<ProblemDetails>,
     ) -> Self {
         Self {
-            error,
+            error: Self::filter_retries_from_error(error),
             problem_details: problem_details.map(Box::new),
         }
     }

+    /// Drop `RetryError::WithRetries` to avoid reporting the number of retries twice.
+    ///
+    /// We attach the number of errors outside by adding the retry counts from the retry middleware
+    /// and from uv's outer retry loop for streaming bodies. Stripping the inner count from the
+    /// error context avoids showing two numbers.
+    fn filter_retries_from_error(error: reqwest_middleware::Error) -> reqwest_middleware::Error {
+        match error {
+            reqwest_middleware::Error::Middleware(error) => {
+                match error.downcast::<reqwest_retry::RetryError>() {
+                    Ok(
+                        reqwest_retry::RetryError::WithRetries { err, .. }
+                        | reqwest_retry::RetryError::Error(err),
+                    ) => err,
+                    Err(error) => reqwest_middleware::Error::Middleware(error),
+                }
+            }
+            error @ reqwest_middleware::Error::Reqwest(_) => error,
+        }
+    }
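The unwrapping trick in `filter_retries_from_error` is the standard `downcast` pattern on boxed errors: if the error is the retry wrapper, keep only its inner error; otherwise hand the box back unchanged. A self-contained sketch with a hypothetical wrapper type (standing in for `reqwest_retry::RetryError::WithRetries`, which we cannot construct here):

```rust
use std::error::Error;
use std::fmt;

/// Hypothetical wrapper decorating an inner error with a retry count,
/// analogous to `reqwest_retry::RetryError::WithRetries`.
#[derive(Debug)]
struct WithRetries {
    retries: u32,
    inner: Box<dyn Error + Send + Sync>,
}

impl fmt::Display for WithRetries {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "failed after {} retries: {}", self.retries, self.inner)
    }
}

impl Error for WithRetries {}

/// Strip the retry wrapper so the count is not reported twice, mirroring
/// the `downcast` pattern in `filter_retries_from_error`.
fn strip_retries(err: Box<dyn Error + Send + Sync>) -> Box<dyn Error + Send + Sync> {
    match err.downcast::<WithRetries>() {
        // It was the wrapper: keep only the inner error.
        Ok(wrapped) => wrapped.inner,
        // Any other error passes through untouched.
        Err(other) => other,
    }
}

fn main() {
    let io_err: Box<dyn Error + Send + Sync> =
        Box::new(std::io::Error::new(std::io::ErrorKind::TimedOut, "timed out"));
    let wrapped: Box<dyn Error + Send + Sync> =
        Box::new(WithRetries { retries: 3, inner: io_err });
    let stripped = strip_retries(wrapped);
    // The wrapper is gone; only the underlying error message remains.
    assert_eq!(stripped.to_string(), "timed out");
}
```

Note that `Box::downcast` consumes the box and returns it on mismatch, which is what makes the pass-through arm possible without cloning.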
     /// Return the inner [`reqwest::Error`] from the error chain, if it exists.
-    fn inner(&self) -> Option<&reqwest::Error> {
+    pub fn inner(&self) -> Option<&reqwest::Error> {
         match &self.error {
             reqwest_middleware::Error::Reqwest(err) => Some(err),
             reqwest_middleware::Error::Middleware(err) => err.chain().find_map(|err| {
@@ -495,6 +520,7 @@ impl WrappedReqwestError {
 impl From<reqwest::Error> for WrappedReqwestError {
     fn from(error: reqwest::Error) -> Self {
         Self {
+            // No need to filter retries as this error does not have retries.
             error: error.into(),
             problem_details: None,
         }
@@ -504,7 +530,7 @@ impl From<reqwest::Error> for WrappedReqwestError {
 impl From<reqwest_middleware::Error> for WrappedReqwestError {
     fn from(error: reqwest_middleware::Error) -> Self {
         Self {
-            error,
+            error: Self::filter_retries_from_error(error),
             problem_details: None,
         }
     }


@@ -246,7 +246,7 @@ impl<'a> FlatIndexClient<'a> {
                 .collect();
             Ok(FlatIndexEntries::from_entries(files))
         }
-        Err(CachedClientError::Client { err, .. }) if err.is_offline() => {
+        Err(CachedClientError::Client(err)) if err.is_offline() => {
             Ok(FlatIndexEntries::offline())
         }
         Err(err) => Err(err.into()),


@@ -1,7 +1,7 @@
 pub use base_client::{
     AuthIntegration, BaseClient, BaseClientBuilder, DEFAULT_MAX_REDIRECTS, DEFAULT_RETRIES,
     ExtraMiddleware, RedirectClientWithMiddleware, RedirectPolicy, RequestBuilder,
-    RetryParsingError, UvRetryableStrategy, is_transient_network_error,
+    RetryParsingError, RetryState, UvRetryableStrategy,
 };
 pub use cached_client::{CacheControl, CachedClient, CachedClientError, DataWithCachePolicy};
 pub use error::{Error, ErrorKind, WrappedReqwestError};


@@ -11,7 +11,7 @@ use crate::ROOT_DIR;
 use crate::generate_all::Mode;

 /// Contains current supported targets
-const TARGETS_YML_URL: &str = "https://raw.githubusercontent.com/astral-sh/python-build-standalone/refs/tags/20251209/cpython-unix/targets.yml";
+const TARGETS_YML_URL: &str = "https://raw.githubusercontent.com/astral-sh/python-build-standalone/refs/tags/20251217/cpython-unix/targets.yml";

 #[derive(clap::Args)]
 pub(crate) struct Args {
@@ -130,7 +130,7 @@ async fn generate() -> Result<String> {
     output.push_str("//! DO NOT EDIT\n");
     output.push_str("//!\n");
     output.push_str("//! Generated with `cargo run dev generate-sysconfig-metadata`\n");
-    output.push_str("//! Targets from <https://github.com/astral-sh/python-build-standalone/blob/20251209/cpython-unix/targets.yml>\n");
+    output.push_str("//! Targets from <https://github.com/astral-sh/python-build-standalone/blob/20251217/cpython-unix/targets.yml>\n");
     output.push_str("//!\n");

     // Disable clippy/fmt


@@ -32,15 +32,18 @@ impl IndexCacheControl {
     /// Return the default files cache control headers for the given index URL, if applicable.
     pub fn artifact_cache_control(url: &Url) -> Option<&'static str> {
-        if url
-            .host_str()
-            .is_some_and(|host| host.ends_with("pytorch.org"))
-        {
+        let dominated_by_pytorch_or_nvidia = url.host_str().is_some_and(|host| {
+            host.eq_ignore_ascii_case("download.pytorch.org")
+                || host.eq_ignore_ascii_case("pypi.nvidia.com")
+        });
+        if dominated_by_pytorch_or_nvidia {
             // Some wheels in the PyTorch registry were accidentally uploaded with `no-cache,no-store,must-revalidate`.
             // The PyTorch team plans to correct this in the future, but in the meantime we override
             // the cache control headers to allow caching of static files.
             //
             // See: https://github.com/pytorch/pytorch/pull/149218
+            //
+            // The same issue applies to files hosted on `pypi.nvidia.com`.
             Some("max-age=365000000, immutable, public")
         } else {
             None
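The hunk above also tightens the host check from a suffix match to an exact, case-insensitive comparison. The difference matters for safety: `ends_with("pytorch.org")` also accepts unrelated hosts whose name merely ends in that string. A small sketch of the comparison (the function name is illustrative, not uv's):

```rust
/// Exact, case-insensitive host matching, as in the diff above. A suffix
/// check like `host.ends_with("pytorch.org")` would also match unrelated
/// hosts such as `notpytorch.org`.
fn is_cache_override_host(host: &str) -> bool {
    host.eq_ignore_ascii_case("download.pytorch.org")
        || host.eq_ignore_ascii_case("pypi.nvidia.com")
}

fn main() {
    assert!(is_cache_override_host("download.pytorch.org"));
    // Host names are case-insensitive per DNS conventions.
    assert!(is_cache_override_host("PyPI.NVIDIA.com"));
    // Hosts the old `ends_with` check would have accepted:
    assert!(!is_cache_override_host("notpytorch.org"));
    assert!(!is_cache_override_host("pytorch.org"));
}
```

`eq_ignore_ascii_case` suffices here because registered host names are ASCII after IDNA encoding; a plain `==` would miss mixed-case URLs.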


@@ -875,4 +875,43 @@ mod tests {
             Some("max-age=3600")
         );
     }
+
+    #[test]
+    fn test_nvidia_default_cache_control() {
+        // Test that NVIDIA indexes get default cache control from the getter methods
+        let indexes = vec![Index {
+            name: Some(IndexName::from_str("nvidia").unwrap()),
+            url: IndexUrl::from_str("https://pypi.nvidia.com").unwrap(),
+            cache_control: None, // No explicit cache control
+            explicit: false,
+            default: false,
+            origin: None,
+            format: IndexFormat::Simple,
+            publish_url: None,
+            authenticate: uv_auth::AuthPolicy::default(),
+            ignore_error_codes: None,
+        }];
+
+        let index_urls = IndexUrls::from_indexes(indexes.clone());
+        let index_locations = IndexLocations::new(indexes, Vec::new(), false);
+        let nvidia_url = IndexUrl::from_str("https://pypi.nvidia.com").unwrap();
+
+        // IndexUrls should return the default for NVIDIA
+        assert_eq!(index_urls.simple_api_cache_control_for(&nvidia_url), None);
+        assert_eq!(
+            index_urls.artifact_cache_control_for(&nvidia_url),
+            Some("max-age=365000000, immutable, public")
+        );
+
+        // IndexLocations should also return the default for NVIDIA
+        assert_eq!(
+            index_locations.simple_api_cache_control_for(&nvidia_url),
+            None
+        );
+        assert_eq!(
+            index_locations.artifact_cache_control_for(&nvidia_url),
+            Some("max-age=365000000, immutable, public")
+        );
+    }
 }


@@ -24,7 +24,7 @@ impl IndexStatusCodeStrategy {
     pub fn from_index_url(url: &Url) -> Self {
         if url
             .host_str()
-            .is_some_and(|host| host.ends_with("pytorch.org"))
+            .is_some_and(|host| host.eq_ignore_ascii_case("download.pytorch.org"))
         {
             // The PyTorch registry returns a 403 when a package is not found, so
             // we ignore them when deciding whether to search other indexes.


@@ -721,7 +721,7 @@ impl<'a, Context: BuildContext> DistributionDatabase<'a, Context> {
             .await
             .map_err(|err| match err {
                 CachedClientError::Callback { err, .. } => err,
-                CachedClientError::Client { err, .. } => Error::Client(err),
+                CachedClientError::Client(err) => Error::Client(err),
             })?;

         // If the archive is missing the required hashes, or has since been removed, force a refresh.
@@ -745,7 +745,7 @@ impl<'a, Context: BuildContext> DistributionDatabase<'a, Context> {
             .await
             .map_err(|err| match err {
                 CachedClientError::Callback { err, .. } => err,
-                CachedClientError::Client { err, .. } => Error::Client(err),
+                CachedClientError::Client(err) => Error::Client(err),
             })
         })
         .await?
@@ -926,7 +926,7 @@ impl<'a, Context: BuildContext> DistributionDatabase<'a, Context> {
             .await
             .map_err(|err| match err {
                 CachedClientError::Callback { err, .. } => err,
-                CachedClientError::Client { err, .. } => Error::Client(err),
+                CachedClientError::Client(err) => Error::Client(err),
             })?;

         // If the archive is missing the required hashes, or has since been removed, force a refresh.
@@ -950,7 +950,7 @@ impl<'a, Context: BuildContext> DistributionDatabase<'a, Context> {
             .await
             .map_err(|err| match err {
                 CachedClientError::Callback { err, .. } => err,
-                CachedClientError::Client { err, .. } => Error::Client(err),
+                CachedClientError::Client(err) => Error::Client(err),
             })
         })
         .await?


@@ -823,7 +823,7 @@ impl<'a, T: BuildContext> SourceDistributionBuilder<'a, T> {
             .await
             .map_err(|err| match err {
                 CachedClientError::Callback { err, .. } => err,
-                CachedClientError::Client { err, .. } => Error::Client(err),
+                CachedClientError::Client(err) => Error::Client(err),
             })?;

         // If the archive is missing the required hashes, force a refresh.
@@ -843,7 +843,7 @@ impl<'a, T: BuildContext> SourceDistributionBuilder<'a, T> {
             .await
             .map_err(|err| match err {
                 CachedClientError::Callback { err, .. } => err,
-                CachedClientError::Client { err, .. } => Error::Client(err),
+                CachedClientError::Client(err) => Error::Client(err),
             })
         })
         .await
@@ -2280,7 +2280,7 @@ impl<'a, T: BuildContext> SourceDistributionBuilder<'a, T> {
             .await
             .map_err(|err| match err {
                 CachedClientError::Callback { err, .. } => err,
-                CachedClientError::Client { err, .. } => Error::Client(err),
+                CachedClientError::Client(err) => Error::Client(err),
             })
         })
         .await


@@ -3,7 +3,6 @@ mod trusted_publishing;
 use std::collections::BTreeSet;
 use std::path::{Path, PathBuf};
 use std::sync::Arc;
-use std::time::{Duration, SystemTime};
 use std::{fmt, io};

 use fs_err::tokio::File;
@@ -13,7 +12,7 @@ use itertools::Itertools;
 use reqwest::header::{AUTHORIZATION, LOCATION, ToStrError};
 use reqwest::multipart::Part;
 use reqwest::{Body, Response, StatusCode};
-use reqwest_retry::RetryPolicy;
+use reqwest_retry::RetryError;
 use reqwest_retry::policies::ExponentialBackoff;
 use rustc_hash::FxHashMap;
 use serde::Deserialize;
@@ -28,7 +27,7 @@ use uv_auth::{Credentials, PyxTokenStore, Realm};
 use uv_cache::{Cache, Refresh};
 use uv_client::{
     BaseClient, DEFAULT_MAX_REDIRECTS, MetadataFormat, OwnedArchive, RegistryClientBuilder,
-    RequestBuilder, RetryParsingError, is_transient_network_error,
+    RequestBuilder, RetryParsingError, RetryState,
 };
 use uv_configuration::{KeyringProviderType, TrustedPublishing};
 use uv_distribution_filename::{DistFilename, SourceDistExtension, SourceDistFilename};
@@ -474,11 +473,10 @@ pub async fn upload(
     download_concurrency: &Semaphore,
     reporter: Arc<impl Reporter>,
 ) -> Result<bool, PublishError> {
-    let mut n_past_retries = 0;
     let mut n_past_redirections = 0;
     let max_redirects = DEFAULT_MAX_REDIRECTS;
-    let start_time = SystemTime::now();
     let mut current_registry = registry.clone();
+    let mut retry_state = RetryState::start(retry_policy, registry.clone());

     loop {
         let (request, idx) = build_upload_request(
@@ -553,23 +551,17 @@ pub async fn upload(
                 response
             }
             Err(err) => {
-                if is_transient_network_error(&err) {
-                    let retry_decision = retry_policy.should_retry(start_time, n_past_retries);
-                    if let reqwest_retry::RetryDecision::Retry { execute_after } = retry_decision {
-                        let duration = execute_after
-                            .duration_since(SystemTime::now())
-                            .unwrap_or_else(|_| Duration::default());
-                        warn_user!(
-                            "Transient failure while handling response for {}; retrying after {}s...",
-                            current_registry,
-                            duration.as_secs()
-                        );
-                        tokio::time::sleep(duration).await;
-                        n_past_retries += 1;
-                        continue;
-                    }
-                }
+                let middleware_retries = if let Some(RetryError::WithRetries { retries, .. }) =
+                    (&err as &dyn std::error::Error).downcast_ref::<RetryError>()
+                {
+                    *retries
+                } else {
+                    0
+                };
+                if let Some(backoff) = retry_state.should_retry(&err, middleware_retries) {
+                    retry_state.sleep_backoff(backoff).await;
+                    continue;
+                }
                 return Err(PublishError::PublishSend(
                     group.file.clone(),
                     current_registry.clone().into(),

File diff suppressed because it is too large


@@ -5,14 +5,13 @@ use std::path::{Path, PathBuf};
 use std::pin::Pin;
 use std::str::FromStr;
 use std::task::{Context, Poll};
-use std::time::{Duration, SystemTime};
 use std::{env, io};

 use futures::TryStreamExt;
 use itertools::Itertools;
 use owo_colors::OwoColorize;
+use reqwest_retry::RetryError;
 use reqwest_retry::policies::ExponentialBackoff;
-use reqwest_retry::{RetryError, RetryPolicy};
 use serde::Deserialize;
 use thiserror::Error;
 use tokio::io::{AsyncRead, AsyncReadExt, AsyncWriteExt, BufWriter, ReadBuf};
@@ -21,7 +20,7 @@ use tokio_util::either::Either;
 use tracing::{debug, instrument};
 use url::Url;

-use uv_client::{BaseClient, WrappedReqwestError, is_transient_network_error};
+use uv_client::{BaseClient, RetryState, WrappedReqwestError};
 use uv_distribution_filename::{ExtensionError, SourceDistExtension};
 use uv_extract::hash::Hasher;
 use uv_fs::{Simplified, rename_with_retry};
@@ -118,29 +117,24 @@ pub enum Error {
 }

 impl Error {
-    // Return the number of attempts that were made to complete this request before this error was
-    // returned. Note that e.g. 3 retries equates to 4 attempts.
+    // Return the number of retries that were made to complete this request before this error was
+    // returned.
     //
-    // It's easier to do arithmetic with "attempts" instead of "retries", because if you have
-    // nested retry loops you can just add up all the attempts directly, while adding up the
-    // retries requires +1/-1 adjustments.
-    fn attempts(&self) -> u32 {
+    // Note that e.g. 3 retries equates to 4 attempts.
+    fn retries(&self) -> u32 {
         // Unfortunately different variants of `Error` track retry counts in different ways. We
         // could consider unifying the variants we handle here in `Error::from_reqwest_middleware`
         // instead, but both approaches will be fragile as new variants get added over time.
         if let Self::NetworkErrorWithRetries { retries, .. } = self {
-            return retries + 1;
+            return *retries;
         }
-        // TODO(jack): let-chains are stable as of Rust 1.88. We should use them here as soon as
-        // our rust-version is high enough.
-        if let Self::NetworkMiddlewareError(_, anyhow_error) = self {
-            if let Some(RetryError::WithRetries { retries, .. }) =
-                anyhow_error.downcast_ref::<RetryError>()
-            {
-                return retries + 1;
-            }
-        }
-        1
+        if let Self::NetworkMiddlewareError(_, anyhow_error) = self
+            && let Some(RetryError::WithRetries { retries, .. }) =
+                anyhow_error.downcast_ref::<RetryError>()
+        {
+            return *retries;
+        }
+        0
     }
 }
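The old code reasoned in "attempts" (retries + 1) because attempts from nested loops can be summed directly, while the new code standardizes on "retries" and lets `RetryState` do the summing. The arithmetic is small but easy to get wrong, so here is an illustrative sketch (function names are ours, not uv's):

```rust
/// e.g. 3 retries means 4 attempts: the original request plus 3 repeats.
fn attempts_from_retries(retries: u32) -> u32 {
    retries + 1
}

/// With a shared budget, retries from nested layers are simply summed:
/// the outer loop's own retries plus whatever each inner middleware did.
fn combined_retries(outer_retries: u32, inner_retries_per_error: &[u32]) -> u32 {
    outer_retries + inner_retries_per_error.iter().sum::<u32>()
}

fn main() {
    assert_eq!(attempts_from_retries(3), 4);
    // Outer loop retried twice; the middleware retried 3 times on the first
    // error and once on the second: 2 + 3 + 1 = 6 retries total.
    assert_eq!(combined_retries(2, &[3, 1]), 6);
    // Summing attempts instead would double-count the original request.
}
```

Returning `0` (instead of the old `1`) for the no-retry case is what lets the caller add these counts without the +1/−1 adjustments the removed comment warned about.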
@@ -1111,9 +1105,11 @@ impl ManagedPythonDownload {
         pypy_install_mirror: Option<&str>,
         reporter: Option<&dyn Reporter>,
     ) -> Result<DownloadResult, Error> {
-        let mut total_attempts = 0;
-        let mut retried_here = false;
-        let start_time = SystemTime::now();
+        let mut retry_state = RetryState::start(
+            *retry_policy,
+            self.download_url(python_install_mirror, pypy_install_mirror)?,
+        );

         loop {
             let result = self
                 .fetch(
@@ -1126,43 +1122,23 @@ impl ManagedPythonDownload {
                     reporter,
                 )
                 .await;
-            let result = match result {
-                Ok(download_result) => Ok(download_result),
+            match result {
+                Ok(download_result) => return Ok(download_result),
                 Err(err) => {
-                    // Inner retry loops (e.g. `reqwest-retry` middleware) might make more than one
-                    // attempt per error we see here.
-                    total_attempts += err.attempts();
-                    // We currently interpret e.g. "3 retries" to mean we should make 4 attempts.
-                    let n_past_retries = total_attempts - 1;
-                    if is_transient_network_error(&err) {
-                        let retry_decision = retry_policy.should_retry(start_time, n_past_retries);
-                        if let reqwest_retry::RetryDecision::Retry { execute_after } =
-                            retry_decision
-                        {
-                            let duration = execute_after
-                                .duration_since(SystemTime::now())
-                                .unwrap_or_else(|_| Duration::default());
-                            debug!(
-                                "Transient failure while handling response for {}; retrying after {}s...",
-                                self.key(),
-                                duration.as_secs()
-                            );
-                            tokio::time::sleep(duration).await;
-                            retried_here = true;
-                            continue; // Retry.
-                        }
-                    }
-                    if retried_here {
-                        Err(Error::NetworkErrorWithRetries {
-                            err: Box::new(err),
-                            retries: n_past_retries,
-                        })
-                    } else {
-                        Err(err)
-                    }
+                    if let Some(backoff) = retry_state.should_retry(&err, err.retries()) {
+                        retry_state.sleep_backoff(backoff).await;
+                        continue;
+                    }
+                    return if retry_state.total_retries() > 0 {
+                        Err(Error::NetworkErrorWithRetries {
+                            err: Box::new(err),
+                            retries: retry_state.total_retries(),
+                        })
+                    } else {
+                        Err(err)
+                    };
                 }
-            };
-            return result;
+            }
         }
     }
@@ -1458,7 +1434,7 @@ impl ManagedPythonDownload {
     /// Return the [`Url`] to use when downloading the distribution. If a mirror is set via the
     /// appropriate environment variable, use it instead.
-    fn download_url(
+    pub fn download_url(
         &self,
         python_install_mirror: Option<&str>,
         pypy_install_mirror: Option<&str>,


@@ -1,7 +1,7 @@
 //! DO NOT EDIT
 //!
 //! Generated with `cargo run dev generate-sysconfig-metadata`
-//! Targets from <https://github.com/astral-sh/python-build-standalone/blob/20251209/cpython-unix/targets.yml>
+//! Targets from <https://github.com/astral-sh/python-build-standalone/blob/20251217/cpython-unix/targets.yml>
 //!
 #![allow(clippy::all)]
 #![cfg_attr(any(), rustfmt::skip)]


@@ -265,6 +265,12 @@ impl From<DisplaySafeUrl> for Url {
     }
 }
 
+impl From<Url> for DisplaySafeUrl {
+    fn from(url: Url) -> Self {
+        Self(url)
+    }
+}
+
 impl FromStr for DisplaySafeUrl {
     type Err = DisplaySafeUrlError;


@@ -335,7 +335,7 @@ impl RequirementsSpecification {
             }
         }
         RequirementsSource::PylockToml(path) => {
-            if !path.is_file() {
+            if !(path.starts_with("http://") || path.starts_with("https://") || path.exists()) {
                 return Err(anyhow::anyhow!("File not found: `{}`", path.user_display()));
             }


@@ -16,7 +16,7 @@ use uv_normalize::{ExtraName, GroupName, PackageName};
 use uv_platform_tags::Tags;
 use uv_pypi_types::ResolverMarkerEnvironment;
 
-use crate::lock::{LockErrorKind, Package, TagPolicy};
+use crate::lock::{HashedDist, LockErrorKind, Package, TagPolicy};
 use crate::{Lock, LockError};
 
 pub trait Installable<'lock> {
@@ -527,18 +523,14 @@ pub trait Installable<'lock> {
         marker_env: &ResolverMarkerEnvironment,
         build_options: &BuildOptions,
     ) -> Result<Node, LockError> {
-        let dist = package.to_dist(
-            self.install_path(),
-            TagPolicy::Required(tags),
-            build_options,
-            marker_env,
-        )?;
+        let tag_policy = TagPolicy::Required(tags);
+        let HashedDist { dist, hashes } =
+            package.to_dist(self.install_path(), tag_policy, build_options, marker_env)?;
         let version = package.version().cloned();
         let dist = ResolvedDist::Installable {
             dist: Arc::new(dist),
             version,
         };
-        let hashes = package.hashes();
         Ok(Node::Dist {
             dist,
             hashes,
@@ -553,7 +549,7 @@ pub trait Installable<'lock> {
         tags: &Tags,
         marker_env: &ResolverMarkerEnvironment,
     ) -> Result<Node, LockError> {
-        let dist = package.to_dist(
+        let HashedDist { dist, .. } = package.to_dist(
             self.install_path(),
             TagPolicy::Preferred(tags),
             &BuildOptions::default(),


@@ -171,6 +171,15 @@ static ANDROID_X86_MARKERS: LazyLock<UniversalMarker> = LazyLock::new(|| {
     marker
 });
 
+/// A distribution with its associated hash.
+///
+/// This pairs a [`Dist`] with the [`HashDigests`] for the specific wheel or
+/// sdist that would be installed.
+pub(crate) struct HashedDist {
+    pub(crate) dist: Dist,
+    pub(crate) hashes: HashDigests,
+}
+
 #[derive(Clone, Debug, PartialEq, Eq, serde::Deserialize)]
 #[serde(try_from = "LockWire")]
 pub struct Lock {
@@ -1761,7 +1770,7 @@ impl Lock {
         if let Some(version) = package.id.version.as_ref() {
             // For a non-dynamic package, fetch the metadata from the distribution database.
-            let dist = package.to_dist(
+            let HashedDist { dist, .. } = package.to_dist(
                 root,
                 TagPolicy::Preferred(tags),
                 &BuildOptions::default(),
@@ -1912,7 +1921,7 @@ impl Lock {
             // exactly. For example, `hatchling` will flatten any recursive (or self-referential)
             // extras, while `setuptools` will not.
             if !satisfied {
-                let dist = package.to_dist(
+                let HashedDist { dist, .. } = package.to_dist(
                     root,
                     TagPolicy::Preferred(tags),
                     &BuildOptions::default(),
@@ -2579,20 +2588,26 @@ impl Package {
         Ok(())
     }
 
-    /// Convert the [`Package`] to a [`Dist`] that can be used in installation.
+    /// Convert the [`Package`] to a [`Dist`] that can be used in installation, along with its hash.
     fn to_dist(
         &self,
         workspace_root: &Path,
         tag_policy: TagPolicy<'_>,
         build_options: &BuildOptions,
         markers: &MarkerEnvironment,
-    ) -> Result<Dist, LockError> {
+    ) -> Result<HashedDist, LockError> {
         let no_binary = build_options.no_binary_package(&self.id.name);
         let no_build = build_options.no_build_package(&self.id.name);
 
         if !no_binary {
             if let Some(best_wheel_index) = self.find_best_wheel(tag_policy) {
-                return match &self.id.source {
+                let hashes = self.wheels[best_wheel_index]
+                    .hash
+                    .as_ref()
+                    .map(|hash| HashDigests::from(vec![hash.0.clone()]))
+                    .unwrap_or_else(|| HashDigests::from(vec![]));
+                let dist = match &self.id.source {
                     Source::Registry(source) => {
                         let wheels = self
                             .wheels
@@ -2604,7 +2619,7 @@
                             best_wheel_index,
                             sdist: None,
                         };
-                        Ok(Dist::Built(BuiltDist::Registry(reg_built_dist)))
+                        Dist::Built(BuiltDist::Registry(reg_built_dist))
                     }
                     Source::Path(path) => {
                         let filename: WheelFilename =
@@ -2616,7 +2631,7 @@
                             install_path: absolute_path(workspace_root, path)?.into_boxed_path(),
                         };
                         let built_dist = BuiltDist::Path(path_dist);
-                        Ok(Dist::Built(built_dist))
+                        Dist::Built(built_dist)
                     }
                     Source::Direct(url, direct) => {
                         let filename: WheelFilename =
@@ -2632,29 +2647,39 @@
                             url: VerbatimUrl::from_url(url),
                         };
                         let built_dist = BuiltDist::DirectUrl(direct_dist);
-                        Ok(Dist::Built(built_dist))
+                        Dist::Built(built_dist)
                     }
-                    Source::Git(_, _) => Err(LockErrorKind::InvalidWheelSource {
-                        id: self.id.clone(),
-                        source_type: "Git",
-                    }
-                    .into()),
-                    Source::Directory(_) => Err(LockErrorKind::InvalidWheelSource {
-                        id: self.id.clone(),
-                        source_type: "directory",
-                    }
-                    .into()),
-                    Source::Editable(_) => Err(LockErrorKind::InvalidWheelSource {
-                        id: self.id.clone(),
-                        source_type: "editable",
-                    }
-                    .into()),
-                    Source::Virtual(_) => Err(LockErrorKind::InvalidWheelSource {
-                        id: self.id.clone(),
-                        source_type: "virtual",
-                    }
-                    .into()),
+                    Source::Git(_, _) => {
+                        return Err(LockErrorKind::InvalidWheelSource {
+                            id: self.id.clone(),
+                            source_type: "Git",
+                        }
+                        .into());
+                    }
+                    Source::Directory(_) => {
+                        return Err(LockErrorKind::InvalidWheelSource {
+                            id: self.id.clone(),
+                            source_type: "directory",
+                        }
+                        .into());
+                    }
+                    Source::Editable(_) => {
+                        return Err(LockErrorKind::InvalidWheelSource {
+                            id: self.id.clone(),
+                            source_type: "editable",
+                        }
+                        .into());
+                    }
+                    Source::Virtual(_) => {
+                        return Err(LockErrorKind::InvalidWheelSource {
+                            id: self.id.clone(),
+                            source_type: "virtual",
+                        }
+                        .into());
+                    }
                 };
+                return Ok(HashedDist { dist, hashes });
             }
         }
 
@@ -2663,7 +2688,16 @@ impl Package {
             // any local source tree, or at least editable source trees, which we allow in
             // `uv pip`.)
             if !no_build || sdist.is_virtual() {
-                return Ok(Dist::Source(sdist));
+                let hashes = self
+                    .sdist
+                    .as_ref()
+                    .and_then(|s| s.hash())
+                    .map(|hash| HashDigests::from(vec![hash.0.clone()]))
+                    .unwrap_or_else(|| HashDigests::from(vec![]));
+                return Ok(HashedDist {
+                    dist: Dist::Source(sdist),
+                    hashes,
+                });
             }
         }
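Throughout the `HashedDist` plumbing above, a wheel or sdist that was locked without a hash collapses to an empty digest list rather than an error. That fallback pattern in miniature (with stand-in types, not uv's real `HashDigests`):

```rust
/// Stand-in for uv's hash digest collection.
#[derive(Debug, PartialEq)]
struct HashDigests(Vec<String>);

/// Pair an artifact with its hash, tolerating artifacts that were locked
/// without one by falling back to an empty digest list.
fn digests_for(hash: Option<&str>) -> HashDigests {
    hash.map(|h| HashDigests(vec![h.to_string()]))
        .unwrap_or_else(|| HashDigests(vec![]))
}

fn main() {
    assert_eq!(
        digests_for(Some("sha256:abc")),
        HashDigests(vec!["sha256:abc".to_string()])
    );
    assert_eq!(digests_for(None), HashDigests(vec![]));
    println!("ok");
}
```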


@@ -270,6 +270,7 @@ impl Pep723Script {
         file: impl AsRef<Path>,
         requires_python: &VersionSpecifiers,
         existing_contents: Option<Vec<u8>>,
+        bare: bool,
     ) -> Result<(), Pep723Error> {
         let file = file.as_ref();
@@ -305,6 +306,8 @@ impl Pep723Script {
             indoc::formatdoc! {r"
             {shebang}{metadata}
             {contents}" }
+        } else if bare {
+            metadata
         } else {
             indoc::formatdoc! {r#"
             {metadata}


@@ -370,6 +370,7 @@ pub struct ResolverOptions {
     pub config_settings_package: Option<PackageConfigSettings>,
     pub exclude_newer: ExcludeNewer,
     pub link_mode: Option<LinkMode>,
+    pub torch_backend: Option<TorchMode>,
     pub upgrade: Option<Upgrade>,
     pub build_isolation: Option<BuildIsolation>,
     pub no_build: Option<bool>,
@@ -404,6 +405,7 @@ pub struct ResolverInstallerOptions {
     pub exclude_newer: Option<ExcludeNewerValue>,
     pub exclude_newer_package: Option<ExcludeNewerPackage>,
     pub link_mode: Option<LinkMode>,
+    pub torch_backend: Option<TorchMode>,
     pub compile_bytecode: Option<bool>,
     pub no_sources: Option<bool>,
     pub upgrade: Option<Upgrade>,
@@ -412,7 +414,6 @@ pub struct ResolverInstallerOptions {
     pub no_build_package: Option<Vec<PackageName>>,
     pub no_binary: Option<bool>,
     pub no_binary_package: Option<Vec<PackageName>>,
-    pub torch_backend: Option<TorchMode>,
 }
 
 impl From<ResolverInstallerSchema> for ResolverInstallerOptions {
@@ -438,6 +439,7 @@ impl From<ResolverInstallerSchema> for ResolverInstallerOptions {
             exclude_newer,
             exclude_newer_package,
             link_mode,
+            torch_backend,
             compile_bytecode,
             no_sources,
             upgrade,
@@ -448,7 +450,6 @@
             no_build_package,
             no_binary,
             no_binary_package,
-            torch_backend,
         } = value;
         Self {
             index,
@@ -473,6 +474,7 @@
             exclude_newer,
             exclude_newer_package,
             link_mode,
+            torch_backend,
             compile_bytecode,
             no_sources,
             upgrade: Upgrade::from_args(
@@ -488,7 +490,6 @@
             no_build_package,
             no_binary,
             no_binary_package,
-            torch_backend,
         }
     }
 }
@@ -1925,6 +1926,7 @@ impl From<ResolverInstallerSchema> for ResolverOptions {
             extra_build_dependencies: value.extra_build_dependencies,
             extra_build_variables: value.extra_build_variables,
             no_sources: value.no_sources,
+            torch_backend: value.torch_backend,
         }
     }
 }
@@ -2004,6 +2006,7 @@ pub struct ToolOptions {
     pub no_build_package: Option<Vec<PackageName>>,
     pub no_binary: Option<bool>,
     pub no_binary_package: Option<Vec<PackageName>>,
+    pub torch_backend: Option<TorchMode>,
 }
 
 impl From<ResolverInstallerOptions> for ToolOptions {
@@ -2034,6 +2037,7 @@ impl From<ResolverInstallerOptions> for ToolOptions {
             no_build_package: value.no_build_package,
             no_binary: value.no_binary,
             no_binary_package: value.no_binary_package,
+            torch_backend: value.torch_backend,
         }
     }
 }
@@ -2068,7 +2072,7 @@ impl From<ToolOptions> for ResolverInstallerOptions {
             no_build_package: value.no_build_package,
             no_binary: value.no_binary,
             no_binary_package: value.no_binary_package,
-            torch_backend: None,
+            torch_backend: value.torch_backend,
         }
     }
 }
@@ -2150,7 +2154,7 @@ pub struct OptionsWire {
     // `crates/uv-workspace/src/pyproject.rs`. The documentation lives on that struct.
     // They're respected in both `pyproject.toml` and `uv.toml` files.
     override_dependencies: Option<Vec<Requirement<VerbatimParsedUrl>>>,
-    exclude_dependencies: Option<Vec<uv_normalize::PackageName>>,
+    exclude_dependencies: Option<Vec<PackageName>>,
     constraint_dependencies: Option<Vec<Requirement<VerbatimParsedUrl>>>,
     build_constraint_dependencies: Option<Vec<Requirement<VerbatimParsedUrl>>>,
     environments: Option<SupportedEnvironments>,


@@ -18,13 +18,13 @@ pub fn shlex_posix(executable: impl AsRef<Path>) -> String {
 /// Escape a string for being used in single quotes in a POSIX-compatible shell command.
 ///
-/// We want our scripts to support any POSIX shell. There's two kind of quotes in POSIX:
+/// We want our scripts to support any POSIX shell. There are two kinds of quotes in POSIX:
 /// Single and double quotes. In bash, single quotes must not contain another single
 /// quote, you can't even escape it (<https://linux.die.net/man/1/bash> under "QUOTING").
-/// Double quotes have escaping rules different from shell to shell, which we can't do.
+/// Double quotes have escaping rules that differ from shell to shell, which we can't handle.
 /// Bash has `$'\''`, but that's not universal enough.
 ///
-/// As solution, use implicit string concatenations, by putting the single quote into double
+/// As a solution, use implicit string concatenations, by putting the single quote into double
 /// quotes.
 pub fn escape_posix_for_single_quotes(string: &str) -> String {
     string.replace('\'', r#"'"'"'"#)
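The escaping trick the doc comment describes can be checked in isolation: each single quote becomes `'"'"'`, which closes the single-quoted span, emits a double-quoted `'`, and reopens single quotes, so the shell concatenates the pieces back together. A standalone sketch of the same replacement:

```rust
/// Escape a string for interpolation inside single quotes in a POSIX shell,
/// by replacing each `'` with `'"'"'` (close span, double-quoted quote, reopen).
fn escape_posix_for_single_quotes(string: &str) -> String {
    string.replace('\'', r#"'"'"'"#)
}

fn main() {
    let escaped = escape_posix_for_single_quotes("don't");
    // The shell sees 'don'"'"'t' and joins the pieces back into `don't`.
    assert_eq!(escaped, r#"don'"'"'t"#);
    println!("echo '{escaped}'");
}
```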


@@ -216,6 +216,7 @@ async fn build_impl(
         upgrade: _,
         build_options,
         sources,
+        torch_backend: _,
     } = settings;
 
     // Determine the source to build.


@@ -500,10 +500,28 @@ pub(crate) async fn pip_install(
     );
 
     let (resolution, hasher) = if let Some(pylock) = pylock {
-        // Read the `pylock.toml` from disk, and deserialize it from TOML.
-        let install_path = std::path::absolute(&pylock)?;
-        let install_path = install_path.parent().unwrap();
-        let content = fs_err::tokio::read_to_string(&pylock).await?;
+        // Read the `pylock.toml` from disk or URL, and deserialize it from TOML.
+        let (install_path, content) =
+            if pylock.starts_with("http://") || pylock.starts_with("https://") {
+                // Fetch the `pylock.toml` over HTTP(S).
+                let url = uv_redacted::DisplaySafeUrl::parse(&pylock.to_string_lossy())?;
+                let client = client_builder.build();
+                let response = client
+                    .for_host(&url)
+                    .get(url::Url::from(url.clone()))
+                    .send()
+                    .await?;
+                response.error_for_status_ref()?;
+                let content = response.text().await?;
+                // Use the current working directory as the install path for remote lock files.
+                let install_path = std::env::current_dir()?;
+                (install_path, content)
+            } else {
+                let install_path = std::path::absolute(&pylock)?;
+                let install_path = install_path.parent().unwrap().to_path_buf();
+                let content = fs_err::tokio::read_to_string(&pylock).await?;
+                (install_path, content)
+            };
         let lock = toml::from_str::<PylockToml>(&content).with_context(|| {
             format!("Not a valid `pylock.toml` file: {}", pylock.user_display())
         })?;
@@ -537,7 +555,7 @@ pub(crate) async fn pip_install(
         .collect::<Vec<_>>();
 
     let resolution = lock.to_resolution(
-        install_path,
+        &install_path,
         marker_env.markers(),
         &extras,
         &groups,
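The branch above (and the matching check in `RequirementsSpecification`) dispatches on whether the `pylock.toml` argument looks like a URL before falling back to filesystem access. The scheme check itself is a plain prefix test, sketched here outside uv's types (hypothetical `Source` enum for illustration):

```rust
#[derive(Debug, PartialEq)]
enum Source {
    Remote,
    Local,
}

/// Classify a `pylock.toml` argument: treat `http://`/`https://` prefixes
/// as remote URLs and everything else as a local path.
fn classify(arg: &str) -> Source {
    if arg.starts_with("http://") || arg.starts_with("https://") {
        Source::Remote
    } else {
        Source::Local
    }
}

fn main() {
    assert_eq!(classify("https://example.com/pylock.toml"), Source::Remote);
    assert_eq!(classify("http://host/pylock.toml"), Source::Remote);
    assert_eq!(classify("./pylock.toml"), Source::Local);
    println!("ok");
}
```

Note that remote lock files have no parent directory on disk, which is why the HTTP branch substitutes the current working directory as the install path.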


@@ -73,6 +73,7 @@ pub(crate) async fn init(
 
         init_script(
             path,
+            bare,
             python,
             install_mirrors,
             client_builder,
@@ -168,7 +169,7 @@ pub(crate) async fn init(
         .await?;
 
         // Create the `README.md` if it does not already exist.
-        if !no_readme {
+        if !no_readme && !bare {
             let readme = path.join("README.md");
             if !readme.exists() {
                 fs_err::write(readme, String::new())?;
@@ -201,6 +202,7 @@
 #[allow(clippy::fn_params_excessive_bools)]
 async fn init_script(
     script_path: &Path,
+    bare: bool,
     python: Option<String>,
     install_mirrors: PythonInstallMirrors,
     client_builder: &BaseClientBuilder<'_>,
@@ -275,7 +277,7 @@ async fn init_script(
         fs_err::tokio::create_dir_all(parent).await?;
     }
 
-    Pep723Script::create(script_path, requires_python.specifiers(), content).await?;
+    Pep723Script::create(script_path, requires_python.specifiers(), content, bare).await?;
 
     Ok(())
 }
@@ -829,7 +831,7 @@ impl InitProjectKind {
             author.as_ref(),
             description,
             no_description,
-            no_readme,
+            no_readme || bare,
         );
 
         // Include additional project configuration for packaged applications
@@ -908,7 +910,7 @@ impl InitProjectKind {
             author.as_ref(),
             description,
             no_description,
-            no_readme,
+            no_readme || bare,
         );
 
         // Always include a build system if the project is packaged.


@@ -470,6 +470,7 @@ async fn do_lock(
         upgrade,
         build_options,
         sources,
+        torch_backend: _,
     } = settings;
 
     if !preview.is_enabled(PreviewFeatures::EXTRA_BUILD_DEPENDENCIES)


@@ -43,6 +43,7 @@ use uv_resolver::{
 use uv_scripts::Pep723ItemRef;
 use uv_settings::PythonInstallMirrors;
 use uv_static::EnvVars;
+use uv_torch::{TorchSource, TorchStrategy};
 use uv_types::{BuildIsolation, EmptyInstalledPackages, HashStrategy};
 use uv_virtualenv::remove_virtualenv;
 use uv_warnings::{warn_user, warn_user_once};
@@ -278,6 +279,9 @@ pub(crate) enum ProjectError {
     #[error(transparent)]
     RetryParsing(#[from] uv_client::RetryParsingError),
 
+    #[error(transparent)]
+    Accelerator(#[from] uv_torch::AcceleratorError),
+
     #[error(transparent)]
     Anyhow(#[from] anyhow::Error),
 }
@@ -1723,6 +1727,7 @@ pub(crate) async fn resolve_names(
             prerelease: _,
             resolution: _,
             sources,
+            torch_backend,
             upgrade: _,
         },
         compile_bytecode: _,
@@ -1731,10 +1736,27 @@ pub(crate) async fn resolve_names(
 
     let client_builder = client_builder.clone().keyring(*keyring_provider);
 
+    // Determine the PyTorch backend.
+    let torch_backend = torch_backend
+        .map(|mode| {
+            let source = if uv_auth::PyxTokenStore::from_settings()
+                .is_ok_and(|store| store.has_credentials())
+            {
+                TorchSource::Pyx
+            } else {
+                TorchSource::default()
+            };
+            TorchStrategy::from_mode(mode, source, interpreter.platform().os())
+        })
+        .transpose()
+        .ok()
+        .flatten();
+
     // Initialize the registry client.
     let client = RegistryClientBuilder::new(client_builder, cache.clone())
         .index_locations(index_locations.clone())
         .index_strategy(*index_strategy)
+        .torch_backend(torch_backend.clone())
         .markers(interpreter.markers())
         .platform(interpreter.platform())
         .build();
@@ -1880,6 +1902,7 @@ pub(crate) async fn resolve_environment(
         upgrade: _,
         build_options,
         sources,
+        torch_backend,
     } = settings;
 
     // Respect all requirements from the provided sources.
@@ -1900,10 +1923,33 @@ pub(crate) async fn resolve_environment(
     let marker_env = pip::resolution_markers(None, python_platform, interpreter);
     let python_requirement = PythonRequirement::from_interpreter(interpreter);
 
+    // Determine the PyTorch backend.
+    let torch_backend = torch_backend
+        .map(|mode| {
+            let source = if uv_auth::PyxTokenStore::from_settings()
+                .is_ok_and(|store| store.has_credentials())
+            {
+                TorchSource::Pyx
+            } else {
+                TorchSource::default()
+            };
+            TorchStrategy::from_mode(
+                mode,
+                source,
+                python_platform
+                    .map(|t| t.platform())
+                    .as_ref()
+                    .unwrap_or(interpreter.platform())
+                    .os(),
+            )
+        })
+        .transpose()?;
+
     // Initialize the registry client.
     let client = RegistryClientBuilder::new(client_builder, cache.clone())
         .index_locations(index_locations.clone())
         .index_strategy(*index_strategy)
+        .torch_backend(torch_backend.clone())
         .markers(interpreter.markers())
         .platform(interpreter.platform())
         .build();
@@ -2232,6 +2278,7 @@ pub(crate) async fn update_environment(
             prerelease,
             resolution,
             sources,
+            torch_backend,
             upgrade,
         },
         compile_bytecode,
@@ -2302,10 +2349,33 @@ pub(crate) async fn update_environment(
         }
     }
 
+    // Determine the PyTorch backend.
+    let torch_backend = torch_backend
+        .map(|mode| {
+            let source = if uv_auth::PyxTokenStore::from_settings()
+                .is_ok_and(|store| store.has_credentials())
+            {
+                TorchSource::Pyx
+            } else {
+                TorchSource::default()
+            };
+            TorchStrategy::from_mode(
+                mode,
+                source,
+                python_platform
+                    .map(|t| t.platform())
+                    .as_ref()
+                    .unwrap_or(interpreter.platform())
+                    .os(),
+            )
+        })
+        .transpose()?;
+
     // Initialize the registry client.
     let client = RegistryClientBuilder::new(client_builder, cache.clone())
         .index_locations(index_locations.clone())
         .index_strategy(*index_strategy)
+        .torch_backend(torch_backend.clone())
         .markers(interpreter.markers())
         .platform(interpreter.platform())
         .build();


@@ -676,6 +676,7 @@ pub(super) async fn do_sync(
         prerelease: PrereleaseMode::default(),
         resolution: ResolutionMode::default(),
         sources,
+        torch_backend: None,
         upgrade: Upgrade::default(),
     };
     script_extra_build_requires(


@@ -212,6 +212,7 @@ pub(crate) async fn tree(
         upgrade: _,
         build_options: _,
         sources: _,
+        torch_backend: _,
     } = &settings;
 
     let capabilities = IndexCapabilities::default();


@@ -62,6 +62,8 @@ pub(crate) async fn list(
     show_urls: bool,
     output_format: PythonListFormat,
     python_downloads_json_url: Option<String>,
+    python_install_mirror: Option<String>,
+    pypy_install_mirror: Option<String>,
     python_preference: PythonPreference,
     python_downloads: PythonDownloads,
     client_builder: &BaseClientBuilder<'_>,
@@ -121,7 +123,10 @@
             output.insert((
                 download.key().clone(),
                 Kind::Download,
-                Either::Right(download.url()),
+                Either::Right(download.download_url(
+                    python_install_mirror.as_deref(),
+                    pypy_install_mirror.as_deref(),
+                )?),
             ));
         }
     }


@@ -28,7 +28,7 @@ use uv_python::{
 use uv_requirements::{RequirementsSource, RequirementsSpecification};
 use uv_settings::{PythonInstallMirrors, ResolverInstallerOptions, ToolOptions};
 use uv_tool::InstalledTools;
-use uv_warnings::warn_user;
+use uv_warnings::{warn_user, warn_user_once};
 use uv_workspace::WorkspaceCache;
 
 use crate::commands::ExitStatus;
@@ -76,6 +76,12 @@ pub(crate) async fn install(
     printer: Printer,
     preview: Preview,
 ) -> Result<ExitStatus> {
+    if settings.resolver.torch_backend.is_some() {
+        warn_user_once!(
+            "The `--torch-backend` option is experimental and may change without warning."
+        );
+    }
+
     let reporter = PythonDownloadReporter::single(printer);
 
     let python_request = python.as_deref().map(PythonRequest::parse);


@@ -129,6 +129,12 @@ pub(crate) async fn run(
             .is_some_and(|ext| ext.eq_ignore_ascii_case("py") || ext.eq_ignore_ascii_case("pyw"))
     }
 
+    if settings.resolver.torch_backend.is_some() {
+        warn_user_once!(
+            "The `--torch-backend` option is experimental and may change without warning."
+        );
+    }
+
     // Read from the `.env` file, if necessary.
     if !no_env_file {
         for env_file_path in env_file.iter().rev().map(PathBuf::as_path) {


@@ -840,7 +840,7 @@ async fn run(mut cli: Cli) -> Result<ExitStatus> {
                     .combine(Refresh::from(args.settings.upgrade.clone())),
             );
 
-            commands::pip_install(
+            Box::pin(commands::pip_install(
                 &requirements,
                 &constraints,
                 &overrides,
@@ -892,7 +892,7 @@
                 args.dry_run,
                 printer,
                 globals.preview,
-            )
+            ))
             .await
         }
         Commands::Pip(PipNamespace {
@@ -1585,6 +1585,8 @@
                 args.show_urls,
                 args.output_format,
                 args.python_downloads_json_url,
+                args.python_install_mirror,
+                args.pypy_install_mirror,
                 globals.python_preference,
                 globals.python_downloads,
                 &client_builder.subcommand(vec!["python".to_owned(), "list".to_owned()]),


@ -337,7 +337,7 @@ impl InitSettings {
no_description, no_description,
vcs: vcs.or(bare.then_some(VersionControlSystem::None)), vcs: vcs.or(bare.then_some(VersionControlSystem::None)),
build_backend, build_backend,
no_readme: no_readme || bare, no_readme,
author_from, author_from,
pin_python: flag(pin_python, no_pin_python, "pin-python").unwrap_or(!bare), pin_python: flag(pin_python, no_pin_python, "pin-python").unwrap_or(!bare),
no_workspace, no_workspace,
@ -586,6 +586,7 @@ impl ToolRunSettings {
lfs, lfs,
python, python,
python_platform, python_platform,
torch_backend,
generate_shell_completion: _, generate_shell_completion: _,
} = args; } = args;
@ -615,21 +616,24 @@ impl ToolRunSettings {
} }
} }
let filesystem_options = filesystem.map(FilesystemOptions::into_options);
let options = let options =
resolver_installer_options(installer, build).combine(ResolverInstallerOptions::from( resolver_installer_options(installer, build).combine(ResolverInstallerOptions::from(
filesystem filesystem_options
.clone() .as_ref()
.map(FilesystemOptions::into_options) .map(|options| options.top_level.clone())
.map(|options| options.top_level)
.unwrap_or_default(), .unwrap_or_default(),
)); ));
let filesystem_install_mirrors = filesystem let filesystem_install_mirrors = filesystem_options
.map(FilesystemOptions::into_options) .map(|options| options.install_mirrors.clone())
.map(|options| options.install_mirrors)
.unwrap_or_default(); .unwrap_or_default();
let settings = ResolverInstallerSettings::from(options.clone()); let mut settings = ResolverInstallerSettings::from(options.clone());
if torch_backend.is_some() {
settings.resolver.torch_backend = torch_backend;
}
let lfs = GitLfsSetting::new(lfs.then_some(true), environment.lfs); let lfs = GitLfsSetting::new(lfs.then_some(true), environment.lfs);
Self { Self {
@@ -727,23 +731,27 @@ impl ToolInstallSettings {
             refresh,
             python,
             python_platform,
+            torch_backend,
         } = args;
+        let filesystem_options = filesystem.map(FilesystemOptions::into_options);
         let options =
             resolver_installer_options(installer, build).combine(ResolverInstallerOptions::from(
-                filesystem
-                    .clone()
-                    .map(FilesystemOptions::into_options)
-                    .map(|options| options.top_level)
+                filesystem_options
+                    .as_ref()
+                    .map(|options| options.top_level.clone())
                     .unwrap_or_default(),
             ));
-        let filesystem_install_mirrors = filesystem
-            .map(FilesystemOptions::into_options)
-            .map(|options| options.install_mirrors)
+        let filesystem_install_mirrors = filesystem_options
+            .map(|options| options.install_mirrors.clone())
             .unwrap_or_default();
-        let settings = ResolverInstallerSettings::from(options.clone());
+        let mut settings = ResolverInstallerSettings::from(options.clone());
+        if torch_backend.is_some() {
+            settings.resolver.torch_backend = torch_backend;
+        }
         let lfs = GitLfsSetting::new(lfs.then_some(true), environment.lfs);
         Self {
@@ -995,6 +1003,8 @@ pub(crate) struct PythonListSettings {
     pub(crate) show_urls: bool,
     pub(crate) output_format: PythonListFormat,
     pub(crate) python_downloads_json_url: Option<String>,
+    pub(crate) python_install_mirror: Option<String>,
+    pub(crate) pypy_install_mirror: Option<String>,
 }
 impl PythonListSettings {
@@ -1018,15 +1028,38 @@ impl PythonListSettings {
         } = args;
         let options = filesystem.map(FilesystemOptions::into_options);
-        let python_downloads_json_url_option = match options {
-            Some(options) => options.install_mirrors.python_downloads_json_url,
-            None => None,
+        let (
+            python_downloads_json_url_option,
+            python_install_mirror_option,
+            pypy_install_mirror_option,
+        ) = match &options {
+            Some(options) => (
+                options.install_mirrors.python_downloads_json_url.clone(),
+                options.install_mirrors.python_install_mirror.clone(),
+                options.install_mirrors.pypy_install_mirror.clone(),
+            ),
+            None => (None, None, None),
         };
         let python_downloads_json_url = python_downloads_json_url_arg
-            .or(environment.install_mirrors.python_downloads_json_url)
+            .or(environment
+                .install_mirrors
+                .python_downloads_json_url
+                .clone())
             .or(python_downloads_json_url_option);
+        let python_install_mirror = environment
+            .install_mirrors
+            .python_install_mirror
+            .clone()
+            .or(python_install_mirror_option);
+        let pypy_install_mirror = environment
+            .install_mirrors
+            .pypy_install_mirror
+            .clone()
+            .or(pypy_install_mirror_option);
         let kinds = if only_installed {
             PythonListKinds::Installed
         } else if only_downloads {
@@ -1044,6 +1077,8 @@ impl PythonListSettings {
             show_urls,
             output_format,
             python_downloads_json_url,
+            python_install_mirror,
+            pypy_install_mirror,
         }
     }
 }
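The `PythonListSettings` change above resolves each mirror through the same three-layer precedence: an explicit CLI argument wins, then the environment, then the configuration file. A minimal sketch of that fallback chain with `Option::or`; `resolve_setting` and its arguments are illustrative names, not uv's internal API:

```rust
// Sketch of the settings precedence: CLI argument, then environment
// variable, then configuration file. Illustrative, not uv's API.
fn resolve_setting(
    cli: Option<String>,
    env: Option<String>,
    config: Option<String>,
) -> Option<String> {
    // `Option::or` returns the first `Some`, so earlier layers win.
    cli.or(env).or(config)
}

fn main() {
    // With no CLI value, the environment beats the configuration file.
    let url = resolve_setting(
        None,
        Some("https://mirror.example.com".to_string()),
        Some("https://config.example.com".to_string()),
    );
    assert_eq!(url.as_deref(), Some("https://mirror.example.com"));
}
```

Because every layer is an `Option`, an unset layer simply falls through, which is why `uv python list` can now see a mirror configured only in the environment or only in the config file.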
@@ -3199,6 +3234,7 @@ pub(crate) struct ResolverSettings {
     pub(crate) prerelease: PrereleaseMode,
     pub(crate) resolution: ResolutionMode,
     pub(crate) sources: SourceStrategy,
+    pub(crate) torch_backend: Option<TorchMode>,
     pub(crate) upgrade: Upgrade,
 }
@@ -3253,6 +3289,7 @@ impl From<ResolverOptions> for ResolverSettings {
             extra_build_variables: value.extra_build_variables.unwrap_or_default(),
             exclude_newer: value.exclude_newer,
             link_mode: value.link_mode.unwrap_or_default(),
+            torch_backend: value.torch_backend,
             sources: SourceStrategy::from_args(value.no_sources.unwrap_or_default()),
             upgrade: value.upgrade.unwrap_or_default(),
             build_options: BuildOptions::new(
@@ -3344,6 +3381,7 @@ impl From<ResolverInstallerOptions> for ResolverInstallerSettings {
             prerelease: value.prerelease.unwrap_or_default(),
             resolution: value.resolution.unwrap_or_default(),
             sources: SourceStrategy::from_args(value.no_sources.unwrap_or_default()),
+            torch_backend: value.torch_backend,
             upgrade: value.upgrade.unwrap_or_default(),
         },
         compile_bytecode: value.compile_bytecode.unwrap_or_default(),

View File

@@ -700,6 +700,42 @@ fn init_script() -> Result<()> {
     Ok(())
 }
+
+/// Using `--bare` with `--script` omits the default script content.
+#[test]
+fn init_script_bare() -> Result<()> {
+    let context = TestContext::new("3.12");
+    let child = context.temp_dir.child("foo");
+    child.create_dir_all()?;
+    let script = child.join("main.py");
+    uv_snapshot!(context.filters(), context.init().current_dir(&child).arg("--script").arg("--bare").arg("main.py"), @r###"
+    success: true
+    exit_code: 0
+    ----- stdout -----
+    ----- stderr -----
+    Initialized script at `main.py`
+    "###);
+
+    let script = fs_err::read_to_string(&script)?;
+    insta::with_settings!({
+        filters => context.filters(),
+    }, {
+        assert_snapshot!(
+            script, @r###"
+            # /// script
+            # requires-python = ">=3.12"
+            # dependencies = []
+            # ///
+            "###
+        );
+    });
+
+    Ok(())
+}
+
 // Ensure python versions passed as arguments are present in file metadata
 #[test]
 fn init_script_python_version() -> Result<()> {
@@ -2564,7 +2600,7 @@ fn init_existing_environment() -> Result<()> {
     Ok(())
 }
-/// Run `uv init`, it should ignore a the Python version from a parent `.venv`
+/// Run `uv init`, it should ignore the Python version from a parent `.venv`
 #[test]
 fn init_existing_environment_parent() -> Result<()> {
     let context = TestContext::new_with_versions(&["3.9", "3.12"]);
View File

@@ -8231,7 +8231,6 @@ fn lock_invalid_hash() -> Result<()> {
       Hash mismatch for `idna==3.6`
       Expected:
-        sha256:aecdbbd083b06798ae1e86adcbfe8ab1479cf864e4ee30fe4e46a003d12491ca
         sha256:d05567e9c24a6b9faaa835c4821bad0590fbb9d5779e7caa6e1cc4978e7eb24f
       Computed:
@@ -8242,6 +8241,225 @@ fn lock_invalid_hash() -> Result<()> {
     Ok(())
 }
+/// Ensure that we can install from a lockfile when the index switches hash algorithms.
+/// First lock and sync with SHA256 hashes, then switch to SHA512 and lock/sync again
+/// without clearing the cache.
+#[test]
+fn lock_mixed_hashes() -> Result<()> {
+    let context = TestContext::new("3.13");
+    let root = context.temp_dir.child("simple-html");
+    fs_err::create_dir_all(&root)?;
+    let basic_package = root.child("basic-package");
+    fs_err::create_dir_all(&basic_package)?;
+
+    // Copy the wheel and sdist from `test/links`.
+    let wheel = basic_package.child("basic_package-0.1.0-py3-none-any.whl");
+    fs_err::copy(
+        context
+            .workspace_root
+            .join("test/links/basic_package-0.1.0-py3-none-any.whl"),
+        &wheel,
+    )?;
+    let sdist = basic_package.child("basic_package-0.1.0.tar.gz");
+    fs_err::copy(
+        context
+            .workspace_root
+            .join("test/links/basic_package-0.1.0.tar.gz"),
+        &sdist,
+    )?;
+
+    // Phase 1: Create an `index.html` with SHA256 hashes for both wheel and sdist.
+    let index = basic_package.child("index.html");
+    index.write_str(&formatdoc! {r#"
+        <!DOCTYPE html>
+        <html>
+        <head>
+            <meta name="pypi:repository-version" content="1.1" />
+        </head>
+        <body>
+            <h1>Links for basic-package</h1>
+            <a
+                href="{}#sha256=7b6229db79b5800e4e98a351b5628c1c8a944533a2d428aeeaa7275a30d4ea82"
+            >
+                basic_package-0.1.0-py3-none-any.whl
+            </a>
+            <a
+                href="{}#sha256=af478ff91ec60856c99a540b8df13d756513bebb65bc301fb27e0d1f974532b4"
+            >
+                basic_package-0.1.0.tar.gz
+            </a>
+        </body>
+        </html>
+    "#,
+        Url::from_file_path(&wheel).unwrap().as_str(),
+        Url::from_file_path(&sdist).unwrap().as_str(),
+    })?;
+
+    let pyproject_toml = context.temp_dir.child("pyproject.toml");
+    pyproject_toml.write_str(&formatdoc! { r#"
+        [project]
+        name = "project"
+        version = "0.1.0"
+        requires-python = ">=3.13"
+        dependencies = ["basic-package"]
+
+        [tool.uv]
+        extra-index-url = ["{}"]
+    "#,
+        Url::from_file_path(&root).unwrap().as_str()
+    })?;
+
+    let index_url = Url::from_file_path(&root).unwrap().to_string();
+    let filters = [(index_url.as_str(), "file://[TMP]")]
+        .into_iter()
+        .chain(context.filters())
+        .collect::<Vec<_>>();
+
+    // Lock with SHA256 hashes.
+    uv_snapshot!(context.filters(), context.lock().env_remove(EnvVars::UV_EXCLUDE_NEWER), @r"
+    success: true
+    exit_code: 0
+    ----- stdout -----
+    ----- stderr -----
+    Resolved 2 packages in [TIME]
+    ");
+
+    let lock = context.read("uv.lock");
+    insta::with_settings!({
+        filters => filters.clone(),
+    }, {
+        assert_snapshot!(
+            lock, @r#"
+            version = 1
+            revision = 3
+            requires-python = ">=3.13"
+
+            [[package]]
+            name = "basic-package"
+            version = "0.1.0"
+            source = { registry = "simple-html" }
+            sdist = { path = "basic-package/basic_package-0.1.0.tar.gz", hash = "sha256:af478ff91ec60856c99a540b8df13d756513bebb65bc301fb27e0d1f974532b4" }
+            wheels = [
+                { path = "basic-package/basic_package-0.1.0-py3-none-any.whl", hash = "sha256:7b6229db79b5800e4e98a351b5628c1c8a944533a2d428aeeaa7275a30d4ea82" },
+            ]
+
+            [[package]]
+            name = "project"
+            version = "0.1.0"
+            source = { virtual = "." }
+            dependencies = [
+                { name = "basic-package" },
+            ]
+
+            [package.metadata]
+            requires-dist = [{ name = "basic-package" }]
+            "#
+        );
+    });
+
+    // Sync with SHA256 hashes to populate the cache.
+    uv_snapshot!(filters.clone(), context.sync().arg("--frozen"), @r"
+    success: true
+    exit_code: 0
+    ----- stdout -----
+    ----- stderr -----
+    Prepared 1 package in [TIME]
+    Installed 1 package in [TIME]
+     + basic-package==0.1.0
+    ");
+
+    // Phase 2: Update the index to use a SHA512 hash for the wheel instead.
+    index.write_str(&formatdoc! {r#"
+        <!DOCTYPE html>
+        <html>
+        <head>
+            <meta name="pypi:repository-version" content="1.1" />
+        </head>
+        <body>
+            <h1>Links for basic-package</h1>
+            <a
+                href="{}#sha512=765bde25938af485e492e25ee0e8cde262462565122c1301213a69bf9ceb2008e3997b652a604092a238c4b1a6a334e697ff3cee3c22f9a617cb14f34e26ef17"
+            >
+                basic_package-0.1.0-py3-none-any.whl
+            </a>
+            <a
+                href="{}#sha256=af478ff91ec60856c99a540b8df13d756513bebb65bc301fb27e0d1f974532b4"
+            >
+                basic_package-0.1.0.tar.gz
+            </a>
+        </body>
+        </html>
+    "#,
+        Url::from_file_path(&wheel).unwrap().as_str(),
+        Url::from_file_path(&sdist).unwrap().as_str(),
+    })?;
+
+    // Lock again with `--refresh` to pick up the SHA512 hash from the updated index.
+    uv_snapshot!(context.filters(), context.lock().arg("--refresh").env_remove(EnvVars::UV_EXCLUDE_NEWER), @r"
+    success: true
+    exit_code: 0
+    ----- stdout -----
+    ----- stderr -----
+    Resolved 2 packages in [TIME]
+    ");
+
+    let lock = context.read("uv.lock");
+    insta::with_settings!({
+        filters => filters.clone(),
+    }, {
+        assert_snapshot!(
+            lock, @r#"
+            version = 1
+            revision = 3
+            requires-python = ">=3.13"
+
+            [[package]]
+            name = "basic-package"
+            version = "0.1.0"
+            source = { registry = "simple-html" }
+            sdist = { path = "basic-package/basic_package-0.1.0.tar.gz", hash = "sha256:af478ff91ec60856c99a540b8df13d756513bebb65bc301fb27e0d1f974532b4" }
+            wheels = [
+                { path = "basic-package/basic_package-0.1.0-py3-none-any.whl", hash = "sha512:765bde25938af485e492e25ee0e8cde262462565122c1301213a69bf9ceb2008e3997b652a604092a238c4b1a6a334e697ff3cee3c22f9a617cb14f34e26ef17" },
+            ]
+
+            [[package]]
+            name = "project"
+            version = "0.1.0"
+            source = { virtual = "." }
+            dependencies = [
+                { name = "basic-package" },
+            ]
+
+            [package.metadata]
+            requires-dist = [{ name = "basic-package" }]
+            "#
+        );
+    });
+
+    // Reinstalling should re-compute the hash for the `basic-package` wheel to reflect SHA512.
+    uv_snapshot!(filters, context.sync().arg("--frozen").arg("--reinstall"), @r"
+    success: true
+    exit_code: 0
+    ----- stdout -----
+    ----- stderr -----
+    Prepared 1 package in [TIME]
+    Uninstalled 1 package in [TIME]
+    Installed 1 package in [TIME]
+     ~ basic-package==0.1.0
+    ");
+
+    Ok(())
+}
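The test above hinges on lockfile hash entries carrying their algorithm as a prefix (`sha256:…` vs. `sha512:…`), so the installer must compare a computed digest against whichever algorithm the index advertised. A minimal sketch of parsing such an entry, assuming hypothetical names (`Algorithm`, `parse_hash`) rather than uv's internal types:

```rust
// Parse a lockfile-style hash entry of the form "<algorithm>:<hex digest>".
// `Algorithm` and `parse_hash` are illustrative, not uv's internal API.
#[derive(Debug, PartialEq)]
enum Algorithm {
    Sha256,
    Sha512,
}

fn parse_hash(entry: &str) -> Option<(Algorithm, &str)> {
    let (algo, hex) = entry.split_once(':')?;
    let algo = match algo {
        "sha256" => Algorithm::Sha256,
        "sha512" => Algorithm::Sha512,
        _ => return None,
    };
    // Digest lengths: SHA-256 is 32 bytes (64 hex chars), SHA-512 is 64 (128).
    let expected_len = match algo {
        Algorithm::Sha256 => 64,
        Algorithm::Sha512 => 128,
    };
    (hex.len() == expected_len).then_some((algo, hex))
}

fn main() {
    let (algo, hex) =
        parse_hash("sha256:af478ff91ec60856c99a540b8df13d756513bebb65bc301fb27e0d1f974532b4")
            .unwrap();
    assert_eq!(algo, Algorithm::Sha256);
    assert_eq!(hex.len(), 64);
}
```

Keying the comparison on the parsed algorithm is what lets a cached wheel verified under SHA256 be re-verified under SHA512 without clearing the cache.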
 /// Vary the `--resolution-mode`, and ensure that the lockfile is updated.
 #[test]
 fn lock_resolution_mode() -> Result<()> {

View File

@@ -37,6 +37,30 @@ async fn io_error_server() -> (MockServer, String) {
     (server, mock_server_uri)
 }
+
+/// Answers with a retryable connection reset IO error twice, then with a retryable HTTP
+/// status 500.
+///
+/// Tests different error paths inside uv, which retries 3 times by default, for a total of 4
+/// requests.
+async fn mixed_error_server() -> (MockServer, String) {
+    let server = MockServer::start().await;
+    Mock::given(method("GET"))
+        .respond_with_err(connection_reset)
+        .up_to_n_times(2)
+        .mount(&server)
+        .await;
+    Mock::given(method("GET"))
+        .respond_with(ResponseTemplate::new(StatusCode::INTERNAL_SERVER_ERROR))
+        .up_to_n_times(2)
+        .mount(&server)
+        .await;
+    let mock_server_uri = server.uri();
+    (server, mock_server_uri)
+}
 /// Check the simple index error message when the server returns HTTP status 500, a retryable error.
 #[tokio::test]
 async fn simple_http_500() {
@@ -83,8 +107,8 @@ async fn simple_io_err() {
     ----- stdout -----
     ----- stderr -----
-    error: Failed to fetch: `[SERVER]/tqdm/`
-    Caused by: Request failed after 3 retries
+    error: Request failed after 3 retries
+    Caused by: Failed to fetch: `[SERVER]/tqdm/`
     Caused by: error sending request for url ([SERVER]/tqdm/)
     Caused by: client error (SendRequest)
     Caused by: connection closed before message completed
@@ -141,14 +165,43 @@ async fn find_links_io_error() {
     ----- stderr -----
     error: Failed to read `--find-links` URL: [SERVER]/
-    Caused by: Failed to fetch: `[SERVER]/`
     Caused by: Request failed after 3 retries
+    Caused by: Failed to fetch: `[SERVER]/`
     Caused by: error sending request for url ([SERVER]/)
     Caused by: client error (SendRequest)
     Caused by: connection closed before message completed
     ");
 }
+
+/// Check the error message for a find links index page, a non-streaming request, when the server
+/// returns different kinds of retryable errors.
+#[tokio::test]
+async fn find_links_mixed_error() {
+    let context = TestContext::new("3.12");
+    let (_server_drop_guard, mock_server_uri) = mixed_error_server().await;
+
+    let filters = vec![(mock_server_uri.as_str(), "[SERVER]")];
+    uv_snapshot!(filters, context
+        .pip_install()
+        .arg("tqdm")
+        .arg("--no-index")
+        .arg("--find-links")
+        .arg(&mock_server_uri)
+        .env_remove(EnvVars::UV_HTTP_RETRIES)
+        .env(EnvVars::UV_TEST_NO_HTTP_RETRY_DELAY, "true"), @r"
+    success: false
+    exit_code: 2
+    ----- stdout -----
+    ----- stderr -----
+    error: Failed to read `--find-links` URL: [SERVER]/
+    Caused by: Request failed after 3 retries
+    Caused by: Failed to fetch: `[SERVER]/`
+    Caused by: HTTP status server error (500 Internal Server Error) for url ([SERVER]/)
+    ");
+}
 /// Check the direct package URL error message when the server returns HTTP status 500, a retryable
 /// error.
 #[tokio::test]
@@ -185,6 +238,37 @@ async fn direct_url_io_error() {
     let (_server_drop_guard, mock_server_uri) = io_error_server().await;
+    let tqdm_url = format!(
+        "{mock_server_uri}/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl"
+    );
+
+    let filters = vec![(mock_server_uri.as_str(), "[SERVER]")];
+    uv_snapshot!(filters, context
+        .pip_install()
+        .arg(format!("tqdm @ {tqdm_url}"))
+        .env_remove(EnvVars::UV_HTTP_RETRIES)
+        .env(EnvVars::UV_TEST_NO_HTTP_RETRY_DELAY, "true"), @r#"
+    success: false
+    exit_code: 1
+    ----- stdout -----
+    ----- stderr -----
+      × Failed to download `tqdm @ [SERVER]/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl`
+      Request failed after 3 retries
+      Failed to fetch: `[SERVER]/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl`
+      error sending request for url ([SERVER]/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl)
+      client error (SendRequest)
+      connection closed before message completed
+    "#);
+}
+
+/// Check the error message for direct package URL, a streaming request, when the server returns
+/// different kinds of retryable errors.
+#[tokio::test]
+async fn direct_url_mixed_error() {
+    let context = TestContext::new("3.12");
+    let (_server_drop_guard, mock_server_uri) = mixed_error_server().await;
     let tqdm_url = format!(
         "{mock_server_uri}/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl"
     );
@@ -200,11 +284,9 @@ async fn direct_url_io_error() {
     ----- stderr -----
       × Failed to download `tqdm @ [SERVER]/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl`
-      Failed to fetch: `[SERVER]/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl`
       Request failed after 3 retries
-      error sending request for url ([SERVER]/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl)
-      client error (SendRequest)
-      connection closed before message completed
+      Failed to fetch: `[SERVER]/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl`
+      HTTP status server error (500 Internal Server Error) for url ([SERVER]/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl)
     ");
 }

View File

@@ -2447,8 +2447,8 @@ fn python_install_prerelease() {
     ----- stdout -----
     ----- stderr -----
-    Installed Python 3.15.0a2 in [TIME]
-     + cpython-3.15.0a2-[PLATFORM] (python3.15)
+    Installed Python 3.15.0a3 in [TIME]
+     + cpython-3.15.0a3-[PLATFORM] (python3.15)
     ");
     // Install a specific pre-release
@@ -2458,7 +2458,8 @@ fn python_install_prerelease() {
     ----- stdout -----
     ----- stderr -----
-    Python 3.15a2 is already installed
+    Installed Python 3.15.0a2 in [TIME]
+     + cpython-3.15.0a2-[PLATFORM]
     ");
 }
@@ -2482,7 +2483,7 @@ fn python_find_prerelease() {
     success: true
     exit_code: 0
     ----- stdout -----
-    [TEMP_DIR]/managed/cpython-3.15.0a2-[PLATFORM]/[INSTALL-BIN]/[PYTHON]
+    [TEMP_DIR]/managed/cpython-3.15.0a3-[PLATFORM]/[INSTALL-BIN]/[PYTHON]
     ----- stderr -----
     ");
@@ -2492,7 +2493,7 @@ fn python_find_prerelease() {
     success: true
     exit_code: 0
     ----- stdout -----
-    [TEMP_DIR]/managed/cpython-3.15.0a2-[PLATFORM]/[INSTALL-BIN]/[PYTHON]
+    [TEMP_DIR]/managed/cpython-3.15.0a3-[PLATFORM]/[INSTALL-BIN]/[PYTHON]
     ----- stderr -----
     ");
@@ -2501,7 +2502,7 @@ fn python_find_prerelease() {
     success: true
     exit_code: 0
     ----- stdout -----
-    [TEMP_DIR]/managed/cpython-3.15.0a2-[PLATFORM]/[INSTALL-BIN]/[PYTHON]
+    [TEMP_DIR]/managed/cpython-3.15.0a3-[PLATFORM]/[INSTALL-BIN]/[PYTHON]
     ----- stderr -----
     ");

View File

@@ -591,3 +591,96 @@ async fn python_list_remote_python_downloads_json_url() -> Result<()> {
     Ok(())
 }
+
+#[test]
+fn python_list_with_mirrors() {
+    let context: TestContext = TestContext::new_with_versions(&[])
+        .with_filtered_python_keys()
+        .with_collapsed_whitespace()
+        // Add filters to normalize file paths in URLs
+        .with_filter((
+            r"(https://mirror\.example\.com/).*".to_string(),
+            "$1[FILE-PATH]".to_string(),
+        ))
+        .with_filter((
+            r"(https://python-mirror\.example\.com/).*".to_string(),
+            "$1[FILE-PATH]".to_string(),
+        ))
+        .with_filter((
+            r"(https://pypy-mirror\.example\.com/).*".to_string(),
+            "$1[FILE-PATH]".to_string(),
+        ))
+        .with_filter((
+            r"(https://github\.com/astral-sh/python-build-standalone/releases/download/).*"
+                .to_string(),
+            "$1[FILE-PATH]".to_string(),
+        ))
+        .with_filter((
+            r"(https://downloads\.python\.org/pypy/).*".to_string(),
+            "$1[FILE-PATH]".to_string(),
+        ))
+        .with_filter((
+            r"(https://github\.com/oracle/graalpython/releases/download/).*".to_string(),
+            "$1[FILE-PATH]".to_string(),
+        ));
+
+    // Test with UV_PYTHON_INSTALL_MIRROR environment variable - verify mirror URL is used
+    uv_snapshot!(context.filters(), context.python_list()
+        .arg("cpython@3.10.19")
+        .arg("--show-urls")
+        .env(EnvVars::UV_PYTHON_INSTALL_MIRROR, "https://mirror.example.com")
+        .env_remove(EnvVars::UV_PYTHON_DOWNLOADS), @r"
+    success: true
+    exit_code: 0
+    ----- stdout -----
+    cpython-3.10.19-[PLATFORM] https://mirror.example.com/[FILE-PATH]
+    ----- stderr -----
+    ");
+
+    // Test with UV_PYPY_INSTALL_MIRROR environment variable - verify PyPy mirror URL is used
+    uv_snapshot!(context.filters(), context.python_list()
+        .arg("pypy@3.10")
+        .arg("--show-urls")
+        .env(EnvVars::UV_PYPY_INSTALL_MIRROR, "https://pypy-mirror.example.com")
+        .env_remove(EnvVars::UV_PYTHON_DOWNLOADS), @r"
+    success: true
+    exit_code: 0
+    ----- stdout -----
+    pypy-3.10.16-[PLATFORM] https://pypy-mirror.example.com/[FILE-PATH]
+    ----- stderr -----
+    ");
+
+    // Test with both mirror environment variables set
+    uv_snapshot!(context.filters(), context.python_list()
+        .arg("3.10")
+        .arg("--show-urls")
+        .env(EnvVars::UV_PYTHON_INSTALL_MIRROR, "https://python-mirror.example.com")
+        .env(EnvVars::UV_PYPY_INSTALL_MIRROR, "https://pypy-mirror.example.com")
+        .env_remove(EnvVars::UV_PYTHON_DOWNLOADS), @r"
+    success: true
+    exit_code: 0
+    ----- stdout -----
+    cpython-3.10.19-[PLATFORM] https://python-mirror.example.com/[FILE-PATH]
+    pypy-3.10.16-[PLATFORM] https://pypy-mirror.example.com/[FILE-PATH]
+    graalpy-3.10.0-[PLATFORM] https://github.com/oracle/graalpython/releases/download/[FILE-PATH]
+    ----- stderr -----
+    ");
+
+    // Test without mirrors - verify default URLs are used
+    uv_snapshot!(context.filters(), context.python_list()
+        .arg("3.10")
+        .arg("--show-urls")
+        .env_remove(EnvVars::UV_PYTHON_DOWNLOADS), @r"
+    success: true
+    exit_code: 0
+    ----- stdout -----
+    cpython-3.10.19-[PLATFORM] https://github.com/astral-sh/python-build-standalone/releases/download/[FILE-PATH]
+    pypy-3.10.16-[PLATFORM] https://downloads.python.org/pypy/[FILE-PATH]
+    graalpy-3.10.0-[PLATFORM] https://github.com/oracle/graalpython/releases/download/[FILE-PATH]
+    ----- stderr -----
+    ");
+}
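As the snapshots show, a mirror works by swapping the known default base URL (the python-build-standalone release host, or `downloads.python.org/pypy` for PyPy) for the mirror while keeping the release-specific path. A sketch of that substitution under illustrative names (`apply_mirror` is not uv's actual function):

```rust
// Replace the default download base with a mirror base, keeping the
// release-specific path. Illustrative sketch, not uv's implementation.
const DEFAULT_BASE: &str =
    "https://github.com/astral-sh/python-build-standalone/releases/download";

fn apply_mirror(url: &str, mirror: Option<&str>) -> String {
    match mirror {
        Some(base) => match url.strip_prefix(DEFAULT_BASE) {
            // Keep the path after the default base, rooted at the mirror.
            Some(rest) => format!("{}{}", base.trim_end_matches('/'), rest),
            // URLs not under the default base are left untouched.
            None => url.to_string(),
        },
        None => url.to_string(),
    }
}

fn main() {
    // Hypothetical release path for illustration only.
    let url = format!("{DEFAULT_BASE}/20251217/cpython-3.10.19.tar.gz");
    let mirrored = apply_mirror(&url, Some("https://mirror.example.com"));
    assert_eq!(
        mirrored,
        "https://mirror.example.com/20251217/cpython-3.10.19.tar.gz"
    );
}
```

This also explains the GraalPy line in the snapshots: its URLs are not under either default base, so they pass through unchanged even when both mirror variables are set.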

View File

@@ -3565,6 +3565,7 @@ fn resolve_tool() -> anyhow::Result<()> {
         link_mode: Some(
             Clone,
         ),
+        torch_backend: None,
         compile_bytecode: None,
         no_sources: None,
         upgrade: None,
@@ -3573,7 +3574,6 @@ fn resolve_tool() -> anyhow::Result<()> {
         no_build_package: None,
         no_binary: None,
         no_binary_package: None,
-        torch_backend: None,
     },
     settings: ResolverInstallerSettings {
         resolver: ResolverSettings {
@@ -3615,6 +3615,7 @@ fn resolve_tool() -> anyhow::Result<()> {
         prerelease: IfNecessaryOrExplicit,
         resolution: LowestDirect,
         sources: Enabled,
+        torch_backend: None,
         upgrade: None,
     },
     compile_bytecode: false,
@@ -7912,6 +7913,7 @@ fn preview_features() {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: None,
     },
     compile_bytecode: false,
@@ -8026,6 +8028,7 @@ fn preview_features() {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: None,
     },
     compile_bytecode: false,
@@ -8140,6 +8143,7 @@ fn preview_features() {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: None,
     },
     compile_bytecode: false,
@@ -8254,6 +8258,7 @@ fn preview_features() {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: None,
     },
     compile_bytecode: false,
@@ -8368,6 +8373,7 @@ fn preview_features() {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: None,
     },
     compile_bytecode: false,
@@ -8484,6 +8490,7 @@ fn preview_features() {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: None,
     },
     compile_bytecode: false,
@@ -9738,6 +9745,7 @@ fn upgrade_project_cli_config_interaction() -> anyhow::Result<()> {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: None,
     },
 }
@@ -9857,6 +9865,7 @@ fn upgrade_project_cli_config_interaction() -> anyhow::Result<()> {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: Packages(
             {
                 PackageName(
@@ -9999,6 +10008,7 @@ fn upgrade_project_cli_config_interaction() -> anyhow::Result<()> {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: All,
     },
 }
@@ -10116,6 +10126,7 @@ fn upgrade_project_cli_config_interaction() -> anyhow::Result<()> {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: None,
     },
 }
@@ -10223,6 +10234,7 @@ fn upgrade_project_cli_config_interaction() -> anyhow::Result<()> {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: All,
     },
 }
@@ -10331,6 +10343,7 @@ fn upgrade_project_cli_config_interaction() -> anyhow::Result<()> {
         prerelease: IfNecessaryOrExplicit,
         resolution: Highest,
         sources: Enabled,
+        torch_backend: None,
         upgrade: Packages(
             {
                 PackageName(