Ignore that the file matching is a little redundant with other checks; I'll consolidate those in a subsequent pull request.
Co-authored-by: Claude <noreply@anthropic.com>
The goal here is to reduce cache consumption and contention by avoiding
cache saves on pull requests unless the pull request makes a change that
requires a new cache entry.
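A minimal sketch of that shape, assuming `Swatinem/rust-cache` is the caching action in use; the condition below (a branch check plus a hypothetical changed-files step output) is illustrative rather than the exact rule from this change:

```yaml
- uses: Swatinem/rust-cache@v2
  with:
    # Always restore the cache, but only write a new entry on main or when the
    # pull request changed something that actually affects the cache.
    save-if: ${{ github.ref == 'refs/heads/main' || steps.lockfile-changed.outputs.any_changed == 'true' }}
```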
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
This is included in the Rust cache key hash because it is prefixed with
`RUST_` but has no effect on compilation. This invalidates caches that
could otherwise be shared across jobs.
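For illustration (the variable name below is hypothetical): `rust-cache` hashes environment variables whose names start with `RUST` into its cache key, so a per-job value like this forks caches that could otherwise be shared across jobs:

```yaml
env:
  # Hashed into the rust-cache key purely because of the RUST_ prefix, despite
  # having no effect on compilation; renaming it without that prefix keeps the
  # cache key identical across jobs.
  RUST_TEST_SHARD: "2" # hypothetical name
```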
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Copies astral-sh/ruff#22126
We can use a larger _and_ less expensive runner for the build step.
Total runtime went from 18m to 15m, and most of that time is no longer spent
on the benchmark runner.
---------
Co-authored-by: Claude <noreply@anthropic.com>
Yet another attempt at #5714 to see if it improves CI times now that
Windows test times have increased
I think this is worth it now. Per-run (wall-clock) time:
```
main (1 shard): 13m 46s
branch (2 shards): 10m 31s
branch (3 shards): 7m 34s
```
Total time (i.e., the cost increase):
```
main (1 shard): 13m 46s
branch (2 shards): 20m 18s
branch (3 shards): 21m 32s
```
As in #875, we could explore moving the build step before the sharded
test runs. The build is 3m of the runtime, but I think artifact transfer
and runner startup time was too expensive last time and I don't expect
it to be faster. It might save a bit on CI cost, but I'm not super
worried about that.
I chose three shards due to a reasonable reduction in per-shard runtime
without a big increase in total runtime (compared to two shards). I
think that since half of the shard runtime is fixed build time (vs. reducible
test time), going up to more shards would be wasteful.
We use a hash-based strategy for test splitting, which means adding
tests will never move tests across shards.
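A sketch of the sharding shape, assuming cargo-nextest's hash-based partitioning (the job layout and shard count below are illustrative):

```yaml
windows-tests:
  runs-on: windows-latest
  strategy:
    fail-fast: false
    matrix:
      shard: [1, 2, 3]
  steps:
    - uses: actions/checkout@v4
    - uses: Swatinem/rust-cache@v2
    - name: Run tests (shard ${{ matrix.shard }} of 3)
      # Hash partitioning assigns each test by hashing its name, so adding a
      # new test never reshuffles existing tests between shards.
      run: cargo nextest run --partition "hash:${{ matrix.shard }}/3"
```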
I moved _off_ of the Depot runner because the "checkout" setup is taking
_way_ too long, about 120s more than on the GitHub runner. Once we split
over multiple shards, that overhead outweighs the speed benefits of the Depot runners.
The downsides here are:
1. Increased total CI time and therefore increased cost
2. GitHub runners are more expensive than Depot runners
3. Failing tests can be split across multiple shards, requiring more
GitHub UI navigation
(1) The current runtime is unacceptably slow, and I think the cost increase is
fairly marginal
(2) We can move back to Depot runners once they resolve the specific
performance issue
(3) Failing tests on Windows without failing tests on Linux are fairly
rare and should often be isolated to a single shard
---------
Co-authored-by: Claude <noreply@anthropic.com>
The Windows registry test (PEP 514 integration) tests Python
installation registration functionality rather than system Python
detection like the other tests in test-system, so it fits better in
test-integration.
Co-authored-by: Claude <noreply@anthropic.com>
We get a bunch of redundant skipped `Release / Build binary ...` jobs in
CI otherwise, and I would rather the release workflow didn't have a pull
request trigger at all.
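Purely for illustration (these are not uv's actual triggers): jobs that are skipped on pull requests still show up in the PR checks list as long as the workflow has a `pull_request` trigger, whereas scoping the triggers removes them entirely:

```yaml
on:
  push:
    tags:
      - "*"
  workflow_dispatch:
  # No pull_request trigger, so no skipped "Release / Build binary ..." jobs
  # cluttering pull request checks.
```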
Follows #17388
This file is too big for an LLM context window and several contributors
have complained about it being too scary to touch.
This also gets us collapsible sections in the UI.
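One way to get that shape, purely as an illustration (assuming reusable workflows; file names are made up): a thin top-level workflow calls smaller per-area workflows, and each call shows up as its own collapsible group in the checks UI.

```yaml
# .github/workflows/ci.yml (top level)
name: CI
on:
  push:
    branches: [main]
  pull_request:
jobs:
  lint:
    uses: ./.github/workflows/lint.yml
  test:
    uses: ./.github/workflows/test.yml
# Each called file declares `on: workflow_call` so it can be invoked from here.
```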
I renamed some jobs for clarity in the meantime, and added a meta-job
for required checks passing so we can avoid churn in our "Settings" when
we change job names.
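A sketch of what such a meta-job can look like (the job names below are illustrative): branch protection requires only this one check, so renaming or adding real jobs never requires touching the repository settings.

```yaml
ci-passed:
  name: CI passed
  runs-on: ubuntu-latest
  # Run even when upstream jobs fail or are skipped, so the check always reports.
  if: always()
  needs: [lint, test-linux, test-windows]
  steps:
    - name: Verify required jobs
      run: |
        if [ "${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}" = "true" ]; then
          echo "A required job failed or was cancelled"
          exit 1
        fi
```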
Note this was entirely refactored by Claude.
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
I've noticed this escapes the trampoline crates, so they fail whenever
there's bad formatting in the workspace.
Co-authored-by: Claude <noreply@anthropic.com>
## Summary
This fixes the report generation issues caused by large profile data, which is
now handled properly by this newer version.
## Test Plan
Reports should be generated on this PR
This also removes the file-specific targets from the prettier execution,
which means we're now including `.json`, `.css`, and `.html` files; that
seems like an improvement.
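Roughly, the invocation goes from an explicit list of globs to letting prettier walk the tree (the globs and `npx` usage below are illustrative; exclusions would typically live in `.prettierignore`):

```yaml
# Before (illustrative): only explicitly listed extensions were checked.
- run: npx prettier --check "**/*.{md,yml,yaml}"
# After: check everything prettier understands, which now also covers .json,
# .css, and .html files.
- run: npx prettier --check .
```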
Co-authored-by: Claude <noreply@anthropic.com>
Closes https://github.com/astral-sh/uv/issues/17095
This also stabilizes the Alpine version for users that do not choose to
pin it. We could add this to the build matrix separately to avoid that,
but I think that's okay?
It'd be nice to avoid churn for contributors. This is a pretty frequent
cause of CI failures and I don't think we really need to have the
reference documentation committed.
See https://github.com/astral-sh/uv/pull/16944
The `crates.io` publish succeeded and is not idempotent (i.e., it'll
fail on another publish attempt), so we will skip it for a re-run of the
release workflow.
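One way to sketch that skip (the crate name, version lookup, and endpoint usage below are illustrative, not the actual workflow change): check whether the version is already on crates.io before calling `cargo publish`.

```yaml
- name: Publish to crates.io (skip if already published)
  run: |
    crate=uv                        # illustrative crate name
    version="${GITHUB_REF_NAME#v}"  # illustrative: derive the version from the tag
    if curl -fsS "https://crates.io/api/v1/crates/${crate}/${version}" > /dev/null; then
      echo "${crate} ${version} is already on crates.io; skipping publish"
    else
      cargo publish -p "${crate}"
    fi
```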
## Summary
This broke the release and I haven't figured out why yet.
## Test Plan
Blame my past self.
Signed-off-by: William Woodruff <william@astral.sh>
## Summary
Follow up to https://github.com/astral-sh/uv/pull/15563
Closes https://github.com/astral-sh/uv/issues/13485
This is a first pass at adding conditional support for Git LFS across git
sources; initial feedback welcome.
e.g.
```toml
[tool.uv.sources]
test-lfs-repo = { git = "https://github.com/zanieb/test-lfs-repo.git", lfs = true }
```
For context: previously, a user had to set `UV_GIT_LFS` to have uv fetch
LFS objects for git sources. This env var was all or nothing, meaning you
had to keep it set at all times to get consistent behavior, and it applied to all
git sources. If you fetched LFS objects at a revision and then turned
off LFS (or vice versa), the git db and the corresponding checkout's LFS
artifacts would not be updated properly. Similarly, when git source
distributions were built, there was no distinction between sources
with LFS and without LFS. Hence, it could corrupt the git, sdist, and
archive caches.
In order to support some sources being LFS-enabled and others not, this
PR adds a stateful layer, roughly similar to how `subdirectory` works but
for `lfs`, since the git database, the checkouts, and the corresponding
caching layers needed to be LFS-aware (requested vs installed). The
caches also had to be isolated and treated as entirely separate when handling
LFS sources.
### Summary
* Adds `lfs = true` or `lfs = false` to git sources in `pyproject.toml`
* Adds an `lfs=true` query param / fragment to the most relevant URL structs
(not parsed as user input)
* For `uv add` / `uv tool`, `--lfs` is supported instead
* `UV_GIT_LFS` environment variable support is still functional for
non-project entrypoints (e.g. `uv pip`)
* `direct-url.json` now has a custom `git_lfs` entry under `VcsInfo`
(note, this is not in the spec currently -- see caveats).
* The git database and checkouts have a different cache key, as the sources
should be treated as effectively different for the same rev.
* The sdist cache key of a built distribution also differs if it was built
from an LFS-enabled revision, to distinguish it from the same revision
without LFS. This ensures the strong assumption for archive-v0 that
an unpacked revision "doesn't change sources" stays valid.
### Caveats
* `pylock.toml` import support has not been added via `git_lfs=true`;
going through the spec, it wasn't clear to me that it's something we'd support
outside of the env var (for now).
* The direct-url struct was modified by adding a non-standard `git_lfs`
field under `VcsInfo`, which may be undesirable, although PEP 610 does
say `Additional fields that would be necessary to support such VCS
SHOULD be prefixed with the VCS command name`, which could be interpreted
as allowing this change.
* There will be slight lockfile and cache churn for users that use
`UV_GIT_LFS`, as all git lockfile entries will gain an `lfs=true` fragment.
The cache version does not need an update, but LFS sources will get
their own namespace under git-v0 and sdist-v9/git, so a one-time cache miss
will occur; this could be sufficient to label the change as breaking for
workflows that always set `UV_GIT_LFS`.
## Test Plan
Some initial tests were added. More tests are likely to follow as we reach
consensus on a final approach.
For the integration tests, we may want to move to a repo under the astral
namespace in order to test LFS functionality.
Manual testing was done for common pathological cases like killing an LFS
fetch mid-way, uninstalling LFS after installing an sdist with it and then
reinstalling, fetching LFS artifacts at different commits, etc.
PSA: Please ignore the Docker build failures, as they're related to Depot
OIDC issues.
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
Co-authored-by: konstin <konstin@mailbox.org>