Following the upstream release and #12120, this removes the gating that prevented installation of the managed musl Python versions.
Of note:
- The filtering of musl Python distributions has moved from the Rust
runtime to the metadata fetcher
- The filtering is now conditional on the PBS release date, removing all
old static musl distributions
- We could support the `+static` musl downloads in the future; right
now, they are deprioritized when selecting a variant
- I added a test to CI that uses Alpine and installs numpy, as shown below
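As a quick illustration of the unblocked flow (the explicit request key is an assumption based on our naming scheme):
```sh
# On Alpine (musl libc), installing a managed Python now works out of the box.
uv python install 3.13
# The musl variant can also be requested explicitly (key is illustrative):
uv python install cpython-3.13-linux-x86_64-musl
```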
I somehow missed running an actual integration test of the PEP 517 API in CI, and the Python shim was still using the old uv CLI interface.
The tests include pip, uv, and `python -m build`, exercised as sketched below. They must be in a CI job since we can't depend on the Python package in the Rust tests (we only get the binary in `cargo test`, not the `uv_build` wheel).
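A rough sketch of what the job runs (the package path is an assumption, not the exact script):
```sh
# Exercise the uv build backend through each PEP 517 frontend
# (the test package path is illustrative).
cd test-package
uv build            # uv's own frontend builds the sdist and wheel
python -m build .   # the `build` frontend
pip install .       # pip builds a wheel via the backend and installs it
```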
uv itself is a large package with many dependencies and lots of features. To build a package using the uv build backend, you shouldn't have to download and install the entirety of uv. For platforms where we don't provide wheels, it should be possible and fast to compile the uv build backend. To that end, we're introducing a Python package that contains a trimmed-down version of uv with only the build backend and a minimal dependency tree in Rust.
The `uv_build` package is published from CI just like uv itself. It is part of the workspace, but has far fewer dependencies for its own binary. We're using `cargo deny` to enforce that the network stack is not part of the dependencies. A new build profile ensures we get the minimum possible binary size for a Rust binary.
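For illustration, the enforcement and the size-optimized build can be invoked roughly like this (the profile name is an assumption; the banned crates live in `deny.toml`):
```sh
# Fail if a banned crate (e.g., anything from the network stack
# listed in deny.toml) appears in the dependency graph.
cargo deny check bans

# Build only the build-backend crate with the size-optimized profile
# (profile name is illustrative).
cargo build --package uv-build --profile minimal-size
```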
---------
Co-authored-by: Zanie Blue <contact@zanie.dev>
I noticed that https://github.com/astral-sh/uv/pull/11936 did not run
the Docker builds, nor did #11934
We should run these when the relevant files change so there aren't
surprises at release time!
Updates the `build-binaries` workflow to include toolchain version
changes and `.cargo/config.toml` changes too.
For uv-build, we need to duplicate a lot of the `build-binaries.yml` logic to build another source distribution and wheel. In preparation for that, I tried to make the invocations more consistent, so it's easier to review the changes when adding the `uv-build` builds on top.
Split out from #11446
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Alpine 3.21 has been released for a few months and is now used officially for the `alpine`-based [Python images](https://hub.docker.com/_/python); hence our python-alpine-based images have been using 3.21 under the hood since uv 0.5.8.
This could arguably be `breaking` as we're dropping the alpine3.20 top-level tag, so it could be a good candidate for 0.6.0.
Alternatively, we can keep support for 3.20 and make this non-breaking by simply repointing `alpine` to 3.21 and keeping the 3.20 tag around.
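After the change, the tags would resolve along these lines (tag names are an assumption based on our current scheme):
```sh
# `alpine` now points at an Alpine 3.21 base.
docker pull ghcr.io/astral-sh/uv:alpine
# Pinning the minor Alpine release explicitly:
docker pull ghcr.io/astral-sh/uv:alpine3.21
```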
cosign uses the GitHub Actions ID token to retrieve an ephemeral code signing certificate from Fulcio and stores the signature in the Rekor transparency log.
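The signing step in CI looks roughly like this (the digest variable is illustrative; `--yes` skips the interactive confirmation):
```sh
# Keyless signing: cosign picks up the ambient GitHub Actions OIDC token.
cosign sign --yes "ghcr.io/astral-sh/uv@${DIGEST}"
```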
Once an image has been successfully signed, you should be able to verify
the signature with:
```sh
cosign verify ghcr.io/astral-sh/uv:latest --certificate-identity-regexp='.*' --certificate-oidc-issuer-regexp='.*'
```
Closes #8670
We have a lot of jobs downstream of the `build-binary-linux` job, but
the job is significantly slower than the other binary builds because we
need to configure musl. Instead, we split this into two jobs (as it was
before https://github.com/astral-sh/uv/pull/2309#discussion_r1520101330)
to speed things up.
The libc job takes ~1m and its _downstream_ jobs finish before the musl
build does. The musl job takes ~5m.
## Summary
In preview mode on Windows, register and unregister the managed python-build-standalone installations in the Windows registry, following PEP 514.
We write the values defined in the PEP plus the download URL and hash. We add an entry when installing a version, remove the entry when uninstalling, and remove all values when uninstalling with `--all`. We update entries only by overwriting existing values; there is no "syncing" involved.
Since they are not official builds, python-build-standalone gets a prefix: `py -V:Astral/CPython3.13.1` works, `py -3.13` doesn't.
```
$ py --list-paths
-V:3.12 * C:\Users\Konsti\AppData\Local\Programs\Python\Python312\python.exe
-V:3.11.9 C:\Users\Konsti\.pyenv\pyenv-win\versions\3.11.9\python.exe
-V:3.11 C:\Users\micro\AppData\Local\Programs\Python\Python311\python.exe
-V:3.8 C:\Users\micro\AppData\Local\Programs\Python\Python38\python.exe
-V:Astral/CPython3.13.1 C:\Users\Konsti\AppData\Roaming\uv\data\python\cpython-3.13.1-windows-x86_64-none\python.exe
```
Registry errors are reported but not fatal, except for operations on the company key, since it's not bound to any specific Python interpreter.
On uninstallation, we prune registry entries that have no matching Python installation (i.e., broken entries).
The code uses the official `windows_registry` crate of the `windows-rs` project.
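For illustration, the written entries can be inspected with `reg` (the key layout follows PEP 514; the exact values are an assumption):
```sh
# List the PEP 514 entries under uv's "Astral" company key recursively.
reg query "HKCU\Software\Python\Astral" /s
```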
Best reviewed commit-by-commit.
## Test Plan
We're reusing an existing system check to test different (un)installation scenarios.
In the interest of expanding these tests and debugging weird behaviors,
I've moved the smoke tests out of the `cargo test` job and into
dedicated `smoke test` jobs. We explicitly build `uvx` in the `build
binary` jobs instead of relying on the implicit build for the test run.
I also added a `uvx` test case to the smoke tests: `uvx ruff --version`
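As a sketch, the smoke tests boil down to invocations like these against the prebuilt binaries (paths and cases are illustrative):
```sh
# Run the downloaded artifacts directly instead of building in the test job.
./uv --version
./uv venv
./uvx ruff --version
```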
Demo at https://github.com/zanieb/uv/issues
I think the next steps are to
- Move the "Build failures" document to a dedicated "Troubleshooting"
section
- Add more documentation on how to create an MRE
- Add more troubleshooting pages
See https://github.com/astral-sh/uv/issues/4204 for motivation
This doesn't really reach the user experience I'd expect — i.e., we end
up saying a virtual environment "does not exist" which is a little
silly. However, I think improving the error messaging on interpreter
queries in general should be solved separately. I did one small
"general" change in
89e11d0222
— otherwise we don't show the message at all.
---------
Co-authored-by: konsti <konstin@mailbox.org>
When using the standard Windows runners (as opposed to the _larger_
GitHub runners), an undocumented `D:` drive is available and performant.
We can save some money by using this on a standard runner instead of a larger runner with a ReFS drive. Switching to the `D:` drive was not acceptable for `cargo test` (>25m runtime).
Inspired by https://github.com/pypa/pip/pull/13129
See https://github.com/actions/runner-images/issues/8755
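The switch essentially points caches and temp directories at the faster drive, roughly like this (variable choices are illustrative of the approach, not the exact workflow config):
```sh
# Relocate build caches and temp dirs onto the undocumented D: drive
# (runs under Git Bash on the Windows runner).
export CARGO_HOME='D:\cargo'
export TMP='D:\temp'
export TEMP='D:\temp'
```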
Timings (grain of salt — GitHub is super noisy):
- clippy: 2m 18s -> 2m 11s
- build binary: 2m 3s -> 2m 35s
- trampoline check (x86-64): 2m 32s -> 1m 50s (other architectures
similar)
- trampoline test (x86-64): 4m 12s -> 6m 7s
- trampoline test (i686): 6m 44s -> 5m 35s
Previously, we couldn't use a DevDrive
(https://github.com/astral-sh/uv/pull/3522#issuecomment-2111448930)
because our Windows version was not sufficient.
Recently, I upgraded our larger runners to Windows 2025 preview
(https://github.com/astral-sh/uv/pull/10298) which I presume has support
for this.
I removed ReFS in
953c3535c3
which didn't seem to affect performance.
I also found some notes on "trusted" DevDrives and "disabling anti-virus filtering", which I still have to try.
The latest release flaked, failing to fetch the buildx image, reportedly due to rate limits. Last I checked, Docker Hub enforces much stricter limits on unauthenticated requests. I added a bot account and a corresponding read-only token.
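The workflow then authenticates before pulling, along these lines (secret names are an assumption):
```sh
# Log in with the read-only bot token so buildx image pulls count
# against authenticated rate limits.
echo "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
```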
The shellcheck action we use misses some files, so they fell out of spec for what we support. This PR first and foremost adds them to the scanning list, and then fixes the issues found.
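One way to catch such stragglers locally is to scan every tracked shell script rather than relying on the action's discovery (a sketch; extensionless scripts would need to be listed explicitly):
```sh
# Shellcheck all tracked *.sh files (illustrative; the action's own
# file discovery differs).
git ls-files '*.sh' | xargs shellcheck
```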
Fixes #7480
I'm renaming our runners to be more explicit about their size,
architecture, and version.
I'm also switching some of our jobs from Windows 2022 to Windows 2025 in the hope that it's faster.