Commit Graph

441 Commits

Author SHA1 Message Date
Zanie Blue 2a9cbdb11f
Upgrade to cargo-dist 0.13.3 (#3217) 2024-04-23 18:18:21 +00:00
renovate[bot] 054fa3e439
Update Rust crate zip to v1 (#3175) 2024-04-22 22:17:44 -04:00
Zanie Blue ccf4a85f89
Fix Docker publish permissions in release pipeline (#3195) 2024-04-22 19:05:21 -04:00
Zanie Blue 11c6a07bb5
Regenerate with cargo-dist 0.13.2 (#3194) 2024-04-22 19:15:14 +00:00
renovate[bot] 04eaee7e19
Update Rust crate pyo3 to 0.21.0 (#2911) 2024-04-22 18:27:22 +00:00
Charlie Marsh 1bf9d879ab
Re-add Docker image to release pipeline (#3155)
## Summary

I already re-created and pushed the image, and set its visibility to
public (it defaults to private) -- see:
https://github.com/astral-sh/uv/pkgs/container/uv.

Closes https://github.com/astral-sh/uv/issues/2774.
2024-04-21 12:02:48 -04:00
Charlie Marsh dfccdb0e39
Enable global configuration files (#3049)
## Summary

Enables `uv` to read configuration from (e.g.)
`/Users/crmarsh/.config/uv/uv.toml` on macOS and Linux, and
`C:\Users\Charlie\AppData\Roaming\uv\uv.toml` on Windows.

This differs slightly from Ruff, which uses the `Application Support`
directory on macOS, but I've deviated here based on the preferences
expressed in https://github.com/astral-sh/ruff/issues/10739.
2024-04-17 13:59:50 -04:00
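
To make the lookup above concrete, here is a standard-library-only sketch of resolving a user-level `uv.toml` per platform; the helper name and exact precedence are illustrative, not uv's actual implementation.

```rust
use std::env;
use std::path::PathBuf;

/// Illustrative helper: locate a user-level `uv.toml`, using the XDG-style
/// `~/.config/uv/uv.toml` on macOS and Linux and `%APPDATA%\uv\uv.toml` on
/// Windows.
fn user_config_file() -> Option<PathBuf> {
    let base = if cfg!(windows) {
        env::var_os("APPDATA").map(PathBuf::from)?
    } else {
        // Respect `XDG_CONFIG_HOME` if set; otherwise fall back to `~/.config`.
        env::var_os("XDG_CONFIG_HOME")
            .map(PathBuf::from)
            .or_else(|| env::var_os("HOME").map(|home| PathBuf::from(home).join(".config")))?
    };
    let candidate = base.join("uv").join("uv.toml");
    candidate.is_file().then_some(candidate)
}
```
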
Charlie Marsh 295b58ad37
Add `uv-workspace` crate with settings discovery and deserialization (#3007)
## Summary

This PR adds basic struct definitions along with a "workspace" concept
for discovering settings. (The "workspace" terminology is used to match
Ruff; I did not invent it.)

A few notes:

- We discover any `pyproject.toml` or `uv.toml` file in any parent
directory of the current working directory. (We could adjust this to
look at the directories of the input files.)
- We don't actually do anything with the configuration yet; but those
PRs are large and I want this to be reviewed in isolation.
2024-04-16 13:56:47 -04:00
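
As a rough illustration of the discovery described above (walking up from the working directory until a `uv.toml` or `pyproject.toml` is found), a hedged, standard-library-only sketch:

```rust
use std::env;
use std::io;
use std::path::PathBuf;

/// Walk up from the current working directory and return the first directory
/// that contains a `uv.toml` or `pyproject.toml`, if any.
fn find_settings_dir() -> io::Result<Option<PathBuf>> {
    let mut dir = env::current_dir()?;
    loop {
        if dir.join("uv.toml").is_file() || dir.join("pyproject.toml").is_file() {
            return Ok(Some(dir));
        }
        if !dir.pop() {
            return Ok(None);
        }
    }
}
```
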
renovate[bot] b4a79b9ee2
chore(deps): update rust crate axoupdater to 0.5.0 (#3026) 2024-04-14 22:09:25 -04:00
Charlie Marsh c43757ad4c
Avoid calling `normalize_path` with relative paths that extend beyond the current directory (#3013)
## Summary

It turns out that `normalize_path` (sourced from Cargo) has a subtle
bug. If you pass it a relative path that traverses beyond the root, it
silently drops components. So, e.g., passing `../foo/bar`, it will just
drop the leading `..` and return `foo/bar`.

This PR encodes that behavior as a `Result` and avoids using it in such
cases.

Closes https://github.com/astral-sh/uv/issues/3012.
2024-04-12 14:48:03 -04:00
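
A sketch of the `Result`-returning normalization described above; this is an illustrative reimplementation, not the function uv actually uses:

```rust
use std::path::{Component, Path, PathBuf};

/// Lexically normalize a path, erroring if `..` components would escape the
/// path's own root (e.g. `../foo/bar`), instead of silently dropping them.
fn normalize_path(path: &Path) -> Result<PathBuf, String> {
    let mut out = PathBuf::new();
    for component in path.components() {
        match component {
            Component::CurDir => {}
            Component::ParentDir => {
                if !out.pop() {
                    return Err(format!("path escapes its root: {}", path.display()));
                }
            }
            other => out.push(other.as_os_str()),
        }
    }
    Ok(out)
}
```
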
Charlie Marsh 1f3b5bb093
Add hash-checking support to `install` and `sync` (#2945)
## Summary

This PR adds support for hash-checking mode in `pip install` and `pip
sync`. It's a large change, both in terms of the size of the diff and
the modifications in behavior, but it's also one that's hard to merge in
pieces (at least, with any test coverage) since it needs to work
end-to-end to be useful and testable.

Here are some of the most important highlights:

- We store hashes in the cache. Where we previously stored pointers to
unzipped wheels in the `archives` directory, we now store pointers with
a set of known hashes. So every pointer to an unzipped wheel also
includes its known hashes.
- By default, we don't compute any hashes. If the user runs with
`--require-hashes`, and the cache doesn't contain those hashes, we
invalidate the cache, redownload the wheel, and compute the hashes as we
go. For users that don't run with `--require-hashes`, there will be no
change in performance. For users that _do_, the only change will be if
they don't run with `--generate-hashes` -- then they may see some
repeated work between resolution and installation, if they use `pip
compile` then `pip sync`.
- Many of the distribution types now include a `hashes` field, like
`CachedDist` and `LocalWheel`.
- Our behavior is similar to pip, in that we enforce hashes when pulling
any remote distributions, and when pulling from our own cache. Like pip,
though, we _don't_ enforce hashes if a distribution is _already_
installed.
- Hash validity is enforced in a few different places:
1. During resolution, we enforce hash validity based on the hashes
reported by the registry. If we need to access a source distribution,
though, we then enforce hash validity at that point too, prior to
running any untrusted code. (This is enforced in the distribution
database.)
2. In the install plan, we _only_ add cached distributions that have
matching hashes. If a cached distribution is missing any hashes, or the
hashes don't match, we don't return them from the install plan.
3. In the downloader, we _only_ return distributions with matching
hashes.
4. The final combination of "things we install" are: (1) the wheels from
the cache, and (2) the downloaded wheels. So this ensures that we never
install any mismatching distributions.
- Like pip, if `--require-hashes` is provided, we require that _all_
distributions are pinned with either `==` or a direct URL. We also
require that _all_ distributions have hashes.

There are a few notable TODOs:

- We don't support hash-checking mode for unnamed requirements. These
should be _somewhat_ rare, though, since `pip compile` never outputs
unnamed requirements. I can fix this; it's just some additional work.
- We don't automatically enable `--require-hashes` when a hash exists in
the requirements file. We require `--require-hashes` to be passed explicitly.

Closes #474.

## Test Plan

I'd like to add some tests for registries that report incorrect hashes,
but otherwise: `cargo test`
2024-04-10 19:09:03 +00:00
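
To illustrate the hash-validation step in isolation, a sketch that checks an artifact against a set of pinned digests, assuming the `sha2` and `hex` crates (types and names are simplified stand-ins for uv's internals):

```rust
use sha2::{Digest, Sha256};

/// Return `true` if the artifact's SHA-256 digest matches any of the expected
/// hex-encoded digests (e.g. the hashes pinned in a requirements file).
fn matches_expected_hash(artifact: &[u8], expected_hex: &[&str]) -> bool {
    let actual = hex::encode(Sha256::digest(artifact));
    expected_hex
        .iter()
        .any(|expected| expected.eq_ignore_ascii_case(&actual))
}
```
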
Charlie Marsh ddf02e7d5f
Remove unused `task-local-extensions` dependency (#2974)
## Summary

Made obsolete with the `reqwest` upgrade.
2024-04-10 14:56:39 -04:00
Zanie Blue 44e39bdca3
Replace Python bootstrapping script with Rust implementation (#2842)
See https://github.com/astral-sh/uv/issues/2617

Note this also includes:
- #2918 
- #2931 (pending)

A first step towards Python toolchain management in Rust.

First, we add a new crate to manage Python download metadata:

- Adds a new `uv-toolchain` crate
- Adds Rust structs for Python version download metadata
- Duplicates the script which downloads Python version metadata
- Adds a script to generate Rust code from the JSON metadata
- Adds a utility to download and extract the Python version

I explored some alternatives like a build script using things like
`serde` and `uneval` to automatically construct the code from our
structs but deemed it too heavy. Unlike Rye, I don't generate the Rust
directly from the web requests; instead, there's an intermediate JSON
layer to speed up iteration on the Rust types.

Next, we add a `uv-dev` command `fetch-python` to download Python
versions per the bootstrapping script.

- Downloads a requested version or reads from `.python-versions`
- Extracts to `UV_BOOTSTRAP_DIR`
- Links executables for path extension

This command is not really intended to be user facing, but it's a good
PoC for the `uv-toolchain` API. Hash checking (via the sha256) isn't
implemented yet; we can do that in a follow-up.

Finally, we remove the `scripts/bootstrap` directory, update CI to use
the new command, and update the CONTRIBUTING docs.

<img width="1023" alt="Screenshot 2024-04-08 at 17 12 15"
src="https://github.com/astral-sh/uv/assets/2586601/57bd3cf1-7477-4bb8-a8e9-802a00d772cb">
2024-04-10 11:22:41 -05:00
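
For flavor, a small sketch of the version-request handling that `fetch-python` performs, falling back to `.python-versions` when no versions are passed; the struct and function names are invented for illustration:

```rust
use std::fs;
use std::io;

/// Illustrative only: a requested interpreter version like "3.12.2" or "3.11".
#[derive(Debug)]
struct PythonVersionRequest(String);

/// Use explicit CLI arguments if present; otherwise fall back to the versions
/// listed one-per-line in `.python-versions`.
fn requested_versions(args: &[String]) -> io::Result<Vec<PythonVersionRequest>> {
    if !args.is_empty() {
        return Ok(args.iter().cloned().map(PythonVersionRequest).collect());
    }
    let contents = fs::read_to_string(".python-versions")?;
    Ok(contents
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with('#'))
        .map(|line| PythonVersionRequest(line.to_string()))
        .collect())
}
```
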
Charlie Marsh a01143980a
Upgrade `reqwest` to v0.12.3 (#2817)
## Summary

Closes #2814.
2024-04-10 11:20:44 -04:00
konsti 15f0be8f04
Allow profiling tests with tracing instrumentation (#2957)
To get more insights into test performance, allow instrumenting tests
with tracing-durations-export.

Usage:

```shell
# A single test
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo test --features tracing-durations-export --test pip_install_scenarios no_binary -- --exact
# All tests
TRACING_DURATIONS_TEST_ROOT=$(pwd)/target/test-traces cargo nextest run --features tracing-durations-export
```

Then we can e.g. look at
`target/test-traces/pip_install_scenarios::no_binary.svg` and see the
builds it performs:


![image](https://github.com/astral-sh/uv/assets/6826232/40b4e094-debc-4b22-8aa3-9471998674af)
2024-04-10 10:15:27 +00:00
Aria Beingessner 997f3c9161
chore: update axoupdater to 0.4.0 and add a test (#2938)
## Summary

This updates to the version of axoupdater used in cargo-dist 0.13.0's
own selfupdate command, with all relevant fixes for platforms. It also
tentatively introduces a mildly dangerous self-runtest that runs `uv
self update` and checks that the binary is installed and executable.

I *believe* some adjustments need to be made to your CI to have this new
test run, because it requires the `self-update` feature to be enabled,
and I didn't want to just start messing with how you do feature coverage
in your CI. **As a result I haven't yet had a chance to actually fully
run this in CI**, though I've locally tested it on windows (with the
guard disabled).


## Test Plan

Most of the machinery here is provided by axoupdater itself (cargo-dist
also includes a variant of these tests in its codebase). This initial
implementation has a couple major limitations:

* This is For Reals modifying the system that runs the test (so it's off
unless it detects it's running in CI, and if you want variations on this
test they'll need to be [run in
serial](5e7826f7b0/cargo-dist/tests/cli-tests.rs (L235))).
Since many of the testing issues were surrounding precise details of
Actual Deployed Executions, this seemed worth the tradeoff.
* The actual installer *script* it's ultimately invoking is the one you
last published, and *not* the one that cargo-dist will make when you
next publish.

We're already working on implementing some logic for "get cargo-dist to
generate a fresh installer script too", which is in fact the basis of a
huge amount of cargo-dist's own testsuite. Now that we're dogfooding
this stuff, it should be quite hard for this stuff to break without
cargo-dist's own codebase noticing it first.


2024-04-09 23:41:16 -04:00
Zanie Blue 1512e07a2e
Split configuration options out of `uv-types` (#2924)
Needed to prevent circular dependencies in my toolchain work (#2931). I
think this is probably a reasonable change as we move towards persistent
configuration too?

Unfortunately `BuildIsolation` needs to be in `uv-types` to avoid
circular dependencies still. We might be able to resolve that in the
future.
2024-04-09 11:35:53 -05:00
Charlie Marsh 90735660cb
Upgrade cargo-dist (#2936) 2024-04-09 16:19:22 +00:00
renovate[bot] f0c83a4ded
Update Rust crate base64 to 0.22.0 (#2874) 2024-04-08 03:02:35 +00:00
Charlie Marsh 424d7ab610
Sort and expand target triples in Cargo.toml (#2858)
## Summary

Too hard to read as-is.
2024-04-07 03:25:54 +00:00
Charlie Marsh dc2c289dff
Upgrade `rs-async-zip` to support data descriptors (#2809)
## Summary

Upgrading `rs-async-zip` enables us to support data descriptors in
streaming. This both greatly improves performance for indexes that use
data descriptors _and_ ensures that we support them in a few other
places (e.g., zipped source distributions created in Finder).

Closes #2808.
2024-04-04 01:31:40 +00:00
Charlie Marsh 0382abee70
Bump `cargo-dist` to 0.13.0-prerelease.3 (#2783)
Closes #2770.
2024-04-02 21:23:55 +00:00
Charlie Marsh 4b69ad4281
Remove some unused dependencies (#2698) 2024-03-27 17:50:55 -04:00
Charlie Marsh 8c7a6038fd
Update Rust to v1.77 (#2591) 2024-03-27 14:41:27 +00:00
Zanie Blue 0b08ba1e67
Rename `uv-traits` and split into separate modules (#2674)
This is driving me a little crazy and is becoming a larger problem in
#2596 where I need to move more types (like `Upgrade` and `Reinstall`)
into this crate. Anything that's shared across our core resolver,
install, and build crates needs to be defined in this crate to avoid
cyclic dependencies. We've outgrown it being a single file with some
shared traits.

There are no behavioral changes here.
2024-03-26 15:39:43 -05:00
konsti 786598bbf6
Add fast build profile without debuginfo (#2628)
Add a compile option `--profile fast-build` for a 16% incremental compile
speedup (Linux) at the cost of having no debuginfo.

After trying various Rust compiler speedup suggestions, setting mold as
my default linker and removing debuginfo are the only ones showing a
speedup.

```
hyperfine --warmup 1 --runs 3 --prepare "touch crates/uv-resolver/src/resolver/mod.rs" \
    "cargo +nightly build --bin uv" \
    "cargo +nightly build --bin uv --profile fast-build"
Benchmark 1: cargo +nightly build --bin uv
  Time (mean ± σ):      1.569 s ±  0.008 s    [User: 1.179 s, System: 0.369 s]
  Range (min … max):    1.560 s …  1.576 s    3 runs

Benchmark 2: cargo +nightly build --bin uv --profile fast-build
  Time (mean ± σ):      1.353 s ±  0.020 s    [User: 1.109 s, System: 0.301 s]
  Range (min … max):    1.338 s …  1.375 s    3 runs

Summary
  cargo +nightly build --bin uv --profile fast-build ran
    1.16 ± 0.02 times faster than cargo +nightly build --bin uv
```
2024-03-25 16:57:26 -04:00
konsti 33bde826a0
Update pubgrub to use a dependency provider (#2648)
With https://github.com/pubgrub-rs/pubgrub/pull/190, pubgrub attaches
all types to a dependency provider to reduce the number of generics. We
need a dummy dependency provider now to emulate this. On the plus side,
pep440_rs drops its pubgrub dependency.
2024-03-25 15:51:31 +01:00
konsti 95a70084c7
Update minimum rust version (cargo) to 1.76 (#2618)
This should address (fix?) #2442; it blocks building with a Rust version
that doesn't support return-position `impl Trait`.

```
$ cargo +1.74 check
  error: package `pep508_rs v0.4.2 (/home/konsti/projects/uv/crates/pep508-rs)` cannot be built because it requires rustc 1.76 or newer, while the currently active rustc version is 1.74.1
```

While we should encourage our dependencies to set an MSRV, if we set our
rust-toolchain.toml version as the cargo MSRV, our users on no-wheel,
no-installer platforms will also be fine (or at least get helpful error
messages).
2024-03-22 18:45:16 +00:00
konsti 48bd02b8a8
Update miette v7, pubgrub and small Cargo.toml cleanup (#2610)
I was going through the output of `cargo tree --duplicate -p uv`; not
much success except these small cleanups.
2024-03-22 10:42:48 +00:00
Charlie Marsh f91ce521c5
Move requirements resolution into its own crate (#2579)
## Summary

No functional changes, but this was a lot of core logic in the `uv`
crate, which is mostly meant to be the CLI.
2024-03-21 13:52:47 +00:00
konsti a873963e5d
Fix cargo warning (#2576) 2024-03-20 21:25:09 +00:00
konsti 70e0967dbd
Avoid repeating paths of workspace packages (#2573)
Scott Schafer gave me the idea: we can avoid repeating the path for
workspace dependencies everywhere if we declare them once in the virtual
package and treat them as workspace dependencies from there on.
2024-03-20 16:16:02 -04:00
konsti 32b9eeb532
Use mac version from python for linehaul information (#2509)
See
https://github.com/astral-sh/uv/pull/2493#pullrequestreview-1942899151.

---------

Co-authored-by: Zanie Blue <contact@zanie.dev>
2024-03-20 10:55:50 +01:00
Charlie Marsh 1bf48c91f2
Add a `uv self update` command (#2228)
## Summary

Powered by Axo: https://github.com/axodotdev/axoupdater.

Closes https://github.com/astral-sh/uv/issues/1591.

## Test Plan

To test locally:

- `rm -f ~/.config/uv/uv-receipt.json /Users/crmarsh/.cargo/bin/uv`
- `curl --proto '=https' --tlsv1.2 -LsSf
https://github.com/astral-sh/uv/releases/download/0.1.14/uv-installer.sh
| sh`
- `cargo run self update`

Up-to-date:

![Screenshot 2024-03-06 at 12 13
36 AM](https://github.com/astral-sh/uv/assets/1309177/04bb7a11-6557-4317-8e86-18288fbc13c6)

Updated:

![Screenshot 2024-03-06 at 12 13
54 AM](https://github.com/astral-sh/uv/assets/1309177/c08ad739-5a2b-47cf-bf13-018a8d708330)

No receipt:

![Screenshot 2024-03-06 at 12 14
13 AM](https://github.com/astral-sh/uv/assets/1309177/317bbfaf-a787-4cbf-9f93-a4ce8ca7a988)
2024-03-19 16:02:49 -04:00
Micha Reiser acbee166c0
Remove unused dependencies (#2543)
## Summary

I tried out `cargo shear` to see if there are any unused dependencies
that `cargo udeps` isn't reporting. It turns out there are a few. This
PR removes those dependencies.

## Test Plan

`cargo build`
2024-03-19 13:10:10 -04:00
dependabot[bot] 1d10014b2d
Bump clap from 4.5.2 to 4.5.3 (#2524)
Bumps [clap](https://github.com/clap-rs/clap) from 4.5.2 to 4.5.3.
Release notes / changelog (sourced from clap's releases and CHANGELOG.md),
v4.5.3 (2024-03-15): Internal: *(derive)* Update `heck`
(https://redirect.github.com/clap-rs/clap/issues/5396).

Commits: `4e07b43` chore: Release; `8247c7d` docs: Update changelog;
`677c52c` chore: Update `heck` requirement.
Full diff: https://github.com/clap-rs/clap/compare/v4.5.2...v4.5.3

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-18 17:16:36 -05:00
dependabot[bot] 5d5eddbf8d
Bump async-trait from 0.1.77 to 0.1.78 (#2525) 2024-03-18 21:54:17 +00:00
dependabot[bot] c5f7678fcb
Bump toml from 0.8.11 to 0.8.12 (#2523) 2024-03-18 17:52:28 -04:00
samypr100 42973cd9cb
feat: add linehaul info to uv-client (#2493)
## Summary

Closes #1958

This adds linehaul metadata to uv's user-agent when PEP 508 markers are
provided to the RegistryClientBuilder. Thanks to #2381, we were able to
leverage most information from markers and avoid inconsistency.

Linehaul is meant to be accompanying metadata pip sends in its user
agent when talking to registries. You can see this output by running
something like `python -c 'from pip._internal.network.session import
user_agent; print(user_agent())'`.
In PyPI, this metadata is processed by the
[linehaul-cloud-function](https://github.com/pypi/linehaul-cloud-function).
More info about linehaul can be found in #1958.

Below are some examples from pip:

* Linux GHA: `pip/24.0
{"ci":true,"cpu":"x86_64","distro":{"id":"jammy","libc":{"lib":"glibc","version":"2.35"},"name":"Ubuntu","version":"22.04"},"implementation":{"name":"CPython","version":"3.12.2"},"installer":{"name":"pip","version":"24.0"},"openssl_version":"OpenSSL
3.0.2 15 Mar
2022","python":"3.12.2","rustc_version":"1.76.0","system":{"name":"Linux","release":"6.5.0-1016-azure"}}`
* Windows GHA: `pip/24.0
{"ci":true,"cpu":"AMD64","implementation":{"name":"CPython","version":"3.12.2"},"installer":{"name":"pip","version":"24.0"},"openssl_version":"OpenSSL
3.0.13 30 Jan
2024","python":"3.12.2","rustc_version":"1.76.0","system":{"name":"Windows","release":"2022Server"}}`
* OSX GHA: `pip/24.0
{"ci":true,"cpu":"arm64","distro":{"name":"macOS","version":"14.2.1"},"implementation":{"name":"CPython","version":"3.12.2"},"installer":{"name":"pip","version":"24.0"},"openssl_version":"OpenSSL
3.0.13 30 Jan
2024","python":"3.12.2","rustc_version":"1.76.0","system":{"name":"Darwin","release":"23.2.0"}}`



Here's what the uv results look like (sorry for the keys not being in the
same order):

* Linux GHA: `uv/0.1.21
{"installer":{"name":"uv","version":"0.1.21"},"python":"3.12.2","implementation":{"name":"CPython","version":"3.12.2"},"distro":{"name":"Ubuntu","version":"22.04","id":"jammy","libc":null},"system":{"name":"Linux","release":"6.5.0-1016-azure"},"cpu":"x86_64","openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}`
* Windows GHA: `uv/0.1.21
{"installer":{"name":"uv","version":"0.1.21"},"python":"3.12.2","implementation":{"name":"CPython","version":"3.12.2"},"distro":null,"system":{"name":"Windows","release":"2022Server"},"cpu":"AMD64","openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}`
* OSX GHA: `uv/0.1.21
{"installer":{"name":"uv","version":"0.1.21"},"python":"3.12.2","implementation":{"name":"CPython","version":"3.12.2"},"distro":{"name":"macOS","version":"14.2.1","id":null,"libc":null},"system":{"name":"Darwin","release":"23.2.0"},"cpu":"arm64","openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":true}`

Distro information (which pip retrieves via `from pip._vendor import
distro` rather than the `platform` module) was not derived from
markers. Instead, the Linux release codename/name/version uses the
`sys-info` crate, adding about 50us of extra overhead on Linux. The
distro osx version re-used the [mac_os version
implementation](99c992e38b/crates/platform-host/src/mac_os.rs)
from #2381 which adds about 20us of overhead on osx. I tried to use
other crates to avoid re-introducing `mac_os.rs` but most of them didn't
yield satisfactory performance (40ms-60ms~) or had the wrong values
needed (e.g. darwin version vs osx version).

I also didn't add libc retrieval or rustc retrieval as those seem to add
substantial overhead due to querying `ldd` or `rustc`. PyPy version
detection was also not added to avoid adding extra overhead to [support
PyPy for
linehaul](https://github.com/pypa/pip/blob/24.0/src/pip/_internal/network/session.py#L123).
All other behavior was kept 1-1 to match what pip's linehaul
implementation does (as of 24.0). This also aligns with what was
discussed in #1958.

## Test Plan

Added new integration test to uv-client.

---------

Co-authored-by: konstin <konstin@mailbox.org>
2024-03-18 10:46:58 +00:00
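
To show the shape of the payload, a hedged sketch of building a linehaul-style user-agent string, assuming `serde`/`serde_json`; the struct is a simplified stand-in for uv's actual types:

```rust
use serde::Serialize;

/// Simplified stand-in for the linehaul payload attached to the user agent.
#[derive(Serialize)]
struct LineHaul {
    installer: Installer,
    python: Option<String>,
    cpu: Option<String>,
    ci: bool,
}

#[derive(Serialize)]
struct Installer {
    name: String,
    version: String,
}

/// Produce e.g. `uv/0.1.21 {"installer":{"name":"uv","version":"0.1.21"},...}`.
fn user_agent(linehaul: &LineHaul) -> serde_json::Result<String> {
    let json = serde_json::to_string(linehaul)?;
    Ok(format!("uv/{} {}", linehaul.installer.version, json))
}
```
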
Charlie Marsh c296da34bf
Add additional ARM targets to release (#2417)
Closes https://github.com/astral-sh/uv/issues/2415.
Closes https://github.com/astral-sh/uv/issues/2416.
2024-03-15 13:49:29 +00:00
konsti 6e5a737e7a
Update pubgrub for faster range operations (#2421)
This update pulls in https://github.com/pubgrub-rs/pubgrub/pull/177,
optimizing common range operations in pubgrub. Please refer to this PR
for a more extensive description and discussion of the changes.

The changes optimize that last remaining pathological case,
`bio_embeddings[all]` on python 3.12, which has to try 100k versions,
from 12s to 3s in the cached case. It should also enable smarter
prefetching in batches (https://github.com/astral-sh/uv/issues/170),
even though a naive attempt didn't show better network usage.

**before** 12s


![image](https://github.com/pubgrub-rs/pubgrub/assets/6826232/80ffdc49-1159-453d-a3ea-0dd431df6d3b)

**after** 3s


![image](https://github.com/pubgrub-rs/pubgrub/assets/6826232/69508c29-73ab-4593-a588-d8c722242513)

```
$  taskset -c 0 hyperfine --warmup 1 "../uv/target/profiling/main-uv pip compile ../uv/scripts/requirements/bio_embeddings.in"  "../uv/target/profiling/branch-uv pip compile ../uv/scripts/requirements/bio_embeddings.in"
Benchmark 1: ../uv/target/profiling/main-uv pip compile ../uv/scripts/requirements/bio_embeddings.in
  Time (mean ± σ):     12.321 s ±  0.064 s    [User: 12.014 s, System: 0.300 s]
  Range (min … max):   12.224 s … 12.406 s    10 runs

Benchmark 2: ../uv/target/profiling/branch-uv pip compile ../uv/scripts/requirements/bio_embeddings.in
  Time (mean ± σ):      3.109 s ±  0.004 s    [User: 2.782 s, System: 0.321 s]
  Range (min … max):    3.103 s …  3.116 s    10 runs

Summary
  ../uv/target/profiling/branch-uv pip compile ../uv/scripts/requirements/bio_embeddings.in ran
    3.96 ± 0.02 times faster than ../uv/target/profiling/main-uv pip compile ../uv/scripts/requirements/bio_embeddings.in
```

It also adds `bio_embeddings[all]` as a requirements test case.
2024-03-13 17:48:24 -05:00
Zanie Blue 22a52391be
Refactor `AuthenticationStore` to inline credentials (#2427) 2024-03-13 17:48:02 -05:00
Hans Baker 9159731792
Add support for retrieving credentials from `keyring` (#2254)
## Summary


Adds basic keyring auth support for `uv` commands. Adds a clone of `pip`'s
`--keyring-provider subprocess` argument (using the CLI `keyring` tool).

See issue: https://github.com/astral-sh/uv/issues/1520

## Test Plan


Hard to write full-suite unit tests due to reliance on
`process::Command` for `keyring` cli

Manually tested end-to-end in a project with GCP artifact registry using
keyring password:
```bash
➜  uv pip uninstall watchdog
Uninstalled 1 package in 46ms
 - watchdog==4.0.0

➜  cargo run -- pip install --index-url https://<redacted>/python/simple/ --extra-index-url https://<redacted>/pypi-mirror/simple/ watchdog
    Finished dev [unoptimized + debuginfo] target(s) in 0.18s
     Running `target/debug/uv pip install --index-url 'https://<redacted>/python/simple/' --extra-index-url 'https://<redacted>/pypi-mirror/simple/' watchdog`
error: HTTP status client error (401 Unauthorized) for url (https://<redacted>/pypi-mirror/simple/watchdog/)

➜  cargo run -- pip install --keyring-provider subprocess --index-url https://<redacted>/python/simple/ --extra-index-url https://<redacted>/pypi-mirror/simple/ watchdog
    Finished dev [unoptimized + debuginfo] target(s) in 0.17s
     Running `target/debug/uv pip install --keyring-provider subprocess --index-url 'https://<redacted>/python/simple/' --extra-index-url 'https://<redacted>/pypi-mirror/simple/' watchdog`
Resolved 1 package in 2.34s
Installed 1 package in 27ms
 + watchdog==4.0.0
```

`requirements.txt`
```
#
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
#    .bin/generate-requirements
#
--index-url https://<redacted>/python/simple/
--extra-index-url https://<redacted>/pypi-mirror/simple/

...
```

```bash
➜  cargo run -- pip install --keyring-provider subprocess -r requirements.txt
    Finished dev [unoptimized + debuginfo] target(s) in 0.19s
     Running `target/debug/uv pip install --keyring-provider subprocess -r requirements.txt`
Resolved 205 packages in 23.52s
   Built <redacted>
   ...
Downloaded 47 packages in 19.32s
Installed 195 packages in 276ms
 + <redacted>
  ...
```

---------

Co-authored-by: Thomas Gilgenast <thomas@vant.ai>
Co-authored-by: Zanie Blue <contact@zanie.dev>
2024-03-13 15:02:18 -05:00
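
A minimal sketch of the `--keyring-provider subprocess` idea, shelling out to the `keyring` CLI with `std::process::Command`; error handling and URL normalization are simplified relative to the real implementation:

```rust
use std::process::Command;

/// Ask the `keyring` CLI for the password stored for `service_url` and
/// `username`, mirroring pip's subprocess keyring provider.
fn keyring_password(service_url: &str, username: &str) -> Option<String> {
    let output = Command::new("keyring")
        .arg("get")
        .arg(service_url)
        .arg(username)
        .output()
        .ok()?;
    if !output.status.success() {
        return None;
    }
    let password = String::from_utf8(output.stdout).ok()?;
    let password = password.trim();
    (!password.is_empty()).then(|| password.to_string())
}
```
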
Charlie Marsh d9b160b405
Add backoff for transient Windows failures (#2419)
## Summary

This may be required elsewhere, but all the traces in that issue are
related to persisting the temporary directory to our persistent cache,
so lets start there.

See: https://github.com/astral-sh/uv/issues/1491.
2024-03-13 13:16:26 -04:00
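
The backoff itself can be sketched as a small generic retry helper (illustrative only, not uv's code):

```rust
use std::io;
use std::thread;
use std::time::Duration;

/// Retry `op` up to `attempts` times (must be non-zero), sleeping with
/// exponential backoff between tries; intended for transient filesystem
/// errors such as a file briefly held open by another process on Windows.
fn retry_with_backoff<T>(
    attempts: u32,
    mut op: impl FnMut() -> io::Result<T>,
) -> io::Result<T> {
    let mut delay = Duration::from_millis(10);
    for attempt in 0..attempts {
        match op() {
            Ok(value) => return Ok(value),
            Err(err) if attempt + 1 == attempts => return Err(err),
            Err(_) => {
                thread::sleep(delay);
                delay *= 2;
            }
        }
    }
    unreachable!("attempts must be non-zero")
}
```
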
konsti 7964bfbb2b
Move architecture and operating system probing to Python (#2381)
The architecture of uv does not necessarily match that of the python
interpreter (#2326). In cross compiling/testing scenarios the operating
system can also mismatch. To solve this, we move arch and os detection
to python, vendoring the relevant pypa/packaging code, preventing
mismatches between what the python interpreter was compiled for and what
uv was compiled for.

To make the scripts more manageable, they are now a directory in a
tempdir and we run them with `python -m`. I've simplified the
pypa/packaging code since we're still building the tags in rust. A
`Platform` is now instantiated by querying the python interpreter for
its platform. The pypa/packaging files are copied verbatim for easier
updates except a `lru_cache()` python 3.7 backport.

Error handling is done by a `"result": "success|error"` field that allows
passing error details to Rust:

```console
$ uv venv --no-cache
  × Can't use Python at `/home/konsti/projects/uv/.venv/bin/python3`
  ╰─▶ Unknown operation system `linux`
```

I've used the [maturin sysconfig
collection](855f6d2cb1/sysconfig)
as reference. I'm unsure how to test these changes across the wide
variety of platforms.

Fixes #2326
2024-03-13 11:51:14 +00:00
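
To illustrate the `"result": "success|error"` contract, a sketch of deserializing the probe output with a `serde` internally-tagged enum; fields other than `result` are invented for illustration:

```rust
use serde::Deserialize;

/// Illustrative shape of the JSON emitted by the Python probe script.
#[derive(Deserialize)]
#[serde(tag = "result", rename_all = "lowercase")]
enum ProbeOutput {
    Success { os: String, arch: String },
    Error { kind: String, message: String },
}

fn parse_probe(stdout: &str) -> Result<ProbeOutput, serde_json::Error> {
    serde_json::from_str(stdout)
}
```
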
konsti 90a60bc4f2
Update pubgrub (#2384)
Rebase pubgrub onto upstream changes, reducing our divergencies with
upstream.
2024-03-12 16:14:50 +00:00
Charlie Marsh e9c16e9aa2
Enable TLS native root toggling at runtime (#2362)
## Summary

It turns out that on macOS, reading the native certificates can add
hundreds of milliseconds to client initialization. This PR makes
`--native-tls` a command-line flag, to toggle (at runtime) the choice of
the `webpki` roots or the native system roots.

You can't accomplish this kind of configuration with the `reqwest`
builder API, so instead, I pulled out the heart of that logic from the
crate
(e319263851/src/async_impl/client.rs (L498)),
and modified it to allow toggling a choice of root.

Note that there's an open PR for this in reqwest
(https://github.com/seanmonstar/reqwest/pull/1848), along with an issue
(https://github.com/seanmonstar/reqwest/issues/1843), which I may ping,
but it's been around for a while and I believe reqwest is focused on its
next major release.

Closes https://github.com/astral-sh/uv/issues/2346.
2024-03-12 04:05:49 +00:00
dependabot[bot] c159a262f2
Bump walkdir from 2.4.0 to 2.5.0 (#2365)
Bumps [walkdir](https://github.com/BurntSushi/walkdir) from 2.4.0 to
2.5.0.
Commits: `4f26be4` 2.5.0; `3be5734` api: implement `FusedIterator`;
`b0d16b7` ci: fix it.
Full diff: https://github.com/BurntSushi/walkdir/compare/2.4.0...2.5.0

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-11 19:37:08 -04:00
dependabot[bot] 6d67c93e0b
Bump reflink-copy from 0.1.14 to 0.1.15 (#2364) 2024-03-11 17:47:44 -04:00
dependabot[bot] 01533ab939
Bump indexmap from 2.2.3 to 2.2.5 (#2367) 2024-03-11 21:43:34 +00:00
dependabot[bot] 36ff8e0506
Bump ctrlc from 3.4.2 to 3.4.4 (#2366) 2024-03-11 21:42:04 +00:00
konsti f70ae72874
Add a `-vv` log level and make `-v` more readable (#2301)
After error messages, the debug log is the second most important
resource for finding out what went wrong, and why, when there was a problem
with uv. It is important to see which paths it has found and how the
decisions in the resolver were made. I'm trying to improve the
experience of interacting with the debug log.

The hierarchical layer is verbose and hard to follow, so it's moved to
the `-vv` extra verbose setting, while `-v` works like
`RUST_LOG=uv=debug`.

For installing jupyter with a warm cache:

* Default:
https://gist.github.com/konstin/4de6e466127311c5a5fc2f99c56a8e11
* `-v`: https://gist.github.com/konstin/e7bafe0ec7d07e47ba98a3865ae2ef3e
* `-vv`:
https://gist.github.com/konstin/3ee1aaff37f91cceb6275dd5525f180e
Ideally, we would have `-v`, `-vv` and `-vvv`, but we're lacking the
`info!` layer for `-v`, so there are only two layers for now.

The `tracing_subscriber` formatter always prints the current span, so I
replaced it with a custom formatter.


![image](https://github.com/astral-sh/uv/assets/6826232/75f5cfd1-da7b-432e-b090-2f3916930dd1)

Best read commit-by-commit.
2024-03-11 08:58:31 +01:00
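
A toy sketch of the two-level mapping (the default level shown here is an assumption, and the real setup uses a custom formatter on top):

```rust
/// Map `-v` occurrences to a tracing filter directive: the crate's debug
/// output for `-v`, and everything (including the hierarchical layer) for
/// `-vv` and above.
fn filter_directive(verbosity: u8) -> &'static str {
    match verbosity {
        0 => "uv=warn",
        1 => "uv=debug",
        _ => "trace",
    }
}
```
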
Charlie Marsh 67fb023f10
Avoid duplicating authorization header with netrc (#2325)
## Summary

The netrc middleware we added in
https://github.com/astral-sh/uv/pull/2241 has a slight problem. If you
include credentials in your index URL, _and_ in the netrc file, the
crate blindly adds the netrc credentials as a header. And given the
`ReqwestBuilder` API, this means you end up with _two_ `Authorization`
headers, which always leads to an invalid request, though the exact
failure can take different forms.

This PR removes the middleware crate in favor of our own middleware.
Instead of using the `RequestInitialiser` API, we have to use the
`Middleware` API, so that we can remove the header on the request
itself.

Closes https://github.com/astral-sh/uv/issues/2323.

## Test Plan

- Verified that running against a private index with credentials in the
URL (but no netrc file) worked without error.
- Verified that running against a private index with credentials in the
netrc file (but not the URL) worked without error.
- Verified that running against a private index with a mix of
credentials in both _also_ worked without error.
2024-03-10 15:02:24 +00:00
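
The fix amounts to only attaching netrc credentials when no `Authorization` header is present yet. A sketch of that guard using `reqwest`'s header types, with the middleware plumbing omitted:

```rust
use reqwest::header::{HeaderMap, HeaderValue, AUTHORIZATION};

/// Attach netrc-derived credentials only if the request doesn't already carry
/// an `Authorization` header (e.g. from credentials embedded in the index
/// URL), avoiding the duplicate-header failure described above.
fn apply_netrc_auth(headers: &mut HeaderMap, netrc_value: Option<HeaderValue>) {
    if headers.contains_key(AUTHORIZATION) {
        return;
    }
    if let Some(value) = netrc_value {
        headers.insert(AUTHORIZATION, value);
    }
}
```
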
Andrew Gallant b3b5afaf78 uv-fs: transparently support reading UTF-16 files
This PR tweaks uv to support reading `requirements.txt` regardless of
whether it is encoded as UTF-8 or UTF-16. This is particularly relevant
on Windows where `uv pip freeze > requirements.txt` will likely write a
UTF-16 encoded `requirements.txt` file.

There is some discussion on #1666 where it's suggested that perhaps
we should explicitly not support this. I didn't see that until I
had already put this PR together, but even so, I think it's worth
considering this. UTF-16 is predominant on Windows. It is very easy
to produce a UTF-16 encoded file. Moreover, there is an easy and well
specified way to recognize and transcode UTF-16 encoded data to UTF-8.

I think the downside of this is that it could encourage the use of UTF-16
encoded `requirements.txt` files *in addition* to UTF-8 encoded
files, and it would probably be nice to converge and standardize on
one encoding. One possible alternative to this PR is that we provide
a better error message. Another alternative is to ensure that a
`-o/--output` flag exists for all commands (neither `uv pip freeze` nor
`pip freeze` have such a flag) so that users can always write output
to a file without relying on their environment's piping behavior.
(Although this last alternative seems a little sad to me.)

It's also worth noting the [PEP-0508] doesn't seem to mention file
encoding at all. So I think from a "do the standards allow this"
perspective, this change is OK.

Finally, `pip` itself seems to work with UTF-16 encoded
`requirements.txt` files.

I think I personally overall lean towards supporting UTF-16 for
`requirements.txt` files. In part because I think it smoothes out the
UX a little bit, in part because there is no obvious specification
(that I'm aware of) that mandates that these files are UTF-8, and
finally in part because `pip` supports it too.

Fixes #1666, Fixes #2276

[PEP-0508]: https://peps.python.org/pep-0508/
2024-03-08 09:10:14 -05:00
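
A standard-library-only sketch of the BOM-sniffing transcoding step (simplified: it only inspects the BOM and assumes well-formed input):

```rust
/// Decode bytes as UTF-8, or as UTF-16 (LE/BE) when a byte-order mark is
/// present, e.g. a `requirements.txt` produced by PowerShell redirection.
fn read_requirements_text(bytes: &[u8]) -> Result<String, String> {
    let decode_utf16 = |data: &[u8], to_u16: fn([u8; 2]) -> u16| -> Result<String, String> {
        let units: Vec<u16> = data
            .chunks_exact(2)
            .map(|pair| to_u16([pair[0], pair[1]]))
            .collect();
        std::char::decode_utf16(units)
            .collect::<Result<String, _>>()
            .map_err(|err| err.to_string())
    };
    match bytes {
        [0xFF, 0xFE, rest @ ..] => decode_utf16(rest, u16::from_le_bytes),
        [0xFE, 0xFF, rest @ ..] => decode_utf16(rest, u16::from_be_bytes),
        _ => String::from_utf8(bytes.to_vec()).map_err(|err| err.to_string()),
    }
}
```
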
Charlie Marsh 996a859b29
Use reparse points to detect Windows installer shims (#2284)
## Summary

This PR enables use of the Windows Store Pythons even when `py` is not
installed. Specifically, we need to ensure that the `python.exe` and
`python3.exe` executables installed into the
`C:\Users\crmar\AppData\Local\Microsoft\WindowsApp` directory _are_ used
when they're not "App execution aliases" (which merely open the Windows
Store, to help you install Python).

When `py` is installed, this isn't strictly necessary, since the
"resolved" executables are discovered via `py`. These look like
`C:\Users\crmar\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.11_qbs5n2kfra8p0\python.exe`.

Closes https://github.com/astral-sh/uv/issues/2264.

## Test Plan

- Removed all Python installations from my Windows machine.
- Uninstalled `py`.
- Enabled "App execution aliases".
- Verified that for both `cargo run venv --python python.exe` and `cargo
run venv --python python3.exe`, `uv` exited with a failure that no
Python could be found.
- Installed Python 3.10 via the Windows Store.
- Verified that the above commands succeeded without error.
- Verified that `cargo run venv --python python3.10.exe` _also_
succeeded.
2024-03-07 15:47:32 -05:00
Bas Schoenmaeckers e7742070c1
feat: Add netrc authentication to uv-client (#2241)
## Summary

Add netrc support to the uv-client.

closes #1405 

## Test Plan

I've added a corresponding test case to validate the correct header.
Furthermore, I tested it against a real-world private repository.
2024-03-06 20:48:30 +00:00
konsti 97b4526825
Update pubgrub onto upstream (#2243)
No functional changes, just pulling in the upstream changes in
preparation of new changes.
2024-03-06 16:06:35 +00:00
jannisko 71626e8dec
Support remote `https://` requirements files (#1332) (#2081)
## Summary

Allow using http(s) URLs for constraints and requirements files handed
to the CLI, by handling paths starting with `http://` or `https://`
differently. This allows commands such as: `uv pip install -c
https://raw.githubusercontent.com/apache/airflow/constraints-2.8.1/constraints-3.8.txt
requests`.

closes #1332

## Test Plan

Testing install using a `constraints.txt` file hosted on github in the
airflow repository:

fbdc2eba8e/crates/uv/tests/pip_install.rs (L1440-L1484)

## Advice Needed

- filesystem/http dispatch is implemented at a relatively low level (at
`crates/uv-fs/src/lib.rs#read_to_string`). Should I change some naming
here so it is obvious that the function is able to dispatch?
- I kept the CLI argument for -c and -r as a PathBuf, even though now it
is technically either a path or a url. We could either keep this as is
for now, or implement a new enum for this case? The enum could then
handle dispatch to files/http.
- Using another abstraction layer like
https://docs.rs/object_store/latest/object_store/ for the
files/urls/[s3] could work as well, though I ran into a bug during
testing which I couldn't debug
2024-03-06 04:18:11 +00:00
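
A hedged sketch of the path-vs-URL dispatch described above, using `reqwest`'s blocking client for brevity (the real code path in `uv-fs` is async):

```rust
use std::fs;

/// Read a requirements or constraints source that may be a local path or an
/// `http(s)://` URL. Assumes the `reqwest` crate with the `blocking` feature.
fn read_requirements_source(source: &str) -> Result<String, Box<dyn std::error::Error>> {
    if source.starts_with("http://") || source.starts_with("https://") {
        Ok(reqwest::blocking::get(source)?.error_for_status()?.text()?)
    } else {
        Ok(fs::read_to_string(source)?)
    }
}
```
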
konsti 2a53e789b0
Add an option to bytecode compile during installation (#2086)
Add a `--compile` option to `pip install` and `pip sync`.

I chose to implement this as a separate pass over the entire venv. If we
wanted to compile during installation, we'd have to make sure that
writing is exclusive, to avoid concurrent processes writing broken
`.pyc` files. Additionally, this ensures that the entire site-packages
are bytecode compiled, even if there are packages that aren't from this
`uv` invocation. The disadvantage is that we do not update RECORD and
rely on this comment from [PEP 491](https://peps.python.org/pep-0491/):

> Uninstallers should be smart enough to remove .pyc even if it is not
mentioned in RECORD.

If this is a problem we can change it to run during installation and
write RECORD entries.

Internally, this is implemented as an async work-stealing subprocess
worker pool. The producer is a directory traversal over site-packages,
sending each `.py` file to a bounded async FIFO queue/channel. Each
worker has a long-running python process. It pops the queue to get a
single path (or exits if the channel is closed), then sends it to
stdin, waits until it's informed that the compilation is done through a
line on stdout, and repeats. This is fast: e.g., installing `jupyter
plotly` on Python 3.12 it processes 15876 files in 319ms with 32 threads
(vs. 3.8s with a single core). Each python process internally calls
`compileall.compile_file`, the same as pip.

Like pip, we ignore and silence all compilation errors
(https://github.com/astral-sh/uv/issues/1559). There is a 10s timeout to
handle the case when the workers get stuck. For the reviewers, please
check if I missed any spots where we could deadlock; this is the hardest
part of this PR.

I've added `uv-dev compile <dir>` and `uv-dev clear-compile <dir>`
commands, mainly for my own benchmarking. I don't want to expose them in
`uv`; they're almost certainly not the correct workflow, and we don't want
to support them.

Fixes #1788
Closes #1559
Closes #1928
2024-03-05 03:35:24 +00:00
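
A heavily simplified sketch of the worker-pool idea: plain threads and a shared channel instead of uv's async pool, and one `compileall` invocation per file instead of a long-running worker process:

```rust
use std::process::Command;
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Compile the given `.py` files with a pool of worker threads, each invoking
/// the interpreter's `compileall` module. Errors are ignored, matching pip's
/// behavior of silencing compilation failures.
fn compile_all(python: &str, files: Vec<String>, workers: usize) {
    let (tx, rx) = mpsc::channel::<String>();
    let rx = Arc::new(Mutex::new(rx));

    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let rx = Arc::clone(&rx);
            let python = python.to_string();
            thread::spawn(move || loop {
                // Work-stealing via the shared receiver: exit once the channel closes.
                let Ok(path) = rx.lock().unwrap().recv() else { break };
                let _ = Command::new(&python)
                    .args(["-m", "compileall", "-q", path.as_str()])
                    .status();
            })
        })
        .collect();

    for file in files {
        let _ = tx.send(file);
    }
    drop(tx); // Close the channel so workers exit.

    for handle in handles {
        let _ = handle.join();
    }
}
```
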
dependabot[bot] bc43f609b9
Bump http from 0.2.11 to 0.2.12 (#2179) 2024-03-04 22:22:45 +00:00
dependabot[bot] 9fcffae084
Bump anstream from 0.6.12 to 0.6.13 (#2177) 2024-03-04 16:16:10 -06:00
Charlie Marsh 6f54aacfea
Remove `camino` from `uv-virtualenv` (#2119)
## Summary

I think Camino is nice but it makes it much harder to work in
`uv-virtualenv`, since it's the _only_ crate that uses it. If we want to
use Camino, we should use it everywhere IMO.
2024-03-01 19:36:56 +00:00
Charlie Marsh 10175143d1
Add a `--python` flag to allow installation into arbitrary Python interpreters (#2000)
## Summary

This PR adds a `--python` flag that allows users to provide a specific
Python interpreter into which `uv` should install packages. This would
replace the `VIRTUAL_ENV=` workaround that folks have been using to
install into arbitrary, system environments, while _also_ actually being
correct for installing into non-virtual environments, where the bin and
site-packages paths can differ.

The approach taken here is to use `sysconfig.get_paths()` to get the
correct paths from the interpreter, and then use those for determining
the `bin` and `site-packages` directories, rather than constructing them
based on hard-coded expectations for each platform.

Closes https://github.com/astral-sh/uv/issues/1396.

Closes https://github.com/astral-sh/uv/issues/1779.

Closes https://github.com/astral-sh/uv/issues/1988.

## Test Plan

- Verified that, on my Windows machine, I was able to install `requests`
into a global environment with: `cargo run pip install requests --python
'C:\\Users\\crmarsh\\AppData\\Local\\Programs\\Python\\Python3.12\\python.exe`,
then `python` and `import requests`.
- Verified that, on macOS, I was able to install `requests` into a
global environment installed via Homebrew with: `cargo run pip install
requests --python $(which python3.8)`.
2024-02-28 02:10:29 +00:00
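
A sketch of querying an arbitrary interpreter for its install paths via `sysconfig.get_paths()`, assuming `serde_json` for parsing; only a couple of fields are kept for illustration:

```rust
use std::process::Command;

/// A couple of entries from `sysconfig.get_paths()`; unknown keys are ignored.
#[derive(serde::Deserialize)]
struct InterpreterPaths {
    /// Directory for console scripts (`bin` on Unix, `Scripts` on Windows).
    scripts: String,
    /// The `site-packages` directory for pure-Python packages.
    purelib: String,
}

fn query_paths(python: &str) -> Result<InterpreterPaths, Box<dyn std::error::Error>> {
    let output = Command::new(python)
        .args([
            "-c",
            "import json, sysconfig; print(json.dumps(sysconfig.get_paths()))",
        ])
        .output()?;
    Ok(serde_json::from_slice(&output.stdout)?)
}
```
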
dependabot[bot] 6678d545fb
Bump serde_json from 1.0.113 to 1.0.114 (#1996) 2024-02-26 23:12:54 +00:00
dependabot[bot] abbdad1856
Bump anstream from 0.6.11 to 0.6.12 (#1994) 2024-02-26 17:01:16 -06:00
dependabot[bot] ae525d4720
Bump serde from 1.0.196 to 1.0.197 (#1993) 2024-02-26 17:01:09 -06:00
dependabot[bot] 22fb373ab2
Bump target-lexicon from 0.12.13 to 0.12.14 (#1992) 2024-02-26 17:01:01 -06:00
Charlie Marsh 4fdf230c2f
Support recursive extras in direct `pyproject.toml` files (#1990)
## Summary

When a `pyproject.toml` is provided directly to `uv pip compile`, we
were failing to resolve recursive extras. The solution I settled on here
is to flatten them recursively when determining the requirements
upfront.

Closes https://github.com/astral-sh/uv/issues/1987.

## Test Plan

`cargo test`
2024-02-26 15:44:25 -05:00
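
A small, standard-library-only sketch of the recursive flattening, modeling extras as a map from an extra to the extras it re-includes:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Given `extras` mapping each extra to the extras it re-includes
/// (e.g. `all -> [docs, tests]`, `tests -> []`), return the transitive set of
/// extras activated by `requested`.
fn flatten_extras(
    extras: &BTreeMap<String, Vec<String>>,
    requested: &[String],
) -> BTreeSet<String> {
    let mut seen = BTreeSet::new();
    let mut stack: Vec<&String> = requested.iter().collect();
    while let Some(extra) = stack.pop() {
        if seen.insert(extra.clone()) {
            if let Some(children) = extras.get(extra) {
                stack.extend(children.iter());
            }
        }
    }
    seen
}
```
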
samypr100 df812a181e
feat: bump actions/{download,upload}-artifact (#1947)
## Summary

Closes #1943

Makes sure `build-binaries` and `publish-pypi` workflows are compatible
with `actions/{download,upload}-artifact@v4`. In nature, this PR is very
similar to the changes in https://github.com/astral-sh/ruff/pull/10105.
This PR also updates cargo-dist.

## Test Plan

I ran a small non-dry-run [smoke
test](https://github.com/samypr100/uv/actions/runs/8027864059) on my own
fork CI with only linux builds (for speed) and those jobs seem to work
at a glance.
2024-02-25 20:09:04 -05:00
dependabot[bot] 8ff6182815
Bump anyhow from 1.0.79 to 1.0.80 (#1941) 2024-02-23 14:01:40 -06:00
dependabot[bot] 7e0d9b2cea
Bump pyo3 from 0.20.2 to 0.20.3 (#1940) 2024-02-23 14:01:18 -06:00
dependabot[bot] fee79aea7d
Bump itertools from 0.10.5 to 0.12.1 (#1939) 2024-02-23 14:01:11 -06:00
dependabot[bot] 086e58e54a
Bump textwrap from 0.16.0 to 0.16.1 (#1938) 2024-02-23 14:01:05 -06:00
konsti 5a50a753bd
Update `cargo dist` (#1929)
Pull in the fixes in the last few releases.
2024-02-23 18:56:34 +00:00
Zanie Blue fe1847561c
Retain authentication when making range requests (#1902)
Needs https://github.com/prefix-dev/async_http_range_reader/pull/9
Closes https://github.com/astral-sh/uv/issues/1709
2024-02-23 15:21:10 +00:00
Charlie Marsh 7eaed07f6c
Move conflicting dependencies into PubGrub (#1796)
## Summary

This revives a PR from long ago
(https://github.com/astral-sh/uv/pull/383 and
https://github.com/zanieb/pubgrub/pull/24) that modifies how we deal
with dependencies that are declared multiple times within a single
package.

To quote from the originating PR:

> Uses an experimental pubgrub branch (#370) that allows us to handle
multiple version ranges for a single dependency to the solver which
results in better error messages because the derivation tree contains
all of the relevant versions. Previously, the version ranges were merged
(by us) in the resolver before handing them to pubgrub since only one
range could be provided per package. Since we don't merge the versions
anymore, we no longer give the solver an empty range for conflicting
requirements; instead the solver comes to that conclusion from the
provided versions. You can see the improved error message for direct
dependencies in [this
snapshot](https://github.com/astral-sh/puffin/pull/383/files#diff-a0437f2c20cde5e2f15199a3bf81a102b92580063268417847ec9c793a115bd0).

The main issue with that PR was around its handling of URL dependencies,
so this PR _also_ refactors how we handle those. Previously, we stored
URL dependencies on `PubGrubPackage`, but they were omitted from the
hash and equality implementations of `PubGrubPackage`. This led to some
really careful codepaths wherein we had to ensure that we always visited
URLs before non-URL packages, so that the URL-inclusive versions were
included in any hashmaps, etc. I considered preserving this approach,
but it would require us to rely on lots of internal details of PubGrub
(since we'd now be relying on PubGrub to merge those packages in the
"right" order).

So, instead, we now _always_ set the URL on a given package, whenever
that package was _given_ a URL upfront. I think this is easier to reason
about: if the user provided a URL for `flask`, then we should just
always add the URL for `flask`. If we see some other URL for `flask`, we
error, like before. If we see some unknown URL for `flask`, we error,
like before.

Closes https://github.com/astral-sh/uv/issues/1522.

Closes https://github.com/astral-sh/uv/issues/1821.

Closes https://github.com/astral-sh/uv/issues/1615.
2024-02-21 21:27:58 -05:00
Charlie Marsh 88a0c13865
Use async unzip for local source distributions (#1809)
## Summary

We currently maintain separate untar methods for sync and async, but we
only use the sync version when the user provides a local source
distribution. (Otherwise, we untar as we download the distribution.) In
my testing, this is actually slower anyway:

```
❯ python -m scripts.bench \
        --uv-path ./target/release/main \
        --uv-path ./target/release/uv \
        ./requirements.in --benchmark resolve-cold --min-runs 50
Benchmark 1: ./target/release/main (resolve-cold)
  Time (mean ± σ):     835.2 ms ± 107.4 ms    [User: 346.0 ms, System: 151.3 ms]
  Range (min … max):   639.2 ms … 1051.0 ms    50 runs

Benchmark 2: ./target/release/uv (resolve-cold)
  Time (mean ± σ):     750.7 ms ±  91.9 ms    [User: 345.7 ms, System: 149.4 ms]
  Range (min … max):   637.9 ms … 905.7 ms    50 runs

Summary
  './target/release/uv (resolve-cold)' ran
    1.11 ± 0.20 times faster than './target/release/main (resolve-cold)'
```
2024-02-21 14:11:37 +00:00
konsti 2928c6e574
Backport changes from publish crates (#1739)
Backport of changes for the published new versions of pep440_rs and
pep508_rs to make it easier to keep them in sync.
2024-02-20 19:33:27 +01:00
Alexander Gherm 4dfcf32e4c
Add shell completions generation (#1675)
## Summary

Adds a CLI command / flag (`generate-shell-completion <SHELL>` /
`--generate-shell-completion <SHELL>`) to generate the completion script
for the given shell. Implemented in exactly the same way as it is done
in ruff
(https://github.com/astral-sh/ruff/blob/main/crates/ruff/src/lib.rs#L197)

Closes https://github.com/astral-sh/uv/issues/1654

## Test Plan

I've only manually tested the generated script, for the bash shell
on Ubuntu 22.04.3
```bash
$ uv --generate-shell-completion bash > /usr/share/bash-completion/completions/uv
$ uv # <TAB>
-q                         -h                         --verbose                  --no-cache                 --version                  clean
-v                         -V                         --no-color                 --cache-dir                pip                        generate-shell-completion
-n                         --quiet                    --color                    --help                     venv                       help
$ uv pip # <TAB>
-q           -n           -V           --verbose    --color      --cache-dir  --version    sync         uninstall    help
-v           -h           --quiet      --no-color   --no-cache   --help       compile      install      freeze
```
2024-02-18 21:43:18 -06:00
Charlie Marsh facc60f3a8
Add graceful fallback for Artifactory indexes (#1574)
## Summary

There are more details in https://github.com/astral-sh/uv/issues/1370,
but it looks like Artifactory servers have incorrect behavior when it
comes to HTTP range requests, in that they return `Accept-Ranges:
bytes`, but then incorrectly return 200 requests when you actually ask
for a given range.

This PR ensures that we fallback gracefully in this case. It's built on
https://github.com/prefix-dev/async_http_range_reader/pull/5. Assuming
that gets merged upstream, we can then remove the Git dependency.

Closes https://github.com/astral-sh/uv/issues/1370.

## Test Plan

`cargo run pip install requests -i
https://killjoyuvbug.jfrog.io/artifactory/api/pypi/pypi/simple
--verbose`
2024-02-17 14:37:06 +00:00
Charlie Marsh 01ffc36520
Apply percent-decoding to file-based URLs (#1541)
## Summary

Closes https://github.com/astral-sh/uv/issues/1537.
2024-02-16 16:11:16 -05:00
Zanie Blue 9737b93b79
Use the system trust store for HTTPS requests (#1512)
Closes #1474 

Using the `rustls-tls-native-roots` feature

> `rustls-tls`: Enables TLS functionality provided by rustls. Equivalent
to rustls-tls-webpki-roots.
>
> `rustls-tls-webpki-roots`: Enables TLS functionality provided by
rustls, while using root certificates from the webpki-roots crate.
>
> `rustls-tls-native-roots`: Enables TLS functionality provided by
rustls, while using root certificates from the rustls-native-certs
crate.

Additional context:

- https://github.com/seanmonstar/reqwest/issues/1554
- https://github.com/encode/httpx/issues/302
- [Should I use the native certs or
webpki-roots?](https://github.com/rustls/rustls-native-certs#should-i-use-this-or-webpki-roots)

Prior discussion at https://github.com/astral-sh/uv/pull/609
2024-02-16 14:07:18 -05:00
Charlie Marsh 458b4f2dde
Remove Docker (#1323)
I will revisit this later.
2024-02-15 18:30:18 +00:00
Zanie Blue 2586f655bb
Rename to `uv` (#1302)
First, replace all usages in files in-place. I used my editor for this.
If someone wants to add a one-liner that'd be fun.

Then, update directory and file names:

```
# Run twice for nested directories
find . -type d -print0 | xargs -0 rename s/puffin/uv/g
find . -type d -print0 | xargs -0 rename s/puffin/uv/g

# Update files
find . -type f -print0 | xargs -0 rename s/puffin/uv/g
```

Then add all the files again

```
# Add all the files again
git add crates
git add python/uv

# This one needs a force-add
git add -f crates/uv-trampoline
```
2024-02-15 11:19:46 -06:00
Andrew Gallant 8102980192 puffin-resolver: make VersionMap construction lazy
That is, a `PrioritizedDistribution` for a specific version of a
package is not actually materialized in memory until a corresponding
`VersionMap::get` call is made for that version. Similarly, iteration
lazily materializes distributions as it moves through the map. It
specifically does not materialize everything first.

The main reason why this is effective is that an
`OwnedArchive<SimpleMetadata>` represents a zero-copy (other than
reading the source file) version of `SimpleMetadata` that is really just
a `Vec<u8>` internally. The problem with `VersionMap` construction
previously is that it had to eagerly materialize a `SimpleMetadata` in
memory before anything else, which defeats a large part of the purpose
of zero-copy deserialization. By making more of `VersionMap`
construction itself lazy, we permit doing some parts of resolution
without necessarily fully deserializing a `SimpleMetadata` into memory.
Indeed, with this commit, in the warm cached case, a `SimpleMetadata` is
itself never materialized fully in memory.

This does not completely realize the benefits of zero-copy
deserialization. For example, we are likely still building
lots of distributions in memory that we don't actually need in some
cases. Perhaps in cases where no resolution exists, or when one needs to
iterate over large portions of the total versions published for a
package.
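
A generic sketch of the lazy-materialization pattern described here
(illustrative types, not the actual `VersionMap`/`OwnedArchive` code):
each entry starts as raw archived bytes and is only decoded on the first
`get`.

```rust
use std::collections::BTreeMap;

/// A slot that is either still raw (cheap) or already materialized (expensive).
enum Slot<Raw, T> {
    Raw(Raw),
    Materialized(T),
}

struct LazyMap<K: Ord, Raw, T> {
    slots: BTreeMap<K, Slot<Raw, T>>,
}

impl<K: Ord, Raw, T> LazyMap<K, Raw, T> {
    /// Materialize the value for `key` on first access, then reuse it.
    fn get(&mut self, key: &K, decode: impl Fn(&Raw) -> T) -> Option<&T> {
        let slot = self.slots.get_mut(key)?;
        if let Slot::Raw(raw) = slot {
            let value = decode(raw);
            *slot = Slot::Materialized(value);
        }
        match slot {
            Slot::Materialized(value) => Some(value),
            Slot::Raw(_) => unreachable!(),
        }
    }
}
```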
2024-02-15 08:10:32 -05:00
Andrew Gallant bdb491baf6 deps: bump pubgrub
This brings in a [PR] that makes `Range::as_singleton` return a
borrow.

[PR]: https://github.com/zanieb/pubgrub/pull/23
2024-02-15 08:10:32 -05:00
Charlie Marsh 16bb80132f
Add an `--offline` mode (#1270)
## Summary

This PR adds an `--offline` flag to Puffin that disables network
requests (implemented as a Reqwest middleware on our registry client).
When `--offline` is provided, we also allow the HTTP cache to return
stale data.

Closes #942.
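
As a simplified illustration of the idea (not the actual Reqwest
middleware), an offline-aware client can short-circuit before any request
is made:

```rust
#[derive(Clone, Copy)]
enum Connectivity {
    Online,
    Offline,
}

struct RegistryClient {
    connectivity: Connectivity,
    // ... underlying HTTP client, cache handle, etc.
}

impl RegistryClient {
    fn fetch(&self, url: &str) -> Result<Vec<u8>, String> {
        match self.connectivity {
            // In offline mode, only cached (possibly stale) data may be served.
            Connectivity::Offline => Err(format!("network is disabled (--offline): {url}")),
            Connectivity::Online => {
                // ... perform the actual HTTP request here.
                Ok(Vec::new())
            }
        }
    }
}
```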
2024-02-13 03:35:23 +00:00
Charlie Marsh c75eef28b5
Upgrade to miette v6.0.0 (#1272) 2024-02-11 23:23:27 -05:00
Charlie Marsh 62416286e2
Remove `add` and `remove` commands (#1259)
## Summary

These add and remove dependencies from a `pyproject.toml` -- but they're
currently hidden, and don't match the rest of the workflow. We can
re-add them when the time is right.
2024-02-06 14:18:27 -05:00
Andrew Gallant d4b4c21133
initial implementation of zero-copy deserialization for SimpleMetadata (#1249)
(Please review this PR commit by commit.)

This PR closes an initial loop on zero-copy deserialization. That
is, it provides a way to get an `Archived<SimpleMetadata>` (spelled
`OwnedArchive<SimpleMetadata>` in the code) from a `CachedClient`. The
main benefit of zero-copy deserialization is that we can read bytes
from a file, cast those bytes to a structured representation without
cost, and then start using that type as any other Rust type. The
"catch" is that the structured representation is not the actual type
you started with, but the "archived" version of it.

In order to make all this work, we ended up needing to shave a rather
large yak: we had to re-implement HTTP cache semantics. Previously,
we were using the `http-cache-semantics` crate. While it does support
Serde, it doesn't support `rkyv`. Moreover, even simple support for
`rkyv` wouldn't be enough. What we actually want is for the HTTP cache
semantics to be implemented on the *archived* type so that we can
decide whether our cached response is stale or not without needing to
do a full deserialization into the unarchived type. This is why, in
this PR, you'll see `impl ArchivedCachePolicy { ... }` instead of
`impl CachePolicy { ... }`. (The `derive(rkyv::Archive)` macro
automatically introduces the `ArchivedCachePolicy` type into the
current namespace.)

Unfortunately, this PR does not fully realize the dream that is
zero-copy deserialization. Namely, while a `CachedClient` can now
provide an `OwnedArchive<SimpleMetadata>`, the rest of our code
doesn't really make use of it. Indeed, as soon as we go to build a
`VersionMap`, we eagerly convert our archived metadata into an owned
`SimpleMetadata` via deserialization (that *isn't* zero-copy). After
this change, a lot of the work now shifts to `rkyv` deserialization
and `VersionMap` construction. More precisely, the main thing we drop
here is `CachePolicy` deserialization (which is now truly zero-copy)
and the parsing of the MessagePack format for `SimpleMetadata`. But we
are still paying for deserialization. We're just paying for it in a
different place.

This PR does seem to bring a speed-up, but it is somewhat underwhelming.
My measurements have been pretty noisy, but I get a 1.1x speedup fairly
often:

```
$ hyperfine -w5 "puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null" "puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null"
Benchmark 1: puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
  Time (mean ± σ):     164.4 ms ±  18.8 ms    [User: 427.1 ms, System: 348.6 ms]
  Range (min … max):   131.1 ms … 190.5 ms    18 runs

Benchmark 2: puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
  Time (mean ± σ):     148.3 ms ±  10.2 ms    [User: 357.1 ms, System: 319.4 ms]
  Range (min … max):   136.8 ms … 184.4 ms    19 runs

Summary
  puffin-test pip compile --cache-dir ~/astral/tmp/cache-test ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null ran
    1.11 ± 0.15 times faster than puffin-main pip compile --cache-dir ~/astral/tmp/cache-main ~/astral/tmp/reqs/home-assistant-reduced.in -o /dev/null
```

One downside is that this does increase cache size (`rkyv`'s
serialization format is not as compact as MessagePack). On disk size
increases by about 1.8x for our `simple-v0` cache.

```
$ sort-filesize cache-main
4.0K    cache-main/CACHEDIR.TAG
4.0K    cache-main/.gitignore
8.0K    cache-main/interpreter-v0
8.7M    cache-main/wheels-v0
18M     cache-main/archive-v0
59M     cache-main/simple-v0
109M    cache-main/built-wheels-v0
193M    cache-main
193M    total

$ sort-filesize cache-test
4.0K    cache-test/CACHEDIR.TAG
4.0K    cache-test/.gitignore
8.0K    cache-test/interpreter-v0
8.7M    cache-test/wheels-v0
18M     cache-test/archive-v0
107M    cache-test/simple-v0
109M    cache-test/built-wheels-v0
242M    cache-test
242M    total
```

Also, while I initially intended to do a simplistic implementation of
HTTP cache semantics, I found that everything was somewhat
inter-connected. I could have written code that _specifically_ only worked
with the present behavior of PyPI, but then it would need to be special
cased and everything else would need to continue to use
`http-cache-semantics`. By implementing what we need based on what Puffin
actually is (which is still less than what `http-cache-semantics` does),
we can avoid special casing and use zero-copy deserialization for our
cache policy in _all_ cases.
2024-02-05 16:47:53 -05:00
Zanie Blue d090acf13d
Improve error messaging when a dependency is not found (#1241)
Previously, whenever we encountered a missing package we would throw an
error without information about why the package was requested. This
meant that if a transitive dependency required a missing package, the
user would have no idea why it was even selected. Here, we track
`NotFound` and `NoIndex` errors as `NoVersions` incompatibilities with
an attached reason. Improves our test coverage for `--no-index` without
`--find-links`.

The
[snapshots](https://github.com/astral-sh/puffin/pull/1241/files#diff-3eea1658f165476252f1f061d0aa9f915aabdceafac21611cdf45019447f60ec)
show a nice improvement.

I think this will also enable backtracking to another version if some
version of a transitive dependency has a missing dependency. I'll write a
scenario for that next.

Requires https://github.com/zanieb/pubgrub/pull/22
2024-02-05 08:43:05 -06:00
konsti f10f902570
Yield after channel send and move cpu tasks to thread (#1163)
## Summary

Previously, we were blocking operations that could run in parallel. We
would send requests through our main requests channel but not yield, so
the receiver could only start processing requests much later than
necessary. We solve this by switching to the async
`tokio::sync::mpsc::channel`, where `send` is an async function that
yields.

Due to the increased parallelism, cache deserialization and the
conversion from simple API responses to version maps became bottlenecks,
so I moved them to `spawn_blocking`. Together these result in a 30-60%
speedup for larger warm-cache resolutions. Small cases such as black
already resolve in 5.7 ms on my machine, so there's no speedup to be
gained; refresh and no-cache runs were too noisy to get a signal from.

Note for the future: revisit the bounded channel if we also want to
produce requests from `process_request` (this would be good for
prefetching), to avoid deadlocks.
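
A minimal, self-contained sketch of the two changes (an async, yielding
channel send plus moving CPU-bound work to `spawn_blocking`); the names
are illustrative, not the actual puffin code:

```rust
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Bounded async channel: `send` awaits (and yields) when the buffer is
    // full, so the receiver gets scheduled instead of starving.
    let (tx, mut rx) = mpsc::channel::<u32>(16);

    let producer = tokio::spawn(async move {
        for request in 0..100u32 {
            tx.send(request).await.expect("receiver dropped");
        }
    });

    let consumer = tokio::spawn(async move {
        while let Some(request) = rx.recv().await {
            // CPU-heavy work (e.g. cache deserialization) runs on the blocking
            // thread pool so it doesn't stall the async executor.
            let parsed = tokio::task::spawn_blocking(move || request * 2)
                .await
                .expect("blocking task panicked");
            let _ = parsed;
        }
    });

    let _ = tokio::join!(producer, consumer);
}
```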

## Details

We can look at the behavior change through the spans:

```
RUST_LOG=puffin=info TRACING_DURATIONS_FILE=target/traces/jupyter-warm-branch.ndjson cargo run --features tracing-durations-export --bin puffin-dev --profile profiling -- resolve jupyter 2> /dev/null
```

Below, you can see how, on main, we have discrete phases: all (cached)
simple API requests in parallel, then all (cached) metadata requests in
parallel, repeating until done. The solver is mostly waiting until it has
its version map from the simple API query to be able to choose a
version. The main thread is blocked by processing requests.

In the PR branch, the simple API requests succeed much earlier,
allowing the solver to advance and also to schedule more prefetching.
Because of that, `parse_cache` and `from_metadata` became bottlenecks, so
I moved them off the main thread (green color, and their spans can now
overlap because they can run on multiple threads in parallel). The main
thread isn't blocked on `process_request` anymore; instead, it has
frequent idle times. The spans are all much shorter, which indicates
that on main they could have finished much earlier, but a task didn't
yield so they weren't scheduled to finish (though I haven't dug deep
enough to understand the exact scheduling between the process request
stream and the solver here).

**main**


![jupyter-warm-main](https://github.com/astral-sh/puffin/assets/6826232/693c53cc-1090-41b7-b02a-a607fcd2cd99)

**PR**


![jupyter-warm-branch](https://github.com/astral-sh/puffin/assets/6826232/33435f34-b39b-4b0a-a9d7-4bfc22f55f05)

## Benchmarks

```
$ hyperfine --warmup 3 "target/profiling/main-dev resolve jupyter" "target/profiling/branch-dev resolve jupyter"
Benchmark 1: target/profiling/main-dev resolve jupyter
  Time (mean ± σ):      29.1 ms ±   0.7 ms    [User: 22.9 ms, System: 11.1 ms]
  Range (min … max):    27.7 ms …  32.2 ms    103 runs
 
Benchmark 2: target/profiling/branch-dev resolve jupyter
  Time (mean ± σ):      18.8 ms ±   1.1 ms    [User: 37.0 ms, System: 22.7 ms]
  Range (min … max):    16.5 ms …  21.9 ms    154 runs
 
Summary
  target/profiling/branch-dev resolve jupyter ran
    1.55 ± 0.10 times faster than target/profiling/main-dev resolve jupyter

$ hyperfine --warmup 3 "target/profiling/main-dev resolve meine_stadt_transparent" "target/profiling/branch-dev resolve meine_stadt_transparent"
Benchmark 1: target/profiling/main-dev resolve meine_stadt_transparent
  Time (mean ± σ):      37.8 ms ±   0.9 ms    [User: 30.7 ms, System: 14.1 ms]
  Range (min … max):    36.6 ms …  41.5 ms    79 runs
 
Benchmark 2: target/profiling/branch-dev resolve meine_stadt_transparent
  Time (mean ± σ):      24.7 ms ±   1.5 ms    [User: 47.0 ms, System: 39.3 ms]
  Range (min … max):    21.5 ms …  28.7 ms    113 runs
 
Summary
  target/profiling/branch-dev resolve meine_stadt_transparent ran
    1.53 ± 0.10 times faster than target/profiling/main-dev resolve meine_stadt_transparent

$ hyperfine --warmup 3 "target/profiling/main pip compile scripts/requirements/home-assistant.in" "target/profiling/branch pip compile scripts/requirements/home-assistant.in"
Benchmark 1: target/profiling/main pip compile scripts/requirements/home-assistant.in
  Time (mean ± σ):     229.0 ms ±   2.8 ms    [User: 197.3 ms, System: 63.7 ms]
  Range (min … max):   225.8 ms … 234.0 ms    13 runs
 
Benchmark 2: target/profiling/branch pip compile scripts/requirements/home-assistant.in
  Time (mean ± σ):      91.4 ms ±   5.3 ms    [User: 289.2 ms, System: 176.9 ms]
  Range (min … max):    81.0 ms … 104.7 ms    32 runs
 
Summary
  target/profiling/branch pip compile scripts/requirements/home-assistant.in ran
    2.50 ± 0.15 times faster than target/profiling/main pip compile scripts/requirements/home-assistant.in
```
2024-02-02 18:18:24 +01:00
Charlie Marsh b2f1bbaa63
Add a Ctrl+C handler to the confirm workflow (#1202)
Fixes an issue whereby exiting the confirmation prompt can lead to your
cursor disappearing: https://github.com/console-rs/dialoguer/issues/294.

See:
b839a2c5b7/rye/src/main.rs (L36-L48).
2024-01-31 02:08:27 +00:00
Charlie Marsh 3f5e7306a5
Remove `WaitMap` dependency (#1183)
## Summary

This is an attempt to https://github.com/astral-sh/puffin/pull/1163 by
removing the `WaitMap` and gaining more granular control over the values
that we hold over `await` boundaries.
2024-01-30 15:25:22 -05:00
Charlie Marsh aa3b79ec63
Prompt user for missing `-r` and `-e` flags in `pip install` (#1180)
## Summary

If the user runs a command like `pip install requirements.txt`, we now
prompt them to ask if they meant to include the `-r` flag:

![Screenshot 2024-01-29 at 8 38
29 PM](https://github.com/astral-sh/puffin/assets/1309177/82b9f7a2-2526-4144-b200-a5015e5b8a4b)

![Screenshot 2024-01-29 at 8 38
33 PM](https://github.com/astral-sh/puffin/assets/1309177/bd8ebb51-2537-4540-a0e0-718e66a1c69c)

The specific logic is: if the requirement ends in `.txt` or `.in`, and
the file exists locally, prompt the user for `-r`. If the requirement
contains a directory separator, and the directory exists locally, prompt
the user for `-e`. The sketch after this paragraph illustrates the rule.
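
As a rough standalone sketch of that heuristic (the actual implementation
also handles the prompting itself):

```rust
use std::path::Path;

/// A hint about a flag the user probably meant to pass.
enum Hint {
    RequirementsFile, // suggest `-r`
    EditablePath,     // suggest `-e`
}

fn suggest_flag(argument: &str) -> Option<Hint> {
    let path = Path::new(argument);
    // Looks like a requirements file that exists on disk: suggest `-r`.
    if matches!(path.extension().and_then(|ext| ext.to_str()), Some("txt" | "in"))
        && path.is_file()
    {
        return Some(Hint::RequirementsFile);
    }
    // Contains a directory separator and points at an existing directory: suggest `-e`.
    if argument.contains(std::path::MAIN_SEPARATOR) && path.is_dir() {
        return Some(Hint::EditablePath);
    }
    None
}
```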

Closes #1166.
2024-01-30 18:58:45 +00:00
konsti 614bb0cf52
Update async_http_range_reader to 0.5.0 (#1189)
Removes a git dep and removes itertools 0.11
2024-01-30 16:32:53 +00:00
konsti ab27913f68
Instrument the main function and add jupyter.in (#1186)
Instrument the main function as an anchor span for checking overhead,
and update tracing-durations-export to 0.2.0 for differentiating
blocking/non-blocking tasks.

Add a `jupyter.in` requirement since `pip install jupyter` is a common
operation. I tried `jupyterlab` too but there is no difference in
performance (1.00 ± 0.07).
2024-01-30 11:03:24 +00:00
Charlie Marsh 61a3060383
Run `cargo update` (#1178) 2024-01-29 21:01:37 -05:00
Charlie Marsh fa3c9afdc1
Deduplicate `pep440_rs` in dependency tree (#1177)
## Summary

Closes https://github.com/astral-sh/puffin/issues/1176.

## Test Plan

`cargo tree -p puffin -i pep440_rs` runs without error. Previously, it
errored due to multiple versions.
2024-01-29 16:11:42 -05:00
Charlie Marsh 4b9daf9604
Use tokio_tar instead of async_tar (#1170)
## Summary

`tokio_tar` is a fork of `async_tar` that uses Tokio instead of
`async-std`. Using it removes a significant dependency from our tree.

(There is an open PR
(https://github.com/dignifiedquire/async-tar/pull/41) in `async-tar` to
add Tokio support, but it's over a year old.)

See:
https://github.com/astral-sh/puffin/pull/1157#discussion_r1469190249.
2024-01-29 10:00:30 -05:00
Charlie Marsh d88ce76979
Stream unpacking of source distribution downloads (#1157)
This PR migrates our source distribution downloads to unzip as we
stream, similar to our approach for wheels.

In my testing, this showed a consistent speedup (e.g., 6% here for a few
representative source distributions):

```text
❯ python -m scripts.bench --puffin-path ./target/release/main --puffin-path ./target/release/puffin --benchmark install-cold requirements.in
Benchmark 1: ./target/release/main (install-cold)
  Time (mean ± σ):      1.503 s ±  0.039 s    [User: 1.479 s, System: 0.537 s]
  Range (min … max):    1.466 s …  1.605 s    10 runs

Benchmark 2: ./target/release/puffin (install-cold)
  Time (mean ± σ):      1.421 s ±  0.024 s    [User: 1.505 s, System: 0.593 s]
  Range (min … max):    1.381 s …  1.454 s    10 runs

Summary
  './target/release/puffin (install-cold)' ran
    1.06 ± 0.03 times faster than './target/release/main (install-cold)'
```
2024-01-28 20:09:24 -05:00
Andrew Gallant 5219d37250
add initial rkyv support (#1135)
This PR adds initial support for [rkyv] to puffin. In particular,
the main aim here is to make puffin-client's `SimpleMetadata` type
possible to deserialize from a `&[u8]` without doing any copies. This
PR **stops short of actually doing that zero-copy deserialization**.
Instead, this PR is about adding the necessary trait impls to a variety
of types, along with a smattering of small refactorings to make rkyv
possible to use.

For those unfamiliar, rkyv works via the interplay of three traits:
`Archive`, `Serialize` and `Deserialize`. The usual flow of things is
this:

* Make a type `T` implement `Archive`, `Serialize` and `Deserialize`.
  rkyv helpfully provides `derive` macros to make this pretty painless
  in most cases.
* The process of implementing `Archive` for `T` *usually* creates an
  entirely new distinct type within the same namespace. One can refer
  to this type without naming it explicitly via `Archived<T>` (where
  `Archived` is a clever type alias defined by rkyv).
* Serialization happens from `T` to (conceptually) a `Vec<u8>`. The
  serialization format is specifically designed to reflect the
  in-memory layout of `Archived<T>`. Notably, *not* `T`. But
  `Archived<T>`.
* One can then get an `Archived<T>` with no copying (albeit, we will
  likely need to incur some cost for validation) from the previously
  created `&[u8]`. This is quite literally
  [implemented as a pointer cast][rkyv-ptr-cast].
* The problem with an `Archived<T>` is that it isn't your `T`. It's
  something else. And while there is limited interoperability between
  a `T` and an `Archived<T>`, the main issue is that the surrounding
  code generally demands a `T` and not an `Archived<T>`. **This is at
  the heart of the tension for introducing zero-copy deserialization,
  and this is mostly an intrinsic problem to the technique and not an
  rkyv-specific issue.** For this reason, given an `Archived<T>`, one
  can get a `T` back via an explicit deserialization step. This step
  is like any other kind of deserialization, although generally faster
  since no real "parsing" is required. But it will allocate and create
  all necessary objects.
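
To make this flow concrete, here is a tiny self-contained example in the
style of rkyv 0.7 (a toy type, not `SimpleMetadata`; assumes rkyv's
`validation` feature for `check_archived_root`):

```rust
use rkyv::{Archive, Deserialize, Serialize};

#[derive(Archive, Serialize, Deserialize, Debug, PartialEq)]
#[archive(check_bytes)]
struct Metadata {
    name: String,
    versions: Vec<String>,
}

fn main() {
    let metadata = Metadata {
        name: "example".to_string(),
        versions: vec!["1.0.0".to_string(), "1.1.0".to_string()],
    };

    // Serialize into a byte buffer whose layout matches `ArchivedMetadata`.
    let bytes = rkyv::to_bytes::<_, 256>(&metadata).unwrap();

    // Zero-copy access: validate and view the bytes as `ArchivedMetadata`.
    let archived = rkyv::check_archived_root::<Metadata>(&bytes[..]).unwrap();
    assert_eq!(archived.name.as_str(), "example");

    // Explicit (non-zero-copy) deserialization back into the original type.
    let roundtrip: Metadata = archived.deserialize(&mut rkyv::Infallible).unwrap();
    assert_eq!(roundtrip, metadata);
}
```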

This PR largely proceeds by deriving the three aforementioned traits
for `SimpleMetadata`. And, of course, all of its type dependencies. But
we stop there for now.

The main issue with carrying this work forward so that rkyv is actually
used to deserialize a `SimpleMetadata` is figuring out how to deal
with `DataWithCachePolicy` inside of the cached client. Ideally, this
type would itself have rkyv support, but adding it is difficult. The
main difficulty lay in the fact that its `CachePolicy` type is opaque,
not easily constructible, and is internally the tip of the iceberg of
a rat's nest of types found in other crates such as `http`. While one
"dumb"-but-annoying approach would be to fork both of those crates
and add rkyv trait impls to all necessary types, it is my belief that
this is the wrong approach. What we'd *like* to do is not just use
rkyv to deserialize a `DataWithCachePolicy`, but we'd actually like to
get an `Archived<DataWithCachePolicy>` and make actual decisions using
the archived type directly. Doing that will require some work to make
`Archived<DataWithCachePolicy>` directly useful.

My suspicion is that, after doing the above, we may want to mush
forward with a similar approach for `SimpleMetadata`. That is, we want
`Archived<SimpleMetadata>` to be as useful as possible. But right
now, the structure of the code demands an eager conversion (and thus
deserialization) into a `SimpleMetadata` and then into a `VersionMap`.
Getting rid of that eagerness is, I think, the next step after dealing
with `DataWithCachePolicy` to unlock bigger wins here.

There are many commits in this PR, but most are tiny. I still encourage
review to happen commit-by-commit.

[rkyv]: https://rkyv.org/
[rkyv-ptr-cast]:
https://docs.rs/rkyv/latest/src/rkyv/util/mod.rs.html#63-68
2024-01-28 12:14:59 -05:00
Charlie Marsh d6795da0ea
Set permissions after streaming unzip (#1151)
## Summary

When we migrated to an "unzip while we stream" solution, we lost the
logic to set permissions on the extracted files, so executables in
wheels were no longer executable. It turns out this is a little tricky,
since the permissions metadata is in the central directory at the _end_
of the zip file, and the async ZIP reader explicitly stops iteration
once it hits the central directory. (Specifically, it goes 4 bytes into
the central directory, since it sees the 4-byte signature header and
then stops.)

So, to solve that, I've added a `CentralDirectoryReader` that continues
where that iterator left off. This required forking the async zip crate:
https://github.com/charliermarsh/rs-async-zip/pull/1. It took a lot of
fiddling but I'm quite confident in the code now, especially since the
async zip crate validates the signature kind on every read.

The central directory is typically quite small (even for the Zig wheel,
which is enormous, it's just around 1MB), so I don't expect this to have
a high cost.

Closes https://github.com/astral-sh/puffin/issues/1148.
2024-01-27 19:22:44 -05:00
Charlie Marsh abe1867a0d
Enable Windows wheel builds in CI (#1129)
Closes https://github.com/astral-sh/puffin/issues/990.
2024-01-27 01:12:25 +00:00
konsti 39021263dd
Windows launchers using posy trampolines (#1092)
## Background

In virtual environments, we want to install python programs as console
commands, e.g. `black .` over `python -m black .`. They may be called
[entrypoints](https://packaging.python.org/en/latest/specifications/entry-points/)
or scripts. For entrypoints, we're given a module name and function to
call in that module.

On Unix, we generate a minimal python script launcher. Text files are
runnable on unix by adding a shebang at their top, e.g.

```python
#!/usr/bin/env python
```

will make the operating system run the file with the current python
interpreter. A venv launcher for black in `/home/ferris/colorize/.venv`
(module name: `black`, function to call: `patched_main`) would look like
this:

```python
#!/home/ferris/colorize/.venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from black import patched_main
if __name__ == "__main__":
    sys.argv[0] = re.sub(r"(-script\.pyw|\.exe)?$", "", sys.argv[0])
    sys.exit(patched_main())
```

On Windows, this doesn't work; we can only rely on launching `.exe`
files.

## Summary

We use posy's Rust implementation of a trampoline, which is based on
distlib's C++ implementation. We pre-build a minimal exe and append the
launcher script as a stored zip archive behind it. The exe will look for
the venv Python interpreter next to it and use it to execute the
appended script.

The changes in this PR make the `black` entrypoint work:

```powershell
cargo run -- venv .venv
cargo run -q -- pip install black
.\.venv\Scripts\black --version
```

Integration with our existing tests will be done in follow-up PRs.

## Implementation and Details

I've vendored the posy trampoline crate. It is a formatted, renamed, and
slightly changed (for embedding) version of
https://github.com/njsmith/posy/pull/28.

The posy launchers are smaller than the distlib launchers, 16K vs 106K
for black. Currently only `x86_64-pc-windows-msvc` is supported. The
crate requires a nightly compiler for its no-std binary size tricks.

On Windows, an application can be launched with a console or without (to
create windows instead), which needs two different launchers. The GUI
launcher will subsequently use `pythonw.exe` while the console launcher
uses `python.exe`.
2024-01-26 13:54:11 +00:00
Charlie Marsh 50057cd5f2
Re-add Cargo's known hosts checking (#1118)
## Summary

This ensures that (like Cargo) we don't suffer from
https://github.com/advisories/GHSA-r5w3-xm58-jv6j, by way of checking
known hosts when fetching via `libgit2`.

The implementation is taken from Cargo itself, modified to remove all
configuration, since we don't yet support configuration for known hosts,
etc.

Closes #285.
2024-01-25 22:29:36 -05:00
Charlie Marsh 5ad2e60561
Use `same-file` to detect interpreter shims (#1099)
Our existing detection doesn't work on Windows, because we canonicalize
the interpreter path but not `info.sys_executable`, so the former
includes the UNC prefix, etc. This is cross-platform and gets at the
intent of the check.
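
For illustration, the `same-file` crate reduces the check to roughly this
(the paths shown are hypothetical):

```rust
use same_file::is_same_file;

fn main() -> std::io::Result<()> {
    // Resolves symlinks, shims, and UNC-prefixed paths down to the underlying
    // file identity, so two different spellings of the same interpreter
    // compare equal.
    let is_shim = is_same_file(
        r"C:\Users\example\.venv\Scripts\python.exe",
        r"\\?\C:\Users\example\.venv\Scripts\python.exe",
    )?;
    println!("same underlying file: {is_shim}");
    Ok(())
}
```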
2024-01-25 12:27:49 -05:00
Zanie Blue 272555e915
Switch to ref on `main` for PubGrub (#1094)
Just fixing the wrong merge order from
https://github.com/astral-sh/puffin/pull/1088
2024-01-25 14:50:12 +00:00
Charlie Marsh 904db967af
Use junctions instead of symlinks on Windows (#1087)
## Summary

When we unzip wheels in the cache, we write the directories out to an
`archive-v0` bucket, and then symlink into that bucket from the
`wheels-v0` and `built-wheels-v0` buckets.

On Windows, symlinks are not well supported. Specifically, they need to
be explicitly enabled by the user. So, instead of symlinks, we now use
junctions, which are well-supported on Windows, and allow you to
(effectively) symlink a directory to another directory. This PR
implements said junction support, which gets the core installer working
on Windows.

In the past, we also used symlinks to implement another primitive: we
wanted to be able to replace a directory "atomically" (I put
"atomically" in quotes because I don't know if it's actually a
guaranteed atomic operation), in case someone was trying to use the
directory while we were replacing it (as opposed to deleting the
directory, then moving it into place).

On Windows, it doesn't appear to be possible to atomically replace a
junction. So instead, I'm using a new design, whereby the cache always
returns canonicalized paths. We know these canonicalized paths are
unique and won't be replaced, so they're safe for writers to rely on. In
general, when we write new data to the cache, we now return the
canonicalized path. When we read from the cache, and try to identify
(e.g.) the set of wheels available to us, we canonicalize the links
immediately and consider them non-existent if that operation fails.
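
A minimal sketch of the "canonicalize on read, treat failure as missing"
rule described above:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Resolve a cache link (symlink or junction) to its canonical target.
/// If canonicalization fails, the entry is treated as if it doesn't exist.
fn resolve_cache_link(link: &Path) -> Option<PathBuf> {
    fs::canonicalize(link).ok()
}
```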

Closes #1085.

---------

Co-authored-by: konstin <konstin@mailbox.org>
2024-01-25 10:06:38 +01:00
Zanie Blue ed1ac640b9
Consolidate `UnusableDependencies` into a generic `Unavailable` incompatibility (#1088)
Requires https://github.com/zanieb/pubgrub/pull/20

In short, `UnusableDependencies` can be generalized into `Unavailable`,
which encompasses incompatibilities where a package range is unusable
for some inherent reason, as well as when its dependencies are
unusable. We can eventually use this to track more incompatibilities in
the solver. I made the reason string required because I can't see a case
where we should leave it out.

Additionally, this improves the display of conflicts in the root
requirements.
2024-01-24 22:10:44 -06:00
Charlie Marsh 0519375bd6
Remove some unused dependencies (#1077) 2024-01-24 11:58:21 -05:00
Charlie Marsh 63f3434b21
Use nanoid instead of uuid (#1074)
## Summary

Gives us equivalent randomness with ~half as many characters.
2024-01-24 05:05:14 +00:00
Andrew Gallant eebc2f340a
make some things guaranteed to be deterministic (#1065)
This PR replaces a few uses of hash maps/sets with btree maps/sets and
index maps/sets. This has the benefit of guaranteeing a deterministic
order of iteration.

I made these changes as part of looking into a flaky test.
Unfortunately, I'm not optimistic that anything here will actually fix
the flaky test, since I don't believe anything was actually dependent
on the order of iteration.
2024-01-23 20:30:33 -05:00
Charlie Marsh 6561617c56
Store source distribution builds under a unique manifest ID (#1051)
## Summary

This is a refactor of the source distribution cache that again aims to
make the cache purely additive. Instead of deleting all built wheels
when the cache gets invalidated (e.g., because the source distribution
changed on PyPI or something), we now treat each invalidation as its own
cache directory. The manifest inside of the source distribution
directory now becomes a pointer to the "latest" version of the source
distribution cache.

Here's a visual example:

![Screenshot 2024-01-22 at 5 35
41 PM](https://github.com/astral-sh/puffin/assets/1309177/ca103c83-e116-4956-b91c-8434fe62cffe)

With this change, we avoid deleting built distributions that might be
relied on elsewhere and maintain our invariant that the cache is purely
additive. The cost is that we now preserve stale wheels, but we should
add a garbage collection mechanism to deal with that.
2024-01-23 19:49:11 +00:00
Charlie Marsh 53b7e3cb4f
Update list of expected architectures for `cargo-dist` (#1021) 2024-01-19 22:32:01 +00:00
Charlie Marsh b3954f2449
Enable PowerPC builds (#1017)
Closes #1015.
2024-01-19 17:29:11 -05:00
Charlie Marsh 72924935f8
Upgrade cargo-dist (#1016) 2024-01-19 20:19:02 +00:00
Charlie Marsh 7b365195cb
Add support for ARM Linux builds in release (#1012)
Closes #992.
2024-01-19 15:13:07 -05:00
Charlie Marsh 980e1f6d79
Set explicit Docker permissions (#997) 2024-01-19 05:29:29 +00:00
Charlie Marsh 6af4eb7a45
Remove `[puffin]` prefix (#989)
And disable the plan job on CI now.
2024-01-19 01:59:06 +00:00
Charlie Marsh 3a1cd44fc6
Add Puffin Docker image (#985)
Missing piece for the release.

## Test Plan

Built the image locally:

```shell
❯ docker run 99956098e1f8f04e209dcfc4a0afcee67df1fe8a726c164884e67f035b1a0f42
Usage: puffin [OPTIONS] <COMMAND>

Commands:
  pip    Resolve and install Python packages
  venv   Create a virtual environment
  clean  Clear the cache
  help   Print this message or the help of the given subcommand(s)

Options:
  -q, --quiet                  Do not print any output
  -v, --verbose                Use verbose output
  -n, --no-cache               Avoid reading from or writing to the cache
      --cache-dir <CACHE_DIR>  Path to the cache directory [env: PUFFIN_CACHE_DIR=]
  -h, --help                   Print help
  -V, --version                Print version
```
2024-01-18 20:21:31 -05:00
Charlie Marsh 59d700ad2d
Run cargo-dist plan on PR (#971) 2024-01-18 16:16:11 -05:00
Charlie Marsh f9154e8297
Add release workflow (#961)
## Summary

This PR adds a release workflow powered by `cargo-dist`. It's similar to
the version that's PR'd in Ruff
(https://github.com/astral-sh/ruff/pull/9559), with the exception that
it doesn't include the Docker build or the "update dependents" step for
pre-commit.
2024-01-18 15:44:11 -05:00
Charlie Marsh 96a61fb351
Remove RFC2047 decoder (#967)
## Summary

- This was inherited from
d719988323/src/metadata.rs (LL78C2-L91C26)
- ...which introduced this code here:
9cd1d43f7c
- ...with the originating issue here:
https://github.com/PyO3/maturin/issues/612
- ...and the upstream issue here:
https://github.com/staktrace/mailparse/issues/50

It seems like the goal was to support Unicode in certain header fields,
but I don't think this is necessary for us. We only use
`get_first_value` for `Requires-Python`, which has to be ASCII, doesn't
it?

In my testing, it seems like the `charset` hack can also be removed. The
tests I copied over actually work without it, which makes me a bit
skeptical.

The main benefit here is that we get to remove a _big_ dependency
stack, including Chumsky and Stacker and psm which have limited
cross-platform support.
2024-01-18 15:09:45 -05:00
Charlie Marsh fb22680311
Remove legacy release files (#960)
I added these long ago, and they're going to make the release diff a lot
more confusing.
2024-01-18 05:07:55 +00:00
Charlie Marsh 231686e71b
Remove `incompatibilities` from index (#905)
This isn't really part of the "index"; it's part of the resolution.
2024-01-13 02:57:15 +00:00
Zanie Blue 93d3093a2a
Improve formatting of package ranges in error messages (#864)
Closes #810
Closes https://github.com/astral-sh/puffin/issues/812
Requires https://github.com/zanieb/pubgrub/pull/19 and
https://github.com/zanieb/pubgrub/pull/18

- Always pair package ranges with names e.g. `... of a matching a<1.0`
instead of `... of a matching <1.0`
- Split range segments onto multiple lines when not a singleton as
suggested in
[#850](https://github.com/astral-sh/puffin/pull/850#discussion_r1446419610)
- Improve formatting when ranges are split across multiple lines e.g. by
avoiding extra spaces and improving wording

Note that review will require expanding the hidden files, as there are
significant changes to the report formatter and snapshots.

Bear with me here, as these are definitely still not perfect.

The following changes build on top of this independently for further
improvements:
- #868 
- #867 
- #866 
- #871
2024-01-10 14:16:23 -06:00
bojanserafimov e67b7858e6
Use zlib-ng for faster decompression (#859) 2024-01-09 16:13:36 -05:00
Zanie Blue 2b0c2e294b
Fix formatting of negated singleton versions in error messages (#836)
Closes #805 
Requires https://github.com/zanieb/pubgrub/pull/17
2024-01-08 12:33:01 -06:00
Charlie Marsh aeefe65227
Fix `tracing-duration-export` compilation (#835)
## Summary

I'm unable to run `puffin-cli` on `main`, as the
`tracing-durations-export` dependency is marked as optional, but the
crate actually depends on it to compile. Further, without
`tracing-durations-export`, there are `Option` types that can't resolve
to a concrete type.

This PR fixes compilation with and without the feature.
2024-01-08 18:04:23 +00:00
Charlie Marsh 54838914be
Migrate back to `owo-colors` (#824)
In the past, I moved us to `owo-colors`
(https://github.com/astral-sh/puffin/pull/121); then, we moved back,
because we ran into issues with overriding the settings to force-disable
colors. But `anstream` solved those problems, so I'm moving us _back_ to
`owo-colors`, since it's what `anstream` recommends, and it's already
used by many of our dependencies (`miette`, `configparser`).

---------

Co-authored-by: konstin <konstin@mailbox.org>
2024-01-08 08:54:57 +00:00
Charlie Marsh ca2e3d7073
Remove outdated Cargo.toml comment (#813) 2024-01-06 02:50:52 +00:00
konsti 5820a9d937
Update dependencies (#794)
Pull in a bunch of updates so they get some testing before we announce
the project. textwrap 0.16 is blocked on miette updating, http 1.0 on
reqwest.
2024-01-05 11:40:12 -05:00
Zanie Blue 5e04a95c45
Disable line wrapping during scenario tests (#784)
Adds support for a `PUFFIN_NO_WRAP` environment variable which disables
line wrapping in `miette` output.

We set this variable in the scenario tests to improve the readability of
snapshots.

I contributed the ability to disable line wrapping upstream at
https://github.com/zkat/miette/pull/328
2024-01-04 19:07:16 +00:00
konsti 2db9135c51
Update pubgrub to 78b8add6942766e5fb070bbda1de570e93d6399f (#783)
Pull in the latest perf improvements
2024-01-04 15:55:35 +00:00
konsti 3f8dc9f5bb
Update pubgrub (#737)
Pull in https://github.com/pubgrub-rs/pubgrub/pull/170 and
https://github.com/pubgrub-rs/pubgrub/pull/171
2023-12-28 21:13:27 +00:00
Charlie Marsh 343880820b
Un-escape HTML entities when decoding (#723)
I don't have a good testing strategy here (I'm manually testing against
`devpi` via `packse`), but the HTML index uses (e.g.)
`data-requires-python="&gt;=3.8"`, so we need to decode.
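
As a rough sketch of the decoding step (using the `html-escape` crate
purely for illustration; the PR may decode differently):

```rust
fn main() {
    let raw = r#"data-requires-python="&gt;=3.8""#;
    // Decode HTML entities such as `&gt;` back into `>`.
    let decoded = html_escape::decode_html_entities(raw);
    assert_eq!(decoded, r#"data-requires-python=">=3.8""#);
}
```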
2023-12-24 16:35:45 -05:00
Charlie Marsh 5bce699ee1
Add support for HTML indexes (#719)
## Summary

This PR adds support for HTML index responses (as with
`--index-url=https://download.pytorch.org/whl`).

Closes https://github.com/astral-sh/puffin/issues/412.
2023-12-24 16:04:00 +00:00
konsti e60f0ec732
Update pubgrub (#713)
Easier than I expected: we simply never construct the pubgrub error
variants since we have our own main loop. The `unreachable!()`s can be
removed when the never type is stabilized.
2023-12-20 23:56:59 +01:00
konsti 71964ec7a8
Switch to msgpack in the cached client (#662)
This gives a 1.23x speedup on transformers-extras. We could change to
msgpack for the entire cache if we want. I only tried this format and
postcard so far, where postcard was much slower (around 1.6s).

I don't actually want to merge it like this; I wanted to figure out the
ballpark of improvement for switching away from JSON.
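
For reference, a minimal MessagePack round-trip with `rmp-serde`; the
cache entry type here is a stand-in, not the real one:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct CacheEntry {
    package: String,
    versions: Vec<String>,
}

fn main() {
    let entry = CacheEntry {
        package: "transformers".to_string(),
        versions: vec!["4.35.0".to_string()],
    };

    // MessagePack is a compact binary format, so this is smaller and faster to
    // parse than the equivalent JSON, at the cost of not being human-readable.
    let bytes = rmp_serde::to_vec(&entry).unwrap();
    let roundtrip: CacheEntry = rmp_serde::from_slice(&bytes).unwrap();
    assert_eq!(roundtrip, entry);
}
```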

```
hyperfine --warmup 3 --runs 10 "target/profiling/puffin pip-compile --cache-dir cache-msgpack scripts/requirements/transformers-extras.in" "target/profiling/branch pip-compile scripts/requirements/transformers-extras.in"
Benchmark 1: target/profiling/puffin pip-compile --cache-dir cache-msgpack scripts/requirements/transformers-extras.in
  Time (mean ± σ):     179.1 ms ±   4.8 ms    [User: 157.5 ms, System: 48.1 ms]
  Range (min … max):   174.9 ms … 188.1 ms    10 runs

Benchmark 2: target/profiling/branch pip-compile scripts/requirements/transformers-extras.in
  Time (mean ± σ):     221.1 ms ±   6.7 ms    [User: 208.1 ms, System: 46.5 ms]
  Range (min … max):   213.5 ms … 235.5 ms    10 runs

Summary
  target/profiling/puffin pip-compile --cache-dir cache-msgpack scripts/requirements/transformers-extras.in ran
    1.23 ± 0.05 times faster than target/profiling/branch pip-compile scripts/requirements/transformers-extras.in
```

Disadvantage: We can't manually look into the cache anymore to debug
things.

- [ ] Check more formats; I currently only tested JSON, msgpack, and
postcard, but there should be other formats, too
- [x] Switch over `CachedByTimestamp` serialization (for the interpreter
caching)
- [x] Switch over error handling and make sure puffin is still resilient
to cache failure
2023-12-16 21:01:35 +00:00
Zanie Blue 490fb55ac5
Use available versions to simplify unsat error reports (#547)
Uses https://github.com/pubgrub-rs/pubgrub/pull/156 to consolidate
version ranges in error reports using the actual available versions for
each package.

Alternative to https://github.com/zanieb/pubgrub/pull/8 which implements
this behavior as a method in the `Reporter` — here it's implemented in
our custom report formatter (#521) instead which requires no upstream
changes.

Requires https://github.com/zanieb/pubgrub/pull/11 to only retrieve the
versions for packages that will be used in the report.

This is a work in progress. Some things to do:
- ~We may want to allow lazy retrieval of the version maps from the
formatter~
- [x] We should probably create a separate error type for no solution
instead of mixing them with other resolve errors
- ~We can probably do something smarter than creating vectors to hold
the versions~
- [x] This degrades error messages when a single version is not
available, we'll need to special case that
- [x] It seems safer to coerce the error type in `resolve` instead of
`solve` if feasible
2023-12-12 23:25:16 +00:00
Charlie Marsh a24534b0ce
Use `rustc-hash` instead of `fxhash` crate (#594)
`fxhash` is the old, less maintained version of this crate
(`rustc-hash`). We use the latter in Ruff.
2023-12-08 20:27:49 +00:00
konsti 366c389385
Parse editable installs (#564)
Parse `-e` for editable installs in `requirements.txt`.

Unlike all the other requirements, editable installs don't have the name
of the package specified.
2023-12-06 18:21:15 +01:00
Zanie Blue 37ca2e2928
Bump pubgrub for latest upstream (#525)
https://github.com/pubgrub-rs/pubgrub/pull/157
2023-12-04 09:09:30 -06:00
Charlie Marsh ee2fca3a48
Add CACHEDIR and .gitignore tags to cache directories (#526)
## Summary

Even if this will typically be in the user's application folder (rather
than a local directory), it's still a good practice.

Closes https://github.com/astral-sh/puffin/issues/280.
2023-12-02 00:37:51 +00:00
Zanie Blue 2a8544df9e
Use a custom pubgrub report formatter (#521)
Uses https://github.com/zanieb/pubgrub/pull/10 to drastically simplify
our reporter implementation. This will allow us to make use of upstream
improvements to the reporter e.g.
https://github.com/zanieb/pubgrub/pull/8 without multiple duplicative
pull requests.
2023-12-01 13:36:12 -06:00
Zanie Blue efcc4f1409
Use upstream commit for reflink-copy requirement (#523)
https://github.com/cargo-bins/reflink-copy/pull/51 was merged
2023-12-01 10:58:24 +00:00
Zanie Blue 5f1f207628
Recursively merge existing package directories on installation (#516)
Previously, when installing a package we would delete the target
directory before copying (or linking) the contents of the package.
However, this means that we do not properly support namespace packages
which can share a target directory. Instead, the last package to be
installed would override existing packages. Since we install packages
in parallel, this could result in a race condition where the target
directory already exists, which is not allowed when using `clonefile`.
See example error in #515.
c7e63d2dce
provides a regression test for this — it fails on `main`.

Here, we implement a recursive merge when the target directory already
exists. Both packages will be installed into the same directory. We no
longer delete the target directory, which seems okay since we uninstall
packages before installing now.
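
A simplified sketch of that recursive merge (conflict and symlink
handling are discussed below; this sketch just overwrites conflicting
files):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Recursively copy `source` into `target`, merging with whatever already
/// exists instead of deleting the target directory first.
fn merge_dirs(source: &Path, target: &Path) -> io::Result<()> {
    fs::create_dir_all(target)?;
    for entry in fs::read_dir(source)? {
        let entry = entry?;
        let destination = target.join(entry.file_name());
        if entry.file_type()?.is_dir() {
            merge_dirs(&entry.path(), &destination)?;
        } else {
            // Overwrites conflicting files; the desired conflict behavior is
            // discussed below.
            fs::copy(entry.path(), &destination)?;
        }
    }
    Ok(())
}
```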

When files conflict, we will likely still throw an error. The correct
behavior to implement in this case is unclear: if we just take "first
write wins" or "last write wins", we could end up with some files from
one package and some from another, resulting in two broken packages. A
possible solution here is to lock the target directories while copying.
2023-11-30 10:14:51 -06:00
konsti d89fbeb642
Migrate interpreter query to custom caching (#508)
This removes the last usage of cacache by replacing it with a custom,
flat JSON cache keyed by the digest of the executable path.


![image](https://github.com/astral-sh/puffin/assets/6826232/8f777c4c-1f1b-4656-ba7b-002175270556)

A step towards #478. I've made `CachedByTimestamp<T>` generic over `T`
but intentionally not moved it to `puffin-cache` yet.
2023-11-28 17:14:59 +00:00
konsti 8855f44b5f
Move simple index queries to `CachedClient` (#504)
Replaces the usage of `http-cache-reqwest` for simple index queries with
our custom cached client, removing `http-cache-reqwest` altogether.

The new cache paths are `<cache>/simple-v0/<index>/<package_name>.json`.
I could not test with a non-PyPI index since I'm not aware of any other
JSON indices (jax and torch are both HTML indices).

In a future step, we can transform the response to be a
`HashMap<Version, {source_dists: Vec<(SourceDistFilename, File)>,
wheels: Vec<(WheelFilename, File)>}` (independent of Python version;
this cache is used by all environments together). This should speed up
cache deserialization a bit, since we no longer need to try source dist
and wheel filename parsing and drop incompatible dists, and it should
make building the `VersionMap` simpler. We can speed this up even
further by splitting into a version list and the info for each version.
I'm mentioning this because deserialization was a major bottleneck in
the Rust part of the old Python prototype.

Fixes #481
2023-11-28 00:11:03 +00:00
Charlie Marsh 9d35128840
Use Clippy lint table over Cargo config (#490)
Closes https://github.com/astral-sh/puffin/issues/482.
2023-11-22 15:10:27 +00:00
konsti 7c7daa8f83
Consistent Cargo.toml syntax (#483)
Remove the last Cargo.toml inconsistencies, see
1526b3458a (r1401083681).
Now all `[dependencies]` are workspace dependencies.
2023-11-22 08:34:08 +00:00
Zanie Blue e9b6fb90d6
Bump pubgrub to get range display changes (#444)
See https://github.com/zanieb/pubgrub/pull/5
2023-11-20 09:12:48 -06:00
Zanie Blue 221751487c
Use `UnusableDependencies` for URL dependency conflicts (#425)
Extends #424 with support for URL dependency incompatibilities.

Requires changes to `miette` to prevent URLs from being word wrapped;
accepted upstream in https://github.com/zkat/miette/pull/321
2023-11-17 08:28:12 -06:00
Zanie Blue 0d9d4f9fca
Add an `UnusableDependencies` incompatibility kind and use for conflicting versions (#424)
Addresses
https://github.com/astral-sh/puffin/issues/309#issuecomment-1792648969

Similar to #338 this throws an error when merging versions results in an
empty set. Instead of propagating that error, we capture it and return a
new dependency type of `Unusable`. Unusable dependencies are a new
incompatibility kind which includes an arbitrary "reason" string that we
present to the user. Adding a new incompatibility kind requires changes
to the vendored pubgrub crate.

We could use this same incompatibility kind for conflicting urls as in
#284 which should allow the solver to backtrack to another valid version
instead of failing (see #425).

Unlike #383 this does not require changes to PubGrub's package mapping
model. I think in the long run we'll want PubGrub to accept multiple
versions per package to solve this specific issue, but we're interested
in it being merged upstream first. This pull request is just using the
issue as a simple case to explore adding a new incompatibility type.

We may or may not be able to convince them to add this new
incompatibility type upstream. As discussed in
https://github.com/pubgrub-rs/pubgrub/issues/152, we may want a more
general incompatibility kind instead which can be used for arbitrary
problems. An upstream pull request has been opened for discussion at
https://github.com/pubgrub-rs/pubgrub/pull/153.

Related to:
- https://github.com/pubgrub-rs/pubgrub/issues/152
- #338 
- #383

---------

Co-authored-by: konsti <konstin@mailbox.org>
2023-11-16 20:02:06 +00:00
Zanie Blue 832058dbba
Switch from vendored PubGrub to a fork (#438)
A fork will let us stay up to date with the upstream while replaying our
work on top of it.

I expect a similar workflow to the RustPython-Parser fork we maintained,
except that I wrote an automation to create tags for each commit on the
fork (https://github.com/zanieb/pubgrub/pull/2) so we do not need to
manually tag and document each commit.

To update with the upstream:

- Rebase our fork's `main` branch on top of the latest changes in
upstream's `dev` branch
- Force push, overwriting our `main` branch history
- Change the commit hash here to the last commit on `main` in our fork

Since we automatically tag each commit on the fork, we should never lose
the commits that are dropped from `main` during rebase.
2023-11-16 13:49:19 -06:00
konsti e41ec12239
Option to resolve at a fixed timestamp with `pip-compile --exclude-newer YYYY-MM-DD` (#434)
This works by filtering out files with a more recent upload time, so if
the index you use does not provide upload times, the results might be
inaccurate. PyPI provides upload times for all files. That is, the field
is non-nullable in the warehouse schema, but the simple API PEP does not
know about this field.

If you have only PyPI dependencies, this means deterministic,
reproducible(!) resolution. We could try doing the same for Git repos,
but it doesn't seem worth the effort; I'd recommend pinning commits,
since Git histories are arbitrarily malleable, and if you care about
reproducibility you should not use Git dependencies but rather a custom
index.

Timestamps are given either as RFC 3339 timestamps such as
`2006-12-02T02:07:43Z` or as UTC dates such as `2006-12-02`. Dates are
interpreted as including that entire day, i.e. everything up to midnight
UTC at the end of that day. Accepting a date-only form is needed to make
this ergonomic, and midnight seems like a reasonable cutoff.
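
A sketch of the cutoff parsing and filtering with `chrono` (the `File`
fields here are illustrative):

```rust
use chrono::{DateTime, FixedOffset, NaiveDate, TimeZone, Utc};

/// Parse `--exclude-newer` as either an RFC 3339 timestamp or a UTC date.
/// A bare date is interpreted as the end of that day (the next midnight UTC).
fn parse_cutoff(input: &str) -> Option<DateTime<Utc>> {
    if let Ok(timestamp) = DateTime::<FixedOffset>::parse_from_rfc3339(input) {
        return Some(timestamp.with_timezone(&Utc));
    }
    let date = NaiveDate::parse_from_str(input, "%Y-%m-%d").ok()?;
    Some(Utc.from_utc_datetime(&date.succ_opt()?.and_hms_opt(0, 0, 0)?))
}

struct File {
    filename: String,
    upload_time: Option<DateTime<Utc>>,
}

/// Keep only files uploaded strictly before the cutoff; files without an
/// upload time are dropped here for simplicity, since an index that omits
/// upload times can't be filtered reliably.
fn exclude_newer(files: Vec<File>, cutoff: DateTime<Utc>) -> Vec<File> {
    files
        .into_iter()
        .filter(|file| file.upload_time.is_some_and(|uploaded| uploaded < cutoff))
        .collect()
}
```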

In action for `pandas`:

```console
$ target/debug/puffin pip-compile --exclude-newer 2023-11-16 target/pandas.in
Resolved 6 packages in 679ms
# This file was autogenerated by Puffin v0.0.1 via the following command:
#    target/debug/puffin pip-compile --exclude-newer 2023-11-16 target/pandas.in
numpy==1.26.2
    # via pandas
pandas==2.1.3
python-dateutil==2.8.2
    # via pandas
pytz==2023.3.post1
    # via pandas
six==1.16.0
    # via python-dateutil
tzdata==2023.3
    # via pandas
$ target/debug/puffin pip-compile --exclude-newer 2022-11-16 target/pandas.in
Resolved 5 packages in 655ms
# This file was autogenerated by Puffin v0.0.1 via the following command:
#    target/debug/puffin pip-compile --exclude-newer 2022-11-16 target/pandas.in
numpy==1.23.4
    # via pandas
pandas==1.5.1
python-dateutil==2.8.2
    # via pandas
pytz==2022.6
    # via pandas
six==1.16.0
    # via python-dateutil
$ target/debug/puffin pip-compile --exclude-newer 2021-11-16 target/pandas.in
Resolved 5 packages in 594ms
# This file was autogenerated by Puffin v0.0.1 via the following command:
#    target/debug/puffin pip-compile --exclude-newer 2021-11-16 target/pandas.in
numpy==1.21.4
    # via pandas
pandas==1.3.4
python-dateutil==2.8.2
    # via pandas
pytz==2021.3
    # via pandas
six==1.16.0
    # via python-dateutil
```
2023-11-16 19:46:17 +00:00
Charlie Marsh 0af2f7e39f
Use `anstream` to avoid writing colorized output (#415)
A more robust solution for avoiding colorized output: ensure we write
to `stdout` and `stderr` via the
[`anstream`](https://docs.rs/anstream/latest/anstream/) crate.

Closes https://github.com/astral-sh/puffin/issues/393.
2023-11-13 20:00:12 +00:00
konsti 5cef40d87a
Add proper caching for pypi metadata fetching kinds (#368)
I intend this to become the main form of caching for puffin: you can
make HTTP requests, you transform the data to what you really need, you
have control over the cache key, and the cache is always JSON (or
anything else much faster that we want to replace it with, as long as
it's serde!)
2023-11-10 11:03:40 +00:00
Charlie Marsh 2c114592bd
Only store small wheels in-memory (#348)
Closes https://github.com/astral-sh/puffin/issues/246.
2023-11-07 00:50:00 +00:00
konsti 9b077f3d0f
`cargo upgrade --incompatible` (#330)
Ran `cargo upgrade --incompatible`; it seems there are no changes required.

From cacache 0.12.0:
> BREAKING CHANGE: some signatures for copy have changed, and copy no
longer automatically reflinks

`which` 5.0.0 seems to have only error message changes.
2023-11-06 14:14:47 +00:00
konsti b2439b24a1
Fetch wheel metadata by async range requests on the remote wheel (#301)
Use range requests and async zip to extract the METADATA file from a
remote wheel.

We currently only cache when the remote declares the
resource as immutable; see
https://github.com/06chaynes/http-cache/issues/57 and
https://github.com/baszalmstra/async_http_range_reader/pull/1. The
cache is stored as JSON with the description omitted, which improves
cache deserialization performance.
2023-11-06 15:06:49 +01:00
konsti b79a15b458
Update pyproject-toml to 0.8.0 (#329) 2023-11-06 13:16:36 +00:00
Charlie Marsh 62c474d880
Add support for Git dependencies (#283)
## Summary

This PR adds support for Git dependencies, like:

```
flask @ git+https://github.com/pallets/flask.git
```

Right now, they're only supported in the resolver (and not the
installer), since the installer doesn't yet support source distributions
at all.

The general approach here is based on Cargo's Git implementation.
Specifically, I adapted Cargo's
[`git`](23eb492cf9/src/cargo/sources/git/mod.rs)
module to perform the cloning, which is based on `libgit2`.

As compared to Cargo's implementation, I made the following changes:

- Removed any unnecessary code.
- Fixed any Clippy errors for our stricter ruleset.
- Removed the dependency on `curl`, in favor of `reqwest` which we use
elsewhere.
- Removed the ability to use `gix`. Cargo allows the use of `gix` as an
experimental flag, but it only supports a small subset of the
operations. When Cargo fully adopts `gix`, we should plan to do the
same.
- Removed Cargo's host key checking. We need to re-add this! I'll do it
shortly.
- Removed Cargo's progress bars. We should re-add this too, but we use
`indicatif` and Cargo had their own thing.

There are a few follow-ups to consider:

- Adding support in the installer.
- When we lock, we should write out the Git URL that includes the exact
SHA. This lets us cache in perpetuity and avoids dependencies changing
without re-locking.
- When we resolve, we should _always_ try to refresh Git dependencies.
(Right now, we skip if the wheel was already built.)

I'll work on the latter two in follow-up PRs.

Closes #202.
2023-11-02 15:14:55 +00:00
konsti 4adaa9a700
Wheel filename distribution package name (#278)
The normalized name abstractions were not used consistently; this PR
uses them where they were previously missing:
* `WheelFilename::distribution`
* `Requirement::name`
* `Requirement::extras`
* `Metadata21::name`
* `Metadata21::provides_dist`

With `puffin-package` depending on `pep508_rs`, this would be a cyclical
crate dependency, so `puffin-normalize` gets split out from
`puffin-package`.

`DistInfoName` has the same task and semantics as `PackageName`, so it's
merged into the latter.

`PackageName` and `ExtraName` documentation is moved onto the types, and
their constructors are called `new` instead of `normalize`. We now use
these constructors rarely enough that the implicit allocation by
`to_string()` shouldn't matter anymore, while more of the actual cloning
becomes visible.
2023-11-02 11:15:27 +00:00
Charlie Marsh 2652caa3e3
Add support for URL dependencies (#251)
## Summary

This PR adds support for resolving and installing dependencies via
direct URLs, like:

```
werkzeug @ 960bb4017c4aed12b5ed8b78e0153e/Werkzeug-2.0.0-py3-none-any.whl
```

These are fairly common (e.g., with `torch`), but you most often see
them as Git dependencies.

Broadly, structs like `RemoteDistribution` and friends are now enums
that can represent either registry-based dependencies or URL-based
dependencies:

```rust
/// A built distribution (wheel) that exists as a remote file (e.g., on `PyPI`).
#[derive(Debug, Clone)]
#[allow(clippy::large_enum_variant)]
pub enum RemoteDistribution {
    /// The distribution exists in a registry, like `PyPI`.
    Registry(PackageName, Version, File),
    /// The distribution exists at an arbitrary URL.
    Url(PackageName, Url),
}
```

In the resolver, we now allow packages to take on an extra, optional
`Url` field:

```rust
#[derive(Debug, Clone, Eq, Derivative)]
#[derivative(PartialEq, Hash)]
pub enum PubGrubPackage {
    Root,
    Package(
        PackageName,
        Option<DistInfoName>,
        #[derivative(PartialEq = "ignore")]
        #[derivative(PartialOrd = "ignore")]
        #[derivative(Hash = "ignore")]
        Option<Url>,
    ),
}
```

However, for the purpose of version satisfaction, we ignore the URL.
This allows for the URL dependency to satisfy the transitive request in
cases like:

```
flask==3.0.0
werkzeug @ 254c3e9b5f5941e900b71206e6313b/werkzeug-3.0.1-py3-none-any.whl
```

There are a couple limitations in the current approach:

- The caching for remote URLs is done separately in the resolver vs. the
installer. I decided not to sweat this too much... We need to figure out
caching holistically.
- We don't support any sort of time-based cache for remote URLs -- they
just exist forever. This will be a problem for URL dependencies, where
we need some way to evict and refresh them. But I've deferred it for
now.
- I think I need to redo how this is modeled in the resolver, because
right now, we don't detect a variety of invalid cases, e.g., providing
two different URLs for a dependency, asking for a URL dependency and a
_different version_ of the same dependency in the list of first-party
dependencies, etc.
- (We don't yet support VCS dependencies.)
2023-11-01 09:21:44 -04:00
Charlie Marsh 3312ce30f5
Upgrade crates and remove unused dependencies (#256) 2023-10-31 13:16:58 -04:00
konsti 29bd0a4ed8
Fix musl compilation (#234)
musl (which we already use in Ruff) allows statically linked binaries on
Linux. This PR switches to rustls, and vendors and fixes the glibc
detection. Using static musl builds makes it easier to avoid glibc
errors in Docker, and we'll need it later for Alpine users anyway.

An alternative is using vendored OpenSSL.
2023-10-30 18:10:17 +01:00
konsti 5ad58474ca
Add script to check the top 8k pypi packages (#198)
To check the top 1k (current state):

```bash
scripts/resolve/get_pypi_top_8k.sh
cargo run --bin puffin-dev -- resolve-many scripts/resolve/pypi_top_8k_flat.txt --limit 1000
```

Results:
```
Errors: pywin32, geoip2, maxminddb, pypika, dirac
Success: 995, Error: 5
```
pywin32 has no solution for the build environment, 3 have no
`[build-system]` entry in pyproject.toml, `dirac` is missing cmake
2023-10-26 12:03:59 +00:00
Charlie Marsh 49a27ff33c
Add support for parameterized link modes (#164)
Allows the user to select between clone, hardlink, and copy semantics
for installs. (The pnpm documentation has a decent description of what
these mean: https://pnpm.io/npmrc#package-import-method.)
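
As a sketch of the three modes (the clone branch is shown via a
hypothetical `reflink_or_copy` helper, since the exact reflink API used
is an implementation detail):

```rust
use std::fs;
use std::io;
use std::path::Path;

#[derive(Clone, Copy)]
enum LinkMode {
    Clone,    // copy-on-write reflink/clonefile where the filesystem supports it
    Hardlink, // share the underlying inode
    Copy,     // plain byte-for-byte copy
}

fn install_file(mode: LinkMode, source: &Path, target: &Path) -> io::Result<()> {
    match mode {
        // Hypothetical helper: a real implementation would call a
        // reflink/clonefile API, falling back to a copy when unsupported.
        LinkMode::Clone => reflink_or_copy(source, target),
        LinkMode::Hardlink => fs::hard_link(source, target),
        LinkMode::Copy => fs::copy(source, target).map(|_| ()),
    }
}

// Stand-in for a platform-specific copy-on-write clone; here it just copies.
fn reflink_or_copy(source: &Path, target: &Path) -> io::Result<()> {
    fs::copy(source, target).map(|_| ())
}
```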

Closes #159.
2023-10-22 04:35:50 +00:00
Charlie Marsh 4645f79237
Use `FxHash` (#151) 2023-10-20 05:26:06 +00:00
Charlie Marsh d5105a76c5
Improve and test diagnostics for requirements-reading CLI commands (#143)
Also removes `owo_colors` because it was really painful to get it to
avoid printing colors during tests.
2023-10-19 18:13:40 -04:00
Charlie Marsh 7bc42ca2ce
Use `owo_colors` instead of `colored` (#121)
This is what `miette` uses, so it seems better to avoid two coloring crates.
2023-10-18 18:57:07 +00:00
Charlie Marsh 1fc03780f9
Use `miette` for `puffin add` diagnostics (#119)
Experiment in using `miette` for better user-facing diagnostics in the
CLI crate:

<img width="710" alt="Screen Shot 2023-10-18 at 2 11 54 PM"
src="https://github.com/astral-sh/puffin/assets/1309177/30299da0-da65-4972-944f-cb8cc5f72a77">

For now, only the `add` command has been migrated, and all the library
crates continue to use `anyhow`.
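
For reference, a minimal sketch of the `miette` pattern involved, assuming the `miette` and `thiserror` crates; the error type, diagnostic code, and help text below are made up for illustration rather than taken from the `add` command:

```rust
// Running this prints miette's fancy, user-facing diagnostic (with the
// error code and help text) instead of a bare anyhow-style message.
use miette::{Diagnostic, Result};
use thiserror::Error;

#[derive(Debug, Error, Diagnostic)]
#[error("failed to find a `pyproject.toml` in the current directory")]
#[diagnostic(
    code(puffin::add::missing_pyproject),
    help("run the command from a project root, or create a `pyproject.toml` first")
)]
struct MissingPyprojectToml;

fn add_dependency(_requirement: &str) -> Result<()> {
    // In a real command, this would walk up from the working directory.
    Err(MissingPyprojectToml.into())
}

fn main() -> Result<()> {
    add_dependency("flask==3.0.0")
}
```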
2023-10-18 14:24:09 -04:00
Charlie Marsh 4c87a1d42c
Add a `puffin add` command (#117)
This needs far better error handling and user-facing feedback, but it
does the basic operation (and includes discovery of the `pyproject.toml`
file, etc.).
2023-10-18 00:51:20 -04:00
konsti fa2fd14587
Add basic sdist builder (#104)
This adds a basic sdist builder that has been tested with two source
distributions, one with a PEP 517 backend and one with setup.py.

It uses pip for requirements installation for now, lacks testing in all
directions, lacks checks for recursive requirements, can't take
already-resolved versions as input, doesn't support
`prepare_metadata_for_build_wheel` (which would let resolution continue
without doing the actual, possibly native, build), its error messages
are mediocre, etc.
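
To sketch the PEP 517 interaction described above (illustrative only; the function, the embedded script, and the error handling are not the crate's actual code), driving the backend boils down to shelling out to the build environment's interpreter:

```rust
// Rough sketch of calling a PEP 517 hook from Rust via the interpreter.
use std::path::Path;
use std::process::Command;

/// Ask the `setuptools.build_meta` backend to build a wheel from an
/// already-extracted sdist. `wheel_dir` should be an absolute path, since
/// the hook runs with the sdist directory as its working directory.
fn call_build_wheel(python: &Path, sdist_dir: &Path, wheel_dir: &Path) -> std::io::Result<()> {
    let script = r#"
import sys
from setuptools import build_meta
# PEP 517: build_wheel(wheel_directory) returns the wheel's basename.
print(build_meta.build_wheel(sys.argv[1]))
"#;
    let status = Command::new(python)
        .args(["-c", script])
        .arg(wheel_dir)
        .current_dir(sdist_dir)
        .status()?;
    assert!(status.success(), "build backend failed");
    Ok(())
}
```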

```console
$ RUST_LOG=puffin_build=debug puffin-build --wheels wheels downloads/tqdm-4.66.1.tar.gz
2023-10-16T12:28:35.503182Z DEBUG build_sdist{path="downloads/tqdm-4.66.1.tar.gz" base_python="/usr/bin/python3"}: puffin_build: Building downloads/tqdm-4.66.1.tar.gz
2023-10-16T12:28:35.521780Z  INFO build_sdist{path="downloads/tqdm-4.66.1.tar.gz" base_python="/usr/bin/python3"}:extract_archive: puffin_build: close time.busy=18.4ms time.idle=16.7µs
2023-10-16T12:28:35.845096Z DEBUG build_sdist{path="downloads/tqdm-4.66.1.tar.gz" base_python="/usr/bin/python3"}:resolve_and_install: puffin_build: Calling pip to install build dependencies
2023-10-16T12:28:37.668660Z  INFO build_sdist{path="downloads/tqdm-4.66.1.tar.gz" base_python="/usr/bin/python3"}:resolve_and_install: puffin_build: close time.busy=1.82s time.idle=13.2µs
2023-10-16T12:28:37.668744Z DEBUG build_sdist{path="downloads/tqdm-4.66.1.tar.gz" base_python="/usr/bin/python3"}: puffin_build: Calling `setuptools.build_meta.get_requires_for_build_wheel()`
2023-10-16T12:28:38.159205Z  INFO build_sdist{path="downloads/tqdm-4.66.1.tar.gz" base_python="/usr/bin/python3"}:run_python_script{python_interpreter="/tmp/.tmpm4cTra/venv/bin/python"}: puffin_build: close time.busy=490ms time.idle=13.0µs
2023-10-16T12:28:38.159304Z DEBUG build_sdist{path="downloads/tqdm-4.66.1.tar.gz" base_python="/usr/bin/python3"}: puffin_build: Calling `setuptools.build_meta.build_wheel()`
2023-10-16T12:28:38.501732Z  INFO build_sdist{path="downloads/tqdm-4.66.1.tar.gz" base_python="/usr/bin/python3"}:run_python_script{python_interpreter="/tmp/.tmpm4cTra/venv/bin/python"}: puffin_build: close time.busy=342ms time.idle=15.2µs
2023-10-16T12:28:38.522700Z  INFO build_sdist{path="downloads/tqdm-4.66.1.tar.gz" base_python="/usr/bin/python3"}: puffin_build: close time.busy=3.02s time.idle=16.2µs
Wheel built to /home/konsti/projects/puffin/crates/puffin-build/wheels/tqdm-4.66.1-py3-none-any.whl
2023-10-16T12:28:38.522772Z DEBUG puffin_build: Took 3020ms
$ puffin-build --wheels wheels downloads/geoextract-0.3.1.tar.gz
2023-10-16T12:28:40.884622Z DEBUG build_sdist{path="downloads/geoextract-0.3.1.tar.gz" base_python="/usr/bin/python3"}: puffin_build: Building downloads/geoextract-0.3.1.tar.gz
2023-10-16T12:28:40.887743Z  INFO build_sdist{path="downloads/geoextract-0.3.1.tar.gz" base_python="/usr/bin/python3"}:extract_archive: puffin_build: close time.busy=2.97ms time.idle=12.6µs
2023-10-16T12:28:41.469738Z  INFO build_sdist{path="downloads/geoextract-0.3.1.tar.gz" base_python="/usr/bin/python3"}: puffin_build: close time.busy=585ms time.idle=15.3µs
Wheel built to /home/konsti/projects/puffin/crates/puffin-build/wheels/geoextract-0.3.1-py3-none-any.whl
2023-10-16T12:28:41.469814Z DEBUG puffin_build: Took 585ms
```
2023-10-16 12:43:31 +00:00
Charlie Marsh 471a1d657d
Migrate resolver proof-of-concept to PubGrub (#97)
## Summary

This PR enables the proof-of-concept resolver to backtrack by way of
using the `pubgrub-rs` crate.

Rather than using PubGrub as a _framework_ (implementing the
`DependencyProvider` trait, letting PubGrub call us), I've instead
copied over PubGrub's primary solver hook (which is only ~100 lines or
so) and modified it for our purposes (e.g., made it async).

There's a lot to improve here, but it's a start that will let us
understand PubGrub's appropriateness for this problem space. A few
observations:

- In simple cases, the resolver is slower than our current (naive)
resolver. I think it's just that the pipelining isn't as efficient as in
the naive case, where we can just stream package and version fetches
concurrently without any bottlenecks.
- A lot of the code here relates to bridging PubGrub with our own
abstractions -- so we need a `PubGrubPackage`, a `PubGrubVersion`, etc.
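
To illustrate the kind of bridging involved (the types and fields below are simplified stand-ins, not the PR's definitions), the wrappers mostly exist to carry the trait implementations (equality, hashing, ordering, display) that PubGrub's solver expects of its package and version types:

```rust
// Illustrative newtypes; the real code wraps the project's own package-name
// and PEP 440 version types, and also implements PubGrub's version trait.
use std::fmt;

/// Wrapper so the resolver's package identity satisfies the bounds PubGrub
/// places on packages (Clone + Eq + Hash + Debug + Display).
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct PubGrubPackage(String);

/// Wrapper around a (simplified) version so ordering and display can be
/// implemented on our side of the boundary.
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
struct PubGrubVersion(u64, u64, u64);

impl fmt::Display for PubGrubPackage {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.0)
    }
}

impl fmt::Display for PubGrubVersion {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}.{}.{}", self.0, self.1, self.2)
    }
}

fn main() {
    let package = PubGrubPackage("werkzeug".to_string());
    let version = PubGrubVersion(3, 0, 1);
    println!("{package}=={version}");
}
```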
2023-10-15 22:05:44 -04:00
konstin a8d020f53c Add `profiling` profile for profilers 2023-10-13 11:51:07 +02:00
Charlie Marsh 496cb7b2ef
Migrate to `requirements_txt.rs` (#90)
Remove the parser I wrote in favor of Konsti's, which is much more
complete. The only change vs. the version in `poc-monotrail` is that I
changed the tests to use `insta` rather than manually storing and
comparing against JSON snapshots.
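
For context, a snapshot test in the `insta` style looks roughly like this; the parser function and input below are placeholders for illustration, not the real `requirements_txt.rs` API:

```rust
// `insta` renders the value, stores it in a `.snap` file on first run, and
// diffs against that stored snapshot on subsequent runs.
#[derive(Debug)]
struct Requirement {
    name: String,
    specifier: Option<String>,
}

fn parse_line(line: &str) -> Requirement {
    // Extremely simplified: split on `==` only.
    match line.split_once("==") {
        Some((name, version)) => Requirement {
            name: name.trim().to_string(),
            specifier: Some(format!("=={}", version.trim())),
        },
        None => Requirement {
            name: line.trim().to_string(),
            specifier: None,
        },
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn snapshot_simple_requirement() {
        insta::assert_debug_snapshot!(parse_line("flask == 3.0.0"));
    }
}
```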

Closes https://github.com/astral-sh/puffin/issues/89.
2023-10-12 17:09:00 +00:00
Charlie Marsh a0294a510c
Rework `puffin sync` output to summarize (#81)
This also moves away from using `tracing` for user-facing logging,
instead introducing a new `Printer` abstraction.
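
A plausible shape for such an abstraction (the variant names and method below are guesses for illustration, not the PR's actual `Printer`):

```rust
// Sketch: user-facing output goes through one explicit abstraction on
// stderr, rather than through `tracing` log levels.
use std::io::Write;

#[derive(Debug, Clone, Copy)]
enum Printer {
    /// Suppress all user-facing output (e.g. `--quiet`).
    Quiet,
    /// Summarized, user-facing output on stderr.
    Default,
    /// Everything, including per-package detail.
    Verbose,
}

impl Printer {
    fn report(&self, summary: &str, detail: &str) {
        match self {
            Printer::Quiet => {}
            Printer::Default => {
                let _ = writeln!(std::io::stderr(), "{summary}");
            }
            Printer::Verbose => {
                let _ = writeln!(std::io::stderr(), "{summary}\n{detail}");
            }
        }
    }
}

fn main() {
    let printer = Printer::Default;
    printer.report(
        "Installed 12 packages in 80ms",
        "flask==3.0.0, werkzeug==3.0.1, ...",
    );
}
```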

Closes #66.
2023-10-10 03:29:09 +00:00
Charlie Marsh ba2b200fce
Enable release builds via `cargo-dist` (#79) 2023-10-09 20:48:55 +00:00
Charlie Marsh 0ca17a1cf2
Use local copy of `gourgeist` (#62)
This PR gets `gourgeist` passing our local CI and integrated into the
broader workspace.

There's some duplication between concepts in `gourgeist` (like the
`InterpreterInfo`) and structs we have elsewhere, but we can tackle
those later.
2023-10-08 18:45:08 +00:00
Charlie Marsh 2a846e76b7
Store unzipped wheels in a cache (#49)
This PR massively speeds up the case in which you need to install wheels
that already exist in the global cache.

The new strategy is as follows:

- Download the wheel into the content-addressed cache.
- Unzip the wheel into the cache, but ignore content-addressing. It
turns out that writing to `cacache` for every file in the zip added a
ton of overhead, and I don't see any actual advantages to doing so.
Instead, we just unzip the contents into a directory at, e.g.,
`~/.cache/puffin/django-4.1.5`.
- (The unzip itself is now parallelized with Rayon.)
- When installing the wheel, we now support unzipping from a directory
instead of a zip archive. This required duplicating and tweaking a few
functions.
- When installing the wheel, we now use reflinks (or copy-on-write
links). These have a few fantastic properties: (1) they're extremely
cheap to create (on macOS, they are allegedly faster than hard links);
(2) they minimize disk space, since we avoid copying files entirely in
the vast majority of cases; and (3) if the user then edits a file
locally, the cache doesn't get polluted. Orogene, Bun, and soon pnpm all
use reflinks.
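
A very rough sketch of that install step, assuming the `walkdir` and `rayon` crates, with made-up paths and function names, and falling back to a plain copy where the real code would attempt a reflink first:

```rust
// Fan the cached, already-unzipped wheel contents out into an environment
// in parallel.
use rayon::prelude::*;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

fn install_from_cache(cache_dir: &Path, site_packages: &Path) -> io::Result<()> {
    // Collect every file under the unzipped wheel directory.
    let files: Vec<PathBuf> = walkdir::WalkDir::new(cache_dir)
        .into_iter()
        .filter_map(Result::ok)
        .filter(|entry| entry.file_type().is_file())
        .map(|entry| entry.into_path())
        .collect();

    // Link (or, here, copy) each file into the environment in parallel.
    files.par_iter().try_for_each(|src| {
        let relative = src.strip_prefix(cache_dir).expect("file is under cache dir");
        let dst = site_packages.join(relative);
        if let Some(parent) = dst.parent() {
            fs::create_dir_all(parent)?;
        }
        // A real implementation would attempt a reflink (copy-on-write clone)
        // first and only fall back to a copy if the filesystem refuses.
        fs::copy(src, &dst)?;
        Ok::<(), io::Error>(())
    })
}
```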

Puffin is now ~15x faster than `pip` for the common case of installing
cached data into a fresh environment.

Closes https://github.com/astral-sh/puffin/issues/21.

Closes https://github.com/astral-sh/puffin/issues/39.
2023-10-08 04:04:48 +00:00
Charlie Marsh f3015ffc1f
Add a `clean` command to clear the cache (#41) 2023-10-07 15:19:03 +00:00
Charlie Marsh ae28552b3a
Use local copy of `install-wheel-rs` (#34)
This PR modifies `install-wheel-rs` (and a few other crates) to get
everything playing nicely. Specifically, CI should pass, and all these
crates now use workspace dependencies between one another.

As part of this change, I split out the wheel name parsing into its own
`wheel-filename` crate, and the compatibility tag parsing into its own
`platform-tags` crate.
2023-10-07 01:43:55 +00:00
Charlie Marsh c8477991a9
Use local versions of PEP 440 and PEP 508 crates (#32)
This PR modifies the PEP 440 and PEP 508 crates to pass CI, primarily by
fixing all lint violations.

We're also now using these crates in the workspace via `path`.
(Previously, we were still fetching them from Cargo.)
2023-10-07 00:16:44 +00:00
Charlie Marsh dd26cfa0cc
Migrate to `tokio` (#27)
Closes https://github.com/astral-sh/puffin/issues/26.
2023-10-06 20:31:03 +00:00
Charlie Marsh ca6aa207ff
Move to workspace dependencies (#25) 2023-10-06 19:49:41 +00:00
Charlie Marsh 2d6266b167
Add an HTTP cache (and `--no-cache` argument) (#14)
Closes https://github.com/astral-sh/puffin/issues/3.
2023-10-05 19:14:05 -04:00
Charlie Marsh b059c590c4
Add basic CI via GitHub Actions (#10)
Closes https://github.com/astral-sh/puffin/issues/1.
2023-10-05 13:42:58 -04:00
Charlie Marsh 8b9ac30507 Add license, Cargo.toml, etc. 2023-10-05 12:45:38 -04:00