## Summary
This fixes an extremely subtle bug in `pip install --reinstall`, whereby
if you depend on `setuptools` at the top level, we end up uninstalling
it after resolving, which breaks some cached state. If we have
`--reinstall`, we need to reset that cached state between resolving and
installing.
## Test Plan
Running `pip install --reinstall` with:
```txt
setuptools
devpi @ e334eb4dc9bb023329e4b610e4515b/devpi-2.2.0.tar.gz
```
Fails on `main`, but passes with this change.
## Summary
This PR fixes a subtle bug in `pip install` when using `--reinstall`. If
a package depends on a build system directly (e.g., `waitress` depends
on `setuptools`), and other packages also need that build system to
build a source distribution, we currently don't share the `OnceMap`
between those cases.
This lifts the `InFlight` tracking up a level, so that it's initialized
once per command, then shared everywhere.
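For context, the shape of the pattern (a minimal once-map sketch built on `tokio`, not Puffin's actual `OnceMap`): the first caller for a key does the work, and concurrent callers for the same key await that result.
```rust
use std::collections::HashMap;
use std::future::Future;
use std::hash::Hash;
use std::sync::{Arc, Mutex};

use tokio::sync::OnceCell;

/// Deduplicates concurrent work per key.
struct OnceMap<K, V>(Mutex<HashMap<K, Arc<OnceCell<V>>>>);

impl<K: Eq + Hash, V: Clone> OnceMap<K, V> {
    fn new() -> Self {
        Self(Mutex::new(HashMap::new()))
    }

    /// The first caller for `key` runs `init`; everyone else waits for it.
    async fn get_or_init<F, Fut>(&self, key: K, init: F) -> V
    where
        F: FnOnce() -> Fut,
        Fut: Future<Output = V>,
    {
        let cell = {
            let mut map = self.0.lock().unwrap();
            map.entry(key)
                .or_insert_with(|| Arc::new(OnceCell::new()))
                .clone()
        };
        cell.get_or_init(init).await.clone()
    }
}
```
With the map initialized once per command, every source distribution build within a single invocation shares the same instance.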
## Test Plan
I'm having trouble coming up with an equivalent test case, and I'm
hesitant to add this slow test to the suite... But if you run `pip
install --reinstall` with:
```
waitress @ git+https://github.com/zanieb/waitress
devpi-server @ git+https://github.com/zanieb/devpi#subdirectory=server
```
It fails consistently on `main` and passes here.
## Summary
This makes the separation clearer between the legacy `pip` API and the
API we'll add in the future for the package manager itself. It also
enables seamless `puffin pip` aliasing for those that want it.
Closes #918.
## Summary
This PR restructures the flat index fetching in a few ways:
1. It now lives in its own `FlatIndexClient`, since it felt a bit
awkward (in my opinion) for it to live in `RegistryClient`.
2. We now fetch the `FlatIndex` outside of the resolver. This has a few
benefits: (1) the resolver constructor is no longer `async` and no longer
returns `Result`, which feels better for a resolver; and (2) we can
share the `FlatIndex` across resolutions rather than re-fetching it for
every source distribution build.
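In rough terms (stub types and hypothetical signatures, not the crate's exact API):
```rust
struct Manifest;
struct FlatIndex; // distributions discovered via `--find-links`

struct FlatIndexClient;

impl FlatIndexClient {
    /// Fetching is `async` and fallible, and happens once per command.
    async fn fetch(&self, _find_links: &[std::path::PathBuf]) -> std::io::Result<FlatIndex> {
        Ok(FlatIndex)
    }
}

struct Resolver<'a> {
    flat_index: &'a FlatIndex,
}

impl<'a> Resolver<'a> {
    /// Construction is now synchronous and infallible; the same
    /// `FlatIndex` can be handed to every source-distribution build.
    fn new(_manifest: Manifest, flat_index: &'a FlatIndex) -> Self {
        Self { flat_index }
    }
}
```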
## Summary
`FlatIndex` is now the thing that's keyed on `PackageName`, while
`FlatDistributions` is what used to be called `FlatIndex` (a map from
version to `PrioritizedDistribution`, for a single package). I find this
a bit clearer, and it lets us remove the `from_files` constructor that
didn't return `Self`, which I had trouble following.
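So the rough shape is now (stand-in types, for illustration):
```rust
use std::collections::{BTreeMap, HashMap};

type PackageName = String;
type Version = String;
struct PrioritizedDistribution;

/// Keyed on `PackageName`: one entry per package across the flat sources.
struct FlatIndex(HashMap<PackageName, FlatDistributions>);

/// Formerly called `FlatIndex`: a map from version to
/// `PrioritizedDistribution` for a single package.
struct FlatDistributions(BTreeMap<Version, PrioritizedDistribution>);
```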
## Summary
I'm running into some annoyances converting `&Version` to
`&PubGrubVersion` (which is just a wrapper type around `Version`), and I
realized... We don't even need `PubGrubVersion`?
The reason we "need" it today is due to the orphan trait rule: `Version`
is defined in `pep440_rs`, but we want to `impl
pubgrub::version::Version for Version` in the resolver crate.
Instead of introducing a new type here, which leads to a lot of
awkwardness around conversion and API isolation, what if we instead just
implement `pubgrub::version::Version` in `pep440_rs` via a feature? That
way, we can just use `Version` everywhere without any confusion and
conversion for the wrapper type.
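Concretely, something like the following. This is only a sketch: the feature name and the `lowest`/`bump` strategies are assumptions, and this `bump` just produces *some* strictly greater version.
```rust
// In pep440_rs, gated behind a Cargo feature:
//
//     [features]
//     pubgrub = ["dep:pubgrub"]

#[cfg(feature = "pubgrub")]
impl pubgrub::version::Version for Version {
    fn lowest() -> Self {
        // Assumption: "0" parses to the smallest version we care about.
        "0".parse().unwrap()
    }

    fn bump(&self) -> Self {
        // Assumption: `release` is accessible here; incrementing the last
        // segment yields a strictly greater version.
        let mut next = self.clone();
        if let Some(last) = next.release.last_mut() {
            *last += 1;
        }
        next
    }
}
```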
Add `--find-links` support for local directories to pip-compile.
It seems that pip joins all sources and then picks the best package. We
explicitly give find-links packages precedence when the same package
exists both on an index and locally, by prefilling the `VersionMap`;
otherwise, they are added as another index and the existing rules of
precedence apply.
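In other words (a sketch with stand-in types; the real `VersionMap` holds `PrioritizedDistribution`s):
```rust
use std::collections::BTreeMap;

type Version = String;
type Dist = String;

/// Find-links entries are inserted first; registry files only fill the
/// gaps, so a local file wins when both provide the same version.
fn build_version_map(
    find_links: BTreeMap<Version, Dist>,
    registry: impl IntoIterator<Item = (Version, Dist)>,
) -> BTreeMap<Version, Dist> {
    let mut map = find_links; // prefill
    for (version, dist) in registry {
        map.entry(version).or_insert(dist); // never overrides the prefill
    }
    map
}
```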
Internally, the feature is called _flat index_, which is more meaningful
than _find links_: we're not looking for links; we're picking up local
directories, and (TBD) supporting another index format that's just a
flat list of files instead of a nested index.
`RegistryBuiltDist` and `RegistrySourceDist` now use `WheelFilename` and
`SourceDistFilename`, respectively. The `File` inside `RegistryBuiltDist`
and `RegistrySourceDist` gained the ability to represent both a URL and
a path, so that `--find-links` with a URL and with a path works the
same, with both being locked as `<package_name>@<version>` instead of
`<package_name> @ <url>`. (This is more of a detail; this PR still
works if we strip that and represent directory find-links as
`<package_name> @ file:///path/to/file.ext`.)
`PrioritizedDistribution` and `FlatIndex` have been moved to locations
where we can use them in the upstack PR.
I added a `scripts/wheels` directory with stripped down wheels to use
for testing.
We're lacking tests for correct tag-priority precedence with flat
indexes; I only confirmed this manually, since it is not covered in the
pip-compile or pip-sync output.
Closes #876
There is no guarantee that indexes provide hashes at all, or
specifically the sha256 hashes we support. [PEP
503](https://peps.python.org/pep-0503/#specification):
> The URL SHOULD include a hash in the form of a URL fragment with the
> following syntax: `#<hashname>=<hashvalue>`, where `<hashname>` is the
> lowercase name of the hash function (such as sha256) and `<hashvalue>`
> is the hex encoded digest.
We instead use the url as input to generate a hash when caching.
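A sketch of that fallback (the hasher choice here is illustrative; the real code may use a different, explicitly stable hash function):
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Derive a cache key from the file URL when the index provides no
/// usable hash. (`DefaultHasher` is not guaranteed stable across Rust
/// versions, so a real cache would pick an explicitly stable hash.)
fn cache_key_from_url(url: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    url.hash(&mut hasher);
    hasher.finish()
}
```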
## Summary
We always normalize extra names in our requirements (e.g., `cuda12_pip`
to `cuda12-pip`), but we weren't normalizing within PEP 508 markers,
which meant we ended up comparing `cuda12-pip` (normalized) against
`cuda12_pip` (unnormalized).
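For reference, the normalization is the usual PEP 503/685-style scheme; a minimal sketch (not the crate's actual helper):
```rust
/// Normalize an extra name: lowercase it and collapse runs of `-`, `_`,
/// and `.` into a single `-` (e.g., `cuda12_pip` -> `cuda12-pip`).
fn normalize_extra(name: &str) -> String {
    let mut out = String::with_capacity(name.len());
    let mut prev_was_separator = false;
    for c in name.chars() {
        if matches!(c, '-' | '_' | '.') {
            if !prev_was_separator {
                out.push('-');
            }
            prev_was_separator = true;
        } else {
            prev_was_separator = false;
            out.extend(c.to_lowercase());
        }
    }
    out
}
```
The fix is to apply the same normalization to extra names on both sides of the marker comparison.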
Closes https://github.com/astral-sh/puffin/issues/911.
Previously, the URL on `File` could be either relative or absolute,
depending on the index, and we would finalize it lazily. Now, we
finalize the URL when converting `pypi_types::File` to
`distribution_types::File`. This change is required to make the hashes
on `File` optional (https://github.com/astral-sh/puffin/pull/910), since
they are currently the only unique field usable for caching.
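The eager conversion is cheap because the `url` crate handles both cases uniformly (a trimmed-down, hypothetical helper showing just the URL handling):
```rust
use url::Url;

/// Finalize a possibly-relative file URL against the index base.
/// `Url::join` resolves `file_url` against `base` when it's relative,
/// and returns it unchanged when it's already absolute.
fn finalize_url(base: &Url, file_url: &str) -> Result<Url, url::ParseError> {
    base.join(file_url)
}
```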
## Summary
This PR ensures that when the user passes in `--python-version`, we
adjust the _markers_ to match the target version, thus forcing us to
select compatible wheels for the `--python-version`, rather than the
installed version.
## Context
Let's call Python 3.10 the "installed" environment and Python 3.12 the
"target" environment. For each version, we have _both_ a Python version
(to match against `Requires-Python`) and a set of tags (to match against
wheels).
The rules for resolution are as follows...
- For each package, for each version, we try to find the "best
candidate" for resolution and installation.
- We first look for a wheel that's compatible with the _target_
environment. This requires testing against both the `Requires-Python`
and the markers. (We won't have to build or run this code, so the
_installed_ version is irrelevant.) **(This PR corrects _this_ bullet --
previously, we validated against the _installed_ markers, rather than
the target markers.)**
- If we can't find a compatible wheel, we accept any _incompatible_
wheel as long as there's a source distribution. The source distribution
_must_ be compatible with the target environment. (We won't have to
build or run this code, so the _installed_ version is irrelevant.)
- If there are no wheels, then the source distribution must be
compatible with _both_ the installed and target environments, since we
need to build it.
This is all true for the top-level resolution. When we perform a
sub-resolution (when resolving the build dependencies of a source
distribution), we should _only_ use the installed environment, and
ignore the target environment, since we assume that the dependencies
will be the same in both environments once built -- so our goal is
"just" to build the distribution, without concern for which build
dependencies it uses.
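Condensing the rules above into code (a sketch; the booleans stand in for the real tag and `Requires-Python` checks):
```rust
/// Can we use this package version, given the target and installed
/// environments?
fn usable(
    has_wheels: bool,
    wheel_compatible_with_target: bool,
    has_sdist: bool,
    sdist_compatible_with_target: bool,
    sdist_compatible_with_installed: bool,
) -> bool {
    if wheel_compatible_with_target {
        // Best case: a wheel we can install directly into the target.
        true
    } else if has_wheels {
        // Incompatible wheels are acceptable as long as a
        // target-compatible source distribution exists.
        has_sdist && sdist_compatible_with_target
    } else {
        // Source-only: we must build it, so both environments matter.
        has_sdist && sdist_compatible_with_target && sdist_compatible_with_installed
    }
}
```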
Closes https://github.com/astral-sh/puffin/issues/883.
Remove a test case from `install_editable` that slows it down from
3.6s to 6.5s while providing little test coverage. It also seems to
block other tests sometimes; `cargo nextest run -E "test(editable)"
--all-features` has more consistent and lower runtimes. Surprisingly,
this seems to have a bigger effect than switching from pyo3 to cffi.
Used test commands:
```
rm -rf scripts/editable-installs/maturin_editable/target/ && time cargo nextest run -E "test(=install_editable)" --all-features
rm -rf scripts/editable-installs/maturin_editable/target/ && time cargo nextest run -E "test(editable)" --all-features
```
Part of #878
Replace the DTLSSocket test with a dummy package that does nothing but
contain the build system specs that we need. This should speed up one of
the slowest tests.
Part of #878
## Summary
Now that `get_or_build_wheel` will often _also_ handle the unzip step,
we need to move our per-target locking (`OnceMap`) up a level.
Previously, it was only applied to the unzip step, to prevent us from
attempting to unzip into the same target concurrently; now, it's applied
at the `get_wheel` level, which includes both downloading and unzipping.
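Schematically (stub types and a stand-in guard, not the actual code):
```rust
use std::future::Future;

// Stub types/functions, for illustration only.
struct Archive;
struct Wheel;
async fn download() -> Archive { Archive }
fn unzip(_: Archive) -> Wheel { Wheel }

/// Stand-in for the `OnceMap` guard: in the real code, concurrent callers
/// with the same `target` wait for the first one instead of re-running `f`.
async fn once<F, Fut>(_target: &str, f: F) -> Wheel
where
    F: FnOnce() -> Fut,
    Fut: Future<Output = Wheel>,
{
    f().await
}

/// After this change, the guard spans download *and* unzip, so concurrent
/// source-dist builds that all need `setuptools` share a single fetch.
async fn get_wheel(target: &str) -> Wheel {
    once(target, || async { unzip(download().await) }).await
}
```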
## Test Plan
It seems like none of our existing tests catch this -- perhaps because
they're too "simple"? You need to run into a situation in which you're
doing multiple source distribution builds concurrently (since they'll
all try to download `setuptools`):
```
rm -rf foo && virtualenv --clear .venv && cargo run -p puffin-cli -- pip-compile ./scripts/requirements/pydantic.in --verbose --cache-dir foo
```
Reduces the number of implementation branches handling `Range::full`,
deferring it to `PackageRange`.
Improves some user-facing messages, e.g. saying `all versions of
<package>` instead of `<package>*`.
Changes the member names of the `PackageRangeKind` enum, which were not
very clear.
## Summary
Installs the seed packages you get with `virtualenv`, but opt-in rather
than opt-out.
Closes https://github.com/astral-sh/puffin/issues/852.
## Test Plan
```
❯ ./scripts/benchmarks/venv.sh
+ hyperfine --runs 20 --warmup 3 --prepare 'rm -rf .venv' './target/release/puffin venv' --prepare 'rm -rf .venv' 'virtualenv --without-pip .venv' --prepare 'rm -rf .venv' 'python -m venv --without-pip .venv'
Benchmark 1: ./target/release/puffin venv
Time (mean ± σ): 4.6 ms ± 0.2 ms [User: 2.4 ms, System: 3.6 ms]
Range (min … max): 4.3 ms … 4.9 ms 20 runs
Warning: Command took less than 5 ms to complete. Note that the results might be inaccurate because hyperfine can not calibrate the shell startup time much more precise than this limit. You can try to use the `-N`/`--shell=none` option to disable the shell completely.
Benchmark 2: virtualenv --without-pip .venv
Time (mean ± σ): 73.3 ms ± 0.3 ms [User: 57.4 ms, System: 14.2 ms]
Range (min … max): 72.8 ms … 74.0 ms 20 runs
Benchmark 3: python -m venv --without-pip .venv
Time (mean ± σ): 22.5 ms ± 0.3 ms [User: 17.0 ms, System: 4.9 ms]
Range (min … max): 22.0 ms … 23.2 ms 20 runs
Summary
'./target/release/puffin venv' ran
4.92 ± 0.20 times faster than 'python -m venv --without-pip .venv'
16.00 ± 0.63 times faster than 'virtualenv --without-pip .venv'
+ hyperfine --runs 20 --warmup 3 --prepare 'rm -rf .venv' './target/release/puffin venv --seed' --prepare 'rm -rf .venv' 'virtualenv .venv' --prepare 'rm -rf .venv' 'python -m venv .venv'
Benchmark 1: ./target/release/puffin venv --seed
Time (mean ± σ): 20.2 ms ± 0.4 ms [User: 8.6 ms, System: 15.7 ms]
Range (min … max): 19.7 ms … 21.2 ms 20 runs
Benchmark 2: virtualenv .venv
Time (mean ± σ): 135.1 ms ± 2.4 ms [User: 66.7 ms, System: 65.7 ms]
Range (min … max): 133.2 ms … 142.8 ms 20 runs
Benchmark 3: python -m venv .venv
Time (mean ± σ): 1.656 s ± 0.014 s [User: 1.447 s, System: 0.186 s]
Range (min … max): 1.641 s … 1.697 s 20 runs
Summary
'./target/release/puffin venv --seed' ran
6.67 ± 0.17 times faster than 'virtualenv .venv'
81.79 ± 1.70 times faster than 'python -m venv .venv'
```
## Summary
Our current setup uses the legacy `setup.py`-based builds if a
`pyproject.toml` file isn't present. This matches pip's behavior.
However, `pypa/build` uses PEP 517-based builds in such cases, and it
looks like pip plans to make that the default
(https://github.com/pypa/pip/issues/9175), with the limiting factor
being performance issues related to isolated builds.
This is now the default behavior, but the `--legacy-setup-py` flag
allows users to opt in to using `setup.py` directly for distributions
that lack a `pyproject.toml`.
## Summary
This PR adds support for `prepare_metadata_for_build_wheel`, which
allows us to determine source distribution metadata without building the
source distribution. This represents an optimization for the resolver,
as we can skip the expensive build phase for build backends that support
it.
For reference, `prepare_metadata_for_build_wheel` seems to be supported
by:
- `hatchling` (as of
[1.9.0](https://hatch.pypa.io/latest/history/hatchling/#hatchling-v1.9.0)).
- `flit`
- `setuptools`
In fact, it seems to work for every backend _except_ those using legacy
`setup.py`.
Closes #599.
Refactoring split out from find-links support: find-links files can be
represented as `Dist`, but not really as `File`, since they have neither
a URL nor hashes.
`DistRequiresPython` is somewhat odd as an in-between type.
Changes `File::size` from a `usize` to a `u64`.
The motivation is that with tensorflow wheels already at 475 MB
(https://pypi.org/project/tensorflow/2.15.0.post1/#files), we're only
one order of magnitude away from overflowing a 32-bit `usize`, and we
want to avoid such target-dependent failures.
## Summary
If pre-releases are available for a package that we otherwise couldn't
resolve, we now show a hint that includes an example version.
Closes https://github.com/astral-sh/puffin/issues/811.
Instead of trying to fix up _all_ the invalid version specifiers on PyPI
and elsewhere, this filters out distributions with invalid
`requires-python` version specifiers that even
`LenientVersionSpecifiers` couldn't parse, as opposed to failing
entirely, which is what we currently do.
It would be nicer to model this through an invalid-distribution pubgrub
type, together with e.g. source dists with an unknown extension, so that
the version itself still shows up in the error trace.
At the same time, we reduce the log level for fixups from warning to
trace, as they are not actionable for the user.
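Schematically (stub types; `parse_lenient` stands in for the lenient specifier parser):
```rust
struct File {
    filename: String,
    requires_python: Option<String>,
}

/// Drop files whose `requires-python` can't be parsed even leniently,
/// logging at `trace` instead of failing the whole resolution.
fn filter_parseable(
    files: Vec<File>,
    parse_lenient: impl Fn(&str) -> Result<(), String>,
) -> Vec<File> {
    files
        .into_iter()
        .filter(|file| match file.requires_python.as_deref() {
            Some(spec) if parse_lenient(spec).is_err() => {
                tracing::trace!("Skipping {}: invalid requires-python", file.filename);
                false
            }
            _ => true,
        })
        .collect()
}
```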
Adjusts the display of "no versions available" in error messages to be
consistent with other package/range pairings, i.e., we usually display
"<package-name><range>".
## Summary
Right now, both the callback _and_ the "We have no compatible wheel"
paths have a lot of repeated code. This PR changes the callback to
_just_ remove all the wheels and handle the download, and the rest of
the method following the callback is responsible for finding and
building any wheels.
## Summary
I'm unable to run `puffin-cli` on `main`, as the
`tracing-durations-export` dependency is marked as optional, but the
crate actually depends on it to compile. Further, without
`tracing-durations-export`, there are `Option` types that can't resolve
to a concrete type.
This PR fixes compilation with and without the feature.
Noticed these when working on something unrelated. Generally:
- Prefer `entry.file_type()` over `entry.path().is_file()` or similar,
as the former is almost always free on Unix.
- Call `entry.path()` once, since it allocates internally (returns a
`PathBuf`).
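For example:
```rust
use std::{fs, io, path::Path};

fn list_files(dir: &Path) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let path = entry.path(); // allocates a `PathBuf`; call it once
        // `file_type()` comes from the directory entry itself on most
        // Unix filesystems, while `path.is_file()` issues an extra stat.
        if entry.file_type()?.is_file() {
            println!("{}", path.display());
        }
    }
    Ok(())
}
```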
In the past, I moved us to `owo-colors`
(https://github.com/astral-sh/puffin/pull/121); then, we moved back,
because we ran into issues with overriding the settings to force-disable
colors. But `anstream` solved those problems, so I'm moving us _back_ to
`owo-colors`, since it's what `anstream` recommends, and it's already
used by many of our dependencies (`miette`, `configparser`).
---------
Co-authored-by: konstin <konstin@mailbox.org>
## Summary
We can use `anstream` for all color control, rather than going through
`colored`. Note that we still need the `colored` crate, since `colored`
and `anstream` solve different problems. (`anstream` recommends using
`owo-colors` alongside it, but `colored` seems to work fine?)
Resolves the issue raised in
https://github.com/astral-sh/puffin/pull/742 via `anstream` rather than
`colored`.
Closes https://github.com/astral-sh/puffin/issues/782.
Looking at the profile for tf-models-nightly after #789,
`compare_release` is the single biggest item. By adding a fast path, we
avoid paying the cost of padding releases with 0s when they are the same
length, resulting in a 16% improvement for this pathological case. Note
that this mainly happens because tf-models-nightly consists almost
entirely of large dev releases that hit the slow path.
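The fast path is essentially the following (a sketch over plain release segments, not the actual implementation):
```rust
use std::cmp::Ordering;
use std::iter;

/// Compare two release sequences as PEP 440 requires: as if the shorter
/// one were padded with trailing zeros.
fn compare_release(a: &[u64], b: &[u64]) -> Ordering {
    if a.len() == b.len() {
        // Fast path: equal lengths need no padding; lexicographic slice
        // comparison is already correct.
        return a.cmp(b);
    }
    // Slow path: pad the shorter sequence with zeros.
    let len = a.len().max(b.len());
    let padded_a = a.iter().copied().chain(iter::repeat(0)).take(len);
    let padded_b = b.iter().copied().chain(iter::repeat(0)).take(len);
    padded_a.cmp(padded_b)
}
```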
**Before** (profiling screenshot omitted)
**After** (profiling screenshot omitted)
```
$ hyperfine --warmup 1 --runs 3 "target/profiling/main pip-compile -q scripts/requirements/tf-models-nightly.txt" \
"target/profiling/puffin pip-compile -q scripts/requirements/tf-models-nightly.txt"
Benchmark 1: target/profiling/main pip-compile -q scripts/requirements/tf-models-nightly.txt
Time (mean ± σ): 11.963 s ± 0.225 s [User: 11.478 s, System: 0.451 s]
Range (min … max): 11.747 s … 12.196 s 3 runs
Benchmark 2: target/profiling/puffin pip-compile -q scripts/requirements/tf-models-nightly.txt
Time (mean ± σ): 10.317 s ± 0.720 s [User: 9.885 s, System: 0.404 s]
Range (min … max): 9.501 s … 10.860 s 3 runs
Summary
target/profiling/puffin pip-compile -q scripts/requirements/tf-models-nightly.txt ran
1.16 ± 0.08 times faster than target/profiling/main pip-compile -q scripts/requirements/tf-models-nightly.txt
```
This PR builds on #780 by making both version parsing faster, and
perhaps more importantly, making version comparisons much faster.
Overall, these changes result in a considerable improvement for the
`boto3.in` workload. Here's the status quo:
```
$ time puffin pip-compile --no-build --cache-dir ~/astral/tmp/cache/ -o /dev/null ./scripts/requirements/boto3.in
Resolved 31 packages in 34.56s
real 34.579
user 34.004
sys 0.413
maxmem 2867 MB
faults 0
```
And now with this PR:
```
$ time puffin pip-compile --no-build --cache-dir ~/astral/tmp/cache/ -o /dev/null ./scripts/requirements/boto3.in
Resolved 31 packages in 9.20s
real 9.218
user 8.919
sys 0.165
maxmem 463 MB
faults 0
```
This particular workload gets stuck in pubgrub doing resolution, and
thus benefits mightily from a faster `Version::cmp` routine. With that
said, this change does also help a fair bit with "normal" runs:
```
$ hyperfine -w10 \
"puffin-base pip-compile --cache-dir ~/astral/tmp/cache/ -o /dev/null ./scripts/benchmarks/requirements.in" \
"puffin-cmparc pip-compile --cache-dir ~/astral/tmp/cache/ -o /dev/null ./scripts/benchmarks/requirements.in"
Benchmark 1: puffin-base pip-compile --cache-dir ~/astral/tmp/cache/ -o /dev/null ./scripts/benchmarks/requirements.in
Time (mean ± σ): 337.5 ms ± 3.9 ms [User: 310.5 ms, System: 73.2 ms]
Range (min … max): 333.6 ms … 343.4 ms 10 runs
Benchmark 2: puffin-cmparc pip-compile --cache-dir ~/astral/tmp/cache/ -o /dev/null ./scripts/benchmarks/requirements.in
Time (mean ± σ): 189.8 ms ± 3.0 ms [User: 168.1 ms, System: 78.4 ms]
Range (min … max): 185.0 ms … 196.2 ms 15 runs
Summary
puffin-cmparc pip-compile --cache-dir ~/astral/tmp/cache/ -o /dev/null ./scripts/benchmarks/requirements.in ran
1.78 ± 0.03 times faster than puffin-base pip-compile --cache-dir ~/astral/tmp/cache/ -o /dev/null ./scripts/benchmarks/requirements.in
```
There is perhaps some future work here (detailed in the commit
messages), but I suspect it would be more fruitful to explore ways of
making resolution itself and/or deserialization faster.
Fixes #373, Closes #396
Uses new metadata added in https://github.com/zanieb/packse/pull/61 to
assert that resolution succeeded or failed _and_ that the installed
package versions match the expected result.
The semantics are a bit unintuitive: `--python-version` is a preference
when looking for a Python version without a venv, but if we don't find
that exact version, we'll take `python3` and patch the markers. This
will make more sense once we start provisioning Python builds.
We can now resolve black with both Python 3.8 and 3.12, with or without
that Python version being in scope. In the example below,
`PATH=$HOME/.cargo/bin:/usr/bin` removes the pyenv builds and leaves
only `python3`, which is Python 3.11.
```console
$ RUST_LOG=puffin::commands=debug cargo run --bin puffin -q -- pip-compile -v scripts/benchmarks/requirements/black.in --python-version py38
0.004108s DEBUG puffin::commands::pip_compile Using Python 3.8 at /home/konsti/.local/bin/python3.8
Resolved 8 packages in 44ms
# This file was autogenerated by Puffin v0.0.1 via the following command:
# puffin pip-compile -v scripts/benchmarks/requirements/black.in --python-version py38
black==23.11.0
[...]
platformdirs==4.0.0
# via black
tomli==2.0.1
# via black
typing-extensions==4.8.0
# via black
$ PATH=$HOME/.cargo/bin:/usr/bin RUST_LOG=puffin::commands=debug cargo run --bin puffin -q -- pip-compile -v scripts/benchmarks/requirements/black.in --python-version py38
0.004315s DEBUG puffin::commands::pip_compile Using Python 3.11 at /usr/bin/python3
Resolved 8 packages in 43ms
# This file was autogenerated by Puffin v0.0.1 via the following command:
# puffin pip-compile -v scripts/benchmarks/requirements/black.in --python-version py38
black==23.11.0
[...]
platformdirs==4.0.0
# via black
tomli==2.0.1
# via black
typing-extensions==4.8.0
# via black
```
```console
$ RUST_LOG=puffin::commands=debug cargo run --bin puffin -q -- pip-compile -v scripts/benchmarks/requirements/black.in --python-version py312
0.004216s DEBUG puffin::commands::pip_compile Using Python 3.12 at /home/konsti/.local/bin/python3.12
Resolved 6 packages in 37ms
# This file was autogenerated by Puffin v0.0.1 via the following command:
# puffin pip-compile -v scripts/benchmarks/requirements/black.in --python-version py312
black==23.11.0
[...]
platformdirs==4.0.0
# via black
$ PATH=$HOME/.cargo/bin:/usr/bin RUST_LOG=puffin::commands=debug cargo run --bin puffin -q -- pip-compile -v scripts/benchmarks/requirements/black.in --python-version py312
0.004190s DEBUG puffin::commands::pip_compile Using Python 3.11 at /usr/bin/python3
Resolved 6 packages in 39ms
# This file was autogenerated by Puffin v0.0.1 via the following command:
# puffin pip-compile -v scripts/benchmarks/requirements/black.in --python-version py312
black==23.11.0
[...]
platformdirs==4.0.0
# via black
```
Fixes #235.
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
One of the most common ways source dists fail to build (on Linux) is
when the linker fails because the shared library of a native dependency
is not installed. These errors are hard to understand when you're not a
C programmer:
```
In file included from /usr/include/python3.10/unicodeobject.h:1046,
from /usr/include/python3.10/Python.h:83,
from Modules/3.x/readline.c:8:
Modules/3.x/readline.c: In function ‘on_completion’:
/usr/include/python3.10/cpython/unicodeobject.h:744:29: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
744 | #define _PyUnicode_AsString PyUnicode_AsUTF8
| ^~~~~~~~~~~~~~~~
Modules/3.x/readline.c:842:23: note: in expansion of macro ‘_PyUnicode_AsString’
842 | char *s = _PyUnicode_AsString(r);
| ^~~~~~~~~~~~~~~~~~~
Modules/3.x/readline.c: In function ‘readline_until_enter_or_signal’:
Modules/3.x/readline.c:1044:9: warning: ‘sigrelse’ is deprecated: Use the sigprocmask function instead [-Wdeprecated-declarations]
1044 | sigrelse(SIGINT);
| ^~~~~~~~
In file included from Modules/3.x/readline.c:10:
/usr/include/signal.h:359:12: note: declared here
359 | extern int sigrelse (int __sig) __THROW
| ^~~~~~~~
Modules/3.x/readline.c: In function ‘PyInit_readline’:
Modules/3.x/readline.c:1179:34: warning: assignment to ‘char * (*)(FILE *, FILE *, const char *)’ from incompatible pointer type ‘char * (*)(FILE *, FILE *, char *)’ [-Wincompatible-pointer-types]
1179 | PyOS_ReadlineFunctionPointer = call_readline;
| ^
In file included from /usr/include/string.h:535,
from /usr/include/python3.10/Python.h:30,
from Modules/3.x/readline.c:8:
In function ‘strncpy’,
inlined from ‘call_readline’ at Modules/3.x/readline.c:1124:9:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:95:10: warning: ‘__builtin_strncpy’ output truncated before terminating nul copying as many bytes from a string as its length [-Wstringop-truncation]
95 | return __builtin___strncpy_chk (__dest, __src, __len,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
96 | __glibc_objsize (__dest));
| ~~~~~~~~~~~~~~~~~~~~~~~~~
Modules/3.x/readline.c: In function ‘call_readline’:
Modules/3.x/readline.c:1099:9: note: length computed here
1099 | n = strlen(p);
| ^~~~~~~~~
/usr/bin/ld: cannot find -lncurses: No such file or directory
collect2: error: ld returned 1 exit status
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
---
```
We parse these errors out and tell the user about the missing shared
library, and even the most likely Debian/Ubuntu package name:
```
This error likely indicates that you need to install the library that provides a shared library for ncurses for pygraphviz-1.11 (e.g. libncurses-dev)
```
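The detection itself is a simple scan over the build output (a sketch; the real heuristics and the library-to-package mapping are more involved):
```rust
/// Extract the library name from a `cannot find -l<name>` linker error,
/// e.g. `/usr/bin/ld: cannot find -lncurses: ...` -> `ncurses`.
fn missing_shared_library(stderr: &str) -> Option<&str> {
    let line = stderr.lines().find(|line| line.contains("cannot find -l"))?;
    let (_, rest) = line.split_once("cannot find -l")?;
    rest.split(':').next().map(str::trim)
}
```
From there, a small lookup table can map e.g. `ncurses` to `libncurses-dev`.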
Previously, we just pulled the latest commit from `main` on every
update. This causes problems when you do not intend to update the
scenarios, as in #787.
This bumps to the latest `packse` commit without new scenarios.
Adds support for a `PUFFIN_NO_WRAP` environment variable which disables
line wrapping in `miette` output.
We set this variable in the scenario tests to improve the readability of
snapshots.
I contributed the ability to disable line wrapping upstream at
https://github.com/zkat/miette/pull/328
This PR does a bit of refactoring to the pep440 crate, in particular
around the error types. It is meant to be a precursor
to another PR that does some surgery (both in parsing and in `Version`
representation) that benefits somewhat from this refactoring.
As usual, please review commit-by-commit.
This fixes a compilation error with tests on current `main`. I didn't
track down the exact provenance, but I'd guess it's the result of a
botched merge. (i.e., two or more PRs that pass CI independently, but
when merged cause failures.)
I'm still confused about it, but this seems to do the right thing?
`HierarchicalLayer` internally has [`let ansi =
io::stderr().is_terminal();`](fcd9eed252/src/lib.rs (L74)),
so the logging itself is already correctly uncolored, but errors in the
log weren't.
Test command, run with the network deactivated:
```shell
RUST_LOG=debug cargo run --bin puffin -- pip-compile -v ./scripts/popular_packages/pypi_8k_downloads.txt 2> log.txt
```
**Before**
```
[1;31merror[0m: Request error: error sending request for url (https://pypi.org/simple/apache-airflow-providers-dbt-cloud/): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
[1;31mCaused by[0m: error sending request for url (https://pypi.org/simple/apache-airflow-providers-dbt-cloud/): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
[1;31mCaused by[0m: error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
[1;31mCaused by[0m: dns error: failed to lookup address information: Temporary failure in name resolution
[1;31mCaused by[0m: failed to lookup address information: Temporary failure in name resolution
```
**After**
```
error: Request error: error sending request for url (https://pypi.org/simple/fissix/): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
Caused by: error sending request for url (https://pypi.org/simple/fissix/): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
Caused by: error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
Caused by: dns error: failed to lookup address information: Temporary failure in name resolution
Caused by: failed to lookup address information: Temporary failure in name resolution
```
Fixup for
7349527ceadde8fc265a33e6a4e662/boto3-1.2.0-py2.py3-none-any.whl:
```
botocore>=1.3.0,<1.4.0',
```
Note that neither the quote nor the comma is right.
Following #757, improves the script for generating scenario test cases
with:
- A requirements file
- Support for downloading packse scenarios from GitHub dynamically
- Running rustfmt on the generated test file
- Updating snapshots / running tests