Move `bench` directory to `benchmark` (#5529)

## Summary

Removes the legacy `benchmarks` directory (its contents remain available in Git history)
and renames `bench` to `benchmark` for clarity, updating the affected commands and
references throughout the docs and scripts.
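
In practice, the old module-style invocation is replaced by the `benchmark` project script, run from the renamed directory. A minimal before-and-after sketch of the invocation, drawn from the updated docs below (it assumes a local release build and `hyperfine` on the `PATH`):

```shell
# Before: invoke the bench package as a module from the repository root.
python -m scripts.bench --uv --pip-compile requirements.in

# After: run the `benchmark` project script from the renamed directory.
cd scripts/benchmark
uv run benchmark --uv-pip --pip-compile ../requirements/trio.in
```

Note that the flags are renamed as well (`--uv` becomes `--uv-pip`, `--uv-path` becomes `--uv-pip-path`), which the updated docs below use throughout.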
Authored by Charlie Marsh on 2024-07-28 18:03:52 -04:00; committed via GitHub
parent d7c79182ea
commit 44a77a04d0
15 changed files with 60 additions and 267 deletions


@@ -54,41 +54,44 @@ manager cache is not shared across runs).
## Reproduction
-All benchmarks were generated using the `scripts/bench/__main__.py` script, which wraps
+All benchmarks were generated using the `scripts/benchmark` package, which wraps
[`hyperfine`](https://github.com/sharkdp/hyperfine) to facilitate benchmarking uv
against a variety of other tools.
The benchmark script itself has several requirements:
- A local uv release build (`cargo build --release`).
-- A virtual environment with the script's own dependencies installed (`uv venv && uv pip sync scripts/bench/requirements.txt`).
- An installation of the production `uv` binary in your path.
- The [`hyperfine`](https://github.com/sharkdp/hyperfine) command-line tool installed on your system.
To benchmark resolution against pip-compile, Poetry, and PDM:
```shell
-python -m scripts.bench \
-    --uv \
+uv run benchmark \
+    --uv-pip \
    --poetry \
    --pdm \
    --pip-compile \
    --benchmark resolve-warm --benchmark resolve-cold \
-    scripts/requirements/trio.in \
-    --json
+    --json \
+    ../requirements/trio.in
```
To benchmark installation against pip-sync, Poetry, and PDM:
```shell
-python -m scripts.bench \
-    --uv \
+uv run benchmark \
+    --uv-pip \
    --poetry \
    --pdm \
    --pip-sync \
    --benchmark install-warm --benchmark install-cold \
-    --json
+    --json \
+    ../requirements/compiled/trio.txt
```
+Both commands should be run from the `scripts/benchmark` directory.
After running the benchmark script, you can generate the corresponding graph via:


@@ -83,15 +83,15 @@ Please refer to Ruff's [Profiling Guide](https://github.com/astral-sh/ruff/blob/
We provide diverse sets of requirements for testing and benchmarking the resolver in `scripts/requirements` and for the installer in `scripts/requirements/compiled`.
-You can use `scripts/bench` to benchmark predefined workloads between uv versions and with other tools, e.g., from the `scripts/bench` directory:
+You can use `scripts/benchmark` to benchmark predefined workloads between uv versions and with other tools, e.g., from the `scripts/benchmark` directory:
```shell
-uv run bench \
-    --uv-path ./target/release/before \
-    --uv-path ./target/release/after \
-    ../scripts/requirements/jupyter.in \
-    --benchmark resolve-cold \
-    --min-runs 20
+uv run benchmark \
+    --uv-pip \
+    --poetry \
+    --benchmark \
+    resolve-cold \
+    ../scripts/requirements/trio.in
```
### Analyzing concurrency


@@ -1,14 +0,0 @@
# bench
Benchmarking scripts for uv and other package management tools.
## Getting Started
From the `bench` directory:
```shell
uv run __main__.py \
    --uv-pip \
    --poetry \
    ../scripts/requirements/trio.in --benchmark resolve-cold --min-runs 20
```


@@ -0,0 +1,16 @@
# benchmark
Benchmarking scripts for uv and other package management tools.
## Getting Started
From the `scripts/benchmark` directory:
```shell
uv run benchmark \
    --uv-pip \
    --poetry \
    --benchmark \
    resolve-cold \
    ../requirements/trio.in
```


@@ -1,5 +1,5 @@
[project]
-name = "bench"
+name = "benchmark"
version = "0.0.1"
description = "Benchmark package resolution tools"
requires-python = ">=3.12"
@@ -13,7 +13,7 @@ dependencies = [
]
[project.scripts]
-bench = "bench:main"
+benchmark = "benchmark:main"
[build-system]
requires = ["hatchling"]


@@ -6,58 +6,53 @@ By default, this script also assumes that `pip`, `pip-tools`, `virtualenv`, `poetry` and
`hyperfine` are installed, and that a uv release build exists at `./target/release/uv`
(relative to the repository root). However, the set of tools is configurable.
-To set up the required environment, run:
+For example, to benchmark uv's `pip compile` command against `pip-tools`, run the
+following from the `scripts/benchmark` directory:
-    cargo build --release
-    ./target/release/uv venv
-    source .venv/bin/activate
-    ./target/release/uv pip sync ./scripts/bench/requirements.txt
-Then, to benchmark uv against `pip-tools`:
-    python -m scripts.bench --uv --pip-compile requirements.in
+    uv run benchmark --uv-pip --pip-compile ../requirements/trio.in
It's most common to benchmark multiple uv versions against one another by building
from multiple branches and specifying the path to each binary, as in:
-    # Build the baseline version.
+    # Build the baseline version, from the repo root.
    git checkout main
    cargo build --release
    mv ./target/release/uv ./target/release/baseline
-    # Build the feature version.
+    # Build the feature version, again from the repo root.
    git checkout feature
    cargo build --release

    # Run the benchmark.
-    python -m scripts.bench \
-        --uv-path ./target/release/uv \
-        --uv-path ./target/release/baseline \
-        requirements.in
+    cd scripts/benchmark
+    uv run benchmark \
+        --uv-pip-path ../../target/release/uv \
+        --uv-pip-path ../../target/release/baseline \
+        ../requirements/trio.in
By default, the script will run the resolution benchmarks when a `requirements.in` file
is provided, and the installation benchmarks when a `requirements.txt` file is provided:
    # Run the resolution benchmarks against the Trio project.
-    python -m scripts.bench \
-        --uv-path ./target/release/uv \
-        --uv-path ./target/release/baseline \
-        ./scripts/requirements/trio.in
+    uv run bench \
+        --uv-path ../../target/release/uv \
+        --uv-path ../../target/release/baseline \
+        ../requirements/trio.in
    # Run the installation benchmarks against the Trio project.
-    python -m scripts.bench \
-        --uv-path ./target/release/uv \
-        --uv-path ./target/release/baseline \
-        ./scripts/requirements/compiled/trio.txt
+    uv run bench \
+        --uv-path ../../target/release/uv \
+        --uv-path ../../target/release/baseline \
+        ../requirements/compiled/trio.txt
You can also specify the benchmark to run explicitly:
    # Run the "uncached install" benchmark against the Trio project.
-    python -m scripts.bench \
-        --uv-path ./target/release/uv \
-        --uv-path ./target/release/baseline \
+    uv run bench \
+        --uv-path ../../target/release/uv \
+        --uv-path ../../target/release/baseline \
        --benchmark install-cold \
-        ./scripts/requirements/compiled/trio.txt
+        ../requirements/compiled/trio.txt
"""
import abc


@@ -15,7 +15,7 @@ wheels = [
]
[[distribution]]
-name = "bench"
+name = "benchmark"
version = "0.0.1"
source = { editable = "." }
dependencies = [


@@ -1,27 +0,0 @@
#!/usr/bin/env sh
###
# Benchmark the resolver against `pip-compile`.
#
# Example usage:
#
# ./scripts/benchmarks/compile.sh ./scripts/benchmarks/requirements.in
###
set -euxo pipefail
TARGET=${1}
###
# Resolution with a cold cache.
###
hyperfine --runs 20 --warmup 3 --prepare "rm -f /tmp/requirements.txt" \
"./target/release/uv --no-cache pip-compile ${TARGET} > /tmp/requirements.txt" \
"./target/release/main --no-cache pip-compile ${TARGET} > /tmp/requirements.txt"
###
# Resolution with a warm cache.
###
hyperfine --runs 20 --warmup 3 --prepare "rm -f /tmp/requirements.txt" \
"./target/release/uv pip compile ${TARGET} > /tmp/requirements.txt" \
"./target/release/main pip-compile ${TARGET} > /tmp/requirements.txt"


@@ -1,49 +0,0 @@
###
# A large-ish set of dependencies, including several packages with a large number of small files
# (like Django) and several packages with a small number of large files (like Ruff).
###
pygments==2.16.1
packaging==23.2
click==8.1.7
threadpoolctl==3.2.0
flake8-docstrings==1.7.0
pytest==7.4.2
mdurl==0.1.2
typeguard==3.0.2
tokenize-rt==5.2.0
typing-extensions==4.8.0
markupsafe==2.1.3
attrs==23.1.0
lsprotocol==2023.0.0b1
markdown-it-py==3.0.0
joblib==1.3.2
cattrs==23.1.2
tomlkit==0.12.1
mccabe==0.7.0
iniconfig==2.0.0
rich==13.6.0
django==5.0a1
isort==5.12.0
flake8==6.1.0
snowballstemmer==2.2.0
pycodestyle==2.11.0
mypy-extensions==1.0.0
pluggy==1.3.0
pyflakes==3.1.0
pydocstyle==6.3.0
scipy==1.11.3
jinja2==3.1.2
ruff==0.0.292
pygls==1.1.1
pyupgrade==3.15.0
platformdirs==3.11.0
pylint==3.0.1
pathspec==0.11.2
astroid==3.0.0
dill==0.3.7
scikit-learn==1.3.1
mypy==1.5.1
numpy==1.26.0
asgiref==3.7.2
black==23.9.1
sqlparse==0.4.4


@@ -1,32 +0,0 @@
###
# A small set of pure-Python packages.
###
packaging>=23.1
pygls>=1.0.1
lsprotocol>=2023.0.0a1
ruff>=0.0.274
flask @ git+https://github.com/pallets/flask.git@d92b64a
typing_extensions
scipy
numpy
pandas<2.0.0
matplotlib>=3.0.0
scikit-learn
rich
textual
jupyter>=1.0.0,<2.0.0
transformers[torch]
django<4.0.0
sqlalchemy
psycopg2-binary
trio<0.20
trio-websocket
trio-asyncio
trio-typing
trio-protocol
fastapi
typer
pydantic
uvicorn
traitlets


@@ -1,10 +0,0 @@
###
# A small set of pure-Python packages.
###
attrs==23.1.0
cattrs==23.1.2
lsprotocol==2023.0.0b1
packaging==23.2
pygls==1.1.1
typeguard==3.0.2
typing-extensions==4.8.0


@@ -1,39 +0,0 @@
#!/usr/bin/env bash
###
# Benchmark the installer against `pip`.
#
# Example usage:
#
# ./scripts/benchmarks/sync.sh ./scripts/benchmarks/requirements.txt
###
set -euxo pipefail
TARGET=${1}
###
# Installation with a cold cache.
###
hyperfine --runs 20 --warmup 3 \
--prepare "virtualenv --clear .venv" \
"./target/release/uv pip sync ${TARGET} --no-cache" \
--prepare "rm -rf /tmp/site-packages" \
"pip install -r ${TARGET} --target /tmp/site-packages --no-cache-dir --no-deps"
###
# Installation with a warm cache, similar to blowing away and re-creating a virtual environment.
###
hyperfine --runs 20 --warmup 3 \
--prepare "virtualenv --clear .venv" \
"./target/release/uv pip sync ${TARGET}" \
--prepare "rm -rf /tmp/site-packages" \
"pip install -r ${TARGET} --target /tmp/site-packages --no-deps"
###
# Installation with all dependencies already installed (no-op).
###
hyperfine --runs 20 --warmup 3 \
--setup "virtualenv --clear .venv && source .venv/bin/activate" \
"./target/release/uv pip sync ${TARGET}" \
"pip install -r ${TARGET} --no-deps"


@@ -1,17 +0,0 @@
#!/usr/bin/env sh
###
# Benchmark the uninstall command against `pip`.
#
# Example usage:
#
# ./scripts/benchmarks/uninstall.sh numpy
###
set -euxo pipefail
TARGET=${1}
hyperfine --runs 20 --warmup 3 --prepare "rm -rf .venv && virtualenv .venv && source activate .venv/bin/activate && pip install ${TARGET}" \
"./target/release/uv uninstall ${TARGET}" \
"pip uninstall -y ${TARGET}"


@@ -1,33 +0,0 @@
#!/usr/bin/env bash
###
# Benchmark the virtualenv initialization against `virtualenv`.
#
# Example usage:
#
# ./scripts/benchmarks/venv.sh
###
set -euxo pipefail
###
# Create a virtual environment without seed packages.
###
hyperfine --runs 20 --warmup 3 \
--prepare "rm -rf .venv" \
"./target/release/uv venv" \
--prepare "rm -rf .venv" \
"virtualenv --without-pip .venv" \
--prepare "rm -rf .venv" \
"python -m venv --without-pip .venv"
###
# Create a virtual environment with seed packages.
###
hyperfine --runs 20 --warmup 3 \
--prepare "rm -rf .venv" \
"./target/release/uv venv --seed" \
--prepare "rm -rf .venv" \
"virtualenv .venv" \
--prepare "rm -rf .venv" \
"python -m venv .venv"