This moves all docs about benchmarking and profiling into
CONTRIBUTING.md by moving the readme of `ruff_benchmark` and adding more
information on profiling.
We need to consolidate that documentation somehow, but I'm not convinced
that this is the best way (I tried subpages in mkdocs, but that didn't
seem good either), so I'm happy to take suggestions.
## Summary
Currently, it is possible to create a tag and then have the release
fail, which is a problem since we can't edit the tag
(https://github.com/charliermarsh/ruff/issues/4468). This changes the
release process so that the tag is created inside the release workflow.
The remaining failure mode is that we publish to PyPI but then creating
the tag or GitHub release fails; in that case we can restart the
workflow, and the PyPI upload is simply skipped because we use the
skip-existing option.
The release workflow is now started by a workflow dispatch with the tag,
instead of you creating the tag yourself. You can start the release workflow
without a tag to do a dry run that does not publish any artifacts. You
can optionally add a Git SHA to the workflow run, and it will verify that
the release runs on that commit.
This also adds docs on how to release and a small style improvement for
the maturin integration.
## Test Plan
Testing is hard since we can't do real releases; I've tested a minimized
workflow in a separate dummy repository.
This tackles three problems:
* pre-commit was slow because it ran cargo commands
* It was unclear what you need to run to get your PR to pass CI (and
those checks needed to be fast)
* You had to compile and run `cargo dev generate-all` separately, which
was slow
The first change is to remove from pre-commit all cargo commands except
running ruff itself. With `cargo run --bin ruff` already compiled it
takes about 7s on my machine. It would make sense to also use the ruff
pre-commit action here, even if checking ruff on ruff then lags one
release behind.
The contributing guide is now clear about what you need to run:
```shell
cargo clippy --workspace --all-targets --all-features -- -D warnings # Linting...
RUFF_UPDATE_SCHEMA=1 cargo test # Testing and updating ruff.schema.json
pre-commit run --all-files # rust and python formatting, markdown and python linting, etc.
```
Example timings from my machine:
* `cargo clippy --workspace --all-targets --all-features -- -D warnings`: 23s
* `RUFF_UPDATE_SCHEMA=1 cargo test`: 2min (recompiling), 1min (no code changes, this is mainly doc tests)
* `pre-commit run --all-files`: 7s
The exact numbers don't matter so much as the approximate experience (6s
is easier to just wait for than 1min, especially if you need to fix and
rerun). The biggest remaining block seems to be doc tests; I'm surprised
I didn't find any solution for speeding them up (nextest simply doesn't
run them at all). Also note that the formatter has its own tests, which
are much faster since they avoid linking ruff (`cargo test
ruff_python_formatter`).
The third change is to enable `cargo test` to update the schema. Similar
to `INSTA_UPDATE=always`, I've added `RUFF_UPDATE_SCHEMA=1` (name open
to bikeshedding), so `RUFF_UPDATE_SCHEMA=1 cargo test` updates the
schema, while `cargo test` still fails as expected if the repo isn't
up-to-date.
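As a minimal sketch of the pattern (not ruff's actual test code; the path and the generated contents below are placeholders), the schema test can branch on the environment variable:

```rust
use std::{env, fs};

#[test]
fn generated_schema_is_up_to_date() {
    // Placeholder: imagine this string comes from the real schema generator.
    let generated = r#"{ "title": "Ruff" }"#;
    let path = "ruff.schema.json";
    if env::var("RUFF_UPDATE_SCHEMA").is_ok() {
        // RUFF_UPDATE_SCHEMA=1 cargo test: rewrite the file on disk.
        fs::write(path, generated).unwrap();
    } else {
        // Plain cargo test: fail if the checked-in file is stale.
        let on_disk = fs::read_to_string(path).unwrap();
        assert_eq!(on_disk, generated, "run RUFF_UPDATE_SCHEMA=1 cargo test");
    }
}
```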
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
* Document codes.rs
* Refactor codes.rs before merging
Helper script:
```python
# %%
from pathlib import Path
codes = Path("crates/ruff/src/codes.rs").read_text().splitlines()
rules = Path("a.txt").read_text().strip().splitlines()
rule_map = {i.split("::")[-1]: i for i in rules}
# %%
codes_new = []
for line in codes:
    if ", Rule::" in line:
        left, right = line.split(", Rule::")
        right = right[:-2]
        line = left + ", " + rule_map[right] + "),"
    codes_new.append(line)
# %%
Path("crates/ruff/src/codes.rs").write_text("\n".join(codes_new))
```
Co-authored-by: Jonathan Plasse <13716151+JonathanPlasse@users.noreply.github.com>
I presume the reasoning for not including clippy in `pre-commit` was that pre-commit passes all the filenames to it. This can be turned off with `pass_filenames`, in which case it only runs once.
`cargo +nightly dev generate-all` is also added (when excluding `target`, it does not give false positives).
(The overhead of these commands is not much when the build is there. People can always choose to run only certain hooks with `pre-commit run [hook] --all-files`)
We already enforced pedantic clippy lints via the
following command in .github/workflows/ci.yaml:
cargo clippy --workspace --all-targets --all-features -- -D warnings -W clippy::pedantic
Additionally, adding `#![warn(clippy::pedantic)]` to all main.rs and lib.rs
files has the benefit that violations of pedantic clippy lints are also
reported when just running `cargo clippy` without any arguments and
are thereby also picked up by LSP[1] servers such as rust-analyzer[2].
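For illustration, the crate-level attribute goes at the top of each lib.rs or main.rs; the specific `allow` exception below is just an example, not the set of exceptions used in ruff:

```rust
// Top of a lib.rs or main.rs: enable pedantic lints as warnings crate-wide,
// then opt out of individual lints where needed (example exception only).
#![warn(clippy::pedantic)]
#![allow(clippy::module_name_repetitions)]
```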
However for rust-analyzer to run clippy you'll have to configure:
"rust-analyzer.check.command": "clippy",
in your editor.[3]
[1]: https://microsoft.github.io/language-server-protocol/
[2]: https://rust-analyzer.github.io/
[3]: https://rust-analyzer.github.io/manual.html#configuration
"origin" was accurate since ruff rules are currently always modeled
after one origin (except the Ruff-specific rules).
However, since we want to introduce a many-to-many mapping between codes
and rules, the term "origin" no longer makes much sense. Rules usually
don't have multiple origins: one linter implements a rule first and
then others implement it later (often inspired by another linter).
But we don't actually care much about where a rule originates when
mapping multiple rule codes to one rule implementation, so renaming
RuleOrigin to Linter is less confusing with the many-to-many system.
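As a rough sketch of the idea (these are not ruff's actual types, and the second code is hypothetical), several codes can resolve to one rule implementation, with `Linter` only describing where a code comes from:

```rust
// Not ruff's real data model; just illustrating many codes -> one rule.
#[derive(Debug, PartialEq, Eq)]
enum Linter {
    Pyflakes,
    HypotheticalOtherLinter,
}

#[derive(Debug, PartialEq, Eq)]
enum Rule {
    UnusedImport,
}

// The linter is only the namespace of the code, not where the rule "originates".
fn rule_for_code(linter: Linter, code: &str) -> Option<Rule> {
    match (linter, code) {
        (Linter::Pyflakes, "F401") => Some(Rule::UnusedImport),
        (Linter::HypotheticalOtherLinter, "X001") => Some(Rule::UnusedImport),
        _ => None,
    }
}

fn main() {
    assert_eq!(rule_for_code(Linter::Pyflakes, "F401"), Some(Rule::UnusedImport));
}
```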
Following up on #2018/#2019 discussion, this moves the readme's development-related bits to `CONTRIBUTING.md` to avoid duplication, and fixes up the commands accordingly 😄
I previously tried adding the `#[test_case()]` attribute macro and got
confused because the Rust compilation suddenly failed with:
error[E0658]: use of unstable library feature 'custom_test_frameworks': custom test frameworks are an unstable feature
which is quite a confusing error message. The solution is to just add
`use test_case::test_case;`, so this commit adds that line to the
example in CONTRIBUTING.md to spare others this source of confusion.
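For reference, here is a generic, self-contained example of the pattern (not the exact CONTRIBUTING.md snippet): with the import in place, the attribute expands into ordinary `#[test]` functions instead of triggering the unstable-feature error.

```rust
// Requires the test-case crate as a dev-dependency.
use test_case::test_case;

#[test_case(2, 2, 4; "two_plus_two")]
#[test_case(0, 5, 5; "zero_plus_five")]
fn addition(a: i32, b: i32, expected: i32) {
    assert_eq!(a + b, expected);
}
```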
By default only E* and F* lints are enabled. I previously followed the
CONTRIBUTING.md instructions to implement a TID* lint and was confused
why my lint wasn't being run.