(This is not possible to actually use until
https://github.com/astral-sh/ruff/pull/8854 is merged.)
ruff_python_formatter: add reStructuredText docstring formatting support
This commit makes use of the refactoring done in prior commits to slot
in reStructuredText support. Essentially, we add a new type of code
example and look for *both* literal blocks and code block directives.
Literal blocks are treated as Python by default because it seems to be a
[common
practice](https://github.com/adamchainz/blacken-docs/issues/195).
That is, literal blocks like this:
```
def example():
    """
    Here's an example::

        foo( 1 )

    All done.
    """
    pass
```
Will get reformatted. And code blocks (via reStructuredText directives)
will also get reformatted:
```
def example():
    """
    Here's an example:

    .. code-block:: python

        foo( 1 )

    All done.
    """
    pass
```
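For illustration, the directive example above would come out roughly like
this after formatting (a sketch assuming default settings, not an exact
snapshot):
```python
def example():
    """
    Here's an example:

    .. code-block:: python

        foo(1)

    All done.
    """
    pass
```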
When looking for a code block, it is possible for it to become invalid,
in which case we back out of looking for a code example and print the
lines out as they are. As with doctest formatting, if reformatting the
code would result in invalid Python or if the code collected from the
block is invalid, then formatting is also skipped.
A number of tests have been added to check both the formatting and
resetting behavior. Mixed indentation is also tested a fair bit, since
one of my initial attempts at dealing with mixed indentation ended up
not working.
I recommend working through this PR commit-by-commit. There is in
particular a somewhat gnarly refactoring before reST support is added.
Closes #8859
## Summary
We should avoid inlining the ellipsis in:
```python
def h():
    ...
    # bye
```
Just as we avoid inlining the ellipsis in:
```python
def h():
    # bye
    ...
```
Closes https://github.com/astral-sh/ruff/issues/8905.
In the course of working on #8859, I made a number of smallish refactors
to how code snippet formatting works. Most or all of these were
motivated by adding support for reStructuredText blocks. They have
some fundamentally different requirements than doctests, and there are a
lot more ways for reStructuredText blocks to become invalid.
(Commit-by-commit review is recommended as the commit messages provide
further context on each change. I split this off from ongoing work to
make review more manageable.)
This PR removes several uses of `unsafe`. I generally limited myself to
low hanging fruit that I could see. There are still a few remaining uses
of `unsafe` that looked a bit more difficult to remove (if possible at
all). But this gets rid of a good chunk of them.
I put each `unsafe` removal into its own commit with a justification for
why I did it. So I would encourage reviewing this PR commit-by-commit.
That way, we can legislate them independently. It's no problem to drop a
commit if we feel the `unsafe` should stay in that case.
This turns `string` into a parent module with a `docstring` sub-module.
I arranged things this way because there are parts of the `string`
module that the `docstring` module wants to know about (such as a
`NormalizedString`). The alternative I think would be to make
`docstring` a sibling module and expose more of `string`'s internals.
I think I overall like this change because it gives docstring handling a
bit more room to breathe. It has grown quite a bit with the addition of
code snippet formatting.
[This was suggested by
@charliermarsh.](https://github.com/astral-sh/ruff/pull/8811#discussion_r1401169531)
## Summary
This PR adds opt-in support for formatting doctests in docstrings. This
reflects initial support and it is intended to add support for Markdown
and reStructuredText Python code blocks in the future. But I believe
this PR lays the groundwork, and future additions for Markdown and reST
should be less costly to add.
It's strongly recommended to review this PR commit-by-commit. The last
few commits in particular implement the bulk of the work here and
represent the denser portions.
Some things worth mentioning:
* The formatter is itself not perfect, and it is possible for it to
produce invalid Python code. Because of this, reformatted code snippets
are checked for Python validity. If they aren't valid, then we
(unfortunately silently) bail on formatting that code snippet.
* There are a couple places where it would be nice to at least warn the
user that doctest formatting failed, but it wasn't clear to me what the
best way to do that is.
* I haven't yet run this in anger on a real world code base. I think
that should happen before merging.
Closes #7146
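As a rough illustration (a hypothetical snippet, not one of the added
tests), with the option enabled a doctest like the following would have
its `>>> add( 1,2 )` line rewritten to `>>> add(1, 2)`, provided the
reformatted snippet still parses as valid Python:
```python
def add(a, b):
    """Add two numbers.

    >>> add( 1,2 )
    3
    """
    return a + b
```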
## Test Plan
* [x] Pass the local test suite.
* [x] Scrutinize ecosystem changes.
* [x] Run this formatter on extant code and scrutinize the results.
(e.g., CPython, numpy.)
A follow-up to auto-generate the `FormatNodeRule` implementation for the
string part nodes. This is just a dummy implementation that is
unreachable because it's handled by the parent nodes.
## Summary
This PR updates the string nodes (`ExprStringLiteral`,
`ExprBytesLiteral`, and `ExprFString`) to account for implicit string
concatenation.
### Motivation
In Python, implicitly concatenated strings are joined while parsing
because the interpreter doesn't require the information for each part.
While that's feasible for an interpreter, it falls short for a static
analysis tool, where having such information is more useful. Currently,
various parts of the code use the lexer to get the individual string
parts.
One of the main challenges this solves is that of string formatting.
Currently, the formatter relies on the lexer to get the individual
string parts, and formats them, including the comments, accordingly. But,
with PEP 701, f-strings can also contain comments. Without this change,
it becomes very difficult to add support for f-string formatting.
### Implementation
The initial proposal was made in this discussion:
https://github.com/astral-sh/ruff/discussions/6183#discussioncomment-6591993.
There were various AST designs which were explored for this task which
are available in the linked internal document[^1].
The selected variant was the one where the nodes were kept as-is,
except that the `implicit_concatenated` field was removed and a new
value struct was added to each `Expr*` struct instead. This is a private
struct that contains the actual implementation of how the AST is
designed for both single and implicitly concatenated strings.
This implementation is achieved through an enum with two variants:
`Single` and `Concatenated` to avoid allocating a vector even for single
strings. There are various public methods available on the value struct
to query certain information regarding the node.
The nodes are structured in the following way:
```
ExprStringLiteral - "foo" "bar"
|- StringLiteral - "foo"
|- StringLiteral - "bar"

ExprBytesLiteral - b"foo" b"bar"
|- BytesLiteral - b"foo"
|- BytesLiteral - b"bar"

ExprFString - "foo" f"bar {x}"
|- FStringPart::Literal - "foo"
|- FStringPart::FString - f"bar {x}"
   |- StringLiteral - "bar "
   |- FormattedValue - "x"
```
[^1]: Internal document:
https://www.notion.so/astral-sh/Implicit-String-Concatenation-e036345dc48943f89e416c087bf6f6d9?pvs=4
#### Visitor
The way the nodes are structured is that the entire string, including
all the parts that are implicitly concatenated, is a single node
containing individual nodes for the parts. The previous section has a
representation of that tree for all the string nodes. This means that
new visitor methods are added to visit the individual parts of string,
bytes, and f-strings for `Visitor`, `PreorderVisitor`, and
`Transformer`.
## Test Plan
- `cargo insta test --workspace --all-features --unreferenced reject`
- Verify that the ecosystem results are unchanged
Fix an instability where await was followed by a breaking fluent style
expression:
```python
test_data = await (
    Stream.from_async(async_data)
    .flat_map_async()
    .map()
    .filter_async(is_valid_data)
    .to_list()
)
```
Note that this is technically a minor style change (see the ecosystem check)
Update to [Rust
1.74](https://blog.rust-lang.org/2023/11/16/Rust-1.74.0.html) and use
the new clippy lints table.
The update itself introduced a new clippy lint about superfluous hashes
in raw strings; the offending hashes got removed.
I moved our lint config from `rustflags` to the newly stabilized
[workspace.lints](https://doc.rust-lang.org/stable/cargo/reference/workspaces.html#the-lints-table).
One consequence is that we have to use `unsafe_code = "warn"` instead of
`"forbid"` because the latter now actually bans unsafe code:
```
error[E0453]: allow(unsafe_code) incompatible with previous forbid
  --> crates/ruff_source_file/src/newlines.rs:62:17
   |
62 |         #[allow(unsafe_code)]
   |                 ^^^^^^^^^^^ overruled by previous forbid
   |
   = note: `forbid` lint level was set on command line
```
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
Testing compatibility with the future stable Black style, I realized
the `ruff_python_formatter` dev main was lacking the
`--skip-magic-trailing-comma` option. This does not affect `ruff
format`.
Usage:
```shell
cargo run --bin ruff_python_formatter -p ruff_python_formatter -- --skip-magic-trailing-comma --emit stdout scratch.py
```
It seems as though using `include!(...)` to avoid the source code copy
breaks rust-analyzer. Namely, it treats the included file as unlinked,
and so any part of analysis (e.g., goto-definition) that needs that file
to reason about the code ends up failing.
## Summary
This PR updates the formatter to preserve trailing semicolon for Jupyter
Notebooks.
The motivation behind the change is that semicolons in notebooks are
typically used to hide the output, for example when plotting. This is
highlighted in the linked issue.
The conditions required for the trailing semicolon to be preserved are:
1. It should be a top-level statement that is the last statement in the
module.
2. For statements, it can be an assignment, annotated assignment, or
augmented assignment. Here, the target should be a single identifier,
i.e., multiple assignments or tuple unpacking aren't considered.
3. For expressions, any expression qualifies.
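For instance, in a hypothetical last cell of a notebook like the one
below, the trailing semicolon is what suppresses the plot output, so it
is preserved (assuming `matplotlib` is available):
```python
import matplotlib.pyplot as plt

# The last top-level statement is an expression statement; its trailing
# semicolon hides the output of `plot`, so the formatter keeps it.
plt.plot([1, 2, 3]);
```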
## Test Plan
Add a new integration test in `ruff_cli`. The test notebook basically
acts as documentation of which trailing semicolons are to be preserved.
fixes: #8254
## Summary
This PR fixes a bug in our formatter where a multiline lambda expression
statement was formatted over multiple lines without adding parentheses.
The PR "fixes" the problem by not splitting the lambda parameters if the
lambda expression is not parenthesized.
## Test Plan
Added test
**Summary** Previously, an own-line comment following a docstring, with
newline(s) before the first content statement, was treated as trailing
on the docstring, and we didn't insert a newline after the docstring as
Black would.
Before:
```python
class ModuleBrowser:
    """Browse module classes and functions in IDLE."""
    # This class is also the base class for pathbrowser.PathBrowser.
    def __init__(self, master, path, *, _htest=False, _utest=False):
        pass
```
After:
```python
class ModuleBrowser:
    """Browse module classes and functions in IDLE."""

    # This class is also the base class for pathbrowser.PathBrowser.
    def __init__(self, master, path, *, _htest=False, _utest=False):
        pass
```
I'm not entirely happy about hijacking
`handle_own_line_comment_between_statements`, but I don't know a better
spot to put it.
Fixes #7948
**Test Plan** Fixtures
We previously incorrectly treated byte strings in docstring position as
docstrings because black does so
(https://github.com/astral-sh/ruff/pull/8283#discussion_r1375682931,
https://github.com/psf/black/issues/4002), even though CPython doesn't
recognize them:
```console
$ python3.12
Python 3.12.0 (main, Oct 6 2023, 17:57:44) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def f():
...     b""" a"""
...
>>> print(str(f.__doc__))
None
```
## Summary
This PR inlines the formatting logic for `ExprNumberLiteral` and removes
the need for a dedicated `Format*` struct for each number type.
## Test Plan
`cargo test`
## Summary
This PR splits the `Constant` enum as individual literal nodes. It
introduces the following new nodes for each variant:
* `ExprStringLiteral`
* `ExprBytesLiteral`
* `ExprNumberLiteral`
* `ExprBooleanLiteral`
* `ExprNoneLiteral`
* `ExprEllipsisLiteral`
The main motivation behind this refactor is to introduce the new AST
node for implicit string concatenation in the coming PR. The elements of
that node will be either a string literal, a bytes literal, or an f-string,
which can be implemented using an enum. This means that a string or
bytes literal cannot be represented by `Constant::Str` /
`Constant::Bytes`, which creates an inconsistency.
This PR avoids that inconsistency by splitting the constant node into
its own literal nodes, literal being the more appropriate naming
convention from a static analysis tool perspective.
This also makes working with literals in the linter and formatter much
more ergonomic: for example, checking whether an expression is a string
literal can be done easily using `Expr::is_string_literal_expr` or by
matching against `Expr::StringLiteral`, as opposed to matching against
`ExprConstant` and the `Constant` enum. A few AST helper methods can be
simplified as well, which will be done in a follow-up PR.
This introduces a new `Expr::is_literal_expr` method which is the same
as `Expr::is_constant_expr`. There are also some intermediary changes
related to implicit string concatenation, kept fairly small so as to
avoid making this already-huge PR even bigger.
## Test Plan
1. Verify and update all of the existing snapshots (parser, visitor)
2. Verify that the ecosystem check output remains **unchanged** for both
the linter and formatter
### Formatter ecosystem check
#### `main`
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
#### `dhruv/constant-to-literal`
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
## Summary
This PR adds a new `Singleton` enum for the `PatternMatchSingleton`
node.
Earlier, the node was using the `Constant` enum, but the value for this
pattern can only be `None`, `True`, or `False`. With the coming PR
to remove `Constant`, this node required a new type to fill in.
This also has the benefit of narrowing the type down to only the
possible values for the node, as evidenced by the removal of `unreachable`.
## Test Plan
Update the AST snapshots and run `cargo test`.
Change
```python
"""Test docstring"""
a = 1
```
to
```python
"""Test docstring"""
a = 1
```
in preview style, but don't touch the docstring otherwise.
Do we want to ask black to also format the content of module level
docstrings? Seems inconsistent to me that we change function and class
docstring indentation/contents but not module docstrings.
Fixes https://github.com/astral-sh/ruff/issues/7995
**Summary** Prepare for the black preview style becoming the black
stable style at the end of the year.
This adds a new test file to compare stable and preview on some relevant
preview options in black, and makes `format_dev` understand the black
preview flag. I've added poetry as a project that uses preview.
I've implemented one specific deviation (collapsing of stub
implementations in non-stub files), which showed up in poetry for testing.
This also improves poetry compatibility from 0.99891 to 0.99919.
Fixes #7440
New compatibility stats:
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 35 |
| home-assistant | 0.99953 | 10596 | 189 |
| poetry | 0.99919 | 317 | 12 |
| transformers | 0.99963 | 2657 | 332 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99969 | 654 | 15 |
| zulip | 0.99970 | 1459 | 22 |
## Summary
Given:
```python
# comment
class A:
    def foo(self):
        pass
```
We need to insert an additional newline between `# comment` and `class
A`. We were missing this handling for the case in which `# comment` is a
leading comment on `class A`, as opposed to a trailing comment of some
preceding statement.
In practice, I think this only applies to the specific case in which a
class or function is the first statement in a module, and there's a
single empty line between a leading comment and that class or function.
If there are no empty lines, then the comment "sticks" to the
definition; if there are two or more, then `leading_comments` will
truncate appropriately. If the class or function is nested, then we only
need one empty line anyway.
Closes https://github.com/astral-sh/ruff/issues/8215.
## Test Plan
No change in similarity.
Before:
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
After:
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1648 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
## Summary
This PR removes the `todo!()` around `IpyEscapeCommand` in the
formatter.
The `NeedsParentheses` trait needs to be implemented, and it always returns
`Never`. The reason is that if an escape command is parenthesized,
then it isn't parsed as an escape command. IOW, parentheses
shouldn't be present around an escape command.
Similarly, the `CanSkipOptionalParenthesesVisitor` will skip
this node.
## Test Plan
Updated the `unformatted.ipynb` fixture with new cells containing
IPython escape commands and the corresponding snapshot was verified.
Also, tested it out in a few open source repositories containing
notebooks (`openai/openai-cookbook`, `huggingface/notebooks`).
#### New cells in `unformatted.ipynb`
**Cell 2**
```markdown
A markdown cell
```
**Cell 3**
```python
def some_function(foo, bar):
    pass
%matplotlib inline
```
**Cell 4**
```python
foo = %pwd
def some_function(foo,bar,):
    foo = %pwd
    print(foo
    )
```
fixes: #8204
## Summary
This PR fixes the bug where if a Notebook contained IPython syntax, then
the format command would fail. This was because the correct mode was not
being used while parsing through the formatter code path.
## Test Plan
This PR isn't the only requirement for Notebook formatting to start
working with IPython escape commands. The following PR in the stack is
required as well.
## Summary
Fixes https://github.com/astral-sh/ruff/issues/7448
Fixes https://github.com/astral-sh/ruff/issues/7892
I've removed automatic dangling comment formatting; we're doing manual
dangling comment formatting everywhere anyway (the
assert-all-comments-formatted check ensures this), and automatically
formatted dangling comments would break the formatting there.
## Test Plan
New test file.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Given an expression like `[x for (x) in y]`, we weren't skipping over
parentheses when searching for the `in` between `(x)` and `y`.
Closes https://github.com/astral-sh/ruff/issues/8053.
**Summary** Insert a newline after nested function and class
definitions, unless there is a trailing own-line comment.
For example, we need to format
```python
if platform.system() == "Linux":
if sys.version > (3, 10):
def f():
print("old")
else:
def f():
print("new")
f()
```
as
```python
if platform.system() == "Linux":
if sys.version > (3, 10):
def f():
print("old")
else:
def f():
print("new")
f()
```
even though `f()` is directly preceded by an if statement, not a
function or class definition. See the comments and fixtures for trailing
own line comment handling.
**Test Plan** I checked that the new content of `newlines.py` matches
black's formatting.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
**Summary** Handle comments before the default values of function
parameters correctly by inserting a line break instead of a space after
the equals sign where required.
```python
def f(
    a = # parameter trailing comment; needs line break
    1,
    b =
    # default leading comment; needs line break
    2,
    c = ( # the default leading can only be end-of-line with parentheses; no line break
        3
    ),
    d = (
        # own line leading comment with parentheses; no line break
        4
    )
)
```
Fixes #7603
**Test Plan** Added the different cases and one more complex case as
fixtures.
**Summary** Remove spaces from import statements such as
```python
import tqdm . tqdm
from tqdm . auto import tqdm
```
See also #7760 for a better solution.
**Test Plan** New fixtures
**Summary** Quoting of f-strings can change if they are triple quoted
and only contain single quotes inside.
Fixes #6841
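For reference, a minimal example of that shape (illustrative, not the
exact fixture): a triple-quoted f-string that only contains single
quotes inside, which is the case where its quotes may now change:
```python
name = "Bob"
# A triple-quoted f-string whose contents only use single quotes.
message = f'''He said 'hi' to {name}'''
```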
**Test Plan** New fixtures
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
This PR fixes the bug where the formatter would panic if a class/function with
decorators had a suppression comment.
The fix is to use the correct start location to find the `async`/`def`/`class`
keyword when decorators are present, which is the end of the last
decorator.
## Test Plan
Add test cases for the fix and update the snapshots.
## Summary
It turns out that _some_ identifiers can contain newlines --
specifically, dot-delimited import identifiers, like:
```python
import foo\
.bar
```
At present, we print all identifiers verbatim, which causes us to retain
the `\` in the formatted output. This also leads to violating some debug
assertions (see the linked issue, though that's a symptom of this
formatting failure).
This PR adds detection for import identifiers that contain newlines, and
formats them via `text` (slow) rather than `source_code_slice` (fast) in
those cases.
Closes https://github.com/astral-sh/ruff/issues/7734.
## Test Plan
`cargo test`
## Summary
At present, `quote-style` is used universally. However, [PEP
8](https://peps.python.org/pep-0008/) and [PEP
257](https://peps.python.org/pep-0257/) suggest that while either single
or double quotes are acceptable in general (as long as they're
consistent), docstrings and triple-quoted strings should always use
double quotes. In our research, the vast majority of Ruff users that
enable the `flake8-quotes` rules only enable them for inline strings
(i.e., non-triple-quoted strings).
Additionally, many Black forks (like Blue and Pyink) use double quotes
for docstrings and triple-quoted strings.
Our decision for now is to always prefer double quotes for triple-quoted
strings (which should include docstrings). Based on feedback, we may
consider adding additional options (e.g., a `"preserve"` mode, to avoid
changing quotes; or a `"multiline-quote-style"` to override this).
Closes https://github.com/astral-sh/ruff/issues/7615.
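A sketch of the resulting behavior, assuming a configuration with
`quote-style = "single"` (illustrative, not an exact snapshot):
```python
# Inline strings follow the configured single quotes:
greeting = 'hello'


def f():
    """Docstrings and other triple-quoted strings always use double quotes."""
    return greeting
```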
## Test Plan
`cargo test`
## Summary
Extends the pragma comment detection in the formatter to support
case-insensitive `noqa` (as supported by Ruff), plus a variety of other
pragmas (`isort:`, `nosec`, etc.).
Also extracts the detection out into the trivia crate so that we can
reuse it in the linter (see:
https://github.com/astral-sh/ruff/issues/7471).
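For illustration, these are the kinds of pragma comments the detection
is meant to recognize (an assumed sample, not the exact test cases):
```python
import os  # isort:skip

x = "a string that a linter might otherwise flag as too long for one line"  # noqa: E501
y = [1, 2, 3]  # NOQA
token = os.getenv("API_TOKEN")  # nosec
```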
## Test Plan
`cargo test`
## Summary
No-op refactor, but we can evaluate early if the first part of
`preserve_parentheses || has_comments` is `true`, and thus avoid looking
up the node comments.
## Test Plan
`cargo test`
## Summary
The formatting for tuple patterns is now intended to match that of `for`
loops:
- Always parenthesize single-element tuples.
- Don't break on the trailing comma in single-element tuples.
- For other tuples, preserve the parentheses, and insert if-breaks.
Closes https://github.com/astral-sh/ruff/issues/7681.
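A sketch of the intended style (illustrative, not the exact fixture):
```python
def classify(point):
    match point:
        case (x,):  # single-element tuple patterns are always parenthesized
            return x
        case (x, y):  # other tuple patterns preserve their parentheses
            return x + y
        case _:
            return None
```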
## Test Plan
`cargo test`
I got confused and refactored a bit; now the naming should be more
consistent. This is the basis for the range formatting work.
Changes:
* `format_module` -> `format_module_source` (format a string)
* `format_node` -> `format_module_ast` (format a program parsed into an
AST)
* Added `parse_ok_tokens` that takes `Token` instead of `Result<Token>`
* Call the source code `source` consistently
* Added a `tokens_and_ranges` helper
* `python_ast` -> `module` (because that's the type)
## Summary
Given:
```python
if True:
    if True:
        pass
    else:
        pass
        # a
        # b
        # c

else:
    pass
```
We want to preserve the newline after the `# c` (before the `else`).
However, the `last_node` ends at the `pass`, and the comments are
trailing comments on the `pass`, not trailing comments on the
`last_node` (the `if`). As such, when counting the trailing newlines on
the outer `if`, we abort as soon as we see the comment (`# a`).
This PR changes the logic to skip _all_ comments (even those with
newlines between them). This is safe as we know that there are no
"leading" comments on the `else`, so there's no risk of skipping those
accidentally.
Closes https://github.com/astral-sh/ruff/issues/7602.
## Test Plan
No change in compatibility.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99963 | 2587 | 319 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99979 | 3496 | 22 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99963 | 2587 | 319 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
Similar to tuples, a generator _can_ be parenthesized or
unparenthesized. Only search for bracketed comments if it contains its
own parentheses.
Closes https://github.com/astral-sh/ruff/issues/7623.
## Summary
Given:
```python
if True:
    if True:
        if True:
            pass
        #a
            #b
    #c
else:
    pass
```
When determining the placement of the various comments, we compute the
indentation depth of each comment, and then compare it to the depth of
the previous statement. It turns out this can lead to reordering
comments, e.g., above, `#b` is assigned as a trailing comment of `pass`,
and so gets reordered above `#a`.
This PR modifies the logic such that when we compute the indentation
depth of `#b`, we limit it to at most the indentation depth of `#a`. In
other words, when analyzing comments at the end of branches, we don't
let successive comments go any _deeper_ than their preceding comments.
Closes https://github.com/astral-sh/ruff/issues/7602.
## Test Plan
`cargo test`
No change in similarity.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99963 | 2587 | 319 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99979 | 3496 | 22 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99963 | 2587 | 319 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99979 | 3496 | 22 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
## Summary
Given:
```python
# -*- coding: utf-8 -*-
import random
# Defaults for arguments are defined here
# args.threshold = None;
logger = logging.getLogger("FastProject")
```
We want to count the number of newlines after `import random`, to ensure
that there's _at least one_, but up to two.
Previously, we used the end range of the statement (then skipped
trivia); instead, we need to use the end of the _last comment_. This is
similar to #7556.
Closes https://github.com/astral-sh/ruff/issues/7604.
## Summary
This is only used for the `level` field in relative imports (e.g., `from
..foo import bar`). It seems unnecessary to use a wrapper here, so this
PR changes to a `u32` directly.
## Summary
When we format the trailing comments on a clause body, we check if there
are any newlines after the last statement; if not, we insert one.
This logic didn't take into account that the last statement could itself
have trailing comments, as in:
```python
if True:
    pass
    # comment
else:
    pass
```
We were thus inserting a newline after the comment, like:
```python
if True:
    pass
    # comment

else:
    pass
```
In the context of function definitions, this led to an instability,
since we insert a newline _after_ a function, which would in turn lead
to the bug above appearing in the second formatting pass.
Closes https://github.com/astral-sh/ruff/issues/7465.
## Test Plan
`cargo test`
Small improvement in `transformers`, but no regressions.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99956 | 2587 | 404 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| **transformers** | **0.99957** | **2587** | **402** |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
## Summary
If a function has no parameters (and no comments within the parameters'
`()`), we're supposed to wrap the return annotation _whenever_ it
breaks. However, our `empty_parameters` test didn't properly account for
the case in which the parameters include a newline (but no other
content), like:
```python
def get_dashboards_hierarchy(
) -> Dict[Type['BaseDashboard'], List[Type['BaseDashboard']]]:
    """Get hierarchy of dashboards classes.

    Returns:
        Dict of dashboards classes.
    """
    dashboards_hierarchy = {}
```
This PR fixes that detection. Instead of lexing, it now checks if the
parameters itself is empty (or if it contains comments).
Closes https://github.com/astral-sh/ruff/issues/7457.
## Summary
The tokenizer was split into a forward and a backwards tokenizer. The
backwards tokenizer uses the same names as the forwards ones (e.g.
`next_token`). The backwards tokenizer gets the comment ranges that we
already built to skip comments.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Given a trailing operator comment in a unary expression, like:
```python
if (
    not # comment
    a):
    ...
```
We were attaching these to the operand (`a`), but formatting them in the
unary operator via special handling. Parents shouldn't format the
comments of their children, so this instead attaches them as dangling
comments on the unary expression. (No intended change in formatting.)
## Summary
Given a statement like:
```python
result = (
    f(111111111111111111111111111111111111111111111111111111111111111111111111111111111)
    + 1
)()
```
When we go to parenthesize the target of the assignment, we use
`maybe_parenthesize_expression` with `Parenthesize::IfBreaks`. This then
checks if the call on the right-hand side needs to be parenthesized, the
implementation of which looks like:
```rust
impl NeedsParentheses for ExprCall {
    fn needs_parentheses(
        &self,
        _parent: AnyNodeRef,
        context: &PyFormatContext,
    ) -> OptionalParentheses {
        if CallChainLayout::from_expression(self.into(), context.source())
            == CallChainLayout::Fluent
        {
            OptionalParentheses::Multiline
        } else if context.comments().has_dangling(self) {
            OptionalParentheses::Always
        } else {
            self.func.needs_parentheses(self.into(), context)
        }
    }
}
```
Checking for `self.func.needs_parentheses(self.into(), context)` is
problematic, since, as in the example above, `self.func` may _already_
be parenthesized -- in which case, we _don't_ want to parenthesize the
entire expression. If we do, we end up with this non-ideal formatting:
```python
result = (
    (
        f(
            111111111111111111111111111111111111111111111111111111111111111111111111111111111
        )
        + 1
    )()
)
```
This PR modifies the `NeedsParentheses` implementations for call chain
expressions to return `Never` if the inner expression has its own
parentheses, in which case, the formatting implementations for those
expressions will preserve them anyway.
Closes https://github.com/astral-sh/ruff/issues/7370.
## Test Plan
Zulip improves a bit, everything else is unchanged.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99981 | 2760 | 40 |
| transformers | 0.99944 | 2587 | 413 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99834 | 648 | 20 |
| zulip | 0.99956 | 1437 | 23 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99981 | 2760 | 40 |
| transformers | 0.99944 | 2587 | 413 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99834 | 648 | 20 |
| **zulip** | **0.99962** | **1437** | **22** |
## Summary
Fix all but one of the empty-line differences with the black preview
style in typeshed. The remaining differences involve breaking with type
comments and trailing commas in function definitions.
I compared the empty-line differences with the preview mode of black,
since stable has some oddities that would have been hard to replicate
(https://github.com/psf/black/issues/3861). Additionally, it assumes the
style proposed in https://github.com/psf/black/issues/3862.
An edge case that also surfaced with typeshed is newlines before
trailing module comments.
**main**
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99966 | 2760 | 58 |
| transformers | 0.99930 | 2587 | 447 |
| twine | 1.00000 | 33 | 0 |
| **typeshed** | 0.99978 | 3496 | **2173** |
| warehouse | 0.99825 | 648 | 22 |
| zulip | 0.99950 | 1437 | 27 |
**PR**
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99966 | 2760 | 58 |
| transformers | 0.99930 | 2587 | 447 |
| twine | 1.00000 | 33 | 0 |
| **typeshed** | 0.99983 | 3496 | **18** |
| warehouse | 0.99825 | 648 | 22 |
| zulip | 0.99950 | 1437 | 27 |
Closes #6723
## Test Plan
The main driver was the typeshed diff. I added new test cases for all
kinds of possible empty line combinations in stub files, test cases for
newlines before trailing module comments.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR adds the `--preview` and `--no-preview` options to the `format` command (hidden) and passes them through to the formatter.
## Test Plan
I added the `dbg(f.options().preview())` statement in `FormatNodeRule::fmt` and verified that the option gets correctly passed to the formatter.
Show header for formatter comment decoration info
**Summary** Show a header in the formatter comment decoration debug
output that shows which node is preceding/following/enclosing
(https://github.com/astral-sh/ruff/pull/6813#issuecomment-1708119550). I
kept this intentionally condensed to make it easy to use in a small
sidebar without vertical scrolling.
```console
$ cargo run --bin ruff_python_formatter -- --emit stdout --print-comments scratch.py
# Comment decoration: Range, Preceding, Following, Enclosing, Comment
17..20, Some((ParameterWithDefault, 6..10)), None, (Parameters, 5..22), "# a"
44..47, Some((StmtExpr, 28..39)), Some((StmtExpr, 52..60)), (StmtFunctionDef, 0..60), "# b"
77..80, None, None, (ExprList, 71..82), "# c"
{
    Node {
        kind: ParameterWithDefault,
        range: 6..10,
        source: `x=[]`,
    }: {
        ...
```
**Test Plan** It's debug output.
**Summary** The comment visitor used to rebuild the locator for every
comment. Instead, we now keep the locator on the builder. Follow-up to
#6813.
**Test Plan** No formatting changes.
## Summary
This PR modifies our between-statement comment handling such that
comments that are not separated from the following statement by any
newlines continue to be treated as leading comments on that statement,
but comments that _are_ separated are instead formatted as trailing
comments on the preceding statement.
See, e.g., the originating snippet:
```python
DEFAULT_TEMPLATE = "flatpages/default.html"
# This view is called from FlatpageFallbackMiddleware.process_response
# when a 404 is raised, which often means CsrfViewMiddleware.process_view
# has not been called even if CsrfViewMiddleware is installed. So we need
# to use @csrf_protect, in case the template needs {% csrf_token %}.
# However, we can't just wrap this view; if no matching flatpage exists,
# or a redirect is required for authentication, the 404 needs to be returned
# without any CSRF checks. Therefore, we only
# CSRF protect the internal implementation.
def flatpage(request, url):
    pass
```
Here, we need to ensure that the `def flatpage` is preceded by two empty
lines. However, we want those two empty lines to be enforced from the
_end_ of the comment block, _unless_ the comments are directly atop the
`def flatpage`.
I played with this a bit, and I think the simplest conceptual model and
implementation is to instead treat those as trailing comments on the
preceding node. The main difficulty with this approach is that, in order
to be fully compatible with Black, we'd sometimes need to insert
newlines _between_ the preceding node and its trailing comments. See,
e.g.:
```python
def func():
    ...
# comment
x = 1
```
In this case, we'd need to insert two blank lines between `def func():
...` and `# comment`, but `# comment` is a trailing comment on `def
func(): ...`. So, we'd need to take this case into account in the
various nodes that _require_ newlines after them: functions, classes,
and imports. After some discussion, we've opted _not_ to support this,
and just treat these as trailing comments -- so we won't insert newlines
there. This means our handling is still identical to Black's on
Black-formatted code, but avoids moving such trailing comments on
unformatted code.
I dislike that the empty handling is so complex, and that it's split
between so many different nodes, but this is really tricky. Continuing
to treat these as leading comments is very difficult too, since we'd
need to do similar tricks for the leading comment handling in those
nodes, and influencing leading comments is even harder, since they're
all formatted _before_ the node itself.
Closes https://github.com/astral-sh/ruff/issues/6761.
## Test Plan
`cargo test`
Surprisingly, it doesn't change the similarity at all (apart from a
0.00001 change in CPython), but I manually confirmed that it did fix the
originating issue in Django.
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76082 |
| django | 0.99921 |
| transformers | 0.99854 |
| twine | 0.99982 |
| typeshed | 0.99953 |
| warehouse | 0.99648 |
| zulip | 0.99928 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76081 |
| django | 0.99921 |
| transformers | 0.99854 |
| twine | 0.99982 |
| typeshed | 0.99953 |
| warehouse | 0.99648 |
| zulip | 0.99928 |
## Summary
This PR adds comment handling for comments between the `=` and the
`value` for keywords, as in the following cases:
```python
func(
    x # dangling
    = # dangling
    # dangling
    1,
    ** # dangling
    y
)
```
(Comments after the `**` were already handled in some cases, but I've
unified the handling with the `=` handling.)
Note that, previously, comments between the `**` and its value were
rendered as trailing comments on the value (so they'd appear after `y`).
This struck me as odd since it effectively re-ordered the comment with
respect to its closest AST node (the value). I've made them leading
comments, though I don't know that that's a significant improvement. I
could also imagine us leaving them where they are.
## Summary
As a small quality-of-life improvement, the locator can now slice like
`locator.slice(stmt)` instead of requiring
`locator.slice(stmt.range())`.
## Test Plan
`cargo test`
**Summary** Add recursive formatting based on `ruff check` file
discovery for `ruff format`, as a prototype for the formatter alpha.
This allows e.g. `format ../projects/django/`. It's still lacking
support for any settings except line length.
Note just like the existing `ruff format` this will become part of the
production build, i.e. you'll be able to use it - hidden by default and
with a prominent warning - with `ruff format .` after the next release.
Error handling works in my manual tests (the colors do also work):
```
$ target/debug/ruff format scripts/
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended for internal use only.
```
(the above changes `add_rule.py` where we have the wrong bin op
breaking)
```
$ target/debug/ruff format ../projects/django/
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended for internal use only.
Failed to format /home/konsti/projects/django/tests/test_runner_apps/tagged/tests_syntax_error.py: source contains syntax errors: ParseError { error: UnrecognizedToken(Name { name: "syntax_error" }, None), offset: 131, source_path: "<filename>" }
```
```
$ target/debug/ruff format a
warning: `ruff format` is a work-in-progress, subject to change at any time, and intended for internal use only.
Failed to read /home/konsti/ruff/a/d.py: Permission denied (os error 13)
```
**Test Plan** Missing! I'm not sure if it's worth building tests at this
stage or what they should look like.
## Summary
The motivation here is that this enables us to implement `Ranged` in
crates that don't depend on `ruff_python_ast`.
Largely a mechanical refactor with a lot of regex, Clippy help, and
manual fixups.
## Test Plan
`cargo test`
## Summary
This PR introduces two new AST nodes to improve the representation of
`PatternMatchClass`. As a reminder, `PatternMatchClass` looks like this:
```python
case Point2D(0, 0, x=1, y=2):
    ...
```
Historically, this was represented as a vector of patterns (for the `0,
0` portion) and parallel vectors of keyword names (for `x` and `y`) and
values (for `1` and `2`). This introduces a bunch of challenges for the
formatter, but importantly, it's also really different from how we
represent similar nodes, like arguments (`func(0, 0, x=1, y=2)`) or
parameters (`def func(x, y)`).
So, firstly, we now use a single node (`PatternArguments`) for the
entire parenthesized region, making it much more consistent with our
other nodes. So, above, `PatternArguments` would be `(0, 0, x=1, y=2)`.
Secondly, we now have a `PatternKeyword` node for `x=1` and `y=2`. This
is much more similar to the how `Keyword` is represented within
`Arguments` for call expressions.
Closes https://github.com/astral-sh/ruff/issues/6866.
Closes https://github.com/astral-sh/ruff/issues/6880.
## Summary
Adds support for `PatternMatchMapping` -- i.e., cases like:
```python
match foo:
    case {"a": 1, "b": 2, **rest}:
        pass
```
Unfortunately, this node has _three_ kinds of dangling comments:
```python
{ # "open parenthesis comment"
    key: pattern,
    ** # end-of-line "double star comment"
    # own-line "double star comment"
    rest # end-of-line "after rest comment"
    # own-line "after rest comment"
}
```
Some of the complexity comes from the fact that in `**rest`, `rest` is
an _identifier_, not a node, so we have to handle comments _after_ it as
dangling on the enclosing node, rather than trailing on `**rest`. (We
could change the AST to use `PatternMatchAs` there, which would be more
permissive than the grammar but not totally crazy -- `PatternMatchAs` is
used elsewhere to mean "a single identifier".)
Closes https://github.com/astral-sh/ruff/issues/6644.
## Test Plan
`cargo test`
## Summary
This PR ensures that if an expression has an own-line leading comment
_before_ its open parentheses, we render it as such.
For example, given:
```python
[ # foo
    # bar
    ( # baz
        1
    )
]
```
On `main`, we format as:
```python
[ # foo
    (
        # bar
        # baz
        1
    )
]
```
As of this PR, we format as:
```python
[ # foo
    # bar
    ( # baz
        1
    )
]
```
## Test Plan
`cargo test`
## Summary
This PR ensures that we handle bracketed comments on sequences, like `#
comment` here:
```python
match x:
    case [ # comment
        1, 2
    ]:
        pass
```
The handling is very similar to other, similar nodes, except that we do
need some special logic to determine whether the sequence is
parenthesized, similar to our logic for tuples.
## Test Plan
`cargo test`
## Summary
This PR modifies our formatting of comments around the `.` in an
attribute. Specifically, the goal here is to avoid _reordering_
comments, and the net effect is that we generally leave comments
where they are when dealing with comments around the dot (which
you can also think of as comments between attributes).
All comments around the dot are now treated as dangling and formatted
manually, with the exception of end-of-line or parenthesized comments on
the value, like those marked as trailing here, which remain trailing:
```python
(
    (
        a # trailing end-of-line
        # trailing own-line
    ) # dangling before dot end-of-line
    .b # trailing end-of-line
)
```
Closes https://github.com/astral-sh/ruff/issues/6823.
## Test Plan
`cargo test`
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76050 |
| django | 0.99820 |
| transformers | 0.99800 |
| twine | 0.99876 |
| typeshed | 0.99953 |
| warehouse | 0.99615 |
| zulip | 0.99729 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76050 |
| django | 0.99820 |
| transformers | 0.99800 |
| twine | 0.99876 |
| typeshed | 0.99953 |
| warehouse | 0.99615 |
| zulip | 0.99729 |
## Summary
This PR fixes the duplicate-parenthesis problem that's visible in the
tests from https://github.com/astral-sh/ruff/pull/6799. The issue is
that we might have parentheses around the entire match-case pattern,
like in `(1)` here:
```python
match foo:
    case (1):
        y = 0
```
In this case, the inner expression (`1`) will _think_ it's
parenthesized, but we'll _also_ detect the parentheses at the case level
-- so they get rendered by the case, then again by the expression.
Instead, if we detect parentheses at the case level, we can force-off
the parentheses for the pattern using a design similar to the way we
handle parentheses on expressions.
Closes https://github.com/astral-sh/ruff/issues/6753.
## Test Plan
`cargo test`
## Summary
This is effectively #6608, but with additional tests.
We aren't properly handling parenthesized patterns, but that needs to be
dealt with separately as it's somewhat involved.
Closes #6555
## Summary
Follows up on
https://github.com/astral-sh/ruff/pull/6652#discussion_r1300871033 with
some modifications to the `PatternMatchAs` comment handling.
Specifically, any comments between the `as` and the end are now
formatted as dangling, and we now insert some newlines in the
appropriate places.
## Test Plan
`cargo test`
## Summary
Ensures that we retain the open-parenthesis comment in cases like:
```python
match pattern_comments:
    case ( # leading
        only_leading
    ):
        ...
```
Previously, this was treated as a leading comment on `only_leading`.
## Test Plan
`cargo test`
## Summary
For imports, we enforce that there's _at least_ one empty line after an
import (assuming the next statement is _not_ an import), but allow up to
two at the module level.
Closes https://github.com/astral-sh/ruff/issues/6760.
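A sketch of the rule on a small module (illustrative):
```python
import os
import sys

# At least one empty line is enforced between the last import and the next
# non-import statement; at module level, up to two empty lines are allowed.
print(os.sep, sys.platform)
```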
## Test Plan
`cargo test`
Avoid the nesting in a macro by using the new `WithNodeLevel` to
`PyFormatter` deref. No changes otherwise.
I wanted to follow this up with quickly fixing the typeshed empty line
rules, but they turned out a lot more complex than I had anticipated.
## Summary
We're using LibCST to ensure that we return the full parenthesized range
of an expression, for display purposes. We can just use
`parenthesized_range` which is more efficient and removes one LibCST
dependency.
## Test Plan
`cargo test`
## Summary
If a lambda doesn't contain any parameters, or any parameter _tokens_
(like `*`), we can use `None` for the parameters. This feels like a
better representation to me, since, e.g., what should the `TextRange` be
for a non-existent set of parameters? It also allows us to remove
several sites where we check if the `Parameters` is empty by seeing if
it contains any arguments, so semantically, we're already trying to
detect and model around this elsewhere.
Changing this also fixes a number of issues with dangling comments in
parameter-less lambdas, since those comments are now automatically
marked as dangling on the lambda. (As-is, we were also doing something
not-great whereby the lambda was responsible for formatting dangling
comments on the parameters, which has been removed.)
Closes https://github.com/astral-sh/ruff/issues/6646.
Closes https://github.com/astral-sh/ruff/issues/6647.
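For illustration (a hypothetical case along the lines of the linked
issues), a comment inside a parameter-less lambda is now a dangling
comment on the lambda itself rather than on an empty `Parameters` node:
```python
callback = (
    lambda:
    # a comment inside a parameter-less lambda
    None
)
```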
## Test Plan
`cargo test`
## Summary
The motivating code here was:
```python
with test as (
    # test
    foo):
    pass
```
Which we were formatting as:
```python
with test as
# test
(foo):
    pass
```
`with` statements are oddly difficult. This PR makes a bunch of subtle
modifications and adds a more extensive test suite. For example, we now
only preserve parentheses if there's more than one `WithItem` _or_ a
trailing comma; before, we always preserved.
Our formatting is _not_ the same as Black's, but here's a diff of our
formatted code vs. Black's for the `with.py` test suite. The primary
difference is that we tend to break parentheses when they contain
comments rather than move them to the end of the line (this is a
consistent difference that we make across the codebase):
```diff
diff --git a/crates/ruff_python_formatter/foo.py b/crates/ruff_python_formatter/foo.py
index 85e761080..31625c876 100644
--- a/crates/ruff_python_formatter/foo.py
+++ b/crates/ruff_python_formatter/foo.py
@@ -1,6 +1,4 @@
-with (
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
-), aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa:
+with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa, aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa:
...
# trailing
@@ -16,28 +14,33 @@ with (
# trailing
-with a, b: # a # comma # c # colon
+with (
+ a, # a # comma
+ b, # c
+): # colon
...
with (
- a as # a # as
- # own line
- b, # b # comma
+ a as ( # a # as
+ # own line
+ b
+ ), # b # comma
c, # c
): # colon
... # body
# body trailing own
-with (
- a as # a # as
+with a as ( # a # as
# own line
- bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb # b
-):
+ bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
+): # b
pass
-with (a,): # magic trailing comma
+with (
+ a,
+): # magic trailing comma
...
@@ -47,6 +50,7 @@ with a: # should remove brackets
with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as c:
...
+
with (
# leading comment
a
@@ -74,8 +78,7 @@ with (
with (
a # trailing same line comment
# trailing own line comment
- as b
-):
+) as b:
...
with (
@@ -87,7 +90,9 @@ with (
with (
a
# trailing own line comment
-) as b: # trailing as same line comment # trailing b same line comment
+) as ( # trailing as same line comment
+ b
+): # trailing b same line comment
...
with (
@@ -124,18 +129,24 @@ with ( # comment
...
with ( # outer comment
- CtxManager1() as example1, # inner comment
+ ( # inner comment
+ CtxManager1()
+ ) as example1,
CtxManager2() as example2,
CtxManager3() as example3,
):
...
-with CtxManager() as example: # outer comment
+with ( # outer comment
+ CtxManager()
+) as example:
...
with ( # outer comment
CtxManager()
-) as example, CtxManager2() as example2: # inner comment
+) as example, ( # inner comment
+ CtxManager2()
+) as example2:
...
with ( # outer comment
@@ -145,7 +156,9 @@ with ( # outer comment
...
with ( # outer comment
- (CtxManager1()), # inner comment
+ ( # inner comment
+ CtxManager1()
+ ),
CtxManager2(),
) as example:
...
@@ -179,7 +192,9 @@ with (
):
pass
-with a as (b): # foo
+with a as ( # foo
+ b
+):
pass
with f(
@@ -209,17 +224,13 @@ with f(
) as b, c as d:
pass
-with (
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-) as b:
+with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b:
pass
with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b:
pass
-with (
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-) as b, c as d:
+with aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b, c as d:
pass
with (
@@ -230,6 +241,8 @@ with (
pass
with (
- aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
-) as b, c as d:
+ aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+ + bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb as b,
+ c as d,
+):
pass
```
Closes https://github.com/astral-sh/ruff/issues/6600.
## Test Plan
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.75473 |
| django | 0.99804 |
| transformers | 0.99618 |
| twine | 0.99876 |
| typeshed | 0.74292 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.75473 |
| django | 0.99804 |
| transformers | 0.99618 |
| twine | 0.99876 |
| typeshed | 0.74292 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
`cargo test`
## Summary
Attaches comments around the `:=` operator in a named expression as
dangling, and formats them manually in the `named_expr.rs` formatter.
Closes https://github.com/astral-sh/ruff/issues/5695.
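For reference, a hypothetical input of the kind this handles, with
comments on both sides of the `:=`:
```python
def compute():
    return 42


if (
    value  # comment before :=
    :=  # comment after :=
    compute()
):
    print(value)
```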
## Test Plan
`cargo test`
## Summary
I noticed some inconsistencies around uses of `.range.start()`, structs
that have a `TextRange` field but don't implement `Ranged`, etc.
## Test Plan
`cargo test`
## Summary
Closes https://github.com/astral-sh/ruff/issues/6384, although I think
the issue was fixed already on main, for the most part.
The linked issue is around formatting expressions like:
```python
def test():
    (
        yield
        #comment 1
        * # comment 2
        # comment 3
        test # comment 4
    )
```
On main, prior to this PR, we now format like:
```python
def test():
(
yield (
# comment 1
# comment 2
# comment 3
*test
) # comment 4
)
```
Which strikes me as reasonable. (We can't test this, since it's a syntax
error for our parser; it's also a syntax error in both cases from
CPython's perspective.)
Meanwhile, Black does:
```python
def test():
(
yield
# comment 1
* # comment 2
# comment 3
test # comment 4
)
```
So our formatting differs in that we move comments between the star and
the expression above the star.
As of this PR, we also support formatting this input, which is valid:
```python
def test():
(
yield
#comment 1
* # comment 2
# comment 3
test, # comment 4
1
)
```
Like:
```python
def test():
(
yield (
# comment 1
(
# comment 2
# comment 3
*test, # comment 4
1,
)
)
)
```
There were two fixes here: (1) marking starred comments as dangling and
formatting them properly; and (2) supporting parenthesized comments for
tuples that don't contain their own parentheses, as is often the case
for yielded tuples (previously, we hit a debug assert).
Note that this diff
## Test Plan
`cargo test`
## Summary
Allows for proper lexing of tokens like `->`.
The main challenge is to ensure that our forward and backwards
representations are the same for cases like `===`. Specifically, we want
that to lex as `==` followed by `=` regardless of whether it's a
forwards or backwards lex. To do so, we identify the range of the
sequential characters (the full span of `===`), lex it forwards, then
return the last token.
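The idea, as a minimal Python sketch (a toy operator table and hypothetical helper names, not Ruff's actual lexer):
```python
# Toy operator table; Ruff's real lexer handles the full token set.
OPERATORS = {"=", "==", "!=", "<", "<=", ">", ">=", "-", "->"}
OPERATOR_CHARS = set("".join(OPERATORS))


def lex_forward(text: str) -> list[str]:
    """Greedily lex a run of operator characters from left to right."""
    tokens, i = [], 0
    while i < len(text):
        # Prefer the longest operator that matches at this position.
        for width in (2, 1):
            candidate = text[i : i + width]
            if candidate in OPERATORS:
                tokens.append(candidate)
                i += len(candidate)
                break
        else:
            raise ValueError(f"unexpected character {text[i]!r}")
    return tokens


def last_token_before(line: str, offset: int) -> str:
    """Backwards lex: find the contiguous run of operator characters ending
    at `offset`, lex it forwards, and return the final token."""
    start = offset
    while start > 0 and line[start - 1] in OPERATOR_CHARS:
        start -= 1
    return lex_forward(line[start:offset])[-1]


# `===` lexes as `==` followed by `=`, regardless of direction:
assert last_token_before("a ===", 5) == "="
# `->` comes back as a single token when walking backwards:
assert last_token_before("def f() ->", 10) == "->"
```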
## Test Plan
`cargo test`
## Summary
This PR adds support for parenthesized comments. A parenthesized comment
is a comment that appears within a parenthesis, but not within the range
of the expression enclosed by the parenthesis. For example, the comment
here is a parenthesized comment:
```python
if (
# comment
True
):
...
```
The parentheses enclose the `True`, but the range of `True` doesn’t
include the `# comment`.
There are at least two problems associated with parenthesized comments:
(1) associating the comment with the correct (i.e., enclosed) node; and
(2) formatting the comment correctly, once it has been associated with
the enclosed node.
The solution proposed here for (1) is to search for parentheses between
preceding and following node, and use open and close parentheses to
break ties, rather than always assigning to the preceding node.
For (2), we handle these special parenthesized comments in `FormatExpr`.
The biggest risk with this approach is that we forget some codepath that
force-disables parenthesization (by passing in `Parentheses::Never`).
I've audited all usages of that enum and added additional handling +
test coverage for such cases.
Closes https://github.com/astral-sh/ruff/issues/6390.
## Test Plan
`cargo test` with new cases.
Before:
| project | similarity index |
|--------------|------------------|
| build | 0.75623 |
| cpython | 0.75472 |
| django | 0.99804 |
| transformers | 0.99618 |
| typeshed | 0.74233 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
After:
| project | similarity index |
|--------------|------------------|
| build | 0.75623 |
| cpython | 0.75472 |
| django | 0.99804 |
| transformers | 0.99618 |
| typeshed | 0.74237 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
## Summary
Unlike other statements, Black always adds a trailing comma if an
import-from statement breaks with a single import member. I believe this
is for compatibility with isort -- see
09f5ee3a19,
https://github.com/psf/black/issues/127, or
66648c528a/src/black/linegen.py (L1452)
for the current version.
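For example (illustrative; the names are made up), a single-member import that has to break gets a trailing comma:
```python
from example.package.some_module import a_member_with_a_sufficiently_long_name_to_exceed_the_line_width
```
is formatted as:
```python
from example.package.some_module import (
    a_member_with_a_sufficiently_long_name_to_exceed_the_line_width,
)
```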
## Test Plan
`cargo test`, notice that a big chunk of the compatibility suite is
removed.
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.75472 |
| django | 0.99804 |
| transformers | 0.99618 |
| twine | 0.99876 |
| typeshed | 0.74233 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.75472 |
| django | 0.99804 |
| transformers | 0.99618 |
| twine | 0.99876 |
| typeshed | 0.74260 |
| warehouse | 0.99601 |
| zulip | 0.99727 |
## Summary
Instead, we set an `is_star` flag on `Stmt::Try`. This is similar to the
pattern we've migrated towards for `Stmt::For` (removing
`Stmt::AsyncFor`) and friends. While these are significant differences
for an interpreter, we tend to handle these cases identically or nearly
identically.
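For reference, the flag distinguishes the two syntactic forms (the starred form requires Python 3.11+):
```python
try:
    ...
except ValueError:  # Stmt::Try with is_star=False
    ...

try:
    ...
except* ValueError:  # Stmt::Try with is_star=True (PEP 654 exception groups)
    ...
```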
## Test Plan
`cargo test`
## Summary
This PR adds handling for comments on open parentheses in parenthesized
context managers. For example, given:
```python
with ( # comment
CtxManager1() as example1,
CtxManager2() as example2,
CtxManager3() as example3,
):
...
```
We want to preserve that formatting. (Black does the same.) On `main`,
we format as:
```python
with (
# comment
CtxManager1() as example1,
CtxManager2() as example2,
CtxManager3() as example3,
):
...
```
It's very similar to how `StmtImportFrom` is handled.
Note that this case _isn't_ covered by the "parenthesized comment"
proposal, since this is a comment on the statement that would typically
be attached to the first `WithItem`, and the `WithItem` _itself_ can
have parenthesized comments, like:
```python
with ( # comment
(
CtxManager1() # comment
) as example1,
CtxManager2() as example2,
CtxManager3() as example3,
):
...
```
## Test Plan
`cargo test`
Confirmed no change in similarity score.
## Summary
In https://github.com/astral-sh/ruff/pull/6512, we added a flag to the
AST to mark implicitly-concatenated string expressions. This PR makes
use of that flag to remove the `is_implicit_concatenation` method.
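For context, an implicitly-concatenated string expression is built from adjacent literals with no operator between them:
```python
# The two literals form a single (implicitly concatenated) string expression.
message = "Hello, " "world"
```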
## Test Plan
`cargo test`
## Summary
The bracketed-end-of-line comment rule is meant to assign comments like
this as "immediately following the bracket":
```python
f( # comment
1
)
```
However, the logic was such that we treated this equivalently:
```python
f(
( # comment
1
)
)
```
This PR modifies the placement logic to ensure that we only skip the
opening bracket, and not any nested brackets. The above is now formatted
as:
```python
f(
(
# comment
1
)
)
```
(But will be corrected once we handle parenthesized comments properly.)
## Test Plan
`cargo test`
Confirmed no change in similarity score.
**Summary** Implement docstring formatting
**Test Plan** Matches black's `docstring.py` fixture exactly; added some
new cases for behavior that is hard to debug with black and for cases
that black doesn't cover.
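As a simple illustration (not exhaustive; the exact behavior is pinned by the fixture), over-indented continuation lines are re-indented to the docstring's indentation:
```python
def f():
    """Summary line.

            Over-indented body line.
    """
```
is formatted as:
```python
def f():
    """Summary line.

    Over-indented body line.
    """
```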
similarity index:
main:
zulip: 0.99702
django: 0.99784
warehouse: 0.99585
build: 0.75623
transformers: 0.99469
cpython: 0.75989
typeshed: 0.74853
this branch:
zulip: 0.99702
django: 0.99784
warehouse: 0.99585
build: 0.75623
transformers: 0.99464
cpython: 0.75517
typeshed: 0.74853
The regression in transformers is actually an improvement in a file they
don't format with black (they run `black examples tests src utils
setup.py conftest.py`, the difference is in hubconf.py). cpython doesn't
use black.
Closes #6196
## Summary
This PR modifies our logic for wrapping return type annotations.
Previously, we _always_ wrapped the annotation in parentheses if it
expanded; however, Black only exhibits this behavior when the function's
parameter list is empty (i.e., it doesn't and can't break). In other cases,
it uses the normal parenthesization rules, allowing nodes to bring their
own parentheses.
For example, given:
```python
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx() -> Set[
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
]:
...
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx(x) -> Set[
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
]:
...
```
Black will format as:
```python
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx() -> (
Set[
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
]
):
...
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
x,
) -> Set[
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
]:
...
```
Whereas, prior to this PR, Ruff would format as:
```python
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx() -> (
Set[
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
]
):
...
def xxxxxxxxxxxxxxxxxxxxxxxxxxxx(
x,
) -> (
Set[
"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
]
):
...
```
Closes https://github.com/astral-sh/ruff/issues/6431.
## Test Plan
Before:
- `zulip`: 0.99702
- `django`: 0.99784
- `warehouse`: 0.99585
- `build`: 0.75623
- `transformers`: 0.99470
- `cpython`: 0.75988
- `typeshed`: 0.74853
After:
- `zulip`: 0.99724
- `django`: 0.99791
- `warehouse`: 0.99586
- `build`: 0.75623
- `transformers`: 0.99474
- `cpython`: 0.75956
- `typeshed`: 0.74857
## Summary
This PR fixes some misformattings around optional parentheses for
expressions.
I first noticed that we were misformatting this:
```python
return (
unicodedata.normalize("NFKC", s1).casefold()
== unicodedata.normalize("NFKC", s2).casefold()
)
```
The above is stable Black formatting, but we were doing:
```python
return unicodedata.normalize("NFKC", s1).casefold() == unicodedata.normalize(
"NFKC", s2
).casefold()
```
Above, the "last" expression is a function call, so our
`can_omit_optional_parentheses` was returning `true`...
However, it turns out that Black treats function calls differently
depending on whether or not they have arguments -- presumably because
they'll never split empty parentheses, and so they're functionally
non-useful. On further investigation, I believe this applies to all
parenthesized expressions. If Black can't split on the parentheses, it
doesn't leverage them when removing optional parentheses.
## Test Plan
Nice increase in similarity scores.
Before:
- `zulip`: 0.99702
- `django`: 0.99784
- `warehouse`: 0.99585
- `build`: 0.75623
- `transformers`: 0.99470
- `cpython`: 0.75989
- `typeshed`: 0.74853
After:
- `zulip`: 0.99705
- `django`: 0.99795
- `warehouse`: 0.99600
- `build`: 0.75623
- `transformers`: 0.99471
- `cpython`: 0.75989
- `typeshed`: 0.74853
## Summary
This PR adds formatting support for `MatchCase` node with subs for the
`Pattern`
nodes.
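For example (illustrative), the kind of input this covers, including comments attached to the case nodes:
```python
command = "go north"

match command.split():
    # leading comment on the first case
    case [action]:  # trailing comment
        print(action)
    case [action, obj]:
        print(action, obj)
    case _:
        pass
```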
## Test Plan
Added test cases for case node handling with comments, newlines.
resolves: #6299
## Summary
The bug was happening in this
[loop](75f402eb82/crates/ruff_python_formatter/src/comments/placement.rs (L545)).
Basically, In the first iteration of the loop, the `comment_indentation`
is bigger than `child_indentation` (`comment_indentation` is 7 and
`child_indentation` is 4) making the `Ordering::Greater` branch execute.
Inside the `Ordering::Greater` branch, the `if` block gets executed,
resulting in the update of these variables.
```rust
parent_body = current_body;
current_body = Some(last_child_in_current_body);
last_child_in_current_body = nested_child;
```
In the second iteration of the loop, `comment_indentation` is smaller
than `child_indentation` (`comment_indentation` is 7 and
`child_indentation` is 8) making the `Ordering::Less` branch execute.
Inside the `Ordering::Less` branch, the `if` block gets executed, this
is where the bug was happening. At this point `parent_body` should be a
`StmtFunctionDef`, but it was a `StmtClassDef`, causing the comment to be
incorrectly formatted.
That happened for the following code:
```python
class A:
def f():
pass
# strangely indented comment
print()
```
There is one problem that I couldn't figure out a solution for: the
variable `current_body` in this
[line](75f402eb82/crates/ruff_python_formatter/src/comments/placement.rs (L542C5-L542C49))
now gives this warning: _"value assigned to `current_body` is never read;
maybe it is overwritten before being read?"_
Any tips on how to solve that?
Closes#5337
## Test Plan
Add new test case.
---------
Co-authored-by: konstin <konstin@mailbox.org>
**Summary** Implement `DerefMut` for `WithNodeLevel` so it can be used
in the same way as `PyFormatter`. I want this for my WIP upstack branch
to enable `.fmt(f)` on `WithNodeLevel` context. We could extend this to
remove the other two methods from `WithNodeLevel`.
We currently don't format all comments because match statements are not yet implemented. We can work around this for the top-level match statement by marking its comments as manually formatted, but the mocked-out top-level match doesn't call into its children, so they would still have unformatted comments.
## Summary
This PR fixes the issue where the FString formatting dropped dangling comments between the string parts.
```python
result_f = (
f' File "{__file__}", line {lineno_f+1}, in f\n'
' f()\n'
# XXX: The following line changes depending on whether the tests
# are run through the interactive interpreter or with -m
# It also varies depending on the platform (stack size)
# Fortunately, we don't care about exactness here, so we use regex
r' \[Previous line repeated (\d+) more times\]' '\n'
'RecursionError: maximum recursion depth exceeded\n'
)
```
The solution here isn't ideal because it re-introduces the `enclosing_parent` on `DecoratedComment`, but it is the easiest fix that I could come up with.
I didn't spend more time finding another solution because I think we have to re-write most of the fstring formatting with the upcoming Python 3.12 support (because lexing the individual parts as we do now will no longer work).
closes #6440
## Test Plan
`cargo test`
The child PR testing that all comments are formatted should now pass
## Summary
This PR renames the `MagicCommand` token to `IpyEscapeCommand` token and
`MagicKind` to `IpyEscapeKind` type to better reflect the purpose of the
token and type. Similarly, it renames the AST nodes from `LineMagic` to
`IpyEscapeCommand` prefixed with `Stmt`/`Expr` wherever necessary.
It also makes renames from using `jupyter_magic` to
`ipython_escape_commands` in various function names.
The mode value is still `Mode::Jupyter` because the escape commands are
part of the IPython syntax but the lexing/parsing is done for a Jupyter
notebook.
### Motivation behind the rename:
* IPython codebase defines it as "EscapeCommand" / "Escape Sequences":
* Escape Sequences:
292e3a2345/IPython/core/inputtransformer2.py (L329-L333)
* Escape command:
292e3a2345/IPython/core/inputtransformer2.py (L410-L411)
* The word "magic" is used mainly for the actual magic commands i.e.,
the ones starting with `%`/`%%`
(https://ipython.readthedocs.io/en/stable/interactive/reference.html#magic-command-system).
So, this avoids any confusion between the Magic token (`%`, `%%`) and
the escape command itself.
## Test Plan
* `cargo test` to make sure all renames are done correctly.
* `grep` for `jupyter_escape`/`magic` to make sure all renames are done
correctly.
## Summary
This PR removes the group around function definition parameters, instead
grouping the parameters with the type parameters and return type
annotation.
This increases Zulip's similarity score from 0.99385 to 0.99699, so it's
a meaningful improvement. However, there's at least one stability error
that I'm working on, and I'm really just looking for high-level feedback
at this point, because I'm not happy with the solution.
Closes https://github.com/astral-sh/ruff/issues/6352.
## Test Plan
Before:
- `zulip`: 0.99396
- `django`: 0.99784
- `warehouse`: 0.99578
- `build`: 0.75436
- `transformers`: 0.99407
- `cpython`: 0.75987
- `typeshed`: 0.74432
After:
- `zulip`: 0.99702
- `django`: 0.99784
- `warehouse`: 0.99585
- `build`: 0.75623
- `transformers`: 0.99470
- `cpython`: 0.75988
- `typeshed`: 0.74853
## Summary
Given:
```python
def double(a: int) -> ( # Hello
int
):
return 2*a
```
We currently treat `# Hello` as a trailing comment on the parameters
(`(a: int)`). This PR adds a placement method to instead treat it as a
dangling comment on the function definition itself, so that it gets
formatted at the end of the definition, like:
```python
def double(a: int) -> int: # Hello
return 2*a
```
The formatting in this case is unchanged, but it's incorrect IMO for
that to be a trailing comment on the parameters, and that placement
leads to an instability after changing the grouping in #6410.
Fixing this led to a _different_ instability related to tuple return
type annotations, like:
```python
def zrevrangebylex(self, name: _Key, max: _Value, min: _Value, start: int | None = None, num: int | None = None) -> ( # type: ignore[override]
):
...
```
(This is a real example.)
To fix, I had to special-case tuples in that spot, though I'm not
certain that's correct.
## Summary
This PR moves `empty_parenthesized` such that it's peer to
`parenthesized`, and changes the API to better match that of
`parenthesized` (takes `&str` rather than `StaticText`, has a
`with_dangling_comments` method, etc.).
It may be intentionally _not_ part of `parentheses.rs`, but to me
they're so similar that it makes more sense for them to be in the same
module, with the same API, etc.
## Summary
This PR adds support for `StmtMatch` with subs for `MatchCase`.
## Test Plan
Add a few additional test cases around `match` statement, comments, line
breaks.
resolves: #6298
## Bug
Given
```python
x = () - (#
)
```
the comment is a dangling comment of the empty tuple. This is an
end-of-line comment so it may move after the expression. It still
expands the parent, so the operator breaks:
```python
x = (
()
- () #
)
```
In the next formatting pass, the comment is not a trailing tuple comment but a
trailing bin op comment, so the bin op doesn't break anymore. The
comment again expands the parent, so we still add the superfluous
parentheses
```python
x = (
() - () #
)
```
## Fix
The new formatting is to keep the comment on the empty tuple. This is a
lot uglier and again has additional outer parentheses, but it's stable:
```python
x = (
()
- ( #
)
)
```
## Alternatives
Black formats all the examples above as
```python
x = () - () #
```
which I find better.
I would be happy about any suggestions for better solutions than the
current one. I'd mainly need a workaround for expand parent having an
effect on the bin op instead of first moving the comment to the end and
then applying expand parent to the assign statement.
## Summary
I noticed some deviations in how we treat dangling comments that hug the
opening parenthesis for function definitions.
For example, given:
```python
def f( # first
# second
): # third
...
```
We currently format as:
```python
def f(
# first
# second
): # third
...
```
This PR adds the proper opening-parenthesis dangling comment handling
for function parameters. Specifically, as with all other parenthesized
nodes, we now detect that dangling comment in `placement.rs` and handle
it in `parameters.rs`. We have to take some care in that file, since we
have multiple "kinds" of dangling comments, but I added a bunch of test
cases that we now format identically to Black.
## Test Plan
`cargo test`
Before:
- `zulip`: 0.99388
- `django`: 0.99784
- `warehouse`: 0.99504
- `transformers`: 0.99404
- `cpython`: 0.75913
- `typeshed`: 0.74364
After:
- `zulip`: 0.99386
- `django`: 0.99784
- `warehouse`: 0.99504
- `transformers`: 0.99404
- `cpython`: 0.75913
- `typeshed`: 0.74409
Meaningful improvement on `typeshed`, minor decrease on `zulip`.