This fixes a bug where the current indent level was not calculated
correctly for doctests. Namely, it didn't account for the extra indent
level (in terms of ASCII spaces) used by the PS1 (`>>> `) and PS2
(`... `) prompts. As a result, lines could extend up to 4 spaces beyond
the configured line length limit.
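For illustration, a hypothetical docstring of the affected shape: the code after the `>>> ` prompt was measured against the full line length, so once the prompt was re-attached the emitted line could run past the limit.
```python
# Hypothetical example: the doctest code was measured without its `>>> ` prefix.
def example():
    """Do a thing.

    >>> example_function(first_argument_value, second_argument_value, third_value)
    """
```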
We fix that by passing the `CodeExampleKind` to the `format` routine
instead of just the code itself. In this way, `format` can query whether
there will be any extra indent added _after_ formatting the code and
take that into account for its line length setting.
We add a few regression tests, taken directly from @stinodego's
examples.
Fixes #9126
This PR does the plumbing to make a new formatting option,
`docstring-code-format`, available in the configuration for end users.
It is disabled by default (opt-in), at least initially, to reflect a
conservative posture. The intent is to make it opt-out at some point in
the future.
This was split out from #8811 in order to make #8811 easier to merge.
Namely, once this is merged, docstring code snippet formatting will
become available to end users. (See comments below for how we arrived at
the name.)
Closes #7146
## Test Plan
Other than the standard test suite, I ran the formatter over the CPython
and polars projects to ensure both that the result looked sensible and
that tests still passed. At the time of writing, one outstanding issue
is that reformatting code snippets trips the long-line lint:
https://github.com/BurntSushi/polars/actions/runs/7006619426/job/19058868021
## Summary
This PR changes the internal `docstring-code-line-width` setting to
additionally accept a string value `dynamic`. When `dynamic` is set, the
line width is dynamically adjusted when reformatting code snippets in
docstrings based on the indent level of the docstring. The result is
that the reformatted lines from the code snippet should not exceed the
"global" line width configuration for the surrounding source.
This PR does not change the default behavior, although I suspect the
default should probably be `dynamic`.
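For illustration (a hypothetical snippet, assuming the default global line width of 88): with `dynamic`, the code in this method docstring is reformatted against a reduced width so that, once the 8-space docstring indent and the 4-column `>>> ` prompt are re-applied, no emitted line exceeds 88 columns.
```python
# Hypothetical example: the docstring sits at an 8-space indent, so the
# effective width available to the code snippet is reduced accordingly.
class Widget:
    def frobnicate(self):
        """Frobnicate the widget.

        >>> frobnicate(first_argument, second_argument, third_argument, fourth_argument, fifth_argument)
        """
```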
## Test Plan
I added a new configuration to the existing docstring code tests and
also added a new set of tests dedicated to the new `dynamic` mode.
## Summary
This does the light plumbing necessary to add a new internal option that
permits setting the line width of code examples in docstrings. The plan
is to add the corresponding user-facing knob in #8854.
Note that this effectively removes the `same-as-global` configuration
style discussed [in this
comment](https://github.com/astral-sh/ruff/issues/8855#issuecomment-1847230440).
It replaces it with the `{integer}` configuration style only.
There are a lot of commits here, but they are each tiny to make review
easier because of the changes to snapshots.
## Test Plan
I added a new docstring test configuration that sets
`docstring-code-line-width = 60` and examined the differences.
## Summary
In https://github.com/astral-sh/ruff/pull/8921, we changed our parameter
formatting behavior to add a trailing comma whenever a single-argument
function breaks. This introduced a deviation in the case that a function
contains a single argument, but _also_ includes a positional-only or
keyword-only separator.
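A hypothetical signature of the affected shape; the question is whether a trailing comma should be added after the separator once the parameters break:
```python
# Hypothetical example: a single parameter plus the positional-only separator `/`.
def frobnicate(
    a_rather_long_parameter_name_that_forces_the_signature_to_break, /
):
    ...
```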
Closes https://github.com/astral-sh/ruff/issues/9074.
(This is not possible to actually use until
https://github.com/astral-sh/ruff/pull/8854 is merged.)
This commit slots in support for formatting Markdown fenced code
blocks[1]. With the refactoring done for reStructuredText previously,
this ended up being pretty easy to add. Markdown code blocks are also
quite a bit easier to parse and recognize correctly.
One point of contention in #8860 is whether to assume that unlabeled
Markdown code fences are Python or not by default. In this PR, we make
such an assumption. This follows what `rustdoc` does. The mitigation
here is that if an unlabeled code block isn't Python, then it probably
won't parse as Python. And we'll end up skipping it. So in the vast
majority of cases, the worst thing that can happen is a little bit of
wasted work.
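For illustration, a hypothetical docstring containing a fenced block; the fence's contents are extracted, reformatted, and re-indented in place (an unlabeled fence is treated the same way, as long as its contents parse as Python):
````python
def example():
    """Build an example.

    ```python
    foo( 1 )
    ```
    """
````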
Closes #8860
[1]: https://spec.commonmark.org/0.30/#fenced-code-blocks
(This is not possible to actually use until
https://github.com/astral-sh/ruff/pull/8854 is merged.)
ruff_python_formatter: add reStructuredText docstring formatting support
This commit makes use of the refactoring done in prior commits to slot
in reStructuredText support. Essentially, we add a new type of code
example and look for *both* literal blocks and code block directives.
Literal blocks are treated as Python by default because it seems to be a
[common
practice](https://github.com/adamchainz/blacken-docs/issues/195).
That is, literal blocks like this:
```
def example():
"""
Here's an example::
foo( 1 )
All done.
"""
pass
```
Will get reformatted. And code blocks (via reStructuredText directives)
will also get reformatted:
```
def example():
"""
Here's an example:
.. code-block:: python
foo( 1 )
All done.
"""
pass
```
When looking for a code block, it is possible for it to become invalid,
in which case we back out of looking for a code example and print the
lines out as they are. As with doctest formatting, if reformatting the
code would result in invalid Python or if the code collected from the
block is invalid, then formatting is also skipped.
A number of tests have been added to check both the formatting and
resetting behavior. Mixed indentation is also tested a fair bit, since
one of my initial attempts at dealing with mixed indentation ended up
not working.
I recommend working through this PR commit-by-commit. There is in
particular a somewhat gnarly refactoring before reST support is added.
Closes #8859
## Summary
We should avoid inlining the ellipsis in:
```python
def h():
    ...
    # bye
```
Just as we avoid inlining it in:
```python
def h():
    # bye
    ...
```
Closes https://github.com/astral-sh/ruff/issues/8905.
In the course of working on #8859, I made a number of smallish refactors
to how code snippet formatting works. Most or all of these were
motivated by adding support for reStructuredText blocks. They have
some fundamentally different requirements than doctests, and there are a
lot more ways for reStructuredText blocks to become invalid.
(Commit-by-commit review is recommended as the commit messages provide
further context on each change. I split this off from ongoing work to
make review more manageable.)
## Summary
This PR adds opt-in support for formatting doctests in docstrings. This
reflects initial support; the intent is to add support for Markdown
and reStructuredText Python code blocks in the future. But I believe
this PR lays the groundwork, and future additions for Markdown and reST
should be less costly to add.
It's strongly recommended to review this PR commit-by-commit. The last
few commits in particular implement the bulk of the work here and
represent the denser portions.
Some things worth mentioning:
* The formatter is itself not perfect, and it is possible for it to
produce invalid Python code. Because of this, reformatted code snippets
are checked for Python validity. If they aren't valid, then we
(unfortunately silently) bail on formatting that code snippet.
* There are a couple places where it would be nice to at least warn the
user that doctest formatting failed, but it wasn't clear to me what the
best way to do that is.
* I haven't yet run this in anger on a real world code base. I think
that should happen before merging.
Closes #7146
## Test Plan
* [x] Pass the local test suite.
* [x] Scrutinize ecosystem changes.
* [x] Run this formatter on extant code and scrutinize the results.
(e.g., CPython, numpy.)
## Summary
This PR updates the string nodes (`ExprStringLiteral`,
`ExprBytesLiteral`, and `ExprFString`) to account for implicit string
concatenation.
### Motivation
In Python, implicitly concatenated strings are joined during parsing
because the interpreter doesn't require the information for each part.
While that's feasible for an interpreter, it falls short for a static
analysis tool, where having such information is more useful. Currently,
various parts of the code use the lexer to get the individual string
parts.
One of the main challenges this solves is string formatting.
Currently, the formatter relies on the lexer to get the individual
string parts, and formats them including the comments accordingly. But,
with PEP 701, f-string can also contain comments. Without this change,
it becomes very difficult to add support for f-string formatting.
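For illustration, a hypothetical snippet that is valid under PEP 701 (Python 3.12): the comment lives inside the f-string's replacement field, so a formatter that only sees the string as a single token has no structured way to reach it.
```python
# Valid on Python 3.12+ (PEP 701): a comment inside an f-string replacement field.
def compute_value():
    return 42

message = f"result: {
    compute_value()  # this comment is part of the f-string expression
}"
```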
### Implementation
The initial proposal was made in this discussion:
https://github.com/astral-sh/ruff/discussions/6183#discussioncomment-6591993.
Various AST designs were explored for this task; they are available in
the linked internal document[^1].
The selected variant keeps the nodes as they are, except that the
`implicit_concatenated` field is removed and a new struct is added to
each `Expr*` struct instead. This is a private struct that contains the
actual implementation of how the AST is designed for both single and
implicitly concatenated strings.
This implementation is achieved through an enum with two variants:
`Single` and `Concatenated` to avoid allocating a vector even for single
strings. There are various public methods available on the value struct
to query certain information regarding the node.
The nodes are structured in the following way:
```
ExprStringLiteral - "foo" "bar"
|- StringLiteral - "foo"
|- StringLiteral - "bar"
ExprBytesLiteral - b"foo" b"bar"
|- BytesLiteral - b"foo"
|- BytesLiteral - b"bar"
ExprFString - "foo" f"bar {x}"
|- FStringPart::Literal - "foo"
|- FStringPart::FString - f"bar {x}"
   |- StringLiteral - "bar "
   |- FormattedValue - "x"
```
[^1]: Internal document:
https://www.notion.so/astral-sh/Implicit-String-Concatenation-e036345dc48943f89e416c087bf6f6d9?pvs=4
#### Visitor
The way the nodes are structured is that the entire string, including
all the parts that are implicitly concatenated, is a single node
containing individual nodes for the parts. The previous section has a
representation of that tree for all the string nodes. This means that
new visitor methods are added to visit the individual parts of string,
bytes, and f-strings for `Visitor`, `PreorderVisitor`, and
`Transformer`.
## Test Plan
- `cargo insta test --workspace --all-features --unreferenced reject`
- Verify that the ecosystem results are unchanged
Fix an instability where await was followed by a breaking fluent style
expression:
```python
test_data = await (
    Stream.from_async(async_data)
    .flat_map_async()
    .map()
    .filter_async(is_valid_data)
    .to_list()
)
```
Note that this is technically a minor style change (see the ecosystem check).
## Summary
This PR implements validation in the formatter tests to ensure that we
don't modify the AST during formatting. Black has similar logic.
In implementing this, I learned that Black actually _does_ modify the
AST, and their test infrastructure normalizes the AST to wipe away those
differences. Specifically, Black changes the indentation of docstrings,
which _does_ modify the AST; and it also inserts parentheses in `del`
statements, which changes the AST too.
Ruff also does both these things, so we _also_ implement the same
normalization using a new visitor that allows for modifying the AST.
Closes https://github.com/astral-sh/ruff/issues/8184.
## Test Plan
`cargo test`
This ensures the `python` label is used for all Python code blocks for
consistency.
## Test Plan
Visual inspection of all changes via git client ensuring no other
changes were made in error.
## Summary
This commit adds some additional error checking to the parser such that
assignments that are invalid syntax are rejected. This covers the
obvious cases like `5 = 3` and some not so obvious cases like `x + y =
42`.
This does add an additional recursive call to the parser for the cases
handling assignments. I had initially been concerned about doing this,
but `set_context` is already doing recursion during assignments, so I
didn't feel as though this was changing any fundamental performance
characteristics of the parser. (Also, in practice, I would expect any
such recursion here to be quite shallow since the recursion is done on
the target of an assignment. Such things are rarely nested much in
practice.)
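As a quick way to see how CPython itself treats these inputs (a small check script, not part of the change):
```python
import ast

# CPython rejects both the obvious and the not-so-obvious invalid targets.
for source in ("5 = 3", "x + y = 42"):
    try:
        ast.parse(source)
    except SyntaxError as err:
        print(f"{source!r}: {err.msg}")
```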
Fixes #6895
## Test Plan
I've added unit tests covering every case that is detected as invalid on
an `Expr`.
## Summary
This PR fixes a bug in our formatter where a multiline lambda expression
statement was formatted over multiple lines without adding parentheses.
The PR "fixes" the problem by not splitting the lambda parameters if it
is not parenthesized
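A hypothetical example of the affected shape: with no enclosing parentheses there is nothing that allows the parameter list to continue onto another line, so the parameters are kept on one line even when the statement is long.
```python
# Hypothetical input: a lambda expression statement whose parameter list alone
# exceeds the line width; without parentheses it cannot be split.
lambda first_argument, second_argument, third_argument, fourth_argument, fifth_argument: None
```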
## Test Plan
Added test
**Summary** Previously, an own-line comment that followed a docstring
and was itself followed by newline(s) before the first content statement
was treated as trailing on the docstring, and we didn't insert a newline
after the docstring as black would.
Before:
```python
class ModuleBrowser:
"""Browse module classes and functions in IDLE."""
# This class is also the base class for pathbrowser.PathBrowser.
def __init__(self, master, path, *, _htest=False, _utest=False):
pass
```
After:
```python
class ModuleBrowser:
"""Browse module classes and functions in IDLE."""
# This class is also the base class for pathbrowser.PathBrowser.
def __init__(self, master, path, *, _htest=False, _utest=False):
pass
```
I'm not entirely happy about hijacking
`handle_own_line_comment_between_statements`, but I don't know a better
spot to put it.
Fixes #7948
**Test Plan** Fixtures
We previously incorrectly treated byte strings in docstring position as
docstrings because black does so
(https://github.com/astral-sh/ruff/pull/8283#discussion_r1375682931,
https://github.com/psf/black/issues/4002), even though CPython doesn't
recognize them:
```console
$ python3.12
Python 3.12.0 (main, Oct 6 2023, 17:57:44) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def f():
...     b""" a"""
...
>>> print(str(f.__doc__))
None
```
Change
```python
"""Test docstring"""
a = 1
```
to
```python
"""Test docstring"""
a = 1
```
in preview style, but don't touch the docstring otherwise.
Do we want to ask black to also format the content of module level
docstrings? Seems inconsistent to me that we change function and class
docstring indentation/contents but not module docstrings.
Fixes https://github.com/astral-sh/ruff/issues/7995
**Summary** Prepare for the black preview style becoming the black
stable style at the end of the year.
This adds a new test file to compare stable and preview on some relevant
preview options in black, and makes `format_dev` understand the black
preview flag. I've added poetry as a project that uses preview.
I've implemented one specific deviation (collapsing of stub
implementations in non-stub files), which showed up in poetry for testing.
This also improves poetry compatibility from 0.99891 to 0.99919.
Fixes #7440
New compatibility stats:
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 35 |
| home-assistant | 0.99953 | 10596 | 189 |
| poetry | 0.99919 | 317 | 12 |
| transformers | 0.99963 | 2657 | 332 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99969 | 654 | 15 |
| zulip | 0.99970 | 1459 | 22 |
## Summary
Given:
```python
# comment
class A:
    def foo(self):
        pass
```
We need to insert an additional newline between `# comment` and `class
A`. We were missing this handling for the case in which `# comment` is a
leading comment on `class A`, as opposed to a trailing comment of some
preceding statement.
In practice, I think this only applies to the specific case in which a
class or function is the first statement in a module, and there's a
single empty line between a leading comment and that class or function.
If there are no empty lines, then the comment "sticks" to the
definition; if there are two or more, then `leading_comments` will
truncate appropriately. If the class or function is nested, then we only
need one empty line anyway.
Closes https://github.com/astral-sh/ruff/issues/8215.
## Test Plan
No change in similarity.
Before:
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1647 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
After:
| project | similarity index | total files | changed files |
|----------------|------------------:|------------------:|------------------:|
| cpython | 0.75803 | 1799 | 1648 |
| django | 0.99983 | 2772 | 34 |
| home-assistant | 0.99953 | 10596 | 186 |
| poetry | 0.99891 | 317 | 17 |
| transformers | 0.99966 | 2657 | 330 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99978 | 3669 | 20 |
| warehouse | 0.99977 | 654 | 13 |
| zulip | 0.99970 | 1459 | 22 |
## Summary
This was just a bug in the parser ranges, probably since it was
initially implemented. Given `match n % 3, n % 5: ...`, the "subject"
(i.e., the tuple of two binary operators) was using the entire range of
the `match` statement.
Closes https://github.com/astral-sh/ruff/issues/8091.
## Test Plan
`cargo test`
## Summary
Fixes https://github.com/astral-sh/ruff/issues/7448
Fixes https://github.com/astral-sh/ruff/issues/7892
I've removed automatic dangling comment formatting; we're doing manual
dangling comment formatting everywhere anyway (the
assert-all-comments-formatted check ensures this), and dangling comments
would break the formatting there.
## Test Plan
New test file.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Split out of #8044: In preview style, ellipses are also collapsed in
non-stub files. This should only affect function/class contexts, since
stubs are generally not used for other statements. I've updated our tests
to use `pass` instead to reflect this, which makes tracking the preview
style changes much easier.
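A hypothetical before/after of that deviation in a non-stub file:
```python
# Before (input):
def not_yet_implemented():
    ...


# After, under the preview style, the stub-like body collapses onto one line:
def not_yet_implemented(): ...
```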
## Summary
Given an expression like `[x for (x) in y]`, we weren't skipping over
parentheses when searching for the `in` between `(x)` and `y`.
Closes https://github.com/astral-sh/ruff/issues/8053.
**Summary** Insert a newline after nested function and class
definitions, unless there is a trailing own line comment.
We need to e.g. format
```python
if platform.system() == "Linux":
    if sys.version > (3, 10):
        def f():
            print("old")
    else:
        def f():
            print("new")
    f()
```
as
```python
if platform.system() == "Linux":
    if sys.version > (3, 10):

        def f():
            print("old")

    else:

        def f():
            print("new")

    f()
```
even though `f()` is directly preceded by an if statement, not a
function or class definition. See the comments and fixtures for trailing
own line comment handling.
**Test Plan** I checked that the new content of `newlines.py` matches
black's formatting.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
**Summary** Handle comment before the default values of function
parameters correctly by inserting a line break instead of space after
the equals sign where required.
```python
def f(
    a = # parameter trailing comment; needs line break
    1,
    b =
    # default leading comment; needs line break
    2,
    c = ( # the default leading can only be end-of-line with parentheses; no line break
        3
    ),
    d = (
        # own line leading comment with parentheses; no line break
        4
    )
)
```
Fixes #7603
**Test Plan** Added the different cases and one more complex case as
fixtures.
**Summary** Remove spaces from import statements such as
```python
import tqdm . tqdm
from tqdm . auto import tqdm
```
See also #7760 for a better solution.
**Test Plan** New fixtures
**Summary** Quoting of f-strings can change if they are triple quoted
and only contain single quotes inside.
Fixes #6841
**Test Plan** New fixtures
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
## Summary
This PR fixes the bug where the formatter would panic if a class/function with
decorators had a suppression comment.
The fix is to use the correct start location to find the
`async`/`def`/`class` keyword when decorators are present, which is the
end of the last decorator.
## Test Plan
Add test cases for the fix and update the snapshots.
## Summary
It turns out that _some_ identifiers can contain newlines --
specifically, dot-delimited import identifiers, like:
```python
import foo\
.bar
```
At present, we print all identifiers verbatim, which causes us to retain
the `\` in the formatted output. This also leads to violating some debug
assertions (see the linked issue, though that's a symptom of this
formatting failure).
This PR adds detection for import identifiers that contain newlines, and
formats them via `text` (slow) rather than `source_code_slice` (fast) in
those cases.
Closes https://github.com/astral-sh/ruff/issues/7734.
## Test Plan
`cargo test`
## Summary
At present, `quote-style` is used universally. However, [PEP
8](https://peps.python.org/pep-0008/) and [PEP
257](https://peps.python.org/pep-0257/) suggest that while either single
or double quotes are acceptable in general (as long as they're
consistent), docstrings and triple-quoted strings should always use
double quotes. In our research, the vast majority of Ruff users that
enable the `flake8-quotes` rules only enable them for inline strings
(i.e., non-triple-quoted strings).
Additionally, many Black forks (like Blue and Pyink) use double quotes
for docstrings and triple-quoted strings.
Our decision for now is to always prefer double quotes for triple-quoted
strings (which should include docstrings). Based on feedback, we may
consider adding additional options (e.g., a `"preserve"` mode, to avoid
changing quotes; or a `"multiline-quote-style"` to override this).
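For illustration (a hypothetical input, assuming `quote-style = "single"` is configured): inline strings follow the configured style, while the triple-quoted docstring still gets double quotes.
```python
def greet():
    '''Return a greeting.'''  # formatted to """Return a greeting."""
    return 'hello'  # stays single-quoted under quote-style = "single"
```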
Closes https://github.com/astral-sh/ruff/issues/7615.
## Test Plan
`cargo test`
## Summary
Extends the pragma comment detection in the formatter to support
case-insensitive `noqa` (as supported by Ruff), plus a variety of other
pragmas (`isort:`, `nosec`, etc.).
Also extracts the detection out into the trivia crate so that we can
reuse it in the linter (see:
https://github.com/astral-sh/ruff/issues/7471).
## Test Plan
`cargo test`
## Summary
The formatting for tuple patterns is now intended to match that of `for`
loops:
- Always parenthesize single-element tuples.
- Don't break on the trailing comma in single-element tuples.
- For other tuples, preserve the parentheses, and insert if-breaks.
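A hypothetical illustration of those rules:
```python
point = (1, 2)

match point:
    # Per the rules described above, a single-element tuple pattern is
    # parenthesized: `x,` is rewritten as `(x,)` without breaking on the comma.
    case x,:
        ...
    # Other tuple patterns keep their parentheses as written.
    case (x, y):
        ...
```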
Closes https://github.com/astral-sh/ruff/issues/7681.
## Test Plan
`cargo test`
I got confused and refactored a bit; now the naming should be more
consistent. This is the basis for the range formatting work.
Changes:
* `format_module` -> `format_module_source` (format a string)
* `format_node` -> `format_module_ast` (format a program parsed into an
AST)
* Added `parse_ok_tokens` that takes `Token` instead of `Result<Token>`
* Call the source code `source` consistently
* Added a `tokens_and_ranges` helper
* `python_ast` -> `module` (because that's the type)
## Summary
This is whitespace as per `is_python_whitespace`, and right now it tends
to lead to panics in the formatter. Seems reasonable to treat it as
whitespace in the `SimpleTokenizer` too.
Closes https://github.com/astral-sh/ruff/issues/7624.
## Summary
Given:
```python
if True:
    if True:
        pass
    else:
        pass
        # a
        # b
        # c

else:
    pass
```
We want to preserve the newline after the `# c` (before the `else`).
However, the `last_node` ends at the `pass`, and the comments are
trailing comments on the `pass`, not trailing comments on the
`last_node` (the `if`). As such, when counting the trailing newlines on
the outer `if`, we abort as soon as we see the comment (`# a`).
This PR changes the logic to skip _all_ comments (even those with
newlines between them). This is safe as we know that there are no
"leading" comments on the `else`, so there's no risk of skipping those
accidentally.
Closes https://github.com/astral-sh/ruff/issues/7602.
## Test Plan
No change in compatibility.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99963 | 2587 | 319 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99979 | 3496 | 22 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99963 | 2587 | 319 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
Similar to tuples, a generator _can_ be parenthesized or
unparenthesized. Only search for bracketed comments if it contains its
own parentheses.
Closes https://github.com/astral-sh/ruff/issues/7623.
## Summary
Given:
```python
if True:
    if True:
        if True:
            pass
        #a
            #b
        #c
else:
    pass
```
When determining the placement of the various comments, we compute the
indentation depth of each comment, and then compare it to the depth of
the previous statement. It turns out this can lead to reordering
comments, e.g., above, `#b` is assigned as a trailing comment of `pass`,
and so gets reordered above `#a`.
This PR modifies the logic such that when we compute the indentation
depth of `#b`, we limit it to at most the indentation depth of `#a`. In
other words, when analyzing comments at the end of branches, we don't
let successive comments go any _deeper_ than their preceding comments.
Closes https://github.com/astral-sh/ruff/issues/7602.
## Test Plan
`cargo test`
No change in similarity.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99963 | 2587 | 319 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99979 | 3496 | 22 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99963 | 2587 | 319 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99979 | 3496 | 22 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
## Summary
Given:
```python
# -*- coding: utf-8 -*-
import random
# Defaults for arguments are defined here
# args.threshold = None;
logger = logging.getLogger("FastProject")
```
We want to count the number of newlines after `import random`, to ensure
that there's _at least one_, but up to two.
Previously, we used the end range of the statement (then skipped
trivia); instead, we need to use the end of the _last comment_. This is
similar to #7556.
Closes https://github.com/astral-sh/ruff/issues/7604.
## Summary
When we format the trailing comments on a clause body, we check if there
are any newlines after the last statement; if not, we insert one.
This logic didn't take into account that the last statement could itself
have trailing comments, as in:
```python
if True:
    pass
    # comment
else:
    pass
```
We were thus inserting a newline after the comment, like:
```python
if True:
    pass
    # comment

else:
    pass
```
In the context of function definitions, this led to an instability,
since we insert a newline _after_ a function, which would in turn lead
to the bug above appearing in the second formatting pass.
Closes https://github.com/astral-sh/ruff/issues/7465.
## Test Plan
`cargo test`
Small improvement in `transformers`, but no regressions.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| transformers | 0.99956 | 2587 | 404 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1631 |
| django | 0.99983 | 2760 | 36 |
| **transformers** | **0.99957** | **2587** | **402** |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99967 | 648 | 15 |
| zulip | 0.99972 | 1437 | 21 |
## Summary
If a function has no parameters (and no comments within the parameters'
`()`), we're supposed to wrap the return annotation _whenever_ it
breaks. However, our `empty_parameters` test didn't properly account for
the case in which the parameters include a newline (but no other
content), like:
```python
def get_dashboards_hierarchy(
) -> Dict[Type['BaseDashboard'], List[Type['BaseDashboard']]]:
"""Get hierarchy of dashboards classes.
Returns:
Dict of dashboards classes.
"""
dashboards_hierarchy = {}
```
This PR fixes that detection. Instead of lexing, it now checks whether
the parameters node itself is empty (or contains comments).
Closes https://github.com/astral-sh/ruff/issues/7457.
## Summary
Given a trailing operator comment in a unary expression, like:
```python
if (
    not # comment
    a):
    ...
```
We were attaching these to the operand (`a`), but formatting them in the
unary operator via special handling. Parents shouldn't format the
comments of their children, so this instead attaches them as dangling
comments on the unary expression. (No intended change in formatting.)
## Summary
Given a statement like:
```python
result = (
    f(111111111111111111111111111111111111111111111111111111111111111111111111111111111)
    + 1
)()
```
When we go to parenthesize the target of the assignment, we use
`maybe_parenthesize_expression` with `Parenthesize::IfBreaks`. This then
checks if the call on the right-hand side needs to be parenthesized, the
implementation of which looks like:
```rust
impl NeedsParentheses for ExprCall {
    fn needs_parentheses(
        &self,
        _parent: AnyNodeRef,
        context: &PyFormatContext,
    ) -> OptionalParentheses {
        if CallChainLayout::from_expression(self.into(), context.source())
            == CallChainLayout::Fluent
        {
            OptionalParentheses::Multiline
        } else if context.comments().has_dangling(self) {
            OptionalParentheses::Always
        } else {
            self.func.needs_parentheses(self.into(), context)
        }
    }
}
```
Checking for `self.func.needs_parentheses(self.into(), context)` is
problematic, since, as in the example above, `self.func` may _already_
be parenthesized -- in which case, we _don't_ want to parenthesize the
entire expression. If we do, we end up with this non-ideal formatting:
```python
result = (
    (
        f(
            111111111111111111111111111111111111111111111111111111111111111111111111111111111
        )
        + 1
    )()
)
```
This PR modifies the `NeedsParentheses` implementations for call chain
expressions to return `Never` if the inner expression has its own
parentheses, in which case, the formatting implementations for those
expressions will preserve them anyway.
Closes https://github.com/astral-sh/ruff/issues/7370.
## Test Plan
Zulip improves a bit, everything else is unchanged.
Before:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99981 | 2760 | 40 |
| transformers | 0.99944 | 2587 | 413 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99834 | 648 | 20 |
| zulip | 0.99956 | 1437 | 23 |
After:
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99981 | 2760 | 40 |
| transformers | 0.99944 | 2587 | 413 |
| twine | 1.00000 | 33 | 0 |
| typeshed | 0.99983 | 3496 | 18 |
| warehouse | 0.99834 | 648 | 20 |
| **zulip** | **0.99962** | **1437** | **22** |
## Summary
Fix all but one of the empty line differences with the black preview style in
typeshed. The remaining differences are breaking with type comments and
trailing commas in function definitions.
I compared the empty line differences with the preview mode of black
since stable has some oddities that would have been hard to replicate
(https://github.com/psf/black/issues/3861). Additionally, it assumes the
style proposed in https://github.com/psf/black/issues/3862.
An edge case that also surfaced with typeshed is newlines before
trailing module comments.
**main**
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99966 | 2760 | 58 |
| transformers | 0.99930 | 2587 | 447 |
| twine | 1.00000 | 33 | 0 |
| **typeshed** | 0.99978 | 3496 | **2173** |
| warehouse | 0.99825 | 648 | 22 |
| zulip | 0.99950 | 1437 | 27 |
**PR**
| project | similarity index | total files | changed files |
|--------------|------------------:|------------------:|------------------:|
| cpython | 0.76083 | 1789 | 1632 |
| django | 0.99966 | 2760 | 58 |
| transformers | 0.99930 | 2587 | 447 |
| twine | 1.00000 | 33 | 0 |
| **typeshed** | 0.99983 | 3496 | **18** |
| warehouse | 0.99825 | 648 | 22 |
| zulip | 0.99950 | 1437 | 27 |
Closes #6723
## Test Plan
The main driver was the typeshed diff. I added new test cases for all
kinds of possible empty line combinations in stub files, test cases for
newlines before trailing module comments.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR modifies our between-statement comment handling such that
comments that are not separated from a statement by any newlines continue
to be treated as leading comments on the statement, but comments that
_are_ separated are instead formatted as trailing comments on the
preceding statement.
See, e.g., the originating snippet:
```python
DEFAULT_TEMPLATE = "flatpages/default.html"
# This view is called from FlatpageFallbackMiddleware.process_response
# when a 404 is raised, which often means CsrfViewMiddleware.process_view
# has not been called even if CsrfViewMiddleware is installed. So we need
# to use @csrf_protect, in case the template needs {% csrf_token %}.
# However, we can't just wrap this view; if no matching flatpage exists,
# or a redirect is required for authentication, the 404 needs to be returned
# without any CSRF checks. Therefore, we only
# CSRF protect the internal implementation.
def flatpage(request, url):
    pass
```
Here, we need to ensure that the `def flatpage` is preceded by two empty
lines. However, we want those two empty lines to be enforced from the
_end_ of the comment block, _unless_ the comments are directly atop the
`def flatpage`.
I played with this a bit, and I think the simplest conceptual model and
implementation is to instead treat those as trailing comments on the
preceding node. The main difficulty with this approach is that, in order
to be fully compatible with Black, we'd sometimes need to insert
newlines _between_ the preceding node and its trailing comments. See,
e.g.:
```python
def func():
    ...
# comment
x = 1
```
In this case, we'd need to insert two blank lines between `def func():
...` and `# comment`, but `# comment` is a trailing comment on `def
func(): ...`. So, we'd need to take this case into account in the
various nodes that _require_ newlines after them: functions, classes,
and imports. After some discussion, we've opted _not_ to support this,
and just treat these as trailing comments -- so we won't insert newlines
there. This means our handling is still identical to Black's on
Black-formatted code, but avoids moving such trailing comments on
unformatted code.
I dislike that the empty-line handling is so complex, and that it's split
between so many different nodes, but this is really tricky. Continuing
to treat these as leading comments is very difficult too, since we'd
need to do similar tricks for the leading comment handling in those
nodes, and influencing leading comments is even harder, since they're
all formatted _before_ the node itself.
Closes https://github.com/astral-sh/ruff/issues/6761.
## Test Plan
`cargo test`
Surprisingly, it doesn't change the similarity at all (apart from a
0.00001 change in CPython), but I manually confirmed that it did fix the
originating issue in Django.
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76082 |
| django | 0.99921 |
| transformers | 0.99854 |
| twine | 0.99982 |
| typeshed | 0.99953 |
| warehouse | 0.99648 |
| zulip | 0.99928 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76081 |
| django | 0.99921 |
| transformers | 0.99854 |
| twine | 0.99982 |
| typeshed | 0.99953 |
| warehouse | 0.99648 |
| zulip | 0.99928 |
## Summary
This PR adds comment handling for comments between the `=` and the
`value` for keywords, as in the following cases:
```python
func(
    x # dangling
    = # dangling
    # dangling
    1,
    ** # dangling
    y
)
```
(Comments after the `**` were already handled in some cases, but I've
unified the handling with the `=` handling.)
Note that, previously, comments between the `**` and its value were
rendered as trailing comments on the value (so they'd appear after `y`).
This struck me as odd since it effectively re-ordered the comment with
respect to its closest AST node (the value). I've made them leading
comments, though I don't know that that's a significant improvement. I
could also imagine us leaving them where they are.
## Summary
This PR introduces two new AST nodes to improve the representation of
`PatternMatchClass`. As a reminder, `PatternMatchClass` looks like this:
```python
case Point2D(0, 0, x=1, y=2):
    ...
```
Historically, this was represented as a vector of patterns (for the `0,
0` portion) and parallel vectors of keyword names (for `x` and `y`) and
values (for `1` and `2`). This introduces a bunch of challenges for the
formatter, but importantly, it's also really different from how we
represent similar nodes, like arguments (`func(0, 0, x=1, y=2)`) or
parameters (`def func(x, y)`).
So, firstly, we now use a single node (`PatternArguments`) for the
entire parenthesized region, making it much more consistent with our
other nodes. So, above, `PatternArguments` would be `(0, 0, x=1, y=2)`.
Secondly, we now have a `PatternKeyword` node for `x=1` and `y=2`. This
is much more similar to the how `Keyword` is represented within
`Arguments` for call expressions.
Closes https://github.com/astral-sh/ruff/issues/6866.
Closes https://github.com/astral-sh/ruff/issues/6880.
## Summary
Adds support for `PatternMatchMapping` -- i.e., cases like:
```python
match foo:
    case {"a": 1, "b": 2, **rest}:
        pass
```
Unfortunately, this node has _three_ kinds of dangling comments:
```python
{ # "open parenthesis comment"
    key: pattern,
    ** # end-of-line "double star comment"
    # own-line "double star comment"
    rest # end-of-line "after rest comment"
    # own-line "after rest comment"
}
```
Some of the complexity comes from the fact that in `**rest`, `rest` is
an _identifier_, not a node, so we have to handle comments _after_ it as
dangling on the enclosing node, rather than trailing on `**rest`. (We
could change the AST to use `PatternMatchAs` there, which would be more
permissive than the grammar but not totally crazy -- `PatternMatchAs` is
used elsewhere to mean "a single identifier".)
Closes https://github.com/astral-sh/ruff/issues/6644.
## Test Plan
`cargo test`
## Summary
This PR ensures that if an expression has an own-line leading comment
_before_ its open parentheses, we render it as such.
For example, given:
```python
[ # foo
    # bar
    ( # baz
        1
    )
]
```
On `main`, we format as:
```python
[ # foo
    (
        # bar
        # baz
        1
    )
]
```
As of this PR, we format as:
```python
[ # foo
    # bar
    ( # baz
        1
    )
]
```
## Test Plan
`cargo test`
## Summary
This PR ensures that we handle bracketed comments on sequences, like `#
comment` here:
```python
match x:
    case [ # comment
        1, 2
    ]:
        pass
```
The handling is very similar to other, similar nodes, except that we do
need some special logic to determine whether the sequence is
parenthesized, similar to our logic for tuples.
## Test Plan
`cargo test`
## Summary
This PR modifies our formatting of comments around the `.` in an
attribute. Specifically, the goal here is to avoid _reordering_
comments, and the net effect is that we generally leave comments
where they are when dealing with comments around the dot (which
you can also think of as comments between attributes).
All comments around the dot are now treated as dangling and formatted
manually, with the exception of end-of-line or parenthesized comments on
the value, like those marked as trailing here, which remain trailing:
```python
(
    (
        a # trailing end-of-line
        # trailing own-line
    ) # dangling before dot end-of-line
    .b # trailing end-of-line
)
```
Closes https://github.com/astral-sh/ruff/issues/6823.
## Test Plan
`cargo test`
Before:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76050 |
| django | 0.99820 |
| transformers | 0.99800 |
| twine | 0.99876 |
| typeshed | 0.99953 |
| warehouse | 0.99615 |
| zulip | 0.99729 |
After:
| project | similarity index |
|--------------|------------------|
| cpython | 0.76050 |
| django | 0.99820 |
| transformers | 0.99800 |
| twine | 0.99876 |
| typeshed | 0.99953 |
| warehouse | 0.99615 |
| zulip | 0.99729 |
## Summary
This PR fixes the duplicate-parenthesis problem that's visible in the
tests from https://github.com/astral-sh/ruff/pull/6799. The issue is
that we might have parentheses around the entire match-case pattern,
like in `(1)` here:
```python
match foo:
    case (1):
        y = 0
```
In this case, the inner expression (`1`) will _think_ it's
parenthesized, but we'll _also_ detect the parentheses at the case level
-- so they get rendered by the case, then again by the expression.
Instead, if we detect parentheses at the case level, we can force-off
the parentheses for the pattern using a design similar to the way we
handle parentheses on expressions.
Closes https://github.com/astral-sh/ruff/issues/6753.
## Test Plan
`cargo test`
## Summary
This is effectively #6608, but with additional tests.
We aren't properly handling parenthesized patterns, but that needs to be
dealt with separately as it's somewhat involved.
Closes #6555
## Summary
Follows up on
https://github.com/astral-sh/ruff/pull/6652#discussion_r1300871033 with
some modifications to the `PatternMatchAs` comment handling.
Specifically, any comments between the `as` and the end are now
formatted as dangling, and we now insert some newlines in the
appropriate places.
## Test Plan
`cargo test`
## Summary
Ensures that we retain the open-parenthesis comment in cases like:
```python
match pattern_comments:
    case ( # leading
        only_leading
    ):
        ...
```
Previously, this was treated as a leading comment on `only_leading`.
## Test Plan
`cargo test`
## Summary
For imports, we enforce that there's _at least_ one empty line after an
import (assuming the next statement is _not_ an import), but allow up to
two at the module level.
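For illustration, a hypothetical module like:
```python
import os
x = 1
```
would be formatted with an empty line enforced after the import:
```python
import os

x = 1
```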
Closes https://github.com/astral-sh/ruff/issues/6760.
## Test Plan
`cargo test`