This PR implements a modification (in preview) to fluent formatting for
method chains: We break _at_ the first call instead of _after_.
For example, we have the following diff between `main` and this PR (with
`line-length=8` so I don't have to stretch out the text):
```diff
 x = (
-    df.merge()
+    df
+    .merge()
     .groupby()
     .agg()
     .filter()
 )
```
## Explanation of current implementation
Recall that we traverse the AST to apply formatting. A method chain,
while read left-to-right, is stored in the AST "in reverse". So if we
start with something like
```python
a.b.c.d().e.f()
```
then the first syntax node we meet is essentially `.f()`. So we have to
peek ahead. And we actually _already_ do this in our current fluent
formatting logic: we peek ahead to count how many calls we have in the
chain to see whether we should be using fluent formatting or not.
In this implementation, we actually _record_ this number inside the enum
for `CallChainLayout`. That is, we make the variant `Fluent` hold an
`AttributeState`. This state can be one of:
- The number of call-like attributes preceding the current attribute
- The state `FirstCallOrSubscript` which means we are at the first
call-like attribute in the chain (reading from left to right)
- The state `BeforeFirstCallOrSubscript` which means we are in the
"first group" of attributes, preceding that first call.
In our example, here's what it looks like at each attribute:
```
a.b.c.d().e.f @ Fluent(CallsOrSubscriptsPreceding(1))
a.b.c.d().e @ Fluent(CallsOrSubscriptsPreceding(1))
a.b.c.d @ Fluent(FirstCallOrSubscript)
a.b.c @ Fluent(BeforeFirstCallOrSubscript)
a.b @ Fluent(BeforeFirstCallOrSubscript)
```
Now, as we descend from the parent expression, we pass along this
little piece of state and modify it as we go to track where we are. This
state doesn't do anything except when we are in `FirstCallOrSubscript`,
in which case we add a soft line break.
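To make the effect concrete, here is roughly how a chain like the one above
would break under the preview style once it no longer fits on one line (an
illustrative sketch based on the diff at the top, not verified formatter
output; `a` stands in for any object with chainable attributes):
```python
# the soft line break now lands before `.d()`, the first call-like attribute,
# instead of after it; later segments still break after each call as before
result = (
    a.b.c
    .d()
    .e.f()
)
```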
Closes #8598
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
This PR makes two changes to our formatting of `lambda` expressions:
1. We now parenthesize the body expression if it expands
2. We now try to keep the parameters on a single line
The latter of these fixes #8179:
Black formatting and this PR's formatting:
```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        ),
    )
```
Stable Ruff formatting
```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self,
        *args,
        **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs),
    )
```
We don't parenthesize the body expression here because the call to
`aaaa...` has its own parentheses, but adding a binary operator shows
the new parenthesization:
```diff
@@ -3,7 +3,7 @@
         c,
         d,
         e,
-        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
-            *args, **kwargs
-        ) + 1,
+        f=lambda self, *args, **kwargs: (
+            aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(*args, **kwargs) + 1
+        ),
     )
```
This is actually a new divergence from Black, which formats this input
like this:
```py
def a():
    return b(
        c,
        d,
        e,
        f=lambda self, *args, **kwargs: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa(
            *args, **kwargs
        )
        + 1,
    )
```
But I think this is an improvement, unlike the case from #8179.
One other, smaller benefit is that because we now add parentheses to
lambda bodies, we also remove redundant parentheses:
```diff
 @pytest.mark.parametrize(
     "f",
     [
-        lambda x: (x.expanding(min_periods=5).cov(x, pairwise=True)),
-        lambda x: (x.expanding(min_periods=5).corr(x, pairwise=True)),
+        lambda x: x.expanding(min_periods=5).cov(x, pairwise=True),
+        lambda x: x.expanding(min_periods=5).corr(x, pairwise=True),
     ],
 )
 def test_moment_functions_zero_length_pairwise(f):
```
## Test Plan
New tests taken from #8465 and probably a few more I should grab from
the ecosystem results.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Summary
--
This PR fixes #17796 by taking the approach mentioned in
https://github.com/astral-sh/ruff/issues/17796#issuecomment-2847943862
of simply recursing into the `MatchAs` patterns when checking if we need
parentheses. This allows us to reuse the inner pattern's parentheses
instead of parenthesizing and breaking the whole `MatchAs` pattern itself:
```diff
 match class_pattern:
     case Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture:
         pass
-    case (
-        Class(xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) as capture
-    ):
+    case Class(
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ) as capture:
         pass
-    case (
-        Class(
-            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-        ) as capture
-    ):
+    case Class(
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ) as capture:
         pass
     case (
         Class(
@@ -685,13 +683,11 @@
 match sequence_pattern_brackets:
     case [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture:
         pass
-    case (
-        [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] as capture
-    ):
+    case [
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ] as capture:
         pass
-    case (
-        [
-            xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-        ] as capture
-    ):
+    case [
+        xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+    ] as capture:
         pass
```
I haven't really resolved the question of whether it's always okay to
recurse, but I'm hoping the ecosystem check on this PR might
shed some light on that.
Test Plan
--
New tests based on the issue, plus a review of the ecosystem check here.
Summary
--
This is a first step toward fixing #9745. After reviewing our open
issues and several Black issues and PRs, I personally found the function
case the most compelling, especially with very long argument lists:
```py
def func(
    self,
    arg1: int,
    arg2: bool,
    arg3: bool,
    arg4: float,
    arg5: bool,
) -> tuple[...]:

    if arg2 and arg3:
        raise ValueError
```
or many annotations:
```py
def function(
    self, data: torch.Tensor | tuple[torch.Tensor, ...], other_argument: int
) -> torch.Tensor | tuple[torch.Tensor, ...]:

    do_something(data)
    return something
```
I think docstrings help the situation substantially both because syntax
highlighting will usually give a very clear separation between the
annotations and the docstring, and because we already allow a blank line
_after_ the docstring:
```py
def function(
    self, data: torch.Tensor | tuple[torch.Tensor, ...], other_argument: int
) -> torch.Tensor | tuple[torch.Tensor, ...]:
    """
    A function doing something.

    And a longer description of the things it does.
    """

    do_something(data)
    return something
```
There are still other comments on #9745, such as [this one] with 9
upvotes, where users specifically request blank lines in all block
types, or at least including conditionals and loops. I'm sympathetic to
that case as well, even if personally I don't find an [example] like
this:
```py
if blah:

    # Do some stuff that is logically related
    data = get_data()

    # Do some different stuff that is logically related
    results = calculate_results()
    return results
```
to be much more readable than:
```py
if blah:
    # Do some stuff that is logically related
    data = get_data()
    # Do some different stuff that is logically related
    results = calculate_results()
    return results
```
I'm probably just used to the latter from the formatters I've used, but
I do prefer it. I also think that functions are the least susceptible to
the accidental introduction of a newline after refactoring described in
Micha's [comment] on #8893.
I actually considered further restricting this change to functions with
multiline headers. I don't think very short functions like:
```py
def foo():
    return 1
```
benefit nearly as much from the allowed newline, but I just went with
any function without a docstring for now. I guess a marginal case like:
```py
def foo(a_long_parameter: ALongType, b_long_parameter: BLongType) -> CLongType:
    return 1
```
might be a good argument for not restricting it.
I caused a couple of syntax errors before adding special handling for
the ellipsis-only case, so I suspect that there are some other
interesting edge cases that may need to be handled better.
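For reference, the ellipsis-only case is a body consisting of nothing but
`...`, as in this hypothetical stub-like snippet:
```python
class Handler:
    # a body that is only `...`; per the note above, this shape needed
    # special handling to avoid producing invalid syntax
    def callback(self, event): ...
```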
Test Plan
--
Existing tests, plus a few simple new ones. As noted above, I suspect
that we may need a few more for edge cases I haven't considered.
[this one]:
https://github.com/astral-sh/ruff/issues/9745#issuecomment-2876771400
[example]:
https://github.com/psf/black/issues/902#issuecomment-1562154809
[comment]:
https://github.com/astral-sh/ruff/issues/8893#issuecomment-1867259744
Summary
--
This PR implements the black preview style from
https://github.com/psf/black/pull/4720. As of Python 3.14, you're
allowed to omit the parentheses around groups of exceptions, as long as
there's no `as` binding:
**3.13**
```pycon
Python 3.13.4 (main, Jun 4 2025, 17:37:06) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> try: ...
... except (Exception, BaseException): ...
...
Ellipsis
>>> try: ...
... except Exception, BaseException: ...
...
File "<python-input-1>", line 2
except Exception, BaseException: ...
^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized
```
**3.14**
```pycon
Python 3.14.0rc2 (main, Sep 2 2025, 14:20:56) [Clang 20.1.4 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> try: ...
... except Exception, BaseException: ...
...
Ellipsis
>>> try: ...
... except (Exception, BaseException): ...
...
Ellipsis
>>> try: ...
... except Exception, BaseException as e: ...
...
File "<python-input-2>", line 2
except Exception, BaseException as e: ...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: multiple exception types must be parenthesized when using 'as'
```
I think this ended up being pretty straightforward, at least once Micha
showed me where to start :)
Test Plan
--
New tests
At first I thought we were deviating from black in how we handle
comments within the exception type tuple, but I think this applies to
how we format all tuples and isn't specific to the new preview style.
## Summary
This PR stabilizes the fix for
https://github.com/astral-sh/ruff/issues/14001
We try to only make breaking formatting changes once a year. However,
the plan was to release this fix as part of Ruff 0.9 but I somehow
missed it when promoting all other formatter changes.
I think it's worth making an exception here considering that this is a
bug fix, it improves readability, and it should be rare
(very few files in a single project). Our version policy explicitly
allows breaking formatter changes in any minor release and the idea of
only making breaking formatter changes once a year is mainly to avoid
multiple releases throughout the year that introduce large formatter
changes.
Closes https://github.com/astral-sh/ruff/issues/14001
## Test Plan
Updated snapshot
## Summary
This PR changes how we format `with` statements with a single with item
for Python 3.8 or older. This change is not compatible with Black.
This is how we format a single-item with statement today
```python
def run(data_path, model_uri):
    with pyspark.sql.SparkSession.builder.config(
        key="spark.python.worker.reuse", value=True
    ).config(key="spark.ui.enabled", value=False).master(
        "local-cluster[2, 1, 1024]"
    ).getOrCreate():
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```
This is different from how we would format the same expression if it is
inside any other clause header (`while`, `if`, ...):
```python
def run(data_path, model_uri):
    while (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```
Which seems inconsistent to me.
This PR changes the formatting of single-item `with` statements for
Python 3.8 or older to match that of other clause headers.
```python
def run(data_path, model_uri):
    with (
        pyspark.sql.SparkSession.builder.config(
            key="spark.python.worker.reuse", value=True
        )
        .config(key="spark.ui.enabled", value=False)
        .master("local-cluster[2, 1, 1024]")
        .getOrCreate()
    ):
        # ignore spark log output
        spark.sparkContext.setLogLevel("OFF")
        print(score_model(spark, data_path, model_uri))
```
According to our versioning policy, this style change is gated behind a
preview flag.
## Test Plan
See added tests.
## Summary
_This is a preview-only feature and is available using the `--preview`
command-line flag._
With the implementation of [PEP 701] in Python 3.12, f-strings can now
be broken into multiple lines, can contain comments, and can re-use the
same quote character. Currently, no other Python formatter formats
f-strings, so some discussion needs to happen to define the style used
for f-string formatting. Relevant discussion:
https://github.com/astral-sh/ruff/discussions/9785
The goal for this PR is to add minimal support for f-string formatting.
This means formatting the expressions within replacement fields without
introducing any major style changes.
### Newlines
The heuristic for adding newlines is similar to that of
[Prettier](https://prettier.io/docs/en/next/rationale.html#template-literals):
the formatter only splits an expression in the replacement field across
multiple lines if there was already a line break within the replacement
field.
In other words, the formatter would not add any newlines unless they
were already present, i.e., they were added by the user. This makes
breaking any expression inside an f-string optional and in control of
the user. For example,
```python
# We wouldn't break this
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa { aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"
# But, we would break the following as there's already a newline
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa {
aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc } cccccccccc"
```
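For illustration, the second example might come out roughly like this once
broken (a sketch only; the exact output is part of the style discussion
linked above):
```python
aaaaaaaaaaa = f"asaaaaaaaaaaaaaaaa {
    aaaaaaaaaaaa + bbbbbbbbbbbb + ccccccccccccccc
} cccccccccc"
```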
If there are comments in any of the replacement fields of the f-string,
then it will always be a multi-line f-string, in which case the formatter
prefers to break expressions, i.e., introduce newlines. For example,
```python
x = f"{ # comment
a }"
```
### Quotes
The logic for formatting quotes remains unchanged: the existing logic is
used to determine the necessary quote char and is applied accordingly.
Now, if the expression inside an f-string is itself string-like, then we
need to make sure to preserve the existing quote and not change it to the
preferred quote unless the target version is 3.12 or later. For example,
```python
f"outer {'inner'} outer"
# For pre 3.12, preserve the single quote
f"outer {'inner'} outer"
# While for 3.12 and later, the quotes can be changed
f"outer {"inner"} outer"
```
But, for triple-quoted strings, we can re-use the same quote char unless
the inner string is itself a triple-quoted string.
```python
f"""outer {"inner"} outer""" # valid
f"""outer {'''inner'''} outer""" # preserve the single quote char for the inner string
```
### Debug expressions
If debug expressions are present in the replacement field of an
f-string, then the whitespace needs to be preserved as it will be
rendered as-is (for example, `f"{ x = }"`). If there are any nested
f-strings, then the whitespace in them needs to be preserved as well,
which means that we'll stop formatting the f-string as soon as we
encounter a debug expression.
```python
f"outer { x = !s :.3f}"
# ^^
# We can remove these whitespaces
```
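For context on why the whitespace around the expression itself has to be
kept: the debug form echoes the source text of the replacement field
verbatim, so the user's spaces show up in the rendered output:
```python
value = 42
# the spaces around `value` and `=` are part of the rendered text
print(f"{ value = }")  # prints " value = 42"
```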
Now, the whitespace doesn't need to be preserved around the conversion
specifier and the format specifier, so we format them as usual, but we
won't format any nested f-string within the format specifier.
### Miscellaneous
- The
[`hug_parens_with_braces_and_square_brackets`](https://github.com/astral-sh/ruff/issues/8279)
preview style isn't implemented w.r.t. the f-string curly braces.
- The
[indentation](https://github.com/astral-sh/ruff/discussions/9785#discussioncomment-8470590)
is always relative to the statement containing the f-string.
## Test Plan
* Add new test cases
* Review existing snapshot changes
* Review the ecosystem changes
[PEP 701]: https://peps.python.org/pep-0701/
## Summary
This PR implements the `blank_line_after_nested_stub_class` preview
style in the formatter.
The logic is divided into 3 parts:
1. In between preceding and following nodes at the top level and in a
nested suite
2. When there's a trailing comment after the class
3. When there is no following node from (1), which is the case when it's
the last or the only node in a suite
We handle (3) with `FormatLeadingAlternateBranchComments`.
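As a rough illustration of the style itself (not taken from the test
suite): in a stub file, a nested class that is followed by more members
now gets a blank line after it:
```py
class Outer:
    class Nested: ...

    field: int
    def method(self) -> None: ...
```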
## Test Plan
- Add new test cases and update existing snapshots
- Checked the `typeshed` diff
fixes: #8891