## Summary
This PR changes the default value of
`lint.flake8-builtins.builtins-strict-checking` added in
https://github.com/astral-sh/ruff/pull/15951 from `true` to `false`.
This also allows simplifying the default option logic and removes the
dependence on preview mode.
https://github.com/astral-sh/ruff/issues/15399 was already closed by
#15951, but this change will finalize the behavior mentioned in
https://github.com/astral-sh/ruff/issues/15399#issuecomment-2587017147.
As an example, strict checking flags modules based on their last
component, so `utils/logging.py` triggers A005. Non-strict checking
checks the path to the module, so `utils/logging.py` is allowed (this is
the example and desired behavior from #15399 exactly) but a top-level
`logging.py` or `logging/__init__.py` is still disallowed.
## Test Plan
Existing tests from #15951 and #16006, with the snapshot updated in
`a005_module_shadowing_strict_default` to reflect the new default.
Summary
--
Stabilizes UP044, renames the module to match the rule name, and removes
the `PreviewMode` from the test settings.
Test Plan
--
2 closed issues in November, just after the rule was added, otherwise no
issues
Summary
--
Stabilizes SIM905 and adds a small addition to the docs. The test was
already in the right place.
Test Plan
--
No issues except 2 recent, general issues about whitespace
normalization.
## Summary
This PR stabilizes several preview-only behaviours for
`custom-typevar-for-self` (`PYI019`). Namely:
- A new, more accurate technique is now employed for detecting custom
TypeVars that are replaceable with `Self`. See
https://github.com/astral-sh/ruff/pull/15888 for details.
- The range of the diagnostic is now the full function header rather
than just the return annotation. (Previously, the rule only applied to
methods with return annotations, but this is no longer true due to the
changes in the first bullet point.)
- The fix is now available even when preview mode is not enabled.
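As an illustration, a minimal sketch of the kind of method the rule flags (hypothetical class and method names):

```py
from typing import TypeVar

_T = TypeVar("_T", bound="Widget")

class Widget:
    # PYI019: the custom TypeVar used only for `self` can be replaced with `Self`
    def clone(self: _T) -> _T:
        return self
```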
## Test Plan
- Existing snapshots that do not have preview mode enabled are updated
- Preview-specific snapshots are removed
- I'll check the ecosystem report on this PR to verify everything's as
expected
Summary
--
Stabilizes PLC1802. The tests were already in the right place, and I
just tidied the docs a little bit.
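For context, a rough sketch of the pattern the rule targets (assuming PLC1802 is still `len-test`, i.e. using `len()` as a boolean condition):

```py
items = [1, 2, 3]

# Flagged: calling `len()` just to test for emptiness
if len(items):
    print("non-empty")

# Preferred: rely on the sequence's truthiness directly
if items:
    print("non-empty")
```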
Test Plan
--
1 issue closed 4 days after the rule was added, no other issues
Summary
--
Stabilizes PLW1507. The tests were already in the right place, and I
just tidied the docs a little bit.
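For context, a rough sketch of the pattern the rule targets (assuming PLW1507 is still `shallow-copy-environ`):

```py
import copy
import os

# Flagged: a shallow copy of `os.environ` still shares state with the real environment
env = copy.copy(os.environ)

# Suggested fix: `os.environ.copy()` returns an independent plain dict
env = os.environ.copy()
```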
Test Plan
--
1 issue from 2 weeks ago but just suggesting to mark the fix unsafe. The
shallow vs deep copy *does* change the program behavior, just usually in
a preferable way.
## Summary
Stabilizes FAST003, completing the group with FAST001 and FAST002.
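For context, a rough sketch of what the rule flags (assuming FAST003 is still the FastAPI unused-path-parameter rule):

```py
from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")
async def read_item() -> dict:
    # FAST003: the route declares `item_id`, but the function never accepts it
    return {}
```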
## Test Plan
Last bug fix (false positive) was fixed on 2025-01-13, almost 2 months
ago.
The test case was already in the right place.
## Summary
Stabilizes C420 for the 0.10 release.
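For context, a rough sketch of the pattern the rule targets (assuming C420 is still the unnecessary dict-comprehension-for-iterable rule):

```py
keys = ["a", "b", "c"]

# Flagged: a dict comprehension that maps every key to the same constant value
defaults = {key: None for key in keys}

# Suggested replacement:
defaults = dict.fromkeys(keys)
```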
## Test Plan
No open issues or PRs (except a general issue about [string
normalization](https://github.com/astral-sh/ruff/issues/16579)). The
last (and only) false-negative bug fix was over a month ago.
The tests for this rule were already not on the `preview_rules` test, so
I just changed the `RuleGroup`. The documentation looked okay to me.
## Summary
Resolves #15368.
The following options have been renamed:
* `builtins-allowed-modules` → `allowed-modules`
* `builtins-ignorelist` → `ignorelist`
* `builtins-strict-checking` → `strict-checking`
To preserve compatibility, the old names are kept as Serde aliases.
## Test Plan
`cargo nextest run` and `cargo insta test`.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
`RUF035` has been backported into bandit as `S704` in this
[PR](https://github.com/PyCQA/bandit/pull/1225).
This moves the rule and its corresponding setting to the `flake8-bandit`
category.
## Test Plan
`cargo nextest run`
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
For context, the initial implementation started out by sending a log
notification to the client to include this information in the client
channel. This is a bit ineffective because it doesn't allow the client
to display this information in a more obvious way. In addition, it isn't
obvious from a user's perspective where the information is being printed
unless they actually open the output channel.
The change was to actually return this formatted string that contains
the information and let the client handle how it should display this
information. For example, in the Ruff VS Code extension we open a split
window and show this information which is similar to what rust-analyzer
does.
The notification request was kept as a precaution in case there are
users who actually rely on it. If they exist, they should be a minority,
since hooking into this notification requires diving into the code. With
0.10, we're removing the old way, as it only clobbers the output channel
with a long message.
fixes: #16225
## Test Plan
Tested locally that the information is no longer logged to the output
channel of VS Code.
## Summary
This PR stabilizes the fix for
https://github.com/astral-sh/ruff/issues/14001
We try to only make breaking formatting changes once a year. However,
the plan was to release this fix as part of Ruff 0.9 but I somehow
missed it when promoting all other formatter changes.
I think it's worth making an exception here considering that this is a
bug fix, it improves readability, and it should be rare
(very few files in a single project). Our version policy explicitly
allows breaking formatter changes in any minor release and the idea of
only making breaking formatter changes once a year is mainly to avoid
multiple releases throughout the year that introduce large formatter
changes.
Closes https://github.com/astral-sh/ruff/issues/14001
## Test Plan
Updated snapshot
## Summary
Since Ruff changed to GitHub releases, tags are no longer annotated and
`git describe` no longer picks them up. Instead, it's necessary to also
search lightweight tags.
This change fixes the `version` command to give more accurate
`last_tag`/`commits_since_last_tag` information. This only affects
development builds, as this information is not present in releases.
## Test Plan
Testing is a little tricky because this information changes on every
commit. Running manually on current `main` and my branch:
`main`:
```
# cargo run --bin ruff -- version --output-format=text
ruff 0.9.10+2547 (dd2313ab0 2025-03-12)
# cargo run --bin ruff -- version --output-format=json
{
"version": "0.9.10",
"commit_info": {
"short_commit_hash": "dd2313ab0",
"commit_hash": "dd2313ab0faea90abf66a75f1b5c388e728d9d0a",
"commit_date": "2025-03-12",
"last_tag": "v0.4.10",
"commits_since_last_tag": 2547
}
}
```
This PR:
```
# cargo run --bin ruff -- version --output-format=text
ruff 0.9.10+46 (11f39f616 2025-03-12)
# cargo run --bin ruff -- version --output-format=json
{
"version": "0.9.10",
"commit_info": {
"short_commit_hash": "11f39f616",
"commit_hash": "11f39f6166c3d7a521725b938a166659f64abb59",
"commit_date": "2025-03-12",
"last_tag": "0.9.10",
"commits_since_last_tag": 46
}
}
```
## Summary
This PR adds a new `CallableTypeFromFunction` special form to allow
extracting the abstract signature of a function literal i.e., convert a
`Type::Function` into a `Type::Callable` (`CallableType::General`).
This is done to support testing the `is_gradual_equivalent_to` type
relation, specifically the case where we want to make sure that a function
whose parameters have no annotations and which has no return type
annotation is gradually equivalent to `Callable[[Any, Any, ...], Any]`,
where the number of parameters matches between the function literal and
the callable type.
Refer
https://github.com/astral-sh/ruff/pull/16634#discussion_r1989976692
### Bikeshedding
The name `CallableTypeFromFunction` is a bit too verbose. A possible
alternative from Carl is `CallableTypeOf`, but that would be similar to
`TypeOf`, albeit with the limitation that the former only accepts function
literal types and errors on other types.
Some other alternatives:
* `FunctionSignature`
* `SignatureOf` (similar issues as `TypeOf`?)
* ...
## Test Plan
Update `type_api.md` with a new section that tests this special form,
both invalid and valid forms.
## Summary
Add support for calling custom `__getattr__` methods in case an
attribute is not otherwise found. This allows us to get rid of many
ecosystem false positives where we previously emitted errors when
accessing attributes on `argparse.Namespace`.
closes #16614
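As a minimal sketch of the new behavior (hypothetical class name):

```py
class Namespace:
    def __getattr__(self, name: str) -> int:
        return 0

ns = Namespace()

# Previously this was reported as an unresolved attribute; with this change the
# custom `__getattr__` is consulted and the access is inferred as `int`.
reveal_type(ns.anything)  # revealed: int
```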
## Test Plan
* New Markdown tests
* Observed expected ecosystem changes (the changes for `arrow` also look
fine, since the `Arrow` class has a custom [`__getattr__`
here](1d70d00919/arrow/arrow.py (L802-L815)))
Pulls in the latest Salsa main branch, which supports fixpoint
iteration, and uses it to handle all query cycles.
With this, we no longer need to skip any corpus files to avoid panics.
Latest perf results show a 6% incremental and 1% cold-check regression.
This is not a "no cycles" regression, as tomllib and typeshed do trigger
some definition cycles (previously handled by our old
`infer_definition_types` fallback to `Unknown`). We don't currently have
a benchmark we can use to measure the pure no-cycles regression, though
I expect there would still be some regression; the fixpoint iteration
feature in Salsa does add some overhead even for non-cyclic queries.
I think this regression is within the reasonable range for this feature.
We can do further optimization work later, but I don't think it's the
top priority right now. So going ahead and acknowledging the regression
on CodSpeed.
Mypy primer is happy, so this doesn't regress anything on our
currently-checked projects. I expect it probably unlocks adding a number
of new projects to our ecosystem check that previously would have
panicked.
Fixes #13792, fixes #14672
## Summary
Implements attribute access on intersection types, which didn't
previously work. For example:
```py
from typing import Any
class P: ...
class Q: ...
class A:
x: P = P()
class B:
x: Any = Q()
def _(obj: A):
if isinstance(obj, B):
reveal_type(obj.x) # revealed: P & Any
```
Refers to [this comment].
[this comment]:
https://github.com/astral-sh/ruff/pull/16416#discussion_r1985040363
## Test Plan
New Markdown tests
## Summary
Background: as a follow-up to #16611, I noticed that there's a lot of
code duplicated between the `is_assignable_to` and `is_subtype_of`
functions and considered trying to merge them.
[A subtype and an assignable type are pretty much the
same](https://typing.python.org/en/latest/spec/concepts.html#the-assignable-to-or-consistent-subtyping-relation),
except that subtypes are by definition fully static, so I think we can
replace the whole of `is_subtype_of` with:
```
if !self.is_fully_static(db) || !target.is_fully_static(db) {
return false;
}
return self.is_assignable_to(target)
```
if we move all of the logic to `is_assignable_to` and delete the duplicate
code. Then we can discuss whether it even makes sense to have a separate
`is_subtype_of` function (I think the answer is yes, since it's used in a
bunch of other places, but we may be able to basically rip out the
concept).
Anyway, while playing with combining the functions, I noticed that the
handling of intersections in `is_subtype_of` has a special case for two
intersections, which I didn't include in the last PR; rather, I first
handled right-hand intersections before left-hand ones, which should
properly handle double intersections (hand-wavy explanation I can justify
if needed: (A & B & C) is assignable to (A & B) because the left is
assignable to both A and B, but none of A, B, or C is assignable to (A &
B)).
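For illustration, an mdtest-style sketch of the double-intersection case described above (assuming the `knot_extensions` helpers used elsewhere in the test suite):

```py
from knot_extensions import Intersection, is_assignable_to, static_assert

class A: ...
class B: ...
class C: ...

# `A & B & C` is assignable to `A & B` because the left-hand side is assignable
# to both `A` and `B` individually...
static_assert(is_assignable_to(Intersection[A, B, C], Intersection[A, B]))

# ...even though none of `A`, `B`, or `C` on its own is assignable to `A & B`.
static_assert(not is_assignable_to(A, Intersection[A, B]))
```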
I took a look at what breaks if I remove the handling for double
intersections, and the reason it is needed is that `is_disjoint` does not
properly handle intersections with negative conditions (so instead
`is_subtype_of` basically implements the check correctly).
This PR adds support to `is_disjoint` for properly checking negative
branches, which also lets us simplify `is_subtype_of`, bringing it in
line with `is_assignable_to`.
## Test Plan
Added a bunch of tests, most of which failed before this fix
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
This is a pure restructuring of the `attributes.md` and
`descriptor_protocol.md` test suites. They have grown organically and I
didn't want to make major structural changes in my recent PR to keep the
diff clean.
## Summary
A follow up to address [this comment]:
> Similarly here, it might be a little more performant to have a single
`Type::instance()` branch with an inner match over `class.known()`
rather than having multiple branches with `if class.is_known()` guards
[this comment]:
https://github.com/astral-sh/ruff/pull/16416#discussion_r1985159037
## Summary
Properly handle binary operator inference for union types.
This fixes a bug I noticed while looking at ecosystem results. The MRE
version of it is this:
```py
def sub(x: float, y: float):
# Red Knot: Operator `-` is unsupported between objects of type `int | float` and `int | float`
return x - y
```
## Test Plan
- New Markdown tests.
- Expected diff in the ecosystem checks
## Summary
Part of #15382
This PR adds the check for whether a callable type is fully static or
not.
A callable type is fully static if all of the parameter types are fully
static _and_ the return type is fully static _and_ if it does not use
the gradual form (`...`) for its parameters.
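For illustration, a small mdtest-style sketch of the distinction (assuming an `is_fully_static` helper in `knot_extensions`):

```py
from typing import Any, Callable
from knot_extensions import is_fully_static, static_assert

static_assert(is_fully_static(Callable[[int, str], None]))

# Not fully static: the return type is gradual
static_assert(not is_fully_static(Callable[[int], Any]))

# Not fully static: the parameter list uses the gradual form `...`
static_assert(not is_fully_static(Callable[..., int]))
```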
## Test Plan
Update `is_fully_static.md` with callable types.
Currently, the tests seem to be grouped into fully static or not; I think
it would be useful to split them into groups like callable, etc. I
intentionally avoided that in this PR, but I'll put up a follow-up PR for
an appropriate split.
Note: I have an explicit goal of updating the property tests with the new
callable types once all relations are implemented.
## Summary
This PR closes #16248.
If the return type of the function isn't assignable to the one
specified, an `invalid-return-type` error occurs.
I thought it would be better to report this as a different kind of error
than the `invalid-assignment` error, so I defined this as a new error.
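A minimal sketch of the kind of code that now produces the new diagnostic:

```py
def answer() -> int:
    # error: [invalid-return-type] `str` is not assignable to the declared return type `int`
    return "forty-two"
```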
## Test Plan
All type inconsistencies in the test cases have been replaced with
appropriate ones.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
This updates the `Signature` and `CallBinding` machinery to support
multiple overloads for a callable. This is currently only used for
`KnownFunction`s that we special-case in our type inference code. It
does **_not_** yet update the semantic index builder to handle
`@overload` decorators and construct a multi-signature `Overloads`
instance for real Python functions.
While I was here, I updated many of the `try_call` special cases to use
signatures (possibly overloaded ones now) and `bind_call` to check
parameter lists. We still need some of the mutator methods on
`OverloadBinding` for the special cases where we need to update return
types based on some Rust code.
## Summary
One of the motivations in https://github.com/astral-sh/ruff/pull/16428
for panicking when the `test` or `debug_assertions` features are enabled
and a lookup of a `KnownClass` fails is that we've had some latent bugs
in our code where certain variants have been silently falling back to
`Unknown` in every typeshed lookup without us realising. But that in
itself isn't a great motivation for panicking in
`KnownClass::to_instance()`, since we can fairly easily add some tests
that assert that we don't unexpectedly fall back to `Unknown` for any
`KnownClass` variant. This PR adds those tests.
## Test Plan
`cargo test -p red_knot_python_semantic`
## Summary
This mostly fixes #14899.
My motivation was similar to the last comment by @sharkdp there. I ran
red_knot on a codebase and the most common error was patterns like this
failing:
```py
from typing import Any

def foo(x: str): ...

x: Any = ...
if isinstance(x, str):
    foo(x)  # Object of type `Any & str` cannot be assigned to parameter 1 (`x`) of function `foo`; expected type `str`
```
The desired behavior is pretty much to ignore Any/Unknown when resolving
intersection assignability: `Any & str` should be assignable to `str`,
and `str` should be assignable to `str & Any`.
The fix is actually very similar to the existing code in
`is_subtype_of`, we need to correctly handle intersections on either
side, while being careful to handle dynamic types as desired.
This does not fix the second test case from that issue:
```
static_assert(is_assignable_to(Intersection[Unrelated, Any], Not[tuple[Unrelated, Any]]))
```
but that's misleading because the root cause there has nothing to do
with gradual types. I added a simpler test case that also fails:
```
static_assert(is_assignable_to(Unrelated, Not[tuple[Unrelated]]))
```
This is because we don't determine that `Unrelated` does not subclass
`tuple`, so we can't rule out this relation. If that logic is improved,
then this fix should also handle the intersection case.
## Test Plan
Added a bunch of is_assignable_to tests, most of which failed before
this fix.
## Summary
Part of https://github.com/astral-sh/ruff/issues/15382
This PR adds support for inferring `lambda` expressions and returning
the `CallableType`.
Currently, this is only limited to inferring the parameters and a todo
type for the return type.
For posterity, I tried using `file_expression_type` to infer the return
type of the lambda, but it would always lead to a cycle. The main reason
is that in `infer_parameter_definition`, the default expression is
inferred using `file_expression_type`, which is correct, but that in turn
re-enters the enclosing scope's inference, as the trace below shows.
Take the following source code as an example:
```py
lambda x=1: x
```
Here's how the code will flow:
* `infer_scope_types` for the global scope
* `infer_lambda_expression`
* `infer_expression` for the default value `1`
* `file_expression_type` for the return type using the body expression.
This is because the body creates its own scope
* `infer_scope_types` (lambda body scope)
* `infer_name_load` for the symbol `x` whose visible binding is the
lambda parameter `x`
* `infer_parameter_definition` for parameter `x`
* `file_expression_type` for the default value `1`
* `infer_scope_types` for the global scope because of the default
expression
This will then reach `infer_definition` for the parameter `x` again,
which creates the cycle.
## Test Plan
Add tests around `lambda` expression inference.
## Summary
We currently fail to add the stubs for the `venv` stdlib module because
there is a `venv/` ignore pattern in the top-level `.gitignore` file.
## Test Plan
Ran the typeshed sync workflow manually once to see if the `venv/`
folder is now correctly added.
## Summary
Theoretically this should be slightly more performant, since the
`class.is_known()` calls each do a separate Salsa lookup, which we can
avoid if we do a single `match` on the value of `class.known()`. It also
ends up being two lines less code overall!
## Test Plan
`cargo test -p red_knot_python_semantic`
## Summary
Fixes #16566, fixes #16575
The semantics of `Type::class_member` changed in
https://github.com/astral-sh/ruff/pull/16416, but the property-test
infrastructure was not updated. That means that the property tests were
panicking on the second `expect_type` call here:
0361021863/crates/red_knot_python_semantic/src/types/property_tests.rs (L151-L158)
With the somewhat unhelpful message:
```
Expected a (possibly unbound) type, not an unbound symbol
```
Applying this patch, and then running `QUICKCHECK_TESTS=1000000 cargo
test --release -p red_knot_python_semantic -- --ignored
types::property_tests::stable::equivalent_to_is_reflexive` showed
clearly that it was no longer able to find _any_ methods on _any_
classes due to the change in semantics of `Type::class_member`:
```diff
--- a/crates/red_knot_python_semantic/src/types/property_tests.rs
+++ b/crates/red_knot_python_semantic/src/types/property_tests.rs
@@ -27,7 +27,7 @@
use std::sync::{Arc, Mutex, MutexGuard, OnceLock};
use crate::db::tests::{setup_db, TestDb};
-use crate::symbol::{builtins_symbol, known_module_symbol};
+use crate::symbol::{builtins_symbol, known_module_symbol, Symbol};
use crate::types::{
BoundMethodType, CallableType, IntersectionBuilder, KnownClass, KnownInstanceType,
SubclassOfType, TupleType, Type, UnionType,
@@ -150,10 +150,11 @@ impl Ty {
Ty::BuiltinsFunction(name) => builtins_symbol(db, name).symbol.expect_type(),
Ty::BuiltinsBoundMethod { class, method } => {
let builtins_class = builtins_symbol(db, class).symbol.expect_type();
- let function = builtins_class
- .class_member(db, method.into())
- .symbol
- .expect_type();
+ let Symbol::Type(function, ..) =
+ builtins_class.class_member(db, method.into()).symbol
+ else {
+ panic!("no method `{method}` on class `{class}`");
+ };
create_bound_method(db, function, builtins_class)
}
```
This PR updates the property-test infrastructure to use `Type::member`
rather than `Type::class_member`.
## Test Plan
- Ran `QUICKCHECK_TESTS=1000000 cargo test --release -p
red_knot_python_semantic -- --ignored types::property_tests::stable`
successfully
- Checked that there were no remaining uses of `Type::class_member` in
`property_tests.rs`
## Summary
Fixes a small nit of mine -- we are currently inconsistent in our
spelling between "metaclass" and "meta class", and between "meta type"
and "meta-type". This PR means that we consistently use "metaclass" and
"meta-type".
## Test Plan
`uvx pre-commit run -a`
## Summary
Part of https://github.com/astral-sh/ruff/issues/15382
This PR implements a general callable type that wraps around a
`Signature` and it uses that new type to represent `typing.Callable`.
It also implements `Display` support for `Callable`. The format is:
```
([<arg name>][: <arg type>][ = <default type>], ...) -> <return type>
```
The `/` and `*` separators are added at the correct boundary for
positional-only and keyword-only parameters. Now, as `typing.Callable`
only has positional-only parameters, the rendered signature would be:
```py
Callable[[int, str], None]
# (int, str, /) -> None
```
The `/` separator represents that all the arguments are positional-only.
The relationship methods that check assignability, subtype relationship,
etc. are not yet implemented and will be done so as a follow-up.
## Test Plan
Add test cases for display support for `Signature` and various mdtest
for `typing.Callable`.
## Summary
Resolves #16365.
Add support for unpacking `with` statement targets.
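A minimal sketch of the newly supported pattern (hypothetical context manager):

```py
class PairManager:
    def __enter__(self) -> tuple[int, str]:
        return 1, "a"

    def __exit__(self, *args) -> None: ...

with PairManager() as (x, y):
    reveal_type(x)  # revealed: int
    reveal_type(y)  # revealed: str
```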
## Test Plan
Added some test cases, similar to the ones added by #15058.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
* Attributes/methods are now properly looked up on metaclasses when
accessed on class objects
* We properly distinguish between data descriptors and non-data
descriptors (but we do not yet support them in store-context, i.e.
`obj.data_descr = …`)
* The descriptor protocol is now implemented in a single unified place
for instances, classes and dunder-calls. Unions and possibly-unbound
symbols are supported in all possible stages of the process by creating
union types as results.
* In general, the handling of "possibly-unbound" symbols has been
improved in a lot of places: meta-class attributes, attributes,
descriptors with possibly-unbound `__get__` methods, instance
attributes, …
* We keep track of type qualifiers in a lot more places. I anticipate
that this will be useful if we import e.g. `Final` symbols from other
modules (see relevant change to typing spec:
https://github.com/python/typing/pull/1937).
* Detection and special-casing of the `typing.Protocol` special form in
order to avoid lots of changes in the test suite due to new `@Todo`
types when looking up attributes on builtin types which have `Protocol`
in their MRO. We previously
looked up attributes in a wrong way, which is why this didn't come up
before.
closes #16367, closes #15966
## Context
The way attribute lookup in `Type::member` worked before was simply
wrong (mostly my own fault). The whole instance-attribute lookup should
probably never have been integrated into `Type::member`. And the
`Type::static_member` function that I introduced in my last descriptor
PR was the wrong abstraction. It's kind of fascinating how far this
approach took us, but I am pretty confident that the new approach
proposed here is what we need to model this correctly.
There are three key pieces that are required to implement attribute
lookups:
- **`Type::class_member`**/**`Type::find_in_mro`**: The
`Type::find_in_mro` method that can look up attributes on class bodies
(and corresponding bases). This is a partial function on types, as it
cannot be called on instance types like `Type::Instance(…)` or
`Type::IntLiteral(…)`. For this reason, we usually call it through
`Type::class_member`, which is essentially just
`type.to_meta_type().find_in_mro(…)` plus union/intersection handling.
- **`Type::instance_member`**: This new function is basically the
type-level equivalent to `obj.__dict__[name]` when called on
`Type::Instance(…)`. We use this to discover instance attributes such as
those that we see as declarations on class bodies or as (annotated)
assignments to `self.attr` in methods of a class.
- The implementation of the descriptor protocol. It works slightly
different for instances and for class objects, but it can be described
by the general framework:
- Call `type.class_member("attribute")` to look up "attribute" in the
MRO of the meta type of `type`. Call the resulting `Symbol` `meta_attr`
(even if it's unbound).
- Use `meta_attr.class_member("__get__")` to look up `__get__` on the
*meta type* of `meta_attr`. Call it with `__get__(meta_attr, self,
self.to_meta_type())`. If this fails (either the lookup or the call),
just proceed with `meta_attr`. Otherwise, replace `meta_attr` in the
following with the return type of `__get__`. In this step, we also probe
if a `__set__` or `__delete__` method exists and store it in
`meta_attr_kind` (can be either "data descriptor" or "normal attribute
or non-data descriptor").
- Compute a `fallback` type.
- For instances, we use `self.instance_member("attribute")`
- For class objects, we use `class_attr =
self.find_in_mro("attribute")`, and then try to invoke the descriptor
protocol on `class_attr`, i.e. we look up `__get__` on the meta type of
`class_attr` and call it with `__get__(class_attr, None, self)`. This
additional invocation of the descriptor protocol on the fallback type is
one major asymmetry in the otherwise universal descriptor protocol
implementation.
- Finally, we look at `meta_attr`, `meta_attr_kind` and `fallback`, and
handle various cases of (possible) unboundness of these symbols.
- If `meta_attr` is bound and a data descriptor, just return `meta_attr`
- If `meta_attr` is not a data descriptor, and `fallback` is bound, just
return `fallback`
- If `meta_attr` is not a data descriptor, and `fallback` is unbound,
return `meta_attr`
- Return unions of these three possibilities for partially-bound
symbols.
This allows us to handle class objects and instances within the same
framework. There is a minor additional detail where for instances, we do
not allow the fallback type (the instance attribute) to completely
shadow the non-data descriptor. We do this because we (currently) don't
want to pretend that we can statically infer that an instance attribute
is always set.
Dunder method calls can also be embedded into this framework. The only
thing that changes is that *there is no fallback type*. If a dunder
method is called on an instance, we do not fall back to instance
variables. If a dunder method is called on a class object, we only look
it up on the meta class, never on the class itself.
## Test Plan
New Markdown tests.
## Summary
This PR closes #15199.
The change I just made is to set all variables to type `Unknown` if
unpacking fails, but in some cases this may be excessive.
For example:
```py
a, b, c = "ab"
reveal_type(a) # Unknown, but it would be reasonable to think of it as LiteralString
reveal_type(c) # Unknown
```
```py
# Failed to unpack before the starred expression
(a, b, *c, d, e) = (1,)
reveal_type(a) # Unknown
reveal_type(b) # Unknown
...
# Failed to unpack after the starred expression
(a, b, *c, d, e) = (1, 2, 3)
reveal_type(a) # Unknown, but should it be Literal[1]?
reveal_type(b) # Unknown, but should it be Literal[2]?
reveal_type(c) # Todo
reveal_type(d) # Unknown
reveal_type(e) # Unknown
```
I will modify it if you think it would be better to make it a different
type than just `Unknown`.
## Test Plan
I have made appropriate modifications to the test cases affected by this
change, and also added some more test cases.
## Summary
This should give us better coverage for the unsupported syntax error
features and increase our confidence that the formatter doesn't
accidentally introduce new unsupported syntax errors.
A feature like this would have been very useful when working on f-string
formatting
where it took a lot of iteration to find all Python 3.11 or older
incompatibilities.
## Test Plan
I applied my changes on top of
https://github.com/astral-sh/ruff/pull/16523 and
removed the target version check in the with-statement formatting code.
As expected,
the integration tests now failed.
## Summary
If an mdtest fails, the error output will include an example command
that can be run to re-run just the failing test, e.g
```
To rerun this specific test, set the environment variable: MDTEST_TEST_FILTER="sync.md - With statements - Context manager with non-callable `__exit__` attribute"
MDTEST_TEST_FILTER="sync.md - With statements - Context manager with non-callable `__exit__` attribute" cargo test -p red_knot_python_semantic --test mdtest -- mdtest__with_sync
```
This is very helpful, but because we're printing the envvar value
surrounded in double-quotes, the bits between backticks in this example
get interpreted as a shell interpolation. When running this in zsh, for
example, I see
```console
❯ MDTEST_TEST_FILTER="sync.md - With statements - Context manager with non-callable `__exit__` attribute" cargo test -p red_knot_python_semantic --test mdtest -- mdtest__with_sync
zsh: command not found: __exit__
Compiling red_knot_python_semantic v0.0.0 (/home/ericmarkmartin/Development/ruff/crates/red_knot_python_semantic)
Compiling red_knot_test v0.0.0 (/home/ericmarkmartin/Development/ruff/crates/red_knot_test)
Finished `test` profile [unoptimized + debuginfo] target(s) in 6.09s
Running tests/mdtest.rs (target/debug/deps/mdtest-149b8f9d937e36bc)
running 1 test
test mdtest__with_sync ... ok
```
[^1]
This is a minor annoyance which we can solve by using single-quotes
instead of double-quotes for this string. To do so safely, we also
escape single-quotes possibly contained within the string.
There is a [shell-quote](https://github.com/allenap/shell-quote) crate,
which seems to handle all this escaping stuff for you but fixing this
issue perfectly isn't a big deal (if there are more things to escape we
can deal with it then), so adding a new dependency (even a dev one)
seemed overkill.
[^1]: The filter does still work---it turns out that the filter
`MDTEST_TEST_FILTER="sync.md - With statements - Context manager with
non-callable attribute"` (what you get after the failed interpolation)
is still good enough
## Test Plan
I broke the ``## Context manager with non-callable `__exit__`
attribute`` test by deleting the error assertion, then successfully ran
the new command it printed out.
Summary
--
Unlike the other syntax errors detected so far, parenthesized keyword
arguments are only allowed *before* 3.8. It sounds like they were only
accidentally allowed before that [^1].
As an aside, you get a pretty confusing error from Python for this, so
it's nice that we can catch it:
```pycon
>>> def f(**kwargs): ...
... f((a)=1)
...
File "<python-input-0>", line 2
f((a)=1)
^^^
SyntaxError: expression cannot contain assignment, perhaps you meant "=="?
>>>
```
Test Plan
--
Inline tests.
[^1]: https://github.com/python/cpython/issues/78822
Summary
--
Checks for tuple unpacking in `return` and `yield` statements before
Python 3.8, as described [here].
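A minimal sketch of the construct being flagged:

```py
rest = (2, 3)

# Both of these are syntax errors on Python 3.7 and earlier:
def f():
    return 1, *rest

def g():
    yield 1, *rest
```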
Test Plan
--
Inline tests.
[here]: https://github.com/python/cpython/issues/76298
## Summary
- `Never` is callable
- `Never` is iterable
- Arbitrary attributes can be accessed on `Never`
Split out from #16416, where this is going to be required.
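A minimal sketch of the three properties:

```py
from typing_extensions import Never

def f(x: Never) -> None:
    x()             # allowed: `Never` is callable
    for _ in x:     # allowed: `Never` is iterable
        ...
    x.anything      # allowed: arbitrary attributes can be accessed on `Never`
```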
## Test Plan
Tests for all properties above.
## Summary
This came up in https://github.com/astral-sh/ruff/issues/16477.
It's not obvious from the D417 rule's documentation that it only checks
docstrings
with an arguments section. Functions without such a section aren't
checked.
This PR tries to make this clearer in the documentation.
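For illustration, a minimal sketch of the distinction the documentation now calls out:

```py
def checked(a: int, b: int) -> int:
    """Add two numbers.

    Args:
        a: The first number.
    """
    # D417: `b` is missing from the arguments section
    return a + b


def unchecked(a: int, b: int) -> int:
    """Add two numbers."""
    # Not flagged: the docstring has no arguments section at all
    return a + b
```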
## Summary
This PR introduces a new mdtest option `system` that can either be
`in-memory` or `os`
where `in-memory` is the default.
The motivation for supporting `os` is so that we can write OS/system
specific tests
with mdtests. Specifically, I want to write mdtests for the module
resolver,
testing that module resolution is case sensitive.
## Test Plan
I tested that the case-sensitive module resolver tests start failing when
setting `system = "os"`.
## Summary
Python's module resolver is case sensitive.
This PR adds mdtests that assert that our module resolution is case
sensitive.
The tests currently all pass because our in-memory file system is
case-sensitive.
I'll add support for using the real file system to the mdtest framework
in a separate PR.
This PR also adds support for specifying extra search paths to the
mdtest framework.
## Test Plan
The tests fail when running them using the real file system.
To kick off the work of supporting generics, this adds many new
(currently failing) tests, showing the behavior we plan to support.
This is still missing a lot! Not included:
- typevar tuples
- param specs
- variance
- `Self`
But it's a good start! We can add more failing tests for those once we
tackle these.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
This is split out of https://github.com/astral-sh/ruff/pull/14029, to
reduce the size of that PR, and to validate that this "fallback type"
support in `TypeInference` doesn't come with a performance cost. It also
improves the reliability and debuggability of our current (temporary)
cycle handling.
In order to recover from a cycle, we have to be able to construct a
"default" `TypeInference` where all expressions and definitions have
some "default" type. In our current cycle handling, this "default" type
is just unknown or a todo type. With fixpoint iteration, the "default"
type will be `Type::Never`, which is the "bottom" type that fixpoint
iteration starts from.
Since it would be costly (both in space and time) to actually enumerate
all expressions and definitions in a scope, just to insert the same
default type for all of them, instead we add an optional "missing type"
fallback to `TypeInference`, which (if set) is the fallback type for any
expression or definition which doesn't have an explicit type set.
With this change, cycles can no longer result in the dreaded "Missing
key" errors looking up the type of some expression.
... with supporting types. This is meant to give us a base to work with
in terms of our new diagnostic data model. I expect the representations
to be tweaked over time, but I think this is a decent start.
I would also like to add doctest examples, but I think it's better if we
wait until an initial version of the renderer is done for that.
This puts them out of the way so that they can hopefully be removed more
easily in the (near) future, and so that they don't get in the way of
the new types. This also makes the intent of the migration a bit clearer
in the code and hopefully results in less confusion.
This trait should eventually go away, so we rename it (and supporting
types) to make room for a new concrete `Diagnostic` type.
This commit is just the rename. In the next commit, we'll move it to a
different module.
Summary
--
Another simple one, just detect type parameter lists in functions
and classes. Like pyright, we don't emit a second diagnostic for
`type` alias statements, which were also introduced in 3.12.
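A minimal sketch of what is now flagged when targeting Python 3.11 or earlier:

```py
# Both of these use PEP 695 type parameter lists, a syntax error before Python 3.12:
def first[T](xs: list[T]) -> T:
    return xs[0]

class Box[T]:
    value: T
```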
Test Plan
--
Inline tests.
## Summary
This PR does a small refactor to avoid two separate
`symbol_table(...).symbol(...)` calls to check for `__slots__` and
`TYPE_CHECKING`. It merges them into a single call.
I noticed this while looking at
https://github.com/astral-sh/ruff/pull/16468.
## Summary
This PR adds more features to #16468.
* Adds a new error rule `invalid-type-checking-constant`, which occurs
when we try to assign a value other than `False` to a user-defined
`TYPE_CHECKING` variable (it is possible to assign `...` in a stub
file).
* Allows annotated assignment to `TYPE_CHECKING`. Only types that
`False` can be assigned to are allowed. However, the type of
`TYPE_CHECKING` will be inferred to be `Literal[True]` regardless of
the specified type.
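A rough sketch of both behaviors (two independent snippets combined for brevity):

```py
# A value other than `False` is rejected:
TYPE_CHECKING = True  # error: [invalid-type-checking-constant]

# Annotated assignment is allowed, and the variable is still inferred as `Literal[True]`:
TYPE_CHECKING: bool = False
reveal_type(TYPE_CHECKING)  # revealed: Literal[True]
```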
## Test Plan
I ran the tests with `cargo test -p red_knot_python_semantic` and
confirmed that all tests passed.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Fixes https://github.com/astral-sh/ruff/issues/16476, fixes #11453
We format notebooks cell by cell. That means that offsets in parse
errors are relative
to the cell and not the entire document. We didn't account for this fact
when emitting syntax errors for notebooks in the formatter.
This PR ensures that we correctly offset parse errors by the cell
location.
## Test Plan
Added a test (it panicked before).
Summary
--
Detects the presence of a [PEP 696] type parameter default before Python
3.13.
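A minimal sketch of the syntax being detected:

```py
# PEP 696 type parameter defaults are a syntax error before Python 3.13:
type Alias[T = int] = list[T]

def first[T = int](xs: list[T]) -> T:
    return xs[0]

class Box[T = int]:
    value: T
```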
Test Plan
--
New inline parser tests for type aliases, generic functions and generic
classes.
[PEP 696]: https://peps.python.org/pep-0696/#grammar-changes
Summary
--
This is a follow-up to #16446 to fix the diagnostic range to point to
the `*` like `pyright` does
(https://github.com/astral-sh/ruff/pull/16446#discussion_r1976900643).
Storing the range in the `ExceptClauseKind::Star` variant feels slightly
awkward, but we don't store the star itself anywhere on the
`ExceptHandler`. And we can't just take `ExceptHandler.start() +
"except".text_len()` because this code appears to be valid:
```python
try: ...
except * Error: ...
```
Test Plan
--
Existing tests.
## Summary
This PR closes #15722.
The change is that if the variable `TYPE_CHECKING` is defined/imported,
the type of the variable is interpreted as `Literal[True]` regardless of
what the value is.
This is compatible with the behavior of other type checkers (e.g. mypy,
pyright).
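A minimal sketch of the behavior:

```py
TYPE_CHECKING = False

reveal_type(TYPE_CHECKING)  # revealed: Literal[True]

if TYPE_CHECKING:
    import collections.abc  # this branch is treated as reachable
```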
## Test Plan
I ran the tests with `cargo test -p red_knot_python_semantic` and
confirmed that all tests passed.
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
Summary
--
This is a follow up addressing the comments on #16425. As @dhruvmanila
pointed out, the naming is a bit tricky. I went with `has_no_errors` to
try to differentiate it from `is_valid`. It actually ends up negated in
most uses, so it would be more convenient to have `has_any_errors` or
`has_errors`, but I thought it would sound too much like the opposite of
`is_valid` in that case. I'm definitely open to suggestions here.
Test Plan
--
Existing tests.
## Summary
Resolves #16445.
`UP028` is now no longer always fixable: it will not offer a fix when at
least one `ExprName` target is bound to either a `global` or a
`nonlocal` declaration.
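For illustration, a rough sketch of a case where the rule is still reported but no fix is offered (assuming UP028 is still the yield-in-for-loop rule):

```py
def gen():
    global item
    # UP028 is reported, but no fix is offered: rewriting this to
    # `yield from range(3)` would no longer bind the global `item`.
    for item in range(3):
        yield item
```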
## Test Plan
`cargo nextest run` and `cargo insta test`.
## Summary
Fixes#9381. This PR fixes errors like
```
Cause: error parsing glob '/Users/me/project/{{cookiecutter.project_dirname}}/__pycache__': nested alternate groups are not allowed
```
caused by glob special characters in filenames like
`{{cookiecutter.project_dirname}}`. When the user is matching that
directory exactly, they can use the workaround given by
https://github.com/astral-sh/ruff/issues/7959#issuecomment-1764751734,
but that doesn't work for a nested config file with relative paths. For
example, the directory tree in the reproduction repo linked
[here](https://github.com/astral-sh/ruff/issues/9381#issuecomment-2677696408):
```
.
├── README.md
├── hello.py
├── pyproject.toml
├── uv.lock
└── {{cookiecutter.repo_name}}
├── main.py
├── pyproject.toml
└── tests
└── maintest.py
```
where the inner `pyproject.toml` contains a relative glob:
```toml
[tool.ruff.lint.per-file-ignores]
"tests/*" = ["F811"]
```
## Test Plan
A new CLI test in both the linter and formatter. The formatter test may
not be necessary because I didn't have to modify any additional code to
pass it, but the original report mentioned both `check` and `format`, so
I wanted to be sure both were fixed.
This PR addresses issue #16396.
Specifically:
- If the `exit` call contains a `code` keyword argument, it is
converted into a positional argument.
- If retrieving the code from the `exit` call is not possible, a
violation is raised without suggesting a fix.
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
This PR adds support for an optional list of paths that should be
checked to `knot check`.
E.g. to only check the `src` directory
```sh
knot check src
```
The default is to check all files in the project but users can reduce
the included files by specifying one or multiple optional paths.
The main two challenges with adding this feature were:
* We now need to show an error when one of the provided paths doesn't
exist. That's why this PR now collects errors from the project file
indexing phase and adds them to the output diagnostics. The diagnostic
looks similar to Ruff's (see the CLI test).
* The CLI should pick up new files added to included folders. For
example, `knot check src --watch` should pick up new files that are
added to the `src` folder. This requires that we now filter the files
before adding them to the project. This is a good first step to
supporting `include` and `exclude`.
The PR makes two simplifications:
1. I didn't test the changes with case-insensitive file systems. We may
need to do some extra path normalization to support those well. See
https://github.com/astral-sh/ruff/issues/16400
2. Ideally, we'd accumulate the IO errors from the initial indexing
phase and subsequent incremental indexing operations. For example, we
should preserve the IO diagnostic for a non existing `test.py` if it was
specified as an explicit CLI argument until the file gets created and we
should show it again when the file gets deleted. However, this is
somewhat complicated because we'd need to track which files we revisited
(or were removed because the entire directory is gone). I considered
this too low a priority to be worth dealing with right now.
The implementation doesn't support symlinks within the project but that
is the same as Ruff and is unchanged from before this PR.
Closes https://github.com/astral-sh/ruff/issues/14193
## Test Plan
Added CLI and file watching integration tests. Manually testing.
Split from F841 following discussion in #8884.
Fixes #8884.
## Summary
Add a new rule for unused assignments in tuples. Remove similar behavior
from F841.
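A rough sketch of the pattern that moves to the new rule (hypothetical helper `load`):

```py
def load() -> tuple[int, int]:
    return 1, 2

def f() -> int:
    # The new rule flags `unused`: it is bound by tuple unpacking but never read.
    value, unused = load()
    return value
```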
## Test Plan
Adapt F841 tests and move them over to the new rule.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR is the first in a series derived from
https://github.com/astral-sh/ruff/pull/16308, each of which add support
for detecting one version-related syntax error from
https://github.com/astral-sh/ruff/issues/6591. This one should be
the largest because it also includes the addition of the
`Parser::add_unsupported_syntax_error` method.
Otherwise I think the general structure will be the same for each syntax
error:
* Detecting the error in the parser
* Inline parser tests for the new error
* New ruff CLI tests for the new error
## Test Plan
As noted above, there are new inline parser tests, as well as new ruff
CLI
tests. Once https://github.com/astral-sh/ruff/pull/16379 is resolved,
there should also be new mdtests for red-knot,
but this PR does not currently include those.
Regardless of whether #16408 and #16311 pan out, this part is worth
pulling out as a separate PR.
Before, you had to define a new `IndexVec` index type for each type of
association list you wanted to create. Now there's a single index type
that's internal to the alist implementation, and you use `List<K, V>` to
store a handle to a particular list.
This also adds some property tests for the alist implementation.
## Summary
This PR adds support for a pragma-style header for inline parser tests
containing JSON-serialized `ParseOptions`. For example,
```python
# parse_options: { "target-version": "3.9" }
match 2:
case 1:
pass
```
The line must start with `# parse_options: ` and then the rest of the
(trimmed) line is deserialized into `ParseOptions` used for parsing the
test.
## Test Plan
Existing inline tests, plus two new inline tests for
`match-before-py310`.
---------
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
## Summary
As mentioned in
https://github.com/astral-sh/ruff/pull/16296#discussion_r1967047387
This PR updates the client settings resolver to notify the user if there
are any errors in the config using a very basic approach. In addition,
each error related to specific settings is logged.
This isn't the best approach because it can log the same message
multiple times when both workspace and global settings are provided and
they both are the same. This is the case for a single workspace VS Code
instance.
I do have some ideas on how to improve this and will explore them during
my free time (low priority):
* Avoid resolving the global settings multiple times as they're static
* Include the source of the setting (workspace or global?)
* Maybe use a struct (`ResolvedClientSettings` +
`Vec<ClientSettingsResolverError>`) instead to make unit testing easier
## Test Plan
Using:
```jsonc
{
"ruff.logLevel": "debug",
// Invalid settings
"ruff.configuration": "$RANDOM",
"ruff.lint.select": ["RUF000", "I001"],
"ruff.lint.extendSelect": ["B001", "B002"],
"ruff.lint.ignore": ["I999", "F401"]
}
```
The error logs:
```
2025-02-27 12:30:04.318736000 ERROR Failed to load settings from `configuration`: error looking key 'RANDOM' up: environment variable not found
2025-02-27 12:30:04.319196000 ERROR Failed to load settings from `configuration`: error looking key 'RANDOM' up: environment variable not found
2025-02-27 12:30:04.320549000 ERROR Unknown rule selectors found in `lint.select`: ["RUF000"]
2025-02-27 12:30:04.320669000 ERROR Unknown rule selectors found in `lint.extendSelect`: ["B001"]
2025-02-27 12:30:04.320764000 ERROR Unknown rule selectors found in `lint.ignore`: ["I999"]
```
Notification preview:
<img width="470" alt="Screenshot 2025-02-27 at 12 29 06 PM"
src="https://github.com/user-attachments/assets/61f41d5c-2558-46b3-a1ed-82114fd8ec22"
/>
## Summary
Closes: https://github.com/astral-sh/ruff/issues/16267
This change skips building the `index` in RuffSettingsIndex when the
configuration preference, in the editor settings, is set to
`editorOnly`. This is appropriate because the index will go unused as
long as that configuration preference persists.
## Test Plan
I have tested this in VSCode and can confirm that we skip indexing when
`editorOnly` is set. Upon switching back to `editorFirst` or
`filesystemFirst` we index the settings as normal.
I don't see any unit tests for settings indexing at the moment, but I am
happy to give it a shot if that is something we want.
We currently keep two separate pieces of state regarding the current
loop on `SemanticIndexBuilder`. One is an enum simply reflecting whether
we are currently inside a loop, and the other is the saved flow states
for `break` statements found in the current loop.
For adding loopy control flow, I'll need to add some additional loop
state (`continue` states, for example). Prepare for this by
consolidating our existing loop state into a single struct and
simplifying the API for pushing and popping a loop.
This is purely a refactor, so tests are not changed.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Resolves #16374.
`PLW0177` now also reports the pattern of a case branch if it is an
attribute access whose qualified name is that of either `np.nan` or
`math.nan`.
As the rule is in preview, the changes are not preview-gated.
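For illustration, a rough sketch of the newly covered pattern (assuming PLW0177 is the NaN-comparison rule):

```py
import math

def describe(x: float) -> str:
    match x:
        case math.nan:  # now reported: comparing against NaN never matches; use `math.isnan(x)`
            return "nan"
        case _:
            return "other"
```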
## Test Plan
`cargo nextest run` and `cargo insta test`.
Minor follow-up to https://github.com/astral-sh/ruff/pull/16161
This `not_callable` flag wasn't functional, because it could never be
`false`. It was initialized to `true` and then only ever updated with
`|=`, which can never make it `false`.
Add a test that exercises the case where it _should_ be `false` (all of
the union elements are callable) but `bindings` is also empty (all union
elements have binding errors). Before this PR, the added test wrongly
emits a diagnostic that the union `Literal[f1] | Literal[f2]` is not
callable.
And add a test where a union call results in one binding error and one
not-callable error, where we currently give the wrong result (we show
only the binding error), with a TODO.
Also add TODO comments in a couple other tests where ideally we'd report
more than just one error out of a union call.
Also update the flag name to `all_errors_not_callable` to more clearly
indicate the semantics of the flag.
## Summary
Currently, the log messages emitted by the server include information
which isn't really required most of the time.
Here's the current format:
```
0.000755625s DEBUG main ruff_server::session::index::ruff_settings: Indexing settings for workspace: /Users/dhruv/playground/ruff
0.016334666s DEBUG ThreadId(10) ruff_server::session::index::ruff_settings: Ignored path via `exclude`: /Users/dhruv/playground/ruff/.vscode
0.019954541s INFO main ruff_server::session::index: Registering workspace: /Users/dhruv/playground/ruff
0.020160416s TRACE ruff:main notification{method="textDocument/didOpen"}: ruff_server::server::api: enter
0.020209625s TRACE ruff:worker:0 request{id=1 method="textDocument/diagnostic"}: ruff_server::server::api: enter
0.020228166s DEBUG ruff:worker:0 request{id=1 method="textDocument/diagnostic"}: ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/test.py
0.020359833s INFO ruff:main ruff_server::server: Configuration file watcher successfully registered
```
This PR updates the following:
* Uses current timestamp (same as red-knot) for all log levels instead
of the uptime value
* Includes the target and thread names only at the trace level
What this means is that the message is reduced to only important
information at DEBUG level:
```
2025-02-26 11:35:02.198375000 DEBUG Indexing settings for workspace: /Users/dhruv/playground/ruff
2025-02-26 11:35:02.209933000 DEBUG Ignored path via `exclude`: /Users/dhruv/playground/ruff/.vscode
2025-02-26 11:35:02.217165000 INFO Registering workspace: /Users/dhruv/playground/ruff
2025-02-26 11:35:02.217631000 DEBUG Included path via `include`: /Users/dhruv/playground/ruff/lsp/test.py
2025-02-26 11:35:02.217684000 INFO Configuration file watcher successfully registered
```
while still showing the other information (thread names and target) at
trace level:
```
2025-02-26 11:35:27.819617000 DEBUG main ruff_server::session::index::ruff_settings: Indexing settings for workspace: /Users/dhruv/playground/ruff
2025-02-26 11:35:27.830500000 DEBUG ThreadId(11) ruff_server::session::index::ruff_settings: Ignored path via `exclude`: /Users/dhruv/playground/ruff/.vscode
2025-02-26 11:35:27.837212000 INFO main ruff_server::session::index: Registering workspace: /Users/dhruv/playground/ruff
2025-02-26 11:35:27.837714000 TRACE ruff:main notification{method="textDocument/didOpen"}: ruff_server::server::api: enter
2025-02-26 11:35:27.838019000 INFO ruff:main ruff_server::server: Configuration file watcher successfully registered
2025-02-26 11:35:27.838084000 TRACE ruff:worker:1 request{id=1 method="textDocument/diagnostic"}: ruff_server::server::api: enter
2025-02-26 11:35:27.838205000 DEBUG ruff:worker:1 request{id=1 method="textDocument/diagnostic"}: ruff_server::resolve: Included path via `include`: /Users/dhruv/playground/ruff/lsp/test.py
```
## Summary
[Internal design
document](https://www.notion.so/astral-sh/In-editor-settings-19e48797e1ca807fa8c2c91b689d9070?pvs=4)
This PR expands `ruff.configuration` to allow inline configuration
directly in the editor. For example:
```json
{
"ruff.configuration": {
"line-length": 100,
"lint": {
"unfixable": ["F401"],
"flake8-tidy-imports": {
"banned-api": {
"typing.TypedDict": {
"msg": "Use `typing_extensions.TypedDict` instead"
}
}
}
},
"format": {
"quote-style": "single"
}
}
}
```
This means that `ruff.configuration` now accepts either a path to a
configuration file or the raw config itself. It's _mostly_ similar to
`--config` with one difference that's highlighted in the following
section. So, it can be said that the format of `ruff.configuration`, when
provided the config map, is the same as the one on the [playground] [^1].
## Limitations
<details><summary><b>Casing (<code>kebab-case</code> v/s/
<code>camelCase</code>)</b></summary>
<p>
The config keys need to be in `kebab-case` instead of `camelCase`, which
is being used for other settings in the editor.
This could be a bit confusing. For example, the `line-length` option can
be set directly via an editor setting or can be configured via
`ruff.configuration`:
```json
{
"ruff.configuration": {
"line-length": 100
},
"ruff.lineLength": 120
}
```
#### Possible solution
We could use a feature flag with [conditional
compilation](https://doc.rust-lang.org/reference/conditional-compilation.html#the-cfg_attr-attribute)
to indicate that when used in `ruff_server`, we need the `Options`
fields to be renamed as `camelCase` while for other crates it needs to
be renamed as `kebab-case`. But, this might not work very easily because
it will require wrapping the `Options` struct and create two structs in
which we'll have to add `#[cfg_attr(...)]` because otherwise `serde`
will complain:
```
error: duplicate serde attribute `rename_all`
--> crates/ruff_workspace/src/options.rs:43:38
|
43 | #[cfg_attr(feature = "editor", serde(rename_all = "camelCase"))]
| ^^^^^^^^^^
```
</p>
</details>
<details><summary><b>Nesting (flat v/s nested keys)</b></summary>
<p>
This is the major difference between the `--config` flag on the command
line and `ruff.configuration`, and it means that `ruff.configuration`
has the same value format as the [playground] [^1].
The config keys need to be split up, resulting in a
nested structure instead of a flat one:
So, the following **won't work**:
```json
{
"ruff.configuration": {
"format.quote-style": "single",
"lint.flake8-tidy-imports.banned-api.\"typing.TypedDict\".msg": "Use `typing_extensions.TypedDict` instead"
}
}
```
But, instead it would need to be split up like the following:
```json
{
"ruff.configuration": {
"format": {
"quote-style": "single"
},
"lint": {
"flake8-tidy-imports": {
"banned-api": {
"typing.TypedDict": {
"msg": "Use `typing_extensions.TypedDict` instead"
}
}
}
}
}
}
```
#### Possible solution (1)
The way we could solve this and make it the same as `--config` would be
to add manual logic for converting the JSON map into an equivalent TOML
string, which would then be parsed into `Options`.
So, the following JSON map:
```json
{ "lint.flake8-tidy-imports": { "banned-api": {"\"typing.TypedDict\".msg": "Use typing_extensions.TypedDict instead"}}}
```
would need to be converted into the following TOML string:
```toml
lint.flake8-tidy-imports = { banned-api = { "typing.TypedDict".msg = "Use typing_extensions.TypedDict instead" } }
```
by recursively converting `"key": value` into `key = value`, i.e.,
removing the quotes from the key and replacing `:` with `=`.
#### Possible solution (2)
Another option would be to strictly accept a `Map<String, String>`,
convert it into `key = value` pairs, and then parse that as a TOML
string. This would also match `--config`, but quotes might become a
nuisance because JSON only allows double quotes, so any inner quotes
would need to be escaped or replaced with single quotes.
</p>
</details>
## Test Plan
### VS Code
**Requires https://github.com/astral-sh/ruff-vscode/pull/702**
**`settings.json`**:
```json
{
  "ruff.lint.extendSelect": ["TID"],
  "ruff.configuration": {
    "line-length": 50,
    "format": {
      "quote-style": "single"
    },
    "lint": {
      "unfixable": ["F401"],
      "flake8-tidy-imports": {
        "banned-api": {
          "typing.TypedDict": {
            "msg": "Use `typing_extensions.TypedDict` instead"
          }
        }
      }
    }
  }
}
```
The following video shows me:
1. Checking that the diagnostics include `TID`
2. Running `Ruff: Fix all auto-fixable problems` to test `unfixable`
3. Running `Format: Document` to test `line-length` and `quote-style`
https://github.com/user-attachments/assets/0a38176f-3fb0-4960-a213-73b2ea5b1180
### Neovim
**`init.lua`**:
```lua
require('lspconfig').ruff.setup {
  init_options = {
    settings = {
      lint = {
        extendSelect = { 'TID' },
      },
      configuration = {
        ['line-length'] = 50,
        format = {
          ['quote-style'] = 'single',
        },
        lint = {
          unfixable = { 'F401' },
          ['flake8-tidy-imports'] = {
            ['banned-api'] = {
              ['typing.TypedDict'] = {
                msg = 'Use typing_extensions.TypedDict instead',
              },
            },
          },
        },
      },
    },
  },
}
```
Same steps as in the VS Code test:
https://github.com/user-attachments/assets/cfe49a9b-9a89-43d7-94f2-7f565d6e3c9d
## Documentation Preview
https://github.com/user-attachments/assets/e0062f58-6ec8-4e01-889d-fac76fd8b3c7
[playground]: https://play.ruff.rs
[^1]: This has the advantage that the value can be copy-pasted directly
into the playground
## Summary
This PR builds on the changes in #16220 to pass a target Python version
to the parser. It also adds the `Parser::unsupported_syntax_errors` field, which
collects version-related syntax errors while parsing. These syntax
errors are then turned into `Message`s in ruff (in preview mode).
This PR only detects one syntax error (`match` statement before Python
3.10), but it has been pretty quick to extend to several other simple
errors (see #16308 for example).
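For instance, with a target version below 3.10, a snippet like the following (illustrative, not taken from the test suite) would now be reported:
```py
command = "quit"

# Reported as a version-related syntax error when the target version is 3.9 or
# earlier, since `match` statements were added in Python 3.10.
match command.split():
    case ["quit"]:
        print("bye")
    case _:
        print("unknown command")
```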
## Test Plan
The current tests are CLI tests in the linter crate, but these could be
supplemented with inline parser tests after #16357.
I also tested the display of these syntax errors in VS Code:


---------
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
In https://github.com/astral-sh/ruff/pull/16306#discussion_r1966290700,
@carljm pointed out that #16306 introduced a terminology problem, with
too many things called a "constraint". This is a follow-up PR that
renames `Constraint` to `Predicate` to hopefully clear things up a bit.
So now we have that:
- a _predicate_ is a Python expression that might influence type
inference
- a _narrowing constraint_ is a list of predicates that constrain the
type of a binding that is visible at a use
- a _visibility constraint_ is a ternary formula of predicates that
defines whether a binding is visible or a statement is reachable
This is a pure renaming, with no behavioral changes.
## Summary
Model dunder-calls correctly (and in one single place), by implementing
this behavior (using `__getitem__` as an example).
```py
def getitem_desugared(obj: object, key: object) -> object:
    getitem_callable = find_in_mro(type(obj), "__getitem__")
    if hasattr(getitem_callable, "__get__"):
        getitem_callable = getitem_callable.__get__(obj, type(obj))
    return getitem_callable(key)
```
See the new `calls/dunder.md` test suite for more information. The new
behavior also requires far fewer lines of code (the diff is only
positive due to the new tests).
## Test Plan
New tests; fix TODOs in existing tests.
This PR adds an implementation of [association
lists](https://en.wikipedia.org/wiki/Association_list), and uses them to
replace the previous `BitSet`/`SmallVec` representation for narrowing
constraints.
An association list is a linked list of key/value pairs. We additionally
guarantee that the elements of an association list are sorted (by their
keys), and that they do not contain any entries with duplicate keys.
Association lists have fallen out of favor in recent decades, since you
often need operations that are inefficient on them. In particular,
looking up a random element by index is O(n), just like a linked list;
and looking up an element by key is also O(n), since you must do a
linear scan of the list to find the matching element. Luckily we don't
need either of those operations for narrowing constraints!
The typical implementation also suffers from poor cache locality and
high memory allocation overhead, since each list cell is typically a
separate heap allocation. We solve that last problem by storing the
cells of an association list in an `IndexVec` arena.
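A minimal Python sketch of the idea (the names and the arena type are illustrative; the actual implementation is in Rust and uses `IndexVec`):
```py
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Cell:
    key: int
    value: str
    rest: Optional[int]  # arena index of the tail of the list, or None for the empty list

class Arena:
    """All cells of all association lists live in one flat vector, like an `IndexVec`."""

    def __init__(self) -> None:
        self.cells: list[Cell] = []

    def _alloc(self, cell: Cell) -> int:
        self.cells.append(cell)
        return len(self.cells) - 1

    def insert(self, head: Optional[int], key: int, value: str) -> int:
        """Return the head of a new list mapping `key` to `value`, keeping keys sorted and unique."""
        if head is None or key < self.cells[head].key:
            return self._alloc(Cell(key, value, head))
        cell = self.cells[head]
        if key == cell.key:
            # Overwrite the existing entry rather than introducing a duplicate key.
            return self._alloc(Cell(key, value, cell.rest))
        return self._alloc(Cell(cell.key, cell.value, self.insert(cell.rest, key, value)))

    def get(self, head: Optional[int], key: int) -> Optional[str]:
        """O(n) linear scan -- acceptable because narrowing constraints never need random access."""
        while head is not None:
            cell = self.cells[head]
            if cell.key == key:
                return cell.value
            head = cell.rest
        return None
```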
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
I am working on a project that uses ruff linters' docs to generate a
fine-tuning dataset for LLMs.
To achieve this, I first ran the command `ruff rule --all
--output-format json` to retrieve all the rules. Then, I parsed the
explanation field to get these 3 consistent sections:
- `Why is this bad?`
- `What it does`
- `Example`
However, during the initial processing, I noticed that the markdown
headings are not that consistent. For instance:
- In most cases, `Use instead` appears as a normal paragraph within the
`Example` section, but in the file
`crates/ruff_linter/src/rules/flake8_bandit/rules/django_extra.rs` it is
a level-2 heading
- The heading "What it does**?**" is used in some places, while others
consistently use "What it does"
- There are 831 `Example` headings and 65 `Examples` headings, but all
of them contain only one example case
This PR normalizes these headings across all rules.
## Test Plan
CI passes.
## Summary
Add a diagnostic if a pure instance variable is accessed on a class object. For example
```py
class C:
    instance_only: str

    def __init__(self):
        self.instance_only = "a"

# error: Attribute `instance_only` can only be accessed on instances, not on the class object `Literal[C]` itself.
C.instance_only
```
---------
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
This PR is another step in preparing to detect syntax errors in the
parser. It introduces the new `per-file-target-version` top-level
configuration option, which holds a mapping of compiled glob patterns to
Python versions. I intend to use the
`LinterSettings::resolve_target_version` method to pass the resolved
version to the parser:
f50849aeef/crates/ruff_linter/src/linter.rs (L491-L493)
## Test Plan
I added two new CLI tests to show that the `per-file-target-version` is
respected in both the formatter and the linter.
## Summary
Add support for `@classmethod`s.
```py
class C:
    @classmethod
    def f(cls, x: int) -> str:
        return "a"

reveal_type(C.f(1))  # revealed: str
```
## Test Plan
New Markdown tests
## Summary
* The existing example did not include a `RawSQL()` call like it should
* Also clarified the example a bit to make it clearer that the code is
not secure
## Test Plan
N/A, only documentation updated
## Summary
I spotted a minor mistake in my descriptor protocol implementation where
`C.descriptor` would pass the meta type (`type`) of the type of `C`
(`Literal[C]`) as the owner argument to `__get__`, instead of passing
`Literal[C]` directly.
## Test Plan
New test.
Two related changes. For context:
1. We were maintaining two separate arenas of `Constraint`s in each
use-def map. One was used for narrowing constraints, and the other for
visibility constraints. The visibility constraint arena was interned,
ensuring that we always used the same ID for any particular
`Constraint`. The narrowing constraint arena was not interned.
2. The TDD code relies on _all_ TDD nodes being interned and reduced.
This is an important requirement for TDDs to be a canonical form, which
allows us to use a single int comparison to test for "always true/false"
and to compare two TDDs for equivalence. But we also need to support an
individual `Constraint` having multiple values in a TDD evaluation (e.g.
to handle a `while` condition having different values the first time
it's evaluated vs later times). Previously, we handled that by
introducing a "copy" number, which was only there as a disambiguator, to
allow an interned, deduplicated constraint ID to appear in the TDD
formula multiple times.
A better way to handle (2) is to not intern the constraints in the
visibility constraint arena! The caller now gets to decide: if they add
a `Constraint` to the arena more than once, they get distinct
`ScopedConstraintId`s — which the TDD code will treat as distinct
variables, allowing them to take on different values in the ternary
function.
With that in place, we can then consolidate on a single (non-interned)
arena, which is shared for both narrowing and visibility constraints.
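A minimal Python sketch of the difference (illustrative only; the real arenas are Rust types):
```py
class InternedArena:
    """Interned: equal constraints always map to the same ID."""
    def __init__(self) -> None:
        self._ids: dict[str, int] = {}

    def add(self, constraint: str) -> int:
        return self._ids.setdefault(constraint, len(self._ids))

class Arena:
    """Non-interned: every `add` hands out a fresh ID, even for equal constraints."""
    def __init__(self) -> None:
        self._constraints: list[str] = []

    def add(self, constraint: str) -> int:
        self._constraints.append(constraint)
        return len(self._constraints) - 1

arena = Arena()
first = arena.add("x is not None")   # e.g. a `while` condition, first evaluation
second = arena.add("x is not None")  # the same condition, later evaluations
assert first != second  # distinct IDs become distinct TDD variables -- no "copy" number needed
```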
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
Resolves 3/4 requests in #16217:
- ✅ Remove methods that are not special methods: `__cmp__`, `__div__`,
`__nonzero__`, and `__unicode__`.
- ✅ Add special methods: `__next__`, `__buffer__`, `__class_getitem__`,
`__mro_entries__`, `__release_buffer__`, and `__subclasshook__`.
- ✅ Support positional-only arguments.
- ❌ Add support for the module functions `__dir__` and `__getattr__`. As
mentioned in the issue, the check is scoped to methods rather than
module functions. I am hesitant to expand the scope of this check
without a discussion.
## Test Plan
- Manually confirmed each example file from the issue functioned as
expected.
- Ran `cargo nextest` to ensure the `unexpected_special_method_signature`
test still passed.
Fixes #16217.
## Summary
This is just a small refactor to move workspace-related structs and
impls out of `server.rs`, where `Server` is defined, and into a new
`workspace.rs`.
## Summary
This PR achieves the following:
* Add support for checking method calls, and inferring return types from
method calls. For example:
```py
reveal_type("abcde".find("abc"))  # revealed: int
reveal_type("foo".encode(encoding="utf-8"))  # revealed: bytes
"abcde".find(123)  # error: [invalid-argument-type]

class C:
    def f(self) -> int:
        pass

reveal_type(C.f)  # revealed: <function `f`>
reveal_type(C().f)  # revealed: <bound method: `f` of `C`>
C.f()  # error: [missing-argument]
reveal_type(C().f())  # revealed: int
```
* Implement the descriptor protocol, i.e. properly call the `__get__`
method when a descriptor object is accessed through a class object or an
instance of a class. For example:
```py
from typing import Literal

class Ten:
    def __get__(self, instance: object, owner: type | None = None) -> Literal[10]:
        return 10

class C:
    ten: Ten = Ten()

reveal_type(C.ten)  # revealed: Literal[10]
reveal_type(C().ten)  # revealed: Literal[10]
```
* Add support for member lookup on intersection types.
* Support type inference for `inspect.getattr_static(obj, attr)` calls.
This was mostly used as a debugging tool during development, but seems
more generally useful. It can be used to bypass the descriptor protocol.
For the example above:
```py
from inspect import getattr_static
reveal_type(getattr_static(C, "ten")) # revealed: Ten
```
* Add a new `Type::Callable(…)` variant with the following sub-variants:
* `Type::Callable(CallableType::BoundMethod(…))` — represents bound
method objects, e.g. `C().f` above
* `Type::Callable(CallableType::MethodWrapperDunderGet(…))` — represents
`f.__get__` where `f` is a function
* `Type::Callable(WrapperDescriptorDunderGet)` — represents
`FunctionType.__get__`
* Add new known classes:
* `types.MethodType`
* `types.MethodWrapperType`
* `types.WrapperDescriptorType`
* `builtins.range`
## Performance analysis
On this branch, we do more work. We need to do more call checking, since
we now check all method calls. We also need to do ~twice as many member
lookups, because we need to check if a `__get__` attribute exists on
accessed members.
A brief analysis on `tomllib` shows that we now call `Type::call` 1780
times, compared to 612 calls before.
## Limitations
* Data descriptors are not yet supported, i.e. we do not infer correct
types for descriptor attribute accesses in `Store` context and do not
check writes to descriptor attributes. I felt like this was something
that could be split out as a follow-up without risking a major
architectural change.
* We currently distinguish between `Type::member` (with descriptor
protocol) and `Type::static_member` (without descriptor protocol). The
former corresponds to `obj.attr`, the latter corresponds to
`getattr_static(obj, "attr")`. However, to model some details correctly,
we would also need to distinguish between a static member lookup *with*
and *without* instance variables. The lookup without instance variables
corresponds to `find_name_in_mro`
[here](https://docs.python.org/3/howto/descriptor.html#invocation-from-an-instance).
We currently approximate both using `member_static`, which leads to two
open TODOs. Changing this would be a larger refactoring of
`Type::own_instance_member`, so I chose to leave it out of this PR.
## Test Plan
* New `call/methods.md` test suite for method calls
* New tests in `descriptor_protocol.md`
* New `call/getattr_static.md` test suite for `inspect.getattr_static`
* Various updated tests
## Summary
This avoids looking up `__bool__` on class `bool` for every
`Type::Instance(bool).bool()` call. 1% performance win on cold cache, 4%
win on incremental performance.
This updates the `SymbolBindings` and `SymbolDeclarations` types to use
a single smallvec of live bindings/declarations, instead of splitting
that out into separate containers for each field.
I'm seeing an 11-13% `cargo bench` performance improvement with this
locally (for both cold and incremental). I'm interested to see if
Codspeed agrees!
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
A minor cleanup that breaks up a `HashMap` of an enum into separate
`HashMap`s for each variant. (These separate fields were already how
this cache was being described in the big comment at the top of the
file!)
This is a small tweak to avoid adding the callable `Type` on the error
value itself. Namely, it's always available regardless of the error, and
it's easy to pass it down explicitly to the diagnostic generating code.
It's likely that the other `CallBindingError` variants will also want
the callable `Type` to improve diagnostics too. This way, we don't have
to duplicate the `Type` on each variant. It's just available to all of
them.
Ref https://github.com/astral-sh/ruff/pull/16239#discussion_r1962352646
## Summary
Follow up on the discussion
[here](https://github.com/astral-sh/ruff/pull/16121#discussion_r1962973298).
Replace builtin classes with custom placeholder names, which should
hopefully make the tests a bit easier to understand.
I carefully renamed things one after the other, to make sure that there
is no functional change in the tests.
## Summary
This PR should help in
https://github.com/astral-sh/ruff-vscode/issues/676.
There are two issues that this is trying to fix, both related to the way
shutdown should happen as per the protocol:
1. After the server handled the [shutdown
request](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#shutdown)
and while waiting for the exit notification:
> If a server receives requests after a shutdown request those requests
should error with `InvalidRequest`.
But we raised an error and exited. This PR fixes it by entering a loop
which responds to any request during this period with `InvalidRequest`.
2. If the server received an [exit
notification](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#exit)
but the shutdown request was never received, the server handled that by
logging and exiting with success, but as per the spec:
> The server should exit with success code 0 if the shutdown request has
been received before; otherwise with error code 1.
So, this PR fixes that as well by raising an error in this case. A
minimal sketch of both behaviors follows.
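The `connection` object and its methods below are hypothetical stand-ins used only for illustration; they are not the actual Rust server API.
```py
def run_until_exit(connection) -> int:
    shutdown_received = False
    for message in connection.incoming():
        if message.is_request and message.method == "shutdown":
            shutdown_received = True
            connection.respond(message.id, result=None)
        elif message.is_notification and message.method == "exit":
            # Per the spec: exit code 0 only if a shutdown request was received first.
            return 0 if shutdown_received else 1
        elif message.is_request and shutdown_received:
            # Requests received after shutdown must error with `InvalidRequest`.
            connection.respond_error(message.id, code="InvalidRequest")
        else:
            connection.handle(message)
    return 1
```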
## Test Plan
I'm not sure how to go about testing this without using a mock server.
## Summary
This is part of the preparation for detecting syntax errors in the
parser from https://github.com/astral-sh/ruff/pull/16090/. As suggested
in [this
comment](https://github.com/astral-sh/ruff/pull/16090/#discussion_r1953084509),
I started working on a `ParseOptions` struct that could be stored in the
parser. For this initial refactor, I only made it hold the existing
`Mode` option, but for syntax errors, we will also need it to have a
`PythonVersion`. For that use case, I'm picturing something like a
`ParseOptions::with_python_version` method, so you can extend the
current calls to something like
```rust
ParseOptions::from(mode).with_python_version(settings.target_version)
```
But I thought it was worth adding `ParseOptions` alone without changing
any other behavior first.
Most of the diff is just updating call sites taking `Mode` to take
`ParseOptions::from(Mode)` or those taking `PySourceType`s to take
`ParseOptions::from(PySourceType)`. The interesting changes are in the
new `parser/options.rs` file and smaller parts of `parser/mod.rs` and
`ruff_python_parser/src/lib.rs`.
## Test Plan
Existing tests, this should not change any behavior.
We now resolve references in "eager" scopes correctly — using the
bindings and declarations that are visible at the point where the eager
scope is created, not the "public" type of the symbol (typically the
bindings visible at the end of the scope).
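As an illustrative Python example (not taken from the PR), a comprehension is an eager scope: the reference to `x` inside it should be resolved using the binding visible where the comprehension is created, rather than the symbol's public type at the end of the module:
```py
x = 1
values = [x + 1 for _ in range(3)]  # `x` here sees the binding `x = 1`
x = "shadowed later"  # this later re-binding should not affect the comprehension above
```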
---------
Co-authored-by: Alex Waygood <alex.waygood@gmail.com>
This uses the refactoring and support for secondary diagnostic messages
to improve the diagnostic for "invalid argument type." The main
improvement here is that we show where the function being called is
defined, and annotate the span corresponding to the invalid parameter.
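As an illustrative snippet (not from the PR), the diagnostic for the bad call below would now also carry a secondary annotation pointing at the `name: str` parameter in the definition of `greet`:
```py
def greet(name: str) -> str:
    return "Hello, " + name

greet(42)  # error: [invalid-argument-type]
```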
This is a small hack to make the `Diagnostic` trait
capable of supporting multiple attached spans.
This design should be considered transient. This was just the
quickest way that I could see to pass multiple spans through from
the type checker to the diagnostic renderer.
This commit has no behavioral changes.
This refactor moves the logic for turning a `D: Diagnostic` into
an `annotate_snippets::Message` into its own types. This would
ideally just be a function or something, but the `annotate-snippets`
types want borrowed data, and sometimes we need to produce owned
data. So we gather everything we need into our own types and then
spit it back out in the format that `annotate-snippets` wants.
This refactor was motivated by wanting to render multiple snippets.
The logic for generating a code frame is complicated enough that
it's worth splitting out so that we can reuse it for other spans.
(Note that one should consider this prototype-level code. It is
unlikely to survive for long.)
It seems nothing is using it, and I'm not sure if it makes semantic
sense. Particularly if we want to support multiple ranges. One could
make an argument that this ought to correspond to the "primary"
range (which we should have), but I think such a concept is better
expressed as an explicit routine if possible.
## Summary
Resolves #15979.
The file explains what Red Knot is (a type checker), what state it is in
(not yet ready for user testing), what its goals ("extremely fast") and
non-goals (not a drop-in replacement for other type checkers) are, as
well as what the crates contain.
## Test Plan
None.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
Move class-attribute-related cases (properties, methods, variables) from
AIR302_names to AIR302_class_attribute
## Test Plan
No functionality change. The test fixture is reorganized.
Fixes a false negative when the slice bound uses the length of a string literal.
We were meant to check the following, for example. Given:
```python
text[:bound] if text.endswith(suffix) else text
```
We want to know whether either:
- `suffix` is a string literal and `bound` is a number literal, or
- `suffix` is an expression and `bound` is
exactly `-len(suffix)` (as AST nodes, prior to evaluation).
The issue is that negative number literals like `-10` are stored as
unary operators applied to a number literal in the AST. So when `suffix`
was a string literal but `bound` was `-len(suffix)` we were getting
caught in the match arm where `bound` needed to be a number. This is now
fixed with a guard.
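As an illustrative example of the previously missed case (assuming the rule's usual `str.removesuffix` suggestion):
```py
filename = "report.txt"
# Previously not flagged: `-len(".txt")` is a unary minus applied to a call, not a number literal.
trimmed = filename[:-len(".txt")] if filename.endswith(".txt") else filename
# Preferred equivalent:
trimmed = filename.removesuffix(".txt")
```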
Closes #16231
## Summary
This PR updates the formatter and linter to use the `PythonVersion`
struct from the `ruff_python_ast` crate internally. While this doesn't
remove the need for the `linter::PythonVersion` enum, it does remove the
`formatter::PythonVersion` enum and limits the use in the linter to
deserializing from CLI arguments and config files and moves most of the
remaining methods to the `ast::PythonVersion` struct.
## Test Plan
Existing tests, with some inputs and outputs updated to reflect the new
(de)serialization format. I think these are test-specific and shouldn't
affect any external (de)serialization.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR updates the `ruff.printDebugInformation` command to return the
info as a string in the response. Currently, we send a `window/logMessage`
request with the info, but that has the disadvantage that it's not
directly visible to the user.
`rust-analyzer` does something similar with its `rust-analyzer/status`
request, which returns the status as a string that the client can then
display in a separate window. This is what I'm thinking of doing as well.
Other editors can also benefit from this by opening a temporary file
with the information so that the user can see it directly.
There are a couple of options here:
1. Keep using the command, keep the log request and return the string
2. Keep using the command, remove the log request and return the string
3. Create a new request similar to `rust-analyzer/status` which returns
a string
This PR implements (1) but I'd want to move towards (2) and remove the
log request completely. We haven't advertised it as such so this would
only require updating the VS Code extension to handle it by opening a
new document with the debug content.
## Test plan
For VS Code, refer to https://github.com/astral-sh/ruff-vscode/pull/694.
For Neovim, one could do:
```lua
local function execute_ruff_command(command)
  local client = vim.lsp.get_clients({
    bufnr = vim.api.nvim_get_current_buf(),
    name = 'ruff', -- the Ruff language server client
    method = 'workspace/executeCommand',
  })[1]
  if not client then
    return
  end
  client.request('workspace/executeCommand', {
    command = command,
    arguments = {
      { uri = vim.uri_from_bufnr(0) },
    },
  }, function(err, result)
    if err then
      -- log error
      return
    end
    vim.print(result)
    -- Or, open a new window with the `result` content
  end)
end
```
## Summary
Separate ImportPathMoved and ProviderName to avoid misuse (AIR303)
## Test Plan
Only the code arrangement is updated; the existing test fixture should
not be changed.
## Summary
Related to https://github.com/astral-sh/ruff-vscode/pull/686, this PR
ignores source code actions for notebooks that are not prefixed with
`notebook`.
The main motivation is that the native server does not actually handle
them well, which results in gibberish code. There's some context about
this in
https://github.com/astral-sh/ruff-vscode/issues/680#issuecomment-2647490812
and the following comments.
closes: https://github.com/astral-sh/ruff-vscode/issues/680
## Test Plan
Running a notebook with the following does nothing except log the
message:
```json
"notebook.codeActionsOnSave": {
  "source.organizeImports.ruff": "explicit",
},
```
while including the `notebook`-prefixed code action does make the edit
(as usual):
```json
"notebook.codeActionsOnSave": {
  "notebook.source.organizeImports.ruff": "explicit"
},
```
## Summary
This PR does the following:
* Moves the following from `types.rs` into `symbol.rs`:
* `symbol`
* `global_symbol`
* `imported_symbol`
* `symbol_from_bindings`
* `symbol_from_declarations`
* `SymbolAndQualifiers`
* `SymbolFromDeclarationsResult`
* Moves the following from `stdlib.rs` into `symbol.rs` and removes
`stdlib.rs`:
* `known_module_symbol`
* `builtins_symbol`
* `typing_symbol` (only for tests)
* `typing_extensions_symbol`
* `builtins_module_scope`
* `core_module_scope`
* Add `symbol_from_bindings_impl` and `symbol_from_declarations_impl` to
keep `RequiresExplicitReExport` an implementation detail
* Make `declaration_type` `pub(crate)` as it's required in
`symbol_from_declarations` (`binding_type` is already `pub(crate)`)
The main motivation is to keep the implementation details private and
only expose an ergonomic API that uses sane defaults for various
scenarios to avoid any mistakes from the caller. Refer to
https://github.com/astral-sh/ruff/pull/16133#discussion_r1955262772,
https://github.com/astral-sh/ruff/pull/16133#issue-2850146612 for
details.
## Summary
This PR makes the following changes:
- It adjusts various callsites to use the new
`ast::StringLiteral::contents_range()` method that was introduced in
https://github.com/astral-sh/ruff/pull/16183. This is less verbose and
more type-safe than using the `ast::str::raw_contents()` helper
function.
- It adds a new `ast::ExprStringLiteral::as_unconcatenated_literal()`
helper method, and adjusts various callsites to use it. This addresses
@MichaReiser's review comment at
https://github.com/astral-sh/ruff/pull/16183#discussion_r1957334365.
There is no functional change here, but it helps readability to make it
clearer that we're differentiating between implicitly concatenated
strings and unconcatenated strings at various points.
- It renames the `StringLiteralValue::flags()` method to
`StringLiteralFlags::first_literal_flags()`. If you're dealing with an
implicitly concatenated string `string_node`,
`string_node.value.flags().closer_len()` could give an incorrect result;
this renaming makes it clearer that the `StringLiteralFlags` instance
returned by the method is only guaranteed to give accurate information
for the first `StringLiteral` contained in the `ExprStringLiteral` node.
- It deletes the unused `BytesLiteralValue::flags()` method. This seems
prone to misuse in the same way as `StringLiteralValue::flags()`: if
it's an implicitly concatenated bytestring, the `BytesLiteralFlags`
instance returned by the method would only give accurate information for
the first `BytesLiteral` in the bytestring.
## Test Plan
`cargo test`
On `main` we warn the user if there is an invalid noqa comment[^1] and
at least one of the following holds:
- There is at least one diagnostic
- A lint rule related to `noqa`s is enabled (e.g. `RUF100`)
This is probably strange behavior from the point of view of the user, so
we now show invalid `noqa`s even when there are no diagnostics.
Closes #12831
[^1]: For the current definition of "invalid noqa comment", which may be
expanded in #12811 . This PR is independent of loc. cit. in the sense
that the CLI warnings should be consistent, regardless of which `noqa`
comments are considered invalid.
## Summary
Fixes #16189.
Only `sys.breakpointhook` is flagged by the upstream linter:
007a745c86/pylint/checkers/stdlib.py (L38)
but I think it makes sense to flag
[`__breakpointhook__`](https://docs.python.org/3/library/sys.html#sys.__breakpointhook__)
too, as suggested in the issue because it
> contain[s] the original value of breakpointhook [...] in case [it
happens] to get replaced with broken or alternative objects.
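As a small illustration (a hypothetical snippet, not copied from the test fixtures), both of these calls should now be reported:
```py
import sys

sys.breakpointhook()      # already flagged
sys.__breakpointhook__()  # now also flagged
```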
## Test Plan
New T100 test cases
## Summary
Provides documentation about the FIPS-compliance flag
(`usedforsecurity`) for Python's `hashlib`.
Fixes #16188
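For reference, a minimal example of the `usedforsecurity` flag being documented (available since Python 3.9):
```py
import hashlib

# Marking the hash as not used for security purposes keeps it usable on
# FIPS-restricted builds of Python.
checksum = hashlib.md5(b"some data", usedforsecurity=False).hexdigest()
print(checksum)
```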
## Test Plan
* pre-commit hooks
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
Running `cargo test -p red_knot_python_semantic` failed because of a
missing serde feature. This PR enables `ruff_python_ast`'s `serde`
feature if the crate's `serde` feature is enabled.
## Test Plan
`cargo test -p red_knot_python_semantic` compiles again
When adjusting the existing tests, I aimed to avoid dealing with the
special case in other tests if it's not necessary to do so (that is,
avoid using `float` and `complex` as examples where we just need "some
type"), and keep the tests for the special case mostly collected in the
mdtest dedicated to that purpose.
Fixes https://github.com/astral-sh/ruff/issues/14932
## Summary
Added checks for subscript expressions on builtin classes, as in FURB189.
The builtin base class is replaced with its `collections` counterpart,
and the types from the subscript are kept.
Resolves #16130
> Note: Added some comments in the code explaining why
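As an illustrative example of the newly handled case (FURB189's usual suggestion of the `collections` counterparts is assumed here):
```py
from collections import UserDict

# Flagged: subclassing a subscripted builtin class.
class Config(dict[str, int]):
    ...

# The fix switches to the `collections` counterpart and keeps the subscripted types.
class FixedConfig(UserDict[str, int]):
    ...
```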
## Test Plan
- Added subscripted `dict` and `list` classes to the test file.
- Tested locally to check that the symbols are changed and the types are
kept.
- No modifications changed on optional `str` values.
## Summary
This PR moves the `PythonVersion` struct from the
`red_knot_python_semantic` crate to the `ruff_python_ast` crate so that
it can be used more easily in the syntax error detection work. Compared
to that [prototype](https://github.com/astral-sh/ruff/pull/16090/) these
changes reduce us from 2 `PythonVersion` structs to 1.
This does not unify any of the `PythonVersion` *enums*, but I hope to
make some progress on that in a follow-up.
## Test Plan
Existing tests, this should not change any external behavior.
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This change begins to resolve #16071 by moving the `OperatorPrecedence`
structs from the `ruff_python_linter` crate into `ruff_python_ast`. This
PR also implements `precedence()` methods on the `Expr` and `ExprRef`
enums.
## Test Plan
Since this change mainly shifts existing logic, I didn't add any
additional tests. Existing tests do pass.