We weren't really using `chrono` for anything other than getting the
current time and formatting it for logs.
Unfortunately, this doesn't quite get us to a point where `chrono`
can be removed. From what I can tell, we're still bringing it in via
[`tracing-subscriber`](https://docs.rs/tracing-subscriber/latest/tracing_subscriber/)
and
[`quick-junit`](https://docs.rs/quick-junit/latest/quick_junit/).
`tracing-subscriber` does have an
[issue open about Jiff](https://github.com/tokio-rs/tracing/discussions/3128),
but there's no movement on it.
Normally I'd suggest holding off on this since it doesn't get us all of
the way there and it would be better to avoid bringing in two datetime
libraries, but we are, it appears, already there. In particular,
`env_logger` brings in Jiff. So this PR doesn't really make anything
worse, but it does bring us closer to an all-Jiff world.
## Summary
Add very early support for dataclasses. This is mostly to make sure that
we do not emit false positives on dataclass construction, but it also
lays some foundations for future extensions.
This seems like a good initial step to merge to me, as it basically
removes all false positives on dataclass constructor calls. This allows
us to use the ecosystem checks for making sure we don't introduce new
false positives as we continue to work on dataclasses.
## Ecosystem analysis
I re-ran the mypy_primer evaluation of [the `__init__`
PR](https://github.com/astral-sh/ruff/pull/16512) locally with our
current mypy_primer version and project selection. It introduced 1597
new diagnostics. Filtering those by searching for `__init__` and
rejecting those that contain `invalid-argument-type` (those could not
possibly be solved by this PR) leaves 1281 diagnostics. The current
version of this PR removes 1171 diagnostics, which leaves 110
unaccounted for. I extracted the lint + file path for all of these
diagnostics and generated a diff (of diffs), to see which
`__init__`-diagnostics remain. I looked at a subset of these: There are
a lot of `SomeClass(*args)` calls where we don't understand the
unpacking yet (this is not even related to `__init__`). Some others are
related to `NamedTuple`, which we also don't support yet. And then there
are some errors related to `@attrs.define`-decorated classes, which
would probably require support for `dataclass_transform`, which I made
no attempt to include in this PR.
## Test Plan
New Markdown tests.
## Summary
* Partial #17238
* Flyby from a Discord discussion: `todo_type!` now statically checks that
the message contains no parentheses, to avoid discrepancies between debug
and release build tests
## Test Plan
Many mdtests are changing.
## Summary
This PR moves all the relation methods from `CallableType` to
`Signature`.
The main reason for this is that `Signature` is going to be the common
denominator between normal and overloaded callables, and the core logic
to check a certain relationship is going to just require the information
that exists on `Signature`. For example, to check whether an
overloaded callable is a subtype of a normal callable, we need to check
whether _every_ overloaded signature is a subtype of the normal
callable's signature. This "every" logic would become part of
`CallableType`, while the core logic of checking the subtyping would exist
on `Signature`.
Summary
--
This PR implements detecting the use of `await` expressions outside of
async functions. This is a reimplementation of
[await-outside-async
(PLE1142)](https://docs.astral.sh/ruff/rules/await-outside-async/) as a
semantic syntax error.
Despite the rule name, PLE1142 also applies to `async for` and `async
with`, so these are covered here too.
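A sketch of the constructs this now flags (the helper names after the first case are placeholders, not real APIs):
```py
import asyncio

async def ok() -> None:
    await asyncio.sleep(0)  # fine: inside an async function

def not_ok() -> None:
    await asyncio.sleep(0)  # error: `await` outside an async function
    async for _ in some_async_iterable():  # error: `async for` outside an async function
        ...
    async with some_async_context():  # error: `async with` outside an async function
        ...
```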
Test Plan
--
Existing PLE1142 tests.
I also deleted more code from the `SemanticSyntaxCheckerVisitor` to
avoid changes in other parser tests.
## Summary
Allows us to establish that two literals do not have a subtype
relationship with each other, without having to fall back to a typeshed
Instance type, which is comparatively slow.
Improves the performance of the many-string-literals union benchmark by
5x.
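A minimal illustration of the kind of check this fast path covers:
```py
from typing import Literal

def f(x: Literal["a"], y: Literal["b"]) -> None:
    # `Literal["a"]` and `Literal["b"]` are distinct literal types: neither is a
    # subtype of the other, and we can now decide that directly instead of going
    # through the (comparatively slow) typeshed `str` instance type.
    ...
```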
## Test Plan
`cargo test -p red_knot_python_semantic` and `cargo bench --bench
red_knot`.
## Summary
Add a benchmark for a large-union case that currently has exponential
blow-up in execution time.
## Test Plan
`cargo bench --bench red_knot`
## Summary
This is mainly a follow-up from
https://github.com/astral-sh/ruff/pull/17357 to use the
`concise_message` method for the red-knot server which I noticed
recently while testing the overload implementation.
This commit shuffles the reporting API around a little bit such that a
range is required, up front, when reporting a lint diagnostic. This in
turn enables us to make suppression checking eager.
In order to avoid callers needing to provide the range twice, we create
a primary annotation *without* a message inside the `Diagnostic`
encapsulated by the guard. We do this instead of requiring the message
up front because we're concerned about API complexity and the effort
involved in creating the message.
In order to provide a means of attaching a message to the primary
annotation, we expose a convenience API on `LintDiagnosticGuard` for
setting the message. This isn't generally possible for a `Diagnostic`,
but since a `LintDiagnosticGuard` knows how the `Diagnostic` was
constructed, we can offer this API correctly.
It strikes me that it might be easy to forget to attach a primary
annotation message, but I think this is the "least bad" failure mode. And
in particular, it should be somewhat obvious that it's missing once one
adds a snapshot test for how the diagnostic renders.
Otherwise, this API gives us the ability to eagerly check whether a
diagnostic should be reported with nearly minimal information. It also
shouldn't have any footguns since it guarantees that the primary
annotation is tied to the file in the typing context. And it keeps
things pretty simple: callers only need to provide what is actually
strictly necessary to make a diagnostic.
This will enable us to provide an API on `LintDiagnosticGuard` for
setting the primary annotation message. It will require an `unwrap()`,
but due to how `LintDiagnosticGuard` will build a `Diagnostic`, this
`unwrap()` will be guaranteed to succeed. (And it won't bubble out to
every user of `LintDiagnosticGuard`.)
This is the payoff from removing a bit of indirection. The types still
exist, but now callers don't need to do builder -> reporter ->
diagnostic. They can just conceptually think of it as builder ->
diagnostic.
We're going to make the guards deref to `Diagnostic` in order to remove
a layer of indirection in the reporter API. (Well, technically the layer
is not removed since the types still exist, but in actual _usage_ the
layer will be removed. We'll see how it shakes out in the next commit.)
This expands the set of types accepted for diagnostic messages. Instead
of only `std::fmt::Display` impls, we now *also* accept a concrete
`DiagnosticMessage`.
This will be useful to avoid unnecessary copies of every single
diagnostic message created via `InferContext::report_lint`. (I'll call
out how this helps in a subsequent commit.)
## Summary
Infer precise Boolean literal types for `str.startswith` calls where the
instance and the prefix are both string literals. This allows us to
understand `sys.platform.startswith(…)` branches.
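In mdtest-style terms, the new inference looks roughly like this:
```py
reveal_type("linux".startswith("lin"))  # revealed: Literal[True]
reveal_type("linux".startswith("win"))  # revealed: Literal[False]
```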
## Test Plan
New Markdown tests
Summary
--
This PR reimplements [yield-outside-function
(F704)](https://docs.astral.sh/ruff/rules/yield-outside-function/) as a
semantic syntax error. Despite the name, this rule covers `yield from`
and `await` in addition to `yield`.
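A sketch of what the rule flags (`something` is a placeholder; the last line is covered despite the rule's name):
```py
def gen():
    yield 1  # fine: inside a function

yield 2              # error: `yield` outside a function
yield from range(3)  # error: `yield from` outside a function
await something()    # error: also reported by this rule, despite its name
```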
Test Plan
--
New linter tests, along with the existing F704 test.
---------
Co-authored-by: Dhruv Manilawala <dhruvmanila@gmail.com>
Summary
--
This PR replaces uses of version-dependent imports from `typing` or
`typing_extensions` with a centralized `Checker::import_from_typing`
method.
The idea here is to make the fix for #9761 (whatever it ends up being)
applicable to all of the rules performing similar checks.
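The kind of version-dependent import these rules deal with looks roughly like this (using `Self` purely as an example symbol; it was added to `typing` in 3.11):
```py
import sys

if sys.version_info >= (3, 11):
    from typing import Self
else:
    from typing_extensions import Self
```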
Test Plan
--
Existing tests for the affected rules.
## Summary
For silencing `invalid-type-form` diagnostics in unreachable code, we
use the same approach as before and check the reachability that
we already record.
For silencing `invalid-bases`, we simply check if the type of the base
is `Never`. If so, we silence the diagnostic with the argument that the
class construction would never happen.
## Test Plan
Updated Markdown tests.
## Summary
Similar to what we did for `unresolved-reference` and
`unresolved-attribute`, we now also silence `unresolved-import`
diagnostics if the corresponding `import` statement is unreachable.
This addresses the (already closed) issue #17049.
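A sketch of the pattern this silences, assuming we're checking for a non-Windows platform (the module name is made up):
```py
import sys

if sys.platform == "win32":
    # Unreachable for the configured platform, so no `unresolved-import` is
    # emitted even though the module can't be resolved here.
    from windows_only_dependency import something
```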
## Test Plan
Adapted Markdown tests.
I added this accessor because tests want it, but we can also use it in
other places internally. It's a little nicer because it does the
`as_deref()` for you.
This finally completes the deletion of all old diagnostic types.
We do this by migrating the second (and last) use of secondary
diagnostic messages: to highlight the return type of a function
definition when its return value is inconsistent with the type.
Like the last diagnostic, we do actually change the message here a bit.
We don't need a sub-diagnostic here, and we can instead just add a
secondary annotation to highlight the return type.
This is the first use of the new `lint()` reporter.
I somewhat skipped a step here and also modified the actual diagnostic
message itself. The snapshots should tell the story.
We couldn't do this before because we had no way of differentiating
between "message for the diagnostic as a whole" and "message for a
specific code annotation." Now we can, so we can write more precise
messages based on the assumption that users are also seeing the code
snippet.
The downside here is that the actual message text can become quite vague
in the absence of the code snippet. This occurs, for example, with
concise diagnostic formatting. It's unclear if we should do anything
about it. I don't really see a way to make it better that doesn't
involve creating diagnostics with messages for each mode, which I think
would be a major PITA.
The upside is that this code gets a bit simpler, and we very
specifically avoid doing extra work if this specific lint is disabled.
This required a bit of surgery in the diagnostic matching and more
faffing about using a "concise" message from a diagnostic instead of
only printing the "primary" message.
In the new diagnostic data model, we really should have a main
diagnostic message *and* a primary span (with an optional message
attached to it) for every diagnostic.
In this commit, I try to make this true for the "revealed type"
diagnostic. Instead of the annotation saying both "revealed type is"
and also the revealed type itself, the annotation is now just the
revealed type and the main diagnostic message is "Revealed type."
I expect this may be controversial. I'm open to doing something
different. I tried to avoid redundancy, but maybe this is a special case
where we want the redundancy. I'm honestly not sure. I do *like* how it
looks with this commit, but I'm not working with Red Knot's type
checking daily, so my opinion doesn't count for much.
This did also require some tweaking to concise diagnostic formatting in
order to preserve the essential information.
This commit doesn't update every relevant snapshot. Just a few. I split
the rest out into the next commit.
... and replace it with use of `report()`.
Interestingly, this is the only instance of `report_diagnostic` used
directly, and thus anticipated to be the only instance of using
`report()`. If this ends up being a true single use method, we could
make it less generic and tailored specifically to "reveal type."
Two other things to note:
I left the "primary message" as empty. This avoids changing snapshots.
I address this in a subsequent commit.
The creation of a diagnostic here is a bit verbose/annoying. Certainly
more so than it was. This is somewhat expected since our diagnostic
model is more expressive and because we don't have a proc macro. I
avoided creating helpers for this case since there's only one use of
`report()`. But I expect to create helpers for the `lint()` case.
This is a surgical change that adds new `report()` and `lint()`
APIs to `InferContext`. These are intended to replace the existing
`report_*` APIs.
The comments should explain what these reporters are meant to do. For
the most part, this is "just" shuffling some code around. The actual
logic for determining whether a lint *should* be reported or not remains
unchanged and we don't make any changes to how a `Diagnostic` is
actually constructed (yet).
I initially tried to just use `LintReporter` and `DiagnosticReporter`
without the builder types, since I perceive the builder types to be an
annoying additional layer. But I found it also exceedingly annoying to
have to construct and provide the diagnostic message before you even
know if you are going to build the diagnostic. I also felt like this
could result in potentially unnecessary and costly querying in some
cases, although this is somewhat hand wavy. So I overall felt like the
builder route was the way to go. If the builders end up being super
annoying, we can probably add convenience APIs for common patterns to
paper over them.
## Summary
Basically just repeat the same thing that we did for
`unresolved-reference`, but now for attribute expressions.
We now also handle the case where the unresolved attribute (or the
unresolved reference) diagnostic originates from a stringified type
annotation.
And I made the evaluation of reachability constraints lazy (will only be
evaluated right before we are about to emit a diagnostic).
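A sketch of both cases, assuming a non-Windows target platform (the names are placeholders):
```py
import sys

class Config: ...

if sys.platform == "win32":
    # Unreachable for the configured platform, so neither diagnostic fires:
    print(Config.windows_only_field)  # would otherwise be `unresolved-attribute`
    x: "WindowsConfig" = None  # unresolved name inside a stringified annotation
```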
## Test Plan
New Markdown tests for stringified annotations.
I merged #17149 without checking the ecosystem results, and it still
caused a cycle panic in pybind11. Reverting for now until I fix that, so
we don't lose the ecosystem signal on other PRs.
This causes spurious query cycles.
This PR also includes an update to Salsa, which gives us db events on
cycle iteration, so we can write tests asserting the absence of a cycle.
## Summary
Track the reachability of nested scopes within their parent scopes. We
use this as an additional requirement for emitting
`unresolved-reference` diagnostics (and in the future,
`unresolved-attribute` and `unresolved-import`). This means that we only
emit `unresolved-reference` for a given use of a symbol if the use
itself is reachable (within its own scope), *and if the scope itself is
reachable*. For example, no diagnostic should be emitted for the use of
`x` here:
```py
if False:
    x = 1

    def f():
        print(x)  # this use of `x` is reachable inside the `f` scope,
                  # but the whole `f` scope is not reachable.
```
There are probably more fine-grained ways of solving this problem, but
they require a more sophisticated understanding of nested scopes (see
#15777, in particular
https://github.com/astral-sh/ruff/issues/15777#issuecomment-2788950267).
But it doesn't seem completely unreasonable to silence *this specific
kind of error* in unreachable scopes.
## Test Plan
Observed changes in reachability tests and ecosystem.
## Summary
Update Salsa to pull in https://github.com/salsa-rs/salsa/pull/788 which
fixes the, by now, famous *access to field whilst the value is being
initialized* panic.
This PR also re-enables all tests that previously triggered the panic.
## Test Plan
`cargo test`
## Summary
There is a new official URL for the typing documentation:
https://typing.python.org/
Change all https://typing.readthedocs.io/ links to use the new
subdomain, which is slightly shorter and looks more official.
## Test Plan
Tested to see if each and every new URL is accessible. I noticed that
some links go to https://typing.python.org/en/latest/source/stubs.html
which seems to be outdated, but that is a separate issue. The same page
shows up for the old URL.
## Summary
Based on the discussion in
https://github.com/astral-sh/ruff/pull/17298#discussion_r2033975460, we
decided to move the scope handling out of the `SemanticSyntaxChecker`
and into the `SemanticSyntaxContext` trait. This PR implements that
refactor by:
- Reverting all of the `Checkpoint` and `in_async_context` code in the
`SemanticSyntaxChecker`
- Adding four new methods to the `SemanticSyntaxContext` trait:
  - `in_async_context`: matches `SemanticModel::in_async_context` and only
    detects the nearest enclosing function
  - `in_sync_comprehension`: uses the new `is_async` tracking on
    `Generator` scopes to detect any enclosing sync comprehension
  - `in_module_scope`: reports whether we're at the top-level scope
  - `in_notebook`: reports whether we're in a Jupyter notebook
- In-lining the `TestContext` directly into the
`SemanticSyntaxCheckerVisitor`
  - This allows modifying the context as the visitor traverses the AST,
    which wasn't possible before
One potential question here is "why not add a single method returning a
`Scope` or `Scopes` to the context?" The main reason is that the `Scope`
type is defined in the `ruff_python_semantic` crate, which is not
currently a dependency of the parser. It also doesn't appear to be used
in red-knot. So it seemed best to use these more granular methods
instead of trying to access `Scope` in `ruff_python_parser` (and
red-knot).
## Test Plan
Existing parser and linter tests.
## Summary
An attribute check was missing in the previous implementation, e.g.:
```python
from airflow.api.auth.backend import basic_auth
basic_auth.auth_current_user
```
This PR adds this kind of check.
## Test Plan
The test case has been added to the bottom of the existing test
fixtures, confirmed to be correct, and later reorganized.
Summary
--
This PR extends the documentation of the `LoadBeforeGlobalDeclaration`
check to specify the behavior on versions of Python before 3.13. Namely,
on Python 3.12, the `else` clause of a `try` statement is visited before
the `except` handlers:
```pycon
Python 3.12.9 (main, Feb 12 2025, 14:50:50) [Clang 19.1.6 ] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 10
>>> def g():
... try:
... 1 / 0
... except:
... a = 1
... else:
... global a
...
>>> def f():
... try:
... pass
... except:
... global a
... else:
... print(a)
...
File "<stdin>", line 5
SyntaxError: name 'a' is used prior to global declaration
```
The order is swapped on 3.13 (see
[CPython#111123](https://github.com/python/cpython/issues/111123)):
```pycon
Python 3.13.2 (main, Feb 5 2025, 08:05:21) [GCC 14.2.1 20250128] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 10
... def g():
... try:
... 1 / 0
... except:
... a = 1
... else:
... global a
...
File "<python-input-0>", line 8
global a
^^^^^^^^
SyntaxError: name 'a' is assigned to before global declaration
>>> def f():
... try:
... pass
... except:
... global a
... else:
... print(a)
...
>>>
```
The current implementation of PLE0118 is correct for 3.13 but not 3.12:
[playground](https://play.ruff.rs/d7467ea6-f546-4a76-828f-8e6b800694c9)
(it flags the first case regardless of Python version).
We decided to maintain this incorrect diagnostic for Python versions
before 3.13 because the pre-3.13 behavior is very unintuitive and
confirmed to be a bug, although the bug fix was not backported to
earlier versions. This can lead to false positives and false negatives
for pre-3.13 code, but we also expect that to be very rare, as
demonstrated by the ecosystem check (before the version-dependent check
was reverted here).
Test Plan
--
N/a
This PR lets you explicitly specialize a generic class using a subscript
expression. It introduces three new Rust types for representing classes:
- `NonGenericClass`
- `GenericClass` (not specialized)
- `GenericAlias` (specialized)
and two enum wrappers:
- `ClassType` (a non-generic class or generic alias, represents a class
_type_ at runtime)
- `ClassLiteralType` (a non-generic class or generic class, represents a
class body in the AST)
We also add internal support for specializing callables, in particular
function literals. (That is, the internal `Type` representation now
attaches an optional specialization to a function literal.) This is used
in this PR for the methods of a generic class, but should also give us
most of what we need for specializing generic _functions_ (which this PR
does not yet tackle).
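For example (a sketch using PEP 695 syntax; exact inference details may differ):
```py
class Box[T]:
    def __init__(self, item: T) -> None:
        self.item = item

    def get(self) -> T:
        return self.item

int_box = Box[int](1)       # explicit specialization via a subscript expression
reveal_type(int_box.get())  # expected: `int` -- the method is specialized along with the class
```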
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
* Simplify match conditions in AIR301
* Fix
* `airflow.datasets.manager.DatasetManager` →
`airflow.assets.manager.AssetManager`
* `airflow.www.auth.has_access_dataset` →
`airflow.www.auth.has_access_dataset`
## Test Plan
The test fixture has been updated accordingly
## Summary
As discussed in
https://github.com/astral-sh/ruff/issues/14626#issuecomment-2766146129,
we decided to separate suggested changes from required changes.
The following symbols have been moved to AIR312 from AIR302. They still
work in Airflow 3.0, but changing them is suggested, as they're
expected to be removed in a future version.
```python
from airflow.hooks.filesystem import FSHook
from airflow.hooks.package_index import PackageIndexHook
from airflow.hooks.subprocess import (SubprocessHook, SubprocessResult, working_directory)
from airflow.operators.bash import BashOperator
from airflow.operators.datetime import BranchDateTimeOperator, target_times_as_dates
from airflow.operators.trigger_dagrun import TriggerDagRunLink, TriggerDagRunOperator
from airflow.operators.empty import EmptyOperator
from airflow.operators.latest_only import LatestOnlyOperator
from airflow.operators.python import (BranchPythonOperator, PythonOperator, PythonVirtualenvOperator, ShortCircuitOperator)
from airflow.operators.weekday import BranchDayOfWeekOperator
from airflow.sensors.date_time import DateTimeSensor, DateTimeSensorAsync
from airflow.sensors.external_task import ExternalTaskMarker, ExternalTaskSensor, ExternalTaskSensorLink
from airflow.sensors.filesystem import FileSensor
from airflow.sensors.time_sensor import TimeSensor, TimeSensorAsync
from airflow.sensors.time_delta import TimeDeltaSensor, TimeDeltaSensorAsync, WaitSensor
from airflow.sensors.weekday import DayOfWeekSensor
from airflow.triggers.external_task import DagStateTrigger, WorkflowTrigger
from airflow.triggers.file import FileTrigger
from airflow.triggers.temporal import DateTimeTrigger, TimeDeltaTrigger
```
## Test Plan
The test fixture has been updated accordingly
---------
Co-authored-by: Brent Westbrook <36778786+ntBre@users.noreply.github.com>
## Summary
As discussed in
https://github.com/astral-sh/ruff/issues/16983, this "mitigates" said issue for the alpha.
This PR changes the default for `PythonPlatform` to be the current
platform rather than `all`.
I'm not sure if we should be as sophisticated as supporting `ios` and
`android` as defaults but it was easy...
## Test Plan
Updated Markdown tests.
---------
Co-authored-by: David Peter <mail@david-peter.de>
## Summary
This is a new test case that I don't know how to handle yet. It leads to
many false positives in `rich/tests/test_win32_console.py`, which does
something like:
```py
if sys.platform == "win32":
    from windows_only_module import some_symbol

    some_other_symbol = 1

def some_test_case():
    use(some_symbol)  # Red Knot: unresolved-reference
    use(some_other_symbol)  # Red Knot: unresolved-reference
```
Also adds a test for using unreachable symbols in type annotations or as
class bases.
## Summary
* Addresses #16511 for simple cases where only the `__init__` method is
bound on the class or doesn't exist at all.
* Fixes a bug with argument counting in bound-method diagnostics

Caveats:
* No handling of `__new__` or a modified `__call__` on the metaclass.
* This leads to a couple of false-positive errors in tests
## Test Plan
- A couple new cases in mdtests
- cargo nextest run -p red_knot_python_semantic --no-fail-fast
---------
Co-authored-by: Carl Meyer <carl@astral.sh>
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
This fix closes #16868
I noticed the issue is assigned, but the assignee appears to be actively
working on another pull request. I hope that’s okay!
## Summary
As of Python 3.11.1, `enum.auto()` can be used in multiple assignments.
This pattern should not trigger the non-unique-enums check.
Reference: [Python docs on
enum.auto()](https://docs.python.org/3/library/enum.html#enum.auto)
This fix updates the check logic to skip enum variant statements where
the right-hand side is a tuple containing a call to `enum.auto()`.
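A minimal sketch of the kind of member definition that should no longer be flagged (the names are illustrative):
```py
import enum

class Color(enum.Enum):
    # Each member's value is a tuple containing a distinct `enum.auto()` result, so
    # the members are unique even though the tuples look identical in the source.
    RED = enum.auto(), "colour"
    GREEN = enum.auto(), "colour"
```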
## Test Plan
The added test case uses the example from the original issue. It
previously triggered a false positive, but now passes successfully.
Summary
--
Detect async comprehensions nested in sync comprehensions in async
functions before Python 3.11, when this was [changed].
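For example (a sketch; before 3.11, CPython rejects the nested async comprehension below):
```py
async def elements(values):
    for value in values:
        yield value

async def f():
    # On Python < 3.11 this is a syntax error: an async comprehension may not be
    # nested inside a synchronous comprehension, even within an async function.
    return [[x async for x in elements(row)] for row in [[1, 2], [3]]]
```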
The actual logic of this rule is very straightforward, but properly
tracking the async scopes took a bit of work. An alternative to the
current approach is to offload the `in_async_context` check into the
`SemanticSyntaxContext` trait, but that actually required much more
extensive changes to the `TestContext` and also to ruff's semantic
model, as you can see in the changes up to
31554b473507034735bd410760fde6341d54a050. This version has the benefit
of mostly centralizing the state tracking in `SemanticSyntaxChecker`,
although there was some subtlety around deferred function body traversal
that made the changes to `Checker` more intrusive too (hence the new
linter test).
The `Checkpoint` struct/system is obviously overkill for now since it's
only tracking a single `bool`, but I thought it might be more useful
later.
[changed]: https://github.com/python/cpython/issues/77527
Test Plan
--
New inline tests and a new linter integration test.
## Summary
### Improvement
Expand the following moved modules into individual symbols.
* airflow.triggers.temporal
* airflow.triggers.file
* airflow.triggers.external_task
* airflow.hooks.subprocess
* airflow.hooks.package_index
* airflow.hooks.filesystem
* airflow.sensors.weekday
* airflow.sensors.time_delta
* airflow.sensors.time_sensor
* airflow.sensors.date_time
* airflow.operators.weekday
* airflow.operators.datetime
* airflow.operators.bash
This removes `Replacement::ImportPathMoved`.
### Fix
During the expansion, the following paths were also fixed:
* airflow.sensors.s3_key_sensor.S3KeySensor →
airflow.providers.amazon.aws.sensors.S3KeySensor
* airflow.operators.sql.SQLThresholdCheckOperator →
airflow.providers.common.sql.operators.sql.SQLThresholdCheckOperator
* airflow.hooks.druid_hook.DruidDbApiHook →
airflow.providers.apache.druid.hooks.druid.DruidDbApiHook
* airflow.hooks.druid_hook.DruidHook →
airflow.providers.apache.druid.hooks.druid.DruidHook
* airflow.kubernetes.pod_generator.extend_object_field →
airflow.providers.cncf.kubernetes.pod_generator.extend_object_field
* airflow.kubernetes.pod_launcher.PodLauncher →
airflow.providers.cncf.kubernetes.pod_launcher_deprecated.PodLauncher
* airflow.kubernetes.pod_launcher.PodStatus →
airflow.providers.cncf.kubernetes.pod_launcher_deprecated.PodStatus
* airflow.kubernetes.pod_generator.PodDefaults →
airflow.providers.cncf.kubernetes.pod_generator.PodDefaults
* airflow.kubernetes.pod_launcher_deprecated.PodDefaults →
airflow.providers.cncf.kubernetes.pod_launcher_deprecated.PodDefaults
### Refactor
As many symbols are moved into the same module,
`SourceModuleMovedToProvider` is introduced to group similar logic.
## Test Plan
Summary
--
This PR extends the checks in #17101 and #17282 to annotated assignments
after Python 3.13.
Currently stacked on #17282 to include `await`.
Test Plan
--
New inline tests. These are simpler than the other cases because there's
no place to put generics.
Summary
--
This PR extends the changes in #17101 to include `await` in the same
positions.
I also renamed the `valid_annotation_function` test to include `_py313`
and explicitly passed a Python version to contrast it with the `_py314`
version.
Test Plan
--
New test cases added to existing files.
## Summary
We already have partial "support" for `assert_never`, because it is
annotated as
```pyi
def assert_never(arg: Never, /) -> Never: ...
```
in typeshed. So we already emit an `invalid-argument-type` diagnostic if
the argument type to `assert_never` is not assignable to `Never`.
That is not enough, however. Gradual types like `Any`, `Unknown`,
`@Todo(…)` or `Any & int` can be assignable to `Never`, which means that
we didn't issue any diagnostic in those cases.
Also, it seems like `assert_never` deserves a dedicated diagnostic
message, not just a generic "invalid argument type" error.
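A sketch of the behavior difference (using `typing_extensions.assert_never`):
```py
from typing import Any
from typing_extensions import assert_never

def exhaustive(value: int | str) -> None:
    if isinstance(value, int):
        ...
    elif isinstance(value, str):
        ...
    else:
        assert_never(value)  # fine: `value` has type `Never` here

def not_exhaustive(value: Any) -> None:
    assert_never(value)  # now flagged: `Any` is assignable to `Never`, but it is not `Never`
```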
## Test Plan
New Markdown tests.
This fix closes #17026
## Summary
The check for the `PytestRaisesTooBroad` rule is now skipped if there is
a second positional argument present, which means `pytest.raises` is
used as a function.
## Test Plan
Tested on the example from the issue, which now passes the check.
```python
pytest.raises(Exception, func, *func_args, **func_kwargs).match("error message")
```
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
Resolves #17289.
After this change, Red Knot will no longer show types on hover for
`None`, `...`, `True`, `False`, numbers, strings (but not f-strings),
and bytes literals.
## Test Plan
Unit tests.
## Summary
This implements a new approach to silencing `unresolved-reference`
diagnostics by keeping track of the reachability of each use of a
symbol. The changes merged in
https://github.com/astral-sh/ruff/pull/17169 are still needed for the
"Use of variable in nested function" test case, but that could also be
solved in another way eventually (see
https://github.com/astral-sh/ruff/issues/15777). We can use the same
technique to silence `unresolved-import` and `unresolved-attribute`
false-positives, but I think this could be merged in isolation.
## Test Plan
New Markdown tests, ecosystem tests
## Summary
The latency-sensitive priority is reserved for actions that need to run
immediately because they would otherwise block the user's action. An
example of this is a format request: VS Code blocks the editor until the
save action is complete. That's why formatting a document is very
sensitive to delays, and it's important that we always have a worker
thread available to run a format request *immediately*. Another example
is code completions, where it's important that they appear immediately
when the user types.
On the other hand, showing diagnostics, hover, or inlay hints has high
priority, but users are used to the editor taking a few milliseconds to
compute the overlay.
Computing this information can also be expensive (e.g. find-all-references),
blocking the worker for quite some time (a few hundred milliseconds).
That's why it's important that those requests don't clog the
latency-sensitive worker threads.
Closes #17042
## Summary
This PR fixes the issue outlined in #17042 where RUF100 (unused-noqa)
fails to detect unused file-level noqa directives (`# ruff: noqa` or `#
ruff: noqa: {code}`).
The issue stems from two underlying causes:
1. For blanket file-level directives (`# ruff: noqa`), there's a
circular dependency: the directive exempts all rules including RUF100
itself, which prevents checking for usage. This isn't changed by this
PR. I would argue it is intended behavior - a blanket `# ruff: noqa`
directive should exempt all rules including RUF100 itself.
2. For code-specific file-level directives (e.g. `# ruff: noqa: F841`),
the handling was missing in the `check_noqa` function. This is added in
this PR (see the sketch after this list).
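A minimal example of the newly handled case:
```py
# ruff: noqa: F841
# Nothing in this file triggers F841, so RUF100 now reports the directive above
# as unused.
print("hello")
```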
## Notes
- For file-level directives, the `matches` array is pre-populated with
the specified codes during parsing, unlike line-level directives which
only populate their `matches` array when actually suppressing
diagnostics. This difference requires the somewhat clunky handling of
both cases. I would appreciate guidance on a cleaner design :)
- A more fundamental solution would be to change how file-level
directives initialize the `matches` array in
`FileNoqaDirectives::extract()`, but that requires more substantial
changes as it breaks existing functionality. I suspect discussions in
#16483 are relevant for this.
## Test Plan
- Local verification
- Added a test case and fixture
## Summary
Some of the migration rules have been changed during Airflow 3
development. The following are new AIR302 rules; the corresponding
AIR301 entries have also been removed.
* airflow.sensors.external_task_sensor.ExternalTaskMarker →
airflow.providers.standard.sensors.external_task.ExternalTaskMarker
* airflow.sensors.external_task_sensor.ExternalTaskSensor →
airflow.providers.standard.sensors.external_task.ExternalTaskSensor
* airflow.sensors.external_task_sensor.ExternalTaskSensorLink →
airflow.providers.standard.sensors.external_task.ExternalTaskSensorLink
* airflow.sensors.time_delta_sensor.TimeDeltaSensor →
airflow.providers.standard.sensors.time_delta.TimeDeltaSensor
* airflow.operators.dagrun_operator.TriggerDagRunLink →
airflow.providers.standard.operators.trigger_dagrun.TriggerDagRunLink
* airflow.operators.dagrun_operator.TriggerDagRunOperator →
airflow.providers.standard.operators.trigger_dagrun.TriggerDagRunOperator
* airflow.operators.python_operator.BranchPythonOperator →
airflow.providers.standard.operators.python.BranchPythonOperator
* airflow.operators.python_operator.PythonOperator →
airflow.providers.standard.operators.python.PythonOperator
* airflow.operators.python_operator.PythonVirtualenvOperator →
airflow.providers.standard.operators.python.PythonVirtualenvOperator
* airflow.operators.python_operator.ShortCircuitOperator →
airflow.providers.standard.operators.python.ShortCircuitOperator
* airflow.operators.latest_only_operator.LatestOnlyOperator →
airflow.providers.standard.operators.latest_only.LatestOnlyOperator
* airflow.sensors.date_time_sensor.DateTimeSensor →
airflow.providers.standard.sensors.DateTimeSensor
* airflow.operators.email_operator.EmailOperator →
airflow.providers.smtp.operators.smtp.EmailOperator
* airflow.operators.email.EmailOperator →
airflow.providers.smtp.operators.smtp.EmailOperator
* airflow.operators.bash.BashOperator →
airflow.providers.standard.operators.bash.BashOperator
* airflow.operators.EmptyOperator →
airflow.providers.standard.operators.empty.EmptyOperator
closes: https://github.com/astral-sh/ruff/issues/17103
## Test Plan
The test fixture has been updated and checked after each change and
later reorganized in the latest commit
## Summary
This PR adds support for stub packages, except for partial stub packages
(a stub package is always considered non-partial).
I read the specification at
[typing.python.org/en/latest/spec/distributing.html#stub-only-packages](https://typing.python.org/en/latest/spec/distributing.html#stub-only-packages)
but I found it lacking some details, especially on how to handle
namespace packages or when the regular and stub packages disagree on
whether they're namespace packages. I tried to document my decisions in
the mdtests where the specification isn't clear and compared the
behavior to Pyright.
Mypy seems to only support stub packages in the venv folder. At least,
it never picked up my stub packages otherwise. I decided not to spend
too much time fighting mypy, which is why I focused the comparison
on Pyright.
Closes https://github.com/astral-sh/ruff/issues/16612
## Test Plan
Added mdtests
## Summary
Some more edge cases that I thought of while working on integrating
knowledge of statically known branches into the `*`-import machinery.
`cargo test -p red_knot_python_semantic`
## Summary
Use more local `expect(dead_code)` suppressions instead of a global
`allow(dead_code)` in `lib.rs`.
Remove some methods that are either easy to add back later, less likely
to be needed for red knot, or where it's unclear if we'd add them the
same way as in ruff.
## Summary
This PR does the following things:
- Fixes the `python` configuration setting for mdtest (added in
https://github.com/astral-sh/ruff/pull/17221) so that it expects a path
pointing to a venv's `sys.prefix` variable rather than a path
pointing to the venv's `site-packages` subdirectory. This brings the
`python` setting in mdtest in sync with our CLI `--python` flag.
- Tweaks mdtest so that it automatically creates a valid `pyvenv.cfg`
file for you if you don't specify one. This makes it much more ergonomic
to write an mdtest with a custom `python` setting: red-knot will reject
a `python` setting that points to a directory that doesn't have a
`pyvenv.cfg` file in it
- Tweaks mdtest so that it doesn't check a custom `pyvenv.cfg` as Python
source code if you _do_ add a custom `pyvenv.cfg` file for your mock
virtual environment in an mdtest. (You get a lot of diagnostics about
Python syntax errors in the `pyvenv.cfg` file, otherwise!)
- Rewrites the test added in
https://github.com/astral-sh/ruff/pull/17178 as an mdtest, and deletes
the original test that was added in that PR
## Test Plan
I verified that the new mdtest fails if I revert the changes to
`resolver.rs` that were added in
https://github.com/astral-sh/ruff/pull/17178
## Summary
This PR extends the mdtest options to allow setting the
`environment.python` option.
## Test Plan
I let @AlexWaygood write a test and he'll tell me if it works 😆
## Summary
This PR fixes the cycle issue that was causing problems in the `support
super` PR.
### Affected queries
- `all_narrowing_constraints_for_expression`
- `all_negative_narrowing_constraints_for_expression`
--
Additionally, `bidict` and `werkzeug` have been added to the
project-selection list in `mypy_primer`.
This PR also addresses the panics that occurred while analyzing those
packages:
- `bidict`: panic triggered by
`all_narrowing_constraints_for_expression`
- `werkzeug`: panic triggered by
`all_negative_narrowing_constraints_for_expression`
I think the mypy-primer results for this PR can serve as a sufficient
test :)
Summary
--
This PR fixes the issue pointed out by @JelleZijlstra in
https://github.com/astral-sh/ruff/pull/17101#issuecomment-2777480204.
Namely, I conflated two very different errors from CPython:
```pycon
>>> def m[T](x: (yield from 1)): ...
File "<python-input-310>", line 1
def m[T](x: (yield from 1)): ...
^^^^^^^^^^^^
SyntaxError: yield expression cannot be used within the definition of a generic
>>> def m(x: (yield from 1)): ...
File "<python-input-311>", line 1
def m(x: (yield from 1)): ...
^^^^^^^^^^^^
SyntaxError: 'yield from' outside function
>>> def outer():
... def m(x: (yield from 1)): ...
...
>>>
```
I thought the second error was the same as the first, but `yield` (and
`yield from`) is actually valid in this position when inside a function
scope. The same is true for base classes, as pointed out in the original
comment.
We don't currently raise an error for `yield` outside of a function, but
that should be handled separately.
On the upside, this had the benefit of removing the
`InvalidExpressionPosition::BaseClass` variant and the
`allow_named_expr` field from the visitor because they were both no
longer used.
Test Plan
--
Updated inline tests.
Fixes: #17196
## Summary
Skipping these nodes for malformed type expressions would lead to an
incorrect semantic state, which can in turn mean we emit false positives
for rules like `unused-variable` (`F841`).
## Test Plan
`cargo nextest run`
## Summary
This is a follow-up to the goto type definition PR. Specifically, we
want to avoid exposing too many semantic model internals publicly.
I want to get some feedback on the approach taken. I think it goes in
the right direction but I'm not super happy with it.
The basic idea is that we add a `Type::definition` method which does the
"goto type definition". The parts that I think make it awkward:
* We can't directly return `Definition` because we don't create a
`Definition` for modules (but we could?). Although I think it makes
sense to possibly have a more public wrapper type anyway?
* It doesn't handle unions and intersections. Mainly because not all
elements in an intersection may have a definition and we only want to
show a navigation target for intersections if there's only a single
positive element (besides maybe `Unknown`).
An alternative design, or an addition to this design, is to introduce a
`SemanticAnalysis(Db)` struct that has methods like
`type_definition(&self, type)` which explicitly exposes the methods we
want. I don't feel comfortable designing this API yet because it's unclear
how fine-grained it has to be (and if it is very fine-grained,
directly using `Type` might be better after all).
## Test Plan
`cargo test`
Closes #17084
## Summary
This PR adds a new rule (RUF102) to detect and fix invalid rule codes in
`noqa` comments.
Invalid rule codes in `noqa` directives serve no purpose and may
indicate outdated code suppressions.
This extends the previous behaviour originating from
`crates/ruff_linter/src/noqa.rs`, which would only emit a warning.
With this rule, a `--fix` is available.
The rule:
1. Analyzes all `noqa` directives to identify invalid rule codes
2. Provides autofix functionality to:
   - Remove the entire comment if all codes are invalid
   - Remove only the invalid codes when mixed with valid codes
3. Preserves original comment formatting and whitespace where possible
Example cases:
- `# noqa: XYZ111` → Remove entire comment (keep empty line)
- `# noqa: XYZ222, XYZ333` → Remove entire comment (keep empty line)
- `# noqa: F401, INVALID123` → Keep only valid codes (`# noqa: F401`)
## Test Plan
- Added tests in
`crates/ruff_linter/resources/test/fixtures/ruff/RUF102.py` covering
different example cases.
## Notes
- This does not handle cases where parsing fails. E.g. `# noqa:
NON_EXISTENT, ANOTHER_INVALID` causes a `LexicalError`, so the
diagnostic is not propagated and we cannot handle it. I am
also unsure what proper `fix` handling would be; making the user
aware that we don't understand the codes is probably the best bet.
- The rule is added to the Preview rule group as it's a new addition
## Questions
- Should we remove the warnings, now that we have a rule?
- Is the current fix behavior appropriate for all cases, particularly
the handling of whitespace and line deletions?
- I'm new to the codebase; let me know if there are rule utilities I
could have used but didn't.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
For two non-disjoint types `P` and `Q`, the simplification of `(P | Q) &
~Q` is not `P`, but `P & ~Q`. In other words, the non-empty set `P & Q`
is also excluded from the type.
The same applies for a constrained typevar `[T: (P, Q)]`: `T & ~Q`
should simplify to `P & ~Q`, not just `P`.
Implementing this is actually purely a matter of removing code from the
constrained typevar simplification logic; we just need to not bother
removing the negations. If the negations are actually redundant (because
the constraint types are disjoint), normal intersection simplification
will already eliminate them (as shown in the added test.)
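A hypothetical narrowing example with non-disjoint `P = int` and `Q = Hashable` that shows why the negation must be kept:
```py
from collections.abc import Hashable

def f(x: int | Hashable) -> None:
    if not isinstance(x, Hashable):
        # The narrowed type here is `(int | Hashable) & ~Hashable`. Simplifying it
        # to plain `int` would be wrong: every `int` is `Hashable`, so the correct
        # result keeps the negation (`int & ~Hashable`, which collapses to `Never`).
        ...
```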
Summary
--
This PR detects the use of invalid syntax in annotation scopes,
including
`yield` and `yield from` expressions and named expressions. I combined a
few
different types of CPython errors here, but I think the resulting error
messages
still make sense and are even preferable to what CPython gives. For
example, we
report `yield expression cannot be used in a type annotation` for both
of these:
```pycon
>>> def f[T](x: (yield 1)): ...
File "<python-input-26>", line 1
def f[T](x: (yield 1)): ...
^^^^^^^
SyntaxError: yield expression cannot be used within the definition of a generic
>>> def foo() -> (yield x): ...
File "<python-input-28>", line 1
def foo() -> (yield x): ...
^^^^^^^
SyntaxError: 'yield' outside function
```
Fixes https://github.com/astral-sh/ruff/issues/11118.
Test Plan
--
New inline tests, along with some updates to existing tests.
Summary
--
Detects duplicate attributes in a `match` class pattern:
```python
match x:
case Class(x=1, x=2): ...
```
These are more analogous to the similar check for mapping patterns than
to the multiple-assignments rule.
I also realized that both this and the mapping check would only work on
top-level patterns, despite the possibility that they can be nested
inside other
patterns:
```python
match x:
case [{"x": 1, "x": 2}]: ... # false negative in the old version
```
and moved these checks into the recursive pattern visitor instead.
I also tidied up some of the names like the `multiple_case_assignment`
function
and the `MultipleCaseAssignmentVisitor`, which are now doing more than
checking
for multiple assignments.
Test Plan
--
New inline tests for both classes and mappings.
Summary
--
Fixes #17181. The cases being tested with multiple *keys* being equal
are actually a slightly different error, more like the error for
`MatchMapping` than like the other multiple assignment errors:
```pycon
>>> match x:
... case Class(x=x, x=x): ...
...
File "<python-input-249>", line 2
case Class(x=x, x=x): ...
^
SyntaxError: attribute name repeated in class pattern: x
>>> match x:
... case {"x": 1, "x": 2}: ...
...
File "<python-input-251>", line 2
case {"x": 1, "x": 2}: ...
^^^^^^^^^^^^^^^^
SyntaxError: mapping pattern checks duplicate key ('x')
>>> match x:
... case [x, x]: ...
...
File "<python-input-252>", line 2
case [x, x]: ...
^
SyntaxError: multiple assignments to name 'x' in pattern
```
This PR just stops the false positive reported in the issue, but I will
quickly follow it up with a new rule (or possibly combined with the
mapping rule) catching the repeated attributes separately.
Test Plan
--
New inline `test_ok` cases, and updated `test_err` cases to have
duplicate values instead of keys.
## Summary
It turns out that `a.` isn't a list format supported by rustdoc. I
changed the documentation to use `1.`, `2.` instead.
## Test Plan
`cargo clippy`
This adds a new `Type` variant for holding an instance of a typevar
inside of a generic function or class. We don't handle specializing the
typevars yet, but this should implement most of the typing rules for
inside the generic function/class, where we don't know yet which
specific type the typevar will be specialized to.
This PR does _not_ yet handle the constraint that multiple occurrences
of the typevar must be specialized to the _same_ type. (There is an
existing test case for this in `generics/functions.md` which is still
marked as TODO.)
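A sketch of the kind of code this enables checking inside a generic function (PEP 695 syntax):
```py
def pair[T](x: T, y: T) -> tuple[T, T]:
    # Inside the function body we don't know what `T` will be specialized to, but
    # we can still check that the return value matches the declared `tuple[T, T]`.
    # The constraint that both occurrences of `T` solve to the *same* type is not
    # handled yet (see the TODO mentioned above).
    return (x, y)
```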
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
Co-authored-by: Carl Meyer <carl@astral.sh>
## Summary
I decided to disable the new
[`needless_continue`](https://rust-lang.github.io/rust-clippy/master/index.html#needless_continue)
rule because I often found the explicit `continue` more readable than an
empty block or having to invert the condition of another branch.
## Test Plan
`cargo test`
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR changes the inferred type for symbols in unreachable sections of
code to `Never` (instead of reporting them as unbound), in order to
silence false positive diagnostics. See the lengthy comment in the code
for further details.
## Test Plan
- Updated Markdown tests.
- Manually verified a couple of ecosystem diagnostic changes.
## Summary
If a package in `site-packages` had this directory structure:
```py
# bar/__init__.py
from .a import A
# bar/a.py
class A: ...
```
then we would fail to resolve the `from .a import A` import _if_ (as is
usually the case!) the `site-packages` search path was located inside a
`.venv` directory that was a subdirectory of the project's first-party
search path. The reason for this is a bug in `file_to_module` in the
module resolver. In this loop, we would identify that
`/project_root/.venv/lib/python3.13/site-packages/foo/__init__.py` can
be turned into a path relative to the first-party search path
(`/project_root`):
6e2b8f9696/crates/red_knot_python_semantic/src/module_resolver/resolver.rs (L101-L110)
but we'd then try to turn the relative path
(`.venv/lib/python3.13/site-packages/foo/__init__.py`) into a module
path, realise that it wasn't a valid module path... and therefore
immediately `break` out of the loop before trying any other search paths
(such as the `site-packages` search path).
This bug was originally reported on Discord by @MatthewMckee4.
## Test Plan
I added a unit test for `file_to_module` in `resolver.rs`, and an
integration test that shows we can now resolve the import correctly in
`infer.rs`.
Summary
--
Detects duplicate literals in `match` mapping keys.
This PR also adds a `source` method to `SemanticSyntaxContext` to
display the duplicated key in the error message by slicing out its
range.
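For example:
```py
match {"action": "move"}:
    case {"action": "move", "action": "stop"}:  # error: duplicate literal key `"action"`
        ...
```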
Test Plan
--
New inline tests.
Fixes#17164. Simply checking whether one type is gradually equivalent
to another is too simplistic here: `Any` is gradually equivalent to
`Todo`, but we should permit users to cast from `Todo` or `Unknown` to
`Any` without complaining about it. This changes our logic so that we
only complain about redundant casts if:
- the two types are exactly equal (when normalized) OR they are
equivalent (we'll still complain about `Any -> Any` casts, and about
`Any | str | int` -> `str | int | Any` casts, since their normalized
forms are exactly equal, even though the type is not fully static -- and
therefore does not participate in equivalence relations)
- AND the casted type does not contain `Todo`
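A sketch of what is and isn't flagged under the new rules:
```py
from typing import Any, cast

def f(x: Any, y: Any | str | int) -> None:
    cast(Any, x)               # still flagged: the two types are exactly equal
    cast(str | int | Any, y)   # still flagged: equal once both sides are normalized
    cast(int, x)               # not flagged: casting `Any` to `int` is not redundant
```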
We add support for `return` and `raise` statements in the control flow
graph: we simply add an edge to the terminal block, push the statements
to the current block, and proceed.
This implementation will have to be modified somewhat once we add
support for `try` statements - then we will need to check whether to
_defer_ the jump. But for now this will do!
Also in this PR: We fix the `unreachable` diagnostic range so that it
lumps together consecutive unreachable blocks.
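A sketch of the effect on diagnostics:
```py
def f() -> int:
    return 1
    print("never runs")    # unreachable
    print("never runs 2")  # covered by the same, single `unreachable` diagnostic range
```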
## Summary
The existing signature for `str` calls had various problems, one of
which I noticed while looking at some ecosystem projects (`scrapy`,
added as a project to mypy_primer in this PR).
## Test Plan
- New tests for `str(…)` calls.
- Observed reduction of false positives in ecosystem checks
## Summary
A callable type is disjoint from other literal types. For example,
`Type::StringLiteral` must be an instance of exactly `str`, not a
subclass of `str`, and `str` is not callable. The same applies to other
literal types.
This should hopefully fix #17144; I couldn't produce any failures after
running property tests multiple times.
## Test Plan
Add test cases for disjointness check between callable and other literal
types.
Run property tests multiple times.
## Summary
Python `**` works differently to Rust `**`!
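A few of the edge cases in question (mdtest style; the comments describe Python's runtime semantics):
```py
reveal_type(2 ** 3)       # 8: stays an `int`
reveal_type(2 ** -1)      # 0.5: a negative `int` exponent yields a `float`
reveal_type((-2) ** 0.5)  # a negative base with a non-integral exponent yields a `complex`
```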
## Test Plan
Added an mdtest for various edge cases, and checked in the Python REPL
that we infer the correct type in all the new cases tested.
## Summary
This PR adds a CI job that causes GitHub to add annotations to a PR diff
when mdtest assertions fail. For example:
<details>
<summary>Screenshot</summary>

</details>
## Motivation
Debugging mdtest failures locally is currently a really nice experience:
- Errors are displayed with pretty colours, which makes them much more
readable
- If you run the test from inside an IDE, you can CTRL-click on a path
and jump directly to the line that had the failing assertion
- If you use
[`mdtest.py`](https://github.com/astral-sh/ruff/blob/main/crates/red_knot_python_semantic/mdtest.py),
you don't even need to recompile anything after changing an assertion in
an mdtest, and the test results instantly live-update with each change
to the Markdown file
Debugging mdtest failures in CI is much more unpleasant, however.
Sometimes an error message is just
> [static-assert-error] Argument evaluates to `False`
...which doesn't tell you very much unless you navigate to the line in
question that has the failing mdtest assertion. The line in question
might not even be touched by the PR, and even if it is, it can be hard
to find the line if the PR touches many files. Unlike locally, you can't
click on the error and jump straight to the line that contains the
failing assertion. You also don't get colourised output in CI
(https://github.com/astral-sh/ruff/issues/13939).
GitHub PR annotations should make it really easy to debug why mdtests
are failing on PRs, making PR review much easier.
## Test Plan
I opened a PR to my fork
[here](https://github.com/AlexWaygood/ruff/pull/11/files) with some
bogus changes to an mdtest to show what it looks like when there are
failures in CI and this job has been added. Scroll down to
`crates/red_knot_python_semantic/resources/mdtest/type_properties/is_equivalent_to.md`
on the "files changed" tab for that PR to see the annotations.
## Summary
Fixes https://github.com/astral-sh/ruff/issues/17058.
Equivalent callable types were not understood as equivalent when they
appeared nested inside unions and intersections. This PR fixes that by
ensuring that `Callable` elements nested inside unions, intersections
and tuples have their representations normalized before one union type
is compared with another for equivalence, or before one intersection
type is compared with another for equivalence.
The normalizations applied to a `Callable` type are:
- the type of the default value is stripped from all parameters (only
whether the parameter _has_ a default value is relevant to whether one
`Callable` type is equivalent to another)
- The names of the parameters are stripped from positional-only
parameters, variadic parameters and keyword-variadic parameters
- Unions and intersections that are present (top-level or nested) inside
parameter annotations or return annotations are normalized.
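For example, these two union types should now be recognized as equivalent (a sketch; the exact spelling in the mdtests may differ):
```py
from typing import Callable

X = Callable[[int | str], None] | bytes
Y = bytes | Callable[[str | int], None]
```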
Adding a `CallableType::normalized()` method also allows us to simplify
the implementation of `CallableType::is_equivalent_to()`.
### Should these normalizations be done eagerly as part of a
`CallableType` constructor?
I considered this. It's something that we could still consider doing in
the future; this PR doesn't rule it out as a possibility. However, I
didn't pursue it for now, for several reasons:
1. Our current `Display` implementation doesn't handle well the
possibility that a parameter might not have a name or an annotated type.
Callable types with parameters like this would be displayed as follows:
```py
(, ,) -> None: ...
```
That's fixable! It could easily become something like `(Unknown,
Unknown) -> None: ...`. But it also illustrates that we probably want to
retain the parameter names when displaying the signature of a `lambda`
function if you're hovering over a reference to the lambda in an IDE.
Currently we don't have a `LambdaType` struct for representing `lambda`
functions; if we wanted to eagerly normalize signatures when creating
`CallableType`s, we'd probably have to add a `LambdaType` struct so that
we would retain the full signature of a `lambda` function, rather than
representing it as an eagerly simplified `CallableType`.
2. In order to ensure that it's impossible to create `CallableType`s
without the parameters being normalized, I'd either have to create an
alternative `SimplifiedSignature` struct (which would duplicate a lot of
code), or move `CallableType` to a new module so that the only way of
constructing a `CallableType` instance would be via a constructor method
that performs the normalizations eagerly on the callable's signature.
Again, this isn't a dealbreaker, and I think it's still an option, but
it would be a lot of churn, and it didn't seem necessary for now. Doing
it this way, at least to start with, felt like it would create a diff
that's easier to review and felt like it would create fewer merge
conflicts for others.
## Test Plan
- Added a regression mdtest for
https://github.com/astral-sh/ruff/issues/17058
- Ran `QUICKCHECK_TESTS=1000000 cargo test --release -p
red_knot_python_semantic -- --ignored types::property_tests::stable`
## Summary
Add an initial set of tests that will eventually document our behavior
around unreachable code. In the last section of this suite, I argue why
we should never type check unreachable sections and never emit any
diagnostics in these sections.
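For illustration, a typical situation where this matters (a hypothetical
sketch, not one of the new tests) is a version-gated branch that is
unreachable for the configured target version:
```py
import sys

if sys.version_info >= (3, 11):
    from typing import Self
else:
    # With a target version of 3.11+, this branch is unreachable. The argument
    # in the last section is that we should not type check it or emit
    # diagnostics here (e.g. for a missing `typing_extensions` dependency).
    from typing_extensions import Self
```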
## Summary
Following up on the discussion in
https://github.com/astral-sh/ruff/issues/14626#issuecomment-2766548545,
we're reorganizing the Airflow rules. Before that discussion, required
changes and suggested changes were combined into a single error code.
This PR first renames the original error codes to the new error codes as
discussed. We will gradually extract suggested changes out of AIR301 and
AIR302 into AIR311 and AIR312 in follow-up PRs.
## Test Plan
Apart from the file and error-code renames, the test cases should work as
they did before.
I initially split the lifetime out into three distinct lifetimes on
near-instinct because I moved the struct into the public API. But
because they are all shared borrows, and because there are no other APIs
on `DisplayDiagnostic` to access individual fields (and probably never
will be), it's probably fine to just specify one lifetime. Because of
subtyping, the one lifetime will be the shortest of the three.
There's also the point that `ruff_db` isn't _really_ a public API, since
it isn't a library that others depend on. So my instinct is probably a
bit off there.
## Summary
With this PR, we emit a diagnostic for the following case, where we
previously didn't:
```py
from typing import Literal
def f(m: int, n: Literal[-1, 0, 1]):
# error: [division-by-zero] "Cannot divide object of type `int` by zero"
return m / n
```
## Test Plan
New Markdown test
## Summary
Add autofix infrastructure to `AIR302` name checks and use this logic to
fix `"airflow", "api_connexion", "security", "requires_access_dataset"`,
`"airflow", "Dataset"`, and `"airflow", "datasets", "Dataset"`.
## Test Plan
The existing test fixture reflects the update
## Summary
Closes #17112. Allows passing string and list-of-strings literals to
`subprocess.run` (and related) calls without marking them as untrusted
input:
```py
import subprocess
subprocess.run("true")
# "instant" named expressions are also allowed
subprocess.run(c := "ls")
```
## Test Plan
Added test cases covering new behavior, passed with `cargo nextest run`.
## Summary
* ``airflow.auth.managers.base_auth_manager.is_authorized_dataset`` has
been moved to
``airflow.api_fastapi.auth.managers.base_auth_manager.is_authorized_asset``
in Airflow 3.0
* ``airflow.auth.managers.models.resource_details.DatasetDetails`` has
been moved to
``airflow.api_fastapi.auth.managers.models.resource_details.AssetDetails``
in Airflow 3.0
* DAG arguments `default_view` and `orientation` have been removed in
Airflow 3.0
* `airflow.models.baseoperatorlink.BaseOperatorLink` has been moved to
`airflow.sdk.definitions.baseoperatorlink.BaseOperatorLink` in Airflow
3.0
* ``airflow.notifications.basenotifier.BaseNotifier`` has been moved to
``airflow.sdk.BaseNotifier`` in Airflow 3.0
* ``airflow.utils.log.secrets_masker`` has been moved to
``airflow.sdk.execution_time.secrets_masker`` in Airflow 3.0
* ``airflow...DAG.allow_future_exec_dates`` has been removed in Airflow
3.0
* `airflow.utils.db.create_session` has been removed in Airflow 3.0
* `airflow.sensors.base_sensor_operator.BaseSensorOperator` has been
moved to `airflow.sdk.bases.sensor.BaseSensorOperator` in Airflow 3.0
* `airflow.utils.file.TemporaryDirectory` has been removed in Airflow
3.0 and can be replaced by `tempfile.TemporaryDirectory`
* `airflow.utils.file.mkdirs` has been removed in Airflow 3.0 and can be
replaced by `pathlib.Path({path}).mkdir`
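As a sketch of the last replacement (the Airflow-side call shown in the
comment is an assumption for illustration, not taken from the Airflow docs):
```py
from pathlib import Path

path = "/tmp/example_dag_folder"

# Airflow 2.x (removed in 3.0), roughly:
#   from airflow.utils.file import mkdirs
#   mkdirs(path, mode=0o777)

# Airflow 3.0 replacement using the standard library:
Path(path).mkdir(mode=0o777, parents=True, exist_ok=True)
```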
## Test Plan
Test fixture has been added for these changes
## Summary
Unlike the other AIR3XX rules, this best practice also applies to Airflow
1 and Airflow 2. Thus, we think it makes sense to move it to AIR002 so
that the first digit of the error code aligns with the Airflow version as
much as possible, reducing confusion.
## Test Plan
The test fixture has been updated.
It was already using this approach internally, so this is "just" a
matter of rejiggering the public API of `Diagnostic`.
We were previously writing directly to a `std::io::Write` since it was
thought that this worked better with the linear typing fakery. Namely,
it increased confidence that the diagnostic rendering was actually
written somewhere useful, instead of just being converted to a string
that could potentially get lost.
For reasons discussed in #17130, the linear type fakery was removed.
And so there is less of a reason to require a `std::io::Write`
implementation for diagnostic rendering. Indeed, this would sometimes
result in `unwrap()` calls when one wants to convert to a `String`.
## Summary
This PR adds Goto type definition to the playground, using the same
infrastructure as the LSP.
The main *challenge* with implementing this feature was that the editor
can now participate in which tab is open.
## Known limitations
The same as for the LSP. Most notably, navigating to types defined in
typeshed isn't supported.
## Test Plan
https://github.com/user-attachments/assets/22dad7c8-7ac7-463f-b066-5d5b2c45d1fe
This just adds an extra blank line. I think these tests were written
against the new renderer before it was used by Red Knot's `main`
function. Once I did that, I saw that it was missing a blank line, and
so I added it to match the status quo. But that means these snapshots
have become stale. So this commit updates them.
I put this in its own commit in case all of the information removed here
was controversial. But it *looks* stale to me. At the very least,
`TypeCheckDiagnostic` no longer exists, so that would need to be fixed.
And it doesn't really make sense to me (at this point) to make
`Diagnostic` a Salsa struct, particularly since we are keen on using it
in Ruff (at some point).
We do keep around `OldSecondaryDiagnosticMessage`, since that's part of
the Red Knot `InferContext` API. But it's a rather simple type, and
we'll be able to delete it entirely once `InferContext` exposes the new
`Diagnostic` type directly.
Since we aren't consuming `OldSecondaryDiagnosticMessage` any more, we
can now accept a slice instead of a vec. (Thanks Clippy.)
This replaces things like `TypeCheckDiagnostic` with the new `Diagnostic`
type.
This is a "surgical" replacement where we retain the existing API of
of diagnostic reporting such that _most_ of Red Knot doesn't need to be
changed to support this update. But it will enable us to start using the
new diagnostic renderer and to delete the old renderer. It also paves
the path for exposing the new `Diagnostic` data model to the broader Red
Knot codebase.
Previously, this was only available in the old renderer.
To avoid regressions, we just copy it to the new renderer.
We don't bother with DRY because the old renderer will be
deleted very soon.
Now that we don't need to update the `printed` flag, this can just be an
immutable borrow.
(Arguably this should have been an immutable borrow even initially, but
I didn't want to introduce interior mutability without a more compelling
justification.)
The switch to `Arc` was done because Salsa sometimes requires cloning a
`Diagnostic` (or something that contains a `Diagnostic`). And so it
probably makes sense to make this cheap.
Since `Diagnostic` exposes a mutable API, we adopt "clone on write"
semantics. Although, it's more like, "clone on write when the `Arc` has
more than one reference." In the common case of creating a `Diagnostic`
and then immediately mutating it, no additional copies should be made
over the status quo.
We also drop the linear type fakery. Its interaction with Salsa is
somewhat awkward, and it has been suggested that there will be points
where diagnostics will be dropped unceremoniously without an opportunity
to tag them as having been ignored. Moreover, this machinery was added
out of "good sense" and isn't actually motivated by real world problems
with accidentally ignoring diagnostics. So that makes it easier, I
think, to just kick this out entirely instead of trying to find a way to
make it work.
This is temporary to scaffold the refactor.
The main idea is that we want to take the `InferContext` API,
*as it is*, and migrate that to the new diagnostic data model
*internally*. Then we can rip out the old stuff and iterate
on the API.
I did this mostly because it wasn't buying us much, and I'm
trying to simplify the public API of the types I'd like to
refactor in order to make the refactor simpler.
If we really want something like this, we can re-add it
later.
I removed this to see how much code was depending internally on the
`&[Arc<TypeCheckDiagnostic>]` representation. Thankfully, it was just
one place. So I just removed the `Deref` impl in favor of adding an
explicit `iter` method.
In general, I think using `Deref` for things like this is _somewhat_ of
an abuse. The tip-off is if there are `&self` or `&mut self` methods on
the type, then it's probably not a good candidate for `Deref`.
Summary
--
This PR reimplements
[load-before-global-declaration
(PLE0118)](https://docs.astral.sh/ruff/rules/load-before-global-declaration/)
as a semantic syntax error.
I added a `global` method to the `SemanticSyntaxContext` trait to make
this very easy, at least in ruff. Does red-knot have something similar?
If this approach will also work in red-knot, I think some of the other
PLE rules are also compile-time errors in CPython, PLE0117 in
particular. 0115 and 0116 also mention `SyntaxError`s in their docs, but
I haven't confirmed them in the REPL yet.
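For reference, a minimal example of the pattern PLE0118 flags; CPython
rejects it at compile time with a `SyntaxError` saying the name is used
prior to the global declaration:
```py
counter = 0

def increment():
    print(counter)   # load of `counter` before the `global` declaration below
    global counter
    counter += 1
```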
Test Plan
--
Existing linter tests for PLE0118. I think this actually can't be tested
very easily in an inline test because the `TestContext` doesn't have a
real way to track globals.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
Summary
--
Fixes https://github.com/astral-sh/ruff/issues/16520 by flagging single,
starred expressions in `return`, `yield`, and
`for` statements.
I thought `yield from` would also be included here, but that error is
emitted by
the CPython parser:
```pycon
>>> ast.parse("def f(): yield from *x")
Traceback (most recent call last):
File "<python-input-214>", line 1, in <module>
ast.parse("def f(): yield from *x")
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/ast.py", line 54, in parse
return compile(source, filename, mode, flags,
_feature_version=feature_version, optimize=optimize)
File "<unknown>", line 1
def f(): yield from *x
^
SyntaxError: invalid syntax
```
And we also already catch it in our parser.
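For reference, the kinds of statements that are now flagged (a sketch; each
is rejected by CPython at compile time, whereas adding a trailing comma such
as `return *x,` makes it a valid tuple display):
```py
def f(x):
    return *x  # single starred expression in `return`

def g(x):
    yield *x  # single starred expression in `yield`

for *x in range(3):  # single starred target in `for`
    pass
```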
Test Plan
--
New inline tests and updates to existing tests.
## Summary
Implement basic *Goto type definition* support for Red Knot's LSP.
This PR also builds the foundation for other LSP operations. E.g., Goto
definition, hover, etc., should be able to reuse some, if not most,
logic introduced in this PR.
The basic steps of resolving the type definitions are:
1. Find the closest token for the cursor offset. This is a bit more
subtle than I first anticipated because the cursor could be positioned
right between the callee and the `(` in `call(test)`, in which case we
want to resolve the type for `call`.
2. Find the node with the minimal range that fully encloses the token
found in 1. I somewhat suspect that 1 and 2 could be done at the same
time, but it complicated things because we also need to compute the spine
(ancestor chain) for the node, and there's no guarantee that the found
nodes have the same ancestors.
3. Reduce the node found in 2. to a node that is a valid goto target.
This may require traversing upwards to e.g. find the closest expression.
4. Resolve the type for the goto target
5. Resolve the location for the type, return it to the LSP
## Design decisions
The current implementation navigates to the inferred type. I think this
is what we want because it means that it correctly accounts for
narrowing (in which case we want to go to the narrowed type because
that's the value's type at the given position). However, it does have
the downside that Goto type definition doesn't work whenever we infer `T
& Unknown` because intersection types aren't supported. I'm not sure
what to do about this specific case, other than maybe ignoring `Unknown`
in Goto type definition if the type is an intersection?
## Known limitations
* Types defined in the vendored typeshed aren't supported because the
client can't open files from the red knot binary (we can either
implement our own file protocol and handler OR extract the typeshed
files and point there). See
https://github.com/astral-sh/ruff/issues/17041
* Red Knot only exposes an API to get types for expressions and
definitions. However, there are many other nodes with identifiers that
can have a type (e.g. go to type of a globals statement, match patterns,
...). We can add support for those in separate PRs (after we figure out
how to query the types from the semantic model). See
https://github.com/astral-sh/ruff/issues/17113
* We should have a higher-level API for the LSP that doesn't directly
call semantic queries. I intentionally decided not to design that API
just yet.
## Test plan
https://github.com/user-attachments/assets/fa077297-a42d-4ec8-b71f-90c0802b4edb
Goto type definition on a union
<img width="1215" alt="Screenshot 2025-04-01 at 13 02 55"
src="https://github.com/user-attachments/assets/689cabcc-4a86-4a18-b14a-c56f56868085"
/>
Note: I recorded this using a custom typeshed path so that navigating to
builtins works.
## Summary
from https://github.com/astral-sh/ruff/pull/17034#discussion_r2024222525
This is a simple PR to fix the invalid behavior of `NotImplemented` on
Python >=3.10.
## Test Plan
I think it would be better if we could run mdtest across multiple Python
versions in GitHub Actions.
---------
Co-authored-by: David Peter <sharkdp@users.noreply.github.com>
## Summary
Add support for decorators on functions, as well as support for
properties, by adding special handling for `@property` and `@<name of
property>.setter`/`.getter` decorators.
closes https://github.com/astral-sh/ruff/issues/16987
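A minimal, hypothetical sketch of the kind of code this PR lets us
understand (not taken from the test suite):
```py
from typing import reveal_type

class Table:
    def __init__(self) -> None:
        self._padding = 0

    @property
    def padding(self) -> int:
        return self._padding

    @padding.setter
    def padding(self, value: int) -> None:
        self._padding = value

t = Table()
reveal_type(t.padding)  # with property support, this attribute access is `int`
t.padding = 2           # the assignment is checked against the setter's `value: int`
```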
## Ecosystem results
- ✔️ A lot of false positives are fixed by our new
understanding of properties
- 🔴 A bunch of new false positives (typically
`possibly-unbound-attribute` or `invalid-argument-type`) occur because
we currently do not perform type narrowing on attributes. And with the
new understanding of properties, this becomes even more relevant. In
many cases, the narrowing occurs through an assertion, so this is also
something that we need to implement to get rid of these false positives.
- 🔴 A few new false positives occur because we do not
understand generics, and therefore some calls to custom setters fail.
- 🔴 Similarly, some false positives occur because we do not
understand protocols yet.
- ✔️ Seems like a true positive to me. [The
setter](e624d8edfa/src/packaging/specifiers.py (L752-L754))
only accepts `bools`, but `None` is assigned in [this
line](e624d8edfa/tests/test_specifiers.py (L688)).
```
+ error[lint:invalid-assignment]
/tmp/mypy_primer/projects/packaging/tests/test_specifiers.py:688:9:
Invalid assignment to data descriptor attribute `prereleases` on type
`SpecifierSet` with custom `__set__` method
```
- ✔️ This is arguably also a true positive. The setter
[here](0c6c75644f/rich/table.py (L359-L363))
returns `Table`, but typeshed wants [setters to return
`None`](bf8d2a9912/stdlib/builtins.pyi (L1298)).
```
+ error[lint:invalid-argument-type]
/tmp/mypy_primer/projects/rich/rich/table.py:359:5: Object of type
`Literal[padding]` cannot be assigned to parameter 2 (`fset`) of bound
method `setter`; expected type `(Any, Any, /) -> None`
```
## Follow ups
- Fix the `@no_type_check` regression
- Implement class decorators
## Test Plan
New Markdown test suites for decorators and properties.
## Summary
Adds the import convention `numpy.typing as npt` to the default
`flake8-import-conventions.aliases`.
Resolves #17028
## Test Plan
Manually ran local ruff on the altered fixture and also ran `cargo test`
## Summary
Part of #15382, this PR adds property tests for callable types.
Specifically, this PR updates the property tests to generate an
arbitrary signature for a general callable type which includes:
* Arbitrary combination of parameter kinds in the correct order
* Arbitrary number of parameters
* Arbitrary optional types for annotation and return type
* Arbitrary parameter names (no duplicate names), optional for
positional-only parameters
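For illustration, one signature shape these property tests can now generate
(a hypothetical example, not copied from the generator):
```py
# Parameter kinds appear in the only valid order (positional-only, standard,
# *args, keyword-only, **kwargs), names are unique, and parameter names for
# positional-only parameters as well as all annotations are optional.
def example(a: int, /, b, *args: str, c, d: bytes = b"", **kwargs: float) -> None: ...
```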
## Test Plan
```
QUICKCHECK_TESTS=100000 cargo test -p red_knot_python_semantic -- --ignored types::property_tests::stable
```
Also, the commands in CI:
d72b4100a3/.github/workflows/daily_property_tests.yaml (L47-L52)
## Summary
Part of #15382, this PR adds support for disjointness between two
callable types. They are never disjoint because there exists a callable
type that's a subtype of all other callable types:
```py
(*args: object, **kwargs: object) -> Never
```
`Never` is a subtype of every fully static type, so a callable type whose
return type is `Never` has a return type that is a subtype of every other
callable type's return type.
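As a minimal illustration in terms of `typing.Callable` (a sketch, not taken
from the new tests; requires Python 3.11+ for `typing.Never`):
```py
from typing import Callable, Never

def bottom(*args: object, **kwargs: object) -> Never:
    raise AssertionError("never returns")

# `bottom` inhabits both of these otherwise unrelated callable types, so the
# two types share a common inhabitant and therefore cannot be disjoint.
f: Callable[[int], str] = bottom
g: Callable[[bytes, bytes], bool] = bottom
```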
## Test Plan
Add test cases related to mixed parameter kinds, gradual form (`...`)
and `Never` type.
## Summary
Currently our `Type::Callable` wraps a four-variant `CallableType` enum.
But as time has gone on, I think we've found that the four variants in
`CallableType` are really more different to each other than they are
similar to each other:
- `GeneralCallableType` is a structural type describing all callable
types with a certain signature, but the other three types are "literal
types", more similar to the `FunctionLiteral` variant
- `GeneralCallableType` is not a singleton or a single-valued type, but
the other three are all single-valued types
(`WrapperDescriptorDunderGet` is even a singleton type)
- `GeneralCallableType` has (or should have) ambiguous truthiness, but
all possible inhabitants of the other three types are always truthy.
- As a structural type, `GeneralCallableType` can contain inner unions
and intersections that must be sorted in some contexts in our internal
model, but this is not true for the other three variants.
This PR flattens `Type::Callable` into four distinct `Type::` variants.
In the process, it fixes a number of latent bugs that were concealed by
the current architecture but are laid bare by the refactor. Unit tests
for these bugs are included in the PR.