This is done in what appears to be the same way as Ruff: we get the CWD,
strip the prefix from the path if possible, and use that. If stripping
the prefix fails, then we print the full path as-is.
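For illustration, a minimal sketch of that logic using std paths (the real code operates on the crate's own path types, and `relativize` is a hypothetical name):

```rust
use std::path::Path;

/// Render `path` relative to `cwd` when possible; otherwise fall back to
/// printing the full path as-is.
fn relativize(path: &Path, cwd: &Path) -> String {
    match path.strip_prefix(cwd) {
        Ok(relative) => relative.display().to_string(),
        // `strip_prefix` fails when `path` doesn't start with `cwd`.
        Err(_) => path.display().to_string(),
    }
}
```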
Fixes #17233
Summary
--
This PR extends semantic syntax error detection to red-knot. The main
changes here are:
1. Adding `SemanticSyntaxChecker` and `Vec<SemanticSyntaxError>` fields
to the `SemanticIndexBuilder`
2. Calling `SemanticSyntaxChecker::visit_stmt` and `visit_expr` in the
`SemanticIndexBuilder`'s `visit_stmt` and `visit_expr` methods
3. Implementing `SemanticSyntaxContext` for `SemanticIndexBuilder`
4. Adding new mdtests to test the context implementation and show
diagnostics
(3) is definitely the trickiest and required (I think) a minor addition
to the `SemanticIndexBuilder`. I tried to look around for existing code
performing the necessary checks, but I definitely could have missed
something or misused the existing code even when I found it.
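As a rough sketch of the shape of (1) and (2), with stub types standing in for the real ones (the actual signatures in the ruff crates differ):

```rust
mod ast {
    pub struct Stmt;
}

#[derive(Default)]
pub struct SemanticSyntaxChecker;
pub struct SemanticSyntaxError;

impl SemanticSyntaxChecker {
    fn visit_stmt(&mut self, _stmt: &ast::Stmt, _errors: &mut Vec<SemanticSyntaxError>) {
        // ... run the semantic syntax checks, pushing any errors ...
    }
}

#[derive(Default)]
pub struct SemanticIndexBuilder {
    semantic_checker: SemanticSyntaxChecker,
    semantic_errors: Vec<SemanticSyntaxError>,
    // ... existing fields ...
}

impl SemanticIndexBuilder {
    fn visit_stmt(&mut self, stmt: &ast::Stmt) {
        // Run the semantic syntax checks alongside the builder's own
        // traversal, accumulating errors for later reporting.
        self.semantic_checker.visit_stmt(stmt, &mut self.semantic_errors);
        // ... existing index-building logic ...
    }
}
```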
There's still one TODO around `global` statement handling. I don't think
there's an existing way to look this up, but I'm happy to work on that
here or in a separate PR. This currently only affects detection of one
error (`LoadBeforeGlobalDeclaration` or
[PLE0118](https://docs.astral.sh/ruff/rules/load-before-global-declaration/)
in ruff), so it's not too big of a problem even if we leave the TODO.
Test Plan
--
New mdtests, as well as new errors for existing mdtests
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR extends version-related syntax error detection to red-knot. The
main changes here are:
1. Passing `ParseOptions` specifying a `PythonVersion` to parser calls
2. Adding a `python_version` method to the `Db` trait to make this
possible
3. Converting `UnsupportedSyntaxError`s to `Diagnostic`s
4. Updating existing mdtests to avoid unrelated syntax errors
My initial draft of (1) and (2) in #16090 instead tried passing a
`PythonVersion` down to every parser call, but @MichaReiser suggested
the `Db` approach instead
[here](https://github.com/astral-sh/ruff/pull/16090#discussion_r1969198407),
and I think it turned out much nicer.
All of the new `python_version` methods look like this:
```rust
fn python_version(&self) -> ruff_python_ast::PythonVersion {
    Program::get(self).python_version(self)
}
```
with the exception of the `TestDb` in `ruff_db`, which hard-codes
`PythonVersion::latest()`.
## Test Plan
Existing mdtests, plus a new mdtest to see at least one of the new
diagnostics.
This commit shuffles the reporting API around a little bit such that a
range is required, up front, when reporting a lint diagnostic. This in
turn enables us to make suppression checking eager.
In order to avoid callers needing to provide the range twice, we create
a primary annotation *without* a message inside the `Diagnostic`
encapsulated by the guard. We do this instead of requiring the message
up front because we're concerned about API complexity and the effort
involved in creating the message.
In order to provide a means of attaching a message to the primary
annotation, we expose a convenience API on `LintDiagnosticGuard` for
setting the message. This isn't generally possible for a `Diagnostic`,
but since a `LintDiagnosticGuard` knows how the `Diagnostic` was
constructed, we can offer this API correctly.
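A self-contained model of the pattern (all names here are illustrative, not the real API):

```rust
use std::ops::Range;

struct Annotation {
    range: Range<usize>,
    message: Option<String>,
}

struct Diagnostic {
    message: String,
    primary: Annotation,
}

struct LintDiagnosticGuard {
    diagnostic: Diagnostic,
}

impl LintDiagnosticGuard {
    /// The guard knows the diagnostic was built with a message-less primary
    /// annotation, so it can attach the message without the caller
    /// restating the range.
    fn set_primary_message(&mut self, message: impl Into<String>) {
        self.diagnostic.primary.message = Some(message.into());
    }
}

/// Suppression is checked eagerly: a suppressed lint yields `None`, so the
/// caller never does the work of building messages for it.
fn report_lint(suppressed: bool, message: &str, range: Range<usize>) -> Option<LintDiagnosticGuard> {
    if suppressed {
        return None;
    }
    Some(LintDiagnosticGuard {
        diagnostic: Diagnostic {
            message: message.to_string(),
            primary: Annotation { range, message: None },
        },
    })
}
```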
It strikes me that it might be easy to forget to attach a primary
annotation message, but I think this is the "least" bad failure mode. And
in particular, it should be somewhat obvious that it's missing once one
adds a snapshot test for how the diagnostic renders.
Otherwise, this API gives us the ability to eagerly check whether a
diagnostic should be reported with nearly minimal information. It also
shouldn't have any footguns since it guarantees that the primary
annotation is tied to the file in the typing context. And it keeps
things pretty simple: callers only need to provide what is actually
strictly necessary to make a diagnostic.
This will enable us to provide an API on `LintDiagnosticGuard` for
setting the primary annotation message. It will require an `unwrap()`,
but due to how `LintDiagnosticGuard` will build a `Diagnostic`, this
`unwrap()` will be guaranteed to succeed. (And it won't bubble out to
every user of `LintDiagnosticGuard`.)
This expands the set of types accepted for diagnostic messages. Instead
of only `std::fmt::Display` impls, we now *also* accept a concrete
`DiagnosticMessage`.
This will be useful to avoid unnecessary copies of every single
diagnostic message created via `InferContext::report_lint`. (I'll call
out how this helps in a subsequent commit.)
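A sketch of the idea (names illustrative): messages can still be built from any `Display` value, but a ready-made `DiagnosticMessage` passes through unchanged, so no second copy of the string is made.

```rust
use std::fmt::Display;

#[derive(Clone)]
pub struct DiagnosticMessage(Box<str>);

impl DiagnosticMessage {
    pub fn from_display(value: impl Display) -> Self {
        DiagnosticMessage(value.to_string().into_boxed_str())
    }
}

impl From<&str> for DiagnosticMessage {
    fn from(value: &str) -> Self {
        DiagnosticMessage::from_display(value)
    }
}

/// Accepts both: a `&str` (or anything else with a `From` impl) is
/// converted, while an existing `DiagnosticMessage` is moved in as-is via
/// the reflexive `From<T> for T` impl.
pub fn set_message(slot: &mut Option<DiagnosticMessage>, message: impl Into<DiagnosticMessage>) {
    *slot = Some(message.into());
}
```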
I added this accessor because tests want it, but we can also use it in
other places internally. It's a little nicer because it does the
`as_deref()` for you.
This finally completes the deletion of all old diagnostic types.
We do this by migrating the second (and last) use of secondary
diagnostic messages: to highlight the return type of a function
definition when its return value is inconsistent with the type.
Like the last diagnostic, we do actually change the message here a bit.
We don't need a sub-diagnostic here, and we can instead just add a
secondary annotation to highlight the return type.
In the new diagnostic data model, we really should have a main
diagnostic message *and* a primary span (with an optional message
attached to it) for every diagnostic.
In this commit, I try to make this true for the "revealed type"
diagnostic. Instead of the annotation saying both "revealed type is"
and also the revealed type itself, the annotation is now just the
revealed type and the main diagnostic message is "Revealed type."
I expect this may be controversial. I'm open to doing something
different. I tried to avoid redundancy, but maybe this is a special case
where we want the redundancy. I'm honestly not sure. I do *like* how it
looks with this commit, but I'm not working with Red Knot's type
checking daily, so my opinion doesn't count for much.
This did also require some tweaking to concise diagnostic formatting in
order to preserve the essential information.
This commit doesn't update every relevant snapshot. Just a few. I split
the rest out into the next commit.
I initially split the lifetime out into three distinct lifetimes on
near-instinct because I moved the struct into the public API. But
because they are all shared borrows, and because there are no other APIs
on `DisplayDiagnostic` to access individual fields (and probably never
will be), it's probably fine to just specify one lifetime. Because of
subtyping, the one lifetime will be the shorter of the three.
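A sketch of the two shapes, with `&()` standing in for the real field types:

```rust
struct DisplayDiagnosticMulti<'db, 'cfg, 'diag> {
    db: &'db (),
    config: &'cfg (),
    diag: &'diag (),
}

struct DisplayDiagnostic<'a> {
    db: &'a (),
    config: &'a (),
    diag: &'a (),
}

/// Shared references are covariant in their lifetime, so each field
/// coerces to the shortest of the three lifetimes.
fn collapse<'db: 'a, 'cfg: 'a, 'diag: 'a, 'a>(
    multi: DisplayDiagnosticMulti<'db, 'cfg, 'diag>,
) -> DisplayDiagnostic<'a> {
    DisplayDiagnostic {
        db: multi.db,
        config: multi.config,
        diag: multi.diag,
    }
}
```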
There's also the point that `ruff_db` isn't _really_ a public API, since
it isn't a library that others depend on. So my instinct is probably a
bit off there.
It was already using this approach internally, so this is "just" a
matter of rejiggering the public API of `Diagnostic`.
We were previously writing directly to a `std::io::Write` since it was
thought that this worked better with the linear typing fakery. Namely,
it increased confidence that the diagnostic rendering was actually
written somewhere useful, instead of just being converted to a string
that could potentially get lost.
For reasons discussed in #17130, the linear type fakery was removed.
And so there is less of a reason to require a `std::io::Write`
implementation for diagnostic rendering. Indeed, this would sometimes
result in `unwrap()` calls when one wants to convert to a `String`.
We do keep around `OldSecondaryDiagnosticMessage`, since that's part of
the Red Knot `InferContext` API. But it's a rather simple type, and
we'll be able to delete it entirely once `InferContext` exposes the new
`Diagnostic` type directly.
Since we aren't consuming `OldSecondaryDiagnosticMessage` any more, we
can now accept a slice instead of a vec. (Thanks Clippy.)
This replaces things like `TypeCheckDiagnostic` with the new `Diagnostic`
type.
This is a "surgical" replacement where we retain the existing API
of diagnostic reporting such that _most_ of Red Knot doesn't need to be
changed to support this update. But it will enable us to start using the
new diagnostic renderer and to delete the old renderer. It also paves
the way for exposing the new `Diagnostic` data model to the broader Red
Knot codebase.
Previously, this was only available in the old renderer.
To avoid regressions, we just copy it to the new renderer.
We don't bother with DRY because the old renderer will be
deleted very soon.
Now that we don't need to update the `printed` flag, this can just be an
immutable borrow.
(Arguably this should have been an immutable borrow even initially, but
I didn't want to introduce interior mutability without a more compelling
justification.)
The switch to `Arc` was done because Salsa sometimes requires cloning a
`Diagnostic` (or something that contains a `Diagnostic`). And so it
probably makes sense to make this cheap.
Since `Diagnostic` exposes a mutable API, we adopt "clone on write"
semantics. Although, it's more like, "clone on write when the `Arc` has
more than one reference." In the common case of creating a `Diagnostic`
and then immediately mutating it, no additional copies should be made
over the status quo.
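The core of the clone-on-write behavior is `Arc::make_mut`, roughly like this sketch (field names illustrative):

```rust
use std::sync::Arc;

#[derive(Clone)]
struct Inner {
    message: String,
}

#[derive(Clone)]
struct Diagnostic {
    inner: Arc<Inner>,
}

impl Diagnostic {
    fn set_message(&mut self, message: impl Into<String>) {
        // `Arc::make_mut` only clones the inner data when the `Arc` is
        // shared (strong count > 1). In the common create-then-mutate case
        // the count is 1, so no extra copy is made.
        Arc::make_mut(&mut self.inner).message = message.into();
    }
}
```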
We also drop the linear type fakery. Its interaction with Salsa is
somewhat awkward, and it has been suggested that there will be points
where diagnostics will be dropped unceremoniously without an opportunity
to tag them as having been ignored. Moreover, this machinery was added
out of "good sense" and isn't actually motivated by real world problems
with accidentally ignoring diagnostics. So that makes it easier, I
think, to just kick this out entirely instead of trying to find a way to
make it work.
This is temporary to scaffold the refactor.
The main idea is that we want to take the `InferContext` API,
*as it is*, and migrate that to the new diagnostic data model
*internally*. Then we can rip out the old stuff and iterate
on the API.
## Summary
Implement basic *Goto type definition* support for Red Knot's LSP.
This PR also builds the foundation for other LSP operations. E.g., Goto
definition, hover, etc., should be able to reuse some, if not most,
logic introduced in this PR.
The basic steps of resolving the type definitions are:
1. Find the closest token for the cursor offset. This is a bit more
subtle than I first anticipated because the cursor could be positioned
right between the callee and the `(` in `call(test)`, in which case we
want to resolve the type for `call`.
2. Find the node with the minimal range that fully encloses the token
found in 1. I somewhat suspect that 1 and 2 could be done at the same
time, but it would complicate things because we also need to compute the
spine (ancestor chain) for the node, and there's no guarantee that the
found nodes have the same ancestors.
3. Reduce the node found in 2. to a node that is a valid goto target.
This may require traversing upwards to e.g. find the closest expression.
4. Resolve the type for the goto target
5. Resolve the location for the type, return it to the LSP
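A toy illustration of steps 1-3, finding the smallest node covering an offset while keeping the ancestor chain (the real code walks the Python AST, not this stand-in tree):

```rust
use std::ops::Range;

struct Node {
    range: Range<u32>,
    children: Vec<Node>,
}

/// Returns the spine from the root down to the smallest node whose range
/// contains `offset` (empty if the root doesn't contain it).
fn covering_spine(root: &Node, offset: u32) -> Vec<&Node> {
    let mut spine = Vec::new();
    let mut current = root;
    while current.range.contains(&offset) {
        spine.push(current);
        match current
            .children
            .iter()
            .find(|child| child.range.contains(&offset))
        {
            Some(child) => current = child,
            None => break,
        }
    }
    spine
}
```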
## Design decisions
The current implementation navigates to the inferred type. I think this
is what we want because it means that it correctly accounts for
narrowing (in which case we want to go to the narrowed type because
that's the value's type at the given position). However, it does have
the downside that Goto type definition doesn't work whenever we infer `T
& Unknown` because intersection types aren't supported. I'm not sure
what to do about this specific case, other than maybe ignoring `Unknown`
in Goto type definition if the type is an intersection?
## Known limitations
* Types defined in the vendored typeshed aren't supported because the
client can't open files from the red knot binary (we can either
implement our own file protocol and handler OR extract the typeshed
files and point there). See
https://github.com/astral-sh/ruff/issues/17041
* Red Knot only exposes an API to get types for expressions and
definitions. However, there are many other nodes with identifiers that
can have a type (e.g. go to type of a `global` statement, match patterns,
...). We can add support for those in separate PRs (after we figure out
how to query the types from the semantic model). See
https://github.com/astral-sh/ruff/issues/17113
* We should have a higher-level API for the LSP that doesn't directly
call semantic queries. I intentionally decided not to design that API
just yet.
## Test plan
https://github.com/user-attachments/assets/fa077297-a42d-4ec8-b71f-90c0802b4edb
Goto type definition on a union
<img width="1215" alt="Screenshot 2025-04-01 at 13 02 55"
src="https://github.com/user-attachments/assets/689cabcc-4a86-4a18-b14a-c56f56868085"
/>
Note: I recorded this using a custom typeshed path so that navigating to
builtins works.
## Summary
`std::time::SystemTime::now` isn't available on `wasm32-unknown-unknown`,
but it is used by `FileTime::now`.
This PR replaces the usages of `FileTime::now` with a target specific
helper function that we already had in the memory file system.
Fixes https://github.com/astral-sh/ruff/issues/16966
## Test Plan
Tested that the playground no longer crashes when adding an extra-path
... and switch to the new one.
We do this switch by converting the old diagnostics to a
`Diagnostic`, and then rendering that.
This does not quite emit identical output. There are some
changes. They *could* be fixed to remain the same, but the
changes aren't obviously worse to me and I think the right
way to *improve* them is to move Red Knot to the new `Diagnostic`
API.
The next commit will have the snapshot changes.
In our existing diagnostics, our message is just the diagnostic
ID, and the message goes to the annotation. In reality, the
diagnostic can have its own message distinct from the optional
messages associated with an annotation.
In order to make the outputs match, we do a small tweak here:
when the main diagnostic message is empty, we drop the colon
after the diagnostic ID.
I expect that we'll want to rejigger this output format more
in the future, but for now this was a very simple change to
preserve the status quo.
When moving over to the new renderer, I noticed that it
was emitting an extra line terminator compared to the status
quo. This removes it by turning the line terminator into a
line delimiter between diagnostics.
## Summary
Another salsa upgrade.
The main motivation is to stay on a recent salsa version because there
are still a lot of breaking changes happening.
The most significant changes in this update:
* Salsa no longer derives `Debug` by default. It now requires
`interned(debug)` (or similar)
* This version ships the foundation for garbage collecting interned
values. However, this comes at the cost that queries now track which
interned values they created (or read). The micro benchmarks in the
salsa repo showed a significant perf regression. We'll see if this is
also visible in our benchmarks.
## Test Plan
`cargo test`
## Summary
This PR implements the first part of
https://github.com/astral-sh/ruff/discussions/16440. It ensures that Red
Knot's module resolver is case sensitive on all systems.
This PR combines a few approaches:
1. It uses `canonicalize` on non-case-sensitive systems to get the real
casing of a path. This works as long as there are no symlinks or mapped
network drives involved (the Windows "`E:\` is mapped to `\\server\share`"
thing). This is the same as what Pyright does.
2. If 1. fails, fall back to recursively listing the parent directory and
testing whether the path's file name matches the casing exactly as
reported by the directory listing. This is the same approach as CPython
takes in its module resolver. The main downside is that it requires more
syscalls because, unlike CPython, Red Knot needs to invalidate its caches
if a file name gets renamed (CPython assumes that the folders are
immutable).
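A sketch of the fallback in step 2 using std APIs, with ASCII-only case folding as a simplification (the real implementation lives behind Red Knot's `System` abstraction and caches its results):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Does `path`'s file name match the exact casing reported by listing its
/// parent directory? Intended for case-insensitive file systems, where at
/// most one entry can match case-insensitively.
fn casing_matches(path: &Path) -> io::Result<bool> {
    let (Some(parent), Some(file_name)) = (path.parent(), path.file_name()) else {
        return Ok(true);
    };
    for entry in fs::read_dir(parent)? {
        let entry = entry?;
        if entry.file_name().eq_ignore_ascii_case(file_name) {
            // Found the entry; now require an exact, case-sensitive match.
            return Ok(entry.file_name().as_os_str() == file_name);
        }
    }
    Ok(false)
}
```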
It's worth noting that the file watching test that I added that renames
`lib.py` to `Lib.py` currently doesn't pass on case-insensitive systems.
Making it pass requires some more involved changes to `Files`. I plan to
work on this next. There's the argument that landing this PR on its own
isn't worth it without this issue being addressed. I think it's still a
good step in the right direction, even if some of the details of how and
where the case-sensitive path comparison is implemented may change.
## Test plan
I added multiple integration tests (including a failing one). I tested
that the `case-sensitivity` detection works as expected on Windows,
macOS, and Linux, and that the fast paths are taken accordingly.
We don't actually hook this up to anything in this PR, but we do
go to some trouble to granularly unit test it. The unit tests caught
plenty of bugs after I initially wrote down the implementation, so they
were very much worth it.
Closes #16506
Instead of hard-coding a specific context window,
it seemed prudent to make this configurable. That
makes it easier to test different context window
sizes as well.
I am not totally convinced that this is the right
place for this configuration. I could see the context
window size being a property of `Diagnostic` instead,
since we might want to change the context window
size based not just on some end user configuration,
but perhaps also the specific diagnostic.
But for now, I think it's fine for it to live here,
and all of the rendering logic doesn't care where
it lives. So it should be relatively easy to change
in the future.
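A sketch of what the knob might look like on the rendering configuration (names and the default value here are illustrative):

```rust
#[derive(Clone, Debug)]
pub struct DisplayDiagnosticConfig {
    /// How many source lines to render above and below each annotated range.
    context: usize,
}

impl DisplayDiagnosticConfig {
    /// Builder-style setter, so tests can easily try different window sizes.
    pub fn context(mut self, lines: usize) -> Self {
        self.context = lines;
        self
    }
}

impl Default for DisplayDiagnosticConfig {
    fn default() -> Self {
        DisplayDiagnosticConfig { context: 2 }
    }
}
```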
This adds a new configuration knob to diagnostic rendering that, when
enabled, will make diagnostic rendering much more terse. Specifically,
it will guarantee that each diagnostic will only use one line.
This doesn't actually hook the concise output option up to anything.
We'll do that plumbing in the next commit.
## Summary
This PR introduces a new mdtest option `system` that can either be
`in-memory` or `os`, where `in-memory` is the default.
The motivation for supporting `os` is so that we can write OS/system-specific
tests with mdtests. Specifically, I want to write mdtests for the module
resolver, testing that module resolution is case sensitive.
## Test Plan
I tested that the case-sensitive module resolver test starts failing when
setting `system = "os"`.
... with supporting types. This is meant to give us a base to work with
in terms of our new diagnostic data model. I expect the representations
to be tweaked over time, but I think this is a decent start.
I would also like to add doctest examples, but I think it's better if we
wait until an initial version of the renderer is done for that.
This puts them out of the way so that they can hopefully be removed more
easily in the (near) future, and so that they don't get in the way of
the new types. This also makes the intent of the migration a bit clearer
in the code and hopefully results in less confusion.
This trait should eventually go away, so we rename it (and supporting
types) to make room for a new concrete `Diagnostic` type.
This commit is just the rename. In the next commit, we'll move it to a
different module.
Summary
--
This is a follow-up addressing the comments on #16425. As @dhruvmanila
pointed out, the naming is a bit tricky. I went with `has_no_errors` to
try to differentiate it from `is_valid`. It actually ends up negated in
most uses, so it would be more convenient to have `has_any_errors` or
`has_errors`, but I thought it would sound too much like the opposite of
`is_valid` in that case. I'm definitely open to suggestions here.
Test Plan
--
Existing tests.
This is a small little hack to make the `Diagnostic` trait
capable of supporting attaching multiple spans.
This design should be considered transient. This was just the
quickest way that I could see to pass multiple spans through from
the type checker to the diagnostic renderer.
This commit has no behavioral changes.
This refactor moves the logic for turning a `D: Diagnostic` into
an `annotate_snippets::Message` into its own types. This would
ideally just be a function or something, but the `annotate-snippets`
types want borrowed data, and sometimes we need to produce owned
data. So we gather everything we need into our own types and then
spit it back out in the format that `annotate-snippets` wants.
This refactor was motivated by wanting to render multiple snippets.
The logic for generating a code frame is complicated enough that
it's worth splitting out so that we can reuse it for other spans.
(Note that one should consider this prototype-level code. It is
unlikely to survive for long.)
For now, the only thing one can configure is whether color is enabled or
not. This avoids needing to ask the `colored` crate whether colors have
been globally enabled or disabled. And, more crucially, avoids the need
to _set_ this global flag for testing diagnostic output. Doing so can
have unintended consequences, as outlined in #16115.
Fixes #16115
This essentially makes it impossible to construct a `Diagnostic`
that has a `TextRange` but no `File`.
This is meant to be a precursor to multi-span support.
(Note that I consider this more of a prototyping-change and not
necessarily what this is going to look like longer term.)
Reviewers can probably review this PR as one big diff instead of
commit-by-commit.
## Summary
Transition to using coarse-grained tracked structs (depends on
https://github.com/salsa-rs/salsa/pull/657). For now, this PR doesn't
add any `#[tracked]` fields, meaning that any changes cause the entire
struct to be invalidated. It also changes `AstNodeRef` to be
compared/hashed by pointer address, instead of performing a deep AST
comparison.
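A sketch of the pointer-identity comparison (types are stand-ins; the real `AstNodeRef` also keeps the parsed module alive so the pointer stays valid):

```rust
use std::hash::{Hash, Hasher};
use std::sync::Arc;

struct Parsed; // stand-in for the owned AST

struct AstNodeRef {
    _parsed: Arc<Parsed>, // keeps the AST (and thus the pointer) alive
    node: *const (),
}

impl PartialEq for AstNodeRef {
    fn eq(&self, other: &Self) -> bool {
        // Equal iff both refs point at the same node in memory; no deep
        // AST comparison is performed.
        std::ptr::eq(self.node, other.node)
    }
}

impl Eq for AstNodeRef {}

impl Hash for AstNodeRef {
    fn hash<H: Hasher>(&self, state: &mut H) {
        (self.node as usize).hash(state);
    }
}
```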
## Test Plan
This yields a 10-15% improvement on my machine (though weirdly some runs
were 5-10%, without being flagged as inconsistent by criterion; is there
some non-determinism involved?). It's possible that some of this is
unrelated, I'll try applying the patch to the current salsa version to
make sure.
---------
Co-authored-by: Micha Reiser <micha@reiser.io>
## Summary
This PR adds support for user-level configurations
(`~/.config/knot/knot.toml`) to Red Knot.
Red Knot will watch the user-level configuration file for changes, but
only if it exists when the process starts. It doesn't watch for new
configurations, mainly to simplify things for now (it would require
watching the entire `.config` directory because the `knot` subfolder
might not exist either).
The new `ConfigurationFile` struct seems a bit overkill for now, but I
plan to use it for hierarchical configurations as well.
Red Knot uses the same strategy as uv and Ruff by using the `etcetera`
crate.
## Test Plan
Added CLI and file watching test
## Summary
This PR adds a new `user_configuration_directory` method to `System`. We
need it to resolve where to look up a user-level `knot.toml`
configuration file.
The method belongs to `System` because not all platforms have a
convention of where to store such configuration files (e.g. wasm).
I refactored `TestSystem` to be a simple wrapper around an `Arc<dyn
System...>` and use the `System::as_any` method instead to cast it down
to an `InMemory` system. I also removed some `System`-specific methods
from `InMemoryFileSystem`; they don't belong there.
This PR removes the `os` feature as a default feature from `ruff_db`.
Most crates depending on `ruff_db` don't need it because they only
depend on `System` or only depend on `os` for testing. This was
necessary to fix a compile error with `red_knot_wasm`.
## Test Plan
I'll make use of the method in my next PR, so I guess we won't know
whether it works until then, but I copied the code from Ruff/uv, so I
have high confidence that it is correct.
`cargo test`
## Summary
Adds a JSON schema generation step for Red Knot. This PR doesn't yet add
a publishing step because it's still a bit early for that
## Test plan
I tested the schema in Zed, VS Code and PyCharm:
* PyCharm: You have to manually add a schema mapping (settings JSON
Schema Mappings)
* Zed and VS Code support the inline schema specification
```toml
#:schema /Users/micha/astral/ruff/knot.schema.json
[environment]
extra-paths = []
[rules]
call-possibly-unbound-method = "error"
unknown-rule = "error"
# duplicate-base = "error"
```
```json
{
"$schema": "file:///Users/micha/astral/ruff/knot.schema.json",
"environment": {
"python-version": "3.13",
"python-platform": "linux2"
},
"rules": {
"unknown-rule": "error"
}
}
```
https://github.com/user-attachments/assets/a18fcd96-7cbe-4110-985b-9f1935584411
The Schema overall works but all editors have their own quirks:
* PyCharm: Hovering a name always shows the section description instead
of the description of the specific setting. But it's the same for other
settings in `pyproject.toml` files 🤷
* VS Code (JSON): Using the generated schema in a JSON file gives
exactly the experience I want
* VS Code (TOML):
* Properties with multiple possible values are repeated during
auto-completion without giving any hint of how they're different.
* The property description mushes together the description of the
property and the value, which looks sort of ridiculous.
* Autocompletion and documentation hovering works (except the
limitations mentioned above)
* Zed:
* Very similar to VS Code, with the exception that it uses the
description attribute to distinguish settings with multiple possible
values
I don't think there's much we can do here other than hope (or help)
editors improve their autocompletion. The same shortcomings also apply
to Ruff, so this isn't something new. For now, I think this is good
enough.
I found it useful to have the `&dyn Diagnostic` trait impl
specifically. I added `Arc<dyn Diagnostic>` for completeness.
(I do kind of wonder if we should be preferring `Arc<dyn ...>`
over something like `Box<dyn ...>` more generally, especially
for things with immutable APIs. It would make cloning cheap.)
This change does a simple swap of the existing renderer for one that
uses our vendored copy of `annotate-snippets`. We don't change anything
about the diagnostic data model, but this alone already makes
diagnostics look a lot nicer!
More refinements to the panic messages for failing mdtests to mimic the
output of the default panic hook more closely:
- We now print out `Box<dyn Any>` if the panic payload is not a string
(which is typically the case for salsa panics).
- We now include the panic's backtrace if you set the `RUST_BACKTRACE`
environment variable.
This fixes#15317. Our `catch_unwind` wrapper installs a panic hook that
captures (the rendered contents of) the panic info when a panic occurs.
Since the intent is that the caller will render the panic info in some
custom way, the hook silences the default stderr panic output.
However, the panic hook is a global resource, so if any one thread was
in the middle of a `catch_unwind` call, we would silence the default
panic output for _all_ threads.
The solution is to also keep a thread local that indicates whether the
current thread is in the middle of our `catch_unwind`, and to fall back
on the default panic hook if not.
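A sketch of the thread-local dance (simplified; the real wrapper also captures the rendered panic info for the caller):

```rust
use std::cell::Cell;
use std::panic;

thread_local! {
    /// True while the current thread is inside our `catch_unwind` wrapper.
    static CAPTURING: Cell<bool> = const { Cell::new(false) };
}

fn install_hook() {
    let default_hook = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
        if CAPTURING.with(|c| c.get()) {
            // ... capture the rendered panic info for this thread ...
        } else {
            // A panic on some other thread: keep the default stderr output.
            default_hook(info);
        }
    }));
}

fn catch_unwind_with_capture<R>(f: impl FnOnce() -> R) -> std::thread::Result<R> {
    CAPTURING.with(|c| c.set(true));
    let result = panic::catch_unwind(panic::AssertUnwindSafe(f));
    CAPTURING.with(|c| c.set(false));
    result
}
```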
## Test Plan
Artificially added an mdtest parse error, ran tests via `cargo test -p
red_knot_python_semantic` to run a large number of tests in parallel.
Before this patch, the panic message was swallowed as reported in
#15317. After, the panic message was shown.
This updates the mdtest harness to catch any panics that occur during
type checking, and to display the panic message as an mdtest failure.
(We don't know which specific line causes the failure, so we attribute
panics to the first line of the test case.)
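A minimal sketch of the catch-and-report logic, including the string-payload handling mentioned earlier:

```rust
use std::panic;

/// Runs one check, converting a panic into a test-failure message.
/// Panic payloads are usually `&str` or `String`; anything else is
/// reported opaquely as `Box<dyn Any>`.
fn run_case(check: impl FnOnce() + panic::UnwindSafe) -> Result<(), String> {
    panic::catch_unwind(check).map_err(|payload| {
        if let Some(s) = payload.downcast_ref::<&str>() {
            (*s).to_string()
        } else if let Some(s) = payload.downcast_ref::<String>() {
            s.clone()
        } else {
            "Box<dyn Any>".to_string()
        }
    })
}
```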
## Summary
Fixes https://github.com/astral-sh/ruff/issues/15027
The `MemoryFileSystem::write_file` API automatically creates
non-existing ancestor directories, but we failed to update the status of
the now-created ancestor directories in the `Files` data structure.
## Test Plan
Tested that the case in https://github.com/astral-sh/ruff/issues/15027
now passes regardless of whether the *Simple* case is commented out or
not
## Summary
This PR extends the mdtest configuration with a `log` setting that can
be any of:
* `true`: Enables tracing
* `false`: Disables tracing (default)
* String: an env filter similar to `RED_KNOT_LOG`
```toml
log = true
```
Closes https://github.com/astral-sh/ruff/issues/13865
## Test Plan
I changed a test and tried `log=true`, `log=false`, and `log=INFO`
## Summary
Fixes a small scoping issue in `DiagnosticId::matches`
Note: I don't think we should use `lint:id` in mdtests just yet. I worry
that it could lead to a lot of unnecessary churn if we decide **not** to
use `lint:<id>` as the format (e.g., `lint/id`).
The reason why users even see `lint:<rule>` at all is that the mdtest
framework uses the diagnostic infrastructure.
Closes #14910
## Test Plan
Added tests
## Summary
This PR introduces a structured `DiagnosticId` instead of using a plain
`&'static str`. It is the first of three in a stack that implements a
basic rules infrastructure for Red Knot.
`DiagnosticId` is an enum over all known diagnostic codes. A closed enum
reduces the risk of accidentally introducing two identical diagnostic
codes. It also opens the possibility of generating reference
documentation from the enum in the future (not part of this PR).
The enum isn't *fully closed* because it uses a `&'static str` for lint
names. This is because we want the flexibility to define lints in
different crates, and all names are only known in `red_knot_linter` or
above. Still, lower-level crates must already reference the lint names
to emit diagnostics. We could define all lint-names in `DiagnosticId`
but I decided against it because:
* We probably want to share the `DiagnosticId` type between Ruff and Red
Knot to avoid extra complexity in the diagnostic crate, and both tools
use different lint names.
* Lints require a lot of extra metadata beyond just the name. That's why
I think defining them close to their implementation is important.
In the long term, we may also want to support plugins, which would make
it impossible to know all lint names at compile time. The next PR in the
stack introduces extra syntax for defining lints.
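A sketch of the resulting shape (variants are illustrative, not the full set):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum DiagnosticId {
    /// Non-lint diagnostics are closed variants, so duplicate codes are
    /// impossible by construction.
    Io,
    RevealedType,
    /// Lint names stay open-ended `&'static str`s so lints can be defined
    /// in higher-level crates, close to their implementations.
    Lint(&'static str),
}
```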
A closed enum does have a few disadvantages:
* rustc can't help us detect unused diagnostic codes because the enum is
public
* Adding a new diagnostic in the workspace crate now requires changes to
at least two crates: It requires changing the workspace crate to add the
diagnostic and the `ruff_db` crate to define the diagnostic ID. I
consider this an acceptable trade. We may want to move `DiagnosticId` to
its own crate or into a shared `red_knot_diagnostic` crate.
## Preventing duplicate diagnostic identifiers
One goal of this PR is to make it harder to introduce ambiguous
diagnostic IDs, which is achieved by defining a closed enum. However,
the enum isn't fully "closed" because it doesn't explicitly list the IDs
for all lint rules. That leaves the possibility that a lint rule and a
diagnostic ID share the same name.
I made the names unambiguous in this PR by separating them into
different namespaces by using `lint/<rule>` for lint rule codes. I don't
mind the `lint` prefix in a *Ruff next* context, but it is a bit weird
for a standalone type checker. I'd like to not overfocus on this for now
because I see a few different options:
* We remove the `lint` prefix and add a unit test in a top-level crate
that iterates over all known lint rules and diagnostic IDs to ensure the
names are non-overlapping.
* We only render `[lint]` as the error code and add a note to the
diagnostic mentioning the lint rule. This is similar to clippy and has
the advantage that the header line remains short
(`lint/some-long-rule-name` is very long ;))
* Any other form of adjusting the diagnostic rendering to make the
distinction clear
I think we can defer this decision for now because the `DiagnosticId`
contains all the relevant information to change the rendering
accordingly.
## Why `Lint` and not `LintRule`
I see three kinds of diagnostics in Red Knot:
* Non-suppressable: Reveal type, IO errors, configuration errors, etc.
(any `DiagnosticId`)
* Lints: code-related diagnostics that are suppressible.
* Lint rules: The same as lints, but they can be enabled or disabled in
the configuration. The majority of lints in Red Knot and the Ruff
linter.
Our current implementation doesn't distinguish between lints and lint
rules because we aren't aware of a suppressible code-related lint that
can't be configured in the configuration. The only lint that comes to my
mind is maybe `division-by-zero` if we're 99.99% sure that it is always
right. However, I want to keep the door open to making this distinction
in the future if it proves useful.
Another reason why I chose lint over lint rule (or just rule) is that I
want to leave room for a future lint rule and lint phase concept:
* lint is the *what*: a specific code smell, pattern, or violation
* the lint rule is the *how*: I could see a future `LintRule` trait in
`red_knot_python_linter` that provides the necessary hooks to run as
part of the linter. A lint rule produces diagnostics for exactly one
lint. A lint rule differs from all lints in `red_knot_python_semantic`
because they don't run as "rules" in the Ruff sense. Instead, they're a
side-product of type inference.
* the lint phase is a different form of *how*: A lint phase can produce
many different lints in a single pass. This is a somewhat common pattern
in Ruff where running one analysis collects the necessary information
for finding many different lints
* diagnostic is the *presentation*: Unlike a lint, the diagnostic isn't
the what, but how a specific lint gets presented. I expect that many
lints can use one generic `LintDiagnostic`, but a few lints might need
more flexibility and implement their custom diagnostic rendering (at
least custom `Diagnostic` implementation).
## Test Plan
`cargo test`
## Summary
- Add 383 files from `crates/ruff_python_parser/resources` to the test
corpus
- Add 1296 files from `crates/ruff_linter/resources` to the test corpus
- Use in-memory file system for tests
- Improve test isolation by cleaning the test environment between checks
- Add a mechanism for "known failures". Mark ~80 files as known
failures.
- The corpus test is now a lot slower (6 seconds).
Note:
While `red_knot` as a command line tool can run over all of these
files without panicking, we still have a lot of test failures caused by
explicitly "pulling" all types.
## Test Plan
Run `cargo test -p red_knot_workspace` while making sure that
- Introducing code that is known to lead to a panic fails the test
- Removing code that is known to lead to a panic from
`KNOWN_FAILURES`-files also fails the test
## Summary
...and remove periods from messages that don't span more than a single
sentence.
This is more consistent with how we present user-facing messages in uv
(which has a defined style guide).
## Summary
This PR removes the typeshed stubs from the vendored file system shipped
with ruff and instead ships an empty "typeshed".
Making the typeshed files optional required extracting the typeshed
files into a new `ruff_vendored` crate. I do like this even if all our
builds always include typeshed, because it means
`red_knot_python_semantic` contains less code that needs compiling.
This also allows us to use deflate because the compression algorithm
doesn't matter for an archive containing a single, empty file.
## Test Plan
`cargo test`
I verified with `cargo tree -f "{p} {f}" -p <package>` that:
* red_knot_wasm: enables `deflate` compression
* red_knot: enables `zstd` compression
* `ruff`: uses stored
I'm not quite sure how to build the binary that maturin builds, but
comparing the release artifact size with `strip = true` shows a `1.5MB`
size reduction.
---------
Co-authored-by: Charlie Marsh <charlie.r.marsh@gmail.com>
## Summary
This PR adds an experimental Ruff subcommand to generate dependency
graphs based on module resolution.
A few highlights:
- You can generate either dependency or dependent graphs via the
`--direction` command-line argument.
- Like Pants, we also provide an option to identify imports from string
literals (`--detect-string-imports`).
- Users can also provide additional dependency data via the
`include-dependencies` key under `[tool.ruff.import-map]`. This map uses
file paths as keys, and lists of strings as values. Those strings can be
file paths or globs.
The dependency resolution uses the red-knot module resolver which is
intended to be fully spec compliant, so it's also a chance to expose the
module resolver in a real-world setting.
The CLI is, e.g., `ruff graph build ../autobot`, which will output a
JSON map from file to files it depends on for the `autobot` project.
Use declared types in inference and checking. This means several things:
* Imports prefer declarations over inference, when declarations are
available.
* When we encounter a binding, we check that the bound value's inferred
type is assignable to the live declarations of the bound symbol, if any.
* When we encounter a declaration, we check that the declared type is
assignable from the inferred type of the symbol from previous bindings,
if any.
* When we encounter a binding+declaration, we check that the inferred
type of the bound value is assignable to the declared type.
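A toy model of the two directions of the check (`Type` and `is_assignable_to` stand in for the real semantic-model types):

```rust
#[derive(Debug, Clone, PartialEq)]
enum Type {
    Int,
    Str,
    Object,
}

impl Type {
    fn is_assignable_to(&self, target: &Type) -> bool {
        self == target || *target == Type::Object
    }
}

/// Binding (`x = value`): the value's inferred type must be assignable to
/// the symbol's live declared type, if any.
fn check_binding(declared: Option<&Type>, inferred: &Type) -> bool {
    declared.map_or(true, |declared| inferred.is_assignable_to(declared))
}

/// Declaration (`x: T`): the declared type must be assignable *from* the
/// inferred type of the symbol's previous bindings, if any.
fn check_declaration(declared: &Type, previous: Option<&Type>) -> bool {
    previous.map_or(true, |inferred| inferred.is_assignable_to(declared))
}
```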
Prototype deferred evaluation of type expressions by deferring
evaluation of class bases in a stub file. This allows self-referential
class definitions, as occur with the definition of `str` in typeshed
(which inherits `Sequence[str]`).
---------
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
## Summary
This PR simplifies the virtual file support in the red knot core,
specifically:
* Update `File::add_virtual_file` method to `File::virtual_file` which
will always create a new virtual file and override the existing entry in
the lookup table
* Add `VirtualFile` which is a wrapper around `File` and provides
methods to increment the file revision / close the virtual file
* Add a new `File::try_virtual_file` to lookup the `VirtualFile` from
`Files`
* Add `File::sync_virtual_path` which takes in the `SystemVirtualPath`,
looks up the `VirtualFile` for it and calls the `sync` method to
increment the file revision
* Removes the `virtual_path_metadata` method on `System` trait
## Test Plan
- [x] Make sure the existing red knot tests pass
- [x] Updated code works well with the LSP
## Summary
This PR adds support for untitled files in the Red Knot project.
Refer to the [design
discussion](https://github.com/astral-sh/ruff/discussions/12336) for
more details.
### Changes
* The `parsed_module` always assumes that the `SystemVirtual` path is of
`PySourceType::Python`.
* For the module resolver, as suggested, I went ahead by adding a new
`SystemOrVendoredPath` enum and renamed `FilePathRef` to
`SystemOrVendoredPathRef` (happy to consider better names here).
* The `file_to_module` query returns early if it's a
`FilePath::SystemVirtual` variant, because a virtual file doesn't belong
to any module.
* The sync implementation for the system virtual path is basically the
same as that of system path except that it uses the
`virtual_path_metadata`. The reason for this is that the system
(language server) indicates whether the virtual file still exists and,
if it does, provides the corresponding metadata.
For point (1), VS Code would use `Untitled-1` for Python files and
`Untitled-1.ipynb` for Jupyter Notebooks. We could use this distinction
to determine whether the source type is `Python` or `Ipynb`.
## Test Plan
Added test cases in #12526
Add a lint rule to detect if a name is definitely or possibly undefined
at a given usage.
If I create the file `undef/main.py` with contents:
```python
x = int
def foo():
    z
    return x
if flag:
    y = x
y
```
And then run `cargo run --bin red_knot -- --current-directory
../ruff-examples/undef`, I get the output:
```
Name 'z' used when not defined.
Name 'flag' used when not defined.
Name 'y' used when possibly not defined.
```
If I modify the file to add `y = 0` at the top, red-knot re-checks it
and I get the new output:
```
Name 'z' used when not defined.
Name 'flag' used when not defined.
```
Note that `int` is not flagged, since it's a builtin, and `return x` in
the function scope is not flagged, since it refers to the global `x`.
Implements definition-level type inference, with basic control flow
(only if statements and if expressions so far) in Salsa.
There are a couple key ideas here:
1) We can do type inference queries at any of three region
granularities: an entire scope, a single definition, or a single
expression. These are represented by the `InferenceRegion` enum, and the
entry points are the salsa queries `infer_scope_types`,
`infer_definition_types`, and `infer_expression_types`. Generally
per-scope will be used for scopes that we are directly checking and
per-definition will be used anytime we are looking up symbol types from
another module/scope. Per-expression should be uncommon: used only for
the RHS of an unpacking or multi-target assignment (to avoid
re-inferring the RHS once per symbol defined in the assignment) and for
test nodes in type narrowing (e.g. the `test` of an `If` node). All
three queries return a `TypeInference` with a map of types for all
definitions and expressions within their region. If you do e.g.
scope-level inference, when it hits a definition, or an
independently-inferable expression, it should use the relevant query
(which may already be cached) to get all types within the smaller
region. This avoids double-inferring smaller regions, even though larger
regions encompass smaller ones. (Both key ideas are sketched in code
after this list.)
2) Instead of building a control-flow graph and lazily traversing it to
find definitions which reach a use of a name (which is O(n^2) in the
worst case), instead semantic indexing builds a use-def map, where every
use of a name knows which definitions can reach that use. We also no
longer track all definitions of a symbol in the symbol itself; instead
the use-def map also records which defs remain visible at the end of the
scope, and considers these the publicly-visible definitions of the
symbol (see below).
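A sketch of both ideas, with stub ID types standing in for the real Salsa ingredients:

```rust
// Idea 1: inference runs at one of three region granularities.
enum InferenceRegion {
    Scope(ScopeId),
    Definition(DefinitionId),
    Expression(ExpressionId),
}

// Idea 2: the use-def map built during semantic indexing. Each use of a
// name records the definitions that can reach it, and the map separately
// records the definitions still visible at the end of the scope (the
// "public" definitions of each symbol).
struct UseDefMap {
    definitions_by_use: Vec<Vec<DefinitionId>>, // indexed by use
    public_definitions: Vec<Vec<DefinitionId>>, // indexed by symbol
}

// Stub identifiers to keep the sketch self-contained.
struct ScopeId(u32);
struct DefinitionId(u32);
struct ExpressionId(u32);
```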
Major items left as TODOs in this PR, to be done in follow-up PRs:
1) Free/global references aren't supported yet (only lookup based on
definitions in current scope), which means the override-check example
doesn't currently work. This is the first thing I'll fix as follow-up to
this PR.
2) Control flow outside of if statements and expressions.
3) Type narrowing.
There are also some smaller relevant changes here:
1) Eliminate `Option` in the return type of member lookups; instead
always return `Type::Unbound` for a name we can't find. Also use
`Type::Unbound` for modules we can't resolve (not 100% sure about this
one yet).
2) Eliminate the use of the terms "public" and "root" to refer to
module-global scope or symbols. Instead consistently use the term
"module-global". It's longer, but it's the clearest, and the most
consistent with typical Python terminology. In particular I don't like
"public" for this use because it has other implications around author
intent (is an underscore-prefixed module-global symbol "public"?). And
"root" is just not commonly used for this in Python.
3) Eliminate the `PublicSymbol` Salsa ingredient. Many non-module-global
symbols can also be seen from other scopes (e.g. by a free var in a
nested scope, or by class attribute access), and thus need to have a
"public type" (that is, the type not as seen from a particular use in
the control flow of the same scope, but the type as seen from some other
scope.) So all symbols need to have a "public type" (here I want to keep
the use of the term "public", unless someone has a better term to
suggest -- since it's "public type of a symbol" and not "public symbol"
the confusion with e.g. initial underscores is less of an issue.) At
least initially, I would like to try not having special handling for
module-global symbols vs other symbols.
4) Switch to using "definitions that reach end of scope" rather than
"all definitions" in determining the public type of a symbol. I'm
convinced that in general this is the right way to go. We may want to
refine this further in future for some free-variable cases, but it can
be changed purely by making changes to the building of the use-def map
(the `public_definitions` index in it), without affecting any other
code. One consequence of combining this with no control-flow support
(just last-definition-wins) is that some inference tests now give more
wrong-looking results; I left TODO comments on these tests to fix them
when control flow is added.
And some potential areas for consideration in the future:
1) Should `symbol_ty` be a Salsa query? This would require making all
symbols a Salsa ingredient, and tracking even more dependencies. But it
would save some repeated reconstruction of unions, for symbols with
multiple public definitions. For now I'm not making it a query, but open
to changing this in future with actual perf evidence that it's better.