This is still early days, but I hope the framework introduced here makes it very easy to add new truth data. Truth data should be seen as a form of regression test for non-ideal ranking of completion suggestions. I think it would help to read `crates/ty_completion_eval/README.md` first to get an idea of what you're reviewing.
Directory listing:

- higher-level-symbols-preferred
- import-deprioritizes-dunder
- import-deprioritizes-sunder
- internal-typeshed-hidden
- numpy-array
- object-attr-instance-methods
- raise-uses-base-exception
- scope-existing-over-new-import
- scope-prioritize-closer
- scope-simple-long-identifier
- ty-extensions-lower-stdlib
- type-var-typing-over-ast
- README.md
README.md
This directory contains truth data for ty's completion evaluation.
Adding new truth data
To add new truth data, you can either add a new `<CURSOR>` directive to an existing Python project in this directory or create a new Python project. To create a new directory, just `cp -a existing new` and modify it as needed. Then:
- Check `completion.toml` for relevant settings.
- Run `uv lock` after updating `pyproject.toml` (if necessary) to ensure the dependency versions in `uv.lock` are locked.
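To make the `<CURSOR>` step concrete, here is a minimal sketch of what a source file in a truth-data project might look like. The file name, the surrounding code, and the comment about expected ranking are illustrative assumptions; only the `<CURSOR>` directive itself comes from this README, and how the expected completions are actually recorded (in the directive, in `completion.toml`, or elsewhere) is not specified here.

```python
# Hypothetical fixture file for a truth-data project (the name and
# contents are illustrative, not copied from the repository). The
# <CURSOR> token marks where the evaluator requests completions, so
# the file is not valid Python until the harness handles the directive.
import collections

counter = collections.Counter("abracadabra")

# One plausible expectation: after `counter.`, methods like `most_common`
# should rank above dunder attributes such as `__class__`.
counter.<CURSOR>
```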