ollama/ml
Latest commit bd6c1d6b49 by Daniel Hiltgen (2025-12-12 13:27:19 -08:00):

flash attn: add auto mode for llama engine (#13052)

* flash attn: add auto mode for llama engine

  If the user does not specify flash attention (fa) in the environment, use auto mode.

* review comments

* ensure quantized KV cache types have FA explicitly enabled

  additional review comments
Name         Latest commit                                         Date
backend/     flash attn: add auto mode for llama engine (#13052)   2025-12-12 13:27:19 -08:00
nn/          model: add rnj-1 inference support (#13354)           2025-12-08 16:49:17 -08:00
backend.go   flash attn: add auto mode for llama engine (#13052)   2025-12-12 13:27:19 -08:00
device.go    flash attn: add auto mode for llama engine (#13052)   2025-12-12 13:27:19 -08:00
path.go      cpu: always ensure LibOllamaPath included (#12890)    2025-10-31 14:37:29 -07:00