ollama/llama/llama.cpp/src
Latest commit b95693056c by Gabe Goodhart
feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408)
* feat: Bump llama.cpp to the latest master (17f7f4b)

This brings in significant prefill performance improvements on Apple Metal
for all models that use the SSM_CONV and SSM_SCAN ops (granite4, jamba,
falcon-h, nemotron-h, Qwen3 Next).

See https://github.com/ggml-org/llama.cpp/pull/17876
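For readers unfamiliar with these ops: SSM_CONV is a short depthwise causal convolution over the time axis, and SSM_SCAN evaluates the selective state-space recurrence at the heart of Mamba-style layers. Both run once per prompt token, which is why their cost shows up in prefill. The sketch below is a minimal sequential reference for the scan under simplifying assumptions (a single channel, diagonal A, Mamba-style discretization); `ssm_scan_ref` and its shapes are illustrative, not ggml's actual API or tensor layout.

```cpp
#include <cmath>
#include <vector>

// Sequential reference for the selective-scan recurrence (one channel,
// diagonal A): h_t = exp(dt_t * A) * h_{t-1} + dt_t * B_t * x_t,
// y_t = <C_t, h_t>. Names and shapes are illustrative, not ggml's API.
void ssm_scan_ref(int T, int N,                  // T: tokens, N: state size
                  const std::vector<float> &A,   // [N]   diagonal state matrix
                  const std::vector<float> &B,   // [T*N] per-token input projection
                  const std::vector<float> &C,   // [T*N] per-token readout projection
                  const std::vector<float> &dt,  // [T]   per-token step size
                  const std::vector<float> &x,   // [T]   input sequence
                  std::vector<float> &y) {       // [T]   output sequence
    std::vector<float> h(N, 0.0f);               // recurrent state, carried across tokens
    for (int t = 0; t < T; ++t) {
        float yt = 0.0f;
        for (int n = 0; n < N; ++n) {
            const float a = std::exp(dt[t] * A[n]);       // discretize A
            h[n] = a * h[n] + dt[t] * B[t*N + n] * x[t];  // state update
            yt  += C[t*N + n] * h[n];                     // readout
        }
        y[t] = yt;
    }
}
```

Because each h_t depends on h_{t-1}, backends generally parallelize this across channels and state entries rather than across time, which is where Metal kernel improvements pay off for long prompts.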

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update patches 1-4

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* fix: Update patches 5-12

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update patches 13-18

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update patch 20

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Update patches 21-31

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

* feat: Sync vendored code

Two files here I'm not sure about: the swap from gemma3-iswa.cpp to
gemma3.cpp, which I chose to include because I believe it's required, and
`ggml-zendnn.h`, which I chose to omit.

Branch: LlamaCPPMetalSSMImprovements

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>

---------

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-12-10 12:59:27 -08:00
Name | Last commit message | Last commit date
models | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-adapter.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-adapter.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-arch.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-arch.h | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-batch.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-batch.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-chat.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-chat.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-context.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-context.h | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-cparams.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
llama-cparams.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-grammar.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-grammar.h | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-graph.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-graph.h | ggml update to b6840 (#12791) | 2025-11-06 10:19:22 -08:00
llama-hparams.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-hparams.h | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-impl.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-impl.h | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-io.cpp | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00
llama-io.h | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00
llama-kv-cache-iswa.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-kv-cache-iswa.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-kv-cache.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-kv-cache.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-kv-cells.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-memory-hybrid.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00
llama-memory-hybrid.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-memory-recurrent.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-memory-recurrent.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-memory.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
llama-memory.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00
llama-mmap.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-mmap.h | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00
llama-model-loader.cpp | Llama cpp bump (df1b612): granite docling / mamba2 optimizations / multimodal encoding fixes (#12552) | 2025-10-13 15:26:18 -07:00
llama-model-loader.h | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
llama-model-saver.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00
llama-model-saver.h | llama: update to commit de4c07f93 (#10655) | 2025-05-12 12:17:26 -07:00
llama-model.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-model.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-quant.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-quant.h | next build (#8539) | 2025-01-29 15:03:38 -08:00
llama-sampling.cpp | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama-sampling.h | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00
llama-vocab.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
llama-vocab.h | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
llama.cpp | ggml update to b6840 (#12791) | 2025-11-06 10:19:22 -08:00
llama.go | ggml update to b7108 (#12992) | 2025-12-03 19:43:29 -08:00
unicode-data.cpp | next build (#8539) | 2025-01-29 15:03:38 -08:00
unicode-data.h | next build (#8539) | 2025-01-29 15:03:38 -08:00
unicode.cpp | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00
unicode.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00