| File | Last commit | Date |
| --- | --- | --- |
| .gitignore | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| 0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0002-pretokenizer.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0003-clip-unicode.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0004-solar-pro.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0005-fix-deepseek-deseret-regex.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0006-maintain-ordering-for-rules-for-grammar.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0007-sort-devices-by-score.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0008-add-phony-target-ggml-cpu-for-all-cpu-variants.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0009-remove-amx.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0010-fix-string-arr-kv-loading.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0011-ollama-debug-tensor.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0012-add-ollama-vocab-for-grammar-support.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0013-add-argsort-and-cuda-copy-for-i32.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0014-graph-memory-reporting-on-failure.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0015-ggml-Export-GPU-UUIDs.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0016-add-C-API-for-mtmd_input_text.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0017-no-power-throttling-win32-with-gnuc.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0018-ggml-Add-batch-size-hint.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0019-fix-mtmd-audio.cpp-build-on-windows.patch | Remove unnecessary MacOs 13 and lower Patches (#12656) | 2025-11-06 15:52:56 -08:00 |
| 0020-ggml-No-alloc-mode.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0021-decode-disable-output_all.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0022-ggml-Enable-resetting-backend-devices.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0023-harden-uncaught-exception-registration.patch | Remove unnecessary MacOs 13 and lower Patches (#12656) | 2025-11-06 15:52:56 -08:00 |
| 0024-GPU-discovery-enhancements.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0025-NVML-fallback-for-unified-memory-GPUs.patch | Remove unnecessary MacOs 13 and lower Patches (#12656) | 2025-11-06 15:52:56 -08:00 |
| 0026-report-LoadLibrary-failures.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0027-interleave-multi-rope.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0028-Add-memory-detection-using-DXGI-PDH.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0029-ggml-cuda-skip-large-batches.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0030-win-exit-instead-of-abort.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0031-fix-bakllava-regression.patch | feat: llama.cpp bump (17f7f4) for SSM performance improvements (#13408) | 2025-12-10 12:59:27 -08:00 |
| 0032-llama-add-support-for-NVIDIA-Nemotron-Nano-3.patch | llama/parsers/renderers: nemotron 3 nano (#13489) | 2025-12-15 18:00:08 -08:00 |