whisper.cpp/ggml/include
Latest commit: ggml : remove GGML_KQ_MASK_PAD constant (llama/17910), Georgi Gerganov (cd9b8c6d18), 2025-12-12 17:53:24 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| ggml-alloc.h | ggml : upgrade init_tensor API to return a ggml_status (llama/11854) | 2025-03-08 15:13:01 +02:00 |
| ggml-backend.h | rpc : add support for multiple devices (llama/16276) | 2025-10-12 11:16:23 +03:00 |
| ggml-blas.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-cann.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-cpp.h | ggml : fix ggml_gallocr_ptr type (ggml/1205) | 2025-05-01 13:29:02 +03:00 |
| ggml-cpu.h | ggml: allow casting between f32 and i32 (llama/15783) | 2025-09-20 13:42:51 +03:00 |
| ggml-cuda.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-hexagon.h | Add experimental ggml-hexagon backend for the Hexagon NPU (llama/16547) | 2025-11-09 23:38:03 +02:00 |
| ggml-metal.h | metal : refactor + optimize v2 (llama/15995) | 2025-09-20 13:46:10 +03:00 |
| ggml-opencl.h | Introducing experimental OpenCL backend with support for Qualcomm Adreno GPUs (llama/10693) | 2024-12-18 12:52:16 +02:00 |
| ggml-opt.h | finetune: SGD optimizer, more CLI args (llama/13873) | 2025-08-18 20:30:45 +03:00 |
| ggml-rpc.h | rpc : fix alloc size logic (llama/17116) | 2025-12-12 17:53:18 +02:00 |
| ggml-sycl.h | ggml : build backends as libraries (llama/10256) | 2024-11-20 21:00:08 +02:00 |
| ggml-vulkan.h | vulkan: Make Vulkan optional at runtime (ggml/11493). (llama/11494) | 2025-02-27 08:55:36 +02:00 |
| ggml-webgpu.h | ggml: Add initial WebGPU backend (llama/14521) | 2025-07-20 00:23:50 +03:00 |
| ggml-zdnn.h | zdnn: refactor codebase + add docs (llama/16178) | 2025-09-29 15:18:09 +03:00 |
| ggml-zendnn.h | ggml-zendnn : add ZenDNN backend for AMD CPUs (llama/17690) | 2025-12-12 17:53:21 +02:00 |
| ggml.h | ggml : remove GGML_KQ_MASK_PAD constant (llama/17910) | 2025-12-12 17:53:24 +02:00 |
| gguf.h | GGUF: C++ refactor, backend support, misc fixes (skip) (llama/11030) | 2025-01-14 10:38:01 +02:00 |
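Taken together, ggml.h is the core tensor and graph API, ggml-alloc.h and ggml-backend.h cover allocation and the backend abstraction, and the remaining headers expose the individual backends. Below is a minimal sketch of how a consumer typically pulls these headers together, assuming the CPU backend is built and linked and this directory is on the include path; the vector length, memory budget, and values are illustrative, not taken from the repository:

```c
#include <stdio.h>

#include "ggml.h"      // core tensor/graph API
#include "ggml-cpu.h"  // CPU backend, incl. ggml_graph_compute_with_ctx

int main(void) {
    // Small context that owns all tensors and graphs created below.
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024, // illustrative budget
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,            // allocate tensor data inside the context
    };
    struct ggml_context * ctx = ggml_init(params);

    // Define c = a + b over two f32 vectors of length 4.
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * c = ggml_add(ctx, a, b);

    // Record the operations needed to produce c in a compute graph.
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);

    // Fill the inputs directly; data lives in the context since no_alloc=false.
    for (int i = 0; i < 4; ++i) {
        ((float *) a->data)[i] = (float) i;
        ((float *) b->data)[i] = 10.0f;
    }

    // Evaluate the graph on the CPU backend.
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads =*/ 1);

    printf("c[3] = %.1f\n", ((float *) c->data)[3]); // prints 13.0

    ggml_free(ctx);
    return 0;
}
```

Broadly, the per-backend headers (ggml-cuda.h, ggml-metal.h, ggml-vulkan.h, ...) expose init and capability-query entry points, while ggml-backend.h provides the device registry and scheduler used to dispatch graphs across whichever backends are linked in.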