Default Branch

2551e4ce98 · server: allow custom temp directory for ffmpeg (#3564) · Updated 2025-12-13 02:37:44 -05:00

Branches (commits behind / ahead of the default branch; "Included" marks branches already merged into it)

f0c9017a2f · ggml : arm repack fix build (#0) · Updated 2025-12-13 01:04:09 -05:00 · 1 behind / 0 ahead · Included
7d79ef9fb0 · Initial plan · Updated 2025-11-18 05:37:04 -05:00 · 143 behind / 1 ahead
8a7f3b03c9 · ggml : bump version to 0.9.4 (ggml/1363) · Updated 2025-09-30 06:44:11 -04:00 · 400 behind / 1 ahead
dc8dda60ee · bench : print system info before ctx check · Updated 2025-06-25 09:01:32 -04:00 · 898 behind / 0 ahead · Included
bff8dc248a · talk-llama : sync llama.cpp · Updated 2025-05-13 06:20:19 -04:00 · 1137 behind / 21 ahead
0055356fbc · cli : avoid std::exchange · Updated 2025-05-07 06:23:06 -04:00 · 1176 behind / 10 ahead
10acc21fa3 · make : fix samples glob pattern · Updated 2025-04-30 07:20:50 -04:00 · 1204 behind / 1 ahead
becd0c888e · whisper : reduce delta_min from 1000ms to 100ms · Updated 2025-04-10 05:25:29 -04:00 · 1305 behind / 1 ahead
e400aeb770 · examples : add new sources · Updated 2025-04-02 08:52:29 -04:00 · 1323 behind / 3 ahead
05ce7476ae · ggml-ci: update input env variables to GG_BUILD_ · Updated 2025-03-14 04:14:44 -04:00 · 1464 behind / 1 ahead
00ddb10fe2 · select utf8 codepage on windows · Updated 2025-02-19 04:00:39 -05:00 · 1575 behind / 2 ahead
b0aeef2d52 · ci : fix windows builds to use 2019 · Updated 2024-11-21 07:28:14 -05:00 · 1842 behind / 1 ahead
b67bdc9430 · disable · Updated 2024-11-20 16:18:58 -05:00 · 1842 behind / 4 ahead
511579cc15 · ci : use local ggml · Updated 2024-11-16 13:31:57 -05:00 · 1886 behind / 1 ahead
552419f2c0 · ggml : aligned malloc -> malloc · Updated 2024-10-31 15:40:11 -04:00 · 1986 behind / 3 ahead
ceb77363cd · ggml : disable CUDA graphs for non-llama.cpp projects · Updated 2024-06-26 13:14:22 -04:00 · 2283 behind / 1 ahead
267e15a46d · cuda : avoid async allocs in CUDA mel code · Updated 2024-06-12 02:52:15 -04:00 · 2400 behind / 1 ahead
5801b8ac64 · cuda : fix HIPBLAS build · Updated 2024-06-11 12:13:43 -04:00 · 2401 behind / 1 ahead
13c5446759 · Update ggml-cuda/mmvq.cu · Updated 2024-06-11 10:37:32 -04:00 · 2403 behind / 2 ahead
059bcd3009 · ci : fix CUDA builds · Updated 2024-06-11 04:40:19 -04:00 · 2403 behind / 1 ahead