Commit Graph

54 Commits

Author SHA1 Message Date
Georgi Gerganov 3f91832352 talk-llama : sync llama.cpp 2025-02-03 22:42:26 +02:00
Georgi Gerganov 99b011a9f5 talk-llama : sync llama.cpp 2025-01-14 10:38:01 +02:00
Georgi Gerganov 35d0e02c72 talk-llama : sync llama.cpp (#2709) 2025-01-13 08:55:48 +02:00
Georgi Gerganov 61edb117a0 talk-llama : sync llama.cpp 2024-12-18 12:52:16 +02:00
Georgi Gerganov f2c680f893 talk-llama : sync llama.cpp 2024-12-08 20:14:35 +02:00
Georgi Gerganov 06e059b8f8 talk-llama : sync llama.cpp 2024-11-20 21:00:08 +02:00
Georgi Gerganov 24d706774d talk-llama : sync llama.cpp 2024-11-15 15:21:04 +02:00
Georgi Gerganov c65d0fd3c8 talk-llama : sync llama.cpp 2024-11-01 10:19:05 +02:00
Georgi Gerganov 941912467d whisper : adapt to latest ggml (skip) (#0) 2024-10-05 15:23:51 +03:00
Georgi Gerganov ccc2547210 talk-llama : sync llama.cpp 2024-10-03 12:22:17 +03:00
Georgi Gerganov fe18c29ab8 talk-llama : sync llama.cpp 2024-09-24 19:45:08 +03:00
Georgi Gerganov da9809f243 talk-llama : sync llama.cpp 2024-08-28 13:22:20 +03:00
Georgi Gerganov 22058f2dbc talk-llama : sync llama.cpp 2024-08-08 22:48:46 +03:00
Georgi Gerganov dbf9c15e30 talk-llama : sync llama.cpp 2024-07-08 14:53:55 +03:00
Georgi Gerganov e30c679928 whisper : reorganize source code + improve CMake (#2256)
* scripts : update sync [no ci]

* files : reorganize [no ci]

* sync : llama.cpp

* cmake : link math library

* cmake : build normal ggml library

* files : move headers to include

* objc : fix path to ggml-metal.h

* ci : fix WHISPER_CUDA -> GGML_CUDA

* scripts : sync LICENSE [no ci]
2024-06-26 19:34:09 +03:00
Georgi Gerganov e293f17d34 talk-llama : sync llama.cpp 2024-06-18 09:45:37 +03:00
Georgi Gerganov 061eeb9f61 talk-llama : sync llama.cpp 2024-06-16 18:19:48 +03:00
Georgi Gerganov 3fa7d29876 talk-llama : sync llama.cpp 2024-05-13 11:02:26 +03:00
Georgi Gerganov 81a3c41aa0 talk-llama : sync llama.cpp 2024-04-07 16:21:08 +03:00
Georgi Gerganov 2948c740a2 sync : ggml (#2001)
* sync : update scripts

* sync : ggml

* talk-llama : sync llama.cpp

* make : WHISPER_CUBLAS -> WHISPER_CUDA

* ci : try to fix sycl build

* talk-llama : fix make build
2024-03-27 18:55:10 +02:00
Georgi Gerganov de4d067f1e talk-llama : sync llama.cpp 2024-03-15 14:21:59 +02:00
Georgi Gerganov 8e409d1113 talk-llama : sync llama.cpp 2024-03-08 11:55:50 +02:00
Georgi Gerganov 05d1b61af4 talk-llama : sync llama.cpp 2024-03-08 11:52:47 +02:00
Georgi Gerganov 25d313b38b talk-llama : sync llama.cpp 2024-02-28 13:04:05 +02:00
Georgi Gerganov 3170841ed9 talk-llama : sync llama.cpp 2024-02-25 20:00:10 +02:00
Georgi Gerganov a2506909b1 talk-llama : sync llama.cpp 2024-02-22 23:30:53 +02:00
Georgi Gerganov 59119f4f20 talk-llama : sync llama.cpp 2024-02-20 12:09:57 +02:00
Georgi Gerganov 551529290d talk-llama : sync llama.cpp 2024-02-12 10:39:58 +02:00
Georgi Gerganov 02b4c52c12 talk-llama : sync llama.cpp 2024-02-10 10:10:59 +02:00
Georgi Gerganov e72e4158de talk-llama : sync llama.cpp 2024-01-28 19:44:10 +02:00
Georgi Gerganov ef3c9ed9eb talk-llama : sync llama.cpp 2024-01-27 17:24:53 +02:00
Georgi Gerganov 1f50a7d29f sync : llama.cpp 2024-01-17 21:23:33 +02:00
Georgi Gerganov 6ebba525f1 talk-llama : sync llama.cpp 2024-01-14 18:08:20 +02:00
Georgi Gerganov 2a5874441d talk-llama : llama.cpp 2024-01-14 11:06:28 +02:00
Georgi Gerganov f001a3b7b6 talk-llama : sync llama.cpp 2024-01-14 00:13:17 +02:00
Georgi Gerganov 40ae0962f4 talk-llama : sync llama.cpp 2024-01-12 22:04:51 +02:00
Georgi Gerganov 00b7a4be02 talk-llama : sync llama.cpp 2024-01-11 22:10:10 +02:00
Georgi Gerganov 3b8c2dff57 talk-llama : sync latest llama.cpp 2024-01-06 17:22:57 +02:00
Georgi Gerganov 3a5302108d sync : ggml (ggml_scale, ggml_row_size, etc.) (#1677)
* sync : ggml

* sync : llama.cpp

* talk-llama : fix obsolete param

* ggml-alloc : fix ggml_tallocr_is_own

* talk.wasm : update to new ggml

* ggml : fix type punning in ggml_scale

* ggml : cuda jetson + arm quants warnings
2023-12-22 17:53:39 +02:00
Georgi Gerganov f96e1c5b78 sync : ggml (backend v2, k-quants, CUDA opts, Metal opts, etc.) (#1422)
* sync : ggml (backend v2, k-quants, CUDA opts, Metal opts, etc.)

* metal : allow env metal variable to override resource path (#1415)

* Allow env variable to override resource path

* Update ggml-metal.m

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* sync : restore common / main from `master`

* sync : restore whisper from `master`

* talk-llama : update to latest llama.cpp

* ruby : fix build

* ggml : fix 32-bit ARM build

* ggml : fix MIN / MAX macro collisions + update ios bindings

* ggml : fix ifdefs and MIN / MAX again

* examples : fix Obj-C and Swift examples

* ggml : fix 32-bit ARM compatibility

* ggml : one more attempt to fix 32-bit ARM compat

* whisper : fix support for larger graphs

---------

Co-authored-by: Chris Raethke <codesoda@users.noreply.github.com>
2023-11-03 21:35:05 +02:00
Georgi Gerganov 1ca4041b86 talk-llama : update to latest llama.cpp 2023-09-15 20:06:31 +03:00
Przemysław Pawełczyk b55b505690 build : do not use _GNU_SOURCE gratuitously (#1129)
* Do not use _GNU_SOURCE gratuitously.

What is needed to build whisper.cpp and the examples is the availability of
what is defined in The Open Group Base Specifications Issue 6
(https://pubs.opengroup.org/onlinepubs/009695399/), also known as the
Single Unix Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions,
plus some BSD interfaces that are not specified in POSIX.1.

Well, that was true until NUMA support was recently added to ggml, so
enable GNU libc extensions for Linux builds to cover that.

There is no need to penalize musl libc, which simply follows the standards.

Keeping feature test macros out of the source code gives greater flexibility
to those wanting to reuse it in a 3rd-party app, as they can build it with a
minimal FTM (_XOPEN_SOURCE=600) or other FTMs depending on their needs
(see the sketch after this entry).

It builds without issues on Alpine (musl libc), Ubuntu (glibc), and MSYS2.

* examples : include SDL headers before other headers

Avoid a macOS build error when _DARWIN_C_SOURCE is not defined, caused by
SDL2 relying on the Darwin extensions memset_pattern4/8/16 (from string.h).

* make : enable BSD extensions for DragonFlyBSD to expose RLIMIT_MEMLOCK

* make : use BSD-specific FTMs to enable alloca on BSDs

* make : fix OpenBSD build by exposing newer POSIX definitions

* cmake : follow recent FTM improvements from Makefile
2023-09-07 12:36:14 +03:00
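
For context, a minimal sketch of the build approach this commit describes,
assuming a POSIX system with a C99 compiler; the file name and the use of
strdup are illustrative stand-ins, not code from the repo:

    /* ftm_sketch.c — illustrative only.
     * Build: cc -std=c99 ftm_sketch.c
     * Requests POSIX.1-2001 + XSI (SUSv3) declarations explicitly instead
     * of defining _GNU_SOURCE; the macro must precede the first #include. */
    #define _XOPEN_SOURCE 600

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* strdup() is an XSI interface, so it is declared under this
         * feature test macro without any GNU extensions */
        char *s = strdup("built with a minimal FTM");
        if (s == NULL) return 1;
        puts(s);
        free(s);
        return 0;
    }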
Georgi Gerganov 2818de21ff examples : fix build + compile warnings (close #1256) 2023-09-07 12:33:12 +03:00
Georgi Gerganov fdf58a6668 talk-llama : fix new rope interface 2023-07-03 19:24:01 +03:00
Georgi Gerganov 8ba42095c5 Revert "ggml : do not use _GNU_SOURCE gratuitously (#1027)"
This reverts commit 3f7a03ebe3.
2023-07-02 21:53:52 +03:00
Przemysław Pawełczyk 3f7a03ebe3 ggml : do not use _GNU_SOURCE gratuitously (#1027)
* Do not use _GNU_SOURCE gratuitously.

What is needed to build whisper.cpp and the examples is the availability of
what is defined in The Open Group Base Specifications Issue 6
(https://pubs.opengroup.org/onlinepubs/009695399/), also known as the
Single Unix Specification v3 (SUSv3) or POSIX.1-2001 + XSI extensions.

There is no need to penalize musl libc, which simply follows the standards.

Keeping feature test macros out of the source code gives greater flexibility
to those wanting to reuse it in a 3rd-party app, as they can build it with a
minimal FTM (_XOPEN_SOURCE=600) or other FTMs depending on their needs.

It builds without issues on Alpine (musl libc), Ubuntu (glibc), and MSYS2.

* examples : include SDL headers before other headers

This is an attempt at fixing a macOS build error caused by SDL2 relying on
the Darwin extensions memset_pattern4/8/16 from Apple's string.h (sketched
after this entry).
2023-06-25 16:34:30 +03:00
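
A hedged sketch of the SDL include-order fix described in this commit and in
#1129 above, assuming SDL2 is installed and its include path is set (e.g. via
sdl2-config --cflags); the file is illustrative, not from the repo:

    /* sdl_order_sketch.c — illustrative only.
     * SDL headers come first, per the commit message, so that Apple's
     * string.h is seen before anything that could hide the Darwin
     * extensions (memset_pattern4/8/16) SDL2's headers rely on. */
    #include <SDL.h>

    /* standard / project headers follow the SDL ones */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[]) {
        (void)argc; (void)argv;
        if (SDL_Init(SDL_INIT_AUDIO) != 0) {
            fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
            return 1;
        }
        SDL_Quit();
        return 0;
    }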
Przemysław Pawełczyk 62642bb61c talk-llama : fix build after ggml sync (#1049)
sed -i 's,GGML_BACKEND_CUDA,GGML_BACKEND_GPU,g' examples/talk-llama/llama.cpp
2023-06-25 16:13:50 +03:00
Georgi Gerganov 77eab3fbfe talk-llama : sync latest llama.cpp (close #922, close #954) 2023-05-23 14:04:39 +03:00
Georgi Gerganov 0cb820e0f9 talk-llama : fix build + sync latest llama.cpp 2023-05-14 18:46:42 +03:00
Luis Herrera 4e4d00c67a talk-llama : only copy used KV cache in get / set state (#890)
---------

Co-authored-by: ejones <evan.q.jones@gmail.com>
2023-05-08 20:59:21 +03:00