path: root/llama.cpp
Age        | Commit message                                                                   | Author
2023-05-23 | OpenCL Token Generation Acceleration (#1459)                                     | 0cc4m
2023-05-20 | llama : define magic numbers as integer constants (#1518) (#1520)                | Juuso Alasuutari
2023-05-20 | cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti... | Johannes Gäßler
2023-05-20 | llama : add llama_init_backend() API (close #1527)                               | Georgi Gerganov
2023-05-20 | llama : fix name shadowing and C4146 (#1526)                                     | Maxime
2023-05-20 | llama : fix compile warnings in llama_set_state_data()                           | Georgi Gerganov
2023-05-19 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)                        | Georgi Gerganov
2023-05-19 | minor : fix compile warnings                                                     | Georgi Gerganov
2023-05-18 | make kv_f16 the default for api users (#1517)                                    | Erik Scholz
2023-05-17 | Remove unused n_parts parameter (#1509)                                          | Stephan Walter
2023-05-13 | llama : fix unused warning                                                       | Georgi Gerganov
2023-05-13 | ggml : GPU-accelerated token generation (#1412)                                  | Johannes Gäßler
2023-05-13 | ggml : implement backward pass for llama + small training-llama-from-scratch ... | xaedes
2023-05-13 | llama : fix various warnings                                                     | Georgi Gerganov
2023-05-13 | llama : free ggml context in set / copy state data (close #1425)                 | Georgi Gerganov
2023-05-12 | ggml : remove bit shuffling (#1405)                                              | Georgi Gerganov
2023-05-08 | llama : fix hparams shadow (#1367)                                               | Pavol Rusnak
2023-05-08 | llama : require first token to be BOS (#1303)                                    | Georgi Gerganov
2023-05-06 | Remove default arguments from sampling functions (#1343)                         | Jed Fox
2023-05-02 | llama : only copy used KV cache in get / set state (#1272)                       | Evan Jones
2023-05-02 | llama : fix compile warnings                                                     | Georgi Gerganov
2023-05-02 | llama : allow 0 as a seed number (#1275)                                         | Robert Brisita
2023-05-02 | ggml : add names to tensors (#1268)                                              | slaren
2023-05-01 | llama : fix session load / save (#1263)                                          | Georgi Gerganov
2023-05-01 | cuBLAS : fall back to pageable memory if pinned alloc fails (#1233)              | slaren
2023-05-01 | llama : let context be const when accessing const data (#1261)                   | Alex Klinkhamer
2023-04-29 | ggml : adjust mul_mat_f16 work memory (#1226)                                    | Georgi Gerganov
2023-04-29 | examples : fix save-load-state + rename llama-util.h                             | Georgi Gerganov
2023-04-29 | llama : new sampling algorithms (#1126)                                          | Ivan Stepanov
2023-04-29 | cuBLAS : use host pinned memory and dequantize while copying (#1207)             | slaren
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218)                                   | Stephan Walter
2023-04-28 | llama : add session file format and saved sessions in main (#1169)               | Evan Jones
2023-04-28 | ggml : add CLBlast support (#1164)                                               | 0cc4m
2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187)                                    | Georgi Gerganov
2023-04-26 | Allow setting the rng seed after initialization (#1184)                          | Ásgeir Bjarni Ingvarsson
2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (... | Georgi Gerganov
2023-04-24 | llama : increase scratch buffer size for 65B (ref #1152)                         | Georgi Gerganov
2023-04-24 | llama : refactor get / set state + remove redundant kv cache API (#1143)         | Georgi Gerganov
2023-04-23 | ggml : better PERF prints + support "LLAMA_PERF=1 make"                          | Georgi Gerganov
2023-04-22 | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)                  | Stephan Walter
2023-04-22 | ggml : fix AVX build + update to new Q8_0 format                                 | Georgi Gerganov
2023-04-22 | llama : add api for getting/setting the complete state: rng, logits, embeddin... | xaedes
2023-04-21 | llama : remember and restore kv cache data pointers (#1104)                      | xaedes
2023-04-21 | llama : fix comment for "output.weight" tensor                                   | Georgi Gerganov
2023-04-20 | ggml : sync ggml (add GPT-NeoX RoPE implementation)                              | Georgi Gerganov
2023-04-20 | llama : multi-threaded quantization (#1075)                                      | Kawrakow
2023-04-20 | ggml : add Q4_3 quantization (#1082)                                             | Georgi Gerganov
2023-04-19 | Add NVIDIA cuBLAS support (#1044)                                                | slaren
2023-04-18 | ggml : add new Q4_2 quantization (ARM only) (#1046)                              | Georgi Gerganov
2023-04-17 | Add LoRA support (#820)                                                          | slaren