llama.cpp.git — commit log (branch: master, path: root / llama.cpp)

Date        Commit message  [Author]
2023-05-20  llama : fix compile warnings in llama_set_state_data()  [Georgi Gerganov]
2023-05-19  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)  [Georgi Gerganov]
2023-05-19  minor : fix compile warnings  [Georgi Gerganov]
2023-05-18  make kv_f16 the default for api users (#1517)  [Erik Scholz]
2023-05-17  Remove unused n_parts parameter (#1509)  [Stephan Walter]
2023-05-13  llama : fix unused warning  [Georgi Gerganov]
2023-05-13  ggml : GPU-accelerated token generation (#1412)  [Johannes Gäßler]
2023-05-13  ggml : implement backward pass for llama + small training-llama-from-scratch ...  [xaedes]
2023-05-13  llama : fix various warnings  [Georgi Gerganov]
2023-05-13  llama : free ggml context in set / copy state data (close #1425)  [Georgi Gerganov]
2023-05-12  ggml : remove bit shuffling (#1405)  [Georgi Gerganov]
2023-05-08  llama : fix hparams shadow (#1367)  [Pavol Rusnak]
2023-05-08  llama : require first token to be BOS (#1303)  [Georgi Gerganov]
2023-05-06  Remove default arguments from sampling functions (#1343)  [Jed Fox]
2023-05-02  llama : only copy used KV cache in get / set state (#1272)  [Evan Jones]
2023-05-02  llama : fix compile warnings  [Georgi Gerganov]
2023-05-02  llama : allow 0 as a seed number. (#1275)  [Robert Brisita]
2023-05-02  ggml: add names to tensors (#1268)  [slaren]
2023-05-01  llama : fix session load / save (#1263)  [Georgi Gerganov]
2023-05-01  cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)  [slaren]
2023-05-01  llama : let context be const when accessing const data (#1261)  [Alex Klinkhamer]
2023-04-29  ggml : adjust mul_mat_f16 work memory (#1226)  [Georgi Gerganov]
2023-04-29  examples : fix save-load-state + rename llama-util.h  [Georgi Gerganov]
2023-04-29  llama : new sampling algorithms (#1126)  [Ivan Stepanov]
2023-04-29  cuBLAS: use host pinned memory and dequantize while copying (#1207)  [slaren]
2023-04-28  Remove Q4_3 which is no better than Q5 (#1218)  [Stephan Walter]
2023-04-28  llama : add session file format and saved sessions in main (#1169)  [Evan Jones]
2023-04-28  ggml : add CLBlast support (#1164)  [0cc4m]
2023-04-26  ggml : add Q5_0 and Q5_1 quantization (#1187)  [Georgi Gerganov]
2023-04-26  Allow setting the rng seed after initialization. (#1184)  [Ásgeir Bjarni Ingvarsson]
2023-04-25  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (...  [Georgi Gerganov]
2023-04-24  llama : increase scratch buffer size for 65B (ref #1152)  [Georgi Gerganov]
2023-04-24  llama : refactor get / set state + remove redundant kv cache API (#1143)  [Georgi Gerganov]
2023-04-23  ggml : better PERF prints + support "LLAMA_PERF=1 make"  [Georgi Gerganov]
2023-04-22  Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)  [Stephan Walter]
2023-04-22  ggml : fix AVX build + update to new Q8_0 format  [Georgi Gerganov]
2023-04-22  llama : add api for getting/setting the complete state: rng, logits, embeddin...  [xaedes]
2023-04-21  llama : remember and restore kv cache data pointers (#1104)  [xaedes]
2023-04-21  llama : fix comment for "output.weight" tensor  [Georgi Gerganov]
2023-04-20  ggml : sync ggml (add GPT-NeoX RoPE implementation)  [Georgi Gerganov]
2023-04-20  llama : multi-threaded quantization (#1075)  [Kawrakow]
2023-04-20  ggml : add Q4_3 quantization (#1082)  [Georgi Gerganov]
2023-04-19  Add NVIDIA cuBLAS support (#1044)  [slaren]
2023-04-18  ggml : add new Q4_2 quantization (ARM only) (#1046)  [Georgi Gerganov]
2023-04-17  Add LoRA support (#820)  [slaren]
2023-04-17  llama : well-defined static initialization of complex objects (#927)  [Arik Poznanski]
2023-04-17  Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933)  [Ivan Komarov]
2023-04-16  stdout : vertical align outputs for better readibility  [Georgi Gerganov]
2023-04-16  Fix msys2 build error and warnings (#1009)  [nanahi]
2023-04-14  Expose type name from ggml (#970)  [Pavol Rusnak]