llama.cpp.git: commit log for llama.h (branch: master)

Date | Commit message | Author
2023-05-06 | Remove default arguments from sampling functions (#1343) | Jed Fox
2023-05-02 | llama : only copy used KV cache in get / set state (#1272) | Evan Jones
2023-05-02 | llama : fix compile warnings | Georgi Gerganov
2023-05-02 | llama : allow 0 as a seed number. (#1275) | Robert Brisita
2023-05-01 | llama : fix session load / save (#1263) | Georgi Gerganov
2023-05-01 | llama : let context be const when accessing const data (#1261) | Alex Klinkhamer
2023-04-29 | llama : new sampling algorithms (#1126) | Ivan Stepanov
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218) | Stephan Walter
2023-04-28 | llama : add session file format and saved sessions in main (#1169) | Evan Jones
2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187) | Georgi Gerganov
2023-04-26 | Allow setting the rng seed after initialization. (#1184) | Ásgeir Bjarni Ingvarsson
2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (... | Georgi Gerganov
2023-04-24 | llama : refactor get / set state + remove redundant kv cache API (#1143) | Georgi Gerganov
2023-04-22 | llama : add api for getting/setting the complete state: rng, logits, embeddin... | xaedes
2023-04-20 | llama : multi-threaded quantization (#1075) | Kawrakow
2023-04-20 | ggml : add Q4_3 quantization (#1082) | Georgi Gerganov
2023-04-18 | ggml : add new Q4_2 quantization (ARM only) (#1046) | Georgi Gerganov
2023-04-17 | Add LoRA support (#820) | slaren
2023-04-13 | llama : merge llama_internal.h into llama.h | Georgi Gerganov
2023-04-12 | Don't crash on ftype (formerly f16) == 4 (#917) | Stephan Walter
2023-04-11 | Add enum llama_ftype, sync ggml_type to model files (#709) | Stephan Walter
2023-04-10 | Rewrite loading code to try to satisfy everyone: | comex
2023-04-08 | Add quantize-stats command for testing quantization (#728) | unbounded
2023-04-02 | Added api for getting/setting the kv_cache (#685) | Christian Falch
2023-03-30 | Make loading weights 10-100x faster | Justine Tunney
2023-03-29 | Fix typo in llama.h (#593) | anzz1
2023-03-28 | llama : fix linkage with mingw (#551) | anzz1
2023-03-28 | all : be more strict about converting float to double (#458) | Stephan Walter
2023-03-28 | ggml : introduce structs for the q4 data blocks (#356) | Stephan Walter
2023-03-25 | Cleanup STL headers + fix embedding examples + minor stuff | Georgi Gerganov
2023-03-25 | Add support for file load progress reporting callbacks (#434) | Jed Fox
2023-03-25 | Add missing struct annotation (#483) | Doomsdayrs
2023-03-24 | Support calling mlock() on loaded model data on Linux and macOS (#453) | comex
2023-03-24 | Add embedding mode with arg flag. Currently working (#282) | Luciano
2023-03-22 | Introduce C-style API (#370) | Georgi Gerganov