path: root/llama.h
Age         Commit message  (Author)
2023-07-14  llama : add functions that work directly on model (#2197)  (Bach Le)
2023-07-11  llama : add classifier-free guidance (#2135)  (Bach Le)
2023-07-10  mpi : add support for distributed inference via MPI (#2099)  (Evan Miller)
2023-07-05  Expose generation timings from server & update completions.js (#2116)  (Tobias Lütke)
2023-06-29  Use unsigned for random seed (#2006)  (Howard Su)
2023-06-28  llama : support input embeddings directly (#1910)  (ningshanwutuobang)
2023-06-26  ggml : add NUMA support (#1556)  (zrm)
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)  (Didzis Gosko)
2023-06-20  llama : fix params struct slignment (#1936)  (Ettore Di Giacinto)
2023-06-15  examples : add chat-vicuna.sh (#1854)  (yangli2)
2023-06-14  CUDA full GPU acceleration, KV cache in VRAM (#1827)  (Johannes Gäßler)
2023-06-13  train : improved training-from-scratch example (#1652)  (xaedes)
2023-06-10  llama : support requantizing models instead of only allowing quantization fro...  (Kerfuffle)
2023-06-06  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)  (Johannes Gäßler)
2023-06-05  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)  (Kawrakow)
2023-06-04  llama : Metal inference (#1642)  (Georgi Gerganov)
2023-05-28  Only show -ngl option when relevant + other doc/arg handling updates (#1625)  (Kerfuffle)
2023-05-20  llama : define magic numbers as integer constants (#1518) (#1520)  (Juuso Alasuutari)
2023-05-20  llama : add llama_init_backend() API (close #1527)  (Georgi Gerganov)
2023-05-20  llama : fix compile warnings in llama_set_state_data()  (Georgi Gerganov)
2023-05-19  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)  (Georgi Gerganov)
2023-05-17  Remove unused n_parts parameter (#1509)  (Stephan Walter)
2023-05-13  ggml : GPU-accelerated token generation (#1412)  (Johannes Gäßler)
2023-05-13  llama : free ggml context in set / copy state data (close #1425)  (Georgi Gerganov)
2023-05-12  ggml : remove bit shuffling (#1405)  (Georgi Gerganov)
2023-05-06  Remove default arguments from sampling functions (#1343)  (Jed Fox)
2023-05-02  llama : only copy used KV cache in get / set state (#1272)  (Evan Jones)
2023-05-02  llama : fix compile warnings  (Georgi Gerganov)
2023-05-02  llama : allow 0 as a seed number. (#1275)  (Robert Brisita)
2023-05-01  llama : fix session load / save (#1263)  (Georgi Gerganov)
2023-05-01  llama : let context be const when accessing const data (#1261)  (Alex Klinkhamer)
2023-04-29  llama : new sampling algorithms (#1126)  (Ivan Stepanov)
2023-04-28  Remove Q4_3 which is no better than Q5 (#1218)  (Stephan Walter)
2023-04-28  llama : add session file format and saved sessions in main (#1169)  (Evan Jones)
2023-04-26  ggml : add Q5_0 and Q5_1 quantization (#1187)  (Georgi Gerganov)
2023-04-26  Allow setting the rng seed after initialization. (#1184)  (Ásgeir Bjarni Ingvarsson)
2023-04-25  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (...  (Georgi Gerganov)
2023-04-24  llama : refactor get / set state + remove redundant kv cache API (#1143)  (Georgi Gerganov)
2023-04-22  llama : add api for getting/setting the complete state: rng, logits, embeddin...  (xaedes)
2023-04-20  llama : multi-threaded quantization (#1075)  (Kawrakow)
2023-04-20  ggml : add Q4_3 quantization (#1082)  (Georgi Gerganov)
2023-04-18  ggml : add new Q4_2 quantization (ARM only) (#1046)  (Georgi Gerganov)
2023-04-17  Add LoRA support (#820)  (slaren)
2023-04-13  llama : merge llama_internal.h into llama.h  (Georgi Gerganov)
2023-04-12  Don't crash on ftype (formerly f16) == 4 (#917)  (Stephan Walter)
2023-04-11  Add enum llama_ftype, sync ggml_type to model files (#709)  (Stephan Walter)
2023-04-10  Rewrite loading code to try to satisfy everyone:  (comex)
2023-04-08  Add quantize-stats command for testing quantization (#728)  (unbounded)
2023-04-02  Added api for getting/setting the kv_cache (#685)  (Christian Falch)
2023-03-30  Make loading weights 10-100x faster  (Justine Tunney)