path: root/llama.h
Age         Commit message                                                          Author
2023-04-20  llama : multi-threaded quantization (#1075)                             Kawrakow
2023-04-20  ggml : add Q4_3 quantization (#1082)                                    Georgi Gerganov
2023-04-18  ggml : add new Q4_2 quantization (ARM only) (#1046)                     Georgi Gerganov
2023-04-17  Add LoRA support (#820)                                                 slaren
2023-04-13  llama : merge llama_internal.h into llama.h                             Georgi Gerganov
2023-04-12  Don't crash on ftype (formerly f16) == 4 (#917)                         Stephan Walter
2023-04-11  Add enum llama_ftype, sync ggml_type to model files (#709)              Stephan Walter
2023-04-10  Rewrite loading code to try to satisfy everyone:                        comex
2023-04-08  Add quantize-stats command for testing quantization (#728)              unbounded
2023-04-02  Added api for getting/setting the kv_cache (#685)                       Christian Falch
2023-03-30  Make loading weights 10-100x faster                                     Justine Tunney
2023-03-29  Fix typo in llama.h (#593)                                              anzz1
2023-03-28  llama : fix linkage with mingw (#551)                                   anzz1
2023-03-28  all : be more strict about converting float to double (#458)           Stephan Walter
2023-03-28  ggml : introduce structs for the q4 data blocks (#356)                  Stephan Walter
2023-03-25  Cleanup STL headers + fix embedding examples + minor stuff              Georgi Gerganov
2023-03-25  Add support for file load progress reporting callbacks (#434)           Jed Fox
2023-03-25  Add missing struct annotation (#483)                                    Doomsdayrs
2023-03-24  Support calling mlock() on loaded model data on Linux and macOS (#453)  comex
2023-03-24  Add embedding mode with arg flag. Currently working (#282)              Luciano
2023-03-22  Introduce C-style API (#370)                                            Georgi Gerganov
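
The last entry above, "Introduce C-style API (#370)", is where llama.h became the plain-C interface that the later commits extend (kv_cache accessors, file-load progress callbacks, enum llama_ftype, LoRA loading). As a rough illustration only, not part of the log, here is a minimal sketch of how that API was typically driven around this point in the file's history; the function names follow the April-2023 header (llama_init_from_file, llama_tokenize, llama_eval, llama_sample_top_p_top_k), and exact signatures may differ between the revisions listed.

    // Minimal usage sketch for the era of llama.h shown in the log above.
    // Not taken from the repository; signatures recalled from the
    // April-2023 header and may not match every listed revision.
    #include <stdbool.h>
    #include <stdio.h>

    #include "llama.h"

    int main(int argc, char ** argv) {
        if (argc < 3) {
            fprintf(stderr, "usage: %s <model.bin> <prompt>\n", argv[0]);
            return 1;
        }

        // load the model; llama_context_params carries options such as the
        // progress-reporting callback (#434) and mlock'ing the weights (#453)
        struct llama_context_params params = llama_context_default_params();
        struct llama_context * ctx = llama_init_from_file(argv[1], params);
        if (!ctx) {
            fprintf(stderr, "failed to load model\n");
            return 1;
        }

        // tokenize the prompt (add_bos = true prepends the BOS token);
        // a negative return value signals an error
        llama_token tokens[512];
        const int n_tokens = llama_tokenize(ctx, argv[2], tokens, 512, true);
        if (n_tokens < 0) {
            fprintf(stderr, "tokenization failed\n");
            llama_free(ctx);
            return 1;
        }

        // evaluate the prompt; llama_eval returns 0 on success
        int n_past = 0;
        if (llama_eval(ctx, tokens, n_tokens, n_past, /*n_threads*/ 4) != 0) {
            fprintf(stderr, "eval failed\n");
            llama_free(ctx);
            return 1;
        }
        n_past += n_tokens;

        // sample and print a few continuation tokens; for brevity the
        // repeat-penalty window is just the prompt, where a real loop would
        // maintain a rolling buffer of recent tokens
        for (int i = 0; i < 16; ++i) {
            const llama_token id = llama_sample_top_p_top_k(
                ctx, tokens, n_tokens,
                /*top_k*/ 40, /*top_p*/ 0.95f, /*temp*/ 0.8f, /*repeat_penalty*/ 1.1f);
            printf("%s", llama_token_to_str(ctx, id));
            if (llama_eval(ctx, &id, 1, n_past, 4) != 0) {
                break;
            }
            n_past += 1;
        }
        printf("\n");

        llama_free(ctx);
        return 0;
    }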