Age | Commit message | Author |
2023-07-24 | make rms_norm_eps a parameter (#2374) | slaren |
2023-07-23 | llama : add grammar-based sampling (#1773) | Evan Jones |
2023-07-23 | llama : grouped-query attention + LLaMAv2 70B support (#2276) | Georgi Gerganov |
2023-07-21 | llama : remove cfg smooth factor as it is only a reparameterization of the gu... | Guillaume "Vermeille" Sanchez |
2023-07-21 | llama : make tensor_split ptr instead of array (#2272) | Georgi Gerganov |
2023-07-19 | llama : extend API to get max devices at runtime (#2253) | Rinne |
2023-07-15 | llama : add custom RoPE (#2054) | Xiao-Yong Jin |
2023-07-14 | llama : add functions that work directly on model (#2197) | Bach Le |
2023-07-11 | llama : add classifier-free guidance (#2135) | Bach Le |
2023-07-10 | mpi : add support for distributed inference via MPI (#2099) | Evan Miller |
2023-07-05 | Expose generation timings from server & update completions.js (#2116) | Tobias Lütke |
2023-06-29 | Use unsigned for random seed (#2006) | Howard Su |
2023-06-28 | llama : support input embeddings directly (#1910) | ningshanwutuobang |
2023-06-26 | ggml : add NUMA support (#1556) | zrm |
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797) | Didzis Gosko |
2023-06-20 | llama : fix params struct alignment (#1936) | Ettore Di Giacinto |
2023-06-15 | examples : add chat-vicuna.sh (#1854) | yangli2 |
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler |
2023-06-13 | train : improved training-from-scratch example (#1652) | xaedes |
2023-06-10 | llama : support requantizing models instead of only allowing quantization fro... | Kerfuffle |
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler |
2023-06-05 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | Kawrakow |
2023-06-04 | llama : Metal inference (#1642) | Georgi Gerganov |
2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle |
2023-05-20 | llama : define magic numbers as integer constants (#1518) (#1520) | Juuso Alasuutari |
2023-05-20 | llama : add llama_init_backend() API (close #1527) | Georgi Gerganov |
2023-05-20 | llama : fix compile warnings in llama_set_state_data() | Georgi Gerganov |
2023-05-19 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | Georgi Gerganov |
2023-05-17 | Remove unused n_parts parameter (#1509) | Stephan Walter |
2023-05-13 | ggml : GPU-accelerated token generation (#1412) | Johannes Gäßler |
2023-05-13 | llama : free ggml context in set / copy state data (close #1425) | Georgi Gerganov |
2023-05-12 | ggml : remove bit shuffling (#1405) | Georgi Gerganov |
2023-05-06 | Remove default arguments from sampling functions (#1343) | Jed Fox |
2023-05-02 | llama : only copy used KV cache in get / set state (#1272) | Evan Jones |
2023-05-02 | llama : fix compile warnings | Georgi Gerganov |
2023-05-02 | llama : allow 0 as a seed number. (#1275) | Robert Brisita |
2023-05-01 | llama : fix session load / save (#1263) | Georgi Gerganov |
2023-05-01 | llama : let context be const when accessing const data (#1261) | Alex Klinkhamer |
2023-04-29 | llama : new sampling algorithms (#1126) | Ivan Stepanov |
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218) | Stephan Walter |
2023-04-28 | llama : add session file format and saved sessions in main (#1169) | Evan Jones |
2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187) | Georgi Gerganov |
2023-04-26 | Allow setting the rng seed after initialization. (#1184) | Ásgeir Bjarni Ingvarsson |
2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (... | Georgi Gerganov |
2023-04-24 | llama : refactor get / set state + remove redundant kv cache API (#1143) | Georgi Gerganov |
2023-04-22 | llama : add api for getting/setting the complete state: rng, logits, embeddin... | xaedes |
2023-04-20 | llama : multi-threaded quantization (#1075) | Kawrakow |
2023-04-20 | ggml : add Q4_3 quantization (#1082) | Georgi Gerganov |
2023-04-18 | ggml : add new Q4_2 quantization (ARM only) (#1046) | Georgi Gerganov |
2023-04-17 | Add LoRA support (#820) | slaren |