path: root/llama.cpp
Age         Commit message                                                                    Author
2023-06-18  llama : prevent usage of k-quants when tensor size is not a multiple of 256 (...)  Kawrakow
2023-06-18  metal : handle buffers larger than device's maxBufferLength (#1826)  Georgi Gerganov
2023-06-17  llama : fix kv_cache `n` init (close #1903)  Georgi Gerganov
2023-06-17  ggml : fix warnings under MSVC (#1908)  Howard Su
2023-06-16  llama : fix embd when offloading non-repeating layers (#1891)  Johannes Gäßler
2023-06-16  build : fix and ignore MSVC warnings (#1889)  Borislav Stanimirov
2023-06-14  CUDA full GPU acceleration, KV cache in VRAM (#1827)  Johannes Gäßler
2023-06-13  train : improved training-from-scratch example (#1652)  xaedes
2023-06-13  Allow "quantizing" to f16 and f32 (#1787)  Kerfuffle
2023-06-12  Metal implementation for all k_quants (#1807)  Kawrakow
2023-06-12  Leverage mmap for offloading tensors to GPU (#1597)  Howard Su
2023-06-10  llama : support requantizing models instead of only allowing quantization fro...  Kerfuffle
2023-06-09  OpenCL: Add release memory (#1741)  Robert Sung-wook Shin
2023-06-06  llama : fix vram_scratch var  Georgi Gerganov
2023-06-06  llama : fix compile warnings  Georgi Gerganov
2023-06-06  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)  Johannes Gäßler
2023-06-06  metal : add f16 support  Georgi Gerganov
2023-06-06  llama : temporary disable Q6_K output quantization (#1711)  Georgi Gerganov
2023-06-06  metal : add checks for buffer size (#1706)  Spencer Sutton
2023-06-05  llama : consistently catch and throw only exceptions deriving from std::excep...  mgroeber9110
2023-06-05  metal : use shared buffers between CPU and GPU (#1696)  kiltyj
2023-06-05  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)  Kawrakow
2023-06-05  Increase 3B scratch buffers. (#1698)  Henri Vasserman
2023-06-05  llama : fix Metal KV cache sync (close #1695)  Georgi Gerganov
2023-06-04  llama : Metal inference (#1642)  Georgi Gerganov
2023-06-04  OpenCL: Fix duplication of layers in VRAM and RAM, add GPU mul kernel (#1653)  0cc4m
2023-05-30  OpenLLaMA 3B support (#1588)  Henri Vasserman
2023-05-23  OpenCL Token Generation Acceleration (#1459)  0cc4m
2023-05-20  llama : define magic numbers as integer constants (#1518) (#1520)  Juuso Alasuutari
2023-05-20  cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti...  Johannes Gäßler
2023-05-20  llama : add llama_init_backend() API (close #1527)  Georgi Gerganov
2023-05-20  llama : fix name shadowing and C4146 (#1526)  Maxime
2023-05-20  llama : fix compile warnings in llama_set_state_data()  Georgi Gerganov
2023-05-19  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)  Georgi Gerganov
2023-05-19  minor : fix compile warnings  Georgi Gerganov
2023-05-18  make kv_f16 the default for api users (#1517)  Erik Scholz
2023-05-17  Remove unused n_parts parameter (#1509)  Stephan Walter
2023-05-13  llama : fix unused warning  Georgi Gerganov
2023-05-13  ggml : GPU-accelerated token generation (#1412)  Johannes Gäßler
2023-05-13  ggml : implement backward pass for llama + small training-llama-from-scratch ...  xaedes
2023-05-13  llama : fix various warnings  Georgi Gerganov
2023-05-13  llama : free ggml context in set / copy state data (close #1425)  Georgi Gerganov
2023-05-12  ggml : remove bit shuffling (#1405)  Georgi Gerganov
2023-05-08  llama : fix hparams shadow (#1367)  Pavol Rusnak
2023-05-08  llama : require first token to be BOS (#1303)  Georgi Gerganov
2023-05-06  Remove default arguments from sampling functions (#1343)  Jed Fox
2023-05-02  llama : only copy used KV cache in get / set state (#1272)  Evan Jones
2023-05-02  llama : fix compile warnings  Georgi Gerganov
2023-05-02  llama : allow 0 as a seed number. (#1275)  Robert Brisita
2023-05-02  ggml: add names to tensors (#1268)  slaren