Age        | Commit message                                                                  | Author
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827)                            | Johannes Gäßler
2023-06-13 | baby-llama : fix operator!= (#1821)                                             | 0xspringtime
2023-06-13 | train : improved training-from-scratch example (#1652)                          | xaedes
2023-06-13 | llama : do a warm-up eval at start for better timings (#1824)                   | Georgi Gerganov
2023-06-13 | Allow "quantizing" to f16 and f32 (#1787)                                       | Kerfuffle
2023-06-12 | Metal implementation for all k_quants (#1807)                                   | Kawrakow
2023-06-12 | ci : run when changing only the CUDA sources (#1800)                            | slaren
2023-06-12 | Leverage mmap for offloading tensors to GPU (#1597)                             | Howard Su
2023-06-12 | metal : fix failure to load model (#1817)                                       | Kawrakow
2023-06-11 | Fix issue where interactive mode crashes when input exceeds ctx size (#1789)    | Kerfuffle
2023-06-11 | Fixed WSL cuda's OOM error (#1594)                                              | Kyle Liang
2023-06-11 | Update SHA256SUMS with current hashes for models quantized using q4_0 (#1798)   | Ryan Landay
2023-06-10 | cmake : fix Metal build (close #1791)                                           | Georgi Gerganov
2023-06-10 | k-quants : GCC12 compilation fix (#1792)                                        | Artyom Lebedev
2023-06-10 | metal : fix issue with ggml-metal.metal path. Closes #1769 (#1782)              | Andrei
2023-06-10 | doc : fix wrong address of BLIS.md (#1772)                                      | Aisuko
2023-06-10 | ggml : force no_alloc == false when creating opt tensors (close #1699)          | Georgi Gerganov
2023-06-10 | metal : add Q4_1 implementation (#1785)                                         | Kawrakow
2023-06-10 | llama : support requantizing models instead of only allowing quantization fro...| Kerfuffle
2023-06-10 | ggml : workaround for missing _mm256_setr_m128i in GCC < 8 (#1638)              | Xingchen Song(宋星辰)
2023-06-10 | make : add SSSE3 compilation use case (#1659)                                   | rankaiyx
2023-06-09 | OpenCL: Add release memory (#1741)                                              | Robert Sung-wook Shin
2023-06-09 | Windows nvcc workaround (#1753)                                                 | Johannes Gäßler
2023-06-09 | metal : fix build "tanhf" -> "tanh"                                             | Georgi Gerganov
2023-06-09 | metal : add GELU implementation (#1770)                                         | AT
2023-06-09 | metal : faster q4_0 (#1775)                                                     | Kawrakow
2023-06-08 | metal : add Q2_K implementation (#1762)                                         | Kawrakow
2023-06-08 | Revert "ggml : load data into int8x16x4_t using vld4q_s8 on arm64 (#1738)"      | Georgi Gerganov
2023-06-08 | ggml : load data into int8x16x4_t using vld4q_s8 on arm64 (#1738)               | le.chang
2023-06-08 | metal : Q6_K implementation (#1752)                                             | Kawrakow
2023-06-08 | Add llama.cpp docker support for non-latin languages (#1673)                    | qingfengfenga
2023-06-08 | ggml : fix fprintf warnings (#1720)                                             | Steven Roussey
2023-06-08 | clang-tidy : restore dot file from accidental deletion                          | Georgi Gerganov
2023-06-08 | metal : add Q4_K implementation (#1733)                                         | Kawrakow
2023-06-08 | k-quants : add missing compile definition to CMakeLists (#1748)                 | johnson442
2023-06-07 | k-quants : allow to optionally disable at compile time (#1734)                  | Georgi Gerganov
2023-06-07 | flake : update to support metal on m1/m2 (#1724)                                | jacobi petrucciani
2023-06-07 | readme : add June roadmap                                                       | Georgi Gerganov
2023-06-06 | main : add the possibility to open the prompt cache read-only (#1640)           | Willy Tarreau
2023-06-06 | llama : fix vram_scratch var                                                    | Georgi Gerganov
2023-06-06 | llama : fix compile warnings                                                    | Georgi Gerganov
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)                   | Johannes Gäßler
2023-06-06 | metal : add f16 support                                                         | Georgi Gerganov
2023-06-06 | Clblast fixes + enhancements to save VRAM and offload more layers (#1675)       | LostRuins
2023-06-06 | ggml : fix builds, add ggml-quants-k.o (close #1712, close #1710)               | Georgi Gerganov
2023-06-06 | gitignore : add .clang-tidy                                                     | Georgi Gerganov
2023-06-06 | llama : temporary disable Q6_K output quantization (#1711)                      | Georgi Gerganov
2023-06-06 | metal : add checks for buffer size (#1706)                                      | Spencer Sutton
2023-06-05 | docs : add performance troubleshoot + example benchmark documentation (#1674)   | Yuval Peled
2023-06-05 | readme : fix typo (#1700)                                                       | Foul-Tarnished