llama.cpp.git :: master branch commit log (date, commit message, author)
2023-06-13  baby-llama : fix operator!= (#1821)  [0xspringtime]
2023-06-13  train : improved training-from-scratch example (#1652)  [xaedes]
2023-06-13  llama : do a warm-up eval at start for better timings (#1824)  [Georgi Gerganov]
2023-06-13  Allow "quantizing" to f16 and f32 (#1787)  [Kerfuffle]
2023-06-12  Metal implementation for all k_quants (#1807)  [Kawrakow]
2023-06-12  ci : run when changing only the CUDA sources (#1800)  [slaren]
2023-06-12  Leverage mmap for offloading tensors to GPU (#1597)  [Howard Su]
2023-06-12  metal : fix failure to load model (#1817)  [Kawrakow]
2023-06-11  Fix issue where interactive mode crashes when input exceeds ctx size (#1789)  [Kerfuffle]
2023-06-11  Fixed WSL cuda's OOM error (#1594)  [Kyle Liang]
2023-06-11  Update SHA256SUMS with current hashes for models quantized using q4_0 (#1798)  [Ryan Landay]
2023-06-10  cmake : fix Metal build (close #1791)  [Georgi Gerganov]
2023-06-10  k-quants : GCC12 compilation fix (#1792)  [Artyom Lebedev]
2023-06-10  metal : fix issue with ggml-metal.metal path. Closes #1769 (#1782)  [Andrei]
2023-06-10  doc : fix wrong address of BLIS.md (#1772)  [Aisuko]
2023-06-10  ggml : force no_alloc == false when creating opt tensors (close #1699)  [Georgi Gerganov]
2023-06-10  metal : add Q4_1 implementation (#1785)  [Kawrakow]
2023-06-10  llama : support requantizing models instead of only allowing quantization fro...  [Kerfuffle]
2023-06-10  ggml : workaround for missing _mm256_setr_m128i in GCC < 8 (#1638)  [Xingchen Song(宋星辰)]
2023-06-10  make : add SSSE3 compilation use case (#1659)  [rankaiyx]
2023-06-09  OpenCL: Add release memory (#1741)  [Robert Sung-wook Shin]
2023-06-09  Windows nvcc workaround (#1753)  [Johannes Gäßler]
2023-06-09  metal : fix build "tanhf" -> "tanh"  [Georgi Gerganov]
2023-06-09  metal : add GELU implementation (#1770)  [AT]
2023-06-09  metal : faster q4_0 (#1775)  [Kawrakow]
2023-06-08  metal : add Q2_K implementation (#1762)  [Kawrakow]
2023-06-08  Revert "ggml : load data into int8x16x4_t using vld4q_s8 on arm64 (#1738)"  [Georgi Gerganov]
2023-06-08  ggml : load data into int8x16x4_t using vld4q_s8 on arm64 (#1738)  [le.chang]
2023-06-08  metal : Q6_K implementation (#1752)  [Kawrakow]
2023-06-08  Add llama.cpp docker support for non-latin languages (#1673)  [qingfengfenga]
2023-06-08  ggml : fix fprintf warnings (#1720)  [Steven Roussey]
2023-06-08  clang-tidy : restore dot file from accidental deletion  [Georgi Gerganov]
2023-06-08  metal : add Q4_K implementation (#1733)  [Kawrakow]
2023-06-08  k-quants : add missing compile definition to CMakeLists (#1748)  [johnson442]
2023-06-07  k-quants : allow to optionally disable at compile time (#1734)  [Georgi Gerganov]
2023-06-07  flake : update to support metal on m1/m2 (#1724)  [jacobi petrucciani]
2023-06-07  readme : add June roadmap  [Georgi Gerganov]
2023-06-06  main: add the possibility to open the prompt cache read-only (#1640)  [Willy Tarreau]
2023-06-06  llama : fix vram_scratch var  [Georgi Gerganov]
2023-06-06  llama : fix compile warnings  [Georgi Gerganov]
2023-06-06  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)  [Johannes Gäßler]
2023-06-06  metal : add f16 support  [Georgi Gerganov]
2023-06-06  Clblast fixes + enhancements to save VRAM and offload more layers (#1675)  [LostRuins]
2023-06-06  ggml : fix builds, add ggml-quants-k.o (close #1712, close #1710)  [Georgi Gerganov]
2023-06-06  gitignore : add .clang-tidy  [Georgi Gerganov]
2023-06-06  llama : temporary disable Q6_K output quantization (#1711)  [Georgi Gerganov]
2023-06-06  metal : add checks for buffer size (#1706)  [Spencer Sutton]
2023-06-05  docs : add performance troubleshoot + example benchmark documentation (#1674)  [Yuval Peled]
2023-06-05  readme : fix typo (#1700)  [Foul-Tarnished]
2023-06-05  llama : consistently catch and throw only exceptions deriving from std::excep...  [mgroeber9110]