llama.cpp.git commit log (branch: master)
Age         Commit message  [Author]

2023-04-28  examples : add Jeopardy example (#1168)  [CRD716]
2023-04-28  llama : add session file format and saved sessions in main (#1169)  [Evan Jones]
2023-04-28  ggml : add helper debug printf in soft_max  [Georgi Gerganov]
2023-04-28  ggml : add CLBlast support (#1164)  [0cc4m]
2023-04-28  Correcting link to w64devkit (#1214)  [Folko-Ven]
2023-04-28  Add Manjaro CUDA include and lib dirs to Makefile (#1212)  [Johannes Gäßler]
2023-04-28  add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)  [Yann Follet]
2023-04-26  ggml : slightly faster AVX2 implementation for Q5 (#1197)  [Stephan Walter]
2023-04-26  readme : add quantization info  [Georgi Gerganov]
2023-04-26  ggml : add Q5_0 and Q5_1 quantization (#1187)  [Georgi Gerganov]
2023-04-26  Allow setting the rng seed after initialization. (#1184)  [Ásgeir Bjarni Ingvarsson]
2023-04-26  Updating build instructions to include BLAS support (#1183)  [DaniAndTheWeb]
2023-04-26  quantize : use `map` to assign quantization type from `string` (#1191)  [Pavol Rusnak]
2023-04-25  Update SHA256SUMS after quantization change (#1181)  [Stephan Walter]
2023-04-25  py : cast lora_alpha to int in convert-lora-to-ggml (#1170)  [ostix360]
2023-04-25  nix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981)  [Pavol Rusnak]
2023-04-25  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (…  [Georgi Gerganov]
2023-04-25  ggml : use full range for Q4_0 and Q4_2 quantization (#729)  [unbounded]
2023-04-24  ggml : fix bug in ggml_compute_forward_sum_f32 (#1162)  [xaedes]
2023-04-24  ggml : export symbols (#1155)  [Georgi Gerganov]
2023-04-24  examples : add save_load_state example (#1150)  [xaedes]
2023-04-24  llama : increase scratch buffer size for 65B (ref #1152)  [Georgi Gerganov]
2023-04-24  examples/main README improvements and some light refactoring (#1131)  [mgroeber9110]
2023-04-24  Fix build for gcc 8 and test in CI (#1154)  [Stephan Walter]
2023-04-24  Fix cuda compilation (#1128)  [slaren]
2023-04-24  llama : refactor get / set state + remove redundant kv cache API (#1143)  [Georgi Gerganov]
2023-04-23  Fix LoRA acronym (#1145)  [slaren]
2023-04-23  scripts : add helper scripts to synch ggml repo  [Georgi Gerganov]
2023-04-23  Added README.md for main with examples and explanations (#1139)  [DannyDaemonic]
2023-04-23  ggml : do not print perf ops that have not been used at all  [Georgi Gerganov]
2023-04-23  ggml : better PERF prints + support "LLAMA_PERF=1 make"  [Georgi Gerganov]
2023-04-23  Improve AVX2 for vec_dot_q4_3_q8_0 (#1138)  [Stephan Walter]
2023-04-23  readme : update gpt4all instructions (#980)  [Pavol Rusnak]
2023-04-23  A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX5…  [Yishuo Wang]
2023-04-22  ggml : fix Q4_3 cuBLAS  [Georgi Gerganov]
2023-04-22  ci : trigger CI for drafts, but not most PR actions (#1125)  [Stephan Walter]
2023-04-22  Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)  [Stephan Walter]
2023-04-22  ggml : unit test for quantization functions (#953)  [unbounded]
2023-04-22  llama : print timings on ctrl+c exit (#1021)  [wbpxre150]
2023-04-22  llama : have n_batch default to 512 (#1091)  [eiery]
2023-04-22  cmake : fix build under Windows when enable BUILD_SHARED_LIBS (#1100)  [Howard Su]
2023-04-22  ggml : fix AVX build + update to new Q8_0 format  [Georgi Gerganov]
2023-04-22  ggml : alternative Q4_3 implementation using modified Q8_0 (#1109)  [Georgi Gerganov]
2023-04-22  ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099)  [Stephan Walter]
2023-04-22  examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Exp…  [Clint Herron]
2023-04-22  llama : add api for getting/setting the complete state: rng, logits, embeddin…  [xaedes]
2023-04-21  Improve cuBLAS performance by using a memory pool (#1094)  [slaren]
2023-04-21  llama : fixed rlimit error message (#888)  [apaz]
2023-04-21  cmake : link threads publicly to ggml (#1042)  [源文雨]
2023-04-21  main : evaluate tokens in batches after swapping context (#1014)  [Alex Klinkhamer]