index : llama.cpp.git (branch: master)
path: root/examples
Age         | Commit message                                                                  | Author
2023-05-08  | Interface improvements and `--multiline-input` (previously `--author-mode`) (... | DannyDaemonic
2023-05-08  | llama : require first token to be BOS (#1303)                                   | Georgi Gerganov
2023-05-08  | Documented CUDA reproducibility, added warning (#1346)                          | Johannes Gäßler
2023-05-06  | Remove default arguments from sampling functions (#1343)                        | Jed Fox
2023-05-05  | quantize: make output filename optional, default to ggml-model-<ftype>.bin (#... | slaren
2023-05-04  | main : add --in-suffix option (#1318)                                           | 44670
2023-05-04  | Only escape prompts when used with `-e` (#1311)                                 | DannyDaemonic
2023-05-04  | Update main's README.md with new features (#1296)                               | DannyDaemonic
2023-05-04  | fix #1224 reverse prompt and multi line (#1297)                                 | Tomas
2023-05-03  | examples : read chat prompts from a template file (#1196)                       | khimaros
2023-05-03  | examples : various prompt and example fixes (#1298)                             | CRD716
2023-05-02  | Process escape sequences given in prompts (#1173)                               | DannyDaemonic
2023-05-02  | Handle signals properly on Windows (#1123)                                      | DannyDaemonic
2023-05-03  | fix missing parameters in `llama_init_from_gpt_params` (#1293)                  | slaren
2023-05-02  | examples : add llama_init_from_gpt_params() common function (#1290)             | Ron Evans
2023-05-02  | llama : fix compile warnings                                                    | Georgi Gerganov
2023-05-02  | examples : improve vertical alignment of a few variables (#1286)                | Ron Evans
2023-05-02  | llama : allow 0 as a seed number. (#1275)                                       | Robert Brisita
2023-05-02  | main : switch input_noecho to input_echo to remove negation (#979)              | Ron Evans
2023-05-01  | Add git-based build information for better issue tracking (#1232)               | DannyDaemonic
2023-05-01  | llama : fix session load / save (#1263)                                         | Georgi Gerganov
2023-04-30  | common : better default number of threads (#934)                                | jon-chuang
2023-04-30  | Various fixes to mat_mul benchmark (#1253)                                      | Stephan Walter
2023-04-29  | build : fix reference to old llama_util.h                                       | Georgi Gerganov
2023-04-29  | examples : fix save-load-state + rename llama-util.h                            | Georgi Gerganov
2023-04-29  | common : change default parameters to pre-#1126 (#1223)                         | Georgi Gerganov
2023-04-29  | llama : new sampling algorithms (#1126)                                         | Ivan Stepanov
2023-04-28  | Remove Q4_3 which is no better than Q5 (#1218)                                  | Stephan Walter
2023-04-28  | examples : add Jeopardy example (#1168)                                         | CRD716
2023-04-28  | llama : add session file format and saved sessions in main (#1169)              | Evan Jones
2023-04-26  | ggml : add Q5_0 and Q5_1 quantization (#1187)                                   | Georgi Gerganov
2023-04-26  | quantize : use `map` to assign quantization type from `string` (#1191)          | Pavol Rusnak
2023-04-25  | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (... | Georgi Gerganov
2023-04-24  | examples : add save_load_state example (#1150)                                  | xaedes
2023-04-24  | examples/main README improvements and some light refactoring (#1131)            | mgroeber9110
2023-04-23  | Fix LoRA acronym (#1145)                                                        | slaren
2023-04-23  | Added README.md for main with examples and explanations (#1139)                 | DannyDaemonic
2023-04-22  | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)                 | Stephan Walter
2023-04-22  | llama : print timings on ctrl+c exit (#1021)                                    | wbpxre150
2023-04-22  | llama : have n_batch default to 512 (#1091)                                     | eiery
2023-04-22  | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Exp... | Clint Herron
2023-04-21  | main : evaluate tokens in batches after swapping context (#1014)                | Alex Klinkhamer
2023-04-21  | Show perplexity ETA in hours and minutes (#1096)                                | slaren
2023-04-20  | llama : multi-threaded quantization (#1075)                                     | Kawrakow
2023-04-20  | ggml : add Q4_3 quantization (#1082)                                            | Georgi Gerganov
2023-04-18  | ggml : add new Q4_2 quantization (ARM only) (#1046)                             | Georgi Gerganov
2023-04-17  | Add LoRA support (#820)                                                         | slaren
2023-04-17  | quantize-stats : fix bug in --type argument                                     | Georgi Gerganov
2023-04-16  | examples: add missing <ctime> include for time() (#1011)                        | Pavol Rusnak
2023-04-15  | benchmark : fix result validation in benchmark-q4_0-matmult (#987)              | Ivan Komarov