llama.cpp.git (branch: master)
path: root/examples

Age | Commit message | Author
2023-05-02 | llama : allow 0 as a seed number. (#1275) | Robert Brisita
2023-05-02 | main : switch input_noecho to input_echo to remove negation (#979) | Ron Evans
2023-05-01 | Add git-based build information for better issue tracking (#1232) | DannyDaemonic
2023-05-01 | llama : fix session load / save (#1263) | Georgi Gerganov
2023-04-30 | common : better default number of threads (#934) | jon-chuang
2023-04-30 | Various fixes to mat_mul benchmark (#1253) | Stephan Walter
2023-04-29 | build : fix reference to old llama_util.h | Georgi Gerganov
2023-04-29 | examples : fix save-load-state + rename llama-util.h | Georgi Gerganov
2023-04-29 | common : change default parameters to pre-#1126 (#1223) | Georgi Gerganov
2023-04-29 | llama : new sampling algorithms (#1126) | Ivan Stepanov
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218) | Stephan Walter
2023-04-28 | examples : add Jeopardy example (#1168) | CRD716
2023-04-28 | llama : add session file format and saved sessions in main (#1169) | Evan Jones
2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187) | Georgi Gerganov
2023-04-26 | quantize : use `map` to assign quantization type from `string` (#1191) | Pavol Rusnak
2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (... | Georgi Gerganov
2023-04-24 | examples : add save_load_state example (#1150) | xaedes
2023-04-24 | examples/main README improvements and some light refactoring (#1131) | mgroeber9110
2023-04-23 | Fix LoRA acronym (#1145) | slaren
2023-04-23 | Added README.md for main with examples and explanations (#1139) | DannyDaemonic
2023-04-22 | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122) | Stephan Walter
2023-04-22 | llama : print timings on ctrl+c exit (#1021) | wbpxre150
2023-04-22 | llama : have n_batch default to 512 (#1091) | eiery
2023-04-22 | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Exp... | Clint Herron
2023-04-21 | main : evaluate tokens in batches after swapping context (#1014) | Alex Klinkhamer
2023-04-21 | Show perplexity ETA in hours and minutes (#1096) | slaren
2023-04-20 | llama : multi-threaded quantization (#1075) | Kawrakow
2023-04-20 | ggml : add Q4_3 quantization (#1082) | Georgi Gerganov
2023-04-18 | ggml : add new Q4_2 quantization (ARM only) (#1046) | Georgi Gerganov
2023-04-17 | Add LoRA support (#820) | slaren
2023-04-17 | quantize-stats : fix bug in --type argument | Georgi Gerganov
2023-04-16 | examples: add missing <ctime> include for time() (#1011) | Pavol Rusnak
2023-04-15 | benchmark : fix result validation in benchmark-q4_0-matmult (#987) | Ivan Komarov
2023-04-14 | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) | Pavol Rusnak
2023-04-14 | Expose type name from ggml (#970) | Pavol Rusnak
2023-04-14 | main : alternative instruct mode (Vicuna support, etc.) (#863) | Tomáš Pazdiora
2023-04-14 | perplexity : add support for batch size to `--perplexity` (#407) | Gary Linscott
2023-04-13 | common : remove unnecessary includes (#947) | CRD716
2023-04-13 | llama : merge llama_internal.h into llama.h | Georgi Gerganov
2023-04-13 | fix whitespace (#944) | CRD716
2023-04-13 | examples : add -n to alpaca and gpt4all scripts (#706) | niansa/tuxifan
2023-04-13 | benchmark : add tool for timing q4_0 matrix multiplication (#653) | SebastianApel
2023-04-11 | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | Pavol Rusnak
2023-04-11 | Add enum llama_ftype, sync ggml_type to model files (#709) | Stephan Walter
2023-04-11 | Windows fixes (#890) | comex
2023-04-10 | Rewrite loading code to try to satisfy everyone: | comex
2023-04-08 | fix for windows utf-8 input (#840) | Tomáš Pazdiora
2023-04-08 | Add quantize-stats command for testing quantization (#728) | unbounded
2023-04-06 | Do not crash when it has nothing to say. (#796) | Sergey Alirzaev
2023-04-05 | miku.sh : add executable bit (#780) | at8u