Commit log for `examples/main/README.md` (llama.cpp.git, branch `master`):
| Age        | Commit message                                                               | Author          |
|------------|------------------------------------------------------------------------------|-----------------|
| 2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827)                         | Johannes Gäßler |
| 2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)                | Johannes Gäßler |
| 2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle       |
| 2023-05-25 | Some improvements to loading the session with --prompt-cache (#1550)         | Kerfuffle       |
| 2023-05-10 | main : add option to save full output to session (#1338)                     | Evan Jones      |
| 2023-05-04 | main : add --in-suffix option (#1318)                                        | 44670           |
| 2023-05-04 | Only escape prompts when used with `-e` (#1311)                              | DannyDaemonic   |
| 2023-05-04 | Update main's README.md with new features (#1296)                            | DannyDaemonic   |
| 2023-05-02 | llama : allow 0 as a seed number. (#1275)                                    | Robert Brisita  |
| 2023-04-24 | examples/main README improvements and some light refactoring (#1131)         | mgroeber9110    |
| 2023-04-23 | Fix LoRA acronym (#1145)                                                     | slaren          |
| 2023-04-23 | Added README.md for main with examples and explanations (#1139)              | DannyDaemonic   |
| 2023-04-11 | Fix whitespace, add .editorconfig, add GitHub workflow (#883)                | Pavol Rusnak    |
| 2023-03-25 | Overhaul the examples structure                                              | Georgi Gerganov |