llama.cpp.git (master)
path: examples/common.h
Age        | Commit message                                                                     | Author
2023-05-28 | examples : add --alias option to gpt_params to set use friendly model name (#...  | Vladimir Zorin
2023-05-19 | minor : fix compile warnings                                                       | Georgi Gerganov
2023-05-17 | Remove unused n_parts parameter (#1509)                                            | Stephan Walter
2023-05-16 | define default model path once, sync path with readme (#1366)                      | András Salamon
2023-05-13 | ggml : GPU-accelerated token generation (#1412)                                    | Johannes Gäßler
2023-05-10 | main : add option to save full output to session (#1338)                           | Evan Jones
2023-05-08 | Interface improvements and `--multiline-input` (previously `--author-mode`) (...   | DannyDaemonic
2023-05-04 | main : add --in-suffix option (#1318)                                              | 44670
2023-05-02 | examples : add llama_init_from_gpt_params() common function (#1290)                | Ron Evans
2023-04-30 | common : better default number of threads (#934)                                   | jon-chuang
2023-04-29 | common : change default parameters to pre-#1126 (#1223)                            | Georgi Gerganov
2023-04-29 | llama : new sampling algorithms (#1126)                                            | Ivan Stepanov
2023-04-28 | llama : add session file format and saved sessions in main (#1169)                 | Evan Jones
2023-04-24 | examples/main README improvements and some light refactoring (#1131)               | mgroeber9110
2023-04-22 | llama : have n_batch default to 512 (#1091)                                        | eiery
2023-04-17 | Add LoRA support (#820)                                                            | slaren
2023-04-14 | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)     | Pavol Rusnak
2023-04-14 | main : alternative instruct mode (Vicuna support, etc.) (#863)                     | Tomáš Pazdiora
2023-04-10 | Rewrite loading code to try to satisfy everyone:                                   | comex
2023-04-08 | fix for windows utf-8 input (#840)                                                 | Tomáš Pazdiora
2023-03-28 | main.cpp fixes, refactoring (#571)                                                 | anzz1
2023-03-25 | Inifinite generation via context swapping (#71)                                    | Georgi Gerganov
2023-03-25 | Overhaul the examples structure                                                    | Georgi Gerganov