llama.cpp.git (branch: master) — log of path examples/common.cpp

Age        | Commit message                                                                  | Author
2023-05-03 | fix missing parameters in `llama_init_from_gpt_params` (#1293)                 | slaren
2023-05-02 | examples : add llama_init_from_gpt_params() common function (#1290)            | Ron Evans
2023-05-02 | llama : allow 0 as a seed number. (#1275)                                      | Robert Brisita
2023-04-30 | common : better default number of threads (#934)                               | jon-chuang
2023-04-29 | llama : new sampling algorithms (#1126)                                        | Ivan Stepanov
2023-04-28 | llama : add session file format and saved sessions in main (#1169)             | Evan Jones
2023-04-24 | examples/main README improvements and some light refactoring (#1131)           | mgroeber9110
2023-04-17 | Add LoRA support (#820)                                                        | slaren
2023-04-14 | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) | Pavol Rusnak
2023-04-14 | main : alternative instruct mode (Vicuna support, etc.) (#863)                 | Tomáš Pazdiora
2023-04-13 | common : remove unnecessary includes (#947)                                    | CRD716
2023-04-11 | Fix whitespace, add .editorconfig, add GitHub workflow (#883)                  | Pavol Rusnak
2023-04-10 | Rewrite loading code to try to satisfy everyone:                               | comex
2023-04-08 | fix for windows utf-8 input (#840)                                             | Tomáš Pazdiora
2023-04-02 | fix default params for examples/main (#697)                                    | Murilo Santana
2023-04-01 | Show error message when -f fails                                               | Slaren
2023-03-28 | all : be more strict about converting float to double (#458)                   | Stephan Walter
2023-03-28 | main.cpp fixes, refactoring (#571)                                             | anzz1
2023-03-25 | If n_predict == -1, generate forever                                           | Georgi Gerganov
2023-03-25 | Inifinite generation via context swapping (#71)                                | Georgi Gerganov
2023-03-25 | Overhaul the examples structure                                                | Georgi Gerganov