llama.cpp.git (branch: master) - commit log for path root/examples/perplexity
Age        | Commit message                                                           | Author
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797)  | Didzis Gosko
2023-06-16 | build : fix and ignore MSVC warnings (#1889)                              | Borislav Stanimirov
2023-05-20 | llama : add llama_init_backend() API (close #1527)                        | Georgi Gerganov
2023-05-16 | define default model path once, sync path with readme (#1366)             | András Salamon
2023-05-08 | llama : require first token to be BOS (#1303)                             | Georgi Gerganov
2023-05-02 | examples : add llama_init_from_gpt_params() common function (#1290)       | Ron Evans
2023-05-02 | llama : allow 0 as a seed number. (#1275)                                 | Robert Brisita
2023-05-01 | Add git-based build information for better issue tracking (#1232)         | DannyDaemonic
2023-04-21 | Show perplexity ETA in hours and minutes (#1096)                          | slaren
2023-04-17 | Add LoRA support (#820)                                                   | slaren
2023-04-16 | examples: add missing <ctime> include for time() (#1011)                  | Pavol Rusnak
2023-04-14 | perplexity : add support for batch size to `--perplexity` (#407)          | Gary Linscott
2023-04-11 | Fix whitespace, add .editorconfig, add GitHub workflow (#883)             | Pavol Rusnak
2023-04-10 | Rewrite loading code to try to satisfy everyone:                          | comex
2023-03-28 | llama : fix linkage with mingw (#551)                                     | anzz1
2023-03-28 | all : be more strict about converting float to double (#458)              | Stephan Walter
2023-03-27 | Fix missing ggml link in cmake for examples/* on w64-mingw32 (#542)       | Marco Matthies
2023-03-26 | Update README and comments for standalone perplexity tool (#525)          | Stephan Walter
2023-03-25 | Cleanup STL headers + fix embedding examples + minor stuff                | Georgi Gerganov
2023-03-25 | Overhaul the examples structure                                           | Georgi Gerganov