llama.cpp.git (branch: master) - commit log for utils.h
Age         Commit message                                                                   Author
2023-03-22  Don't force immediate interactive without `-i` (#354)                            tjohnman
2023-03-22  Introduce C-style API (#370)                                                     Georgi Gerganov
2023-03-21  We could use std::unordered_map over std::map (#305)                             Fabio R. Sluzala
2023-03-21  Compute perplexity over prompt (#270)                                            Gary Linscott
2023-03-21  cmdline option for custom amount of model parts (--n_parts N) (#348)             anzz1
2023-03-21  Change default repeat_penalty to 1.0                                             Georgi Gerganov
2023-03-21  Add tokenizer test + revert to C++11 (#355)                                      Georgi Gerganov
2023-03-20  move file magic/version to header, print expected version (#319)                 Mack Straight
2023-03-20  sentencepiece bpe compatible tokenizer (#252)                                    Mack Straight
2023-03-19  Support for multiple reverse prompts. (#299)                                     tjohnman
2023-03-19  Make prompt randomization optional. (#300)                                       tjohnman
2023-03-19  Add --ignore-eos parameter (#181)                                                slaren
2023-03-19  Command line switch to use F16 for memory_k and memory_v (refactor of #154) (... Erik Scholz
2023-03-19  Add "--instruct" argument for usage with Alpaca (#240)                           Georgi Gerganov
2023-03-17  Default to 4 threads (#243)                                                      Georgi Gerganov
2023-03-17  Don't tell users to use a bad number of threads (#243)                           Stephan Walter
2023-03-15  added ctx_size parameter (#148)                                                  Justin Suess
2023-03-12  Add interactive mode (#61)                                                       Matvey Soloviev
2023-03-12  Add back top_k (#56)                                                             beiller
2023-03-12  Add repetition penalty (#20)                                                     beiller
2023-03-10  Fix a bug in the rope calculation                                                Georgi Gerganov
2023-03-10  Final touches                                                                    Georgi Gerganov
2023-03-10  Initial release                                                                  Georgi Gerganov