llama.cpp.git, branch master: commit log for root/main.cpp

Age         Commit message (Author)
2023-03-25  Remove obsolete assert and fix compiler warning  (Georgi Gerganov)
2023-03-25  feat: '--in-prefix STRING' option (#426)  (anzz1)
2023-03-24  Immediately start processing the prompt before user input has been provided (...  (Georgi Gerganov)
2023-03-24  Reduce memory usage and allocate enough memory for largest context (#473)  (Georgi Gerganov)
2023-03-24  fix instruct mode (#445)  (rabidcopy)
2023-03-24  Support calling mlock() on loaded model data on Linux and macOS (#453)  (comex)
2023-03-24  Add embedding mode with arg flag. Currently working (#282)  (Luciano)
2023-03-23  Replace EOS with newline to prevent context/memory being flushed by EOS in in...  (rabidcopy)
2023-03-23  Fix instruct mode broken by PR #354 (#409)  (tjohnman)
2023-03-22  Don't force immediate interactive without `-i` (#354)  (tjohnman)
2023-03-22  fix perplexity after c-api refactor (#390)  (Erik Scholz)
2023-03-22  When seed <= 0 - use the clock to generate one  (Georgi Gerganov)
2023-03-22  Init llama_context_params properly from CLI (#370)  (Georgi Gerganov)
2023-03-22  Introduce C-style API (#370)  (Georgi Gerganov)
2023-03-21  We could use std::unordered_map over std::map (#305)  (Fabio R. Sluzala)
2023-03-21  Fix color codes emitting mid-UTF8 code. (#312)  (Matvey Soloviev)
2023-03-21  Importer for GPTQ quantized LLaMA models (#301)  (comex)
2023-03-21  Compute perplexity over prompt (#270)  (Gary Linscott)
2023-03-21  Enable ANSI colors on Windows 10+ (#311)  (anzz1)
2023-03-21  Check for reverse prompt by characters instead of tokens (#292) (#330)  (tjohnman)
2023-03-21  Fix convert script, warnings alpaca instructions, default params  (Georgi Gerganov)
2023-03-21  cmdline option for custom amount of model parts (--n_parts N) (#348)  (anzz1)
2023-03-21  Add tokenizer test + revert to C++11 (#355)  (Georgi Gerganov)
2023-03-20  move file magic/version to header, print expected version (#319)  (Mack Straight)
2023-03-20  sentencepiece bpe compatible tokenizer (#252)  (Mack Straight)
2023-03-19  bugfix: default should not be interactive (#304)  (cocktailpeanut)
2023-03-19  fix coloring of last `n_batch` of prompt, and refactor line input (#221)  (Rickey Bowers Jr)
2023-03-19  Support for multiple reverse prompts. (#299)  (tjohnman)
2023-03-19  Make prompt randomization optional. (#300)  (tjohnman)
2023-03-19  Respect the maximum number of tokens in interactive. (#298)  (tjohnman)
2023-03-19  Add --ignore-eos parameter (#181)  (slaren)
2023-03-19  interactive mode: print '\n' in sigint_handler, this flush stdout thus ensure...  (Qingyou Meng)
2023-03-19  Command line switch to use F16 for memory_k and memory_v (refactor of #154) (...  (Erik Scholz)
2023-03-19  Fix off-by-one bug (#115)  (Georgi Gerganov)
2023-03-19  Drop trailing new line from file prompts (#80)  (Georgi Gerganov)
2023-03-19  Add "--instruct" argument for usage with Alpaca (#240)  (Georgi Gerganov)
2023-03-18  Warn user if a context size greater than 2048 tokens is specified (#274)  (Ronsor)
2023-03-18  Remove unused code since n_vocab is model.hparams.n_vocab (#262)  (Alex Nguyen)
2023-03-18  fixed warning with std::ignore about unused function result (#151)  (Justin Suess)
2023-03-17  Implement non-greedy tokenizer that tries to maximize token lengths (#242)  (thement)
2023-03-16  Add RMS norm and use it (#187)  (hoangmit)
2023-03-15  add SIGINT support for _WIN32 environments (#120)  (Rickey Bowers Jr)
2023-03-15  added ctx_size parameter (#148)  (Justin Suess)
2023-03-15  fixed color reset on exit (#149)  (Justin Suess)
2023-03-13  Print system information  (Georgi Gerganov)
2023-03-13  Use fprintf for diagnostic output (#48)  (Pavol Rusnak)
2023-03-13  Reduce model loading time (#43)  (uint256_t)
2023-03-13  Fix UTF-8 handling (including colors) (#79)  (Val Kharitonov)
2023-03-13  Gate signal support on being on a unixoid system. (#74)  (Matvey Soloviev)
2023-03-13  Fix token count accounting  (Matvey Soloviev)