llama.cpp.git commit log (branch: master)
Age         Commit message (Author)
2023-03-20  Nix flake: set meta.mainProgram to llama (Ben Siraphob)
2023-03-20  Fixed tokenizer.model not found error when model dir is symlink (#325) (Qingyou Meng)
2023-03-20  move file magic/version to header, print expected version (#319) (Mack Straight)
2023-03-20  Docker - Fix publish docker image in GitHub Registry (#235) (Bernat Vadell)
2023-03-20  sentencepiece bpe compatible tokenizer (#252) (Mack Straight)
2023-03-20  Add tqdm to Python requirements (#293) (Stephan Walter)
2023-03-19  bugfix: default should not be interactive (#304) (cocktailpeanut)
2023-03-19  Rename script (Georgi Gerganov)
2023-03-19  Add temporary helper script for Alpaca chat (Georgi Gerganov)
2023-03-19  fix coloring of last `n_batch` of prompt, and refactor line input (#221) (Rickey Bowers Jr)
2023-03-19  Support for multiple reverse prompts. (#299) (tjohnman)
2023-03-19  Improved quantize script (#222) (Suaj Carrot)
2023-03-19  Make prompt randomization optional. (#300) (tjohnman)
2023-03-19  Respect the maximum number of tokens in interactive. (#298) (tjohnman)
2023-03-19  Add --ignore-eos parameter (#181) (slaren)
2023-03-19  interactive mode: print '\n' in sigint_handler, this flush stdout thus ensure... (Qingyou Meng)
2023-03-19  Command line switch to use F16 for memory_k and memory_v (refactor of #154) (... (Erik Scholz)
2023-03-19  Update hot topics to mention Alpaca support (Georgi Gerganov)
2023-03-19  Fix off-by-one bug (#115) (Georgi Gerganov)
2023-03-19  Fix python stuff (#109) (Georgi Gerganov)
2023-03-19  Refactoring `convert-pth-to-ggml.py`: more concise and readable (#109) (qunash)
2023-03-19  Drop trailing new line from file prompts (#80) (Georgi Gerganov)
2023-03-19  Add instruction for using Alpaca (#240) (Georgi Gerganov)
2023-03-19  Add "--instruct" argument for usage with Alpaca (#240) (Georgi Gerganov)
2023-03-19  Change RMSNorm eps to 1e-6 (#173) (Georgi Gerganov)
2023-03-18  Warn user if a context size greater than 2048 tokens is specified (#274) (Ronsor)
2023-03-18  Fix typo in readme (Pavol Rusnak)
2023-03-18  Add note about Python 3.11 to readme (Pavol Rusnak)
2023-03-18  Add memory/disk requirements to readme (Pavol Rusnak)
2023-03-18  Remove unused code since n_vocab is model.hparams.n_vocab (#262) (Alex Nguyen)
2023-03-18  fixed warning with std::ignore about unused function result (#151) (Justin Suess)
2023-03-18  Fix n^2 loop in tokenization (#254) (Gary Linscott)
2023-03-18  CI Improvements (#230) (anzz1)
2023-03-17  Nix flake (#40) (Niklas Korz)
2023-03-17  Implement non-greedy tokenizer that tries to maximize token lengths (#242) (thement)
2023-03-17  Default to 4 threads (#243) (Georgi Gerganov)
2023-03-17  Update Contributing section (Georgi Gerganov)
2023-03-17  Don't tell users to use a bad number of threads (#243) (Stephan Walter)
2023-03-17  add ptread link to fix cmake build under linux (#114) (mmyjona)
2023-03-17  🚀 Dockerize llamacpp (#132) (Bernat Vadell)
2023-03-17  Q4_1 quantization (#193) (Matvey Soloviev)
2023-03-16  Update README.md (Georgi Gerganov)
2023-03-16  Expand "Contributing" section (Georgi Gerganov)
2023-03-16  Update hot topics - RMSnorm (Georgi Gerganov)
2023-03-15  Fix RMS norm in GGML (#191) (Nebula)
2023-03-16  Add RMS norm and use it (#187) (hoangmit)
2023-03-15  fixed typo (#178) (moritzbrantner)
2023-03-15  add SIGINT support for _WIN32 environments (#120) (Rickey Bowers Jr)
2023-03-15  added ctx_size parameter (#148) (Justin Suess)
2023-03-15  fixed color reset on exit (#149) (Justin Suess)