llama.cpp.git (branch: master) - commit log
Age        | Commit message                                                          | Author
2023-03-21 | makefile: Fix CPU feature detection on Haiku (#218)                      | Alex von Gluck IV
2023-03-21 | Enable ANSI colors on Windows 10+ (#311)                                 | anzz1
2023-03-21 | Minor style changes                                                      | Georgi Gerganov
2023-03-21 | Add chat.sh script                                                       | Georgi Gerganov
2023-03-21 | Check for reverse prompt by characters instead of tokens (#292) (#330)   | tjohnman
2023-03-21 | Fix convert script, warnings alpaca instructions, default params         | Georgi Gerganov
2023-03-21 | Add OpenBSD support (#314)                                               | Kevin Lo
2023-03-21 | fix typo in comment (#318)                                               | Mack Straight
2023-03-21 | Makefile: slightly cleanup for Mac Intel; echo instead of run ./main -h (#335) | Qingyou Meng
2023-03-21 | cmdline option for custom amount of model parts (--n_parts N) (#348)     | anzz1
2023-03-21 | Update IPFS links to quantized alpaca with new tokenizer format (#352)   | Kevin Kwok
2023-03-21 | Change default repeat_penalty to 1.0                                     | Georgi Gerganov
2023-03-21 | Add tokenizer test + revert to C++11 (#355)                              | Georgi Gerganov
2023-03-21 | Add initial AVX512 support for dot product on Linux (#320)               | Casey Primozic
2023-03-21 | Adding missing features of CMakeLists.txt & Refactoring (#131)           | nusu-github
2023-03-20 | Nix flake: set meta.mainProgram to llama                                 | Ben Siraphob
2023-03-20 | Fixed tokenizer.model not found error when model dir is symlink (#325)   | Qingyou Meng
2023-03-20 | move file magic/version to header, print expected version (#319)         | Mack Straight
2023-03-20 | Docker - Fix publish docker image in GitHub Registry (#235)              | Bernat Vadell
2023-03-20 | sentencepiece bpe compatible tokenizer (#252)                            | Mack Straight
2023-03-20 | Add tqdm to Python requirements (#293)                                   | Stephan Walter
2023-03-19 | bugfix: default should not be interactive (#304)                         | cocktailpeanut
2023-03-19 | Rename script                                                            | Georgi Gerganov
2023-03-19 | Add temporary helper script for Alpaca chat                              | Georgi Gerganov
2023-03-19 | fix coloring of last `n_batch` of prompt, and refactor line input (#221) | Rickey Bowers Jr
2023-03-19 | Support for multiple reverse prompts. (#299)                             | tjohnman
2023-03-19 | Improved quantize script (#222)                                          | Suaj Carrot
2023-03-19 | Make prompt randomization optional. (#300)                               | tjohnman
2023-03-19 | Respect the maximum number of tokens in interactive. (#298)              | tjohnman
2023-03-19 | Add --ignore-eos parameter (#181)                                        | slaren
2023-03-19 | interactive mode: print '\n' in sigint_handler, this flush stdout thus ensure... | Qingyou Meng
2023-03-19 | Command line switch to use F16 for memory_k and memory_v (refactor of #154) (... | Erik Scholz
2023-03-19 | Update hot topics to mention Alpaca support                              | Georgi Gerganov
2023-03-19 | Fix off-by-one bug (#115)                                                | Georgi Gerganov
2023-03-19 | Fix python stuff (#109)                                                  | Georgi Gerganov
2023-03-19 | Refactoring `convert-pth-to-ggml.py`: more concise and readable (#109)   | qunash
2023-03-19 | Drop trailing new line from file prompts (#80)                           | Georgi Gerganov
2023-03-19 | Add instruction for using Alpaca (#240)                                  | Georgi Gerganov
2023-03-19 | Add "--instruct" argument for usage with Alpaca (#240)                   | Georgi Gerganov
2023-03-19 | Change RMSNorm eps to 1e-6 (#173)                                        | Georgi Gerganov
2023-03-18 | Warn user if a context size greater than 2048 tokens is specified (#274) | Ronsor
2023-03-18 | Fix typo in readme                                                       | Pavol Rusnak
2023-03-18 | Add note about Python 3.11 to readme                                     | Pavol Rusnak
2023-03-18 | Add memory/disk requirements to readme                                   | Pavol Rusnak
2023-03-18 | Remove unused code since n_vocab is model.hparams.n_vocab (#262)         | Alex Nguyen
2023-03-18 | fixed warning with std::ignore about unused function result (#151)       | Justin Suess
2023-03-18 | Fix n^2 loop in tokenization (#254)                                      | Gary Linscott
2023-03-18 | CI Improvements (#230)                                                   | anzz1
2023-03-17 | Nix flake (#40)                                                          | Niklas Korz