Commit log for llama.cpp.git (branch: master)

Date        Commit message (Author)
2023-03-23  Replace EOS with newline to prevent context/memory being flushed by EOS in in...  (rabidcopy)
2023-03-23  Fix GPTQ converter (#423)  (Timmy Knight)
2023-03-23  Generate library with CMake (#430)  (nusu-github)
2023-03-23  Command line args bounds checking (#424)  (anzz1)
2023-03-23  Fix Nix build  (Ben Siraphob)
2023-03-23  Revert "Delete SHA256SUMS for now" (#429)  (Stephan Walter)
2023-03-23  Fix Makefile echo escape codes (by removing them). (#418)  (Kerfuffle)
2023-03-23  Move model section from issue template to README.md (#421)  (Gary Mulder)
2023-03-23  Delete SHA256SUMS for now (#416)  (anzz1)
2023-03-23  Adjust repetition penalty ..  (Georgi Gerganov)
2023-03-23  Add link to recent podcast about whisper.cpp and llama.cpp  (Georgi Gerganov)
2023-03-23  CI: CMake: Separate build and test steps (#376)  (anzz1)
2023-03-23  Fix instruct mode broken by PR #354 (#409)  (tjohnman)
2023-03-22  Update issue template so people will use it (#404)  (Gary Mulder)
2023-03-22  Deduplicate q4 quantization functions (#383)  (Stephan Walter)
2023-03-22  fix: add POSIX functionality for Linux compilation (#51)  (Valentyn Bezshapkin)
2023-03-22  Don't force immediate interactive without `-i` (#354)  (tjohnman)
2023-03-22  cmake: make llama an actual library (#392)  (Erik Scholz)
2023-03-22  fix perplexity after c-api refactor (#390)  (Erik Scholz)
2023-03-22  Add details on perplexity to README.md (#395)  (Gary Linscott)
2023-03-22  Add missing header for memcpy (#386)  (Yusuf Kağan Hanoğlu)
2023-03-22  When seed <= 0 - use the clock to generate one  (Georgi Gerganov)
2023-03-22  Init llama_context_params properly from CLI (#370)  (Georgi Gerganov)
2023-03-22  Remove temporary notice and update hot topics  (Georgi Gerganov)
2023-03-22  Introduce C-style API (#370)  (Georgi Gerganov)
2023-03-21  Add SHA256SUMS file and instructions to README how to obtain and verify the d...  (Gary Mulder)
2023-03-22  Fix bin dir for win ci  (anzz1)
2023-03-21  specify build type for ctest on windows (#371)  (Erik Scholz)
2023-03-21  Add notice about pending change  (Georgi Gerganov)
2023-03-21  fix typo in chatLLaMa (#368)  (Mathieu Nayrolles)
2023-03-21  Update issue templates  (Georgi Gerganov)
2023-03-21  We could use std::unordered_map over std::map (#305)  (Fabio R. Sluzala)
2023-03-21  Fix color codes emitting mid-UTF8 code. (#312)  (Matvey Soloviev)
2023-03-21  Importer for GPTQ quantized LLaMA models (#301)  (comex)
2023-03-21  Compute perplexity over prompt (#270)  (Gary Linscott)
2023-03-21  Add chatLLaMa script (#198)  (Jean-Christophe Hoelt)
2023-03-21  makefile: Fix CPU feature detection on Haiku (#218)  (Alex von Gluck IV)
2023-03-21  Enable ANSI colors on Windows 10+ (#311)  (anzz1)
2023-03-21  Minor style changes  (Georgi Gerganov)
2023-03-21  Add chat.sh script  (Georgi Gerganov)
2023-03-21  Check for reverse prompt by characters instead of tokens (#292) (#330)  (tjohnman)
2023-03-21  Check for reverse prompt by characters instead of tokens (#292) (#330)  (tjohnman)
2023-03-21  Fix convert script, warnings alpaca instructions, default params  (Georgi Gerganov)
2023-03-21  Add OpenBSD support (#314)  (Kevin Lo)
2023-03-21  fix typo in comment (#318)  (Mack Straight)
2023-03-21  Makefile: slightly cleanup for Mac Intel; echo instead of run ./main -h (#335)  (Qingyou Meng)
2023-03-21  cmdline option for custom amount of model parts (--n_parts N) (#348)  (anzz1)
2023-03-21  Update IPFS links to quantized alpaca with new tokenizer format (#352)  (Kevin Kwok)
2023-03-21  Change default repeat_penalty to 1.0  (Georgi Gerganov)
2023-03-21  Add tokenizer test + revert to C++11 (#355)  (Georgi Gerganov)