llama.cpp.git: commit log (branch: master)

Date | Commit message | Author
2023-03-25 | Add support for file load progress reporting callbacks (#434) | Jed Fox
2023-03-25 | Add missing struct annotation (#483) | Doomsdayrs
2023-03-25 | Fix crash for 65B model with pre-allocated memory (#485) | Chris Kuehl
2023-03-24 | Disable BLAS altogether - the bug is not just for quantized mat mul | Georgi Gerganov
2023-03-24 | Disable BLAS branch in mul_mat - seems there is a bug | Georgi Gerganov
2023-03-24 | Immediately start processing the prompt before user input has been provided (... | Georgi Gerganov
2023-03-24 | Reduce memory usage and allocate enough memory for largest context (#473) | Georgi Gerganov
2023-03-24 | Temporarily bump the memory buffer size - hopefully fix issues from 483bab2e | Georgi Gerganov
2023-03-24 | Update README.md (#444) | Gary Mulder
2023-03-24 | fix instruct mode (#445) | rabidcopy
2023-03-24 | Properly free llama_context on failure | Georgi Gerganov
2023-03-24 | additional optimizations for POWER9 (#454) | Cameron Kaiser
2023-03-24 | Support calling mlock() on loaded model data on Linux and macOS (#453) | comex
2023-03-24 | Add embedding mode with arg flag. Currently working (#282) | Luciano
2023-03-24 | Add link to Roadmap discussion | Georgi Gerganov
2023-03-24 | Revert "Fix memory allocation issues and seg faults" | Georgi Gerganov
2023-03-24 | Fix memory allocation issues and seg faults | Georgi Gerganov
2023-03-23 | Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439) | Georgi Gerganov
2023-03-23 | Fix quantize script not finding models in parent directory (#428) | Jed Fox
2023-03-23 | Remove obsolete command from Docker script | Georgi Gerganov
2023-03-23 | Obsolete | Georgi Gerganov
2023-03-23 | Replace EOS with newline to prevent context/memory being flushed by EOS in in... | rabidcopy
2023-03-23 | Fix GPTQ converter (#423) | Timmy Knight
2023-03-23 | Generate library with CMake (#430) | nusu-github
2023-03-23 | Command line args bounds checking (#424) | anzz1
2023-03-23 | Fix Nix build | Ben Siraphob
2023-03-23 | Revert "Delete SHA256SUMS for now" (#429) | Stephan Walter
2023-03-23 | Fix Makefile echo escape codes (by removing them). (#418) | Kerfuffle
2023-03-23 | Move model section from issue template to README.md (#421) | Gary Mulder
2023-03-23 | Delete SHA256SUMS for now (#416) | anzz1
2023-03-23 | Adjust repetition penalty .. | Georgi Gerganov
2023-03-23 | Add link to recent podcast about whisper.cpp and llama.cpp | Georgi Gerganov
2023-03-23 | CI: CMake: Separate build and test steps (#376) | anzz1
2023-03-23 | Fix instruct mode broken by PR #354 (#409) | tjohnman
2023-03-22 | Update issue template so people will use it (#404) | Gary Mulder
2023-03-22 | Deduplicate q4 quantization functions (#383) | Stephan Walter
2023-03-22 | fix: add POSIX functionality for Linux compilation (#51) | Valentyn Bezshapkin
2023-03-22 | Don't force immediate interactive without `-i` (#354) | tjohnman
2023-03-22 | cmake: make llama an actual library (#392) | Erik Scholz
2023-03-22 | fix perplexity after c-api refactor (#390) | Erik Scholz
2023-03-22 | Add details on perplexity to README.md (#395) | Gary Linscott
2023-03-22 | Add missing header for memcpy (#386) | Yusuf Kağan Hanoğlu
2023-03-22 | When seed <= 0 - use the clock to generate one | Georgi Gerganov
2023-03-22 | Init llama_context_params properly from CLI (#370) | Georgi Gerganov
2023-03-22 | Remove temporary notice and update hot topics | Georgi Gerganov
2023-03-22 | Introduce C-style API (#370) | Georgi Gerganov
2023-03-21 | Add SHA256SUMS file and instructions to README how to obtain and verify the d... | Gary Mulder
2023-03-22 | Fix bin dir for win ci | anzz1
2023-03-21 | specify build type for ctest on windows (#371) | Erik Scholz
2023-03-21 | Add notice about pending change | Georgi Gerganov