llama.cpp.git, branch master: commit log
Age        | Commit message                                                                   | Author
2023-05-12 | CLI args use - instead of _, backwards compatible (#1416)                        | Johannes Gäßler
2023-05-12 | Add clang-tidy reviews to CI (#1407)                                             | slaren
2023-05-12 | readme : add C#/.NET bindings repo (#1409)                                       | Rinne
2023-05-12 | ggml : remove bit shuffling (#1405)                                              | Georgi Gerganov
2023-05-11 | prompts : model agnostic DAN (#1304)                                             | CRD716
2023-05-10 | main : add option to save full output to session (#1338)                         | Evan Jones
2023-05-09 | Locale fix for Windows (#1379)                                                   | DannyDaemonic
2023-05-09 | use pause asm insn in busyloop to run the CPU (13600K) 10 °C cooler (#1314)      | Sami Farin
2023-05-08 | Interface improvements and `--multiline-input` (previously `--author-mode`) (... | DannyDaemonic
2023-05-08 | readme : add notice about upcoming breaking change                               | Georgi Gerganov
2023-05-08 | readme : add TOC and Pygmalion instructions (#1359)                              | AlpinDale
2023-05-08 | llama : fix hparams shadow (#1367)                                               | Pavol Rusnak
2023-05-08 | llama : require first token to be BOS (#1303)                                    | Georgi Gerganov
2023-05-08 | convert: add ability to convert safetensors files (#1276)                        | ubik2
2023-05-08 | Documented CUDA reproducibility, added warning (#1346)                           | Johannes Gäßler
2023-05-07 | CI: add Windows CLBlast and OpenBLAS builds (#1277)                              | Henri Vasserman
2023-05-06 | ggml : Allow usage of CLBlast alongside Accelerate.framework (#1336)             | swittk
2023-05-06 | Remove default arguments from sampling functions (#1343)                         | Jed Fox
2023-05-05 | makefile: automatic Arch Linux detection (#1332)                                 | DaniAndTheWeb
2023-05-05 | ci : add cublas to windows release (#1271)                                       | Erik Scholz
2023-05-05 | readme: add missing info (#1324)                                                 | Pavol Rusnak
2023-05-05 | Fix for OpenCL / clbast builds on macOS. (#1329)                                 | Ionoclast Laboratories
2023-05-05 | Convert.py @staticmethod (#1327)                                                 | Benjamin Lecaillon
2023-05-05 | quantize: make output filename optional, default to ggml-model-<ftype>.bin (#... | slaren
2023-05-04 | Wrap exceptions in std::exception to verbose output on exception. (#1316)        | Ivan Stepanov
2023-05-04 | convert: support DT_BF16 tensors (#1309)                                         | Ivan Stepanov
2023-05-04 | readme : add OpenBuddy link (#1321)                                              | 44670
2023-05-04 | main : add --in-suffix option (#1318)                                            | 44670
2023-05-04 | ggml : change immintrin.h to intrin.h for compatibility (#1307)                  | Ron Jailall
2023-05-04 | Only escape prompts when used with `-e` (#1311)                                  | DannyDaemonic
2023-05-04 | Update main's README.md with new features (#1296)                                | DannyDaemonic
2023-05-04 | fix #1224 reverse prompt and multi line (#1297)                                  | Tomas
2023-05-03 | ggml : vectorize Q8_0 quantization                                               | Georgi Gerganov
2023-05-03 | examples : read chat prompts from a template file (#1196)                        | khimaros
2023-05-03 | minor : fix whitespaces (#1302)                                                  | Georgi Gerganov
2023-05-03 | minor : fix trailing whitespaces                                                 | Georgi Gerganov
2023-05-03 | scripts : platform independent script to verify sha256 checksums (#1203)         | KASR
2023-05-03 | examples : various prompt and example fixes (#1298)                              | CRD716
2023-05-02 | llama : only copy used KV cache in get / set state (#1272)                       | Evan Jones
2023-05-02 | Process escape sequences given in prompts (#1173)                                | DannyDaemonic
2023-05-02 | Handle signals properly on Windows (#1123)                                       | DannyDaemonic
2023-05-02 | Call sh on build-info.sh (#1294)                                                 | DannyDaemonic
2023-05-03 | fix build-info.h for git submodules (#1289)                                      | kuvaus
2023-05-03 | fix missing parameters in `llama_init_from_gpt_params` (#1293)                   | slaren
2023-05-02 | examples : add llama_init_from_gpt_params() common function (#1290)              | Ron Evans
2023-05-02 | llama : fix compile warnings                                                     | Georgi Gerganov
2023-05-02 | ggml : fix 32-bit ARM                                                            | Georgi Gerganov
2023-05-02 | examples : improve vertical alignment of a few variables (#1286)                 | Ron Evans
2023-05-02 | ggml : fix ppc64le build error and make cmake detect Power processors (#1284)    | Marvin Gießing
2023-05-02 | llama : allow 0 as a seed number. (#1275)                                        | Robert Brisita