llama.cpp.git — commit log (branch: master)
2023-05-02  llama : allow 0 as a seed number. (#1275)  (Robert Brisita)
2023-05-02  main : switch input_noecho to input_echo to remove negation (#979)  (Ron Evans)
2023-05-02  ggml: add names to tensors (#1268)  (slaren)
2023-05-01  Add git-based build information for better issue tracking (#1232)  (DannyDaemonic)
2023-05-01  cuBLAS: refactor and optimize f16 mat mul performance (#1259)  (slaren)
2023-05-01  llama : update stubs for systems without mmap and mlock (#1266)  (xloem)
2023-05-01  ggml : fix ggml_used_mem() (#1264)  (Kerfuffle)
2023-05-01  llama : fix session load / save (#1263)  (Georgi Gerganov)
2023-05-01  cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)  (slaren)
2023-05-01  llama : let context be const when accessing const data (#1261)  (Alex Klinkhamer)
2023-04-30  ggml : fix UB (int << 31)  (Georgi Gerganov)
2023-04-30  build: add armv{6,7,8} support to cmake (#1251)  (Pavol Rusnak)
2023-04-30  common : better default number of threads (#934)  (jon-chuang)
2023-04-30  ggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels (#1225)  (0cc4m)
2023-04-30  ggml : add Q5 WASM SIMD + GGML_FTYPE  (Georgi Gerganov)
2023-04-30  Various fixes to mat_mul benchmark (#1253)  (Stephan Walter)
2023-04-30  ggml : fix labels for GGML_OP_ALIBI  (Georgi Gerganov)
2023-04-29  ggml : fix 32-bit ARM NEON  (Georgi Gerganov)
2023-04-29  ggml : use vzip instead of vuzp for consistency  (Georgi Gerganov)
2023-04-29  ggml : fix visibility and unused warnings  (Georgi Gerganov)
2023-04-29  ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229)  (Georgi Gerganov)
2023-04-29  ggml : adjust mul_mat_f16 work memory (#1226)  (Georgi Gerganov)
2023-04-29  build : fix reference to old llama_util.h  (Georgi Gerganov)
2023-04-29  examples : fix save-load-state + rename llama-util.h  (Georgi Gerganov)
2023-04-29  common : change default parameters to pre-#1126 (#1223)  (Georgi Gerganov)
2023-04-29  llama : new sampling algorithms (#1126)  (Ivan Stepanov)
2023-04-29  cuBLAS: use host pinned memory and dequantize while copying (#1207)  (slaren)
2023-04-29  cuBLAS: non-contiguous tensor support (#1215)  (Henri Vasserman)
2023-04-28  Remove Q4_3 which is no better than Q5 (#1218)  (Stephan Walter)
2023-04-28  readme : update hot topics  (Georgi Gerganov)
2023-04-28  ggml : sync ggml (ggml_alibi)  (Georgi Gerganov)
2023-04-28  examples : add Jeopardy example (#1168)  (CRD716)
2023-04-28  llama : add session file format and saved sessions in main (#1169)  (Evan Jones)
2023-04-28  ggml : add helper debug printf in soft_max  (Georgi Gerganov)
2023-04-28  ggml : add CLBlast support (#1164)  (0cc4m)
2023-04-28  Correcting link to w64devkit (#1214)  (Folko-Ven)
2023-04-28  Add Manjaro CUDA include and lib dirs to Makefile (#1212)  (Johannes Gäßler)
2023-04-28  add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)  (Yann Follet)
2023-04-26  ggml : slightly faster AVX2 implementation for Q5 (#1197)  (Stephan Walter)
2023-04-26  readme : add quantization info  (Georgi Gerganov)
2023-04-26  ggml : add Q5_0 and Q5_1 quantization (#1187)  (Georgi Gerganov)
2023-04-26  Allow setting the rng seed after initialization. (#1184)  (Ásgeir Bjarni Ingvarsson)
2023-04-26  Updating build instructions to include BLAS support (#1183)  (DaniAndTheWeb)
2023-04-26  quantize : use `map` to assign quantization type from `string` (#1191)  (Pavol Rusnak)
2023-04-25  Update SHA256SUMS after quantization change (#1181)  (Stephan Walter)
2023-04-25  py : cast lora_alpha to int in convert-lora-to-ggml (#1170)  (ostix360)
2023-04-25  nix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981)  (Pavol Rusnak)
2023-04-25  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (...  (Georgi Gerganov)
2023-04-25  ggml : use full range for Q4_0 and Q4_2 quantization (#729)  (unbounded)
2023-04-24  ggml : fix bug in ggml_compute_forward_sum_f32 (#1162)  (xaedes)