llama.cpp.git (branch: master) — commit log
Age         Commit message  (Author)

2023-04-21  llama : remember and restore kv cache data pointers (#1104)  (xaedes)
2023-04-21  ggml : a faster version for Q4_1 x Q8_0 dot products (#1083)  (Kawrakow)
2023-04-21  Show perplexity ETA in hours and minutes (#1096)  (slaren)
2023-04-21  llama : fix comment for "output.weight" tensor  (Georgi Gerganov)
2023-04-20  Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088)  (Stephan Walter)
2023-04-20  ggml : sync ggml (add GPT-NeoX RoPE implementation)  (Georgi Gerganov)
2023-04-20  ggml : fix bug in ggml_compute_forward_dup_f32()  (Georgi Gerganov)
2023-04-20  Add Q4_3 support to cuBLAS (#1086)  (slaren)
2023-04-20  ggml : do not break cuBLAS build (Q4_3 is not yet implemented)  (Georgi Gerganov)
2023-04-20  ggml : fix Q4_3 quantization  (Georgi Gerganov)
2023-04-20  llama : multi-threaded quantization (#1075)  (Kawrakow)
2023-04-20  ggml : add Q4_3 quantization (#1082)  (Georgi Gerganov)
2023-04-20  ci : remove the LLAMA_ACCELERATE matrix dimension from Ubuntu builds in the C...  (Ivan Komarov)
2023-04-20  fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080)  (源文雨)
2023-04-20  AVX2 optimization for vec_dot_q4_2_q8_0 (#1068)  (Stephan Walter)
2023-04-20  Improve cuBLAS performance by dequantizing on the GPU (#1065)  (slaren)
2023-04-19  Minor: Readme fixed grammar, spelling, and misc updates (#1071)  (CRD716)
2023-04-19  Q4_2 quantization with rmse-optimized scale and quants (#1062)  (Kawrakow)
2023-04-19  ggml : use 8-bit precision for Q4_1 intermediate results (#1047)  (Georgi Gerganov)
2023-04-19  readme : add warning about Q4_2 and Q4_3  (Georgi Gerganov)
2023-04-19  ggml : Q4 cleanup - remove 4-bit dot product code (#1061)  (Stephan Walter)
2023-04-19  Add NVIDIA cuBLAS support (#1044)  (slaren)
2023-04-19  Multi-threaded ggml_cpy (#1035)  (slaren)
2023-04-18  ggml : add new Q4_2 quantization (ARM only) (#1046)  (Georgi Gerganov)
2023-04-18  ggml : scratch that - vmlaq_n_f32 is always better  (Georgi Gerganov)
2023-04-18  gitignore : vdot  (Georgi Gerganov)
2023-04-18  ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators  (Georgi Gerganov)
2023-04-18  Adding a simple program to measure speed of dot products (#1041)  (Kawrakow)
2023-04-18  readme : update hot topics about new LoRA functionality  (Georgi Gerganov)
2023-04-18  ci : do not run on drafts  (Georgi Gerganov)
2023-04-18  Do not close file after mmap (Windows version) (#1034)  (Ivan Komarov)
2023-04-17  readme : add Ruby bindings (#1029)  (Atsushi Tatsuma)
2023-04-17  add 4_0 to default outfile namestr dict (#1031)  (Cameron)
2023-04-17  Add LoRA support (#820)  (slaren)
2023-04-17  llama : well-defined static initialization of complex objects (#927)  (Arik Poznanski)
2023-04-17  quantize-stats : fix bug in --type argument  (Georgi Gerganov)
2023-04-17  ggml : avoid using ggml_fp16_to_fp32() and ggml_fp32_to_fp16() in ggml.c  (Georgi Gerganov)
2023-04-17  Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933)  (Ivan Komarov)
2023-04-16  Fix: do not close file on mmap (#1017)  (slaren)
2023-04-16  stdout : vertical align outputs for better readibility  (Georgi Gerganov)
2023-04-16  examples: add missing <ctime> include for time() (#1011)  (Pavol Rusnak)
2023-04-16  Fix msys2 build error and warnings (#1009)  (nanahi)
2023-04-15  convert.py: Fix loading safetensors and ggml format on Windows (#991)  (comex)
2023-04-15  Fix potential int8 overflow in non-SIMD vec_dot (#986)  (Stephan Walter)
2023-04-15  Refactor ggml.c for future tensor types (#1001)  (Stephan Walter)
2023-04-15  ggml : add Q8_0 quantization for intermediate results (#951)  (Georgi Gerganov)
2023-04-15  ggml : use posix_memalign on non-Windows env  (Georgi Gerganov)
2023-04-15  benchmark : fix result validation in benchmark-q4_0-matmult (#987)  (Ivan Komarov)
2023-04-15  cmake : add finding the OpenBLAS header file (#992)  (katsu560)
2023-04-14  Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)  (Pavol Rusnak)