llama.cpp.git (branch: master) : commit log for ggml.c
Age         Commit message                                                              Author

2023-04-20  llama : multi-threaded quantization (#1075)                                 (Kawrakow)
2023-04-20  ggml : add Q4_3 quantization (#1082)                                        (Georgi Gerganov)
2023-04-20  AVX2 optimization for vec_dot_q4_2_q8_0 (#1068)                             (Stephan Walter)
2023-04-20  Improve cuBLAS performance by dequantizing on the GPU (#1065)               (slaren)
2023-04-19  Q4_2 quantization with rmse-optimized scale and quants (#1062)              (Kawrakow)
2023-04-19  ggml : use 8-bit precision for Q4_1 intermediate results (#1047)            (Georgi Gerganov)
2023-04-19  ggml : Q4 cleanup - remove 4-bit dot product code (#1061)                   (Stephan Walter)
2023-04-19  Add NVIDIA cuBLAS support (#1044)                                           (slaren)
2023-04-19  Multi-threaded ggml_cpy (#1035)                                             (slaren)
2023-04-18  ggml : add new Q4_2 quantization (ARM only) (#1046)                         (Georgi Gerganov)
2023-04-18  ggml : scratch that - vmlaq_n_f32 is always better                          (Georgi Gerganov)
2023-04-18  ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators      (Georgi Gerganov)
2023-04-17  Add LoRA support (#820)                                                     (slaren)
2023-04-17  ggml : avoid using ggml_fp16_to_fp32() and ggml_fp32_to_fp16() in ggml.c    (Georgi Gerganov)
2023-04-17  Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933)            (Ivan Komarov)
2023-04-15  Fix potential int8 overflow in non-SIMD vec_dot (#986)                      (Stephan Walter)
2023-04-15  Refactor ggml.c for future tensor types (#1001)                             (Stephan Walter)
2023-04-15  ggml : add Q8_0 quantization for intermediate results (#951)                (Georgi Gerganov)
2023-04-15  ggml : use posix_memalign on non-Windows env                                (Georgi Gerganov)
2023-04-14  Expose type name from ggml (#970)                                           (Pavol Rusnak)
2023-04-14  ggml : add unary and binary map operations (#874)                           (Kerfuffle)
2023-04-14  ggml : minor                                                                (Georgi Gerganov)
2023-04-14  ggml : always allocate buffers with size multiple of GGML_MEM_ALIGN         (Georgi Gerganov)
2023-04-14  ggml : fix q4_1 dot product types                                           (Georgi Gerganov)
2023-04-14  ggml : optimize rope function to avoid call powf in the tight loop (#807)   (Howard Su)
2023-04-13  ggml : add GGML_DEFAULT_N_THREADS                                           (Georgi Gerganov)
2023-04-13  ggml : speed-up ggml_vec_dot_q4_1() ARM_NEON + 32-bit ARM support (#900)    (Georgi Gerganov)
2023-04-13  ggml : optimize non-SIMD Q4_0 vector dot product (#703)                     (Stephan Walter)
2023-04-13  ggml : introduce GGML_ALIGNED_MALLOC/GGML_ALIGNED_FREE macros (#884)        (Pavol Rusnak)
2023-04-13  ggml : update cblas_sgemm columns var to be more reasonable (#838)          (Vladimir)
2023-04-11  Fix whitespace, add .editorconfig, add GitHub workflow (#883)               (Pavol Rusnak)
2023-04-11  Add enum llama_ftype, sync ggml_type to model files (#709)                  (Stephan Walter)
2023-04-11  Windows fixes (#890)                                                        (comex)
2023-04-10  ggml : fix WASM build                                                       (Georgi Gerganov)
2023-04-10  ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst             (Georgi Gerganov)
2023-04-10  ggml : remove trailing whitespaces                                          (Georgi Gerganov)
2023-04-10  Simplify to include lower-case windows.h always, fix compile on mingw32 (#747)  (Marco Matthies)
2023-04-10  ggml : fix quantize_row_q4_1() ARM_NEON (close #876)                        (Georgi Gerganov)
2023-04-10  Rewrite loading code to try to satisfy everyone:                            (comex)
2023-04-08  Add quantize-stats command for testing quantization (#728)                  (unbounded)
2023-04-05  ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781)            (Georgi Gerganov)
2023-04-05  ggml, llama : avoid heavy V transpose + improvements (#775)                 (Georgi Gerganov)
2023-04-03  10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654)            (SebastianApel)
2023-04-02  ggml : change ne to int64_t (#626)                                          (Marian Cepok)
2023-03-31  Enable -std= for cmake builds, fix warnings (#598)                          (Stephan Walter)
2023-03-31  Optimize AVX2 ggml_vec_dot_q4_0 (#642)                                      (slaren)
2023-03-31  Add AVX acceleration (#617)                                                 (perserk)
2023-03-30  Ensure --mlock works properly with mmap() support                           (Justine Tunney)
2023-03-30  Add mmap support for model files                                            (Slaren)
2023-03-30  Remove unused variable (#607)                                               (Casey Primozic)