path: root/ggml.c
Age        | Commit message                                                                  | Author
-----------|---------------------------------------------------------------------------------|----------------
2023-05-01 | cuBLAS: refactor and optimize f16 mat mul performance (#1259)                   | slaren
2023-05-01 | ggml : fix ggml_used_mem() (#1264)                                              | Kerfuffle
2023-04-30 | ggml : fix UB (int << 31)                                                       | Georgi Gerganov
2023-04-30 | ggml : add Q5 WASM SIMD + GGML_FTYPE                                            | Georgi Gerganov
2023-04-30 | ggml : fix labels for GGML_OP_ALIBI                                             | Georgi Gerganov
2023-04-29 | ggml : fix 32-bit ARM NEON                                                      | Georgi Gerganov
2023-04-29 | ggml : use vzip instead of vuzp for consistency                                 | Georgi Gerganov
2023-04-29 | ggml : fix visibility and unused warnings                                       | Georgi Gerganov
2023-04-29 | ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229)                            | Georgi Gerganov
2023-04-29 | ggml : adjust mul_mat_f16 work memory (#1226)                                   | Georgi Gerganov
2023-04-29 | cuBLAS: use host pinned memory and dequantize while copying (#1207)             | slaren
2023-04-29 | cuBLAS: non-contiguous tensor support (#1215)                                   | Henri Vasserman
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218)                                  | Stephan Walter
2023-04-28 | ggml : sync ggml (ggml_alibi)                                                   | Georgi Gerganov
2023-04-28 | ggml : add helper debug printf in soft_max                                      | Georgi Gerganov
2023-04-28 | ggml : add CLBlast support (#1164)                                              | 0cc4m
2023-04-28 | add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)                       | Yann Follet
2023-04-26 | ggml : slightly faster AVX2 implementation for Q5 (#1197)                       | Stephan Walter
2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187)                                   | Georgi Gerganov
2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (...| Georgi Gerganov
2023-04-25 | ggml : use full range for Q4_0 and Q4_2 quantization (#729)                     | unbounded
2023-04-24 | ggml : fix bug in ggml_compute_forward_sum_f32 (#1162)                          | xaedes
2023-04-24 | Fix build for gcc 8 and test in CI (#1154)                                      | Stephan Walter
2023-04-23 | ggml : do not print perf ops that have not been used at all                     | Georgi Gerganov
2023-04-23 | ggml : better PERF prints + support "LLAMA_PERF=1 make"                         | Georgi Gerganov
2023-04-23 | Improve AVX2 for vec_dot_q4_3_q8_0 (#1138)                                      | Stephan Walter
2023-04-23 | A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX5...| Yishuo Wang
2023-04-22 | ggml : fix Q4_3 cuBLAS                                                          | Georgi Gerganov
2023-04-22 | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)                 | Stephan Walter
2023-04-22 | ggml : fix AVX build + update to new Q8_0 format                                | Georgi Gerganov
2023-04-22 | ggml : alternative Q4_3 implementation using modified Q8_0 (#1109)              | Georgi Gerganov
2023-04-22 | ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099)          | Stephan Walter
2023-04-21 | Improve cuBLAS performance by using a memory pool (#1094)                       | slaren
2023-04-21 | ggml : a faster version for Q4_1 x Q8_0 dot products (#1083)                    | Kawrakow
2023-04-20 | ggml : sync ggml (add GPT-NeoX RoPE implementation)                             | Georgi Gerganov
2023-04-20 | ggml : fix bug in ggml_compute_forward_dup_f32()                                | Georgi Gerganov
2023-04-20 | ggml : do not break cuBLAS build (Q4_3 is not yet implemented)                  | Georgi Gerganov
2023-04-20 | ggml : fix Q4_3 quantization                                                    | Georgi Gerganov
2023-04-20 | llama : multi-threaded quantization (#1075)                                     | Kawrakow
2023-04-20 | ggml : add Q4_3 quantization (#1082)                                            | Georgi Gerganov
2023-04-20 | AVX2 optimization for vec_dot_q4_2_q8_0 (#1068)                                 | Stephan Walter
2023-04-20 | Improve cuBLAS performance by dequantizing on the GPU (#1065)                   | slaren
2023-04-19 | Q4_2 quantization with rmse-optimized scale and quants (#1062)                  | Kawrakow
2023-04-19 | ggml : use 8-bit precision for Q4_1 intermediate results (#1047)                | Georgi Gerganov
2023-04-19 | ggml : Q4 cleanup - remove 4-bit dot product code (#1061)                       | Stephan Walter
2023-04-19 | Add NVIDIA cuBLAS support (#1044)                                               | slaren
2023-04-19 | Multi-threaded ggml_cpy (#1035)                                                 | slaren
2023-04-18 | ggml : add new Q4_2 quantization (ARM only) (#1046)                             | Georgi Gerganov
2023-04-18 | ggml : scratch that - vmlaq_n_f32 is always better                              | Georgi Gerganov
2023-04-18 | ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators          | Georgi Gerganov