Age | Commit message | Author
---|---|---
2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179) | Georgi Gerganov
 | * ggml : add Q8_0 quantization format (rename the old one to Q8_1) * tests : fix test-quantize-fns * ggml : finalize Q8_0 implementation * ggml : use q4_0_q8_0 and q4_2_q8_0 * ggml : fix Q8_0 dot product bug (ARM) * ggml : Q8_0 unroll x2 * ggml : fix bug - using wrong block type * ggml : extend quantize_fns_t with "vec_dot_type" * ggml : fix Q8_0 to use 255 values out of 256 * ggml : fix assert using wrong QK4_2 instead of QK4_3 |
2023-04-21 | Improve cuBLAS performance by using a memory pool (#1094) | slaren
 | * Improve cuBLAS performance by using a memory pool * Move cuda specific definitions to ggml-cuda.h/cu * Add CXX flags to nvcc * Change memory pool synchronization mechanism to a spin lock * General code cleanup |
2023-04-20 | Add Q4_3 support to cuBLAS (#1086) | slaren
2023-04-20 | Improve cuBLAS performance by dequantizing on the GPU (#1065) | slaren
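
For context on the Q8_0 format referenced in the 2023-04-25 commit above, here is a minimal sketch of block-wise 8-bit quantization with a per-block scale. The block size, struct fields, and function name are illustrative assumptions, not ggml's exact definitions; the commit message only tells us that Q8_0 stores a scale per block and restricts quants to 255 of the 256 int8 values (a symmetric [-127, 127] range).

```c
#include <math.h>
#include <stdint.h>

#define QK8_0 32  // assumed block size; illustrative only

// Hypothetical block layout: one scale plus QK8_0 signed 8-bit quants.
typedef struct {
    float  d;           // per-block scale (delta)
    int8_t qs[QK8_0];   // quantized values in [-127, 127]
} block_q8_0;

// Quantize QK8_0 floats into one block: d = max|x| / 127, q = round(x / d).
// Restricting quants to [-127, 127] keeps the range symmetric around zero,
// i.e. 255 of the 256 possible int8 values are used.
static void quantize_block_q8_0(const float *x, block_q8_0 *out) {
    float amax = 0.0f;
    for (int i = 0; i < QK8_0; ++i) {
        const float ax = fabsf(x[i]);
        if (ax > amax) amax = ax;
    }
    const float d  = amax / 127.0f;
    const float id = d != 0.0f ? 1.0f / d : 0.0f;
    out->d = d;
    for (int i = 0; i < QK8_0; ++i) {
        out->qs[i] = (int8_t) roundf(x[i] * id);
    }
}
```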