author | Georgi Gerganov <ggerganov@gmail.com> | 2023-04-18 23:54:57 +0300
---|---|---
committer | GitHub <noreply@github.com> | 2023-04-18 23:54:57 +0300
commit | 77a73403ca8eaced2590559d0f9cebd2b3649d32 |
tree | 7b95e7565ce86b81d8dd620117564da901ce3ce7 /ggml.h |
parent | 50a8a2af97cb92e53e7a3195aa201c3d87da5415 |
ggml : add new Q4_2 quantization (ARM only) (#1046)
* ggml : Q4_2 ARM
* ggml : add ggml_is_quantized()
* llama : update llama_type_name() with Q4_2 entry
* ggml : speed-up q4_2
- 4 threads: ~100ms -> ~90ms
- 8 threads: ~55ms -> ~50ms
* ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
Diffstat (limited to 'ggml.h')
-rw-r--r-- | ggml.h | 4
1 file changed, 3 insertions(+), 1 deletion(-)
@@ -204,7 +204,8 @@ enum ggml_type {
     GGML_TYPE_F16  = 1,
     GGML_TYPE_Q4_0 = 2,
     GGML_TYPE_Q4_1 = 3,
-    GGML_TYPE_Q8_0 = 4,
+    GGML_TYPE_Q4_2 = 4,
+    GGML_TYPE_Q8_0 = 5,
     GGML_TYPE_I8,
     GGML_TYPE_I16,
     GGML_TYPE_I32,
@@ -806,6 +807,7 @@ enum ggml_opt_result ggml_opt(

 size_t ggml_quantize_q4_0(const float * src, void * dst, int n, int k, int64_t * hist);
 size_t ggml_quantize_q4_1(const float * src, void * dst, int n, int k, int64_t * hist);
+size_t ggml_quantize_q4_2(const float * src, void * dst, int n, int k, int64_t * hist);

 //
 // system info