path: root/README.md
authorGeorgi Gerganov <ggerganov@gmail.com>2023-05-19 22:17:18 +0300
committerGitHub <noreply@github.com>2023-05-19 22:17:18 +0300
commit2d5db48371052087a83974abda3767d1aedec598 (patch)
treeca7e6ad4b2be21d96272aece6489b2f39c444ecb /README.md
parent6986c7835adc13ba3f9d933b95671bb1f3984dc6 (diff)
ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)
* ggml : use F16 instead of F32 in Q4_0, Q4_1 and Q8_0
* llama : bump LLAMA_FILE_VERSION to 3
* cuda : update Q4 and Q8 dequantize kernels
* ggml : fix AVX dot products
* readme : update performance table + hot topics
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  21
1 file changed, 11 insertions, 10 deletions
diff --git a/README.md b/README.md
index 6a67765..762f4aa 100644
--- a/README.md
+++ b/README.md
@@ -9,6 +9,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
**Hot topics:**
+- Quantization formats `Q4` and `Q8` have changed again (19 May) - [(info)](https://github.com/ggerganov/llama.cpp/pull/1508)
- Quantization formats `Q4` and `Q5` have changed - requantize any old models [(info)](https://github.com/ggerganov/llama.cpp/pull/1405)
- [Roadmap May 2023](https://github.com/ggerganov/llama.cpp/discussions/1220)
@@ -334,16 +335,16 @@ Several quantization methods are supported. They differ in the resulting model d
| Model | Measure | F16 | Q4_0 | Q4_1 | Q5_0 | Q5_1 | Q8_0 |
|------:|--------------|-------:|-------:|-------:|-------:|-------:|-------:|
-| 7B | perplexity | 5.9066 | 6.1565 | 6.0910 | 5.9862 | 5.9481 | 5.9069 |
-| 7B | file size | 13.0G | 4.0G | 4.8G | 4.4G | 4.8G | 7.1G |
-| 7B | ms/tok @ 4th | 128 | 50 | 54 | 75 | 83 | 75 |
-| 7B | ms/tok @ 8th | 123 | 44 | 52 | 53 | 58 | 72 |
-| 7B | bits/weight | 16.0 | 5.0 | 6.0 | 5.5 | 6.0 | 9.0 |
-| 13B | perplexity | 5.2543 | 5.3860 | 5.3607 | 5.2856 | 5.2706 | 5.2548 |
-| 13B | file size | 25.0G | 7.6G | 9.1G | 8.4G | 9.1G | 14G |
-| 13B | ms/tok @ 4th | 239 | 93 | 101 | 150 | 164 | 141 |
-| 13B | ms/tok @ 8th | 240 | 81 | 96 | 96 | 104 | 136 |
-| 13B | bits/weight | 16.0 | 5.0 | 6.0 | 5.5 | 6.0 | 9.0 |
+| 7B | perplexity | 5.9066 | 6.1565 | 6.0912 | 5.9862 | 5.9481 | 5.9070 |
+| 7B | file size | 13.0G | 3.5G | 3.9G | 4.3G | 4.7G | 6.7G |
+| 7B | ms/tok @ 4th | 127 | 55 | 54 | 76 | 83 | 72 |
+| 7B | ms/tok @ 8th | 122 | 43 | 45 | 52 | 56 | 67 |
+| 7B | bits/weight | 16.0 | 4.5 | 5.0 | 5.5 | 6.0 | 8.5 |
+| 13B | perplexity | 5.2543 | 5.3860 | 5.3608 | 5.2856 | 5.2706 | 5.2548 |
+| 13B | file size | 25.0G | 6.8G | 7.6G | 8.3G | 9.1G | 13G |
+| 13B | ms/tok @ 4th | - | 103 | 105 | 148 | 160 | 131 |
+| 13B | ms/tok @ 8th | - | 73 | 82 | 98 | 105 | 128 |
+| 13B | bits/weight | 16.0 | 4.5 | 5.0 | 5.5 | 6.0 | 8.5 |
### Perplexity (measuring model quality)