author | LostRuins <39025047+LostRuins@users.noreply.github.com> | 2023-07-11 22:01:08 +0800
committer | GitHub <noreply@github.com> | 2023-07-11 22:01:08 +0800
commit | bbef28218fe827265716b66977719b9ee2b21165 (patch)
tree | c38db93b20493f8c1e9c4daa67004e8fea262c42 /ggml-opencl.h
parent | 5656d10599bd756dc0f17284e418e704200b43f3 (diff)
Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)
* This allows LLaMA models that were previously incompatible with K-quants to function mostly as normal. The problem occurs when a model has a vocab size other than 32000, e.g. 32001, which is not divisible by 256 or 64. Since the problematic dimensions apply only to `tok_embeddings.weight` and `output.weight` (dimensions 4096 x n_vocab), we can simply quantize these two tensors to Q8_0, while the majority of the hidden layers are still K-quanted since they have compatible dimensions.
* Fix indentation
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* As an alternative, to avoid failing on Metal due to its lack of Q8_0 support, instead quantize `tok_embeddings.weight` to Q4_0 and retain `output.weight` as F16. This results in a net size increase of about 55 MB for a 7B model compared to the previous approach, but should minimize the adverse impact on model quality (see the sketch after the commit message).
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
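The fallback described above boils down to a per-tensor type check at quantization time. Below is a minimal, self-contained C++ sketch of that selection logic; it is an illustration under stated assumptions, not the actual patch. `QK_K = 256` and the tensor names `tok_embeddings.weight` / `output.weight` come from the commit message, while `pick_quant_type` and the `QuantType` enum are hypothetical stand-ins for llama.cpp's internal machinery.

```cpp
#include <cstdint>
#include <string>

// K-quant super-blocks cover 256 weights, so every dimension that is
// K-quantized must be divisible by QK_K (256 in upstream k-quants).
constexpr int64_t QK_K = 256;

// Hypothetical stand-in for ggml's tensor types; only the cases used here.
// Q8_0 was the first fallback attempt, dropped for lack of Metal support.
enum class QuantType { Q4_0, Q8_0, F16, K_QUANT };

// Hypothetical helper: choose the storage type for one tensor, given its
// name and its two dimensions (e.g. 4096 x n_vocab for the embeddings).
QuantType pick_quant_type(const std::string & name, int64_t ne0, int64_t ne1) {
    const bool k_compatible = (ne0 % QK_K == 0) && (ne1 % QK_K == 0);
    if (k_compatible) {
        return QuantType::K_QUANT;  // normal case, e.g. n_vocab == 32000
    }
    // Incompatible shape, e.g. n_vocab == 32001 (32001 % 256 == 1).
    if (name == "tok_embeddings.weight") {
        return QuantType::Q4_0;     // Metal-safe, small size cost
    }
    if (name == "output.weight") {
        return QuantType::F16;      // keep the output head at full half precision
    }
    return QuantType::K_QUANT;      // hidden layers all have compatible dims
}
```

For the standard vocab this picks the K-quant path, since 32000 = 125 × 256; with n_vocab = 32001 the remainder is 1, so only the two vocab-sized tensors are routed through the Q4_0/F16 fallback while every hidden layer keeps its K-quant type.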
Diffstat (limited to 'ggml-opencl.h')
0 files changed, 0 insertions, 0 deletions