path: root/llama-util.h
author LostRuins <39025047+LostRuins@users.noreply.github.com> 2023-07-11 22:01:08 +0800
committer GitHub <noreply@github.com> 2023-07-11 22:01:08 +0800
commit bbef28218fe827265716b66977719b9ee2b21165 (patch)
tree c38db93b20493f8c1e9c4daa67004e8fea262c42 /llama-util.h
parent 5656d10599bd756dc0f17284e418e704200b43f3 (diff)
Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)
* This allows LLaMA models that were previously incompatible with k-quants to function mostly as normal. The problem occurs when a model has a vocab size != 32000, e.g. 32001, which is not divisible by 256 or 64. Since the problematic dimensions only apply to `tok_embeddings.weight` and `output.weight` (dimensions 4096 x n_vocab), we can simply quantize these layers to Q8_0, while the majority of the hidden layers are still k-quanted since they have compatible dimensions.

* Fix indentation

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* As an alternative, to avoid failing on Metal due to its lack of Q8_0 support, instead quantize tok_embeddings.weight to Q4_0 and retain output.weight as F16. This results in a net gain of about 55 MB for a 7B model compared to the previous approach, but should minimize the adverse impact on model quality.

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
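As a rough illustration of the fallback rule described above, here is a minimal C++ sketch. The helper `pick_quant_type` and its signature are hypothetical (the real logic lives in llama.cpp's quantization loop); `ggml_type`, `GGML_TYPE_Q4_0`, and `GGML_TYPE_F16` are existing ggml identifiers, and 256 is the k-quant super-block size `QK_K`.

```cpp
// Minimal sketch of the fallback rule from the commit message (not the
// actual llama.cpp implementation; the helper itself is hypothetical).
#include <cstdint>
#include <string>
#include "ggml.h"

static constexpr int64_t QK_K_SKETCH = 256; // k-quant super-block size (QK_K)

// Decide the quantization type for one tensor of dimensions nx x ny,
// given the k-quant type requested for the model as a whole.
static ggml_type pick_quant_type(const std::string & name,
                                 int64_t nx, int64_t ny, ggml_type k_type) {
    const bool k_incompatible = (nx % QK_K_SKETCH != 0) || (ny % QK_K_SKETCH != 0);
    if (!k_incompatible) {
        return k_type; // dimensions divisible by 256: keep the requested k-quant
    }
    if (name == "tok_embeddings.weight") {
        return GGML_TYPE_Q4_0; // avoids Q8_0, which Metal cannot use
    }
    if (name == "output.weight") {
        return GGML_TYPE_F16;  // keep the output projection at half precision
    }
    return k_type; // only the two n_vocab-sized tensors ever hit the fallback
}
```

Only `tok_embeddings.weight` and `output.weight` have an n_vocab-sized dimension, so every other layer still receives the requested k-quant type.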
Diffstat (limited to 'llama-util.h')
0 files changed, 0 insertions, 0 deletions