| field | value | date |
|---|---|---|
| author | Georgi Gerganov <ggerganov@gmail.com> | 2023-03-11 11:34:11 +0200 |
| committer | GitHub <noreply@github.com> | 2023-03-11 11:34:11 +0200 |
| commit | ea977e85ecda7b983f0e7b1db20b509998ddc889 (patch) | |
| tree | 4ab35368de5aadb63d35b070e426e031d739df77 | |
| parent | 007a8f6f459c6eb56678fdee4c09219ddb85b640 (diff) | |
Update README.md
Diffstat:

| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | README.md | 6 |

1 file changed, 6 insertions, 0 deletions
```diff
@@ -2,9 +2,15 @@
 Inference of [Facebook's LLaMA](https://github.com/facebookresearch/llama) model in pure C/C++
 
+**!!! IMPORTANT !!!**
+
+Commit [007a8f6f459c6eb56678fdee4c09219ddb85b640](https://github.com/ggerganov/llama.cpp/commit/007a8f6f459c6eb56678fdee4c09219ddb85b640) added support for all LLaMA models, but introduced breaking changes. If you generated any models before that commit, you must regenerate them after updating to the latest master.
+
+**TEMPORARY NOTICE:** Currently the quantized models run **only** on Apple Silicon. On other architectures, you can [use the F16 models](https://github.com/ggerganov/llama.cpp/issues/2#issuecomment-1464615286), but they will be much slower. Support will be [added later](https://github.com/ggerganov/ggml/pull/27).
+
 ## Description
 
 The main goal is to run the model using 4-bit quantization on a MacBook.
```
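For anyone hit by the breaking change this notice describes, regenerating a model means re-running the conversion and quantization steps from the original LLaMA weights. A minimal sketch, assuming the `convert-pth-to-ggml.py` script and `quantize` tool as they existed in the repository around this commit, with illustrative 7B paths:

```bash
# Convert the original LLaMA 7B weights to ggml F16 format
# (the trailing 1 selects F16 output; paths are illustrative)
python3 convert-pth-to-ggml.py models/7B/ 1

# Re-quantize the F16 model to 4 bits (the trailing 2 selects the q4_0 type)
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
```

On non-Apple-Silicon machines, per the temporary notice above, you would stop after the first step and run the slower F16 model directly.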