author    Georgi Gerganov <ggerganov@gmail.com>  2023-03-11 01:30:47 +0200
committer GitHub <noreply@github.com>            2023-03-11 01:30:47 +0200
commit    73c6ed5e8784a20f89d51b1703a09bc690c68227 (patch)
tree      4c8a07523d7c0fd5488a519bac2e3af22d2e92de
parent    01eeed8fb1437978603a8523c0b8ea2f6280f5d7 (diff)
Update README.md
-rw-r--r--  README.md | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 3734383..1d5f6dc 100644
--- a/README.md
+++ b/README.md
@@ -3,9 +3,7 @@
 Inference of [Facebook's LLaMA](https://github.com/facebookresearch/llama) model in pure C/C++
 
 **TEMPORARY NOTICE:**
-If you observe garbage results, make sure to update to latest master. There was a bug and it was fixed here: https://github.com/ggerganov/llama.cpp/commit/70bc0b8b15b98dca23b28f0c8f5e34b27e424cda
-
-Also, currently the quantized models run **only** on Apple Silicon. On other architectures, you can [use the F16 models](https://github.com/ggerganov/llama.cpp/issues/2#issuecomment-1464615286), but they will be much slower. Support will be [added later](https://github.com/ggerganov/ggml/pull/27)
+Currently the quantized models run **only** on Apple Silicon. On other architectures, you can [use the F16 models](https://github.com/ggerganov/llama.cpp/issues/2#issuecomment-1464615286), but they will be much slower. Support will be [added later](https://github.com/ggerganov/ggml/pull/27)
 
 ## Description