author     Georgi Gerganov <ggerganov@gmail.com>  2023-03-25 16:30:32 +0200
committer  GitHub <noreply@github.com>            2023-03-25 16:30:32 +0200
commit     4a7129acd2e939b92d70dd568c746f2fa078232c (patch)
tree       3f92b4a18b8951564480627177cae00f58af123e
parent     6b6dbc8910c6d53f4d96c46c8fcec70e2cd435d8 (diff)
Remove obsolete information from README
-rw-r--r--  README.md  10
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/README.md b/README.md
index 0830074..8a84324 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 The main goal is to run the model using 4-bit quantization on a MacBook
 
 - Plain C/C++ implementation without dependencies
-- Apple silicon first-class citizen - optimized via ARM NEON
+- Apple silicon first-class citizen - optimized via ARM NEON and Accelerate framework
 - AVX2 support for x86 architectures
 - Mixed F16 / F32 precision
 - 4-bit quantization support
@@ -323,14 +323,6 @@ or with light image:
 docker run -v /llama/models:/models ghcr.io/ggerganov/llama.cpp:light -m /models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 512
 ```
 
-## Limitations
-
-- Probably the token sampling can be improved
-- The Accelerate framework is actually currently unused since I found that for tensor shapes typical for the Decoder,
-  there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simply don't
-  know how to utilize it properly. But in any case, you can even disable it with `LLAMA_NO_ACCELERATE=1 make` and the
-  performance will be the same, since no BLAS calls are invoked by the current implementation
-
 ### Contributing
 
 - Contributors can open PRs
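
For context, a sketch of the build commands the removed text refers to, assuming the Makefile-based build in use at the time of this commit. The `LLAMA_NO_ACCELERATE` flag is quoted from the removed "Limitations" section and may behave differently in later revisions.

```
# Default build; per the updated feature list, on Apple silicon this
# uses ARM NEON and links against the Accelerate framework.
make

# Opt-out build described in the removed "Limitations" text; at the time
# that text was written, it made no performance difference because the
# implementation issued no BLAS calls.
LLAMA_NO_ACCELERATE=1 make
```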