author     moritzbrantner <31051084+moritzbrantner@users.noreply.github.com>  2023-03-15 21:35:25 +0100
committer  GitHub <noreply@github.com>  2023-03-15 22:35:25 +0200
commit     27944c4206a49bbe003021a2610bacaa3044e619 (patch)
tree       2af12b8f5eda2b442e56e3ee1a784dab8b69873f
parent     2d15d6c9a959749f954d4fbbf44d711e19c5bdff (diff)
fixed typo (#178)
-rw-r--r--  README.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 0b2532a..1f7e194 100644
--- a/README.md
+++ b/README.md
@@ -199,7 +199,7 @@ https://user-images.githubusercontent.com/271616/225014776-1d567049-ad71-4ef2-b0
- We don't know yet how much the quantization affects the quality of the generated text
- Probably the token sampling can be improved
- The Accelerate framework is actually currently unused since I found that for tensor shapes typical for the Decoder,
- there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simlpy don't
+ there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simply don't
know how to utilize it properly. But in any case, you can even disable it with `LLAMA_NO_ACCELERATE=1 make` and the
performance will be the same, since no BLAS calls are invoked by the current implementation
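For reference, the paragraph being patched mentions the `LLAMA_NO_ACCELERATE=1 make` switch. A minimal sketch of the two build invocations it describes (assuming the repository's default Makefile target) is:

```sh
# Default build: on macOS the Accelerate framework is linked, but the current
# implementation makes no BLAS calls, so it goes unused.
make

# Build with Accelerate disabled; per the README, performance is expected to
# be unchanged since no BLAS calls are invoked either way.
LLAMA_NO_ACCELERATE=1 make
```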