author     moritzbrantner <31051084+moritzbrantner@users.noreply.github.com>  2023-03-15 21:35:25 +0100
committer  GitHub <noreply@github.com>                                        2023-03-15 22:35:25 +0200
commit     27944c4206a49bbe003021a2610bacaa3044e619 (patch)
tree       2af12b8f5eda2b442e56e3ee1a784dab8b69873f
parent     2d15d6c9a959749f954d4fbbf44d711e19c5bdff (diff)
fixed typo (#178)
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
@@ -199,7 +199,7 @@ https://user-images.githubusercontent.com/271616/225014776-1d567049-ad71-4ef2-b0
 - We don't know yet how much the quantization affects the quality of the generated text
 - Probably the token sampling can be improved
 - The Accelerate framework is actually currently unused since I found that for tensor shapes typical for the Decoder,
-  there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simlpy don't
+  there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simply don't
   know how to utilize it properly. But in any case, you can even disable it with `LLAMA_NO_ACCELERATE=1 make` and
   the performance will be the same, since no BLAS calls are invoked by the current implementation