author    Georgi Gerganov <ggerganov@gmail.com>  2023-03-10 21:52:27 +0200
committer GitHub <noreply@github.com>            2023-03-10 21:52:27 +0200
commit    18ebda34d67c05f4f5584a9209e7efb949f5fd56 (patch)
tree      819fa828f8efca3d6ffa60e7485aab39413f16ff
parent    319cdb3e1ffe263cf5b08249c9559e011396c1de (diff)
Update README.md
-rw-r--r--  README.md | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index d2b9a70..f091909 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@ The main goal is to run the model using 4-bit quantization on a MacBook.
This was hacked in an evening - I have no idea if it works correctly.
So far, I've tested just the 7B model and the generated text starts coherently, but typically degrades significantly after ~30-40 tokens.
-Here is a "typicaly" run:
+Here is a "typical" run:
```java
make -j && ./main -m ./models/7B/ggml-model-q4_0.bin -t 8 -n 128
@@ -73,7 +73,7 @@ sampling parameters: temp = 0.800000, top_k = 40, top_p = 0.950000
If you are a fan of the original Star Wars trilogy, then you'll want to see this.
If you don't know your Star Wars lore, this will be a huge eye-opening and you will be a little confusing.
-Awesome movie.(end of text)
+Awesome movie. [end of text]
main: mem per token = 14434244 bytes
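The context lines above mention running the 7B model with 4-bit quantization (the `ggml-model-q4_0.bin` file). As a rough illustration only, below is a minimal C sketch of symmetric block-wise 4-bit quantization: each block of weights shares one float scale and stores two 4-bit values per byte. The block size of 32, the scale choice, and the rounding are assumptions made for this sketch, not code taken from the repository.

```c
// Minimal sketch of symmetric block-wise 4-bit quantization.
// Assumptions for illustration: 32 floats per block, one float scale per
// block, values rounded into [-7, 7] and stored biased by +8 as nibbles.
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define QBLOCK 32  // assumed number of floats per quantization block

// Quantize one block of QBLOCK floats into QBLOCK/2 bytes plus one scale.
static void quantize_block_q4(const float *x, uint8_t *q, float *scale) {
    float amax = 0.0f;
    for (int i = 0; i < QBLOCK; i++) {
        const float ax = fabsf(x[i]);
        if (ax > amax) amax = ax;
    }
    const float d  = amax / 7.0f;                   // largest magnitude maps to +/-7
    const float id = d != 0.0f ? 1.0f / d : 0.0f;
    *scale = d;
    for (int i = 0; i < QBLOCK; i += 2) {
        // shift rounded values into [0, 15] and pack two per byte
        const int v0 = (int)roundf(x[i + 0] * id) + 8;
        const int v1 = (int)roundf(x[i + 1] * id) + 8;
        q[i / 2] = (uint8_t)(v0 | (v1 << 4));
    }
}

int main(void) {
    float x[QBLOCK], scale;
    uint8_t q[QBLOCK / 2];
    for (int i = 0; i < QBLOCK; i++) x[i] = sinf((float)i);  // dummy weights
    quantize_block_q4(x, q, &scale);
    printf("scale = %f, first byte = 0x%02x\n", scale, q[0]);
    return 0;
}
```

At inference time each block would be dequantized on the fly by subtracting the bias of 8 from each nibble and multiplying by the stored scale, which is what keeps the memory footprint small enough to fit the model on a MacBook.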