author    Thatcher Chamberlin <j.thatcher.c@gmail.com>  2023-04-02 06:48:57 -0400
committer GitHub <noreply@github.com>  2023-04-02 12:48:57 +0200
commit    d8d4e865cd481b18f10508ffee35db903767ef5c (patch)
tree      d7daf6d7a68c499825aa6b80489707d297b76684
parent    e986f94829bae0b9e66b326acbbba179931c84f1 (diff)
Add a missing step to the gpt4all instructions (#690)
`migrate-ggml-2023-03-30-pr613.py` is needed to get gpt4all running.
-rw-r--r-- README.md | 6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index f5744ea..508d315 100644
--- a/README.md
+++ b/README.md
@@ -232,13 +232,15 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
- Obtain the `gpt4all-lora-quantized.bin` model
- It is distributed in the old `ggml` format which is now obsoleted
-- You have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py):
+- You have to convert it to the new format using [./convert-gpt4all-to-ggml.py](./convert-gpt4all-to-ggml.py). You may also need to
+migrate the converted model to the latest format with [./migrate-ggml-2023-03-30-pr613.py](./migrate-ggml-2023-03-30-pr613.py):
```bash
python3 convert-gpt4all-to-ggml.py models/gpt4all-7B/gpt4all-lora-quantized.bin ./models/tokenizer.model
+ python3 migrate-ggml-2023-03-30-pr613.py models/gpt4all-7B/gpt4all-lora-quantized.bin models/gpt4all-7B/gpt4all-lora-quantized-new.bin
```
-- You can now use the newly generated `gpt4all-lora-quantized.bin` model in exactly the same way as all other models
+- You can now use the newly generated `gpt4all-lora-quantized-new.bin` model in exactly the same way as all other models
- The original model is saved in the same folder with a suffix `.orig`
### Obtaining and verifying the Facebook LLaMA original model and Stanford Alpaca model data
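Taken together, the two conversion steps added by this patch can be sketched as a short shell sequence. This is a sketch, not part of the patch itself; the model paths follow the README's example layout and should be adjusted to wherever the `gpt4all-lora-quantized.bin` file actually lives:

```shell
# Step 1: convert the gpt4all model out of the old ggml format,
# using the tokenizer that ships with the base LLaMA weights.
# (The original file is kept alongside with a `.orig` suffix.)
python3 convert-gpt4all-to-ggml.py \
    models/gpt4all-7B/gpt4all-lora-quantized.bin \
    ./models/tokenizer.model

# Step 2: migrate the converted file to the post-PR-613 format,
# writing the result to a new file rather than overwriting.
python3 migrate-ggml-2023-03-30-pr613.py \
    models/gpt4all-7B/gpt4all-lora-quantized.bin \
    models/gpt4all-7B/gpt4all-lora-quantized-new.bin
```

After both steps, `gpt4all-lora-quantized-new.bin` can be passed to the inference binary the same way as any other converted model.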