author    ldwang <ftgreat@163.com>  2023-08-02 16:21:11 +0800
committer GitHub <noreply@github.com>  2023-08-02 11:21:11 +0300
commit    220d9318647a8ce127dbf7c9de5400455f41e7d8 (patch)
tree      8ce7241d135bf79a0f155f695e5ffe829a51c467
parent    81844fbcfd93a162b7aeaea9e4f2ab1358f7f97e (diff)
readme : add Aquila-7B model series to supported models (#2487)
* support bpe tokenizer in convert
  Signed-off-by: ldwang <ftgreat@gmail.com>
* support bpe tokenizer in convert
  Signed-off-by: ldwang <ftgreat@gmail.com>
* support bpe tokenizer in convert, fix
  Signed-off-by: ldwang <ftgreat@gmail.com>
* Add Aquila-7B models in README.md
  Signed-off-by: ldwang <ftgreat@gmail.com>
* Update Aquila-7B models in README.md
  Signed-off-by: ldwang <ftgreat@gmail.com>
---------
Signed-off-by: ldwang <ftgreat@gmail.com>
Co-authored-by: ldwang <ftgreat@gmail.com>
-rw-r--r--  README.md | 7
1 file changed, 7 insertions(+), 0 deletions(-)
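
The patch below documents the new BPE-tokenizer path in convert.py. As a quick reference, here is a minimal sketch of that workflow (the directory layout is illustrative; it assumes a BPE-based model such as Aquila-7B whose repository ships a vocab.json alongside the weights):

    # [assumed layout] BPE models keep their vocabulary in vocab.json beside the weights
    ls ./models/7B
    config.json  pytorch_model.bin  vocab.json

    # pass --vocabtype bpe so convert.py reads vocab.json instead of tokenizer.model
    python3 convert.py models/7B/ --vocabtype bpe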
diff --git a/README.md b/README.md
index 515c80c..2ece294 100644
--- a/README.md
+++ b/README.md
@@ -88,6 +88,7 @@ as the main playground for developing new features for the [ggml](https://github
- [X] [Pygmalion 7B / Metharme 7B](#using-pygmalion-7b--metharme-7b)
- [X] [WizardLM](https://github.com/nlpxucan/WizardLM)
- [X] [Baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B) and its derivations (such as [baichuan-7b-sft](https://huggingface.co/hiyouga/baichuan-7b-sft))
+- [X] [Aquila-7B](https://huggingface.co/BAAI/Aquila-7B) / [AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
**Bindings:**
@@ -492,6 +493,9 @@ Building the program with BLAS support may lead to some performance improvements
# obtain the original LLaMA model weights and place them in ./models
ls ./models
65B 30B 13B 7B tokenizer_checklist.chk tokenizer.model
+ # [Optional] for models using BPE tokenizers
+ ls ./models
+ 65B 30B 13B 7B vocab.json
# install Python dependencies
python3 -m pip install -r requirements.txt
@@ -499,6 +503,9 @@ python3 -m pip install -r requirements.txt
# convert the 7B model to ggml FP16 format
python3 convert.py models/7B/
+ # [Optional] for models using BPE tokenizers
+ python3 convert.py models/7B/ --vocabtype bpe
+
# quantize the model to 4-bits (using q4_0 method)
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0
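
Putting the documented steps together for a BPE-tokenizer model, end to end (a sketch only: the model directory name is illustrative, and the quantize/main binary names match the llama.cpp build of this commit's era):

    # convert the original weights to ggml FP16, selecting the BPE tokenizer
    python3 convert.py models/7B/ --vocabtype bpe

    # quantize the FP16 model to 4 bits using the q4_0 method
    ./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0

    # run a short completion to verify the converted model loads
    ./main -m ./models/7B/ggml-model-q4_0.bin -p "Hello," -n 64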