path: root/convert.py
Age  Commit message  Author
2023-08-06  convert.py : add missing abstract methods for quantized data (#2491)  Keiichi Tabata
2023-07-27  convert.py : Update to support 70B HF format model files (#2427)  mj-shifu
* convert.py : fix llama 2 70b conversion from Huggingface
2023-07-25  convert.py : support bpe tokenizer (#2228)  ldwang
* support bpe tokenizer in convert
* support bpe tokenizer in convert, fix
Signed-off-by: ldwang <ftgreat@gmail.com>
Co-authored-by: ldwang <ftgreat@gmail.com>
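For a rough idea of what BPE vocab loading involves, here is a minimal sketch: the `vocab.json` file name follows the HuggingFace convention, but the function name and the dummy-score handling are illustrative assumptions, not the exact code in convert.py.

```python
import json
from pathlib import Path

def load_bpe_vocab(model_dir: Path) -> list[tuple[bytes, float]]:
    # vocab.json maps token text -> token id; sort by id so tokens
    # come out in the order the converter expects to emit them.
    with open(model_dir / "vocab.json", encoding="utf-8") as f:
        vocab: dict[str, int] = json.load(f)
    by_id = sorted(vocab.items(), key=lambda kv: kv[1])
    # BPE vocabs carry no per-token scores, so emit a dummy 0.0 score.
    return [(text.encode("utf-8"), 0.0) for text, _id in by_id]
```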
2023-07-23  llama : grouped-query attention + LLaMAv2 70B support (#2276)  Georgi Gerganov
* CUDA : GQA implementation
* llama : support for GQA and LLaMAv2 70B
* py : fix hparams parsing (if-else blocks)
* help : fix gqa value for 70B
Co-authored-by: JohannesGaessler <johannesg@5d6.de>
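For context, grouped-query attention shares each key/value head among a group of query heads, so the new hyper-parameter to derive is the KV head count. A minimal sketch of the arithmetic (the variable names are illustrative; the 70B figures are the published LLaMAv2 numbers):

```python
def n_head_kv(n_head: int, gqa: int) -> int:
    # Each group of `gqa` query heads shares one key/value head,
    # so the query head count must divide evenly by the GQA factor.
    assert n_head % gqa == 0
    return n_head // gqa

# LLaMAv2 70B: 64 query heads with a GQA factor of 8 -> 8 KV heads.
assert n_head_kv(64, 8) == 8
```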
2023-07-19  cmake : install targets (#2256)  wzy
fix #2252
2023-07-07  convert.py: add mapping for safetensors bf16 (#1598)  Aarni Koskela
Fixes #1473
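NumPy has no native bfloat16 dtype, so a converter typically widens bf16 to float32 by zero-extending each 16-bit value into the top half of a 32-bit word. A minimal sketch of that trick (illustrative, not the PR's exact code):

```python
import numpy as np

def bf16_to_fp32(raw: bytes) -> np.ndarray:
    # bfloat16 is the top 16 bits of an IEEE float32, so shifting each
    # value left by 16 bits reconstructs the full float32 bit pattern.
    bf16 = np.frombuffer(raw, dtype=np.uint16)
    return (bf16.astype(np.uint32) << 16).view(np.float32)
```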
2023-07-06  convert : update for baichuan (#2081)  Judd
1. Guess n_layers.
2. Relax warnings on context size.
3. Add a note that its derivatives are also supported.
Co-authored-by: Judd <foldl@boxvest.com>
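Guessing n_layers usually means scanning the checkpoint's tensor names for the highest layer index. A minimal sketch, assuming HuggingFace-style names like `model.layers.N....` (the regex and function name are assumptions for illustration):

```python
import re

def guess_n_layers(tensor_names: list[str]) -> int:
    # Layer indices appear in names like "model.layers.12.self_attn...",
    # so the layer count is the largest index seen plus one.
    indices = [int(m.group(1)) for name in tensor_names
               if (m := re.match(r"model\.layers\.(\d+)\.", name))]
    return max(indices) + 1
```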
2023-07-01  convert : add support of baichuan-7b (#2055)  Judd
Co-authored-by: Judd <foldl@boxvest.com>
2023-06-24  convert : fix invalid params in write_vocab_only (#1975)  AN Long
2023-06-22  rework convert.py to read hyper-parameters from config.json (#1958)  Erik Scholz
* Read hyper-parameters from the HuggingFace transformers config.json when it exists, falling back to guessing as before. This allows converting open_llama 3B and other non-standard model designs.
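A minimal sketch of that fallback pattern (the config keys are the standard HuggingFace LLaMA names; the function shape is an illustrative assumption):

```python
import json
from pathlib import Path

def load_hparams(model_dir: Path, guessed: dict) -> dict:
    # Prefer exact hyper-parameters from config.json when the model was
    # exported by HuggingFace transformers; otherwise keep the values
    # guessed from tensor shapes, as the converter did before.
    config_path = model_dir / "config.json"
    if not config_path.exists():
        return guessed
    config = json.loads(config_path.read_text())
    return {
        "n_embd":  config["hidden_size"],
        "n_head":  config["num_attention_heads"],
        "n_layer": config["num_hidden_layers"],
    }
```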
2023-06-17  hooks : setting up flake8 and pre-commit hooks (#1681)  Jiří Podivín
Small, non-functional changes were made to non-compliant files. These include breaking up long lines, whitespace sanitization, and unused-import removal. The maximum line length in Python files was set to a generous 125 characters, in order to minimize the number of changes needed in scripts and general annoyance. The "txt" prompts directory is excluded from the checks, as it may contain oddly formatted files and strings for a good reason.
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
2023-05-17  convert.py: Support models which are stored in a single pytorch_model.bin (#1469)  Tom Jobbins
* Support models in a single pytorch_model.bin
* Remove spurious line with typo
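HuggingFace exports either a single `pytorch_model.bin` or numbered shards like `pytorch_model-00001-of-00002.bin`, so supporting both is essentially a globbing fallback. A minimal sketch (the function name is an illustrative assumption):

```python
from pathlib import Path

def find_model_files(model_dir: Path) -> list[Path]:
    # Multi-part checkpoints are named pytorch_model-00001-of-0000N.bin;
    # fall back to the single-file layout when no shards are present.
    shards = sorted(model_dir.glob("pytorch_model-*.bin"))
    if shards:
        return shards
    single = model_dir / "pytorch_model.bin"
    if single.exists():
        return [single]
    raise FileNotFoundError(f"no pytorch_model*.bin found in {model_dir}")
```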
2023-05-08  convert: add ability to convert safetensors files (#1276)  ubik2
* When loading a safetensors file, ignore the metadata header
* Check for safetensors files first, and only use PyTorch versions when safetensors aren't available
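A safetensors file begins with an 8-byte little-endian header length followed by a JSON header, and the optional `__metadata__` key is the part the loader ignores. A minimal sketch of parsing just the header (illustrative, not the PR's exact code):

```python
import json
import struct

def read_safetensors_header(path: str) -> dict:
    with open(path, "rb") as f:
        # The first 8 bytes are a little-endian u64 giving the JSON header size.
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" holds free-form strings, not tensor info; skip it.
    header.pop("__metadata__", None)
    return header  # remaining keys map tensor names to dtype/shape/offsets
```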
2023-05-05  Convert.py @staticmethod (#1327)  Benjamin Lecaillon
* Add the missing @staticmethod decorator at line 698 so that unpickler.load() no longer fails with a "not callable" error
* Update convert.py
Co-authored-by: Ivan Stepanov <ivanstepanovftw@gmail.com>
2023-05-04  convert: support DT_BF16 tensors (#1309)  Ivan Stepanov
Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-04-17  add 4_0 to default outfile namestr dict (#1031)  Cameron
This came up when trying to convert the gpt4all-lora-unfiltered-quantized.bin file.
2023-04-16  stdout : vertical align outputs for better readability  Georgi Gerganov
2023-04-15  convert.py: Fix loading safetensors and ggml format on Windows (#991)  comex
Calling `mmap.mmap` on Windows apparently resets the file offset of the raw file object (and makes the BufferedReader return a *negative* file offset). For safetensors, avoid using the file offset after calling mmap. For GGML format, explicitly save and restore the offset. Fixes #966.
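The GGML-format fix amounts to saving the file offset before calling `mmap.mmap` and restoring it afterwards. A minimal sketch of the pattern (illustrative):

```python
import mmap

def mmap_preserving_offset(fp):
    # On Windows, mmap.mmap can clobber the underlying file offset
    # (leaving the BufferedReader at a negative position), so save
    # the position first and seek back once the mapping exists.
    saved = fp.tell()
    mapping = mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ)
    fp.seek(saved)
    return mapping
```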
2023-04-14  py : fix flake8 and isort nitpicks (#960)  Pavol Rusnak
2023-04-14  py : new conversion script (#545)  comex
Current status: Working, except for the latest GPTQ-for-LLaMa format that includes `g_idx`. This turns out to require changes to GGML, so for now it only works if you use the `--outtype` option to dequantize it back to f16 (which is pointless except for debugging).

I also included some cleanup for the C++ code.

This script is meant to replace all the existing conversion scripts (including the ones that convert from older GGML formats), while also adding support for some new formats. Specifically, I've tested with:

- [x] `LLaMA` (original)
- [x] `llama-65b-4bit`
- [x] `alpaca-native`
- [x] `alpaca-native-4bit`
- [x] LLaMA converted to 'transformers' format using `convert_llama_weights_to_hf.py`
- [x] `alpaca-native` quantized with `--true-sequential --act-order --groupsize 128` (dequantized only)
- [x] same as above plus `--save_safetensors`
- [x] GPT4All
- [x] stock unversioned ggml
- [x] ggmh

There's enough overlap in the logic needed to handle these different cases that it seemed best to move to a single script. I haven't tried this with Alpaca-LoRA because I don't know where to find it.

Useful features:

- Uses multiple threads for a speedup in some cases (though the Python GIL limits the gain, and sometimes it's disk-bound anyway).
- Combines split models into a single file (both the intra-tensor split of the original and the inter-tensor split of 'transformers' format files). Single files are more convenient to work with and more friendly to future changes to use memory mapping on the C++ side. To accomplish this without increasing memory requirements, it has some custom loading code which avoids loading whole input files into memory at once.
- Because of the custom loading code, it no longer depends on PyTorch, which might make installing dependencies slightly easier or faster... although it still depends on NumPy and sentencepiece, so I don't know if there's any meaningful difference. In any case, I also added a requirements.txt file to lock the dependency versions in case of any future breaking changes.
- Type annotations checked with mypy.
- Some attempts to be extra user-friendly:
  - The script tries to be forgiving with arguments, e.g. you can specify either the model file itself or the directory containing it (see the sketch below).
  - The script doesn't depend on config.json / params.json, just in case the user downloaded files individually and doesn't have those handy. But you still need tokenizer.model and, for Alpaca, added_tokens.json.
  - The script tries to give a helpful error message if added_tokens.json is missing.
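A minimal sketch of the forgiving-arguments behavior described above, accepting either a model file or its containing directory (the function name and glob patterns are illustrative assumptions, not the script's exact code):

```python
from pathlib import Path

def resolve_model_path(arg: str) -> Path:
    # Accept either the model file itself or the directory containing it.
    path = Path(arg)
    if path.is_dir():
        candidates = (sorted(path.glob("*.safetensors"))
                      + sorted(path.glob("*.bin"))
                      + sorted(path.glob("*.pth")))
        if not candidates:
            raise FileNotFoundError(f"no model file found under {path}")
        return candidates[0]
    return path
```

So an invocation like `python convert.py models/7B/ --outtype f16` (an illustrative command line, going by the options the message itself describes) works the same as pointing at the model file directly.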