path: root/convert-pth-to-ggml.py
Age / Commit message / Author
2023-03-17  🚀 Dockerize llamacpp (#132)  [Bernat Vadell]
            * feat: dockerize llamacpp
            * feat: split build & runtime stages
            * split dockerfile into main & tools
            * add quantize into tool docker image
            * Update .devops/tools.sh
            * add docker action pipeline
            * change CI to publish at github docker registry
            * fix name runs-on macOS-latest is macos-latest (lowercase)
            * include docker versioned images
            * fix github action docker
            * fix docker.yml
            * feat: include all-in-one command tool & update readme.md
            Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-15  Use `tokenizer.vocab_size()` instead of hardcoding 32000 in convert-pth-to-ggml.py (#142)  [Ronsor]
            There are ways that special tokens or other new tokens could be added to the tokenizer; therefore it's probably best not to assume the vocabulary is only 32000 tokens.
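            Note: a minimal sketch of the idea behind this change, not the script's exact code; it assumes the SentencePiece `tokenizer.model` that ships with LLaMA, and the file path below is illustrative.

    # Read the vocabulary size from the tokenizer itself instead of hardcoding
    # 32000, so special or newly added tokens are counted correctly.
    from sentencepiece import SentencePieceProcessor

    sp = SentencePieceProcessor()
    sp.Load("tokenizer.model")   # illustrative path to the LLaMA tokenizer model

    n_vocab = sp.vocab_size()    # previously assumed to be exactly 32000
    print(f"n_vocab = {n_vocab}")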
2023-03-13  Fix UTF-8 handling (including colors) (#79)  [Val Kharitonov]
2023-03-12  Revert "weights_only" arg - this is causing more trouble than help  [Georgi Gerganov]
2023-03-12  python/pytorch compat notes (#44)  [Oleksandr Nikitin]
2023-03-12  use weights_only in conversion script (#32)  [deepdiffuser]
            This prevents malicious weights from executing arbitrary code by restricting the unpickler to loading only tensors, primitive types, and dictionaries.
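            Note: a minimal sketch of the loading pattern described above, not the script's exact code; the checkpoint filename is illustrative.

    # weights_only=True restricts torch's unpickler to tensors, primitive types,
    # and dictionaries, so a tampered checkpoint cannot execute arbitrary code
    # while it is being loaded.
    import torch

    model = torch.load("consolidated.00.pth", map_location="cpu", weights_only=True)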
2023-03-11  Support all LLaMA models + change Q4_0 quantization storage  [Georgi Gerganov]
2023-03-10  Fix a bug in the rope calculation  [Georgi Gerganov]
2023-03-10  Initial release  [Georgi Gerganov]