Commit log for convert-pth-to-ggml.py in llama.cpp.git (branch: master)
| Age | Commit message | Author |
|---|---|---|
| 2023-04-14 | py : new conversion script (#545) | comex |
| 2023-03-31 | py : cleanup the code | Pavol Rusnak |
| 2023-03-30 | Introduce GGML migration tool for new file format | Justine Tunney |
| 2023-03-30 | Make loading weights 10-100x faster | Justine Tunney |
| 2023-03-28 | py : removed unused `model` variable and verified that the code functions cor... | DooWoong Lee (David) |
| 2023-03-25 | Clarify console output in convert-pth-to-ggml.py (#512) | jp-x-g |
| 2023-03-22 | Introduce C-style API (#370) | Georgi Gerganov |
| 2023-03-21 | Fix convert script, warnings alpaca instructions, default params | Georgi Gerganov |
| 2023-03-21 | fix typo in comment (#318) | Mack Straight |
| 2023-03-21 | Add tokenizer test + revert to C++11 (#355) | Georgi Gerganov |
| 2023-03-20 | Fixed tokenizer.model not found error when model dir is symlink (#325) | Qingyou Meng |
| 2023-03-20 | sentencepiece bpe compatible tokenizer (#252) | Mack Straight |
| 2023-03-19 | Fix python stuff (#109) | Georgi Gerganov |
| 2023-03-19 | Refactoring `convert-pth-to-ggml.py`: more concise and readable (#109) | qunash |
| 2023-03-17 | 🚀 Dockerize llamacpp (#132) | Bernat Vadell |
| 2023-03-15 | Use `tokenizer.vocab_size()` instead of hardcoding 32000 in convert-pth-to-gg... | Ronsor |
| 2023-03-13 | Fix UTF-8 handling (including colors) (#79) | Val Kharitonov |
| 2023-03-12 | Revert "weights_only" arg - this causing more trouble than help | Georgi Gerganov |
| 2023-03-12 | python/pytorch compat notes (#44) | Oleksandr Nikitin |
| 2023-03-12 | use weights_only in conversion script (#32) | deepdiffuser |
| 2023-03-11 | Support all LLaMA models + change Q4_0 quantization storage | Georgi Gerganov |
| 2023-03-10 | Fix a bug in the rope calculation | Georgi Gerganov |
| 2023-03-10 | Initial release | Georgi Gerganov |