path: root/README.md
2023-04-19  Minor: readme fixed grammar, spelling, and misc updates (#1071) (CRD716)
2023-04-19  readme : add warning about Q4_2 and Q4_3 (Georgi Gerganov)
2023-04-18  readme : update hot topics about new LoRA functionality (Georgi Gerganov)
2023-04-17  readme : add Ruby bindings (#1029) (Atsushi Tatsuma)
2023-04-14  py : new conversion script (#545) (comex)
Current status: working, except for the latest GPTQ-for-LLaMa format that includes `g_idx`. This turns out to require changes to GGML, so for now it only works if you use the `--outtype` option to dequantize it back to f16 (which is pointless except for debugging). I also included some cleanup for the C++ code.

This script is meant to replace all the existing conversion scripts (including the ones that convert from older GGML formats), while also adding support for some new formats. Specifically, I've tested with:

- [x] `LLaMA` (original)
- [x] `llama-65b-4bit`
- [x] `alpaca-native`
- [x] `alpaca-native-4bit`
- [x] LLaMA converted to 'transformers' format using `convert_llama_weights_to_hf.py`
- [x] `alpaca-native` quantized with `--true-sequential --act-order --groupsize 128` (dequantized only)
- [x] same as above plus `--save_safetensors`
- [x] GPT4All
- [x] stock unversioned ggml
- [x] ggmh

There's enough overlap in the logic needed to handle these different cases that it seemed best to move to a single script. I haven't tried this with Alpaca-LoRA because I don't know where to find it.

Useful features:

- Uses multiple threads for a speedup in some cases (though the Python GIL limits the gain, and sometimes it's disk-bound anyway).
- Combines split models into a single file (both the intra-tensor split of the original and the inter-tensor split of 'transformers' format files). Single files are more convenient to work with and more friendly to future changes that use memory mapping on the C++ side. To accomplish this without increasing memory requirements, it has some custom loading code which avoids loading whole input files into memory at once (a sketch follows after this list).
- Because of the custom loading code, it no longer depends on PyTorch, which might make installing dependencies slightly easier or faster... although it still depends on NumPy and sentencepiece, so I don't know if there's any meaningful difference. In any case, I also added a requirements.txt file to lock the dependency versions in case of any future breaking changes.
- Type annotations checked with mypy.
- Some attempts to be extra user-friendly:
  - The script tries to be forgiving with arguments, e.g. you can specify either the model file itself or the directory containing it.
  - The script doesn't depend on config.json / params.json, just in case the user downloaded files individually and doesn't have those handy. But you still need tokenizer.model and, for Alpaca, added_tokens.json.
  - The script tries to give a helpful error message if added_tokens.json is missing.
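As an illustration of the custom loading idea above (not the actual convert.py internals; the function name, argument layout, and dtype are assumptions), a memory-mapped read keeps per-tensor memory use low:

```python
# Hypothetical sketch: read one tensor lazily from a checkpoint file via
# numpy's memmap, so the whole file is never loaded into RAM at once.
import numpy as np

def load_tensor_lazily(path, offset, shape, dtype=np.float16):
    """Map a single tensor's bytes from `path` starting at `offset`."""
    count = int(np.prod(shape))
    flat = np.memmap(path, dtype=dtype, mode="r", offset=offset, shape=(count,))
    return flat.reshape(shape)  # still backed by the mapping, not copied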
2023-04-13  readme : remove python 3.10 warning (#929) (CRD716)
2023-04-13  readme : llama node binding (#911) (Genkagaku.GPT)
* chore: add nodejs binding
2023-04-13  zig : update build.zig (#872) (Judd)
* update
* update readme
* minimize the changes

Co-authored-by: zjli2019 <zhengji.li@ingchips.com>
2023-04-12  readme : change "GPU support" link to discussion (Georgi Gerganov)
2023-04-12  readme : update hot topics with link to "GPU support" issue (Georgi Gerganov)
2023-04-12  readme : link to sha256sums file (#902) (Nicolai Weitkemper)
This is to emphasize that these do not need to be obtained from elsewhere.
2023-04-11  Fix whitespace, add .editorconfig, add GitHub workflow (#883) (Pavol Rusnak)
2023-04-10  Add BAIR's Koala to supported models (#877) (qouoq)
2023-04-06  Make docker instructions more explicit (#785) (Pavol Rusnak)
2023-04-05  Update README.md (Georgi Gerganov)
2023-04-05  readme : change logo + add bindings + add uis + add wiki (Georgi Gerganov)
2023-04-05  readme : update with CMake and windows example (#748) (Adithya Balaji)
* README: update with CMake and Windows example
* README: update with code-review feedback for the CMake build
2023-04-02  Add a missing step to the gpt4all instructions (#690) (Thatcher Chamberlin)
`migrate-ggml-2023-03-30-pr613.py` is needed to get gpt4all running.
2023-04-01  readme : replace termux links with homepage, play store is deprecated (#680) (rimoliga)
2023-03-31  drop quantize.py (now that models are using a single file) (Pavol Rusnak)
2023-03-30  readme : update supported models (Georgi Gerganov)
2023-03-29  readme : fix typos (Georgi Gerganov)
2023-03-29  readme : add GPT4All instructions (close #588) (Georgi Gerganov)
2023-03-26  Update README and comments for standalone perplexity tool (#525) (Stephan Walter)
2023-03-26  Add logo to README.md (Georgi Gerganov)
2023-03-25  Move chat scripts into "./examples" (Georgi Gerganov)
2023-03-25  Remove obsolete information from README (Georgi Gerganov)
2023-03-24  Update README.md (#444) (Gary Mulder)
Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.
2023-03-24  Add link to Roadmap discussion (Georgi Gerganov)
2023-03-23  Revert "Delete SHA256SUMS for now" (#429) (Stephan Walter)
* Revert "Delete SHA256SUMS for now (#416)"
  This reverts commit 8eea5ae0e5f31238a97c79ea9103c27647380e37.
* Remove ggml files until they can be verified
* Remove alpaca json
* Also add model/tokenizer.model to SHA256SUMS + update README

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-03-23  Move model section from issue template to README.md (#421) (Gary Mulder)
* Update custom.md
* Removed Model section, as it is better placed in README.md
* Updates to README.md model section
* Inserted text that was removed from the issue template about obtaining models from FB, plus links to papers describing the various models
* Removed IPFS download links for the Alpaca 7B models, as these look to be in the old data format and probably shouldn't be directly linked to anyway
* Updated the perplexity section to point at the Perplexity scores discussion (#406)
2023-03-23  Adjust repetition penalty (Georgi Gerganov)
2023-03-23  Add link to recent podcast about whisper.cpp and llama.cpp (Georgi Gerganov)
2023-03-22  Add details on perplexity to README.md (#395) (Gary Linscott)
2023-03-22  Remove temporary notice and update hot topics (Georgi Gerganov)
2023-03-21  Add SHA256SUMS file and instructions to README how to obtain and verify the downloads (Gary Mulder)
Hashes created using:
`sha256sum models/*B/*.pth models/*[7136]B/ggml-model-f16.bin* models/*[7136]B/ggml-model-q4_0.bin* > SHA256SUMS`
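On the verification side, `sha256sum -c SHA256SUMS` does the checking on Linux; a rough Python equivalent (illustrative only, not part of the repo) looks like this:

```python
# Illustrative only: compute a file's SHA-256 in chunks and compare the
# result to the hex digest listed in SHA256SUMS.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```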
2023-03-21  Add notice about pending change (Georgi Gerganov)
2023-03-21  Minor style changes (Georgi Gerganov)
2023-03-21  Add chat.sh script (Georgi Gerganov)
2023-03-21  Fix convert script, warnings, alpaca instructions, default params (Georgi Gerganov)
2023-03-21  Update IPFS links to quantized alpaca with new tokenizer format (#352) (Kevin Kwok)
2023-03-20  sentencepiece bpe compatible tokenizer (#252) (Mack Straight)
* potential out-of-bounds read
* fix quantize
* style
* Update convert-pth-to-ggml.py
* mild cleanup
* don't need the space-prefixing here right now since main.cpp already does it
* new file magic + version header field (sketch below)
* readme notice
* missing newlines

Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
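The "new file magic + version header field" item means loaders can now distinguish old unversioned files from versioned ones by the leading magic. A hedged sketch of such a check; the magic constants reflect the ggml formats of that era as I recall them and should be treated as assumptions, not the actual loader code:

```python
# Sketch, not the actual loader: read the 4-byte magic and, for versioned
# files, the 4-byte version field that follows it.
import struct

GGML_MAGIC_UNVERSIONED = 0x67676D6C  # assumed: old 'ggml' files, no version
GGML_MAGIC_VERSIONED   = 0x67676D66  # assumed: 'ggmf' files with a version

def read_header(path):
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
        if magic == GGML_MAGIC_UNVERSIONED:
            return magic, None
        (version,) = struct.unpack("<I", f.read(4))
        return magic, version
```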
2023-03-19  Improved quantize script (#222) (Suaj Carrot)
* Improved quantize script
  I improved the quantize script by adding error handling and allowing many models to be selected for quantization at once on the command line. I also converted it to Python for generality and extensibility.
* Fixes and improvements based on Matt's observations
  Fixed and improved many things in the script based on the reviews made by @mattsta. The parallelization suggestion is still to be revised, but code for it was added (commented out).
* Small fixes to the previous commit
* Corrected to use the original glob pattern
  The original Bash script uses a glob pattern to match files that have endings such as ...bin.0, ...bin.1, etc. That has now been translated correctly to Python.
* Added support for Windows and updated README to use this script
  New code sets the name of the quantize binary depending on the platform (quantize.exe when working on Windows), and README.md has been updated to use this script instead of the Bash one.
* Fixed a typo and removed shell=True in the subprocess.run call
  Fixed a typo in the new filenames of the quantized models and removed the shell=True parameter from the subprocess.run call, as it conflicted with the list of parameters (see the sketch after this list).
* Corrected previous commit
* Small tweak: changed the name of the program in argparse
  This was making the automatic help message suggest the program's usage as literally "$ Quantization Script [arguments]". It should now read something like "$ python3 quantize.py [arguments]".
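Two of the points above, the per-platform binary name and dropping shell=True, fit in a few lines. A minimal sketch under assumed file names and arguments, not the script's actual code:

```python
# Sketch: pick the quantize binary per platform and invoke it with an
# argument list, which avoids shell=True and its quoting pitfalls.
import subprocess
import sys

def run_quantize(model_in, model_out, qtype="2"):
    binary = "quantize.exe" if sys.platform == "win32" else "./quantize"
    subprocess.run([binary, model_in, model_out, qtype], check=True)
```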
2023-03-19  Update hot topics to mention Alpaca support (Georgi Gerganov)
2023-03-19  Add instruction for using Alpaca (#240) (Georgi Gerganov)
2023-03-18  Fix typo in readme (Pavol Rusnak)
2023-03-18  Add note about Python 3.11 to readme (Pavol Rusnak)
2023-03-18  Add memory/disk requirements to readme (Pavol Rusnak)
2023-03-17  Update Contributing section (Georgi Gerganov)
2023-03-17  Don't tell users to use a bad number of threads (#243) (Stephan Walter)
The readme tells people to use the command line option "-t 8", causing 8 threads to be started. On systems with fewer than 8 cores, this causes a significant slowdown. Remove the option from the example command lines and use /proc/cpuinfo on Linux to determine a sensible default.
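A minimal sketch of picking such a default in Python; os.cpu_count() covers what reading /proc/cpuinfo provides on Linux, and the cap of 8 is an illustrative choice, not the repo's actual logic:

```python
# Sketch: default to the machine's core count instead of a hard-coded 8,
# never exceeding `cap` threads on machines with many cores.
import os

def default_threads(cap=8):
    return max(1, min(os.cpu_count() or 4, cap))
```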