path: root/README.md
Age  Commit message  Author
2023-05-05  readme: add missing info (#1324)  Pavol Rusnak
2023-05-04  readme : add OpenBuddy link (#1321)  44670
2023-05-03  minor : fix whitespaces (#1302)  Georgi Gerganov
2023-05-03  scripts : platform independent script to verify sha256 checksums (#1203)  KASR
* python script to verify the checksum of the llama models
  Added a Python script for verifying SHA256 checksums of files in a directory, which can run on multiple platforms. Improved the formatting of the output results for better readability.
* Update README.md
  Update to the readme for improved readability and to explain the usage of the python checksum verification script.
* update the verification script
  I've extended the script based on suggestions by @prusnak. The script now checks the available RAM; if there is enough to check the file at once it will do so. If not, the file is read in chunks.
* minor improvement
  Small change so that the available RAM is checked and not the total RAM.
* remove the part of the code that reads the file at once if enough ram is available
  Based on suggestions from @prusnak I removed the part of the code that checks whether the user has enough RAM to read the entire model at once. The file is now always read in chunks.
* Update verify-checksum-models.py
  Quick fix to pass the git check.
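The chunked hashing described above can be sketched roughly as follows. This is a minimal sketch, not the actual `verify-checksum-models.py`; the function names are illustrative:

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in fixed-size chunks so memory use stays constant,
    no matter how large the model file is."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path: str, expected_hex: str) -> bool:
    """Compare a file's SHA256 digest against the expected hex string."""
    return sha256_of_file(path) == expected_hex
```

Reading in chunks trades a little speed for predictable memory use, which is why the final version of the script always does it this way.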
2023-04-28  Remove Q4_3 which is no better than Q5 (#1218)  Stephan Walter
2023-04-28  readme : update hot topics  Georgi Gerganov
2023-04-28  Correcting link to w64devkit (#1214)  Folko-Ven
Correcting link to w64devkit (change seeto to skeeto).
2023-04-26  readme : add quantization info  Georgi Gerganov
2023-04-26  Updating build instructions to include BLAS support (#1183)  DaniAndTheWeb
* Updated build information
  First update to the build instructions to include BLAS.
* Update README.md
* Update information about BLAS
* Better BLAS explanation
  Adding a clearer BLAS explanation and adding a link to download the CUDA toolkit.
* Better BLAS explanation
* BLAS for Mac
  Specifying that BLAS is already supported on Macs using the Accelerate Framework.
* Clarify the effect of BLAS
* Windows Make instructions
  Added the instructions to build with Make on Windows.
* Fixing typo
* Fix trailing whitespace
2023-04-26  quantize : use `map` to assign quantization type from `string` (#1191)  Pavol Rusnak
instead of `int` (while the `int` option is still supported). This allows the following usage:
`./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0`
instead of:
`./quantize ggml-model-f16.bin ggml-model-q4_0.bin 2`
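The name-or-integer fallback amounts to a simple lookup. A minimal Python sketch of the idea (the table below is an illustrative subset with assumed numeric codes; the real mapping lives in the C++ quantize tool):

```python
# Illustrative subset of the name -> type mapping; the codes and the
# full table are assumptions here, not the actual quantize source.
QUANT_TYPES = {"q4_0": 2, "q4_1": 3}


def parse_quant_type(arg: str) -> int:
    """Accept either a type name like 'q4_0' or the legacy integer form."""
    if arg in QUANT_TYPES:
        return QUANT_TYPES[arg]
    return int(arg)  # fall back to the old numeric usage
```

Keeping the integer path means existing scripts that pass `2` keep working while new invocations can use the readable name.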
2023-04-24  examples/main README improvements and some light refactoring (#1131)  mgroeber9110
2023-04-23  readme : update gpt4all instructions (#980)  Pavol Rusnak
2023-04-19  Minor: Readme fixed grammar, spelling, and misc updates (#1071)  CRD716
2023-04-19  readme : add warning about Q4_2 and Q4_3  Georgi Gerganov
2023-04-18  readme : update hot topics about new LoRA functionality  Georgi Gerganov
2023-04-17  readme : add Ruby bindings (#1029)  Atsushi Tatsuma
2023-04-14  py : new conversion script (#545)  comex
Current status: Working, except for the latest GPTQ-for-LLaMa format that includes `g_idx`. This turns out to require changes to GGML, so for now it only works if you use the `--outtype` option to dequantize it back to f16 (which is pointless except for debugging).

I also included some cleanup for the C++ code.

This script is meant to replace all the existing conversion scripts (including the ones that convert from older GGML formats), while also adding support for some new formats. Specifically, I've tested with:

- [x] `LLaMA` (original)
- [x] `llama-65b-4bit`
- [x] `alpaca-native`
- [x] `alpaca-native-4bit`
- [x] LLaMA converted to 'transformers' format using `convert_llama_weights_to_hf.py`
- [x] `alpaca-native` quantized with `--true-sequential --act-order --groupsize 128` (dequantized only)
- [x] same as above plus `--save_safetensors`
- [x] GPT4All
- [x] stock unversioned ggml
- [x] ggmh

There's enough overlap in the logic needed to handle these different cases that it seemed best to move to a single script. I haven't tried this with Alpaca-LoRA because I don't know where to find it.

Useful features:

- Uses multiple threads for a speedup in some cases (though the Python GIL limits the gain, and sometimes it's disk-bound anyway).
- Combines split models into a single file (both the intra-tensor split of the original and the inter-tensor split of 'transformers' format files). Single files are more convenient to work with and more friendly to future changes to use memory mapping on the C++ side. To accomplish this without increasing memory requirements, it has some custom loading code which avoids loading whole input files into memory at once.
- Because of the custom loading code, it no longer depends on PyTorch, which might make installing dependencies slightly easier or faster... although it still depends on NumPy and sentencepiece, so I don't know if there's any meaningful difference. In any case, I also added a requirements.txt file to lock the dependency versions in case of any future breaking changes.
- Type annotations checked with mypy.
- Some attempts to be extra user-friendly:
  - The script tries to be forgiving with arguments, e.g. you can specify either the model file itself or the directory containing it.
  - The script doesn't depend on config.json / params.json, just in case the user downloaded files individually and doesn't have those handy. But you still need tokenizer.model and, for Alpaca, added_tokens.json.
  - The script tries to give a helpful error message if added_tokens.json is missing.
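The "combine split files without loading them whole" idea can be sketched as streaming each shard into the output in fixed-size chunks. This is a simplified sketch assuming plain byte-level concatenation; the real conversion script additionally has to reassemble individual tensors across shards:

```python
import shutil


def merge_shards(shard_paths, out_path, chunk_size=1 << 20):
    """Stream each input shard into the combined file chunk by chunk,
    so no shard is ever held in memory in full."""
    with open(out_path, "wb") as out:
        for path in shard_paths:
            with open(path, "rb") as src:
                shutil.copyfileobj(src, out, length=chunk_size)
```

Because only one chunk is in memory at a time, peak memory use is independent of model size, which is what makes the no-PyTorch loading path practical.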
2023-04-13  readme : remove python 3.10 warning (#929)  CRD716
2023-04-13  readme : llama node binding (#911)  Genkagaku.GPT
* chore: add nodejs binding
* chore: add nodejs binding
2023-04-13  zig : update build.zig (#872)  Judd
* update
* update readme
* minimize the changes.

Co-authored-by: zjli2019 <zhengji.li@ingchips.com>
2023-04-12  readme : change "GPU support" link to discussion  Georgi Gerganov
2023-04-12  readme : update hot topics with link to "GPU support" issue  Georgi Gerganov
2023-04-12  readme: link to sha256sums file (#902)  Nicolai Weitkemper
This is to emphasize that these do not need to be obtained from elsewhere.
2023-04-11  Fix whitespace, add .editorconfig, add GitHub workflow (#883)  Pavol Rusnak
2023-04-10  Add BAIR's Koala to supported models (#877)  qouoq
2023-04-06  Make docker instructions more explicit (#785)  Pavol Rusnak
2023-04-05  Update README.md  Georgi Gerganov
2023-04-05  readme : change logo + add bindings + add uis + add wiki  Georgi Gerganov
2023-04-05  readme : update with CMake and windows example (#748)  Adithya Balaji
* README: Update with CMake and windows example
* README: update with code-review for cmake build
2023-04-02  Add a missing step to the gpt4all instructions (#690)  Thatcher Chamberlin
`migrate-ggml-2023-03-30-pr613.py` is needed to get gpt4all running.
2023-04-01  readme: replace termux links with homepage, play store is deprecated (#680)  rimoliga
2023-03-31  drop quantize.py (now that models are using a single file)  Pavol Rusnak
2023-03-30  readme : update supported models  Georgi Gerganov
2023-03-29  readme : fix typos  Georgi Gerganov
2023-03-29  readme : add GPT4All instructions (close #588)  Georgi Gerganov
2023-03-26  Update README and comments for standalone perplexity tool (#525)  Stephan Walter
2023-03-26  Add logo to README.md  Georgi Gerganov
2023-03-25  Move chat scripts into "./examples"  Georgi Gerganov
2023-03-25  Remove obsolete information from README  Georgi Gerganov
2023-03-24  Update README.md (#444)  Gary Mulder
Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.
2023-03-24  Add link to Roadmap discussion  Georgi Gerganov
2023-03-23  Revert "Delete SHA256SUMS for now" (#429)  Stephan Walter
* Revert "Delete SHA256SUMS for now (#416)"
  This reverts commit 8eea5ae0e5f31238a97c79ea9103c27647380e37.
* Remove ggml files until they can be verified
* Remove alpaca json
* Add also model/tokenizer.model to SHA256SUMS + update README

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-03-23  Move model section from issue template to README.md (#421)  Gary Mulder
* Update custom.md
* Removed Model section as it is better placed in README.md
* Updates to README.md model section
* Inserted text that was removed from issue template about obtaining models from FB and links to papers describing the various models
* Removed IPFS download links for the Alpaca 7B models as these look to be in the old data format and probably shouldn't be directly linked to, anyway
* Updated the perplexity section to point at Perplexity scores #406 discussion
2023-03-23  Adjust repetition penalty ..  Georgi Gerganov
2023-03-23  Add link to recent podcast about whisper.cpp and llama.cpp  Georgi Gerganov
2023-03-22  Add details on perplexity to README.md (#395)  Gary Linscott
2023-03-22  Remove temporary notice and update hot topics  Georgi Gerganov
2023-03-21  Add SHA256SUMS file and instructions to README how to obtain and verify the downloads  Gary Mulder

Hashes created using:
`sha256sum models/*B/*.pth models/*[7136]B/ggml-model-f16.bin* models/*[7136]B/ggml-model-q4_0.bin* > SHA256SUMS`
2023-03-21  Add notice about pending change  Georgi Gerganov
2023-03-21  Minor style changes  Georgi Gerganov