* Improved quantize script
I improved the quantize script by adding error handling and by allowing multiple models to be selected for quantization at once on the command line. I also converted it to Python for generality and extensibility.
* Fixes and improvements based on Matt's observations
Fixed and improved many things in the script based on @mattsta's review. The parallelization suggestion still needs to be revisited, but the code for it was added anyway (commented out).
* Small fixes to the previous commit
* Corrected to use the original glob pattern
The original Bash script uses a glob pattern to match files with endings such as ...bin.0, ...bin.1, etc. That pattern has now been translated correctly to Python.
* Added support for Windows and updated README to use this script
New code sets the name of the quantize binary depending on the platform (quantize.exe when on Windows), and the README.md file has been updated to use this script instead of the Bash one.
* Fixed a typo and removed shell=True in the subprocess.run call
Fixed a typo in the new filenames of the quantized models and removed the shell=True parameter from the subprocess.run call, since it conflicted with passing the parameters as a list.
* Corrected previous commit
* Small tweak: changed the name of the program in argparse
This was causing the auto-generated help message to suggest the program's usage as being literally "$ Quantization Script [arguments]". It should now read something like "$ python3 quantize.py [arguments]".
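
As a rough illustration of the shape such a script can take (a sketch only: the models/ directory layout, the ggml-model-f16 naming, and the quantize binary's argument order are assumptions here, not the actual script):

```python
#!/usr/bin/env python3
"""Sketch of a multi-model quantize driver (illustrative, not the real script)."""
import argparse
import glob
import platform
import subprocess
import sys

def main() -> None:
    # Setting prog makes --help suggest a realistic invocation.
    parser = argparse.ArgumentParser(prog="python3 quantize.py")
    parser.add_argument("models", nargs="+", help="model names, e.g. 7B 13B")
    args = parser.parse_args()

    # Platform-dependent binary name: quantize.exe on Windows.
    binary = "quantize.exe" if platform.system() == "Windows" else "./quantize"

    for model in args.models:
        # The trailing * also matches multipart files: ...bin.1, ...bin.2, etc.
        files = glob.glob(f"models/{model}/ggml-model-f16.bin*")
        if not files:
            print(f"error: no f16 files found for model {model}", file=sys.stderr)
            continue
        for path in files:
            out = path.replace("f16", "q4_0")
            # Pass arguments as a list; combining a list with shell=True
            # is exactly the conflict described above.
            subprocess.run([binary, path, out, "2"], check=True)

if __name__ == "__main__":
    main()
```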

ensure color reset. (#283)

(#294)
* Use F16 for memory_k and memory_v
* add command line switch to use f16 instead of f32 for memory k+v
---------
Co-authored-by: Ty Everett <ty@tyweb.us>
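
The memory saving is easy to estimate: halving the element size halves the k/v memory. A back-of-the-envelope calculation with assumed 7B-class dimensions (illustrative numbers, not from the commit):

```python
# memory_k and memory_v hold keys and values for every layer and context slot.
n_layer, n_ctx, n_embd = 32, 2048, 4096   # assumed 7B-class dimensions
elems = 2 * n_layer * n_ctx * n_embd      # x2: keys + values
print(f"f32 memory_k+v: {elems * 4 / 1024**2:.0f} MiB")  # -> 2048 MiB
print(f"f16 memory_k+v: {elems * 2 / 1024**2:.0f} MiB")  # -> 1024 MiB
```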

* Refactor get_n_parts function to simplify code and improve readability
* Use f-strings instead of concatenation
* Refactoring: more concise and readable
* modularize
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
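
A sketch of what a table-driven get_n_parts using f-strings can look like (the dim-to-parts mapping follows the public LLaMA checkpoint sizes, but treat both it and the names as assumptions):

```python
import sys

# Embedding dimension -> number of checkpoint parts (assumed mapping).
N_PARTS = {4096: 1, 5120: 2, 6656: 4, 8192: 8}

def get_n_parts(dim: int) -> int:
    n_parts = N_PARTS.get(dim)
    if n_parts is None:
        print(f"Invalid dim: {dim}")   # f-string instead of concatenation
        sys.exit(1)
    print(f"n_parts = {n_parts}")
    return n_parts
```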

Also start adding prompts in "./prompts"

I think this is what is used in the Python code

LLaMA doesn't support context sizes larger than 2048 tokens, and going above that produces terrible results.
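
A guard in this spirit might warn and clamp the requested size (a hypothetical sketch, not the actual patch):

```python
MAX_CTX = 2048  # LLaMA's training context length

def clamp_n_ctx(n_ctx: int) -> int:
    if n_ctx > MAX_CTX:
        print(f"warning: n_ctx {n_ctx} exceeds {MAX_CTX}; results will degrade")
        return MAX_CTX
    return n_ctx
```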

fixed a warning about an unused function result by assigning the result to std::ignore

This causes long prompts to parse very slowly.

* CI Improvements
Manual build feature, autoreleases for Windows
* better CI naming convention
use branch name in releases and tags

* Nix flake
* Nix: only add Accelerate framework on macOS
* Nix: development shell, direnv and compatibility
* Nix: use python packages supplied by withPackages
* Nix: remove channel compatibility
* Nix: fix ARM neon dotproduct on macOS
---------
Co-authored-by: Pavol Rusnak <pavol@rusnak.io>

* Implement non-greedy tokenizer that tries to maximize token lengths
* Insert single space in front of the prompt
- this is to match original llama tokenizer behavior
---------
Co-authored-by: Jakub Horak <jakub.horak@ibawizard.net>
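
One simple way to bias toward longer tokens is a longest-prefix scan over the vocabulary. This sketch illustrates the idea, including the prepended space; it is not the commit's exact algorithm:

```python
def tokenize(text: str, vocab: dict) -> list:
    text = " " + text  # match original llama tokenizer behavior
    max_len = max(len(tok) for tok in vocab)
    ids, i = [], 0
    while i < len(text):
        # Try the longest candidate first so longer tokens win.
        for n in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + n]
            if piece in vocab:
                ids.append(vocab[piece])
                i += n
                break
        else:
            i += 1  # simplified fallback: skip characters not in the vocab
    return ids
```

For example, tokenize("Hello world", {" Hello": 1, " world": 2, " He": 3}) prefers the longer " Hello" over " He" and returns [1, 2].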

The README tells people to use the command line option "-t 8", causing 8
threads to be started. On systems with fewer than 8 cores, this causes a
significant slowdown. Remove the option from the example command lines
and use /proc/cpuinfo on Linux to determine a sensible default.
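
A sketch of that idea, counting logical processors in /proc/cpuinfo (the actual change may count physical cores instead; the fallback value is an assumption):

```python
def default_thread_count(fallback: int = 4) -> int:
    try:
        with open("/proc/cpuinfo") as f:
            return sum(1 for line in f if line.startswith("processor")) or fallback
    except OSError:  # not Linux, or /proc unavailable
        return fallback
```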

* add pthread link to fix cmake build under linux
* add cmake to linux and macos platform
* separate make and cmake workflow
---------
Co-authored-by: Sebastián A <sebastian.aedo29@gmail.com>

* feat: dockerize llamacpp
* feat: split build & runtime stages
* split dockerfile into main & tools
* add quantize into tool docker image
* Update .devops/tools.sh
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* add docker action pipeline
* change CI to publish at github docker registry
* fix runs-on name: macOS-latest should be macos-latest (lowercase)
* include docker versioned images
* fix github action docker
* fix docker.yml
* feat: include all-in-one command tool & update readme.md
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Add AVX2 version of ggml_vec_dot_q4_1
* Small optimisations to q4_1 dot product (@Const-me)
* Rearrange Q4_1 quantization to work for multipart models. (Fix #152)
* Fix ggml_vec_mad_q4_1 too
* Fix non-vectorised q4_1 vec mul
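
For reference, q4_1 stores each block as a scale d and a minimum m plus 4-bit quants, reconstructing x ≈ d*q + m. A scalar sketch of that round trip (block size and rounding details simplified):

```python
def quantize_q4_1(xs):
    """Quantize one block of floats to (scale, min, 4-bit quants)."""
    lo, hi = min(xs), max(xs)
    d = (hi - lo) / 15 or 1.0                      # 15 = max 4-bit level
    qs = [min(15, round((x - lo) / d)) for x in xs]
    return d, lo, qs

def dequantize_q4_1(d, m, qs):
    return [d * q + m for q in qs]                 # x ~= d*q + m
```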

* add ggml_rms_norm
* update op num
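
RMSNorm, which LLaMA uses in place of LayerNorm, rescales by the reciprocal root mean square without subtracting the mean. A NumPy sketch (the eps value is an assumption):

```python
import numpy as np

def rms_norm(x: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # x / sqrt(mean(x^2) + eps); no mean subtraction, no bias.
    return x / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
```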

* add SIGINT support for _WIN32 environments
* perhaps more consistent

* added ctx_size parameter
* added it in more places
* Apply suggestions from code review
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* fixed color reset on exit
* added sigint handler for ansi_color_reset
* Update main.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
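
The pattern, expressed as a Python stand-in for the C++ patch (the reset sequence is the standard \x1b[0m; the exit code is a convention, not taken from the commit):

```python
import atexit
import signal
import sys

ANSI_COLOR_RESET = "\x1b[0m"

def reset_color() -> None:
    sys.stdout.write(ANSI_COLOR_RESET)
    sys.stdout.flush()

def sigint_handler(signum, frame):
    reset_color()
    sys.exit(130)  # 128 + SIGINT

atexit.register(reset_color)                  # reset on normal exit
signal.signal(signal.SIGINT, sigint_handler)  # reset on Ctrl+C too
```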

* Update README.md
* Update README.md
remove facebook

convert-pth-to-ggml.py (#142)
There are ways that special tokens or other new tokens could be added to the tokenizer; therefore it's probably best not to assume the vocabulary is only 32000 tokens.
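
In that spirit, the vocabulary size can be read from the tokenizer model itself rather than hardcoded (a sketch assuming the sentencepiece tokenizer and a models/tokenizer.model path):

```python
from sentencepiece import SentencePieceProcessor

sp = SentencePieceProcessor(model_file="models/tokenizer.model")
n_vocab = sp.vocab_size()  # may differ from 32000 once tokens are added
```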

Without the "static" prefix, it fails to compile in clang.

* Don't use vdotq_s32 if it's not available
`dotprod` extensions aren't available on some ARM CPUs (e.g. Raspberry Pi 4), so check for them and only use them if they're available.
Reintroduces the code removed in 84d9015 if `__ARM_FEATURE_DOTPROD` isn't defined.
* Update ggml.c
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>