2023-03-30  readme : update supported models  (Georgi Gerganov)
2023-03-30  Introduce GGML migration tool for new file format  (Justine Tunney)

    If you deleted your old Meta LLaMA .pth files, then the migrate-ggml-2023-03-30-pr613.py script will allow you to convert your old ggml files into the new mmap()'able format. See #613
2023-03-30  Ensure --mlock works properly with mmap() support  (Justine Tunney)
2023-03-30  Make loading weights 10-100x faster  (Justine Tunney)

    This is a breaking change that brings three benefits:

    1. Your inference commands should load 100x faster
    2. You may be able to safely load models 2x larger
    3. You can run many concurrent inference processes

    This was accomplished by changing the file format so we can mmap() weights directly into memory without having to read() or copy them. That ensures the kernel can make its file cache pages directly accessible to our inference processes, and that those pages are much less likely to get evicted (which would force loads to hit disk), because they are no longer competing with memory pages that were needlessly created by gigabytes of standard i/o.

    The new file format supports single-file models like LLaMA 7B, and it also supports multi-file models like LLaMA 13B. Our Python tool now merges the foo.1, foo.2, etc. files back into a single file so that the C++ code which maps it doesn't need to reshape data every time. That has made llama.cpp much simpler; much of its load code has now been deleted.

    Furthermore, this change ensures that tensors are aligned properly on a 32-byte boundary. That opens the door to additional performance gains on some microprocessors, by using ops that require memory alignment.

    Lastly, note that both POSIX and Windows are supported. Fixes #91
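The mmap() approach described above can be sketched in a few lines of POSIX C. This is an illustrative outline, not the actual llama.cpp loader; the function name map_file is made up for the example:

```c
#include <fcntl.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a file read-only into memory. The kernel serves pages straight from
 * its file cache, so no read() copies are made, and concurrent processes
 * share the same physical pages. POSIX only; error handling trimmed. */
static void *map_file(const char *path, size_t *out_len) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return NULL;
    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return NULL; }
    void *addr = mmap(NULL, (size_t) st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd); /* the mapping remains valid after the fd is closed */
    if (addr == MAP_FAILED) return NULL;
    *out_len = (size_t) st.st_size;
    return addr;
}
```

On Windows the analogous pair would presumably be CreateFileMapping()/MapViewOfFile(); --mlock can then pin the mapped pages so they are never evicted.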
2023-03-30  Initial windows support (untested)  (Slaren)
2023-03-30  Always initialize mm_addr and mm_length in llama_model  (Slaren)
2023-03-30  Unmap the file in llama_free  (Slaren)
2023-03-30  Make mmap_file static  (Slaren)
2023-03-30  Fix ggml_init_params in quantize  (Slaren)
2023-03-30  Add mmap support for model files  (Slaren)
2023-03-30  cmake : properly invoke CTest (#629)  (Stephan Walter)
2023-03-30  Remove unused variable (#607)  (Casey Primozic)

    It seems some new warnings were added recently that exposed this. I wrote the code that included this unused variable originally, and it is indeed not needed.
2023-03-30  make : fix darwin f16c flags check (#615)  (david raistrick)

    ...there was no check. Ported upstream from https://github.com/zanussbaum/gpt4all.cpp/pull/2 (I don't see any clean path for upstream patches)
2023-03-30  ggml : fix NEON signs (close #620, #622)  (Georgi Gerganov)
2023-03-30  Fix GGML_F32Cx8_STORE in AVX without F16C path (#619)  (slaren)
2023-03-29  ci : re-enable AVX512 testing (Windows-MSVC) (#584)  (anzz1)

    * CI: Re-enable AVX512 testing (Windows-MSVC)
      Now with 100% less base64 encoding
    * plain __cpuid is enough here
2023-03-29  ggml : init time on first ggml_init() call  (Georgi Gerganov)
2023-03-29  llama : fix compile warnings when reading the vocab  (Georgi Gerganov)
2023-03-29  ggml : add ARM_NEON dequantize_row_q4_1()  (Georgi Gerganov)
2023-03-29  ggml : add ARM_NEON quantize_row_q4_1()  (Georgi Gerganov)
2023-03-29  ggml : add ARM_NEON ggml_vec_dot_q4_1()  (Georgi Gerganov)
2023-03-29  rename convert_ggml_to_pth.py -> convert-ggml-to-pth.py (#600)  (Pavol Rusnak)

    to match filenames of other converters
2023-03-29  Create chat-13B.bat (#592)  (Thérence)

    * Create chat-13B.bat
      Same script as chat-13B.sh, but for Windows users. Tested and working on Windows 10/11 v22H2.
    * Apply suggestions from code review
    ---------
    Co-authored-by: anzz1 <anzz1@live.com>
2023-03-29  readme : fix typos  (Georgi Gerganov)
2023-03-29  readme : add GPT4All instructions (close #588)  (Georgi Gerganov)
2023-03-29  py : add GPT4All conversion script  (Georgi Gerganov)

    For now: copy-paste. Too much time for me to deduplicate the python code.
2023-03-29  llama : use the same threshold for OpenBLAS and ggml thread limiting (#577)  (Maël Kerbiriou)
2023-03-29  add example of re-act pattern (#583)  (Tobias Lütke)

    * add example of re-act pattern
    * spelling...
    * fixed whitespace in reverse prompt issue
2023-03-29  Fix GCC warning about binary literal (#595)  (anzz1)

    0b10101010 -> 0xAA /* 0b10101010 */
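The one-line fix above can be sketched as follows (the variable name is invented for illustration). Binary literals (0b...) are a GNU extension in C, standardized only in C23, so pedantic GCC builds warn about them; the equivalent hex constant is portable and keeps the original bit pattern visible as a comment:

```c
#include <stdint.h>

/* Before (GNU extension, warns under -Wpedantic on GCC):
 *     const uint8_t mask = 0b10101010;
 * After (portable): */
const uint8_t mask = 0xAA; /* 0b10101010 */
```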
2023-03-29  Fix typo in llama.h (#593)  (anzz1)
2023-03-28  Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)  (anzz1)

    * Enable Fused-Multiply-Add (FMA) instructions on MSVC
      __FMA__ macro does not exist in MSVC
    * Enable F16C/CVT16 vector extensions on MSVC
      __F16C__ macro does not exist in MSVC, but is implied with AVX2/AVX512
    * MSVC cvt intrinsics
    * Add __SSE3__ macro for MSVC too, because why not, even though it's not currently used for anything when AVX is defined
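A sketch of what such a shim can look like, reconstructed from the commit message rather than copied from the patch: MSVC never defines __FMA__ or __F16C__, but every AVX2-capable CPU also supports FMA and F16C, so the macros can be implied from __AVX2__ (which MSVC sets under /arch:AVX2):

```c
/* Illustrative shim: supply the GCC/Clang feature macros that MSVC omits,
 * deriving them from the AVX2 macro that MSVC does define. */
#if defined(_MSC_VER) && defined(__AVX2__)
  #ifndef __FMA__
  #define __FMA__ 1
  #endif
  #ifndef __F16C__
  #define __F16C__ 1
  #endif
#endif
```

With the macros in place, the same `#ifdef __FMA__` code paths compile unchanged on GCC, Clang, and MSVC.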
2023-03-28  CI: fix subdirectory path globbing (#546)  (anzz1)

    - Changes in subdirectories will now be detected properly
    - (Windows-MSVC) AVX512 tests temporarily disabled
2023-03-28  llama : fix linkage with mingw (#551)  (anzz1)

    * Revert 7e53955 (#542)
      Still needs to be fixed properly
    * Fix linking on mingw32
2023-03-28  ggml : add AVX2 implementation of quantize_row_q4_1 (#515)  (slaren)

    * Add AVX2 implementation of quantize_row_q4_1
    * Actually use AVX2
    * Make quantize_row_q4_1 static
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28  py : add temporary script to convert old ggml files to newer version (#539)  (thement)

    Co-authored-by: Jakub Horak <jakub.horak@ibawizard.net>
2023-03-28  py : add capability to convert from ggml back to torch or hf format for further consumption/training/finetuning (#403)  (Tai Duc Nguyen)
2023-03-28  ggml : refactor quantized processing functions (#509)  (Stephan Walter)

    * Refactor quantized processing functions
    * ggml : minor
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28  py : removed unused `model` variable and verified that the code functions correctly with `vocab_only` setting. Also confirmed that the code works as expected after running with reduced memory usage due to deletion of the no-longer-needed variable. (#547)  (DooWoong Lee (David))
2023-03-28  ci : make ctest verbose, hopefully we see what is wrong with the sanitizer  (Georgi Gerganov)
2023-03-28  tests : free llama context at the end of the test  (Georgi Gerganov)
2023-03-28  all : be more strict about converting float to double (#458)  (Stephan Walter)

    * Be more strict about converting float to double
    * Test equivalence of round, SILU implementations
      Test module is commented out in CMakeLists.txt because the tests may take a long time, depending on how much the compiler optimizes.
    * Fix softmax in perplexity.cpp
    * all : prefer float over double where appropriate
    * perplexity : add <cmath>
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28  deploy : add a Package.swift for SwiftPM support (#393)  (Jed Fox)

    * Add a Package.swift for SwiftPM support
    * Swap from exclusions to allowlist
2023-03-28  ggml : introduce structs for the q4 data blocks (#356)  (Stephan Walter)

    * Introduce structs for the q4 data blocks
    * ggml : rename quant struct variables + fix ARM_NEON
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28  gitignore : add "embedding"  (Georgi Gerganov)
2023-03-28  Check the existence of f16_model_path_base in quantize.py (#574)  (dotpy314)

    Co-authored-by: Jincheng Miao <jincheng.miao@gmail.com>
2023-03-28  Fix usage of F16C intrinsics in AVX code (#563)  (slaren)

    * Fix usage of F16C intrinsics in AVX code when F16C is not defined
2023-03-28  main.cpp fixes, refactoring (#571)  (anzz1)

    - main: entering an empty line passes back control without new input in interactive/instruct modes
    - instruct mode: keep prompt fix
    - instruct mode: duplicate instruct prompt fix
    - refactor: move common console code from main -> common
2023-03-28  Add embedding example to Makefile (#540)  (RJ Adriaansen)
2023-03-27  Fix missing ggml link in cmake for examples/* on w64-mingw32 (#542)  (Marco Matthies)
2023-03-26  ci: add debug build to sanitizer build matrix (#527)  (Erik Scholz)