Age | Commit message | Author
2023-04-03 | Windows: reactivate sigint handler after each Ctrl-C (#736) | mgroeber9110
2023-04-03 | 10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654) | SebastianApel
    * Performance improvement of AVX2 code
    * Fixed problem with MSVC compiler
    * Reviewer comments: removed double semicolon, deleted empty line 1962
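An aside on the idiom these AVX2 dot-product commits revolve around: a minimal sketch of the _mm256_maddubs_epi16 multiply-accumulate pattern, not the actual ggml kernel (which also unpacks the q4_0 nibbles and applies the per-block fp32 scales).

    #include <immintrin.h>
    #include <stdint.h>

    /* Multiply unsigned bytes by signed bytes and accumulate into 32-bit
     * lanes. _mm256_maddubs_epi16 adds adjacent u8*s8 products into
     * saturating s16 pairs; with nibble-sized operands (0..15, as in q4_0)
     * saturation cannot occur. n must be a multiple of 32. */
    static int32_t dot_u8_s8_avx2(const uint8_t *a, const int8_t *b, int n) {
        __m256i acc = _mm256_setzero_si256();
        const __m256i ones = _mm256_set1_epi16(1);
        for (int i = 0; i < n; i += 32) {
            __m256i va  = _mm256_loadu_si256((const __m256i *)(a + i));
            __m256i vb  = _mm256_loadu_si256((const __m256i *)(b + i));
            __m256i p16 = _mm256_maddubs_epi16(va, vb); /* u8*s8 -> s16 pairs */
            acc = _mm256_add_epi32(acc, _mm256_madd_epi16(p16, ones));
        }
        /* horizontal sum of the eight 32-bit lanes */
        __m128i lo = _mm256_castsi256_si128(acc);
        __m128i hi = _mm256_extracti128_si256(acc, 1);
        lo = _mm_add_epi32(lo, hi);
        lo = _mm_hadd_epi32(lo, lo);
        lo = _mm_hadd_epi32(lo, lo);
        return _mm_cvtsi128_si32(lo);
    }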
2023-04-03 | Define non-positive temperature behavior (#720) | Ivan Stepanov
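For context, a sketch of what "defining" the behavior can look like. This is an assumption, not necessarily what #720 chose: non-positive temperature degenerates to greedy argmax instead of dividing logits by zero.

    /* Sketch (assumed semantics): temp <= 0 picks the single
     * highest-logit token; the stochastic path is elided. */
    static int sample_token_sketch(const float *logits, int n_vocab, float temp) {
        if (temp <= 0.0f) {
            int best = 0;
            for (int i = 1; i < n_vocab; ++i) {
                if (logits[i] > logits[best]) best = i;
            }
            return best;
        }
        /* otherwise: softmax(logits / temp), then top-k/top-p sampling */
        return -1;
    }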
2023-04-02 | Remove torch GPU dependencies from the Docker.full image (#665) | bsilvereagle
    By using `pip install torch --index-url https://download.pytorch.org/whl/cpu` instead of `pip install torch`, we can specify that we want the CPU-only version of PyTorch without any GPU dependencies. This reduces the size of the Docker image from 7.32 GB to 1.62 GB.
2023-04-02 | Add a missing step to the gpt4all instructions (#690) | Thatcher Chamberlin
    `migrate-ggml-2023-03-30-pr613.py` is needed to get gpt4all running.
2023-04-02 | Added API for getting/setting the kv_cache (#685) | Christian Falch
    The API provides access methods for retrieving the current memory buffer of the kv_cache and its token count. It also contains a method for setting the kv_cache from a memory buffer. This makes it possible to load/save history - maybe support a --cache-prompt parameter as well?
    Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
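A save/restore sketch built on the accessors this entry describes. The signatures are assumed from PR #685 (later llama.cpp releases reworked this API), so treat it as illustrative rather than current.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include "llama.h"

    /* Snapshot the KV cache so a session's context can be restored later. */
    struct kv_snapshot {
        uint8_t *buf;
        size_t   size;
        int      n_tokens;
    };

    static int kv_save(struct llama_context *ctx, struct kv_snapshot *s) {
        s->size     = llama_get_kv_cache_size(ctx);        /* assumed accessor */
        s->n_tokens = llama_get_kv_cache_token_count(ctx); /* assumed accessor */
        s->buf      = malloc(s->size);
        if (!s->buf) return -1;
        memcpy(s->buf, llama_get_kv_cache(ctx), s->size);
        return 0;
    }

    static void kv_restore(struct llama_context *ctx, const struct kv_snapshot *s) {
        llama_set_kv_cache(ctx, s->buf, s->size, s->n_tokens); /* assumed accessor */
    }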
2023-04-02 | ggml : change ne to int64_t (#626) | Marian Cepok
2023-04-02 | examples : add gpt4all script (#658) | Leonardo Neumann
2023-04-02 | llama : do not allocate KV cache for "vocab_only == true" (#682) | Stephan Walter
    Fixes the sanitizer CI.
2023-04-02 | make : use -march=native -mtune=native on x86 (#609) | Fabian
2023-04-02 | Fix default params for examples/main (#697) | Murilo Santana
2023-04-01 | py: huggingface -> Hugging Face (#686) | Ikko Eltociear Ashimine
2023-04-01 | readme: replace Termux links with homepage; Play Store is deprecated (#680) | rimoliga
2023-04-01 | Show error message when -f fails | Slaren
2023-03-31 | Enable -std= for cmake builds, fix warnings (#598) | Stephan Walter
2023-03-31 | Optimize AVX2 ggml_vec_dot_q4_0 (#642) | slaren
2023-03-31 | Add AVX acceleration (#617) | perserk
    * ggml : add AVX quantize_row_q4_0()
    * ggml : add AVX ggml_vec_dot_q4_0()
    * ggml : refactor AVX part of ggml_vec_dot_q4_0()
    https://github.com/ggerganov/llama.cpp/pull/617#issuecomment-1489985645
2023-03-31 | py : clean up the code | Pavol Rusnak
    - use f-strings where possible
    - drop the first param of the encode/decode functions since "utf-8" is the default
2023-03-31 | Drop quantize.py (now that models are using a single file) | Pavol Rusnak
2023-03-30 | readme : update supported models | Georgi Gerganov
2023-03-30 | Introduce GGML migration tool for new file format | Justine Tunney
    If you deleted your old Meta LLaMA .pth files, then the migrate-ggml-2023-03-30-pr613.py script will allow you to convert your old ggml files into the new mmap()'able format. See #613.
2023-03-30 | Ensure --mlock works properly with mmap() support | Justine Tunney
2023-03-30 | Make loading weights 10-100x faster | Justine Tunney
    This is a breaking change that's going to give you three benefits:
    1. Your inference commands should load 100x faster
    2. You may be able to safely load models 2x larger
    3. You can run many concurrent inference processes
    This was accomplished by changing the file format so we can mmap() weights directly into memory without having to read() or copy them, thereby ensuring the kernel can make its file cache pages directly accessible to our inference processes, and that the file cache pages are much less likely to get evicted (which would force loads to hit disk) because they're no longer competing with memory pages that were needlessly created by gigabytes of standard i/o.
    The new file format supports single-file models like LLaMA 7B, and it also supports multi-file models like LLaMA 13B. Our Python tool now merges the foo.1, foo.2, etc. files back into a single file so that the C++ code which maps it doesn't need to reshape data every time. That's made llama.cpp so much simpler. Much of its load code has now been deleted.
    Furthermore, this change ensures that tensors are aligned properly on a 32-byte boundary. That opens the door to seeing if we can get additional performance gains on some microprocessors by using ops that require memory alignment.
    Lastly, note that both POSIX and the Windows platform are supported.
    Fixes #91
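A minimal POSIX sketch of the load path described above; illustrative only, since the real loader also covers Windows, --mlock, and the 32-byte alignment checks.

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map a model file read-only: the kernel's page cache backs the weights
     * directly (no read()/copy), and the pages are shared across concurrent
     * inference processes. */
    static void *map_model_file(const char *path, size_t *out_size) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return NULL;
        struct stat st;
        if (fstat(fd, &st) != 0) { close(fd); return NULL; }
        void *addr = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        close(fd); /* the mapping stays valid after close */
        if (addr == MAP_FAILED) return NULL;
        *out_size = (size_t)st.st_size;
        return addr;
    }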
2023-03-30 | Initial Windows support (untested) | Slaren
2023-03-30 | Always initialize mm_addr and mm_length in llama_model | Slaren
2023-03-30 | Unmap the file in llama_free | Slaren
2023-03-30 | Make mmap_file static | Slaren
2023-03-30 | Fix ggml_init_params in quantize | Slaren
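For orientation, a sketch of what this fix likely touches, under the assumption (taken from the surrounding mmap commits) that ggml_init_params gained a no_alloc field that every caller now has to initialize.

    #include <stdbool.h>
    #include <stddef.h>
    #include "ggml.h"

    static struct ggml_context *make_ctx(void) {
        struct ggml_init_params params = {
            /*.mem_size   =*/ 16u*1024u*1024u,
            /*.mem_buffer =*/ NULL,
            /*.no_alloc   =*/ false, /* assumed field, added with mmap support */
        };
        return ggml_init(params);
    }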
2023-03-30 | Add mmap support for model files | Slaren
2023-03-30 | cmake : properly invoke CTest (#629) | Stephan Walter
2023-03-30 | Remove unused variable (#607) | Casey Primozic
    It seems some new warnings were added recently that exposed this. I wrote the code that originally included this unused variable, and it is indeed not needed.
2023-03-30 | make : fix darwin f16c flags check (#615) | david raistrick
    ...there was no check. Ported upstream from https://github.com/zanussbaum/gpt4all.cpp/pull/2 (I don't see any clean path for upstream patches).
2023-03-30 | ggml : fix NEON signs (close #620, #622) | Georgi Gerganov
2023-03-30 | Fix GGML_F32Cx8_STORE in AVX without F16C path (#619) | slaren
2023-03-29 | ci : re-enable AVX512 testing (Windows-MSVC) (#584) | anzz1
    * CI: re-enable AVX512 testing (Windows-MSVC), now with 100% less base64 encoding
    * plain __cpuid is enough here
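On the "__cpuid is enough" point: MSVC's __cpuid intrinsic clears ECX by itself, so CPUID leaf 7 can be queried without __cpuidex. A sketch of an AVX-512F probe (the helper name is illustrative):

    #ifdef _MSC_VER
    #include <intrin.h>

    /* AVX-512 Foundation is reported in CPUID leaf 7, EBX bit 16. */
    static int has_avx512f(void) {
        int info[4]; /* EAX, EBX, ECX, EDX */
        __cpuid(info, 7);
        return (info[1] & (1 << 16)) != 0;
    }
    #endif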
2023-03-29 | ggml : init time on first ggml_init() call | Georgi Gerganov
2023-03-29 | llama : fix compile warnings when reading the vocab | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON dequantize_row_q4_1() | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON quantize_row_q4_1() | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON ggml_vec_dot_q4_1() | Georgi Gerganov
2023-03-29 | Rename convert_ggml_to_pth.py -> convert-ggml-to-pth.py (#600) | Pavol Rusnak
    to match the filenames of the other converters
2023-03-29 | Create chat-13B.bat (#592) | Thérence
    * Create chat-13B.bat: the same script as chat-13B.sh, but for Windows users. Tested and working on Windows 10/11 22H2
    * Apply suggestions from code review
    Co-authored-by: anzz1 <anzz1@live.com>
2023-03-29 | readme : fix typos | Georgi Gerganov
2023-03-29 | readme : add GPT4All instructions (close #588) | Georgi Gerganov
2023-03-29 | py : add GPT4All conversion script | Georgi Gerganov
    For now: copy-paste. Too much time for me to deduplicate the Python code.
2023-03-29 | llama : use the same threshold for OpenBLAS and ggml thread limiting (#577) | Maël Kerbiriou
2023-03-29 | Add example of re-act pattern (#583) | Tobias Lütke
    * add example of re-act pattern
    * spelling...
    * fixed whitespace in reverse prompt issue
2023-03-29 | Fix GCC warning about binary literal (#595) | anzz1
    0b10101010 -> 0xAA /* 0b10101010 */
2023-03-29 | Fix typo in llama.h (#593) | anzz1
2023-03-28 | Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375) | anzz1
    * Enable Fused-Multiply-Add (FMA) instructions on MSVC: the __FMA__ macro does not exist in MSVC
    * Enable F16C/CVT16 vector extensions on MSVC: the __F16C__ macro does not exist in MSVC, but is implied by AVX2/AVX512
    * MSVC cvt intrinsics
    * Add the __SSE3__ macro for MSVC too, because why not, even though it's not currently used for anything when AVX is defined
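A sketch of the kind of shim this entry describes (the exact form in ggml may differ): MSVC defines __AVX2__ but never __FMA__ or __F16C__, even though AVX2 implies both, so the SIMD code defines them itself.

    /* Compatibility shim, sketched: give MSVC the feature macros the
     * AVX2 code paths test for. */
    #if defined(_MSC_VER) && defined(__AVX2__)
      #ifndef __FMA__
      #define __FMA__
      #endif
      #ifndef __F16C__
      #define __F16C__
      #endif
    #endif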