Age  Commit message  Author
2023-04-10  Add BAIR's Koala to supported models (#877)  [qouoq]
2023-04-10  ggml : fix WASM build  [Georgi Gerganov]
2023-04-10  ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst  [Georgi Gerganov]
2023-04-10  ggml : remove trailing whitespaces  [Georgi Gerganov]
2023-04-10  Simplify to include lower-case windows.h always, fix compile on mingw32 (#747)  [Marco Matthies]
2023-04-10  ggml : fix quantize_row_q4_1() ARM_NEON (close #876)  [Georgi Gerganov]
2023-04-10  Print model version.  [comex]
Also improve model type printing, and fix indentation of an unrelated switch statement.
2023-04-10  Rewrite loading code to try to satisfy everyone:  [comex]
- Support all three formats (ggml, ggmf, ggjt). (However, I didn't include the hack needed to support GPT4All files without conversion. Those can still be used after converting them with convert.py from my other PR.)
- Support both mmap and read (mmap is used by default, but can be disabled with `--no-mmap`, and is automatically disabled for pre-ggjt files or on platforms where mmap is not supported). A sketch of the mmap path follows below.
- Support multi-file models like before, but automatically determine the number of parts rather than requiring `--n_parts`.
- Improve validation and error checking.
- Stop using the per-file type field (f16) entirely in favor of just relying on the per-tensor type/size fields. This has no immediate benefit, but makes it easier to experiment with different formats, and should make it easier to support the new GPTQ-for-LLaMa models in the future (I have some work in progress on that front).
- Support VirtualLock on Windows (using the same `--mlock` option as on Unix).
- Indicate loading progress when using mmap + mlock. (Which led me to the interesting observation that on my Linux machine, with a warm file cache, mlock actually takes some time, whereas mmap without mlock starts almost instantly...) To help implement this, move mlock support from ggml to the loading code.
- madvise/PrefetchVirtualMemory support (based on #740).
- Switch from ifstream to the `fopen` family of functions to avoid unnecessary copying and, when mmap is enabled, allow reusing the same file descriptor for both metadata reads and mmap (whereas the existing implementation opens the file a second time to mmap).
- Quantization now produces a single-file output even with multi-file inputs (not really a feature as much as "it was easier this way").

Implementation notes: I tried to factor the code into more discrete pieces than before. Regarding code style: I tried to follow the code style, but I'm naughty and used a few advanced C++ features repeatedly:
- Destructors, to make it easier to ensure everything gets cleaned up.
- Exceptions. I don't even usually use exceptions when writing C++, and I can remove them if desired... but here they make the loading code much more succinct while still properly handling a variety of errors, ranging from API calls failing to integer overflow and allocation failure. The exceptions are converted to error codes at the API boundary.

Co-authored-by: Pavol Rusnak <pavol@rusnak.io> (for the bit I copied from #740)
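For orientation, a minimal sketch of the mmap + mlock + madvise path described above, in plain POSIX calls (VirtualLock and PrefetchVirtualMemory are the Windows analogues). The function and its error handling are illustrative, not the actual loader:

    // Sketch of mmap-based model loading with optional mlock, POSIX only.
    // Names and structure are illustrative, not the llama.cpp implementation.
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    void * map_model(const char * path, size_t * out_size, bool use_mlock) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return nullptr; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); close(fd); return nullptr; }

        void * addr = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        close(fd); // the mapping keeps the file alive; the fd is no longer needed
        if (addr == MAP_FAILED) { perror("mmap"); return nullptr; }

        // hint the kernel to read ahead (PrefetchVirtualMemory on Windows)
        madvise(addr, st.st_size, MADV_WILLNEED);

        if (use_mlock && mlock(addr, st.st_size) != 0) {
            perror("mlock"); // non-fatal: pages may simply be evicted later
        }

        *out_size = (size_t) st.st_size;
        return addr;
    }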
2023-04-08  fix for windows utf-8 input (#840)  [Tomáš Pazdiora]
Use UTF-16 as input on Windows, since UTF-8 does not work and reads multibyte characters as zeros.
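A minimal sketch of this approach, assuming a Win32 console: read wide characters with ReadConsoleW, then convert to UTF-8 with WideCharToMultiByte before handing the bytes to the tokenizer. Buffer size and error handling are simplified:

    // Sketch: read a console line as UTF-16 and convert it to UTF-8.
    #include <windows.h>
    #include <string>

    std::string read_console_utf8() {
        HANDLE hin = GetStdHandle(STD_INPUT_HANDLE);
        wchar_t wbuf[4096];
        DWORD nread = 0;
        if (!ReadConsoleW(hin, wbuf, 4096, &nread, nullptr) || nread == 0) {
            return "";
        }
        // first call sizes the output, second call performs the conversion
        int n = WideCharToMultiByte(CP_UTF8, 0, wbuf, (int) nread,
                                    nullptr, 0, nullptr, nullptr);
        std::string out(n, '\0');
        WideCharToMultiByte(CP_UTF8, 0, wbuf, (int) nread,
                            &out[0], n, nullptr, nullptr);
        return out;
    }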
2023-04-08  cmake should link openblas properly with -lopenblas, as is done in the makefile (#839)  [eiery]
2023-04-08  Add new binaries to flake.nix (#847)  [lon]
2023-04-08  Add quantize-stats command for testing quantization (#728)  [unbounded]
A command that calculates statistics over the errors introduced by quantization, such as mean square error, max error, and percentile errors for layer weights. It should be useful for testing quantization improvements. Exposes some internal state from ggml and llama for testing.
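A sketch of the kind of round-trip error statistics such a command computes; the round-tripped row would come from the real ggml quantize/dequantize routines, which are outside this illustration:

    // Sketch: compare a weight row against its quantize/dequantize round trip.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct error_stats { double mse; double max_err; };

    error_stats measure_row(const std::vector<float> & src,
                            const std::vector<float> & roundtrip) {
        double sum_sq = 0.0, max_err = 0.0;
        for (size_t i = 0; i < src.size(); i++) {
            double err = std::fabs((double) src[i] - (double) roundtrip[i]);
            sum_sq += err * err;
            max_err = std::max(max_err, err);
        }
        // percentile errors: collect |err| into a vector, sort, index at p*n
        return { sum_sq / src.size(), max_err };
    }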
2023-04-07  make : add libllama.so target for llama-cpp-python (#797)  [bhubbb]
I was able to get llama-cpp-python working, but only when I built libllama.so with make.
2023-04-07  zig : don't link examples/common.cpp for non-example (#814)  [iacore]
2023-04-07  llama : always sort logits before nucleus sampling (#812)  [Ivan Stepanov]
* Always sort logits before nucleus sampling
* Remove second normalization
  - fix windows build
  - remove normalization, since std::discrete_distribution does not require it
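For context, a sketch of nucleus (top-p) sampling matching the two points above: sort by logit, softmax, keep the smallest prefix whose cumulative probability reaches top_p, and let std::discrete_distribution normalize the kept weights itself. Illustrative, not the verbatim patch:

    // Sketch of nucleus (top-p) sampling with an unconditional sort.
    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    int sample_top_p(const std::vector<float> & logits, float top_p,
                     std::mt19937 & rng) {
        std::vector<int> idx(logits.size());
        for (size_t i = 0; i < idx.size(); i++) idx[i] = (int) i;
        std::sort(idx.begin(), idx.end(),
                  [&](int a, int b) { return logits[a] > logits[b]; });

        // softmax over the sorted order (shift by the max for stability)
        std::vector<double> p(idx.size());
        double maxl = logits[idx[0]], sum = 0.0;
        for (size_t i = 0; i < idx.size(); i++) {
            p[i] = std::exp((double) logits[idx[i]] - maxl);
            sum += p[i];
        }

        // keep the smallest prefix whose cumulative probability reaches top_p
        size_t keep = idx.size();
        double cum = 0.0;
        for (size_t i = 0; i < idx.size(); i++) {
            cum += p[i] / sum;
            if (cum >= top_p) { keep = i + 1; break; }
        }

        // std::discrete_distribution normalizes internally, so the kept
        // weights can be passed as-is (why the second normalization went away)
        std::discrete_distribution<int> dist(p.begin(), p.begin() + keep);
        return idx[dist(rng)];
    }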
2023-04-06  Do not crash when it has nothing to say. (#796)  [Sergey Alirzaev]
Otherwise this was observed in interactive mode:

    /usr/lib/gcc/x86_64-pc-linux-gnu/12/include/g++-v12/bits/stl_vector.h:1230: reference std::vector<int>::back() [_Tp = int, _Alloc = std::allocator<int>]: Assertion '!this->empty()' failed.
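The fix amounts to an emptiness guard before calling back(); a tiny illustration (the vector and token names here are hypothetical):

    #include <vector>

    int main() {
        std::vector<int> embd;    // illustrative: the last sampled tokens
        const int eos_token = 2;  // illustrative end-of-stream id
        // guard: back() on an empty vector trips the assertion shown above
        if (!embd.empty() && embd.back() == eos_token) {
            // handle end of generation; never reached while embd is empty
        }
        return 0;
    }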
2023-04-06  Make docker instructions more explicit (#785)  [Pavol Rusnak]
2023-04-05  ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781)  [Georgi Gerganov]
2023-04-05  ggml, llama : avoid heavy V transpose + improvements (#775)  [Georgi Gerganov]
ggml :
- added ggml_view_3d()
- ggml_view_tensor() now inherits the stride too (stride math sketched below)
- reimplement ggml_cpy() to account for dst stride
- no longer require tensor->data to be memory aligned

llama :
- compute RoPE on 32-bit tensors (should be more accurate)
- store RoPE-ed K in the KV cache
- store transposed V in the KV cache (significant speed-up)
- avoid unnecessary Q copy
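The view changes above lean on stride bookkeeping: an element's byte offset is the dot product of its indices with per-dimension byte strides, so a view or transpose only permutes or rescales strides and never moves data. A generic illustration, not the ggml structs themselves:

    // Generic strided-tensor offset math (illustrative, not the ggml structs).
    // A transpose or 3-D view just permutes/rescales nb[*]; data never moves.
    #include <cstddef>
    #include <cstdint>

    struct view3d {
        uint8_t * data;
        size_t    nb[3];  // byte stride per dimension
    };

    float get(const view3d & t, size_t i0, size_t i1, size_t i2) {
        return *(const float *)(t.data + i0*t.nb[0] + i1*t.nb[1] + i2*t.nb[2]);
    }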
2023-04-05  Update README.md  [Georgi Gerganov]
2023-04-05  llama : define non-positive top_k; top_k range check (#779)  [Ivan Stepanov]
* Define non-positive top_k; top_k range check
* minor : brackets

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
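A sketch of one common convention for "defined" non-positive top_k: treat it as "no truncation" and clamp the upper bound to the vocabulary size. This is an assumption about the intent, not the verbatim patch:

    #include <algorithm>

    // Sketch: non-positive top_k means "keep everything"; the range check
    // caps top_k at the vocabulary size (assumed convention).
    int effective_top_k(int top_k, int n_vocab) {
        if (top_k <= 0) {
            return n_vocab;               // non-positive: disable truncation
        }
        return std::min(top_k, n_vocab);  // never sample beyond the vocab
    }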
2023-04-05  miku.sh : add executable bit (#780)  [at8u]
2023-04-05  media : add logos and banners  [Georgi Gerganov]
2023-04-05  readme : change logo + add bindings + add uis + add wiki  [Georgi Gerganov]
2023-04-05  zig : add build.zig (#773)  [iacore]
Co-authored-by: Locria Cyber <74560659+locriacyber@users.noreply.github.com>
2023-04-05  make : missing host optimizations in CXXFLAGS (#763)  [Ivan Stepanov]
2023-04-05  readme : update with CMake and windows example (#748)  [Adithya Balaji]
* README: Update with CMake and windows example
* README: update with code-review for cmake build
2023-04-05  examples : add Miku.sh (#724)  [at8u]
* Add Miku.sh to examples
* Add missing line to prompt in Miku.sh
* Add --keep param to Miku.sh
* Remove '[end_of_conversation]' line from Miku.sh; it is no longer necessary.
2023-04-05  Add Accelerate/BLAS when using Swift (#765)  [Andrew Duffy]
2023-04-03  Windows: reactivate sigint handler after each Ctrl-C (#736)  [mgroeber9110]
2023-04-03  10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654)  [SebastianApel]
* Performance improvement of AVX2 code
* Fixed problem with MSVC compiler
* Reviewer comments: removed double semicolon, deleted empty line 1962
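For reference, a scalar version of what a q4_0 dot product computes; the AVX2 work above vectorizes this loop. The block layout (32 weights, one fp32 scale, two 4-bit quants per byte stored with an offset of 8) is illustrative of the q4_0 scheme of this period, not the exact ggml definition:

    // Scalar reference for a q4_0 dot product (illustrative block layout).
    #include <cstdint>

    #define QK 32

    struct block_q4_0 {
        float   d;          // per-block scale
        uint8_t qs[QK / 2]; // 4-bit quants, two per byte, offset by 8
    };

    float vec_dot_q4_0_ref(int n, const block_q4_0 * x, const block_q4_0 * y) {
        float sumf = 0.0f;
        for (int i = 0; i < n / QK; i++) {
            int sumi = 0; // accumulate in int, apply the scales once per block
            for (int j = 0; j < QK / 2; j++) {
                const int x0 = (x[i].qs[j] & 0x0F) - 8, x1 = (x[i].qs[j] >> 4) - 8;
                const int y0 = (y[i].qs[j] & 0x0F) - 8, y1 = (y[i].qs[j] >> 4) - 8;
                sumi += x0*y0 + x1*y1;
            }
            sumf += x[i].d * y[i].d * (float) sumi;
        }
        return sumf;
    }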
2023-04-03  Define non-positive temperature behavior (#720)  [Ivan Stepanov]
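A sketch of one sensible definition, and the usual one: a non-positive temperature degenerates to greedy argmax rather than dividing logits by zero or a negative scale. This is an assumed convention, not a quote of the patch:

    #include <algorithm>
    #include <vector>

    // Returns the chosen token for temp <= 0 (greedy); otherwise scales the
    // logits in place and returns -1 so the caller runs the stochastic sampler.
    int apply_temperature(std::vector<float> & logits, float temp) {
        if (temp <= 0.0f) {
            return (int) (std::max_element(logits.begin(), logits.end())
                          - logits.begin());
        }
        for (float & l : logits) {
            l /= temp;
        }
        return -1;
    }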
2023-04-03  Remove torch GPU dependencies from the Docker.full image (#665)  [bsilvereagle]
By using `pip install torch --index-url https://download.pytorch.org/whl/cpu` instead of `pip install torch`, we can specify that we want the CPU-only version of PyTorch without any GPU dependencies. This reduces the size of the Docker image from 7.32 GB to 1.62 GB.
2023-04-02  Add a missing step to the gpt4all instructions (#690)  [Thatcher Chamberlin]
`migrate-ggml-2023-03-30-pr613.py` is needed to get gpt4all running.
2023-04-02  Added api for getting/setting the kv_cache (#685)  [Christian Falch]
The api provides access methods for retrieving the current memory buffer of the kv_cache and its token count. It also contains a method for setting the kv_cache from a memory buffer. This makes it possible to load/save history; maybe support a --cache-prompt parameter as well?

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
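An illustrative save/restore flow for a KV cache exposed as a raw buffer plus a token count; the helper names and struct below are hypothetical stand-ins, not the actual accessors added by this commit:

    // Hypothetical sketch: snapshot and restore a KV cache buffer.
    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct kv_snapshot {
        std::vector<uint8_t> buf; // raw cache memory
        int n_tokens;             // tokens the cache currently holds
    };

    kv_snapshot save_kv(const uint8_t * cache, size_t size, int n_tokens) {
        kv_snapshot s;
        s.buf.assign(cache, cache + size);
        s.n_tokens = n_tokens;
        return s; // persisting this to disk would implement --cache-prompt
    }

    void restore_kv(uint8_t * cache, size_t size, const kv_snapshot & s) {
        if (s.buf.size() == size) { // must match the current model/context
            std::memcpy(cache, s.buf.data(), size);
        }
    }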
2023-04-02  ggml : change ne to int64_t (#626)  [Marian Cepok]
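The motivation, sketched: element counts are products over dimensions, and a 32-bit count overflows once a tensor reaches 2^31 elements:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // a 4-D shape with exactly 2^31 elements wraps a 32-bit counter
        const int64_t ne[4] = { 8192, 8192, 32, 1 };
        int64_t n = 1;
        for (int i = 0; i < 4; i++) n *= ne[i];
        printf("int64_t count: %lld (a 32-bit int would wrap to %d)\n",
               (long long) n, (int) n);
        return 0;
    }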
2023-04-02  examples : add gpt4all script (#658)  [Leonardo Neumann]
2023-04-02  llama : do not allocate KV cache for "vocab_only == true" (#682)  [Stephan Walter]
Fixes sanitizer CI
2023-04-02  make : use -march=native -mtune=native on x86 (#609)  [Fabian]
2023-04-02  fix default params for examples/main (#697)  [Murilo Santana]
2023-04-01  py: huggingface -> Hugging Face (#686)  [Ikko Eltociear Ashimine]
2023-04-01  readme: replace termux links with homepage, play store is deprecated (#680)  [rimoliga]
2023-04-01  Show error message when -f fails  [Slaren]
2023-03-31  Enable -std= for cmake builds, fix warnings (#598)  [Stephan Walter]
2023-03-31  Optimize AVX2 ggml_vec_dot_q4_0 (#642)  [slaren]
2023-03-31  Add AVX acceleration (#617)  [perserk]
* ggml : add AVX quantize_row_q4_0()
* ggml : add AVX ggml_vec_dot_q4_0()
* ggml : refactor AVX part of ggml_vec_dot_q4_0()

https://github.com/ggerganov/llama.cpp/pull/617#issuecomment-1489985645
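For reference, a scalar version of q4_0-style row quantization per 32-wide block, which the AVX quantize_row_q4_0() above vectorizes. A simplified variant for illustration, matching the dot-product sketch earlier, not the exact ggml code:

    // Scalar reference for q4_0-style quantization of one 32-element block:
    // symmetric scale from the absolute max, 4-bit values stored offset by 8.
    #include <cmath>
    #include <cstdint>

    #define QK 32

    struct block_q4_0 {
        float   d;
        uint8_t qs[QK / 2];
    };

    void quantize_block_q4_0_ref(const float * x, block_q4_0 * out) {
        float amax = 0.0f;
        for (int j = 0; j < QK; j++) amax = std::fmax(amax, std::fabs(x[j]));

        const float d  = amax / 7.0f; // maps [-amax, amax] onto [-7, 7]
        const float id = d != 0.0f ? 1.0f / d : 0.0f;
        out->d = d;

        for (int j = 0; j < QK / 2; j++) {
            const uint8_t v0 = (uint8_t)(std::round(x[2*j + 0] * id) + 8);
            const uint8_t v1 = (uint8_t)(std::round(x[2*j + 1] * id) + 8);
            out->qs[j] = v0 | (v1 << 4); // two quants per byte
        }
    }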
2023-03-31  py : cleanup the code  [Pavol Rusnak]
- use f-strings where possible
- drop first param of encode/decode functions since "utf-8" is the default
2023-03-31  drop quantize.py (now that models are using a single file)  [Pavol Rusnak]
2023-03-30  readme : update supported models  [Georgi Gerganov]
2023-03-30  Introduce GGML migration tool for new file format  [Justine Tunney]
If you deleted your old Meta LLaMA .pth files, the migrate-ggml-2023-03-30-pr613.py script will allow you to convert your old ggml files into the new mmap()'able format. See #613.