2023-04-14  py : cleanup dependencies (#962)  (Pavol Rusnak)
after #545 we do not need torch, tqdm and requests in the dependencies
2023-04-14  py : fix flake8 and isort nitpicks (#960)  (Pavol Rusnak)
2023-04-14  ggml : minor  (Georgi Gerganov)
2023-04-14  ggml : always allocate buffers with size multiple of GGML_MEM_ALIGN  (Georgi Gerganov)
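The padding described in the entry above amounts to rounding a requested size up to the next multiple of the alignment. A minimal sketch, assuming GGML_MEM_ALIGN is 16 (the helper name is illustrative, not ggml's):

```c
#include <assert.h>
#include <stddef.h>

// Assumed value; the real constant lives in ggml.c.
#define GGML_MEM_ALIGN 16

// Round `size` up to the next multiple of GGML_MEM_ALIGN.
// Works because GGML_MEM_ALIGN is a power of two.
static size_t ggml_pad_size(size_t size) {
    return (size + GGML_MEM_ALIGN - 1) & ~((size_t)GGML_MEM_ALIGN - 1);
}
```

Keeping every buffer size a multiple of the alignment also satisfies C11's `aligned_alloc`, which requires exactly that.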
2023-04-14  py : new conversion script (#545)  (comex)
Current status: Working, except for the latest GPTQ-for-LLaMa format that includes `g_idx`. This turns out to require changes to GGML, so for now it only works if you use the `--outtype` option to dequantize it back to f16 (which is pointless except for debugging). I also included some cleanup for the C++ code.

This script is meant to replace all the existing conversion scripts (including the ones that convert from older GGML formats), while also adding support for some new formats. Specifically, I've tested with:

- [x] `LLaMA` (original)
- [x] `llama-65b-4bit`
- [x] `alpaca-native`
- [x] `alpaca-native-4bit`
- [x] LLaMA converted to 'transformers' format using `convert_llama_weights_to_hf.py`
- [x] `alpaca-native` quantized with `--true-sequential --act-order --groupsize 128` (dequantized only)
- [x] same as above plus `--save_safetensors`
- [x] GPT4All
- [x] stock unversioned ggml
- [x] ggmh

There's enough overlap in the logic needed to handle these different cases that it seemed best to move to a single script. I haven't tried this with Alpaca-LoRA because I don't know where to find it.

Useful features:

- Uses multiple threads for a speedup in some cases (though the Python GIL limits the gain, and sometimes it's disk-bound anyway).
- Combines split models into a single file (both the intra-tensor split of the original and the inter-tensor split of 'transformers' format files). Single files are more convenient to work with and more friendly to future changes to use memory mapping on the C++ side. To accomplish this without increasing memory requirements, it has some custom loading code which avoids loading whole input files into memory at once.
- Because of the custom loading code, it no longer depends on PyTorch, which might make installing dependencies slightly easier or faster... although it still depends on NumPy and sentencepiece, so I don't know if there's any meaningful difference. In any case, I also added a requirements.txt file to lock the dependency versions in case of any future breaking changes.
- Type annotations checked with mypy.
- Some attempts to be extra user-friendly:
  - The script tries to be forgiving with arguments, e.g. you can specify either the model file itself or the directory containing it.
  - The script doesn't depend on config.json / params.json, just in case the user downloaded files individually and doesn't have those handy. But you still need tokenizer.model and, for Alpaca, added_tokens.json.
  - The script tries to give a helpful error message if added_tokens.json is missing.
2023-04-14  ggml : fix q4_1 dot product types  (Georgi Gerganov)
2023-04-14  ggml : optimize rope function to avoid calling powf in the tight loop (#807)  (Howard Su)
2023-04-14  perplexity : add support for batch size to `--perplexity` (#407)  (Gary Linscott)
* Add support to batch size for perplexity
* Revert "Fix memory allocation issues and seg faults" (reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df)
* update from merge
* Remove perplexity from main
* updates
* Update batch size for efficiency
2023-04-13  common : remove unnecessary includes (#947)  (CRD716)
2023-04-13  ggml : add GGML_DEFAULT_N_THREADS  (Georgi Gerganov)
2023-04-13  ggml : speed-up ggml_vec_dot_q4_1() ARM_NEON + 32-bit ARM support (#900)  (Georgi Gerganov)
* ggml : speed-up q4_1 ARM_NEON by ~5%
* ggml : implement vaddvq when missing
* ggml : implement vminvq and vmaxvq when missing
* ggml : implement vzip when missing
* ggml : fix comment
* ggml : try to use correct ifdef
2023-04-13  llama : merge llama_internal.h into llama.h  (Georgi Gerganov)
Hide it behind an #ifdef
2023-04-13  gitignore : benchmark  (Georgi Gerganov)
2023-04-13  ggml : optimize non-SIMD Q4_0 vector dot product (#703)  (Stephan Walter)
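The scalar Q4_0 dot product can be made much faster by accumulating the 4-bit products in an integer and applying the two block scales once per block instead of per element. A sketch of that shape, where the block layout (one float scale plus 16 bytes packing 32 quants stored offset by 8) is my assumption about the format of that era, not a quote of the code:

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

#define QK 32  // elements per quantization block

// Assumed Q4_0 block: scale d, then 32 unsigned nibbles meaning (q - 8).
typedef struct {
    float   d;
    uint8_t qs[QK/2];
} block_q4_0;

// Scalar reference dot product over `nb` blocks: integer-accumulate the
// nibble products, then scale once per block.
static float vec_dot_q4_0_ref(int nb, const block_q4_0 * x, const block_q4_0 * y) {
    float sum = 0.0f;
    for (int i = 0; i < nb; i++) {
        int isum = 0;
        for (int j = 0; j < QK/2; j++) {
            const int x0 = (x[i].qs[j] & 0x0F) - 8;
            const int x1 = (x[i].qs[j] >>   4) - 8;
            const int y0 = (y[i].qs[j] & 0x0F) - 8;
            const int y1 = (y[i].qs[j] >>   4) - 8;
            isum += x0*y0 + x1*y1;
        }
        sum += x[i].d * y[i].d * (float)isum;
    }
    return sum;
}
```

Deferring the float multiply to once per block both removes 31 multiplications and keeps the inner loop in pure integer arithmetic, which compilers auto-vectorize more readily.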
2023-04-13  ggml : introduce GGML_ALIGNED_MALLOC/GGML_ALIGNED_FREE macros (#884)  (Pavol Rusnak)
which allows us to use aligned_alloc or _aligned_malloc functions
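The dispatch idea can be sketched as follows; the macro names mirror the commit, but these exact definitions are assumptions:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define GGML_MEM_ALIGN 16

#ifdef _WIN32
// MSVC/MinGW path: _aligned_malloc pairs with _aligned_free, not free().
#define GGML_ALIGNED_MALLOC(size) _aligned_malloc(size, GGML_MEM_ALIGN)
#define GGML_ALIGNED_FREE(ptr)    _aligned_free(ptr)
#else
// C11 aligned_alloc requires size to be a multiple of the alignment,
// which is one reason buffer sizes are padded to GGML_MEM_ALIGN.
#define GGML_ALIGNED_MALLOC(size) aligned_alloc(GGML_MEM_ALIGN, size)
#define GGML_ALIGNED_FREE(ptr)    free(ptr)
#endif
```

Having a matching FREE macro matters: on Windows, memory from `_aligned_malloc` must not be passed to plain `free`.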
2023-04-13  fix whitespace (#944)  (CRD716)
2023-04-13  readme : remove python 3.10 warning (#929)  (CRD716)
2023-04-13  readme : llama node binding (#911)  (Genkagaku.GPT)
* chore: add nodejs binding
2023-04-13  flake.nix: add all binaries from bin (#848)  (Pavol Rusnak)
2023-04-13  zig : update build.zig (#872)  (Judd)
* update
* update readme
* minimize the changes

Co-authored-by: zjli2019 <zhengji.li@ingchips.com>
2023-04-13  ggml : update cblas_sgemm columns var to be more reasonable (#838)  (Vladimir)
2023-04-13  examples : add -n to alpaca and gpt4all scripts (#706)  (niansa/tuxifan)
2023-04-13  cmake : add explicit F16C option (x86) (#576)  (anzz1)
Fixes building for x86 processors missing the F16C featureset. MSVC is not included, since in MSVC F16C is implied by AVX2/AVX512.
2023-04-13  benchmark : add tool for timing q4_0 matrix multiplication (#653)  (SebastianApel)
* Initial version of q4_0 matrix multiplication benchmark
* Bugfix: added dependency on ggml.o to benchmark
* Reviewer requests: added parameter for threads, switched to ggml_time_us()
* Reviewer input: removed rtsc, use epsilon for check
* Review comment: removed set_locale
* Feature: param for number of iterations; bugfix for use of parameter threads
* Reviewer suggestion: moved to examples
* Reviewer feedback: updated clean: and benchmark: sections

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-04-13  do not force the prompt file to end with a new line (#908)  (Pavol Rusnak)
2023-04-12  Don't crash on ftype (formerly f16) == 4 (#917)  (Stephan Walter)
2023-04-12  readme : change "GPU support" link to discussion  (Georgi Gerganov)
2023-04-12  readme : update hot topics with link to "GPU support" issue  (Georgi Gerganov)
2023-04-12  readme: link to sha256sums file (#902)  (Nicolai Weitkemper)
This is to emphasize that these do not need to be obtained from elsewhere.
2023-04-11  Fix whitespace, add .editorconfig, add GitHub workflow (#883)  (Pavol Rusnak)
2023-04-11  Add enum llama_ftype, sync ggml_type to model files (#709)  (Stephan Walter)
2023-04-11  Windows fixes (#890)  (comex)
Mostly for msys2 and mingw64 builds, which are different from each other and different from standard Visual Studio builds. Isn't Windows fun?

- Define _GNU_SOURCE in more files (it's already used in ggml.c for Linux's sake).
- Don't use PrefetchVirtualMemory if not building for Windows 8 or later (mingw64 doesn't by default). But warn the user about this situation since it's probably not intended.
- Check for NOMINMAX already being defined, which it is on mingw64.
- Actually use the `increment` variable (bug in my `pizza` PR).
- Suppress unused variable warnings in the fake pthread_create and pthread_join implementations for Windows.
- (not Windows-related) Remove mention of `asprintf` from comment; `asprintf` is no longer used.

Fixes #871.
2023-04-10  Add BAIR's Koala to supported models (#877)  (qouoq)
2023-04-10  ggml : fix WASM build  (Georgi Gerganov)
2023-04-10  ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst  (Georgi Gerganov)
2023-04-10  ggml : remove trailing whitespaces  (Georgi Gerganov)
2023-04-10  Simplify to include lower-case windows.h always, fix compile on mingw32 (#747)  (Marco Matthies)
2023-04-10  ggml : fix quantize_row_q4_1() ARM_NEON (close #876)  (Georgi Gerganov)
2023-04-10  Print model version.  (comex)
Also improve model type printing, and fix indentation of an unrelated switch statement.
2023-04-10  Rewrite loading code to try to satisfy everyone:  (comex)
- Support all three formats (ggml, ggmf, ggjt). (However, I didn't include the hack needed to support GPT4All files without conversion. Those can still be used after converting them with convert.py from my other PR.)
- Support both mmap and read (mmap is used by default, but can be disabled with `--no-mmap`, and is automatically disabled for pre-ggjt files or on platforms where mmap is not supported).
- Support multi-file models like before, but automatically determine the number of parts rather than requiring `--n_parts`.
- Improve validation and error checking.
- Stop using the per-file type field (f16) entirely in favor of just relying on the per-tensor type/size fields. This has no immediate benefit, but makes it easier to experiment with different formats, and should make it easier to support the new GPTQ-for-LLaMa models in the future (I have some work in progress on that front).
- Support VirtualLock on Windows (using the same `--mlock` option as on Unix).
- Indicate loading progress when using mmap + mlock. (Which led me to the interesting observation that on my Linux machine, with a warm file cache, mlock actually takes some time, whereas mmap without mlock starts almost instantly...)
  - To help implement this, move mlock support from ggml to the loading code.
- madvise/PrefetchVirtualMemory support (based on #740)
- Switch from ifstream to the `fopen` family of functions to avoid unnecessary copying and, when mmap is enabled, allow reusing the same file descriptor for both metadata reads and mmap (whereas the existing implementation opens the file a second time to mmap).
- Quantization now produces a single-file output even with multi-file inputs (not really a feature as much as 'it was easier this way').

Implementation notes: I tried to factor the code into more discrete pieces than before.

Regarding code style: I tried to follow the code style, but I'm naughty and used a few advanced C++ features repeatedly:

- Destructors to make it easier to ensure everything gets cleaned up.
- Exceptions. I don't even usually use exceptions when writing C++, and I can remove them if desired... but here they make the loading code much more succinct while still properly handling a variety of errors, ranging from API calls failing to integer overflow and allocation failure. The exceptions are converted to error codes at the API boundary.

Co-authored-by: Pavol Rusnak <pavol@rusnak.io> (for the bit I copied from #740)
2023-04-08  fix for windows utf-8 input (#840)  (Tomáš Pazdiora)
Use UTF-16 as input on Windows, since UTF-8 does not work and reads multibyte characters as zeros
2023-04-08  cmake should link openblas properly with -lopenblas like how it's done in the makefile (#839)  (eiery)
2023-04-08  Add new binaries to flake.nix (#847)  (lon)
2023-04-08  Add quantize-stats command for testing quantization (#728)  (unbounded)
Command that calculates some statistics over the errors introduced by quantization, like mean square error, max error, and some percentile errors for layer weights. Should be useful for testing quantization improvements. Exposes some internal state from ggml and llama for testing.
2023-04-07  make : add libllama.so target for llama-cpp-python (#797)  (bhubbb)
I was able to get llama-cpp-python working but only when I build libllama.so with make.
2023-04-07  zig : don't link examples/common.cpp for non-example (#814)  (iacore)
2023-04-07  llama : always sort logits before nucleus sampling (#812)  (Ivan Stepanov)
* Always sort logits before nucleus sampling
* Remove second normalization
  - fix windows build
  - remove normalization since std::discrete_distribution does not require it
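Sorting matters for nucleus (top-p) sampling because the method keeps the smallest set of highest-probability tokens whose cumulative probability reaches p; without a descending sort the cumulative cutoff selects the wrong set. A minimal sketch of the cut (illustrative, not the code from the PR):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { int id; float p; } token_prob;

// Comparator for descending probability order.
static int cmp_desc(const void * a, const void * b) {
    const float pa = ((const token_prob *)a)->p;
    const float pb = ((const token_prob *)b)->p;
    return (pa < pb) - (pa > pb);
}

// Sorts `toks` in place and returns how many tokens survive the top-p cut:
// the smallest prefix whose cumulative probability reaches top_p.
static int top_p_cut(token_prob * toks, int n, float top_p) {
    qsort(toks, n, sizeof(token_prob), cmp_desc);
    float cum = 0.0f;
    for (int i = 0; i < n; i++) {
        cum += toks[i].p;
        if (cum >= top_p) return i + 1;
    }
    return n;
}
```

The surviving probabilities need not be renormalized before sampling if the final draw uses something like `std::discrete_distribution`, which normalizes its weights internally; that is exactly the redundant second normalization this commit removes.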
2023-04-06  Do not crash when it has nothing to say. (#796)  (Sergey Alirzaev)
Otherwise, observing this in interactive mode:

    /usr/lib/gcc/x86_64-pc-linux-gnu/12/include/g++-v12/bits/stl_vector.h:1230: reference std::vector<int>::back() [_Tp = int, _Alloc = std::allocator<int>]: Assertion '!this->empty()' failed.
2023-04-06  Make docker instructions more explicit (#785)  (Pavol Rusnak)
2023-04-05  ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781)  (Georgi Gerganov)