path: root/ggml.c
Age | Commit message | Author
2023-04-15 | ggml : add Q8_0 quantization for intermediate results (#951) | Georgi Gerganov
* ggml : add Q8_0 quantization for intermediate results
* quantize-stats : fix test + add it to Makefile default
* Q8: use int8_t, AVX/AVX2 optimizations
* ggml : fix quantize_row_q8_0() ARM_NEON rounding
* minor : updates after rebase to latest master
* quantize-stats : delete obsolete strings
* ggml : fix q4_1 dot func
Co-authored-by: Stephan Walter <stephan@walter.name>
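For readers unfamiliar with the format, a minimal sketch of what a Q8_0 block and its reference (non-SIMD) row quantizer could look like; the struct layout, the QK value, and the function name are illustrative assumptions based on the commit description, not the exact code added here.

    #include <math.h>
    #include <stdint.h>

    #define QK 32  // assumed block size

    // hypothetical Q8_0 block: one scale plus QK signed 8-bit quants
    typedef struct {
        float  d;       // scale
        int8_t qs[QK];  // quantized values
    } block_q8_0;

    // reference (scalar) row quantizer: scale each block by its absolute max
    static void quantize_row_q8_0_ref(const float * x, block_q8_0 * y, int k) {
        const int nb = k / QK;
        for (int i = 0; i < nb; i++) {
            float amax = 0.0f; // absolute max within the block
            for (int j = 0; j < QK; j++) {
                const float v = fabsf(x[i*QK + j]);
                if (v > amax) amax = v;
            }
            const float d  = amax / 127.0f;
            const float id = d != 0.0f ? 1.0f/d : 0.0f;
            y[i].d = d;
            for (int j = 0; j < QK; j++) {
                y[i].qs[j] = (int8_t) roundf(x[i*QK + j] * id);
            }
        }
    }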
2023-04-15 | ggml : use posix_memalign on non-Windows env | Georgi Gerganov
2023-04-14 | Expose type name from ggml (#970) | Pavol Rusnak
Avoid duplication of type names in utils
Co-authored-by: Håkon H. Hitland <haakon@likedan.net>
2023-04-14 | ggml : add unary and binary map operations (#874) | Kerfuffle
* GGML map ops proof of concept.
* Various cleanups.
  Add handling for task setting.
  Add handling for ggml_compute_backward.
  Rename functions to ggml_map_unary_f32 and ggml_map_binary_f32.
  Fix compiler warnings related to casting function pointers and `void *`.
  Reorder functions and definitions based on the GGML op number.
  Use typedefs for map op function pointer types.
* Fix position of map ops cases in ggml_compute_forward
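A rough illustration of how such map ops are meant to be used: the caller supplies a plain C function that the graph applies element-wise. The typedef shape and the ggml_map_unary_f32 call site below are assumptions inferred from the commit message.

    // assumed typedef for a unary f32 map op: element count, dst, src
    // typedef void (*ggml_unary_op_f32_t)(const int, float *, const float *);

    // example user kernel: clamp every element to [0, 1]
    static void clamp01_f32(const int n, float * dst, const float * src) {
        for (int i = 0; i < n; i++) {
            const float v = src[i];
            dst[i] = v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);
        }
    }

    // usage sketch (hypothetical call site):
    //   struct ggml_tensor * y = ggml_map_unary_f32(ctx, x, clamp01_f32);
    // builds a graph node whose forward pass applies clamp01_f32 to x;
    // ggml_map_binary_f32 works the same way with two source tensors.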
2023-04-14 | ggml : minor | Georgi Gerganov
2023-04-14 | ggml : always allocate buffers with size multiple of GGML_MEM_ALIGN | Georgi Gerganov
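The usual idiom for this kind of rounding, as a small sketch; the helper name and the GGML_MEM_ALIGN value are assumptions.

    #include <stddef.h>

    #define GGML_MEM_ALIGN 16  // assumed value

    // round a requested size up to the next multiple of GGML_MEM_ALIGN
    static size_t ggml_aligned_size(size_t size) {  // hypothetical helper name
        return GGML_MEM_ALIGN*((size + GGML_MEM_ALIGN - 1)/GGML_MEM_ALIGN);
    }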
2023-04-14 | ggml : fix q4_1 dot product types | Georgi Gerganov
2023-04-14 | ggml : optimize rope function to avoid calling powf in the tight loop (#807) | Howard Su
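The standard way to remove powf from a RoPE-style loop is to build the power up incrementally; a sketch of that transformation under assumed names (this is not the actual ggml_rope code).

    // Naive form calls powf once per rotated pair:
    //   theta = p * powf(theta_scale, (float)(i/2));
    // The optimized form keeps a running product, so the tight loop only
    // multiplies instead of evaluating a transcendental function.
    static void rope_angles(float * theta_out, float p, float theta_scale, int n_dims) {
        float theta = p;
        for (int i = 0; i < n_dims; i += 2) {
            theta_out[i/2] = theta;  // angle for the (i, i+1) pair
            theta *= theta_scale;    // == p * powf(theta_scale, i/2 + 1)
        }
    }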
2023-04-13 | ggml : add GGML_DEFAULT_N_THREADS | Georgi Gerganov
2023-04-13 | ggml : speed-up ggml_vec_dot_q4_1() ARM_NEON + 32-bit ARM support (#900) | Georgi Gerganov
* ggml : speed-up q4_1 ARM_NEON by ~5%
* ggml : implement vaddvq when missing
* ggml : implement vminvq and vmaxvq when missing
* ggml : implement vzip when missing
* ggml : fix comment
* ggml : try to use correct ifdef
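A plausible shape for the missing-intrinsic fallbacks on 32-bit ARM, where the AArch64 horizontal reductions are unavailable; the commit's actual implementations may differ.

    #include <arm_neon.h>

    #if defined(__ARM_NEON) && !defined(__aarch64__)
    // horizontal sum of four lanes (AArch64 provides this as an intrinsic)
    static inline float vaddvq_f32(float32x4_t v) {
        return vgetq_lane_f32(v, 0) + vgetq_lane_f32(v, 1) +
               vgetq_lane_f32(v, 2) + vgetq_lane_f32(v, 3);
    }

    // horizontal max via one pairwise step plus a final pairwise fold
    static inline float vmaxvq_f32(float32x4_t v) {
        float32x2_t m = vpmax_f32(vget_low_f32(v), vget_high_f32(v));
        return vget_lane_f32(vpmax_f32(m, m), 0);
    }
    #endif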
2023-04-13 | ggml : optimize non-SIMD Q4_0 vector dot product (#703) | Stephan Walter
2023-04-13 | ggml : introduce GGML_ALIGNED_MALLOC/GGML_ALIGNED_FREE macros (#884) | Pavol Rusnak
which allows us to use aligned_alloc or _aligned_malloc functions
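One plausible expansion of such macros, assuming GGML_MEM_ALIGN is the alignment constant; the non-Windows branch could just as well use posix_memalign, which a later commit in this log switches to.

    #include <stdlib.h>

    #define GGML_MEM_ALIGN 16  // assumed value

    #if defined(_MSC_VER) || defined(__MINGW32__)
    #include <malloc.h>
    #define GGML_ALIGNED_MALLOC(size) _aligned_malloc(size, GGML_MEM_ALIGN)
    #define GGML_ALIGNED_FREE(ptr)    _aligned_free(ptr)
    #else
    // aligned_alloc requires the size to be a multiple of the alignment,
    // which ties in with the "size multiple of GGML_MEM_ALIGN" commit above
    #define GGML_ALIGNED_MALLOC(size) aligned_alloc(GGML_MEM_ALIGN, size)
    #define GGML_ALIGNED_FREE(ptr)    free(ptr)
    #endif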
2023-04-13 | ggml : update cblas_sgemm columns var to be more reasonable (#838) | Vladimir
2023-04-11 | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | Pavol Rusnak
2023-04-11 | Add enum llama_ftype, sync ggml_type to model files (#709) | Stephan Walter
2023-04-11 | Windows fixes (#890) | comex
Mostly for msys2 and mingw64 builds, which are different from each other and different from standard Visual Studio builds. Isn't Windows fun?
- Define _GNU_SOURCE in more files (it's already used in ggml.c for Linux's sake).
- Don't use PrefetchVirtualMemory if not building for Windows 8 or later (mingw64 doesn't by default). But warn the user about this situation since it's probably not intended.
- Check for NOMINMAX already being defined, which it is on mingw64.
- Actually use the `increment` variable (bug in my `pizza` PR).
- Suppress unused variable warnings in the fake pthread_create and pthread_join implementations for Windows.
- (not Windows-related) Remove mention of `asprintf` from comment; `asprintf` is no longer used.
Fixes #871.
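A sketch of the two guard patterns mentioned above (the NOMINMAX check and the Windows 8 gate for PrefetchVirtualMemory); the surrounding code is illustrative.

    #if defined(_WIN32)
    // guarding the define avoids a redefinition warning on mingw64,
    // where NOMINMAX is already defined
    #ifndef NOMINMAX
    #define NOMINMAX
    #endif
    #include <windows.h>

    // PrefetchVirtualMemory is only declared when targeting Windows 8 or
    // later (_WIN32_WINNT >= 0x0602), so calls to it are gated like this:
    #if defined(_WIN32_WINNT) && _WIN32_WINNT >= 0x0602
    // ... PrefetchVirtualMemory(GetCurrentProcess(), ...) may be called here ...
    #else
    // ... fall back, and warn the user that prefetching is unavailable ...
    #endif
    #endif // _WIN32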
2023-04-10 | ggml : fix WASM build | Georgi Gerganov
2023-04-10 | ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst | Georgi Gerganov
2023-04-10 | ggml : remove trailing whitespaces | Georgi Gerganov
2023-04-10 | Simplify to include lower-case windows.h always, fix compile on mingw32 (#747) | Marco Matthies
2023-04-10 | ggml : fix quantize_row_q4_1() ARM_NEON (close #876) | Georgi Gerganov
2023-04-10 | Rewrite loading code to try to satisfy everyone: | comex
- Support all three formats (ggml, ggmf, ggjt). (However, I didn't include the hack needed to support GPT4All files without conversion. Those can still be used after converting them with convert.py from my other PR.)
- Support both mmap and read (mmap is used by default, but can be disabled with `--no-mmap`, and is automatically disabled for pre-ggjt files or on platforms where mmap is not supported).
- Support multi-file models like before, but automatically determine the number of parts rather than requiring `--n_parts`.
- Improve validation and error checking.
- Stop using the per-file type field (f16) entirely in favor of just relying on the per-tensor type/size fields. This has no immediate benefit, but makes it easier to experiment with different formats, and should make it easier to support the new GPTQ-for-LLaMa models in the future (I have some work in progress on that front).
- Support VirtualLock on Windows (using the same `--mlock` option as on Unix).
- Indicate loading progress when using mmap + mlock. (Which led me to the interesting observation that on my Linux machine, with a warm file cache, mlock actually takes some time, whereas mmap without mlock starts almost instantly...)
- To help implement this, move mlock support from ggml to the loading code.
- madvise/PrefetchVirtualMemory support (based on #740).
- Switch from ifstream to the `fopen` family of functions to avoid unnecessary copying and, when mmap is enabled, allow reusing the same file descriptor for both metadata reads and mmap (whereas the existing implementation opens the file a second time to mmap).
- Quantization now produces a single-file output even with multi-file inputs (not really a feature as much as 'it was easier this way').
Implementation notes: I tried to factor the code into more discrete pieces than before. Regarding code style: I tried to follow the code style, but I'm naughty and used a few advanced C++ features repeatedly:
- Destructors to make it easier to ensure everything gets cleaned up.
- Exceptions. I don't even usually use exceptions when writing C++, and I can remove them if desired... but here they make the loading code much more succinct while still properly handling a variety of errors, ranging from API calls failing to integer overflow and allocation failure. (The exceptions are converted to error codes at the API boundary.)
Co-authored-by: Pavol Rusnak <pavol@rusnak.io> (for the bit I copied from #740)
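A minimal POSIX-only sketch of the mmap path described above, using a hypothetical helper name; the real loader additionally handles Windows, --mlock, progress reporting, and multi-part files.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    static void * map_model_file(const char * fname, size_t * size_out) {
        FILE * fp = fopen(fname, "rb");
        if (!fp) {
            return NULL;
        }

        // reuse the same descriptor for metadata reads and for the mapping
        int fd = fileno(fp);
        struct stat st;
        if (fstat(fd, &st) != 0) {
            fclose(fp);
            return NULL;
        }

        void * addr = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        fclose(fp); // the mapping remains valid after the stream is closed

        if (addr == MAP_FAILED) {
            return NULL;
        }

        madvise(addr, st.st_size, MADV_WILLNEED); // hint the kernel to prefetch
        *size_out = (size_t) st.st_size;
        return addr;
    }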
2023-04-08 | Add quantize-stats command for testing quantization (#728) | unbounded
Command that calculates some statistics over the errors introduced by quantization, like mean square error, max error and some percentile errors for layer weights. Should be useful for testing quantization improvements. Exposes some internal state from ggml and llama for testing
2023-04-05 | ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781) | Georgi Gerganov
2023-04-05 | ggml, llama : avoid heavy V transpose + improvements (#775) | Georgi Gerganov
ggml :
- added ggml_view_3d()
- ggml_view_tensor() now inherits the stride too
- reimplement ggml_cpy() to account for dst stride
- no longer require tensor->data to be memory aligned
llama :
- compute RoPE on 32-bit tensors (should be more accurate)
- store RoPE-ed K in the KV cache
- store transposed V in the KV cache (significant speed-up)
- avoid unnecessary Q copy
2023-04-03 | 10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654) | SebastianApel
* Performance improvement of AVX2 code
* Fixed problem with MSVC compiler
* Reviewer comments: removed double semicolon, deleted empty line 1962
2023-04-02 | ggml : change ne to int64_t (#626) | Marian Cepok
2023-03-31 | Enable -std= for cmake builds, fix warnings (#598) | Stephan Walter
2023-03-31 | Optimize AVX2 ggml_vec_dot_q4_0 (#642) | slaren
2023-03-31 | Add AVX acceleration (#617) | perserk
* ggml : add AVX quantize_row_q4_0()
* ggml : add AVX ggml_vec_dot_q4_0()
* ggml : refactor AVX part of ggml_vec_dot_q4_0()
  https://github.com/ggerganov/llama.cpp/pull/617#issuecomment-1489985645
2023-03-30 | Ensure --mlock works properly with mmap() support | Justine Tunney
2023-03-30 | Add mmap support for model files | Slaren
2023-03-30 | Remove unused variable (#607) | Casey Primozic
It seems some new warnings were added recently that exposed this. I wrote the code that included this unused variable originally, and it is indeed not needed.
2023-03-30 | ggml : fix NEON signs (close #620, #622) | Georgi Gerganov
2023-03-30 | Fix GGML_F32Cx8_STORE in AVX without F16C path (#619) | slaren
2023-03-29 | ggml : init time on first ggml_init() call | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON dequantize_row_q4_1() | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON quantize_row_q4_1() | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON ggml_vec_dot_q4_1() | Georgi Gerganov
2023-03-29 | Fix GCC warning about binary literal (#595) | anzz1
0b10101010 -> 0xAA /* 0b10101010 */
2023-03-28 | Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375) | anzz1
* Enable Fused-Multiply-Add (FMA) instructions on MSVC
  (the __FMA__ macro does not exist in MSVC)
* Enable F16C/CVT16 vector extensions on MSVC
  (the __F16C__ macro does not exist in MSVC, but is implied with AVX2/AVX512)
* MSVC cvt intrinsics
* Add __SSE3__ macro for MSVC too, because why not, even though it's not currently used for anything when AVX is defined
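A sketch of the kind of shim this describes: since MSVC never defines __FMA__ or __F16C__, they can be synthesized when __AVX2__ is present. The exact placement and spelling are assumptions based on the commit message.

    #if defined(_MSC_VER) && defined(__AVX2__)
    // AVX2-capable targets also have FMA and F16C, so define the GCC/Clang
    // style feature macros that the SIMD code paths check for
    #ifndef __FMA__
    #define __FMA__ 1
    #endif
    #ifndef __F16C__
    #define __F16C__ 1
    #endif
    #endif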
2023-03-28 | ggml : add AVX2 implementation of quantize_row_q4_1 (#515) | slaren
* Add AVX2 implementation of quantize_row_q4_1
* Actually use AVX2
* Make quantize_row_q4_1 static
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28 | ggml : refactor quantized processing functions (#509) | Stephan Walter
* Refactor quantized processing functions
* ggml : minor
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28 | all : be more strict about converting float to double (#458) | Stephan Walter
* Be more strict about converting float to double
* Test equivalence of round, SILU implementations
  Test module is commented out in CMakeLists.txt because the tests may take a long time, depending on how much the compiler optimizes.
* Fix softmax in perplexity.cpp
* all : prefer float over double where appropriate
* perplexity : add <cmath>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28 | ggml : introduce structs for the q4 data blocks (#356) | Stephan Walter
* Introduce structs for the q4 data blocks
* ggml : rename quant struct variables + fix ARM_NEON
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
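A plausible shape of the q4 block structs this refers to, with an assumed block size QK: a per-block scale (plus a minimum for q4_1) followed by the 4-bit quants packed two per byte.

    #include <stdint.h>

    #define QK 32  // assumed block size

    typedef struct {
        float   d;           // scale
        uint8_t qs[QK / 2];  // 4-bit quants, two per byte
    } block_q4_0;

    typedef struct {
        float   d;           // scale
        float   m;           // minimum
        uint8_t qs[QK / 2];  // 4-bit quants, two per byte
    } block_q4_1;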
2023-03-28 | Fix usage of F16C intrinsics in AVX code (#563) | slaren
* Fix usage of F16C intrinsics in AVX code when F16C is not defined
2023-03-26 | Fix undefined variables in debug build, remove unused variables (#531) | Stephan Walter
2023-03-25 | Add AVX2 implementation of dequantize_row_q4_1 (#505) | slaren
2023-03-25 | Overhaul the examples structure | Georgi Gerganov
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"
Hope I didn't break something!
2023-03-25 | Retire the ggml_mul_mat() branch for transposed src0 (#500) | Georgi Gerganov
* Retire the ggml_mul_mat() for transposed src0
  - It can always be made contiguous with ggml_cpy()
  - The code is now simplified
  - The results are deterministic in respect to num threads
* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)
  * Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON
  * Fix dequantization - forgot to interleave the quants
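A sketch of the calling pattern this change implies, using public ggml API names (the wrapper function itself is hypothetical): copy the transposed view into a contiguous tensor with ggml_cpy() before handing it to ggml_mul_mat().

    #include "ggml.h"

    // hypothetical wrapper: make the transposed weight contiguous, then mat-mul
    static struct ggml_tensor * mul_mat_transposed_w(
            struct ggml_context * ctx,
            struct ggml_tensor  * w,
            struct ggml_tensor  * x) {
        // transposed view of w: valid, but not contiguous in memory
        struct ggml_tensor * w_t = ggml_transpose(ctx, w);

        // copy the view into a freshly allocated, contiguous tensor
        struct ggml_tensor * w_c = ggml_cpy(ctx, w_t,
                ggml_new_tensor_2d(ctx, GGML_TYPE_F32, w_t->ne[0], w_t->ne[1]));

        // ggml_mul_mat() can now assume a contiguous src0
        return ggml_mul_mat(ctx, w_c, x);
    }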