path: root/ggml.c
Age | Commit message | Author
2023-04-08 | Add quantize-stats command for testing quantization (#728) | unbounded
Adds a command that calculates statistics over the errors introduced by quantization, such as mean squared error, max error, and some percentile errors for layer weights. Should be useful for testing quantization improvements. Exposes some internal state from ggml and llama for testing.
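As an illustration of the kind of statistics such a tool might report (a sketch, not the actual quantize-stats code; function and parameter names are assumptions): round-trip a layer's weights through quantize/dequantize and compare against the originals.

    // Report simple error statistics for one quantized layer (illustrative sketch).
    #include <math.h>
    #include <stddef.h>
    #include <stdio.h>

    static void report_quant_error(const float *orig, const float *roundtrip, size_t n,
                                   const char *layer_name) {
        double sum_sq  = 0.0; // accumulate squared error
        double max_err = 0.0; // track worst-case absolute error
        for (size_t i = 0; i < n; i++) {
            const double e = fabs((double) roundtrip[i] - (double) orig[i]);
            sum_sq += e * e;
            if (e > max_err) max_err = e;
        }
        printf("%s: rmse %.6f, max %.6f over %zu weights\n",
               layer_name, sqrt(sum_sq / (double) n), max_err, n);
    }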
2023-04-05 | ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781) | Georgi Gerganov
2023-04-05 | ggml, llama : avoid heavy V transpose + improvements (#775) | Georgi Gerganov
ggml :
- added ggml_view_3d()
- ggml_view_tensor() now inherits the stride too
- reimplement ggml_cpy() to account for dst stride
- no longer require tensor->data to be memory aligned

llama :
- compute RoPE on 32-bit tensors (should be more accurate)
- store RoPE-ed K in the KV cache
- store transposed V in the KV cache (significant speed-up)
- avoid unnecessary Q copy
2023-04-03 | 10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654) | SebastianApel
* Performance improvement of AVX2 code * Fixed problem with MSVC compiler * Reviewer comments: removed double semicolon, deleted empty line 1962
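For reference, the AVX2 kernel tuned here computes, in essence, the following scalar dot product (a sketch assuming the Q4_0 layout of this period: 32 weights per block, one fp32 scale, 4-bit quants stored two per byte with an offset of 8; the names are illustrative, not the ggml.c symbols):

    #include <stdint.h>
    #include <stddef.h>

    #define QK 32

    typedef struct {
        float   d;          // block scale
        uint8_t qs[QK / 2]; // 16 bytes hold 32 packed 4-bit quants
    } block_q4_0;

    // Dot product of two Q4_0-quantized rows of length n (n a multiple of QK).
    static float vec_dot_q4_0_ref(size_t n, const block_q4_0 *x, const block_q4_0 *y) {
        float sum = 0.0f;
        for (size_t i = 0; i < n / QK; i++) {
            int isum = 0; // integer dot product within the block
            for (int j = 0; j < QK / 2; j++) {
                const int x0 = (x[i].qs[j] & 0x0F) - 8; // low nibble
                const int x1 = (x[i].qs[j] >> 4)   - 8; // high nibble
                const int y0 = (y[i].qs[j] & 0x0F) - 8;
                const int y1 = (y[i].qs[j] >> 4)   - 8;
                isum += x0 * y0 + x1 * y1;
            }
            sum += x[i].d * y[i].d * (float) isum;
        }
        return sum;
    }

The SIMD versions (AVX2, AVX512, NEON) vectorize the inner integer loop and keep the per-block float scaling outside it.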
2023-04-02 | ggml : change ne to int64_t (#626) | Marian Cepok
2023-03-31 | Enable -std= for cmake builds, fix warnings (#598) | Stephan Walter
2023-03-31 | Optimize AVX2 ggml_vec_dot_q4_0 (#642) | slaren
2023-03-31 | Add AVX acceleration (#617) | perserk
* ggml : add AVX quantize_row_q4_0() * ggml : add AVX ggml_vec_dot_q4_0() * ggml : refactor AVX part of ggml_vec_dot_q4_0() https://github.com/ggerganov/llama.cpp/pull/617#issuecomment-1489985645
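The quantize_row_q4_0() routine that gained an AVX path here follows, in scalar form, roughly this scheme (a sketch assuming the scaling used at the time, where the block scale maps the largest magnitude to +/-7; not the ggml.c implementation):

    #include <stdint.h>
    #include <stddef.h>
    #include <math.h>

    #define QK 32

    // Quantize n floats (n a multiple of QK) into per-block scales and packed nibbles.
    static void quantize_row_q4_0_ref(const float *x, float *d_out, uint8_t *qs_out, size_t n) {
        for (size_t i = 0; i < n / QK; i++) {
            float amax = 0.0f; // absolute max in the block
            for (int j = 0; j < QK; j++) {
                const float v = fabsf(x[i * QK + j]);
                if (v > amax) amax = v;
            }

            const float d  = amax / 7.0f;                 // largest magnitude -> +/-7
            const float id = d != 0.0f ? 1.0f / d : 0.0f;
            d_out[i] = d;

            for (int j = 0; j < QK; j += 2) {
                const int q0 = (int) roundf(x[i * QK + j + 0] * id) + 8; // offset into [1, 15]
                const int q1 = (int) roundf(x[i * QK + j + 1] * id) + 8;
                qs_out[i * (QK / 2) + j / 2] = (uint8_t) (q0 | (q1 << 4));
            }
        }
    }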
2023-03-30 | Ensure --mlock works properly with mmap() support | Justine Tunney
2023-03-30 | Add mmap support for model files | Slaren
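The idea behind mmap loading, shown as a minimal POSIX sketch (illustrative, not the llama.cpp loader): map the model file read-only instead of read()-ing it into a buffer, so pages are faulted in on demand and can be shared between processes.

    #include <fcntl.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void *map_model(const char *path, size_t *size_out) {
        const int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return NULL; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); close(fd); return NULL; }

        void *addr = mmap(NULL, (size_t) st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd); // the mapping keeps its own reference to the file
        if (addr == MAP_FAILED) { perror("mmap"); return NULL; }

        *size_out = (size_t) st.st_size;
        return addr;
    }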
2023-03-30 | Remove unused variable (#607) | Casey Primozic
* It seems some new warnings were added recently that exposed this. I wrote the code that included this unused variable originally, and it is indeed not needed.
2023-03-30 | ggml : fix NEON signs (close #620, #622) | Georgi Gerganov
2023-03-30 | Fix GGML_F32Cx8_STORE in AVX without F16C path (#619) | slaren
2023-03-29 | ggml : init time on first ggml_init() call | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON dequantize_row_q4_1() | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON quantize_row_q4_1() | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON ggml_vec_dot_q4_1() | Georgi Gerganov
2023-03-29 | Fix GCC warning about binary literal (#595) | anzz1
0b10101010 -> 0xAA /* 0b10101010 */
2023-03-28 | Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375) | anzz1
* Enable Fused-Multiply-Add (FMA) instructions on MSVC
  __FMA__ macro does not exist in MSVC
* Enable F16C/CVT16 vector extensions on MSVC
  __F16C__ macro does not exist in MSVC, but is implied with AVX2/AVX512
* MSVC cvt intrinsics
* Add __SSE3__ macro for MSVC too because why not even though it's not currently used for anything when AVX is defined
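The change implies a small preprocessor shim along these lines (a sketch, not necessarily the exact ggml.c code): MSVC never defines __FMA__ or __F16C__, but both features can be assumed when AVX2/AVX512 code generation is enabled, so the macros are defined manually before the code that tests them.

    // Define the GCC/Clang-style feature macros that MSVC omits.
    #if defined(_MSC_VER) && (defined(__AVX2__) || defined(__AVX512F__))
      #ifndef __FMA__
      #define __FMA__ 1
      #endif
      #ifndef __F16C__
      #define __F16C__ 1
      #endif
      #ifndef __SSE3__
      #define __SSE3__ 1
      #endif
    #endif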
2023-03-28 | ggml : add AVX2 implementation of quantize_row_q4_1 (#515) | slaren
* Add AVX2 implementation of quantize_row_q4_1 * Actually use AVX2 * Make quantize_row_q4_1 static Co-authored-by: Georgi Gerganov <ggerganov@gmail.com> --------- Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28 | ggml : refactor quantized processing functions (#509) | Stephan Walter
* Refactor quantized processing functions * ggml : minor --------- Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28 | all : be more strict about converting float to double (#458) | Stephan Walter
* Be more strict about converting float to double
* Test equivalence of round, SILU implementations
  Test module is commented out in CMakeLists.txt because the tests may take a long time, depending on how much the compiler optimizes.
* Fix softmax in perplexity.cpp
* all : prefer float over double where appropriate
* perplexity : add <cmath>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
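A sketch of the kind of equivalence test mentioned above (assumed names, not the actual test module): compare a float-only SILU against a double reference over a dense sweep and check that the worst relative error stays tiny.

    #include <math.h>
    #include <stdio.h>

    static float silu_f32(float x) {
        return x / (1.0f + expf(-x)); // SILU / swish: x * sigmoid(x)
    }

    static double silu_f64(double x) {
        return x / (1.0 + exp(-x));
    }

    int main(void) {
        double max_rel_err = 0.0;
        for (int i = -10000; i <= 10000; i++) {
            const double x   = i / 1000.0;  // scan [-10, 10]
            const double ref = silu_f64(x);
            const double err = fabs((double) silu_f32((float) x) - ref);
            const double rel = ref != 0.0 ? err / fabs(ref) : err;
            if (rel > max_rel_err) max_rel_err = rel;
        }
        printf("max relative error: %g\n", max_rel_err);
        return 0;
    }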
2023-03-28 | ggml : introduce structs for the q4 data blocks (#356) | Stephan Walter
* Introduce structs for the q4 data blocks * ggml : rename quant struct variables + fix ARM_NEON --------- Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
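The q4 block structs introduced here look roughly like the following (a sketch; field names are assumptions rather than guaranteed to match the commit). Grouping the scale with its 32 quants in one struct lets the quantize/dequantize/dot routines index blocks directly instead of juggling parallel arrays.

    #include <stdint.h>
    #include <assert.h>

    #define QK 32

    typedef struct {
        float   d;           // scale
        uint8_t qs[QK / 2];  // 4-bit quants, two per byte
    } block_q4_0;

    typedef struct {
        float   d;           // scale
        float   m;           // minimum
        uint8_t qs[QK / 2];
    } block_q4_1;

    // With no padding these are 20 and 24 bytes per 32 weights, i.e. 5 and 6 bits
    // per weight including the fp32 block constants.
    static_assert(sizeof(block_q4_0) ==     sizeof(float) + QK / 2, "unexpected padding in block_q4_0");
    static_assert(sizeof(block_q4_1) == 2 * sizeof(float) + QK / 2, "unexpected padding in block_q4_1");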
2023-03-28 | Fix usage of F16C intrinsics in AVX code (#563) | slaren
* Fix usage of F16C intrinsics in AVX code when F16C is not defined
2023-03-26 | Fix undefined variables in debug build, remove unused variables (#531) | Stephan Walter
2023-03-25 | Add AVX2 implementation of dequantize_row_q4_1 (#505) | slaren
2023-03-25 | Overhaul the examples structure | Georgi Gerganov
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"

Hope I didn't break something!
2023-03-25 | Retire the ggml_mul_mat() branch for transposed src0 (#500) | Georgi Gerganov
* Retire the ggml_mul_mat() branch for transposed src0
  - It can always be made contiguous with ggml_cpy()
  - The code is now simplified
  - The results are deterministic with respect to the number of threads
* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)
* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON
* Fix dequantization - forgot to interleave the quants
2023-03-25 | Add AVX2 implementation of dequantize_row_q4_0 (#467) | slaren
2023-03-25 | Remove obsolete assert and fix compiler warning | Georgi Gerganov
2023-03-25 | Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS | Georgi Gerganov
2023-03-24 | Disable BLAS altogether - the bug is not just for quantized mat mul | Georgi Gerganov
2023-03-24 | Disable BLAS branch in mul_mat - seems there is a bug | Georgi Gerganov
2023-03-24 | Reduce memory usage and allocate enough memory for largest context (#473) | Georgi Gerganov
* Reduce memory usage and allocate enough memory for large contexts * Simpler scratch buffer usage * Reenable BLAS for quantized mul_mat * Fix number of layers in 30B and 65B * Fix KV cache size for F32
2023-03-24 | additional optimizations for POWER9 (#454) | Cameron Kaiser
2023-03-24 | Support calling mlock() on loaded model data on Linux and macOS (#453) | comex
* Support calling mlock() on loaded model data on Linux and macOS

This is enabled by a new --mlock command line option. Using mlock() disables swapping and memory compression for the model data. Doing so can be useful on systems where the model takes up a large fraction of system RAM. In my experience, macOS is quite eager to start compressing llama.cpp's memory, which then makes it halt for a few seconds while it decompresses, even with a model that uses "only" 25GB out of 32GB. Of course, this comes at the cost of forcing the system to swap or compress other processes' memory instead, so it needs to be used with care and shouldn't be enabled by default.

In theory it should be possible to support this on Windows as well using VirtualLock(), but I'm not much of a Windows user.

* Update llama.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
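What --mlock boils down to on Linux/macOS, as a minimal sketch (illustrative, not the llama.cpp code): pin the loaded or mmap'ed model buffer so the kernel will not swap or compress it. mlock() can fail (for example when RLIMIT_MEMLOCK is too low), so the failure is worth reporting rather than treating as fatal.

    #include <errno.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    static void try_mlock(const void *addr, size_t size) {
        if (mlock(addr, size) != 0) {
            // Keep running without the lock; just warn the user.
            fprintf(stderr, "warning: failed to mlock %zu bytes: %s\n", size, strerror(errno));
        }
    }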
2023-03-22 | Deduplicate q4 quantization functions (#383) | Stephan Walter
* Deduplicate q4 quantization functions * Use const; add basic test * Re-enable quantization test * Disable AVX2 flags in CI --------- Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-22 | fix: add POSIX functionality for Linux compilation (#51) | Valentyn Bezshapkin
* fix: add POSIX functionality for Linux compilation * fix: older standard for compatibility
2023-03-22 | Introduce C-style API (#370) | Georgi Gerganov
* Major refactoring - introduce C-style API
* Clean up
* Add <cassert>
* Add <iterator>
* Add <algorithm> ....
* Fix timing reporting and accumulation
* Measure eval time only for single-token calls
* Change llama_tokenize return meaning
2023-03-21 | Add OpenBSD support (#314) | Kevin Lo
2023-03-21 | Add initial AVX512 support for dot product on Linux (#320) | Casey Primozic
* Update Makefile to detect AVX512 support and add compiler flags if it's available
* Based on existing AVX2 implementation, dot product on one 32-value block of 4-bit quantized ints at a time
* Perform 8 bit -> 16 bit sign extension and multiply+add on 32 values at a time instead of 16
* Use built-in AVX512 horizontal reduce add to get sum at the end
* Manual unrolling on inner dot product loop to reduce loop counter overhead
2023-03-19 | Change RMSNorm eps to 1e-6 (#173) | Georgi Gerganov
I think this is what is used in the Python code
2023-03-17 | Don't tell users to use a bad number of threads (#243) | Stephan Walter
The readme tells people to use the command line option "-t 8", causing 8 threads to be started. On systems with fewer than 8 cores, this causes a significant slowdown. Remove the option from the example command lines and use /proc/cpuinfo on Linux to determine a sensible default.
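A sketch of the kind of default the commit describes (an assumption about the approach, not the actual llama.cpp code): count "processor" entries in /proc/cpuinfo on Linux (i.e. logical CPUs) and fall back to a small constant elsewhere or on failure.

    #include <stdio.h>
    #include <string.h>

    static int default_n_threads(void) {
        int n = 0;
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (f) {
            char line[256];
            while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, "processor", 9) == 0) {
                    n++; // one line per logical CPU
                }
            }
            fclose(f);
        }
        return n > 0 ? n : 4; // conservative fallback when the file is unavailable
    }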
2023-03-17 | Q4_1 quantization (#193) | Matvey Soloviev
* Add AVX2 version of ggml_vec_dot_q4_1 * Small optimisations to q4_1 dot product (@Const-me) * Rearrange Q4_1 quantization to work for multipart models. (Fix #152) * Fix ggml_vec_mad_q4_1 too * Fix non-vectorised q4_1 vec mul
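In scalar form, the Q4_1 scheme differs from Q4_0 by storing a per-block minimum as well as a scale, so weights are reconstructed as q * d + m and blocks with an offset (e.g. all-positive weights) lose less precision. A sketch of the reference quantizer under that assumption (names are illustrative):

    #include <stdint.h>
    #include <stddef.h>
    #include <math.h>

    #define QK 32

    static void quantize_row_q4_1_ref(const float *x, float *d_out, float *m_out,
                                      uint8_t *qs_out, size_t n) {
        for (size_t i = 0; i < n / QK; i++) {
            float vmin = x[i * QK], vmax = x[i * QK];
            for (int j = 1; j < QK; j++) {
                const float v = x[i * QK + j];
                if (v < vmin) vmin = v;
                if (v > vmax) vmax = v;
            }

            const float d  = (vmax - vmin) / 15.0f;           // 4-bit range: 0..15
            const float id = d != 0.0f ? 1.0f / d : 0.0f;
            d_out[i] = d;
            m_out[i] = vmin;

            for (int j = 0; j < QK; j += 2) {
                const int q0 = (int) roundf((x[i * QK + j + 0] - vmin) * id);
                const int q1 = (int) roundf((x[i * QK + j + 1] - vmin) * id);
                qs_out[i * (QK / 2) + j / 2] = (uint8_t) (q0 | (q1 << 4));
            }
        }
    }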
2023-03-15 | Fix RMS norm in GGML (#191) | Nebula
2023-03-16 | Add RMS norm and use it (#187) | hoangmit
* add ggml_rms_norm * update op num
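For clarity, RMS norm in scalar form (a sketch, not the ggml_rms_norm kernel): unlike LayerNorm it does not subtract the mean, it only rescales each row by its root-mean-square. The eps value follows the 2023-03-19 entry above, which changed it to 1e-6.

    #include <math.h>
    #include <stddef.h>

    static void rms_norm(float *y, const float *x, size_t n) {
        const float eps = 1e-6f; // keeps the scale finite for near-zero rows

        float sum = 0.0f;
        for (size_t i = 0; i < n; i++) {
            sum += x[i] * x[i];
        }
        const float scale = 1.0f / sqrtf(sum / (float) n + eps);

        for (size_t i = 0; i < n; i++) {
            y[i] = x[i] * scale;
        }
    }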
2023-03-15 | inline -> static inline for "bytesFromNibbles" (#161) | hoangmit
Without "static" prefix, it fails to compile in clang
2023-03-14 | Don't use vdotq_s32 if it's not available (#139) | Ronsor
* Don't use vdotq_s32 if it's not available

`dotprod` extensions aren't available on some ARM CPUs (e.g. Raspberry Pi 4), so check for them and only use them if they're available. Reintroduces the code removed in 84d9015 if `__ARM_FEATURE_DOTPROD` isn't defined.

* Update ggml.c
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
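The guarded step looks roughly like this on ARM targets (a sketch close to, but not claimed to be identical to, the ggml.c code): with the dotprod extension use vdotq_s32, otherwise widen to 16-bit, multiply, and pairwise-accumulate with baseline NEON. The per-lane layout of the accumulator differs between the two paths, but the horizontal sum is the same, which is all the dot product needs.

    #include <arm_neon.h>

    static inline int32x4_t dot_i8x16(int32x4_t acc, int8x16_t a, int8x16_t b) {
    #if defined(__ARM_FEATURE_DOTPROD)
        // Single instruction: acc[i] += sum of 4 adjacent int8 products.
        return vdotq_s32(acc, a, b);
    #else
        // Fallback: widen to int16, then pairwise-add pairs into int32 lanes.
        const int16x8_t lo = vmull_s8(vget_low_s8(a),  vget_low_s8(b));
        const int16x8_t hi = vmull_s8(vget_high_s8(a), vget_high_s8(b));
        return vaddq_s32(acc, vaddq_s32(vpaddlq_s16(lo), vpaddlq_s16(hi)));
    #endif
    }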
2023-03-13 | Add NetBSD support. (#90) | Thomas Klausner
2023-03-13 | Use vdotq_s32 to improve performance (#67) | Georgi Gerganov
* 10% performance boost on ARM * Back to original change