AVX512 (#1119)
* ggml : prefer vzip to vuzp
This way we always use the same type of instruction across all quantizations
* ggml : alternative Q4_3 implementation using modified Q8_0
* ggml : fix Q4_3 scalar implementation
* ggml : slight improvement of Q4_3 - no need for loop unrolling
* ggml : fix AVX paths for Q8_0 quantization
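As an illustration of the vzip preference above: after splitting packed 4-bit quants into low and high nibbles, a zip restores the original element order. This is only a sketch; the nibble layout (byte j holding quants 2j and 2j+1), the helper name, and the omitted bias handling are assumptions, not the actual ggml code.

    #include <arm_neon.h>
    #include <cstdint>

    // Sketch: unpack 32 packed 4-bit quants and re-interleave them with vzip
    // (bias/offset handling omitted; the layout is assumed, not ggml's actual one).
    static inline int8x16x2_t unpack_q4_nibbles(const uint8_t * q4) {
        const uint8x16_t v  = vld1q_u8(q4);                                        // 32 packed quants
        const int8x16_t  lo = vreinterpretq_s8_u8(vandq_u8(v, vdupq_n_u8(0x0F)));  // quants 0, 2, 4, ...
        const int8x16_t  hi = vreinterpretq_s8_u8(vshrq_n_u8(v, 4));               // quants 1, 3, 5, ...
        return vzipq_s8(lo, hi);                                                   // back to 0, 1, 2, 3, ...
    }

Using the same zip pattern in every quantization type keeps the hot loops structurally identical, which is the point of the first bullet.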
* AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring
* finish AVX vectorization of quantize_row_q8_0
* Rename hsum_int_8 to hsum_i32_8
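For reference, a horizontal sum of eight int32 lanes of the kind hsum_i32_8 performs can be written as below; the body is a sketch, not necessarily the exact code from this commit.

    #include <immintrin.h>

    // Horizontal sum of the 8 int32 lanes of a __m256i (AVX/AVX2).
    static inline int hsum_i32_8(const __m256i a) {
        const __m128i sum128 = _mm_add_epi32(_mm256_castsi256_si128(a), _mm256_extractf128_si256(a, 1));
        const __m128i hi64   = _mm_unpackhi_epi64(sum128, sum128);
        const __m128i sum64  = _mm_add_epi32(hi64, sum128);
        const __m128i hi32   = _mm_shuffle_epi32(sum64, _MM_SHUFFLE(2, 3, 0, 1));
        return _mm_cvtsi128_si32(_mm_add_epi32(sum64, hi32));
    }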
* Improve cuBLAS performance by using a memory pool
* Move cuda specific definitions to ggml-cuda.h/cu
* Add CXX flags to nvcc
* Change memory pool synchronization mechanism to a spin lock
General code cleanup
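A minimal sketch of what a spin-lock-guarded CUDA memory pool looks like, assuming a fixed number of slots and the hypothetical names below; this is not the actual ggml-cuda.cu implementation.

    #include <atomic>
    #include <cstddef>
    #include <cuda_runtime.h>

    struct cuda_buffer { void * ptr = nullptr; size_t size = 0; };

    static cuda_buffer      g_pool[16];                       // hypothetical fixed-size pool
    static std::atomic_flag g_lock = ATOMIC_FLAG_INIT;        // spin lock guarding the pool

    static void * pool_malloc(size_t size) {
        while (g_lock.test_and_set(std::memory_order_acquire)) { /* spin */ }
        for (auto & b : g_pool) {
            if (b.ptr != nullptr && b.size >= size) {         // reuse a cached buffer that fits
                void * ptr = b.ptr;
                b.ptr = nullptr;
                g_lock.clear(std::memory_order_release);
                return ptr;
            }
        }
        g_lock.clear(std::memory_order_release);
        void * ptr = nullptr;
        cudaMalloc(&ptr, size);                               // nothing cached: allocate fresh
        return ptr;
    }

    static void pool_free(void * ptr, size_t size) {
        while (g_lock.test_and_set(std::memory_order_acquire)) { /* spin */ }
        for (auto & b : g_pool) {
            if (b.ptr == nullptr) {
                b.ptr  = ptr;
                b.size = size;
                g_lock.clear(std::memory_order_release);
                return;
            }
        }
        g_lock.clear(std::memory_order_release);
        cudaFree(ptr);                                        // pool full: return it to the driver
    }

Recycling scratch buffers this way avoids a cudaMalloc/cudaFree pair around every cuBLAS GEMM, and the spin lock keeps the pool usable from multiple host threads without mutex overhead for such short critical sections.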
* A faster version for Q4_1 x Q8_0 dot products
The idea behind this being that Q8_0 quantized
values get used many times in the matrix multiplications
where they are involved. In the current implementations,
when we are evaluating the dot products, we need to compute
the sum of the quants in the Q8_0 vector, so the same
operation is repeated many times. Here we pre-compute
the sum during Q8_0 quantization, store it in the
now modified block_q8_0 struct, and then reuse this
result in the subsequent dot products.
In a synthetic benchmark (just compute a bunch of dot
products), this change speeds up the Q4_1 * Q8_0 dot
product by 80%, making the performance identical to
Q4_0 * Q8_0.
In practical application, I see a ~15% gain in speed for
token prediction on M2, and ~5% gain on Ryzen 7950X.
The speed gain in the prompt evaluation is much bigger
(around 50%).
I have only done the change for the scalar version,
ARM_NEON, and AVX2, so we still need an AVX implementation.
* Cleaning up
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
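The arithmetic behind this change, in a self-contained sketch (block size, struct layout, and nibble order are assumptions, not the exact ggml definitions): for Q4_1, x_i = d4*q4_i + m4, and for Q8_0, y_i = d8*q8_i, so the dot product splits into d4*d8*sum(q4_i*q8_i) plus m4 times d8*sum(q8_i), and that last factor is what gets pre-computed once per Q8_0 block.

    #include <cstdint>

    constexpr int QK = 32;                                      // assumed block size

    struct block_q4_1 { float d; float m; uint8_t qs[QK/2]; };  // scale, min, packed 4-bit quants
    struct block_q8_0 { float d; float s; int8_t  qs[QK];   };  // scale, d * sum(qs), 8-bit quants

    // dot = sum_i (d4*q4_i + m4) * (d8*q8_i)
    //     = d4*d8 * sum_i q4_i*q8_i  +  m4 * (d8 * sum_i q8_i)
    //                                        ^^^^^^^^^^^^^^^^ pre-computed once as y->s
    static float vec_dot_q4_1_q8_0_block(const block_q4_1 * x, const block_q8_0 * y) {
        int sumi = 0;
        for (int i = 0; i < QK/2; ++i) {
            const int q4_lo = x->qs[i] & 0x0F;                  // hypothetical nibble layout
            const int q4_hi = x->qs[i] >> 4;
            sumi += q4_lo * y->qs[2*i + 0] + q4_hi * y->qs[2*i + 1];
        }
        return x->d * y->d * sumi + x->m * y->s;                // reuse the pre-computed sum
    }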
Broke it during conflict resolution in the last PR
* Multi-threading quantization.
Not much gain for simple quantizations, but it will be important
for quantizations that require more CPU cycles.
* Multi-threading for quantize-stats
It now does the job in ~14 seconds on my Mac for
Q4_0, Q4_1 and Q4_2. Single-threaded it was taking
more than 2 minutes after adding the more elaborate
version of Q4_2.
* Reviewer comments
* Avoiding compiler confusion
After changing chunk_size to const int as suggested by
@ggerganov, clang and GCC started warning me that I don't
need to capture it in the lambda. So, I removed it from the
capture list. But that makes the MSVC build fail. So,
making it a constexpr to make every compiler happy.
* Still fighting with lambda captures in MSVC
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
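A sketch of the pattern described above, with hypothetical names: a constexpr chunk_size can be read inside the lambda without appearing in the capture list, which is what finally satisfied clang, GCC and MSVC at the same time.

    #include <algorithm>
    #include <atomic>
    #include <cstddef>
    #include <thread>
    #include <vector>

    static void quantize_rows_parallel(const float * src, char * dst,
                                       int nrows, int n_per_row, size_t dst_row_size, int nthread,
                                       void (*quantize_row)(const float *, void *, int)) {
        constexpr int chunk_size = 32;         // constexpr: usable in the lambda without capturing it
        std::atomic<int> next_row{0};
        // note: chunk_size is deliberately absent from the capture list
        auto worker = [&next_row, src, dst, nrows, n_per_row, dst_row_size, quantize_row]() {
            for (;;) {
                const int first = next_row.fetch_add(chunk_size);
                if (first >= nrows) return;
                const int last = std::min(first + chunk_size, nrows);
                for (int i = first; i < last; ++i) {
                    quantize_row(src + (size_t)i*n_per_row, dst + (size_t)i*dst_row_size, n_per_row);
                }
            }
        };
        std::vector<std::thread> threads;
        for (int t = 1; t < nthread; ++t) threads.emplace_back(worker);
        worker();                              // the calling thread takes a share of the work too
        for (auto & t : threads) t.join();
    }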
* Q4_2 quantization with rmse-optimized scale and quants
For quantize-stats we get
q4_2: rmse 0.00159301, maxerr 0.17480469, 95pct<0.0030, median<0.0012
For 7B perplexity with BLAS enabled we get 6.2038 after 655 chunks.
Quantization is slow (~90 seconds on my Mac for 7B) as it is not
multi-threaded as in PR #896.
* ggml : satisfy the sanitizer builds
Not sure why this makes them fail
* Better follow ggml conventions for function names
* Fixed type as per reviewer comment
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
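I have not reproduced the actual optimizer from this commit; the sketch below only illustrates the general idea of an RMSE-optimized scale, i.e. trying a few candidate scales per block and keeping the one with the smallest squared error (the block size of 16 and the candidate grid are assumptions).

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    constexpr int QK4_2 = 16;                              // assumed block size for Q4_2

    // Quantize one block to 4-bit values in [-8, 7]; returns the chosen scale.
    static float quantize_block_rmse(const float * x, int8_t * q) {
        float amax = 0.0f;
        for (int i = 0; i < QK4_2; ++i) amax = std::max(amax, std::fabs(x[i]));
        if (amax == 0.0f) { std::fill(q, q + QK4_2, int8_t(0)); return 0.0f; }

        float best_d = amax/7.0f, best_err = HUGE_VALF;
        for (int k = -4; k <= 4; ++k) {                    // a few candidate scales around amax/7
            const float d = amax/(7.0f + 0.1f*k);
            float err = 0.0f;
            for (int i = 0; i < QK4_2; ++i) {
                const int   qi = std::clamp((int) std::lround(x[i]/d), -8, 7);
                const float e  = x[i] - d*qi;
                err += e*e;
            }
            if (err < best_err) { best_err = err; best_d = d; }
        }
        for (int i = 0; i < QK4_2; ++i) {
            q[i] = (int8_t) std::clamp((int) std::lround(x[i]/best_d), -8, 7);
        }
        return best_d;
    }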
* ggml : use 8-bit precision for Q4_1 intermediate results (ARM)
* ggml : optimize ggml_vec_dot_q4_1_q8_0() via vmlaq_n_f32
56 ms/token with Q4_1!
* ggml : AVX2 implementation of ggml_vec_dot_q4_1_q8_0 (#1051)
* gitignore : ignore ppl-*.txt files
---------
Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
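The vmlaq_n_f32 pattern referred to above looks roughly like this: the int32 dot product of a block is converted to float once and folded into the accumulator with a single multiply-accumulate by the combined scale (names and block handling here are illustrative, not ggml's actual code).

    #include <arm_neon.h>

    // acc += (float)dot_i32 * (d_q4 * d_q8), in one vmlaq_n_f32
    static inline float32x4_t accumulate_block(float32x4_t acc, int32x4_t dot_i32,
                                               float d_q4, float d_q8) {
        return vmlaq_n_f32(acc, vcvtq_f32_s32(dot_i32), d_q4*d_q8);
    }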
* Q4 cleanup
* Remove unused AVX512 Q4_0 code
* Multi-threaded ggml_cpy
* Update ggml.c
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Also fix wdata offset in ggml_compute_forward_add_q_f32
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* ggml : Q4_2 ARM
* ggml : add ggml_is_quantized()
* llama : update llama_type_name() with Q4_2 entry
* ggml : speed-up q4_2
- 4 threads: ~100ms -> ~90ms
- 8 threads: ~55ms -> ~50ms
* ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
Had a background process that was messing with the timings
* ggml : add Q8_0 quantization for intermediate results
* quantize-stats : fix test + add it to Makefile default
* Q8: use int8_t, AVX/AVX2 optimizations
* ggml : fix quantize_row_q8_0() ARM_NEON rounding
* minor : updates after rebase to latest master
* quantize-stats : delete obsolete strings
* ggml : fix q4_1 dot func
---------
Co-authored-by: Stephan Walter <stephan@walter.name>
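A reference-style sketch of the 8-bit intermediate quantization introduced here, assuming a block size of 32 and the layout before the later pre-computed-sum change; not necessarily the exact ggml code.

    #include <cmath>
    #include <cstdint>

    constexpr int QK8_0 = 32;                         // assumed block size

    struct block_q8_0 { float d; int8_t qs[QK8_0]; };

    static void quantize_row_q8_0_ref(const float * x, block_q8_0 * y, int k) {
        for (int b = 0; b < k/QK8_0; ++b) {
            float amax = 0.0f;                        // largest magnitude in the block
            for (int i = 0; i < QK8_0; ++i) amax = std::fmax(amax, std::fabs(x[b*QK8_0 + i]));
            const float d  = amax/127.0f;             // map +/-amax to +/-127
            const float id = d != 0.0f ? 1.0f/d : 0.0f;
            y[b].d = d;
            for (int i = 0; i < QK8_0; ++i) {
                y[b].qs[i] = (int8_t) std::lround(x[b*QK8_0 + i]*id);
            }
        }
    }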
Avoid duplication of type names in utils
Co-authored-by: Håkon H. Hitland <haakon@likedan.net>
* GGML map ops proof of concept.
* Various cleanups.
Add handling for task setting.
Add handling for ggml_compute_backward.
Rename functions to ggml_map_unary_f32 and ggml_map_binary_f32
Fix compiler warnings related to casting function pointers and `void *`
Reorder functions and definitions based on the GGML op number.
Use typedefs for map op function pointer types.
* Fix position of map ops cases in ggml_compute_forward
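The shape of the map-op API is easiest to see with a small example; the typedefs and names below are illustrative and deliberately simplified, not the exact signatures in ggml.h.

    #include <cmath>

    // typedef'd function-pointer types for element-wise "map" ops
    typedef void (*unary_op_f32_t)(int n, float * dst, const float * src);
    typedef void (*binary_op_f32_t)(int n, float * dst, const float * src0, const float * src1);

    // example op a user could plug in: a tanh-based GELU approximation
    static void gelu_approx_f32(int n, float * dst, const float * src) {
        for (int i = 0; i < n; ++i) {
            const float x = src[i];
            dst[i] = 0.5f*x*(1.0f + std::tanh(0.79788456f*(x + 0.044715f*x*x*x)));
        }
    }

    // the forward pass of a map op just dispatches through the stored pointer
    static void apply_unary_f32(unary_op_f32_t fun, int n, float * dst, const float * src) {
        fun(n, dst, src);
    }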
* ggml : speed-up q4_1 ARM_NEON by ~5%
* ggml : implement vaddvq when missing
* ggml : implement vminvq and vmaxvq when missing
* ggml : implement vzip when missing
* ggml : fix comment
* ggml : try to use correct ifdef
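The missing-intrinsic fallbacks mentioned above amount to emulating the ARMv8 horizontal reductions with pairwise ops; a sketch (names are mine, ifdef guards omitted):

    #include <arm_neon.h>

    // vaddvq_f32 replacement: sum of the 4 lanes via pairwise adds
    static inline float vaddvq_f32_compat(float32x4_t v) {
        float32x2_t p = vadd_f32(vget_low_f32(v), vget_high_f32(v));   // [a+c, b+d]
        p = vpadd_f32(p, p);                                           // [a+b+c+d, ...]
        return vget_lane_f32(p, 0);
    }

    // vmaxvq_f32 replacement: maximum of the 4 lanes via pairwise max
    static inline float vmaxvq_f32_compat(float32x4_t v) {
        float32x2_t p = vmax_f32(vget_low_f32(v), vget_high_f32(v));
        p = vpmax_f32(p, p);
        return vget_lane_f32(p, 0);
    }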
which allows us to use aligned_alloc or _aligned_malloc functions
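For context, the kind of wrapper this enables might look like the following sketch (hypothetical names; aligned_alloc requires the size to be a multiple of the alignment, and _aligned_malloc takes its arguments in the opposite order):

    #include <cstddef>
    #include <cstdlib>
    #ifdef _WIN32
    #include <malloc.h>
    #endif

    static void * aligned_malloc_compat(size_t alignment, size_t size) {
    #ifdef _WIN32
        return _aligned_malloc(size, alignment);   // note the swapped argument order
    #else
        return aligned_alloc(alignment, size);     // size must be a multiple of alignment
    #endif
    }

    static void aligned_free_compat(void * ptr) {
    #ifdef _WIN32
        _aligned_free(ptr);
    #else
        free(ptr);
    #endif
    }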
Mostly for msys2 and mingw64 builds, which are different from each other
and different from standard Visual Studio builds. Isn't Windows fun?
- Define _GNU_SOURCE in more files (it's already used in ggml.c for
Linux's sake).
- Don't use PrefetchVirtualMemory if not building for Windows 8 or later
(mingw64 doesn't by default). But warn the user about this situation
since it's probably not intended.
- Check for NOMINMAX already being defined, which it is on mingw64.
- Actually use the `increment` variable (bug in my `pizza` PR).
- Suppress unused variable warnings in the fake pthread_create and
pthread_join implementations for Windows.
- (not Windows-related) Remove mention of `asprintf` from comment;
`asprintf` is no longer used.
Fixes #871.
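A sketch of the NOMINMAX and PrefetchVirtualMemory guards described above (illustrative, not the exact llama.cpp code):

    #if defined(_WIN32)
    #ifndef NOMINMAX                 // mingw64 already defines NOMINMAX
    #define NOMINMAX
    #endif
    #include <windows.h>
    #include <cstddef>

    static void prefetch_mapping(void * addr, size_t len) {
    #if defined(_WIN32_WINNT) && _WIN32_WINNT >= 0x0602      // PrefetchVirtualMemory needs Windows 8+
        WIN32_MEMORY_RANGE_ENTRY range;
        range.VirtualAddress = addr;
        range.NumberOfBytes  = (SIZE_T) len;
        // the prefetch is only a hint, so failures can be ignored
        PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0);
    #else
        (void) addr; (void) len;
        // build-time warning instead of silently skipping the prefetch
        #pragma message("PrefetchVirtualMemory not available for this _WIN32_WINNT target")
    #endif
    }
    #endif // _WIN32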