Broke it during conflict resolution in the last PR
* Multi-threading quantization.
Not much gain for simple quantizations, but it will be important
for quantizations that require more CPU cycles.
* Multi-threading for quantize-stats
It now does the job in ~14 seconds on my Mac for
Q4_0, Q4_1 and Q4_2. Single-threaded it was taking
more than 2 minutes after adding the more elaborate
version of Q4_2.
* Reviewer comments
* Avoiding compiler confusion
After changing chunk_size to const int as suggested by
@ggerganov, clang and GCC started warning me that I don't
need to capture it in the lambda. So I removed it from the
capture list, but that makes the MSVC build fail. Making it
constexpr keeps every compiler happy. (A minimal sketch of
the issue follows this message.)
* Still fighting with lambda captures in MSVC
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
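
A minimal sketch of the capture issue described above (all names and sizes are illustrative, not the actual llama.cpp code): with a plain `const int`, clang and GCC warn that the capture is unnecessary, while MSVC fails to build without it; `constexpr` satisfies all three.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Illustrative worker pool: threads pull chunk indices from a shared
// atomic counter until all chunks are quantized.
void quantize_all(int nchunk, int nthread) {
    constexpr int chunk_size = 32 * 512;    // was `const int`; see above
    std::atomic<int> counter{0};
    auto worker = [&counter, nchunk]() {    // chunk_size deliberately not captured
        for (int i = counter.fetch_add(1); i < nchunk; i = counter.fetch_add(1)) {
            // ... quantize rows [i * chunk_size, (i + 1) * chunk_size) ...
            (void) chunk_size;              // a constexpr needs no capture
        }
    };
    std::vector<std::thread> workers;
    for (int t = 0; t < nthread; ++t) {
        workers.emplace_back(worker);
    }
    for (auto & w : workers) {
        w.join();
    }
}
```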
CI (#1074)
[Accelerate](https://developer.apple.com/documentation/accelerate) is an Apple framework which can only be used on macOS, and the CMake build [ignores](https://github.com/ggerganov/llama.cpp/blob/master/CMakeLists.txt#L102) the `LLAMA_ACCELERATE` variable when run on non-Apple platforms. This implies setting `LLAMA_ACCELERATE` is a no-op on Ubuntu and can be removed.
This will reduce visual noise in CI check results (in addition to reducing the number of checks we have to run for every PR). Right now every sanitizer build runs twice for no good reason (e.g., we have `CI / ubuntu-latest-cmake-sanitizer (ADDRESS, Debug, ON)` and `CI / ubuntu-latest-cmake-sanitizer (ADDRESS, Debug, OFF)`).
* Q4_2 quantization with rmse-optimized scale and quants
For quantize-stats we get
q4_2: rmse 0.00159301, maxerr 0.17480469, 95pct<0.0030, median<0.0012
For 7B perplexity with BLAS enabled we get 6.2038 after 655 chunks.
Quantization is slow (~90 seconds on my Mac for 7B) because it is
not multi-threaded as in PR #896. (A sketch of the scale search
follows this message.)
* ggml : satisfy the sanitizer builds
Not sure why this makes them fail
* Better follow ggml conventions for function names
* Fixed type as per reviewer comment
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* ggml : use 8-bit precision for Q4_1 intermediate results (ARM)
* ggml : optimize ggml_vec_dot_q4_1_q8_0() via vmlaq_n_f32
56 ms/token with Q4_1! (A NEON sketch follows this message.)
* ggml : AVX2 implementation of ggml_vec_dot_q4_1_q8_0 (#1051)
* gitignore : ignore ppl-*.txt files
---------
Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
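
For reference, `vmlaq_n_f32` is the NEON multiply-accumulate of a vector by a scalar, a natural fit for applying a per-block scale to a partial dot product. A self-contained sketch (assumes an AArch64 target and `n` divisible by 4; names are illustrative):

```cpp
#include <arm_neon.h>

// Accumulate x[0..n) scaled by d; n is assumed to be a multiple of 4.
float scaled_sum(const float * x, int n, float d) {
    float32x4_t acc = vdupq_n_f32(0.0f);
    for (int i = 0; i < n; i += 4) {
        acc = vmlaq_n_f32(acc, vld1q_f32(x + i), d); // acc += x[i..i+3] * d
    }
    return vaddvq_f32(acc); // horizontal add (AArch64-only intrinsic)
}
```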
* Q4 cleanup
* Remove unused AVX512 Q4_0 code
* Multi-threaded ggml_cpy
* Update ggml.c
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Also fix wdata offset in ggml_compute_forward_add_q_f32
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
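
Multi-threaded ggml operations conventionally split work by rows across `nth` threads, with thread `ith` taking a contiguous range; presumably `ggml_cpy` follows the same pattern. A sketch with illustrative names:

```cpp
#include <algorithm>
#include <cstring>

// Thread ith of nth copies its share of nrows equal-sized rows.
static void copy_rows(char * dst, const char * src, size_t row_size,
                      int nrows, int ith, int nth) {
    const int dr  = (nrows + nth - 1) / nth;       // rows per thread
    const int ir0 = dr * ith;                      // first row for this thread
    const int ir1 = std::min(ir0 + dr, nrows);     // one past the last row
    for (int ir = ir0; ir < ir1; ++ir) {
        std::memcpy(dst + (size_t) ir * row_size,
                    src + (size_t) ir * row_size, row_size);
    }
}
```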
* ggml : Q4_2 ARM
* ggml : add ggml_is_quantized()
* llama : update llama_type_name() with Q4_2 entry
* ggml : speed-up q4_2
- 4 threads: ~100ms -> ~90ms
- 8 threads: ~55ms -> ~50ms
* ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
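
A plausible shape of the `ggml_is_quantized()` helper mentioned above, given the quantized types present at this point in the history (a sketch; see `ggml.h` for the real definition):

```cpp
#include "ggml.h"  // assumed to provide enum ggml_type / GGML_TYPE_*

// Sketch: report whether a tensor type is one of the quantized formats.
static bool is_quantized_sketch(enum ggml_type type) {
    return type == GGML_TYPE_Q4_0 ||
           type == GGML_TYPE_Q4_1 ||
           type == GGML_TYPE_Q4_2;
}
```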
Had a background process that was messing with the timings
On my Mac, the direct Q4_1 product is marginally slower than
Q4_0 (~69 vs ~55 µs). The SIMD-ified ggml version is now
almost 2x slower (~121 µs).
On a Ryzen 7950X CPU, the direct product for Q4_1 quantization
is faster than the AVX2 implementation (~60 vs ~62 µs).
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
This came up when trying to convert the gpt4all-lora-unfiltered-quantized.bin file.
* Replaced static initialization of complex objects with initialization on first use. This prevents undefined behavior at program startup, for example a crash in Release builds that did not occur in Debug builds. (A minimal sketch of the pattern follows this message.)
* Replaced use of auto with the exact type to avoid requiring -std=c++14
* Made the accessor functions for the static maps static const
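
The first bullet refers to the construct-on-first-use idiom: a function-local static is initialized on first call, sidestepping the undefined order of initialization across translation units. A minimal sketch with hypothetical contents:

```cpp
#include <map>
#include <string>

// Construct-on-first-use: the map is built the first time the accessor
// runs, not during static initialization. Contents are hypothetical.
static const std::map<std::string, int> & token_map() {
    static const std::map<std::string, int> map = {
        {"<s>", 1},
        {"</s>", 2},
    };
    return map;
}
```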
Calling `mmap.mmap` on Windows apparently resets the file offset of the
raw file object (and makes the BufferedReader return a *negative* file
offset). For safetensors, avoid using the file offset after calling
mmap. For GGML format, explicitly save and restore the offset.
Fixes #966.
* ggml : add Q8_0 quantization for intermediate results
* quantize-stats : fix test + add it to Makefile default
* Q8: use int8_t, AVX/AVX2 optimizations
* ggml : fix quantize_row_q8_0() ARM_NEON rounding
* minor : updates after rebase to latest master
* quantize-stats : delete obsolete strings
* ggml : fix q4_1 dot func
---------
Co-authored-by: Stephan Walter <stephan@walter.name>
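
For context, Q8_0 stores each block of floats as one scale `d` plus signed 8-bit quants. A hedged reference-style sketch for a single block (illustrative, not the exact ggml routine):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// One block: find the absolute maximum, derive the scale d = amax/127,
// then store each value as a signed 8-bit multiple of d.
static void quantize_block_q8(const float * x, int n, int8_t * q, float * d_out) {
    float amax = 0.0f;
    for (int i = 0; i < n; ++i) {
        amax = std::max(amax, std::fabs(x[i]));
    }
    const float d  = amax / 127.0f;
    const float id = d != 0.0f ? 1.0f / d : 0.0f;   // avoid divide-by-zero
    for (int i = 0; i < n; ++i) {
        q[i] = (int8_t) std::lround(x[i] * id);     // stays within [-127, 127]
    }
    *d_out = d;
}
```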
This reverts commit f4d277ae17247ee51129ef1a9ff74d377cc90b1b.
Avoid duplication of type names in utils
Co-authored-by: Håkon H. Hitland <haakon@likedan.net>
* Add support for configs, add configurable prefixes / suffixes, deprecate instruct mode, add stop prompt
* Add multiline mode, update text input.
* bugfix
* update implementation
* typos
* Change --multiline implementation to be toggled by EOF.
* bugfix
* default multiline mode
* add more configs
* update formatting
* apply suggestions
* GGML map ops proof of concept.
* Various cleanups.
Add handling for task setting.
Add handling for ggml_compute_backward.
Rename functions to ggml_map_unary_f32 and ggml_map_binary_f32
Fix compiler warnings related to casting function pointers and `void *`
Reorder functions and definitions based on the GGML op number.
Use typedefs for map op function pointer types.
* Fix position of map ops cases in ggml_compute_forward
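
A hedged usage sketch for the map ops named above; the callback signature is assumed from the typedef names in this message (check `ggml.h` for the exact declarations):

```cpp
#include "ggml.h"  // assumed to declare ggml_map_unary_f32 and its typedef

// Custom element-wise op matching the assumed ggml_unary_op_f32_t shape:
// (number of elements, destination row, source row).
static void scale_by_two(const int n, float * dst, const float * src) {
    for (int i = 0; i < n; ++i) {
        dst[i] = 2.0f * src[i];
    }
}

// During graph construction (ctx and x assumed to exist):
//   struct ggml_tensor * y = ggml_map_unary_f32(ctx, x, scale_by_two);
```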
After #545 we no longer need torch, tqdm and requests in the dependencies.