path: root/llama.h
author    Kawrakow <48489457+ikawrakow@users.noreply.github.com>  2023-04-20 19:42:27 +0200
committer GitHub <noreply@github.com>  2023-04-20 20:42:27 +0300
commit    38de86a7114c97ecf3644e3a60159f1ed893e1b0 (patch)
tree      fc6b90dd99825ce4e745304aab484b85903949c0 /llama.h
parent    e0305ead3a072db9c08b35c9600c49273b38a4b5 (diff)
llama : multi-threaded quantization (#1075)
* Multi-threading quantization. Not much gain for simple quantizations, but it will be important for quantizations that require more CPU cycles.

* Multi-threading for quantize-stats. It now does the job in ~14 seconds on my Mac for Q4_0, Q4_1 and Q4_2. Single-threaded it was taking more than 2 minutes after adding the more elaborate version of Q4_2.

* Reviewer comments

* Avoiding compiler confusion. After changing chunk_size to const int as suggested by @ggerganov, clang and GCC started to warn me that I don't need to capture it in the lambda. So, I removed it from the capture list. But that makes the MSVC build fail. So, making it a constexpr to make every compiler happy.

* Still fighting with lambda captures in MSVC

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
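The chunked worker pattern and the constexpr capture fix described in the commit message can be illustrated with a short sketch. This is not the code from the commit: quantize_chunk is a hypothetical stand-in for the real per-block quantization kernel, and the chunk size is illustrative.

    #include <algorithm>
    #include <atomic>
    #include <cstdint>
    #include <thread>
    #include <vector>

    // Hypothetical stand-in for the real quantization kernel; it just
    // clamps and rounds so the sketch compiles and runs.
    static void quantize_chunk(const float * src, int8_t * dst, int n) {
        for (int i = 0; i < n; ++i) {
            dst[i] = (int8_t) std::max(-128.0f, std::min(127.0f, src[i] * 16.0f));
        }
    }

    static void quantize_multithreaded(const float * src, int8_t * dst,
                                       int n_elements, int nthread) {
        if (nthread <= 0) {
            nthread = (int) std::thread::hardware_concurrency();
        }
        // constexpr rather than const int: a constexpr can be read inside
        // the lambda without appearing in the capture list, which avoids
        // the clang/GCC "unused capture" warning while still building on
        // MSVC -- the situation the commit message describes.
        constexpr int chunk_size = 32 * 512; // illustrative value

        // Threads pull chunks off a shared atomic counter until the
        // input is exhausted, so the work balances itself.
        std::atomic<int> counter(0);
        auto worker = [&]() {
            while (true) {
                const int first = counter.fetch_add(chunk_size);
                if (first >= n_elements) {
                    break;
                }
                const int n = std::min(chunk_size, n_elements - first);
                quantize_chunk(src + first, dst + first, n);
            }
        };

        std::vector<std::thread> workers;
        for (int i = 1; i < nthread; ++i) {
            workers.emplace_back(worker);
        }
        worker(); // the calling thread processes chunks as well
        for (auto & t : workers) {
            t.join();
        }
    }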
Diffstat (limited to 'llama.h')
-rw-r--r--  llama.h  |  4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/llama.h b/llama.h
index 011e34c..e95ff73 100644
--- a/llama.h
+++ b/llama.h
@@ -93,10 +93,12 @@ extern "C" {
// TODO: not great API - very likely to change
// Returns 0 on success
+ // nthread - how many threads to use. If <=0, will use std::thread::hardware_concurrency(), else the number given
LLAMA_API int llama_model_quantize(
const char * fname_inp,
const char * fname_out,
- enum llama_ftype ftype);
+ enum llama_ftype ftype,
+ int nthread);
// Apply a LoRA adapter to a loaded model
// path_base_model is the path to a higher quality model to use as a base for
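For reference, a call site using the extended signature might look like the following sketch. The file paths are illustrative, and LLAMA_FTYPE_MOSTLY_Q4_0 is assumed to be one of the llama_ftype values defined elsewhere in this header.

    #include <stdio.h>
    #include "llama.h"

    int main(void) {
        // nthread = 0: per the new comment, a value <= 0 makes the library
        // pick std::thread::hardware_concurrency() internally.
        const int ret = llama_model_quantize(
            "models/7B/ggml-model-f16.bin",   // fname_inp (illustrative path)
            "models/7B/ggml-model-q4_0.bin",  // fname_out (illustrative path)
            LLAMA_FTYPE_MOSTLY_Q4_0,          // target quantization format
            0);                               // <= 0 => use all hardware threads
        if (ret != 0) {
            fprintf(stderr, "llama_model_quantize failed: %d\n", ret);
            return 1;
        }
        return 0;
    }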