From b853d456018b10820686362af41b2f2f75f1eec6 Mon Sep 17 00:00:00 2001
From: zrm
Date: Mon, 26 Jun 2023 13:57:59 -0400
Subject: ggml : add NUMA support (#1556)

* detect NUMA systems and pin work threads to nodes (linux)
* disable mmap prefetch/readahead for NUMA systems
* avoid sending finalize op to thread pool if it does nothing
* silence robot
* fix args
* make --numa a param
* recommendation that n_nodes evenly divide n_threads did not warrant such aggressive enforcement
* lower synchronization overhead
* statically allocate
* move numa state to g_state
* add description for --numa
* ggml : minor style changes
* ggml : minor style + try fix sanitizer build
* llama : allow to initialize backend with NUMA support
* llama : avoid ggml include in llama-util.h
* ggml : style / formatting
* ggml : fix handling of ops with n_threads > n_tasks > 1
* server : utilize numa parameter

---------

Co-authored-by: Georgi Gerganov
---
 examples/quantize/quantize.cpp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'examples/quantize')

diff --git a/examples/quantize/quantize.cpp b/examples/quantize/quantize.cpp
index 4e8e6f5..1eb0f75 100644
--- a/examples/quantize/quantize.cpp
+++ b/examples/quantize/quantize.cpp
@@ -180,7 +180,7 @@ int main(int argc, char ** argv) {
         usage(argv[0]);
     }
 
-    llama_init_backend();
+    llama_init_backend(false);
 
     // parse command line arguments
     const std::string fname_inp = argv[arg_idx];
--
cgit v1.2.3
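
Note: the only change to the quantize example is the new boolean NUMA argument to llama_init_backend(). Below is a minimal sketch, not part of this patch, of how a caller could wire a --numa command-line flag to that parameter; the flag parsing shown here is purely illustrative (quantize itself hard-codes false, as in the hunk above), and the commit only adds --numa to main and server.

    #include <cstring>

    #include "llama.h"

    int main(int argc, char ** argv) {
        bool numa = false;

        // hypothetical --numa flag: scan the arguments for it (illustration only;
        // quantize simply passes false)
        for (int i = 1; i < argc; ++i) {
            if (std::strcmp(argv[i], "--numa") == 0) {
                numa = true;
            }
        }

        // initialize the backend; when numa is true, ggml detects NUMA systems,
        // pins worker threads to nodes, and disables mmap prefetch/readahead,
        // per the commit message
        llama_init_backend(numa);

        // ... load the model, quantize, etc. ...

        return 0;
    }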