| field | value | timestamp |
|---|---|---|
| author | zrm <trustiosity.zrm@gmail.com> | 2023-06-26 13:57:59 -0400 |
| committer | GitHub <noreply@github.com> | 2023-06-26 20:57:59 +0300 |
| commit | b853d456018b10820686362af41b2f2f75f1eec6 (patch) | |
| tree | 264e68c8555d8509a5ac27f01eed5e6c69940174 /examples/common.cpp | |
| parent | 9225baef71407d799a6f7f563b77fd7f82791416 (diff) | |
ggml : add NUMA support (#1556)
* detect NUMA systems and pin work threads to nodes (linux); sketched below, after the commit message
* disable mmap prefetch/readahead for NUMA systems; also covered by the sketch below
* avoid sending finalize op to thread pool if it does nothing
* silence robot
* fix args
* make --numa a param
* recommendation that n_nodes evenly divide n_threads did not warrant such aggressive enforcement
* lower synchronization overhead
* statically allocate
* move numa state to g_state
* add description for --numa
* ggml : minor style changes
* ggml : minor style + try fix sanitizer build
* llama : allow initializing the backend with NUMA support
* llama : avoid ggml include in llama-util.h
* ggml : style / formatting
* ggml : fix handling of ops with n_threads > n_tasks > 1
* server : utilize numa parameter
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
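The first two bullets name Linux-specific mechanisms: detecting NUMA topology, pinning worker threads, and suppressing readahead on the memory-mapped model file. The following is a minimal standalone sketch of those mechanisms, not the commit's actual implementation (which lives in ggml.c and llama-util.h); the sysfs probing, `pthread_setaffinity_np` pinning, and `posix_madvise` hint are illustrative stand-ins:

```cpp
// Illustrative sketch only; NOT the commit's code. Build: g++ -pthread numa_sketch.cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1
#endif

#include <cstdio>
#include <pthread.h>
#include <sched.h>
#include <sys/mman.h>
#include <unistd.h>

// Detect a NUMA system by counting node directories in sysfs.
static int count_numa_nodes() {
    int n = 0;
    char path[64];
    for (;;) {
        snprintf(path, sizeof(path), "/sys/devices/system/node/node%d", n);
        if (access(path, F_OK) != 0) {
            break;
        }
        n++;
    }
    return n;
}

// Pin the calling thread to a single CPU. A real scheduler would pin each
// worker to the whole set of CPUs belonging to one node, not one core.
static bool pin_current_thread_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
}

int main() {
    const int nodes = count_numa_nodes();
    printf("NUMA nodes detected: %d\n", nodes);

    if (nodes > 1) {
        // Trivial demonstration: pin the main thread to CPU 0.
        pin_current_thread_to_cpu(0);

        // For a mapping (the model file, in llama.cpp's case), hint the
        // kernel not to read ahead, so pages fault in on the node of the
        // thread that first touches them. Demonstrated here on an
        // anonymous mapping.
        const size_t len = 1 << 20;
        void * p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p != MAP_FAILED) {
            posix_madvise(p, len, POSIX_MADV_RANDOM);
            munmap(p, len);
        }
    }
    return 0;
}
```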
Diffstat (limited to 'examples/common.cpp')
-rw-r--r-- | examples/common.cpp | 5 |
1 file changed, 5 insertions, 0 deletions
```diff
diff --git a/examples/common.cpp b/examples/common.cpp
index 6ac4845..0023027 100644
--- a/examples/common.cpp
+++ b/examples/common.cpp
@@ -343,6 +343,8 @@ bool gpt_params_parse(int argc, char ** argv, gpt_params & params) {
             params.use_mmap = false;
         } else if (arg == "--mtest") {
             params.mem_test = true;
+        } else if (arg == "--numa") {
+            params.numa = true;
         } else if (arg == "--export") {
             params.export_cgraph = true;
         } else if (arg == "--verbose-prompt") {
@@ -488,6 +490,9 @@ void gpt_print_usage(int /*argc*/, char ** argv, const gpt_params & params) {
     if (llama_mmap_supported()) {
         fprintf(stderr, "  --no-mmap             do not memory-map model (slower load but may reduce pageouts if not using mlock)\n");
     }
+    fprintf(stderr, "  --numa                attempt optimizations that help on some NUMA systems\n");
+    fprintf(stderr, "                        if run without this previously, it is recommended to drop the system page cache before using this\n");
+    fprintf(stderr, "                        see https://github.com/ggerganov/llama.cpp/issues/1437\n");
 #ifdef LLAMA_SUPPORTS_GPU_OFFLOAD
     fprintf(stderr, "  -ngl N, --n-gpu-layers N\n");
     fprintf(stderr, "                        number of layers to store in VRAM\n");
```
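Downstream of this parsing change, the "llama : allow initializing the backend with NUMA support" bullet suggests the examples forward `params.numa` into backend initialization. A hedged sketch of such a call site, assuming a `llama_init_backend(bool numa)` signature; this is an illustration, not verbatim code from the tree:

```cpp
// Sketch of an example's startup path after this commit; assumes the repo's
// common.h (gpt_params, gpt_params_parse) and a llama_init_backend(bool)
// signature. Illustrative only.
#include "common.h"
#include "llama.h"

int main(int argc, char ** argv) {
    gpt_params params;
    if (!gpt_params_parse(argc, argv, params)) {
        return 1;
    }
    // forward the new --numa flag so the backend can enable node-aware
    // threading and the mmap readahead changes described above
    llama_init_backend(params.numa);
    // ... load the model and run as usual ...
    return 0;
}
```

Per the usage text added above, if the model was previously loaded without `--numa`, it is recommended to drop the system page cache before using the flag (see the linked issue #1437), since pages already resident in the cache may sit on a single node and defeat node-local placement.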