From 563cdc391dde140f1084d1012234e8e6f57f881f Mon Sep 17 00:00:00 2001
From: comex
Date: Fri, 24 Mar 2023 08:19:05 -0700
Subject: Support calling mlock() on loaded model data on Linux and macOS
 (#453)

* Support calling mlock() on loaded model data on Linux and macOS

This is enabled by a new --mlock command line option. Using mlock()
disables swapping and memory compression for the model data. Doing so
can be useful on systems where the model takes up a large fraction of
system RAM. In my experience, macOS is quite eager to start compressing
llama.cpp's memory, which then makes it halt for a few seconds while it
decompresses, even with a model that uses "only" 25GB out of 32GB.

Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.

In theory it should be possible to support this on Windows as well
using VirtualLock(), but I'm not much of a Windows user.

* Update llama.cpp

---------

Co-authored-by: Georgi Gerganov
---
 main.cpp | 1 +
 1 file changed, 1 insertion(+)

(limited to 'main.cpp')

diff --git a/main.cpp b/main.cpp
index 46a80ff..39dfc57 100644
--- a/main.cpp
+++ b/main.cpp
@@ -199,6 +199,7 @@ int main(int argc, char ** argv) {
         lparams.seed       = params.seed;
         lparams.f16_kv     = params.memory_f16;
         lparams.logits_all = params.perplexity;
+        lparams.use_mlock  = params.use_mlock;
         lparams.embedding  = params.embedding;

         ctx = llama_init_from_file(params.model.c_str(), lparams);
--
cgit v1.2.3
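
Note: for context, the sketch below shows what locking mapped model data
into RAM with mlock() looks like on Linux/macOS. It is not the actual
llama.cpp implementation (the real work happens behind the use_mlock
flag inside llama_init_from_file); the file handling here is purely
illustrative.

// Minimal sketch (assumption: model data is mmap'ed read-only from a
// file; this is NOT the llama.cpp code path, just the POSIX pattern).
#include <sys/mman.h>   // mmap, munmap, mlock, munlock
#include <sys/stat.h>   // fstat
#include <fcntl.h>      // open
#include <unistd.h>     // close
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model-file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); close(fd); return 1; }

    void * addr = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // mlock() pins the pages in physical memory so the kernel will not
    // swap or compress them. It can fail (e.g. RLIMIT_MEMLOCK too low),
    // so treating failure as non-fatal mirrors the "use with care,
    // off by default" spirit of the --mlock option.
    if (mlock(addr, st.st_size) != 0) {
        perror("mlock (continuing without locking)");
    }

    // ... use the model data at `addr` ...

    munlock(addr, st.st_size);
    munmap(addr, st.st_size);
    close(fd);
    return 0;
}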