author    Justine Tunney <jtunney@gmail.com>  2023-03-29 13:51:37 -0700
committer Justine Tunney <jtunney@gmail.com>  2023-03-30 12:28:25 -0700
commit    78ca9838ee36660a776e97e3391b6fb5dcaacf7f
tree      ff2208ce8c2ab62b71eb8b5db279619ac2680df2 /llama.h
parent    a017390358cdb23fffb30988dc84bb190d0403ca
Make loading weights 10-100x faster
This is a breaking change that's going to give you three benefits:

1. Your inference commands should load 100x faster
2. You may be able to safely load models 2x larger
3. You can run many concurrent inference processes

This was accomplished by changing the file format so we can mmap() weights directly into memory without having to read() or copy them, thereby ensuring the kernel can make its file cache pages directly accessible to our inference processes; and secondly, that the file cache pages are much less likely to get evicted (which would force loads to hit disk) because they're no longer competing with memory pages that were needlessly created by gigabytes of standard i/o.

The new file format supports single-file models like LLaMA 7B, and it also supports multi-file models like LLaMA 13B. Our Python tool now merges the foo.1, foo.2, etc. files back into a single file so that the C++ code which maps it doesn't need to reshape data every time. That's made llama.cpp so much simpler. Much of its load code has now been deleted.

Furthermore, this change ensures that tensors are aligned properly on a 32-byte boundary. That opens the door to seeing if we can get additional performance gains on some microprocessors, by using ops that require memory alignment.

Lastly, note that both POSIX and the Windows platform are supported.

Fixes #91
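As a rough illustration of the mmap() technique described above (a minimal POSIX sketch, not the actual llama.cpp loader; map_weights is a hypothetical helper name):

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map an entire weights file read-only. The kernel's page cache backs
     * the mapping, so many inference processes share one physical copy and
     * no user-space buffers are ever filled by read(). Hypothetical helper. */
    static void *map_weights(const char *path, size_t *size_out) {
        int fd = open(path, O_RDONLY);
        if (fd == -1) return NULL;
        struct stat st;
        if (fstat(fd, &st) != 0) { close(fd); return NULL; }
        void *addr = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        close(fd); /* the mapping remains valid after close() */
        if (addr == MAP_FAILED) return NULL;
        *size_out = (size_t)st.st_size;
        return addr; /* tensor data is used in place, zero-copy */
    }

Because such a mapping is read-only and file-backed, its pages are clean: under memory pressure the kernel can drop them and later re-fault them from the file, rather than writing them to swap, which is what gives the cache behavior the message describes.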
Diffstat (limited to 'llama.h')
-rw-r--r--  llama.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
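The 32-byte alignment the commit message mentions comes down to rounding each tensor's file offset up to the next multiple of 32 before writing it. A sketch of that arithmetic (align32 is our name for illustration, not an identifier from this tree):

    #include <stdint.h>

    /* Round a file offset up to the next 32-byte boundary so that tensor
     * data inside the mmap()'d file satisfies aligned-load requirements,
     * e.g. for SIMD instructions that want aligned operands. */
    static uint64_t align32(uint64_t offset) {
        return (offset + 31) & ~(uint64_t)31;
    }
    /* align32(100) == 128, align32(128) == 128 */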
diff --git a/llama.h b/llama.h
index 3368de3..258de5a 100644
--- a/llama.h
+++ b/llama.h
@@ -20,7 +20,7 @@
 #endif
 
 #define LLAMA_FILE_VERSION 1
-#define LLAMA_FILE_MAGIC 0x67676d66 // 'ggmf' in hex
+#define LLAMA_FILE_MAGIC 0x67676a74 // 'ggjt' in hex
 #define LLAMA_FILE_MAGIC_UNVERSIONED 0x67676d6c // pre-versioned files
 
 #ifdef __cplusplus
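The magic constants are ASCII tags packed into hex literals: 0x67676a74 is the bytes 'g' 'g' 'j' 't', and 0x67676d6c is 'g' 'g' 'm' 'l'. A hedged sketch of how a loader might gate on them (header_ok is hypothetical; the real loader's logic may differ):

    #include <stdint.h>
    #include <stdio.h>

    #define LLAMA_FILE_VERSION           1
    #define LLAMA_FILE_MAGIC             0x67676a74 // 'ggjt' in hex
    #define LLAMA_FILE_MAGIC_UNVERSIONED 0x67676d6c // 'ggml', pre-versioned files

    /* Hypothetical check: returns 1 if the file header is loadable.
     * Old 'ggmf' (0x67676d66) files fail here and must be reconverted,
     * since this commit is a breaking format change. */
    static int header_ok(FILE *f) {
        uint32_t magic, version;
        if (fread(&magic, sizeof magic, 1, f) != 1) return 0;
        if (magic == LLAMA_FILE_MAGIC_UNVERSIONED) return 1; /* no version field */
        if (magic != LLAMA_FILE_MAGIC) return 0;
        if (fread(&version, sizeof version, 1, f) != 1) return 0;
        return version == LLAMA_FILE_VERSION;
    }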