path: root/.gitignore
Age | Commit message | Author
2023-04-19 | ggml : use 8-bit precision for Q4_1 intermediate results (#1047) | Georgi Gerganov

* ggml : use 8-bit precision for Q4_1 intermediate results (ARM)
* ggml : optimize ggml_vec_dot_q4_1_q8_0() via vmlaq_n_f32 (56 ms/token with Q4_1!)
* ggml : AVX2 implementation of ggml_vec_dot_q4_1_q8_0 (#1051)
* gitignore : ignore ppl-*.txt files

Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
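The speedup rests on a simple factoring: with Q4_1 values x = d·q + m and Q8_0 values y = d'·q', each 32-element block's dot product reduces to d·d'·Σ(q·q') + m·d'·Σq', so the inner loops stay in 8-bit integers. Below is a scalar sketch under an assumed block layout of that era (the struct fields and nibble packing are assumptions, not ggml's exact definitions); the real ggml_vec_dot_q4_1_q8_0 vectorizes these loops with NEON (vmlaq_n_f32) or AVX2.

    // Assumed block layouts (hypothetical; meant to mirror ggml's Q4_1/Q8_0 of that era).
    #include <cstddef>
    #include <cstdint>

    constexpr int QK = 32; // values per block

    struct block_q4_1 { float d; float m; uint8_t qs[QK / 2]; }; // x = d*q + m, q in [0,15]
    struct block_q8_0 { float d; int8_t qs[QK]; };               // y = d*q, q in [-128,127]

    // Scalar reference: all multiplies inside a block are small-integer ops;
    // floats only appear once per block when the two partial sums are scaled.
    float vec_dot_q4_1_q8_0(size_t n, const block_q4_1 *x, const block_q8_0 *y) {
        float sum = 0.0f;
        for (size_t i = 0; i < n / QK; ++i) {
            int sum_qq = 0; // sum of q_x * q_y over the block
            int sum_qy = 0; // sum of q_y over the block (pairs with the Q4_1 min m)
            for (int j = 0; j < QK / 2; ++j) {
                const int q0 = x[i].qs[j] & 0x0F; // low nibble  -> element 2j
                const int q1 = x[i].qs[j] >> 4;   // high nibble -> element 2j+1
                sum_qq += q0 * y[i].qs[2*j] + q1 * y[i].qs[2*j + 1];
                sum_qy += y[i].qs[2*j] + y[i].qs[2*j + 1];
            }
            // (d_x*q_x + m_x)·(d_y*q_y) = d_x*d_y*Σ(q_x*q_y) + m_x*d_y*Σq_y
            sum += x[i].d * y[i].d * sum_qq + x[i].m * y[i].d * sum_qy;
        }
        return sum;
    }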
2023-04-18 | gitignore : vdot | Georgi Gerganov
2023-04-13 | gitignore : benchmark | Georgi Gerganov
2023-04-08 | Add quantize-stats command for testing quantization (#728) | unbounded

Adds a command that calculates statistics over the errors introduced by quantization (mean squared error, max error, and selected percentile errors) for layer weights. Should be useful for testing quantization improvements. Exposes some internal state from ggml and llama for testing.
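For illustration, a minimal sketch of the kind of statistics involved, not the actual quantize-stats code (the helper name and struct below are hypothetical): compare each layer's weights against their quantize-then-dequantize round trip.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct ErrorStats { double mse, max_err, p95, p99; };

    // orig and roundtrip hold the same layer's weights before and after a
    // quantize->dequantize pass; assumes both are non-empty and equal-sized.
    ErrorStats error_stats(const std::vector<float> &orig,
                           const std::vector<float> &roundtrip) {
        std::vector<double> abs_err(orig.size());
        double sq_sum = 0.0, max_err = 0.0;
        for (size_t i = 0; i < orig.size(); ++i) {
            const double e = std::fabs((double)orig[i] - (double)roundtrip[i]);
            abs_err[i] = e;
            sq_sum += e * e;
            max_err = std::max(max_err, e);
        }
        std::sort(abs_err.begin(), abs_err.end());
        auto pct = [&](double p) { return abs_err[(size_t)(p * (abs_err.size() - 1))]; };
        return { sq_sum / orig.size(), max_err, pct(0.95), pct(0.99) };
    }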
2023-04-05 | zig : add build.zig (#773) | iacore
Co-authored-by: Locria Cyber <74560659+locriacyber@users.noreply.github.com>
2023-03-30 | Make loading weights 10-100x faster | Justine Tunney

This is a breaking change that brings three benefits:

1. Your inference commands should load 100x faster
2. You may be able to safely load models 2x larger
3. You can run many concurrent inference processes

This was accomplished by changing the file format so we can mmap() weights directly into memory, without having to read() or copy them. That ensures the kernel can make its file cache pages directly accessible to our inference processes, and that those pages are much less likely to get evicted (which would force loads to hit disk), because they no longer compete with memory pages needlessly created by gigabytes of standard I/O.

The new file format supports single-file models like LLaMA 7B, and it also supports multi-file models like LLaMA 13B. Our Python tool now merges the foo.1, foo.2, etc. files back into a single file, so the C++ code that maps it doesn't need to reshape data every time. That has made llama.cpp much simpler; much of its load code has now been deleted.

Furthermore, this change ensures that tensors are aligned properly on a 32-byte boundary. That opens the door to additional performance gains on some microprocessors, via ops that require memory alignment.

Lastly, note that both POSIX and Windows are supported.

Fixes #91
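For context, the core mechanism is plain POSIX mmap(); a minimal sketch under assumptions (the actual loader also parses the model header, and Windows takes a different path via CreateFileMapping/MapViewOfFile):

    #include <cstdio>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc < 2) { std::fprintf(stderr, "usage: %s model.bin\n", argv[0]); return 1; }

        const int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

        // The kernel's page cache backs the mapping directly: no read() copies,
        // and concurrent processes mapping the same file share the pages.
        void *addr = mmap(nullptr, (size_t)st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        close(fd); // the mapping stays valid after close
        if (addr == MAP_FAILED) { perror("mmap"); return 1; }

        // With tensor data placed on 32-byte boundaries in the file, pointers
        // into the mapping can feed alignment-requiring SIMD loads directly.
        std::printf("mapped %lld bytes at %p\n", (long long)st.st_size, addr);

        munmap(addr, (size_t)st.st_size);
        return 0;
    }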
2023-03-28 | deploy : add a Package.swift for SwiftPM support (#393) | Jed Fox

* Add a Package.swift for SwiftPM support
* Swap from exclusions to allowlist
2023-03-28 | gitignore : add "embedding" | Georgi Gerganov
2023-03-25 | Overhaul the examples structure | Georgi Gerganov

- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"

Hope I didn't break something!
2023-03-17 | Nix flake (#40) | Niklas Korz

* Nix flake
* Nix: only add Accelerate framework on macOS
* Nix: development shell, direnv and compatibility
* Nix: use python packages supplied by withPackages
* Nix: remove channel compatibility
* Nix: fix ARM neon dotproduct on macOS

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-03-11 | .gitignore models/ | Georgi Gerganov
2023-03-10 | Initial release | Georgi Gerganov