path: root/Makefile
2023-05-16  Add alternate include path for openblas (#1476)  (sandyiscool)
In some Linux distributions (Fedora, for example), the include path for openblas is located at '/usr/local/include'.
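A minimal sketch of what such a tweak could look like in the Makefile (the LLAMA_OPENBLAS switch follows the project's convention; the exact committed lines may differ):

    ifdef LLAMA_OPENBLAS
        # search both the Debian-style path and the /usr/local prefix
        # used by e.g. Fedora
        CFLAGS  += -DGGML_USE_OPENBLAS -I/usr/include/openblas -I/usr/local/include
        LDFLAGS += -lopenblas
    endif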
2023-05-13  make : fix PERF build with cuBLAS  (Georgi Gerganov)
2023-05-05  makefile: automatic Arch Linux detection (#1332)  (DaniAndTheWeb)
This commit ports a detection method from koboldcpp's Makefile to automatically set the -lcblas linker option on Arch Linux.
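Arch's openblas package historically shipped the CBLAS interface as a separate libcblas, hence the extra linker flag. The detection presumably reads /etc/os-release, roughly like this (the exact grep patterns are an assumption):

    ifdef LLAMA_OPENBLAS
        ifneq ($(shell grep -e "Arch Linux" -e "ID_LIKE=arch" /etc/os-release 2>/dev/null),)
            LDFLAGS += -lopenblas -lcblas
        else
            LDFLAGS += -lopenblas
        endif
    endif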
2023-05-05  Fix for OpenCL / CLBlast builds on macOS (#1329)  (Ionoclast Laboratories)
2023-05-02  Call sh on build-info.sh (#1294)  (DannyDaemonic)
2023-05-01  Add git-based build information for better issue tracking (#1232)  (DannyDaemonic)
* Add git-based build information for better issue tracking
* macOS fix
* "build (hash)" and "CMAKE_SOURCE_DIR" changes
* Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages
* Fix conditional dependency on missing target
* Broke out build-info.cmake, added find_package fallback, and added build info to all examples, added dependencies to Makefile
* 4 space indenting for cmake, attempt to clean up my mess in Makefile
* Short hash, less fancy Makefile, and don't modify build-info.h if it wouldn't change it
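A hedged sketch of how git metadata can be captured at build time (variable and macro names here are illustrative; the commit itself generates a build-info.h via build-info.cmake and the Makefile):

    BUILD_NUMBER := $(shell git rev-list --count HEAD)
    BUILD_COMMIT := $(shell git rev-parse --short HEAD)
    # expose the values to the C/C++ sources as preprocessor defines
    CFLAGS   += -DBUILD_NUMBER=$(BUILD_NUMBER) -DBUILD_COMMIT=\"$(BUILD_COMMIT)\"
    CXXFLAGS += -DBUILD_NUMBER=$(BUILD_NUMBER) -DBUILD_COMMIT=\"$(BUILD_COMMIT)\"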
2023-04-30  build: add armv{6,7,8} support to cmake (#1251)  (Pavol Rusnak)
- flags copied from Makefile
- updated comments in both CMakeLists.txt and Makefile to match reality
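For reference, the per-arch flag blocks look roughly like this in Makefile form (assuming UNAME_M := $(shell uname -m), as the Makefile defines it; not the literal committed CMake):

    ifneq ($(filter armv6%,$(UNAME_M)),)
        CFLAGS += -mfpu=neon-fp-armv8 -mfp16-format=ieee -mno-unaligned-access
    endif
    ifneq ($(filter armv7%,$(UNAME_M)),)
        CFLAGS += -mfpu=neon-fp-armv8 -mfp16-format=ieee -mno-unaligned-access -funsafe-math-optimizations
    endif
    ifneq ($(filter armv8%,$(UNAME_M)),)
        CFLAGS += -mfp16-format=ieee -mno-unaligned-access
    endif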
2023-04-30  Various fixes to mat_mul benchmark (#1253)  (Stephan Walter)
2023-04-29  ggml : adjust mul_mat_f16 work memory (#1226)  (Georgi Gerganov)
* llama : minor - remove explicit int64_t cast
* ggml : reduce memory buffer for F16 mul_mat when not using cuBLAS
* ggml : add asserts to guard for incorrect wsize
2023-04-29  build : fix reference to old llama_util.h  (Georgi Gerganov)
2023-04-29  cuBLAS: use host pinned memory and dequantize while copying (#1207)  (slaren)
* cuBLAS: dequantize simultaneously while copying memory
* cuBLAS: use host pinned memory
* cuBLAS: improve ggml_compute_forward_mul_mat_f16_f32 with pinned memory
* cuBLAS: also pin kv cache
* fix rebase
2023-04-28  ggml : add CLBlast support (#1164)  (0cc4m)
* Allow use of OpenCL GPU-based BLAS using CLBlast instead of OpenBLAS for context processing
* Improve CLBlast implementation, avoid recreating buffers, remove redundant transfers
* Finish merge of CLBlast support
* Move CLBlast implementation to separate file
  Add buffer reuse code (adapted from slaren's cuda implementation)
* Add q4_2 and q4_3 CLBlast support, improve code
* Double CLBlast speed by disabling OpenBLAS thread workaround
  Co-authored-by: Concedo <39025047+LostRuins@users.noreply.github.com>
  Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
* Fix device selection env variable names
* Fix cast in opencl kernels
* Add CLBlast to CMakeLists.txt
* Replace buffer pool with static buffers a, b, qb, c
  Fix compile warnings
* Fix typos, use GGML_TYPE defines, improve code
* Improve btype dequant kernel selection code, add error if type is unsupported
* Improve code quality
* Move internal stuff out of header
* Use internal enums instead of CLBlast enums
* Remove leftover C++ includes and defines
* Make event use easier to read
  Co-authored-by: Henri Vasserman <henv@hot.ee>
* Use c compiler for opencl files
* Simplify code, fix include
* First check error, then release event
* Make globals static, fix indentation
* Rename dequant kernels file to conform with other file names
* Fix import cl file name
---------
Co-authored-by: Concedo <39025047+LostRuins@users.noreply.github.com>
Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
Co-authored-by: Henri Vasserman <henv@hot.ee>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-04-28  Add Manjaro CUDA include and lib dirs to Makefile (#1212)  (Johannes Gäßler)
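Manjaro, like Arch, installs the CUDA toolkit under /opt/cuda rather than /usr/local/cuda, so the addition is presumably along these lines (a sketch, not the verbatim diff):

    ifdef LLAMA_CUBLAS
        CFLAGS  += -DGGML_USE_CUBLAS -I/usr/local/cuda/include -I/opt/cuda/include
        LDFLAGS += -lcublas -lcudart -L/usr/local/cuda/lib64 -L/opt/cuda/lib64
    endif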
2023-04-24  Fix cuda compilation (#1128)  (slaren)
* Fix: Issue with CUBLAS compilation error due to missing -fPIC flag
---------
Co-authored-by: B1gM8c <89020353+B1gM8c@users.noreply.github.com>
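nvcc does not forward arbitrary flags to the host compiler by default; one way to get -fPIC through, sketched under that assumption (the rule shape is illustrative, and a real Makefile recipe line must be indented with a tab):

    ggml-cuda.o: ggml-cuda.cu ggml-cuda.h
        $(NVCC) $(NVCCFLAGS) -Xcompiler -fPIC -c $< -o $@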
2023-04-23  ggml : better PERF prints + support "LLAMA_PERF=1 make"  (Georgi Gerganov)
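Judging by the title, the new build switch presumably just maps onto ggml's perf define, along these lines (an assumption, not the verified diff):

    ifdef LLAMA_PERF
        CFLAGS   += -DGGML_PERF
        CXXFLAGS += -DGGML_PERF
    endif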
2023-04-22  ggml : fix AVX build + update to new Q8_0 format  (Georgi Gerganov)
2023-04-21  Improve cuBLAS performance by using a memory pool (#1094)  (slaren)
* Improve cuBLAS performance by using a memory pool
* Move cuda specific definitions to ggml-cuda.h/cu
* Add CXX flags to nvcc
* Change memory pool synchronization mechanism to a spin lock
  General code cleanup
2023-04-20  Add Q4_3 support to cuBLAS (#1086)  (slaren)
2023-04-20  fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080)  (源文雨)
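On glibc, shm_open lives in librt, so the usual fix for this undefined reference is an explicit -lrt on the cuBLAS link line; a sketch under that assumption:

    ifdef LLAMA_CUBLAS
        LDFLAGS += -lcublas -lcudart -L/usr/local/cuda/lib64 -lrt
    endif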
2023-04-20  Improve cuBLAS performance by dequantizing on the GPU (#1065)  (slaren)
2023-04-19  ggml : Q4 cleanup - remove 4-bit dot product code (#1061)  (Stephan Walter)
* Q4 cleanup
* Remove unused AVX512 Q4_0 code
2023-04-19  Add NVIDIA cuBLAS support (#1044)  (slaren)
2023-04-18  Adding a simple program to measure speed of dot products (#1041)  (Kawrakow)
On my Mac, the direct Q4_1 product is marginally slower than Q4_0 (~69 vs ~55 us). The SIMD-ified ggml version is now almost 2X slower (~121 us). On a Ryzen 7950X CPU, the direct product for Q4_1 quantization is faster than the AVX2 implementation (~60 vs ~62 us).
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-04-15  ggml : add Q8_0 quantization for intermediate results (#951)  (Georgi Gerganov)
* ggml : add Q8_0 quantization for intermediate results
* quantize-stats : fix test + add it to Makefile default
* Q8: use int8_t, AVX/AVX2 optimizations
* ggml : fix quantize_row_q8_0() ARM_NEON rounding
* minor : updates after rebase to latest master
* quantize-stats : delete obsolete strings
* ggml : fix q4_1 dot func
---------
Co-authored-by: Stephan Walter <stephan@walter.name>
2023-04-14  make : fix dependencies, use auto variables (#983)  (Stephan Walter)
2023-04-13  llama : merge llama_internal.h into llama.h  (Georgi Gerganov)
Hide it behind an #ifdef
2023-04-13  fix whitespace (#944)  (CRD716)
2023-04-13  benchmark : add tool for timing q4_0 matrix multiplication (#653)  (SebastianApel)
* Initial version of q4_0 matrix multiplication benchmark
* Bugfix: Added dependency to ggml.o to benchmark
* Reviewer requests: added parameter for threads, switched to ggml_time_us()
* Reviewer input: removed rtsc, use epsilon for check
* Review comment: Removed set_locale
* Feature: Param for number of iterations, Bugfix for use of parameter threads
* Reviewer suggestion: Moved to examples
* Reviewer feedback: Updated clean: and benchmark: sections
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-04-10  Rewrite loading code to try to satisfy everyone:  (comex)
- Support all three formats (ggml, ggmf, ggjt). (However, I didn't include the hack needed to support GPT4All files without conversion. Those can still be used after converting them with convert.py from my other PR.)
- Support both mmap and read (mmap is used by default, but can be disabled with `--no-mmap`, and is automatically disabled for pre-ggjt files or on platforms where mmap is not supported).
- Support multi-file models like before, but automatically determine the number of parts rather than requiring `--n_parts`.
- Improve validation and error checking.
- Stop using the per-file type field (f16) entirely in favor of just relying on the per-tensor type/size fields. This has no immediate benefit, but makes it easier to experiment with different formats, and should make it easier to support the new GPTQ-for-LLaMa models in the future (I have some work in progress on that front).
- Support VirtualLock on Windows (using the same `--mlock` option as on Unix).
- Indicate loading progress when using mmap + mlock. (Which led me to the interesting observation that on my Linux machine, with a warm file cache, mlock actually takes some time, whereas mmap without mlock starts almost instantly...)
  - To help implement this, move mlock support from ggml to the loading code.
- madvise/PrefetchVirtualMemory support (based on #740)
- Switch from ifstream to the `fopen` family of functions to avoid unnecessary copying and, when mmap is enabled, allow reusing the same file descriptor for both metadata reads and mmap (whereas the existing implementation opens the file a second time to mmap).
- Quantization now produces a single-file output even with multi-file inputs (not really a feature as much as 'it was easier this way').

Implementation notes: I tried to factor the code into more discrete pieces than before.

Regarding code style: I tried to follow the code style, but I'm naughty and used a few advanced C++ features repeatedly:
- Destructors to make it easier to ensure everything gets cleaned up.
- Exceptions. I don't even usually use exceptions when writing C++, and I can remove them if desired... but here they make the loading code much more succinct while still properly handling a variety of errors, ranging from API calls failing to integer overflow and allocation failure. The exceptions are converted to error codes at the API boundary.

Co-authored-by: Pavol Rusnak <pavol@rusnak.io> (for the bit I copied from #740)
2023-04-08  Add quantize-stats command for testing quantization (#728)  (unbounded)
Command that calculates some statistics over the errors introduced by quantization, like mean square error, max error, and some percentile errors for layer weights. Should be useful for testing quantization improvements. Exposes some internal state from ggml and llama for testing.
2023-04-07  make : add libllama.so target for llama-cpp-python (#797)  (bhubbb)
I was able to get llama-cpp-python working, but only when I built libllama.so with make.
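The target added here is essentially a one-rule shared-library build (flags hedged; the committed rule may differ slightly, and the recipe line must be tab-indented in a real Makefile):

    libllama.so: llama.o ggml.o
        $(CXX) $(CXXFLAGS) -shared -fPIC -o libllama.so llama.o ggml.o $(LDFLAGS)

With that in place, `make libllama.so` produces the library that llama-cpp-python loads.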
2023-04-05  make : missing host optimizations in CXXFLAGS (#763)  (Ivan Stepanov)
2023-04-02  make : use -march=native -mtune=native on x86 (#609)  (Fabian)
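The change boils down to a guarded pair of flag additions, roughly (assuming UNAME_M := $(shell uname -m) as elsewhere in the Makefile):

    ifeq ($(UNAME_M),$(filter $(UNAME_M),x86_64 i686))
        CFLAGS   += -march=native -mtune=native
        CXXFLAGS += -march=native -mtune=native
    endif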
2023-03-30  make : fix darwin f16c flags check (#615)  (david raistrick)
...there was no check. Ported upstream from https://github.com/zanussbaum/gpt4all.cpp/pull/2 (I don't see any clean path for upstream patches).
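On macOS a sysctl probe is the usual way to test for F16C on Intel CPUs; a sketch of such a check (variable names illustrative):

    ifeq ($(UNAME_S),Darwin)
        F16C_M := $(shell sysctl machdep.cpu.features 2>/dev/null)
        ifneq (,$(findstring F16C,$(F16C_M)))
            CFLAGS += -mf16c
        endif
    endif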
2023-03-28  all : be more strict about converting float to double (#458)  (Stephan Walter)
* Be more strict about converting float to double
* Test equivalence of round, SILU implementations
  Test module is commented out in CMakeLists.txt because the tests may take a long time, depending on how much the compiler optimizes.
* Fix softmax in perplexity.cpp
* all : prefer float over double where appropriate
* perplexity : add <cmath>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28  Add embedding example to Makefile (#540)  (RJ Adriaansen)
2023-03-25  Overhaul the examples structure  (Georgi Gerganov)
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"

Hope I didn't break something!
2023-03-24  additional optimizations for POWER9 (#454)  (Cameron Kaiser)
2023-03-23  Fix Makefile echo escape codes (by removing them). (#418)  (Kerfuffle)
2023-03-22  Introduce C-style API (#370)  (Georgi Gerganov)
* Major refactoring - introduce C-style API
* Clean up
* Add <cassert>
* Add <iterator>
* Add <algorithm> ...
* Fix timing reporting and accumulation
* Measure eval time only for single-token calls
* Change llama_tokenize return meaning
2023-03-21  makefile: Fix CPU feature detection on Haiku (#218)  (Alex von Gluck IV)
2023-03-21  Add OpenBSD support (#314)  (Kevin Lo)
2023-03-21  Makefile: slight cleanup for Mac Intel; echo instead of running ./main -h (#335)  (Qingyou Meng)
2023-03-21  Add tokenizer test + revert to C++11 (#355)  (Georgi Gerganov)
* Add test-tokenizer-0 to do a few tokenizations - feel free to expand
* Added option to convert-pth-to-ggml.py script to dump just the vocabulary
* Added ./models/ggml-vocab.bin containing just LLaMA vocab data (used for tests)
* Added utility to load vocabulary file from previous point (temporary implementation)
* Avoid using std::string_view and drop back to C++11 (hope I didn't break something)
* Rename gpt_vocab -> llama_vocab
* All CMake binaries go into ./bin/ now
2023-03-21  Add initial AVX512 support for dot product on Linux (#320)  (Casey Primozic)
* Update Makefile to detect AVX512 support and add compiler flags if it's available
* Based on existing AVX2 implementation, dot product on one 32-value block of 4-bit quantized ints at a time
* Perform 8 bit -> 16 bit sign extension and multiply+add on 32 values at a time instead of 16
* Use built-in AVX512 horizontal reduce add to get sum at the end
* Manual unrolling on inner dot product loop to reduce loop counter overhead
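The detection half of this change can be done by grepping /proc/cpuinfo, much like the pre-existing AVX2 check; a sketch (Linux-only, as the title says, and variable names are illustrative):

    AVX512F_M := $(shell grep "avx512f " /proc/cpuinfo)
    ifneq (,$(findstring avx512f,$(AVX512F_M)))
        CFLAGS += -mavx512f
    endif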
2023-03-20  sentencepiece bpe compatible tokenizer (#252)  (Mack Straight)
* potential out of bounds read
* fix quantize
* style
* Update convert-pth-to-ggml.py
* mild cleanup
* don't need the space-prefixing here rn since main.cpp already does it
* new file magic + version header field
* readme notice
* missing newlines

Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
2023-03-13  Add NetBSD support. (#90)  (Thomas Klausner)
2023-03-11  Update Makefile var + add comment  (Georgi Gerganov)
2023-03-10  Initial release  (Georgi Gerganov)