path: root/ggml.c
Age | Commit message | Author
2023-03-15 | inline -> static inline for "bytesFromNibbles" (#161) | hoangmit
Without "static" prefix, it fails to compile in clang
2023-03-14 | Don't use vdotq_s32 if it's not available (#139) | Ronsor
* Don't use vdotq_s32 if it's not available
  `dotprod` extensions aren't available on some ARM CPUs (e.g. Raspberry Pi 4), so check for them and only use them if they're available. Reintroduces the code removed in 84d9015 if `__ARM_FEATURE_DOTPROD` isn't defined.
* Update ggml.c
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
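The check described in this commit boils down to a compile-time guard. A minimal sketch of the pattern, assuming ARM NEON and <arm_neon.h> (the helper name and fallback are illustrative, not the actual ggml.c code): use vdotq_s32 when `__ARM_FEATURE_DOTPROD` is defined, otherwise widen to 16-bit products and pairwise-accumulate. The fallback places partial sums in different lanes than vdotq_s32 would, but the total across the accumulator is the same, which is what matters once it is horizontally reduced.

    #include <arm_neon.h>

    // Hypothetical helper: accumulate the dot product of two int8x16_t vectors
    // into an int32x4_t accumulator, using the dotprod extension when present.
    static inline int32x4_t dot_i8x16_sketch(int32x4_t acc, int8x16_t a, int8x16_t b) {
    #if defined(__ARM_FEATURE_DOTPROD)
        // Single instruction on CPUs with dotprod (e.g. Cortex-A75 and newer).
        return vdotq_s32(acc, a, b);
    #else
        // Fallback for CPUs without dotprod (e.g. the Cortex-A72 in a Raspberry Pi 4):
        // widen to 16-bit products, then pairwise-add into 32-bit lanes.
        const int16x8_t lo = vmull_s8(vget_low_s8(a),  vget_low_s8(b));
        const int16x8_t hi = vmull_s8(vget_high_s8(a), vget_high_s8(b));
        return vaddq_s32(acc, vaddq_s32(vpaddlq_s16(lo), vpaddlq_s16(hi)));
    #endif
    }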
2023-03-13 | Add NetBSD support. (#90) | Thomas Klausner
2023-03-13 | Use vdotq_s32 to improve performance (#67) | Georgi Gerganov
* 10% performance boost on ARM
* Back to original change
2023-03-13 | Revert "10% performance boost on ARM" | Georgi Gerganov
This reverts commit 113a9e83ebc0f788f861394437087bf3ca0e019b. There are some reports of illegal instruction errors. Moved this to the vdotq_s32 branch until resolved.
2023-03-13 | Check for vdotq_s32 availability | Georgi Gerganov
2023-03-13 | Amendment to previous commit - forgot to update the non-QRDMX branch | Georgi Gerganov
2023-03-13 | 10% performance boost on ARM | Georgi Gerganov
2023-03-12 | Windows fixes (#31) | Sebastián A
* Apply fixes suggested to build on Windows
  Issue: https://github.com/ggerganov/llama.cpp/issues/22
* Remove unsupported VLAs
* MSVC: Remove features that are only available on MSVC C++20.
* Fix zero initialization of the other fields.
* Change the use of vector for stack allocations.
2023-03-11 | Add AVX2 support for x86 architectures thanks to @Const-me! | Georgi Gerganov
2023-03-11 | Support all LLaMA models + change Q4_0 quantization storage | Georgi Gerganov
2023-03-10 | Initial release | Georgi Gerganov