path: root/ggml.c
Age        | Commit message | Author
2023-03-25 | Retire the ggml_mul_mat() branch for transposed src0 (#500) | Georgi Gerganov
2023-03-25 | Add AVX2 implementation of dequantize_row_q4_0 (#467) | slaren
2023-03-25 | Remove obsolete assert and fix compiler warning | Georgi Gerganov
2023-03-25 | Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS | Georgi Gerganov
2023-03-24 | Disable BLAS altogether - the bug is not just for quantized mat mul | Georgi Gerganov
2023-03-24 | Disable BLAS branch in mul_mat - seems there is a bug | Georgi Gerganov
2023-03-24 | Reduce memory usage and allocate enough memory for largest context (#473) | Georgi Gerganov
2023-03-24 | additional optimizations for POWER9 (#454) | Cameron Kaiser
2023-03-24 | Support calling mlock() on loaded model data on Linux and macOS (#453) | comex
2023-03-22 | Deduplicate q4 quantization functions (#383) | Stephan Walter
2023-03-22 | fix: add POSIX functionality for Linux compilation (#51) | Valentyn Bezshapkin
2023-03-22 | Introduce C-style API (#370) | Georgi Gerganov
2023-03-21 | Add OpenBSD support (#314) | Kevin Lo
2023-03-21 | Add initial AVX512 support for dot product on Linux (#320) | Casey Primozic
2023-03-19 | Change RMSNorm eps to 1e-6 (#173) | Georgi Gerganov
2023-03-17 | Don't tell users to use a bad number of threads (#243) | Stephan Walter
2023-03-17 | Q4_1 quantization (#193) | Matvey Soloviev
2023-03-15 | Fix RMS norm in GGML (#191) | Nebula
2023-03-16 | Add RMS norm and use it (#187) | hoangmit
2023-03-15 | inline -> static inline for "bytesFromNibbles" (#161) | hoangmit
2023-03-14 | Don't use vdotq_s32 if it's not available (#139) | Ronsor
2023-03-13 | Add NetBSD support. (#90) | Thomas Klausner
2023-03-13 | Use vdotq_s32 to improve performance (#67) | Georgi Gerganov
2023-03-13 | Revert "10% performance boost on ARM" | Georgi Gerganov
2023-03-13 | Check for vdotq_s32 availability | Georgi Gerganov
2023-03-13 | Amend to previous commit - forgot to update non-QRDMX branch | Georgi Gerganov
2023-03-13 | 10% performance boost on ARM | Georgi Gerganov
2023-03-12 | Windows fixes (#31) | Sebastián A
2023-03-11 | Add AVX2 support for x86 architectures thanks to @Const-me ! | Georgi Gerganov
2023-03-11 | Support all LLaMA models + change Q4_0 quantization storage | Georgi Gerganov
2023-03-10 | Initial release | Georgi Gerganov