llama.cpp.git (branch master) - commit log for root/ggml.c
Age         Commit message                                                Author
2023-03-22  fix: add POSIX functionality for Linux compilation (#51)      Valentyn Bezshapkin
2023-03-22  Introduce C-style API (#370)                                  Georgi Gerganov
2023-03-21  Add OpenBSD support (#314)                                    Kevin Lo
2023-03-21  Add initial AVX512 support for dot product on Linux (#320)    Casey Primozic
2023-03-19  Change RMSNorm eps to 1e-6 (#173)                             Georgi Gerganov
2023-03-17  Don't tell users to use a bad number of threads (#243)        Stephan Walter
2023-03-17  Q4_1 quantization (#193)                                      Matvey Soloviev
2023-03-15  Fix RMS norm in GGML (#191)                                   Nebula
2023-03-16  Add RMS norm and use it (#187)                                hoangmit
2023-03-15  inline -> static inline for "bytesFromNibbles" (#161)         hoangmit
2023-03-14  Don't use vdotq_s32 if it's not available (#139)              Ronsor
2023-03-13  Add NetBSD support. (#90)                                     Thomas Klausner
2023-03-13  Use vdotq_s32 to improve performance (#67)                    Georgi Gerganov
2023-03-13  Revert "10% performance boost on ARM"                         Georgi Gerganov
2023-03-13  Check for vdotq_s32 availability                              Georgi Gerganov
2023-03-13  Amend to previous commit - forgot to update non-QRDMX branch  Georgi Gerganov
2023-03-13  10% performance boost on ARM                                  Georgi Gerganov
2023-03-12  Windows fixes (#31)                                           Sebastián A
2023-03-11  Add AVX2 support for x86 architectures thanks to @Const-me !  Georgi Gerganov
2023-03-11  Support all LLaMA models + change Q4_0 quantization storage   Georgi Gerganov
2023-03-10  Initial release                                               Georgi Gerganov
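Three recurring technical threads in the log above are sketched below for context: the ARM vdotq_s32 work (#67, #139, and the availability-check commits), the RMS norm changes (#187, #191, #173), and Q4_1 quantization (#193).

The vdotq_s32 intrinsic requires the ARMv8.2 dot-product extension, which is why the commits above first check for its availability and then stop using it where it is absent. A minimal sketch of the usual guard-and-fallback pattern, assuming NEON intrinsics; this is illustrative, not necessarily the exact code in ggml.c:

    #include <arm_neon.h>

    #if !defined(__ARM_FEATURE_DOTPROD)
    // Fallback when the dot-product extension is missing: widen the 8-bit
    // products to 16 bits, then pairwise-add up to 32 bits. Lane placement
    // differs from the real vdotq_s32, but the horizontal sum across all
    // lanes is identical, which is all a dot product needs.
    static inline int32x4_t vdotq_s32(int32x4_t acc, int8x16_t a, int8x16_t b) {
        const int16x8_t p0 = vmull_s8(vget_low_s8 (a), vget_low_s8 (b));
        const int16x8_t p1 = vmull_s8(vget_high_s8(a), vget_high_s8(b));
        return vaddq_s32(acc, vaddq_s32(vpaddlq_s16(p0), vpaddlq_s16(p1)));
    }
    #endif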
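The RMS norm entries (#187, #191, #173) concern root-mean-square normalization, y_i = x_i / sqrt(mean(x^2) + eps), with eps set to 1e-6 by #173. A minimal C sketch of that formulation; the function name and signature here are illustrative, not ggml's API:

    #include <math.h>

    // y_i = x_i / sqrt(mean(x^2) + eps); eps = 1e-6 per commit #173.
    static void rms_norm(const float * x, float * y, int n) {
        const float eps = 1e-6f;
        double sum = 0.0;
        for (int i = 0; i < n; ++i) {
            sum += (double) x[i] * x[i];
        }
        const float scale = 1.0f / sqrtf((float)(sum / n) + eps);
        for (int i = 0; i < n; ++i) {
            y[i] = x[i] * scale;
        }
    }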
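Q4_1 (#193) extends the earlier Q4_0 storage (scale only, per the 2023-03-11 entry) with a per-block minimum, so each 4-bit code dequantizes as q * d + m. A hypothetical dequantization sketch, assuming 32 weights per block packed two nibbles per byte; the actual block layout in ggml.c may differ:

    #include <stdint.h>

    // Hypothetical Q4_1 block: 32 weights share a scale d and a minimum m;
    // each weight is a 4-bit code, two packed per byte.
    typedef struct {
        float   d;       // scale
        float   m;       // minimum
        uint8_t qs[16];  // 32 x 4-bit quantized values
    } block_q4_1;

    static void dequantize_block_q4_1(const block_q4_1 * b, float * out) {
        for (int i = 0; i < 16; ++i) {
            out[2*i + 0] = (b->qs[i] & 0x0F) * b->d + b->m; // low nibble
            out[2*i + 1] = (b->qs[i] >>   4) * b->d + b->m; // high nibble
        }
    }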