llama.cpp.git (branch: master): commit log for path root/Makefile

Age        | Commit message                                                  | Author
2023-04-13 | benchmark : add tool for timing q4_0 matrix multiplication (#653) | SebastianApel
2023-04-10 | Rewrite loading code to try to satisfy everyone: | comex
2023-04-08 | Add quantize-stats command for testing quantization (#728) | unbounded
2023-04-07 | make : add libllama.so target for llama-cpp-python (#797) | bhubbb
2023-04-05 | make : missing host optimizations in CXXFLAGS (#763) | Ivan Stepanov
2023-04-02 | make : use -march=native -mtune=native on x86 (#609) | Fabian
2023-03-30 | make : fix darwin f16c flags check (#615) | david raistrick
2023-03-28 | all : be more strict about converting float to double (#458) | Stephan Walter
2023-03-28 | Add embedding example to Makefile (#540) | RJ Adriaansen
2023-03-25 | Overhaul the examples structure | Georgi Gerganov
2023-03-24 | additional optimizations for POWER9 (#454) | Cameron Kaiser
2023-03-23 | Fix Makefile echo escape codes (by removing them). (#418) | Kerfuffle
2023-03-22 | Introduce C-style API (#370) | Georgi Gerganov
2023-03-21 | makefile: Fix CPU feature detection on Haiku (#218) | Alex von Gluck IV
2023-03-21 | Add OpenBSD support (#314) | Kevin Lo
2023-03-21 | Makefile: slightly cleanup for Mac Intel; echo instead of run ./main -h (#335) | Qingyou Meng
2023-03-21 | Add tokenizer test + revert to C++11 (#355) | Georgi Gerganov
2023-03-21 | Add initial AVX512 support for dot product on Linux (#320) | Casey Primozic
2023-03-20 | sentencepiece bpe compatible tokenizer (#252) | Mack Straight
2023-03-13 | Add NetBSD support. (#90) | Thomas Klausner
2023-03-11 | Update Makefile var + add comment | Georgi Gerganov
2023-03-10 | Initial release | Georgi Gerganov