llama.cpp.git (branch: master)
path: /Makefile
Age         Commit message                                                              Author
2023-06-17  Server Example Refactor and Improvements (#1570)                            Randall Fitzgerald
2023-06-16  examples : add "simple" (#1840)                                             SuperUserNameMan
2023-06-16  CUDA : faster k-quant dot kernels (#1862)                                   Kawrakow
2023-06-15  make : add train-text-from-scratch (#1850)                                  daboe01
2023-06-15  make : clean *.so files (#1857)                                             sandyiscool
2023-06-13  Allow "quantizing" to f16 and f32 (#1787)                                   Kerfuffle
2023-06-10  make : add SSSE3 compilation use case (#1659)                               rankaiyx
2023-06-07  k-quants : allow to optionally disable at compile time (#1734)              Georgi Gerganov
2023-06-06  ggml : fix builds, add ggml-quants-k.o (close #1712, close #1710)           Georgi Gerganov
2023-06-05  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)                       Kawrakow
2023-06-04  llama : Metal inference (#1642)                                             Georgi Gerganov
2023-05-28  LLAMA_DEBUG adds debug symbols (#1617)                                      Johannes Gäßler
2023-05-27  Include server in releases + other build system cleanups (#1610)            Kerfuffle
2023-05-26  cuda : performance optimizations (#1530)                                    Johannes Gäßler
2023-05-23  OpenCL Token Generation Acceleration (#1459)                                0cc4m
2023-05-21  make : .PHONY clean (#1553)                                                 Stefan Sydow
2023-05-20  feature : support blis and other blas implementation (#1536)                Zenix
2023-05-20  Revert "feature : add blis and other BLAS implementation support (#1502)"   Georgi Gerganov
2023-05-20  feature : add blis and other BLAS implementation support (#1502)            Zenix
2023-05-16  Add alternate include path for openblas (#1476)                             sandyiscool
2023-05-13  make : fix PERF build with cuBLAS                                           Georgi Gerganov
2023-05-05  makefile: automatic Arch Linux detection (#1332)                            DaniAndTheWeb
2023-05-05  Fix for OpenCL / clbast builds on macOS. (#1329)                            Ionoclast Laboratories
2023-05-02  Call sh on build-info.sh (#1294)                                            DannyDaemonic
2023-05-01  Add git-based build information for better issue tracking (#1232)           DannyDaemonic
2023-04-30  build: add armv{6,7,8} support to cmake (#1251)                             Pavol Rusnak
2023-04-30  Various fixes to mat_mul benchmark (#1253)                                  Stephan Walter
2023-04-29  ggml : adjust mul_mat_f16 work memory (#1226)                               Georgi Gerganov
2023-04-29  build : fix reference to old llama_util.h                                   Georgi Gerganov
2023-04-29  cuBLAS: use host pinned memory and dequantize while copying (#1207)         slaren
2023-04-28  ggml : add CLBlast support (#1164)                                          0cc4m
2023-04-28  Add Manjaro CUDA include and lib dirs to Makefile (#1212)                   Johannes Gäßler
2023-04-24  Fix cuda compilation (#1128)                                                slaren
2023-04-23  ggml : better PERF prints + support "LLAMA_PERF=1 make"                     Georgi Gerganov
2023-04-22  ggml : fix AVX build + update to new Q8_0 format                            Georgi Gerganov
2023-04-21  Improve cuBLAS performance by using a memory pool (#1094)                   slaren
2023-04-20  Add Q4_3 support to cuBLAS (#1086)                                          slaren
2023-04-20  fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080)                  源文雨
2023-04-20  Improve cuBLAS performance by dequantizing on the GPU (#1065)               slaren
2023-04-19  ggml : Q4 cleanup - remove 4-bit dot product code (#1061)                   Stephan Walter
2023-04-19  Add NVIDIA cuBLAS support (#1044)                                           slaren
2023-04-18  Adding a simple program to measure speed of dot products (#1041)            Kawrakow
2023-04-15  ggml : add Q8_0 quantization for intermediate results (#951)                Georgi Gerganov
2023-04-14  make : fix dependencies, use auto variables (#983)                          Stephan Walter
2023-04-13  llama : merge llama_internal.h into llama.h                                 Georgi Gerganov
2023-04-13  fix whitespace (#944)                                                       CRD716
2023-04-13  benchmark : add tool for timing q4_0 matrix multiplication (#653)           SebastianApel
2023-04-10  Rewrite loading code to try to satisfy everyone:                            comex
2023-04-08  Add quantize-stats command for testing quantization (#728)                  unbounded
2023-04-07  make : add libllama.so target for llama-cpp-python (#797)                   bhubbb