llama.cpp.git (branch: master)
path: root / ggml-cuda.cu

Date        Author           Commit message
2023-05-26  Johannes Gäßler  cuda : performance optimizations (#1530)
2023-05-20  Johannes Gäßler  cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti...
2023-05-19  Georgi Gerganov  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)
2023-05-14  Johannes Gäßler  cuda : deduplicated dequantization code (#1453)
2023-05-13  Georgi Gerganov  cuda : fix convert function (#1412)
2023-05-13  Johannes Gäßler  ggml : GPU-accelerated token generation (#1412)
2023-05-12  Georgi Gerganov  ggml : remove bit shuffling (#1405)
2023-05-08  Johannes Gäßler  Documented CUDA reproducibility, added warning (#1346)
2023-05-01  slaren           cuBLAS: refactor and optimize f16 mat mul performance (#1259)
2023-05-01  slaren           cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)
2023-04-29  slaren           cuBLAS: use host pinned memory and dequantize while copying (#1207)
2023-04-29  Henri Vasserman  cuBLAS: non-contiguous tensor support (#1215)
2023-04-28  Stephan Walter   Remove Q4_3 which is no better than Q5 (#1218)
2023-04-26  Georgi Gerganov  ggml : add Q5_0 and Q5_1 quantization (#1187)
2023-04-25  Georgi Gerganov  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (...
2023-04-21  slaren           Improve cuBLAS performance by using a memory pool (#1094)
2023-04-20  slaren           Add Q4_3 support to cuBLAS (#1086)
2023-04-20  slaren           Improve cuBLAS performance by dequantizing on the GPU (#1065)