path: root/ggml-cuda.cu
Age         Commit message                                                                        Author
2023-06-09  Windows nvcc workaround (#1753)                                                       Johannes Gäßler
2023-06-07  k-quants : allow to optionally disable at compile time (#1734)                        Georgi Gerganov
2023-06-06  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)                         Johannes Gäßler
2023-06-05  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)                                 Kawrakow
2023-05-26  cuda : performance optimizations (#1530)                                              Johannes Gäßler
2023-05-20  cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti...      Johannes Gäßler
2023-05-19  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)                             Georgi Gerganov
2023-05-14  cuda : deduplicated dequantization code (#1453)                                       Johannes Gäßler
2023-05-13  cuda : fix convert function (#1412)                                                   Georgi Gerganov
2023-05-13  ggml : GPU-accelerated token generation (#1412)                                       Johannes Gäßler
2023-05-12  ggml : remove bit shuffling (#1405)                                                   Georgi Gerganov
2023-05-08  Documented CUDA reproducibility, added warning (#1346)                                Johannes Gäßler
2023-05-01  cuBLAS: refactor and optimize f16 mat mul performance (#1259)                         slaren
2023-05-01  cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)                    slaren
2023-04-29  cuBLAS: use host pinned memory and dequantize while copying (#1207)                   slaren
2023-04-29  cuBLAS: non-contiguous tensor support (#1215)                                         Henri Vasserman
2023-04-28  Remove Q4_3 which is no better than Q5 (#1218)                                        Stephan Walter
2023-04-26  ggml : add Q5_0 and Q5_1 quantization (#1187)                                         Georgi Gerganov
2023-04-25  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (...     Georgi Gerganov
2023-04-21  Improve cuBLAS performance by using a memory pool (#1094)                             slaren
2023-04-20  Add Q4_3 support to cuBLAS (#1086)                                                    slaren
2023-04-20  Improve cuBLAS performance by dequantizing on the GPU (#1065)                         slaren