Age | Commit message | Author |
--- | --- | --- |
2023-06-26 | k-quants : support for super-block size of 64 (#2001) | Kawrakow |
2023-06-26 | Fix assert when free invalid cuda pointer (#2005) | Howard Su |
2023-06-24 | #1869 Fix null reference errors when training from scratch with CUDA (#1907) | Robyn |
2023-06-19 | cuda : faster k-quants on older GPUs (#1930) | Kawrakow |
2023-06-19 | Convert vector to f16 for dequantize mul mat vec (#1913) | Johannes Gäßler |
2023-06-17 | Only one CUDA stream per device for async compute (#1898) | Johannes Gäßler |
2023-06-17 | ggml : fix warnings under MSVC (#1908) | Howard Su |
2023-06-16 | CUDA : faster k-quant dot kernels (#1862) | Kawrakow |
2023-06-15 | Fixed CUDA runtime version check (#1879) | Johannes Gäßler |
2023-06-15 | Fix the validation of main device (#1872) | Howard Su |
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler |
2023-06-12 | Leverage mmap for offloading tensors to GPU (#1597) | Howard Su |
2023-06-11 | Fixed WSL cuda's OOM error (#1594) | Kyle Liang |
2023-06-09 | Windows nvcc workaround (#1753) | Johannes Gäßler |
2023-06-07 | k-quants : allow to optionally disable at compile time (#1734) | Georgi Gerganov |
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler |
2023-06-05 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | Kawrakow |
2023-05-26 | cuda : performance optimizations (#1530) | Johannes Gäßler |
2023-05-20 | cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti... | Johannes Gäßler |
2023-05-19 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | Georgi Gerganov |
2023-05-14 | cuda : deduplicated dequantization code (#1453) | Johannes Gäßler |
2023-05-13 | cuda : fix convert function (#1412) | Georgi Gerganov |
2023-05-13 | ggml : GPU-accelerated token generation (#1412) | Johannes Gäßler |
2023-05-12 | ggml : remove bit shuffling (#1405) | Georgi Gerganov |
2023-05-08 | Documented CUDA reproducibility, added warning (#1346) | Johannes Gäßler |
2023-05-01 | cuBLAS: refactor and optimize f16 mat mul performance (#1259) | slaren |
2023-05-01 | cuBLAS: fall back to pageable memory if pinned alloc fails (#1233) | slaren |
2023-04-29 | cuBLAS: use host pinned memory and dequantize while copying (#1207) | slaren |
2023-04-29 | cuBLAS: non-contiguous tensor support (#1215) | Henri Vasserman |
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218) | Stephan Walter |
2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187) | Georgi Gerganov |
2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (... | Georgi Gerganov |
2023-04-21 | Improve cuBLAS performance by using a memory pool (#1094) | slaren |
2023-04-20 | Add Q4_3 support to cuBLAS (#1086) | slaren |
2023-04-20 | Improve cuBLAS performance by dequantizing on the GPU (#1065) | slaren |