Age | Commit message | Author |
2023-07-21 | llama : make tensor_split ptr instead of array (#2272) | Georgi Gerganov |
2023-07-17 | Support dup & cont ops on CUDA (#2242) | Jiahao Li |
2023-07-14 | cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer ... | Bach Le |
2023-07-14 | cuda : support broadcast add & mul (#2192) | Jiahao Li |
2023-07-14 | CUDA: mul_mat_vec_q kernels for k-quants (#2203) | Johannes Gäßler |
2023-07-14 | ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope) | Georgi Gerganov |
2023-07-13 | Fix compile error on Windows CUDA (#2207) | Howard Su |
2023-07-12 | cuda : add gelu support | Georgi Gerganov |
2023-07-12 | Fixed __dp4a compute capability: 6.0 -> 6.1 (#2189) | Johannes Gäßler |
2023-07-12 | ggml : revert CUDA broadcast changes from #2183 (#2191) | Georgi Gerganov |
2023-07-11 | ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183) | Georgi Gerganov |
2023-07-11 | ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178) | Spencer Sutton |
2023-07-08 | Fixed OpenLLaMA 3b CUDA mul_mat_vec_q (#2144) | Johannes Gäßler |
2023-07-08 | CUDA: add __restrict__ to mul mat vec kernels (#2140) | Johannes Gäßler |
2023-07-05 | Quantized dot products for CUDA mul mat vec (#2067) | Johannes Gäßler |
2023-07-03 | Fix crash of test-tokenizer-0 under Debug build (#2064) | Howard Su |
2023-07-01 | Better CUDA synchronization logic (#2057) | Johannes Gäßler |
2023-06-28 | cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028) | Salvador E. Tropea |
2023-06-28 | cuda : fix missing const qualifier in casts (#2027) | Salvador E. Tropea |
2023-06-28 | CUDA GPU acceleration for LoRAs + f16 models (#1970) | Johannes Gäßler |
2023-06-26 | k-quants : support for super-block size of 64 (#2001) | Kawrakow |
2023-06-26 | Fix assert when free invalid cuda pointer (#2005) | Howard Su |
2023-06-24 | #1869 Fix null reference errors when training from scratch with CUDA (#1907) | Robyn |
2023-06-19 | cuda : faster k-quants on older GPUs (#1930) | Kawrakow |
2023-06-19 | Convert vector to f16 for dequantize mul mat vec (#1913) | Johannes Gäßler |
2023-06-17 | Only one CUDA stream per device for async compute (#1898) | Johannes Gäßler |
2023-06-17 | ggml : fix warnings under MSVC (#1908) | Howard Su |
2023-06-16 | CUDA : faster k-quant dot kernels (#1862) | Kawrakow |
2023-06-15 | Fixed CUDA runtime version check (#1879) | Johannes Gäßler |
2023-06-15 | Fix the validation of main device (#1872) | Howard Su |
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler |
2023-06-12 | Leverage mmap for offloading tensors to GPU (#1597) | Howard Su |
2023-06-11 | Fixed WSL cuda's OOM error (#1594) | Kyle Liang |
2023-06-09 | Windows nvcc workaround (#1753) | Johannes Gäßler |
2023-06-07 | k-quants : allow to optionally disable at compile time (#1734) | Georgi Gerganov |
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler |
2023-06-05 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | Kawrakow |
2023-05-26 | cuda : performance optimizations (#1530) | Johannes Gäßler |
2023-05-20 | cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti... | Johannes Gäßler |
2023-05-19 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | Georgi Gerganov |
2023-05-14 | cuda : deduplicated dequantization code (#1453) | Johannes Gäßler |
2023-05-13 | cuda : fix convert function (#1412) | Georgi Gerganov |
2023-05-13 | ggml : GPU-accelerated token generation (#1412) | Johannes Gäßler |
2023-05-12 | ggml : remove bit shuffling (#1405) | Georgi Gerganov |
2023-05-08 | Documented CUDA reproducibility, added warning (#1346) | Johannes Gäßler |
2023-05-01 | cuBLAS: refactor and optimize f16 mat mul performance (#1259) | slaren |
2023-05-01 | cuBLAS: fall back to pageable memory if pinned alloc fails (#1233) | slaren |
2023-04-29 | cuBLAS: use host pinned memory and dequantize while copying (#1207) | slaren |
2023-04-29 | cuBLAS: non-contiguous tensor support (#1215) | Henri Vasserman |
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218) | Stephan Walter |