path: root/ggml-cuda.cu
Age        | Commit message | Author
2023-07-21 | Custom RoPE + better memory management for CUDA (#2295) | Kawrakow
2023-07-21 | llama : make tensor_split ptr instead of array (#2272) | Georgi Gerganov
2023-07-17 | Support dup & cont ops on CUDA (#2242) | Jiahao Li
2023-07-14 | cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer ... | Bach Le
2023-07-14 | cuda : support broadcast add & mul (#2192) | Jiahao Li
2023-07-14 | CUDA: mul_mat_vec_q kernels for k-quants (#2203) | Johannes Gäßler
2023-07-14 | ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope) | Georgi Gerganov
2023-07-13 | Fix compile error on Windows CUDA (#2207) | Howard Su
2023-07-12 | cuda : add gelu support | Georgi Gerganov
2023-07-12 | Fixed __dp4a compute capability: 6.0 -> 6.1 (#2189) | Johannes Gäßler
2023-07-12 | ggml : revert CUDA broadcast changes from #2183 (#2191) | Georgi Gerganov
2023-07-11 | ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183) | Georgi Gerganov
2023-07-11 | ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178) | Spencer Sutton
2023-07-08 | Fixed OpenLLaMA 3b CUDA mul_mat_vec_q (#2144) | Johannes Gäßler
2023-07-08 | CUDA: add __restrict__ to mul mat vec kernels (#2140) | Johannes Gäßler
2023-07-05 | Quantized dot products for CUDA mul mat vec (#2067) | Johannes Gäßler
2023-07-03 | Fix crash of test-tokenizer-0 under Debug build (#2064) | Howard Su
2023-07-01 | Better CUDA synchronization logic (#2057) | Johannes Gäßler
2023-06-28 | cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028) | Salvador E. Tropea
2023-06-28 | cuda : fix missing const qualifier in casts (#2027) | Salvador E. Tropea
2023-06-28 | CUDA GPU acceleration for LoRAs + f16 models (#1970) | Johannes Gäßler
2023-06-26 | k-quants : support for super-block size of 64 (#2001) | Kawrakow
2023-06-26 | Fix assert when free invalid cuda pointer (#2005) | Howard Su
2023-06-24 | #1869 Fix null reference errors when training from scratch with CUDA (#1907) | Robyn
2023-06-19 | cuda : faster k-quants on older GPUs (#1930) | Kawrakow
2023-06-19 | Convert vector to f16 for dequantize mul mat vec (#1913) | Johannes Gäßler
2023-06-17 | Only one CUDA stream per device for async compute (#1898) | Johannes Gäßler
2023-06-17 | ggml : fix warnings under MSVC (#1908) | Howard Su
2023-06-16 | CUDA : faster k-quant dot kernels (#1862) | Kawrakow
2023-06-15 | Fixed CUDA runtime version check (#1879) | Johannes Gäßler
2023-06-15 | Fix the validation of main device (#1872) | Howard Su
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler
2023-06-12 | Leverage mmap for offloading tensors to GPU (#1597) | Howard Su
2023-06-11 | Fixed WSL cuda's OOM error (#1594) | Kyle Liang
2023-06-09 | Windows nvcc workaround (#1753) | Johannes Gäßler
2023-06-07 | k-quants : allow to optionally disable at compile time (#1734) | Georgi Gerganov
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler
2023-06-05 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | Kawrakow
2023-05-26 | cuda : performance optimizations (#1530) | Johannes Gäßler
2023-05-20 | cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti... | Johannes Gäßler
2023-05-19 | ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508) | Georgi Gerganov
2023-05-14 | cuda : deduplicated dequantization code (#1453) | Johannes Gäßler
2023-05-13 | cuda : fix convert function (#1412) | Georgi Gerganov
2023-05-13 | ggml : GPU-accelerated token generation (#1412) | Johannes Gäßler
2023-05-12 | ggml : remove bit shuffling (#1405) | Georgi Gerganov
2023-05-08 | Documented CUDA reproducibility, added warning (#1346) | Johannes Gäßler
2023-05-01 | cuBLAS: refactor and optimize f16 mat mul performance (#1259) | slaren
2023-05-01 | cuBLAS: fall back to pageable memory if pinned alloc fails (#1233) | slaren
2023-04-29 | cuBLAS: use host pinned memory and dequantize while copying (#1207) | slaren
2023-04-29 | cuBLAS: non-contiguous tensor support (#1215) | Henri Vasserman