path: root/ggml.c
Age         Commit message                                                                  Author
2023-06-25  ggml : sync latest ggml (custom operators)  Georgi Gerganov
2023-06-24  #1869 Fix null reference errors when training from scratch with CUDA (#1907)  Robyn
2023-06-24  ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978)  slaren
2023-06-19  ggml : fix bug in LBFGS optimizer (found by ggml tests)  Georgi Gerganov
2023-06-19  ggml : sync latest ggml repo (#1924)  Georgi Gerganov
2023-06-18  ggml : fix bug in ggml_compute_forward_add_q_f32 (#1918)  l3utterfly
2023-06-18  metal : handle buffers larger than device's maxBufferLength (#1826)  Georgi Gerganov
2023-06-16  build : fix and ignore MSVC warnings (#1889)  Borislav Stanimirov
2023-06-14  CUDA full GPU acceleration, KV cache in VRAM (#1827)  Johannes Gäßler
2023-06-13  train : improved training-from-scratch example (#1652)  xaedes
2023-06-13  Allow "quantizing" to f16 and f32 (#1787)  Kerfuffle
2023-06-10  ggml : force no_alloc == false when creating opt tensors (close #1699)  Georgi Gerganov
2023-06-10  ggml : workaround for missing _mm256_setr_m128i in GCC < 8 (#1638)  Xingchen Song (宋星辰)
2023-06-08  ggml : fix fprintf warnings (#1720)  Steven Roussey
2023-06-07  k-quants : allow to optionally disable at compile time (#1734)  Georgi Gerganov
2023-06-06  llama : fix compile warnings  Georgi Gerganov
2023-06-06  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)  Johannes Gäßler
2023-06-06  ggml : fix builds, add ggml-quants-k.o (close #1712, close #1710)  Georgi Gerganov
2023-06-05  metal : use shared buffers between CPU and GPU (#1696)  kiltyj
2023-06-05  ggml : fix internal overflow in ggml_time_us on Windows (#1702)  grahameth
2023-06-05  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)  Kawrakow
2023-06-04  llama : Metal inference (#1642)  Georgi Gerganov
2023-06-04  OpenCL: Fix duplication of layers in VRAM and RAM, add GPU mul kernel (#1653)  0cc4m
2023-05-29  ggml : sync cgraph import / export API  Georgi Gerganov
2023-05-29  ggml : fix bug in ggml_alibi  Georgi Gerganov
2023-05-27  ggml : add support for the RISCV architecture (#1616)  apcameron
2023-05-27  ggml : add ggml_tensor_overhead()  Georgi Gerganov
2023-05-27  ggml : sync ggml core (minor additions, e.g. ggml_get_tensor_by_name())  Georgi Gerganov
2023-05-23  OpenCL Token Generation Acceleration (#1459)  0cc4m
2023-05-21  ggml : output 3d sizes in ggml_graph_dump_dot()  Georgi Gerganov
2023-05-20  ggml : update WASM SIMD  Georgi Gerganov
2023-05-20  ggml : add ggml_clamp() (#1539)  Georgi Gerganov
2023-05-20  cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti...  Johannes Gäßler
2023-05-20  llama : fix name shadowing and C4146 (#1526)  Maxime
2023-05-20  ggml : fix scalar implementation of Q4_1 dot  Georgi Gerganov
2023-05-19  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)  Georgi Gerganov
2023-05-16  ~7% faster Q5_1 AVX2 code (#1477)  Ilya Kurdyukov
2023-05-14  ggml : alternative fix for race condition bug in non-inplace ggml_compute_for...  xaedes
2023-05-14  ggml : various fixes (#1450)  Georgi Gerganov
2023-05-14  ggml : add AVX support based on AVX2 code (#1430)  katsu560
2023-05-13  ggml : multi-thread mul and diag_mask ops (#1428)  Georgi Gerganov
2023-05-13  ggml : GPU-accelerated token generation (#1412)  Johannes Gäßler
2023-05-13  ggml : implement backward pass for llama + small training-llama-from-scratch ...  xaedes
2023-05-13  ggml : sync alibi fix from ggml repo  Georgi Gerganov
2023-05-13  Adding SSE instructions to ggml_vec_dot_q4_0_q8_0 (#1413)  3ooabkhxtn
2023-05-12  ggml : remove bit shuffling (#1405)  Georgi Gerganov
2023-05-09  use pause asm insn in busyloop to run the CPU (13600K) 10 °C cooler (#1314)  Sami Farin
2023-05-06  ggml : Allow usage of CLBlast alongside Accelerate.framework (#1336)  swittk
2023-05-04  ggml : change immintrin.h to intrin.h for compatibility (#1307)  Ron Jailall
2023-05-03  ggml : vectorize Q8_0 quantization  Georgi Gerganov