path: root/ggml.h
Age         Commit message  (Author)
2023-07-30  ggml : add graph tensor allocator (#2411)  (slaren)
2023-07-26  ggml : allocate graphs in a context (#2392)  (slaren)
2023-07-25  ggml : improve graph build time via hash table lookup (#2329)  (slaren)
2023-07-24  make rms_norm_eps a parameter (#2374)  (slaren)
2023-07-24  ggml : sync (unary ops refactor, static-correctness) (#2370)  (Georgi Gerganov)
2023-07-23  ggml: move op parameters from tensors to ggml_tensor::op_params (#2333)  (slaren)
2023-07-21  ggml : fix rope args order + assert (#2054)  (Georgi Gerganov)
2023-07-15  llama : add custom RoPE (#2054)  (Xiao-Yong Jin)
2023-07-12  ggml : add ggml_pool_1d and ggml_pool_2d  (Georgi Gerganov)
2023-07-11  ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)  (Georgi Gerganov)
2023-07-11  ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)  (Spencer Sutton)
2023-07-07  ggml : change ggml_graph_compute() API to not require context (#1999)  (Qingyou Meng)
2023-07-06  ggml : fix restrict usage  (Georgi Gerganov)
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)  (Stephan Walter)
2023-07-04  ggml : sync latest (new ops, macros, refactoring) (#2106)  (Georgi Gerganov)
2023-07-01  ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995)  (Qingyou Meng)
2023-06-27  ggml : add support for ChatGLM RoPE  (Georgi Gerganov)
2023-06-26  ggml : increase max tensor name + clean up compiler warnings in train-text (#...  (David Yang)
2023-06-26  ggml : add NUMA support (#1556)  (zrm)
2023-06-25  ggml : sync latest ggml (custom operators)  (Georgi Gerganov)
2023-06-24  ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978)  (slaren)
2023-06-19  ggml : sync latest ggml repo (#1924)  (Georgi Gerganov)
2023-06-18  metal : handle buffers larger than device's maxBufferLength (#1826)  (Georgi Gerganov)
2023-06-14  CUDA full GPU acceleration, KV cache in VRAM (#1827)  (Johannes Gäßler)
2023-06-13  train : improved training-from-scratch example (#1652)  (xaedes)
2023-06-06  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)  (Johannes Gäßler)
2023-06-05  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)  (Kawrakow)
2023-06-04  llama : Metal inference (#1642)  (Georgi Gerganov)
2023-05-29  ggml : sync cgraph import / export API  (Georgi Gerganov)
2023-05-27  ggml : add ggml_tensor_overhead()  (Georgi Gerganov)
2023-05-27  ggml : sync ggml core (minor additions, e.g. ggml_get_tensor_by_name())  (Georgi Gerganov)
2023-05-23  OpenCL Token Generation Acceleration (#1459)  (0cc4m)
2023-05-20  ggml : add ggml_clamp() (#1539)  (Georgi Gerganov)
2023-05-19  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)  (Georgi Gerganov)
2023-05-14  ggml : various fixes (#1450)  (Georgi Gerganov)
2023-05-14  ggml : add GGML_QNT_VERSION to track quantization format changes  (Georgi Gerganov)
2023-05-13  ggml : GPU-accelerated token generation (#1412)  (Johannes Gäßler)
2023-05-13  ggml : implement backward pass for llama + small training-llama-from-scratch ...  (xaedes)
2023-05-12  ggml : remove bit shuffling (#1405)  (Georgi Gerganov)
2023-05-02  ggml: add names to tensors (#1268)  (slaren)
2023-05-01  cuBLAS: refactor and optimize f16 mat mul performance (#1259)  (slaren)
2023-04-30  ggml : add Q5 WASM SIMD + GGML_FTYPE  (Georgi Gerganov)
2023-04-29  ggml : fix visibility and unused warnings  (Georgi Gerganov)
2023-04-28  Remove Q4_3 which is no better than Q5 (#1218)  (Stephan Walter)
2023-04-28  ggml : sync ggml (ggml_alibi)  (Georgi Gerganov)
2023-04-28  ggml : add CLBlast support (#1164)  (0cc4m)
2023-04-26  ggml : add Q5_0 and Q5_1 quantization (#1187)  (Georgi Gerganov)
2023-04-25  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (...  (Georgi Gerganov)
2023-04-24  ggml : export symbols (#1155)  (Georgi Gerganov)
2023-04-20  ggml : sync ggml (add GPT-NeoX RoPE implementation)  (Georgi Gerganov)