Age        | Commit message                                                                   | Author
2023-07-19 | llama : extend API to get max devices at runtime (#2253)                         | Rinne
2023-07-19 | flake : update flake.nix (#2270)                                                 | wzy
2023-07-19 | cmake : install targets (#2256)                                                  | wzy
2023-07-18 | ci : integrate with ggml-org/ci (#2250)                                          | Georgi Gerganov
2023-07-18 | llama : shorten quantization descriptions                                        | Georgi Gerganov
2023-07-17 | Support dup & cont ops on CUDA (#2242)                                           | Jiahao Li
2023-07-17 | llama : fix t_start_sample_us initialization warning (#2238)                     | Alex Klinkhamer
2023-07-16 | ggml : fixed runtime bugs and compile errors related to GGML_PERF and GGML_DE... | Qingyou Meng
2023-07-16 | py : turn verify-checksum-models.py into executable (#2245)                      | Jiří Podivín
2023-07-15 | llama : add custom RoPE (#2054)                                                  | Xiao-Yong Jin
2023-07-14 | flake : add runHook preInstall/postInstall to installPhase so hooks function ... | Dave Della Costa
2023-07-14 | make : use pkg-config for OpenBLAS (#2222)                                       | wzy
2023-07-14 | cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer ... | Bach Le
2023-07-14 | ggml : fix static_assert with older compilers #2024 (#2218)                      | Evan Miller
2023-07-14 | llama : add functions that work directly on model (#2197)                        | Bach Le
2023-07-14 | build.zig : install config header (#2216)                                        | Ali Chraghi
2023-07-14 | examples : fixed path typos in embd-input (#2214)                                | Shangning Xu
2023-07-14 | cuda : support broadcast add & mul (#2192)                                       | Jiahao Li
2023-07-14 | CUDA: mul_mat_vec_q kernels for k-quants (#2203)                                 | Johannes Gäßler
2023-07-14 | make : fix combination of LLAMA_METAL and LLAMA_MPI (#2208)                      | James Reynolds
2023-07-14 | ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope)                       | Georgi Gerganov
2023-07-14 | Metal: faster Q4_0 and Q4_1 matrix x vector kernels (#2212)                      | Kawrakow
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206)                   | Howard Su
2023-07-13 | Fix compile error on Windows CUDA (#2207)                                        | Howard Su
2023-07-13 | devops : add missing quotes to bash script (#2193)                               | Bodo Graumann
2023-07-12 | metal : new q4_0 matrix-vector kernel (#2188)                                    | Shouzheng Liu
2023-07-12 | ggml : broadcast mul_mat + conv batch support (#2199)                            | Georgi Gerganov
2023-07-12 | ggml : add ggml_pool_1d and ggml_pool_2d                                         | Georgi Gerganov
2023-07-12 | cuda : add gelu support                                                          | Georgi Gerganov
2023-07-12 | FP16 is supported in CM=6.0 (#2177)                                              | Howard Su
2023-07-12 | Fixed __dp4a compute capability: 6.0 -> 6.1 (#2189)                              | Johannes Gäßler
2023-07-12 | ggml : revert CUDA broadcast changes from #2183 (#2191)                          | Georgi Gerganov
2023-07-11 | ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)             | Georgi Gerganov
2023-07-11 | ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)       | Spencer Sutton
2023-07-11 | llama : add classifier-free guidance (#2135)                                     | Bach Le
2023-07-11 | docker : add '--server' option (#2174)                                           | Jinwoo Jeong
2023-07-11 | readme : fix zig build instructions (#2171)                                      | Chad Brewbaker
2023-07-11 | Support using mmap when applying LoRA (#2095)                                    | Howard Su
2023-07-11 | Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)        | LostRuins
2023-07-10 | mpi : add support for distributed inference via MPI (#2099)                      | Evan Miller
2023-07-09 | llama : remove "first token must be BOS" restriction (#2153)                     | oobabooga
2023-07-09 | main : escape prompt prefix/suffix (#2151)                                       | Nigel Bosch
2023-07-09 | readme : update Termux instructions (#2147)                                      | JackJollimore
2023-07-09 | ggml : fix buidling with Intel MKL but ask for "cblas.h" issue (#2104) (#2115)   | clyang
2023-07-09 | readme : add more docs indexes (#2127)                                           | rankaiyx
2023-07-08 | Fixed OpenLLaMA 3b CUDA mul_mat_vec_q (#2144)                                    | Johannes Gäßler
2023-07-08 | CUDA: add __restrict__ to mul mat vec kernels (#2140)                            | Johannes Gäßler
2023-07-07 | docker : add support for CUDA in docker (#1461)                                  | dylan
2023-07-07 | ci : switch threads to 1 (#2138)                                                 | Georgi Gerganov
2023-07-07 | ggml : change ggml_graph_compute() API to not require context (#1999)            | Qingyou Meng