Age | Commit message | Author
2023-07-01 | llama : fix return value of llama_load_session_file_internal (#2022) | Georgi Gerganov
2023-07-01 | llama : catch llama_load_session_file_internal exceptions (#2022) | Rand Xie
2023-07-01 | embd-input : fix returning ptr to temporary | Georgi Gerganov
2023-07-01 | train : fix compile warning | Georgi Gerganov
2023-07-01 | ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995) | Qingyou Meng
2023-06-29 | Use unsigned for random seed (#2006) | Howard Su
2023-06-29 | Porting the improved K-Quant CUDA kernels to OpenCL (#1966) | LostRuins
2023-06-28 | llama : replacing auto &kv with const auto &kv (#2041) | m3ndax
2023-06-28 | cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028) | Salvador E. Tropea
2023-06-28 | cuda : fix missing const qualifier in casts (#2027) | Salvador E. Tropea
2023-06-28 | llama : remove shards weight file support (#2000) | Howard Su
2023-06-28 | CUDA GPU acceleration for LoRAs + f16 models (#1970) | Johannes Gäßler
2023-06-28 | llama : support input embeddings directly (#1910) | ningshanwutuobang
2023-06-27 | fix pthreads setaffinity usage on android (#2020) | Erik Scholz
2023-06-27 | baby-llama : fix build after ggml_rope change (#2016) | Howard Su
2023-06-27 | llama : fix rope usage after ChatGLM change | Georgi Gerganov
2023-06-27 | ggml : add support for ChatGLM RoPE | Georgi Gerganov
2023-06-26 | readme : add Scala 3 bindings repo (#2010) | Roman Parykin
2023-06-26 | ggml : increase max tensor name + clean up compiler warnings in train-text (#... | David Yang
2023-06-26 | readme : LD_LIBRARY_PATH complement for some Android devices when building wi... | Gustavo Rocha Dias
2023-06-26 | ggml : avoid conv 2d kernel round up | Georgi Gerganov
2023-06-26 | ggml : add NUMA support (#1556) | zrm
2023-06-26 | k-quants : fix indentation | Georgi Gerganov
2023-06-26 | tests : fix quantize perf (#1990) | katsu560
2023-06-26 | k-quants : add AVX support to dot functions (#1916) | katsu560
2023-06-26 | readme : add link to new k-quants for visibility | Georgi Gerganov
2023-06-26 | k-quants : support for super-block size of 64 (#2001) | Kawrakow
2023-06-26 | Fix assert when free invalid cuda pointer (#2005) | Howard Su
2023-06-25 | readme : add new roadmap + manifesto | Georgi Gerganov
2023-06-25 | ggml : sync latest ggml (custom operators) | Georgi Gerganov
2023-06-25 | fix server sampling: top k sampler first (#1977) | anon998
2023-06-25 | readme : add Azure CI discussion link | Georgi Gerganov
2023-06-25 | zig : upgrade build system support (#1981) | sjinzh
2023-06-24 | #1869 Fix null reference errors when training from scratch with CUDA (#1907) | Robyn
2023-06-24 | tests : sync test-grad0 from ggml | Georgi Gerganov
2023-06-24 | flake : fix ggml-metal.metal path and run nixfmt (#1974) | Rowan Hart
2023-06-24 | convert : fix invalid params in write_vocab_only (#1975) | AN Long
2023-06-24 | ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978) | slaren
2023-06-24 | readme : fix whitespaces | Georgi Gerganov
2023-06-24 | readme : fixed termux instructions (#1973) | Alberto
2023-06-24 | llama : fix top-p sampling to match the canonical definition (#1953) | Alex Renda
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797) | Didzis Gosko
2023-06-23 | Add OpenLLaMA instructions to the README (#1954) | eiery
2023-06-22 | rework convert.py to read hyper-parameters from config.json (#1958) | Erik Scholz
2023-06-21 | cmake: revert CUDA arch default to 52, 61 if f16 (#1959) | Johannes Gäßler
2023-06-21 | Fix typo in README.md (#1961) | Rahul Vivek Nair
2023-06-20 | readme : add link to p1 | Georgi Gerganov
2023-06-20 | Fix typo (#1949) | Xiake Sun
2023-06-20 | llama : fix params struct slignment (#1936) | Ettore Di Giacinto
2023-06-20 | [Fix] Reenable server embedding endpoint (#1937) | Henri Vasserman