Age · Commit message · Author
2023-07-04 · [ggml] fix index for ne03 value in ggml_cl_mul_f32 (#2088) · Govlzkoy
2023-07-04 · fix server crashes (#2076) · Henri Vasserman
2023-07-03 · Fix crash of test-tokenizer-0 under Debug build (#2064) · Howard Su
2023-07-03 · [llama] No need to check file version when loading vocab score (#2079) · Howard Su
2023-07-03 · server : add option to output probabilities for completion (#1962) · WangHaoranRobin
2023-07-02 · ggml : fix build with OpenBLAS (close #2066) · Georgi Gerganov
2023-07-01 · Better CUDA synchronization logic (#2057) · Johannes Gäßler
2023-07-01 · Test-based VRAM scratch size + context adjustment (#2056) · Johannes Gäßler
2023-07-01 · cmake : don't force -mcpu=native on aarch64 (#2063) · Daniel Drake
2023-07-01 · metal : release buffers when freeing metal context (#2062) · Aaron Miller
2023-07-01 · convert : add support of baichuan-7b (#2055) · Judd
2023-07-01 · llama : fix return value of llama_load_session_file_internal (#2022) · Georgi Gerganov
2023-07-01 · llama : catch llama_load_session_file_internal exceptions (#2022) · Rand Xie
2023-07-01 · embd-input : fix returning ptr to temporary · Georgi Gerganov
2023-07-01 · train : fix compile warning · Georgi Gerganov
2023-07-01 · ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995) · Qingyou Meng
2023-06-29 · Use unsigned for random seed (#2006) · Howard Su
2023-06-29 · Porting the improved K-Quant CUDA kernels to OpenCL (#1966) · LostRuins
2023-06-28 · llama : replacing auto &kv with const auto &kv (#2041) · m3ndax
2023-06-28 · cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028) · Salvador E. Tropea
2023-06-28 · cuda : fix missing const qualifier in casts (#2027) · Salvador E. Tropea
2023-06-28 · llama : remove shards weight file support (#2000) · Howard Su
2023-06-28 · CUDA GPU acceleration for LoRAs + f16 models (#1970) · Johannes Gäßler
2023-06-28 · llama : support input embeddings directly (#1910) · ningshanwutuobang
2023-06-27 · fix pthreads setaffinity usage on android (#2020) · Erik Scholz
2023-06-27 · baby-llama : fix build after ggml_rope change (#2016) · Howard Su
2023-06-27 · llama : fix rope usage after ChatGLM change · Georgi Gerganov
2023-06-27 · ggml : add support for ChatGLM RoPE · Georgi Gerganov
2023-06-26 · readme : add Scala 3 bindings repo (#2010) · Roman Parykin
2023-06-26 · ggml : increase max tensor name + clean up compiler warnings in train-text (#... · David Yang
2023-06-26 · readme : LD_LIBRARY_PATH complement for some Android devices when building wi... · Gustavo Rocha Dias
2023-06-26 · ggml : avoid conv 2d kernel round up · Georgi Gerganov
2023-06-26 · ggml : add NUMA support (#1556) · zrm
2023-06-26 · k-quants : fix indentation · Georgi Gerganov
2023-06-26 · tests : fix quantize perf (#1990) · katsu560
2023-06-26 · k-quants : add AVX support to dot functions (#1916) · katsu560
2023-06-26 · readme : add link to new k-quants for visibility · Georgi Gerganov
2023-06-26 · k-quants : support for super-block size of 64 (#2001) · Kawrakow
2023-06-26 · Fix assert when free invalid cuda pointer (#2005) · Howard Su
2023-06-25 · readme : add new roadmap + manifesto · Georgi Gerganov
2023-06-25 · ggml : sync latest ggml (custom operators) · Georgi Gerganov
2023-06-25 · fix server sampling: top k sampler first (#1977) · anon998
2023-06-25 · readme : add Azure CI discussion link · Georgi Gerganov
2023-06-25 · zig : upgrade build system support (#1981) · sjinzh
2023-06-24 · #1869 Fix null reference errors when training from scratch with CUDA (#1907) · Robyn
2023-06-24 · tests : sync test-grad0 from ggml · Georgi Gerganov
2023-06-24 · flake : fix ggml-metal.metal path and run nixfmt (#1974) · Rowan Hart
2023-06-24 · convert : fix invalid params in write_vocab_only (#1975) · AN Long
2023-06-24 · ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978) · slaren
2023-06-24 · readme : fix whitespaces · Georgi Gerganov