| Age | Commit message | Author |
|------------|----------------------------------------------------------------------------------|--------------------------|
| 2023-04-29 | ggml : use vzip instead of vuzp for consistency | Georgi Gerganov |
| 2023-04-29 | ggml : fix visibility and unused warnings | Georgi Gerganov |
| 2023-04-29 | ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229) | Georgi Gerganov |
| 2023-04-29 | ggml : adjust mul_mat_f16 work memory (#1226) | Georgi Gerganov |
| 2023-04-29 | build : fix reference to old llama_util.h | Georgi Gerganov |
| 2023-04-29 | examples : fix save-load-state + rename llama-util.h | Georgi Gerganov |
| 2023-04-29 | common : change default parameters to pre-#1126 (#1223) | Georgi Gerganov |
| 2023-04-29 | llama : new sampling algorithms (#1126) | Ivan Stepanov |
| 2023-04-29 | cuBLAS: use host pinned memory and dequantize while copying (#1207) | slaren |
| 2023-04-29 | cuBLAS: non-contiguous tensor support (#1215) | Henri Vasserman |
| 2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218) | Stephan Walter |
| 2023-04-28 | readme : update hot topics | Georgi Gerganov |
| 2023-04-28 | ggml : sync ggml (ggml_alibi) | Georgi Gerganov |
| 2023-04-28 | examples : add Jeopardy example (#1168) | CRD716 |
| 2023-04-28 | llama : add session file format and saved sessions in main (#1169) | Evan Jones |
| 2023-04-28 | ggml : add helper debug printf in soft_max | Georgi Gerganov |
| 2023-04-28 | ggml : add CLBlast support (#1164) | 0cc4m |
| 2023-04-28 | Correcting link to w64devkit (#1214) | Folko-Ven |
| 2023-04-28 | Add Manjaro CUDA include and lib dirs to Makefile (#1212) | Johannes Gäßler |
| 2023-04-28 | add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211) | Yann Follet |
| 2023-04-26 | ggml : slightly faster AVX2 implementation for Q5 (#1197) | Stephan Walter |
| 2023-04-26 | readme : add quantization info | Georgi Gerganov |
| 2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187) | Georgi Gerganov |
| 2023-04-26 | Allow setting the rng seed after initialization. (#1184) | Ásgeir Bjarni Ingvarsson |
| 2023-04-26 | Updating build instructions to include BLAS support (#1183) | DaniAndTheWeb |
| 2023-04-26 | quantize : use `map` to assign quantization type from `string` (#1191) | Pavol Rusnak |
| 2023-04-25 | Update SHA256SUMS after quantization change (#1181) | Stephan Walter |
| 2023-04-25 | py : cast lora_alpha to int in convert-lora-to-ggml (#1170) | ostix360 |
| 2023-04-25 | nix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981) | Pavol Rusnak |
| 2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (... | Georgi Gerganov |
| 2023-04-25 | ggml : use full range for Q4_0 and Q4_2 quantization (#729) | unbounded |
| 2023-04-24 | ggml : fix bug in ggml_compute_forward_sum_f32 (#1162) | xaedes |
| 2023-04-24 | ggml : export symbols (#1155) | Georgi Gerganov |
| 2023-04-24 | examples : add save_load_state example (#1150) | xaedes |
| 2023-04-24 | llama : increase scratch buffer size for 65B (ref #1152) | Georgi Gerganov |
| 2023-04-24 | examples/main README improvements and some light refactoring (#1131) | mgroeber9110 |
| 2023-04-24 | Fix build for gcc 8 and test in CI (#1154) | Stephan Walter |
| 2023-04-24 | Fix cuda compilation (#1128) | slaren |
| 2023-04-24 | llama : refactor get / set state + remove redundant kv cache API (#1143) | Georgi Gerganov |
| 2023-04-23 | Fix LoRA acronym (#1145) | slaren |
| 2023-04-23 | scripts : add helper scripts to synch ggml repo | Georgi Gerganov |
| 2023-04-23 | Added README.md for main with examples and explanations (#1139) | DannyDaemonic |
| 2023-04-23 | ggml : do not print perf ops that have not been used at all | Georgi Gerganov |
| 2023-04-23 | ggml : better PERF prints + support "LLAMA_PERF=1 make" | Georgi Gerganov |
| 2023-04-23 | Improve AVX2 for vec_dot_q4_3_q8_0 (#1138) | Stephan Walter |
| 2023-04-23 | readme : update gpt4all instructions (#980) | Pavol Rusnak |
| 2023-04-23 | A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX5... | Yishuo Wang |
| 2023-04-22 | ggml : fix Q4_3 cuBLAS | Georgi Gerganov |
| 2023-04-22 | ci : trigger CI for drafts, but not most PR actions (#1125) | Stephan Walter |
| 2023-04-22 | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122) | Stephan Walter |