Age        | Commit message | Author
2023-04-25 | Update SHA256SUMS after quantization change (#1181) | Stephan Walter
2023-04-25 | py : cast lora_alpha to int in convert-lora-to-ggml (#1170) | ostix360
2023-04-25 | nix: use convert.py instead of legacy wrapper convert-pth-to-ggml.py (#981) | Pavol Rusnak
2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (... | Georgi Gerganov
2023-04-25 | ggml : use full range for Q4_0 and Q4_2 quantization (#729) | unbounded
2023-04-24 | ggml : fix bug in ggml_compute_forward_sum_f32 (#1162) | xaedes
2023-04-24 | ggml : export symbols (#1155) | Georgi Gerganov
2023-04-24 | examples : add save_load_state example (#1150) | xaedes
2023-04-24 | llama : increase scratch buffer size for 65B (ref #1152) | Georgi Gerganov
2023-04-24 | examples/main README improvements and some light refactoring (#1131) | mgroeber9110
2023-04-24 | Fix build for gcc 8 and test in CI (#1154) | Stephan Walter
2023-04-24 | Fix cuda compilation (#1128) | slaren
2023-04-24 | llama : refactor get / set state + remove redundant kv cache API (#1143) | Georgi Gerganov
2023-04-23 | Fix LoRA acronym (#1145) | slaren
2023-04-23 | scripts : add helper scripts to synch ggml repo | Georgi Gerganov
2023-04-23 | Added README.md for main with examples and explanations (#1139) | DannyDaemonic
2023-04-23 | ggml : do not print perf ops that have not been used at all | Georgi Gerganov
2023-04-23 | ggml : better PERF prints + support "LLAMA_PERF=1 make" | Georgi Gerganov
2023-04-23 | Improve AVX2 for vec_dot_q4_3_q8_0 (#1138) | Stephan Walter
2023-04-23 | readme : update gpt4all instructions (#980) | Pavol Rusnak
2023-04-23 | A better `packNibbles` and `mul_sum_i8_pairs_float` implementation using AVX5... | Yishuo Wang
2023-04-22 | ggml : fix Q4_3 cuBLAS | Georgi Gerganov
2023-04-22 | ci : trigger CI for drafts, but not most PR actions (#1125) | Stephan Walter
2023-04-22 | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122) | Stephan Walter
2023-04-22 | ggml : unit test for quantization functions (#953) | unbounded
2023-04-22 | llama : print timings on ctrl+c exit (#1021) | wbpxre150
2023-04-22 | llama : have n_batch default to 512 (#1091) | eiery
2023-04-22 | cmake : fix build under Windows when enable BUILD_SHARED_LIBS (#1100) | Howard Su
2023-04-22 | ggml : fix AVX build + update to new Q8_0 format | Georgi Gerganov
2023-04-22 | ggml : alternative Q4_3 implementation using modified Q8_0 (#1109) | Georgi Gerganov
2023-04-22 | ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099) | Stephan Walter
2023-04-22 | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Exp... | Clint Herron
2023-04-22 | llama : add api for getting/setting the complete state: rng, logits, embeddin... | xaedes
2023-04-21 | Improve cuBLAS performance by using a memory pool (#1094) | slaren
2023-04-21 | llama : fixed rlimit error message (#888) | apaz
2023-04-21 | cmake : link threads publicly to ggml (#1042) | 源文雨
2023-04-21 | main : evaluate tokens in batches after swapping context (#1014) | Alex Klinkhamer
2023-04-21 | llama : remember and restore kv cache data pointers (#1104) | xaedes
2023-04-21 | ggml : a faster version for Q4_1 x Q8_0 dot products (#1083) | Kawrakow
2023-04-21 | Show perplexity ETA in hours and minutes (#1096) | slaren
2023-04-21 | llama : fix comment for "output.weight" tensor | Georgi Gerganov
2023-04-20 | Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088) | Stephan Walter
2023-04-20 | ggml : sync ggml (add GPT-NeoX RoPE implementation) | Georgi Gerganov
2023-04-20 | ggml : fix bug in ggml_compute_forward_dup_f32() | Georgi Gerganov
2023-04-20 | Add Q4_3 support to cuBLAS (#1086) | slaren
2023-04-20 | ggml : do not break cuBLAS build (Q4_3 is not yet implemented) | Georgi Gerganov
2023-04-20 | ggml : fix Q4_3 quantization | Georgi Gerganov
2023-04-20 | llama : multi-threaded quantization (#1075) | Kawrakow
2023-04-20 | ggml : add Q4_3 quantization (#1082) | Georgi Gerganov
2023-04-20 | ci : remove the LLAMA_ACCELERATE matrix dimension from Ubuntu builds in the C... | Ivan Komarov