Date       | Commit message                                                                   | Author
2023-04-22 | ggml : unit test for quantization functions (#953)                               | unbounded
2023-04-22 | llama : print timings on ctrl+c exit (#1021)                                     | wbpxre150
2023-04-22 | llama : have n_batch default to 512 (#1091)                                      | eiery
2023-04-22 | cmake : fix build under Windows when enable BUILD_SHARED_LIBS (#1100)            | Howard Su
2023-04-22 | ggml : fix AVX build + update to new Q8_0 format                                 | Georgi Gerganov
2023-04-22 | ggml : alternative Q4_3 implementation using modified Q8_0 (#1109)               | Georgi Gerganov
2023-04-22 | ggml : AVX2 optimization for vec_dot_q4_3_q8_0 and refactoring (#1099)           | Stephan Walter
2023-04-22 | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Exp... | Clint Herron
2023-04-22 | llama : add api for getting/setting the complete state: rng, logits, embeddin... | xaedes
2023-04-21 | Improve cuBLAS performance by using a memory pool (#1094)                        | slaren
2023-04-21 | llama : fixed rlimit error message (#888)                                        | apaz
2023-04-21 | cmake : link threads publicly to ggml (#1042)                                    | 源文雨
2023-04-21 | main : evaluate tokens in batches after swapping context (#1014)                 | Alex Klinkhamer
2023-04-21 | llama : remember and restore kv cache data pointers (#1104)                      | xaedes
2023-04-21 | ggml : a faster version for Q4_1 x Q8_0 dot products (#1083)                     | Kawrakow
2023-04-21 | Show perplexity ETA in hours and minutes (#1096)                                 | slaren
2023-04-21 | llama : fix comment for "output.weight" tensor                                   | Georgi Gerganov
2023-04-20 | Add ggml-model-*.bin checksums for 7B, 13B, 30B, 65B (#1088)                     | Stephan Walter
2023-04-20 | ggml : sync ggml (add GPT-NeoX RoPE implementation)                              | Georgi Gerganov
2023-04-20 | ggml : fix bug in ggml_compute_forward_dup_f32()                                 | Georgi Gerganov
2023-04-20 | Add Q4_3 support to cuBLAS (#1086)                                               | slaren
2023-04-20 | ggml : do not break cuBLAS build (Q4_3 is not yet implemented)                   | Georgi Gerganov
2023-04-20 | ggml : fix Q4_3 quantization                                                     | Georgi Gerganov
2023-04-20 | llama : multi-threaded quantization (#1075)                                      | Kawrakow
2023-04-20 | ggml : add Q4_3 quantization (#1082)                                             | Georgi Gerganov
2023-04-20 | ci : remove the LLAMA_ACCELERATE matrix dimension from Ubuntu builds in the C... | Ivan Komarov
2023-04-20 | fix: LLAMA_CUBLAS=1 undefined reference 'shm_open' (#1080)                       | 源文雨
2023-04-20 | AVX2 optimization for vec_dot_q4_2_q8_0 (#1068)                                  | Stephan Walter
2023-04-20 | Improve cuBLAS performance by dequantizing on the GPU (#1065)                    | slaren
2023-04-19 | Minor: Readme fixed grammar, spelling, and misc updates (#1071)                  | CRD716
2023-04-19 | Q4_2 quantization with rmse-optimized scale and quants (#1062)                   | Kawrakow
2023-04-19 | ggml : use 8-bit precision for Q4_1 intermediate results (#1047)                 | Georgi Gerganov
2023-04-19 | readme : add warning about Q4_2 and Q4_3                                         | Georgi Gerganov
2023-04-19 | ggml : Q4 cleanup - remove 4-bit dot product code (#1061)                        | Stephan Walter
2023-04-19 | Add NVIDIA cuBLAS support (#1044)                                                | slaren
2023-04-19 | Multi-threaded ggml_cpy (#1035)                                                  | slaren
2023-04-18 | ggml : add new Q4_2 quantization (ARM only) (#1046)                              | Georgi Gerganov
2023-04-18 | ggml : scratch that - vmlaq_n_f32 is always better                               | Georgi Gerganov
2023-04-18 | gitignore : vdot                                                                 | Georgi Gerganov
2023-04-18 | ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators           | Georgi Gerganov
2023-04-18 | Adding a simple program to measure speed of dot products (#1041)                 | Kawrakow
2023-04-18 | readme : update hot topics about new LoRA functionality                          | Georgi Gerganov
2023-04-18 | ci : do not run on drafts                                                        | Georgi Gerganov
2023-04-18 | Do not close file after mmap (Windows version) (#1034)                           | Ivan Komarov
2023-04-17 | readme : add Ruby bindings (#1029)                                               | Atsushi Tatsuma
2023-04-17 | add 4_0 to default outfile namestr dict (#1031)                                  | Cameron
2023-04-17 | Add LoRA support (#820)                                                          | slaren
2023-04-17 | llama : well-defined static initialization of complex objects (#927)             | Arik Poznanski
2023-04-17 | quantize-stats : fix bug in --type argument                                      | Georgi Gerganov
2023-04-17 | ggml : avoid using ggml_fp16_to_fp32() and ggml_fp32_to_fp16() in ggml.c         | Georgi Gerganov