Age        | Commit message                                                                | Author
2023-05-02 | examples : add llama_init_from_gpt_params() common function (#1290)           | Ron Evans
2023-05-02 | llama : fix compile warnings                                                  | Georgi Gerganov
2023-05-02 | ggml : fix 32-bit ARM                                                         | Georgi Gerganov
2023-05-02 | examples : improve vertical alignment of a few variables (#1286)              | Ron Evans
2023-05-02 | ggml : fix ppc64le build error and make cmake detect Power processors (#1284) | Marvin Gießing
2023-05-02 | llama : allow 0 as a seed number. (#1275)                                     | Robert Brisita
2023-05-02 | main : switch input_noecho to input_echo to remove negation (#979)            | Ron Evans
2023-05-02 | ggml : add names to tensors (#1268)                                           | slaren
2023-05-01 | Add git-based build information for better issue tracking (#1232)             | DannyDaemonic
2023-05-01 | cuBLAS : refactor and optimize f16 mat mul performance (#1259)                | slaren
2023-05-01 | llama : update stubs for systems without mmap and mlock (#1266)               | xloem
2023-05-01 | ggml : fix ggml_used_mem() (#1264)                                            | Kerfuffle
2023-05-01 | llama : fix session load / save (#1263)                                       | Georgi Gerganov
2023-05-01 | cuBLAS : fall back to pageable memory if pinned alloc fails (#1233)           | slaren
2023-05-01 | llama : let context be const when accessing const data (#1261)                | Alex Klinkhamer
2023-04-30 | ggml : fix UB (int << 31)                                                     | Georgi Gerganov
2023-04-30 | build : add armv{6,7,8} support to cmake (#1251)                              | Pavol Rusnak
2023-04-30 | common : better default number of threads (#934)                              | jon-chuang
2023-04-30 | ggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels (#1225)                   | 0cc4m
2023-04-30 | ggml : add Q5 WASM SIMD + GGML_FTYPE                                          | Georgi Gerganov
2023-04-30 | Various fixes to mat_mul benchmark (#1253)                                    | Stephan Walter
2023-04-30 | ggml : fix labels for GGML_OP_ALIBI                                           | Georgi Gerganov
2023-04-29 | ggml : fix 32-bit ARM NEON                                                    | Georgi Gerganov
2023-04-29 | ggml : use vzip instead of vuzp for consistency                               | Georgi Gerganov
2023-04-29 | ggml : fix visibility and unused warnings                                     | Georgi Gerganov
2023-04-29 | ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229)                          | Georgi Gerganov
2023-04-29 | ggml : adjust mul_mat_f16 work memory (#1226)                                 | Georgi Gerganov
2023-04-29 | build : fix reference to old llama_util.h                                     | Georgi Gerganov
2023-04-29 | examples : fix save-load-state + rename llama-util.h                          | Georgi Gerganov
2023-04-29 | common : change default parameters to pre-#1126 (#1223)                       | Georgi Gerganov
2023-04-29 | llama : new sampling algorithms (#1126)                                       | Ivan Stepanov
2023-04-29 | cuBLAS : use host pinned memory and dequantize while copying (#1207)          | slaren
2023-04-29 | cuBLAS : non-contiguous tensor support (#1215)                                | Henri Vasserman
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218)                                | Stephan Walter
2023-04-28 | readme : update hot topics                                                    | Georgi Gerganov
2023-04-28 | ggml : sync ggml (ggml_alibi)                                                 | Georgi Gerganov
2023-04-28 | examples : add Jeopardy example (#1168)                                       | CRD716
2023-04-28 | llama : add session file format and saved sessions in main (#1169)            | Evan Jones
2023-04-28 | ggml : add helper debug printf in soft_max                                    | Georgi Gerganov
2023-04-28 | ggml : add CLBlast support (#1164)                                            | 0cc4m
2023-04-28 | Correct link to w64devkit (#1214)                                             | Folko-Ven
2023-04-28 | Add Manjaro CUDA include and lib dirs to Makefile (#1212)                     | Johannes Gäßler
2023-04-28 | Add AVX2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)                     | Yann Follet
2023-04-26 | ggml : slightly faster AVX2 implementation for Q5 (#1197)                     | Stephan Walter
2023-04-26 | readme : add quantization info                                                | Georgi Gerganov
2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187)                                 | Georgi Gerganov
2023-04-26 | Allow setting the rng seed after initialization (#1184)                       | Ásgeir Bjarni Ingvarsson
2023-04-26 | Update build instructions to include BLAS support (#1183)                     | DaniAndTheWeb
2023-04-26 | quantize : use `map` to assign quantization type from `string` (#1191)        | Pavol Rusnak
2023-04-25 | Update SHA256SUMS after quantization change (#1181)                           | Stephan Walter