path: root/examples
Age        | Commit message                                                                           | Author
2023-05-12 | llama : fix --mtest option (close #1414)                                                 | Georgi Gerganov
2023-05-12 | CLI args use - instead of _, backwards compatible (#1416)                                | Johannes Gäßler
2023-05-12 | ggml : remove bit shuffling (#1405)                                                      | Georgi Gerganov
2023-05-10 | main : add option to save full output to session (#1338)                                 | Evan Jones
2023-05-09 | Locale fix for Windows (#1379)                                                           | DannyDaemonic
2023-05-08 | Interface improvements and `--multiline-input` (previously `--author-mode`) (…)          | DannyDaemonic
2023-05-08 | llama : require first token to be BOS (#1303)                                            | Georgi Gerganov
2023-05-08 | Documented CUDA reproducibility, added warning (#1346)                                   | Johannes Gäßler
2023-05-06 | Remove default arguments from sampling functions (#1343)                                 | Jed Fox
2023-05-05 | quantize: make output filename optional, default to ggml-model-<ftype>.bin (#…)          | slaren
2023-05-04 | main : add --in-suffix option (#1318)                                                    | 44670
2023-05-04 | Only escape prompts when used with `-e` (#1311)                                          | DannyDaemonic
2023-05-04 | Update main's README.md with new features (#1296)                                        | DannyDaemonic
2023-05-04 | fix #1224 reverse prompt and multi line (#1297)                                          | Tomas
2023-05-03 | examples : read chat prompts from a template file (#1196)                                | khimaros
2023-05-03 | examples : various prompt and example fixes (#1298)                                      | CRD716
2023-05-02 | Process escape sequences given in prompts (#1173)                                        | DannyDaemonic
2023-05-02 | Handle signals properly on Windows (#1123)                                               | DannyDaemonic
2023-05-03 | fix missing parameters in `llama_init_from_gpt_params` (#1293)                           | slaren
2023-05-02 | examples : add llama_init_from_gpt_params() common function (#1290)                      | Ron Evans
2023-05-02 | llama : fix compile warnings                                                             | Georgi Gerganov
2023-05-02 | examples : improve vertical alignment of a few variables (#1286)                         | Ron Evans
2023-05-02 | llama : allow 0 as a seed number. (#1275)                                                | Robert Brisita
2023-05-02 | main : switch input_noecho to input_echo to remove negation (#979)                       | Ron Evans
2023-05-01 | Add git-based build information for better issue tracking (#1232)                        | DannyDaemonic
2023-05-01 | llama : fix session load / save (#1263)                                                  | Georgi Gerganov
2023-04-30 | common : better default number of threads (#934)                                         | jon-chuang
2023-04-30 | Various fixes to mat_mul benchmark (#1253)                                               | Stephan Walter
2023-04-29 | build : fix reference to old llama_util.h                                                | Georgi Gerganov
2023-04-29 | examples : fix save-load-state + rename llama-util.h                                     | Georgi Gerganov
2023-04-29 | common : change default parameters to pre-#1126 (#1223)                                  | Georgi Gerganov
2023-04-29 | llama : new sampling algorithms (#1126)                                                  | Ivan Stepanov
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218)                                           | Stephan Walter
2023-04-28 | examples : add Jeopardy example (#1168)                                                  | CRD716
2023-04-28 | llama : add session file format and saved sessions in main (#1169)                       | Evan Jones
2023-04-26 | ggml : add Q5_0 and Q5_1 quantization (#1187)                                            | Georgi Gerganov
2023-04-26 | quantize : use `map` to assign quantization type from `string` (#1191)                   | Pavol Rusnak
2023-04-25 | ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (…)          | Georgi Gerganov
2023-04-24 | examples : add save_load_state example (#1150)                                           | xaedes
2023-04-24 | examples/main README improvements and some light refactoring (#1131)                     | mgroeber9110
2023-04-23 | Fix LoRA acronym (#1145)                                                                 | slaren
2023-04-23 | Added README.md for main with examples and explanations (#1139)                          | DannyDaemonic
2023-04-22 | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)                          | Stephan Walter
2023-04-22 | llama : print timings on ctrl+c exit (#1021)                                             | wbpxre150
2023-04-22 | llama : have n_batch default to 512 (#1091)                                              | eiery
2023-04-22 | examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Exp…           | Clint Herron
2023-04-21 | main : evaluate tokens in batches after swapping context (#1014)                         | Alex Klinkhamer
2023-04-21 | Show perplexity ETA in hours and minutes (#1096)                                         | slaren
2023-04-20 | llama : multi-threaded quantization (#1075)                                              | Kawrakow
2023-04-20 | ggml : add Q4_3 quantization (#1082)                                                     | Georgi Gerganov