path: root/examples/common.cpp
Age        | Commit message                                                                   | Author
2023-08-04 | Add --simple-io option for subprocesses and break out console.h and cpp (#1558) | DannyDaemonic
2023-08-02 | tests : Fix compilation warnings (Linux/GCC) (#2451)                             | Eve
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453)                             | Johannes Gäßler
2023-07-28 | perplexity : add Hellaswag calculation (#2389)                                   | klosax
2023-07-25 | main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)      | Xiao-Yong Jin
2023-07-24 | make rms_norm_eps a parameter (#2374)                                            | slaren
2023-07-23 | llama : add grammar-based sampling (#1773)                                       | Evan Jones
2023-07-23 | common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347)        | wzy
2023-07-23 | llama : grouped-query attention + LLaMAv2 70B support (#2276)                    | Georgi Gerganov
2023-07-23 | llama : print help to stdout (#2338)                                             | maddes8cht
2023-07-22 | llama : optimize memory buffers (#2325)                                          | Georgi Gerganov
2023-07-22 | Perplexity: Compute scores correlated to HellaSwag (#2312)                       | klosax
2023-07-21 | llama : remove cfg smooth factor as it is only a reparameterization of the gu... | Guillaume "Vermeille" Sanchez
2023-07-21 | llama : make tensor_split ptr instead of array (#2272)                           | Georgi Gerganov
2023-07-18 | ci : integrate with ggml-org/ci (#2250)                                          | Georgi Gerganov
2023-07-15 | llama : add custom RoPE (#2054)                                                  | Xiao-Yong Jin
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206)                   | Howard Su
2023-07-11 | llama : add classifier-free guidance (#2135)                                     | Bach Le
2023-07-11 | Support using mmap when applying LoRA (#2095)                                    | Howard Su
2023-07-09 | main : escape prompt prefix/suffix (#2151)                                       | Nigel Bosch
2023-06-29 | Use unsigned for random seed (#2006)                                             | Howard Su
2023-06-28 | CUDA GPU acceleration for LoRAs + f16 models (#1970)                             | Johannes Gäßler
2023-06-26 | ggml : add NUMA support (#1556)                                                  | zrm
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797)          | Didzis Gosko
2023-06-17 | Only one CUDA stream per device for async compute (#1898)                        | Johannes Gäßler
2023-06-16 | build : fix and ignore MSVC warnings (#1889)                                     | Borislav Stanimirov
2023-06-15 | Better error when using both LoRA + GPU layers (#1861)                           | Johannes Gäßler
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827)                             | Johannes Gäßler
2023-06-11 | Fix issue where interactive mode crashes when input exceeds ctx size (#1789)     | Kerfuffle
2023-06-06 | main: add the possibility to open the prompt cache read-only (#1640)             | Willy Tarreau
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)                    | Johannes Gäßler
2023-06-04 | llama : Metal inference (#1642)                                                  | Georgi Gerganov
2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625)     | Kerfuffle
2023-05-28 | examples : add --alias option to gpt_params to set use friendly model name (#... | Vladimir Zorin
2023-05-20 | Fix for mingw (#1462)                                                            | DannyDaemonic
2023-05-19 | main : make reverse prompt option act as a stop token in non-interactive mode... | Jason McCartney
2023-05-19 | minor : fix compile warnings                                                     | Georgi Gerganov
2023-05-17 | Remove unused n_parts parameter (#1509)                                          | Stephan Walter
2023-05-15 | fix get_num_physical_cores() (#1436)                                             | zrm
2023-05-13 | ggml : GPU-accelerated token generation (#1412)                                  | Johannes Gäßler
2023-05-12 | CLI args use - instead of _, backwards compatible (#1416)                        | Johannes Gäßler
2023-05-10 | main : add option to save full output to session (#1338)                         | Evan Jones
2023-05-09 | Locale fix for Windows (#1379)                                                   | DannyDaemonic
2023-05-08 | Interface improvements and `--multiline-input` (previously `--author-mode`) (... | DannyDaemonic
2023-05-08 | llama : require first token to be BOS (#1303)                                    | Georgi Gerganov
2023-05-08 | Documented CUDA reproducibility, added warning (#1346)                           | Johannes Gäßler
2023-05-04 | main : add --in-suffix option (#1318)                                            | 44670
2023-05-04 | Only escape prompts when used with `-e` (#1311)                                  | DannyDaemonic
2023-05-02 | Process escape sequences given in prompts (#1173)                                | DannyDaemonic
2023-05-03 | fix missing parameters in `llama_init_from_gpt_params` (#1293)                   | slaren