path: root/examples
Age         Commit message (Author)
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237) (Stephan Walter)
2023-07-05  Update server instructions for web front end (#2103) (Jesse Jojo Johnson)
2023-07-05  embd-input: Fix input embedding example unsigned int seed (#2105) (Nigel Bosch)
2023-07-04  Add an API example using server.cpp similar to OAI. (#2009) (jwj7140)
2023-07-04  Simple webchat for server (#1998) (Tobias Lütke)
2023-07-04  fix server crashes (#2076) (Henri Vasserman)
2023-07-03  server: add option to output probabilities for completion (#1962) (WangHaoranRobin)
2023-07-01  embd-input : fix returning ptr to temporary (Georgi Gerganov)
2023-07-01  train : fix compile warning (Georgi Gerganov)
2023-06-29  Use unsigned for random seed (#2006) (Howard Su)
2023-06-28  CUDA GPU acceleration for LoRAs + f16 models (#1970) (Johannes Gäßler)
2023-06-28  llama : support input embeddings directly (#1910) (ningshanwutuobang)
2023-06-27  baby-llama : fix build after ggml_rope change (#2016) (Howard Su)
2023-06-27  llama : fix rope usage after ChatGLM change (Georgi Gerganov)
2023-06-26  ggml : increase max tensor name + clean up compiler warnings in train-text (#... (David Yang)
2023-06-26  ggml : add NUMA support (#1556) (zrm)
2023-06-25  fix server sampling: top k sampler first (#1977) (anon998)
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797) (Didzis Gosko)
2023-06-20  [Fix] Reenable server embedding endpoint (#1937) (Henri Vasserman)
2023-06-18  examples : fix examples/metal (#1920) (Kawrakow)
2023-06-17  minor : warning fixes (Georgi Gerganov)
2023-06-17  Only one CUDA stream per device for async compute (#1898) (Johannes Gäßler)
2023-06-17  llama : fix kv_cache `n` init (close #1903) (Georgi Gerganov)
2023-06-17  Server Example Refactor and Improvements (#1570) (Randall Fitzgerald)
2023-06-17  hooks : setting up flake8 and pre-commit hooks (#1681) (Jiří Podivín)
2023-06-17  train : get raw text instead of page with html (#1905) (David Yang)
2023-06-16  examples : add "simple" (#1840) (SuperUserNameMan)
2023-06-16  Fixed possible macro redefinition (#1892) (FrankHB)
2023-06-16  build : fix and ignore MSVC warnings (#1889) (Borislav Stanimirov)
2023-06-15  examples : add chat-vicuna.sh (#1854) (yangli2)
2023-06-15  readme : server compile flag (#1874) (Srinivas Billa)
2023-06-15  Better error when using both LoRA + GPU layers (#1861) (Johannes Gäßler)
2023-06-14  CUDA full GPU acceleration, KV cache in VRAM (#1827) (Johannes Gäßler)
2023-06-13  baby-llama : fix operator!= (#1821) (0xspringtime)
2023-06-13  train : improved training-from-scratch example (#1652) (xaedes)
2023-06-13  llama : do a warm-up eval at start for better timings (#1824) (Georgi Gerganov)
2023-06-13  Allow "quantizing" to f16 and f32 (#1787) (Kerfuffle)
2023-06-11  Fix issue where interactive mode crashes when input exceeds ctx size (#1789) (Kerfuffle)
2023-06-10  llama : support requantizing models instead of only allowing quantization fro... (Kerfuffle)
2023-06-06  main: add the possibility to open the prompt cache read-only (#1640) (Willy Tarreau)
2023-06-06  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) (Johannes Gäßler)
2023-06-05  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) (Kawrakow)
2023-06-04  llama : Metal inference (#1642) (Georgi Gerganov)
2023-06-03  Fix prompt cache saving and chat-persistent rollover (#1678) (Evan Jones)
2023-05-29  Work around for recalculating logits in cached prompts (Fixes #1585) (#1609) (DannyDaemonic)
2023-05-28  Only show -ngl option when relevant + other doc/arg handling updates (#1625) (Kerfuffle)
2023-05-28  examples : add --alias option to gpt_params to set use friendly model name (#... (Vladimir Zorin)
2023-05-27  Include server in releases + other build system cleanups (#1610) (Kerfuffle)
2023-05-25  Some improvements to loading the session with --prompt-cache (#1550) (Kerfuffle)
2023-05-24  chat-persistent.sh : use bracket expressions in grep (#1564) (Senemu)