path: root/examples
Age        | Commit message                                                                    | Author
2023-07-14 | examples : fixed path typos in embd-input (#2214)                                 | Shangning Xu
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206)                    | Howard Su
2023-07-11 | ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)        | Spencer Sutton
2023-07-11 | llama : add classifier-free guidance (#2135)                                      | Bach Le
2023-07-11 | Support using mmap when applying LoRA (#2095)                                     | Howard Su
2023-07-10 | mpi : add support for distributed inference via MPI (#2099)                       | Evan Miller
2023-07-09 | main : escape prompt prefix/suffix (#2151)                                        | Nigel Bosch
2023-07-07 | ggml : change ggml_graph_compute() API to not require context (#1999)             | Qingyou Meng
2023-07-06 | convert : update for baichuan (#2081)                                             | Judd
2023-07-06 | alpaca.sh : update model file name (#2074)                                        | tslmy
2023-07-05 | Expose generation timings from server & update completions.js (#2116)             | Tobias Lütke
2023-07-05 | Update Server Instructions (#2113)                                                | Jesse Jojo Johnson
2023-07-05 | ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)                | Stephan Walter
2023-07-05 | Update server instructions for web front end (#2103)                              | Jesse Jojo Johnson
2023-07-05 | embd-input: Fix input embedding example unsigned int seed (#2105)                 | Nigel Bosch
2023-07-04 | Add an API example using server.cpp similar to OAI. (#2009)                       | jwj7140
2023-07-04 | Simple webchat for server (#1998)                                                 | Tobias Lütke
2023-07-04 | fix server crashes (#2076)                                                        | Henri Vasserman
2023-07-03 | server: add option to output probabilities for completion (#1962)                 | WangHaoranRobin
2023-07-01 | embd-input : fix returning ptr to temporary                                       | Georgi Gerganov
2023-07-01 | train : fix compile warning                                                       | Georgi Gerganov
2023-06-29 | Use unsigned for random seed (#2006)                                              | Howard Su
2023-06-28 | CUDA GPU acceleration for LoRAs + f16 models (#1970)                              | Johannes Gäßler
2023-06-28 | llama : support input embeddings directly (#1910)                                 | ningshanwutuobang
2023-06-27 | baby-llama : fix build after ggml_rope change (#2016)                             | Howard Su
2023-06-27 | llama : fix rope usage after ChatGLM change                                       | Georgi Gerganov
2023-06-26 | ggml : increase max tensor name + clean up compiler warnings in train-text (#... | David Yang
2023-06-26 | ggml : add NUMA support (#1556)                                                   | zrm
2023-06-25 | fix server sampling: top k sampler first (#1977)                                  | anon998
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797)           | Didzis Gosko
2023-06-20 | [Fix] Reenable server embedding endpoint (#1937)                                  | Henri Vasserman
2023-06-18 | examples : fix examples/metal (#1920)                                             | Kawrakow
2023-06-17 | minor : warning fixes                                                             | Georgi Gerganov
2023-06-17 | Only one CUDA stream per device for async compute (#1898)                         | Johannes Gäßler
2023-06-17 | llama : fix kv_cache `n` init (close #1903)                                       | Georgi Gerganov
2023-06-17 | Server Example Refactor and Improvements (#1570)                                  | Randall Fitzgerald
2023-06-17 | hooks : setting up flake8 and pre-commit hooks (#1681)                            | Jiří Podivín
2023-06-17 | train : get raw text instead of page with html (#1905)                            | David Yang
2023-06-16 | examples : add "simple" (#1840)                                                   | SuperUserNameMan
2023-06-16 | Fixed possible macro redefinition (#1892)                                         | FrankHB
2023-06-16 | build : fix and ignore MSVC warnings (#1889)                                      | Borislav Stanimirov
2023-06-15 | examples : add chat-vicuna.sh (#1854)                                             | yangli2
2023-06-15 | readme : server compile flag (#1874)                                              | Srinivas Billa
2023-06-15 | Better error when using both LoRA + GPU layers (#1861)                            | Johannes Gäßler
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827)                              | Johannes Gäßler
2023-06-13 | baby-llama : fix operator!= (#1821)                                               | 0xspringtime
2023-06-13 | train : improved training-from-scratch example (#1652)                            | xaedes
2023-06-13 | llama : do a warm-up eval at start for better timings (#1824)                     | Georgi Gerganov
2023-06-13 | Allow "quantizing" to f16 and f32 (#1787)                                         | Kerfuffle
2023-06-11 | Fix issue where interactive mode crashes when input exceeds ctx size (#1789)      | Kerfuffle