path: root/examples/server
Age        | Commit message                                                                    | Author
2023-08-08 | Allow passing grammar to completion endpoint (#2532)                              | Martin Krasser
2023-08-04 | fix firefox autoscroll (#2519)                                                    | Jonas Wunderlich
2023-08-04 | server: regenerate completion.js.hpp (#2515)                                      | Cebtenzzre
2023-08-04 | Fixing race condition in server and partial stream handling in frontend. (#2391)  | Stephen Nichols
2023-08-01 | fix a typo in examples/server/README.md (#2478)                                   | Bono Lv
2023-08-01 | server : Support dark mode (#2414)                                                | ebraminio
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453)                              | Johannes Gäßler
2023-07-28 | examples : server chat mode with llama2 (#2400)                                   | nhamanasu
2023-07-25 | server: add rms_norm_eps parameter (#2380)                                        | slaren
2023-07-25 | [Server] Escape HTML in webchat (#2368)                                           | Henri Vasserman
2023-07-24 | Chat UI extras (#2366)                                                            | Aarni Koskela
2023-07-23 | Add gqa parameter support to the server (#2351)                                   | IgnacioFDM
2023-07-21 | make : fix embdinput library and server examples building on MSYS2 (#2235)        | Przemysław Pawełczyk
2023-07-19 | cmake : install targets (#2256)                                                   | wzy
2023-07-15 | llama : add custom RoPE (#2054)                                                   | Xiao-Yong Jin
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206)                    | Howard Su
2023-07-11 | Support using mmap when applying LoRA (#2095)                                     | Howard Su
2023-07-10 | mpi : add support for distributed inference via MPI (#2099)                       | Evan Miller
2023-07-06 | convert : update for baichuan (#2081)                                             | Judd
2023-07-05 | Expose generation timings from server & update completions.js (#2116)             | Tobias Lütke
2023-07-05 | Update Server Instructions (#2113)                                                | Jesse Jojo Johnson
2023-07-05 | Update server instructions for web front end (#2103)                              | Jesse Jojo Johnson
2023-07-04 | Add an API example using server.cpp similar to OAI. (#2009)                       | jwj7140
2023-07-04 | Simple webchat for server (#1998)                                                 | Tobias Lütke
2023-07-04 | fix server crashes (#2076)                                                        | Henri Vasserman
2023-07-03 | server: add option to output probabilities for completion (#1962)                 | WangHaoranRobin
2023-06-29 | Use unsigned for random seed (#2006)                                              | Howard Su
2023-06-26 | ggml : add NUMA support (#1556)                                                   | zrm
2023-06-25 | fix server sampling: top k sampler first (#1977)                                  | anon998
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797)           | Didzis Gosko
2023-06-20 | [Fix] Reenable server embedding endpoint (#1937)                                  | Henri Vasserman
2023-06-17 | Server Example Refactor and Improvements (#1570)                                  | Randall Fitzgerald
2023-06-15 | readme : server compile flag (#1874)                                              | Srinivas Billa
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827)                              | Johannes Gäßler
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)                     | Johannes Gäßler
2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625)      | Kerfuffle
2023-05-28 | examples : add --alias option to gpt_params to set use friendly model name (#... | Vladimir Zorin
2023-05-27 | Include server in releases + other build system cleanups (#1610)                  | Kerfuffle
2023-05-21 | examples : add server example with REST API (#1443)                               | Steward Garcia