llama.cpp.git log: examples/server (branch: master)
Date        Commit message                                                                Author
2023-07-06  convert : update for baichuan (#2081)                                         Judd
2023-07-05  Expose generation timings from server & update completions.js (#2116)         Tobias Lütke
2023-07-05  Update Server Instructions (#2113)                                            Jesse Jojo Johnson
2023-07-05  Update server instructions for web front end (#2103)                          Jesse Jojo Johnson
2023-07-04  Add an API example using server.cpp similar to OAI. (#2009)                   jwj7140
2023-07-04  Simple webchat for server (#1998)                                             Tobias Lütke
2023-07-04  fix server crashes (#2076)                                                    Henri Vasserman
2023-07-03  server: add option to output probabilities for completion (#1962)             WangHaoranRobin
2023-06-29  Use unsigned for random seed (#2006)                                          Howard Su
2023-06-26  ggml : add NUMA support (#1556)                                               zrm
2023-06-25  fix server sampling: top k sampler first (#1977)                              anon998
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)      Didzis Gosko
2023-06-20  [Fix] Reenable server embedding endpoint (#1937)                              Henri Vasserman
2023-06-17  Server Example Refactor and Improvements (#1570)                              Randall Fitzgerald
2023-06-15  readme : server compile flag (#1874)                                          Srinivas Billa
2023-06-14  CUDA full GPU acceleration, KV cache in VRAM (#1827)                          Johannes Gäßler
2023-06-06  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)                 Johannes Gäßler
2023-05-28  Only show -ngl option when relevant + other doc/arg handling updates (#1625)  Kerfuffle
2023-05-28  examples : add --alias option to gpt_params to set use friendly model name (#...  Vladimir Zorin
2023-05-27  Include server in releases + other build system cleanups (#1610)             Kerfuffle
2023-05-21  examples : add server example with REST API (#1443)                          Steward Garcia