llama.cpp.git (branch: master) — commit log for examples/server/README.md
Age         Commit message                                                                Author
2023-08-08  Allow passing grammar to completion endpoint (#2532)                          Martin Krasser
2023-08-01  fix a typo in examples/server/README.md (#2478)                               Bono Lv
2023-07-15  llama : add custom RoPE (#2054)                                               Xiao-Yong Jin
2023-07-13  Revert "Support using mmap when applying LoRA (#2095)" (#2206)                Howard Su
2023-07-11  Support using mmap when applying LoRA (#2095)                                 Howard Su
2023-07-06  convert : update for baichuan (#2081)                                         Judd
2023-07-05  Expose generation timings from server & update completions.js (#2116)         Tobias Lütke
2023-07-05  Update Server Instructions (#2113)                                            Jesse Jojo Johnson
2023-07-05  Update server instructions for web front end (#2103)                          Jesse Jojo Johnson
2023-07-04  Add an API example using server.cpp similar to OAI. (#2009)                   jwj7140
2023-06-29  Use unsigned for random seed (#2006)                                          Howard Su
2023-06-20  [Fix] Reenable server embedding endpoint (#1937)                              Henri Vasserman
2023-06-17  Server Example Refactor and Improvements (#1570)                              Randall Fitzgerald
2023-06-15  readme : server compile flag (#1874)                                          Srinivas Billa
2023-06-14  CUDA full GPU acceleration, KV cache in VRAM (#1827)                          Johannes Gäßler
2023-06-06  Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703)                 Johannes Gäßler
2023-05-28  Only show -ngl option when relevant + other doc/arg handling updates (#1625)  Kerfuffle
2023-05-21  examples : add server example with REST API (#1443)                           Steward Garcia