Age | Commit message | Author |
---|---|---|
2023-06-26 | ggml : add NUMA support (#1556) | zrm |
2023-06-25 | fix server sampling: top k sampler first (#1977) | anon998 |
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797) | Didzis Gosko |
2023-06-20 | [Fix] Reenable server embedding endpoint (#1937) | Henri Vasserman |
2023-06-17 | Server Example Refactor and Improvements (#1570) | Randall Fitzgerald |
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler |
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler |
2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle |
2023-05-28 | examples : add --alias option to gpt_params to set use friendly model name (#... | Vladimir Zorin |
2023-05-27 | Include server in releases + other build system cleanups (#1610) | Kerfuffle |
2023-05-21 | examples : add server example with REST API (#1443) | Steward Garcia |