From f257fd255044decffad93dee2502875ce66ad80c Mon Sep 17 00:00:00 2001
From: jwj7140 <32943891+jwj7140@users.noreply.github.com>
Date: Wed, 5 Jul 2023 03:06:12 +0900
Subject: Add an API example using server.cpp similar to OAI. (#2009)

* add api_like_OAI.py
* add evaluated token count to server
* add /v1/ endpoints binding
---
 examples/server/README.md | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

(limited to 'examples/server/README.md')

diff --git a/examples/server/README.md b/examples/server/README.md
index ba4b2fe..4ed226e 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -190,3 +190,19 @@ Run with bash:
 ```sh
 bash chat.sh
 ```
+
+### API like OAI
+
+API example using Python Flask: [api_like_OAI.py](api_like_OAI.py)
+This example must be used together with server.cpp.
+
+```sh
+python api_like_OAI.py
+```
+
+After running the API server, you can use it in Python by setting the API base URL:
+```python
+openai.api_base = "http://<Your api-server IP>:port"
+```
+
+Then you can use llama.cpp as a drop-in replacement for OpenAI's **chat.completion** or **text_completion** API.
-- 
cgit v1.2.3
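As a rough sketch of what a client request to the shim looks like, the snippet below builds an OpenAI-style `chat.completion` request against the `/v1/` endpoint bound by this patch. The host and port (`127.0.0.1:8081`) and the helper name are assumptions for illustration, not part of the patch; only the request shape and endpoint path follow the OpenAI convention the shim mimics.

```python
import json
from urllib import request

# Assumed address where api_like_OAI.py is listening; adjust as needed.
API_BASE = "http://127.0.0.1:8081"

def chat_completion_request(messages, model="llama"):
    """Build an OpenAI-style chat.completion HTTP request for the shim."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return request.Request(
        f"{API_BASE}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = chat_completion_request([{"role": "user", "content": "Hello"}])
print(req.full_url)  # http://127.0.0.1:8081/v1/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` (or pointing the `openai` client's `api_base` at the same address) exercises the same code path.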