author    Jesse Jojo Johnson <williamsaintgeorge@gmail.com>  2023-07-05 18:03:19 +0000
committer GitHub <noreply@github.com>                        2023-07-05 21:03:19 +0300
commit    983b555e9ddb36703cee4d22642afe958de093b7 (patch)
tree      1082286ada16a748f72ed24d8a125623cde052f4 /examples
parent    ec326d350c72afd23709a409944728a607188cc0 (diff)

Update Server Instructions (#2113)

* Update server instructions for web front end
* Update server README
* Remove duplicate OAI instructions
* Fix duplicate text

Co-authored-by: Jesse Johnson <thatguy@jessejojojohnson.com>
Diffstat (limited to 'examples')
-rw-r--r--  examples/server/README.md | 26
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/examples/server/README.md b/examples/server/README.md
index 160614b..037412d 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -21,7 +21,7 @@ Command line options:
- `-to N`, `--timeout N`: Server read/write timeout in seconds. Default `600`.
- `--host`: Set the hostname or ip address to listen. Default `127.0.0.1`.
- `--port`: Set the port to listen. Default: `8080`.
-- `--public`: path from which to serve static files (default examples/server/public)
+- `--path`: path from which to serve static files (default examples/server/public)
- `--embedding`: Enable embedding extraction, Default: disabled.
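
The options above can be combined on one command line. A hypothetical sketch (the model file and front-end directory names are placeholders, not files shipped with the project):

```shell
# Serve a model on localhost:8080 and serve static files from a custom directory.
# models/ggml-model.bin and ./my-frontend are placeholder paths.
./server -m models/ggml-model.bin --host 127.0.0.1 --port 8080 --path ./my-frontend
```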
## Build
@@ -207,3 +207,27 @@ openai.api_base = "http://<Your api-server IP>:port"
```
Then you can utilize llama.cpp as a drop-in replacement for OpenAI's **chat.completion** or **text_completion** API
+
+### Extending the Web Front End
+
+The default location for the static files is `examples/server/public`. You can extend the front end by running the server binary with `--path` set to `./your-directory` and importing `/completion.js` to get access to the `llamaComplete()` method. A simple example is below:
+
+```
+<html>
+ <body>
+ <pre>
+ <script type="module">
+ import { llamaComplete } from '/completion.js'
+
+ llamaComplete({
+ prompt: "### Instruction:\nWrite dad jokes, each one paragraph. You can use html formatting if needed.\n\n### Response:",
+ n_predict: 1024,
+ },
+ null,
+ (chunk) => document.write(chunk.data.content)
+ )
+ </script>
+ </pre>
+ </body>
+</html>
+```
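
The request the browser example sends can also be issued from any HTTP client. A minimal Python sketch — the field names mirror the JS example, and the `/completion` endpoint path and port are assumptions about a locally running server:

```python
import json

# Build the same request body the llamaComplete() call above sends.
# The field names (prompt, n_predict) mirror the JS example.
payload = {
    "prompt": "### Instruction:\nWrite dad jokes, each one paragraph.\n\n### Response:",
    "n_predict": 1024,
}
body = json.dumps(payload)
print(body)

# With a server running locally, the body could be POSTed like this
# (assumed endpoint; requires the server from this README to be up):
# import urllib.request
# req = urllib.request.Request(
#     "http://127.0.0.1:8080/completion",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read())
```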