author	Srinivas Billa <nivibilla@gmail.com>	2023-06-15 18:36:38 +0100
committer	GitHub <noreply@github.com>	2023-06-15 20:36:38 +0300
commit	9dda13e5e1f70bdfc25fbc0f0378f27c8b67e983 (patch)
tree	d0d54726faec08135d4ea91a3d71dc1c6fc93149 /examples/server/README.md
parent	37e257c48e350cf03c353c10d31e777f8d00123d (diff)
readme : server compile flag (#1874)
Explicitly include the server make instructions for C++ noobs like me ;)
Diffstat (limited to 'examples/server/README.md')
-rw-r--r--	examples/server/README.md	4
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/examples/server/README.md b/examples/server/README.md
index 7dabac9..3b11165 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -16,6 +16,10 @@ This example allow you to have a llama.cpp http server to interact from a web pa
 To get started right away, run the following command, making sure to use the correct path for the model you have:
 #### Unix-based systems (Linux, macOS, etc.):
+Make sure to build with the server option on
+```bash
+LLAMA_BUILD_SERVER=1 make
+```
 ```bash
 ./server -m models/7B/ggml-model.bin --ctx_size 2048
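
With the patch applied, the quick-start section of `examples/server/README.md` would read roughly as follows (a sketch assembled from the hunk above; surrounding lines of the full file may differ):

````markdown
#### Unix-based systems (Linux, macOS, etc.):
Make sure to build with the server option on
```bash
LLAMA_BUILD_SERVER=1 make
```
```bash
./server -m models/7B/ggml-model.bin --ctx_size 2048
```
````

The model path is an example; substitute the path to whichever GGML model you have downloaded.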