| author | Srinivas Billa <nivibilla@gmail.com> | 2023-06-15 18:36:38 +0100 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2023-06-15 20:36:38 +0300 |
| commit | 9dda13e5e1f70bdfc25fbc0f0378f27c8b67e983 (patch) | |
| tree | d0d54726faec08135d4ea91a3d71dc1c6fc93149 /examples | |
| parent | 37e257c48e350cf03c353c10d31e777f8d00123d (diff) | |
readme : server compile flag (#1874)
Explicitly include the server make instructions for C++ noobs like me ;)
Diffstat (limited to 'examples')
-rw-r--r-- | examples/server/README.md | 4 |
1 file changed, 4 insertions, 0 deletions
````diff
diff --git a/examples/server/README.md b/examples/server/README.md
index 7dabac9..3b11165 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -16,6 +16,10 @@ This example allow you to have a llama.cpp http server to interact from a web pa
 
 To get started right away, run the following command, making sure to use the correct path for the model you have:
 
 #### Unix-based systems (Linux, macOS, etc.):
+Make sure to build with the server option on
+```bash
+LLAMA_BUILD_SERVER=1 make
+```
 ```bash
 ./server -m models/7B/ggml-model.bin --ctx_size 2048
````
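Putting the patch together, the workflow this commit documents is: rebuild with the server target enabled, then launch the resulting binary against a local model. A minimal sketch using only the flag and paths shown in the diff (the model path `models/7B/ggml-model.bin` is the README's example and should be replaced with your own converted model; later llama.cpp versions changed the build system, so `LLAMA_BUILD_SERVER=1` reflects the repository as of this commit):

```bash
# Build llama.cpp with the HTTP server target enabled
# (the flag added to the README by this commit)
LLAMA_BUILD_SERVER=1 make

# Start the server with a 2048-token context window,
# pointing -m at the model file on disk
./server -m models/7B/ggml-model.bin --ctx_size 2048
```

Both commands come straight from the patched README; `--ctx_size 2048` sets the prompt context window the server allocates at startup.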