author    Henri Vasserman <henv@hot.ee>    2023-06-03 16:35:20 +0300
committer GitHub <noreply@github.com>      2023-06-03 16:35:20 +0300
commit    d8bd0013e8768aaa3dc9cfc1ff01499419d5348e (patch)
tree      b052838810279e22f1b1dd1809501d64d6a7669d /README.md
parent    b5c85468a3eadf424420af5bf11c2353ff828cda (diff)
Add info about CUDA_VISIBLE_DEVICES (#1682)
Diffstat (limited to 'README.md')
-rw-r--r--    README.md    4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 00571d8..aba22b9 100644
--- a/README.md
+++ b/README.md
@@ -310,6 +310,8 @@ Building the program with BLAS support may lead to some performance improvements
```
Note: Because llama.cpp uses multiple CUDA streams for matrix multiplication results [are not guaranteed to be reproducible](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility). If you need reproducibility, set `GGML_CUDA_MAX_STREAMS` in the file `ggml-cuda.cu` to 1.
+ The environment variable [`CUDA_VISIBLE_DEVICES`](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars) can be used to specify which GPU(s) will be used.
+
- **CLBlast**
OpenCL acceleration is provided by the matrix multiplication kernels from the [CLBlast](https://github.com/CNugteren/CLBlast) project and custom kernels for ggml that can generate tokens on the GPU.
@@ -348,7 +350,7 @@ Building the program with BLAS support may lead to some performance improvements
cmake --install . --prefix /some/path
```
- Where `/some/path` is where the built library will be installed (default is `/usr/loca`l`).
+ Where `/some/path` is where the built library will be installed (default is `/usr/local`).
</details>
Building:
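For context, the `CUDA_VISIBLE_DEVICES` behavior documented by this patch could be exercised as sketched below. The model path and prompt are illustrative, and `./main` is assumed to be the built llama.cpp example binary; adjust both to your setup.

```shell
# Restrict llama.cpp to the second physical GPU (device index 1).
# Inside the process, the visible device is renumbered to 0.
CUDA_VISIBLE_DEVICES=1 ./main -m models/7B/ggml-model.bin -p "Hello"

# Expose only devices 0 and 2 (they appear as 0 and 1 to the process):
CUDA_VISIBLE_DEVICES=0,2 ./main -m models/7B/ggml-model.bin -p "Hello"
```

Note that the `VAR=value command` form sets the variable only for that single invocation, so it does not leak into the rest of the shell session.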