author    Johannes Gäßler <johannesg@5d6.de>    2023-06-17 19:15:02 +0200
committer GitHub <noreply@github.com>           2023-06-17 19:15:02 +0200
commit    2c9380dd2f77e41149340f3ecb09764d793b16db (patch)
tree      55a8e2cfc2dce879981d9610f499f292a4702b31 /README.md
parent    051e1b0e6a6e3aee7d989b47760980e6fda5861c (diff)
Only one CUDA stream per device for async compute (#1898)
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 1 -
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index b9759b0..7defb75 100644
--- a/README.md
+++ b/README.md
@@ -336,7 +336,6 @@ Building the program with BLAS support may lead to some performance improvements
cmake .. -DLLAMA_CUBLAS=ON
cmake --build . --config Release
```
- Note: Because llama.cpp uses multiple CUDA streams for matrix multiplication results [are not guaranteed to be reproducible](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility). If you need reproducibility, set `GGML_CUDA_MAX_STREAMS` in the file `ggml-cuda.cu` to 1.
The environment variable [`CUDA_VISIBLE_DEVICES`](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars) can be used to specify which GPU(s) will be used.
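As a usage sketch of the `CUDA_VISIBLE_DEVICES` variable mentioned above (the `./main` binary name and model path are illustrative placeholders, not taken from this commit):

```shell
# Expose only the first GPU (device index 0) to the CUDA runtime;
# llama.cpp will then see a single device.
export CUDA_VISIBLE_DEVICES=0

# A hypothetical invocation (binary and model path are placeholders):
#   ./main -m models/7B/ggml-model-q4_0.bin -p "Hello"
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```

Setting the variable to a comma-separated list (e.g. `CUDA_VISIBLE_DEVICES=0,2`) exposes only those devices, in that order, to the process.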