author    | Johannes Gäßler <johannesg@5d6.de> | 2023-06-17 19:15:02 +0200
committer | GitHub <noreply@github.com>        | 2023-06-17 19:15:02 +0200
commit    | 2c9380dd2f77e41149340f3ecb09764d793b16db (patch)
tree      | 55a8e2cfc2dce879981d9610f499f292a4702b31 /README.md
parent    | 051e1b0e6a6e3aee7d989b47760980e6fda5861c (diff)
Only one CUDA stream per device for async compute (#1898)
Diffstat (limited to 'README.md')
-rw-r--r-- | README.md | 1
1 file changed, 0 insertions(+), 1 deletion(-)
@@ -336,7 +336,6 @@ Building the program with BLAS support may lead to some performance improvements
   cmake .. -DLLAMA_CUBLAS=ON
   cmake --build . --config Release
   ```
-  Note: Because llama.cpp uses multiple CUDA streams for matrix multiplication results [are not guaranteed to be reproducible](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility). If you need reproducibility, set `GGML_CUDA_MAX_STREAMS` in the file `ggml-cuda.cu` to 1.
   The environment variable [`CUDA_VISIBLE_DEVICES`](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars) can be used to specify which GPU(s) will be used.
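For context, the build steps shown in the hunk, together with the `CUDA_VISIBLE_DEVICES` usage the README describes, can be sketched as a shell session. This is an illustrative sketch, not part of the commit: it assumes a CUDA toolkit is installed, and the model path, prompt, and binary location are placeholders (the `main` example binary is assumed from llama.cpp of this era).

```shell
# Build llama.cpp with cuBLAS support (sketch; assumes the CUDA toolkit is installed)
mkdir -p build
cd build
cmake .. -DLLAMA_CUBLAS=ON
cmake --build . --config Release

# Restrict inference to GPU 0 via the CUDA_VISIBLE_DEVICES environment variable.
# Model path and prompt below are hypothetical placeholders.
CUDA_VISIBLE_DEVICES=0 ./bin/main -m ../models/7B/ggml-model-q4_0.bin -p "Hello"
```

Setting `CUDA_VISIBLE_DEVICES=0,1` would instead expose the first two GPUs; an empty value hides all GPUs from the process.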