From 2c9380dd2f77e41149340f3ecb09764d793b16db Mon Sep 17 00:00:00 2001
From: Johannes Gäßler
Date: Sat, 17 Jun 2023 19:15:02 +0200
Subject: Only one CUDA stream per device for async compute (#1898)

---
 README.md | 1 -
 1 file changed, 1 deletion(-)

(limited to 'README.md')

diff --git a/README.md b/README.md
index b9759b0..7defb75 100644
--- a/README.md
+++ b/README.md
@@ -336,7 +336,6 @@ Building the program with BLAS support may lead to some performance improvements
       cmake .. -DLLAMA_CUBLAS=ON
       cmake --build . --config Release
       ```
-  Note: Because llama.cpp uses multiple CUDA streams for matrix multiplication results [are not guaranteed to be reproducible](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility). If you need reproducibility, set `GGML_CUDA_MAX_STREAMS` in the file `ggml-cuda.cu` to 1.
 
   The environment variable [`CUDA_VISIBLE_DEVICES`](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars) can be used to specify which GPU(s) will be used.
-- 
cgit v1.2.3
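For context, the README text kept by this patch points at `CUDA_VISIBLE_DEVICES` for GPU selection. A minimal usage sketch follows; the binary name `./main` and the model path are illustrative assumptions, not taken from this patch:

```sh
# Hypothetical invocation: restrict llama.cpp to GPU 0 only.
# Device ordinals refer to the GPUs as enumerated by the CUDA driver.
CUDA_VISIBLE_DEVICES=0 ./main -m models/7B/ggml-model-q4_0.bin -p "Hello"

# Expose two GPUs; devices not listed are invisible to the process.
CUDA_VISIBLE_DEVICES=0,1 ./main -m models/7B/ggml-model-q4_0.bin -p "Hello"
```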