path: root/prompts/chat-with-vicuna-v1.txt
author    Daniel Drake <drake@endlessos.org>  2023-07-01 20:31:44 +0200
committer GitHub <noreply@github.com>  2023-07-01 21:31:44 +0300
commit b2132270678c473f7cd9ba871b03d694126bc33a (patch)
tree   2261f5cc594fc57b1b341e21cb7c38ae7ea8d76d /prompts/chat-with-vicuna-v1.txt
parent 2f8cd979ecd1fa582852e7136e92ff8990b98fd8 (diff)
cmake : don't force -mcpu=native on aarch64 (#2063)
It's currently not possible to cross-compile llama.cpp for aarch64 because CMakeLists.txt forces -mcpu=native for that target. -mcpu=native doesn't make sense if your build host is not the target architecture, and clang rejects it for that reason, aborting the build. This can be easily reproduced using the current Android NDK to build for aarch64 on an x86_64 host.

If there is not a specific CPU-tuning target for aarch64, then -mcpu should be omitted completely. I think that makes sense: there is not enough variance in the aarch64 instruction set to warrant a fixed -mcpu optimization at this point. And if someone is building natively and wishes to enable any possible optimizations for the host device, there is already the LLAMA_NATIVE option available.

Fixes #495.
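The guard described above might look roughly like the following CMake fragment. This is a hedged sketch of the idea, not the actual diff from this commit; the variable names CMAKE_SYSTEM_PROCESSOR, CMAKE_CROSSCOMPILING, and add_compile_options are standard CMake, while the surrounding condition is an illustration of the intent.

```cmake
# Sketch only: apply -mcpu=native solely when building natively on aarch64.
# Cross-compiles (e.g. Android NDK on an x86_64 host) skip the flag entirely,
# so clang no longer rejects -mcpu=native and aborts the build.
if (CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64" AND
    LLAMA_NATIVE AND NOT CMAKE_CROSSCOMPILING)
    add_compile_options(-mcpu=native)
endif()
```

The key point is that omitting -mcpu leaves the compiler at its baseline aarch64 code generation, which is acceptable given the limited variance in the aarch64 instruction set, while native builds can still opt in via LLAMA_NATIVE.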
Diffstat (limited to 'prompts/chat-with-vicuna-v1.txt')
0 files changed, 0 insertions, 0 deletions