From f1eaff4721153a5a5094fd1bd8cbdae7a3c079cc Mon Sep 17 00:00:00 2001
From: Georgi Gerganov
Date: Sat, 11 Mar 2023 17:58:18 +0200
Subject: Add AVX2 support for x86 architectures

thanks to @Const-me !
---
 README.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

(limited to 'README.md')

diff --git a/README.md b/README.md
index 73862d8..1770441 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@ The main goal is to run the model using 4-bit quantization on a MacBook.
 
 - Plain C/C++ implementation without dependencies
 - Apple silicon first-class citizen - optimized via Arm Neon and Accelerate framework
+- AVX2 support for x86 architectures
 - Mixed F16 / F32 precision
 - 4-bit quantization support
 - Runs on the CPU
@@ -185,9 +186,6 @@ When running the larger models, make sure you have enough disk space to store al
   In general, it seems to work, but I think it fails for unicode character support. Hopefully, someone can help with that
 - I don't know yet how much the quantization affects the quality of the generated text
 - Probably the token sampling can be improved
-- x86 quantization support [not yet ready](https://github.com/ggerganov/ggml/pull/27). Basically, you want to run this
-  on Apple Silicon. For now, on Linux and Windows you can use the F16 `ggml-model-f16.bin` model, but it will be much
-  slower.
 - The Accelerate framework is actually currently unused since I found that for tensor shapes typical for the Decoder,
   there is no benefit compared to the ARM_NEON intrinsics implementation. Of course, it's possible that I simlpy don't
   know how to utilize it properly. But in any case, you can even disable it with `LLAMA_NO_ACCELERATE=1 make` and the
--
cgit v1.2.3