| author | Georgi Gerganov <ggerganov@gmail.com> | 2023-03-12 09:03:25 +0200 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2023-03-12 09:03:25 +0200 |
| commit | 702fddf5c5c3c1377e169ba9ecdfed4cb16c268b (patch) | |
| tree | eabf0dfd2b632223d699e2d0ab1b44faf885af69 | |
| parent | 7d86e25bf648eb369a3a8388bf239b6b19f7a789 (diff) | |
Clarify meaning of hacking
-rw-r--r-- | README.md | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
@@ -18,7 +18,7 @@ The main goal is to run the model using 4-bit quantization on a MacBook
 - 4-bit quantization support
 - Runs on the CPU
 
-This was hacked in an evening - I have no idea if it works correctly.
+This was [hacked in an evening](https://github.com/ggerganov/llama.cpp/issues/33#issuecomment-1465108022) - I have no idea if it works correctly.
 Please do not make conclusions about the models based on the results from this implementation.
 For all I know, it can be completely wrong.
 
 This project is for educational purposes and is not going to be maintained properly.
 New features will probably be added mostly through community contributions, if any.
```