author | Georgi Gerganov <ggerganov@gmail.com> | 2023-03-21 22:57:35 +0200 |
---|---|---|
committer | GitHub <noreply@github.com> | 2023-03-21 22:57:35 +0200 |
commit | 3366853e41fcc818222a0271c76b6106179106fb (patch) | |
tree | 89cb85ce5aedc28404db1d7ba6837f57a5d2dc2d | |
parent | 3f9c6135e45ae3f520b1e17197004cc60c9ca45b (diff) |
Add notice about pending change
-rw-r--r-- | README.md | 12 |
1 files changed, 9 insertions, 3 deletions
```diff
@@ -5,15 +5,21 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
+---
+
+**TEMPORARY NOTICE:**
+Big code change incoming: https://github.com/ggerganov/llama.cpp/pull/370
+
+Do not merge stuff until we merge this. Probably merge will happen on March 22 ~6:00am UTC
+
+---
+
 **Hot topics:**
 
 - [Added Alpaca support](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
 - Cache input prompts for faster initialization: https://github.com/ggerganov/llama.cpp/issues/64
 - Create a `llama.cpp` logo: https://github.com/ggerganov/llama.cpp/issues/105
 
-**TEMPORARY NOTICE:**
-If you're updating to the latest master, you will need to regenerate your model files as the format has changed.
-
 ## Description
 
 The main goal is to run the model using 4-bit quantization on a MacBook
```
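The README's Description line mentions running the model with 4-bit quantization. As a rough illustration only (a simplified sketch of block-wise 4-bit quantization in general, not llama.cpp's actual quantization code, whose block layout and scale selection differ), each group of 32 weights can be stored as one shared float scale plus 32 signed 4-bit integers:

```python
# Hypothetical sketch of block-wise 4-bit quantization: each block of 32
# floats becomes one float scale plus 32 integers clamped to [-8, 7].
# This is NOT the llama.cpp implementation, just the general idea.

def quantize_block(xs):
    # Pick a scale so the largest-magnitude value maps near the 4-bit edge.
    amax = max(abs(x) for x in xs)
    scale = amax / 7.0 if amax > 0 else 1.0
    # Round each value to the nearest representable 4-bit level.
    qs = [max(-8, min(7, round(x / scale))) for x in xs]
    return scale, qs

def dequantize_block(scale, qs):
    # Recover approximate floats from the shared scale and 4-bit codes.
    return [q * scale for q in qs]

block = [0.1 * i - 1.6 for i in range(32)]
scale, qs = quantize_block(block)
restored = dequantize_block(scale, qs)
# Rounding error per value is bounded by half a quantization step.
err = max(abs(a - b) for a, b in zip(block, restored))
```

The memory win is what makes a MacBook feasible: 32 weights cost one 32-bit scale plus 32 × 4 bits (~4.5 bits/weight) instead of 32 bits each.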