author | Georgi Gerganov <ggerganov@gmail.com> | 2023-06-04 23:38:19 +0300
committer | GitHub <noreply@github.com> | 2023-06-04 23:38:19 +0300
commit | 827f5eda91e5b7299848ee2c7179d873bdee0f7b (patch)
tree | 8e6a46b9bb7d8d938272b1f944aeac7f05c00b80 /README.md
parent | ecb217db4fcfa3880300ad08531a5fb6bb142d45 (diff)
readme : update hot topics
Diffstat (limited to 'README.md')
-rw-r--r-- | README.md | 8
1 file changed, 5 insertions, 3 deletions
@@ -9,9 +9,11 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 **Hot topics:**
 
-- Quantization formats `Q4` and `Q8` have changed again (19 May) - [(info)](https://github.com/ggerganov/llama.cpp/pull/1508)
-- Quantization formats `Q4` and `Q5` have changed - requantize any old models [(info)](https://github.com/ggerganov/llama.cpp/pull/1405)
-- [Roadmap May 2023](https://github.com/ggerganov/llama.cpp/discussions/1220)
+- GPU support with Metal (Apple Silicon): https://github.com/ggerganov/llama.cpp/pull/1642
+- High-quality 2,3,4,5,6-bit quantization: https://github.com/ggerganov/llama.cpp/pull/1684
+- Multi-GPU support: https://github.com/ggerganov/llama.cpp/pull/1607
+- Training LLaMA models from scratch: https://github.com/ggerganov/llama.cpp/pull/1652
+- CPU threading improvements: https://github.com/ggerganov/llama.cpp/pull/1632
 
 <details>
 <summary>Table of Contents</summary>
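For context on the headline item, below is a minimal sketch of trying the new Metal backend announced in PR #1642. It assumes a macOS machine with Apple Silicon and an already-quantized model; the model path and prompt are illustrative, and the `LLAMA_METAL=1` / `-ngl` flags are taken from that PR's description rather than from this diff.

```bash
# Build llama.cpp with the Metal backend enabled
# (requires a checkout at or after the commit above, where PR #1642 is merged)
LLAMA_METAL=1 make

# Run inference, offloading computation to the GPU with -ngl;
# substitute your own quantized model for the illustrative path below
./main -m ./models/7B/ggml-model-q4_0.bin \
       -p "Building a website can be done in 10 simple steps:" \
       -n 64 -ngl 1
```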