| Age | Commit message | Author |
|------------|----------------|--------|
| 2023-06-07 | k-quants : allow to optionally disable at compile time (#1734) | Georgi Gerganov |
| 2023-06-07 | flake : update to support metal on m1/m2 (#1724) | jacobi petrucciani |
| 2023-06-07 | readme : add June roadmap | Georgi Gerganov |
| 2023-06-06 | main : add the possibility to open the prompt cache read-only (#1640) | Willy Tarreau |
| 2023-06-06 | llama : fix vram_scratch var | Georgi Gerganov |
| 2023-06-06 | llama : fix compile warnings | Georgi Gerganov |
| 2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler |
| 2023-06-06 | metal : add f16 support | Georgi Gerganov |
| 2023-06-06 | Clblast fixes + enhancements to save VRAM and offload more layers (#1675) | LostRuins |
| 2023-06-06 | ggml : fix builds, add ggml-quants-k.o (close #1712, close #1710) | Georgi Gerganov |
| 2023-06-06 | gitignore : add .clang-tidy | Georgi Gerganov |
| 2023-06-06 | llama : temporary disable Q6_K output quantization (#1711) | Georgi Gerganov |
| 2023-06-06 | metal : add checks for buffer size (#1706) | Spencer Sutton |
| 2023-06-05 | docs : add performance troubleshoot + example benchmark documentation (#1674) | Yuval Peled |
| 2023-06-05 | readme : fix typo (#1700) | Foul-Tarnished |
| 2023-06-05 | llama : consistently catch and throw only exceptions deriving from std::excep... | mgroeber9110 |
| 2023-06-05 | metal : use shared buffers between CPU and GPU (#1696) | kiltyj |
| 2023-06-05 | ggml : fix internal overflow in ggml_time_us on Windows (#1702) | grahameth |
| 2023-06-05 | ci : disable auto tidy (#1705) | Georgi Gerganov |
| 2023-06-05 | ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684) | Kawrakow |
| 2023-06-05 | Increase 3B scratch buffers. (#1698) | Henri Vasserman |
| 2023-06-05 | llama : fix Metal KV cache sync (close #1695) | Georgi Gerganov |
| 2023-06-04 | readme : update hot topics | Georgi Gerganov |
| 2023-06-04 | llama : Metal inference (#1642) | Georgi Gerganov |
| 2023-06-04 | OpenCL: Fix duplication of layers in VRAM and RAM, add GPU mul kernel (#1653) | 0cc4m |
| 2023-06-03 | Add info about CUDA_VISIBLE_DEVICES (#1682) | Henri Vasserman |
| 2023-06-03 | Docker: change to calling convert.py (#1641) | Jiří Podivín |
| 2023-06-03 | Fix prompt cache saving and chat-persistent rollover (#1678) | Evan Jones |
| 2023-05-30 | OpenLLaMA 3B support (#1588) | Henri Vasserman |
| 2023-05-29 | ggml : sync cgraph import / export API | Georgi Gerganov |
| 2023-05-29 | ggml : fix bug in ggml_alibi | Georgi Gerganov |
| 2023-05-29 | Work around for recalculating logits in cached prompts (Fixes #1585) (#1609) | DannyDaemonic |
| 2023-05-28 | Adding git in container package dependencies (#1621) | Jiří Podivín |
| 2023-05-28 | LLAMA_DEBUG adds debug symbols (#1617) | Johannes Gäßler |
| 2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle |
| 2023-05-28 | examples : add --alias option to gpt_params to set use friendly model name (#... | Vladimir Zorin |
| 2023-05-28 | opencl : no need to allocate cl_mem on heap (#1612) | Howard Su |
| 2023-05-28 | opencl : use strstr to check if fp16 supported (#1611) | Howard Su |
| 2023-05-27 | ggml : add support for the RISCV architecture (#1616) | apcameron |
| 2023-05-27 | Include server in releases + other build system cleanups (#1610) | Kerfuffle |
| 2023-05-27 | Add documentation about CLBlast (#1604) | Henri Vasserman |
| 2023-05-27 | [CI] Fix openblas (#1613) | Henri Vasserman |
| 2023-05-27 | ggml : add ggml_tensor_overhead() | Georgi Gerganov |
| 2023-05-27 | [CI] CLBlast: Fix directory name (#1606) | Henri Vasserman |
| 2023-05-27 | ggml : sync ggml core (minor additions, e.g. ggml_get_tensor_by_name()) | Georgi Gerganov |
| 2023-05-25 | Some improvements to loading the session with --prompt-cache (#1550) | Kerfuffle |
| 2023-05-26 | cuda : performance optimizations (#1530) | Johannes Gäßler |
| 2023-05-24 | Update CLBlast to 1.6.0 (#1580) | Henri Vasserman |
| 2023-05-24 | readme : add docs for chat-persistent.sh (#1568) | Evan Jones |
| 2023-05-24 | chat-persistent.sh : use bracket expressions in grep (#1564) | Senemu |