llama.cpp.git — commit log (branch: master)
Age         Commit message (Author)

2023-06-05  ci : disable auto tidy (#1705)  (Georgi Gerganov)
2023-06-05  ggml : add SOTA 2,3,4,5,6 bit k-quantizations (#1684)  (Kawrakow)
2023-06-05  Increase 3B scratch buffers. (#1698)  (Henri Vasserman)
2023-06-05  llama : fix Metal KV cache sync (close #1695)  (Georgi Gerganov)
2023-06-04  readme : update hot topics  (Georgi Gerganov)
2023-06-04  llama : Metal inference (#1642)  (Georgi Gerganov)
2023-06-04  OpenCL: Fix duplication of layers in VRAM and RAM, add GPU mul kernel (#1653)  (0cc4m)
2023-06-03  Add info about CUDA_VISIBLE_DEVICES (#1682)  (Henri Vasserman)
2023-06-03  Docker: change to calling convert.py (#1641)  (Jiří Podivín)
2023-06-03  Fix prompt cache saving and chat-persistent rollover (#1678)  (Evan Jones)
2023-05-30  OpenLLaMA 3B support (#1588)  (Henri Vasserman)
2023-05-29  ggml : sync cgraph import / export API  (Georgi Gerganov)
2023-05-29  ggml : fix bug in ggml_alibi  (Georgi Gerganov)
2023-05-29  Work around for recalculating logits in cached prompts (Fixes #1585) (#1609)  (DannyDaemonic)
2023-05-28  Adding git in container package dependencies (#1621)  (Jiří Podivín)
2023-05-28  LLAMA_DEBUG adds debug symbols (#1617)  (Johannes Gäßler)
2023-05-28  Only show -ngl option when relevant + other doc/arg handling updates (#1625)  (Kerfuffle)
2023-05-28  examples : add --alias option to gpt_params to set use friendly model name (#...  (Vladimir Zorin)
2023-05-28  opencl : no need to allocate cl_mem on heap (#1612)  (Howard Su)
2023-05-28  opencl : use strstr to check if fp16 supported (#1611)  (Howard Su)
2023-05-27  ggml : add support for the RISCV architecture (#1616)  (apcameron)
2023-05-27  Include server in releases + other build system cleanups (#1610)  (Kerfuffle)
2023-05-27  Add documentation about CLBlast (#1604)  (Henri Vasserman)
2023-05-27  [CI] Fix openblas (#1613)  (Henri Vasserman)
2023-05-27  ggml : add ggml_tensor_overhead()  (Georgi Gerganov)
2023-05-27  [CI] CLBlast: Fix directory name (#1606)  (Henri Vasserman)
2023-05-27  ggml : sync ggml core (minor additions, e.g. ggml_get_tensor_by_name())  (Georgi Gerganov)
2023-05-25  Some improvements to loading the session with --prompt-cache (#1550)  (Kerfuffle)
2023-05-26  cuda : performance optimizations (#1530)  (Johannes Gäßler)
2023-05-24  Update CLBlast to 1.6.0 (#1580)  (Henri Vasserman)
2023-05-24  readme : add docs for chat-persistent.sh (#1568)  (Evan Jones)
2023-05-24  chat-persistent.sh : use bracket expressions in grep (#1564)  (Senemu)
2023-05-23  Fix handling of "invalid property" when creating OpenCL command queue (#1565)  (Maarten ter Huurne)
2023-05-23  OpenCL Token Generation Acceleration (#1459)  (0cc4m)
2023-05-21  examples : add server example with REST API (#1443)  (Steward Garcia)
2023-05-21  make : .PHONY clean (#1553)  (Stefan Sydow)
2023-05-21  ggml : output 3d sizes in ggml_graph_dump_dot()  (Georgi Gerganov)
2023-05-20  ggml : update WASM SIMD  (Georgi Gerganov)
2023-05-20  feature : support blis and other blas implementation (#1536)  (Zenix)
2023-05-20  OpenCL: Fixes for older devices. (#1435)  (Henri Vasserman)
2023-05-20  llama : define magic numbers as integer constants (#1518) (#1520)  (Juuso Alasuutari)
2023-05-20  ggml : add ggml_clamp() (#1539)  (Georgi Gerganov)
2023-05-20  cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti...  (Johannes Gäßler)
2023-05-20  Revert "feature : add blis and other BLAS implementation support (#1502)"  (Georgi Gerganov)
2023-05-20  feature : add blis and other BLAS implementation support (#1502)  (Zenix)
2023-05-20  llama : add llama_init_backend() API (close #1527)  (Georgi Gerganov)
2023-05-20  Fix for mingw (#1462)  (DannyDaemonic)
2023-05-20  llama : fix name shadowing and C4146 (#1526)  (Maxime)
2023-05-20  llama : fix compile warnings in llama_set_state_data()  (Georgi Gerganov)
2023-05-20  ggml : fix scalar implementation of Q4_1 dot  (Georgi Gerganov)