llama.cpp.git commit log (branch: master)
Age         Commit message  [Author]
2023-06-18  Added tokens per second to info prints (#1928)  [Johannes Gäßler]
2023-06-18  Fixed incorrectly applying RMS norm twice (#1925)  [Johannes Gäßler]
2023-06-18  ggml : fix bug in ggml_compute_forward_add_q_f32 (#1918)  [l3utterfly]
2023-06-18  readme : update Android build instructions (#1922)  [Mike]
2023-06-18  llama : prevent usage of k-quants when tensor size is not a multiple of 256 (...  [Kawrakow]
2023-06-18  examples : fix examples/metal (#1920)  [Kawrakow]
2023-06-18  metal : handle buffers larger than device's maxBufferLength (#1826)  [Georgi Gerganov]
2023-06-18  cmake : add CUDA_ARCHITECTURES to new target ggml_static (#1917)  [Howard Su]
2023-06-17  make : do not print help for simple example  [Georgi Gerganov]
2023-06-17  minor : warning fixes  [Georgi Gerganov]
2023-06-17  Only one CUDA stream per device for async compute (#1898)  [Johannes Gäßler]
2023-06-17  llama : fix kv_cache `n` init (close #1903)  [Georgi Gerganov]
2023-06-17  make : update for latest Arch (#1701)  [DaniAndTheWeb]
2023-06-17  ggml : fix warnings under MSVC (#1908)  [Howard Su]
2023-06-17  metal : add norm, cpy f16->f16, alibi kernels (#1823)  [Aaron Miller]
2023-06-17  exposed modules so that they can be invoked by nix run github:ggerganov/llama...  [Faez Shakil]
2023-06-17  Server Example Refactor and Improvements (#1570)  [Randall Fitzgerald]
2023-06-17  hooks : setting up flake8 and pre-commit hooks (#1681)  [Jiří Podivín]
2023-06-17  readme : alternative way to build for Android with CLBlast. (#1828)  [Gustavo Rocha Dias]
2023-06-17  Allow cmake to build ggml as a library (#1896)  [Kerfuffle]
2023-06-17  train : get raw text instead of page with html (#1905)  [David Yang]
2023-06-16  opencl : support k-quants (#1836)  [0cc4m]
2023-06-16  examples : add "simple" (#1840)  [SuperUserNameMan]
2023-06-16  cmake : add auto detection of BLAS_INCLUDE_DIRS (#1886)  [Zenix]
2023-06-16  llama : fix embd when offloading non-repeating layers (#1891)  [Johannes Gäßler]
2023-06-16  Fixed possible macro redefinition (#1892)  [FrankHB]
2023-06-16  build : fix and ignore MSVC warnings (#1889)  [Borislav Stanimirov]
2023-06-16  CUDA : faster k-quant dot kernels (#1862)  [Kawrakow]
2023-06-16  gitignore : add several entries specific to Visual Studio (#1888)  [Borislav Stanimirov]
2023-06-15  Fixed CUDA runtime version check (#1879)  [Johannes Gäßler]
2023-06-15  cmake : remove whitespaces  [Georgi Gerganov]
2023-06-15  examples : add chat-vicuna.sh (#1854)  [yangli2]
2023-06-15  cmake : set include path for OpenBlas (#1830)  [Igor Okulist]
2023-06-15  swift : Package compile breaks due to ggml-metal.metal (#1831)  [Frederik Vogel]
2023-06-15  make : add train-text-from-scratch (#1850)  [daboe01]
2023-06-15  readme : server compile flag (#1874)  [Srinivas Billa]
2023-06-15  make : clean *.so files (#1857)  [sandyiscool]
2023-06-15  Fix the validation of main device (#1872)  [Howard Su]
2023-06-15  metal : parallel command buffer encoding (#1860)  [Georgi Gerganov]
2023-06-15  Better error when using both LoRA + GPU layers (#1861)  [Johannes Gäßler]
2023-06-14  CUDA full GPU acceleration, KV cache in VRAM (#1827)  [Johannes Gäßler]
2023-06-13  baby-llama : fix operator!= (#1821)  [0xspringtime]
2023-06-13  train : improved training-from-scratch example (#1652)  [xaedes]
2023-06-13  llama : do a warm-up eval at start for better timings (#1824)  [Georgi Gerganov]
2023-06-13  Allow "quantizing" to f16 and f32 (#1787)  [Kerfuffle]
2023-06-12  Metal implementation for all k_quants (#1807)  [Kawrakow]
2023-06-12  ci : run when changing only the CUDA sources (#1800)  [slaren]
2023-06-12  Leverage mmap for offloading tensors to GPU (#1597)  [Howard Su]
2023-06-12  metal : fix failure to load model (#1817)  [Kawrakow]
2023-06-11  Fix issue where interactive mode crashes when input exceeds ctx size (#1789)  [Kerfuffle]