llama.cpp.git (branch: master): commit log
Age | Commit message | Author
2023-07-22 | CUDA: Fixed 7b q3_K_S with mul_mat_vec_q (#2313) | Johannes Gäßler
2023-07-22 | llama : optimize memory buffers (#2325) | Georgi Gerganov
2023-07-22 | Perplexity: Compute scores correlated to HellaSwag (#2312) | klosax
2023-07-22 | examples : basic VIM plugin | whoreson
2023-07-22 | ci : fix args | Georgi Gerganov
2023-07-22 | ci : add 7B CUDA tests (#2319) | Georgi Gerganov
2023-07-21 | examples : add easy python script to create quantized (k-bit support) GGML mo... | Richard Roberson
2023-07-21 | Custom RoPE + better memory management for CUDA (#2295) | Kawrakow
2023-07-21 | Faster Q3_K implementation on Metal (#2307) | Kawrakow
2023-07-21 | ggml : fix the rope fix (513f8619535a64fa9ace808cdcbcf66211535f5c) | Georgi Gerganov
2023-07-21 | examples : fix typo in minigpt4.py (#2298) | Ikko Eltociear Ashimine
2023-07-21 | ggml : fix rope args order + assert (#2054) | Georgi Gerganov
2023-07-21 | gitignore : fix final newline | Georgi Gerganov
2023-07-21 | llama : remove cfg smooth factor as it is only a reparameterization of the gu... | Guillaume "Vermeille" Sanchez
2023-07-21 | gitignore : changes for Poetry users + chat examples (#2284) | Jose Maldonado
2023-07-21 | make : fix indentation | Georgi Gerganov
2023-07-21 | ci : fix MNT realpath usage (#2250) | Georgi Gerganov
2023-07-21 | make : support customized LLAMA_CUDA_NVCC and LLAMA_CUDA_CCBIN (#2275) | Sky Yan
2023-07-21 | flake : remove intel mkl from flake.nix due to missing files (#2277) | wzy
2023-07-21 | llama : make tensor_split ptr instead of array (#2272) | Georgi Gerganov
2023-07-21 | make : add new target for test binaries (#2244) | Jiří Podivín
2023-07-21 | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | Hatsune Miku
2023-07-21 | Faster Q2_K on Metal (#2297) | Kawrakow
2023-07-21 | make : fix embdinput library and server examples building on MSYS2 (#2235) | Przemysław Pawełczyk
2023-07-20 | Faster Q5_K and Q6_K on Metal (#2294) | Kawrakow
2023-07-20 | Faster Q4_K on Metal (#2290) | Kawrakow
2023-07-20 | llama : fix regression from #2000 - could not load no-mmap models | Georgi Gerganov
2023-07-20 | metal: minor q4 optimization and reduce code size (#2248) | Shouzheng Liu
2023-07-19 | llama : extend API to get max devices at runtime (#2253) | Rinne
2023-07-19 | flake : update flake.nix (#2270) | wzy
2023-07-19 | cmake : install targets (#2256) | wzy
2023-07-18 | ci : integrate with ggml-org/ci (#2250) | Georgi Gerganov
2023-07-18 | llama : shorten quantization descriptions | Georgi Gerganov
2023-07-17 | Support dup & cont ops on CUDA (#2242) | Jiahao Li
2023-07-17 | llama : fix t_start_sample_us initialization warning (#2238) | Alex Klinkhamer
2023-07-16 | ggml : fixed runtime bugs and compile errors related to GGML_PERF and GGML_DE... | Qingyou Meng
2023-07-16 | py : turn verify-checksum-models.py into executable (#2245) | Jiří Podivín
2023-07-15 | llama : add custom RoPE (#2054) | Xiao-Yong Jin
2023-07-14 | flake : add runHook preInstall/postInstall to installPhase so hooks function ... | Dave Della Costa
2023-07-14 | make : use pkg-config for OpenBLAS (#2222) | wzy
2023-07-14 | cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer ... | Bach Le
2023-07-14 | ggml : fix static_assert with older compilers #2024 (#2218) | Evan Miller
2023-07-14 | llama : add functions that work directly on model (#2197) | Bach Le
2023-07-14 | build.zig : install config header (#2216) | Ali Chraghi
2023-07-14 | examples : fixed path typos in embd-input (#2214) | Shangning Xu
2023-07-14 | cuda : support broadcast add & mul (#2192) | Jiahao Li
2023-07-14 | CUDA: mul_mat_vec_q kernels for k-quants (#2203) | Johannes Gäßler
2023-07-14 | make : fix combination of LLAMA_METAL and LLAMA_MPI (#2208) | James Reynolds
2023-07-14 | ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope) | Georgi Gerganov
2023-07-14 | Metal: faster Q4_0 and Q4_1 matrix x vector kernels (#2212) | Kawrakow