llama.cpp.git (branch: master) commit log
Age | Commit message | Author
2023-07-25 | convert.py : support bpe tokenizer (#2228) | ldwang
2023-07-25 | ggml : relax contiguous constraints in activation function (#2371) | Jiahao Li
2023-07-25 | ggml : improve graph build time via hash table lookup (#2329) | slaren
2023-07-25 | build : fix line breaking error in build-info.sh (#2349) | Hesen Peng
2023-07-25 | main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304) | Xiao-Yong Jin
2023-07-25 | ci : add non-AVX scalar build/test (#2356) | Eve
2023-07-25 | k_quants : add AVX support to dot functions with QK_K as 64 (#2339) | katsu560
2023-07-25 | metal : concurrently dispatch commands (#2358) | Shouzheng Liu
2023-07-25 | Another speed gain for Q4_0 and Q4_1 on Metal (#2375) | Kawrakow
2023-07-25 | Fix Q4_K and Q5_K for QK_K = 64 on CUDA (#2359) | Kawrakow
2023-07-25 | server : add rms_norm_eps parameter (#2380) | slaren
2023-07-25 | [Server] Escape HTML in webchat (#2368) | Henri Vasserman
2023-07-24 | make rms_norm_eps a parameter (#2374) | slaren
2023-07-24 | Chat UI extras (#2366) | Aarni Koskela
2023-07-24 | ggml : sync (unary ops refactor, static-correctness) (#2370) | Georgi Gerganov
2023-07-24 | Fix scalar version of Q5_K when QK_K = 64 (#2362) | Kawrakow
2023-07-23 | llama : add grammar-based sampling (#1773) | Evan Jones
2023-07-24 | Some more Q4_K and Q5_K speedup on CUDA (#2346) | Kawrakow
2023-07-23 | Add gqa parameter support to the server (#2351) | IgnacioFDM
2023-07-23 | Fix __dp4a documentation (#2348) | Johannes Gäßler
2023-07-23 | common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347) | wzy
2023-07-23 | fix n_tasks (#2342) | slaren
2023-07-23 | ggml : move op parameters from tensors to ggml_tensor::op_params (#2333) | slaren
2023-07-23 | llama : grouped-query attention + LLaMAv2 70B support (#2276) | Georgi Gerganov
2023-07-23 | llama : print help to stdout (#2338) | maddes8cht
2023-07-23 | flake : support `nix build '.#opencl'` (#2337) | wzy
2023-07-23 | llama : print max tensor size to stderr (#2336) | Christian Demsar
2023-07-23 | make : fix CLBLAST compile support in FreeBSD (#2331) | Jose Maldonado
2023-07-23 | examples : simplify vim plugin (#2327) | AustinMroz
2023-07-23 | metal : support bcast add & dup & cont op (#2323) | Jiahao Li
2023-07-23 | Speed up Q4_K (#2322) | Kawrakow
2023-07-22 | CUDA: Fixed 7b q3_K_S with mul_mat_vec_q (#2313) | Johannes Gäßler
2023-07-22 | llama : optimize memory buffers (#2325) | Georgi Gerganov
2023-07-22 | Perplexity: Compute scores correlated to HellaSwag (#2312) | klosax
2023-07-22 | examples : basic VIM plugin | whoreson
2023-07-22 | ci : fix args | Georgi Gerganov
2023-07-22 | ci : add 7B CUDA tests (#2319) | Georgi Gerganov
2023-07-21 | examples : add easy python script to create quantized (k-bit support) GGML mo... | Richard Roberson
2023-07-21 | Custom RoPE + better memory management for CUDA (#2295) | Kawrakow
2023-07-21 | Faster Q3_K implementation on Metal (#2307) | Kawrakow
2023-07-21 | ggml : fix the rope fix (513f8619535a64fa9ace808cdcbcf66211535f5c) | Georgi Gerganov
2023-07-21 | examples : fix typo in minigpt4.py (#2298) | Ikko Eltociear Ashimine
2023-07-21 | ggml : fix rope args order + assert (#2054) | Georgi Gerganov
2023-07-21 | gitignore : fix final newline | Georgi Gerganov
2023-07-21 | llama : remove cfg smooth factor as it is only a reparameterization of the gu... | Guillaume "Vermeille" Sanchez
2023-07-21 | gitignore : changes for Poetry users + chat examples (#2284) | Jose Maldonado
2023-07-21 | make : fix indentation | Georgi Gerganov
2023-07-21 | ci : fix MNT realpath usage (#2250) | Georgi Gerganov
2023-07-21 | make : support customized LLAMA_CUDA_NVCC and LLAMA_CUDA_CCBIN (#2275) | Sky Yan
2023-07-21 | flake : remove intel mkl from flake.nix due to missing files (#2277) | wzy