llama.cpp.git: commit log (branch master)
Age        | Commit message                                                                    | Author
2023-08-05 | CUDA: faster k-quant mul_mat_q kernels (#2525)                                    | Johannes Gäßler
2023-08-04 | fix firefox autoscroll (#2519)                                                    | Jonas Wunderlich
2023-08-04 | server: regenerate completion.js.hpp (#2515)                                      | Cebtenzzre
2023-08-04 | CUDA: use min compute capability of GPUs actually used (#2506)                    | Cebtenzzre
2023-08-04 | CUDA: check if event is NULL before cudaStreamWaitEvent (#2505)                   | Cebtenzzre
2023-08-04 | Add --simple-io option for subprocesses and break out console.h and cpp (#1558)   | DannyDaemonic
2023-08-04 | Fixing race condition in server and partial stream handling in frontend. (#2391)  | Stephen Nichols
2023-08-04 | Stream save llama context data to file instead of allocating entire buffer up...  | l3utterfly
2023-08-04 | build : fix several cast and printf warnings (#2499)                              | Borislav Stanimirov
2023-08-02 | examples : generate JSON according to schema (#1887)                              | Evan Jones
2023-08-02 | CUDA: faster non k-quant mul_mat_q kernels (#2483)                                | Johannes Gäßler
2023-08-02 | CUDA: Fix models with output size != 32000 (#2480)                                | Johannes Gäßler
2023-08-02 | readme : add Aquila-7B model series to supported models (#2487)                   | ldwang
2023-08-02 | tests : Fix compilation warnings (Linux/GCC) (#2451)                              | Eve
2023-08-02 | readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models (#2475)               | Yiming Cui
2023-08-01 | fix a typo in examples/server/README.md (#2478)                                   | Bono Lv
2023-08-01 | server : Support dark mode (#2414)                                                | ebraminio
2023-08-01 | metal : add gqa8 kernel to allow llama-2-70B on metal (#2459)                     | Matteo Boschini
2023-07-31 | CUDA: fixed LLAMA_FAST compilation option (#2473)                                 | Johannes Gäßler
2023-07-31 | CUDA: fixed cmake F16 option (#2471)                                              | Johannes Gäßler
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453)                              | Johannes Gäßler
2023-07-31 | CUDA: Implemented row flattening for non-glm RoPE (#2468)                         | Johannes Gäßler
2023-07-31 | CUDA: fewer memory bank conflicts for mul_mat_q (#2458)                           | Johannes Gäßler
2023-07-31 | Fix Metal backend broken from the allocator changes (#2455)                       | slaren
2023-07-30 | ggml : add graph tensor allocator (#2411)                                         | slaren
2023-07-29 | CUDA: Quantized matrix matrix multiplication (#2160)                              | Johannes Gäßler
2023-07-29 | CUDA: faster multi GPU synchronization (#2448)                                    | Johannes Gäßler
2023-07-28 | perplexity : add Hellaswag calculation (#2389)                                    | klosax
2023-07-28 | ggml : workaround for missing _mm256_setr_m128i in GCC < 8 in k_quants.c (#2405)  | Lee
2023-07-28 | llama : support more diverse tokenizers? (#2420)                                  | eric8607242
2023-07-28 | examples : fix whitespace                                                         | Georgi Gerganov
2023-07-28 | examples : server chat mode with llama2 (#2400)                                   | nhamanasu
2023-07-28 | readme : fix the description of the Tail free sampling (TFS) method (#2431)       | Weird Constructor
2023-07-28 | llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)            | Rand Xie
2023-07-28 | Obtaining LLaMA 2 instructions (#2308)                                            | niansa/tuxifan
2023-07-27 | convert.py : Update to support 70B HF format model files (#2427)                  | mj-shifu
2023-07-27 | metal : disable graph concurrency optimization due to bug (#2413)                 | Georgi Gerganov
2023-07-26 | ggml : fix assert in ggml_set_unary_op (#2410)                                    | slaren
2023-07-26 | make : build with -Wmissing-prototypes (#2394)                                    | Cebtenzzre
2023-07-26 | ggml : allocate graphs in a context (#2392)                                       | slaren
2023-07-25 | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)                    | Kawrakow
2023-07-25 | ggml : fix ggml_flash_attn to use op_params (#2387)                               | slaren
2023-07-25 | convert.py : support bpe tokenizer (#2228)                                        | ldwang
2023-07-25 | ggml : relax contiguous constraints in activation function (#2371)                | Jiahao Li
2023-07-25 | ggml : improve graph build time via hash table lookup (#2329)                     | slaren
2023-07-25 | build : fix line breaking error in build-info.sh (#2349)                          | Hesen Peng
2023-07-25 | main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)       | Xiao-Yong Jin
2023-07-25 | ci : add non-AVX scalar build/test (#2356)                                        | Eve
2023-07-25 | k_quants : add AVX support to dot functions with QK_K as 64 (#2339)               | katsu560
2023-07-25 | metal : concurrently dispatch commands (#2358)                                    | Shouzheng Liu
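An equivalent listing can be regenerated from a local clone with stock git; a minimal sketch, assuming the clone tracks the master branch (%ad, %s, and %an are git's standard placeholders for author date, subject, and author name):

    # reproduce the Age | Commit message | Author columns (author date in short format)
    git log master --date=short --pretty=format:'%ad | %s | %an'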