llama.cpp.git (branch: master) commit log
Age        | Commit message                                                                   | Author
2023-07-01 | Test-based VRAM scratch size + context adjustment (#2056)                        | Johannes Gäßler
2023-07-01 | cmake : don't force -mcpu=native on aarch64 (#2063)                              | Daniel Drake
2023-07-01 | metal : release buffers when freeing metal context (#2062)                       | Aaron Miller
2023-07-01 | convert : add support of baichuan-7b (#2055)                                     | Judd
2023-07-01 | llama : fix return value of llama_load_session_file_internal (#2022)             | Georgi Gerganov
2023-07-01 | llama : catch llama_load_session_file_internal exceptions (#2022)                | Rand Xie
2023-07-01 | embd-input : fix returning ptr to temporary                                      | Georgi Gerganov
2023-07-01 | train : fix compile warning                                                      | Georgi Gerganov
2023-07-01 | ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995)          | Qingyou Meng
2023-06-29 | Use unsigned for random seed (#2006)                                             | Howard Su
2023-06-29 | Porting the improved K-Quant CUDA kernels to OpenCL (#1966)                      | LostRuins
2023-06-28 | llama : replacing auto &kv with const auto &kv (#2041)                           | m3ndax
2023-06-28 | cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028)           | Salvador E. Tropea
2023-06-28 | cuda : fix missing const qualifier in casts (#2027)                              | Salvador E. Tropea
2023-06-28 | llama : remove shards weight file support (#2000)                                | Howard Su
2023-06-28 | CUDA GPU acceleration for LoRAs + f16 models (#1970)                             | Johannes Gäßler
2023-06-28 | llama : support input embeddings directly (#1910)                                | ningshanwutuobang
2023-06-27 | fix pthreads setaffinity usage on android (#2020)                                | Erik Scholz
2023-06-27 | baby-llama : fix build after ggml_rope change (#2016)                            | Howard Su
2023-06-27 | llama : fix rope usage after ChatGLM change                                      | Georgi Gerganov
2023-06-27 | ggml : add support for ChatGLM RoPE                                              | Georgi Gerganov
2023-06-26 | readme : add Scala 3 bindings repo (#2010)                                       | Roman Parykin
2023-06-26 | ggml : increase max tensor name + clean up compiler warnings in train-text (#... | David Yang
2023-06-26 | readme : LD_LIBRARY_PATH complement for some Android devices when building wi... | Gustavo Rocha Dias
2023-06-26 | ggml : avoid conv 2d kernel round up                                             | Georgi Gerganov
2023-06-26 | ggml : add NUMA support (#1556)                                                  | zrm
2023-06-26 | k-quants : fix indentation                                                       | Georgi Gerganov
2023-06-26 | tests : fix quantize perf (#1990)                                                | katsu560
2023-06-26 | k-quants : add AVX support to dot functions (#1916)                              | katsu560
2023-06-26 | readme : add link to new k-quants for visibility                                 | Georgi Gerganov
2023-06-26 | k-quants : support for super-block size of 64 (#2001)                            | Kawrakow
2023-06-26 | Fix assert when free invalid cuda pointer (#2005)                                | Howard Su
2023-06-25 | readme : add new roadmap + manifesto                                             | Georgi Gerganov
2023-06-25 | ggml : sync latest ggml (custom operators)                                       | Georgi Gerganov
2023-06-25 | fix server sampling: top k sampler first (#1977)                                 | anon998
2023-06-25 | readme : add Azure CI discussion link                                            | Georgi Gerganov
2023-06-25 | zig : upgrade build system support (#1981)                                       | sjinzh
2023-06-24 | #1869 Fix null reference errors when training from scratch with CUDA (#1907)    | Robyn
2023-06-24 | tests : sync test-grad0 from ggml                                                | Georgi Gerganov
2023-06-24 | flake : fix ggml-metal.metal path and run nixfmt (#1974)                         | Rowan Hart
2023-06-24 | convert : fix invalid params in write_vocab_only (#1975)                         | AN Long
2023-06-24 | ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978)                 | slaren
2023-06-24 | readme : fix whitespaces                                                         | Georgi Gerganov
2023-06-24 | readme : fixed termux instructions (#1973)                                       | Alberto
2023-06-24 | llama : fix top-p sampling to match the canonical definition (#1953)             | Alex Renda
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797)          | Didzis Gosko
2023-06-23 | Add OpenLLaMA instructions to the README (#1954)                                 | eiery
2023-06-22 | rework convert.py to read hyper-parameters from config.json (#1958)              | Erik Scholz
2023-06-21 | cmake: revert CUDA arch default to 52, 61 if f16 (#1959)                         | Johannes Gäßler
2023-06-21 | Fix typo in README.md (#1961)                                                    | Rahul Vivek Nair