llama.cpp.git: commit log (branch master)
Age         Commit message (Author)

2023-06-28  cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028)  (Salvador E. Tropea)
2023-06-28  cuda : fix missing const qualifier in casts (#2027)  (Salvador E. Tropea)
2023-06-28  llama : remove shards weight file support (#2000)  (Howard Su)
2023-06-28  CUDA GPU acceleration for LoRAs + f16 models (#1970)  (Johannes Gäßler)
2023-06-28  llama : support input embeddings directly (#1910)  (ningshanwutuobang)
2023-06-27  fix pthreads setaffinity usage on android (#2020)  (Erik Scholz)
2023-06-27  baby-llama : fix build after ggml_rope change (#2016)  (Howard Su)
2023-06-27  llama : fix rope usage after ChatGLM change  (Georgi Gerganov)
2023-06-27  ggml : add support for ChatGLM RoPE  (Georgi Gerganov)
2023-06-26  readme : add Scala 3 bindings repo (#2010)  (Roman Parykin)
2023-06-26  ggml : increase max tensor name + clean up compiler warnings in train-text (#...  (David Yang)
2023-06-26  readme : LD_LIBRARY_PATH complement for some Android devices when building wi...  (Gustavo Rocha Dias)
2023-06-26  ggml : avoid conv 2d kernel round up  (Georgi Gerganov)
2023-06-26  ggml : add NUMA support (#1556)  (zrm)
2023-06-26  k-quants : fix indentation  (Georgi Gerganov)
2023-06-26  tests : fix quantize perf (#1990)  (katsu560)
2023-06-26  k-quants : add AVX support to dot functions (#1916)  (katsu560)
2023-06-26  readme : add link to new k-quants for visibility  (Georgi Gerganov)
2023-06-26  k-quants : support for super-block size of 64 (#2001)  (Kawrakow)
2023-06-26  Fix assert when free invalid cuda pointer (#2005)  (Howard Su)
2023-06-25  readme : add new roadmap + manifesto  (Georgi Gerganov)
2023-06-25  ggml : sync latest ggml (custom operators)  (Georgi Gerganov)
2023-06-25  fix server sampling: top k sampler first (#1977)  (anon998)
2023-06-25  readme : add Azure CI discussion link  (Georgi Gerganov)
2023-06-25  zig : upgrade build system support (#1981)  (sjinzh)
2023-06-24  #1869 Fix null reference errors when training from scratch with CUDA (#1907)  (Robyn)
2023-06-24  tests : sync test-grad0 from ggml  (Georgi Gerganov)
2023-06-24  flake : fix ggml-metal.metal path and run nixfmt (#1974)  (Rowan Hart)
2023-06-24  convert : fix invalid params in write_vocab_only (#1975)  (AN Long)
2023-06-24  ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978)  (slaren)
2023-06-24  readme : fix whitespaces  (Georgi Gerganov)
2023-06-24  readme : fixed termux instructions (#1973)  (Alberto)
2023-06-24  llama : fix top-p sampling to match the canonical definition (#1953)  (Alex Renda)
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)  (Didzis Gosko)
2023-06-23  Add OpenLLaMA instructions to the README (#1954)  (eiery)
2023-06-22  rework convert.py to read hyper-parameters from config.json (#1958)  (Erik Scholz)
2023-06-21  cmake: revert CUDA arch default to 52, 61 if f16 (#1959)  (Johannes Gäßler)
2023-06-21  Fix typo in README.md (#1961)  (Rahul Vivek Nair)
2023-06-20  readme : add link to p1  (Georgi Gerganov)
2023-06-20  Fix typo (#1949)  (Xiake Sun)
2023-06-20  llama : fix params struct slignment (#1936)  (Ettore Di Giacinto)
2023-06-20  [Fix] Reenable server embedding endpoint (#1937)  (Henri Vasserman)
2023-06-19  ggml : fix bug in LBFGS optimizer (found by ggml tests)  (Georgi Gerganov)
2023-06-19  llama : use aligned memory during ggml_init call from loading saved sessions ...  (l3utterfly)
2023-06-19  cmake : fix trailing whitespaces  (Georgi Gerganov)
2023-06-19  llama : only use Q6_K for output weights if tensor size is multiple of 256 (#...  (Kawrakow)
2023-06-19  cuda : faster k-quants on older GPUs (#1930)  (Kawrakow)
2023-06-19  ggml : sync latest ggml repo (#1924)  (Georgi Gerganov)
2023-06-19  cmake : fix build shared ggml when CUDA is enabled (#1929)  (Howard Su)
2023-06-19  Convert vector to f16 for dequantize mul mat vec (#1913)  (Johannes Gäßler)