llama.cpp.git, branch master: commit log

Age         Commit message                                                        (Author)
2023-06-26  k-quants : add AVX support to dot functions (#1916)  (katsu560)
2023-06-26  readme : add link to new k-quants for visibility  (Georgi Gerganov)
2023-06-26  k-quants : support for super-block size of 64 (#2001)  (Kawrakow)
2023-06-26  Fix assert when free invalid cuda pointer (#2005)  (Howard Su)
2023-06-25  readme : add new roadmap + manifesto  (Georgi Gerganov)
2023-06-25  ggml : sync latest ggml (custom operators)  (Georgi Gerganov)
2023-06-25  fix server sampling: top k sampler first (#1977)  (anon998)
2023-06-25  readme : add Azure CI discussion link  (Georgi Gerganov)
2023-06-25  zig : upgrade build system support (#1981)  (sjinzh)
2023-06-24  #1869 Fix null reference errors when training from scratch with CUDA (#1907)  (Robyn)
2023-06-24  tests : sync test-grad0 from ggml  (Georgi Gerganov)
2023-06-24  flake : fix ggml-metal.metal path and run nixfmt (#1974)  (Rowan Hart)
2023-06-24  convert : fix invalid params in write_vocab_only (#1975)  (AN Long)
2023-06-24  ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978)  (slaren)
2023-06-24  readme : fix whitespaces  (Georgi Gerganov)
2023-06-24  readme : fixed termux instructions (#1973)  (Alberto)
2023-06-24  llama : fix top-p sampling to match the canonical definition (#1953)  (Alex Renda)
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)  (Didzis Gosko)
2023-06-23  Add OpenLLaMA instructions to the README (#1954)  (eiery)
2023-06-22  rework convert.py to read hyper-parameters from config.json (#1958)  (Erik Scholz)
2023-06-21  cmake: revert CUDA arch default to 52, 61 if f16 (#1959)  (Johannes Gäßler)
2023-06-21  Fix typo in README.md (#1961)  (Rahul Vivek Nair)
2023-06-20  readme : add link to p1  (Georgi Gerganov)
2023-06-20  Fix typo (#1949)  (Xiake Sun)
2023-06-20  llama : fix params struct slignment (#1936)  (Ettore Di Giacinto)
2023-06-20  [Fix] Reenable server embedding endpoint (#1937)  (Henri Vasserman)
2023-06-19  ggml : fix bug in LBFGS optimizer (found by ggml tests)  (Georgi Gerganov)
2023-06-19  llama : use aligned memory during ggml_init call from loading saved sessions ...  (l3utterfly)
2023-06-19  cmake : fix trailing whitespaces  (Georgi Gerganov)
2023-06-19  llama : only use Q6_K for output weights if tensor size is multiple of 256 (#...  (Kawrakow)
2023-06-19  cuda : faster k-quants on older GPUs (#1930)  (Kawrakow)
2023-06-19  ggml : sync latest ggml repo (#1924)  (Georgi Gerganov)
2023-06-19  cmake : fix build shared ggml when CUDA is enabled (#1929)  (Howard Su)
2023-06-19  Convert vector to f16 for dequantize mul mat vec (#1913)  (Johannes Gäßler)
2023-06-18  Added tokens per second to info prints (#1928)  (Johannes Gäßler)
2023-06-18  Fixed incorrectly applying RMS norm twice (#1925)  (Johannes Gäßler)
2023-06-18  ggml : fix bug in ggml_compute_forward_add_q_f32 (#1918)  (l3utterfly)
2023-06-18  readme : update Android build instructions (#1922)  (Mike)
2023-06-18  llama : prevent usage of k-quants when tensor size is not a multiple of 256 (...  (Kawrakow)
2023-06-18  examples : fix examples/metal (#1920)  (Kawrakow)
2023-06-18  metal : handle buffers larger than device's maxBufferLength (#1826)  (Georgi Gerganov)
2023-06-18  cmake : add CUDA_ARCHITECTURES to new target ggml_static (#1917)  (Howard Su)
2023-06-17  make : do not print help for simple example  (Georgi Gerganov)
2023-06-17  minor : warning fixes  (Georgi Gerganov)
2023-06-17  Only one CUDA stream per device for async compute (#1898)  (Johannes Gäßler)
2023-06-17  llama : fix kv_cache `n` init (close #1903)  (Georgi Gerganov)
2023-06-17  make : update for latest Arch (#1701)  (DaniAndTheWeb)
2023-06-17  ggml : fix warnings under MSVC (#1908)  (Howard Su)
2023-06-17  metal : add norm, cpy f16->f16, alibi kernels (#1823)  (Aaron Miller)
2023-06-17  exposed modules so that they can be invoked by nix run github:ggerganov/llama...  (Faez Shakil)
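Background note: several entries in this log concern sampling (e.g. "fix server sampling: top k sampler first" #1977 and "llama : fix top-p sampling to match the canonical definition" #1953). The canonical top-p (nucleus) definition keeps the smallest set of highest-probability tokens whose cumulative probability reaches p. The sketch below is an illustrative minimal version, not the llama.cpp implementation; the function name `top_p_filter` is our own.

```python
import math

def top_p_filter(logits, p):
    """Canonical top-p (nucleus) filtering: return the indices of the
    smallest prefix of tokens, sorted by descending probability, whose
    cumulative probability reaches p. Illustrative sketch only."""
    # Numerically stable softmax over the raw logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Token indices ordered by descending probability.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:  # canonical definition: stop once mass reaches p
            break
    return kept
```

For example, with probabilities 0.5 / 0.3 / 0.2 and p = 0.7, the first two tokens are kept (0.5 < 0.7, but 0.5 + 0.3 >= 0.7); sampling then proceeds over the kept set after renormalization.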