llama.cpp.git (branch: master)
path: root / llama.cpp

Age         Commit message  (Author)
2023-07-21  ggml : fix rope args order + assert (#2054)  (Georgi Gerganov)
2023-07-21  llama : remove cfg smooth factor as it is only a reparameterization of the gu...  (Guillaume "Vermeille" Sanchez)
2023-07-21  llama : make tensor_split ptr instead of array (#2272)  (Georgi Gerganov)
2023-07-20  llama : fix regression from #2000 - could not load no-mmap models  (Georgi Gerganov)
2023-07-19  llama : extend API to get max devices at runtime (#2253)  (Rinne)
2023-07-18  ci : integrate with ggml-org/ci (#2250)  (Georgi Gerganov)
2023-07-17  llama : fix t_start_sample_us initialization warning (#2238)  (Alex Klinkhamer)
2023-07-15  llama : add custom RoPE (#2054)  (Xiao-Yong Jin)
2023-07-14  llama : add functions that work directly on model (#2197)  (Bach Le)
2023-07-11  llama : add classifier-free guidance (#2135)  (Bach Le)
2023-07-11  Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)  (LostRuins)
2023-07-10  mpi : add support for distributed inference via MPI (#2099)  (Evan Miller)
2023-07-09  llama : remove "first token must be BOS" restriction (#2153)  (oobabooga)
2023-07-07  ggml : change ggml_graph_compute() API to not require context (#1999)  (Qingyou Meng)
2023-07-05  Expose generation timings from server & update completions.js (#2116)  (Tobias Lütke)
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)  (Stephan Walter)
2023-07-05  llama: Don't double count the sampling time (#2107)  (Howard Su)
2023-07-05  Fixed OpenCL offloading prints (#2082)  (Johannes Gäßler)
2023-07-03  Fix crash of test-tokenizer-0 under Debug build (#2064)  (Howard Su)
2023-07-03  [llama] No need to check file version when loading vocab score (#2079)  (Howard Su)
2023-07-01  Test-based VRAM scratch size + context adjustment (#2056)  (Johannes Gäßler)
2023-07-01  metal : release buffers when freeing metal context (#2062)  (Aaron Miller)
2023-07-01  llama : fix return value of llama_load_session_file_internal (#2022)  (Georgi Gerganov)
2023-07-01  llama : catch llama_load_session_file_internal exceptions (#2022)  (Rand Xie)
2023-06-29  Use unsigned for random seed (#2006)  (Howard Su)
2023-06-28  llama : replacing auto &kv with const auto &kv (#2041)  (m3ndax)
2023-06-28  llama : remove shards weight file support (#2000)  (Howard Su)
2023-06-28  CUDA GPU acceleration for LoRAs + f16 models (#1970)  (Johannes Gäßler)
2023-06-28  llama : support input embeddings directly (#1910)  (ningshanwutuobang)
2023-06-27  llama : fix rope usage after ChatGLM change  (Georgi Gerganov)
2023-06-26  ggml : add NUMA support (#1556)  (zrm)
2023-06-26  k-quants : support for super-block size of 64 (#2001)  (Kawrakow)
2023-06-24  llama : fix top-p sampling to match the canonical definition (#1953)  (Alex Renda)
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)  (Didzis Gosko)
2023-06-20  llama : fix params struct slignment (#1936)  (Ettore Di Giacinto)
2023-06-19  llama : use aligned memory during ggml_init call from loading saved sessions ...  (l3utterfly)
2023-06-19  llama : only use Q6_K for output weights if tensor size is multiple of 256 (#...  (Kawrakow)
2023-06-19  Convert vector to f16 for dequantize mul mat vec (#1913)  (Johannes Gäßler)
2023-06-18  Added tokens per second to info prints (#1928)  (Johannes Gäßler)
2023-06-18  Fixed incorrectly applying RMS norm twice (#1925)  (Johannes Gäßler)
2023-06-18  llama : prevent usage of k-quants when tensor size is not a multiple of 256 (#...  (Kawrakow)
2023-06-18  metal : handle buffers larger than device's maxBufferLength (#1826)  (Georgi Gerganov)
2023-06-17  llama : fix kv_cache `n` init (close #1903)  (Georgi Gerganov)
2023-06-17  ggml : fix warnings under MSVC (#1908)  (Howard Su)
2023-06-16  llama : fix embd when offloading non-repeating layers (#1891)  (Johannes Gäßler)
2023-06-16  build : fix and ignore MSVC warnings (#1889)  (Borislav Stanimirov)
2023-06-14  CUDA full GPU acceleration, KV cache in VRAM (#1827)  (Johannes Gäßler)
2023-06-13  train : improved training-from-scratch example (#1652)  (xaedes)
2023-06-13  Allow "quantizing" to f16 and f32 (#1787)  (Kerfuffle)
2023-06-12  Metal implementation for all k_quants (#1807)  (Kawrakow)