llama.cpp.git — commit log (branch: master)

Age        | Commit message                                                                   | Author
2023-07-04 | readme : add link web chat PR                                                    | Georgi Gerganov
2023-07-04 | ggml : sync latest (new ops, macros, refactoring) (#2106)                        | Georgi Gerganov
2023-07-04 | Add an API example using server.cpp similar to OAI. (#2009)                      | jwj7140
2023-07-04 | Simple webchat for server (#1998)                                                | Tobias Lütke
2023-07-04 | Allow old Make to build server. (#2098)                                          | Henri Vasserman
2023-07-04 | Update Makefile: clean simple (#2097)                                            | ZhouYuChen
2023-07-04 | CI: make the brew update temporarily optional. (#2092)                           | Erik Scholz
2023-07-04 | [ggml] fix index for ne03 value in ggml_cl_mul_f32 (#2088)                       | Govlzkoy
2023-07-04 | fix server crashes (#2076)                                                       | Henri Vasserman
2023-07-03 | Fix crash of test-tokenizer-0 under Debug build (#2064)                          | Howard Su
2023-07-03 | [llama] No need to check file version when loading vocab score (#2079)           | Howard Su
2023-07-03 | server: add option to output probabilities for completion (#1962)                | WangHaoranRobin
2023-07-02 | ggml : fix build with OpenBLAS (close #2066)                                     | Georgi Gerganov
2023-07-01 | Better CUDA synchronization logic (#2057)                                        | Johannes Gäßler
2023-07-01 | Test-based VRAM scratch size + context adjustment (#2056)                        | Johannes Gäßler
2023-07-01 | cmake : don't force -mcpu=native on aarch64 (#2063)                              | Daniel Drake
2023-07-01 | metal : release buffers when freeing metal context (#2062)                       | Aaron Miller
2023-07-01 | convert : add support of baichuan-7b (#2055)                                     | Judd
2023-07-01 | llama : fix return value of llama_load_session_file_internal (#2022)             | Georgi Gerganov
2023-07-01 | llama : catch llama_load_session_file_internal exceptions (#2022)                | Rand Xie
2023-07-01 | embd-input : fix returning ptr to temporary                                      | Georgi Gerganov
2023-07-01 | train : fix compile warning                                                      | Georgi Gerganov
2023-07-01 | ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995)          | Qingyou Meng
2023-06-29 | Use unsigned for random seed (#2006)                                             | Howard Su
2023-06-29 | Porting the improved K-Quant CUDA kernels to OpenCL (#1966)                      | LostRuins
2023-06-28 | llama : replacing auto &kv with const auto &kv (#2041)                           | m3ndax
2023-06-28 | cuda : remove nchannels_x argument from mul_mat_vec_nc_f16_f32 (#2028)           | Salvador E. Tropea
2023-06-28 | cuda : fix missing const qualifier in casts (#2027)                              | Salvador E. Tropea
2023-06-28 | llama : remove shards weight file support (#2000)                                | Howard Su
2023-06-28 | CUDA GPU acceleration for LoRAs + f16 models (#1970)                             | Johannes Gäßler
2023-06-28 | llama : support input embeddings directly (#1910)                                | ningshanwutuobang
2023-06-27 | fix pthreads setaffinity usage on android (#2020)                                | Erik Scholz
2023-06-27 | baby-llama : fix build after ggml_rope change (#2016)                            | Howard Su
2023-06-27 | llama : fix rope usage after ChatGLM change                                      | Georgi Gerganov
2023-06-27 | ggml : add support for ChatGLM RoPE                                              | Georgi Gerganov
2023-06-26 | readme : add Scala 3 bindings repo (#2010)                                       | Roman Parykin
2023-06-26 | ggml : increase max tensor name + clean up compiler warnings in train-text (#... | David Yang
2023-06-26 | readme : LD_LIBRARY_PATH complement for some Android devices when building wi... | Gustavo Rocha Dias
2023-06-26 | ggml : avoid conv 2d kernel round up                                             | Georgi Gerganov
2023-06-26 | ggml : add NUMA support (#1556)                                                  | zrm
2023-06-26 | k-quants : fix indentation                                                       | Georgi Gerganov
2023-06-26 | tests : fix quantize perf (#1990)                                                | katsu560
2023-06-26 | k-quants : add AVX support to dot functions (#1916)                              | katsu560
2023-06-26 | readme : add link to new k-quants for visibility                                 | Georgi Gerganov
2023-06-26 | k-quants : support for super-block size of 64 (#2001)                            | Kawrakow
2023-06-26 | Fix assert when free invalid cuda pointer (#2005)                                | Howard Su
2023-06-25 | readme : add new roadmap + manifesto                                             | Georgi Gerganov
2023-06-25 | ggml : sync latest ggml (custom operators)                                       | Georgi Gerganov
2023-06-25 | fix server sampling: top k sampler first (#1977)                                 | anon998
2023-06-25 | readme : add Azure CI discussion link                                            | Georgi Gerganov