llama.cpp.git: commit log for branch master, path: root/examples
2023-07-21  MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287)  (Hatsune Miku)
2023-07-21  make : fix embdinput library and server examples building on MSYS2 (#2235)  (Przemysław Pawełczyk)
2023-07-19  cmake : install targets (#2256)  (wzy)
2023-07-18  ci : integrate with ggml-org/ci (#2250)  (Georgi Gerganov)
2023-07-18  llama : shorten quantization descriptions  (Georgi Gerganov)
2023-07-15  llama : add custom RoPE (#2054)  (Xiao-Yong Jin)
2023-07-14  examples : fixed path typos in embd-input (#2214)  (Shangning Xu)
2023-07-13  Revert "Support using mmap when applying LoRA (#2095)" (#2206)  (Howard Su)
2023-07-11  ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)  (Spencer Sutton)
2023-07-11  llama : add classifier-free guidance (#2135)  (Bach Le)
2023-07-11  Support using mmap when applying LoRA (#2095)  (Howard Su)
2023-07-10  mpi : add support for distributed inference via MPI (#2099)  (Evan Miller)
2023-07-09  main : escape prompt prefix/suffix (#2151)  (Nigel Bosch)
2023-07-07  ggml : change ggml_graph_compute() API to not require context (#1999)  (Qingyou Meng)
2023-07-06  convert : update for baichuan (#2081)  (Judd)
2023-07-06  alpaca.sh : update model file name (#2074)  (tslmy)
2023-07-05  Expose generation timings from server & update completions.js (#2116)  (Tobias Lütke)
2023-07-05  Update Server Instructions (#2113)  (Jesse Jojo Johnson)
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)  (Stephan Walter)
2023-07-05  Update server instructions for web front end (#2103)  (Jesse Jojo Johnson)
2023-07-05  embd-input: Fix input embedding example unsigned int seed (#2105)  (Nigel Bosch)
2023-07-04  Add an API example using server.cpp similar to OAI. (#2009)  (jwj7140)
2023-07-04  Simple webchat for server (#1998)  (Tobias Lütke)
2023-07-04  fix server crashes (#2076)  (Henri Vasserman)
2023-07-03  server: add option to output probabilities for completion (#1962)  (WangHaoranRobin)
2023-07-01  embd-input : fix returning ptr to temporary  (Georgi Gerganov)
2023-07-01  train : fix compile warning  (Georgi Gerganov)
2023-06-29  Use unsigned for random seed (#2006)  (Howard Su)
2023-06-28  CUDA GPU acceleration for LoRAs + f16 models (#1970)  (Johannes Gäßler)
2023-06-28  llama : support input embeddings directly (#1910)  (ningshanwutuobang)
2023-06-27  baby-llama : fix build after ggml_rope change (#2016)  (Howard Su)
2023-06-27  llama : fix rope usage after ChatGLM change  (Georgi Gerganov)
2023-06-26  ggml : increase max tensor name + clean up compiler warnings in train-text (#...  (David Yang)
2023-06-26  ggml : add NUMA support (#1556)  (zrm)
2023-06-25  fix server sampling: top k sampler first (#1977)  (anon998)
2023-06-24  llama : make model stateless and context stateful (llama_state) (#1797)  (Didzis Gosko)
2023-06-20  [Fix] Reenable server embedding endpoint (#1937)  (Henri Vasserman)
2023-06-18  examples : fix examples/metal (#1920)  (Kawrakow)
2023-06-17  minor : warning fixes  (Georgi Gerganov)
2023-06-17  Only one CUDA stream per device for async compute (#1898)  (Johannes Gäßler)
2023-06-17  llama : fix kv_cache `n` init (close #1903)  (Georgi Gerganov)
2023-06-17  Server Example Refactor and Improvements (#1570)  (Randall Fitzgerald)
2023-06-17  hooks : setting up flake8 and pre-commit hooks (#1681)  (Jiří Podivín)
2023-06-17  train : get raw text instead of page with html (#1905)  (David Yang)
2023-06-16  examples : add "simple" (#1840)  (SuperUserNameMan)
2023-06-16  Fixed possible macro redefinition (#1892)  (FrankHB)
2023-06-16  build : fix and ignore MSVC warnings (#1889)  (Borislav Stanimirov)
2023-06-15  examples : add chat-vicuna.sh (#1854)  (yangli2)
2023-06-15  readme : server compile flag (#1874)  (Srinivas Billa)
2023-06-15  Better error when using both LoRA + GPU layers (#1861)  (Johannes Gäßler)