llama.cpp.git (branch: master)
Commit log for path: root / examples

Age         Commit message  (Author)
2023-08-08  vim : streaming and more (#2495)  (AustinMroz)
2023-08-07  Add --rope-scale parameter (#2544)  (klosax)
2023-08-06  console : fix issue related to Windows 11 PowerShell console mode persistence...  (DannyDaemonic)
2023-08-04  fix firefox autoscroll (#2519)  (Jonas Wunderlich)
2023-08-04  server: regenerate completion.js.hpp (#2515)  (Cebtenzzre)
2023-08-04  Add --simple-io option for subprocesses and break out console.h and cpp (#1558)  (DannyDaemonic)
2023-08-04  Fixing race condition in server and partial stream handling in frontend. (#2391)  (Stephen Nichols)
2023-08-04  build : fix several cast and printf warnings (#2499)  (Borislav Stanimirov)
2023-08-02  examples : generate JSON according to schema (#1887)  (Evan Jones)
2023-08-02  tests : Fix compilation warnings (Linux/GCC) (#2451)  (Eve)
2023-08-01  fix a typo in examples/server/README.md (#2478)  (Bono Lv)
2023-08-01  server : Support dark mode (#2414)  (ebraminio)
2023-07-31  CUDA: mmq CLI option, fixed mmq build issues (#2453)  (Johannes Gäßler)
2023-07-28  perplexity : add Hellaswag calculation (#2389)  (klosax)
2023-07-28  examples : fix whitespace  (Georgi Gerganov)
2023-07-28  examples : server chat mode with llama2 (#2400)  (nhamanasu)
2023-07-28  readme : fix the description of the Tail free sampling (TFS) method (#2431)  (Weird Constructor)
2023-07-28  llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)  (Rand Xie)
2023-07-25  Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)  (Kawrakow)
2023-07-25  main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)  (Xiao-Yong Jin)
2023-07-25  server: add rms_norm_eps parameter (#2380)  (slaren)
2023-07-25  [Server] Escape HTML in webchat (#2368)  (Henri Vasserman)
2023-07-24  make rms_norm_eps a parameter (#2374)  (slaren)
2023-07-24  Chat UI extras (#2366)  (Aarni Koskela)
2023-07-23  llama : add grammar-based sampling (#1773)  (Evan Jones)
2023-07-23  Add gqa parameter support to the server (#2351)  (IgnacioFDM)
2023-07-23  common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347)  (wzy)
2023-07-23  llama : grouped-query attention + LLaMAv2 70B support (#2276)  (Georgi Gerganov)
2023-07-23  llama : print help to stdout (#2338)  (maddes8cht)
2023-07-23  examples : simplify vim plugin (#2327)  (AustinMroz)
2023-07-22  llama : optimize memory buffers (#2325)  (Georgi Gerganov)
2023-07-22  Perplexity: Compute scores correlated to HellaSwag (#2312)  (klosax)
2023-07-22  examples : basic VIM plugin  (whoreson)
2023-07-21  examples : add easy python script to create quantized (k-bit support) GGML mo...  (Richard Roberson)
2023-07-21  examples : fix typo in minigpt4.py (#2298)  (Ikko Eltociear Ashimine)
2023-07-21  ggml : fix rope args order + assert (#2054)  (Georgi Gerganov)
2023-07-21  llama : remove cfg smooth factor as it is only a reparameterization of the gu...  (Guillaume "Vermeille" Sanchez)
2023-07-21  gitignore : changes for Poetry users + chat examples (#2284)  (Jose Maldonado)
2023-07-21  llama : make tensor_split ptr instead of array (#2272)  (Georgi Gerganov)
2023-07-21  MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287)  (Hatsune Miku)
2023-07-21  make : fix embdinput library and server examples building on MSYS2 (#2235)  (Przemysław Pawełczyk)
2023-07-19  cmake : install targets (#2256)  (wzy)
2023-07-18  ci : integrate with ggml-org/ci (#2250)  (Georgi Gerganov)
2023-07-18  llama : shorten quantization descriptions  (Georgi Gerganov)
2023-07-15  llama : add custom RoPE (#2054)  (Xiao-Yong Jin)
2023-07-14  examples : fixed path typos in embd-input (#2214)  (Shangning Xu)
2023-07-13  Revert "Support using mmap when applying LoRA (#2095)" (#2206)  (Howard Su)
2023-07-11  ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)  (Spencer Sutton)
2023-07-11  llama : add classifier-free guidance (#2135)  (Bach Le)
2023-07-11  Support using mmap when applying LoRA (#2095)  (Howard Su)