llama.cpp.git, branch master
path: root/examples
Commit log (age, commit message, author):

2023-07-28  perplexity : add Hellaswag calculation (#2389)  [klosax]
2023-07-28  examples : fix whitespace  [Georgi Gerganov]
2023-07-28  examples : server chat mode with llama2 (#2400)  [nhamanasu]
2023-07-28  readme : fix the description of the Tail free sampling (TFS) method (#2431)  [Weird Constructor]
2023-07-28  llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)  [Rand Xie]
2023-07-25  Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)  [Kawrakow]
2023-07-25  main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)  [Xiao-Yong Jin]
2023-07-25  server: add rms_norm_eps parameter (#2380)  [slaren]
2023-07-25  [Server] Escape HTML in webchat (#2368)  [Henri Vasserman]
2023-07-24  make rms_norm_eps a parameter (#2374)  [slaren]
2023-07-24  Chat UI extras (#2366)  [Aarni Koskela]
2023-07-23  llama : add grammar-based sampling (#1773)  [Evan Jones]
2023-07-23  Add gqa parameter support to the server (#2351)  [IgnacioFDM]
2023-07-23  common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347)  [wzy]
2023-07-23  llama : grouped-query attention + LLaMAv2 70B support (#2276)  [Georgi Gerganov]
2023-07-23  llama : print help to stdout (#2338)  [maddes8cht]
2023-07-23  examples : simplify vim plugin (#2327)  [AustinMroz]
2023-07-22  llama : optimize memory buffers (#2325)  [Georgi Gerganov]
2023-07-22  Perplexity: Compute scores correlated to HellaSwag (#2312)  [klosax]
2023-07-22  examples : basic VIM plugin  [whoreson]
2023-07-21  examples : add easy python script to create quantized (k-bit support) GGML mo...  [Richard Roberson]
2023-07-21  examples : fix typo in minigpt4.py (#2298)  [Ikko Eltociear Ashimine]
2023-07-21  ggml : fix rope args order + assert (#2054)  [Georgi Gerganov]
2023-07-21  llama : remove cfg smooth factor as it is only a reparameterization of the gu...  [Guillaume "Vermeille" Sanchez]
2023-07-21  gitignore : changes for Poetry users + chat examples (#2284)  [Jose Maldonado]
2023-07-21  llama : make tensor_split ptr instead of array (#2272)  [Georgi Gerganov]
2023-07-21  MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287)  [Hatsune Miku]
2023-07-21  make : fix embdinput library and server examples building on MSYS2 (#2235)  [Przemysław Pawełczyk]
2023-07-19  cmake : install targets (#2256)  [wzy]
2023-07-18  ci : integrate with ggml-org/ci (#2250)  [Georgi Gerganov]
2023-07-18  llama : shorten quantization descriptions  [Georgi Gerganov]
2023-07-15  llama : add custom RoPE (#2054)  [Xiao-Yong Jin]
2023-07-14  examples : fixed path typos in embd-input (#2214)  [Shangning Xu]
2023-07-13  Revert "Support using mmap when applying LoRA (#2095)" (#2206)  [Howard Su]
2023-07-11  ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)  [Spencer Sutton]
2023-07-11  llama : add classifier-free guidance (#2135)  [Bach Le]
2023-07-11  Support using mmap when applying LoRA (#2095)  [Howard Su]
2023-07-10  mpi : add support for distributed inference via MPI (#2099)  [Evan Miller]
2023-07-09  main : escape prompt prefix/suffix (#2151)  [Nigel Bosch]
2023-07-07  ggml : change ggml_graph_compute() API to not require context (#1999)  [Qingyou Meng]
2023-07-06  convert : update for baichuan (#2081)  [Judd]
2023-07-06  alpaca.sh : update model file name (#2074)  [tslmy]
2023-07-05  Expose generation timings from server & update completions.js (#2116)  [Tobias Lütke]
2023-07-05  Update Server Instructions (#2113)  [Jesse Jojo Johnson]
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)  [Stephan Walter]
2023-07-05  Update server instructions for web front end (#2103)  [Jesse Jojo Johnson]
2023-07-05  embd-input: Fix input embedding example unsigned int seed (#2105)  [Nigel Bosch]
2023-07-04  Add an API example using server.cpp similar to OAI. (#2009)  [jwj7140]
2023-07-04  Simple webchat for server (#1998)  [Tobias Lütke]
2023-07-04  fix server crashes (#2076)  [Henri Vasserman]