Commit log: llama.cpp.git (branch: master), path: examples/common.cpp

Age | Commit message | Author
2023-08-02 | tests : Fix compilation warnings (Linux/GCC) (#2451) | Eve
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453) | Johannes Gäßler
2023-07-28 | perplexity : add Hellaswag calculation (#2389) | klosax
2023-07-25 | main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304) | Xiao-Yong Jin
2023-07-24 | make rms_norm_eps a parameter (#2374) | slaren
2023-07-23 | llama : add grammar-based sampling (#1773) | Evan Jones
2023-07-23 | common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347) | wzy
2023-07-23 | llama : grouped-query attention + LLaMAv2 70B support (#2276) | Georgi Gerganov
2023-07-23 | llama : print help to stdout (#2338) | maddes8cht
2023-07-22 | llama : optimize memory buffers (#2325) | Georgi Gerganov
2023-07-22 | Perplexity: Compute scores correlated to HellaSwag (#2312) | klosax
2023-07-21 | llama : remove cfg smooth factor as it is only a reparameterization of the gu... | Guillaume "Vermeille" Sanchez
2023-07-21 | llama : make tensor_split ptr instead of array (#2272) | Georgi Gerganov
2023-07-18 | ci : integrate with ggml-org/ci (#2250) | Georgi Gerganov
2023-07-15 | llama : add custom RoPE (#2054) | Xiao-Yong Jin
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206) | Howard Su
2023-07-11 | llama : add classifier-free guidance (#2135) | Bach Le
2023-07-11 | Support using mmap when applying LoRA (#2095) | Howard Su
2023-07-09 | main : escape prompt prefix/suffix (#2151) | Nigel Bosch
2023-06-29 | Use unsigned for random seed (#2006) | Howard Su
2023-06-28 | CUDA GPU acceleration for LoRAs + f16 models (#1970) | Johannes Gäßler
2023-06-26 | ggml : add NUMA support (#1556) | zrm
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797) | Didzis Gosko
2023-06-17 | Only one CUDA stream per device for async compute (#1898) | Johannes Gäßler
2023-06-16 | build : fix and ignore MSVC warnings (#1889) | Borislav Stanimirov
2023-06-15 | Better error when using both LoRA + GPU layers (#1861) | Johannes Gäßler
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827) | Johannes Gäßler
2023-06-11 | Fix issue where interactive mode crashes when input exceeds ctx size (#1789) | Kerfuffle
2023-06-06 | main: add the possibility to open the prompt cache read-only (#1640) | Willy Tarreau
2023-06-06 | Multi GPU support, CUDA refactor, CUDA scratch buffer (#1703) | Johannes Gäßler
2023-06-04 | llama : Metal inference (#1642) | Georgi Gerganov
2023-05-28 | Only show -ngl option when relevant + other doc/arg handling updates (#1625) | Kerfuffle
2023-05-28 | examples : add --alias option to gpt_params to set use friendly model name (#... | Vladimir Zorin
2023-05-20 | Fix for mingw (#1462) | DannyDaemonic
2023-05-19 | main : make reverse prompt option act as a stop token in non-interactive mode... | Jason McCartney
2023-05-19 | minor : fix compile warnings | Georgi Gerganov
2023-05-17 | Remove unused n_parts parameter (#1509) | Stephan Walter
2023-05-15 | fix get_num_physical_cores() (#1436) | zrm
2023-05-13 | ggml : GPU-accelerated token generation (#1412) | Johannes Gäßler
2023-05-12 | CLI args use - instead of _, backwards compatible (#1416) | Johannes Gäßler
2023-05-10 | main : add option to save full output to session (#1338) | Evan Jones
2023-05-09 | Locale fix for Windows (#1379) | DannyDaemonic
2023-05-08 | Interface improvements and `--multiline-input` (previously `--author-mode`) (... | DannyDaemonic
2023-05-08 | llama : require first token to be BOS (#1303) | Georgi Gerganov
2023-05-08 | Documented CUDA reproducibility, added warning (#1346) | Johannes Gäßler
2023-05-04 | main : add --in-suffix option (#1318) | 44670
2023-05-04 | Only escape prompts when used with `-e` (#1311) | DannyDaemonic
2023-05-02 | Process escape sequences given in prompts (#1173) | DannyDaemonic
2023-05-03 | fix missing parameters in `llama_init_from_gpt_params` (#1293) | slaren
2023-05-02 | examples : add llama_init_from_gpt_params() common function (#1290) | Ron Evans