llama.cpp.git, branch master, commit log for path: root/examples
Age         Commit message [Author]

2023-04-18  ggml : add new Q4_2 quantization (ARM only) (#1046) [Georgi Gerganov]
2023-04-17  Add LoRA support (#820) [slaren]
2023-04-17  quantize-stats : fix bug in --type argument [Georgi Gerganov]
2023-04-16  examples: add missing <ctime> include for time() (#1011) [Pavol Rusnak]
2023-04-15  benchmark : fix result validation in benchmark-q4_0-matmult (#987) [Ivan Komarov]
2023-04-14  Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) [Pavol Rusnak]
2023-04-14  Expose type name from ggml (#970) [Pavol Rusnak]
2023-04-14  main : alternative instruct mode (Vicuna support, etc.) (#863) [Tomáš Pazdiora]
2023-04-14  perplexity : add support for batch size to `--perplexity` (#407) [Gary Linscott]
2023-04-13  common : remove unnecessary includes (#947) [CRD716]
2023-04-13  llama : merge llama_internal.h into llama.h [Georgi Gerganov]
2023-04-13  fix whitespace (#944) [CRD716]
2023-04-13  examples : add -n to alpaca and gpt4all scripts (#706) [niansa/tuxifan]
2023-04-13  benchmark : add tool for timing q4_0 matrix multiplication (#653) [SebastianApel]
2023-04-11  Fix whitespace, add .editorconfig, add GitHub workflow (#883) [Pavol Rusnak]
2023-04-11  Add enum llama_ftype, sync ggml_type to model files (#709) [Stephan Walter]
2023-04-11  Windows fixes (#890) [comex]
2023-04-10  Rewrite loading code to try to satisfy everyone: [comex]
2023-04-08  fix for windows utf-8 input (#840) [Tomáš Pazdiora]
2023-04-08  Add quantize-stats command for testing quantization (#728) [unbounded]
2023-04-06  Do not crash when it has nothing to say. (#796) [Sergey Alirzaev]
2023-04-05  miku.sh : add executable bit (#780) [at8u]
2023-04-05  examples : add Miku.sh (#724) [at8u]
2023-04-03  Windows: reactive sigint handler after each Ctrl-C (#736) [mgroeber9110]
2023-04-02  examples : add gpt4all script (#658) [Leonardo Neumann]
2023-04-02  fix default params for examples/main (#697) [Murilo Santana]
2023-04-01  Show error message when -f fails [Slaren]
2023-03-30  Fix ggml_init_params in quantize [Slaren]
2023-03-29  Create chat-13B.bat (#592) [Thérence]
2023-03-29  add example of re-act pattern (#583) [Tobias Lütke]
2023-03-28  llama : fix linkage with mingw (#551) [anzz1]
2023-03-28  all : be more strict about converting float to double (#458) [Stephan Walter]
2023-03-28  ggml : introduce structs for the q4 data blocks (#356) [Stephan Walter]
2023-03-28  main.cpp fixes, refactoring (#571) [anzz1]
2023-03-27  Fix missing ggml link in cmake for examples/* on w64-mingw32 (#542) [Marco Matthies]
2023-03-26  Update README and comments for standalone perplexity tool (#525) [Stephan Walter]
2023-03-26  [main] fix infinite generation (-n == -1) (#523) [anzz1]
2023-03-26  Exit from interactive mode if input stream is bad (#491) [Harald Fernengel]
2023-03-25  (Windows) Set console to UTF-8 on init (#420) [anzz1]
2023-03-25  Fix colors enabling on WIN32 [Georgi Gerganov]
2023-03-25  If n_predict == -1, generate forever [Georgi Gerganov]
2023-03-25  Inifinite generation via context swapping (#71) [Georgi Gerganov]
2023-03-25  Cleanup STL headers + fix embedding examples + minor stuff [Georgi Gerganov]
2023-03-25  Move chat scripts into "./examples" [Georgi Gerganov]
2023-03-25  Overhaul the examples structure [Georgi Gerganov]
2023-03-24  Immediately start processing the prompt before user input has been provided (... [Georgi Gerganov]
2023-03-21  fix typo in chatLLaMa (#368) [Mathieu Nayrolles]
2023-03-21  Add chatLLaMa script (#198) [Jean-Christophe Hoelt]