llama.cpp.git: commit log for examples/main/main.cpp (branch: master)
Age        | Commit message | Author
2023-04-22 | Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122) | Stephan Walter
2023-04-22 | llama : print timings on ctrl+c exit (#1021) | wbpxre150
2023-04-21 | main : evaluate tokens in batches after swapping context (#1014) | Alex Klinkhamer
2023-04-17 | Add LoRA support (#820) | slaren
2023-04-16 | examples: add missing <ctime> include for time() (#1011) | Pavol Rusnak
2023-04-14 | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) | Pavol Rusnak
2023-04-14 | main : alternative instruct mode (Vicuna support, etc.) (#863) | Tomáš Pazdiora
2023-04-11 | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | Pavol Rusnak
2023-04-11 | Windows fixes (#890) | comex
2023-04-10 | Rewrite loading code to try to satisfy everyone: | comex
2023-04-08 | fix for windows utf-8 input (#840) | Tomáš Pazdiora
2023-04-06 | Do not crash when it has nothing to say. (#796) | Sergey Alirzaev
2023-04-03 | Windows: reactive sigint handler after each Ctrl-C (#736) | mgroeber9110
2023-03-28 | all : be more strict about converting float to double (#458) | Stephan Walter
2023-03-28 | main.cpp fixes, refactoring (#571) | anzz1
2023-03-26 | [main] fix infinite generation (-n == -1) (#523) | anzz1
2023-03-26 | Exit from interactive mode if input stream is bad (#491) | Harald Fernengel
2023-03-25 | (Windows) Set console to UTF-8 on init (#420) | anzz1
2023-03-25 | Fix colors enabling on WIN32 | Georgi Gerganov
2023-03-25 | If n_predict == -1, generate forever | Georgi Gerganov
2023-03-25 | Inifinite generation via context swapping (#71) | Georgi Gerganov
2023-03-25 | Overhaul the examples structure | Georgi Gerganov