path: root/examples
Age  Commit message  Author
2023-04-06  Do not crash when it has nothing to say. (#796)  Sergey Alirzaev
Otherwise this is observed in interactive mode: /usr/lib/gcc/x86_64-pc-linux-gnu/12/include/g++-v12/bits/stl_vector.h:1230: reference std::vector<int>::back() [_Tp = int, _Alloc = std::allocator<int>]: Assertion '!this->empty()' failed.
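The guard behind a fix like this can be sketched as follows (a minimal illustration, not the actual patch from #796; `last_token_or_default` is a hypothetical helper): never call `std::vector::back()` on an empty vector, which is exactly what trips the libstdc++ assertion quoted above.

```cpp
#include <vector>

// Hypothetical helper: check empty() before calling back().
// libstdc++ with assertions enabled aborts on back() of an empty vector.
int last_token_or_default(const std::vector<int> & tokens, int fallback) {
    if (tokens.empty()) {
        return fallback;  // nothing sampled yet: return a safe default
    }
    return tokens.back(); // safe: the vector is known to be non-empty
}
```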
2023-04-05  miku.sh : add executable bit (#780)  at8u
2023-04-05  examples : add Miku.sh (#724)  at8u
* Add Miku.sh to examples
* Add missing line to prompt in Miku.sh
* Add --keep param to Miku.sh
* Remove '[end_of_conversation]' line from Miku.sh; it is no longer necessary.
2023-04-03  Windows: reactivate SIGINT handler after each Ctrl-C (#736)  mgroeber9110
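The pattern this commit relies on can be sketched like so (a hedged illustration, not the actual patch): on Windows, the C runtime resets the SIGINT disposition to default after the handler runs once, so the handler must re-install itself.

```cpp
#include <csignal>

// Sketch of the re-registration pattern. Some C runtimes (notably on
// Windows) reset the handler to SIG_DFL after each delivery, so the
// handler re-installs itself before returning; on POSIX this is harmless.
volatile std::sig_atomic_t g_interrupted = 0;

void sigint_handler(int sig) {
    g_interrupted = 1;
    std::signal(sig, sigint_handler);  // reactivate for the next Ctrl-C
}
```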
2023-04-02  examples : add gpt4all script (#658)  Leonardo Neumann
2023-04-02  fix default params for examples/main (#697)  Murilo Santana
2023-04-01  Show error message when -f fails  Slaren
2023-03-30  Fix ggml_init_params in quantize  Slaren
2023-03-29  Create chat-13B.bat (#592)  Thérence
* Create chat-13B.bat
  Same script as chat-13B.sh, but for Windows users. Tested and working on Windows 10/11 v22H2.
* Apply suggestions from code review
---------
Co-authored-by: anzz1 <anzz1@live.com>
2023-03-29  add example of re-act pattern (#583)  Tobias Lütke
* add example of re-act pattern
* spelling...
* fixed whitespace in reverse prompt issue
2023-03-28  llama : fix linkage with mingw (#551)  anzz1
* Revert 7e53955 (#542)
  Still needs to be fixed properly
* Fix linking on mingw32
2023-03-28  all : be more strict about converting float to double (#458)  Stephan Walter
* Be more strict about converting float to double
* Test equivalence of round, SILU implementations
  Test module is commented out in CMakeLists.txt because the tests may take a long time, depending on how much the compiler optimizes.
* Fix softmax in perplexity.cpp
* all : prefer float over double where appropriate
* perplexity : add <cmath>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
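The float-over-double policy can be illustrated with a small sketch (written here from the standard definition of SILU, x * sigmoid(x), not copied from the patch): float literals and the float overload of `std::exp` keep every intermediate in single precision instead of silently promoting to double.

```cpp
#include <cmath>

// SILU written entirely in float: 1.0f (not 1.0) and the float overload
// of std::exp avoid any implicit promotion to double.
float silu(float x) {
    return x / (1.0f + std::exp(-x));
}
```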
2023-03-28  ggml : introduce structs for the q4 data blocks (#356)  Stephan Walter
* Introduce structs for the q4 data blocks
* ggml : rename quant struct variables + fix ARM_NEON
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-28  main.cpp fixes, refactoring (#571)  anzz1
- main: entering empty line passes back control without new input in interactive/instruct modes
- instruct mode: keep prompt fix
- instruct mode: duplicate instruct prompt fix
- refactor: move common console code from main -> common
2023-03-27  Fix missing ggml link in cmake for examples/* on w64-mingw32 (#542)  Marco Matthies
2023-03-26  Update README and comments for standalone perplexity tool (#525)  Stephan Walter
2023-03-26  [main] fix infinite generation (-n == -1) (#523)  anzz1
2023-03-26  Exit from interactive mode if input stream is bad (#491)  Harald Fernengel
Allow exiting the interactive prompt also with CTRL-D on Unix and CTRL-Z on Windows.
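The loop structure behind this behaviour can be sketched as follows (a minimal illustration; `read_lines` is a hypothetical helper, not the actual code): testing the stream in the loop condition makes CTRL-D / CTRL-Z, which put stdin into an EOF state, end the loop cleanly instead of leaving it spinning on a bad stream.

```cpp
#include <istream>
#include <sstream>
#include <string>

// Hypothetical read loop: std::getline returns the stream, which converts
// to false on EOF or error, so a bad input stream exits the loop cleanly.
int read_lines(std::istream & in) {
    std::string line;
    int count = 0;
    while (std::getline(in, line)) {
        ++count;  // process one line of user input
    }
    return count;
}
```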
2023-03-25  (Windows) Set console to UTF-8 on init (#420)  anzz1
Sets the console codepage to 65001 (CP_UTF8) on start for both input and output; this should fix problems with UTF-8 characters.
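The change can be sketched like so (hedged; the wrapper name is illustrative, not from the patch): `SetConsoleCP` and `SetConsoleOutputCP` are the Win32 calls that switch both codepages to UTF-8, and on non-Windows platforms the terminal is assumed to speak UTF-8 already.

```cpp
#ifdef _WIN32
#include <windows.h>
#endif

// Sketch: on Windows, switch both the input and output console codepages
// to UTF-8 (codepage 65001) at startup; elsewhere this is a no-op.
bool set_console_utf8() {
#ifdef _WIN32
    return SetConsoleCP(CP_UTF8) && SetConsoleOutputCP(CP_UTF8);
#else
    return true;  // POSIX terminals are assumed to already use UTF-8
#endif
}
```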
2023-03-25  Fix colors enabling on WIN32  Georgi Gerganov
2023-03-25  If n_predict == -1, generate forever  Georgi Gerganov
2023-03-25  Infinite generation via context swapping (#71)  Georgi Gerganov
2023-03-25  Cleanup STL headers + fix embedding examples + minor stuff  Georgi Gerganov
2023-03-25  Move chat scripts into "./examples"  Georgi Gerganov
2023-03-25  Overhaul the examples structure  Georgi Gerganov
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"
Hope I didn't break something!
2023-03-24  Immediately start processing the prompt before user input has been provided (#476)  Georgi Gerganov
2023-03-21  fix typo in chatLLaMa (#368)  Mathieu Nayrolles
The prompt contains a typo where 'alound' is used instead of 'aloud'.
2023-03-21  Add chatLLaMa script (#198)  Jean-Christophe Hoelt
* Add chatLLaMa script
* Fix shellcheck errors and do some cleanup
* Move chatLLaMa script to `examples` directory
* Reduce chatLLaMa context size to 2048
  Ref d7def1a7524f712e5ebb7cd02bab0f13aa56a7f9
* Set n_predict to 2048 in examples/chatLLaMa