path: root/examples
Age         Commit message                                              Author
2023-04-29  common : change default parameters to pre-#1126 (#1223)  [Georgi Gerganov]
2023-04-29  llama : new sampling algorithms (#1126)  [Ivan Stepanov]
* Sample interface, new samplers:
  - locally typical sampling
  - tail free sampling
  - frequency and presence penalty
  - mirostat
  Ignore EOS fix: -inf should be used.
* mirostat
* Added --logit-bias and --no-penalize-nl, removed std::span
* Use C++11, clarify llama API documentation, rename Mirostat parameters to --mirostat_lr and --mirostat_ent, add temperature sampling for Mirostat, simplify Mirostat sampling API parameters (removed N and *k)
* Save and load example adjust
* Tests
* Windows build fix
* Windows test fix
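For context, a minimal sketch of the Mirostat v2 idea behind --mirostat_lr (eta) and --mirostat_ent (tau), written against a plain probability vector rather than the llama.cpp sampler API; all names and structure here are illustrative only:

```cpp
// Sketch of Mirostat v2 over an already-softmaxed distribution.
// tau = target surprise (--mirostat_ent), eta = learning rate (--mirostat_lr).
#include <cmath>
#include <random>
#include <vector>

int mirostat_v2_sample(const std::vector<float> & probs, float tau, float eta,
                       float & mu, std::mt19937 & rng) {
    // Drop tokens whose surprise -log2(p) exceeds the running threshold mu.
    std::vector<int> kept;
    for (int i = 0; i < (int) probs.size(); ++i) {
        if (-std::log2(probs[i]) <= mu) kept.push_back(i);
    }
    if (kept.empty()) { // always keep at least the most probable token
        int best = 0;
        for (int i = 1; i < (int) probs.size(); ++i)
            if (probs[i] > probs[best]) best = i;
        kept.push_back(best);
    }
    // Renormalize and sample from the truncated distribution.
    std::vector<float> w;
    for (int i : kept) w.push_back(probs[i]);
    std::discrete_distribution<int> dist(w.begin(), w.end());
    const int tok = kept[dist(rng)];
    // Nudge mu toward the target surprise tau.
    mu -= eta * (-std::log2(probs[tok]) - tau);
    return tok;
}
```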
2023-04-28  Remove Q4_3 which is no better than Q5 (#1218)  [Stephan Walter]
2023-04-28  examples : add Jeopardy example (#1168)  [CRD716]
* Basic Setup
* Prevent Results.txt from coming up
* Prefixes, line separators, etc.
* editorcheck
* introduction to give more consistent results
* Basic graph thing
* Grading, ready for testing!
* Y'all ready to get funky?
* fix column removal stuff
* missed a few
2023-04-28  llama : add session file format and saved sessions in main (#1169)  [Evan Jones]
2023-04-26  ggml : add Q5_0 and Q5_1 quantization (#1187)  [Georgi Gerganov]
* ggml : add Q5_0 quantization (cuBLAS only)
* ggml : fix Q5_0 qh -> uint32_t
* ggml : fix q5_0 histogram stats
* ggml : q5_0 scalar dot product
* ggml : q5_0 ARM NEON dot
* ggml : q5_0 more efficient ARM NEON using uint64_t masks
* ggml : rename Q5_0 -> Q5_1
* ggml : adding Q5_0 mode
* quantize : add Q5_0 and Q5_1 to map
* ggml : AVX2 optimizations for Q5_0, Q5_1 (#1195)
Co-authored-by: Stephan Walter <stephan@walter.name>
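A rough sketch of the Q5_0 layout added here: 32 weights per block, a shared scale, the low 4 bits of each quant packed two per byte, and the fifth (high) bit of all 32 quants collected into one 32-bit word. Field names mirror ggml's block_q5_0, but a plain float scale (ggml uses fp16) keeps the sketch self-contained:

```cpp
#include <cstdint>

struct block_q5_0_sketch {
    float    d;      // block scale (ggml stores this as fp16)
    uint32_t qh;     // high (5th) bit of each of the 32 quants
    uint8_t  qs[16]; // low 4 bits, two quants per byte
};

// Dequantize one block: each value decodes to d * (q - 16), q in [0, 31].
void dequantize_q5_0_sketch(const block_q5_0_sketch & b, float * out) {
    for (int j = 0; j < 16; ++j) {
        // Recover the 5th bit for the low- and high-nibble quants.
        const uint8_t h0 = ((b.qh >> (j +  0)) << 4) & 0x10;
        const uint8_t h1 =  (b.qh >> (j + 12))       & 0x10;
        const int q0 = ((b.qs[j] & 0x0F) | h0) - 16;
        const int q1 = ((b.qs[j] >>   4) | h1) - 16;
        out[j]      = b.d * q0;
        out[j + 16] = b.d * q1;
    }
}
```

Q5_1 follows the same packing but, like Q4_1, adds a per-block minimum so values decode as d*q + m.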
2023-04-26  quantize : use `map` to assign quantization type from `string` (#1191)  [Pavol Rusnak]
instead of `int` (the `int` option is still supported). This allows the following usage: `./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0` instead of: `./quantize ggml-model-f16.bin ggml-model-q4_0.bin 2`
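A minimal sketch of this kind of string-to-type lookup with a numeric fallback; the enum values here are illustrative, not authoritative:

```cpp
#include <cstdlib>
#include <string>
#include <unordered_map>

enum qtype { Q4_0 = 2, Q4_1 = 3, Q8_0 = 7, Q5_0 = 8, Q5_1 = 9 };

static const std::unordered_map<std::string, qtype> QTYPE_MAP = {
    { "q4_0", Q4_0 }, { "q4_1", Q4_1 },
    { "q5_0", Q5_0 }, { "q5_1", Q5_1 }, { "q8_0", Q8_0 },
};

qtype parse_qtype(const std::string & arg) {
    auto it = QTYPE_MAP.find(arg);
    if (it != QTYPE_MAP.end()) return it->second;
    return (qtype) atoi(arg.c_str()); // fall back to the old numeric form
}
```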
2023-04-25  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179)  [Georgi Gerganov]
* ggml : add Q8_0 quantization format (rename the old one to Q8_1)
* tests : fix test-quantize-fns
* ggml : finalize Q8_0 implementation
* ggml : use q4_0_q8_0 and q4_2_q8_0
* ggml : fix Q8_0 dot product bug (ARM)
* ggml : Q8_0 unroll x2
* ggml : fix bug - using wrong block type
* ggml : extend quantize_fns_t with "vec_dot_type"
* ggml : fix Q8_0 to use 255 values out of 256
* ggml : fix assert using wrong QK4_2 instead of QK4_3
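For illustration, a sketch of Q8_0 quantization of one 32-weight block: the scale comes from the absolute maximum so the symmetric int8 range [-127, 127] is used, which is the "255 values out of 256" fix mentioned above. Plain float stands in for ggml's fp16 scale:

```cpp
#include <cmath>
#include <cstdint>

struct block_q8_0_sketch {
    float  d;      // block scale (ggml stores this as fp16)
    int8_t qs[32]; // one signed 8-bit quant per weight
};

void quantize_q8_0_sketch(const float * x, block_q8_0_sketch & b) {
    float amax = 0.0f;
    for (int j = 0; j < 32; ++j) amax = std::fmax(amax, std::fabs(x[j]));
    b.d = amax / 127.0f;
    const float id = b.d != 0.0f ? 1.0f / b.d : 0.0f;
    for (int j = 0; j < 32; ++j) b.qs[j] = (int8_t) std::lround(x[j] * id);
}
```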
2023-04-24  examples : add save_load_state example (#1150)  [xaedes]
* add save_load_state example
* use <cstdio> instead of <iostream> and fprintf/printf instead of cout
* renamed save-load-state example files, replacing underscores by dashes
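A minimal sketch of what the save_load_state example exercises: serializing the context state (RNG, KV cache, etc.) to a buffer and restoring it later. This assumes the llama.h state API of this period (llama_get_state_size / llama_copy_state_data / llama_set_state_data); check llama.h for the exact signatures:

```cpp
#include <cstdint>
#include <vector>
#include "llama.h"

std::vector<uint8_t> save_state(llama_context * ctx) {
    std::vector<uint8_t> buf(llama_get_state_size(ctx));
    llama_copy_state_data(ctx, buf.data()); // snapshot the full context state
    return buf;
}

void load_state(llama_context * ctx, std::vector<uint8_t> & buf) {
    llama_set_state_data(ctx, buf.data()); // restore a previous snapshot
}
```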
2023-04-24  examples/main : README improvements and some light refactoring (#1131)  [mgroeber9110]
2023-04-23  Fix LoRA acronym (#1145)  [slaren]
2023-04-23  Added README.md for main with examples and explanations (#1139)  [DannyDaemonic]
2023-04-22  Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)  [Stephan Walter]
2023-04-22  llama : print timings on ctrl+c exit (#1021)  [wbpxre150]
* print timings on ctrl+c exit
* remove redundant free memory call
* add global pointer to ctx
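A sketch of the approach the bullets above describe: a global context pointer lets a SIGINT handler print timings before exiting. llama_print_timings is the existing llama.h call; the handler wiring here is illustrative:

```cpp
#include <csignal>
#include <cstdlib>
#include "llama.h"

static llama_context * g_ctx = nullptr;

static void sigint_handler(int /*signo*/) {
    if (g_ctx) llama_print_timings(g_ctx);
    _Exit(130); // conventional exit code for SIGINT
}

// in main(), after creating the context:
//   g_ctx = ctx;
//   signal(SIGINT, sigint_handler);
```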
2023-04-22  llama : have n_batch default to 512 (#1091)  [eiery]
* set default n_batch to 512 when using BLAS
* spacing
* alternate implementation of setting different n_batch for BLAS
* set n_batch to 512 for all cases
2023-04-22  examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107)  [Clint Herron]
* Moving parameters to separate lines for readability.
* Increasing repeat_penalty to 1.1 to make alpaca more usable by default.
* Adding trailing newline.
2023-04-21  main : evaluate tokens in batches after swapping context (#1014)  [Alex Klinkhamer]
* examples : evaluate tokens in batches after swapping context
* Update examples/main/main.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
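A sketch of the batching pattern this commit applies after a context swap leaves a backlog of tokens, assuming the llama_eval C API of this period:

```cpp
#include <algorithm>
#include <vector>
#include "llama.h"

// Evaluate a backlog of tokens n_batch at a time instead of all at once.
void eval_in_batches(llama_context * ctx, const std::vector<llama_token> & embd,
                     int & n_past, int n_batch, int n_threads) {
    for (int i = 0; i < (int) embd.size(); i += n_batch) {
        const int n_eval = std::min(n_batch, (int) embd.size() - i);
        llama_eval(ctx, embd.data() + i, n_eval, n_past, n_threads);
        n_past += n_eval;
    }
}
```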
2023-04-21  Show perplexity ETA in hours and minutes (#1096)  [slaren]
2023-04-20  llama : multi-threaded quantization (#1075)  [Kawrakow]
* Multi-threaded quantization. Not much gain for simple quantizations, but it will be important for quantizations that require more CPU cycles.
* Multi-threading for quantize-stats. It now does the job in ~14 seconds on my Mac for Q4_0, Q4_1 and Q4_2. Single-threaded it was taking more than 2 minutes after adding the more elaborate version of Q4_2.
* Reviewer comments
* Avoiding compiler confusion. After changing chunk_size to const int as suggested by @ggerganov, clang and GCC started warning me that I don't need to capture it in the lambda, so I removed it from the capture list. But that makes the MSVC build fail, so I made it a constexpr to keep every compiler happy.
* Still fighting with lambda captures in MSVC
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
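A sketch of the chunked work-splitting described above, including the constexpr chunk_size workaround for MSVC lambda captures; names are illustrative, not the actual llama.cpp code:

```cpp
#include <algorithm>
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Threads pull fixed-size chunks of rows from a shared atomic counter,
// so faster threads simply take more chunks.
void quantize_parallel(int nrows, int nthread,
                       const std::function<void(int, int)> & do_rows) {
    constexpr int chunk_size = 32; // constexpr: no lambda capture needed on any compiler
    std::atomic<int> next(0);
    std::vector<std::thread> workers;
    for (int t = 0; t < nthread; ++t) {
        workers.emplace_back([&]() {
            for (;;) {
                const int first = next.fetch_add(chunk_size);
                if (first >= nrows) break;
                do_rows(first, std::min(first + chunk_size, nrows));
            }
        });
    }
    for (auto & w : workers) w.join();
}
```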
2023-04-20  ggml : add Q4_3 quantization (#1082)  [Georgi Gerganov]
2023-04-18  ggml : add new Q4_2 quantization (ARM only) (#1046)  [Georgi Gerganov]
* ggml : Q4_2 ARM
* ggml : add ggml_is_quantized()
* llama : update llama_type_name() with Q4_2 entry
* ggml : speed-up q4_2
  - 4 threads: ~100ms -> ~90ms
  - 8 threads: ~55ms -> ~50ms
* ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
2023-04-17  Add LoRA support (#820)  [slaren]
2023-04-17  quantize-stats : fix bug in --type argument  [Georgi Gerganov]
2023-04-16  examples : add missing <ctime> include for time() (#1011)  [Pavol Rusnak]
2023-04-15  benchmark : fix result validation in benchmark-q4_0-matmult (#987)  [Ivan Komarov]
2023-04-14  Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)  [Pavol Rusnak]
This reverts commit f4d277ae17247ee51129ef1a9ff74d377cc90b1b.
2023-04-14  Expose type name from ggml (#970)  [Pavol Rusnak]
Avoid duplication of type names in utils.
Co-authored-by: Håkon H. Hitland <haakon@likedan.net>
2023-04-14  main : alternative instruct mode (Vicuna support, etc.) (#863)  [Tomáš Pazdiora]
* Add support for configs: add configurable prefixes/suffixes, deprecate instruct mode, add stop prompt
* Add multiline mode, update text input
* bugfix
* update implementation
* typos
* Change --multiline implementation to be toggled by EOF
* bugfix
* default multiline mode
* add more configs
* update formatting
* apply suggestions
2023-04-14  perplexity : add support for batch size to `--perplexity` (#407)  [Gary Linscott]
* Add support to batch size for perplexity
* Revert "Fix memory allocation issues and seg faults"
  This reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df.
* update from merge
* Remove perplexity from main
* updates
* Update batch size for efficiency
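For reference, the quantity the perplexity tool computes, as a self-contained sketch; gathering the per-token probabilities from the model's logits is elided. Perplexity is exp of the mean negative log-likelihood over the evaluated tokens:

```cpp
#include <cmath>
#include <vector>

double perplexity(const std::vector<double> & token_probs) {
    double nll = 0.0; // accumulated negative log-likelihood
    for (double p : token_probs) nll += -std::log(p);
    return std::exp(nll / (double) token_probs.size());
}
```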
2023-04-13  common : remove unnecessary includes (#947)  [CRD716]
2023-04-13  llama : merge llama_internal.h into llama.h  [Georgi Gerganov]
Hide it behind an #ifdef
2023-04-13  fix whitespace (#944)  [CRD716]
2023-04-13  examples : add -n to alpaca and gpt4all scripts (#706)  [niansa/tuxifan]
2023-04-13  benchmark : add tool for timing q4_0 matrix multiplication (#653)  [SebastianApel]
* Initial version of q4_0 matrix multiplication benchmark
* Bugfix: added dependency on ggml.o to benchmark
* Reviewer requests: added parameter for threads, switched to ggml_time_us()
* Reviewer input: removed rdtsc, use epsilon for check
* Review comment: removed set_locale
* Feature: param for number of iterations; bugfix for use of parameter threads
* Reviewer suggestion: moved to examples
* Reviewer feedback: updated clean: and benchmark: sections
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-04-11  Fix whitespace, add .editorconfig, add GitHub workflow (#883)  [Pavol Rusnak]
2023-04-11  Add enum llama_ftype, sync ggml_type to model files (#709)  [Stephan Walter]
2023-04-11  Windows fixes (#890)  [comex]
Mostly for msys2 and mingw64 builds, which are different from each other and different from standard Visual Studio builds. Isn't Windows fun?
- Define _GNU_SOURCE in more files (it's already used in ggml.c for Linux's sake).
- Don't use PrefetchVirtualMemory if not building for Windows 8 or later (mingw64 doesn't by default), but warn the user about this situation since it's probably not intended.
- Check for NOMINMAX already being defined, which it is on mingw64.
- Actually use the `increment` variable (bug in my `pizza` PR).
- Suppress unused variable warnings in the fake pthread_create and pthread_join implementations for Windows.
- (not Windows-related) Remove mention of `asprintf` from comment; `asprintf` is no longer used.
Fixes #871.
2023-04-10  Rewrite loading code to try to satisfy everyone  [comex]
- Support all three formats (ggml, ggmf, ggjt). (However, I didn't include the hack needed to support GPT4All files without conversion. Those can still be used after converting them with convert.py from my other PR.)
- Support both mmap and read (mmap is used by default, but can be disabled with `--no-mmap`, and is automatically disabled for pre-ggjt files or on platforms where mmap is not supported).
- Support multi-file models like before, but automatically determine the number of parts rather than requiring `--n_parts`.
- Improve validation and error checking.
- Stop using the per-file type field (f16) entirely in favor of just relying on the per-tensor type/size fields. This has no immediate benefit, but makes it easier to experiment with different formats, and should make it easier to support the new GPTQ-for-LLaMa models in the future (I have some work in progress on that front).
- Support VirtualLock on Windows (using the same `--mlock` option as on Unix).
- Indicate loading progress when using mmap + mlock. (Which led me to the interesting observation that on my Linux machine, with a warm file cache, mlock actually takes some time, whereas mmap without mlock starts almost instantly...)
- To help implement this, move mlock support from ggml to the loading code.
- Add madvise/PrefetchVirtualMemory support (based on #740).
- Switch from ifstream to the `fopen` family of functions to avoid unnecessary copying and, when mmap is enabled, allow reusing the same file descriptor for both metadata reads and mmap (whereas the existing implementation opens the file a second time to mmap).
- Quantization now produces a single-file output even with multi-file inputs (not really a feature as much as "it was easier this way").
Implementation notes: I tried to factor the code into more discrete pieces than before. Regarding code style, I tried to follow it, but I'm naughty and used a few advanced C++ features repeatedly:
- Destructors to make it easier to ensure everything gets cleaned up.
- Exceptions. I don't even usually use exceptions when writing C++, and I can remove them if desired... but here they make the loading code much more succinct while still properly handling a variety of errors, ranging from API calls failing to integer overflow and allocation failure. The exceptions are converted to error codes at the API boundary.
Co-authored-by: Pavol Rusnak <pavol@rusnak.io> (for the bit I copied from #740)
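A hedged, POSIX-only sketch of the default mmap path described above; error handling and the Windows VirtualLock path are omitted, and this is not the actual llama.cpp loader:

```cpp
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map the whole model file read-only, optionally pinning it with mlock,
// so tensor data can be read in place without copying.
void * map_model(const char * path, size_t * size_out, bool use_mlock) {
    const int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    struct stat st;
    fstat(fd, &st);
    void * addr = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd); // the mapping keeps the file alive
    if (addr == MAP_FAILED) return nullptr;
    if (use_mlock) mlock(addr, st.st_size); // may fail without privileges
    *size_out = st.st_size;
    return addr;
}
```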
2023-04-08  fix for windows utf-8 input (#840)  [Tomáš Pazdiora]
Use UTF-16 as input on Windows, since UTF-8 does not work and reads multibyte characters as zeros
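A sketch of the technique, assuming the usual Win32 calls (ReadConsoleW / WideCharToMultiByte); error handling is omitted:

```cpp
#include <string>
#include <windows.h>

// Read a line from the console as UTF-16 and convert it to UTF-8,
// instead of reading bytes that the narrow console codepage mangles.
std::string read_line_utf8() {
    wchar_t wbuf[4096];
    DWORD nread = 0;
    ReadConsoleW(GetStdHandle(STD_INPUT_HANDLE), wbuf, 4096, &nread, nullptr);
    const int n = WideCharToMultiByte(CP_UTF8, 0, wbuf, (int) nread,
                                      nullptr, 0, nullptr, nullptr);
    std::string out(n, '\0');
    WideCharToMultiByte(CP_UTF8, 0, wbuf, (int) nread, &out[0], n, nullptr, nullptr);
    return out;
}
```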
2023-04-08  Add quantize-stats command for testing quantization (#728)  [unbounded]
Adds a command that calculates statistics over the errors introduced by quantization, such as mean square error, max error, and some percentile errors, for layer weights. Should be useful for testing quantization improvements. Also exposes some internal state from ggml and llama for testing.
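A sketch of the kind of comparison such a tool performs: round-trip weights through quantize/dequantize and accumulate error statistics (the percentile stats are elided for brevity):

```cpp
#include <cmath>
#include <cstddef>

struct quant_error_stats { double mse; double max_err; };

// Compare reference weights against their quantize/dequantize round-trip.
quant_error_stats compare_layers(const float * ref, const float * rt, size_t n) {
    quant_error_stats s = { 0.0, 0.0 };
    for (size_t i = 0; i < n; ++i) {
        const double e = std::fabs((double) ref[i] - (double) rt[i]);
        s.mse += e * e;
        if (e > s.max_err) s.max_err = e;
    }
    s.mse /= (double) n;
    return s;
}
```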
2023-04-06  Do not crash when it has nothing to say (#796)  [Sergey Alirzaev]
Otherwise this was observed in interactive mode:
/usr/lib/gcc/x86_64-pc-linux-gnu/12/include/g++-v12/bits/stl_vector.h:1230: reference std::vector<int>::back() [_Tp = int, _Alloc = std::allocator<int>]: Assertion '!this->empty()' failed.
2023-04-05  miku.sh : add executable bit (#780)  [at8u]
2023-04-05  examples : add Miku.sh (#724)  [at8u]
* Add Miku.sh to examples
* Add missing line to prompt in Miku.sh
* Add --keep param to Miku.sh
* Remove '[end_of_conversation]' line from Miku.sh; it is no longer necessary
2023-04-03  Windows: reactivate sigint handler after each Ctrl-C (#736)  [mgroeber9110]
2023-04-02  examples : add gpt4all script (#658)  [Leonardo Neumann]
2023-04-02  fix default params for examples/main (#697)  [Murilo Santana]
2023-04-01  Show error message when -f fails  [Slaren]
2023-03-30  Fix ggml_init_params in quantize  [Slaren]
2023-03-29  Create chat-13B.bat (#592)  [Thérence]
* Create chat-13B.bat: same script as chat-13B.sh, but for Windows users. Tested and working on Windows 10/11 v22H2.
* Apply suggestions from code review
Co-authored-by: anzz1 <anzz1@live.com>
2023-03-29  add example of re-act pattern (#583)  [Tobias Lütke]
* add example of re-act pattern
* spelling...
* fixed whitespace in reverse prompt issue