path: root/examples
2023-05-12  ggml : remove bit shuffling (#1405)  [Georgi Gerganov]
  * ggml : remove Q4_0 bit shuffling (ARM NEON)
  * ggml : remove Q4_1 bit shuffling (ARM NEON + reference)
  * ggml : nibbles_from_floats() + bytes_from_nibbles() (ARM NEON)
  * ggml : remove Q4_2 bit shuffling (WIP, BROKEN)
  * ggml : remove Q5_0 bit shuffling (ARM NEON)
  * ggml : 2x faster scalar implementations
  * ggml : remove Q5_1 bit shuffling (ARM NEON + scalar)
  * ggml : simplify scalar dot
  * ggml : remove WASM SIMD bit shuffling + remove vzip for ARM 32-bit
  * ggml : fix Q4_1 quantization
  * ggml : update cuBLAS + normalize variable names
  * ggml : remove Q4_2 mode
  * ggml : minor formatting
  * ggml : fix Q5_0 quantization
  * scripts : add script for measuring the time per token
  * AVX implementations (#1370)
  * ggml : uniform 5th bit extraction
  * llama : produce error upon loading old model files
  * llama : fix model magic/version write
  * ggml : speed-up Q5_0 + Q5_1 at 4 threads
  * ggml : preserve old Q4 and Q5 formats
  * ggml : simplify Q8_1 - no need for low / high sums anymore
  * ggml : fix Q8_0 and Q8_1 rounding
  * Revert "AVX implementations (#1370)" (reverts commit 948d124837f9d287d8490f41338e0e4cceb0814f)
  * ggml : fix AVX2 implementation
  * sha : update hashes for 7B and 13B
  * readme : update timings + remove warning banner
  * llama : update v2 PR number to 1405
  * ggml : fix WASM comments
  * ggml : back to original bit order
  * readme : add note that Q4 and Q5 have been changed
  * llama : fix return for unknown version
  Co-authored-by: Stephan Walter <stephan@walter.name>
2023-05-10  main : add option to save full output to session (#1338)  [Evan Jones]
  * main : add option to save full output to session
  * split behavior into --session and --prompt-cache
  * restore original implementation with new names
  * PR comments
  * move the check for incompatible parameters to gpt_params_parse
  * Fix whitespace
  Co-authored-by: DannyDaemonic <DannyDaemonic@gmail.com>
2023-05-09  Locale fix for Windows (#1379)  [DannyDaemonic]
2023-05-08  Interface improvements and `--multiline-input` (previously `--author-mode`) (#1040)  [DannyDaemonic]
  * Interface improvements
  * Multiline input
  * Track character width
  * Works with all characters and control codes + Windows console fixes
2023-05-08  llama : require first token to be BOS (#1303)  [Georgi Gerganov]
  * llama : require first token to be BOS
  * scripts : add ppl-run-all.sh
  * perplexity : add BOS for each chunk
  * readme : update perplexity values after BOS fix
  * perplexity : add clarifying comments
2023-05-08  Documented CUDA reproducibility, added warning (#1346)  [Johannes Gäßler]
2023-05-06  Remove default arguments from sampling functions (#1343)  [Jed Fox]
2023-05-05  quantize: make output filename optional, default to ggml-model-<ftype>.bin (#1301)  [slaren]
2023-05-04  main : add --in-suffix option (#1318)  [44670]
  * adding --in-suffix option
  * print input suffix before generation
2023-05-04  Only escape prompts when used with `-e` (#1311)  [DannyDaemonic]
2023-05-04  Update main's README.md with new features (#1296)  [DannyDaemonic]
2023-05-04  fix #1224 reverse prompt and multi line (#1297)  [Tomas]
  * fix reverse prompt and multi line
  * Code Formatting
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-03  examples : read chat prompts from a template file (#1196)  [khimaros]
2023-05-03  examples : various prompt and example fixes (#1298)  [CRD716]
  * fix dan.txt
  * miku prompt improvements
  * use common characters
2023-05-02  Process escape sequences given in prompts (#1173)  [DannyDaemonic]
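  As a rough illustration of what escape processing means here: two-character
  sequences typed on the command line (e.g. "\n") are expanded into literal
  characters before the prompt is tokenized. A minimal sketch, assuming a
  hypothetical helper name and set of sequences (the actual patch may differ):

    #include <string>

    // Expand backslash escapes typed on the command line into literal
    // characters (hypothetical sketch, not the actual llama.cpp helper).
    static std::string process_escapes(const std::string & input) {
        std::string out;
        out.reserve(input.size());
        for (size_t i = 0; i < input.size(); ++i) {
            if (input[i] == '\\' && i + 1 < input.size()) {
                switch (input[++i]) {
                    case 'n':  out += '\n'; break;
                    case 't':  out += '\t'; break;
                    case '"':  out += '"';  break;
                    case '\\': out += '\\'; break;
                    default:   out += '\\'; out += input[i]; break; // keep unknown escapes as-is
                }
            } else {
                out += input[i];
            }
        }
        return out;
    }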
2023-05-02  Handle signals properly on Windows (#1123)  [DannyDaemonic]
2023-05-03  fix missing parameters in `llama_init_from_gpt_params` (#1293)  [slaren]
2023-05-02  examples : add llama_init_from_gpt_params() common function (#1290)  [Ron Evans]
  Signed-off-by: deadprogram <ron@hybridgroup.com>
2023-05-02  llama : fix compile warnings  [Georgi Gerganov]
2023-05-02  examples : improve vertical alignment of a few variables (#1286)  [Ron Evans]
  Signed-off-by: deadprogram <ron@hybridgroup.com>
2023-05-02  llama : allow 0 as a seed number (#1275)  [Robert Brisita]
2023-05-02  main : switch input_noecho to input_echo to remove negation (#979)  [Ron Evans]
  Signed-off-by: deadprogram <ron@hybridgroup.com>
2023-05-01  Add git-based build information for better issue tracking (#1232)  [DannyDaemonic]
  * Add git-based build information for better issue tracking
  * macOS fix
  * "build (hash)" and "CMAKE_SOURCE_DIR" changes
  * Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages
  * Fix conditional dependency on missing target
  * Broke out build-info.cmake, added find_package fallback, added build info to all examples, added dependencies to Makefile
  * 4 space indenting for cmake, attempt to clean up my mess in Makefile
  * Short hash, less fancy Makefile, and don't modify build-info.h if it wouldn't change it
2023-05-01  llama : fix session load / save (#1263)  [Georgi Gerganov]
2023-04-30  common : better default number of threads (#934)  [jon-chuang]
  * commit
  * fix
  * try-catch
  * apply code review
  * improve
  * improve
  * add macos headers
  * done
  * remove color
  * fix windows
  * minor
  * fix
  * Apply suggestions from code review
  * remove
  * minor
  * minor
  Co-authored-by: jon-chuang <jon-chuang@users.noreply.github.com>
  Co-authored-by: DannyDaemonic <DannyDaemonic@gmail.com>
2023-04-30  Various fixes to mat_mul benchmark (#1253)  [Stephan Walter]
2023-04-29  build : fix reference to old llama_util.h  [Georgi Gerganov]
2023-04-29  examples : fix save-load-state + rename llama-util.h  [Georgi Gerganov]
2023-04-29  common : change default parameters to pre-#1126 (#1223)  [Georgi Gerganov]
2023-04-29  llama : new sampling algorithms (#1126)  [Ivan Stepanov]
  * Sample interface, new samplers:
    - locally typical sampling
    - tail free sampling
    - frequency and presence penalty
    - mirostat
    Ignore EOS fix: -inf should be used.
  * mirostat
  * Added --logit-bias and --no-penalize-nl, removed std::span
  * Use C++11, clarify llama API documentation, rename Mirostat parameters to --mirostat_lr and --mirostat_ent, add temperature sampling for Mirostat, simplify Mirostat sampling API parameters (removed N and *k)
  * Save and load example adjust
  * Tests
  * Windows build fix
  * Windows test fix
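  As a rough sketch of one of the new penalties: a frequency/presence penalty
  lowers each candidate token's logit according to how often the token has
  already appeared. The types and names below are illustrative assumptions,
  not the actual llama.cpp sampling API:

    #include <unordered_map>
    #include <vector>

    // Subtract a per-occurrence (frequency) and a flat (presence) penalty
    // from the logits of tokens that already appeared in the output.
    void apply_repetition_penalties(
            std::vector<float> & logits,                 // one logit per vocab id
            const std::unordered_map<int, int> & counts, // token id -> occurrences so far
            float alpha_frequency, float alpha_presence) {
        for (const auto & kv : counts) {
            logits[kv.first] -= alpha_frequency * kv.second
                              + alpha_presence  * (kv.second > 0 ? 1.0f : 0.0f);
        }
    }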
2023-04-28  Remove Q4_3 which is no better than Q5 (#1218)  [Stephan Walter]
2023-04-28  examples : add Jeopardy example (#1168)  [CRD716]
  * Basic Setup
  * Prevent Results.txt from coming up
  * Prefixes, Line separators, etc
  * editorcheck
  * introduction to give more consistent results
  * Basic graph thing
  * Grading, ready for testing!
  * Y'all ready to get funky?
  * fix column removal stuff
  * missed a few
2023-04-28  llama : add session file format and saved sessions in main (#1169)  [Evan Jones]
2023-04-26  ggml : add Q5_0 and Q5_1 quantization (#1187)  [Georgi Gerganov]
  * ggml : add Q5_0 quantization (cuBLAS only)
  * ggml : fix Q5_0 qh -> uint32_t
  * ggml : fix q5_0 histogram stats
  * ggml : q5_0 scalar dot product
  * ggml : q5_0 ARM NEON dot
  * ggml : q5_0 more efficient ARM NEON using uint64_t masks
  * ggml : rename Q5_0 -> Q5_1
  * ggml : adding Q5_0 mode
  * quantize : add Q5_0 and Q5_1 to map
  * ggml : AVX2 optimizations for Q5_0, Q5_1 (#1195)
  Co-authored-by: Stephan Walter <stephan@walter.name>
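  For context on the "qh -> uint32_t" items: Q5_0 packs the low four bits of
  each of the 32 weights in a block into nibbles and collects the 32 fifth
  bits in one 32-bit mask. A simplified dequantization sketch; the real ggml
  layout differs in details (e.g. the scale is stored as fp16):

    #include <cstdint>

    struct block_q5 {
        float    d;      // scale (ggml stores this as fp16)
        uint32_t qh;     // the 32 "fifth" bits, one per weight
        uint8_t  qs[16]; // low 4 bits of the 32 weights, two per byte
    };

    // Dequantize one block into 32 floats.
    void dequantize_q5(const block_q5 & b, float * y) {
        for (int i = 0; i < 16; ++i) {
            const uint8_t xh_0 = ((b.qh >> (i +  0)) << 4) & 0x10; // fifth bit of weight i
            const uint8_t xh_1 = ((b.qh >> (i + 16)) << 4) & 0x10; // fifth bit of weight i+16
            const int x0 = ((b.qs[i] & 0x0F) | xh_0) - 16;         // centered 5-bit value
            const int x1 = ((b.qs[i] >>   4) | xh_1) - 16;
            y[i]      = x0 * b.d;
            y[i + 16] = x1 * b.d;
        }
    }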
2023-04-26  quantize : use `map` to assign quantization type from `string` (#1191)  [Pavol Rusnak]
  Uses the type name instead of an `int` (the `int` option is still supported). This allows the following usage: `./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0` instead of: `./quantize ggml-model-f16.bin ggml-model-q4_0.bin 2`
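  The mapping itself is a plain string-to-enum lookup with an integer
  fallback. A sketch, with illustrative enum values and names rather than
  the exact llama.cpp constants:

    #include <map>
    #include <string>

    enum llama_ftype { FTYPE_Q4_0 = 2, FTYPE_Q4_1 = 3 }; // illustrative values

    static const std::map<std::string, llama_ftype> FTYPE_MAP = {
        { "q4_0", FTYPE_Q4_0 },
        { "q4_1", FTYPE_Q4_1 },
    };

    // Accept either a type name ("q4_0") or the legacy integer ("2").
    bool parse_ftype(const std::string & arg, llama_ftype & out) {
        const auto it = FTYPE_MAP.find(arg);
        if (it != FTYPE_MAP.end()) { out = it->second; return true; }
        try { out = (llama_ftype) std::stoi(arg); return true; }
        catch (...) { return false; }
    }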
2023-04-25  ggml : add Q8_0 quantization format (rename the old one to Q8_1) (ARM NEON) (#1179)  [Georgi Gerganov]
  * ggml : add Q8_0 quantization format (rename the old one to Q8_1)
  * tests : fix test-quantize-fns
  * ggml : finalize Q8_0 implementation
  * ggml : use q4_0_q8_0 and q4_2_q8_0
  * ggml : fix Q8_0 dot product bug (ARM)
  * ggml : Q8_0 unroll x2
  * ggml : fix bug - using wrong block type
  * ggml : extend quantize_fns_t with "vec_dot_type"
  * ggml : fix Q8_0 to use 255 values out of 256
  * ggml : fix assert using wrong QK4_2 instead of QK4_3
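  The "255 values out of 256" fix corresponds to scaling by the absolute
  maximum over 127, so quantized values land in [-127, 127] and the int8
  range stays symmetric. A rough sketch (the real ggml code stores the
  scale as fp16 and is vectorized):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    struct block_q8 {
        float  d;      // scale
        int8_t qs[32]; // quantized values
    };

    void quantize_q8(const float * x, block_q8 & b) {
        float amax = 0.0f; // absolute max over the block
        for (int i = 0; i < 32; ++i) {
            amax = std::max(amax, std::fabs(x[i]));
        }
        b.d = amax / 127.0f; // dividing by 127, not 128, avoids the -128 code
        const float id = b.d != 0.0f ? 1.0f / b.d : 0.0f;
        for (int i = 0; i < 32; ++i) {
            b.qs[i] = (int8_t) std::round(x[i] * id);
        }
    }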
2023-04-24  examples : add save_load_state example (#1150)  [xaedes]
  * add save_load_state example
  * use <cstdio> instead of <iostream> and fprintf / printf instead of cout
  * renamed save-load-state example files replacing underscores by dashes
2023-04-24  examples/main README improvements and some light refactoring (#1131)  [mgroeber9110]
2023-04-23  Fix LoRA acronym (#1145)  [slaren]
2023-04-23  Added README.md for main with examples and explanations (#1139)  [DannyDaemonic]
2023-04-22  Fix CI: ARM NEON, quantization unit tests, editorconfig (#1122)  [Stephan Walter]
2023-04-22  llama : print timings on ctrl+c exit (#1021)  [wbpxre150]
  * print timings on ctrl+c exit
  * remove redundant free memory call
  * add global pointer to ctx
2023-04-22  llama : have n_batch default to 512 (#1091)  [eiery]
  * set default n_batch to 512 when using BLAS
  * spacing
  * alternate implementation of setting different n_batch for BLAS
  * set n_batch to 512 for all cases
2023-04-22  examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107)  [Clint Herron]
  * Moving parameters to separate lines for readability
  * Increasing repeat_penalty to 1.1 to make alpaca more usable by default
  * Adding trailing newline
2023-04-21  main : evaluate tokens in batches after swapping context (#1014)  [Alex Klinkhamer]
  * examples : evaluate tokens in batches after swapping context
  * Update examples/main/main.cpp
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
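  The batching idea can be sketched as a helper that feeds pending tokens to
  the model n_batch at a time instead of one by one. llama_eval was the
  evaluation entry point of this era; treat the exact signature below as an
  assumption:

    #include <algorithm>
    #include <cstdio>
    #include <vector>
    #include "llama.h"

    static bool eval_tokens(llama_context * ctx, const std::vector<llama_token> & embd,
                            int & n_past, int n_batch, int n_threads) {
        for (int i = 0; i < (int) embd.size(); i += n_batch) {
            const int n_eval = std::min((int) embd.size() - i, n_batch);
            if (llama_eval(ctx, embd.data() + i, n_eval, n_past, n_threads) != 0) {
                fprintf(stderr, "%s: failed to eval\n", __func__);
                return false;
            }
            n_past += n_eval; // the KV cache now covers these tokens
        }
        return true;
    }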
2023-04-21  Show perplexity ETA in hours and minutes (#1096)  [slaren]
2023-04-20  llama : multi-threaded quantization (#1075)  [Kawrakow]
  * Multi-threading quantization. Not much gain for simple quantizations, but it will be important for quantizations that require more CPU cycles.
  * Multi-threading for quantize-stats. It now does the job in ~14 seconds on my Mac for Q4_0, Q4_1 and Q4_2. Single-threaded it was taking more than 2 minutes after adding the more elaborate version of Q4_2.
  * Reviewer comments
  * Avoiding compiler confusion. After changing chunk_size to const int as suggested by @ggerganov, clang and GCC started warning that it need not be captured in the lambda, so it was removed from the capture list. But that breaks the MSVC build, so it is now a constexpr, which makes every compiler happy.
  * Still fighting with lambda captures in MSVC
  Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
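  The lambda-capture note above is a portability corner worth a minimal
  illustration (the function and variable names are illustrative, not the
  actual quantization code):

    #include <thread>
    #include <vector>

    void quantize_in_parallel(int nrows, int nthread) {
        // A constexpr is a compile-time constant everywhere, so it needs no
        // capture at all: clang/GCC warn about capturing a const int they
        // can fold, while MSVC fails to build without the capture.
        constexpr int chunk_size = 32; // rows per work unit
        std::vector<std::thread> workers;
        for (int t = 0; t < nthread; ++t) {
            workers.emplace_back([t, nthread, nrows] { // chunk_size not captured
                for (int row = t * chunk_size; row < nrows; row += nthread * chunk_size) {
                    // ... quantize rows [row, min(row + chunk_size, nrows)) ...
                }
            });
        }
        for (std::thread & w : workers) {
            w.join();
        }
    }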
2023-04-20  ggml : add Q4_3 quantization (#1082)  [Georgi Gerganov]
2023-04-18  ggml : add new Q4_2 quantization (ARM only) (#1046)  [Georgi Gerganov]
  * ggml : Q4_2 ARM
  * ggml : add ggml_is_quantized()
  * llama : update llama_type_name() with Q4_2 entry
  * ggml : speed-up q4_2
    - 4 threads: ~100ms -> ~90ms
    - 8 threads: ~55ms -> ~50ms
  * ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
2023-04-17  Add LoRA support (#820)  [slaren]