path: root/examples
2023-08-02  tests : Fix compilation warnings (Linux/GCC) (#2451)  -- Eve
  * fix hellaswag print format, cast away warning in test-double-float
  * c++11 cannot use designated initializers
  * add static to test-grad0.c internal functions
  * use memcpy in test-double-float.c
  * port c tests to c++
  * use initializer list for ggml_init_params
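To illustrate the designated-initializer point above: C++11/14 does not allow C99-style designated initializers, so struct fields have to be listed positionally. A small sketch, assuming the usual ggml_init_params fields (mem_size, mem_buffer, no_alloc):

```cpp
#include "ggml.h"

// C++11/14 cannot use C99-style designated initializers such as
//   ggml_init_params params = { .mem_size = ..., .no_alloc = false };
// so the fields are given positionally, with comments naming them.
struct ggml_init_params make_params(size_t mem_size) {
    struct ggml_init_params params = {
        /* .mem_size   = */ mem_size,
        /* .mem_buffer = */ nullptr,   // let ggml allocate the buffer itself
        /* .no_alloc   = */ false,     // allocate tensor data inside the context
    };
    return params;
}
```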
2023-08-01  fix a typo in examples/server/README.md (#2478)  -- Bono Lv
2023-08-01  server : Support dark mode (#2414)  -- ebraminio
  * server : Support dark mode, so it respects the user's system light/dark settings
  * Update index.html.hpp by running ./deps.sh
2023-07-31  CUDA: mmq CLI option, fixed mmq build issues (#2453)  -- Johannes Gäßler
2023-07-28  perplexity : add Hellaswag calculation (#2389)  -- klosax
  * common.h : add hellaswag / remove perplexity-lines
  * common.cpp : add hellaswag / remove perplexity-lines
  * perplexity.cpp : add hellaswag scores / remove perplexity-lines
  * perplexity.cpp : clean up
  * common.h : change default param value
  * common.cpp : change default param
  * perplexity.cpp : alter wording
  * common.h : alter wording
  * common.cpp : alter wording
2023-07-28  examples : fix whitespace  -- Georgi Gerganov
2023-07-28  examples : server chat mode with llama2 (#2400)  -- nhamanasu
  * add: server chat mode with llama2
  * fix: remove the unnecessary last \n
2023-07-28  readme : fix the description of the Tail free sampling (TFS) method (#2431)  -- Weird Constructor
2023-07-28  llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)  -- Rand Xie
2023-07-25  Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)  -- Kawrakow
  Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-25  main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)  -- Xiao-Yong Jin
  * add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS. The BOS precedes the string specified by `--in-prefix`, and model-generated EOS is now kept in the context. This provides a way to strictly follow the prompt format used in Llama-2-chat. The EOS handling also benefits some existing finetunes that use EOS to mark the end of a turn.
  * examples/common: move input_prefix_bos to the other bools
2023-07-25  server: add rms_norm_eps parameter (#2380)  -- slaren
2023-07-25  [Server] Escape HTML in webchat (#2368)  -- Henri Vasserman
  * escape HTML in webchat
  * add amp
2023-07-24  make rms_norm_eps a parameter (#2374)  -- slaren
  * make rms_norm_eps a parameter
  * add rms_norm_eps to command line
  * fix baby llama, test-grad0
  * use scientific notation for eps param in the help
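For context on the rms_norm_eps entries above: the parameter is the small epsilon added inside the RMSNorm denominator. A minimal sketch of the computation (illustrative only, not the actual ggml kernel):

```cpp
#include <cmath>
#include <vector>

// Minimal RMSNorm sketch: y_i = x_i * w_i / sqrt(mean(x^2) + eps).
// eps is the value exposed as rms_norm_eps; it guards against division
// by ~0, and its exact value measurably affects LLaMA-2 output quality.
std::vector<float> rms_norm(const std::vector<float> & x,
                            const std::vector<float> & w,
                            float eps) {
    double mean_sq = 0.0;
    for (float v : x) {
        mean_sq += (double) v * v;
    }
    mean_sq /= x.size();

    const float scale = 1.0f / std::sqrt((float) mean_sq + eps);

    std::vector<float> y(x.size());
    for (size_t i = 0; i < x.size(); ++i) {
        y[i] = x[i] * scale * w[i];
    }
    return y;
}
```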
2023-07-24  Chat UI extras (#2366)  -- Aarni Koskela
  * makefile: correct deps for server
  * server: tighten settings layout a little
  * server: expose all currently configured generation params in UI
  * server: expose remaining generation params, for the adventurous
  * server: embetter mirostat fields
2023-07-23  llama : add grammar-based sampling (#1773)  -- Evan Jones
  * llama, main : constrain sampling to grammar
  * allow loading grammar from file
  * fix whitespace errors
  * handle & print parser errors
  * add comments to grammar syntax and allow newlines where unambiguous
  * add missing include
  * support alternates in root rule
  * fix bugs with empty token and EOS
  * adjust JSON grammar
  * remove swp file
  * rewrite ternary expressions
  * use struct for grammar elements and add Unicode support
  * add unicode escapes
  * add inverse char ranges
  * only sample full tokens (no peeking or truncation)
  * llama : minor style changes blindly applied in online editor - hopefully I didn't break something
  * update help text
  * add warning message if EOS is disabled
  Co-authored-by: Henri Vasserman <henv@hot.ee>
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
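As a concrete illustration of what grammar-based sampling constrains, here is a tiny grammar in the GBNF-style syntax this change introduces; the grammar itself is an invented example (showing alternates in the root rule), not one shipped with the PR:

```
# Constrain generation so the model may only answer "yes" or "no".
root ::= "yes" | "no"
```

Loaded from a file (per the bullet above), such a grammar restricts sampling to tokens that can continue a valid parse of the rules.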
2023-07-23  Add gqa parameter support to the server (#2351)  -- IgnacioFDM
  * Add gqa parameter support to the server
  * Change help from stderr to stdout
2023-07-23  common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347)  -- wzy
  * Fix #2345, fix incorrect n_threads
  * Update examples/common.cpp
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
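A sketch of the n_threads == -1 behaviour described above (the helper name and fallback value are illustrative, not the exact code in examples/common.cpp):

```cpp
#include <cstdint>
#include <thread>

// If the user passes -1 for the thread count, fall back to the number of
// hardware threads reported by the standard library. hardware_concurrency()
// may return 0 when it cannot be determined, so keep a safe minimum.
int32_t resolve_n_threads(int32_t n_threads) {
    if (n_threads <= 0) {
        const unsigned hw = std::thread::hardware_concurrency();
        n_threads = hw > 0 ? (int32_t) hw : 4;  // 4 is an arbitrary fallback
    }
    return n_threads;
}
```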
2023-07-23  llama : grouped-query attention + LLaMAv2 70B support (#2276)  -- Georgi Gerganov
  * CUDA: GQA implementation
  * llama : support for GQA and LLaMAv2 70B
  * py : fix hparams parsing (if-else blocks)
  * py : oh boy ..
  * help : fix gqa value for 70B
  Co-authored-by: JohannesGaessler <johannesg@5d6.de>
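For readers skimming the log: grouped-query attention shares each key/value head across several query heads, which shrinks the K/V projections and the KV cache. A hedged sketch of the bookkeeping (names other than n_embd_gqa, which appears in an entry above, are illustrative):

```cpp
#include <cassert>
#include <cstdio>

// Grouped-query attention: n_head query heads share n_head_kv key/value heads.
// The K/V projections therefore produce n_embd_gqa = n_embd / gqa values per
// token instead of n_embd, where gqa = n_head / n_head_kv.
struct gqa_dims {
    int n_embd_head;  // per-head dimension
    int n_embd_gqa;   // width of the K/V projections (and of KV cache rows)
    int gqa;          // how many query heads map onto one K/V head
};

gqa_dims compute_gqa_dims(int n_embd, int n_head, int n_head_kv) {
    assert(n_head % n_head_kv == 0);
    gqa_dims d;
    d.gqa         = n_head / n_head_kv;
    d.n_embd_head = n_embd / n_head;
    d.n_embd_gqa  = d.n_embd_head * n_head_kv;  // == n_embd / gqa
    return d;
}

int main() {
    // LLaMA-2 70B: n_embd = 8192, 64 query heads, 8 KV heads -> n_embd_gqa = 1024.
    const gqa_dims d = compute_gqa_dims(8192, 64, 8);
    printf("gqa = %d, n_embd_gqa = %d\n", d.gqa, d.n_embd_gqa);
    return 0;
}
```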
2023-07-23  llama : print help to stdout (#2338)  -- maddes8cht
2023-07-23  examples : simplify vim plugin (#2327)  -- AustinMroz
  Uses builtin json_encode and json_decode functions to simplify escaping.
  Removes the need for temp files.
2023-07-22  llama : optimize memory buffers (#2325)  -- Georgi Gerganov
2023-07-22  Perplexity: Compute scores correlated to HellaSwag (#2312)  -- klosax
  * Add parameter --perplexity-lines to perplexity.cpp
2023-07-22  examples : basic VIM plugin  -- whoreson
  VIM plugin for server exe
2023-07-21  examples : add easy python script to create quantized (k-bit support) GGML models from local HF Transformer models (#2311)  -- Richard Roberson
  * Resync my fork with new llama.cpp commits
  * examples : rename to use dash instead of underscore
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-21  examples : fix typo in minigpt4.py (#2298)  -- Ikko Eltociear Ashimine
  promt -> prompt
2023-07-21  ggml : fix rope args order + assert (#2054)  -- Georgi Gerganov
2023-07-21  llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280)  -- Guillaume "Vermeille" Sanchez
2023-07-21  gitignore : changes for Poetry users + chat examples (#2284)  -- Jose Maldonado
  Also a fix in the Makefile for FreeBSD users: on that platform x86_64 is named amd64, and the fix resolves compilation using CFLAGS and CXXFLAGS with -march=native and -mtune=native.
  Adds two examples for interactive mode using Llama2 models (thanks to TheBloke for the models).
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-21  llama : make tensor_split ptr instead of array (#2272)  -- Georgi Gerganov
2023-07-21  MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287)  -- Hatsune Miku
  * Miku.sh: Set default model to llama-2-7b-chat
  * Miku.sh: Set ctx_size to 4096
  * Miku.sh: Add in-prefix/in-suffix opts
  * Miku.sh: Switch sampler to mirostat_v2 and tiny prompt improvements
2023-07-21  make : fix embdinput library and server examples building on MSYS2 (#2235)  -- Przemysław Pawełczyk
  * make : fix embdinput library and server examples building on MSYS2
  * cmake : fix server example building on MSYS2
2023-07-19  cmake : install targets (#2256)  -- wzy
  fix #2252
2023-07-18  ci : integrate with ggml-org/ci (#2250)  -- Georgi Gerganov
  * ci : run ctest
  * ci : add open llama 3B-v2 tests
  * ci : disable wget progress output
  * ci : add open llama 3B-v2 tg tests for q4 and q5 quantizations
  * tests : try to fix tail free sampling test
  * ci : add K-quants
  * ci : add short perplexity tests
  * ci : add README.md
  * ppl : add --chunks argument to limit max number of chunks
  * ci : update README
2023-07-18  llama : shorten quantization descriptions  -- Georgi Gerganov
2023-07-15  llama : add custom RoPE (#2054)  -- Xiao-Yong Jin
  * Implement customizable RoPE.
    The original RoPE has pre-defined parameters theta_i = 10000^(-2(i-1)/d), for i in [1, 2, ..., d/2].
    Our customizable RoPE, ggml_rope_custom_inplace, uses theta_i = scale * base^(-2(i-1)/d), for i in [1, 2, ..., d/2],
    with defaults that match the original: scale = 1.0, base = 10000.
    The new command line arguments --rope-freq-base and --rope-freq-scale set the two new RoPE parameters.
    Recent research shows that changing these two parameters extends the context limit with minimal loss:
    1. Extending Context to 8K, kaiokendev, https://kaiokendev.github.io/til#extending-context-to-8k
    2. Extending Context Window of Large Language Models via Positional Interpolation, Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian, https://arxiv.org/abs/2306.15595
    3. NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation, https://www.reddit.com/user/bloc97, https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/
    For the bold, try adding the following command line parameters to your favorite model: -c 16384 --rope-freq-base 80000 --rope-freq-scale 0.5
  * ggml-metal: fix custom rope
  * common: fix argument names in help
  * llama: increase MEM_REQ_EVAL for MODEL_3B. It avoids crashing for quantized weights on CPU; a better way to calculate the required buffer size would be preferable.
  * llama: make MEM_REQ_EVAL depend on n_ctx
  * server: use proper Content-Type in curl examples. Without the header Content-Type: application/json, curl will POST with Content-Type: application/x-www-form-urlencoded. Though our simple server doesn't care, the bundled httplib.h has a limit of CPPHTTPLIB_FORM_URL_ENCODED_PAYLOAD_MAX_LENGTH = 8192, so with Content-Type: application/json we can send large JSON data.
  * style : minor fixes, mostly indentations
  * ggml : fix asserts
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
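To make the theta formula above concrete, here is a minimal sketch of the frequency computation; it only implements the formula as described in the commit message, with illustrative variable names, and is not the ggml_rope_custom_inplace kernel itself:

```cpp
#include <cmath>
#include <vector>

// theta_i = freq_scale * freq_base^(-2(i-1)/d), i = 1..d/2, as described above.
// freq_scale = 1.0 and freq_base = 10000 reproduce the original RoPE.
// The angle actually applied to a pair of dimensions at position p is p * theta_i.
std::vector<float> rope_thetas(int n_dims, float freq_base, float freq_scale) {
    std::vector<float> theta(n_dims / 2);
    for (int i = 0; i < n_dims / 2; ++i) {
        theta[i] = freq_scale * std::pow(freq_base, -2.0f * i / n_dims);
    }
    return theta;
}
```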
2023-07-14  examples : fixed path typos in embd-input (#2214)  -- Shangning Xu
2023-07-13  Revert "Support using mmap when applying LoRA (#2095)" (#2206)  -- Howard Su
  Causes a perf regression when mlock is used. This reverts commit 2347463201a9f4159ae95b737e1544dd300569c8.
2023-07-11  ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)  -- Spencer Sutton
  * Add ggml changes
  * Update train-text-from-scratch for change
  * mpi : adapt to new ggml_tensor->src
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-11  llama : add classifier-free guidance (#2135)  -- Bach Le
  * Initial implementation
  * Remove debug print
  * Restore signature of llama_init_from_gpt_params
  * Free guidance context
  * Make freeing of guidance_ctx conditional
  * Make Classifier-Free Guidance a sampling function
  * Correct typo (CFG already means context-free grammar)
  * Record sampling time in llama_sample_classifier_free_guidance
  * Shift all values by the max value before applying logsoftmax
  * Fix styling based on review
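A rough sketch of the classifier-free guidance blend described by these bullets (log-softmax with a max shift, then interpolation by the guidance scale); this mirrors the idea, not the exact llama_sample_classifier_free_guidance implementation:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Turn raw logits into log-probabilities, shifting by the max first for
// numerical stability (the "shift by max before logsoftmax" bullet above).
static void log_softmax(std::vector<float> & logits) {
    const float max_l = *std::max_element(logits.begin(), logits.end());
    double sum = 0.0;
    for (float l : logits) sum += std::exp(l - max_l);
    const float log_sum = (float) std::log(sum);
    for (float & l : logits) l = l - max_l - log_sum;
}

// Classifier-free guidance: push the conditional distribution away from the
// guidance (negative-prompt) distribution by a factor of guidance_scale.
// guidance_scale == 1.0 leaves the original distribution unchanged.
std::vector<float> apply_cfg(std::vector<float> logits,
                             std::vector<float> guidance_logits,
                             float guidance_scale) {
    log_softmax(logits);
    log_softmax(guidance_logits);
    for (size_t i = 0; i < logits.size(); ++i) {
        logits[i] = guidance_scale * (logits[i] - guidance_logits[i]) + guidance_logits[i];
    }
    return logits;
}
```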
2023-07-11  Support using mmap when applying LoRA (#2095)  -- Howard Su
  * Support using mmap when applying LoRA
  * Fix Linux
  * Update comment to reflect that LoRA is supported with mmap
2023-07-10  mpi : add support for distributed inference via MPI (#2099)  -- Evan Miller
  * MPI support, first cut
  * fix warnings, update README
  * fixes
  * wrap includes
  * PR comments
  * Update CMakeLists.txt
  * Add GH workflow, fix test
  * Add info to README
  * mpi : trying to move more MPI stuff into ggml-mpi (WIP) (#2099)
  * mpi : add names for layer inputs + prep ggml_mpi_graph_compute()
  * mpi : move all MPI logic into ggml-mpi (not tested yet)
  * mpi : various fixes - communication now works but results are wrong
  * mpi : fix output tensor after MPI compute (still not working)
  * mpi : fix inference
  * mpi : minor
  * Add OpenMPI to GH action
  * [mpi] continue-on-error: true
  * mpi : fix after master merge
  * [mpi] Link MPI C++ libraries to fix OpenMPI
  * tests : fix new llama_backend API
  * [mpi] use MPI_INT32_T
  * mpi : factor out recv / send in functions and reuse
  * mpi : extend API to allow usage with outer backends (e.g. Metal)
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-09  main : escape prompt prefix/suffix (#2151)  -- Nigel Bosch
2023-07-07  ggml : change ggml_graph_compute() API to not require context (#1999)  -- Qingyou Meng
  * ggml_graph_compute: deprecate using ggml_context, try to resolve issue #287
  * rewrite: no longer consider backward compatibility; plan and make_plan
  * minor: rename ctx as plan; const
  * remove ggml_graph_compute from tests/test-grad0.c, but the current change breaks backward
  * add static ggml_graph_compute_sugar()
  * minor: update comments
  * reusable buffers
  * ggml : more consistent naming + metal fixes
  * ggml : fix docs
  * tests : disable grad / opt + minor naming changes
  * ggml : add ggml_graph_compute_with_ctx(), a backwards-compatible API that deduplicates a lot of copy-paste
  * ci : enable test-grad0
  * examples : factor out plan allocation into a helper function
  * llama : factor out plan stuff into a helper function
  * ci : fix env
  * llama : fix duplicate symbols + refactor example benchmark
  * ggml : remove obsolete assert + refactor n_tasks section
  * ggml : fix indentation in switch
  * llama : avoid unnecessary bool
  * ggml : remove comments from source file and match order in header
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
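A hedged sketch of the plan-based calling pattern this change introduces. The bullets above only mention "plan" and ggml_graph_compute_with_ctx(); the ggml_graph_plan()/ggml_cplan names and field layout below are assumptions about the final API shape, so check ggml.h at the matching revision:

```cpp
#include <cstdint>
#include <vector>
#include "ggml.h"

// Sketch of the two-step compute API: ask ggml for a plan, give it a
// caller-owned work buffer of plan.work_size bytes, then run the graph.
void compute_graph(struct ggml_cgraph * graph, int n_threads) {
    struct ggml_cplan plan = ggml_graph_plan(graph, n_threads);

    std::vector<uint8_t> work_buffer;
    if (plan.work_size > 0) {
        work_buffer.resize(plan.work_size);
        plan.work_data = work_buffer.data();
    }

    ggml_graph_compute(graph, &plan);
}
```

Callers that still hold a ggml_context can instead use the backwards-compatible ggml_graph_compute_with_ctx() mentioned in the bullets above.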
2023-07-06  convert : update for baichuan (#2081)  -- Judd
  1. guess n_layers
  2. relax warnings on context size
  3. add a note that models derived from Baichuan are also supported
  Co-authored-by: Judd <foldl@boxvest.com>
2023-07-06  alpaca.sh : update model file name (#2074)  -- tslmy
  The original file name, `ggml-alpaca-7b-q4.bin`, implied the first-generation GGML format. After the breaking changes (mentioned in https://github.com/ggerganov/llama.cpp/issues/382), `llama.cpp` now requires GGML V3, and those model files are named `*ggmlv3*.bin`. We should change the example to an actually working model file, so that the script is more likely to run out of the box for more people, and fewer people waste time downloading the old Alpaca model.
2023-07-05  Expose generation timings from server & update completions.js (#2116)  -- Tobias Lütke
  * use JavaScript generators as a much cleaner API; also add ways to access the completion as a Promise and via EventSource
  * export llama_timings as a struct and expose them in the server
  * update readme, update baked includes
  * llama : uniform variable names + struct init
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-05  Update Server Instructions (#2113)  -- Jesse Jojo Johnson
  * Update server instructions for web front end
  * Update server README
  * Remove duplicate OAI instructions
  * Fix duplicate text
  Co-authored-by: Jesse Johnson <thatguy@jessejojojohnson.com>
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)  -- Stephan Walter
  * Generalize quantize_fns for simpler FP16 handling
  * Remove call to ggml_cuda_mul_mat_get_wsize
  * ci : disable FMA for mac os actions
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-05  Update server instructions for web front end (#2103)  -- Jesse Jojo Johnson
  Co-authored-by: Jesse Johnson <thatguy@jessejojojohnson.com>