2023-08-07  metal : fix out-of-bounds access + inc concurrency nodes (#2416)  [Georgi Gerganov]
  * metal : fix out-of-bounds access + style changes
  * metal : increase concurrency nodes to 2*GGML_MAX_NODES
2023-08-07  [Makefile] Move ARM CFLAGS before compilation (#2536)  [GiviMAD]
2023-08-07  [Zig] Rewrite build for Zig 0.11 (#2514)  [Henri Vasserman]
  * zig build fixes
  * Disable LTO on Windows.
2023-08-06  console : fix issue related to Windows 11 PowerShell console mode persistence (#2521)  [DannyDaemonic]
2023-08-06  convert.py : add missing abstract methods for quantized data (#2491)  [Keiichi Tabata]
2023-08-05  CUDA: faster k-quant mul_mat_q kernels (#2525)  [Johannes Gäßler]
2023-08-04  fix firefox autoscroll (#2519)  [Jonas Wunderlich]
2023-08-04  server : regenerate completion.js.hpp (#2515)  [Cebtenzzre]
2023-08-04  CUDA: use min compute capability of GPUs actually used (#2506)  [Cebtenzzre]
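A sketch of the idea behind #2506: take the minimum compute capability over only the devices that actually hold tensors, so kernel selection is not pessimized by an idle, older GPU in the same machine. The `device_used` bookkeeping and the cc encoding here are illustrative assumptions, not the repository's actual variables.

```cpp
#include <algorithm>
#include <climits>
#include <cuda_runtime.h>

// Hypothetical helper: minimum compute capability across the GPUs in use.
int min_compute_capability(const bool * device_used, int device_count) {
    int min_cc = INT_MAX;
    for (int id = 0; id < device_count; ++id) {
        if (!device_used[id]) {
            continue; // skip devices this run never touches
        }
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, id);
        min_cc = std::min(min_cc, prop.major * 100 + prop.minor * 10);
    }
    return min_cc; // e.g. 610 for sm_61, 700 for sm_70
}
```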
2023-08-04  CUDA: check if event is NULL before cudaStreamWaitEvent (#2505)  [Cebtenzzre]
  Fixes #2503
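The pattern behind this fix is a simple guard: `cudaStreamWaitEvent` must not be handed an event handle that was never created. A minimal sketch (the helper name is illustrative):

```cpp
#include <cuda_runtime.h>

// Illustrative guard: skip the wait when the event handle is still NULL.
void wait_if_recorded(cudaStream_t stream, cudaEvent_t event) {
    if (event != nullptr) {
        cudaStreamWaitEvent(stream, event, 0); // 0 = default flags
    }
}
```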
2023-08-04  Add --simple-io option for subprocesses and break out console.h and cpp (#1558)  [DannyDaemonic]
2023-08-04  Fixing race condition in server and partial stream handling in frontend. (#2391)  [Stephen Nichols]
  * Fixing race condition in server.cpp and partial stream handling in completion.js
  * Reverting assert edits.
  * Adding newline to eof
2023-08-04  Stream save llama context data to file instead of allocating entire buffer upfront (#2488)  [l3utterfly]
  * added stream saving context data to file to avoid allocating unnecessary amounts of memory
  * generalised copying state data to file or buffer
  * added comments explaining how copy_state_data works
  * fixed trailing whitespaces
  * fixed save load state example
  * updated save load state to use public function in llama.cpp
  * restored breakage of the llama_copy_state_data API; moved new logic for copying llama state data to an internal function
  * fixed function declaration order
  * restored save load state example
  * fixed whitespace
  * removed unused llama-util.h include
  * Apply suggestions from code review
  * Apply code review suggestions
  Co-authored-by: slaren <slarengh@gmail.com>
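The idea in #2488, writing state incrementally instead of materializing one full-size buffer, can be sketched with a generic sink that either appends to a file or copies into memory. The names (`state_sink`, `write_state`) are hypothetical, not the actual llama.cpp internals:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Hypothetical sink: the same serialization code can stream to a file or
// fill a caller-provided buffer, so no intermediate full-size buffer is needed.
struct state_sink {
    FILE    * file = nullptr; // used when streaming to disk
    uint8_t * buf  = nullptr; // used when copying to memory
    size_t    off  = 0;

    void write(const void * src, size_t size) {
        if (file) {
            fwrite(src, 1, size, file);
        } else {
            memcpy(buf + off, src, size);
        }
        off += size;
    }
};

// The serializer is written once against the sink; callers pick the target.
void write_state(state_sink & sink, const void * kv_data, size_t kv_size) {
    sink.write(&kv_size, sizeof(kv_size)); // length prefix
    sink.write(kv_data, kv_size);          // payload, streamed chunk by chunk
}
```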
2023-08-04  build : fix several cast and printf warnings (#2499)  [Borislav Stanimirov]
2023-08-02  examples : generate JSON according to schema (#1887)  [Evan Jones]
  * examples : add JSON schema grammars
  * complete JSON grammar
  * ensure primitive types can be used as root of schema
  * support integer type and adjust usage text
2023-08-02  CUDA: faster non k-quant mul_mat_q kernels (#2483)  [Johannes Gäßler]
2023-08-02  CUDA: Fix models with output size != 32000 (#2480)  [Johannes Gäßler]
2023-08-02  readme : add Aquila-7B model series to supported models (#2487)  [ldwang]
  * support bpe tokenizer in convert
  * support bpe tokenizer in convert, fix
  * Add Aquila-7B models in README.md
  * Update Aquila-7B models in README.md
  Signed-off-by: ldwang <ftgreat@gmail.com>
  Co-authored-by: ldwang <ftgreat@gmail.com>
2023-08-02  tests : Fix compilation warnings (Linux/GCC) (#2451)  [Eve]
  * fix hellaswag print format, cast away warning in test-double-float
  * c++11 cannot use designated initializers
  * add static to test-grad0.c internal functions
  * use memcpy in test-double-float.c
  * port c tests to c++
  * use initializer list for ggml_init_params
2023-08-02  readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models (#2475)  [Yiming Cui]
  * add support for chinese llama-2 / alpaca-2
  * remove white spaces
2023-08-01  fix a typo in examples/server/README.md (#2478)  [Bono Lv]
2023-08-01  server : Support dark mode (#2414)  [ebraminio]
  * server : Support dark mode, so it respects the user's system light / dark settings
  * Update index.html.hpp by running ./deps.sh
2023-08-01  metal : add gqa8 kernel to allow llama-2-70B on metal (#2459)  [Matteo Boschini]
  * Added gqa8 kernel to allow llama-2-70B on metal
  * Update ggml-metal.m
  * Extend kernel_mul_mat_f16_f32 to handle gqa broadcast
  * Added ne03==ne13 assertion
  Co-authored-by: Cebtenzzre <cebtenzzre@gmail.com>
2023-07-31  CUDA: fixed LLAMA_FAST compilation option (#2473)  [Johannes Gäßler]
2023-07-31  CUDA: fixed cmake F16 option (#2471)  [Johannes Gäßler]
2023-07-31  CUDA: mmq CLI option, fixed mmq build issues (#2453)  [Johannes Gäßler]
2023-07-31  CUDA: Implemented row flattening for non-glm RoPE (#2468)  [Johannes Gäßler]
2023-07-31  CUDA: fewer memory bank conflicts for mul_mat_q (#2458)  [Johannes Gäßler]
2023-07-31  Fix Metal backend broken from the allocator changes (#2455)  [slaren]
2023-07-30  ggml : add graph tensor allocator (#2411)  [slaren]
  * ggml : add graph tensor allocator
  * ggml : don't calculate data pointer of unallocated tensors when creating a view with an offset
  * ggml : refactor ggml_view_Nd into ggml_view_tensor_offset
2023-07-29  CUDA: Quantized matrix matrix multiplication (#2160)  [Johannes Gäßler]
  * mmq implementation for non k-quants
  * q6_K
  * q2_K
  * q3_k
  * q4_K
  * vdr
  * q5_K
  * faster q8_1 loading
  * loop unrolling
  * add __restrict__
  * q2_K sc_high
  * GGML_CUDA_MMQ_Y
  * Updated Makefile
  * Update Makefile
  * DMMV_F16 -> F16
  * Updated README, CMakeLists
  * Fix CMakeLists.txt
  * Fix CMakeLists.txt
  * Fix multi GPU out-of-bounds
2023-07-29  CUDA: faster multi GPU synchronization (#2448)  [Johannes Gäßler]
2023-07-28  perplexity : add Hellaswag calculation (#2389)  [klosax]
  * common.h : add hellaswag / remove perplexity-lines
  * common.cpp : add hellaswag / remove perplexity-lines
  * perplexity.cpp : add hellaswag scores / remove perplexity-lines
  * perplexity.cpp : clean up
  * common.h : change default param value
  * common.cpp : change default param
  * perplexity.cpp : alter wording
  * common.h : alter wording
  * common.cpp : alter wording
2023-07-28  ggml : workaround for missing _mm256_setr_m128i in GCC < 8 in k_quants.c (#2405)  [Lee]
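The usual shape of this kind of workaround is a macro that assembles the 256-bit vector from its two halves when the convenience intrinsic is absent. A sketch along those lines (the macro name is illustrative, not necessarily the one used in k_quants.c):

```cpp
#include <immintrin.h>

// GCC < 8 ships AVX headers without _mm256_set_m128i/_mm256_setr_m128i, so
// build the 256-bit vector manually: zero-extend the low half into a 256-bit
// register, then insert the high half into the upper lane.
#if defined(__GNUC__) && __GNUC__ < 8
#define MM256_SET_M128I(hi, lo) \
    _mm256_insertf128_si256(_mm256_castsi128_si256(lo), (hi), 1)
#else
#define MM256_SET_M128I(hi, lo) _mm256_set_m128i((hi), (lo))
#endif
// _mm256_setr_m128i(lo, hi) is the same operation with reversed arguments.
```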
2023-07-28  llama : support more diverse tokenizers? (#2420)  [eric8607242]
  * supporting more diverse tokenizers
  * Update llama.cpp
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-28  examples : fix whitespace  [Georgi Gerganov]
2023-07-28  examples : server chat mode with llama2 (#2400)  [nhamanasu]
  * add: server chat mode with llama2
  * fix: remove the unnecessary last \n
2023-07-28  readme : fix the description of the Tail free sampling (TFS) method (#2431)  [Weird Constructor]
2023-07-28  llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)  [Rand Xie]
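Background for #2433: with grouped-query attention the K/V projections carry fewer heads than the queries, so the KV cache must be sized with the GQA-reduced embedding width rather than the full hidden size. A sketch of the arithmetic (field names mirror llama.cpp conventions but are assumptions here):

```cpp
#include <cstdint>

// Illustrative hyperparameters for a grouped-query attention model.
struct hparams {
    uint32_t n_embd    = 8192; // hidden size (llama-2 70B)
    uint32_t n_head    = 64;   // query heads
    uint32_t n_head_kv = 8;    // key/value heads (GQA)

    // K/V tensors carry n_head_kv heads, not n_head, so the effective
    // embedding width for the KV cache shrinks by the GQA factor:
    uint32_t n_embd_gqa() const {
        return n_embd / (n_head / n_head_kv); // 8192 / 8 = 1024
    }
};
```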
2023-07-28  Obtaining LLaMA 2 instructions (#2308)  [niansa/tuxifan]
  * Obtaining LLaMA 2 instructions
  * Removed sharing warning for LLaMA 2
  * Linked TheBloke's GGML repos
  * Add LLaMA 2 to list of supported models
  * Added LLaMA 2 usage instructions
  * Added links to LLaMA 2 70B models
2023-07-27  convert.py : Update to support 70B HF format model files (#2427)  [mj-shifu]
  * convert.py : fix llama 2 70b conversion from Huggingface
2023-07-27  metal : disable graph concurrency optimization due to bug (#2413)  [Georgi Gerganov]
2023-07-26  ggml : fix assert in ggml_set_unary_op (#2410)  [slaren]
2023-07-26  make : build with -Wmissing-prototypes (#2394)  [Cebtenzzre]
2023-07-26  ggml : allocate graphs in a context (#2392)  [slaren]
  * ggml : graph allocation in contexts
  * allocate work buffer as a ggml_object in ggml_graph_compute_with_ctx
  * llama.cpp : allocate graph in the context
  * add GGML_PAD
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-25  Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)  [Kawrakow]
  Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
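A build-time default like this is typically an overridable preprocessor constant. A sketch of the pattern (the fallback value shown is an assumption, not necessarily the one #2384 chose):

```cpp
// Overridable build-time default: pass -DLLAMA_DEFAULT_RMS_EPS=1e-5f in the
// compiler flags to change the default without editing the source.
#ifndef LLAMA_DEFAULT_RMS_EPS
#define LLAMA_DEFAULT_RMS_EPS 5e-6f // fallback value here is an assumption
#endif

struct llama_hparams_sketch {
    float f_rms_norm_eps = LLAMA_DEFAULT_RMS_EPS; // epsilon used by RMSNorm
};
```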
2023-07-25  ggml : fix ggml_flash_attn to use op_params (#2387)  [slaren]
2023-07-25  convert.py : support bpe tokenizer (#2228)  [ldwang]
  * support bpe tokenizer in convert
  * support bpe tokenizer in convert, fix
  Signed-off-by: ldwang <ftgreat@gmail.com>
  Co-authored-by: ldwang <ftgreat@gmail.com>
2023-07-25  ggml : relax contiguous constraints in activation function (#2371)  [Jiahao Li]
2023-07-25  ggml : improve graph build time via hash table lookup (#2329)  [slaren]
  * improve graph build time
  * ggml_tensor : use 1 bit per flag
  * use a hash table instead
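The optimization named above replaces a linear scan over already-visited nodes with a hash-set membership test during graph construction. A simplified sketch of the idea (the open-addressing table and all names are illustrative, not ggml's actual implementation):

```cpp
#include <cstddef>

struct ggml_tensor_stub; // stand-in for ggml_tensor

// Illustrative open-addressing hash set keyed on node pointers: turns the
// "was this node already added to the graph?" check from O(n) into ~O(1).
struct visited_set {
    static const size_t SIZE = 4096; // power of two, larger than max nodes
    const ggml_tensor_stub * keys[SIZE] = {};

    // returns true if the node was newly inserted, false if already present
    bool insert(const ggml_tensor_stub * node) {
        size_t h = ((size_t) node / sizeof(void *)) & (SIZE - 1);
        while (keys[h] != nullptr) {
            if (keys[h] == node) {
                return false; // already visited, skip re-adding
            }
            h = (h + 1) & (SIZE - 1); // linear probing
        }
        keys[h] = node;
        return true;
    }
};
```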