2023-07-26  make : build with -Wmissing-prototypes (#2394)  (Cebtenzzre)
2023-07-26  ggml : allocate graphs in a context (#2392)  (slaren)
    * ggml : graph allocation in contexts
    * allocate work buffer as a ggml_object in ggml_graph_compute_with_ctx
    * llama.cpp : allocate graph in the context
    * add GGML_PAD
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-25  Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)  (Kawrakow)
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-25  ggml : fix ggml_flash_attn to use op_params (#2387)  (slaren)
2023-07-25  convert.py : support bpe tokenizer (#2228)  (ldwang)
    * support bpe tokenizer in convert
    * support bpe tokenizer in convert, fix
    Signed-off-by: ldwang <ftgreat@gmail.com>
    Co-authored-by: ldwang <ftgreat@gmail.com>
2023-07-25  ggml : relax contiguous constraints in activation function (#2371)  (Jiahao Li)
2023-07-25  ggml : improve graph build time via hash table lookup (#2329)  (slaren)
    * improve graph build time
    * ggml_tensor : use 1 bit per flag
    * use a hash table instead
2023-07-25  build : fix line breaking error in build-info.sh (#2349)  (Hesen Peng)
    * fix line breaking
    * build number line break removal
2023-07-25  main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)  (Xiao-Yong Jin)
    * add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS
      The BOS precedes the string specified by `--in-prefix`. Model-generated EOS
      is now kept in the context. This provides a way to strictly follow the
      prompt format used in Llama-2-chat. The EOS handling also benefits existing
      finetunes that use EOS to mark the end of a turn.
    * examples/common : move input_prefix_bos to the other bools
2023-07-25  ci : add non-AVX scalar build/test (#2356)  (Eve)
    * noavx build and test
    * we don't need to remove f16c on Windows
2023-07-25  k_quants : add AVX support to dot functions with QK_K as 64 (#2339)  (katsu560)
    * add AVX to ggml_vec_dot_q2_K_q8_K()
    * add AVX to ggml_vec_dot_q3_K_q8_K()
    * add AVX to ggml_vec_dot_q4_K_q8_K()
    * add AVX to ggml_vec_dot_q5_K_q8_K()
    * add AVX to ggml_vec_dot_q6_K_q8_K()
    * refactor AVX code in ggml_vec_dot_q6_K_q8_K()
2023-07-25  metal : concurrently dispatch commands (#2358)  (Shouzheng Liu)
    * metal : concurrently dispatch commands
      When `ggml_metal_graph_compute` is called for the first time,
      `ggml_metal_graph_find_concurrency` runs and writes the commands that can
      be issued concurrently to the metal context's `concur_list` array.
    * metal : don't call find_concurrency automatically
    * metal : code style changes
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-25  Another speed gain for Q4_0 and Q4_1 on Metal (#2375)  (Kawrakow)
    * Have N_DST, etc., be template parameters
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-25  Fix Q4_K and Q5_K for QK_K = 64 on CUDA (#2359)  (Kawrakow)
    * Fix Q4_K and Q5_K for QK_K = 64
    * Very slightly better Q5_K bit fiddling
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-25  server : add rms_norm_eps parameter (#2380)  (slaren)
2023-07-25  [Server] Escape HTML in webchat (#2368)  (Henri Vasserman)
    * escape HTML in webchat
    * add amp
2023-07-24  make rms_norm_eps a parameter (#2374)  (slaren)
    * add rms_norm_eps to command line
    * fix baby llama, test-grad0
    * use scientific notation for eps param in the help
    ggml-ci
2023-07-24  Chat UI extras (#2366)  (Aarni Koskela)
    * makefile : correct deps for server
    * server : tighten settings layout a little
    * server : expose all currently configured generation params in UI
    * server : expose remaining generation params, for the adventurous
    * server : embetter mirostat fields
2023-07-24  ggml : sync (unary ops refactor, static-correctness) (#2370)  (Georgi Gerganov)
    * ggml : sync (unary ops, tests)
    * tests : remove unnecessary funcs
    ggml-ci
2023-07-24  Fix scalar version of Q5_K when QK_K = 64 (#2362)  (Kawrakow)
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-23  llama : add grammar-based sampling (#1773)  (Evan Jones)
    * llama, main : constrain sampling to grammar
    * allow loading grammar from file
    * fix whitespace errors
    * handle & print parser errors
    * add comments to grammar syntax and allow newlines where unambiguous
    * add missing include
    * support alternates in root rule
    * fix bugs with empty token and EOS
    * adjust JSON grammar
    * remove swp file
    * rewrite ternary expressions
    * use struct for grammar elements and add Unicode support
    * add unicode escapes
    * add inverse char ranges
    * only sample full tokens (no peeking or truncation)
    * llama : minor style changes
    * update help text
    * add warning message if EOS is disabled
    Co-authored-by: Henri Vasserman <henv@hot.ee>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-24  Some more Q4_K and Q5_K speedup on CUDA (#2346)  (Kawrakow)
    * Faster Q5_K on CUDA
    * Small Q5_K improvement on older GPUs
    * Sped up Q4_K on CUDA
      GTX 1660: 29.5 ms/t -> 25.6 ms/t
      RTX 4080: 8.40 ms/t -> 8.25 ms/t
    * Sped up Q4_K on CUDA
      GTX 1660: 36.7 ms/t -> 35.6 ms/t
      RTX 4080: 9.8 ms/t -> 9.5 ms/t
    * Address PR comments
    * Add some comments to satisfy PR reviewer
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-23  Add gqa parameter support to the server (#2351)  (IgnacioFDM)
    * Change help from stderr to stdout
2023-07-23  Fix __dp4a documentation (#2348)  (Johannes Gäßler)
2023-07-23  common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347)  (wzy)
    * Fix #2345, fix incorrect n_threads
    * Update examples/common.cpp
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-23  fix n_tasks (#2342)  (slaren)
    ggml-ci
2023-07-23  ggml : move op parameters from tensors to ggml_tensor::op_params (#2333)  (slaren)
    * alibi : use memcpy for float params
    * remove `src[1] = NULL` in ops
2023-07-23  llama : grouped-query attention + LLaMAv2 70B support (#2276)  (Georgi Gerganov)
    * CUDA : GQA implementation
    * llama : support for GQA and LLaMAv2 70B
    * py : fix hparams parsing (if-else blocks)
    * py : oh boy ..
    * help : fix gqa value for 70B
    ggml-ci
    Co-authored-by: JohannesGaessler <johannesg@5d6.de>
2023-07-23  llama : print help to stdout (#2338)  (maddes8cht)
2023-07-23  flake : support `nix build '.#opencl'` (#2337)  (wzy)
2023-07-23  llama : print max tensor size to stderr (#2336)  (Christian Demsar)
2023-07-23  make : fix CLBLAST compile support in FreeBSD (#2331)  (Jose Maldonado)
    * Fix Makefile for CLBLAST compile support and add instructions for
      compiling llama.cpp on FreeBSD
    * More general use-case for CLBLAST support (Linux and FreeBSD)
2023-07-23  examples : simplify vim plugin (#2327)  (AustinMroz)
    Uses builtin json_encode and json_decode functions to simplify escaping.
    Removes the need for temp files.
2023-07-23  metal : support bcast add & dup & cont op (#2323)  (Jiahao Li)
2023-07-23  Speed up Q4_K (#2322)  (Kawrakow)
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-22  CUDA: Fixed 7b q3_K_S with mul_mat_vec_q (#2313)  (Johannes Gäßler)
2023-07-22  llama : optimize memory buffers (#2325)  (Georgi Gerganov)
2023-07-22  Perplexity: Compute scores correlated to HellaSwag (#2312)  (klosax)
    * Add parameter --perplexity-lines to perplexity.cpp
2023-07-22  examples : basic VIM plugin  (whoreson)
    VIM plugin for server exe
2023-07-22  ci : fix args  (Georgi Gerganov)
2023-07-22  ci : add 7B CUDA tests (#2319)  (Georgi Gerganov)
    * ci : add Q2_K to the tests
    * ci : bump CUDA ppl chunks
    * ci : increase CUDA TG len + add --ignore-eos
    * ci : reduce CUDA ppl chunks down to 4 to save time
    ggml-ci
2023-07-21  examples : add easy python script to create quantized (k-bit support) GGML models from local HF Transformer models (#2311)  (Richard Roberson)
    * Resync my fork with new llama.cpp commits
    * examples : rename to use dash instead of underscore
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-21  Custom RoPE + better memory management for CUDA (#2295)  (Kawrakow)
    * Adjusted look-ahead in ggml_cuda_pool_malloc to 5%
      This seems sufficient; we end up using about 200 MB less VRAM when running
      the 13B model with context 8192.
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-21  Faster Q3_K implementation on Metal (#2307)  (Kawrakow)
    * Faster Q3_K on Metal
    * Additional Q3_K speedup on Metal
    * Q3_K for QK_K = 64
    * Better Q3_K for QK_K = 64: 21.6 ms/t -> 21.1 ms/t
    Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-21  ggml : fix the rope fix (513f8619535a64fa9ace808cdcbcf66211535f5c)  (Georgi Gerganov)
2023-07-21  examples : fix typo in minigpt4.py (#2298)  (Ikko Eltociear Ashimine)
    promt -> prompt
2023-07-21  ggml : fix rope args order + assert (#2054)  (Georgi Gerganov)
2023-07-21  gitignore : fix final newline  (Georgi Gerganov)
2023-07-21  llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280)  (Guillaume "Vermeille" Sanchez)
2023-07-21  gitignore : changes for Poetry users + chat examples (#2284)  (Jose Maldonado)
    Also a fix in the Makefile for FreeBSD users: on this platform, x86_64 is
    amd64. The fix resolves compilation with CFLAGS and CXXFLAGS using
    -march=native and -mtune=native.
    Adds two examples for interactive mode using Llama 2 models (thanks to
    TheBloke for the models).
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>