Age | Commit message | Author
2023-07-25 | Fix Q4_K and Q5_K for QK_K = 64 on CUDA (#2359) | Kawrakow
* Fix Q4_K and Q5_K for QK_K = 64
* Very slightly better Q5_K bit fiddling
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-25 | server: add rms_norm_eps parameter (#2380) | slaren
2023-07-25 | [Server] Escape HTML in webchat (#2368) | Henri Vasserman
* escape HTML in webchat
* add amp
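The webchat fix above is in JavaScript; purely as an illustration of the escaping idea, here is a small C++ sketch that replaces the HTML-significant characters in user-supplied text before it is rendered.

    #include <string>

    // Replace characters that are significant in HTML so user text cannot inject markup.
    std::string escape_html(const std::string & in) {
        std::string out;
        out.reserve(in.size());
        for (char c : in) {
            switch (c) {
                case '&':  out += "&amp;";  break;
                case '<':  out += "&lt;";   break;
                case '>':  out += "&gt;";   break;
                case '"':  out += "&quot;"; break;
                default:   out += c;        break;
            }
        }
        return out;
    }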
2023-07-24 | make rms_norm_eps a parameter (#2374) | slaren
* make rms_norm_eps a parameter
* add rms_norm_eps to command line
* fix baby llama, test-grad0
* use scientific notation for eps param in the help
ggml-ci
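The eps value guards the normalization against division by zero and, after this change, comes from a runtime parameter instead of a hard-coded constant. Below is a minimal scalar sketch of RMS norm with a configurable epsilon; it is illustrative only, not the ggml kernel.

    #include <cmath>
    #include <cstddef>

    // y_i = x_i / sqrt(mean(x^2) + eps)
    void rms_norm(const float * x, float * y, size_t n, float eps /* e.g. 1e-5f or 1e-6f */) {
        double sum_sq = 0.0;
        for (size_t i = 0; i < n; ++i) sum_sq += (double) x[i] * x[i];
        const float scale = 1.0f / std::sqrt((float) (sum_sq / n) + eps);
        for (size_t i = 0; i < n; ++i) y[i] = x[i] * scale;
    }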
2023-07-24 | Chat UI extras (#2366) | Aarni Koskela
* makefile: correct deps for server
* server: tighten settings layout a little
* server: expose all currently configured generation params in UI
* server: expose remaining generation params, for the adventurous
* server: embetter mirostat fields
2023-07-24 | ggml : sync (unary ops refactor, static-correctness) (#2370) | Georgi Gerganov
* ggml : sync (unary ops, tests)
ggml-ci
* tests : remove unnecessary funcs
2023-07-24 | Fix scalar version of Q5_K when QK_K = 64 (#2362) | Kawrakow
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-23 | llama : add grammar-based sampling (#1773) | Evan Jones
* llama, main : constrain sampling to grammar
* allow loading grammar from file
* fix whitespace errors
* handle & print parser errors
* add comments to grammar syntax and allow newlines where unambiguous
* add missing include
* support alternates in root rule
* fix bugs with empty token and EOS
* adjust JSON grammar
* remove swp file
* rewrite ternary expressions
Co-authored-by: Henri Vasserman <henv@hot.ee>
* use struct for grammar elements and add Unicode support
* add unicode escapes
* add inverse char ranges
* only sample full tokens (no peeking or truncation)
* llama : minor style changes
blindly applied in online editor - hopefully I didn't break something
* update help text
* add warning message if EOS is disabled
---------
Co-authored-by: Henri Vasserman <henv@hot.ee>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
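The idea behind the change above is that at each step the sampler only considers tokens that keep the output inside a user-supplied grammar. The following C++ sketch shows that filtering step in the abstract; the digit-only predicate stands in for a real grammar automaton, and none of this is the llama.cpp implementation.

    #include <cctype>
    #include <string>
    #include <vector>

    struct candidate { int id; std::string text; float p; };

    // Stand-in for a grammar automaton: accept only tokens made entirely of digits,
    // i.e. roughly a "root ::= [0-9]+" grammar.
    static bool grammar_accepts(const std::string & tok) {
        if (tok.empty()) return false;
        for (unsigned char c : tok) if (!std::isdigit(c)) return false;
        return true;
    }

    // Zero out the probability of disallowed tokens, then renormalize the rest.
    static void constrain_to_grammar(std::vector<candidate> & cands) {
        float sum = 0.0f;
        for (auto & c : cands) {
            if (!grammar_accepts(c.text)) c.p = 0.0f;
            sum += c.p;
        }
        if (sum > 0.0f) for (auto & c : cands) c.p /= sum;
    }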
2023-07-24 | Some more Q4_K and Q5_K speedup on CUDA (#2346) | Kawrakow
* Faster Q5_K on CUDA
* Small Q5_K improvement on older GPUs
* Speed up Q4_K on CUDA
GTX1660: 29.5 ms/t -> 25.6 ms/t
RTX4080: 8.40 ms/t -> 8.25 ms/t
* Speed up Q4_K on CUDA
GTX1660: 36.7 ms/t -> 35.6 ms/t
RTX4080: 9.8 ms/t -> 9.5 ms/t
* Address PR comments
* Add some comments to satisfy PR reviewer
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-23 | Add gqa parameter support to the server (#2351) | IgnacioFDM
* Add gqa parameter support to the server
* Change help from stderr to stdout
2023-07-23 | Fix __dp4a documentation (#2348) | Johannes Gäßler
2023-07-23 | common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347) | wzy
* Fix #2345, fix incorrect n_threads
* Update examples/common.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
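A minimal sketch of the fallback described above, assuming an int n_threads field as in examples/common.cpp: a value of -1 means "use all hardware threads", with a small default when the count cannot be detected.

    #include <thread>

    int resolve_n_threads(int n_threads) {
        if (n_threads == -1) {
            unsigned hc = std::thread::hardware_concurrency(); // may return 0 if unknown
            n_threads = hc > 0 ? (int) hc : 4;                 // fall back to a small default
        }
        return n_threads;
    }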
2023-07-23 | fix n_tasks (#2342) | slaren
ggml-ci
2023-07-23 | ggml: move op parameters from tensors to ggml_tensor::op_params (#2333) | slaren
* ggml: move op parameters from tensors to ggml_tensor::op_params
* alibi: use memcpy for float params
* remove `src[1] = NULL` in ops
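A minimal sketch of the idea, with hypothetical names: small per-op constants live in a fixed-size int32 array inside the tensor instead of in an extra source tensor, and float values are stored bit-exactly via memcpy rather than through pointer casts.

    #include <cstdint>
    #include <cstring>

    struct tensor_like {
        int32_t op_params[16]; // fixed-size scratch for op constants (size is illustrative)
    };

    // Store a float op parameter at slot idx (idx assumed in range).
    void set_op_param_f32(tensor_like & t, int idx, float value) {
        std::memcpy(&t.op_params[idx], &value, sizeof(float)); // avoids type-punning casts
    }

    // Read it back the same way.
    float get_op_param_f32(const tensor_like & t, int idx) {
        float value;
        std::memcpy(&value, &t.op_params[idx], sizeof(float));
        return value;
    }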
2023-07-23 | llama : grouped-query attention + LLaMAv2 70B support (#2276) | Georgi Gerganov
* CUDA: GQA implementation
* llama : support for GQA and LLaMAv2 70B
ggml-ci
* py : fix hparams parsing (if-else blocks)
ggml-ci
* py : oh boy ..
ggml-ci
* help : fix gqa value for 70B
ggml-ci
---------
Co-authored-by: JohannesGaessler <johannesg@5d6.de>
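Grouped-query attention lets several query heads share one K/V head, which shrinks the KV cache. A minimal sketch of the head mapping, assuming hyperparameters named n_head and n_head_kv as discussed in the PR:

    #include <cassert>

    // Map a query head index to the K/V head it reads from.
    int kv_head_for_q_head(int q_head, int n_head, int n_head_kv) {
        assert(n_head % n_head_kv == 0);
        const int group = n_head / n_head_kv; // e.g. 64 / 8 = 8 for LLaMAv2 70B
        return q_head / group;
    }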
2023-07-23 | llama : print help to stdout (#2338) | maddes8cht
2023-07-23 | flake : support `nix build '.#opencl'` (#2337) | wzy
2023-07-23 | llama : print max tensor size to stderr (#2336) | Christian Demsar
2023-07-23 | make : fix CLBLAST compile support in FreeBSD (#2331) | Jose Maldonado
* Fix Makefile for CLBLAST compile support and add instructions for compiling llama.cpp on FreeBSD
* More general use-case for CLBLAST support (Linux and FreeBSD)
2023-07-23 | examples : simplify vim plugin (#2327) | AustinMroz
Uses the builtin json_encode and json_decode functions to simplify escaping and removes the need for temp files.
2023-07-23 | metal : support bcast add & dup & cont op (#2323) | Jiahao Li
2023-07-23 | Speed up Q4_K (#2322) | Kawrakow
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-22 | CUDA: Fixed 7b q3_K_S with mul_mat_vec_q (#2313) | Johannes Gäßler
2023-07-22 | llama : optimize memory buffers (#2325) | Georgi Gerganov
2023-07-22 | Perplexity: Compute scores correlated to HellaSwag (#2312) | klosax
* Add parameter --perplexity-lines to perplexity.cpp
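For context, this is the standard perplexity formula the tool builds on: ppl = exp(-(1/N) * sum_i log p(token_i | context)). The sketch below computes it from per-token log-probabilities; the PR's HellaSwag-correlated score additionally evaluates each line of the input file separately, which this sketch does not show.

    #include <cmath>
    #include <vector>

    // Perplexity from natural-log token probabilities.
    double perplexity(const std::vector<double> & log_probs) {
        if (log_probs.empty()) return 0.0;
        double sum = 0.0;
        for (double lp : log_probs) sum += lp;
        return std::exp(-sum / (double) log_probs.size());
    }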
2023-07-22 | examples : basic VIM plugin | whoreson
VIM plugin for server exe
2023-07-22 | ci : fix args | Georgi Gerganov
2023-07-22 | ci : add 7B CUDA tests (#2319) | Georgi Gerganov
* ci : add 7B CUDA tests
ggml-ci
* ci : add Q2_K to the tests
* ci : bump CUDA ppl chunks
ggml-ci
* ci : increase CUDA TG len + add --ignore-eos
* ci : reduce CUDA ppl chunks down to 4 to save time
2023-07-21 | examples : add easy python script to create quantized (k-bit support) GGML models from local HF Transformer models (#2311) | Richard Roberson
* Resync my fork with new llama.cpp commits
* examples : rename to use dash instead of underscore
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-21 | Custom RoPE + better memory management for CUDA (#2295) | Kawrakow
* Custom RoPE + better memory management for CUDA
* Adjusted look-ahead in ggml_cuda_pool_malloc to 5%
This is sufficient, it seems. We end up using about 200 MB less VRAM that way when running the 13B model with context 8192.
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
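A hedged sketch of the look-ahead idea described above (hypothetical names, plain malloc standing in for cudaMalloc): reuse a free block if it is large enough, otherwise allocate a new one with about 5% headroom so that slightly larger future requests can reuse it, which bounds wasted VRAM.

    #include <cstddef>
    #include <cstdlib>

    struct pool_block { void * ptr = nullptr; size_t size = 0; bool in_use = false; };

    static pool_block g_pool[256];

    void * pool_malloc(size_t size, size_t * actual_size) {
        for (auto & b : g_pool) {
            if (!b.in_use && b.size >= size) {      // reuse the first free block that fits
                b.in_use     = true;
                *actual_size = b.size;
                return b.ptr;
            }
        }
        const size_t look_ahead = size + size / 20; // allocate with ~5% headroom
        for (auto & b : g_pool) {
            if (!b.in_use && b.ptr == nullptr) {    // take an empty slot
                b.ptr    = std::malloc(look_ahead); // the real pool calls cudaMalloc here
                b.size   = look_ahead;
                b.in_use = true;
                *actual_size = b.size;
                return b.ptr;
            }
        }
        *actual_size = 0;
        return nullptr; // pool exhausted (the real code handles this case)
    }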
2023-07-21 | Faster Q3_K implementation on Metal (#2307) | Kawrakow
* Faster Q3_K on Metal
* Additional Q3_K speedup on Metal
* Q3_K for QK_K = 64
* Better Q3_K for QK_K = 64
21.6 ms/t -> 21.1 ms/t
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-21 | ggml : fix the rope fix (513f8619535a64fa9ace808cdcbcf66211535f5c) | Georgi Gerganov
2023-07-21 | examples : fix typo in minigpt4.py (#2298) | Ikko Eltociear Ashimine
promt -> prompt
2023-07-21 | ggml : fix rope args order + assert (#2054) | Georgi Gerganov
2023-07-21 | gitignore : fix final newline | Georgi Gerganov
2023-07-21 | llama : remove cfg smooth factor as it is only a reparameterization of the guidance scale (#2280) | Guillaume "Vermeille" Sanchez
2023-07-21 | gitignore : changes for Poetry users + chat examples (#2284) | Jose Maldonado
A fix in the Makefile for FreeBSD users: on that platform x86_64 is amd64, and this fix resolves compilation with CFLAGS and CXXFLAGS using -march=native and -mtune=native.
Adds two examples for interactive mode using Llama2 models (thanks to TheBloke for the models).
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-21 | make : fix indentation | Georgi Gerganov
2023-07-21 | ci : fix MNT realpath usage (#2250) | Georgi Gerganov
2023-07-21 | make : support customized LLAMA_CUDA_NVCC and LLAMA_CUDA_CCBIN (#2275) | Sky Yan
In certain environments, nvcc and gcc are installed in a customized path rather than the standard one.
Co-authored-by: Yan Lin <yanlin@baidu.com>
2023-07-21 | flake : remove intel mkl from flake.nix due to missing files (#2277) | wzy
NixOS's mkl misses some libraries like mkl-sdl.pc. See #2261
Currently NixOS doesn't have the intel C compiler (icx, icpx). See https://discourse.nixos.org/t/packaging-intel-math-kernel-libraries-mkl/975
So remove it from flake.nix
Some minor changes:
- Change pkgs.python310 to pkgs.python3 to keep latest
- Add pkgconfig to devShells.default
- Remove installPhase because we have `cmake --install` from #2256
2023-07-21 | llama : make tensor_split ptr instead of array (#2272) | Georgi Gerganov
2023-07-21 | make : add new target for test binaries (#2244) | Jiří Podivín
Programs in the tests directory are now built with the tests target and placed in the same location.
* clean target was expanded to remove the new binaries
* test target binaries are listed in a variable
* Locations of binaries were added to the .gitignore
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-21 | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287) | Hatsune Miku
* Miku.sh: Set default model to llama-2-7b-chat
* Miku.sh: Set ctx_size to 4096
* Miku.sh: Add in-prefix/in-suffix opts
* Miku.sh: Switch sampler to mirostat_v2 and tiny prompt improvements
2023-07-21 | Faster Q2_K on Metal (#2297) | Kawrakow
* Faster Q2_K on Metal
* Deleting unnoticed and dangerous trailing white space
* Fixed bug in new metal Q2_K implementation
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-21 | make : fix embdinput library and server examples building on MSYS2 (#2235) | Przemysław Pawełczyk
* make : fix embdinput library and server examples building on MSYS2
* cmake : fix server example building on MSYS2
2023-07-20 | Faster Q5_K and Q6_K on Metal (#2294) | Kawrakow
* Faster Q6_K on Metal
* Faster Q5_K on Metal
* Another Q5_K speedup
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-20 | Faster Q4_K on Metal (#2290) | Kawrakow
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-20 | llama : fix regression from #2000 - could not load no-mmap models | Georgi Gerganov
2023-07-20 | metal: minor q4 optimization and reduce code size (#2248) | Shouzheng Liu
* metal: use uint16_t instead of uint8_t.
Apple GPUs don't like uint8_t: for every operation on a uint8_t, the GPU needs to copy it into an empty 16-bit register before it can issue other instructions. For the matrix-vector multiplication kernel only, we observed a 340~350 GB/s memory read speed on M1 Max after this commit, which is very close to the reported hardware limit.
* metal: update rms_norm kernel
This commit doubles the speed of rms_norm operations by using 512 threads per threadgroup, combined with SIMD primitives to minimize the need for threadgroup barriers.
* metal: use template to reduce size
Revert modifications on block_q4_0 and block_q4_1.