2023-07-21  gitignore : changes for Poetry users + chat examples (#2284)  (Jose Maldonado)
Also a fix in the Makefile for FreeBSD users: on that platform, x86_64 is reported as amd64. This fix resolves compilation using CFLAGS and CXXFLAGS with -march=native and -mtune=native.
Adds two examples for interactive mode using Llama2 models (thanks to TheBloke for the models).
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-21  make : fix indentation  (Georgi Gerganov)
2023-07-21  ci : fix MNT realpath usage (#2250)  (Georgi Gerganov)
2023-07-21  make : support customized LLAMA_CUDA_NVCC and LLAMA_CUDA_CCBIN (#2275)  (Sky Yan)
In certain environments, nvcc and gcc are installed under a customized path rather than the standard one.
Co-authored-by: Yan Lin <yanlin@baidu.com>
2023-07-21  flake : remove intel mkl from flake.nix due to missing files (#2277)  (wzy)
NixOS's mkl is missing some libraries such as mkl-sdl.pc (see #2261), and NixOS currently doesn't have the Intel C compiler (icx, icpx), see https://discourse.nixos.org/t/packaging-intel-math-kernel-libraries-mkl/975. So remove it from flake.nix.
Some minor changes:
- Change pkgs.python310 to pkgs.python3 to keep up with the latest Python
- Add pkgconfig to devShells.default
- Remove installPhase, because we have `cmake --install` from #2256
2023-07-21  llama : make tensor_split ptr instead of array (#2272)  (Georgi Gerganov)
2023-07-21  make : add new target for test binaries (#2244)  (Jiří Podivín)
Programs in the tests directory are now built with the tests target and placed in the same location.
* The clean target was expanded to remove the new binaries
* The test target binaries are listed in a variable
* The locations of the binaries were added to .gitignore
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-21  MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287)  (Hatsune Miku)
* Miku.sh: Set default model to llama-2-7b-chat
* Miku.sh: Set ctx_size to 4096
* Miku.sh: Add in-prefix/in-suffix opts
* Miku.sh: Switch sampler to mirostat_v2 and tiny prompt improvements
2023-07-21  Faster Q2_K on Metal (#2297)  (Kawrakow)
* Faster Q2_K on Metal
* Delete unnoticed and dangerous trailing whitespace
* Fix bug in the new Metal Q2_K implementation
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-21  make : fix embdinput library and server examples building on MSYS2 (#2235)  (Przemysław Pawełczyk)
* make : fix embdinput library and server examples building on MSYS2
* cmake : fix server example building on MSYS2
2023-07-20  Faster Q5_K and Q6_K on Metal (#2294)  (Kawrakow)
* Faster Q6_K on Metal
* Faster Q5_K on Metal
* Another Q5_K speedup
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-20  Faster Q4_K on Metal (#2290)  (Kawrakow)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-20  llama : fix regression from #2000 - could not load no-mmap models  (Georgi Gerganov)
2023-07-20  metal: minor q4 optimization and reduce code size (#2248)  (Shouzheng Liu)
* metal: use uint16_t instead of uint8_t. The Apple GPU doesn't like uint8_t: for every operation on a uint8_t, the GPU needs to copy it into an empty 16-bit register before it can issue other instructions. For the matrix-vector multiplication kernel only, we observed a 340~350 GB/s memory read speed on M1 Max after this commit, which is very close to the reported hardware limit.
* metal: update rms_norm kernel. This commit doubles the speed of rms_norm operations by using 512 threads per threadgroup, combined with SIMD primitives to minimize the need for threadgroup barriers (a reference scalar version of the operation is sketched below).
* metal: use template to reduce size. Revert modifications on block_q4_0 and block_q4_1.
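For context, here is a minimal scalar sketch of the RMS-norm operation being tuned; this is not the Metal kernel (which parallelizes the same reduction across a threadgroup), and the eps handling simply follows the usual RMS-norm definition.

    #include <math.h>

    /* Reference scalar RMS norm over n values: y[i] = x[i] / sqrt(mean(x^2) + eps).
     * The Metal kernel computes the same sum-of-squares reduction in parallel. */
    static void rms_norm_ref(const float * x, float * y, int n, float eps) {
        float sum = 0.0f;
        for (int i = 0; i < n; ++i) sum += x[i]*x[i];
        const float scale = 1.0f/sqrtf(sum/n + eps);
        for (int i = 0; i < n; ++i) y[i] = x[i]*scale;
    }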
2023-07-19  llama : extend API to get max devices at runtime (#2253)  (Rinne)
2023-07-19  flake : update flake.nix (#2270)  (wzy)
When `isx86_32 || isx86_64`, use mkl, otherwise openblas. According to https://discourse.nixos.org/t/rpath-of-binary-contains-a-forbidden-reference-to-build/12200/3, add -DCMAKE_SKIP_BUILD_RPATH=ON.
Fix #2261: Nix doesn't provide mkl-sdl.pc. When building with -DBUILD_SHARED_LIBS=ON and -DLLAMA_BLAS_VENDOR=Intel10_lp64, replace mkl-sdl.pc with mkl-dynamic-lp64-iomp.pc.
2023-07-19  cmake : install targets (#2256)  (wzy)
Fix #2252.
2023-07-18  ci : integrate with ggml-org/ci (#2250)  (Georgi Gerganov)
* ci : run ctest
* ci : add open llama 3B-v2 tests
* ci : disable wget progress output
* ci : add open llama 3B-v2 tg tests for q4 and q5 quantizations
* tests : try to fix tail free sampling test
* ci : add K-quants
* ci : add short perplexity tests
* ci : add README.md
* ppl : add --chunks argument to limit max number of chunks
* ci : update README
2023-07-18  llama : shorten quantization descriptions  (Georgi Gerganov)
2023-07-17  Support dup & cont ops on CUDA (#2242)  (Jiahao Li)
2023-07-17  llama : fix t_start_sample_us initialization warning (#2238)  (Alex Klinkhamer)
2023-07-16  ggml : fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG (#2219)  (Qingyou Meng)
* Fixed runtime bugs and compile errors related to GGML_PERF and GGML_DEBUG
* Remove ifdef GGML_PERF; update fmt
2023-07-16  py : turn verify-checksum-models.py into executable (#2245)  (Jiří Podivín)
README.md was adjusted to reflect the change.
Signed-off-by: Jiri Podivin <jpodivin@gmail.com>
2023-07-15  llama : add custom RoPE (#2054)  (Xiao-Yong Jin)
* Implement customizable RoPE. The original RoPE has pre-defined parameters theta_i = 10000^(-2(i-1)/d), for i in [1, 2, ..., d/2]. Our customizable RoPE, ggml_rope_custom_inplace, uses theta_i = scale * base^(-2(i-1)/d), for i in [1, 2, ..., d/2], with defaults that match the original: scale = 1.0, base = 10000. The new command line arguments --rope-freq-base and --rope-freq-scale set the two new RoPE parameters (see the sketch after this entry). Recent research shows that changing these two parameters extends the context limit with minimal loss:
  1. Extending Context to 8K, kaiokendev, https://kaiokendev.github.io/til#extending-context-to-8k
  2. Extending Context Window of Large Language Models via Positional Interpolation, Shouyuan Chen, Sherman Wong, Liangjian Chen, Yuandong Tian, https://arxiv.org/abs/2306.15595
  3. NTK-Aware Scaled RoPE allows LLaMA models to have extended (8k+) context size without any fine-tuning and minimal perplexity degradation, https://www.reddit.com/user/bloc97, https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/
  For the bold, try adding the following command line parameters to your favorite model: -c 16384 --rope-freq-base 80000 --rope-freq-scale 0.5
* ggml-metal: fix custom rope
* common: fix argument names in help
* llama: increase MEM_REQ_EVAL for MODEL_3B. It avoids crashing for quantized weights on CPU. A better way to calculate the required buffer size would be preferable.
* llama: make MEM_REQ_EVAL depend on n_ctx
* server: use proper Content-Type in curl examples. Without the header Content-Type: application/json, curl will POST with Content-Type: application/x-www-form-urlencoded. Though our simple server doesn't care, the httplib.h it uses has a limit with CPPHTTPLIB_FORM_URL_ENCODED_PAYLOAD_MAX_LENGTH 8192. With Content-Type: application/json, we can send large JSON data.
* style : minor fixes, mostly indentations
* ggml : fix asserts
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
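The following is a minimal sketch of the frequency formula quoted above, not the actual ggml code; the helper name rope_thetas is hypothetical.

    #include <math.h>

    /* Fills theta[0 .. d/2-1] with theta_i = scale * base^(-2(i-1)/d) for i = 1..d/2.
     * The defaults scale = 1.0f, base = 10000.0f reproduce the original RoPE;
     * --rope-freq-base and --rope-freq-scale override base and scale. */
    static void rope_thetas(float * theta, int d, float base, float scale) {
        for (int i = 1; i <= d/2; ++i) {
            theta[i - 1] = scale * powf(base, -2.0f*(float)(i - 1)/(float)d);
        }
    }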
2023-07-14  flake : add runHook preInstall/postInstall to installPhase so hooks function (#2224)  (Dave Della Costa)
2023-07-14  make : use pkg-config for OpenBLAS (#2222)  (wzy)
2023-07-14  cuda : allocate all temporary ggml_tensor_extra_gpu from a fixed-size buffer (#2220)  (Bach Le)
2023-07-14  ggml : fix static_assert with older compilers #2024 (#2218)  (Evan Miller)
2023-07-14  llama : add functions that work directly on model (#2197)  (Bach Le)
* Remove vocab reference from context
* Add functions that work directly with the model
2023-07-14  build.zig : install config header (#2216)  (Ali Chraghi)
2023-07-14  examples : fixed path typos in embd-input (#2214)  (Shangning Xu)
2023-07-14  cuda : support broadcast add & mul (#2192)  (Jiahao Li)
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-07-14  CUDA: mul_mat_vec_q kernels for k-quants (#2203)  (Johannes Gäßler)
2023-07-14  make : fix combination of LLAMA_METAL and LLAMA_MPI (#2208)  (James Reynolds)
Fixes https://github.com/ggerganov/llama.cpp/issues/2166 by moving commands after the CFLAGS are changed.
2023-07-14  ggml : sync (ggml_conv_2d, fix mul_mat bug, CUDA GLM rope)  (Georgi Gerganov)
2023-07-14  Metal: faster Q4_0 and Q4_1 matrix x vector kernels (#2212)  (Kawrakow)
* 3-5% faster Q4_0 on Metal
* 7-25% faster Q4_1 on Metal
* Oops, forgot to delete the original Q4_1 kernel
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2023-07-13  Revert "Support using mmap when applying LoRA (#2095)" (#2206)  (Howard Su)
It causes a performance regression when mlock is used. This reverts commit 2347463201a9f4159ae95b737e1544dd300569c8.
2023-07-13  Fix compile error on Windows CUDA (#2207)  (Howard Su)
2023-07-13  devops : add missing quotes to bash script (#2193)  (Bodo Graumann)
This prevents accidentally expanding arguments that contain spaces.
2023-07-12  metal : new q4_0 matrix-vector kernel (#2188)  (Shouzheng Liu)
Prefetch data to improve GPU utilization. ~48% faster for 33B model.
2023-07-12  ggml : broadcast mul_mat + conv batch support (#2199)  (Georgi Gerganov)
* ggml : broadcast mul_mat + conv batch support
* ggml : apply mul_mat broadcast fix by @jploski
2023-07-12  ggml : add ggml_pool_1d and ggml_pool_2d  (Georgi Gerganov)
2023-07-12  cuda : add gelu support  (Georgi Gerganov)
2023-07-12  FP16 is supported in CM=6.0 (#2177)  (Howard Su)
* FP16 is supported in CM=6.0
* Build PTX code for both 60 and 61
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2023-07-12  Fixed __dp4a compute capability: 6.0 -> 6.1 (#2189)  (Johannes Gäßler)
2023-07-12  ggml : revert CUDA broadcast changes from #2183 (#2191)  (Georgi Gerganov)
2023-07-11  ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)  (Georgi Gerganov)
2023-07-11  ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)  (Spencer Sutton)
* Add ggml changes
* Update train-text-from-scratch for the change
* mpi : adapt to new ggml_tensor->src
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
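Schematically, the layout change described above looks roughly like this; the struct names and array sizes here are illustrative assumptions, not the real ggml definitions.

    /* Before (sketch): dedicated source pointers plus an extra "opt" array. */
    struct ggml_tensor_before {
        struct ggml_tensor * src0;
        struct ggml_tensor * src1;
        struct ggml_tensor * opt[4];    /* size is illustrative */
    };

    /* After (sketch): one uniform source array, so src0/src1 become src[0]/src[1]. */
    struct ggml_tensor_after {
        struct ggml_tensor * src[6];    /* size is illustrative */
    };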
2023-07-11  llama : add classifier-free guidance (#2135)  (Bach Le)
* Initial implementation
* Remove debug print
* Restore signature of llama_init_from_gpt_params
* Free guidance context
* Make freeing of guidance_ctx conditional
* Make Classifier-Free Guidance a sampling function
* Correct typo. CFG already means context-free grammar.
* Record sampling time in llama_sample_classifier_free_guidance
* Shift all values by the max value before applying logsoftmax (see the sketch below)
* Fix styling based on review
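A minimal sketch of the max-shift trick mentioned in the bullet above (illustrative only, not the llama.cpp implementation): subtracting the maximum before exponentiating keeps log-softmax numerically stable without changing the result.

    #include <math.h>

    /* In-place log-softmax over n logits. Subtracting the maximum first
     * prevents expf() from overflowing while leaving the output unchanged. */
    static void log_softmax(float * x, int n) {
        float max = x[0];
        for (int i = 1; i < n; ++i) if (x[i] > max) max = x[i];
        float sum = 0.0f;
        for (int i = 0; i < n; ++i) sum += expf(x[i] - max);
        const float log_sum = logf(sum);
        for (int i = 0; i < n; ++i) x[i] = x[i] - max - log_sum;
    }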
2023-07-11  docker : add '--server' option (#2174)  (Jinwoo Jeong)