2023-05-03  fix missing parameters in `llama_init_from_gpt_params` (#1293)  (slaren)

2023-05-02  examples : add llama_init_from_gpt_params() common function (#1290)  (Ron Evans)
    Signed-off-by: deadprogram <ron@hybridgroup.com>

2023-05-02  llama : fix compile warnings  (Georgi Gerganov)

2023-05-02  ggml : fix 32-bit ARM  (Georgi Gerganov)

2023-05-02  examples : improve vertical alignment of a few variables (#1286)  (Ron Evans)
    Signed-off-by: deadprogram <ron@hybridgroup.com>

2023-05-02  ggml : fix ppc64le build error and make cmake detect Power processors (#1284)  (Marvin Gießing)
    - Fix ppc64le build issue
    - Add support to detect ppc64* processors

2023-05-02  llama : allow 0 as a seed number (#1275)  (Robert Brisita)

2023-05-02  main : switch input_noecho to input_echo to remove negation (#979)  (Ron Evans)
    Signed-off-by: deadprogram <ron@hybridgroup.com>

2023-05-02  ggml : add names to tensors (#1268)  (slaren)
    - add names to tensors
    - minor improvements to dot file formatting
2023-05-01  Add git-based build information for better issue tracking (#1232)  (DannyDaemonic)
    - Add git-based build information for better issue tracking
    - macOS fix
    - "build (hash)" and "CMAKE_SOURCE_DIR" changes
    - Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages
    - Fix conditional dependency on missing target
    - Break out build-info.cmake, add find_package fallback, add build info to all examples, add dependencies to Makefile
    - 4-space indenting for cmake, clean up the Makefile
    - Short hash, less fancy Makefile, and don't modify build-info.h if it wouldn't change it
2023-05-01  cuBLAS: refactor and optimize f16 mat mul performance (#1259)  (slaren)
    - refactor, convert fp16 to fp32 on device
    - use multiple streams, choose smartly between mul_mat_q and mul_mat_f16
    - fix build
    - update block_q5_1
2023-05-01  llama : update stubs for systems without mmap and mlock (#1266)  (xloem)
    Co-authored-by: John Doe <john.doe@example.com>

2023-05-01  ggml : fix ggml_used_mem() (#1264)  (Kerfuffle)

2023-05-01  llama : fix session load / save (#1263)  (Georgi Gerganov)

2023-05-01  cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)  (slaren)
    - fall back to pageable memory if pinned alloc fails
    - do not use pinned memory if the env variable GGML_CUDA_NO_PINNED is set

2023-05-01  llama : let context be const when accessing const data (#1261)  (Alex Klinkhamer)
2023-04-30  ggml : fix UB (int << 31)  (Georgi Gerganov)

2023-04-30  build : add armv{6,7,8} support to cmake (#1251)  (Pavol Rusnak)
    - flags copied from Makefile
    - updated comments in both CMakeLists.txt and Makefile to match reality

2023-04-30  common : better default number of threads (#934)  (jon-chuang)
    - add macOS headers, fix Windows build, apply code review suggestions
    Co-authored-by: jon-chuang <jon-chuang@users.noreply.github.com>
    Co-authored-by: DannyDaemonic <DannyDaemonic@gmail.com>
2023-04-30  ggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels (#1225)  (0cc4m)
    - Implement q5_0, q5_1 and q8_0
    - Work around q5_0 OpenCL issue
    - Fix q8_0 dequant kernel
    - Move cl kernels into ggml-opencl.c
    - Use two memcpy calls for q5_0 buffer transfer

2023-04-30  ggml : add Q5 WASM SIMD + GGML_FTYPE  (Georgi Gerganov)
2023-04-30  Various fixes to mat_mul benchmark (#1253)  (Stephan Walter)

2023-04-30  ggml : fix labels for GGML_OP_ALIBI  (Georgi Gerganov)

2023-04-29  ggml : fix 32-bit ARM NEON  (Georgi Gerganov)

2023-04-29  ggml : use vzip instead of vuzp for consistency  (Georgi Gerganov)

2023-04-29  ggml : fix visibility and unused warnings  (Georgi Gerganov)

2023-04-29  ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229)  (Georgi Gerganov)

2023-04-29  ggml : adjust mul_mat_f16 work memory (#1226)  (Georgi Gerganov)
    - llama : remove explicit int64_t cast
    - ggml : reduce memory buffer for F16 mul_mat when not using cuBLAS
    - ggml : add asserts to guard against incorrect wsize

2023-04-29  build : fix reference to old llama_util.h  (Georgi Gerganov)

2023-04-29  examples : fix save-load-state + rename llama-util.h  (Georgi Gerganov)

2023-04-29  common : change default parameters to pre-#1126 values (#1223)  (Georgi Gerganov)

2023-04-29  llama : new sampling algorithms (#1126)  (Ivan Stepanov)
    Sample interface and new samplers:
    - locally typical sampling
    - tail free sampling
    - frequency and presence penalty
    - mirostat
    - Ignore-EOS fix: -inf should be used
    - Added --logit-bias and --no-penalize-nl, removed std::span
    - Use C++11, clarify llama API documentation, rename Mirostat parameters to --mirostat_lr and --mirostat_ent, add temperature sampling for Mirostat, simplify Mirostat sampling API parameters (removed N and *k)
    - Adjust save-and-load example
    - Tests, Windows build and test fixes
2023-04-29  cuBLAS: use host pinned memory and dequantize while copying (#1207)  (slaren)
    - dequantize simultaneously while copying memory
    - use host pinned memory
    - improve ggml_compute_forward_mul_mat_f16_f32 with pinned memory
    - also pin kv cache
    - fix rebase

2023-04-29  cuBLAS: non-contiguous tensor support (#1215)  (Henri Vasserman)
    - non-contiguous tensor support
    - more fixes; now OpenBLAS and CLBlast build too

2023-04-28  Remove Q4_3 which is no better than Q5 (#1218)  (Stephan Walter)

2023-04-28  readme : update hot topics  (Georgi Gerganov)

2023-04-28  ggml : sync ggml (ggml_alibi)  (Georgi Gerganov)
2023-04-28  examples : add Jeopardy example (#1168)  (CRD716)
    - Basic setup: prefixes, line separators, prevent Results.txt from coming up
    - Introduction prompt to give more consistent results
    - Basic graphing and grading, fix column removal

2023-04-28  llama : add session file format and saved sessions in main (#1169)  (Evan Jones)

2023-04-28  ggml : add helper debug printf in soft_max  (Georgi Gerganov)
2023-04-28  ggml : add CLBlast support (#1164)  (0cc4m)
    - Allow use of OpenCL GPU-based BLAS using CLBlast instead of OpenBLAS for context processing
    - Improve CLBlast implementation, avoid recreating buffers, remove redundant transfers
    - Finish merge of CLBlast support
    - Move CLBlast implementation to a separate file; add buffer reuse code (adapted from slaren's cuda implementation)
    - Add q4_2 and q4_3 CLBlast support, improve code
    - Double CLBlast speed by disabling the OpenBLAS thread workaround
    - Fix device selection env variable names
    - Fix cast in opencl kernels
    - Add CLBlast to CMakeLists.txt
    - Replace buffer pool with static buffers a, b, qb, c; fix compile warnings
    - Fix typos, use GGML_TYPE defines, improve code
    - Improve btype dequant kernel selection code, add error if type is unsupported
    - Move internal stuff out of header; use internal enums instead of CLBlast enums
    - Remove leftover C++ includes and defines
    - Make event use easier to read
    - Use C compiler for opencl files; simplify code, fix include
    - First check error, then release event
    - Make globals static, fix indentation
    - Rename dequant kernels file to conform with other file names; fix import cl file name
    Co-authored-by: Concedo <39025047+LostRuins@users.noreply.github.com>
    Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
    Co-authored-by: Henri Vasserman <henv@hot.ee>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-04-28  Correct link to w64devkit (#1214)  (Folko-Ven)
    Correct the link to w64devkit (change "seeto" to "skeeto").

2023-04-28  Add Manjaro CUDA include and lib dirs to Makefile (#1212)  (Johannes Gäßler)

2023-04-28  Add AVX2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)  (Yann Follet)

2023-04-26  ggml : slightly faster AVX2 implementation for Q5 (#1197)  (Stephan Walter)

2023-04-26  readme : add quantization info  (Georgi Gerganov)
2023-04-26  ggml : add Q5_0 and Q5_1 quantization (#1187)  (Georgi Gerganov)
    - ggml : add Q5_0 quantization (cuBLAS only)
    - ggml : fix Q5_0 qh -> uint32_t
    - ggml : fix q5_0 histogram stats
    - ggml : q5_0 scalar dot product
    - ggml : q5_0 ARM NEON dot
    - ggml : q5_0 more efficient ARM NEON using uint64_t masks
    - ggml : rename Q5_0 -> Q5_1
    - ggml : adding Q5_0 mode
    - quantize : add Q5_0 and Q5_1 to map
    - ggml : AVX2 optimizations for Q5_0, Q5_1 (#1195)
    Co-authored-by: Stephan Walter <stephan@walter.name>
2023-04-26  Allow setting the rng seed after initialization (#1184)  (Ásgeir Bjarni Ingvarsson)
    The llama_set_state_data function restores the rng state to what it was at the
    time llama_copy_state_data was called. But users may want to restore the state
    and proceed with a different seed.
2023-04-26  Update build instructions to include BLAS support (#1183)  (DaniAndTheWeb)
    - First update to the build instructions to include BLAS
    - Clearer BLAS explanation, with a link to download the CUDA toolkit
    - Note that BLAS is already supported on macOS via the Accelerate framework
    - Clarify the effect of BLAS
    - Add instructions to build with Make on Windows
    - Fix typo and trailing whitespace
2023-04-26  quantize : use `map` to assign quantization type from `string` (#1191)  (Pavol Rusnak)
    Instead of `int` (the `int` option is still supported). This allows:
        ./quantize ggml-model-f16.bin ggml-model-q4_0.bin q4_0
    instead of:
        ./quantize ggml-model-f16.bin ggml-model-q4_0.bin 2