Age | Commit message | Author
2023-05-04 | Update main's README.md with new features (#1296) | DannyDaemonic
2023-05-04 | fix #1224 reverse prompt and multi line (#1297) | Tomas
* fix reverse prompt and multi line
* Code Formatting
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-03 | ggml : vectorize Q8_0 quantization | Georgi Gerganov
https://github.com/ggerganov/ggml/pull/127#issuecomment-1533648531
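For context: Q8_0 stores each block of 32 floats as one float scale plus 32 signed bytes. Below is a scalar reference sketch of the quantizer this commit vectorizes with SIMD; the struct fields are illustrative and may not match ggml's exact layout.

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Sketch of scalar Q8_0 quantization: 32 floats -> 1 scale + 32 int8 values.
    // Field names are illustrative; ggml's actual block layout may differ.
    struct block_q8_0 {
        float  d;       // scale
        int8_t qs[32];  // quantized values
    };

    static void quantize_row_q8_0_ref(const float * x, block_q8_0 * y, int k) {
        const int nb = k / 32;                 // number of blocks
        for (int i = 0; i < nb; i++) {
            float amax = 0.0f;                 // absolute max within the block
            for (int j = 0; j < 32; j++) {
                amax = std::max(amax, std::fabs(x[i*32 + j]));
            }
            const float d  = amax / 127.0f;    // map [-amax, amax] to [-127, 127]
            const float id = d ? 1.0f / d : 0.0f;
            y[i].d = d;
            for (int j = 0; j < 32; j++) {
                y[i].qs[j] = (int8_t) std::roundf(x[i*32 + j] * id);
            }
        }
    }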
2023-05-03 | examples : read chat prompts from a template file (#1196) | khimaros
2023-05-03 | minor : fix whitespaces (#1302) | Georgi Gerganov
2023-05-03 | minor : fix trailing whitespaces | Georgi Gerganov
2023-05-03 | scripts : platform independent script to verify sha256 checksums (#1203) | KASR
* python script to verify the checksum of the llama models
  Added a Python script for verifying the SHA256 checksums of files in a directory, which can run on multiple platforms. Improved the formatting of the output results for better readability.
* Update README.md
  Update the readme for improved readability and to explain the usage of the python checksum verification script.
* update the verification script
  Extended the script based on suggestions by @prusnak. The script now checks the available RAM: if there is enough to check the file at once it will do so; if not, the file is read in chunks.
* minor improvement
  Small change so that the available RAM is checked and not the total RAM.
* remove the part of the code that reads the file at once if enough RAM is available
  Based on suggestions from @prusnak, removed the part of the code that checks whether the user has enough RAM to read the entire model at once; the file is now always read in chunks.
* Update verify-checksum-models.py
  Quick fix to pass the git check.
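The script itself is Python; purely to illustrate the chunked-read pattern it settled on, here is a C++ equivalent built on OpenSSL's SHA256_* functions (deprecated since OpenSSL 3 but widely available):

    #include <openssl/sha.h>   // link with -lcrypto
    #include <cstdio>
    #include <vector>

    // Hash a file in fixed-size chunks so memory use stays constant
    // regardless of model size (the same idea as the Python script).
    static bool sha256_file(const char * path, unsigned char out[SHA256_DIGEST_LENGTH]) {
        FILE * f = std::fopen(path, "rb");
        if (!f) return false;

        SHA256_CTX ctx;
        SHA256_Init(&ctx);

        std::vector<unsigned char> buf(1 << 20);   // 1 MiB chunks
        size_t n;
        while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0) {
            SHA256_Update(&ctx, buf.data(), n);
        }

        std::fclose(f);
        SHA256_Final(out, &ctx);
        return true;
    }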
2023-05-03 | examples : various prompt and example fixes (#1298) | CRD716
* fix dan.txt
* miku prompt improvements
* use common characters
2023-05-02 | llama : only copy used KV cache in get / set state (#1272) | Evan Jones
* llama : only copy used KV cache in get / set state
* switch to ggml for copying k, v
* avoid designated initializers
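The idea behind the state-size reduction: only the populated prefix of each K/V tensor needs to be serialized, not the full context-length allocation. A generic sketch with hypothetical names (this is not llama.cpp's actual state API):

    #include <cstring>
    #include <cstdint>
    #include <cstddef>

    // Hypothetical illustration: copy only the populated prefix of a KV cache
    // into a snapshot buffer instead of the whole allocation.
    struct kv_cache_view {
        const uint8_t * data;      // cache backing memory (k or v)
        size_t bytes_per_token;    // row size for one token
        size_t n_ctx;              // allocated capacity in tokens
        size_t n_used;             // tokens actually populated
    };

    static size_t kv_copy_used(const kv_cache_view & kv, uint8_t * dst) {
        // much smaller than n_ctx * bytes_per_token for short sessions
        const size_t used_bytes = kv.n_used * kv.bytes_per_token;
        std::memcpy(dst, kv.data, used_bytes);
        return used_bytes;
    }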
2023-05-02 | Process escape sequences given in prompts (#1173) | DannyDaemonic
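A prompt passed as -p "Hello\nWorld" on the command line arrives as the literal characters backslash and n, and must be unescaped before tokenization. A minimal sketch of that kind of pass (not the exact function from the PR):

    #include <string>

    // Replace literal two-character sequences like "\n" with the
    // control characters they name. Unknown escapes are kept verbatim.
    static std::string process_escapes(const std::string & in) {
        std::string out;
        out.reserve(in.size());
        for (size_t i = 0; i < in.size(); i++) {
            if (in[i] == '\\' && i + 1 < in.size()) {
                switch (in[++i]) {
                    case 'n':  out += '\n'; break;
                    case 't':  out += '\t'; break;
                    case '\\': out += '\\'; break;
                    case '"':  out += '"';  break;
                    default:   out += '\\'; out += in[i]; break;
                }
            } else {
                out += in[i];
            }
        }
        return out;
    }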
2023-05-02 | Handle signals properly on Windows (#1123) | DannyDaemonic
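On Windows, POSIX signal(SIGINT, ...) does not cover console events like Ctrl+C cleanly; the usual approach is SetConsoleCtrlHandler, sketched below (not necessarily the PR's exact code):

    #ifdef _WIN32
    #include <windows.h>

    // Called by the system on Ctrl+C / Ctrl+Break; return TRUE when handled.
    static BOOL WINAPI console_ctrl_handler(DWORD type) {
        if (type == CTRL_C_EVENT) {
            // flush state, stop generation, etc.
            return TRUE;
        }
        return FALSE;
    }

    static void install_handler() {
        SetConsoleCtrlHandler(console_ctrl_handler, TRUE);
    }
    #endif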
2023-05-02 | Call sh on build-info.sh (#1294) | DannyDaemonic
2023-05-03 | fix build-info.h for git submodules (#1289) | kuvaus
* make git build info work with submodules
Co-authored-by: Green Sky <green@g-s.xyz>
2023-05-03 | fix missing parameters in `llama_init_from_gpt_params` (#1293) | slaren
2023-05-02 | examples : add llama_init_from_gpt_params() common function (#1290) | Ron Evans
Signed-off-by: deadprogram <ron@hybridgroup.com>
2023-05-02 | llama : fix compile warnings | Georgi Gerganov
2023-05-02 | ggml : fix 32-bit ARM | Georgi Gerganov
2023-05-02 | examples : improve vertical alignment of a few variables (#1286) | Ron Evans
Signed-off-by: deadprogram <ron@hybridgroup.com>
2023-05-02 | ggml : fix ppc64le build error and make cmake detect Power processors (#1284) | Marvin Gießing
* Fix ppc64le build issue
* Added support to detect ppc64* processors
2023-05-02 | llama : allow 0 as a seed number (#1275) | Robert Brisita
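Previously a seed of 0 was treated as "unset", so it could never be chosen deliberately; the commit makes the random path trigger only on negative seeds. Schematically (the exact condition and field names in the PR may differ):

    #include <ctime>
    #include <cstdint>

    // Before: seed <= 0 meant "pick a random seed", so 0 was unusable.
    // After:  only a negative seed is treated as "unset".
    static int32_t resolve_seed(int32_t seed) {
        if (seed < 0) {             // was: seed <= 0
            seed = (int32_t) time(nullptr);
        }
        return seed;                // 0 is now a valid, reproducible seed
    }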
2023-05-02 | main : switch input_noecho to input_echo to remove negation (#979) | Ron Evans
Signed-off-by: deadprogram <ron@hybridgroup.com>
2023-05-02 | ggml: add names to tensors (#1268) | slaren
* ggml: add names to tensors
* minor improvements to dot file formatting
2023-05-01 | Add git-based build information for better issue tracking (#1232) | DannyDaemonic
* Add git-based build information for better issue tracking
* macOS fix
* "build (hash)" and "CMAKE_SOURCE_DIR" changes
* Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages
* Fix conditional dependency on missing target
* Broke out build-info.cmake, added find_package fallback, added build info to all examples, added dependencies to Makefile
* 4 space indenting for cmake, attempt to clean up my mess in Makefile
* Short hash, less fancy Makefile, and don't modify build-info.h if it wouldn't change it
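The generated header exposes the commit as compile-time constants, so every binary can report which build produced a given log. Assuming the header defines BUILD_NUMBER and BUILD_COMMIT (treat these names as illustrative, not confirmed by this log):

    #include <cstdio>
    #include "build-info.h"   // generated at build time from git metadata

    static void print_build_info() {
        // Lets bug reports pin down the exact commit that produced the output.
        std::fprintf(stderr, "build = %d (%s)\n", BUILD_NUMBER, BUILD_COMMIT);
    }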
2023-05-01 | cuBLAS: refactor and optimize f16 mat mul performance (#1259) | slaren
* cuBLAS: refactor, convert fp16 to fp32 on device
* cuBLAS: use multiple streams, choose smartly between mul_mat_q and mul_mat_f16
* fix build
* cuBLAS: update block_q5_1
2023-05-01 | llama : update stubs for systems without mmap and mlock (#1266) | xloem
Co-authored-by: John Doe <john.doe@example.com>
2023-05-01 | ggml : fix ggml_used_mem() (#1264) | Kerfuffle
2023-05-01 | llama : fix session load / save (#1263) | Georgi Gerganov
2023-05-01 | cuBLAS: fall back to pageable memory if pinned alloc fails (#1233) | slaren
* cuBLAS: fall back to pageable memory if pinned alloc fails
* cuBLAS: do not use pinned memory if env variable GGML_CUDA_NO_PINNED is set
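Pinned (page-locked) host memory speeds up host-to-device copies, but cudaMallocHost can fail on constrained systems. The fallback pattern described above looks roughly like this (a sketch; only the GGML_CUDA_NO_PINNED variable is confirmed by the commit message):

    #include <cuda_runtime.h>
    #include <cstdlib>

    // Try page-locked memory for fast async H2D copies; fall back to plain
    // malloc (pageable) if the allocation fails or the user opted out.
    static void * host_alloc(size_t size, bool * pinned) {
        void * ptr = nullptr;
        if (std::getenv("GGML_CUDA_NO_PINNED") == nullptr &&
            cudaMallocHost(&ptr, size) == cudaSuccess) {
            *pinned = true;
            return ptr;
        }
        *pinned = false;
        return std::malloc(size);   // pageable fallback: slower copies, but it works
    }

    static void host_free(void * ptr, bool pinned) {
        if (pinned) { cudaFreeHost(ptr); } else { std::free(ptr); }
    }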
2023-05-01 | llama : let context be const when accessing const data (#1261) | Alex Klinkhamer
2023-04-30 | ggml : fix UB (int << 31) | Georgi Gerganov
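For a 32-bit int, shifting 1 into the sign bit is undefined behavior in C and C++; the standard fix is to shift an unsigned operand:

    #include <cstdint>

    // Undefined behavior: 1 << 31 overflows a signed 32-bit int.
    // uint32_t mask_bad = 1 << 31;

    // Well-defined: shift an unsigned operand instead.
    uint32_t mask = 1u << 31;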
2023-04-30 | build: add armv{6,7,8} support to cmake (#1251) | Pavol Rusnak
- flags copied from Makefile
- updated comments in both CMakeLists.txt and Makefile to match reality
2023-04-30 | common : better default number of threads (#934) | jon-chuang
* commit
* fix
* try-catch
* apply code review
* improve
* improve
* add macos headers
* done
* remove color
* fix windows
* minor
* fix
* Apply suggestions from code review
Co-authored-by: DannyDaemonic <DannyDaemonic@gmail.com>
* remove
* minor
* minor
Co-authored-by: jon-chuang <jon-chuang@users.noreply.github.com>
Co-authored-by: DannyDaemonic <DannyDaemonic@gmail.com>
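The heuristic this PR targets: default n_threads to the number of physical cores rather than logical (SMT) threads, since the compute loop gains little from hyperthreading. A portable approximation; the actual detection reads platform APIs, and the halving below is a crude stand-in, not the PR's exact logic:

    #include <thread>
    #include <algorithm>

    // Rough default: prefer physical cores over logical (SMT) threads.
    // hardware_concurrency() reports logical threads; halving is a crude
    // stand-in for real physical-core detection.
    static unsigned default_n_threads() {
        unsigned logical = std::thread::hardware_concurrency();
        if (logical == 0) return 4;            // detection failed; pick something sane
        return std::max(1u, logical / 2);
    }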
2023-04-30 | ggml : add CLBlast q5_0, q5_1, q8_0 dequant kernels (#1225) | 0cc4m
* Implement q5_0, q5_1 and q8_0
* Work around q5_0 OpenCL issue
* Fix q8_0 dequant kernel
* Move cl kernels into ggml-opencl.c
* Use two memcpy calls for q5_0 buffer transfer
2023-04-30 | ggml : add Q5 WASM SIMD + GGML_FTYPE | Georgi Gerganov
2023-04-30 | Various fixes to mat_mul benchmark (#1253) | Stephan Walter
2023-04-30 | ggml : fix labels for GGML_OP_ALIBI | Georgi Gerganov
2023-04-29 | ggml : fix 32-bit ARM NEON | Georgi Gerganov
2023-04-29 | ggml : use vzip instead of vuzp for consistency | Georgi Gerganov
2023-04-29 | ggml : fix visibility and unused warnings | Georgi Gerganov
2023-04-29 | ggml : fix #if for f32_f32 mul_mat (CLBlast) (#1229) | Georgi Gerganov
2023-04-29 | ggml : adjust mul_mat_f16 work memory (#1226) | Georgi Gerganov
* llama : minor - remove explicit int64_t cast
* ggml : reduce memory buffer for F16 mul_mat when not using cuBLAS
* ggml : add asserts to guard for incorrect wsize
2023-04-29 | build : fix reference to old llama_util.h | Georgi Gerganov
2023-04-29 | examples : fix save-load-state + rename llama-util.h | Georgi Gerganov
2023-04-29 | common : change default parameters to pre-#1126 (#1223) | Georgi Gerganov
2023-04-29 | llama : new sampling algorithms (#1126) | Ivan Stepanov
* Sample interface, new samplers.
  New samplers:
  - locally typical sampling
  - tail free sampling
  - frequency and presence penalty
  - mirostat
  Ignore EOS fix: -inf should be used.
* mirostat
* Added --logit-bias and --no-penalize-nl, removed std::span
* Use C++11, clarify llama API documentation, rename Mirostat parameters to --mirostat_lr and --mirostat_ent, add temperature sampling for Mirostat, simplify Mirostat sampling API parameters (removed N and *k)
* Save and load example adjust
* Tests
* Windows build fix
* Windows test fix
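Of the new samplers, frequency and presence penalties are the simplest to state: each candidate's logit is reduced by count * alpha_frequency, plus alpha_presence if the token has appeared at all (the OpenAI-style definition these samplers follow). A standalone sketch, not the llama.cpp API:

    #include <unordered_map>
    #include <vector>
    #include <cstdint>

    // Apply OpenAI-style frequency/presence penalties to raw logits.
    // counts[tok] = how many times `tok` appeared in the generated context.
    static void apply_freq_presence_penalty(
            std::vector<float> & logits,
            const std::unordered_map<int32_t, int> & counts,
            float alpha_frequency,
            float alpha_presence) {
        for (const auto & [tok, count] : counts) {
            if (tok >= 0 && (size_t) tok < logits.size() && count > 0) {
                logits[tok] -= count * alpha_frequency   // scales with repetitions
                             + alpha_presence;           // flat penalty for presence
            }
        }
    }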
2023-04-29 | cuBLAS: use host pinned memory and dequantize while copying (#1207) | slaren
* cuBLAS: dequantize simultaneously while copying memory
* cuBLAS: use host pinned memory
* cuBLAS: improve ggml_compute_forward_mul_mat_f16_f32 with pinned memory
* cuBLAS: also pin kv cache
* fix rebase
2023-04-29 | cuBLAS: non-contiguous tensor support (#1215) | Henri Vasserman
* Cuda: non-contiguous tensor support
* remove extra stuff
* rename
* fix error
* more fixes, now OpenBLAS and CLBlast build too
* now then?
2023-04-28 | Remove Q4_3 which is no better than Q5 (#1218) | Stephan Walter
2023-04-28 | readme : update hot topics | Georgi Gerganov
2023-04-28 | ggml : sync ggml (ggml_alibi) | Georgi Gerganov