2023-05-08  readme : add notice about upcoming breaking change  (Georgi Gerganov)

2023-05-08  readme : add TOC and Pygmalion instructions (#1359)  (AlpinDale)

2023-05-08  llama : fix hparams shadow (#1367)  (Pavol Rusnak)
    fixes #1363

2023-05-08  llama : require first token to be BOS (#1303)  (Georgi Gerganov)
    * llama : require first token to be BOS
    * scripts : add ppl-run-all.sh
    * perplexity : add BOS for each chunk
    * readme : update perplexity values after BOS fix
    * perplexity : add clarifying comments
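    A rough Python sketch of the per-chunk BOS idea from this change: when scoring
    perplexity over a long token stream, each evaluation chunk is treated like a
    fresh prompt and gets BOS prepended. The function name, bos_id, and chunking
    scheme are illustrative, not the actual perplexity code.

    ```python
    def make_ppl_chunks(tokens: list[int], bos_id: int, n_ctx: int) -> list[list[int]]:
        """Split a token stream into context-sized chunks, each starting with BOS."""
        chunks = []
        step = n_ctx - 1  # leave room for the prepended BOS token
        for i in range(0, len(tokens), step):
            chunks.append([bos_id] + tokens[i : i + step])
        return chunks
    ```
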
2023-05-08  convert: add ability to convert safetensors files (#1276)  (ubik2)
    * when loading a safetensors file, ignore the metadata header
    * check for safetensors files first, and only use PyTorch versions when safetensors aren't available
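    For context, a minimal sketch of reading a safetensors header in Python: the
    file starts with an 8-byte little-endian length followed by that many bytes of
    JSON describing each tensor; the optional "__metadata__" entry is the part the
    converter can ignore. The helper name is illustrative.

    ```python
    import json
    import struct

    def read_safetensors_header(path):
        """Read just the JSON header of a .safetensors file."""
        with open(path, "rb") as f:
            (header_len,) = struct.unpack("<Q", f.read(8))  # 8-byte LE length
            header = json.loads(f.read(header_len))
        header.pop("__metadata__", None)  # free-form metadata; safe to ignore
        return header  # name -> {"dtype", "shape", "data_offsets"}
    ```
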
2023-05-08  Documented CUDA reproducibility, added warning (#1346)  (Johannes Gäßler)

2023-05-07  CI: add Windows CLBlast and OpenBLAS builds (#1277)  (Henri Vasserman)
    * Add OpenCL and CLBlast support
    * Add OpenBLAS support
    * Remove testing from matrix
    * Change build name to 'clblast'

2023-05-06  ggml : Allow usage of CLBlast alongside Accelerate.framework (#1336)  (swittk)
    Minor edit in ggml.c, which originally would prevent OpenCL from loading completely if GGML_USE_ACCELERATE was defined. Minor speedup in prompt eval time.

2023-05-06  Remove default arguments from sampling functions (#1343)  (Jed Fox)

2023-05-05  makefile: automatic Arch Linux detection (#1332)  (DaniAndTheWeb)
    This commit ports a detection method used in koboldcpp's Makefile to automatically set the -lcblas option on Arch Linux.

2023-05-05  ci : add cublas to windows release (#1271)  (Erik Scholz)

2023-05-05  readme: add missing info (#1324)  (Pavol Rusnak)

2023-05-05  Fix for OpenCL / CLBlast builds on macOS. (#1329)  (Ionoclast Laboratories)

2023-05-05  Convert.py @staticmethod (#1327)  (Benjamin Lecaillon)
    * Line 698 had a commented-out #staticmethod; with the @staticmethod decorator restored, unpickler.load() no longer throws a "not callable" error
    * Update convert.py

    Co-authored-by: Ivan Stepanov <ivanstepanovftw@gmail.com>

2023-05-05  quantize: make output filename optional, default to ggml-model-<ftype>.bin (#1301)  (slaren)
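    A small Python sketch of the default-naming behavior described above; the
    helper and the ftype naming are illustrative, not the actual quantize tool's
    code.

    ```python
    from pathlib import Path

    def default_output_path(model_path: str, ftype_name: str) -> Path:
        # When the user gives no output filename, place
        # ggml-model-<ftype>.bin next to the input model.
        return Path(model_path).parent / f"ggml-model-{ftype_name}.bin"

    # e.g. default_output_path("models/7B/ggml-model-f16.bin", "q4_0")
    #      -> models/7B/ggml-model-q4_0.bin
    ```
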
2023-05-04  Wrap exceptions in std::exception to produce verbose output on exception (#1316)  (Ivan Stepanov)

2023-05-04  convert: support DT_BF16 tensors (#1309)  (Ivan Stepanov)
    Co-authored-by: Pavol Rusnak <pavol@rusnak.io>

2023-05-04  readme : add OpenBuddy link (#1321)  (44670)

2023-05-04  main : add --in-suffix option (#1318)  (44670)
    * adding --in-suffix option
    * print input suffix before generation

2023-05-04  ggml : change immintrin.h to intrin.h for compatibility (#1307)  (Ron Jailall)
    * change immintrin.h to intrin.h for compatibility; building on Windows 11 ARM throws an error on this line, and intrin.h seems to cover both x86 and ARM
    * conditional def of intrin.h
    * fix typo in ggml.c

2023-05-04  Only escape prompts when used with `-e` (#1311)  (DannyDaemonic)

2023-05-04  Update main's README.md with new features (#1296)  (DannyDaemonic)

2023-05-04  fix #1224: reverse prompt and multi-line input (#1297)  (Tomas)
    * fix reverse prompt and multi line
    * code formatting

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

2023-05-03  ggml : vectorize Q8_0 quantization  (Georgi Gerganov)
    https://github.com/ggerganov/ggml/pull/127#issuecomment-1533648531

2023-05-03  examples : read chat prompts from a template file (#1196)  (khimaros)

2023-05-03  minor : fix whitespaces (#1302)  (Georgi Gerganov)

2023-05-03  minor : fix trailing whitespaces  (Georgi Gerganov)

2023-05-03  scripts : platform-independent script to verify sha256 checksums (#1203)  (KASR)
    * Add a Python script to verify the SHA256 checksums of the llama model files in a directory; runs on multiple platforms, with output formatted for readability
    * Update README.md to explain usage of the checksum verification script
    * Extend the script based on suggestions by @prusnak: check the available RAM, and if there is enough, read the file at once; otherwise read it in chunks
    * Minor improvement: check the available RAM rather than the total RAM
    * Remove the read-at-once path: based on suggestions from @prusnak, the file is now always read in chunks
    * Quick fix to verify-checksum-models.py to pass the git check
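    A minimal sketch of the chunked hashing approach the script settled on, in
    Python; the chunk size here is illustrative.

    ```python
    import hashlib

    def sha256_of_file(path: str, chunk_size: int = 1024 * 1024) -> str:
        """Hash a file in fixed-size chunks so models larger than
        available RAM can still be verified."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()
    ```
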
2023-05-03  examples : various prompt and example fixes (#1298)  (CRD716)
    * fix dan.txt
    * miku prompt improvements
    * use common characters

2023-05-02  llama : only copy used KV cache in get / set state (#1272)  (Evan Jones)
    * llama : only copy used KV cache in get / set state
    * switch to ggml for copying k, v
    * avoid designated initializers
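    A Python schematic of the idea (illustrative shapes and names, not the actual
    llama.cpp state format): the cache is allocated for the full context, but only
    the first n_used positions hold data, so state save/load can skip the rest.

    ```python
    import numpy as np

    def copy_used_kv(k: np.ndarray, v: np.ndarray, n_used: int):
        """k and v are shaped (n_ctx, n_embd); only the first n_used rows
        are occupied, so only those need to be serialized."""
        return k[:n_used].copy(), v[:n_used].copy()
    ```
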
2023-05-02  Process escape sequences given in prompts (#1173)  (DannyDaemonic)
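    A minimal Python sketch of the kind of unescaping this adds: literal
    two-character sequences like '\n' typed on the command line become the control
    characters they name. The supported set here is illustrative; per the
    2023-05-04 entry above, this is only applied when the user passes -e.

    ```python
    def process_escapes(text: str) -> str:
        """Turn literal backslash escapes in a prompt into real characters."""
        escapes = {"n": "\n", "t": "\t", "r": "\r", "\\": "\\", '"': '"', "'": "'"}
        out, i = [], 0
        while i < len(text):
            if text[i] == "\\" and i + 1 < len(text) and text[i + 1] in escapes:
                out.append(escapes[text[i + 1]])
                i += 2
            else:
                out.append(text[i])
                i += 1
        return "".join(out)
    ```
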
2023-05-02  Handle signals properly on Windows (#1123)  (DannyDaemonic)

2023-05-02  Call sh on build-info.sh (#1294)  (DannyDaemonic)

2023-05-03  fix build-info.h for git submodules (#1289)  (kuvaus)
    * make git build info work with submodules

    Co-authored-by: Green Sky <green@g-s.xyz>

2023-05-03  fix missing parameters in `llama_init_from_gpt_params` (#1293)  (slaren)

2023-05-02  examples : add llama_init_from_gpt_params() common function (#1290)  (Ron Evans)
    Signed-off-by: deadprogram <ron@hybridgroup.com>

2023-05-02  llama : fix compile warnings  (Georgi Gerganov)

2023-05-02  ggml : fix 32-bit ARM  (Georgi Gerganov)

2023-05-02  examples : improve vertical alignment of a few variables (#1286)  (Ron Evans)
    Signed-off-by: deadprogram <ron@hybridgroup.com>

2023-05-02  ggml : fix ppc64le build error and make cmake detect Power processors (#1284)  (Marvin Gießing)
    * Fix ppc64le build issue
    * Added support to detect ppc64* processors

2023-05-02  llama : allow 0 as a seed number. (#1275)  (Robert Brisita)
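    A small Python sketch of what this implies: random seeding is requested with a
    negative sentinel instead of 0, so 0 becomes an ordinary, reproducible seed.
    The sentinel value and function are illustrative.

    ```python
    import time

    RANDOM_SEED_SENTINEL = -1  # illustrative: "pick a seed for me"

    def resolve_seed(seed: int) -> int:
        # Previously 0 doubled as the "random" sentinel, so a user could
        # never actually run with seed 0; a negative sentinel frees it up.
        if seed < 0:
            return int(time.time())
        return seed
    ```
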
2023-05-02  main : switch input_noecho to input_echo to remove negation (#979)  (Ron Evans)
    Signed-off-by: deadprogram <ron@hybridgroup.com>

2023-05-02  ggml: add names to tensors (#1268)  (slaren)
    * ggml: add names to tensors
    * minor improvements to dot file formatting

2023-05-01  Add git-based build information for better issue tracking (#1232)  (DannyDaemonic)
    * Add git-based build information for better issue tracking
    * macOS fix
    * "build (hash)" and "CMAKE_SOURCE_DIR" changes
    * Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages
    * Fix conditional dependency on missing target
    * Break out build-info.cmake, add a find_package fallback, add build info to all examples, add dependencies to Makefile
    * 4-space indenting for CMake, attempt to clean up my mess in the Makefile
    * Short hash, less fancy Makefile, and don't modify build-info.h if it wouldn't change it

2023-05-01  cuBLAS: refactor and optimize f16 mat mul performance (#1259)  (slaren)
    * cuBLAS: refactor, convert fp16 to fp32 on device
    * cuBLAS: use multiple streams, choose smartly between mul_mat_q and mul_mat_f16
    * fix build
    * cuBLAS: update block_q5_1

2023-05-01  llama : update stubs for systems without mmap and mlock (#1266)  (xloem)
    Co-authored-by: John Doe <john.doe@example.com>

2023-05-01  ggml : fix ggml_used_mem() (#1264)  (Kerfuffle)

2023-05-01  llama : fix session load / save (#1263)  (Georgi Gerganov)

2023-05-01  cuBLAS: fall back to pageable memory if pinned alloc fails (#1233)  (slaren)
    * cuBLAS: fall back to pageable memory if pinned alloc fails
    * cuBLAS: do not use pinned memory if env variable GGML_CUDA_NO_PINNED is set
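    A Python schematic of the fallback logic described above; the allocator
    callables are stand-ins for the real CUDA host-allocation calls, and only the
    control flow and the GGML_CUDA_NO_PINNED variable come from the commit.

    ```python
    import os

    def alloc_host_buffer(size, pinned_alloc, pageable_alloc):
        """Try pinned (page-locked) memory first; fall back to ordinary
        pageable memory if the allocation fails or the user opted out
        via GGML_CUDA_NO_PINNED. Both allocators are illustrative."""
        if os.environ.get("GGML_CUDA_NO_PINNED") is None:
            try:
                return pinned_alloc(size), True   # pinned
            except MemoryError:
                pass                              # fall through to pageable
        return pageable_alloc(size), False
    ```
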
2023-05-01  llama : let context be const when accessing const data (#1261)  (Alex Klinkhamer)