fixes #1363
* llama : require first token to be BOS
* scripts : add ppl-run-all.sh
* perplexity : add BOS for each chunk
* readme : update perplexity values after BOS fix
* perplexity : add clarifying comments
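The per-chunk BOS handling can be sketched like this (a minimal illustration with a hypothetical `chunk_with_bos` helper, not the actual perplexity code; the BOS token id of 1 is an assumption):

```python
def chunk_with_bos(tokens, n_ctx, bos_id=1):
    """Split a token stream into n_ctx-sized chunks, forcing BOS first."""
    chunks = []
    for i in range(0, len(tokens), n_ctx):
        chunk = tokens[i:i + n_ctx]
        # Overwrite the first position so every chunk starts with BOS,
        # matching the condition the model saw during training.
        chunk[0] = bos_id
        chunks.append(chunk)
    return chunks
```

Overwriting the first token, rather than prepending one, keeps every chunk exactly `n_ctx` tokens long, so context sizes stay consistent across chunks.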
* when loading a safetensors file, ignore the metadata header
* check for safetensors files first, and only use PyTorch versions when safetensors aren't available
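The safetensors layout makes ignoring the metadata straightforward: the file starts with an 8-byte little-endian header length followed by that many bytes of JSON, whose optional `"__metadata__"` key is the part being skipped. A minimal sketch of the idea (hypothetical function name, not the converter's actual code):

```python
import json
import struct

def read_safetensors_tensors(path):
    """Return the tensor table from a safetensors file, dropping the
    optional "__metadata__" entry."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian uint64 length of the JSON header.
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    header.pop("__metadata__", None)  # ignore the metadata header
    return header
```

Preferring safetensors over the PyTorch files is then a plain existence check on the `.safetensors` path before falling back to the `.bin` version.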
* Add OpenCL and CLBlast support
* Add OpenBLAS support
* Remove testing from matrix
* change build name to 'clblast'
Minor edit in ggml.c; previously, OpenCL would fail to load entirely if GGML_USE_ACCELERATE was defined.
Minor speedup in prompt eval time.
This commit ports a detection method from koboldcpp's Makefile to automatically set the -lcblas option on Arch Linux.
* Line 698 has a stray @staticmethod that should be removed;
otherwise unpickler.load() throws an error because the method is not callable
* Update convert.py
---------
Co-authored-by: Ivan Stepanov <ivanstepanovftw@gmail.com>
(#1301)
Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
* adding --in-suffix option
* print input suffix before generation
* change immintrin.h to intrin.h for compatibility
Building on Windows 11 ARM throws an error on this line; intrin.h appears to cover both x86 and ARM
* conditionally include intrin.h
* fix typo in ggml.c
* fix reverse prompt and multi-line input
* Code Formatting
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
https://github.com/ggerganov/ggml/pull/127#issuecomment-1533648531
* python script to verify the checksum of the llama models
Added a Python script for verifying the SHA256 checksums of files in a directory; it runs on multiple platforms. Improved the formatting of the output results for better readability.
* Update README.md
update to the readme for improved readability and to explain the usage of the python checksum verification script
* update the verification script
I've extended the script based on suggestions by @prusnak.
The script now checks the available RAM; if there is enough to read the file at once it will do so, otherwise the file is read in chunks.
* minor improvement
Small change so that the available RAM is checked rather than the total RAM.
* remove the part of the code that reads the file at once if enough RAM is available
Based on suggestions from @prusnak, I removed the part of the code that checks whether the user has enough RAM to read the entire model at once; the file is now always read in chunks.
* Update verify-checksum-models.py
quick fix to pass the git check
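The chunked hashing the script settled on is essentially the following (a sketch with a hypothetical function name, not the script itself):

```python
import hashlib

def sha256_file(path, chunk_size=1024 * 1024):
    """Hash a file in fixed-size chunks so even multi-gigabyte model
    files never need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() keeps calling f.read(chunk_size) until it returns b"" at EOF.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Reading in chunks gives the same digest as hashing the whole file at once, which is why the single-read code path could be dropped.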
* fix dan.txt
* miku prompt improvements
* use common characters
* llama : only copy used KV cache in get / set state
* switch to ggml for copying k, v
* avoid designated initializers
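The idea behind copying only the used cache can be sketched with plain Python lists standing in for the K/V tensors (hypothetical names; the real code copies via ggml tensor ops):

```python
def save_kv_state(k_cache, v_cache, n_used):
    """Serialize only the populated prefix of the KV cache, not the
    full allocated context's worth of cells."""
    return {"n": n_used,
            "k": list(k_cache[:n_used]),
            "v": list(v_cache[:n_used])}

def load_kv_state(state, k_cache, v_cache):
    """Restore the saved prefix into pre-allocated cache buffers."""
    n = state["n"]
    k_cache[:n] = state["k"]
    v_cache[:n] = state["v"]
    return n
```

Copying only the first `n_used` cells keeps saved states small when the context is mostly empty.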
* make git build info work with submodules
---------
Co-authored-by: Green Sky <green@g-s.xyz>
Signed-off-by: deadprogram <ron@hybridgroup.com>
Signed-off-by: deadprogram <ron@hybridgroup.com>
* Fix ppc64le build issue
* Added support to detect ppc64* processors
Signed-off-by: deadprogram <ron@hybridgroup.com>
* ggml: add names to tensors
* minor improvements to dot file formatting
* Add git-based build information for better issue tracking
* macOS fix
* "build (hash)" and "CMAKE_SOURCE_DIR" changes
* Redo "CMAKE_CURRENT_SOURCE_DIR" and clearer build messages
* Fix conditional dependency on missing target
* Broke out build-info.cmake, added a find_package fallback, added build info to all examples, and added dependencies to the Makefile
* 4 space indenting for cmake, attempt to clean up my mess in Makefile
* Short hash, less fancy Makefile, and don't modify build-info.h if it wouldn't change it
* cuBLAS: refactor, convert fp16 to fp32 on device
* cuBLAS: use multiple streams, choose smartly between mul_mat_q and mul_mat_f16
* fix build
* cuBLAS: update block_q5_1
Co-authored-by: John Doe <john.doe@example.com>
* cuBLAS: fall back to pageable memory if pinned alloc fails
* cuBLAS: do not use pinned memory if env variable GGML_CUDA_NO_PINNED is set