Age | Commit message | Author
|
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"
Hope I didn't break anything!
|
* Retire the ggml_mul_mat() branch for transposed src0
- It can always be made contiguous with ggml_cpy() (see the sketch below)
- The code is now simplified
- The results are deterministic with respect to the number of threads
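A minimal sketch of that contiguity pattern, assuming the ggml API of this period (`mul_mat_transposed` is a hypothetical helper, not part of ggml):

```cpp
#include "ggml.h"

// Instead of a special mul_mat branch for transposed src0, materialize the
// transposed view as a contiguous tensor first, then take the normal path.
struct ggml_tensor * mul_mat_transposed(struct ggml_context * ctx,
                                        struct ggml_tensor  * src0,
                                        struct ggml_tensor  * src1) {
    // ggml_transpose() only creates a view; ggml_cpy() into a fresh tensor
    // makes the data contiguous
    struct ggml_tensor * t = ggml_transpose(ctx, src0);
    struct ggml_tensor * c = ggml_cpy(ctx, t,
            ggml_new_tensor_2d(ctx, GGML_TYPE_F32, t->ne[0], t->ne[1]));
    return ggml_mul_mat(ctx, c, src1);
}
```
|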
* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)
* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON
* Fix dequantization - forgot to interleave the quants
|
Prefix user inputs with a string
|
* File load progress reporting
* Move llama_progress_handler into llama_context_params
* Renames
* Use seekg to find file size instead
* More accurate load progress reporting
* Call progress callback more frequently
* Fix typo
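A rough sketch of how such a callback could be wired up, assuming the `progress_callback` / `progress_callback_user_data` fields this entry describes:

```cpp
#include <cstdio>
#include "llama.h"

// load-progress handler invoked periodically while the model file is read
static void on_load_progress(float progress, void * user_data) {
    (void) user_data;
    fprintf(stderr, "\rloading model: %3.0f%%", progress * 100.0f);
}

int main() {
    llama_context_params lparams = llama_context_default_params();
    lparams.progress_callback           = on_load_progress;
    lparams.progress_callback_user_data = nullptr;

    llama_context * ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin", lparams);
    if (ctx == nullptr) {
        return 1;
    }
    llama_free(ctx);
    return 0;
}
```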
|
`llama_sample_top_p_top_k` was missing the `struct` annotation on line 126.
This causes a compile error when the header is parsed by the Kotlin C interop generator.
This commit fixes the issue by adding the missing `struct` annotation.
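For reference, the fixed declaration looks roughly like this (the parameter list is approximate, reconstructed from the llama.h of this period):

```cpp
// before (works in C++, but a pure-C parser only knows llama_context as a struct tag):
//     LLAMA_API llama_token llama_sample_top_p_top_k(llama_context * ctx, ...);

// after (valid in both C and C++):
LLAMA_API llama_token llama_sample_top_p_top_k(
        struct llama_context * ctx,   // <- the added "struct" keyword
        const llama_token * last_n_tokens_data,
        int   last_n_tokens_size,
        float top_p,
        int   top_k,
        float temp,
        float repeat_penalty);
```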
|
(#476)
|
* Reduce memory usage and allocate enough memory for large contexts
* Simpler scratch buffer usage
* Reenable BLAS for quantized mul_mat
* Fix number of layers in 30B and 65B
* Fix KV cache size for F32
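Back-of-the-envelope arithmetic for the KV cache point, as a hypothetical helper (the dimensions are illustrative LLaMA-7B-like values, not taken from the commit):

```cpp
#include <cstddef>
#include <cstdio>

// The KV cache stores one key and one value vector per layer per context
// position, so the element type directly scales its size.
static size_t kv_cache_bytes(size_t n_layer, size_t n_ctx, size_t n_embd, size_t elt_size) {
    return 2 * n_layer * n_ctx * n_embd * elt_size; // 2x: keys and values
}

int main() {
    // LLaMA-7B-like dimensions: 32 layers, n_ctx 2048, n_embd 4096
    printf("F16 KV cache: %zu MiB\n", kv_cache_bytes(32, 2048, 4096, 2) >> 20); // 1024 MiB
    printf("F32 KV cache: %zu MiB\n", kv_cache_bytes(32, 2048, 4096, 4) >> 20); // 2048 MiB
    return 0;
}
```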
|
Added explicit **bolded** instructions clarifying that people need to request access to the models from Facebook, never through this repo.
|
Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.
|
* Support calling mlock() on loaded model data on Linux and macOS
This is enabled by a new --mlock command line option.
Using mlock() disables swapping and memory compression for the model
data. Doing so can be useful on systems where the model takes up a
large fraction of system RAM. In my experience, macOS is quite eager to
start compressing llama.cpp's memory, which then makes it halt for a few
seconds while it decompresses, even with a model that uses "only" 25GB
out of 32GB.
Of course, this comes at the cost of forcing the system to swap or
compress other processes' memory instead, so it needs to be used with
care and shouldn't be enabled by default.
In theory it should be possible to support this on Windows as well using
VirtualLock(), but I'm not much of a Windows user.
* Update llama.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
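A minimal sketch of the underlying mechanism (plain POSIX, not the actual llama.cpp code):

```cpp
#include <sys/mman.h>
#include <cstdio>

// Pin the mapped model pages in RAM so the OS will neither swap nor
// compress them; this is what --mlock enables on Linux/macOS.
static bool try_mlock(const void * addr, size_t len) {
    if (mlock(addr, len) != 0) {
        // typically fails with EPERM/ENOMEM when RLIMIT_MEMLOCK is too low
        perror("warning: mlock failed");
        return false;
    }
    return true;
}
```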
|
* working but ugly
* add arg flag, not working on embedding mode
* typo
* Working! Thanks to @nullhook
* make params argument instead of hardcoded boolean. remove useless time check
* start doing the instructions but not finished. This probably doesn't compile
* Embeddings extraction support
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
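A rough usage sketch, assuming the C API of this period (the model path and thread count are placeholders):

```cpp
#include <cstdio>
#include <vector>
#include "llama.h"

int main() {
    llama_context_params lparams = llama_context_default_params();
    lparams.embedding = true; // the new flag added by this change

    llama_context * ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin", lparams);
    if (ctx == nullptr) {
        return 1;
    }

    // tokenize a prompt and evaluate it
    std::vector<llama_token> tokens(64);
    const int n = llama_tokenize(ctx, "Hello world", tokens.data(), (int) tokens.size(), true);
    if (n < 0) {
        return 1;
    }
    llama_eval(ctx, tokens.data(), n, /*n_past=*/0, /*n_threads=*/4);

    // read back the embedding vector
    const float * emb    = llama_get_embeddings(ctx);
    const int     n_embd = llama_n_embd(ctx);
    printf("got a %d-dimensional embedding, emb[0] = %f\n", n_embd, emb[0]);

    llama_free(ctx);
    return 0;
}
```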
|
This reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df.
Will provide the correct fix later
|
Should make results reproducible across different numbers of threads and batch sizes
|
interactive mode (#333)
* Improve interactive mode's coherence after EOS
Aims to improve coherence and ability to resume the interactive session when the user is given input back after an end of text token is reached.
Not sure what token 13 is or why it seems to help. See conversation for examples.
* Make newline token a constant
* dynamically determine newline token
* relocate previous newline token const
* cleanup whitespace
* print a new line on end of text in interactive
this may need to be looked into further when not using a reverse prompt
* only print manual newline with reverse prompt
fix formatting of reverse prompts so they don't end up at the end of the current line, without introducing unnecessary new lines otherwise
* alternate approach to replace end of text tokens
* Inject the reverse prompt again after eos in interactive mode
* tokenize reverse prompt when needed
makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330
* tokenize and inject only first reverse prompt
thanks to tjohnman
* tokenize first reverse prompt once
* add newline token
* add newline token
* tokenize/inject reverse prompt for refactor
this doesn't seem right though
* tokenize nothing for antiprompt if no reverse
* Update main.cpp
* Update main.cpp
* tokenize and inject reverse prompt as needed
this doesn't seem to work if the reverse prompt is tokenized outside earlier on
* not needed
* remove newline token
* remove newline token
* tokenize newline token
* add space to comment
* Update main.cpp
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
---------
Co-authored-by: Slaren <2141330+slaren@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
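Token 13 is LLaMA's newline token ("\n"), which is why injecting it helps the model treat the turn as finished. A condensed sketch of the final approach (`handle_eos` is a hypothetical condensation of the main.cpp logic; `tokenize` approximates the helper used by the examples):

```cpp
#include <string>
#include <vector>
#include "llama.h"

// approximation of the examples' wrapper around llama_tokenize()
static std::vector<llama_token> tokenize(llama_context * ctx, const std::string & text, bool add_bos) {
    std::vector<llama_token> res(text.size() + (add_bos ? 1 : 0));
    const int n = llama_tokenize(ctx, text.c_str(), res.data(), (int) res.size(), add_bos);
    res.resize(n > 0 ? n : 0);
    return res;
}

static void handle_eos(llama_context * ctx,
                       std::vector<llama_token> & embd,
                       const std::vector<std::string> & antiprompts) {
    // replace EOS with a newline (token 13 in the LLaMA vocab) so the
    // interactive session stays coherent instead of stopping dead
    embd.back() = tokenize(ctx, "\n", false).front();

    if (!antiprompts.empty()) {
        // tokenize and inject only the first reverse prompt,
        // so control returns to the user
        const auto ap = tokenize(ctx, antiprompts.front(), false);
        embd.insert(embd.end(), ap.begin(), ap.end());
    }
}
```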
|
* Fix GPTQ converter
* Fix comment
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
|
* Generate library with CMake
Add a BUILD_SHARED_LIBS option so the llama library can be built as a shared library.
* Turn on POSITION_INDEPENDENT_CODE when BUILD_SHARED_LIBS is ON
|
* command line args bounds checking
* unknown and invalid param exit codes 0 -> 1
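The shape of the fix, as a simplified sketch (the flag names are illustrative, not the full option set):

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

static void parse_args(int argc, char ** argv, int & n_ctx) {
    for (int i = 1; i < argc; ++i) {
        const std::string arg = argv[i];
        if (arg == "-c" || arg == "--ctx_size") {
            if (++i >= argc) { // bounds check: flag given without a value
                fprintf(stderr, "error: missing value for %s\n", arg.c_str());
                exit(1);
            }
            n_ctx = std::atoi(argv[i]);
        } else {
            fprintf(stderr, "error: unknown argument: %s\n", arg.c_str());
            exit(1); // previously exited 0, hiding the failure from scripts
        }
    }
}
```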
|
* Revert "Delete SHA256SUMS for now (#416)"
This reverts commit 8eea5ae0e5f31238a97c79ea9103c27647380e37.
* Remove ggml files until they can be verified
* Remove alpaca json
* Add also model/tokenizer.model to SHA256SUMS + update README
---------
Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
|
* Update custom.md
* Removed Model section as it is better placed in README.md
* Updates to README.md model section
* Inserted text that was removed from issue template about obtaining models from FB and links to papers describing the various models
* Removed the IPFS download links for the Alpaca 7B models, as these look to be in the old data format and probably shouldn't be directly linked to anyway
* Updated the perplexity section to point at the Perplexity scores discussion (#406)
|
Delete this for now to avoid confusion, since it contains some wrong checksums from the old tokenizer format.
Re-add after #374 is resolved.
|
* CI: Separate Build and Test steps (CMake)
* CI: Make sure build passes before running tests (CMake)
* CI: Standardise step id names
|
Co-authored-by: Johnman <tjohnman@github>
|
* Deduplicate q4 quantization functions
* Use const; add basic test
* Re-enable quantization test
* Disable AVX2 flags in CI
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
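For orientation, the general shape of a q4_0-style reference routine that such deduplication centralizes; this is an illustrative sketch, not the exact ggml code (the scale and rounding details differed):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// each block of QK floats becomes one f32 scale plus QK packed 4-bit quants
constexpr int QK = 32;

static void quantize_block_q4_0(const float * x, float * d, uint8_t * q) {
    float amax = 0.0f; // absolute max over the block
    for (int i = 0; i < QK; ++i) {
        amax = std::max(amax, std::fabs(x[i]));
    }
    *d = amax / 7.0f; // scale so quants fit the signed 4-bit range
    const float id = (*d != 0.0f) ? 1.0f / *d : 0.0f;

    for (int i = 0; i < QK; i += 2) { // pack two 4-bit quants per byte
        const uint8_t v0 = (uint8_t) (std::lround(x[i + 0] * id) + 8);
        const uint8_t v1 = (uint8_t) (std::lround(x[i + 1] * id) + 8);
        q[i / 2] = v0 | (v1 << 4);
    }
}
```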
|