2023-03-24  Disable BLAS branch in mul_mat - seems there is a bug  (Georgi Gerganov)
2023-03-24  Immediately start processing the prompt before user input has been provided (#476)  (Georgi Gerganov)
2023-03-24  Reduce memory usage and allocate enough memory for largest context (#473)  (Georgi Gerganov)
    * Reduce memory usage and allocate enough memory for large contexts
    * Simpler scratch buffer usage
    * Reenable BLAS for quantized mul_mat
    * Fix number of layers in 30B and 65B
    * Fix KV cache size for F32
2023-03-24  Temporarily bump the memory buffer size - hopefully fix issues from 483bab2e  (Georgi Gerganov)
2023-03-24  Update README.md (#444)  (Gary Mulder)
    Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.
2023-03-24  fix instruct mode (#445)  (rabidcopy)
    Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.
2023-03-24  Properly free llama_context on failure  (Georgi Gerganov)
2023-03-24  additional optimizations for POWER9 (#454)  (Cameron Kaiser)
2023-03-24  Support calling mlock() on loaded model data on Linux and macOS (#453)  (comex)
    * Support calling mlock() on loaded model data on Linux and macOS
      This is enabled by a new --mlock command line option. Using mlock() disables swapping and memory compression for the model data. Doing so can be useful on systems where the model takes up a large fraction of system RAM. In my experience, macOS is quite eager to start compressing llama.cpp's memory, which then makes it halt for a few seconds while it decompresses, even with a model that uses "only" 25GB out of 32GB. Of course, this comes at the cost of forcing the system to swap or compress other processes' memory instead, so it needs to be used with care and shouldn't be enabled by default. In theory it should be possible to support this on Windows as well using VirtualLock(), but I'm not much of a Windows user.
    * Update llama.cpp
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
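The --mlock option described above relies on the POSIX mlock() call. A minimal standalone sketch of the idea, with a hypothetical buffer standing in for the loaded model data (the buffer name and size are illustrative, not the actual llama.cpp internals):

    // Pin a memory region so the kernel cannot swap or compress it.
    #include <sys/mman.h>   // mlock, munlock (POSIX: Linux, macOS)
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
        const size_t model_size = 64 * 1024 * 1024;      // hypothetical model buffer size
        void * model_data = std::malloc(model_size);     // stands in for the loaded model data
        if (!model_data) return 1;
        std::memset(model_data, 0, model_size);          // touch the pages so they are resident

        // Pin the pages: the kernel will neither swap nor compress this range.
        if (mlock(model_data, model_size) != 0) {
            std::perror("mlock failed (may exceed RLIMIT_MEMLOCK)");
        }

        // ... run inference over model_data ...

        munlock(model_data, model_size);                 // release the pin before freeing
        std::free(model_data);
        return 0;
    }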
2023-03-24  Add embedding mode with arg flag. Currently working (#282)  (Luciano)
    * working but ugly
    * add arg flag, not working on embedding mode
    * typo
    * Working! Thanks to @nullhook
    * make params argument instead of hardcoded boolean. remove useless time check
    * start doing the instructions but not finished. This probably doesn't compile
    * Embeddings extraction support
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-24  Add link to Roadmap discussion  (Georgi Gerganov)
2023-03-24  Revert "Fix memory allocation issues and seg faults"  (Georgi Gerganov)
    This reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df. Will provide the correct fix later.
2023-03-24  Fix memory allocation issues and seg faults  (Georgi Gerganov)
2023-03-23  Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)  (Georgi Gerganov)
    Should make results reproducible for different numbers of threads and batch sizes
2023-03-23  Fix quantize script not finding models in parent directory (#428)  (Jed Fox)
2023-03-23  Remove obsolete command from Docker script  (Georgi Gerganov)
2023-03-23  Obsolete  (Georgi Gerganov)
2023-03-23  Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)  (rabidcopy)
    * Improve interactive mode's coherence after EOS
      Aims to improve coherence and ability to resume the interactive session when the user is given input back after an end of text token is reached. Not sure what token 13 is or why it seems to help. See conversation for examples.
    * Make newline token a constant
    * dynamically determine newline token
    * relocate previous newline token const
    * cleanup whitespace
    * print a new line on end of text in interactive
      this may need to be looked into further when not using a reverse prompt
    * only print manual newline with reverse prompt
      fix formatting of reverse prompts so they don't end up at the end of the current line while not introducing unnecessary new lines otherwise
    * alternate approach to replace end of text tokens
    * Inject the reverse prompt again after eos in interactive mode
    * tokenize reverse prompt when needed
      makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330
    * tokenize and inject only first reverse prompt
      thanks to tjohnman
    * tokenize first reverse prompt once
    * add newline token
    * add newline token
    * tokenize/inject reverse prompt for refactor
      this doesn't seem right though
    * tokenize nothing for antiprompt if no reverse
    * Update main.cpp
    * Update main.cpp
    * tokenize and inject reverse prompt as needed
      this doesn't seem to work if the reverse prompt is tokenized outside earlier on
    * not needed
    * remove newline token
    * remove newline token
    * tokenize newline token
    * add space to comment
    * Update main.cpp
      Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    ---------
    Co-authored-by: Slaren <2141330+slaren@users.noreply.github.com>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
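The core idea of the commit above is to swap the end-of-text token for a newline token while in interactive mode. A rough sketch of that replacement; the token ids are illustrative stand-ins (the commit only notes that id 13 happened to be the newline token), and the real main.cpp additionally re-injects the reverse prompt:

    #include <cstdint>
    #include <cstdio>

    using llama_token = int32_t;

    // Illustrative token ids; the real values come from the model's vocabulary.
    const llama_token TOKEN_EOS     = 2;
    const llama_token TOKEN_NEWLINE = 13;

    // Replace an end-of-text token with a newline so the interactive session can
    // continue instead of having its context flushed.
    llama_token sanitize_for_interactive(llama_token id, bool interactive_mode) {
        if (interactive_mode && id == TOKEN_EOS) {
            return TOKEN_NEWLINE;
        }
        return id;
    }

    int main() {
        std::printf("%d\n", sanitize_for_interactive(TOKEN_EOS, true));  // prints 13
        return 0;
    }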
2023-03-23  Fix GPTQ converter (#423)  (Timmy Knight)
    * Fix GPTQ converter
    * Fix comment
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-23  Generate library with CMake (#430)  (nusu-github)
    * Generate library with CMake
      BUILD_SHARED_LIBS to allow llama library to be generated.
    * Turn ON PIC when BUILD_SHARED_LIBS is ON
2023-03-23  Command line args bounds checking (#424)  (anzz1)
    * command line args bounds checking
    * unknown and invalid param exit codes 0 -> 1
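A generic sketch of the kind of bounds checking described in the entry above: verify a flag's value exists before reading argv[i + 1], and exit non-zero on unknown or invalid parameters. The flag name and messages are illustrative, not the actual llama.cpp parser:

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main(int argc, char ** argv) {
        int n_threads = 4;
        for (int i = 1; i < argc; i++) {
            if (std::strcmp(argv[i], "-t") == 0) {
                if (i + 1 >= argc) {                       // bounds check: missing value
                    std::fprintf(stderr, "error: missing value for -t\n");
                    std::exit(1);                          // invalid param -> exit code 1
                }
                n_threads = std::atoi(argv[++i]);
            } else {
                std::fprintf(stderr, "error: unknown argument: %s\n", argv[i]);
                std::exit(1);                              // unknown param -> exit code 1
            }
        }
        std::printf("n_threads = %d\n", n_threads);
        return 0;
    }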
2023-03-23  Fix Nix build  (Ben Siraphob)
2023-03-23  Revert "Delete SHA256SUMS for now" (#429)  (Stephan Walter)
    * Revert "Delete SHA256SUMS for now (#416)"
      This reverts commit 8eea5ae0e5f31238a97c79ea9103c27647380e37.
    * Remove ggml files until they can be verified
    * Remove alpaca json
    * Add also model/tokenizer.model to SHA256SUMS + update README
    ---------
    Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-03-23  Fix Makefile echo escape codes (by removing them). (#418)  (Kerfuffle)
2023-03-23  Move model section from issue template to README.md (#421)  (Gary Mulder)
    * Update custom.md
    * Removed Model section as it is better placed in README.md
    * Updates to README.md model section
    * Inserted text that was removed from issue template about obtaining models from FB and links to papers describing the various models
    * Removed IPF down links for the Alpaca 7B models as these look to be in the old data format and probably shouldn't be directly linked to, anyway
    * Updated the perplexity section to point at Perplexity scores #406 discussion
2023-03-23  Delete SHA256SUMS for now (#416)  (anzz1)
    Delete this for now to avoid confusion since it contains some wrong checksums from the old tokenizer format
    Re-add after #374 is resolved
2023-03-23  Adjust repetition penalty ..  (Georgi Gerganov)
2023-03-23  Add link to recent podcast about whisper.cpp and llama.cpp  (Georgi Gerganov)
2023-03-23  CI: CMake: Separate build and test steps (#376)  (anzz1)
    * CI: Separate Build and Test steps (CMake)
    * CI: Make sure build passes before running tests (CMake)
    * CI: Standardise step id names
2023-03-23  Fix instruct mode broken by PR #354 (#409)  (tjohnman)
    Co-authored-by: Johnman <tjohnman@github>
2023-03-22  Update issue template so people will use it (#404)  (Gary Mulder)
2023-03-22  Deduplicate q4 quantization functions (#383)  (Stephan Walter)
    * Deduplicate q4 quantization functions
    * Use const; add basic test
    * Re-enable quantization test
    * Disable AVX2 flags in CI
    ---------
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
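The entry above touches the q4 quantization code. As background, here is a simplified, generic sketch of block-wise 4-bit quantization (one float scale per block of weights plus small integers); it is not the exact ggml q4 layout, block size, or rounding, just the general technique the deduplicated functions implement:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // One block: a scale factor and the quantized values in [-8, 7].
    struct Q4Block {
        float               scale;
        std::vector<int8_t> q;
    };

    Q4Block quantize_block(const std::vector<float> & x) {
        float amax = 0.0f;
        for (float v : x) amax = std::max(amax, std::fabs(v));
        const float scale = amax / 7.0f;                     // map the largest magnitude to +/-7
        const float inv   = scale != 0.0f ? 1.0f / scale : 0.0f;

        Q4Block out;
        out.scale = scale;
        out.q.reserve(x.size());
        for (float v : x) {
            int qv = (int) std::round(v * inv);
            qv = std::min(std::max(qv, -8), 7);              // clamp to the 4-bit range
            out.q.push_back((int8_t) qv);
        }
        return out;
    }

    float dequantize(const Q4Block & b, size_t i) {
        return b.q[i] * b.scale;                             // approximate reconstruction
    }

    int main() {
        const std::vector<float> block = { 0.1f, -0.7f, 0.35f, 0.0f };
        const Q4Block q = quantize_block(block);
        for (size_t i = 0; i < block.size(); ++i) {
            std::printf("%.3f -> %.3f\n", block[i], dequantize(q, i));
        }
        return 0;
    }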
2023-03-22  fix: add POSIX functionality for Linux compilation (#51)  (Valentyn Bezshapkin)
    * fix: add POSIX functionality for Linux compilation
    * fix: older standard for compatibility
2023-03-22  Don't force immediate interactive without `-i` (#354)  (tjohnman)
    * Don't force immediate interactive without -i
      Sometimes we might want to use a reverse prompt but we want to let the model generate tokens right after the initial prompt. So we don't force user input mode if the -i flag wasn't specified and instead let it run until we encounter the reverse prompt. This gives us some more flexibility, since it doesn't force the user to enter a newline if they want to let the model generate text right after the initial prompt and only be asked for input if the reverse prompt is encountered. The `--interactive-first` flag is reintroduced to force the old behavior. `-r` behaves like `-i` plus introduces a reverse prompt (it can be specified more than once).
    * Update help output.
    ---------
    Co-authored-by: Johnman <tjohnman@github>
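A small sketch of the decision logic described above: only --interactive-first forces input right after the initial prompt, while -i / -r wait for a reverse prompt. The struct and function names are illustrative, not the actual main.cpp code:

    #include <cstdio>
    #include <string>
    #include <vector>

    struct Params {
        bool interactive       = false;        // -i
        bool interactive_first = false;        // --interactive-first
        std::vector<std::string> antiprompts;  // -r "reverse prompt" (implies interactive)
    };

    // Decide whether to stop generating and hand control back to the user.
    bool should_wait_for_input(const Params & p, bool at_start, bool hit_reverse_prompt) {
        if (at_start) {
            // Only --interactive-first forces input before any generation;
            // plain -i / -r let the model generate until a reverse prompt appears.
            return p.interactive_first;
        }
        return (p.interactive || !p.antiprompts.empty()) && hit_reverse_prompt;
    }

    int main() {
        Params p;
        p.antiprompts.push_back("User:");
        std::printf("wait at start:   %d\n", should_wait_for_input(p, true,  false)); // 0: keep generating
        std::printf("wait at 'User:': %d\n", should_wait_for_input(p, false, true));  // 1: ask for input
        return 0;
    }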
2023-03-22  cmake: make llama an actual library (#392)  (Erik Scholz)
2023-03-22  fix perplexity after c-api refactor (#390)  (Erik Scholz)
    * preallocate a buffer of fitting size for tokenization (utils.cpp)
    * don't create a new std::string (especially here, where it's usually large)
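The preallocation point above amounts to sizing the token buffer once with a safe upper bound instead of growing it token by token. A sketch of that pattern, assuming roughly one token per input byte as the bound; the tokenizer here is a hypothetical stand-in, not the real llama.cpp one:

    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    using llama_token = int32_t;

    // Hypothetical stand-in for the real tokenizer: writes at most n_max tokens, returns the count.
    static int fake_tokenize(const std::string & text, llama_token * out, int n_max) {
        int n = 0;
        for (size_t i = 0; i < text.size() && n < n_max; ++i) {
            out[n++] = (llama_token) (unsigned char) text[i];
        }
        return n;
    }

    // One allocation sized to an upper bound (about one token per byte, plus one),
    // then shrink to the actual count.
    static std::vector<llama_token> tokenize(const std::string & text) {
        std::vector<llama_token> tokens(text.size() + 1);
        const int n = fake_tokenize(text, tokens.data(), (int) tokens.size());
        tokens.resize(n > 0 ? n : 0);
        return tokens;
    }

    int main() {
        const std::vector<llama_token> toks = tokenize("hello world");
        std::printf("%zu tokens\n", toks.size());
        return 0;
    }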
2023-03-22  Add details on perplexity to README.md (#395)  (Gary Linscott)
2023-03-22  Add missing header for memcpy (#386)  (Yusuf Kağan Hanoğlu)
    fixed: memcpy is not defined
2023-03-22  When seed <= 0 - use the clock to generate one  (Georgi Gerganov)
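The seed fallback above is a one-liner in spirit; a minimal sketch of the behavior (variable names are illustrative):

    #include <cstdio>
    #include <ctime>

    int main() {
        int seed = -1;                        // e.g. the default when the user passed no seed
        if (seed <= 0) {
            seed = (int) std::time(nullptr);  // use the clock to generate one
        }
        std::printf("seed = %d\n", seed);
        return 0;
    }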
2023-03-22  Init llama_context_params properly from CLI (#370)  (Georgi Gerganov)
2023-03-22  Remove temporary notice and update hot topics  (Georgi Gerganov)
2023-03-22  Introduce C-style API (#370)  (Georgi Gerganov)
    * Major refactoring - introduce C-style API
    * Clean up
    * Add <cassert>
    * Add <iterator>
    * Add <algorithm> ....
    * Fix timing reporting and accumulation
    * Measure eval time only for single-token calls
    * Change llama_tokenize return meaning
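The refactor above moves the library behind a C-style interface. A sketch of the general pattern such an API follows (an opaque context handle plus plain functions that create, use, and free it); the names and signatures below are hypothetical illustrations, not the real llama.h:

    #include <cstdio>

    // Opaque handle: callers only ever see a pointer, the C++ internals stay hidden.
    struct llm_context;

    struct llm_params {
        int n_ctx;   // context size
        int seed;    // RNG seed
    };

    // Plain functions operating on the handle (in a real library these would be
    // extern "C" declarations in a public header).
    llm_context * llm_init(const char * model_path, llm_params params);
    int           llm_eval(llm_context * ctx, const int * tokens, int n_tokens);
    void          llm_free(llm_context * ctx);

    int main() {
        llm_params params = { 512, 42 };
        llm_context * ctx = llm_init("models/some-model.bin", params);  // hypothetical path
        if (!ctx) {
            return 1;
        }
        const int tokens[] = { 1, 2, 3 };
        if (llm_eval(ctx, tokens, 3) != 0) {
            std::fprintf(stderr, "eval failed\n");
        }
        llm_free(ctx);
        return 0;
    }

    // Dummy definitions so the sketch links; a real implementation lives inside the library.
    struct llm_context { int n_past = 0; };
    llm_context * llm_init(const char *, llm_params) { return new llm_context(); }
    int           llm_eval(llm_context *, const int *, int) { return 0; }
    void          llm_free(llm_context * ctx) { delete ctx; }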
2023-03-21  Add SHA256SUMS file and instructions to README how to obtain and verify the downloads  (Gary Mulder)
    Hashes created using:
    sha256sum models/*B/*.pth models/*[7136]B/ggml-model-f16.bin* models/*[7136]B/ggml-model-q4_0.bin* > SHA256SUMS
2023-03-22  Fix bin dir for win ci  (anzz1)
2023-03-21  specify build type for ctest on windows (#371)  (Erik Scholz)
2023-03-21  Add notice about pending change  (Georgi Gerganov)
2023-03-21  fix typo in chatLLaMa (#368)  (Mathieu Nayrolles)
    The prompt contains a typo where 'alound' is used instead of 'aloud'.
2023-03-21  Update issue templates  (Georgi Gerganov)
2023-03-21  We could use std::unordered_map over std::map (#305)  (Fabio R. Sluzala)
    * Improve performance by changing std::map to std::unordered_map and std::map<id, token> id_to_token; to std::vector<token> id_to_token;
    * fix last commit on gpt_vocab_init add vocab.id_to_token.resize(vocab.token_to_id.size());
    * Removed include <map>
    * Nest struct token score inside gpt_vocab
    * renamed token to tok
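A small sketch of the data-structure change described above: a hash map for string-to-id lookups and a plain vector for id-to-string, since ids are dense. The surrounding struct and demo are illustrative, not the actual gpt_vocab code:

    #include <cstdio>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Vocab {
        using id    = int;
        using token = std::string;

        std::unordered_map<token, id> token_to_id;  // O(1) average lookup instead of std::map's O(log n)
        std::vector<token>            id_to_token;  // ids are dense, so a vector indexed by id suffices
    };

    int main() {
        Vocab vocab;
        const char * words[] = { "hello", "world" };
        for (const char * w : words) {
            const Vocab::id id = (Vocab::id) vocab.id_to_token.size();
            vocab.token_to_id[w] = id;
            vocab.id_to_token.push_back(w);
        }
        std::printf("id of 'world' = %d\n", vocab.token_to_id["world"]);
        std::printf("token 0 = %s\n", vocab.id_to_token[0].c_str());
        return 0;
    }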
2023-03-21  Fix color codes emitting mid-UTF8 code. (#312)  (Matvey Soloviev)
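The bug class fixed above is emitting an ANSI color escape between the bytes of a multi-byte UTF-8 character, which corrupts the character on output. A simplified illustration of writing whole UTF-8 sequences before switching colors; it is not the actual main.cpp fix, just the underlying idea:

    #include <cstdio>
    #include <string>

    // Number of bytes a UTF-8 sequence occupies, judging from its lead byte.
    static int utf8_len(unsigned char lead) {
        if (lead < 0x80) return 1;            // ASCII
        if ((lead & 0xE0) == 0xC0) return 2;  // 110xxxxx
        if ((lead & 0xF0) == 0xE0) return 3;  // 1110xxxx
        if ((lead & 0xF8) == 0xF0) return 4;  // 11110xxx
        return 1;                             // invalid lead byte: treat as a single byte
    }

    int main() {
        const std::string text = "h\xC3\xA9llo";  // "\xC3\xA9" is 'e' with an acute accent: two bytes
        const char * color_on  = "\x1b[32m";
        const char * color_off = "\x1b[0m";

        size_t i = 0;
        while (i < text.size()) {
            const int len = utf8_len((unsigned char) text[i]);
            std::printf("%s", color_on);                   // safe: we are at a character boundary
            std::fwrite(text.data() + i, 1, len, stdout);  // write the whole character at once
            std::printf("%s", color_off);
            i += (size_t) len;
        }
        std::printf("\n");
        return 0;
    }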