path: root/main.cpp
2023-03-23  Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)  [rabidcopy]

    * Improve interactive mode's coherence after EOS
      Aims to improve coherence and the ability to resume the interactive session when the user is given input back after an end-of-text token is reached. Not sure what token 13 is or why it seems to help. See conversation for examples.
    * Make newline token a constant
    * Dynamically determine newline token
    * Relocate previous newline token const
    * Clean up whitespace
    * Print a new line on end of text in interactive mode
      This may need to be looked into further when not using a reverse prompt.
    * Only print manual newline with reverse prompt
      Fixes formatting of reverse prompts so they don't end up at the end of the current line, while not introducing unnecessary new lines otherwise.
    * Alternate approach to replace end-of-text tokens
    * Inject the reverse prompt again after EOS in interactive mode
    * Tokenize reverse prompt when needed
      Makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330
    * Tokenize and inject only the first reverse prompt (thanks to tjohnman)
    * Tokenize first reverse prompt once
    * Add newline token
    * Tokenize/inject reverse prompt for refactor
      This doesn't seem right, though.
    * Tokenize nothing for antiprompt if no reverse prompt
    * Update main.cpp
    * Tokenize and inject reverse prompt as needed
      This doesn't seem to work if the reverse prompt is tokenized outside earlier on.
    * Remove newline token (not needed)
    * Tokenize newline token
    * Add space to comment

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    Co-authored-by: Slaren <2141330+slaren@users.noreply.github.com>

2023-03-23  Fix instruct mode broken by PR #354 (#409)  [tjohnman]

    Co-authored-by: Johnman <tjohnman@github>

2023-03-22  Don't force immediate interactive without `-i` (#354)  [tjohnman]

    * Don't force immediate interactive without -i
      Sometimes we might want to use a reverse prompt but let the model generate tokens right after the initial prompt. So we don't force user-input mode if the -i flag wasn't specified, and instead let the model run until we encounter the reverse prompt. This gives us some more flexibility, since the user isn't forced to enter a newline to let the model generate text right after the initial prompt, and is only asked for input when the reverse prompt is encountered. The `--interactive-first` flag is reintroduced to force the old behavior. `-r` behaves like `-i` plus introduces a reverse prompt (it can be specified more than once).
    * Update help output.

    Co-authored-by: Johnman <tjohnman@github>

2023-03-22  Fix perplexity after C-API refactor (#390)  [Erik Scholz]

    * Preallocate a buffer of fitting size for tokenization (utils.cpp)
    * Don't create a new std::string (especially here, where it's usually large)

2023-03-22  When seed <= 0, use the clock to generate one  [Georgi Gerganov]

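A minimal sketch of the idea behind this change (the helper name is hypothetical; the actual commit adjusts the CLI parameter handling):

    #include <cstdint>
    #include <ctime>

    // Fall back to the wall clock when no explicit seed is given,
    // so each run samples a different sequence.
    int32_t resolve_seed(int32_t seed_from_cli) {
        return seed_from_cli > 0 ? seed_from_cli
                                 : static_cast<int32_t>(std::time(nullptr));
    }
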
2023-03-22  Init llama_context_params properly from CLI (#370)  [Georgi Gerganov]

2023-03-22  Introduce C-style API (#370)  [Georgi Gerganov]

    * Major refactoring - introduce C-style API
    * Clean up
    * Add <cassert>, <iterator>, <algorithm>, ...
    * Fix timing reporting and accumulation
    * Measure eval time only for single-token calls
    * Change llama_tokenize return meaning

2023-03-21  We could use std::unordered_map over std::map (#305)  [Fabio R. Sluzala]

    * Improve performance by changing std::map to std::unordered_map, and std::map<id, token> id_to_token to std::vector<token> id_to_token
    * Fix last commit on gpt_vocab_init: add vocab.id_to_token.resize(vocab.token_to_id.size());
    * Removed include <map>
    * Nest struct token score inside gpt_vocab
    * Renamed token to tok

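A sketch of the data-structure change (simplified; the real vocab struct carries more fields):

    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct vocab_sketch {
        using id    = int32_t;
        using token = std::string;

        // Hashed lookup: O(1) average instead of std::map's O(log n) tree walk.
        std::unordered_map<token, id> token_to_id;

        // Token ids are dense (0..n_vocab-1), so a plain vector indexed by id
        // replaces the old std::map<id, token>.
        std::vector<token> id_to_token;
    };
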
2023-03-21  Fix color codes being emitted mid-UTF-8 sequence (#312)  [Matvey Soloviev]

2023-03-21  Importer for GPTQ quantized LLaMA models (#301)  [comex]

    * [WIP, broken] Importer for GPTQ quantized LLaMA models

      Based on: https://github.com/qwopqwop200/GPTQ-for-LLaMa

      Current status: Something is busted. The output starts out decent, but quickly degrades into gibberish. This doesn't happen with either the original GPTQ-for-LLaMa using the same weights, or llama.cpp when using weights quantized by its own quantizer. Is there a bug in the conversion script that somehow only comes into play with a large context size?

      I did notice one potential issue. It's clearly not the main cause of the gibberish, since it doesn't happen when using q4_1 weights quantized by llama.cpp itself, but it seems concerning. When doing a matrix multiplication of f16 * f32 => f32 or q4_1 * f32 => f32, at least when the multiplication is not done with BLAS, the intermediate results are stored in the smaller format rather than f32. This seems like an unnecessary waste of precision, especially in the q4_1 case.

      I was originally hoping to validate the results by matching the Python implementation's output exactly, but precision and non-associativity issues make this very difficult, including when performing matrix multiplications and, especially, computing norms.

      Anyway, design details: The models being imported store per-layer weights in essentially q4_1 format, although the addend and scale are shared across an entire row rather than every group of 32 weights. This script duplicates the addend and scale to match ggml's expectations, at the cost of wasting some memory.

      However, there are two differences which I accommodated by changing the output format (and adding corresponding support to main.cpp) rather than having the script match the existing one:

      - The tok_embeddings and output weights (i.e. the weights that aren't per-layer) are f16 instead of q4_1. They could be converted to q4_1, and the impact of the loss of precision would probably be low, but this would rule out exactly matching the Python implementation's output for validation.
      - There is no sharding, since the input doesn't have it, and for a CPU-only implementation it seems more useful to avoid having to deal with multiple files.

      The new format is differentiated from the existing q4_1 format by changing the 'f16' header flag to a new value, 4. That said, I think a cleaner approach would be to change main.cpp to support loading each tensor with an arbitrary sharding configuration and type rather than hardcoding specific combinations of types. So far I've wasted too much time debugging to try implementing this...

    * Add missing permutation. Now it works.

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

2023-03-21  Compute perplexity over prompt (#270)  [Gary Linscott]

    * Compute perplexity over prompt
    * More accurate perplexity calculation - over all logits in the context window (so 512x more tokens!)
    * Output all perplexities
    * Add timing/ETA

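For reference, perplexity is the exponentiated average negative log-likelihood of the observed tokens. A minimal sketch (assuming logprobs already holds the model's log-probability of each prompt token given its context):

    #include <cmath>
    #include <vector>

    // PPL = exp( -(1/N) * sum_i log p(token_i | context_i) )
    double perplexity(const std::vector<double> &logprobs) {
        double nll = 0.0;
        for (double lp : logprobs) {
            nll -= lp;
        }
        return std::exp(nll / (double) logprobs.size());
    }
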
2023-03-21  Enable ANSI colors on Windows 10+ (#311)  [anzz1]

    * Enable ANSI colors on Windows 10+
      On older versions the function will silently fail without any ill effects.
    * Do not call SetConsoleMode if the mode is already set
    * Update main.cpp

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

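A sketch of the usual Win32 sequence for this (the commit may differ in details):

    #if defined(_WIN32)
    #include <windows.h>

    // Ask the console to interpret ANSI/VT escape sequences (Windows 10+).
    // On older Windows versions SetConsoleMode fails silently, which is harmless.
    static void enable_ansi_colors() {
        HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
        DWORD mode = 0;
        if (h != INVALID_HANDLE_VALUE && GetConsoleMode(h, &mode)) {
            if (!(mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING)) {
                // Only call SetConsoleMode when the flag isn't already set.
                SetConsoleMode(h, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
            }
        }
    }
    #endif
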
2023-03-21  Check for reverse prompt by characters instead of tokens (#292) (#330)  [tjohnman]

    * Check for reverse prompt by characters instead of tokens (#292)
    * Update main.cpp: wording
    * Cleanup
    * Remove unnecessary use of std::stringstream

    Co-authored-by: Johnman <tjohnman@github>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

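A sketch of the character-based check (hypothetical names). Matching on the decoded output string rather than on token ids matters because a reverse prompt need not fall on token boundaries:

    #include <string>
    #include <vector>

    // True if the generated text so far ends with any of the reverse prompts.
    bool ends_with_antiprompt(const std::string &output,
                              const std::vector<std::string> &antiprompts) {
        for (const std::string &ap : antiprompts) {
            if (output.size() >= ap.size() &&
                output.compare(output.size() - ap.size(), ap.size(), ap) == 0) {
                return true;
            }
        }
        return false;
    }
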
2023-03-21  Fix convert script, warnings, alpaca instructions, default params  [Georgi Gerganov]

2023-03-21  Command-line option for a custom number of model parts (--n_parts N) (#348)  [anzz1]

    * cmdline option for custom amount of model parts (--n_parts N)
    * Update main.cpp

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

2023-03-21  Add tokenizer test + revert to C++11 (#355)  [Georgi Gerganov]

    * Add test-tokenizer-0 to do a few tokenizations - feel free to expand
    * Added option to convert-pth-to-ggml.py script to dump just the vocabulary
    * Added ./models/ggml-vocab.bin containing just LLaMA vocab data (used for tests)
    * Added utility to load vocabulary file from previous point (temporary implementation)
    * Avoid using std::string_view and drop back to C++11 (hope I didn't break something)
    * Rename gpt_vocab -> llama_vocab
    * All CMake binaries go into ./bin/ now

2023-03-20  Move file magic/version to header, print expected version (#319)  [Mack Straight]

2023-03-20  SentencePiece BPE-compatible tokenizer (#252)  [Mack Straight]

    * Potential out-of-bounds read
    * Fix quantize
    * Style
    * Update convert-pth-to-ggml.py
    * Mild cleanup
    * Don't need the space-prefixing here right now, since main.cpp already does it
    * New file magic + version header field
    * README notice
    * Missing newlines

    Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>

2023-03-19  Bugfix: default should not be interactive (#304)  [cocktailpeanut]

2023-03-19  Fix coloring of last `n_batch` of prompt, and refactor line input (#221)  [Rickey Bowers Jr]

    * Fix coloring of last `n_batch` of prompt, and refactor line input
    * Forgot the newline that needs to be sent to the model
    * (Per #283) try to force flush of color reset in SIGINT handler

2023-03-19  Support for multiple reverse prompts (#299)  [tjohnman]

    Co-authored-by: Johnman <>
    Co-authored-by: Johnman <tjohnman@github>

2023-03-19  Make prompt randomization optional (#300)  [tjohnman]

    Co-authored-by: Johnman <>

2023-03-19  Respect the maximum number of tokens in interactive mode (#298)  [tjohnman]

    Co-authored-by: Johnman <johnman@github>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

2023-03-19  Add --ignore-eos parameter (#181)  [slaren]

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

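One common way to implement such a flag (a sketch; the actual patch may instead zero the logit) is to suppress the end-of-stream token before sampling:

    #include <cmath>
    #include <vector>

    // Make the end-of-stream token unselectable before sampling.
    // eos_id is the vocabulary id of the EOS token.
    void apply_ignore_eos(std::vector<float> &logits, int eos_id) {
        // -INFINITY survives softmax as probability 0, so EOS can never be
        // sampled and generation continues until the token limit is reached.
        logits[eos_id] = -INFINITY;
    }
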
2023-03-19  Interactive mode: print '\n' in sigint_handler; this flushes stdout and thus ensures the color reset (#283)  [Qingyou Meng]

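A sketch of the handler's shape (assumed details): the trailing newline flushes line-buffered stdout, so the terminal actually sees the ANSI reset before the process exits.

    #include <csignal>
    #include <cstdio>
    #include <cstdlib>

    #define ANSI_COLOR_RESET "\x1b[0m"

    static void sigint_handler(int /*signo*/) {
        printf(ANSI_COLOR_RESET "\n"); // newline flushes the reset code
        exit(130);                     // conventional exit code for SIGINT
    }

    // In main(): signal(SIGINT, sigint_handler);
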
2023-03-19  Command line switch to use F16 for memory_k and memory_v (refactor of #154) (#294)  [Erik Scholz]

    * Use F16 for memory_k and memory_v
    * Add command line switch to use f16 instead of f32 for memory k+v

    Co-authored-by: Ty Everett <ty@tyweb.us>

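A sketch of how such a switch typically plugs into the ggml allocation (the flag and function names here are illustrative):

    #include "ggml.h"

    // F16 halves the memory used by the KV cache at a small precision cost.
    struct ggml_tensor * alloc_kv_cache(struct ggml_context * ctx,
                                        int64_t n_elements, bool memory_f16) {
        enum ggml_type wtype = memory_f16 ? GGML_TYPE_F16 : GGML_TYPE_F32;
        return ggml_new_tensor_1d(ctx, wtype, n_elements);
    }
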
2023-03-19  Fix off-by-one bug (#115)  [Georgi Gerganov]

2023-03-19  Drop trailing new line from file prompts (#80)  [Georgi Gerganov]

2023-03-19Add "--instruct" argument for usage with Alpaca (#240)Georgi Gerganov
Also start adding prompts in "./prompts"
2023-03-18  Warn user if a context size greater than 2048 tokens is specified (#274)  [Ronsor]

    LLaMA doesn't support context sizes above 2048 tokens, and going above that produces terrible results.

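A minimal sketch of the check (the exact wording and placement in the commit may differ):

    #include <cstdio>

    // Warn when the requested context exceeds the model's training window.
    void warn_large_context(int n_ctx) {
        if (n_ctx > 2048) {
            fprintf(stderr,
                    "warning: model does not support context sizes greater "
                    "than 2048 tokens (%d specified); expect poor results\n",
                    n_ctx);
        }
    }
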
2023-03-18  Remove unused code since n_vocab is model.hparams.n_vocab (#262)  [Alex Nguyen]

2023-03-18  Fixed warning with std::ignore about unused function result (#151)  [Justin Suess]

2023-03-17  Implement non-greedy tokenizer that tries to maximize token lengths (#242)  [thement]

    * Implement non-greedy tokenizer that tries to maximize token lengths
    * Insert single space in front of the prompt - this is to match original llama tokenizer behavior

    Co-authored-by: Jakub Horak <jakub.horak@ibawizard.net>

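One way to implement such a tokenizer is dynamic programming over the input, minimizing the token count (equivalently, preferring longer matches). A sketch, not necessarily the commit's exact scoring:

    #include <climits>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // best[i] = fewest tokens covering text[0..i); backtrack to recover ids.
    std::vector<int> tokenize_min_tokens(
            const std::string &text,
            const std::unordered_map<std::string, int> &vocab,
            size_t max_token_len) {
        const size_t n = text.size();
        std::vector<int>    best(n + 1, INT_MAX);
        std::vector<size_t> prev(n + 1, 0);   // start index of last token
        std::vector<int>    tok(n + 1, -1);   // id of last token ending at i
        best[0] = 0;
        for (size_t i = 1; i <= n; ++i) {
            for (size_t len = 1; len <= max_token_len && len <= i; ++len) {
                auto it = vocab.find(text.substr(i - len, len));
                if (it != vocab.end() && best[i - len] != INT_MAX &&
                    best[i - len] + 1 < best[i]) {
                    best[i] = best[i - len] + 1;
                    prev[i] = i - len;
                    tok[i]  = it->second;
                }
            }
        }
        std::vector<int> out;
        for (size_t i = n; i > 0 && tok[i] >= 0; i = prev[i]) {
            out.push_back(tok[i]);
        }
        return std::vector<int>(out.rbegin(), out.rend());
    }
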
2023-03-16  Add RMS norm and use it (#187)  [hoangmit]

    * Add ggml_rms_norm
    * Update op num

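RMSNorm normalizes by the root mean square of the activations, with no mean subtraction (unlike LayerNorm). A scalar sketch (eps value is an assumption):

    #include <cmath>
    #include <vector>

    // y_i = x_i / sqrt(mean(x^2) + eps)
    std::vector<float> rms_norm(const std::vector<float> &x, float eps = 1e-6f) {
        double sum_sq = 0.0;
        for (float v : x) sum_sq += (double) v * v;
        const float scale = 1.0f / std::sqrt((float)(sum_sq / x.size()) + eps);
        std::vector<float> y(x.size());
        for (size_t i = 0; i < x.size(); ++i) y[i] = x[i] * scale;
        return y;
    }
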
2023-03-15  Add SIGINT support for _WIN32 environments (#120)  [Rickey Bowers Jr]

    * Add SIGINT support for _WIN32 environments
    * Perhaps more consistent

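One way to catch Ctrl+C on Windows (a sketch; the commit itself may simply use signal(), which the MSVC runtime also maps to Ctrl+C):

    #if defined(_WIN32)
    #include <windows.h>
    #include <csignal>

    // Windows has no POSIX sigaction, but SetConsoleCtrlHandler delivers
    // Ctrl+C as CTRL_C_EVENT, which we can forward to the portable path.
    static BOOL WINAPI console_ctrl_handler(DWORD type) {
        if (type == CTRL_C_EVENT) {
            raise(SIGINT);   // reuse the existing SIGINT handler
            return TRUE;     // handled; don't terminate immediately
        }
        return FALSE;
    }

    static void install_sigint_handler_win32() {
        SetConsoleCtrlHandler(console_ctrl_handler, TRUE);
    }
    #endif
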
2023-03-15  Added ctx_size parameter (#148)  [Justin Suess]

    * Added ctx_size parameter
    * Added it in more places
    * Apply suggestions from code review

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

2023-03-15  Fixed color reset on exit (#149)  [Justin Suess]

    * Fixed color reset on exit
    * Added sigint handler for ansi_color_reset
    * Update main.cpp

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

2023-03-13  Print system information  [Georgi Gerganov]

2023-03-13  Use fprintf for diagnostic output (#48)  [Pavol Rusnak]

    Keep printf only for printing model output; one can now use ./main ... 2>/dev/null to suppress any diagnostic output.

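The split in a nutshell (a sketch): model text goes to stdout, everything else to stderr, so the two streams can be redirected independently.

    #include <cstdio>

    int main() {
        fprintf(stderr, "llama: loading model...\n"); // diagnostics -> stderr
        printf("Hello from the model\n");             // model output -> stdout
        return 0;
    }
    // Usage: ./main ... 2>/dev/null   # keep only the model output
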
2023-03-13  Reduce model loading time (#43)  [uint256_t]

    * Use buffering
    * Use vector
    * Minor

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

2023-03-13  Fix UTF-8 handling (including colors) (#79)  [Val Kharitonov]

2023-03-13  Gate signal support on being on a unixoid system (#74)  [Matvey Soloviev]

2023-03-13  Fix token count accounting  [Matvey Soloviev]

2023-03-13  Fix color getting reset before prompt output done (#65)  [Matvey Soloviev]

    (cherry picked from commit 7eb2987619feee04c40eff69b604017d09919cb6)

2023-03-12  Add interactive mode (#61)  [Matvey Soloviev]

    * Initial work on interactive mode
    * Improve interactive mode. Make rev. prompt optional
    * Update README to explain interactive mode
    * Fix OS X build

2023-03-12  Add back top_k (#56)  [beiller]

    * Add back top_k
    * Update utils.cpp
    * Update utils.h

    Co-authored-by: Bill Hamilton <bill.hamilton@shopify.com>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

2023-03-12  Windows fixes (#31)  [Sebastián A]

    * Apply fixes suggested to build on Windows
      Issue: https://github.com/ggerganov/llama.cpp/issues/22
    * Remove unsupported VLAs
    * MSVC: remove features that are only available in MSVC C++20
    * Fix zero initialization of the other fields
    * Change the use of vector for stack allocations

2023-03-12  Add repetition penalty (#20)  [beiller]

    * Adding repeat penalization
    * Update utils.h
    * Update utils.cpp
    * Numeric fix
      Should probably still scale by temp even if penalized
    * Update comments, more proper application
      I see that numbers can go negative, so a fix from a referenced commit
    * Minor formatting

    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

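A sketch of the usual CTRL-style repetition penalty (assumed to be close to what this change does; note the sign handling that the "numbers can go negative" bullet refers to):

    #include <unordered_set>
    #include <vector>

    // Penalize tokens that already appeared in the recent window.
    // Dividing a positive logit shrinks it, but dividing a negative logit
    // would *grow* it, hence the sign-dependent handling.
    void apply_repeat_penalty(std::vector<float> &logits,
                              const std::unordered_set<int> &last_n_tokens,
                              float penalty /* e.g. 1.3 */) {
        for (int id : last_n_tokens) {
            if (logits[id] > 0.0f) {
                logits[id] /= penalty;
            } else {
                logits[id] *= penalty;
            }
        }
    }
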
2023-03-11  Bump memory buffer  [Georgi Gerganov]

2023-03-11  Support all LLaMA models + change Q4_0 quantization storage  [Georgi Gerganov]