Age  Commit message  Author
2023-03-22  cmake: make llama an actual library (#392)  [Erik Scholz]
2023-03-22  fix perplexity after c-api refactor (#390)  [Erik Scholz]
  * preallocate a buffer of fitting size for tokenization in utils.cpp (see the sketch below)
  * don't create a new std::string (especially here, where it's usually large)
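  A minimal sketch of the idea, assuming a wrapper shaped roughly like the one in utils.cpp (names are illustrative, not a verbatim excerpt):

      #include <string>
      #include <vector>

      // Size the buffer once, up front: the worst case is one token per byte
      // of input plus an optional BOS token, so tokenization never reallocates.
      std::vector<int> tokenize(const std::string & text, bool add_bos) {
          std::vector<int> tokens(text.size() + (add_bos ? 1 : 0));
          // int n = llama_tokenize(ctx, text.c_str(), tokens.data(), (int) tokens.size(), add_bos);
          // tokens.resize(n);
          return tokens;
      }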
2023-03-22  Add details on perplexity to README.md (#395)  [Gary Linscott]
2023-03-22  Add missing header for memcpy (#386)  [Yusuf Kağan Hanoğlu]
  fixed: memcpy is not defined
2023-03-22  When seed <= 0, use the clock to generate one  [Georgi Gerganov]
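  A sketch of the behavior (function and field names assumed, not taken from the commit):

      #include <cstdint>
      #include <ctime>

      // Treat a non-positive seed as "pick one for me" and fall back to the clock.
      int32_t resolve_seed(int32_t seed) {
          return seed > 0 ? seed : static_cast<int32_t>(std::time(nullptr));
      }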
2023-03-22  Init llama_context_params properly from CLI (#370)  [Georgi Gerganov]
2023-03-22  Remove temporary notice and update hot topics  [Georgi Gerganov]
2023-03-22  Introduce C-style API (#370)  [Georgi Gerganov]
  * Major refactoring - introduce C-style API
  * Clean up
  * Add <cassert>
  * Add <iterator>
  * Add <algorithm>
  * ...
  * Fix timing reporting and accumulation
  * Measure eval time only for single-token calls
  * Change llama_tokenize return meaning (see the sketch below)
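  A hedged sketch of driving the new C-style API, with signatures reconstructed as best as this log allows (consult llama.h for the authoritative declarations; the model path is a placeholder):

      #include "llama.h"
      #include <vector>

      int main() {
          llama_context_params params = llama_context_default_params();
          llama_context * ctx = llama_init_from_file("models/7B/ggml-model-q4_0.bin", params);
          if (ctx == nullptr) {
              return 1;
          }

          // Per the changed return meaning: a non-negative result is the token
          // count; a negative result signals the buffer was too small.
          std::vector<llama_token> tokens(64);
          const int n = llama_tokenize(ctx, "Hello world", tokens.data(), (int) tokens.size(), /*add_bos=*/true);
          if (n >= 0) {
              tokens.resize(n);
              llama_eval(ctx, tokens.data(), (int) tokens.size(), /*n_past=*/0, /*n_threads=*/4);
          }

          llama_free(ctx);
          return 0;
      }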
2023-03-21  Add SHA256SUMS file and instructions to README how to obtain and verify the downloads  [Gary Mulder]
  Hashes created using: sha256sum models/*B/*.pth models/*[7136]B/ggml-model-f16.bin* models/*[7136]B/ggml-model-q4_0.bin* > SHA256SUMS
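  To verify downloads against the file, `sha256sum --check SHA256SUMS` can be run from the repository root; it reports OK or FAILED per file.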
2023-03-22  Fix bin dir for win ci  [anzz1]
2023-03-21  specify build type for ctest on windows (#371)  [Erik Scholz]
2023-03-21  Add notice about pending change  [Georgi Gerganov]
2023-03-21  fix typo in chatLLaMa (#368)  [Mathieu Nayrolles]
  The prompt contains a typo where 'alound' is used instead of 'aloud'.
2023-03-21  Update issue templates  [Georgi Gerganov]
2023-03-21  We could use std::unordered_map over std::map (#305)  [Fabio R. Sluzala]
  * Improve performance by changing std::map to std::unordered_map, and std::map<id, token> id_to_token to std::vector<token> id_to_token (see the sketch below)
  * Fix last commit on gpt_vocab_init: add vocab.id_to_token.resize(vocab.token_to_id.size());
  * Removed include <map>
  * Nest struct token score inside gpt_vocab
  * Renamed token to tok
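  A sketch of the resulting data layout (struct and member names are illustrative): token-to-id lookup stays a hash map, while id-to-token becomes a plain vector, since ids are dense integers starting at zero.

      #include <cstdint>
      #include <string>
      #include <unordered_map>
      #include <vector>

      struct gpt_vocab {
          using id    = int32_t;
          using token = std::string;

          std::unordered_map<token, id> token_to_id; // was std::map<token, id>
          std::vector<token>            id_to_token; // was std::map<id, token>; the index is the id
      };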
2023-03-21  Fix color codes emitting mid-UTF8 code (#312)  [Matvey Soloviev]
2023-03-21  Importer for GPTQ quantized LLaMA models (#301)  [comex]
  * [WIP, broken] Importer for GPTQ quantized LLaMA models

    Based on: https://github.com/qwopqwop200/GPTQ-for-LLaMa

    Current status: something is busted. The output starts out decent, but quickly degrades into gibberish. This doesn't happen with either the original GPTQ-for-LLaMa using the same weights, or llama.cpp when using weights quantized by its own quantizer. Is there a bug in the conversion script that somehow only comes into play with a large context size?

    I did notice one potential issue. It's clearly not the main cause of the gibberish, since it doesn't happen when using q4_1 weights quantized by llama.cpp itself, but it seems concerning. When doing a matrix multiplication of f16 * f32 => f32 or q4_1 * f32 => f32, at least when the multiplication is not done with BLAS, the intermediate results are stored in the smaller format rather than f32. This seems like an unnecessary waste of precision, especially in the q4_1 case.

    I was originally hoping to validate the results by matching the Python implementation's output exactly, but precision and non-associativity issues make this very difficult, including when performing matrix multiplications and, especially, computing norms.

    Anyway, design details: the models being imported store per-layer weights in essentially q4_1 format, although the addend and scale are shared across an entire row rather than every group of 32 weights. This script duplicates the addend and scale to match ggml's expectations (sketched below), at the cost of wasting some memory.

    However, there are two differences which I accommodated by changing the output format (and adding corresponding support to main.cpp) rather than having the script match the existing one:

    - The tok_embeddings and output weights (i.e. the weights that aren't per-layer) are f16 instead of q4_1. They could be converted to q4_1, and the impact of the loss of precision would probably be low, but this would rule out exactly matching the Python implementation's output for validation.

    - There is no sharding, since the input doesn't have it, and for a CPU-only implementation it seems more useful to avoid having to deal with multiple files.

    The new format is differentiated from the existing q4_1 format by changing the 'f16' header flag to a new value, 4. That said, I think a cleaner approach would be to change main.cpp to support loading each tensor with an arbitrary sharding configuration and type rather than hardcoding specific combinations of types. So far I've wasted too much time debugging to try implementing this...

  * Add missing permutation. Now it works.

  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
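  A sketch of the scale/addend duplication described above (an assumption about the shape of the transformation, not an excerpt from the script): the source stores one (scale, addend) pair per row, while ggml's q4_1 expects one pair per group of 32 weights, so the pair is simply repeated.

      #include <cstddef>
      #include <vector>

      struct q4_1_params { float scale; float addend; };

      // Expand one per-row (scale, addend) pair into the per-32-weight-group
      // pairs that ggml's q4_1 layout expects; row_len must be a multiple of 32.
      std::vector<q4_1_params> expand_row_params(float scale, float addend, size_t row_len) {
          return std::vector<q4_1_params>(row_len / 32, q4_1_params{scale, addend});
      }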
2023-03-21  Compute perplexity over prompt (#270)  [Gary Linscott]
  * Compute perplexity over prompt
  * More accurate perplexity calculation - over all logits in the context window (so 512x more tokens!); see the sketch below
  * Output all perplexities
  * Add timing/ETA
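  For reference, perplexity over N scored tokens is exp(-(1/N) * sum_i log p(token_i | preceding context)); a minimal sketch given per-token log-probabilities (extracting those from the logits is omitted):

      #include <cmath>
      #include <vector>

      // Perplexity = exp(mean negative log-likelihood) over the scored tokens.
      double perplexity(const std::vector<double> & token_log_probs) {
          double nll = 0.0;
          for (const double lp : token_log_probs) {
              nll -= lp;
          }
          return std::exp(nll / (double) token_log_probs.size());
      }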
2023-03-21  Add chatLLaMa script (#198)  [Jean-Christophe Hoelt]
  * Add chatLLaMa script
  * Fix shellcheck errors and do some cleanup
  * Move chatLLaMa script to `examples` directory
  * Reduce chatLLaMa context size to 2048 (ref d7def1a7524f712e5ebb7cd02bab0f13aa56a7f9)
  * Set n_predict to 2048 in examples/chatLLaMa
2023-03-21  makefile: Fix CPU feature detection on Haiku (#218)  [Alex von Gluck IV]
2023-03-21  Enable ANSI colors on Windows 10+ (#311)  [anzz1]
  * Enable ANSI colors on Windows 10+ (on older versions the function silently fails without any ill effects)
  * Do not call SetConsoleMode if the mode is already set (see the sketch below)
  * Update main.cpp

  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
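  A sketch of the usual Win32 sequence for this (an assumption that #311 does roughly the following; ENABLE_VIRTUAL_TERMINAL_PROCESSING requires Windows 10+, and on older versions the call fails harmlessly):

      #if defined(_WIN32)
      #include <windows.h>

      static void enable_ansi_colors(void) {
          HANDLE h = GetStdHandle(STD_OUTPUT_HANDLE);
          DWORD mode = 0;
          if (h == INVALID_HANDLE_VALUE || !GetConsoleMode(h, &mode)) {
              return;
          }
          // Only call SetConsoleMode if the flag is not already set.
          if (!(mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING)) {
              SetConsoleMode(h, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
          }
      }
      #endif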
2023-03-21  Minor style changes  [Georgi Gerganov]
2023-03-21  Add chat.sh script  [Georgi Gerganov]
2023-03-21  Check for reverse prompt by characters instead of tokens (#292) (#330)  [tjohnman]
  * Check for reverse prompt by characters instead of tokens (#292); see the sketch below
  * Update main.cpp (wording)
  * Cleanup
  * Remove unnecessary use of std::stringstream

  Co-authored-by: Johnman <tjohnman@github>
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
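  A sketch of the character-based check (an assumed shape, not a verbatim excerpt): accumulate the decoded output as a string and test whether it ends with the reverse prompt, which also catches prompts that straddle token boundaries.

      #include <string>

      static bool ends_with(const std::string & text, const std::string & suffix) {
          return text.size() >= suffix.size() &&
                 text.compare(text.size() - suffix.size(), suffix.size(), suffix) == 0;
      }

      // usage: if (ends_with(generated_text, reverse_prompt)) { /* hand control back to the user */ }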
2023-03-21  Fix convert script, warnings, alpaca instructions, default params  [Georgi Gerganov]
2023-03-21  Add OpenBSD support (#314)  [Kevin Lo]
2023-03-21  fix typo in comment (#318)  [Mack Straight]
2023-03-21  Makefile: slight cleanup for Mac Intel; echo instead of running ./main -h (#335)  [Qingyou Meng]
2023-03-21  cmdline option for custom number of model parts (--n_parts N) (#348)  [anzz1]
  * cmdline option for custom number of model parts (--n_parts N)
  * Update main.cpp

  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-21  Update IPFS links to quantized alpaca with new tokenizer format (#352)  [Kevin Kwok]
2023-03-21  Change default repeat_penalty to 1.0  [Georgi Gerganov]
  I feel this penalty is not really helping. Especially for the example from the README, it makes results pretty bad.
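  For context, the repeat penalty as commonly implemented in llama.cpp at the time (a hedged reconstruction, not a verbatim excerpt) divides positive logits and multiplies negative ones for recently seen tokens, so a value of 1.0 makes both branches the identity and disables the penalty:

      // With penalty == 1.0, logits are unchanged and the penalty is a no-op.
      static void apply_repeat_penalty(float * logits, const int * last_tokens, int n_last, float penalty) {
          for (int i = 0; i < n_last; ++i) {
              float & l = logits[last_tokens[i]];
              l = (l < 0.0f) ? l * penalty : l / penalty; // push recently used tokens down
          }
      }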
2023-03-21  Add tokenizer test + revert to C++11 (#355)  [Georgi Gerganov]
  * Add test-tokenizer-0 to do a few tokenizations - feel free to expand
  * Added option to convert-pth-to-ggml.py script to dump just the vocabulary
  * Added ./models/ggml-vocab.bin containing just LLaMA vocab data (used for tests)
  * Added utility to load vocabulary file from previous point (temporary implementation)
  * Avoid using std::string_view and drop back to C++11 (hope I didn't break something)
  * Rename gpt_vocab -> llama_vocab
  * All CMake binaries go into ./bin/ now
2023-03-21  Add initial AVX512 support for dot product on Linux (#320)  [Casey Primozic]
  * Update Makefile to detect AVX512 support and add compiler flags if it's available
  * Based on the existing AVX2 implementation, dot product on one 32-value block of 4-bit quantized ints at a time
  * Perform 8-bit -> 16-bit sign extension and multiply+add on 32 values at a time instead of 16 (sketched below)
  * Use built-in AVX512 horizontal reduce add to get the sum at the end
  * Manual unrolling on the inner dot product loop to reduce loop counter overhead
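  A sketch of the 32-values-at-a-time inner step (assuming the 4-bit quants have already been unpacked to signed 8-bit; requires AVX512F and AVX512BW):

      #include <immintrin.h>
      #include <cstdint>

      // Dot product of 32 int8 values: sign-extend 8-bit -> 16-bit, multiply+add
      // adjacent pairs into 16 x int32, then horizontally reduce to a scalar.
      int32_t dot32_epi8(const int8_t * x, const int8_t * y) {
          const __m512i xs = _mm512_cvtepi8_epi16(_mm256_loadu_si256((const __m256i *) x));
          const __m512i ys = _mm512_cvtepi8_epi16(_mm256_loadu_si256((const __m256i *) y));
          const __m512i prod = _mm512_madd_epi16(xs, ys);
          return _mm512_reduce_add_epi32(prod);
      }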
2023-03-21  Adding missing features to CMakeLists.txt & refactoring (#131)  [nusu-github]
  * CMakeLists.txt refactoring:
    1. Simplify options that are a negation of a negation: LLAMA_NO_ACCELERATE -> LLAMA_ACCELERATE
    2. Make AVX2 optional in MSVC instead of forcing it on
    3. Make CMAKE_CXX_STANDARD the same as in the Makefile
    4. Use add_compile_options instead of adding options to CMAKE_C_FLAGS
    5. Make utils use target_link_libraries instead of directly referencing code
  * Added options: LLAMA_STATIC_LINK, LLAMA_NATIVE, LLAMA_LTO, LLAMA_GPROF, LLAMA_OPENBLAS
  * Fix Accelerate link in CMake
  * Windows build fix
  * C++11 to C++17
  * Reflect the C/C++ standard individually
  * Change the minimum CMake version to 3.12

  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-20  Nix flake: set meta.mainProgram to llama  [Ben Siraphob]
2023-03-20  Fixed 'tokenizer.model not found' error when the model dir is a symlink (#325)  [Qingyou Meng]
2023-03-20  move file magic/version to header, print expected version (#319)  [Mack Straight]
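  The pattern being centralized, sketched with illustrative constants (the real magic/version values live in the shared header and are not reproduced here):

      #include <cstdint>
      #include <cstdio>

      // Illustrative values only; the real ones are defined in the shared header.
      constexpr uint32_t FILE_MAGIC   = 0x67676d66;
      constexpr uint32_t FILE_VERSION = 1;

      bool check_file_header(std::FILE * f) {
          uint32_t magic = 0, version = 0;
          if (std::fread(&magic, sizeof(magic), 1, f) != 1 || magic != FILE_MAGIC) {
              std::fprintf(stderr, "invalid model file (bad magic)\n");
              return false;
          }
          if (std::fread(&version, sizeof(version), 1, f) != 1 || version != FILE_VERSION) {
              std::fprintf(stderr, "unsupported file version %u (expected %u)\n", version, FILE_VERSION);
              return false;
          }
          return true;
      }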
2023-03-20  Docker: fix publishing the Docker image to the GitHub Registry (#235)  [Bernat Vadell]
  * fix publish permission
  * try to fix the docker pipeline by using github_token as the password and repository_owner as the username
2023-03-20  sentencepiece bpe compatible tokenizer (#252)  [Mack Straight]
  * potential out of bounds read
  * fix quantize
  * style
  * Update convert-pth-to-ggml.py
  * mild cleanup
  * don't need the space-prefixing here rn since main.cpp already does it
  * new file magic + version header field
  * readme notice
  * missing newlines

  Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
2023-03-20  Add tqdm to Python requirements (#293)  [Stephan Walter]
  * Add tqdm to Python requirements
  * Remove torchvision torchaudio, add requests
2023-03-19  bugfix: default should not be interactive (#304)  [cocktailpeanut]
2023-03-19  Rename script  [Georgi Gerganov]
2023-03-19  Add temporary helper script for Alpaca chat  [Georgi Gerganov]
2023-03-19  fix coloring of last `n_batch` of prompt, and refactor line input (#221)  [Rickey Bowers Jr]
  * fix coloring of last `n_batch` of prompt, and refactor line input
  * forgot the newline that needs to be sent to the model
  * (per #283) try to force a flush of the color reset in the SIGINT handler (sketched below)
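  A sketch of the forced flush (the handler body is an assumption; strictly speaking printf/fflush are not async-signal-safe, but this is the common pragmatic fix):

      #include <csignal>
      #include <cstdio>
      #include <cstdlib>

      #define ANSI_COLOR_RESET "\x1b[0m"

      static void sigint_handler(int /*signo*/) {
          std::printf(ANSI_COLOR_RESET);
          std::fflush(stdout); // force the reset escape out before the process exits
          std::exit(130);
      }

      // installed somewhere during startup:
      // std::signal(SIGINT, sigint_handler);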
2023-03-19  Support for multiple reverse prompts (#299)  [tjohnman]
  Co-authored-by: Johnman <>
  Co-authored-by: Johnman <tjohnman@github>
2023-03-19  Improved quantize script (#222)  [Suaj Carrot]
  * Improved quantize script: added error handling and the ability to select many models for quantization at once on the command line. Also converted it to Python for generality and extensibility.
  * Fixes and improvements based on Matt's observations: fixed and improved many things in the script based on the reviews made by @mattsta. The parallelization suggestion is still to be revised, but code for it was added (commented out).
  * Small fixes to the previous commit
  * Corrected to use the original glob pattern: the original Bash script uses a glob pattern to match files with endings such as ...bin.0, ...bin.1, etc. That is now translated correctly to Python.
  * Added support for Windows and updated README to use this script: new code sets the name of the quantize binary depending on the platform (quantize.exe on Windows), and README.md now documents this script instead of the Bash one.
  * Fixed a typo and removed shell=True in the subprocess.run call: fixed a typo in the new filenames of the quantized models, and removed the shell=True parameter, as it was conflicting with the list of parameters.
  * Corrected previous commit
  * Small tweak: changed the program name in argparse. The automatic help message was suggesting the usage as literally "$ Quantization Script [arguments]"; it now reads like "$ python3 quantize.py [arguments]".
2023-03-19  Make prompt randomization optional (#300)  [tjohnman]
  Co-authored-by: Johnman <>
2023-03-19  Respect the maximum number of tokens in interactive mode (#298)  [tjohnman]
  Co-authored-by: Johnman <johnman@github>
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-19  Add --ignore-eos parameter (#181)  [slaren]
  Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
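  One simple way such a flag can work (an assumption about the mechanism, not necessarily what #181 does): make the end-of-stream token unselectable before sampling.

      #include <limits>
      #include <vector>

      // Push the EOS logit to -inf so the sampler can never choose it.
      void suppress_eos(std::vector<float> & logits, int token_eos) {
          logits[token_eos] = -std::numeric_limits<float>::infinity();
      }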