Age  Commit message  Author
2023-03-25  CMake / CI additions (#497)  [anzz1]
* CMake: Add AVX512 option
* CI: Add AVX/AVX512 builds (Windows); AVX512 tests can only be run when the worker happens to support it, but building works anyway
* CMake: Fix sanitizer linkage (merged #468)
* CI: Add sanitizer builds (Ubuntu)
* CI: Fix release tagging (change @zendesk/action-create-release to @anzz1/action-create-release until upstream PR "Added commitish as input" zendesk/action-create-release#32 is merged)
2023-03-25  (Windows) Set console to UTF-8 on init (#420)  [anzz1]
Sets the console codepage to 65001 (CP_UTF8) on start for both input and output; this should fix problems with UTF-8 characters.
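A minimal sketch of the Win32 calls involved (SetConsoleCP and SetConsoleOutputCP are the real APIs; the wrapper function here is illustrative):

```cpp
#include <windows.h>

// Set both the input and output console codepages to UTF-8 (65001).
// Saving/restoring the previous codepages is omitted in this sketch.
void console_init_utf8() {
    SetConsoleCP(CP_UTF8);        // input:  codepage 65001
    SetConsoleOutputCP(CP_UTF8);  // output: codepage 65001
}
```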
2023-03-25  Fix colors enabling on WIN32  [Georgi Gerganov]
2023-03-25  If n_predict == -1, generate forever  [Georgi Gerganov]
2023-03-25  Infinite generation via context swapping (#71)  [Georgi Gerganov]
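A rough sketch of how the two entries above combine (variable names are hypothetical; the real loop lives in the main example): n_predict == -1 removes the generation limit, and when the context fills up, the first n_keep tokens are kept and only the last half of the remaining history is re-fed, so generation never has to stop.

```cpp
#include <vector>

// Hypothetical sketch: unlimited generation with context swapping.
// n_ctx: context window size; n_keep: prompt tokens to pin at the front.
void generate(std::vector<int> & tokens, int n_predict, int n_ctx, int n_keep) {
    for (int i = 0; n_predict < 0 || i < n_predict; ++i) {  // -1 => forever
        if ((int) tokens.size() >= n_ctx) {
            const int n_left = (int) tokens.size() - n_keep;
            // keep the pinned prefix, then only the last half of the rest
            std::vector<int> next(tokens.begin(), tokens.begin() + n_keep);
            next.insert(next.end(), tokens.end() - n_left / 2, tokens.end());
            tokens.swap(next);
        }
        // tokens.push_back(sample_next_token(tokens));  // model call elided
    }
}
```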
2023-03-25  Cleanup STL headers + fix embedding examples + minor stuff  [Georgi Gerganov]
2023-03-25  Move chat scripts into "./examples"  [Georgi Gerganov]
2023-03-25  Add AVX2 implementation of dequantize_row_q4_1 (#505)  [slaren]
2023-03-25  Overhaul the examples structure  [Georgi Gerganov]
- main -> examples
- utils -> examples (renamed to "common")
- quantize -> examples
- separate tools for "perplexity" and "embedding"
Hope I didn't break something!
2023-03-25  Retire the ggml_mul_mat() branch for transposed src0 (#500)  [Georgi Gerganov]
* Retire the ggml_mul_mat() branch for transposed src0:
  - It can always be made contiguous with ggml_cpy()
  - The code is now simplified
  - The results are deterministic with respect to the number of threads
* SIMD-ify dequantize_row_q4_0() for ARM_NEON (#502)
* Attempt to SIMD-ify dequantize_row_q4_0() for ARM_NEON
* Fix dequantization - forgot to interleave the quants
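As a rough illustration of the replacement pattern (a sketch against the ggml API of the time; the helper name is hypothetical), a transposed operand is copied into a contiguous tensor before the multiplication:

```cpp
#include "ggml.h"

// Hypothetical sketch: instead of a special transposed-src0 branch inside
// ggml_mul_mat(), make the operand contiguous with ggml_cpy() first.
struct ggml_tensor * mul_mat_with_transposed_src0(
        struct ggml_context * ctx,
        struct ggml_tensor  * src0,
        struct ggml_tensor  * src1) {
    struct ggml_tensor * t = ggml_transpose(ctx, src0);  // non-contiguous view
    struct ggml_tensor * c = ggml_cpy(ctx, t,
            ggml_new_tensor_2d(ctx, GGML_TYPE_F32, t->ne[0], t->ne[1]));
    return ggml_mul_mat(ctx, c, src1);
}
```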
2023-03-25  Disable prompt verbosity by default and add option to enable (#480)  [Georgi Gerganov]
2023-03-25  Add AVX2 implementation of dequantize_row_q4_0 (#467)  [slaren]
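For context, a scalar sketch of what these AVX2 paths vectorize, assuming the 4-bit block layout llama.cpp used at the time (32 weights per block: one float scale plus 16 bytes of packed nibbles; the struct name is illustrative). The q4_1 variant additionally stores a per-block minimum m, giving x ~= nibble * d + m.

```cpp
#include <cstddef>
#include <cstdint>

#define QK 32  // weights per quantization block (assumed layout of the era)

struct block_q4_0 {
    float   d;           // per-block scale
    uint8_t qs[QK / 2];  // packed 4-bit quants, two per byte
};

// Scalar reference: each weight is (nibble - 8) * d.
void dequantize_row_q4_0_ref(const block_q4_0 * x, float * y, size_t n) {
    for (size_t i = 0; i < n / QK; ++i) {
        const float d = x[i].d;
        for (int j = 0; j < QK / 2; ++j) {
            const uint8_t b = x[i].qs[j];
            y[i * QK + 2 * j + 0] = ((b & 0x0F) - 8) * d;
            y[i * QK + 2 * j + 1] = ((b >> 4)   - 8) * d;
        }
    }
}
```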
2023-03-25  Don't interfere with BLAS for large prompts by running only 1 thread  [Georgi Gerganov]
2023-03-25  Add longer DAN prompt for testing big batch numbers  [Georgi Gerganov]
2023-03-25  Add timings for the prompt evaluation (#478)  [slaren]
2023-03-25  Remove obsolete information from README  [Georgi Gerganov]
2023-03-25  Remove obsolete assert and fix compiler warning  [Georgi Gerganov]
2023-03-25  Fix nasty bug in ggml_compute_forward_mul_mat_f32() and reenable BLAS  [Georgi Gerganov]
2023-03-25  bounds checking for input prefix (#492)  [anzz1]
2023-03-25  feat: '--in-prefix STRING' option (#426)  [anzz1]
Prefix user inputs with a string
2023-03-25  Add support for file load progress reporting callbacks (#434)  [Jed Fox]
* File load progress reporting
* Move llama_progress_handler into llama_context_params
* Renames
* Use seekg to find file size instead
* More correct load progress
* Call progress callback more frequently
* Fix typo
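A sketch of the callback shape this introduces (the names below match my reading of the llama.h of the time, but treat the exact signatures as an assumption):

```cpp
#include <cstdio>

// Assumed callback shape: invoked periodically during model load with
// progress in [0, 1] plus a user-supplied context pointer.
typedef void (*llama_progress_callback)(float progress, void * ctx);

static void print_progress(float progress, void * /*user_data*/) {
    fprintf(stderr, "\rloading model: %3.0f%%", progress * 100.0f);
}

// Usage sketch, assuming the fields live in llama_context_params:
//   llama_context_params params = llama_context_default_params();
//   params.progress_callback           = print_progress;
//   params.progress_callback_user_data = nullptr;
```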
2023-03-25  Add missing struct annotation (#483)  [Doomsdayrs]
`llama_sample_top_p_top_k` was missing the struct annotation on line 126. This caused a compiler issue when the header was parsed by the Kotlin C interop generator. This commit fixes the issue by adding the struct annotation.
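The underlying issue: plain C (unlike C++) requires the `struct` tag when no typedef is in scope, so C-based binding generators reject the bare type name. An illustrative before/after (parameter list abbreviated here, not the exact llama.h declaration):

```cpp
struct llama_context;   // opaque handle
typedef int llama_token;

// before: llama_token llama_sample_top_p_top_k(llama_context * ctx, ...);
// after (parses as both C and C++):
llama_token llama_sample_top_p_top_k(struct llama_context * ctx,
                                     int top_k, float top_p);
```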
2023-03-25  Fix crash for 65B model with pre-allocated memory (#485)  [Chris Kuehl]
2023-03-24  Disable BLAS altogether - the bug is not just for quantized mat mul  [Georgi Gerganov]
2023-03-24  Disable BLAS branch in mul_mat - seems there is a bug  [Georgi Gerganov]
2023-03-24  Immediately start processing the prompt before user input has been provided (#476)  [Georgi Gerganov]
2023-03-24  Reduce memory usage and allocate enough memory for largest context (#473)  [Georgi Gerganov]
* Reduce memory usage and allocate enough memory for large contexts
* Simpler scratch buffer usage
* Reenable BLAS for quantized mul_mat
* Fix number of layers in 30B and 65B
* Fix KV cache size for F32
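A hedged sketch of the scratch-buffer idea, assuming the ggml scratch API of that era (ggml_set_scratch; the buffer handling here is illustrative): tensors built while a scratch is active are allocated from the caller's buffer, so per-layer temporaries can reuse the same memory instead of growing the main context.

```cpp
#include "ggml.h"
#include <cstdint>
#include <vector>

// Illustrative sketch: route intermediate allocations into a reusable
// scratch buffer, then switch back to normal context allocation.
void build_layer(struct ggml_context * ctx, std::vector<uint8_t> & scratch) {
    ggml_set_scratch(ctx, { 0, scratch.size(), scratch.data() });
    // ... create the layer's temporary tensors here ...
    ggml_set_scratch(ctx, { 0, 0, nullptr });  // back to normal allocation
}
```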
2023-03-24  Temporarily bump the memory buffer size - hopefully fix issues from 483bab2e  [Georgi Gerganov]
2023-03-24  Update README.md (#444)  [Gary Mulder]
Added explicit **bolded** instructions clarifying that people need to request access to models from Facebook and never through this repo.
2023-03-24  fix instruct mode (#445)  [rabidcopy]
Changes to EOS behavior in interactive and reverse prompt handling broke instruct mode by erroneously injecting instruct mode's reverse prompt and an extra newline.
2023-03-24  Properly free llama_context on failure  [Georgi Gerganov]
2023-03-24  additional optimizations for POWER9 (#454)  [Cameron Kaiser]
2023-03-24  Support calling mlock() on loaded model data on Linux and macOS (#453)  [comex]
* Support calling mlock() on loaded model data on Linux and macOS

  This is enabled by a new --mlock command line option. Using mlock() disables swapping and memory compression for the model data. Doing so can be useful on systems where the model takes up a large fraction of system RAM. In my experience, macOS is quite eager to start compressing llama.cpp's memory, which then makes it halt for a few seconds while it decompresses, even with a model that uses "only" 25GB out of 32GB.

  Of course, this comes at the cost of forcing the system to swap or compress other processes' memory instead, so it needs to be used with care and shouldn't be enabled by default.

  In theory it should be possible to support this on Windows as well using VirtualLock(), but I'm not much of a Windows user.

* Update llama.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
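A minimal sketch of the POSIX call involved (mlock() is the real API; the wrapper is illustrative). Note that mlock() can fail, for example when RLIMIT_MEMLOCK is too low, so the return value should be checked:

```cpp
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

// Pin the mapped model data in RAM so it cannot be swapped or compressed.
bool lock_model_memory(const void * addr, size_t size) {
    if (mlock(addr, size) != 0) {  // may fail, e.g. RLIMIT_MEMLOCK too low
        perror("mlock");
        return false;
    }
    return true;
}
```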
2023-03-24  Add embedding mode with arg flag. Currently working (#282)  [Luciano]
* working but ugly
* add arg flag, not working on embedding mode
* typo
* Working! Thanks to @nullhook
* make params argument instead of hardcoded boolean, remove useless time check
* start doing the instructions but not finished (this probably doesn't compile)
* Embeddings extraction support

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
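A sketch of reading the extracted embeddings back out, assuming accessors along the lines of llama_get_embeddings() and llama_n_embd() (the exact names and signatures are an assumption here):

```cpp
#include <cstdio>

struct llama_context;  // opaque handle

// Assumed API shape: after evaluation with embedding mode enabled, the
// context exposes a float vector of size n_embd.
extern "C" const float * llama_get_embeddings(struct llama_context * ctx);
extern "C" int           llama_n_embd(struct llama_context * ctx);

void print_embedding(struct llama_context * ctx) {
    const float * emb = llama_get_embeddings(ctx);
    const int     n   = llama_n_embd(ctx);
    for (int i = 0; i < n; ++i) {
        printf("%f ", emb[i]);
    }
    printf("\n");
}
```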
2023-03-24  Add link to Roadmap discussion  [Georgi Gerganov]
2023-03-24  Revert "Fix memory allocation issues and seg faults"  [Georgi Gerganov]
This reverts commit 4870e455b3653f7d7769fa5772b2c90ffad088df. Will provide the correct fix later.
2023-03-24  Fix memory allocation issues and seg faults  [Georgi Gerganov]
2023-03-23  Avoid the transposed X branch in the Z = X * Y matrix multiplication (#439)  [Georgi Gerganov]
Should make results reproducible for different numbers of threads and batch sizes
2023-03-23  Fix quantize script not finding models in parent directory (#428)  [Jed Fox]
2023-03-23  Remove obsolete command from Docker script  [Georgi Gerganov]
2023-03-23  Obsolete  [Georgi Gerganov]
2023-03-23  Replace EOS with newline to prevent context/memory being flushed by EOS in interactive mode (#333)  [rabidcopy]
* Improve interactive mode's coherence after EOS

  Aims to improve coherence and the ability to resume the interactive session when the user is given input back after an end-of-text token is reached. Not sure what token 13 is or why it seems to help. See the conversation for examples.

* Make newline token a constant
* dynamically determine newline token
* relocate previous newline token const
* cleanup whitespace
* print a new line on end of text in interactive (this may need to be looked into further when not using a reverse prompt)
* only print manual newline with reverse prompt (fix formatting of reverse prompts so they don't end up at the end of the current line, while not introducing unnecessary new lines otherwise)
* alternate approach to replace end of text tokens
* Inject the reverse prompt again after eos in interactive mode
* tokenize reverse prompt when needed (makes this PR compatible with https://github.com/ggerganov/llama.cpp/pull/330)
* tokenize and inject only first reverse prompt (thanks to tjohnman)
* tokenize first reverse prompt once
* add newline token
* add newline token
* tokenize/inject reverse prompt for refactor (this doesn't seem right though)
* tokenize nothing for antiprompt if no reverse
* Update main.cpp
* Update main.cpp
* tokenize and inject reverse prompt as needed (this doesn't seem to work if the reverse prompt is tokenized outside earlier on)
* not needed
* remove newline token
* remove newline token
* tokenize newline token
* add space to comment
* Update main.cpp (Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>)

Co-authored-by: Slaren <2141330+slaren@users.noreply.github.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
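A sketch of the core idea, with hypothetical names (the commit notes token 13, which is the newline token in the LLaMA vocabulary): when an end-of-text token is sampled in interactive mode, substitute a newline so the session continues instead of flushing the context.

```cpp
// Hypothetical sketch: in interactive mode, replace a sampled end-of-text
// token with the newline token so the conversation can continue.
typedef int llama_token;

llama_token filter_eos(llama_token id, llama_token token_eos,
                       llama_token token_newline, bool interactive) {
    if (interactive && id == token_eos) {
        return token_newline;  // token 13 in the LLaMA vocabulary
    }
    return id;
}
```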
2023-03-23  Fix GPTQ converter (#423)  [Timmy Knight]
* Fix GPTQ converter
* Fix comment

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-03-23  Generate library with CMake (#430)  [nusu-github]
* Generate library with CMake: BUILD_SHARED_LIBS allows the llama library to be generated
* Turn ON PIC when BUILD_SHARED_LIBS is ON
2023-03-23  Command line args bounds checking (#424)  [anzz1]
* Command line args bounds checking
* Unknown and invalid param exit codes changed from 0 to 1
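A sketch of the pattern (a hypothetical parsing loop, not the exact code): any flag that consumes a value must check that the value actually exists before reading argv, and unrecognized flags exit non-zero.

```cpp
#include <cstdio>
#include <cstring>
#include <string>

// Hypothetical sketch of bounds-checked argument parsing: options that take
// a value verify the next argv slot exists, and unknown options exit 1.
int main(int argc, char ** argv) {
    std::string model;
    for (int i = 1; i < argc; ++i) {
        if (strcmp(argv[i], "-m") == 0) {
            if (++i >= argc) {                     // bounds check
                fprintf(stderr, "error: -m requires a value\n");
                return 1;                          // invalid param -> exit 1
            }
            model = argv[i];
        } else {
            fprintf(stderr, "error: unknown argument: %s\n", argv[i]);
            return 1;                              // unknown param -> exit 1
        }
    }
    printf("model: %s\n", model.c_str());
    return 0;
}
```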
2023-03-23  Fix Nix build  [Ben Siraphob]
2023-03-23  Revert "Delete SHA256SUMS for now" (#429)  [Stephan Walter]
* Revert "Delete SHA256SUMS for now (#416)"

  This reverts commit 8eea5ae0e5f31238a97c79ea9103c27647380e37.

* Remove ggml files until they can be verified
* Remove alpaca json
* Also add model/tokenizer.model to SHA256SUMS + update README

Co-authored-by: Pavol Rusnak <pavol@rusnak.io>
2023-03-23  Fix Makefile echo escape codes (by removing them). (#418)  [Kerfuffle]
2023-03-23  Move model section from issue template to README.md (#421)  [Gary Mulder]
* Update custom.md
* Removed Model section as it is better placed in README.md
* Updates to README.md model section
* Inserted text that was removed from the issue template about obtaining models from FB, and links to papers describing the various models
* Removed IPFS download links for the Alpaca 7B models, as these look to be in the old data format and probably shouldn't be directly linked to anyway
* Updated the perplexity section to point at the "Perplexity scores" discussion (#406)
2023-03-23  Delete SHA256SUMS for now (#416)  [anzz1]
Delete this for now to avoid confusion, since it contains some wrong checksums from the old tokenizer format. Re-add after #374 is resolved.