Age  Commit message  Author
2023-05-27  Add documentation about CLBlast (#1604)  [Henri Vasserman]
    Installing, compiling and using.
2023-05-27  [CI] Fix openblas (#1613)  [Henri Vasserman]
    * Fix OpenBLAS build
    * Fix the `LLAMA_BLAS_VENDOR` CMake variable, which should be a string, not a boolean.
2023-05-27  ggml : add ggml_tensor_overhead()  [Georgi Gerganov]
2023-05-27  [CI] CLBlast: Fix directory name (#1606)  [Henri Vasserman]
2023-05-27  ggml : sync ggml core (minor additions, e.g. ggml_get_tensor_by_name())  [Georgi Gerganov]
2023-05-25  Some improvements to loading the session with --prompt-cache (#1550)  [Kerfuffle]
    Improvements to loading the session with `--prompt-cache` in the `main` example:
    1. Fix an issue where the `--seed` parameter was ignored when loading a cached prompt.
    2. When loading a cached prompt, you previously had to specify the saved prompt (or a prefix of it) again. This change makes the cached prompt the default when the user doesn't specify one.
2023-05-26  cuda : performance optimizations (#1530)  [Johannes Gäßler]
    * xor hack
    * block y dim
    * loop unrolling
    * Fixed cmake LLAMA_CUDA_BY option
    * Removed hipblas compatibility code
    * Define GGML_CUDA_DMMV_BLOCK_Y if not defined
    * Fewer iters, more ops per iter
    * Renamed DMMV X/Y compilation options
2023-05-24  Update CLBlast to 1.6.0 (#1580)  [Henri Vasserman]
    * Update CLBlast to 1.6.0
2023-05-24  readme : add docs for chat-persistent.sh (#1568)  [Evan Jones]
    * readme : add docs for chat-persistent.sh
    * Update README.md
2023-05-24  chat-persistent.sh : use bracket expressions in grep (#1564)  [Senemu]
2023-05-23  Fix handling of "invalid property" when creating OpenCL command queue (#1565)  [Maarten ter Huurne]
    The `clCreateCommandQueue()` function will return the code `CL_INVALID_QUEUE_PROPERTIES` when passed unsupported properties, not `CL_INVALID_PROPERTY` as the original code was checking for.
2023-05-23  OpenCL Token Generation Acceleration (#1459)  [0cc4m]
    * Move back to C++ for OpenCL
    * Refactor OpenCL code to work more like the CUDA code, add missing functions
    * Deduplicate dequant kernels
    * Add OpenCL compile options
    * Use compile args for preprocessing constants
    * Restore default platform + device selection by id behavior
    Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
    Co-authored-by: Henri Vasserman <henv@hot.ee>
2023-05-21  examples : add server example with REST API (#1443)  [Steward Garcia]
    * Added httplib support
    * Added readme for server example
    * Fixed some bugs
    * Fixed the build error on MacBook
    * Changed json11 to nlohmann-json
    * Removed some whitespace
    * Removed trailing whitespace
    * Added support for custom prompts and more functions
    * Some corrections; added as a CMake option
2023-05-21  make : .PHONY clean (#1553)  [Stefan Sydow]
2023-05-21  ggml : output 3d sizes in ggml_graph_dump_dot()  [Georgi Gerganov]
2023-05-20  ggml : update WASM SIMD  [Georgi Gerganov]
2023-05-20  feature : support blis and other blas implementation (#1536)  [Zenix]
    * feature: add blis support
    * feature: allow all BLA_VENDOR to be assigned in cmake arguments, align with whisper.cpp PR 927
    * fix: version detection for BLA_SIZEOF_INTEGER, recover min version of cmake
    * Fix typo in INTEGER
    * Fix: blas changes on CI
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-20  OpenCL: Fixes for older devices. (#1435)  [Henri Vasserman]
    * Remove `constant`
    * Rewrite platform and device selection
    * Fix Q8_0
2023-05-20  llama : define magic numbers as integer constants (#1518) (#1520)  [Juuso Alasuutari]
    The underlying representation of multibyte character literals is implementation-defined. This could, at least in principle, cause cross-build data export/import issues independent of endianness. Define magic numbers as integer literals to be on the safe side.
    Signed-off-by: Juuso Alasuutari <juuso.alasuutari@gmail.com>
2023-05-20  ggml : add ggml_clamp() (#1539)  [Georgi Gerganov]
    * ggml : add ggml_clamp()
    * ggml : indentation
2023-05-20  cuda : loading models directly into VRAM, norm calculation on GPU, broadcasting for ggml_mul (#1483)  [Johannes Gäßler]
    * Broadcasting for ggml_mul
    * CUDA kernel for ggml_mul, norms in VRAM
    * GPU weights not in RAM, direct loading with cuFile
    * fixup! GPU weights not in RAM, direct loading with cuFile
    * fixup! GPU weights not in RAM, direct loading with cuFile
    * define default model path once, sync path with readme (#1366)
    * ~7% faster Q5_1 AVX2 code (#1477)
    * convert.py: Support models which are stored in a single pytorch_model.bin (#1469)
      * Support models in a single pytorch_model.bin
      * Remove spurious line with typo
    * benchmark-matmul: Print the average of the test results (#1490)
    * Remove unused n_parts parameter (#1509)
    * Fixes #1511 lambda issue for w64devkit (mingw) (#1513)
      * Fix for w64devkit and mingw
    * make kv_f16 the default for api users (#1517)
    * minor : fix compile warnings
    * readme : adds WizardLM to the list of supported models (#1485)
    * main : make reverse prompt option act as a stop token in non-interactive mode (#1032)
      * Make reverse prompt option act as a stop token in non-interactive scenarios
      * Making requested review changes
      * Update gpt_params_parse and fix a merge error
      * Revert "Update gpt_params_parse and fix a merge error"
        This reverts commit 2bb2ff1748513591ad45b175a75ed1d8089d84c8.
      * Update gpt_params_parse and fix a merge error take 2
    * examples : add persistent chat (#1495)
      * examples : add persistent chat
      * examples : fix whitespace
    * tests : add missing header
    * ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)
      * ggml : use F16 instead of F32 in Q4_0, Q4_1 and Q8_0
      * llama : bump LLAMA_FILE_VERSION to 3
      * cuda : update Q4 and Q8 dequantize kernels
      * ggml : fix AVX dot products
      * readme : update performance table + hot topics
    * ggml : fix scalar implementation of Q4_1 dot
    * llama : fix compile warnings in llama_set_state_data()
    * llama : fix name shadowing and C4146 (#1526)
      * Fix name shadowing and C4146
      * Fix if macros not using defined when required
      * Update llama-util.h
      * Code style
    * Fix for mingw (#1462)
    * llama : add llama_init_backend() API (close #1527)
    * feature : add blis and other BLAS implementation support (#1502)
      * feature: add blis support
      * feature: allow all BLA_VENDOR to be assigned in cmake arguments. align with whisper.cpp pr 927
      * fix: version detection for BLA_SIZEOF_INTEGER, recover min version of cmake
      * Fix typo in INTEGER
    * Revert "feature : add blis and other BLAS implementation support (#1502)"
      This reverts commit 07e9ace0f9da424d82e75df969642522880feb92.
    * GPU weights not in RAM, direct loading with cuFile
    * llama : code style fixes + progress print fix
    * ggml : ggml_mul better broadcast support
    * cmake : workarounds for cufile when CMake version < 3.25
    * gg rebase fixup
    * Loop in llama.cpp, fixed progress callback
    * Attempt clang-tidy fix
    * llama : fix vram size computation
    * Add forgotten fclose()
    Co-authored-by: András Salamon <ott2@users.noreply.github.com>
    Co-authored-by: Ilya Kurdyukov <59548320+ilyakurdyukov@users.noreply.github.com>
    Co-authored-by: Tom Jobbins <784313+TheBloke@users.noreply.github.com>
    Co-authored-by: rankaiyx <rankaiyx@rankaiyx.com>
    Co-authored-by: Stephan Walter <stephan@walter.name>
    Co-authored-by: DannyDaemonic <DannyDaemonic@gmail.com>
    Co-authored-by: Erik Scholz <Green-Sky@users.noreply.github.com>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    Co-authored-by: David Kennedy <dakennedyd@gmail.com>
    Co-authored-by: Jason McCartney <jmac@theroot.org>
    Co-authored-by: Evan Jones <evan.q.jones@gmail.com>
    Co-authored-by: Maxime <672982+maximegmd@users.noreply.github.com>
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    Co-authored-by: Zenix <zenixls2@gmail.com>
2023-05-20  Revert "feature : add blis and other BLAS implementation support (#1502)"  [Georgi Gerganov]
    This reverts commit 07e9ace0f9da424d82e75df969642522880feb92.
2023-05-20  feature : add blis and other BLAS implementation support (#1502)  [Zenix]
    * feature: add blis support
    * feature: allow all BLA_VENDOR to be assigned in cmake arguments, align with whisper.cpp PR 927
    * fix: version detection for BLA_SIZEOF_INTEGER, recover min version of cmake
    * Fix typo in INTEGER
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-20  llama : add llama_init_backend() API (close #1527)  [Georgi Gerganov]
2023-05-20  Fix for mingw (#1462)  [DannyDaemonic]
2023-05-20  llama : fix name shadowing and C4146 (#1526)  [Maxime]
    * Fix name shadowing and C4146
    * Fix if macros not using defined when required
    * Update llama-util.h
    * Code style
    Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-20  llama : fix compile warnings in llama_set_state_data()  [Georgi Gerganov]
2023-05-20  ggml : fix scalar implementation of Q4_1 dot  [Georgi Gerganov]
2023-05-19  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)  [Georgi Gerganov]
    * ggml : use F16 instead of F32 in Q4_0, Q4_1 and Q8_0
    * llama : bump LLAMA_FILE_VERSION to 3
    * cuda : update Q4 and Q8 dequantize kernels
    * ggml : fix AVX dot products
    * readme : update performance table + hot topics
2023-05-19  tests : add missing header  [Georgi Gerganov]
2023-05-19  examples : add persistent chat (#1495)  [Evan Jones]
    * examples : add persistent chat
    * examples : fix whitespace
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-19  main : make reverse prompt option act as a stop token in non-interactive mode (#1032)  [Jason McCartney]
    * Make reverse prompt option act as a stop token in non-interactive scenarios
    * Making requested review changes
    * Update gpt_params_parse and fix a merge error
    * Revert "Update gpt_params_parse and fix a merge error"
      This reverts commit 2bb2ff1748513591ad45b175a75ed1d8089d84c8.
    * Update gpt_params_parse and fix a merge error take 2
2023-05-19  readme : adds WizardLM to the list of supported models (#1485)  [David Kennedy]
2023-05-19  minor : fix compile warnings  [Georgi Gerganov]
2023-05-18  make kv_f16 the default for api users (#1517)  [Erik Scholz]
2023-05-18  Fixes #1511 lambda issue for w64devkit (mingw) (#1513)  [DannyDaemonic]
    * Fix for w64devkit and mingw
2023-05-17  Remove unused n_parts parameter (#1509)  [Stephan Walter]
2023-05-17  benchmark-matmul: Print the average of the test results (#1490)  [rankaiyx]
2023-05-17  convert.py: Support models which are stored in a single pytorch_model.bin (#1469)  [Tom Jobbins]
    * Support models in a single pytorch_model.bin
    * Remove spurious line with typo
2023-05-16  ~7% faster Q5_1 AVX2 code (#1477)  [Ilya Kurdyukov]
2023-05-16  define default model path once, sync path with readme (#1366)  [András Salamon]
2023-05-16  Add alternate include path for openblas (#1476)  [sandyiscool]
    In some Linux distributions (Fedora, for example), the include path for OpenBLAS is '/usr/local/include'.
2023-05-15  fix get_num_physical_cores() (#1436)  [zrm]
    * fix get_num_physical_cores(), which had been broken on complex topologies because "cpu cores" in /proc/cpuinfo is per-"physical id"
    * Add spaces to maintain consistent formatting
    Co-authored-by: slaren <ddevesa@gmail.com>
2023-05-14  benchmark-matmul: fix clang-tidy issues, report results in GFLOPS (#1458)  [slaren]
    * benchmark-matmul: fix command line parsing, replace macros with functions, report results in GFLOPS
2023-05-14  cuda : deduplicated dequantization code (#1453)  [Johannes Gäßler]
2023-05-14  ggml : alternative fix for race condition bug in non-inplace ggml_compute_forward_diag_mask_f32 (#1454)  [xaedes]
    * fix race condition bug in non-inplace ggml_compute_forward_diag_mask_f32: the memcpy needs to be synchronized across threads to avoid race conditions, so do it in the INIT phase
    * remove trailing whitespace
    * Update ggml.c
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-05-14  ggml : various fixes (#1450)  [Georgi Gerganov]
    - `ggml_rope()`
    - `ggml_diag_mask_inf()` multi-threaded
    - compatibility with scratch buffers
2023-05-14  ggml : add AVX support based on AVX2 code (#1430)  [katsu560]
2023-05-14  ggml : add GGML_QNT_VERSION to track quantization format changes  [Georgi Gerganov]
    https://github.com/ggerganov/ggml/issues/150#issuecomment-1546625668
2023-05-13  cuda : fix convert function (#1412)  [Georgi Gerganov]