Date        Commit message (Author)
2023-05-24  readme : add docs for chat-persistent.sh (#1568)  (Evan Jones)
2023-05-24  chat-persistent.sh : use bracket expressions in grep (#1564)  (Senemu)
2023-05-23  Fix handling of "invalid property" when creating OpenCL command queue (#1565)  (Maarten ter Huurne)
2023-05-23  OpenCL Token Generation Acceleration (#1459)  (0cc4m)
2023-05-21  examples : add server example with REST API (#1443)  (Steward Garcia)
2023-05-21  make : .PHONY clean (#1553)  (Stefan Sydow)
2023-05-21  ggml : output 3d sizes in ggml_graph_dump_dot()  (Georgi Gerganov)
2023-05-20  ggml : update WASM SIMD  (Georgi Gerganov)
2023-05-20  feature : support blis and other blas implementation (#1536)  (Zenix)
2023-05-20  OpenCL: Fixes for older devices. (#1435)  (Henri Vasserman)
2023-05-20  llama : define magic numbers as integer constants (#1518) (#1520)  (Juuso Alasuutari)
2023-05-20  ggml : add ggml_clamp() (#1539)  (Georgi Gerganov)
2023-05-20  cuda : loading models directly into VRAM, norm calculation on GPU, broadcasti...  (Johannes Gäßler)
2023-05-20  Revert "feature : add blis and other BLAS implementation support (#1502)"  (Georgi Gerganov)
2023-05-20  feature : add blis and other BLAS implementation support (#1502)  (Zenix)
2023-05-20  llama : add llama_init_backend() API (close #1527)  (Georgi Gerganov)
2023-05-20  Fix for mingw (#1462)  (DannyDaemonic)
2023-05-20  llama : fix name shadowing and C4146 (#1526)  (Maxime)
2023-05-20  llama : fix compile warnings in llama_set_state_data()  (Georgi Gerganov)
2023-05-20  ggml : fix scalar implementation of Q4_1 dot  (Georgi Gerganov)
2023-05-19  ggml : use F16 instead of F32 in Q4_0, Q4_1, Q8_0 (#1508)  (Georgi Gerganov)
2023-05-19  tests : add missing header  (Georgi Gerganov)
2023-05-19  examples : add persistent chat (#1495)  (Evan Jones)
2023-05-19  main : make reverse prompt option act as a stop token in non-interactive mode...  (Jason McCartney)
2023-05-19  readme : adds WizardLM to the list of supported models (#1485)  (David Kennedy)
2023-05-19  minor : fix compile warnings  (Georgi Gerganov)
2023-05-18  make kv_f16 the default for api users (#1517)  (Erik Scholz)
2023-05-18  Fixes #1511 lambda issue for w64devkit (mingw) (#1513)  (DannyDaemonic)
2023-05-17  Remove unused n_parts parameter (#1509)  (Stephan Walter)
2023-05-17  benchmark-matmul: Print the average of the test results (#1490)  (rankaiyx)
2023-05-17  convert.py: Support models which are stored in a single pytorch_model.bin (#1...  (Tom Jobbins)
2023-05-16  ~7% faster Q5_1 AVX2 code (#1477)  (Ilya Kurdyukov)
2023-05-16  define default model path once, sync path with readme (#1366)  (András Salamon)
2023-05-16  Add alternate include path for openblas (#1476)  (sandyiscool)
2023-05-15  fix get_num_physical_cores() (#1436)  (zrm)
2023-05-14  benchmark-matmul: fix clang-tidy issues, report results in GFLOPS (#1458)  (slaren)
2023-05-14  cuda : deduplicated dequantization code (#1453)  (Johannes Gäßler)
2023-05-14  ggml : alternative fix for race condition bug in non-inplace ggml_compute_for...  (xaedes)
2023-05-14  ggml : various fixes (#1450)  (Georgi Gerganov)
2023-05-14  ggml : add AVX support based on AVX2 code (#1430)  (katsu560)
2023-05-14  ggml : add GGML_QNT_VERSION to track quantization format changes  (Georgi Gerganov)
2023-05-13  cuda : fix convert function (#1412)  (Georgi Gerganov)
2023-05-13  make : fix PERF build with cuBLAS  (Georgi Gerganov)
2023-05-13  llama : fix unused warning  (Georgi Gerganov)
2023-05-13  ggml : multi-thread mul and diag_mask ops (#1428)  (Georgi Gerganov)
2023-05-13  ggml : GPU-accelerated token generation (#1412)  (Johannes Gäßler)
2023-05-13  ggml : implement backward pass for llama + small training-llama-from-scratch ...  (xaedes)
2023-05-13  ggml : sync alibi fix from ggml repo  (Georgi Gerganov)
2023-05-13  Adding SSE instructions to ggml_vec_dot_q4_0_q8_0 (#1413)  (3ooabkhxtn)
2023-05-13  llama : fix various warnings  (Georgi Gerganov)