Age        | Commit message                                                                   | Author
2023-04-03 | Define non-positive temperature behavior (#720)                                  | Ivan Stepanov
2023-04-03 | Remove torch GPU dependencies from the Docker.full image (#665)                  | bsilvereagle
2023-04-02 | Add a missing step to the gpt4all instructions (#690)                            | Thatcher Chamberlin
2023-04-02 | Added api for getting/setting the kv_cache (#685)                                | Christian Falch
2023-04-02 | ggml : change ne to int64_t (#626)                                               | Marian Cepok
2023-04-02 | examples : add gpt4all script (#658)                                             | Leonardo Neumann
2023-04-02 | llama : do not allocate KV cache for "vocab_only == true" (#682)                 | Stephan Walter
2023-04-02 | make : use -march=native -mtune=native on x86 (#609)                             | Fabian
2023-04-02 | fix default params for examples/main (#697)                                      | Murilo Santana
2023-04-01 | py: huggingface -> Hugging Face (#686)                                           | Ikko Eltociear Ashimine
2023-04-01 | readme: replace termux links with homepage, play store is deprecated (#680)      | rimoliga
2023-04-01 | Show error message when -f fails                                                 | Slaren
2023-03-31 | Enable -std= for cmake builds, fix warnings (#598)                               | Stephan Walter
2023-03-31 | Optimize AVX2 ggml_vec_dot_q4_0 (#642)                                           | slaren
2023-03-31 | Add AVX acceleration (#617)                                                      | perserk
2023-03-31 | py : cleanup the code                                                            | Pavol Rusnak
2023-03-31 | drop quantize.py (now that models are using a single file)                       | Pavol Rusnak
2023-03-30 | readme : update supported models                                                 | Georgi Gerganov
2023-03-30 | Introduce GGML migration tool for new file format                                | Justine Tunney
2023-03-30 | Ensure --mlock works properly with mmap() support                                | Justine Tunney
2023-03-30 | Make loading weights 10-100x faster                                              | Justine Tunney
2023-03-30 | Initial windows support (untested)                                               | Slaren
2023-03-30 | Always initialize mm_addr and mm_length in llama_model                           | Slaren
2023-03-30 | Unmap the file in llama_free                                                     | Slaren
2023-03-30 | Make mmap_file static                                                            | Slaren
2023-03-30 | Fix ggml_init_params in quantize                                                 | Slaren
2023-03-30 | Add mmap support for model files                                                 | Slaren
2023-03-30 | cmake : properly invoke CTest (#629)                                             | Stephan Walter
2023-03-30 | Remove unused variable (#607)                                                    | Casey Primozic
2023-03-30 | make : fix darwin f16c flags check (#615)                                        | david raistrick
2023-03-30 | ggml : fix NEON signs (close #620, #622)                                         | Georgi Gerganov
2023-03-30 | Fix GGML_F32Cx8_STORE in AVX without F16C path (#619)                            | slaren
2023-03-29 | ci : re-enable AVX512 testing (Windows-MSVC) (#584)                              | anzz1
2023-03-29 | ggml : init time on first ggml_init() call                                       | Georgi Gerganov
2023-03-29 | llama : fix compile warnings when reading the vocab                              | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON dequantize_row_q4_1()                                        | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON quantize_row_q4_1()                                          | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON ggml_vec_dot_q4_1()                                          | Georgi Gerganov
2023-03-29 | rename convert_ggml_to_pth.py -> convert-ggml-to-pth.py (#600)                   | Pavol Rusnak
2023-03-29 | Create chat-13B.bat (#592)                                                       | Thérence
2023-03-29 | readme : fix typos                                                               | Georgi Gerganov
2023-03-29 | readme : add GPT4All instructions (close #588)                                   | Georgi Gerganov
2023-03-29 | py : add GPT4All conversion script                                               | Georgi Gerganov
2023-03-29 | llama : use the same threshold for OpenBLAS and ggml thread limiting (#577)      | Maël Kerbiriou
2023-03-29 | add example of re-act pattern (#583)                                             | Tobias Lütke
2023-03-29 | Fix GCC warning about binary literal (#595)                                      | anzz1
2023-03-29 | Fix typo in llama.h (#593)                                                       | anzz1
2023-03-28 | Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)  | anzz1
2023-03-28 | CI: fix subdirectory path globbing (#546)                                        | anzz1
2023-03-28 | llama : fix linkage with mingw (#551)                                            | anzz1