llama.cpp.git: commit log (branch: master)
Age        | Commit message                                                                    | Author
2023-04-02 | Added api for getting/setting the kv_cache (#685)                                 | Christian Falch
2023-04-02 | ggml : change ne to int64_t (#626)                                                | Marian Cepok
2023-04-02 | examples : add gpt4all script (#658)                                              | Leonardo Neumann
2023-04-02 | llama : do not allocate KV cache for "vocab_only == true" (#682)                  | Stephan Walter
2023-04-02 | make : use -march=native -mtune=native on x86 (#609)                              | Fabian
2023-04-02 | fix default params for examples/main (#697)                                       | Murilo Santana
2023-04-01 | py: huggingface -> Hugging Face (#686)                                            | Ikko Eltociear Ashimine
2023-04-01 | readme: replace termux links with homepage, play store is deprecated (#680)       | rimoliga
2023-04-01 | Show error message when -f fails                                                  | Slaren
2023-03-31 | Enable -std= for cmake builds, fix warnings (#598)                                | Stephan Walter
2023-03-31 | Optimize AVX2 ggml_vec_dot_q4_0 (#642)                                            | slaren
2023-03-31 | Add AVX acceleration (#617)                                                       | perserk
2023-03-31 | py : cleanup the code                                                             | Pavol Rusnak
2023-03-31 | drop quantize.py (now that models are using a single file)                        | Pavol Rusnak
2023-03-30 | readme : update supported models                                                  | Georgi Gerganov
2023-03-30 | Introduce GGML migration tool for new file format                                 | Justine Tunney
2023-03-30 | Ensure --mlock works properly with mmap() support                                 | Justine Tunney
2023-03-30 | Make loading weights 10-100x faster                                               | Justine Tunney
2023-03-30 | Initial windows support (untested)                                                | Slaren
2023-03-30 | Always initialize mm_addr and mm_length in llama_model                            | Slaren
2023-03-30 | Unmap the file in llama_free                                                      | Slaren
2023-03-30 | Make mmap_file static                                                             | Slaren
2023-03-30 | Fix ggml_init_params in quantize                                                  | Slaren
2023-03-30 | Add mmap support for model files                                                  | Slaren
2023-03-30 | cmake : properly invoke CTest (#629)                                              | Stephan Walter
2023-03-30 | Remove unused variable (#607)                                                     | Casey Primozic
2023-03-30 | make : fix darwin f16c flags check (#615)                                         | david raistrick
2023-03-30 | ggml : fix NEON signs (close #620, #622)                                          | Georgi Gerganov
2023-03-30 | Fix GGML_F32Cx8_STORE in AVX without F16C path (#619)                             | slaren
2023-03-29 | ci : re-enable AVX512 testing (Windows-MSVC) (#584)                               | anzz1
2023-03-29 | ggml : init time on first ggml_init() call                                        | Georgi Gerganov
2023-03-29 | llama : fix compile warnings when reading the vocab                               | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON dequantize_row_q4_1()                                         | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON quantize_row_q4_1()                                           | Georgi Gerganov
2023-03-29 | ggml : add ARM_NEON ggml_vec_dot_q4_1()                                           | Georgi Gerganov
2023-03-29 | rename convert_ggml_to_pth.py -> convert-ggml-to-pth.py (#600)                    | Pavol Rusnak
2023-03-29 | Create chat-13B.bat (#592)                                                        | Thérence
2023-03-29 | readme : fix typos                                                                | Georgi Gerganov
2023-03-29 | readme : add GPT4All instructions (close #588)                                    | Georgi Gerganov
2023-03-29 | py : add GPT4All conversion script                                                | Georgi Gerganov
2023-03-29 | llama : use the same threshold for OpenBLAS and ggml thread limiting (#577)       | Maël Kerbiriou
2023-03-29 | add example of re-act pattern (#583)                                              | Tobias Lütke
2023-03-29 | Fix GCC warning about binary literal (#595)                                       | anzz1
2023-03-29 | Fix typo in llama.h (#593)                                                        | anzz1
2023-03-28 | Enable Fused-Multiply-Add (FMA) and F16C/CVT16 vector extensions on MSVC (#375)   | anzz1
2023-03-28 | CI: fix subdirectory path globbing (#546)                                         | anzz1
2023-03-28 | llama : fix linkage with mingw (#551)                                             | anzz1
2023-03-28 | ggml : add AVX2 implementation of quantize_row_q4_1 (#515)                        | slaren
2023-03-28 | py : add temporary script to convert old ggml files to newer version (#539)       | thement
2023-03-28 | py : add capabiliy to convert from ggml back to torch or hf format for furthe... | Tai Duc Nguyen