llama.cpp.git : commit log (branch: master)
Age         Commit message  [Author]

2023-04-05  make : missing host optimizations in CXXFLAGS (#763)  [Ivan Stepanov]
2023-04-05  readme : update with CMake and windows example (#748)  [Adithya Balaji]
2023-04-05  examples : add Miku.sh (#724)  [at8u]
2023-04-05  Add Accelerate/BLAS when using Swift (#765)  [Andrew Duffy]
2023-04-03  Windows: reactive sigint handler after each Ctrl-C (#736)  [mgroeber9110]
2023-04-03  10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654)  [SebastianApel]
2023-04-03  Define non-positive temperature behavior (#720)  [Ivan Stepanov]
2023-04-03  Remove torch GPU dependencies from the Docker.full image (#665)  [bsilvereagle]
2023-04-02  Add a missing step to the gpt4all instructions (#690)  [Thatcher Chamberlin]
2023-04-02  Added api for getting/setting the kv_cache (#685)  [Christian Falch]
2023-04-02  ggml : change ne to int64_t (#626)  [Marian Cepok]
2023-04-02  examples : add gpt4all script (#658)  [Leonardo Neumann]
2023-04-02  llama : do not allocate KV cache for "vocab_only == true" (#682)  [Stephan Walter]
2023-04-02  make : use -march=native -mtune=native on x86 (#609)  [Fabian]
2023-04-02  fix default params for examples/main (#697)  [Murilo Santana]
2023-04-01  py: huggingface -> Hugging Face (#686)  [Ikko Eltociear Ashimine]
2023-04-01  readme: replace termux links with homepage, play store is deprecated (#680)  [rimoliga]
2023-04-01  Show error message when -f fails  [Slaren]
2023-03-31  Enable -std= for cmake builds, fix warnings (#598)  [Stephan Walter]
2023-03-31  Optimize AVX2 ggml_vec_dot_q4_0 (#642)  [slaren]
2023-03-31  Add AVX acceleration (#617)  [perserk]
2023-03-31  py : cleanup the code  [Pavol Rusnak]
2023-03-31  drop quantize.py (now that models are using a single file)  [Pavol Rusnak]
2023-03-30  readme : update supported models  [Georgi Gerganov]
2023-03-30  Introduce GGML migration tool for new file format  [Justine Tunney]
2023-03-30  Ensure --mlock works properly with mmap() support  [Justine Tunney]
2023-03-30  Make loading weights 10-100x faster  [Justine Tunney]
2023-03-30  Initial windows support (untested)  [Slaren]
2023-03-30  Always initialize mm_addr and mm_length in llama_model  [Slaren]
2023-03-30  Unmap the file in llama_free  [Slaren]
2023-03-30  Make mmap_file static  [Slaren]
2023-03-30  Fix ggml_init_params in quantize  [Slaren]
2023-03-30  Add mmap support for model files  [Slaren]
2023-03-30  cmake : properly invoke CTest (#629)  [Stephan Walter]
2023-03-30  Remove unused variable (#607)  [Casey Primozic]
2023-03-30  make : fix darwin f16c flags check (#615)  [david raistrick]
2023-03-30  ggml : fix NEON signs (close #620, #622)  [Georgi Gerganov]
2023-03-30  Fix GGML_F32Cx8_STORE in AVX without F16C path (#619)  [slaren]
2023-03-29  ci : re-enable AVX512 testing (Windows-MSVC) (#584)  [anzz1]
2023-03-29  ggml : init time on first ggml_init() call  [Georgi Gerganov]
2023-03-29  llama : fix compile warnings when reading the vocab  [Georgi Gerganov]
2023-03-29  ggml : add ARM_NEON dequantize_row_q4_1()  [Georgi Gerganov]
2023-03-29  ggml : add ARM_NEON quantize_row_q4_1()  [Georgi Gerganov]
2023-03-29  ggml : add ARM_NEON ggml_vec_dot_q4_1()  [Georgi Gerganov]
2023-03-29  rename convert_ggml_to_pth.py -> convert-ggml-to-pth.py (#600)  [Pavol Rusnak]
2023-03-29  Create chat-13B.bat (#592)  [Thérence]
2023-03-29  readme : fix typos  [Georgi Gerganov]
2023-03-29  readme : add GPT4All instructions (close #588)  [Georgi Gerganov]
2023-03-29  py : add GPT4All conversion script  [Georgi Gerganov]
2023-03-29  llama : use the same threshold for OpenBLAS and ggml thread limiting (#577)  [Maël Kerbiriou]