llama.cpp.git: commit log (branch: master)

Date        Commit message  (Author)
2023-04-07  zig : don't link examples/common.cpp for non-example (#814)  (iacore)
2023-04-07  llama : always sort logits before nucleus sampling (#812)  (Ivan Stepanov)
2023-04-06  Do not crash when it has nothing to say. (#796)  (Sergey Alirzaev)
2023-04-06  Make docker instructions more explicit (#785)  (Pavol Rusnak)
2023-04-05  ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781)  (Georgi Gerganov)
2023-04-05  ggml, llama : avoid heavy V transpose + improvements (#775)  (Georgi Gerganov)
2023-04-05  Update README.md  (Georgi Gerganov)
2023-04-05  llama : define non-positive top_k; top_k range check (#779)  (Ivan Stepanov)
2023-04-05  miku.sh : add executable bit (#780)  (at8u)
2023-04-05  media : add logos and banners  (Georgi Gerganov)
2023-04-05  readme : change logo + add bindings + add uis + add wiki  (Georgi Gerganov)
2023-04-05  zig : add build.zig (#773)  (iacore)
2023-04-05  make : missing host optimizations in CXXFLAGS (#763)  (Ivan Stepanov)
2023-04-05  readme : update with CMake and windows example (#748)  (Adithya Balaji)
2023-04-05  examples : add Miku.sh (#724)  (at8u)
2023-04-05  Add Accelerate/BLAS when using Swift (#765)  (Andrew Duffy)
2023-04-03  Windows: reactive sigint handler after each Ctrl-C (#736)  (mgroeber9110)
2023-04-03  10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654)  (SebastianApel)
2023-04-03  Define non-positive temperature behavior (#720)  (Ivan Stepanov)
2023-04-03  Remove torch GPU dependencies from the Docker.full image (#665)  (bsilvereagle)
2023-04-02  Add a missing step to the gpt4all instructions (#690)  (Thatcher Chamberlin)
2023-04-02  Added api for getting/setting the kv_cache (#685)  (Christian Falch)
2023-04-02  ggml : change ne to int64_t (#626)  (Marian Cepok)
2023-04-02  examples : add gpt4all script (#658)  (Leonardo Neumann)
2023-04-02  llama : do not allocate KV cache for "vocab_only == true" (#682)  (Stephan Walter)
2023-04-02  make : use -march=native -mtune=native on x86 (#609)  (Fabian)
2023-04-02  fix default params for examples/main (#697)  (Murilo Santana)
2023-04-01  py: huggingface -> Hugging Face (#686)  (Ikko Eltociear Ashimine)
2023-04-01  readme: replace termux links with homepage, play store is deprecated (#680)  (rimoliga)
2023-04-01  Show error message when -f fails  (Slaren)
2023-03-31  Enable -std= for cmake builds, fix warnings (#598)  (Stephan Walter)
2023-03-31  Optimize AVX2 ggml_vec_dot_q4_0 (#642)  (slaren)
2023-03-31  Add AVX acceleration (#617)  (perserk)
2023-03-31  py : cleanup the code  (Pavol Rusnak)
2023-03-31  drop quantize.py (now that models are using a single file)  (Pavol Rusnak)
2023-03-30  readme : update supported models  (Georgi Gerganov)
2023-03-30  Introduce GGML migration tool for new file format  (Justine Tunney)
2023-03-30  Ensure --mlock works properly with mmap() support  (Justine Tunney)
2023-03-30  Make loading weights 10-100x faster  (Justine Tunney)
2023-03-30  Initial windows support (untested)  (Slaren)
2023-03-30  Always initialize mm_addr and mm_length in llama_model  (Slaren)
2023-03-30  Unmap the file in llama_free  (Slaren)
2023-03-30  Make mmap_file static  (Slaren)
2023-03-30  Fix ggml_init_params in quantize  (Slaren)
2023-03-30  Add mmap support for model files  (Slaren)
2023-03-30  cmake : properly invoke CTest (#629)  (Stephan Walter)
2023-03-30  Remove unused variable (#607)  (Casey Primozic)
2023-03-30  make : fix darwin f16c flags check (#615)  (david raistrick)
2023-03-30  ggml : fix NEON signs (close #620, #622)  (Georgi Gerganov)
2023-03-30  Fix GGML_F32Cx8_STORE in AVX without F16C path (#619)  (slaren)