llama.cpp.git — commit log (branch: master)
Age        | Commit message                                                                  | Author
2023-04-13 | readme : llama node binding (#911)                                              | Genkagaku.GPT
2023-04-13 | flake.nix: add all binaries from bin (#848)                                     | Pavol Rusnak
2023-04-13 | zig : update build.zig (#872)                                                   | Judd
2023-04-13 | ggml : update cblas_sgemm columns var to be more reasonable (#838)              | Vladimir
2023-04-13 | examples : add -n to alpaca and gpt4all scripts (#706)                          | niansa/tuxifan
2023-04-13 | cmake : add explicit F16C option (x86) (#576)                                   | anzz1
2023-04-13 | benchmark : add tool for timing q4_0 matrix multiplication (#653)               | SebastianApel
2023-04-13 | do not force the prompt file to end with a new line (#908)                      | Pavol Rusnak
2023-04-12 | Don't crash on ftype (formerly f16) == 4 (#917)                                 | Stephan Walter
2023-04-12 | readme : change "GPU support" link to discussion                                | Georgi Gerganov
2023-04-12 | readme : update hot topics with link to "GPU support" issue                     | Georgi Gerganov
2023-04-12 | readme: link to sha256sums file (#902)                                          | Nicolai Weitkemper
2023-04-11 | Fix whitespace, add .editorconfig, add GitHub workflow (#883)                   | Pavol Rusnak
2023-04-11 | Add enum llama_ftype, sync ggml_type to model files (#709)                      | Stephan Walter
2023-04-11 | Windows fixes (#890)                                                            | comex
2023-04-10 | Add BAIR's Koala to supported models (#877)                                     | qouoq
2023-04-10 | ggml : fix WASM build                                                           | Georgi Gerganov
2023-04-10 | ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst                 | Georgi Gerganov
2023-04-10 | ggml : remove trailing whitespaces                                              | Georgi Gerganov
2023-04-10 | Simplify to include lower-case windows.h always, fix compile on mingw32 (#747)  | Marco Matthies
2023-04-10 | ggml : fix quantize_row_q4_1() ARM_NEON (close #876)                            | Georgi Gerganov
2023-04-10 | Print model version.                                                            | comex
2023-04-10 | Rewrite loading code to try to satisfy everyone:                                | comex
2023-04-08 | fix for windows utf-8 input (#840)                                              | Tomáš Pazdiora
2023-04-08 | cmake should link openblas properly with -lopenblas like how it's done in the...| eiery
2023-04-08 | Add new binaries to flake.nix (#847)                                            | lon
2023-04-08 | Add quantize-stats command for testing quantization (#728)                      | unbounded
2023-04-07 | make : add libllama.so target for llama-cpp-python (#797)                       | bhubbb
2023-04-07 | zig : don't link examples/common.cpp for non-example (#814)                     | iacore
2023-04-07 | llama : always sort logits before nucleus sampling (#812)                       | Ivan Stepanov
2023-04-06 | Do not crash when it has nothing to say. (#796)                                 | Sergey Alirzaev
2023-04-06 | Make docker instructions more explicit (#785)                                   | Pavol Rusnak
2023-04-05 | ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781)                | Georgi Gerganov
2023-04-05 | ggml, llama : avoid heavy V transpose + improvements (#775)                     | Georgi Gerganov
2023-04-05 | Update README.md                                                                | Georgi Gerganov
2023-04-05 | llama : define non-positive top_k; top_k range check (#779)                     | Ivan Stepanov
2023-04-05 | miku.sh : add executable bit (#780)                                             | at8u
2023-04-05 | media : add logos and banners                                                   | Georgi Gerganov
2023-04-05 | readme : change logo + add bindings + add uis + add wiki                        | Georgi Gerganov
2023-04-05 | zig : add build.zig (#773)                                                      | iacore
2023-04-05 | make : missing host optimizations in CXXFLAGS (#763)                            | Ivan Stepanov
2023-04-05 | readme : update with CMake and windows example (#748)                           | Adithya Balaji
2023-04-05 | examples : add Miku.sh (#724)                                                   | at8u
2023-04-05 | Add Accelerate/BLAS when using Swift (#765)                                     | Andrew Duffy
2023-04-03 | Windows: reactive sigint handler after each Ctrl-C (#736)                       | mgroeber9110
2023-04-03 | 10+% performance improvement of ggml_vec_dot_q4_0 on AVX2 (#654)                | SebastianApel
2023-04-03 | Define non-positive temperature behavior (#720)                                 | Ivan Stepanov
2023-04-03 | Remove torch GPU dependencies from the Docker.full image (#665)                 | bsilvereagle
2023-04-02 | Add a missing step to the gpt4all instructions (#690)                           | Thatcher Chamberlin
2023-04-02 | Added api for getting/setting the kv_cache (#685)                               | Christian Falch