llama.cpp.git (branch: master)
2023-04-14  ggml : fix q4_1 dot product types  (Georgi Gerganov)
2023-04-14  ggml : optimize rope function to avoid call powf in the tight loop (#807)  (Howard Su)
2023-04-14  perplexity : add support for batch size to `--perplexity` (#407)  (Gary Linscott)
2023-04-13  common : remove unnecessary includes (#947)  (CRD716)
2023-04-13  ggml : add GGML_DEFAULT_N_THREADS  (Georgi Gerganov)
2023-04-13  ggml : speed-up ggml_vec_dot_q4_1() ARM_NEON + 32-bit ARM support (#900)  (Georgi Gerganov)
2023-04-13  llama : merge llama_internal.h into llama.h  (Georgi Gerganov)
2023-04-13  gitignore : benchmark  (Georgi Gerganov)
2023-04-13  ggml : optimize non-SIMD Q4_0 vector dot product (#703)  (Stephan Walter)
2023-04-13  ggml : introduce GGML_ALIGNED_MALLOC/GGML_ALIGNED_FREE macros (#884)  (Pavol Rusnak)
2023-04-13  fix whitespace (#944)  (CRD716)
2023-04-13  readme : remove python 3.10 warning (#929)  (CRD716)
2023-04-13  readme : llama node binding (#911)  (Genkagaku.GPT)
2023-04-13  flake.nix: add all binaries from bin (#848)  (Pavol Rusnak)
2023-04-13  zig : update build.zig (#872)  (Judd)
2023-04-13  ggml : update cblas_sgemm columns var to be more reasonable (#838)  (Vladimir)
2023-04-13  examples : add -n to alpaca and gpt4all scripts (#706)  (niansa/tuxifan)
2023-04-13  cmake : add explicit F16C option (x86) (#576)  (anzz1)
2023-04-13  benchmark : add tool for timing q4_0 matrix multiplication (#653)  (SebastianApel)
2023-04-13  do not force the prompt file to end with a new line (#908)  (Pavol Rusnak)
2023-04-12  Don't crash on ftype (formerly f16) == 4 (#917)  (Stephan Walter)
2023-04-12  readme : change "GPU support" link to discussion  (Georgi Gerganov)
2023-04-12  readme : update hot topics with link to "GPU support" issue  (Georgi Gerganov)
2023-04-12  readme: link to sha256sums file (#902)  (Nicolai Weitkemper)
2023-04-11  Fix whitespace, add .editorconfig, add GitHub workflow (#883)  (Pavol Rusnak)
2023-04-11  Add enum llama_ftype, sync ggml_type to model files (#709)  (Stephan Walter)
2023-04-11  Windows fixes (#890)  (comex)
2023-04-10  Add BAIR's Koala to supported models (#877)  (qouoq)
2023-04-10  ggml : fix WASM build  (Georgi Gerganov)
2023-04-10  ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst  (Georgi Gerganov)
2023-04-10  ggml : remove trailing whitespaces  (Georgi Gerganov)
2023-04-10  Simplify to include lower-case windows.h always, fix compile on mingw32 (#747)  (Marco Matthies)
2023-04-10  ggml : fix quantize_row_q4_1() ARM_NEON (close #876)  (Georgi Gerganov)
2023-04-10  Print model version.  (comex)
2023-04-10  Rewrite loading code to try to satisfy everyone:  (comex)
2023-04-08  fix for windows utf-8 input (#840)  (Tomáš Pazdiora)
2023-04-08  cmake should link openblas properly with -lopenblas like how it's done in the...  (eiery)
2023-04-08  Add new binaries to flake.nix (#847)  (lon)
2023-04-08  Add quantize-stats command for testing quantization (#728)  (unbounded)
2023-04-07  make : add libllama.so target for llama-cpp-python (#797)  (bhubbb)
2023-04-07  zig : don't link examples/common.cpp for non-example (#814)  (iacore)
2023-04-07  llama : always sort logits before nucleus sampling (#812)  (Ivan Stepanov)
2023-04-06  Do not crash when it has nothing to say. (#796)  (Sergey Alirzaev)
2023-04-06  Make docker instructions more explicit (#785)  (Pavol Rusnak)
2023-04-05  ggml : multi-thread ggml_rope() (~3-4 times faster on M1) (#781)  (Georgi Gerganov)
2023-04-05  ggml, llama : avoid heavy V transpose + improvements (#775)  (Georgi Gerganov)
2023-04-05  Update README.md  (Georgi Gerganov)
2023-04-05  llama : define non-positive top_k; top_k range check (#779)  (Ivan Stepanov)
2023-04-05  miku.sh : add executable bit (#780)  (at8u)
2023-04-05  media : add logos and banners  (Georgi Gerganov)