llama.cpp.git — log (branch: master)
Age        | Commit message                                                                  | Author
2023-04-16 | examples: add missing <ctime> include for time() (#1011)                        | Pavol Rusnak
2023-04-16 | Fix msys2 build error and warnings (#1009)                                      | nanahi
2023-04-15 | convert.py: Fix loading safetensors and ggml format on Windows (#991)           | comex
2023-04-15 | Fix potential int8 overflow in non-SIMD vec_dot (#986)                          | Stephan Walter
2023-04-15 | Refactor ggml.c for future tensor types (#1001)                                 | Stephan Walter
2023-04-15 | ggml : add Q8_0 quantization for intermediate results (#951)                    | Georgi Gerganov
2023-04-15 | ggml : use posix_memalign on non-Windows env                                    | Georgi Gerganov
2023-04-15 | benchmark : fix result validation in benchmark-q4_0-matmult (#987)              | Ivan Komarov
2023-04-15 | cmake : add finding the OpenBLAS header file (#992)                             | katsu560
2023-04-14 | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982)  | Pavol Rusnak
2023-04-14 | py : bump sentencepiece to 0.1.98 to support Python 3.11 (#976)                 | Pavol Rusnak
2023-04-14 | make : fix dependencies, use auto variables (#983)                              | Stephan Walter
2023-04-14 | Expose type name from ggml (#970)                                               | Pavol Rusnak
2023-04-14 | main : alternative instruct mode (Vicuna support, etc.) (#863)                  | Tomáš Pazdiora
2023-04-14 | ggml : add unary and binary map operations (#874)                               | Kerfuffle
2023-04-14 | py : cleanup dependencies (#962)                                                | Pavol Rusnak
2023-04-14 | py : fix flake8 and isort nitpicks (#960)                                       | Pavol Rusnak
2023-04-14 | ggml : minor                                                                    | Georgi Gerganov
2023-04-14 | ggml : always allocate buffers with size multiple of GGML_MEM_ALIGN             | Georgi Gerganov
2023-04-14 | py : new conversion script (#545)                                               | comex
2023-04-14 | ggml : fix q4_1 dot product types                                               | Georgi Gerganov
2023-04-14 | ggml : optimize rope function to avoid call powf in the tight loop (#807)       | Howard Su
2023-04-14 | perplexity : add support for batch size to `--perplexity` (#407)                | Gary Linscott
2023-04-13 | common : remove unnecessary includes (#947)                                     | CRD716
2023-04-13 | ggml : add GGML_DEFAULT_N_THREADS                                               | Georgi Gerganov
2023-04-13 | ggml : speed-up ggml_vec_dot_q4_1() ARM_NEON + 32-bit ARM support (#900)        | Georgi Gerganov
2023-04-13 | llama : merge llama_internal.h into llama.h                                     | Georgi Gerganov
2023-04-13 | gitignore : benchmark                                                           | Georgi Gerganov
2023-04-13 | ggml : optimize non-SIMD Q4_0 vector dot product (#703)                         | Stephan Walter
2023-04-13 | ggml : introduce GGML_ALIGNED_MALLOC/GGML_ALIGNED_FREE macros (#884)            | Pavol Rusnak
2023-04-13 | fix whitespace (#944)                                                           | CRD716
2023-04-13 | readme : remove python 3.10 warning (#929)                                      | CRD716
2023-04-13 | readme : llama node binding (#911)                                              | Genkagaku.GPT
2023-04-13 | flake.nix: add all binaries from bin (#848)                                     | Pavol Rusnak
2023-04-13 | zig : update build.zig (#872)                                                   | Judd
2023-04-13 | ggml : update cblas_sgemm columns var to be more reasonable (#838)              | Vladimir
2023-04-13 | examples : add -n to alpaca and gpt4all scripts (#706)                          | niansa/tuxifan
2023-04-13 | cmake : add explicit F16C option (x86) (#576)                                   | anzz1
2023-04-13 | benchmark : add tool for timing q4_0 matrix multiplication (#653)               | SebastianApel
2023-04-13 | do not force the prompt file to end with a new line (#908)                      | Pavol Rusnak
2023-04-12 | Don't crash on ftype (formerly f16) == 4 (#917)                                 | Stephan Walter
2023-04-12 | readme : change "GPU support" link to discussion                                | Georgi Gerganov
2023-04-12 | readme : update hot topics with link to "GPU support" issue                     | Georgi Gerganov
2023-04-12 | readme: link to sha256sums file (#902)                                          | Nicolai Weitkemper
2023-04-11 | Fix whitespace, add .editorconfig, add GitHub workflow (#883)                   | Pavol Rusnak
2023-04-11 | Add enum llama_ftype, sync ggml_type to model files (#709)                      | Stephan Walter
2023-04-11 | Windows fixes (#890)                                                            | comex
2023-04-10 | Add BAIR's Koala to supported models (#877)                                     | qouoq
2023-04-10 | ggml : fix WASM build                                                           | Georgi Gerganov
2023-04-10 | ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst                 | Georgi Gerganov