llama.cpp.git: commit log (branch: master)
| Age | Commit message | Author |
| --- | --- | --- |
| 2023-04-18 | gitignore : vdot | Georgi Gerganov |
| 2023-04-18 | ggml : optimize ggml_vec_dot_q4_0_q8_0() using vectorized accumulators | Georgi Gerganov |
| 2023-04-18 | Adding a simple program to measure speed of dot products (#1041) | Kawrakow |
| 2023-04-18 | readme : update hot topics about new LoRA functionality | Georgi Gerganov |
| 2023-04-18 | ci : do not run on drafts | Georgi Gerganov |
| 2023-04-18 | Do not close file after mmap (Windows version) (#1034) | Ivan Komarov |
| 2023-04-17 | readme : add Ruby bindings (#1029) | Atsushi Tatsuma |
| 2023-04-17 | add 4_0 to default outfile namestr dict (#1031) | Cameron |
| 2023-04-17 | Add LoRA support (#820) | slaren |
| 2023-04-17 | llama : well-defined static initialization of complex objects (#927) | Arik Poznanski |
| 2023-04-17 | quantize-stats : fix bug in --type argument | Georgi Gerganov |
| 2023-04-17 | ggml : avoid using ggml_fp16_to_fp32() and ggml_fp32_to_fp16() in ggml.c | Georgi Gerganov |
| 2023-04-17 | Speedup the AVX-512 implementation of ggml_vec_dot_q4_0() (#933) | Ivan Komarov |
| 2023-04-16 | Fix: do not close file on mmap (#1017) | slaren |
| 2023-04-16 | stdout : vertical align outputs for better readability | Georgi Gerganov |
| 2023-04-16 | examples: add missing `<ctime>` include for time() (#1011) | Pavol Rusnak |
| 2023-04-16 | Fix msys2 build error and warnings (#1009) | nanahi |
| 2023-04-15 | convert.py: Fix loading safetensors and ggml format on Windows (#991) | comex |
| 2023-04-15 | Fix potential int8 overflow in non-SIMD vec_dot (#986) | Stephan Walter |
| 2023-04-15 | Refactor ggml.c for future tensor types (#1001) | Stephan Walter |
| 2023-04-15 | ggml : add Q8_0 quantization for intermediate results (#951) | Georgi Gerganov |
| 2023-04-15 | ggml : use posix_memalign on non-Windows env | Georgi Gerganov |
| 2023-04-15 | benchmark : fix result validation in benchmark-q4_0-matmult (#987) | Ivan Komarov |
| 2023-04-15 | cmake : add finding the OpenBLAS header file (#992) | katsu560 |
| 2023-04-14 | Revert "main : alternative instruct mode (Vicuna support, etc.) (#863)" (#982) | Pavol Rusnak |
| 2023-04-14 | py : bump sentencepiece to 0.1.98 to support Python 3.11 (#976) | Pavol Rusnak |
| 2023-04-14 | make : fix dependencies, use auto variables (#983) | Stephan Walter |
| 2023-04-14 | Expose type name from ggml (#970) | Pavol Rusnak |
| 2023-04-14 | main : alternative instruct mode (Vicuna support, etc.) (#863) | Tomáš Pazdiora |
| 2023-04-14 | ggml : add unary and binary map operations (#874) | Kerfuffle |
| 2023-04-14 | py : cleanup dependencies (#962) | Pavol Rusnak |
| 2023-04-14 | py : fix flake8 and isort nitpicks (#960) | Pavol Rusnak |
| 2023-04-14 | ggml : minor | Georgi Gerganov |
| 2023-04-14 | ggml : always allocate buffers with size multiple of GGML_MEM_ALIGN | Georgi Gerganov |
| 2023-04-14 | py : new conversion script (#545) | comex |
| 2023-04-14 | ggml : fix q4_1 dot product types | Georgi Gerganov |
| 2023-04-14 | ggml : optimize rope function to avoid call powf in the tight loop (#807) | Howard Su |
| 2023-04-14 | perplexity : add support for batch size to `--perplexity` (#407) | Gary Linscott |
| 2023-04-13 | common : remove unnecessary includes (#947) | CRD716 |
| 2023-04-13 | ggml : add GGML_DEFAULT_N_THREADS | Georgi Gerganov |
| 2023-04-13 | ggml : speed-up ggml_vec_dot_q4_1() ARM_NEON + 32-bit ARM support (#900) | Georgi Gerganov |
| 2023-04-13 | llama : merge llama_internal.h into llama.h | Georgi Gerganov |
| 2023-04-13 | gitignore : benchmark | Georgi Gerganov |
| 2023-04-13 | ggml : optimize non-SIMD Q4_0 vector dot product (#703) | Stephan Walter |
| 2023-04-13 | ggml : introduce GGML_ALIGNED_MALLOC/GGML_ALIGNED_FREE macros (#884) | Pavol Rusnak |
| 2023-04-13 | fix whitespace (#944) | CRD716 |
| 2023-04-13 | readme : remove python 3.10 warning (#929) | CRD716 |
| 2023-04-13 | readme : llama node binding (#911) | Genkagaku.GPT |
| 2023-04-13 | flake.nix: add all binaries from bin (#848) | Pavol Rusnak |
| 2023-04-13 | zig : update build.zig (#872) | Judd |