Age        | Commit message | Author
2023-04-14 | main : alternative instruct mode (Vicuna support, etc.) (#863) | Tomáš Pazdiora
2023-04-14 | ggml : add unary and binary map operations (#874) | Kerfuffle
2023-04-14 | py : cleanup dependencies (#962) | Pavol Rusnak
2023-04-14 | py : fix flake8 and isort nitpicks (#960) | Pavol Rusnak
2023-04-14 | ggml : minor | Georgi Gerganov
2023-04-14 | ggml : always allocate buffers with size multiple of GGML_MEM_ALIGN | Georgi Gerganov
2023-04-14 | py : new conversion script (#545) | comex
2023-04-14 | ggml : fix q4_1 dot product types | Georgi Gerganov
2023-04-14 | ggml : optimize rope function to avoid call powf in the tight loop (#807) | Howard Su
2023-04-14 | perplexity : add support for batch size to `--perplexity` (#407) | Gary Linscott
2023-04-13 | common : remove unnecessary includes (#947) | CRD716
2023-04-13 | ggml : add GGML_DEFAULT_N_THREADS | Georgi Gerganov
2023-04-13 | ggml : speed-up ggml_vec_dot_q4_1() ARM_NEON + 32-bit ARM support (#900) | Georgi Gerganov
2023-04-13 | llama : merge llama_internal.h into llama.h | Georgi Gerganov
2023-04-13 | gitignore : benchmark | Georgi Gerganov
2023-04-13 | ggml : optimize non-SIMD Q4_0 vector dot product (#703) | Stephan Walter
2023-04-13 | ggml : introduce GGML_ALIGNED_MALLOC/GGML_ALIGNED_FREE macros (#884) | Pavol Rusnak
2023-04-13 | fix whitespace (#944) | CRD716
2023-04-13 | readme : remove python 3.10 warning (#929) | CRD716
2023-04-13 | readme : llama node binding (#911) | Genkagaku.GPT
2023-04-13 | flake.nix: add all binaries from bin (#848) | Pavol Rusnak
2023-04-13 | zig : update build.zig (#872) | Judd
2023-04-13 | ggml : update cblas_sgemm columns var to be more reasonable (#838) | Vladimir
2023-04-13 | examples : add -n to alpaca and gpt4all scripts (#706) | niansa/tuxifan
2023-04-13 | cmake : add explicit F16C option (x86) (#576) | anzz1
2023-04-13 | benchmark : add tool for timing q4_0 matrix multiplication (#653) | SebastianApel
2023-04-13 | do not force the prompt file to end with a new line (#908) | Pavol Rusnak
2023-04-12 | Don't crash on ftype (formerly f16) == 4 (#917) | Stephan Walter
2023-04-12 | readme : change "GPU support" link to discussion | Georgi Gerganov
2023-04-12 | readme : update hot topics with link to "GPU support" issue | Georgi Gerganov
2023-04-12 | readme: link to sha256sums file (#902) | Nicolai Weitkemper
2023-04-11 | Fix whitespace, add .editorconfig, add GitHub workflow (#883) | Pavol Rusnak
2023-04-11 | Add enum llama_ftype, sync ggml_type to model files (#709) | Stephan Walter
2023-04-11 | Windows fixes (#890) | comex
2023-04-10 | Add BAIR's Koala to supported models (#877) | qouoq
2023-04-10 | ggml : fix WASM build | Georgi Gerganov
2023-04-10 | ggml : add ggml_cont() + optimize ggml_cpy() for contiguous dst | Georgi Gerganov
2023-04-10 | ggml : remove trailing whitespaces | Georgi Gerganov
2023-04-10 | Simplify to include lower-case windows.h always, fix compile on mingw32 (#747) | Marco Matthies
2023-04-10 | ggml : fix quantize_row_q4_1() ARM_NEON (close #876) | Georgi Gerganov
2023-04-10 | Print model version. | comex
2023-04-10 | Rewrite loading code to try to satisfy everyone: | comex
2023-04-08 | fix for windows utf-8 input (#840) | Tomáš Pazdiora
2023-04-08 | cmake should link openblas properly with -lopenblas like how it's done in the... | eiery
2023-04-08 | Add new binaries to flake.nix (#847) | lon
2023-04-08 | Add quantize-stats command for testing quantization (#728) | unbounded
2023-04-07 | make : add libllama.so target for llama-cpp-python (#797) | bhubbb
2023-04-07 | zig : don't link examples/common.cpp for non-example (#814) | iacore
2023-04-07 | llama : always sort logits before nucleus sampling (#812) | Ivan Stepanov
2023-04-06 | Do not crash when it has nothing to say. (#796) | Sergey Alirzaev