Age        | Commit message                                                                  | Author
2023-08-04 | CUDA: use min compute capability of GPUs actually used (#2506)                  | Cebtenzzre
2023-08-04 | CUDA: check if event is NULL before cudaStreamWaitEvent (#2505)                 | Cebtenzzre
2023-08-04 | Add --simple-io option for subprocesses and break out console.h and cpp (#1558) | DannyDaemonic
2023-08-04 | Fixing race condition in server and partial stream handling in frontend. (#2391) | Stephen Nichols
2023-08-04 | Stream save llama context data to file instead of allocating entire buffer up... | l3utterfly
2023-08-04 | build : fix several cast and printf warnings (#2499)                            | Borislav Stanimirov
2023-08-02 | examples : generate JSON according to schema (#1887)                            | Evan Jones
2023-08-02 | CUDA: faster non k-quant mul_mat_q kernels (#2483)                              | Johannes Gäßler
2023-08-02 | CUDA: Fix models with output size != 32000 (#2480)                              | Johannes Gäßler
2023-08-02 | readme : add Aquila-7B model series to supported models (#2487)                 | ldwang
2023-08-02 | tests : Fix compilation warnings (Linux/GCC) (#2451)                            | Eve
2023-08-02 | readme : Add Chinese LLaMA-2 / Alpaca-2 to supported models (#2475)             | Yiming Cui
2023-08-01 | fix a typo in examples/server/README.md (#2478)                                 | Bono Lv
2023-08-01 | server : Support dark mode (#2414)                                              | ebraminio
2023-08-01 | metal : add gqa8 kernel to allow llama-2-70B on metal (#2459)                   | Matteo Boschini
2023-07-31 | CUDA: fixed LLAMA_FAST compilation option (#2473)                               | Johannes Gäßler
2023-07-31 | CUDA: fixed cmake F16 option (#2471)                                            | Johannes Gäßler
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453)                            | Johannes Gäßler
2023-07-31 | CUDA: Implemented row flattening for non-glm RoPE (#2468)                       | Johannes Gäßler
2023-07-31 | CUDA: fewer memory bank conflicts for mul_mat_q (#2458)                         | Johannes Gäßler
2023-07-31 | Fix Metal backend broken from the allocator changes (#2455)                     | slaren
2023-07-30 | ggml : add graph tensor allocator (#2411)                                       | slaren
2023-07-29 | CUDA: Quantized matrix matrix multiplication (#2160)                            | Johannes Gäßler
2023-07-29 | CUDA: faster multi GPU synchronization (#2448)                                  | Johannes Gäßler
2023-07-28 | perplexity : add Hellaswag calculation (#2389)                                  | klosax
2023-07-28 | ggml : workaround for missing _mm256_setr_m128i in GCC < 8 in k_quants.c (#2405) | Lee
2023-07-28 | llama : support more diverse tokenizers? (#2420)                                | eric8607242
2023-07-28 | examples : fix whitespace                                                       | Georgi Gerganov
2023-07-28 | examples : server chat mode with llama2 (#2400)                                 | nhamanasu
2023-07-28 | readme : fix the description of the Tail free sampling (TFS) method (#2431)     | Weird Constructor
2023-07-28 | llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)          | Rand Xie
2023-07-28 | Obtaining LLaMA 2 instructions (#2308)                                          | niansa/tuxifan
2023-07-27 | convert.py : Update to support 70B HF format model files (#2427)                | mj-shifu
2023-07-27 | metal : disable graph concurrency optimization due to bug (#2413)               | Georgi Gerganov
2023-07-26 | ggml : fix assert in ggml_set_unary_op (#2410)                                  | slaren
2023-07-26 | make : build with -Wmissing-prototypes (#2394)                                  | Cebtenzzre
2023-07-26 | ggml : allocate graphs in a context (#2392)                                     | slaren
2023-07-25 | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)                  | Kawrakow
2023-07-25 | ggml : fix ggml_flash_attn to use op_params (#2387)                             | slaren
2023-07-25 | convert.py : support bpe tokenizer (#2228)                                      | ldwang
2023-07-25 | ggml : relax contiguous constraints in activation function (#2371)              | Jiahao Li
2023-07-25 | ggml : improve graph build time via hash table lookup (#2329)                   | slaren
2023-07-25 | build : fix line breaking error in build-info.sh (#2349)                        | Hesen Peng
2023-07-25 | main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)     | Xiao-Yong Jin
2023-07-25 | ci : add non-AVX scalar build/test (#2356)                                      | Eve
2023-07-25 | k_quants : add AVX support to dot functions with QK_K as 64 (#2339)             | katsu560
2023-07-25 | metal : concurrently dispatch commands (#2358)                                  | Shouzheng Liu
2023-07-25 | Another speed gain for Q4_0 and Q4_1 on Metal (#2375)                           | Kawrakow
2023-07-25 | Fix Q4_K and Q5_K for QK_K = 64 on CUDA (#2359)                                 | Kawrakow
2023-07-25 | server: add rms_norm_eps parameter (#2380)                                      | slaren