Age         Commit message                                                                  Author
2023-07-10  mpi : add support for distributed inference via MPI (#2099)                     Evan Miller
2023-07-09  llama : remove "first token must be BOS" restriction (#2153)                    oobabooga
2023-07-09  main : escape prompt prefix/suffix (#2151)                                      Nigel Bosch
2023-07-09  readme : update Termux instructions (#2147)                                     JackJollimore
2023-07-09  ggml : fix building with Intel MKL but ask for "cblas.h" issue (#2104) (#2115)  clyang
2023-07-09  readme : add more docs indexes (#2127)                                          rankaiyx
2023-07-08  Fixed OpenLLaMA 3b CUDA mul_mat_vec_q (#2144)                                   Johannes Gäßler
2023-07-08  CUDA: add __restrict__ to mul mat vec kernels (#2140)                           Johannes Gäßler
2023-07-07  docker : add support for CUDA in docker (#1461)                                 dylan
2023-07-07  ci : switch threads to 1 (#2138)                                                Georgi Gerganov
2023-07-07  ggml : change ggml_graph_compute() API to not require context (#1999)           Qingyou Meng
2023-07-07  ggml : remove sched_yield() call in ggml_graph_compute_thread() (#2134)         Georgi Gerganov
2023-07-07  convert.py: add mapping for safetensors bf16 (#1598)                            Aarni Koskela
2023-07-07  Fix opencl by wrap #if-else-endif with \n (#2086)                               Howard Su
2023-07-06  ggml : fix restrict usage                                                       Georgi Gerganov
2023-07-06  convert : update for baichuan (#2081)                                           Judd
2023-07-06  alpaca.sh : update model file name (#2074)                                      tslmy
2023-07-05  Expose generation timings from server & update completions.js (#2116)           Tobias Lütke
2023-07-05  Update Server Instructions (#2113)                                              Jesse Jojo Johnson
2023-07-05  ggml : fix bug introduced in #1237                                              Georgi Gerganov
2023-07-05  tests : fix test-grad0                                                          Georgi Gerganov
2023-07-05  ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)              Stephan Walter
2023-07-05  Update server instructions for web front end (#2103)                            Jesse Jojo Johnson
2023-07-05  Quantized dot products for CUDA mul mat vec (#2067)                             Johannes Gäßler
2023-07-05  llama: Don't double count the sampling time (#2107)                             Howard Su
2023-07-05  Fixed OpenCL offloading prints (#2082)                                          Johannes Gäßler
2023-07-05  embd-input: Fix input embedding example unsigned int seed (#2105)               Nigel Bosch
2023-07-04  readme : add link web chat PR                                                   Georgi Gerganov
2023-07-04  ggml : sync latest (new ops, macros, refactoring) (#2106)                       Georgi Gerganov
2023-07-04  Add an API example using server.cpp similar to OAI. (#2009)                     jwj7140
2023-07-04  Simple webchat for server (#1998)                                               Tobias Lütke
2023-07-04  Allow old Make to build server. (#2098)                                         Henri Vasserman
2023-07-04  Update Makefile: clean simple (#2097)                                           ZhouYuChen
2023-07-04  CI: make the brew update temporarily optional. (#2092)                          Erik Scholz
2023-07-04  [ggml] fix index for ne03 value in ggml_cl_mul_f32 (#2088)                      Govlzkoy
2023-07-04  fix server crashes (#2076)                                                      Henri Vasserman
2023-07-03  Fix crash of test-tokenizer-0 under Debug build (#2064)                         Howard Su
2023-07-03  [llama] No need to check file version when loading vocab score (#2079)          Howard Su
2023-07-03  server: add option to output probabilities for completion (#1962)               WangHaoranRobin
2023-07-02  ggml : fix build with OpenBLAS (close #2066)                                    Georgi Gerganov
2023-07-01  Better CUDA synchronization logic (#2057)                                       Johannes Gäßler
2023-07-01  Test-based VRAM scratch size + context adjustment (#2056)                       Johannes Gäßler
2023-07-01  cmake : don't force -mcpu=native on aarch64 (#2063)                             Daniel Drake
2023-07-01  metal : release buffers when freeing metal context (#2062)                      Aaron Miller
2023-07-01  convert : add support of baichuan-7b (#2055)                                    Judd
2023-07-01  llama : fix return value of llama_load_session_file_internal (#2022)            Georgi Gerganov
2023-07-01  llama : catch llama_load_session_file_internal exceptions (#2022)               Rand Xie
2023-07-01  embd-input : fix returning ptr to temporary                                     Georgi Gerganov
2023-07-01  train : fix compile warning                                                     Georgi Gerganov
2023-07-01  ggml : disable GGML_TASK_INIT and GGML_TASK_FINALIZE by default (#1995)         Qingyou Meng