path: root/llama.cpp
Age        | Commit message                                                                     | Author
-----------|------------------------------------------------------------------------------------|------------------------------
2023-07-23 | llama : grouped-query attention + LLaMAv2 70B support (#2276)                      | Georgi Gerganov
2023-07-23 | llama : print max tensor size to stderr (#2336)                                    | Christian Demsar
2023-07-22 | llama : optimize memory buffers (#2325)                                            | Georgi Gerganov
2023-07-21 | ggml : fix rope args order + assert (#2054)                                        | Georgi Gerganov
2023-07-21 | llama : remove cfg smooth factor as it is only a reparameterization of the gu...   | Guillaume "Vermeille" Sanchez
2023-07-21 | llama : make tensor_split ptr instead of array (#2272)                             | Georgi Gerganov
2023-07-20 | llama : fix regression from #2000 - could not load no-mmap models                  | Georgi Gerganov
2023-07-19 | llama : extend API to get max devices at runtime (#2253)                           | Rinne
2023-07-18 | ci : integrate with ggml-org/ci (#2250)                                            | Georgi Gerganov
2023-07-17 | llama : fix t_start_sample_us initialization warning (#2238)                       | Alex Klinkhamer
2023-07-15 | llama : add custom RoPE (#2054)                                                    | Xiao-Yong Jin
2023-07-14 | llama : add functions that work directly on model (#2197)                          | Bach Le
2023-07-11 | llama : add classifier-free guidance (#2135)                                       | Bach Le
2023-07-11 | Possible solution to allow K-quants on models with n_vocab!=32000 (#2148)          | LostRuins
2023-07-10 | mpi : add support for distributed inference via MPI (#2099)                        | Evan Miller
2023-07-09 | llama : remove "first token must be BOS" restriction (#2153)                       | oobabooga
2023-07-07 | ggml : change ggml_graph_compute() API to not require context (#1999)              | Qingyou Meng
2023-07-05 | Expose generation timings from server & update completions.js (#2116)              | Tobias Lütke
2023-07-05 | ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)                 | Stephan Walter
2023-07-05 | llama: Don't double count the sampling time (#2107)                                | Howard Su
2023-07-05 | Fixed OpenCL offloading prints (#2082)                                             | Johannes Gäßler
2023-07-03 | Fix crash of test-tokenizer-0 under Debug build (#2064)                            | Howard Su
2023-07-03 | [llama] No need to check file version when loading vocab score (#2079)             | Howard Su
2023-07-01 | Test-based VRAM scratch size + context adjustment (#2056)                          | Johannes Gäßler
2023-07-01 | metal : release buffers when freeing metal context (#2062)                         | Aaron Miller
2023-07-01 | llama : fix return value of llama_load_session_file_internal (#2022)               | Georgi Gerganov
2023-07-01 | llama : catch llama_load_session_file_internal exceptions (#2022)                  | Rand Xie
2023-06-29 | Use unsigned for random seed (#2006)                                               | Howard Su
2023-06-28 | llama : replacing auto &kv with const auto &kv (#2041)                             | m3ndax
2023-06-28 | llama : remove shards weight file support (#2000)                                  | Howard Su
2023-06-28 | CUDA GPU acceleration for LoRAs + f16 models (#1970)                               | Johannes Gäßler
2023-06-28 | llama : support input embeddings directly (#1910)                                  | ningshanwutuobang
2023-06-27 | llama : fix rope usage after ChatGLM change                                        | Georgi Gerganov
2023-06-26 | ggml : add NUMA support (#1556)                                                    | zrm
2023-06-26 | k-quants : support for super-block size of 64 (#2001)                              | Kawrakow
2023-06-24 | llama : fix top-p sampling to match the canonical definition (#1953)               | Alex Renda
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797)            | Didzis Gosko
2023-06-20 | llama : fix params struct slignment (#1936)                                        | Ettore Di Giacinto
2023-06-19 | llama : use aligned memory during ggml_init call from loading saved sessions ...   | l3utterfly
2023-06-19 | llama : only use Q6_K for output weights if tensor size is multiple of 256 (#...   | Kawrakow
2023-06-19 | Convert vector to f16 for dequantize mul mat vec (#1913)                           | Johannes Gäßler
2023-06-18 | Added tokens per second to info prints (#1928)                                     | Johannes Gäßler
2023-06-18 | Fixed incorrectly applying RMS norm twice (#1925)                                  | Johannes Gäßler
2023-06-18 | llama : prevent usage of k-quants when tensor size is not a multiple of 256 (...   | Kawrakow
2023-06-18 | metal : handle buffers larger than device's maxBufferLength (#1826)                | Georgi Gerganov
2023-06-17 | llama : fix kv_cache `n` init (close #1903)                                        | Georgi Gerganov
2023-06-17 | ggml : fix warnings under MSVC (#1908)                                             | Howard Su
2023-06-16 | llama : fix embd when offloading non-repeating layers (#1891)                      | Johannes Gäßler
2023-06-16 | build : fix and ignore MSVC warnings (#1889)                                       | Borislav Stanimirov
2023-06-14 | CUDA full GPU acceleration, KV cache in VRAM (#1827)                               | Johannes Gäßler