Age        | Commit message                                                                   | Author
2023-06-26 | readme : add link to new k-quants for visibility                                 | Georgi Gerganov
2023-06-26 | k-quants : support for super-block size of 64 (#2001)                            | Kawrakow
2023-06-26 | Fix assert when free invalid cuda pointer (#2005)                                | Howard Su
2023-06-25 | readme : add new roadmap + manifesto                                             | Georgi Gerganov
2023-06-25 | ggml : sync latest ggml (custom operators)                                       | Georgi Gerganov
2023-06-25 | fix server sampling: top k sampler first (#1977)                                 | anon998
2023-06-25 | readme : add Azure CI discussion link                                            | Georgi Gerganov
2023-06-25 | zig : upgrade build system support (#1981)                                       | sjinzh
2023-06-24 | #1869 Fix null reference errors when training from scratch with CUDA (#1907)     | Robyn
2023-06-24 | tests : sync test-grad0 from ggml                                                | Georgi Gerganov
2023-06-24 | flake : fix ggml-metal.metal path and run nixfmt (#1974)                         | Rowan Hart
2023-06-24 | convert : fix invalid params in write_vocab_only (#1975)                         | AN Long
2023-06-24 | ggml : improve ggml_graph_dump_dot, add ggml_format_name (#1978)                 | slaren
2023-06-24 | readme : fix whitespaces                                                         | Georgi Gerganov
2023-06-24 | readme : fixed termux instructions (#1973)                                       | Alberto
2023-06-24 | llama : fix top-p sampling to match the canonical definition (#1953)             | Alex Renda
2023-06-24 | llama : make model stateless and context stateful (llama_state) (#1797)          | Didzis Gosko
2023-06-23 | Add OpenLLaMA instructions to the README (#1954)                                 | eiery
2023-06-22 | rework convert.py to read hyper-parameters from config.json (#1958)              | Erik Scholz
2023-06-21 | cmake: revert CUDA arch default to 52, 61 if f16 (#1959)                         | Johannes Gäßler
2023-06-21 | Fix typo in README.md (#1961)                                                    | Rahul Vivek Nair
2023-06-20 | readme : add link to p1                                                          | Georgi Gerganov
2023-06-20 | Fix typo (#1949)                                                                 | Xiake Sun
2023-06-20 | llama : fix params struct slignment (#1936)                                      | Ettore Di Giacinto
2023-06-20 | [Fix] Reenable server embedding endpoint (#1937)                                 | Henri Vasserman
2023-06-19 | ggml : fix bug in LBFGS optimizer (found by ggml tests)                          | Georgi Gerganov
2023-06-19 | llama : use aligned memory during ggml_init call from loading saved sessions ... | l3utterfly
2023-06-19 | cmake : fix trailing whitespaces                                                 | Georgi Gerganov
2023-06-19 | llama : only use Q6_K for output weights if tensor size is multiple of 256 (#... | Kawrakow
2023-06-19 | cuda : faster k-quants on older GPUs (#1930)                                     | Kawrakow
2023-06-19 | ggml : sync latest ggml repo (#1924)                                             | Georgi Gerganov
2023-06-19 | cmake : fix build shared ggml when CUDA is enabled (#1929)                       | Howard Su
2023-06-19 | Convert vector to f16 for dequantize mul mat vec (#1913)                         | Johannes Gäßler
2023-06-18 | Added tokens per second to info prints (#1928)                                   | Johannes Gäßler
2023-06-18 | Fixed incorrectly applying RMS norm twice (#1925)                                | Johannes Gäßler
2023-06-18 | ggml : fix bug in ggml_compute_forward_add_q_f32 (#1918)                         | l3utterfly
2023-06-18 | readme : update Android build instructions (#1922)                               | Mike
2023-06-18 | llama : prevent usage of k-quants when tensor size is not a multiple of 256 (... | Kawrakow
2023-06-18 | examples : fix examples/metal (#1920)                                            | Kawrakow
2023-06-18 | metal : handle buffers larger than device's maxBufferLength (#1826)              | Georgi Gerganov
2023-06-18 | cmake : add CUDA_ARCHITECTURES to new target ggml_static (#1917)                 | Howard Su
2023-06-17 | make : do not print help for simple example                                      | Georgi Gerganov
2023-06-17 | minor : warning fixes                                                            | Georgi Gerganov
2023-06-17 | Only one CUDA stream per device for async compute (#1898)                        | Johannes Gäßler
2023-06-17 | llama : fix kv_cache `n` init (close #1903)                                      | Georgi Gerganov
2023-06-17 | make : update for latest Arch (#1701)                                            | DaniAndTheWeb
2023-06-17 | ggml : fix warnings under MSVC (#1908)                                           | Howard Su
2023-06-17 | metal : add norm, cpy f16->f16, alibi kernels (#1823)                            | Aaron Miller
2023-06-17 | exposed modules so that they can be invoked by nix run github:ggerganov/llama... | Faez Shakil
2023-06-17 | Server Example Refactor and Improvements (#1570)                                 | Randall Fitzgerald