path: root/examples
Age        | Commit message                                                               | Author
2023-08-02 | examples : generate JSON according to schema (#1887)                         | Evan Jones
2023-08-02 | tests : Fix compilation warnings (Linux/GCC) (#2451)                         | Eve
2023-08-01 | fix a typo in examples/server/README.md (#2478)                              | Bono Lv
2023-08-01 | server : Support dark mode (#2414)                                           | ebraminio
2023-07-31 | CUDA: mmq CLI option, fixed mmq build issues (#2453)                         | Johannes Gäßler
2023-07-28 | perplexity : add Hellaswag calculation (#2389)                               | klosax
2023-07-28 | examples : fix whitespace                                                    | Georgi Gerganov
2023-07-28 | examples : server chat mode with llama2 (#2400)                              | nhamanasu
2023-07-28 | readme : fix the description of the Tail free sampling (TFS) method (#2431)  | Weird Constructor
2023-07-28 | llama : use n_embd_gqa instead of n_embd to handle llama-2 70B (#2433)       | Rand Xie
2023-07-25 | Add LLAMA_DEFAULT_RMS_EPS so we can change the default (#2384)               | Kawrakow
2023-07-25 | main : add `--in-prefix-bos` to prefix BOS to user inputs; keep EOS (#2304)  | Xiao-Yong Jin
2023-07-25 | server: add rms_norm_eps parameter (#2380)                                   | slaren
2023-07-25 | [Server] Escape HTML in webchat (#2368)                                      | Henri Vasserman
2023-07-24 | make rms_norm_eps a parameter (#2374)                                        | slaren
2023-07-24 | Chat UI extras (#2366)                                                       | Aarni Koskela
2023-07-23 | llama : add grammar-based sampling (#1773)                                   | Evan Jones
2023-07-23 | Add gqa parameter support to the server (#2351)                              | IgnacioFDM
2023-07-23 | common : n_threads == -1 uses std::thread::hardware_concurrency() (#2347)    | wzy
2023-07-23 | llama : grouped-query attention + LLaMAv2 70B support (#2276)                | Georgi Gerganov
2023-07-23 | llama : print help to stdout (#2338)                                         | maddes8cht
2023-07-23 | examples : simplify vim plugin (#2327)                                       | AustinMroz
2023-07-22 | llama : optimize memory buffers (#2325)                                      | Georgi Gerganov
2023-07-22 | Perplexity: Compute scores correlated to HellaSwag (#2312)                   | klosax
2023-07-22 | examples : basic VIM plugin                                                  | whoreson
2023-07-21 | examples : add easy python script to create quantized (k-bit support) GGML mo... | Richard Roberson
2023-07-21 | examples : fix typo in minigpt4.py (#2298)                                   | Ikko Eltociear Ashimine
2023-07-21 | ggml : fix rope args order + assert (#2054)                                  | Georgi Gerganov
2023-07-21 | llama : remove cfg smooth factor as it is only a reparameterization of the gu... | Guillaume "Vermeille" Sanchez
2023-07-21 | gitignore : changes for Poetry users + chat examples (#2284)                 | Jose Maldonado
2023-07-21 | llama : make tensor_split ptr instead of array (#2272)                       | Georgi Gerganov
2023-07-21 | MIKU MAYHEM: Upgrading the Default Model for Maximum Fun 🎉 (#2287)          | Hatsune Miku
2023-07-21 | make : fix embdinput library and server examples building on MSYS2 (#2235)   | Przemysław Pawełczyk
2023-07-19 | cmake : install targets (#2256)                                              | wzy
2023-07-18 | ci : integrate with ggml-org/ci (#2250)                                      | Georgi Gerganov
2023-07-18 | llama : shorten quantization descriptions                                    | Georgi Gerganov
2023-07-15 | llama : add custom RoPE (#2054)                                              | Xiao-Yong Jin
2023-07-14 | examples : fixed path typos in embd-input (#2214)                            | Shangning Xu
2023-07-13 | Revert "Support using mmap when applying LoRA (#2095)" (#2206)               | Howard Su
2023-07-11 | ggml : remove src0 and src1 from ggml_tensor and rename opt to src (#2178)   | Spencer Sutton
2023-07-11 | llama : add classifier-free guidance (#2135)                                 | Bach Le
2023-07-11 | Support using mmap when applying LoRA (#2095)                                | Howard Su
2023-07-10 | mpi : add support for distributed inference via MPI (#2099)                  | Evan Miller
2023-07-09 | main : escape prompt prefix/suffix (#2151)                                   | Nigel Bosch
2023-07-07 | ggml : change ggml_graph_compute() API to not require context (#1999)        | Qingyou Meng
2023-07-06 | convert : update for baichuan (#2081)                                        | Judd
2023-07-06 | alpaca.sh : update model file name (#2074)                                   | tslmy
2023-07-05 | Expose generation timings from server & update completions.js (#2116)        | Tobias Lütke
2023-07-05 | Update Server Instructions (#2113)                                           | Jesse Jojo Johnson
2023-07-05 | ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)           | Stephan Walter