path: root/main.cpp
Age         Commit message                                                                     Author
----------  ---------------------------------------------------------------------------------  ----------------
2023-03-22  When seed <= 0 - use the clock to generate one                                     Georgi Gerganov
2023-03-22  Init llama_context_params properly from CLI (#370)                                 Georgi Gerganov
2023-03-22  Introduce C-style API (#370)                                                       Georgi Gerganov
2023-03-21  We could use std::unordered_map over std::map (#305)                               Fabio R. Sluzala
2023-03-21  Fix color codes emitting mid-UTF8 code. (#312)                                     Matvey Soloviev
2023-03-21  Importer for GPTQ quantized LLaMA models (#301)                                    comex
2023-03-21  Compute perplexity over prompt (#270)                                              Gary Linscott
2023-03-21  Enable ANSI colors on Windows 10+ (#311)                                           anzz1
2023-03-21  Check for reverse prompt by characters instead of tokens (#292) (#330)             tjohnman
2023-03-21  Fix convert script, warnings alpaca instructions, default params                   Georgi Gerganov
2023-03-21  cmdline option for custom amount of model parts (--n_parts N) (#348)               anzz1
2023-03-21  Add tokenizer test + revert to C++11 (#355)                                        Georgi Gerganov
2023-03-20  move file magic/version to header, print expected version (#319)                   Mack Straight
2023-03-20  sentencepiece bpe compatible tokenizer (#252)                                      Mack Straight
2023-03-19  bugfix: default should not be interactive (#304)                                   cocktailpeanut
2023-03-19  fix coloring of last `n_batch` of prompt, and refactor line input (#221)           Rickey Bowers Jr
2023-03-19  Support for multiple reverse prompts. (#299)                                       tjohnman
2023-03-19  Make prompt randomization optional. (#300)                                         tjohnman
2023-03-19  Respect the maximum number of tokens in interactive. (#298)                        tjohnman
2023-03-19  Add --ignore-eos parameter (#181)                                                  slaren
2023-03-19  interactive mode: print '\n' in sigint_handler, this flush stdout thus ensure...   Qingyou Meng
2023-03-19  Command line switch to use F16 for memory_k and memory_v (refactor of #154) (...   Erik Scholz
2023-03-19  Fix off-by-one bug (#115)                                                          Georgi Gerganov
2023-03-19  Drop trailing new line from file prompts (#80)                                     Georgi Gerganov
2023-03-19  Add "--instruct" argument for usage with Alpaca (#240)                             Georgi Gerganov
2023-03-18  Warn user if a context size greater than 2048 tokens is specified (#274)           Ronsor
2023-03-18  Remove unused code since n_vocab is model.hparams.n_vocab (#262)                   Alex Nguyen
2023-03-18  fixed warning with std::ignore about unused function result (#151)                 Justin Suess
2023-03-17  Implement non-greedy tokenizer that tries to maximize token lengths (#242)         thement
2023-03-16  Add RMS norm and use it (#187)                                                     hoangmit
2023-03-15  add SIGINT support for _WIN32 environments (#120)                                  Rickey Bowers Jr
2023-03-15  added ctx_size parameter (#148)                                                    Justin Suess
2023-03-15  fixed color reset on exit (#149)                                                   Justin Suess
2023-03-13  Print system information                                                           Georgi Gerganov
2023-03-13  Use fprintf for diagnostic output (#48)                                            Pavol Rusnak
2023-03-13  Reduce model loading time (#43)                                                    uint256_t
2023-03-13  Fix UTF-8 handling (including colors) (#79)                                        Val Kharitonov
2023-03-13  Gate signal support on being on a unixoid system. (#74)                            Matvey Soloviev
2023-03-13  Fix token count accounting                                                         Matvey Soloviev
2023-03-13  Fix color getting reset before prompt output done (#65)                            Matvey Soloviev
2023-03-12  Add interactive mode (#61)                                                         Matvey Soloviev
2023-03-12  Add back top_k (#56)                                                               beiller
2023-03-12  Windows fixes (#31)                                                                Sebastián A
2023-03-12  Add repetition penalty (#20)                                                       beiller
2023-03-11  Bump memory buffer                                                                 Georgi Gerganov
2023-03-11  Support all LLaMA models + change Q4_0 quantization storage                        Georgi Gerganov
2023-03-10  Fix a bug in the rope calculation                                                  Georgi Gerganov
2023-03-10  Final touches                                                                      Georgi Gerganov
2023-03-10  Initial release                                                                    Georgi Gerganov