author | Stephan Walter <stephan@walter.name> | 2023-03-26 13:14:01 +0000
committer | GitHub <noreply@github.com> | 2023-03-26 16:14:01 +0300
commit | b391579db92f095666be1d979899b54ae0981573 (patch)
tree | 05df184c4075c805031e61dbbe4ec424d74d5bea
parent | 7a87d31f4f0c37bbb2ea695929fa4fe3ad579cda (diff)
Update README and comments for standalone perplexity tool (#525)
-rw-r--r-- | README.md | 6
-rw-r--r-- | examples/perplexity/perplexity.cpp | 2
2 files changed, 4 insertions, 4 deletions
diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -248,7 +248,7 @@ cadaver, cauliflower, cabbage (vegetable), catalpa (tree) and Cailleach.
 
 ### Perplexity (Measuring model quality)
 
-You can pass `--perplexity` as a command line option to measure perplexity over the given prompt. For more background,
+You can use the `perplexity` example to measure perplexity over the given prompt. For more background,
 see https://huggingface.co/docs/transformers/perplexity. However, in general, lower perplexity is better for LLMs.
 
 #### Latest measurements
@@ -271,10 +271,10 @@ Perplexity - model options
 
 #### How to run
 
 1. Download/extract: https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip?ref=salesforce-research
-2. Run `./main --perplexity -m models/7B/ggml-model-q4_0.bin -f wiki.test.raw`
+2. Run `./perplexity -m models/7B/ggml-model-q4_0.bin -f wiki.test.raw`
 3. Output:
 ```
-Calculating perplexity over 655 chunks
+perplexity : calculating perplexity over 655 chunks
 24.43 seconds per pass - ETA 4.45 hours
 [1]4.5970,[2]5.1807,[3]6.0382,...
 ```
diff --git a/examples/perplexity/perplexity.cpp b/examples/perplexity/perplexity.cpp
index f617ba3..75d526d 100644
--- a/examples/perplexity/perplexity.cpp
+++ b/examples/perplexity/perplexity.cpp
@@ -19,7 +19,7 @@ std::vector<double> softmax(const std::vector<float>& logits) {
 
 void perplexity(llama_context * ctx, const gpt_params & params) {
     // Download: https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip?ref=salesforce-research
-    // Run `./main --perplexity -m models/7B/ggml-model-q4_0.bin -f wiki.test.raw`
+    // Run `./perplexity -m models/7B/ggml-model-q4_0.bin -f wiki.test.raw`
     // Output: `perplexity: 13.5106 [114/114]`
 
     auto tokens = ::llama_tokenize(ctx, params.prompt, true);
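For context on what the renamed tool prints (`perplexity : calculating perplexity over 655 chunks`, `perplexity: 13.5106 [114/114]`): perplexity is the exponential of the mean negative log-likelihood the model assigns to each observed token. Below is a minimal, self-contained C++ sketch of that calculation. The `softmax` helper mirrors the one whose signature appears in the perplexity.cpp hunk above; the toy logits, the `next_token` list, and the `main` driver are illustrative assumptions, not code from the example.

```cpp
// Sketch of the perplexity computation, assuming per-position logits and the
// id of the token that actually followed are already available.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Numerically stable softmax over a vector of logits.
static std::vector<double> softmax(const std::vector<float> & logits) {
    std::vector<double> probs(logits.size());
    const float max_logit = *std::max_element(logits.begin(), logits.end());
    double sum_exp = 0.0;
    for (size_t i = 0; i < logits.size(); i++) {
        probs[i] = std::exp(logits[i] - max_logit); // shift for stability
        sum_exp += probs[i];
    }
    for (double & p : probs) {
        p /= sum_exp;
    }
    return probs;
}

int main() {
    // Toy stand-in for model output: logits over a 3-token vocabulary at each
    // position, plus the token that actually came next (hypothetical data).
    const std::vector<std::vector<float>> logits = {
        {2.0f, 0.5f, 0.1f}, {0.3f, 1.7f, 0.2f}, {0.1f, 0.4f, 2.2f},
    };
    const std::vector<int> next_token = {0, 1, 2};

    // Perplexity = exp(mean negative log-likelihood of the observed tokens).
    double nll = 0.0;
    for (size_t i = 0; i < logits.size(); i++) {
        nll -= std::log(softmax(logits[i])[next_token[i]]);
    }
    std::printf("perplexity: %.4f over %zu tokens\n",
                std::exp(nll / (double) logits.size()), logits.size());
    return 0;
}
```

The real example does the same arithmetic at scale: it tokenizes `wiki.test.raw`, evaluates it chunk by chunk (the 655 chunks in the README output), and folds each token's `-log p` into the running average whose exponential it prints after every chunk.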