author    Stephan Walter <stephan@walter.name>    2023-03-28 16:48:20 +0000
committer GitHub <noreply@github.com>             2023-03-28 19:48:20 +0300
commit    436e56193199a1625f8c561069f702e8840a9e08 (patch)
tree      9e7f39e1736ccff5728bb6194f160dfa94cf552d /llama.h
parent    20e1e84884376b3fb44ffbfd48d478b2934b0b5e (diff)
all : be more strict about converting float to double (#458)
* Be more strict about converting float to double
* Test equivalence of round, SILU implementations
Test module is commented out in CMakeLists.txt because the tests may
take a long time, depending on how much the compiler optimizes.
* Fix softmax in perplexity.cpp
* all : prefer float over double where appropriate
* perplexity : add <cmath>
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Diffstat (limited to 'llama.h')
-rw-r--r--  llama.h  8
1 file changed, 4 insertions(+), 4 deletions(-)
@@ -45,7 +45,7 @@ extern "C" {
     } llama_token_data;
 
-    typedef void (*llama_progress_callback)(double progress, void *ctx);
+    typedef void (*llama_progress_callback)(float progress, void *ctx);
 
     struct llama_context_params {
         int n_ctx; // text context
@@ -134,9 +134,9 @@ extern "C" {
                              const llama_token * last_n_tokens_data,
                                    int last_n_tokens_size,
                                    int top_k,
-                                 double top_p,
-                                 double temp,
-                                 double repeat_penalty);
+                                  float top_p,
+                                  float temp,
+                                  float repeat_penalty);
 
     // Performance information
     LLAMA_API void llama_print_timings(struct llama_context * ctx);