author | Evan Miller <emmiller@gmail.com> | 2023-07-10 11:49:56 -0400
committer | GitHub <noreply@github.com> | 2023-07-10 18:49:56 +0300
commit | 5656d10599bd756dc0f17284e418e704200b43f3 (patch)
tree | a9aba6c867a268d0bcb90bd9174912774a67ed65 /examples/main
parent | 1d1630996920f889cdc08de26cebf2415958540e (diff)
mpi : add support for distributed inference via MPI (#2099)
* MPI support, first cut
* fix warnings, update README
* fixes
* wrap includes
* PR comments
* Update CMakeLists.txt
* Add GH workflow, fix test
* Add info to README
* mpi : trying to move more MPI stuff into ggml-mpi (WIP) (#2099)
* mpi : add names for layer inputs + prep ggml_mpi_graph_compute()
* mpi : move all MPI logic into ggml-mpi
Not tested yet
* mpi : various fixes - communication now works but results are wrong
* mpi : fix output tensor after MPI compute (still not working)
* mpi : fix inference
* mpi : minor
* Add OpenMPI to GH action
* [mpi] continue-on-error: true
* mpi : fix after master merge
* [mpi] Link MPI C++ libraries to fix OpenMPI
* tests : fix new llama_backend API
* [mpi] use MPI_INT32_T
* mpi : factor out recv / send in functions and reuse
* mpi : extend API to allow usage with outer backends (e.g. Metal)
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
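The commit message above describes splitting inference across MPI ranks, with the send/receive logic factored into reusable functions inside ggml-mpi. The sketch below illustrates only the general pipeline idea under those constraints: each rank evaluates its slice of the layers and forwards the activations to the next rank. It is not the ggml-mpi code itself; `eval_layer_slice`, the tensor size, and the placeholder work are hypothetical.

```cpp
// Pipeline-parallel sketch over MPI ranks (illustrative only, not ggml-mpi).
#include <mpi.h>
#include <vector>
#include <cstdio>

// Hypothetical stand-in for evaluating this rank's slice of layers in place.
static void eval_layer_slice(std::vector<float> & act, int rank) {
    for (float & x : act) {
        x += 1.0f; // placeholder work
    }
    std::printf("rank %d processed its layer slice\n", rank);
}

int main(int argc, char ** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_embd = 8;                 // toy activation size
    std::vector<float> act(n_embd, 0.0f); // activations passed between ranks

    // Every rank except the first waits for the previous rank's output.
    if (rank > 0) {
        MPI_Recv(act.data(), n_embd, MPI_FLOAT, rank - 1, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    eval_layer_slice(act, rank);

    // Forward the result to the next rank, if any.
    if (rank < size - 1) {
        MPI_Send(act.data(), n_embd, MPI_FLOAT, rank + 1, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```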
Diffstat (limited to 'examples/main')
-rw-r--r-- | examples/main/main.cpp | 4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/examples/main/main.cpp b/examples/main/main.cpp
index 0f6391a..07d8fc6 100644
--- a/examples/main/main.cpp
+++ b/examples/main/main.cpp
@@ -105,7 +105,7 @@ int main(int argc, char ** argv) {
         params.prompt = gpt_random_prompt(rng);
     }
 
-    llama_init_backend(params.numa);
+    llama_backend_init(params.numa);
 
     llama_model * model;
     llama_context * ctx;
@@ -671,5 +671,7 @@ int main(int argc, char ** argv) {
     llama_free(ctx);
     llama_free_model(model);
 
+    llama_backend_free();
+
     return 0;
 }
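The diff renames llama_init_backend to llama_backend_init and pairs it with a new llama_backend_free at shutdown. Below is a minimal lifecycle sketch of that pairing, assuming the rest of the llama.h API of that era (llama_context_default_params, llama_load_model_from_file, llama_new_context_with_params); the model path is a placeholder.

```cpp
// Minimal backend lifecycle sketch; surrounding calls assumed from the
// llama.h API at the time of this commit, model path is a placeholder.
#include "llama.h"

int main() {
    llama_backend_init(false /* numa */);   // was llama_init_backend()

    llama_context_params params = llama_context_default_params();
    llama_model   * model = llama_load_model_from_file("ggml-model.bin", params);
    llama_context * ctx   = llama_new_context_with_params(model, params);

    // ... run inference ...

    llama_free(ctx);
    llama_free_model(model);

    llama_backend_free();                   // new: release backend (incl. MPI) state
    return 0;
}
```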